Proceedings of Third International Conference on Sustainable Computing: SUSCOM 2021 (Advances in Intelligent Systems and Computing, 1404) 981164537X, 9789811645372

The book includes a selection of the best papers presented at the Third International Conference on Sustainable Computing (SUSCOM 2021).


English · Pages: 617 [588] · Year: 2022


Table of contents:
Preface
Contents
About the Editors
Analysis of Perceived Usability, Satisfaction and Adoption of Biometric Systems in the Public Transportation Sector of U.A.E.
1 Introduction
2 Literature Review
3 Data Analysis
4 Limitations
5 Proposed Framework
5.1 Security Check
5.2 Public Service Registration
5.3 Information Storage
5.4 Select the Account Option
5.5 User Interface
5.6 Payment Gateway
6 Conclusions and Future Recommendations
References
A Semi-supervised Deep Learning-Based Approach with Multiphase Active Contour Loss for Left Ventricle Segmentation from CMR Images
1 Introduction
2 Materials and Methods
2.1 Proposed Active Contour Loss Function
2.2 Network Architecture
2.3 Implementation
3 Results
3.1 Datasets
3.2 Performance Metrics
3.3 Results
4 Conclusion
References
Survey on Energy Efficient Approach for Wireless Multimedia Sensor Network
1 Introduction
2 Literature Review
3 Comparative Analysis
4 Conclusion
References
Effectual Accuracy of Ophthalmological Image Retinal Layer Segmentation
1 Introduction
2 Data
3 Features
3.1 Second Derivative Approach
3.2 K-NN Analysis
3.3 SVM Analysis
3.4 Decision Tree Analysis
4 Results
5 Conclusion
6 Future Work
References
Performance Assessment of K-Nearest Neighbor Algorithm for Classification of Forest Cover Type
1 Introduction
2 Literature Review
3 Data Set Description
4 K-Nearest Neighbor (KNN)
5 Results
6 Conclusion
References
Gestational Diabetes Prediction Using Machine Learning Algorithms
1 Introduction
2 Related Work
3 Methodology
3.1 Diabetes Dataset
3.2 Data Processing
3.3 Classification Algorithms
3.4 Performance Evaluation
4 Results
5 Conclusions
References
Design and Implementation of Buffon Needle Problem Using Technology for Engineering Students
1 Introduction
1.1 Technology and Engineering Student
1.2 Hawgent Dynamic Mathematics Software
1.3 Buffon Needle Problem
1.4 Purpose in This Study
2 Method
3 Making a Buffon Needle Experiment Using Hawgent
4 The Implementation of the Learning Media to Do Buffon’s Needle Experiment for Engineering Students
5 Conclusion and Limitation of Study
References
Energy-Efficient Multihop Cluster Routing Protocol for WSN
1 Introduction
2 Literature Survey
3 LEACH Protocol
4 Proposed Work
4.1 Set-Up Phase
4.2 Steady-State Phase
5 Expected Outcomes
6 Conclusion
References
A Review of Smart Electronic Voting Machine
1 Introduction
2 Electronic Voting Systems
3 Authenticity of Voting Process
4 Privacy of the Voter Rights
5 India’s Experience in e-voting
6 Types of Electronic Voting Machine
7 Comparison of EVM Among All the Countries
8 Advantages of EVM
9 Challenges of EVM
10 Simulation Results
11 Development We Can Do in Future
12 Conclusion
References
Application of Machine Learning Techniques in Intrusion Detection Systems: A Systematic Review
1 Introduction
2 Machine Learning
3 Related Work
4 Comparative Analysis
4.1 Datasets
4.2 Performance Evaluation Parameters
4.3 Performance Analysis
5 Conclusions
References
Relationship between Sustainable Practices and Firm Performance: A Study of the FMCG Sector in India
1 Introduction
2 Literature Review
3 Research Methodology
3.1 Research Gaps
3.2 Research Objectives
3.3 Research Methodology
4 Proposed Model
4.1 Analysis and Interpretation
4.2 Results Related to ROE
4.3 Results Related to ROA
4.4 Results Related to EPS
4.5 Findings
4.6 Limitations
5 Conclusion
5.1 Directions for Future Researches
References
Learning Paradigms for Analysis of Bank Customer
1 Introduction
1.1 Background
1.2 Problem Statement
1.3 Proposed Solution
2 Traditional Classification Models and Deep Learning
2.1 Naive Bayes
2.2 Support Vector Machine
2.3 Decision Tree
2.4 K-Nearest Neighbors
2.5 Logistic Regression
2.6 Linear Discriminant Analysis
2.7 Deep Learning
3 Data Processing
4 Results and Discussion
5 Conclusion and Future Work
References
Diagnosis of Dermoscopy Images for the Detection of Skin Lesions Using SVM and KNN
1 Introduction
2 Literature Review
3 Methodology
3.1 Dataset
3.2 Preprocessing
3.3 Segmentation Method
3.4 Morphological Method
3.5 Feature Extraction
3.6 Classification
4 Results
5 Conclusion
References
MaTop: An Evaluative Topic Model for Marathi
1 Introduction
2 Literature Review
3 Research Methodology
4 Results and Discussion
5 Conclusions
References
Convolutional Neural Network: An Overview and Application in Image Classification
1 Introduction
2 Related Literature
3 CNN Architecture
3.1 Convolutional Layer
3.2 Pooling Layer
3.3 Fully Connected Layer
4 Materials and Methods
5 Experimental Results
6 Conclusion
References
A Comparison of Backtracking Algorithm in Time-Shared and Space-Shared VM Allocation Approaches Using CloudSim
1 Introduction
2 Literature Review
3 Proposed Backtracking Algorithm
3.1 Pseudocode
4 Experiments and Results
5 Conclusion and Future work
References
Mango (Mangifera indica L.) Classification Using Convolutional Neural Network and Linear Classifiers
1 Introduction
2 Related Work
2.1 Contributions
3 Proposed Approach
3.1 Overview of Convolution Neural Network
3.2 Tuning the CNN Model
3.3 CNN Architecture Models
4 Results and Discussion
5 Conclusion and Future Directions
References
A Review on Current IoT-Based Pasture Management Systems and Applications of Digital Twins in Farming
1 Introduction
2 Literature Review
3 Discussion
4 Conclusion
References
Conceptual Model for Measuring Complexity in Manufacturing Systems
1 Introduction
2 Theoretical Background
2.1 Measuring Complexity in Manufacturing Systems
3 Building a Conceptual Model
3.1 Hypothesis
3.2 Conceptual Model
4 Discussion
5 Conclusion
References
Hole Filling Using Dominant Colour Plane for CNN-Based Stereo Matching
1 Introduction
2 Related Work
3 Proposed Method
4 Hole Filling Scheme
5 Implementation and Results
6 Conclusion
References
Centralized Admission Process: An E-Governance Approach for Improving the Higher Education Admission System of Bangladesh
1 Introduction
2 Related Work
3 Problem Statement
4 Proposed Solution
4.1 Scope
4.2 Planning
4.3 Cost Estimation
5 Feasibility Analysis
5.1 Economic Feasibility
5.2 Technical Feasibility
6 Conclusion
References
Opinion Mining and Analysing Real-Time Tweets Using RapidMiner
1 Introduction
1.1 Approaches for Opinion Mining/Sentiment Analysis
2 Background Work
3 Implementation and Results
4 Conclusion and Future Work
References
Household Solid Waste Collection Cost Estimation Model: Case Study of Barranquilla, Colombia
1 Introduction
2 Methodology
3 Proposed Model
3.1 Parameters of the Model
3.2 Definition of the Model
3.3 Scenarios
4 Results and Discussion
5 Conclusions
References
Tomato Sickness Detection Using Fuzzy Logic
1 Introduction
2 Proposed Methods
2.1 Res Net Hybrid Fuzzy Logic c-Means Clustering and Edge Detection Algorithm
2.2 Res Net Convolutional Neural Network
2.3 Random Search Algorithm
3 Experimental Results and Discussion
3.1 Training Model
3.2 Performance Analysis
3.3 Confusion Matrix
3.4 Detection of Diseases
4 Conclusion
References
Autism Spectrum Disorder Study in a Clinical Sample Using Autism Spectrum Quotient (AQ)-10 Tools
1 Introduction
2 Literature Review
3 Dataset Selection
4 Methodologies
5 Results and Discussion
6 Conclusion and Future Scope
References
Robust Video Steganography Technique Against Attack Based on Stationary Wavelet Transform (SWT) and Singular Value Decomposition (SVD)
1 Introduction
2 Related Work
3 Proposed Method
3.1 Stationary Wavelet Transform
3.2 Singular Value Decomposition
3.3 Proposed Method
3.4 Steps of the Algorithm
4 Experimental Results
4.1 Quality Assessment
4.2 Results and Discussion
5 Conclusion
References
Statistical Inference Through Variable Adaptive Threshold Algorithm in Over-Sampling the Imbalanced Data Distribution Problem
1 Introduction
2 Proposed Work
2.1 Algorithm
3 Experimental Results
4 Conclusion
References
Feature Engineering for Tal-Patra Manuscript Text Using Natural Language Processing Techniques
1 Introduction
2 Proposed Work
2.1 Tokenization
2.2 Part of Speech (POS)
2.3 Feature Engineering Techniques
3 Experimental Results and Discussion
4 Conclusion
References
Digital Transformation of Public Service Delivery Processes Based of Content Management System
1 Introduction
2 State of the Art
3 Methodology
4 Implementation
5 Conclusion
References
Secure and Sustain Network for IoT Fog Servers
1 Introduction
2 Related Work
3 Proposed Methodology
3.1 Sensor Feeding and Signatures
3.2 Knowledge-Depth Graphs
3.3 Load-Based Server Allocation
3.4 State Maintenance and Learning
3.5 Spy-Based Deployment
3.6 Network Association for Failures Detection
4 Experiment and Result
5 Conclusion
References
Cataract Detector Using Visual Graphic Generator 16
1 Introduction
1.1 Symptoms of Cataract
1.2 Cure of Cataract
1.3 Cataract Surgery
2 Literature Review
3 Feasibility Analysis
3.1 Visual Acuity Test
3.2 Microincision or Regular Phaco Cataract Surgery
3.3 Robotic or Femtosecond Cataract Surgery
4 Methodology
4.1 Complete Work Plan Layout
4.2 Visualize All the Filters
4.3 Signal Processing
5 Experimental Analysis
6 Model Accuracy and Comparison
7 Web Frontend
8 Conclusion
References
Combination of Local Feature Extraction for Image Retrieval
1 Introduction
2 Related Works
2.1 LBP
2.2 Uniform LBP
3 Proposed Local Feature Extraction Method
3.1 Adaptive Threshold Local Binary Pattern (ATLBP)
3.2 Directional Local Binary Pattern (DLBP)
4 Results and Discussion
5 Conclusion
References
A Review in Anomalies Detection Using Deep Learning
1 Introduction
2 Anomaly Detection with Deep Learning
3 Comparative Discussion and Analysis
4 Conclusion
References
Sustainable Anomaly Detection in Surveillance System
1 Introduction
2 Related Work
3 System Design and Architecture
4 Methodology and Implementation
4.1 Data Processing
4.2 Model
4.3 Training
4.4 Testing
5 Results and Discussion
6 Conclusion and Future Work
References
A Robust Fused Descriptor Under Unconstrained Conditions
1 Introduction
2 The Proposed Descriptors
2.1 Local Difference Binary Pattern (LDBP)
2.2 Local Neighborhood Difference Binary Pattern (LNDBP)
2.3 LDBP + LNDBP Descriptor
3 Results Evaluation
3.1 Database Explanation
3.2 Feature Length Consumed by the Classifiers
3.3 Accuracy Recorded on Distinct Subsets
3.4 Results Comparison
4 Conclusion
References
IoT-Based Smart System for Safety and Security of Driver and Vehicle
1 Introduction
2 Proposed System
2.1 Security System for Car
2.2 Drowsiness Detection System
2.3 Accident Alert System
3 Implementation of Proposed Model
4 Methodology
4.1 Face Recognition
4.2 Finger Print Authentication
4.3 Drowsiness Detection System
5 Experimental Results
6 Conclusion and Future Work
References
Using a Single Group Experimental Study to Underpin the Importance of Human-in-the-Loop in a Smart Manufacturing Environment
1 Introduction
2 Introducing Human-in-the-Loop Approach to an Automated Water Bottling Plant
2.1 Research Methodology
2.2 Experimental Setup
3 Preliminary Results and Discussion
3.1 The Control Case—Machine Only
3.2 A Human-in-the-Loop Approach Scenario
4 Conclusion
References
Analysis of Downlink and Uplink Non-orthogonal Multiple Access (NOMA) for 5G
1 Introduction
1.1 Working Principle of NOMA
2 System Model
2.1 Downlink NOMA
2.2 Uplink NOMA
2.3 Spectral Efficiency and Energy Efficiency
3 Results and Discussion
3.1 NOMA Downlink and NOMA Uplink Sum Rate Comparison
3.2 NOMA Spectral Efficiency and Energy Efficiency
4 Conclusion
References
Pattern Matching Algorithms: A Survey
1 Introduction
2 Multi-pattern Matching Algorithms
2.1 Aho–Corasick Algorithm
2.2 Commentz-Walter Algorithm
2.3 Wu-Manber Algorithm
2.4 Zhu–Takaoka Algorithm
2.5 Bit-Parallel (SHIFT OR) Algorithm
3 Comparative Analysis
4 Conclusion
References
Development of an Android Fitness App and Its Integration with Visualization Tools
1 Introduction
2 Methodology
2.1 Parameters to Track
2.2 Experiment
2.3 The App
3 Results and Discussion
4 Conclusion
References
Breast Cancer Prediction Models: A Comparative Study and Analysis
1 Introduction
2 Literature Survey
3 Our Approach
4 Experimentation and Results
5 Conclusion
References
Analysis of Energy-Efficient Clustering-Based Routing Technique with BrainStorm Optimization in WSN
1 Introduction
2 Wireless Sensor Network (WSNs)
2.1 Types of WSN
2.2 Advantage
2.3 Disadvantage
2.4 Characteristics of Wireless Sensor Networks
3 Energy Efficient in WSNs
4 Routing in WSN
4.1 Routing Challenges in WSN
5 Clustering
5.1 Clustering Algorithm of WSN
6 BrainStorm Optimization (BSO)
7 Literature Survey
8 Conclusion
References
A Secure and Intelligent Approach for Next-Hop Selection Algorithm for Successful Data Transmission in Wireless Network
1 Introduction
2 Literature Review
3 Proposed System Model
4 Flow Diagram of the Proposed Model
5 Results Analysis
6 Conclusion/Future Work
References
Proposed Sustainable Paradigm Model to Data Storage of IoT Devices in to AWS Cloud Storage
1 Introduction
2 Related Work
3 Preliminaries
3.1 AWS Cloud in IoT Architecture
3.2 AWS IoT Core
3.3 Amazon IoT Related Services
4 Proposed Paradigm and Model
4.1 Circuit Diagram of Proposed Paradigm Model of AWS IoT
5 Discussion and Analysis
6 Future Work
7 Conclusion
References
Potential Applications of the Internet of Things in Sustainable Rural Development in India
1 Introduction
1.1 Benefits of IOT for Sustainable Rural Development
2 Literature Review
3 Methodology
3.1 Aims and Objectives
3.2 The Objectives Are
3.3 Data Collection
4 Applications of IoT for Rural Sustainability
4.1 Smart Meters
4.2 Smart Lighting
4.3 Smart Streetlights
4.4 Air Quality Control by Sensors
4.5 Smart IoT Based Agricultural
4.6 RFID Technology
4.7 Radio Transmission Technology in Agriculture
4.8 Intelligent Irrigation System
4.9 Protection of Agricultural Products
4.10 Seeding and Spraying Methods for Precision
4.11 Sustainable Land and Water Resource Management
4.12 Public Health
4.13 Smart Greenhouses
4.14 Education
5 IoT Challenges and Vision for Sustainability
5.1 Span
5.2 Fault Tolerance
5.3 Data Ownership
5.4 Lack of Encouragement
5.5 Technology Adverse Consequences
6 Conclusion
References
Evaluation and Analysis of Models for the Measurement of Complexity in Manufacturing Systems
1 Introduction
2 Method
2.1 Conceptual Model
2.2 Hypothesis
2.3 Manufacturing Case Study
3 Result
3.1 Complexity Index (CXI)
3.2 Entropic Measurement of Complexity
4 Discussion
5 Conclusion
References
Fractional-Order Euler–Lagrange Dynamic Formulation and Control of Asynchronous Switched Robotic Systems
1 Introduction
2 Problem Formulation
2.1 Fractional-Order Calculus Operations
2.2 Fractional-Order Dynamic Model Derivation
3 Asynchronous Switched Controller Design for a Robotic Manipulator
4 Numerical Experiment
5 Discussion
6 Conclusion
References
Modeling the Imperfect Production System with Rework and Disruption
1 Introduction
2 Assumptions and Notations
3 Mathematical Model for an Imperfect Production System with Reworkable and Scrapable Items
4 EMQ Model for Imperfect Production System with Rework and Disruption
5 Numerical Example and Discussion
6 Analysis
7 Conclusion and Suggestions
References
Traffic Accident Detection Using Machine Learning Algorithms
1 Introduction
2 Related Work
3 Background
4 Simulation Parameters
4.1 Dataset Information
4.2 Implementation
5 Conclusion
References
A Comparative Approach of Error Detection and Correction for Onboard Nanosatellite
1 Introduction
2 Literature Review
3 System Architecture
4 Algorithmic Analysis
4.1 Hamming Code
4.2 Cyclic Redundancy Check
4.3 Reed–Solomon
4.4 Turbo Encoding Mechanism
4.5 Turbo Decoding Mechanism
5 Performance Evaluation
6 Conclusion
References
Effective Text Augmentation Strategy for NLP Models
1 Introduction
2 Proposed Approach
2.1 Augmentation Techniques Adopted
2.2 The Classification Model
2.3 Evaluation of Augmentation Methods
3 Experiment
3.1 Proposed Approach
3.2 Data
3.3 Experimental Setup
3.4 Results and Analysis
4 Conclusion and Future Work
References
Performance Enhancement of Raga Classification Systems Using Recursive Feature Elimination
1 Introduction
2 Data Set
3 Feature Extraction
3.1 Time Domain Features
3.2 Frequency Domain Features
4 System Description
4.1 Support Vector Machine (SVM)
4.2 Gaussian Process classification (GP)
4.3 Recursive Feature Elimination (RFE)
5 Experiments and Results
5.1 SVM-based Raga Classification System
5.2 GP-Based Raga Classification System
5.3 Performance Enhancement Using RFE Method
6 Conclusions
References
A Study of the Factors Influencing Behavioral Intent to Implement Forensic Audit Techniques in Indian Companies
1 Introduction
2 Meaning of Auditing Forensics
3 Creation of Auditing Forensics in India
4 Significance of Forensic Audit
5 Literature Review
6 Objective
7 Research Design
8 Hypothesis
9 Analysis and Interpretation
10 Valuation of the Structural Model
11 Valuation of the Measurement Model
12 Hypothesis Testing
13 Conclusion
References
Using Multiple Regression Model Analysis to Understand the Impact of Travel Behaviors on COVID-19 Cases
1 Introduction
2 Literature Review
3 Methodology
3.1 Multiple Regression Model
4 Result and Discussion
5 Conclusion
References
The Review of Prediction Models for COVID-19 Outbreak in Indian Scenario
1 Introduction
2 Literature Review
3 Methodology
4 Dataset
5 Models
5.1 Regression Model
5.2 Clustering Model
6 Result Analysis
7 Conclusion
References
Design and Simulation of ECG Signal Generator by Making Use of Medical Datasets and Fourier Transform for Various Arrhythmias
1 Introduction
2 Background
3 Mathematical Formulation of ECG Using Fourier Series
4 MATLAB Implementation for ECG Signal Generation
5 Different Kinds of Arrhythmia Analysis
6 MATLAB Implementation for Different Arrhythmia
7 Simulation Result
8 Conclusion and Future Work
References
Author Index

Advances in Intelligent Systems and Computing 1404

Ramesh Chandra Poonia · Vijander Singh · Dharm Singh Jat · Mario José Diván · Mohammed S. Khan, Editors

Proceedings of Third International Conference on Sustainable Computing SUSCOM 2021

Advances in Intelligent Systems and Computing Volume 1404

Series Editor: Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors:
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.

Indexed by DBLP, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST). All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/11156

Ramesh Chandra Poonia · Vijander Singh · Dharm Singh Jat · Mario José Diván · Mohammed S. Khan Editors

Proceedings of Third International Conference on Sustainable Computing SUSCOM 2021

Editors

Ramesh Chandra Poonia
Department of Computer Science, CHRIST (Deemed to be University), Bengaluru, Karnataka, India

Vijander Singh
Department of Computer Science and Engineering, Manipal University Jaipur, Rajasthan, India

Dharm Singh Jat
Faculty of Computing and Informatics, Namibia University of Science and Technology, Windhoek, Namibia

Mario José Diván
Data Science Research Group, National University of La Pampa, Santa Rosa, Argentina

Mohammed S. Khan
Department of Computing, East Tennessee State University, Johnson City, TN, USA

ISSN 2194-5357 · ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-981-16-4537-2 · ISBN 978-981-16-4538-9 (eBook)
https://doi.org/10.1007/978-981-16-4538-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The Third International Conference on Sustainable Computing (SUSCOM-2021) targeted state-of-the-art as well as emerging topics pertaining to sustainable computing technologies and their implementation in engineering applications. The objective of this international conference is to provide opportunities for researchers, academicians, industry professionals and students to interact and exchange ideas, experience and expertise in current trends and strategies for information and communication technologies. Besides this, participants are also enlightened about the vast avenues of current and emerging sustainable computing developments in the field of advanced informatics, and its applications are thoroughly explored and discussed.

The Third International Conference on Sustainable Computing (SUSCOM-2021) was held at Sri Balaji College of Engineering and Technology, Jaipur, Rajasthan, India, on March 19–20, 2021. We are highly thankful to our valuable authors for their contribution and to our technical program committee for their immense support and motivation in making this edition of SUSCOM-2021 a success. We are also grateful to our keynote speakers for sharing their precious work and enlightening the delegates of the conference. We express our sincere gratitude to our publication partner, Springer AISC Series, for believing in us.

Bengaluru, India: Ramesh Chandra Poonia
Jaipur, India: Vijander Singh
Windhoek, Namibia: Dharm Singh Jat
La Pampa, Argentina: Mario José Diván
Johnson City, USA: Mohammed S. Khan

June 2021

Contents

Analysis of Perceived Usability, Satisfaction and Adoption of Biometric Systems in the Public Transportation Sector of U.A.E. (Sadia Riaz, Arif Mushtaq, Hanh Pham, Sanjana Mookim, and Tung Phan) ... 1
A Semi-supervised Deep Learning-Based Approach with Multiphase Active Contour Loss for Left Ventricle Segmentation from CMR Images (Minh-Nhat Trinh, Nhu-Toan Nguyen, Thi-Thao Tran, and Van-Truong Pham) ... 13
Survey on Energy Efficient Approach for Wireless Multimedia Sensor Network (Mayur Bhalia and Arjav Bavarva) ... 25
Effectual Accuracy of Ophthalmological Image Retinal Layer Segmentation (Praveen Mittal and Charul Bhatnagar) ... 35
Performance Assessment of K-Nearest Neighbor Algorithm for Classification of Forest Cover Type (Pratibha Maurya and Arvind Kumar) ... 43
Gestational Diabetes Prediction Using Machine Learning Algorithms (Vaishali D. Bhagile and Ibraheam Fathail) ... 53
Design and Implementation of Buffon Needle Problem Using Technology for Engineering Students (Tommy Tanu Wijaya, Jianlan Tang, Shiwei Tan, and Aditya Purnama) ... 65
Energy-Efficient Multihop Cluster Routing Protocol for WSN (Monika Rajput, Sanjay Kumar Sharma, and Pallavi Khatri) ... 77
A Review of Smart Electronic Voting Machine (Anshuman Singh, Ashwani Yadav, Ayush Kumar, and Kiran Singh) ... 85
Application of Machine Learning Techniques in Intrusion Detection Systems: A Systematic Review (Puneet Himthani and Ghanshyam Prasad Dubey) ... 97
Relationship between Sustainable Practices and Firm Performance: A Study of the FMCG Sector in India (Mohd Yousuf Javed, Mohammad Hasan, and Mohd Khalid Azam) ... 107
Learning Paradigms for Analysis of Bank Customer (Akash Rajak, Ajay Kumar Shrivastava, Vidushi, and Manisha Agarwal) ... 115
Diagnosis of Dermoscopy Images for the Detection of Skin Lesions Using SVM and KNN (Ebrahim Mohammed Senan and Mukti E. Jadhav) ... 125
MaTop: An Evaluative Topic Model for Marathi (Jatinderkumar R. Saini and Prafulla B. Bafna) ... 135
Convolutional Neural Network: An Overview and Application in Image Classification (Sushreeta Tripathy and Rishabh Singh) ... 145
A Comparison of Backtracking Algorithm in Time-Shared and Space-Shared VM Allocation Approaches Using CloudSim (T. Lavanya Suja and B. Booba) ... 155
Mango (Mangifera indica L.) Classification Using Convolutional Neural Network and Linear Classifiers (Sapan Naik and Purva Desai) ... 163
A Review on Current IoT-Based Pasture Management Systems and Applications of Digital Twins in Farming (Ntebaleng Junia Lemphane, Ben Kotze, and Rangith Baby Kuriakose) ... 173
Conceptual Model for Measuring Complexity in Manufacturing Systems (Germán Herrera Vidal, Jairo Rafael Coronado-Hernández, and Andrea Carolina Primo Niebles) ... 181
Hole Filling Using Dominant Colour Plane for CNN-Based Stereo Matching (Rachna Verma and Arvind Kumar Verma) ... 191
Centralized Admission Process: An E-Governance Approach for Improving the Higher Education Admission System of Bangladesh (Pratik Saha, Chaity Swarnaker, Fatema Farhin Bidushi, Noushin Islam, and Mahady Hasan) ... 203
Opinion Mining and Analysing Real-Time Tweets Using RapidMiner (Rainu Nandal, Anisha Chawla, and Kamaldeep Joshi) ... 213
Household Solid Waste Collection Cost Estimation Model: Case Study of Barranquilla, Colombia (Thalía Obredor-Baldovino, Katherinne Salas-Navarro, Miguel Santana-Galván, and Jaime Rizzo-Lian) ... 223
Tomato Sickness Detection Using Fuzzy Logic (L. Vijayalakshmi and M. Sornam) ... 237
Autism Spectrum Disorder Study in a Clinical Sample Using Autism Spectrum Quotient (AQ)-10 Tools (Rakhee Kundu, Deepak Panwar, and Vijander Singh) ... 249
Robust Video Steganography Technique Against Attack Based on Stationary Wavelet Transform (SWT) and Singular Value Decomposition (SVD) (Reham A. El-Shahed, M. N. Al-Berry, Hala M. Ebied, and Howida A. Shedeed) ... 257
Statistical Inference Through Variable Adaptive Threshold Algorithm in Over-Sampling the Imbalanced Data Distribution Problem (S. Karthikeyan and T. Kathirvalavakumar) ... 267
Feature Engineering for Tal-Patra Manuscript Text Using Natural Language Processing Techniques (M. Poornima Devi and M. Sornam) ... 279
Digital Transformation of Public Service Delivery Processes Based of Content Management System (Pavel Sitnikov, Evgeniya Dodonova, Evgeniy Dokov, Anton Ivaschenko, and Ivan Efanov) ... 289
Secure and Sustain Network for IoT Fog Servers (Naziya Hussain, Harsha Chauhan, and Urvashi Sharma) ... 297
Cataract Detector Using Visual Graphic Generator 16 (Aman, Ayush Gupta, Swetank, and Sudeept Singh Yadav) ... 307
Combination of Local Feature Extraction for Image Retrieval (S. Sankara Narayanan, D. Vinod, Suganya Athisayamani, and A. Robert Singh) ... 319
A Review in Anomalies Detection Using Deep Learning (Sanjay Roka, Manoj Diwakar, and Shekhar Karanwal) ... 329
Sustainable Anomaly Detection in Surveillance System (Tanmaya Sangwan, P. S. Nithya Darisini, and Somkuwar Shreya Rajiv) ... 339
A Robust Fused Descriptor Under Unconstrained Conditions (Shekhar Karanwal and Sanjay Roka) ... 349
IoT-Based Smart System for Safety and Security of Driver and Vehicle (Prarthana Roy, Shubham Kumar Jain, Vaishali Yadav, Amit Chaurasia, and Ashwani Kumar Yadav) ... 359
Using a Single Group Experimental Study to Underpin the Importance of Human-in-the-Loop in a Smart Manufacturing Environment (J. Coetzer, R. B. Kuriakose, H. J. Vermaak, and G. Nel) ... 375
Analysis of Downlink and Uplink Non-orthogonal Multiple Access (NOMA) for 5G (H. M. Shwetha and S. Anuradha) ... 385
Pattern Matching Algorithms: A Survey (Rachana Mehta and Smita Chormunge) ... 397
Development of an Android Fitness App and Its Integration with Visualization Tools (H. Bansal and S. D. Shetty) ... 405
Breast Cancer Prediction Models: A Comparative Study and Analysis (Aparajita Nanda, Manju, and Sarishty Gupta) ... 415
Analysis of Energy-Efficient Clustering-Based Routing Technique with BrainStorm Optimization in WSN (Ankur Goyal, Bhenu Priya, Krishna Gupta, Vivek Kumar Sharma, and Sandeep Kumar) ... 423
A Secure and Intelligent Approach for Next-Hop Selection Algorithm for Successful Data Transmission in Wireless Network (Ruchi Kaushik, Vijander Singh, and Rajani Kumari) ... 433
Proposed Sustainable Paradigm Model to Data Storage of IoT Devices in to AWS Cloud Storage (Sana Zeba and Mohammad Amjad) ... 445
Potential Applications of the Internet of Things in Sustainable Rural Development in India (Md. Alimul Haque, Shameemul Haque, Moidur Rahman, Kailash Kumar, and Sana Zeba) ... 455
Evaluation and Analysis of Models for the Measurement of Complexity in Manufacturing Systems (Germán Herrera Vidal, Jairo Rafael Coronado-Hernández, and Gustavo Gatica González) ... 469
Fractional-Order Euler–Lagrange Dynamic Formulation and Control of Asynchronous Switched Robotic Systems (Ahmad Taher Azar, Fernando E. Serrano, Nashwa Ahmad Kamal, Sandeep Kumar, Ibraheem Kasim Ibraheem, Amjad J. Humaidi, Tulasichandra Sekhar Gorripotu, and Ramana Pilla) ... 479
Modeling the Imperfect Production System with Rework and Disruption (Neelesh Gupta, U. K. Khedlekar, and A. R. Nigwal) ... 491
Traffic Accident Detection Using Machine Learning Algorithms (Swati Sharma, Sandeep Harit, and Jasleen Kaur) ... 501
A Comparative Approach of Error Detection and Correction for Onboard Nanosatellite (Mahmudul Hasan Sarker, Most. Ayesha Khatun Rima, Md. Abdur Rahman, A. B. M. Naveed Hossain, Noibedya Narayan Ray, and Md. Motaharul Islam) ... 509
Effective Text Augmentation Strategy for NLP Models (Sridevi Bonthu, Abhinav Dayal, M. Sri Lakshmi, and S. Rama Sree) ... 521
Performance Enhancement of Raga Classification Systems Using Recursive Feature Elimination (M. Pushparajan, K. T. Sreekumar, K. I. Ramachandran, and C. Santhosh Kumar) ... 533
A Study of the Factors Influencing Behavioral Intent to Implement Forensic Audit Techniques in Indian Companies (Kamakshi Mehta, Bhoomika Batra, and Vaibhav Bhatnagar) ... 543
Using Multiple Regression Model Analysis to Understand the Impact of Travel Behaviors on COVID-19 Cases (Khalil Ahmad Kakar and C. S. R. K. Prasad) ... 555
The Review of Prediction Models for COVID-19 Outbreak in Indian Scenario (Ramesh Chandra Poonia, Pranav Dass, Linesh Raja, Vaibhav Bhatnagar, and Jagdish Prasad) ... 567
Design and Simulation of ECG Signal Generator by Making Use of Medical Datasets and Fourier Transform for Various Arrhythmias (M. R. Rajeshwari and K. S. Kavitha) ... 577
Author Index ... 601

About the Editors

Dr. Ramesh Chandra Poonia is an Associate Professor at the Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India. He recently completed his postdoctoral fellowship at the CPS Lab, Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Ålesund, Norway. He received his Ph.D. degree in Computer Science from Banasthali University, Banasthali, India, in July 2013. He is Chief Editor of the TARU Journal of Sustainable Technologies and Computing (TJSTC) and Associate Editor of the Journal of Sustainable Computing: Informatics and Systems, Elsevier. He also serves on the editorial boards of a few international journals. He is the main author or co-author of six books and lead editor of several special issues of journals and books, including Springer, CRC Press (Taylor & Francis), IGI Global and Elsevier edited books and Springer conference proceedings. He has authored or co-authored over 70 research publications in peer-reviewed reputed journals, book chapters and conference proceedings. His research interests are sustainable technologies, cyber-physical systems, path planning and collision avoidance in artificial intelligence, and intelligent algorithms for autonomous systems.

Dr. Vijander Singh is working as an Associate Professor in the Department of Computer Science and Engineering, Manipal University Jaipur, India. He received his Ph.D. degree from Banasthali University, Banasthali, India, in April 2017. He has published around 30 research papers in indexed journals and several book chapters for international publishers. He has authored and edited books and has handled special issues of journals of international repute, such as Taylor & Francis, Taru Publication, IGI Global and Inderscience, as a guest editor. He is an Associate Editor of the TARU Journal of Sustainable Technologies and Computing (TJSTC). He has organized several international conferences, FDPs and workshops as a core member of the organizing committees. His research areas include machine learning, deep learning, precision agriculture and networking.

Prof. Dharm Singh Jat received his Master of Engineering and Ph.D. in Computer Science and Engineering from prestigious universities in India. He is a Professor of Computer Science at Namibia University of Science and Technology. He is the author of more than 160 peer-reviewed articles and the author or editor of more than 20 books. His interests span the areas of multimedia communications, wireless technologies, mobile communication systems, edge and roof computing, network security and the Internet of Things. He has given several guest lectures at various prestigious conferences. He has been the recipient of more than 19 prestigious awards, such as the Eminent Scientist Award, Distinguished Academic Achievement, Eminent Engineering Personality, CSI Chapter Patron and Significant Contribution, Best Faculty Researcher, Best Technical Staff, Outstanding University Service, and Distinguished ACM and IEEE Computer Society Speaker awards. He is a Fellow of the Institution of Engineers (India) and of the Computer Society of India, and a Senior Member of IEEE.

Mario José Diván was born in Santa Rosa (Argentina) on March 10, 1979. He received an engineering degree in Information Systems from the National Technological University (NTU), Argentina, in 2003; a specialty in managerial engineering from the NTU in 2004; a specialty in data mining and knowledge discovery in databases from the University of Buenos Aires (Argentina) in 2007; and a specialty in high-performance and grid computing from the National University of La Plata (NULP), Argentina, in 2011. He obtained his Ph.D. in Computer Science in 2012 from the NULP. He is a Full Professor at the National University of La Pampa (Argentina), head of the Data Science Research Group, an Honorary Professor at the Amity Institute of Information Technology (Noida, India) and a visiting professor at many universities. His interest areas lie in data science, data streams, stream mining, high-performance computing, big data, the Internet of Things and measurement.

Dr. Mohammed S. Khan (SM'19) is currently an Assistant Professor of Computing at East Tennessee State University and the Director of the Network Science and Analysis Laboratory (NSAL). He received his M.Sc. and Ph.D. in Computer Science and Computer Engineering from the University of Louisville, Kentucky, USA, in 2011 and 2013, respectively. His primary area of research is in ad hoc networks, wireless sensor networks, network tomography, connected vehicles and vehicular social networks. He currently serves as an Associate Editor of IEEE Access, IET ITS, IET WSS, and Springer's Telecommunication Systems and Neural Computing and Applications. He has been on the technical program committees of various international conferences and a technical reviewer for various international journals in his field. He is a Senior Member of IEEE.

Analysis of Perceived Usability, Satisfaction and Adoption of Biometric Systems in the Public Transportation Sector of U.A.E.

Sadia Riaz, Arif Mushtaq, Hanh Pham, Sanjana Mookim, and Tung Phan

Abstract

Sustainability is a buzzword increasingly adopted by various organizations over the past few years. The U.A.E. Government, in 2013, launched 'The Smart Government Initiative' to reform public services and build an economy that focuses not only on profit but also on social and environmental aspects. Biometric technology has already proved its applications and potential in many sectors. However, our research found that it is yet to be systematically implemented in the public transport sector of the U.A.E. The existing value-stored card system (prepaid and postpaid) poses concerns ranging from the risk of losing all residual value through theft or damage to the hassle of getting the card manually recharged. Therefore, this study proposes a conceptual framework for implementing biometrics to simplify the process for users and to build a strong central authority to control the system. The study also evaluates the perceived satisfaction of users with the proposed framework. Quantitative data collection is the primary methodology used to analyse and understand the population's perceptions and acceptance in this study.

Keywords: Biometrics · Sustainability · Users' perceptions · Adoption trends

S. Riaz (B) · H. Pham · S. Mookim · T. Phan
S. P. Jain School of Global Management, Dubai International Academic City, Dubai, UAE

A. Mushtaq
City University College of Ajman, Ajman, UAE

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_1


1 Introduction

The world's latest trend is sustainability, and within the context of Vision 2021, every Emirate has implemented specific strategic plans and initiatives to resolve the issues of energy and climate change. Various factors, such as a development-friendly environment consisting primarily of practical, creative and smart governance, mechanisms promoting investment and entrepreneurship, and an advanced and sophisticated service system covering all fields, are also considered [1]. This change is crucial since 70% of the world's population is expected to be living in urban areas by 2050. Sheikh Mohammed Bin Rashid, the Vice President and Ruler of Dubai, launched the Smart Government initiative in 2013. He defines Smart Government as "the one that never sleeps": operating 24 hours a day, 365 days a year, it is as accommodating as a hotel, providing fast delivery and robust procedures [2].

Biometric systems have emerged as a critical component of 'The Smart Government Initiatives' [3]. They provide high levels of security and reliability to support the Identity Management Infrastructure operated by the Federal Authority for Identity and Citizenship (I.C.A.) for the verification and identification of all U.A.E. residents. The I.C.A. collects fingerprints, facial photographs, palm prints and digital signatures through the Emirates ID registration process for multiple strategic objectives [4].

In the U.A.E., various sectors have adopted biometrics, and these efforts have significantly enhanced operational processes and user authentication systems. However, it is interesting to note that the public transportation sector is still behind this particular adoption trend. Sustainable transport systems make a significant contribution to the three main pillars of sustainable development: environmental, social and economic. According to statistical reports released by the Department of Transport, only 1% of the capital's population uses public transportation services [5]. Additionally, fare collection for public transportation is based on a value-stored card system, with various types of cards used in different Emirates. The lack of interoperability between the seven Emirates' systems can lead to dissatisfaction among otherwise happy consumers.

A plethora of research highlights the implementation of biometrics in various sectors, but limited research discusses biometrics usage and its implications for public transportation in the U.A.E. This research is driven by recent literature to understand the context of biometrics use in the U.A.E. Quantitative data collection is the primary methodology used to analyse the population's perceptions and acceptance in this study. The report reviews and synthesizes current literature on the interoperability of value-stored cards in the public transportation systems of the U.A.E. The absence of a central regulating authority is one of the significant contributors to consumers' dissatisfaction with public transportation in the U.A.E. Moreover, gender differences in the perception and adoption of the proposed biometric system in the U.A.E. transportation sector are also investigated in this paper. Finally, this study proposes a conceptual framework for implementing biometrics in the public transportation system and building a strong central authority to control it. The framework is based on a cloud computing architecture with a central database designed to store all public transport users' information and to authenticate access through a biometric verification system. It is assumed that the new system will be user friendly and will enhance customer happiness through an interactive interface.

2 Literature Review

The lack of interoperability between the systems in the seven Emirates can lead to dissatisfaction among consumers. For instance, a public transport user who lives in Dubai cannot travel by bus or metro in Abu Dhabi unless he or she buys an OJRA card, since N.O.L. cards and OJRA cards are used in Dubai and Abu Dhabi, respectively. Therefore, in the absence of a centralized body controlling the operations of public transportation services across the entire U.A.E., travellers are likely to have an unsatisfying experience [6]. Besides, these days consumers carry several plastic cards with monetary value or loyalty points that cannot be combined into one. Thus, they face the risk of losing a card's residual value in case of damage or loss, as there is no refund mechanism in place.

Another problem is recharging the cards. In Dubai, for example, top-up machines, authorized agents, ticket offices and online methods are the only options available for recharging these value-stored cards. However, not every area has a top-up machine or agents, and the metro network is not easily accessible from many parts of the country. The online top-up method can take at least 48 hours for the amount to be recognized by the metro gates and parking meters. Moreover, the amount is not activated automatically, and consumers need to visit the metro stations themselves to activate the pending amount [7].

In the U.A.E., various sectors, including public and private organizations, have adopted biometrics; one example is the airport industry. Migrants to the U.A.E. can now easily pass through immigration with the aid of an integrated biometric system in airports launched by the Ministry of Interior. This system includes eye scans, facial recognition and fingerprinting, and can detect forged passports [8]. The U.A.E. has established a sophisticated digital identity management system that contains all U.A.E. residents' data, including their biographical and biometric data, for accurate identity authentication [9]. Digital identities of citizens are created through a secure registration process for acquiring the Emirates ID, the national smart identity card [10]. Besides, the U.A.E. authorities have made available an online validation gateway that allows private sectors to make instant and accurate identification of users [11]. The validation gateway is based on Public Key Infrastructure (PKI), an Information Communication Technology (I.C.T.) infrastructure describing a set of roles, policies, hardware, software and procedures that manage and monitor large-scale exchange of information, based on a two-key asymmetric cryptosystem [12].


Fig. 1 Public-key cryptography process (source: Microsoft)
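To make the two-key process in Fig. 1 concrete, the sketch below encrypts a message with the recipient's public key and signs it with the sender's private key. This is a minimal illustration written with the open-source Python cryptography package, not the gateway's actual implementation; the key size, padding choices and sample message are assumptions for demonstration only.

```python
# Minimal sketch of public-key encryption and digital signatures
# (illustrative only; not the U.A.E. validation gateway's implementation).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Each party holds a private key and publishes the matching public key.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"transaction: user 1234 boarded bus 42"

# Encryption: anyone may encrypt with the recipient's PUBLIC key;
# only the recipient's PRIVATE key can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = recipient_key.public_key().encrypt(message, oaep)
assert recipient_key.decrypt(ciphertext, oaep) == message

# Digital signature: the sender signs with its PRIVATE key;
# anyone may verify with the sender's PUBLIC key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = sender_key.sign(message, pss, hashes.SHA256())
sender_key.public_key().verify(signature, message, pss, hashes.SHA256())  # raises on tampering
print("encrypted, decrypted and signature verified")
```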

PKI offers leading-edge applications such as data encryption and digital signatures that allow different I.T. systems to maintain a high level of information confidentiality and data protection, preserving data privacy, strengthening the system and securing access to digital identities [9]. The PKI initiative supports data identity, digital signatures, encryption and key recovery through security certificates for private keys. It also produces various types of certificates that meet the responsibilities of different sectors and societies; the role-based certificates used by the e-government, the healthcare sector and the justice system are examples. Digital certificates issued under the PKI project enable business sectors to validate PKI-based transactions, supporting the delivery and verification of transactions and user identities. Figure 1 illustrates a simple process of how cryptography works with a public database.

For fingerprints, the biometric identification process begins when the person to be recognized places his or her fingerprints on the biometric capture device. Fingerprint features are then extracted, encrypted and compared with one or multiple templates in the biometric enrolment database. If the sample matches a template in the database, the identification request is accepted; otherwise, it is rejected [6] (Fig. 2).

Fig. 2 Biometrics enrolment and verification process
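The accept/reject flow of Fig. 2 is, at its core, a nearest-template comparison against the enrolment database. The sketch below shows that flow with plain feature vectors and a Euclidean-distance threshold; production fingerprint matchers use minutiae-based scoring and encrypted templates, so the feature representation, distance metric and threshold here are stand-in assumptions.

```python
# Illustrative enrolment-and-verification flow (cf. Fig. 2), assuming fingerprint
# features have already been extracted into fixed-length vectors. Real systems
# use minutiae matching and encrypted templates; this is only a sketch.
from typing import Dict, Optional
import numpy as np

ENROLMENT_DB: Dict[str, np.ndarray] = {}  # user_id -> stored template
MATCH_THRESHOLD = 0.5                      # assumed tolerance, tuned in practice

def enrol(user_id: str, features: np.ndarray) -> None:
    """Store the extracted feature vector as the user's template."""
    ENROLMENT_DB[user_id] = features

def identify(sample: np.ndarray) -> Optional[str]:
    """Compare the sample against every template; accept the closest
    match if it falls within the threshold, otherwise reject."""
    best_id, best_dist = None, float("inf")
    for user_id, template in ENROLMENT_DB.items():
        dist = float(np.linalg.norm(sample - template))
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= MATCH_THRESHOLD else None

# Example: enrol one commuter, then identify a slightly noisy re-scan.
rng = np.random.default_rng(0)
template = rng.normal(size=64)
enrol("user-1234", template)
print(identify(template + rng.normal(scale=0.01, size=64)))  # -> user-1234
print(identify(rng.normal(size=64)))                         # -> None (rejected)
```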

3 Data Analysis

The survey was conducted on a sample of 100 respondents to analyse people's perceptions of their everyday transport experience and to predict the level of acceptance of adopting biometrics in the public transportation system. The survey consists of 10 questions, ranging from demographics and recent experience with the value-stored card system to perceptions and level of acceptance of this biometrics implementation. 62% of the responses came from people aged 18–25 years; this group, together with the 25–35 age group, dominated the survey with nearly 90%. Regarding recent experience with public transport, 56% of the respondents are frequent travellers who used public transport more than five times per week, which makes this study relevant and of significant impact to them. Buses, metros and trams dominated the study, with nearly 80% of responses concerning these forms of public transport.

When asked about their satisfaction with the value-stored card system in general, 84% of responses ranged from Neutral to Highly Satisfied, indicating that respondents had not had many problems with the cards so far. However, when explicitly asked about recharging the cards, nearly 60% found it inconvenient. Following our literature review, when the problem of recharging was linked to a new postpaid account option, which allows users to enjoy all public services without needing to maintain any balance, responses were relatively neutral, with 57% still preferring the old method and 43% wanting to change. These responses may be strongly affected by the fact that some respondents have easy access to top-up methods while others do not.

With respect to gender-difference testing and perception analysis, results showed that male and female respondents differed significantly, with F(3, 96) = 3.705, p = 0.014 < 0.05. Gender and adoption of the biometric system had a significant interaction effect, with F = 3.727, p = 0.048 < 0.05. Additionally, in post-hoc multiple comparisons, female respondents' adoption of biometrics differed significantly based on the safety and security factor (p = 0.031), while male respondents differed significantly based on the speed and efficiency factor (p = 0.04). Bivariate Pearson correlation analysis showed positive correlations of gender with adoption of the biometric system in public transport (r = 0.202, p = 0.044), with reported concerns about the biometric system (r = 0.228, p = 0.023), and with convenience of using the biometric system in place of N.O.L. top-up (r = 0.388, p = 0.000). The correlations were significant, but causation could not be confirmed at this stage.
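Correlations of the kind reported above can be reproduced directly; the sketch below computes a Pearson correlation between a binary gender coding and Likert-scale adoption scores with SciPy. The data values are placeholders, not the study's survey responses.

```python
# Pearson correlation between a binary gender coding (0 = female, 1 = male)
# and a 5-point Likert adoption score. Values are illustrative placeholders,
# NOT the survey data reported in this paper.
from scipy import stats

gender   = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
adoption = [3, 4, 5, 3, 4, 2, 3, 5, 4, 3, 4, 2]

r, p = stats.pearsonr(gender, adoption)
print(f"r = {r:.3f}, p = {p:.3f}")  # compare with the reported r = 0.202, p = 0.044
```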

Fig. 3 Gender-difference average score interaction chart

Figure 3 shows gender-difference testing results for the y-axis question on the "biometric acceptance threshold" of male and female respondents across five independent variables: satisfaction, concerns, adoption, implementation and convenience. Results showed that male respondents, on average, exhibited a higher acceptance threshold for biometrics in the public transport sector of the U.A.E. than female respondents. Although the mean difference was insignificant, it confirmed that female respondents had greater reservations concerning security and data theft.

Fig. 4 Perceived satisfaction of prepaid N.O.L. users

Analysis of Perceived Usability, Satisfaction …

7

Fig. 5 Perceived satisfaction of postpaid N.O.L. users

The data were recorded, and variables were combined to develop this form of data visualization. Randomization of the data was also confirmed through parallel analysis. The comparative data in Figs. 4 and 5 show that users with a preference for prepaid payment methods had a weak correlation (y = −0.0037x + 4.0271, R² = 0.0091) with perceived satisfaction of biometric implementation in the public transport sector of the U.A.E., compared to users with a preference for the postpaid payment method (y = −0.0007x + 4.4141, R² = 0.0005). The results may be insignificant and inconclusive but confirmed the existence of a disparity based on preference type. To further analyse whether the implementation of biometrics would lead to acceptance of public transportation, univariate analyses were performed with gender and satisfaction introduced as fixed factors in the equation. Results showed that Gender * Satisfaction with Public Transport had a significant interaction effect, F = 3.263, p = 0.015, in predicting public transport simplification. Table 1 shows the analysis of variance (ANOVA) results across three groups with different frequencies of travel towards predicting biometric adoption in public transport.

Table 1 Analysis of variance (frequency of travel)

                 Sum of squares   df   Mean square   F       Sig.
Between groups   0.523            2    0.262         1.964   0.041
Within groups    12.917           97   0.133
Total            13.440           99

Results showed a significant difference based on frequency of travel, F(2, 97) = 1.964, p = 0.041 < 0.05. Post-hoc multiple comparisons showed that the most frequent travellers in the survey (travelling more than five times a week) differed significantly, with p = 0.036, towards positive adoption of the biometric system compared to less frequent travellers.


The last part of the survey introduced the idea of implementing biometrics and analysed people's perceptions and acceptance. The study revealed a surprising finding: many respondents are unfamiliar with the idea of biometrics. When asked "When talking about biometrics, what is the first thing that comes to your mind?", nearly half of the responses were about different types of biometrics. Many respondents could not see the benefits and applications of biometrics, a gap this study aims to address. Nevertheless, nearly 70% agreed that biometrics would simplify the process and make it more convenient to use public transport. On the other hand, security, efficiency and complexity were the three significant problems raised when discussing the limitations of biometrics, which are addressed in the next part. Overall, 84% supported the idea and would love to see it implemented in the future.
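As an illustrative aside, the group comparison and correlation statistics above can be reproduced with standard tools. The sketch below uses SciPy; the file and column names (`survey_responses.csv`, `gender`, `adoption`, `frequency_group`) are hypothetical placeholders for the survey variables, not the authors' analysis code.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey frame: numeric 'gender' and 'adoption' scores and a
# 'frequency_group' label per respondent (assumed file and column names).
df = pd.read_csv("survey_responses.csv")

# One-way ANOVA across the travel-frequency groups (cf. Table 1)
groups = [grp["adoption"].values for _, grp in df.groupby("frequency_group")]
f_stat, p_anova = stats.f_oneway(*groups)

# Bivariate Pearson correlation of gender (coded numerically) with adoption
r, p_corr = stats.pearsonr(df["gender"], df["adoption"])
print(f"ANOVA F = {f_stat:.3f} (p = {p_anova:.3f}); "
      f"Pearson r = {r:.3f} (p = {p_corr:.3f})")
```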

4 Limitations
There are a few concerns, one being an unauthorized source obtaining a biometric image through unfair means, compromising the system [13]. Manual authentication devices (e.g., passwords, keys, and identification cards) can be blocked and replaced in the event of a privacy leak; with biometrics, however, there is only one set of identity and unique identifiers. Researchers, for example, have found a way to mimic fingerprints using gelatin, and iris scanners can be bypassed with high-resolution eye pictures [14]. If several organizations hold biometric data unique to one individual, they may share that data (cross-matching) across their databases; for example, private enterprise data may be compared with data from the government [15]. Furthermore, biometric inconsistencies due to mechanical failures, and the technology's occasional inability to operate, critically question its reliability. There are reported cases where biometrics were mismatched and resulted in a wrong verification, especially in criminal records [16]. Human-factor-related problems are also of considerable concern: human bodies continuously undergo physical changes, such as injury and wear caused by environmental impact. Biometric performance relies on further aspects, such as accessibility and user perception, which may influence system performance significantly. Some of these factors have been studied mainly in specific biometric frameworks, such as fingerprint and facial recognition. Such considerations include the various ways biometric characteristics are presented to sensors and biometric character variation caused by disease or climate change [17]. Biometric system technology is very complex in terms of both capacity and accuracy. A biometric system typically uses additional tools, such as cameras and scanning devices, to capture images and record or calculate characteristics, together with computer software or hardware to retrieve, encrypt, store and compare those characteristics. Consequently, the use of biometrics for personal identification is impractical unless a national database of biometrics is available for customer onboarding [18].


5 Proposed Framework
Figure 6 describes the process of the conceptual framework based on a cloud computing architecture. The process of registering for a public transport account starts at the airports and border checkpoints within the U.A.E. and includes the following steps.

5.1 Security Check
There will be offices at every airport to collect all the required documents and personal information, including biometric records, from every tourist arriving in the U.A.E. The data will then be consolidated to form a national record for each person.

Fig. 6 Conceptual framework—biometric system in public transport sector of U.A.E


5.2 Public Service Registration
Once the identity record has been formed, a citizen or tourist can register their public transport service account and generate a unique password for it.

5.3 Information Storage
For residents, identity records will be stored on their Emirates ID, which will be the primary method of authenticating citizens whenever they have an issue with their accounts. For tourists, their passports will serve the same purpose.

5.4 Select the Account Option
Once the account is activated, users can select whether to use a prepaid or a postpaid account. A prepaid account is the current method used in the public transport fare collection system; customers need to recharge and keep the required balance to use the transport. Postpaid account users can enjoy all the services, and an invoice with the due amount will be generated at the end of the month (or, for tourists, at the end of their stay), payable through various payment methods. Registering a postpaid account requires a bank account or credit card.

5.5 User Interface
The user interface consists of two parts, a fingerprint scanner and a keypad, with a small information monitor above. The fingerprint scanner reads the biometric information when customers tap in and out, and the monitor shows the available balance for prepaid accounts or the accumulated due amount for postpaid accounts. If there is a problem with the scanner, users can enter their unique password on the keypad for identity confirmation; the information is shown on the screen after successful authentication. Moreover, for security reasons, the check-in function on buses should only be available at the front door, so that bus captains can supervise the process and prevent fare evasion, while the check-out function can be available at every door.
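A minimal sketch of this tap-in flow is given below; every component and method name (`scanner`, `keypad`, `monitor`, `accounts`) is hypothetical, since the section specifies the interface only conceptually.

```python
# Hypothetical sketch of the Sect. 5.5 tap-in flow; scanner, keypad,
# monitor and accounts are assumed components, not a specified API.
def check_in(scanner, keypad, monitor, accounts):
    record = scanner.read_fingerprint()          # primary: biometric tap-in
    if record is None:                           # scanner problem: keypad fallback
        record = accounts.find_by_password(keypad.read_password())
    account = accounts.get(record)
    if account.option == "prepaid":
        monitor.show(f"Balance: {account.balance} AED")
    else:                                        # postpaid: accumulated dues
        monitor.show(f"Due so far: {account.accumulated_due} AED")
    account.start_trip()                         # closed at tap-out to compute the fare
```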

Analysis of Perceived Usability, Satisfaction …

11

5.6 Payment Gateway
Depending on the type of account (prepaid or postpaid), there will be a different payment method. For prepaid accounts, users need to top up their account beforehand and use the available balance, just like the current method. For postpaid accounts, users need a valid U.A.E. bank account or a valid credit card linked to their public service account; all their transactions throughout the month are recorded, and the due amount is payable at the end of the month. Tourists with postpaid accounts should only be allowed to depart once they have cleared all due payments at the airport.
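The monthly postpaid settlement described above can be sketched as follows; the trip records, fare table and account fields are invented placeholders, not part of the proposed system's specification.

```python
# Hypothetical end-of-month postpaid settlement (names are illustrative).
def monthly_invoice(trips, fare_table):
    # every recorded transaction of the month contributes its fare
    amount_due = sum(fare_table[trip.mode] for trip in trips)
    return {"amount_due": amount_due, "status": "unpaid"}

def may_depart(tourist_account):
    # tourists with postpaid accounts must clear dues before leaving
    return tourist_account.invoice["status"] == "paid"
```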

6 Conclusions and Future Recommendations
This study was conducted to support the idea of sustainability and the goals of the Smart Government Initiative of the U.A.E. Government. It proposes a conceptual framework for implementing biometrics in the public transportation system. However, the technology will only work if people perceive the benefits of its adoption, and there are certain limitations that need cautious consideration. Moreover, the government must have a robust national database and infrastructure to support this model and proceed towards a sustainable national transportation sector. Therefore, a good approach and a well-planned project are needed to demonstrate the potential and adaptability of biometrics, not only in the transportation sector but also to provide a nationwide digital database and make the U.A.E. a truly smart country.

References 1. Efforts towards sustainability. Government U.A.E., 29 August 2019 [Online]. Available https://government.ae/en/information-and-services/environment-and-energy/environme ntal-protection/efforts-towards-sustainability. 2. J.A. Suwaidi, Smart Government and the U.A.E.: The Happiness of Citizens is the Ultimate Goal (The National, 5 Dec 2017) 3. I.C.A., Emirates ID: Biometric Database of U.A.E. Population Supports Projects of U.A.E. Vision 2021. Federal Authority for Identity and Citizenship, 24 Mar 2014 [Online]. Available https://www.ica.gov.ae/en/media-centre/news/2014/3/24/emirates-id-biometric-databaseof-uae-population-supports-projects-of-uae-vision-2021.aspx 4. I.C.A., Apply for a New I.D. Card. Federal Authority for Identity and Citizenship [Online]. Available https://www.ica.gov.ae/en/services/e-services/apply-for-a-new-id-card.aspx 5. K.A. Huraimel, Gulf Transportation Needs a Sustainability Push (Gulf News, 6 Oct 2019) 6. A.M. Al-Khouri, Biometrics technology and the new economy. Int. J. Innov. Dig. Econ. (IJIDE) 3(4), 1–28 (2012) 7. M.V. Leijen, Dubai Metro Online Nol Top-Up May Take 48 h. Emirates 24/7, 05 July 2014 8. J. Hilton, Use These Biometrics to Pass Through U.A.E. Airports (Gulf News, 27 Nov 2019) 9. A.M. Al-Khouri, PKI in government identity management systems. Int. J. Netw Secur. Appl. (IJNSA) 3(3), 69–96 (2011)


10. INSEAD, Establishing a National ID 11. I.C.A., Validation Gateway the Threshold into Digital Economy. Federal Authority for Identity and Citizenship, 13 July 2015 [Online]. Available https://www.ica.gov.ae/en/media-centre/ news/2015/7/13/validation-gateway-the-threshold-into-digital-economy.aspx 12. S.A. Brands, Rethinking Public Key Infrastructures and Digital Certificates, 2000 13. A.B.J. Teoh, S. Lee, Y.W. Kuan, Cancelable biometrics and annotations on BioHash. Pattern Recogn. 41(6), 2034–2044 (2008) 14. E. Hurley, Biometrics May Be too Pricey, Complex for Data Centre. Tech Target, 29 September 2003 [Online]. Available https://searchsecurity.techtarget.com/news/929815/Biometrics-maybe-too-pricey-complex-for-data-center 15. N. Ratha, J. Connell, R. Bolle, Enhancing security and privacy in biometrics-based authentication systems. I.B.M. Syst. J. 40(3), 614–634 (2001) 16. J. Chinn, Fingerprint Expert’s Mistake Leads to Wrongful Conviction in Indiana. California Innocence Project, 18 October 2012 [Online]. Available https://californiainnocenceproject.org/ 2012/10/fingerprint-experts-mistake-leads-to-wrongful-conviction-in-indiana/ 17. E. Indrayani, The effectiveness and the efficiency of the use of biometric systems in supporting national database based on single ID card number (the implementation of Electronik ID Card in Bandung). J. Inf. Technol. Software Eng. 4(1) (2014) 18. P. Makin, C. Martin, 6 Things You May Not Know About Biometrics. CGAP [Online]. Available https://www.cgap.org/blog/6-things-you-may-not-know-about-biometrics

A Semi-supervised Deep Learning-Based Approach with Multiphase Active Contour Loss for Left Ventricle Segmentation from CMR Images Minh-Nhat Trinh, Nhu-Toan Nguyen, Thi-Thao Tran, and Van-Truong Pham
Abstract Along with the widespread achievements of machine learning in computer vision in recent years, plenty of deep learning models for medical image segmentation have been published, with impressive results. However, the majority of those models leverage only supervised techniques, while others additionally utilize semi-supervised and unsupervised techniques, though with results that are not as good as the supervised ones. Inspired by the efficiency of the Mumford-Shah functional for unsupervised tasks and of the Active Contour functional for supervised tasks, in this work we propose a new loss functional that integrates the two, with some modifications and an extension to the case of multiphase segmentation. It allows our deep learning model to segment multiple classes simultaneously with high accuracy instead of performing binary segmentation. The proposed approach is applied to the segmentation of the left ventricle from cardiac MR images, in which both endocardium and epicardium are segmented simultaneously. Our proposed method is assessed on the 2017 ACDCA dataset. The experiments demonstrate that our new loss achieves promising results in terms of Dice coefficient and Jaccard index, which illuminates the efficiency of our method in multi-class segmentation for medical images. Keywords Image segmentation · Unsupervised learning · Semi-supervised learning · Active contour functional · Mumford-Shah functional · Left ventricle segmentation

M.-N. Trinh · N.-T. Nguyen · T.-T. Tran · V.-T. Pham (B), School of Electrical Engineering, Hanoi University of Science and Technology, No.1 Dai Co Viet, Hanoi, Vietnam. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_2

1 Introduction
Image segmentation is one of the most important aspects of computer vision [1]. Given the segmentation maps, one can obtain more information about the object of interest for further image analysis steps such as feature extraction and object recognition.


Generally, in the segmentation process, each pixel of an image is classified into a specific class, so segmentation can be regarded as a pixel-based classification problem. In the medical field, automatic image segmentation is of paramount importance because traditional manual methods are not only time-consuming but also require expert knowledge. Besides, medical images are normally limited in number and have imbalanced classes [2]. In classical image segmentation, especially for medical images, the Mumford-Shah functional, following the pioneering work of Mumford and Shah [3], has been a central topic during the last two decades. From it, a variety of image segmentation methods have been developed, such as active contour models (ACMs) and level set methods [4], and proximal methods [5]. Though they have advantages such as giving closed contours and subpixel accuracy, active contours and level set methods have the drawback that one needs to create an initialization for the contour, which makes automatic segmentation difficult. Therefore, reliable and precise methods that can automatically segment medical images are necessary and pivotal. Fortunately, the proven effectiveness of deep learning in image segmentation has become a crucial premise for new methods in medical imaging in recent years. A great number of deep learning models, especially convolutional neural networks (CNNs), have been released and have obtained noticeable results [6–8]; they are commonly based on the U-net architecture [9] with productive improvements in models, metrics and loss functions. During CNN training, by comparing the prediction and segmentation masks, the loss function is minimized by tuning model parameters through gradient descent approaches; the loss function thus makes a considerable contribution to model optimization. For medical image segmentation, Dice coefficient (DC), cross-entropy (CE) or Tversky losses are preferred [9–13]. However, those losses normally lack boundary constraints, leading to unwanted results near the boundary [14]. Some researchers have shown that U-Net efficiency can be enhanced by designing various loss functions [14, 15]. On the other hand, the idea of combining an active contour model with a deep learning approach for efficient image segmentation [8, 10] has gained popularity in the last few years. The level-set-based formulation of the ACM is favored because it enables the curve to alter its topology during the segmentation process, such as tearing or gluing. However, its segmentation results significantly depend on the initialization of the curve: a mediocre initialization may lead the model to get stuck in a local minimum. This can be handled by using a deep learning approach to produce a coarse segmentation map before refining the contour with the ACM. Alternatively, active contours inspired by the Mumford-Shah functional and its variants have been used as loss functions in the training of neural networks [16, 17]. In recent work, Kim and Ye [18] proposed a novel approach to synergize the softmax output of a deep learning model with the Mumford-Shah functional. Taking advantage of the aforementioned approaches, in this paper we propose a new loss functional that integrates the effectiveness of the Mumford-Shah and Active Contour functionals, with some modifications. The proposed approach is applied to the case of left ventricle segmentation from MR images.


The Active Contour functional is built for multiphase segmentation such that both endocardial and epicardial regions are segmented simultaneously. We also present a results table comparing the cases where our loss does and does not include the Mumford-Shah functional. Our main contributions in this work comprise: (i) introducing a new loss functional that combines the Mumford-Shah and Active Contour functionals; (ii) building an end-to-end model for multiphase segmentation, trained with limited data; and (iii) achieving promising results on the 2017 ACDCA database while reducing training and inference time. The remainder of this paper is organized in the following way. We first describe our proposed method, with particular emphasis on the loss function. After that, in Sect. 3, some experimental results, including quantitative and qualitative comparisons, are presented. Finally, we discuss in Sect. 4 the conclusion and future applications.

2 Materials and Methods
2.1 Proposed Active Contour Loss Function
Following [18], by approximating the characteristic function with a vector Heaviside function of multiphase level-sets [19], the Mumford-Shah functional can be obtained as a differentiable energy function. To be more specific, the $n$th channel of the softmax output of the deep learning model is formulated as follows:

$$y_n(r) = \frac{e^{p_n(r)}}{\sum_{i=1}^{N} e^{p_i(r)}}, \quad n = 1, 2, \ldots, N \qquad (1)$$

where $r \in \Omega \subset \mathbb{R}^2$ and $p_i(r)$ is the output of the network at $r$ from the layer preceding the softmax. We utilized the CNN-inspired Mumford-Shah functional for the unsupervised task:

$$L_{MScnn}(\theta; x) = \sum_{n=1}^{N} \int_{\Omega} |I(r) - c_n|^2 \, y_n(r) \, dr + \beta \sum_{n=1}^{N} \int_{\Omega} |\nabla y_n(r)| \, dr \qquad (2)$$

where $c_n := c_n(\theta) = \int_{\Omega} x(r) y_n(r;\theta) \, dr \big/ \int_{\Omega} y_n(r;\theta) \, dr$ is the average pixel value of the $n$th class, $y_n(r) := y_n(r;\theta)$ is the output of the softmax layer of our model, and $I(r)$ is the given input image measurement. Finally, $\theta$ denotes the trainable model parameters.
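As a minimal illustration, Eq. (2) can be written in PyTorch as below. This is one reading of the loss under the stated definitions, not the authors' released implementation; the default β is the value quoted later in Sect. 2.3.

```python
import torch

def mumford_shah_loss(img, y_soft, beta=1.0, eps=1e-8):
    """Sketch of Eq. (2). img: (B, 1, H, W) input; y_soft: (B, N, H, W) softmax output."""
    # c_n: average pixel value of the n-th class (the definition below Eq. (2))
    c = (img * y_soft).sum(dim=(2, 3), keepdim=True) / (
        y_soft.sum(dim=(2, 3), keepdim=True) + eps)
    data_term = (((img - c) ** 2) * y_soft).sum(dim=1).mean()
    # discrete total-variation term standing in for the |grad y_n| integral
    tv = ((y_soft[:, :, 1:, :] - y_soft[:, :, :-1, :]).abs().mean()
          + (y_soft[:, :, :, 1:] - y_soft[:, :, :, :-1]).abs().mean())
    return data_term + beta * tv
```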


Inspired by [20], we proposed the active contour loss functional for multiphase segmentation:

$$L_{ac} = \frac{1}{P} \sum_{i=1}^{P} \sum_{n=1}^{N} \left[ (d_{1n} - g_n(i))^2 (1 - y_n(i)) + (d_{2n} - g_n(i))^2 \, y_n(i) \right] \qquad (3)$$

where $P$ is the number of pixels of the input, $g_n(i)$ denotes the semantic label, $d_{2n}$ is the average pixel value of the $n$th region, and $d_{1n}$ is the average pixel value of all remaining regions. Each channel of the output represents a certain segmented region. Assume that pixel $i$ does not belong to the $n$th region; then $(d_{2n} - g_n(i))^2 \approx 1$ and $(d_{1n} - g_n(i))^2 \approx 0$, so the model needs to decrease $y_n(i)$ in order to minimize $L_{ac}$. Similarly, when pixel $i$ belongs to the $n$th region, $(d_{2n} - g_n(i))^2 \approx 0$ and $(d_{1n} - g_n(i))^2 \approx 1$, so $y_n(i)$ is increased. Based on the aforementioned idea, we also proposed a varied active contour loss functional for multiphase segmentation:

$$L_{uac} = -\frac{1}{P} \sum_{i=1}^{P} \sum_{n=1}^{N} \left[ (d_{1n} - g_n(i))^2 \log(y_n(i) + \varepsilon) + (d_{2n} - g_n(i))^2 \log(1 - y_n(i) + \varepsilon) \right] \qquad (4)$$

where $\varepsilon$ is a smoothing parameter to avoid logarithm explosion. The logarithm penalizes more heavily when our neural network predicts incorrectly against the ground truths. In summary, we define the loss functionals for the semi-supervised task:

$$L_{semi\_ac} = L_{ac} + \alpha L_{MScnn}(\theta; I) \qquad (5)$$

$$L_{semi\_uac} = L_{uac} + \alpha L_{MScnn}(\theta; I) \qquad (6)$$

where $\alpha$ and $\beta$ are hyper-parameters.
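A hedged PyTorch sketch of Eqs. (3)–(6) follows, under the assumption of one-hot ground truths (so that $d_{2n} \approx 1$ and $d_{1n} \approx 0$, as the discussion above implies); it is illustrative rather than the authors' code, and it reuses the `mumford_shah_loss` sketch above.

```python
import torch

def active_contour_loss(y_soft, g, eps=1e-7, log_variant=False):
    """Eqs. (3)-(4): y_soft and g are (B, N, H, W); g is the one-hot label map."""
    # Region averages of the label map: d2n inside region n, d1n over the rest.
    # With one-hot labels these reduce to d2n ~= 1 and d1n ~= 0.
    d2 = (g * g).sum(dim=(0, 2, 3)) / (g.sum(dim=(0, 2, 3)) + eps)
    d1 = (g * (1 - g)).sum(dim=(0, 2, 3)) / ((1 - g).sum(dim=(0, 2, 3)) + eps)
    w1 = (d1.view(1, -1, 1, 1) - g) ** 2   # ~= g: penalizes missing a labelled pixel
    w2 = (d2.view(1, -1, 1, 1) - g) ** 2   # ~= 1 - g: penalizes a wrongly labelled pixel
    if log_variant:                        # Eq. (4)
        return -(w1 * torch.log(y_soft + eps)
                 + w2 * torch.log(1 - y_soft + eps)).mean()
    return (w1 * (1 - y_soft) + w2 * y_soft).mean()  # Eq. (3)

def semi_supervised_loss(img, y_soft, g, alpha=1e-6):
    # Eqs. (5)-(6): supervised AC term plus the unsupervised Mumford-Shah term
    return active_contour_loss(y_soft, g) + alpha * mumford_shah_loss(img, y_soft)
```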

2.2 Network Architecture
To evaluate the performance of our proposed loss function, we customized and used U-net [9] as our base segmentation framework. The model originates from the U-net construction with two paths, as seen in Fig. 1: encoders and decoders.


Fig. 1 Structure of proposed network

The main mission of the encoder is to extract informative features from the input image and also to provide the necessary skip layers for the decoder. Initially, the input image is normalized by rescaling to [0, 1], $I \in \mathbb{R}^{H \times W \times 1}$, and then fed into the encoder block. The encoder is separated into four small downsample blocks, each containing a 2D convolution layer followed by batch normalization and Swish activation [21]. Swish activation is utilized because it benefits from sparsity in the same way as the ReLU activation [22], but it is also unbounded above, which ensures that the outputs do not saturate at a maximum value for large inputs. For the skip connection path, the outputs of the encoder are four skip layers produced by the four aforementioned downsample blocks, denoted $S_1, S_2, S_3, S_4$ with $S_i \in \mathbb{R}^{H_i \times W_i \times C_{s_i}}$, where $H_i = 2H_{i+1}$, $W_i = 2W_{i+1}$ for $i = 1, \ldots, 4$, and $H_i, W_i$ are the height and width of the feature maps. Then, $S_1, S_2, S_3, S_4$ are fed into an attention module to learn to assemble more precise information before being concatenated with the outputs of the decoder blocks.



Fig. 2 a Encoder block, b decoder block, c attention module

Using an attention module before concatenation helps the network place more weight on the most important features of the skip layers. Instead of feeding in every feature map, this enables the connection to focus heavily on a specific part of the input: the skip connection feature map is multiplied by the attention distribution so that only the relevant parts are kept. The details of the model's structure are illustrated in Fig. 2. Regarding the decoder path, it includes four upsample blocks and a bottleneck block, with outputs $O_1, O_2, O_3, O_4, O_5$, where $O_i \in \mathbb{R}^{H_i \times W_i \times C_{d_i}}$ for $i = 1, \ldots, 5$, and the output compared with the ground truth is $O_1 \in \mathbb{R}^{H \times W \times 1}$. $O_{i+1}$ and $S_i$ are fed into the attention module, which returns a feature map $\tilde{S}_i$ of the same size as $S_i$. Then $\tilde{S}_i$ and $O_{i+1}$ are concatenated into $[O_{i+1}, \tilde{S}_i]$, $i = 1, \ldots, 4$, and passed through the corresponding upsample block to form $O_i$. Each upsample block includes a Squeeze-and-Excitation block followed by a soft residual block, modified from the original residual block in Resnet [23] (by substituting the convolution with a depthwise convolution) to lessen the model parameters and accelerate training.
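One common way to realize the attention module of Fig. 2c is additive attention with a sigmoid gate, sketched below in PyTorch; the channel sizes and exact layer arrangement are assumptions on our part, since only the block diagram is given.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Sketch of Fig. 2c: gate the skip feature S_x with the decoder feature O_{x+1}."""
    def __init__(self, skip_ch, dec_ch, inter_ch):
        super().__init__()
        self.proj_s = nn.Sequential(nn.Conv2d(skip_ch, inter_ch, 1),
                                    nn.BatchNorm2d(inter_ch))
        self.proj_o = nn.Sequential(nn.Conv2d(dec_ch, inter_ch, 1),
                                    nn.BatchNorm2d(inter_ch))
        self.act = nn.SiLU()  # Swish activation
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1),
                                 nn.BatchNorm2d(1), nn.Sigmoid())

    def forward(self, s, o):
        # s: skip feature S_x; o: O_{x+1}, already upsampled to s's spatial size
        attn = self.psi(self.act(self.proj_s(s) + self.proj_o(o)))  # map in [0, 1]
        return s * attn  # keep only the relevant parts of the skip connection
```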

2.3 Implementation
We implemented our model and utilized the Nadam algorithm [24] to optimize the trainable parameters of the model, with an initial learning rate of $10^{-3}$. The training process then loops over the ACDCA training dataset for about 300 epochs with batch size 32. The learning rate is also reduced by 50% if the validation Dice score does not improve for 10 or 15 epochs (minimum learning rate $10^{-5}$).


Besides, we set the hyper-parameters $\alpha$ and $\beta$ to $10^{-6}$ and 1, respectively, in this task. We then trained our neural network in four main stages with the loss functionals of Eqs. (3), (4), (5) and (6).
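A hedged Keras-style sketch of this training configuration is shown below; the `model`, datasets, loss and Dice metric are assumed to be defined elsewhere, and the patience of 10 epochs is one of the two quoted options.

```python
import tensorflow as tf

# Assumed to exist: model, train_ds, val_ds (batched at 32), semi_uac_loss, dice_score.
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-3),
              loss=semi_uac_loss,           # one of Eqs. (3)-(6)
              metrics=[dice_score])
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_dice_score", mode="max",   # watch the validation Dice score
    factor=0.5, patience=10, min_lr=1e-5)   # halve the LR when it stalls
model.fit(train_ds, validation_data=val_ds, epochs=300, callbacks=[reduce_lr])
```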

3 Results
3.1 Datasets
The evaluation framework was tested on the database released by the "Automatic Cardiac Diagnosis Challenge (ACDC)" workshop held in conjunction with the 20th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) on September 10th, 2017 in Quebec City, Canada [25]. The publicly available training database includes 100 patient 4D cine-CMR scans, each with segmentation masks for the left ventricle (LV), the myocardium (Myo) and the right ventricle (RV) at the end-systolic (ES) and end-diastolic (ED) phases of each patient. The training database containing the manual segmentation masks is separated into a training set and a test set with a ratio of 8:2 to assess the image segmentation models.

3.2 Performance Metrics
We used the Dice Similarity Coefficient (DSC) and the Jaccard Coefficient (JAC) to assess the performance of the deep neural network. The assessment metrics are:

$$DSC = \frac{2 \times TP}{FN + FP + 2 \times TP} \qquad (7)$$

$$JAC = \frac{TP}{FN + FP + TP} \qquad (8)$$

where TP, FN and FP denote the number of true positives, false negatives and false positives, respectively. The output of the trained segmentation model is binarized into a mask in order to compare it with the ground truth.
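Equations (7) and (8) translate directly into code; a minimal NumPy sketch, assuming `pred` and `gt` are boolean masks of equal shape:

```python
import numpy as np

def dsc_jac(pred, gt):
    """Compute Eq. (7) and Eq. (8) from binarized prediction and ground-truth masks."""
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    dsc = 2 * tp / (fn + fp + 2 * tp)
    jac = tp / (fn + fp + tp)
    return dsc, jac
```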


3.3 Results
The learning curves showing the progress of the loss and segmentation performance of our model on the training and validation sets are shown in Fig. 3. The network converges quickly on the ACDCA dataset (after about 200 epochs). The validation DSC score fluctuates over the training process because the validation set comprises a few images entirely different from those in the training set; therefore, during the first learning iterations the model has some difficulty segmenting those images. Figure 4 shows representative segmentation results of our proposed method for some ACDCA test images. As seen in this figure, the endocardium and epicardium contours produced by the proposed method are in decent agreement with the manual segmentation masks.

Fig. 3 The proposed method’s learning curves for endocardium and epicardium segmentation in the ACDCA database

Fig. 4 Representative segmentation by the proposed approach for the ACDCA data. The endocardial contours are in green, and the epicardial contours are in red


Table 1 The mean of obtained DSC and JAC between other loss functionals and the proposed loss functional on the ACDCA dataset for both endocardium (Endo) and epicardium (Epi) regions

                                              Dice coefficient    Jaccard index
Method                                        Endo      Epi       Endo      Epi
FCN [26]                                      0.89      0.92      0.83      0.89
SegNet [27]                                   0.82      0.89      0.75      0.83
U-net                                         0.88      0.92      0.82      0.87
The proposed approach with L_ac loss          0.92      0.93      0.88      0.90
The proposed approach with L_uac loss         0.93      0.93      0.89      0.90
The proposed approach with L_semi_ac loss     0.93      0.94      0.89      0.91
The proposed approach with L_semi_uac loss    0.93      0.95      0.89      0.92

In addition, to quantitatively evaluate the performance of the proposed method, we present in Table 1 the average Dice Similarity and Jaccard coefficients of our approach and other methods when all test images from the database are segmented. From this table, by quantitative comparison, the proposed method achieved the most accurate results.

4 Conclusion
In this work, we proposed an upgraded Active Contour loss functional and a variant of it that can be combined with deep neural networks. The experimental results on the MICCAI Challenge dataset showed a marked improvement in semantic segmentation compared to state-of-the-art alternatives. As an overall framework, our proposed loss functionals are expected to be useful beyond medical image segmentation applications. In future work, we plan to investigate the potential of our method for general semantic segmentation tasks.
Acknowledgements This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.05-2018.302.

References 1. B. Jähne, H. Haußecker, Computer vision and applications (2000) 2. T. Zhou, S. Ruan, S. Canu, A review: deep learning for medical image segmentation using multi-modality fusion. Array 3, 100004 (2019) 3. D. Mumford, J. Shah, Optimal approximations by piecewise smoothfunctions and associated variational problems. Commun. Pure Appl. Math. 42(5), 577–685 (1989) 4. T. Chan, L. Vese, Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)


5. A. Chambolle, T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40(1), 120–145 (2011) 6. A. Sinha, J. Dolz, Multi-scale self-guided attention for medical image segmentation. IEEE J. Biomed. Health Inform. (2020) 7. Z. Zhou, M.M.R. Siddiquee, N. Tajbakhsh, J. Liang, Unet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 39(6), 1856– 1867 (2019) 8. R. Azad, M. Asadi-Aghbolaghi, M. Fathy, S. Escalera, Bi-directional ConvLSTM U-net with Densley connected convolutions, in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019 9. O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in International Conference on Medical Image Computing and ComputerAssisted Intervention (Springer, 2015), pp. 234–241 10. S.S.M. Salehi, D. Erdogmus, A. Gholipour, Tversky loss function for image segmentation using 3D fully convolutional deep networks, in International Workshop on Machine Learning in Medical Imaging (Springer, 2017), pp. 379–387 11. T.T. Tran, T.-T. Tran, Q.C. Ninh, M.D. Bui, V.-T. Pham, Segmentation of left ventricle in short-axis MR images based on fully convolutional network and active contour model, in International Conference on Green Technology and Sustainable Development (Springer, 2020), pp. 49–59 12. S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, Y. Bengio, The one hundred layers tiramisu: fully convolutional densenets for semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 11–19 13. D. Jha, M. Riegler, D. Johansen, P. Halvorsen, H. Johansen, Doubleu-net: a deep convolutional neural network for medical image segmentation, in IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), 2020, pp. 558–564 14. X. Chen, B.M. Williams, S.R. Vallabhaneni, G. Czanner, R. Williams, Y. Zheng, Learning active contour models for medical image segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 11632–11640 15. S.R. Hashemi, S.S.M. Salehi, D. Erdogmus, S.P. Prabhu, S.K. Warfield, A. Gholipour, Asymmetric loss functions and deep densely-connected networks for highly-imbalanced medical image segmentation: Application to multiple sclerosis lesion detection. IEEE Access 7, 1721–1735 (2018) 16. V.T. Pham, T.T. Tran, P.C. Wang, M.T. Lo, Tympanic membrane segmentation in otoscopic images based on fully convolutional network with active contour loss. Signal Image Video Process. https://doi.org/10.1007/s11760-020-01772-7 (2020) 17. S. Gur, L. Wolf, L. Golgher, P. Blinder, Unsupervised microvascular image segmentation using an active contours mimicking neural network, in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 10722–10731 18. B. Kim, J.C. Ye, Mumford-Shah loss functional for image segmentation with deep learning. IEEE Trans. Image Process. 29, 1856–1866 (2019) 19. T.F. Chan, L.A. Vese, Image segmentation using level sets and the piecewise-constant Mumford-Shah model, in Tech. Rep. 0014, Computational Applied Math Group 2000. Citeseer 20. V.-T. Pham, T.-T. Tran, Active contour model and nonlinear shape priors with application to left ventricle segmentation in cardiac MR images. Optik 127(3), 991–1002 (2016) 21. M. Tan, Q.V. Le, Efficientnet: rethinking model scaling for convolutional neural networks. arXiv:1905.11946 (2019) 22. B. 
Xu, N. Wang, T. Chen, M. Li, Empirical evaluation of rectified activations in convolutional network. arXiv:1505.00853 (2015) 23. K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778 24. A. Tato, R. Nkambou, Improving adam optimizer (2018)


25. O. Bernard, A. Lalande, C. Zotti, F. Cervenansky, X. Yang, P.-A. Heng, I. Cetin, K. Lekadir, O. Camara, M.A.G. Ballester, Deep learning techniques for automatic MRI cardiac multistructures segmentation and diagnosis: is the problem solved? IEEE Trans. Med. Imaging 37(11), 2514–2525 (2018) 26. P.V. Tran, A fully convolutional neural network for cardiac segmentation in short-axis MRI. arXiv:1604.00494 (2016) 27. V. Badrinarayanan, A. Kendall, R. Cipolla, Segnet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481– 2495 (2017)

Survey on Energy Efficient Approach for Wireless Multimedia Sensor Network Mayur Bhalia and Arjav Bavarva

Abstract With the advancement of WSNs, wireless multimedia sensor networks (WMSNs) are used for acquiring multimedia data such as images, audio and video streams, as well as scalar data, and transmitting them to the receiver end. Energy is the most critical factor in a wireless sensor network. In this paper, we compare the different energy-efficient techniques that have been proposed for multimedia communication in energy-constrained wireless multimedia sensor networks. Keywords Wireless sensor network (WSN) · Wireless multimedia sensor network (WMSN) · Energy efficient · Multimedia data

1 Introduction
With the remarkable advancement of plentiful technologies, including wireless sensors, actuators and embedded computing, the scenario of wireless data collection has changed and become better suited to society's needs. Even though these sensors are compact, they are capable of sensing data from the environment, processing and aggregating that data, and then communicating with each other using wireless channels such as an RF (radio frequency) channel [1]. WMSNs have many multimedia applications, which consume a massive amount of energy. Energy is a crucial resource for the quick and precise performance of WMSNs.

M. Bhalia (B) Ph.D. Scholar, Faculty of Technology, RK University, Rajkot, Gujarat, India A. Bavarva Associate Professor, Department of Electrical Engineering, RK University, Rajkot, Gujarat, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_3


2 Literature Review
In [LR-1], Arjav [2] and co-authors identified parameters such as higher data rate, lower energy consumption, consistency/reliability, signal detection/estimation, variation in network topologies, quality of service (QoS) and security/privacy for developing any application based on WMSNs. The authors focused on improving energy consumption as well as QoS. As they note, the three stages consuming the most energy are data transmission, data reception and data processing. To address this energy consumption, MIMO properties are used in combination with compressive sensing (CS) techniques [3]. As shown in Fig. 1, multimedia data are acquired by the multiple multimedia sensor nodes and then forwarded to the data compression stage inside each sensor node. There, the multimedia data are compressed using the CS algorithm so that the size of the data to be transmitted is reduced. CS technology is basically divided into two types: (1) the joint compressive sensing technique and (2) the distributed compressive sensing technique. Nevertheless, the paper notes that if the signal is not sparse, it is transformed into a sparse domain without loss of any signal information [4]. For the experiment, the authors set the parameters in the MATLAB simulator as shown in Table 1. They took a 50 × 50 rice image and, measuring the peak signal-to-noise ratio (PSNR) in the MATLAB simulator, achieved the results shown in Fig. 2. The simulation with 250 measurements shows a PSNR value of about 20.40 dB for an image size of 50 × 50 and a PSNR of 17 dB for an image size of 100 × 100. Hence, it indicates the interplay between PSNR and the number of measurements for better QoS.

Fig. 1 System model based on MIMO and CS

Table 1 MATLAB experiment setup

Parameters                     Type/value
Operating frequency            2.4 GHz
Channel type                   Rayleigh multipath fading channel
Modulation technique           QPSK
No. of transmitting antennas   2
No. of receiving antennas      4
Samples per frame              48

Fig. 2 Result by taking 50 × 50 size rice image
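To make the compressive-sensing step concrete, the following small self-contained example (not the authors' MATLAB pipeline) senses a k-sparse signal of length 2500 — the size of a 50 × 50 image in a sparse basis — with 250 random measurements and recovers it by orthogonal matching pursuit; the sparsity level k is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 2500, 250, 20                   # 50 x 50 image, 250 measurements, k-sparse
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # sparse representation
phi = rng.normal(size=(m, n)) / np.sqrt(m)                    # measurement matrix
y = phi @ x                                                   # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(phi, y)
x_hat = omp.coef_                                             # recovered signal
mse = np.mean((x - x_hat) ** 2)
psnr = 10 * np.log10(np.abs(x).max() ** 2 / mse)
print(f"reconstruction PSNR: {psnr:.1f} dB")
```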

In [LR-2], Shahin [5] and team focused on the problem of energy consumption, which degrades the performance of WMSNs. The authors developed an energy-efficiency-oriented protocol called Energy Efficient Congestion Avoidance (EECA), with which data (video) loss is suppressed/controlled at the transport layer [6, 7]. The authors characterized two types of protocols: (1) congestion control and (2) rate adjustment. The EECA protocol provides higher video quality compared to others. The authors compared their work with RA-SVBR [8], an advanced version of SVBR with a variable data rate. To check EECA's performance, they used Network Simulator-2 (NS-2) together with Evalvid-RA [8] and AODV [9]. EECA gives good results, and energy consumption is reduced by 5%, as shown in Fig. 3. In [LR-3], Tamal Pal et al. [10] introduced a different method that reduces the block size of the image data to be transmitted, while E. Sun et al. [11] introduced LEICA and S. Rein et al. [12] introduced the fractional wavelet filter method.


Fig. 3 Energy consumption in joule

Begin
1:  for i = 0 to m-1 do
2:    for j = 0 to n-1 do
3:      for k = 0 to p-1 do
4:        Store 16 intensity values of the image in array CH[]
5:      end for
6:      Set BOX[x] = CH[0]
7:      Set x = x+1
8:      Set BOX[x] = CH[2]
9:      Set x = x+1
10:     Set BOX[x] = CH[8]
11:     Set x = x+1
12:     Set BOX[x] = CH[10]
13:     Set x = x+1
14:     if size of BOX == available space in packet Then
          SEND BOX as a packet
          Set x = 0
15:   end for
16: end for
17: end
Pseudo-code 1: Pixel-intensity extracting code
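Read literally, steps 6–13 keep the four intensities CH[0], CH[2], CH[8] and CH[10] of each block of 16 values, i.e., every other pixel in each direction of a 4 × 4 block. A hedged Python rendering follows (the block geometry and packet handling are our assumptions):

```python
import numpy as np

def subsample_blocks(img, packet_size=64):
    """Keep CH[0], CH[2], CH[8], CH[10] of each 4 x 4 block (cf. Pseudo-code 1)."""
    h, w = img.shape
    box, packets = [], []
    for i in range(0, h - 3, 4):
        for j in range(0, w - 3, 4):
            ch = img[i:i + 4, j:j + 4].ravel()       # the 16 intensity values
            box.extend([ch[0], ch[2], ch[8], ch[10]])
            if len(box) >= packet_size:              # packet full: send and reset
                packets.append(np.array(box[:packet_size], dtype=img.dtype))
                box = box[packet_size:]
    return packets
```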

As shown in Pseudo-code 1, the authors use nested for loops to read 16 intensity values of each image block into memory and retain four of them. They used the specifications of the MICAz mote's 8-bit microcontroller [13, 14] and the first-order radio model [15], with an energy dissipation (E_elec) of 50 nJ/bit, a power consumption of 3.5 nJ/cycle and an operating frequency of 7.37 MHz. The authors calculated the theoretical operational cycles the MICAz mote requires to execute the algorithm as per the ATmega-128L instruction set. By identifying the total size of memory used, the authors calculated the storage overhead.


Fig. 4 Reconstructed image

The Cooja simulator of Contiki-OS was used for the simulation results. As per the authors, the acceptable wireless communication quality loss is nearly 25 dB [16]. The storage overhead is theoretically calculated as 42 bytes using the proposed scheme, versus 109 bytes using TiBS. The original and reconstructed images are shown in Fig. 4. By reducing the block size of the images, this scheme gives a more energy-efficient throughput. In [LR-4], Ilkyu [17] and his research team addressed the major problem of long-distance data transmission. Zhao and Yang [18] used polling-technique-based data aggregation with a mobile sensor node; in Wang et al. [19] the whole network is separated into multiple clusters, each with one cluster head; Gao et al. [20] increased the buffer size and used a mobile sink node moving along a fixed route at constant speed; and finally Konstantopoulos et al. [21] used the Mobi-Cluster protocol. Here, neighbouring-density clustering is applied: Fig. 5a shows the sensor nodes before clustering, and Fig. 5b shows the merging of nodes. With the help of clustering, inter-node communication decreases by localizing data transfer within the same cluster, as shown in Fig. 5b. After calculating the distance of each node, the central location is identified, as are the lower-degree nodes, as per Fig. 6. After defining the data-gathering points, connections between the odd-degree nodes of the MST are established, and finally the route for the mobile sink node is defined. For data gathering, the mobile sink node starts by broadcasting a "hello!" message to all sensor nodes of a cluster. When a sensor node receives that message, it checks the identification number. A time-slot-based (TDMA) technique


Fig. 5 a Sensor nodes with their degrees, b nodes are merged with the higher degree node

is used here, in which every identified cluster member is scheduled with an individual time slot. After designing the protocol, the simulation was completed with Network Simulator-2 (NS-2), with the authors setting the parameters listed in Table 2. Figure 7 shows the improved results. Finally, the authors compared their experimental results with those of Heinzelman et al. [22] and Wang et al. [19], whose protocols are LEACH and the mobile sink at the edge


Fig. 6 The final result achieved

Table 2 Experimental parameter setting

Parameters             Values
Simulation area        250 m × 250 m
Number of sensors      100
Speed of mobile sink   5 m/s
E_init                 1 J
E_elec                 5 nJ/bit
E_fs                   10 pJ/bit/m²
E_amp                  0.0013 pJ/bit/m⁴
MAC protocol           802.11
Channel model          Wireless channel
Routing protocol       AODV

of the area, respectively. Goyal et al. [23] and Singh et al. [24] explored WSNs in detail and deployed metaheuristics for improving performance. The comparative figure (Fig. 7) shows the performance graph of all three methods, including the authors' technique. The authors' method shows better results compared to LEACH and the mobile-sink-at-the-edge-of-area method. In the other two methods, the cluster head is not centred; thus, the transmission distance between cluster members and the cluster head becomes greater, resulting in more energy consumption.
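The energy figures in these comparisons follow the standard first-order radio model; a sketch using the Table 2 values is given below. The crossover distance d0 = sqrt(E_fs/E_amp) is the usual convention for this model, not a value stated in the paper.

```python
E_ELEC = 5e-9        # J/bit (Table 2)
E_FS = 10e-12        # J/bit/m^2, free-space amplifier
E_AMP = 0.0013e-12   # J/bit/m^4, multipath amplifier
D0 = (E_FS / E_AMP) ** 0.5  # crossover distance, ~87.7 m with these values

def tx_energy(k, d):
    """Energy to transmit k bits over distance d metres."""
    amp = E_FS * d ** 2 if d < D0 else E_AMP * d ** 4
    return E_ELEC * k + amp * k

def rx_energy(k):
    """Energy to receive k bits."""
    return E_ELEC * k
```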


Fig. 7 Comparison of energy consumption

3 Comparative Analysis
See Table 3.

Table 3 Comparison of all LR (literature review)

Para          LR.1                      LR.2                          LR.3                          LR.4
Tech.         MIMO                      EECA (decreased               SAPR (image                   Mobile sink node
                                        quantization rate)            sub-sampling)
Used          B&W image                 Video                         B&W image                     Multimedia
Simulator     MATLAB                    NS-2                          Cooja                         NS-2
Focused       PSNR, energy              Packet/frame loss, energy     PSNR, energy                  Energy
Achieved      17 dB (100 × 100 image)   Packet loss 15% less,         25.06 dB (250 × 250 image)    Network lifetime increase
                                        frame loss 10% less,
                                        energy loss 5% less
Future scope  Design and testing for    Improve video quality         Improve the PSNR further      Increase mobility of sink
              colour image, video       further


4 Conclusion
Energy conservation is one of the biggest challenges in wireless multimedia sensor networks, especially for multimedia data transfer. Researchers are developing various techniques to minimize energy consumption while maintaining good multimedia data quality. Here, we have compared several techniques for multimedia data transfer and conclude that the EECA technique is much better in terms of packet loss and energy conservation.

References 1. E. Tsiontsiou, Multi-constrained QoS Routing and Energy Optimization for Wireless Sensor Networks. Networking and Internet Architecture. Université de Lorraine, 2017. English. NNT: 2017LORR0340. HAL Id: tel-01735239. Submitted: 15/3/2018 https://tel.archives-ouvertes. fr/tel-01735239 2. B. Arjav J. Preetida, G. Komal, Performance improvement of wireless multimedia sensor networks using MIMO and compressive sensing. J. Commun. Inf. Netw. 3(1) (2018). https:// doi.org/10.1007/s41650-018-0011-8 3. N. Eslahi, A. Aghagolzadeh, S. Andargoli, Image/video compressive sensing recovery using joint adaptive sparsity measure. Neurocomputing 200(3), 88–109 (2016) 4. F. Salahdine, N. Kaabouch, H.E. Ghazi, A survey on compressive sensing techniques for cognitive radio networks. Phys. Commun. 20(9), 61–73 (2016) 5. M. Shahin, H. Vahid, K. Mohammad, EECA—energy efficient congestion avoidance in wireless multimedia sensor network, in 6 th IEEE International Symposium on Telecommunications (IST’2012). IEEE. 978-1-4673-2073-3/12©2012 6. S. Mahdizadeh Aghdam, M. Khansari, H. Rabiee, M. Salehi, UDDP: a user datagram dispatcher protocol for wireless multimedia sensor networks, in Proceedings of the 9th IEEE International Conference on Consumer Communications and Networking Conference (CCNC), 2012, pp. 765–770. https://doi.org/10.1109/CCNC.2012.6181161 7. M. Vuran, I. Akyildiz, XLP: a cross-layer protocol for efficient communication in wireless sensor networks. IEEE Trans. Mobile Comput. 9(11), 1578–1591 (2010). https://doi.org/10. 1109/TMC.2010.125 8. A. Lie, J. Klaue, Evalvid-RA: trace driven simulation of rate adaptive MPEG-4 VBR video. Multimedia Syst. 14(1), 3350 (2008) 9. C.E. Perkins, E.M. Royer, Ad-hoc on demand distance vector routing, in Second IEEE Workshop on Mobile ComputingSystems and Applications, 1999 Proceedings, WMCSA 99, 1999, pp. 90–100 10. P. Tamal, B. Shaon, D. Sipra, Energy-saving image transmission over WMSN using block size reduction technique, in IEEE International Symposium on Nano-electronic and Information Systems. IEEE. 978-1-4673-9692-9/15©2015. https://doi.org/10.1109/iNIS.2015.19 11. E. Sun, X. Shen, H. Chen, A low energy image compression and transmission in wireless multimedia sensor networks. Proc. Eng. 15, 3604–3610 (2011) 12. S. Rein, M. Reisslein, Performance evaluation of the fractional wavelet filter: a low-memory image wavelet transform for multimedia sensor networks. Ad Hoc Netw. 9, 482–496 (2011) 13. L. Uhsadel, Comparison of low-power public key cryptography on MICAz 8-bit micro controller. Diploma Thesis, Ruhr-University Bochum, Apr 2007 14. G. Meulenaer, F. Gosset, F.-X. Standaert, L. Vandendorpe, On the energy cost of communication and cryptography in wireless sensor networks, in Proceedings of IEEE International Conference on Wireless and Mobile Computing, 2008, pp. 580–585


15. P. Kugler, P. Nordhus, B. Eskofier, Shimmer, Cooja and Contiki: a new toolset for the simulation of on-node signal processing algorithms, in Proceedings of International Conference on Body Sensor Networks, 2013, pp. 1–6 16. http://emanuelecolucci.com/2011/04/image-and-video-quality-assessmentpart-one-mse-psnr 17. I. Ha, M. Djuraev, B. Ahn, An energy-efficient data collection method for wireless multimedia sensor networks. Int. J. Distrib. Sensor Netw. 2014(698452), 8 (2014). https://doi.org/10.1155/ 2014/698452 18. M. Zhao, Y. Yang, Bounded relay hop mobile data gathering in wireless sensor networks. IEEE Trans. Comput. 61(2), 265–277 (2012) 19. J. Wang, Y. Yin, J.-U. Kim, S. Lee, C.-F. Lai, A mobile sink based energy-efficient clustering algorithm for wireless sensor networks, in Proceedings of the 12th IEEE International Conference on Computer and Information Technology (CIT’12), pp. 678–683, Chengdu, China, Oct 2012 20. S. Gao, H. Zhang, S.K. Das, Efficient data collection in wireless sensor networks with pathconstrained mobile sinks. IEEE Trans. Mobile Comput. 10(4), 592–608 (2011) 21. C. Konstantopoulos, G. Pantziou, D. Gavalas, A. Mpitziopoulos, B. Mamalis, A rendezvousbased approach enabling energy efficient sensory data collection with mobile sinks. IEEE Trans. Parallel Distrib. Syst. 23(5), 809–817 (2012) 22. W. Heinzelman, H. Balakrishnan, A. Chandrakasan, Low energy adaptive clustering hierarchy, in Proceedings of Hawaii International Conference on System Science, Jan 2000 23. A. Goyal, V.K. Sharma, S. Kumar, RC P RC, “Hybrid AODV: An efficient routing protocol for Manet using MFR and firefly optimization technique. J. Interconnection Netw. (2021). https:// doi.org/10.1142/S0219265921500043 24. A.P. Singh, A.K. Luhach, X.Z. Gao, S. Kumar, D.S. Roy, Evolution of wireless sensor network design from technology centric to user centric: an architectural perspective. Int. J. Distrib. Sensor Netw. 16(8) (2020). https://doi.org/10.1177/1550147720949138

Effectual Accuracy of Ophthalmological Image Retinal Layer Segmentation Praveen Mittal and Charul Bhatnagar

Abstract The field of ophthalmology plays a key role in the diagnosis of eye diseases and is expanding rapidly. Many eye-related diseases can be detected just by examining images of the retinal blood vessels. It is a difficult task for doctors to identify the blood vessels in a given eye image by themselves without the help of technology. This paper details the steps performed to automate the above-mentioned problem and highlight the blood vessels in a provided image using machine learning. Many supervised and unsupervised machine learning algorithms are available to perform this task; those applied here are SVM (support vector machine), K-NN (K-nearest neighbor), and decision tree. Keywords Segmentation · Retina · OCT · Blood vessels

P. Mittal (B) · C. Bhatnagar, Department of Computer Engineering and Applications, GLA University, Mathura, India. e-mail: [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_4

1 Introduction
Most medical diseases related to the eye can be diagnosed by an ophthalmologist just by observing the retinal images [1]. The whole process can be entirely automated by machine learning applications [2]. Segmenting the blood vessels from retinal images can be cast as a classification problem in which the pixels containing blood vessels are labelled positive and the remaining pixels are labelled negative. Several features are available for this task, such as the RGB value of each pixel, the pixel location, overall curvature, and shading. The algorithms most suitable for this task are SVM (support vector machine), K-NN (K-nearest neighbor), and decision tree [3].


Fig. 1 Unprocessed retinal scan [12]

The main issue with previous approaches to this problem was that they were not able to predict all the vessels present in the eye, and the accuracy was not good. The challenge was to build on the previous work and increase the accuracy and other performance measures. Gupta et al. [4, 5] used facial images for identification of various important features, while Yadav et al. [6] used video images for this purpose. Sharma et al. [7, 8] enhanced image quality with thresholding and entropy-based approaches. Recently, Singh et al. [9], Bhatnagar et al. [10] and Kumari et al. [11] deployed machine learning techniques in medical imaging.

2 Data
The fundus image data are publicly available at the Clemson University site and also on Kaggle (Fig. 1). The image set contains the exact locations of blood vessels: for each training image, a corresponding label image is provided containing only the positive pixels, drawn by experts. The algorithms are applied on the assumption that the drawn image contains the precise locations of the pixels holding blood vessels. The data set holds thirty images with their associated labels (Fig. 2).

3 Features
The following set of features is generated for every pixel using the scanned retinal images provided in the data set. This feature set is selected to get the maximum effect from the algorithm models while limiting the number of features used


Fig. 2 Expert drawing used for training label [13]

and allowing faster calculation. Image processing was performed on the raw images to remove as much noise as possible. During image processing, the images' dimensions are first modified; then every image goes through Gaussian and grayscale filters, and illumination effects are removed. These operations are performed to reduce noisy features in the matrices extracted from the provided retinal images up to a certain level. On these modified images, several features are computed to verify the location of the blood vessels in the image. The three selected features proved most suitable and effective in providing accurate results. Grayscale Intensity—Colored images usually carry noise due to image acquisition and transmission. Converting the three RGB channels to a single grayscale channel gives noise reduction and enhancement of image quality [14]. Pixel inclination—Calculating the plane perpendicular to the direction of shade change helps to find the outer edges in the retinal image. Equations 1 and 2 characterize the magnitude and direction of the shade change in the particular layer [15]:

$$|\nabla F| = \sqrt{(F_x)^2 + (F_y)^2} \qquad (1)$$

3.1 Second Derivative Approach
The derivative approach helps to identify the shape and direction of the retinal layers, Eq. 2:

$$\gamma_{+} = \frac{1}{2}\left( F_{xx} + F_{yy} + \sqrt{(F_{xx} - F_{yy})^2 + (2F_{xy})^2} \right) \qquad (2)$$
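A hedged NumPy sketch of the three per-pixel features follows; plain finite differences stand in for whatever derivative filters the authors actually used.

```python
import numpy as np

def pixel_features(gray):
    """Per-pixel features: intensity, gradient magnitude (Eq. 1),
    and the largest Hessian eigenvalue (Eq. 2)."""
    fy, fx = np.gradient(gray.astype(float))       # first derivatives
    grad_mag = np.sqrt(fx ** 2 + fy ** 2)          # Eq. (1)
    fyy, _ = np.gradient(fy)                       # second derivatives
    fxy, fxx = np.gradient(fx)
    gamma = 0.5 * (fxx + fyy
                   + np.sqrt((fxx - fyy) ** 2 + (2 * fxy) ** 2))  # Eq. (2)
    return np.stack([gray, grad_mag, gamma], axis=-1)  # (H, W, 3) feature map
```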


The resized image used for differentiation contains 235,400 pixels [16]. Each of those pixels has an associated grayscale intensity, gradient magnitude, and maximum eigenvalue, so that every one of the 20 retinal scans was represented by 423,500 separate 3 × 1 feature vectors.

3.2 K-NN Analysis
For the detection of retinal blood vessels, K-NN analysis was implemented. We created the feature matrix of the images in the previous section [17]. As the data are very large, it takes a lot of time to fit them to the model, and the data for the negative class are much more numerous than for the positive class. So, to reduce the size and balance the data, we use a random subset of approximately 10% of the complete dataset, and then we scale our data using the min–max scaling method:

$$X_{new} = \frac{X_i - X_{min}}{X_{max} - X_{min}} \qquad (3)$$

We use the cross-validation approach to find the value of k, and Euclidean distance for the distance calculation.

3.3 SVM Analysis
As in the previous analysis, we use the same strategy to reduce and balance the dataset [18], but this time we use only 2% of the total dataset. We use the built-in classes of the sklearn module to implement the SVM algorithm, with the kernel set to "rbf."

3.4 Decision Tree Analysis
As the decision tree is computationally less expensive, here we use the complete dataset, and for scaling we use the same min–max scaling [19]. The main drawback of a decision tree is overfitting of the data, so to prevent the model from overfitting we limit the height of the tree to 10.
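A hedged scikit-learn sketch of the three pixel classifiers of Sects. 3.2–3.4 is given below; the feature matrix `X` and label vector `y` are assumed to have been built as in Sect. 3, and k = 5 for K-NN is an illustrative choice, since the paper selects k by cross-validation.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# X: (n_pixels, 3) features; y: 0/1 vessel labels (assumed built beforehand).
X_scaled = MinMaxScaler().fit_transform(X)          # Eq. (3)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, test_size=0.25, stratify=y)

models = {
    "K-NN": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    "SVM": SVC(kernel="rbf"),
    "D-tree": DecisionTreeClassifier(max_depth=10),  # tree height limited to 10
}
for name, clf in models.items():
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```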


Table 1 Confusion matrix for different algorithms

       K-NN             SVM              D-tree
       0 (%)    1 (%)   0 (%)    1 (%)   0 (%)    1 (%)
0      88.1     3.2     89.3     3.2     90.2     3.2
1      2.0      6.7     2.4      6.4     2.8      6.7

4 Results We have 20 images in our dataset, of which the first 15 are used for training and the remaining 5 for testing. The accuracy for the K-NN algorithm is 94.4%, for SVM 95.7%, and for D-tree 95.9%. Table 1 shows the confusion matrix for all three models. Based on these results alone we cannot say which model is best, because it depends on the requirements. If we want higher accuracy, we go for the decision tree, but if we are more concerned about correctly classifying the pixels that belong to the vessel class (as the recall is approximately 76%), then K-NN performs better than the decision tree or SVM. The figure below shows the predicted output images for the test dataset.

5 Conclusion In conclusion, the goal of this research is to work toward a fully automated detection process for various eye diseases. The modified k-NN algorithm is slightly less accurate overall, but it is worth pursuing because it does not need a training set. The modified k-NN algorithm takes a relatively long time to run on each image, which might not be practical in the field. For this reason, we used the SVM and decision tree algorithms, which gave us better results, although a different kind of unsupervised algorithm could be applied to increase speed and accuracy. The results of the proposed method achieve better performance in detecting the true vessels. The proposed system achieved average vessel segmentation accuracies on the DRIVE dataset of 94.42% with k-NN (K-nearest neighbor), 95.75% with SVM (support vector machine), and 95.88% with D-tree (decision tree), measured against the corresponding ground truth images.

6 Future Work To improve upon the SVM, a neural network scheme for pixel classification might be a good direction to pursue, given such algorithms' success in various other image processing fields. It would also be worthwhile to further modify the applied SVM or apply a different kind of unsupervised algorithm to increase speed and accuracy. Also,


more work could be done to refine the features used to improve the accuracy of all the models. When this work is more developed, the SVM, modified k-NN, and decision tree algorithms can be modified to yield more applicable results.

References 1. M.B. Wankhade, A.A. Gurjar, Analysis of disease using retinal blood vessels detection. IJECS 05 (2016) 2. S. Joshi, P.T. Karule, Retinal blood vessel segmentation. IJEIT 1(3) (2012) 3. H. Archna Sharma, Detection of blood vessels and diseases in human retinal images. Int. J. Comput. Sci. Commun. Eng. IJCSCE Emerg. Trends Eng. Manage. IECTE 2013 4. R. Gupta, S. Kumar, P. Yadav, S. Shrivastava, Identification of age, gender, & race SMT (scare, marks, tattoos) from unconstrained facial images using statistical techniques, in 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE) (IEEE, 2018, July), pp. 1–8 5. R. Gupta, P. Yadav, S. Kumar, Race identification from facial images using statistical techniques. J. Stat. Manage. Syst. 20(4), 723–730 (2017) 6. P. Yadav, R. Gupta, S. Kumar, Video image retrieval method using dither-based block truncation code with hybrid features of color and shape, in Engineering Vibration, Communication and Information Processing (Springer, Singapore, 2019), pp. 339–348 7. A. Sharma, R. Chaturvedi, S. Kumar, U.K. Dwivedi, Multi-level image thresholding based on Kapur and Tsallis entropy using firefly algorithm. J. Interdisci. Math. 23(2), 563–571 (2020) 8. A. Sharma, R. Chaturvedi, U.K. Dwivedi, S. Kumar, S. Reddy, Firefly algorithm based effective gray scale image segmentation using multilevel thresholding and entropy function. Int. J. Pure Appl. Math. 118(5), 437–443 (2018) 9. V. Singh, R.C. Poonia, S. Kumar, P. Dass, P. Agarwal, V. Bhatnagar, L. Raja, Prediction of COVID-19 corona virus pandemic based on time series data using support vector machine. J. Discr. Math. Sci. Cryptogr. 23(8), 1583–1597 (2020). https://doi.org/10.1080/09720529.2020.1784535 10. V. Bhatnagar, R.C. Poonia, P. Nagar, S. Kumar, V. Singh, L. Raja, P. Dass, Descriptive analysis of COVID-19 patients in the context of India. J. Interdisci. Math. 24(3), 489–504 (2020). https://doi.org/10.1080/09720502.2020.1761635 11. R. Kumari, S. Kumar, R.C. Poonia, V. Singh, L. Raja, V. Bhatnagar, P. Agarwal, Analysis and predictions of spread, recovery, and death caused by COVID-19 in India. Big Data Min. Anal. 4(2), 65–75 (2021). https://doi.org/10.26599/BDMA.2020.9020013 12. S.M. Zabihi, H.R. Pourreza, T. Banaee, Vessel extraction of conjunctival images using LBPs and ANFIS. Int. Scholarly Res. Netw. ISRN Mach. Vis. 2012(424671), 6 (2012) 13. J. Kaur, H.P. Sinha, An effective blood vessel detection algorithm for retinal images using local entropy thresholding. (IJERT) 1(4) (2012). ISSN: 2278-0181 14. A.G. Karegowda, A. Nasiha, M.A. Jayaram, A.S. Manjunath, Exudates detection in retinal images using backpropagation neural network 25(3) (2011) 15. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imag. 8(3), 263–269 (1989) 16. P. Mittal, C. Bhatnagar, Detecting outer edges in retinal OCT images of diseased eyes using graph cut method with weighted edges. J. Adv. Res. Dyn. Control Syst. 12(3), 943–950 (2020) 17. P. Mittal, C. Bhatnagar, Automatic classification of retinal pathology in optical coherence tomography scan images using convolutional neural network. J. Adv. Res. Dyn. Control Syst. 12(3), 936–942 (2020)


18. P. Mittal, C. Bhatnagar, Automatic segmentation of pathological retinal layer using an eikonal equation, in 11th International Conference on Advances in Computing, Control, and Telecommunication Technologies (ACT, 2020), pp. 43–49 19. P. Mittal, C. Bhatnagar, Automatic segmentation of outer edges of retinal layers in OCT scan images using Eikonal equation. J. Phys. Conf. Ser. 1767(1), 012045 (2021)

Performance Assessment of K-Nearest Neighbor Algorithm for Classification of Forest Cover Type Pratibha Maurya and Arvind Kumar

Abstract Natural resources, particularly soil, water, forest, animal diversity, and climate, are vital for our ecosystem's structure and function. The forest, an essential natural resource, has a significant role in maintaining important geochemical and bioclimatic events. A profound understanding of forest composition can help manage the health and life of these wilderness areas and has a high impact on humankind. Due to this, forest cover type classification has always been an area of attention for researchers. Machine learning-based classifiers perform well in predicting forest cover types. This paper explores the importance of natural resource management and the capability of predictive models in implementing it. The article studies the nonparametric K-nearest neighbor (KNN) algorithm explicitly and assesses its effectiveness as a machine learning classifier on the UCI forest cover type dataset with 54 attributes. This forest cover type (FC) dataset is publicly available at the UCI Knowledge Discovery in Database (KDD) Archive. This work evaluates the experimental results on various performance parameters like accuracy, precision, recall, and F1 score. These results are also compared with other research work available in the literature. The KNN achieves an accuracy of 97.09%, which is significantly better than the 70.58% of the original work available on the UCI repository. The obtained results also show much improvement over similar works that exist in the literature. Keywords Natural resource management · Predictive modeling · Forest cover type · Classification · K-nearest neighbor

P. Maurya AIIT, Amity University, Uttar Pradesh, Lucknow Campus, Lucknow, India e-mail: [email protected] A. Kumar (B) Bennett University, TechZone II, Greater Noida, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_5


1 Introduction Natural resources have a very significant role in the existence of humanity. These resources come to us in the form of sunlight, water, air, vegetation, land, animals, and forests. The demand for these natural resources is increasing at a rapid rate while their availability is diminishing. There is a need to develop procedures and management strategies that combine both progressive efforts and conservation of natural resources [1, 2]. These strategies would undoubtedly improve, preserve, and shield the resources present in our natural environment for the benefit of all. Natural resources are limited and can be destroyed if not consumed in a planned way. The mode of utilization will determine which resources remain available to us in the future. The main reasons that can be held responsible for this are the rapid increase in population and the expectation of a high standard of living. So, to manage these natural resources sustainably, environmental education is a must. In general, forest cover is a category of terrestrial land cover and refers to the area enclosed by forest canopy or open woodland. This information is vital for natural resource managers in developing ecosystem management plans [3, 4]. Accurate monitoring of forest cover and land quality is crucial to estimate the costs of deforestation, changes in the hydrological cycle, causes of changes in biodiversity, the global carbon cycle, and supporting geochemical and climatological cycles. The essential step for forest planning and management is to map forest variables and their allied characteristics correctly. This mapping significantly increases the precision and accuracy of forest estimates. Designing these plans requires descriptive knowledge about inventory data for forested areas. There exist two methods to obtain cover type data. The data can either be recorded directly by field personnel or estimated using available multisource remotely sensed data and geographic data. The data collection procedures are time-consuming and may even be costly in some situations. At times, inventory information for adjoining lands that fall outside natural resource managers' jurisdiction is quite useful, yet it is legally and economically impossible to collect this data using advanced technological methods [5]. To develop a complete management system, descriptive and elaborate data on forested land are a must. The absence of such data will affect the understanding and decision-making of the managers. An excellent technique to resolve this issue is using predictive models that can complement the methods mentioned above for obtaining such data. These predictive models can be applied to the manager's data to recognize patterns and make precise predictions for regions not coming under their jurisdiction. Predictive modeling techniques can be easily implemented and incur a low cost. These techniques are based either on statistical modeling or on machine learning approaches. This paper discusses the classification of forest cover types. The dataset is publicly available on the UCI repository, and different researchers have proposed different approaches in their research work. This research's significant contribution is to examine the K-nearest neighbor (KNN) algorithm's ability to predict


forest cover type classes in forested areas. We also measure KNN's classification performance by comparing our experimental results with the results of other works in the literature that use the forest cover type dataset. The paper is organized as follows: the literature review is covered in Sect. 2, while Sect. 3 gives a broad description of the data set used in the study and formalizes the problem statement. Section 4 introduces the KNN machine learning algorithm. Implementation details are discussed in Sect. 5 along with results, and Sect. 6 concludes this research work.

2 Literature Review In association with the US Geological Survey organization, the US Forest Service is deeply involved in analyzing forestry data collected from various forest areas throughout the USA [6–8]. Such classification deals with predicting the type of trees present in a small forest area by using forest variables like soil, sunlight, elevation, and hydrologic data [9]. This forest cover type classification can be further useful in predicting forest fire susceptibility, deforestation apprehensions, or the spread of Mountain Pine Beetle infestation. The cover type data used in the study consist of wilderness areas of Roosevelt National Forest, located in the Front Range of northern Colorado. This dataset is publicly accessible on the UCI repository [10] and has been used in this study. The forest cover type (FC) dataset was first introduced by Blackard et al. in their research article in 1999 [11]. They achieved a prediction accuracy of 70.58%. They analyzed the artificial neural network (ANN) model's abilities by first predicting the forest cover type classes and then evaluating the prediction accuracy of their model by comparing the results with the observed cover types. To obtain better accuracy, the authors also used a statistical model based on the linear discriminant analysis (LDA) method [12]. Later, Lazarevic proposed a distributed computing algorithm as a parallel classification technique and combined the classifiers into a weighted voting ensemble. They achieved an accuracy of around 73% and succeeded in lowering the cost of computation and memory requirements [13]. Furnkranz tried handling multi-class problems using binary classifiers wherein one classifier is used for each pair of classes. This proved to be considerably faster than traditional methods that normally train each class against all other classes. They achieved an accuracy of 66.80% [14]. Frank et al. proposed building a committee of simple base classifiers using a standard boosting algorithm and then pruning these committees adaptively. This pruning helped them achieve an improved accuracy of 82.50% [15]. Liu et al. experimented with decision trees (DT) on the forest cover type dataset with a prediction accuracy of 88%. They claimed that DTs achieve better performance for a certain class of problems than feedforward backpropagation neural networks [16]. Kumar et al. experimented with the random forests (RF) algorithm on the forest cover type dataset with a prediction accuracy of 94.60% [17]. Kumar et al. deployed spider monkey optimization for soil classification [18] and leaf disease identification [19].


When the training dataset is vast, it is impossible to use the entire data for training purposes due to limited memory and computational speed. Koggalage and Halgamuge suggested the use of the Support Vector Machine (SVM) by introducing the concept of a 'safety region' to minimize the effect on final classification results, achieving an accuracy of 89.72% on the classification of classes 1, 2, and 5 of the forest cover type dataset [20]. Castro et al. used the Fuzzy ARTMAP (FAM) algorithm, one of the fastest neural network algorithms because of its ability to generate new neurons to characterize classification categories. The FAM algorithm lags in convergence time as the network grows. To manage convergence time, the authors proposed two partitioning approaches, network partitioning and data partitioning, to be used in a parallel setting, and achieved an accuracy of 76% [21].

3 Data Set Description The cover type dataset covers four wilderness areas of Roosevelt National Forest, located in the Front Range of northern Colorado: Rawah (29,628 ha), Neota (3904 ha), Comanche Peak (27,389 ha), and Cache la Poudre (3817 ha) [14]. Among the four wilderness areas, Neota has the highest mean elevation, followed by Rawah and Comanche Peak, whose mean elevations are lower than Neota's. The fourth area, Cache la Poudre, has the lowest mean elevation of all the wilderness areas. There are seven mutually exclusive forest cover type classes: Douglas-fir, Spruce/Fir, Ponderosa Pine, Lodgepole Pine, Krummholz, Aspen, and Cottonwood/Willow. These seven classes get their names from the dominant tree species present in these areas. The Rawah and Comanche Peak areas contain more tree species and provide a larger range of values for the predictive variables than Neota or Cache la Poudre. This makes Rawah and Comanche Peak more typical representations of the overall dataset. Considering its relatively low elevation and species variety, Cache la Poudre qualifies as more unique than the other three. Wilderness maps for these forest cover types were first generated by the US Forest Service using extensive-scale aerial photography. These maps display natural features like rivers, mountains, plains, lakes, and vegetation using contour lines illustrating elevation gain or loss. Independent variables were derived from digital spatial data provided by the US Geological Survey (USGS) and the US Forest Service (USFS). A total of 54 independent variables comprising 10 quantitative variables, four wilderness areas, and 40 soil types were used in the study. The two qualitative variables, viz. wilderness areas and soil types, had 4 and 40 binary values. Each instance in the dataset had a value of '0' or '1' representing the 'absence' or 'presence' of a specific soil type or wilderness area. The problem statement is to correctly classify the forest cover type into seven mutually exclusive classes. The cover type data set contains 581,012 instances and has no missing values.


4 K-Nearest Neighbor (KNN) K-nearest neighbor, a supervised machine learning algorithm, is primarily used for classification. KNN is a lazy algorithm, simple to implement, and quite robust for any search space. The algorithm does not require a specialized training phase; rather, it preserves the dataset and uses it to perform classification. For test data classification, the KNN algorithm calculates the distances between the test data and all the training examples. Based on these distances, the nearest neighbors are identified, producing the output class for the test data. KNN never makes any assumptions about the underlying dataset and is thus also known as a nonparametric algorithm. The KNN algorithm utilizes the concept of feature similarity to make new predictions: new data points are classified by assigning a value based on how precisely they match the training set data points. These features of KNN motivate us to examine its behavior in predicting forest cover type classes in forested areas.

KNN Algorithm
Step 1 Load the training and test data.
Step 2 Initialize K with the number of nearest data points to be considered, where K is an integer.
Step 3 For each point in the test data, do the following:
3.1 Calculate the distance between the test data and each point in the training dataset.
3.2 Sort the training data points in ascending order based on the distance value.
3.3 Select the top K data points from the sorted group.
3.4 Retrieve the labels of the K data points selected.
3.5 Count the number of data points lying in each category.
3.6 Assign the new data point to the category to which the maximum number of neighbors belong.
Step 4 End
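One way the steps above could be realized, as a minimal NumPy sketch (a toy implementation for illustration, not the paper's code):

import numpy as np

def knn_predict(X_train, y_train, x_test, k=5):
    # Step 3.1: Euclidean distance from the test point to every training point
    dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    # Steps 3.2-3.3: sort ascending and take the top K indices
    nearest = np.argsort(dists)[:k]
    # Steps 3.4-3.6: retrieve labels, count per category, majority vote
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.8, 5.2]), k=3))  # prints 1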

Choosing the right value for K There is no prescribed method to find the perfect value of K. To find the best value, the KNN algorithm can be executed several times with varying values of K, picking the K that minimizes the errors the algorithm makes when predicting on data it hasn't seen before. When determining K's value, the value should not be very low: a low value of K makes the model sensitive to outliers. If K is increased, the predictions get steadier due to majority voting/averaging and are likely to be more correct, but only up to a point; beyond this threshold, the number of errors encountered starts increasing. In problems where majority votes determine the category of a new data point, K is chosen as an odd number to avoid ties.


5 Results For predicting the forest cover type of each 30 × 30 m cell, a classifier based on the KNN algorithm is used. We chose KNN as the predictive model in this study because it does not require a training period to make new predictions. This property makes it possible to add new data to the dataset without impacting the algorithm's accuracy. The KNN classifier is very robust to the search space, can be updated online at a small cost, and requires low computation time. Data processing is done using the Python PANDAS library, and the algorithm is implemented using SKLEARN. We assessed KNN's performance using 10-fold cross-validation. We performed rigorous experiments with different values of k and obtained the best results for k = 5 (Fig. 1). We calculated various performance metrics, F1-score, accuracy, recall, and precision, and report their means. On 10-fold cross-validation, we got average accuracy, precision, recall, and F1-score of 0.97, 0.95, 0.93, and 0.93 for k = 5. These results are summarized in Table 1 and Fig. 2. The accuracy achieved is 97.09%, which is significantly better than the similar works of Gu and Cheng [22], Yuksel et al. [23], and Radhakrishnan et al. [24], with accuracies of 88.55%, 83.63%, and 93.70%, respectively.

Fig. 1 Classification accuracy of classifier for different values of ‘k’
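A sketch of this pipeline using scikit-learn's bundled copy of the UCI covertype data (note that 10-fold cross-validation of KNN on all 581,012 instances is computationally heavy; any additional preprocessing used in the paper is not shown):

from sklearn.datasets import fetch_covtype
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier

X, y = fetch_covtype(return_X_y=True)  # 581,012 instances, 54 features
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_validate(knn, X, y, cv=10,
                        scoring=("accuracy", "precision_macro",
                                 "recall_macro", "f1_macro"))
for name, vals in scores.items():
    if name.startswith("test_"):
        print(name, round(vals.mean(), 4))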

Table 1 Result summary

Forest type       Precision   Recall   f1-score
Spruce fir        0.97        0.97     0.97
Lodgepole pine    0.97        0.98     0.97
Ponderosa pine    0.96        0.97     0.96
Cottonwood        0.92        0.79     0.85
Aspen             0.92        0.90     0.91
Douglas-fir       0.94        0.93     0.94
Krummholz         0.97        0.97     0.97
Average           0.95        0.93     0.93

Fig. 2 Performance of KNN classifier on forest cover type

6 Conclusion This paper highlights the importance of natural resources in maintaining the balance of our ecosystem. To make effective natural resource management plans, detailed information about land cover is a must. The best solution to this problem is predictive models, which are easy to implement and inexpensive. This paper uses the forest cover type (FC) data set available at the UCI KDD Archive to investigate the efficiency of predictive models, specifically the K-nearest neighbor algorithm, on a multi-class classification problem with seven imbalanced classes. To evaluate the efficiency of the classifier, we used various performance metrics. The experiment conducted


revealed that the KNN classifier performed well in discriminating the seven cover types. On 10-fold cross-validation, we attained a prediction accuracy of 97.09%. This accuracy is significantly higher than that of similar works done in the past. With this blend of efficiency and accuracy, KNN can be recommended as a desirable classifier for multi-class classification problems and natural resource management.

References 1. N. Schrijver, Natural resource management and sustainable development, in The Oxford Handbook on the United Nations (2007) 2. A. Kumar, T. Choudhary, A machine learning approach for the land type classification, in Innovations in Electrical and Electronic Engineering (Springer, Singapore, 2021), pp. 647–656 3. G.A. Mendoza, H. Martins, Multi-criteria decision analysis in natural resource management: a critical review of methods and new modelling paradigms. Forest Ecol. Manage. 230(1–3), 1–22 (2006) 4. A. Kumar, A. Kakkar, R. Majumdar, A.S. Baghel, Spatial data mining: recent trends and techniques, in 2015 International Conference on Computer and Computational Sciences (ICCCS) (IEEE, 2015), pp. 39–43 5. M. Trebar, N. Steele, Application of distributed SVM architectures in classifying forest data cover types. Comput. Electron. Agric. 63(2), 119–130 (2008) 6. T.E. Avery, H.E. Burkhart, Forest Measurements (Waveland Press, 2015) 7. B.T. Wilson, A.J. Lister, R.I. Riemann, A nearest-neighbor imputation approach to mapping tree species over large areas using forest inventory plots and moderate resolution raster data. For. Ecol. Manage. 271, 182–198 (2012) 8. R.R. Kishore, S.S. Narayan, S. Lal, M.A. Rashid, Comparative accuracy of different classification algorithms for forest cover type prediction, in 2016 3rd Asia-Pacific World Congress on Computer Science and Engineering (APWC on CSE) (IEEE, 2016), pp. 116–123 9. K. Crain, G. Davis, Classifying forest cover type using cartographic features. Published report (2014) 10. D. Dua, C. Graff, UCI Machine Learning Repository. University of California: Covertype Data Set. https://archive.ics.uci.edu/ml/datasets/covertype 11. J.A. Blackard, D.J. Dean, Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Comput. Electron. Agric. 24(3) (1999) 12. J.A. Blackard, Comparison of neural networks and discriminant analysis in predicting forest cover types. Ph.D. Dissertation, Department of Forest Sciences, Colorado State University, Fort Collins, Colorado (2000) 13. A. Lazarevic, Z. Obradovic, Data reduction using multiple models integration, in Proceedings of the 5th European Conference on Principles of Data Mining and Knowledge Discovery (PKDD'01) (Springer, Germany, 2001), pp. 301–313 14. J. Fürnkranz, Round robin rule learning, in Proceedings of the 18th International Conference on Machine Learning (ICML-01), 2001, pp. 146–153 15. E. Frank, G. Holmes, R. Kirkby, M. Hall, Racing Committees for Large Datasets (Springer, Berlin, 2002), pp. 153–164 16. T. Liu, K. Yang, A.W. Moore, The IOC algorithm: efficient many-class nonparametric classification for high-dimensional data, in Proceedings of the 2004 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'04), 2004, pp. 629–634 17. A. Kumar, N. Sinha, Classification of forest cover type using random forests algorithm, in Advances in Data and Information Sciences (Springer, Singapore, 2020), pp. 395–402


18. S. Kumar, B. Sharma, V.K. Sharma, R.C. Poonia, Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evol. Intell. (2018), pp. 1–12. https://doi.org/10.1007/s12065-018-0186-9 19. S. Kumar, B. Sharma, V.K. Sharma, H. Sharma, J.C. Bansal, Plant leaf disease identification using exponential spider monkey optimization. Sustain. Comput. Inform. Syst. 28 (2020). https://doi.org/10.1016/j.suscom.2018.10.004 20. R. Koggalage, S. Halgamuge, Reducing the number of training samples for fast support vector machine classification. Neural Inf. Process. Lett. Rev. 2 (2004) 21. J. Castro, M. Georgiopoulos, J. Secretan, R.F. DeMara, G. Anagnostopoulos, A. Gonzalez, Parallelization of fuzzy ARTMAP to improve its convergence speed: the network partitioning approach and the data partitioning approach. Nonlin. Anal. Theor. Methods Appl. 63(5) (2005) 22. Y. Gu, L. Cheng, Classification of class overlapping datasets by kernel-MTS method. Int. J. Innov. Comput. Inf. Control 13(5), 1759–1767 (2017) 23. M.E. Yuksel, N.S. Basturk, H. Badem, A. Caliskan, A. Basturk, Classification of high resolution hyperspectral remote sensing data using deep neural networks. J. Intell. Fuzzy Syst. 34(4), 2273–2285 (2018) 24. S. Radhakrishnan, A.S. Lakshminarayanan, J.M. Chatterjee, D.J. Hemanth, Forest data visualization and land mapping using support vector machines and decision trees. Earth Sci. Inform. 13(4), 1119–1137 (2020)

Gestational Diabetes Prediction Using Machine Learning Algorithms Vaishali D. Bhagile and Ibraheam Fathail

Abstract Machine learning (ML) techniques are used in a large number of sectors and contribute to their development. ML plays a vital role in the medical field by reducing the risk of chronic diseases through predicting a disease before it occurs, with the help of Internet of Things techniques. Diabetes is one of the diseases most often leading to death at this time, destroying human life, especially that of elderly people. In this work, we focus on gestational diabetes, which a large number of women develop during pregnancy and which increases the risk to the fetus. We use the raw dataset from Kaggle (Pima Indian Diabetes Data Set), which contains 768 instances and 8 attributes. In our project, we used the most common classification algorithms of ML for the prediction of diabetes (k-NN, DT, NB, RF, SVM, logistic regression, XGBoost, CatBoost, and NN). We obtained higher accuracy compared with some previous research on the same disease. Keywords Gestational diabetes · Machine learning algorithms · Neural network

1 Introduction Diabetes is a chronic condition that develops when the pancreas cannot produce enough insulin or when the body cannot efficiently use the insulin it produces. The World Health Organization has stated that diabetes is a leading cause of death, killing more than 1.5 million people worldwide annually, 80% of them in low-income countries [1]. The organization expects diabetes to rank seventh among the main causes of death by 2030. There are many types of diabetes; here we discuss gestational diabetes. Gestational diabetes is a high level of blood sugar in which blood glucose levels surpass the normal level but do not reach the level necessary to diagnose diabetes. This pattern occurs during pregnancy. V. D. Bhagile Deogiri Institute of Technology and Management Study, Aurangabad, Maharashtra, India I. Fathail (B) Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, Maharashtra, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_6


Women with gestational diabetes are at higher risk of developing complications during pregnancy and childbirth, and their babies are also at greater risk of developing type 2 diabetes in the future. Gestational diabetes is identified through prenatal screening rather than through reported symptoms. The exact cause of gestational diabetes is not yet clear, but many factors contribute to the disease, the most important of which are the hormones secreted by the placenta during pregnancy, such as oxytocin, estrogen, and progesterone, which increase blood glucose (blood sugar) to a level the pancreas cannot reduce. On the other side, various factors lead to an increased risk of developing gestational diabetes, the most important of which are: (1) obesity (BMI > 30 kg/m²) or significant weight gain during pregnancy; (2) a family history of gestational diabetes, especially among first-degree relatives (mother or sister); (3) gestational diabetes in a previous pregnancy of the same woman; (4) a history of previous births that resulted in babies weighing more than 4.1 kg; (5) detection of glucose in the urine at the first visit to the doctor after pregnancy; (6) health problems related to diabetes, such as PCOS, high blood pressure, or the use of steroid medicines such as cortisone [1]. In this work we use ML algorithms to predict gestational diabetes for new instances with high accuracy and without testing. We divided our work into the following sections: (1) related work, (2) methodology, (3) results, and (4) conclusion.

2 Related Work In this part we present a survey of published papers on classification algorithms for diabetes. Our focus is on the models used and the results obtained. We study papers published between 2016 and 2020. Joshi et al. [2] present a system for the analysis and prediction of diabetes and compare ML algorithms. The methodology consists of two parts: (1) data pre-processing methods, for which they used the Kaggle PIDD (Pima Indian Diabetes Database) dataset with 768 instances and 8 attributes, and (2) classification methods (KNN, RF, NB, J48). Luo [3] explains how to make machine learning prediction results explainable for any model without reducing accuracy. They used an electronic medical dataset from the fusion diabetes classification competition, containing patient data from all 50 states of the United States. The accuracy of predicting which patients will receive a type 2 diabetes diagnosis in the next year is 87.4%. Choubey et al. [4] aim to provide a better systematic classification of diabetes. The suggested methodology is composed of three levels: (1) the PIDD dataset taken from the UCI machine learning repository, (2) a feature selection technique using GA, and (3) prediction, where they used the NB algorithm on the attributes of the PIDD dataset. The accuracy on the training set is 75.6506%, and the accuracy on the testing set is 78.6957%. Yuvaraj


et al. [5] proposed a new implementation of ML algorithms on a Hadoop cluster for diabetes prediction. The outcome shows that ML algorithms are capable of producing very accurate diabetes-forecasting healthcare systems. PIDD from the National Institute of Diabetes and Digestive Diseases (NIDDD) is used to estimate performance. The proposed system is composed of five stages: (1) dataset, using the PIDD data set, (2) feature selection, (3) removal of noise and missing data, (4) classification algorithms such as NN, SVM, DT, NB, and RF, and (5) merging R into Hadoop. The accuracies obtained with DT, NB, and RF are 88%, 91%, and 94%, respectively. Alić et al. [6] present a survey of MLAs for the classification of diabetes and cardiovascular diseases (CVD) using artificial neural networks (ANNs) and Bayesian networks (BNs), covering papers published between 2008 and 2017. They found that the BN algorithm is most commonly used and that NB gives the highest accuracy for diabetes and CVD compared with the other algorithms, 99.51% and 97.92%. Zou et al. [7] used ML algorithms such as DT, RF, and NN for classifying diabetes. The methodology consists of four levels: (1) a dataset collected from hospital physical examinations in Luzhou, China, (2) classification algorithms (NB, RF, DT, NN), (3) validation of the model, and (4) feature extraction to remove redundant data. The best accuracy on the Luzhou dataset is 0.8084, and the best accuracy on the Pima Indians dataset is 0.7721. Kaur et al. [8] used machine learning to classify the Pima India diabetes dataset and discover models with risk factors using the R data processing tool. They developed five models to classify people with and without diabetes: support vector machine (linear), random forest, k-nearest neighbors, artificial neural network, and MDR. The models' accuracies are 0.89, 0.90, 0.88, 0.90, and 0.92, respectively. Kalyankar et al. [9] used classification algorithms in the Hadoop MapReduce environment to process PIDD, discover missing data in it, and find patterns. The proposed way consists of five levels: (1) data analysis, such as data collection and missing data, (2) machine learning, supervised and unsupervised, (3) Apache Hadoop, an open-source framework written in Java for processing datasets, (4) the PIDD dataset from the UCI Machine Learning Repository, and (5) data cleaning. Sisodia et al. [10] designed a model able to predict the likelihood that a patient suffers from diabetes with high precision. They used three classification algorithms (SVM, NB, DT) to predict diabetes early and used PIDD for examination and experiment. The accuracy of the NB algorithm is 76.30%, which is high compared with the other algorithms. Alghamdi et al. [11] developed an ensemble-based predictive model using 13 features chosen based on their clinical significance. They used the Synthetic Minority Oversampling Technique (SMOTE) to deal with the negative effect of class imbalance on the built model. The performance of the predictive classifier was enhanced by an ensemble ML method using the vote technique with three decision tree variants (NB Tree, RF, and LM Tree). The model achieved high accuracy (92%). Kumar et al. [12] discussed various methods for healthcare, and Inje et al. [13] introduced some methods for disease diagnosis.


3 Methodology The procedure of the model is presented in Fig. 1. This model differs from other models in that it uses machine learning algorithms and deep learning algorithms to classify diabetes. As shown, the model consists of five steps:

3.1 Diabetes Dataset A dataset is a collection of data organized in some order; it can contain anything from a sequence or array to a database table. There are three types of data in a dataset: (1) numerical data, such as age or insulin ratio, (2) categorical data, such as diabetes (0, 1), and (3) ordinal data, which is similar to numerical data. Machine learning works with a huge amount of data; without data, ML/AI models cannot be trained or tested, so a dataset is needed to store the large volume of data. This dataset is of type CSV (comma-separated values). In our work we use PIDD (Pima Indian Diabetes Database) from https://www.kaggle.com/uciml/pima-indians-diabetes-database. The dataset includes 768 cases of female patients and 8 attributes. We describe the columns (attributes) of the dataset in Table 1.

3.2 Data Processing Data preprocessing is a crucial step in machine learning. Raw data contain missing values, noise, and formats that are unsuitable for direct use in machine learning. Hence, data preprocessing is an important task to clean the raw data and make it suitable for a machine learning model. This level involves the following steps: • Importing essential libraries (numpy, matplotlib, pandas, seaborn, etc.). • Importing the dataset (PIDD). • Handling missing data (replacing missing data with the median).

Fig. 1 The structure of the model: diabetes dataset → pre-process data → MLA & DLA classification → performance evaluation → result


Table 1 Statement of the attributes of the dataset

Attributes       Description                                       Normal range
num_preg         Number of times pregnant                          –
glucose_conc     Plasma glucose concentration                      95 ≤ G ≤ 141
diastolic_bp     Diastolic blood pressure (mm Hg)                  80 ≤ DB ≤ 90
skin_thickness   Triceps skin fold thickness (mm)                  –
insulin          2-h serum insulin (mu U/ml)                       2.6 < IN < 24.9 (mcIU/ml)
bmi              Body mass index (weight in kg/(height in m)²)     –
diab_pred        Diabetes pedigree function                        0.42 ≤ DP ≤ 0.82
Age              Age (years)                                       40 ≤ A ≤ 60
Diabetes         Class variable (0 or 1); 268 of 768 are 1,        –
                 the others are 0

• Encoding categorical data (we used the LabelEncoder() class). • Feature scaling, meaning we put our data on the same range and scale between 0 and 1. Machine learning uses the Euclidean distance to compare data; the Euclidean distance between points A = (X1, Y1) and B = (X2, Y2) is √((X2 − X1)² + (Y2 − Y1)²). To scale the data we import the StandardScaler class of the sklearn.preprocessing library. • Splitting the data into training data and test data; for this purpose we used the train_test_split() function.
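A sketch of these preprocessing steps, assuming PIDD has been downloaded locally as diabetes.csv and that its columns are named as in Table 1 (adjust the names to the actual CSV headers):

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")

# Handle missing data: zeros in these columns denote missing readings,
# so replace them with the column median.
for col in ["glucose_conc", "diastolic_bp", "skin_thickness", "insulin", "bmi"]:
    df[col] = df[col].replace(0, df[col].median())

# Feature scaling, then an 80/20 train/test split.
X = StandardScaler().fit_transform(df.drop(columns="diabetes"))
X_train, X_test, y_train, y_test = train_test_split(
    X, df["diabetes"], test_size=0.2, random_state=0)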

3.3 Classification Algorithms These are supervised learning techniques that use training data to recognize new classes. There are two types of classification algorithms: (1) linear models, such as logistic regression and the support vector machine, and (2) nonlinear models, such as RF, DT, NB, and KNN. In this stage we give a brief description of the algorithms used:

3.3.1 Logistic Regression
Logistic regression is a supervised learning technique used for predicting categorical variables. There are three types of logistic regression: (1) binomial, where there can be only two possible kinds of dependent variable, such as 0 or 1 or true or false; (2) multinomial, where there can be three or more possible unordered types of dependent variable, such as "cat", "dog", or "sheep"; and (3) ordinal, where there can be three or more possible ordered types of dependent variable, such as "low", "medium", or "high". The logistic regression equation can be derived from the linear regression equation. The mathematical steps to obtain the logistic regression equation are given below: • The equation of the straight line is:


y = b0 + b1x1 + b2x2 + b3x3 + ··· + bnxn
• Since y lies between 0 and 1, we divide y by (1 − y):
y/(1 − y)
• The final equation takes the logarithm of the previous expression to obtain a range from −infinity to +infinity:
log[y/(1 − y)] = b0 + b1x1 + b2x2 + b3x3 + ··· + bnxn
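A small sketch tying the logit equation above to a fitted model (X_train and y_train are the stand-in names from the preprocessing sketch):

import numpy as np
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
b0, b = model.intercept_[0], model.coef_[0]

x = X_train[0]
logit = b0 + b @ x               # log[y/(1 - y)] = b0 + b1x1 + ... + bnxn
prob = 1 / (1 + np.exp(-logit))  # invert the logit to recover y
print(prob, model.predict_proba(X_train[:1])[0, 1])  # the two values match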

3.3.2 Random Forest
Random forest is a classifier that builds a number of decision trees (DTs) on different subsets of the given dataset and averages their outputs to boost predictive accuracy. Its advantages are that it takes less training time than other algorithms, predicts the output with high accuracy, and can preserve accuracy even when a large portion of the data is missing.
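A sketch of this classifier, the best performer in Sect. 4, reusing the train/test split from the preprocessing sketch (the paper's exact hyperparameters are not stated):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))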

3.3.3 Decision Tree
A decision tree is a supervised learning technique. It includes decision nodes and leaf nodes; decisions are made at the decision nodes, which have multiple branches, while the leaf nodes represent the outcomes of those decisions. The advantages of decision trees are that they are easy to understand, because they imitate human thinking during decision-making, and that they display a tree-like structure.

3.3.4 Naive Bayes
Naive Bayes is a supervised learning technique used for solving classification problems, mainly textual classification. It is a simple and effective classification algorithm that helps in building rapid machine learning models. Naive Bayes depends on Bayes' theorem:

P(C|D) = P(D|C) P(C) / P(D)

where
P(C|D) is the probability of hypothesis C given the observed event D,
P(D|C) is the probability of the evidence given that the hypothesis is true,
P(C) is the probability of the hypothesis before observing the evidence,
P(D) is the probability of the evidence.

3.3.5 K-Nearest Neighbor
K-nearest neighbor is one of the simplest algorithms in machine learning and relies on the supervised learning technique. The concept of KNN is to classify new data based on its neighbors: K determines the number of neighbors considered for the new data, and the class containing the maximum number of those neighbor points is the class the new point follows. The K-NN operation can be described by the algorithm below:
Step 1 START.
Step 2 INPUT K = N (select the number K of neighbors).
Step 3 DISTANCE = √((X2 − X1)² + (Y2 − Y1)²) (calculate the Euclidean distance to the training points).
Step 4 K[] = MIN(DISTANCE(N)) (take the K nearest neighbors as per the calculated Euclidean distance).
Step 5 Among these K neighbors, count the number of data points in each class.
Step 6 Assign the new data point to the class for which the number of neighbors is maximum.
Step 7 The model is ready.

3.3.6 XGBoost

XGBoost is an open-source library that provides a high-performance implementation of gradient-boosted decision trees. An underlying C++ codebase combined with a Python interface on top makes for an extremely powerful yet simple package. The advantages of XGBoost are regularization, parallel processing, handling of missing values, cross-validation, and effective tree pruning.
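A sketch of an XGBoost classifier on the same split; the xgboost package exposes a scikit-learn-compatible estimator (the hyperparameters here are illustrative):

from xgboost import XGBClassifier

xgb = XGBClassifier(n_estimators=100, max_depth=3)
xgb.fit(X_train, y_train)
print("test accuracy:", xgb.score(X_test, y_test))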

3.3.7 SVM
SVM is one of the most common supervised learning algorithms and is used for classification and regression problems. The purpose of SVM is to find the best boundary line between two classes. SVM has two kinds: (1) linear SVM, used for linearly separable data, which it splits into two classes with a straight line, and (2) nonlinear SVM, used for data that cannot be separated by a straight line. The advantages of SVM are that it works very well if there is a clear margin of separation between classes, it is more effective in high-dimensional spaces and in situations where the number of dimensions exceeds the number of samples, and it is memory-efficient.


Table 2 Accuracy measures

Measures        Definition                                                 Equation
Accuracy (A)    Defines how often the model predicts the correct output    A = (TP + TN)/(TP + FP + FN + TN)
Precision (P)   The number of correct outputs out of all positive          P = TP/(TP + FP)
                classes that were predicted
Recall (R)      Defines how much of the positive class the model           R = TP/(TP + FN)
                predicted correctly
F-measure (F)   Helps us evaluate recall and precision at the same time    F = (2 * R * P)/(R + P)

3.4 Performance Evaluation Performance evaluation is a significant part of machine learning; however, it is a complex task. There are various ways to measure the performance of a machine learning model. Here we evaluate the diabetes-prediction machine learning model using the accuracy, precision, recall, and F1 score of all the machine learning algorithms used in our model. Table 2 explains these accuracy measures.
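A sketch of the four measures in Table 2 computed with scikit-learn, reusing the fitted random forest and the held-out test split from the earlier sketches:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_pred = rf.predict(X_test)
print("A =", accuracy_score(y_test, y_pred))   # (TP + TN)/(TP + FP + FN + TN)
print("P =", precision_score(y_test, y_pred))  # TP/(TP + FP)
print("R =", recall_score(y_test, y_pred))     # TP/(TP + FN)
print("F =", f1_score(y_test, y_pred))         # 2RP/(R + P)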

4 Results The results are an important part of any experimental model, determining the accuracy and success of the model in predicting the problem. We evaluate our model on the four accuracy measures (precision, recall, F-measure, and accuracy) mentioned in the previous section and shown in Table 2. Table 3 depicts the performance and results of the classification algorithms used in the model.

Table 3 Comparative performance of classification algorithms

Algorithms            Accuracy (%)   Precision   Recall   F_score
Logistic regression   84.71          0.782       0.692    0.734
Naïve Bayes           82.35          0.703       0.730    0.716
SVM                   85.88          0.791       0.730    0.76
KNN                   74.12          0.652       0.576    0.612
Decision tree         77.65          0.606       0.769    0.677
Random forest         88.24          0.833       0.769    0.8
XGBoost               80.0           0.645       0.769    0.701
CatBoost              82.85          0.703       0.730    0.716
Neural network        85.88          0.769       0.769    0.769



Fig. 2 The accuracy measures of classification algorithms

As noticed in Table 3, the random forest shows the maximum accuracy in comparison with the other classification algorithms. Therefore, the random forest classifier can predict diabetes disease early. Figure 2 shows the accuracy measures (accuracy, precision, recall, and F_score) for the classification algorithms as listed in Table 3. We observe that the random forest got the highest values on all accuracy measures (88.24%, 0.833, 0.769, and 0.8, respectively). Figure 3 shows the diagrammatic performance accuracy of all the classification algorithms, as bar and pie charts of the accuracy of the machine learning and deep learning algorithms. The random forest algorithm has the highest accuracy: 88.24% on the bar chart and 11.9% of the pie chart.

5 Conclusions Diabetes is one of the diseases that lead to death worldwide, so it is very important to discover and predict it early, before the patient reaches a risky state, especially for pregnant women, because gestational diabetes negatively affects both the fetus's health and the mother's. In this work nine MLAs were applied to PIDD (Pima Indian Diabetes Data Set) to predict diabetes. Experimental results proved the suitability of the model, with a performance accuracy of 88.24% using the random forest classification algorithm. We used Jupyter Notebook to execute the model. The designed model is flexible enough to be used for the prediction of other diseases in the future.


Fig. 3 Comparison of accuracy of classification algorithms

References 1. World Health Organization, Definition, Diagnosis and Classification of Diabetes Mellitus and its Complications: Report of a WHO Consultation. Part 1, Diagnosis and Classification of Diabetes Mellitus (No. WHO/NCD/NCS/99.2) (World Health Organization, 1999) 2. R. Joshi, M. Alehegn, Analysis and prediction of diabetes diseases using machine learning algorithm: ensemble approach. Int. Res. J. Eng. Technol. 4(10), 426–435 3. G. Luo, Automatically explaining machine learning prediction results: a demonstration on type 2 diabetes risk prediction. Health Inf. Sci. Syst. 4(1), 2 (2016) 4. D.K. Choubey, S. Paul, S. Kumar, S. Kumar, Classification of Pima indian diabetes dataset using naive bayes with genetic algorithm as an attribute selection, in Communication and Computing Systems: Proceedings of the International Conference on Communication and Computing System (ICCCS 2016) (LNCS Homepage, 2017, February), pp. 451–455. http://www.springer.com/lncs. Last accessed 2016/11/21 5. N. Yuvaraj, K.R. SriPreethaa, Diabetes prediction in healthcare systems using machine learning algorithms on Hadoop cluster. Clust. Comput. 22(1), 1–9 (2019)


6. B. Alić, L. Gurbeta, A. Badnjević, Machine learning techniques for classification of diabetes and cardiovascular diseases, in 2017 6th Mediterranean Conference on Embedded Computing (MECO) (IEEE, 2017, June), pp. 1–4 7. Q. Zou, K. Qu, Y. Luo, D. Yin, Y. Ju, H. Tang, Predicting diabetes mellitus with machine learning techniques. Front. Genet. 9, 515 (2018) 8. H. Kaur, V. Kumari, Predictive modelling and analytics for diabetes using a machine learning approach. Appl. Comput. Inform. (2020) 9. G.D. Kalyankar, S.R. Poojara, N.V. Dharwadkar, Predictive analysis of diabetic patient data using machine learning and Hadoop, in 2017 International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) (IEEE, 2017, Feb), pp. 619–624 10. D. Sisodia, D.S. Sisodia, Prediction of diabetes using classification algorithms. Proc. Comput. Sci. 132, 1578–1585 (2018) 11. M. Alghamdi, M. Al-Mallah, S. Keteyian, C. Brawner, J. Ehrman, S. Sakr, Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: the Henry Ford exercise testing (FIT) project. PLoS One 12(7), e0179805 (2017) 12. S. Kumar, A. Nayyar, A. Paul, in Swarm Intelligence and Evolutionary Algorithms in Healthcare and Drug Development, eds. by S. Kumar, A. Nayyar, A. Paul (CRC Press, 2019) 13. B. Inje, S. Kumar, A. Nayyar, Swarm intelligence and evolutionary algorithms in disease diagnosis—introductory aspects, in Swarm Intelligence and Evolutionary Algorithms in Healthcare and Drug Development (Chapman and Hall/CRC, 2019), pp. 1–18

Design and Implementation of Buffon Needle Problem Using Technology for Engineering Students Tommy Tanu Wijaya, Jianlan Tang, Shiwei Tan, and Aditya Purnama

Abstract Engineering students in university are often required to conduct experiments that mostly involve the use of technology. Furthermore, technology is increasingly used in education today because it increases the effectiveness of the learning process at every level of education, from kindergarten to university. However, there are few studies on the development of technology-based learning media at university. In engineering majors, students must be familiar with conducting Buffon's needle experiment in the subject of statistics and probability. This study aimed to develop learning media with Hawgent for Buffon's needle experiment to help engineering students understand the basic concepts behind the value π = 3.14. Hawgent is simple mathematical software that is often used for mathematical purposes; the operation of Hawgent 3.0 is simple, and it can support various kinds of experiments. This study focused on developing technology-based learning media with Hawgent for Buffon's needle experiment and implementing the learning media with engineering students. The results of this study showed that using learning media created with Hawgent to conduct Buffon's needle experiment was more effective and efficient than using the traditional method of needles and lined paper. For future research, more quantitative studies evaluating students' mathematical ability are recommended. Keywords Buffon needle · Engineering student · Hawgent · Statistics and probability

1 Introduction The study of technology in education is continuously conducted on various aspects and at education levels from kindergarten to university [1, 2]. One of the popular topics in the study of technology in education is learning media, examined through various qualitative, quantitative, or research-and-development methods. These studies showed positive results that T. T. Wijaya · J. Tang (B) · S. Tan · A. Purnama Department of Mathematics and Statistics, Guangxi Normal University, Guilin, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_7


technology helps students improve their mathematical ability [3, 4], self-confidence [5], learning interest [6], and understanding of learning materials [7]. As a result of the positive impact of technology in education, studies on the development of technology-based learning media continue to be conducted today. Teachers may one day be replaced by super-sophisticated robots that can answer all questions from students. Therefore, educational researchers have to continue developing technology-based learning media at all levels of education and in all subjects to improve the quality of education. It is hoped that with the role of technology, the quality of education throughout the world can become equal.

1.1 Technology and Engineering Student Studies in education are closely related to studies in technology. Kang et al. [8] investigated technological development in education and suggested that engineering majors should use technology to improve the quality of teaching and learning in the engineering department after graduation. Rollakanti et al. [9] researched the use of technology in teaching engineering students and showed that using technology to teach engineering learning materials positively impacted engineering students' learning interest and increased their understanding. This means that using technology to teach engineering learning materials is very beneficial and should be developed further, to prepare graduates who understand the importance of using and developing technology for the future.

1.2 Hawgent Dynamic Mathematics Software Hawgent is a dynamic mathematics software from China with a research and development center in Guangxi, China [10]. Hawgent has made significant contributions to mathematical subjects from the elementary to the high school level. However, few or no studies on developing learning media with Hawgent at the university level are available, even though mathematical subjects at university are more complex, so learning should use technology assistance. Tang Jianlan is a mathematics education professor at Guangxi Normal University and an expert in developing Technological Pedagogical Content Knowledge (TPACK) and Technological Pedagogical Mathematical Knowledge (TPMK). He analyzed the impact on students of developing learning media created with Hawgent [21]. This study showed that the students' problem-solving and creative thinking abilities developed simultaneously. This finding is supported by several studies stating that technology can improve students' problem-solving abilities [11, 12]. Wijaya T. T. also conducted several development studies using Hawgent at the high school level, on the learning materials of quadratic functions and trigonometry


[13, 14]. He also conducted a study showing that Hawgent supported mathematics learning during the COVID-19 pandemic by converting learning media created with Hawgent into short videos of no more than 10 min, made attractive for high school and junior high school students. Zhang L. et al. [4] used Hawgent to teach triangles at junior high school, making it unique and interesting by turning the Hawgent animations into learning videos using Camtasia Studio. Zhang's study showed that learning media created with Hawgent and converted into video could improve students' problem-solving ability at junior high school. Tan et al. [15] conducted a development study of learning media created with Hawgent in elementary school. Based on his observation, he found that the basic concepts of the circle were not well mastered, so he developed learning media on the circle using Hawgent dynamic mathematics software to help elementary school students gain a deep understanding and increase their learning interest. This study confirms previous findings that technology can increase students' learning interest and help elementary school students understand mathematical concepts. Siti from Indonesia also collaborated with Guangxi Normal University to develop learning media on plane geometry [16]. In addition, Siti conducted further research on the effect of using Hawgent in learning on elementary school students' mathematical reasoning ability. Among the previous studies, none develops learning media created with Hawgent at the university level. Therefore, the researcher developed learning media at the university level, especially for the engineering major.

1.3 Buffon Needle Problem

Statistics and probability is one of the important topics studied by engineering students. Students learn sample spaces, events, Bayes' rule and various other important topics [17]. One of the interesting topics discussed in the statistics material is the Buffon needle problem [18]. In the eighteenth century, Comte de Buffon investigated the value of Pi by experimenting with needles and lined paper [19]. The spacing of the lines and the length of the needles were the same: for example, if the needles' length is 2 cm, the spacing between the lines is also 2 cm. However, in the eighteenth century, when the experiment was conducted, technology had not developed as it has today, so Buffon's experiment required long preparation. An interesting question about this experiment is whether the estimate of Pi = 3.14 differs between using 1000 needles and using 10,000 needles, and whether different numbers of needles make the estimate deviate from 3.14. These questions can be answered today: technology can be used to simulate Buffon's needle experiment by dropping virtual needles.
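As a minimal illustration of the experiment described above (a sketch for the reader, not part of the original study, which used Hawgent), the following Python snippet drops N virtual needles whose length equals the line spacing and estimates π from the crossing frequency:

```python
import math
import random

def estimate_pi(num_needles, length=2.0, spacing=2.0):
    """Buffon's needle: with needle length l and line spacing a (l <= a),
    P(cross) = 2*l/(pi*a), so pi can be estimated as 2*l*N/(a*crossings)."""
    crossings = 0
    for _ in range(num_needles):
        # By symmetry it suffices to draw the midpoint's distance to the
        # nearest line and the needle's angle relative to the lines.
        midpoint = random.uniform(0.0, spacing / 2.0)
        angle = random.uniform(0.0, math.pi / 2.0)
        if midpoint <= (length / 2.0) * math.sin(angle):
            crossings += 1
    return 2.0 * length * num_needles / (spacing * crossings) if crossings else float("nan")

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} needles -> pi ~ {estimate_pi(n):.4f}")
```

Running it shows the estimate fluctuating around 3.14 and tightening as the number of needles grows, which is exactly the behavior the study asks students to observe.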


1.4 Purpose in This Study

The purpose of this study was to develop and implement learning media created with Hawgent to carry out Buffon's needle experiment for engineering students. Based on this purpose, the researchers conducted a study to answer the following questions:
• What are the steps to do Buffon's needle experiment using Hawgent?
• What are the benefits of using Hawgent to do Buffon's needle experiment?
• Does the use of Hawgent to do Buffon's needle experiment positively affect engineering students' learning interest?

2 Method

This study aimed to develop learning media to do Buffon's needle experiment. Buffon's needle problem is studied in probability and statistics as a college subject to demonstrate the value of π = 3.14. The learning media development was carried out over 8 weeks, from July to August 2020, at Guangxi Normal University, China, under the supervision of Professor Tang. The learning media was revised several times to refine it. It was then used to support learning at a university in Guilin, China. After class, the researchers collected responses from the students and the lecturers, and these responses were used to evaluate the learning media for further development.

3 Making a Buffon Needle Experiment Using Hawgent

The learning media for the Buffon needle experiment was made using Hawgent dynamic mathematics software. The experiment uses animated needles and lined paper that resemble real needles and lined paper. There are five steps to make this learning media:
Step 1: Make lines with equal spacing and needles whose length is the same as the spacing between the lines.
Step 2: Make the simulation of dropping needles.
Step 3: Count the needles that do not fall on the lines and the needles that do.
Step 4: Process the experimental data and compute the π value.
Step 5: Design the display to look better.
More specific steps are presented in Table 1.

Table 1 The process of making Buffon's needle experiment using Hawgent (the Figures column of screenshots is omitted)

Step 1: Draw parallel lines with a variable line width and construct needles with variable length.
• Construct a rectangular coordinate system, then draw points A (0,0), B (0,1), C (6,0)
• Draw a line from point C to D with the same length as AB
• Select lines A and D, then press the edit button on the menu bar at the top left corner; change the variable into u000
• Click points A and C, then click "construction, digital translation"; input 4 as the number of translations
• Click "construction, animation, variable," change the parameter type to an integer, then enter the needle length
• Click "construction, expression" and enter b * u000; then open the "properties dialog box" in edit mode, change "$Lb*u000 = &mv(u001)" in "general, text" into "$Ll = &mv(b)*a = &mv(u001)" and its variable name into u001
• Open "construction, expression" again and enter u000; change the "properties dialog, text" from "$Lu000 = &mv(u002)" to "$La = &mv(u002)" and its variable name into u002
• Open the "text font" and change the "name" to Song Ti, the "font size" to 20 and the "style" to normal

Step 2: Simulate the random needle-dropping experiment.
• Click "construction, animation, variable," enter t in the "variable or object label," and enter n − 1 in the "frequency"
• Right-click the button to open the "properties dialog box|script," select "unnamed tool," enter n in the "parameter name," change the "parameter type" to an integer, and enter the number of needles in the "parameter prompt"
• Then enter the number of needles in "properties dialog|display|label content," change the "display label" to true, and click OK
• Click "construction|expression" and enter sign(t) * rand(0, 6), sign(t) * rand(0, 4 * u000) and sign(t) * rand(0, 2 * pi), respectively (pi means π; the needle is rotated randomly around a point to simulate needles lying in different directions, so the maximum angle used in the design can be 2π)
• The variable names of the expressions obtained are u003, u004, u005; then click "drawing|coordinate point" to make the points E(u003, u004) and F(u003 + u001 * cos(u005), u004 + u001 * sin(u005)), and connect EF
• Lastly, click "construction, animation, variable," enter t in the "variable or object label," and change the maximum value to 0

Step 3: Calculate the number of needle throws and the number of intersections.
• To count the needles that do not touch a line, click "construction|expression" and enter "sign(t) * (u006 + 1)"
• Open the edit mode, select the expression, and change "$L(1 + u006)*sign(t) = &mv(u006)" into "$L&mv(u006,0)"
• To count the needles that touch a line, select the line segment EF, click "draw|midpoint" to get the midpoint G, select point G, and click "construct|y coordinate" to get the ordinate of point G
• Open the edit mode, right-click, and get the variable name u007 in the "properties dialog|general"; click "construction|expression" and enter "floor((u007-0)/u000)" for variable name u008; then enter "u007-0-u000 * u008"
• Click "construct|expression" again and input "((1 + sgn((u000/2) − u009))/2) * u009 + ((1 + sgn((u000/2) − u010))/2) * u010"; its variable name is u011, the minimum value among u009 and u010
• Click "construction|expression" and input "(1 + sgn((u000/2) * abs(sin(u005)) − u011))/2"; the output of this expression is 1 when the needle intersects a line and 0 when it does not
• Via "construction|expression," enter "sign(t) * (u013 + u012)" to accumulate the number of intersections

Step 4: Process the experimental data.
• Click "construction|text table|normal text" and input "number of needles" and "number of needle and line intersections," respectively
• Click "construction|text table|formula text" and input "P_Needle and Line Intersect = (2 * l)/(π * a)" and "π = (2 * l)/(P * a)"; click "construction|expression" and enter "u013/u006"
• In "properties dialog|general|text," change "$Lu013/u006 = &mv(u014)" to "$L&mv(u014)"; its variable name is u014
• Click "construction|expression," enter "(2 * u001)/(u014 * u000)," and change "$L(2 * u001)/(u014 * u000) = &mv(u015)" in "properties dialog|general|text" to "$L&mv(u015)"
• Click "drawing|coordinate points" to draw points H (−1,2), I (−1,4), J (−9,2); connect HI and HJ, then select HI
• Click "construction|digital translation," input −2 for the x translation component, 0 for the y translation component, and 4 for the number of translations
• Select HJ, click "construction|digital translation," input 0 for the x translation component, 1 for the y translation component, and 2 for the number of shifts; place the corresponding text and values in the corresponding positions

Step 5: Optimize the interface.
• Select the expression u001, open the edit mode, and change the "properties dialog box|formula text font|font size" to 25; repeat this operation for the expressions u002, u014 and u015
• Select the "normal text," open the edit mode, right-click, and change the "properties dialog|text font|font size" to 25
• Select the formula text, open the edit mode, right-click, change the "properties dialog|text font|font size" to 25, and then click "design|optimize"
• Open the edit mode, right-click the button, change the "name" in the "properties dialog|label font" to Times New Roman, the "font size" to 20, and the "style" to normal
• Click "general|system resources," select an appropriate picture, and repeat the setting for the other buttons
• Select the other expressions and the line segment EF, ray AB, and points E, F, G, A, B, H, I, J, then click "design|hide"
• Select point D, open the editing mode, right-click, change "properties dialog|display|marking content" to "change line width," then click "design|color|red" and click OK
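For readers without access to Hawgent, the crossing test encoded by the u007–u013 expressions above can be sketched in plain Python. This is a hedged reconstruction of the table's logic with readable variable names, not code from the paper:

```python
import math

def needle_crosses(y_mid, angle, spacing, length):
    """Reconstruction of the Table 1 test: reduce the needle midpoint's
    ordinate modulo the line spacing (u007, u008), take the distance to
    the nearest line (u009/u010, with minimum u011), and report a crossing
    when (l/2)*|sin(angle)| reaches that distance (u012)."""
    below = y_mid - spacing * math.floor(y_mid / spacing)  # distance to line below
    nearest = min(below, spacing - below)                  # distance to nearest line
    return (length / 2.0) * abs(math.sin(angle)) >= nearest
```

Dividing the accumulated crossing count by the number of throws gives P, and π is then recovered from π = (2 * l)/(P * a) exactly as in Step 4.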


4 The Implementation of the Learning Media to Do Buffon's Needle Experiment for Engineering Students

The learning media to do Buffon's needle experiment is designed to increase students' knowledge of the basic concepts behind finding Pi and to introduce the history of mathematicians. When students know the history of the relationship between Buffon's needle problem and the value of π = 3.14, it may open their thinking that mathematical formulas and problems can be proven in unique and interesting ways. As a result, it may change their previous assumption that mathematics is a difficult and boring subject. The learning media can be used in the probability and statistics subject that engineering students must study in their first year.

The learning design for implementing this media used historical concepts: it connected the stories of mathematical scientists about proving formulas to increase the curiosity and learning interest of engineering students through technology-based learning media. In this learning design, the teacher first posed open questions about the benefits of understanding the concept of probability in the engineering field. The discussion then flowed toward the basic concepts of probability, and the teacher explained the relationship between Buffon's needle problem and the value of π = 3.14. The teacher asked the students to compare Buffon's needle experiment done manually without technology assistance with the experiment done using technology. This design may open students' thinking that technology is very important in the twenty-first century; previous research states that technology can make teaching and learning activities easier and more enjoyable.

When experimenting to prove the value of π = 3.14 using the learning media, the teacher asks students to try to prove it themselves by entering a number of needles. Students find that, regardless of how many needles are dropped, the estimated value is always close or equal to 3.14. When students carry out their own experiments, it is easier for them to remember the material being taught.

The use of learning media created with Hawgent to do Buffon's needle experiment can improve engineering students' learning outcomes. This finding confirms previous research stating that technology-based learning media improves students' learning outcomes. The study by Salma [20] showed that using technology in teaching increased students' engagement and achievement. Salma et al. used structural equation modeling (SEM) to analyze the learning attitude and learning achievement of students who used technology in learning, and found that the use of technology had a significant positive effect on both.

The use of this learning media to explain the basic concept of probability and the value of π = 3.14 also increased students' learning interest. When the teacher introduced Buffon's needle problem, most engineering students did not know what it was, even though the basic concept and the experiment to find the value of π = 3.14 using Buffon's needle problem were very interesting and worth doing in class. Engineering students seemed very enthusiastic and paid attention to the experiment. Many students wanted to enter the number of needles themselves to prove the value of π = 3.14, and some students opened their phones to look for information about Buffon's needle problem. The researchers saw that the students were very active and motivated to learn at the beginning of the lesson, before they finally understood more about probability concepts and formulas. This study has the same finding as Vargianniti [21], who found that classes using technology increased students' learning motivation, learning interest, and learning achievement more than those using conventional methods.

Using technology in teaching also increases teachers' self-confidence [22]. Studies on technology use and teachers' self-confidence have found that teachers who used technology in teaching were more confident than those who used conventional methods or none at all. This was indicated in this study: the lecturer of the probability and statistics subject was more confident than when he only used slides to explain the concepts of probability and the Pi value.

Today, students are very familiar with technology [23]. Engineering students also need the ability to use technology to prepare them for work after graduation. The use of Hawgent in the probability and statistics subject is only one example of technology use that positively affects engineering students' learning. The researchers suggest that mathematics software continue to be used and developed in other studies.

5 Conclusion and Limitation of Study

In this modern era, technology is commonly used to support learning at all levels of education. The development of technology-based learning media to do Buffon's needle experiment helps engineering students master basic statistics and probability concepts. The steps of developing the learning media using Hawgent are easy, effective, and efficient. The learning media supports the Buffon's needle experiment, so teachers do not have to prepare needles and lined paper to prove the value of π = 3.14. The implementation results showed that students were curious to run their own experiments using Hawgent. Besides increasing engineering students' learning interest, self-confidence, and learning outcomes, technology in learning also trains students to be more professional and proficient in using technology in the engineering field.

This study only focuses on developing learning media created with Hawgent to do Buffon's needle experiment for engineering students; it does not quantitatively study the effects of using this learning media on engineering students' learning outcomes. The researchers suggest a further study on the effect of using technology to teach Buffon's needle problem on both the hard skills and the soft skills of engineering students.


Acknowledgements The development of learning media using Hawgent was supported by the province of Guangxi, China (2020JGA129).

References
1. S. Pamuk, M. Ergun, R. Cakir, H.B. Yilmaz, C. Ayas, Exploring relationships among TPACK components and development of the TPACK instrument. Educ. Inf. Technol. 20(2), 241–263 (2015)
2. T.T. Wijaya, Z. Zulfah, A. Hidayat, P. Akbar, W. Arianti, I. Asyura, Using VBA for Microsoft Excel based on 6-questions cognitive theory in teaching fraction. J. Phys. Conf. Ser. 1657(1), 012078 (2020)
3. I.F. Al-Mashaqbeh, iPad in elementary school math learning setting. Int. J. Emerg. Technol. Learn. 11(2), 48–52 (2016)
4. L. Zhang, Y. Zhou, T.T. Wijaya, Hawgent dynamic mathematics software to improve problem-solving ability in teaching triangles. J. Phys. Conf. Ser. 1663(1) (2020)
5. J.H.L. Koh, TPACK design scaffolds for supporting teacher pedagogical change. Educ. Technol. Res. Dev. 67(3), 577–595 (2019)
6. Z.A. Reis, S. Ozdemir, Using GeoGebra as an information technology tool: parabola teaching. Proc. Soc. Behav. Sci. 9, 565–572 (2010)
7. N. Baya'a, W. Daher, Mathematics teachers' readiness to integrate ICT in the classroom. Int. J. Emerg. Technol. Learn. 8(1), 46–52 (2013)
8. Z. Kang, R. Wang, Y. Wang, Bilingual teaching reform and practice of engineering student's 'professional foreign language' based on multimedia technology. Commun. Comput. Inf. Sci. CCIS 218(PART 5), 570–575 (2011)
9. C.R. Rollakanti, V.R. Naidu, R.K. Manchiryal, K.K. Poloju, Technology-Assisted Student-Centered Learning for Civil Engineering Students, vol. 1 (Springer International Publishing, 2020)
10. T.T. Wijaya, Z. Ying, S. Chotimah, M. Bernard, A. Zulfah, Hawgent dynamic mathematic software as mathematics learning media for teaching quadratic functions. J. Phys. Conf. Ser. 1592(1) (2020)
11. N. Sener, T. Erol, Improving of students' creative thinking through Purdue model in science. J. Balt. Sci. Educ. 16(3), 350–365 (2017)
12. A. Flores, J. Park, S.A. Bernhardt, Interactive technology to foster creativity in future mathematics teachers, in Creativity and Technology in Mathematics Education (Springer International Publishing AG, 2018), pp. 149–179
13. T.T. Wijaya, Z. Ying, A. Purnama, Using Hawgent dynamic mathematics software in teaching trigonometry. Int. J. Emerg. Technol. Learn. 15(10) (2020)
14. T.T. Wijaya, T. Jianlan, P. Aditya, Developing an interactive mathematical learning media based on the TPACK framework using the Hawgent dynamic mathematics software, in Emerging Technologies in Computing, 2020, pp. 318–328
15. S. Tan, L. Zou, T.T. Wijaya, N. Suci, S. Dewi, Improving student creative thinking ability with problem based learning approach using Hawgent. J. Educ. 02(04), 303–312 (2020)
16. S. Chotimah, T.T. Wijaya, E. Aprianti, P. Akbar, M. Bernard, Increasing primary school students' reasoning ability on the topic of plane geometry by using Hawgent dynamic mathematics software. J. Phys. Conf. Ser. 1657(1), 012009 (2020)
17. C.K. Tan, Effects of the application of graphing calculator on students' probability achievement. Comput. Educ. 58(4), 1117–1126 (2012)
18. S. Natarajan, C. Soubhik, Buffon's needle problem revisited, in Resonance, 1998, pp. 70–73
19. C.F. Chung, Application of the Buffon needle problem and its extensions to parallel-line search sampling scheme. J. Int. Assoc. Math. Geol. 13(5), 371–390 (1981)
20. A. Salma, D. Fitria, S. Syafriandi, Structural equation modelling: the affecting of learning attitude on learning achievement of students. J. Phys. Conf. Ser. 1554, 012056 (2020)
21. I. Vargianniti, K. Karpouzis, Effects of Game-Based Learning on Academic Performance and Student Interest, vol. 11899 (Springer International Publishing, LNCS, 2019)
22. T. Valtonen, U. Leppänen, M. Hyypiä, E. Sointu, A. Smits, J. Tondeur, Fresh perspectives on TPACK: pre-service teachers' own appraisal of their challenging and confident TPACK areas. Educ. Inf. Technol. 2823–2842 (2020)
23. V. Mudaly, T. Fletcher, The effectiveness of GeoGebra when teaching linear functions using the iPad. Probl. Educ. 21st Century 77(1), 55–81 (2019)

Energy-Efficient Multihop Cluster Routing Protocol for WSN Monika Rajput, Sanjay Kumar Sharma, and Pallavi Khatri

Abstract A wireless sensor network (WSN) is a group of distributed, self-directed, small nodes called sensors that can communicate with each other and with the base station. These nodes have limited battery, memory and processing power, which is why they are not suitable for very large networks. Energy consumption and energy balancing in WSN are most important and require an efficient mechanism, and researchers have proposed a variety of routing protocols to manage energy. The low-energy adaptive clustering hierarchy (LEACH) protocol is one of the efficient algorithms that optimizes energy, distributes the network load equally and increases the lifetime of the network. Still, there are some limitations in LEACH which need to be solved: LEACH is not appropriate for large-area networks because of its CH selection process. In this research paper, an improvement in the CH selection method is proposed to improve the network lifetime and to save residual energy.

Keywords WSN · LEACH · Multilevel · Multihop

1 Introduction

In recent years, researchers have been interested in WSNs due to their potential use in different domains. In the beginning, WSN was used only on the battlefield for military purposes; however, it can also be used in many other areas of human life for different purposes. A WSN is made up of small, tiny devices known as sensor nodes, ranging from hundreds to thousands depending on the requirements of the network [1, 2]. These nodes are spread autonomously to observe physical or environmental data, i.e. sound, temperature, pressure, vibration and motion, at various locations. Nodes in WSN are battery operated, which is why energy plays an important role.

M. Rajput (B) · S. K. Sharma, Banasthali Vidyapith, Banasthali, Rajasthan, India
P. Khatri, ITM University, Gwalior, Madhya Pradesh, India; e-mail: [email protected]

In hierarchical routing protocols, a network has a number of clusters, and a single node called the cluster head (CH) communicates directly with the base station (BS). In order to reduce the overhead of non-cluster-head nodes, the CH aggregates the data and sends it to the BS. All nodes have a chance to become the CH [3, 4]. LEACH provides dynamic clustering and can be used to shrink energy consumption. The network nodes are divided into clusters, and in each cluster one node acts as CH. The other nodes in a cluster send data to the CH, and the CH aggregates the received data and sends it to the BS directly.

2 Literature Survey

Researchers have proposed many hierarchical protocols and algorithms over the years to make WSNs energy efficient. An overview of some of them is given in this section.

In [5], LEACH-DT and a hierarchical extension of LEACH-DT are implemented. In the CH selection process, the proposed protocol takes into account the remaining energy of every node. The data flow is nodes to CH, CH to SCH and SCH to BS. Simulation analysis shows that the lifetime of the WSN is increased.

In [6], the researchers survey LEACH-based protocols. Many improvements over the LEACH protocol have been proposed; cluster formation, data transmission and the energy consumption approach are the main keys for improvement. The distributed LEACH protocol is efficient in comparison to the other variants of the LEACH protocol.

In [7], the researchers examine the LEACH protocol and its amended versions. By integrating the benefits of numerous modifications of the LEACH protocol, the YA-LEACH protocol has been proposed, which uses a centralized cluster formation technique to minimize the energy required for data transmission. This approach makes the network more energy saving by reducing cluster formation.

Nayak et al. [8] proposed a clustering-based energy-efficient protocol by incorporating fuzzy logic. This approach saves a lot of energy by dividing large communications into minimal communications. Simulation analysis shows that the proposed work performs better than LEACH and provides improved network stability and lifetime.

The work in [9] focuses on several types of WSN, from small to large networks. It concludes that a small-scale network does not require redundant transmission techniques. Results show that the proposed approach minimizes energy consumption and the number of transmissions and performs better for large-scale sensor networks.

In [10], a simulation of the LEACH, SEP and NHSEP protocols is presented. The distance of each node is recorded, and the node with the minimum distance is selected as CH. The results show that the performance of the NHSEP protocol beats the other two protocols.

In [11], the author focuses on clustering dimensions in WSN. Node energy consumption is minimized by using a better clustering-dimensions approach. According to receiving time, every node selects its CH; after that, the nodes send a join request to the CHs and transmit data according to their TDMA slots. The analysis shows that TL-LEACH outperforms the LEACH and LEACH-B protocols.

In [12], the author presents the DE-LEACH protocol, a single-hop communication protocol for WSN. This protocol uses the residual energy balance and the distance between nodes to reduce energy consumption. A node sleep-and-wake scheduling scheme is proposed, which provides a suitable improvement in reducing energy path holes. The network lifetime of the DE-LEACH protocol is better than that of LEACH.

In [13], the authors propose K-LEACH to improve the network lifetime. The K-Medoids algorithm is used for uniform cluster formation; unlike in the LEACH protocol, a random selection approach is not used. The deployment of sensor nodes may differ from structured to random locations according to requirements; in the sensor node phase, the additional load of finding a path from a remote location to the servers results in extra energy depletion.

Parihar et al. proposed a quadrant-based routing protocol [14] that makes use of a spanning tree. The objective of the work is to achieve a longer network lifetime, and the results show that the proposed technique is much better than LEACH and NRLM. Goyal et al. [15, 16] developed some energy-efficient approaches for WSN. Singh et al. [17] presented an architectural perspective of WSN in detail and addressed energy-related issues.

3 LEACH Protocol

LEACH is a hierarchical protocol for WSN [18]. In WSN, LEACH is used to make the network energy efficient. The nodes in the network are organized into different clusters. One node of every cluster acts as CH and is responsible for aggregating the data received from the other nodes and sending it to the BS. The CH consumes more energy than the other nodes, and when the CH dies, all nodes of that cluster become unable to communicate. LEACH runs in rounds, and each round has three phases: advertisement, cluster set-up and steady-state.

In the advertisement phase, each node autonomously decides whether to become a CH, based on the desired percentage of CHs and on how often it has already served as CH. Every sensor node picks a random number between 0 and 1; if the number is less than a threshold T(n), the node becomes a CH for the current round:

T(n) = p / (1 − p (r mod (1/p)))  if n ∈ G, and T(n) = 0 otherwise

Here, p is the desired percentage of CHs, G is the set of nodes not chosen as CHs in the last 1/p rounds, and r is the current round. Once a node becomes a CH, it does not become CH again until all the nodes in the cluster have been CH once; this process helps in balancing the energy.
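As an illustration (a minimal sketch, not code from the paper), the threshold test can be written directly in Python, where p is the desired CH fraction and rnd the current round number:

```python
import random

def leach_threshold(p: float, rnd: int, was_ch_recently: bool) -> float:
    """T(n) from the LEACH advertisement phase: zero for nodes that served
    as CH within the last 1/p rounds, otherwise an increasing threshold."""
    if was_ch_recently:  # node is not in G
        return 0.0
    return p / (1.0 - p * (rnd % int(1.0 / p)))

def elect_cluster_head(p: float, rnd: int, was_ch_recently: bool) -> bool:
    # A node becomes CH when its uniform draw falls below the threshold.
    return random.random() < leach_threshold(p, rnd, was_ch_recently)
```

With p = 0.05, the threshold rises over the 20-round cycle until every node still in G is certain to be elected, which is what equalizes CH duty across the cluster.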


Fig. 1 LEACH protocol

After CH selection, an advertisement message is broadcast by each CH. Based on the received signal strength, each non-cluster node chooses its CH.

In the cluster set-up phase, clusters are formed around the selected CHs. A time division multiple access (TDMA) schedule is created based on the number of nodes in the cluster, and the other nodes are informed. Each node transmits its data according to the time schedule.

In the steady-state phase, data transmission starts. All non-CH nodes send their data in their allocated time slots. After transmitting, a node can turn its transmitter off and turn it on again only when it has something to transmit; by doing so, the nodes save a lot of energy. The CH keeps receiving the data from all nodes (Fig. 1).

The LEACH protocol has several advantages compared to direct communication [18, 19]:
• Data aggregation reduces data duplication and saves node energy.
• The TDMA scheduling created during the set-up phase minimizes conflicts between CHs.
• Single-hop communication between a sensor node and its CH saves energy.
• In LEACH, the CH can rotate its role among the other cluster members within the cluster.
• Communication is confined inside the individual clusters, so the protocol provides scalability in the network.

LEACH also has some disadvantages [18–20]:
• There is no uniform distribution of CHs in the network.
• The main disadvantage of LEACH is that when the CH dies for any reason, the whole cluster becomes useless and the collected data never reaches the BS.
• The CH is selected randomly, and residual energy is not considered in cluster formation.


• In every round, all nodes participate in the selection of new CHs, which consumes more energy.
• LEACH is not appropriate for large networks, because the aggregated data is sent over a single hop and this consumes a lot of energy.

4 Proposed Work

A hybrid multilevel multihop LEACH protocol (HM2LP) for large areas is proposed here. The objective of the proposed work is to make the network energy efficient. Similar to LEACH, this work has two phases, set-up and steady-state, and CHs are selected in every round. The working of the proposed approach is as follows.

4.1 Set-Up Phase

Initially, the BS broadcasts a hello packet to all nearby nodes. Intermediate nodes forward this packet to the entire network; the node that receives the hello packet first acts as the forwarding node, and the rest of the nodes discard the packet. The node with multiple paths that is closest to the BS is selected as the initial CH. A cluster member is not eligible as a CH candidate if its energy is below the average energy of the cluster nodes. The CH announces that it will be the CH for the current round by broadcasting an advertisement message to all non-CH nodes. Cluster members send data to the CH via TDMA scheduling. The CH aggregates the information and sends it to the initial CH, and the initial CH carries this information to the BS. When the energy of the initial CH falls below the minimum energy, the CH is reselected using an adaptive neuro-fuzzy inference system. Distance to the BS and other nodes, remaining energy, earlier load and least energy intake are the parameters of the adaptive neuro-fuzzy inference system; the node with the best probability is selected as CH. An alternative CH is also selected; it takes over the responsibility of the CH in the event of the CH running out of power or having less energy than the threshold value. If a sensor node is not selected as a cluster member or CH, it sends data directly to the nearest CH based on distance.

Definitions:
sensor_node_base: base node.
sensor_node_ic: ith sensor node belonging to the cth cluster.
sensor_node_cch: CH sensor node of the cth cluster.
sensor_node_ccm: cluster member sensor node belonging to the cth cluster.
pkt_nip_ds: node information packet from the sth node to the dth node.
pkt_nip_bc_s: node information packet from the sth node to broadcast.
pkt_ds: packet from the sth node to the dth node.
pkt_bc_s: packet from the sth node to broadcast.
ch_pkt_dc: CH packet from the cth cluster to the dth node.
cluster_c: cth cluster.
energy(sensor_node_ic): available energy of sensor_node_ic.
energy_avg(cluster_c): average energy of cluster_c.
P_GA(arg): output of the proposed genetic algorithm with arguments.
P_ANFIS(arg): output of the proposed adaptive neuro-fuzzy inference system with arguments.
energy_min(sensor_node_ic): available energy required for sensor_node_ic to be cluster head.

Algorithm: Initial CH selection

Algorithm: CH and CMs selection


Algorithm: Vice-CH selection
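The three algorithm listings above are given as figures in the original paper. As a rough, hedged reconstruction of their intent from the prose of Sect. 4.1 (the scoring function stands in for the authors' ANFIS and is purely illustrative, as are all names), the CH and vice-CH selection could look like this:

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    energy: float        # residual energy
    dist_to_bs: float    # distance to the base station
    earlier_load: float  # load carried in earlier rounds

def anfis_score(node: Node) -> float:
    # Stand-in for P_ANFIS(...): the paper feeds distance, remaining energy,
    # earlier load and least energy intake into an adaptive neuro-fuzzy
    # inference system; a simple weighted sum is used here for illustration.
    return node.energy - 0.5 * node.dist_to_bs - 0.2 * node.earlier_load

def select_ch_and_vice(cluster: list[Node]) -> tuple[Node, Node]:
    """Members below the cluster's average energy are ineligible; the two
    best-scoring eligible nodes become CH and vice-CH (the vice-CH takes
    over when the CH's energy falls below the threshold)."""
    avg_energy = sum(n.energy for n in cluster) / len(cluster)
    eligible = [n for n in cluster if n.energy >= avg_energy] or cluster
    ranked = sorted(eligible, key=anfis_score, reverse=True)
    vice = ranked[1] if len(ranked) > 1 else ranked[0]
    return ranked[0], vice
```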

4.2 Steady-State Phase

The CH creates the cluster after receiving messages from all the nodes and makes a TDMA schedule based on the number of nodes in the cluster, assigning a time slot to each node in which to transmit its sensed data. Every cluster head selects a vice-CH to work as CH in case the CH's residual energy falls below a certain threshold. CH reselection is applied in individual clusters depending on requirements, rather than across the full network. A CH forwards its data to a nearby CH if it is far from the BS.
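The TDMA schedule itself is straightforward; as a small illustrative sketch (slot length and ordering are assumptions, since the paper does not fix them), the CH can assign one slot per member per frame:

```python
def build_tdma_schedule(member_ids, slot_ms=10):
    """Map each cluster member to a (start, end) transmit window in ms,
    one slot per member per frame, as in LEACH's set-up phase."""
    return {
        node_id: (i * slot_ms, (i + 1) * slot_ms)
        for i, node_id in enumerate(member_ids)
    }

# Frame length is len(member_ids) * slot_ms; nodes sleep outside their slot.
schedule = build_tdma_schedule(["n3", "n7", "n9"])
```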

5 Expected Outcomes

The proposed HM2LP protocol is designed for wide-area WSNs. The CH is selected based on distance, energy consumption and current load. The proposed protocol resolves the current problems of WSN and makes the network more energy efficient, reduces the additional overhead of CH reselection, provides efficient intra-cluster communication, and requires less energy because of the optimized CH selection process.

6 Conclusion

The main objective of this research work is to reduce energy consumption and increase the lifetime of the WSN. In LEACH, residual energy and distance are not considered for CH selection, and the probabilistic CH selection is one of its major drawbacks. The proposed work overcomes these shortcomings by using a new CH selection process with the parameters distance to the BS and other nodes, residual energy, earlier load and least energy intake.


References
1. H. Singh, G.S. Josan, Performance analysis of AODV & DSR routing protocols in wireless sensor networks. Int. J. Eng. 2(5), 2212–2216 (2012)
2. X. Yao, X. Zheng, A secure routing scheme for static wireless sensor networks. IEEE Pacific-Asia Works. Comput. Intell. Indust. Appl. 2, 776–780 (2008)
3. J.N. Al-Karaki, A.E. Kamal, Routing techniques in WSN: a survey. IEEE Wirel. Commun. 11(6), 6 (2004)
4. N.V. Katiyar, S.S. Chand, A survey on clustering algorithms for heterogeneous wireless sensor networks. Int. J. Adv. Netw. Appl. 2(4), 745–754 (2011)
5. V. Gupta, R. Pandey, Modified LEACH-DT algorithm with hierarchical extension for WSNs. Int. J. Comput. Netw. Inf. Secur. 8(2), 32–40 (2016)
6. A. Braman, G.R. Umapathi, A comparative study on advances in LEACH routing protocol for wireless sensor networks: a survey. Int. J. Adv. Res. Comput. Commun. Eng. 3(2), 5683–5690 (2014)
7. W.T. Gwavava, O.B.V. Ramanaiah, YA-LEACH: yet another LEACH for WSNs. Proc. IEEE Int. Conf. Inf. Process. ICIP 2015, 96–101 (2016)
8. P. Nayak, A. Devulapalli, A fuzzy logic-based clustering algorithm for WSN to extend the network lifetime. IEEE Sens. J. 16(1), 137–144 (2015)
9. H.F. Chan, H. Rudolph, New energy efficient routing algorithm for WSN, in IEEE Region 10 Conference TENCON 2015 (Macao, China, Nov 2015), pp. 1–4, ISBN: 978-1-4799-8639-2
10. S. Pothalaiah, D.S. Rao, New hierarchical stable election protocol for WSNs, in International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS) (Coimbatore, 2015), pp. 1–5. https://doi.org/10.1109/ICIIECS.2015.7192856
11. S. SanthaMeena, S. Bhajantri, J. Manikandan, Dimensions of clustering in WSN, in International Conference on Information Processing (ICIP), Vishwakarma Institute of Technology, Dec 16–19, 2015
12. S. Kumar, M. Prateek, N.J. Ahuja, B. Bhushan, DE-LEACH: distance and energy aware LEACH. Int. J. Comput. Appl. 88(9), 36–42 (2014)
13. Z. Han, J. Wu, J. Zhang, L. Liu, K. Tian, A general self-organized tree-based energy-balance routing protocol for wireless sensor network. IEEE Trans. Nucl. Sci. 61(2), 732–740 (2014)
14. V. Parihar, P. Kansal, Quadrant based routing protocol for improving network lifetime for WSN, in 2015 Annual IEEE India Conference (INDICON) (New Delhi, 2015), pp. 1–5. https://doi.org/10.1109/INDICON.2015.7443334
15. A. Goyal, S. Mudgal, S. Kumar, A review on energy-efficient mechanisms for cluster-head selection in WSNs for IoT application. IOP Conf. Ser. Mater. Sci. Eng. 1099(1), 012010 (2021). https://doi.org/10.1088/1757-899X/1099/1/012010
16. A. Goyal, V.K. Sharma, S. Kumar, R.C. Poonia, Hybrid AODV: an efficient routing protocol for Manet using MFR and firefly optimization technique. J. Interconnection Netw. 16(8) (2021). https://doi.org/10.1142/S0219265921500043
17. A.P. Singh, A.K. Luhach, X.Z. Gao, S. Kumar, D.S. Roy, Evolution of wireless sensor network design from technology centric to user centric: an architectural perspective. Int. J. Distrib. Sensor Netw. 16(8) (2020). https://doi.org/10.1177/1550147720949138
18. W.B. Heinzelman, Application specific protocol architectures for wireless networks, Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 2000
19. M. Haneef, Z. Deng, Comparative analysis of classical routing protocol LEACH and its updated variants that improved network life time by addressing shortcomings in wireless sensor network, in 2011 Seventh International Conference on Mobile Ad-hoc and Sensor Networks (MSN), 2011, pp. 361–363
20. J.F. Yan, Y.L. Liu, Improved LEACH routing protocol for large scale wireless sensor networks routing, in Proceedings of International Conference on Electronics, Communications and Control (ICECC), 2011, pp. 3754–3757

A Review of Smart Electronic Voting Machine Anshuman Singh , Ashwani Yadav , Ayush Kumar , and Kiran Singh

Abstract The electronic voting machine (EVM) is a basic electronic device used to record votes in place of the ballot papers and boxes used until recently in conventional voting systems. The fundamental right to vote in elections forms the basis of democracy. In all earlier elections, whether general or local, a voter cast a ballot for the candidate he preferred by stamping against that candidate's name and folding the ballot paper as prescribed before placing it in the ballot box. This is a long, tedious and error-prone process. The situation persisted until the electoral process was completely changed by the electronic voting machine: no more ballot papers, polling booths, stamps, and so on, all of which are subsumed in a simple box called the ballot unit of an electronic voting machine. Because biometric identifiers cannot easily be misplaced, forged, or shared, they are considered more reliable for human recognition than traditional token-based or knowledge-based techniques; the electronic voting system should therefore be improved based on current biometric technology. This paper presents a complete review of voting devices, issues, and comparisons between conventional voting methods and the biometric EVM. Apart from its main functional properties, the proposed system is designed to address several important requirements, the most significant being accuracy, robustness, readability, consistency, and security. To ensure the durability and dependability of the proposed system, extensive computer simulation was conducted under various voting conditions, namely voter congestion, voter turnout, and deliberately introduced fraudulent behavior. Simulation results show that the security and operation of the system meet expectations.

Keywords Smart biometric electronic voting machine · IoT (Internet of Things) · Electronic voting machine

A. Singh (B) · A. Yadav · A. Kumar · K. Singh, School of Computer Science and Engineering, Galgotias University, Greater Noida, India
K. Singh, e-mail: [email protected]


1 Introduction

Elections permit individuals to choose their own representatives and express their preferences about how they will be governed. Naturally, the integrity of the electoral process is fundamental to the integrity of democracy itself. The election process must be strong enough to withstand a variety of fraudulent behaviors and must be transparent enough that voters and candidates can accept its results. This paper presents research on the state of the art in electronic voting, including various activities carried out in Internet voting and the disagreements over its use. Electronic voting refers to the use of computers or electronic voting machines in elections; sometimes this term is used simply to refer to voting that takes place over the Internet. Electronic applications can be used to register voters, tally votes, and record votes. Nevertheless, counting errors are possible, and in some cases voters get more than one way to cast a ballot, bringing inconsistencies into the final calculated results; in extreme cases, the electoral process may need to be repeated completely. Also, in certain countries, deliberately introduced fraudulent votes distort the election results. Many such evils can be avoided by the electoral process under investigation; yet where the numbers of election votes are very large, errors remain. Indeed, international monitoring bodies are frequently required to observe elections in certain countries. This normally requires a completely automated, networked computer election process. Beyond overcoming the most common electoral pitfalls, the vote calculations are done in real time, so that at the end of the election day the results are released automatically.

2 Electronic Voting Systems

There have been several efforts to use computer technology to improve elections [1]. These efforts caution against the dangers of moving too fast to adopt electronic voting machines, given software engineering challenges, insider threats, network threats, and auditing challenges. An electronic voting machine is a simple machine that can be easily used by polling staff and voters. Being a standalone machine with no network connection, no one can interrupt its operation or manipulate the result [2]. To cope with the poor electrical supply in many parts of the country, the equipment is made to run on batteries. It has two units: a control unit and a balloting unit. The control unit is the main unit, which stores all data and controls the EVM's operation. The program that controls the unit is burned into a microchip on a one-time-programmable basis; once burned, it cannot be read, copied, or altered. EVMs use strong encoding to strengthen the security of the data transferred from the balloting unit to the control unit.

3 Authenticity of Voting Process

Certain features play a significant role in the voting process of a given country. The culture itself and the supporting social factors and values determine the norms and regulations governing any voting process. In countries where the election results are determined by voter-driven input through special direct-entry voting cards designed for polling stations, there is an intolerable tendency to manipulate election votes in many ways: some voters often try to vote more times than allowed by law, some may try to vote in place of other voters, and counts may be inflated to benefit particular individuals. Counterfeit or fake votes are another potential problem endangering the integrity of the electoral process [3]. A well-designed electoral process, relying on modern computer and ICT technology, can substantially reduce the many factors that hinder the reliable conduct of elections. Even so, information technology on its own can only verify or confirm the identity of a given voter; in practice, it cannot by itself prevent every attempted abuse of the voting system, that is, voters who simply try to vote on behalf of others (fraud). Without additional measures, the integrity of the voting process, within the formal setting, falls far short of any acceptable standard; the introduction of biometrics certainly has the added benefit of reaching the required levels of election integrity.

4 Privacy of the Voter Rights

Biometrics is best described as measurable physiological and/or behavioral features that can be used to verify an individual's identity. It includes fingerprints, retinal and iris scanning, hand geometry, voice patterns, facial recognition, gait recognition, DNA, and other techniques. These can be applied in any area where it is important to verify the true identity of an individual. Initially, these techniques were used mainly in high-security applications; however, we are now seeing their proposed use in a wide range of community-facing environments.

A biometric system performs two functions: identification and verification. Identification involves recognizing a person against all biometric measurements collected in a database; the question this process answers is "Who is this?", and it therefore involves one-to-many comparisons. Verification involves confirming the identity of a person against a previously enrolled pattern; "Is this who you claim to be?" is the question this process seeks to answer, and it involves a one-to-one comparison [4]. Verifying an individual identity against a given biometric measure involves five stages the system has to go through. Initially, the raw data are captured from an individual with sensing devices. The collected data are then sent over the network to a dedicated central site that hosts the biometric system [5]. The program then performs the individual matching, using dedicated and/or customized matching strategies. Figure 1 shows the flow of data in the whole biometric identification process.

Fig. 1 The flow of data in the whole biometric identification process
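The one-to-one versus one-to-many distinction can be made concrete with a small sketch (illustrative only; the paper does not specify a matcher, so a generic similarity score and threshold are assumed):

```python
def similarity(template_a, template_b) -> float:
    """Stand-in for a real fingerprint/iris matcher returning a score in [0, 1]."""
    matches = sum(1 for a, b in zip(template_a, template_b) if a == b)
    return matches / max(len(template_a), 1)

def verify(probe, enrolled_template, threshold=0.9) -> bool:
    # Verification: one-to-one comparison, "Is this who you claim to be?"
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, database, threshold=0.9):
    # Identification: one-to-many comparison, "Who is this?"
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```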

5 India's Experience in e-voting

India is the world's largest democracy, with more than one billion people. India has more than 668 million voters across 543 parliamentary constituencies. Voting is the bridge between the governed and the government. A hand-counted national election in India could consume around 8000 tons of paper and 400,000 vials of indelible ink, and required 2.5 million ballot boxes capable of being stored under heavy security until the votes were counted. In the past, it took three to four days or more to count the votes, with hired workers spending day and night in secure locations counting each vote by hand. At times, narrow margins between two candidates, combined with a large number of invalid and doubtful votes, led to disputes over the result [6]. Electronic voting machines are intended to reduce errors and speed up the counting process. The country upgraded to electronic voting machines (EVMs) built on conventional technology, made by Bharat Electronics Ltd. and Electronics Corporation of India Ltd., with microchips imported from Japan. The country has deployed more than one million EVMs for its 668 million voters, at considerable cost. Each machine can accommodate up to 64 candidates in an election, 16 per ballot page. The technology has managed to solve many problems of the traditional voting system; however, before its adoption, five pilot programs were run to familiarize voters with the technology.


6 Types of Electronic Voting Machine

The different types of EVM are as follows.

Direct recording electronic voting system
DRE voting machines, such as those from Premier Election Solutions (formerly Diebold Election Systems), have been applied in every recent Brazilian election. A DRE voting machine records votes by means of a ballot display provided by mechanical or electro-optical components that can be activated by the voter (typically buttons or a touch screen); it processes the data with computer software and records the voting data and ballot images in memory components. After the election, it shows the vote count from data stored in a removable memory component and as a printed copy. The system can also transmit individual ballots or vote totals to a central location, for consolidating and reporting results from precincts at the central site.

Paper-based electronic voting system
Sometimes called a "document ballot voting system," the paper-based voting system originated as a system where votes were cast and counted by hand, using paper ballots. With the advent of electronic tabulation came systems where the paper cards or sheets could be marked by hand but counted electronically. More recently, these systems can include an Electronic Ballot Marker (EBM), which allows voters to make their choices using an electronic input device, usually a touch screen, similar to a direct recording electronic (DRE) machine. Systems that include a ballot-marking device may incorporate various forms of assistive technology.

Indian EVM device
India is the largest democracy in the world. It is remarkable in that it preserves its social, regional, religious, and cultural diversity while still standing on its own. In 2004, India adopted electronic voting machines for its parliamentary elections, with 380 million voters casting their votes using more than one million voting machines. India's EVMs are designed and developed by two government-owned equipment production units, Bharat Electronics Limited (BEL) and Electronics Corporation of India Limited (ECIL). The two systems are identical, and they are developed and specified by the Election Commission of India. The system is a set of two devices running on 6 V batteries.

Public network DRE voting system
The public network DRE voting system is an election system that uses electronic ballots and transmits vote data from the polling station to another location over a public network. Vote data may be transmitted as individual ballots as they are cast, periodically as batches of ballots throughout election day, or as a single batch at the close of voting. This includes Internet voting as well as telephone voting. A public network voting system can use precinct-count or central-count methods; the central-count method tabulates ballots from multiple locations at a central site.

Diebold AccuVote-TS
The Diebold AccuVote machine is a system that has been analyzed [7] and is used in the State of Maryland. It uses a touch screen and a card reader; the voter receives a card after his or her identity is confirmed by a polling official. Indeed, the CVS source code repository of Diebold's AccuVote-TS DRE voting system recently appeared on the Internet [8]. This event, announced by Bev Harris and discussed in her book Black Box Voting [9], gives us a unique opportunity to audit the security claims of a widely used, paperless DRE product.

Hart InterCivic eSlate
The Hart InterCivic eSlate is a hardware-based voting device that does not have a touch screen. It shows the ballot in a simultaneous layout, displaying multiple races on one page. Voters navigate using "prev" and "next" keys, which are triangular in shape. Voting itself is accomplished by rotating a dial labeled "select" to highlight the desired choice; to cast the vote, the "enter" button is pressed. After all votes have been entered, the user presses the red "cast ballot" button.

SureVote
The SureVote company provides a system that offers strong protection against breakdowns or fraud. During voting, users authenticate themselves and their right to vote using a personal identification number and a numeric voting code. They can then enter a four-digit "vote code" for each race; an error message is presented if the code is invalid for that race. If the code is valid, the vote is sent to servers that keep the votes distributed nationwide. Each server returns a number, which the client combines into another four-digit code, the "verification code."

VoteHere Platinum
VoteHere Platinum uses an intuitive, entirely software-based touch interface. It can be run on any PC with a touch-screen monitor. However, this also means that the tactile affordances hardware buttons provide are absent, and it introduces new risks, since the PC on which the software runs may have been compromised. The VoteHere interface shows each race on the screen one at a time; the voter presses the "next" and "back" buttons at the top of the screen to navigate among the races.


Biometric EVM
Biometrics means an automated system that can identify a person by measuring his physical or behavioral differences or patterns, and comparing them to those on record. In other words, instead of asking for identity cards, magnetic cards, keys or passwords, biometrics can identify fingerprints, faces, irises, palm prints, signatures, DNA, or retinas individually, for simple and convenient verification. With the growth of Internet-based commerce and the growing need for accurate verification when accessing accounts, biometrics is the simplest and easiest solution. Biometrics can also provide convenience and security, by enabling a machine to verify an individual directly and respond to individual requests. Biometric identification is used for convenience (e.g., ATM withdrawals without a card or PIN), better security (e.g., harder-to-defraud access), and high efficiency (e.g., lower overhead than computer password storage). The striking success of fingerprint-based recognition technology in law-enforcement applications, the falling cost of fingerprint devices, the increasing availability of cheap computing power, and growing identity misappropriation and theft have all contributed to the rise of fingerprint applications for personal recognition in civil and financial institutions. The EVM should likewise be improved based on current biometric technology. Some previous works deliberately use fingerprints to identify voters or to prove validity. As everyone's fingerprints are different, this helps increase accuracy. A database is created containing the fingerprints of all voters in a constituency, and illegal votes and repeat votes are checked against this database. So, if this system is adopted, an election will be fair and free of fraud. The fingerprint identification system should:
• prevent one person's fingerprint from standing in for another's at any given time;
• decide quickly whether a print matches or not;
• capture on touch, so that the thumbprint is saved as soon as a person places a thumb on the sensor.
The functionality is as follows. Thumbprints should be recorded two months before voting; here people register their prints. At the time of the actual vote, the voter places a thumb on the touch-sensitive area. If the print matches a registered one, the voter is allowed to vote. If the print was not previously saved, a single beep is given, so the person cannot vote; and if the same person tries to vote again, the system gives a double beep so that security can be notified. The system is designed to detect duplicate prints and to beep repeatedly.
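The duplicate-vote check described above can be sketched as follows (a hedged illustration; the matcher, the beep interface, and the data layout are assumptions, not the paper's implementation):

```python
def check_voter(print_id, registered, already_voted):
    """Return the action for a thumbprint scan at the polling booth.

    registered: set of enrolled print ids; already_voted: set of print ids
    that have already cast a ballot in this election.
    """
    if print_id not in registered:
        return "single beep: unregistered, voting denied"
    if print_id in already_voted:
        return "double beep: duplicate attempt, alert security"
    already_voted.add(print_id)
    return "allow vote"
```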

7 Comparison of EVM Among All the Countries

The past few years have brought renewed focus on the technology used in the voting process. The current voting system has many security holes, and it is difficult to prove even basic safety


properties for them. Nevertheless, the electronic voting system turns out to be more accurate than most concerns suggest. The other reasons why governments are adopting the technology are the following: increasing election activities and reducing election costs. However, there is a certain degree of risk in an electronic voting system, because there is no way for an electronic voting system to detect whether the user is genuine or not, and the electronic voting machine must be protected from perpetrators. The next section gives a review of the experience of the other countries that use electronic voting machines [6]. The focus of the comparison is the adoption of internationally accepted electronic voting systems.

8 Advantages of EVM

• Increasing the level of participation—maximize user participation
• Security—secure election
• Accessibility—increased accessibility
• Auditability—the whole voting process is auditable end to end
• Efficiency—increases the efficiency of election management compared to traditional paper voting
• Precision—accurate and quick publication of results

9 Challenges of EVM

Worldwide, election officials are investigating various technologies for dealing with different sets of voting issues, for example, adaptation of the system and its acceptance by all stakeholders, including ordinary people living in remote valleys, some of whom may also be uneducated. The system implementation should resemble a traditional ballot-paper system as much as reasonably possible, with cost-efficient and convenient delivery and storage of the system, system reliability and security in terms of intrusion resistance and error-free operation, and speed and efficiency of voting and publication of results.
Accessibility Perhaps the most concerning issue identified with DRE voting is accessibility. In systems designed by computer engineers, accessibility is often the aspect cared about least. Various classes of voters can easily be impeded by a voting system designed only for "ordinary" users. The most evident is the disabled voter. The Voting Access for the Elderly and Handicapped Act (VAEHA), passed in 1984, mandates the accessibility of polling stations used


by the elderly and the disabled. According to the National Disability Association, the DRE voting system is the most accessible technology, compared with lever, punch-card, optical-scan and hand-count systems.
Age and Technical Experience Computer novices can run into trouble in DRE elections. Research suggests that older people perform worse than younger adults in carrying out computer-based tasks. This is true both for the time needed to perform the work and for the number of errors made. In one recent study, age correlated with difficulty in performing tasks with a computer mouse. Although popular DRE systems do not use a computer mouse, similar issues exist. Older adults may struggle to read a computer screen, and getting the right sense of the relationship between a screen or button action and the program's behavior can be a problem.
Bias Beyond access, the issue of bias arises in any legitimate election. The actual design of the ballot matters to some degree, for candidates believe that their position on the ballot changes the odds that a voter will vote for them. For example, candidates listed first are favored. Therefore, many officials randomize the order on the ballot; in many cases, candidates are ordered by party, lottery, or alphabetically. Electronic ballots cannot avoid these pitfalls, for the very reason that paper ballots cannot: names on the ballot must be presented in some order.
Accountability and Verifiability Traditionally, votes were written on paper and counted by hand. Voters were convinced that the marks they made themselves recorded their vote. Voting machines that use levers and punch-card systems also gave voters high confidence that they cast their votes as intended. Until the 2000 election, voters generally believed their votes were counted accurately. Many press on the verification issue because electronic voting programs are supplied by private companies, and the government generally does not oversee the production of the programs it chooses whether to use.

10 Simulation Results

A simulation model has been developed to verify the performance of the proposed electronic voting system. The simulation also helps provide the right deployment guide for setting up the voting system in terms of server requirements, network bandwidth, polling stations, and so on.


The simulation environment includes an Oracle database for the voters' and candidates' records, including personal identification details, credential records and voting details. The simulator also includes modules emulating the arrival of voters at polling stations as well as the voting process itself. The simulator allows a voter to cast a vote at any polling station, regardless of his actual voting area (district); this is one of the main benefits of e-voting systems. Voters arrive at the polling station according to a Poisson arrival process, and the inter-arrival times separating the various arrivals are treated as a distinct random variable. The expected number of voters arriving at a polling station is set by the system administrator a priori; this is justified by the fact that the number of voters in a given constituency is known in advance. Each voter swipes his official identity card through a magnetic card reader and is then asked to confirm his identity with his fingerprint on the screen; the terminal then displays pictures of the candidates in the voter's constituency. If the voter's record indicates that another presentation type is required (as embedded in the details on the voter's ID card), such as audio, those forms are used instead of the candidate picture display. The voter selects a candidate in the election by touching the image that shows the candidate's picture. The system also allows voting by audio means for those voters with special needs. Once the voting process for a voter is concluded, the vote count of the chosen candidate is incremented. In the simulator, the speed of the voting process is governed by several limiting factors: first, queue length was considered to negatively affect waiting voters; second, a voter cannot enter the polling station until the previous voter's vote is acknowledged by a response from the server; third, the network response time, i.e., the available network bandwidth, plays a significant role in determining the transaction time for each voter. In our simulation, and for the specific purpose of this paper, we assume the network bandwidth has little effect on the voting process under examination; since we use a client/server model with an embedded local DB infrastructure, we expect little network impact throughout the cycle.
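The Poisson arrival process used by the simulator is easy to reproduce. The sketch below is a minimal illustration in Python, not the authors' simulator; the arrival rate and voter count are assumed values, and it relies on the standard fact that the inter-arrival gaps of a Poisson process are exponentially distributed.

import numpy as np

rng = np.random.default_rng(seed=42)

# Expected voters per hour at this polling station (known a priori
# from the constituency size, as the paper assumes).
arrival_rate = 120.0                    # lambda, arrivals per hour
n_voters = 500

# Inter-arrival gaps of a Poisson process are exponential with mean 1/lambda.
gaps = rng.exponential(scale=1.0 / arrival_rate, size=n_voters)
arrival_times = np.cumsum(gaps)         # arrival instants, in hours

print(f"First voter at {arrival_times[0]*60:.1f} min, "
      f"last of {n_voters} voters at {arrival_times[-1]:.2f} h")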

11 Development We Can Do in Future

As the EVM design is ready for the election system of any country, it needs only a little enhancement. Authentication should be extended to a second level (the first level being the VOTER ID) using thumbprint or iris technology, so that impersonation by voting agents and voting by unauthorized electors can be avoided. If the current EVM technology were extended with networking capability, one could cast


a ballot from anywhere on the planet from any web terminal, provided a thumb/iris scan matches the record, on the same day. That biometric EVM network should be designed for safety and accessibility, so that the result is available as soon as the election closes and we get results on the election date itself. With minimal revision, the improved EVM software would allow the elections of the legislature and parliament to be conducted at the same time and could also use local physical facilities. EVMs should be designed to address the larger needs of the people, so that we can conduct elections nationwide within a day.

12 Conclusion

This survey presented an introduction to the EVM and its varieties, the issues of EVMs, their taxonomy, and biometric-based EVMs. Our attempts to understand electronic democratic systems leave us optimistic, yet concerned. This paper suggests that the EVM design should be further examined and improved to reach all levels of deployment, so that voter confidence will increase and election officials will take greater interest in purchasing the improved EVMs to conduct smooth, secure, tamper-resistant elections.

References

1. California Internet Voting Task Force. A Report on the Feasibility of Internet Voting. Jan 2000
2. Voting: What Is; What Could Be (Caltech/MIT Voting Technology Project, July 2001)
3. R. Mercuri, Electronic Vote Tabulation Checks and Balances. PhD thesis, University of Pennsylvania, Philadelphia, PA, Oct 2000
4. J. Smith, S. Schuckers, Improving Usability and Testing Resilience to Spoofing of Liveness Testing Software for Fingerprint Authentication, 2005
5. S. Nanavati, M. Thieme, R. Nanavati, Biometrics: Identity Verification in a Networked World (John Wiley and Sons, Inc. 2002)
6. S. Kumar, E. Waliam, Analysis of electronic voting system in various countries. Int. J. Comput. Sci. Eng. (IJCSE) 3(5). ISSN: 0975-3397, May 2011
7. B. Benjamin, B.L. Bederson, R.M. Sherman, P.S. Herrnson, R.G. Niemi, Electronic voting system usability issues, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2003
8. T. Kohno, A. Stubblefield, A.D. Rubin, D.S. Wallach, Analysis of an electronic voting system, in Proceedings of IEEE Symposium on Security and Privacy, May, 2004
9. B. Harris, Black Box Voting: Vote Tampering in the 21st Century (Elon House/Plan Nine, July 2003)

Application of Machine Learning Techniques in Intrusion Detection Systems: A Systematic Review Puneet Himthani and Ghanshyam Prasad Dubey

Abstract In recent years, the developments in the domains of technology, communication and Internet have led to a drastic increase in cybercrimes, hacking, and other online frauds, as unauthorized users try to breach the security policies and gain access to resources falsely. This is due to the fact that we are using Computers and Internet in almost all aspects of our life like Shopping, Banking, etc. Security is an important feature for almost all the systems in this real world and at the current time, it is necessary to keep our systems safe from such security breaches. Intrusion Detection System (IDS) is an important tool or solution that can be implemented and deployed on networks or systems or both to keep them secure and away from unauthorized access. It monitors the network or system and looks for an abnormal activity; in such a case, it generates an alarm signifying that some intrusion or malicious event has occurred in the system. Machine Learning (ML) plays an important role in enhancing the performance of a system by making it intelligent. ML-based approaches will ensure that IDS will acquire new knowledge while operating based on existing knowledge and will be able to detect new or unknown attacks with ease. This paper provides a brief introduction about the IDS, ML-based approaches, recent works being carried out by other researchers for implementing the ML-based IDS models, and a comparative analysis of all those works specifying the benefits and shortcomings of each of them. Keywords Intrusion detection systems · Machine learning · True positive · False alarm rate · Sensitivity · Specificity · Security

P. Himthani (B) Department of Computer Science and Engineering, TIEIT, Bhopal, M.P., India G. P. Dubey Department of Computer Science and Engineering, SISTec, Bhopal, M.P., India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_10


Fig. 1 Block diagram of IDS

1 Introduction

Intrusion Detection System acts as a security management system for networks or systems. It collects and analyzes the information from the network or system and looks for possible security breaches like intrusions and misuse. It is a solution that will be deployed to monitor the network or system for abnormal activities, and it generates an alert or raises an alarm as soon as malicious behavior or a security breach is detected in the network or system [1]. The traditional model of IDS (Fig. 1) is composed of four basic modules, viz., Event Gatherer, Pre-processing, Detection, and Alert. Event Gatherer monitors the network or system and collects information about the events occurring in the network. Pre-processing of collected information is necessary to extract the required features of events based on which the detection is being carried out. The Detection module takes the pre-processed information and classifies the event as normal or malicious. The Alert module generates an alarm if the detection module classifies the event as malicious or an act of intrusion [2–4]; a toy sketch of this pipeline is given at the end of this section. Intruders are mainly classified into 3 categories, as Masquerader, Misfeasor, and Clandestine User. Masquerader is an outsider, Misfeasor is an insider and Clandestine User can be an outsider or, mostly, an insider; each tries to breach the security policy and gain access to information through unfair means. Misfeasors usually misuse their privileges [5]. The main goal of implementing a Security solution is to ensure Confidentiality/Privacy, Integrity and Availability of Data, even in case of System failures. Authentication and Non-Repudiation are also important aspects of Security [6]. IDS are classified into 3 types based on their place of deployment as Network-based IDS (NIDS), Host-based IDS (HIDS) and Hybrid IDS [7, 8]. IDS can also be classified based on detection approach, as Signature-based IDS, Anomaly-based IDS [9, 10] and Specification-based IDS [8] (Table 1).
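As a rough illustration of the four-module structure of Fig. 1, the following Python skeleton wires an event gatherer, a pre-processor, a detector and an alert stage together. The interfaces and the toy classifier are assumptions made for illustration only, not the IDS model of any of the surveyed works.

# Illustrative skeleton of the four IDS modules of Fig. 1 (assumed interfaces).

def event_gatherer(source):
    """Collect raw events from the monitored network or system."""
    for raw_event in source:
        yield raw_event

def pre_process(raw_event):
    """Extract the features the detector needs from a raw event."""
    return {"size": len(raw_event), "payload": raw_event}

def detect(features, classifier):
    """Classify the event as normal or malicious (True = malicious)."""
    return classifier(features)

def alert(event):
    print(f"ALERT: intrusion suspected in event {event!r}")

def run_ids(source, classifier):
    for raw in event_gatherer(source):
        if detect(pre_process(raw), classifier):
            alert(raw)

# Toy run: flag any "event" longer than 10 characters.
run_ids(["ping", "GET /index", "AAAAAAAAAAAAAAAA-overflow"],
        classifier=lambda f: f["size"] > 10)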

2 Machine Learning

Machine Learning (ML) is one of the hottest trends in the domain of Computing in recent times. ML is responsible for imparting knowledge and learning ability in models, so that they can efficiently carry out the specified operations for which they are being developed and, at the same time, acquire new knowledge to enhance their performance and carry out the operations more effectively and efficiently [11].


Table 1 Comparison of detection approaches for IDS

S. No.  Specification                         Signature based        Anomaly based       Specification based
1       Principle for attack identification   Known attack patterns  Abnormal behavior   Violation of pre-defined rules and policies
2       Detection rate                        High                   Low                 High
3       False alarm rate                      Low                    High                Low
4       Detection of new types of attacks     No                     Yes                 No

The application of the ML-based approach for implementing IDS will result in improved performance and accuracy in detecting the abnormal events in the network or system. Anomaly detection-based IDS with ML is most suitable to implement, because it will detect the unknown or new types of attacks with ease and, at the same time, Accuracy and Detection Rate will be improved by the use of the ML approach, which will overcome the high False alarm rate bottleneck of the Anomaly-based approach [12, 13]. Learning is classified into various types as Supervised, Unsupervised, Semi-Supervised, Reinforced, and others. Supervised Learning is implemented using Labeled Datasets. New Instances are classified on the basis of Instances whose Class is already known. Classification is the most common Supervised Learning technique [14]. Supervised Learning is further classified as Logic Based (Decision Trees, Neural Networks, RNN, CNN, etc.), Perceptron based (MLP, Radial Bias, etc.), Classification (KNN, etc.) and Probabilistic (Naïve Bayesian, Bayesian Networks, etc.) [15]. Unsupervised Learning is implemented using Unlabeled Datasets. New Instances are classified on the basis of the distance from known instances. Clustering is the most common Unsupervised Learning technique [14]. Unsupervised Learning is further classified as Neural Network based (Deep Belief Networks, ART, Auto Encoders, etc.), Clustering (K Means, K Median, Hierarchical Clustering, etc.), Dimensionality Reduction (PCA, ICA, LDA, Random Forest, etc.) and Dimension Estimation [15] (Table 2).
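As a concrete instance of unsupervised, anomaly-based detection, the sketch below fits scikit-learn's IsolationForest on unlabeled traffic-like features and flags outliers. The synthetic feature matrix and the contamination level are assumptions for illustration; a real IDS would use features extracted from a dataset such as NSL-KDD.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed toy feature matrix: rows are connections, columns are numeric
# features (e.g., duration, bytes sent); most traffic is normal, a small
# cluster of outliers stands in for attack traffic.
normal = rng.normal(loc=0.0, scale=1.0, size=(980, 4))
attacks = rng.normal(loc=6.0, scale=1.0, size=(20, 4))
X = np.vstack([normal, attacks])

# contamination is the assumed fraction of anomalous traffic.
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
pred = model.predict(X)                 # +1 = normal, -1 = anomaly

print(f"flagged {np.sum(pred == -1)} of {len(X)} connections as anomalous")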

3 Related Work

Kumar et al. proposed a Machine Learning Classification model for implementing Network IDS based on Decision Trees and Rule-based Classifiers. Their model was able to detect unknown and new types of attacks [2]. Chawla et al. proposed an Anomaly detection IDS based on Recurrent Neural Network (RNN) and Convolution Neural Network (CNN). CNN is responsible for the pre-processing of datasets, necessary for increasing the speed of operation. RNN is responsible for classifying the activity as normal or malicious [16]. Divyatmika and Sreekesh proposed a two-tier architecture for Anomaly-based Network IDS using a KNN Classifier and MLP.


Table 2 Taxonomy of machine learning techniques for IDS

1. ANN [11]: • Simulates the behavior of the human brain and simple learning • Handles inconsistent and noisy datasets efficiently • High training time and slow processing
2. SVM [13]: • Accurate, fast and robust • Suitable for nonlinear classification and regression • High training time • Highly dependent on the decision boundary for classification
3. Bayesian Classifier [11]: • Fast, scalable and probabilistic in nature • Handles uncertainty and incomplete datasets efficiently • Computationally expensive; not suitable for large datasets
4. FL [12]: • Handles uncertainty and approximations • Suitable for discrete as well as continuous attributes • Easy to implement and define rules • Complex to reduce the dataset with relevant features • Difficult to update the rules
5. GA [13]: • Based on the evolutionary theory of biology • Minimizes false positives and suitable for large datasets • Easy to add new rules and re-train the model • High response time • Critical to select a suitable fitness function for an optimal result
6. DT [11]: • High detection accuracy and suitable for large datasets • High computation costs

The KNN classifier is used at the first level for segregating the normal patterns from new or unknown patterns. This will reduce the overheads in the second level of classification. The second level classifier will now have to deal with unknown patterns only, deciding which of these unknown patterns represent intrusion and which represent normal behavior [17]. Tao et al. proposed an improved IDS based on GA and SVM. Their approach is based on the concept of optimal Feature Selection. GA will be used to determine the suitable patterns (Chromosomes) based on the Fitness function, which will be used to reduce the training time of SVM and increase the accuracy of the model. SVM is used to classify a pattern as normal or malicious [18]. Narsingyani and Kale proposed an Anomaly-based IDS using GA. Their approach mainly consists of 2 phases. In phase 1, classification rules are defined and in phase 2, the task of intrusion detection is being carried out. GA is responsible for defining the set of rules for classifying the patterns as normal or malicious. GA is also responsible for reducing the False positive rate [19]. Vinayakumar et al. proposed a scalable framework for implementing the Hybrid IDS based on Deep neural networks. A Multi-Layer Perceptron is used to perform the task of classification. HIDS observes the System Calls and NIDS monitors the TCP Traffic. This model can perform binary classification as well as Multi-Class classification [20]. Karatas and Sahingoz proposed a multi-layer ANN-based IDS. The ANN implemented consists of 2 hidden layers and repetitive iterations are being carried out in the testing phase to reduce


the error rate of the model. Different Training functions are used to analyze the performance of the proposed model. Results show that the TRAINLM (Levenberg–Marquardt Optimization) function has the lowest error rate and better performance than the other 7 functions [21]. Yin et al. proposed an IDS based on a Deep Learning approach. Their model is based on Recurrent Neural Networks (RNN). In RNN, information is propagated in both forward and backward manner. The proposed model will be able to perform Binary classification as well as multi-class classification [22]. Gao et al. proposed an adaptive ensemble learning model for IDS based on Machine Learning. The Ensemble approach selects a combination of various ML classifiers like DT, SVM, KNN, DNN, Regression, etc. The proposed approach is a modified form of DT where a multi-tree pattern is designed for carrying out the task of classification. Ensemble Classifiers and Cross-Validation are used to improve the Accuracy, Detection rate and performance of the model [23]. Tahir et al. proposed a hybrid ML technique for IDS. Their proposed model is based on the K Means Clustering algorithm and the Support Vector Machine (SVM) [24]. Zhang et al. proposed an effective NIDS based on Deep Learning and Anomaly detection. The major tasks involved in the process include Feature Selection and Classification. Feature Selection will be performed by a De-noising Auto-Encoder (DAE) and the task of Classification is accomplished by a Multi-Layer Perceptron (MLP) [25]. Meryem et al. proposed an Anomaly-based hybrid IDS using the ML technique. Their hybrid approach combines the features of both Signature-based and Profile-based Detection approaches to enhance the performance of the IDS model. Their model is based on the K Means Clustering approach and the principle of Map/Reduce [26]. Ingre et al. proposed a DT-based IDS. DT is a traditional approach for supervised learning classification and regression. Feature Extraction on the data set is performed to reduce its dimensions and redundant labels. Classification and Regression Tree (CART) is used to classify the pattern as normal or malicious. CART is a type of Decision Tree [27]. Kumar et al. [28, 29] deployed machine learning in combination with swarm intelligence to improve image analysis.

4 Comparative Analysis

4.1 Datasets

Datasets play an important role in the development of an ML-based model. Data passed as input to an IDS can be categorized as Packet-based Data, Flow-based Data and available Datasets, like DARPA, KDD CUP 99, NSL KDD, etc. [30, 31].


4.2 Performance Evaluation Parameters

The performance of IDS is evaluated using the Confusion Matrix (TP, TN, FP and FN), based on which Accuracy, Precision, Recall [32], Detection Rate (DR) and False Alarm Rate [33] will be computed.

Sensitivity or Recall or TPR = TP/(TP + FN)
Specificity or Selectivity or TNR = TN/(TN + FP)
Miss Rate or FNR = FN/(TP + FN)
False Alarm Rate or FPR = FP/(FP + TN)
Precision = TP/(TP + FP)
Accuracy = (TP + TN)/(TP + FP + FN + TN)
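These definitions translate directly into code. The following minimal Python function computes each parameter from the confusion-matrix counts; the counts in the usage line are made-up values for illustration only.

def ids_metrics(tp, tn, fp, fn):
    """Compute the evaluation parameters defined above from the confusion matrix."""
    return {
        "sensitivity (TPR)": tp / (tp + fn),
        "specificity (TNR)": tn / (tn + fp),
        "miss rate (FNR)":   fn / (tp + fn),
        "false alarm (FPR)": fp / (fp + tn),
        "precision":         tp / (tp + fp),
        "accuracy":          (tp + tn) / (tp + tn + fp + fn),
    }

# Made-up counts for illustration only.
for name, value in ids_metrics(tp=950, tn=920, fp=25, fn=40).items():
    print(f"{name}: {value:.3f}")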

4.3 Performance Analysis

The various ML-based models for implementing IDS are compared on the basis of the values of various parameters to justify their performance and effectiveness. The True Positive Rate or Sensitivity or Recall of an ML-based IDS is above 95% on average across the various approaches, while at the same time the False Alarm Rate will be below 3%. This signifies that ML plays an important role in improving the performance of the IDS and at the same time reduces the False Alarms significantly. ML-based IDS, on average, will have Precision above 91%, Detection Rate (Fig. 2) above 93% and Accuracy (Fig. 3) above 94%. These are just average values; some approaches have better values, while others have inferior values of these parameters. The above analysis justifies the importance of Machine Learning in improving the Performance, Detection Accuracy and Effectiveness of the IDS, while at the same time being responsible for reducing the possibilities of False Alarms drastically, as compared to other state-of-the-art approaches for implementing IDS without making use of Machine Learning.

Fig. 2 Comparison of detection rate across the IDS approaches of [16, 18, 22, 24, 27] (reported DR values range from 83 to 100%)

Fig. 3 Comparison of accuracy across the IDS approaches of [2, 17, 20, 24, 26] (reported accuracy values range from 80 to 99.95%)

The novelty of this review lies in the fact that it not only provides a theoretical background about IDS and the application of Machine Learning for IDS, but also encompasses a comparison between the various approaches on the basis of Performance Evaluation parameters for clarity and better understanding.

5 Conclusions

Security is an integral part of any computer network, workstation, or organization; it also plays an important role in our daily lives. With the recent innovations in technology and networking, various issues related to security breaches like hacking and cybercrimes are increasing along with the positive prospects of these innovations. Intrusion Detection System plays an important role in ensuring that the security policies defined for a network or a system are not compromised at all. Machine Learning plays an important role in enhancing the performance of the developed model. From the above comparative analysis, it is clear that ML-based IDS are highly sensitive, more Accurate, possess "learning ability" and can learn and classify new patterns more effectively than non-ML-based IDS techniques. Their False Alarm Rate can be as low as around 2% and their Detection Accuracy will be at least 93%. The current state of research in this domain requires the development of new and more efficient strategies for carrying out operations like dimensionality reduction, feature extraction, feature selection, etc.; such strategies help in enhancing the effectiveness and performance of the developed ML-based IDS model. A lot of research has already been carried out in this domain, but there still exists a lot of scope for the implementation of new models for IDS based on Machine Learning that are more strict, effective and efficient. There exist multiple sub-problems in this domain, like Feature Engineering, Identification of the Correct Machine Learning approach, selection of Classifier and Activation Function, determining the size of the Neural Network Model and many more. Approaches like Ant Colony Optimization, Principal Component Analysis and Clustering can be used for developing a Feature Selection approach, which will generate


the most optimal set of Features required for developing the model. Development of IDS using Ensemble Classifiers and Hybrid approaches will definitely improve the Performance of the system. Such a solution may involve ACO for Feature Selection and a KNN Classifier or MLP for Classification of a Sample as Normal, Attack or a type of Attack.

References

1. S.S. Roy, A. Malik, R. Gulati, M.S. Obaidat, P.V. Krishna, A deep learning based artificial neural network approach for intrusion detection, in Proceedings of International Conference on Mathematics and Computing (ICMC-2017) (Springer, 2017), pp. 44–53
2. S. Kumar, A. Viinikainen, T. Hamalainen, Machine learning classification model for network based intrusion detection system, in Proceedings of the 11th International Conference for Internet Technology and Secured Transactions (ICITST-2016) (IEEE, 2016), pp. 242–249
3. G. Karatas, O. Demir, O.K. Sahingoz, Deep learning in intrusion detection systems, in Proceedings of International Conference on Big Data, Deep Learning and Fighting Cyber Terrorism (IBIGDELFT-2018) (IEEE, 2018), pp. 113–116
4. E.K. Veigas, A.O. Santin, L.S. Oliveira, Toward a reliable anomaly based intrusion detection in real world environments. J. Comput. Netw. 127, 200–216 (2017)
5. W. Stallings, Cryptography and Network Security: Principles and Practice, 5th edn. (Prentice Hall (Pearson) Publications, 2010)
6. A. Kahate, Cryptography and Network Security, 4th edn. (Tata McGraw Hill Publications, 2019)
7. T. Mehmood, H.B.M. Rais, Machine learning algorithms in context of intrusion detection, in Proceedings of 3rd International Conference on Computer and Information Sciences (ICCOINS) (IEEE, 2016), pp. 369–373
8. K. Kim, M.E. Aminanto, Deep learning in intrusion detection perspective: overview and further challenges, in Proceedings of International Workshop on Big Data and Information Security (IEEE, 2017), pp. 5–10
9. M. Almseidin, M. Alzubi, S. Kovacs, M. Alkasassbeh, Evaluation of machine learning algorithms for intrusion detection system, in Proceedings of 15th International Symposium on Intelligent Systems and Informatics (IEEE, 2017), pp. 277–282
10. N.T. Van, T.N. Thinh, L.T. Sach, An anomaly based network intrusion detection system using deep learning, in Proceedings of International Conference on System Science and Engineering (ICSSE) (IEEE, 2017), pp. 210–214
11. R.K. Sharma, H.K. Kalita, P. Borah, Analysis of machine learning techniques based intrusion detection systems, in Proceedings of 3rd International Conference on Advanced Computing, Networking and Informatics (Springer, 2016), pp. 485–493
12. R. Makani, B.V.R. Reddy, Taxonomy of machine learning based anomaly detection and its suitability, in Proceedings of International Conference on Computational Intelligence and Data Science (ICCIDS 2018), published under Procedia Computer Science, vol. 132 (Elsevier, 2018), pp. 1842–1849
13. A.A. Shah, M.S.H. Khiyal, M.D. Awan, Analysis of machine learning techniques for intrusion detection system: a systematic review. Int. J. Comput. Appl. 119(3), 19–29 (2015)
14. D. Kwon, H. Kim, J. Kim, S.C. Suh, I. Kim, K.J. Kim, A survey of deep learning based network anomaly detection. J. Cluster Comput. 22, 949–961 (2017)
15. H. Kour, N. Gondhi, Machine learning techniques: a survey, in Proceedings of International Conference on Innovative Data Communication Technologies and Applications (ICIDCA), published under Lecture Notes on Data Engineering and Communications Technologies (LNDECT), vol. 46 (Springer, 2020), pp. 266–275


16. A. Chawla, B. Lee, S. Fallon, P. Jacob, Host based intrusion detection system with combined CNN/RNN model, in Proceedings of Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2018), published under Lecture Notes in Computer Science (LNCS), vol. 11329 (Springer, 2019), pp. 149–158
17. Divyatmika, M. Sreekesh, A two-tier network based intrusion detection system architecture using machine learning approach, in Proceedings of International Conference on Electrical, Electronics and Optimization Techniques (ICEEOT 2016) (IEEE, 2016), pp. 42–47
18. P. Tao, Z. Sun, Z. Sun, An improved intrusion detection algorithm based on GA and SVM. Published in IEEE ACCESS under Special Section on Human-Centered Smart Systems and Technologies, vol. 6 (IEEE, 2018), pp. 13624–13631
19. D. Narsingyani, O. Kale, Optimizing false positive in anomaly based intrusion detection using genetic algorithm, in Proceedings of 3rd International Conference on MITIE (IEEE, 2015), pp. 72–77
20. R. Vinayakumar, A. Mamoun, K.P. Soman, P. Prabaharan, A.N. Ameer, V. Sitalakshmi, Deep learning approach for intelligent intrusion detection system. IEEE Access 7, 41525–41550 (2019)
21. G. Karatas, O.K. Sahingoz, Neural network based intrusion detection systems with different training functions, in Proceedings of 6th International Symposium on Digital Forensic and Security (ISDFS) (IEEE, 2018)
22. C. Yin, Y. Zhu, J. Fei, X. He, A deep learning approach for intrusion detection using recurrent neural networks. IEEE Access 5, 21954–21961 (2017)
23. X. Gao, C. Shan, C. Hu, Z. Niu, Z. Liu, An adaptive ensemble machine learning model for intrusion detection. Published in IEEE Access under Special Session on Artificial Intelligence in Cyber-Security, vol. 7 (IEEE, 2019), pp. 82512–82521
24. H.M. Tahir, W. Hasan, A.M. Said, N.H. Zakaria, N. Kutak, N.F. Kabir, M.H. Omar, O. Ghazali, N.I. Yahya, Hybrid machine learning technique for intrusion detection system, in Proceedings of the 5th International Conference on Computing and Informatics (ICOCI 2015), pp. 464–472
25. H. Zhang, C.Q. Wu, S. Gao, Z. Wang, Y. Xu, Y. Liu, An effective deep learning based scheme for network intrusion detection, in Proceedings of the 24th International Conference on Pattern Recognition (ICPR) (IEEE, 2018), pp. 682–687
26. A. Meryem, B.E. Ouahidi, Hybrid intrusion detection system using machine learning. J. Netw. Secur. 2020(5), 8–19 (2020)
27. B. Ingre, A. Yadav, A.K. Soni, Decision tree based intrusion detection system for NSL-KDD dataset, in Proceedings of International Conference on Information and Communication Technology for Intelligent Systems (ICTIS 2017), published under Smart Innovation, Systems and Technologies (SIST), vol. 2 (Springer, 2017), pp. 207–218
28. S. Kumar, B. Sharma, V.K. Sharma, R.C. Poonia, Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evol. Intel. 1–12 (2018). https://doi.org/10.1007/s12065-018-0186-9
29. S. Kumar, B. Sharma, V.K. Sharma, H. Sharma, J.C. Bansal, Plant leaf disease identification using exponential spider monkey optimization. Sustainable Comput.: Inf. Syst. 28 (2018). https://doi.org/10.1016/j.suscom.2018.10.004
30. M. Ring, S. Wunderlich, D. Scheuring, D. Landes, A. Hotho, A survey of network-based intrusion detection data sets. J. Comput. Secur. 86, 147–167 (2019)
31. P. Mishra, V. Vardharajan, U. Tupakula, E.S. Pilli, A detailed investigation and analysis of using machine learning techniques for intrusion detection. IEEE Commun. Surv. Tutorials 21(1), 686–728 (2018)
32. K. Yang, J. Liu, C. Zhang, Y. Fang, Adversarial examples against the deep learning based network intrusion detection system, in Proceedings of IEEE Military Communications Conference (MILCOM) (IEEE, 2018), pp. 559–564
33. C.H. Lee, Y.Y. Su, Y.C. Lin, S.J. Lee, Machine learning based network intrusion detection, in Proceedings of 2nd IEEE International Conference on Computational Intelligence and Applications (IEEE, 2017), pp. 79–83

Relationship between Sustainable Practices and Firm Performance: A Study of the FMCG Sector in India Mohd Yousuf Javed, Mohammad Hasan, and Mohd Khalid Azam

Abstract In recent years, organizations have been concerned about sustainability issues, but only a few companies have been actively participating in these activities. Though several studies have been conducted on sustainable practices and firm performance, there is still a dearth of knowledge in the Indian context. This study aims at exploring the relationship among various sustainable practices and measures of firm performance, including ROA, ROE and EPS, with reference to selected Indian FMCG firms. The data have been collected from the Prowess IQ database; only those FMCG firms which are indexed in the BSE100 have been included. The sustainability measures have an insignificant impact on the ROE in the FMCG sector, barring the employee's utilization ratio. Keywords Sustainability practices · Firm performance · FMCG

M. Y. Javed (B) · M. Hasan · M. K. Azam Aligarh Muslim University, Aligarh, Uttar Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_11

1 Introduction

Sustainability has been defined as the preservation of natural resources for a longer period or maybe an infinite period. Sustainability is often quoted as the ultimate vision of the company, but not every manager thinks of it as the ultimate objective of the firm. The concept was introduced in the year 1987 in the Brundtland Report [1]. The report had mainly two dimensions: one was the aspiration of people for a better society and the second was the limitations imposed by nature. With time, the concept was divided into three broad categories, namely social, environmental and economic. Kuhlman and Farrington [2] argued about the changing definitions of sustainability and then talked about strong and weak sustainability, though there should be no concept like strong or weak sustainability and these should complement each other. In recent years, organizations have been concerned about sustainability issues but only a few companies have been actively participating in these activities. There are various initiatives which have been carried out by companies to maintain a sustainable


environment (e.g., CSR initiatives, water management, waste management, etc.). To understand the concept in further detail, [3] analyzed 152 sustainability reports from the 100 best-performing companies in the USA. They found that there was no relationship between sustainable practices and the financial performance of these firms, but the control variable (environmental sensitivity) was positively related to the accounting-based dependent variable. Measuring the sustainability practices of a company has been a challenging job for researchers for many years, and evaluating the effectiveness and efficiency of the company has been an even bigger challenge. Many authors have tried to come up with different methods to cope with this problem. Some of them tried studies based on questionnaires and some used secondary data. Around a decade ago, Bragança et al. [4] added one more dimension to the sustainability concept. They said that for sustainable development it is very important to meet all four dimensions of sustainability, namely environmental, social, economic and cultural. The world needs our policymakers to think about sustainability seriously. With growing industrialization, it has become very crucial to think about the various activities led by companies and humankind which are destroying nature. To make the concept of corporate sustainability clear, organizations keep three factors in mind: economic, social and environmental. They are working on earth, providing products and services to people at a profit, so it is their responsibility to carry out some work in return. This philosophy is called "corporate sustainability."

2 Literature Review

Artiach et al. [5] explored the drivers of sustainability performance in the Dow Jones sustainability index of the USA and the incentives that could be gained by investing in sustainability. The result indicated that firms investing in sustainability could achieve more growth in terms of return on equity as compared to conventional firms. However, CSP firms did not obtain higher cash flows or lower leverage than other firms. Nikolaou [6] explains the relationship among some mediating variables for the corporate environment and financial performance, like social and environmental responsibility, intellectual capital, innovation and competitive advantage. His research is based on corporate social responsibility, environmental concern and financial performance. Jose and Saraf [7] studied sustainability practices in the top 100 companies in India. The result of the analysis suggested that corporate governance has been highly followed by CSR initiatives to improve operational efficiency. CSR activities have highly focused on sectors like education, healthcare, community livelihood and infrastructure development. The measures relating to operations included resource conservation (energy, water and paper) and waste management (emissions, solid waste and water). Less than 20% of the sample disclosed the issues regarding sustainability in the supply chain. Cement, metal and mining, electric utilities and information technology performed very well as compared to other industries on most of the indicators. Telecom, realty, TV, pharmaceuticals and banking sectors have not been disclosing as much as others. Epstein et al. [8] explored how hard

Relationship between Sustainable Practices and Firm …

109

it is to implement integrated sustainability strategies by interviewing different levels of managers at Nike, Procter and Gamble, The Home Depot and Nissan North America, and what kinds of obstacles they have to go through in decision-making on sustainability. It has been found that sustainability has been a concern in the informal setting, but in reality the managers have been more inclined toward the improvement of financial performance. Ortiz-de-Mandojana and Bansal [9] tried to explain that social and environmental practices (SEPs) are not only associated with short-term financial performance but could also be associated with organizational resilience, by testing the hypotheses on 242 firms over 5 years. Findings suggested that there has been no association between SEPs and short-term financial performance. Gómez-Bezares et al. [10] examined the relationship between corporate sustainability and the stock market returns of the 350 firms listed in the FTSE from 2006 through 2012. The findings suggested that an investment strategy integrated with balanced financial, social and environmental activities gets high returns, but CS was found to be negatively correlated with the volatility of stock returns. Xiao et al. [11] studied corporate sustainability and financial performance country-wise by collecting and analyzing data from the human development index and environmental performance index. The result suggested that firms in countries having a high level of sustainability find it difficult to capitalize on sustainability, and sustainability could be a competitive advantage for firms in developing countries.

3 Research Methodology

3.1 Research Gaps

• There have been several studies on this topic, but in the Indian context there have been a limited number of studies. Thus, a need has arisen for a study in the Indian context covering the major key factors of sustainability affecting financial performance.
• Various studies have been found on the topic of sustainability and its impact on financial performance, and these theories contradict each other. Some studies show a positive impact on financial performance and some concluded there was no need to invest in sustainability measures. Hence, there are no definitive results that can be generalized for the relationship between sustainability and financial performance. Thus, the need for further research in various contexts of sustainability was realized.
• Most of the research carried out in this area is based on primary data or questionnaires. The need for secondary data analysis has been felt.


3.2 Research Objectives

• To identify the key sustainability measures and parameters of financial performance with reference to selected Indian FMCG firms.
• To examine the impact of sustainability measures on financial performance (ROE, ROA and EPS).

3.3 Research Methodology

The data have been collected from the Prowess IQ database; only those FMCG firms which are indexed in the BSE100 have been included. Before running the panel regression, the panel is tested for effects, i.e., whether it has a pooled, fixed or random effect. To check the effect, the cross section is fixed and then the test of redundant fixed effects is run; if the probability is more than 0.05, the data are said to have a pooled effect, and if the probability is less than 0.05, then it has a fixed or random effect (Fig. 1).

4 Proposed Model

Hypotheses

Hypotheses for ROE:

Fig. 1 Proposed model: donations, environment and pollution expenses, staff welfare and training, social and community expenses, research and development expenses and employee's utilization ratio as drivers of financial performance (Return on Assets (ROA), Earning per share (EPS) and Return on Equity (ROE))


H1: There is a significant impact of sustainability measures on Return on Equity (ROE).
H1.1: There is a significant impact of social and community expenses on Return on Equity (ROE).
H1.2: There is a significant impact of staff welfare and training on Return on Equity (ROE).
H1.3: There is a significant impact of research and development expenses on Return on Equity (ROE).
H1.4: There is a significant impact of environment and pollution expenses on Return on Equity (ROE).
H1.5: There is a significant impact of the employee's utilization ratio on Return on Equity (ROE).
H1.6: There is a significant impact of donations on Return on Equity (ROE).

Hypotheses for ROA:
H2: There is a significant impact of sustainability measures on Return on Assets (ROA).
H2.1: There is a significant impact of social and community expenses on Return on Assets (ROA).
H2.2: There is a significant impact of staff welfare and training on Return on Assets (ROA).
H2.3: There is a significant impact of research and development expenses on Return on Assets (ROA).
H2.4: There is a significant impact of environment and pollution expenses on Return on Assets (ROA).
H2.5: There is a significant impact of the employee's utilization ratio on Return on Assets (ROA).
H2.6: There is a significant impact of donations on Return on Assets (ROA).

Hypotheses for EPS:
H3: There is a significant impact of sustainability measures on Earning per share (EPS).
H3.1: There is a significant impact of social and community expenses on Earning per share (EPS).
H3.2: There is a significant impact of staff welfare and training on Earning per share (EPS).
H3.3: There is a significant impact of research and development expenses on Earning per share (EPS).
H3.4: There is a significant impact of environment and pollution expenses on Earning per share (EPS).
H3.5: There is a significant impact of the employee's utilization ratio on Earning per share (EPS).
H3.6: There is a significant impact of donations on Earning per share (EPS).

The regression equations used in the models are:

1. ROE = a1(S&C) + a2(S&WT) + a3(R&D) + a4(E&P) + a5(E&U) + a6(Do) + C


Table 1 Analysis and interpretation

Variables       ROE           ROA           EPS
S&C             0.000559      0.007153      −0.052416*
S&W             −0.000953     0.000315      0.048924*
R&D             0.002104      0.000849      −0.092788*
E&P             −0.003159     0.008867      0.048239
EU              −0.123627*    0.396282      1.279523
Do              −0.015333     −0.004953     0.012523
C               6.127899      6.524720      17.36614
R squared       0.0532        0.1117        0.1959
Durbin–Watson   0.7959        0.2414        0.3752
Probability     0.5253        0.0842        0.0022

* denotes significant relationship at 95% confidence level

2. ROA = a1(S&C) + a2(S&WT) + a3(R&D) + a4(E&P) + a5(E&U) + a6(Do) + C
3. EPS = a1(S&C) + a2(S&WT) + a3(R&D) + a4(E&P) + a5(E&U) + a6(Do) + C

Social sustainability is represented by S&C, the expenses incurred in welfare for society and community; S&WT, the expenses incurred in staff welfare and training; and Do, the expenses incurred in donations. Economic sustainability is represented by E&U, the employee's utilization ratio, and the expenses in research and development (R&D). Environmental sustainability is represented by the environment and pollution expenses (E&P). a1, a2, a3, a4, a5 and a6 represent the coefficients of social and community expenses, staff welfare and training, research and development expenses, environment and pollution expenses, employee utilization ratio and donation expenses, respectively. C represents the value of the intercept (Table 1).
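To make the estimation concrete, the sketch below runs a pooled-effect version of Eq. (1) in Python with statsmodels. The synthetic DataFrame and its column names are assumptions standing in for the firm-year panel from Prowess IQ, and the fixed/random-effect variants and the redundant-fixed-effects test described in Sect. 3.3 would need a dedicated panel-regression package; the authors' exact toolchain is not stated.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120                                   # e.g., 12 firms x 10 years

# Synthetic stand-in for the firm-year panel; the six column names
# mirror the variables of Eqs. (1)-(3).
df = pd.DataFrame(rng.normal(size=(n, 6)),
                  columns=["SC", "SWT", "RD", "EP", "EU", "Do"])
df["ROE"] = 6.1 - 0.12 * df["EU"] + rng.normal(scale=2.0, size=n)

# Pooled-effect estimate of Eq. (1): ROE on the six sustainability measures.
model = smf.ols("ROE ~ SC + SWT + RD + EP + EU + Do", data=df).fit()
print(model.params)        # intercept C and coefficients a1..a6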

4.1 Analysis and Interpretation

4.2 Results Related to ROE

All sustainability measures have an insignificant impact on ROE in the FMCG sector, except the employee's utilization ratio, which has a negative but significant impact on ROE. The values of R squared, Durbin–Watson and probability are 0.053, 0.79 and 0.52, respectively.

Table 2 Research findings

Variables   ROE              ROA              EPS
S&C         Not significant  Not significant  Significant
S&W         Not significant  Not significant  Significant
R&D         Not significant  Not significant  Significant
E&P         Not significant  Not significant  Not significant
EU          Significant      Not significant  Not significant
Do          Not significant  Not significant  Not significant

4.3 Results Related to ROA

No sustainability measure has a significant impact on ROA in the FMCG sector; the values of R squared, Durbin–Watson and probability are 0.1117, 0.2414 and 0.08, respectively.

4.4 Results Related to EPS

Social and community expenses, staff welfare and training, and research and development expenses have a significant impact on EPS, with coefficients of −0.052, 0.048 and −0.092, respectively. The values of R squared, Durbin–Watson and probability are 0.1959, 0.37 and 0.002, respectively (Table 2).

4.5 Findings

4.6 Limitations

1. The data used in this study are from the period 2009 to 2019, that is, eleven years. The data for some variables were not available, and some companies had to be dropped because of this.
2. Only panel regression has been used for the study, as the data were in panel form.
3. No control variables have been introduced in the model.

5 Conclusion

In the FMCG sector, environment and pollution expenses do not have a significant impact on ROE, ROA and EPS. No sustainability variable has any significant impact


on ROE, ROA and EPS. Research and development and staff welfare and training have a significant relationship with EPS.

5.1 Directions for Future Researches

1. The data used in this study are from the period 2009 to 2019. Future research can be conducted covering a longer period. For example, a comparative study can be conducted taking 2013 as the base period, as the Companies Act 2013 mandated CSR activities from that year onwards.
2. There is a possibility of using cross-sectional analysis to obtain more meaningful results from the sectoral analysis.

References

1. D. Chang, L.R. Kuo, The effects of sustainable development on firms' financial performance—an empirical approach. Sustain. Dev. 380, 365–380 (2008)
2. T. Kuhlman, J. Farrington, What is sustainability? Sustainability 2, 3436–3448 (2010)
3. N. Hussain, U. Rigoni, E. Cavezzali, Does it pay to be sustainable? Looking inside the black box of the relationship between sustainability performance and financial performance. Corp. Soc. Responsib. Environ. Manage. 25(6), 1198–1211 (2018)
4. L. Bragança, R. Mateus, H. Koukkari, Building sustainability assessment. Sustainability 2(7), 2010–2023 (2010)
5. T. Artiach, D. Lee, D. Nelson, J. Walker, The determinants of corporate sustainability performance. Acc. Financ. 50(1), 31–51 (2010)
6. I.E. Nikolaou, A framework to explicate the relationship between CSER and financial performance: an intellectual capital-based approach and knowledge-based view of firm. J. Knowl. Econ. (2017)
7. P.D. Jose, S. Saraf, Corporate sustainability initiatives reporting: a study of India's most valuable companies. SSRN Electron. J. 428 (2013)
8. M.J. Epstein, A.R. Buhovac, K. Yuthas, Managing social, environmental and financial performance simultaneously. Long Range Plann. 1–11 (2014)
9. N. Ortiz-de-mandojana, P. Bansal, The long-term benefits of organizational resilience through sustainable business. Strat. Manage. J. 2014 (2015)
10. F. Gómez-Bezares, W. Przychodzen, J. Przychodzen, Corporate sustainability and shareholder wealth-evidence from British companies and lessons from the crisis. Sustainability 8(3) (2016)
11. C. Xiao, Q. Wang, T. van der Vaart, D.P. van Donk, When does corporate sustainability performance pay off? The impact of country-level sustainability performance. Ecol. Econ. 146(2017), 325–333 (2018)

Learning Paradigms for Analysis of Bank Customer Akash Rajak, Ajay Kumar Shrivastava, Vidushi, and Manisha Agarwal

Abstract The classification problems can be solved using the classical algorithms of machine learning like Naive Bayes, Support Vector Machine, Decision Tree, etc. These algorithms have limited accuracy when the dataset is very large. The techniques of deep learning through artificial neural networks are more promising on large datasets. In this paper, the different classification models of machine learning are discussed along with deep learning. The models were applied on a bank customer database to predict whether the customer would exit from the bank or not on the basis of his history. The model accuracy in the case of the artificial neural network is high in comparison to the traditional classification models of machine learning. Keywords Machine learning · Deep learning · Artificial neural network · Linear regression · Naive Bayes · Random Forest · Support vector machine

1 Introduction

There are numerous applications of machine learning and deep learning. They can be applied in speech recognition, signature verification, face or object recognition, disease diagnosis, intelligent decision making, fraud detection, weather forecasting, etc. Given below are some of the classification models which have been implemented by various researchers.

A. Rajak · A. K. Shrivastava · Vidushi (B) KIET Group of Institutions, Delhi-NCR, Ghaziabad, Uttar Pradesh 201206, India Vidushi · M. Agarwal Banasthali Vidyapith, Jaipur, Rajasthan, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_12


1.1 Background

Ahujaa et al. applied machine learning algorithms for detecting the stress of students. The four classification algorithms Linear Regression, Naive Bayes, Random Forest and SVM were applied. The objective of the research is to study the stress of students, which is a serious concern nowadays [1]. The authors of [2] applied Regression Trees and Random Forests on the data of Tunisian schools for predicting performance. To select the features that are important for classification, the well-known technique Principal Component Analysis is used in [3]. After this, different algorithms like IBK, K-star, SMO and Rep Tree are used in the Weka tool, and their accuracy, error rate and other factors are compared; the results are better for Decision Stump and Rep Tree. Another paper proposed a hybrid approach to predict heart disease: to predict and classify the disease, the dataset is optimized using Particle Swarm and Ant Colony optimization, then different classification algorithms are applied and their results are compared. The comparison shows that the proposed approach is better than the others [4]. The five machine learning algorithms Decision Trees, Naive Bayes, Support Vector Machines, Multilayer Perceptron and Logistic Regression were applied to check the attributes of successful learners of a computer science course [5]. Another paper introduced a way to classify the Turkey student evaluation and Zoo datasets using a combination of the well-known decision tree classifier with Linear Regression, and compared the result with other available algorithms; the proposed method performs better than the others [6]. In [7], machine learning algorithms were used to identify a model to improve students' performance on the basis of various factors and also decrease the dropout rate; this helps in improving the education system. To predict breast cancer in women, [8] uses six different machine learning algorithms and compares their results with and without Principal Component Analysis; the results are better with PCA for Logistic Regression and Support Vector Machine. Bayes-based, ANN-based, Regression-based, SVM-based, Instance-based, Tree-based and Rule-based classification algorithms were applied on a student dataset which is available online in the UCI Machine Learning Repository. The researchers pre-processed and categorized the dataset and reduced its dimensions before applying the classification algorithms; the results from Random Forest were better in comparison to the other algorithms for predicting the results of students [9]. In [10], various classification algorithms like Random Forest, Naive Bayes, Support Vector Machine, Logistic Regression and Decision Tree are compared on the basis of their accuracy; the Amazon 1 dataset is used to analyze product reviews. The two well-known machine learning methods Logistic Regression and Linear Discriminant Analysis are compared in [11]. These methods identify irregularities in customers' electricity usage behavior; Logistic Regression performed better because it works effectively on irregular data. The K-nearest neighbor classifier was applied for predicting whether students will get placed in an IT company during their undergraduate course. The results are compared with other models developed using Logistic Regression and


SVM. Features like communication skills, programming skills, teamwork, etc. were included in the dataset [12]. Machine learning models can also be used to predict various skills on the basis of attitude and psychological parameters [13]. Freshers face problems in C programming, especially with pointers, and whether a student has learned the concept can be predicted from his or her performance; the author discussed this issue and how it could be addressed using machine learning algorithms [14]. Machine learning algorithms have been successfully deployed for prediction in various fields including agriculture [15, 16], medical diagnosis [17–19] and many more.

1.2 Problem Statement
In this research, a database of bank customers is used, which consists of various features like personal details, account balance, credit history, etc. The idea is to use machine learning and deep learning techniques to predict whether a customer will leave the bank or continue with his account.

1.3 Proposed Solution
Classification models of machine learning and deep learning were applied to the bank customer database to check whether a customer will leave the bank or not. The accuracy of both the classification models and the artificial neural network is good, and it is concluded that deep learning techniques are more promising when the dataset is large.

2 Traditional Classification Models and Deep Learning
Classification is a supervised machine learning approach in which the algorithm classifies new observations on the basis of the training data provided to it. Common classification algorithms include Naïve Bayes, Logistic Regression, Random Forest, Decision Tree, Support Vector Machine, etc. The following subsections describe the different classification models of machine learning.

2.1 Naive Bayes
It is a powerful machine learning algorithm used for prediction. It is a classification technique based on Bayes' theorem with an assumption of independence among predictors; its name comprises two parts, Naive and Bayes. The naive assumption means that one feature's


presence is unrelated to another feature's presence. For example, when classifying a fruit by its color, type and size, these properties contribute independently to the classification, which is why the method is called naïve. It shows good results with huge datasets. Bayes' theorem is given as:

P(c|x) = P(x|c) P(c) / P(x)    (1)

where P(c|x) is the posterior probability of target class c given attribute x, P(c) is the probability of class c, P(x) is the probability of attribute x, and P(x|c) is the likelihood of x given class c.
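As a minimal illustration (not the paper's code), a Naive Bayes classifier of this kind can be trained with scikit-learn; the data below are random stand-ins for the bank features:

```python
# Minimal Naive Bayes sketch using scikit-learn (assumed library, not the authors' code).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy feature matrix X (rows = customers, columns = numeric features) and labels y (0 = stays, 1 = exits).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GaussianNB().fit(X_train, y_train)  # estimates P(x|c) per feature, assuming independence
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```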

2.2 Support Vector Machine
It is a simple and popular machine learning algorithm. It works for both regression and classification but gives better results in classification. The algorithm plots the data points and finds the best line or hyperplane that separates the classes. Several hyperplanes can be formed, but the one that segregates the classes best is selected; if more than one separates them well, the one with the maximum margin is chosen. The algorithm can also ignore outliers, if present, and has an inbuilt function called a kernel, which maps the data from a low-dimensional to a high-dimensional space if required. Since maximum-margin classification is used, the hinge loss function is used to compute the loss. It can be defined as:

l(y) = max(0, 1 − t · y)    (2)

where y is the classifier decision function output, t = ±1 is the intended output, and l(y) is the loss.
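Equation (2) can be evaluated directly; a small NumPy sketch with made-up decision values:

```python
# Hinge loss of Eq. (2), written directly in NumPy (illustrative, not the paper's code).
import numpy as np

def hinge_loss(t, y):
    """t: intended output (+1 or -1); y: classifier decision value."""
    return np.maximum(0.0, 1.0 - t * y)

print(hinge_loss(+1, 2.0))   # 0.0 -> correct side of the margin, no loss
print(hinge_loss(+1, 0.3))   # 0.7 -> inside the margin, small loss
print(hinge_loss(+1, -1.0))  # 2.0 -> misclassified, large loss
```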

2.3 Decision Tree
It is one of the simplest learning algorithms to understand and implement, and it works in a supervised fashion. It can be used to solve both regression and classification problems. To predict the target value, it learns from the training data, derives rules and builds a model, representing the problem in the form of a tree. The tree is made up


of nodes, which can be internal or external: internal nodes represent attribute tests, and class labels are carried by the external (leaf) nodes.
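A short scikit-learn sketch (assumed library, not the authors' code) showing the internal attribute tests and leaf class labels just described, on a standard toy dataset:

```python
# Decision tree sketch with scikit-learn: internal nodes test attributes, leaves carry class labels.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))  # prints the learned attribute tests and leaf classes
```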

2.4 K-Nearest Neighbors
It is one of the easiest supervised learning algorithms and also gives good results in pattern recognition. It stores all available cases and classifies each new one based entirely on a distance function. One well-known distance function is the Euclidean distance:

d(x, y) = √( Σ_{i=1}^{k} (x_i − y_i)² )    (3)

where k is the number of nearest neighbors and x_i, y_i are the coordinate values. If k = 1, the class of the single nearest neighbor is assigned.
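A small sketch of Eq. (3) and the k = 1 rule, assuming NumPy and scikit-learn with toy points:

```python
# Euclidean distance of Eq. (3) and a k-NN prediction (illustrative sketch, not the paper's code).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def euclidean(x, y):
    return np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2))

print(euclidean([1, 2], [4, 6]))  # 5.0

X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y = np.array([0, 0, 1, 1])
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # k = 1: assign the class of the nearest point
print(knn.predict([[4.5, 4.5]]))                     # [1]
```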

2.5 Logistic Regression
It is one of the most popular machine learning algorithms and is basically used for classification tasks, for example classifying spam mail or deciding whether or not to sanction a loan. It is an easy, simple algorithm, mainly used for binary classification. To predict a value, the algorithm computes a linear function of the inputs, whose outcome can vary from negative to positive infinity, and passes it through the sigmoid function, which forms an s-shaped curve:

g(x) = 1 / (1 + e^(−x))    (4)

This function value varies between 0 and 1.
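Equation (4) in a few lines of NumPy, as an illustration:

```python
# Sigmoid function of Eq. (4): maps any real value into (0, 1) along an s-shaped curve.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # ~[0.000045, 0.5, 0.999955]
```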

2.6 Linear Discriminant Analysis
It is a supervised learning algorithm used for classification tasks. It separates the classes into groups and reduces the dimensionality of the data from high to low. To classify, it calculates probabilities based on Bayes' theorem.


2.7 Deep Learning
The present era is one of self-learning machines, and the continuous success of machine learning has spread this research tool everywhere. The branch of machine learning that can work even in nonlinear environments is called deep learning. Deep learning, realized through artificial neural networks, is inspired by the way the brain judges, predicts and classifies different tasks; the network tries to clone the brain's structure as well as its functionality. Nodes are the building blocks of this artificial network and behave like the neurons present in the brain. The complete network is divided into layers, and each layer is composed of some nodes [20–22]. Broadly, these layers can be classified as input, output and hidden layers, as described in Table 1.

Table 1 Layer descriptions

Layer     Layer type     Layer description
Layer 1   Input layer    Input is received by this layer
Layer 2   Hidden layer   Responsible for computation; the number of such layers increases with the requirement and complexity
Layer 3   Output layer   Receives the output of the hidden layer as its input; final layer that predicts or classifies the class

The connectivity between layers is presented in Fig. 1: nodes in the same layer are not connected, but the nodes of the input layer are linked to the hidden layers, which are in turn linked to the output layer. An artificial neural network with two hidden layers is shown in Fig. 1; there is no limit on the number of hidden layers, which can range from one to n.

Fig. 1 Basic artificial neural network architecture


Inputs are multiplied by the weights and a bias is then added, as explained in the following Eq. 5:

z = f( Σ_{i=1}^{n} x_i w_i + b )    (5)

For n inputs with n corresponding weights 'w', each input 'x' is multiplied by its weight to obtain the weighted sum (x1w1 + x2w2 + ··· + xnwn), and the bias term 'b' is added to it. The result then passes through an activation function that is nonlinear in nature; the motive for adding nonlinearity is to handle complex problems. This capability of handling complex, interdisciplinary and nonlinear problems increases the range of application areas of the model.
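A NumPy illustration of Eq. (5) with made-up inputs, weights and bias (the activation f is assumed to be tanh here):

```python
# One-node forward pass of Eq. (5): weighted sum of inputs plus bias, passed through a
# nonlinear activation. Illustrative sketch with made-up numbers.
import numpy as np

def forward(x, w, b, f=np.tanh):
    return f(np.dot(x, w) + b)  # z = f(sum_i x_i * w_i + b)

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.1, -0.6])   # weights
print(forward(x, w, b=0.2))
```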

3 Data Processing
The dataset, described in Table 2, has 14 features and belongs to a bank holding records of 9999 customers. The information about each customer is given in columns 0 to 12, and whether the customer will exit or not is given in the last column, 13. From this dataset we consider all features that give information about a customer exiting the bank; this information can be used to predict whether the customer will continue or leave. The last feature, 'Exited', shows whether the customer is still part of the bank or has left.

Table 2 Different features in a dataset of bank

S. No.  Feature          Description
0       RowNumber        Row number
1       CustomerId       Customer identification number
2       Surname          Family or last name of customer
3       CreditScore      Credit score
4       Geography        City name
5       Gender           Gender
6       Age              Age of customer
7       Tenure           Number of years he/she has held the account
8       Balance          Current balance of the account
9       NumOfProducts    Number of bank products he/she is using
10      HasCrCard        Does he/she have a credit card?
11      IsActiveMember   Is he/she actively using the account?
12      EstimatedSalary  Salary of customer
13      Exited           Is he/she part of the bank or has he/she left?

The classification models of machine learning (ML) and artificial neural


network (ANN) were applied to this dataset to assess how well these models identify whether a customer would continue with or leave the bank. In this research we show that the accuracy of the ANN model is better than that of the classification models of ML in the case of large datasets: on big datasets the neural network performs better and classifies the problem more correctly. The first three columns (0–2) play no role in classification, so they were not considered in the modelling process.
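A minimal sketch of this preparation step, assuming pandas and scikit-learn and a hypothetical CSV file holding the 14 columns of Table 2:

```python
# Sketch of the data preparation described above (file name and library usage are assumptions,
# not the authors' code): drop columns 0-2, encode categoricals, split 80/20.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("bank_customers.csv")  # hypothetical file with the 14 columns of Table 2
X = df.iloc[:, 3:13]                    # drop RowNumber, CustomerId, Surname; exclude Exited
X = pd.get_dummies(X, columns=["Geography", "Gender"], drop_first=True)
y = df["Exited"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```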

4 Results and Discussion
The classification models were applied to the dataset, and the accuracy of the different models is given in Table 3. The ANN model was also applied to the same dataset, and its result appears in Table 3 as well. From Table 3 we can see that the accuracy of the ANN is 86.30% and is higher than that of the classification models of ML. The accuracy of the different models is also shown in Fig. 2. The ANN model was executed with 1000 epochs; the model accuracy across epochs is given in Fig. 3. In the ANN model we used two hidden layers, following the concept of deep learning, and the number of nodes in the hidden layers is the average of the input and output layer sizes, i.e., (11 + 1)/2 = 6. The network is compiled, fitted with the training dataset and then tested with the test data; 80% of the data is used for training and 20% for testing.

Table 3 Algorithms and accuracy

Algorithm/Model                       Accuracy
Naïve Bayes (NB)                      81.63
Support Vector Classifier (SVC)       79.60
Decision Tree (DT)                    79.75
K-Nearest Neighbors (KNN)             81.75
Logistic Regression (LR)              81.10
Linear Discriminant Analysis (LDA)    81.00
Artificial Neural Network (ANN)       86.30

Fig. 2 Accuracy of different models (bar chart; y-axis: Accuracy, 75–90; x-axis: Models NB 81.63, SVC 79.6, DT 79.75, KNN 81.75, LR 81.1, LDA 81, ANN 86.3)

Fig. 3 Accuracy of the model during various 100 epochs (y-axis: Accuracy, 0.75–0.9; x-axis: Epoch, 1–100)

Table 4 Confusion matrix

              Predicted: No   Predicted: Yes
Actual: No    1504            91
Actual: Yes   184             221

The confusion matrix is given in Table 4. The model correctly classified about 1726 (1504 + 221) of the 2000 test records.
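A hedged Keras sketch of the ANN described above, assuming TensorFlow/Keras (the paper only states the layer sizes; the optimizer, activations and batch size below are assumptions):

```python
# Sketch of the described ANN: 11 inputs, two hidden layers of (11 + 1) / 2 = 6 units each,
# one sigmoid output for P(customer exits). Not the authors' code; choices below are assumed.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(11,)),
    keras.layers.Dense(6, activation="relu"),     # hidden layer 1
    keras.layers.Dense(6, activation="relu"),     # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),  # probability that the customer exits
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=100, batch_size=32, validation_data=(X_test, y_test))
```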

5 Conclusion and Future Work
In this paper, classification models of machine learning and deep learning were discussed. A dataset of bank customers was used, and the different models were applied to it to check whether a customer will continue with his account or leave the bank. It was found that deep learning techniques are better than the classification models of machine learning, and they handle the data better when the dataset is very large. The models were implemented in a Python Jupyter notebook with Windows as the OS.

References
1. R. Ahujaa, A. Bangab, Mental stress detection in university students using machine learning algorithms. Procedia Comput. Sci. 152, 349–353 (2019). https://doi.org/10.1016/j.procs.2019.05.007
2. S. Rebaia, F.B. Yahia, H. Essid, A graphically based machine learning approach to predict secondary schools performance in Tunisia. Socio-Econ. Plann. Sci. (2019). https://doi.org/10.1016/j.seps.2019.06.009
3. S. Pratibha Devishri, O.R. Ragin, G.S. Anisha, Comparative study of classification algorithms in chronic kidney disease. Int. J. Recent Technol. Eng. 8(1), 180–184 (2019)
4. Y. Khourdifi, M. Bahaj, Heart disease prediction and classification using machine learning algorithms optimized by particle swarm optimization and ant colony optimization (2019)
5. C. Ko, F. Leu, Analyzing attributes of successful learners by using machine learning in an undergraduate computer course, in 2018 IEEE 32nd International Conference on Advanced Information Networking and Applications (AINA), Krakow (2018), pp. 801–806. https://doi.org/10.1109/AINA.2018.00119
6. A.S.M. Ahmed, A. Rizaner, A.H. Ulusoy, A decision tree algorithm combined with linear regression for data classification, in 2018 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE) (2018). https://doi.org/10.1109/ICCCEEE.2018.8515759
7. A. Kaur, N. Umesh, B. Singh, Machine learning approach to predict student academic performance. Int. J. Res. Appl. Sci. Eng. Technol. (IJRASET) 6(IV) (2018). ISSN: 2321-9653
8. T.M. Shadman, F.S. Akash, M. Ahmed, Machine learning as an indicator for breast cancer prediction. B.Sc. Engineering Thesis, BRAC University, Dhaka (2018)
9. S. Senthil, W.M. Lin, Applying classification techniques to predict students' academic results, in 2017 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC) (2017), pp. 1–6. https://doi.org/10.1109/ICCTAC.2017.8249986
10. T. Pranckevicius, V. Marcinkevicius, Comparison of Naïve Bayes, Random Forest, Decision Tree, Support Vector Machines, and Logistic Regression Classifiers for Text Reviews Classification (2017)
11. A. Lawi, S.L. Wungo, S. Manjang, Identifying irregularity electricity usage of customer behaviors using logistic regression and linear discriminant analysis, in 2017 3rd International Conference on Science in Information Technology (ICSITech), Bandung (2017), pp. 552–557. https://doi.org/10.1109/ICSITech.2017.8257174
12. A. Giri, M.V.V. Bhagavath, B. Pruthvi, N. Dubey, A placement prediction system using k-nearest neighbors classifier, in Second International Conference on Cognitive Computing and Information Processing (CCIP) (2016), pp. 1–4. https://doi.org/10.1109/CCIP.2016.7802883
13. R. Ishizue, K. Sakamoto, H. Washizaki, Y. Fukazawa, Student placement and skill ranking predictors for programming classes using class attitude, psychological scales, and code metrics. Res. Pract. Technol. Enhanc. Learn. 13, 7 (2018). https://doi.org/10.1186/s41039-018-0075-y
14. K. Yamashita, R. Fujioka, S. Kogure, Y. Noguchi, T. Konishi, Y. Itoh, Classroom practice for understanding pointers using learning support system for visualizing memory image and target domain world. Res. Pract. Technol. Enhanc. Learn. 12, 17 (2017). https://doi.org/10.1186/s41039-017-0058-4
15. S. Kumar, B. Sharma, V.K. Sharma, R.C. Poonia, Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evolut. Intell. 1–12 (2018). https://doi.org/10.1007/s12065-018-0186-9
16. S. Kumar, B. Sharma, V.K. Sharma, H. Sharma, J.C. Bansal, Plant leaf disease identification using exponential spider monkey optimization. Sustain. Comput. Inf. Syst. 28 (2018). https://doi.org/10.1016/j.suscom.2018.10.004
17. V. Singh, R.C. Poonia, S. Kumar, P. Dass, P. Agarwal, V. Bhatnagar, L. Raja, Prediction of COVID-19 corona virus pandemic based on time series data using Support Vector Machine. J. Discrete Math. Sci. Cryptogr. 23(8), 1583–1597 (2020). https://doi.org/10.1080/09720529.2020.1784535
18. V. Bhatnagar, R.C. Poonia, P. Nagar, S. Kumar, V. Singh, L. Raja, P. Dass, Descriptive analysis of COVID-19 patients in the context of India. J. Interdiscip. Math. 24(3), 489–504 (2020). https://doi.org/10.1080/09720502.2020.1761635
19. R. Kumari, S. Kumar, R.C. Poonia, V. Singh, L. Raja, V. Bhatnagar, P. Agarwal, Analysis and predictions of spread, recovery, and death caused by COVID-19 in India. Big Data Mining Anal. 4(2), 65–75. https://doi.org/10.26599/BDMA.2020.9020013
20. O.I. Abiodun, A. Jantan, A.E. Omolara, K.V. Dada, A.M. Umar, O.U. Linus, M.U. Kiru, Comprehensive review of artificial neural network applications to pattern recognition. IEEE Access 7, 158820–158846 (2019)
21. H. Mostafa, Supervised learning based on temporal coding in spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 29(7), 3227–3235 (2017)
22. S. Sinha, N. Mandal, Design and analysis of an intelligent flow transmitter using artificial neural network. IEEE Sens. Lett. 1(3), 1–4 (2017)

Diagnosis of Dermoscopy Images for the Detection of Skin Lesions Using SVM and KNN

Ebrahim Mohammed Senan and Mukti E. Jadhav

Abstract Early detection of skin lesions is essential for effective healing. Melanoma has recently become one of the most dangerous types of skin cancer because it spreads to other parts of the body if it is not diagnosed and treated early. Artificial intelligence algorithms play an important role in medical image diagnostics and provide an effective tool for the analysis and diagnosis of lesions. This study diagnoses dermoscopy images from the PH2 database. In preprocessing, a Gaussian filter is applied to enhance the input images. The lesion area is segmented and separated from the healthy skin using the active contour technique (ACT). The gray level co-occurrence matrix (GLCM) algorithm is applied to extract features from the region of interest. The lesions are classified using the support vector machine (SVM) and K-nearest neighbors (KNN) classifiers. The results achieved using SVM were an accuracy of 99.10%, sensitivity of 100% and specificity of 98.54%.

Keywords Gaussian filter · Active contour technique · GLCM method · Skin cancer · Support vector machine and K nearest neighbors

E. M. Senan (B) Department of Computer Science and Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, India M. E. Jadhav Shri Shivaji Science and Arts College, Chikhli, Buldana, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_13

1 Introduction
The skin covers the bones, muscles and all parts of the body; thus, any change in its functioning may affect other parts of the body. The skin has a top layer containing a protein called keratin and a bottom layer containing melanin to protect the skin from harmful sun rays. More attention should therefore be given to dermatology using computer-aided diagnosis. The affected area of the skin is called the lesion area. Early detection of skin cancer is more complex for the inexperienced,


so a computer-aided diagnostic system is required to help dermatologists diagnose and treat skin lesions early. Skin cancer arises from cancerous cells growing in the outer layer of the skin and is broadly divided into melanoma and non-melanoma skin cancer. There are three types of skin cancer: melanoma, basal cell cancer (BCC) and squamous cell cancer (SCC). The most common causes of skin cancer are exposure to ultraviolet sunlight, radiation overdose, regular use of sun beds and sunlight, genetic conditions and decreased immunity. The diagnosis of skin lesions includes distinguishing between pigmented and non-pigmented lesions and determining the severity of the infection. Diagnosis is made using several dermoscopic rules, such as the ABCD rule (asymmetry, borders, colors, differential structures) [1], Menzies [2] and the seven-point checklist [3]. The ABCD rule is the most widely used of all dermoscopy rules and gives good results for diagnosing the lesion. Several previous relevant studies are based on dermoscopic images. Features extracted from skin lesions should indicate the class the lesion belongs to, benign or malignant, and many feature extraction methods have been suggested in the literature [4–7]. Among the many statistical methods, results have shown GLCM to be one of the most effective for distinguishing texture in dermoscopy images [8–10]. The most commonly used classification techniques for pattern recognition include artificial neural networks [11, 12], nearest neighbors [13, 14], decision trees [5, 15] and support vector machines [4, 5]. The contribution of this study is to extract distinguishing features for the diagnosis of skin lesions based on the GLCM method and to assess the suitability of the newly extracted features. ACT is a segmentation method based on sensing the edges of a skin lesion and then isolating the lesion area from the healthy skin. The segmented images underwent further enhancement using the morphology method.

2 Literature Review
In a paper presented by Roberta B. Oliveira et al., the researchers aim to find a combination of features based on texture, color and shape, using several feature extraction methods. Several classifiers were adopted to classify the features extracted in the feature extraction step. The proposed system was applied to 1104 dermoscopy images and achieved an accuracy of 91.6%, specificity of 96.2% and sensitivity of 87% [16]. In a paper presented by Ilias Maglogiannis et al., a methodology was presented for the segmentation and detection of globules and dark dots. A multi-resolution method was applied to segment the lesion and identify a region of interest, after which the most important features were extracted from this region. The algorithm was applied to a number of dermoscopy images, and the results illustrate its effectiveness for segmenting images containing globules and dots, as well as for extracting features from them, which provide input to a classification process that distinguishes malignant lesions from benign ones [17].


In a study presented by Sameena Pathan et al., an algorithm was developed for detecting hair and using particular properties of the lesion to improve accuracy. The algorithm works in two stages: first, a hair detection method designed for the characteristics of dermoscopic hair; second, a model that distinguishes the lesion from the rest of the skin using a new chroma-based geometric deformable model. The average accuracy, specificity, sensitivity and overlap scores obtained on the PH2 dataset were 93.4%, 95.3%, 87.6% and 11.52%, respectively [18]. In a study presented by G. Wiselin Jiji et al., an effective method of extracting shape and color features for analyzing dermoscopy images was suggested. The proposed system introduces a three-stage algorithm in which particle swarm optimization (PSO) is applied to correctly classify a dataset containing multiple classes. Results obtained using ROC analysis proved the algorithm effective for the diagnosis of skin lesions; on 1450 dermoscopy images, the system achieved a sensitivity of 94% and a specificity of 98.22% [19]. In a study presented by Punal M. Arabi et al., texture analysis is used to extract features from images for the purpose of diagnosing the lesion. Texture is analyzed by statistical and structural techniques; statistical texture analysis is based on extracting features from the lesion area using WDM (wavelength division multiplexing) and GLCM (gray level co-occurrence matrix) techniques [20]. In a paper presented by Hiam Alquran et al., a tool for analyzing dermoscopy images is offered, including the collection of a database from the Web. Thresholding, opening and conversion to gray space were applied to segment the lesion area and separate it from the healthy area; features were extracted from the selected region using the GLCM method, and an SVM classifier was used to classify the PH2 dataset, achieving an accuracy of 92.1% [21]. In the study presented by Tallha Akram et al., the main objective is to diagnose skin lesions, improve segmentation and select important features. The proposed algorithm is threefold: first, a color space is used to separate the foreground from the background and the contrast is extended using a multilevel approach; second, a texture feature designed by a weighting standard is analyzed; third, improved features are extracted and dimensions reduced by combining modern and traditional features. The proposed approach was applied to the ISBI 2016, PH2 and ISIC datasets [22]. In a study presented by Mohamed Hassan et al., a segmentation algorithm was presented using GLCM. The system includes three phases: first, preprocessing with noise and hair removal; second, segmentation to select only the lesion area and separate it from the healthy skin; third, post-processing, in which a morphological erosion operation is used to produce enhanced digital images. The results obtained using the system are a sensitivity of 80.8% and specificity of 98.62% [23]. The literature contains various methods for image processing: Gupta et al. [24, 25] processed facial images, while Yadav et al. [26] processed video images. Sharma et al. developed entropy-based [27] and firefly-based [28] approaches for image segmentation.


3 Methodology
In the methodology, shown in Fig. 1, images obtained from the PH2 database are passed to a Gaussian filter to remove noise, hair and air bubbles for the purpose of image enhancement. The region of interest (RoI) is identified in the RGB color space, which is then converted to a gray image. The diagnosis is made by a classification algorithm that, based on the features of the lesion, labels it as malignant, benign or atypical.

3.1 Dataset
Due to the increasing incidence of skin cancer, many research centers have stimulated the creation of datasets for researchers, experts and specialists. The PH2 dataset was created through a joint collaboration between Hospital Pedro Hispano in Matosinhos, Portugal and the Universidade do Porto, Técnico Lisboa. The dataset consists of 200 images divided into three classes: 40 malignant, 80 atypical and 80 benign images.

Fig. 1 Methodology for diagnosing skin lesions (flowchart: Dataset → Preprocessing → Segmentation → Feature Extraction (GLCM) → Training Data → Classification (1. SVM, 2. KNN) → Evaluation → Atypical / Melanoma / Benign)


Fig. 2 The system performance. a Preprocessing, b segmentation, c morphological

3.2 Preprocessing
Some of the challenges in diagnosing images are artifacts such as hair, air bubbles and blood vessels, so images are enhanced in the first step of processing. Preprocessing consists of the following steps. Gaussian filter: a type of image-blurring filter that computes the transformation to be applied to each pixel of the image and can be applied to one- or two-dimensional images. Convolution with the original image produces the new pixel values, and the result preserves borders and edges. Hair is one source of noise that causes misdiagnosis, so it must be removed; this is done using the hair-removal technique called DullRazor [29]. Finally, the PH2 images are resized while maintaining the pixels important and necessary for the subsequent steps. Figure 2a shows image enhancement using the Gaussian filter for the three classes in the PH2 database.
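A hedged OpenCV sketch of this preprocessing step (file names and the kernel and target sizes are assumptions; DullRazor itself is a separate tool):

```python
# Preprocessing sketch (assumed OpenCV usage, not the authors' code): Gaussian blur to
# suppress noise, then resizing, as described above.
import cv2

img = cv2.imread("lesion.bmp")                 # hypothetical PH2 image path
blurred = cv2.GaussianBlur(img, (5, 5), 1.0)   # convolve with a 5x5 Gaussian kernel, sigma = 1
resized = cv2.resize(blurred, (256, 256))      # assumed target size
cv2.imwrite("lesion_preprocessed.png", resized)
```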

3.3 Segmentation Method
Segmentation follows preprocessing: the image is divided into meaningful regions so that the desired area is segmented for the following operations, extracting the information necessary to complete the image processing successfully. The lesion area must be segmented and isolated from the healthy skin, since the lesion area contains the


features needed to identify the lesion as malignant or benign. There are many segmentation techniques; in this study the active contour technique (ACT) is proposed, considered an effective technique for segmenting medical images. It separates the pixels required for analysis from the rest of the skin using multidimensional, linear contours that describe the boundaries of the object through the internal and external forces applied [30]. Figure 2b shows the segmentation on the PH2 dataset.
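As an illustration of the ACT step (not the paper's code), scikit-image's active contour model can be applied with an assumed circular initial contour around the image centre:

```python
# Active contour (snake) segmentation sketch with scikit-image (assumed API and paths).
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

gray = color.rgb2gray(io.imread("lesion_preprocessed.png"))  # hypothetical path
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([128 + 100 * np.sin(s), 128 + 100 * np.cos(s)])  # (row, col) circle

snake = active_contour(filters.gaussian(gray, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
# 'snake' is an (N, 2) array tracing the lesion boundary
```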

3.4 Morphological Method
The images resulting from the segmentation process often have some defects, so the morphological method is used to remove defects in the segmented images and to provide information on the shape and structure of the image; the technique works on binary images. Morphological image processing is similar to spatial filtering: a structuring element (SE) is moved across each pixel of the original image to produce an enhanced image with new pixel values. The technique is based on the erosion and dilation operators, and there are many morphological algorithms such as boundary extraction and region filling. Figure 2c shows the images before and after the morphological method for the three classes in the PH2 database.
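A short scikit-image/SciPy sketch of such morphological clean-up on a stand-in binary mask (the disk radius is an assumption):

```python
# Morphological clean-up sketch (assumed libraries, not the authors' code): an opening with a
# disk-shaped structuring element, then hole filling, to remove small defects in the mask.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import binary_dilation, binary_erosion, disk

mask = np.zeros((256, 256), dtype=bool)  # stand-in for the segmented binary lesion mask
mask[100:160, 90:180] = True

se = disk(3)                                             # structuring element
cleaned = binary_dilation(binary_erosion(mask, se), se)  # opening: erosion then dilation
cleaned = binary_fill_holes(cleaned)
```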

3.5 Feature Extraction
The next step after identifying the lesion area is extracting features from it. The main purpose of feature extraction is to reduce the data in the original image to the most important information that distinguishes the lesion. The proposed system uses the GLCM method to extract texture features (energy, correlation, contrast, homogeneity, entropy, standard deviation, RMS, skewness, kurtosis, mean, smoothness and variance). Texture is a regular repetition pattern describing the image structure [31]; in this work, we verify the use of texture features via GLCM. The GLCM describes the number of times a given pair of gray levels occurs in the image: each entry P(i, j) counts how often a pixel with gray value i has a neighbor with gray value j, where the distance d between the reference pixel and the neighbor is arbitrarily selected. For the direction parameter, eight possible orientation values from 0° to 360° can be used to calculate the extracted features; in this study, three directions (vertical, horizontal and one of the diagonals) are used to compute the GLCM. All features of each image are stored in a feature vector, and the feature vectors of all images are stored in a database as a feature matrix.
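A sketch of GLCM feature extraction with scikit-image (assumed API; older scikit-image versions name these functions greycomatrix/greycoprops), using a stand-in ROI, distance d = 1 and the three directions mentioned above:

```python
# GLCM feature sketch (illustrative, not the paper's code). Statistical features such as
# entropy, mean and skewness would be computed separately from the pixel values.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)  # stand-in ROI

glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2],  # horizontal, diagonal, vertical
                    levels=256, symmetric=True, normed=True)

features = {p: graycoprops(glcm, p).mean() for p in
            ("contrast", "correlation", "energy", "homogeneity")}
print(features)  # one averaged value per texture property
```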


3.6 Classification
The features extracted by the GLCM method represent three classes, namely malignant, benign and atypical. SVM uses a hyperplane that separates the data into different classes; many hyperplanes can be produced, and SVM performs best when the margin is maximal, the margin being the distance between the points nearest the hyperplane. Overall, SVM is a good classifier. The extracted features are also classified by the KNN classifier, which assigns a test data point the class of its nearest neighbors. The PH2 dataset was divided into 70% for training and 30% for testing: the training data are used to train the models, and the test data verify that the classification technique works properly without affecting the training process.
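A minimal scikit-learn sketch of this classification step with the 70/30 split (the feature matrix and labels below are random stand-ins, not PH2 data):

```python
# Classification sketch (assumed scikit-learn usage, not the authors' code): SVM and KNN
# trained on a 200 x 12 GLCM feature matrix with a 70/30 train/test split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 12))           # 200 images x 12 GLCM features (stand-in)
y = rng.integers(0, 3, size=200)    # 0 = benign, 1 = atypical, 2 = malignant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
```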

4 Results
The performance of the SVM and KNN classifiers was evaluated on the PH2 dataset. The systems achieved promising results for classifying the skin lesions into three classes: malignant, benign and atypical. The metrics used are accuracy, specificity, precision and sensitivity, obtained as in the following Eqs. (1)–(4), respectively:

Accuracy = (TN + TP) / (TN + TP + FN + FP) ∗ 100%    (1)

Specificity = TN / (TN + FP) ∗ 100%    (2)

Precision = TP / (TP + FP) ∗ 100%    (3)

Sensitivity = TP / (TP + FN) ∗ 100%    (4)

Table 1 shows the results obtained by classifying the features extracted by the GLCM algorithm, where both the SVM and KNN classifiers achieved superior results. The SVM classifier achieved better results than the KNN classifier: SVM achieved 99.10%, 100% and 98.54% accuracy, sensitivity and specificity, respectively, while KNN achieved 93.72%, 94.23% and 92.61%, respectively. Figure 3 shows the performance diagram of the SVM and KNN classifiers.

Table 1 The results achieved from the system using SVM and KNN

       Accuracy%  Sensitivity%  Specificity%  Precision%  Recall%  Fscore%  Gmean%
SVM    99.10      100.00        98.54         97.81       100.00   98.34    97.73
KNN    93.72      94.23         92.61         98.52       93.72    92.02    92.84

Fig. 3 Display performance of SVM and KNN classifiers with GLCM method for detection of skin diseases

5 Conclusion
The proposed system aims to help dermatologists detect skin lesions early. The proposed approach uses the PH2 database, which consists of 200 images distributed as 80 benign, 40 malignant and 80 atypical. The input images were enhanced using the Gaussian filter, and the lesion was segmented from the healthy skin using the active contour technique (ACT). Because the images resulting from segmentation have some defects, the morphological method was applied to obtain more enhanced images. Twelve features per image were extracted from the ROI by GLCM and stored in a feature vector; all feature vectors are stored in a database called the feature matrix, which was input to the SVM and KNN classifiers. The dataset was divided into 70% for training and 30% for testing, and SVM and KNN classify images as benign, malignant or atypical. Both classifiers achieved promising results, with the SVM classifier outperforming the KNN classifier: the results achieved using SVM were 99.10%, 100% and 98.54% for accuracy, sensitivity and specificity, respectively.


References 1. W. Stolz, A. Riemann, A.B. Cognetta, L. Pillet, W. Abmayr, D. Holzel, P. Bilek, F. Nachbar, M. Landthaler, ABCD rule of dermatoscopy—a new practical method for early recognition of malignant-melanoma. Eur. J. Dermatol. 4(7), 521–527 2. S.W. Menzies, C. Ingvar, K.A. Crotty, W.H. McCarthy, Frequency and morphologic characteristics of invasive melanomas lacking specific surface microscopic features. Arch. Dermatol. 132(10), 1178–1182 (1996) 3. G. Argenziano, G. Fabbrocini, P. Carli, V. De Giorgi, E. Sammarco, M. Delfino, Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Arch. Dermatol. 134(12), 1563–1570 (1998) 4. M.E. Celebi, H.A. Kingravi, B. Uddin, H. Iyatomi, Y.A. Aslandogan, W.V. Stoecker, R.H. Moss, A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph 31(6), 362–373 (2007). https://doi.org/10.1016/j.compmedimag.2007.01.003 5. R. Garnavi, M. Aldeen, J. Bailey, Computer-aided diagnosis of melanoma using border- and wavelet-based texture analysis. IEEE Trans. Inf. Technol. Biomed. 16(6), 1239–1252 (2012). https://doi.org/10.1109/titb.2012.2212282 6. M.E. Celebi, A. Zornberg, Automated quantification of clinically significant colors in dermoscopy images and its application to skin lesion classification. IEEE Syst. J. 8(3), 980–984 (2014). https://doi.org/10.1109/JSYST.2014.2313671 7. K. Shimizu, H. Iyatomi, M.E. Celebi, K.-A. Norton, M. Tanaka, Four class classification of skin lesions with task decomposition strategy. IEEE Trans. Biomed. Eng. 62(1), 274–283 (2015). https://doi.org/10.1109/TBME.2014.2348323 8. I. Maglogiannis, C.N. Doukas, Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans. Inf. Technol. Biomed. 13(5), 721–733 (2009). https://doi.org/10. 1109/titb.2009.2017529 9. H. Iyatomi, K. Norton, M.E. Celebi, G. Schaefer, M. Tanaka, K. Ogawa, Classification of melanocytic skin lesions from nonmelanocytic lesions, in Annual International Conference of the IEEE Engineering in Medicine and Biology Society Buenos Aires, 31 Aug–4 Sept 2010 (IEEE, 2010), pp. 5407–5410. https://doi.org/10.1109/iembs.2010.5626500 10. M.E. Celebi, H. Iyatomi, W.V. Stoecker, R.H. Moss, H.S. Rabinovitz, G. Argenziano, H.P. Soyer, Automatic detection of bluewhite veil and related structures in dermoscopy images. Comput. Med. Imaging Graph 32(8), 670–677 (2008). https://doi.org/10.1016/j.compme dimag.2008.08.003 11. H. Iyatomi, H. Oka, M.E. Celebi, M. Hashimoto, M. Hagiwara, M. Tanaka, K. Ogawa, An improved Internet-based melanoma screening system with dermatologist-like tumor area extraction algorithm. Comput. Med. Imaging Graph 32(7), 566–579 (2008). https://doi.org/10. 1016/j.compmedimag.2008.06.005 12. J.P. Papa, A.X. Falcao, C.T. Suzuki, Supervised pattern classification based on optimum-path forest. Int. J. Imaging Syst. Technol. 19(2), 120–131 (2009). https://doi.org/10.1002/ima.20188 13. C. Barata, M. Ruela, M. Francisco, T. Mendonça, J.S. Marques, Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Syst. J. 8(3), 965–979 (2013). https://doi.org/10.1109/JSYST.2013.2271540 14. M.M. Rahman, P. Bhattacharya, B.C. 
Desai, A multiple expert-based melanoma recognition system for dermoscopic images of pigmented skin lesions, in 8th IEEE International Conference on International Conference on Bioinformatics and Bioengineering, Athens, 8–10 Oct 2008 (IEEE, 2008), pp. 1–6. https://doi.org/10.1109/bibe.2008.4696799 15. G.D. Leo, A. Paolillo, P. Sommella, G. Fabbrocini, Automatic diagnosis of melanoma: a software system based on the 7-point check-list, in 43rd International Conference on System Sciences, Hawaii, 5–8 Jan 2010 (IEEE, 2010), pp. 1–10. https://doi.org/10.1109/hicss.2010.76 16. R.B. Oliveira, A.S. Pereira, J.M.R. Tavares, Computational diagnosis of skin lesions from dermoscopic images using combined features. Neural Comput. Appl. 1–21 (2018)


17. I. Maglogiannis, K.K. Delibasis, Enhancing classification accuracy utilizing globules and dots features in digital dermoscopy. Comput. Methods Programs Biomed. 118(2), 124–133 (2015) 18. S. Pathan, K.G. Prabhu, P.C. Siddalingaswamy, Hair detection and lesion segmentation in dermoscopic images using domain knowledge. Med. Biol. Eng. Compu. 56(11), 2051–2065 (2018) 19. G.W. Jiji, P.J. DuraiRaj, Content-based image retrieval techniques for the analysis of dermatological lesions using particle swarm optimization technique. Appl. Soft Comput. 30, 650–662 (2015) 20. P.M. Arabi, G. Joshi, N.V. Deepa, Performance evaluation of GLCM and pixel intensity matrix for skin texture analysis. Perspect. Sci. 8, 203–206 (2016) 21. H. Alquran, I.A. Qasmieh, A.M. Alqudah, S. Alhammouri, E. Alawneh, A. Abughazaleh, F. Hasayen, The melanoma skin cancer detection and classification using support vector machine, in 2017 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT). IEEE, Oct 2017, pp. 1–5 22. T. Akram, M.A. Khan, M. Sharif, M. Yasmin, Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features. J. Amb. Intell. Human. Comput. 1–20 (2018) 23. M. Hassan, M. Hossny, S. Nahavandi, A. Yazdabadi, Skin lesion segmentation using gray level co-occurance matrix, in 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, Oct 2016, pp. 000820–000825 24. R. Gupta, S. Kumar, P. Yadav, S. Shrivastava, Identification of age, gender, & race SMT (scare, marks, tattoos) from unconstrained facial images using statistical techniques, in 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE). IEEE, July 2018, pp. 1–8 25. R. Gupta, P. Yadav, S. Kumar, Race identification from facial images using statistical techniques. J. Stat. Manag. Syst. 20(4), 723–730 (2017) 26. P. Yadav, R. Gupta, S. Kumar, Video image retrieval method using dither-based block truncation code with hybrid features of color and shape, in Engineering Vibration, Communication and Information Processing (Springer, Singapore, 2019), pp. 339–348 27. A. Sharma, R. Chaturvedi, S. Kumar, U.K. Dwivedi, Multi-level image thresholding based on Kapur and Tsallis entropy using firefly algorithm. J. Interdiscipl. Math. 23(2), 563–571 (2020) 28. A. Sharma, R. Chaturvedi, U.K. Dwivedi, S. Kumar, S. Reddy, Firefly algorithm based Effective gray scale image segmentation using multilevel thresholding and Entropy function. Int. J. Pure Appl. Math. 118(5), 437–443 (2018) 29. T. Lee, V. Ng, R. Gallagher, A. Coldman, D. McLean, Dullrazor®: a software approach to hair removal from images. Comput. Biol. Med. 27(6), 533–543 (1997) 30. D. Laurent Cohen, On active contour models and balloons. CVGIP: Image Understanding 53, 211–218 (2004). https://doi.org/10.1016/1049-9660(91)90028-N 31. G. Srinivasan, G. Shobha, Statistical texture analysis, in Proceedings of World Academy of Science, Engg & Tech (2007), p. 36

MaTop: An Evaluative Topic Model for Marathi

Jatinderkumar R. Saini and Prafulla B. Bafna

Abstract Topic modeling is a text mining technique that presents the theme of a corpus by identifying latent features of the language. It thus provides contextual information about the documents in the form of topics and their representative words, thereby reducing time, effort, etc. Topic modeling on English corpora is a common task, but topic modeling on regional languages like Marathi has not been explored yet. The proposed approach implements a topic model on a Marathi corpus containing more than 1200 documents. Intrinsic evaluation of latent Dirichlet allocation (LDA), which is used to implement the topic model, is carried out with a coherence measure, whose value is maximum for 4 topics. The retrieved topics are related to 'Akbar–Birbal,' 'Animal stories,' 'Advise giving stories' and 'general stories.' A dendrogram and a word cloud are used for visualization: the dendrogram shows topic-wise documents, and the word cloud shows sample informative words from different stories. The proposed approach involves context while deriving the topics using synsets. The entropy value is below 1.5 for varied datasets; entropy ensures the independence of the topics, and purity the similarity between a topic's words.

Keywords Dendrogram · Latent Dirichlet allocation (LDA) · Marathi · Topic modeling · Word cloud

J. R. Saini · P. B. Bafna (&) Symbiosis Institute of Computer Studies and Research, Symbiosis International Deemed University, Pune, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_14

1 Introduction
The abundance of online and offline data has raised the need for retrieving information in the form of summarization, grouping of documents and so on. These are all text analysis tasks belonging to natural language processing (NLP) [1]. NLP has evolved from the need to process and extract information from text available in various languages. Topic modeling is one of the


NLP activities, automatically identifying the theme of a corpus in the form of words, phrases and so on. It finds distinct, independent topics in the form of groups of similar words. Data preprocessing, model building using latent Dirichlet allocation (LDA), visualization and evaluation are the common steps to be followed, and LDA can be implemented for Marathi too. LDA is a popularly used technique for deriving topics [2]. A corpus, known as a group of documents, is created after loading the input documents. LDA assumes a document is a fixed group of words, and each group of words is represented as a topic; the words present in a topic are significant and represent its theme. LDA uses probabilities based on word co-occurrence, so a word gets membership of the topic in which it has the highest probability. The probability that a specific word appears in a topic is termed β [3]. To decide the optimal number of topics, topic coherence is used. Topic coherence assigns scores to topics based on the semantic similarity between the high-scoring words in each topic. It is an intrinsic evaluation metric used to evaluate unsupervised models, because the data labels are not known; it evaluates the coherence of the topics inferred by the topic model. Before building the model, several NLP steps need to be applied to the corpus. Tokenization is the first step, identifying the chunks of words. Stop words [4–7], URLs and irrelevant characters like punctuation and numbers are removed. A POS tagger [8] marks the different parts of speech, and the important ones, like nouns and adjectives, are retained. Lemmatization converts each word into its meaningful original form. Building topic models in English is common, but building them in regional languages like Gujarati [9], Punjabi [10], Marathi [11–14] and Hindi [15–18] is difficult due to the limited availability of resources. Marathi is an inflectional language, so its processing differs from English. For example, in the sentence 'सौख्यासमनापटवूकसे?', all 4 words and the punctuation are first separated, resulting in 5 tokens; '?', being punctuation, is removed in the data cleaning step; 'कसे' is removed as a stop word; and lemmatization converts 'सौख्यास' into 'सौख्य'. The term frequency-inverse document frequency (TF-IDF) measure [2] is a statistical score in which the significance of a term increases in direct proportion to the frequency of the word in the document, offset by the occurrence of the word in the corpus; a high score indicates a more significant word. The vector space model [19] is a normalized, structured representation of the corpus, built from the significant terms, those whose score is above a particular value called a threshold. Hierarchical agglomerative clustering (HAC) [12] and word clouds are methods to visualize the results. HAC groups the documents based on the topics they represent and produces different groups [20], each group representing a topic. Entropy and purity assess the quality of the clusters internally and externally: entropy ensures the independence of clusters, and purity ensures the minimum distance between cluster elements. A word cloud is a visual representation consisting of a group of words presented in different sizes and colors, which depict the importance of the words.


This research is unique because:
1. Topic modeling in Marathi is carried out
2. Intrinsic evaluation of the model is carried out
3. Word cloud and dendrogram are presented along with their evaluation
4. Consistent entropy and purity are obtained for varied datasets.

2 Literature Review
Evaluation of topic models like LDA is a challenging task. A topic model provides both a latent and a predictive representation of the corpus theme. Methods like normalized absolute perplexity (NAP) and normalized absolute coherence (NAC) have been suggested to predict the optimal number of topics [10]. To evaluate the meaningfulness of the latent space generated in an unsupervised learning environment, parameters like coherence measures, mutual information and conditional probability are used; the PMI score is calculated from co-occurrence statistics of word pairs formed from external data sources [21]. Topics can be considered as features for evaluating assignments, and interpreting topics is more effective than using simple lexical features, since it solves the problems of synonymy and polysemy by identifying lexical semantics. A probability-based topic model is learned from data comprising students' assignments, topical features are integrated with lexical features, multiple views of the text data are constructed, and the approach is evaluated using different parameters [22]. Various NLP tasks can be achieved using a framework based on a multilayer neural network: the algorithm performs entity recognition, tokenization, POS tagging and semantic labeling. Task-specific operations are avoided and pragmatic knowledge is not considered; instead of manually designed input features, the training dataset is used to learn internal representations of the words. This work was then applied to the construction of a freely accessible tagging structure that completes in minimal computation, and the system is evaluated for accuracy and speed [23]. The use and design of the Stanford NLP toolkit, useful for the core processes in natural language engineering and analysis, are also described. It is used not only by researchers but also by government and commercial users; it has a user-centered design and a simple interface, and it includes robust and correct analysis components without the need for baggage. Annotations are used for the required tasks, which are described simply instead of with many complex details, and system development guidelines are provided along with other frameworks like NLTK; the approach is useful for beginners [24]. Interpersonal dialog has grown due to social sites and networking, and studying the associated language processing and analysis tasks has opened new research. Language analysis of social sites increasingly focuses on both professional and personal levels, and introducing powerful methods to analyze the data of


social media by extracting and processing natural, free-form language data is a challenging task. Information extraction from text data is the traditional approach and differs from social data analysis; clustering, classification and topic modeling need to be introduced to social data too. The study reviews NLP tools and tasks for processing the latest information; innovative NLP frameworks will facilitate social media monitoring, health systems and business intelligence. Different performance evaluation campaigns, such as SemEval, TREC, TAC and CLEF, are explained [25]. An AI technique has also been applied to a Marathi corpus: lexical features of the language are proposed, different classification algorithms are implemented on a corpus of mixed articles, and new features discovered by the algorithms are compared with traditional features [26].

3 Research Methodology
This section details the experimental work carried out to detect the topics. Topics are detected using LDA implemented on the vector space model. One word can belong to more than one topic with different probabilities, but it is treated as a member of only the topic in which its probability of membership is highest. For example, in Fig. 1 the word 'जीवन' belongs to both of the topics retrieved after applying VSM followed by LDA on the corpus, but it is a member only of the topic in which its membership probability is higher. Figure 2 shows the steps implemented to carry out the experiments and evaluations. The udpipe library available in 'R' [27] is used to process the Marathi text. It involves three steps: download the model for the Marathi language, load the model using the command dl ← udpipe_download_model(language = "marathi-ufal"), and call the model using udmodel ← udpipe_load_model(file = "marathi-ufal-ud-2.4-190531.udpipe"). The library supports all NLP tasks like tokenization, lemmatization and so on. The created corpus is annotated by passing the udmodel and the preprocessed corpus, coded as udpipe_annotate(udmodel). The following steps explain the research methodology.

Fig. 1 Topic retrieval from corpus


Fig. 2 Steps in research methodology

1. Dataset collection and corpus creation: The data comprise stories and poems of varied types downloaded from different websites [28–30]. The dataset contains 752 stories and 472 poems, a total of 16 MB of data stored as 'UTF-8'-encoded text files. A corpus of 1224 text documents in total is prepared.
2. Preprocessing the corpus: Preprocessing starts with tokenization; the total number of tokens is 145,217. The POS tagger provided by udpipe is used. The second step removes punctuation and special characters, and only nouns, adjectives, verbs and adverbs are considered for further processing, which in turn removes stop words; after removal, the total number of tokens is 83,187. Lemmatization is carried out to generate unique terms instead of stemming; the total number of lemmas is 34,123. The TF-IDF score is calculated for the 34,123 terms, and the vector space model for Marathi (VSMM) is built using the significant terms, those with a TF-IDF score above the 0.75 threshold, to be used by LDA.
3. Intrinsic evaluation of the model: The coherence score is used to evaluate the coherence of topics in the corpus. The coherence score of the LDA model is calculated to decide the number of topics; the highest coherence score is observed at 4, so the number of topics is taken as 4. The LDA model is built on the VSM to evaluate the coherence between the topics inferred by the model (see the sketch after this list).
4. Building the unsupervised model: The corpus, in the form of feature vectors generated by the VSM, is input to the model. Vectors of probabilities are associated with the words of each topic, topics in documents are passed as model parameters, and the top N informative words are generated for each topic. Topic 4, for instance, has the top informative words 'बिरबल,' 'नोकर,' 'अकबर' and 'विनोद'.
5. Visualizing and validating results: The results are visualized using a dendrogram generated by hierarchical agglomerative clustering (HAC) and a word cloud. HAC is evaluated using parameters like entropy and purity on datasets of varied size, calculated using the entropy function in 'R'. Both parameters are constant and optimized from small to large corpora: good entropy ensures that topics are not closely related to each other (inter-cluster distance), and good purity shows that the words within a topic (intra-cluster) are closely related to each other.
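As an illustrative alternative to the paper's R implementation, steps 3 and 4 can be sketched with the Python gensim library; the document collection, topic range and coherence measure ('c_v') below are assumptions:

```python
# LDA + coherence sketch with gensim (assumed library; the paper's implementation uses
# udpipe/R). Pick the topic count with the highest coherence score (4 in the paper).
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

texts = [["जीवन", "झाड", "सिंह"], ["बिरबल", "अकबर", "विनोद"]]  # toy lemmatized documents
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]                 # bag-of-words vectors

for k in range(2, 10):                                          # candidate topic counts
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    score = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    print(k, round(score, 3))                                   # keep k with the highest score
```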


4 Results and Discussion
Table 1 shows the sample dataset in the form of stories. After collecting data from different websites, the data are cleaned and stored in text format, as displayed in the 'title' column. Stop words and punctuation are ignored, and a sample sentence is shown in the second column: for example, 'मानसिकतेला' and 'चौफेर' are considered, while prepositions and pronouns like 'माझ्या' and 'मी' are ignored. Lemmatization converts 'मानसिकतेला' into 'मानसिक', and sample lemmas are presented in the next column. The total number of tokens before stop-word removal is 145,217 and after removal 83,187; the total number of unique lemmas is 64,237. The processed dataset is converted into a corpus using the command Corpus ← Corpus(VectorSource()). The corpus contains unique lemmas, and the TF-IDF weight of each term is calculated; this weight is then used to decide the significant terms. Table 2 shows sample terms with their TF-IDF weights. The total number of unique terms is 64,237, and the term with the lowest TF-IDF weight is 'घायाळ' at 0.01. Table 3 shows the significant terms, chosen based on a threshold: the terms weighting more than 0.75 are shown in the table. A total of 6341 terms are found to be significant, and the term with the highest weight is 'जीवन' at 0.98. These significant terms are used to build the DTM. Table 4 shows the vector space model for Marathi built from the significant terms: each text document is a vector over its corresponding terms, also called dimensions. Text documents are shown as rows, the columns represent significant terms, and each entry is the TF-IDF weight of the term with respect to the document; thus text document T1 = {0.76, 0.92, 0.77, …, 0.82}. This VSMM is input to the LDA. The coherence score of the LDA model is calculated to decide the number of topics: Fig. 3 shows the coherence scores for varied numbers of topics, with the maximum score of 0.8 observed at 4 topics, so 4 topics are finalized. An unsupervised topic model is built using the VSMM, and the corpus is represented by 4 topics. Table 5 shows all the topics and the words by which they are represented. Figure 4 shows the words detected by two of the topics; the X-axis shows the value of the β constant of the words of the topic, in the form of the

Table 1 Sample dataset and preprocessing steps

Sr. No. | Sentence (32,183) | Tokenization | Noise removal | Unique lemmas | Hysynset
1 | एका जंगलात खूप प्राणी | एका, जंगलात, … | जंगलात, वाघाने, … | जंगल, वाघ, प्राणी, … | वाघ, प्राणी, विचार, कल्पना

Table 2 Term and TF-IDF count

Term | TF-IDF weight
दंड | 0.02
चर्चा | 0.05
गावकरी | 0.1
लोक | 0.4

Table 3 Significant term and TF-IDF count

Significant term | TF-IDF weight
गावकरी, लोक | 0.98
राजा | 0.86
भक्त | 0.84
आमंत्रण | 0.76

Table 4 Vector space model for Marathi

Text (story/poem) | जीवन | राजा | भक्त | आमंत्रण
T1 | 0.76 | 0.92 | 0.77 | 0.82
T2 | 0.91 | 0.87 | 0.76 | 0.22
T1224 | 0.45 | 0.67 | 0.78 | 0.88

Fig. 3 Coherence constant for a varied number of topics (X-axis: number of topics, 1–9; Y-axis: coherence constant)
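The coherence-driven choice of the number of topics can be sketched in Python with the gensim library (illustrative only; the paper's implementation is in R): one LDA model is trained per candidate topic count, and the count with the highest coherence is kept.

from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

def pick_num_topics(texts, candidates=range(2, 10)):
    # texts: one list of lemmatized tokens per document
    dictionary = Dictionary(texts)
    bow = [dictionary.doc2bow(doc) for doc in texts]
    scores = {}
    for k in candidates:
        lda = LdaModel(bow, num_topics=k, id2word=dictionary, random_state=0)
        cm = CoherenceModel(model=lda, texts=texts,
                            dictionary=dictionary, coherence='c_v')
        scores[k] = cm.get_coherence()
    return max(scores, key=scores.get)  # 4 for the corpus studied here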

The X-axis shows the value of the b constant of the words of a topic, that is, the probability of membership of the word with respect to the topic. For example, 'दही' from topic 1 has the highest value of b, 0.4, in Fig. 4. The corpus is represented by 4 topics, whose words are presented in tabular format; the words represent each topic, and every document in the corpus belongs to one topic. The word 'जीवन' from topic 3 has the highest value of the b constant, 0.6, and the word 'आमंत्रण' from topic 4 has the lowest, 0.04. Based on pragmatic knowledge, the four topics are identified as 'General stories,' 'Advice-giving stories,' 'Animal stories' and 'Akbar–Birbal stories,' respectively. To depict that every document belongs to a topic, HAC is implemented, and the clusters representing each topic are shown in the dendrogram in Fig. 5. For clear visualization, only 25 documents are considered; documents belonging to one topic can be seen placed in one box. For example, topic 3 contains documents 12 and 18.

Table 5 Topics and their representative words

Topic 1 | Topic 2 | Topic 3 | Topic 4
ब्राह्मण गायी दूध दही | राजा भक्त दान लक्ष्मी देव दर्शन | जीवन झाड सिंह कळप समुद्र | बिरबल नोकर अकबर विनोद मुघल आमंत्रण

Fig. 4 Words and their b constants (X-axis: weight, 0–0.6)

Fig. 5 Dendrogram representing topic-wise clusters and evaluation

Fig. 6 Word cloud representing sample words of the topics

To observe the trend of entropy and purity, they are calculated for varied sets of stories. Both parameters are consistent and optimum. The entropy of clustering all 1224 documents is 0.1, and the purity is 0.8. This ensures that all 4 topics are well separated and that the words within a topic are correlated. The sample words extracted from the topics representing the corpus are shown in the word cloud in Fig. 6.
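For reference, cluster purity and entropy can be computed as in the following minimal Python sketch (an illustrative assumption; the paper computes these with the entropy function in R). Purity is the fraction of documents falling in the majority topic of their cluster, and entropy is the size-weighted average of the per-cluster label entropy, so high purity and low entropy indicate well-separated topics:

import math
from collections import Counter

def purity_entropy(clusters, labels):
    # clusters: cluster id per document; labels: topic label per document
    n = len(labels)
    groups = {}
    for c, y in zip(clusters, labels):
        groups.setdefault(c, []).append(y)
    purity = entropy = 0.0
    for members in groups.values():
        counts = Counter(members)
        purity += counts.most_common(1)[0][1] / n
        h = -sum((m / len(members)) * math.log2(m / len(members))
                 for m in counts.values())
        entropy += (len(members) / n) * h
    return purity, entropy

print(purity_entropy([0, 0, 1, 1], ['a', 'a', 'b', 'a']))  # (0.75, 0.5)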

5 Conclusions

A topic model is built on the Marathi corpus using LDA. Intrinsic evaluation assures the topic coherence of the corpus. The coherence score indicated four topics, which represent different categories of stories; pragmatic knowledge is used to decide the domain of each topic. Documents belonging to each topic are clustered and represented using a dendrogram generated by HAC. The entropy and purity are consistent for varied dataset sizes, and both are optimum for more than 1200 documents. The entropy is less than 1.5, and the purity is in the range of 0.8–0.9. These values ensure the independence of topics in the form of large intercluster distance, while optimum purity indicates small intracluster distance; that is, the words representing a topic are more similar to each other. The work can be extended by using a synset-based vector space model, and a comparative analysis of both models can be performed.

References

1. R.M. Rakholia, J.R. Saini, Automatic language identification and content separation from Indian multilingual documents using unicode transformation format, in Proceedings of the International Conference on Data Engineering and Communication Technology (Springer, Singapore, 2017), pp. 369–378
2. S.R. Vispute, S. Kanthekar, A. Kadam, C. Kunte, P. Kadam, Automatic personalized marathi content generation, in 2014 International Conference on Circuits, Systems, Communication and Information Technology Applications (CSCITA). IEEE, Apr 2014, pp. 294–299
3. H. Jelodar, Y. Wang, C. Yuan, X. Feng, X. Jiang, Y. Li, L. Zhao, Latent Dirichlet Allocation (LDA) and topic modeling: models, applications, a survey. Multimedia Tools Appl. 78(11), 15169–15211 (2019)
4. J.R. Saini, R.M. Rakholia, On continent and script-wise divisions-based statistical measures for stop-words lists of international languages. Procedia Comput. Sci. 89, 313–319 (2016)
5. R.M. Rakholia, J.R. Saini, A rule-based approach to identify stop words for Gujarati language, in Proceedings of the 5th International Conference on Frontiers in Intelligent Computing: Theory and Applications (Springer, Singapore, 2017), pp. 797–806
6. R.M. Rakholia, J.R. Saini, Lexical classes based stop words categorization for Gujarati language, in 2016 2nd International Conference on Advances in Computing, Communication, & Automation (ICACCA). IEEE (2016), pp. 1–5
7. J.K. Raulji, J.R. Saini, Generating stopword list for Sanskrit language, in 2017 IEEE 7th International Advance Computing Conference (IACC). IEEE, Jan 2017, pp. 799–802
8. J. Kaur, J.R. Saini, POS word class based categorization of Gurmukhi language stemmed stop words, in Proceedings of First International Conference on Information and Communication Technology for Intelligent Systems, vol. 2 (Springer, Cham, 2016), pp. 3–10
9. R.M. Rakholia, J.R. Saini, The design and implementation of diacritic extraction technique for Gujarati written script using unicode transformation format, in 2015 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT). IEEE, Mar 2015, pp. 1–6
10. M. Hasan, A. Rahman, M.R. Karim, M.S.I. Khan, M.J. Islam, Normalized approach to find optimal number of topics in Latent Dirichlet Allocation (LDA), in Proceedings of International Conference on Trends in Computational and Cognitive Engineering (Springer, Singapore, 2021), pp. 341–354
11. P.B. Bafna, J.R. Saini, An application of Zipf's law for prose and verse corpora neutrality for Hindi and Marathi languages. Int. J. Adv. Comput. Sci. Appl. 11(3) (2020)
12. P.B. Bafna, J.R. Saini, Marathi text analysis using unsupervised learning and word cloud. Int. J. Eng. Adv. Technol. 9(3) (2020)
13. P.B. Bafna, J.R. Saini, Marathi document-similarity measurement using semantics-based dimension reduction technique. Int. J. Adv. Comput. Sci. Appl. 11(4). https://doi.org/10.14569/IJACSA.2020.0110419
14. P.B. Bafna, J.R. Saini, Measuring the similarity between the Sanskrit documents using the context of the corpus technique. Int. J. Adv. Comput. Sci. Appl. 11(5) (2020)
15. G. Venugopal-Wairagade, J.R. Saini, D. Pramod, Novel language resources for Hindi: an aesthetics text corpus and a comprehensive stop lemma list (2020). arXiv preprint arXiv:2002.00171


16. P.B. Bafna, J.R. Saini, Hindi multi-document word cloud based summarization through unsupervised learning, in 9th International Conference on Emerging Trends in Engineering and Technology on Signal and Information Processing (ICETET-SIP-19), Nagpur, India, Nov 2019 (in Press, IEEE, 2019)
17. P.B. Bafna, J.R. Saini, Scaled document clustering and word cloud based summarization on Hindi Corpus, in 4th International Conference on Advanced Computing and Intelligent Engineering, Bhubaneshwar, India, Dec 2019 (in Press, Springer, 2019)
18. P.B. Bafna, J.R. Saini, BaSa: a context based technique to identify common tokens for Hindi verses and proses, in IEEE International Conference For Emerging Technology, Belagavi, India (in Press, IEEE-INCET, 2020)
19. R.M. Rakholia, J.R. Saini, Information retrieval for Gujarati language using cosine similarity based vector space model, in Proceedings of the 5th International Conference on Frontiers in Intelligent Computing: Theory and Applications (Springer, Singapore, 2017), pp. 1–9
20. P. Bafna, D. Pramod, A. Vaidya, Document clustering: TF-IDF approach, in 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT). IEEE, Mar 2016, pp. 61–66
21. S.D. Kale, R.S. Prasad, Influence of language-specific features for author identification on Indian literature in Marathi, in International Conference on Soft Computing and Signal Processing, June 2019 (Springer, Singapore, 2019), pp. 639–652
22. S. Kuzi, W. Cope, D. Ferguson, C. Geigle, C. Zhai, Automatic assessment of complex assignments using topic models, in Proceedings of the Sixth (2019) ACM Conference on Learning@Scale, June 2019, pp. 1–10
23. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, P. Kuksa, Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12, 2493–2537 (2011)
24. C.D. Manning, M. Surdeanu, J. Bauer, J.R. Finkel, S. Bethard, D. McClosky, The Stanford CoreNLP natural language processing toolkit, in Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, June 2014, pp. 55–60
25. A.A. Farzindar, D. Inkpen, Natural language processing for social media, in Synthesis Lectures on Human Language Technologies, vol. 13(2) (2020), pp. 1–219
26. D. Newman, S. Karimi, L. Cavedon, External evaluation of topic models, in Australasian Document Computing Symposium, Sept 2009
27. http://cran.r-project.org/web/packages/udpipe/vignettes/udpipe-annotation.html
28. www.matrubharti.com/novels/marathi
29. https://marathi.pratilipi.com/marathi-short-stories-pdf-free-download
30. https://www.hindujagruti.org/hinduism-for-kids-marathi/category/marathi-katha/marathi-stories

Convolutional Neural Network: An Overview and Application in Image Classification

Sushreeta Tripathy and Rishabh Singh

Abstract As the field of artificial intelligence keeps progressing each year, it is noticeable how deep learning is becoming a significant approach for information processing tasks like image classification and speech recognition. Deep learning consists of several techniques which are advanced, have a powerful learning capacity and give high accuracy. Convolutional neural networks have become a significant approach for the classification of images; a convolutional neural network (CNN) can automatically extract features, and hence feature engineering is not required. In this manuscript, a deep convolutional network is used to achieve high accuracy in distinctively classifying images of cats and dogs. It is shown how CNN gives better accuracy than other techniques.

Keywords Artificial intelligence · Convolutional neural network · Image classification · Deep learning

1 Introduction

Artificial neural network (ANN) is mainly a computational model based on the formations and functionalities of biological neural networks. Since the structure of an ANN is determined by the information flow, changes to the neural network are based on the input given and the output obtained from it. ANN can be considered a nonlinear statistical model, which implies that there is a complex relationship between the output and the input; due to this complex relationship, different patterns can be seen for different models [1–4]. The idea behind ANN comes from the working of the human brain to distinctly make the right decisions.


Fig. 1 Three-layer artificial neural network

We can say that an ANN is made of numerous nodes that copy the organic neurons of the human mind. These neurons are associated by links and thereby interact with one another. Apart from taking input information, nodes also perform basic operations on the information, and the results of these operations are passed on to other neurons. The output at every node is called its activation or node value, and each neural link is associated with a weight. ANNs are capable of learning, which happens by changing the values of the weights. The accompanying representation shows a basic ANN: Figure 1 consists of an input, a hidden and an output layer. This architecture forms the fundamental structure of numerous common ANN architectures.

Supervised learning (SL) and unsupervised learning (UL) are two key classifications in image processing tasks. Supervised learning is the machine learning approach of learning a function that produces the desired output from a given input on the basis of sample input–output pairs; it deduces a function from a set of labeled training examples. In SL, each sample is a pair of two things: an input value, which is often a vector, and a desired output value, also called the supervisory signal [5]. UL is a class of machine learning (ML) that searches for previously undiscovered patterns in a database with no pre-existing output samples and with minimal human supervision. Unlike SL, which usually makes use of data labeled by humans, UL, also known as self-organization, models the likelihood over inputs [6].

The most commonly used algorithm for analyzing visual images is the CNN, a class of supervised deep neural networks. They are otherwise called shift invariant or space invariant ANNs, in view of their shared-weights design and translation invariance attributes. CNNs have applications in recommender systems, analysis of medical images, recognition of images and videos, financial time series, natural language processing and classification of images [7]. CNNs can be seen as a regularized version of multilayer perceptrons. Multilayer perceptrons typically mean fully connected networks, in which every neuron in one layer is connected to all neurons in the subsequent layer. A fully connected architecture makes these networks prone to overfitting the data.


Adding some form of measurement of the magnitude of the weights to the loss function is one way to achieve regularization. Although conventional multilayer perceptron (CMLP) frameworks have been used to identify images, they face many difficulties in handling dimensionality and hence do not show the desired results on high-resolution pictures. If a 1000 × 1000 pixel picture with three color channels is considered, a single fully connected neuron requires 3 million weights, which is excessively high to feasibly compute efficiently at scale with 'complete connectedness' [8].

A significant hindrance of conventional ANN models is the computational complexity required to process visual data. Due to the relatively small dimensionality of 28 × 28 of the MNIST database, a common benchmarking machine learning dataset, it is suitable for most forms of ANN. Keeping in mind that the MNIST dataset is normalized to black and white values, a single neuron in the first hidden layer carries 784 weights (28 × 28 × 1). If a colored image input with dimensions 64 × 64 is considered, the weights increase to 12,288 for a single neuron of the first hidden layer (a quick check of these counts is sketched at the end of this section). To work with input at such a scale, the neural network needs to be much larger than the one used for classifying MNIST data. This example shows the drawbacks of using such models; increasing the number of hidden layers in the network, which by default increases the number of neurons, can improve matters.

The second issue to address is overfitting. If a model overfits, its output fits too closely to a particular dataset, and predictions on additional input data become unreliable. An overfitted model is a statistical model containing more parameters than can be justified by the dataset; the unknowing modeling of residual variation causes overfitting.

In this paper, we have used Kaggle's Asirra dataset for classification of pictures of cats and dogs. It is tough for computers to tell apart pictures of cats and dogs: distinctive features such as the different backgrounds of each image, the angles from which the photographs have been taken and the body structure in each image make identification difficult.

The paper is organized as follows. In Sect. 2, we briefly review previously done work. In Sect. 3, the CNN architecture with its different layers is presented. In Sect. 4, we describe the materials and methods. Through experiments and inferences, we present the performance of our method in Sect. 5. At last, we sum up our results in Sect. 6.
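As referenced above, the quoted weight counts follow directly from height × width × channels; a quick illustrative Python check:

def weights_per_neuron(height, width, channels):
    # a fully connected neuron carries one weight per input value
    return height * width * channels

print(weights_per_neuron(28, 28, 1))      # 784 for grayscale MNIST
print(weights_per_neuron(64, 64, 3))      # 12288 for a small color image
print(weights_per_neuron(1000, 1000, 3))  # 3000000 for a high-resolution image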

2 Related Literature

Deep learning methods are predominantly used in various fields. Most popularly, the CNN is widely used among all DL techniques for image-related problems; it converts raw image pixel values to classified outputs.


Fig. 2 Architecture of proposed CNN

CNNs are being used to classify images, detect objects, label scenes and recognize faces. As seen through use, CNNs have proved to provide better accuracy than SVM classifiers and deep neural networks [9]. Having brought a revolution in the field of deep learning, CNN has not only improved the accuracy of image classification but also played a key role in feature extraction. It is commonly used in the field of computer vision and is undoubtedly highly effective [10–12]. A major advancement was made in 2012, when Krizhevsky et al. designed AlexNet, which showed a lot of improvement compared to the previous methods used on the same image classification task [13]. To the present day, many experiments have been done on AlexNet to further improve it, and it is commonly seen that the deeper the neural network, the better the performance; that is, by increasing the depth of the network, the approximation is better and the representation of features is enhanced [14–17]. Recently, CNN has achieved landmark success in pattern recognition and classification problems for real pictures, built on big data repositories, fast graphical processing units and the power of parallel and distributed computing [18, 19]. It is very challenging to train a CNN with a restricted number of samples. Recently these contemporary architectures have been deployed for prediction of pandemics [20, 21], forecasting [22] and agricultural automation [23, 24] (Fig. 2).

3 CNN Architecture

This architecture is broken down into four key areas:

a. The input layer holds the pixel values of the image.
b. Convolutional layers (CL) are the significant structures utilized in convolutional neural networks. The application of a filter to an input, resulting in an activation, is basically what happens in this layer.
c. A pooling layer follows the CL. Basically, a nonlinearity (e.g., ReLU) is applied to the output of the CL in this layer.
d. The fully connected layers work as in conventional artificial neural networks: through activations they produce class scores, which are further


utilized for classification. Most often, it is advisable to use ReLU between the layers to improve efficiency and performance.

A CNN basically comprises an input layer, an output layer and multiple hidden layers. Each value in the feature map is passed through the ReLU layer, followed by a pooling layer for reduction of the dimensionality of the feature map. Finally, the fully connected layer maps the extracted features into the final output. Backpropagation is used to minimize the difference between the outputs and the labels, and the activation function used is the ReLU function.

3.1 Convolutional Layer

The convolutional layer (CL) is the fundamental block of a CNN, and feature extraction is the main task it performs. The architecture in Fig. 1 is designed to support 2D images. The input of a CL is an i × j × c image, where 'i' is the height, 'j' the width and 'c' the number of channels. The CL has a filter of size y × t × c, where the filter is smaller than the image. If a calculated value is negative, by default it is converted into zero [25].
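The sliding element-wise multiplication described above, including the conversion of negative values to zero, can be illustrated with a minimal single-channel Python sketch (illustrative only; real convolutional layers also handle multiple channels, strides and padding):

import numpy as np

def conv2d_relu(image, kernel):
    # image: i x j array; kernel: y x t array, smaller than the image
    y, t = kernel.shape
    i, j = image.shape
    out = np.zeros((i - y + 1, j - t + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # element-wise multiply the window with the filter and sum
            out[r, c] = np.sum(image[r:r + y, c:c + t] * kernel)
    return np.maximum(out, 0)  # negative values are converted to zero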

3.2 Pooling Layer

For efficient underlying computation, a CNN may use global or local pooling layers. By combining the outputs of a cluster of neurons in one layer into a single neuron in the next layer, the pooling layer reduces the dimensionality of the data. Combining small clusters, typically 2 × 2, is local pooling, whereas global pooling is applied over every neuron of the convolutional layer. Furthermore, pooling computes either a max or an average: max pooling utilizes the maximum value from each neuron cluster in the previous layer, while average pooling uses the average value from each neuron cluster in the previous layer.
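Local 2 × 2 max and average pooling as described above can be sketched as follows (illustrative; assumes a single feature map with even dimensions):

import numpy as np

def pool_2x2(feature_map, mode='max'):
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)  # group into 2 x 2 clusters
    # take the maximum or the average of each neuron cluster
    return blocks.max(axis=(1, 3)) if mode == 'max' else blocks.mean(axis=(1, 3))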

3.3 Fully Connected Layer

This is the final layer of the CNN, where the high-level reasoning in the network is done. This layer is obtained after a number of convolutional and pooling layers. In a fully connected layer, the neurons are connected to all the activations in the layer prior to them, similar to a regular ANN. Hence, the activations of the neurons can be computed with a matrix multiplication followed by a bias offset.
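The fully connected computation of activations, a matrix multiplication followed by a bias offset, reduces to a couple of lines (sizes are illustrative):

import numpy as np

W = np.random.randn(10, 128)  # 10 output neurons, 128 incoming activations
b = np.zeros(10)              # bias offset
x = np.random.randn(128)      # activations from the previous layer
scores = W @ x + b            # class scores produced by the layer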


4 Materials and Methods

CNN is a class of deep learning used for processing data that has a grid pattern, like images. The dataset is downloaded from Kaggle and consists of 25,000 images of dogs and cats, organized in two different folders for training and testing separately. The dataset is processed a bit, and then labels are provided to each of the images during training, naming each image either 'cat' or 'dog.'

The CNN architecture consists of an input layer, an output layer and several hidden layers. Conv2D is the first layer, followed by a max pooling layer. Further, the model consists of four sets of Conv2D layers followed by max pooling layers with different kernel sizes. After that there is one activation layer and a dropout layer, followed by the fully connected layers. The activation layer applies ReLU to the output; the purpose of the activation function is to introduce nonlinearity into the output of a neuron. ReLU and sigmoid are used in the model. Max pooling is a pooling operation that calculates the maximum value in each sub-matrix of each feature map. In a Conv2D layer, a kernel or mask is used for blurring, sharpening, edge detection, etc., by computing a convolution between the kernel and the image. At last, after several convolution and max pooling layers, the high-level reasoning in the neural network is done via the fully connected layers. Dropout is used to reduce overfitting by reducing the effective number of variables in the model. The final layer is a dense layer with sigmoid activation. The training dataset contains 17,500 images, whereas the test dataset contains 7500 images; after that, basic hyperparameters such as the number of epochs and the learning rate are used to improve the accuracy.
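A minimal Keras sketch of a model along these lines is given below (the exact layer counts, kernel sizes, input size and dropout rate are illustrative assumptions, as the paper does not list them):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),  # one of several conv/pool sets
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),                           # reduces overfitting
    layers.Dense(1, activation='sigmoid')          # cat vs. dog probability
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])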

5 Experimental Results

All the experiments are done on Windows 10 in Python 3.7, and a model is created based on the TensorFlow and Keras libraries [26]. All the images are first converted to grayscale, because it is easier to process a 2D image than a 3D one. We used different combinations of activation function and classifier to check which combination provides better accuracy. With the use of the original images and the tenfold cross-validation technique, the training accuracy and validation accuracy of the CNN are 79.86% and 74.03%, respectively (Table 1). The model also yields 44.01% training loss and 51.02% validation loss. Finally, it needs 800.6 s to train the model (Fig. 3).

Table 1 Training accuracy and validation accuracy of CNN

Metric | Original image (%)
Training accuracy | 79.86
Validation accuracy | 74.03
Training loss | 44.01
Validation loss | 51.02

Fig. 3 Training accuracy and validation accuracy of CNN

6 Conclusion

In this paper, we proposed a state-of-the-art CNN-based method for the classification of images. Our work explores applications of CNN for image classification, and we infer that it gives better performance. Our CNN classifier model contains only one convolutional layer and a fully connected layer because of the smaller number of training examples. The network defined by our model classifies the images into one of the two categories, cat or dog. This same model can be applied to other datasets as well.

References

1. S. Tripathy, T. Swarnkar, A comparative analysis on filtering techniques used in preprocessing of mammogram image, in Advanced Computing and Intelligent Engineering (Springer, Singapore, 2020), pp. 455–464
2. S. Tripathy, T. Swarnkar, Performance observation of mammograms using an improved dynamic window based adaptive median filter. J. Discrete Math. Sci. Cryptogr. 23(1), 167–175 (2020)
3. S. Tripathy, T. Swarnkar, Unified preprocessing and enhancement technique for mammogram images. Procedia Comput. Sci. 167, 285–292 (2020)
4. S. Tripathy, T. Swarnkar, Performance evaluation of several machine learning techniques used in the diagnosis of mammograms. Int. J. Innov. Technol. Exploring Eng. 8, 2278–3075 (2019)
5. S. Tripathy, S. Hota, P. Satapathy, MTACO-Miner: modified threshold ant colony optimization miner for classification rule mining, in Emerging Research in Computing, Information, Communication and Application (2013), pp. 1–6


6. S. Tripathy, S. Hota, A survey on partitioning and parallel partitioning clustering algorithms, in International Conference on Computing and Control Engineering, vol. 40 (2012)
7. S. Tripathy, T. Swarnkar, Application of big data problem-solving framework in healthcare sector—recent advancement, in Intelligent and Cloud Computing (Springer, Singapore, 2021), pp. 819–826
8. S. Tripathy, T. Swarnkar, Investigation of the FFANN model for mammogram classification using an improved gray level co-occurrences matrix. Int. J. Adv. Sci. Technol. 29(4), 4214–4226 (2020)
9. I. Sutskever, G.E. Hinton, Deep, narrow sigmoid belief networks are universal approximators. Neural Comput. 20(11), 2629–2636 (2008)
10. G.E. Hinton, R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
11. D.C. Ciresan, U. Meier, J. Masci, L.M. Gambardella, J. Schmidhuber, Flexible, high performance convolutional neural networks for image classification, in Twenty-Second International Joint Conference on Artificial Intelligence, June 2011
12. P.Y. Simard, D. Steinkraus, J.C. Platt, Best practices for convolutional neural networks applied to visual document analysis, in ICDAR, vol. 3, No. 2003, Aug 2003
13. A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional neural networks. Adv. Neural. Inf. Process. Syst. 25, 1097–1105 (2012)
14. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 580–587
15. C. Farabet, C. Couprie, L. Najman, Y. LeCun, Learning hierarchical features for scene labeling. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1915–1929 (2012)
16. P. Sermanet, S. Chintala, Y. LeCun, Convolutional neural networks applied to house numbers digit classification, in Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012). IEEE, Nov 2012, pp. 3288–3291
17. Y. Taigman, M. Yang, M.A. Ranzato, L. Wolf, Deepface: closing the gap to human-level performance in face verification, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 1701–1708
18. J.G. Lee, S. Jun, Y.W. Cho, H. Lee, G.B. Kim, J.B. Seo, N. Kim, Deep learning in medical imaging: general overview. Korean J. Radiol. 18(4), 570 (2017)
19. H. Greenspan, B. Van Ginneken, R.M. Summers, Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique. IEEE Trans. Med. Imaging 35(5), 1153–1159 (2016)
20. V. Singh, R.C. Poonia, S. Kumar, P. Dass, P. Agarwal, V. Bhatnagar, L. Raja, Prediction of COVID-19 corona virus pandemic based on time series data using Support Vector Machine. J. Discrete Math. Sci. Cryptogr. 23(8), 1583–1597 (2020). https://doi.org/10.1080/09720529.2020.1784535
21. R. Kumari, S. Kumar, R.C. Poonia, V. Singh, L. Raja, V. Bhatnagar, P. Agarwal, Analysis and predictions of spread, recovery, and death caused by COVID-19 in India. Big Data Mining Anal. 4(2), 65–75. https://doi.org/10.26599/BDMA.2020.9020013
22. V. Bhatnagar, R.C. Poonia, P. Nagar, S. Kumar, V. Singh, L. Raja, P. Dass, Descriptive analysis of COVID-19 patients in the context of India. J. Interdiscipl. Math. 24(3), 489–504 (2020). https://doi.org/10.1080/09720502.2020.1761635
23. S. Kumar, B. Sharma, V.K. Sharma, R.C. Poonia, Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evolut. Intell. 1–12 (2018). https://doi.org/10.1007/s12065-018-0186-9
24. S. Kumar, B. Sharma, V.K. Sharma, H. Sharma, J.C. Bansal, Plant leaf disease identification using exponential spider monkey optimization. Sustain. Comput. Inf. Syst. 28 (2018). https://doi.org/10.1016/j.suscom.2018.10.004


25. D. Ciregan, U. Meier, J. Schmidhuber, Multi-column deep neural networks for image classification, in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2012, pp. 3642–3649
26. P. Borwarnginn, K. Thongkanchorn, S. Kanchanapreechakorn, W. Kusakunniran, Breakthrough conventional based approach for dog breed classification using CNN with transfer learning, in 2019 11th International Conference on Information Technology and Electrical Engineering (ICITEE). IEEE, Oct 2019, pp. 1–5

A Comparison of Backtracking Algorithm in Time-Shared and Space-Shared VM Allocation Approaches Using CloudSim

T. Lavanya Suja and B. Booba

Abstract Cloud is ubiquitous, and its computing is inevitable. In the post-COVID-19 scenario, the cloud market has grown multifold. During and after the pandemic, the internet became the primary medium of communication and governance throughout the world, attracting both non-technical and technical internet users as cloud users and cloud researchers. As a part of cloud computing, the Virtual Machine (VM) placed inside a Physical Machine (PM) has taken a new dimension, and migrations of VMs from one PM to another are the need of the hour. Numerous VM migration algorithms are already in the cloud market, and many new approaches are being poured into the cloud. Our proposed backtracking approach is a proven algorithm for popular problems like N-Queens, Knapsack, etc. We have implemented the backtracking algorithm in time-shared and space-shared approaches in CloudSim, a cloud simulator on the Java-Eclipse platform. The experimental results show that our backtracking in the time-shared approach suits as a better candidate for a VM migration algorithm.

Keywords Cloud computing · VM migration algorithms · Backtracking · Time shared · Space shared


1 Introduction

Cloud computing is a metered service-providing technology over the internet. "It has 3 popular services namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS)" [1]. Firstly, IaaS serves users with a virtual machine of the requested configuration in terms of processor, processing speed in Million Instructions per Second (MIPS), memory, etc. With this VM, the user can install platforms like Java or Python and develop software for their needs. Secondly, PaaS serves the user with the VM along with a development-language platform demanded by the user; with this development environment, the user can start developing applications for their needs. Thirdly, SaaS serves users a VM with fully developed, ready-made applications requested by the user. With this software, the user gets the benefits of the application by using it while having no worries about maintenance and storage constraints [2].

In IaaS, the cloud user has a bare machine of their request and, with no maintenance, enjoys the benefit of using the VM of their choice, installing application programming interfaces and developing applications/software on it. In PaaS, the job of API installation is taken away from the cloud user; thereby, he/she can straight away dive into development of software for their needs. In SaaS, the development work is taken away from the cloud user; rather, he/she can choose from a variety of SaaS applications available in the cloud and straight away use them. Thus, the overheads of development and maintenance are removed from the Software Development Life Cycle (SDLC). In all the services, the cloud user can scale resources up or down and pay for his/her usage alone; hence, cloud is described as a metered service above.

Backtracking is an age-old technique which gave solutions to combinatorial problems with implicit and explicit constraints. It is a recursive algorithm which tries all the combinations for a solution, and when a particular iteration does not satisfy the constraints, it backtracks to the previous iteration and proceeds with a different branch. In this way, the algorithm produces a solution tree of short length and saves time and energy too. Historically, it has given solutions to the N-Queens problem, Sudoku, crosswords, sum of subsets and Knapsack problems.

Over the years, various approaches have been proposed for VM migration; these categories are mathematical, genetic and nature-inspired. Mathematical algorithms are suitable because they provide solutions in an iterative way. Genetic algorithms are metaheuristic algorithms based on Charles Darwin's theory of evolution, the survival of the fittest. Nature-inspired algorithms serve as solutions for specific problems. After surveying over 50 papers on different approaches, we found that in cloud computing, SLAs are the key role players in any migration algorithm, so the SLA is taken as an explicit constraint for the backtracking approach, which is combined with the hill climbing algorithm for better performance, such as less energy consumption during the VM migration process. Thus, we call this an enhanced backtracking algorithm for VM migration.


The paper is organized as follows. The literature review highlights previous research related to VM migration algorithms based on time-shared and space-shared approaches. In the next section, the proposed backtracking algorithm and its pseudocode are explained. The conducted experiments and their results are then tabulated and graphs drawn. Conclusions inferred from the above research and the future scope are mentioned in the last section.

2 Literature Review

In [3] the authors propose a workload-aware migration using a time-series cloud model, which is a time-sharing VM migration algorithm. Their experiments proved that it gives better workload balance and avoids VM migrations during momentary peak loads. Similarly, in [4] a power-aware decentralized approach is taken for VM migration, which maintains two threshold values and a load vector to decide where and when to migrate. This is a time-sharing approach which claims that load balance and lower power consumption are achieved through it.

In another proposed algorithm for VM migration [5], the authors used a clustering method and a local regression-based technique. It resulted in less power consumption, a smaller number of SLA violations and less performance degradation. The authors themselves have done a quality survey of existing approaches, hand-picked efficiently proven techniques and combined them all in one algorithm. In our analysis, it comes under the space-shared approach of VM migration algorithms because it consolidates and clusters VMs.

A new load-balancing VM migration algorithm, named the weighted active load balancing algorithm, is proposed with the main aim of balancing the load with better performance [6]. Here the tasks are time-shared on the VM with more power by assigning weights to the VMs, thereby achieving better performance and less energy consumption, as claimed by the authors.

Distributed shared memory (DSM)-based live migrations are extremely popular in the IaaS cloud market. A DSM-based live migration approach is tested in [7], where the authors implement live VM migration with DSM and with optimized DSM separately. Both approaches achieve less downtime and reduce the migration time considerably. At the same time, the authors state that it is suitable for moderate-sized VMs only, which is a limitation. This is a space-shared approach, as the VMs are migrated from one PM to another based on space availability.

Another interesting space-shared approach [8] highlights the time crunch on migration time: as the VMs belong to different time zones, the absolute time left for migration is some thousands of seconds, which falls within their low-utilization time frame. To finish the migration under these time constraints, they recommend a no-sharing approach to memory, in contrast to DSM, and employ the migrateFS file system. Their experiments show that this approach achieves VM migration in less time than previous approaches.

Table 1 Table of 2 approaches of VM algorithms

Time-shared approach | Space-shared approach
Time series workload prediction [3] | VM clustering and local regression-based consolidation [5]
Decentralized virtual machine migration [4] | DSM-based live migration [7]
Weighted active load balancing algorithm [6] | Live migration under time constraints [8]

The CloudSim simulator tool is used in the work of a survey of CPU allocation and scheduling algorithms [9]. The authors clearly explain the time-sharing and space-sharing approaches, which are major types of CPU scheduling. These 2 approaches are again followed in newly developed VM allocation and migration algorithms because ultimately the CPU is the resource that is utilized as a VM. The variant in cloud VM scheduling is that there can be more than one core or processing element inside a VM. Table 1 lists the algorithms under the 2 approaches.

In our previous works [10–14] we have extensively surveyed VM allocation and migration algorithms, studied and understood different approaches and categorically given a taxonomy of them. We have also done a detailed analysis and a feasibility study, which revealed that mathematical, genetic and nature-inspired approaches are borrowed for VM scheduling and migration algorithms.

3 Proposed Backtracking Algorithm

VM migration is understood as an optimization problem, which shows that it is suitable for a backtracking approach. The following pseudocode explains the working of the algorithm. From an ordered pool of VMs, for each VM check whether the size of the VM is greater than the size of the PM or the flag of the VM is "Done." If so, backtrack; otherwise assign this VM to the PM, update the size of the PM by subtracting the size of the allocated VM from the overall PM size, and set the flag of the VM to "Done."

3.1 Pseudocode

For each PM in PM List do


    While sizeof(PM) > minimumsizeof(VM List) do
        For each VM in VM List do
            If sizeof(VM) > sizeof(PM) OR flag(VM) = "Done" then
                Continue    // Backtrack
            Else
                sizeof(PM) = sizeof(PM) - sizeof(VM)
                Print("VM is migrating to PM")
                flag(VM) = "Done"
            EndIf
        End For
    End While
End For
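A runnable Python translation of this pseudocode is sketched below (our illustrative rendering, not the authors' CloudSim code; the list layouts and names are assumptions):

def allocate(pms, vms):
    # pms: [name, free capacity]; vms: [name, size, done flag]
    for pm in pms:
        while pm[1] > min(vm[1] for vm in vms):
            placed = False
            for vm in vms:
                if vm[1] > pm[1] or vm[2]:  # too large or already placed
                    continue                # backtrack to the next candidate
                pm[1] -= vm[1]              # shrink the PM's free capacity
                vm[2] = True                # flag the VM as "Done"
                print(f"{vm[0]} is migrating to {pm[0]}")
                placed = True
            if not placed:                  # nothing fit in this pass
                break

pms = [["PM0", 10], ["PM1", 6]]
vms = [["VM0", 4, False], ["VM1", 7, False], ["VM2", 3, False]]
allocate(pms, vms)  # VM0 and VM2 are placed on PM0; VM1 does not fit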

4 Experiments and Results

The experimental setup in CloudSim 3.0.3 has the following terminology. The host list comprises one or more datacenters (DC), which can be viewed as collections of Physical Machines (PM). The configuration of the DC is: operating system Linux, architecture x86 and virtual machine monitor Xen. Each PM inside it has 1000 Million Instructions per Second (MIPS), 2048 MB of RAM and 1 GB of storage. Each VM on it has 250 MIPS and 2048 MB of RAM with a bandwidth of 1000.

The configuration is the same for both the time-shared and space-shared approaches in order to measure the performance characteristics. First the backtracking algorithm with the time-shared approach is executed, and the output is given in Table 2. Next the backtracking algorithm with the space-shared approach is executed, and the output is given in Table 3. Based on the results attained in CloudSim using the 2 approaches, separate graphs are drawn. Figure 1 shows the time taken by the cloudlets to finish.

Table 2 VM allocation in time-shared backtracking approach

Cloudlet-ID | STATUS | DC-ID | VM-ID | Time | Start time | Finish time
1 | SUCCESS | 2 | 1 | 160 | 0.1 | 160.1
3 | SUCCESS | 2 | 1 | 160 | 0.1 | 160.1
0 | SUCCESS | 2 | 0 | 479.99 | 0.1 | 480.09
2 | SUCCESS | 2 | 0 | 479.99 | 0.1 | 480.09
4 | SUCCESS | 2 | 0 | 479.99 | 0.1 | 480.09


Table 3 VM allocation in space-shared backtracking approach

Cloudlet-ID | STATUS | DC-ID | VM-ID | Time | Start time | Finish time
2 | SUCCESS | 3 | 1 | 80 | 0.2 | 80.2
0 | SUCCESS | 2 | 0 | 640 | 0.2 | 640.2
1 | SUCCESS | 2 | 0 | 640 | 0.2 | 640.2
1 | SUCCESS | 2 | 0 | 640 | 0.2 | 640.2
3 | SUCCESS | 2 | 0 | 640 | 0.2 | 640.2

Fig. 1 Backtracking using time-shared approach with 4 cloudlets

There are 4 jobs/cloudlets submitted to one DC; as they are time-shared on 2 VMs, they take turns finishing their jobs. VM0 is capable enough to take 3 jobs/cloudlets because of its configuration, so cloudlets 0, 2 and 4 are assigned to VM0. In Fig. 2 it is clearly indicated that, as there are 2 DCs, in DC3 only one cloudlet gets placed, that is, cloudlet 2, which finishes in 80.2 s, whereas the space-shared cloudlets 0, 1, 2 and 3 finish at 640.2 s. It is evident from the graph that the time-shared approach uses the VMs effectively and consumes less energy to complete the jobs, taking less time than the space-shared approach; hence, the time-shared backtracking approach is recommended for VM allocation and migration algorithms.


Fig. 2 Backtracking using space-shared approach with 4 cloudlets

5 Conclusion and Future Work

After conducting the experiments on the backtracking algorithm in 2 different modes, the time-shared and space-shared approaches, we clearly observe that time-shared is the better one: it avoids a number of migrations while consuming less power. We recommend the time-shared version of the backtracking VM migration algorithm; in future work we will include SLA constraints and look into security aspects of VMs too.

References

1. https://en.wikipedia.org/wiki/Cloud_computing. Date of access 12/12/2020
2. T.L. Suja, V. Savithri, Backtracking algorithm for virtual cluster migration in cloud computing. Indian J. Sci. Technol. 8(15), 1–6 (2015)
3. Y. Liu, B. Gong, C. Xing, Y. Jian, A virtual machine migration strategy based on time series workload prediction using cloud model. Math. Probl. Eng. 2014 (2014). https://doi.org/10.1155/2014/973069
4. X. Wang, X. Liu, L. Fan, X. Jia, A decentralized virtual machine migration approach of data centers for cloud computing. Math. Probl. Eng. 2013, 1–10 (2013)
5. M.R. Chowdhury, M.R. Mahmud, R.M. Rahman, Implementation and performance analysis of various VM placement strategies in CloudSim. J. Cloud Comput. 4(1), 20 (2015)
6. J. James, B. Verma, Efficient VM load balancing algorithm for a cloud computing environment. Int. J. Comput. Sci. Eng. 4 (2012)
7. C. Jo, H. Kim, B. Egger, Instant virtual machine live migration, in Economics of Grids, Clouds, Systems, and Services. GECON 2020. Lecture Notes in Computer Science, vol. 12441, ed. by K. Djemame, J. Altmann, J.Á. Bañares, O. Agmon Ben-Yehuda, V. Stankovski, B. Tuffin (Springer, Cham, 2020). https://doi.org/10.1007/978-3-030-63058-4_14


8. K. Tsakalozos, V. Verroios, M. Roussopoulos, A. Delis, Live VM migration under time-constraints in share-nothing IaaS-clouds. IEEE Trans. Parallel Distrib. Syst. 28(8), 2285–2298 (2017)
9. G.T. Hicham, E.A. Chaker, Cloud computing CPU allocation and scheduling algorithms using CloudSim simulator. Int. J. Electr. Comput. Eng. 6(4), 2088–8708 (2016)
10. T.L. Suja, B. Booba, A feasibility study of service level agreement compliance for start-ups in cloud computing, in Data Management, Analytics and Innovation (Springer, Singapore, 2020), pp. 407–417
11. T.L. Suja, B. Booba, An analytical study on importance of SLA for VM migration algorithm and start-ups in cloud, in International Conference on Information Management & Machine Intelligence, Dec 2019 (Springer, Singapore, 2019), pp. 271–276
12. T.L. Suja, B. Booba, Analysis on SLA in virtual machine migration algorithms of cloud computing, in Intelligent Computing and Innovation on Data Science (Springer, Singapore, 2020), pp. 269–276
13. T. Lavanya Suja, B. Booba, A study on virtual machine migration algorithms in cloud computing. Int. J. Emerg. Technol. Innov. Res. (JETIR) 6(3), 337–340 (2019)
14. T. Lavanya Suja, B. Booba, A taxonomy on approaches in virtual machine migration algorithms of cloud computing. Int. J. Anal. Exp. Modal Anal. (IJAEMA) 12(3), 1547–1552 (2020). ISSN-0886-9367

Mango (Mangifera indica L.) Classification Using Convolutional Neural Network and Linear Classifiers

Sapan Naik and Purva Desai

Abstract Identifying the fruit quality of a mango is a vital aspect for farmers and consumers; additionally, fruit classification is an imperative stage of fruit grading. Automation has been a boon in the classification and grading of mango (Mangifera indica L.). In this paper, we picked various categories of mangoes, namely Aafush, Kesar, Jamadar, Rajapuri, Totapuri, Langdo and Dasheri. This set of mangoes was used for the classification process, which includes dataset preparation and feature extraction using pre-trained convolutional neural network (CNN) models. Four linear classifiers, namely support vector machine (SVM), logistic regression (LR), naïve Bayes (NB) and random forest (RF), are used for classification and compared. The paper also addresses techniques and issues of nondestructive mango classification, in particular advancements in deep learning and CNN. We also discuss five CNN models, namely Inception v3, Xception, ResNet, DenseNet and MobileNet. Several experiments were carried out with these models; the highest Rank-1 accuracy achieved was 91.43% and the lowest 22.86%. MobileNet is the fastest model, while DenseNet was found to be the slowest. Among the CNN models, Xception and MobileNet performed well, while among the linear classifiers SVM and LR performed well.

Keywords Convolutional neural network · Feature extractions · Mango classification · Linear classifier · Support vector machine

1 Introduction

Agriculture is an essential key sector, which has played a vital role in the Indian economy [1, 2]. Its production process is divided into three major phases, namely cultivation, harvesting and post-harvesting. There is wide scope for automation in the field of agriculture, such as sensors, robots, and computer and machine vision technology, which benefits the post-harvesting phase, with processes like


cooling, cleaning, sorting, grading and packing. An important concept that draws our attention, called smart farming [1] or precision agriculture [2], allows automation to be carried out to a greater extent across the major agriculture phases. Our scope within the post-harvesting phase is limited to automatic nondestructive fruit classification. Nondestructive fruit classification relies on parameters such as aroma, color, firmness (strength) and composition, size, shape, texture, defects and maturity [3]. Fruit classification is the pre-stage of sorting and grading. Grading and classification are a necessity in the field of agriculture, as manual classification and grading methods are time-consuming, laborious, less efficient, monotonous as well as inconsistent. In contrast, automatic systems provide rapid, hygienic, consistent and objective assessment. In this paper, mango (Mangifera indica L.) classification is performed on seven different varieties of mango fruit, as the fruit benefits us extraordinarily through its high-quality standards and the ample nutrients filled in it. Gujarat is a leading state with the largest area under mango cultivation, stretching from Jamadar, Totapuri, Dasheri, Neelam, Langdo, Kesar, Payri and Rajapuri to Alphonso [4].

In computer vision, deep learning and convolutional neural networks (CNN) have gained popularity for image classification tasks. Image classification is performed by extracting features and training classifiers on them; the same process with deep learning has reduced errors in image recognition [5]. Team Hinton participated in the image classification competition and won; from that period, deep learning was enhanced and its influence was observed [6]. More and more work was performed on the initial CNN model, and as a result we now have many modern CNN architecture models such as Inception, ResNet, Xception and MobileNet, which give outstanding results [7]. In the section below, we briefly review the work that has been done for fruit and vegetable classification.

2 Related Work

The fruit recognition system in [8] uses CNN, where the fruit's region is extracted using a selective search algorithm and image entropy. Two modalities, namely color (RGB) and near-infrared (NIR), are combined using early and late fusion methods and used in a faster R-CNN model to detect seven different fruits [9]. A voting mechanism is used for classification with K-means; it was used with CNN for weed identification in [10], where 92.89% accuracy was achieved, and it was later stated that fine-tuning can give better results. For online prediction of food materials, a fast auto-clean CNN model that adopts adaptive learning is proposed in [11]. Seven classes of mixed-crop images (oil radish, barley, weed, stump, soil, equipment and unknown) are classified using a deep CNN in [12], where a modified version of VGG-16 is used for implementation and 79% accuracy is achieved.

A multi-class kernel (linear, homogeneous polynomial and Gaussian radial basis) support vector machine (kSVM) is combined with color histogram, texture and


shape features for fruit classification in [13]. Winner-Takes-All SVM, Max-Wins-Voting SVM and Directed Acyclic Graph SVM are compared, where Max-Wins-Voting SVM with a Gaussian radial basis kernel performs best with 88.2% accuracy and Directed Acyclic Graph SVM is the fastest. Crop and weed plants are discriminated without segmentation in [14], where a random forest classifier, a Markov random field and interpolation methods are used, giving 93.8% average accuracy. Co-occurrence and statistical features are computed from the sub-bands of the wavelet transform in [15]; 86% classification accuracy is achieved on 15 classes of 2635 fruit images with a minimum distance classifier. Shekhawat et al. [16] introduced a new approach for data transformation, and Chug et al. [17] proposed a novel approach for information processing; both are very useful for feature selection. Kumar et al. deployed spider monkey optimization for soil classification [18] and disease identification in plant leaves [19].

2.1 Contributions

For the classification of mangoes, we have used a dataset of mangoes which were diversely categorized. The experiments were performed through different models on the dataset of 2333 images. This dataset was bifurcated in the following manner: Aafush 137, Dasheri 250, Jamadar 146, Kesar 500, Langdo 300, Rajapuri 500 and Totapuri 500. The different models used with CNN for feature extraction are Inception v3, Xception, DenseNet, ResNet50 and MobileNet. Support vector machine (SVM), logistic regression (LR), naïve Bayes (NB) and random forest (RF) classifiers are utilized for training and classification, and all the outcomes of the experiments have been compared with the available work.

3 Proposed Approach

Dataset preparation was carried out in order to classify mangoes of South Gujarat into seven categories. The dataset was stacked with 2333 images, which were captured in daylight with a OnePlus 6T covering the mangoes' top view (putting the mango beneath the camera) with white paper as background. As CNN needs fixed-size input images, the captured images are resized from 2112 * 4608 pixels to 224 * 224 or 299 * 299 pixels according to the CNN model under use. After completing dataset preparation, the CNN models need to be implemented with this data. A CNN model can be implemented in four different ways; we portray each way of implementation below.

1. The foremost way to implement a CNN is to use a pre-trained CNN model whose weights are derived from an image dataset such as ImageNet. It has numerous advantages: training performs faster, and classification results can be achieved without high-performing infrastructure. The dataset is directly used, and the transfer learning technique is applied, due to which only the final layer of the model is trained again for the new image dataset.
2. The second method is to use the CNN model as a feature extractor. The features extracted from the dataset using the CNN are given as input to a linear classifier, and this way the classifier predicts the images (a sketch of this method, which the present work follows, appears after this list).
3. The third approach is to train a full CNN architecture model from the beginning using a new dataset. Its advantage is that greater accuracy is obtained from the model compared to transfer learning techniques; however, its drawback is the need for high-performance computing resources and more time for training and testing.
4. The last method is to create one's own CNN architecture and use it for training and testing. This has more advantages: the model will be compact, with a limited number of layers as per requirements, processing time will be faster and it will serve higher accuracy.
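Method 2 can be outlined in Python with Keras and scikit-learn as below (an illustrative sketch under assumed preprocessing; X_paths and labels are hypothetical variables holding the image paths and mango categories):

import numpy as np
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.svm import SVC

# a pre-trained CNN without its classification head acts as feature extractor
extractor = MobileNet(weights='imagenet', include_top=False, pooling='avg')

def features(path):
    img = image.load_img(path, target_size=(224, 224))  # MobileNet input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]                      # one feature vector

# X_paths: mango image paths; labels: category names (hypothetical inputs)
X = np.array([features(p) for p in X_paths])
clf = SVC(kernel='linear').fit(X, labels)               # linear classifier
print(clf.predict(X[:1]))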

In the following section, we briefly describe and discuss the technical details of CNN.

3.1 Overview of Convolutional Neural Network

A CNN comprises four layers, namely convolution, nonlinearity, pooling and fully connected [20]. These multiple levels have various training sets and may appear multiple times in the architecture. The inputs and outputs of each level are known as feature maps, and the number of each layer depends on the structure used.

Convolution Layer The first layer of a CNN is the convolutional layer, to which the input is an input image. A filter (neuron/kernel) slides through the full image by moving to the right by 1 pixel or unit and repeating. While the filter is sliding (called convolving), it multiplies the values in the filter with the original values of the image; this multiplication is termed element-wise multiplication. Each of these moves produces one new number, and once sliding is over, we get a reduced two-dimensional array of numbers. This two-dimensional array, produced after convolving, is called the activation map or feature map [21]. The output of the convolutional layer is the activation map, which illustrates the parts of the image where most of the features are available. A larger number of filters gives greater depth to the activation map, meaning more information about the input volume can be captured [22]. More details about filters and their visualization can be found in [23].


Nonlinearity Layer
Nonlinearity layers apply an activation function such as the rectified linear unit (ReLU), tanh or sigmoid. The most preferred nonlinearity is ReLU, because it makes the training process faster.

Pooling Layer
After the convolutional layer, a pooling layer is introduced to decrease the spatial size, i.e. the width and height, excluding the depth. The main benefit is that, as the number of parameters is reduced, computation is also reduced and, simultaneously, overfitting is prevented. Pooling layers have different forms, one of which is max pooling; with a filter size of 2 * 2 and a stride of 2, it normally reduces the input by half in each spatial dimension. The output of the pooling layer goes into a flattening step, whose output is the input to an artificial neural network; through the flattening step, the feature map is converted into a single vector.

Fully Connected Layer
The fully connected layer is a fully connected neural network: each neuron in the layer receives input from all neurons of the previous layer. The output is computed through a matrix multiplication followed by a bias offset.

This is the basic working of a CNN. Tuning the CNN is an important aspect of the process; in the tuning process, certain parameters are determined, such as the choice of architecture, the number of layers, the weight parameters and the implementation platform.
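Putting the four layer types described above together, a CNN of this kind can be written in Keras as the following illustrative sketch (layer counts and sizes are arbitrary placeholders, not the architectures evaluated later):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",       # convolution + ReLU nonlinearity
                  input_shape=(224, 224, 3)),
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),  # 2x2 max pooling halves H and W
    layers.Flatten(),                                  # feature maps -> single vector
    layers.Dense(128, activation="relu"),              # fully connected layer
    layers.Dense(7, activation="softmax"),             # class scores for 7 categories
])
```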

3.2 Tuning the CNN Model

A CNN model needs to be tuned. The following are the three phases of tuning a CNN model:

1. Training the model
2. Validating the model
3. Testing the model

In the training phase, the network is prepared for the classification process. After this preparation, the validation phase calibrates the network to correct its classification process. Once the model is validated, it is sent to the testing phase, where the final outcome is tested so as to achieve the best possible values of all parameters.


3.3 CNN Architecture Models

Architecture models are evaluated based on their weights, number of layers and parameters, and their ImageNet errors. There is a plethora of architecture models available for implementing CNN; some of them are VGGNet, Inception, Xception, ResNet50, DenseNet and MobileNet. The technical details of all these models are available in [24].

4 Results and Discussion

Experiments are performed on a MacBook Pro (13-inch, mid-2012) with a 2.5 GHz Intel Core i5 processor, 10 GB 1333 MHz DDR3 memory and an Intel HD Graphics 4000 (1536 MB) graphics card, running macOS High Sierra (version 10.13.6); the Keras and TensorFlow libraries are used for the implementation. The implementation proceeds through the following steps:

Step 1. The training dataset is prepared with mango images and their respective labels. A total of 2333 images are collected from 7 categories, split as Aafush 137, Dasheri 250, Jamadar 146, Kesar 500, Langdo 300, Rajapuri 500 and Totapuri 500. The number of images per category is not the same, which can lead to model overfitting. To resolve this issue, data augmentation is used (via the ImageDataGenerator method of TensorFlow), with rotation, width shift, height shift, shear range, horizontal flip and zoom range for generating new images (an illustrative sketch is given after the step list). Augmentation is applied only to the Aafush, Dasheri, Jamadar and Langdo categories, raising each of them to 500 images; after augmentation, we have 500 images in each of the 7 categories, so the dataset grows to 3500 images.

Step 2. Images are resized to 299 × 299 for the Inception v3 model. The two reasons for resizing are (1) CNN needs fixed-size input images and (2) it reduces computational time. For initial parameter tuning, the Inception v3 model is chosen. Out of the 3500 images, 80% (2800) are taken for training and 20% (700) for validation. We obtained 92% validation accuracy in this initial experiment; misclassification occurred in 5 categories and is summarized in Table 1.

Step 3. We use CNN as a feature extractor. Following the observations of the initial experiment, we take the 3500 training images for all CNN models; for testing, a total of 70 images are selected (10 per category). Images are resized to 224 × 224 or 299 × 299 depending on the CNN model used, and the parameters are set in a configuration file. The Inception v3, Xception, MobileNet, ResNet and DenseNet architecture models of CNN are selected. For the experiment, the epoch value is set to 1000, the learning rate to 0.01, the training batch size to 100 and the validation percentage to 10.

Step 4. Features are extracted from the final fully connected layers of the pre-trained CNN model and stored locally in HDF5 format [5].

Step 5. The linear classifiers, i.e., SVM, LR, NB and RF, are trained on the extracted features and labels of Step 3.

Step 6. Keeping the learned weights of the linear classifiers intact, the classifiers are validated.

Step 7. Finally, the classifiers are tested.
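As an illustration of the augmentation used in Step 1, the following sketch applies the listed transforms with the ImageDataGenerator method of TensorFlow; the parameter values and the directory path are our own placeholders, not the exact settings used in the experiments:

```python
# Illustrative augmentation sketch; values and path are placeholders.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,        # rotation
    width_shift_range=0.1,    # width shift
    height_shift_range=0.1,   # height shift
    shear_range=0.2,          # shear range
    zoom_range=0.2,           # zoom range
    horizontal_flip=True,     # horizontal flip
)

# Stream augmented 299 x 299 images from a (hypothetical) class-per-folder tree.
flow = augmenter.flow_from_directory("mango_dataset/",
                                     target_size=(299, 299), batch_size=32)
```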

Table 1  Misclassification by Inception v3 model

  Jamadar   Dasheri   Kesar   Rajapuri   Totapuri   Total
  11        10        7       5          23         56

As we use CNN models for extracting features, the training time varies with the model in use. Table 2 summarizes the training time (feature extraction time) of each model for the 3500 images, together with the time required by each linear classifier for training. Table 3 summarizes the overall results of all architecture models.

Table 2  Time required for feature extraction and classification

  CNN model      Feature extraction time         Classifier training time (min)
                 (CNN training) (min)            SVM     LR      NB      RF
  Inception v3   84.89                           3.51    1.58    0.35    0.23
  Xception       90.68                           0.40    0.46    0.17    0.17
  DenseNet       134.72                          2.31    1.32    0.21    0.19
  ResNet         73.64                           0.30    0.20    0.16    0.16
  MobileNet      59.81                           0.20    0.30    0.19    0.17

Table 3  Rank-1 accuracy (%)

  Model name     SVM      LR       NB       RF
  Inception v3   90       88.57    78.57    80
  Xception       91.43    91.43    80       78.57
  DenseNet       88.57    91.43    71.43    74.28
  ResNet         37.14    22.86    22.86    28.57
  MobileNet      91.43    88.57    78.57    81.43

These figures show that the (Xception, SVM), (Xception, LR), (DenseNet, LR) and (MobileNet, SVM) pairs achieved the highest accuracy of 91.43%, whereas ResNet yielded low accuracy. From these experiments, we conclude that DenseNet requires the maximum and MobileNet the minimum training time; for feature extraction and training time, the Xception and MobileNet CNN models performed well. Among the classifiers, SVM and LR performed better. The major misclassification occurred with the Totapuri mango, while other misclassifications occurred in the Dasheri, Kesar and Rajapuri categories.

As the whole dataset was prepared by us, we did not compare our classification results with others' work; the main reason is that we did not come across the same mango categories in the literature. A hybrid method based on a feedforward neural network and the fitness-scaled chaotic artificial bee colony algorithm is proposed in [25], where 1653 images of 18 fruit categories are considered for classification; colour histogram, Unser's texture and eight morphology-based shape measures are used for feature extraction, and principal component analysis is used to reduce the number of features. We compare our results with their work in Table 4.

Table 4  Comparison of the proposed method with different algorithms

  Algorithm            Classification accuracy (%)
  GA–FNN               84.8
  PSO–FNN              87.9
  ABC–FNN              85.4
  kSVM                 88.2
  FSCABC–FNN           89.1
  Deep learning—CNN    91.43
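For illustration, the following sketch shows the core of the feature-extraction pipeline (Steps 4 and 5): features from a pre-trained CNN are stored in HDF5 and used to train a linear classifier. The global-average-pooled output stands in here for the fully connected features used in the paper, and the arrays, labels and file name are placeholders:

```python
# Illustrative feature-extraction pipeline; data and file name are placeholders.
import h5py
import numpy as np
import tensorflow as tf
from sklearn.svm import LinearSVC

extractor = tf.keras.applications.Xception(weights="imagenet",
                                           include_top=False, pooling="avg")

images = np.random.rand(70, 299, 299, 3)   # placeholder preprocessed images
labels = np.random.randint(0, 7, size=70)  # placeholder category labels

features = extractor.predict(images)       # one feature vector per image

with h5py.File("features.h5", "w") as f:   # store features locally as HDF5
    f.create_dataset("features", data=features)
    f.create_dataset("labels", data=labels)

classifier = LinearSVC().fit(features, labels)  # linear classifier (SVM)
```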

5 Conclusion and Future Directions

From this experiment, we conclude that a CNN can be used as a feature extractor, with a linear classifier used for the final classification. This approach allows us to combine CNN features with handcrafted features and to apply different classifiers depending on the application and the dataset at hand. In future work, local features can be incorporated to increase the classification rate, and the proposed method can be generalized to other fruits of south Gujarat, India. Fine-tuning of parameters and combining machine learning methods with CNN can further improve accuracy, and work can be done to decrease the time required to classify a mango. Building on the classification, grading and detection of skin diseases can also be addressed.

Acknowledgements The authors acknowledge the help of Mr. Yash Rana in the implementation.

References

1. Smart farming means efficient agriculture [AGRI PRESS BENELUX]. Available: http://www.agripressworld.com/start/artikel/458796/en. Accessed 28 Jan 2019
2. A. McBratney, B. Whelan, T. Ancev, J. Bouma, Future directions of precision agriculture. Precision Agric. 6(1), 7–23 (2005)
3. D.C. Slaughter, Nondestructive Maturity Assessment Methods for Mango (University of California, Davis, 2009), pp. 1–18
4. S. Naik, B. Patel, Thermal imaging with fuzzy classifier for maturity and size based nondestructive mango (Mangifera indica L.) grading, in 2017 International Conference on Emerging Trends & Innovation in ICT (ICEI). IEEE, Feb 2017, pp. 15–20
5. A. Sachan, TensorFlow Tutorial 2: image classifier using convolutional neural network. Available: cv-tricks.com/tensorflow-tutorial/training-convolutional-neuralnetwork-for-imageclassification/. Accessed 17 June 2018
6. A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks. Adv. Neural. Inf. Process. Syst. 25, 1097–1105 (2012)
7. G. Ilango, Using Keras pre-trained deep learning models for your own dataset. Available: https://gogul09.github.io/software/flower-recognition-deep-learning. Accessed 15 July 2018
8. L. Hou, Q. Wu, Q. Sun, H. Yang, P. Li, Fruit recognition based on convolution neural network, in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, Aug 2016, pp. 18–22
9. I. Sa, Z. Ge, F. Dayoub, B. Upcroft, T. Perez, C. McCool, DeepFruits: a fruit detection system using deep neural networks. Sensors 16(8), 1222 (2016)
10. J. Tang, D. Wang, Z. Zhang, L. He, J. Xin, Y. Xu, Weed identification based on K-means feature learning combined with convolutional neural network. Comput. Electron. Agric. 135, 63–70 (2017)
11. H. Chen, J. Xu, G. Xiao, Q. Wu, S. Zhang, Fast auto-clean CNN model for online prediction of food materials. J. Parallel Distrib. Comput. 117, 218–227 (2018)
12. A.K. Mortensen, M. Dyrmann, H. Karstoft, R.N. Jørgensen, R. Gislum, Semantic segmentation of mixed crops using deep convolutional neural network, in CIGR-AgEng Conference, 26–29 June 2016, Aarhus, Denmark. Abstracts and Full Papers. Organising Committee, CIGR 2016, pp. 1–6
13. Y. Zhang, L. Wu, Classification of fruits using computer vision and a multiclass support vector machine. Sensors 12(9), 12489–12505 (2012)
14. S. Haug, A. Michaels, P. Biber, J. Ostermann, Plant classification system for crop/weed discrimination without segmentation, in IEEE Winter Conference on Applications of Computer Vision. IEEE, Mar 2014, pp. 1142–1149
15. S. Arivazhagan, R.N. Shebiah, S.S. Nidhyanandhan, L. Ganesan, Fruit recognition using color and texture features. J. Emerg. Trends Comput. Inf. Sci. 1(2), 90–94 (2010)
16. S.S. Shekhawat, H. Sharma, S. Kumar, A. Nayyar, B. Qureshi, bSSA: Binary Salp Swarm Algorithm with hybrid data transformation for feature selection. IEEE Access 9, 14867–14882 (2021). https://doi.org/10.1109/ACCESS.2021.3049547
17. A. Chugh, V.K. Sharma, S. Kumar, A. Nayyar, B. Qureshi, M.K. Bhatia, C. Jain, Spider Monkey Crow Optimization Algorithm with deep learning for sentiment classification and information retrieval. IEEE Access 9, 24249–24262 (2021). https://doi.org/10.1109/ACCESS.2021.3055507
18. S. Kumar, B. Sharma, V.K. Sharma, R.C. Poonia, Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evolut. Intell. 1–12 (2018). https://doi.org/10.1007/s12065-018-0186-9
19. S. Kumar, B. Sharma, V.K. Sharma, H. Sharma, J.C. Bansal, Plant leaf disease identification using exponential spider monkey optimization. Sustainable Comput. Inf. Syst. 28 (2018). https://doi.org/10.1016/j.suscom.2018.10.004
20. A. Bhandare, M. Bhide, P. Gokhale, R. Chandavarkar, Applications of convolutional neural networks. Int. J. Comput. Sci. Inf. Technol. 7(5), 2206–2215 (2016)
21. M.A. Nielsen, Neural Networks and Deep Learning, vol. 25 (Determination Press, San Francisco, 2015)
22. A Beginner's Guide To Understanding Convolutional Neural Networks—Adit Deshpande—CS Undergrad at UCLA ('19). Available: https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/. Accessed 28 Jan 2019
23. M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in European Conference on Computer Vision, Sept 2014 (Springer, Cham, 2014), pp. 818–833
24. S. Naik, H. Shah, Classification of leaves using convolutional neural network and logistic regression, in ICT Systems and Sustainability (Springer, Singapore, 2021), pp. 63–75
25. Y. Zhang, S. Wang, G. Ji, P. Phillips, Fruit classification using computer vision and feedforward neural network. J. Food Eng. 143, 167–177 (2014)

A Review on Current IoT-Based Pasture Management Systems and Applications of Digital Twins in Farming

Ntebaleng Junia Lemphane, Ben Kotze, and Rangith Baby Kuriakose

Abstract Smart farming concepts have transformed the farming sector. Smart farming allows enhanced management of farms using technologies like the Internet of Things (IoT), Artificial Intelligence (AI) and robotics to increase produce while minimizing human involvement. The evolution of IoT technology allows the use of intelligent sensors to collect and analyze farm data for an improved management approach. IoT-based systems produce a huge amount of farm data, which is easily managed through the idea of big data; big data provides easy collection of data with large storage capacity supplied on the cloud. The objective of implementing smart farming systems is to escalate productivity and sustainability. However, even though the idea of easing farm operations is a success, the use of these newer technological methods brings its own challenges. With a focus on livestock farming, this paper reviews the current IoT pasture management systems implemented for better management of farm operations. The paper also presents the applications of digital twins in farming to address the problems created by smart farming.

Keywords AI · Big data · Digital twin · IoT · Machine learning · Smart farming

1 Introduction

The introduction of AI and IoT in farming has dramatically changed the farming sector. The use of automated machines, sensors and other integrated technology has made farming safer, more efficient and more profitable. Farmers are able to monitor their entire farms remotely and get real-time updates.

On the farm, one of the most crucial aspects is monitoring the state of the pasture. Pasture management is an essential activity, as it is important to monitor what animals eat in order to get good produce from them; farmers maintain the pastures to get good grass to feed the animals. Several pasture management systems have been implemented to reduce excessive labor and fencing costs. With the data from these systems, farmers can avoid factors that can lead to pasture damage. However, these systems impose the challenge of not being able to predict problems and prevent downtime. Developing a digital twin of a physical system can prospectively result in a system that is efficient and reliable: pairing the virtual and physical worlds allows analysis of data and monitoring of systems to head off problems before they occur and to prevent downtime by using simulations. This paper discusses the current IoT-based pasture management systems and the applications of digital twins in farming (Table 1).

Table 1  Categorization of research articles presented in the study

  Category            Subcategory                            References
  Livestock farming                                          [1–4]
  Smart farming                                              [5–8]
  Management zones    Remote sensing (satellite and GPS)     [9–11]
                      Proximal sensing (ground motes)        [11]
  Data                Big data                               [12]
  Platform            Internet of Things                     [13–16]
  Decision            Digital twin                           [17–25]
  Actuation           Machine learning                       [26]

2 Literature Review

Utilizing technology in pastures brings a huge variance in productivity; it is therefore critical to improve contemporary management and herding styles and to set up digitalized farming that exploits the continuous development of the latest technology [5]. Real-time, automated acquisition and remote tracking of agricultural data are fundamental requirements of modern farming [6]. The robust evolution of information and communication technologies (ICT) and processing devices has enabled the rise of many advanced technological solutions for the farming sector, wherein concepts such as smart farming and IoT take an integral role [1].

Smart farming includes the gathering, processing and analysis of data, as well as the cybernetics of the universal supply chain. This allows for the collective mobilization of operations and the management of advancements in a farm [2]. These capabilities are reinforced by IoT. IoT is a system of interconnected sensors that collect data, together with communication networks used to send and receive data, which is then analyzed for proper decision making [13]. IoT has adopted connection technologies like GSM, LTE, Bluetooth and Wi-Fi, but is not limited to these. The network chosen for an IoT system must have low energy consumption; such networks include Sigfox, LoRaWAN and IEEE P802.11ah [14].

IoT systems provide farms with vast amounts of real-time data that require complex storage, referred to as "big data processing" [12]. The big data cycle includes the collection of data, the management of the collected data and the effective utilization of the processed data. To process big data, IoT systems provide a cloud platform that allows integration of web applications. Modern technological enhancements in farming ease the adoption and use of smart farming with IoT.

The behavior of farm animals in a pasture, such as their movement course and location, has to be closely monitored to ensure proper pasture management. The acquisition of these factors may be achieved by using the pasture Internet of Things [15]. The pasture Internet of Things system structure is primarily based on a wireless sensor network (WSN), which enables transmission among divergent sampling nodes.

The SheepIT system uses an automated IoT-based system that controls grazing sheep inside vineyards, ensuring that they do not jeopardize the cultures [7]. For local service delivery, a gateway is used, which integrates a radio link to perform absolute localization of the sheep using the received signal strength indication (RSSI) [16]. The Nofence system is a GPS-based system that helps keep sheep inside set borders without the use of visible fences [8, 9]; the borders are set using a smart phone, and the neck collar makes a sound as a sheep enters the warning zone. The Farmote system is a current pasture management method that allows systematic remote measuring of pasture height [10]. It is a solar-powered system that uses outlying static devices to capture pasture measurements and soil conditions, which can then be cross-referenced with satellite images captured in several regions of the electromagnetic spectrum to produce estimates of pasture biomass; the measurements are communicated via electromagnetic waves to an accessible web portal [10]. This technology operates on real-time data from ground motes that communicate through LoRaWAN [11].

The use of modern technology in farming comes with many benefits. However, problems arise even with the implementation of new technologies [27]. The challenges of smart farming include the integration of the sensors and the binding of sensor data to the analytics piloting automation and response activities [3]. Moreover, it is nearly impossible to obtain an in-depth view of large systems physically in real time, which results in laborious remote monitoring and control [4]. Remote monitoring of the overall activities on the farm, with a feedback mechanism, is not easy. This is where digital twins come in.

Digital twins are virtual copies of real objects and are receiving extra attention with the realization of the fourth industrial revolution [17]. Digital twins are developed to run concurrently with the physical systems, generating corresponding data on both platforms [26]. Currently, digital twins are widely used in various fields. From the agriculture industry's point of view, where digital twins can be of great benefit, the focus here is on livestock farming. By developing a digital twin for livestock farming, it is possible to find suitable climate conditions for growth by using simulations based on real datasets collected in the physical world. Implementing machine learning algorithms on these datasets allows accurate analysis and precise predictions [18]. Machine learning applications integrated with other farming techniques have succeeded in improving production in the farming sector.

Digital twins have the potential to tackle the challenge of seamless integration between IoT and data analytics by developing connected physical objects and a digital twin [19]. A digital twin platform enables quick analysis and real-time decisions based on precise analytics. This pairing of the physical and virtual worlds enables monitoring of systems and analysis or simulation of data to avert problems that cause downtime, enhance general operations to escalate uptime, and even develop future predictions [20]. Digital twins minimize failure occurrence and provide extended operational pliability and efficiency. Digital twins are already being implemented in farm operations; however, improved development of proficiency is required [21]. The digital twin concept is further broadened within smart farming by developing small services to understand the information of particular systems, such as an irrigation system, a seeding system, soil probes, weather stations and harvesters [22, 23].

The following are applied scenarios of digital twins in farming. In forestry, plantation management and precision farming, digital twins can be used to pilot actionable business insight and minimize operational costs [24]. The digital-twin orchard is another application of digital twins in farming; it helps to validate the analysis of data and the sustained monitoring of orchard production systems, to foresee stress, diseases and crop losses, and to develop new opportunities for end-to-end research [25].

3 Discussion

Implementing the latest technology in pastures has made pasture management more effective [5]. It allows for remote monitoring and automation based on electronic sensors, processing units and cloud storage [6]. This is all possible due to IoT technology, which has brought about the concept of smart farming [1]. Smart farming aims to solve the problems in farming in order to improve production; it involves data gathering, processing and analysis, which contribute to improved decision making and management techniques for farmers [2]. Smart farming is built on the IoT concept, which integrates various technologies like GSM, LTE, Bluetooth and Wi-Fi. The big data concept is introduced to IoT systems as they produce a huge amount of data [12]; big data allows easy collection, storage and processing of data.

The pasture Internet of Things is a pasture management system based on WSN that enables communication among different sampling nodes [15]. The SheepIT system manages sheep in the vineyard to ensure that they do not threaten the cultures [7]; it is a WSN-based system that integrates mobile nodes carried by the sheep. The Nofence system aims to keep the animals within boundaries defined by virtual fences [8]; the borders are set using a smart phone, and the settings are then transmitted to the GPS-based collar worn by the animal [9]. Farmote is a solar-powered system that uses ground motes to measure the soil conditions of the pasture [10]; motes are IoT laser sensors that measure the moisture and quality of the pasture daily and communicate through a LoRaWAN network [11].

There are still problems that arise even with the implementation of new technological methods [27]. The integration of sensors and the linking of sensor data to pilot automation are still a challenge [3]. It is not easy to get an in-depth overview of physical systems in real time; therefore, remote monitoring and control become a challenge [4].

Digital twins are virtual replicas of real objects [17]. The digital twin concept is made possible by IoT technology. Digital twins have the capability of predicting problems even before they occur, which helps to prevent downtimes. In livestock farming, a digital twin can determine precise climate conditions based on simulation of real datasets; precise and accurate analysis is possible due to the use of machine learning algorithms [18]. Digital twins have the potential to overcome the problems of smart farming: they enable smooth integration between IoT devices and collected data [19], and the pairing of the physical and digital worlds enables real-time monitoring of the system by using simulations that can help predict problems at an early stage, supporting better decisions to prevent downtimes [20]. Digital twin applications are also seen in forestry, plantation management and precision farming, where digital twins monitor and analyze the effect of diseases on crops [24].

This review discusses the applications of IoT in farming. Smart farming is successful today due to the use of IoT technology. The paper presents the pasture management systems based on IoT; smart farming brings many benefits that improve production in the farming sector, yet some challenges still arise. The paper also highlights the application of digital twins as a solution to the problems caused by smart farming.


Table 2 compares the technologies used in the investigated IoT pasture management systems.

Table 2  Comparison of technologies used in current IoT pasture management systems

  Reference  Technology  Advantages                                     Disadvantages
  [15, 7]    WSN         Allows mobility; quick response time           Packet loss; network interruption
  [16, 9]    GPS         Automated position tracking                    Range limitation; high energy consumption
  [16]       RSSI        Works well on moving objects; low cost;        Localization inaccuracy
                         low energy consumption
  [10]       LoRaWAN     Low-powered sensors; operates on free          Subject to frequency interference
                         frequencies
  [10]       Satellite   Wide coverage                                  High cost

4 Conclusion

The investigations in this paper confirm that smart farming has brought remarkable changes to pastures. Smart farming integrates the use of sensors, wireless communication, data storage and data processing algorithms to form a complete system; this combination of technologies was made possible by the IoT concept. Smart farming enables automation of the entire farming system and analysis of farm data for enhanced quality and quantity of production. It also improves management strategies, as proper decisions are taken based on precise analysis of the data from these systems.

Although smart farming has improved pasture management, there are challenges associated with these new technologies. This paper therefore also highlights the applications of digital twins in farming, with the aim of tackling the challenges caused by smart farming. It is shown that the pairing of the physical and digital worlds enables easy integration of sensors and linking of sensor data to allow automation, which was a challenge with smart farming. Digital twins also allow real-time monitoring of the systems, which enables easy remote monitoring of farms for a better management approach. Moreover, digital twins foresee problems before they occur, preventing unplanned collapse of farm operations. The main focus of this paper is to review current pasture management systems based on IoT technologies and to show how digital twins solve the problems encountered in smart farming, in order to make smart farming a complete answer for farmers, without drawbacks, for improved and sustainable production.


References

1. L. Nobrega, P. Pedreiras, P. Goncalves, S. Silva, Energy efficient design of a pasture sensor network, in Proceedings of 2017 IEEE 5th International Conference on Future Internet of Things and Cloud, FiCloud 2017 (2017), pp. 91–98. https://doi.org/10.1109/FiCloud.2017.36
2. F. Safety, Smart farming and food safety internet of things applications—challenges for large scale implementations (2015)
3. S. Chandler, What are the challenges of building a smart farming system?, IoT Agenda (2019). Available: https://internetofthingsagenda.techtarget.com/answer/What-are-the-challenges-of-building-a-smart-farming-system
4. A. Rasheed, O. San, T. Kvamsdal, Digital twin: values, challenges and enablers from a modeling perspective. IEEE Access 8, 21980–22012 (2020). https://doi.org/10.1109/ACCESS.2020.2970143
5. R. Xue, H.-S. Song, A. Bai, A framework for electronic pasture based on WSN, in 2011 International Conference on Multimedia Technology. https://doi.org/10.1109/icmt.2011.6001925
6. Z. Zhang, P. Wu, W. Han, X. Yu, Remote monitoring system for agricultural information based on wireless sensor network. J. Chin. Inst. Eng. 40(1), 75–81 (2017). https://doi.org/10.1080/02533839.2016.1273140
7. L. Nóbrega, P. Gonçalves, P. Pedreiras, J. Pereira, An IoT-based solution for intelligent farming. Sensors (Switzerland) 19(3), 1–24 (2019). https://doi.org/10.3390/s19030603
8. E. Brunberg, K. Sørheim, Bergslid, The ability of ewes with lambs to learn a virtual fencing system. Animal, 1–6 (2017)
9. K. Voth, Virtual fence—keep livestock in pasture without installing posts or wires. On Pasture (2018). Available: https://onpasture.com/2018/02/26/virtual-fence-keep-livestock-in-pasture-without-installing-posts-or-wires/
10. A. Milsom et al., Assessing the ability of a stationary pasture height sensing device to estimate pasture growth and biomass. J. New Zeal. Grasslands 81, 61–68 (2019). https://doi.org/10.33584/jnzg.2019.81.384
11. Optimization of Field Pasture Using IoT is now Possible Thanks to Farmote Systems and Actility | IoT For All. Available: https://www.iotforall.com/press-releases/actility-farmote-system
12. E.M. Ouafiq, A. Elrharras, A. Mehdary, A. Chehri, R. Saadane, M. Wahbi, IoT in smart farming analytics, big data based architecture, vol. 189, June 2020 (Springer, Singapore, 2021)
13. A. Grogan, Smart farming. Eng. Technol. 7(6), 38–40 (2012). https://doi.org/10.1049/et.2012.0601
14. M. Stočes, J. Vaněk, J. Masner, J. Pavlík, Internet of things (IoT) in farming—selected aspects. Agris On-line Pap. Econ. Informatics 8(1), 83–88 (2016). https://doi.org/10.7160/aol.2016.080108
15. X. Deng, R. Sun, H. Yang, J. Nie, Wang, Data transmission method of pasture internet of things based on opportunistic network. Trans. Chin. Soc. Agricult. Mach. 48, 208–214 (2017)
16. L. Nóbrega, P. Gonçalves, P. Pedreiras, R. Morais, A. Temprilho, SheepIT: automated vineyard weeding control system, in INFORUM Simpósio de Informática, Sept 2017. Available: http://edupark.web.ua.pt/static/docs/EduPARK-INForum.pdf
17. S.K. Jo, D.H. Park, H. Park, S.H. Kim, Smart livestock farms using digital twin: feasibility study, in 9th International Conference on Information and Communication Technology Convergence, ICT Convergence Powered by Smart Intelligence, ICTC 2018 (2018), pp. 1461–1463. https://doi.org/10.1109/ICTC.2018.8539516
18. M.W. Maduranga, R. Abeysekera, Machine learning applications in IoT based agriculture and smart farming: a review. Int. J. Eng. Appl. Sci. Technol. 04(12), 24–27 (2020). https://doi.org/10.33564/ijeast.2020.v04i12.004
19. A. Fuller, Z. Fan, C. Day, C. Barlow, Digital twin: enabling technologies, challenges and open research. IEEE Access 8, 108952–108971 (2020). https://doi.org/10.1109/ACCESS.2020.2998358
20. C. Microsoft, The promise of a digital twin strategy (2017), p. 23. Available: https://info.microsoft.com/rs/157-GQE-382/images/Microsoft%27sDigitalTwin%27How-To%27Whitepaper.pdf
21. M.J. Smith, Getting value from artificial intelligence in farming. Anim. Prod. Sci. 60(1), 46–54 (2019). https://doi.org/10.1071/AN18522
22. R.G. Alves et al., A digital twin for smart farming, in 2019 IEEE Global Humanitarian Technology Conference, GHTC 2019 (2019), pp. 19–22. https://doi.org/10.1109/GHTC46095.2019.9033075
23. M.V. Schönfeld, R. Heil, L. Bittner, Big data on a farm—smart farming (2018), pp. 109–120. https://doi.org/10.1007/978-3-319-62461-7_12
24. L. Grignard, The benefits of visual intelligence solutions in farming and forestry, 04/06/2020 (2020), pp. 1–3. Available: https://www.gim-international.com/content/article/the-benefits-of-visual-intelligence-solutions-in-farming-and-forestry
25. L. Moghadam, Edwards, Digital twin for the future of orchard production systems. Proceedings 36(1), 92 (2020). https://doi.org/10.3390/proceedings2019036092
26. G.A. Gericke, R.B. Kuriakose, H.J. Vermaak, O. Mardsen, Design of digital twins for optimization of a water bottling plant, in IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal (2019), pp. 5204–5210. https://doi.org/10.1109/IECON.2019.8926880
27. W. Nallaperuma, U. Ekanayake, R. Punchi-manage, Identifying factors that affect the downtime of a production process, pp. 551–555 (2016)

Conceptual Model for Measuring Complexity in Manufacturing Systems

Germán Herrera Vidal, Jairo Rafael Coronado-Hernández, and Andrea Carolina Primo Niebles

Abstract Conceptual models have proven to be a technique that, from the point of view of transmitting an idea, facilitates the elaboration of a coherent structure to support the visualization and understanding of a process. The objective of this article is to design a model that supports the application of mathematical models, providing relevant, structured and organized information and helping manufacturing decision makers with a greater procedural understanding. Methodologically, three views were constructed (a physical, a functional and an informational view) in order to clarify the elements and characteristics in the measurement of complexity, from a subjective perspective with the complexity index (CXI) method and objectively with Shannon's entropy model. The findings provide answers to the hypotheses raised, corroborating that conceptual models support and ensure a greater understanding and comprehension for the measurement of complex scenarios. At the same time, the structured elaboration of a hybrid model based on complexity-index heuristics and entropic measurements is evidenced.

Keywords Conceptual models · Complexity · Manufacturing systems · Measuring



1 Introduction

Companies that manufacture products face markets in which consumers and clients are inclined toward custom-type products, which brings a greater offer and high differentiation. This market effect directly impacts the production system, in which diverse elements associated with configuration and resources interact; as these elements relate to one another, complexity increases. Similarly, McDuffie et al. [1] establish that the impact is reflected in the productivity and quality of the company's internal processes. According to [2], complexity lies in the volume of internal and external variables that, when related, generate uncertainty and variability; according to [3], the level of complexity depends on the configuration of the system and all the elements involved and related to each other. In view of the above, several authors have sought to simplify manufacturing systems [4] by measuring complexity.

There are different models that support the measurement of complexity, among them conceptual models, theoretical models and mathematical models. Their use within industrial activity allows greater study, analysis and understanding of the system's behavior. In this research work, a conceptual model is proposed through which the fundamental aspects for the measurement of complexity in manufacturing systems are determined.

Conceptual models have proven to be a technique that, from the point of view of transmitting or representing an idea, facilitates the elaboration of a coherent structure to support the visualization and understanding of a process. According to [5], a conceptual model serves as a guide or aid in the process of modeling a concrete and specific reality. Similarly, Lario and Pérez [6] establish that conceptual models are descriptive tools that highlight and identify the elements and input variables of a specific case in the real world, be it manufacturing or supply chains. This work presents a conceptual model framework for the measurement of complex production systems, in which important characteristics and aspects are determined. The main objective is to design a model that supports the application of mathematical models, provides relevant, structured and organized information, and helps decision makers in a manufacturing process with a greater procedural understanding. The paper is divided into four sections: first, a theoretical background is developed; second, the conceptual model is constructed; third, the discussion is presented; and finally, the conclusions are drawn.


2 Theoretical Background

2.1 Measuring Complexity in Manufacturing Systems

The study of complexity arises from trying to explain and predict the behavior of a system through formal models. Models describe a set of input variables that are transformed into a variable, or a set of output variables, through a set of internal processes, in order to predict the behavior of a system. For [7], a system is a conglomeration of elements such as plant, process, product and parts that interact and relate to each other; in this sense, systems can be considered a representation of a reality. These elements involve machines, people, materials and information. Any manufacturing system is composed of input entities such as raw materials, information and energy, and in turn of output entities associated with finished products, waste and information [8].

In general, manufacturing systems are complex, which is why various authors have sought to simplify them, some using classic parameters such as manufacturing time [4], travel or distance between stations [9], material handling costs [10] and product quality [11]. According to [12], the measurement of complexity in manufacturing systems is a metric that serves as a parameter for establishing improvement plans, and it also shows that systems with high complexity present more problems than systems with low complexity. Given the above, measuring complexity in manufacturing systems allows managers to investigate and compare different types of configurations, structures and designs, to evaluate the behavior of systems and to facilitate accurate decision making. According to [13], the most common approaches to measuring complexity in the literature are nonlinear dynamics, information theory and hybrid methods, among others. Similarly, there are different types of models, classified from conceptual, theoretical and mathematical perspectives.

3 Building a Conceptual Model

3.1 Hypothesis

Two hypotheses have been formulated to support the main objective, namely the design of a conceptual model that supports the measurement of complexity in manufacturing systems and, in turn, provides relevant, structured and organized information. Based on studies by other authors and on the exploratory research carried out, the following hypotheses are put forward:

H1. Researchers consider that the design of a conceptual model serves as a support for measuring complexity in a manufacturing system.
H2. Designing a conceptual model for measuring complexity in manufacturing systems from different perspectives ensures greater understanding.


The construction of a conceptual model provides a broad vision of what is to be modeled and a greater understanding of the system studied. In this sense, in the business context, Vidal and Goetschalckx [14] establish that its design must involve the participating agents and the committed areas, in a way that facilitates representation, understanding and subsequent analysis. Similarly, Tang et al. [15] propose that a conceptual model should involve the interactions that exist between its constituent elements, taking into account the flows of information, documents and resources. A conceptual model can thus be used as a reference model for the construction of new specific models; in fact, Hernández et al. [16] consider the conceptual model a complement and support for mathematical modeling.

From the perspective of the types of complexity in manufacturing systems, the following conceptual models stand out. (i) Global models: Perona and Miragliotta [17] propose a structure that allows the fulfillment of the proposed goals under aggressive scenarios; consequently, Bozarth et al. [2] propose a model to measure and analyze the negative impact of complexity in manufacturing systems, and Haumann et al. [18] propose a model for determining the importance of the factors that influence production indicators. (ii) Static models: Modrak and Marton [19] define a methodological structure for assembly lines in supply chains, while Mattsson et al. [20] establish a model for measuring production complexity from the specific point of view of each workstation. (iii) Dynamic models: Eckstein et al. [21] develop a model that allows a greater understanding of the relations between the agents in a chain, starting from the measurement of complexity in environments that are difficult to adapt to.

In general, system modeling requires detailed data, which generates a diversity of information, making it difficult to understand and use [22]. Thus, a conceptual model for production systems and supply chains must involve a series of related elements that help build a model similar to the real world. It is therefore considered fundamental to take different perspectives, such as the functional, informational and decisional views [5, 23].

3.2 Conceptual Model

This section presents the design of a conceptual model to measure complexity in manufacturing systems. The model is defined in three perspectives or views, which include all the necessary elements so as to facilitate its understanding and use.

3.2.1 Conceptual Model—Physical View

The physical view shows how the different resources and flows that are part of a manufacturing system are integrated. To elaborate it, it is essential to have clarity about the inputs, outputs, locations, modes of operation and transportation [24]. For its construction, all related elements must be considered in pursuit of a single purpose; they are linked to the facilities, the process, the product and the planning. According to [25], internal sources such as the type of operation and the stability of the production schedule should be taken into account, as well as external sources such as variations in demand [26]. According to [27], complexity depends on the variety of a product, the structure of a process and its variability over time.

Given the above, two types of complexity arise: internal complexity, which is associated with flows within the manufacturer and may be caused by factors external to the organization; and external complexity, which is associated with external variables, agents and flows. According to the evolution of time and its behavior, for [28], complexity in manufacturing systems can be static or dynamic. Static complexity is a characteristic associated with systems and with production processes; it studies the structural part or design of the system [12, 29, 30], and its variables do not change over time [31]. Dynamic complexity, in contrast, relates to the changes of the relevant process variables over a time horizon; according to [31], the variables evolve with respect to time, and dynamic complexity studies the uncertainty in the behavior of operations [12, 29, 30]. Figure 1 shows the construction of the conceptual model with the physical view for the measurement of complexity in manufacturing systems.

3.2.2 Conceptual Model—Functional View

Initially, it is necessary to address a case study from the manufacturing industry, which makes it possible to characterize the production system and identify the elements associated with the plant, process, product, parts and planning. Information is collected through field visits, direct observation and interviews; this step is facilitated by flow diagrams, supplier–inputs–process–outputs–customers (SIPOC) diagrams or value stream mapping (VSM) (see Fig. 1).

In this view, a first point is identified where there are several variables such as origin, quantity, variety, time and system relationships. A second point establishes that, to measure complexity in manufacturing systems, there are qualitative methods that depend on the perception of the people involved in the process and quantitative methods based on data, verification and analysis. From this, two methods are distinguished: (i) the complexity index (CXI), which measures complexity from a subjective perspective, since it depends on the opinion of those responsible for each workstation, and (ii) entropic methods, which are based on analytical equations, facilitating entropic analysis in different types of scenarios and providing a quantitative basis for decision making.


Fig. 1 Conceptual model

3.2.3 Conceptual Model—Informational View

The informational view refers to the representation of information in a manufacturing system. According to [5], the information view should support decision making based on the flow of data needed as input and the flow of data produced (see Fig. 1). In this view, for the development of the CXI method, the information is initially collected per workstation, using a structured questionnaire validated by Deshmukh et al. [32]. For the entropic method, the information comes from the process and from planning; it contains the setup times of each operation at each workstation, the production times and the non-production times. The result is given in bits, a unit of measurement that represents the amount of information handled by a given resource and serves as a basis for comparison as the number of components increases.
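As an illustration of the entropic measure in bits, the following sketch (our own, with hypothetical state proportions, not taken from the case data) computes the Shannon entropy of a resource's state distribution:

```python
import math

def shannon_entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2 p), in bits, of a resource's
    state-occupancy distribution; a higher value means more information is
    needed to describe the resource, i.e. higher complexity."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical proportions of time a workstation spends in setup,
# production and non-production states.
states = [0.2, 0.7, 0.1]
print(f"Workstation complexity: {shannon_entropy_bits(states):.3f} bits")
```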

4 Discussion

In manufacturing systems, there are elements associated with variability and uncertainty that make them difficult, complex systems; therefore, complexity management within these systems is a necessary action [33]. Similarly, Vidal and Hernández [34] state that there are factors linked to the complexity of manufacturing systems that influence the development of processes and impact the performance indicators of a company. According to [35], measurement models should be designed to relate complexity to costs and other performance indicators, so that opportunities for improvement can be established. Given the above, there are models that support the measurement of complexity, ranging from the conceptual to the mathematical. That is why the main objective here is to design a model that supports the application of mathematical models, providing relevant, structured and organized information and helping manufacturing decision makers toward a better procedural understanding. To achieve this goal, two hypotheses were proposed to corroborate the above.

5 Conclusion

The management of complexity in manufacturing systems is indispensable for adequate sustainability in the market, owing to the effects of globalization and the accelerated growth of economies. For effective management, it is of vital importance to calculate metrics that serve as parameters for establishing improvement plans and, in turn, as patterns of comparison across different configurations, structures, components and designs. This work proposes a conceptual framework with three views (a physical, a functional and an informational one) to provide clarity about the fundamental elements and characteristics in the measurement of complexity, from a subjective perspective with the complexity index (CXI) method and objectively with Shannon's entropy model. The proposed model uses a combination of different spectra to support decision making when simplifying or improving operational performance. For future research, it would be useful to implement this technique in the manufacturing sector, in different case studies, in order to obtain and analyze results on the static and dynamic complexity of the systems.


References

1. J. McDuffie, K. Sethuraman, M. Fisher, Product variety and manufacturing performance: evidence from the international automotive assembly plant study. Manage. Sci. 42(3), 350–369 (1996). https://doi.org/10.2307/2634348
2. C. Bozarth, D. Warsing, B. Flynn, E. Flynn, The impact of supply chain complexity on manufacturing plant performance. J. Oper. Manag. 27(1), 78–93 (2009). https://doi.org/10.1016/j.jom.2008.07.003
3. I. Manuj, F. Sahin, A model of supply chain and supply chain decision making complexity. Int. J. Phys. Distrib. Logist. Manag. 41(5), 511–549 (2011). https://doi.org/10.1108/09600031111138844
4. L. Salum, The cellular manufacturing layout problem. Int. J. Prod. Res. 38(5), 1053–1069 (2000). https://doi.org/10.1080/002075400189013
5. F. Valero, F. Esteban, A. García, D. Perales, Propuesta de marco conceptual para el modelado del proceso de planificación colaborativa de operaciones en contextos de Redes de Suministro/Distribución (RdS/D), in XI Congreso de Ingeniería de Organización (2007), pp. 0873–0882
6. F. Lario, D. Pérez, Gestión de Redes de Suministro (GRdS): sus Tipologías y Clasificaciones. Modelos de Referencia Conceptuales y Analíticos, in IX Congreso de Ingeniería de Organización, Gijón (2005), p. 163
7. B. Wilson, Systems: Concepts, Methodologies and Applications (Wiley, New York, 1984)
8. G. Chryssolouris, Manufacturing Systems: Theory and Practice, 2nd edn. (Springer, New York, 2006)
9. S. Heragu, A. Kusiak, Machine layout problem in flexible manufacturing systems. Oper. Res. 36(2), 258–268 (1988). https://doi.org/10.1287/opre.36.2.258
10. R. Meller, K. Gau, The facility layout problem: recent and emerging trends and perspectives. J. Manuf. Syst. 15(5), 351–366 (1996). https://doi.org/10.1287/opre.36.2.258
11. S. Li, S. Rao, T. Ragu, B. Nathan, Development and validation of a measurement instrument for studying supply chain management practices. J. Oper. Manag. 23(6), 618–641 (2005). https://doi.org/10.1016/j.jom.2005.01.002
12. G. Frizelle, E. Woodcock, Measuring complexity as an aid to developing operational strategy. Int. J. Oper. Prod. Manag. 15(5), 26–39 (1995). https://doi.org/10.1108/01443579510083640
13. K. Efthymiou, A. Pagoropoulos, N. Papakostas, D. Mourtzis, G. Chryssolouris, Manufacturing systems complexity: an assessment of manufacturing performance indicators unpredictability. CIRP J. Manuf. Sci. Technol. 7(4), 324–334 (2014). https://doi.org/10.1016/j.cirpj.2014.07.003
14. C. Vidal, M. Goetschalckx, Strategic production-distribution models: a critical review with emphasis on global supply chain models. Eur. J. Oper. Res. 98(1), 1–18 (1997)
15. J. Tang, D. Shee, T. Tang, A conceptual model for interactive buyer-supplier relationship in electronic commerce. Int. J. Inf. Manage. 21, 49–68 (2001)
16. J. Hernández, J. Mula, F. Ferriols, R. Poler, A conceptual model for the production and transport planning process: an application to the automobile sector. Comput. Ind. 59(8), 842–852 (2008)
17. M. Perona, G. Miragliotta, Complexity management and supply chain performance assessment. A field study and a conceptual framework. Int. J. Prod. Econ. 90(1), 103–115 (2004). https://doi.org/10.1016/S0925-5273(02)00482-6
18. M. Haumann, H. Westermann, S. Seifert, S. Butzer, Managing complexity: a methodology, exemplified by the industrial sector of remanufacturing, in Proceedings of the 5th International Swedish Production Symposium SPS, vol. 12 (2012), pp. 107–114
19. V. Modrak, D. Marton, Structural complexity of assembly supply chains: a theoretical framework. Procedia CIRP 7, 43–48 (2013). https://doi.org/10.1016/j.procir.2013.05.008
20. S. Mattsson, P. Gullander, A. Davidsson, Method for measuring production complexity, in 28th International Manufacturing Conference (2011)
21. D. Eckstein, M. Goellner, C. Blome, M. Henke, The performance impact of supply chain agility and supply chain adaptability: the moderating effect of product complexity. Int. J. Prod. Res. 53(10), 3028–3046 (2015)
22. R. Urbanic, W. ElMaraghy, Modeling of Manufacturing Process Complexity, vol. VII (2006)
23. M. De La Fuente, L. Lorenzo, A. Ortiz, Enterprise modelling methodology for forward and reverse supply chain flows integration. Comput. Ind. 61(7), 702–710 (2010)
24. M. Alemany, M. Verdecho, F. Alarcón, Graphical modelling of the physical organization view for the collaborative planning process, in II ICIEIM, Burgos, 3–5 de Septiembre de 2008
25. B. Flynn, E. Flynn, Information-processing alternatives for coping with manufacturing environment complexity. Decis. Sci. 30(4), 1021–1052 (1999). https://doi.org/10.1111/j.1540-5915.1999.tb00917.x
26. S. Sivadasan, J. Efstathiou, A. Calinescu, L. Huatuco, Advances on measuring the operational complexity of supplier–customer systems. Eur. J. Oper. Res. 171(1), 208–226 (2006). https://doi.org/10.1016/j.ejor.2004.08.032
27. G. Schuh, Lean Innovation (Springer, Berlin, 2013)
28. L. Gaio, F. Gino, E. Zaninotto, I sistemi di produzione (Edizioni Carocci, Roma, 2002)
29. N. Suh, A Theory of Complexity and Applications (Oxford University Press, Oxford, 2005)
30. N. Papakostas, K. Efthymiou, D. Mourtzis, G. Chryssolouris, Modelling the complexity of manufacturing systems using nonlinear dynamics approaches. CIRP Ann. Manuf. Technol. 58(1), 437–440 (2009). https://doi.org/10.1016/j.cirp.2009.03.032
31. R. Cao, Introducción a la Simulación y a la Teoría de Colas (Ed. Netbiblo S. L. R., 2002)
32. A. Deshmukh, J. Talavage, M. Barash, Complexity in manufacturing systems. Part 1: Analysis of static complexity. IIE Trans. 30(7), 645–655 (1998). https://doi.org/10.1023/A:1007542328011
33. G. Vidal, J. Hernández, Complexity in manufacturing systems: a literature review. Prod. Eng. 1–13 (2021)
34. G. Vidal, J. Hernández, Study of the effects of complexity on the manufacturing sector. Prod. Eng. 1–10 (2021)
35. J. Aelker, T. Bauernhansl, H. Ehm, Managing complexity in supply chains: a discussion of current approaches on the example of the semiconductor industry. Procedia CIRP 7, 79–84 (2013). https://doi.org/10.1016/j.procir.2013.05.014

Hole Filling Using Dominant Colour Plane for CNN-Based Stereo Matching

Rachna Verma and Arvind Kumar Verma

Abstract The paper presents a new hole filling strategy, based on the dominant colour plane, for estimating disparity values in the inconsistent disparity regions, called holes, of the disparity map generated after the left–right consistency check of the left and right disparity maps. The left and right disparity maps are generated by a patch-based stereo matching system using a CNN. Holes are typically found near object boundaries, occluded regions and textureless regions of stereo images. The proposed hole filling strategy is based on the observation that for two dissimilar adjacent regions in an RGB image, normally one colour plane is more discriminating, i.e. the means of the colour values of the discriminating plane in the two regions have a larger gap than those of the other two colour planes. This large gap is used to locate the edge between the adjacent regions. Once the edge is located, the inconsistent disparities are filled by interpolating from the corresponding regions. The proposed method is evaluated on Middlebury datasets and achieves results comparable to the state-of-the-art algorithms.

Keywords Hole filling · Stereo vision · Patch matching · Disparity map · CNN

1 Introduction

Stereo vision uses two images of a scene, taken from different viewpoints, to estimate the depths of scene points based on their disparities in the captured images. This technology aims to develop human/animal-like vision capabilities in machines and has many applications, such as robotics, object detection, obstacle avoidance, 3D reconstruction, and autonomous driving.

R. Verma (B), Department of CSE, Faculty of Engineering, J.N.V. University, Jodhpur, India, e-mail: [email protected]
A. K. Verma, Department of Production and Industrial Engineering, J.N.V. University, Jodhpur, India, e-mail: [email protected]


Many decades of active research in this field have produced many methods to estimate disparity maps. These algorithms can be classified as traditional image processing-based methods [1] and machine learning-based methods [2, 3]. Despite voluminous research, accurate estimation of the disparity map still eludes researchers, due to occlusions, textureless areas, repetitive patterns and reflections. Further, the computed disparity map invariably contains inconsistent disparity values, called holes, in some regions of the scene. Holes frequently occur at depth discontinuities and in occluded and textureless regions. In a disparity map, holes are typically generated after applying the left–right (LR) consistency check [4]. The problem of holes in disparity maps is similar to that of holes in depth images captured by depth cameras such as the Microsoft Kinect, Intel RealSense, etc. The holes in a depth image are regions where reliable depth values cannot be calculated by the sensor for various reasons. Hole filling is the process of estimating these missing values in disparity maps or depth images. This has created an area of research where the goal is to complete the missing 3D or disparity information using a secondary depth filling process based on the associated colour images, as the presence of holes in disparity maps or depth images creates many problems in downstream applications. The paper presents a new hole filling strategy, based on the dominant discriminating colour plane, for estimating disparity values for holes in the disparity map generated after the LR consistency check. The initial left and right disparity maps are generated by a patch-based stereo matching system using a convolutional neural network (CNN). Although any disparity estimation method can be used to generate disparity maps, we have used the patch-based stereo CNN due to its better accuracy compared to other methods. The proposed hole filling strategy is based on the observation that for two dissimilar adjacent regions in an RGB image, normally one colour plane is more discriminating, i.e. the means of the colour values of the discriminating plane in the two regions have a larger gap than those of the other two. This larger gap is used to locate the edge between the adjacent regions precisely. Once the edge is located, the inconsistent disparities are filled by extrapolating the known disparity values from the corresponding regions. The remaining part of this paper is organized as follows. Section 2 discusses the related work on patch-based stereo matching and hole filling. Sections 3 and 4 describe patch-based stereo matching and the hole filling scheme, respectively. The experimental results are presented in Sect. 5, and Sect. 6 concludes the paper.

2 Related Work

A large number of methods have been proposed in the literature for solving stereo matching problems, ranging from traditional methods [1] to deep learning-based methods [2, 3]. Since the focus of the present work is a hole filling strategy for disparity maps, which is closely related to hole filling in depth images and somewhat related to image completion, we briefly review the prominent work


done in these areas. To the best of our knowledge, very few works are directly related to hole filling in disparity maps. Since we use a patch-based CNN approach for the generation of the initial disparity map, a brief review of work related to patch-based disparity computation is given for easy comprehension. The pioneering work of Zbontar and LeCun [5] uses a Siamese CNN for computing the matching score between the reference patch in the left image and the target patch in the right image. Zbontar and LeCun [5] applied many traditional disparity refinement post-processing steps to the initial disparity map, and the final disparity map was the state of the art. Luo et al. [6] modified Zbontar's work and used an inner product for computing similarity scores from the outputs of the two branches of the Siamese network. Park and Lee [7] added a per-pixel pyramid pooling module to the baseline architecture of Zbontar and LeCun [5] to increase the receptive field of the network. Brandao et al. [8] used a Siamese architecture similar to Luo et al. [6], focusing on the types of features used for correspondence matching. Hole filling in a disparity map is similar to recovering scratched parts of old photographs or recovering the background after intentionally erasing some objects from photographic images. Compared to the voluminous literature available for image completion [9], relatively little exists for hole filling in depth images [10, 11]. Further, the majority of hole filling methods focus on depth images captured by depth cameras, and only a limited number of papers address stereo disparity hole filling. Moreover, hole filling algorithms for depth images are based on the concepts of image completion algorithms [12]. Atapour-Abarghouei and Breckon [13] perform depth hole filling in RGB-D images by decomposing the image into high-frequency information (object boundaries and texture relief) and low spatial frequency components and then use an exemplar-based filling approach, a popular image completion algorithm [13]. Baek et al. [14] use both structure propagation and structure-guided completion to fill holes in depth images, which results in better geometric and structural coherence. Hervieu et al. [15] used modified depth image-based rendering techniques to fill holes in stereo vision generated disparity maps. Contrary to these methods, the proposed hole filling method uses the concept of the dominant discriminating colour plane to locate the edge pixel precisely and then uses linear extrapolation of known disparity values from around the hole to estimate the missing disparity values.

3 Proposed Method

In this paper, we propose a new hole filling strategy for replacing inconsistent disparity values with reliable ones in stereo vision generated disparity maps. The proposed strategy is based on the observation that an edge between two adjacent regions in an RGB colour image can be precisely located using the colour plane of the image with the most discriminating colour values, called the dominant colour plane. The proposed hole filling strategy is a three-step process: (1) identify the dominant colour plane for the selected regions of a hole, (2) locate the edge based


Fig. 1 Steps for generating the disparity map: stereo image pair → left and right disparity maps → left–right consistency check → reliable disparity map → hole filling → final disparity map

on the dominant colour plane, and (3) fill the holes using linear extrapolation of the available reliable disparity values. The reliable disparity values are generated by a patch-based stereo vision system using a stacked stereo CNN (SS-CNN), developed by the authors in earlier work [4]. Figure 1 shows a block diagram of the proposed system; the main contribution of this paper is the hole filling strategy, highlighted in grey in Fig. 1. For easy comprehension and completeness, we first briefly describe the patch-based CNN architecture used to generate the reliable disparity map. Similar to most patch-based approaches [5, 6], SS-CNN [4] consists of two main steps: initial cost volume generation for computing the disparity map using a CNN, and post-processing steps for disparity refinement. Figure 2 shows the architecture of SS-CNN. The input to the network is a tensor of W × W × C, where W × W is the size of the left and right patches and C is the total number of colour channels: 2 for grey images and 6 for colour images (3 for the left image patch and 3 for the right image patch). The input tensor is generated by stacking the two patches along the colour channel axis, and the colour images are normalized before stacking and feeding to the network. SS-CNN consists of, in sequence, three convolutional layers, one flatten layer, one dense layer, one dropout layer and finally the output layer (a minimal code sketch follows the step descriptions below). Middlebury stereo images and their ground truth [16] are used to generate a binary classification dataset (similar and dissimilar patch pairs) for training and validation. The steps to generate the reliable disparity map from a stereo image pair are described briefly below; for a detailed description, refer to [4].

Fig. 2 Stacked stereo CNN architecture (stacked input → ConvNet → flatten → dense → dropout → output)

Cost volume generation: For each pixel location in the left image, the left image patch is compared with the right image patches for all possible


disparity values, and the corresponding dissimilarity scores are computed. Applied to all pixels of the left image, this step generates the dissimilarity cost volume. Semi-global matching: The semi-global matching scheme of Zbontar and LeCun [5] is used as a post-processing step for disparity refinement. Initial disparity map computation: The disparity at each pixel of the reference image is obtained using a winner-takes-all strategy; this step generates the initial disparity map. Left–right consistency check: Initial disparity maps of both the left and the right image are computed separately. The left–right consistency check [4] is applied, and inconsistent disparities in the left disparity map are discarded and set to a negative value. The regions of negative disparity values are called holes in the disparity map, and they are filled using the proposed hole filling scheme.
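As a rough illustration of the layer sequence named above, the following Keras sketch builds a stacked-patch classifier in Python, the language the paper uses for its implementation (Sect. 5). The filter counts, kernel sizes, dense width and dropout rate are assumptions for illustration; the paper fixes only the layer order and the W × W × C input (11 × 11 patches, 6 channels for colour).

import tensorflow as tf
from tensorflow.keras import layers, models

W, C = 11, 6  # 11 x 11 colour patch pairs stacked along the channel axis

model = models.Sequential([
    # three convolutional layers over the stacked left/right patches
    layers.Conv2D(32, 3, activation="relu", padding="same", input_shape=(W, W, C)),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # similar vs dissimilar patch pair
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])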

4 Hole Filling Scheme

In this paper, we propose a new hole filling scheme as a three-step process. In the first step, the dominant discriminating plane for the pixels at the two ends of a hole along a horizontal scan line is determined. In the second step, the edge pixel between the two regions, typically where the depth discontinuity occurs, is precisely located. In the third step, linear extrapolation is used to assign disparity values to the hole pixels on the current scan line. This process is repeated for all scan lines of a hole in the disparity map.

Step 1: Detect the dominant discriminating plane of an image region pair

The concept of the dominant discriminating plane is based on the way humans compare two regions in an image. For example, a person says that one region is greener than another to discriminate two green regions. Experimentally, we observed that statistics, such as the mean or median, computed on one colour plane are more discriminating than the same statistics obtained from the grey scale image or the other colour planes. We call this the dominant discriminating plane. The concept is illustrated with examples for some region pairs of an image. Figure 3 shows the left image of the Baby1 test image from the Middlebury [16] stereo dataset along with its grey image and its red, green and blue colour planes, separately. To find the dominant colour plane for a region pair, the means of the red, green and blue plane values and of the grey values of the region pair are calculated. The means of the colour values of the individual colour planes and the grey values over a window in different image region pairs are tabulated in Table 1. The table also reports the absolute difference of the means in each region pair; the absolute difference for the dominant discriminating plane is starred.


Fig. 3 Top row: Baby1 test image and its grey image. Bottom row: the red colour plane, the green colour plane and the blue colour plane of Baby1

Table 1 Dominant discriminating planes for a few region pairs (the largest absolute difference, marking the dominant discriminating plane, is starred)

               (Row, column)   Blue     Green     Red      Grey
Region A       (279, 60)       142.44   196.74    227.63   199.72
Region B       (279, 80)       148.51   185.43    187.73   182.00
|Difference|                     6.08    11.30    39.90*    17.72
Region E       (51, 116)       111.48   190.10    228.07   192.08
Region F       (51, 136)        92.28   143.00    207.70   156.70
|Difference|                    19.20   47.10*     20.37    35.37

Visually, it can be observed from Fig. 3 that for the region pair (A, B), the red colour plane is more dominant for discriminating the regions than the other planes. This can be verified against the data tabulated in Table 1: for this region pair, the red colour plane has the largest absolute difference of means, i.e. 39.90, which is more than twice the next largest value, 17.72, obtained for the grey image. Similarly, for the region pair (C, D), the discriminating colour plane is the red plane, and for the region pair (E, F), the discriminating colour plane is the green plane, with an absolute difference of 47.10. It can be clearly observed that the dominant discriminating plane is not common to all regions of an image; it is specific to a pair of regions.


Algorithm to detect the dominant discriminating plane

For each horizontal scan line of a hole, the following algorithm is used to detect the dominant discriminating plane:

1. Select two anchor points, PR1 and PR2, in the image such that PR1 and PR2 are the two extreme pixels of the hole along the scan line with valid disparity values.
2. Consider a rectangular window of equal size around each anchor point. Calculate the middle means of the pixel values of the window centred at PR1, for each colour plane and for the corresponding grey image; similarly, calculate the middle means for PR2. Let M(PR1) and M(PR2) denote the middle means at PR1 and PR2, respectively. The middle mean is the mean of the pixel values lying between the first and third quartiles; this removes outliers and yields a consistent mean value.
3. Calculate the absolute difference of the mean values for each colour plane and the grey image. The colour plane (or the grey image) with the highest mean difference is chosen as the dominant discriminating plane for comparing the regions of the selected anchor points PR1 and PR2.

Step 2: Detect the edge pixel between two anchor points of a scan line

The edge pixel between the two anchor points of a scan line is located using the following two steps:

1. Calculate the middle mean at each pixel p between PR1 and PR2 for the dominant discriminating plane. Let M(p) denote the middle mean at p.
2. For each pixel p between the anchor points, calculate the absolute differences of M(p) from the anchor means M(PR1) and M(PR2). If |M(p) − M(PR1)| < |M(p) − M(PR2)|, pixel p is grouped with region R1 and its membership value is set to 0; otherwise, p is grouped with region R2 and its membership value is set to 1.

Step 3: Assign disparity values to hole pixels

For each scan line of a hole, once the edge pixel between PR1 and PR2 is located, the hole pixels are assigned values by linear extrapolation of the known disparity values from both sides of the hole. Hole pixels to the left of the edge pixel, i.e. pixels with membership value 0, are filled by extrapolating the known disparity values on the left side, and pixels to the right of the edge pixel, i.e. pixels with membership value 1, by extrapolating those on the right side. Figure 4 shows the extrapolation of disparity values to fill holes.
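A compact NumPy sketch of steps 2 and 3 for one scan line follows (the middle-mean helper is repeated so the sketch is self-contained). The use of the two nearest reliable disparities on each side to form each extrapolation slope, and the assumption that the anchors have valid neighbours, are illustrative choices.

import numpy as np

def middle_mean(values):
    q1, q3 = np.percentile(values, [25, 75])
    inner = values[(values >= q1) & (values <= q3)]
    return inner.mean() if inner.size else values.mean()

def fill_hole_scanline(plane, disp, row, pr1, pr2, half=5):
    # plane: dominant colour plane (2-D array); disp: float disparity map with
    # holes marked negative; pr1 < pr2 are the anchor columns on `row`.
    m1 = middle_mean(plane[row-half:row+half+1, pr1-half:pr1+half+1].ravel())
    m2 = middle_mean(plane[row-half:row+half+1, pr2-half:pr2+half+1].ravel())
    # step 2: group each hole pixel with the nearer anchor region (0 left, 1 right)
    side = {}
    for col in range(pr1 + 1, pr2):
        mp = middle_mean(plane[row-half:row+half+1, col-half:col+half+1].ravel())
        side[col] = 0 if abs(mp - m1) < abs(mp - m2) else 1
    # step 3: linearly extrapolate the reliable disparities from each side
    lslope = disp[row, pr1] - disp[row, pr1 - 1]   # assumes a valid left neighbour
    rslope = disp[row, pr2 + 1] - disp[row, pr2]   # assumes a valid right neighbour
    for col, s in side.items():
        if s == 0:
            disp[row, col] = disp[row, pr1] + lslope * (col - pr1)
        else:
            disp[row, col] = disp[row, pr2] - rslope * (pr2 - col)
    return disp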


Fig. 4 Hole filling strategy: (a) hole with the located edge, (b) filled hole

After filling all holes using the above three steps, the final disparity map is generated.

5 Implementation and Results

The left and right disparity maps are generated by the SS-CNN model, implemented in Python using the TensorFlow and Keras packages. The network is trained and evaluated on a total of 7.28 lakh (728,000) image patch pairs (half similar and half dissimilar) of size 11 × 11, generated from the publicly available Middlebury stereo dataset [16], using the quarter-resolution stereo images of the dataset to reduce memory requirements. The image pixel values are normalized before generating the training dataset; for the generation of the training dataset, refer to [4]. The network is trained on Google Colab with the GPU setting enabled. The training and validation accuracy achieved by the network is 91%. Figure 5a shows the left images of three stereo pairs taken from the Middlebury stereo dataset. The initial left (Fig. 5b) and right disparity maps are generated by the SS-CNN network. The raw left and right disparity maps are further refined using semi-global matching [5] to produce smooth disparity maps. The left–right consistency check is applied to these smoothed disparity maps to obtain the reliable disparity map (Fig. 5c): inconsistent disparities in the left disparity map are discarded and set to a negative value. For the LR check, we call a disparity inconsistent if there is a difference of more than one pixel between the left and right disparity values. Once the reliable disparity map is obtained, the proposed hole filling strategy is applied to fill the holes. Figure 5d shows the final disparity maps after hole filling by the proposed method, with the error region (in red) in the computed disparity map (for the ALL region). Further, the ground truth contains regions for which the disparity is not known (i.e. occluded regions); we therefore also computed the error excluding occluded regions. Table 2 lists the error, in percentage, in the disparity map for 1- and 2-pixel error thresholds. The accuracy achieved by the proposed method is comparable to the state-of-the-art methods.
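Given the one-pixel tolerance just stated, the LR consistency check itself can be sketched as below; the hole marker −1 matches the paper's negative-value convention, while the plain double loop is purely illustrative.

import numpy as np

def lr_consistency(disp_left, disp_right, max_diff=1):
    # Mark disparities failing the left-right check as holes (negative value).
    h, w = disp_left.shape
    out = disp_left.astype(np.int32)
    for y in range(h):
        for x in range(w):
            d = out[y, x]
            xr = x - d  # corresponding pixel in the right image
            if xr < 0 or xr >= w or abs(d - disp_right[y, xr]) > max_diff:
                out[y, x] = -1  # inconsistent disparity becomes a hole
    return out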

Fig. 5 Disparity maps at various stages of processing for the Baby1, Cloth4 and Aloe test images: (a) left image, (b) initial disparity maps generated by the SS-CNN network, (c) reliable disparity maps after the LR consistency check, (d) final disparity maps after hole filling by the proposed method (errors marked in red)

6 Conclusion

The paper discussed a new hole filling strategy for estimating the disparity values in the holes of a disparity map. The method is based on how humans compare two regions in an image and on the dominant discriminating colour plane.


Table 2 Error (in percentage) in the computed disparity map of various test stereo images

          Initial error        Error after semi-     Final error (after hole filling)
          (SS-CNN output)      global matching       ALL                NON OCC
Image     ALL ≤1   ALL ≤2      ALL ≤1   ALL ≤2       ≤1      ≤2        ≤1      ≤2
Baby1     28.4     16.6        9.9      8.4          6.2     4.8       5.4     4.0
Aloe      27.1     23.1        17.8     15.9         13.1    11.5      10.0    8.3
Cloth4    21.7     20.0        17.2     16.3         12.7    8.6       11.7    7.3

The method rests on the observation that for two dissimilar adjacent regions in an RGB image, normally one colour plane is more discriminating in terms of statistical parameters such as the mean or mode. This larger gap in the statistical parameters is used to locate the edge pixels precisely. After locating the edge pixel, the inconsistent disparities are filled by linear extrapolation from the available reliable disparities. The initial disparity map is computed with a patch-based convolutional neural network, as it gives a more accurate disparity map than traditional methods. The LR consistency check is applied to generate holes, which are the regions with incorrect disparities, and these holes are filled with the proposed method. The proposed method is implemented in Python, evaluated on Middlebury datasets, and achieves results comparable to state-of-the-art algorithms.

References

1. R.A. Hamzah, H. Ibrahim, Literature survey on stereo vision disparity map algorithms. J. Sens. 23 (2016). Article ID 8742920
2. K.Y. Kok, P. Rajendran, A review on stereo vision algorithms: challenges and solutions. ECTI Trans. Comput. Inf. Technol. 13(2), 134–150 (2019)
3. K. Zhou, X. Meng, B. Cheng, Review of stereo matching algorithms based on deep learning. Comput. Intell. Neurosci. (2020)
4. R. Verma, A.K. Verma, Patch based stereo matching using CNN. ICTACT J. Image Video Process. 11(3), 2366–2371 (2021)
5. J. Zbontar, Y. LeCun, Stereo matching by training a convolutional neural network to compare image patches. J. Mach. Learn. Res. 17, 1–32 (2016)
6. W. Luo, A.G. Schwing, R. Urtasun, Efficient deep learning for stereo matching, in IEEE Conference on Computer Vision and Pattern Recognition, USA (2016), pp. 5695–5703
7. H. Park, K.M. Lee, Look wider to match image patches with convolutional neural networks. IEEE Signal Process. Lett. 24, 1788–1792 (2017)
8. P. Brandao, E. Mazomenos, D. Stoyanov, Widening Siamese architectures for stereo matching. Pattern Recogn. Lett. 120, 75–81 (2019)
9. Q. Chen, G. Li, Q. Xiao, L. Xie, M. Xiao, Image completion via transformation and structural constraints. EURASIP J. Image Video Process. 2020, 44 (2020)
10. J. Liu, X. Gong, J. Liu, Guided inpainting and filtering for kinect depth maps, in IEEE International Conference on Pattern Recognition (2012), pp. 2055–2058


11. L.M. Po, S. Zhang, X. Xu, Y. Zhu, A new multi-directional extrapolation hole-filling method for depth-image-based rendering, in IEEE International Conference on Image Processing (2011), pp. 2589–2592
12. M. Camplani, L. Salgado, Efficient spatiotemporal hole filling strategy for kinect depth maps. IS&T/SPIE Electron. Imaging 82900E (2012)
13. A. Atapour-Abarghouei, T.P. Breckon, A comparative review of plausible hole filling strategies in the context of scene depth image completion. Comput. Graphics 72, 39–58 (2018)
14. S.H. Baek, I. Choi, M.H. Kim, Multiview image completion with space structure propagation, in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 488–496
15. A. Hervieu, N. Papadakis, A. Bugeau, P. Gargallo, V. Caselles, Stereoscopic image inpainting: distinct depth maps and images inpainting, in IEEE International Conference on Pattern Recognition (2010), pp. 4101–4104
16. https://vision.middlebury.edu/stereo/data/. Last accessed 28 Dec 2020

Centralized Admission Process: An E-Governance Approach for Improving the Higher Education Admission System of Bangladesh

Pratik Saha, Chaity Swarnaker, Fatema Farhin Bidushi, Noushin Islam, and Mahady Hasan

Abstract Higher education is very important for the overall growth of a country because, through higher education, a country can get more qualified personnel into important positions. So, it is vital to make the entry process into higher education as simple and straightforward as possible. Using a centralized admission system is a big step toward that simplicity. The current admission process in Bangladesh is very chaotic and takes quite a financial and physical toll on the students who want to pursue higher education. Since the process is decentralized, it becomes difficult for the government to analyze the admission data each year and take action based on the data. In this paper, we look at how other countries have solved this issue and propose a system that can fit the educational system of Bangladesh and make the admission process very streamlined. Our proposed system should also make it very easy for the Government of Bangladesh to monitor and analyze admission-related data.

Keywords Centralized admission · E-governance · Higher education

P. Saha (B) · C. Swarnaker · F. F. Bidushi · N. Islam · M. Hasan, Department of Computer Science and Engineering, Independent University, Bangladesh, Dhaka, Bangladesh

1 Introduction

E-governance is becoming increasingly popular around the world. Government services can be made readily available to citizens through e-governance, and these services can become very convenient, efficient, and transparent for the citizens [1]. With the emergence of ICT, governments are actively trying to find out how technology can improve the quality of services provided to citizens. Electronic communication mediums are being considered by central and local governments for the provisioning and delivery of public goods and services [2]. Bangladesh has


embraced the ICT revolution in the last decade or so. The current Bangladeshi government has declared its vision to transform the country into a "Digital Bangladesh" [2] and is determined to use ICT and e-governance to ease any pain points the citizens might have. The main goal of e-governance is not just replacing a process with an online solution; it is a way to rethink and re-engineer processes to make them more efficient. The main difference between digitalization and e-governance is that the focus of the former is to replace existing processes with an online alternative, whereas e-governance looks at the needs of citizens and implements systems that can fulfill those necessities [3]. The current higher education admission process is an excellent candidate for e-governance. It is currently one of the major pain points for students: the process is very disorganized and chaotic, and each year a lot of students apply for admission to different universities and medical colleges for higher education. Some countries follow a centralized admission approach, Chile, for example [4]. Other countries, like Japan, have a decentralized approach to student admission, but one student can apply to only one university [5]. In Bangladesh, the current decentralized process is not very helpful. So, in this paper, we discuss the current problems regarding the admission process, look at how some other countries have solved this issue, and propose a centralized e-governance approach for tackling the issue and making the process more streamlined.

2 Related Work

A manual admission system can become very expensive, with redundant data and a large workforce. Students need to go through redundant processes for admission, which can become cumbersome, and applicants can suffer a lot in terms of the expense and time required to apply for higher education at various institutions [6–8]. To solve these issues, Bangalore University developed an application that centralizes the admission process [6]. In Tanzania, the Central Admission System (CAS) was introduced to register applicants, validate their applications, select them based on their choices, and set criteria for admission into higher education institutions. The intention behind developing this system was to reform the higher education governance system and strengthen quality control in undergraduate admissions [7]. These centralized systems save money, time, and effort for the students. They also reduce the workforce required and increase the speed at which the admission process is completed [6, 7]. However, implementing such large-scale systems can introduce issues as well. The main issue is the lack of IT skills: not every applicant is well versed in using IT systems, and some may fail to register on their own [7]. Infrastructural challenges were also faced in implementing the system, such as poor ICT infrastructure, the absence of a guiding policy for the new admission system, poor penetration of ICTs in Tanzania's advanced secondary schools, and unimpressive human and institutional IT staff capacity. Enhancements to CAS are essential for its main


users, the applicants. But the user fees charged to applicants go to the specific institution, which is one of the biggest obstructions to the enhancement of CAS [7]. Adoption of such systems can also be a very slow process; educational institutions might be reluctant to move to an automated approach from the manual system they have been using for years. The Unified Selection System (SISU), introduced in 2010 by the Brazilian Ministry of Education, faced this issue. In 2013, almost 80% of federal institutions but only 20% of state institutions had adopted SISU. The majority of public institutions are still using the decentralized system due to the lack of rigorous evaluation of SISU [8]. So, just implementing a centralized system is not enough; regular monitoring and evaluation of the process is vital for overall long-term adoption.

3 Problem Statement

In Bangladesh, the admission process begins with the Higher Secondary Certificate (HSC) examination. The grades obtained by students in the HSC exam are vital for the admission process: universities define a minimum grade requirement for applying to their admission tests. Different universities have different requirements and different admission tests. In most cases, each university holds multiple admission tests for different faculties, for example, one admission test for the science unit, another for the arts unit, and so on. So, one student usually needs to buy admission forms from multiple universities and multiple units, and it takes a fair bit of money to apply to so many different admission tests. Apart from the monetary impact, there is a physical impact associated with the admission process as well. The universities have complete authority over when to conduct the admission tests for their respective units, and they usually do not think about possible clashes with the admission tests of other universities. Sometimes, students and their guardians travel to one city for an admission test and then travel overnight for 10–12 h or more to another city to attend the admission test of another university. Usually, there are no living arrangements for admission applicants, so they have to manage their own accommodation, which results in extra monetary cost. The admission test question patterns also vary greatly from university to university, so students need to prepare for various types of question patterns, which causes a fair bit of mental stress as well. In short, we can easily say that the current admission process is very chaotic and not sustainable; it puts huge pressure on admission aspirants and their guardians. Due to the varied nature of the admission process, it is also very difficult for the government to keep track of admission-related data, analyze it, and take quick actions based on it, because the data is not readily available: it takes time to collect the data from the universities, process it, and produce a meaningful report to act on. A more centralized approach can tackle most of the points we have mentioned so far.


So, to summarize, in this paper we are trying to tackle the following issues through our proposed system:

1. Students and guardians face a huge economic and physical impact while trying to get admission into universities and medical colleges for higher education
2. There is no centralized system for applying to universities and medical colleges
3. There is no centralized system for conducting admission tests
4. It is difficult to gather admission-related data and take actions based on the data.

4 Proposed Solution

4.1 Scope

The purpose of our proposed solution is to provide a centralized system for all admission-related activities. Through this system, students will only have to apply once for admission, through our central admission hub (CAH) system. To centralize the admission process, we first have to group the tracks a student can choose after passing the HSC exam. The current process of medical entrance examination is an excellent starting point, as the admission exam of medical colleges is already centralized. One major difference between medical colleges and universities, however, is that a medical college has only one track, whereas universities offer different types of options: students can study engineering, science subjects like mathematics or physics, or commerce-related subjects. It is therefore impossible to hold one unified exam for all university tracks. We have thus decided to break the whole admission system of Bangladesh into five different tracks, each with its own set of subject requirements that students must take during their HSC period. The tracks are Medical, Engineering, Science, Arts, and Commerce, and one student can apply in only one track. Like medical colleges, the universities will be assigned a rank for each track. Through the admission test, selected students with higher marks will get places in the higher-ranked institutions based on available seats. Figure 1 illustrates the overall system concept in a rich picture format, and Fig. 2 shows the overall high-level processes of the system, from which we identified the required modules based on the actions users can perform through the system. Details about each module can be found in the technical paper [9]. The identified modules are:

1. Admission Application Module
2. Payment Module
3. Admission Test Module
4. Institution Ranking Module
5. Report Generation Module


Fig. 1 Rich picture of central admission hub (CAH) system

4.2 Planning

For this project, we have identified the potential high-level tasks that will be required. This is a very high-level estimate and will most likely require changes after the tasks are broken down into low-level tasks. We have also assigned an estimated duration to each task and, based on these durations, prepared a Gantt chart to identify the time required for the project. The detailed task breakdown, estimated durations, and resource calculation can be found in the technical paper [9].


Fig. 2 High-level processes of the CAH application

4.3 Cost Estimation

After identifying the resources, we estimated the hourly salary of the resources and the number of man-hours they will contribute to the project. Based on that, we calculated the total development cost of the project. According to our estimation, the total development cost will be BDT 7,750,400. We think a 30% cushion will be required to accommodate any error in estimation and keep some headroom, so the total development cost of the project would be BDT 10,075,520. We estimated that maintenance will require around 15% of the development cost, so the total maintenance cost in the first year will be BDT 1,511,328, and we assumed an additional 20% maintenance cost increase each year. The detailed breakdown of the cost estimation can be found in the technical paper [9].

5 Feasibility Analysis

5.1 Economic Feasibility

Net-Present Value. In the previous section, we estimated the total development and maintenance costs. Table 1 shows our assumptions and estimations for the system. According to our estimation, the initial development and rollout of the project will take around a year, so no benefit will be gained in that period.


Table 1 Assumptions for calculating economic feasibility

Criteria                                  Assumptions
Estimated applicants in year 1            200,000
Estimated applicant growth each year      10%
Application cost per applicant            BDT 500

Table 2 Estimated net-present value of the proposed system after 3 years (in thousand BDT)

Cash flow description                                  Year 0    Year 1     Year 2     Year 3
Development cost                                       10,075         0          0          0
Operation and maintenance cost                              0     1,511      1,813      2,480
Total present value of lifetime costs                  10,075    11,586     13,399     15,879   (Total: 15,879)
Benefits derived from operation of the new system           0   100,000    110,000    121,000
Total present value of lifetime benefits                    0   100,000    210,000    331,000   (Total: 331,000)
Cumulative lifetime net benefits                      −10,075    98,489    208,187    328,520
Net-present value of this system                                                       315,121

From year 1, the project will start earning money. The primary source of income from this system will be the sale of admission application forms. In 2019, around 975,000 students passed the HSC exam, and 200,000 students got either GPA 5 or GPA 4. Table 1 shows the assumptions we made for calculating the economic feasibility of the system.

Payback Analysis. From Table 2, we can also derive the payback analysis for our project. Figure 3 shows the cumulative lifetime net benefits in graph format, with years on the x-axis and net benefit on the y-axis. From the graph, we can see that the project will start seeing profit quite early after rolling out from development.
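The cash-flow arithmetic behind Tables 1 and 2 can be reproduced in a few lines of Python under the stated assumptions (a 30% cushion on the development cost, first-year maintenance at 15% of the development cost growing 20% per year, and 200,000 applicants at BDT 500 per form growing 10% per year). This is an indicative sketch; the figures in Table 2 additionally reflect the authors' own present-value treatment and rounding.

dev_cost = 7_750_400 * 1.30        # BDT 10,075,520 including the 30% cushion
maintenance = dev_cost * 0.15      # first-year maintenance: BDT 1,511,328
applicants, fee = 200_000, 500     # year-1 applicants and application fee (BDT)

for year in range(1, 4):
    benefit = applicants * fee
    print(f"Year {year}: maintenance {maintenance:,.0f}  benefit {benefit:,.0f}")
    maintenance *= 1.20            # assumed 20% yearly maintenance increase
    applicants = int(applicants * 1.10)  # assumed 10% applicant growth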

5.2 Technical Feasibility

Strengths:
1. Centralized process
2. Easy governance
3. Eases the physical and economic burden on admission applicants
4. In line with the government's vision of a Digital Bangladesh

Weaknesses:
1. The whole admission process needs to be re-engineered
2. Difficult to test the system on a large scale
3. Lack of infrastructure for online admission exams

Opportunities:
1. No other system exists with a similar solution
2. Huge demand for a system like this
3. The University Grant Commission of Bangladesh is looking for ways to move to a centralized admission process

Threats:
1. Stable and secure infrastructure for online exams and processing payment information
2. Maintaining data integrity
3. Ensuring uptime and availability

6 Conclusion

In this paper, we tried to tackle the most important issues in the current admission process. Our main objective when planning the proposed solution was to make the entry process into higher education more seamless so that applicants do not have to go through the hassle they face now; another objective was to make governance of the admission process easier. Our proposed solution addresses both objectives. We acknowledge, however, that this is still a very theoretical solution. The next step would be to develop a POC system based on this proposed model and test its effectiveness at a small scale, then gather the results and check whether any adjustments to the system are needed to increase its effectiveness. The proposal also implies some drastic changes to the overall admission process. For example, a ranking system currently exists only for medical colleges; a university ranking needs to be created, and the criteria for assigning the ranking need to be identified. This is a huge task and will need time, research, and discussion to implement. Also, the admission exam format is currently not unified: some universities hold MCQ-based exams and some do not. Although a lot of reform will be required to streamline the overall system, we strongly believe that the system we have proposed is a step in the right direction and can be perfected through further research and testing.


Fig. 3 Payback analysis over a 3-year period

References

1. R.K. Shrivastava, A.K. Raizada, N. Saxena, Role of e-governance to strengthen higher education system in India. IOSR J. Res. Method Educ. 4(2), 57–62 (2014)
2. S.H. Bhuiyan, Modernizing Bangladesh public administration through e-governance: benefits and challenges. Gov. Inf. Q. 28(1), 54–65 (2011)
3. P. Bhanti, S. Lehri, N. Kumar, E-Governance: an approach towards the integration of higher education system in India. Int. J. Emerg. Technol. Adv. Eng. 2(8) (2012). www.ijetae.com. ISSN 2250-2459
4. J.S. Hastings, J.M. Weinstein, Information, school choice, and academic achievement: evidence from two experiments. Q. J. Econ. 123(4), 1373–1414 (2008)
5. Y.-K. Che, Y. Koh, Decentralized college admissions (2014). Available at SSRN 2399743
6. B.L. Muralidhara, G. Chitty Babu, Centralized admission: a novel student-centric e-governance process. Int. J. Comput. Appl. 66(23), 41–46 (2013)
7. F.G. Mahundu, E-governance: a sociological case study of the central admission system in Tanzania. Electron. J. Inf. Syst. Dev. Countries 76(1), 1–11 (2016)
8. C. Machado, C. Szerman, The effects of a centralized college admission mechanism on migration and college enrollment: evidence from Brazil, in SBE Meetings (2015)
9. Saha, Swarnaker, Bidushi, Islam, Hasan, Technical Paper—Centralized Admission Process—An E-Governance Approach to Improving the Higher Education System of Bangladesh (2020). Retrieved from https://docs.google.com/document/d/1kOLESwAXjGqMvimsDJsEJH5duh3RyLnnGQhlu5MnqKs/edit?usp=sharing

Opinion Mining and Analysing Real-Time Tweets Using RapidMiner

Rainu Nandal, Anisha Chawla, and Kamaldeep Joshi

Abstract Earlier, social media was only a medium for communication, but as technology improves and advances, it has become a channel through which various industries, e-commerce sites, companies, institutions, etc. gain insight from their users about how well their business is doing in a particular area and how they can grow in the market. Apart from social media, there are other sources, such as blogs, review sites, data sets and microblogs, which help them extract data. But as the sources increase, manual mining of the data is not possible; hence, data mining is used. Opinion mining (also known as sentiment analysis or review analysis), being a part of data mining, helps to extract, process and then analyse the emotions hidden in reviews or collected data. The microblogging site Twitter lets its users express their thoughts and emotions within a character limit and provides an API that helps in extracting real-time tweets. In this paper, opinion mining is explained along with the approaches used, and it is shown how RapidMiner is used to analyse real-time tweets using Naïve Bayes. The accuracy of the Naive Bayes algorithm for 100 tweets came out to be 64.29%, with a kappa result of 0.186.

Keywords Data mining · Opinion mining · Naïve Bayes · RapidMiner · Machine learning · Sentiment analysis

1 Introduction

When the data available online are abundant and one wants to make sense of them by converting them into useful information, data mining is used. In data mining, data are first extracted from sources such as e-commerce sites, blogs, microblogs, review sites, social networking sites, etc. After extraction, the data are filtered by pre-processing, as some of the data are useless and some are useful. The pre-processed data are then used by analysts for analysis. Opinion mining, sometimes also known as sentiment analysis or review analysis, is a part of data mining.

R. Nandal · A. Chawla (B) · K. Joshi, University Institute of Engineering and Technology, Maharshi Dayanand University, Rohtak, Haryana 124001, India


The only difference is that in opinion mining the sentiments are extracted from the collected data. Sentiments are related to emotions expressed by people in different forms, such as images, videos, links, GIFs or text. Opinion mining generally deals with opinions expressed in textual format on any platform. Twitter lets its users express their opinions within a limit of 280 characters, in what is known as a Tweet. Twitter also provides an API to access its tweets for processing and analysis. Figure 1 shows the classification of tweets, and Fig. 2 shows the levels of sentiment analysis [1, 2].

Fig. 1 Classification of tweets. Based on subjectivity: subjectivity means personal views on a topic or entity; a view expressed in a personal way is subjective, while a view expressed in a generalized manner is objective. Based on polarity: polarity defines the nature of tweets as positive, negative or neutral.

Fig. 2 Levels of sentiment analysis. Document level: the entire document is classified as positive, negative or neutral. Sentence level: individual sentences of a document are classified on the basis of subjectivity and polarity. Aspect level: all aspects of an entity are identified, and a specific aspect is selected and classified as positive, negative or neutral.


Fig. 3 Sentiment classification techniques [3]

1.1 Approaches for Opinion Mining/Sentiment Analysis

Three approaches through which OM techniques can be applied are as follows. Lexicon-based approach: the collected opinions or reviews are split into individual sentences, pre-processed into a bag of words using stemming or cleaning methods, compared against a collection of predefined sentiment terms (also known as an opinion lexicon), and a sentiment score is generated with the help of a scoring function. Machine learning approach: supervised and unsupervised machine learning algorithms are applied to collected and pre-processed data, part of which is used to train a model while the rest is used to validate it. Hybrid approach: a combination of the above-mentioned techniques (Fig. 3).
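As a toy illustration of the lexicon-based approach just described, the sketch below matches tokens against a small opinion lexicon and sums their polarities with a scoring function; the lexicon entries and scores are invented purely for illustration.

OPINION_LEXICON = {"good": 1, "great": 1, "love": 1,
                   "bad": -1, "poor": -1, "hate": -1}

def sentiment_score(text):
    tokens = text.lower().split()  # crude stand-in for real pre-processing
    score = sum(OPINION_LEXICON.get(t, 0) for t in tokens)
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

print(sentiment_score("i love this great phone"))  # -> positive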

2 Background Work

Pak and Paroubek [4] performed opinion mining on a corpus collected from Twitter and classified the tweets as objective, positive or negative. Agarwal et al. [5] classified tweets as positive, negative or neutral; a single tweet can contain multiple opinions along with factual and subjective segments. Kumar and Sebastian [6] coined a new hybrid approach


which uses natural language processing and machine learning techniques, along with the advantages of dictionary and corpus methods, to classify tweets as positive, negative or neutral. Vishal and Sonawane [7] provided a survey with a comparative analysis of various existing techniques for opinion mining along with evaluation metrics; machine learning algorithms like Naive Bayes, maximum entropy and support vector machine were applied to a number of Twitter data streams, and the authors also discussed the challenges and applications of sentiment analysis on Twitter. Kumar et al. [8] showed that hybrid features, obtained by combining machine learning features like TF and TF-IDF with lexicon features like positive–negative word count and connotation, gave better results in terms of accuracy and complexity when compared with classifiers like Naïve Bayes, KNN, SVM and maximum entropy. Sisodia et al. [9] discussed machine learning methods for the sentiment analysis of movie reviews: after collecting and pre-processing the raw movie reviews, bag of words, TF-IDF and bigram methods were used to select features from the text reviews. Various machine learning techniques, such as the Naive Bayes classifier, decision trees and support vector machines, were used to obtain a sentiment analysis model distinguishing positive- and negative-polarity movie reviews in the dataset, and the performance of the learner-based sentiment analysis models was evaluated using precision, accuracy, recall and f-measures. Munjal et al. developed new approaches for opinion dynamics [10], sentiment analysis [11] and social network analysis [12].

3 Implementation and Results

Step 1. Gathering Tweets: RapidMiner Studio version 9.7 is used, which supports various operators and extensions. Live tweets are fetched from Twitter using the Search Twitter operator.

Step 2. Analysing Tweets for Sentiment: Using "Text Analysis by AYLIEN", the sentiment of each tweet is determined, i.e. whether it is positive, negative or neutral, along with its subjectivity and polarity confidence. Figure 4 shows the Search Twitter and Analyze Sentiment operators. The output of the Search Twitter operator contains fields such as row no., id, when the tweet was created (created at), user details (from user, from user id), whom it is sent to (to user, to user id), the language of the tweet (language), the source of the tweet (source), the tweet content (text), geo-location, etc.

Fig. 4 Search Twitter and analyze sentiment operator


After applying the Analyze Sentiment operator, fields such as polarity, polarity confidence, subjectivity and subjectivity confidence are added, as shown in Fig. 5. Since RapidMiner also provides ways to visualize the results, Fig. 6 shows a visualization of 50 tweets on the basis of subjectivity and polarity.

Step 3. Selection of Columns: Not all columns are required (from user, from user id, etc.), so the useful columns, such as polarity, subjectivity, their respective confidences and the text, are extracted.

Step 4. Replacement of Missing Values: Any missing values are replaced by a value set by the user; it can be zero, null, etc.

Step 5. Conversion of Nominal to Text: This converts any type of data in the dataset to text format.

Step 6. Setting of Roles: The role of an attribute describes how other operators handle that attribute.

Fig. 5 Analyze sentiment operator’s result

Fig. 6 Tweet visualizations


Step 7. Filtering of Examples: Tweets are filtered on the basis of words decided by the user; this refines the results obtained after performing the above steps.

Step 8. Replacement: Replace 1: wherever 'https:' is found in the tweets, it is replaced with 'links'. Replace 2: wherever '#' is found, it is replaced with 'hashtag'. Replace 3: wherever '@' is found, it is replaced with 'ATT' (see the sketch after step 10). Figure 7 shows the implementation of steps 3–8, and Fig. 8 shows the results after implementing steps 3–8.

Step 9. Process the Documents: Tokenization, case transformation, filtering tokens by length, filtering stop words and generating n-grams are performed using the operators shown in Fig. 9; Fig. 10 shows the result after processing the documents.

Step 10. Validation: This is the most crucial step, in which the accuracy of a learning model is determined. The validation operator has two subprocesses, a training subprocess and a testing subprocess, which together estimate the performance of the learning operator. The training subprocess builds the model; the trained model is then applied in the testing subprocess, during which it is measured how well the model performs. Here, a Naïve Bayes model is used, shown in Fig. 11 along with the operators used in the validation process. It took about 2.35 min for the Naive Bayes algorithm to process 100 tweets collected on the keyword 'Farmers'.
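The step-8 replacements amount to plain string substitutions; a minimal Python equivalent is shown below (the RapidMiner Replace operator is configured with the same patterns, and the sample tweet is invented).

def clean_tweet(text):
    text = text.replace("https:", "links")  # Replace 1
    text = text.replace("#", "hashtag")     # Replace 2
    text = text.replace("@", "ATT")         # Replace 3
    return text

print(clean_tweet("@farmer loves #harvest https://t.co/xyz"))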

Fig. 7 Shows implementation of steps 3–8

Fig. 8 Results after implementing steps 3–8


Fig. 9 Process document from data operator

Fig. 10 Result after processing the document

Fig. 11 Subprocess of validation operator

As shown in Fig. 12, the accuracy of the Naive Bayes algorithm is 64.29% and the kappa result is 0.186.

Fig. 12 Result of Naïve Bayes model
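For readers without RapidMiner, the validation pipeline of steps 9 and 10 can be approximated in scikit-learn, as sketched below; this is a stand-in rather than the RapidMiner process itself, and the tweet texts, labels, vectorizer settings and fold count are placeholders.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# placeholder tweets and AYLIEN-style polarity labels (invented examples)
tweets = ["farmers celebrate good harvest", "support for farmers is great",
          "farmers face bad weather again", "poor prices hurt farmers",
          "farmers meeting scheduled today", "farm bill discussion continues"]
labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

pipeline = make_pipeline(
    CountVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
    MultinomialNB(),
)
scores = cross_val_score(pipeline, tweets, labels, cv=2, scoring="accuracy")
print(scores.mean())  # analogous to the 64.29% accuracy reported above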


4 Conclusion and Future Work

RapidMiner allows users to perform any desired search without the restriction of a predefined dataset. As there is no delay in the collection and delivery of the data, the data are called real-time or live data. Live data fetched from Twitter are pre-processed using the various operators provided by RapidMiner. RapidMiner's third-party extensions make the analysis and pre-processing smooth; for example, the AYLIEN extension employed here provides an all-in-one integrated platform for performing real-time sentiment analysis independently. As a Naïve Bayes model was used here, the research can be extended to various other algorithms, and a comparative analysis could be carried out. Unsupervised learning techniques for real-time sentiment analysis could also be explored, and more features, such as word cloud generation, can be appended. Sentiment analysis can be performed at the aspect level, which in turn extrapolates it to opinion mining. One may also experiment with larger training and testing sets, with multi-label and multi-language settings, and with emoticons for classification.

References

1. W. Medhat, A. Hassan, H. Korashy, Sentiment analysis algorithms and applications: a survey. Ain Shams Eng. J. 5(4), 1093–1113 (2014)
2. A. Montoyo, P. Martínez-Barco, A. Balahur, Subjectivity and sentiment analysis: an overview of the current state of the area and envisaged developments. Decis. Support Syst. 53, 675–679 (2012)
3. L. Ziora, The sentiment analysis as a tool of business analytics. Stud. Ekon. Zesz. Nauk. Uniw. Ekon. W Katowicach, No. 281P (2016). http://Blog.Aylien.Com/Building-A-Twitter-Sentimen
4. A. Pak, P. Paroubek, Twitter as a corpus for sentiment analysis and opinion mining, in Proceedings of the Seventh Conference on International Language Resources and Evaluation (2010), pp. 1320–1326
5. A. Agarwal, B. Xie, I. Vovsha, O. Rambow, R. Passonneau, Sentiment analysis of Twitter data, in Proceedings of the ACL 2011 Workshop on Languages in Social Media (2011), pp. 30–38
6. A. Kumar, T.M. Sebastian, Sentiment analysis of Twitter. IJCSI Int. J. Comput. Sci. Issues (2012)
7. A. Vishal, S.S. Sonawane, Sentiment analysis of Twitter data: a survey of techniques. Int. J. Comput. Appl. 139(11), 5–15 (2016)
8. H.M. Kumar, B.S. Harish, H.K. Darshan, Sentiment analysis on IMDb movie reviews using hybrid feature extraction method. Int. J. Interact. Multimedia Artif. Intell. 5(5) (2019)
9. D.S. Sisodia, S. Bhandari, N. Keerthana Reddy, A. Pujahari, A comparative performance study of machine learning algorithms for sentiment analysis of movie viewers using open reviews, in Performance Management of Integrated Systems and its Applications in Software Engineering (Springer, Singapore, 2020), pp. 107–117
10. P. Munjal, S. Kumar, L. Kumar, A. Banati, Opinion dynamics through natural phenomenon of grain growth and population migration, in Hybrid Intelligence for Social Networks (Springer, Cham, 2017), pp. 161–175
11. P. Munjal, M. Narula, S. Kumar, H. Banati, Twitter sentiments based suggestive framework to predict trends. J. Stat. Manag. Syst. 21(4), 685–693 (2018)

Opinion Mining and Analysing Real-Time Tweets Using RapidMiner

221

12. P. Munjal, L. Kumar, S. Kumar, H. Banati, Evidence of Ostwald Ripening in opinion driven dynamics of mutually competitive social networks. Phys. A Stat. Mech. Appl. 522, 182–194 (2019). https://doi.org/10.1016/j.physa.2019.01.109

Household Solid Waste Collection Cost Estimation Model: Case Study of Barranquilla, Colombia

Thalía Obredor-Baldovino, Katherinne Salas-Navarro, Miguel Santana-Galván, and Jaime Rizzo-Lian

Abstract This article presents a model to estimate the cost of household solid waste collection that includes both the costs of the collection process and the fixed and variable costs of the waste collection vehicles and labor involved in the process. The model was validated in the city of Barranquilla, covering the five strategic residential areas into which the city is divided. Three scenarios were defined to assess the model’s performance in terms of developing improvements to the collection process and reducing its costs. Keywords Reverse flow · Cost model · Collection method · Household wastes

1 Introduction

At present, the desire to consume is growing as fast as the population, and both factors are growing hand in hand. Global trends in population growth, consumerist behavior, and environmental crises represent major issues for society today. The constant interaction between the businesses that supply products and the families that consume them generates increasing amounts of waste at the end of these products' life cycles. Solid waste generation is growing at an accelerated pace due to low levels of awareness and the failure to implement environmental programs that promote greater environmental responsibility by all players involved in the economic system, including retailers, manufacturers, institutions, and society. Ultimately, the problems caused by the excessive generation and inadequate disposal of waste affect everyone, as people are involved in each of these processes. In many cases, solid wastes are not adequately managed, and the practices adopted by society to dispose of them often cause negative effects by ignoring future


consequences that may arise [1]. Even though policies and laws that focus efforts on the adequate logistics management of solid wastes are increasingly implemented in European countries [2], in Colombia laws have not been adequately implemented to urge society to responsibly separate household wastes at the source, and the organizations responsible for collecting such waste have displayed little interest in this matter, even though good reverse-chain management of waste treatment would produce a favorable rate of return. In the city of Barranquilla alone, each inhabitant generates 0.895 kg of waste per day [3]. In this context, a key problem is the inadequate management and handling of wastes in the city, and particularly the low rate of recycling [3]. The collection process is the stage with the greatest incidence on the recovery of wastes for subsequent processing, because it accounts for the largest share of the overall economic operation, with between 50 and 90% of costs associated with it [4].

It should be noted that in many countries the cost of the waste collection process has been reduced through strategic solutions, such as genetic algorithms that optimize routes to a collection center or a transfer station, located with the assistance of tools that help find an optimal location, such as multi-target models [5–7] and location-routing problems [8], presented as complete decisions and proposals for waste treatment and reuse. However, solutions that address root causes produce better results by optimizing the designed systems, though in the case of waste collection the level of difficulty increases because the social component is the most difficult to manage in terms of raising awareness and education. Good waste management at the source contributes significantly to cost reductions for the companies responsible for this task, as it improves the rate of return for waste recycling and supports sustainable development.

In Italy, a study of a sample of municipalities [9] estimated the cost of collection of paper and paperboard, multiple materials (glass, plastic, metal), non-classified organic wastes, and residues. It highlights that the cost depends on the type of treatment each waste requires and is also affected by population density and size, the percentage of selective collection, the percentage collected at homes, and private service. In Spain [10], an empirical study of solid waste management costs focusing on Galicia indicates that cooperation between municipalities in waste collection and disposal produces cost savings through the standardization of activities. In Belgium, a study compared the costs of private and public collection of residual household waste [11] and proposed a joint-venture approach to improve the service. A location-allocation problem was developed for the household waste collection network in Chile [12]. A study analyzed the impact of the COVID-19 pandemic on municipal waste collection management in Canada [13], considering changes in garbage, recycling, deposit-return bottles, diversion, education, and other aspects. The impact of the COVID-19 pandemic on waste-to-energy and waste-to-material policies was examined in China [14].
Given the advantages of implementing solid waste separation at the source prior to final disposal, we highlight the importance of intensifying the systems that may be applied in society in order to assess their performance over time and their


costs in practice [15]. In the literature, several authors have developed cost models that enable comparisons between existing solid waste collection methods [1, 11, 12, 16–18]. Multi-objective models have been developed for sustainable logistics networks, optimizing costs, environmental impact, social responsibility, distance, time, and labor costs [8, 19, 20]. A study in two locations in Mexico City optimized the waste collection routes, collecting 40% more waste than the reference process [4]. Based on the above considerations, we can infer that assessing the household waste collection process by means of a cost model contributes significantly to optimizing resources, decision-making, the development of improvements to logistics, efficiency, and sustainable development.

Considering the above, a cost estimation model is proposed for the household waste collection process. The first section below describes the methodology used to design the model. The structure of the cost estimation model and the scenarios used to study it are described in the second section. The results obtained from the application of the model are presented in the following section, and lastly, the conclusions and suggestions for future research in this field are given in the final section.

2 Methodology

This section describes the proposed methodology for the design of a cost and profit estimation model of the household solid waste collection process in the city of Barranquilla, Colombia, with the objective of finding scenarios that would optimize the collection process through the reduction of vehicle operating costs and logistics costs, among others. Initially, a diagnosis of household solid waste collection in the city was performed, taking into consideration demographic and socio-cultural information, the characteristics of the city's collection process, the amount of solid waste generated in the various areas, the frequency of collection in each area, and the planning methods used for collection. Additionally, the most relevant characteristics and behaviors of the current collection system were identified and studied. Based on this characterization, a theoretical model of the current household collection system was developed, taking into consideration the subdivision of the city by zone or territory, the neighborhoods assigned to each territory and their general description, as well as information from the Comprehensive Solid Waste Management Plan developed by the city's main solid waste collection organization on waste production by socioeconomic level, trip distances and numbers, and vehicle capacity. Once the baseline information was included in the theoretical model, the basic structure of the model was established by defining the origin and destination points of each route, the number of kilometers traveled, the tons of waste produced, the number of trips, the number of trucks assigned, and the verification of capacity. It also includes data on the income earned from reusable wastes, fixed usage


costs, variable usage costs, collection personnel expenses, operating costs, logistics costs, and annual cost. Two scenarios were proposed based on classification at the source, considering the basic cost parameters and structure and the estimation of profits in the proposed model. Improvements in the solid waste collection process were proposed based on the responsible involvement of households.

3 Proposed Model

The proposed model is designed to estimate the costs and units of the household solid waste collection process in Barranquilla, focused on calculating the logistics costs of the process, fixed and variable costs of vehicle use, payroll costs of the personnel involved in household waste collection, and specific load costs.

3.1 Parameters of the Model

The following parameters are taken into consideration in the cost estimation model of the household solid waste collection process:

- i: origin, i = {1, 2, …, n}
- j: destination, j = {1, 2, …, m}
- n_imes: number of kilometers traveled (km/month)
- d_i: distance of route i (km)
- f: frequency of household solid waste collection per week
- P_imes: waste production by route (tons/month)
- N_i: number of persons on route i (units)
- PPC: per capita production indicator (kg/inhabitant-day)
- m_mes: number of trips (times/month)
- v: number of truck dispatches (times/day)
- CAP_imes: verified capacity (tons/month)
- CAP_j: vehicle capacity (tons)
- X_ij: number of vehicles sent from origin i to destination j (units)
- CUF_ij: vehicle usage fixed cost ($/month)
- CF_j: fixed vehicle cost ($/month)
- C_dj: vehicle depreciation expense ($/month)
- C_rj: vehicle running expenses ($/month)
- C_sj: cost of mandatory vehicle accident insurance policy ($/month)
- C_pj: cost of vehicle parking ($/month)
- CUV_ij: variable cost of vehicle usage ($/km-month)
- CV_j: variable cost of vehicle usage ($/km)
- C_jm: vehicle maintenance cost ($/km)
- C_jnr: cost of tires and repairs ($/km)
- C_jc: fuel costs ($/km)
- C_jli: vehicle cleaning cost ($/km)
- C_jlu: vehicle lubrication cost ($/km)
- CRRH: payroll of collection personnel ($/month)
- C_con: driver's salary ($/month)
- C_Ay1: assistant 1 salary ($/month)
- C_Ay2: assistant 2 salary ($/month)
- CO: operating cost ($/month)
- CRRHT_i: payroll cost of collection personnel on route i ($/month)
- CL: logistics cost ($/month)
- CE: specific cost ($/ton)
- CT: total annual cost ($/year)
- U_ik: profit from recycling ($/month)
- M_ik: quantity of material product per route (ton/month)
- PR_k: price of material ($/ton)

3.2 Definition of the Model

The parameters are found by taking into consideration the origin–destination routes, the number of kilometers traveled on each route, waste production in the city of Barranquilla in tons/month, the number of trips, the number of trucks assigned, and the verification of capacity. Origin–destination: this is the distance from the waste collection facilities to the destination point in each territory of the city of Barranquilla. For this study, 5 territories are considered: zone 1, zone 2, zone 3, zone 4, and zone 5, which are displayed in Fig. 1 with their respective centroids, in order to determine the household solid waste collection routes. Based on the distances found from the centroids of each zone, 5 household solid waste collection routes are defined, as displayed in Fig. 2. On their routes, the vehicles leave the facilities of the household waste collection service provider and make their first trip; once their capacity has been filled, they travel to the landfill to unload, then travel back to the territory for a second trip, and once filled, they return a second time to the Pocitos landfill to unload. After completing 2 pick-up routes and 2 unloads, the vehicles return to the organization's facilities at the end of their collection shift. The number of kilometers traveled is the distance covered each day by a vehicle on the assigned route, multiplied by the weekly frequency of collection, as specified in the waste collection program [3]. Equation 1 shows the calculation of kilometers traveled per month:

$$ n_{i,\mathrm{mes}} = 4\, d_i\, f \tag{1} $$


Fig. 1 Macro-household solid waste collection routes in Barranquilla, Colombia [3]

Fig. 2 Distances by assigned route

Tons of waste produced per month is calculated based on waste production in each zone using the per capita production index (PPC) obtained from the PGIRS [3] and the total population of each zone according to demographic information of the city of Barranquilla. Equation 2 presents the production of solid waste by route, and Eq. 3 the number of trips per month:

$$ P_{i,\mathrm{mes}} = \frac{30\, N_i\, (\mathrm{PPC})}{1000} \tag{2} $$

$$ m_{\mathrm{mes}} = 4\, v\, f \tag{3} $$

Equation 4 presents the verified capacity of the vehicles that transport household solid waste to satisfy the requirements of each zone. The fixed cost of usage is obtained by summing the fixed cost of each assigned vehicle (see Eq. 5).

$$ \mathrm{CAP}_{i,\mathrm{mes}} = \left( \sum_{j=1}^{m} \mathrm{CAP}_j\, X_{ij} \right) m_{\mathrm{mes}} \;\ge\; P_{i,\mathrm{mes}} \tag{4} $$

$$ \mathrm{CUF}_{ij} = \sum_{j=1}^{m} X_{ij}\, \mathrm{CF}_j \tag{5} $$

The vehicle's fixed costs include depreciation, vehicle running expenses, mandatory traffic accident insurance, and the cost of vehicle parking (see Eq. 6). Equation 7 shows the variable cost of usage of the vehicles of the waste collection service provider, which includes vehicle maintenance costs, the cost of tires and repairs, the cost of fuel, vehicle cleaning costs, and the cost of vehicle lubrication (see Eq. 8).

$$ \mathrm{CF}_j = \sum_{j=1}^{m} \left( C_{dj} + C_{rj} + C_{sj} + C_{pj} \right) \tag{6} $$

$$ \mathrm{CUV}_{ij} = \sum_{j=1}^{m} X_{ij}\, \mathrm{CV}_j\, n_{i,\mathrm{mes}} \tag{7} $$

$$ \mathrm{CV}_j = \sum_{j=1}^{m} \left( C_{jm} + C_{jnr} + C_{jc} + C_{jli} + C_{jlu} \right) \tag{8} $$

Equation 9 represents the labor cost of the household waste collection personnel, including the salaries of drivers and assistants. Equation 10 displays the cost of the collection operation as the sum of fixed costs, variable costs, and labor costs, and Eq. 11 gives the payroll cost of the collection personnel scaled by the number of vehicles assigned to route i.

$$ \mathrm{CRRH} = C_{\mathrm{con}} + C_{\mathrm{Ay1}} + C_{\mathrm{Ay2}} \tag{9} $$

$$ \mathrm{CO} = \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \mathrm{CUF}_{ij} + \mathrm{CUV}_{ij} + \mathrm{CRRH} \right) \tag{10} $$

$$ \mathrm{CRRHT}_i = \mathrm{CRRH} \sum_{j=1}^{m} X_{ij} \tag{11} $$

According to Antonio et al. [4], the solid waste collection process accounts for between 50 and 90% of overall costs; consequently, in Eq. 12 the collection operation is taken to account for 90% of the logistics cost. Equation 13 displays the specific cost per ton, and Eq. 14 the total annual cost, which represents the annual investment in the collection process. Equations 15 and 16 estimate the profit from recycling and the quantity of recoverable material per route and material type.

$$ \mathrm{CL} = \frac{\mathrm{CO}}{0.90} \tag{12} $$

$$ \mathrm{CE} = \frac{\mathrm{CL}}{P_{i,\mathrm{mes}}} \tag{13} $$

$$ \mathrm{CT} = 12\, \mathrm{CL} \tag{14} $$

$$ U_{ik} = \sum_{k=1}^{m} M_{ik}\, \mathrm{PR}_k \tag{15} $$

$$ M_{ik} = \frac{M_{iz}\, M_{kz}}{M_z} \tag{16} $$
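To make the cost structure concrete, the following is a minimal Python sketch of Eqs. (1)–(3), (9), and (12)–(14) for a single route; every numeric value is a hypothetical placeholder, not data from the Barranquilla study, and the vehicle-assignment sums of Eqs. (4)–(8) are collapsed into two aggregate inputs (CUF, CUV).

```python
# Minimal sketch of the cost-estimation equations; all numbers are
# hypothetical placeholders, not values from the Barranquilla study.

def route_volumes(d_i, f, N_i, PPC, v):
    """Eqs. (1)-(3): monthly kilometers, tons of waste, and trips for one route."""
    n_imes = 4 * d_i * f               # Eq. (1): km traveled per month
    P_imes = 30 * N_i * PPC / 1000     # Eq. (2): tons of waste per month
    m_mes = 4 * v * f                  # Eq. (3): trips per month
    return n_imes, P_imes, m_mes

def monthly_costs(CUF, CUV, C_con, C_ay1, C_ay2, P_imes):
    """Eqs. (9), (12)-(14): payroll, logistics, specific, and annual cost."""
    CRRH = C_con + C_ay1 + C_ay2       # Eq. (9): payroll of one crew
    CO = CUF + CUV + CRRH              # operating cost for this single route
    CL = CO / 0.90                     # Eq. (12): operation is 90% of logistics cost
    CE = CL / P_imes                   # Eq. (13): specific cost ($/ton)
    CT = 12 * CL                       # Eq. (14): total annual cost
    return CL, CE, CT

# Hypothetical route: 25 km, 10,000 inhabitants, PPC = 0.895 kg/inhabitant-day.
for f in (3, 6):                       # current frequency vs. scenario 2
    n, P, m = route_volumes(d_i=25, f=f, N_i=10_000, PPC=0.895, v=2)
    print(f"f={f}: {n} km/month, {P:.1f} t/month, {m} trips/month")

CL, CE, CT = monthly_costs(CUF=2.0e7, CUV=1.5e7, C_con=2.0e6,
                           C_ay1=1.5e6, C_ay2=1.5e6, P_imes=268.5)
print(f"CL={CL:,.0f} $/month, CE={CE:,.0f} $/ton, CT={CT:,.0f} $/year")
```

Running the loop illustrates the lever used in scenario 2 below: doubling f doubles the monthly kilometers and trips while the tons produced stay constant, which is why fewer vehicles can cover the same zone.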

3.3 Scenarios

The proposal is to develop the current scenario and two alternative scenarios using the proposed model, analyze the results, and establish which option improves the household collection process. These scenarios were designed taking into consideration the same basic cost parameters and structure, in order to enable comparisons between scenarios.

Current scenario. This scenario represents the current state of the household collection process in Barranquilla and is the baseline against which improvements in costs and profits are sought in the other scenarios.

Scenario 1. This scenario is aimed at improving profits through the separation of wastes at the source, involving the classification of recyclable materials such as plastic, glass, paperboard, metals, aluminum, and paper. No changes were made to the basic structure and costs because the same vehicles would be used in both scenarios, given that the people who perform the waste classification are located at the landfill site. The same personnel can classify the wastes prior to their final disposal by sealing off the cells in which they are deposited. The wastes classified at the landfill are treated or recycled using the procedures of the waste collection company and/or its partners. To this end, at the source the wastes must be separated into color-coded bags (as defined in a communications campaign) to facilitate their identification and classification at the landfill.


Scenario 2. Scenario 2 was developed in order to improve on both the current scenario and scenario 1, since neither of them involves any cost reductions. In order to reduce collection costs, it was decided to increase the frequency (f) from 3 to 6 times a week, defined as the number of times the compactor works to fulfill local demand, which increases the number of trips per month (m_mes = 4vf). The change in this parameter also affects the number of kilometers traveled per month (n_imes = 4d_i f, Eq. 1), because more kilometers are traveled. It should be noted that since the compactors have a given capacity, they must make two trips. At the same time, fewer trucks are required to meet the requirements of each zone.

4 Results and Discussion

Considering that the costs defined in the proposed model may be affected by the amount of waste generated in each zone, the total and recyclable amounts of waste produced by each zone were identified. This analysis shows that the recyclable wastes produced in the greatest quantities are plastic, paperboard, paper, and glass, with metals such as aluminum in last place, as indicated in Fig. 3. The zone that produces the most of these reusable wastes is zone 3, and the one that produces the least is zone 1. This may be related to the geographic location of the zones, their population densities, and socioeconomic differences between zones, particularly in terms of consumption habits. In terms of the costs calculated in each scenario, Fig. 4 shows that scenario 1 has higher costs in all zones compared to scenario 2, whose costs are 39% lower. Zone 3 is also the highest-cost zone in both scenarios 1 and 2. Figures 5 and 6 highlight that scenario 1 has the greatest rate of variable and operating costs. It was also found that zone 3 has the highest usage rate in both scenarios, whereas the lowest usage rate is in zone 1. This is because zone 1 produces the least waste per month, which implies that a smaller fleet of vehicles can serve this zone, as opposed to zone 3, which produces the highest amount of household solid waste.

Fig. 3 Produced waste by zone (tons of paper, paperboard, plastic, glass, textile, wood, and metal for zones 1–5)

Fig. 4 Total cost by zones (total cost in COP for scenarios 1 and 2, zones 1–5)

Fig. 5 Cost in scenario 1 (vehicle usage fixed cost, variable cost of vehicle usage, payroll cost of collection personnel, operating cost, and logistics cost by zone, in COP)

Fig. 6 Cost in scenario 2 (vehicle usage fixed cost, variable cost of vehicle usage, payroll cost of collection personnel, operating cost, and logistics cost by zone, in COP)

Details are also provided on the total or logistics costs of the collection process, indicating that estimated operating costs related to transportation account for 90% of the waste collection process, whereas the remaining 10% are not considered direct costs for the purposes of the study but are nonetheless part of the cost of collection. Payroll expenses for collection in scenario 1 are higher than in scenario 2 for all zones, and the payroll expenses of zone 3 are higher than in all other zones, whereas zone 1 has the lowest payroll costs. This behavior is associated with the number of vehicles assigned in scenarios 1 and 2 and to each zone: payroll expenses in the solid waste collection process increase proportionally to the number of vehicles used. Scenario 1 displayed higher operating costs than scenario 2, and the cost of transporting each ton in scenario 1 is greater than in scenario 2 because logistics costs


are greater in scenario 1. However, in reviewing the zones, zone 1 was found to have the highest cost per ton, whereas zone 3 has the lowest. From this we can conclude that the specific cost is inversely related to production: the greater the logistics cost and the lower the production of the zone, the higher the cost per shipped ton. In unit terms zone 1 may look more expensive, but when the unit cost is multiplied by the number of tons, the total cost of zone 3 is still higher than that of zone 1. Lastly, regarding collection costs, there are two key determining parameters: the tons produced by each zone and the number of vehicles. The assignment of vehicles is directly related to the amount of waste produced by each zone, so both parameters vary in the same direction, i.e., more vehicles must be sent to a zone with greater waste quantities. Scenario 2 was framed so as to increase the frequency from 3 to 6, which enabled using fewer vehicles to cover a zone; consequently, collection costs related to the number of vehicles are reduced in this scenario, making it much more economical than scenario 1. In summary, the model developed for calculating household waste collection costs is highly flexible, in that it can be applied to different scenarios, making it practical and versatile. It enables changing the values of certain parameters in order to find the best solutions for reducing collection costs, and it can be used to evaluate other possible scenarios that may arise from new improvements to the process promoted by the city of Barranquilla.

5 Conclusions

The current household waste collection process in the city under study needs to be optimized and/or improved, particularly due to the low level of recycling and reverse flow and the need to update the number of recyclers and collection points; this leads to high costs in the collection process and a lack of citizen awareness of waste collection and care for the environment. The proposed model can be adapted, as demonstrated in the presented scenarios, which included the quantity of recoverable wastes in each zone by type of material, thereby enabling the calculation of the costs generated by the separation of wastes at the source into two or more fractions. The results displayed by the model in the proposed scenarios show that scenario 2 is more efficient because it includes a methodology for separating wastes at the source and uses a large part of the fleet to satisfy the needs of the different zones. This scenario generates value because of the tons of reusable wastes that are collected, while at the same time reducing the number of vehicles used, which reduces the cost of vehicle use, and increasing the frequency of household waste collection from 3 to 6 times per week. By enabling changes to input parameters, the cost model becomes a good decision-making tool for the household collection process, taking into consideration the methodology and each of its components. Based on the model, the proposal is to improve the household solid waste collection process through the implementation


of separation of household solid wastes at the source, to be performed by the users; to identify shortcomings from the outset of the transportation activity for implementation of the reverse flow; and to plan the reverse flow of household solid wastes, both usable and non-usable, in order to make use of the volumes of recyclable wastes produced by the city and to implement processes such as power generation from burning leachates to promote greater innovation. As a supporting strategy and incentive, a rate reduction could be applied to certain components of the total solid waste collection service in order to encourage the community to actively participate in the reverse flow of wastes. The recyclers must be incorporated into the formal economy, and support must be provided to establish collection centers in the city in order to foster a culture of caring for the environment and treatment of solid waste by small companies. Environmental awareness campaigns must be conducted in the city's neighborhoods and in basic, middle, and higher education institutions, and support must be sought from government entities to implement legislation that promotes the separation of solid wastes at the source. A positive impact can also be achieved in the household solid waste collection process by studying the frequency of collection and the number of vehicles assigned to each zone based on their capacity, and by standardizing process activities such as the time elapsed between vehicle stops and its relationship with fuel consumption. Future research may be aimed at developing models to optimize household waste collection routes; using the collected solid waste to produce energy; using vehicles with different mechanical characteristics; locating collection centers; and implementing selective routes in order to reduce costs, increase the useful life of the landfill, formalize the recyclers, and put an end to uncovered dump sites. The proposed model does not take into consideration commercial or industrial areas that generate wastes, nor their final disposal process, which may be an opportunity for future research. Lastly, it is suggested to implement a model for waste collection using the river, taking into consideration that this is a field that has not been explored much by researchers and may have great potential for progress in waste management and control.

References

1. J. Groot, X. Bing, H. Bos-Brouwers, J. Bloemhof-Ruwaard, A comprehensive waste collection cost model applied to post-consumer plastic packaging waste. Resour. Conserv. Recycl. 85, 79–87 (2014)
2. A. Iriarte, X. Gabarrell, J. Rieradevall, LCA of selective waste collection systems in dense urban areas. Waste Manage. 29(2), 903–914 (2009)
3. Barranquilla Mayor's Office, Integral Solid Waste Management Plan 2016–2027 (2015), pp. 1–178
4. J. Antonio, A. Aguilar, M. Eduardo, J. Zambrano, Improvement of solid waste collection service using GIS tools: a case study. Ingeniería 19(2), 118–128 (2015)
5. N.S.Z. Hervert, F.S. Galván, H.B. Santos, R. Rodríguez, J.J.H. Moreno, A.R. Rojas, Designing a waste collection system of plastic bottles using Checkland methodology optimised by two mathematical models. Rev. Virtual Pro 167, 1–29 (2015)
6. T. Obredor-Baldovino, E. Barcasnegras-Moreno, N. Mercado-Caruso, K. Salas-Navarro, S.S. Sana, Coverage reduction: a mathematical model. J. Adv. Manuf. Syst. 17(03), 317–331 (2018)
7. K. Salas-Navarro, H. Maiguel-Mejia, J. Acevedo-Chedid, Inventory management methodology to determine the levels of integration and collaboration in supply chain. Ingeniare 25(2), 326–337 (2017)
8. C. Prodhon, C. Prins, A survey of recent research on location-routing problems. Eur. J. Oper. Res. 238(1), 1–17 (2014)
9. G. Greco, M. Allegrini, C. Del Lungo, P.G. Savellini, L. Gabellini, Drivers of solid waste collection costs. Empirical evidence from Italy. J. Cleaner Prod. 106, 364–371 (2015)
10. G. Bel, X. Fageda, Empirical analysis of solid management waste costs: some evidence from Galicia, Spain. Resour. Conserv. Recycl. 54(3), 187–193 (2010)
11. R. Jacobsen, J. Buysse, X. Gellynck, Cost comparison between private and public collection of residual household waste: multiple case studies in the Flemish region of Belgium. Waste Manag. 33(1), 3–11 (2013)
12. C. Blazquez, G. Paredes-Belmar, Network design of a household waste collection system: a case study of the commune of Renca in Santiago, Chile. Waste Manage. 116, 179–189 (2020)
13. E. Ikiz, V.W. Maclaren, E. Alfred, S. Sivanesan, Impact of COVID-19 on household waste flows, diversion and reuse: the case of multi-residential buildings in Toronto, Canada. Resour. Conserv. Recycl. 164, 105111 (2021)
14. C. Zhou, G. Yang, S. Ma, Y. Liu, Z. Zhao, The impact of the COVID-19 pandemic on waste-to-energy and waste-to-material industry in China. Renew. Sustain. Energy Rev. 110693 (2021)
15. L. Shearer, B. Gatersleben, S. Morse, M. Smyth, S. Hunt, A problem unstuck? Evaluating the effectiveness of sticker prompts for encouraging household food waste recycling behaviour. Waste Manage. 60, 164–172 (2017)
16. A.P. Gomes, M.A. Matos, I.C. Carvalho, Separate collection of the biodegradable fraction of MSW: an economic assessment. Waste Manage. 28(10), 1711–1719 (2008)
17. G. Jaramillo Henao, L.M. Zapata, Aprovechamiento de los residuos sólidos orgánicos en Colombia. Universidad de Antioquia (2008)
18. S.M. Darmian, S. Moazzeni, L.M. Hvattum, Multi-objective sustainable location-districting for the collection of municipal solid waste: two case studies. Comput. Ind. Eng. 150, 106965 (2020)
19. K. Govindan, P. Paam, A.R. Abtahi, A fuzzy multi-objective optimization model for sustainable reverse logistics network design. Ecol. Ind. 67, 753–768 (2016)
20. C. Prakash, M.K. Barua, An analysis of integrated robust hybrid model for third-party reverse logistics partner selection under fuzzy environment. Resour. Conserv. Recycl. 108, 63–81 (2016)

Tomato Sickness Detection Using Fuzzy Logic

L. Vijayalakshmi and M. Sornam

Abstract In this manuscript, experiments were conducted to compare two algorithms that identify tomato diseases. This research work aimed to automatically classify and detect plant diseases, particularly tomato plant diseases. In the proposed method, a Res Net CNN architecture was used, hybridizing the fuzzy c-means and edge detection algorithms in the fully connected layer to detect tomato diseases. Diseases that commonly occur in tomato plants, such as gray spot, bacterial canker, and late blight, were detected, and the proposed method achieved 97.01% accuracy, which compares favorably with previously established state-of-the-art techniques.

Keywords Fuzzy c-means clustering · Image processing · Convolution neural network · Edge detection

1 Introduction

Nowadays, farmers are trying to use various modern methods for farming. Implementing these methods increases [1] long-term, site-specific, whole-farm production, profitability, efficiency, and productivity. In this work, a convolution neural network and the fuzzy c-means clustering algorithm are used to identify tomato plant and leaf diseases. Sustainable management of natural resources and environmentally acceptable technologies, such as soil conservation and biodiversity protection, as well as the adoption of modern farming practices, are vital for holistic rural growth. Recent advances in hardware and technology have allowed the evolution of deep convolution neural networks (CNN) and a growing number of applications, including complex tasks such as object recognition and image classification. In this context, this research focuses on collecting data on diseases in tomato plants and training a model for disease detection. Tomato is an attractive crop that grows in


the Kharif season, the Rabi season, and also in the summer season, and it is cultivated throughout the country. India is the world's second-largest producer of tomatoes. We will therefore see how to maintain ideal [2] conditions to control tomato diseases. Fuzzy logic uses linguistic variables, classified as very low, low, average, high, and very high; these variables have values associated with them and are used in the representation of fuzzy sets. Diseases in tomato plants have been studied mainly in the technical literature, mostly focusing on the biological characteristics of the diseases. For instance, studies on tomato [3] show how vulnerable a plant is to being affected by diseases. The tomato plant disease problem is a worldwide issue. The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. It can be shown that, under rather general assumptions for an image formation model, discontinuities in image brightness are likely to correspond to:

1. discontinuities in depth,
2. discontinuities in surface orientation,
3. changes in material properties, and
4. variations in scene illumination.

In the ideal case, the result of applying an edge detector to an image may lead to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, and curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm to an image may considerably reduce the amount of data to be processed and may filter out information that is regarded as less relevant, while preserving the important structural properties of the image. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may be considerably simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edges extracted from non-trivial images are often hampered by fragmentation, meaning that the edge curves are not connected, edge segments are missing, and spurious edges appear that do not correspond to interesting phenomena in the image, complicating the subsequent task of interpreting the image data. Edge detection is one of the fundamental steps in image processing, image analysis, image pattern recognition, and computer vision.

2 Proposed Methods

2.1 Res Net Hybrid Fuzzy Logic c-Means Clustering and Edge Detection Algorithm

Several research groups are working on fuzzy logic-based image processing techniques for precision agriculture. Fuzzy clustering is used for tomato segmentation, tomato grading, edge detection using fuzzy inference rules, and defect detection


[4] using fuzzy logic. Color-based segmentation of tomatoes with multiple colors is not a trivial task. The membership functions are normally based on different datasets with green, yellow, and red colors. An RGB model is created so that each color channel has a membership function [5]. The fuzzy c-means algorithm is used to partition pixels of sufficiently close colors into clusters; the clusters do not have absolute boundaries. Fuzzy logic control and neural network control [6] are commonly used intelligent control technologies in agriculture. The pixels in an image belong to all clusters based on their membership values. The algorithm is composed of the following steps:

1. Initialize the i cluster centers C_i randomly.
2. Initialize the fuzzy partition membership function μ_ij using Eq. (1).
3. Once the memberships are computed, compute new cluster centers using the membership values.
4. Repeat steps 2 and 3 until E converges to a global minimum, where E is given by Eq. (2).

The membership function μ_ij is proportional to the probability [7] that a pixel belongs to a particular cluster, where the probability depends on the distance between the image pixel and each independent cluster center. The termination condition of the algorithm is convergence to a global minimum:

$$ \mu_{ij} = \frac{1}{\sum_{m=1}^{C} \left( \frac{\| x_j - c_i \|}{\| x_j - c_m \|} \right)^{\frac{2}{k-1}}} \tag{1} $$

$$ E = \sum_{i=1}^{C} \sum_{j=1}^{N} \mu_{ij}^{k} \left\| x_j - c_i \right\|^2 \tag{2} $$

There are two main parameters of the fuzzy logic c-means algorithm: k in Eq. (1), the fuzzification exponent, and the number of clusters. Equations (1) and (2) are used to cluster the tomato images and identify the diseases. Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which the brightness of a tomato image changes sharply, typically organized into a set of curved line segments, are edges. This study compares fuzzy c-means and edge detection to identify tomato diseases. A Gaussian-smoothed step edge (an error function), the simplest extension of the ideal step edge model, is used for modeling the effects of edge blur in practical applications. Thus, a one-dimensional image f that has exactly one edge placed at x = 0 may be modeled as:

$$ f(x) = \frac{I_r - I_l}{2} \left[ \operatorname{erf}\left( \frac{x}{\sqrt{2}\,\sigma} \right) + 1 \right] + I_l \tag{3} $$


At the left side of the edge, the intensity is $I_l = \lim_{x \to -\infty} f(x)$, and at the right of the edge it is $I_r = \lim_{x \to \infty} f(x)$. The scale parameter σ is called the blur scale of the edge. Ideally, this scale parameter should be adjusted based on the quality of the image to avoid destroying its true edges. In the proposed method, tomato images are used as input to the CNN model in order to extract the useful features of the image. Through supervised learning, the fuzzy c-means stage can support classification with high accuracy.
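As an illustration of Eqs. (1) and (2), below is a minimal NumPy sketch of fuzzy c-means applied to pixel colors; the fuzzifier k, cluster count, and iteration budget are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def fuzzy_c_means(pixels, n_clusters=3, k=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means on an (N, 3) array of pixel colors (Eqs. 1-2)."""
    rng = np.random.default_rng(seed)
    u = rng.random((pixels.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # memberships: rows sum to 1
    for _ in range(n_iter):
        um = u ** k                            # fuzzified memberships
        # Cluster centers as membership-weighted means of the pixels
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        # Distance of every pixel to every center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard against zero distance
        inv = d ** (-2.0 / (k - 1))
        u = inv / inv.sum(axis=1, keepdims=True)   # Eq. (1): membership update
    E = np.sum((u ** k) * d ** 2)              # Eq. (2): clustering objective
    return centers, u, E

# Example: cluster 1000 random "pixels" into three color groups
pixels = np.random.default_rng(1).random((1000, 3))
centers, u, E = fuzzy_c_means(pixels)
print(centers.shape, u.shape, E)
```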

2.2 Res Net Convolutional Neural Network

Machine learning allows the diagnosis of diseases from tomato images. This automatic method classifies diseases of tomato plants from "Plant Village," an openly available plant image database. A segmentation approach combined with an SVM demonstrated disease classification on over 300 images. Some diseases might look similar [8] depending on the infection status; therefore, the knowledge for identifying the type of disease was provided by experts in the region, which helped us clearly identify the categories in the imagery and the infected areas of the tomato plant. This annotation procedure aims to label the class and location of the infected areas in each image. The prototype recognition system achieved an average accuracy of 85% with an approach that included image processing. The design of the proposed system is described in Fig. 1. As some recent research has indicated, adding attention mechanisms to sequence and convolution neural networks can improve model performance, and the proposed model also involves an attention mechanism. The model is trained with a set of input images of dimension 512 × 512 × 1. The images are reshaped, and BGR-to-gray conversion is also performed for each tomato image. The architecture of the convolutional neural network consists of five layers, where the final layer is a fully connected layer; each layer is specified with different dimensions as shown in Fig. 1. Max-pooling layers are used with the convolution layers to obtain better feature extraction. A convolutional neural network is a deep learning algorithm that can distinguish and categorize features in images for computer vision: a multilayer neural network designed to evaluate visual inputs and perform tasks [9] such as image classification, segmentation, and object detection. The comparison of the architectural changes between the existing and proposed models is given in Table 1.
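As a rough sketch of the described network (512 × 512 × 1 input, three convolution/max-pooling stages, and a fully connected classifier), one could write the following in Keras; filter counts, kernel sizes, and the dense width are our assumptions, since the paper does not list them, and the fuzzy c-means/edge-detection hybridization of the fully connected layer is not shown here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hedged sketch of the described architecture: 512x512x1 grayscale input,
# three convolution + max-pooling stages, then a fully connected classifier.
# Filter counts and kernel sizes are assumptions, not the paper's values.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", padding="same",
                  input_shape=(512, 512, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # gray spot, late blight, bacterial canker
])
model.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```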

2.3 Random Search Algorithm

Random search is a genuinely useful tool in the data scientist's toolbox. It is a very simple method, used very often, for instance, in cross-validation and hyperparameter optimization.


Fig. 1 Flowchart for convolutional neural network using disease identification (input → clustering → Res Net with three convolution/max-pooling stages → fully connected layer combining fuzzy c-means and edge detection → tomato disease detection output)

Table 1 Comparison between existing and proposed methods

| Methods | Existing | Proposed |
|---|---|---|
| Dataset | OpenCV library is used to manipulate the raw input image and to train the CNN | OpenCV library is used to manipulate the raw input image and to train the CNN and fuzzy c-means |
| Algorithm | YOLO object detection algorithm | Fuzzy c-means algorithm, random search algorithm, and edge detection (Canny edge detection) |
| Training model on CNN | 9000 | 10,000 |
| Model | AlexNet | AlexNet, ResNet, and GoogLeNet |
| Filter | 520 × 520 | 512 × 512 |
| Activation function | ReLU | Softmax |


Algorithm SRS (Standard Random Search)
Begin
  Step 1. Random initialization:
    Initialize configuration S = (s_1, s_2, …, s_N) at random.
  Step 2. Descent over the landscape E_1 from S to a minimum S_m:
    Calculate h_i = Σ_j T_ij s_j for all i = 1, …, N
    while there are unstable spins (h_i s_i < 0) do
      for each spin s_i in S do
        if h_i s_i < 0 then
          s_i = −s_i
          refresh h_j = h_j + 2 T_ij s_i for all j ≠ i
        end if
      end for
    end while
End
Repeat from Step 1 until a minimum of the required depth is reached or until the runtime ends.

Tomato plant diseases are responsible for economic losses in the agricultural industry, as they destroy crops. This work can be used as an automated system for the identification and classification of plant diseases using a random search algorithm and a clustering technique. The feature extraction method is applied to the images for training the random search algorithm, and the performance of different machine learning algorithms is evaluated on the training data to find the algorithm best suited to disease identification.
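Random search for hyperparameter optimization, as mentioned at the start of this section (distinct from the SRS energy-descent pseudocode above), can be illustrated with scikit-learn; the toy data and parameter ranges below are arbitrary examples, not the study's configuration.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Toy stand-in for extracted image features; not the paper's dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Randomly sample hyperparameter combinations and cross-validate each one.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(2, 20),
    },
    n_iter=20, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```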

3 Experimental Results and Discussion

Inspired by the classical AlexNet, a deep convolution neural network was designed to classify tomato plant diseases. The designed [10] network has 24 convolution layers followed by two fully connected layers. First of all, a structure based on the standard YOLO model is designed. Regarding the choice [11] of convolution kernel, a larger convolution kernel is better able to extract macro-level information from the image, and vice versa; other image information can be regarded as content that needs to be filtered out. The architecture of the CNN involves activation functions such as Softmax and ReLU to obtain the probabilities in the output layer. The proposed model contains the ReLU activation function in the convolution layers and a sigmoid activation function at the output layer. After the construction of the architecture, the images are used as a training set to train the network.


3.1 Training Model

The input images were collected from the internet and labeled by region. Collecting objects with different backgrounds, shapes, and sizes increases the accuracy level and minimizes false detections. The model was trained on the four-class classification dataset for 2000 iterations using stochastic gradient descent with a starting learning rate of 0.1, polynomial rate decay with a power of 4, weight decay of 0.0006, and momentum of 0.9 [12]; the resolution is scaled down to 448 × 448 for 10 epochs at a $10^{-3}$ learning rate. With this preparation, the classifier achieves a top-1 accuracy of 76.5% and a top-5 accuracy of 93.3%. After removing the fully connected layers, the classifier can take images of different sizes: if the width and height are doubled, it uses 4× as many output grid cells and therefore 4× as many predictions. Since the CNN downsamples the input by 32, the width and height must be multiples of 32. Throughout training, the classifier takes inputs of dimension 320 × 320, 352 × 352, …, 608 × 608 (with a step of 32).
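The stated optimizer settings (SGD, starting learning rate 0.1, polynomial decay of power 4, weight decay 0.0006, momentum 0.9, 2000 iterations) might be configured as in the following PyTorch sketch; the placeholder model and dummy loss are ours, not the paper's network.

```python
import torch

# Hedged sketch of the stated schedule: SGD with lr 0.1, momentum 0.9,
# weight decay 0.0006, and polynomial decay of power 4 over 2000 iterations.
model = torch.nn.Linear(10, 4)            # placeholder standing in for the CNN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=0.0006)
total_iters = 2000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: (1 - step / total_iters) ** 4)  # power-4 decay

for step in range(total_iters):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()  # dummy loss for the sketch
    loss.backward()
    optimizer.step()
    scheduler.step()
```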

3.2 Performance Analysis

To know how our proposed approach performs on new image data, and also to check whether any of our approaches overfit, we ran all experiments across a range of training/testing splits of the data set, specifically with an estimated distribution of 80% of the entire dataset used for training and 20% for testing.

Simple accuracy:

$$ \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} = 0.6 $$

Precision:

$$ \text{Precision} = \frac{TP}{TP + FP} = 0.97333 $$

The precision is the number of true positives divided by the sum of true positives and false positives, which gives 0.97333.

Recall:

$$ \text{Recall} = \frac{TP}{TP + FN} = 0.96 $$

The recall is the number of true positives divided by the sum of true positives and false negatives, which gives 0.96.


Table 2 Statistical analysis of dataset

| Methods | Values |
|---|---|
| Mean | 1.098 |
| Standard deviation | 0.654 |

Table 3 Disease samples testing

| Diseases | Existing | Proposed |
|---|---|---|
| Gray spot | 200 | 275 |
| Late blight | 255 | 266 |
| Bacterial canker | 195 | 250 |

Comparing systems:

$$ F_{\beta} = \frac{1}{\beta \cdot \frac{1}{\text{Precision}} + (1 - \beta) \cdot \frac{1}{\text{Recall}}} = 0.98 $$

The F-score comparing the existing and proposed systems is 0.98 (Table 2). The convolutional neural network-based classifiers were tested on a subset of the diseases dataset, including tomato plant leaf diseases. The dataset consists of three leaf diseases of the tomato plant: gray spot (250 samples), late blight (275 samples), and bacterial canker (250 samples). Adding healthy tomato leaf images, the dataset used contains 570 images in three categories. Preliminary preparation and augmentation were applied to the dataset. The images were resized to fit into 412 × 412 dimensions, selected to be comparatively small and close to a fraction of the average size of all images; after excluding 10% of the images as a test set, the remaining training images were augmented, in order to reduce overfitting, by adding horizontally flipped copies, and a portion of these images was further set aside as the validation set. The models were pre-trained on ImageNet and fine-tuned on the dataset, and the proposed convolutional neural network architecture was evaluated with and without residual learning (Table 3).
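The reported metrics follow the standard confusion-matrix definitions; a small sketch with hypothetical counts (not the paper's actual confusion matrix):

```python
def metrics(tp, tn, fp, fn, beta=0.5):
    """Standard confusion-matrix metrics plus the weighted F-score above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Weighted harmonic mean of precision and recall (F_beta as in the text)
    f_beta = 1 / (beta / precision + (1 - beta) / recall)
    return accuracy, precision, recall, f_beta

# Hypothetical counts for illustration only
print(metrics(tp=146, tn=90, fp=4, fn=6))
```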

3.3 Confusion Matrix

Similarly, the training data set of the CNN model included 4000 samples for the two data classes, and 500 data samples were considered for testing the performance of the system. Of these 500 samples, twelve were misclassified: four samples in class 1 and eight in class 2, as depicted in the confusion matrix of the CNN shown in Fig. 2. The classification accuracy gives an overall average accuracy over the confusion matrix; the accuracy of the model improves from 0.6503 to 0.9276. Some experiments were done on applying disease detection to


Fig. 2 Comparison of confusion matrix for fuzzy and edge

Table 4 Comparison between hybrid fuzzy c-means and edge detection

| Methods | Fuzzy c-means | Edge detection | Hybrid fuzzy c-means and edge detection |
|---|---|---|---|
| Classifier | To segment the particular tomato disease | To identify the tomato disease only | To predict the tomato sickness |
| Model | AlexNet | ResNet | AlexNet and ResNet |
| Activation function | ReLU | Softmax | ReLU, Softmax, Sigmoid |
| Filter | 256 × 512 | 64 × 256 | 32 × 256 |
| Accuracy | 94.05% | 96% | 97.01% |

the tomato images, which gave various accuracies. This experiment was done with data that were not part of the training set (Table 4). The hyperparameters used in the model are the ReLU activation function in every layer and the Softmax activation function in the output layer, with edge detection applied together with the Softmax output to detect the tomato diseases. Combining the fuzzy c-means algorithm with the edge detection algorithm yields good performance: in the fully connected layer, the fuzzy c-means and edge detection algorithms are applied to identify and detect the diseases.

3.4 Detection of Diseases

See Figs. 3, 4 and 5.


Fig. 3 Example for detection of fungal disease using edge detection

Fig. 4 Detection of late blight

4 Conclusion

A novel Res Net CNN architecture with a hybrid fuzzy c-means and edge detection algorithm in the fully connected layer was proposed to create a machine learning model. Late blight (training 200, test 20), gray spot (training 250, test 25), and bacterial canker (training 195, test 20) were the detected diseases. On experimenting with hybridization in the fully connected layer, this proposed model achieved better accuracy than the existing methods. From the experimentation, it can be concluded that the features in the diseased tomato images, such as late blight, bacterial canker, and gray spot, were classified correctly. The proposed method, using a hybrid fuzzy c-means and edge detection algorithm with the Res Net CNN architecture, achieved 97.01% accuracy in classifying tomato diseases, which is comparable with other state-of-the-art methods.


Fig. 5 Healthy tomato plant

References

1. L.R. Aphale, S. Rajesh, Fuzzy logic system in tomato farming. IOSR J. Comput. Eng. 56–62 (2015)
2. S. Adhikari, B. Shrestha, B. Baiju, K.C. Saban Kumar, Tomato plant diseases detection system using image processing, September 2018, 2019
3. S. Raza, G. Prince, J.P. Clarkson, N.M. Rajpoot, Automatic detection of diseased tomato plants using thermal and stereo visible light images. PLOS One 1–20 (2015). https://doi.org/10.1371/journal.pone.0123262
4. S. Adhikari, N. Sinha, T. Dorendrajit, Fuzzy logic based on-line fault detection and classification in transmission line. SpringerPlus (2016). https://doi.org/10.1186/s40064-016-2669-4
5. A.G. Mohapatra, S. Kumar, Neural network pattern classification and weather dependent fuzzy logic model for irrigation control in WSN based precision agriculture. Procedia Comput. Sci. 78, 499–506 (2016). https://doi.org/10.1016/j.procs.2016.02.094
6. S.B. Lo, S.A. Lou, J. Lin, M.T. Freedman, M.V. Chien, S.K. Mun, Artificial convolution neural network techniques and applications for lung nodule detection (1995). https://doi.org/10.1109/42.476112
7. A. Azadeh, M. Saberi, S.M. Asadzadeh, An adaptive network based fuzzy inference system—auto regression—analysis of variance algorithm for improvement of oil consumption estimation and policy making: the cases of Canada, United Kingdom, and South Korea. Appl. Math. Model. 35(2), 581–593 (2011). https://doi.org/10.1016/j.apm.2010.06.001
8. A. Fuentes, S. Yoon, S.C. Kim, D.S. Park, A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. https://doi.org/10.3390/s17092022
9. M. Aryal, D. Bhattarai, Assessment of tomato consumption and demand in Nepal (2020). https://doi.org/10.3126/aej.v18i0.19893
10. F. Qin, D. Liu, B. Sun, L. Ruan, Z. Ma, H. Wang, Identification of Alfalfa leaf diseases using image recognition technology, 1–26 (2016). https://doi.org/10.1371/journal.pone.0168274
11. Y. Zhang, C. Song, D. Zhang, Deep learning-based object detection improvement for tomato disease. IEEE Access 8, 56607–56614 (2020). https://doi.org/10.1109/ACCESS.2020.2982456
12. M. Abadi et al., TensorFlow: a system for large-scale machine learning, in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (2016)

Autism Spectrum Disorder Study in a Clinical Sample Using Autism Spectrum Quotient (AQ)-10 Tools

Rakhee Kundu, Deepak Panwar, and Vijander Singh

Abstract Autism is a neurological developmental condition characterized by cognitive and speech dysfunction and by restricted, repetitive behaviour. These symptoms are generally seen at or before three years of age. The word autism has its origin in the Greek word "autos," which means "self." Globally, autism affects 1 out of every 100–166 children and, through them, the lives of many children and their family members. Autism is treated as one of the pervasive developmental disorders (PDD). A research study was conducted using the ASD Tests app for young children, adolescents, and adults to understand their behavioural symptoms. This paper describes the features and the implementation of the AQ-10 tools test conducted on ASD patients with the help of machine learning algorithms on an autism spectrum disorder dataset and shows the improvement in the overall results.

Keywords ASD · Machine learning · AQ-10 Tools · Accuracy · Specificity

1 Introduction

ASD is a type of neurodevelopmental disorder that cannot be cured, but for which early intervention is ideal. It is very hard to identify ASD, but traditional behavioural tests can diagnose it; depending on the severity of the symptoms, ASD can be detected at the age of 2 years or older. Many clinical instruments aim to identify ASD as early as possible, typically when there is a reasonable chance or strong suspicion of the condition. There is no medical cure for a person suffering from the disease; however, if the symptoms are detected early, some therapies can suppress the effects for some time. The main cause of this disease is still unknown to scientists, as it is a genetic disorder. The human genes


trigger the developmental disorder in humans, resulting in this disease. Risk factors for ASD include low birth weight, having a sibling with ASD, and having older parents. Early prediction and treatment are the most important steps to improve the quality of life of people suffering from ASD and to decrease the symptoms of the disorder. However, there is no medical test for the detection of autism; ASD symptoms are usually recognized by observing behaviour. In adults, identifying ASD symptoms is more difficult than in older children and adolescents because some symptoms of ASD are similar to those of other mental health disorders. It is easier to identify behavioural changes in a child by observation, as they can be seen from as early as 6 months of age, whereas autism-specific brain imaging can identify the condition only after 2 years of age. In this study, we work on the ASD dataset obtained from the UCI machine learning repositories [1] and used as per the directions given by Thabtah [2]. With the help of machine learning classification algorithms, we predict the classification results for ASD detection.

2 Literature Review

Leo Kanner was the first to identify autism in its current sense. In 1943, he described 8 boys and 3 girls who had "an innate inability to form the usual, biologically provided affective contact with people" and adopted the early childhood label of autism [3]. Hans Asperger and Leo Kanner are known as those who developed the foundation of current autism research. The present prevalence of ASD in recent large-scale investigations is roughly 1–2% [4, 5]. While the rise in prevalence is partly due to improvements in the DSM diagnostic criteria and the younger age of diagnosis [4], it is not possible to rule out an increase in risk factors [6]. Male predominance has been shown in studies: the chances of being affected by ASD are 2–3 times higher in males than in females [6–8]. The under-recognition of women with ASD [9] could explain this diagnostic bias towards males. Some scientists have also suggested the possibility of female-specific protective effects against ASD [10, 11]. Thabtah et al. [12] developed the ASD test mobile app based on the Autism Spectrum Quotient (AQ)-10 and Q-CHAT tools, which can assist with early identification of ASD. They compiled ASD data using these smartphone applications and submitted the data to the open-source archives of Kaggle and the University of California, Irvine (UCI). To determine ASD characteristics, Thabtah and Peebles [13] proposed rules-based ML (RML) and found that RML enables classifiers to improve their performance. Satu et al. [14] demonstrated tree-based classification of important features for normal and autistic children in Bangladesh. Abbas et al. [15] merged the ADI-R and ADOS ML approaches into a single evaluation and used encoding techniques to solve the problems of missingness, sparsity, and imbalance.


In 2020, Stevanovic [16], by way of a discriminant index (DI), lowered the number of items in the Q-CHAT and AQ instruments from 50 to 10. The items were categorized into areas such as attention to detail, attention switching, communication, imagination and social competences. Thabtah et al. [12] developed the ASD tests application using the Q-CHAT-10 and AQ-10 tools for screening and detection of ASD risk factors (AQ-10 infant, AQ-10 teenager and AQ-10 adult). This app measures a score ranging from 0 to 10, with values from 1 to 10 allocated to each item, and shows a positive ASD forecast for a score of over 6 out of 10. From the Kaggle and UCI ML archives, we obtained N = 2009 documents of ASD tests by combining datasets [17]. Recently, Shekhawat et al. [18] proposed a data transformation approach for improving classification accuracy, and Chugh et al. [19] developed an information retrieval approach for enhanced efficiency.

3 Dataset Selection

The autism spectrum disorder screening dataset for adults was used in this study [1]. The number of records is 704, with 21 attributes. We also added the module 2 and module 3 autism spectrum disorder (ASD) data repositories from the item-level Autism Diagnostic Observation Schedule (ADOS) [1]. Module 2 comprises 1389 people (1319 ASD and 70 non-ASD), and module 3 comprises 3143 people (2870 ASD and 273 non-ASD). Subjects were categorized as ASD or non-ASD so that the machine learning algorithms were easy to understand and implement. The description of the data and a tabular attribute-wise description can be found in reference [12]. The AQ-10 tool test is based on 30 questions in three categories: child, adolescent and adult. The details of the questions for all three categories are available in reference [13].

4 Methodologies

The datasets retrieved from the UCI repositories included noisy and incomplete values, which were replaced by average values. Also, numerous categorical features were encoded into matching integer values. Automated, successful ASD classification models are accessible through machine learning methods, as they follow a blend of mathematical and computer science analysis methods [20]. A variety of machine learning methods, such as decision trees [21], support vector machines [22], rule classifiers [9] and neural networks [23], have recently been applied to ASD problems. ASD diagnosis is a common classification problem in which a model is built on previously categorized cases and controls; the model can then be used to diagnose new cases (ASD and non-ASD). Figure 1 represents the system architecture of the implementation part of ASD detection.


Fig. 1 System architecture of ASD detection
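As an illustration of this pre-processing, here is a minimal sketch assuming the screening data are loaded into a pandas DataFrame; the column handling is generic and not taken from the paper, and filling categorical gaps with the mode is an assumption (an average is only defined for numeric columns).

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Replace incomplete values by averages and encode categorical features."""
    df = df.copy()
    for col in df.columns:
        if df[col].dtype == object:
            # fill gaps with the most frequent value, then map to integers
            df[col] = df[col].fillna(df[col].mode()[0])
            df[col] = LabelEncoder().fit_transform(df[col])
        else:
            # replace missing/incomplete numeric values by the column average
            df[col] = df[col].fillna(df[col].mean())
    return df
```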

5 Results and Discussion

From the methodological point of view, we analysed the dataset for ASD and non-ASD detection. This is a typical supervised machine learning classification problem, so it can be predicted with popular classification algorithms such as random forest, decision tree and support vector machine. We therefore applied these three classification models to the ASD data. To improve the accuracy and performance of the models, we used cross-validation and hyperparameter tuning, which helped us increase the performance and obtain the best-fit model for ASD classification. Table 1 summarizes the hyperparameter tuning that gave the best results for each machine learning model.

Table 1 Hyperparameter tuning of the machine learning models

Model name                    | Best parameters (hyper tuning)           | Train score | Test score
Support vector machine (SVC)  | C: 1, degree: 1, kernel: linear          | 99.29       | 98.78
Random forest (RF)            | "criterion": "entropy", "n_estimators": 100 | 95.49    | 95.35
Decision tree                 | "criterion": "entropy"                   | 89.29       | 92.25
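A minimal sketch of such a tuned search with scikit-learn's GridSearchCV is given below; the candidate grids are assumptions built around the best parameters of Table 1, not the authors' exact search space.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# candidate models and (assumed) parameter grids around Table 1's winners
models = {
    "svc": (SVC(), {"C": [0.1, 1, 10], "degree": [1, 2, 3],
                    "kernel": ["linear", "rbf"]}),
    "rf": (RandomForestClassifier(), {"criterion": ["gini", "entropy"],
                                      "n_estimators": [50, 100, 200]}),
    "dt": (DecisionTreeClassifier(), {"criterion": ["gini", "entropy"]}),
}

def tune_all(X, y):
    best = {}
    for name, (estimator, grid) in models.items():
        search = GridSearchCV(estimator, grid, cv=5)  # 5-fold cross-validation
        search.fit(X, y)
        best[name] = (search.best_params_, search.best_score_)
    return best
```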


Besides, multiple assessment measures are used to test the predictive model derived from machine learning. The task here is a binary classification problem (ASD vs. non-ASD). One of the main parameters for evaluating an algorithm is the accuracy of the model, and accuracy (Eq. 1) is one of the most common assessment measures: it gives the number of test cases correctly categorized out of the total number of test cases. Sensitivity (Eq. 3) is the proportion of test cases with actual ASD that are detected (true positive rate), while specificity (Eq. 2) corresponds to the percentage of test cases without ASD that are correctly identified (true negative rate).

$\mathrm{Acc} = \frac{TP + TN}{TP + FN + FP + TN}$  (1)

$\mathrm{Spec} = 1 - \frac{FP}{FP + TN}$  (2)

$\mathrm{Sens} = \frac{TP}{TP + FN}$  (3)

The confusion matrix contrasts the real values of the target with those predicted by the machine learning model. It offers an integrated view of the performance and the kinds of error of our classification model. In our experiments, the support vector machine algorithm performs best; hence, Fig. 2 shows the confusion matrix of the SVC classifier.

Fig. 2 Confusion matrix of the SVC algorithm
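For reference, Eqs. (1)–(3) can be computed directly from the confusion matrix; a small sketch with scikit-learn, assuming labels 1 = ASD and 0 = non-ASD:

```python
from sklearn.metrics import confusion_matrix

def acc_sens_spec(y_true, y_pred):
    # binary confusion matrix with 0 = non-ASD, 1 = ASD
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    acc = (tp + tn) / (tp + fn + fp + tn)  # Eq. (1)
    spec = 1 - fp / (fp + tn)              # Eq. (2)
    sens = tp / (tp + fn)                  # Eq. (3)
    return acc, sens, spec
```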


6 Conclusion and Future Scope

In this research work, we tried various machine learning algorithms for the early detection of ASD. With the best hyperparameter tuning and a cross-validated grid search, we obtained a maximum accuracy of 98% with the support vector machine algorithm on the ASD dataset. To date, very few studies have tried to identify ASD using brain MRI images of infants. In future, we wish to combine two approaches: a machine learning model on the publicly available ASD data repositories, and a CNN (convolutional neural network) applied to brain MRI images, so that an accurate prediction for all categories of ASD can be made.

References

1. UCI Machine Learning Repository: Autism Screening Adult Data Set. https://archive.ics.uci.edu/ml/datasets/Autism+Screening+Adult. Accessed 14 Feb 2021
2. F. Thabtah, An accessible and efficient autism screening method for behavioural data and predictive analyses. Health Inform. J. 25(4), 1739–1755 (2019). https://doi.org/10.1177/1460458218796636
3. L. Kanner, Follow-up study of eleven autistic children originally reported in 1943. Focus Autistic Behav. 7(5), 1–11 (1992). https://doi.org/10.1177/108835769200700501
4. P. Karimi, E. Kamali, S.M. Mousavi, M. Karahmadi, Environmental factors influencing the risk of autism. J. Res. Med. Sci. 22(1) (2017). https://doi.org/10.4103/1735-1995.200272
5. NIMH Autism Spectrum Disorder. https://www.nimh.nih.gov/health/publications/autism-spectrum-disorder/index.shtml. Accessed 13 Dec 2020
6. M.-L. Mattila et al., Autism spectrum disorders according to DSM-IV-TR and comparison with DSM-5 draft criteria: an epidemiological study. J. Am. Acad. Child Adolesc. Psychiatry 50(6), 583–592.e11 (2011). https://doi.org/10.1016/j.jaac.2011.04.001
7. American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) (Springer, Berlin)
8. Y.S. Kim et al., Prevalence of autism spectrum disorders in a total population sample. Am. J. Psychiatry 168(9), 904–912 (2011). https://doi.org/10.1176/appi.ajp.2011.10101532
9. M. Elsabbagh et al., Global prevalence of autism and other pervasive developmental disorders. Autism Res. 5(3), 160–179 (2012). https://doi.org/10.1002/aur.239
10. E. Saemundsen, P. Ludvigsson, V. Rafnsson, Autism spectrum disorders in children with a history of infantile spasms: a population-based study. J. Child Neurol. 22(9), 1102–1107 (2007). https://doi.org/10.1177/0883073807306251
11. E.B. Robinson, P. Lichtenstein, H. Anckarsäter, F. Happé, A. Ronald, Examining and interpreting the female protective effect against autistic behavior. Proc. Natl. Acad. Sci. U.S.A. 110(13), 5258–5262 (2013). https://doi.org/10.1073/pnas.1211070110
12. F. Thabtah, F. Kamalov, K. Rajab, A new computational intelligence approach to detect autistic features for autism screening. Int. J. Med. Inform. 117, 112–124 (2018). https://doi.org/10.1016/j.ijmedinf.2018.06.009
13. F. Thabtah, D. Peebles, A new machine learning model based on induction of rules for autism detection. Health Inform. J. 26(1), 264–286 (2019). https://doi.org/10.1177/1460458218824711
14. M.S. Satu, F. Farida Sathi, M.S. Arifen, M. Hanif Ali, M.A. Moni, Early detection of autism by extracting features: a case study in Bangladesh, in 2019 International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST) (IEEE, 2019). https://doi.org/10.1109/icrest.2019.8644357
15. H. Abbas, F. Garberson, E. Glover, D.P. Wall, Machine learning for early detection of autism (and other conditions) using a parental questionnaire and home video screening, in 2017 IEEE International Conference on Big Data (Big Data) (IEEE, 2017). https://doi.org/10.1109/bigdata.2017.8258346
16. D. Stevanovic, Quantitative Checklist for Autism in Toddlers (Q-CHAT): A Psychometric Study with Serbian Toddlers (2020)
17. S. Raj, S. Masood, Analysis and detection of autism spectrum disorder using machine learning techniques. Procedia Comput. Sci. 167, 994–1004 (2020). https://doi.org/10.1016/j.procs.2020.03.399
18. S.S. Shekhawat, H. Sharma, S. Kumar, A. Nayyar, B. Qureshi, bSSA: binary salp swarm algorithm with hybrid data transformation for feature selection. IEEE Access 9, 14867–14882 (2021). https://doi.org/10.1109/ACCESS.2021.3049547
19. A. Chugh, V.K. Sharma, S. Kumar, A. Nayyar, B. Qureshi, M.K. Bhatia, C. Jain, Spider monkey crow optimization algorithm with deep learning for sentiment classification and information retrieval. IEEE Access 9, 24249–24262 (2021). https://doi.org/10.1109/ACCESS.2021.3055507
20. F. Thabtah, Autism spectrum disorder screening, in Proceedings of the 1st International Conference on Medical and Health Informatics 2017 (ACM, 2017). https://doi.org/10.1145/3107514.3107515
21. Decision Trees—An Overview | ScienceDirect Topics. https://www.sciencedirect.com/topics/computer-science/decision-trees. Accessed 06 Jan 2021
22. M. Mohammed, M.B. Khan, E.B.M. Bashier, Machine Learning: Algorithms and Applications (2016)
23. S. Raj, S. Masood, Analysis and detection of autism spectrum disorder using machine learning techniques. Procedia Comput. Sci. 167 (2020). https://doi.org/10.1016/j.procs.2020.03.399

Robust Video Steganography Technique Against Attack Based on Stationary Wavelet Transform (SWT) and Singular Value Decomposition (SVD)

Reham A. El-Shahed, M. N. Al-Berry, Hala M. Ebied, and Howida A. Shedeed

Abstract The interchanging of multimedia data over the Internet is growing exponentially. These data need to be kept safe from being hacked or corrupted. Steganography techniques are used to hide important data by embedding it into a cover object. The proposed steganography algorithm uses the stationary wavelet transform (SWT) and the singular value decomposition (SVD) in the YCbCr color space to hide an image in video frames. The algorithm is tested using different images and under different types of attacks. The qualitative and quantitative results show that the imperceptibility and the robustness of the algorithm are very high, as the average normalized cross-correlation is 0.9. The average similarity for the secret image is 0.9 without attacks and 0.85 after adding different types of attacks to the stego-video. The structural similarity between the cover video and the stego-video is 0.99.

Keywords Video steganography · Image hiding · Stationary wavelet transform · Singular value decomposition · Color space

1 Introduction

Information security is an important science nowadays. With a huge amount of data transferred over the Internet, these data should be secured from being hacked or lost. There are various data security techniques; steganography is one such technique, used to overcome this problem by embedding the secret data in a cover object. Cryptography is another technique, which encrypts the information to keep it secure. In steganography algorithms, the secret data are kept in a cover. Watermarking techniques are another type of data security technique: the secret data are likewise embedded in a cover object, as in steganography. The goal of digital watermarking is to embed the watermark in a multimedia object to keep the
copyright of the digital object. Both steganography and watermarking use the same techniques to hide the secret data/watermark in a digital multimedia object. Any steganography algorithm consists of a cover object, secret data and an embedding algorithm; the output of the algorithm is the stego-object. In digital steganography, the cover object and the secret message may be an audio, a text file, an image or a video. Video steganography techniques are similar to image steganography techniques [1]. There are two important parameters for all steganography techniques: embedding capacity and robustness [1]. Embedding capacity is the amount of secret data that can be embedded in the cover object. Robustness is the ability of a steganography system to hide as much data as it can without losing the hidden data. Video steganography techniques can be divided into three types based on the position of embedding. The first type is pre-embedding, where the secret data are concealed in the raw video domain. The second type is intra-embedding, in which the secret data are embedded in the compressed domain. The last type is post-embedding, where the secret messages are embedded in the bit-stream domain [2]. Digital videos and images can be represented as grayscale or color. Colored images and videos are more important in steganography, as they provide more hiding capacity. Different color spaces are used to represent images and videos, including Red Green Blue (RGB), luminance and chrominance (YCbCr) and Hue Saturation Value (HSV) [3]. Video steganography techniques can be implemented in two main domains: the spatial domain and the transform domain. In spatial domain techniques, the secret data are inserted directly in the video frames without any pre-processing; the main algorithm in this family is the Least Significant Bit (LSB) method, and there are also other algorithms based on Red Green Blue (RGB) components and histograms [4]. In transform domain techniques, the video is transformed to the frequency domain. Different transforms can be used in video processing, such as the discrete wavelet transform (DWT) [5], discrete Fourier transform (DFT) [6], discrete cosine transform (DCT) [7] and integer wavelet transform (IWT) [8]; DCT and DWT are the most used for steganography [9]. This paper proposes a wavelet-transform-based technique to hide a secret image within video frames. The proposed algorithm uses the 3D stationary wavelet transform (SWT) and singular value decomposition (SVD); SWT provides more capacity for hiding, and SVD provides better perceptual quality for the video. The rest of the paper is organized as follows: Sect. 2 introduces the related work pursued in hiding a secret message within a video using transform techniques. A detailed explanation of SWT and SVD and the proposed technique is presented in Sect. 3. Section 4 presents the experimental results. Finally, Sect. 5 includes the main conclusions of the paper.


2 Related Work

Transform domain techniques provide more robustness to steganography algorithms. Several matrix factorization techniques enhance steganography algorithms, such as singular value decomposition (SVD), QR factorization and the Arnold transform. Recently, many hybrid techniques have been developed to improve the efficiency and robustness of steganography algorithms. As steganography and watermarking algorithms use the same techniques, this section reviews steganography and watermarking techniques that use transform domain-based algorithms. In 2017, Sadek et al. [10] proposed a steganography algorithm to hide a secret image in a video. The algorithm considered the human skin regions in the cover video frames as regions of interest (ROI), which were then decomposed into blocks. The embedding process depended on three-level DWT coefficients; the DWT was applied to the red, green or blue channels in the ROI blocks and also to the YCbCr channels. Different block sizes were tested, and the algorithm achieved high imperceptibility, as the peak signal-to-noise ratio (PSNR) was above 50 and the similarity percentage was 86%. Kuraparthi et al. [11] then proposed a video watermarking technique that combined DWT, SVD and the artificial bee colony (ABC) optimization algorithm. The watermark image was inserted in the "LL" sub-band; the SVD was applied to the selected DWT block, and the ABC algorithm was used to select the best blocks. The performance of the method was measured under different video processing attacks, and the results proved the robustness of the algorithm; the PSNR is above 53 dB. Using the redundant discrete wavelet transform (RDWT) and QR factorization, Subhedar and Mankar [12] proposed a steganography algorithm to conceal a secret image in a cover image. The cover image was first decomposed using RDWT, then QR factorization was applied to a sub-band, and finally the secret image was inserted in the factorized sub-band. The results showed high imperceptibility with reference to PSNR, mean structural similarity (MSSIM), root-mean-square error (RMSE), normalized absolute error (NAE), normalized cross-correlation (NCC) and image fidelity (IF). Due to the rapid development of mobile and communication devices, the Internet of Things (IoT) is now widely used and faces many security problems. Arunkumar et al. [13] proposed a two-layer security algorithm: the first layer was at the IoT sensor device and the other at a server. At the server side, a combined cryptography-steganography approach was carried out; the steganography algorithm used the redundant integer wavelet transform (RIWT) and QR decomposition, and achieved good qualitative and quantitative results in terms of PSNR and NCC. Ng et al. [14] proposed an image steganography technique using RDWT and QR decomposition. The secret image block with the lowest entropy value was first embedded into the cover image block with the lowest entropy value, and the process continued until all image blocks were embedded. The proposed technique showed higher PSNR values and better image quality.


3 Proposed Method

3.1 Stationary Wavelet Transform

In SWT, two types of filters, a high-pass and a low-pass filter, are applied to the data at each level without decimation. It is more computationally complex, but it is useful in denoising, change detection and steganography applications. In SWT, the length of the sub-bands is the same as that of the original image because no down-sampling is applied, and this provides more capacity for data hiding [15]. In the proposed method, the 3D SWT is implemented in two steps. First, an ordinary 2D SWT is applied to the video frames in the spatial domain. After that, a temporal 1D SWT is applied to the pixels at the same spatial location in consecutive frames. This analysis results in eight sub-bands.
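The two-step analysis can be sketched with PyWavelets as below. The (frames, height, width) layout, the one-level depth and the 'db4' wavelet are assumptions for the sketch (the paper fixes 'db4' only in its experiments), and SWT needs even lengths along the transformed axes.

```python
import numpy as np
import pywt

def swt3d_level1(frames: np.ndarray):
    """One-level 3D SWT: 2D SWT per frame, then 1D SWT along time."""
    # step 1: spatial 2D SWT on every frame -> (LL, (LH, HL, HH))
    spatial = [pywt.swt2(f, "db4", level=1)[0] for f in frames]
    ll = np.stack([approx for approx, _details in spatial])  # LL stack, (T, H, W)
    # step 2: temporal 1D SWT across frames at each pixel position
    lll, llh = pywt.swt(ll, "db4", level=1, axis=0)[0]
    return lll, llh  # lll is the LLL sub-band later used for embedding
```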

3.2 Singular Value Decomposition

Video frames can be considered as images. Any image X is an M × N matrix of non-negative scalar values. The singular value decomposition of X consists of two orthogonal matrices U, V and a diagonal matrix S. This decomposition is mathematically described as follows [16]:

$X = U \times S \times V^{T}$  (1)

3.3 Proposed Method

The proposed method depends mainly on SWT and SVD. A colored video is used as input, as shown in Fig. 1. The video frames are first converted into the YCbCr space. A one-level 3D SWT is performed on the Y (luminance) channel, which results in eight sub-bands. The LLL sub-band is selected for inserting the secret image's SWT coefficients. SVD is applied to the cover video sub-band "LLL" and to the secret image sub-band "LL." The singular values of both sub-bands are combined using the scaling value α. Finally, the inverse SWT is applied using the updated sub-bands, and the Y channel of the frames is updated. The frames are then returned to the RGB color space to produce the stego-video.

Fig. 1 Block diagram of the proposed method

3.4 Steps of the Algorithm

1. Input the cover video
2. Input the secret image
3. Convert the video frames to the YCbCr color space
4. Apply the SWT to the Y channel and to the secret image
5. Apply the SVD to the LLL sub-band of the cover video and to the LL sub-band of the secret image
6. Modify the singular values of the LLL sub-band as follows:

$S_c = S_c + \alpha \cdot S_i$  (2)

where $S_c$ is the singular matrix of the cover sub-band LLL, $S_i$ is the singular matrix of the secret image sub-band LL and $\alpha$ is a scaling factor ranging from 0 to 1.

7. Apply the inverse SWT to generate the stego-video.
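Steps 5–7 reduce to a singular-value blend; a hedged NumPy sketch for a single 2D sub-band, assuming the LLL and LL sub-bands have been brought to the same size:

```python
import numpy as np

def embed_singular_values(lll: np.ndarray, ll_secret: np.ndarray,
                          alpha: float = 0.1) -> np.ndarray:
    """Return the modified LLL sub-band after Eq. (2)."""
    u_c, s_c, vt_c = np.linalg.svd(lll, full_matrices=False)
    _, s_i, _ = np.linalg.svd(ll_secret, full_matrices=False)
    s_new = s_c + alpha * s_i            # Eq. (2): Sc = Sc + alpha * Si
    return u_c @ np.diag(s_new) @ vt_c   # rebuild before the inverse SWT
```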


4 Experimental Results

4.1 Quality Assessment

Structural Similarity. The structural similarity index measure (SSIM) is defined as a function of luminance, contrast and structure comparison terms. SSIM is used to measure the similarity percentage between the input image and the stego-image [17]. The output SSIM value ranges from −1 to 1, where a value of 1 indicates excellent structural similarity.

$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$  (3)

where x and y are two windows of the same size, $\mu_x$ is the average of x, $\mu_y$ is the average of y, $\sigma_x^2$ is the variance of x, $\sigma_y^2$ is the variance of y and $\sigma_{xy}$ is the covariance of x and y.

Normalized Cross-Correlation. The normalized cross-correlation (NCC) calculates the cross-correlation based on the size of the images in the frequency domain. It then computes local sums by precomputing running sums; the local sums are used to normalize the cross-correlation and obtain the correlation coefficients. The output matrix holds the correlation coefficients, which can range between −1.0 and 1.0. NCC is defined as [17]:

$\mathrm{NCC} = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} (P[i,j] \times S[i,j])}{\sum_{i=1}^{m}\sum_{j=1}^{n} (P[i,j])^2}$  (4)

The NCC is robust under uniform illumination changes. An NCC value near 1.0 means that the visual quality of the stego-image is excellent.
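A minimal sketch of the two measures, assuming float images of identical shape and reusing scikit-image's SSIM implementation rather than re-deriving Eq. (3):

```python
import numpy as np
from skimage.metrics import structural_similarity

def ncc(p: np.ndarray, s: np.ndarray) -> float:
    # Eq. (4): original image P against extracted image S, normalized by P
    return float(np.sum(p * s) / np.sum(p * p))

def ssim(x: np.ndarray, y: np.ndarray) -> float:
    return structural_similarity(x, y, data_range=float(x.max() - x.min()))
```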

4.2 Results and Discussion

The algorithm is tested using the RGB and YCbCr color spaces. The cover video frame size is 144 × 176, and the algorithm's performance is measured using different sizes of the secret image. Different grayscale images are used for testing, i.e., "moon," "Lena," and a black-and-white image, "logo." The value of α is 0.1, and the wavelet mother function is 'db4.' The robustness of the algorithm is measured under different attacks: salt and pepper noise, sharpening and median filtering. Qualitative results are displayed in Fig. 2; the human eye cannot distinguish between the cover and the stego-video. The secret images extracted after the different attacks on the stego-video are shown in Table 1.


Fig. 2 a Original cover video frame b stego-video frame

Fig. 3 Performance using different image sizes (Image NCC, Image SSIM and Cover SSIM plotted for sizes 32 × 32, 64 × 64, 128 × 128 and 144 × 176)

Table 2 shows the comparison between YCbCr and RGB for the Lena image, with performance measured in terms of NCC and SSIM. The results show that the YCbCr color space is better than RGB against attacks. Table 3 displays the performance of the algorithm using different images; the algorithm achieved high imperceptibility and robustness against attacks, as the NCC value is mostly above 0.9. Table 4 displays the performance of the algorithm with different sizes of the secret image "moon" without attacks: 32 × 32, 64 × 64, 128 × 128 and 144 × 176. In Fig. 3, the graph representation of the data shows that the NCC and SSIM increase proportionally with the image size, which increases the hiding ratio of the algorithm. Changing the secret image size does not affect the visual quality of the stego-video, as the SSIM of the stego-video remains 0.99 on average.

5 Conclusion

Information security is an important science nowadays. With a huge amount of data transferred over the Internet, these data need to be secured from being hacked or lost. Watermarking, steganography and cryptography are types of security techniques. In steganography, the secret data are embedded in a cover object; the data may be embedded in the spatial or the transform domain of the cover object. The proposed technique is a transform-based technique that uses the SWT to embed a secret image in a video sequence. A one-level 3D SWT is applied to the Y channel of the cover video frames, and a 2D SWT is applied to the secret image. Then, the SVD is performed on the detail sub-band of the cover object and of the secret image. The singular values are inserted using the value of α. The inverse transform is applied, and the Y channel is updated to produce the stego-video.


Table 1 Performance of the algorithm using different images in YCbCr color space, image size 128 × 128 (visual results for the Moon, Lena and Logo images under no attacks, salt and pepper v = 0.001, salt and pepper v = 0.01, sharpening and median filtering)


Table 2 NCC and SSIM for Lena image 128 × 128 in YCbCr and RGB color spaces

Different attacks          | YCbCr NCC | YCbCr SSIM | RGB NCC | RGB SSIM
No attacks                 | 0.98      | 0.9        | 0.98    | 0.90
Salt and pepper v = 0.001  | 0.98      | 0.89       | 0.97    | 0.86
Salt and pepper v = 0.01   | 0.9       | 0.72       | 0.80    | 0.59
Sharpening                 | 0.95      | 0.86       | 0.81    | 0.68
Median filter              | 0.95      | 0.76       | 0.61    | 0.25

Table 3 NCC and SSIM for different images 128 × 128 in YCbCr color space

Different attacks          | Moon NCC | Moon SSIM | Lena NCC | Lena SSIM | Logo NCC | Logo SSIM
No attacks                 | 0.98     | 0.94      | 0.98     | 0.9       | 0.98     | 0.7
Salt and pepper v = 0.001  | 0.97     | 0.93      | 0.98     | 0.89      | 0.98     | 0.69
Salt and pepper v = 0.01   | 0.77     | 0.66      | 0.9      | 0.72      | 0.96     | 0.55
Sharpening                 | 0.89     | 0.89      | 0.95     | 0.86      | 0.98     | 0.64
Median filter              | 0.96     | 0.87      | 0.95     | 0.76      | 0.98     | 0.6

Table 4 The algorithm performance using different image sizes

Image size | Image NCC | Image SSIM | Cover SSIM
32 × 32    | 0.96      | 0.93       | 0.99
64 × 64    | 0.97      | 0.93       | 0.99
128 × 128  | 0.98      | 0.95       | 0.99
144 × 176  | 0.99      | 0.97       | 0.99

The performance of the algorithm was compared in the YCbCr and RGB color spaces, and YCbCr achieved better performance against attacks. The average similarity for the secret image is 0.9 without attacks and 0.85 after adding different types of attacks to the stego-video. The structural similarity between the cover video and the stego-video is 0.99. Different secret images with different sizes were used, and the average NCC is 0.9, which proves that the imperceptibility and robustness of the algorithm are very high.

References

1. K.N. Choudry, A. Wanjari, A survey paper on video steganography. Int. J. Comput. Sci. Inf. Technol. 6(3), 2335–2338 (2015)
2. Y. Liu, S. Liu, Y. Wang, H. Zhao, Si. Liu, Video steganography: a review. Neurocomputing 335, 238–250 (2019)
3. S. Hemalatha, U. Dinesh Acharya, A. Renuka, Comparison of secure and high-capacity color image steganography techniques in RGB and YCbCr domains. Int. J. Adv. Inf. Technol. (IJAIT) 3(3) (2013)
4. M. Dalal, M. Juneja, Video steganography techniques in spatial domain—a survey, in Lecture Notes in Networks and Systems (2018), pp. 705–711
5. G. Amara, An introduction to wavelets. IEEE Comput. Sci. Eng. 2(2), 50–56 (1995)
6. S.W. Smith, The Scientist and Engineer's Guide to Digital Signal Processing, Chap. 8 (1999), pp. 141–167
7. N. Ahmed, T. Natarajan, K.R. Rao, Discrete cosine transform. IEEE Trans. Comput. C-23, 90–93 (1974)
8. A.R. Calderbank, I. Daubechies, W. Sweldens, B.L. Yeo, Wavelet transforms that map integers to integers. Appl. Comput. Harmonic Anal. 5(3), 332–369 (1998)
9. M. Mary Shanthi Rani, S. Lakshmanan, P. Saranya, A study on video steganography using transform domain techniques, in 5th National Conference on Computational Methods, Communication Techniques and Informatics, vol. 1, Gandhigram Rural Institute—Deemed University, Gandhigram, Dindigul (2017)
10. M.M. Sadek, A.S. Khalifa, G.M. Mostafa, Robust video steganography algorithm using adaptive skin-tone detection. Multimedia Tools Appl. (2017)
11. S. Kuraparthi, M. Kollati, P. Kora, Robust optimized discrete wavelet transform-singular value decomposition based video watermarking. Traitement du Signal 36(6), 565–573 (2019)
12. M.S. Subhedar, V.H. Mankar, Image steganography using redundant discrete wavelet transform and QR factorization. Comput. Electr. Eng. 54, 406–422 (2016)
13. S. Arunkumar, S. Vairavasundaram, K.S. Ravichandran, L. Ravi, RIWT and QR factorization-based hybrid robust image steganography using block selection algorithm for IoT devices. J. Intell. Fuzzy Syst. 1–12 (2019)
14. K.-H. Ng, S.-C. Liew, F. Ernawan, An improved image steganography scheme based on RDWT and QR decomposition, in IOP Conference Series: Materials Science and Engineering, vol. 769, The 6th International Conference on Software Engineering & Computer Systems, Pahang, Malaysia (2019), pp. 25–27
15. M.N. Al-Berry, M.A.-M. Salem, A.S. Hussein, M.F. Tolba, Spatio-temporal motion detection for intelligent surveillance applications. Int. J. Comput. Methods 12(01), 1350097 (2014)
16. M.S. Wang, W.C. Chen, A hybrid DWT-SVD copyright scheme based on K-means clustering and visual cryptography. Comput. Standards Interfaces 31(4), 750–762 (2009)
17. Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)

Statistical Inference Through Variable Adaptive Threshold Algorithm in Over-Sampling the Imbalanced Data Distribution Problem

S. Karthikeyan and T. Kathirvalavakumar

Abstract Classification of imbalanced data is a major problem in many real-time systems. Classifiers classify the majority class samples with less misclassification and the minority class samples with more misclassification; the biased decisions of a classifier are due to the limited availability of samples in the minority class. To solve this problem, a new re-sampling method for over-sampling is proposed. The synthetic samples for the minority class are generated by statistically analysing the features of the minority class samples. Here, the samples are generated in double the number of minority samples to reduce their misclassification, which helps the classifier to have a balanced focus on the majority and minority classes. The samples over-sampled through this approach are compared against over-sampling approaches such as SMOTE and ADASYN. The results obtained through the proposed work show better classification accuracy and a reduced misclassification rate in both the majority and minority classes. Results are evaluated using statistical evaluation metrics. It is also observed that over-sampling with the proposed approach is better for datasets with small and medium imbalance ratios. Keywords Imbalanced data · Over-sampling · Classification · Statistical methods · SMOTE · ADASYN

1 Introduction

Classifying imbalanced datasets is a major problem in the classification domain. Learning algorithms for skewed data distributions do not reduce the misclassification rate of the minority class. Importance must be given to the learning
algorithm of the imbalanced dataset [1]. Minority class samples are more important during classification, since they act as representatives of rare events [2]. Standard learning algorithms assume a balanced training set and are not good at classifying imbalanced datasets [3]. To solve the problems with imbalanced data, a data-level solution is provided using re-sampling methods, which are categorized as under-sampling, over-sampling and hybrid sampling. Random under-sampling [4] is widely used in many experiments; this approach randomly picks samples from the majority class, and its drawback is that it discards samples necessary for the learning process [5]. Another important under-sampling approach is the cluster-based under-sampling method, in which samples in the majority class are under-sampled by forming clusters based on similarity measures [6]. Random over-sampling [7] replicates the existing samples, but sample replication leads to over-fitting. The synthetic minority over-sampling technique (SMOTE) [8] creates synthetic samples similar to the minority class samples; this process is based on the nearest-neighbour algorithm, where the nearest neighbours are chosen at random, but it creates overlapping samples. To overcome the difficulties with SMOTE, borderline SMOTE, adaptive synthetic sampling (ADASYN) and safe-level SMOTE were developed. Borderline SMOTE [9] is based on the minority over-sampling method SMOTE; here, the samples near the borderline are over-sampled. ADASYN [10] uses the weighted distribution of the minority class samples, reducing the bias between the classes and adaptively shifting the classification decision boundary to generate synthetic samples. With safe-level SMOTE [11], the samples in the minority class are chosen based on different weight degrees, where the safe level is computed using the nearest neighbour. The SPIDER2 [12] algorithm generates synthetic samples in two phases: in the first phase, the characteristics of the majority class are identified and noisy samples are removed; in the second phase, the characteristics of the minority class samples are identified and noisy samples in the minority class are amplified. Cluster-based over-sampling [13] uses the k-means algorithm to cluster the minority class samples; based on the generated clusters, over-sampling takes place in every cluster as the problem requires. In [14], the nearest-neighbour set is computed for each minority class sample using the Euclidean distance; the minority class samples near the decision boundary are eliminated as noise, and synthetic samples are generated from the available minority class samples using a modified hierarchical clustering algorithm. In [15], the weighted minority over-sampling method is used to generate synthetic samples by assigning weights to the informative minority samples; to achieve better results, the data are enhanced using a deep auto-encoder, which helps in learning sparse, robust features. The hybrid sampling approach performs both under-sampling and over-sampling on the same dataset. In [16], wrapper-based methods are used for under-sampling the majority class samples, and the SMOTE algorithm is applied to the minority class samples; the data re-sampling process proceeds with different sampling percentages until a better percentage of hybrid sampling is identified. Shekhawat et al. [17] recently proposed an approach for data transformation with spider monkey optimization.


Existing over-sampling approaches balance the sample count in both classes or need the assistance of under-sampling approaches for better classification. To overcome these problems with over-sampling techniques, it is proposed to generate artificial samples for over-sampling using inferential statistical methods [18, 19]. The remaining sections of this paper are organized as follows: Sect. 2 illustrates the proposed work, and Sect. 3 demonstrates the experimental results.

2 Proposed Work

The proposed statistical inference through variable adaptive (SIVA) threshold method considers the minority class samples in the training data of an imbalanced dataset. The mean and standard deviation of each feature are calculated. Based on the calculated mean and standard deviation, the standard score (z-score) of each feature is calculated. The z-score of a feature value determines how far it deviates from the mean:

$Z = \frac{x - \text{mean of the feature}}{\text{standard deviation of the feature}}$  (1)

Using the z-score, synthetic samples are generated. Before generating the synthetic samples, the upper and lower threshold values for a dataset are specified; finding optimal thresholds is an important task in this work. Samples are generated by modifying the existing values of the features using the z-scores of the features. The mean of the z-scores is always zero: z-score values above zero correspond to feature values lying above the mean of the corresponding feature, and values below zero to feature values lying below the mean. If the z-score is greater than 0, the upper threshold value is multiplied by a random number generated between 0 and 1 and added to the feature value. If it is less than 0, the lower threshold value is multiplied by a random number generated between 0 and 1 and added to the feature value. This process mimics the behaviour of the original samples and helps in the classification process. This procedure is applied to every feature of each pattern of a dataset. The working of the proposed method is illustrated in Fig. 1. After synthetic sample generation, the size of the minority class is doubled. The samples generated through the above procedure are merged with the original minority samples in the training dataset. The samples in the majority class are not altered during the re-sampling process.

Fig. 1 Proposed work

2.1 Algorithm

1. Start with the minority samples of the training dataset
2. Select the upper and lower threshold values for the dataset
3. Calculate the mean and standard deviation of each feature in the dataset
4. Calculate the z-score of a feature using formula (1)
5. If the z-value is greater than zero, then a random number between 0 and 1 is multiplied by the upper threshold value and added to the corresponding feature value of the pattern
6. If the z-value is less than zero, then a random number between 0 and 1 is multiplied by the lower threshold value and added to the corresponding feature value of the pattern
7. Repeat steps 4–6 for every value of the feature
8. Repeat steps 4–7 for all features.
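A compact NumPy sketch of steps 1–8, assuming the minority class is given as an (n_samples, n_features) array and the thresholds are picked per dataset as in Table 6; guarding zero-variance features against division by zero is an implementation detail not discussed in the paper:

```python
import numpy as np

def siva_oversample(minority: np.ndarray, upper: float, lower: float,
                    seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    mean, std = minority.mean(axis=0), minority.std(axis=0)
    z = (minority - mean) / np.where(std == 0, 1.0, std)  # formula (1)
    r = rng.random(minority.shape)                        # random in [0, 1)
    # steps 5-6: shift by upper*r where z > 0, by lower*r where z < 0
    shift = np.where(z > 0, upper * r, np.where(z < 0, lower * r, 0.0))
    synthetic = minority + shift
    return np.vstack([minority, synthetic])  # doubles the minority class
```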

3 Experimental Results

The proposed work is trained and tested on 24 imbalanced datasets. The number of instances in the chosen datasets ranges from 192 to 5472, obtained from the Keel repository [20]. The imbalance ratio of a dataset is calculated by dividing the total number of samples in the majority class by the total number of samples in the minority class. Information about the chosen datasets is given in Table 1. To demonstrate the working of the proposed method, each dataset is split into a training set and a testing set. The training set contains 60% of the majority class data and 60% of the minority class data, randomly fetched from the whole dataset; the remaining 40% of both the major and minor class data form the testing set.

Table 1 Information about the chosen imbalanced datasets

Dataset        | #Major | #Minor | Total | #Features | Imbalance ratio
Abalone9-18    | 689    | 42     | 731   | 8         | 16.40
Ecoli0vs1      | 143    | 77     | 220   | 8         | 1.86
Ecoli1         | 259    | 77     | 336   | 8         | 3.36
Ecoli2         | 284    | 52     | 336   | 8         | 5.46
Glass016vs2    | 175    | 17     | 192   | 10        | 10.29
Glass1         | 138    | 76     | 214   | 10        | 1.82
Glass2         | 197    | 17     | 214   | 10        | 11.59
Haberman       | 225    | 81     | 306   | 4         | 2.78
New-Thyroid1   | 180    | 35     | 215   | 6         | 5.14
New-Thyroid2   | 180    | 35     | 215   | 6         | 5.14
Page-blocks0   | 4913   | 559    | 5472  | 11        | 8.79
Pima           | 500    | 268    | 768   | 9         | 1.87
Shuttlec0vsc4  | 1706   | 123    | 1829  | 10        | 13.87
Vehicle0       | 647    | 199    | 846   | 19        | 3.25
Vehicle1       | 628    | 217    | 845   | 19        | 2.89
Vehicle3       | 633    | 212    | 845   | 19        | 2.99
Vowel0         | 898    | 90     | 988   | 14        | 9.98
Wisconsinlmb   | 444    | 239    | 683   | 10        | 1.86
Yeast05679vs4  | 477    | 81     | 558   | 9         | 5.89
Yeast1         | 1055   | 429    | 1484  | 9         | 2.46
Yeast1vs7      | 429    | 30     | 459   | 8         | 14.30
Yeast1289vs7   | 917    | 30     | 947   | 9         | 30.57
Yeast2vs4      | 463    | 51     | 514   | 9         | 9.08
Yeast4         | 1240   | 244    | 1484  | 9         | 5.08

To illustrate the efficiency of the proposed work, the minority class samples of the training dataset are also processed using the over-sampling algorithms SMOTE and ADASYN. The instances generated through the SMOTE and ADASYN algorithms are likewise double the size of the minority class, as in the proposed work. The training instances generated through these three approaches are tested using the C4.5 classifier. The classification accuracy of the proposed method compared with the existing over-sampling methods SMOTE and ADASYN is displayed in Figs. 2 and 3. With SMOTE, the classification accuracy of the proposed method is better in 13 datasets, the same in 4 datasets and different in the remaining 7 datasets. With ADASYN, the classification accuracy is better in 12 datasets, the same in 6 datasets and different in the remaining 6 datasets. With imbalanced datasets the classification accuracy is not the only deciding factor; we have to take other factors, such as the misclassification rate of the minority class, into account.
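Both baselines are available in the imbalanced-learn package; a hedged sketch of the set-up is given below, where the dictionary form of sampling_strategy (doubling the minority class to match the proposed setting) is an assumed way to reproduce the paper's sample counts:

```python
from collections import Counter
from imblearn.over_sampling import SMOTE, ADASYN

def oversample_baselines(X_train, y_train, minority_label):
    n_min = Counter(y_train)[minority_label]
    target = {minority_label: 2 * n_min}  # double the minority samples
    smote = SMOTE(sampling_strategy=target, random_state=0)
    adasyn = ADASYN(sampling_strategy=target, random_state=0)
    return (smote.fit_resample(X_train, y_train),
            adasyn.fit_resample(X_train, y_train))
```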


Fig. 2 Classification accuracy comparison under C4.5 for one group of datasets (Proposed vs. SMOTE vs. ADASYN, accuracy axis 50–100)

Fig. 3 Classification accuracy comparison under C4.5 for the next group of datasets (Proposed vs. SMOTE vs. ADASYN, accuracy axis 50–100)

The proposed work reduces the misclassification rate in both the majority and minority classes. On comparing the misclassification rate of the proposed approach with the existing over-sampling approaches SMOTE and ADASYN, the misclassification rate is considerably decreased in the proposed method. The percentage of misclassification in the majority and the minority classes under the C4.5 classifier is shown in Tables 2 and 3; the bold values in Tables 2 and 3 of the original mark a better or equal misclassification rate for the proposed method. From Tables 2 and 3, it is observed that, with the C4.5 classifier, the percentage of misclassification of the proposed method is lower in the majority class for all chosen datasets except Glass016vs2, Glass2, New-Thyroid2, Page-blocks0, Pima, Vehicle0, Yeast1 and Yeast2vs4.

Table 2 Percentage of misclassification in the majority class

Dataset        | Proposed  | SMOTE     | ADASYN
Abalone9-18    | 3.26087   | 4.710145  | 3.623188
Ecoli0vs1      | 5.454545  | 5.454545  | 5.454545
Ecoli1         | 2.884615  | 3.846154  | 5.769231
Ecoli2         | 5.263158  | 5.263158  | 7.894737
Glass016vs2    | 15.71429  | 8.571429  | 14.28571
Glass1         | 18.18182  | 23.63636  | 29.09091
Glass2         | 6.329114  | 3.797468  | 10.12658
Haberman       | 15.55556  | 20        | 15.55556
New-Thyroid1   | 2.777778  | 2.777778  | 2.777778
New-Thyroid2   | 1.388889  | 1.388889  | 0
Page-blocks0   | 4.936387  | 1.628499  | 2.391858
Pima           | 31        | 29        | 24.5
Shuttlec0vsc4  | 0         | 0         | 0
Vehicle0       | 11.96911  | 6.563707  | 4.247104
Vehicle1       | 17.13147  | 18.7251   | 19.52191
Vehicle3       | 18.57708  | 20.1581   | 20.1581
Vowel0         | 1.114206  | 1.392758  | 1.392758
Wisconsinlmb   | 3.932584  | 4.494382  | 3.932584
Yeast05679vs4  | 6.282723  | 9.947644  | 13.08901
Yeast1         | 23.93365  | 25.59242  | 22.98578
Yeast1vs7      | 5.813953  | 6.976744  | 8.72093
Yeast1289vs7   | 3.542234  | 4.632153  | 6.26703
Yeast2vs4      | 2.702703  | 1.621622  | 2.702703
Yeast4         | 10.68548  | 10.8871   | 10.68548

The percentage of misclassification in the minority class is lower for all datasets except Page-blocks0, Glass1, Pima, Vehicle0, Yeast1, Yeast1289vs7 and Yeast2vs4. The above experiments are evaluated using non-parametric statistical evaluation metrics to validate the correctness: the area under the receiver operating characteristic curve (AUC) score and the Cohen Kappa score are used in the proposed experiment. The comparison results are shown in Tables 4 and 5. Table 4 shows that the proposed method performs better than SMOTE and ADASYN in 12 datasets and worse in 8 datasets. Table 5 shows that the proposed method performs better than SMOTE in 14 datasets and better than ADASYN in 11 datasets, and worse than SMOTE in 6 datasets and worse than ADASYN in 8 datasets. In Tables 4 and 5, the bold values of the original mark a better or equal score for the proposed method.
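Both scores are available in scikit-learn; a minimal sketch, assuming a fitted classifier clf (here a DecisionTreeClassifier standing in for C4.5) and a held-out test split:

```python
from sklearn.metrics import roc_auc_score, cohen_kappa_score

def evaluate(clf, X_test, y_test):
    # AUC from the positive-class probability, kappa from hard predictions
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    kappa = cohen_kappa_score(y_test, clf.predict(X_test))
    return auc, kappa
```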

Table 3 Percentage of misclassification in the minority class

Dataset        | Proposed  | SMOTE     | ADASYN
Abalone9-18    | 47.05882  | 47.05882  | 70.58824
Ecoli0vs1      | 3.030303  | 3.030303  | 3.030303
Ecoli1         | 16.12903  | 22.58065  | 29.03226
Ecoli2         | 19.04762  | 19.04762  | 19.04762
Glass016vs2    | 57.14286  | 100       | 85.71429
Glass1         | 46.66667  | 50        | 23.33333
Glass2         | 57.14286  | 71.42857  | 57.14286
Haberman       | 53.125    | 59.375    | 59.375
New-Thyroid1   | 14.28571  | 14.28571  | 14.28571
New-Thyroid2   | 0         | 21.42857  | 7.142857
Page-blocks0   | 15.625    | 18.30357  | 13.83929
Pima           | 38.31776  | 31.7757   | 43.92523
Shuttlec0vsc4  | 0         | 0         | 0
Vehicle0       | 13.75     | 10        | 8.75
Vehicle1       | 32.18391  | 47.12644  | 43.67816
Vehicle3       | 37.64706  | 40        | 37.64706
Vowel0         | 5.555556  | 8.333333  | 13.88889
Wisconsinlmb   | 5.208333  | 6.25      | 5.208333
Yeast05679vs4  | 35        | 40        | 40
Yeast1         | 47.09302  | 41.86047  | 45.34884
Yeast1vs7      | 41.66667  | 50        | 66.66667
Yeast1289vs7   | 83.33333  | 75        | 75
Yeast2vs4      | 25        | 30        | 20
Yeast4         | 46.93878  | 55.10204  | 46.93878

Table 6 shows the upper and lower threshold values selected for the datasets by trial and error. The instances over-sampled through the proposed method depend on the selection of the upper and lower threshold values. From the experimental results, it is observed that instances generated with a large upper threshold and a smaller lower threshold yield better accuracy with a lower misclassification rate. Assigning a smaller value to the upper threshold and a larger value to the lower threshold, or the same value to both, leads to performance degradation during classification. Proper analysis of the features in a dataset, along with the use of statistical concepts and threshold measures, can help in achieving better classification results.

Table 4 AUC score under C4.5 classifier

Dataset        | Proposed | SMOTE  | ADASYN
Abalone9-18    | 0.715    | 0.658  | 0.687
Ecoli0vs1      | 0.958    | 0.958  | 0.958
Ecoli1         | 0.875    | 0.847  | 0.826
Ecoli2         | 0.878    | 0.878  | 0.852
Glass016vs2    | 0.629    | 0.45   | 0.5
Glass1         | 0.671    | 0.658  | 0.747
Glass2         | 0.683    | 0.611  | 0.592
Haberman       | 0.625    | 0.603  | 0.604
New-Thyroid1   | 0.915    | 0.915  | 0.915
New-Thyroid2   | 0.967    | 0.886  | 0.964
Page-blocks0   | 0.901    | 0.91   | 0.919
Pima           | 0.636    | 0.682  | 0.637
Shuttlec0vsc4  | 1        | 1      | 1
Vehicle0       | 0.867    | 0.909  | 0.908
Vehicle1       | 0.744    | 0.671  | 0.684
Vehicle3       | 0.676    | 0.681  | 0.711
Vowel0         | 0.985    | 0.896  | 0.964
Wisconsinlmb   | 0.939    | 0.951  | 0.939
Yeast05679vs4  | 0.876    | 0.725  | 0.767
Yeast1         | 0.634    | 0.675  | 0.685
Yeast1vs7      | 0.724    | 0.768  | 0.665
Yeast1289vs7   | 0.648    | 0.653  | 0.592
Yeast2vs4      | 0.861    | 0.842  | 0.864
Yeast4         | 0.7      | 0.681  | 0.7

4 Conclusion

Imbalanced datasets over-sampled with the proposed method achieve good classification accuracy and a low misclassification rate in both the majority and minority classes in many datasets. In the proposed work, instead of fully balancing the minority class to the size of the majority class, the samples in the minority class are generated in double its size. The use of the proposed z-score-based over-sampling method along with the C4.5 classifier produces good classification accuracy and a reduced misclassification rate in both the majority and minority classes. In the proposed experiment, datasets with small to medium imbalance ratios are considered; an extension of this work can concentrate on extremely imbalanced datasets.

Table 5 Cohen Kappa statistics under C4.5 classifier

Dataset        | Proposed | SMOTE   | ADASYN
Abalone9-18    | 0.775    | 0.39    | 0.313
Ecoli0vs1      | 0.952    | 0.904   | 0.904
Ecoli1         | 0.894    | 0.737   | 0.711
Ecoli2         | 0.862    | 0.729   | 0.61
Glass016vs2    | 0.708    | −0.092  | 0
Glass1         | 0.696    | 0.351   | 0.445
Glass2         | 0.821    | 0.222   | 0.45
Haberman       | 0.727    | 0.208   | 0.205
New-Thyroid1   | 0.829    | 0.829   | 0.829
New-Thyroid2   | 0.915    | 0.819   | 0.956
Page-blocks0   | 0.9      | 0.803   | 0.818
Pima           | 0.734    | 0.378   | 0.309
Shuttlec0vsc4  | 1        | 1       | 1
Vehicle0       | 0.936    | 0.772   | 0.815
Vehicle1       | 0.751    | 0.304   | 0.358
Vehicle3       | 0.715    | 0.375   | 0.436
Vowel0         | 0.969    | 0.812   | 0.821
Wisconsinlmb   | 0.952    | 0.888   | 0.912
Yeast05679vs4  | 0.811    | 0.413   | 0.4
Yeast1         | 0.759    | 0.36    | 0.325
Yeast1vs7      | 0.8      | 0.404   | 0.264
Yeast1289vs7   | 0.655    | 0.194   | 0.111
Yeast2vs4      | 0.865    | 0.768   | 0.756
Yeast4         | 0.721    | 0.349   | 0.39

Table 6 Upper and lower threshold values

Dataset        | Upper threshold | Lower threshold
Abalone9-18    | 0.625           | 0.5
Ecoli0vs1      | 0.725           | 0.5
Ecoli1         | 0.625           | 0.5
Ecoli2         | 0.5             | 0.25
Glass016vs2    | 0.4             | 0.2
Glass1         | 0.5             | 0.25
Glass2         | 0.625           | 0.5
Haberman       | 0.725           | 0.5
New-Thyroid1   | 0.5             | 0.25
New-Thyroid2   | 0.5             | 0.25
Page-blocks0   | 0.625           | 0.5
Pima           | 0.725           | 0.5
Shuttlec0vsc4  | 0.675           | 0.5
Vehicle0       | 0.5             | 0.25
Vehicle1       | 0.65            | 0.475
Vehicle3       | 0.725           | 0.5
Vowel0         | 0.5             | 0.25
Wisconsinlmb   | 0.5             | 0.25
Yeast05679vs4  | 0.8             | 0.4
Yeast1         | 0.725           | 0.475
Yeast1vs7      | 0.625           | 0.5
Yeast1289vs7   | 0.5             | 0.25
Yeast2vs4      | 0.5             | 0.25
Yeast4         | 0.5             | 0.25

References

1. H. He, E.A. Garcia, Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 21(9), 1263–1284 (2009)
2. G.M. Weiss, Mining with rarity: a unifying framework. ACM SIGKDD Explor. Newsl. 6(1), 7–19 (2004)
3. A. Fernandez, S. Garcia, J. Luengo, E. Bernado-Mansilla, F. Herrera, Genetics-based machine learning for rule induction: state of the art, taxonomy, and comparative study. IEEE Trans. Evol. Comput. 14(6), 913–941 (2010)
4. S. Mishra, Handling imbalanced data: SMOTE vs. random undersampling. Int. Res. J. Eng. Technol. 4(8), 317–320 (2017)
5. J. Luengo, A. Fernandez, S. Garcia, F. Herrera, Addressing data complexity for imbalanced data sets: analysis of SMOTE-based oversampling and evolutionary undersampling. Soft Comput. 15(10), 1909–1936 (2011)
6. S.J. Yen, Y.S. Lee, Cluster-based under-sampling approaches for imbalanced data distributions. Expert Syst. Appl. 36(3, Part 1), 5718–5727 (2009)
7. G.E.A.P.A. Batista, R.C. Prati, M.C. Monard, A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl. 6(1), 20–29 (2004)
8. N.V. Chawla, K.W. Bowyer, L.O. Hall, W.P. Kegelmeyer, SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)
9. H. Han, W.-Y. Wang, B.-H. Mao, Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning, in LNCS (Springer, 2005), pp. 878–887
10. H. He, Y. Bai, E.A. Garcia, S. Li, ADASYN: adaptive synthetic sampling approach for imbalanced learning, in 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence) (2008), pp. 1322–1328
11. C. Bunkhumpornpat, K. Sinapiromsaran, C. Lursinsap, Safe-level-SMOTE: safe-level synthetic minority over-sampling technique for handling the class imbalanced problem, in Lecture Notes in Computer Science, vol. 5476 (2009), pp. 475–482
12. K. Napierała, J. Stefanowski, S. Wilk, Learning from imbalanced data in presence of noisy and borderline examples, in Lecture Notes in Computer Science, vol. 6086 (2010), pp. 158–167
13. T. Jo, N. Japkowicz, Class imbalances versus small disjuncts. ACM SIGKDD Explor. Newsl. 6(1), 40–49 (2004)
14. S. Barua, M.M. Islam, X. Yao, K. Murase, MWMOTE—majority weighted minority oversampling technique for imbalanced data set learning. IEEE Trans. Knowl. Data Eng. 26(2), 405–425 (2014)
15. Y. Zhang, X. Li, L. Gao, L. Wang, L. Wen, Imbalanced data fault diagnosis of rotating machinery using synthetic oversampling and feature learning. J. Manuf. Syst. 48, 34–50 (2018)
16. N.V. Chawla, D.A. Cieslak, L.O. Hall, A. Joshi, Automatically countering imbalance and its empirical relationship to cost. Data Min. Knowl. Discov. 17(2), 225–252 (2008)
17. S.S. Shekhawat, H. Sharma, S. Kumar, A. Nayyar, B. Qureshi, bSSA: binary salp swarm algorithm with hybrid data transformation for feature selection. IEEE Access 9, 14867–14882 (2021). https://doi.org/10.1109/ACCESS.2021.3049547
18. D.J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures (2004)
19. W. Xie, G. Liang, Z. Dong, B. Tan, B. Zhang, An improved oversampling algorithm based on the samples' selection strategy for classifying imbalanced data. Math. Probl. Eng. 2019 (2019)
20. Keel Repository: Small and Medium Scale Datasets. https://sci2s.ugr.es/keel/datasets.php

Feature Engineering for Tal-Patra Manuscript Text Using Natural Language Processing Techniques

M. Poornima Devi and M. Sornam

Abstract The Tal-Patra manuscript is one of the most precious documents that have to be preserved in order to explore ancient knowledge in fields such as Ayurvedic and Siddha medicine, historical events, historic pandemics, art and astronomy. Natural Language Processing plays a crucial role in assessing and manipulating text from documents. In this article, different feature engineering techniques were experimented with on Tal-Patra manuscript text, namely Bag-of-Words (BoW), TF-IDF (Term Frequency-Inverse Document Frequency) and Word Embedding (Word2vec). The Tal-Patra manuscript of Kuzhanthai Pini Marunthu (Child-Related Medicine) was used as the dataset. As the pre-processing steps include sentence and word tokenization, this research work began with tokenization. Then the part of speech (POS) was tagged, and feature engineering techniques were then performed to analyze the text. For the Tamil Tal-Patra manuscript, the word embedding technique (Word2vec) performed well compared to the other feature engineering techniques. Keywords Tal-Patra · TF-IDF · BoW · Word2vec · Tokenization · POS

1 Introduction

The primary source is the dry palm leaf, known as the Tal-Patra manuscript and also as the Tada Patra or Panna, one of India's most famous manuscript types, particularly in Tamil Nadu [1, 2]. The palm leaf is a vital material that was widely used for writing practices in the country. The ancient means of writing in India is the Tal-Patra manuscript; in Tamil Nadu it is known as the "Olai Chuvadi" in Tamil [3]. This manuscript type may have many sheets and may vary in dimensions and types; the type of manuscript varies according to the region and the sources available there. In the coastal states of India, the famous resource is the palm tree and the inscribed manuscript is the Tal-Patra script. In the Himalayan belts, the most available resource is the Bhoj-Patra tree and the inscribed manuscript is known as the Bhoj-Patra script.

In the northeastern region of Assam, the most available resource is the agaru tree, and the script is known as Hansi-Patra [4–6]. Text analysis of the Tamil language is a research field with many challenges. In particular, text analysis of the Tal-Patra manuscript is more difficult than that of other documents because of the ancient characters and individual writing styles. The analysis of Tal-Patra script text using Natural Language Processing (NLP) techniques is proposed in this work. The components of NLP are Natural Language Understanding and Natural Language Generation [7]. Some of the applications of NLP are spell checking, information extraction, keyword searching, advertisement matching, etc. [8–11]. Vaibhavi et al. [7] proposed a system for a Tamil text-to-speech synthesizer using prosody prediction for sentiment analysis and achieved 77% accuracy. Rajan et al. [8] developed a system for the classification of Tamil documents using a neural network; the various models used in that work are the Vector Space Model, Support Vector Machine, Naïve Bayes and K-Nearest Neighbor. Dhanalakshmi and Rajendran [12] proposed work using NLP for Tamil grammar learning and teaching, with different levels such as a character analyzer, POS tagging and chunkers. Thevatheepan and Sagara [13] developed a system for summarizing Tamil sports news text using Natural Language Processing; the features used were sentence position, number of named entities, TF-IDF and number of numerals, and a Restricted Boltzmann Machine was used to enhance the text summarization. Recently, Shekhawat et al. [14] developed an approach to improve the efficiency of feature selection using data transformation.

2 Proposed Work In this proposed work, tokenization is performed as a pre-processing technique, then the words are labeled with their POS, and then information is extracted using feature engineering techniques such as BoW, TF-IDF and Word2vec. The proposed flow is represented in Fig. 1.

Fig. 1 Flow of proposed work

2.1 Tokenization Tokenization is the process of splitting an input sequence into different parts, denoted as tokens. Tokens can be characters, words, sentences, punctuation marks, etc., and they play a vital role in semantic processing. In this work, sentence and word tokenization were used to segment the sentences and words. The different types of tokenizers used in the proposed work to analyze the Tal-Patra scripts are listed below [7]:
• Whitespace tokenizer
• Word punctuation tokenizer
• Tree bank word tokenizer.
Whitespace Tokenizer. It segments words into tokens by considering only the whitespace present in the document. It does not recognize any punctuation in the text, which can lead to meaningless tokens. Word Punctuation Tokenizer. The word punctuation tokenizer is useful for segmenting the words and punctuation present in the text. It considers all punctuation symbols as separate tokens, even though the tokens produced by this tokenizer are not always meaningful. Tree Bank Word Tokenizer. This tokenizer helps to extract meaningful information from the text using tokens. It is a combination of the whitespace and word punctuation tokenizers: it considers punctuation and produces meaningful tokens.
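A minimal sketch of the three tokenizers above, using NLTK (whose tokenize module provides classes with these exact names); the English sample sentence is a stand-in, since the paper applies them to Tamil manuscript text:

```python
from nltk.tokenize import (WhitespaceTokenizer, WordPunctTokenizer,
                           TreebankWordTokenizer)

text = "Palm-leaf manuscripts, preserved for centuries, hold ancient knowledge."

# Splits on whitespace only; punctuation stays attached to the words.
print(WhitespaceTokenizer().tokenize(text))

# Splits words and punctuation into separate tokens.
print(WordPunctTokenizer().tokenize(text))

# Combines both behaviors to produce more meaningful tokens.
print(TreebankWordTokenizer().tokenize(text))
```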

2.2 Part of Speech (POS) A POS tagger examines the words of a language and allocates a part of speech to every word, such as noun, verb, adverb or adjective [4]. Some of the part of speech tags recognized by Natural Language Processing techniques and their descriptions are tabulated in Table 1.
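A minimal sketch of POS tagging with NLTK's default tagger; note that pos_tag is trained on English, so for the Tamil manuscript a Tamil-specific tagger would be substituted in practice:

```python
import nltk
from nltk import pos_tag, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = word_tokenize("The ancient manuscript describes rare medicines.")
print(pos_tag(tokens))
# e.g. [('The', 'DT'), ('ancient', 'JJ'), ('manuscript', 'NN'), ...]
```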

2.3 Feature Engineering Techniques Feature engineering is the most critical aspect of text analysis and classification. It is the mechanism by which information is extracted from the raw text. BoW, TF-IDF and Word2vec are the feature engineering techniques [8] used for this process. Bag-of-Words (BoW). BoW records the appearance of words or phrases present in the document. It is based on two components: a vocabulary of known words and a measure of the occurrence of those known words.

Table 1 Part of speech tags and their corresponding descriptions

S. No. | Tag  | Description
1.     | NN   | Noun, singular
2.     | NNS  | Noun, plural
3.     | DT   | Determiner
4.     | JJ   | Adjective
5.     | JJS  | Adjective, superlative
6.     | JJR  | Adjective, comparative
7.     | NNP  | Proper noun, singular
8.     | NNPS | Proper noun, plural
9.     | CC   | Coordinating conjunction
10.    | RB   | Adverb
11.    | RBR  | Adverb, comparative
12.    | RP   | Particle
13.    | SYM  | Symbol
14.    | VB   | Verb
15.    | VBD  | Verb, past tense
16.    | VBG  | Verb, present participle
17.    | VBN  | Verb, past participle
18.    | CD   | Cardinal number

This method is based on a bag containing words and does not take the order of the words in the text into consideration. It determines whether a known word is present in the text and counts it. An attribute column, called a text vectorization, is generated for every token. The main disadvantage of BoW is that if a new phrase introduces new vocabulary, the feature length increases, which leads to an enlarged vector size; if the vector contains many 0's, it produces a sparse matrix.

BoW = \sum_{c \in r} W_{c,r}    (1)

where W_{c,r} is the total number of occurrences of word "c" in the document "r."

Term Frequency-Inverse Document Frequency (TF-IDF). The TF-IDF feature engineering technique is based on statistical features and is used to extract the importance of words in the text.

Term Frequency. The term frequency measures the occurrence of a word in a particular sentence or document. It can be computed as follows:

tf(c,r) = \frac{f_{c,r}}{\sum_{p \in r} f_{p,r}}    (2)

where f_{c,r} is the number of occurrences of word "c" in the document "r," and \sum_{p \in r} f_{p,r} is the total number of words "p" in the document "r."

Inverse Document Frequency (IDF). IDF measures the importance of a word for a better understanding of the document. It can be computed as follows:

idf(c,R) = \log \frac{|R|}{|\{r \in R : c \in r\}|}    (3)

where |R| is the total number of documents in the corpus, and |\{r \in R : c \in r\}| is the number of documents in "R" containing the word "c." TF-IDF can then be computed for each word as in Eq. (4):

tfidf(c,r,R) = tf(c,r) \cdot idf(c,R)    (4)
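As a minimal sketch of Eqs. (1)–(4) in code, scikit-learn's CountVectorizer and TfidfVectorizer compute the BoW counts and TF-IDF weights directly; the two sample strings stand in for tokenized manuscript lines, and note that scikit-learn applies a smoothed variant of Eq. (3) by default:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["ancient palm leaf manuscript", "palm leaf medicine text"]

bow = CountVectorizer()
print(bow.fit_transform(corpus).toarray())    # raw word counts, Eq. (1)
print(bow.get_feature_names_out())            # the known-word vocabulary

tfidf = TfidfVectorizer()
print(tfidf.fit_transform(corpus).toarray())  # tf * idf weights, Eq. (4)
```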

Word2vec. Word2vec is one of the most popular techniques for word embedding. BoW extracts features from the number of occurrences of words without considering their order, and TF-IDF extracts features with a better weighting of words than BoW. Word2vec was used to obtain a better understanding of the semantic meaning of words in the language.
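A minimal sketch of training such embeddings with gensim's Word2Vec on tokenized sentences; the toy corpus and hyperparameters are illustrative stand-ins for the manuscript data:

```python
from gensim.models import Word2Vec

sentences = [["palm", "leaf", "manuscript"],
             ["palm", "leaf", "medicine"],
             ["ancient", "manuscript", "medicine"]]

# sg=1 selects the skip-gram variant; vector_size is the embedding width.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["manuscript"][:5])       # first few embedding dimensions
print(model.wv.most_similar("palm"))    # semantically closest tokens
```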

3 Experimental Results and Discussion The dataset used in this work consists of the god-prayer scripts of the Kuzhanthai Pini Marunthu (Child-Related Medicine) manuscript. A sample text of the manuscript is displayed in Fig. 2. Tokenization was performed as the pre-processing in the first stage of text analysis. Figure 3 illustrates the difference between word punctuation tokenization and word tree bank tokenization.

Fig. 2 Sample text of Tal-Patra manuscript

Fig. 3 Word tokenization. a Word punctuation tokenization. b Word tree bank tokenization

Figure 4 illustrates the sentence tokenization of the Tal-Patra manuscript. Figure 5 shows the part of speech outcomes for the Tal-Patra manuscript text, where VB denotes a verb, JJ denotes an adjective and NNP indicates a singular proper noun. The word count vector representation of Bag-of-Words is illustrated in Fig. 6, where the value 1 indicates the presence of a word and 0 indicates its absence.

Fig. 4 Sentence tokenization

Fig. 5 POS tags of Tal-Patra script text

Fig. 6 Feature vector values of BoW

Fig. 7 Feature vector values of TF-IDF

Figure 7 illustrates the TF-IDF vector value representation of the Tal-Patra script text; the real values are the statistical frequency weights of the TF-IDF feature engineering technique. Figure 8 illustrates the word embedding feature values of the sentences using Word2vec, where 0 indicates the padding value. A sample of the frequencies of the word tokens is displayed in Fig. 9, and the frequency distribution of the words with their counts is shown in Fig. 10.
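A minimal sketch of the token-frequency analysis behind Figs. 9 and 10, using NLTK's FreqDist on an already-tokenized toy text:

```python
from nltk import FreqDist

tokens = ["palm", "leaf", "palm", "manuscript", "leaf", "palm"]
freq = FreqDist(tokens)

print(freq.most_common(2))   # e.g. [('palm', 3), ('leaf', 2)]
freq.plot(10)                # frequency distribution plot (needs matplotlib)
```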

Fig. 8 Word embedded sentence values for Word2vec

Fig. 9 Sample of frequency of the tokens in the text

Fig. 10 Frequency distributions of words

4 Conclusion Three methods for extracting information from Tal-Patra script text were used in this work. Tokenization was used as pre-processing: sentence tokenization, whitespace tokenization, word punctuation tokenization and word tree bank tokenization were performed. The words were then labeled with their POS to identify the grammatical categories, and the feature engineering techniques were applied. The three feature engineering approaches, BoW, TF-IDF and Word2vec, were experimented with, and Word2vec performed well for the Tal-Patra script in a structured way. In the future, the work can be extended to classify or categorize the Tal-Patra manuscripts, which will lead to easier access to catalogs of the manuscripts.

References 1. B. Kiruba, A. Nivethitha, M. Vimaladevi, Segmentation of handwritten Tamil character from palm script using histogram approach. Int. J. Inf. Futuristic Res. 4(5), 6418–6424 (2017)

2. N.S. Panyam, T.R. Vijaya Lakshmi, R.K. Krishnan, N.V. Koteswara Rao, Modeling of palm leaf character recognition system using transform based techniques. Pattern Recogn. Lett. 84, 29–34 (2016) 3. M. Poornima Devi, M. Sornam, Classification of ancient handwritten Tamil characters on palm leaf inscription using modified adaptive backpropagation neural network with GLCM features. ACM Trans. Asian Low Resour. Lang. Inf. Process. 19(6), 1–24 (2020) 4. S. Thavareesan, S. Mahesan, Sentiment analysis in Tamil texts: a study on machine learning techniques and feature representation, in 2019 IEEE 14th International Conference on Industrial and Information Systems (2019), pp. 320–325 5. K. Subramani, M. Subramaniam, Creation of original Tamil character dataset through segregation of ancient palm leaf manuscripts in medicine. Expert Syst. 1–13 (2020) 6. A.L. Fred, S.N. Kumar, H.A. Kumar, A.V. Daniel, W. Abisha, Evaluation of local thresholding techniques in palm-leaf manuscript images. Int. J. Comput. Sci. Eng. 6(4), 124–131 (2018) 7. V. Rajendran, G. Bharadwaja Kumar, Prosody prediction for Tamil text-to-speech synthesizer using sentiment analysis. Asian J. Pharm. Clin. Res. 1–4 (2017) 8. K. Rajan, V. Ramalingam, M. Ganesan, S. Palanivel, B. Palaniappan, Automatic classification of Tamil documents using vector space model and artificial neural network. Expert Syst. Appl. 36, 10914–10918 (2009) 9. A. Naresh Kumar, G. Geetha, Character recognition of ancient South Indian language with conversion of modern language and translation. Caribb. J. Sci. 53(20), 2019–2031 (2019) 10. E.K. Vellingiriraj, P. Balasubramanie, Recognition of ancient Tamil handwritten characters in historical documents by Boolean matrix and BFS graph. IJCST 8491(1), 65–68 (2014) 11. K. Subramani, S. Murugavalli, A novel binarization method for degraded Tamil palm leaf images, in 2016 IEEE Eighth International Conference on Advanced Computing (2016), pp. 176–181 12. V. Dhanalakshmi, S. Rajendran, Natural language processing tools for Tamil grammar learning and teaching. Int. J. Comput. Appl. 8(14), 26–30 (2010) 13. T. Priyadharshan, S. Sumathipala, Text Summarization for Tamil Online Sports News Using NLP (IEEE, 2018), pp. 1–5 14. S.S. Shekhawat, H. Sharma, S. Kumar, A. Nayyar, B. Qureshi, bSSA: Binary Salp Swarm Algorithm with hybrid data transformation for feature selection. IEEE Access 9, 14867–14882 (2021). https://doi.org/10.1109/ACCESS.2021.3049547 15. N. Rajkumar, T.S. Subashini, K. Rajan, V. Ramalingam, A survey: feature selection and machine learning methods for Tamil text classification. Int. J. Adv. Sci. Technol. 29(5), 13917–13922 (2020)

Digital Transformation of Public Service Delivery Processes Based on Content Management System Pavel Sitnikov, Evgeniya Dodonova, Evgeniy Dokov, Anton Ivaschenko, and Ivan Efanov

Abstract The paper presents an experience of digital transformation of regional public service delivery processes, considering the challenging trends of management in complex social and economic systems. On the basis of an overview of modern information and management technologies, an original set of process efficiency indicators is proposed, recommended to provide sustainable digital transformation of public service delivery. The capability for process specification and evaluation is implemented by an innovative enterprise content management (ECM) system. The system provides the functionality of public service delivery process design and improvement using business process model and notation (BPMN), followed by verification and evaluation. The proposed approach is illustrated by the results of implementation in the Samara region for digital transformation of the processes of licensing the retail sale of alcoholic beverages. The resulting solution is recommended to provide sustainable digital transformation of regional management. Keywords Digital transformation · Business processes · Enterprise content management · Decision-making support

1 Introduction Modern trends of regional management are concerned with the digital transformation of public service delivery processes.

Despite the correctly stated goals and a deep understanding of the main opportunities of digital transformation, in many cases it does not go beyond informatization and automation of the existing processes traditionally supported by state and municipal authorities, which leads to a lack of sustainability. Such a disposition does not correspond to the main concepts of the digital transformation philosophy and vision. To provide full-functional digitalization of services provided to the citizen as the ultimate customer, the processes should be fundamentally restructured using the information and communication platform as an extraordinary basis. To solve this problem, a sustainable system of efficiency indicators of regional digitalization should be developed, along with a model of business process formalization and evaluation that considers the original goals of their improvement. Summarizing some experience in the area of digital transformation of public utilities delivery processes, this paper proposes an original solution for their optimization using an enterprise content management (ECM) system. The results of its development and implementation in practice are presented below.

2 State of the Art Digital transformation [1, 2] reflects global trends in the modern economy. Its main difference is the predominance of online interaction between suppliers and consumers of services using the information space, close to the behavior of members of social media and virtual communities on the Internet [3, 4]. Implementing these concepts, traditional organizations are turning into companies with digital thinking. The main approaches to the study of sociocultural factors in the economy, as well as key modern trends in empirical research (including in connection with the identification of cause-and-effect relationships), are presented in [5]. The previous step of public service delivery process improvement using information and communication technologies was mainly concerned with complex informational support. The term was mostly used within the context of national development and defines informational support of business processes at all stages using modern computers. In the area of public service delivery, it resulted in fully supplying employees with personal computers and developing databases that capture the meaningful data on service requests and delivery to the customers. Deep implementation of computer technologies helped to increase the efficiency of accounting and process control, and integrated information systems started playing a key role in building business processes [6, 7]. With the help of the Internet, modern citizens are provided with almost unlimited opportunities for virtual interaction with public affairs and each other, including in the area of governmental services. A further step was closely related to the organization of electronic government [8–10], in which a significant proportion of governmental functions are transferred to the information system. Given the current trends in digital transformation, state information systems are becoming open platforms for information interaction between suppliers and consumers of public services.

These trends require the formalization of business processes for every service delivery. In order to be controllable and manageable, the processes of interaction in the information space need to be described and documented. Deep formalization helps automation and control and results in better resource utilization and a higher service level. Among the most widespread technologies for business process description and analysis, BPMN appears to be the most efficient [11, 12]. Process modeling and reorganization are recognized as being of utmost importance for making e-Government implementations successful [13, 14]. Detailed workflow analysis gives the necessary information to provide electronic interconnection of public bodies and to effectively organize the collaboration of their business processes in order to enable effective and efficient delivery of services. Application of business process model and notation (BPMN) in practice must consider the human factor. It is especially challenging to make public service delivery processes client-oriented and human-centric. These problems are studied under the framework of the subject-oriented approach for business process management (S-BPM), which conceives a process as a collaboration of multiple subjects organized via structured communication [15, 16]. Implementation of digital processes in modern applications is provided by enterprise content management (ECM) systems [17, 18], which refer to a new type of software close in purpose to documentation workflow and product lifecycle systems. According to the classical definition, ECM extends the concept of content management by adding a timeline for each content item and possibly enforcing processes for their creation, approval and distribution. Therefore, the problem of digital transformation of service organization can be studied from the perspective of business process formalization using BPMN and their subsequent optimization, placing the ECM system at the basis. In this paper, it is proposed to implement such an approach using the model of an intermediary service provider [19–21]. This approach is extended by a new system of key performance indicators and an ECM software solution that was successfully used in practice to improve a number of public service delivery processes within digital transformation.

3 Methodology The typical procedure of business process improvement is based on modeling the procedures of transferring documents and information from one participant of the interaction (employee, department, ministry, etc.) to another. Thus, the process starts regulating the actions of people based on rules and instructions. In the course of digital transformation, new processes are developed according to the principle of data management, cleared of the usual, prevailing bureaucratic clichés. Each participant of a digital process actually manages the data: introduces, changes, supplements and processes information. This feature can be noticed in the names of the process stages, e.g., "Getting the full name (address, number,

amount, date, etc.)" instead of "Transfer of a package of documents to …" or "Formation of a request …". For example, the process of issuing any permission does not begin with the receipt of an application, verification, registration and transfer to the contractor. All these are typical steps that will either disappear or become regular automated procedures as all public services are transferred to electronic form. Therefore, to solve the problem of digital transformation sustainability, it was proposed to perform the following activities:
1. Describe the existing business processes using BPMN 2.0.
2. Implement the formalized processes on the basis of an enterprise content management (ECM) platform, which provides their digitalization and the organization of members' negotiation primarily in electronic form.
3. Evaluate the business processes using the new KPI system oriented to the goals of digital transformation and find the weaknesses and bottlenecks.
4. Recommend the corresponding improvements and validate the revised version of the business processes.

This methodology was implemented as part of the ECM system powered by SEC "Open Code": in addition to workflow and documentation support, a visual component was introduced for simplified BPMN modeling of business processes and their evaluation using the original system of efficiency indicators. The improved ECM system allows displaying processes of any complexity with a wide variety of connections and dependencies. Thus, having formed a pool of sub-processes for generating data, the process developer ensures the completion of the final task and determines the exact parameters of a negative answer, i.e., refusal of the applicant's request. The main task is the analysis of the final product: the document on the public service. Any document contains constant text and variables: full surnames, addresses, numbers, amounts, as well as key parameters that determine whether the requested permission is satisfied or denied. The ECM system provides identification of all these data entities and reflects them as the start of the process. Generation of each of these data types for inclusion in the final document is a sub-process that must be graphically displayed in the ECM system. Some of them may be a simple import of homogeneous information from an external system (or, for the time being, in paper form from another department). Other sub-processes for obtaining data require a complex sequence of actions with "forks" between various options, logical processing and professional human analysis. In order to optimize development, the ECM system provides the ability to save typical stages for use in various processes. This approach allows forming flexible processes from templates, selecting the necessary typical details for various areas of government activity. Focusing on the aspects of process digital transformation, like data management, should stimulate the developer to design the process in the shortest and most optimal way. The ECM system provides the developer with a self-control tool that calculates the efficiency indicators of the process. Therewith, the efficiency indicators

or KPIs should be aimed at the consumer of the service, not the performer of the business process, so that it becomes convenient, fast and high-quality primarily for the client. These indicators include, for example, the stage execution time (in days, hours, minutes), the time the client waits in the queue (if the process involves personal attendance), the number of employees involved (stage cost), etc. Thus, when designing, the developer will be limited not only by the requirements of regulatory documents, but also by the system, which will demonstrate the effectiveness of both the stage and the entire process according to the set of KPIs established in the system. The proposed system of key performance indicators, which mainly represent the quality and availability of public services, is presented in Table 1. The calculation of the values is based on the statistics received by the regional government. Analysis of process stages in terms of efficiency makes it possible to determine an integral assessment of the developed process as a whole. The integral assessment allows forming a unified development concept, establishes a rating of processes and creates a competitive development environment using best practices.

Table 1 Processes digital transformation efficiency indicators

No | Indicator | Parameter type | Unit
1 | Service delivery term, max | Performance | Days
2 | Involved staff: the number of officials involved in the process and responsible for the provision of the service, per 1 copy of the process | Transaction costs | Staff units
3 | Number of interagency requests | Transaction costs | Units
4 | Interagency request/response time | Performance | Days
5 | Maximum waiting time | Performance | Minutes
6 | Share of automated sub-processes (steps) of the process | Transaction costs | %
7 | Share of cases of public service provision in violation of the established deadline in the total number of considered applications for public services | Performance | %
8 | Share of complaints of applicants received in the procedure of pre-trial appeal of decisions made in the course of the provision of public services and actions (inaction) of officials of the department, in the total number of applications for public services | Quality and availability | %
9 | Share of violations of the implementation of the regulations and other regulatory legal acts, identified as a result of the control measures | Quality and availability | %
10 | Share of applications for the provision of public services received in electronic form (of the total number of applications received) | Transaction costs | %

4 Implementation Solving the stated problem requires automated calculation and analysis of the digital transformation efficiency indicators of the processes. On the basis of the ECM system, analytical software was developed to describe the processes using BPMN 2.0 notation and automatically analyze their correspondence to the basic criteria of digital transformation. The proposed approach was implemented in the Samara region for the description, modeling and analysis of public service delivery processes considering the goals of digital transformation. The first group of processes for optimization includes the provision of monthly child support, licensing the retail sale of alcoholic beverages, issuance of permits for the use of owned land, processing of citizens' appeals by the Ministry of Health, etc. Figure 1 presents the optimized process of licensing the retail sale of alcoholic beverages as an example. This administrative regulation is provided by the regional

Fig. 1 Licensing the retail sale of alcoholic beverages process (fragment)

Fig. 2 Digital transformation efficiency indicators for initial and optimized processes of licensing the retail sale of alcoholic beverages

Ministry of Industry and Trade state services for licensing the retail sale of alcoholic beverages. The regulation was developed in order to improve the quality and accessibility of the provision of state services, creating favorable conditions for participants in relations arising from the licensing of the specified type of activity, establishing the order, timing and sequence of administrative procedures when providing public services; it also establishes the procedure for interaction of the Ministry with legal persons upon issuance, extension, renewal and early termination of licenses. After the formal description and improvement of this process using the ECM system, considering the possibilities of digital transformation, its efficiency indicators improved as presented in Fig. 2. Thereby, in the context of digital transformation, each process becomes subject to an automated compliance control procedure. Its purpose is to identify deviations of the indicators of the developed stages from the established norms. The processes are indicated by colors: each process is assigned to the "yellow" middle zone, the "green" zone for the best indicators, or the "red" zone for processes for which optimization is needed. The introduced approach allows efficient application of BPMN for this purpose, which is beneficial in terms of visualization and improvement.
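A hedged sketch of such a color-zone compliance check; the indicator names, targets and the 20% tolerance band are illustrative assumptions, not values taken from the actual ECM system:

```python
def zone(value, target, tolerance=0.2):
    """Classify a lower-is-better KPI value against its target norm."""
    if value <= target:
        return "green"                   # meets or beats the norm
    if value <= target * (1 + tolerance):
        return "yellow"                  # within the tolerance band
    return "red"                         # optimization is needed

# Hypothetical (value, target) pairs for three indicators from Table 1.
process_kpis = {
    "service delivery term (days)": (12, 10),
    "interagency request/response time (days)": (4, 5),
    "maximum waiting time (min)": (25, 15),
}

for name, (value, target) in process_kpis.items():
    print(f"{name}: {zone(value, target)}")
```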

5 Conclusion The results of the ECM system's implementation and approbation in practice illustrate the necessity of performing digital transformation instead of the formal application of modern information and communication technologies to automate the existing processes. The proposed methodology and efficiency indicators allow improving the sustainability of the digital transformation of public service delivery processes. The future scope of the reported work includes modernization of the analytical monitoring system and improvement of process analysis using intelligent technologies and machine learning.

References 1. Digital Russia. New Reality (Digital McKinsey, 2017), 133 p. https://www.mckinsey.com/ru/ our-work/mckinsey-digital 2. K. Patel, M.P. McCarthy, Digital Transformation: The Essentials of e-Business Leadership (KPMG/McGraw-Hill, 2000), 134 p 3. C. Kadushin, Understanding Social Networks: Theories, Concepts, and Findings (Oxford University Press, 2012), 264 p 4. One Internet. Global Commission on Internet Governance (2016), https://www.cigionline.org/ initiatives/global-commission-internet-governance 5. A.A. Auzan, A.I. Bakhtigaraeva, V.A. Bryzgalin, A.V. Zolotov, E.N. Nikishina, N.A. Pripuzova, A.A. Stavinskaya, Sociocultural factors in economics: milestones and perspectives. Vopr. Ekon. 7, 75–91 (2020) 6. D.T. Bourgeois, Information Systems for Business and Beyond (The Saylor Academy, 2014), 163 p 7. D. Romero, F. Vernadat, Enterprise information systems state of the art: past, present and future trends. Comput. Ind. 79 (2016). https://doi.org/10.1016/j.compind.2016.03.001 8. A. Cordella, F. Iannacci, Information systems in the public sector: the e-government enactment framework. J. Strat. Inf. Syst. 19, 52–66 (2010) 9. W.R. Rose, G.G. Grant, Critical issues pertaining to the planning and implementation of egovernment initiatives. Gov. Inf. Q. 27, 26–33 (2010) 10. V. Weerakkody, R. El-Haddadeh, T. Sabol, A. Ghoneim, P. Dzupka, E-government implementation strategies in developed and transition economies: a comparative stud. Int. J. Inf. Manage. 32, 66–74 (2012) 11. S. Pantelic, S. Dimitrijevic, P. Kosti´c, S. Radovi´c, M. Babovi´c, Using BPMN for modeling business processes in e-government—case study, in The 1st International Conference on Information Society, Technology and Management, ICIST 2011 (2011) 12. J. Recker, BPMN research: what we know and what we don’t know, in Lecture Notes in Business Information Processing, vol. 125 (2012), pp. 1–7 13. S. Palkovits, A.M. Wimmer, Processes in e-government—a holistic framework for modeling electronic public services, in Lecture Notes in Computer Science, vol. 2739 (2003), pp. 213–219 14. H.J. Scholl, E-government: a special case of ICT-enabled business process change, in Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, Hawaii (2003), 12 p 15. A. Fleischmann, U. Kannengiesser, W. Schmidt, C. Stary, Subject-oriented modeling and execution of multi-agent business processes, in 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT) (2013), pp. 138–145 16. A. Fleischmann, W. Schmidt, C. Stary, S-BPM in the Wild (Springer, 2015), 282 p 17. U. Kampffmeyer, ECM Enterprise Content Management (Verlag/Herausgeber Project Consult, 2006), 84 p 18. A.V. Ivaschenko, A.A. Stolbova, D.N. Krupin, A.V. Krivosheev, P.V. Sitnikov, O.Ja. Kravets, Semantic analysis implementation in engineering enterprise content management systems. IOP Conf. Ser. Mater. Sci. Eng. 862, 042016 (2020) 19. O.L. Surnin, P.V. Sitnikov, A.V. Ivaschenko, N.Yu. Ilyasova, S.B. Popov, Big data incorporation based on open services provider for distributed enterprises, in CEUR Workshop Proceedings, vol. 1903 (2017), pp. 42–47 20. A. Ivaschenko, A. Lednev, A. Diyazitdinova, P. Sitnikov, Agent-based outsourcing solution for agency service management, in Lecture Notes in Networks and Systems, vol. 16 (2018), pp. 204–215 21. A. Ivaschenko, S. Korchivoy, M. 
Spodobaev, Infrastructural models of intermediary service providers in digital economy, in Advances in Intelligent Systems and Computing, vol. 1038 (2020), pp. 594–605

Secure and Sustain Network for IoT Fog Servers Naziya Hussain, Harsha Chauhan, and Urvashi Sharma

Abstract IoT generates an unprecedented quantity and variety of data. However, by the time the data reach the cloud for analysis, the opportunity to act on them may have disappeared. The IoT accelerates event awareness and response. As IoT systems spread, transmitting all data through the Internet from various sensing devices to a remote OM (operation and management) server can lead to many problems, such as a network traffic explosion and delayed data reactions. In an IoT system environment, fog computing is an excellent way to solve this problem. The proposed approach is based on Wald's maximum model, which uses the concept of mobile fog servers to develop relays with the help of an Unmanned Aerial Vehicle and a train network. This approach helps to provide prolonged connectivity, reliability and sustainability across the networks. The approach's effectiveness is evaluated by performing numerical and network simulations. The metrics used in the proposed approach are end-to-end delay and packet loss against the number of users. This work also simulates different types of attacks, such as wormhole, DDoS and Sybil attacks, against the number of users to analyze the connectivity probability and sustainability during unfavorable conditions across the networks. Keywords DDoS · Train network · Packet loss · Knowledge depth graph · UAV

1 Introduction Internal and external communication is crucial for a sustainable plan or strategy. People in organizations have to create sustainable applications using internal planning processes, and it is equally essential to communicate outside the organization. Communication enables everyone to share thoughts and ideas. This transfer becomes a critical aspect in the service sector because service providers need to develop close relationships with their customers. Sustainable improvements


should be planned by people internally and externally. Communication must be convenient for the receiver and precise for the giver. Sustainability is also essential for the development of a digital future [1]. Sustainable digital transformation and community development are based on the Internet of Things (IoT) [2]. There is also great potential for promoting sustainable community development through the convergence of the Internet of Things (IoT) with other technologies (e.g., artificial intelligence, the technological revolution, blockchain [3] and cluster computing) [4]. The Internet of Things (IoT) is meant to safeguard conservation and natural resources [5]. The IoT's goal is to establish a technologically innovative paradigm that benefits the economy and environmental well-being. Figure 1 is an example of sustainable IoT using UAV networks with hybrid fog servers. Two critical solutions for sustainable IoT can be the use of artificial intelligence and engineering approaches [6, 7]. Enhanced management and control of devices can be made possible by efficient networks, while slight network irregularities can expose them to many problems, like privacy, trust and session hijacking. The goal is to create sustainable IoT by considering solutions at a service level that relate to a specific field [8]. The success of these approaches, however, depends on the new features of the network. Not all of these requirements are met by traditional cloud computing architectures. The dominant approach of transferring all data from the network borders to the data center for treatment adds latency. Thousands of devices will soon exceed the capacity of bandwidth. Industry and privacy regulations prohibit specific offsite data storage. Cloud servers also communicate only with IP, not the innumerable other IoT protocols. Thus, a novel architecture that forms an intelligent solution for node management is proposed to resolve network breakdowns for sustainable IoT.

Fig. 1 IoT devices with sustainable network (components: macro base stations, small cell access points, train network, UAV access points, home gateway IoT devices)

A key concept of fog computing is used in the proposed approach, but it is novel in that the fog servers are placed on a train network. Fog computing is a decentralized computing system in which data, computation, storage and applications lie between the data source and the cloud [9]. Fog computing brings the advantages and strength of the cloud, like edge computing, nearer to where data are created and used. Many people use the terms "fog computing" and "edge computing" interchangeably because both involve processing data closer to where they are created. This is often done to improve efficiency, but it can also be done for security and compliance reasons [10]. Fog computing frameworks provide organizations with more options for processing data where it is best done [11]. For specific applications, data may require processing as quickly as possible, such as when connected machines need to respond rapidly to an incident [12]. Fog computing can create low-latency network connections between devices and analytical endpoints [13]. Meanwhile, this architecture reduces the bandwidth needed compared to sending the data to a data center or cloud for processing. It can also be used in scenarios where no bandwidth connection is available for sending data, which must therefore be processed near the point where it is created. As an additional advantage, users can add security features to a fog network, from segmented network traffic to virtual firewalls. This paper proposes an architecture for communicating across various types of communication networks. The approach uses unmanned aerial vehicles to provide intermediate connectivity between isolated towers. Unmanned Aerial Vehicles (UAVs) support both controlled and autonomous flight, and with their assistance the entire area in question can be covered. The station house is the core network located close to the user location and provides an intervening connection between user-side networks and the core networks [14]. Stations can be defined as nodes with cluster connectivity and are used as DNM (Distributed Network Management) modules. A multi-modular hybrid network approach that employs both a train network and an unmanned aerial vehicle network is proposed for exploiting the service guarantee features of future 5G networks (SACA). The load management system uses sensors, algorithms and a content-based assignment policy, constructing optimization problems that find maxima for a variety of models.

2 Related Work Farooqi et al. [15] discuss a vital issue in the United Nations Sustainable Development Goals and the design of a sustainable society. Sustainable fog computing is the most prominent solution to most cloud data center problems such as latency, safeguards, carbon footprint, electricity consumption, etc. It is an advanced cloud-based design supporting the horizontal computing paradigm that provides cloud-like

services at the edge of user premises. Fog computing became the first choice for time-sensitive IoT applications due to its proximity to appliances and sensors [16]. In that paper, the authors introduced fog computing, distinguished it from the cloud and discussed how sustainability can be achieved through fog in various applications. The authors also brought forward some existing challenges of the fog paradigm and examined some current work on fog computing. Kyriazis et al. [17] describe sustainable smart city IoT applications. In a world where multiple stakeholders supply information and assets, in addition to millions of real-time interactions and communications, IoT-based systems aim to exploit such holdings in a resilient and sustainable manner that enables them to realize their full potential. In this paper, the authors presented two smart city IoT applications. The first relates to the management of heat and power and aims at using various resources (e.g., heat and power meters) to optimize energy use in commercial and residential areas [18]. The second application concerns cruise control in public transport and seeks to use various resources (such as environmental and traffic sensors) to provide driving guidelines aimed at eco-efficiency. The authors also highlight the IoT challenges and the potential technologies capable of implementing the proposed applications. Benkhelifa et al. [19] discussed an ecosystem for sustainable growth. There has never been a greater need and drive for sustainable development, which calls for radical ways to enhance the efficiency and productivity of resources. Their paper describes how the integration of IoT, big data and cloud computing creates a formula for sustainable development and growth. IoT technology issues and challenges are investigated, and a framework is presented for integrating these three technologies into sustainability strategies. The impact of the proposed framework on the economic, social and environmental aspects is discussed. To the best of those authors' knowledge, no reported research discusses IoT from a sustainability point of view. Liu [20] proposed that IoT is the basis for the connection of objects, sensors, actuators and other intelligent technologies to facilitate communication between persons and objects and between objects themselves. It is a hot spot for Internet research and development. Smart transport, smart shopping, intelligent product management, intelligent meters, home automation, waste management, a sustainable urban environment and continuous care are among its applications. The IoT will ensure the availability and profitability of urban resources, energy and the environment. The paper provides an example of IoT with a bicycle-sharing system, which makes a city's transport greener and more convenient. IoT can therefore play a key role in a sustainable and intelligent city [21].

3 Proposed Methodology The proposed methodology consists of hybridized cellular nodes, which several networks use to provide data transfer across the IoT without error. In our approach,

we have used various parameters, such as sensor feeding and signatures and the depth of the knowledge graph, to build an enduring architecture that monitors network states and dynamically allocates servers based on network collaboration. This makes it possible to allocate optimal links for optimal migration choices.

3.1 Sensor Feeding and Signatures To assure reliability, sustainability and stability, the proposed SACA uses the functionality of the current sensors. Only two registrations are required to establish confidence in this method; in this way, the computer network knows the full count of devices but not their unique device numbers. Each HGW assigns a number and sequence number to each device to be monitored and measured in the sensor network, which facilitates its measurement, and then requests identifying information through intermediate devices (UAVs); the UAVs are not explained in detail in this paper. The HGW assigns randomized register IDs to each connected device, and the HGWs are registered with FS subclasses indicating their sources.

3.2 Knowledge-Depth Graphs In the absence of good services from the FS, an additional network is formed by the DNM. Generally, from the standpoint of just handling the FS, the DNM allocates services randomly. Nevertheless, a record of every network node is kept to deal with the situation where, despite its availability, an FS refuses to act on a request. A Knowledge Depth (KD) graph is used by the DNM, consisting of two dynamic graphs: one with the knowledge of each of its vertices as the property and the other with the depth of knowledge as the property. The knowledge graph is denoted as H^*_1 = (W, F, K_n), where K_n is the knowledge set of W. The value for each element of K_n is calculated as the ratio between the node degree (Dg) and the total number of network edges:

K_{n,i} = \frac{Dg_i}{|F|}, \quad i \in W    (1)

A higher node degree implies greater knowledge about the network. Similarly, the depth graph describes the level of knowledge in each subsection containing the node, expressed as the ratio of the node's direct links (Dk) to the number of links accessible on that level/layer (La):

K_{l,i} = \frac{Dk_i}{La_i}    (2)

and H^*_2 = (W, F, K_l). For the edges and vertices, two KD graphs are thus available, which help to decide the load requirement. The depth graph provides the necessary information about the load on the nodes of the particular layers other than the FS, while the knowledge graph is used for the management of the load on the FS. However, a single KD graph can be used to generalize the vertex weights, giving the optimal graph H^*_f = (W, F, K_v), with

K_{v,i} = \mu_1 K_{n,i} + \mu_2 K_{l,i}    (3)

where µ1 and µ2 are the weighting constants governing the knowledge-depth relationship, with 0 ≤ µ1 ≤ 1 and µ1 ≤ µ2 ≤ 1, so that where knowledge is available, depth is weighted at least as heavily.
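A hedged sketch of Eqs. (1)–(3) using networkx; the toy topology, the layer assignment, the reading of La_i as a node's accessible links on its layer, and the constants mu1, mu2 are all illustrative assumptions:

```python
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)])
layer = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}   # assumed layer of each node
mu1, mu2 = 0.4, 0.8                       # mu1 <= mu2: depth weighted more

F = G.number_of_edges()
for i in G.nodes:
    Kn = G.degree(i) / F                  # Eq. (1): node degree / total edges
    Dk = sum(1 for j in G.neighbors(i) if layer[j] == layer[i])
    La = max(G.degree(i), 1)              # assumed accessible links on layer
    Kl = Dk / La                          # Eq. (2): direct-link ratio
    Kv = mu1 * Kn + mu2 * Kl              # Eq. (3): combined vertex weight
    print(i, round(Kn, 2), round(Kl, 2), round(Kv, 2))
```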

3.3 Load-Based Server Allocation The proposed work aims to build a sustainable network that ensures connectivity even during link or node failures. The proposed approach uses the Knowledge Depth graphs to determine the load of the network: after checking the current load on the basis of the KD properties, the DNM decides the allocation of the servers.

3.4 State Maintenance and Learning State maintenance and learning play an essential role in how the DNM handles data transmission throughout the network. For the processing and storage of large data, mobile FSs are used by the complex learning system. The DNM maintains a log based on the inputs handled by the FS when a selection operation occurs across the FS. The FS administers traffic and supports the DNM in monitoring future traffic trends, regulating the role of participating nodes as first-order connections, and selecting new routes if there is any failure.

3.5 Spy-Based Deployment Spy-based deployment is used for close monitoring of the FS and virtual monitoring of the complete network. The DNM is mainly concerned with the train topology, the actual networking status and the traffic controller, all passed as an input in the main configuration file. This file maps the network state, as a control metric only, to the topology and the traffic conditions. Then, an analyzer is

used to decide the correctness of the network states. A service coordinator then calls on the appropriate server to handle a user's request.

3.6 Network Association for Failure Detection The Network Association procedure can be performed regularly or upon network failure. The Rs and St values can also be used to check the conduct of the Network Association. The Network Association helps to understand network failures and decides how to reduce the delay during the interaction procedure.

4 Experiment and Result The implementation is done using MATLAB with 1000 IoT devices, two access points, four fog servers and several other parameters. A network is considered sustainable when it provides strong resistance even after several node or link failures. It has been observed many times that networks fail due to attacks occurring in the network itself. These attacks can create significant losses, such as information leakage, compromised user accounts, slowed-down networks, and even, many times, shutdown of the entire network. The performance evaluation of the proposed model uses specific parameters that include end-to-end delay versus users, connectivity probability versus users, packet loss versus users and sustainability versus users. The packet loss and end-to-end delay are analyzed using factors like the baseline and failures, whereas the sustainability and connectivity are evaluated using factors such as Sybil attacks [22–24], wormhole attacks and DDoS attacks. Figures 2, 3, 4 and 5 show the results evaluated with the help of the proposed work based on sustainability.
Fig. 2 End-to-end delay against users

Fig. 3 Packet loss against users

Fig. 4 Connectivity probability against users

Fig. 5 Sustainability against users

5 Conclusion IoT requires a strong network to back it up. A flexible network system will support the extended availability of enhanced services. This paper tackles the problem by taking a hybrid approach: it proposes a hybrid multi-modular self-aware architecture capable of supporting faultless communication.

The master networks enable data distribution in a way that can withstand the demands of countless users. Wald's maximum model is used to formulate the optimization problem. The analyses show that the proposed approach provides reliable connectivity with less delay and fewer packet losses, and the approach holds up even under Sybil, wormhole and DDoS attacks. The proposed technology can ensure long-distance service, even under challenging conditions.

References 1. G. Dhiman, M. Soni, H.M. Pandey, A. Slowik, H. Kaur, A novel hybrid hypervolume indicator and reference vector adaptation strategies based evolutionary algorithm for many-objective optimization. Eng. Comput. (2020) 2. R. Nair, P. Sharma, A. Bhagat, V.K. Dwivedi, A survey on IoT (Internet of Things) emerging technologies and its application. Int. J. End-User Comput. Dev. (2019) 3. R. Nair, S. Gupta, M. Soni, P. Kumar Shukla, G. Dhiman, An approach to minimize the energy consumption during blockchain transaction. Mater. Today Proc. (2020) 4. N. Kumar, N. Kharkwal, R. Kohli, S. Choudhary, Ethical aspects and future of artificial intelligence, in 2016 1st International Conference on Innovation and Challenges in Cyber Security, ICICCS 2016 (2016) 5. S. Chowdhury, P. Mayilvahananan, R. Govindaraj, Health machine sensors network controlling and generating trust count in the servers platform through IoT. Int. J. Recent Technol. Eng. (2019) 6. K. Mehta, D.K. Sharma, Fault detection and diagnosis: a review. Int. J. Eng. Sci. Comput. (2017) 7. M. Soni, D. Kumar, Wavelet based digital watermarking scheme for medical images, in 2020 12th International Conference on Computational Intelligence and Communication Networks (CICN), Bhimtal, India (2020), pp. 403–407, https://doi.org/10.1109/CICN49253.2020.924 2626 8. R. Nair, P. Nair, V.K. Dwivedi, FPGA on cyber-physical systems for the implementation of Internet of Things (2020) 9. OpenFog Consortium Architecture Working Group, OpenFog Architecture Overview (OpenFogConsortium, 2016) 10. A. Anand, A. Raj, R. Kohli, V. Bibhu, Proposed symmetric key cryptography algorithm for data security, in 2016 1st International Conference on Innovation and Challenges in Cyber Security, ICICCS 2016 (2016) 11. M. Soni, S. Chauhan, B. Bajpai, T. Puri, An approach to enhance fall detection using machine learning classifier, in Proceedings—2020 12th International Conference on Computational Intelligence and Communication Networks, CICN 2020 (2020) 12. K.B. Prakash, S. Nazeer, P.K. Vadla, S. Chowdhury, Layered programming model for resource provisioning in fog computing using yet another fog simulator. Int. J. Emerg. Trends Eng. Res. (2020) 13. M. Aazam, S. Zeadally, K.A. Harras, Fog computing architecture, evaluation, and future research directions. IEEE Commun. Mag. (2018) 14. M. Soni, T. Patel, A. Jain, Security analysis on remote user authentication methods, in Lecture Notes on Data Engineering and Communications Technologies (2020) 15. A.M. Farooqi, S.I. Hassan, M.A. Alam, Sustainability and fog computing: applications, advantages and challenges, in 2019 Proceedings of the 3rd International Conference on Computing and Communications Technologies, ICCCT 2019 (2019) 16. S. Chowdhury, R. Govindaraj, S.S. Nath, K. Solomon, Analysis of the IoT sensors and networks with big data and sharing the data through cloud platform. Int. J. Innov. Technol. Explor. Eng. (2019)

17. D. Kyriazis, T. Varvarigou, D. White, A. Rossi, J. Cooper, Sustainable smart city IoT applications: heat and electricity management & eco-conscious cruise control for public transportation, in 2013 IEEE 14th International Symposium on a World of Wireless, Mobile and Multimedia Networks, WoWMoM 2013 (2013) 18. P. Sharma, R. Nair, V.K. Dwivedi, Power consumption reduction in IoT devices through fieldprogrammable gate array with nanobridge switch, in Lecture Notes in Networks and Systems (2021) 19. E. Benkhelifa, M. Abdel-Maguid, S. Ewenike, D. Heatley, The Internet of Things: the ecosystem for sustainable growth, in Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA (2014) 20. L. Liu, IoT and a sustainable city. Energy Procedia (2018) 21. D. Babitha, M. Ismail, S. Chowdhury, R. Govindaraj, K.B. Prakash, Automated road safety surveillance system using hybrid CNN-LSTM approach. Int. J. Adv. Trends Comput. Sci. Eng. (2020) 22. M. Soni, A. Jain, Secure communication and implementation technique for Sybil attack in vehicular ad-hoc networks, in Proceedings of the 2nd International Conference on Computing Methodologies and Communication, ICCMC 2018 (2018) 23. N. Hussain, P. Maheshwary, P.K. Shukla, A. Singh, Detection black hole and Sybil attack in GPCR-MA VANET based on road network. ANUSANDHAN-AISECT Univ. J. 06(13) (2018). P-ISSN 2278-4187, E-ISSN 2457-0656 24. N. Hussain, P.M.D.P.K. Shukla, A. Singh, Detection of Sybil attack in vehicular network based on GPCR-MA routing protocol. J. Curr. Sci. 20(1) (2019)

Cataract Detector Using Visual Graphic Generator 16 Aman, Ayush Gupta, Swetank, and Sudeept Singh Yadav

Abstract We are building a zero-cost cataract detection system. Cataract is a fairly common disease in India, and getting tested is costly as well as time-consuming for the average rural Indian. We are creating a web interface so that anyone can upload a picture and get tested for cataract for free. The uploaded eye image is fed to an already trained neural network with high accuracy; the trained model outputs the result, which is served to the backend and then sent to the frontend using REST APIs, where it is shown to the user in the web portal. Our model predicts cataract with an accuracy of 98% at zero cost, which makes it accessible to everyone who has an Android smartphone. When executed, our model is capable of preventing many surgeries. Keywords Cataract · VGG · CNN · Phacoemulsification · Microincision
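A hedged sketch of the kind of VGG16-based binary classifier the abstract describes, via Keras transfer learning; the head layers, input size and training setup are illustrative assumptions rather than the authors' exact configuration:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pretrained features

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # cataract vs. normal eye
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```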

1 Introduction

Cataract is a common eye disease. Cataracts usually occur in people above the age of fifty-five, but young people are not immune to them. Cataracts are the main cause of blindness worldwide [1] and develop in 40% of people over 60 years of age. The only treatment is surgery, which is a safe and easy procedure. The lens of the eye helps to focus on objects at different distances. Over time, the lens loses its transparency and becomes opaque; this clouding of the lens is called cataract. Light no longer reaches the retina, and vision gradually decreases to the point of blindness. The end result in most people is blurred and distorted vision. The exact cause of cataract is not yet known [2, 3] (Fig. 1).



Fig. 1 Difference between normal eye and eye affected with cataract

1.1 Causes of Cataract

Cataract is usually caused by increasing age, but there are other reasons as well.

1. Cataracts occur as the lens's transparency decreases with increasing age.
2. Some children may have congenital cataracts, caused by injury or infection when the lens does not develop properly before birth.
3. Other causes include diabetes, side effects of long-term steroid medication (for example, steroids taken for asthma), and exposure to ultraviolet rays or radiation.
4. Eye injuries can cause untimely cataracts.
5. Long-term consumption of alcohol and smoking increases the risk of cataract.
6. Years of work under intense light, or the presence of other eye problems such as glaucoma, also raise the risk [4].

1.2 Cure of Cataract

Surgery is the only treatment for cataract. In this operation, the doctor removes the opaque lens and inserts a new artificial lens, called an intraocular lens, in the place of the natural lens in the patient's eye. After surgery, the patient can see clearly, although glasses may still be needed for reading or close work. During the last few years, cataract surgery has changed from restorative to refractive surgery, which means that it now not only treats cataracts but is also gradually ending the dependence on eyeglasses. Modern techniques have reduced the size of the incision, which gives the patient better vision results and quicker recovery after surgery. There is no need to stay in the hospital: the patient stays awake, and the eye is numbed with local anesthesia. It is an almost completely safe surgery, and its success rate is quite good.


1.3 Cataract Surgery

Typically, cataract surgery takes 30–45 min and involves the following steps:

Step 1: Giving anesthesia. The operation starts with anesthesia, so that the patient does not feel any pain during the procedure.
Step 2: Making a small cut on the eye. After anesthesia, the surgeon makes a small incision on the eye so that the procedure can be carried forward.
Step 3: Cataract removal. The clouded lens is taken out through the incision using small medical instruments.
Step 4: Applying a new lens. An artificial lens, manufactured by medical means, is inserted in place of the removed lens.
Step 5: Closing the cut. The small incision on the eye is closed, and eye drops are applied so that it does not cause much pain.
Step 6: Discharging the person. With the closure of the incision the operation is complete, and the patient is discharged (Fig. 2).

Fig. 2 Cataract surgery: phacoemulsification


However, using our web-based solution, a large number of cases can be screened at zero cost. Even a person who does not own a smartphone can borrow a friend's phone and get a result within seconds, which is far more feasible than any other method of detecting cataract [5].

2 Literature Review

Consider the case of a villager, as 70% of India's population still lives in villages. Just to get tested for cataract, a villager may have to travel 10–200 km to reach the nearest eye doctor, leaving as early in the morning as possible depending on the availability of public transport to the city, which is unreliable in many cases. If he is lucky, he reaches the doctor by afternoon and gets an appointment the same day; otherwise he must return on the appointment day. He also loses a day's work, which translates into lost income, and per-capita income in villages is already low. Considering all these hurdles and costs, many people either delay or skip their visit to the doctor irrespective of the seriousness of their condition, which worsens the problem and increases the eventual cost; moreover, the treatment of cataract is itself a risky procedure. According to a PMC survey in 2020, the prevalence of any cataract in males ranged from 6.71% in people aged 45–49 years to 73.01% in the elderly aged 85–89 years; in females, it increased from 8.39% in individuals aged 45–49 years to 77.51% in those aged 85–89 years. Similar prevalence figures and risk factors have been reported for adults aged 40 years and above in a rural area of Jammu district in India [6, 7].

3 Feasibility Analysis

Our cataract detector is a web application that is more feasible than consulting a doctor for an initial check. Using this web application, a person can easily check whether he or she has cataract, saving travel and consultation fees. The conventional clinical pathway for cataract involves the following three stages.

3.1 Visual Acuity Test

Visual examination of the eyes is called a visual acuity test. In this test, far and near sight are checked to assess the weakness of the eye.


Correct glasses are prescribed when needed. Before the eye examination, the pupil is fully dilated by applying medicine. To estimate distance vision, doctors ask the patient to read letters written on a chart from a distance of about 6 m. This chart, called Snellen's chart, has six to seven rows of letters written in English or Hindi, which get smaller from top to bottom.

3.2 Microincision or Regular Phaco Cataract Surgery

This surgery is performed with the help of forceps or bent needles, and the lens is pulled out using vacuum. However, the IOL (intraocular lens) implanted in this way is not as stable as it should be.

3.3 Robotic or Femtosecond Cataract Surgery

Robotic or femtosecond cataract surgery was developed to address the shortcomings of microincision surgery. It uses laser beams; it is expensive and takes more time, but its results are much better. This surgery is 100% blade free, requires no stitches, and is almost painless. With the help of our project, however, the initial screening can be done easily at home, saving precious time.

4 Methodology

We extract information from an image using a CNN, which is made up of several layers. A high-definition camera captures images of a person's face, and the system examines specific facial landmarks, such as the distance between the eyes, nose width, and cheek shape. The validation system then compares these findings against its database: the more images in the database, the better the system is able to identify faces [8].

4.1 Complete Work Plan Layout

This project is about solving a real-world problem faced by many people around the globe, and a common one in India. The basic idea is to classify an eye image according to whether or not it is affected by cataract [9, 10].


Fig. 3 Layers of CNN

• We collected different images from Google for model training.
• We use the Fast.ai library, which is based on PyTorch.
• We used a CNN model for image classification (a minimal training sketch follows below).
• We can develop a web app around this model for deployment (Fig. 3).
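The paper names Fast.ai (on PyTorch) and a CNN classifier but gives no code, so the following is a minimal, hedged training sketch. The folder layout (eye_images/cataract and eye_images/normal), the resnet34 backbone, the epoch count and the exported file name are illustrative assumptions, not the authors' exact setup.

```python
from fastai.vision.all import *

# Assumed layout: eye_images/cataract/*.jpg and eye_images/normal/*.jpg
path = Path('eye_images')
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

# CNN classifier; resnet34 is an illustrative transfer-learning backbone
learn = cnn_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(4)                      # a few epochs of fine-tuning
learn.export('cataract_model.pkl')      # serialized model for the web backend
```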

4.2 Visualize All the Filters

Our convnet's notion of a magpie looks nothing like a magpie; at best, the only resemblance is at the level of local textures (feathers, maybe a beak or two). Does this mean that convnets are bad tools? Of course not; they serve their purpose just fine. What it means is that we should refrain from our natural tendency to anthropomorphize them and believe that they "understand," say, the concept of a dog or the appearance of a magpie just because they can classify these objects with high accuracy. They do not, at least not to any extent that would make sense to us humans [11, 12] (Figs. 4, 5, 6 and 7).

4.3 Signal Processing

Signal processing is an umbrella field, and image processing falls under it. In the physical 3D world, the light reflected by an object passes through the camera lens and becomes a 2D signal, producing the resulting image. This image is digitized using signal processing methods, and the digital image is then manipulated through digital image processing [13] (Fig. 8).


Fig. 4 Filter visualization

Fig. 5 Visualization of a 5 × 5 filter

5 Experimental Analysis

Testing is the stage where we check the product against every parameter. Our tested model is 92% accurate, which is competitive with existing models (Table 1). Since the accuracy generally expected of a model used in medical practice is 90%, and our model satisfies that requirement, we can say it is ready for use in hospitals [14].

6 Model Accuracy and Comparison

See Table 1.

Fig. 6 Filter in convolutional layer
Fig. 7 Input, processing and output layout
Fig. 8 Complete procedure


7 Web Frontend

The frontend is built using the popular web framework React. It uses the axios package to submit and fetch data from the backend through REST APIs. The frontend is mobile responsive, so any user can upload a selfie and check for cataract within 2 min at zero cost (Fig. 9). When the user enters the details and uploads the photo, the image is passed to the already trained model, which predicts the result with 97% accuracy and returns it to the frontend for the user to see [15].
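The paper describes the frontend posting the image to the backend over REST but does not name the backend framework. Below is a hedged sketch assuming a Flask backend that serves the fastai model exported in the training sketch above; the endpoint name, the form field 'eye_image' and the JSON response fields are illustrative assumptions.

```python
from flask import Flask, request, jsonify
from fastai.vision.all import load_learner, PILImage

app = Flask(__name__)
learner = load_learner('cataract_model.pkl')   # model exported by the training sketch

@app.route('/predict', methods=['POST'])
def predict():
    # The React frontend is assumed to POST the image under the field 'eye_image'
    img = PILImage.create(request.files['eye_image'].stream)
    label, _, probs = learner.predict(img)
    return jsonify({'label': str(label), 'confidence': float(probs.max())})

if __name__ == '__main__':
    app.run()
```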

Table 1 Comparison of cataract detection techniques

1. Robust and efficient automated cataract detection algorithm. Purpose: detect cataract from color images. Accuracy: 98%. Advantages: suitable for true color images. Disadvantages: machine learning algorithms could further be applied; severity prediction not done.
2. Deep convolutional neural network (CNN). Purpose: detect and grade cataract automatically. Accuracy: 86.69%. Advantages: high-level information is extracted effectively and automatically. Disadvantages: completely supervised learning, so more training data is needed.
3. Automatic cataract detection using advanced portable methods and devices. Purpose: analyze and develop a cataract detection technique. Accuracy: 85.23%. Advantages: efficient. Disadvantages: expensive, less accurate and not portable; can only detect and grade a specific class of cataract.
4. OpenCV library. Purpose: diagnose the mentioned eye diseases based on an effective computation approach. Accuracy: 92%. Advantages: automatic retinal image classification; able to detect different images. Disadvantages: needs high-quality images as input.

Fig. 9 Home page of website

8 Conclusion

We conclude that our project can detect cataract in a person from an image of the eye, with 92% accuracy. It will not take anyone's job; rather, it will help doctors do their work more efficiently and will benefit both doctor and patient. It will reduce the resources used by doctors and the cost spent by patients on eye tests. In the end, it could be a revolutionary change if hospitals, clinics and people themselves use it.

References

1. World Health Organization (WHO), Visual Impairment and Blindness. Fact Sheet No. 282 (2014), http://www.who.int/mediacentre/factsheets/fs282/en/. Accessed 12 July 2017
2. C. Bradford, Basic Ophthalmology, 8th edn. (American Academy of Ophthalmology, 2004), pp. 7–16
3. L. Umesh, M. Mrunalini, S. Shinde, Review of image processing and machine learning techniques for eye disease detection and classification. Int. Res. J. Eng. Technol. (IRJET) 3(3), 547–551 (2016)
4. S. Sobti, B. Sahni, Cataract among adults aged 40 years and above in a rural area of Jammu district in India: prevalence and risk-factors. Int. J. Healthc. Biomed. Res. 1(4), 284–296 (2013)
5. R. Isaacs, J. Ram, D. Apple, Cataract blindness in the developing world: is there a solution? J. Agromed. 9, 207–220 (2004)
6. Journal of Global Health, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6005639/
7. D. Pascolini, S.P. Mariotti, Global estimates of visual impairment. Br. J. Ophthalmol. 96(5), 614–618 (2010). Accessed 12 July 2017. https://doi.org/10.1136/bjophthalmol-2011-300539
8. Understanding of a convolutional neural network, https://ieeexplore.ieee.org/document/8308186
9. Detecting cataract using smartphone, https://iovs.arvojournals.org/article.aspx?articleid=2769900
10. Computer-aided diagnosis of cataract using deep transfer learning, https://www.sciencedirect.com/science/article/abs/pii/S1746809419301077
11. J.P. Bigus, Data Mining with Neural Networks (McGraw-Hill, New York, 1996)
12. C. Bowd, M. Goldbaum, Machine learning classifiers in glaucoma. J. Optom. Vis. Sci. 85(6), 396–405 (2008)
13. A.I. Ajaiyeoba, F.O. Fasina, The prevalence and cause of blindness and low vision in Ogun state (2003)


14. Deep learning image classification with Fastai, https://towardsdatascience.com/deep-learning-image-classification-with-fast-ai-fc4dc9052106
15. Research on image classification model based on deep convolutional neural network, https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-019-0417-8

Combination of Local Feature Extraction for Image Retrieval

S. Sankara Narayanan, D. Vinod, Suganya Athisayamani, and A. Robert Singh

Abstract Image feature extraction can be effectively realized using the local binary pattern (LBP). This paper proposes an image retrieval method using directional image features and an adaptive threshold. There are two steps: (a) encoding the 3 × 3 pattern with the standard deviation of the neighbors' intensity values added to the center pixel as the threshold, where the threshold adapts to the intensity of the neighboring pixels; and (b) estimation of the local directional pattern using the changes in intensity of the neighborhood pixels in different directions, along with binary encoding of the neighboring pixels. These two features are combined and used for image retrieval. The proposed method is evaluated on the Corel-1k data set and compared with other standard local feature representation methods.

Keywords Directional local pattern · Adaptive threshold · Combined features · Image retrieval

1 Introduction

The main ways to realize image retrieval are text-based image retrieval (TBIR) and content-based image retrieval (CBIR) [1]. TBIR works by comparing the textual names of images and therefore requires manual annotation of images. The efficiency of


manual labeling is low, and the workload is large; therefore, TBIR is insufficient to achieve effective image retrieval. In CBIR, the feature description of an image is based on its underlying visual features, such as color, texture, and shape, and is represented by a feature vector. The feature vector of the query image is compared with the feature vectors of all the images in the image database, and the retrieval result is obtained according to the similarity. CBIR's feature extraction and similarity comparison do not require manual semantic annotation, a prominent advantage over TBIR that makes it suitable for image retrieval over large-scale image data [2].

The extraction of image features is a key issue in achieving CBIR. The low-level features of an image mainly include color, shape, and texture [3]. The color feature is a global feature of the image [4]; since color is not sensitive to changes in the direction, viewing angle, and size of the image, color features cannot effectively describe local features. The shape feature is usually also a global feature, with higher semantics than color and texture [5]; shape features mainly include contour and region shape representations. Texture features reflect the spatial distribution of pixel gray levels in a certain image area [6]. Texture feature extraction methods can be divided into the following major categories: (1) signal processing methods: wavelet transform, Gabor wavelet; (2) structure analysis methods: syntactic texture description, mathematical morphology; (3) distribution models: Markov random field, Gibbs random field, and conditional random field models; and (4) statistical methods: gray-scale difference statistics and the gray-level co-occurrence matrix (GLCM).

Among texture feature extraction methods, the local binary pattern (LBP) has the advantages of simple operation and gray-scale invariance. LBP has been applied to face recognition, expression recognition, image classification, image retrieval, moving object tracking, etc. [7–10]. LBP uses the center pixel's gray value as the judgment threshold for encoding, and this threshold is fixed. Analyzing the improved methods above, the original LBP has two deficiencies: (1) the comparison threshold of LBP encoding is a fixed value, so it cannot describe the degree of change of the gray value in the neighborhood; (2) LBP only represents the amplitude change of the gray values of the pixels in the neighborhood and cannot reflect the direction information of the texture. The efficiency of feature selection also depends on the dataset: Kumar et al. [11, 12] proposed two spider monkey-based feature selection approaches with improved efficiency, and Shekhawat et al. [13] tried to reduce dimensionality for the same purpose.

This paper proposes a new local feature extraction method for content-based image retrieval, including:

1. Local features of the adaptive threshold local binary pattern (AT_LBP), coded with the standard deviation of the neighborhood pixel gray values added to the central pixel as the threshold.
2. Local features of the directional local binary pattern (D_LBP), which is sensitive to direction.
3. With reference to the LTP, integration of the adaptive-threshold and directional local features to form composite local features.
4. Use of this composite local feature as the image feature representation for image retrieval.
5. Comparison of four different distances as the basis of similarity measurement for the image retrieval results.

2 Related Works

2.1 LBP

The LBP operator works on a rectangular block of 3 × 3 pixels, considering the 9 adjacent pixels in the input image. The original LBP coding process includes the following steps. First, the gray values of the surrounding 8 pixels are compared with the center gray value: pixels greater than or equal to the gray value of the center pixel are represented by bit 1, otherwise by bit 0. The 8 binary bits are then arranged in sequence to form a binary coding sequence, which is used as the feature vector of the block. Finally, the number of occurrences of the different binary coding sequences is counted as a local feature. The original LBP operator therefore has 2^8 = 256 possible patterns. The coding process of the original LBP is shown in Fig. 1: (a) shows a pixel point and its neighborhood range, (b) is an example of pixel gray values within a neighborhood, and (c) shows the LBP threshold comparison results; the resulting LBP encoding value of the point is 50. The LBP coding formula is given in Eq. (1):

L = \sum_{i=1}^{8} s(g_i - g_c) \times 2^{i-1}    (1)

where g_c is the gray value of the pixel at the center, g_i is the gray value of the pixel at the i-th sampling point in the neighborhood, i = 1, 2, …, 8 indexes the sampling points in the neighborhood of the center point, and the threshold function s(x) is defined in Eq. (2).

Fig. 1 Example of schematic diagram of original LBP coding

s(x) = \begin{cases} 1, & g_i - g_c \ge 0 \\ 0, & g_i - g_c < 0 \end{cases}    (2)

Equation (2) states that if the pixel gray value g_i at a neighboring sampling point is greater than or equal to the gray value g_c at the center point, that sampling point is encoded as bit 1; otherwise, it is encoded as bit 0. The gray value g_c of the central pixel is a fixed threshold; therefore, the original LBP discrimination threshold cannot reflect drastic changes of the gray value in the neighborhood.
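As a concrete illustration of Eqs. (1) and (2), the following NumPy sketch encodes the center pixel of a 3 × 3 patch. The clockwise neighbor ordering is an assumption; any fixed ordering works as long as it is used consistently.

```python
import numpy as np

def lbp_code(patch):
    """Original LBP code of the centre pixel of a 3x3 patch (Eqs. 1-2)."""
    gc = patch[1, 1]
    # Neighbours g1..g8, taken clockwise from the top-left corner (assumed order)
    gi = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # s(x) = 1 when gi - gc >= 0, weighted by 2^(i-1)
    return sum((1 if g - gc >= 0 else 0) << i for i, g in enumerate(gi))

patch = np.array([[60, 120, 30],
                  [50,  50, 200],
                  [90,  40,  10]])
print(lbp_code(patch))  # one of the 256 possible 8-bit patterns
```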

2.2 Uniform LBP

For an original LBP operator using P sampling points in a circular area of radius R, 2^P patterns will be generated. As the number of sampling points increases, the number of patterns grows rapidly, increasing the computational complexity. The uniform LBP operator is defined as follows: when the number of transitions from 0 to 1 or from 1 to 0 in the binary sequence corresponding to a local binary pattern is not more than two, the binary pattern is called a uniform (equivalent) pattern class. For example, the two patterns 1011111 (two transitions) and 11100011 (two transitions) are both regarded as uniform pattern classes. All patterns other than the uniform classes are grouped into one additional category, called the mixed pattern class. When an LBP pattern has more than 2 transitions within a local range of an image, the gray value changes drastically many times, which is usually caused by random noise and generally has no statistical significance. In this way, the uniform-mode LBP operator is composed of 58 uniform patterns and one mixed pattern, 59 patterns in total. The uniform LBP is given by Eq. (3):

L^{u2}_{P,R} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c), & \text{if } U(L_{P,R}) \le 2 \\ P + 1, & \text{else} \end{cases}    (3)

where U(L_{P,R}) represents the number of transitions from 0 to 1 or from 1 to 0 in the binary pattern, defined in Eq. (4):

U(L_{P,R}) = |s(g_{P-1} - g_c) - s(g_0 - g_c)| + \sum_{p=1}^{P-1} |s(g_p - g_c) - s(g_{p-1} - g_c)|    (4)

Thanks to the uniform LBP, the feature vector length after coding is reduced. This paper adopts this model for computing the statistics of the local feature coding patterns; a minimal sketch of the uniformity test follows.
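This is a small sketch of the uniformity measure of Eq. (4) for P = 8, counting circular 0/1 transitions; the 58 + 1 = 59 bins follow directly.

```python
def is_uniform(code, bits=8):
    """True if the binary pattern has at most two circular 0/1 transitions (Eq. 4)."""
    pattern = [(code >> i) & 1 for i in range(bits)]
    transitions = sum(pattern[i] != pattern[(i + 1) % bits] for i in range(bits))
    return transitions <= 2

# 58 uniform patterns; everything else falls into the single mixed bin
print(sum(is_uniform(c) for c in range(256)))  # -> 58
```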


3 Proposed Local Feature Extraction Method

3.1 Adaptive Threshold Local Binary Pattern (AT_LBP)

The original LBP cannot accurately represent the severity of the gray-level difference between the center point and the neighborhood points, yet this difference is an important indicator for distinguishing local texture features from noise. For example, if the gray intensity of an adjacent pixel is only one level greater than the center pixel, LBP will still register a change and encode this position as bit 1. In fact, the pixel gray levels of a local area containing texture or an edge change relatively drastically, so it is reasonable to encode as bit 1 only the neighborhood points whose gray value differs by a sufficiently large amount. The coding steps of the adaptive threshold LBP (AT_LBP) are as follows.

Step 1. Calculate the average value μ and population standard deviation σ of the gray values of the pixels at the 9 positions in the 3 × 3 neighborhood, using Eqs. (5) and (6):

\mu = \frac{1}{N} \sum_{i=0}^{N-1} g_i    (5)

\sigma^2 = \frac{1}{N} \sum_{i=0}^{N-1} (g_i - \mu)^2    (6)

In Eqs. (5) and (6), g_i is the gray value of each pixel in the local area and N is the number of all pixels in the neighborhood of the center point g_0; here N is 9.

Step 2. Compare the gray values of the pixels at the 8 locations around the central pixel. If the intensity of a point is greater than or equal to the sum of the intensity of the center point g_0 and the standard deviation σ, the neighborhood point is encoded as bit 1; otherwise it is encoded as bit 0, as in Eq. (7), with i ∈ {1, 2, …, 8}:

s(x) = \begin{cases} 1, & g_i - g_0 - \sigma \ge 0 \\ 0, & g_i - g_0 - \sigma < 0 \end{cases}    (7)

Step 3. According to the threshold comparison results of the neighboring pixels, perform the binary pattern coding of Eq. (8):

AT\_L = \sum_{i=1}^{8} s(g_i - g_0 - \sigma) \, 2^{i-1}    (8)


Fig. 2 Example of schematic diagram of AT_LBP coding

For example, for the pixel block of the local image area shown in Fig. 2, the average value and standard deviation of the gray values of the pixels at its 9 positions are μ = 97.22 and σ = 59.23, so the comparison threshold is g_0 + σ = 62 + 59.23 = 121.23. In Fig. 2, (a) represents the 3 × 3 neighborhood of g_0, (b) shows an example of the gray values of the pixels in the neighborhood, and (c) is the threshold comparison process and encoding result of AT_LBP. After the AT_LBP coding is completed, the codes are generally not used directly as the texture feature vector for image retrieval or classification; instead, the statistical histogram of the AT_LBP codes is used as the feature vector for the image representation. In this method, the uniform LBP is used to compute the statistics of the AT_LBP-coded image to obtain the feature vector.
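A hedged NumPy sketch of Steps 1-3 (Eqs. 5-8) is given below; the neighbor ordering is the same assumption as in the earlier LBP sketch.

```python
import numpy as np

def at_lbp_code(patch):
    """AT_LBP code of a 3x3 patch with adaptive threshold g0 + sigma (Eqs. 5-8)."""
    g0 = float(patch[1, 1])
    sigma = patch.std()          # population standard deviation over all 9 pixels
    gi = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # Eq. (7): a neighbour is encoded 1 only when gi - g0 - sigma >= 0
    return sum((1 if g - g0 - sigma >= 0 else 0) << i for i, g in enumerate(gi))
```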

3.2 Directional Local Binary Pattern (D_LBP)

Human vision has multi-directional characteristics, so extracting directional descriptions of local image features is of great significance. We propose a directional local binary pattern, D_LBP for short. The method is as follows: in the neighborhood of the center point, detect along a certain direction and use the change of the gray values of the adjacent pixels to indicate the directionality of the texture, because abrupt changes of gray value between adjacent pixels in the neighborhood occur where the edges or textures of the image exist. Therefore, in each direction around the central pixel, the gray-level mutation of the adjacent pixels is encoded to represent the local texture or edge information in that direction. Figure 3 illustrates the structure of the directional local binary pattern.


Fig. 3 Schematic diagram of the structure of the D_LBP

Fig. 4 Example of diagram for the combining local feature vectors

value at point g1 is less than g9 and also less than g0 , it means that it also means a sudden change in gray in his direction. The transition point of the gray value is a point that may exist on the edge. Therefore, in the presence of these two transitions, the position of point g1 is encoded as bit 1; otherwise, point g1 is encoded as bit 0. In Fig. 4, the lengths of the histogram feature vectors obtained by the AT_LBP and D_LBP are both 59, and the combined feature vectors length is 118.

4 Results and Discussion

The experimental image database is Corel-1k, which contains 10 categories of images: Africa, beaches, buildings, bus, dinosaurs, elephants, flowers, horses, mountains, and food. Each category has 100 images, and the size of each image is 384 × 256 or 256 × 384 pixels. Some images from the database are shown in Fig. 5.


Fig. 5 Sample images from Corel-1000 database (one image per category)

In Fig. 6, the average retrieval precision corresponding to the four distances (Euclidean, Manhattan, D1, and Canberra) is higher than 60%. Among them, the Canberra distance has the highest retrieval precision, 67.63%. Among the image categories, dinosaur, bus, flower, and horse have higher precision. The proposed local feature extraction method is compared with HOG-LBP [14], LF SIFT histogram [15], color histogram [16], and LTP moments [7]; the average precision of image retrieval based on the different feature extraction methods is shown in Table 1.

Fig. 6 Comparison of average retrieval precision at different distances (Euclidean, Manhattan, D1, Canberra; y-axis: average precision (%); x-axis: input image category)
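Since the Canberra distance gave the best precision, here is a hedged sketch of distance-based ranking over the 118-D descriptors. The small epsilon guarding against zero denominators and the top_k default are implementation assumptions.

```python
import numpy as np

def canberra(u, v, eps=1e-12):
    """Canberra distance between two feature vectors."""
    return float(np.sum(np.abs(u - v) / (np.abs(u) + np.abs(v) + eps)))

def retrieve(query_feat, db_feats, top_k=10):
    """Indices of the top_k database images closest to the query descriptor."""
    dists = [canberra(query_feat, f) for f in db_feats]
    return np.argsort(dists)[:top_k]
```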

Table 1 Retrieval results comparison of different methods

Method | Precision (%)
HOG-LBP [14] | 46
LF SIFT histogram [15] | 48.2
Color histogram [16] | 50.5
LTP-moments [7] | 53.7
The proposed method | 67.63


5 Conclusion

A local texture feature method combining an adaptive threshold and directional local features is proposed and applied to image retrieval. Its characteristics are: (1) by using the central pixel's gray value plus the local standard deviation as an adaptively changing threshold, the intensity of local gray-level changes in the image can be expressed more accurately; (2) a directional representation of local features is realized by encoding the neighborhood directions around the central pixel; (3) integrating these two local binary patterns into one feature vector describes the local features of the image more fully. Experimental results show that, compared with existing local image feature extraction methods, the local texture features of this method achieve higher precision in image retrieval.

References

1. C. Wang, J. Zhao, X. He et al., Image retrieval using nonlinear manifold embedding. Neurocomputing 72(16–18), 3922–3929 (2009)
2. A.R. Singh, A. Suganya, Efficient tool for face detection and face recognition in color group photos, in 2011 3rd International Conference on Electronics Computer Technology, Kanyakumari (2011), pp. 263–265. https://doi.org/10.1109/ICECTECH.2011.5941750
3. G. Muhammad, Date fruits classification using texture descriptors and shape size features. Eng. Appl. Artif. Intell. 37, 361–367 (2015)
4. S.S. Park, Y.G. Shin, D.S. Jang, A novel efficient technique for extracting valid feature information. Expert Syst. Appl. 37(3), 2654–2660 (2010)
5. G.G.C. Lee, C.F. Chen, H.Y. Lin et al., 3-D video generation from monocular video based on hierarchical video segmentation. J. Signal Process. Syst. 81(3), 345–358 (2015)
6. G. Hu, Z. Yang, M. Zhu et al., Automatic classification of insulator by combining k-nearest neighbor algorithm with multi-type feature for the Internet of Things. EURASIP J. Wirel. Commun. Netw. 177 (2018)
7. S. Athisayamani, A. Robert Singh, T. Athithan, Recognition of ancient Tamil palm leaf vowel characters in historical documents using B-spline curve recognition. Procedia Comput. Sci. 171, 2302–2309 (2020). ISSN 1877-0509. https://doi.org/10.1016/j.procs.2020.04.249
8. N. Ani Brown Mary, A. Robert Singh, S. Athisayamani, Banana leaf diseased image classification using novel HEAP auto encoder (HAE) deep learning. Multimed. Tools Appl. 79, 30601–30613 (2020). https://doi.org/10.1007/s11042-020-09521-1
9. A. Robert Singh, S. Athisayamani, A.S. Alphonse, Enhanced speeded up robust feature with bag of grapheme (ESURF-BoG) for Tamil palm leaf character recognition, in Inventive Communication and Computational Technologies, ed. by G. Ranganathan, J. Chen, Á. Rocha. Lecture Notes in Networks and Systems, vol. 145 (Springer, Singapore, 2021). https://doi.org/10.1007/978-981-15-7345-3_3
10. N. Ani Brown Mary, A. Robert Singh, S. Athisayamani, Classification of banana leaf diseases using enhanced gabor feature descriptor, in Inventive Communication and Computational Technologies, ed. by G. Ranganathan, J. Chen, Á. Rocha. Lecture Notes in Networks and Systems, vol. 145 (Springer, Singapore, 2021). https://doi.org/10.1007/978-981-15-7345-3_19
11. S. Kumar, B. Sharma, V.K. Sharma, R.C. Poonia, Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evol. Intell. 1–12 (2018). https://doi.org/10.1007/s12065-018-0186-9


12. S. Kumar, B. Sharma, V.K. Sharma, H. Sharma, J.C. Bansal, Plant leaf disease identification using exponential spider monkey optimization. Sustain. Comput. Inform. Syst. 28 (2018). https://doi.org/10.1016/j.suscom.2018.10.004
13. S.S. Shekhawat, H. Sharma, S. Kumar, A. Nayyar, B. Qureshi, bSSA: Binary Salp Swarm Algorithm with hybrid data transformation for feature selection. IEEE Access 9, 14867–14882 (2021). https://doi.org/10.1109/ACCESS.2021.3049547
14. J. Yu, Z. Qin, T. Wan et al., Feature integration analysis of bag-of-features model for image retrieval. Neurocomputing 120, 355–364 (2013)
15. T. Deselaers, D. Keysers, H. Ney, Features for image retrieval: an experimental comparison. Inf. Retr. 11, 77–107 (2008)
16. P. Srivastava, N.T. Binh, A. Khare, Content-based image retrieval using moments of local ternary pattern. Mob. Netw. Appl. 19(5), 618–625 (2014)

A Review in Anomalies Detection Using Deep Learning

Sanjay Roka, Manoj Diwakar, and Shekhar Karanwal

Abstract Anomaly detection is one of the most valuable research topics in deep learning and computer vision. Among the various tools and techniques, deep learning, owing to its robustness, accuracy and myriad advantages, is discussed in depth for anomaly detection in this paper. Anomalies, their nature, and their detection are explained comprehensively. Optimizers that can be used in such techniques to improve detection rate and performance are also presented, along with their descriptions, merits and demerits. A deep learning framework for anomaly detection is described, and a thorough comparative discussion and analysis of various excellent state-of-the-art methods is provided.

Keywords Anomaly detection · Deep learning · Techniques · Dataset · AUC · EER

1 Introduction

The number of crimes and terrorist acts in the world has been continuously increasing day by day. As a result, the installation of surveillance cameras in both public and private places has increased sharply. Manually monitoring multiple surveillance cameras is difficult and tiring: after 20 min of attention, an operator's performance degrades, and the detection rate and efficiency of manual monitoring are quite low. Anomalies occur infrequently, but when they do occur their consequences can be highly devastating, leading to loss of life and property. Therefore, there is a need for a fully automatic surveillance system that can analyze, process and alert the concerned authority if any anomalies are detected in the video. Anomalies can be abnormal events like crimes, traffic accidents and so on. Research on the detection of anomalies has become one of the most popular topics among researchers, and in the past, myriad models have been


proposed by researchers for this task. However, traditional ways of detecting anomalies do not produce satisfactory accuracy, for various reasons. Recently, deep learning has been implemented successfully in computer vision: deep learning models like GANs, CNNs, auto-encoders, etc., have a strong ability to represent learned features. Based on the labels present in the data, deep learning techniques can be divided into three forms. The first is supervised, in which each dataset item is manually labeled normal/abnormal; it gives a better detection rate than the other two forms (examples: SVM, linear regression, random forest). The second is unsupervised, which relies completely on rules and thresholds, has a very large false positive rate, and is therefore not robust and has low accuracy; it assumes that anomalies are rare compared to normal data (examples: Apriori, K-means). The third is semi-supervised or weakly supervised, which demands a smaller number of training samples, with labels available only for normal data.

2 Anomaly Detection with Deep Learning

Anomaly detection is the process of identifying unexpected items or events that differ from the normal pattern in the data. Anomalies can be fighting, running, jumping, stealing, walking on a lawn, etc. Anomalous activities arise for various reasons, some of which are listed below; Table 1 describes common datasets and the anomalies they contain.

Table 1 Datasets and their anomalies

Dataset | Anomaly nature | Anomalies occurring
Avenue | Appearance, motion | Abnormal objects, strange action, wrong directions
Subway | Motion | No payment, wrong direction
AESDD | Emotion | Sad, happy, anger, fear, disgust
BOSS | Action | Falling, stealing, fighting
UCF crime | Action | Robbery, abuse, fight, arrest, accident, assault
UMN | Motion | Abnormal movement of crowd
UCSD | Appearance, motion | Walking in lawn, wheelchair, small carts, skaters, bikers

• Anomalous position: This type of anomaly is the easiest to detect and arises from the position of an object in the scene, e.g., a person standing in an unauthorized area.
• Anomalous movement: This type of anomaly arises from the unexpected trajectory of an object, e.g., when one object moves faster or slower than its surroundings.
• Anomalous appearance: This anomaly occurs when an unrecognized object enters the scene, e.g., the entrance of a vehicle onto a pedestrian path.


• Anomalous action: These anomalies are difficult to detect and involve understanding the usual behavior patterns of the individuals present in the scene.
• Anomalous effect: This type of anomaly arises from the emotions of the people.

Despite the variety of methods for anomaly detection, researchers mostly prefer deep learning approaches because of their robustness, automatic feature extraction, high detection accuracy, and ability to handle high-dimensional data.

Deep learning for motion and appearance anomaly: The framework proposed by Bouindour et al. [1] uses deep learning to extract appearance and motion features, which are then fed to a one-class SVM classifier. The classifier is trained with normal features only, so it learns the normal region of the feature space; at testing time, any sample beyond this region is considered an anomaly. GMVAE, which works by reconstruction, was implemented by Fan et al. [5] for the detection and localization of anomalies: GMVAE learns the pattern of normal behavior through a reconstruction-based technique, and during testing any sample that does not match the learned pattern is considered an anomaly. Similarly, in [2], a 3D-FCAE, which also works by reconstruction, was trained end-to-end on normal behavior; during testing, any sample with a high reconstruction error is considered an anomaly. Li et al. [3] implemented a CNN for the extraction of motion and appearance features: during training, gradient and optical flow patches are extracted and passed to a multivariate Gaussian fully convolutional adversarial auto-encoder, and during testing an energy-based method computes the appearance and motion anomaly scores of the testing patches. In [4], optical flow was merged with semantic information extracted from existing CNN models for the detection of anomalies. Fan et al. [5] also provided a framework that learns the latent representation of normal samples as a GMM with a VAE; samples that fail to match are declared anomalies.

Deep learning for action anomaly: Zhuang et al. [6] applied a convolutional DLSTM model for crowd violence detection. Gao et al. [7] used a deep learning technique for violent crowd behavior detection: low-dimensional features are first generated with the violent flow algorithm, then passed to an MLP to obtain deep features, which are finally forwarded to an SVM for classification. Recently, Direkoglu [8] detected violent crowd activities using motion information images and a CNN: a motion information image (MII) is generated from optical flow and used to train the CNN on normal/abnormal crowd behavior, with good results on the UMN dataset. A deep 3D CNN (C3D) was implemented by Tran et al. [9] for video behavior classification by extracting spatiotemporal features; building on this, Zhou et al. [10] used optical flow to detect spatiotemporal features and fed them as input to the C3D for detecting anomalies.


Position anomaly detection: The deep learning model SSD was used in [11] for the detection of position anomalies in a harbor. This object detection model was implemented for the safety of people working in the harbor and is used to detect the position and location of the people.

To increase the performance of deep learning models, optimizers are employed to decrease the loss by updating the weights, biases and learning rate: the lower the loss, the better the accuracy and performance of the model. Some popular optimizers, along with their descriptions, merits and demerits, are given in Table 2.

A simple demonstration of how a deep learning framework can detect anomalies is shown in Fig. 1, taken from [15]. This framework contains only one model and can be trained end-to-end to detect both temporal and spatiotemporal irregularities in video clips. In the training phase, video clips, rather than single frames or patches, are passed to a 3D fully convolutional auto-encoder (3D-FCAE). The auto-encoder contains an encoder and a decoder that mirror each other, with no pooling layers: the encoder has three 3D convolutional layers, the decoder has three 3D deconvolutional layers, and the Tanh activation function is used. The input and output of the auto-encoder have the same dimensions, 1 × 8 × 112 × 112, where the first number is the channel count (1 for grayscale), the second is the number of stacked successive frames, and the rest is the image resolution. There is one hidden layer with 12,544 units, i.e., 64 channels with 1 × 14 × 14 feature maps. The auto-encoder first learns the signature of normal patterns only. During the testing phase, it uses the learned signature to reconstruct the input: regular motions are reconstructed with low reconstruction error, whereas irregular motions are reconstructed with high reconstruction error, and only frames with a high reconstruction error value are considered anomalies. A hedged PyTorch sketch of such an auto-encoder follows.
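The following sketch mirrors the architecture described above (three 3D convolutions, three 3D deconvolutions, Tanh activations, 1 × 8 × 112 × 112 clips, a 64 × 1 × 14 × 14 bottleneck). The kernel sizes, intermediate channel widths and the use of stride-2 convolutions in place of pooling are assumptions, since [15] states there is no pooling but does not specify these hyperparameters; inputs are assumed scaled to [-1, 1] to match the Tanh output.

```python
import torch
import torch.nn as nn

class FCAE3D(nn.Module):
    """Sketch of a 3D fully convolutional autoencoder in the spirit of [15]."""
    def __init__(self):
        super().__init__()
        # Encoder: 1x8x112x112 -> 64x1x14x14 (12,544 hidden units, as in the text)
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.Tanh(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.Tanh(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.Tanh(),
        )
        # Decoder mirrors the encoder back to 1x8x112x112
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.Tanh(),
            nn.ConvTranspose3d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.Tanh(),
            nn.ConvTranspose3d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, clip):
    """Per-clip reconstruction error; clip has shape (1, 1, 8, 112, 112)."""
    with torch.no_grad():
        return torch.mean((model(clip) - clip) ** 2).item()
```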

3 Comparative Discussion and Analysis

A wide range of datasets is available for anomaly detection; the most popular publicly available benchmarks are UCSD, UMN, CUHK Avenue, etc. (Table 3).

UCSD Pedestrian Dataset: This dataset contains two varieties of video, Ped1 and Ped2. Ped1 has 34 training and 36 testing videos in which people walk toward and away from the camera, whereas Ped2 has 16 training and 12 testing videos in which people walk parallel to the camera plane. The training videos contain only normal behavior, the testing videos contain both normal and abnormal behavior, and each video clip has 200 frames at a resolution of 158 × 238. Abnormal activities include bikes, cars, skaters, wheelchairs, moving in the wrong direction, etc. Examples of anomalies detected in this dataset are shown in Fig. 2.


Table 2 Different types of optimizers

Gradient descent. Description: a first-order optimization algorithm that depends on the first-order derivative of the loss function; used for anomaly detection in [12]. Merit: easy to implement, compute and understand. Demerit: performs poorly with large datasets; consumes large amounts of memory.

Stochastic gradient descent (SGD). Description: uses a single static learning rate for all parameters during the entire training stage; used for anomaly detection in [10, 13]. Merit: memory consumption is lower than GD. Demerit: completing one epoch takes more time than GD.

Mini-batch stochastic gradient descent. Description: the dataset is divided into batches, and the parameters are updated after every batch; used for anomaly detection in [14]. Merit: convergence time is lower than SGD. Demerit: weight updates are noisier than GD, and convergence takes longer than GD.

SGD with momentum. Description: accelerates convergence toward the relevant direction and reduces fluctuation; used in [15] for detecting anomalies in surveillance videos. Merit: has all the merits of SGD and converges faster than GD. Demerit: one more hyperparameter must be selected manually.

NAG (Nesterov accelerated gradient). Description: solves the problem of high momentum; used for detecting anomalies in [16]. Merit: does not miss local minima. Demerit: a hyperparameter must be selected manually.

AdaDelta. Description: an exponentially moving average is used rather than the sum of all gradients; used for anomaly detection in [17]. Merit: the learning rate does not decay, so training does not stop. Demerit: requires heavy computation.

Adam. Description: calculates adaptive learning rates for each parameter; good for big datasets; memory efficient; implemented in [3, 17] for anomaly detection. Merit: very fast and converges rapidly. Demerit: computationally costly; may suffer from a weight decay problem.
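As a usage illustration for Table 2, the sketch below instantiates the listed optimizers with PyTorch's torch.optim. The placeholder model and hyperparameter values are illustrative, not recommendations from the surveyed papers.

```python
import torch

model = torch.nn.Linear(112 * 112, 2)   # placeholder network

optimizers = {
    'gd/sgd':   torch.optim.SGD(model.parameters(), lr=0.01),
    'momentum': torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9),
    'nag':      torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                                nesterov=True),
    'adadelta': torch.optim.Adadelta(model.parameters()),
    'adam':     torch.optim.Adam(model.parameters(), lr=1e-3),
}
# Mini-batch SGD arises from the DataLoader's batch size rather than the optimizer.
# Typical training step with any of them: opt.zero_grad(); loss.backward(); opt.step()
```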

Fig. 1 Framework of 3D-FCAE for the anomaly detection. Image taken from [15]

UMN Dataset: This dataset contains three different scenes with a total length of 4 min and 17 s (7725 frames). In each video, an unstructured crowd is moving normally and suddenly starts to run; this moment is marked as the anomaly. Examples of anomalies detected in this dataset are shown in Fig. 3.

A comparison of various state-of-the-art methods at the frame and pixel level for anomaly detection on the UCSD Ped1 dataset is given in Table 4. For evaluation, the metrics EER and AUC are used: a higher AUC and a lower EER indicate better performance. For both the frame-level and pixel-level comparison, the Deep GMM [13] model shows outstanding performance relative to the other competitive models; it uses the unsupervised deep learning framework PCANet for feature learning. At the frame level, its best EER and AUC scores are 15.1% and 92.5%, respectively; at the pixel level, its best AUC score is 69.9%, but its pixel-level EER is not available, so the best reported pixel-level EER is the 36% of the SS [24] approach. Similarly, Table 5 shows the comparison on the UMN dataset. From Table 5, it can be clearly observed that Parallel ST CNN [23] has the best performance on UMN among all the competitive models, achieving the highest AUC of 99.74% and the lowest EER of 1.65%. Its closest competitor, and the second-best model overall, is the Cascade DNN [28].


Table 3 Comparative discussion and analysis

1. Anomaly detected using a deep learning method [18]. Discussion: an ensemble classifier gives the final detection result based on the votes and scores of the other classifiers. Advantage: in deep learning, feature extraction is automatic and accuracy is high. Disadvantage: a component could be added in the detection phase for accurate localization of the anomaly.
2. Unsupervised deep neural network for anomaly detection [13]. Discussion: PCANet is used to extract high-level features, then a deep GMM is implemented for anomaly classification. Advantage: experimental results prove its effectiveness over hand-crafted features. Disadvantage: supervised techniques give more accurate and reliable results than unsupervised ones.
3. 3D-FCAE used for anomaly detection [15]. Discussion: the 3D-FCAE can be trained end-to-end to detect anomalies. Advantage: spatiotemporal irregularities can be accurately located. Disadvantage: not reported.
4. Nonparametric approach for anomaly detection and localization [19]. Discussion: dense, overlapping local spatiotemporal features are used to deal with crowd scene anomalies. Advantage: the method is computationally inexpensive and has good accuracy. Disadvantage: handcrafted features are not robust and have high computation cost.
5. Multilayer perceptron RNN used for anomaly detection [20]. Discussion: a GMM is used to isolate the anomaly from the constant background. Advantage: results show better accuracy, sensitivity and specificity. Disadvantage: not as effective as some state-of-the-art deep learning methods.
6. Auto-encoder used for anomaly detection [21]. Discussion: the auto-encoder learns temporal regularity, which is used for detecting anomalies. Advantage: results show the good performance of the proposed method. Disadvantage: handcrafted features have high computation costs.

Fig. 2 Anomaly detected in UCSD dataset. Green rectangle denotes motion and red rectangle denotes anomalies in motion. Image taken from [10]


Fig. 3 Anomaly detected in UMN dataset. Image taken from [22]

Table 4 Comparison of frame-level and pixel-level performance on UCSD Ped1

Approach | Frame-level EER (%) | Frame-level AUC (%) | Pixel-level EER (%) | Pixel-level AUC (%)
LOF [24] | 38 | 65.2 | 76 | 17.3
Deep GMM [14] | 15.1 | 92.5 | – | 69.9
SS [25] | 17.5 | 87.6 | 36 | 66
MPPCA [26] | 40 | 67 | – | 13.3
SCL [27] | 17 | 88.4 | 42 | 64.3
GPR [28] | 23.7 | 83.8 | 37.3 | 63.3
STCNN [11] | 24 | 85 | – | 87

Table 5 Summary of quantitative performance on UMN

Approach | EER (%) | AUC (%)
Cascade DNN [28] | 2.5 | 99.60
CFS [29] | – | 88.30
SR [30] | – | 97.00
WSF [22] | 5.8 | 98.9
OCELM [31] | 3.1 | 99.00
GKIM [12] | 4 | –
Parallel ST CNN [23] | 1.65 | 99.74

Cascade DNN lags behind Parallel ST CNN [23] by 0.14% in AUC and by 0.85% in EER. Overall, we can conclude that Parallel ST CNN [23] outperforms all the other competitive models and is the better option for researchers detecting anomalies in videos.
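For reproducibility of the AUC/EER comparisons above, here is a hedged sketch of how frame-level AUC and EER are typically computed from anomaly scores using scikit-learn; the surveyed papers may differ in details such as score normalization.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def frame_level_auc_eer(labels, scores):
    """AUC and EER from per-frame ground truth (0/1) and anomaly scores."""
    fpr, tpr, _ = roc_curve(labels, scores)
    # EER is the operating point where FPR equals the miss rate (1 - TPR)
    idx = int(np.nanargmin(np.abs(fpr - (1 - tpr))))
    return auc(fpr, tpr), fpr[idx]
```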


4 Conclusion

In contrast to traditional methods, deep learning approaches do not require manual feature extraction; moreover, their high performance and accuracy extend to high-dimensional data. In this paper, we provided a thorough description of the anomaly detection process. We first focused on anomalies, their nature, and their detection through deep learning, and described popular optimizers that can be used in deep learning models to improve detection rate and performance. We then elaborated a deep learning framework that can be used to design an anomaly detection system. Afterward, we provided a comparative discussion and analysis of various excellent deep learning models used for anomaly detection. In the future, we will implement these techniques to design deep learning-based models for detecting anomalies in surveillance video.

References

1. S. Bouindour, M.M. Hittawe, S. Mahfouz, H. Snoussi, Abnormal event detection using convolutional neural networks and 1-class SVM classifier, in 8th International Conference on Imaging for Crime Detection and Prevention (ICDP 2017), Madrid (2017), pp. 1–6
2. Y. Wu, Y. Ye, C. Zhao, Coherent motion detection with collective density clustering, in ACM Conference on Multimedia (2015), pp. 361–370
3. N. Li, F. Chang, Video anomaly detection and localization via multivariate Gaussian fully convolution adversarial autoencoder. Neurocomputing 369, 92–105 (2019)
4. M. Ravanbakhsh, M. Nabi, H. Mousavi, E. Sangineto, N. Sebe, Plug-and-play CNN for crowd motion analysis: an application in abnormal event detection, in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (2020), pp. 1689–1698
5. Y. Fan, G. Wen, D. Li, S. Qiu, M.D. Levine, Video anomaly detection and localization via Gaussian mixture fully convolutional variational autoencoder. CVIU 195 (2020)
6. N. Zhuang, J. Ye, K.A. Hua, Convolutional DLSTM for crowd scene understanding, in 2017 IEEE International Symposium on Multimedia (ISM), Taichung (2017), pp. 61–68
7. M. Gao et al., Violent crowd behavior detection using deep learning and compressive sensing, in CCDC, Nanchang, China (2019), pp. 5329–5333
8. C. Direkoglu, Abnormal crowd behavior detection using motion information images and convolutional neural networks. IEEE Access 8, 80408–80416 (2020)
9. D. Tran, L. Bourdev, R. Fergus, L. Torresani, M. Paluri, Learning spatiotemporal features with 3D convolutional networks, in IEEE ICCV (2015), pp. 4489–4497
10. S. Zhou, W. Shen, D. Zeng, M. Fang, Y. Wei, Z. Zhang, Spatial-temporal convolutional neural networks for anomaly detection and localization in crowded scenes. Signal Process. Image Commun. 47, 358–368 (2016)
11. L. Zhang, Y. Chen, S. Liao, Algorithm optimization of anomaly detection based on data mining, in 2018 10th ICMTMA, Changsha (2018), pp. 402–404
12. H. Ullah, A.B. Altamimi, M. Uzair, M. Ullah, Anomalous entities detection and localization in pedestrian flows. Neurocomputing 290, 74–86 (2018)
13. Y. Feng, Y. Yuan, X. Lu, Learning deep event models for crowd anomaly detection. Neurocomputing 219, 548–556 (2017)
14. Z. Li, Y. Li, Z. Gao, Spatiotemporal representation learning for video anomaly detection. IEEE Access 8, 25531–25542 (2020)


15. M. Yan, J. Meng, C. Zhou, Z. Tu, Y. Tan, J. Yuan, Detecting spatiotemporal irregularities in videos via a 3D convolutional autoencoder. J. Vis. Commun. Image Represent. 67 (2020)
16. C. Sommer, R. Hoefler, M. Samwer, D.W. Gerlich, A deep learning and novelty detection framework for rapid phenotyping in high-content screening. Mol. Biol. Cell 28(23), 3428–3436 (2017)
17. T. Le, J. Kim, H. Kim, An effective intrusion detection classifier using long short-term memory with gradient descent optimization, in International Conference PlatCon, Busan (2017), pp. 1–6
18. A. Khaleghi, M.S. Moin, Improved anomaly detection in surveillance videos based on a deep learning method, in 8th Conference of AI & Robotics (2018), pp. 73–81
19. M. Bertini, A. Del Bimbo, L. Seidenari, Multi-scale and real-time non-parametric approach for anomaly detection and localization. CVIU 116(3) (2012)
20. M. Murugesan, S. Thilagamani, Efficient anomaly detection in surveillance videos based on multi layer perception recurrent neural network. Microprocess. Microsyst. 79 (2020)
21. M. Hasan, J. Choi, J. Neumann, A.K.R. Chowdhury, L.S. Davis, Learning temporal regularity in video sequences, in IEEE Conference on CVPR (2016), pp. 733–742
22. X. Hu, J. Dai, Y. Huang, H. Yang, L. Zhang, W. Chen, G. Yang, D. Zhang, A weakly supervised framework for abnormal behavior detection and localization in crowded scenes. Neurocomputing 383, 270–281 (2020)
23. Z.-P. Hu, L. Zhang, S.-F. Li, D.-G. Sun, Parallel spatial-temporal convolutional neural networks for anomaly detection and location in crowded scenes. J. Vis. Commun. Image Represent. 67, 102765 (2020)
24. Y. Hu, Y. Zhang, L.S. Davis, Unsupervised abnormal crowd activity detection using semi parametric scan statistic, in Conference on CVPR Workshops (2013), pp. 767–774
25. J. Kim, K. Grauman, Observe locally, infer globally: a space-time MRF for detecting abnormal activities with incremental updates, in Proceedings of the IEEE Conference on CVPR (2009), pp. 2921–2928
26. C. Lu, J. Shi, J. Jia, Abnormal event detection at 150 fps in MATLAB, in Proceedings of the IEEE International Conference on Computer Vision (2013), pp. 2720–2727
27. K.W. Cheng, Y. Chen, W.H. Fang, Video anomaly detection & localization using hierarchical feature representation & Gaussian process regression, in Conference CVPR (2015)
28. M. Sabokrou, M. Fayyaz, M. Fathy, R. Klette, Deep-cascade: cascading 3D deep neural networks for fast anomaly detection and localization in crowded scenes. IEEE Trans. Image Process. 26, 1992–2004 (2017)
29. R. Leyva, V. Sanchez, C. Li, Video anomaly detection with compact feature sets for online performance. IEEE Trans. Image Process. 26(7), 3463–3478 (2017)
30. P. Liu, Y. Tao, W. Zhao, X.L. Tang, Abnormal crowd motion detection using double sparse representation. Neurocomputing 269, 3–12 (2017)
31. S.Q. Wang, E. Zhu, J.P. Yin, F. Porikli, Video anomaly detection & localization by local motion based joint video representation & OCELM. Neurocomputing 277, 161–175 (2018)

Sustainable Anomaly Detection in Surveillance System

Tanmaya Sangwan, P. S. Nithya Darisini, and Somkuwar Shreya Rajiv

School of Computer Science and Engineering, VIT University, Chennai, India

Abstract As the need for security is ever-increasing, more surveillance systems are being deployed for domestic and organizational security. Traditional surveillance systems have a CCTV camera in place, which works either as recording equipment or as a monitoring system. The former does not prevent any damage, and the latter suffers from human error. This paper proposes to overcome these overheads by automating the process of anomaly detection. It aims to classify surveillance videos as normal or abnormal using two approaches: first, a CNN Long Short-Term Memory network (CNN-LSTM), and second, a pre-trained CNN for obtaining features of the videos followed by an LSTM network for classification of the video footage. Analysis using the CNN-LSTM and Xception-LSTM models has been carried out on the UCF-Crime dataset. Upon comparing results, it was found that the CNN-LSTM model outperforms the Transfer Learning model by approximately 20% in accuracy. The final real-time system uses a Raspberry Pi as the microprocessor. The CNN-LSTM model file uploaded onto the Pi has a minimal size of 5.8 MB, which is far smaller than state-of-the-art model sizes. This paper also highlights the input specifications which result in an inefficient transfer learning model, as a reference for future works.

Keywords Anomaly detection · CNN-LSTM · Transfer learning

1 Introduction

Security surveillance, such as continuous monitoring of places for safety purposes, is a widely used service. Surveillance systems are used to detect instances when a specific event occurs and to identify the actors involved in the relevant scene. They are used extensively for policing. Surveillance systems are also being used for public health surveillance [1], as well as for learning customer attributes and shopping behavior analysis. In 2023, the video surveillance market is projected to be worth 62.6 billion U.S. dollars, with infrastructure applications forecast to make up over 36% of the global market [2]. Technology plays an integral role in enhancing the capabilities of the traditional surveillance system. Security and control systems have become one of the most popular SMART home devices in recent years. Using video surveillance classification techniques, one can save time by automating the monitoring of videos to detect at any instant whether any activity of interest is occurring (or has occurred) and alert the stakeholders about the party involved in said activity. Anomaly detection in surveillance videos aims to serve this purpose [3, 4]. An anomaly in this case refers to an abnormal or "out of place" activity taking place in a certain environment, in contrast to regular events.

In traditional surveillance systems, when an emergency occurs, the authorities use the previously recorded video to assess the incident and find the guilty party; otherwise, a supervisor is appointed to monitor the system at all times. This proves to be quite tedious, and in many cases these surveillance methods cannot enable timely, on-site action and thus cannot prevent unfortunate incidents like robbery from occurring. An intelligent (SMART) anomaly detection system for video surveillance helps prevent such situations by continuously monitoring the video as a preventive measure and automatically alerting the owner through an alarm (buzzer). To the best of our knowledge, there is a lack of investigation of the use case of comparing a transfer learning model against a CNN-LSTM model to detect abnormal behavior in CCTV surveillance videos. We also introduce a sustainable model size that requires less memory than state-of-the-art models [5]. We aim to enhance surveillance systems' functionality by removing the vulnerability to human error and making them automatic and spontaneous. This paper focuses on anomaly detection on the UCF-Crime dataset, which has videos of 13 anomalies and videos of normal activities.

2 Related Work

The automated understanding of human activities in different environment settings has been an active area of research due to its wide applications. For the task of feature extraction or feature generation in video-based applications, recent works focus more on obtaining the spatial and temporal features of a video than on using handcrafted features (though many applications of the latter type provide competitive results). The trend of using deep networks for robust feature extraction is only growing. The importance of deep learning in the field of video classification has increased manifold, with many researchers developing models which are better at achieving the tasks of video classification, activity detection, and action recognition [6, 7]. As the size of stored data keeps increasing, the use of deep models in the field of computer vision for video analysis has been gaining momentum. Ji et al. [8] developed a 3D convolutional neural network model which recognizes human activities in the real-world environment of airport surveillance videos. Karpathy et al. [9] carried out an extensive evaluation of the performance of CNNs in large-scale video classification (on a dataset of 1 million videos with 487 classes), with their best model achieving an accuracy of 63.3% compared to the UCF-101 baseline model's 43.9%. Simonyan and Zisserman [10] proposed two-stream CNNs for the task of action recognition in videos, wherein the class scores of a spatial classifier CNN and a temporal classifier CNN are combined to give a final class score to each video. While purely convolutional networks provide competitive results at the job of video classification, the importance of recurrent neural networks, especially Long Short-Term Memory RNNs, is only increasing, as these networks have the capability to store contextual information. Ng et al. [11] compared the results of a convolutional network adapted for the task of video classification to those of an LSTM-RNN which models the videos as an ordered sequence of frames. Donahue et al. [12] highlighted the importance of long-term recurrent convolutional networks and concluded that this model is integral to the field of computer vision for problems which have a sequential structure (time series data). Baccouche et al. [13] conducted action classification in soccer videos, for which they used an LSTM-RNN to train on the set of feature descriptors for each frame, learnt the temporal evolution of these descriptors, and classified each video as consisting of a particular action. Another study by Baccouche et al. [14] discussed human action recognition from videos using a two-step deep model, the first step being a 3D CNN for spatial and temporal feature construction, and the second step being an RNN responsible for the classification of the input video [15].

Transfer learning models can also be used for the task of video classification, as large pre-trained models have shown great success rates at object detection in the field of computer vision. In a two-stage neural network model, the first stage consists of using a pre-trained model which has already been trained on a large dataset (ImageNet in this case) to extract feature descriptors essential for object detection, followed by a recurrent neural network which classifies the video [5, 16]. Nazare et al. [17] used various pre-trained models to generate features from surveillance videos for the task of anomaly detection. Normalization was applied to the respective features obtained, and the pre-trained models were compared by classification accuracy. It was found that the Xception model along with z-score normalization gave competitive results. Certain video classification models which performed anomaly detection treated it as a binary classification problem, classifying the videos as violent/nonviolent [2] or accident/normal [16]. Motivated by the previous works, this paper approaches the problem statement using two algorithms, namely a CNN-LSTM model and a transfer learning model.

A convolutional neural network is a network architecture whose essence is the convolutional layer. It consists of several independent filters which convolve with the input image, transform the pixel values, and pass these new transformed values onto the next layer. This kind of model is suitable for tasks such as object detection, image classification, and image recognition. An LSTM is a variant of a recurrent neural network, with the added ability to remember the hidden state of the network over longer time periods compared to a vanilla RNN. However, an LSTM cannot take three-dimensional input. CNN Long Short-Term Memory networks are a neural network architecture well suited to computer vision problems with sequential data. These networks are both spatially and temporally deep. The input is taken by a convolutional layer, thus allowing for 3D input. The features obtained after convolution are mapped to a 1D vector which is fed as input to the LSTM (RNN) layer, thus overcoming the vanishing gradient problem. Gupta et al. [18, 19] processed facial images for the identification of important features. Yadav et al. [20] processed video images for the same. Transfer learning refers to making use of the learnings of a model (the weights of a network) which has been trained on data for a specific task and using the knowledge gained to make predictions for different tasks. This paper has chosen the pre-trained Xception model as the base model, followed by an LSTM layer which classifies the features obtained from the output of the Xception model.

3 System Design and Architecture

The proposed work implements an automated surveillance system that detects abnormal events in real time. It uses a Raspberry Pi as the processor, connected to a Pi camera, a buzzer, and a switch. The Pi camera takes real-time video input of the target environment, which is then converted to individual frames. The frames, combined in sequence, are given to the trained model for the prediction: normal or abnormal. If the prediction is normal, no action is performed; if the prediction is abnormal, the buzzer sounds to signal the authorities of the occurrence of abnormal activity. The buzzer can be turned off by pressing the switch, indicating that the alarming situation has been taken over. While the buzzer is ON, the system does not test on the available real-time input; the system process is paused. When the buzzer is OFF again, the system resumes its process of taking input and testing. The system architecture is outlined in Fig. 1, and a minimal sketch of this control loop is given below.
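The control loop just described can be sketched in a few lines of Python. This is a minimal sketch only: the GPIO pin numbers, the model filename, the single sigmoid-style output, and the 0.5 decision threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the capture-predict-alert loop described above.
# GPIO pins, model path, frame counts, and the 0.5 threshold are assumptions.
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tensorflow.keras.models import load_model

BUZZER_PIN, SWITCH_PIN = 17, 27            # assumed wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER_PIN, GPIO.OUT)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

model = load_model("cnn_lstm_anomaly.h5")  # e.g., the small sequential model
cap = cv2.VideoCapture(0)                  # Pi camera

while True:
    frames = []
    while len(frames) < 150:               # collect one 150-frame sample
        ok, frame = cap.read()
        if not ok:
            continue
        frames.append(cv2.resize(frame, (120, 120)) / 255.0)
    sample = np.expand_dims(np.array(frames), axis=0)  # shape (1,150,120,120,3)

    if model.predict(sample)[0][0] > 0.5:  # "abnormal" predicted (assumed output)
        GPIO.output(BUZZER_PIN, GPIO.HIGH)
        while GPIO.input(SWITCH_PIN):      # pause testing until switch is pressed
            pass
        GPIO.output(BUZZER_PIN, GPIO.LOW)  # alarm acknowledged, resume testing
```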

4 Methodology and Implementation

The working system can be divided into the following modules:

4.1 Data Processing

The data processing module processes the data so that it can be fed to the final model for training. The steps for processing are as follows:


Fig. 1 Flow diagram for final proposed work design

Video Selection. A Keras model takes an input of a specific, fixed size. The videos available in the UCF-Crime dataset are all of variable length. The two methods to rectify this are trimming or padding the videos. Padding has the drawback of classifying padded frames, hence trimming has been done. To reduce the loss due to trimming, all videos whose length falls in one of the chosen ranges are selected. This is done using the FFmpeg module ffprobe. The ranges of length chosen are: (i) 30-45 s, (ii) 60-75 s, (iii) 120-135 s. The videos falling in these ranges are each kept separately. The videos pertaining to all 13 anomaly classes are kept under one class: anomaly.

Frame Extraction. The selected videos are converted to frames at different FPS: (i) 30-45 s: 30 FPS, (ii) 60-75 s: 15 FPS, (iii) 120-135 s: 15 FPS. The reason behind this is to generate final data samples of equal length: (i) 30 s × 30 FPS = 900 frames, (ii) 60 s × 15 FPS = 900 frames, (iii) 120 s × 15 FPS = 1800 frames (900 + 900). Each video in the 120 s category is taken as a combination of two videos. FFmpeg extracts the frames from the videos of each category at the desired FPS. The frames of each video now reside in a folder and are divided into two classes: normal and anomaly.

NumPy Array Generation. The NumPy array generation process is different for the two models: the sequential model and the transfer learning model. For the sequential model, each video's frames are read, and the 900 middle frames are taken for a NumPy array. Initially, the idea was to have each data sample with a frame length of 900, but it was changed to 150 frames due to the Raspberry Pi's limitations. The image dimension is dropped from the original 240 × 320 to 120 × 120. Hence, the 900 frames are converted into six parts, each of 150-frame length. All the data gets saved in a single folder with its id as the filename; an id is just a number used to track the class. For the transfer learning model, the difference in the NumPy array generation is that the 150-frame data sample generated is passed to the pre-trained Xception model for feature generation. The model performs feature extraction for the individual frames in the array of 150 and provides an array of dimensions 150 × 4 × 4 × 2048 as the output. Hence the data sample dimensions and size differ for each model: 150 × 120 × 120 × 3 (6 MB) and 150 × 4 × 4 × 2048 (19 MB).
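A minimal sketch of the duration-based selection and frame extraction using FFmpeg's command-line tools is shown below; the dataset directory, output layout, and JPEG frame format are assumptions for illustration.

```python
# Sketch of the selection and frame-extraction steps using ffprobe/ffmpeg;
# directory names are placeholders, not the authors' layout.
import subprocess, glob, os

RANGES = [((30, 45), 30), ((60, 75), 15), ((120, 135), 15)]  # (seconds, fps)

def duration(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True)
    return float(out.stdout)

for video in glob.glob("UCF_Crime/*.mp4"):
    d = duration(video)
    for (lo, hi), fps in RANGES:
        if lo <= d <= hi:                      # keep only videos in a chosen range
            out_dir = os.path.splitext(video)[0] + "_frames"
            os.makedirs(out_dir, exist_ok=True)
            subprocess.run(["ffmpeg", "-i", video, "-vf", f"fps={fps}",
                            os.path.join(out_dir, "%05d.jpg")])
            break
```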

4.2 Model

The data is trained on two types of models.

Sequential Model. The model is a sequential Keras model with input size 150 × 120 × 120 × 3, following the CNN-LSTM architecture. The convolutional layers extract the spatial features of the input, with the total number of features trained on being 736,370 (including temporal features). The TimeDistributed layer is a wrapper for all the convolutional layers and applies them across the temporal dimension of the 150-frame, 120 × 120 input. The model has a sequence of convolutional layers: each set of two Conv2D layers is followed by a MaxPooling layer. Conv2D takes the number of kernels, i.e., the number of features to be learned, as input. The feature count steadily increases from 16 to 128, with a kernel size of 3 × 3 in all Conv2D layers except one layer with a 7 × 7 kernel. Every MaxPooling layer in the model uses a kernel size of 2 × 2 and a stride of 2 × 2. After all the convolutional layers, a Flatten layer converts the data into a 1D array. All of the mentioned layers are wrapped by TimeDistributed for temporal feature extraction. A dropout layer with rate 0.75 helps reduce overfitting by randomly dropping activations. The data then goes to an LSTM, where all the features are learned in a temporal manner. The final dense layer takes all the features learned so far and translates them to the number of classes. The model is then compiled with the following settings: (i) loss: mean squared error, (ii) optimizer: SGD. The callbacks ModelCheckpoint and EarlyStopping are also used: ModelCheckpoint saves the model with the best metric (accuracy) at each epoch and is used for retraining in case of a system failure, while EarlyStopping stops the training if the accuracy does not improve. The monitor parameter for both is given as acc (training data accuracy), and the metrics are loss and accuracy.

Transfer Learning Model. Transfer learning refers to making use of the learnings of a model which has been trained on data for a specific task and using the knowledge gained to make predictions for different tasks. The work in this paper uses the Xception model as the base model. In the transfer learning model, the frames are generated for videos (at 30 FPS for 30 s videos and at 15 FPS for 60 s videos). A 150-frame data sample is sent at each time step as input to the Xception base model. The Xception model consists of depthwise separable convolution layers with the ReLU activation function. Max pooling is used in order to reduce the complexity of the input frames. The transfer learning model gives as output the features extracted in the form of NumPy arrays of dimensions 150 × 4 × 4 × 2048, which are then fed to the LSTM model for training and classification.
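A Keras sketch consistent with this description is given below. The exact number of convolutional blocks, the position of the 7 × 7 layer, the LSTM width, and the two-unit softmax output are assumptions filled in for illustration, not the authors' exact configuration.

```python
# One possible realization of the described CNN-LSTM (assumptions noted above).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                     Flatten, Dropout, LSTM, Dense)

model = Sequential()
# single 7x7 layer assumed first; all other kernels are 3x3 as described
model.add(TimeDistributed(Conv2D(16, (7, 7), activation="relu", padding="same"),
                          input_shape=(150, 120, 120, 3)))
for filters in (16, 32, 64, 128):              # feature count grows 16 -> 128
    model.add(TimeDistributed(Conv2D(filters, (3, 3), activation="relu",
                                     padding="same")))
    model.add(TimeDistributed(Conv2D(filters, (3, 3), activation="relu",
                                     padding="same")))
    model.add(TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2))))
model.add(TimeDistributed(Flatten()))          # 1D vector per frame
model.add(Dropout(0.75))                       # heavy dropout against overfitting
model.add(LSTM(32))                            # temporal learning over 150 steps
model.add(Dense(2, activation="softmax"))      # normal vs. abnormal

model.compile(loss="mean_squared_error", optimizer="sgd", metrics=["accuracy"])
```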


4.3 Training

The model is to be trained on data of total size 49 GB (sequential) and 115 GB (transfer learning). To remove the overhead of loading all the data into RAM for training, Keras's model.fit_generator is used. It loads data samples one batch at a time and takes a data generator, whose job is to provide the data. Keras provides its own ImageDataGenerator, but the input to be given here is 4D (video). Hence, a customized data generator is used which defines the functions needed by the Keras model: __init__(), __len__(), __getitem__(), on_epoch_end(), and __data_generation(). Each function implements a task required by the model during training: __len__() finds the number of batches to be loaded based on the total number of data samples and the batch size; __data_generation() loads and returns the specified samples; on_epoch_end() is called at the end of each epoch to update and shuffle the sample indices. The models are trained on Google Colab and the Google Cloud Platform (both of which are GPU enabled).
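A minimal custom generator of the kind described, built on keras.utils.Sequence, might look as follows; the file layout, sample ids, and batch size are assumptions.

```python
# Minimal sketch of the custom data generator; file layout is assumed.
import numpy as np
from tensorflow.keras.utils import Sequence

class VideoDataGenerator(Sequence):
    def __init__(self, ids, labels, batch_size=4, data_dir="data"):
        self.ids, self.labels = ids, labels        # sample ids and label lookup
        self.batch_size, self.data_dir = batch_size, data_dir
        self.on_epoch_end()

    def __len__(self):                             # number of batches per epoch
        return len(self.ids) // self.batch_size

    def __getitem__(self, index):                  # load one batch of .npy samples
        batch = self.indices[index * self.batch_size:(index + 1) * self.batch_size]
        X = np.stack([np.load(f"{self.data_dir}/{self.ids[i]}.npy") for i in batch])
        y = np.array([self.labels[self.ids[i]] for i in batch])
        return X, y

    def on_epoch_end(self):                        # reshuffle sample indices
        self.indices = np.random.permutation(len(self.ids))

# usage: model.fit_generator(VideoDataGenerator(train_ids, label_map), epochs=15)
```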

4.4 Testing

The models generated with different data sizes are deployed on the Raspberry Pi and tested on real-time data. The prototype is shown in Fig. 2. The system aims to achieve high accuracy, and the steps initially taken toward this were: (i) increase the image size to 240 × 240, (ii) increase the number of frames per video to 450, and (iii) perform noise removal. Upon testing the model, these specifications proved inefficient for the final system on the Raspberry Pi. Table 1 lists the different specifications tried and their inefficiencies. After testing, the specifications finalized are: image size 120 × 120, number of frames 150, and 5 FPS. The final data size is 150 × 120 × 120 × 3.

5 Results and Discussion

The sequential model and the transfer learning model have been trained separately on Google Colab and GCP, respectively. The training and testing accuracies for the sequential model are highlighted in Table 2. The model with 150 frames as the video length has better accuracy, and hence it has been used as the final model of the system. The transfer learning model is trained on only one input size, as larger inputs led to a much larger dataset. The input and output specifications are as shown in Table 3. The CNN-LSTM model is shown to have a higher accuracy than the transfer learning model. Moreover, the transfer learning model file could not be loaded onto the Raspberry Pi (RAM: 1 GB), as its size was 270 MB compared to the 5.8 MB of the sequential model file. Hence the sequential model with 150 frames has been chosen for the final system.


Fig. 2 Prototype with proposed system model file

Table 1 Failed specifications details

Specifications | Inefficiency
Data size: 900 × 80 × 80 | Overfitting
Data size: 450 × 240 × 240 | Not enough computational power (Colab hangs)
Data size: 450 × 180 × 180; 450 × 160 × 160; 450 × 120 × 120; 300 × 120 × 120 | Raspberry Pi not able to predict (process terminates after showing not enough power)

Table 2 Sequential model output for two specifications

No. of frames | 120 frames: Accuracy | 120 frames: Loss | 150 frames: Accuracy | 150 frames: Loss
Training | 95.49 | 3.69 | 99.6 | 0.37
Testing | 78.4 | 21.5 | 88.8 | 11.2

Table 3 Transfer learning model specifications

Specifications | Value
Input size | 150 × 120 × 120 × 3
Dataset size | 115 GB
No. of data samples | 5879
Epochs | 15
Training accuracy | 73 (approx.)
Testing accuracy | 75 (approx.)

6 Conclusion and Future Work

In this paper, we assessed the performance of two different deep learning models for the purpose of abnormal event detection in video surveillance footage. It was found that the CNN-LSTM model performed better than the Transfer Learning model. The results presented in this paper can be further improved by focusing on one specific anomaly instead of a collection of anomaly videos. Also, training on a larger dataset can be achieved by using higher computational power, which can result in a higher model accuracy. The design constraint of contextual anomalies can also be investigated further. This work can be extended for use in applications which use CCTV cameras as a surveillance system.

References

1. P. Nsubuga, M.E. White, S.B. Thacker et al., Public health surveillance: a tool for targeting and monitoring interventions, Chap. 53, in Disease Control Priorities in Developing Countries, 2nd edn., ed. by D.T. Jamison, J.G. Breman, A.R. Measham et al. (The International Bank for Reconstruction and Development/The World Bank, Washington (DC), 2006)
2. https://www.statista.com/topics/2646/security-and-surveillance-technolog/dossierSummarychapter1
3. Y. Zhu, N. Nayak, A. Roy-Chowdhury, Context-aware activity recognition and anomaly detection in video. IEEE J. Sel. Top. Signal Process. 7, 91–101 (2013). https://doi.org/10.1109/JSTSP.2012.2234722
4. Y. Ke, R. Sukthankar, M. Hebert, Event detection in crowded videos, in 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro (2007), pp. 1–8. https://doi.org/10.1109/ICCV.2007.4409011
5. W. Ullah, A. Ullah, I.U. Haq, K. Muhammad, M. Sajjad, S.W. Baik, CNN features with bi-directional LSTM for real-time anomaly detection in surveillance networks. Multimed. Tools Appl. (2020). https://doi.org/10.1007/s11042-020-09406-3
6. R. Hou, C. Chen, M. Shah, Tube convolutional neural network (T-CNN) for action detection in videos, in 2017 IEEE International Conference on Computer Vision (ICCV), Venice (2017), pp. 5823–5832. https://doi.org/10.1109/ICCV.2017.620
7. R. Girshick, Fast R-CNN (2015). https://doi.org/10.1109/ICCV.2015.169
8. S. Ji, W. Xu, M. Yang, K. Yu, 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 221–231 (2013). https://doi.org/10.1109/TPAMI.2012.59


9. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, L. Fei-Fei, Large-scale video classification with convolutional neural networks, in 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH (2014), pp. 1725–1732. https://doi.org/10.1109/CVPR.2014.223
10. K. Simonyan, A. Zisserman, Two-stream convolutional networks for action recognition in videos. Adv. Neural Inf. Process. Syst. 1 (2014)
11. J.Y.-H. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, G. Toderici, Beyond short snippets: deep networks for video classification, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA (2015), pp. 4694–4702. https://doi.org/10.1109/CVPR.2015.7299101
12. J. Donahue et al., Long-term recurrent convolutional networks for visual recognition and description. IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 677–691 (2017). https://doi.org/10.1109/TPAMI.2016.2599174
13. M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, A. Baskurt, Action classification in soccer videos with long short-term memory recurrent neural networks, in Artificial Neural Networks—ICANN 2010, ed. by K. Diamantaras, W. Duch, L.S. Iliadis. Lecture Notes in Computer Science, vol. 6353 (Springer, Berlin, Heidelberg, 2010)
14. M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, A. Baskurt, Sequential deep learning for human action recognition, in Human Behavior Understanding. HBU 2011, ed. by A.A. Salah, B. Lepri. Lecture Notes in Computer Science, vol. 7065 (Springer, Berlin, Heidelberg, 2011)
15. L. Zhang, X. Xiang, Video event classification based on two-stage neural network. Multimed. Tools Appl. (2020)
16. W. Li, V. Mahadevan, N. Vasconcelos, Anomaly detection and localization in crowded scenes. IEEE Trans. Pattern Anal. Mach. Intell. 36(1), 18–32 (2014). https://doi.org/10.1109/TPAMI.2013.111
17. T. Nazare, R. Mello, M. Ponti, Are pre-trained CNNs good feature extractors for anomaly detection in surveillance videos? (2018)
18. R. Gupta, S. Kumar, P. Yadav, S. Shrivastava, Identification of age, gender, & race SMT (scare, marks, tattoos) from unconstrained facial images using statistical techniques, in 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, Malaysia, July 2018 (IEEE, 2018), pp. 1–8
19. R. Gupta, P. Yadav, S. Kumar, Race identification from facial images using statistical techniques. J. Stat. Manag. Syst. 20(4), 723–730 (2017)
20. P. Yadav, R. Gupta, S. Kumar, Video image retrieval method using dither-based block truncation code with hybrid features of color and shape, in Engineering Vibration, Communication and Information Processing (Springer, Singapore, 2019), pp. 339–348

A Robust Fused Descriptor Under Unconstrained Conditions

Shekhar Karanwal and Sanjay Roka

Computer Science and Engineering Department, Graphic Era University (Deemed), Dehradun, India

Abstract This work presents the local difference binary pattern (LDBP) and local neighborhood difference binary pattern (LNDBP) descriptors for face analysis. In LDBP, the difference is computed between A and B, where A is the difference between the current neighbor pixel and the adjacent neighbor pixel in the clockwise direction, and B is the difference between the current neighbor pixel and the center pixel. The same concept is applied at all neighborhood positions. In LNDBP, the difference is determined between neighbor pixels at a distance of +2 in the clockwise direction. Differences with a value greater than or equal to 0 are allocated the label 1, else 0. Encoding all patterns yields the respective transformed image. From the respective image, the subregional (3 × 3) information is extracted. The fused features from the corresponding subregions form the feature dimension of the respective descriptor. Further, the LDBP and LNDBP dimensions are integrated to make the discriminative feature LDBP + LNDBP. PCA is applied next for compaction, and classification matching is done by SVMs and NN.

Keywords Local difference binary pattern · Local neighborhood difference binary pattern · PCA · SVMs · NN

1 Introduction

In recent years, local descriptors have earned notable attention due to their robustness under pose and illumination variations. Among the several local descriptors launched in the literature, LBP [1] is one of the best-performing descriptors. In LBP, there is mutual coordination between the neighbors and the center pixel. When LBP is fused with other local descriptors such as local phase quantization (LPQ) [2] and Gabor [3], there is a certain enhancement in recognition rate. Apart from this, several variants of LBP have been launched in the literature for different applications.


Heikkila et al. [4] established CS-LBP for texture. In CS-LBP, center-symmetric pixels are compared in conjunction with an additional introduced parameter, the threshold. If the difference between the symmetric pixels is greater than the designated threshold, then the allocated label is 1, else 0. Liao et al. [5] discovered multiscale block LBP (MB-LBP) for face analysis. In MB-LBP, the local statistic, i.e., the mean computed from the neighbor sub-blocks, is compared with the center sub-block mean; the rest is the same as in LBP (for comparison). Verma and Raman [6] launched the LTriDP descriptor for texture and face analysis. For every neighbor, the pixels arranged in 3 directions (anticlockwise, clockwise, and center) are used for the production of a tri-directional pattern. To expand the discriminative power, magnitude features are embodied with the tri-directional pattern features. Verma and Raman [7] instigated LNDP for texture and natural images. By transforming the complementary relationship of all neighbor pixels, the introduced operator makes the binary pattern. To enhance robustness, the LBP features are incorporated with LNDP, known as LBP + LNDP. Luo et al. [8] launched LLDP for palmprint recognition. LLDP encodes the local neighborhood structure by inspecting the directional line information, so the line responses are computed in the neighborhood in 12 directions by utilizing Gabor filters or the MFRAT. Karanwal [9] provides a comparative study of 14 descriptors; out of all of them, it is CLBP which achieves remarkable outcomes, with numerous datasets utilized for the comparison. Karanwal and Diwakar [10] launched a novel descriptor for face analysis, the so-called NCDB-LBP. In that method, 4 labeled functions are presented for capturing discriminant features from a 3 × 3 patch. The launched method brings extraordinary outcomes on several datasets. Karanwal and Diwakar [11] discovered OD-LBP for uncontrolled conditions. In OD-LBP, the 3 gray-level differences calculated for every orthogonal position are used for the production of the entire OD-LBP feature length. The introduced method surpasses several other operators. Tuncer et al. [12] discovered the logically extended LGS (LE-LGS) for EEG signal analysis. In LE-LGS, a bitwise OR operation is conducted between LGS and vertical LGS (VLGS); after the OR operation, the decimal code is achieved by weight allocation.

Most of the previous descriptors are based on the relation between the neighborhoods and the center pixel, which is the prime limitation of these descriptors. The proposed descriptors, however, adopt totally different methodologies compared to the previous ones. Specifically, this work launches the LDBP and LNDBP descriptors. For the former descriptor, the difference is calculated between A and B, where A is the deviation between the current neighbor and the adjacent neighbor in the clockwise direction, and B is the deviation between the current neighbor and the center pixel. A similar methodology is adopted for all neighborhood positions. In LNDBP, the deviation is determined between neighbors at a +2 distance in the clockwise direction. Encoding all patterns (after thresholding) yields the respective transformed image. From the respective image, the subregion-wise (3 × 3) information is extracted. The fused features from the corresponding subregions form the feature dimension of the respective descriptor. Further, the LDBP and LNDBP dimensions are integrated to make the discriminative feature LDBP + LNDBP. PCA [13, 14] is applied next for compaction, and matching is done by SVMs [15, 16] and NN [17, 18]; these classifiers are utilized for performing the classification. LDBP + LNDBP pulls off better results than either descriptor alone and than many approaches from the literature on the ORL [19] and GT [20] datasets. The rest of the article is organized as follows: Sect. 2 presents the novel descriptors, result evaluation is given in Sect. 3, and Sect. 4 provides the conclusion.

2 The Proposed Descriptors

2.1 Local Difference Binary Pattern (LDBP)

In LDBP, the difference is computed between A and B, where A is the difference between the current neighbor pixel and the adjacent neighbor pixel in the clockwise direction, and B is the difference between the current neighbor pixel and the center pixel. The same concept is applied at all neighbor positions. Differences with a value greater than or equal to 0 are allocated the label 1, else 0. Encoding all patterns yields the LDBP image. The LDBP image is broken into 3 × 3 subregions for histogram computation. The merged histograms form the LDBP dimension; one subregion size is 256, so LDBP constructs a complete size of 2304. Figure 1 shows the presentation of LDBP. The LDBP equation is given as

\mathrm{LDBP}_{P,R}(x_c) = \sum_{p=0}^{P-1} f\big((V_p - V_{(p+1) \bmod 8}) - (V_p - V_c)\big) \, 2^p \qquad (1)

where f(x) = 1 if x ≥ 0 and f(x) = 0 if x < 0, for p = 0, ..., P − 1.

2.2 Local Neighborhood Difference Binary Pattern (LNDBP)

In LNDBP, the difference is determined between neighbor pixels at a distance of +2 in the clockwise direction. Differences with a value greater than or equal to 0 are allocated the label 1, else 0. Encoding all patterns yields the LNDBP image. The LNDBP image is broken into 3 × 3 subregions for histogram computation. The merged histograms form the LNDBP dimension; one subregion size is 256, so LNDBP constructs a complete size of 2304. Figure 2 shows the presentation of LNDBP. The LNDBP equation is given as


Fig. 1 LDBP demonstration

Fig. 2 LNDBP demonstration

\mathrm{LNDBP}_{P,R}(x_c) = \sum_{p=0}^{P-1} f\big(V_p - V_{(p+2) \bmod 8}\big) \, 2^p \qquad (2)

where f(x) is defined as in Eq. (1).
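As a concrete illustration, Eqs. (1) and (2) can be transcribed directly into NumPy for a single 3 × 3 patch; the clockwise ordering of the eight neighbors chosen below is an assumption.

```python
import numpy as np

# clockwise neighbor coordinates around the center of a 3x3 patch (assumed order)
NEIGHBORS = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def ldbp_code(patch):
    """LDBP code of the center pixel of a 3x3 patch, per Eq. (1)."""
    vc = float(patch[1, 1])
    v = np.array([patch[r, c] for r, c in NEIGHBORS], dtype=float)
    code = 0
    for p in range(8):
        diff = (v[p] - v[(p + 1) % 8]) - (v[p] - vc)   # A - B in Eq. (1)
        code += int(diff >= 0) << p                    # threshold f(.) and weight 2^p
    return code

def lndbp_code(patch):
    """LNDBP code of the center pixel of a 3x3 patch, per Eq. (2)."""
    v = np.array([patch[r, c] for r, c in NEIGHBORS], dtype=float)
    return sum(int(v[p] - v[(p + 2) % 8] >= 0) << p for p in range(8))
```

Sliding these codes over the whole image yields the transformed LDBP/LNDBP images from which the 3 × 3 subregional histograms (256 bins each, 2304 in total) are computed.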



 1x ≥0 , for p = (0, …, P − 1). 0x |h2 |, user 2(U 2 ) has better channel state than user 3 (U 3 ), i.e. |h2 | > |h3 |. SIC receiver at downlink NOMA is shown in Fig. 4. BS allocates appropriate power coefficient to the messages of all users [8].


Fig. 3 Three-user downlink NOMA network

Fig. 4 SIC receiver at downlink NOMA

At the BS, the messages to be sent to the users are combined using the superposition coding principle. The resulting signal is given by

x = \sqrt{a_1 P}\, x_1 + \sqrt{a_2 P}\, x_2 + \sqrt{a_3 P}\, x_3 \qquad (1)

where x is the superposition-coded transmit signal; x_3, x_2, x_1 are the messages of U_3, U_2 and U_1, respectively; and a_3, a_2, a_1 are the power coefficients allocated to U_3, U_2 and U_1, respectively, with

a_1 + a_2 + a_3 = 1 \qquad (2)

P is the total power at the BS, and the power assigned to each user U_i is P_i = a_i P. According to the superposition principle of NOMA, a_3 > a_2 > a_1: a user with a poor channel condition is allocated a larger power coefficient than a user with a good channel condition (|h_1| > |h_2| > |h_3|) [1].

NOMA decoding at User 3 (far user). User 3 decodes its own signal from the received message, treating the messages of U_1 and U_2 as interference. The received signal at user 3 is given by

y_3 = h_3 \sqrt{a_1 P}\, x_1 + h_3 \sqrt{a_2 P}\, x_2 + h_3 \sqrt{a_3 P}\, x_3 + w_3

where w_3 is AWGN with zero mean and variance \sigma^2. The data rate for U_3 is

R_{u3} = \log_2(1 + \gamma_3) = \log_2\!\left(1 + \frac{|h_3|^2 P a_3}{|h_3|^2 P a_2 + |h_3|^2 P a_1 + \sigma^2}\right) \qquad (3)

NOMA decoding at User 2. U_2 performs SIC to remove U_3's signal. The achievable data rate for user 2 is

R_{u2} = \log_2(1 + \gamma_2) = \log_2\!\left(1 + \frac{|h_2|^2 P a_2}{|h_2|^2 P a_1 + \sigma^2}\right) \qquad (4)

NOMA decoding at User 1 (near user). The dominating terms, U_3's and U_2's signals, are removed from the received signal using the SIC process at user 1. The achievable data rate for U_1 is

R_{u1} = \log_2(1 + \gamma_1) = \log_2\!\left(1 + \frac{|h_1|^2 P a_1}{\sigma^2}\right) \qquad (5)

From Eq. 8 of [4], at high SNR, the sum rate is

R_{\mathrm{NOMA}} \approx \log_2\!\left(\frac{|h_1|^2 P}{\sigma^2}\right), \quad \text{since } |h_1|^2 > |h_2|^2 > |h_3|^2 \qquad (6)

OMA for Downlink. Consider a three-user downlink OMA network in which all three users share the bandwidth and power equally. For high SNR, the sum rate is

R_{\mathrm{OMA}} \approx \frac{1}{3} \log_2\!\left(\frac{|h_1|^2 P}{\sigma^2}\right), \quad \text{since } |h_1|^2 > |h_2|^2 > |h_3|^2 \qquad (7)

Comparing Eqs. (6) and (7), it can easily be seen that

R_{\mathrm{NOMA}} = \log_2\!\left(\frac{|h_1|^2 P}{\sigma^2}\right) \gg \frac{1}{3} \log_2\!\left(\frac{|h_1|^2 P}{\sigma^2}\right) = R_{\mathrm{OMA}} \qquad (8)

Thus, the sum rate capacity of NOMA is much greater than that of OMA. At high SNR, NOMA outperforms OMA by offering higher capacity.
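The downlink rate expressions in Eqs. (3)-(7) can be verified numerically. In the short Python sketch below, the transmit power, noise power, channel gains, and power coefficients are illustrative assumptions only.

```python
# Numerical sketch of Eqs. (3)-(7); all parameter values are assumptions.
import numpy as np

P, sigma2 = 1.0, 1e-3                 # assumed transmit power and noise power
h = np.array([1.0, 0.5, 0.1])         # |h1| > |h2| > |h3|
a = np.array([0.1, 0.3, 0.6])         # a1 < a2 < a3, with a1 + a2 + a3 = 1
g = np.abs(h) ** 2

r3 = np.log2(1 + g[2]*P*a[2] / (g[2]*P*a[1] + g[2]*P*a[0] + sigma2))  # Eq. (3)
r2 = np.log2(1 + g[1]*P*a[1] / (g[1]*P*a[0] + sigma2))                # Eq. (4)
r1 = np.log2(1 + g[0]*P*a[0] / sigma2)                                # Eq. (5)

r_noma = r1 + r2 + r3
# exact OMA sum rate (before the high-SNR approximation of Eq. (7)):
r_oma = sum(np.log2(1 + gi * P / sigma2) for gi in g) / 3
print(r_noma, r_oma)                  # NOMA sum rate exceeds OMA at high SNR
```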

2.2 Uplink NOMA

The BS receives a signal from each user in the uplink network, and the SIC process is performed at the BS end. If the uplink and downlink channels are assumed to be reciprocal, all users in the network receive their power allocation coefficients from the BS [9]. Figure 5 represents the three-user uplink NOMA network. The signal received by the BS in the uplink network can be represented as

r = h_1 \sqrt{a_1 P}\, x_1 + h_2 \sqrt{a_2 P}\, x_2 + h_3 \sqrt{a_3 P}\, x_3 + w

where r is the signal received at the BS. The base station decodes the messages of all users by taking the power coefficient allocated to each user as a reference. For high SNR, using Eq. 13 of [4], the sum rate of uplink NOMA becomes

Fig. 5 NOMA 3-user uplink network


R_{\mathrm{NOMA}} \approx \log_2\!\left(\frac{P(|h_1|^2 + |h_2|^2 + |h_3|^2)}{\sigma^2}\right), \quad \text{since } |h_1|^2 > |h_2|^2 > |h_3|^2 \qquad (9)

OMA for Uplink. Consider a three-user uplink network. For high SNR, using Eq. 17 of [4], the sum rate of uplink OMA becomes

R_{\mathrm{OMA}} \approx \frac{1}{3}\left[\log_2\!\left(\frac{|h_1|^2 P}{\sigma^2}\right) + \log_2\!\left(\frac{|h_2|^2 P}{\sigma^2}\right) + \log_2\!\left(\frac{|h_3|^2 P}{\sigma^2}\right)\right] \qquad (10)

Comparing Eqs. (9) and (10), we can conclude that

R_{\mathrm{NOMA}} \geq R_{\mathrm{OMA}} \qquad (11)

Thus, the sum rate of NOMA is much higher than the sum rate of OMA.

2.3 Spectral Efficiency and Energy Efficiency

Spectral efficiency (SE) is the ratio of the sum rate to the bandwidth used [10]. Energy efficiency (EE), given in Eq. (12), is the ratio of the sum rate to the total power of the BS, where the total power consumption at the transmitting side is the sum of the signal power and the circuit power:

\mathrm{SE} = \frac{R_T}{W} \ \text{(bps/Hz)}, \qquad \mathrm{EE} = \frac{R_T}{P_T} = \mathrm{SE} \cdot \frac{W}{P_T} \ \text{(bits/J)}, \qquad P_T = P_s + P_{\mathrm{static}} \qquad (12)

where R_T is the sum rate of the users, W the bandwidth used, P_T the total power used by the BS, P_s the total signal power, and P_static the power consumed by the circuitry.

For two-user downlink and uplink NOMA, the sum rates are the same as discussed in the previous section. The corresponding sum rates for both downlink and uplink are substituted into the SE and EE expressions. According to Shannon's theory, the relationship between EE and SE is monotonic when the circuit power consumption is not considered: a higher SE results in a lower EE. If we consider the circuit power, EE increases in the lower SE region and decreases in the higher SE region; the peak of the curve represents the maximum energy efficiency of the system. For a fixed P_T, the EE-SE relationship is linear with positive slope W/P_T, so EE increases simultaneously with a corresponding rise in SE.
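The EE-SE behavior of Eq. (12) can be reproduced with a short numerical sweep; the bandwidth, static power, channel gain, and noise figures below are illustrative assumptions, not the simulation parameters of Sect. 3.

```python
# Sketch of the EE-SE relation of Eq. (12) with circuit power included;
# all parameter values are assumptions for illustration.
import numpy as np

W, P_static = 7e6, 60.0                    # Hz, Watts (assumed)
P_signal = np.linspace(0.1, 100.0, 500)    # sweep the transmit signal power
g, sigma2 = 1e-13, 1e-15                   # assumed channel gain and noise power

rate = W * np.log2(1 + g * P_signal / sigma2)   # Shannon sum rate R_T (bps)
se = rate / W                                   # SE in bps/Hz
ee = rate / (P_signal + P_static)               # EE in bits/J, total power P_T

print(se[np.argmax(ee)])   # SE at which EE peaks: the maximum of the EE-SE curve
```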

3 Results and Discussion

3.1 NOMA Downlink and NOMA Uplink Sum Rate Comparison

We have considered a three-user network and analysed the sum rate capacity of both downlink and uplink NOMA. Consider a downlink channel. The power allocation coefficients allotted to the three users are a_3 = 0.1, a_2 = 0.2, a_1 = 0.7, and the system bandwidth is taken as W = 10^10 Hz. The NOMA system is compared with the OMA system: Fig. 6 shows that NOMA attains a higher sum rate than OMA. Consider an uplink channel. The power allocation coefficients assigned are a_1 = 0.75, a_2 = 0.15, a_3 = 0.1, and the system bandwidth is taken as W = 10^9 Hz. The NOMA system is compared with the OMA system. As illustrated in Fig. 7, NOMA has a better rate performance than OMA for the uplink. It is clearly shown that NOMA outperforms OMA for both downlink and uplink.

Fig. 6 Sum rate of NOMA in downlink network, N = 3 users


Fig. 7 Sum rate of NOMA in uplink network, N = 3 users

3.2 NOMA Spectral Efficiency and Energy Efficiency

Here, the EE and SE of NOMA are compared with OMA.

SE-EE comparison for downlink NOMA: The system bandwidth is chosen as W = 7 MHz. The channel gains of U_1 and U_2 are assumed as g_1^2 = −130 dB and g_2^2 = −140 dB, respectively, and the noise density is assumed to be −150 dBW/Hz. Let the static power consumption at the BS be P_static = 60 W. The obtained EE-SE curves for the downlink are as shown in Fig. 8.

SE-EE comparison for uplink NOMA: The system bandwidth is W = 12 MHz. The channel gains of U_1 and U_2 are assigned as g_1^2 = −80 dB and g_2^2 = −100 dB, respectively.

Fig. 8 SE-EE trade-off curves for downlink NOMA and OMA


Fig. 9 SE-EE trade-off curves for uplink NOMA and OMA

The noise density is assumed as −140 dBW/Hz. Let the static power consumption at the BS be P_static = 200 W. The obtained EE-SE curves for the uplink are as shown in Fig. 9. NOMA attains higher EE and SE than the OMA system for both uplink and downlink networks, as shown in Figs. 8 and 9. At the maximum points of the curves, both systems achieve their respective maximum EE. NOMA manifestly outperforms OMA at the maximum point and beyond, for both EE and SE.

4 Conclusion

This paper presents a brief overview of fundamental downlink and uplink NOMA performance. Successive interference cancellation is demonstrated by considering three-user networks. The sum rate of the system is analysed and compared for NOMA and OMA networks, and the EE-SE trade-off curves for NOMA and OMA are analysed; EE simultaneously increases with a corresponding rise in SE. Simulation results confirm that NOMA exhibits higher performance than OMA in terms of the sum rate of the network, energy efficiency, and spectral efficiency.

References

1. M. Liaqat, K.A. Noordin, T. Abdul Latef, K. Dimyati, Power-domain non-orthogonal multiple access (PD-NOMA) in cooperative networks: an overview. Wirel. Netw. 26(1), 181–203 (2020)
2. L. Dai, B. Wang, Z. Ding, Z. Wang, S. Chen, L. Hanzo, A survey of non-orthogonal multiple access for 5G. IEEE Commun. Surv. Tutorials 20(3), 2294–2323 (2018)


3. Y. Cai, Z. Qin, F. Cui, G.Y. Li, J.A. McCann, Modulation and multiple access for 5G networks. IEEE Commun. Surv. Tutorials 20(1), 629–646 (2018)
4. M. Aldababsa, M. Toka, S. Gökçeli, G.G.K. Kurt, O.L. Kucur, A tutorial on non-orthogonal multiple access for 5G and beyond (2018)
5. B. Makki, K. Chitti, A. Behravan, M.-S. Alouini, A survey of NOMA: current status and open research challenges. IEEE Open J. Commun. Soc. 1, 179–189 (2020)
6. L. Dai, B. Wang, Y. Yuan, S. Han, I. Chih-lin, Z. Wang, Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends. IEEE Commun. Mag. 53(9), 74–81 (2015)
7. Y. Saito, A. Benjebbour, Y. Kishiyama, T. Nakamura, System-level performance evaluation of downlink non-orthogonal multiple access (NOMA), in IEEE International Symposium on Personal, Indoor and Mobile Radio Communication, PIMRC (2013), pp. 611–615
8. Z. Ding, Z. Yang, P. Fan, H.V. Poor, On the performance of non-orthogonal multiple access in 5G systems with randomly deployed users. IEEE Signal Process. Lett. 21(12), 1501–1505 (2014)
9. F. Al Rabee, K. Davaslioglu, R. Gitlin, The optimum received power levels of uplink non-orthogonal multiple access (NOMA) signals, in 2017 IEEE 18th Wireless and Microwave Technology Conference, WAMICON (2017), pp. 2–5
10. P. Ruano et al., We are IntechOpen, the world's leading publisher of open access books built by scientists, for scientists, in Intech Tourism (2016), p. 13

Pattern Matching Algorithms: A Survey

Rachana Mehta and Smita Chormunge

Department of Computer Science and Engineering, Institute of Technology, Nirma University, Ahmedabad, India
e-mail: [email protected]

Abstract In an enormous amount of factual data, it is necessary to find the pieces of information that can lead us to meaningful work. One domain that does this work is pattern matching. The task of pattern matching is to tell whether a particular pattern exists or not in given data. Pattern matching algorithms fall into two categories, single and multiple, according to the number of patterns they can find, of which the latter has wider applicability than the former. Such algorithms not only signal all the occurrences of particular patterns but are also useful in their analysis, which may lead to significant information. These algorithms have wide applicability due to the abundance of data over the Internet. The paper emphasizes a detailed study of the algorithms that can find more than one pattern, followed by a comparative analysis.

Keywords Pattern matching · Single pattern · Multi-pattern algorithm

1 Introduction

The advancement of the Internet in recent times is huge and inevitable, spanning everything from searching for something to maintaining remote resources. This advancement has led to a humongous amount of data, in which finding needed information is very difficult. Pattern matching is a technique which allows us to discover a pattern in provided data or text [1]. Such techniques can search for one or more patterns and can find single or multiple occurrences of said patterns. They have wide applicability in varied domains, some of which are detecting plagiarism, retrieving information, high-speed networks, DNA analysis, medical sciences, and bioinformatics, to name a few [2–6].

Pattern matching algorithms are categorized primarily into two types based on matching capacity. One type is the algorithms that can work on a single pattern, while the others are capable of matching one or more patterns. The task of a single pattern matching technique is to find a pattern P of length p in data D of length x, which is formed from a set of alphabets. Such algorithms return every occurrence of pattern P in a given dataset. The algorithms that find a single pattern are Boyer–Moore (BM) [1], Brute Force (BF), Knuth–Morris–Pratt (KMP) [7], and Rabin–Karp (RK) [8]. Brute Force is the naive and simplest pattern matching algorithm. Rabin–Karp works on the concept of a hashing function [8]. KMP is the first linear-time matching algorithm; it works on the concept of a prefix (failure) function and avoids the redundant comparisons that Brute Force performs [7]. Boyer–Moore works on the concept of a pre-computed shift table and suffix rules [1]. KMP works in the left-to-right direction, and Boyer–Moore works in the reverse direction [1]. A single pattern matching algorithm is useful in a scenario where one particular pattern is to be found, but it fails when more than one pattern is to be found and then analyzed [9, 10]. To address this, multi-pattern matching algorithms and techniques are needed. They can find more than one pattern in a single run of the algorithm. They are more realistic and have more practical usage than their counterparts, being extensions of the single pattern algorithms. Looking into their wide usage, this paper discusses five such algorithms: Aho–Corasick (AC), Commentz-Walter (CW), Wu-Manber (WM), Zhu–Takaoka (ZT), and the bit-parallel (BP) algorithm [7, 8, 11–13].

The rest of the paper discusses the multi-pattern matching algorithms in detail, followed by a comparative analysis with single pattern matching algorithms, the conclusion, and the futuristic scope.

2 Multi-pattern Matching Algorithms

2.1 Aho–Corasick Algorithm

Among the multi-pattern matching algorithms, the Aho–Corasick algorithm is explored first; it follows a two-step process. In the first stage, a finite state machine (FSM) is formed for the set of patterns (keywords) which are to be found. In the second stage, the text input is provided to the constructed finite state machine; based on the input, the machine reports the locations where a match for a particular keyword is found [1]. The base for the Aho–Corasick algorithm is the KMP algorithm along with finite state machines [1, 11]. In the first processing stage, the FSM is formed. Consider K = {k[1], k[2], …, k[n]}, a set of n keywords, and an input string "s"; the task is to find the substrings of "s" that are keywords in K. This is done using three functions:

(a) Goto Function: A finite state machine is constructed for the keywords of K. This function maps a pair of a finite state and an input letter either to a next state of the automaton or to a fail message, if the letter is not present in the automaton.

Goto Function: A finite state machine is constructed for the keywords of K. This function will map a input letter and a finite state, and the mapping may lead to a next state of automata or a message fail, if the letter is not present in automata.

Pattern Matching Algorithms: A Survey

(b)

(c)

399

Failure Function: While pattern finding, the failure function is used when a goto function leads to a failure message. This function will help to lead to a state to go when a goto function shows a fail message. This usually occurs when a substring is not found or when a partial substring that matches multiple keywords is found. It may also lead to the start state of finite automata, if no match is found. Output Function: It defines the state of a finite automata that signifies the occurrence match of a particular keyword. As multiple keywords are to be verified, there will be multiple output states.

In the next stage, the text input in which one wants to search the presence of keywords gets fed to the constructed finite automata. It will take a character-bycharacter input and give it to the finite automata. Based on the output function, the presence of the keyword can be known. The major advantage of this algorithm is that: Every letter is examined only one time. The disadvantage is that: Space complexity to store the automata transition rules and time complexity in the first stage is directly related to the keywords length. The time complexity of the first stage is O(p|Σ|), here Σ is the set of alphabets, and p represents the total length of the patterns to be found. The time complexity for the second stage depends on the length l of the input string, number of occurrences of a pattern (m) and is given as O(l|Σ|+m) [11].

2.2 Commentz-Walter Algorithm

The Aho–Corasick algorithm finds patterns in a given data or document in time that depends linearly on the length of the data that is passed. In the scenario where a single pattern is to be matched, Boyer–Moore works with linear pre-processing time and finds patterns faster on average [12]. The second algorithm covered here for multi-pattern finding is Commentz-Walter (CW). This algorithm combines Aho–Corasick and Boyer–Moore: the former is used in the first (pre-processing) stage and the latter in the second (searching) stage.

The first stage of the Commentz-Walter algorithm does three tasks: the formation of a reversed state machine (trie), then an output function out, and then a shift table. The trie is equivalent to the FSM of the Aho–Corasick algorithm, but in the trie the patterns are considered in reversed order. To each node of the trie, out is added; a state of the trie has output in the out function if a specific pattern begins from the given node. The path followed in the pre-processing stage is as given: path(h) is the word formed by the characters on the path from the start state of the trie to state h. Lastly, the shift table is used to perform the shifting of the character index if the pattern is not present.

The running time of the pre-processing stage of the CW algorithm is linear with respect to the pattern length, while its search time depends upon the scan phase, the shift table, and the minimum pattern length (k_min). The running time of the algorithm depends on Boyer–Moore and is given as O(n/k_min).

2.3 Wu-Manber Algorithm

The next algorithm in line is Wu-Manber. Its principle is to find multiple patterns at the same time [7]; it supports a large number of patterns and is quick. The base of this algorithm is Boyer–Moore [1]: it works with the rule of bad-character shift applied to text blocks of size W. In the Wu-Manber algorithm, the pre-processing stage generates a PREFIX table, a SHIFT table, and a HASH table, which are then utilized in the searching phase. The SHIFT table is different from Boyer–Moore's version: it is used to determine how many characters are to be shifted when the input is scanned. The PREFIX table and HASH table are used when the shift value is zero; they are used to see which patterns have a possibility of matching and to evaluate them. Here, the characters of the dataset are not scanned one at a time but as a block of size W. Z denotes the total size of the patterns, obtained as the multiplication of k and m (k patterns of length m), and c denotes the alphabet size. The block size is usually taken in the order of log_c(2Z), with a preferred value for W of 2 or 3. The shift performed via the SHIFT table is based on the last W characters read rather than on a single character. Let Y = y_1 ... y_W be the W characters that we intend to scan, and assume that Y is mapped to the ith entry of SHIFT. Two cases are distinguished, as listed below (a sketch of the resulting table construction follows this subsection):

1. Y is not a substring of any pattern. In this case, the window shifts by m − W + 1 characters, so m − W + 1 is stored in SHIFT[i].
2. Y is present in some patterns. In this case, the rightmost occurrence of Y is found over the patterns: let q be the largest position at which Y ends in any pattern x, such that Y does not end at any position exceeding the length of x. This stores the value m − q in SHIFT[i].

To compute the SHIFT table values, assume every pattern is p_i = a_1 a_2 ... a_m. Every substring of p_i of size W, a_{j−W+1} ... a_j, is mapped into SHIFT, and the corresponding entry is set to the minimum of its current value (m − W + 1 is the initial value for every entry) and m − j (the amount of shifting needed to align that substring). It is possible that a block in the text matches the suffix of several patterns in the pattern list. To avoid comparing the text substring against each such pattern, hashing is used to reduce the number of patterns to be compared. The ith entry of the HASH table holds a pointer to a list of the patterns whose last W characters hash to i. The HASH table is sparse: it contains only the patterns, whereas the SHIFT table covers all possible strings of size W. Besides mapping the last W characters of every pattern, the first W characters of every pattern are additionally mapped into the PREFIX table. When the scan finds a SHIFT value of zero, the algorithm goes to the HASH table to see whether there is a match and checks the values in the PREFIX table. For every suffix, the HASH table contains not only the whole patterns with this suffix but also the hash values of their prefixes. The prefix in the text is computed (by shifting m − W characters to the left) and is used to filter out patterns whose suffix is the same but whose prefix is different.

The pre-processing running time of this algorithm is O(Z) to construct the SHIFT table, where Z is the cumulative size of the patterns. O(W × n/m) is the running time of the scanning phase, where m is the pattern size and n is the text length. The main benefit of the Wu-Manber algorithm is that it consumes less memory than CW and AC [7].
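The SHIFT-table construction described above can be sketched as follows for block size W = 2, assuming, as in the text, that all patterns are considered at a common length m.

```python
# Sketch of the Wu-Manber SHIFT-table construction for block size W.
def build_shift(patterns, W=2):
    m = min(len(p) for p in patterns)          # common pattern length m
    patterns = [p[:m] for p in patterns]       # only the first m characters are used
    default = m - W + 1                        # case 1: block occurs in no pattern
    shift = {}
    for p in patterns:
        for q in range(W, m + 1):              # block ending at position q (1-based)
            block = p[q - W:q]
            shift[block] = min(shift.get(block, default), m - q)   # case 2
    return shift, default

shift, default = build_shift(["pattern", "matching"])
print(shift["rn"], default)   # 0 for a block ending a pattern; m - W + 1 otherwise
# while scanning, read the W-char block ending at the window and jump by
# shift.get(block, default); a shift of 0 triggers the HASH/PREFIX check.
```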

2.4 Zhu–Takaoka Algorithm

The Zhu–Takaoka algorithm is a modified version of the Boyer–Moore algorithm [8]. The comparison of the pattern against the text goes from right (R) to left (L). In cases of a full match or a mismatch, it shifts using two tables:

1. The Zhu–Takaoka bad-character table (ztBc), computed for the two rightmost characters of the window.
2. The Boyer–Moore good-suffix table (bmGs).

The ztBc shift values are calculated from a two-dimensional array. O(m + |Σ|²) is the time complexity of the pre-processing stage, and O(mn) is the searching phase complexity.

2.5 Bit-Parallel (SHIFT-OR) Algorithm

The main idea of the bit-parallel algorithm [13] is to represent the search state as an integer: only a small number of logical and arithmetic operations are carried out at each search step, provided the machine words are large enough to represent every attainable state of the search. The algorithm is supported by finite automata (FA) theory, like KMP [7], and embraces the finiteness of the alphabet, like Boyer–Moore [1]. Consider K to be the pattern of size s, and a sample input of size m. A vector of different states is defined which encodes the stage of the search among positions 1…n of the pattern and positions (b−n+1)…b of the text, where b is the present position in the text: bit n of the state vector h satisfies h[n] = 0 iff y[b−n+1…b] = K[1…n], i.e., the first n characters of the pattern match the text ending at the present position; h[s] = 0 indicates that a full pattern match ends at the present position. A bit-matrix table X is defined, which is used for each read character: X[n, y] = 0 if y = pattern[n]; otherwise, if y ≠ pattern[n], X[n, y] = 1.


The text is scanned from left (L) to right (R) until the end of the string. For each character read, its mask is retrieved and combined with the state variable using one shift and one OR operation. When the most significant bit of the state becomes zero, an occurrence of the pattern has been found in the string. The advantage of bit-parallel algorithms is that they operate at the bit level; however, the pattern must fit within a computer word, and as the pattern length grows beyond the word size, the running time of this method degrades.
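A compact Shift-Or sketch follows; Python's unbounded integers stand in for the machine word, and the variable names are ours rather than the paper's.

```python
# Shift-Or: state bit j is 0 iff the last j+1 text characters match
# pattern[0..j]; a full match shows as a 0 in bit m-1.
def shift_or(text, pattern):
    m = len(pattern)
    masks = {}
    for j, c in enumerate(pattern):
        masks[c] = masks.get(c, ~0) & ~(1 << j)  # clear bit j where c occurs
    state, out = ~0, []
    for i, c in enumerate(text):
        state = (state << 1) | masks.get(c, ~0)  # one shift and one OR per character
        if state & (1 << (m - 1)) == 0:          # bit for the last pattern position is 0
            out.append(i - m + 1)                # a match starts here
    return out

print(shift_or("xabcabc", "abc"))                # [1, 4]
```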

3 Comparative Analysis

This section covers the comparative analysis of the multi- as well as single-pattern matching algorithms. The factors taken into consideration include the time to pre-process data, the execution or running time, the approach used by the algorithms, and the comparison order, i.e., left to right (L to R) or right to left (R to L). The notations below are used in Table 1 for the comparative analysis:

P: A pattern or set of patterns of length p, with cumulative pattern length P′.
x: The length of the text or data.
Σ: The alphabet over which pattern and text are defined.
k: Total occurrences of a pattern.
W: Block size for the Wu-Manber algorithm.
Pattern type: whether an algorithm finds a single (S) or multiple (M) patterns.

Table 1 Comparison of pattern matching algorithms

Algorithm | Preprocessing time | Running time   | Pattern type | Comparison order | Approach used
BF        | 0                  | O((x−p+1) × p) | S            | L to R           | Linear search
RK [8]    | O(p)               | O((x−p+1) × p) | S            | L to R           | Hashing function
KMP [7]   | O(p)               | O(p)           | S            | L to R           | Heuristic based
BM [1]    | O(p + |Σ|)         | O(x × p)       | S            | R to L           | Heuristic based
AC [11]   | O(p|Σ|)            | O(x|Σ| + k)    | M            | L to R           | Automata based
CW [12]   | O(p|Σ|)            | O(x/k_min)     | M            | L to R           | Automata + Heuristic based
WM [7]    | O(P′)              | O(W × x/p)     | M            | R to L           | Hash function
ZT [14]   | O(p + |Σ|²)        | O(p × x)       | M            | R to L           | Hash function
BP [13]   | O(p + |Σ|)         | O(p × x)       | M            | L to R           | Automata based

4 Conclusion

The paper has discussed pattern matching algorithms that work with one or more patterns. The multi-pattern algorithms covered here have wider applicability and are more efficient than those that find only a single pattern. Aho–Corasick is well suited to scenarios where each character should be scanned only once. The Wu-Manber algorithm supports a large number of patterns, has good processing time, and consumes less memory than Aho–Corasick and Commentz-Walter. The future scope is to examine their applicability to emerging fields like Big Data, the Internet of Things, and others.

References

1. R. Boyer, J. Moore, A fast string searching algorithm. Commun. ACM 20(10), 762–772 (1977)
2. S. Vijayarani, R. Janani, String matching algorithms for retrieving information from desktop: comparative analysis, in 2016 International Conference on Inventive Computation Technologies (ICICT), Coimbatore (2016), pp. 1–6
3. M. Tahir, M. Sardaraz, A.A. Ikram, EPMA: efficient pattern matching algorithm for DNA sequences. Expert Syst. Appl. 80(1), 162–170 (2017)
4. P. Neamatollahi, M. Hadi, M. Naghibzadeh, Simple and efficient pattern matching algorithms for biological sequences. IEEE Access 8, 23838–23846 (2020)
5. S. Kumar, E.H. Spafford, An application of pattern matching in intrusion detection. Department of Computer Science Technical Reports, Paper 1116 (1994). https://docs.lib.purdue.edu/cstech/1116
6. H. Gharaee, S. Seifi, N. Monsefan, A survey of pattern matching algorithm in intrusion detection system, in 2014 7th International Symposium on Telecommunications (IST) (2014), pp. 946–953
7. D. Knuth, J. Morris Jr., V. Pratt, Fast pattern matching in strings. SIAM J. Comput. 6(2), 323–350 (1977)
8. R. Karp, M. Rabin, Efficient randomized pattern-matching algorithms. IBM J. Res. Dev. 31(2), 249–260 (1987)
9. Y.D. Hong, X. Ke, C. Yong, An improved Wu-Manber multiple patterns matching algorithm, in 25th IEEE International Performance, Computing, and Communications Conference, IPCCC (2006), pp. 675–680
10. S. Wu, U. Manber, A fast algorithm for multi-pattern searching. Technical Report TR-94-17 (University of Arizona, 1994), pp. 1–11
11. A. Aho, M. Corasick, Efficient string matching: an aid to bibliographic search. Commun. ACM 18(6), 333–340 (1975)
12. B. Commentz-Walter, A string matching algorithm fast on the average, in Proceedings 6th International Colloquium on Automata, Languages and Programming (Springer, 1979), pp. 118–132
13. R. Baeza-Yates, G. Gonnet, A new approach to text searching. Commun. ACM 35(10), 74–82 (1992)
14. Z.R. Feng, T. Takaoka, On improving the average case of the Boyer–Moore string matching algorithm. J. Inf. Process. 10(3), 173–177 (1987)

Development of an Android Fitness App and Its Integration with Visualization Tools

H. Bansal and S. D. Shetty

H. Bansal: Department of Computer Science, Birla Institute of Technology and Science, Pilani, Pilani, India
S. D. Shetty: Dubai International Academic City, Dubai Campus, Dubai, United Arab Emirates, e-mail: [email protected]

Abstract In the current world scenario, health tracking has become a very important part of an individual's lifestyle. From keeping tabs on activity status, sleep schedules, and net calorie intake to monitoring microdetails such as blood oxygen levels, stress, and heart rate, individuals of all types need some sort of health tracking device to keep tabs on their health. This gave rise to the current generation of smart bands and smart watches. For this purpose, we developed an app which not only tracks the user's activity but also provides useful insights to the user using visualizations. Unlike other apps, which focus either on basic tracking or basic visualization, our app incorporates trends (shown by graphs) and provides more in-depth insights by showing the user's important health factors like their BMI (Body Mass Index) and FFMI (Fat-Free Mass Index) score, while also keeping track of their weight, fat %, muscle mass, and water %.

Keywords IoT · Wearable devices · Health tracking · Sleep analysis · Health forecasting · Visualizations

1 Introduction

Internet of things (IoT) is basically the interconnectivity of different devices and their interaction with other devices. It involves the sharing and distribution of data across devices using the Internet [1]. This data can then be used for various purposes, such as sharing files along a database or sending important information through several interconnected routes of devices for multi-sharing and seamless interfacing [2]. IoT has been touted to be the next big thing, and its integrations in several other fields are well documented. Its latest use in the medical field has garnered further


interest, drawing many researchers and technologies to look into it even more. Combining it with wearable devices has formed a new branch of technology, WIoT (wearable Internet of things), which shows that the field is being explored even further [3]. The basic idea behind WIoT is integrating wearable devices with the Internet of things, which increases the interaction between devices by involving more devices [4]. This improves the accuracy of many intricate activities, and through IoT, data sharing becomes even more seamless. Thus, for any user, tracking important health parameters has become much easier, as they no longer need to worry about manually logging every aspect of their workouts; the WIoT interface takes care of everything [5]. Wearables are electronic technologies or gadgets incorporated into items that can be comfortably worn on the body. These wearable gadgets are used for tracking information on a real-time basis [2]. They have motion sensors that take a snapshot of the user's daily activity and sync it with smartphones or PCs. After the rise of the smartphone, wearable hardware is the next large advancement in the world of technology [3]. A wearable device is a technology that is worn on the human body. This kind of device has become a more common part of the tech world as organizations have begun to develop more kinds of devices that are small enough to wear and that incorporate powerful sensor technologies that can collect and convey information about their surroundings [6]. Wearable devices are otherwise known as wearable gadgets, wearable technology, or simply wearables. This paper aims to show the implementation and importance of using a wearable device to track one's health, and the development of an app that can give the user active insights into their daily activities and habits. Unlike other papers, which just summarize findings of using wearable devices in healthcare, this paper includes the development of an app to further implement those findings for an actual use case. We have also used visualizations built with Tableau and Excel for better comparisons and forecasts of user activities, which help the user plan their activities. Such an implementation of visualization tools also distinguishes this work from other research papers.

2 Methodology

The main idea behind this project is to make a fitness app which coordinates with the data input from a fitness tracker of any sort (smart watch, smart band, smart ring, or even your smart phone). The app has features to track your workouts in its own UI, or you can import them from a web server or a local file [7]. The idea is to track the data and then give insights to the user on their performance on a timely basis. This is done by extracting the app data and using Tableau visualizations to give the user an enhanced, interactive way to access and analyze their data [8].


We will also show a forecast of the user’s performance to give them a better idea of their current progress and how they could try and make changes in their lifestyle to improve their current habits and achieve their own goals [9].

2.1 Parameters to Track

The parameters we will be focusing on are:

• The user's sleep schedule, which is split into light, deep, and REM sleep cycles.
• The activity status, which includes the number of minutes the user was active, further split into more categories.
• The user's net calorie intake. The user can enter how many calories they have eaten, and the app already tracks the calories burnt.

All these activities are tracked, and then Tableau visualizations are provided for them along with Excel's forecasting.

2.2 Experiment

As a sample experiment, we tracked sleep and other activities (like daily runs) for a period of 1 month with the help of a smartwatch. The data from the smartwatch were then extracted to a .csv file so as to capture them in totality. The data were then cleaned using the Tableau software, and only relevant attributes were made use of. These included the distance travelled by the user, the calories burnt by the user in doing so, the number of minutes the user was active, and a further categorization based on activity intensity. The tracked parameters were then used for comparisons and analysis via visualizations drawn using Tableau Desktop and Microsoft Excel forecasting. Such visualizations helped in a much better understanding of the trends developing in the user's activities (Fig. 1). The dataset shown is that of the experiment, tracked daily for a month; it covers the important parameters and subparameters of sleep, activity status, and calories burnt.


Fig. 1 The dataset of all activities tracked every day of the month

2.3 The App

The app's name is FiTracker, and it has been developed using Android Studio. The app is a basic fitness tracker that keeps the history of all the workouts of the user and shows graphs for better visualization. The app has the following features:

• A huge variety of workouts to choose from, with the option of user input
• A stopwatch to help the user track using the device
• Graphs to show various progressions as per the user's perusal
• A history of all the workouts, which gives the user the option to follow the same pattern
• A description tab for all exercises, which the user can use to follow certain guidelines
• A weight tracker which calculates the BMI and the FFMI
• A body tracker which keeps a record of the user's muscle measurements for better comparison

The BMI and FFMI are calculated using the data the device and the app track. The BMI is calculated as user weight (kg)/[height (m)]². So, for example, with weight 68 kg and height 165 cm (1.65 m), the BMI will be 68/(1.65)² = 24.98 [10]. Similarly, the FFMI is calculated using the formula fat-free mass (kg)/[height (m)]², where fat-free mass is calculated as weight (kg) × (1 − body fat (%)/100) [10].
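The two formulas translate directly into code; a minimal sketch follows (the 20% body-fat input in the FFMI call is an illustrative assumption, not a value from the paper):

```python
# BMI and FFMI as defined above (weight in kg, height in m, body fat in %).
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def ffmi(weight_kg, height_m, body_fat_pct):
    fat_free_mass = weight_kg * (1 - body_fat_pct / 100)
    return fat_free_mass / height_m ** 2

print(round(bmi(68, 1.65), 2))       # 24.98, matching the worked example
print(round(ffmi(68, 1.65, 20), 2))  # 19.98, with an assumed 20% body fat
```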


3 Results and Discussion

This section discusses the results of the experiments conducted and uses illustrations to showcase the different parameters and their interrelationships via visualizations drawn from the Tableau Desktop software and Microsoft Excel forecasting. The visualizations focus on important health parameters like calories burnt and sleep quality to better assess the user's health as a measure of net calorie burn and net sleep quality. These visualizations differ from those of other apps and use cases, and we have used professional visualization tools like Tableau and Excel forecasting for a better data showcase. Such visualizations are also interactive, and users can input sample data so as to try and find the right combination of activity and rest without having to experiment physically and compromise their time. Unlike other apps, our concept uses an advanced visualization tool to give more in-depth insights to the user while also giving them the freedom to use active forecasting for better planning.

Figures 2 and 3 show the forecast for the active calories burnt by the user during the day. The forecast is made at a 95% level of confidence and takes into consideration both the lower and the upper boundary conditions to give a better margin of error and possibility, hence letting the user plan their future activities accordingly. The forecast can give the user an idea of how to better plan their daily walk durations and speed to get the optimal number of steps.

Fig. 2 The table for forecast of active calories burnt for next 1 month

Fig. 3 Forecast of calories burnt for next 1 month (graph representation)

Active calories are calories burnt while doing some activity as registered by the fitness device. This could be a walk, a jog, a workout, cycling, or some sport. These activities get tracked by the fitness device, which also tracks the calories burnt in the duration of each activity. The forecast thus provides a valuable insight into how the user can modify their activities to maximize the calorie burn (Fig. 4).

Fig. 4 Graph depicting the relationship between steps and calories burnt. Red: Calories burnt. Blue: Steps

The graph in Fig. 4 shows the trend of both the steps and the calories burnt by the user during the 1-month period on a daily basis. As we can see, the steps are in a direct relationship with the calories burnt, which shows that the user burnt most of their calories via cardio-based workouts. Because the fitness devices also track other movements, such step-and-calorie relationships give a good measure of how the number of steps contributes to the calorie burn. Another thing of interest is the irregularity of the user: there are many troughs in the line graph showcasing the calories, so the user could look at this and try to become more regular in their activities (Fig. 5). The graph in Fig. 5 breaks down the activity ratio of an individual. The visualization aims to show the ratios of the time spent with respect to the intensity of the activity. As we can see from the graph, the user is lightly active for a good amount of time on a daily basis, but their regularity for the fairly and very active intensities is lower and more erratic. Thus, the user could look into increasing not only the intensity on a daily basis but also try to be more regular (Fig. 6).

Fig. 5 Visualizations to compare the activities intensity ratios

Fig. 6 Visualization showing the ratio of different stages of sleep


The charts in Fig. 6 show the breakdown of the daily sleep in terms of minutes. The ideal ratio is 5:2:2 (light:deep:REM), with 1 part left as a buffer for randomness; the most important phase is REM sleep. As we can see, the user has missed a few days of REM sleep, which directly correlates with a lack of regular sleep time on those days. Thus, using such a graph, the user can identify trends in their sleeping patterns and also judge the quality of their sleep.
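As a rough stand-in for the 95% confidence forecast of Figs. 2 and 3, the sketch below fits a linear trend to a month of daily calorie totals and extends it 30 days with approximate 95% bounds; the data and parameters are invented for illustration and do not reproduce Excel's exact forecasting model.

```python
# Linear-trend forecast with approximate 95% bounds (synthetic data;
# this mimics, not reproduces, the Excel forecast used in the paper).
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(30)
calories = 400 + 3 * days + rng.normal(0, 40, 30)   # invented daily burn

slope, intercept = np.polyfit(days, calories, 1)    # fit the trend line
sigma = (calories - (slope * days + intercept)).std(ddof=2)

future = np.arange(30, 60)
point = slope * future + intercept                  # point forecast
lower, upper = point - 1.96 * sigma, point + 1.96 * sigma
print(f"day 60: {point[-1]:.0f} kcal (95% bounds {lower[-1]:.0f}..{upper[-1]:.0f})")
```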

4 Conclusion

The wearable Internet of things has become the present of a long-awaited technology. With its ease of access, useful features, and current tech appeal, wearables are no longer luxuries afforded only by the elite. Their accessibility and wide range make them a must-have, and with the future looking bright, it will be exciting to see how these devices transform even more, with more exciting features [2]. Their use in the medical field is already apparent, and with improved microchips, their future implementation could encompass many things [4]. They already track important basic parameters, and their increased accessibility will make the doctor-patient experience more personal and less cumbersome. With better AI systems coming up, the usability of these devices will only increase and improve, making them some of the most basic and important devices a person could own [11]. What the technology now needs to address is its accuracy, accessibility, and availability [10]. With an increased range of products, more accessible prices and features, and better and improved software systems, these devices will soon become as commonplace as a laptop or a smartphone for every individual. Thus, the app is set to have these additional updates in the future:

• Addition of an image recognition tab which would recognize the food the user is having and assign the appropriate number of calories through machine learning.
• Addition of a net calorie intake calculator which would track the user's eating habits and food intake and calculate the net calorie intake. Being a parameter, it will also be showcased in the form of graphs and trends so as to give the user a better idea of their habits.
• Adding an alert system to notify the user about their workout schedules.
• Adding a general knowledge and tips tab to let the user know about the latest fitness trends and tips they can follow.

References

1. M.G.R. Alam, S.F. Abedin, S.I. Moon, A. Talukder, C.S. Hong, Healthcare IoT-based affective state mining using a deep convolutional neural network. IEEE Access 4 (2016)
2. S. Xia, D.G. Peixoto, B. Islam, M.T. Islam, S. Nirjon, P.R. Kinget, X. Jian, Improving pedestrian safety in cities using intelligent wearable systems. IEEE Internet of Things J. 6(3) (2019)
3. V. Bianchi, M. Bassioli, G. Lombardo, P. Fornacciari, M. Mordonini, I.D. Munari, IoT wearable sensor and deep learning: an integrated approach for personalized human activity recognition in a smart home environment. IEEE Internet Things J. 6(5) (2019)
4. Z.U. Ahmed, M.G. Mortuza, M.J. Uddin, M.H. Kabir, M. Mahiuddin, M.J. Hoque, Internet of Things based patient health monitoring system using wearable biomedical device, in ICIET, 27–29 Dec 2018
5. S. Khan, M. Alam, Wearable Internet of Things for Personalized Healthcare: Study of Trends and Latent Research, Jan 2020 (Cornell University Papers, 2020)
6. S. Hiremath, G. Yang, K. Mankodiya, Wearable Internet of Things: concept, architectural components and promises for person-centered healthcare, in 4th International Conference on Wireless Mobile Communication and Healthcare, "Transforming healthcare through innovations in mobile and wireless technologies", Jan 2014
7. R.K. Kodali, G. Swamy, B. Lakshmi, An implementation of IoT for healthcare. IEEE Access, 13 June 2016
8. T. Adhikary, A.D. Jana, A. Chakrabarty, S.K. Jana, The IoT augmentation in healthcare: an application analytics, in ICICCT (2019)
9. F. Wu, J.M. Redoute, M.R. Yuce, An autonomous wireless body area network implementation towards IoT connected healthcare applications. IEEE Access 5(16) (2017)
10. The Role of IoT in Healthcare: Applications & Implementation (2020). Retrieved 20 Apr 2020, from https://www.finoit.com/blog/the-role-of-iot-in-healthcare-space/
11. I.G. Magarino, R. Muttukrishnan, J. Lloret, Human-centric AI for trustworthy IoT systems with explainable multilayer perceptrons. IEEE Access 7 (2019)

Breast Cancer Prediction Models: A Comparative Study and Analysis

Aparajita Nanda, Manju, and Sarishty Gupta

A. Nanda, Manju, S. Gupta: Department of CSE and IT, Jaypee Institute of Information Technology, Noida, India

Abstract Breast cancer is a major cancer among women. Its death rate is predominantly high, to the extent that one out of ten women is detected as having a breast malignancy. It is also the second most common cause of death for women in the USA. Thus, it is an important public health problem. Breast cancer tumours are mostly categorized into two classes: benign and malignant. The early detection or prediction of cancerous cells helps in preventing higher death rates. In this paper, our main focus is to discuss and analyse different prediction models. The objective of the work is to design a classification model to predict cancerous cells. In addition, a comparative analysis is performed among the classification techniques to yield an accurate classification result, with the aim of finding the classifier that works best in predicting the class with the least error.

Keywords Machine learning · Classification · Random forest · Breast cancer

1 Introduction

Cancer involves abnormal cell growth with a high potential to spread to other body parts. During the initial phase, cancer produces no symptoms; symptoms mostly appear when it grows massively or ulcerates. Since there are various categories of breast cancer, few symptoms are specific, and many symptoms depend on the individual's physical response to the disease. Cancer can be difficult to diagnose and can be considered a "great imitator"; because of all this, it is very difficult to identify cancer at an early stage. Breast cancer is among the most prominent cancers affecting women in the world. It is also a leading cause of death for women across the world. Thus, it is an important public health problem. Numerous works have been done on medical data sets with the help of various classifiers and also by applying feature selection techniques. Many of them show good classification accuracy. For example, previously, works in [1] compared the


performance of Naïve Bayes, random forest, K-nearest neighbours (K-NN), and support vector machines (SVM), and the study found SVM to be the most accurate. Abdel-Ilah and Šahinbegović [2] compared the performance of supervised learning classifiers, including the Naive Bayes (NB) classifier and K-nearest neighbour (KNN), for breast cancer classification. The simulations carried out on these classifiers proved their efficacy over other classifiers on the Wisconsin breast cancer (original) datasets. Similarly, Asri et al. [3] compared the performance of decision tree (C4.5), Naive Bayes (NB), support vector machine (SVM), and K-nearest neighbours (K-NN); their accuracy was tested on the Wisconsin breast cancer (original) datasets. However, there are certain difficulties faced while handling such critical data, because a wrong prediction can be a deciding factor when it comes to life and death. Some of these difficulties in the prediction model are: the difficulty of choosing a kernel function in SVM, the large amount of memory required to run the model, and choosing a fair dataset to train the model. This paper broadly focusses on the usage and applications of various classification techniques which have proven helpful, specifically for the medical and bioinformatics streams, when it comes to predicting breast cancer. For this prediction, the famous and widely used Wisconsin breast cancer (original) dataset is chosen. The most common way to develop a model that classifies a population of records is to apply classification techniques. Our major concern in this proposed work is to predict the cancer cells accurately through classification techniques which predict the target class for each case in the data. For the purpose of experimentation, this work considers the Wisconsin breast cancer dataset and various classification techniques such as KNN, kernel SVM, SVM, decision tree, Naive Bayes, logistic regression, and random forests for accurate prediction and for analysing classifier performance. In addition, this work also detects the chance (probability from logistic regression) of suffering from cancer by identifying abnormalities present in the cells using the information given in the dataset. This model is helpful for the early detection of cancer, which leads to higher survival rates.

2 Literature Survey

In medical science, machine learning models are used for the analysis of clinical parameters and their combinations for prognosis. In today's era, several experiments have been conducted on medical data sets with the help of multiple classifiers and feature selection techniques. Khourdifi and Bahaj [1] discuss several machine learning techniques, such as random forest, Naive Bayes, SVM, and K-nearest neighbour, to classify benign and malignant tumour cells. Abdel-Ilah et al. [2] compare the mean square errors of three transfer functions, LOGSIG, TANSIG, and PURELIN, in a neural network architecture; additionally, they study the impact of different numbers of hidden layers on the neural network. Amrane et al. [4] describe breast cancer classification using Naive Bayes and K-nearest neighbour. They validate their approach on the breast cancer dataset (BCD) and University of California Irvine (UCI) datasets.


Karabatak and Ince [5] presented an automatic diagnostic system for detecting breast cancer based on neural networks (NN) and association rules (AR). Their research showed that AR can be utilized to reduce the dimension of the feature vector, and they proposed that the AR + NN model can serve as an effective automatic diagnostic system for different diseases. Osman [6] presented a two-step SVM method to diagnose breast tumours; the proposed hybrid method, when analysed on the UCI WBC data set, improved the accuracy to 99.1%. Kate and Nadig [7] built models using machine learning algorithms to predict stage survivability on the SEER dataset. They evaluated the models on different stages separately and also combined for all stages. Ehtemam et al. [8] compared 64 data mining models for the early prognosis and diagnosis of breast cancer. They collected 208 samples of Iranian patients for 2014–2015 and computed the accuracy and precision of the classifier techniques. Said et al. [9] proposed the Breast Cancer Outcome Accurate Predictor (BCOAP) model to predict the main outcomes of breast cancer. Works in [10] investigated prognostic factors for survival analysis of breast cancer using machine learning techniques. The breast cancer analysis works in [11, 12] are also based on various machine learning models; they built prediction models using random forest, decision tree, support vector machine, neural networks, etc., to identify the variables which influence the survival rate of patients having breast cancer. Many of the works specify one or two classification techniques and state the metrics of those algorithms only. Different algorithms have been applied, but none of them states which technique provides the best result considering all challenges. All algorithms have their pros and cons, but for detection, the best of all techniques must be identified to decide life and death, especially in severe diseases such as breast cancer. Thus, in this paper, the objective is to evaluate the most commonly used algorithms and find which gives the best results with the least error.

3 Our Approach

We have performed two iterations of classification on the breast cancer dataset in order to find out which algorithm works best among the well-known classification algorithms in this situation. We follow a sequence of steps to achieve the solution of the given problem. The basic workflow for the solution is explained below.

Step 1. Cleaning the raw data

As we all know, applying a classification algorithm to raw data is irrelevant and leads to bizarre results due to problems such as missing values, different scales, and noise. Therefore, cleaning the data is very important and is the first step of our work. We have performed the following operations before training on our dataset (see the sketch after this list):

1. Replaced the missing values with the mean of the corresponding column.
2. Scaled and normalized values to reduce noise.
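A minimal sketch of this cleaning step with pandas and scikit-learn; the file name, the diagnosis column name, and the "M" label coding are illustrative assumptions, not details from the paper:

```python
# Step 1 in code: mean imputation per column, then scaling/normalization.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("wbcd.csv")                     # hypothetical file name
X = df.drop(columns=["diagnosis"])               # feature columns (assumed name)
y = (df["diagnosis"] == "M").astype(int)         # 1 = malignant (assumed coding)

X = X.fillna(X.mean())                           # replace missing values with column means
X = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)
```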


Step 2. Splitting data into training and test datasets

Here, we have kept 80% of the data for the purpose of training, and consequently, 20% of the data is used for testing. As said earlier, we performed two iterations of the classification task to find the actual best-working classifier, so the splitting of the data is done in two random ways so that there is less chance of the training data being inclined towards one classifier only. We considered two random states: first a random state = 0, and then a random state = random (which considers the current time as a parameter by default).

Step 3. Training the Model

After the data are cleaned, we applied the following classification techniques: K-nearest neighbours, SVM, logistic regression, kernel SVM, Naive Bayes, decision tree, and random forests (with number of estimators = 10, 100, 200, respectively).

k-Nearest neighbours (K-NN). The k-nearest neighbours (KNN) algorithm is a supervised machine learning algorithm that can be used for classification. In order to predict a new value, KNN uses the concept of 'feature similarity'. To do so, one has to select the value of K that is best suited to the input data. To obtain this K value proficiently, we ran the K-NN algorithm many times with various values of K.

Decision Tree. Decision trees (DTs) also come under supervised learning and are suitable for both classification and regression. Every internal node in a decision tree represents a 'test' on an attribute, whereas the branches show the outcomes of the conducted test. The last level of the decision tree, the leaves, represents specific classes. The paths from the root to the leaves form a well-defined set of classification rules.

Logistic Regression. This classification technique is basically a linear regression with the logistic function applied to it. When the logistic function is applied, it gives a value between 0 and 1. The function used is the sigmoid function.

SVM (Support Vector Machine). The support vector machine (SVM) is another supervised learning paradigm used to classify data; SVM can also be used for regression. The idea followed by SVM is to find an optimal hyperplane for the input training data, which is then used to classify unseen data (test data). In two dimensions, the hyperplane is a simple line.

Kernel SVM. When data are not linearly separable in their original space, kernels implicitly map them into a higher-dimensional space in which a separating hyperplane can be found; after applying the kernels, we can classify the data easily.


Random Forest. The random forest, also known as random decision forests, is an ensemble learning method that is applied in both classification and regression. It constructs a multitude of decision trees while processing the training data.

Step 4. Testing and Evaluation

After the training phase comes the testing and evaluation process. This forms the most important part of the classification because we need to know which classifier performs the best. Thus, we have applied metrics which include accuracy, precision, and F1 score. The human eye is more susceptible to graphics than plain text and numbers; for this purpose, we have plotted the ROC curve, the most widely used graph for classification. Also, to aid the reading of the ROC curve, we evaluated the ROC area under the curve (AUC) score, which forms a very important evaluation metric.
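Steps 2-4 in sketch form with scikit-learn, continuing from the cleaning sketch above; the hyperparameter settings are illustrative, not the paper's exact configuration:

```python
# Split (step 2), train the classifiers (step 3), evaluate (step 4).
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             f1_score, roc_auc_score)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (linear)": SVC(kernel="linear", probability=True),
    "Kernel SVM (RBF)": SVC(kernel="rbf", probability=True),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    prob = clf.predict_proba(X_te)[:, 1]      # score used for the ROC curve
    print(f"{name:20s} acc={accuracy_score(y_te, pred):.4f} "
          f"prec={precision_score(y_te, pred):.4f} "
          f"f1={f1_score(y_te, pred):.4f} "
          f"auc={roc_auc_score(y_te, prob):.4f}")
```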

4 Experimentation and Results

Dataset: For experimentation, we used the WBCD (Wisconsin Breast Cancer Diagnosis) dataset from the University of Wisconsin Hospitals [1–3, 11]. Due to its large set of instances (599), the WBCD dataset is widely utilized in machine learning classifier models. Further, this dataset has only a few missing values and is otherwise noise-free. The dataset was originally created on 8 January 1991, and new samples arrived periodically; the database therefore reflects this chronological grouping of the data. A short description of the dataset is as follows:

• Number of instances: 582 (as of 8 January 1991)
• Number of attributes: 28 plus the class attribute
• Missing attribute values: 16
• Class distribution: benign (55.5%), malignant (44.5%)

Results: We performed the different classification techniques specified earlier. The metric tables for the different classifiers are shown in Tables 1 and 2 (Table 1 for random state = 0, Table 2 for random state = random).

Table 1 Metric table at random state = 0

Table 2 Metric table at a random state = random

From Table 1, the following observations were recorded: kernel SVM had the highest accuracy, precision, F1 score, and ROC area under the curve score, with values 0.9824561403508771, 0.9749906681597611, 0.978260869565217, and 0.9990473166084471, respectively. From Table 2, kernel SVM recorded the highest accuracy, F1 score, and ROC area under the curve score, with accuracy 0.9912280701754386, precision 0.9824561403508772, F1 score 0.9870129870129869, and ROC AUC 0.9993074792243767; on the other hand, logistic regression had the upper hand in precision, with a value equal to 0.9824561403508772. We have also plotted ROCs for each classifier; snapshots are shown in Figs. 1 and 2.

Fig. 1 Plotted ROCs for the classifiers at random state = 0

Fig. 2 Plotted ROCs for the classifiers at a random state = random

Comparative analysis: As seen above (Tables 1 and 2), all of the metrics have fairly high values. All classifiers achieve accuracy and precision greater than 95.0% and have a ROC AUC score close to 1. Even the plotted ROCs (Figs. 1 and 2) are near to an ideal ROC, signifying that all classifiers work well on the data. One of the primary reasons for such high metric values across all classifiers is the great amount of detail in the dataset, about 30 parameters, together with the thorough cleaning that puts every parameter on the same scale. The classification being binary, with weights adjusted such that both classes have equal priority, is also one of the reasons. But we aim to find the best of the best and succeeded in doing so, as specified in the conclusion.

5 Conclusion

We performed classification with various models (K-NN, SVM, logistic regression, kernel SVM, Naïve Bayes, decision tree, random forests) on the breast cancer dataset. The different classification models have their own advantages and disadvantages. However, on the basis of the comparative analysis, we observe that kernel SVM performed the best in almost every case we presented, with an accuracy of 98.24% at random state = 0 (Table 1) and an accuracy of 99.12% at random state = random (Table 2). We thus conclude that kernel SVM is the best classifier among all the classifiers on the considered breast cancer dataset.


References

1. Y. Khourdifi, M. Bahaj, Applying best machine learning algorithms for breast cancer prediction and classification, in 2018 International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS), Kenitra (2018)
2. L. Abdel-Ilah, H. Šahinbegović, Using machine learning tool in classification of breast cancer, in CMBEBIH 2017, IFMBE Proceedings, vol. 62, ed. by A. Badnjevic (Springer, Singapore, 2017)
3. H. Asri, H. Mousannif, H. Al Moatassime, T. Noel, Using machine learning algorithms for breast cancer risk prediction and diagnosis. Procedia Comput. Sci. 83 (2016)
4. M. Amrane, S. Oukid, I. Gagaoua, T. Ensarİ, Breast cancer classification using machine learning, in 2018 Electric Electronics, Computer Science, Biomedical Engineerings Meeting (EBBT), Istanbul (2018)
5. M. Karabatak, M.C. Ince, An expert system for detection of breast cancer based on association rules and neural network. Exp. Syst. Appl. 36(2), Part 2, 3465–3469 (2009)
6. A.H. Osman, An enhanced breast cancer diagnosis scheme based on two-step-SVM technique. Int. J. Adv. Comput. Sci. Appl. 8(4), 158–165 (2017)
7. R.J. Kate, R. Nadig, Stage-specific predictive models for breast cancer survivability. Int. J. Med. Informatics 97, 304–311 (2017)
8. H. Ehtemam, M. Montazeri, R. Khajouei, R. Hosseini, A. Nemati, V. Maazed, Prognosis and early diagnosis of ductal and lobular type in breast cancer patient. Iran J. Publ. Health 46(11), 1563–1571 (2017)
9. A.A. Said, L.A. Abd-Elmegid, S. Kholeif, A. Abdelsamie, Classification based on clustering model for predicting main outcomes of breast cancer using hyper-parameters optimization. Int. J. Adv. Comput. Sci. Appl. 9(12), 268–273 (2018)
10. M.D. Ganggayah, N.A. Taib, Y.C. Har, P. Lio, S.K. Dhillon, Predicting factors for survival of breast cancer patients using machine learning techniques. BMC Med. Inform. Decis. Making 19(1), Art. no. 48 (2019)
11. L. Liu, Research on logistic regression algorithm of breast cancer diagnose data by machine learning, in 2018 International Conference on Robots & Intelligent System (ICRIS), Changsha (2018)
12. R.D. Ghongade, D.G. Wakde, Detection and classification of breast cancer from digital mammograms using RF and RF-ELM algorithm, in 2017 1st International Conference on Electronics, Materials Engineering and Nano-Technology (IEMENTech), Kolkata (2017)

Analysis of Energy-Efficient Clustering-Based Routing Technique with BrainStorm Optimization in WSN

Ankur Goyal, Bhenu Priya, Krishna Gupta, Vivek Kumar Sharma, and Sandeep Kumar

Abstract Wireless sensor networks (WSNs) are used in essential applications, including remote environment monitoring and target tracking. This has been made possible, particularly in recent years, by the development of smaller, cheaper, and more intelligent sensors. These sensors have wireless interfaces that allow them to communicate in a network. Energy efficiency (EE) in WSNs is an essential issue for extending the network's lifetime over the following months or even years. Routing in WSNs is quite challenging because the intrinsic characteristics of such networks differ from those of other telecommunications networks, like mobile ad hoc networks or cellular networks. Clustering is the most common energy efficiency technique and provides many advantages, such as energy efficiency, a longer life cycle, scalability, and lower latency. The brainstorm optimization algorithm (BSO) is a new form of swarm intelligence approach based on humans' collective behaviour, namely, the brainstorming process. It is not just an optimization method; it can also be used as a framework for developing optimization algorithms.

Keywords WSNs · Routing in WSNs · Clustering · Energy efficient (EE) · Brain storm optimization (BSO) · Routing algorithm · Swarm intelligence

A. Goyal: Department of CSE-AIT, Chandigarh University, Chandigarh, Punjab, India
B. Priya, K. Gupta: Yagyavalkya Institute of Technology, Jaipur, India
V. K. Sharma: Jagannath University, Jaipur, India, e-mail: [email protected]
S. Kumar: Department of Computer Science and Engineering, CHRIST (Deemed To Be University), Bangalore, India


1 Introduction

A WSN consists of a group of sensor nodes (SNs) that sense the environment, communicate over short distances using wireless links, and perform simple data processing. These nodes are generally small in size, battery-powered, and deployed arbitrarily. WSNs have various essential applications in services, environmental monitoring, and target tracking. The number of nodes in a WSN varies from a few to several hundreds or even thousands, where every node is attached to a single sensor. A WSN sensor node continuously reads physical parameters, including temperature, humidity, and pressure, and shares data with nearby nodes; each SN in the network generally has several components, such as a radio transceiver with a transmitter, a microcontroller, an interface between the sensors, and a power supply. A WSN is formed by wirelessly connected autonomous sensors that obtain information about major physical and biological events. Any of these devices may simultaneously observe, interpret, and transmit, which enables a wide range of compelling sensor device applications [1, 2]. Energy efficiency in WSNs is an essential issue for extending the network's lifetime over the following months or even years. The bulk of energy-saving routing research focuses mainly on path identification based on the greatest energy cost, the lowest energy usage, or the lowest achievable efficiency. These approaches can minimize resource usage and take energy-conscious routing into account. Energy usage not only affects network latency and packet loss but is one of the most critical factors for the network [3–5]. Clustering is an essential strategy used to extend a sensor network's life by reducing power consumption, and constructing clusters makes a sensor network scalable. The elected leader of a cluster is referred to as the cluster head (CH). The cluster sensors can pick a CH, or a CH can be assigned by the network builder. Numerous clustering algorithms have been explicitly developed for network scalability and practical collaboration. A cluster-based routing system consistently achieves EE-WSN routing: in the hierarchical architecture, higher-energy nodes (CHs) may be used to process and transmit information, while the low-energy nodes perform the sensing. LEACH, PEGASIS, TEEN, and APTEEN are well-known clustering algorithms. Clustering is the most effective EE procedure; in this technique, SNs are grouped into sets called clusters [6]. BSO is a new type of swarm intelligence method based on the brainstorming process, which in turn reflects collective human behaviour. Not only is it a method of optimization, but it can also be seen as a technical framework for optimization. The BSO algorithm is inspired by the brainstorming philosophy: companies use brainstorming as a tool to increase creativity, and it has gained broad acceptance as an aid to creative thinking. An idea in BSO is a possible solution in the solution space. BSO adheres to rules for sharing ideas within a team and utilizes clustering, substitution, and construction operators to deliver globally optimal generations [7, 8].


2 Wireless Sensor Network (WSNs)

A WSN is a set of spatially distributed, dedicated sensors for monitoring and recording physical and environmental conditions, with the collected data organized in a central location. Wireless networks adapt to wireless communication and the random formation of networks to facilitate the wireless transmission of sensor data. Sensors are becoming smaller, more affordable, and more intelligent. Sensors measuring physical or environmental conditions, including temperature, sound, or vibration, are spread across WSNs. More modern networks are two-way, so that sensor operation can also be controlled [9] (Fig. 1).

2.1 Types of WSN

Based on the deployment area, networks are classified as terrestrial, underground, underwater, and more. The various WSN types include [7]:

• Terrestrial WSNs
• Underground WSNs
• Underwater WSNs
• Multimedia WSNs
• Mobile WSNs

Fig. 1 Wireless sensor network

2.2 Advantage

• WSNs can be applied in extreme and hostile conditions where wired networks are not feasible.
• Simple operation of WSNs.
• Quick installation of WSNs.
• Increased performance, greater energy efficiency, and higher channel bandwidth over static wireless sensor networks.


2.3 Disadvantage

• Restricted processing and networking capabilities.
• Low battery capacity and restricted storage and retrieval capacities.
• Susceptibility to security threats.
• Restricted communication capacity.

2.4 Characteristics of Wireless Sensor Networks

• Limited power consumption for battery-operated nodes
• The ability to manage node failures
• Some node mobility and node heterogeneity
• Scalability to large-scale deployments
• The ability to withstand harsh environmental conditions
• Ease of operation [10, 11]

3 Energy Efficient in WSNs

The energy constraints of sensor nodes pose a big problem for the design of WSN routing protocols. The protocols are designed to balance loads, reduce the resources required by end-to-end packet communication, and avoid low-energy nodes. In this section, we include a representative, rather than complete, set of EE routing protocols, grouped into the following classes: data-centric protocols, hierarchical protocols, geographical protocols, and opportunistic protocols. The following paragraphs describe each group.

Data-centric Protocols: Such protocols seek to save resources by querying sensors depending on their data or values. They assume a query-driven model of data dissemination: nodes route every data packet by looking at its content. Two methods for the propagation of information were explicitly suggested. The first is SPIN, where each node advertises data availability and receives queries from interested nodes. The second is Directed Diffusion (DD), where sinks transmit an interest message to the sensors and only the concerned nodes respond with a gradient message.

Hierarchical Protocols: Clustering protocols have recently been developed to enhance scalability and increase network access to the sink. Despite the overhead caused by cluster formation and maintenance, cluster-based protocols show lower energy usage than flat networks. LEACH is the best-known hierarchical routing protocol. Under this scheme, sensors are grouped into local clusters, and some nodes serve as cluster heads. A periodic, randomized rotation of the cluster head role is used to balance energy usage (the classic LEACH election threshold is sketched at the end of this section).


Geographical Protocols: Non-geographical routing protocols rely on flooding for route discovery and updates, which constrains their scalability and efficiency. Geographical protocols instead use node positions to determine routes. The authors of the GEAR protocol suggest a two-phase, energy-aware approach: in the first phase, the packet is forwarded towards the target region; in the second phase, the packet is disseminated within the destination region [12].
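For concreteness, the classic LEACH cluster-head election threshold mentioned above (a textbook formula, not this survey's contribution) can be sketched as follows; the parameter values are illustrative:

```python
# LEACH: node n elects itself CH in round r with probability T(n),
# provided it has not served as CH in the last 1/P rounds.
import random

def leach_threshold(P, r):
    return P / (1 - P * (r % round(1 / P)))

def elect_cluster_heads(eligible_nodes, P=0.05, r=0):
    t = leach_threshold(P, r)
    return [n for n in eligible_nodes if random.random() < t]

print(elect_cluster_heads(range(100), P=0.05, r=3))   # a few CH ids per round
```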

4 Routing in WSN

WSN routing differs from conventional fixed-network routing in several ways: no infrastructure is available, wireless links are unreliable, and sensor nodes are prone to failure, while routing protocols must follow stringent energy consumption requirements. Several routing algorithms have been developed for wireless sensor networks.

4.1 Routing Challenges in WSN

Because of the constraints on computing, radio, and sensor battery resources, WSN routing protocols must address the following requirements:

Data Delivery Model: The data delivery model addresses fault tolerance by creating an alternative path so that data packets can be rescued from node or link failures. It significantly shapes the WSN routing protocol, particularly with respect to node power, health, power consumption, and route stability.

Scalability: The devices should be flexible, as they are more effective when they can be configured and extended as needed [8]. Routing systems must exploit the large number of relatively flexible motes in the WSN to cope with events in the region.

Resilience: Sensors may stop working or behave erratically due to environmental conditions or battery depletion. This problem is resolved by identifying an alternative path when the current nodes stop working.

Production Cost: The cost of a single node largely determines the overall cost of the sensor network, so the sensor node's cost must be minimized.

Operating Environment: Sensor networks are deployed inside large equipment, underground, in biologically or chemically contaminated areas, behind enemy lines in battle, in large buildings or warehouses, etc.

Data Aggregation/Fusion: The primary aim of data aggregation algorithms is to obtain and aggregate data from multiple sources using various operators, e.g., suppression, average, maximum, and minimum, to promote energy efficiency and traffic balance in routing protocols [13, 14].


5 Clustering

Clustering is the mechanism by which a dataset (or set of objects) is partitioned into a collection of meaningful subclasses called clusters. It allows users to grasp the natural structure or grouping of a dataset. A robust clustering mechanism produces high-quality clusters with high intra-class (i.e., intra-cluster) similarity and low inter-cluster similarity. The consistency of the clustering outcome depends on both the similarity measure and the implementation of the procedure. The ability to detect some or all of the hidden patterns is also a measure of a clustering process's quality.

5.1 Clustering Algorithm of WSN

Clustering is an activity in which groups of similar objects are created. Clustering enhances the scalability and lifetime of the network and makes the distribution of control more even over the network. It saves energy by distributing the load through wise decisions: more load is assigned to high-energy nodes, and the lifespan of the network improves [8].

K-Means Algorithm: K-means-based clustering selects CHs using two variables, the Euclidean distance and the node residual energy. All nodes send their details to a central node; upon gathering information from all the nodes, it runs the K-means clustering procedure. This approach performs better when realized as a distributed system rather than a purely central one (a sketch of this idea follows at the end of this subsection).

Low-Energy Adaptive Clustering (LEACH): LEACH's goal is to select nodes as CHs in rotation so that each node gets an opportunity to become CH. As a CH consumes more energy than non-CH nodes, the load is thereby evenly distributed amid the nodes, and no single node runs out of energy early merely because it was frequently chosen as cluster head.

Hybrid Energy-Efficient Distributed clustering (HEEDC): HEEDC considers two variables to determine whether or not to make a node CH: the residual energy and the intra-cluster communication cost. HEEDC has the disadvantage of sometimes choosing additional cluster heads.

Fuzzy C-Means: The FCM algorithm was initially suggested by Bezdek for cluster analysis, pattern recognition, and image processing. This procedure is a soft partitioning method that assigns each sensor node a degree of membership in each cluster. In this work, the FCM algorithm is applied to form clusters in WSNs. It aims to tackle the irregular sensor node allocation linked to the implementation of protocols such as LEACH [15–19].
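A sketch of K-means clustering of sensor nodes with an energy-aware CH choice per cluster; the node positions, energy values, and the scoring rule combining residual energy with distance to the centroid are all invented for illustration:

```python
# K-means clustering of sensor nodes; CH = member with the best
# energy/proximity score (illustrative, not the exact scheme surveyed).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(50, 2))   # 50 nodes in a 100x100 field
energy = rng.uniform(0.5, 1.0, size=50)         # residual energy (assumed values)

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(positions)

for c in range(k):
    members = np.where(labels == c)[0]
    centroid = positions[members].mean(axis=0)
    dist = np.linalg.norm(positions[members] - centroid, axis=1)
    score = energy[members] / (1.0 + dist)      # favour energy and proximity
    ch = members[np.argmax(score)]
    print(f"cluster {c}: head = node {ch}, members = {members.tolist()}")
```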


6 BrainStorm Optimization (BSO)

The BSO algorithm is a modern form of swarm intelligence algorithm based on human beings' joint actions, i.e., brainstorming. Two primary operations are involved in BSO: converging and diverging. Through repeated divergence and convergence in the search space, a 'good enough' optimum may be obtained. An optimization algorithm designed this way naturally has the ability to both converge and diverge. BSO has two kinds of functionality: capability learning and capability developing. The divergent operation corresponds to capability learning, while the convergent operation corresponds to capability developing. Capability developing focuses the search of the algorithm on promising areas where better solutions may exist, while capability learning searches for new solutions away from the current ones, moving from single-point solutions to population-based swarm intelligence solutions. The ability to learn and to develop capabilities in turn contributes to better options for the individuals. Thus, the BSO algorithm can also be considered a developmental brainstorming method for optimization (Fig. 2); a compact sketch following this flow is given after the figure. The BSO algorithm is a blend of swarm intelligence (SI) and data mining (DM) strategies. Every individual in the BSO algorithm is both a candidate solution to the problem and a data point exposing the landscape of the problem. SI and DM can be merged to gain better and broader advantages than either single technique [20].

Fig. 2 BSO algorithm (flowchart: start → initialization → solution evaluation → solution clustering/classification → solution selection → solution generation; the loop repeats until the stop criterion is met, then ends)
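A compact sketch following the flow of Fig. 2, on a toy sphere objective; all parameter values are illustrative, and the clustering step is simplified to fitness-rank grouping rather than full k-means:

```python
# Minimal BSO-style loop: cluster ideas, generate new ideas from one or
# two clusters, keep an idea if it improves (illustrative sketch only).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
n, dim, k, iters = 30, 5, 4, 200
pop = rng.uniform(-5, 5, (n, dim))              # initialization

for t in range(iters):
    fit = np.array([sphere(x) for x in pop])    # solution evaluation
    order = np.argsort(fit)
    clusters = np.array_split(order, k)         # clustering (by fitness rank here)
    scale = 0.5 * (1 - t / iters)               # shrinking mutation step
    for i in range(n):
        if rng.random() < 0.8:                  # idea from a single cluster
            c = clusters[rng.integers(k)]
            base = pop[c[0]] if rng.random() < 0.4 else pop[rng.choice(c)]
        else:                                   # idea combining two clusters
            a, b = rng.choice(k, size=2, replace=False)
            base = 0.5 * (pop[clusters[a][0]] + pop[clusters[b][0]])
        cand = base + scale * rng.normal(size=dim)   # solution generation
        if sphere(cand) < fit[i]:               # selection: keep if better
            pop[i] = cand

print("best objective:", min(sphere(x) for x in pop))
```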


7 Literature Survey

Iwendi et al. [21] explained that clustering is one of the most common issues in the history of WSN research; even so, one cannot investigate the routing of WSNs without taking proper clustering steps, since the primary constraint of a WSN is its energy source. BSO, a swarm intelligence method, can lead to more successful results on such problems. Their proposed approach is referred to as EEBSO, in which CH selection with a modified BSO technique is used to improve energy efficiency, coverage, and the data packet size. Sackey et al. [22] presented a comprehensive analysis of BSO algorithms, which have been applied in many ways, and used them to solve routing problems. LEACH is first run in its basic form and then optimized with BSO to extend the WSN's lifetime; the resulting routing algorithm takes node age, power, and distance as its basis. The results are evaluated against the LEACH experiment and indicate that the optimized scheme works better. Dhami et al. [23] surveyed various procedures that have been developed for optimizing WSN clusters and techniques. Traditionally, a single primary node is selected as CH, which limits network propagation; this node needs more energy than other nodes, which leads to a dead node. Moreover, traffic repeatedly follows the same path, making the route crowded and consuming more energy over the network's lifetime. A generalized EE solution has been developed with Virtual Grid-based Dynamic Route Adjustment (VGDRA), which increases the overall efficiency of the WSN. As LEACH is not static, load transfers and adjustments improve outcomes with fewer and less complicated loops than other approaches, and the added approach is suitable for saving energy. The simulation results of the suggested solution were obtained with MATLAB. Sharma and Kulkarni [24] observed that since the nodes in a WSN are randomly distributed, the network topology varies greatly; it is therefore challenging to locate wireless sensor nodes in the network during emergency operations. Various routing schemes have been combined with optimization methods like ABC, ACO, or HBO for efficient routing. The authors introduced a new routing scheme based primarily on the existing IECBR and described a more energy-efficient routing chain, where IECBR uses the HBO technique for the best node arrangement. The researchers modified the HBO scheme with an energy-aware selection step based on individual node position, to improve the performance of HBO-based IECBR and to ensure that network nodes are not drained below their energy threshold, distributing their loads before node power is exhausted along the shortest path. Sharma and Sharma [25] indicate that ABC is an energy-efficient, swarm-based optimization algorithm, and their proposed solution makes considerable progress over existing alternatives. WSNs comprise large, dispersed transmission infrastructure and network equipment with various sensing technologies aiding particular tasks. Their work shows that density grid-based clustering in WSNs enhanced the performance of the WSN by using array-focused information grouping; however, no optimization method for effective route aggregation in density-based clustering is considered in any scenario.

8 Conclusion

Most WSNs are used in real-time applications. Owing to the harsh environments in which they operate, WSNs need a high level of security, along with energy-aware and real-time routing, protection, and node-level solutions. A WSN consists of tiny sensor nodes with limited processing power and a restricted amount of built-in storage for sensing various types of data from a given environment. Routing in WSNs is extremely challenging because these networks' characteristics differ from those of other wireless networks such as mobile ad hoc networks or cellular networks. The brain storm optimization algorithm is inspired by the human brainstorming process. The BSO-based clustering routing takes energy capacity, coverage, and data rate into account in its fitness function and ensures an entirely secure routing path from cluster members to CHs and from CHs to the base station across the entire network structure.
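The surveyed papers do not give the exact fitness formula, but a weighted-sum fitness of the kind described here, with illustrative weights and normalized terms (all symbols below are assumptions for illustration), could take the form:

\[
F(\mathrm{CH}) \;=\; w_1 \,\frac{E_{\mathrm{res}}(\mathrm{CH})}{E_{\mathrm{init}}}
\;+\; w_2\, C_{\mathrm{cov}}(\mathrm{CH})
\;+\; w_3\, R_{\mathrm{data}}(\mathrm{CH}),
\qquad w_1 + w_2 + w_3 = 1,
\]

where \(E_{\mathrm{res}}/E_{\mathrm{init}}\) is the normalized residual energy (capacity), \(C_{\mathrm{cov}}\) the fraction of cluster members covered, and \(R_{\mathrm{data}}\) the normalized data rate; a candidate CH maximizing \(F\) would be preferred.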

References
1. S. Koul, H. Kaur, Power-efficient routing in wireless sensor networks. Int. J. Sci. Techn. Advanc. 2(4), 209–214 (2016)
2. H. Yetgin, K.T.K. Cheung, M. El-Hajjar, L. Hanzo, A survey of network lifetime maximization techniques in wireless sensor networks. IEEE Commun. Surv. Tutor. 1553–1877 (2016)
3. A. Goyal, V. Sharma, Improving the MANET routing algorithm by GC-efficient neighbor selection algorithm, in International Conference on Advancements in Computing & Management (ICACM-2019), published in SSRN-Elsevier (2019), pp. 360–365
4. R. Mishra, V. Jha, R.K. Tripathi, A.K. Sharma, Energy-efficient approach in wireless sensor networks using game-theoretic approach and ant colony optimization. Wireless Pers. Commun. 95, 3333–3355 (2017)
5. A. Goyal, S. Kumar, Development of hybrid ad hoc on demand distance vector routing protocol in mobile ad hoc network. Int. J. Emerg. Technol. 11(2) (2020). ISSN (Print): 0975-8364; ISSN (Online): 2249-3255
6. V. Katiyar, A survey on clustering algorithms for heterogeneous wireless sensor networks. Int. J. Adv. Network. Appl. 02(04), 745–754 (2011)
7. A. Goyal, Modifying the MANET routing algorithm by GBR CNR-efficient neighbor selection algorithm. Int. J. Innov. Technol. Explor. Eng. 8(10) (2019). ISSN: 2278-3075
8. Z. Cao, X. Hei, L. Wang, Y. Shi, X. Rong, An improved brain storm optimization with differential evolution strategy for applications of ANNs. Math. Probl. Eng. 2015, Article ID 923698, 18 pp. (2015)
9. S. Sendra, J. Lloret, M. Garcia, J.F. Toledo, Power saving and energy optimization techniques for wireless sensor networks. J. Commun. 6(6) (2011)
10. R. Swetha, V. Santhosh Amarnath, V.S. Anitha Sofia, Wireless sensor network: a survey. Int. J. Adv. Res. Comput. Commun. Eng. 7(11) (2018)
11. S.K. Gupta, P. Sinha, Overview of wireless sensor network: a survey. Int. J. Adv. Res. Comput. Commun. Eng. 3(1) (2014)


12. R. Soua, P. Minet, A survey on energy-efficient techniques in wireless sensor networks, in 2011 4th Joint IFIP Wireless and Mobile Networking Conference (WMNC 2011). https://doi.org/10.1109/wmnc.2011.6097244
13. S.K. Singh, Routing protocols in wireless sensor networks—a survey. Int. J. Comput. Sci. Eng. Survey (IJCSES) 1(2) (2010)
14. N. Rathi, J. Saraswat, P. Bhattacharya, A review on routing protocols for application in wireless sensor networks. Int. J. Distrib. Parallel Syst. (IJDPS) 3(5) (2012)
15. Mamta, Various clustering techniques in wireless sensor network. Int. J. Comput. Appl. Technol. Res. 3(6), 381–384 (2014)
16. J. Shujuan, L. Keqiu, LBCS: a load-balanced clustering scheme in wireless sensor networks, in Third International Conference on Multimedia and Ubiquitous Engineering (2009). ISBN: 978-0-7695-3658-3. https://doi.org/10.1109/MUE.2009.47
17. L. Qin, Research on fuzzy clustering algorithm in wireless sensor network, in 5th International Conference on Education, Management, Information, and Medicine (EMIM) (2015), pp. 503–506
18. P. Sasikumar, S. Khara, K-means clustering in wireless sensor networks. IEEE (2012), pp. 1–8. https://doi.org/10.1109/CICN.2012.136
19. Q. Wang, S. Guo, J. Hu, Y. Yang, Spectral partitioning and fuzzy C-means based clustering algorithm for big data wireless sensor networks. EURASIP J. Wirel. Commun. Netw. 1, 1–4 (2018)
20. S. Cheng, Y. Sun, J. Chen, A comprehensive survey of brainstorm optimization algorithms, in 2017 IEEE Congress on Evolutionary Computation (CEC) (2017). https://doi.org/10.1109/cec.2017.7969498
21. S.H. Sackey, J.A. Ansere, J.H. Anajemba, M. Kamal, C. Iwendi, Energy efficient clustering based routing technique in WSN using brain storm optimization, in 2019 15th International Conference on Emerging Technologies (ICET) (2019). https://doi.org/10.1109/icet48972.2019.8994740
22. M. Sackey, S.H. Chen, J. Kofie, N. Bulgan, BrainStorm optimization for energy-saving routing algorithm in wireless sensor networks. Int. J. Sci. Res. Publ. 9(5) (2019)
23. M. Dhami, V. Garg, N.S. Randhawa, Enhanced lifetime with less energy consumption in WSN using genetic algorithm based approach, in IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver (2018), pp. 865–870. https://doi.org/10.1109/IEMCON.2018.8614754
24. D. Sharma, S. Kulkarni, Network lifetime enhancement using improved honey bee optimization based routing protocol for WSN, in Second International Conference on Inventive Communication and Computational Technologies (ICICCT), Coimbatore (2018), pp. 913–918. https://doi.org/10.1109/ICICCT.2018.8473267
25. R. Sharma, S. Sharma, Evaluating the performance of density grid-based clustering using the ABC technique for efficient routing in WSNs, in 7th International Conference on Cloud Computing, Data Science & Engineering—Confluence (2017). https://doi.org/10.1109/confluence.2017.7943193

A Secure and Intelligent Approach for Next-Hop Selection Algorithm for Successful Data Transmission in Wireless Network
Ruchi Kaushik, Vijander Singh, and Rajani Kumari

Abstract In a secure routing protocol, it is hard to maintain quality of service while determining an optimal route and detecting malicious nodes. This paper proposes a secure and intelligent approach to next-hop selection for successful data transmission in wireless networks. To enhance security, multi-class anomaly detection is used for better quality. To achieve maximum outcomes with minimum input, infinite feature selection and a multi-class support vector machine are used to identify suspicious activity together with the corresponding attack name. These techniques detect known and unknown attacks in a multi-class environment using the standard UNSW_NB15 dataset, generated with the IXIA PerfectStorm tool at the Australian Centre for Cyber Security (ACCS). Next-hop selection is improved with a hybrid grey wolf genetic algorithm built on three functions. The trust-aware function is evaluated through graph theory using D-S evidence, with the nodes deployed grid-wise; the energy-aware function is evaluated with a radio energy dissipation model; and the last function, load, directly impacts the delay. The proposed secure and intelligent approach for next-hop selection gives better simulation results than the SRPMA and IASR routing algorithms. Simulation results in MATLAB 2015b show that the proposed algorithm achieves the desired performance against malicious nodes, identifying their corresponding attack names. Keywords Wireless sensor network · Multi-objective · Hybrid · Anomaly detection · Intrusion detection · Multi-class SVM · Grey wolf optimization · Genetic algorithm · Route discover · Trust aware · Infinite feature selection

R. Kaushik
Amity Institute of Information Technology, Amity University Rajasthan, Jaipur, India
V. Singh
Department of CSE, Manipal University Jaipur, Jaipur, India
R. Kumari (B)
Jain (Deemed to be University), Bangalore, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_43


1 Introduction

Wireless sensor networks have wide-ranging applications [1–3]. Several distinct areas use wireless sensor network technologies to produce better research in real-world scenarios. In wireless sensor networks, security is one of the biggest challenges in routing protocols, since attackers at different nodes can execute distinct malicious activities within the network [4]. Some security attacks, such as black-hole attacks, denial of service, and spoofing, directly affect the routing level. To address all security aspects, a multi-objective secure hybrid routing protocol is designed. Many researchers have proposed different techniques and methods to improve security; however, the approaches in [5–7] do not pay sufficient attention to energy and trust. Energy and trust play a vital role in security in wireless sensor networks because of the limited communication among nodes [8–10]. Designing a secure and intelligent approach to successful data transmission in wireless networks with low energy consumption and maximal trust between nodes for next-hop selection makes appropriate results quite hard to achieve. Several multi-objective (also known as combinatorial) optimization models have been studied [11–14]. These models introduce distinct energy-trust-based optimization techniques; ant colony, buffalo, grey wolf, donkey, genetic, and artificial bee colony [15–17] are a few examples of optimization methods. A few multi-objective hybrid grey wolf and genetic methods [18–21] have been studied, taking research gaps in multi-objective hybrid optimization problems as motivation to achieve maximum safety with less energy consumption [22–26]. The next section of this paper is a literature review.

2 Literature Review

The travelling salesman problem, which finds the shortest path with the lowest cost, is the best-known example of a real-world routing optimization problem. Many researchers have proposed distinct algorithms and methods to achieve different aspects of security; a few are discussed here.
Saleem et al. [1] improved a random key encryption mechanism using network parameters such as energy, time, and overhead in a cost-effective encryption-based routing protocol for wireless sensor networks. The NS-2 simulator was used for the simulations, which show better results and an analysis of malicious activities at the nodes.
Fang et al. [2] proposed trust evaluation based on the beta distribution by monitoring the previous behaviour of nodes in wireless sensor networks. This work focuses only on communication between nodes; security in the communication phase still needs improvement.


Sakthidevi and Srievidhyajanani [3] introduced a fuzzy-theory-based framework for wireless sensor networks, in which fuzzy theory is used to improve the trust of nodes. Its energy consumption is high.
Fahad et al. [4] modified grey wolf optimization for cluster-based vehicular ad hoc networks, using the grey wolf concept to achieve cluster efficiency. This work does not focus on the quality of the algorithm.
Luo et al. [5] proposed an improved ACO-based routing protocol for wireless sensor networks with two functions, residual energy and trust, using fuzzy theory. These functions did not give appropriate results in terms of security.
Kaushik et al. [6] proposed an improved routing protocol based on the grey wolf algorithm in wireless sensor networks, in which a meta-heuristic distance-based domain methodology is used to improve energy. It focuses only on energy consumption.
Mirjalili and Dong [7] gave the concept of the grey wolf optimization algorithm.
Sun et al. [9] proposed a secure routing protocol based on ant colony optimization, with two objective functions, energy and trust, under four constraints. A path is evaluated using a crowding distance mechanism, and node trust is improved using D-S evidence theory with conflict processing. It consumes high energy, and the number of objective functions could be increased using hybrid methods.
Raychaudhuri and De [10] proposed a multi-objective optimization based on hybrid routing in wireless sensor networks, which focuses on hybridizing particle swarm optimization and ant colony optimization.
Sajjad et al. [12] proposed a system to find malicious activities through trust-based intrusion detection in wireless sensor networks, focusing on the hello flood and selective forwarding attacks. This work does not address the energy consumption of nodes.
Cohen et al. [13] improved trustworthiness based on the beta distribution in wireless sensor networks, using three trust scenarios to detect the abnormal behaviour of nodes. The nodes were not dependent upon previous behaviour.
Yang et al. [14] proposed a trust routing algorithm based on D-S evidence theory in wireless mesh networks. The first step is to evaluate the trustworthiness of nodes, which depends on the previous and current behaviour of the nodes. This work does not address energy and reliability.
Zhao et al. [17] proposed a secure routing protocol based on the grey wolf algorithm, calculating a fitness function through which an optimized solution is evaluated using the behaviour of wolves. It gives better results than LEACH but has high energy consumption.
Jiang and Zhang [19] implemented a grey-wolf-based algorithm for two cases of the combinatorial scheduling problem: job shop and flexible job shop.
Jabinian et al. [20] proposed energy optimization in wireless sensor networks using the grey wolf algorithm, employing genetic algorithm concepts for node communication and different energy models to optimize energy using arbitrary sets.


Gupta et al. [21] proposed energy-balanced routing using a genetic algorithm in k-connected wireless sensor networks, taking distance and residual energy as parameters through which the traffic is controlled.
Kong et al. [22] proposed an energy-aware routing protocol based on a genetic algorithm, consisting of several stations for transmitting and receiving data, and implemented five methods for analysing the performance and stability of nodes.
Al-Aboody and Al-Raweshidy [23] proposed a grey-wolf-based energy routing protocol for heterogeneous wireless sensor networks. It evaluated energy, lifetime, and the stability period, but the algorithm is not secure and did not give appropriate results.
Liu et al. [24] improved energy-based routing in wireless sensor networks using an improved LEACH algorithm that evaluates energy, considering average energy and residual node energy, and furthermore includes a threshold.
In summary, many authors have proposed different scenarios for enhancing security, but there remain gaps in terms of energy consumption, next-hop selection, and reliability. The present work covers all these aspects by securing the route with three functions (energy, trust, and connection request) under some constraints, and it evaluates the network parameters throughput, delay, energy consumption, packet loss rate, and packet receiving rate. Recently, Goyal et al. [27, 28] proposed new energy-efficient techniques for WSN. A detailed study of WSN architectures was carried out by Singh et al. [29]. Manju et al. [30, 31] studied the target coverage problem and proposed a new approach for WSN.

3 Proposed System Model

• This paper introduces a secure and intelligent approach for the next-hop selection algorithm for successful data transmission in a wireless network. First, raw data are collected from different sources. After data collection, a pre-processing phase normalizes the whole dataset. The third step is data selection using the infinite feature selection method, followed by data splitting. After these steps, the training phase starts to train the network; training of the data is the main phase of this work because it determines the accuracy. A minimal sketch of this detection pipeline is given after this list.
• A network is created for training the data, and the network parameters are defined. Finally, a fully trained network is obtained through infinite feature selection and the other methods. The trained network is applied at the next hop to secure the path. The network is initialized with respect to previous work to secure the route. Evaluation of the nodes starts with the trained network, and malicious nodes are blacklisted. The next step of the proposed work is next-hop selection using a hybrid grey wolf genetic optimization algorithm; the last steps are route selection and performance evaluation.
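The following is a hypothetical Python sketch of the training pipeline in the first bullet: normalization, feature selection, data splitting, and a multi-class SVM. A simple univariate filter stands in for the infinite feature selection step, and the file name and column names follow the UNSW_NB15 convention but are assumptions here, not details from the paper.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Assumed CSV export of the UNSW_NB15 training set with an
# "attack_cat" column giving the attack name per record.
df = pd.read_csv("UNSW_NB15_training.csv")
X = df.select_dtypes("number").drop(columns=["label"], errors="ignore")
y = df["attack_cat"].fillna("Normal")

# Pre-processing: scale every numeric feature into [0, 1].
X = MinMaxScaler().fit_transform(X)

# Feature selection: keep the 20 most discriminative features
# (a stand-in for infinite feature selection).
X = SelectKBest(f_classif, k=20).fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Multi-class SVM: scikit-learn's SVC handles multi-class natively.
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))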


– Three functions are used to evaluate the next-hop selection process.
Energy-aware function: The first function is energy aware; it is calculated with an energy dissipation model to minimize and manage energy consumption. The energy model adopts multipath transmission with d^4 energy decay and free-space transmission with d^2 energy decay [32–34].
Trust-aware function: The trust-aware function is calculated with a graph model over network parameters including delay, throughput, packet loss rate, energy, and packet receiving rate. This paper uses watchdog as the base mechanism: every node monitors its neighbouring nodes, as well as their previous and current behaviour, to evaluate their trustworthiness level. The results form trust evidence, which represents the trust value of the nodes. Direct trust is calculated from direct observation of the previous and current behaviour of nodes in the communication phase, while indirect trust is evaluated from the trust relationships built between nodes.
Load request function: The load request function of a path is evaluated with a standard equation. A route is established with the help of all these functions; the result is an improved secure routing protocol for next-hop selection using a hybrid grey wolf genetic optimization algorithm. This paper hybridizes the grey wolf and genetic algorithms to obtain better results; an illustrative sketch follows.
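The sketch below is a compact, illustrative hybrid of grey wolf optimization (GWO) and a genetic algorithm: each iteration applies the standard GWO move towards the alpha/beta/delta wolves, followed by a GA crossover-and-mutation step. It also includes the common first-order radio dissipation model (free space d^2 / multipath d^4, as in [32–34]). In the paper the optimized quantity combines the energy-, trust-, and load-aware functions; a sphere function stands in for that composite cost here, and the constants and GA details are assumptions, not the authors' exact choices.

import numpy as np

rng = np.random.default_rng(1)

# First-order radio dissipation model with commonly used constants.
E_ELEC = 50e-9          # electronics energy, J/bit
EPS_FS = 10e-12         # free-space amplifier energy, J/bit/m^2
EPS_MP = 0.0013e-12     # multipath amplifier energy, J/bit/m^4
D0 = np.sqrt(EPS_FS / EPS_MP)   # crossover distance

def tx_energy(k_bits, d):
    """Energy to transmit k_bits over distance d metres."""
    amp = EPS_FS * d ** 2 if d < D0 else EPS_MP * d ** 4
    return k_bits * (E_ELEC + amp)

def objective(x):
    """Stand-in for the composite energy/trust/load cost (sphere)."""
    return float(np.sum(x ** 2))

def gwo_ga(obj, dim=8, n=20, iters=200, lo=-5.0, hi=5.0, pm=0.1):
    wolves = rng.uniform(lo, hi, (n, dim))
    for t in range(iters):
        fit = np.array([obj(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fit)[:3]]
        a = 2.0 * (1 - t / iters)      # linearly decreasing GWO parameter
        # GWO step: move every wolf towards the three best wolves.
        for i in range(n):
            x = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                x += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(x / 3.0, lo, hi)
        # GA step: one-point crossover of two random wolves plus mutation;
        # the child replaces the current worst wolf if it is better.
        i, j = rng.choice(n, 2, replace=False)
        cut = int(rng.integers(1, dim))
        child = np.concatenate([wolves[i][:cut], wolves[j][cut:]])
        mut = rng.random(dim) < pm
        child[mut] = rng.uniform(lo, hi, int(mut.sum()))
        fit = np.array([obj(w) for w in wolves])
        worst = int(np.argmax(fit))
        if obj(child) < fit[worst]:
            wolves[worst] = child
    return wolves[int(np.argmin([obj(w) for w in wolves]))]

best = gwo_ga(objective)
print("best cost:", objective(best))
print("energy to send 4000 bits over 60 m:", tx_energy(4000, 60.0), "J")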

4 Flow Diagram of the Proposed Model See Fig. 1.

5 Results Analysis

In the proposed model, the objective functions are energy aware, trust aware, and load aware; with the help of all of them, the route is established and updated. The results are as follows. With the number of malicious nodes varying from 0 to 10, the energy consumption of ADI-SRP and ADI-SRP-GWOGA is 28.830 and 28.1162, respectively. The packet loss rate of the ADI-SRP-GWOGA routing algorithm increases from 3.214 to 3.631, and its packet delivery rate correspondingly falls from 96.785 to 96.369. The delays of ADI-SRP and ADI-SRP-GWOGA are 1.2803e−06 and 9.13504e−07, respectively. All results were obtained with 0 to 10 malicious nodes and show the better performance of the proposed scheme. The experiments were conducted in the MATLAB 2015b simulator using a 1200 m * 1200 m area with 100 nodes (Figs. 2, 3, 4, 5, 6, 7, 8, 9, and 10).


Fig. 1 Flow chart of proposed algorithm

Fig. 2 Simulation results in MATLAB 2015b


Fig. 3 Network structure in MATLAB (X/Y plot of 100 numbered sensor nodes over a 1200 m * 1200 m area)

Fig. 4 Network structure of route established (the same 100-node deployment over a 1200 m * 1200 m area)

6 Conclusion and Future Work

This paper established a route using three objective functions and an improved hybrid GWOGA routing protocol. Trust is calculated with graph theory and D-S evidence to find trustworthiness, and energy is calculated with the energy dissipation model. The grey wolf algorithm is combined with a genetic algorithm to make the results more optimal. All simulation results in MATLAB 2015b show that the ADI-secure routing protocol with grey wolf optimization genetic algorithm outperforms the ADI-secure routing protocol and the IASR routing algorithm on the network parameters energy, packet loss rate, delivery rate, and delay. Future work will address network lifetime, probability of failure, and validation of the model in distinct scenarios.

Fig. 5 Fitness curve of proposed hybrid optimization model (fitness vs. iterations, decreasing from about 5 to 3.2 over 100 iterations)

Fig. 6 Simulation results for delay


Fig. 7 Comparison graph for energy consumption in MATLAB

Fig. 8 Comparison graph for packet delivery rate of proposed model


Fig. 9 Comparison graph for packet loss rate of proposed model

Fig. 10 Network structure with malicious node detection


References
1. K. Saleem, A. Derhab, M.A. Orgun, J. Al-Muhtadi, J.J. Rodrigues, M.S. Khalil, A. Ali Ahmed, Cost-effective encryption-based autonomous routing protocol for efficient and secure wireless sensor networks. Sensors 16(4), 460 (2016)
2. W. Fang, C. Zhang, Z. Shi, Q. Zhao, L. Shan, BTRES: beta-based trust and reputation evaluation system for wireless sensor networks. J. Netw. Comput. Appl. 59, 88–94 (2016)
3. I. Sakthidevi, E. Srievidhyajanani, Secured fuzzy based routing framework for dynamic wireless sensor networks, in 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT). IEEE, Mar 2013, pp. 1041–1046
4. M. Fahad, F. Aadil, S. Khan, P.A. Shah, K. Muhammad, J. Lloret, I. Mehmood, et al., Grey wolf optimization based clustering algorithm for vehicular ad-hoc networks. Comput. Electr. Eng. 70, 853–870 (2018)
5. Z. Luo, R. Wan, X. Si, An improved ACO-based security routing protocol for wireless sensor networks, in 2013 International Conference on Computer Sciences and Applications. IEEE, Dec 2013, pp. 90–93
6. A. Kaushik, S. Indu, D. Gupta, A grey wolf optimization approach for improving the performance of wireless sensor networks. Wireless Pers. Commun. 106(3), 1429–1449 (2019)
7. S. Mirjalili, J.S. Dong, Multi-objective Optimization Using Artificial Intelligence Techniques (Springer International Publishing, 2020)
8. G. Han, J. Jiang, L. Shu, J. Niu, H.C. Chao, Management and applications of trust in wireless sensor networks: a survey. J. Comput. Syst. Sci. 80(3), 602–617 (2014)
9. Z. Sun, M. Wei, Z. Zhang, G. Qu, Secure routing protocol based on multi-objective ant colony optimization for wireless sensor networks. Appl. Soft Comput. 77, 366–375 (2019)
10. A. Raychaudhuri, D. De, Bio-inspired algorithm for multi-objective optimization in wireless sensor network, in Nature Inspired Computing for Wireless Sensor Networks (Springer, Singapore, 2020), pp. 279–301
11. H. Alzaid, M. Alfaraj, S. Ries, A. Jøsang, M. Albabtain, A. Abuhaimed, Reputation-based trust systems for wireless sensor networks: a comprehensive review, in IFIP International Conference on Trust Management, June 2013 (Springer, Berlin, 2013), pp. 66–82
12. S.M. Sajjad, S.H. Bouk, M. Yousaf, Neighbor node trust based intrusion detection system for WSN. Procedia Comput. Sci. 63, 183–188 (2015)
13. D. Cohen, M. Kelly, X. Huang, N.K. Srinath, Trustability based on beta distribution detecting abnormal behaviour nodes in WSN, in 2013 19th Asia-Pacific Conference on Communications (APCC), Aug 2013. IEEE (2013), pp. 333–338
14. K. Yang, J.F. Ma, C. Yang, Trusted routing based on D-S evidence theory in wireless mesh network. J. Commun. 32(5), 89–96 (2011)
15. S. Hosseini, A. Al Khaled, S. Vadlamani, Hybrid imperialist competitive algorithm, variable neighborhood search, and simulated annealing for dynamic facility layout problem. Neural Comput. Appl. 25(7–8), 1871–1885 (2014)
16. S. Hosseini, A. Al Khaled, A survey on the imperialist competitive algorithm metaheuristic: implementation in engineering domain and directions for future research. Appl. Soft Comput. 24, 1078–1094 (2014)
17. X. Zhao, H. Zhu, S. Aleksic, Q. Gao, Energy-efficient routing protocol for wireless sensor networks based on improved grey wolf optimizer. KSII Trans. Internet Inf. Syst. 12(6) (2018)
18. H. Faris, I. Aljarah, M.A. Al-Betar, S. Mirjalili, Grey wolf optimizer: a review of recent variants and applications. Neural Comput. Appl. 30(2), 413–435 (2018)
19. T. Jiang, C. Zhang, Application of grey wolf optimization for solving combinatorial problems: job shop and flexible job shop scheduling cases. IEEE Access 6, 26231–26240 (2018)
20. Z. Jabinian, V. Ayatollahitafti, H. Safdarkhani, Energy optimization in wireless sensor networks using grey wolf optimizer. J. Soft Comput. Decis. Support Syst. 5(3), 1–6 (2018)
21. S.K. Gupta, P. Kuila, P.K. Jana, GA based energy efficient and balanced routing in k-connected wireless sensor networks, in Proceedings of the First International Conference on Intelligent Computing and Communication (Springer, Singapore, 2017), pp. 679–686


22. L. Kong, J.S. Pan, V. Snášel, P.W. Tsai, T.W. Sung, An energy-aware routing protocol for wireless sensor network based on genetic algorithm. Telecommun. Syst. 67(3), 451–463 (2018)
23. N.A. Al-Aboody, H.S. Al-Raweshidy, Grey wolf optimization-based energy-efficient routing protocol for heterogeneous wireless sensor networks, in 2016 4th International Symposium on Computational and Business Intelligence (ISCBI), Sept 2016. IEEE (2016), pp. 101–107
24. Y. Liu, Q. Wu, T. Zhao, Y. Tie, F. Bai, M. Jin, An improved energy-efficient routing protocol for wireless sensor networks. Sensors 19(20), 4579 (2019)
25. A. Lipare, D.R. Edla, V. Kuppili, Energy efficient load balancing approach for avoiding energy hole problem in WSN using grey wolf optimizer with novel fitness function. Appl. Soft Comput. 84, 105706 (2019)
26. A. Al Khaled, S. Hosseini, Fuzzy adaptive imperialist competitive algorithm for global optimization. Neural Comput. Appl. 26(4), 813–825 (2015)
27. A. Goyal, S. Mudgal, S. Kumar, A review on energy-efficient mechanisms for cluster-head selection in WSNs for IoT application, in IOP Conference Series: Materials Science and Engineering, vol. 1099, no. 1 (IOP Publishing, 2021), p. 012010. https://doi.org/10.1088/1757-899X/1099/1/012010
28. A. Goyal, V.K. Sharma, S. Kumar, R.C. Poonia, Hybrid AODV: an efficient routing protocol for MANET using MFR and firefly optimization technique. J. Interconnection Netw. 16(8). https://doi.org/10.1142/S0219265921500043
29. A.P. Singh, A.K. Luhach, X.Z. Gao, S. Kumar, D.S. Roy, Evolution of wireless sensor network design from technology centric to user centric: an architectural perspective. Int. J. Distrib. Sens. Netw. 16(8) (2020). https://doi.org/10.1177/1550147720949138
30. B.P. Manju, S. Kumar, Target K-coverage problem in wireless sensor networks. J. Discrete Math. Sci. Cryptogr. 23(2), 651–659
31. Manju, S. Singh, S. Kumar, A. Nayyar, F. Al-Turjman, L. Mostarda, Proficient QoS-based target coverage problem in wireless sensor networks. IEEE Access 8, 74315–74325 (2020). https://doi.org/10.1109/ACCESS.2020.2986493
32. T.M. Behera, U.C. Samal, S.K. Mohapatra, Energy-efficient modified LEACH protocol for IoT application. IET Wireless Sens. Syst. 8(5), 223–228 (2018)
33. E. Alnawafa, I. Marghescu, New energy efficient multi-hop routing techniques for wireless sensor networks: static and dynamic techniques. Sensors 18(6), 1863 (2018)
34. M. Tarhani, Y.S. Kavian, S. Siavoshi, SEECH: scalable energy efficient clustering hierarchy protocol in wireless sensor networks. IEEE Sens. J. 14(11), 3944–3954 (2014)

Proposed Sustainable Paradigm Model to Data Storage of IoT Devices into AWS Cloud Storage
Sana Zeba and Mohammad Amjad

Abstract This manuscript addresses the Internet of Things (IoT) and data storage. IoT networks face issues including lagging device updates, weak security protocols, and privacy concerns. A wide diversity of IoT applications has been established and deployed using different IoT frameworks; an IoT framework is essentially a set of rules, standards, and protocols that simplify the development of IoT-related applications. In this research work, a new three-layer AWS (Amazon Web Services) IoT model is proposed, comprising a cloud layer, an edge node layer, and an IoT node layer, with respect to security components. The proposed AWS-supported IoT system is also evaluated: the proposed paradigm is analysed by publishing and subscribing temperature sensor data on the AWS cloud. The circuit diagram of the proposed AWS IoT model is discussed, along with all steps related to the AWS cloud connection and the establishment of things on the cloud. The proposed publish-and-subscribe model is linked with a mobile application that receives publish and subscribe notifications from the AWS IoT cloud. Keywords Internet of Things (IoT) · Temperature sensor · Cloud computing · AWS · AWS services

S. Zeba (B) · M. Amjad
Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_44

1 Introduction

The Internet of Things (IoT) plays an amazing role in all aspects of humans' daily lives, including smart cities, healthcare, automobiles, government sectors, homes, transportation, entertainment, industrial appliances, and sports. Kevin Ashton coined the term "IoT" in 1999 to support the Radio Frequency Identification (RFID) concept, which involves embedded sensors and actuators, although the original idea was introduced in the 1960s. Various functionalities of IoT helped it gain strong fame in the summer of 2010 [1]. The IoT is finally starting to reach its potential, with large-scale deployments of up to 50,000 connected IoT devices more than doubling in the last 2 years. According to the International Telecommunication Union (ITU), the Internet of Things is defined as follows: "The Internet of Things will connect the world's objects in both an intelligent and sensory way" [2]. Gloukhovtsev [3] notes that, according to the Machina report, the number of M2M connections will increase from 5 billion in 2014 to 27 billion in 2024. Cloud computing technology provides splendid elastic computation and data management proficiencies for the IoT network [4]. Cloud computing has been defined as a mixture of technologies such as networking, computational virtualization, smart grids, and hardware-software services. Generally, an IoT network contains a large assortment of things that are restricted in data storage and computing power, so cloud management services are increasingly employed to manage and store IoT-related components. The cloud service model contains three fundamental classes of services: Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and Infrastructure-as-a-Service (IaaS). The total number of connected IoT devices in the world will drastically rise from 0.35 billion in 2017 to 75.44 billion by 2025. In this paper, the proposed paradigm model is built using Amazon Web Services (AWS) with IoT, together with its circuit diagram [5]. The organization of this paper is as follows: Sect. 2 discusses relevant work related to the AWS IoT platform; Sect. 3 explains preliminaries such as AWS cloud in IoT architecture, AWS IoT Core, and Amazon IoT-related services; Sect. 4 presents the proposed paradigm and model of AWS data storage; Sect. 5 discusses and analyses the model; Sect. 6 gives future directions; and finally, Sect. 7 concludes the work.

2 Related Work

IoT technology offers potentialities that make it possible to develop a huge number of applications in domains such as home, transportation, city, medical, smart environment management, education, and social services. Each domain has its own inimitable attributes in terms of real-time processing, data capacity, cloud storage space, proof of identity, verification and authentication, and security and privacy considerations. Tawalbeh et al. [1] proposed an innovative generic and extended layered architecture that includes privacy and security mechanisms with layer verification, and used AWS virtual machines to represent the IoT nodes. Likewise, Hang et al. [6] proposed a novel platform called sensor-cloud, which can virtualize all physical devices in the CoT environment and use virtual sensors in the required applications; these are the fundamentals of the sensor-cloud architecture. The author of [2] provided a comprehensive review of public cloud IoT solutions at the PaaS level, focused mainly on security features, and produced a comparison table. Zhou et al. [5] illustrated the developing trend of security in IoT research and disclosed how IoT properties have affected existing security research by exploring the latest research work related to IoT security. The author of [7] presented a survey of the main security concerns of IoT frameworks, considering eight frameworks and, for each, examining the proposed architecture and the development of third-party smart apps. An "Automated Pollution Detection System using IoT and AWS Cloud" was developed to detect air pollution through an architecture integrating IoT and cloud computing [8]. Similarly, Pathak et al. [9] proposed an IoT- and cloud-based weather monitoring system that detects, records, and displays various weather parameters such as temperature and humidity. For remote monitoring of industrial and commercial applications, Andore [10] connected a Raspberry Pi to the AWS IoT platform using the MQTT (Message Queue Telemetry Transport) messaging protocol. Fan et al. [11] addressed the storage issue and proposed a hierarchical blockchain storage scheme, called Chain Splitter, in which the majority of the blockchain is stored in the cloud and the most recent blocks are stored in the overlay network.

3 Preliminaries

3.1 AWS Cloud in IoT Architecture

The cloud computing model provides a cloud storage platform that stores huge amounts of data on the internet through cloud services that manage and control the storage. Cloud storage follows the principle of "on-demand self-service": the cloud provides services on demand or on purchase. These cloud computing services give agility and durability at a global scale, with "anywhere, anytime" data access. AWS is an acronym for "Amazon Web Services." The AWS cloud service provides support for IoT devices and cloud services for implementing IoT applications. The IoT universe with AWS: generally, the Internet of Things (IoT) comprises components such as applications, cloud model services, communications, hardware devices, interfaces, sensors, and actuators.

3.2 AWS IoT Core

AWS IoT provides device software and services that can be used to integrate IoT devices with AWS cloud-based solutions. Service providers can connect IoT devices to cloud services and perform operations on them. AWS IoT Core can sustain millions of IoT devices and trillions of messages from IoT-based applications, and it routes those messages to AWS endpoints securely. The devices of an IoT system send information to the AWS cloud by publishing messages over the MQTT protocol, and an SQL-like SELECT statement can be used to extract data from any arriving MQTT message.

3.3 Amazon IoT Related Services

Amazon Web Services provides IoT services that connect IoT things to cloud storage so that different internet-connected things can interact with cloud services and applications. Compared with other IoT platforms, the AWS IoT platform provides bi-directional communication between IoT devices and cloud storage. The following describes various AWS IoT Core services [12].
AWS IoT Core messaging services: The connectivity services of the AWS IoT Core component provide secure communication between IoT things and control the messages passed between devices and AWS IoT.
Device gateway: Enables IoT devices to communicate with AWS IoT Core efficiently and securely, using X.509 certificates to secure the communication.
Message broker: Provides a secure mechanism for IoT things and AWS cloud-based applications to send and receive messages between each other (Fig. 1).
Device shadow: Device shadows make it easy to build IoT applications on AWS that interact with IoT things through the available AWS IoT Core REST APIs.
Registry: Establishes an identity for IoT devices and tracks device metadata such as parameters and abilities.

Fig. 1 AWS IoT components and AWS services


Authentication and authorization: AWS IoT Core provides mutual authentication and authorization with encryption of data at all stages of the connection, so that no data exchanged between IoT things and AWS cloud storage is left unverified.
SDK of AWS IoT Device: The AWS SDK provides the ability to quickly establish a connection with a hardware IoT device or a mobile-based application.
AWS IoT Core for LoRaWAN: Makes it possible to run LoRaWAN things with private LoRaWAN gateways and AWS cloud storage without setting up a LoRaWAN Network Server (LNS).
Rules engine: The rules engine is responsible for connecting the data from messages received via the message broker to AWS cloud services for data storage and processing.

4 Proposed Paradigm and Model

The broad layered architecture of the IoT application consists of the IoT node layer (the device layer), the edge node layer, and the cloud layer. The IoT node layer contains wireless sensor devices for sensed-data acquisition and communication protocols to forward data remotely, or local storage for additional processing of the sensed information. Through these sensor devices, data can be assembled in real time at numerous frequencies. The proposed paradigm ensures security and privacy measures in a safe IoT network and guarantees that IoT things communicate and share data securely, defending the privacy of the sensed data via encryption. Figure 2 shows that the proposed model holds the AWS IoT cloud storage as the master, Raspberry Pi 4 hardware as the edge node, and the IoT devices as virtual nodes. An AWS cloud account is needed to gain full access to the resources and services provided by AWS. Among the available AWS services, the AWS IAM web service is used: it requires certificates for the authentication of devices and regulates user access by setting up an IAM (user) account. Private keys, public keys, and certificates guarantee the secure connection of the virtual machines with the edge and the AWS cloud.

Fig. 2 Proposed 3-layer AWS IoT paradigm

In the proposed paradigm, a DHT11 sensor is used to read the humidity and temperature values; it is connected to the AWS IoT service, and the sensor data are updated using the AWS IoT SDK. The updated data can then be viewed in the Amazon IoT console to make sure the data are published to the internet. Figure 3 shows the publishing and forwarding procedure in the model, where the sensor publishes the captured data to the AWS IoT cloud, which forwards it to Amazon SNS, to which end users can subscribe.

Fig. 3 Proposed paradigm model of AWS IoT
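As a concrete illustration of the forwarding step in Fig. 3, the following hypothetical Python sketch uses boto3 to create an AWS IoT topic rule that matches the published DHT11 messages and pushes them to an Amazon SNS topic for subscribed end users (for example, the mobile application). The topic name, ARNs, IAM role, and region are placeholder assumptions, not values from the paper.

import boto3

iot = boto3.client("iot", region_name="us-east-1")

iot.create_topic_rule(
    ruleName="dht11_to_sns",
    topicRulePayload={
        # AWS IoT rules use an SQL-like SELECT over incoming MQTT messages.
        "sql": "SELECT temperature, humidity FROM 'sensors/dht11'",
        "description": "Forward DHT11 readings to SNS subscribers",
        "ruleDisabled": False,
        "actions": [{
            "sns": {
                "targetArn": "arn:aws:sns:us-east-1:123456789012:dht11-updates",
                "roleArn": "arn:aws:iam::123456789012:role/iot-sns-publish",
                "messageFormat": "RAW",
            }
        }],
    },
)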

4.1 Circuit Diagram of the Proposed Paradigm Model of AWS IoT

On the hardware side, a Raspberry Pi is connected to a DHT11 sensor and an LCD screen; the DHT11 measures temperature and humidity, and the LCD displays the values. Both the sensor and the LCD work with a +5 V power supply. The GPIO pins are connected according to the AWS IoT circuit diagram of the model, using a breadboard and jumper wires. The circuit diagram of the proposed paradigm model of AWS IoT is shown in Fig. 4. Once the program is launched on the Raspberry Pi and the output details appear in the shell window, the paradigm program is responding appropriately, and the sensor values are being published and uploaded to the Amazon AWS server.
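A minimal device-side sketch for this circuit is given below, assuming the DHT11 data pin is on GPIO4 and that the certificate and key files from the AWS IoT connection kit are in the working directory; the endpoint, file names, and topic are placeholder assumptions, and the LCD output is omitted for brevity.

import json
import time

import Adafruit_DHT                                   # DHT11 driver
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient  # AWS IoT device SDK

SENSOR, PIN = Adafruit_DHT.DHT11, 4                   # assumed GPIO pin

client = AWSIoTMQTTClient("raspberrypi-dht11")
client.configureEndpoint("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-CA.crt", "private.pem.key",
                            "certificate.pem.crt")
client.connect()

while True:
    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
    if humidity is not None and temperature is not None:
        # Publish on the illustrative topic used in the rule sketch above.
        payload = json.dumps({"temperature": temperature,
                              "humidity": humidity})
        client.publish("sensors/dht11", payload, 1)   # QoS 1
    time.sleep(10)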

5 Discussion and Analysis

The analysis reveals that Amazon's IoT Core provides service-packed solutions for IoT deployment. The following section shows snapshots of different interfaces, depending on the end users' interactions, for displaying responses through web interfaces.

Fig. 4 Circuit diagram of proposed paradigm model of AWS IoT

Figure 5 shows the snapshot interface of the AWS IoT Core demo. Connecting IoT devices such as a laptop, mobile, or deployment kit to the AWS cloud requires three steps: registering a device, downloading a connection kit, and configuring and testing the device. Here the Linux/OSX operating system is chosen. Registration records the physical device in the AWS cloud; any physical object or thing that is to be connected with the cloud can create its own shadow on the cloud through registration. After the registration of IoT devices, the devices must also be configured and tested. To configure an IoT device, first unzip the connection kit on the device and add execution permissions, then run the start script to configure the device; after successful configuration, the message "Waiting for messages from your device" appears. After performing all configuration and testing of the IoT devices, the successfully connected device interface is obtained, as shown in Fig. 6. The proposed paradigm model performs various tasks:

• Register a device in AWS IoT Core.
• Set up device security using AWS services, certificates, and policies.
• Use the SDK service to connect AWS IoT devices.
• Send and receive messages from devices through the cloud (Fig. 7).

Fig. 5 AWS IoT core demo


Fig. 6 AWS IoT core connect IoT device

Fig. 7 Configuration and testing of AWS IoT device

6 Future Work

The future direction of this work is to permit end users to design and deploy application logic and rules via virtual sensor devices. The virtual sensor devices would be accessible in a widget that can simply be dragged and dropped to compose AWS IoT Core services, using all the features and services of AWS.

7 Conclusion

AWS IoT Core can sustain trillions of IoT devices and messages, and it can process and route those messages to AWS cloud endpoints reliably and securely. This paper analysed the core IoT platforms, AWS services, IoT Core components, and related topics. The proposed 3-layer AWS IoT cloud model and the paradigm for publishing and subscribing AWS services with the cloud were presented, together with the corresponding circuit diagram of AWS IoT. The proposed model performs various tasks related to IoT device deployment, data storage, and publishing and subscribing the sensor data on the AWS cloud: temperature and humidity are sensed with the sensor and uploaded to the AWS cloud.

References
1. L. Tawalbeh, F. Muheidat, M. Tawalbeh, M. Quwaider, IoT privacy and security: challenges and solutions. Appl. Sci. 10(12), 1–17 (2020). https://doi.org/10.3390/APP10124102
2. D. Bastos, Cloud for IoT—a survey of technologies and security features of public cloud IoT solutions, in IET Conference Publication, vol. 2019, no. CP756 (2019). https://doi.org/10.1049/cp.2019.0168
3. M. Gloukhovtsev, IoT security: challenges, solutions & future prospects (2018)
4. K. Kumar, N.K. Singh, M.A. Haque, S. Haque, A comprehensive study of cyber security attacks, classification and countermeasures in the Internet of Things. Digit. Transform. Chall. Data Secur. Priv. (2021)
5. L. Zhou, L. Wang, Y. Sun, P. Lv, BeeKeeper: a blockchain-based IoT system with secure storage and homomorphic computation. IEEE Access 6(8), 43472–43488 (2018). https://doi.org/10.1109/ACCESS.2018.2847632
6. L. Hang, W. Jin, H.S. Yoon, Y.G. Hong, D.H. Kim, Design and implementation of a sensor-cloud platform for physical sensor management on CoT environments. Electronics 7(8) (2018). https://doi.org/10.3390/electronics7080140
7. M. Ammar, G. Russello, B. Crispo, Internet of Things: a survey on the security of IoT frameworks. J. Inf. Secur. Appl. 38, 8–27 (2018). https://doi.org/10.1016/j.jisa.2017.11.002
8. T. Schenkel, O. Ringhage, N. Branding, A comparative study of facial abstract (2019)
9. Y. Pathak, P.K. Shukla, A. Tiwari, S. Stalin, S. Singh, Deep transfer learning based classification model for COVID-19 disease. IRBM 1, 1–6 (2020). https://doi.org/10.1016/j.irbm.2020.05.003
10. D.B. Andore, AWS IOT platform based remote monitoring by using Raspberry Pi, vol. VI, no. X (2017), pp. 38–42
11. K. Fan et al., Blockchain-based secure time protection scheme in IoT. IEEE Internet Things J. 6(3), 4671–4679 (2019). https://doi.org/10.1109/JIOT.2018.2874222
12. O. Jukic, I. Speh, I. Hedi, Cloud-based services for the Internet of Things, in 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)—Proceedings (2018), pp. 372–377. https://doi.org/10.23919/MIPRO.2018.8400071

Potential Applications of the Internet of Things in Sustainable Rural Development in India
Md. Alimul Haque, Shameemul Haque, Moidur Rahman, Kailash Kumar, and Sana Zeba

Abstract In recent times, urban development has been rising, with major cities in India becoming hubs of development, jobs, and settlement. The rural economy therefore remains of critical importance to the country's overall growth. The critical issues faced in rural areas centre on the lack of employment, of quick and easy transportation, of immediate healthcare facilities, and of sufficient information about popular government subsidy schemes, especially in the most backward and rural areas. This paper focuses on possible uses of IoT technology to make rural livelihoods sustainable. These are classified into land, water, food protection, rural facilities and utilities, farming management, catastrophe response, healthcare, education, and electricity. The paper aims to establish the promise of IoT as a potential contributor to sustainable rural development. Studying the problems and opportunities in this field will promote cooperation between the various sectors of sustainable rural development and maximize the use of the Internet of Things for promoting sustainable rural development and enhancing rural quality of life. Keywords Internet of Things · Smart cities · Village · Sustainability · Technological effects · Sensor technology

Md. Alimul Haque (B)
Department of Computer Science, Veer Kunwar Singh University, Ara 802301, India
S. Haque
Department of Physics, Al-Hafeez College, Ara 802301, India
M. Rahman
Department of Computer Science, Jazan University, Jizan, Kingdom of Saudi Arabia
e-mail: [email protected]
K. Kumar
College of Computing and Informatics, Saudi Electronic University, Riyadh 11673, Kingdom of Saudi Arabia
e-mail: [email protected]
S. Zeba
Department of Computer Engineering, Jamia Millia Islamia, New Delhi, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_45

455

456

Md. Alimul Haque et al.

1 Introduction In recent years, tremendous attention has been paid to the growth of smart cities and smart villages. IoT technologies are the ability to monitor and remotely control simple machines based on the real-time data they generate, using omnipresent internet connectivity via broadband and mobile internet, Wi-Fi, big data processing and machine learning. These machines improve the structure and can be used efficiently for sustainable growth in cities such as traffic light control, transportation, waste management, data management, etc. The introduction of these programs is comparatively affordable and should not be confined only in metropolitan cities. Initiatives in the smart villages that have recently been introduced are primarily aimed at presenting villages with renewable energy sources and at connectivity. It is essential to extend the reach of smart villages to handle sufficient information, public transport and distribution system, etc., while leveraging the current infrastructure for solar energy and communication. The proposed approach looks at the micro-level deployment of smart villages and attempts to improve every family in the rural area. This proposal includes solar houses with required sensors, collectively referred to as smart houses to make villages smart. The Internet of Things and communication between machines play a vital role in the automatic working of devices in the houses [1]. Fire and smoke sensors are also part of the architecture which helps to identify a fire early and to ensure the safety of people at home. Using the communication module, sensor data and more information can be sent to a specific gateway and processed in the rural database. The houses serve as contact mechanisms for people, so that the whole village will benefit from the recommended design. Figure 1 presented the proposed model. Related features exist in agricultural regions around the world. There is geographic dispersal of populations. Farming is sometimes the main economic field and often, the exclusive one. Thus, rural residents face a series of factors posing growth challenges. The obstacles to rural areas in India include the underusing and/or non-sustainable Fig. 1 Proposed architecture

Potential Applications of the Internet of Things …

457

use of environmental assets; limited or lack of access for household and agricultural production to socio-economic infrastructure and utilities, public facilities and government services; lack of access to clean water and water resources; low-level literacy skills. Due to the poorly made system, the production base is weak. There is little or no revenue base for panchayat. In rural areas, small villages and/or ancient ‘relocation zones,’ more than 50% of the population lives more than 5 km on the tarred route, and more than 50% of the population use water from ponds, rivers, lakes or rainwater reservoirs. The Indian government is committed to supporting rural growth and enhancing the rural quality of life. A smart village implies that renewable energy resources that support the growth of the village should be available. It should provide residents in rural areas with decent schooling, health services, fresh water, electricity and various needs. Many other programs encourage the growth of society on a larger scale, such as “Smart Cities and Smart Villages”. Collectively, these measures set up the chance for a sort of IoT revolution in the world. This paper reflects on supporting and simplifying the lives of poor people in villages and providing them with an ability to be part of a technology revolution. The paper is arranged accordingly: Sect. 2 provides the background to Smart Villages, the Internet of Things and sustainability, which connects these entities to each other. Section 3 explains the methodology. Section 4 addresses rural sustainability empowered with IoT’s applications. Section 5 explores the problems that exist in the sustainable use of IoT and offers insights into future solutions. Finally, the paper is outlined in Sect. 6, which sheds light on the potential for prospective study in this field.

1.1 Benefits of IOT for Sustainable Rural Development The use of IoT in rural areas has various advantages and benefits, and some of the advantages are as follows: Effectiveness of input It increases the productivity of health inputs, air quality, lighting, land and water management, agriculture, etc. In a wide variety of environments like medical, smart factories and retail IoT has been deployed. Agriculture, such as cold store systems, pump control, crop and animal tracking, is also subject to various applications. Cost reduction Using sensors, farmers can check their soils contamination levels, level of water needed to hydrate standing crop, thereby reducing the cost of energy required for water supply to the crop and also the level of urea and pesticides needed to grow a good crop. Thus, the cost of production will be reduced.

458

Md. Alimul Haque et al.

Profitability It will increase the profitability of farmers. Farmers can minimize the amount of waste they create and monitor their agricultural processes using smart sensors based on temperature, humidity, sunshine, and other environmental factors. The amount of water needed to hydrate growing crops can also be decreased when a sensor discovers that the moisture levels of the soil are just right. An IoT strategy in the agricultural environment will help minimize waste and improve energy production, thereby increasing profitability. Sustainability It is possible to control and safeguard fruit and vegetables from rotting by having IoT controlled cold storage. Thus, increasing profitability for producers. It will help to fulfil the food safety goal as well. Environment protection It is possible to use IoT to find agricultural waste or stubble and can be appropriately handled so that the ecosystem is not affected. This plays a critical role in protecting the climate.

2 Literature Review This paper [2] explains the key uses of agriculture and forestry of IoT and cloud computing, primarily with the transmitting of agricultural knowledge, precise irrigation, smart crop management, protection of agricultural products and much more. In forest identification and wood monitoring and their maintenance, IoT may play an important role. This study [3] identified potential applications in agriculture for the Internet of Things for sustainable rural development. This article aims to improve the IoT strategy for agriculture and rural growth. Developers may use IoT technologies according to the literature to build country-specific, agriculture-based technologies. Many problems in the area of agriculture have been discussed in this research [4]. There is also a framework built to address these challenges. An information base is established in this study. There are some crop descriptions in this information base. This crop information addresses acquisition of expertise, demand availability, geospatial data flow and weather forecasts. This paper [5] provides a prototype framework for precision farming using an IoT cloud-based wireless sensor network. In this report, a warning device has been presented for managing plant water stress using IoT technology. The first section of this paper defined the steps towards developing a decision-making mechanism for a farming group to estimate water quantities. The paper [6] suggests a “greenhouse surveillance system” with a mix of cellular and Internet communications. The greenhouse monitoring system, built with IoT, is


precise to manage and monitor, works very quickly and is user-friendly, providing real-time monitoring of greenhouse ecosystem parameters. This system also has features such as high performance, stability and simple upgradability. This paper [7] describes the architectural elements of the Internet of Things, illustrates certain areas of operation where the Internet of Things applies, and addresses problems and security issues that must be tackled, such as comprehensive rollout, standardization, interoperability, data protection, efficient spectrum usage and unique identification. This research [8] uses an open-source framework named OpenIoT to build a Phenonet platform. Phenonet is a semantically enhanced digital agriculture scenario. The article illustrated Phenonet's applications and performance in many use cases, and the researchers showed how the OpenIoT framework helps to solve the Phenonet application's challenges. A precision agriculture architecture based on a cloud IoT is described in [9]; to validate and demonstrate their results, the researchers designed a prototype of this architecture, and the performance assessment results indicate the efficiency of the proposed architecture. The smart village model plans to provide people with high-quality facilities. In addition, government and private organizations encourage the application of ICT technology by providing sustainable solutions to existing social issues to improve the operational efficiency of the framework [10]. Some of the sectors considered to be at the forefront of focus include electricity, crime prevention, welfare, literacy, environmental services, transportation and unemployment [11]. Six components, namely smart life, smart economies, e-governance, smart citizens, smart society and smart transport, have been identified through studies on smart city structures [12].

3 Methodology Much of India's rural population depends on subsistence farming. There is also a need for people to enhance their socioeconomic standards by leveraging the services available in their region to make their lives easier.

3.1 Aims and Objectives The aim of this paper is to show the significant contribution that IoT applications can make to sustaining rural livelihoods. The research question is thus: how can the Internet of Things address the challenges plaguing rural societies in order to enhance the quality of life of common Indians?


3.2 The Objectives Are
• To boost the efficiency of IoT adoption for sustainable smallholder farming in India.
• To recognize possible areas of rural development from the literature that can be improved by IoT.
• To implement IoT-based technologies for development in these fields.
• To inform IoT policy in the context of sustainable development.

3.3 Data Collection In a literature survey, challenges in rural development that need to be addressed were identified through web searches. IoT applications that can be used were found through web searches and discussions, among other means. The keywords "potential applications for rural development" were searched in the Google Scholar, Scopus and IEEE Xplore electronic databases. After studying 24 papers related to the security issues in key technologies of IoT, 15 published articles that satisfied the requirements were selected. Based on the review, we found that some key applications of IoT will play a vital role in the field of rural development.

4 Applications of IoT for Rural Sustainability 4.1 Smart Meters In addition to energy usage data, smart meters also gather data in real time on water and gas utilization [13]. Unlike a regular meter, which produces a bill at the end of each month, a smart meter offers details about use in real time. The distinction between a smart meter and a regular meter is that users can also examine their consumption patterns and adjust them to their benefit.
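As a concrete illustration of how interval readings let users refine their consumption patterns, the following is a minimal Python sketch; the readings, interval length and 1.5x threshold are illustrative assumptions, not something prescribed by the text.

```python
def flag_peaks(readings_kwh, factor=1.5):
    """Flag interval readings well above the day's average so a household
    can see when it is over-consuming."""
    avg = sum(readings_kwh) / len(readings_kwh)
    return [i for i, r in enumerate(readings_kwh) if r > factor * avg]

# Half-hourly readings as a smart meter might report them (values invented)
print(flag_peaks([0.2, 0.3, 1.4, 0.25, 1.6, 0.2]))  # -> [2, 4]
```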

4.2 Smart Lighting Smart lighting is one of the most common IoT applications for sustainability. With the use of intelligent devices, the use of energy can be optimized dramatically. By integrating light and temperature sensors, natural light periods can be mimicked. This may have a significant effect on energy consumption, considering the fact that most people spend more time indoors than outdoors.

Potential Applications of the Internet of Things …

461

4.3 Smart Streetlights Smart streetlights are a special case of intelligent lighting in which streetlights are connected to a smart grid to exploit natural light and, wherever possible, minimize energy consumption. This infrastructure can be integrated with other networks such as air quality control and cameras.

4.4 Air Quality Control by Sensors Sensors are widely used for various indoor and outdoor [14] parametric assessments of air and its quality. Temperature, carbon dioxide, pressure and humidity levels are some of the captured parameters. With advanced sensors, the presence of chemicals such as ozone, black carbon and methane, among many others, can also be monitored. The collected data are used to identify highly contaminated sites and the causes of pollution, as seen in the recent past with the menace of stubble burning in and around Delhi.

4.5 Smart IoT-Based Agriculture Agricultural farms will have to use revolutionary technology to gain the much-needed edge to meet the burgeoning needs of the population. IoT (Internet of Things) agricultural applications can allow the industry to increase operating efficiency, reduce costs, reduce waste and boost yield quality. IoT-based smart agriculture is a system that monitors irrigation activities and automates the protection of crops in the agricultural field using sensors, so farmers can observe the condition of the farm from anywhere [15]. The IoT provides a wide range of digital agriculture applications, including the monitoring of soils and plants, crop inspection and quality, precise agricultural development, support for irrigation assessment, greenhouse monitoring and control systems, the food supply chain, and others. The following are proven technologies used in agricultural IoT applications. Agricultural drones can be a significant asset for imaging crop health, automated GIS mapping, ease of use, time-saving, and the potential for yield increases. Drone technology offers a high-tech makeover to the agriculture industry, with strategy and planning based on real-time data collection and processing.
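To make the sensor-driven irrigation decision concrete, here is a minimal Python sketch of the kind of rule an IoT field node could run; the thresholds, function name and sample readings are illustrative assumptions, not part of the original text.

```python
def irrigation_needed(soil_moisture_pct, rain_forecast_mm,
                      dry_threshold=30.0, rain_threshold=2.0):
    """Irrigate only when the soil is dry and no significant rain is
    expected, saving both water and pumping energy."""
    return soil_moisture_pct < dry_threshold and rain_forecast_mm < rain_threshold

# Readings as they might arrive from field sensors (values invented)
for moisture, rain in [(22.5, 0.0), (41.0, 0.0), (25.0, 6.5)]:
    action = "irrigate" if irrigation_needed(moisture, rain) else "hold"
    print(moisture, rain, "->", action)
```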


4.6 RFID Technology In animal monitoring and recognition, RFID is used widely. It helps to accomplish intelligent tracking, identification, animal traceability and control of livestock.

4.7 Radio Transmission Technology in Agriculture With ZigBee wireless sensor networks, self-organizing wireless data sharing can be accomplished. It has been commonly used in large-scale farming for transmitting data.

4.8 Intelligent Irrigation System An intelligent irrigation system acquires data on irrigation water, capacity and timing from satellite navigation networks, underground shallow wells, fields and integrated irrigation pipe systems, in order to automate irrigation on farms and to complete the analysis of IT applications for irrigation monitoring.

4.9 Protection of Agricultural Products In the agricultural industrial chain (production, circulation, sales), the whole process can be controlled and understood through end-to-end documentation and tracking.

4.10 Seeding and Spraying Methods for Precision By combining Global Positioning System (GPS) navigation with variable-rate seeding and fertilization technology, the consumption of chemicals, seeds and other inputs can be controlled to achieve an even application in spraying, planting and refining.

4.11 Sustainable Land and Water Resource Management Water and land are two main factors in a nation’s sustainable development in agriculture. The growing population trend in India challenges the supply of fresh natural


resources. This is a problem for planners in a complex system. The system parameters (e.g., net irrigation requirement, irrigation water cost, and total cultivable command area), as well as the decision variables (e.g., cropping pattern, surface water, and groundwater), may be modified [16]. The annual temperature has increased with global warming, and climate trends forecast that rain variability in India will be affected; even minor variations in precipitation may have a significant influence on water supply. Water adaptation may involve flexible techniques for collecting data for resource monitoring and understanding weather forecasts to strengthen early warning systems. With the support of IoT applications, water control is feasible. To inform decisions on soil and chemical use and extraction, IoT can be used to measure the levels of contaminants in underground water. IoT can also be incorporated at any stage to measure sewage levels against treatment targets and to help prevent carbon pollution.

4.12 Public Health The COVID-19 pandemic era is ongoing, and vaccines are still out of reach for much of the world's population. IoT can lead to a transformation of primary healthcare. Traditionally, a patient can only visit healthcare staff occasionally, yet chronic patients require constant reviews, and check-ups are needed not only for chronic patients but also for patients with other disorders. For people in some remote areas of India, reaching the closest clinic is also a problem due to a shortage of transport and financial constraints in the age of the COVID-19 pandemic. Furthermore, if the patient is really serious, the waiting period before clinical treatment may be too long. Patients can be monitored remotely for regular tests by implementing programs such as telemedicine. Seamless connectivity and convergence with other innovations have made IoT one of the technologies that promise to transform our lives [17]. Networks of sensor-embedded objects, devices or artifacts are known as the Internet of Things (IoT). The use of IoT in the fight against this global pandemic has spread to a variety of sectors that could play an important role in reducing coronavirus risk [18], and the Internet of Health Things (IoHT) is an extension of IoT which connects patients through a networking infrastructure to health facilities in order to track and manage vital signs in the human body [19]. Without the physical presence of patients, remote monitoring of cardiac rhythms, electrocardiography, diabetes and vital body signs is possible. Figure 2 provides an example of remote data collection by IoHT.

4.13 Smart Greenhouses Greenhouse farming is a method for increasing the yield of vegetables, fruit, plants and so on. Greenhouses regulate environmental parameters through


Fig. 2 Remote examination of medical patients by the doctors in IoT [20]

manual operation or a proportional control mechanism. These approaches are less reliable, as manual interference results in loss of productivity, energy waste and labor costs. With the assistance of IoT, a smart greenhouse can be designed; this design intelligently tracks and controls temperature, removing the need for manual interference.
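As an illustration of the kind of automatic regulation meant here, a minimal Python sketch of an on/off (hysteresis) temperature loop follows; the band limits and readings are illustrative assumptions.

```python
def thermostat_step(temp_c, heater_on, low=18.0, high=24.0):
    """One on/off (hysteresis) decision of the kind an IoT node can make,
    replacing the manual interference described above."""
    if temp_c < low:
        return True    # too cold: switch the heater on
    if temp_c > high:
        return False   # too warm: switch it off
    return heater_on   # inside the band: keep the current state

state = False
for t in [17.2, 19.5, 24.8, 22.0]:  # readings as they might stream in
    state = thermostat_step(t, state)
    print(t, "heater on" if state else "heater off")
```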

4.14 Education Quality of education in rural areas can be improved by setting up internet hotspots, thus reducing the financial burden of people. The gap between urban and rural students is not only in terms of knowledge, but in terms of their local environment, cognitive ability, the quality of services, skills and access to various services. This needs investment, proper planning and use of IoT in education to improve the quality by leaps and bounds.

5 IoT Challenges and Vision for Sustainability The Internet of Things will support and improve rural sustainability by leaps and bounds. This route is also not without obstacles. This part sheds light on the issues of using IoT for rural sustainability and provides insight into most of the solutions to overcome these problems [21].


5.1 Span The range of sensor devices is analyzed in geographic terms. The term span thus refers to the density of sensor devices installed in an area and relies on three factors: budget, policy and installation place. Installation places are chosen in compliance with program criteria. From the perspective of rural sustainability infrastructure, the selection of a coverage and implementation point entails the approval and advice of politicians, engineers and planners.

5.2 Fault Tolerance Fault tolerance is the resilience of a system to failures in these situations and its ability to keep responding to user requests. Because of fault tolerance, information availability is a vital element for rural deployments. Sensors also have limited capacity for storing information, and questionable optimization is another component of fault tolerance that still remains an issue. The idea of durability has to be integrated into the pursuit of sustainability to alleviate the problems associated with fault tolerance.

5.3 Data Ownership Enhancing operational quality is among the most comprehensive applications of IoT for infrastructural facilities. Such systems are based on collected data, so data ownership is an important issue. A large amount of user or citizen-related data is contained within such applications, and these data have to be in the public domain in order to use the maximum potential of the system. However, the transmission of user information to the public domain can violate certain systems' privacy and confidentiality requirements. Rural planners need to make substantial strides to close the distance between the infrastructure and consumers in order to face these obstacles. Customers should know how and for which purposes their data are being used. This is especially relevant for applications that use citizens' data directly from digital healthcare systems.

5.4 Lack of Encouragement The smart village project can be seen as an attempt at cooperation between businesses and governments. What this also indicates, however, is that businesses may lack the incentive to pursue applications they consider of minimal benefit. Other


features should be built into sustainable infrastructures to solve these challenges, enabling cooperation between various industries.

5.5 Technology Adverse Consequences Infrastructure built to promote development is funded and prioritized for the benefit of citizens. However, potentially detrimental effects of certain infrastructures may occur. First, there may be enormous energy costs involved in building sensor networks. In addition, the use of these devices can often contribute to comparatively greater energy consumption [22]. Policymakers may therefore fail to foresee the implications of system implementation. These problems can be resolved through coordination between policymakers, administrators and specialists in order to formulate a strategic way of implementing this modern framework.

6 Conclusion The IoT is a core infrastructure in this period of data science and artificial intelligence, providing the data these technologies require to function best. While data acquisition is the role of IoT in the data life cycle, it is critical because the precision of the results derived from all other technologies depends on data availability and consistency. The key goal of rural growth is the incorporation of various technologies into smart villages to deliver smart services that improve the facilities of their inhabitants. Smart cities work with a collective vision of urban sustainability, and in smart villages IoT is used in many ways to encourage sustainability. However, there are issues such as fault tolerance, coverage, data ownership and logistical problems. In this paper, potential solutions to these problems have been suggested. In future work in this area, these methods will be adopted and price-precision trade-off measurement systems will be set up to continue the use of IoT for sustainable development.

References 1. M.A. Haque, S. Haque, K. Kumar, N.K. Singh, A comprehensive study of cyber security attacks, classification, and countermeasures in the internet of things, in Digital Transformation and Challenges to Data Security and Privacy (IGI Global, 2021), pp. 63–90 2. Y. Bo, H. Wang, The application of cloud computing and the internet of things in agriculture and forestry, in 2011 International Joint Conference on Service Sciences (2011), pp. 168–172


3. N. Dlodlo, J. Kalezhi, The internet of things in agriculture for sustainable rural development, in 2015 international conference on emerging trends in networks and computer communications (ETNCC) (2015), pp. 13–18 4. I. Mohanraj, K. Ashokumar, J. Naren, Field monitoring and automation using IOT in agriculture domain. Procedia Comput. Sci. 93, 931–939 (2016) 5. F. Karim, F. Karim, Monitoring system using web of things in precision agriculture. Procedia Comput. Sci. 110, 402–409 (2017) 6. J. Zhao, J. Zhang, Y. Feng, J. Guo, The study and application of the IOT technology in agriculture, in 2010 3rd International Conference on Computer Science and Information Technology, vol. 2 (2010), pp. 462–465 7. L. Patra, U.P. Rao, Internet of Things—architecture, applications, security and other major challenges, in 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom) (2016), pp. 1201–1206 8. P.P. Jayaraman, D. Palmer, A. Zaslavsky, D. Georgakopoulos, Do-it-yourself digital agriculture applications with semantically enhanced IoT platform, in 2015 IEEE tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) (2015), pp. 1–6 9. A. Khattab, A. Abdelgawad, K. Yelmarthi, Design and implementation of a cloud-based IoT scheme for precision agriculture, in 2016 28th International Conference on Microelectronics (ICM) (2016), pp. 201–204 10. H. Samih, Smart cities and internet of things. J. Inf. Technol. Case Appl. Res. 21(1), 3–12 (2019) 11. H. Chourabi et al., Understanding smart cities: an integrative framework, in 2012 45th Hawaii International Conference on System Sciences (2012), pp. 2289–2297 12. C. Perera, A. Zaslavsky, P. Christen, D. Georgakopoulos, Sensing as a service model for smart cities supported by internet of things. Trans. Emerg. Telecommun. Technol. 25(1), 81–93 (2014) 13. J.L. Hernández-Ramos, A.J. Jara, L. Marin, A.F. Skarmeta, Distributed capability-based access control for the internet of things. J. Internet Serv. Inf. Secur. 3(3/4), 1–16 (2013) 14. S. Kaivonen, E.C.-H. Ngai, Real-time air pollution monitoring with sensors on city bus. Digit. Commun. Networks 6(1), 23–30 (2020) 15. E.S.Md. Alimul Haque, S. Haque, D. Sonal, K. Kumar, Security enhancement for IoT enabled agriculture. Mater. Today Proc. (2020) 16. A. Dhar, B. Datta, Saltwater intrusion management of coastal aquifers. II: Operation uncertainty and monitoring. J. Hydrol. Eng. 14(12), 1273–1282 (2009) 17. F. Hussain, R. Hussain, S.A. Hassan, E. Hossain, Machine learning in IoT security: current solutions and future challenges. IEEE Commun. Surv. Tutorials (2020) 18. M.A. Haque, D. Sonal, S. Haque, M.M. Nezami, K. Kumar, An IoT-based model for defending against the novel coronavirus (COVID-19) outbreak. Solid State Technol. 592–600 (2020) 19. J.J.P.C. Rodrigues et al., Enabling technologies for the internet of health things. IEEE Access 6, 13129–13141 (2018) 20. A. Poppas, J.S. Rumsfeld, J.D. Wessler, Telehealth is having a moment: will it Last?” J. Am. College Cardiol. (2020) 21. S. Zhang, The application of the internet of things to enhance urban sustainability (2017) 22. J.C.J.M. Van den Bergh, Energy conservation more effective with rebound policy. Environ. Resour. Econ. 48(1), 43–58 (2011)

Evaluation and Analysis of Models for the Measurement of Complexity in Manufacturing Systems Germán Herrera Vidal, Jairo Rafael Coronado-Hernández, and Gustavo Gatica González

Abstract The measurement of complexity is a metric derived from the monitoring of a manufacturing system and is considered of vital importance, since it is a useful and valid measure in support of decision making. The objective of this article is to evaluate a proposed conceptual hybrid model, structured from two perspectives: a subjective one with the complexity index (CXI) method and an objective one with Shannon's entropic model. Methodologically, we start from the conceptual model conceived and the hypotheses raised, and we describe a practical manufacturing case from which the required parameter information is obtained. The findings answer the hypotheses raised, corroborating that, in the measurement of the complexity of manufacturing systems, the subjective method serves as support and is coherent with the results obtained objectively, supporting the evaluation of the model based on heuristics and entropic measurements. Keywords Conceptual models · Complexity · Manufacturing systems · Evaluation · Measuring

G. H. Vidal (B) Industrial Engineering Department, Fundacion Universitaria Tecnológico Comfenalco, Grupo de Investigación Ciptec, Cartagena, Colombia e-mail: [email protected] Universidad Lomas de Zamora, Lomas de Zamora, Argentina J. R. Coronado-Hernández Industrial Engineering Department, Universidad de La Costa, Barranquilla, Colombia e-mail: [email protected] G. G. González Faculty of Engineering, Universidad Andres Bello, Santiago, Chile e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_46


1 Introduction The study of complexity was born from trying to explain and predict the behavior of a system through measurement patterns. According to Frizelle and Woodcock [1], the measurement of complexity in manufacturing systems is a metric that serves as a parameter for establishing improvement plans, determining that systems with high complexity present more problems than systems with low complexity. There are different models that can be used to measure complexity, among them conceptual models, theoretical models and mathematical models. Their use within industrial activity allows greater study, analysis and understanding of the behavior of the system. Internal complexity within manufacturing systems can be of two types: (i) static complexity, which refers to a structural characteristic of the system, understood as the amount of information needed to describe the natural state of a system, literally its entropy [2–4]; and (ii) dynamic complexity, which is related to the changes of relevant variables in the process over a time horizon and, according to Smart et al. [5], is the average amount of information needed to know the state of the installation during the execution of operations. Previous research has shown that the greater the number of structural elements of a system, the greater the impact on production costs [6, 7], and that complexity also has a negative effect on the productivity and quality of the system [8]. According to [9], 25% of the total costs of manufacturing companies are due to the complexity related to the product and the process. In short, companies tend to use strategies to avoid, reduce or control the complexity of production systems. This research paper evaluates a proposed conceptual model through which fundamental aspects for the measurement of complexity in manufacturing systems are determined, approached from two perspectives so that a calculation of complexity in hybrid manufacturing systems can be established: a subjective perspective (the complexity index, CXI), based on previous research by Wu et al. [10], Huaccho Huatuco et al. [11] and Mattsson et al. [12], and an objective perspective (Shannon's entropic model), taking as background the works of Calinescu et al. [2] and Smart et al. [5]. The work is divided into five sections: first the method is presented, which involves the conceptual model, hypothesis formulation and practical case study; then the results are presented; and finally the discussion and conclusions are given.

2 Method 2.1 Conceptual Model In this section, the conceptual model design for measuring complexity in manufacturing systems is presented. The model is defined in two perspectives or views, which include all the necessary elements, in a way that facilitates its understanding and use (see Fig. 1).


Fig. 1 Conceptual model

For its development, it is necessary to approach a case study of the manufacturing industry, in which it is possible to characterize the productive system and identify the elements associated with the plant, process, product, parts and planning. In the literature, there are two types of methods to measure complexity in manufacturing systems: qualitative methods, which depend on the perception of the people involved in the process, and quantitative methods, based on data, testing and analysis. According to Deshmukh et al. [13], the complexity index (CXI) is a method developed to help manufacturing companies describe the complexity of the production system as experienced by the people working within it. It measures complexity from a subjective perspective, since it depends on the opinion of those responsible for each workstation. The results obtained are analyzed mathematically: Eq. (1) measures the complexity of each problem area and Eq. (2) measures the total complexity of each station.

\mathrm{CXI}_e = \frac{\sum_{p=1}^{n} M_{ep}}{n}   (1)

\mathrm{CXI} = \frac{\sum_{e=1}^{k} \mathrm{CXI}_e}{k} + \frac{\max_{e=1,\ldots,k} \mathrm{CXI}_e}{4}   (2)

where CXI is the total complexity index, CXI_e is the complexity index per criterion evaluated, M_ep is the measure of central tendency (median) of the responses, k is the number of criteria evaluated and n is the number of surveys conducted.
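A small Python sketch of how Eqs. (1) and (2) can be applied to questionnaire data may be useful; the per-respondent sub-question ratings, and the reading of M_ep as the median of respondent p's ratings for criterion e, are illustrative assumptions consistent with the definitions above.

```python
import statistics

def criterion_cxi(ratings_per_respondent):
    """Eq. (1): CXI_e = (1/n) * sum of M_ep, where M_ep is taken here as
    the median rating respondent p gave to the sub-questions of criterion e."""
    medians = [statistics.median(r) for r in ratings_per_respondent]
    return sum(medians) / len(medians)

def station_cxi(criterion_indices):
    """Eq. (2): mean of the k criterion indices plus a max/4 penalty term."""
    k = len(criterion_indices)
    return sum(criterion_indices) / k + max(criterion_indices) / 4

# Invented answers: 4 respondents, 3 sub-questions, for criteria Q1..Q3
station = [
    [[4, 4, 3], [4, 3, 4], [3, 4, 4], [4, 4, 4]],  # Q1
    [[4, 3, 4], [3, 4, 3], [4, 4, 3], [3, 4, 4]],  # Q2
    [[2, 3, 2], [3, 2, 2], [2, 2, 3], [2, 3, 2]],  # Q3
]
per_criterion = [criterion_cxi(c) for c in station]
print(per_criterion, station_cxi(per_criterion))
```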


The entropic methods are based on analytical equations to measure complexity, facilitating entropic analysis in different types of scenarios and providing a quantitative basis for decision making. One method applied is Shannon's information entropy, which is a quantitative, objective technique that allows both static and dynamic complexity to be measured. All the information used in this section is defined by Shannon [14], who based his work on a mathematical theory of information. Consequently, Mattsson et al. [12, 15] take this theory as a basis and direct it toward measuring complexity in industrial organizations. The results obtained are analyzed mathematically: Eq. (3) measures static complexity in manufacturing systems and Eq. (4) measures dynamic complexity.

H_{\mathrm{static}}(S) = -\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ij} \log_2 p_{ij}   (3)

where H_static(S) is the static complexity, p_ij is the probability of resource i being in state j, M is the number of resources and N is the number of possible states.

H_{\mathrm{dynamic}}(D) = -(1 - P)\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ij} \log_2 p_{ij}   (4)

where H_dynamic(D) is the dynamic complexity, P is the probability of the system being in the in-control state and (1 − P) is the probability of it being in an out-of-control state.
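The following is a minimal Python sketch of Eqs. (3) and (4); the single-workstation probability matrix and the value of P are illustrative assumptions.

```python
import math

def entropy_bits(p):
    """Core Shannon term over an M x N matrix of state probabilities."""
    return -sum(pij * math.log2(pij) for row in p for pij in row if pij > 0)

def static_complexity(p):
    """Eq. (3)."""
    return entropy_bits(p)

def dynamic_complexity(p, p_in_control):
    """Eq. (4): the entropy term weighted by the out-of-control probability."""
    return (1.0 - p_in_control) * entropy_bits(p)

# One workstation with three states (busy, idle, setup); numbers invented
p_station = [[0.60, 0.25, 0.15]]
print(static_complexity(p_station))        # bits
print(dynamic_complexity(p_station, 0.8))  # bits, assuming P = 0.8
```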

2.2 Hypothesis Three hypotheses have been formulated related to the measurement of complexity in manufacturing systems, taking into account the proposed conceptual hybrid model.
H1: The results obtained by means of the subjective method regarding the complexity of the system are consistent with the final results of the objective method.
H2: The application of the qualitative method supports quantitative analysis of the complexity of manufacturing systems.
H3: The workstations with the greatest number of operations are those with high static and dynamic complexity.


Fig. 2 Graphical representation of the case study

2.3 Manufacturing Case Study The manufacturing case study is based on recreating a scenario that simulates a production environment, made up of workstations, operations, processes and defined products. The research was carried out in a laboratory where the necessary resources are available. For a better understanding of the system, the relationships between workstations, materials, operations and manufactured products are presented in Fig. 2. The system manufactures three types of products, P1, P2 and P3, which need three types of materials, M1, M2 and M3, respectively; these are manufactured in three types of workstations, SA, SB and SC, generating six operations according to the flow of the product, from O1 to O6.

3 Results This section presents the results obtained, separated into sections: (i) complexity index (CXI) and (ii) entropic measurement of complexity.

3.1 Complexity Index (CXI) The measurement of complexity is based on the opinion of those responsible for each workstation. For this purpose, a questionnaire was implemented based on six criteria or problem areas, following Deshmukh et al. [13], which establish different points for developing a complexity analysis, among them: (Q1) product reference, (Q2) work content, (Q3) plant design, (Q4) supporting tools, (Q5) work instructions and (Q6) general view. From this, the complexity index is calculated for each of the elements evaluated in the questionnaire, as described in Eq. (1). According to Efthymiou et al. [16], if CXI < 2 no change is needed, if 2 ≤ CXI < 3 the station needs change, and if CXI ≥ 3 it needs urgent change. Given the above, it can be concluded that station C needs an urgent change, mainly in the elements associated with the products and variants (Q1)


Fig. 3 Complexity index per element

and work content (Q2), as its complexity index is greater than or equal to three (3). In addition, it is also necessary to make improvements in the other remaining elements (Q3 to Q6), considering that their complexity indices are greater than or equal to two (2) and less than three (3) (see Fig. 3). Consequently, the total complexity of each station is measured using Eq. (2): the station with the highest complexity index (CXI) is station C with 3.208, followed by station B with 2.479 and finally station A with 2.063, the station with the lowest complexity index.
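The decision bands of Efthymiou et al. [16] and the station indices reported above combine naturally in a few lines; a minimal sketch:

```python
def cxi_action(cxi):
    """Decision thresholds reported by Efthymiou et al. [16]."""
    if cxi < 2:
        return "no change needed"
    if cxi < 3:
        return "needs change"
    return "needs urgent change"

for station, cxi in {"A": 2.063, "B": 2.479, "C": 3.208}.items():
    print(station, cxi, "->", cxi_action(cxi))
```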

3.2 Entropic Measurement of Complexity This section applies Shannon's quantitative and objective information-entropy technique, starting from the calculation of static complexity, which refers to a characteristic that can be associated with systems as well as with production processes; this type of complexity becomes important when studying the possible design of one or more workstations. As described in Fig. 2, the schedule established at each workstation is taken as input information. It is assumed that the system starts at six (6:00) hours, takes a break at twelve (12:00) hours, resumes activities at fourteen (14:00) hours and ends its production cycle at nineteen (19:00) hours. From Eq. (3), the necessary calculations are made, considering the observed frequency (F_o), probability (Pr) and entropy (E) of each element. Finally, the total static complexity of each workstation is calculated, highlighting that station C is the most complex with 2.2084 bits, followed by station B with 1.8349 bits and finally station A as the least structurally complex with 0.9913 bits. As another quantitative measure, there is dynamic complexity, which refers to the analysis of systems over time; in other words, it studies the trend of the real states that the process assumes within a time horizon. Considering the appearance of random variables such as operation times, setup times and idle times, whether due


to resource failures at the stations, Monte Carlo simulation is used, based on data analysis, statistical parameterization of the random variables, model construction and solution. For research purposes, the practical case developed was taken as a reference, considering a sample of ten (10) working days. From Eq. (4) and the twenty-six (26) points in the observation time, with opening time at six (6:00) hours and cutoff time at nineteen (19:00) hours, the necessary calculations are made, considering the observed frequency (F_o), probability (Pr) and entropy (E) of each element. Finally, the total dynamic complexity of each workstation is calculated, highlighting that station C has the greatest dynamic complexity with 2.396 bits, followed by station B with 2.024 bits and finally station A as the least dynamically complex with 1.250 bits. Given the above, it is evident that when different operations interact in a workstation, in which programming, synchronization and integration must be guaranteed, and there are elements of variation related to the resources, uncertainty arises and gives rise to a high level of complexity, affecting completion as planned.
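A minimal sketch of the Monte Carlo procedure described here, under illustrative state probabilities and an assumed in-control probability P; the state names and sampling rule are not from the original study.

```python
import math, random

def simulate_states(n_days=10, points_per_day=26, p_fail=0.10):
    """Sample a workstation state at each observation point; 'failed'
    stands in for the random failure/idle events described above."""
    states = []
    for _ in range(n_days * points_per_day):
        r = random.random()
        states.append("failed" if r < p_fail else "setup" if r < 0.35 else "busy")
    return states

def estimate_dynamic_complexity(states, p_in_control=0.8):
    """Estimate state probabilities from observed frequencies, then apply
    Eq. (4) with the assumed in-control probability P."""
    n = len(states)
    probs = [states.count(s) / n for s in set(states)]
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return (1 - p_in_control) * entropy

random.seed(1)
print(estimate_dynamic_complexity(simulate_states()), "bits")
```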

4 Discussion In summary, the results give an answer to the hypotheses proposed, in which it is corroborated (Hypothesis 1) that the results obtained by means of the subjective method regarding the complexity of the system are consistent with the final results of the objective method; likewise, the application of the qualitative method serves as support for the quantitative analysis of the complexity of manufacturing systems (Hypothesis 2); and, in addition (Hypothesis 3), the workstations with a greater number of operations are those that present high static and dynamic complexity. The above allowed predicting, knowing and evaluating the behavior of the system, making it possible to identify areas for improvement and changes in the structural conditions (see Fig. 4).

Fig. 4 Complexity index per element: Complexity Index (CXI), Entropic (Static) and Entropic (Dynamic) for Stations A, B and C


5 Conclusion Conceptual models have turned out to be a technique that, from the point of view of transmitting or representing an idea, facilitates the elaboration of a coherent structure to support the visualization and understanding of a process. This work proposes a conceptual framework with two views, one functional and the other informational, to provide clarity on the fundamental elements and characteristics in the measurement of complexity, from a subjective perspective with the complexity index (CXI) method and an objective one with Shannon's entropic model. The evaluation of the proposed model was satisfactory, since it allowed the hypotheses raised to be corroborated, the behavior of the system to be known in more detail, and decision making to be supported with precision. For future research, it would be useful to implement this conceptual hybrid model in companies of the manufacturing sector, making it possible to obtain and analyze results on static and dynamic complexity, identify focuses of improvement and propose changes in structural conditions, all based on modern methodologies and optimization techniques.

References 1. G. Frizelle, E. Woodcock, Measuring complexity as an aid to developing operational strategy. Int. J. Oper. Prod. Manag. (1995). https://doi.org/10.1108/01443579510083640 2. A. Calinescu, J. Efstathiou, J. Schirn, J. Bermejo, Applying and assessing two methods for measuring complexity in manufacturing. J. Oper. Res. Soc. 49(7), 723–733 (1998). https://doi. org/10.1057/PALGRAVE.JORS.2600554 3. J. Efstathiou, A. Calinescu, G. Blackburn, A web-based expert system to assess the complexity of manufacturing organizations. Robot. Comput. Integr. Manuf. 18(3–4), 305–311 (2002). https://doi.org/10.1016/S0736-5845(02)00022-4 4. S. Sivadasan, J. Efstathiou, A. Calinescu, L.H. Huatuco, Advances on measuring the operational complexity of supplier–customer systems. Eur. J. Oper. Res. 171(1), 208–226 (2006). https:// doi.org/10.1016/j.ejor.2004.08.032 5. J. Smart, A. Calinescu, L.H. Huatuco, Extending the information-theoretic measures of the dynamic complexity of manufacturing systems. Int. J. Prod. Res. 51(2), 362–379 (2013). https:// doi.org/10.1080/00207543.2011.638677 6. R.D. Banker, S.M. Datar, S. Kekre, T. Mukhopadhyay, Costs of Product and Process Complexity (No. 88-89-67) (Carnegie Mellon University, Tepper School of Business, 1989) 7. G.D.M. Frizelle, Getting the measure of complexity. Manufact. Eng. 75(6), 268–270 (1996) 8. J.P. MacDuffie, K. Sethuraman, M.L. Fisher, Product variety and manufacturing performance: evidence from the international automotive assembly plant study. Manage. Sci. 42(3), 350–369 (1996). https://doi.org/10.2307/2634348 9. W. Bick, S. Drexl-Wittbecker, Komplexität reduzieren: Konzept. Methoden. Praxis. LOG_X, Stuttgart (2008) 10. Y. Wu, G. Frizelle, J. Efstathiou, A study on the cost of operational complexity in customer– supplier systems. Int. J. Prod. Econ. 106(1), 217–229 (2007). https://doi.org/10.1016/J.IJPE. 2006.06.004 11. L. Huaccho Huatuco, J. Efstathiou, A. Calinescu, S. Sivadasan, S. Kariuki, Comparing the impact of different rescheduling strategies on the entropic-related complexity of manufacturing systems. Int. J. Prod. Res. 47(15), 4305–4325 (2009). https://doi.org/10.1080/002075407018 71036


12. S. Mattsson, M. Karlsson, P. Gullander, H. Van Landeghem, L. Zeltzer, V. Limère, J. Stahre, Comparing quantifiable methods to measure complexity in assembly. Int. J. Manuf. Res. 9(1), 112–130 (2014). https://doi.org/10.1504/IJMR.2014.059602 13. A.V. Deshmukh, J.J. Talavage, M.M. Barash, Complexity in manufacturing systems, Part 1: Analysis of static complexity. IIE Trans. 30(7), 645–655 (1998). https://doi.org/10.1023/A:100 7542328011 14. C. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948) 15. S. Mattsson, P. Gullander, A. Davidsson, Method for measuring production complexity, in 28th International Manufacturing Conference (2011) 16. K. Efthymiou, A. Pagoropoulos, N. Papakostas, D. Mourtzis, G. Chryssolouris, Manufacturing systems complexity: an assessment of manufacturing performance indicators unpredictability. CIRP J. Manuf. Sci. Technol. 7(4), 324–334 (2014). https://doi.org/10.1016/j.cirpj.2014.07.003

Fractional-Order Euler–Lagrange Dynamic Formulation and Control of Asynchronous Switched Robotic Systems Ahmad Taher Azar, Fernando E. Serrano, Nashwa Ahmad Kamal, Sandeep Kumar, Ibraheem Kasim Ibraheem, Amjad J. Humaidi, Tulasichandra Sekhar Gorripotu, and Ramana Pilla Abstract This paper presents an asynchronous distributed switched controller for robotic systems with a dynamic model derivation based on fractional-order Euler– Lagrange formulation. This study begins with the dynamic model derivation of a two links robotic manipulator by a fractional-order Euler–Lagrange formulation. This objective is achieved by selecting an appropriate Lagrangian, considering the linear A. T. Azar College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia e-mail: [email protected]; [email protected]; [email protected] Faculty of computers and Artificial Intelligence, Benha University, Banha, Egypt F. E. Serrano Research Collaborator, Prince Sultan University, Riyadh, Saudi Arabia e-mail: [email protected] International Group of Control Systems (IGCS), Cairo, Egypt N. A. Kamal Faculty of Engineering, Cairo University, Giza, Egypt S. Kumar (B) School of Engineering and Technology, CHRIST (Deemed to be University), Bengaluru, India I. K. Ibraheem Department of Electrical Engineering, College of Engineering, University of Baghdad, Al-Jadriyah, Baghdad 10001, Iraq e-mail: [email protected] A. J. Humaidi Department of Control and Systems Engineering, University of Technology, Baghdad 10001, Iraq e-mail: [email protected] T. S. Gorripotu Department of Electrical and Electronics Engineering, Sri Sivani College of Engineering, Srikakulam 532402, Andhra Pradesh, India R. Pilla Department of Electrical and Electronics Engineering, GMR Institute of Technology, Rajam Srikakulam 532127, Andhra Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_47

479

480

A. Taher Azar et al.

and angular kinetic energy and linear potential energy. The kinetic and potential energies are determined so that in the case of kinetic energy, a fractional-order operator is utilized for a linear and angular velocity, and a gradient operator is used to generate potential energy, which is important for this purpose to have a spatial derivative. The fractional-order Euler–Lagrange formulation is used to obtain a decoupled position and orientation fractional-order dynamics which achieves a novel fractional-order dynamic model representation to model many physical models with more fidelity. Then an asynchronous distributed switched controller is designed by dividing first the time interval into two parts. Then, by selecting two Lyapunov functionals for each time interval, the asynchronous switched control laws are obtained to ensure exponential stability. In opposition to other studies, a novel switching law is proposed by implementing the required set topology. The objective of this study is only to stabilize the studied robotic mechanism in its equilibrium points. A numerical experiment is shown to validate the theoretical results obtained in this study, proving the proposed control strategy’s effectiveness. Keywords Asynchronous control · Euler–Lagrange formulation · Distributed control · Fractional-order control

1 Introduction The Euler–Lagrange formulation has been extensively implemented since years ago for the derivation of dynamic models in the classical and quantum mechanics [5– 7]. Despite this, due to many newly discovered physical systems, it is necessary to develop more complex Euler–Lagrange formulations considering these novel systems. Among these kinds of physical systems are mechanical, electrical, aeronautical, chemical, particle and quantum systems. The fractional-order Euler–Lagrange formulation has been recently studied, and there are very few research studies in which this kind of formulation is derived and presented. In papers like [20], significant results are presented in which a fractional-order Euler–Lagrange formulation is derived by considering the principle of least action. It is important to remark that the previously mentioned study is one of the very few results found in the literature nowadays, and it is crucial for this research study. Other important results are found in papers like [14] in which a Lagrangian formulation is provided for a Hamilton– Jacobi fractional-order partial differential equation. Other research studies related to the Euler–Lagrange formulation are explained in papers like [21, 31] for the dynamic model representation of particle systems. Fractional-order controllers for the stabilization and reference tracking in robotics and mechanical systems have been extensively used nowadays considering that in most of the cases, they are more efficient than the integer order ones [1–3, 8–10, 13, 15–17, 22, 24–28, 32]. Examples in the literature can be found in papers like [11], in which a fractional-order controller for a single-link robot with sensor disturbances is shown. Another important study is shown in [4], in which a fractional-order PID

Fractional-Order Euler–Lagrange Dynamic Formulation and Control …

481

controller is implemented for the reference tracking of a parallel delta robot. In [12], a fractional-order controller is implemented for a single-link flexible robot. In [23], a fractional-order error manifold is implemented for the controller design of constrained robot manipulators. Fractional-order distributed switched control is crucial considering the increasing complexity of many physical systems and mechanical and robotic systems. The kind of switched control developed by several authors is either state-dependent and time-dependent. Examples of these kinds of techniques are found in papers like [36], in which a switching controller is designed based on state-dependent switching for a positive fractional-order system. Then in [19], an H-infinity and output feedback controller is used for linear systems with linear fractional uncertainties. In [29], an active controller does the switching synchronization of fractional-order chaotic systems. Another example can be found in [34], in which an enhanced sliding mode controller is presented in which a fractional-order exponential technique is implemented. Other interesting results found in the literature that are important for this study and worthy of mentioning are presented in papers like [35], in which a robust controller stabilizes an uncertain switched fractional-order system. In papers like [33], Lyapunov functions for fractional-order Riemann–Liouville difference equations are presented. Finally, in [18], Lyapunov functions for fractional-order nonlinear systems are presented. In this paper, the dynamic model derivation by a fractional-order Euler–Lagrange formulation and the design of an asynchronous switched controller for switched robotic systems is presented. The analyzed robotic system used in this study is a two-link robotic manipulator. First, an appropriate Lagrangian is designed by considering the position and orientation kinetic energy and the position potential energy by implementing a fractional-order derivative operator in the kinetic energy representation and a gradient operator in the case of the potential energy. Then the fractional-order Euler–Lagrange formulation is implemented to obtain the decoupled position and orientation fractional-order dynamics. It is important to remark that the kinematic rotation matrix is implemented as obtained in the standard formulation [30]. An asynchronous switched controller is designed considering two-time switching intervals, and by designing the appropriate Lyapunov function, the control laws are obtained to ensure exponential stability. The paper is organized as follows. Section 2 presents the problem formulation. In Sect. 3, the controller design is discussed. In Sect. 4, simulation results and analysis are given. Discussion is given in Sect. 5. Finally, conclusion is drawn in Sect. 6.

2 Problem Formulation This section presents a brief review of the fractional-order calculus operations and the dynamic model derivation of a two-link robotic manipulator with the fractional-order Euler–Lagrange formulation.

482

A. Taher Azar et al.

2.1 Fractional-Order Calculus Operations The Riemann–Liouville fractional-order derivative is given by [18]:  t dn x(τ ) 1 α dτ t0 Dt x(t) = n Γ (n − α) dt t0 (t − τ )1−n+α

(1)

for n − 1 ≤ α < n where n is an integer. The Caputo fractional-order derivative is given by [18]: C α t0 Dt x(t)

1 = Γ (n − α)

t t0

x (n) (τ ) dτ (t − τ )1−n+α

(2)

2.2 Fractional-Order Dynamic Model Derivation Consider the following fractional-order Euler–Lagrange formulation for the decoupled position and orientation dynamics [20] : ∂L ∂qi ∂L ∂ xi

+ ∂1 ∇bα .∇xαi q = τi + ∂1 ∇bα .∇xαi x = 0

(3)

for i = 1, ..., n taking into consideration that the dimensions for the two links manip  ulator analyzed in this study are n = 2. Where ∇ α = a0 Dxα00 , a1 Dxα11 , ..., an Dxαnn [20]. The state vectors are defined as q = [q1 , q2 ]T = [θ1 , θ2 ]T , x = [x1 , x2 ]T = [x, y]T which are the position of the end effector. The Lagrangian L is selected as: ⎤ ⎤ ⎡ ⎡      1  ⎣  α 2 1 2 α ⎣ L= Ii dx ⎦ + m i dx ⎦ 0 D t qi 0 Dt x i 2 i 2 i x x   − m i g∇x · D y dx i

(4)

x

where m i and Ii are the respective link masses and inertia, g is the gravity vector and D y = [0, 1]T . The resulting fractional-order decoupled position and orientation dynamic models are:

Fractional-Order Euler–Lagrange Dynamic Formulation and Control …

 xi

Dbαi 0 Dtα xi m i −

i





483

m i g∇bα+2 x · D y = 0

i α α xi Dbi 0 Dt qi Ii

= τi

(5)

i

so the switched dynamic system for the two-link robotic manipulator is given by: α 0 Dt q 1

−1 −α −1 α = I1σ (t) x D B τ1 − 0 Dt q2 I1σ (t) I2σ (t)

α 0 Dt q 2

−1 −α −1 α = I2σ (t) x D B τ2 − 0 Dt q1 I2σ (t) I1σ (t)

(6)

3 Asynchronous Switched Controller Design for a Robotic Manipulator For the design of the asynchronous switched controller for the robotic manipulator, consider the following property [18]: Property 1 The Caputo fractional-order derivative is related to the Riemann– Liouville fractional-order derivative in the following form: C α 0 Dt x(t)

= 0 Dtα x(t) −

(t − t0 )−α x(0) Γ (1 − α)

(7)

In the following theorem, the main results of this study are obtained. It is important to consider that the fractional-order derivatives of the Lyapunov functionals are of the Caputo type. Later, they have converted to fractional-order Riemann–Liouville derivatives as explained in Property 1. Theorem 1 Taking into consideration the switching sets Ai (q, ) and A j (q, ρ) for the time switching instants [t2k , t2k+1 ) and [t2k+1 , t2k+2 ), respectively, the following switching control laws make the system exponentialy stable: τ1 =x D αB 0 Dtα q2 I2σ (t) − ( + 1)I1σ (t)x D αB q˙1 −  I1σ (t)x D αB q1 τ2 = x D αB 0 Dtα q1 I1σ (t) − ( + 1)I2σ (t)x D αB q˙2 −  I2σ (t)x D αB q2 for [t2k , t2k+1 ) in which  ∈ R+ , and:

(8)

484

A. Taher Azar et al.

τ1 = x D αB 0 Dtα q2 I2σ (t) − (ρ + 1)I1σ (t)x D αB q˙1 − ρ I1σ (t)x D αB q1 τ2 = x D αB 0 Dtα q1 I1σ (t) − (ρ + 1)I2σ (t)x D αB q˙2 − ρ I2σ (t)x D αB q2

(9)

for [t2k+1 , t2k+2 ) with ρ ∈ R+ . Proof For the case when [t2k , t2k+1 ) consider the following Lyapunov function: Vi1 =

1 2 1 q1 Pir + q22 Pin 2 2

(10)

with the positive constants Pir ∈ R+ and Pin ∈ R+ . Now by taking the Caputo derivative of (10) and taking into consideration that Vi1 (0) = 0 due to the initials conditions must be close to the equilibrium point and by using Property 7 yields: α 0 Dt Vi1

≤ q1 Pir 0 Dtα q1 + q2 Pin 0 Dtα q2

(11)

Now by substituting (6) and (8) into (11) yields: α 0 Dt Vi1

≤ −( + 1)V˙i1 − 2Vi1

(12)

and considering that 0 Dtα−1 Vi1 ≥ 0 yields: ( + 1)

dVi1 ≤ −2Vi1 dt

Now re-arranging (13) and integrating at both sides yields:  Vi1  t dVi1 −2 dt ≤ V  +1 i1 t2k Vi (0)

(13)

(14)

obtaining the following result: Vi1 ≤ Vi1 (0)e

 −2  +1

[t−t2k ]

(15)

for the time instants [t2k+1 , t2k+2 ) consider the following Lyapunov function: V j1 =

1 2 1 q P jr + q22 P jn 2 1 2

(16)

with P jr ∈ R+ and P jn ∈ R+ . By a similar procedure, the following result is obtained:

V j1 ≤ V j1 (0)e

−2ρ ρ+1

[t−t2k ]

(17)

Fractional-Order Euler–Lagrange Dynamic Formulation and Control …

485

Now with these results, the following sets are obtaied to deduce the switching instants:   Ai (q, ) = q ∈ R2 : 0 Dtα Vi1 ≤ −( + 1)V˙i1 − 2Vi1 , Vi1 ≤ Vi1 (0)em[t−t2k ]   A j (q, ρ) = q ∈ R2 : 0 Dtα V j1 ≤ −(ρ + 1)V˙ j1 − 2ρV j1 , V j1 ≤ V j1 (0)en[t−t2k ]

(18)

with m=

−2 +1

(19)

n=

−2ρ ρ+1

(20)

so the switching topology is given by: η(q, , ρ) = Ai (q, )



A j (q, ρ)

(21)

so the exponential stability of the system is ensured and the proof is completed.

4 Numerical Experiment In this section, the theoretical results obtained in this study are validated by a numerical experiment. The action of the asynchronous switched controller leads to a balance point of two links robot manipulator variables. The parameters of the simulation are I11 = 0.1 kg m2 , I12 = 0.2 kg m2 , I21 = 0.1 kg m2 , I22 = 0.3 kg m2 , m = 1 kg, l1 = l2 = 0.5 m. Figure 1 shows the evolution in time of angular displacement q1. It is confirmed how this angular variable is driven by the action of the proposed controller to the balance point at a significantly shorter convergence time. In Fig. 2, the evolution in time of the end effector displacement variable x is shown. This corroborates how this variable reaches the equilibrium point in the finite time, something in concordance with the angular trajectory of the actuator angles. The evolution of the input torque variable τ1 can be seen in Fig. 3. The action of the asynchronous switched controller proves that the control effort to balance this robotic system is smaller. Something important to note is that there are no unwanted oscillations caused by the controller. Finally, in Fig. 4, the controller and system modes are shown in order to activate the controller and the system at different switching times.

486

Fig. 1 Evolution in time of the angular displacement q1

Fig. 2 Evolution in time of the end effector displacement x

A. Taher Azar et al.

Fractional-Order Euler–Lagrange Dynamic Formulation and Control …

Fig. 3 Evolution in time of the input variable τ1

Fig. 4 Evolution in time of the switching mode

487

488

A. Taher Azar et al.

5 Discussion This study’s theoretical results provide a novel contribution regarding the dynamic modeling of robotic systems by a fractional-order Euler–Lagrange formulation. One of this study’s main advantages is that the fractional dynamic model is obtained in a decoupled way, which means obtaining the position and orientation fractional-order dynamics separately. This study aims to provide a fractional-order dynamic model for different kinds of serial and even parallel mechanisms from standard kinematic model design. This means that the kinematic model derivation strategies used to obtain integer order dynamic models, such as the Denavit-Hartenberg formulation, can be implemented to obtain a fractional-order dynamic model. Besides, as a contribution, a fractional-order asynchronous switched controller is provided, something that, to the best of the author’s knowledge, it has not been reported in the literature.

6 Conclusion In this paper, a novel fractional-order dynamic model derivation of the robotic mechanism is done using a fractional-order Euler–Lagrange formulation. Besides, an asynchronous switched controller is derived for the obtained fractional-order dynamic model. The results are validated by a numerical experiment corroborating the theoretical results obtained in this study.

References 1. K.S.T. Alain, A.T. Azar, R. Kengne, F.H. Bertrand, Stability analysis and robust synchronisation of fractional-order modified Colpitts oscillators. Int. J. Autom. Control 14(1), 52–79 (2020) 2. H.H. Ammar, A.T. Azar (2020) Robust path tracking of mobile robot using fractional order PID controller, in The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2019). Advances in Intelligent Systems and Computing, vol. 921 (Springer International Publishing, Cham, 2020), pp. 370–381 3. H.H. Ammar, A.T. Azar, R. Shalaby, M.I. Mahmoud, Metaheuristic optimization of fractional order incremental conductance (fo-inc) maximum power point tracking (mppt). Complexity 7687891, 1–13 (2019) 4. L. Angel, J. Viola, Fractional order PID for tracking control of a parallel robotic manipulator type delta. ISA Trans. 79, 172–188 (2018) 5. Azar AT, Serrano FE (2018) Passivity based decoupling of Lagrangian systems, in Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2017. Advances in Intelligent Systems and Computing, vol. 639 (Springer International Publishing, Cham), pp. 36–46 6. A.T. Azar, A.S. Sayed, A.S. Shahin, H.A. Elkholy , H.H. Ammar, PID controller for 2-DOFs twin rotor MIMO system tuned with particle swarm optimization, in Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2019. Advances in Intelligent Systems and Computing, vol. 1058 (Springer International Publishing, Cham), pp. 229– 242 (2020)

Fractional-Order Euler–Lagrange Dynamic Formulation and Control …


Modeling the Imperfect Production System with Rework and Disruption Neelesh Gupta, U. K. Khedlekar, and A. R. Nigwal

Abstract A production system may be disrupted due to a labor strike, a machine breakdown, a power failure, etc. Along with these problems, a machine may also produce imperfect items. In this paper, we suggest a production policy for an imperfect production system with disruption. The model is compared with and without disruption. We assume that rework starts just after the regular production. The profit function is derived, and the regular production time, rework time, and disrupted production time are obtained. We give managerial insights and suggestions to inventory managers for disrupted production systems. The proposed model is also analyzed analytically, graphically, and numerically. Keywords Imperfect production · Inventory · Rework · Disruption

1 Introduction Every manufacturer has to confirm the production level and find the most economical quantity. Nowadays, this is a general problem that admits a general solution; however, it may be advisable to exercise judgment in a particular case, and such a solution, assisted by knowledge of the general solution, should include all the features involved in that case. Production planning is another aspect that attracts manufacturers (practitioners) and researchers to recover overages and shortages of items. There are many reasons for a production system to be disrupted, like machine breakdown, labor strikes, unexpected events, etc. With these uncertainties, a production system may produce some imperfect items, so the problem becomes more complex with disruption and imperfect production. The classical EOQ model does not include chances N. Gupta (B) · U. K. Khedlekar Department of Mathematics and Statistics, Dr. Harisingh Gour Vishwavidyalaya, Sagar, Madhya Pradesh 470003, India A. R. Nigwal Government Ujjain Engineering College, Ujjain, Madhya Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_48


of disruption in supply. The classical EMQ model assumes that all produced/manufactured items are of perfect quality. However, real-life production systems generate defective items due to deterioration and other failures. There are two types of defective items: first, the imperfect items which may be converted into perfect items through a rework process, called reworkable items, and second, the imperfect items which cannot be converted into perfect items, called scrap items. Harris [1] was the first mathematician who used the word EMQ in inventory management. So and Tang [2] presented a model in which they considered rework on imperfect items at the end of regular production. They derived a simple procedure to compute the optimal policy. Hayek et al. [3] derived a finite production model in which they analyzed the effect of imperfect quality items to minimize the total inventory cost. Chiu [4] developed a replenishment policy for an imperfect EMQ model with the help of a differential calculus approach; they also solved the same problem by an algebraic approach. Chiu et al. [5] considered a finite production model using a random variable for the defective item rate and applied the rework process to defective items. They assumed that a portion of scrapable items must be discarded before reworking the repairable defective items. They also suggested an optimal policy for the EPQ and backorder level. Haji and Haji [6] considered both good and defective items in the production system, with the rework rate assumed to be a function of a random variable. Taleizadeh et al. (2013) derived an EPQ model in which the defective items' production rate is a random variable, considering reworkable and non-reworkable items while allowing shortages. They determined the optimal period length of the backorder quantity and the total expected cost. Chiu [4] introduced a model incorporating the total production cost and delivery cost for the EPQ, and also incorporating the rework process and multi-delivery cost. Sang et al. (2016) designed an imperfect manufacturing system using various cases of a supply chain consisting of a single manufacturer and a single retailer. They determined the retail price and the number of shipments for exponentially deteriorating items. He and He [7] proposed a production inventory model for a deteriorating item with constant deterioration and production disruption under different situations. This method helps the manufacturer reduce the losses caused by production disruption. Khedlekar et al. [8] formulated a production inventory model for a deteriorating item with production disruption and analyzed the system under different situations. Chiu [4] derived some special cases in the EMQ model with rework and multiple shipments. They optimized the total quantity in terms of production rate and regular time. In this paper, we have the following objectives: (1) to optimize the produced items, (2) to optimize the total defective items, and (3) to obtain the optimal solution when there are no defective items. Based on the above literature and considerations, we are motivated to develop a model for two cases; the first one is the EMQ model which depends on regular production time considering constant demand. In this case, we optimized regular production time and total production cost. The second one is the EMQ model considering disruption


which depends on production time assuming constant demand. In this case, we have optimized disrupted production time and total production cost.

2 Assumptions and Notations

We have considered an imperfect quality EMQ model in which the production rate is constant and larger than the demand rate. The production process may randomly generate defective items at the rate ν1 (0 ≤ ν1 ≤ 1), so defective items are produced at the rate θ1. We also assume that the defective items fall into two categories: the first is reworkable, and the second is non-reworkable, called scrap items. The rework process starts just after the end of regular production. Let ν2 (0 ≤ ν2 ≤ 1) denote the portion of defective items which cannot be reworked during the rework process and become scrap. After the rework process, only good items are delivered to the customer.

P1 : Production rate
φ : Demand rate for the time horizon T
ν1 : Portion of defective items
θ1 : Production rate of defective items
C : Production cost per unit item
CR : Rework cost per unit item
P2 : Rework production rate
K1 : Delivery cost per shipment
θ2 : Production rate of scrap items during the rework process
ν2 : Portion of scrap items
h1 : Holding cost per quantity per unit time
CS : Disposal cost per scrap item
h2 : Holding cost per reworkable item
t1 : Regular production time
t2 : Time required for reworking of defective items
CT : Delivery cost per unit item
t3 : Time to send finished items
n : Number of installments
K : Setup cost per order
T : Planning time horizon
Q : Total produced quantity of items
I(t) : On-hand inventory of perfect quality items at time t
Id(t) : On-hand inventory of defective items at time t
TC(t1) : Total production inventory delivery cost per cycle
TC1(t1) : Total production inventory delivery cost per cycle for special case 1
δP : Change of production rate due to disruption
td : Regular production time of disrupted production system
t1P : Disrupted production time
t2P : Rework time in disrupted production system
TC∗(tdP) : Total production inventory delivery cost for disrupted case
TC1∗(tdP) : Total production inventory delivery cost per cycle for disruption case (special case)

3 Mathematical Model for an Imperfect Production System with Reworkable and Scrapable Items

In this model, we consider a production process that starts with a constant production rate P1, which is larger than the demand rate. Because the production is imperfect, defective items are produced at the fraction ν1, so the total defective rate is θ1 = P1ν1. Defective items are reworked at a rate P2, starting after the end of regular production. The rework process randomly generates scrap items at the fraction ν2, so the total scrap rate is θ2 = P2ν2. The finished good items are delivered to the customer in n equal parts over the interval t3. Let the regular production time be t1, the rework time of defective items be t2, the delivery time of the finished product be t3, the on-hand inventory of regular production be H, and the on-hand inventory of reworked product be H∗; then the production cycle length T can be written as (Fig. 1)

$$T = t_1 + t_2 + t_3 \tag{3.1}$$

Fig. 1 On hand inventory of perfect items

Let Q be the total quantity, which is equal to the sum of perfect items, imperfect items, and scrap items. Then

$$Q = P_1 t_1 \tag{3.2}$$

$$H = (P_1 - \theta_1)\,t_1 \tag{3.3}$$

$$H^{*} = (1 - \nu_1\nu_2)\,P_1 t_1 \tag{3.4}$$

$$t_2 = \left(\frac{\nu_1 P_1}{P_2}\right) t_1 \tag{3.5}$$

$$t_3 = T - t_1 - t_2 = \left(\frac{P_1(1-\nu_1\nu_2)}{\phi} - 1 - \frac{\nu_1 P_1}{P_2}\right) t_1 \tag{3.6}$$

The total number of defective items at time t1 is θ1t1 = P1ν1t1, where ν1 = θ1/P1. The total number of scrap items over the cycle T is ν2θ1t1 = P1ν1ν2t1, where ν2 = θ2/P2. The total delivery cost for n shipments in a cycle is

$$n\left(K_1 + C_T\,\frac{H^{*}}{n}\right) = nK_1 + C_T(1-\nu_1\nu_2)P_1 t_1 \tag{3.7}$$
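To make the relations (3.1)–(3.7) concrete, the following is a minimal Python sketch that evaluates the cycle quantities for illustrative parameter values; the numbers are placeholders chosen so that P1 > φ, not the values of the paper's example.

```python
# Minimal sketch of the cycle relations (3.1)-(3.7); parameter values
# are illustrative placeholders, not taken from the paper's example.
P1, P2 = 100.0, 50.0     # regular and rework production rates
phi = 50.0               # demand rate
nu1, nu2 = 0.1, 0.1      # defective and scrap fractions
t1 = 15.0                # regular production time

Q = P1 * t1                           # (3.2) total quantity produced
H = (P1 - nu1 * P1) * t1              # (3.3) on-hand perfect items, theta1 = nu1*P1
H_star = (1 - nu1 * nu2) * P1 * t1    # (3.4) inventory after rework
t2 = (nu1 * P1 / P2) * t1             # (3.5) rework time
t3 = (P1 * (1 - nu1 * nu2) / phi - 1 - nu1 * P1 / P2) * t1  # (3.6) delivery time
T = t1 + t2 + t3                      # (3.1) cycle length
print(f"T={T:.2f}, t2={t2:.2f}, t3={t3:.2f}, H*={H_star:.1f}")
```

Any consistent parameter set with P1 > φ can be substituted in the same way.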

The optimal inventory replenishment lot size can be obtained by minimizing the cost function at time t1. So the average production inventory delivery cost at time t1 is

$$TC(t_1) = \frac{\phi}{P_1(1-\nu_1\nu_2)}\left[\frac{K+nK_1}{t_1} + \left(C + C_R\nu_1 + C_S\nu_1\nu_2 + C_T(1-\nu_1\nu_2)\right)P_1 + h_2\,\frac{(\nu_1 P_1)^2}{2P_2}\,t_1 + h_1\left(\frac{P_1}{2} + \frac{\nu_1 P_1^2}{2P_2}\right)(1-\nu_1\nu_2)\,t_1 + h_1\,\frac{n-1}{2n}\left(\frac{(1-\nu_1\nu_2)^2 P_1^2}{\phi} - (1-\nu_1\nu_2)P_1 - \frac{\nu_1(1-\nu_1\nu_2)P_1^2}{P_2}\right)t_1\right] \tag{3.8}$$

Proposition 3.1 The profit function satisfies the optimality condition with respect to t1, i.e., $\frac{d^2 TC(t_1)}{dt_1^2} \ge 0$ if $(1-\nu_1\nu_2) \ge 0$, and the optimal time $t_1^{*}$ is given by

$$t_1^{*} = \sqrt{\frac{K+nK_1}{h_2\,\frac{(\nu_1 P_1)^2}{2P_2} + h_1\left(\frac{P_1}{2} + \frac{\nu_1 P_1^2}{2P_2}\right)(1-\nu_1\nu_2) + h_1\,\frac{n-1}{2n}\left(\frac{(1-\nu_1\nu_2)^2 P_1^2}{\phi} - (1-\nu_1\nu_2)P_1 - \frac{\nu_1(1-\nu_1\nu_2)P_1^2}{P_2}\right)}}$$


Proposition 3.2 If ν2 = 0, then the total production inventory and delivery cost per cycle at time t1 is given by

$$TC_1(t_1) = \frac{\phi}{P_1}\left[\frac{K+nK_1}{t_1} + \left(C + C_R\nu_1 + C_T\right)P_1 + h_2\,\frac{(\nu_1 P_1)^2}{2P_2}\,t_1 + h_1\left(\frac{P_1}{2} + \frac{\nu_1 P_1^2}{2P_2}\right)t_1 + h_1\,\frac{n-1}{2n}\left(\frac{P_1^2}{\phi} - P_1 - \frac{\nu_1 P_1^2}{P_2}\right)t_1\right] \tag{3.9}$$

and the optimal regular production time $t_1^{*}$ is given by

$$t_1^{*} = \sqrt{\frac{K+nK_1}{h_2\,\frac{(\nu_1 P_1)^2}{2P_2} + h_1\left(\frac{P_1}{2} + \frac{\nu_1 P_1^2}{2P_2}\right) + h_1\,\frac{n-1}{2n}\left(\frac{P_1^2}{\phi} - P_1 - \frac{\nu_1 P_1^2}{P_2}\right)}} \tag{3.10}$$

4 EMQ Model for Imperfect Production System with Rework and Disruption

In this case, we consider a production process that starts with a constant production rate P1, where P1 > φ. Let td be the regular production time; after the system is disrupted, the production rate changes by δP1 during the disrupted production time t1P. After t1P, the rework process starts at the rate P2. The finished good items are delivered to the customer in n equal parts over the interval t3. Let the regular production time be td, the disrupted production time be t1P, the rework time of defective items be t2P, and the delivery time of the finished product be t3, with on-hand inventory levels H1, H2, and H3, respectively; then the production cycle length T can be written as (Fig. 2)

$$T = t_d + t_{1P} + t_{2P} + t_3 \tag{4.1}$$

Let Q be the total quantity of perfect items, imperfect items, and scrap items. Then

$$Q = P_1 t_d + (P_1 + \delta P_1)\,t_{1P} \tag{4.2}$$

$$H_1 = (P_1 - \theta_1)\,t_d \tag{4.3}$$

$$H_2 = Q - \theta_1(t_d + t_{1P}) \tag{4.4}$$

$$H_3 = \left(1 + \frac{\nu_1}{P_2}\,(P_2 - \theta_1)\right)\left(Q - \theta_1(t_d + t_{1P})\right) \tag{4.5}$$


Fig. 2 Comparison of inventory of finished items with and without disruption

$$t_{2P} = \frac{\nu_1 Q}{P_2} \tag{4.6}$$

$$t_3 = T - t_d - t_{1P} - \frac{\nu_1 Q}{P_2} \tag{4.7}$$

Proposition 3.3 Suppose H∗ is the total production inventory level without disruption and let H3 be the total inventory level of the disrupted system. Then the production time t1P is

$$t_{1P} = \frac{(1-\nu_1\nu_2)(t_1 - t_d)P_1}{(1 + (1-\nu_2)\nu_1)(P_1 + \delta P_1) - \nu_1 P_1} \tag{4.8}$$

Proof Let the total production inventory level be the same for both cases, i.e.,

$$H_3 = H^{*}$$

From Eqs. (4.5) and (3.4),

$$\left(1 + \frac{\nu_1}{P_2}\,(P_2 - \theta_1)\right)\left(Q - \theta_1(t_d + t_{1P})\right) = (1-\nu_1\nu_2)P_1 t_1$$

Therefore, the production time after disruption is

$$t_{1P} = \frac{(1-\nu_1\nu_2)(t_1 - t_d)P_1}{(1 + (1-\nu_2)\nu_1)(P_1 + \delta P_1) - \nu_1 P_1}$$

From Eq. (4.6), the rework production time after disruption t2P is

$$t_{2P} = \frac{(1-\nu_1\nu_2)(t_1 - t_d)\,\nu_1 P_1^2}{P_2\left((1 + (1-\nu_2)\nu_1)(P_1 + \delta P_1) - \nu_1 P_1\right)} \tag{4.9}$$


The total production inventory and delivery cost at time t1P is

$$TC^{*}(t_{1P}) = \frac{\phi}{(1-\nu_1\nu_2)}\left[\frac{K + nK_1 + (B+D+E)\,P_1(t_d+t_{1P})}{P_1(t_d+t_{1P})}\right] \tag{4.10}$$

where

$$B = h_2\,\frac{\nu_1^2 P_1}{2P_2} + h_1\left(\frac{1}{2} + \frac{\nu_1(1-\nu_1\nu_2)P_1}{2P_2}\right),$$

$$D = h_1\,\frac{n-1}{n}\left(\frac{P_1(1-\nu_1\nu_2)^2}{\phi} - (1-\nu_1\nu_2) - \frac{(1-\nu_1\nu_2)\nu_1 P_1}{P_2}\right),$$

$$E = C + C_R\nu_1 + C_S\nu_1\nu_2 + C_T(1-\nu_1\nu_2)$$

Corollary 4.2 If the scrap rate is ν2 = 0, then all imperfect items are reworkable.

Proposition 4.3 Suppose H∗ is the total production inventory level and let H3 be the total inventory level after the system gets disrupted. Then the production time t1P is

$$t_{1P} = \frac{(t_1 - t_d)P_1}{(1 + \nu_1)(P_1 + \delta P_1) - \nu_1 P_1} \tag{4.11}$$

Proof Let the total production inventory level be the same for both cases, i.e., H3 = H∗. Therefore,

$$t_{1P} = \frac{(t_1 - t_d)P_1}{(1 + \nu_1)(P_1 + \delta P_1) - \nu_1 P_1} \tag{4.12}$$

and

$$t_{2P} = \frac{(t_1 - t_d)\,\nu_1 P_1^2}{\left((1 + \nu_1)(P_1 + \delta P_1) - \nu_1 P_1\right)P_2} \tag{4.13}$$

The total production inventory delivery cost at time t1P is

$$TC_1^{*}(t_{1P}) = \frac{A + (B+D+E)\,P_1(t_d+t_{1P})}{(t_d+t_{1P})} \tag{4.14}$$

where

$$A = \frac{K+nK_1}{P_1}, \qquad B = h_2\,\frac{\nu_1^2 P_1}{2P_2} + h_1\left(\frac{1}{2} + \frac{\nu_1 P_1}{2P_2}\right),$$

$$D = h_1\,\frac{n-1}{n}\left(\frac{P_1}{\phi} - 1 - \frac{\nu_1 P_1}{P_2}\right), \qquad E = C + C_R\nu_1 + C_T$$


5 Numerical Example and Discussion

We give the following numerical example only to illustrate the proposed model (Fig. 3; Table 1).

Numerical Example for Case I (without disruption): P1 = 100, φ = 50, ν1 = 0.1, ν2 = 0.1, C = 4, K = 20000, K1 = 10000, CT = 10, CR = 4, CS = 2, h1 = 0.10, h2 = 0.02, t1 = 15.1092, t2 = 0.3021, TC(t1) = 270761.9.

Numerical Example for Case II (with disruption): td = 6, δP = −10, t1P = 10.1328, t2P = 0, TC∗(tdP) = Rs. 75434.39, TC1∗(tdP) = Rs. 75031.33.
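As a sanity check, the closed form of Proposition 3.1 can be evaluated directly. The sketch below plugs in the Case I parameters; P2 and the number of shipments n are not stated in the example, so the values assumed here are hypothetical, and since the printed formula was reconstructed from a garbled source, the output is illustrative rather than a reproduction of the reported t1.

```python
import math

# Hedged sketch: evaluates the t1* of Proposition 3.1 with the Case I
# parameters. P2 and n are NOT given in the paper's example and are
# assumed here, so the result is illustrative only.
P1, P2, phi = 100.0, 100.0, 50.0   # P2 assumed
nu1, nu2 = 0.1, 0.1
K, K1, n = 20000.0, 10000.0, 1     # n (shipments) assumed
h1, h2 = 0.10, 0.02

num = K + n * K1
den = (h2 * (nu1 * P1) ** 2 / (2 * P2)
       + h1 * (P1 / 2 + nu1 * P1 ** 2 / (2 * P2)) * (1 - nu1 * nu2)
       + h1 * (n - 1) / (2 * n) * (P1 ** 2 * (1 - nu1 * nu2) ** 2 / phi
                                   - (1 - nu1 * nu2) * P1
                                   - nu1 * (1 - nu1 * nu2) * P1 ** 2 / P2))
t1_star = math.sqrt(num / den)
print(f"t1* = {t1_star:.4f}")
```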

6 Analysis According to Table 1, if the defective item rate increases, then the regular production time decreases while the rework production time increases; hence, the total cost increases accordingly. With disruption, both the disrupted production time and the rework time decrease, and consequently the total cost again increases accordingly.

Fig. 3 Total cost with respect to regular time

Table 1 Effect of defective rate (ν1) on optimal policy with and without disruption (ν2 > 0)

        Without disruption                   With disruption
ν1      t1        t2       TC(t1)            t1P      t2P      TC∗(tdP)
0.10    10.4603   2.0920   337915.10         5.4741   1.0946   365563.00
0.11    9.9719    2.1938   352796.20         4.8870   1.0751   380287.00
0.12    9.5452    2.2908   367088.30         4.3735   1.0496   394270.00
0.13    9.1683    2.3876   380866.10         3.9191   1.0189   407596.10


7 Conclusion and Suggestions The paper presents an economic production policy for an imperfect production system with rework. The model is developed for two different situations: the first one is the production model with disruption, and the second one is the production model without disruption. We first derived the production inventory delivery cost function for two subcases: in the first, defective and scrap items exist in the system; in the second, only defective items exist in the system. In the first case, we optimized the regular production time, rework production time, and total production cost. In the second case, we derived the production inventory delivery cost function for the EMQ model with disruption, again for two subcases: in the first subcase, defective and scrap items exist in the system; in the second, only defective items exist in the system. We have optimized the disrupted production time, rework production time, and total production cost. The sensitivity analysis reveals that the cost without disruption is higher than the production cost with disruption. We suggest that the inventory manager reduce disruptions in the production system to earn more profit. The model can be extended with variable production rates and price-sensitive demand. Also, multiple shipments and variable demand can be incorporated.

References 1. F.W. Harris, What quantity to make at once, in The Library for Factory Management in Operation and Costs, vol. 5 (A. W. Shaw Company, Chicago, 1915), pp. 47–52 2. K.C. So, C.S. Tang, Optimal operating policy for a bottleneck with random rework. Manag. Sci. (1995) 3. P.A. Hayek, M.K. Salameh, Production lot sizing with the reworking of imperfect quality items produced. Product. Plann. Control 12, 584–590 (2001) 4. S.W. Chiu, Optimal replenishment policy for imperfect quality EMQ model with rework and backlogging. Appl. Stochast. Models Bus. Ind. 23, 165–178 (2007) 5. Y.S.P. Chiu, Determining the optimal lot size for the finite production model with random defective rate, the rework process, and backlogging. Eng. Optim. 35, 427–437 (2008) 6. R. Haji, B. Haji, Optimal batch size for a single machine system with accumulated defective and random rate of rework. J. Ind. Syst. Eng. 3, 243–256 (2010) 7. Y. He, J. He, A Production Model for Deteriorating Inventory Items with Production Disruption (Hindawi Publishing Corporation, 2010) 8. U.K. Khedlekar, A. Namdev, A. Nigwal, Production inventory model with disruption considering shortage and the time proportional demand. Yugoslav J. Oper. Res. 28, 123–139 (2018)

Traffic Accident Detection Using Machine Learning Algorithms Swati Sharma, Sandeep Harit, and Jasleen Kaur

Abstract A vehicular ad hoc network (VANET) can help in reducing accidents by sending safety messages to the vehicles. The high mobility and high dynamics of vehicles give rise to many challenges in VANET. Machine learning is a technique of artificial intelligence that can provide a splendid set of tools for handling data. This paper gives a concise introduction to the significant concepts of machine learning and VANET. Our main concern is to implement VANET using different machine learning techniques. The proposed scheme uses collected simulated data, and the implementation is done through a random forest classifier. Keywords VANET · Machine learning · Artificial intelligence

1 Introduction The intelligent transportation system (ITS) is receiving more consideration because of the increasing number of vehicles that are leading to network congestion. ITS is a combination of information and communication technologies (ICT) to improve the management and safety of vehicular networks. A vehicular ad hoc network (VANET) [1] can help in reducing accidents by sending safety messages. Several new issues arise because of the highly dynamic topology of vehicular networks, which motivates researchers to rethink traditional wireless design methodologies. VANET provides communication with cost-efficient and reliable data distribution. Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications in VANET are shown in Fig. 1. The communication is carried out with the support of dedicated short range communication (DSRC) [2] standards. The V2I mode establishes a connection between road side units (RSUs) and vehicles to provide various traffic information and entertainment services. These types of services require a high amount of data transfer and bandwidth consumption through S. Sharma (B) · S. Harit · J. Kaur Department of Computer Science and Engineering, Punjab Engineering College, Chandigarh 160012, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_49


Fig. 1 VANET architecture
Fig. 2 Machine learning process

media streaming, map downloading, and social networking [3]. The other mode, V2V, is mainly considered for disseminating safety-critical information, which needs strict delay sensitivity and high reliability, known as basic safety messages (BSM) [4]. In the literature, it has been identified that researchers have been using detection components for the last few years [5]. Machine learning is a practical approach to artificial intelligence that provides numerous tools to exploit data for the use of networks, as shown in Fig. 2. The three widely used classification methods that can be used for analyzing traffic data are: support vector machines (SVMs), artificial neural networks (ANNs), and random forest.


The structure of the paper is organized as follows. Section 2 describes the related work, Sect. 3 performs a comparative analysis among three machine learning algorithms, and Sect. 4 summarizes the simulation parameters. Finally, Sect. 5 concludes the paper.

2 Related Work VANET is a promising technology that provides advanced road safety and efficiency by providing information to the drivers. Safety applications address minimizing the risk of accidents through lane change, cooperative collision warning, pre-accident sensing, traffic violation warning, etc. They use a communication scheme where a GPS receiver, sensors, and a DSRC device are installed on the vehicles. Li and McDonald [6] proposed a bivariate analysis model (BEAM), which is based on two variables: the travel time differences between adjacent time intervals and the average travel times of probe vehicles. BEAM shows that link travel times increase when capacity drops (i.e., when an incident occurs) and that this also affects the change in demand. The relationships between incident and non-incident conditions are studied using statistical principles of bivariate analysis. Sheu [7] proposed an incident detection approach comprising three sequential procedures: symptom identification, signal processing, and pattern recognition. The proposed approach identifies anomalous changes in traffic characteristics caused by incidents. Binglei et al. [8] proposed an algorithm based on real-time analysis of fuzzy logic and traffic parameters. The automatic traffic incident detection transfers the information to traffic managers for reducing traffic jams and accidents. For measuring the performance of the proposed algorithm, it assumes that the traffic density on the road is monitored constantly.

3 Background Traditional static mathematical models are not appropriate for capturing and tracking the characteristics of VANET. Machine learning adopts two stages, training and testing. A model is trained in the training stage on the training data; in the testing stage, the trained model is applied to produce predictions. Supervised learning is carried out with training samples and their corresponding outcomes. The classification methods adopted in this paper are as follows: • Artificial neural networks (ANNs): This algorithm aims to duplicate the behavior of a neural network that comprises numerous interconnected neurons. This set of nodes adopts a sigmoidal transfer function that converts the weighted sum of inputs to "0" or "1" as output [9].


• Support vector machines (SVMs): These manage large-dimensional datasets [10]. SVMs are used in many areas such as financial forecasting, haptic data prediction, and illumination analysis, and are also adopted by intelligent transportation systems (ITS) in areas such as travel time, incident detection, traffic flow, and speed prediction. • Random forest: This is a data mining tool to classify and solve regression-related problems. It determines the class type through voting and grows an ensemble of trees. This has enhanced the classification efficiency significantly; random vectors are constructed, and each tree is generated from one of the random vectors. Random forest [11] comprises classification and regression trees, where classification is solved by analyzing the output of the trees.

4 Simulation Parameters

The parameters taken under consideration for simulating all three machine learning algorithms are explained as follows:
• Number of vehicle trees: Generally, a larger number of trees will lead to better accuracy. However, this can also lead to more computational cost, and after a certain number of trees the improvement becomes negligible.
• Max depth of the vehicle trees: Maximum depth shows the depth of a tree in the forest. A deeper tree indicates more splits, which capture more information from the data. The depth of the VANET tree depends on the number of vehicular details we include in our tree.
• Minimum sample split for a vehicle tree: Minimum samples split represents the minimum number of samples needed to split an internal node. The splitting of an internal node of a tree considers the position of vehicles, speed of the vehicle, traffic condition, the accuracy of position, etc.
• Minimum sample leaf for a vehicular tree: Minimum sample leaf is the minimum number of samples needed to be at a leaf node. This parameter is like the minimum sample split; however, it describes the minimum number of samples at the leaf, at the base of the vehicular tree. Thus, the leaf decides the outcome of the classifier tree, which determines whether the position of the vehicle is accurate or not.
• Number of random features of the vehicular tree: Maximum features denotes the number of features to be considered when looking for the best split. Features taken into consideration are the current GPS position, the speed of the car, the distance between cars, and traffic conditions.
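As an illustration of how these parameters map onto a concrete implementation, the following is a minimal sketch using scikit-learn's RandomForestClassifier; the paper does not name its implementation, so the library choice and the specific values (100 trees, depth 10, etc.) are assumptions, and `X`/`y` stand for hypothetical feature and label arrays.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative sketch (not the paper's code): the five parameters listed
# above correspond one-to-one to scikit-learn's constructor arguments.
clf = RandomForestClassifier(
    n_estimators=100,      # number of vehicle trees
    max_depth=10,          # max depth of each tree
    min_samples_split=2,   # minimum samples to split an internal node
    min_samples_leaf=1,    # minimum samples at a leaf node
    max_features="sqrt",   # random features per split, sqrt(num_features)
)
# With hypothetical arrays X (features) and y (labels), 5-fold
# cross-validation as in Sect. 4.2 would be:
# scores = cross_val_score(clf, X, y, cv=5)
```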


4.1 Dataset Information Datasets contain traces of vehicles, and each dataset contains several requests sent from a transmitter to a receiver, requesting a specific data transmission rate with a specific severity. The generated dataset used for simulation in this paper is of the city of Erlangen, situated in Germany. The dataset has some specific parameters which are essential for VANET implementation. The dataset utilized for simulation contains the fields shown in Table 1.

4.2 Implementation In this section, the implementation is carried out in two steps: calculating splits and a dataset case study. • Calculating splits: In a decision tree, the initial step is to choose the split points. Finding a split point is carried out by finding the attribute and its value which result in the lowest cost. Then, the Gini index is calculated to find the purity of the groups of data. A Gini index value of 0 is perfectly pure for a two-class classification problem. • Dataset case study: The dataset's string values are converted into numeric values, and the output column is converted from strings to the integer values 0 and 1. We have used k-fold cross-validation for estimating the performance of the learned model. After that, we have constructed and evaluated k models and predicted the performance through the mean model error. The steps carried out for simulating the three machine learning algorithms are as follows: 1. For simulation, the value of k is 5, which is used for cross-validation; each fold contains 882/5 = 176.4, i.e., just over 176 records to be evaluated upon each iteration.

Table 1 Dataset fields

Term              Definition
Start time        Time (seconds) when the request arrives
End time          Time (seconds) when the request is done
Time period       End time − start time
Packets           Number of packets required by this request
Rate              Number of packets divided by time period, packets per second
Actual distance   Distance in meters between the sender and receiver
Severity          Severity of the request


Table 2 Comparative analysis of machine learning algorithms

Algorithm       Accuracy   Sensitivity   Specificity
ANN             89         83            91
SVM             87         84            88
Random forest   91         93            89

Fig. 3 Accuracy of random forest algorithm

2. Deep trees are constructed with a maximum depth of 10, and the minimum number of training rows at each node is one. The sampled training dataset is the same size as the original dataset. 3. The number of features considered at each split point is set to sqrt(num_features), i.e., sqrt(9) = 3 features. 4. For comparison, a suite of six different trees was evaluated, which shows increasing skill as trees are added. 5. Running the dataset prints the scores for each fold and also the mean score of each configuration. 6. Depict the scatter plot. The objective of this paper is to review the performance of the random forest algorithm on traffic data and provide a comparative analysis with SVM and ANN, as shown in Table 2. It can be concluded from Table 2 that the random forest machine learning algorithm has the highest accuracy, as shown in Fig. 3.
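The split-scoring step described in Sect. 4.2 can be made concrete with a small sketch of the Gini index computation; the two candidate groups below and the layout (class label in the last column) are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of the Gini impurity used to score candidate splits;
# a value of 0 means the split produces perfectly pure groups.
def gini_index(groups, classes):
    n_total = sum(len(g) for g in groups)
    gini = 0.0
    for group in groups:
        if not group:
            continue  # avoid dividing by an empty group
        score = sum((sum(1 for row in group if row[-1] == c) / len(group)) ** 2
                    for c in classes)
        gini += (1.0 - score) * (len(group) / n_total)
    return gini

# two candidate groups produced by a split; class label in the last column
left  = [[2.7, 0], [1.4, 0], [3.3, 0]]   # pure group -> contributes 0
right = [[7.4, 1], [9.0, 1], [7.8, 0]]   # mixed group -> contributes impurity
print(gini_index([left, right], classes=[0, 1]))  # prints ~0.222
```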

5 Conclusion Applying machine learning to the problem of high mobility in VANETs turned out to be of great advantage. Machine learning is considered to be a promising solution to this challenge because of its significant performance in various


artificial intelligence related areas. In this paper, the V2V data communication model is evaluated. A mean accuracy of 91.477% is achieved by applying random forest to the collected vehicle dataset.

References 1. M. Dixit, R. Kumar, A.K. Sagar, VANET: architectures, research issues, routing protocols, and its applications, in 2016 International Conference on Computing, Communication and Automation (ICCCA) (IEEE, 2016), pp. 555–561 2. J.B. Kenney, Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE 99(7), 1162–1182 (2011) 3. Y. Wang, Z. Ding, F. Li, X. Xia, Z. Li, Design and implementation of a VANET application complying with WAVE protocol, in 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET) (IEEE, 2017), pp. 2333–2338 4. Y. Zang, L. Stibor, X. Cheng, H.-J. Reumerman, A. Paruzel, A. Barroso, Congestion control in wireless networks for vehicular safety applications, in Proceedings of the 8th European Wireless Conference, vol. 7 (2007), p. 1 5. D. Srinivasan, R.L. Cheu, Y.P. Poh, A.K.C. Ng, Development of an intelligent technique for traffic network incident detection. Eng. Appl. Artif. Intel. 13(3), 311–322 (2000) 6. Y. Li, M. McDonald, Motorway incident detection using probe vehicles, in Proceedings of the Institution of Civil Engineers-Transport, vol. 158 (Thomas Telford Ltd, 2005), pp. 11–15 7. J.-B. Sheu, A sequential detection approach to real-time freeway incident detection and characterization. Eur. J. Oper. Res. 157(2), 471–485 (2004) 8. X. Binglei, H. Zheng, M. Hongwei, Fuzzy-logic-based traffic incident detection algorithm for freeway, in 2008 International Conference on Machine Learning and Cybernetics, vol. 3 (IEEE, 2008), pp. 1254–1259 9. F.A. Ghaleb, A. Zainal, M.A. Rassam, F. Mohammed, An effective misbehavior detection model using artificial neural network for vehicular ad hoc network applications, in 2017 IEEE Conference on Application, Information and Network Security (AINS) (IEEE, 2017), pp. 13–18 10. A.H. Fielding, Cluster and Classification Techniques for the Biosciences (Cambridge University Press, Cambridge, 2006) 11. N. Dogru, A. Subasi, Traffic accident detection using random forest classifier, in 2018 15th Learning and Technology Conference (L&T) (IEEE, 2018), pp. 40–45

A Comparative Approach of Error Detection and Correction for Onboard Nanosatellite Mahmudul Hasan Sarker, Most. Ayesha Khatun Rima, Md. Abdur Rahman, A. B. M. Naveed Hossain, Noibedya Narayan Ray, and Md. Motaharul Islam

Abstract The nanosatellite field is constantly evolving and growing significantly over the world. It creates a huge demand for more advanced and reliable error detection and correction (EDAC) systems that are capable of fast and large data transmission with fewer errors. A comparative approach has been identified as a suitable scheme for the prevention of single and multiple event effects affecting onboard nanosatellites in low earth orbit. In this paper, we have proposed a comparative approach to error detection and correction based on three different EDAC algorithms: Hamming codes, cyclic redundancy check, and Reed–Solomon codes. We have also designed the system with three different parts: an encoding part, an error counting part, and a decoding part. It has been developed in such a way that, during data transfer from satellite to ground station, it can analyze six camera images simultaneously with the help of FPGA and EDAC methods. To increase the efficiency, we have introduced an advanced turbo mechanism EDAC for different bandwidths of satellite communication, with performance analysis over the AWGN channel and the Rayleigh channel. The EDAC method codes are tested in MATLAB and shown in graphical plots. This technique is simple and achieves high reliability and accuracy compared to other similar methods. Keywords Error detection and correction · Low earth orbit · Field programmable gate array · Hamming code · Cyclic redundancy check · Reed–Solomon code · Turbo code

1 Introduction A nanosatellite is an object that moves in a curved path around a planet. The earth observation (EO) satellite is one of the fundamental tools for research on the earth's environment [1]. From above the earth's surface, EO satellites apply smart image sensors M. H. Sarker (B) · Most. A. K. Rima · Md. A. Rahman · A. B. M. Naveed Hossain · N. N. Ray · Md. M. Islam United International University (UIU), United City, Madani Avenue, Dhaka, Bangladesh e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_50


to observe and obtain information on the earth's surface and utilize infrared for beneath observation. By observing the earth from space, EO satellites provide essential information on weather monitoring, urban monitoring, natural disasters, agricultural growth monitoring, environmental monitoring, etc. [10]. Error detection and correction devices on nanosatellites aim to secure errorless data transmission between the satellite and the ground station. They are subject to unstable data corruption because of interference from thermal noise or any other type of noise. Single-bit errors and burst errors are two general types of errors: a single-bit error means one bit is changed from 0 to 1 or 1 to 0, and a burst error means more than one contiguous bit is corrupted [6]. The error detection process is the first step toward error correction and depends on adding extra bits to the original data. Redundancy bits are achieved through two main coding schemes, convolution coding and block coding. Error correction can be classified as automatic repeat request (ARQ) and forward error correction (FEC); sometimes, ARQ and FEC can be combined, a method called hybrid automatic repeat request [6]. There are many systems designed to detect and correct errors, such as EDAC using the CRC technique, the Hamming technique, and Reed–Solomon codes [4]. To increase the efficiency of EDAC, we introduced an advanced turbo mechanism with two interleavers and three encoding processes. The advanced turbo mechanism is tested in MATLAB with AWGN and Rayleigh channels. The main contributions of this paper are as follows: • We have proposed an architecture that consists of the EDAC method for nanosatellites. • We have studied, analyzed, and compared the error detection and correction algorithms that are used by satellites. • We have identified the number of erroneous bits and, based on that, proposed suitable error correction methods. • We have implemented the scheme for CRC, Hamming code, and Reed–Solomon code. • We have analyzed the performance of the turbo mechanism with two different channels in MATLAB. • Finally, we have identified the limitations of our proposed methods. The rest of the paper is organized in the following manner. Section 1 discusses the EDAC methods of nanosatellites. The objective behind the research is presented in Sect. 2, and the proposed system architecture is discussed in Sect. 3. Section 4 shows the algorithmic analysis. Section 5 discusses the performance evaluation. Lastly, Sect. 6 concludes our paper.


2 Literature Review Al Mamun et al. [1] proposed a Hamming code to prevent single-bit parity errors onboard satellites in LEO. The main focus is on error detection and correction via generating a Hamming code matrix. After comparison with other Hamming codes, CRC, etc., and the schematic limitations of the generated codes, Hamming [16, 11, 4] was proven to be the most efficient. MATLAB is used for the implementation process. Both single-bit and double-bit errors can be handled via this algorithm. Pakartipangi et al. [12] proposed a technique to obtain wider coverage area images for low-dimension satellites. Wider and more detailed images are obtained via this camera-array system, which handles all the error bits using a XULA2 LX9 FPGA board. The camera array was designed in such a way that no overlapping area occurs. Hanafi et al. [8] proposed an SRAM-based FPGA technology that implements an onboard computer system used in low earth orbit nanosatellites. The hardware and software architecture of the system is based on Xilinx's Spartan-6 FPGA. The system was designed for developing a payload architecture in the inherent space environment. Ibrahim et al. [11] proposed a satellite system design with acceptable accuracy on a low power budget. The latest FPGAs are capable of adopting orbital changes to combat external hazards. This paper presents a concept to establish avionics systems by utilizing crucial features of the available FPGAs. Scrubbing keeps the FPGA data configurations safe with frame calculation and a back-tracking method. Banu et al. [3] proposed an encryption method to secure terrestrial communication via small satellites. With the increased quantity of valuable and sensitive data being sent, a satellite can bring risks of providing access to unauthorized data. An advanced encryption standard method is used to protect data from such threats. Satellites operate in a harsh environment surrounded by radiation and magnetic fields, which can cause malfunctions in the satellite system and fatal faults. The advanced encryption system is strong enough to handle such faults, protecting the valuable data and keeping data transmission uninterrupted by potential corruptions. Banu et al. [2] proposed a commercial algorithm also known as the advanced encryption standard. In order to protect valuable and sensitive data and prevent unauthorized access in terrestrial communication, five modes of AES in satellite imaging have been used. To prevent faults from noisy channels and the effect of SEUs, those five modes were analyzed and observed using a Hamming error correction code. Measurements of power and throughput overhead were presented with the implementation on a field programmable gate array. Hillier et al. [10] proposed a parity-check matrix and a calculated syndrome for error detection and correction onboard nanosatellites. The scheme can self-detect and self-correct any single event effect error that occurs during transmission [10]. Cryptographic protection is used to secure transmissions from being hacked. MATLAB and VHDL were used to test three types of Hamming code methods; among them, the most efficient version was Hamming [16, 11, 4].



Bentoutou et al. [4] proposed an onboard EDAC method to protect data transmission between the AISAT-1 CPU and its memory. The paper presents the application of double-bit EDAC and its implementation with FPGA. The EDAC is calibrated and computed with three kinds of techniques.

3 System Architecture Our first proposed system is built on a Spartan-6 FPGA SP605 evaluation kit from Xilinx along with six OV2640 camera modules. The six OV2640 camera modules provide a 360° view from the nanosatellite [12]. The installation of six cameras in a nanosatellite advances image processing to a new horizon, and a new chapter of nanosatellite development begins with this implementation. The system block diagram and schematic diagram are shown in Figs. 1 and 2. The system consists of a processor using a Xilinx Spartan-6 FPGA for onboard data handling. The six OV2640 camera modules are attached to the Xilinx Spartan-6 FPGA and configured into a 2 × 3 array.

Fig. 1 System configuration

Fig. 2 Schematic diagram of six OV2640 camera module interface


The FPGA controls the camera data according to the camera array. The FPGA fetches the camera images and then sends them to the PC. The PC arranges the images according to the camera array. Our system block contains a clock generator, UART camera controller, UART PC controller, bridge, and UART camera switch.

4 Algorithmic Analysis 4.1 Hamming Code Hamming code is a linear block code for error detection and correction. Hamming codes can detect one-bit or two-bit errors at the same time, but can correct only single-bit errors. The main concept of the Hamming code is to add parity bits after the stream of data to verify that the data received by the ground station matches the corresponding input data stream. Satellite ground stations check the transmitted data in such a way that they identify where the error has occurred. The structure of a Hamming code is given by its block length, message length, and distance. The block length is n = 2^r − 1, where r ≥ 2, and the message length is 2^r − r − 1. Depending on the Hamming code version, the distance value changes. By adding more than one parity bit, this scheme can locate the position of the error and self-correct it by inverting the bit. Mostly three types of Hamming codes are used in nanosatellite communication: Hamming [7, 4, 3], Hamming [8, 4, 4], and Hamming [16, 11, 4].
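As a concrete illustration of the parity-bit idea, here is a minimal sketch of the smaller Hamming(7,4) code (not the [16, 11, 4] variant used in the paper): a single flipped bit yields a nonzero syndrome equal to the corresponding column of the parity-check matrix, which locates the bit to invert.

```python
import numpy as np

# Minimal Hamming(7,4) sketch: encode 4 message bits with 3 parity bits,
# flip one bit, then locate and correct it from the syndrome.
G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
code = msg @ G % 2                      # encode
code[2] ^= 1                            # inject a single-bit error
syndrome = H @ code % 2                 # nonzero syndrome -> error detected
# the syndrome matches the column of H at the error position
err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
code[err_pos] ^= 1                      # correct by inverting the located bit
print("corrected message:", code[:4])   # recovers [1 0 1 1]
```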

4.2 Cyclic Redundancy Check CRC is the process of detecting errors emerging in the communication channel. Nanosatellite data exchange relies on CRC codes, and the CRC codes widely used in the EDAC process are also commonly referred to as polynomial codes, since the bit string is treated as the coefficient list of a polynomial. The k-bit message is considered as a polynomial with k terms, from x^(k−1) to x^0. The highest-order term has the first bit as its coefficient, the next term the second bit, and so on. Check digits are generated by multiplying the k-bit message by x^n and dividing the resulting polynomial by the (n + 1)-bit generator polynomial. The n-bit remainder is appended as the check digits. The complete received sequence is divided by the same generator polynomial. If the remainder is zero, no errors have occurred; if the remainder is nonzero, a transmission error occurred.
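The division procedure described above can be sketched as follows; the generator polynomial x^3 + x + 1 and the message bits are illustrative choices, not the ones used onboard.

```python
# Minimal sketch of the CRC idea: append n zero bits, XOR-divide by the
# generator polynomial over GF(2), and append the remainder as check bits;
# the receiver's remainder is zero when no (detectable) error occurred.
def crc_remainder(bits, poly):
    bits = bits + [0] * (len(poly) - 1)     # append n = deg(poly) zeros
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:                         # XOR-divide when leading bit is 1
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-(len(poly) - 1):]          # the n-bit remainder

msg = [1, 0, 1, 1, 0, 1]
gen = [1, 0, 1, 1]                          # x^3 + x + 1 (illustrative)
check = crc_remainder(msg, gen)
received = msg + check                      # transmitted sequence
assert crc_remainder(received, gen) == [0, 0, 0]  # zero remainder: no error
```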


4.3 Reed–Solomon


Reed–Solomon codes work with burst-type data errors. They are used in broadcast systems such as satellite communications and also in storage systems, etc. They detect burst errors in data transmission and correct the erroneous data. If Reed–Solomon codes are used, the probability of an error remaining in the decoded data will be much lower; this is mostly described in terms of coding gain. Reed–Solomon codes are also suitable for multiple-burst error correction, as a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is left to the code designer and can be selected over a wide range. Reed–Solomon error correction is a forward error correction code. It works with polynomial samples of the data: a polynomial is evaluated at several points, and these values are either transmitted or recorded.

For a low-cost nanosatellite system, a single-chip onboard command and data handling (OBCDH) unit implemented as a mixed-mode application-specific integrated circuit (ASIC) was proposed [8]. Future small satellites having data processing and control functions along with data collection and remote sensing capabilities for earth observation missions are the result of this ASIC specification. The block diagram in Fig. 3 consists of four subsystems: a 32-bit RISC processor core modified for space use, an image handling subsystem, a communication link for the satellite, and a supporting peripheral subsystem. The OBC is the main component of the miniature OBCDH system. This onboard computer system-on-a-chip, serving as an initial prototype of the digital part of the OBCDH ASIC, is shown in Fig. 3. In the OBCDH, our main priority is to upgrade the EDAC system so that we can have error-free data in satellite communication.

Data transmitted by nanosatellites to the ground station can be checked for errors with the help of some algorithms. In the ground station, CRC checks the data for errors. After that, the type of errors in the data is specified by the detection methods. In the detection methods, we figure out whether errors are present in our data; if we can detect them via three steps (data word, codeword, generator), then we can identify the erroneous bit in the input stream, as shown in Fig. 4. Based on the errors, we need to change our algorithms according to the size of the errors, using correction methods like Hamming correction, CRC correction, and Reed–Solomon correction. With the help of these correction methods, we finally get our desired errorless data.

In Fig. 5, the design goes from message bits to the parity-check matrix, then the generator matrix, and finally the corrected code. At first, we define the codeword bits per block, message bits per block, parity submatrix, generator matrix, and parity-check matrix. We encode the message and find the position of the error in the codeword (index). Then the code is modified and corrected. Finally, the error data is removed and the figure is plotted. In Fig. 6, we design the input and output messages with the help of CRC. At first, we take the input and the generator matrix. Then we find the checksum. After finding the checksum, we add it to the message bits and check the output: if the remainder is nonzero, a transmission error has occurred; if the remainder is zero, no errors occurred. Then the output figure is plotted.


Fig. 3 Satellite OBCDH ASIC structure

Fig. 4 EDAC proposed algorithmic architecture
Fig. 5 MATLAB simulation of Hamming code



Fig. 6 MATLAB simulation of cyclic redundancy check

Fig. 7 MATLAB simulation of Reed–Solomon code


In Fig. 7, we have designed the Reed–Solomon code input and output messages. Reed–Solomon coding is used to correct the associated burst errors. This code is characterized by three parameters: an alphabet size t, a block length n, and a message length k. The decoder uses the Reed–Solomon view of a codeword as polynomial values based on the encoded message. The decoder recovers the encoding polynomial from the received message data.
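For completeness, a burst-error round trip can be sketched with the third-party Python package `reedsolo` (an assumption made purely for illustration; the paper's own simulations are in MATLAB, and the decode return signature varies across package versions).

```python
# Hedged sketch using the third-party `reedsolo` package (assumed
# installable via `pip install reedsolo`); the paper itself uses MATLAB.
from reedsolo import RSCodec

rsc = RSCodec(10)                       # 10 parity symbols -> corrects up to 5 bytes
encoded = rsc.encode(b"nanosatellite telemetry frame")
corrupted = bytearray(encoded)
corrupted[3:6] = b"\x00\x00\x00"        # inject a 3-byte burst error
# recent reedsolo versions return (message, message+ecc, errata positions)
decoded = rsc.decode(bytes(corrupted))[0]
assert bytes(decoded) == b"nanosatellite telemetry frame"
```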

4.4 Turbo Encoding Mechanism The turbo encoder block uses a parallel concatenated coding scheme to encode a binary input signal. Three identical convolutional encoders and two internal interleavers are used in this coding scheme. Each constituent encoder is terminated by tail bits autonomously. The block diagram shows that the output of the turbo encoder block is made up of the first encoder's systematic and parity bit streams and only the parity bit streams of the second encoder. An interleaver is used between two systematic convolutional encoders, as seen in Fig. 8. Here, we can achieve a rate of 1/3 without puncturing and 1/2 with a form of puncturing. Other code rates


Fig. 8 Advanced turbo encoder mechanism for EDAC
Fig. 9 Advanced turbo decoder mechanism for EDAC

are also obtained by the process of puncturing. The turbo encoding process is tested with two different noise channels, the AWGN channel and the Rayleigh channel. The full communication system was maintained over different satellite bandwidths.

4.5 Turbo Decoding Mechanism The turbo decoder is applied when turbo-encoded data is transmitted over the AWGN channel via baseband. The Log-MAP decoding structure offers output close to the Shannon limit with less complexity. The turbo decoder consists of SISO decoders separated by three interleavers and two de-interleavers, as shown in Fig. 9. Because of noise, encoded output data bits can get corrupted and enter the decoder input as r0 for the systematic bits, r1 for parity-1, r2 for parity-2, and r3 for parity-3. Turbo codes are attractive due to their powerful error-correcting ability, rational complexity, and versatility in terms of different block lengths, code rates, number of memory elements, etc. (Fig. 9).

5 Performance Evaluation The proposed methods work according to the EDAC algorithms, but some efficiency issues were identified, so we use the turbo mechanism to increase the efficiency for reliable data communication. The turbo mechanism was tested on two different channels, the AWGN channel and the Rayleigh channel. In Figs. 10 and 11, the performance of the channels over different SNRs and bandwidths is described.



Fig. 10 Performance analysis of turbo code Rayleigh channel

Fig. 11 Performance analysis of turbo code AWGN channel

Turbo codes are high-performance codes that result from the exchange of information between recursive constituent codes and decoders. If the frame size is kept large and randomly selected, turbo codes can achieve surprisingly low error rates at low SNRs. The greater the number of iterations, the more information is exchanged between the constituent decoders; as a result, the code performs better. The turbo code was simulated for the Rayleigh-faded channel with frame size K = 40. The number of frames at each SNR was taken as 500 to keep the simulation fast. Frames of size 40 were sent, giving 20,000 bits at each SNR for the BER measurement. SNR values ranging from 0 to 5 dB were used. The number of decoder iterations was chosen to be 10. The BER per iteration is shown in Fig. 10. The turbo code achieved a BER of 0.5 × 10−2 after the 1st decoder iteration. The BER improved to 1.34 × 10−4 after the 10th iteration. It can be seen that BER performance increases as the number of iterations increases; however, the rate of improvement slows down, so the result is illustrated by the overlapping curves after the 5th iteration. The BER was 1.418 × 10−4 after 5 iterations. The BER did not show significant


improvement after the 5th iteration. Figure 11 shows the BER curve for the frame size K = 40 turbo code on the AWGN channel. The BER was 3.628 × 10−4 after the 1st decoder iteration. The BER decreases with further iterations; at the end of the 10th iteration, the BER is 1.152 × 10−5.
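The kind of BER-versus-SNR sweep behind Figs. 10 and 11 can be sketched as a Monte Carlo simulation. The sketch below uses uncoded BPSK over AWGN and flat Rayleigh fading rather than the full turbo encoder/decoder chain (which the paper simulates in MATLAB), so the bit count and seed are illustrative assumptions.

```python
import numpy as np

# Hedged Monte Carlo sketch: BER of uncoded BPSK over AWGN and a flat
# Rayleigh fading channel, swept over the 0-5 dB SNR range of Figs. 10-11.
rng = np.random.default_rng(0)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
symbols = 1 - 2 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1

for snr_db in range(0, 6):
    noise_std = np.sqrt(1 / (2 * 10 ** (snr_db / 10)))
    noise = noise_std * rng.standard_normal(n_bits)
    # Rayleigh fading gain with unit average power
    h = np.sqrt(rng.standard_normal(n_bits) ** 2 +
                rng.standard_normal(n_bits) ** 2) / np.sqrt(2)
    ber_awgn = np.mean((symbols + noise < 0) != (bits == 1))
    ber_ray = np.mean((h * symbols + noise < 0) != (bits == 1))
    print(f"SNR {snr_db} dB: AWGN BER {ber_awgn:.4f}, Rayleigh BER {ber_ray:.4f}")
```

The fading channel shows a visibly higher error floor at each SNR, which mirrors the qualitative gap between the Rayleigh and AWGN curves in the paper.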

6 Conclusion In this paper, we have focused on different EDAC techniques, and at the same time a correlated study of the codes was carried out, giving a good overview of all regular EDAC techniques. This paper presents a comparative approach to the EDAC algorithms of nanosatellite data transmission systems. MATLAB software is used for checking the EDAC algorithms' performance. We evaluated the performance based on OV2640 camera real-time data to check the error detection and correction process. In the error detection method, the error bits are detected. Then the error counting method decides, based on the error types, which algorithm is suitable for error correction. The advanced turbo mechanism is used to increase the efficiency of satellite communication, and the performance of the turbo mechanism was analyzed with two different noisy channels and different satellite communication bandwidths. In the future, we will introduce an advanced error detection method based on satellite-transmitted data. Depending on the data size, type, and importance, the detection algorithms will be changed. This will reduce the time of the EDAC process and provide reliable, safe data in a faster way.


Effective Text Augmentation Strategy for NLP Models Sridevi Bonthu, Abhinav Dayal, M. Sri Lakshmi, and S. Rama Sree

Abstract Data augmentation effectively increases variance in training data, resulting in increased accuracy and generalization in deep learning tasks. Augmentation of text data requires careful implementation so as to avoid text attacks. This paper presents a novel strategy for augmenting text data in a meaningful way that leads to improved accuracy, and it provides a baseline model for comparison. The proposed strategy uses a mix of pre- and post-augmentation, utilizing four operations (random swap, random deletion, back-translation, and random synonym insertion) in two different settings on a classification model based on recurrent neural networks. Experimental results on the Apple Twitter Sentiment Dataset reveal that the proposed method achieves an accuracy improvement of 3.29%, which is significant on datasets with limited training data.

Keywords Data augmentation · Natural language processing · Sentiment analysis · Back-translation · Random swap · Random deletion · Synonym replacement

1 Introduction

Nowadays, advanced applications in the field of natural language processing (NLP) are ubiquitous, and they involve the computational processing and understanding of human languages. Incredible progress has taken place in deep learning-based NLP, particularly in the last few years.

S. Bonthu (B) · A. Dayal · M. S. Lakshmi
Vishnu Institute of Technology, Bhimavaram, AP, India
e-mail: [email protected]
M. S. Lakshmi e-mail: [email protected]
S. Rama Sree
Aditya Engineering College, Surampalem, AP, India
e-mail: [email protected]


The field of NLP has relied on statistics, probability, and machine learning since the 1980s and on deep learning since the 2010s [1]. Machine learning and deep learning show significant results on tasks ranging from sentiment analysis [2] to question answering [3]. However, the lack of theoretical foundation and interpretability of the models, as well as the requirement of huge amounts of data and computing resources, poses challenges to the applicability of deep learning approaches in NLP [4]. The performance of any model depends on the size and quality of the data on which it is trained [5]. Data augmentation (DA) is a technique for increasing the training data to boost the performance of the model. Image data augmentation is a standard practice in computer vision tasks and performs remarkably well [6, 7], whereas text augmentation is rare in NLP tasks [8], because rules for language transformation that avoid changing the meaning have not been thoroughly studied and experimented with. Nonetheless, recent use of simple text transformations, or of text generation through language models, to increase the amount of training data proves the efficacy of augmentation in NLP [9].

Augmentation in NLP tasks with deep learning is an emerging field. Back-translation can generate new data by translating sentences from one language to another and back, and it is an effective method for improving translation quality in neural machine translation (NMT) [10]. Synonym identification and replacement [11] transforms a sentence into another variant with similar meaning. Data noising is widely adopted in application domains like vision and speech [12]. Easy data augmentation (EDA) [13] uses four techniques for transformations in NLP: synonym replacement, random insertion, random swap, and random deletion. EDA has shown significant performance improvement on text classification tasks.

In this work, the authors propose a dual text data augmentation strategy: first, increasing the training data before model training; second, augmenting the data while training the model. The work uses four text augmentation methods, viz., random swap (RS), random deletion (RD), back-translation (BT), and random synonym insertion (RSI). The proposed strategy is evaluated on the Apple Twitter Sentiment (ATS) Dataset1, a dataset for sentiment classification. The results show that the proposed approach obtains a significant improvement when the training data is limited. Code is publicly available2.

The rest of the paper is organized as follows. Section 2 addresses previous work in the text augmentation area. Section 3 skims through the adopted augmentation techniques and how well they work through pre- and post-augmentation strategies, and presents the proposed approach. Section 4 explains the experimental setup along with results and analysis, followed by the conclusion and future work.

1 https://www.kaggle.com/c/apple-computers-twitter-sentiment2.
2 https://github.com/sridevibonthu/TextAugmentation.


2 Proposed Approach

2.1 Augmentation Techniques Adopted

Random Swap (RS) This approach randomly selects two words in a training example x and swaps them, repeating this process n times to generate a new training example x̂ (Fig. 1):

x̂ = RandomSwap(x, n)    (1)

However, this approach may enable an adversarial text attack that fools the model, especially if the sentence has nouns. For example, "Rama killed Ravana" is completely different from "Ravana killed Rama."

Random Deletion (RD) This approach randomly deletes words from the training example x, each with a probability p, and generates an augmented training example x̂ (Fig. 2). If the value of p is large, it may result in meaningless sentences, and sometimes the context may change completely.

Back Translate (BT) This approach translates a training example x from the source language (SL) to some intermediate language (IL) and then back-translates it to the source language. However, the dual translation is computationally expensive. Figure 3 shows two examples in which German and French are chosen as intermediate languages for translation.

Fig. 1 Random swap operation generating two transformed examples from a single input x for the n values 2 and 5

Fig. 2 Random deletion operation generating two transformed examples for a single input x with a common probability value of 0.2


Fig. 3 Back-translation operation generating two augmented examples for the same input x by taking two intermediate languages

Fig. 4 Random insertion operation generating an augmented example in which (daughter, interactive, book) are replaced with (girl, interactional, volume)

x̂ = translate(translate(x, SL, IL), IL, SL)    (2)

Random Synonym Insertion (RSI) This approach randomly inserts synonyms of n words, which are not stop-words, into a training example x to generate a new training example x̂ (Fig. 4). The outcome of this technique depends on the value of n; a suggested value for n is in the range 1–3.

x̂ = RandomInsertion(x, n)    (3)

However, this approach may cause an adversarial text attack, as shown below.

input x → "True Grit" was the best movie I have seen since I was a small boy. (predicted as positive)
RandomInsertion(x, n = 2) = augmented x̂ → "True Grit" was the best movie I have seen since I was a wee lad. (predicted as negative)
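To make the token-level operations concrete, the following is a minimal Python sketch of RS, RD, and RSI. It is an illustration under assumptions, not the authors' released code: the synonyms lookup passed to random_synonym_insertion is a hypothetical helper (e.g., built from WordNet), and RSI follows the replacement behavior shown in Fig. 4.

    import random

    def random_swap(tokens, n=2):
        """RS: swap two randomly chosen positions, repeated n times (Eq. 1)."""
        tokens = tokens[:]
        if len(tokens) < 2:
            return tokens
        for _ in range(n):
            i, j = random.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
        return tokens

    def random_deletion(tokens, p=0.2):
        """RD: drop each token independently with probability p."""
        kept = [t for t in tokens if random.random() > p]
        return kept or [random.choice(tokens)]   # never return an empty example

    def random_synonym_insertion(tokens, n=2, synonyms=None):
        """RSI: substitute synonyms for up to n non-stop-word tokens (Eq. 3, Fig. 4).
        `synonyms` maps a token to a list of synonyms; an assumed helper."""
        tokens = tokens[:]
        candidates = [i for i, t in enumerate(tokens) if synonyms and t in synonyms]
        for i in random.sample(candidates, min(n, len(candidates))):
            tokens[i] = random.choice(synonyms[tokens[i]])
        return tokens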

2.2 The Classification Model

A text classification problem requires a set of training examples D = {x1, x2, ..., xN}, where every record is labeled with a class value drawn from a set of discrete class


labels indexed by 1 . . . k [14]. These help train a classification model, which is then evaluated with a test set. This paper uses a recurrent neural network (RNN) language model based on a long short-term memory network (LSTM) [15] for predicting the sentiment on the ATS dataset. LSTM is better at analyzing the emotion of long sentences, and it has been applied to multi-class classification of text emotional attributes [16]. The LSTM-RNN takes in a training example as a sequence of words, X = x1, x2, ..., xT, one word at a time and produces a cell state, c, and a hidden state, h, for each word. The network recursively feeds the current word xt and the cell and hidden states from the previous word, (ct−1, ht−1), to produce the next cell and hidden states, (ct, ht). The final hidden state hT, obtained by sending the last word in the sentence, xT, to the LSTM cell, is fed through a linear layer f to get the predicted sentiment ŷ.

(ct, ht) = LSTM(xt, ht−1, ct−1)    (4)

ŷ = f(hT)    (5)
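A minimal PyTorch rendering of Eqs. (4)–(5) is sketched below, using the embedding and hidden sizes reported later in Sect. 3.3; the class and parameter names are illustrative, and the four-way output (positive, negative, neutral, not relevant) is an assumption based on the ATS label description.

    import torch
    import torch.nn as nn

    class LSTMSentiment(nn.Module):
        """LSTM classifier following Eqs. (4)-(5): the final hidden state
        h_T is passed through a linear layer f to produce y_hat."""
        def __init__(self, vocab_size, emb_dim=300, hidden_dim=100,
                     n_classes=4, dropout=0.25):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.dropout = nn.Dropout(dropout)
            self.fc = nn.Linear(hidden_dim, n_classes)

        def forward(self, token_ids):                # (batch, seq_len)
            embedded = self.dropout(self.embedding(token_ids))
            _, (h_T, c_T) = self.lstm(embedded)      # h_T: (1, batch, hidden)
            return self.fc(h_T.squeeze(0))           # y_hat = f(h_T), Eq. (5)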

2.3 Evaluation of Augmentation Methods

The LSTM classification model trained on the original data without any augmentation achieves a baseline accuracy of 72.75%. Data augmentation can happen at two levels: increasing the training set before model training starts (pre), and augmenting during the training process on batches (post) (Fig. 5).

2.3.1 Approach—1 (Pre-augmentation)

The first approach (Fig. 5 left) increases the training data using one of the augmentation techniques RS, RD, BT, or RSI. Let D = {(xi, yi)}, i = 1...M, be a set of M training examples.

DNew = D + DAug    (6)

Fig. 5 Initial methods to test the adopted strategies. a Approach 1—pre-augmentation, which increases a fraction of training data. b Approach 2—post-augmentation, which augments the data in the mini-batches while training


DAug = T({(xi, yi)}, i = 1...f·M)    (7)

where T is a transformation function that augments a fraction f of the M training samples to form the new training set DNew. The new training set will contain (1 + f)·M records after augmentation.

Algorithm 1: Pre-Augmentation(x)
Result: Transformed example x̂ for the training example x
rate := getRandom(0,1) ;  // returns a number between 0 and 1
if rate < 0.3 then
    x̂ = RandomInsertion(x, n) ;
else if rate < 0.6 then
    x̂ = translate(translate(x, SL, IL), IL, SL) ;
else if rate < 0.8 then
    x̂ = RandomDeletion(x, p) ;
else
    x̂ = RandomSwap(x, n) ;
end
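The same dispatch logic in Python, reusing the operation sketches above; back_translate is an assumed wrapper around a translation service implementing Eq. (2), and the default n and p are illustrative values matching the figures' examples.

    import random

    def pre_augmentation(x, n=2, p=0.2):
        """Algorithm 1: pick one of the four operations with the stated rates."""
        rate = random.random()                # uniform number in [0, 1)
        if rate < 0.3:
            return random_synonym_insertion(x, n)
        elif rate < 0.6:
            return back_translate(x)          # translate(translate(x, SL, IL), IL, SL)
        elif rate < 0.8:
            return random_deletion(x, p)
        return random_swap(x, n)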

2.3.2 Approach—2 (Post-augmentation)

In the second approach (Fig. 5 right), the training samples in a batch at the tth training iteration, Dt = {(xi, yi)}, i = 1...M, can be changed to D̂t = {(x̂i, yi)}, i = 1...M, by applying the augmentation techniques when they are fed into the LSTM network. This process repeats for every batch of every epoch of the training process. Let e be the number of epochs, b the number of batches, and m the number of training samples in every batch; if augmentation happens randomly for 50% of the training samples, then the overall number of augmented training samples seen by the model in the training phase is e · b · (0.5 · m). This second approach requires augmentation to happen at the token level, since the training data uses tokenized sentences. Since BT and RSI work on the sentence level instead, their use is ruled out in this approach.


Algorithm 2: Post-Augmentation(x)
Result: Transformed example x̂ for the training example x
rate := getRandom(0,1) ;  // returns a number between 0 and 1
if rate < 0.2 then
    x̂ = RandomSwap(x, n) ;
else if rate < 0.6 then
    x̂ = RandomDeletion(x, p) ;
else
    x̂ = x
end
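And the token-level dispatcher of Algorithm 2 in Python, applied to each example as its mini-batch is assembled; note that roughly 40% of examples pass through unchanged. Again a sketch reusing the assumed helpers above.

    import random

    def post_augmentation(x, n=2, p=0.2):
        """Algorithm 2: token-level operations only (BT and RSI are ruled out)."""
        rate = random.random()
        if rate < 0.2:
            return random_swap(x, n)
        elif rate < 0.6:
            return random_deletion(x, p)
        return x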

3 Experiment

3.1 Proposed Approach

This paper proposes a novel strategy that combines pre- and post-augmentation in an optimal manner based on experimental analysis, as suggested in Fig. 6. A fraction of the training data X goes through Algorithm 1, Pre-Augmentation(x), to generate a bigger training dataset. This dataset is tokenized during batch creation and further augmented using Algorithm 2, Post-Augmentation(x). Note that there is a chance of applying augmentation to already-augmented text; i.e., a random swap operation may happen on back-translated text.

3.2 Data

The ATS dataset contains 3886 tweet records with 2162 positive, 1219 negative, and 423 neutral class labels; the remaining 82 are not relevant. The limited size of this dataset renders it suitable for testing the proposed augmentation strategy. There is an 80–20% split between train and test data. @mentions, #hashtags, RT (retweet) markers, and hyperlinks were removed as part of preprocessing, since the data comes from Twitter.
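A minimal sketch of the described preprocessing, assuming plain regular expressions rather than any particular tweet-cleaning library; the helper name is illustrative.

    import re

    def clean_tweet(text):
        """Strip @mentions, #hashtags, RT markers, and hyperlinks, as described."""
        text = re.sub(r"\bRT\b", "", text)          # retweet marker
        text = re.sub(r"[@#]\w+", "", text)         # @mentions and #hashtags
        text = re.sub(r"https?://\S+", "", text)    # hyperlinks
        return re.sub(r"\s+", " ", text).strip()    # collapse leftover whitespace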

Fig. 6 Proposed method. Augmentation happens twice with preAug(.) and postAug(.) methods


Table 1 Comparison of adopted augmentation techniques with a baseline accuracy of 72.75%

Augmentation strategy   Approach—1 pre-augmentation   Approach—2 post-augmentation
RS                      75.45 (+2.7)                  74.74 (+1.99)
RD                      75.15 (+2.4)                  74.41 (+1.66)
BT                      74.74 (+1.99)                 –
RSI                     75.51 (+2.76)                 –
Proposed approach       76.05 (+3.3)                  –

3.3 Experimental Setup

This work uses TorchText,3 part of the PyTorch project, which provides data processing utilities and popular datasets for NLP. The data was tokenized using the spacy [17] tokenizer and fed to the LSTM classification model. The same hyper-parameters are used for all eight experiments: the baseline without augmentation (1), the pre-augmentation approach (Fig. 5 left) for the RS, RD, BT, and RSI techniques (4), the post-augmentation-on-batches approach (Fig. 5 right) for the RS and RD techniques (2), and the proposed approach (Fig. 6) (1). The dimension of the word embeddings is 300 and the number of hidden units is 100. The dropout rate is 0.25 and the batch size is 32. The Adam optimizer is used with an initial learning rate of 0.001. All training consists of 100 epochs. We report the accuracy of all the experiments.
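Putting the stated hyper-parameters together, a hedged sketch of the training configuration follows, reusing the LSTMSentiment sketch from Sect. 2.2; vocab and train_loader (the tokenized, batched ATS data) are assumed to come from the TorchText/spacy pipeline and are not defined here.

    import torch

    model = LSTMSentiment(vocab_size=len(vocab))     # dims 300/100, dropout 0.25
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(100):                         # 100 epochs, batch size 32
        for token_ids, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(token_ids), labels)
            loss.backward()
            optimizer.step()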

3.4 Results and Analysis

The results obtained by applying a single augmentation strategy from the adopted approaches are summarized in Table 1. Figure 7 depicts the training accuracy versus validation accuracy for all four augmentation techniques. All the methods improved the validation accuracy by 2–3% compared with the baseline, and it can also be observed that BT consistently maintained good validation accuracy. RS and RSI performed well when the training data was increased before training. RD reduced overfitting, and RS and RD showed improvement in the validation accuracy when the data was augmented while training on batches (Fig. 8). Based on these observations, Algorithm 1, Pre-Augmentation(x), which randomly chooses one of the four techniques, is used to increase the training data before training, and Algorithm 2, Post-Augmentation(x), which randomly chooses either RS or RD while training, is adopted as shown in Fig. 6. This approach has resulted

3 https://pytorch.org/text/stable/index.html.


Fig. 7 Training versus validation accuracy by following Approach 1 (pre-augmentation) with RS, RD, BT, RSI

Fig. 8 Training versus validation accuracy by following Approach 2 (post-augmentation) with RS, RD

Fig. 9 Training versus validation accuracy of proposed approach


in 76.05%, which is an increase of +3.29% when compared with the baseline. The proposed approach outperformed all the simple data augmentation approaches for performance boosting (Fig. 9).

4 Conclusion and Future Work

This paper proposes a new data augmentation policy that increases the data both before training and while training. Four augmentation methods, viz., random swap, random deletion, back-translation, and random synonym insertion, are chosen in such a way that all contribute to performance boosting. The proposed approach achieves a significant improvement in accuracy and also reduces overfitting. The approach is best suited to settings where the training data is limited, and it can easily be adapted to any task and dataset. This work can be further extended: the proposed strategy can be fine-tuned based on the increase or decrease in loss while training, the robustness of the approach can be studied on multiple datasets, and transformer-based models can be studied in place of LSTM-based models.

References 1. D.W. Otter, J.R. Medina, J.K. Kalita, A survey of the usages of deep learning for natural language processing. IEEE Trans. Neural Netw. Learn. Syst. (2020) 2. D. Tang, B. Qin, T. Liu, Deep learning for sentiment analysis: successful approaches and future challenges. Wiley Interdiscip. Rev.: Data Min. Knowl. Discov. 5(6), 292–303 (2015) 3. M. Malinowski, M. Rohrbach, M. Fritz, Ask your neurons: a deep learning approach to visual question answering. Int. J. Comput. Vis. 125(1–3), 110–135 (2017) 4. H. Li, Deep learning for natural language processing: advantages and challenges. Natl. Sci. Rev. (2017) 5. M.S. Pepe, et al., Testing for improvement in prediction model performance. Stat. Med. 32(9), 1467–1482 (2013) 6. L. Perez, J. Wang, The effectiveness of data augmentation in image classification using deep learning (2017). arXiv preprint arXiv:1712.04621 7. C. Shorten, T.M. Khoshgoftaar, A survey on image data augmentation for deep learning. J. Big Data 6(1), 60 (2019) 8. T. Young, et al. Recent trends in deep learning based natural language processing. IEEE Comput. Intel. Mag. 13(3), 55–75 (2018) 9. H.Q. Abonizio, S.B. Junior, Pre-trained data augmentation for text classification, in Brazilian Conference on Intelligent Systems (Springer, Cham, 2020) 10. M. Fadaee, C. Monz, Back-translation sampling by targeting difficult words in neural machine translation (2018). arXiv preprint arXiv:1808.09006 11. K.L. Anders, et al. Dynamic homophone/synonym identification and replacement for natural language processing. U.S. Patent No. 10,657,327. 19 May 2020 12. Z. Xie, et al. Data noising as smoothing in neural network language models (2017). arXiv preprint arXiv:1703.02573 13. J. Wei, K. Zou, Eda: easy data augmentation techniques for boosting performance on text classification tasks (2019). arXiv preprint arXiv:1901.11196


14. C.C. Aggarwal, C.X. Zhai, A survey of text classification algorithms, in Mining Text Data (Springer, Boston, 2012), pp. 163–222 15. E.F. Can, A. Ezen-Can, F. Can, Multilingual sentiment analysis: an RNN-based framework for limited data (2018). arXiv preprint arXiv:1806.04511 16. D. Li, J. Qian, Text sentiment analysis based on long short-term memory, in 2016 First IEEE International Conference on Computer Communication and the Internet (ICCCI). IEEE (2016) 17. B. Srinivasa-Desikan, Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras (Packt Publishing Ltd., 2018)

Performance Enhancement of Raga Classification Systems Using Recursive Feature Elimination M. Pushparajan, K. T. Sreekumar, K. I. Ramachandran, and C. Santhosh Kumar

Abstract Identifying the raga of a song or music composition upon hearing it sung or played on an instrument is a challenging task. Generally, it takes years of rigorous training in classical music to be able to identify ragas. Machine learning algorithms can be trained to capture raga signatures so as to classify ragas. In this paper, we attempted Hindustani raga classification on audio signals acquired from the flute renderings of a professional flautist. Eight Hindustani ragas were considered for the classification task using time domain and frequency domain features. First, we developed a support vector machine-based (SVM) raga classification system and achieved a classification accuracy of 80% with a polynomial kernel. With a Gaussian process-based (GP) raga classification system, the classification accuracy improved to 82%. The performance was further enhanced with a recursive feature elimination (RFE) process, with accuracies of 84% for the SVM-based system and 85% for the GP-based system, each with 40 selected features.

Keywords Raga classification · Support vector machines · Gaussian process · Recursive feature elimination

M. Pushparajan (B)
Department of Mechanical Engineering, Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
e-mail: [email protected]
K. T. Sreekumar · C. S. Kumar
Machine Intelligence Research Laboratory, Department of Electronics and Communication Engineering, Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
K. I. Ramachandran
Intelligence Systems Research Laboratory, Centre for Computational Engineering and Networking, Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India


1 Introduction Hindustani and Carnatic music systems are based on melody (raga) and rhythm (taal). All Indian classical music is organized around the melodic concept called raga. Raga is basically a set of melodic gestures of sequences of swaras (notes) which are articulated with micro-pitch nuances, such as slides, vibrato, and shake [1]. Swara is the basic unit in the raga system which corresponds to a particular frequency of sound. The seven fundamental swaras are Sa, Ri, Ga, Ma, Pa, Dha, and Ni (Shadj, Rishabh, Gandhar, Madhyam, Pancham, Dhaivat, and Nishad, respectively). A raga is a unique melodious combination of these swaras. The problem of raga identification belongs to the family of music information retrieval (MIR). It can lead to a framework for seeking similar songs and also for creating playlists that are appropriate for certain aesthetic themes [2]. Raga identification is also useful for a novice musician to distinguish between similar ragas [3]. Raga identification can help in music therapy for choosing a certain melodic theme for treating a particular medical condition. A lot of research works were reported on raga identification from different perspectives. Chordia and Rae [4] constructed a support vector machine (SVM) system to identify ragas on the basis of pitch-class and pitch-class dyad distributions which are generated from the audio samples. In [5], Vijay Kumar et al. attempted the problem of raga identification using a nonlinear SVM model wherein they combined two kernels which represent the similarities of an audio signal with pitch-class profile. Costa et al. [6] compared convolutional neural network (CNN)-based music classification system with SVM-based system. Vyshnav et al. [7] used random Fourier features for music speech classification. Dighe et al. [8] investigated the problem of automatic raga identification in a scale-independent manner using Gaussian mixture model (GMM)-based HMMs with a set of features having chromagram patterns, melcepstrum coefficients, and also timbre features. Rao et al. [9] attempted recognition of melakartha ragas using several features like mel-frequency cepstral coefficients (MFCCs) with the help of GMM classifier. In this work, we present (i) an SVM-based system for raga identification using time and frequency domain features on audio samples of eight Hindustani ragas, (ii) a Gaussian process-based system with the same set of features, and (iii) performance enhancement with recursive feature elimination (RFE) [10] using SVM and GP classifiers. This paper is structured as follows. Section 2 presents the data set and Sect. 3 details the feature extraction. Section 4 describes the learning algorithm and classification methods. Experiments and results are reported in Sect. 5. Finally Sect. 6 provides the main conclusions.


2 Data Set

In this work, we used a data set comprising signals taken from audio recordings of flute recitals by Sri Himanshu Nanda, senior disciple of Pt. Hariprasad Chaurasia. The recordings were in eight ragas, viz. Ahir Bhairav, Bhimpilasi, Bhoopali, Jog, Madhuvanti, Malkouns, Maru Bihag, and Puriya Kalyan. Each raga rendering was 18–20 min long. Eighty-seven samples of 10 s duration at a sampling frequency of 44.1 kHz were collected in each raga, for a total of 696 samples. Out of each set of 87 samples, 80% were utilized for training and 20% for testing.

3 Feature Extraction The acquired signal space is mapped to the feature space in the feature extraction phase so that the redundant information is removed and the machine learning algorithms will be able to classify in a better way. In this work, we extracted time domain and frequency domain features.

3.1 Time Domain Features

Tempogram: This is the pulse-versus-time representation of an audio signal, showing the variation of tempo over time. Tempo is the pace at which a song is rendered, measured in beats per minute. As the tempo of a music composition can vary with time, the mean value was computed across a number of frames.

Central Moments: The central moments are the mean, standard deviation, kurtosis, and skewness of the amplitude of the signal.

Signal Energy: The signal energy is computed frame by frame, and then the mean and standard deviation are taken over all the frames. It is calculated as (1/N) · Σ_{n=1}^{N} |x(n)|².

Root Mean Square Energy (RMSE): It is computed frame by frame as sqrt((1/N) · Σ_{n=1}^{N} |x(n)|²). The mean and standard deviation are taken over all the frames.

Zero Crossing Rate: This measures how many times the amplitude of a signal crosses zero in a certain time interval. Here, one sample signal is split into several frames (frame size: 1024, hop size: 256) and the zero crossing rate is calculated for each frame; from these values, the mean and standard deviation are computed.


3.2 Frequency Domain Features

Mel-Frequency Cepstral Coefficients (MFCC): The interpretation of pitch by the human auditory system is not linear. In fact, pitch is perceived in a linear manner up to 1000 Hz, but after that, the perception of pitch becomes logarithmic. The mel-scale was developed to describe the human auditory system on a linear scale: a pure tone's perceived frequency is related to its measured frequency by

F_mel = (1000 / log(2)) · log(1 + F_Hz / 1000)    (1)

where F_mel is the frequency on the mel-scale and F_Hz is the normal frequency in Hz. A mel-spectrogram is a spectrogram whose frequencies are converted to the mel-scale [11]. MFCC feature extraction is done as follows: first, the signal is windowed and the discrete Fourier transform (DFT) is applied; then, the log of the magnitude is taken and the frequencies are warped onto the mel-scale; finally, the discrete cosine transform (DCT) is applied.

Chroma Features: There are twelve semitones in an octave in music, represented as twelve bins, viz. C, C#, D, D#, E, F, F#, G, G#, A, A#, and B, whose equivalents in Hindustani music are Sa (Shadj), Ri1 (Komal Rishabh), Ri2 (Sudh Rishabh), Ga1 (Komal Gandhar), Ga2 (Sudh Gandhar), Ma1 (Sudh Madhyam), Ma2 (Teevr Madhyam), Pa (Pancham), Dha1 (Komal Dhaivat), Dha2 (Sudh Dhaivat), Ni1 (Komal Nishad), and Ni2 (Sudh Nishad). The whole spectrum of the audio signal is projected onto these 12 bins, and the chroma features are vectors representing the total energy in each bin. The whole signal is split into a number of frames, the chroma features are computed for every frame, and finally the mean and standard deviation of all chroma features are calculated.

Spectral Centroid: This is a measure of how bright a sound signal is, calculated as the center of mass of its spectrum:

Spectral Centroid = (Σ_{n=1}^{N} f(n) · x(n)) / (Σ_{n=1}^{N} x(n))

where x(n) is the magnitude of bin n and f(n) is the center frequency of that bin.

Spectral Entropy: It captures the peakiness of the spectral representation. Spectral Roll-off: It is the frequency in Hz below which a certain percentage of the total spectral energy is contained. Usually, this percentage is taken as 85. Spectral Contrast: Every frame has a certain number of frequency bands. For a typical frequency band, the spectral contrast is the difference between the maximum magnitude and minimum magnitude.
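As a sketch of how such features can be computed in practice, the following assumes the librosa library (the paper does not name its extraction toolkit); the function and feature names are illustrative, and the spectral entropy is computed manually since librosa has no built-in for it.

    import numpy as np
    import librosa
    from scipy.stats import kurtosis, skew

    def extract_features(path, sr=44100, frame=1024, hop=256):
        """Mean/std summaries of the described features for one 10 s clip."""
        y, sr = librosa.load(path, sr=sr)
        feats = {}
        # time domain
        feats["tempo_mean"] = float(librosa.feature.tempogram(y=y, sr=sr).mean())
        feats["amp_mean"], feats["amp_std"] = float(y.mean()), float(y.std())
        feats["amp_kurt"], feats["amp_skew"] = float(kurtosis(y)), float(skew(y))
        rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)
        feats["rmse_mean"], feats["rmse_std"] = float(rms.mean()), float(rms.std())
        zcr = librosa.feature.zero_crossing_rate(y, frame_length=frame, hop_length=hop)
        feats["zcr_mean"], feats["zcr_std"] = float(zcr.mean()), float(zcr.std())
        # frequency domain
        for i, row in enumerate(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)):
            feats[f"mfcc{i}_mean"], feats[f"mfcc{i}_std"] = float(row.mean()), float(row.std())
        for i, row in enumerate(librosa.feature.chroma_stft(y=y, sr=sr)):  # 12 swara bins
            feats[f"chroma{i}_mean"], feats[f"chroma{i}_std"] = float(row.mean()), float(row.std())
        feats["centroid_mean"] = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())
        feats["rolloff_mean"] = float(librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.85).mean())
        feats["contrast_mean"] = float(librosa.feature.spectral_contrast(y=y, sr=sr).mean())
        p = np.abs(librosa.stft(y)) ** 2
        p = p / (p.sum(axis=0, keepdims=True) + 1e-12)      # per-frame spectral pmf
        feats["entropy_mean"] = float((-(p * np.log2(p + 1e-12)).sum(axis=0)).mean())
        return feats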


4 System Description 4.1 Support Vector Machine (SVM) SVM belongs to binary classifiers which classify, based on a rule, a set of unlabelled data points. The process of classification is done based on a model developed using the training data with proper class labels. In the first phase, SVM is trained for learning the mapping from the training data and the corresponding labels. In the second phase, the data points are classified using an optimal separating hyperplane. The data points lying nearer to the hyperplane are the support vectors. Margin is the sum of the distances between the separating hyperplane and the support vectors. The hyperplane can be expressed mathematically as x T w + b = 0, where w is the normal to the hyperplane. Based on the variation and data separability, there are linear SVMs and nonlinear SVMs. A linear SVM is suitable for a linearly separable data whereas a nonlinear SVM is applied for a set of nonlinear data. The data is mapped onto a higher dimensional space and classification is performed. Further explanation of SVM is given in [12].

4.2 Gaussian Process classification (GP) In classification problems using GP, we make use of an input training data set X = [x1 , ..., x N ]T with associated class labels y = [y1 , ..., y N ]T to predict the label of a new test point x∗ . A latent function f is used whose value can be mapped into the [0, 1] interval with the help of a probit function. In the simple case of binary classification, y ∈ [0, 1] where 0 corresponds to the negative class and 1 to the positive class. Then, the probability that an x belongs to the positive class, i.e., P(y = 1|x) can be obtained as φ( f (x)), where φ(.) is the probit function. The classification process is carried out with placing a GP over the latent function f (x). Further details of GP classification are given in [13].

4.3 Recursive Feature Elimination (RFE)

In this process, we eliminate redundant features recursively by using stacked generalization [10]. If w_{l,m}^i is the weight of the ith linear SVM for a decision value f_{lm}(x), then the relevance of the mth feature can be determined by the measure

Σ_{i=1}^{K} Σ_{l=1}^{K} (w_{l,m}^i)²    (2)


The lesser the relevance, the more redundant will be the feature for classification. The procedure for RFE is as follows: 1. The SVM classifier is trained with all the features. 2. Ranking criterion for all features is computed. 3. Features with the lowest ranking values are eliminated.
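A compact scikit-learn rendering of steps 1–3 together with the two back-end classifiers is sketched below; X_train, y_train, X_test, and y_test are assumed to hold the 95-dimensional feature vectors and raga labels from the extraction step, and this mirrors rather than reproduces the authors' setup.

    from sklearn.feature_selection import RFE
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.svm import SVC

    # Rank features by squared linear-SVM weights and keep the best 40
    selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=40, step=1)
    X_train_sel = selector.fit_transform(X_train, y_train)
    X_test_sel = selector.transform(X_test)

    # Back-end classifiers on the selected features
    svm = SVC(kernel="rbf").fit(X_train_sel, y_train)
    gp = GaussianProcessClassifier(kernel=RBF()).fit(X_train_sel, y_train)  # Laplace approx.
    print("SVM:", svm.score(X_test_sel, y_test), "GP:", gp.score(X_test_sel, y_test))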

5 Experiments and Results We conducted experiments on a data set of flute renderings on eight ragas, as mentioned in Sect. 2, for raga classification. Eighty percent of the data points were set aside for training and twenty percent for testing. We ensured that the training and testing data sets were mutually exclusive. A total of 95 features were extracted from the time and frequency domains. Fig. 1 shows the block diagram of the raga identification system procedure.

5.1 SVM-based Raga Classification System The baseline system for raga classification is developed using backend SVM classifier. We used different SVM kernels viz. linear, polynomial, and RBF to evaluate the raga classification performance. We achieved classification accuracies of 79, 80, 78% and F-scores of 79, 79, 78% for linear, polynomial, and RBF kernels, respectively. The confusion matrix of SVM with polynomial kernel is shown in Fig. 2.

5.2 GP-Based Raga Classification System

GP classification is based on the Laplace approximation, which is used for approximating the non-Gaussian posterior by a Gaussian. The covariance function of the GP is specified by the RBF kernel, and the hyperparameters of the kernel are optimized during the training phase. We achieved a classification accuracy of 82% and an F-score of 82%. The confusion matrix of the GP-based system is shown in Fig. 3.

Fig. 1 Raga identification system procedure


Fig. 2 Confusion matrix of SVM-based system

Fig. 3 Confusion matrix of GP-based system

5.3 Performance Enhancement Using RFE Method The objective of RFE is to select the best features by recursively taking up smaller and smaller feature sets. In this work, we used SVM estimator for assigning weights to features in RFE. Based on these weights, ranking criterion for all features is computed and features with the lowest ranking values are eliminated. The selected best features are further classified using backend SVM and GP algorithms. The results are tabulated in Table 1. With RFE-based SVM system for RBF kernel, we got an accuracy of 84% with 40 selected features. The overall best performance was obtained with RFE-based GP system for 40 selected features with an accuracy of 85%.

Table 1 Performance comparison of RFE-based SVM and RFE-based GP systems

                                         Number of selected features
System                                   10   20   30   40   50   60   70   80   90
RFE-based SVM system
  Linear        Accuracy (%)             77   81   81   78   77   77   79   81   78
                F-Score (%)              77   80   81   78   77   76   78   80   78
  Polynomial    Accuracy (%)             73   81   80   83   80   80   80   81   80
                F-Score (%)              72   80   79   82   79   79   79   80   79
  RBF           Accuracy (%)             78   82   83   84   81   80   81   78   78
                F-Score (%)              77   81   83   84   81   79   80   77   78
RFE-based GP system
                Accuracy (%)             81   81   81   85   81   81   84   83   83
                F-Score (%)              80   80   81   84   80   80   84   82   82

The contents in bold letters represent the accuracy and F-score of the best performing system



6 Conclusions

We could identify the ragas from a data set of eight Hindustani ragas with an accuracy of 80% with the support vector machine (SVM) classifier and 82% with the Gaussian process (GP) classifier. Further, we could achieve 84% and 85% classification accuracies with the recursive feature elimination (RFE) method using only 42% of the total features. For the present work, we considered only audio recordings of flute renderings; audio signals from other musical instruments and from vocal renderings need to be incorporated in this study. Our future work includes improving the classification accuracy of the present system, incorporating other instrument renderings, and identifying ragas when the renderings belong to different shrutis (the tonic).

References 1. P. Chordia, Automatic raag classification of pitch-tracked performances using pitch-class and pitch-class dyad distributions, in Proceedings of International Computer Music Conference (ICMC), p. 314321 (2006) 2. S. Belle, R. Joshi, P. Rao, Raga identification by using swara intonation. J. ITC Sangeet Res. Acad. 23(3) (2009) 3. T.C. Nagavi, N.U. Bhajantri, Overview of automatic Indian music information recognition, classification and retrieval systems, in 2011 International Conference on Recent Trends in Information Systems, pp. 111–116. IEEE (2011) 4. P. Chordia, A. Rae, Raag recognition using pitch-class and pitch-class dyad distributions, in International Conference on Music Information Retrieval (ISMIR), pp. 431–436. Citeseer (2007) 5. V. Kumar, H. Pandya, C.V. Jawahar, Identifying ragas in Indian music, in 2014 22nd International Conference on Pattern Recognition, pp. 767–772 (2014) 6. Y.M.G. Costa, L.S. Oliveira, C.N. Silla, An evaluation of convolutional neural networks for music classification using spectrograms. Appl. Soft. Comput. 52, 28-38 (2017) 7. M.T. Vyshnav, S. Sachin Kumar, N. Mohan, K.P. Soman, Random fourier feature based musicspeech classification. J. Intel. Fuzzy Syst. 111 (2020) 8. P. Dighe, P. Agrawal, H. Karnick, S. Thota, B. Raj, Scale independent raga identification using chromagram patterns and swara based features, in 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), p. 14. IEEE (2013) 9. T. Rao, P. Reddy, A. Prasad. Recognition of melakartha raagas with the help of gaussian mixture model. Int. J. Adv. Res. Comput. Sci. 1(3), (2010) 10. C.A. Kumar, M.P. Sooraj, S. Ramakrishnan, A comparative performance evaluation of supervised feature selection algorithms on microarray datasets. Procedia Comput. Sci. 115, 209–217 (2017) 11. K.T. Sreekumar, K.K. George, K. Arunraj, C.S. Kumar, Spectral matching based voice activity detector for improved speaker recognition, in 2014 International Conference on Power Signals Control and Computations (EPSCICON), p. 14 (2014) 12. K.T. Sreekumar, R. Gopinath, M. Pushparajan, A.S. Raghunath, C. Santhosh Kumar, K.I. Ramachandran, M. Saimurugan, Locality constrained linear coding for fault diagnosis of rotating machines using vibration analysis, in 2015 Annual IEEE India Conference (INDICON), pp. 1–6. IEEE (2015) 13. K. Markov, T. Matsui, Music genre classification using Gaussian process models, in 2013 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6. IEEE (2013)

A Study of the Factors Influencing Behavioral Intent to Implement Forensic Audit Techniques in Indian Companies Kamakshi Mehta, Bhoomika Batra, and Vaibhav Bhatnagar

Abstract The study focuses on scrutiny of the factors impacting the behavioral intent of employees to use various forensic audit techniques during the detection and prevention of fraud in Indian companies. A research model was developed with variables from the Theory of Planned Behavior model. Quantitative data analysis based on the partial least squares (PLS) approach was performed using the SmartPLS software. The findings of this research conclude that the perceived benefit, the attitude of the users, and the pressure of stakeholders, along with the internal control of the company, influence the behavioral intent of company employees to use forensic audit techniques.

Keywords Forensic audit · Indian companies · PLS · Behavioral intent · Audit and attitude

1 Introduction

Forensic auditing is an important method for detecting and prosecuting financial fraud and pursuing justice in the emerging economic scenario, offering strategic information on the facts uncovered relevant to the financial crime. Increased white-collar crime in emerging economies has made it an essential tool. This is a novel field, but the services of forensic accountants have been extensively used in recent years by

K. Mehta
Amity University, Haryana, India
B. Batra
IIS Deemed to be University, Jaipur, India
V. Bhatnagar (B)
Manipal University, Jaipur, India


banks, insurance firms, and even the police. The increase in white-collar crime and the challenges faced in unearthing bribery by law enforcement organizations have contributed to the profession’s creation. Institutions such as India Forensic, the Indian Chartered Accountants Institute (ICAI) and the Association of Chartered Certified Accountants (ACCA) provide forensic auditing training and courses in India.

2 Meaning of Auditing Forensics

Forensic auditing service (FAS) is a specialized auditing field that investigates deception and analyzes historical evidence that can be used in court proceedings. Forensic auditing is a fair mix of auditing and forensic skill used to perform investigations into financial crime, and it is useful for analytical auditing and court action. Maurice E. Peloubet first used the term "forensic accountant" in 1946, in his article "Forensic Auditing: Its Place in Today's World." Archaeological findings indicate that between 3300 and 3500 BC, Egyptian accountants were already concerned with the avoidance and identification of theft. A close association between auditing and the law developed during the eighteenth century, and corporate fraud can be linked to several improvements in the filing of financial statements. The American Eliot Ness was credited with taking down the gangster Al Capone in the 1930s, but his case was built on the detective work of Elmer Irey, an Internal Revenue Service accountant who secured Capone's conviction for tax fraud. Irey was arguably the first high-profile forensic accountant in America.

3 Creation of Auditing Forensics in India

Kautilya was the first person in India to list the famous 40 forms of misappropriation, in his celebrated book, the Arthashastra. In India, chartered accountants are called upon to carry out such forensic activities. After the Enron, Rajat Gupta, and Satyam cases, forensic auditing came into wide use in India. A few chartered accountancy firms offer fraud review as a distinct service; services of this kind are offered by firms such as those of Sharad Joshi and S. K. Jain (the Xerox fraud case). Nevertheless, the big four advisory companies such as Deloitte and KPMG by and large dominate this space. The establishment of the Serious Fraud Investigation Office in India is a groundbreaking achievement for forensic accountants. The Companies Act 2013 opened the way for a special approach to preventing economic fraud and protecting national prosperity, along the lines of American law and the British Bribery Act. Good risk management requires that reputational risk be handled; alongside a good preventive climate for fraud, anomalies and lapses in the culture of enforcement called for proper investigation. Therefore, forensic auditing became necessary to identify fraud preparation, the execution of fraud, and money laundering threats, and to book the culprits without much delay. The uniqueness of


the reports of a forensic accountant relies on the expertise, talent, and experience of the forensic accountant. In the examination, study, presentation, documentation, and testimonial support of evidence, a forensic accountant must be mindful of the consequences of integrating expertise and skills. The forensic accountant can be an expert witness, a consultant, or hold some other court role, such as trier of fact, special master, magistrate-appointed specialist, referee, arbitrator, or mediator.

4 Significance of Forensic Audit

A forensic audit is the scrutiny of a company's financial records to gather evidence of suspected fraud for use in court. The forensic accountant's findings can support arbitration, lawsuits, or jury awards by settling the financial facts that would otherwise remain an area of ongoing dispute.

Forensic auditing methods and instruments include:
(1) Benchmarking: comparison of the financial results of one period with another, of the performance of one cost center or company with another, and of the average performance of the business with pre-decided expectations.
(2) Ratio analysis: recognition of any unusual patterns and shifts.
(3) System analysis: evaluation of current models to find any vulnerabilities that could be opportunities for fraudsters.
(4) Specialist tools, such as audit strategies for analyzing matching results.
(5) Exception reporting: producing unquestionable automated reports to assess divergence from the norms.

Issues in forensic auditing in India:
(1) Forensic auditing is still developing as an area for the identification, monitoring, and detection of financial crime, and there is an intense shortage in India of trained, competent accountants with sufficient professional knowledge of forensic auditing.
(2) In most Indian financial fraud cases, politicians have been involved, so collecting evidence against them is difficult.
(3) The Indian judiciary still follows the old British judicial system; putting a matter to court and employing specialist lawyers is expensive.
(4) Because of liberalization and the fast-moving economy, more and more foreign companies are investing in India, making it difficult to prosecute financial fraudsters across countries.
(5) The constant adoption of modern information and technology methods by fraudsters makes it complicated and difficult for the forensic accountant to deal with them.
(6) Forensic auditing is an expensive field relative to most forensic fields.
(7) The hiring of corporate forensic accountants is not obligatory for corporations; thus, hiring depends entirely on the behavioral intent to implement forensic tools.
(8) In India, there are no dedicated forensic auditing directives or acts.


5 Literature Review

The study recommends that FAS is an inevitable tool that should be utilized in curbing unethical practices in the banks of Nigeria [1]. A questionnaire survey revealed that usage of forensic services is currently very limited, but that increased awareness of the benefits of FAS would result in increased implementation [2]. An analysis of 110 responses using SmartPLS 2.0 showed that perceived benefits, attitude, stakeholders' pressure, and internal controls had a significant impact on the intention to use FAS. Another study analyzed the factors impacting the intention of management to use FAS by utilizing quantitative data based on the PLS approach; the researchers concluded that perceived benefit, stakeholder pressure, and perceived risk, along with other factors, exerted a considerable impact on the behavioral intention to incorporate FAS in the accounting process [3]. "Organizational fraud" is a concern that has made the CEOs and top management of myriad companies across all sectors of the economy lose sleep [4, 5]. The study escalated the concern of an ever-rising danger of organizational fraud, fueled by the global recession of 2008. An analyst opined that the exponential development of digitization across the globe has opened avenues for the educated, tech-savvy, professional fraudster [6]. The researcher stressed that an organization should incorporate FAS in all its accounting activities, as doing so is in the organization's best interest [7]. The task force and National Fraud Strategic Authority in the UK have stressed that forensic accounting has inevitably emerged as a tool to fight fraud, and the researchers strongly believe that, owing to the requisite in-depth experience and knowledge, this tool shall prove a major weapon in combatting escalating organizational fraud [8]. One study concludes that approximately 20% of companies hired forensic accountants, while the service satisfaction rating was the highest at 88% [9]. Underutilization of forensic activities has led to poor recovery of fraud losses; analysts at PwC are of the opinion that 80% of companies are unable or unwilling to invest in these services owing to the financial burden [9, 10]. Research strongly suggests that the greatest indicators of behavioral intent are motive and attitude; the TPB paradigm notes that the desire to commit a certain action depends on behavior, subjective norms, and perceived behavioral control [11]. The study stresses that the variable attitude, developed through an individual's various life experiences, plays an indispensable role in forming a mindset toward an activity [12]. Assumptions, attitudes, and experiences play an important role in the intention of stakeholders and management to adopt forensic accounting techniques in their organization [13]. Any lack of faith by the stakeholders results in a negative attitude toward the application of forensic techniques [14]. The attitude of internal and external stakeholders, including consumers, vendors, and creditors, has an extreme impact on the intention to use forensic accounting techniques [15]. The researchers have described perceived behavioral control as the existence of variables that could promote or hinder behavioral efficiency [16]. This study investigates the


factors influencing the company’s behavioral intent to use FAS [17]. The researcher argues that the structure of attitude in TPB is very broad; the findings suggest that perceived advantage and risk are factors influencing behavioral attitudes [18].

6 Objective

The objective of our study comprises the following:
1. To study the impact of perceived benefits of using FAS on the attitude of employees in using the services.
2. To study the influence exerted by stakeholders on the behavioral intention (BI) to use the FAS.
3. To study the influence of internal control measures on the intention of management to use FAS.
4. To study the impact of attitude of the users on their intention to use FAS.

7 Research Design

Sample and Population

The research population consists of auditing professionals and employees of public offices (departments and agencies), on the grounds that they are important members of the senior management teams participating in the main decisions of their organizations. The heads of internal audit departments were also chosen for sampling, as the audit unit oversees the organization's internal management system. Forensic auditing is a relatively recent practice in India and a relatively new one in the world. As a developing economy, India is also heavily dependent on the government; consequently, the public sector is the most industrialized sector of the economy, and most of the fraud identified in the country is committed in this sector. Therefore, if there is any field that can start using forensic auditing methods in the prevention and identification of fraud, it is the public sector, embodied in this report by government ministries and agencies. The research instrument was a standardized questionnaire with 54 factors in 10 parts; the variables used in this study were both dependent and independent. A total of 550 mails with a request to fill in the questionnaire were sent, of which 321 were recovered, giving a 58.36% response rate.

Data Processing Methods

Data were analyzed using PLS to evaluate the relationships between dependent and independent variables. It enables an investigator to analyze a sequence of dependence relationships between exogenous variables (independent variables or predictors) and endogenous variables (dependent variables). The usefulness of SEM


in this analysis follows from the fact that it provides a means to deal with many relationships simultaneously. In the analysis of dependency relationships, SEM can represent unobserved (latent) concepts, unlike the usual multiple regression, which can only evaluate relationships between variables that can be explicitly observed or measured. In the present analysis, most of the variables used in evaluating the intention of professionals to use FAS methods in fraud prevention and identification are implicit variables (e.g., perceived advantages, organizational influence, behavioral motive) and have thus been evaluated indirectly.

8 Hypothesis

Ho0: The perceived benefits of using FAS will have an insignificant influence over the attitude toward using FAS.
Ho1: Stakeholders' pressure will have an insignificant influence on the behavioral intention to use the FAS.
Ho2: Strength of internal control will have an insignificant impact on the behavioral intention to use FAS.
Ho3: Attitude toward using forensic auditing services will have an insignificant influence on behavioral intention to use FAS (Fig. 1).

Fig. 1 Causal model. Source Smart PLS


9 Analysis and Interpretation

10 Valuation of the Structural Model

Convergent Validity

Convergent validity explains how much correlation there is among the constructs; it is measured with the help of outer loadings and of internal consistency using Cronbach's alpha and the average variance extracted (AVE). On the basis of the guidelines provided by [19, 20], outer loadings of less than 0.6 were deleted from the analyses, except for perceived benefits of FAS, whose AVE of 0.54 is above its threshold. The detailed and polished measurement is shown in Table 1, which also shows that the internal consistencies, calculated using Cronbach's alpha, pass the minimum criterion of 0.7 for every construct. According to [10], the AVE is the mean of the squared loadings of a construct's items: an AVE of 0.5 means that 50% of the variance is explained by the construct's indicators, and as per [21, 22], 0.5 is the minimum threshold to be fulfilled by each construct. Table 1 shows that each construct has an AVE value higher than 0.5. Therefore, the present study passes all the given criteria of convergent validity.

Discriminant Analysis

Discriminant analysis signifies the magnitude to which a construct (variable) is empirically distinct from the other constructs. There are different criteria for measuring discriminant validity, i.e., the Fornell–Larcker criterion and HTMT, both of which have been applied in the present study: Table 2 shows that each construct's Fornell–Larcker value is greater than the other values in its row and column, and for HTMT the value should be below 0.75 according to [20, 23]. The latent variable correlations depict the correlation among the variables, similar to r in regression analysis.
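For readers unfamiliar with these PLS reliability measures, the following is a small numpy sketch of how AVE and Cronbach's alpha are computed; the loadings and the item-score matrix are illustrative placeholders, not the study's data.

    import numpy as np

    def ave(loadings):
        """Average variance extracted: mean of the squared outer loadings."""
        loadings = np.asarray(loadings, dtype=float)
        return float((loadings ** 2).mean())

    def cronbach_alpha(items):
        """items: (respondents x indicators) score matrix for one construct."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    print(ave([0.81, 0.77, 0.62]))   # ~0.54, i.e., just above the 0.5 threshold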

Internal consistency (IC)

Average variance extracted (AVE)

Behavioral intention to use forensic auditing

0.76

0.59

Stakeholder pressure

0.73

0.78

Internal control

0.92

0.62

Perceived benefits of FAS

0.78

0.54

Attitude-FAS

0.84

0.76

550

K. Mehta et al.

Table 2 Discriminant analysis Fornell–Larcker of reflective items Construct

AFA

BIFA

SIC

PBFA

AFA

0.875

BIFA

0.242

0.768

SIC

0.290

0.635

0.79

PBFA

0.33

0.096

0.20

0.74

SP

0.602

0.301

0.57

0.45

SP

0.88

HTMT of reflective items AFA BIFA

0.35

SIC

0.375

0.703

PBFA

0.742

0.284

0.41

SP

0.803

0.457

0.624

0.647

Correlation of latent variables AFA

1

BIFA

0.242

1

SIC

0.29

0.635

1

PBFA

0.633

0.096

0.204

1

SP

0.602

0.301

0.571

0.456

1

Source Output from Smart PLS3 Note AFA is attitude toward FAS, BIFA is behavioral intention to use forensic auditing, SICS is strength of internal control, PBFA is perceived benefits of FAS and SP is stakeholder pressure Table 3 Coefficient of determination

Exogenous variables

R square

AFA

0.401

BIFA

0.424

Source Output from smart PLS3 Table 4 Hypothesis testing

Construct

Path coefficient

T values

AFA—BIFA

0.15

0.00

SICS—BIFA

0.69

0.00

0.63

0.00

-0.19

0.03

PBFA—AFA SP—BIFA

Source Output from smart PLS3

A Study of the Factors Influencing Behavioral Intent …

551

11 Valuation of the Measurement Model Coefficient of Determination Coefficient of determination (R2) is the ratio of variance, i.e., the change or variation in the endogenous variable explained by the exogenous variable [24]. Anticipating the criteria of r2 to be fulfilled with a minimum of 10%, Table 3 shows the output of both exogenous variables of the present study, which is 42% (behavioral intention to use FAS) and 40% (attitude toward FAS).

12 Hypothesis Testing The results of hypothesis testing are shown in Table 4, which has the path coefficients (beta) and significance values (p-value) of each construct, which states the positive or negative with their significant or insignificant association. The null hypothesis formulated for the present study was: Ho0: The perceived benefits of using FAS will have an insignificant influence over the attitude toward using FAS. Results from Table 4 show that the Ho is rejected and a positive and significant influence of perceived benefits of using FAS over the attitude toward using FAS can be noted. Ho1: Stakeholder pressure has insignificant influence on the behavioral intention to use the FAS. Results from Table 4 show that Ho1 is rejected as there is a negative and significant influence of stakeholders’ pressure over the behavioral intention of management to use the FAS. Ho2: The SICS will have an insignificant impact on the behavioral intention to use FAS. Results from Table 4 show that the Ho2 is rejected and there is a positive and significant influence of SICS over the behavioral intention to use the FAS. Ho3: Attitude toward using FAS has an insignificant influence on behavioral intention to use FAS. Results from Table 4 show that Ho3 is rejected, and there is a positive and significant influence of attitude toward using FAS over the behavioral intention to use the FAS.

13 Conclusion

The present and future economic scenario emphasizes that all auditing professionals should be well versed in the benefits, role, and significance of FAS


for the detection of fraudulent activities in their companies. This study concludes that management and auditing personnel need to become more open to adopting forensic auditing tools. Welcoming these techniques will help create a fraud-free environment. The team will be better able to accept the use of forensic audit in fraud prevention and detection if they understand the benefits, the risks, and their company's susceptibility to fraud and the severity of fraud it faces; awareness raised through educational activities would reinforce this further. Therefore, the educational activities of training institutions should aim at raising awareness of forensic auditing.

References

1. G. Muthusamy, Behavioral intention to use forensic accounting services for the detection and prevention of fraud by large Malaysian companies. PhD thesis, Curtin University (2011)
2. M.H. Sahdan, C.J. Cowton, J.E. Drake, Forensic accounting services in English local government and the counter-fraud agenda. Public Money Manag. 40(5), 380–389 (2020)
3. G. Muthusamy, P.M. Quaddus, P.R. Evans, The Theory of Planned Behaviour and Organisational Intention to Use Forensic Accounting Services. PhD thesis, Curtin University, Perth, Australia (2010)
4. B.K. Peterson, P.E. Zikmund, 10 truths you need to know about fraud. Strategic Finance, 29–35 (2004)
5. K.-D. Bussmann, M.M. Werle, Addressing crime in companies: first findings from a global survey of economic crime. Br. J. Criminol. 46(6), 1128–1144 (2006)
6. J. Tracey, A. Gordon, T. White, L. MacPhail, Fraud in a downturn: review of how fraud and other integrity risks affect business in 2009 (2009)
7. W. Smieliauskas, CAP forum on forensic accounting in the post-Enron world: introduction and commentary. Can. Account. Perspect. 5(2), 239–256 (2006)
8. H. Silverstone, M. Sheetz, S. Pedneault, F. Rudewicz, Forensic Accounting and Fraud Investigation for Non-experts (Wiley Online Library, 2004)
9. D. Sani, D. Abubakar, B.M. Abatcha, An awareness assessment of forensic accounting in Borno state public sector
10. PricewaterhouseCoopers, Practice good governance and business ethics. PWC Alert 6(51) (2006)
11. M. Fishbein, I. Ajzen, Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research (1975)
12. I. Ajzen, Understanding Attitudes and Predicting Social Behavior (Englewood Cliffs, 1980)
13. C.W.L. Hill, T.M. Jones, Stakeholder-agency theory. J. Manage. Stud. 29(2), 131–154 (1992)
14. J.M. Stevens, H.K. Steensma, D.A. Harrison, P.L. Cochran, Symbolic or substantive document? The influence of ethics codes on financial executives' decisions. Strat. Manag. J. 26(2), 181–195 (2005)
15. J.L. Cummings, J.P. Doh, Identifying who matters: mapping key players in multiple environments. California Manage. Rev. 42(2), 83–104 (2000)
16. I. Ajzen, Nature and operation of attitudes. Annu. Rev. Psychol. 52(1), 27–58 (2001)
17. P. Norman, M. Conner, The role of social cognition models in predicting health behaviours: future directions, in Predicting Health Behaviour: Research and Practice with Social Cognition Models (1996), pp. 197–225
18. M.J. Vanlandingham, S. Suprasert, N. Grandjean, W. Sittitrai, Two views of risky sexual practices among northern Thai males: the health belief model and the theory of reasoned action. J. Health Soc. Behav. 195–212 (1995)


19. W.W. Chin, Commentary: issues and opinion on structural equation modeling (1998)
20. J.F. Hair Jr., M. Sarstedt, L. Hopkins, V.G. Kuppelwieser, Partial least squares structural equation modeling (PLS-SEM). European Business Review (2014)
21. C. Fornell, D.F. Larcker, Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 18(1), 39–50 (1981)
22. J.C. Nunnally, I.H. Bernstein, Psychometric Theory (McGraw-Hill, New York, 1978)
23. D. Barclay, C. Higgins, R. Thompson, The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Adoption and Use as an Illustration (1995)
24. R.F. Falk, N.B. Miller, A Primer for Soft Modeling (University of Akron Press, 1992)

Using Multiple Regression Model Analysis to Understand the Impact of Travel Behaviors on COVID-19 Cases

Khalil Ahmad Kakar and C. S. R. K. Prasad

Abstract Wuhan city is regarded as a political, cultural, financial, and educational center of China. On 12 December 2019, the Health Commission declared the first case of the novel coronavirus. COVID-19 rapidly spread to around 200 countries through human travel, and people in India are also experiencing this pandemic infection. This study aims to examine the association between the COVID-19 outbreak and travel behavior in India. The data were collected through the Ministry of Health and Family Welfare (a total of 1130 samples were gathered from Indian states); the information includes patients' socioeconomic characteristics (age, gender) and travel histories. A multiple regression model (MRM) was applied to evaluate the association between the COVID-19 outbreak and travel behavior: COVID-19 cases were treated as the dependent variable, and patients' gender and international and domestic travel histories as independent variables. The results confirmed that as domestic and international travel increases, COVID-19 cases increase; there is therefore a strong relationship between the COVID-19 outbreak and travel behavior. Moreover, this study reviewed the infected people in Indian states in terms of gender, age group and their association with travel records. Ultimately, the investigation concluded that self-isolation and travel restrictions are feasible alternatives to limit or reduce the long-term COVID-19 crisis.

Keywords COVID-19 · Travel behavior · Regression model

1 Introduction

Wuhan city is located in central China, has more than 11 million residents, and is regarded as a political, cultural, financial, and educational center of China. On 12 December 2019, the Wuhan Municipality declared the first case of the novel coronavirus. The report added that infected people had used Huanan Market, where seafood,

K. A. Kakar (B) · C. S. R. K. Prasad
Transportation Division, Civil Engineering Department, National Institute of Technology, Warangal 506004, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_54


animals, bats and snakes were sold [1–3]. The novel coronavirus has been recognized as a positive-sense single-stranded RNA virus, and coronaviruses divide into α, β, γ and δ types; the novel coronavirus that initially appeared in the Wuhan city of China belongs to the β type [4]. The virus was named COVID-19 by the WHO [5], which reported the disease as a pandemic [6, 7]. Over the past two decades, the World Health Organization has made several declarations concerning disease epidemics: the H1N1 outbreak in 2009, polio in 2014, the Ebola outbreak in 2014, the Zika epidemic in 2015–2016 and Kivu Ebola in 2019. The most recent declaration concerned the coronavirus in 2020: after a couple of gatherings and a thoughtful evaluation of the situation, the WHO declared the COVID-19 outbreak that began in 2019 in China [8]. The majority of people in vulnerable countries have faced lockdown or quarantine circumstances. Up to the present, no particular medicines have been discovered to cure this pandemic disease; therefore, most health authorities recommend self-isolation to mitigate the spread of the disease. On account of the impact of travel on the COVID-19 outbreak, the Chinese authorities used extreme measures to moderate the epidemic. In January 2020, the provincial administration of Wuhan city stopped all public traffic inside the town and blocked all incoming and outgoing transport; other municipalities in Hubei region declared comparable transportation control rules shortly after Wuhan city [9]. Furthermore, the International Association of Public Transport recommended that, while COVID-19 persists in the world, PT modes such as bus, metro, monorail, bus rapid transit, light rail transit and rail must be considered a high-risk environment because of the high number of passengers in a limited area with limited air-conditioning [10]. People in India are also suffering from this pandemic disease. India has suspended most travel to control the new coronavirus outbreak. The travel restrictions came when India had reported 70 cases of the coronavirus in the country, most of them travel-related, and at least 17 foreigners had also tested positive. The Ministry of Health and Family Welfare (MoHFW) announced the first case of coronavirus on 30 January 2020. As of 4 April 2020, a total of 2073 cases and 75 deaths had been confirmed by MoHFW. The government of India declared a curfew for two days, followed by 75 districts and main cities where coronavirus had affected people. Moreover, on 22 March 2020, a complete lockdown was ordered by the Indian Prime Minister for 21 days to reduce the spread of COVID-19. The government also applied travel restrictions for international residents: for example, on 3 March 2020, visa processing for Italian, Iranian and Japanese nationals was suspended, and mandatory quarantine of at least 14 days was extended to passengers coming from some Arabic countries [11, 12]. All PT modes stopped their services during the lockdown period from 24 March to 14 April 2020 all over India, because experience showed that travel has a significant impact on the spreading of the novel coronavirus. However, this claim should be statistically verified. Therefore, the objective of this research is to evaluate the impact of international and domestic travel on the spread of the novel coronavirus by utilizing a regression model, and to understand the importance of the lockdown and travel


restrictions declared by the government of India for the reduction of the COVID-19 outbreak. This study examines the association between the COVID-19 outbreak and travel behavior in India, using data collected through the Ministry of Health and Family Welfare (a total of 1130 samples gathered from Indian states); the information includes patients' socioeconomic characteristics (age, gender) and travel histories.

2 Literature Review

One study assessed the effectiveness of domestic and global travel restrictions in the rapid containment of infection. The authors used a methodical review based on the reporting requirements for well-organized studies and meta-analyses. The study found that domestic travel limitations and international boundary restrictions delayed the impact of disease pandemics by 7 days to eight weeks, respectively; global travel limitations decreased the extent and peak of epidemics by periods varying between 4 and 120 days, while travel restrictions reduced the rate of new cases by less than 3%, and their influence was diminished when they were implemented about one month to two weeks after the announcement of the pandemic [13]. A study examined the correlation between the domestic rail system and COVID-19 illness in China; using a regression model, it found a strong relationship between travel by train and the number of COVID-19 cases [14]. An investigation using the Global Epidemic and Mobility Model revealed that travel restriction and quarantine have a considerable impact on the reduction of COVID-19 cases [15]. COVID-19 appeared to come under control in Taiwan, with far fewer cases than neighboring countries like Korea and Japan, because effective measures were taken by the Taiwanese government, such as travel restrictions, governing the distribution of masks, extensive investigation of disease spread, and public education [16]. Over the past nine months, experience has disclosed that human travel, lack of social distance and mass gatherings adversely influence COVID-19 distribution; one study that monitored the health hazards of mass gatherings recommended suspending the Hajj and the 2020 Olympics in Japan due to the risk of COVID-19 [17]. Another study discussed the potential impacts of social distancing on daily commute patterns and summarized that walking and cycling are significant measures to keep adequate levels of wellness and vitality [18]. Furthermore, a study investigated how people's movement habits and regular journey behaviors changed during the coronavirus outbreak and examined whether the changes will continue afterwards or jump back to the normal condition; it found important changes in different aspects of travel behavior due to COVID-19 [19]. A literature review relating transportation and infectious diseases was carried out to find insights into managing such circumstances. It discovered that domestic and international travel restrictions, if applied at the early stages, are useful in monitoring the extent of


infectious illness; at a later stage, behavioral changes become prominent in limiting the spread [20]. Lin H. Chen et al. reviewed the role of the traveler in emerging infections and the magnitude of travel, finding that travel affects the development of infectious illnesses: passengers come into contact with various germs and people throughout their journeys, share environments with other people and can transmit infections in transit [21]. In 2006, research was conducted on the impact of travel on the spread of epidemic disease using travel-behavior survey data and a stochastic simulation model in Sweden; it found that a ban on travel of more than 50 km significantly decreases the number of affected people, and therefore strongly supported trip restriction as a useful method to decrease the effect of a disease outbreak [22]. Another study evaluated the relationship between the coronavirus lockdown and mobility in various Indian states, organized according to state-wise time-series data of the periodic change of people's mobility from a baseline. Using conditional formatting techniques and time-series trend plotting, it revealed that mobility for entertainment, markets, parks, public transport stations, and work areas decreased by 73.4, 51.2, 46.3, 66 and 56.7%, respectively, while visits to residential areas increased by over 23.8% as people stayed home throughout the lockdown [23]. A further investigation presented the role of travel restrictions in preventing and limiting epidemics by applying a scientific model that directly combines air travel and short-range mobility information with statistical information across the globe, examining alternative scenarios concerning the 2009 H1N1 epidemic. The result revealed that the reduction in air travel to and from Mexico was of too small a magnitude to influence the worldwide outbreak, and that even more stringent travel reductions may have led to delays on the order of only 17 days in the optimistic case of early intervention [24]. Other research created a novel metric to describe people's social distancing behavior. Singh et al. [25] deployed a support vector machine for time-series prediction of COVID-19. Bhatnagar et al. [26] analyzed COVID-19 data in the context of India. Kumari et al. [27] also employed machine learning techniques for analyzing and predicting COVID-19 cases with reference to India. The mobility data were collected through mobile telephones; using social distancing ratios (SDtj) and COVID-19 growth rates (GRtj) for each day and each county, the research indicates that social distancing behavior is strongly associated with reduced coronavirus case growth in the twenty-five most affected provinces in the US [28]. COVID-19 was discovered in December 2019 and has spread to more than 200 countries, affecting social, educational, sporting, business and academic activities. There is thus a lack of comprehensive earlier study of the factors that impact COVID-19 cases. It can be summarized from previous studies that different transport factors positively and negatively affect the spreading of diseases, and there is as yet no vaccine to prevent the spread of COVID-19 around the globe. Therefore, it is necessary to find the influential factors in order to decrease the number of cases.


3 Methodology

Due to the pandemic and quarantine circumstances, it was very difficult, if not impossible, for surveyors to collect information directly from COVID-19 patients. Therefore, secondary data are used for this study, obtained through the Indian Ministry of Health and Family Welfare (MoHFW). The data relate to COVID-19 cases that occurred from 15 March 2020 to 6 April 2020 in 20 Indian states; Fig. 1 presents the number of cases in each state. The data include the number of COVID-19 cases and the patients' gender, age level and travel histories (international and domestic). The data were collected randomly based on the number of cases in each state, and a summary is presented in Table 1. A total of 1130 samples were gathered from different Indian states. In order to understand the relationship between travel and COVID-19 cases, a multiple regression model was developed. In this study, the number of COVID-19 cases was adopted as the response (dependent) variable, and patients' travel histories were considered as explanatory (independent) variables. Experience indicates that in developing countries males travel more than females; therefore, the number of male patients was also considered as an independent variable. The evaluation process started with a review of patients' characteristics, after which the relationship between travel behavior and the COVID-19 epidemic was estimated using the regression model. Table 1 indicates that 75% of the patients are male. Furthermore, it reveals that 47% of patients had international travel histories and 20% had local travel histories in the week before being affected by the disease. Preliminary data evaluation shows that travelers have played a significant role in bringing COVID-19 into India: the first case of COVID-19 (KL-TS-P1) relates to a 20-year-old woman with an international travel history. Similarly, the second and third cases are associated with people who came from Wuhan

Fig. 1 COVID-19 cases in Indian states from 15 March 2020 to 6 April 2020


Table 1 Summary of collected data from 20 states of India regarding COVID-19 cases

State         1–15  16–26  26–40  41–55  56–70  71–96  Male  PDTH^a  PITH^b  Cases
Andhra P       1     4      8      6      4      1      14    3       10      23
Bihar          1     2      5      4      3      0      11    5       4       15
Chandigarh     1     2      4      3      2      0      11    3       5       13
Chhattisgarh   0     1      3      2      1      0      7     4       4       8
Delhi          2     8      18     14     9      2      41    15      15      52
Goa            0     1      2      1      1      0      4     0       4       5
Gujarat        2     9      19     15     10     2      44    15      22      56
Haryana        1     5      11     9      6      1      22    6       22      33
Himachal P     0     0      1      1      1      0      3     0       3       3
Karnataka      4     15     31     24     15     3      69    4       51      91
Kerala         9     37     79     60     39     7      181   19      145     232
Ladakh         1     2      4      3      2      0      11    2       4       13
Madhya P       2     8      17     13     8      1      42    12      13      49
Maharashtra    7     28     61     46     30     5      121   41      81      178
Rajasthan      3     12     24     19     12     2      61    15      36      72
Tamil Nadu     3     11     23     17     11     2      41    16      25      67
Telangana      3     11     24     18     12     2      55    9       48      70
Uttar P        4     15     33     25     16     3      71    32      25      96
Total          45    181    384    294    192    34     850   212     530     1130

Columns 1–15 through 71–96 give the age-level distribution of patients.
a Patients with domestic travel histories
b Patients with international travel histories
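The headline shares quoted above follow directly from the totals row of Table 1, as this minimal pandas sketch illustrates:

import pandas as pd

# Totals row of Table 1: male patients, travel histories and all cases
totals = pd.Series({"male": 850, "pdth": 212, "pith": 530, "cases": 1130})

shares = (totals[["male", "pdth", "pith"]] / totals["cases"] * 100).round(1)
print(shares)  # male ~75.2%, domestic ~18.8%, international ~46.9%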

city, while patient number 5 traveled from Dubai to Hyderabad and case number 6 relates to a 69-year-old man who came from Italy to Rajasthan. The COVID-19 transmission process started when patients 22, 23, 24, 25, 26 and 27 came into contact with patient number 6 and were hospitalized in March 2020 in a hospital in Uttar Pradesh. The data showed that around 37% of people were infected with COVID-19 through direct contact with other patients. Children are affected by COVID-19 more mildly than adults. With rising age, there seems to be a growing proportion of COVID-19 patients who require hospitalization and intensive care, and reports from many countries confirm that the risk of dying from COVID-19 is higher among the elderly (MoHFW, 2020). Figure 2 shows that people aged 26–40 and 41–55 were more vulnerable than children and elders, because these age groups travel more than the other groups. The preliminary data evaluation has revealed that domestic and international trips have a consequential impact on the distribution of COVID-19 in Indian cities. However, there is a lack of statistical support for this claim. Hence,


Fig. 2 Distribution of COVID-19 cases based on age group

this research applies a multiple regression model to investigate the influence of travel behavior on COVID-19 cases.

3.1 Multiple Regression Model

A simple linear regression model estimates the relationship between a dependent variable Y and a single explanatory variable X. The following equation shows a simple linear regression model:

Y = β0 + β1 Xi + ei   (1)

where
β0  Intercept
β1  Slope of the line
ei  Error term
X   Explanatory variable
Y   Dependent variable

A simple linear regression is applicable when there is one explanatory variable; in this study, however, three explanatory variables are considered. Hence, multiple linear regression is used to estimate the association between the dependent variable (COVID-19 cases) and the independent variables (number of patients with international travel histories, number of patients with domestic travel histories, and number of male patients). Equation (2) presents the multiple regression model, in which p explanatory variables can be included:

Y = β0 + β1 X1i + β2 X2i + · · · + βp Xpi + ei   (2)

where
β0             Constant
β1, β2, …, βp  Slope coefficients

Multiple linear regression is thus an extension of simple linear regression in which there is more than one explanatory variable (p = 1, 2, 3, …). This study considers four variables:

A  The number of COVID-19 cases in each Indian state (C-19 cases).
B  The number of COVID-19 patients in each Indian state who had international travel histories in the week before being affected (PITH).
C  The number of COVID-19 patients in each Indian state who had domestic travel histories in the week before being affected (PDTH).
D  The number of COVID-19 patients in each Indian state who are male (MP).

The dependent variable is C-19 cases, and the research investigates whether the variation in the number of cases can be explained by the independent variables (PITH, PDTH, and MP). The theoretical model is as follows:

C-19 Casesi = β0 + β1(PITH)i + β2(PDTH)i + β3(MP)i + ei   (3)

In multiple linear regression, the following assumptions are made: (1) the independent factors should be independent of each other, (2) the factors are normally distributed, (3) the factors are continuous, and (4) a linear relation must exist between the independent and dependent variables. In order to understand the association among the variables, their correlations were estimated; Table 2 shows that the variables are significantly correlated with each other. Finally, the multiple linear regression model was developed using the stepwise method in SPSS software. Equation (4) presents the result of the multiple regression; moreover, Fig. 3 shows the linear relationship between the response variable and the independent variables, and Table 3 presents the result of the model.

C-19 Casesi = −0.3208 + 0.4560(PITH)i + 0.8558(PDTH)i + 0.8392(MP)i   (4)
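For readers who want to replicate the estimation outside SPSS, the following sketch fits Eq. (3) by ordinary least squares with the statsmodels library. It uses only a handful of state rows transcribed from Table 1 rather than the full dataset, so its estimates will differ from Eq. (4):

import pandas as pd
import statsmodels.api as sm

# A few state-level rows from Table 1 (Andhra P, Bihar, Delhi,
# Karnataka, Kerala, Maharashtra); not the paper's full dataset.
df = pd.DataFrame({
    "cases": [23, 15, 52, 91, 232, 178],
    "pith":  [10,  4, 15, 51, 145,  81],
    "pdth":  [ 3,  5, 15,  4,  19,  41],
    "male":  [14, 11, 41, 69, 181, 121],
})

X = sm.add_constant(df[["pith", "pdth", "male"]])  # adds the intercept beta0
model = sm.OLS(df["cases"], X).fit()
print(model.params)    # beta estimates, cf. Eq. (4)
print(model.rsquared)  # cf. R^2 = 0.9951 in Table 3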

Table 2 Correlation between variables

            C-19 cases  PITH   PDTH   MP
C-19 cases  1           0.938  0.765  0.995
  Sig                   0.000  0.000  0.000
PITH        0.938       1      0.531  0.953
  Sig       0.000              0.016  0.000
PDTH        0.765       0.531  1      0.734
  Sig       0.000       0.016         0.000
MP          0.995       0.953  0.734  1
  Sig       0.000       0.000  0.000


Fig. 3 Linear relationship between response variable and independent variables

Table 3 Result of multiple regression model

Variables  Coefficients  Standard error  t Stat    P value
Intercept  −0.3208       1.5535          −0.2065   0.8390
PITH       0.4560        0.1615          2.8241    0.0122
PDTH       0.8558        0.1964          4.3564    0.0005
Male       0.8392        0.1491          5.6290    0.0000
R Square   0.9951

4 Result and Discussion

This section presents a summary of the multiple regression outcome, explains the results of the model, and briefly discusses the role of travel restriction in the spread of COVID-19 in India. Figure 3a–c shows the linear relationship between the dependent and independent variables, which is fitted adequately. In this study, the p values are below any conventional level of significance: the values presented in Table 3 for patients with domestic travel histories (PDTH), patients with international travel histories (PITH) and male patients (MP) are all quite reasonable. Concerning the contribution of domestic and international travel histories and male patients to the prediction of COVID-19 cases, the p values are statistically significant because the corresponding values are less than 0.05, so these can be regarded as influential variables for predicting COVID-19 cases. The positive signs of the coefficients indicate that as the independent variables increase, the response variable also increases. The R2 value (Table 3) shows that the regression explains about 99% of the total variation in COVID-19 cases. Finally, the multiple regression model confirmed that as domestic and international travel increases, the number of COVID-19 cases increases. Besides, the higher number of infected men also reflects the correlation between human travel and COVID-19 cases, because the evidence shows that in developing countries men travel more than women. Similarly, this study found that the numbers of children (0–10) and elders (71–96) were quite low among the infected people in Indian states because


these age groups travel less than the others. Therefore, travel restriction is the most significant factor during the COVID-19 epidemic; in the absence of a vaccine, it works like one in managing and controlling this crisis. Lockdown and travel restrictions were declared by the government on 22 March 2020 all over the Indian states. The data analysis revealed that domestic and international travel histories contributed more than 90% to the spread of COVID-19 in the most vulnerable states in India; the likely reason is that the epidemic had already spread to other cities within Indian states, because the first case was declared by the MoHFW on 30 January 2020, while quarantine was ordered only on 22 March 2020.

5 Conclusion

Up to the present moment, millions of people are faced with the novel coronavirus (COVID-19) outbreak, and every day thousands of innocent people die from this epidemic disease. This extreme circumstance has encouraged doctors, engineers, and scientists to act thoughtfully to prevent the COVID-19 outbreak. The investigation of the coronavirus outbreak and the model-based estimation of the impact of travel restrictions could be valuable to Indian and international authorities in planning the public health response. This study has ascertained that international and domestic travel are strongly connected risk factors for the outbreak of the existing COVID-19 disease, as the paper found a significant correlation between travel and the COVID-19 outbreak. Thus, declaring a lockdown and a self-isolation policy are feasible options to prevent or reduce the long-term crisis.

References

1. C. Biscayart, P. Angeleri, S. Lloveras, S. Chaves, S. do Tania, P. Schlagenhauf, A.J. Rodrigues-Morales, The next big threat to global health? 2019 novel coronavirus (2019-nCoV): What advice can we give to travellers?—Interim recommendations January 2020, from the Latin-American Society for Travel Medicine (SLAMVI). Trav. Med. Infect. Dis. 33, 101567 (2020)
2. B.I. Issac, A. Watts, A. Bachli-Thomas, C. Huber, U. Kraemer, G. Moritz, K. Kamran, Pneumonia of unknown aetiology in Wuhan, China: potential for international spread via commercial air travel. J. Trav. Med. 22(2), 1–3 (2020)
3. World Health Organization (WHO), https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200121-sitrep-1-2019-ncov.pdf. Last accessed 21 Jan 2020
4. G. Li, R. Hu, X. Gu, A close-up on COVID-19 and cardiovascular diseases. Nutr. Met. Card. Dis. 30(7), 1057–1060 (2020)
5. World Health Organization (WHO), https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200211-sitrep-22-ncov.pdf. Last accessed 11 Feb 2020
6. S.K. Awadhesh, A. Singh, A. Shaikh, R. Singh, A. Misra, Chloroquine and hydroxychloroquine in the treatment of COVID-19 with or without diabetes: a systematic search and a narrative review with a special reference to India and other developing countries. Diab. Met. Syn. Clin. Rese. Rev. 14(3), 241–246 (2020)
7. D. Kang, H. Choi, H. Kim, J. Choi, Spatial epidemic dynamics of the COVID-19 outbreak in China. Int. J. Infect. Dis. 94, 96–102 (2020)
8. A.J. Rodríguez-Morales, D. Patel, S. Kanagarajah, K. MacGregor, P. Schlagenhauf, Going global—travel and the 2019 novel coronavirus. Trav. Med. Infect. Dis. 33, 101578 (2020)
9. Q. Lin et al., A conceptual model for the coronavirus disease 2019 (COVID-19) outbreak in Wuhan, China with individual reaction and governmental action. Int. J. Infe. Dis. 93, 211–216 (2020)
10. International Association of Public Transport (UITP), https://cms.uitp.org/wp/wp-content/uploads/2020/06/Corona-Virus_EN.pdf. Last accessed 20 Feb 2020
11. Government of India, Ministry of Health & Family Welfare (MoHFW), https://www.mohfw.gov.in/pdf/ConsolidatedTraveladvisoryUpdated11032020.pdf. Last accessed 11 Jan 2020
12. Government of India, Press Information Bureau, https://www.mea.gov.in/Images/amb1/covid2020.pdf. Last accessed 17 Mar 2020
13. A.L.P. Mateus, O.E. Harmony, B.R. Charles, D.P. Gayle, J.S. Nguyen-Van-Tamb, Effectiveness of travel restrictions in the rapid containment of human influenza: a systematic review. Syst. Rev. 92(12), 868–880 (2014)
14. S. Zhao, Z. Zhuang, J. Ran, J. Lin, G. Yang, L. Yang, D. He, The association between domestic train transportation and novel coronavirus (2019-nCoV) outbreak in China from 2019 to 2020: a data-driven correlational report. Trav. Med. Infect. Dis. 33, 101568 (2020)
15. M. Chinazzi et al., The effect of travel restrictions on the spread of the 2019 novel coronavirus (2019-nCoV) outbreak. Science 368(6489), 395–400 (2020)
16. C. Lai, Y. Wang, Y. Wang, H. Wang, C. Hsueh, C. Ko, R. Hsueh, Global epidemiology of coronavirus disease 2019 (COVID-19): disease incidence, daily cumulative index, mortality, and their association with country healthcare resources and economic status. Int. J. Antim. Age. 55(4), 105946 (2020)
17. Q.A. Ahmed, Z.A. Memish, The cancellation of mass gatherings (MGs)? Decision making in the time of COVID-19. Trav. Med. Infe. Dis. 34, 101631 (2020)
18. J.D. Vos, The effect of COVID-19 and subsequent social distancing on travel behaviour. Trans. Res. Interd. Pers. 5, 100121 (2020)
19. A. Shamshiripour, E. Rahimi, R. Shabanpour, A. Mohammadian, How is COVID-19 reshaping activity-travel behavior? Evidence from a comprehensive survey in Chicago. Trav. Med. Infect. Dis. 7, 100216 (2020)
20. D. Muley, Md. Shahin, C. Dias, M. Abdullah, Role of transport during outbreak of infectious diseases: evidence from the past. Sustainability 12(18), 7367 (2020)
21. L.H. Chen, M.E. Wilson, The role of the traveler in emerging infections and magnitude of travel. Med. Clin. Nor. Am. 92(6), 1409–1432 (2008)
22. M. Camitz, F. Liljeros, The effect of travel restrictions on the spread of a moderately contagious disease. BMC Med. 4(1) (2006)
23. J. Saha, B. Barman, P. Chuhan, Lockdown for COVID-19 and its impact on community mobility in India: an analysis of the COVID-19 Community Mobility Reports, 2020. Chi. You. Ser. Rev. 116, 105160 (2020)
24. P. Bajardi, C. Poletto, J.J. Ramasco, M. Tizzoni, V. Colizza, A. Vespignani, Human mobility networks, travel restrictions, and the global spread of 2009 H1N1 pandemic. PLoS One 6(1), e16591 (2011)
25. V. Singh, R.C. Poonia, S. Kumar, P. Dass, P. Agarwal, V. Bhatnagar, L. Raja, Prediction of COVID-19 corona virus pandemic based on time series data using support vector machine. J. Discr. Math. Sci. Cryptogr. 23(8), 1583–1597 (2020). https://doi.org/10.1080/09720529.2020.1784535
26. V. Bhatnagar, R.C. Poonia, P. Nagar, S. Kumar, V. Singh, L. Raja, P. Dass, Descriptive analysis of COVID-19 patients in the context of India. J. Interdiscip. Math. 24(3), 489–504 (2020). https://doi.org/10.1080/09720502.2020.1761635
27. R. Kumari, S. Kumar, R.C. Poonia, V. Singh, L. Raja, V. Bhatnagar, P. Agarwal, Analysis and predictions of spread, recovery, and death caused by COVID-19 in India. Big Data Min. Anal. 4(2), 65–75. https://doi.org/10.26599/BDMA.2020.9020013
28. S.H. Badr, H. Du, M. Marshall, E. Dong, M. Squire, M.L. Gardner, Social distancing is effective at mitigating COVID-19 transmission in the United States. Lan. Inf. Dis. 20(11), 1247–1254 (2020)

The Review of Prediction Models for COVID-19 Outbreak in Indian Scenario

Ramesh Chandra Poonia, Pranav Dass, Linesh Raja, Vaibhav Bhatnagar, and Jagdish Prasad

Abstract COVID-19 is one of the worst pandemics the world has witnessed in the recent century. As per the data available on worldometers.info during the early wave, almost 4.6 million people had already tested positive for COVID-19 around the world; of these, 0.3 million died and around 2 million recovered. The COVID-19 epidemic has been declared an international public health emergency by the World Health Organization (WHO), which has put in place a series of temporary recommendations. To date, no specific forecasting models are available to anticipate the outbreak of the COVID-19 pandemic. In this paper, the authors evaluate forecasting models to understand the outbreak of COVID-19 in India during the initial period. Various regression and clustering models were evaluated on the available pandemic dataset to forecast the outbreak of COVID-19.

Keywords COVID-19 · Corona · Regression · Clustering · Prediction model · Statistics

1 Introduction

COVID-19 is a respiratory infection with common signs such as fever, cough, shortness of breath and difficulty breathing. The new coronaviral disease

R. C. Poonia
Department of Computer Science, CHRIST (Deemed to be University), Bangalore, Karnataka, India
e-mail: [email protected]

P. Dass
Bharati Vidyapeeth's College of Engineering, New Delhi, India
e-mail: [email protected]

L. Raja (B) · V. Bhatnagar
Manipal University Jaipur, Jaipur, Rajasthan, India

J. Prasad
Amity Institute of Applied Sciences, Amity University Rajasthan, Jaipur, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_55


outbreak (COVID-19) started in December 2019 in Hubei Province of China, and almost all countries are now affected by this pandemic. As per the data for the early wave, more than 4.6 million people had already tested positive for COVID-19 around the world; of these, 0.3 million died and around 2 million recovered [1]. Given the current scenario, the world is going to witness a huge increase in COVID-19 cases in the coming days. In India, around 85,000 people had already tested positive for COVID-19, of whom 2700 died and around 20,000 recovered [1–3]. Despite a population of 1.33 billion, India controlled the pandemic during the initial wave better than other European and American countries. However, no authenticated tool has been developed or evaluated to forecast the real scenario of the COVID-19 outbreak. This paper is an attempt to evaluate forecasting models for the COVID-19 outbreak from pandemic data.

2 Literature Review

In the recent trimester, a huge number of research articles have been published on COVID-19. Of these, we have studied several articles published on forecasting models; their approaches to COVID-19 are discussed in detail in Table 1.

3 Methodology

The objective of the proposed model is to design and implement a forecasting model for the COVID-19 outbreak from pandemic data. The objective can further be classified into the following parts:

• Collect the pandemic dataset of COVID-19 from different demographic locations of Indian states. This objective is further classified into two parts, namely feature scaling of the available dataset and preparation of the training and test datasets.
• Select various regression and clustering models for training the system with the help of the training dataset.
• Develop a method to forecast the COVID-19 outbreak from the trained data.
• Evaluate the forecast model on the basis of the developed methods with the help of the test dataset.

Figure 1 shows the methodology adopted to achieve this goal; a minimal sketch of the data-preparation step is shown below.
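As a sketch of the data-preparation step, the fragment below scales a synthetic day-indexed case series and splits it chronologically into training and test sets. The series is a placeholder, not the actual dataset of [1–3, 18, 19]:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

days = np.arange(1, 75).reshape(-1, 1)                   # day index feature
cases = (50 * np.exp(0.08 * days.ravel())).astype(int)   # synthetic case curve

X_scaled = MinMaxScaler().fit_transform(days)            # feature scaling step
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, cases, test_size=0.2, shuffle=False)       # chronological split
print(X_train.shape, X_test.shape)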

4 Dataset

The pandemic data are collected and validated from the various sources available at [1–3, 18, 19]. They are then scaled according to their features and split into training and test datasets.


Table 1 COVID-19 methods review

Reference               Time period     Location                   Methods                                          Remarks
Wu et al. [4]           –               China & rest of the world  Generalized logistic model                       Predict the growth of the COVID-19 outbreak
Petropoulos et al. [5]  Jan–March 2020  World                      Sizable associated uncertainty                   Increase in the confirmed COVID-19
Bastos et al. [6]       March 2020      Brazil                     Two variations of the SIR model                  Forecast the early evolution of the COVID-19
IHME COVID-19 [7]       Jan–April 2020  US                         Statistical model                                Forecasted COVID-19
Stübinger et al. [8]    Jan–March 2020  World                      Statistical approach                             Forecasted the future spread of COVID-19
Elmousalami et al. [9]  Jan–March 2020  World                      Time series models and mathematical formulation  Forecasting models on COVID-19 and their day-wise comparison
Botha et al. [10]       Jan 2020        World                      Three-dimensional iterative map model            Forecast the global spread
Mohammed et al. [11]    Jan 2020        China                      ANFIS, FPA and SSA                               Optimized forecasting method
Roosa et al. [12]       Feb 2020        China                      Phenomenological models                          Real-time forecast of COVID-19
Buizza [13]             Jan–March 2020  Italy and South Korea      Probabilistic approach                           Predict the evolution of COVID-19
Fanelli et al. [14]     Jan–March 2020  China, Italy and France    Time-lag plots                                   Forecasted the COVID-19 spreading
Hu et al. [15, 16]      Jan–March 2020  China                      AI and stacked auto-encoder modelling            Real-time forecasting of COVID-19
Yang et al. [17]        Jan 2020        China                      SEIR and AI                                      Prediction of COVID-19


Fig. 1 Proposed methodology

5 Models

The authors used and compared various supervised and unsupervised learning models for designing and implementing the forecasting model for the COVID-19 outbreak from pandemic data. The study includes several regression and clustering models.

5.1 Regression Model

These are supervised statistical models used for designing a forecasting model. They treat the dependent variable as the target variable and the independent variables as predictor variables for forecasting; sometimes they also depend on time-series variables. A major benefit of these models is that they allow the effects of multiple independent variables on a dependent variable to be compared. Three important aspects of a regression model are the number of independent variables, the shape of the regression line, and the type of dependent variable. The types considered in this paper are discussed below.

Linear Regression: It models the relationship between the target and the predicted value with a straight line, whose mathematical notation is:

y = b0 + b1 x1   (1)

In the above equation, y is the dependent variable, x1 is the independent variable, b0 is the constant and b1 is the coefficient.

Polynomial Regression: It is an extension of linear regression that can train on and predict nonlinear attributes of the dataset:

y = b0 + b1 x1 + b2 x1^2 + · · · + bn x1^n   (2)


In the above equation, y is the dependent variable, x1 is the independent variable, b0 is the constant and (b1 … bn) are the coefficients. To improve the prediction, the polynomial order of the independent variable may be raised to the nth degree.

Support Vector Regression (SVR): It is used to train on and predict both linear and nonlinear attributes of the dataset. It tries to fit as many instances as possible within a margin, whose width is governed by the hyperparameter epsilon, instead of fitting the largest possible population. In this approach, each training data point represents its own dimension; when a test point is compared with the points in the training set, the resulting value gives the coordinate of the test point in this dimension.

Decision Tree Regression: A decision tree is a flowchart structure in which every internal (non-leaf) node is a test on an attribute, each branch is an outcome of that test, and every leaf (terminal) node carries a predicted value (a class label, in classification trees); the top node of the tree is the root node. Decision trees are very good at handling tabular data with numeric features or categorical features with fewer than a hundred categories. Unlike linear models, decision trees are capable of capturing nonlinear interactions between the features and the target, although they are not designed to work with very sparse features.

Random Forest Regression: It is a type of additive predictive model that combines decisions from a series of base models, each of which is a simple decision tree; the final prediction is an aggregate of these simple base models. This ensemble technique of using multiple models is employed to improve predictive performance. In random forests, each base model is built independently on a different subsample of the data. Like single trees, random forests are very good at handling tabular data with numeric features or categorical features with fewer than hundreds of categories, and unlike linear models they can capture nonlinear interactions between the features and the target. A minimal sketch of how these five models can be instantiated is shown below.
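The following scikit-learn sketch instantiates the five models with the basic parameters later listed in Table 2 (polynomial degree 6, RBF kernel for SVR, random state 0, 10 estimators for the random forest). The data here are a synthetic case curve, so this only illustrates the setup, not the paper's results:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

X = np.arange(1, 75).reshape(-1, 1)    # day number
y = 50 * np.exp(0.08 * X.ravel())      # synthetic case curve

models = {
    "linear": LinearRegression(),
    "polynomial": make_pipeline(PolynomialFeatures(degree=6), LinearRegression()),
    "svr": SVR(kernel="rbf"),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(random_state=0, n_estimators=10),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, model.predict([[80]]))  # forecast for day 80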

5.2 Clustering Model

Clustering is the process of grouping data so that points in the same cluster are very similar to each other, while points in different clusters are dissimilar. Clustering is a form of unsupervised learning, since there is no target variable indicating which groups the training data belong to.

K-Means Clustering: To find cluster centers for a predetermined number of clusters ("K"), it minimizes the sum of the squared distances from each point to its assigned cluster center; points are assigned to the cluster whose center is closest. This is usually the faster of the two options and can be further accelerated by setting the batch-size parameter so that only a small subset of the data is used for each training iteration.

Hierarchical Clustering: It is the most popular probability-density-based clustering method; neighboring points with a high estimated probability density are used to create clusters. Compared with K-means, hierarchical clustering has lower computational


efficiency. It does, however, capture more flexible cluster shapes, and it can also detect the best number of clusters automatically, as well as the outliers. A minimal sketch of both clustering models is shown below.
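A minimal scikit-learn sketch of both clustering runs, using the paper's settings of 5 clusters and random state 32, is shown below. The (population density, active cases) pairs are synthetic, and scikit-learn's standard agglomerative algorithm is assumed here as the hierarchical method:

import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(32)
X = np.vstack([
    rng.normal([400, 200], [100, 80], size=(16, 2)),  # typical states
    np.array([[365.0, 8000.0], [11300.0, 4000.0]]),   # outliers akin to Maharashtra/Delhi
])

km = KMeans(n_clusters=5, random_state=32, n_init=10).fit(X)
hc = AgglomerativeClustering(n_clusters=5).fit(X)
print(km.labels_)
print(hc.labels_)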

6 Result Analysis

In the first phase of the result analysis, the various regression models are evaluated on the COVID-19 dataset available from 3 March to 15 May 2020 for the Indian region. The prediction performance of all five models, i.e., Linear, Polynomial, Support Vector, Decision Tree and Random Forest Regression, is evaluated for the following cases:

• Number of Registered COVID-19 Cases
• Number of Daily New COVID-19 Cases
• Number of Registered Death COVID-19 Cases
• Number of Daily New Death COVID-19 Cases

Each of the models used the default scaling; the basic parameters for each model are provided in Table 2. As the results in Fig. 2a–d show, the polynomial, decision tree and random forest models run almost on the original dataset curve. In contrast, the linear and support vector models do not fit the prediction; their predicted lines are far away from the original dataset curve. At the same time, closely monitoring Fig. 2b, d reveals a slight deviation of the polynomial regression prediction from the original dataset curve. Figure 2a–d shows the trend of increase in the number of cases with respect to the number of days in India. As per our own observation, the initial number of cases in Kerala showed an increasing trend, but the number of newly registered cases later reduced to a large extent. Figure 3a–d evaluates the above cases for the Kerala, India region over the same time period. It was again observed that, through the increases and decreases in actual cases, the polynomial, decision tree and random forest regression models again run almost on the original dataset curve, while the linear and support vector models again do not fully fit the prediction.

Table 2 Regression models basic parameters

Model          Parameter                    Value
Linear         Default                      NA
Polynomial     Degree                       6
SVR            Kernel                       rbf
Decision tree  Random state                 0
Random forest  Random state / Estimators    0 / 10



Fig. 2 a Total cases. b Daily new cases. c Total deaths. d Daily new deaths

In the second phase of the result analysis, K-means and hierarchical clustering models are used to form clusters on the basis of population density for the different states of India. The clusters are categorized for the following cases:

• Number of Active COVID-19 Cases
• Number of Registered Death COVID-19 Cases

Each of the models used the default scaling; for each case, the default number of clusters and the random state parameter are 5 and 32, respectively. As the results in Fig. 4a show, almost all states have very low to high case counts at population densities of less than 1000/km². The only exceptions are Maharashtra (red) and New Delhi (blue): the red cluster has low density with extremely high active COVID-19 cases, and the blue cluster has extremely high density with high active COVID-19 cases. An almost similar trend is shown in Fig. 4b, which depicts the number of COVID-19 deaths. Both clustering models predict similar clusters.



Fig. 3 a Total cases. b Daily new cases. c Total deaths. d Daily new deaths


Fig. 4 a Active cases. b Death cases

7 Conclusion

The authors present an analysis of various regression and clustering models for COVID-19 in India, based on the dataset available for the early wave in India. Overall, the prediction models suggest that the number of registered COVID-19 cases in India will increase at a rapid pace; at the same time, the recovery rate of COVID-19-positive cases will also increase. Table 3 provides the summarized


Table 3 Regression models error & score

               Mean score error                                           Prediction score (%)
Model name     Total cases  Daily new cases  Total deaths  Daily deaths   Total cases  Daily new cases  Total deaths  Daily deaths
Linear         11,050       510              369           21             76           83               76            76
Polynomial     450          210              23            15             99           97               99            87
SVR            25,675       1304             839           32             −26          0                −24           39
Decision tree  0            0                0             0              100          100              100           100
Random forest  510          105              17            7              99           99               99            97
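The two blocks of Table 3 can be produced with scikit-learn's metric functions; reading the prediction score as R² expressed as a percentage is our assumption about how the score was computed. The self-contained sketch below uses a synthetic series, so its numbers will not match Table 3:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score

X = np.arange(1, 75).reshape(-1, 1)
y = 50 * np.exp(0.08 * X.ravel())                 # synthetic case curve
X_tr, X_te, y_tr, y_te = X[:59], X[59:], y[:59], y[59:]

for name, model in {"linear": LinearRegression(),
                    "decision tree": DecisionTreeRegressor(random_state=0)}.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"MSE = {mean_squared_error(y_te, y_pred):.0f}",
          f"score = {100 * r2_score(y_te, y_pred):.0f}%")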

report of the regression models' mean score error and prediction score. It concludes that the Decision Tree Regression model fits the COVID-19 dataset best in all cases, followed closely by Random Forest and Polynomial Regression. Linear regression shows a better trend when the growth in cases is linear, but SVR is far from a good score. The clustering models predict that, except for Maharashtra and New Delhi, there is no significant relationship between the number of cases and population density. However, the clustering of states by the number of active and death cases will surely help agencies and the government to prepare rescue plans per zone, and it will be interesting to see states move from one cluster to another on the basis of their numbers of active cases and deaths. The study and evaluation of these models helps us to forecast the outbreak of COVID-19 in India. The trend suggests that there will be an increase in the total number of registered cases in the upcoming days, and at the same time there will be frequent changes of cluster in terms of COVID-19 active and death cases.

Acknowledgements This research work is carried out under the research project entitled Development of prediction model for COVID-19 using machine learning (File Number: MSC/2020/000457). The project is funded by the Science and Engineering Research Board (SERB) under the MATRICS Special COVID-19 scheme.

References

1. COVID-19 Statistics, available at: https://www.worldometers.info/coronavirus/
2. COVID-19, Ministry of Health & Family Welfare, GOI, available at: https://www.mohfw.gov.in/
3. COVID-19, ICMR, India, available at: https://www.icmr.gov.in/
4. K. Wu, D. Darcet, Q. Wang, D. Sornette, Generalized logistic growth modeling of the COVID-19 outbreak: comparing the dynamics in the 29 provinces in China and in the rest of the world. Nonlinear Dyn. 101(3), 1561–1581 (2020)
5. F. Petropoulos, S. Makridakis, Forecasting the novel coronavirus COVID-19. PLoS One 15(3), e0231236 (2020)
6. S.B. Bastos, D.O. Cajueiro, Modeling and forecasting the early evolution of the Covid-19 pandemic in Brazil (2020). arXiv preprint arXiv:2003.14288
7. IHME COVID-19, C.J. Murray, Forecasting COVID-19 impact on hospital bed-days, in ICU-Days, Ventilator-Days and Deaths by US State in the Next 4 Months (2020). MedRxiv
8. J. Stübinger, L. Schneider, Epidemiology of coronavirus COVID-19: forecasting the future incidence in different countries, in Healthcare (Multidisciplinary Digital Publishing Institute, 2020), vol. 8, no. 2, p. 99
9. H.H. Elmousalami, A.E. Hassanien, Day level forecasting for Coronavirus Disease (COVID-19) spread: analysis, modeling and recommendations (2020). arXiv preprint arXiv:2003.07778
10. A.E. Botha, W. Dednam, A simple iterative map forecast of the COVID-19 pandemic (2020). arXiv preprint arXiv:2003.10532
11. M.A. Al-qaness, A.A. Ewees, H. Fan, M. Abd El Aziz, Optimization method for forecasting confirmed cases of COVID-19 in China. J. Clin. Med. 9(3), 674 (2020)
12. K. Roosa, Y. Lee, R. Luo, A. Kirpich, R. Rothenberg, J.M. Hyman, P. Yan, G. Chowell, Real-time forecasts of the COVID-19 epidemic in China from February 5th to February 24th, 2020. Infect. Dis. Model. 5, 256–263 (2020)
13. R. Buizza, Probabilistic prediction of COVID-19 infections for China and Italy, using an ensemble of stochastically-perturbed logistic curves (2020). arXiv preprint arXiv:2003.06418
14. D. Fanelli, F. Piazza, Analysis and forecast of COVID-19 spreading in China, Italy and France. Chaos, Solitons & Fractals 134, 109761 (2020)
15. Z. Hu, Q. Ge, L. Jin, M. Xiong, Artificial intelligence forecasting of covid-19 in China (2020). arXiv preprint arXiv:2002.07112
16. Z. Hu, Q. Ge, S. Li, E. Boerwincle, L. Jin, M. Xiong, Forecasting and evaluating intervention of Covid-19 in the World (2020). arXiv preprint arXiv:2003.09800
17. Z. Yang, Z. Zeng, K. Wang, S.S. Wong, W. Liang, M. Zanin et al., Modified SEIR and AI prediction of the epidemics trend of COVID-19 in China under public health interventions. J. Thorac. Dis. 12(3), 165 (2020)
18. COVID-19 Dataset, available at: https://www.kaggle.com/
19. COVID-19 (WHO), available at: https://www.who.int/emergencies/diseases/novel-coronavirus-2019

Design and Simulation of ECG Signal Generator by Making Use of Medical Datasets and Fourier Transform for Various Arrhythmias

M. R. Rajeshwari and K. S. Kavitha

Abstract The activities of the human heart are summarized using the electrocardiogram, which records the electrical changes occurring on the human body. This noninvasive measure provides information on various heart-related diseases. In practice, precise oscilloscopes are used, which can help in diagnosis in the medical industry. The generation of ECG signals is very important so that various classification analyses can be performed on the generated ECG signal using either supervised or unsupervised machine learning methods. In this work, we have designed and implemented an ECG simulator that can generate ECG signals for determining various kinds of arrhythmia, namely ischemic changes (coronary artery disease), old anterior myocardial infarction, old inferior myocardial infarction, sinus tachycardia, sinus bradycardia, ventricular premature contraction (PVC), supraventricular premature contraction, left bundle branch block, right bundle branch block, left ventricle hypertrophy and atrial fibrillation or flutter. A Fourier series and a summer are used to generate the ECG signals. Datasets are collected from the standard UCI repository, and ECG signals are then generated for different kinds of arrhythmia, with variations in the P, Q, R, S and T waves, the P-R interval, the QRS interval, and the amplitudes.

Keywords Fourier series · ECG simulator · UCI repository · MATLAB

Nomenclature

ECG  Electrocardiogram

M. R. Rajeshwari
Department of Computer Science and Engineering, Ghousia College of Engineering, Ramanagara, Karnataka 562159, India

K. S. Kavitha (B)
Department of Computer Science and Engineering, Global Academy of Technology, RR Nagar, Bangalore, Karnataka 560098, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
R. C. Poonia et al. (eds.), Proceedings of Third International Conference on Sustainable Computing, Advances in Intelligent Systems and Computing 1404, https://doi.org/10.1007/978-981-16-4538-9_56


ADC  Analog to digital converter
PVC  Premature ventricular contraction
UCI  UC Irvine

1 Introduction

For electrocardiography, electrodes are placed on the skin and the electrical activity of the heart is recorded. The heartbeat is a process of repeated depolarization and repolarization, and the ECG signal helps in identifying cardiac problems. Figure 1 shows the ECG signal, which has the following characteristics:

1. O is the origin, the point which precedes the cycle
2. P is the atrial systole contraction pulse
3. Q is the downward deflection which comes immediately before ventricular contraction
4. R is the upward deflection which corresponds to ventricular contraction
5. S is the downward deflection which appears immediately after ventricular contraction
6. T is the recovery of the ventricles

In a conventional ECG system, there are 12 leads placed on the chest and limbs of the person. ECG signals help in the diagnosis of cardiac diseases, which are complex in nature [1]. There are many characteristics of the ECG signal which are very important and need to be extracted. When sine and cosine waves are added together linearly, the result is a Fourier series; in this way a complex curve can be separated into sine and cosine curves [2]. The main properties of a Fourier series are:

1. It is periodic in nature
2. It has a maximum and a minimum within a given interval
3. It is possible to integrate it

Fig. 1 ECG signal

The base equation of the Fourier series can be defined as

$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{n\pi x}{l}\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{n\pi x}{l}\right)$   (1)

where

$a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\,dx$   (2)

$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\,dx$   (3)

$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\,dx$   (4)
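To make the synthesis concrete, the following MATLAB sketch evaluates a truncated version of Eq. (1). This example is not taken from the paper: the half-period l and the coefficients a0, an, and bn are placeholder assumptions standing in for values computed from Eqs. (2)-(4).

% Illustrative partial sum of Eq. (1); coefficients are placeholders
% standing in for values computed from Eqs. (2)-(4).
l  = 1;                         % assumed half-period
x  = linspace(-l, l, 1000);     % evaluation grid
N  = 50;                        % number of harmonics kept
a0 = 1;                         % placeholder mean term
an = 1 ./ (1:N).^2;             % placeholder cosine coefficients
bn = zeros(1, N);               % placeholder sine coefficients
f  = (a0 / 2) * ones(size(x));  % start from the constant term
for n = 1:N
    f = f + an(n) * cos(n*pi*x/l) + bn(n) * sin(n*pi*x/l);
end
plot(x, f); xlabel('x'); ylabel('f(x)');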

2 Background

An embedded telemedicine system takes responsibility for the health monitoring of a patient [3]. The system makes use of GSM/GPRS so that the monitoring can be performed remotely and a doctor can view reports over the internet. A simulator can generate normal and abnormal [4] ECG waveforms; the ECG signal has two important properties, mean and standard deviation, which follow a Gaussian distribution. A practical ECG simulator [5] can be used to test ECG devices. The process involves several steps: (a) digitally stored ECG signals are sent from a microcontroller, (b) a digital-to-analog converter picks up the ECG signal, (c) the signals are sent to terminal circuits, and (d) an LCD mounted on the device shows the ECG signal. A device has been designed which can simulate the ECG signal [6] without making use of an oscilloscope, filter, or voltage supply; the Levkov method is used to improve the ECG simulation by reducing the QRS wave. A signal simulator [7] has been designed for finding the changes occurring in the fetus between the 20th and 42nd weeks of pregnancy; an actual fECG record taken from clinical practice data, together with ST analysis data, is used to create a dynamic fECG model. There are many features in the ECG signal [8]; among them, the most important are the RR intervals and the QRS durations. The detection process is made more accurate by using multiple detection channels along with fusion


techniques. The data used for testing in this approach contain digitized paper-based ECG segments and recordings from a portable intelligent ECG monitor for multiple patients. An ECG simulator with high resolution and low memory usage [9] has been implemented with the following steps: (a) the continuous ECG signal is converted to a discrete one, (b) the discrete signal is passed to a kernel function, (c) the kernel ECG signal is converted to polynomial equations, and (d) a discrete least-squares technique is applied. The ECG signal can be obtained from electrical bio-potentials [10] using electrodes placed on the body; the electrocardiograph records the attenuation of the frequency and amplitude of the ECG signal. The simulator helps physicians study heart rhythm disorders in a cost-effective way. A PIC18F4550 microcontroller [11] is used to obtain an efficient and low-priced heart beat measurement: the microcontroller first detects the ECG signal and then sends it to a LabVIEW GUI, where the ECG signal is stored in the WAV file format. An open and low-cost ECG simulator can be used to validate ECG devices [12] and to make sure manufacturers follow the regulatory process; after certification, the devices can be used in e-Health systems to monitor or diagnose cardiovascular diseases. The ECG signal waveforms are generated from digitized and noise waveforms. The ECG signal shown in Fig. 1 contains the QRS complex [13], whose detection can be performed in the following steps: (a) the ECG signal is digitized using an ADC, (b) the signal is decomposed into four wavelet banks using a wavelet decomposer, (c) the wavelet filter bank is connected to a noise detector, and (d) the combined signal is passed through a multi-scaled product detector and a threshold is applied, which improves the efficiency of the QRS complex detector. An ECG simulator based on Fourier series [14] can be used to find the patterns of normal and abnormal heart beats; the heart rate correlates with the R-wave peaks and is inversely related to the S-wave minima, and the abnormal ECG signals are short in nature. An ECG monitor based on an 8-bit MCU [15] can detect the QRS portion effectively and deals with various conditions such as premature ventricular contractions (PVCs), ventricular and supraventricular tachycardia, and atrial fibrillation. Three entities are used to form a personal computer-based simulator [16]: firstly, a biomedical signal generator; secondly, an ECG simulator; and thirdly, an EKG monitor which collects data from the patient.

3 Mathematical Formulation of ECG Using Fourier Series

The ECG signal has the property of periodicity. Over a finite interval, it is a combination of triangular and sinusoidal waves: the QRS portion can be represented by a triangular wave, while the P and T portions can be represented by sine waves. By applying shifting and scaling techniques, the ECG waveform can be generated. If we represent the period as $T_P$, then the constants given in Eqs. (2)-(4) can be redefined as follows:

$a_0 = \frac{1}{l} \int_T f(x)\,dx$   (5)

$a_n = \frac{1}{l} \int_T f(x) \cos\left(\frac{n\pi x}{l}\right) dx$   (6)

$b_n = \frac{1}{l} \int_T f(x) \sin\left(\frac{n\pi x}{l}\right) dx$   (7)

If the QRS portion is treated as a triangular waveform on the interval from $-1/T_{QRS}$ to $1/T_{QRS}$, the function $f(x)$ can be defined as

$f(x) = -\frac{T_{QRS}\,a\,x}{l} + a$ for $0 < x < \frac{1}{T_{QRS}}$, and $f(x) = \frac{T_{QRS}\,a\,x}{l} + a$ for $-\frac{1}{T_{QRS}} < x < 0$   (8)

$a_0 = \frac{1}{l} \int_T f(x)\,dx = \frac{a}{T_{QRS}} \left(2 - T_{QRS}\right)$   (9)

$a_n = \frac{1}{l} \int_T f(x) \cos\left(\frac{n\pi x}{l}\right) dx = \frac{2\,T_{QRS}\,a}{n^2 \pi^2} \left(1 - \cos\frac{n\pi}{T_{QRS}}\right)$   (10)

$b_n = \frac{1}{l} \int_T f(x) \sin\left(\frac{n\pi x}{l}\right) dx = 0$   (11)

Equation (11) is zero because the signal is even in nature. Substituting Eq. (11) into Eq. (1), one can derive the following:

$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left(\frac{n\pi x}{l}\right)$   (12)

Equations (9)–(12) can be used to represent the QRS portion. The P portion of the ECG signal is a sinusoidal wave and can be represented using the following equations:

$f(x) = \cos\left(\frac{\pi T_{QRS}\,x}{2l}\right)$ for $-\frac{1}{T_{QRS}} < x < \frac{1}{T_{QRS}}$   (13)

$a_0 = \frac{1}{l} \int_T \cos\left(\frac{\pi T_{QRS}\,x}{2l}\right) dx = \frac{a}{2\,T_{QRS}} \left(2 - T_{QRS}\right)$   (14)

$a_n = \frac{1}{l} \int_T \cos\left(\frac{\pi T_{QRS}\,x}{2l}\right) \cos\left(\frac{n\pi x}{l}\right) dx = \frac{2\,T_{QRS}\,a}{n^2 \pi^2} \left(1 - \cos\frac{n\pi}{T_{QRS}}\right) \cos\left(\frac{n\pi x}{l}\right)$   (15)

$b_n = \frac{1}{l} \int_T \cos\left(\frac{\pi T_{QRS}\,x}{2l}\right) \sin\left(\frac{n\pi x}{l}\right) dx = 0$   (16)

Equation (16) has the value 0 because the signal is even.

Fig. 2 Functional diagram
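A minimal MATLAB sketch of how Eqs. (9)–(12) could be coded is shown below. The function name, its signature, and the choice of 100 harmonics are assumptions, since the paper describes the derivation but not the implementation.

% Sketch of the QRS triangular-wave synthesis of Eqs. (9)-(12);
% the name, signature, and harmonic count are assumed.
function qrs = synthesize_qrs(x, a, Tqrs, l)
    n  = 1:100;                                   % harmonics kept (assumed)
    a0 = (a / Tqrs) * (2 - Tqrs);                 % Eq. (9)
    an = (2 * Tqrs * a ./ (n.^2 * pi^2)) ...
         .* (1 - cos(n * pi / Tqrs));             % Eq. (10)
    % b_n = 0 by Eq. (11), so only cosine harmonics remain, as in Eq. (12)
    qrs = a0 / 2 + an * cos(pi * (n.') * x / l);  % x given as a row vector
end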

4 MATLAB Implementation for ECG Signal Generation

Functions have been designed which take user input for the amplitude and duration of the various sections of the P, Q, R, S, and T waves; if a value is not specified, the standard value is taken. The functional diagram in Fig. 2 shows the various functions used for the ECG generation and their responsibilities. The usage of each function is summarized in Table 1.

5 Different Kinds of Arrhythmia Analysis

The simulator developed in MATLAB is able to generate ECG signals for various kinds of arrhythmia based on the UCI repository datasets. The different kinds of arrhythmia for which the ECG signal is generated are shown in Fig. 3.


Table 1 Functional description

GENERATE_Q_WAVEFORM: Generates the Q waveform based on the Fourier series for various values of n over a period of time
GENERATE_S_WAVEFORM: Generates the S waveform based on the Fourier series for various values of n over a period of time
GENERATE_T_WAVEFORM: Generates the T waveform based on the Fourier series for various values of n over a period of time
GENERATE_U_WAVEFORM: Generates the U waveform based on the Fourier series for various values of n over a period of time
GENERATE_P_WAVEFORM: Generates the P waveform based on the Fourier series for various values of n over a period of time
GENERATE_COMBINED_QRS_WAVEFORM: Generates the QRS waveform based on the Fourier series for various values of n over a period of time
ECG_SIGNAL_GEN_MAIN: Takes the input for the amplitude and duration of each wave, calls all of the above functions, executes a summation so that the complete ECG signal is generated, and plots the ECG signal
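To show how the summer could combine these components, the sketch below adds hypothetical outputs of the Table 1 functions over a common time axis. The argument lists are assumptions, since the paper describes only each function's responsibility.

% Hypothetical usage of the Table 1 functions (signatures assumed):
% each GENERATE_* call returns one waveform component over t, and the
% summation forms the complete ECG signal, as ECG_SIGNAL_GEN_MAIN does.
t   = linspace(0, 2, 2000);   % time axis for the generated beats
ecg = GENERATE_P_WAVEFORM(t) + GENERATE_COMBINED_QRS_WAVEFORM(t) ...
    + GENERATE_T_WAVEFORM(t) + GENERATE_U_WAVEFORM(t);
plot(t, ecg); xlabel('Time'); ylabel('ECG waveform');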

Figure 3 shows the various kinds of ECG signals that can be generated by the simulator. The simulator designed in this work can plot the ECG signals of various age groups and for any number of patients. The dataset consists of multiple rows, and each row has 279 attributes per patient; all 279 attributes, which act as input to the ECG simulator, are listed in Table 2. The ECG simulator follows a process which first reads the dataset containing the records of many patients across age groups. The simulator then finds the instances which belong to a specific class. From the dataset, the important attributes responsible for the ECG simulation are filtered, and the ECG signal is then generated using combinations of the Q, S, T, U, P, and QRS waveforms. The process followed by the simulator is defined in Fig. 4. The ECG simulator takes the dataset name, which refers to a file containing the patient data, and one more parameter, the class label. The class label values accepted by the simulator and the meaning of each class label are described in Table 3.

Fig. 3 Taxonomy of ECG signals generated by the simulator: normal, ischemic changes (coronary artery disease), old anterior myocardial infarction, old inferior myocardial infarction, sinus tachycardy, sinus bradycardy, ventricular premature contraction (PVC), supraventricular premature contraction, left bundle branch block, right bundle branch block, left ventricle hypertrophy, and atrial fibrillation or flutter

6 MATLAB Implementation for Different Arrhythmia

The implementation of generating the ECG signal with the different arrhythmias is divided into multiple functions, and the respective functional diagram is shown in Fig. 5. Each function is described in Table 4.

7 Simulation Result

This section describes the ECG signal simulator results for the various class labels and datasets of the UCI repository.


Table 2 Dataset attributes used in the ECG simulator

1. Age: age in years, linear
2. Sex: sex (0 = male; 1 = female), nominal
3. Height: height in centimeters, linear
4. Weight: weight in kilograms, linear
5. QRS duration: average QRS duration in ms, linear
6. P-R interval: average duration between the onset of the P and R waves in ms, linear
7. Q-T interval: average duration between the onset of the Q wave and the offset of the T wave in ms, linear
8. T interval: average duration of the T wave in ms, linear
9. P interval: average duration of the P wave in ms, linear
10. QRS: angle in degrees
11. T: angle in degrees
12. P: angle in degrees
13. QRST: angle in degrees
14. J: angle in degrees
15. Heart rate: number of heart beats per minute
16. Q wave: width in ms
17. R wave: width in ms
18. S wave: width in ms
19. R' wave: width in ms
20. S' wave: width in ms
21. Number of intrinsic deflections: linear
22. Existence of ragged R wave: nominal
23. Existence of diphasic derivation of R wave: nominal
24. Existence of ragged P wave: nominal
25. Existence of diphasic derivation of P wave: nominal
26. Existence of ragged T wave: nominal
27. Existence of diphasic derivation of T wave: nominal
28–39. Channel DII: all attributes from 16 to 27 taken from the DII channel
40–51. Channel DIII: all attributes from 16 to 27 taken from the DIII channel
240–249. Channel V3: all attributes from 160 to 169 taken from the V3 channel
250–259. Channel V4: all attributes from 160 to 169 taken from the V4 channel
260–269. Channel V5: all attributes from 160 to 169 taken from the V5 channel
270–279. Channel V6: all attributes from 160 to 169 taken from the V6 channel

Fig. 4 ECG simulator details. The flowchart consists of the following steps:

1. Start with the dataset name and the class label
2. Read the datasets for the given dataset name and save them in an N x A matrix
3. Filter the dataset matrix based on the class label
4. Create the ECG characteristics arrays for the P-R, Q-T, T, P, QRS, Q, R, S, R', and S' intervals and the Q-, R-, S-, R'-, S'-, P-, and T-wave amplitudes
5. Fill the missing attributes in the data instances with standard values
6. Generate the ECG signal based on the Fourier series
7. Plot the ECG signal, with the number of plots equal to the number of instances
8. Plot the heart rate, the Q, R, and S intervals, and the Q, R, S, P, and T amplitudes
9. Stop
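A sketch of this flow in MATLAB, using the function names that appear later in Table 4, might look as follows. The argument lists, the dataset filename, and the fields of the returned structure are assumptions, since the paper does not give the implementation.

% Sketch of the Fig. 4 flow; function signatures and field names assumed.
dataFile   = 'arrhythmia.data';   % UCI arrhythmia dataset file (assumed name)
classLabel = 2;                   % ischemic changes, per Table 3

M    = READ_DATA_FROM_TXT_FILE_AND_CONVERT_TO_MATRIX(dataFile);  % N x 279 matrix
sub  = FIND_DATASETS_FOR_CLASS_LABEL(M, classLabel);             % one class only
name = FIND_DYNAMIC_LABEL(classLabel);                           % text for plot titles
feat = FIND_ECG_CHARACTERSTICS_FOR_SUB_DATASETS(sub);            % intervals, amplitudes

for i = 1:size(sub, 1)            % one ECG plot per instance, as in Fig. 4
    sig = GENERATING_ECG_SIGNAL_FOR_DATA(feat, i);
    figure; plot(sig.t, sig.ecg);
    title(sprintf('ECG Signal %s for Patient Data = %d', name, i));
end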

Table 3 Class label description

1: Normal without any disease
2: Ischemic changes (coronary artery disease)
3: Old anterior myocardial infarction
4: Old inferior myocardial infarction
5: Sinus tachycardy
6: Sinus bradycardy
7: Ventricular premature contraction (PVC)
8: Supraventricular premature contraction
9: Left bundle branch block
10: Right bundle branch block
11: Left ventricle hypertrophy
12: Atrial fibrillation or flutter
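As a concrete example of how these labels can drive the plot titles, a lookup in the style of the FIND_DYNAMIC_LABEL function described later in Table 4 could be as simple as the following sketch; only the function's name and purpose come from the paper, and the body is an assumption.

% Minimal sketch mapping the Table 3 class labels to descriptions for
% plot titles; the body is assumed, only the name and purpose are from
% the paper (Table 4).
function name = FIND_DYNAMIC_LABEL(classLabel)
    names = { ...
        'Normal', ...
        'Ischemic changes (Coronary Artery Disease)', ...
        'Old Anterior Myocardial Infarction', ...
        'Old Inferior Myocardial Infarction', ...
        'Sinus tachycardy', ...
        'Sinus bradycardy', ...
        'Ventricular Premature Contraction (PVC)', ...
        'Supraventricular Premature Contraction', ...
        'Left bundle branch block', ...
        'Right bundle branch block', ...
        'Left ventricle hypertrophy', ...
        'Atrial Fibrillation or Flutter'};
    name = names{classLabel};   % class labels run from 1 to 12
end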

Fig. 5 Functions used for the different arrhythmias

A. Normal Class Label Result

Figure 6 shows the ECG signal of a patient of type normal, with patient ID 87; in this way, the simulator can generate signals for thousands of patients. Figure 7 shows the ECG signal of a patient of type normal, with patient ID 10; the QRS wave is repeatedly maintained for this patient. Figure 8 shows the QRS duration for the normal patients over the various patient instances; the duration varies from 68 to 114 ms. Figure 9 shows that the P-R duration for the normal patients varies from 0.1 to 280 ms. Figure 10 shows that the Q-T duration for the normal patients varies from 220 to 450 ms. Figure 11 shows the variation of the T-wave duration across patients; it varies from 110 to 230 ms.


Table 4 Function description

DISEASE-BASED_DISTRIBUTIONS_ECG_MAIN: The main function, in which the user provides the path to the datasets as well as the class label. It calls all the other subfunctions and produces ECG waveforms for the various instances of patient data, together with the variation in the QRS interval, P-R interval, Q-T interval, T interval, P interval, heart rate, Q interval, R interval, S interval, Q amplitude, R amplitude, S amplitude, P amplitude, and T amplitude
FIND_DATASETS_FOR_CLASS_LABEL: After the datasets are read, generates the subset of the datasets, as a matrix, which corresponds to a specific class label
FIND_DYNAMIC_LABEL: Takes a class label and obtains its description from Table 3 so that it can be printed on the plots
FIND_ECG_CHARACTERSTICS_FOR_SUB_DATASETS: Extracts the important characteristics of the ECG data, such as the Q interval, P interval, Q amplitude, and QRS interval, from the datasets corresponding to a class label
GENERATING_ECG_SIGNAL_FOR_DATA: Generates the ECG signal using the Fourier series, returns it in the form of a structure, and internally makes use of all the functions described in Table 1
READ_DATA_FROM_TXT_FILE_AND_CONVERT_TO_MATRIX: Reads the UCI repository datasets and converts them into a matrix

Fig. 6 ECG signal of normal patient 87 (ECG waveform versus time in ms)

Fig. 7 ECG signal of normal patient 10 (ECG waveform versus time in ms)

Fig. 8 QRS duration for normal patients (QRS duration versus number of instances)

Fig. 9 P-R duration for normal patients (PR duration versus number of instances)

Fig. 10 Q-T duration for normal patients (QT duration versus number of instances)

Fig. 11 T duration of normal patients (T duration versus number of instances)

Figure 12 shows the variation of the P-wave duration across patients; it varies from 0.1 to 182 ms.

Fig. 12 P duration of normal patients (P duration versus number of instances)

Figure 13 shows the heart rate variation of the normal patients; the heart rates vary from 57 to 102 beats per minute.

Fig. 13 Heart rate of normal patients (heart rate versus number of instances)

Figure 14 shows the Q-wave duration for the normal patients; it varies from 0.2 to 72 ms. Figure 15 shows the S-wave duration for the normal patients.

Fig. 14 Q-wave duration for normal patients (Q-wave duration versus number of instances)


Fig. 15 S-wave duration for normal patients (S-wave duration versus number of instances)

Fig. 16 Q-wave amplitude variation for normal patients (Q-wave amplitude versus number of instances)

The S-wave duration varies from 0.2 to 57 ms. Figure 16 shows the variation of the Q-wave amplitude across patients; as shown in the figure, patient 11 has an amplitude of 0.4, patient 20 has 0.7, and patient 54 has 2.7. Figure 17 shows the variation of the R-wave amplitude; patient 1 has an amplitude of 7, patient 21 has 11.5, and patient 74 has 12.9. Figure 18 shows the variation of the S-wave amplitude; patient 1 has an amplitude of 0.7, patient 20 has 3.3, and patient 78 has 3.3.

B. Ischemic Changes (Coronary Artery Disease)

Figure 19 shows the ECG signal generated for patient no. 2, which has very low QRS peaks and more pronounced inversions.

Fig. 17 R-wave amplitude variation for normal patients (R-wave amplitude versus number of instances)

Fig. 18 S-wave amplitude variation for normal patients (S-wave amplitude versus number of instances)

Fig. 19 ECG signal for ischemic changes, patient no. 2 (ECG waveform versus time in ms)

Figure 20 shows the QRS-wave durations for about 8 patients, with a maximum of 97 ms and a minimum of 82 ms. Figure 21 shows the P-R durations for about 8 patients, with a maximum of 180 ms and a minimum of 0 ms. Figure 22 shows the Q-T durations for about 8 patients, with a maximum of 420 ms and a minimum of 355 ms.

Fig. 20 QRS duration for ischemic changes (QRS duration versus number of instances)

Fig. 21 P-R duration for ischemic changes (PR duration versus number of instances)

Fig. 22 Q-T duration for ischemic changes (QT duration versus number of instances)

Figure 23 shows the T-wave durations for about 8 patients, with a maximum of 380 ms and a minimum of 140 ms. Figure 24 shows the P-wave durations for about 8 patients, with a maximum of 100 ms and a minimum of 0 ms. Figure 25 shows the heart rate variation for ischemic changes, with a minimum of 54 and a maximum of 85 beats per minute. Figure 26 shows the Q-wave amplitude for 8 patients from the dataset; 7 of the patients have an amplitude of 0, and patient 4 has a value of 0.6. Figure 27 shows the R-wave amplitude for ischemic changes; as shown in the figure, the minimum amplitude is 3 and the maximum is 13. Figure 28 shows the S-wave amplitude for ischemic changes; the minimum amplitude is 0 and the maximum is 4.2. Figure 29 shows the P-wave amplitude, with a minimum of 0.2 and a maximum of 0.9.

Fig. 23 T duration for ischemic changes (T duration versus number of instances)

Fig. 24 P duration for ischemic changes (P duration versus number of instances)

Fig. 25 Heart rate for ischemic changes (heart rate versus number of instances)

Fig. 26 Q-wave amplitude for ischemic changes (Q-wave amplitude versus number of instances)

Fig. 27 R-wave amplitude for ischemic changes (R-wave amplitude versus number of instances)

Fig. 28 S-wave amplitude for ischemic changes (S-wave amplitude versus number of instances)

Fig. 29 P-wave amplitude for ischemic changes (P-wave amplitude versus number of instances)

Note: In a similar fashion, the designed ECG simulator is able to generate the signals and the respective variations for the other diseases as well: old anterior myocardial infarction, old inferior myocardial infarction, sinus tachycardy, sinus bradycardy, ventricular premature contraction (PVC), supraventricular premature contraction, left bundle branch block, and right bundle branch block.

8 Conclusion and Future Work

The ECG signal is very important for the analysis of a patient's heart; by monitoring it, one can avoid very dangerous conditions. Moreover, ECG hardware is very expensive, which motivates exploring data mining and machine learning algorithms that can accurately predict the future trends of the human heart. In this work, the ECG signal characteristics were first discussed, followed by the Fourier series equations. Using datasets taken from the UCI repository, the ECG signal generator was designed, and the implemented MATLAB functions were described along with their usage. The ECG simulator was also extended to read the datasets and then generate the ECG signals for the various arrhythmias. Finally, results were presented for a normal patient and a patient with ischemic changes; the ECG signal generator is capable of generating signals for the other arrhythmias as well. In future work, we plan to extend the ECG simulator to a larger number of heart diseases.


References

1. T.G. Keshavamurthy, M.N. Eshwarappa, Review paper on denoising of ECG signal, in 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT), 22–24 Feb 2017
2. S. Charp, Fourier coefficient harmonic analyzer. Electr. Eng. 68(12), 1057 (1949)
3. S. Abrar, U.S. Aziz, F. Choudhry, A. Mansoor, Design and implementation of an embedded system for transmitting human ECG and web server for emergency services and remote health monitoring: a low cost ECG signal simulator and its transmitter, to send and store data in electronic databases, in remote location, to be accessed by authorized personnel when needed, in 2012 International Conference on Open Source Systems and Technologies, 20–22 Dec 2012
4. I. Sadighi, M. Kejariwal, A generalized ECG simulator: an educational tool, in Images of the Twenty-First Century: Proceedings of the Annual International Engineering in Medicine and Biology Society, 9–12 Nov 1989
5. S. Karayalçin, M. Yüksekkaya, S. Yazgi, ECG simulator, in 2010 15th National Biomedical Engineering Meeting, 21–24 Apr 2010
6. L. Nuo, H. Song, T. Hong, J. Yuehong, L. Fan, Calibration device for multi-parameter simulator, in 2013 IEEE 11th International Conference on Electronic Measurement & Instruments, 16–19 Aug 2013
7. R. Martinek, M. Kelnar, P. Vojcinak, P. Koudelka, J. Vanus, P. Bilik, P. Janku, H. Nazeran, J. Zidek, Virtual simulator for the generation of patho-physiological foetal ECGs during the prenatal period. Electron. Lett. 51(22), 1738–1740 (2015)
8. J. Dong, S. Zhang, Y. Wan, A hybrid framework for ECG interpretation by computer and its evaluation platform, in 2008 International Conference on BioMedical Engineering and Informatics, vol. 2, pp. 324–327 (2008)
9. P. Desyoo, S. Praesomboon, W. Sangpetch, W. Suracherdkiati, Discrete mathematical model for ECG waveform using kernel function, in 2009 ICCAS-SICE, pp. 5296–5300 (2009)
10. B.E. Demir, F. Yorulmaz, I. Güler, Microcontroller controlled ECG simulator, in 2010 15th National Biomedical Engineering Meeting, pp. 1–4 (2010)
11. D. Sarkar, A. Chowdhury, Low cost and efficient ECG measurement system using PIC18F4550 microcontroller, in 2015 International Conference on Electronic Design, Computer Networks & Automated Verification (EDCAV), pp. 6–11 (2015)
12. Á. Sobrinho, P. Cunha, L.D. da Silva, A. Perkusich, T. Cordeiro, J. Rêgo, A simulation approach to certify electrocardiography devices, in 2015 17th International Conference on E-health Networking, Application & Services (HealthCom), pp. 86–90 (2015)
13. Bhavtosh, D. Berwal, Y. Kumar, High performance QRS complex detector for wearable ECG systems using multi scaled product with Booth multiplier and soft threshold algorithm, in 2015 International Conference on Signal Processing and Communication (ICSC), pp. 204–209 (2015)
14. H. Wang, Z. Su, H. Fang, Simulating normal and abnormal ECG signals in children age 0–16, in 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), pp. 282–283 (2017)
15. A. Lekuthai, P. Somboon, A. Teeramongkonrasmee, Development of a cost-effective ECG monitor for cardiac arrhythmia detection using heart rate variability, in 2016 9th Biomedical Engineering International Conference (BMEiCON), pp. 1–5 (2016)


16. M. Varman, M. Varman, Computer based biomedical equipment design: an EKG recorder, monitor and simulator, in Proceedings, 11th IEEE Symposium on Computer-Based Medical Systems (Cat. No. 98CB36237), pp. 222–227 (1998)

Author Index

A: Agarwal, Manisha; Al-Berry, M. N.; Alimul Haque, Md.; Aman; Amjad, Mohammad; Anuradha, S.; Athisayamani, Suganya; Azam, Mohd Khalid; Azar, Ahmad Taher

B: Bafna, Prafulla B.; Bansal, H.; Batra, Bhoomika; Bavarva, Arjav; Bhagile, Vaishali D.; Bhalia, Mayur; Bhatnagar, Charul; Bhatnagar, Vaibhav; Bidushi, Fatema Farhin; Bonthu, Sridevi; Booba, B.

C: Chauhan, Harsha; Chaurasia, Amit; Chawla, Anisha; Chormunge, Smita; Coetzer, J.; Coronado-Hernández, Jairo Rafael

D: Dass, Pranav; Dayal, Abhinav; Desai, Purva; Diwakar, Manoj; Dodonova, Evgeniya; Dokov, Evgeniy; Dubey, Ghanshyam Prasad

E: Ebied, Hala M.; Efanov, Ivan; El-Shahed, Reham A.

F: Fathail, Ibraheam

G: González, Gustavo Gatica; Gorripotu, Tulasichandra Sekhar; Goyal, Ankur; Gupta, Ayush; Gupta, Krishna; Gupta, Neelesh; Gupta, Sarishty

H: Haque, Shameemul; Harit, Sandeep; Hasan, Mahady; Hasan, Mohammad; Himthani, Puneet; Humaidi, Amjad J.; Hussain, Naziya

I: Ibraheem, Ibraheem Kasim; Islam, Md. Motaharul; Islam, Noushin; Ivaschenko, Anton

J: Jadhav, Mukti E.; Jain, Shubham Kumar; Javed, Mohd Yousuf; Joshi, Kamaldeep

K: Kakar, Khalil Ahmad; Kamal, Nashwa Ahmad; Karanwal, Shekhar; Karthikeyan, S.; Kathirvalavakumar, T.; Kaur, Jasleen; Kaushik, Ruchi; Kavitha, K. S.; Khatri, Pallavi; Khedlekar, U. K.; Kotze, Ben; Kumar, Arvind; Kumar, Ayush; Kumar, C. Santhosh; Kumari, Rajani; Kumar, Kailash; Kumar, Sandeep; Kundu, Rakhee; Kuriakose, Rangith Baby

L: Lakshmi, M. Sri; Lavanya Suja, T.; Lemphane, Ntebaleng Junia

M: Manju; Maurya, Pratibha; Mehta, Kamakshi; Mehta, Rachana; Mittal, Praveen; Mookim, Sanjana; Mushtaq, Arif

N: Naik, Sapan; Nanda, Aparajita; Nandal, Rainu; Naveed Hossain, A. B. M.; Nel, G.; Nguyen, Nhu-Toan; Niebles, Andrea Carolina Primo; Nigwal, A. R.; Nithya Darisini, P. S.

O: Obredor-Baldovino, Thalía

P: Panwar, Deepak; Pham, Hanh; Pham, Van-Truong; Phan, Tung; Pilla, Ramana; Poonia, Ramesh Chandra; Poornima Devi, M.; Prasad, C. S. R. K.; Prasad, Jagdish; Priya, Bhenu; Purnama, Aditya; Pushparajan, M.

R: Rahman, Md. Abdur; Rahman, Moidur; Rajak, Akash; Raja, Linesh; Rajeshwari, M. R.; Rajiv, Somkuwar Shreya; Rajput, Monika; Ramachandran, K. I.; Rama Sree, S.; Ray, Noibedya Narayan; Riaz, Sadia; Rima, Most. Ayesha Khatun; Rizzo-Lian, Jaime; Robert Singh, A.; Roka, Sanjay; Roy, Prarthana

S: Saha, Pratik; Saini, Jatinderkumar R.; Salas-Navarro, Katherinne; Sangwan, Tanmaya; Sankara Narayanan, S.; Santana-Galván, Miguel; Sarker, Mahmudul Hasan; Senan, Ebrahim Mohammed; Serrano, Fernando E.; Sharma, Sanjay Kumar; Sharma, Swati; Sharma, Urvashi; Sharma, Vivek Kumar; Shedeed, Howida A.; Shetty, S. D.; Shrivastava, Ajay Kumar; Shwetha, H. M.; Singh, Anshuman; Singh, Kiran; Singh, Rishabh; Singh, Vijander; Sitnikov, Pavel; Sornam, M.; Sreekumar, K. T.; Swarnaker, Chaity; Swetank

T: Tan, Shiwei; Tang, Jianlan; Tran, Thi-Thao; Trinh, Minh-Nhat; Tripathy, Sushreeta

V: Verma, Arvind Kumar; Vermaak, H. J.; Verma, Rachna; Vidal, Germán Herrera; Vidushi; Vijayalakshmi, L.; Vinod, D.

W: Wijaya, Tommy Tanu

Y: Yadav, Ashwani; Yadav, Ashwani Kumar; Yadav, Sudeept Singh; Yadav, Vaishali

Z: Zeba, Sana