Emergent Converging Technologies and Biomedical Systems: Select Proceedings of the 3rd International Conference, ETBS 2023 (Lecture Notes in Electrical Engineering, 1116) 9819986451, 9789819986453

The book contains proceedings of the International Conference on Emergent Converging Technologies and Biomedical Systems


English · Pages: 742 [715] · Year: 2024


Table of contents:
Preface
About This Book
Contents
About the Editors
On Parameterized Picture Fuzzy Discriminant Information Measure in Medical Diagnosis Problem
1 Introduction
2 Preliminaries
3 Doubly Parameterized Picture Fuzzy Discriminant Measure
4 Solving Medical Diagnosis Problem with Parameterized Measure
4.1 Medical Diagnosis
5 Conclusions and Scope for Future Work
References
Fuzzy Vendor–Buyer Trade Credit Inventory Model-Pentagonal Numbers in Permissible Limits Delay in Account Settlement with Supervised Learning
1 Introduction
2 Extension of the Lagrangian Method for Fuzzy Vendor–Buyer Trade Credit Inventory Model
3 Methodology for Fuzzy Vendor–Buyer Trade Credit Inventory Model
3.1 Grade Mean Integration Representation Technique (GMIRT) for Fuzzy Vendor–Buyer Trade Credit Inventory Model [11, 14]
4 Highly Integrated Optimization Model with Processing Orders, Cost Savings, and Allowable Payment Delays
4.1 Notations
5 Mathematical Model for Fuzzy Vendor–Buyer Trade Credit Inventory Model
5.1 Inventory Model of Crisp Production Quantity (CPQ) for Fuzzy Vendor–Buyer Trade Credit Inventory Model
5.2 Inventory Model for Fuzzy Production Quantity for Fuzzy Vendor–Buyer Trade Credit Inventory Model
6 Numerical Analysis for Fuzzy Vendor–Buyer Trade Credit Inventory Model
7 Supervised Learning (Sl) for Fuzzy Vendor–Buyer Trade Credit Inventory Model
8 Conclusion
9 Results and Discussions
References
Comparative Analysis of Hardware and Software Utilization in the Implementation of 4-Bit Counter Using Different FPGAs Families
1 Introduction
2 Related Work
3 Implementation on 4-bit Counter Using Different FPGAs Families and Results
4 Comparative Analysis of 4-bit Counter Using Different FPGAs Boards
5 Conclusion
References
Soil Monitoring Robot for Precision Farming
1 Introduction
2 Literature Survey
3 Materials and Methods
4 Results and Discussions
5 Conclusion
References
Accountability of Immersive Technologies in Dwindling the Reverberations of Fibromyalgia
1 Introduction
2 Virtual Reality
3 Augmented Reality
4 Mixed Reality
5 Reality-Virtuality Continuum
6 Fibromyalgia
7 Literature Survey
8 Conclusion
References
A 233-Bit Elliptic Curve Processor for IoT Applications
1 Introduction
2 Related Work
3 Design
4 Results and Discussion
5 Conclusion and Future Work
References
Numerical Simulation and Modeling of Improved PI Controller Based DVR for Voltage Sag Compensation
1 Introduction
2 Related Works
3 Dynamic Voltage Restorer (DVR)
4 Proposed Methodology
5 Simulation and Results
6 Conclusions
References
Alternate Least Square and Root Polynomial Based Colour-Correction Method for High Dimensional Environment
1 Introduction
1.1 Motivation
2 Literature Review
3 Proposed Work
3.1 Methodology
4 Results Obtained
4.1 Performance Evaluation
5 Conclusion
References
An Automatic Parkinson’s Disease Classification System Using Least Square Support Vector Machine
1 Introduction
2 Material
3 Methodology
3.1 Data Pre-Processing
3.2 Feature Selection
3.3 Handling Imbalanced Dataset
3.4 Least Square Support Vector Machine (LSSVM)
4 Results and Discussion
5 Conclusion
References
Generation Cost Minimization in Microgrids Using Optimization Algorithms
1 Introduction
2 Problem Statement
3 Proposed Methodology
3.1 Multi-verse Optimizer Algorithm
3.2 Improved Multi-verse Optimizer Algorithm
4 Simulation Discussion
4.1 Dataset Description
4.2 Experimental Results
5 Conclusion
References
Diagnosis of Mental Health from Social Networking Posts: An Improved ML-Based Approach
1 Introduction
2 Related Work
3 Methodology
3.1 Data Collection and Pre-processing
3.2 Feature Engineering
3.3 ML Algorithms
4 Evaluation and Results
4.1 Dataset
4.2 Data Analysis
4.3 Results
5 Conclusion
References
Smart Health Monitoring System for Elderly People
1 Introduction
2 Literature Survey
2.1 Paper Contributions
2.2 Paper Organization
3 Proposed Health Monitoring System
4 Results and Discussions
5 Conclusion and Future Scope
References
Impact of Covid-19 and Subsequent Usage of IoT
1 Introduction
2 Impact of COVID-19
2.1 Education System
2.2 Healthcare System
2.3 Travel and Tourism
2.4 Industrial and Economy
3 Usage of IoT
3.1 Education System
3.2 Healthcare System
3.3 Travel and Tourism System
3.4 Industry and Economy System
4 Conclusions and Future Scope
References
Design of Battery Monitoring System for Converted Electric Cycles
1 Introduction
2 Literature Survey
3 Proposed System
3.1 OLED Display
3.2 Voltage Sensor
3.3 Hub Motor
3.4 DC Battery
4 Results and Discussion
5 Conclusions
References
Image Denoising Framework Employing Auto Encoders for Image Reconstruction
1 Introduction
2 Methodology
3 Results and Discussion
4 Conclusion and Future Work
References
Server Access Pattern Analysis Based on Weblogs Classification Methods
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Log Collection
4 Data Pre-processing
4.1 Log Templating
4.2 Learning and Pattern Evaluation
4.3 Linear Support Vector Machine (LSVM)
4.4 Random Forest
4.5 K-nearest Neighbor
5 Result and Discussion
5.1 Dataset Description
6 Conclusion
References
Multilingual Emotion Recognition from Continuous Speech Using Transfer Learning
1 Introduction
2 Related Work
3 Proposed System
3.1 System Setup
3.2 Model Architecture
4 Dataset Description
4.1 RAVDESS Dataset
4.2 PU Dataset
4.3 PU Dataset with Augmented Noise
5 Results Analysis
5.1 Model Training & Hyperparameter Tuning
5.2 Effect of Hidden Dropout Rate, and Number of Attention Heads
5.3 Overview of the Model Training
6 Conclusion
References
Violence Detection Using DenseNet and LSTM
1 Introduction
2 Related Work
3 Proposed Method
3.1 Background Suppression
3.2 Frame Difference Algorithm
3.3 DenseNet
3.4 LSTM
3.5 DenseLSTM
4 Result and Discussion
5 Conclusion
References
Financial Technology and Competitive Landscape in the Banking Industry of Bangladesh: An Exploratory Focus
1 Introduction
2 Literature Review
3 Research Objectives
4 Data and Methodology
5 Results and Discussion
6 Recommendation
7 Concluding Remarks
References
Review on Deep Learning-Based Classification Techniques for Cocoa Quality Testing
1 Introduction
2 Literature Review
3 Deep Learning
4 Analysis
4.1 Types of Cocoa Beans
5 Conclusion
References
A Curated Study on Machine Learning Based Algorithms and Sensors for Drone Technology in Various Application
1 Introduction
2 Related Work
3 System Overview
4 Results/Comparison
5 Conclusion
References
Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach
1 Introduction
2 Literature Review
3 Methodology
3.1 Architecture
3.2 Data Collection
3.3 Pre-processing
3.4 Model Training
3.5 Evaluation Metrics
3.6 Prediction
4 Conclusion
References
DeepPose: A 2D Image Based Automated Framework for Human Pose Detection and a Trainer App Using Deep Learning
1 Introduction
1.1 Motivation and Objectives
2 Literature Survey
3 Proposed Framework
3.1 Dataset Description
4 Experimentation and Results
5 Conclusions and Future Directions
References
Phylogenetic Study of Surface Glycoprotein (S1 Spike Protein) Sequence of SARS-CoV-2 Virus
1 Introduction
2 Literature Review
3 Methodology
3.1 Selection of SARS-CoV-2 Virus S1 Spike Protein Sequences
3.2 Building of Phylogenetic Tree
3.3 Visualization of the Phylogenetic Tree
4 Results
4.1 Selection of SARS-CoV-2 Virus S1 Spike Protein Sequences
4.2 Building of Phylogenetic Tree
4.3 Visualization of Phylogenetic Tree
5 Discussion
6 Conclusion
References
Pervasive and Wearable Computing and Networks
1 Introduction
2 History of Pervasive and Wearable Computing
3 Literature Review
4 Challenges and Review of Pervasive and Wearable Computing
5 Methodology
6 Implementation
7 Result
8 Conclusion
References
Power of Image-Based Digit Recognition with Machine Learning
1 Introduction
2 Related Work
3 Methodology
4 Dataset Description
5 Results and Discussion
6 Conclusion
References
Open-Source Gesture-Powered Augmented Reality-Based Remote Assistance Tool for Industrial Application: Challenges and Improvisation
1 Introduction
2 Related Work
3 Problem Statement
4 Research Proposal and Description
4.1 Proposed Architecture
4.2 Application Usage Description
5 Methodology and Hypotheses Development
6 Case Study/Evaluated Result
7 Conclusion
References
Enhancing Biometric Performance Through Mitigation of Sleep-Related Breaches
1 Introduction
2 Literature Review
3 Problem Statement
4 Proposed Work
5 Conclusion
References
Neural Network Based CAD System for the Classification of Textures in Liver Ultrasound Images
1 Introduction
2 Material and Methods
2.1 The Database Description
2.2 Datasets for Training and Testing
3 Proposed Computer Aided Diagnosis (CAD) System
3.1 Texture Feature Extraction Module
3.2 Classification Module
3.3 Ensemble of NNs
4 Results and Discussions
5 Conclusion
References
A Comparative Survey on Histogram Equalization Techniques for Image Contrast Enhancement
1 Introduction
2 Contrast Enhancement Using Histogram Equalization
3 Experimental Results
4 Performance Assessment
5 Conclusion
References
Crime Rate Prediction in Tamil Nadu Using Machine Learning
1 Introduction
2 Related Work
3 Methodology
3.1 Sample Dataset
3.2 Architecture
4 Implementation
4.1 K-Means Clustering
4.2 Random Forest Regression
4.3 Flask
4.4 Tools
4.5 Regression Models for Comparison Linear Regression
5 Results and Discussion
5.1 Graphs
5.2 Analysis
5.3 EDA Using PowerBI
5.4 Accuracy Comparison
6 Conclusion and Future Scope
References
Depression Severity Detection from Social Media Posts
1 Introduction
2 Related Work
3 Dataset and Metrics
3.1 Evaluation Metrics
4 BDI-II
5 Proposed Methodology
5.1 Unsupervised Approach
5.2 Supervised Approach
6 Results
6.1 Unsupervised Approach Results
6.2 Supervised Approach Results
6.3 Experimental Results
7 Conclusion and Future Scope
References
Computational Studies of Phytochemicals from Allium Sativum with H7N9 Subtype in Avian Influenza
1 Introduction
2 Methodology
2.1 Protein Preparation
2.2 Ligand Selection and Preparation
2.3 Binding Site Preparation and Receptor Grid Generation
2.4 Molecular Docking Analysis
2.5 Binding Free Energy Calculation
2.6 ADME Analysis
3 Results and Discussion
3.1 Molecular Docking
3.2 Protein–Ligand Interaction
3.3 Free Binding Energy Calculations
3.4 ADME Analysis
4 Conclusion
References
Ensuring Security of Data Through Transformation Based Encryption Algorithm in Image Steganography
1 Introduction
1.1 Image Steganography Techniques
1.2 Contribution of the Work
1.3 Organization of the Paper
2 Literature Review
3 Proposed Algorithm
4 Experimental Results
5 Conclusion and Future Scope
References
PICO Classification Using Domain-Specific Features
1 Introduction
2 Related Work
3 Proposed Work
4 Experiment and Result
4.1 Dataset Description
4.2 Experiments and Results
5 Conclusion
References
Optimized Detection of Ovarian Cancer Using Segmentation with FR-CNN Classification
1 Introduction
2 Methodology
2.1 Pre-processing
2.2 Object Detection Using FR-CNN
3 Related Work
3.1 Region Proposal Network (RPN)
3.2 Training
3.3 Testing
3.4 RoI Pooling and Classifier Layer
3.5 Back Propagation by ROI Pooling Layer
4 Experimental Results
4.1 Experimental Setup
4.2 Database Description
4.3 Performance Metrics
5 Conclusion
References
Implementation of Machine Learning Algorithms for Cardiovascular Disease Prediction
1 Introduction
2 Methodology
2.1 Data Preparation
2.2 Data Preprocessing
2.3 Data Visualization
2.4 Applying Machine Learning Algorithm
3 Result
3.1 Data Preparation and Pre-processing
3.2 Data Visualization
3.3 ML Algorithm
4 Conclusion
References
Security Challenges and Applications for Digital Transactions Using Blockchain Technology
1 Introduction
1.1 Ethereum and EVM Based Networks
1.2 Automated Market Makers and Decentralized Exchanges
2 Literature Review
2.1 Asset Price Estimation in the Uniswap Model
2.2 Trading of the Assets in the Uniswap Model
3 Problems Associated with the Uniswap Model
3.1 Security Risks Associated with the Uniswap Model
3.2 Lack of Support for Fees in ERC20 Tokens
4 Proposed Model
4.1 Reduced Security Risks
4.2 Greater Support for Fee Structures in ERC20 Tokens
4.3 Reduced Gas Fee on Swap Transactions
5 Detailed Working of Proposed Model
5.1 Providing Liquidity
5.2 Swap/Trading of Assets
5.3 Fee Collection from Trading
5.4 Liquidity Removal
6 Results and Analysis
6.1 Price Impact in Every Trade
6.2 Tokens Received in Trades of Tokens with Various Kinds of Fees
6.3 USDC Received in Trades of Tokens with Various Kind of Fees
6.4 Fees Collected Without Affecting the Asset Price
7 Conclusion
References
Triple-Band Gap Coupled 4 × 4 MIMO Antenna in mm-Wave for High Data Rate and IoT Applications
1 Introduction
2 Antenna Configuration and S-parameter Results
3 Antenna Gain, Efficiency and Far Field Patterns
4 Diversity Parameters of MIMO Antenna
5 Analysis of Bending of MIMO Antenna
6 Conclusion
References
Comparative Performance Analysis of Present Lightweight Cipher for Security Applications in Extremely Constrained Environment
1 Introduction
2 Present Cipher for Lightweight Hardware Architecture
3 Implementation of Present Lightweight Cipher Design on Different FPGA Boards
4 Simulation Results and Discussion
4.1 Performance Metrics
4.2 Synthesis Criteria
4.3 Software and Hardware Details
5 Conclusion and Future Scope
References
Improved Hybrid Similarity for Clustering of Text Documents Using GA
1 Introduction
1.1 Cosine Similarity
1.2 Link-Based Similarity
1.3 Jaccard Similarity
1.4 Improved Rank Similarity
1.5 Hybrid Similarity
1.6 Genetic Algorithm
2 Related Works
3 Proposed Framework
4 Result and Analysis
4.1 Improved Hybrid Similarity
4.2 Genetic Algorithm on Improved Hybrid Similarity
5 Conclusion and Future Scope
References
IoT for Emerging Engineering Application Related to Commercial System
1 Introduction
1.1 Commercial System
1.2 IoT
1.3 Emerging Engineering Technology
1.4 Role of IoT in Commercial System
1.5 LSTM
2 Literature Review
3 Problem Statement
4 Proposed Work
5 Result and Discussion
5.1 Detect Sentiment from Customer Reviews Using Amazon
6 Conclusion and Future Scope
References
Development and Analysis of Malaria Vector by Mathematical Modeling
1 Introduction
2 Paludism (Malaria) Defect Model Production
2.1 The Given Model of the Parameters
2.2 Model Diagram
2.3 Disease Free Equilibrium (DFEQ)
3 The System Stability
3.1 Local Stability of DFEQ
3.2 Global Stability of Endemic Equilibrium
4 Numerical Reproductions and Simulations
5 Conclusion
References
Realization of Fractional Order Low Pass Filter Using Differential Voltage Current Conveyor (DVCC)
1 Introduction
2 Proposed Fractional Order Low Pass Filter
3 Stability Analysis of Proposed Fractional Order Low Pass Filter
4 Simulation Results
5 Conclusion
References
Identification and Classification of Intestinal Parasitic Eggs in Animals Through Microscopic Image Analysis
1 Introduction
2 Image Dataset
3 Methodology
3.1 The Proposed Methodology
3.2 Texture Feature Extraction Techniques
3.3 Normalization of Features
3.4 Classification
4 Results
5 Discussion
6 Conclusion
References
Storage and Organisation of Geospatial Data in Distributed Blockchain Using IPFS
1 Introduction
1.1 Motivation
2 Background
2.1 Blockchain
2.2 IPFS
3 Materials
3.1 Geospatial Data
3.2 Datasets Used
4 Software Used
5 Method Implementation
5.1 Libraries and Functions Used
5.2 Working Principle of the Proposed Method
6 Results
7 Conclusion
References
Wireless Communication: Exploring Fuzzy Logic Techniques and Applications
1 Introduction
2 Fuzzy Logic Control Classification
2.1 The Benefits and Capabilities of Fuzzy Logic Techniques
3 A Wireless Sensor Network’s Features (WSN)
4 Wireless Sensor Network Difficulties
5 Fuzzy Logic Applications in Wireless Communication Network
5.1 Estimation of Channels
5.2 Equalization of Channels
5.3 Case Study: Fuzzy Logic in Wireless Communications
6 Mathematical Analysis
7 Conclusion
References
Secure Authentication Scheme for IoT Enabled Smart Homes
1 Introduction
2 Related Work
3 Proposed System Model
3.1 Attacker Model and Security Goals
4 Proposed System Scheme
4.1 Security Goals
4.2 Initialization and Setup Phase
4.3 Registration Phase
4.4 Login and Authentication Phases
4.5 Password Update Phase
5 Informal Analysis
6 Simulation for Formal Security Verification Using AVISPA
7 Performance Evaluation
7.1 Comparison of Communication and Computation Cost
8 Conclusion
References
Signal Processing for Language Sanitization: Detection and Censorship of Obscene Words in Speech Recordings
1 Introduction
2 Methodology
2.1 Speech Signal Denoising
2.2 Speech-To-Text Conversion of the Speech Signal
2.3 Detecting the Existence of an Obscene Word in a Speech Signal
2.4 Finding the Start and End Time of the Obscene Word
2.5 Masking Obscene Words in Audio with a Beep
3 Results
4 Conclusion
References
Machine Learning and Deep Networks for Additive Wafer Defect Detection: A Concise Study
1 Introduction
2 Literature Collection Approach
2.1 Literature Search Approach
2.2 Literature Selection Constraint
3 Evaluation of Defect Detection Methods
3.1 Causes of Wafer Substrate Defects
3.2 ML-Based Defect Detection Methods
3.3 DN-Based Defect Detection Methods
3.4 Hybrid-Based Defect Detection Method
3.5 Other Methods
4 Discussion
5 Conclusion
References
A Deep Neural Network for Image Classification Using Mixed Analog and Digital Infrastructure
1 Introduction
2 Deep Neural Network
3 DNN VLSI for Image Classification
4 Conclusion
References
Performance Evaluation of Ensemble Classifiers for Anomaly Detection in IoT Environment
1 Introduction
2 Literature Review
3 Proposed Methodology
4 Overview of Different Classifiers
5 Performance Evaluation Metrics
6 Results
7 Conclusion and Future Work
References
Design of Energy-Efficient Approximate Arithmetic Circuits for Error Tolerant Medical Image Processing Applications
1 Introduction
2 Design Methodology
2.1 Existing Approximate Adders
2.2 Existing Approximate Subtractors (APSCs)
2.3 Proposed Approximate Adders (PAA_s)
2.4 Proposed Approximate Subtractors (APSC8-APSC10)
3 Proposed Approximate Wallace Tree Multiplier (AWTM) Using PAA_s
4 Array Divider Using Proposed APSCs (APDrs)
5 Results and Discussions
5.1 Approximate Adders
5.2 Approximate Subtractors (APSCs)
5.3 AWTMs
5.4 Accuracy Evaluation
5.5 Image Processing
6 Conclusion
References
Skin Cancer Diagnosis Using High-Performance Deep Learning Architectures
1 Introduction
2 Literature Review
3 Proposed Skin Cancer Diagnosis
3.1 Traditional Thresholding Technique
3.2 Skin Cancer Prediction Using High Performance Deep Learning Architectures
4 Result and Discussion
5 Conclusion
References
Video Surveillance-Based Intrusion Detection System in Edge Cloud Environment
1 Introduction
2 Related Works
3 Proposed Model
4 Results and Discussion
5 Conclusion
References
Health Care DNS Tunnelling Detection Method via Spiking Neural Network
1 Introduction
2 Literature Survey
3 Proposed Method
3.1 Data Exfiltration Over DNS Tunneling
3.2 Data Pre-process
3.3 Payload Tunnelling Features (PTF)
3.4 Classification of Spiking Neural Networks
4 Result
5 Conclusion
References

Lecture Notes in Electrical Engineering 1116

Shruti Jain · Nikhil Marriwala · Pushpendra Singh · C. C. Tripathi · Dinesh Kumar   Editors

Emergent Converging Technologies and Biomedical Systems Select Proceedings of the 3rd International Conference, ETBS 2023

Lecture Notes in Electrical Engineering Volume 1116

Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Napoli, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, München, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, University of Karlsruhe (TH) IAIM, Karlsruhe, Baden-Württemberg, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Dipartimento di Ingegneria dell’Informazione, Sede Scientifica Università degli Studi di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Intelligent Systems Laboratory, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, Department of Mechatronics Engineering, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Intrinsic Innovation, Mountain View, CA, USA
Yong Li, College of Electrical and Information Engineering, Hunan University, Changsha, Hunan, China
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Subhas Mukhopadhyay, School of Engineering, Macquarie University, Sydney, NSW, Australia
Cun-Zheng Ning, Department of Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Department of Intelligence Science and Technology, Kyoto University, Kyoto, Japan
Luca Oneto, Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genova, Genova, Italy
Bijaya Ketan Panigrahi, Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi Roma Tre, Roma, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, University of Stuttgart, Stuttgart, Germany
Germano Veiga, FEUP Campus, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Haidian District, Beijing, China
Walter Zamboni, Department of Computer Engineering, Electrical Engineering and Applied Mathematics, DIEM—Università degli studi di Salerno, Fisciano, Salerno, Italy
Junjie James Zhang, Charlotte, NC, USA
Kay Chen Tan, Department of Computing, Hong Kong Polytechnic University, Kowloon Tong, Hong Kong

The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering—quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and applications areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS

For general information about this book series, comments or suggestions, please contact [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country: China Jasmine Dou, Editor ([email protected]) India, Japan, Rest of Asia Swati Meherishi, Editorial Director ([email protected]) Southeast Asia, Australia, New Zealand Ramesh Nath Premnath, Editor ([email protected]) USA, Canada Michael Luby, Senior Editor ([email protected]) All other Countries Leontina Di Cecco, Senior Editor ([email protected]) ** This series is indexed by EI Compendex and Scopus databases. **

Shruti Jain · Nikhil Marriwala · Pushpendra Singh · C.C. Tripathi · Dinesh Kumar Editors

Emergent Converging Technologies and Biomedical Systems Select Proceedings of the 3rd International Conference, ETBS 2023

Editors Shruti Jain Department of Electronics and Communications Engineering Jaypee University of Information Technology Waknaghat, Himachal Pradesh, India Pushpendra Singh DST Technology Innovation Hub—AWaDH Indian Institute of Technology Ropar Ropar, Punjab, India

Nikhil Marriwala University Institute of Engineering and Technology Kurukshetra University Kurukshetra, Haryana, India C.C. Tripathi NITTTR Bhopal, Madhya Pradesh, India

Dinesh Kumar Electrical and Computer Systems Engineering RMIT University Melbourne, VIC, Australia

ISSN 1876-1100 ISSN 1876-1119 (electronic) Lecture Notes in Electrical Engineering ISBN 978-981-99-8645-3 ISBN 978-981-99-8646-0 (eBook) https://doi.org/10.1007/978-981-99-8646-0 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore Paper in this product is recyclable.

Preface

Jaypee University of Information Technology (JUIT), Waknaghat, has been known for excellence in academics, research, and distinguished faculty since its inception. The Jaypee Group has always supported initiatives that bring together academicians and industry professionals to produce quality, productive research. As a technical education university, JUIT is committed to being a leader in adopting technological advancements to train future engineers. The 3rd International Conference on Emergent Converging Technologies and Biomedical Systems (ETBS 2023) was organized by the Department of Electronics and Communication Engineering and the Department of Computer Science and Engineering and Information Technology, Jaypee University of Information Technology (JUIT), in collaboration with DST iHub-AWaDH and the Indian Institute of Technology Ropar, at JUIT from May 15–17, 2023. The conference was sponsored by the Council of Scientific and Industrial Research (CSIR) and the Biomedical Engineering Society of India (BMESI). The aim of ETBS is to serve researchers, developers, and educators working in the areas of signal processing, computing, control, and their applications, offering a venue to present current and future work and to exchange research ideas. ETBS 2023 invited authors to submit original and unpublished work demonstrating current research in all areas of emergent converging technologies, signal/image processing, computing, and their applications.
ETBS 2023 solicited full-length, original, and unpublished papers based on theoretical and experimental contributions, related, but not limited, to the following tracks, for presentation and publication in the conference: Engineering in Medicine and Biology; Signal Processing and Communication; Emerging Smart Computing Technologies; Internet of Things for Emerging Engineering Applications; and Next Generation Computational Technologies. This conference is one of the premier venues for fostering international scientific and technical exchange across research communities working in multiple domains. The ETBS 2023 technical program committee put together a program consisting of 13 technical sessions and 8 invited talks.

We are thankful to our Chief Guest, Prof. Rajeev Ahuja, Director, IIT Ropar, and our Guests of Honor, Mr. Bharat Kumar Sharma, Director and GPU Advocate, NVIDIA AI Tech Centre India, and Dr. Narayan Panigrahi, Scientist-‘G’, Group Head, GIS, Centre for Artificial Intelligence and Robotics, Bangalore, for gracing the inaugural session of ETBS 2023. We are also thankful to our Advisors, Prof. (Dr.) Dinesh Kumar, Electrical and Computer System Engineering, RMIT University, Melbourne, Australia, and Prof. C. C. Tripathi, Director, NITTTR, Bhopal. We are likewise thankful to the speakers, who spared time to share their knowledge, expertise, and experience in spite of their busy schedules: Prof. Sharath Sriram, Coordinator, Functional Materials and Microsystems Research Group, RMIT University, Melbourne, Australia; Dr. Vishal Sharma, School of Electronics, Electrical Engineering and Computer Science, Queen’s University, Belfast, UK; Mr. Bharat Kumar Sharma, Director and GPU Advocate, NVIDIA AI Tech Centre India; Dr. Narayan Panigrahi, Scientist-‘G’, Group Head, GIS, Centre for Artificial Intelligence and Robotics, Bangalore; Prof. Sanjeev Narayan Sharma, Dean Academics, IIITDM, Jabalpur, Madhya Pradesh, India; Prof. Ram Bilas Pachauri, Electrical Engineering Department, IIT Indore, Madhya Pradesh, India; Mr. Ashish P. Kuvelkar, Senior Director (HPC-Tech), C-DAC, Pune; and Ms. Kamiya Khatter, Editor, Applied Sciences and Engineering.

ETBS 2023 garnered an overwhelming response from researchers, academicians, and industry from all over the globe. We received papers from Australia, the United Kingdom, Jordan, Malaysia, Denmark, South Korea, Bangladesh, Saudi Arabia, and elsewhere, making the conference truly international. Within India, we received papers from Tamil Nadu, Telangana, Andhra Pradesh, Bangalore, Karnataka, Punjab, Uttar Pradesh, Dehradun, Chandigarh, Haryana, New Delhi, West Bengal, Rajasthan, Uttarakhand, Madhya Pradesh, and our neighboring states. The authors come from premium institutes, including IITs, NITs, Central Universities, PU, and many other reputed institutes.
We received over 310 research papers, out of which 57 papers were accepted, registered, and presented during the three-day conference, the acceptance ratio being 18.3%. We truly believe that the participation of researchers from different universities and institutes, working on applications of the thematic areas of the conference across domains, and their deliberation on research issues and outcomes, resulted in fruitful and productive recommendations. We sincerely hope you enjoy the conference proceedings and wish you all the best!

Organizing Committee ETBS 2023

About This Book

This book provides a platform and aid to researchers involved in designing systems that will permit the societal acceptance of ambient intelligence. Its overall goal is to present the latest snapshot of ongoing research and to shed further light on future directions in this space. The book aims to serve educators, researchers, and developers working on recent advances and upcoming technologies that utilize computational sciences in signal processing, imaging, computing, instrumentation, artificial intelligence, and their applications. As the book covers recent trends in research issues and applications, its contents will be beneficial to professors, researchers, and engineers. It also supports researchers designing the latest advancements in healthcare technologies. The conference "Emergent Converging Technologies and Biomedical Systems" encompasses all branches of Engineering in Medicine and Biology, Signal Processing and Communication, Emerging Smart Computing Technologies, Internet of Things for Emerging Engineering Applications, and Next Generation Computational Technologies. It presents the latest research being conducted on diverse topics in intelligent technologies, with the goal of advancing knowledge and applications in this rapidly evolving field. Authors were invited to submit papers presenting novel technical studies as well as position and vision papers comprising hypothetical/speculative scenarios.

Keywords
(a) Engineering in Medicine and Biology
(b) Signal Processing and Communication
(c) Emerging Smart Computing Technologies
(d) Internet of Things for Emerging Engineering Applications
(e) Next Generation Computational Technologies


Contents

On Parameterized Picture Fuzzy Discriminant Information Measure in Medical Diagnosis Problem
Monika, Aman Sharma, and Rakesh Kumar Bajaj

Fuzzy Vendor–Buyer Trade Credit Inventory Model-Pentagonal Numbers in Permissible Limits Delay in Account Settlement with Supervised Learning
K. Kalaiarasi, S. Swathi, and Sardar M. N. Islam (Naz)

Comparative Analysis of Hardware and Software Utilization in the Implementation of 4-Bit Counter Using Different FPGAs Families
Shishir Shrivastava and Amanpreet Kaur

Soil Monitoring Robot for Precision Farming
K. Umapathy, S. Omkumar, T. Dinesh Kumar, M. A. Archana, and M. Sivakumar

Accountability of Immersive Technologies in Dwindling the Reverberations of Fibromyalgia
Sheena Angra and Bhanu Sharma

A 233-Bit Elliptic Curve Processor for IoT Applications
Deepak Panwar, Sumit Singh Dhanda, Kuldeep Singh Kaswan, Pardeep Singh, and Savita Kumari

Numerical Simulation and Modeling of Improved PI Controller Based DVR for Voltage Sag Compensation
Vijeta Bhukar and Ravi Kumar Soni

Alternate Least Square and Root Polynomial Based Colour-Correction Method for High Dimensional Environment
Geetanjali Babbar and Rohit Bajaj


An Automatic Parkinson's Disease Classification System Using Least Square Support Vector Machine
Priyanshu Khandelwal, Kiran Khatter, and Devanjali Relan

Generation Cost Minimization in Microgrids Using Optimization Algorithms
Upasana Lakhina, I. Elamvazuthi, N. Badruddin, Ajay Jangra, Truong Hoang Bao Huy, and Josep M. Guerrero

Diagnosis of Mental Health from Social Networking Posts: An Improved ML-Based Approach
Rohit Kumar Sachan, Ashish Kumar, Darshita Shukla, Archana Sharma, and Sunil Kumar

Smart Health Monitoring System for Elderly People
Kalava Guru Mallikarjuna, Medagam Sailendra Reddy, Kolluru Lokesh, Kasani Mohan Sri Sai, Mamidi K. Naga Venkata Datta Sai, and Indu Bala

Impact of Covid-19 and Subsequent Usage of IoT
Sakshi Sharma, Veena Sharma, and Vineet Kumar

Design of Battery Monitoring System for Converted Electric Cycles
T. Dinesh Kumar, M. A. Archana, K. Umapathy, H. Rakesh, K. Aakkash, and B. R. Shreenidhi

Image Denoising Framework Employing Auto Encoders for Image Reconstruction
Shruti Jain, Monika Bharti, and Himanshu Jindal

Server Access Pattern Analysis Based on Weblogs Classification Methods
Shirish Mohan Dubey, Geeta Tiwari, and Priusha Narwaria

Multilingual Emotion Recognition from Continuous Speech Using Transfer Learning
Karanjaspreet Singh, Lakshitaa Sehgal, and Naveen Aggarwal

Violence Detection Using DenseNet and LSTM
Prashansa Ranjan, Ayushi Gupta, Nandini Jain, Tarushi Goyal, and Krishna Kant Singh

Financial Technology and Competitive Landscape in the Banking Industry of Bangladesh: An Exploratory Focus
Nargis Sultana, Kazi Saifur Rahman, Reshma Pervin Lima, and Shakil Ahmad

Review on Deep Learning-Based Classification Techniques for Cocoa Quality Testing
Richard Essah, Darpan Anand, and Abhishek Kumar


A Curated Study on Machine Learning Based Algorithms and Sensors for Drone Technology in Various Application
Digant Raj, Garima Thakur, and Arti

Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach
B. Ashreetha, A. Harshith, A. Sai Ram Charan, A. Janardhan Reddy, A. Abhiram, and B. Rajesh Reddy

DeepPose: A 2D Image Based Automated Framework for Human Pose Detection and a Trainer App Using Deep Learning
Amrita Kaur, Anshu Parashar, and Anupam Garg

Phylogenetic Study of Surface Glycoprotein (S1 Spike Protein) Sequence of SARS-CoV-2 Virus
R. S. Upendra, Sanjay Shrinivas Nagar, R. S. Preetham, Sanjana Mathias, Hiba Muskan, and R. Ananya

Pervasive and Wearable Computing and Networks
Jatin Verma and Tejinder Kaur

Power of Image-Based Digit Recognition with Machine Learning
Vipasha Abrol, Nitika, Hari Gobind Pathak, and Aditya Shukla

Open-Source Gesture-Powered Augmented Reality-Based Remote Assistance Tool for Industrial Application: Challenges and Improvisation
Chitra Sharma, Kanika Sharma, Manni Kumar, Pardeep Garg, and Nitin Goyal

Enhancing Biometric Performance Through Mitigation of Sleep-Related Breaches
Urmila Pilania, Manoj Kumar, Sanjay Singh, Yash Madaan, Granth Aggarwal, and Vaibhav Aggrawal

Neural Network Based CAD System for the Classification of Textures in Liver Ultrasound Images
Anjna Kumari, Nishant Jain, and Vinod Kumar

A Comparative Survey on Histogram Equalization Techniques for Image Contrast Enhancement
Anju Malik and Nafis Uddin Khan

Crime Rate Prediction in Tamil Nadu Using Machine Learning
Lokaiah Pullagura, Garima Sinha, Silviya Manandhar, Bandana Rawal, Selamawit Getachew, and Shubhankar Chaturvedi

Depression Severity Detection from Social Media Posts
Naveen Recharla, Prasanthi Bolimera, Yash Gupta, and Anand Kumar Madasamy


Computational Studies of Phytochemicals from Allium Sativum with H7N9 Subtype in Avian Influenza
Brishti Mandal, Avineet Singh, Cheena Dhingra, Hina Bansal, and Seneha Santoshi

Ensuring Security of Data Through Transformation Based Encryption Algorithm in Image Steganography
Sushil Kumar Narang, Vandana Mohindru Sood, Vaibhav, and Vania Gupta

PICO Classification Using Domain-Specific Features
Sanjeet Singh and Aditi Sharan

Optimized Detection of Ovarian Cancer Using Segmentation with FR-CNN Classification
Vivekanand Aelgani and Dhanalaxmi Vadlakonda

Implementation of Machine Learning Algorithms for Cardiovascular Disease Prediction
Anjali Sharma, Cheena Dhingra, Ankur Chaurasia, Seneha Santoshi, and Hina Bansal

Security Challenges and Applications for Digital Transactions Using Blockchain Technology
Prateek Dang and Himanshu Gupta

Triple-Band Gap Coupled 4 × 4 MIMO Antenna in mm-Wave for High Data Rate and IoT Applications
Rakesh N. Tiwari, Prabhakar Singh, and Partha Bir Barman

Comparative Performance Analysis of Present Lightweight Cipher for Security Applications in Extremely Constrained Environment
Shipra Upadhyay, Pulkit Singh, Amit Kumar Pandey, Arman Ali, Jyoti Kumari, Ashutosh Jujare, Shailendra Kumar, and Akshay

Improved Hybrid Similarity for Clustering of Text Documents Using GA
Deepak Ahlawat, Sharad Chauhan, and Amodh Kumar

IoT for Emerging Engineering Application Related to Commercial System
Vivek Veeraiah, Shahanawaj Ahamad, Vipin Jain, Rohit Anand, Nidhi Sindhwani, and Ankur Gupta

Development and Analysis of Malaria Vector by Mathematical Modeling
Naresh Kumar Jothi and A. Lakshmi


Realization of Fractional Order Low Pass Filter Using Differential Voltage Current Conveyor (DVCC)
Sukanya Deshamukhya, Shalabh Kumar Mishra, and Bhawna Aggarwal

Identification and Classification of Intestinal Parasitic Eggs in Animals Through Microscopic Image Analysis
Ketan Mishra, C. Kavitha, and Devi Kannan

Storage and Organisation of Geospatial Data in Distributed Blockchain Using IPFS
Shivanshu Singh, Ayush Tah, and Sanjib Saha

Wireless Communication: Exploring Fuzzy Logic Techniques and Applications
N. Yogeesh, D. K. Girija, M. Rashmi, and P. William

Secure Authentication Scheme for IoT Enabled Smart Homes
Neha Sharma and Pankaj Dhiman

Signal Processing for Language Sanitization: Detection and Censorship of Obscene Words in Speech Recordings
Mohd Mazin Jameel, Zenab Aamir, Mohd Wajid, and Mohammed Usman

Machine Learning and Deep Networks for Additive Wafer Defect Detection: A Concise Study
Bandana Pal and Nidhi Gooel

A Deep Neural Network for Image Classification Using Mixed Analog and Digital Infrastructure
R. Kala, M. Poomani Alias Punitha, P. G. Banupriya, B. Veerasamy, B. Bharathi, and Jafar Ahmad Abed Alzubi

Performance Evaluation of Ensemble Classifiers for Anomaly Detection in IoT Environment
Aishwarya Vardhan, Prashant Kumar, and L. K. Awasthi

Design of Energy-Efficient Approximate Arithmetic Circuits for Error Tolerant Medical Image Processing Applications
A. Ahilan, A. Albert Raj, Anusha Gorantla, R. Jothin, M. Shunmugathammal, and Ghazanfar Ali Safdar

Skin Cancer Diagnosis Using High-Performance Deep Learning Architectures
A. Bindhu, A. Ahilan, S. Vallisree, P. Maria Jesi, B. Muthu Kumar, Nikhil Kumar Marriwala, and Aznul Qalid Md Sabr


Video Surveillance-Based Intrusion Detection System in Edge Cloud Environment
Annu Sharma, Deepa Devasenapathy, M. Raja, Finney Daniel Shadrach, Anil Shirgire, R. Arun, and Thomas Moh Shan Yau

Health Care DNS Tunnelling Detection Method via Spiking Neural Network
Narendra Kumar, R. Surendiran, G. K. Jabash Samuel, N. Bhavana, Anil Shirgire, A. Jasmine Gnana Malar, and Aznul Qalid

About the Editors

Dr. Shruti Jain is an Associate Dean (Innovation) and Professor in the Department of Electronics and Communication Engineering at the Jaypee University of Information Technology, Waknaghat, H.P., India. She received her Doctor of Science (D.Sc.) in Electronics and Communication Engineering. She has teaching experience of around 19 years. She has filed ten patents, of which three have been granted and seven are published. She has published more than 28 book chapters and 130 research papers in reputed indexed journals (with IF ~ 70) and in international conferences. She has also published 16 books. She has completed two government-sponsored projects. She has guided 07 Ph.D. students and currently has 05 registered students. She has also guided 11 M.Tech. scholars and more than 110 B.Tech. undergraduates. She has organized 12 IEEE and Springer conferences as Conference General Chair. Her research interests are Image and Signal Processing, Soft Computing, Internet-of-Things, Pattern Recognition, Bio-inspired Computing, and Computer-Aided Design of FPGA and VLSI circuits. She is a Senior Member of IEEE, an Executive Member of the IEEE Delhi Section, a Life Member and Executive Member of the Biomedical Engineering Society of India, and a Member of IAENG. She is a member of the editorial boards of many reputed journals. She is also a reviewer for many journals and a TPC member of different conferences. She was awarded the Nation Builder Award in 2018–19 and was listed among the top 2% of scientists in the world rankings of 2021 and 2023 published by Elsevier, with data compiled by Stanford University. Nikhil Marriwala (B.Tech., M.Tech., and Ph.D. in Engineering and Technology) is working as Assistant Professor and Head of the Department of Electronics and Communication Engineering, University Institute of Engineering and Technology, Kurukshetra University, Kurukshetra. He did his Ph.D.
at the National Institute of Technology (NIT), Kurukshetra, in the Department of Electronics and Communication Engineering. More than 33 students have completed their M.Tech. dissertations under his guidance. He has published more than 05 book chapters in different international books, has authored more than 10 books with Pearson, Wiley, etc., and has more than 40 publications to his credit in reputed international journals (SCI, SCIE, ESCI, and Scopus) and 20 papers in international/national conferences.


He has been granted 08 patents, with 02 Indian patents and 06 international patents. He has been Chairman of Special Sessions at more than 22 international/national conferences and has delivered keynote addresses at more than 7 international conferences. He has also acted as organizing secretary for more than 05 international conferences and 01 national conference. He has delivered more than 70 invited talks/guest lectures at leading universities/colleges PAN India. He has held the additional charge of Training and Placement Officer at UIET, Kurukshetra University, Kurukshetra, for more than 11 years now. He is the Single Point of Contact (SPOC) and Head of the SWAYAM NPTEL Local Chapter of UIET, KUK. He is the SPOC for the Infosys Campus Connect program for UIET, KUK. He is the Editor of more than 06 book proceedings with Springer and Guest Editor for a special issue in the journal Measurement and Sensors, Elsevier. He was awarded as an NPTEL Enthusiast for the year 2019–2020 by NPTEL, IIT Madras. He has also been awarded the "Career Guru of the Month" award by Aspiring Minds. His areas of interest are Software Defined Radios, Cognitive Radios, Soft Computing, Wireless Communications, Wireless Sensor Networks, Fuzzy System Design, and Advanced Microprocessors.

Pushpendra Singh is an experimental physicist with a passion for cyber-physical system instrumentation and quantum sensing devices. His research focuses on nuclear reactions and instrumentation for rare-decay studies, quantum sensing devices/imagers, and the deployment of cyber-physical systems. He is the principal investigator and project director of the iHub-AWaDH (Agriculture and Water Technology Development Hub), a Technology Innovation Hub (TIH) established by the Department of Science and Technology (DST), Government of India, at IIT Ropar in the framework of the National Mission on Interdisciplinary Cyber-Physical Systems (NMICPS).
In his profession of scientific research, he received the Prof. C. V. K. Baba Award for his Ph.D. thesis in 2008 from the Indian Physics Association (IPA). He was awarded an INFN international fellowship at the Laboratori Nazionali di Legnaro, Italy, in 2009. After a term at Legnaro, in 2011 he joined the GSI Helmholtz Centre for Heavy-Ion Research GmbH, Darmstadt, Germany, as a visiting scientist, where he has been associated with the Lund-York-Cologne-Calorimeter (LYCCA) and the Advanced GAmma Tracking Array (AGATA) as a technical coordinator for multi-branch signal processing through digital electronics. He joined the Department of Physics at IIT Ropar in 2013. Presently, he is a member of the NuSTAR collaboration for the Facility for Anti-proton and Ion Research (FAIR) at Darmstadt in Germany and collaborates with scientists from JINR Dubna (Russia), TU Darmstadt (Germany), ANL (USA), LASTI (University of Hyogo, Japan), and Aksaray University (Turkey).

C. C. Tripathi completed his Ph.D. in electronics from Kurukshetra University, Kurukshetra. Since 2016, he has been working as Director of the University Institute of Engineering and Technology (an autonomous institute), Kurukshetra University, Kurukshetra. His research areas are microelectronics, RF MEMS for communication, and industrial consultancy. He has filed 1 patent and published over 80 papers


in journals and conference proceedings. Prof. Tripathi is an experienced professor with a demonstrated history of working in the higher-education industry. He has been working extensively on graphene-based flexible electronic devices, sensors, etc. Presently, he is working as Director, NITTTR, Bhopal.

Prof. Dinesh Kumar completed his B.Tech. at IIT Madras and Ph.D. at IIT Delhi and is a Professor at RMIT University, Melbourne, Australia. He is a Fellow of the Australasian Institute of Digital Health. He has published over 400 papers, authored 08 books, and is on a range of Australian and international committees for Biomedical Engineering. His passion is affordable diagnostics and making a difference for his students. His work has been cited over 8400 times, and he has also had multiple successes with technology translation. He is a Member of the Therapeutics Goods Administration (TGA), Ministry of Health (Australia), for medical devices. He is also on the editorial boards of IEEE Transactions on Neural Systems and Rehabilitation Engineering and Biomedical Signals and Controls. He has chaired a large number of conferences and given over 50 keynote speeches.

On Parameterized Picture Fuzzy Discriminant Information Measure in Medical Diagnosis Problem Monika , Aman Sharma , and Rakesh Kumar Bajaj

Abstract Decision-making processes related to the problems of pattern recognition, clustering, and expert and knowledge-based systems contain a great deal of uncertainty in the form of imprecise, incomplete, and inexact information with partial contents, where the notions of entropy, discriminant measure, and similarity measure play a crucial role. In the present communication, a very recently proposed doubly parameterized information tool is suitably applied to picture fuzzy sets. This bi-parameterized discriminant measure offers diversification in handling inexact/incomplete information when obtaining the degree of association and closeness in the data of various applications. Further, the introduced bi-parametric measure has been successfully applied under the principle of minimum discriminant information, with the help of some illustrative discussions in applied fields, e.g., "medical diagnosis". Additionally, for the validity and efficacy of the presented work, the necessary characteristic comparison, along with important remarks, has been provided. Keywords Picture fuzzy information · Parametric measure · Discriminant measure · Medical diagnosis

Monika · R. K. Bajaj (B)
Department of Mathematics, Jaypee University of Information Technology, Waknaghat, Solan 173234, Himachal Pradesh, India
e-mail: [email protected]

A. Sharma
Department of Computer Science and Engineering, Jaypee University of Information Technology, Waknaghat, Solan 173234, Himachal Pradesh, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_1

1 Introduction

The involvement of vagueness in decision-making problems is increasing day by day, and to handle such complications, scholars have explored the generic structure of fuzzy sets to make computation more practical in the context of applied problems. In the literature, the concept of entropy has been found to be very important and effective for the study of uncertainty found in information. First of all, Zadeh
[1] introduced the concept of fuzzy sets and their entropy measures, and then, on the basis of the cross-entropy measure for probability distributions given by Kullback and Leibler, various authors have presented different kinds of divergence/discriminant measures for different extensions of fuzzy sets. Also, various studies have handled the uncertainty feature for capturing inconsistency, impreciseness, and inexactness in a systematically extended fashion, and notions such as "type-2 fuzzy sets, rough fuzzy sets, neutrosophic sets (NS), intuitionistic fuzzy sets (IFS), Pythagorean fuzzy sets (PyFS), picture fuzzy sets (PFS)" and many more have been introduced. Amongst the various generalizations of fuzzy sets, the concept of the picture fuzzy set and its properties has gained a significant amount of attention and popularity in the research community due to its additional component of uncertainty in a linear way. The concept of PFS encounters four uncertainty parameters: "degree of membership, degree of non-membership, degree of abstain and degree of refusal". The election voting system pointed out by [2] depicts a clear picture of PFS. The picture fuzzy entropy introduced by [3] shows the applications handled by various researchers worldwide. Such deliberations give rise to adequate motivation for the authors to introduce new measures from the PFS point of view and subsequently use them in areas such as image processing [4], machine learning [5], clustering [6], EV stations [7], etc. Further, in the field of picture fuzzy sets, various related mathematical tools for "interval-valued picture fuzzy sets and picture fuzzy soft sets [8]" have been provided and applied in the field of decision-making. Garg [9] developed various kinds of aggregation operators (average/weighted/hybrid) for picture fuzzy sets with different properties and their applications in decision-science problems. Recently, Khalil et al.
[10] proposed some important operational laws with reference to "interval-valued picture fuzzy sets (IVPFSs)" and "interval-valued picture fuzzy soft sets (IVPFSSs)". Wei et al. [11] presented some mathematical models to deal with MADM problems in a picture fuzzy setup. Further, Wang et al. [12] incorporated the VIKOR method and the mathematical models in an integrated framework so as to obtain the compromise solution in multi-criteria decision problems. Thong and Son [13] presented a new automated clustering technique with the help of "hybrid innumerability" and "particle swarm optimization" in the context of picture fuzzy information. Similarly, on the basis of the Dombi t-norm/t-conorm, Jana et al. [14] proposed some aggregation operators and provided a method to handle any MADM problem under PFS. A novel clustering technique called the distributed picture fuzzy clustering technique was provided by Son [15], and in addition, a "picture fuzzy distance measure" and an analogous "hierarchical picture clustering method" have also been presented by Son [16] in a generalized way. Further, in order to study the cross-relations between the criteria and the impact of preferential data, Tian et al. [17] proposed some "picture fuzzy weighted operators" for solving decision problems. Also, some aggregation operators, termed Archimedean picture fuzzy linguistic [18] and Einstein weighted/ordered weighted operators [19], have been developed for solving multi-attribute group decision-making problems in the picture fuzzy environment. Ganie et al. [20] presented new "correlation coefficients for picture fuzzy sets" and utilized them in some MCDM problems. Singh and Ganie [21] also provided


another picture fuzzy correlation coefficient to study the pattern recognition problem and, based on it, the identification of an investment sector. Also, Khan et al. [22] gave some "parameterized distance and similarity measures for picture fuzzy sets" and applied them to the problem of "medical diagnosis". Kadian and Kumar [23] presented a new picture fuzzy divergence measure for the MCDM problem on the basis of the Jensen-Tsallis entropy. An innovative form of picture fuzzy distance and similarity measure has been proposed by Ganie et al. [24]. Umar et al. [25] introduced a novel decision-making technique for machine learning problems with the incorporation of picture fuzzy divergence measures. A novel picture fuzzy entropy has been utilized by Kumar et al. [26] with partial weight information on the basis of a hybrid picture fuzzy methodology. It may be noted that the various divergence measures [27] available in the literature have their own limitations and are not able to address the features encountered by picture fuzzy information measures. Although a lot of studies have been carried out by various authors in the field of picture fuzzy sets and their applications, no research has been carried out on a bi-parameterized information measure of this form. For determining the degree of association and proximity in the data of different applications, the bi-parametric discriminant measure would provide suitable and flexible diversification in managing the problems due to uncertainty in picture fuzzy information. The contributions of the present manuscript are enumerated below:

– A very recently proposed doubly parameterized discriminant measure for picture fuzzy sets has been utilized.
– The measure and its properties have been utilized in the field of medical diagnosis with an illustrative example.

The manuscript is organized as follows. In view of the topics under study, some related fundamental concepts and basic definitions are presented in Sect. 2. In Sect. 3, the very recently proposed doubly parameterized picture fuzzy discriminant measure is presented, along with some important properties, results, and its validity in accordance with the existing axioms. Section 4 comprehensively presents the utilization of the bi-parameterized measure in the machine learning decision-science application of "medical diagnosis". The illustrative examples also show the practical usefulness of the proposed methodology/measure with comparative remarks. Finally, the manuscript is concluded in Sect. 5.

2 Preliminaries Here, we recollect and study some of the important and basic notions related to the PFS and are readily available in the literature for ready reference. Definition 1 (Intuitionistic Fuzzy Set (IFS)) An intuitionistic fuzzy set . I in . X (universe of discourse) is given by . I = {< x, ρ I (x), ω I (x) >| x ∈ X }; where .ρ I : X → [0, 1] and .ω I : X → [0, 1] denote the degree of membership and degree

4

Monika et al.

of non-membership respectively and for every .x ∈ X satisfy the condition .0 ≤ ρ I (x) + ω I (x) ≤ 1; and the degree of indeterminacy for any IFS . I and .x ∈ X is given by .π I (x) = 1 − ρ I (x) − ω I (x). Definition 2 (Picture Fuzzy Set (PFS) [8]) A picturefuzzy set .U in . X (universe of discourse) given by .U = {< x, ρU (x), τU (x), ωU (x) >| x ∈ X }; where .ρU : X → [0, 1], .τU : X → [0, 1] and .ωU : X → [0, 1] denote the degree of positive membership, degree of neutral membership and degree of negative membership respectively and for every .x ∈ X satisfy the condition .0 ≤ ρU (x) + τU (x) + ωU (x) ≤ 1; and the degree of refusal for any picture fuzzy set .U and .x ∈ X is given by .θU (x) = 1 − ρU (x) − τU (x) − ωU (x). For PFS, “the degree of membership .ρU (x), neutral membership .τU (x) and nonmembership .ωU (x)” shows .0 ≤ ρU (x) + τU (x) + ωU (x) ≤ 1; while the constraint for IFS is.0 ≤ ρ I (x) + ω I (x) ≤ 1; for.ρU (x), τU (x), ωU (x) ∈ [0, 1]. Few operations related to the PFS are provided below: Definition 3 ([8]) If.U, V ∈ P F S(X ), then the operations can be defined as follows: (a) Complement: .U = {< x, ωU (x), τU (x), ρU (x) > | x ∈ X }; .U ⊆ V iff ∀x ∈ X, ρU (x) ≤ ρ V (x) τU (x) ≥ τ V (x) and (b) Subsethood: ωU (x) ≥ ωV (x); (c) Union:.U ∪ V = {< x, ρU (x) ∨ ρV (x), τU (x) ∧ τV (x) and ωU (x) ∧ ωV (x) > | x ∈ X }; (d) Intersection: .U ∩ V = {< x, ρU (x) ∧ ρV (x), τU (x) ∨ τV (x) and ωU (x) ∨ ωV (x) > | x ∈ X }. In this paper, we use . P F S(X ) to denote the collection of all the PFSs defined on the domain of discourse . X . Definition 4 (Average Picture Fuzzy Set [16]) The average picture fuzzy set of .Ui ∈ P F S(X ), i = 1, 2, ..., n is represented by .(Ui )av as {⟨ (Ui )av =

(U_i)_av = { ⟨ x, (1/n) ∑_{i=1}^{n} ρ_{U_i}(x), (1/n) ∑_{i=1}^{n} τ_{U_i}(x), (1/n) ∑_{i=1}^{n} ω_{U_i}(x) ⟩ | x ∈ X }.
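As a concrete illustration of Definitions 2–4, the basic PFS operations can be sketched in Python. This is not code from the paper; the domain labels and grade values below are illustrative assumptions.

```python
# Sketch of Definitions 2-4: a PFS on a finite domain is represented as a
# dict x -> (rho, tau, omega); the refusal degree theta is implicit.

def refusal(grade):
    """theta = 1 - rho - tau - omega (Definition 2)."""
    rho, tau, omega = grade
    return 1.0 - rho - tau - omega

def complement(U):
    """Swap positive and negative membership (Definition 3a)."""
    return {x: (omega, tau, rho) for x, (rho, tau, omega) in U.items()}

def union(U, V):
    """Max positive, min neutral, min negative membership (Definition 3c)."""
    return {x: (max(U[x][0], V[x][0]), min(U[x][1], V[x][1]),
                min(U[x][2], V[x][2])) for x in U}

def intersection(U, V):
    """Min positive, max neutral, max negative membership (Definition 3d)."""
    return {x: (min(U[x][0], V[x][0]), max(U[x][1], V[x][1]),
                max(U[x][2], V[x][2])) for x in U}

def average(sets):
    """Average picture fuzzy set of U_1, ..., U_n (Definition 4)."""
    n = len(sets)
    return {x: tuple(sum(U[x][k] for U in sets) / n for k in range(3))
            for x in sets[0]}

U = {"x1": (0.5, 0.2, 0.2), "x2": (0.3, 0.3, 0.3)}
V = {"x1": (0.6, 0.1, 0.2), "x2": (0.2, 0.4, 0.3)}
print(union(U, V))  # {'x1': (0.6, 0.1, 0.2), 'x2': (0.3, 0.3, 0.3)}
```

Note that the complement is an involution (complementing twice returns the original set), matching Definition 3(a).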

3 Doubly Parameterized Picture Fuzzy Discriminant Measure

The literature of information theory offers several distance, similarity and divergence/dissimilarity measures for the different extensions of fuzzy sets. Some measures can be very useful and involve fewer computations, at the cost of not satisfying all of the prerequisite standard axioms. To overcome such limitations, there is a need to define the notion of a discriminant information measure, which involves only two axioms. Various researchers have explored the


concept of parameterized discriminant information measures and utilized them in different application fields. Some initial related notions [27] are described as follows:

"Let Δ_n = {P = (p_1, p_2, …, p_n) : p_i ≥ 0, i = 1, 2, …, n, and ∑_i p_i = 1} be the set of all probability distributions associated with a discrete random variable X taking finite values x_1, x_2, …, x_n. For any probability distributions P = (p_1, p_2, …, p_n) and Q = (q_1, q_2, …, q_n) ∈ Δ_n, Joshi and Kumar [27] proposed a divergence measure:

D_R^S(P, Q) = (R × S)/(S − R) [ ( ∑_{i=1}^{n} p_i^S q_i^{(1−S)} )^{1/S} − ( ∑_{i=1}^{n} p_i^R q_i^{(1−R)} )^{1/R} ];   (1)

where either 0 < S < 1 and 1 < R < ∞ or 0 < R < 1 and 1 < S < ∞."

It may also be noted that, in an analogous fashion, various parametric discriminant measures for different types of sets, such as "fuzzy sets, intuitionistic fuzzy sets, and Pythagorean fuzzy sets", have been proposed and studied in detail. Similarly, on the basis of the above probabilistic information measure (1), "Dhumras and Bajaj [28] presented a new bi-parametric picture fuzzy discriminant measure for two PFSs U, V ∈ PFS(X)" as follows:

I_S^R(U, V) = (R × S)/(n(S − R)) ∑_{i=1}^{n} [ ( ρ_U(x_i)^S ρ_V(x_i)^{(1−S)} + τ_U(x_i)^S τ_V(x_i)^{(1−S)} + ω_U(x_i)^S ω_V(x_i)^{(1−S)} + θ_U(x_i)^S θ_V(x_i)^{(1−S)} )^{1/S} − ( ρ_U(x_i)^R ρ_V(x_i)^{(1−R)} + τ_U(x_i)^R τ_V(x_i)^{(1−R)} + ω_U(x_i)^R ω_V(x_i)^{(1−R)} + θ_U(x_i)^R θ_V(x_i)^{(1−R)} )^{1/R} ];   (2)

where either 0 < S < 1 and 1 < R < ∞ or 0 < R < 1 and 1 < S < ∞. The measure I_S^R(U, V) is not symmetric with respect to its argument sets. Accordingly, a symmetric form can be structured as

J_S^R(U, V) = I_S^R(U, V) + I_S^R(V, U).   (3)
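A direct transcription of the measures (2) and (3) can be sketched as follows. The parameter choice S = 0.5, R = 2 and the sample sets are our own illustrative assumptions (all grades are kept strictly positive so that the negative powers in the R-term are defined).

```python
# Sketch of the bi-parametric discriminant measure (2) and its symmetric
# form (3); U and V map each x to (rho, tau, omega).

def discriminant(U, V, S=0.5, R=2.0):
    """I_S^R(U, V) of Eq. (2); requires 0 < S < 1 < R (or 0 < R < 1 < S)."""
    n = len(U)
    total = 0.0
    for x in U:
        u = list(U[x]) + [1.0 - sum(U[x])]   # append theta = refusal degree
        v = list(V[x]) + [1.0 - sum(V[x])]
        s_part = sum(a**S * b**(1 - S) for a, b in zip(u, v)) ** (1 / S)
        r_part = sum(a**R * b**(1 - R) for a, b in zip(u, v)) ** (1 / R)
        total += s_part - r_part
    return (R * S) / (n * (S - R)) * total

def sym_discriminant(U, V, S=0.5, R=2.0):
    """Symmetric measure J_S^R(U, V) of Eq. (3)."""
    return discriminant(U, V, S, R) + discriminant(V, U, S, R)

U = {"x1": (0.5, 0.2, 0.2), "x2": (0.3, 0.3, 0.3)}
V = {"x1": (0.4, 0.3, 0.2), "x2": (0.2, 0.4, 0.3)}
print(abs(discriminant(U, U)) < 1e-9)                    # True: I(U, U) = 0
print(sym_discriminant(U, V) == sym_discriminant(V, U))  # True: J is symmetric
```

With S = 0.5 and R = 2, the S-term is at most 1 and the R-term at least 1 (by the Cauchy–Schwarz inequality), so the negative front coefficient makes the measure nonnegative, with equality exactly when U = V.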

"Guiwu Wei [29] presented the notion of a discriminant measure for PFSs, termed "picture fuzzy cross entropy" I_PFS(U, V), which satisfies two axioms: I_PFS(U, V) ≥ 0, and I_PFS(U, V) = 0 if and only if U = V. For fuzzy sets, I_FS(U, V) ≠ I_FS(U̅, V̅); however, for PFSs, I_PFS(U, V) = I_PFS(U̅, V̅) holds."

Theorem 1 For U, V, W ∈ PFS(X):

(i) I_S^R(U ∪ V, U) + I_S^R(U ∩ V, U) = I_S^R(V, U).
(ii) I_S^R(U ∪ V, W) + I_S^R(U ∩ V, W) = I_S^R(U, W) + I_S^R(V, W).
(iii) I_S^R((U ∪ V)̅, (U ∩ V)̅) = I_S^R(U̅ ∩ V̅, U̅ ∪ V̅).
(iv) I_S^R(U, U̅) = I_S^R(U̅, U).
(v) I_S^R(U̅, V̅) = I_S^R(U, V).
(vi) I_S^R(U, V̅) = I_S^R(U̅, V).
(vii) I_S^R(U, V) + I_S^R(U̅, V) = I_S^R(U̅, V̅) + I_S^R(U, V̅).

Proof Let us divide X into the partitions X_1 and X_2, where

X_1 = {x_i ∈ X | U ⊆ V}, i.e., ρ_U(x_i) ≤ ρ_V(x_i), τ_U(x_i) ≥ τ_V(x_i), ω_U(x_i) ≥ ω_V(x_i) ∀ x_i ∈ X_1;
X_2 = {x_i ∈ X | V ⊆ U}, i.e., ρ_U(x_i) ≥ ρ_V(x_i), τ_U(x_i) ≤ τ_V(x_i), ω_U(x_i) ≤ ω_V(x_i) ∀ x_i ∈ X_2.

Consequently, on X_1 we have U ∪ V = V and U ∩ V = U, while on X_2 we have U ∪ V = U and U ∩ V = V, and in each case the refusal degrees θ_{U∪V}, θ_{U∩V} coincide with those of the corresponding set. Now,

(i) Splitting the defining sum of the measure (2) for I_S^R(U ∪ V, U) + I_S^R(U ∩ V, U) over the partitions X_1 and X_2: on X_1, the terms coming from U ∩ V = U reduce to

( ρ_U(x_i)^S ρ_U(x_i)^{(1−S)} + τ_U(x_i)^S τ_U(x_i)^{(1−S)} + ω_U(x_i)^S ω_U(x_i)^{(1−S)} + θ_U(x_i)^S θ_U(x_i)^{(1−S)} )^{1/S} − ( ρ_U(x_i)^R ρ_U(x_i)^{(1−R)} + τ_U(x_i)^R τ_U(x_i)^{(1−R)} + ω_U(x_i)^R ω_U(x_i)^{(1−R)} + θ_U(x_i)^R θ_U(x_i)^{(1−R)} )^{1/R}
= ( ρ_U(x_i) + τ_U(x_i) + ω_U(x_i) + θ_U(x_i) )^{1/S} − ( ρ_U(x_i) + τ_U(x_i) + ω_U(x_i) + θ_U(x_i) )^{1/R} = 1 − 1 = 0,

while the terms coming from U ∪ V = V are exactly the X_1-terms of I_S^R(V, U). On X_2 the roles interchange: the terms from U ∪ V = U vanish in the same way, and the terms from U ∩ V = V are the X_2-terms of I_S^R(V, U). Recombining the two partial sums over X_1 ∪ X_2 = X gives

I_S^R(U ∪ V, U) + I_S^R(U ∩ V, U)
= (R × S)/(n(S − R)) ∑_{i=1}^{n} [ ( ρ_V(x_i)^S ρ_U(x_i)^{(1−S)} + τ_V(x_i)^S τ_U(x_i)^{(1−S)} + ω_V(x_i)^S ω_U(x_i)^{(1−S)} + θ_V(x_i)^S θ_U(x_i)^{(1−S)} )^{1/S} − ( ρ_V(x_i)^R ρ_U(x_i)^{(1−R)} + τ_V(x_i)^R τ_U(x_i)^{(1−R)} + ω_V(x_i)^R ω_U(x_i)^{(1−R)} + θ_V(x_i)^R θ_U(x_i)^{(1−R)} )^{1/R} ]
= I_S^R(V, U).

(ii) We have to prove I_S^R(U ∪ V, W) + I_S^R(U ∩ V, W) = I_S^R(U, W) + I_S^R(V, W). Splitting both sums over X_1 and X_2 as above: on X_1, the terms from U ∪ V = V are the X_1-terms of I_S^R(V, W) and the terms from U ∩ V = U are the X_1-terms of I_S^R(U, W); on X_2 the roles of U and V interchange. Recombining the partial sums over X_1 ∪ X_2 = X yields

I_S^R(U ∪ V, W) + I_S^R(U ∩ V, W) = I_S^R(U, W) + I_S^R(V, W).

Similarly, one can easily prove (iii)–(vii).
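The identities (i) and (ii) can also be checked numerically. Note that the proof's partition argument presumes that U and V are pointwise comparable (each x_i lies in X_1 or X_2), so the sketch below constructs such sets deliberately; all values are made up for illustration.

```python
# Numerical spot-check of Theorem 1 (i) and (ii) for pointwise-comparable
# picture fuzzy sets U and V (as required by the partition X_1, X_2).

def disc(U, V, S=0.5, R=2.0):
    """Discriminant measure I_S^R of Eq. (2)."""
    n = len(U)
    total = 0.0
    for x in U:
        u = list(U[x]) + [1.0 - sum(U[x])]   # append theta
        v = list(V[x]) + [1.0 - sum(V[x])]
        total += (sum(a**S * b**(1 - S) for a, b in zip(u, v)) ** (1 / S)
                  - sum(a**R * b**(1 - R) for a, b in zip(u, v)) ** (1 / R))
    return (R * S) / (n * (S - R)) * total

def union(U, V):
    return {x: (max(U[x][0], V[x][0]), min(U[x][1], V[x][1]),
                min(U[x][2], V[x][2])) for x in U}

def inter(U, V):
    return {x: (min(U[x][0], V[x][0]), max(U[x][1], V[x][1]),
                max(U[x][2], V[x][2])) for x in U}

# U is a subset of V at x1 (x1 in X_1); V is a subset of U at x2 (x2 in X_2)
U = {"x1": (0.3, 0.3, 0.3), "x2": (0.5, 0.1, 0.2)}
V = {"x1": (0.5, 0.2, 0.2), "x2": (0.3, 0.2, 0.4)}
W = {"x1": (0.4, 0.2, 0.3), "x2": (0.4, 0.2, 0.3)}

lhs1 = disc(union(U, V), U) + disc(inter(U, V), U)
print(abs(lhs1 - disc(V, U)) < 1e-12)                 # True: property (i)
lhs2 = disc(union(U, V), W) + disc(inter(U, V), W)
print(abs(lhs2 - (disc(U, W) + disc(V, W))) < 1e-12)  # True: property (ii)
```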

4 Solving Medical Diagnosis Problem with Parameterized Measure

In this section, we utilize the newly proposed doubly parameterized discriminant information measure for PFSs in decision-making and machine-learning application areas.

4.1 Medical Diagnosis

There are numerous instances in the medical field where the signs of different diseases look alike; hence it is extremely challenging for specialists or doctors to identify the right infection. Various measures exist in the literature that play an important role in dealing with such problems for intuitionistic fuzzy sets (IFSs) [30, 31]. On similar lines, if a patient has the symptoms of malaria and typhoid, "then the effect of the symptoms of stomach and chest pain is neutral". As there is an "effect of temperature on the disease of malaria, the degree of neutrality is approximately equal to zero or very near to zero". Therefore, we can say that the "degree of neutrality" makes an important contribution to tackling such decision situations within the mathematical foundations of "uncertainty and decision-making". The accumulated assessment by some deputed decision-makers or one specialized expert allots symptom values of the infections to each infected person on the basis of their expertise or ability. The discriminant measure of a symptom is computed by making use of the symptoms of the particular diseases. The methodology is presented with the help of Fig. 1.

4.1.1 Numerical Illustration [32]

Let us consider the four infected persons Al, Bob, Joe, and Ted. The characterization of symptoms for the infected persons and for the diagnoses is given in Tables 1 and 2. Using Tables 1 and 2, we compute the values of the "bi-parameterized picture fuzzy discriminant measure", which are given in Table 3. Next, a comparison between the proposed discriminant measure and the existing measures in the literature is carried out in Table 4 (Fig. 2). It becomes evident from Table 3 that patient "Al" has "typhoid", "Bob" is suffering from a "stomach problem", "Joe" is suffering from "viral fever", and "Ted" is suffering from "typhoid".
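The Fig. 1 methodology reduces to a simple decision rule: compute the discriminant measure between the patient's symptom profile and each disease profile, and diagnose the disease with the smallest value. A minimal sketch follows; the two-symptom profiles are hypothetical stand-ins, not the Table 1 and 2 data.

```python
# Sketch of the medical-diagnosis decision rule of Fig. 1.

def measure(U, V, S=0.5, R=2.0):
    """Bi-parameterized picture fuzzy discriminant measure of Eq. (2)."""
    n = len(U)
    total = 0.0
    for x in U:
        u = list(U[x]) + [1.0 - sum(U[x])]   # append theta
        v = list(V[x]) + [1.0 - sum(V[x])]
        total += (sum(a**S * b**(1 - S) for a, b in zip(u, v)) ** (1 / S)
                  - sum(a**R * b**(1 - R) for a, b in zip(u, v)) ** (1 / R))
    return (R * S) / (n * (S - R)) * total

def diagnose(patient, diseases):
    """Assign the disease whose profile discriminates least from the patient."""
    scores = {name: measure(patient, profile)
              for name, profile in diseases.items()}
    return min(scores, key=scores.get), scores

diseases = {
    "viral fever": {"temperature": (0.5, 0.3, 0.1), "cough": (0.5, 0.2, 0.2)},
    "typhoid":     {"temperature": (0.3, 0.4, 0.2), "cough": (0.2, 0.3, 0.4)},
}
# A patient whose profile coincides with "typhoid": the measure vanishes there.
patient = {"temperature": (0.3, 0.4, 0.2), "cough": (0.2, 0.3, 0.4)}
best, scores = diagnose(patient, diseases)
print(best)  # typhoid
```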


Fig. 1 Proposed methodology for medical diagnosis

Table 1 Symptoms classification for the diagnosis [33]

                Viral fever       Malaria            Typhoid           Stomach problem    Chest problem
Temperature     (0.4, 0.5, 0.1)   (0.7, 0.2, 0.1)    (0.3, 0.4, 0.2)   (0.1, 0.2, 0.7)    (0.1, 0.1, 0.8)
Headache        (0.3, 0.2, 0.4)   (0.2, 0.2, 0.5)    (0.6, 0.1, 0.2)   (0.2, 0.4, 0.3)    (0.05, 0.2, 0.7)
Stomach pain    (0.8, 0.1, 0.1)   (0.01, 0.9, 0.5)   (0.2, 0.1, 0.5)   (0.7, 0.2, 0.1)    (0.2, 0.1, 0.6)
Cough           (0.45, 0.3, 0.1)  (0.7, 0.2, 0.1)    (0.2, 0.2, 0.5)   (0.2, 0.1, 0.65)   (0.2, 0.1, 0.6)
Chest pain      (0.1, 0.6, 0.2)   (0.1, 0.1, 0.8)    (0.1, 0.05, 0.8)  (0.2, 0.1, 0.6)    (0.8, 0.1, 0.1)

Table 2 Symptoms classification for the patients [33]

       Temperature         Headache            Stomach pain        Cough               Chest pain
Al     (0.7, 0.1, 0.15)    (0.6, 0.3, 0.05)    (0.25, 0.45, 0.25)  (0.2, 0.25, 0.2)    (0.1, 0.2, 0.6)
Bob    (0.2, 0.3, 0.45)    (0.05, 0.5, 0.4)    (0.6, 0.15, 0.25)   (0.25, 0.4, 0.35)   (0.02, 0.25, 0.65)
Joe    (0.75, 0.05, 0.05)  (0.02, 0.85, 0.1)   (0.3, 0.2, 0.4)     (0.7, 0.25, 0.05)   (0.25, 0.4, 0.4)
Ted    (0.4, 0.2, 0.3)     (0.7, 0.2, 0.1)     (0.2, 0.2, 0.5)     (0.2, 0.1, 0.65)    (0.1, 0.5, 0.25)


Comparative Remarks: In order to check the predominance of the proposed discriminant measure over the existing measures, a tabular comparison has been carried out in Table 4.

Table 3 Results of diagnosis for the bi-parameterized picture fuzzy discriminant measure

       Viral fever   Malaria   Typhoid   Stomach problem   Chest problem
Al     0.2979        0.2581    0.1586    0.2155            0.4562
Bob    0.2159        0.2874    0.2205    0.1907            0.5855
Joe    0.1423        0.2524    0.3091    0.3234            0.3942
Ted    0.2020        0.3532    0.1123    0.1825            0.2642

Table 4 Comparative analysis with some existing measures

Method                               Al         Bob                Joe            Ted
Thao et al. [34]                     "Typhoid"  "Stomach problem"  "Viral fever"  "Typhoid"
Le et al. [35]                       "Typhoid"  "Stomach problem"  "Viral fever"  "Typhoid"
Thao [36]                            "Typhoid"  "Stomach problem"  "Viral fever"  "Typhoid"
Wei [11]                             "Typhoid"  "Stomach problem"  "Viral fever"  "Typhoid"
Umar et al. [25]                     "Typhoid"  "Stomach problem"  "Viral fever"  "Typhoid"
Bi-parametric discriminant measure   "Typhoid"  "Stomach problem"  "Viral fever"  "Typhoid"

Fig. 2 Ranking results of diagnosis
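The diagnoses above follow mechanically from Table 3 by taking, for each patient, the disease with the minimum discriminant value; the numbers below are copied from Table 3.

```python
# The decision rule behind Table 3: argmin of the discriminant values per patient.

diseases = ["Viral fever", "Malaria", "Typhoid", "Stomach problem", "Chest problem"]
table3 = {
    "Al":  [0.2979, 0.2581, 0.1586, 0.2155, 0.4562],
    "Bob": [0.2159, 0.2874, 0.2205, 0.1907, 0.5855],
    "Joe": [0.1423, 0.2524, 0.3091, 0.3234, 0.3942],
    "Ted": [0.2020, 0.3532, 0.1123, 0.1825, 0.2642],
}
diagnosis = {p: diseases[row.index(min(row))] for p, row in table3.items()}
print(diagnosis["Al"], diagnosis["Joe"])  # Typhoid Viral fever
```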


Table 5 Comparative analysis with some existing measures

Research articles   Parametrization involvement   Entropy & discriminant information measure   Assessment of alternatives
Ganie et al. [20]                                                                              Picture fuzzy set
Thao [34]                                                                                      Picture fuzzy set
Dutta [37]                                                                                     Picture fuzzy set
Singh [38]                                                                                     Picture fuzzy set
Umar [25]                                                                                      Picture fuzzy set
Proposed methods    √                             √                                            Picture fuzzy set

Remarks: The values of the discriminant information measure computed above are promising and reliable, which establishes and affirms the effectiveness of the proposed technique.

Advantages: On comparing our proposed technique with the existing ones, we find that our method is equally consistent with the importance given to the decision-makers through the assignment of their weights. In addition, the suggested approach concurrently recommends the exact input and strategy, which is not the case with other methods. Moreover, compared to other approaches that involve a complex process to reach a conclusion, the suggested method is easier to apply and involves fewer computations. As a result, the performance of the suggested method is very good. A characteristic comparison table explaining the advantages and features of the proposed measures and techniques is given in Table 5 for understanding the motivation and the necessity of the proposed techniques.

5 Conclusions and Scope for Future Work

In this manuscript, a recently proposed bi-parameterized discriminant measure for PFSs has been utilized, and its desirable properties have been presented along with proofs. The proposed discriminant measure is found to be better than the existing measures owing to the involvement of parameters and the additional uncertainty components (the "degrees of abstain and refusal"), which are very necessary for practical problems. Further, the measure has been applied to the machine-learning and decision-science problems related to "medical diagnosis". The methodology for the application has been outlined and illustrated with the help of a separate numerical example. The outcomes in the applications considered in the manuscript are in line with the existing practices in use, but with less computational effort. The suggested method assesses each choice in relation to each criterion separately by assigning considerable importance/weightage when deciding which is better in a decision-making dilemma. When compared to various existing strategies, the methodology under consideration advises the particular input values and standard


procedural steps at the same time. Thus, it is clear that the suggested MCDM technique is easier to use, more flexible, and requires less computing labor than the many MCDM strategies already in use. In the future, a notion of a 'useful' bi-parametric discriminant measure for the more generalized T-spherical fuzzy sets, on the basis of the utility distribution information measures discussed by Hooda et al. [39], can be proposed. This proposition will be supported by the concept of integrated ambiguity and information improvement measures. Additionally, the constrained optimization of these information measures may be discussed in detail, with possible applications in various other types of decision-making problems.

Funding Details: This work is supported by UGC, New Delhi under the scheme ID UGCES-22-GE-HIM-F-SJSGC-14036.

References 1. Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353 2. Cuong BC, Kreinovich V (2012) Picture fuzzy sets-a new concept for computational intelligence problems. In: The proceedings of 3rd world congress on information and communication technologies (wict), pp 1–6 3. Arya V, Kumar S (2020) A new picture fuzzy information measure based on Shannon entropy with applications in opinion polls using extended VIKOR-TODIM approach. Comput Appl Math 39:197 4. Koss JE, Newman FD, Johnson TK, Kirch DL (1999) Abdominal organ segmentation using texture transforms and a Hopfield neural network. IEEE Trans Med Imaging 18(7):640–648 5. Pal NR, Pal SK (1993) A review on image segmentation techniques. Pattern Recogn 26(9):1277–1294 6. Son LH, Thong PH (2017) Some novel hybrid forecast methods based on picture fuzzy clustering for weather now casting from satellite image sequences. Appl Intell 46:1–15 7. Ju Y, Ju D, Ernesto DR, Gonzalez S, Giannakis M, Wang A (2019) Study of site selection of electric vehicle charging station based on extended GRP method under picture fuzzy environment. Comput Ind Eng 135:1271–1285 8. Cuong BC (2013) Picture fuzzy sets: first results. Part 2, seminar. Neuro-fuzzy systems with applications. Preprint 04/2013, Institute of Mathematics, Hanoi 9. Garg H (2017) Some picture fuzzy aggregation operators and their applications to multicriteria decision-making. Arabian J Sci Eng 42:5275–5290 10. Khalil AM, Li SG, Garg H, Li H, Ma S (2019) New operations on interval-valued picture fuzzy set, interval-valued picture fuzzy soft set and their applications. IEEE Access 7:51236–51253 11. Wei G (2018) Some similarity measures for picture fuzzy sets and their applications. Iranian J Fuzzy Syst 15(1):77–89 12. Wang L, Zhang H, Wang J, Li L (2018) Picture fuzzy normalized projection-based VIKOR method for the risk evaluation of construction project. Appl Soft Comput 64:216–226 13. 
Thong PH, Son LH (2016) A novel automatic picture fuzzy clustering method based on particle swarm optimization and picture composite cardinality. Knowl Based Syst 109:48–60 14. Jana C, Senapati T, Pal M, Yager RR (2019) Picture fuzzy Dombi aggregation operators: application to MADM process. Appl Soft Comput 74:99–109 15. Son LH (2015) DPFCM: a novel distributed picture fuzzy clustering method on picture fuzzy sets. Expert Syst Appl 42:51–66


16. Son LH (2016) Generalized picture distance measure and applications to picture fuzzy clustering. Appl Soft Comput 46:284–295 17. Tian C, Peng J, Zhang S, Zhang W, Wang J (2019) Weighted picture fuzzy aggregation operators and their application to multicriteria decision-making problems. Comput Ind Eng 237:106037 18. Liu P, Zhang X (2018) A novel picture fuzzy linguistic aggregation operator and its application to group decision-making. Cognit Comput 10:242–259 19. Khan S, Abdullah S, Ashraf S (2019) Picture fuzzy aggregation information based on Einstein operations and their application in decision-making. Math Sci 13:213–229 20. Ganie AH, Singh S, Bhatia PK (2020) Some new correlation coefficients of picture fuzzy sets with application. Neural Comput & Appl 32:12609–12625 21. Singh S, Ganie AH (2022) On a new picture fuzzy correlation coefficient with its applications to pattern recognition and identification of an investment sector. Comput Appl Math 41(1):1–35 22. Khan MJ, Kumam P, Deebani W, Kumam W, Shah Z (2021) Bi-parametric distance and similarity measures of picture fuzzy sets and their applications in medical diagnosis. Egypt Inf J 22(2):201–212 23. Kadian R, Kumar S (2021) A new picture fuzzy divergence measure based on Jensen-Tsallis information measure and its application to multicriteria decision making. Granular Comput 24. Ganie AH, Singh S (2021) An innovative picture fuzzy distance measure and novel multiattribute decision-making method. Complex Intell Syst 7:781–805 25. Umar A, Saraswat RN (2022) Decision-making in machine learning using novel picture fuzzy divergence measure. Neural Comput & Appl 34:457–475 26. Kumar S, Arya V, Kumar S, Dahiya A (2022) A new picture fuzzy entropy and its application based on combined picture fuzzy methodology with partial weight information. Int J Fuzzy Syst 24:3208–3225 27. Joshi R, Kumar S (2018) An .(R ' , S ' )-norm fuzzy relative information measure and its applications in strategic decision making. 
Comput Appl Math 37:4518–4543 28. Dhumras H, Bajaj RK (2022) On prioritization of hydrogen fuel cell technology utilizing bi-parametric picture fuzzy information measures in VIKOR and TOPSIS decision-making approaches. Int J Hydrog Energy. https://doi.org/10.1016/j.ijhydene.2022.09.093 29. Wei G (2016) Picture fuzzy cross-entropy for multiple attribute decision-making problems. J Bus Econ Manag, Taylor & Francis. 17(4):491–502 30. Luo M, Zhao R (2018) A distance measure between intuitionistic fuzzy sets and its application in medical diagnosis. Artif Intell Med 89:34–39 31. Thao NX (2018) A new correlation coefficient of the intuitionistic fuzzy sets and its application. J Intell Fuzzy Syst 35(2):1959–1968 32. De SK, Biswas R, Roy AR (2001) An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets Syst 117(2):209–213 33. Xu ZS, Chen J, Wu JJ (2008) Clustering algorithm for intuitionistic fuzzy sets. Inf Sci 178(19):3775–3790 34. Thao NX, Ali M, Nhung LT, Gianey HK, Smarandache F (2019) A new multi-criteria decisionmaking algorithm for medical diagnosis and classification problems using divergence measure of picture fuzzy sets. J Intell Fuzzy Syst 37(6):7785–7796 35. Le NT, Nguyen DV, Ngoc CM, Nguyen TX (2018) New dissimilarity measures on picture fuzzy sets and applications. J Comput Sci Cybern 34(3):219–231 36. Thao NX (2018) Evaluating water reuse applications under uncertainty: a novel picture fuzzy multi-criteria decision-making method. Int J Inf Eng Electronic Bus 10(6):32–39 37. Dutta P (2018) Medical diagnosis based on distance measures between picture fuzzy sets. Int J Fuzzy Syst & Appl 7(4):15–36 38. Singh P (2015) Correlation coefficients for picture fuzzy sets. J Intell & Fuzzy Syst 28:591–604 39. Hooda DS, Bajaj RK (2010) Useful fuzzy measures of information, integrated ambiguity and directed divergence. Int J Gen Syst 39(6):647–658

Fuzzy Vendor–Buyer Trade Credit Inventory Model-Pentagonal Numbers in Permissible Limits Delay in Account Settlement with Supervised Learning K. Kalaiarasi, S. Swathi, and Sardar M. N. Islam (Naz)

Abstract Credit risk is a crucial aspect of modern company operations. We therefore provide a stylized model to ascertain the best course of action for vendor–buyer inventory management under the constraint of trade credit, in order to encompass the concepts of vendor–buyer integration and order-size-dependent trade credit. The fuzzy arithmetical operations of the function principle are used in this research to establish a technique for determining the best economic order amount and the annual combined cost for the vendor and buyer. The model parameters (overall demand rate, productivity, setup costs, holding expenses, inventory cost and transport costs) are all fuzzy pentagonal numbers, which are used to create a fully fuzzy model. The extension of the Lagrangian approach for handling inequality-constraint problems is used to find the best course of action for the fuzzy manufacturing inventory model, and Stepwise Integration (SI) representation is used to defuzzify the fuzzy overall yearly integrated cost. A numerical example and an analysis employing a supervised learning method are utilised to show the viability of the proposed integration models.

Keywords Vendor–Buyer · Fuzzy logic · Pentagonal numbers · Total cost · Linear regression

K. Kalaiarasi (B) PG & Research Department of Mathematics, Cauvery College for Women (Autonomous), (Affiliated to Bharathidasan University), Tiruchirappalli 620018, India e-mail: [email protected] D.Sc(Mathematics), Srinivas University, Surathkal, Mangaluru, Karnataka 574146, India S. Swathi PG and Research Department of Mathematics, Cauvery College for Women (Autonomous), (Affiliated to Bharathidasan University), Tiruchirappalli 620018, India S. M. N. Islam (Naz) UniSri. Victoria University, Footscray, Australia © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_2


1 Introduction

Vendors and customers can cooperate to lower the buyer's ordering cost (OC) and thereby their combined total cost. We consider a model to establish the best vendor–buyer (VB) inventory policy when there is a need to speed up order processing and allow for payment delays. Certain types of conversion are handled by the model's total yearly cost function. Ali et al. [1] introduced multi-product and multi-echelon measurements of the perishable supply chain. Before taking any action, vendors and buyers often create a purchasing agreement and then collaborate to maximize their respective advantages. This indicates that, based on their combined total cost (TC) function, the optimum contract size and the number of shipments must be established at the commencement of the contract. Trade credit is significant in finance. Pal et al. [2] discussed optimal decisions in a dual-channel competitive green supply chain under promotional effort in 2023. Retailers use trade credit from suppliers to boost sales, increase market share, and lower stock levels. Conversely, during the trade credit period, shops can obtain capital goods and services without making any payments. This means that trade credit is available to both the supplier and the merchant. Chand and Ward [3] proposed an economic order quantity under conditions of permissible delay in payments in 1987. According to joint inventory models, the buyer is only charged for the quantity bought after receiving it. This is not the case in modern commercial transactions, because vendors typically grant credit to settle the payment for items and do not charge the buyer interest during this time. Chen and Hsieh [4] introduced the graded mean integration representation of generalized fuzzy numbers in 1999. The buyer is not required to pay the seller right away after receiving the items. Pervin et al. [5] analyzed an inventory control model with shortage under time-dependent demand and time-varying holding cost, including stochastic deterioration, in 2018. Consumers have the option to postpone payment until the allotted time has passed. Chang [6] developed a single-item inventory model with allowable payment delays based on these phenomena. Under the presumptions of the traditional EOQ model, Ghosh et al. [7] modeled the cost of money invested in inventories in 2022. Any policy with a positive order quantity and an infinite delay will be the most advantageous, as demonstrated by Chand [3]. Kalaichelvan et al. [8] created a different method to determine the economic order quantity when there is a reasonable payment delay in 2021. For retailers' price and lot-size policies for deteriorating goods under the assumption of allowable payment delays, the integrated supply chain model in this study includes the cost reduction for order processing as well as the permitted payment delay. Chung [9] added a linear-trend demand to this problem. The variables and parameters in an inventory management system might be quite uncertain in real life. In order to handle the modeling approach with the fuzzy cost of inventory under the extension-principle arithmetic operations, Park adopted the fuzzy set idea. Chaudhary et al. [10] proposed a sustainable inventory model for defective items under a fuzzy environment for a stock system of a vendor


and a consumer. Kalaiarasi [8] introduced a combined EOQ approach for a vendor and client, since a vendor may supply a lot size that fulfils the buyer's order quantity. Fuzzy ideas have recently been included in EOQ models [12, 13, 15]. Linear regression is a machine learning (ML) method used for supervised learning (SL). Linear regression forecasts a dependent variable (DV) from the given independent variable(s); this regression method thus establishes a linear relationship between a dependent variable and the other independent parameters. In this study, an inventory-based model with a seller and buyer for a product and fuzzy input variables is considered. Here, pentagonal fuzzy numbers are used to express the demand and costs. For fuzzy-number arithmetic operations, Chen's [4] function principle is adopted, and the Lagrangian approach is employed for optimization. The annual overall cost for vendor and buyer is defuzzified using graded mean integration, which also allows for a small reduction in the cost of order processing.
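Since the supervised-learning step in this paper rests on simple linear regression, a minimal closed-form least-squares sketch may be helpful; the observation pairs below are made-up numbers, not data from this paper.

```python
# Ordinary least squares for simple linear regression: the closed-form slope
# and intercept of the best-fit line y = a*x + b.

def fit_line(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# e.g. order quantity vs. observed annual cost (illustrative only)
xs = [100, 200, 300, 400]
ys = [1250, 1500, 1750, 2000]
a, b = fit_line(xs, ys)
print(a, b)  # 2.5 1000.0
```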

2 Extension of the Lagrangian Method for Fuzzy Vendor–Buyer Trade Credit Inventory Model

Taha [3] explored the Lagrangian approach and showed how it may be used to tackle inequality constraints while finding the best solution to a nonlinear programming problem with equality requirements. Assume the problem is:

Min y = f(x)
subject to g_i(x) ≥ 0, i = 1, 2, …, m.

3 Methodology for Fuzzy Vendor–Buyer Trade Credit Inventory Model

3.1 Grade Mean Integration Representation Technique (GMIRT) for Fuzzy Vendor–Buyer Trade Credit Inventory Model [11, 14]

Chen and Hsieh proposed the graded mean integration representation method (GMIRM), based on the integral value of the graded mean h-level of a generalized fuzzy number, for defuzzification. Let B̃ be a pentagonal fuzzy number (PFN), denoted as B̃ = (b1, b2, b3, b4, b5).

16

K. Kalaiarasi et al.

Then we can get the graded mean integration representation (GMIR) of B̃ as

P(B̃) = (b1 + 3b2 + 4b3 + 3b4 + b5) / 12
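This defuzzification formula is straightforward to compute; a small sketch follows.

```python
# Graded mean integration of a pentagonal fuzzy number with weights
# 1, 3, 4, 3, 1 (summing to 12), as in the formula above.

def gmir(b):
    """Graded mean integration representation P(B) of B = (b1, ..., b5)."""
    b1, b2, b3, b4, b5 = b
    return (b1 + 3 * b2 + 4 * b3 + 3 * b4 + b5) / 12

print(gmir((1, 2, 3, 4, 5)))  # 3.0 -- a symmetric PFN defuzzifies to its middle value
```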

4 Highly Integrated Optimization Model with Processing Orders, Cost Savings, and Allowable Payment Delays

4.1 Notations

The following notations are used to develop the fuzzy vendor–buyer trade credit inventory model:

L → lot size per production run
D → regular demand
P → rate of production, P > D
Vs → setup cost
Vh → vendor's annual storage cost
Bh → buyer's storage cost
U → purchasing cost in rupees per unit
N → overall number of shipments made by the vendor to the purchaser as part of a batch (a positive number)
s → size of a shipment from seller to buyer
Ft → fixed price of transportation
O → the buyer's order-processing cost per unit time
S0 → time spent processing each shipment of orders
d → permissible delay in account settlement
c → carrying price per dollar annually

5 Mathematical Model for Fuzzy Vendor–Buyer Trade Credit Inventory Model The total cost of vendor and buyer


JTC(N, L, S0) = D[Vs + N(T + O·S0)]/L + (L/2N)[(N − 2)(1 − D/P)Vh + Vh + Bh + Uc/(1 + cd)]   (1)

Minimizing, we set ∂JTC/∂L = 0. At a given N, let JTC(L) = JTC(N, L, S0); then

L* = √( 2ND[Vs + N(T + O·S0)] / [ (N − 2)(1 − D/P)Vh + Vh + Bh + Uc/(1 + cd) ] )   (2)

Here Eq. (1) treats the annual demand, the vendor's setup cost per production run, the unit stock-holding costs of vendor and buyer, and the vendor's shipment size as fixed constants for a given L.
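Equations (1) and (2) can be sketched numerically; all parameter values below are hypothetical, chosen only to check that the closed-form L* indeed minimizes the joint total cost.

```python
# Sketch of the joint total cost (1) and optimal lot size (2); parameter
# values are illustrative assumptions, not data from the paper.

def jtc(L, N, S0, D, P, Vs, Vh, Bh, U, c, d, T, O):
    """Joint total cost of Eq. (1)."""
    return (D * (Vs + N * (T + O * S0)) / L
            + L / (2 * N) * ((N - 2) * (1 - D / P) * Vh + Vh + Bh
                             + U * c / (1 + c * d)))

def l_star(N, S0, D, P, Vs, Vh, Bh, U, c, d, T, O):
    """Optimal lot size of Eq. (2)."""
    denom = (N - 2) * (1 - D / P) * Vh + Vh + Bh + U * c / (1 + c * d)
    return (2 * N * D * (Vs + N * (T + O * S0)) / denom) ** 0.5

p = dict(N=4, S0=4, D=1000, P=3200, Vs=400, Vh=6, Bh=8, U=25,
         c=0.1, d=0.5, T=50, O=2)
L = l_star(**p)
print(jtc(L, **p) <= min(jtc(0.95 * L, **p), jtc(1.05 * L, **p)))  # True
```

The check exploits the fact that (1) has the convex form A/L + B·L in L, so its unique minimizer is the closed form (2).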

5.1 Inventory Model of Crisp Production Quantity (CPQ) for Fuzzy Vendor–Buyer Trade Credit Inventory Model

In this paper, $\tilde{D}, \tilde{P}, \tilde{V}_s, \tilde{V}_h, \tilde{B}_h, \tilde{O}, \tilde{U}, \tilde{c}, \tilde{T}$ are the fuzzified parameters used to simplify the Inventory Model (IM). The integrated Total Inventory Cost (TIC) for vendor and buyer is

$$JTC(N, L, S_0) = \frac{D[V_s + N(T + OS_0)]}{L} + \frac{L}{2N}\left[(N-2)\left(1-\frac{D}{P}\right)V_h + V_h + B_h + \frac{Uc}{1+cd}\right]$$

With pentagonal components, $J\tilde{T}C(N, L, S_0)$ is the pentagonal number whose $i$-th component ($i = 1, \dots, 5$) is

$$JTC_i = \frac{D_i[V_{si} + N(T_i + O_iS_0)]}{L} + \frac{L}{2N}\left[(N-2)\left(1-\frac{D_{6-i}}{P_i}\right)V_{hi} + V_{hi} + B_{hi} + \frac{U_ic_i}{1+c_{6-i}d}\right] \quad (3)$$

where the reversed indices $D_{6-i}$ and $c_{6-i}$ arise from pentagonal fuzzy division.

Applying the GMIR formula,

$$P(J\tilde{T}C(N, L, S_0)) = \frac{1}{12}\left[JTC_1 + 3\,JTC_2 + 4\,JTC_3 + 3\,JTC_4 + JTC_5\right] \quad (4)$$

where $JTC_i$ denotes the $i$-th pentagonal component listed in Eq. (3).

The optimal production quantity $L^*$ of Eq. (2) is obtained when $P(J\tilde{T}C(N, L, S_0))$ is minimized. Setting the derivative with respect to $L$ to zero,

$$\frac{\partial P(J\tilde{T}C(N, L, S_0))}{\partial L} = 0,$$

we find the optimal production quantity

$$L = L^* = \sqrt{\frac{2N\left[D_1(V_{s1}+N(T_1+O_1S_0)) + 3D_2(V_{s2}+N(T_2+O_2S_0)) + 4D_3(V_{s3}+N(T_3+O_3S_0)) + 3D_4(V_{s4}+N(T_4+O_4S_0)) + D_5(V_{s5}+N(T_5+O_5S_0))\right]}{\Delta}} \quad (5)$$

where

$$\Delta = (N-2)\left[\left(1-\frac{D_5}{P_1}\right)V_{h1} + 3\left(1-\frac{D_4}{P_2}\right)V_{h2} + 4\left(1-\frac{D_3}{P_3}\right)V_{h3} + 3\left(1-\frac{D_2}{P_4}\right)V_{h4} + \left(1-\frac{D_1}{P_5}\right)V_{h5}\right] + (V_{h1}+3V_{h2}+4V_{h3}+3V_{h4}+V_{h5}) + (B_{h1}+3B_{h2}+4B_{h3}+3B_{h4}+B_{h5}) + \frac{U_1c_1}{1+c_5d} + \frac{3U_2c_2}{1+c_4d} + \frac{4U_3c_3}{1+c_3d} + \frac{3U_4c_4}{1+c_2d} + \frac{U_5c_5}{1+c_1d}$$

Therefore, $L$ in Eq. (5) is the fixed constant while the demand, production costs, purchase costs, annual demand, setup costs, stock-holding costs of vendor and buyers, shipment size from vendor to buyer, and transportation costs are represented as fuzzy numbers.

5.2 Inventory Model for Fuzzy Production Quantity for Fuzzy Vendor–Buyer Trade Credit Inventory Model

Now the Inventory Model (IM) is changed from a Crisp Quantity (CQ) to a Fuzzy Quantity (FQ). Let the Fuzzified Production Quantity (FPQ) $\tilde{L}$ be a Pentagonal Fuzzy Number (PFN), $\tilde{L} = (L_1, L_2, L_3, L_4, L_5)$ with $0 < L_1 \le L_2 \le L_3 \le L_4 \le L_5$. The $i$-th pentagonal component of the total cost ($i = 1, \dots, 5$) is

$$JTC_i = \frac{D_i[V_{si} + N(T_i + O_iS_0)]}{L} + \frac{L}{2N}\left[(N-2)\left(1-\frac{D_{6-i}}{P_i}\right)V_{hi} + V_{hi} + B_{hi} + \frac{U_ic_i}{1+c_{6-i}d}\right] \quad (6)$$


The Graded Mean Integration Representation (GMIR) of $J\tilde{T}C(N, L, S_0)$ follows by applying the GMIR formula to Eq. (6), with the fuzzy lot-size components substituted ($L_{6-i}$ in the first denominator and $L_i$ in the second term of the $i$-th component):

$$P(J\tilde{T}C(N, L, S_0)) = \frac{1}{12}\sum_{i=1}^{5} w_i\left\{\frac{D_i[V_{si}+N(T_i+O_iS_0)]}{L_{6-i}} + \frac{L_i}{2N}\left[(N-2)\left(1-\frac{D_{6-i}}{P_i}\right)V_{hi} + V_{hi} + B_{hi} + \frac{U_ic_i}{1+c_{6-i}d}\right]\right\} \quad (7)$$

where $(w_1, w_2, w_3, w_4, w_5) = (1, 3, 4, 3, 1)$.

with $0 < L_1 \le L_2 \le L_3 \le L_4 \le L_5$, i.e., $L_2 - L_1 \ge 0$, $L_3 - L_2 \ge 0$, $L_4 - L_3 \ge 0$, $L_5 - L_4 \ge 0$, and $L_1 > 0$. The Lagrange Approach (LA) is used to find the values $L_1, L_2, L_3, L_4, L_5$ that minimize $P(J\tilde{T}C(N, L, S_0))$; at the optimum, $L_1 = L_2 = L_3 = L_4 = L_5 = L^*$. The optimal Fuzzified Production Quantity (FPQ) is then

$$L^* = \sqrt{\frac{2N\left[D_1(V_{s1}+N(T_1+O_1S_0)) + 3D_2(V_{s2}+N(T_2+O_2S_0)) + 4D_3(V_{s3}+N(T_3+O_3S_0)) + 3D_4(V_{s4}+N(T_4+O_4S_0)) + D_5(V_{s5}+N(T_5+O_5S_0))\right]}{\begin{array}{l}(N-2)\left[\left(1-\frac{D_5}{P_1}\right)V_{h1} + 3\left(1-\frac{D_4}{P_2}\right)V_{h2} + 4\left(1-\frac{D_3}{P_3}\right)V_{h3} + 3\left(1-\frac{D_2}{P_4}\right)V_{h4} + \left(1-\frac{D_1}{P_5}\right)V_{h5}\right]\\ {} + (V_{h1}+3V_{h2}+4V_{h3}+3V_{h4}+V_{h5}) + (B_{h1}+3B_{h2}+4B_{h3}+3B_{h4}+B_{h5})\\ {} + \frac{U_1c_1}{1+c_5d} + \frac{3U_2c_2}{1+c_4d} + \frac{4U_3c_3}{1+c_3d} + \frac{3U_4c_4}{1+c_2d} + \frac{U_5c_5}{1+c_1d}\end{array}}} \quad (8)$$

Therefore, $L^*$ in Eq. (8) is the fuzzified fixed constant while the demand, production costs, purchase costs, annual demand, setup costs per production run for the vendor, stock-holding costs per item per year for vendor and buyer, and shipment sizes from vendor to buyer are represented as fuzzy numbers.
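A minimal sketch of Eq. (8), assuming the pentagonal parameters are passed as 5-tuples; as a sanity check, making all five components equal collapses Eq. (8) to the crisp formula of Eq. (2) and reproduces L* = 1182.77 with the Table 1 values (reading the table's Vb as Vs).

```python
import math

def fuzzy_L_star(N, S0, D, P, Vs, Vh, Bh, O, U, T, c, d):
    # Eq. (8): D, P, Vs, Vh, Bh, O, U, T, c are 5-tuples (pentagonal
    # components); N, S0, d are crisp.  GMIR weights (1, 3, 4, 3, 1)
    # and the reversed pairings D[6-i]/P[i], c[6-i] are used as above.
    w = (1, 3, 4, 3, 1)
    num = 2 * N * sum(w[i] * D[i] * (Vs[i] + N * (T[i] + O[i] * S0))
                      for i in range(5))
    den = ((N - 2) * sum(w[i] * (1 - D[4 - i] / P[i]) * Vh[i] for i in range(5))
           + sum(w[i] * Vh[i] for i in range(5))
           + sum(w[i] * Bh[i] for i in range(5))
           + sum(w[i] * U[i] * c[i] / (1 + c[4 - i] * d) for i in range(5)))
    return math.sqrt(num / den)

# Degenerate check: equal components reduce Eq. (8) to Eq. (2).
five = lambda v: (v,) * 5
L = fuzzy_L_star(2, 0.105, five(2700.0), five(9000.0), five(200.0),
                 five(2.0), five(5.0), five(1400.0), five(10.0),
                 five(300.0), five(0.15), 0.25)
print(round(L, 2))  # 1182.77
```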

6 Numerical Analysis for Fuzzy Vendor–Buyer Trade Credit Inventory Model

The following numerical example is solved with the suggested analytic method to demonstrate the findings of this paper. Consider an inventory management system with the parameters in Table 1.

Table 1 Fuzzy inventory vendor–buyer model parameters (crisp sense)

D = 2700, P = 9000, Bh = 5.00, Vh = 2.00, O = 1400, Vb = 200, U = 10, T = 300, d = 0.25, c = 0.15

With N = 2, L* = 1182.77, and S0 = 0.105, the total cost is JTC(N, L, S0) = 5144.


This example can be converted into the fuzzy parameters $\tilde{D}, \tilde{P}, \tilde{V}_s, \tilde{V}_h, \tilde{B}_h, \tilde{O}, \tilde{U}, \tilde{c}, \tilde{T}$ as shown in Tables 2 and 3 and Fig. 1.

Table 2 Fuzzy inventory vendor–buyer model parameters (fuzzy sense)

D = 2700, P = 9000, Bh = 5.00, Vh = 2.00, O = 1400, Vb = 200, U = 10, T = 300, d = 0.25, c = 0.15

With N = 2, L* = 1183, and S0 = 0.105, the total cost is JTC(N, L, S0) = 4533.

Table 3 Variation of fuzzy inventory vendor–buyer total cost

S. no   N    S0      L      JTC(N, L, S0)
1       1    0.100   1179   4430
2       2    0.105   1183   4533
3       3    0.200   1190   4620
4       4    0.205   1195   4750
5       5    0.300   1200   4820
6       6    0.305   1256   4963
7       7    0.400   1345   5124
8       8    0.405   1456   5345
9       9    0.500   1498   5598
10      10   0.505   1502   6246

[Figure 1 is a bar chart of the total production cost against S. no, S0, and L for the ten cases of Table 3.]

Fig. 1 Variation of total cost for fuzzy vendor–buyer trade credit


The minimum fuzzified overall production Inventory Cost (IC) of vendor and buyer is JTC(N, L, S0) = (4533, 4918, 5122, 5232, 5302).

7 Supervised Learning (SL) for Fuzzy Vendor–Buyer Trade Credit Inventory Model

Supervised learning is applied here by identifying the best-fit linear line between the independent and dependent variables of the fuzzy vendor–buyer trade credit inventory model. Python linear regression can be used to find correlations among the data points. In the example below, the y-axis shows the total cost of vendor and buyer, and the x-axis the quantity per production rate; the lot size per manufacturing run and the total cost of the fuzzy seller–buyer trade credit inventory system are included. To perform the linear regression:

import matplotlib.pyplot as plt
from scipy import stats

x = [0.1, 0.105, 0.2, 0.235, 0.352]
y = [4430, 4533, 4620, 4750, 4820]

slope, intercept, r, p, std_err = stats.linregress(x, y)

def myfunc(x):
    return slope * x + intercept

mymodel = list(map(myfunc, x))

plt.scatter(x, y)
plt.plot(x, mymodel)
plt.show()

The y-axis indicates the overall cost of vendor and buyer, and the x-axis denotes the lot size per production rate of the proposed inventory model. The variation of the optimal ordering quantities and overall cost of the fuzzified inventory system is demonstrated in Fig. 2 using Python linear regression.


Fig. 2 Best fit linear line of inventory total cost per production lot size

8 Conclusion

To minimize the overall cost for buyer and vendor under the constraints of customer time production and allowable payment delays, this study provided fuzzy models for an ideal Inventory Model (IM). The first model treats L as a fixed constant while representing demand, production costs, purchase costs, annual demand, the vendor's setup costs per production run, holding costs, the shipment size of vendor and buyer, transportation costs, and carrying costs as Fuzzy Numbers (FN). The second model represents L itself as a fuzzy number. To estimate the total projected cost of buyer and vendor in fuzzy form, the graded mean integration representation technique is applied for the defuzzification process. Finally, the appropriate ordering net size is derived to achieve maximum profit, and numerical analysis is done with linear regression in machine learning.

9 Results and Discussions

The graded mean integration representation method is used for the defuzzification process in order to estimate the entire predicted cost of the buyer and the vendor in fuzzy form for the fuzzy model. Finally, using machine learning's linear regression, the correct ordering net size is determined in order to achieve the greatest profit of the fuzzy vendor–buyer trade credit inventory model.


References

1. Ali SS, Barman H, Kaur R, Tomaskova H, Roy SK (2021) Multi-product multi echelon measurements of perishable supply chain: fuzzy non-linear programming approach. Mathematics 9:2093. https://doi.org/10.3390/math9172093
2. Pal B, Sarkar A, Sarkar B (2023) Optimal decisions in a dual-channel competitive green supply chain management under promotional effort. Expert Syst Appl 211:118315. https://doi.org/10.1016/j.eswa.2022.118315
3. Chand S, Ward J (1987) A note on "economic order quantity under conditions of permissible delay in payments". J Oper Res Soc 38:83–84. https://doi.org/10.1057/jors.1987.10
4. Chen SH, Hsieh CH (1999) Graded mean integration representation of generalized fuzzy number
5. Pervin M, Roy SK, Weber G-W (2018) Analysis of inventory control model with shortage under time-dependent demand and time-varying holding cost including stochastic deterioration. Ann Oper Res 260(1–2):437–460. https://doi.org/10.1007/s10479-016-2355-5
6. Chang H-J, Dye C-Y, Chuang B-R (2002) An inventory model for deteriorating items under the condition of permissible delay in payments. Yugoslav J Oper Res 12. https://doi.org/10.2298/YJOR0201073C
7. Ghosh S, Küfer KH, Roy SK et al (2022) Carbon mechanism on sustainable multi-objective solid transportation problem for waste management in Pythagorean hesitant fuzzy environment. Complex Intell Syst 8:4115–4143. https://doi.org/10.1007/s40747-022-00686-w
8. Kalaichelvan KK, Soundaria R, Kausar N, Agarwal P, Aydi H, Alsamir H (2021) Optimization of the average monthly cost of an EOQ inventory model for deteriorating items in machine learning using PYTHON. Therm Sci 25:347–358. https://doi.org/10.2298/TSCI21S2347K
9. Chung K-J (1998) A theorem on the determination of economic order quantity under conditions of permissible delay in payments. Comput Oper Res 25(1):49–52. https://doi.org/10.1016/S0305-0548(98)80007-5
10. Chaudhary R, Mittal M, Jayaswal MK (2023) A sustainable inventory model for defective items under fuzzy environment. Decis Anal J 7:100207. https://doi.org/10.1016/j.dajour.2023.100207
11. Alberto C, Laura M (2009) Generalized convexity and optimization: theory and applications. Lect Notes Econ Math Syst 616. https://doi.org/10.1007/978-3-540-70876-6
12. Kaufmann M, Gupta M (1985) Introduction to fuzzy arithmetic: theory and applications. Van Nostrand Reinhold, New York
13. San-José LA, Sicilia J, Abdul-Jalbar B (2021) Optimal policy for an inventory system with demand dependent on price, time and frequency of advertisement. Comput Oper Res 128:105169. https://doi.org/10.1016/j.cor.2020.105169
14. Taha HA (1997) Operations research. Prentice-Hall, Englewood Cliffs, NJ, USA, pp 753–777
15. Vadivelu K, Sugapriya C, Jacob K, Deivanayagampillai N (2023) Fuzzy inventory model for imperfect items with price discount and penalty maintenance cost. Math Probl Eng 2023:15. https://doi.org/10.1155/2023/1246257

Comparative Analysis of Hardware and Software Utilization in the Implementation of 4-Bit Counter Using Different FPGAs Families Shishir Shrivastava and Amanpreet Kaur

Abstract Various regions around the country are currently dealing with severe shortages of energy supply. The research and development of an energy-efficient 4-bit counter using the Vivado software platform and a number of FPGA families has advanced environmentally aware communication. Genesys, Kintex, and Virtex are the three distinct FPGA families compared here, each from the 7 series, in an effort to find the FPGA series that consumes the least power overall. The Genesys, Kintex, and Virtex families consume 3.863 W, 2.789 W, and 8.765 W, respectively, and their temperatures are 53.1, 71.1, and 70.2 °C, respectively. Using the Genesys family of 7-series components rather than either of the other two families can reduce the power lost to dissipation. Kintex has an energy efficiency that is comparable to that of other FPGA families.

Keywords Combinational circuits · Sequential circuits · 4-bit counter · Field programmable gate array (FPGA) · Energy

1 Introduction

Combinational digital circuits only produce signals determined by the signals fed into them. Hence, a combinational circuit's output depends only on its current input values, not on its internal state or prior inputs. AND, OR, and NOT gates, as well as multiplexers, decoders, and encoders, make up combinational circuits. A logic network with these gates and elements performs a certain function. Adders, subtractors, multiplexers,

S. Shrivastava (B) · A. Kaur
Institute of Engineering and Technology, Chitkara University, Punjab, India
e-mail: [email protected]
A. Kaur
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_3


and demultiplexers are combinational circuits. Digital systems use combinational circuits for data processing, arithmetic, and logic [1]. Sequential digital circuits have memory and feedback. Sequential circuits' outputs depend on both current input values and prior state. Sequential circuits store and process data over time. Flip-flops, registers, counters, and shift registers are sequential circuits. State diagrams or state tables represent a circuit's behavior as a succession of states and transitions. Verilog and VHDL can implement sequential circuits [2]. A 4-bit counter is a digital circuit that can count from 0 to 15 in steps of 1 (binary 0000 to 1111, or 0 to F in hexadecimal). In this implementation, we will be using an FPGA board called Genesys, which contains a Kintex or Virtex power FPGA chip. The counter has four flip-flops that can each store a single bit of information, and these flip-flops are connected in such a way that they change state sequentially in response to clock pulses (Fig. 1 and Table 1).

Fig. 1 4-bit counter circuit diagram

Table 1 Truth table of 4-bit counter

Clock pulse   Q3   Q2   Q1   Q0
0             0    0    0    0
1             0    0    0    1
2             0    0    1    0
3             0    0    1    1
4             0    1    0    0
5             0    1    0    1
6             0    1    1    0
7             0    1    1    1
8             1    0    0    0
9             1    0    0    1
10            1    0    1    0


In this truth table, “CP” represents the clock input, while the numbers “Q3,” “Q2,” “Q1,” and “Q0” represent the flip-flop outputs in that sequence. The contents of the flip-flop outputs are changed in accordance with the following binary counting sequence: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111 [3]. An FPGA, or Field-Programmable Gate Array, is a programmable logic device that allows users to implement digital circuits and systems on the hardware level. FPGAs are highly customizable and offer fast execution times, making them a popular choice for prototyping and implementing digital designs [4]. The implementation of a 4-bit counter using an FPGA board involves designing and implementing the counter circuit in a hardware description language (HDL) such as Verilog or VHDL. The HDL code is then synthesized using a design tool such as Xilinx Vivado, which generates a bitstream file that can be loaded onto the FPGA board [5]. The implementation of the counter circuit on the Genesys board with a Kintex or Virtex power FPGA chip will vary in terms of performance and capabilities. The Kintex FPGA is a mid-range FPGA with lower power consumption and lower cost, while the Virtex FPGA is a high-end FPGA with higher performance and higher cost. The choice between the two depends on the specific requirements of the project and the budget constraints. In terms of hardware comparison, the Genesys board offers a range of features such as high-speed USB, Ethernet, and onboard memory, which make it suitable for a wide range of digital design applications. The Kintex or Virtex FPGA chips on the board offer high-speed processing and low power consumption, making them ideal for applications that require high performance and low energy consumption. 
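The paper's actual implementation is an HDL design synthesized in Vivado, but the counting behavior described above can be sketched as a small software model (illustrative only, not the paper's code); it reproduces the (Q3, Q2, Q1, Q0) sequence of Table 1, wrapping from 1111 back to 0000.

```python
# Illustrative software model of the 4-bit counter's counting sequence.
class Counter4Bit:
    def __init__(self):
        self.q = [0, 0, 0, 0]  # [Q3, Q2, Q1, Q0], reset state 0000

    def clock(self):
        # Advance one clock pulse; 4-bit value wraps modulo 16.
        value = (int("".join(map(str, self.q)), 2) + 1) % 16
        self.q = [int(b) for b in format(value, "04b")]
        return tuple(self.q)

ctr = Counter4Bit()
seq = [ctr.clock() for _ in range(16)]
print(seq[9])  # state after the 10th pulse: (1, 0, 1, 0), binary 1010
```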
In this research work, implementing a 4-bit counter using an FPGA board such as Genesys with Kintex or Virtex power FPGA chips involves designing and synthesizing the counter circuit in an HDL, generating a bitstream file, and loading it onto the board. The choice of FPGA chip depends on the project requirements and budget, and the Genesys board offers a range of features that make it suitable for a wide range of digital design applications. When FPGA designs are implemented in the Vivado tools, the tools report LUTs, IO, FF, BUFG, power, and temperature, which are significant metrics during the design and implementation of a digital circuit. LUTs, or Look-Up Tables, are the basic building blocks of digital circuits in Field-Programmable Gate Arrays (FPGAs). They are used to implement combinatorial logic functions. Vivado tools report the number of LUTs used in the design, which can help the designer to optimize the design for size and performance [6]. IO refers to Input/Output, the pins used to communicate with the outside world. Vivado tools report the number of IO pins used in the design, as well as their voltage levels, drive strength, and other parameters. FF, or Flip-Flops, are used to implement sequential logic functions in FPGAs. They store the state of a circuit and update it based on input signals. Vivado tools report the number of FFs used in the design, which can help the designer to optimize the design for timing and power.


BUFG, or Global Clock Buffer, is a special buffer that is used to distribute clock signals across an FPGA. Vivado tools report the number of BUFGs used in the design, which can help the designer to optimize the design for timing and power [7]. Power is an important metric in digital circuit design, as excessive power consumption can lead to overheating and premature failure of the circuit. Vivado tools report the estimated power consumption of the design, based on the logic elements used and their activity patterns. Temperature is also an important metric in digital circuit design, as high temperatures can lead to thermal runaway and failure of the circuit. Vivado tools report the estimated temperature of the design, based on the power consumption and the thermal characteristics of the FPGA. In summary, LUTs, IO, FF, BUFG, power, and temperature are all important metrics that are reported by Vivado tools during the design and implementation of a digital circuit. These metrics can help the designer to optimize the design for size, performance, timing, and power consumption, and ensure that the circuit operates reliably under a range of operating conditions [8].

2 Related Work

For counter-based multi-ported memory architectures, Navid Shahrouzi et al. proposed four new optimal models. These models allow the internal structures of R/W-port-rich digital multi-meter (DMM) modules to be modified and improved, simplifying design and boosting operational frequency. Due to the design's simplicity, next-generation FPGAs' changeable hard logic can be reconfigured to change the DMM module's internal architecture and routing [9]. Min Zhang and others developed and tested an FPGA-based TDC. DNL and INL were 5.5 ps and 11.6 ps, respectively, and the resolution was 7.4 ps. Experiments show that the suggested TDC improves temperature and voltage sensitivity and offers a new TDC architecture with a 1024-unit measuring matrix. The counter method, which has mostly been used for "coarse" measurements, may find new uses with this approach. The suggested TDC can be made easily, cheaply, and quickly using FPGA devices [10]. Yonggang Wang et al. determined that current thermometer-to-binary encoders for time-to-digital converter (TDC) applications fail due to a substantial bubble problem in the tapped delay line (TDL) of recent FPGAs. They proposed combining four standard delay chains in a Kintex-7 FPGA to make a high-quality TDL, with the intention of implementing a TDC with high time precision, and used its global bubble error demonstration and application to elucidate the matter. The TDC design shows that skipping the ones-counter encoder bubbles yields the same results as bin reordering. Tests show several delay chains [11]. Lampros Pyrgas et al. suggested a thermal sensor-based approach to detect fraudulent FPGA hardware. A ring oscillator with three inverters and an RNS ring counter make the sensor tiny. A 6 × 5 grid of sensors covered the FPGA implementation. For low-cost


thermal sensing, the sensor grid uses only 1.9% of FPGA resources, showing that this sensor design methodology is efficient and suitable for hardware Trojan detection [12]. An FPGA-based time-to-digital converter (TDC) using the Spartan 6 was developed by Alessandro Tontini et al. A low-cost, low-gate-count FPGA can offer a 26-ps TDC with an LSB SSP of 0.69–1.46 using a medium transmission-line model. Dealing directly with changes in the delay-line structure and conversion methods improves the circuit design of such devices [13]. With a 130-nm flash FPGA (Actel), Jie Zhang et al. constructed a two-stage TDC with an 8.5-ps mean resolution. A low-cost FPGA's logic gate delay time does not limit the delay-line loop shortening method's resolution. The FSI and SSI resolutions, which recorded time intervals with great resolution and precision, were determined by the two delay-line loops' entire delay periods. This research produced a two-stage TDC with 8.5-ps mean resolution and 42.4-ps standard deviation. Precision increases the jitter of cyclic pulses, which may compromise resolution. The shrinking delay-line loops' symmetrical structure and delay-locked loop stabilized their entire delay time, lowering PVT volatility [14]. Yuelai Yuan et al. optimized GPC mappings by efficiently exploiting the carry input of the carry chain and each LUT for Xilinx FPGAs and developed novel GPC looping and binding methods to conserve design area and pack GPCs more compactly. Compressor trees reduce the slice count of a swift multiplier by 27.86%, making them more effective than adder trees for large-scale designs [15]. Spencer Valancius et al. emulate neuromorphic designs with FPGAs. They detail their hardware design decisions for each architectural component, assess hardware resource utilization, confirm their emulation environment's operation, and demonstrate it using IBM's TrueNorth.
This allowed us to swiftly alter fundamental components without having to reconfigure our FPGA place-and-route tool chains to match asynchronous timing requirements and study applications that are hard to map owing to architectural constraints, like VMM, which requires neuron copies to function [16].

3 Implementation on 4-bit Counter Using Different FPGAs Families and Results

In this work, Genesys, Kintex, and Virtex FPGA boards are used to implement a 4-bit counter. Using the Vivado software, the schematic circuit diagram shown in Fig. 2 was obtained. After implementation on the various FPGA boards, the changes in hardware utilization, temperature, and power consumption are evaluated. With the Genesys board, the chip's thermal margin reaches 53.1 °C, its effective junction reaches 1.8 °C/W, and its overall power consumption rises to 3.863 W. These numbers represent the board's most stringent guidelines for the criteria in question, as far as the allowable limitations are concerned. According to the calculations, it was found that the


Fig. 2 Schematic diagram of 4-bit counter on Vivado tools

dynamic power is 3.681 W, whereas the static power is only 0.182 W. According to Fig. 3, signals use 0.070 W, logic uses 0.044 W, and input/output uses 3.561 W. The board provides 203,800 LUTs, 407,600 FFs, 500 I/O blocks, and 32 BUFGs. Of these, the design uses 1% of the I/O blocks, 1% of the FFs, 3% of the BUFGs, and 1% of the LUTs, as shown in Fig. 4. With the Kintex board, the chip reaches a thermal margin of 71.1 °C and an effective junction of 1.4 °C/W, and the total power consumed by the chip is 2.789 W. As far as the acceptable restraints extend, these are the numbers the board considers the most rigorous guidelines for the criterion under discussion. The computations showed that the dynamic power

Fig. 3 Power consumption results on Vivado using Genesys board



Fig. 4 a Hardware utilization graph results on Vivado using Genesys board b hardware utilization table results on Vivado using Genesys board

is 2.137 W, whereas the static power is 0.651 W. According to Fig. 5, signals consume 0.027 W, logic consumes 0.056 W, and input/output consumes 2.054 W. The board provides 331,680 LUTs, 663,380 FFs, 520 I/O blocks, and 624 BUFGs. Of these, the design uses 1% of the I/O blocks, 1% of the FFs and BUFGs, and, as illustrated in Fig. 6, 1% of the LUTs. With the Virtex board, the chip reaches a thermal margin of 70.2 °C and an effective junction of 1.4 °C/W, and its total power consumption rises to 8.765 W. These are the numbers the board deems the most stringent guidelines for the criterion under discussion. The simulations showed that the dynamic power is 6.202 W, while the static power is 2.563 W. According to Fig. 7, signals use 0.017 W, logic uses 0.046 W, and input/output uses


Fig. 5 Power consumption results on Vivado using Kintex board


Fig. 6 a Hardware utilization graph results on Vivado using Kintex board b hardware utilization table results on Vivado using Kintex board

6.139 W. The board provides 1,182,240 LUTs, 2,364,480 FFs, 832 I/O blocks, and 1800 BUFGs. Of these, the design consumes 0.72% of the I/O blocks, 0.01% of the FFs, 0.06% of the BUFGs, and, as can be seen in Fig. 8, 0.01% of the LUTs.


Fig. 7 Power consumption results on Vivado using Virtex board


Fig. 8 a Hardware utilization graph results on Vivado using Virtex board b hardware utilization table results on Vivado using Virtex board

4 Comparative Analysis of 4-bit Counter Using Different FPGAs Boards

Table 2 presents the comprehensive analysis of the power consumption of the Genesys, Kintex, and Virtex boards. Figure 9 illustrates the static and dynamic power consumed by the Genesys, Kintex, and Virtex FPGA boards; Kintex achieves an exceptionally low overall power usage. Comparing the values in Table 2, Genesys achieves the best static power, while Kintex achieves both the best dynamic power and the best total power.


Fig. 9 Comparison graph of power consumption of FPGAs boards

Table 2 Comparison table of power consumption of FPGAs boards

S. no   Board name   Dynamic power (W)   Static power (W)   Total power (W)
1       Genesys      3.681               0.182              3.863
2       Kintex       2.137               0.651              2.789
3       Virtex       6.202               2.563              8.765

Table 3 presents an in-depth analysis of the temperatures of the Genesys, Kintex, and Virtex boards. The thermal margin and the effective junction of the Genesys, Kintex, and Virtex FPGA boards are displayed in Fig. 10. In contrast to its rivals, Genesys achieves both an effective junction and a low thermal margin.

Table 3 Comparison table of temperature of FPGA boards

S. no | Boards name | Thermal margin (°C) | Effective junction (°C/Wt)
1     | Genesys     | 53.1                | 1.8
2     | Kintex      | 71.1                | 1.4
3     | Virtex      | 70.2                | 5
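The thermal figures can be cross-checked with the standard junction-temperature relation T_junction = T_ambient + P_total × θ_JA, and thermal margin = T_j,max − T_junction. The ambient temperature (25 °C) and maximum junction temperature (85 °C) used below are assumptions (common tool defaults), not values reported in the paper:

```python
# Cross-check of the Genesys thermal margin from Table 3.
# ASSUMED: ambient 25 C and maximum junction temperature 85 C
# (typical analysis defaults; not stated in the paper).
T_AMBIENT = 25.0   # C, assumed
T_J_MAX = 85.0     # C, assumed

total_power = 3.863   # W, Genesys total power from Table 2
theta_ja = 1.8        # C/W, Genesys effective junction from Table 3

t_junction = T_AMBIENT + total_power * theta_ja  # estimated junction temp
margin = T_J_MAX - t_junction                    # remaining thermal headroom
print(f"junction = {t_junction:.1f} C, margin = {margin:.1f} C")
```

Under these assumptions the computed margin is about 53.0 °C, within rounding of the 53.1 °C reported for Genesys, since θ_JA itself is quoted to only one decimal place.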


Fig. 10 Comparison graph of temperatures of FPGA boards

5 Conclusion

In this research, a 4-bit counter was used to measure the power consumption of boards from the Genesys, Kintex, and Virtex FPGA families. The power dissipation obtained was 3.863 W, 2.789 W, and 8.765 W respectively, and the temperatures attained were 53.1, 71.1, and 70.2 °C. Kintex achieved an energy saving of about 96.1% relative to its own utilization levels, and Genesys conserved roughly 91.25% of the energy compared with Virtex. Because Kintex requires only a low amount of power, its power consumption compares favourably with that of the other FPGA families.

References

1. Meade T et al (2017) Revisit sequential logic obfuscation: attacks and defenses. In: 2017 IEEE international symposium on circuits and systems (ISCAS). IEEE
2. Sandhu A, Singh D, Sindhu RK (2021) Energy dissipation analysis of sequential circuits in QCA. Nveo-Nat Volat Essent Oils J NVEO 1421–1431
3. Noroozi M, Pirsiavash H, Favaro P (2017) Representation learning by learning to count. In: Proceedings of the IEEE international conference on computer vision
4. Anand V, Kaur A (2017) Implementation of energy efficient FIR Gaussian filter on FPGA. In: 2017 4th international conference on signal processing, computing and control (ISPCC). IEEE
5. Pandit S, Shet V (2017) Review of FPGA based control for switch mode converters. In: 2017 Second international conference on electrical, computer and communication technologies (ICECCT). IEEE
6. Shrivastava A et al (2021) VLSI implementation of green computing control unit on Zynq FPGA for green communication. Wirel Commun Mob Comput 2021:1–10


7. Singh S, Soni S Hardware security model with VEDIC multiplier based ECC algorithm on high-performance FPGA device
8. Rehman BK et al (2022) FPGA implementation of area-efficient binary counter using Xilinx IP cores. In: Sustainable infrastructure development: select proceedings of ICSIDIA. Springer, pp 147–156
9. Shahrouzi SN, Perera DG (2018) Optimized counter-based multi-ported memory architectures for next-generation FPGAs. In: 2018 31st IEEE international system-on-chip conference (SOCC). IEEE
10. Zhang M, Wang H, Liu Y (2017) A 7.4 ps FPGA-based TDC with a 1024-unit measurement matrix. Sensors 17(4):865
11. Wang Y et al (2017) A 3.9-ps RMS precision time-to-digital converter using ones-counter encoding scheme in a Kintex-7 FPGA. IEEE Trans Nucl Sci 64(10):2713–2718
12. Pyrgas L et al (2017) Thermal sensor based hardware Trojan detection in FPGAs. In: 2017 Euromicro conference on digital system design (DSD). IEEE
13. Tontini A et al (2018) Design and characterization of a low-cost FPGA-based TDC. IEEE Trans Nucl Sci 65(2):680–690
14. Zhang J, Zhou D (2017) An 8.5-ps two-stage Vernier delay-line loop shrinking time-to-digital converter in 130-nm flash FPGA. IEEE Trans Instrum Meas 67(2):406–414
15. Yuan Y et al (2019) Area optimized synthesis of compressor trees on Xilinx FPGAs using generalized parallel counters. IEEE Access 7:134815–134827
16. Valancius S et al (2020) FPGA based emulation environment for neuromorphic architectures. In: 2020 IEEE international parallel and distributed processing symposium workshops (IPDPSW). IEEE

Soil Monitoring Robot for Precision Farming

K. Umapathy, S. Omkumar, T. Dinesh Kumar, M. A. Archana, and M. Sivakumar

Abstract IoT is a technique which involves both actuators and sensors with a connection to the internet. IoT can be implemented in the field of agriculture to enhance its quality. Farming is the essential occupation of the country, by which its economic status can be improved. But with the existing technology in agriculture, productivity is very low compared to other countries in the world. Every year, huge losses are faced by farmers due to improper application of pesticides among crops and the lack of suitable soil monitoring techniques. This paper suggests a soil monitoring robot which monitors and displays vital factors such as leaf disease detection, pH sensing, ambient temperature, humidity, and soil moisture using various types of sensors instead of manual checks. The farm situation is notified to farmers using a mobile application.

Keywords IoT · Humidity · Robot · Farming · Android · Blynk

1 Introduction

Farming is the vital occupation of India. The impact of chemical and mechanical advancements has been booming in agriculture, enhancing its productivity and helping handle crop diseases, but the impact of digitization seems to be less. Hence, automated farming is indispensable. If this concept is implemented, time and energy can be saved on routine activities, thereby increasing the output of the fields. Moreover, early detection of crop disease is an indispensable activity required for farming. The early detection of disease can be done by exploiting the concept of image processing and

K. Umapathy (B) · S. Omkumar · T. Dinesh Kumar · M. A. Archana
Department of Electronics and Communication Engineering, SCSVMV Deemed University, Kanchipuram, Tamil Nadu, India
e-mail: [email protected]

M. Sivakumar
Department of Electronics and Communication Engineering, Dhanalakshmi College of Engineering, Chennai, Tamil Nadu, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_4



can intimate the farmers well in advance in order to protect the plants from diseases. The proposed system in this paper concentrates on both soil monitoring and detection of plant diseases using a Raspberry Pi controller along with the respective sensors. A mobile application is developed so that farmers can view the present condition of the crops.

2 Literature Survey

Zigbee and Wi-Fi techniques have been implemented (Gondchawar and Kawitkar 2016) for the measurement of temperature and humidity using remote devices connected to the internet [1]. The concept of wireless networks has been employed for monitoring the quality of the soil, measuring parameters related to the environment where farming is carried out; a microcontroller integrated with appropriate sensors is employed in this system [2]. Sometimes a colour conversion from RGB to HSI is applied to the input images using the concept of image processing; this transformation is done for the detection of crop diseases [3]. The strategy was implemented at a remote place for measuring various soil parameters [4]. In addition to the soil monitoring process, water management and the detection of nitrogen have been discussed for the purpose of irrigation [5]. An android application along with a microcontroller is employed in this system (Srruthilaya and Umapathy 2019) for the purpose of soil monitoring [6]; information regarding the vital parameters is sent to the farmers at the remote place. In some cases, instead of an Arduino controller, a Raspberry Pi controller integrated with the Zigbee technique has been implemented for the soil monitoring process [7]. Anusha et al. (2019) presented a monitoring system to collect consistent information about farm production and intimate agricultural offices at the remote place by means of short messages [8]. An automated and smart approach has been proposed to test soil parameters in order to make farming easier [9]. The occurrence of fungal diseases in farming fields can be traced by the image processing concept, and based upon that, farmers apply green manure or pesticides according to severity [10, 11]. A smart irrigator was presented to provide pesticides and water for farming based on evaluation of the parameters of the land soil [12]. Sargam Yadav presented a study analyzing the innovative technologies implemented in smart farming, which also enunciates the demerits of those implementations with respect to comments taken from YouTube [13]. An optimum solution was enunciated for detecting diseases among crops by using the concepts of big data and IoT, thereby implementing a smart system. Leaf detection was implemented among crops by applying the concepts of image pre-processing, acquisition of images, and extraction of features. Fog-computing-based architecture and IoT have been implemented for real-time monitoring and recording of subjects [14–17].


3 Materials and Methods

The system shown in Fig. 1 is developed to identify diseases among the crops and sprinkle pesticides based upon the requirement. Moreover, it measures the pH value of the soil and the ambient temperature and humidity of the environment. The system is connected to the internet and controlled by a mobile application called Blynk. Once the bot is placed in the field, it keeps moving automatically, performing the monitoring process; if any obstacle is detected, it changes direction and keeps moving. The system is implemented using a Raspberry Pi controller, which in turn uses its 12.3-megapixel camera to capture the plant leaf. The system measures the vital parameters above and displays those values along with the disease detection result. The sample captured by the camera is processed by a machine learning algorithm that uses a TensorFlow model. If any disease is detected, appropriate intimation is given to the farmers along with the parameter values using the concept of IoT. The developed robot simply captures the plant image and uploads it to a cloud server. This image is sent through a convolutional neural network, which translates and compares it with the other elements in the model. The uploaded image values are compared and classified against the available dataset by the TensorFlow model. The model comprises a blend of various tools and facilities for constructing applications driven by machine learning. When there is a match, it calculates the confidence and displays the class having the highest confidence. Whether the plant is healthy or unhealthy depends upon the dataset. Classification is a simple procedure which gives a proper result and is used for plant disease detection (Fig. 2).
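The final step described above — turning raw model scores into confidences and reporting the class with the highest confidence — can be sketched in a few lines. This is not the authors' actual TensorFlow model; the class labels and the logit values below are hypothetical, and the sketch only illustrates the softmax/argmax decision step:

```python
import math

# Hypothetical class labels; the actual dataset covers crops such as
# apple, grape and orange, each tagged with plant name and disease name.
LABELS = ["mango_healthy", "mango_anthracnose", "papaya_healthy", "papaya_mildew"]

def softmax(scores):
    """Turn raw model outputs (logits) into confidences that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Return the label with the highest confidence, as the robot does."""
    conf = softmax(scores)
    best = max(range(len(conf)), key=conf.__getitem__)
    return LABELS[best], conf[best]

label, confidence = classify([0.2, 2.9, 0.1, 0.4])  # assumed logits
print(label, round(confidence, 2))  # prints: mango_anthracnose 0.83
```

In the real system these logits would come from the trained convolutional network rather than being supplied by hand.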

Fig. 1 Soil monitoring robot


Fig. 2 Flow of disease detection process

The dataset consists of both healthy and diseased plant images. The images cover certain species of crops such as apple, grape, and orange. There are two parameters of interest in each variety: the plant name and the disease name. The system accuracy can be enhanced by pre-processing. All images are subject to segmentation and classification, by which certain regions of the images are extracted. In order to rectify non-uniform lighting, the images are converted to grayscale and sent for further processing (Fig. 3). Once the grayscale images are obtained from the previous step, they are transformed into subjective variables. A matrix is prepared from the pixel values of the images for executing convolutions. This process is continued, and pooling is performed on the

Fig. 3 Preprocessing of images


matrix to enhance the performance of the system. Further, a number of training epochs is performed, intended to tune the scaling of the parameters. Figure 4 shows the algorithm of the implementation.
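Two of the steps above — grayscale conversion and pooling over the pixel matrix — can be sketched without any libraries. This is a minimal illustration, not the paper's implementation; the luminance weights are the common ITU-R BT.601 coefficients, and the 4×4 matrix is a made-up example:

```python
def to_grayscale(rgb_pixel):
    """ITU-R BT.601 luminance: one common way to drop colour so that
    non-uniform lighting affects the classifier less."""
    r, g, b = rgb_pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def max_pool_2x2(matrix):
    """2x2 max pooling: keep the strongest response in each block,
    shrinking the matrix and making later layers cheaper."""
    rows, cols = len(matrix), len(matrix[0])
    return [
        [max(matrix[i][j], matrix[i][j + 1],
             matrix[i + 1][j], matrix[i + 1][j + 1])
         for j in range(0, cols - 1, 2)]
        for i in range(0, rows - 1, 2)
    ]

gray = to_grayscale((255, 0, 0))          # pure red -> 76.245
pooled = max_pool_2x2([[1, 2, 3, 0],
                       [4, 5, 6, 1],
                       [7, 8, 9, 2],
                       [3, 1, 4, 1]])      # -> [[5, 6], [8, 9]]
```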

Fig. 4 Algorithm for implementation


4 Results and Discussions

The presented system moves in all four directions, controlled by the Blynk mobile application. Once an instruction is received, the Raspberry Pi controls the motor directions. When the capture button is clicked, the robot captures the plant leaf using the Raspberry Pi camera and analyzes the picture using the machine learning algorithm. After analyzing, it displays the health status of the plant and gives suggestions to make the plant healthy. When the measure-level button is clicked, the robot measures the pH value and the moisture content in the soil using a pH sensor (2.0 interface) and a soil moisture sensor respectively. Meanwhile, it also measures the temperature (DHT11 sensor) and humidity of the environment and notifies the respective mobile device. From these observations, when the moisture is low, the plants are watered. After running the program successfully, the Blynk app is opened on the smartphone. It shows the robot connected online, as in Fig. 5. Once connected, the robot's movement can be controlled by pressing the various buttons in the Blynk app. The forward movement of the robot is shown in Fig. 6, and Fig. 7 shows the robot capturing an image. Once the plant image is captured, it is analyzed using TensorFlow; the app then shows the plant health status and, for an unhealthy plant, recommendations to make it healthy. This is illustrated in Fig. 8, indicating the healthy status of a mango plant. In the other case, if the mango plant is infected, the identification of the disease and the unhealthy status of the plant are shown in Fig. 9. There are certain diseases which affect plants; one cause is fungi, which absorb energy from plants and damage them. Fungal infections are responsible for approximately two-thirds of infectious plant diseases and cause wilting, rusts, scabs, rotted tissue, and other problems. Similarly, a papaya plant image is captured and its health status analyzed.
The image capturing and identification of this papaya plant are shown in Fig. 10. Abiotic disorders are not induced by living organisms but by abnormal environmental conditions such as drought stress, nutrient deficiency, and improper watering and planting conditions. The robot also measures the moisture and pH value using the soil moisture and pH sensors, together with the temperature and humidity of the environment. Once the measure-level button is pressed, appropriate notifications are sent to the mobile phone regarding the soil monitoring process in addition to the detection of leaf diseases. The system prototype is shown in Fig. 11. Thus, the system implemented in this paper is able to detect infections among various types of plants, producing more yield from the fields and thereby benefitting the farmers to a greater extent. In addition, it gives vital data regarding factors such as temperature, humidity, pH, and soil moisture to support the irrigation work. Compared with other related systems, the arrangement is totally automated in the form of a robot which gives the complete information needed to enhance the irrigation work. This justifies the novelty of the paper.
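The decision rule described above — notify the farmer with every reading and water the plants when soil moisture is low — can be sketched as plain threshold logic. The paper does not specify the thresholds the robot uses; the values below are hypothetical placeholders:

```python
# Hypothetical thresholds: the paper does not state the exact values,
# so these are placeholders for illustration only.
MOISTURE_MIN = 30.0        # percent soil moisture below which we water
PH_RANGE = (5.5, 7.5)      # assumed acceptable soil pH band

def decide_actions(moisture_pct, ph, temperature_c, humidity_pct):
    """Mirror the robot's rule: always notify the farmer, and water
    when the soil moisture reading falls below the minimum."""
    actions = []
    if moisture_pct < MOISTURE_MIN:
        actions.append("water_plants")
    if not (PH_RANGE[0] <= ph <= PH_RANGE[1]):
        actions.append("flag_ph")
    actions.append(f"notify: T={temperature_c}C RH={humidity_pct}% "
                   f"moisture={moisture_pct}% pH={ph}")
    return actions

print(decide_actions(moisture_pct=18.0, ph=6.4,
                     temperature_c=31.0, humidity_pct=62.0))
```

In the deployed system the notification step would go out through the Blynk app rather than a print statement.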


Fig. 5 Robot connected

5 Conclusion

This paper introduces the concept of wireless technology in farming. The designed robot measures moisture, pH value, temperature, and humidity, and detects plant diseases. These values are intimated to the farmers periodically using a mobile application. It reduces the burden on farmers by avoiding the requirement for manual work. The system is designed to work consistently, unlike human beings, thereby saving farmers time and energy. This approach forms the basis for automated farming. The system can be improved with additional robotic arms meant for vegetable/fruit picking, weed removal, seeding, etc. It can be further improved by employing two or more robots coordinated by means of IoT and big data analytics.


Fig. 6 Forward movement of robot

Fig. 7 Robot capturing the plant image



Fig. 8 Health status of the plant

Fig. 9 Identification of mango plant with disease



Fig. 10 Identification of papaya plant with disease

Fig. 11 Prototype implementation

References

1. Gondchawar N, Kawitkar RS (2016) IoT based smart agriculture. Int J Adv Res Comp Commun Eng 5(6):838–842
2. Suma N, Samson SR, Saranya S, Shanmugapriya G, Subhashri R (2017) IoT based smart agriculture monitoring system. Int J Rec Innov Trends Comp Commun 5(2):177–181
3. Dhaygude SB, Kumbhar NP (2013) Agricultural plant leaf disease detection using image processing. Int J Adv Res Electric Electron Instru Eng 2(1):599–602


4. Betteena Sheryl Fernando D, Sabarishwaran M, Ramya Priya R, Santhoshini S (2020) Smart agriculture monitoring system using IoT. Int J Sci Res Eng Trends 6(4):2212–2216
5. Shanmuga Prabha P, Umapathy K, Kumaran U (2022) Precision irrigation monitoring system with real time data. AIP Conf Proc 2519(030065):1–6
6. Srruthilaya T, Umapathy K (2019) Agricultural monitoring system with real time data. Int J Inform Comp Sci (IJICS) 6(4):322–334
7. Lakshmisudha K, Hegde S, Kale N, Iyer S (2016) Smart precision based agriculture using sensors. Int J Comp Appl 146(11):36–38
8. Anusha A, Guptha A, Sivanageswar Rao G, Tenali RK (2019) A model for smart agriculture using IoT. Int J Innov Technol Explor Eng 8(6):1656–1659
9. Kantale V, Marne M, Gharge M, Itnare S, Bhujbal S (2022) Smart agriculture monitoring system using IoT. Int J Adv Res Sci Commun Technol (IJARSCT) 2(7)
10. Priya PLV, Harshith NS, Ramesh NVK (2018) Smart agriculture monitoring system using IoT. Int J Eng Technol 7(2):308–311
11. Pujari JD, Yakkundimath R, Abdulmunaf SB (2014) Identification and classification of fungal disease affected on agriculture/horticulture crops using image processing techniques. In: IEEE international conference on computational intelligence and computing research (ICCIC)
12. Chetan Dwarkani M, Ganesh Ram R, Jagannathan S, Priyatharshini R (2015) Smart farming system using sensors for agricultural task automation. In: IEEE international conference on technological innovations in ICT for agriculture and rural development (TIAR)
13. Yadav S (2022) Disruptive technologies in smart farming: an expanded view with sentiment analysis. MDPI J AgriEng 4(2):424–460
14. Thorat AW, Kumari S, Valakunde ND (2017) An IoT based smart solution for leaf disease detection. In: IEEE international conference on big data, IoT and data science (BID). Vishwakarma Institute of Technology, Pune, pp 193–198
15. Gavhale KR, Gawande U (2014) An overview of the research on plant leaves disease detection using image processing techniques. IOSR J Comp Eng 16(1):10–16
16. Kumar A, Sharma S, Goyal N, Singh A, Cheng X, Singh P (2021) Secure and energy-efficient smart building architecture with emerging technology IoT. Comput Commun 176:207–217
17. Khullar V, Singh HP, Miro Y, Anand D, Mohamed HG, Gupta D, Kumar N, Goyal N (2022) IoT fog-enabled multi-node centralized ecosystem for real time screening and monitoring of health information. Appl Sci 12(19):984

Accountability of Immersive Technologies in Dwindling the Reverberations of Fibromyalgia

Sheena Angra and Bhanu Sharma

Abstract These days, immersive technologies are contributing immensely to the technical field, making people more aware and tech-savvy. Immersive technologies such as virtual reality, augmented reality, extended reality, and mixed reality are in surplus demand due to their innovative nature, which is why they are widely used in almost every domain. They are extremely beneficial in providing training to users in different subject areas such as medicine, education, defense, tourism, and so on. This paper describes the current scenario and the requirement to work in the medical field, specifically on fibromyalgia, which concerns patients having chronic pain and depression.

Keywords Fibromyalgia · Mental health · Augmented reality · Virtual reality · Learning

1 Introduction

Lifelong learning is a process. People learn better when they use their imagination, hear, reflect, and visualise [1]. Nowadays, visualizing is crucial in every industry: compared to text, people are more likely to remember visuals and videos. As a result, there is a surplus of demand in the sectors of virtual and augmented reality [2]. In the areas of education, medicine, gaming, and other industries, AR/VR is vital. Learning about theoretical concepts is not enough to build a career in this field; we must also know how to work with these immersive technologies and how to bring virtual objects into the picture with the help of the tools used in this field [3]. There are many game engines available, such as Unity, Unreal, CryEngine, and many more [4].

S. Angra · B. Sharma (B)
Immersive and Interactive Technology Lab, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
e-mail: [email protected]

S. Angra
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_5



Fig. 1 Virtual reality I4-quadrilateral

However, Unity has proven to be more advantageous than the others in terms of functionality, physics engine, cross-platform integration, and ease of use.

2 Virtual Reality

Virtual reality (VR), created by Jaron Lanier, is a computer-simulated technology that enables user interaction in a made-up or real-world setting [5, 6]. It is described as the means to touch, feel, and live in the present, past, and future; it is the means through which we can create a world specifically for us [7]. The two categories of virtual reality are non-immersive and immersive. VR is referred to as an I4 quadrilateral since it involves intelligence, imagination, immersion, and interaction [8]. Although there is less user engagement with this technology, the experience is still quite immersive. Figure 1 displays the I4 quadrilateral of virtual reality, which has four modules named imagination, immersion, interaction, and intelligence, where intelligence refers to artificial intelligence.

3 Augmented Reality

A newer technology called augmented reality (AR) allows for the projection of an imaginary or fantastical environment onto the real world [9]. AR is a crucial element in creating a unique educational environment since it improves user/learner interaction [10]. In augmented reality, computer-generated content interacts with actual objects in real time [11, 12]. Technology that is immersive and engages with the


Fig. 2 AR experience

virtual world is called virtual reality. AR has the ability to engage students, inspire them to explore experimental concepts, and improve collaboration between students and educators [13, 14]. Figure 2 represents an AR experience.

4 Mixed Reality

Mixed reality is a phenomenon with the help of which a person can experience the physical, real world along with virtual objects which are responsive and believable. It is an interdisciplinary field drawing on computer vision, user interfaces, signal processing, computer graphics, wearable computing, human factors, the design of sensors and displays, repair and maintenance of machinery, and visualization of information [15]. Mixed reality faces many technical challenges, the biggest of which is display technology, since digital objects must be visualized at high contrast and high resolution. It is also defined as a process of overlaying digital


Fig. 3 Mixed reality scenario

elements on the physical world, where the physical world and the digital elements interact with each other [16]. Figure 3 represents a mixed reality scenario.

5 Reality-Virtuality Continuum

The Reality-Virtuality Continuum was introduced by Milgram and Kishino in 1994 [14] as a starting point to classify a wide range of realities. The real environment consists of only real objects, and virtual reality consists of only virtual objects. Augmented reality is the overlay of virtual objects on the real environment, and augmented virtuality represents the participation of real objects in a virtual scenario [20–22]. Figure 4 represents Milgram's Reality-Virtuality Continuum.

6 Fibromyalgia

Fibromyalgia is a disorder indicated by widespread pain in different parts of the musculoskeletal system, including allodynia and hyperalgesia. Patients suffering from fibromyalgia complain of extreme fatigue, body aches, sleep disorders, lack of

Fig. 4 Milgram's Reality-Virtuality Continuum


concentration, and swollen hands. The pain caused by this disease is directly proportional to negative mood, which in turn is related to a lower quality of life [17]. When a patient encounters severe pain, he/she enters a dissociative state, which is the feeling of detachment from one's own body [18]. The symptoms of this disease range from mild to severe, which helps to determine the worst condition a patient can face [19].

7 Literature Survey

Immersive technologies have contributed immensely to almost every domain, be it medical, education, defense, tourism, or any other area. Their implementation has opened new directions and possibilities even in the most critical situations. Table 1 describes the work done by several authors on reducing the effect of fibromyalgia, and Fig. 5 represents the advantages of using immersive technologies in the medical domain.

8 Conclusion

Immersive technologies have contributed a lot to a diverse range of fields such as education, medicine, defense, tourism, artifacts, and many more. These technologies have changed the dimension of research in the diagnosis of a disease and its treatment. Virtual reality helps humans perceive the virtual world as real, resulting in higher accuracy and lower performance errors. The medical field is itself diverse in nature. Surgical training can easily be given to a person through AR and VR, including bone surgery, laparoscopy, dental training, and laser treatment. Trainees can also experience touch with the help of haptic feedback, which allows them to practice in a virtual yet safe environment. It is also beneficial for students to visualize things in a more practical way. There are many applications which deal with surgery, such as Mirracle, which uses a camera to replicate a mirror view of a user but overlays the CT scan images, giving the user a view of their body. Hence, after a rigorous literature survey, it was found that one problem which needs attention is fibromyalgia. The symptoms of this disease are extreme pain, fatigue, sleep disorders, gastritis, and joint and muscle stiffness. Many techniques and parameters have been discussed by researchers to reduce the effect and intensity of this disease. Many researchers focused on both the exercise and the virtual training given to patients to reduce its effect, and have found it effective as well.

Title

A literature overview of virtual reality (VR) in treatment of psychiatric disorders: recent advances and limitations

The effect of virtual reality exercises on pain, functionality, cardiopulmonary capacity, and quality of life in fibromyalgia syndrome: a randomized controlled study

Author/s and year of publication

Park et al. (2019), [1]

Musa Polat, et al. (2021), [2]

Games for health journal

Frontiers in psychiatry

Source

Table 1 Accomplishments by various authors

Age, weight, height, body mass index, symptom duration, education status, health status, pain intensity, fatigue, quality of life, functional capacity, mood and cognitive symptoms

Age, weight, height, body mass index

Parameters

Microsoft Xbox Kinect Fibromyalgia impact questionnaire (FIQ), Visual analogue scale (VAS), Symptom severity scale (SSS), Six minute walk test (6MWT), Fatigue severity scale (FSS), Euro quality of life five dimension (EQ-5D) and hospital anxiety and depression scale (HADS)

Oculus rift

Tools/test/scales/ questionnaire

Future research should concentrate on creating rehabilitation services that are analytically focused and apply particular virtual reality exercises in accordance with the pathophysiology of FMS

VR has both technological and consumer challenges, including motion sickness, dry eyes, and fixation and addiction

Limitations

(continued)

VR exercises when combined with aerobic exercises increase the quality of life of a patient suffering from fibromyalgia 40 women with this disease were divided into virtual reality group (VRG) and Conventional Training Group (CTG) and VRG group performed better as compared to CTG

It was analyzed that when VR is applied to depressed patients then the criticality of depression is reduced and higher satisfaction is achieved

Summary

54 S. Angra and B. Sharma

Effect of fully immersive virtual reality treatment combined with exercise in fibromyalgia patients: A randomized controlled trial

Gulsen et al. (2020), [4]

Assistive technology

The emerging role Rehabilitation of virtual reality sciences corner training in rehabilitation

Ayesha Afridi et al. (2022), [3]

Source

Title

Author/s and year of publication

Table 1 (continued) Tools/test/scales/ questionnaire

Pain, balance, kinesiophobia, impact of fibromyalgia, quality of life, functional capacity

Tampa scale, visual analogue scale, modified sensory organisation test, Fibromyalgia Impact Questionnaire, International physical Activity questionnaire, Fatigue severity scale, short-form 36 health survey, IBM SPSS, Mann-Whiney test

Motor control, Cognitive training walking abilities, balance, strength

Parameters

Findings cannot be generalized for male patients and the relationship between enjoyment and kinesiophobia could not be inquired

There is a need to improve research quality in Rehabilitation as VR is used for rehabilitation in USA, Europe and Pakistan

Limitations

(continued)

Trial was done on 20 fibromyalgia patients. They were divided into two groups: Exercise group and Immersive Virtual Reality group combined with exercise group. P value for both the groups came out to be significant and it is observed that IVR treatment is an effective method in treating fibromyalgia

VR is used as a cost-effective tool for rehabilitation professionals

Summary

Accountability of Immersive Technologies in Dwindling … 55

Title

Niamh Brady Exploring the et al. (2021), effectiveness of [5] immersive virtual reality interventions in the management of musculoskeletal pain: a state-of-the-art review

Author/s and year of publication

Table 1 (continued) Parameters

Physical Neck pain, therapy reviews Low back pain, nerve injury

Source

Neck disability index (NDI), physiotherapy, snow world game, visual analogue scale, McGill pain questionnaire

Tools/test/scales/ questionnaire Use of Immersive VR leads to motion Sickness, nausea, headache and dizziness. It is also unclear that use of VR is bounded till its usage or not because no study suggests that This technology has not been brought into clinical practice Software development is also required to Target body regions and specific conditions

Limitations

(continued)

Immersive technologies have been remarkably used in rehabilitation, pain management, education and anxiety management. It is also concluded that many studies focusing on muscle pain are of small sample size and low quality

Summary

56 S. Angra and B. Sharma

Title

Self-administered skills-based virtual reality intervention for chronic pain: randomized controlled pilot study

Author/s and year of publication

Darnall et al. (2020), [15]

Table 1 (continued) Parameters

JMIR formative Pain onset, age, research sleep, mood, stress, gender, education, employment, marital status

Source

Oculus Go VR headset, defence and veterans pain rating scale (DVPRS)

Tools/test/scales/ questionnaire In-person treatment requires multiple visits to clinic, high travelling cost and other obligations

Limitations

Randomized Controlled Trial (RCT) was conducted on 97 adults aged from 18 to 75 years who have low back pain or suffering from Fibromyalgia. They were divided into two groups VR and audio. For VR group 21 days skill-based program was organized for chronic pain and audio version of VR program was organized for audio group. These programs have the potential to enhance and improve treatment strategies for chronic pain

Summary



Fig. 5 Advantages of immersive technologies in the medical domain

References

1. Reality-virtuality continuum infographic with examples: real environment, augmented reality, augmented virtuality and virtual reality. https://www.alamy.com/reality-virtuality-continuum-infographic-with-examples-real-environment-augmented-reality-augmented-virtuality-and-virtual-reality-image342055572.html. Accessed 6 May 2022
2. Makarov A (2022) 10 augmented reality trends of 2022: a vision of immersion. https://mobidev.biz/blog/augmented-reality-trends-future-ar-technologies. Accessed 8 May 2022
3. Singh S (2022) Metaverse: the future of (virtual) reality. https://www.financialexpress.com/industry/metaverse-the-future-of-virtual-reality/2483228/. Accessed 8 May 2022
4. Leetaru K (2019) Why machine learning needs semantics not just statistics. https://www.forbes.com/sites/kalevleetaru/2019/01/15/why-machine-learning-needs-semantics-not. Accessed 8 May 2022
5. Flavián C, Ibáñez-Sánchez S, Orús C (2019) The impact of virtual, augmented and mixed reality technologies on the customer experience. J Bus Res 100:547–560
6. Introduction to Unity 3D. https://www.studytonight.com/3d-game-engineering-with-unity/introduction-to-unity. Accessed 25 May 2022
7. The pros and cons of mobile game development with Unity 3D – VARTEQ Inc. https://varteq.com/the-pros-and-cons-of-mobile-game-development-with-unity-3d/. Accessed 25 May 2022
8. Install the Unity Hub and Editor. https://learn.unity.com/tutorial/install-the-unity-hub-and-editor. Accessed 25 May 2022
9. Vilalta-Abella F, Gutiérrez J, Joana PS (2016) Development of a virtual environment based on the perceived characteristics of pain in patients with fibromyalgia. Stud Health Technol Inform 219:158
10. Hussein A, García F, Olaverri-Monreal C (2018) ROS and Unity based framework for intelligent vehicles control and simulation. In: 2018 IEEE international conference on vehicular electronics and safety (ICVES). IEEE, pp 1–6
11. Ramachandran VS, Seckel EL (2010) Using mirror visual feedback and virtual reality to treat fibromyalgia. Med Hypotheses 75(6):495–496
12. Wu J, Chen Z, Zheng K, Huang W, Ren Z (2022) Benefits of exergame training for female patients with fibromyalgia syndrome: a systematic review and meta-analysis of randomized controlled trials. Arch Phys Med Rehabil
13. Lattanzio SM, Imbesi F (2018) Fibromyalgia syndrome: a case report on controlled remission of symptoms by a dietary strategy. Front Med 5:94
14. Mehrfard A, Fotouhi J, Taylor G, Forster T, Navab N, Fuerst B (2019) A comparative analysis of virtual reality head-mounted display systems. arXiv preprint arXiv:1912.02913
15. Huber W, Carey VJ, Gentleman R, Anders S, Carlson M, Carvalho BS, Morgan M (2015) Orchestrating high-throughput genomic analysis with bioconductor. Nat Methods 12(2):115–121
16. Fang Q, Shi W, Qin Y, Meng X, Zhang Q (2014) 2.5 kW monolithic continuous wave (CW) near diffraction-limited fiber laser at 1080 nm. Laser Phys Lett 11(10):105102
17. Kamphuis C, Barsom E, Schijven M, Christoph N (2014) Augmented reality in medical education? Perspect Med Educ 3(4):300–311
18. Qu Z, Lau CW, Simoff SJ, Kennedy PJ, Nguyen QV, Catchpoole DR (2022) Review of innovative immersive technologies for healthcare applications. Innov Dig Health Diagn Biomark 2(2022):27–39
19. Rossi A, Di Lollo AC, Guzzo MP, Giacomelli C, Atzeni F, Bazzichi L, Di Franco M (2015) Fibromyalgia and nutrition: what news. Clin Exp Rheumatol 33(1 Suppl 88):S117–S125
20. Darnall BD, Krishnamurthy P, Tsuei J, Minor JD (2020) Self-administered skills-based virtual reality intervention for chronic pain: randomized controlled pilot study. JMIR Form Res 4(7):e17293
21. Tuli N, Mantri A (2021) Evaluating usability of mobile-based augmented reality learning environments for early childhood. Int J Hum-Comp Interact 37(9):815–827
22. Tuli N, Mantri A (2020) Experience Fleming's rule in electromagnetism using augmented reality: analyzing impact on students learning. Proc Comp Sci 172:660–668

A 233-Bit Elliptic Curve Processor for IoT Applications

Deepak Panwar, Sumit Singh Dhanda, Kuldeep Singh Kaswan, Pardeep Singh, and Savita Kumari

Abstract ECC is the most popular asymmetric cipher and can be used to provide different security services in IoT applications. This paper presents a low-resource 233-bit ECC processor. It utilizes a hybrid Karatsuba multiplier and the quad-Itoh–Tsuji algorithm for the inversion. The processor is synthesized using the PlanAhead software and implemented on a Virtex-5 FPGA. In comparison with existing designs, the implemented design shows improvements of 2.25 to 57 percent in resource consumption. The design consumes 28,438 LUTs and 9838 slices, and its maximum frequency is 76.7 MHz.

Keywords Elliptic curve cryptography · FPGA · Encryption · Karatsuba multiplication

1 Introduction

ECC is the first-choice asymmetric-key cipher [1] among security standards. Elliptic curve cryptography (ECC) is a widely used asymmetric cryptographic primitive that has been adopted in many security protocols [2, 3] such as BLE 4.2, 6LoWPAN, TLS, CoAP and SSH. A lot of research [4, 5] has been conducted to optimize the area, timing and power requirements of this cipher. ECC provides more security per key bit than RSA, thus reducing overhead as well. Elliptic curve scalar multiplication (ECSM) is a costly process in ECC. Curve selection is also an important issue, related both to the security of the encryption and to the ease of implementation. There are many types of curves that can be used for ECC implementations; the details of these curves and their parameters are provided in the NIST documents, IEEE 1363 [6], and the ANSI standards. The curves can mainly be divided into two categories: prime curves and extension-field curves. Binary extension fields are better suited for hardware implementations of ECC, while prime curves are preferred for software implementations. Another important issue is the choice of coordinate system: implementations can be carried out using affine, projective or mixed coordinates. The Lopez–Dahab coordinate system helps in the efficient implementation of ECC, as it reduces the number of inversions required to calculate the point additions and doublings in the field operations.

The security provided by ECC is a function of the hardness of the elliptic curve discrete logarithm problem (ECDLP) and of the key size. No sub-exponential algorithm is known for solving the ECDLP; hence the key sizes required to provide sufficient security are much smaller than for other public-key cryptosystems.

The work is divided into five sections. The first section emphasizes ECC and its relevance to the field of information security. Important works from past and present are discussed in the related-work section. The details of the processor and its design basics are provided in Sect. 3. Section 4 presents the implementation results of the ECCP and compares them with existing works. Finally, the conclusion is presented and future work is discussed in Sect. 5.

D. Panwar, Manipal University Jaipur, Jaipur, India
S. S. Dhanda (B) · K. S. Kaswan · S. Kumari, Galgotias University, Greater Noida, Uttar Pradesh, India; e-mail: [email protected]
P. Singh, Graphic Era Hill University, Dehradun, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_6

2 Related Work

ECC can be used to provide multiple security services in the Internet of Things (IoT). Scalar multiplication (SM) is the costliest operation in this primitive; if SM can be adapted to small resources, then ECC can be made lightweight. In [7], the authors carried out one such attempt. In [8], the authors presented an efficient FPGA implementation of elliptic curve point multiplication for GF(2^191), proposing an adaptive architecture along with a modified Karatsuba–Ofman multiplication. A design in which pipelining and parallelism are used with efficient LUT placement and routing was presented in [9] to minimize the resource consumption of ECC. A high-speed ECC implementation on FPGA using different types of pipelining on the data path was presented in [10]. The authors of [11] proposed a multiplier for fast ECC execution. In [12], the authors presented an efficient way to calculate the Montgomery modular multiplication, implementing 256-bit and 1024-bit modular multipliers and achieving better area and speed results. In [13], a detailed survey of lightweight cryptographic techniques for securing IoT networks is presented, in which the details of asymmetric cryptography are discussed. Energy efficiency is another requirement for a cipher providing security in small devices: in [14], an energy-efficient device-to-device (D2D) communication scheme for smart cities using ECC is proposed. A lightweight authentication scheme that uses ECC to preserve privacy in smart grids is proposed in [15]. In [16], a novel asymmetric method is proposed for multiple-image encryption; it uses four quick-response codes and elliptic curve


cryptography. Four images and a random key serve as the input for ECC to generate the encrypted images. In [17], another image encryption method is proposed for security in IoT networks.

3 Design

Selection of a particular field also determines the equation used to generate the points in the field. If the field used is a binary field of the form GF(2^n), the curve equation is

    Y^2 + XY = X^3 + aX^2 + b,  with b ≠ 0,

but if the selected field is a prime field GF(p), then the equation is the Weierstrass equation

    WE_{a,b}: y^2 = x^3 + ax + b    (1)
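The point arithmetic built from Eq. (1), and the scalar multiplication that dominates the cost of ECC, can be sketched in software on a toy prime curve. The parameters below (p = 97, a = 2, b = 3, base point (3, 6)) are hypothetical illustration values, not the paper's GF(2^233) binary curve or its hardware data path:

```python
# Toy prime curve y^2 = x^3 + ax + b over GF(p); illustrative values only.
p, a, b = 97, 2, 3
assert (4 * a**3 + 27 * b**2) % p != 0   # non-singularity condition
INF = None                               # point at infinity

def add(P, Q):
    """Affine point addition/doubling on the toy curve."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                       # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Left-to-right double-and-add: the costly ECSM operation."""
    R = INF
    for bit in bin(k)[2:]:
        R = add(R, R)        # one doubling per key bit
        if bit == '1':
            R = add(R, P)    # one addition per set bit
    return R
```

The double-and-add loop performs one doubling per key bit and one addition per set key bit, which is why ECSM dominates the processor's cycle count and why the field multiplier and inverter below are the critical components.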

with 4a^3 + 27b^2 ≠ 0. The order of the field should be greater than 2^160 so that the ECC implementation is safe. Here, we consider the binary extension field GF(2^233) for the ECC implementation.

Hybrid Karatsuba Multiplier: The multiplier used in this design is the hybrid Karatsuba multiplier proposed in [18], where the authors used it for a 191-bit ECC implementation. Here, the field is GF(2^233) with x^233 + x^74 + 1 as the irreducible polynomial. The basic polynomial product can be expressed as

    K = x^m U^H V^H + (U^H V^L + U^L V^H) x^(m/2) + U^L V^L    (2)

From this equation, the Karatsuba–Ofman algorithm was derived as

    K = x^m U^H V^H + U^L V^L + [U^H V^H + U^L V^L + (U^H + U^L)(V^H + V^L)] x^(m/2)    (3)

or, equivalently,

    K = x^m K^H + K^L    (4)

Here, the coordinates of the polynomial product are given by

    K^H = (k_{2m-2}, k_{2m-3}, ..., k_{m+1}, k_m)    (5)

    K^L = (k_{m-1}, k_{m-2}, ..., k_1, k_0)    (6)
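As a software sketch of Eqs. (2)–(4) (an illustration, not the hardware data path), polynomials over GF(2) can be represented as integer bit-vectors with XOR as addition; one recursion level splits the operands exactly as in Eq. (3), trading four half-size products for three:

```python
def karatsuba_gf2(u, v, m):
    """Karatsuba-Ofman product of two GF(2) polynomials of degree < m
    (m a power of two), following Eq. (3); '+' in GF(2) is XOR."""
    if m <= 8:                  # base case: schoolbook shift-and-XOR
        r = 0
        for i in range(m):
            if (v >> i) & 1:
                r ^= u << i
        return r
    half = m // 2
    ul, uh = u & ((1 << half) - 1), u >> half   # U^L, U^H
    vl, vh = v & ((1 << half) - 1), v >> half   # V^L, V^H
    kh = karatsuba_gf2(uh, vh, half)            # U^H * V^H
    kl = karatsuba_gf2(ul, vl, half)            # U^L * V^L
    km = karatsuba_gf2(uh ^ ul, vh ^ vl, half)  # (U^H+U^L)(V^H+V^L)
    # Eq. (3): K = x^m*KH + (KH + KL + KM)*x^(m/2) + KL
    return (kh << m) ^ ((kh ^ kl ^ km) << half) ^ kl

# e.g. (x^2 + 1)(x + 1) = x^3 + x^2 + x + 1 over GF(2)
assert karatsuba_gf2(0b101, 0b011, 16) == 0b1111
```

The hybrid design in the paper stops this recursion early, using general Karatsuba at the smallest level, because below a certain operand width the FPGA's LUT structure makes the direct product cheaper in slices.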

The hybrid Karatsuba–Ofman multiplier used here is composed of the general Karatsuba and the simple Karatsuba multiplier. The important reason behind this design


is the optimum utilization of the FPGA architecture. As per [19], the general Karatsuba multiplier consumes the maximum number of gates, but due to the input sizes and the LUT structure of the FPGA it consumes fewer slices than a normal K–O multiplier. The 233-bit multiplication has been divided into five levels. The first level is composed of the 233-bit multiplier, while the second level is composed of 117- and 116-bit multipliers. At the third level, 58- and 59-bit multipliers are present, followed by 29- and 30-bit multipliers at the fourth level. The fifth level is made up of 15- and 14-bit multipliers. The level-five multipliers are implemented as general Karatsuba multipliers, while the remaining levels are implemented with simple Karatsuba multipliers.

The complete design of the processor is shown in Fig. 1. It is composed of a memory (ROM) that contains the values of the curve constants and the base points in affine form, to be used in elliptic curve scalar multiplication (ECSM). A register bank is present to store the values calculated by the multiplication at the end of every clock cycle; it consists of eight 233-bit registers, which supply the arithmetic unit with operands at the start of every clock cycle. The arithmetic unit is composed of the hybrid Karatsuba multiplier and the quad-Itoh–Tsuji (Quad-ITA) inversion algorithm. The hybrid Karatsuba multiplier algorithm is explained above. The multiplication is carried out in the

Fig. 1 Block diagram for elliptic curve processor


Lopez–Dahab coordinate system. Quad-ITA utilizes the field multiplier to complete the inversion process. The architecture of the processor also contains a control unit whose main function is to maintain the sequence of operations and the coordination between the different parts of the processor. With the help of control signals, it also arranges for the multiplier to be reused for the inversion, avoiding the need for an extra multiplier: the multiplier is the most resource-consuming unit inside the processor, and its duplication would result in a steep increase in resource consumption. Inversion is the second most resource-consuming circuit in scalar multiplication. The algorithms normally used for binary fields are Montgomery inversion, the binary Euclidean algorithm (BEA), the extended Euclidean algorithm (EEA) and the Itoh–Tsuji algorithm (ITA). The fastest among these is ITA, which uses Brauer addition chains, exponentiation circuits and replication to calculate the inverses. Quad-ITA reduces the addition chain of size n to (n−1)/2, which decreases the cost of the inversion. The finite state machine/control unit of this processor takes 39 cycles to complete the multiplication and generates 33 control signals to control the various operations: ten control signals control the inputs and two the outputs of the multiplier, four control signals drive the inversion unit, and the rest control the read and write operations in the register bank. The processor carries out the computation in three phases. In the first phase, 3 clock cycles are used for the initialization of the process. The second phase performs the multiplication in projective coordinates. The third phase, which involves the inversion, converts the computed result from projective to affine coordinates.
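The inversion that Quad-ITA implements computes a^(-1) = a^(2^233 − 2) in GF(2^233). The sketch below (a software illustration under stated assumptions, not the hardware unit) does this with plain Fermat square-and-multiply over the field defined by x^233 + x^74 + 1; the Itoh–Tsuji algorithm reaches the same power with only about ten multiplications by exploiting a Brauer addition chain for the exponent structure:

```python
# GF(2^233) with irreducible polynomial x^233 + x^74 + 1,
# elements stored as Python ints (bit i = coefficient of x^i).
M = 233
POLY = (1 << 233) | (1 << 74) | 1

def gf_mul(a, b):
    """Carry-less multiply, then reduce modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(r.bit_length() - 1, M - 1, -1):  # reduction step
        if (r >> i) & 1:
            r ^= POLY << (i - M)
    return r

def gf_inv(a):
    """a^(2^M - 2) by naive square-and-multiply.
    Itoh-Tsuji computes the same exponent with far fewer field
    multiplications via an addition chain, which is what makes it
    attractive in hardware; the result is identical."""
    t = a                              # t = a^(2^1 - 1)
    for _ in range(M - 2):
        t = gf_mul(gf_mul(t, t), a)    # t = a^(2^(k+1) - 1)
    return gf_mul(t, t)                # one final squaring: a^(2^M - 2)
```

For example, gf_mul(gf_inv(2), 2) returns 1: the inverse of the element x, multiplied by x, yields the field identity.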

4 Results and Discussion

In this work, the elliptic curve processor has been implemented on different Xilinx FPGAs, with synthesis carried out using the PlanAhead IDE from Xilinx. The design was coded in Verilog. Board selection is driven by the need for the FPGA to accommodate all the ports of the design. The schematic of the design is shown in Fig. 2. The results were obtained and compared with existing designs in Table 1, which shows that a significant improvement has been achieved over some earlier works in terms of resource consumption (look-up tables, i.e. LUTs). Resources are an important parameter when the target device is small. This 233-bit ECC processor is synthesized with the PlanAhead software and implemented on a Virtex-5 FPGA. The design consumed a total of 28,438 LUTs and 9838 slices. The maximum frequency achieved is 76.7 MHz, and the maximum combinational path delay is 7.028 ns. The design has been compared with other existing designs implemented on the Virtex-5 FPGA; the results, together with the frequency of operation and the percentage improvement, are presented in Table 1. Figure 3 shows the resource consumption of these designs, while the percentage improvement is presented in Fig. 4. As per Fig. 3, our design is the most constrained among all the designs presented. It improves by 7.44% and 57.87% in terms of LUTs when compared with the designs of [9]. The second-best design in Fig. 3 is from [4], which is very close to ours; only a 2.25% improvement is achieved in this case. When compared with the design of [21], which consumes 36,812 LUTs, the improvement is 22.74%.

Fig. 2 Schematic for the ECC-233 design

Table 1 Comparison table for the FPGA implementation

Architecture          | LUTs            | Frequency (MHz) | % Improvement (in comparison)
Proposed              | 28,438          | 76.7            | –
[9] APF'15a / APF'15b | 30,724 / 44,897 | 207.6 / 188     | 7.44 / 57.87
[4] KB'15             | 29,095          | 153             | 2.25
[20] MM'13            | 33,414          | –               | 17.49
[21] BR'20            | 36,812          | –               | 22.74
[22] BR'16            | 42,404          | 360             | 32.93
[23] APF'16           | 32,874          | 132             | 13.49

Fig. 3 Comparative evaluation of resource consumption (LUTs) with respect to existing designs (values as listed in Table 1)


Fig. 4 Comparative percentage improvement with respect to existing designs (values as listed in Table 1)

Compared with the designs of [22] and [23], the design achieves savings of 32.93% and 13.49%, respectively. The design in [20] consumes 33,414 LUTs, and a 17.49% improvement is reported. The only concern is the frequency of operation; to improve this aspect, the design will be refined with proper positioning of registers after analyzing the critical path. The proposed design can provide security to various IoT applications at the gateway and server levels. It can also be used in conjunction with symmetric cryptographic algorithms for security in IoT.

5 Conclusion and Future Work

ECC is the most popular asymmetric cipher and can be used to provide different security services in IoT applications. This work presents a low-resource design for a 233-bit ECC processor. It utilizes a hybrid Karatsuba multiplier and the quad-Itoh–Tsuji algorithm for the inversion. The processor is synthesized using the PlanAhead software and implemented on a Virtex-5 FPGA. In comparison with existing designs, the implemented design is the most resource-constrained, but its maximum frequency is lower than that of the other designs and requires further improvement. In future work, we will try to improve the design in terms of frequency performance as well as resource consumption.


References

1. El-Sisi AB, Shohdy S, Ismail N (2008) Reconfigurable implementation of Karatsuba multiplier for Galois field in elliptic curves. In: International joint conferences on computer, information, and systems sciences, and engineering (CISSE 2008)
2. Dhanda SS, Singh B, Jindal P (2020) Lightweight cryptography: a solution to secure IoT. Wireless Pers Commun 112(3):1947–1980
3. Dhanda SS, Singh B, Jindal P (2020) Demystifying elliptic curve cryptography: curve selection, implementation and countermeasures to attacks. J Interdiscipl Math 23(2):463–470. https://doi.org/10.1080/09720502.2020.1731959
4. Khan ZUA, Benaissa M (2015) High speed ECC implementation on FPGA over GF(2^m). In: Proceedings of 2015 25th international conference on field programmable logic and applications (FPL), pp 1–6
5. Cinnati Loi KC, Ko S-B (2016) Parallelization of scalable elliptic curve cryptosystem processors in GF(2^m). Microprocess Microsyst 45(2016):10–22
6. Sowjanya K, Dasgupta M, Ray S (2021) Elliptic curve cryptography-based authentication scheme for Internet of Medical Things. J Inform Sec Appl 58(2021):102761
7. Lara-Nino CA, Diaz-Perez A, Morales-Sandoval M (2020) Lightweight elliptic curve cryptography accelerator for internet of things applications. Ad Hoc Netw 103:102159
8. Shohdy SM, El-Sisi AB, Ismail N (2009) FPGA implementation of elliptic curve point multiplication over GF(2^191). In: Park JH et al (eds) ISA 2009, LNCS 5576. Springer-Verlag Berlin Heidelberg, pp 619–634
9. Fournaris AP, Zafeirakis J, Koufopavlou O (2014) Designing and evaluating high speed elliptic curve point multipliers. In: Proceedings of the 2014 17th Euromicro conference on digital system design, pp 169–174
10. Chelton WN, Benaissa M (2008) Fast elliptic curve cryptography on FPGA. IEEE Trans Very Large Scale Integr (VLSI) Syst 16(2):198–205
11. Khan ZUA, Benaissa M (2017) High-speed and low-latency ECC processor implementation over GF(2^m) on FPGA. IEEE Trans Very Large Scale Integr (VLSI) Syst 25(1):165–176
12. Abd-Elkader AAH, Rashdan M, Hasaneen E-S, Hamed HFA (2022) Efficient implementation of Montgomery modular multiplier on FPGA. Comput Electr Eng 97(2022):107585
13. Rana M, Mamun Q, Islam R (2022) Lightweight cryptography in IoT networks: a survey. Fut Gener Comp Syst 129:77–89
14. Dang TK, Pham CD, Nguyen TLP (2020) A pragmatic elliptic curve cryptography-based extension for energy-efficient device-to-device communication in smart cities. Smart Cities Soc 56:102097
15. Sadhukhan D, Ray S, Obaidat MS, Dasgupta M (2021) A secure and privacy preserving lightweight authentication scheme for smart-grid communication using elliptic curve cryptography. J Syst Arch 114:101938
16. Li W, Chang X, Yan A, Zhang H (2021) Asymmetric multiple image elliptic curve cryptography. Opt Lasers Eng 136:106319. https://doi.org/10.1016/j.optlaseng.2020.106319
17. Sasikaldevi N, Geetha K, Sriharshini K, Aruna MD (2020) H3: hyper multilayer hyper chaotic hyper elliptic based image encryption system. Opt Laser Technol 127:106173
18. Rodriguez-Henriquez F, Saqib NA, Diaz-Pérez A (2004) A fast parallel implementation of elliptic curve point multiplication over GF(2^m). Microprocess Microsyst 28(5–6):329–339
19. Rebeiro C, Mukhopadhyay D Hybrid Karatsuba multiplier for GF(2^233)
20. Mahdizadeh H, Masoumi M (2013) Novel architecture for efficient FPGA implementation of elliptic curve cryptographic processor over GF(2^163). IEEE Trans Very Large Scale Integr (VLSI) Syst 21(12):2330–2333
21. Rashidi B (2020) Throughput/area efficient implementation of scalable polynomial basis multiplication. J Hardw Syst Secur 4:120–135
22. Rashidi B, Sayedi SM, Farashahi RR (2016) High-speed hardware architecture of scalar multiplication for binary elliptic curve cryptosystems. Microelectron J 52:49–65
23. Fournaris AP, Sklavos N, Koulamas C (2016) A high speed scalar multiplier for binary Edwards curves. In: Proceedings of the third workshop on cryptography and security in computing systems, pp 41–44

Numerical Simulation and Modeling of Improved PI Controller Based DVR for Voltage Sag Compensation

Vijeta Bhukar and Ravi Kumar Soni

Abstract Voltage sags are a common problem in power systems and can have a detrimental effect on the operation of electrical equipment. A voltage sag occurs when the voltage level drops below a certain threshold for a short period of time; this can cause equipment to malfunction or shut down, leading to significant economic losses. The dynamic voltage restorer (DVR) is a widely used technology for mitigating voltage sags: it injects a voltage waveform into the system to compensate for the sag. This research aims to develop an improved PI controller-based DVR for voltage sag compensation in power systems. The proposed system utilizes numerical simulation and modeling techniques to study the performance of the controller under different operating conditions. The results show that the improved PI controller-based DVR effectively compensates for voltage sags and significantly improves the power quality of the system. Numerical simulations were performed using MATLAB/Simulink on a three-phase power system, with the voltage sag induced using a fault generator. The performance of the improved PI controller-based DVR was evaluated on parameters such as compensation time, voltage sag magnitude, and load variation, and the results were compared with those obtained using a conventional PI controller-based DVR.

Keywords DVR · PI controller · Voltage sag · Swell · Transmission line · Faults · Power system · Protection · Harmonics

V. Bhukar · R. K. Soni (B) Department of Electrical Engineering, SGI, Sikar, Rajasthan, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_7


1 Introduction

The performance of the DVR depends on the quality of the controller used to regulate its operation. In this research, we propose an improved PI controller-based DVR for voltage sag compensation. The controller is designed using numerical simulation and modeling techniques to ensure its effectiveness under different operating conditions. The proposed system consists of a DVR, an injection transformer, a control circuit, and a voltage sensor. The controller is designed using a PI (proportional–integral) control algorithm, which provides better stability and accuracy compared to other control techniques.

Power quality is commonly defined in terms of "any electrical problem manifested through voltage, current or frequency deviations that results in damage, disruption or misoperation of end-use equipment." Most PQ problems in both domestic and industrial applications are associated with power electronics (PE). PE devices are used in consumer appliances such as TVs, PCs, chargers and low-power inverters, in office equipment such as copiers and printers, and in industrial equipment such as programmable logic controllers, adjustable-speed drives, rectifiers, high-power inverters, and distributed generation (solar, wind, etc.). A PQ problem can often be partially identified from its symptoms, depending on the type of problem involved:

• Communication interruptions
• Unforeseen increase in supply temperature
• Flicker in lighting loads
• Sudden voltage rise or drop
• Daily electricity outages

Power-electronic devices are the most important source of harmonics, notches and unwanted neutral-conductor currents (nonlinear loads). Examples include power converters, soft starters, harmonic-producing equipment, electronic ballasts for discharge lamps, SMPS units in PCs, and HVDC/HVAC power-electronic converters. Motors, transformers, switchgear, cables and capacitor banks are among the devices most affected, because the voltage harmonics produce heating in the windings (resonance). Bidirectional converters are devices that generate harmonics and affect electronic control equipment. SMPS units are responsible for generating neutral currents; desktop and standalone computers, copiers, printers and other devices using SMPS are widely deployed, and the resulting currents have a significant impact on the neutral conductor. The resulting temperature rise reduces the operating capacity of the transformer. Fan regulators, motor drives and cycloconverters, as well as arcing devices, emit interharmonics.

IEEE Standard 1159-1995 defines a voltage sag as a reduction in RMS voltage, lasting from 0.5 cycle to 1 minute at the power frequency, reported as the remaining voltage. Voltage sags may occur at both transmission and distribution voltage levels on utility systems. Sags that originate at higher voltages usually spread across the utility system and propagate through transformers to lower-voltage systems. Voltage sags can also be generated within an industrial facility with no involvement of the utility grid; these are often caused by large motors in the facility or by electrical faults. Major causes of voltage sags include various faults, sudden load changes, the starting of large induction motors, and the high inrush current drawn when large transformers are energized.

The dynamic voltage restorer (DVR) is a power-electronic technique for improving power quality and protecting sensitive loads, offering fast response, reliability and moderate cost while effectively compensating deep voltage sags, voltage unbalances and harmonics. The DVR is a series-connected switched voltage source converter that injects a controlled voltage into the supply through an injection transformer or three single-phase transformers. MATLAB/Simulink simulation results, showing that the DVR operates under unbalanced voltage and fault conditions, were obtained using a PI control scheme that acts on the scaled error between the DVR source-side voltage and the sag-correction reference.
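The PI control law at the heart of this scheme can be sketched in a few lines. This is a generic discrete-time illustration with hypothetical gains and a first-order toy plant, not the paper's MATLAB/Simulink DVR model:

```python
# Generic discrete PI controller: u[k] = Kp*e[k] + Ki * Ts * sum(e).
# Gains, limits and the toy plant below are illustrative assumptions.
class PIController:
    def __init__(self, kp, ki, ts, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, reference, measured):
        error = reference - measured
        self.integral += error * self.ts       # accumulate integral term
        u = self.kp * error + self.ki * self.integral
        return min(self.u_max, max(self.u_min, u))  # saturate the output

# Toy closed loop: drive a sagged voltage (0.7 p.u.) back to 1.0 p.u.
pi = PIController(kp=0.8, ki=20.0, ts=1e-3)
v = 0.7
for _ in range(2000):                # 2 s of simulated time
    v += 0.05 * pi.update(1.0, v)    # plant: injected correction per step
print(round(v, 3))                   # settles near 1.0 p.u.
```

In the actual DVR, the error signal is the difference between the measured source-side voltage and the sag-correction reference, and the controller output drives the injection transformer through the converter's PWM stage.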

2 Related Works N. G. Hingorani suggested the principle of custom control [1]. “Custom power” refers to the use of electronic delivery systems of power controllers. Custom power supplies improve the performance and reliability of the energy provided to customers. Power supply customers are always looking for higher standards in the electric company. Detailed analysis of the type of customised electricity compensation, power quality problems, energy quality assessment, specifications and indices suggested by different agencies and diverse approaches to improving energy quality over time [1–4]. The three forms of power efficiency are voltage stability, supply continuity and voltage. Based on this summary, Toshifiimi Ise et al. provided several example concepts for power quality. (5) Available in English. Electronic equipment and technical research were mentioned in the various phases of power engineering by Afshin and Ara Lashkar et al. [5]. In addition, powerconditioners are used to solve problems of power efficiency as a type of electronic equipment with a high strength. The true term ‘specification of power’ (DFACTS) is used to describe the machinery. 1. Comparison is made with 1st FACTS modes and applications. Transmission and distribution networking equipment (e.g. STATCOM, SSSC, UPFC, DSTATCOM, DVR and UPQC). Dixon Juan W. and his associates [6] have developed an active power filter series that acts in the process of the power supply as a sinusoidal current source. The range of simple current in the filter series is regulated by an error signal. A predefined relationship between load tension and load tension. The research results in successful correction of the power factor, harmonic distortion and control of load voltage. According to Devaraju and colleagues, non-standards voltage, current or frequency [7] are a signal of a problem in power quality leading to equipment breakdown. 
Various disturbances affect sensitive industrial loads, utility distribution networks and critical commercial operations which, due to process breakdowns, loss


V. Bhukar and R. K. Soni

of output, idle labour and other factors, can all suffer significant financial damage per incident. These electromagnetic transient studies consider two custom power controllers: the distribution static compensator (DSTATCOM) and the DVR. Singh et al. [8] describe power quality initiatives that can be enforced depending on the end objective and the requirements of the user. Their research identifies a range of key measures that can be introduced at the utility level without causing significant system disruption, and demonstrates the modelling and mitigation performance of DSTATCOM and DVR models as custom power devices. With a modern PWM-based control system, the reduction of the voltage drop is very evident and the DSTATCOM was found to be effective; the rating of the DC storage device primarily determines the compensation and voltage regulation capability. Yun Wei Li et al. [9] proposed a flux-charge-model feedback-controlled DVR system incorporating a downstream fault-current limiting feature: the DVR appears as a large virtual inductance in series with the faulted supply feeder, protecting the DC link from sudden sags and swells and reducing stress on the converter. Pedro Roncero-Sanchez et al. introduced a DVR with a two-level controller to repair voltage sags and harmonic voltage imbalances as compensation for power quality problems. They found that the repetitive control system has a fast transient response and guarantees zero steady-state error for any sinusoidal input and disturbance; the controller can be implemented in either a stationary or a rotating reference frame. Arindam Ghosh and Gerard Ledwich demonstrated the DVR [10]: a DVR can be realised with a set of time-varying reference voltages, and a VSI is used to build the DVR structure. Rajan Sharma and Parag Nijhawan [10] focus on improved power quality with a DSTATCOM feeding linear, non-linear and DTC loads.
This article assesses the effectiveness of the DSTATCOM in distribution networks for offsetting load harmonics under a number of operating and fault conditions. The IGBT gating pulses are generated by a PWM current controller based on the dq transformation.

3 Dynamic Voltage Restorer (DVR) The DVR is a voltage sag and swell compensator built around a voltage source converter. It protects sensitive equipment, such as adjustable-speed drives and programmable logic controllers, from voltage sags and swells. Its principal objective is to regulate the load voltage during a sag/swell by injecting the missing voltage. More precisely, the DVR injects a voltage of the required magnitude and frequency so that the secondary-side voltage is restored to the required waveform and amplitude even if the source voltage is unbalanced or distorted. The series injection is switched primarily by a gate turn-off thyristor (GTO)

Numerical Simulation and Modeling of Improved PI Controller Based …


Fig. 1 Basic structure model of DVR

switch, a fully controllable solid-state power-electronic device, driven by a PWM inverter system (Fig. 1). The overall configuration of the DVR consists of an energy storage unit, a series injection transformer, an inverter system and a filter. A specially designed booster/injection transformer couples the injected voltage from the converter side to the distribution side while limiting the transfer of transients and noise. The storage devices supply the VSC with the required energy via a DC link to generate the injected voltages; batteries, capacitors and superconducting magnetic energy storage (SMES) are typical examples. The voltage source inverter (VSI) converts the DC voltage of the storage unit into a controllable three-phase voltage. To switch the inverter devices, a sinusoidal pulse width modulation (PWM) method is used. The non-linear characteristics of the semiconductor devices in the inverter distort the waveform and introduce ripple at the inverter output; a passive filter unit is therefore used to solve this problem. The DVR is connected in series between the load centre and the distribution transformer (Fig. 2). The DVR is a series-connected, switched voltage source converter that uses a booster transformer to superimpose a tightly controlled voltage (VDVR) on the supply voltage. The converter generates the amplitudes of the three-phase compensating voltages and injects them through the booster transformer at a medium voltage level, counteracting temporary disturbances in the feeder. Figure 3 shows the equivalent circuit of the DVR: when the source voltage rises or drops, the DVR injects a series voltage through the injection/booster transformer to maintain the desired load voltage.
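The series-injection principle described above — the DVR adds exactly the voltage missing between the sagged supply and the desired load waveform — can be sketched numerically. The paper's simulations use MATLAB/Simulink; the following is an equivalent minimal Python/NumPy illustration, assuming an ideal injection transformer and a 30% sag depth chosen only for the example.

```python
import numpy as np

def dvr_injection(v_supply, v_ref):
    """Series voltage a DVR must inject so the load sees v_ref.

    With an ideal injection transformer, V_load = V_supply + V_inj,
    so the missing (compensating) voltage is simply V_ref - V_supply.
    """
    return v_ref - v_supply

# 50 Hz phase-A waveform with a 30% sag (0.7 p.u. retained voltage)
t = np.linspace(0, 0.04, 801)                 # two cycles
v_ref = np.sin(2 * np.pi * 50 * t)            # desired 1.0 p.u. load voltage
v_supply = 0.7 * np.sin(2 * np.pi * 50 * t)   # sagged supply
v_inj = dvr_injection(v_supply, v_ref)

# the injected series voltage restores the load voltage exactly
assert np.allclose(v_supply + v_inj, v_ref)
```

In the real device the same difference is synthesised by the VSI from the DC-link energy and coupled in through the booster transformer.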

4 Proposed Methodology A PI controller's output is the weighted sum of the error between the measured plant output and the desired set point and the integral of that error. The integral term drives the steady-state error to zero for a step input. In the schematic diagram of the PI controller, the input signal is the difference between Vset and Vt. The output of the controller block is the phase angle δ applied to the three-phase references to compensate the extra phase sag/swell. The controller output determines the switching sequence generated


Fig. 2 Simulation of DVR with control strategy

Fig. 3 Location strategy of DVR

at the Pulse Width Modulation (PWM) signal generator. The control angle δ is applied to the PWM generator, and the voltage is regulated through the phase angle of the sinusoidal control signal. The balanced three-phase reference voltages are: VA = 1 ∗ Sin(ωt + δ) VB = 1 ∗ Sin(ωt + δ + 2π/3)


Fig. 4 Proposed system model

VC = 1 ∗ Sin(ωt + δ + 4π/3) The simulation model of the proposed DVR-integrated system is shown in Fig. 4. The PI-controller-integrated DVR system is built from this simulation diagram. A fault is created to introduce a sag condition into the system; the model was simulated with a double line-to-ground fault.
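The control path of Sect. 4 — a PI controller acting on the error Vset − Vt whose output angle δ shifts the three balanced sinusoidal references fed to the PWM generator — can be sketched as follows. This is a hedged Python illustration, not the authors' Simulink implementation; the gains kp and ki and the time step dt are placeholder values.

```python
import numpy as np

def pi_step(error, state, kp=0.5, ki=50.0, dt=1e-4):
    """One discrete PI update: returns (output, new integrator state)."""
    state = state + error * dt
    return kp * error + ki * state, state

def three_phase_refs(t, delta):
    """Balanced unit-amplitude references shifted by the PI output angle."""
    w = 2 * np.pi * 50
    return (np.sin(w * t + delta),
            np.sin(w * t + delta + 2 * np.pi / 3),
            np.sin(w * t + delta + 4 * np.pi / 3))

# drive the measured terminal voltage Vt toward the set point Vset
vset, vt, state = 1.0, 0.7, 0.0
for _ in range(5):
    delta, state = pi_step(vset - vt, state)
va, vb, vc = three_phase_refs(0.0, delta)

# the three references stay balanced: they sum to (numerically) zero
assert abs(va + vb + vc) < 1e-9
```

The three references then feed the sinusoidal PWM generator, which produces the gating pulses for the VSI.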

5 Simulation and Results The simulation results show that the proposed improved PI controller-based DVR effectively compensates voltage sags and improves the power quality of the system. The compensation time was reduced by up to 50% compared with the conventional PI controller-based DVR, and the voltage sag magnitude was reduced by up to 60%. The improved PI controller-based DVR was also found to be more robust and effective under varying load conditions. Models and graphs are used to illustrate the simulation results. Figure 4 shows the Simulink model of the integrated system with a double line-to-ground fault that induces a voltage sag. Figure 5 shows the waveforms of the various voltages after DVR operation, including the injected voltage with sag, the grid voltage, and the load voltage. Voltage, current, and power waveforms are constructed for the proposed system's


Fig. 5 Output analysis of voltage and current

review. Each bus has voltage, power, and current measurement devices mounted for analyzing the characteristics and behaviour of the system under investigation. Figure 6 presents the analysis of the proposed system, which can be adjusted by changing the input parameters of the design block. Figure 8 shows the power output analysis with respect to simulation time (Fig. 7).
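For the waveform analysis described above, the sag condition visible in the measured bus voltages is conventionally flagged from a sliding one-cycle RMS. A minimal Python sketch (the 10 kHz sampling rate, 50 Hz fundamental, and 0.9 p.u. sag threshold are assumptions for illustration, not values stated in the paper):

```python
import numpy as np

def cycle_rms(v, samples_per_cycle):
    """Sliding one-cycle RMS, the usual quantity used to flag a sag."""
    kernel = np.ones(samples_per_cycle) / samples_per_cycle
    return np.sqrt(np.convolve(v**2, kernel, mode='valid'))

fs, f = 10_000, 50
n = fs // f                                   # samples per 50 Hz cycle
t = np.arange(0, 0.1, 1 / fs)
v = np.sin(2 * np.pi * f * t)
v[(t >= 0.04) & (t < 0.08)] *= 0.6            # 40% sag for two cycles

rms = cycle_rms(v, n)
# flag samples whose one-cycle RMS falls below 0.9 p.u. of nominal
sag = rms < 0.9 / np.sqrt(2)
assert sag.any() and not sag[0]
```

The same per-cycle RMS trace is what the plotted sag depth and compensation-time figures are read from.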

Fig. 6 Output analysis of grid voltage

Fig. 7 Injected voltage


Fig. 8 Visualization of sag in injected voltage

Fig. 9 THD analysis of output compensated system

Table 1 Harmonic analysis of proposed methodology

Methods                               %THD
Uncompensated                         46.21
Compensation after proposed system     1.03

The figures of merit discussed above illustrate the ability of the proposed methodology to reduce harmonics (Fig. 9). The method has been successfully applied to harmonic reduction under unbalanced load conditions and has been compared with traditional methods, the uncompensated case, and contemporary research. The harmonic analysis of the proposed methodology is summarised in Table 1.
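The %THD figures in Table 1 are conventionally obtained from the spectrum of the voltage waveform as the ratio of the harmonic RMS to the fundamental RMS. A minimal Python sketch of that computation on a synthetic distorted waveform (the harmonic amplitudes here are illustrative, not the paper's data):

```python
import numpy as np

def thd_percent(v, fs, f0):
    """%THD from an FFT: harmonic RMS over fundamental RMS."""
    n = len(v)
    spec = np.abs(np.fft.rfft(v)) / n
    k0 = int(round(f0 * n / fs))              # bin of the fundamental
    fund = spec[k0]
    harmonics = spec[2 * k0 :: k0][:20]       # bins of the 2nd..21st harmonics
    return 100 * np.sqrt(np.sum(harmonics**2)) / fund

fs, f0 = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)                 # an integer number of cycles
v = (np.sin(2 * np.pi * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)  # 5% third harmonic
     + 0.03 * np.sin(2 * np.pi * 5 * f0 * t)) # 3% fifth harmonic
thd = thd_percent(v, fs, f0)                  # ≈ 5.83%
```

Using a window that spans an integer number of cycles avoids spectral leakage, so the bin amplitudes are exact.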

6 Conclusions The primary goals of this investigation are to use the considered equipment (DVR) to mitigate voltage sag in the load voltage profile induced by various balanced and unbalanced faults, and to reduce the distortion that occurs when a harmonic-producing sensitive load is present in the distribution system, thus significantly improving


the system's electrical efficiency. This investigation focused on the representation of voltage unbalances and their interactions, with particular emphasis on PI control-based mitigation methods. The series-connected voltage source converter known as the Dynamic Voltage Restorer (DVR) is well suited to protecting sensitive loads from increasingly severe faults in the distribution system, and is regarded as an attractive solution for voltage sag because it is dependable and practical. The extensive simulation studies on the DVR were carried out using the highly developed design facilities available in MATLAB/Simulink, and the control scheme was constructed using the PI-controller technique. The DVR, a system-connected custom power protection device, can reduce voltage variation and maintain power output, safeguarding highly sensitive electronic hardware and precision production processes. Due to its small size, ease of use and quick dynamic response, the DVR is considered an impressive technique. The simulation results show how a DVR can be used to correct various faults: the DVR handles both balanced and unbalanced conditions and injects the appropriate voltage components so that the load voltage remains stable and regulated at the nominal value. The DVR may also reduce the THD level in systems with harmonic-producing loads.

References
1. Hingorani NG (1995) Introducing custom power. IEEE Spectrum 32(6):41–48
2. Ghosh A, Ledwich G (2002) Power quality enhancement using custom power devices. Kluwer Academic Publishers
3. Baggini A (2008) Handbook of power quality. Wiley
4. Short TA (2006) Distribution reliability and power quality. CRC Press, Taylor & Francis Group
5. Ara AL, Nabavi Niaki SA (2003) Comparison of the FACTS equipment operation in transmission and distribution systems. 17th international conference on electricity distribution, Barcelona, Session No. 2, Paper No. 44, pp 12–15
6. Dixon JW, Venegas G, Morán LA (1997) A series active power filter based on a sinusoidal current-controlled voltage-source inverter. IEEE Trans Indus Electron 44(5):612–620
7. Devaraju T, Veera Reddy VC, Vijay Kumar M (2012) Modeling and simulation of custom power devices to mitigate power quality problems. Int J Eng Sci Technol 26:1880–1885
8. Singh M, Tiwari V (2011) Modeling analysis and solution of power quality. 10th international conference on environment and electrical engineering
9. Li YW, Mahinda Vilathgamuwa D, Loh PC, Blaabjerg F (2007) A dual-functional medium voltage level DVR to limit downstream fault currents. IEEE Trans Power Electron 22(4)
10. Sharma R, Nijhawan P (2013) Effectiveness of DSTATCOM to compensate the load current harmonics in distribution networks under various operating conditions. Int J Scientific Eng Technol 2(7):713–718
11. Ise T, Hayashi Y, Tsuji K (2000) Definitions of power quality levels and the simplest approach for unbundled power quality services. IEEE proceedings of ninth international conference on harmonics and quality of power, vol 2, pp 385–390
12. Dugan RC, McGranaghan MF, Santoso S, Wayne Beaty H (2004) Electrical power systems quality, 2nd edn. McGraw-Hill
13. Padiyar KR (2007) FACTS controllers in power transmission and distribution. New Age International Publishers


14. Kusko A, Thompson MT (2007) Power quality in electrical systems. McGraw-Hill
15. Pal S, Bondriya PS, Pahariya Y (2013) MATLAB-Simulink model based shunt active power filter using fuzzy logic controller to minimize the harmonics. Int J Scientific Res Publ 3(12):2250–3153
16. Roncero-Sanchez P, Acha E, Ortega-Calderon JE, Feliu V, García-Cerrada A (2009) Versatile control scheme for a dynamic voltage restorer for power-quality improvement. IEEE Trans Power Delivery 24(1)
17. Nijhawan P, Bhatia RS, Jain DK (2013) Improved performance of multilevel inverter-based distribution static synchronous compensator with induction furnace load. IET Power Electron 6(9):1939–1947
18. Sharma R, Nijhawan P (2013) Role of DSTATCOM to improve power quality of distribution network with FOC induction motor drive as load. Int J Emerg Trends Electr Electron (IJETEE, ISSN: 2320-9569) 5(1)
19. Nijhawan P, Bhatia RS, Jain DK (2012) Application of PI controller based DSTATCOM for improving the power quality in a power system network with induction furnace load. Songklanakarin J Sci Technol 2(34):195–201
20. Nijhawan P, Bhatia RS, Jain DK (2012) Role of DSTATCOM in a power system network with induction furnace load. IEEE 5th power India conference
21. Malhar A, Nijhawan P (2013) Improvement of power quality of distribution network with DTC drive using UPQC. Int J Emerg Trends Electr Electron (IJETEE, ISSN: 2320-9569) 5(2)
22. Bhargavi RN (2011) Power quality improvement using interline unified power quality conditioner. 10th international conference on environment and electrical engineering (EEEIC), pp 1–5
23. Palanisamy K, Sukumar Mishra J, Jacob Raglend I, Kothari DP (2010) Instantaneous power theory based unified power quality conditioner (UPQC). 25th annual IEEE conference on applied power electronics conference and exposition (APEC), pp 374–379
24. Suvire GO, Mercado PE (2012) Combined control of a distribution static synchronous compensator/flywheel energy storage system for wind energy applications. IET Generation Transmission Distrib 6(6):483–492
25. Siva Kumar G, Harsha Vardhana P, Kalyan Kumar B (2009) Minimization of VA loading of unified power quality conditioner (UPQC). Conference on POWERENG 2009, Lisbon, Portugal, pp 552–557
26. Khadkikar V, Chandra A, Barry AO, Nguyen TD (2011) Power quality enhancement utilising single-phase unified power quality conditioner: digital signal processor-based experimental validation. Conference on power electronics, vol 4, pp 323–331
27. Khadkikar V, Chandra A, Barry AO, Nguyen TD (2006) Application of UPQC to protect a sensitive load on a polluted distribution network. IEEE PES general meeting
28. Kesler M, Ozdemir E (2010) A novel control method for unified power quality conditioner (UPQC) under non-ideal mains voltage and unbalanced load conditions. 25th annual IEEE applied power electronics conference and exposition (APEC), pp 374–379
29. Kazemi A, Mokhtarpour A, Tarafdar Haque M (2006) A new control strategy for unified power quality conditioner (UPQC) in distribution systems. Conference on power system technology, pp 1–5
30. Monteiro LFC, Aredes M, Moor Neto JA (2003) A control strategy for unified power quality conditioner. IEEE international symposium on industrial electronics, vol 1, pp 391–396
31. Brenna M, Faranda R, Tironi E (2009) A new proposal for power quality and custom power improvement: OPEN UPQC. IEEE Trans Power Delivery 24:2107–2116
32. Shankar S, Kumar A, Gao W (2011) Operation of unified power quality conditioner under different situations. IEEE Power and Energy Society general meeting, pp 1–10
33. Vasudevan M, Arumugam R, Paramasivam S (2005) High performance adaptive intelligent direct torque control schemes for induction motor drives. Serbian J Electr Eng 2(1):93–116
34. Le J, Xie Y, Zhi Z, Lin C (2008) A nonlinear control strategy for UPQC. International conference on electrical machines and systems, pp 2067–2070


35. Rama Rao RVD, Subhransu, Dash S (2010) Power quality enhancement by unified power quality conditioner using ANN with hysteresis control. Int J Comput Appl 6:9–15
36. Kummari NK, Singh AK, Kumar P (2012) Comparative evaluation of DSTATCOM control algorithms for load compensation. IEEE 15th international conference on harmonics and quality of power (ICHQP), pp 299–306
37. Wamane SS, Baviskar JR, Wagh SR, Kumar S (2013) Performance based comparison of UPQC compensating signal generation algorithms under distorted supply and non-linear load conditions. IEEE 8th conference on industrial electronics and applications (ICIEA), pp 38–42
38. Jeraldine Viji A, Sudhakaran M (2012) Generalized UPQC system with an improved control method under distorted and unbalanced load conditions. International conference on computing, electronics and electrical technologies (ICCEET), pp 193–197
39. Conference on power system technology, pp 1–5 (2006)
40. Monteiro LFC, Aredes M, Moor Neto JA (2003) A control strategy for unified power quality conditioner. IEEE international symposium on industrial electronics, vol 1, pp 391–396
41. Brenna M, Faranda R, Tironi E (2009) A new proposal for power quality and custom power improvement: OPEN UPQC. IEEE Trans Power Delivery 24:2107–2116
42. Shankar S, Kumar A, Gao W (2011) Operation of unified power quality conditioner under different situations. IEEE Power and Energy Society general meeting, pp 1–10
43. Vasudevan M, Arumugam R, Paramasivam S (2005) High performance adaptive intelligent direct torque control schemes for induction motor drives. Serbian J Electr Eng 2(1):93–116
44. Le J, Xie Y, Zhi Z, Lin C (2008) A nonlinear control strategy for UPQC. International conference on electrical machines and systems, pp 2067–2070
45. Rama Rao RVD, Subhransu, Dash S (2010) Power quality enhancement by unified power quality conditioner using ANN with hysteresis control. Int J Comput Appl 6:9–15
46. Kummari NK, Singh AK, Kumar P (2012) Comparative evaluation of DSTATCOM control algorithms for load compensation. IEEE 15th international conference on harmonics and quality of power (ICHQP), pp 299–306
47. Wamane SS, Baviskar JR, Wagh SR, Kumar S (2013) Performance based comparison of UPQC compensating signal generation algorithms under distorted supply and non-linear load conditions. IEEE 8th conference on industrial electronics and applications (ICIEA), pp 38–42
48. Jeraldine Viji A, Sudhakaran M (2012) Generalized UPQC system with an improved control method under distorted and unbalanced load conditions. International conference on computing, electronics and electrical technologies (ICCEET), pp 193–197
49. IEEE Std 519-1992 (1993) IEEE recommended practices and requirements for harmonic control in electrical power systems

Alternate Least Square and Root Polynomial Based Colour-Correction Method for High Dimensional Environment Geetanjali Babbar and Rohit Bajaj

Abstract The colours of a digital image depend not only on the lighting conditions and the characteristics of the capturing device but also on the surface qualities of the objects in the picture. The computation of scene colorimetry from raw data remains an unresolved problem, particularly for photographs taken by digital capture equipment under ambiguous lighting conditions. This work therefore proposes an efficient and cost-effective method for colour correction that combines the Root Polynomial (RP) and Alternate Least Square (ALS) methodologies. The main goal of the suggested model is to reduce the error between the reference image and the target image and thereby raise the model's ultimate performance. To this end, a combined ALS- and RP-based colour-correction algorithm is applied to the target images. To make the colour coordinates easier to interpret, the example reference image and the target image are additionally translated into several colour spaces: LAB, LUV and finally RGB. The proposed scheme is evaluated on the Amsterdam Library of Object Images (ALOI) dataset, and simulations are conducted using MATLAB. Different performance metrics, such as Mean, Median, 95% Quantile, and Maximum Error, are used to assess the simulated results. The outcomes of these metrics across various models show that applying the suggested colour correction models yields the least error difference between two images, indicating that colour transfer is accomplished smoothly. Keywords Colour transfer · Homography · Colour indexing · Computer vision · Hybrid · Quality improvement

G. Babbar (B) Chandigarh University, Gharuan, India e-mail: [email protected] Department of CSE, CEC, Landran, Punjab, India R. Bajaj Department of Computer Science, Chandigarh University, Gharuan, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_8


G. Babbar and R. Bajaj

1 Introduction Colour is a crucial component of the images used in artwork, photography, and visualization to express a particular emotion, and it makes up the majority of human visual experience. By changing the colours in an image it is possible to change its mood, imitate different lighting situations, or achieve other aesthetic effects. Colour adjustment may also be required to lessen discrepancies across photos before further processing. For instance, assembling a panorama might be hampered by small colour differences between adjacent shots. The viewer's experience may likewise be affected by slight differences within a stereo pair captured by a single image sensor. Another example is the need to repeat colour adjustments made to one frame across a succession of frames when processing video content. Additionally, in both creative and more practical applications such as these, changing an image's colour content requires skilled and prolonged input from the user, and the solutions freely accessible to non-expert consumers often do not offer adequate control [1, 2]. "Colour mapping" or "colour transfer" is the process that tries to make it easy for the user to achieve such colour changes, by allowing the colour palette and other features to be changed by means of a benchmark image. In this process, the user selects a reference or target image whose colours are preferred, and the original image is then modified so that it acquires the colour palette of the target image [3, 4]. However, the colours of images taken by a camera depend not only on the surface qualities of the objects that make up the scene but also on the lighting circumstances (lighting geometry and illuminant colour) and the characteristics of the capturing apparatus [5].
Normally, throughout this colour tuning process, artists must delicately modify a variety of interdependent factors, including exposure, brightness, white point, and colour mapping; one change in the image may therefore misalign other properties. In automatic multiview image and video stitching, colour balancing, also known as colour correction, is the process of adjusting the colour disparities between adjacent views that result from different exposure settings and view angles. Nevertheless, colour correction has received less attention and a more simplistic treatment than the other important stitching steps of registration and blending. Colour correction and image blending produce superficially similar effects, which has concealed the role of the colour correction model. Only recently, with the increasing demand for and popularity of high-definition photographs and videos, have people started to realize that image blending alone cannot always eliminate the colour differences between views. The earliest attempts to address the colour balance problem for multi-view stitching in the machine vision and multi-view graphics processing communities used exposure adjustment (or gain compensation) [6–8]. With this technique, the intensity gain values of the component images are adjusted to compensate for visual changes brought about by varying exposure levels. Even though it sometimes works,

Alternate Least Square and Root Polynomial Based Colour-Correction …

85

when the lighting changes drastically, it might not fully make up for the colour differences between views. Colour balancing and colour transfer are technologically identical, except that the latter need not be restricted to the overlapping region; colour balance for multi-view image and video stitching can be readily resolved when colour transfer methods are limited to using only data from the overlapping area. After colour correction, various artefacts (such as JPEG block edges) may appear in the rendered images, so automation of this time-consuming operation is preferred. Reinhard et al. [9] were the first to present example-based colour transfer. They assume that the colour distribution is normally distributed in the lαβ colour space, and transfer colour from a source image to a target by matching the mean and variance of the colour distributions in that space. The colour homography theorem recently given by researchers states that when the viewing conditions vary, including changes in illumination and in the shading of nearby surfaces, the colours are related by a homography [10]. Homography techniques are a core technology of geometric computer vision; applications designed around the homographic approach include geometric camera calibration, 3D reconstruction, stereo vision, and the mosaicking of captured images. The colour calibration issue, viewed as a homography problem, can be formulated as the mapping of unit RGBs logged for a colour chart to the equivalent XYZs. Additionally, since the shading across the chart differs in practice, correcting for the homography can produce a 50% better colour correction than direct linear least-squares regression [11].
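The Reinhard-style statistics matching described above can be sketched in a few lines. The original method first converts the images to the decorrelated lαβ space; the sketch below (Python/NumPy, with random arrays standing in for images) shows only the per-channel mean/variance matching step, which is space-agnostic.

```python
import numpy as np

def reinhard_transfer(src, ref, eps=1e-8):
    """Match per-channel mean and std of `src` to `ref` (Reinhard et al.).

    In the original method the images are converted to the decorrelated
    lαβ space first; the statistics matching itself works on any
    (H, W, 3) array, whatever the colour space.
    """
    s_mu, s_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    r_mu, r_sd = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    return (src - s_mu) * (r_sd / (s_sd + eps)) + r_mu

rng = np.random.default_rng(0)
src = rng.normal(0.3, 0.05, (32, 32, 3))   # stand-in source image
ref = rng.normal(0.6, 0.20, (32, 32, 3))   # stand-in reference image
out = reinhard_transfer(src, ref)

# the output now carries the reference image's channel statistics
assert np.allclose(out.mean(axis=(0, 1)), ref.mean(axis=(0, 1)))
```

On real images one would convert RGB → lαβ, apply the transfer, and convert back.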
When it comes to the geometric planar homography problem, we write:

\[
\begin{bmatrix} \alpha x' \\ \alpha y' \\ \alpha \end{bmatrix}
=
\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\qquad \mathbf{x}' = H\,\mathbf{x}
\tag{1}
\]

Equation 1 relates the image coordinates (x, y) and (x′, y′) of the same physical feature in two photographs. Since the homogeneous vector [a b c]^T corresponds to the image coordinates [a/c b/c]^T, the scale factor α in Eq. 1 cancels when recovering the image coordinates (x′, y′). Equation 1 exactly describes the relationship between all pairs of matching points (x, y) and (x′, y′) that lie on the same plane in 3D. Finding at least 4 corresponding points between two images is necessary to solve for a homography (for example, in image mosaicking). Because of its conceptual simplicity and the vast range of approaches it supports, colour mapping has drawn a lot of interest from different domains, including computer vision, graphics, and recent trends in image processing. Applications of colour mapping range from tone mapping and panorama stitching to improving the realism of renderings, with instances even in the safety and medical imaging industries. Although it is difficult


to determine which techniques will be effective enough for the intended targets or will attain the best solution for a given problem. Despite the abundance of accessible techniques, there are still intriguing open research problems and difficulties in this field that could aid the full realization of colour mapping or colour transfer solutions.
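The point mapping of Eq. 1 can be illustrated directly: points are lifted to homogeneous coordinates, multiplied by H, and divided by the third component, so the scale factor α cancels. A minimal Python sketch with an arbitrary example homography (the matrix values are illustrative only):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography (Eq. 1) and dehomogenise."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = pts_h @ H.T                               # x' = H x
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the scale

# a homography combining a translation with a mild perspective term
H = np.array([[1.0,  0.0, 2.0],
              [0.0,  1.0, 1.0],
              [1e-3, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [10.0, 5.0]])
out = apply_homography(H, pts)

# the scale α cancels: doubling H leaves the mapped coordinates unchanged
assert np.allclose(apply_homography(2 * H, pts), out)
```

Solving for H from four or more correspondences (as in image mosaicking) is the standard direct linear transform, not shown here.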

1.1 Motivation The primary objective of colour transfer solutions is always the same: to modify an input image's colours to match those of a reference image. Nevertheless, different solutions have evolved depending on the kind of input required and the particular specifications of each application. Mapping the features of two images is easy when they share similar features to be matched, which in practice requires the scenes to be captured under similar conditions so that matching regions and colours can be obtained easily. As an example, the colours recorded in different parts of a camera's view may differ if calibration or settings are improper, which affects the retrieval of the actual colours of the captured region; such images can be fixed by transferring the colours from one image to the other. Moreover, the exposure or hue of photographs intended for panoramic stitching may differ depending on the camera settings used to take them. Although the changes between the image pairs in these examples are probably minimal, the sheer volume of data makes automatic colour mapping preferable to manual adjustments.
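As background for the combination proposed in this paper, the degree-2 root-polynomial (RP) correction of Finlayson et al. fits a linear map from the expanded basis {R, G, B, √(RG), √(GB), √(RB)} to the target colours; the ALS component, which alternately re-estimates per-pixel shading and the linear map, is omitted from this sketch. A minimal Python illustration on synthetic data (the matrix M_true and the array sizes are illustrative, not the authors' setup):

```python
import numpy as np

def root_poly2(rgb):
    """Degree-2 root-polynomial expansion: {R, G, B, sqrt(RG), sqrt(GB), sqrt(RB)}.

    Every term scales linearly with exposure, which is what makes
    root-polynomial correction exposure-invariant (unlike an ordinary
    polynomial expansion).
    """
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(g * b), np.sqrt(r * b)], axis=1)

# least-squares fit of a 6x3 correction matrix M: target ≈ root_poly2(src) @ M
rng = np.random.default_rng(1)
src = rng.uniform(0.05, 1.0, (100, 3))       # synthetic source colours
M_true = rng.uniform(-0.2, 1.0, (6, 3))      # illustrative ground-truth map
target = root_poly2(src) @ M_true
M, *_ = np.linalg.lstsq(root_poly2(src), target, rcond=None)

# exposure invariance: scaling the input by k scales the correction by k
k = 2.5
assert np.allclose(root_poly2(k * src) @ M, k * target, atol=1e-6)
```

An ALS refinement would alternate this least-squares solve with a per-sample scale estimate until convergence.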

2 Literature Review Over the years, several colour correction techniques have been presented by various authors. We have reviewed some recent publications from renowned publishers such as IEEE, Hindawi, Springer and Elsevier to understand current colour correction models. Li et al. [12] proposed an innovative colour correction technique to efficiently reduce colour disparities between massive sets of multiview photos. The main concept was to first group images using a graph partition algorithm and then sequentially perform intragroup and intergroup correction. Correction parameters were solved for every group of images to remove the colour differences within it. Results obtained on widely used datasets demonstrated that the proposed strategy outperformed existing comparable techniques, showing superior computing efficiency on big image sets and better colour consistency in the most extreme situations. Molina-Cabello et al. [13] proposed a new algorithm for homography estimation. Its core was a convolutional neural network-based homography estimator supplied with a range of input image pair variations. Several variations were created


by changing the colour saturation levels of the source images. Every created pair of photos produced its own estimate of the homography, and the estimates were then integrated to obtain a more precise final assessment. D. I, M. D, et al. [14] demonstrated how predicting and correcting the colours of a set of reference surfaces could significantly raise the quality of the overall process. From a collection of photographs taken by the proposed camera under sunlight, the surface colour under white light can be determined simply. They used a deep network to predict the colours of the reference patches of a colour checker as though it had been present during scene recording, and showed on 9 datasets how the technique improved the first and second stages of the colour constancy procedure. Qiang Zhao et al. [15] introduced a deep neural network that can predict a homography precisely enough to stitch together images with little parallax. The primary components of the network are feature maps with gradually higher resolutions and the hybrid cost volumes that match them. They also suggested a new stitching-oriented loss function that takes image content into account. Moreover, to train the network, the authors created a synthetic training dataset with image pairs that are more naturally occurring and similar to those found in real-world image stitching problems. A brand-new technique for 2D homography estimation utilizing two exact points was unveiled by Juan Guo et al. [16]. The homography was split into three parts: the first and final parts can be estimated from the two known points and their images, respectively, while the intermediate part, a hyperbolic similarity transformation, can be calculated using a variety of primitives (point(s), line(s), and conics). The studies used real and simulated data to confirm the accuracy and adaptability of the method. Man Wu et al. [17] proposed a highly simplified colour mapping algorithm model that inherits the positive and negative two-way vision model, introducing the colour mapping algorithm thoroughly from the direction of automatic colour adjustment of landscape images. Based on this colour mapping methodology, a simulated solution for automatic landscape colour modification was recommended. By reducing the image's sharpness and correcting the colour sharpness of the landscape image, the suggested method enhances colour density; high-quality landscape photographs help with landscape design. To improve the impact of distributed 3D interior design, Kang Huai et al. [18] recommended a technique based on colour image modelling. The authors utilize distributed feature information merging to create a colour image model of a distributed 3D interior design, then perform edge contour recognition and feature extraction on the distributed 3D interior spatial distribution image. An RGB colour decomposition approach was used to break down the colour pixel attributes of the three-dimensional indoor spatial distribution image. This technique had a stronger capacity for feature expression and a higher visual impact. Xiang et al. [19] developed a seamless planar homography model for image stitching that deals with these issues by taking into account the multi-plane geometry of natural scenes. To obtain seamlessly stitched planes, the authors first included local warps estimated in every plane. Moreover, to

88

G. Babbar and R. Bajaj

deal with parallax, they presented a new alignment-guided seam composition. Experimental findings on a variety of difficult datasets showed that the model achieved state-of-the-art stitching performance. Khalid M. Hosny et al. [20] introduced a novel feature extraction method for the classification and identification of colour textures. The technique combines features from a convolutional neural network (CNN) with local binary patterns (LBP), which provide discriminant features and improve textural segmentation results. LBP categorizes photos based on regional features that describe the main elements of the image (image patches). According to the findings, using LBP together with CNN, rather than CNN alone, improves classification. The authors tested the strategy on three difficult colour image datasets (ALOT, CBT, and Outex); compared with conventional CNN models, the method increased classification accuracy by up to 25%. Yarong Jiao et al. [21] showed that the colour of a plane image could be improved, the image effect optimized, and the problem of image distortion with large colour differences addressed using an iterative colour enhancement algorithm based on computer imaging devices. A colour enhancement optimization model was created by combining it with a bilateral filtering technique; it comprised three primary stages of image colour correction: adaptive filtering, estimation, and correction of the illumination parameter and the reflection coefficient parameters. Youngbae Hwang et al. [22] provided a colour transfer framework based on scattered point interpolation to align the colours of a scene between photographs. They solved for a fully nonlinear and nonparametric colour mapping in the 3D RGB colour space using the moving least squares framework, as opposed to traditional colour transfer techniques that use parametric mappings or colour distribution matching. Experiments demonstrated the method's superior quantitative and qualitative performance over earlier colour transfer techniques, and the framework applies to a variety of colour transfer situations, including video colour transfer and colour transfer between different camera models, camera settings, and lighting conditions. Ballabeni et al. [23] presented a unique technique for converting colour information into a grey-level signal, with a focus on the 3D reconstruction of urban settings. The method, called IHE (Intensity Histogram Equalization), was derived from earlier techniques and accepts as input a collection of photographs that may or may not depict the same urban object, depending on the lighting and camera used. IHE was assessed by comparing its 3D reconstruction performance with that of other state-of-the-art algorithms on two datasets, and it generally performed better. A. Philomina Simon et al. [24] suggested a method called Deep Lumina that combines luminance information with the RGB colour space and deep architectural features to classify colour textures efficiently. The approach uses convolutional neural network features from a pre-trained ResNet101 framework, luminance information from the luminance (Y) channel of the YIQ colour model, and support vector machine (SVM) classification.

Alternate Least Square and Root Polynomial Based Colour-Correction …

89

In the RGB-plus-luminance colour field, this methodology evaluated the effectiveness of using luminance information in addition to the RGB colour space. On the Describable Textures Dataset and the Flickr Material Dataset (FMD), the proposed methodology, Deep Lumina, achieved accuracy rates of 90.15% and 73.63%, respectively. To enhance the colour effect of an image, Min Cao et al. [25] suggested a technique for improving the colour enhancement procedure of a plane image using a computer vision system. The method uses the Retinex approach with an adaptive two-dimensional empirical decomposition to decompose the image and achieve improved image colour. According to the experimental findings, this strategy increased the average value of the image by roughly 0.3. Renzheng Xue et al. [26] proposed an optimization method for plane-image colour enhancement based on computer vision and virtual reality, addressing the low brightness contrast of colour images, the hiding of significant detail information, and the deviation of colour information during image acquisition. From the literature above, it can be seen that several researchers have suggested methods for adjusting the colours of two images. These methods undoubtedly produce good outcomes, but after examining the literature we concluded that they could be improved. Moreover, existing colour correction models usually utilize only one model, which causes large errors between the reference image and the target image. Because of this increased error between the two images, existing colour correction models do not operate smoothly, which eventually results in poor visual quality. Furthermore, we observed that the majority of researchers use the Alternate Least Square or Root Polynomial methods in their work, both of which have shown good results.
Owing to these facts, we propose an effective colour correction model in which these two techniques are combined to enhance the overall quality of the image.
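To make the Root Polynomial half of this combination concrete, the sketch below fits a degree-2 root-polynomial correction between a miscoloured target and a reference by plain least squares. This is a minimal illustration of the general RP technique, not the authors' exact implementation; the function names and the synthetic colour cast are our assumptions.

```python
import numpy as np

def rp_expand(rgb):
    """Degree-2 root-polynomial expansion of an (N, 3) array of RGB values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(g * b), np.sqrt(r * b)], axis=1)

def fit_rp_correction(source, reference):
    """Least-squares fit of a 6x3 matrix M so that rp_expand(source) @ M ~ reference."""
    M, *_ = np.linalg.lstsq(rp_expand(source), reference, rcond=None)
    return M

def apply_rp_correction(rgb, M):
    return rp_expand(rgb) @ M

# Example: recover colours distorted by a synthetic channel cross-talk matrix.
rng = np.random.default_rng(0)
reference = rng.uniform(0.05, 1.0, size=(500, 3))   # "true" colours
cast = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.1, 0.7]])
target = reference @ cast                           # miscoloured target image
M = fit_rp_correction(target, reference)
corrected = apply_rp_correction(target, M)
print(np.abs(corrected - reference).mean())         # small residual error
```

Because the root-polynomial expansion contains the plain RGB channels as its first three terms, a purely linear cast like the one above is recovered almost exactly; the extra square-root cross terms give the model headroom for nonlinear casts.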

3 Proposed Work

In this research, a powerful and effective colour correction approach relying on two techniques, the alternate least square (ALS) approach and the root polynomial (RP) approach, is presented to resolve the limitations of currently available colour correction methodologies. The main focus of this research is to provide a solution capable of reducing the variation between the benchmark (reference) image and the test image in the domain of colour correction. This helps the system deliver improved visual quality for the input image given to the framework. To accomplish this task, we hybridize the two models, ALS and RP, to enhance system performance. The proposed hybrid colour correction approach goes through a series of steps to accomplish the intended goal: information gathering, image transformation from the XYZ format to different colour spaces, development of a combined ALS and root polynomial approach applied to every colour model independently, colour variance computation for the colour models, and lastly an effectiveness evaluation for each


colour model. The dataset used in the proposed scheme is a standard one, the Amsterdam Library of Object Images (ALOI) database. This dataset was chosen for the simulation because it is open source and freely accessible on the internet for research purposes. Reference images are selected and processed under different colour spaces, namely LAB, RGB, and LUV, converted from the XYZ format. After this, a set of images of the same object is selected for applying the proposed scheme. The next phase is to compute the performance and analyse it under the different colour spaces mentioned above. The outcomes of the simulation are analysed for these formats, and the results are used to defend the effectiveness of the proposed scheme by comparing the corrected images with the benchmark images. The next subsection provides a detailed, step-by-step explanation of how the suggested hybrid colour correction system operates.
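The XYZ-to-LAB conversion used in this pipeline follows the standard CIE 1976 L*a*b* formula. The sketch below is a minimal scalar implementation; the D65 reference white and the function name are our assumptions, not taken from the paper.

```python
import numpy as np

# CIE 1976 L*a*b* conversion from XYZ, assuming a D65 reference white.
D65 = (95.047, 100.0, 108.883)

def xyz_to_lab(x, y, z, white=D65):
    def f(t):
        d = 6 / 29
        return np.cbrt(t) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Sanity check: the white point itself should map to L=100, a=0, b=0.
L, a, b = xyz_to_lab(*D65)
print(round(L, 4), round(a, 4), round(b, 4))  # → 100.0 0.0 0.0
```

The LUV conversion follows an analogous CIE formula, and libraries such as scikit-image provide ready-made equivalents (e.g. `skimage.color.xyz2lab`) for whole-image arrays.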

3.1 Methodology

The proposed ALS + RP-based colour correction model undergoes a series of steps to achieve the given objective. In this section we discuss the various steps adopted in the proposed model for balancing colours in two images.

Step 1: Data Collection. At the initial stage of the proposed model, all the necessary image information is collected from a dataset that is available online. In the proposed work we use the Amsterdam Library of Object Images (ALOI) database, which holds images captured at different angles to show the variations. ALOI is among the most popular databases of coloured photos for research use, covering about 1,000 small objects. A variety of viewing angles and illumination setups are used to acquire the images so as to preserve the sensory diversity within the object archive, including wide-baseline stereo views. The database comprises a total of 110,250 images; each object is photographed more than 100 times. The colour distribution in these captured images is not the same, because they were taken under various lighting conditions and angles. Both reference images and target images are chosen from the database, and the proposed techniques are then applied to them to achieve the desired outcomes.

Step 2: Conversion of Sample Images to Different Colour Models. Once all the required information is collected, an object image [27] is selected from the given sample of images and converted to the selected colour spaces for analysis. It is important to mention that the image selected in this step represents the reference image with which the comparison is made. Each chosen picture serves as a source image at this stage, and it is


then transformed from the XYZ format into the three colour formats mentioned above, so that the colour coordinates can be identified with ease.

Step 3: Colour Correction Phase. In this step, a target image of the identical object (in which the colour needs to be balanced to match the reference image) is selected, and the proposed scheme is applied to correct the colour in the selected sample. The evaluation of the entries of the root polynomial matrix, the exponent factor, and PRP utilizing Pα is the first step in the colour correction system. This is immediately followed by the evaluation of shading equalization among the images using Eq. (1):

I_D = Σ_{k=1}^{K} w_k G_k    (1)

In the above equation, G_k is a matrix based on the DCT and w_k is the corresponding weight matrix; I_D denotes the shading-field image, and D is the diagonal matrix extracted from the product of the weights and the DCT matrix. The values of the equation vary in every single run, and this continues until all iterations are performed. After the target image's colour errors have been resolved, it is analysed using the three colour format conversions from XYZ [28]. This step is required for a quality comparison between the hybrid ALS + RP-corrected photos and the initial reference pictures for the three colour models.

Step 4: Compute Colour Differences/Errors in Two Images. In the next phase of the proposed hybrid colour correction model, the colour differences between the reference image and the target image are calculated [29]. For this, the colour intensity of the target image in the three colour models is compared with that of the reference image, to check whether the error between them has been reduced. The performance of the proposed model is then evaluated in terms of various metrics.

Step 5: Evaluation of the Proposed Scheme's Performance. In the last phase, the usefulness and effectiveness of the model are evaluated by analysing the error values in both images in MATLAB. The results are analysed in terms of several statistics: the mean colour error, the median, and the 95% quantile error. In addition, the maximum error is calculated for all three format scenarios. The whole process of the proposed scheme is demonstrated in the flow chart (Fig. 1), which summarizes the methodology given above.
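The weighted DCT sum used for the shading field in Step 3 can be sketched numerically. This is one plausible reading of the formula, under our own assumptions: we take each G_k as a rank-one image built from a 1-D DCT basis vector and each w_k as a scalar weight; the function names are ours.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis vectors as the rows of an n x n matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    B = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    B[0] *= np.sqrt(1 / n)     # DC row scaling
    B[1:] *= np.sqrt(2 / n)    # AC rows scaling
    return B

def shading_field(weights, n):
    """I_D = sum_k w_k * G_k, with G_k the outer product of the k-th DCT basis vector."""
    B = dct_basis(n)
    return sum(w * np.outer(B[k], B[k]) for k, w in enumerate(weights))

field = shading_field([1.0, 0.5, 0.25], 8)
print(field.shape)  # → (8, 8)
```

With only the DC weight active, the field is constant (value 1/n everywhere), which matches the intuition of the DC term encoding uniform shading; the higher-order terms add smooth spatial variation.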


Reading the ALOI dataset
↓
Selection of a single sample for processing
↓
Conversion of the selected sample to different colour spaces
↓
Applying the proposed mechanism on individual colour space formats
↓
Separation of individual layers for both the processed and non-processed colour formats of the considered sample
↓
Difference calculation for the LAB, LUV, and RGB colour spaces
↓
Calculation and comparison of the proposed scheme in terms of mean, median, 95% quantile, and maximum error

Fig. 1 Flow diagram of ALS+RP Color Model

4 Results Obtained

This section presents the analytical study of the proposed scheme, for which the simulation is carried out in MATLAB. The outcomes obtained from the simulations are analysed in three different colour space formats. The factors analysed in the simulation are the mean colour value, the median, the 95% quantile, and the maximum error of the proposed model.
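The four reported statistics can all be computed from a per-pixel colour-error map in a few lines. The sketch below is a minimal illustration of how such a summary might be produced; the function name and the synthetic error map are our assumptions.

```python
import numpy as np

def error_summary(errors):
    """Mean, median, 95% quantile, and maximum of a per-pixel colour-error map."""
    e = np.asarray(errors, dtype=float).ravel()
    return {"mean": e.mean(),
            "median": np.median(e),
            "q95": np.quantile(e, 0.95),
            "max": e.max()}

# e.g. absolute channel differences between a corrected image and its reference
rng = np.random.default_rng(1)
diff = np.abs(rng.normal(0.0, 0.02, size=(64, 64, 3)))
stats = error_summary(diff)
print(sorted(stats))  # → ['max', 'mean', 'median', 'q95']
```

Reporting the 95% quantile alongside the maximum is useful because the maximum is dominated by single outlier pixels, whereas the quantile reflects the bulk of the error distribution.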


Fig. 2 The mean value of the proposed scheme across the three colour models

4.1 Performance Evaluation

The proposed scheme's effectiveness is first examined for the three colour models in terms of the overall mean value. The graphical representation of the mean value in the proposed hybrid colour correction model is shown in Fig. 2, where the x-axis depicts the different colour models and the y-axis the mean value. The graph shows that the mean value for the LAB colour space is 1.8, whereas it is 1.85 for the LUV colour model. The mean value for the RGB colour model, however, is 0.0214, significantly lower than for the LAB and LUV colour models. The next parameter after the mean is the median. Figure 3 depicts the corresponding chart, with the various models on the x-axis and the median values on the y-axis. The median value is highest in the LUV model at 1.8, followed by the LAB colour model at 1.4, while the median in the RGB colour model is the lowest at 0.0129, representing its efficiency over the other two models. Furthermore, the effectiveness and stability of the proposed mechanism are checked in terms of the 95% quantile value. Figure 4 shows the corresponding graph, with the different colour models and their 95% quantile values on the x-axis and y-axis. The results show that the 95% quantile is highest in the LAB colour model at 4.3, while it is 3.7 in the LUV colour model; in the proposed RGB colour model it is 0.0589, again the lowest, thereby proving its supremacy. Additionally, the given colour correction scheme's effectiveness is evaluated and studied with respect to the maximum errors. The graph obtained for the same


Fig. 3 The proposed colour correction model's median value across the three colour models

Fig. 4 95% quantile value of the proposed hybrid colour correction model across the three colour models

is shown in Fig. 5, where the different colour models and their maximum error values are depicted on the x-axis and y-axis, respectively. The results demonstrate that the maximum error of the proposed scheme for the LAB colour model is near 4.5, whereas it is slightly above 4.5 for the LUV colour model. In the proposed RGB model, the maximum error is only 0.09305, again signifying its dominance (Table 1).


Fig. 5 Maximum error value of the proposed hybrid colour correction model across the three colour models

Table 1 Comparison of the proposed scheme for all the above-mentioned factors

Parameter      Proposed hybrid model LAB   Proposed hybrid model LUV   Proposed hybrid model RGB
Mean           1.7841                      0.8949                      0.021426
Median         1.442                       1.8152                      0.015923
95% quantile   4.2794                      3.7101                      0.058949
Max            4.4643                      4.5942                      0.09305

As the above charts show, the suggested hybrid colour correction system exhibits strong results with reduced errors. This means that with the proposed colour correction model the error difference between the two images is minimal, implying that the colour transfer is performed smoothly.

5 Conclusion

This study proposes an ALS and RP-based colour correction system that is both efficient and much less susceptible to errors. The colour correction model is evaluated in MATLAB, and the simulation outcomes are reported in terms of various metrics. After analysing the results, we observed that the performance factors mean, median, 95% quantile error, and maximum error are all lowest in the proposed RGB colour model. The suggested RGB colour model's mean, median, 95% quantile, and maximum error values were 0.021%, 0.015%,


0.058%, and 0.093%, respectively, whereas the corresponding values for the LUV colour model were 1.89, 1.81, 3.71, and 4.59%. For the LAB colour model, the values came out to be 1.78 for the mean, 1.448 for the median, 4.27 for the 95% quantile, and 4.46 for the maximum error. These results indicate that, of the three colour models, the proposed RGB colour model shows the least errors. Even though the proposed scheme works effectively compared with existing schemes, further work can extend the colour correction module to other application areas such as 3D imaging, radar images, and X-ray images. Further future work could use artificial intelligence to automate the given scheme.

Conflicts of Interest: The authors declare that there is no conflict of interest regarding the publication of this manuscript.

Funding Statement: This research received no external funding.

Data Availability Statement: The data may be available from the authors upon reasonable request.

References

1. Faridul SH et al (2014) A survey of colour mapping and its applications. Eurographics (State of the Art Reports) 3(2):44–67
2. Faridul SH et al (2016) Colour mapping: a review of recent methods, extensions, and applications. Comput Graph Forum 35(1)
3. Chang H et al (2015) Palette-based photo recolouring. ACM Trans Graph 34(4)
4. Zhang et al (2021) A blind colour separation model for faithful palette-based image recolouring. IEEE Trans Multimedia 24:1545–1557
5. Gasparini F, Schettini R (2003) Unsupervised color correction for digital photographs
6. Xu W, Mulligan J (2010) Performance evaluation of colour correction approaches for automatic multi-view image and video stitching. In: 2010 IEEE computer society conference on computer vision and pattern recognition. IEEE, pp 263–270
7. Wang Z, Yang Z (2020) Review on image-stitching techniques. Multimedia Syst 26(4):413–430
8. Wei LYU et al (2019) A survey on image and video stitching. Virtual Reality Intell Hardware 1(1):55–83
9. Reinhard E, Ashikhmin M, Gooch B, Shirley P (2001) Colour transfer between images. IEEE Comput Graph Appl 21(5):34–41
10. Gong H, Finlayson GD, Fisher RB (2016) Recoding colour transfer as a colour homography. arXiv preprint arXiv:1608.01505
11. Finlayson GD, Gong H, Fisher RB (2016) Colour homography colour correction. Colour Imaging Conf Soc Imaging Sci Technol 1:2016
12. Li Y, Li Y, Yao J, Gong Y, Li L (2022) Global colour consistency correction for large-scale images in 3-D reconstruction. IEEE J Selected Topics Appl Earth Observ Remote Sens 15:3074–3088
13. Molina-Cabello MA, Elizondo DA, Luque-Baena RM, López-Rubio E (2020) Aggregation of convolutional neural network estimations of homographies by colour transformations of the inputs. IEEE Access 8:79552–79560
14. Dubuisson I, Muselet D, Basso-Bert Y, Trémeau A, Laganière R (2022) Predicting the colours of reference surfaces for colour constancy. In: 2022 IEEE international conference on image processing (ICIP), pp 1761–1765


15. Zhao Q, Ma Y, Zhu C, Yao C, Feng B, Dai F (2021) Image stitching via deep homography estimation. Neurocomputing 450:219–229
16. Guo J, Cai S, Wu Z, Liu Y (2017) A versatile homography computation method based on two real points. Image Vis Comput 64:23–33
17. Wu M (2022) Simulation of automatic colour adjustment of landscape image based on colour mapping algorithm. Comput Intell Neurosci 2022:1–9
18. Huai K, Ni L, Zhu M, Zhou H (2022) Distributed 3D environment design system based on colour image model. Mathemat Problems Eng 2022:1–6
19. Xiang TZ, Xia GS, Zhang L (2018) Image stitching using smoothly planar homography. In: Pattern recognition and computer vision. PRCV 2018. Lecture Notes in Computer Science, vol 11256. Springer
20. Hosny KM, Magdy T, Lashin NA, Apostolidis K, Papakostas GA (2021) Refined colour texture classification using CNN and local binary pattern. Mathemat Problems Eng 2021:1–15
21. Jiao Y (2022) Optimization of colour enhancement processing for plane images based on computer vision. J Sensors 2022:1–9
22. Hwang Y, Lee J-Y, In Kweon S, Kim SJ (2019) Probabilistic moving least squares with spatial constraints for nonlinear colour transfer between images. Comput Vis Image Understanding 180:1–12
23. Ballabeni A, Gaiani M (2016) Intensity histogram equalization, a color-to-grey conversion strategy improving photogrammetric reconstruction of urban architectural heritage. J Int Colour Assoc 16:2–23
24. Simon P, Uma BV (2022) Deep Lumina: a method based on deep features and luminance information for colour texture classification. Comput Intell Neurosci 2022:1–16
25. Cao M (2022) Optimization of plane image colour enhancement based on computer vision. Wireless Commun Mobile Comput 2022:1–8
26. Xue R, Liu M, Lian Z (2022) Optimization of plane image colour enhancement processing based on computer vision virtual reality. Mathemat Problems Eng 2022:1–8
27. Vasamsetti S, Setia S, Mittal N, Sardana HK, Babbar G (2018) Automatic underwater moving object detection using multi-feature integration framework in complex backgrounds. IET Comput Vision 12(6):770–778. https://doi.org/10.1049/iet-cvi.2017.0013
28. Li Y, Yin H, Yao J, Wang H, Li L (2022) A unified probabilistic framework of robust and efficient color consistency correction for multiple images. ISPRS J Photogrammetry Remote Sens 190:1–24. https://doi.org/10.1016/j.isprsjprs.2022.05.009
29. Babbar G, Bajaj R (2022) Homography theories used for image mapping: a review. In: 10th international conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO), Noida, India, 13–14 October 2022. IEEE. https://doi.org/10.1109/ICRITO56286.2022.9964762

An Automatic Parkinson's Disease Classification System Using Least Square Support Vector Machine

Priyanshu Khandelwal, Kiran Khatter, and Devanjali Relan

Abstract Parkinson's disease (PD) is a devastating neurological disease that affects millions of people throughout the world. Although there is no known cure for this disease, advances in machine learning approaches allow PD to be diagnosed at an early stage, which can help keep the disease from progressing. This paper proposes a system to classify whether a person has PD using the supervised least-squares support vector machine (LS-SVM) method. We used the UCI machine learning repository dataset, which contains voice data acquired from 31 people, of whom only 8 are healthy subjects. We tested different feature selection methods for selecting optimal features from those given in the dataset, and handled the data imbalance using the Synthetic Minority Oversampling Technique (SMOTE). Finally, LS-SVM and SVM classifiers were used to classify the data into PD and non-PD groups. We compared the results obtained with SVM and LS-SVM under the different feature selection methods. The results show that the system with the ExtraTreesClassifier feature importance method and the LS-SVM classifier outperforms the one using SVM: a classification accuracy of 98.31% was achieved using the ExtraTreesClassifier feature importance method and LS-SVM with a radial kernel, with sensitivity = 1.0, specificity = 0.97, precision = 0.96, recall = 1.0, and F1-score = 0.98. The proposed system's accuracy is superior to the state-of-the-art techniques reported in the literature.

Keywords Parkinson's disease · LS-SVM · SVM · Feature selection

P. Khandelwal · K. Khatter · D. Relan (B) Computer Science Department, BML Munjal University, Gurgaon, Haryana, India e-mail: [email protected] P. Khandelwal e-mail: [email protected] K. Khatter e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_9


100

P. Khandelwal et al.

1 Introduction

Millions of people around the globe suffer from neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease (PD). PD is considered to be the world's second-most-common serious neurodegenerative illness [1]. Every year, more than 1 million people in India over the age of 50 are affected by PD, which impacts the human population as a whole [2–4]. Globally, it is anticipated that there will be 12.9 million instances of PD by 2040 [5]. According to the National Institute of Neurological Disorders, early diagnosis (symptoms lasting less than five years) is only 53% accurate. Early detection, on the other hand, is crucial for effective therapy. Generally, people aged 50 or above show symptoms such as tremor, shaking, rigidity, and problems with movement, balance, and coordination. The leading causes of PD are still unknown. In most cases, however, it results from the death of dopamine-producing brain cells, caused by a combination of hereditary and environmental factors or triggers [6]. PD becomes more common with age, although only around 4% of people with the disease are diagnosed before they are 50. PD is irreversible, i.e. the damage cannot be restored; however, medication can help control the manifestations in PD patients and may help affected individuals manage problems with walking, movement, and tremor. Brain scans to detect the neurological problem are expensive and demand a high level of expertise, yet are currently regarded as the gold standard for diagnosis. There is no exact cure or treatment for this disease, nor a reliable test that can distinguish PD from other conditions [4, 7]; the diagnosis is fundamentally a clinical one, based on history and examination. Clinical doctors determine a PD diagnosis based on the signs, symptoms, and medical history of the patient.
Various authors have attempted to diagnose PD based on gait signals, eye movement, electrovestibulography, etc. [8–10]. One of the most noticeable symptoms of this disease is a low-pitched, monotonous voice accompanied by copious saliva dribbling, and voice recording data can help in the early diagnosis of suspected PD patients [11, 12]. Voice and speech are compromised in PD patients because they depend on laryngeal, respiratory, and articulatory functioning; as a result, vocal disturbance is thought to be one of the disease's first symptoms [13]. To classify PD patients and healthy people from speech signals, various machine learning (ML) based classification techniques have been developed in the literature, and the use of ML for PD has been documented in numerous studies [14–19]. El Maachi et al. offer an intelligent Parkinson detection method that analyses gait data using deep learning algorithms. They used a CNN to build a DNN classifier and tested their algorithm for PD identification and severity prediction using the Unified Parkinson's Disease Rating Scale (UPDRS), a rating technique used to assess the severity and progression of this disease in individuals. The accuracy of the authors' Parkinson's severity estimate was 85.3% [20].


A subset of vocal features and different classifiers were employed in another investigation to classify PD: the authors performed vocal-based PD detection using various classifiers and achieved a best accuracy of 94.7% with a support vector machine (SVM) [21]. In [22], the authors used CNN and ANN models on voice recordings. They created their own dataset to predict the disease, used two models, a VGFR Spectrogram Detector and a Voice Impairment Classifier, and achieved a highest accuracy of 89.15%. In another paper, the authors aimed to diagnose PD using a cloud-based system for telemonitoring Parkinson's patients, collecting the data remotely using smartphones [23]. When voice data and tremor data were used individually, the maximum accuracies in PD identification were 98.3% and 98.5%, respectively. Research on PD detection using a genetic algorithm and an SVM classifier on speech signals obtained a best accuracy of 91.18% [24]. The author of [25] detected Parkinson's symptoms by analysing a heterogeneous dataset with several machine learning methods, resulting in a 3% improvement in performance over previous methodologies. Distinct datasets and multiple machine learning frameworks have been employed to classify PD [14–19, 26–30], producing more accurate results in disease prediction [31–33]. In the system proposed in [34], the authors used a UCI machine learning repository dataset for the diagnosis of PD [35]; the K-Nearest Neighbors (KNN) method (with k = 5) achieved a maximum accuracy of 97.43%. One study proposed a hybrid approach (SMOTE and random forests) using features extracted from speech signals for PD detection and achieved an accuracy of 94.89% [36]. Tsanas et al. utilized 263 samples from 43 patients in an existing database and used two statistical classifiers, support vector machines and random forests, to map the feature subsets to a binary classification response [15].
By detecting dysphonia, Little et al. [35] evaluated sound-based measures for discriminating PD participants. They used a Support Vector Machine (SVM) for classification and achieved a precision of 91.4%. To separate the PD group from the non-PD group, classifiers such as AdaBoost, SVM, K-NN, multi-layer perceptron (MLP), and Naïve Bayes (NB) have also been used. The authors in [37] designed a system combining the Boruta wrapper-based feature selection method with the extreme gradient boosting algorithm and found that the vocal folds' vibration pattern is a crucial indicator of the severity of PD. In another study, the changes in heart rate variability (HRV) between untreated Parkinson's disease patients and healthy controls were examined [38]. Several studies detected PD by applying machine learning classification algorithms to recorded voice features and reported accuracy, sensitivity, specificity, and F1-scores. Table 1 summarizes the relevant literature in this field, showing the various systems proposed by different authors and the accuracy achieved by each. The major goal of this work is to create a reliable system that can accurately classify patients with Parkinson's disease; to this end, a supervised learning approach is employed in this study.


P. Khandelwal et al.

Table 1 Recent literature on PD classification using the UCI machine learning dataset

Year | Author [Reference] | Method | Outcomes
2019 | Ali et al. [18] | LR, LDA, LogR, SVM, GNB, DT, KNN | Accuracy = 70% with SVM
2018 | Anand et al. [39] | LR, KNN, DT, SVM, Naïve Bayes | Best accuracy with KNN = 95.5%
2019 | Yaman et al. [40] | KNN, SVM with 10-FCV | Best accuracy with SVM = 91.25%
2021 | Tiwari [17] | LR, DT, SVM, KNN, Bagging Classifier, XGBoost Classifier | Best accuracy with XGBoost = 95%
2019 | Celik and Omurca [41] | SVM, LogR, ET, GBM, RF | LogR accuracy = 76.03%
2021 | Nishat [42] | LightGBM, XGBoost, GBM | Highest accuracy with LightGBM = 93.39%
2021 | Rohit et al. [19] | KNN, Extra Trees, GA, RF | Best performance with GA + RF classifier = 95.58%
2021 | Mohammadi et al. [28] | LR | Accuracy = 95.22%
2021 | Sheikhi et al. [29] | Rotation forest based model | Accuracy = 79.49%
2021 | Kadam [30] | Deep NN | Accuracy = 92.19%

LR: linear regression, LDA: linear discriminant analysis, LogR: logistic regression, SVM: support vector machine, GNB: Gaussian naïve Bayes, DT: decision tree, KNN: k-nearest neighbour, ET: extra trees, GBM: gradient boosting machine, RF: random forest, GA: genetic algorithm, 10-FCV: 10-fold cross-validation

In this study, we performed PD classification using acoustic features available in the UCI Machine Learning Parkinson's disease database [35]. The dataset has 23 features extracted from audio signals of 31 people, of whom eight belong to the healthy category. We first selected the relevant features using the Extremely Randomized Trees Classifier (ExtraTreesClassifier) and then classified the data into PD and non-PD using the selected features with LS-SVM (least-squares support vector machine) and SVM. This work achieved its best result using LS-SVM, which is more computationally efficient than SVM, and our framework achieved better accuracy than state-of-the-art methods. The paper is structured as follows: the dataset used in the investigation is briefly described in Sect. 2. The solution strategy is covered in Sect. 3. Section 4 presents and discusses the obtained results. Section 5 concludes the paper.


2 Material

The Parkinson's dataset from the University of California, Irvine (UCI) machine learning repository, which includes speech recording data [35], was used in this research. People with Parkinson's disease have many symptoms, including the voice becoming low-pitched and monotonous; therefore, most studies use voice recording datasets to classify PD [43]. This dataset includes voice measurements from 31 participants, of whom 23 had PD, with approximately six recordings per patient. Approximately 75% of the instances in the dataset are PD cases and 25% are healthy cases. Each row in the CSV file represents one of the 195 voice recording instances produced by these people, and each column indicates a distinct voice measure. The dataset contains the ground truth as well as 23 voice features, including the average, minimum, and maximum vocal fundamental frequencies, various measures of fundamental frequency and amplitude variation, and complexity measures.

3 Methodology

The proposed system consists of several steps to classify PD from the recorded voice features. A flowchart (Fig. 1) outlines the steps involved in the proposed system. The Parkinson's dataset was first pre-processed and balanced. Then, the relevant features were selected, and the data was split into train and test sets (80:20). Finally, classification was performed using LS-SVM to separate the PD patients from the healthy people.

3.1 Data Pre-processing

First, the dataset was checked for any missing, redundant, or duplicated values; it contains no duplicated rows, null values, or missing values. Next, we normalized and scaled the features between −1 and 1 using the MinMaxScaler transform. The dataset was then split into 80% for training and 20% for testing, giving 235 training samples and 59 testing samples (these counts correspond to the 294-sample balanced set obtained after oversampling).
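The pre-processing steps above can be sketched with scikit-learn as follows. The feature matrix here is a random stand-in for the 195-sample voice dataset; only the scaling range and split ratio follow the text.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Toy stand-in for the 195-sample voice-feature matrix (values are synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))          # 22 acoustic features
y = rng.integers(0, 2, size=195)        # "status" column: 1 = PD, 0 = healthy

# Scale every feature into [-1, 1], as described above.
scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaled = scaler.fit_transform(X)

# 80:20 train/test split, stratified on the class label.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42, stratify=y)
```

On the original 195-sample set this split yields 156 training and 39 testing samples; the 235/59 figures quoted in the text arise only after the dataset is enlarged to 294 samples by oversampling.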


Fig. 1 A flowchart depicting the steps of the proposed system

3.2 Feature Selection

Feature selection is one of the most crucial steps in creating a powerful machine learning model [44]. Using the "feature importance" methodology for identifying key characteristics, each feature is given a score that reflects its relative importance when making a prediction; the relative scores reveal which characteristics are more significant to the target. The developed model benefits from such a feature importance analysis. An ensemble learning technique, the Extremely Randomized Trees Classifier (ExtraTreesClassifier), is used in this study to obtain a classification result by merging the outputs of numerous non-correlated decision trees. Figure 2 shows the ten most important features extracted using ExtraTreesClassifier. Moreover, we applied other feature selection methods, namely ANOVA f-test feature selection, backward feature elimination, forward feature selection, and the correlation coefficient filter method, and compared their results with feature importance using ExtraTreesClassifier.
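The feature-importance step can be sketched as follows. The data is a synthetic stand-in, and the choice of 200 trees is illustrative rather than a parameter reported in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Synthetic stand-in: 195 samples, 22 features, a few of them informative.
X, y = make_classification(n_samples=195, n_features=22, n_informative=6,
                           random_state=0)

# Fit the ensemble and rank features by impurity-based importance.
forest = ExtraTreesClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)
top10 = np.argsort(forest.feature_importances_)[::-1][:10]

# Keep only the ten highest-scoring features for classification.
X_selected = X[:, top10]
```

The `feature_importances_` scores sum to 1, so the ranking directly reflects each feature's relative contribution across the forest.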

3.3 Handling Imbalanced Dataset

The dataset has about 75% of cases suffering from PD and 25% healthy cases [35]. Figure 3 shows that 147 samples are PD and 48 are healthy. As this data has an unequal class distribution, it is technically imbalanced. In this Parkinson's voice feature dataset, we have 23 features


Fig. 2 Top 10 features using feature importance method with ExtraTreesClassifier

and 195 samples, i.e., the healthy class was in the minority. Before balancing, the shape of X is (195, 10) (195 samples and the 10 features retained after feature selection) and the shape of y is (195,), where X contains all the selected features and y contains the "status" column from the data frame. Oversampling the examples in the minority class is one technique to tackle this problem. It can be achieved simply by replicating minority-class samples from the training dataset before a model is fitted; this helps balance the class distribution but provides the model with no new information. We therefore employed SMOTE, the most widely used oversampling approach for enhancing random oversampling [45–47], to synthesize new examples. After applying SMOTE, the imbalanced dataset became balanced, with X and y of shape (294, 10) and (294,), respectively. The data was then split into train and test sets (80:20); the 20% test split contains 59 samples.

Fig. 3 Value counts (number of samples for healthy (status 0) and Parkinson's disease (status 1))
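SMOTE synthesizes a new minority sample by interpolating between an existing minority sample and one of its k nearest minority neighbours. The following NumPy sketch illustrates only this core idea on toy data; the function name and data are illustrative, and in practice a library implementation (e.g., imbalanced-learn's SMOTE) would be used.

```python
import numpy as np

def smote_minority(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating each
    picked sample toward one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from sample i to every other minority sample.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neigh = np.argsort(d)[1:k + 1]          # skip the sample itself
        j = rng.choice(neigh)
        gap = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# 147 PD vs 48 healthy: synthesize 99 healthy samples to reach 147 + 147 = 294.
rng = np.random.default_rng(1)
X_healthy = rng.normal(size=(48, 10))           # toy minority class
X_synth = smote_minority(X_healthy, n_new=147 - 48)
X_minority_balanced = np.vstack([X_healthy, X_synth])
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class occupies the same region of feature space rather than merely duplicating points.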


3.4 Least Squares Support Vector Machine (LS-SVM)

LS-SVM is the least-squares variant of SVM and is a supervised learning method for classification and regression analysis. The least-squares method is the standard way of approximating the solution of over-determined systems; instead of the convex quadratic programming (QP) problem that standard SVMs solve, this variant solves a set of linear equations, which is computationally efficient and robust [48]. SVM's key drawback is that its optimization is computationally expensive, whereas LS-SVM handles linear equations and thus avoids the quadratic programming difficulties. We used LS-SVM to classify the data into PD and non-PD. Moreover, for inseparable data, LS-SVM is preferable to standard SVM [49].
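The linear system that LS-SVM solves in place of the QP [48] can be sketched as follows. The RBF kernel and the values of the regularization parameter gamma and kernel width sigma are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier (labels y in {-1, +1}) with an RBF kernel
    by solving one linear system instead of a QP (Suykens & Vandewalle)."""
    n = len(y)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, support values alpha

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)

# Sanity check on two well-separated toy blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, y, alpha, b, X)
```

The whole fit reduces to one `np.linalg.solve` call on an (n+1)-by-(n+1) system, which is the source of the computational advantage over solving the SVM dual QP.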

4 Results and Discussion

Table 2 shows the results obtained with different feature selection methods and classifiers (SVM and LS-SVM). The results with LS-SVM outperform those with SVM, and the accuracy obtained with the ExtraTreesClassifier feature selection method and the LS-SVM classifier is the highest. Table 3 compares the accuracy of the proposed method (ExtraTreesClassifier feature selection + LS-SVM) with state-of-the-art methods in the literature on the same dataset; the proposed system gives the highest accuracy.

Table 2 Results using different feature selection methods (top 10 features) with SVM and LS-SVM classifiers

Feature selection method | Accuracy using SVM (%) | Accuracy using LS-SVM (%)
ANOVA f-test feature selection | 83.05 | 91.52
Forward feature selection | 84.74 | 91.52
Backward feature elimination | 89.83 | 86.44
Correlation coefficient filter method | 83.05 | 91.53
ExtraTreesClassifier | 90.13 | 98.31

Table 3 Comparison between the proposed method and state-of-the-art methods in the literature

Method | Accuracy (%)
Proposed method (using LS-SVM) | 98.31
Ali et al. [18] | 70
Anand et al. [39] | 95.513
Tiwari [17] | 95
Nishat [42] | 93.39

Table 4 Confusion matrix showing the actual and predicted PD subjects

N = 59 | Predicted healthy | Predicted Parkinson
True healthy | 36 | 1
True Parkinson | 0 | 22

Table 5 Performance metrics using the proposed method (ExtraTreesClassifier feature selection + LS-SVM)

Evaluation metric | Result
Sensitivity | 1.0
Specificity | 0.97
Precision | 0.96
F1-score | 0.98

Table 4 shows the confusion matrix of actual versus predicted values, i.e., the true Parkinson's, false Parkinson's, true healthy, and false healthy counts; the matrix contrasts the actual values with the predictions made by the machine learning model. After applying the LS-SVM classification algorithm, 22 samples were predicted to have PD and actually had it, while 36 samples were predicted to be healthy and actually were. Only one sample was predicted to have PD while actually being a healthy individual. The evaluation metrics for the system with the ExtraTreesClassifier feature selection method and LS-SVM are shown in Table 5. A sensitivity of 1.0 means the test correctly diagnoses every person who has the target pathology (it predicts all people from the sick group as sick), so it reflects the test's ability to identify positive results. Specificity is the proportion of persons without Parkinson's disease who receive a negative test; a 100% specific test classifies all healthy people as such. Precision indicates how many positively classified entities were relevant. Hence, the evaluation metrics clearly show the high accuracy of the proposed model, and the high specificity, precision, and F1-score show the high reliability of the proposed system. As shown in Table 3, the authors in [18] used logistic regression and a linear SVM and achieved 70% accuracy. If the data is linearly separable in the expanded feature space, the linear SVM maximizes the margin and can lead to a sparser solution. In another study, the authors applied the KNN algorithm [39] and selected features using various dimensionality reduction techniques. Our proposed method of selecting optimal features with the ExtraTreesClassifier feature selection method prior to classification with the robust LS-SVM classifier gave excellent results. The classifier is robust especially for inseparable data. The proposed framework is easier to implement and performs better than state-of-the-art methods.
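The reported metrics follow directly from the confusion matrix in Table 4 (TN = 36, FP = 1, FN = 0, TP = 22, N = 59):

```python
# Metrics recomputed from the confusion matrix in Table 4.
TN, FP, FN, TP = 36, 1, 0, 22

sensitivity = TP / (TP + FN)            # recall on the PD class
specificity = TN / (TN + FP)            # recall on the healthy class
precision   = TP / (TP + FP)
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)
accuracy    = (TP + TN) / (TP + TN + FP + FN)

print(round(sensitivity, 2), round(specificity, 2),
      round(precision, 2), round(f1_score, 2), round(accuracy, 4))
# prints: 1.0 0.97 0.96 0.98 0.9831
```

The last value reproduces the 98.31% overall accuracy quoted for the proposed system, so Tables 4 and 5 are mutually consistent.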


5 Conclusion

Analysis of voice data is essential in the present decade for understanding diagnostic methods for human diseases, especially neurodegenerative diseases. The proposed method is designed to diagnose PD from a voice dataset using machine learning algorithms. In this analysis, we used LS-SVM, a supervised learning approach. Predicting at an early stage whether a person has PD is critical for disease management. We pre-processed the dataset and dealt with the imbalanced data using the SMOTE approach. The optimal features were selected, and classification was performed using the LS-SVM classifier. The outcome demonstrates that our method provides 98.3% accuracy, the highest when compared with other cutting-edge techniques [17, 18, 39, 40, 42, 50–52]. In the future, we intend to develop an application where patients can record their voice and other medical history to detect the early signs of PD. It will be helpful in telemedicine, i.e., in regions with a scarcity of medical institutes and physicians.

References

1. Dorsey E, Sherer T, Okun MS, Bloem BR (2018) The emerging evidence of the Parkinson pandemic. J Parkinson's Disease 8(s1):3–8
2. Poewe W, Seppi K, Tanner CM, Halliday GM, Brundin P, Volkmann J, Schrag A-E, Lang AE (2017) Parkinson disease. Nat Rev Disease Primers 3(1):1–21
3. Naranjo L, Perez CJ, Campos-Roca Y, Martin J (2016) Addressing voice recording replications for Parkinson's disease detection. Expert Syst Appl 46:286–292
4. Marras C, Beck J, Bower J, Roberts E, Ritz B, Ross G, Abbott R, Savica R, Van Den Eeden S, Willis A et al (2018) Prevalence of Parkinson's disease across North America. NPJ Parkinson's Disease 4(1):1–7
5. Heiss JD, Lungu C, Hammoud DA, Herscovitch P, Ehrlich DJ, Argersinger DP, Sinharay S, Scott G, Wu T, Federoff HJ et al (2019) Trial of magnetic resonance-guided putaminal gene therapy for advanced Parkinson's disease. Mov Disord 34(7):1073–1078
6. Sveinbjornsdottir S (2016) The clinical symptoms of Parkinson's disease. J Neurochem 139:318–324
7. Kumaresan M, Khan S (2021) Spectrum of non-motor symptoms in Parkinson's disease. Cureus 13(2)
8. Farashi S (2021) Analysis of vertical eye movements in Parkinson's disease and its potential for diagnosis. Appl Intell 51(11):8260–8270
9. Liu X, Li W, Liu Z, Du F, Zou Q (2021) A dual-branch model for diagnosis of Parkinson's disease based on the independent and joint features of the left and right gait. Appl Intell 51(10):7221–7232
10. Dastgheib ZA, Lithgow B, Moussavi Z (2012) Diagnosis of Parkinson's disease using electrovestibulography. Med Biol Eng Compu 50(5):483–491
11. Bosnić Z, Kononenko I (2009) An overview of advances in reliability estimation of individual predictions in machine learning. Intell Data Anal 13(2):385–401
12. Alsharif O, Elbayoudi K, Aldrawi A, Akyol K (2019) Evaluation of different machine learning methods for caesarean data classification. Int J Inf Eng Electron Bus 11(5):19
13. Duffy JR (2019) Motor speech disorders e-book: substrates, differential diagnosis, and management. Elsevier Health Sci


14. Senturk ZK (2020) Early diagnosis of Parkinson's disease using machine learning algorithms. Med Hypotheses 138:109603
15. Tsanas A, Little MA, McSharry PE, Spielman J, Ramig LO (2012) Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease. IEEE Trans Biomed Eng 59(5):1264–1271
16. Montaña D, Campos-Roca Y, Pérez CJ (2018) A diadochokinesis-based expert system considering articulatory features of plosive consonants for early detection of Parkinson's disease. Comput Methods Programs Biomed 154:89–97
17. Tiwari H, Shridhar SK, Patil PV, Sinchana K, Aishwarya G (2021) Early prediction of Parkinson disease using machine learning and deep learning approaches. EasyChair
18. Ali L, Khan SU, Arshad M, Ali S, Anwar M (2019) A multi-model framework for evaluating type of speech samples having complementary information about Parkinson's disease. In: 2019 International conference on electrical, communication, and computer engineering (ICECCE), pp 1–5. IEEE
19. Lamba R, Gulati T, Alharbi HF, Jain A (2021) A hybrid system for Parkinson's disease diagnosis using machine learning techniques. Int J Speech Technol, 1–11
20. El Maachi I, Bilodeau G-A, Bouachir W (2020) Deep 1d-convnet for accurate Parkinson disease detection and severity prediction from gait. Expert Syst Appl 143:113075
21. Solana-Lavalle G, Galán-Hernández J-C, Rosas-Romero R (2020) Automatic Parkinson disease detection at early stages as a pre-diagnosis tool by using classifiers and a small set of vocal features. Biocybern Biomed Eng 40(1):505–516
22. Johri A, Tripathi A et al (2019) Parkinson disease detection using deep neural networks. In: 2019 Twelfth international conference on contemporary computing (IC3), pp 1–4. IEEE
23. Sajal MSR, Ehsan MT, Vaidyanathan R, Wang S, Aziz T, Al Mamun KA (2020) Telemonitoring Parkinson's disease using machine learning by combining tremor and voice analysis. Brain Inf 7(1):1–11
24. Soumaya Z, Taoufiq BD, Benayad N, Yunus K, Abdelkrim A (2021) The detection of Parkinson disease using the genetic algorithm and SVM classifier. Appl Acoust 171:107528
25. Nagasubramanian G, Sankayya M (2021) Multi-variate vocal data analysis for detection of Parkinson disease using deep learning. Neural Comput Appl 33(10):4849–4864
26. Zhang L, Liu C, Zhang X, Tang YY (2016) Classification of Parkinson's disease and essential tremor based on structural MRI. In: 2016 7th International conference on cloud computing and big data (CCBD), pp 353–356. IEEE
27. Gironell A, Pascual-Sedano B, Aracil I, Marín-Lahoz J, Pagonabarraga J, Kulisevsky J (2018) Tremor types in Parkinson disease: a descriptive study using a new classification. Parkinson's Disease 2018
28. Mohammadi AG, Mehralian P, Naseri A, Sajedi H (2021) Parkinson's disease diagnosis: the effect of autoencoders on extracting features from vocal characteristics. Array 11:100079
29. Sheikhi S, Kheirabadi MT (2022) An efficient rotation forest-based ensemble approach for predicting severity of Parkinson's disease. J Healthcare Eng
30. Kadam VJ, Jadhav SM (2019) Feature ensemble learning based on sparse autoencoders for diagnosis of Parkinson's disease. In: Computing, communication and signal processing: proceedings of ICCASP 2018, pp 567–581. Springer
31. Dwivedi AK (2018) Analysis of computational intelligence techniques for diabetes mellitus prediction. Neural Comput Appl 30(12):3837–3845
32. Mahmud SH, Hossin MA, Ahmed MR, Noori SRH, Sarkar MNI (2018) Machine learning based unified framework for diabetes prediction. In: Proceedings of the 2018 international conference on big data engineering and technology, pp 46–50
33. Ahmed MR, Mahmud SH, Hossin MA, Jahan H, Noori SRH (2018) A cloud based four-tier architecture for early detection of heart disease with machine learning algorithms. In: 2018 IEEE 4th international conference on computer and communications (ICCC), pp 1951–1955. IEEE
34. Mounika P, Rao SG (2021) Machine learning and deep learning models for diagnosis of Parkinson's disease: a performance analysis. In: 2021 Fifth international conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), pp 381–388. IEEE


35. Little M, McSharry P, Hunter E, Spielman J, Ramig L (2008) Suitability of dysphonia measurements for telemonitoring of Parkinson's disease. Nat Prec, 1–1
36. Polat K (2019) A hybrid approach to Parkinson disease classification using speech signal: the combination of SMOTE and random forests. In: 2019 Scientific meeting on electrical-electronics & biomedical engineering and computer science (EBBT), pp 1–3. IEEE
37. Tunc HC, Sakar CO, Apaydin H, Serbes G, Gunduz A, Tutuncu M, Gurgen F (2020) Estimation of Parkinson's disease severity using speech features and extreme gradient boosting. Med Biol Eng Compu 58(11):2757–2773
38. Kallio M, Suominen K, Bianchi AM, Mäkikallio T, Haapaniemi T, Astafiev S, Sotaniemi K, Myllylä V, Tolonen U (2002) Comparison of heart rate variability analysis methods in patients with Parkinson's disease. Med Biol Eng Comput 40(4):408–414
39. Anand A, Haque MA, Alex JSR, Venkatesan N (2018) Evaluation of machine learning and deep learning algorithms combined with dimensionality reduction techniques for classification of Parkinson's disease. In: 2018 IEEE international symposium on signal processing and information technology (ISSPIT), pp 342–347. IEEE
40. Yaman O, Ertam F, Tuncer T (2020) Automated Parkinson's disease recognition based on statistical pooling method using acoustic features. Med Hypotheses 135:109483
41. Celik E, Omurca SI (2019) Improving Parkinson's disease diagnosis with machine learning methods. In: 2019 Scientific meeting on electrical-electronics & biomedical engineering and computer science (EBBT), pp 1–4. IEEE
42. Nishat MM, Hasan T, Nasrullah SM, Faisal F, Asif MA-A-R, Hoque MA (2021) Detection of Parkinson's disease by employing boosting algorithms. In: 2021 Joint 10th international conference on informatics, electronics & vision (ICIEV) and 2021 5th international conference on imaging, vision & pattern recognition (icIVPR), pp 1–7. IEEE
43. Mei J, Desrosiers C, Frasnelli J (2021) Machine learning for the diagnosis of Parkinson's disease: a review of literature. Front Aging Neurosci 13:184
44. Naser M (2021) Mapping functions: a physics-guided, data-driven and algorithm-agnostic machine learning approach to discover causal and descriptive expressions of engineering phenomena. Measurement 185:110098
45. Douzas G, Bacao F, Last F (2018) Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE. Inf Sci 465:1–20
46. Fernández A, Garcia S, Herrera F, Chawla NV (2018) SMOTE for learning from imbalanced data: progress and challenges, marking the 15-year anniversary. J Artif Intell Res 61:863–905
47. Bunkhumpornpat C, Sinapiromsaran K, Lursinsap C (2009) Safe-level-SMOTE: safe-level-synthetic minority over-sampling technique for handling the class imbalanced problem. In: Pacific-Asia conference on knowledge discovery and data mining, pp 475–482. Springer
48. Suykens JA, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Proc Lett 9(3):293–300
49. Kaytez F (2020) A hybrid approach based on autoregressive integrated moving average and least-square support vector machine for long-term forecasting of net electricity consumption. Energy 197:117200
50. Kemp K, Griffiths J, Campbell S, Lovell K (2013) An exploration of the follow-up needs of patients with inflammatory bowel disease. J Crohn's Colitis 7(9):386–395
51. Ayan E, Unver HM (2019) Diagnosis of pneumonia from chest X-ray images using deep learning. In: 2019 Scientific meeting on electrical-electronics & biomedical engineering and computer science (EBBT), pp 1–5. IEEE
52. Lamba R, Gulati T, Jain A (2022) Automated Parkinson's disease diagnosis system using transfer learning techniques. In: Emergent converging technologies and biomedical systems, pp 183–196. Springer

Generation Cost Minimization in Microgrids Using Optimization Algorithms

Upasana Lakhina, I. Elamvazuthi, N. Badruddin, Ajay Jangra, Truong Hoang Bao Huy, and Josep M. Guerrero

U. Lakhina · I. Elamvazuthi (B) · N. Badruddin: Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia; e-mail: [email protected]
A. Jangra: University Institute of Engineering and Technology, Kurukshetra University, Thanesar, India
T. H. B. Huy: Department of Future Convergence Technology, Soonchunhyang University, Chungcheongnam-do, Asan-si 31538, South Korea
J. M. Guerrero: Centre of Research on Microgrids, Department of Energy Technology, Aalborg University, Aalborg, Denmark

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_10

Abstract Optimization methods are applied to discover a near-optimal or optimal solution for a given problem. Many researchers have applied different optimization techniques to microgrids for cost optimization. In this paper, an improved multi-verse optimizer algorithm is proposed for generation cost minimization in microgrids. Two modifications are made to the original algorithm to address the local optima problem and to improve the exploration and exploitation process: the local optima problem is solved using average positioning, and the universe position updating equation is improved by hybridizing it with the sine–cosine algorithm. The simulation results show that the proposed algorithm outperforms the other investigated algorithms in minimizing generation cost and reducing computation time.

Keywords Cost optimization · Energy management · Microgrids · Meta-heuristic algorithms · Renewable energy resources

1 Introduction

In recent years, due to high energy demand and economic and environmental benefits, renewable energy sources (RES) are given priority over fossil fuels [1]. Therefore, there is growing interest in integrating various renewable energy sources in microgrids in the form of hybrid


renewable energy systems with the deployment of new technologies towards sustainable energy systems. Energy management systems are employed over these microgrids to coordinate generation units for reliable and smooth operation [2]. Various energy management strategies are applied to solve optimization problems in microgrids, such as economic dispatch, power scheduling, optimal allocation, and demand response programs [3]. The main aim of an energy management system is to efficiently coordinate supply and load demand, deal with instability, and optimize the functioning of microgrids using different optimization techniques and methods [4]. Several techniques have been introduced for solving the microgrid optimization problem, such as fuzzy logic [5], game theory [6], multi-agent systems [7], artificial intelligence [8], and metaheuristic algorithms. Optimization problems can be categorized as single-objective or multi-objective [9]. Metaheuristic algorithms optimize a given problem using nature- or behavior-inspired techniques. These algorithms have attracted the attention of numerous researchers because of their effectiveness and reliability compared with other methods [10, 11]. They have also successfully optimized different kinds of problems in other application areas such as resource allocation, authentication, clustering, and many more [12–23]. Metaheuristic algorithms can address high-dimensional problems without getting trapped in local optima and converge rapidly towards the best solution.

Various metaheuristic algorithms have recently been proposed by scholars to solve optimization problems effectively in different research applications, such as the flying sparrow search algorithm (FSA) [24], grey wolf optimizer (GWO) [25], slime mould algorithm (SMA) [26], artificial hummingbird algorithm (AHA) [27], multi-verse optimization (MVO) [28], and sine–cosine algorithm (SCA) [29]. A reliable optimization algorithm maintains a balance between exploitation and exploration of the search space and avoids local optimum stagnation. Microgrid optimization problems solved by various metaheuristic algorithms have shown promising results. In [30], a memory-based genetic algorithm is proposed to address a single-objective power scheduling problem that aims to minimize the generation cost in islanded microgrids; the implementation is carried out on the IEEE 37-node test feeder, and the demonstrated results show the effectiveness of the proposed algorithm. Similarly, the authors in [31] introduce an enhanced multi-value player algorithm for optimizing generation cost by optimally scheduling power among available generation units; it is implemented on two microgrids of different scale, the IEEE 37-node and IEEE 141-node systems. An improved particle swarm optimization algorithm is presented in [32] that focuses on a multi-objective power scheduling problem for a microgrid framework, aiming to minimize generation cost and power losses for an IEEE 37-node framework with six generation units. Multi-verse optimization is a popular metaheuristic algorithm that can delve into the craggy search space of a problem without getting trapped and reports optimal solutions [28]. However, no algorithm is guaranteed to perform best for all optimization problems due to problem complexity; hence it behaves


differently for different problems [10]. MVO also has the drawbacks of low convergence speed and low precision, which affect the accuracy with which it explores optimal solutions. Several enhanced and modified versions of MVO have been proposed by researchers to optimize various kinds of problems by improving its exploration and exploitation process and its convergence speed on high-dimensional problems; it has also been hybridized with other algorithms to improve its performance. A modified multi-verse optimizer algorithm was proposed in [33] for numerical optimization and tested over 27 benchmark functions; the simulation results show that modifying the universe updating equation improves the performance of the algorithm. The authors in [34] employed an enhanced multi-verse optimizer algorithm for task scheduling among available resources in cloud computing. An improved version of the multi-verse optimizer algorithm was also proposed with a feature selection technique for three different cybercrime applications, i.e., phishing, spam, and denial-of-service attacks [35], whereas another improved version was applied in [18] for text document clustering. In this paper, an improved multi-verse optimizer algorithm is proposed to optimize the generation cost in an islanded microgrid structure. A two-level modification is made to improve the performance of the algorithm for the power scheduling problem in microgrids: an average positioning concept is used to find the average position between the previous universe and the current best universe, and the output is then used to modify the universe updating equation to find the optimal solution. This improves the convergence speed and accuracy of the original version. The paper is organized as follows: the problem is formulated in Sect. 2; Sect. 3 presents the basic concept of the parent algorithm and the proposed improved version obtained by hybridizing it with the sine–cosine algorithm; Sect. 4 discusses the dataset and demonstrates the simulation results; and Sect. 5 concludes the article.

2 Problem Statement

Microgrids are small-scale smart grids that aim to serve a community using renewable energy resources such as wind power plants, solar power plants, etc. The objective function for generation cost minimization is defined as:

Min OF = \sum_{n=1}^{d} C_n        (1)

where C_n depicts the cost for the nth unit and d is the total number of distributed generation (DER) units, which is 15 in this research. The cost for each DER is given by:

C_n = a_n \times P_n^2 + b_n \times P_n + c_n        (2)


U. Lakhina et al.

where a_n, b_n, and c_n are the cost coefficients, C_n represents the total cost in dollars, and P_n denotes the power of the nth generation unit. For smooth operation of the system, a power-balance constraint requires that the generated power is always equal to or greater than the load demand over the 24 h:

P_G \geq P_L        (3)

Also, any DER's generated power should lie between its minimum power generation and its maximum rated capacity. In this case study, the rated capacities of the wind power plants, solar plants, and CHP are 0.75 MW, 0.25 MW, and 1 MW, respectively. To satisfy these constraints, a penalty function is introduced:

C = \sum_{n=1}^{d} \left( a_n \times P_n^2 + b_n \times P_n + c_n \right) + f_p \times \left| \sum_{n=1}^{d} P_n - P_L \right|        (4)

Here, f_p is the penalty factor that balances the equation.
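Equations (1)–(4) can be illustrated with a short sketch (the coefficients, powers, load, and penalty weight below are hypothetical, not taken from the paper's dataset):

```python
def penalized_cost(powers, coeffs, load, f_p):
    # Eqs. (1)-(2): total quadratic fuel cost of all d units
    fuel = sum(a * p**2 + b * p + c for p, (a, b, c) in zip(powers, coeffs))
    # Eq. (4): penalize any generation/load mismatch, enforcing Eq. (3)
    mismatch = abs(sum(powers) - load)
    return fuel + f_p * mismatch

# Hypothetical two-unit example
coeffs = [(0.01, 2.0, 10.0), (0.02, 1.5, 12.0)]
print(penalized_cost([100.0, 80.0], coeffs, load=180.0, f_p=1000.0))  # balanced: 570.0
print(penalized_cost([100.0, 60.0], coeffs, load=180.0, f_p=1000.0))  # 20 kW short: 20484.0
```

A large penalty weight f_p drives the optimizer toward dispatches that satisfy the demand constraint while still minimizing fuel cost.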

3 Proposed Methodology

3.1 Multi-verse Optimizer Algorithm

Population-based algorithms divide the search into two phases, known as the exploration and exploitation phases of an optimization algorithm. In this research, the multi-verse optimizer algorithm is considered for optimizing the generation cost. This physics-based algorithm works on the concepts of white holes, black holes, and wormholes. The white and black holes are employed for exploration of the search space, whereas wormholes are utilized for exploiting it. Each universe is associated with an inflation rate derived from the fitness function. MVO evolves the universes based on their inflation rates, white holes, black holes, and wormholes. After each iteration, the universes are sorted by their inflation rate and selected using the roulette wheel mechanism. The rules applied during optimization with MVO are as follows:

1. The probability of having a white hole is directly proportional to the inflation rate.
2. The probability of having a black hole is inversely proportional to the inflation rate.
3. White holes send objects from universes with a high inflation rate.
4. Black holes receive objects in universes with a low inflation rate.
5. Objects are transferred from high-inflation universes to low-inflation universes through wormholes.

The mathematical model of MVO is constructed in the following steps. Each universe in the multi-verse theory is represented as:

Generation Cost Minimization in Microgrids Using Optimization …



X_i = \begin{bmatrix}
u_1^1 & u_1^2 & \cdots & u_1^n \\
u_2^1 & u_2^2 & \cdots & u_2^n \\
\vdots & \vdots & \ddots & \vdots \\
u_m^1 & u_m^2 & \cdots & u_m^n
\end{bmatrix}        (5)

Here, X_i is the universe matrix, where n is the number of decision variables (i.e., the number of generation units in the problem) and m is the number of solutions. Based on the normalized inflation rate, the parameters of the universes are exchanged through white holes:

x_i^j = \begin{cases} x_k^j & r_1 < NI(X_i) \\ x_i^j & r_1 \geq NI(X_i) \end{cases}        (6)

where x_i^j indicates the jth parameter of the ith universe, X_i indicates the ith universe, NI(X_i) is the normalized inflation rate of the ith universe, r_1 is a random number in [0, 1], and x_k^j indicates the jth parameter of the kth universe selected by the roulette wheel selection mechanism. After sorting the universes, the wormhole existence probability (WEP) and traveling distance rate (TDR) are computed using the formulas:

WEP = \min + l \times \left( \frac{\max - \min}{L} \right)        (7)

Here, min stands for the minimum and max for the maximum value of WEP; in the original version, min is set to 0.2 and max to 1. Also, l stands for the current iteration, and L stands for the total number of iterations, which is 1000 in the proposed work.

TDR = 1 - \frac{l^{1/s}}{L^{1/s}}        (8)

where s is the exploitation rate; its standard value in the original version is 6. The positions of the universes are then updated around the current best solution using:

x_i^j = \begin{cases}
X_j + TDR \times \left( (ub_j - lb_j) \times r_4 + lb_j \right) & r_3 < 0.5 \text{ and } r_2 < WEP \\
X_j - TDR \times \left( (ub_j - lb_j) \times r_4 + lb_j \right) & r_3 \geq 0.5 \text{ and } r_2 < WEP \\
x_i^j & r_2 \geq WEP
\end{cases}        (9)

where X_j indicates the jth parameter of the best universe formed so far, TDR and WEP are the coefficients defined above, lb_j and ub_j are the lower and upper bounds of the jth variable, x_i^j indicates the jth parameter of the ith universe, and r_2, r_3, and r_4 are random numbers in [0, 1].
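The coefficient schedules of Eqs. (7)–(8) and the wormhole update of Eq. (9) can be sketched for a single dimension as follows (min = 0.2, max = 1, s = 6, and L = 1000 follow the values stated above; the sample values are illustrative, and a full implementation would loop over all universes and dimensions each iteration):

```python
import random

def wep(l, L, wep_min=0.2, wep_max=1.0):
    # Eq. (7): wormhole existence probability grows linearly over iterations
    return wep_min + l * (wep_max - wep_min) / L

def tdr(l, L, s=6):
    # Eq. (8): traveling distance rate shrinks as iterations progress
    return 1 - (l ** (1 / s)) / (L ** (1 / s))

def wormhole_update(x_ij, best_j, lb, ub, wep_l, tdr_l, rng=random):
    # Eq. (9): with probability WEP, move around the best universe;
    # the step toward or away from it is scaled by TDR
    r2, r3, r4 = rng.random(), rng.random(), rng.random()
    if r2 >= wep_l:
        return x_ij                      # no wormhole: keep the current value
    step = tdr_l * ((ub - lb) * r4 + lb)
    return best_j + step if r3 < 0.5 else best_j - step

L = 1000
print(wep(0, L), wep(L, L))              # 0.2 at the start, 1.0 at the end
print(round(tdr(1, L), 3), tdr(L, L))    # ~0.684 early, 0.0 at the last iteration

random.seed(0)
x_new = wormhole_update(0.5, 0.8, lb=0.0, ub=1.0, wep_l=1.0, tdr_l=0.5)
print(abs(x_new - 0.8) <= 0.5)           # the step stays within TDR of the best
```

Growing WEP and shrinking TDR shift the search from exploration early on to fine-grained exploitation near the end of the run.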


3.2 Improved Multi-verse Optimizer Algorithm

The proposed IMVO algorithm is modified with two objectives: to avoid trapping in local optima, and to improve the exploration and exploitation rates while searching for the optimal solution. In this improved version, MVO is hybridized with the sine-cosine algorithm (SCA) to merge their advantages and make the global and local search more effective. When the universes are unable to discover better solutions, they are reformed using the sine and cosine functions of the SCA. Equation 9 is the primary equation of the multi-verse optimizer algorithm, and it is enhanced using the concepts of average positioning and the sine and cosine functions of SCA. This helps maintain the balance between the main coefficients, the traveling distance rate (TDR) and the wormhole existence probability (WEP), and the random variable r_4. The equation is enhanced in two phases. In the first phase, the inflation rate is improved using average positioning: the current universe in a wormhole is formed by taking the average of the previous universe and the best universe. The universe is then computed using a modified equation of the sine-cosine algorithm. The average positioning equation and the modified equation for updating a universe are given by:

AP = \frac{x_j + x_i^j}{2}        (10)

Here, AP is the average position, i.e., the average of the best universe found so far and the previous universe; x_j is the best universe formed so far, and x_i^j is the previous universe.

x_i^j = \begin{cases}
AP + TDR \times \sin(2\pi r_5) \times \left| 2 \times r_6 \times AP \right| & r_3 < 0.5 \text{ and } r_2 < WEP \\
AP + TDR \times \cos(2\pi r_5) \times \left| 2 \times r_6 \times AP \right| & r_3 \geq 0.5 \text{ and } r_2 < WEP \\
x_i^j & r_2 \geq WEP
\end{cases}        (11)

Here, r_5 and r_6 are random values in [0, 1].
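An illustrative reading of the two-phase modification of Eqs. (10)–(11) for one dimension (this is a sketch of the idea, not the authors' MATLAB implementation; the sample values are hypothetical):

```python
import math
import random

def imvo_update(x_prev, x_best, wep_l, tdr_l, rng=random):
    # Phase 1 (Eq. 10): average position of the previous and best universes
    ap = (x_best + x_prev) / 2
    r2, r3, r5, r6 = (rng.random() for _ in range(4))
    if r2 >= wep_l:
        return x_prev                    # no wormhole: value unchanged
    # Phase 2 (Eq. 11): perturb AP with the sine/cosine operator of SCA
    wave = math.sin(2 * math.pi * r5) if r3 < 0.5 else math.cos(2 * math.pi * r5)
    return ap + tdr_l * wave * abs(2 * r6 * ap)

random.seed(1)
x_new = imvo_update(x_prev=0.4, x_best=0.8, wep_l=1.0, tdr_l=0.3)
# AP = 0.6, so the new value lies within 0.3 * |2 * r6 * 0.6| <= 0.36 of AP
print(0.24 <= x_new <= 0.96)  # True
```

Anchoring the move on the average position keeps a memory of the previous universe, while the bounded sine/cosine oscillation lets the search step both toward and away from that anchor, which is what helps escape local optima.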

4 Simulation Discussion

The experiments are conducted on a system with a Windows 10 64-bit operating system, an Intel(R) Core(TM) i5 processor, and 8 GB of RAM. The algorithms are implemented in MATLAB for the 15-unit test system (Fig. 1).


Fig. 1 The flowchart of proposed IMVO algorithm

4.1 Dataset Description

The dataset consists of the generation and load data and the cost coefficients. The objective function is to minimize the generation cost based on the data adopted from [31]. It is expected that the generated power will be more than or equal to the demand for each hour. The proposed IMVO and four other algorithms were implemented on the IEEE 141-node test system. The parameters selected for these algorithms are adopted from their parent settings. The generation dataset and cost coefficients are derived from [31], but WP1 is taken as 3 plants, WP2 as 3 plants, and WP3 as 2 plants, 8 wind units in total; similarly, PV1 is 3 units and PV2 is 3. CHP


in this configuration is one unit. The cost coefficients are adjusted accordingly. The 24-h load dataset is given in Table 1. The parameter settings for the investigated algorithms are given below:

1. IMVO: population size = 50, maximum number of generations = 1000, WEP = 0.2, TDR = 6.
2. MVO: population size = 50, maximum number of generations = 1000, WEP linearly decreases from 0.9 to 0.2, r = 6.
3. MMVO: population size = 50, maximum number of generations = 1000, WEP linearly decreases from 0.9 to 0.2, r = 0.6.
4. PSO: population size = 50, maximum number of iterations = 1000, learning factors = 1.5, inertia weight linearly decreases from 0.9 to 0.4.
5. AHA: population size = 50, maximum number of iterations = 1000, migration coefficient = 2n.

Table 1 Load (kW) dataset for the 15-unit system [31]

Hours   Load (kW)
1       3482
2       2946
3       2761
4       2558
5       2541
6       266
7       3635
8       4339
9       4748
10      5100
11      5231
12      5306
13      5454
14      5215
15      5363
16      5383
17      5198
18      5051
19      4496
20      5275
21      5479
22      5536
23      5370
24      4611


4.2 Experimental Results

In this section, experimental results are reported for the 15-unit test system. The algorithms are implemented on the given dataset and executed for 30 independent runs for each of the 24 h to ensure a fair evaluation. The best cost for each hour over these runs is reported for performance evaluation. The hourly generation cost obtained by each examined algorithm is shown in Table 2. The total 24-h cost by each algorithm is shown in Fig. 2: IMVO, MVO, MMVO, PSO, and AHA yield $3168.17, $3211.43, $3436.10, $3253.05, and $3598.03, respectively. The analysis shows that the proposed algorithm outperforms the other algorithms. Also, the average computation times of IMVO, MVO, MMVO, PSO, and AHA are 0.41 s, 0.43 s, 0.87 s, 2.66 s, and 0.29 s, respectively.

Table 2 Generation cost ($) for each hour by the investigated algorithms

Hours   IMVO     MVO      MMVO     PSO      AHA
1       87.75    87.73    92.31    88.02    101.65
2       74.09    74.23    79.05    78.37    92.18
3       66.66    70.97    66.53    75.14    89.06
4       62.88    67.38    71.79    71.73    85.78
5       67.08    71.52    73.15    71.59    85.61
6       68.26    72.60    68.33    72.67    86.64
7       94.05    95.25    102.16   103.75   118.10
8       111.94   121.46   121.43   125.37   144.51
9       119.52   133.41   136.71   133.79   152.21
10      125.76   141.73   163.17   145.97   160.86
11      138.55   139.67   164.60   144.15   164.86
12      181.63   170.56   190.66   170.89   189.03
13      195.03   187.59   197.30   184.00   201.26
14      212.21   210.81   217.67   211.28   218.45
15      221.88   221.82   225.66   221.72   225.75
16      229.74   229.59   246.99   229.63   229.54
17      209.47   206.74   213.57   207.97   212.53
18      157.83   151.62   174.62   154.70   176.66
19      110.87   123.92   133.68   128.55   147.18
20      127.96   127.86   141.54   127.87   151.45
21      131.47   131.46   142.94   131.46   154.03
22      132.47   132.37   157.58   132.39   154.77
23      129.64   129.53   142.84   129.51   134.74
24      112.05   111.65   111.82   111.55   121.27


Fig. 2 Total cost by each algorithm for 24 h

The convergence study shows that the proposed algorithm explores the search space better and outputs the best result. Its convergence speed is also good compared to the other algorithms. Figure 3 represents the convergence curve at hour 6. The proposed IMVO explores the search space without getting stuck in local optima and then converges to the best solution with a high convergence speed. Similarly, Fig. 4 represents the convergence curve for hour 19. It can be concluded from the graph that IMVO initially explores the search space efficiently and, with no stagnation, converges to the optimal solution faster than the other algorithms. It reliably finds the optimal solution and gives promising results for optimizing the power scheduling problem in microgrids by minimizing the generation cost.

Fig. 3 Convergence graph of hour 6


Fig. 4 Convergence graph of hour 19

5 Conclusion

Distributed generation in microgrids consists of renewable energy sources, which are intermittent in nature, and some non-renewable energy sources, which provide a continuous power supply. The stochastic generation and variable load demand in microgrids raise the need for optimizing the power scheduling problem for reliable and economical operation. This study proposes an improved multi-verse optimizer algorithm for generation cost minimization and reduction in computation time. In this improved version, a two-step modification is made, which helps to avoid local-minima stagnation and enhances exploitation and exploration of the search space to find the optimal solution. First, average positioning was introduced to avoid local-optima stagnation, and the universe position updating equation was modified by hybridizing it with the sine-cosine algorithm. The results demonstrate that the proposed IMVO outperforms all the other algorithms and optimizes generation cost for large-scale microgrids. It also reduces the computation time, making it suitable for practical implementation. The convergence study shows that IMVO explores the search space effectively and has a high convergence speed. In the future, this algorithm can be studied for multi-objective optimization, and transmission losses can be considered in the optimization.

Author Contributions U.P. carried out the research and participated in drafting the manuscript. I.E. and N.B. supervised, analyzed the results, and reviewed the manuscript. A.J., B.H.T., and J.M.G. gave critical revision of the manuscript. All authors read and approved the final manuscript.


Acknowledgements The authors would like to thank Universiti Teknologi PETRONAS (UTP) Malaysia, University Institute of Engineering and Technology, Kurukshetra University, India, Institute of Engineering and Technology, Thu Dau Mot University, Thu Dau Mot VN-57, Vietnam, and Centre of Research on Microgrids, Department of Energy Technology, Aalborg University, Denmark for their support.

References 1. Raya-Armenta JM, Bazmohammadi N, Avina-Cervantes JG, Sáez D, Vasquez JC, Guerrero JM (2021) Energy management system optimization in islanded microgrids: an overview and future trends. Renew Sustain Energy Rev 149:111327. https://doi.org/10.1016/j.rser.2021.111327 2. Barik AK, Jaiswal S, Das DC (2022) Recent trends and development in hybrid microgrid: a review on energy resource planning and control. Int J Sustain Energy 41(4):308–322. https:// doi.org/10.1080/14786451.2021.1910698 3. Al-Ismail FS (2021) DC microgrid planning, operation, and control: a comprehensive review. IEEE Access 9:36154–36172. https://doi.org/10.1109/ACCESS.2021.3062840 4. Thirunavukkarasu GS, Seyedmahmoudian M, Jamei E, Horan B, Mekhilef S, Stojcevski A (2022) Role of optimization techniques in microgrid energy management systems—a review. Energy Strateg Rev 43:100899. https://doi.org/10.1016/j.esr.2022.100899 5. Mansouri SA, Ahmarinejad A, Nematbakhsh E, Javadi MS, Jordehi AR, Catalão JPS (2020) Energy management in microgrids including smart homes: a multi-objective approach. Sustain Cities Soc 69:2021. https://doi.org/10.1016/j.scs.2021.102852 6. Movahednia M, Karimi H, Jadid S (2022) A cooperative game approach for energy management of interconnected microgrids. Electr Power Syst Res 213:108772. https://doi.org/10.1016/j. epsr.2022.108772 7. Eddy FYS (2016) A Multi agent system based control scheme for optimization of microgrid operation, no 2015, p. 168 8. Zhou S et al. (2020) Combined heat and power system intelligent economic dispatch: a deep reinforcement learning approach. Int J Electr Power Energy Syst 120:106016. https://doi.org/ 10.1016/j.ijepes.2020.106016 9. Zandrazavi SF, Guzman CP, Pozos AT, Quiros-Tortos J, Franco JF (2022) Stochastic multiobjective optimal energy management of grid-connected unbalanced microgrids with renewable energy generation and plug-in electric vehicles. Energy 241:122884. https://doi.org/10.1016/ j.energy.2021.122884 10. 
Khan B, Singh P (2017) Selecting a meta-heuristic technique for smart micro-grid optimization problem: a comprehensive analysis. IEEE Access 5:13951–13977. https://doi.org/10.1109/ ACCESS.2017.2728683 11. Zia MF, Elbouchikhi E, Benbouzid M (2018) Microgrids energy management systems: a critical review on methods, solutions, and prospects. Appl Energy 222(May):1033–1055. https://doi. org/10.1016/j.apenergy.2018.04.103 12. Som T, Chakraborty N (2014) Evaluation of different hybrid distributed generators in a microgrid—a metaheuristic approach. Distrib Gener Altern Energy J 29(4):49–77. https://doi.org/ 10.1080/21563306.2014.11442730 13. Mahesh K, Nallagownden P, Elamvazuthi I (2017) Optimal placement and sizing of renewable distributed generations and capacitor banks into radial distribution systems. Energies 10(6):1– 24. https://doi.org/10.3390/en10060811 14. Ganesan T, Vasant P, Elamvazuthi I (2013) Hybrid neuro-swarm optimization approach for design of distributed generation power systems. Neural Comput Appl 23(1):105–117. https:// doi.org/10.1007/s00521-012-0976-4


15. Nikmehr N, Najafi Ravadanegh S (2015) Optimal power dispatch of multi-microgrids at future smart distribution grids. IEEE Trans Smart Grid 6(4):1648–1657. https://doi.org/10.1109/TSG. 2015.2396992 16. Elamvazuthi I, Ganesan T, Vasant P (2011) A comparative study of HNN and Hybrid HNNPSO techniques in the optimization of distributed generation (DG) power systems. In: 2011 international conference on advanced computer science and information systems, pp 195–200 17. Nurhanim K, Elamvazuthi I, Izhar LI, Ganesan T (2017) Classification of human activity based on smartphone inertial sensor using support vector machine. In: 2017 IEEE 3rd international symposium in robotics and manufacturing automation (ROMA), pp 1–5. https://doi.org/10. 1109/ROMA.2017.8231736 18. Abasi A, Khader AT, Al-Betar MA (2022) An improved multi-verse optimizer for text documents clustering. Kufa J Eng 13(2):28–42. https://doi.org/10.30572/2018/kje/130203 19. Vasant P, Andrew TG, Elamvazuthi I (2012) Improved tabu search recursive fuzzy method for crude oil industry. Int J Model Simul Sci Comput 3. https://doi.org/10.1142/S17939623115 00024 20. Fayek HM, Elamvazuthi I, Perumal N, Venkatesh B (2014) A controller based on optimal type-2 fuzzy logic: systematic design, optimization and real-time implementation. ISA Trans 53(5):1583–1591. https://doi.org/10.1016/j.isatra.2014.06.001 21. Crisostomi E, Liu M, Raugi M, Shorten R (2014) Plug-and-play distributed algorithms for optimized power generation in a microgrid. IEEE Trans Smart Grid 5(4):2145–2154. https:// doi.org/10.1109/TSG.2014.2320555 22. Timothy Ganesan IE (2015) Pandian Vasant, Advances in Metaheuristics 23. Shaikh PH, Nor NBM, Nallagownden P, Elamvazuthi I (2018) Intelligent multi-objective optimization for building energy and comfort management. J King Saud Univ Eng Sci 30(2):195–204. https://doi.org/10.1016/j.jksues.2016.03.001 24. 
Nguyen TT, Ngo TG, Dao TK, Nguyen TTT (2022) Microgrid operations planning based on improving the flying sparrow search algorithm. Symmetry (Basel) 14(1):1–21. https://doi.org/ 10.3390/sym14010168 25. Makhadmeh SN, Khader AT, Al-Betar MA, Naim S, Abasi AK, Alyasseri ZAA (2021) A novel hybrid grey wolf optimizer with min-conflict algorithm for power scheduling problem in a smart home. Swarm Evol Comput 60:100793. https://doi.org/10.1016/j.swevo.2020.100793 26. Kamboj VK et al. (2022) A cost-effective solution for non-convex economic load dispatch problems in power systems using slime mould algorithm. Sustain 14(5). https://doi.org/10. 3390/su14052586 27. Zhao W, Wang L, Mirjalili S (2022) Artificial hummingbird algorithm: a new bio-inspired optimizer with its engineering applications. Comput Methods Appl Mech Eng 388:114194. https://doi.org/10.1016/j.cma.2021.114194 28. Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27(2):495–513. https://doi.org/10.1007/s00521015-1870-7 29. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst 96:120–133. https://doi.org/10.1016/j.knosys.2015.12.022 30. Askarzadeh A (2018) A memory-based genetic algorithm for optimization of power generation in a microgrid. IEEE Trans Sustain Energy 9(3):1081–1089. https://doi.org/10.1109/TSTE. 2017.2765483 31. Ramli MAM, Bouchekara HREH, Alghamdi AS (2019) Efficient energy management in a microgrid with intermittent renewable energy and storage sources. Sustain 11(14). https://doi. org/10.3390/su11143839 32. Gholami K, Dehnavi E (2019) A modified particle swarm optimization algorithm for scheduling renewable generation in a micro-grid under load uncertainty. Appl Soft Comput J 78:496–514. https://doi.org/10.1016/j.asoc.2019.02.042 33. Jui JJ, Ahmad MA, Rashid MIM (2020) Modified multi-verse optimizer for solving numerical optimization problems. 
In: 2020 IEEE international conference on automatic control and intelligent systems (I2CACIS 2020) proceedings, pp 81–86. https://doi.org/10.1109/I2CACIS49202.2020.9140097


34. Shukri SE, Al-Sayyed R, Hudaib A, Mirjalili S (2021) Enhanced multi-verse optimizer for task scheduling in cloud computing environments. Expert Syst Appl 168:114230. https://doi.org/10.1016/j.eswa.2020.114230
35. Alzaqebah M, Jawarneh S, Mohammad RMA, Alsmadi MK, ALmarashdeh I (2021) Improved multi-verse optimizer feature selection technique with application to phishing, spam, and denial of service attacks. Int J Commun Networks Inf Secur 13(1):76–81. https://doi.org/10.17762/ijcnis.v13i1.4929

Diagnosis of Mental Health from Social Networking Posts: An Improved ML-Based Approach Rohit Kumar Sachan, Ashish Kumar, Darshita Shukla, Archana Sharma, and Sunil Kumar

Abstract Social networking and microblogging websites carry rich information about users' personal lives and mental health. A systematic analysis of this information can be used to understand the mental and psychological state of the users. It can also be used for preventive decision-making in case of mental illness. Various machine learning techniques have already been used in past works to extract users' mental health from social media data. There is still a need to identify the most effective approach among the various machine learning techniques for diagnosing sadness in data that reflect negativity. With this motivation, we use five supervised machine learning algorithms with a hybrid of text feature extraction techniques: Term Frequency-Inverse Document Frequency (TF-IDF) and Bag of Words (BoW). Our results establish improved accuracy and precision over the state-of-the-art works. Experimental results show the benefit of using TF-IDF and BoW collectively with all the ML algorithms. It is inferred that the support vector machine performs best among the various machine learning algorithms, with the highest balanced accuracy of 99.7%. Keywords Depression · Machine learning · Mental illness · Natural language processing · Twitter data

R. K. Sachan (B) · A. Kumar Bennett University, Greater Noida 201310, India e-mail: [email protected]; [email protected] D. Shukla · A. Sharma ABES Institute of Technology, Ghaziabad 201009, India S. Kumar Institute of Management Studies, Ghaziabad 201015, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_11



R. K. Sachan et al.

1 Introduction

Diagnosing mental health/illness/disorder is a major societal challenge in mental health awareness programs. A psychiatrist faces difficulty detecting a mental illness in a patient due to the complexity of each mental condition and a lack of background information. This makes it challenging to provide proper therapy before it is too late. Depression, a mental illness/disorder, is sometimes accompanied by anxiety and other mental and physical diseases, and it affects people's emotions and behavior. According to the WHO World Mental Health (WMH) report [1], millions of people worldwide suffer from depression, and the number is rapidly increasing. These mental illnesses/diseases are frequently hidden by those affected, making early depression diagnosis more difficult. However, the integration of communication platforms into everyday human life creates an environment that can provide more information about a patient's mental condition. In today's era, millions of internet users use online platforms such as social networking and microblogging websites to share and express their day-to-day perceptions on different aspects of their lives and society in the form of posts. These posts reflect their mental situation and what is happening in their lives. Continuous monitoring of these posts may help identify their mental health/illness/disorder [2]. The depression problem can also be addressed using popular social media platforms. The text messages, i.e., posts shared on these networks, contain much hidden information about their authors [3]. Users' posts on social networking websites can be a valuable source of information for recognizing depressive symptoms. Depression may recur at any time. People who are depressed tend to lose interest, have a low mood, feel hopeless, and isolate themselves from others. Depressed persons are more likely to commit suicide and suffer from anxiety if they do not receive proper counseling and treatment [4].
The motivation of this work is to see if depression and anxiety can be detected through social media posts and communications. Depression is a common ailment that is a fascinating topic to discuss. Finding odd trends in user-generated content over time is fascinating. This might spark a revolution and benefit a large number of people. This work uses natural language processing (NLP) approaches to discover depression-related words in social media posts and texts. We also apply machine learning (ML) algorithms to train our dataset so that the system can perform efficiently. As the internet has become popular, individuals have started sharing their thoughts regarding psychological challenges through microblogging websites such as Twitter, Instagram, and LinkedIn. Due to their online activity, some researchers have been inspired to create prospective future systems for proper health care. They applied several NLP techniques and text categorization algorithms to boost performance. Our primary purpose through this work is to help people recognize depressive traits in their social media postings early on, allowing for early intervention and resolution of the problem. In short, the main contributions of our work are:


• In-depth study of the state-of-the-art works: We analyze past state-of-the-art works on detecting human mental health, focusing mainly on the datasets used, the applied approaches, and the key parts of the methodologies.
• Methodology: Our proposed methodology includes all the basic steps of the ML pipeline, such as data collection, pre-processing, feature engineering, and ML algorithms. We apply a hybrid of TF-IDF and BoW for feature extraction and then apply ML algorithms to detect depression- and anxiety-related posts.
• Experimental results: We use the Sentiment140 dataset for experimental evaluation. We apply five supervised ML algorithms, namely Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Naive Bayes (NB), and K-Nearest Neighbors (KNN), to identify negativity in social media posts. Our results show that SVM with TF-IDF and BoW performs best, with 99.7% balanced accuracy on our dataset.

The structure of the paper is as follows. Section 2 presents the related work, and the proposed methodology is detailed in Sect. 3. An in-depth evaluation accompanied by the result analysis is presented in Sect. 4. Finally, we conclude with Sect. 5.

2 Related Work

Machine learning (ML) [4] and deep learning (DL) [5] based algorithms have been proposed to detect human mental health. In [6], handcrafted features such as HAAR, HOG, SIFT, and SURF are extracted from the data for analysis, and ML algorithms such as KNN, K-Means, and ANN are exploited to detect human sadness. The text mining-based emotion detection approach is also one of the prominent fields in human emotion detection. The extensive use of web-based social networking platforms/forum sources and the tendency to share thoughts and feelings has made tremendous data available in text and graphics for analyzing human sentiment and emotions [7, 8]. Recently, many researchers have proposed diagnosing depression by analyzing the association between psychological well-being and the usage of language for the expression of thoughts [9, 10]. In [11], authors extract terms similar in meaning to depression and anorexia using MetaMap and use various ML-based classifiers to identify the early signs of mental illness. In [12], authors propose a DL-based model to predict the early symptoms of depression. They embed an attention mechanism in the proposed approach to identify the contribution of each word in the whole sequence for better text classification accuracy. In [13], authors analyze users' text from Reddit posts to differentiate between normal and depressed persons. They consider the hypothesis that written text may change dramatically in terms of syntactic structure, particularly in the development of event linkages, in the case of a psychologically affected person. NLP tools and a variety of categorization approaches have also been used to analyze text-related data and investigate the influence of social media platforms.


The proposed model exposes the hidden depression in the user's text to help users identify the signs of mental illness. In [14], authors use an NLP and ML approach to detect negativity in Reddit users' posts. They analyze lexicon terms that depressed users generally use in their posts. They integrate multiple features to predict the negative symptoms in an individual and use multiple ML-based classifiers to check the accuracy of the proposed model. Similarly, in [15], authors analyze Reddit text data for predicting the feeling of depression by combining long short-term memory (LSTM) networks and NLP. Here, NLP is used for sentiment analysis of text posted on the social forum, and LSTM is used as a binary classifier to classify the sentiments into positive and negative classes. In [7], the authors analyze users' tweets from the Twitter website for mental disorder prediction. They extract textual and visual features for the automatic identification of depressed users. Similarly, in [8], authors utilize textual information from multiple social networking sites for depression detection. They propose a supervised ML-based approach for labeled tweet datasets (excluding the smileys) to identify signs of depression. The work [16] uses minimal target data to find a solution. This work demonstrates multi-task learning (MTL) models to help treat mental diseases. They claim that the model can predict mental disorders and suicidal tendencies in users through their social networking posts. Similarly, in [17], authors analyze suicidal ideation and mental disorder by analyzing the language using statistical measures. They scan Twitter for depression-related messages using linguistic variables, interpersonal awareness, and interaction measures. On the other hand, authors also analyze audio data to detect depression using a CNN [18]. The CNN model consists of speech log histograms in the input layer, along with four hidden layers and one output layer.
They use an ensemble learning strategy to fuse information from the individual networks to improve performance. In [19], authors combine multi-modal data, namely text, images, and behavioral features, to predict signs of mental disorders. Here, Instagram users' posts are analyzed to predict depression by considering the time interval between the posts. Social media platforms have proved to be a vital way to detect mental disorders in users automatically [3]. In this direction [8], authors exploit data from multiple social websites such as Twitter, Facebook, Victoria's Diary, and Reddit. They extract textual features from the users' posts using BoW and use ML-based ensemble classifiers to classify the users' posts for depression detection. Many authors also analyze users' historical data from Twitter to predict the symptoms of depression [20]. In this work, authors investigate bi-monthly data to detect depression and infer that incorporating much older data hardly impacted the model accuracy; the overall accuracy of the proposed model is relatively low compared to existing ML-based models due to the historical data. Similarly, in [21], the authors gather historical data from Facebook posts and analyze the medical records to predict depression. They use language markers to identify the prevalent words along with the patients' demographics (i.e., gender and age) to predict the language correlation. In one of the recent works [22], the authors investigate social networking data from the Indian region during the Covid-19 pandemic for emotion analysis. They follow a four-step procedure to categorize the emotion into one of the four

Diagnosis of Mental Health from Social Networking Posts …


categories, i.e., anger, sadness, fear, and happiness. The experimental results help identify negative sentiments for predicting a mental disorder. In [28], the authors propose a TF-IDF, BoW, and Multinomial Naive Bayes (MNB) based approach for detecting positive and negative feelings on micro-blogging websites. In sum, mental disorders, including depression, are predictable and treatable with a suitable diagnostic procedure. Due to the extensive usage of social networking websites, text processing has become essential for predicting the degree of depression. This in-depth analysis will be helpful in the medical prognosis and treatment of mental disorders that may otherwise remain underdiagnosed.

3 Methodology

We use a standard ML approach, which includes data collection, pre-processing, feature engineering (extraction and selection), ML algorithms (supervised only), and evaluation. These steps are elaborated one by one below, except the evaluation step (cf. Sect. 4).

3.1 Data Collection and Pre-processing

We collect the data from publicly available repositories, like [23]. The collected data contains valuable information such as the post's polarity, post id, serial id, and post text. After collecting the data, we annotate posts as zero (0) for non-depression posts and one (1) for depression posts. The collected raw data contains unnecessary punctuation marks in the post text, which would affect the accuracy of the ML algorithms. So, during pre-processing, we cleaned the data of unnecessary punctuation marks and then tokenized the text with the help of a regular expression library in Python.
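The cleaning and tokenization step described above can be sketched with Python's standard `re` library. The exact patterns used by the authors are not given, so the rules below are illustrative assumptions.

```python
import re

def preprocess(post: str) -> list[str]:
    """Clean a raw post and tokenize it with a regular expression."""
    text = post.lower()
    # Strip punctuation and other non-alphanumeric noise (assumed cleaning rule).
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    # Tokenize on runs of alphanumeric characters.
    return re.findall(r"\b[a-z0-9]+\b", text)

tokens = preprocess("I can't sleep... feeling empty, again!!")
```

The resulting token lists feed directly into the vectorization step of Sect. 3.2.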

3.2 Feature Engineering

The ML algorithms cannot process the tokenized text data directly because it is in raw form. We need to convert the tokenized text into a numerical format easily readable by the ML algorithms. In feature engineering, we use a hybrid of Term Frequency-Inverse Document Frequency (TF-IDF) [24] and Bag of Words (BoW) [25] in the vectorization process. These approaches are selected for their simplicity, flexibility, and versatility in feature extraction. TF-IDF [24] extracts features from the tweeted words in the dataset. We analyze textual and visual cues to understand the user's mood state. Furthermore, we retrieve the user-related information using the


R. K. Sachan et al.

user ID and their associated number of recent tweets and comprehensive profile information. The profile information-based features include the bio-text, profile picture, and header image of the user's profile. The BoW model [25] assigns a unique number to each word in a sentence and encodes the tweeted sentence. It generates an encoded vector for each sentence with the length of the entire vocabulary. After that, it counts the number of times each word appears in the document. TF-IDF [24] determines which word has been used more frequently than the others based on the frequency of each word. It is done by assigning integral values to each unique sentence word. TF-IDF computes the frequency of each word using Eqs. (1) to (3):

tf(t, d) = count(t) / count(d)                          (1)

df(t) = Occurrence(t, d)                                (2)

tf-idf(t, d) = tf(t, d) × log(N / (df(t) + 1))          (3)

where t stands for a term or word, d stands for the document (i.e., a set of words), and N stands for the size of the corpus.
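Equations (1) to (3) can be implemented directly in pure Python. The sketch below mirrors the formulas above; the toy corpus of tokenized posts is an illustrative assumption.

```python
import math

def tf(term, doc):
    # Eq. (1): term frequency = occurrences of the term / total words in the document
    return doc.count(term) / len(doc)

def df(term, corpus):
    # Eq. (2): document frequency = number of documents containing the term
    return sum(1 for doc in corpus if term in doc)

def tf_idf(term, doc, corpus):
    # Eq. (3): tf-idf = tf * log(N / (df + 1)), where N is the corpus size
    return tf(term, doc) * math.log(len(corpus) / (df(term, corpus) + 1))

# Toy corpus of tokenized posts (illustrative only).
corpus = [["feeling", "sad", "today"], ["happy", "day"], ["sad", "lonely", "sad"]]
score = tf_idf("lonely", corpus[2], corpus)
```

Note that the `+ 1` in the denominator of Eq. (3) keeps the logarithm defined even for terms appearing in every document.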

3.3 ML Algorithms

After feature extraction using TF-IDF and BoW, we apply five well-known supervised ML algorithms to detect depression posts: Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Naive Bayes (NB), and K-Nearest Neighbors (KNN). We use the default hyperparameters set by the Python library wherever hyperparameters are not reported in the state-of-the-art literature. We report balanced-accuracy, precision, recall, and F1-score for each ML algorithm. These algorithms are among the top-ranked ML classifiers. Our proposed methodology to detect depression posts on Twitter is illustrated in the workflow diagram in Fig. 1, covering the steps from data collection to detection outcome. Next, we evaluate our proposed methodology and present the obtained results.
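The four reported metrics follow from the counts of a binary confusion matrix. The sketch below shows the standard definitions; the example counts are illustrative assumptions, not values from the paper.

```python
def metrics(tp, fp, tn, fn):
    """Compute precision, recall, F1, and balanced accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity / true positive rate
    specificity = tn / (tn + fp)       # true negative rate
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = (recall + specificity) / 2
    return precision, recall, f1, balanced_accuracy

# Example counts for a binary depression/non-depression classifier (illustrative).
p, r, f1, ba = metrics(tp=90, fp=10, tn=80, fn=20)
```

Balanced accuracy averages the per-class recalls, which is why it is preferred over plain accuracy for the imbalanced tweet dataset discussed later.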


Fig. 1 Workflow diagram of proposed methodology

4 Evaluation and Results

We evaluate our approach on the publicly available Sentiment140 dataset [26] using Python v3.8.3 with supporting libraries. Our test machine has an Intel(R) Core(TM) i5-1135G7 CPU@2.40 GHz with 8.00 GB RAM.

4.1 Dataset

We use the Sentiment140 dataset [26] for evaluation purposes. It is a Twitter dataset that contains 1.6 million tweets. Each data row contains six fields: (i) id, (ii) date, (iii) user, (iv) flag, (v) text, and (vi) target. The id represents the tweet's unique id, the date represents the tweet's date, the user represents the tweeting user, and the flag represents the query used to collect the tweet (e.g., lyx), or NO_QUERY when there is none. The target field represents the polarity of the tweet: 0 for negative, 2 for neutral, and 4 for positive sentiments. We split the dataset into two parts based on the 75–25 rule for training and testing.
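The 75–25 split can be sketched as a seeded shuffle followed by a cut; the paper does not state how the split was performed, so the seeded, deterministic variant below is an assumption for reproducibility.

```python
import random

def train_test_split(rows, train_frac=0.75, seed=42):
    """Shuffle and split rows into training and testing sets (the 75-25 rule)."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # seeded shuffle keeps the split reproducible
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

train, test = train_test_split(range(1000))
```

Shuffling before cutting avoids any ordering bias in the stored dataset (e.g., tweets sorted by polarity).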

4.2 Data Analysis

We analyze the pre-processed tweet data with the help of the WordCloud generator in Python [27]. It finds the dominance (frequency) of each word and displays it pictorially: a larger font size represents a word with higher dominance, and a smaller font size a word with lower dominance. These dominance figures are shown in Figs. 2 and 3.
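The dominance ranking that the word cloud renders is just a corpus-wide word frequency count, which can be sketched with the standard library; the toy posts below are an illustrative assumption.

```python
from collections import Counter

def dominance(tokenized_posts, top_n=3):
    """Rank words by frequency across all posts -- the quantity the word cloud visualizes."""
    counts = Counter(word for post in tokenized_posts for word in post)
    return counts.most_common(top_n)

# Toy tokenized posts (illustrative only).
posts = [["feel", "sad", "alone"], ["sad", "tired"], ["sad", "alone", "empty"]]
top = dominance(posts)
```

The WordCloud library then maps each count to a font size, so the most frequent words dominate the image.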


Fig. 2 Dominance of non-depressive tweets

Fig. 3 Dominance of depressive tweets

4.3 Results

We apply all five classification techniques (i.e., SVM, DT, RF, NB, and KNN) to detect depression-related tweets. Our proposed model based on TF-IDF and BoW improves the accuracy compared to the work in [28] (cf. Table 1). We find around 99% balanced-accuracy in all cases except NB: 99.3% by DT, 99.4% by RF, 96.6% by NB, 99.6% by KNN, and the best, 99.7%, by SVM. All these results are listed and compared in Table 1. It clearly shows that the feature extraction techniques of TF-IDF and BoW have improved the accuracy collectively with all the ML algorithms mentioned above.


Table 1 Analysis of results

Feature extraction techniques   ML Algo    Balanced-accuracy   Precision   Recall   F1 score
None                            NB [28]    89.36               89.90       96.00    93.00
None                            SVM [28]   79.70               78.25       78.99    82.10
None                            KNN [28]   72.10               75.00       63.40    68.70
TF-IDF and BoW                  NB         96.60               90.20       92.90    84.80
TF-IDF and BoW                  SVM        99.70               95.08       94.60    88.40
TF-IDF and BoW                  KNN        99.60               82.07       78.82    80.50
TF-IDF and BoW                  DT         99.30               88.42       88.05    86.40
TF-IDF and BoW                  RF         99.40               89.10       80.40    83.48

5 Conclusion

Social media plays a significant role in determining how individuals communicate with each other and express their feelings. The use of social networking platforms like Instagram, Meta, and Twitter to express one's feelings has become quite popular. Using suitable approaches, we can determine the user's true feelings. In this work, we combine SVM, KNN, DT, RF, and NB with TF-IDF and BoW feature extraction approaches, which considerably aided us in correctly recognizing users' depressive sentiments. After pre-processing and analyzing the imbalanced tweets dataset, we compared the performance of all five models in terms of balanced-accuracy. Among all the algorithms, SVM has the best accuracy of 99.7%. Here, we have used a dataset in the English language, but further work can be done using datasets in other languages for training and testing.

References
1. WHO World Mental Health (WMH): Depression. (2021). Accessed 5 July 2022
2. Mali A, Sedamkar RR (2022) Prediction of depression using machine learning and NLP approach. In: Intelligent computing and networking, (Singapore), pp 172–181, Springer Nature Singapore
3. Calvo RA, Milne DN, Hussain MS, Christensen H (2017) Natural language processing in mental health applications using non-clinical texts. Nat Lang Eng 23(5):649–685
4. Shatte AB, Hutchinson DM, Teague SJ (2019) Machine learning in mental health: a scoping review of methods and applications. Psychol Med 49(9):1426–1448
5. Smys S, Raj JS (2021) Analysis of deep learning techniques for early detection of depression on social media network-a comparative study. J Trends Comput Sci Smart Technol (TCSST) 3(01):24–39
6. Li X, Zhang X, Zhu J, Mao W, Sun S, Wang Z, Xia C, Hu B (2019) Depression recognition using machine learning methods with different feature generation strategies. Artif Intell Med 99:101696
7. Safa R, Bayat P, Moghtader L (2022) Automatic detection of depression symptoms in Twitter using multimodal analysis. J Supercomput 78(4):4709–4744
8. Chiong R, Budhi GS, Dhakal S, Chiong F (2021) A textual-based featuring approach for depression detection using machine learning classifiers and social media texts. Comput Biol Med 135:104499
9. Guntuku SC, Yaden DB, Kern ML, Ungar LH, Eichstaedt JC (2017) Detecting depression and mental illness on social media: an integrative review. Curr Opin Behav Sci 18:43–49
10. Chancellor S, De Choudhury M (2020) Methods in predictive techniques for mental health status on social media: a critical review. NPJ Digital Med 3(1):1–11
11. Paul S, Jandhyala SK, Basu T (2018) Early detection of signs of anorexia and depression over social media using effective machine learning frameworks. In: CLEF (Working notes)
12. Cong Q, Feng Z, Li F, Xiang Y, Rao G, Tao C (2018) XA-BiLSTM: a deep learning approach for depression detection in imbalanced data. In: 2018 IEEE International conference on bioinformatics and biomedicine (BIBM), pp 1624–1627, IEEE
13. Wolohan J, Hiraga M, Mukherjee A, Sayyed ZA, Millard M (2018) Detecting linguistic traces of depression in topic-restricted text: attending to self-stigmatized depression with NLP. In: Proceedings of the first international workshop on language cognition and computational models, pp 11–21
14. Tadesse MM, Lin H, Xu B, Yang L (2019) Detection of depression-related posts in Reddit social media forum. IEEE Access 7:44883–44893
15. Mahapatra A, Naik SR, Mishra M (2020) A novel approach for identifying social media posts indicative of depression. In: 2020 IEEE International symposium on sustainable energy, signal processing and cyber security (iSSSC), pp 1–6, IEEE
16. Benton A, Mitchell M, Hovy D (2017) Multi-task learning for mental health using social media text. arXiv preprint arXiv:1712.03538
17. De Choudhury M, Kiciman E, Dredze M, Coppersmith G, Kumar M (2016) Discovering shifts to suicidal ideation from mental health content in social media. In: Proceedings of the 2016 CHI conference on human factors in computing systems, pp 2098–2110
18. Vázquez-Romero A, Gallardo-Antolín A (2020) Automatic detection of depression in speech using ensemble convolutional neural networks. Entropy 22(6):688
19. Chiu CY, Lane HY, Koh JL, Chen AL (2021) Multimodal depression detection on Instagram considering time interval of posts. J Intell Inf Syst 56(1):25–47
20. Tsugawa S, Kikuchi Y, Kishino F, Nakajima K, Itoh Y, Ohsaki H (2015) Recognizing depression from Twitter activity. In: Proceedings of the 33rd annual ACM conference on human factors in computing systems, pp 3187–3196
21. Eichstaedt JC, Smith RJ, Merchant RM, Ungar LH, Crutchley P, Preoțiuc-Pietro D, Asch DA, Schwartz HA (2018) Facebook language predicts depression in medical records. Proc Nat Acad Sci 115(44):11203–11208
22. Arora A, Chakraborty P, Bhatia M, Mittal P (2021) Role of emotion in excessive use of Twitter during COVID-19 imposed lockdown in India. J Technol Behav Sci 6(2):370–377
23. Kaggle Inc.: Kaggle. Accessed 2 May 2022
24. Qaiser S, Ali R (2018) Text mining: use of TF-IDF to examine the relevance of words to documents. Int J Comput Appl 181(1):25–29
25. Zhang Y, Jin R, Zhou Z-H (2010) Understanding Bag-of-Words model: a statistical framework. Int J Mach Learn Cybern 1(1):43–52
26. Go A, Bhayani R, Huang L (2009) Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, vol 1, no 12
27. Mueller A (2020) WordCloud. Accessed 9 June 2022
28. Mali A, Sedamkar RR (2022) Prediction of depression using machine learning and NLP approach. Int J Intell Commun Comput Netw 2(1):9–19

Smart Health Monitoring System for Elderly People Kalava Guru Mallikarjuna, Medagam Sailendra Reddy, Kolluru Lokesh, Kasani Mohan Sri Sai, Mamidi K. Naga Venkata Datta Sai, and Indu Bala

Abstract In our society, accelerated population aging and an increasing number of individuals living alone have triggered great interest in developing solutions for elderly living assistance. Fast-growing technology, such as the Internet of Things, has brought significant changes to our lifestyle as well as the healthcare industry. Motivated by this rising need in society, an Internet of Things (IoT)-based Health Monitoring System (HMS) is proposed in this paper for elderly people living alone in urban areas. The proposed HMS measures vital signs such as temperature, mobility of the person, and heart rate using sensory devices and saves the healthcare record on a cloud server for future reference by the doctor. A fall detection sensor is also used: if the person's health parameters are disturbing, or the person falls and cannot get up within ten minutes, an alarm is activated and an information message with location coordinates is shared with the registered mobile numbers. Empirical results are presented to validate the efficacy of the proposed design. Keywords Health monitoring system · Sensors · Arduino · GSM · Fall detection · Internet of things

1 Introduction

K. G. Mallikarjuna · M. S. Reddy · K. Lokesh · K. M. S. Sai · M. K. N. V. D. Sai · I. Bala (B) SEEE, Lovely Professional University, Phagwara, India. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_12

Most of the young population is moving to urban areas for jobs or better job opportunities and facilities. As a result, their aged parents are left living alone in their native places, in some cases with no one to take care of them. Even when this is not the case, elderly people suffering from chronic diseases may fall when alone, such as in washrooms, and other family members may not come to know of it for a long time, which could be life-threatening if medical aid is not given immediately. To address this serious issue, an IoT-based health monitoring system (HMS) is proposed in this paper that monitors the vital


Fig. 1 A typical IoT-based HMS

parameters such as temperature and heart rate of the elderly person periodically and keeps uploading them to the cloud server. On detecting a fall or disturbing vital parameters, the proposed system generates an alarm and sends an SMS to the registered numbers so that medical aid may be provided to the elderly person immediately. To the best of the authors' knowledge, this feature has not been incorporated in previous works. With advancements in technologies like VLSI, sensor technologies, and 5G communication systems, it has become feasible to remain connected to the internet anytime, anywhere, and to monitor the vital health parameters of a person remotely, as shown in Fig. 1.

2 Literature Survey

In general, people in old age suffer from chronic diseases, and thus their health needs more attention than that of young people [1, 2]. With advancements in technology, body sensor networks have become handier for monitoring a patient remotely [3]. With the evolution of IoT technology, the management of these body sensor networks is becoming easier, enabling numerous sensory devices to measure a variety of vital parameters of human beings non-invasively and to store their status on cloud servers through a well-connected wireless network [4–7]. Recently, the World Health Organization (WHO) revealed that older people may continuously suffer from chronic diseases and therefore require a well-equipped health monitoring system (HMS) for monitoring and treating them promptly [8]. When a remote patient monitoring system is implemented, some obstacles in the medical care sector can easily be overlooked [9]. A well-implemented HMS could be able


to impart quality, professional medical support to people who live in remote areas. Moreover, healthcare professionals can also take advantage of it by rendering their services remotely instead of through direct visits and supervision [10]. Therefore, both patients and healthcare professionals are given the opportunity to reduce the risks of health ailments at an early stage. In [11], various crucial parameters like electrocardiogram (ECG), pulse oximetry, and heart rate variability are considered to design a telemedical system. ZigBee technology is used to collect physiological parameters, and an Android-based smartphone with a Bluetooth feature is used to transport patient data. In [12], the ECG data of the patient is analyzed by the proposed Apnea MedAssist system for diagnosing apnea using a support vector classifier. In [13], the integration of Blockchain technology with HMS is discussed to secure the health records. Recent developments in remote healthcare and monitoring systems using both contact-based and contactless techniques are also presented in [14].

2.1 Paper Contributions

Motivated by the literature survey above, the contributions of the paper are listed below:
i. Various sensor units, like temperature, Micro-Electro-Mechanical Systems (MEMS), and heart rate sensors, are interfaced with an Arduino microcontroller to measure vital human health parameters.
ii. A database of the elderly person's health data is created on a cloud server.
iii. A fall detection system is implemented using a MEMS sensor.
iv. The mobile GSM feature is exploited to send an SMS to the registered mobile number in the event of a critical health condition.

2.2 Paper Organization

The paper is organized as follows: the motivation and problem formulation are discussed in Sect. 1. Section 2 provides a comprehensive literature review on elderly health monitoring systems. The basic notion of the paper is discussed in Sect. 3, and simulated results are provided in Sect. 4 to validate the efficacy of the proposed system. The conclusion and future research directions are given in Sect. 5.


3 Proposed Health Monitoring System

The proposed IoT-based health monitoring system is designed using an Arduino and three sensory devices: a DS18B20 temperature sensor, a heart rate sensor, and a MEMS sensor, utilized for measuring temperature, heart rate, and patient mobility in an uninterrupted manner. All of the sensors can monitor the said vital signs for any intended time interval (Fig. 2). The Arduino is used as the microcontroller, integrating the sensors along with various other constituents, such as the Global System for Mobile (GSM) communication module and the Liquid Crystal Display (LCD), on the respective pins. On detecting abnormalities in the health parameters, the GSM module sends an SMS to the registered mobile numbers. The working of the proposed HMS is presented with the help of the flow diagram in Fig. 3, and an actual picture of the prototype is shown in Fig. 4. The various modules of the prototype are discussed in this section. These are:
i. Power supply: A typical +5 V supply is used in this project, built using a bridge rectifier and 1000 microfarad capacitors.
ii. Microcontroller: An Arduino Uno is used in this project as it comprises fourteen input/output pins, an ATmega328 microcontroller, six analog pins, and a USB interface [15]. Typical specifications of the Arduino Uno are listed in Table 1.
i. Heart rate sensor: A pulse sensor with built-in amplification and noise cancellation circuitry is used, which can work with either a 3 V or 5 V Arduino.
ii. Temperature sensor: The digital temperature sensor DS18B20, with an accuracy of ±0.5 °C, is used to monitor the person's temperature.

Fig. 2 Proposed system model


[Flow diagram: power is supplied to all modules of the HMS; the motion sensor status is checked for 2 to 3 minutes; if no motion is detected, no action is taken; otherwise, the temperature and heart rate of the person are checked; if any abnormality is observed, the alarm is activated, a message is sent to the registered mobile number, and the information is saved on the cloud server.]

Fig. 3 Flow diagram on the working of the proposed HMS

iii. GSM module: In this paper, a SIM900A module is used to send the SMS to the registered mobile number.
iv. LCD module: The 16 × 2 Liquid Crystal Display has 16 columns and 2 rows. Also known as an electronic display module, it is widely used in place of seven-segment displays. With no backlight, it can function between 4.7 V and 5.3 V and handle 1 mA of current. It displays both numbers and alphanumeric characters and can operate in both 4-bit and 8-bit modes.
v. Cloud server: For the ready reference of the medical practitioner, the patient's readings are uploaded periodically to the "ThingSpeak" cloud.
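The decision logic of the flow diagram in Fig. 3 can be sketched as a small Python function. The thresholds, function names, and return labels below are illustrative assumptions rather than values taken from the prototype firmware.

```python
# Illustrative "normal" ranges (assumed; the paper does not specify exact limits).
TEMP_RANGE = (36.1, 37.8)      # body temperature in degrees Celsius
HEART_RANGE = (60, 100)        # resting heart rate in beats per minute

def is_abnormal(temperature, heart_rate):
    """Return True when either vital sign falls outside its normal range."""
    return not (TEMP_RANGE[0] <= temperature <= TEMP_RANGE[1]
                and HEART_RANGE[0] <= heart_rate <= HEART_RANGE[1])

def monitor_step(motion_detected, temperature, heart_rate):
    """One pass of the flow diagram: check motion, then vitals, then decide."""
    if not motion_detected:
        return "no action"
    if is_abnormal(temperature, heart_rate):
        # In the prototype this would trigger the alarm, the GSM SMS, and a cloud upload.
        return "alert"
    return "log to cloud"
```

In the actual prototype, this logic runs on the Arduino and the "alert" branch drives the GSM module and the ThingSpeak upload.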


Fig. 4 Actual picture of prototype

Table 1 Arduino Uno specifications

S. no   Parameter             Value
1       Microcontroller       ATmega328P
2       Input voltage level   7 V to 12 V
3       Analog I/O pins       6
4       Digital I/O pins      14
5       Average DC            40 mA
6       Frequency             16 MHz
7       EEPROM                1 KB
8       SRAM                  2 KB
9       Flash Memory          32 KB

4 Results and Discussions

Various results to validate the effectiveness of the proposed HMS model are presented in this section. The results are classified into three categories.
A. Sensor outputs: Three sensors have been used in this project, namely a heart rate sensor, a MEMS sensor, and a temperature sensor. Figure 5a–c show the readings of the heart rate sensor, temperature sensor, and MEMS sensor, respectively, on the LCD.


Fig. 5 Sensor reading on LCD for a Heartrate sensor b MEMs Sensor c Data transmission message for cloud server



B. GSM message service: In the proposed model, if the vital parameters of the elderly person are disturbing, or the person falls and cannot stand up for more than 5 min, an SMS is sent to the registered mobile numbers. A screenshot of the generated message is given in Fig. 6.
C. Cloud server data: Figure 7 shows the sensors' output readings stored periodically on the cloud server, date-wise, for the ready reference of the medical practitioner. The medical history of the patient is always beneficial to the doctor in case of an emergency. Figure 7a–c show the date-wise patient healthcare database stored on the cloud server, taken from the heart rate sensor, MEMS sensor, and temperature sensor, respectively.
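The 5-minute rule above can be captured in a small predicate; the function name and minute-based interface are illustrative assumptions, since the paper does not show the prototype's timing code.

```python
def should_alert(elapsed_minutes, recovered, timeout_min=5):
    """Send the SMS only when the person is still down after the timeout window.

    elapsed_minutes: minutes since the MEMS sensor detected the fall.
    recovered: True if the person has stood up again in the meantime.
    """
    return (not recovered) and elapsed_minutes >= timeout_min

alert = should_alert(elapsed_minutes=6, recovered=False)
```

Gating the SMS on a timeout rather than on the fall event itself avoids false alarms from brief stumbles that the person recovers from on their own.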

Fig. 6 Real-time SMS message screenshot


Fig. 7 a Heartrate sensor data b MEMS sensor data c Temperature Sensor data uploaded on the cloud server


5 Conclusion and Future Scope

In this paper, an IoT-based smart Health Monitoring System (HMS) is proposed for elderly people. We have included three major sensors to check the health status of the person: temperature, heart rate, and a MEMS sensor for fall detection. If the vital health parameters are observed to be disturbing, an SMS message is delivered to the registered mobile numbers, and the health information is also stored on the cloud server for the ready reference of the doctor. A prototype has been designed to demonstrate the working of the project, and results have been shown to illustrate its effectiveness. In the future, the work can be extended by incorporating more sensors to measure additional health parameters such as glucose, blood oxygen (oximeter), and ECG, along with GPRS for accurate location indication.

References
1. Pavitra DNSM (2020) IOT enabled patient health monitoring and assistant system. In: Proceedings of the 2020 IEEE international conference on computer, communication, and signal processing (ICCCSP), pp 1–5. https://doi.org/10.1109/ICCCSP49753.2020.9248805
2. Dheeraj G, Anumala PK, Ramananda Sagar L, Krishna BV, Bala I (2022) Plant leaf diseases identification using deep learning approach for sustainable agriculture. In: 2022 6th international conference on intelligent computing and control systems (ICICCS), Madurai, India, pp 1429–1434. https://doi.org/10.1109/ICICCS53718.2022.9788199
3. Chen M, Gonzalez S, Vasilakos A, Cao H, Leung VC (2011) Body area networks: a survey. Mobile Netw Appl 16(2):171–193. https://doi.org/10.1007/s11036-010-0260-9
4. Madakam S, Lake V, Lake V (2015) Internet of things (IoT): a literature review. In: Proceedings of the 2015 IEEE international conference on smart City/SocialCom/SustainCom (SmartCity), pp 1–7. https://doi.org/10.1109/SmartCity.2015.26
5. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of things (IoT): a vision, architectural elements, and future directions. In: Proceedings of the 2013 international conference on advances in computing, communications and informatics (ICACCI), pp 1501–1509. https://doi.org/10.1109/ICACCI.2013.6637215
6. Baba SM, Bala I (2022) Detection of diabetic retinopathy with retinal images using CNN. In: Proceedings of the 2022 6th international conference on intelligent computing and control systems (ICICCS), pp 1074–1080. IEEE. https://doi.org/10.1109/ICICCS53298.2022.9689785
7. Mahmoud R, Yousuf T, Aloul F, Zualkernan I (2015) Internet of Things (IoT) security: current status, challenges and prospective measures. In: Proceedings of the 2015 10th international conference for internet technology and secured transactions (ICITST), pp 336–341. https://doi.org/10.1109/ICITST.2015.7412116
8. Achmed Z, Miguel G (2010) Assimilating wireless sensing nets with cloud computing. In: Proceedings of the 2010 sixth international conference on mobile ad-hoc and sensor networks (MSN), pp 263–266. https://doi.org/10.1109/MSN.2010.30
9. Yew HT, Ng MF, Ping SZ, Chung SK, Chekima A, Dargham JA (2020) IoT based real-time remote patient monitoring system. In: Proceedings of the 2020 16th IEEE international colloquium on signal processing & its applications (CSPA), pp 176–179. https://doi.org/10.1109/CSPA48992.2020.9068699
10. Gia TN, Tcarenko I, Sarker VK, Rahmani AM, Westerlund T, Liljeberg P, Tenhunen H (2016) IoT-based fall detection system with energy efficient sensor nodes. In: Proceedings of the 2016 IEEE Nordic circuits and systems conference (NORCAS), pp 1–6. IEEE. https://doi.org/10.1109/NORCHIP.2016.7802940
11. Turner J, Zellner C, Khan T, Yelamarthi K (2017) Continuous heart rate monitoring using smartphone. In: Proceedings of the 2017 IEEE international conference on electro information technology (EIT), pp 324–326. https://doi.org/10.1109/EIT.2017.8053379
12. Reddy GK, Achari KL (2015) A non-invasive method for calculating calories burned during exercise using heartbeat. In: Proceedings of the 2015 IEEE 9th international conference on intelligent systems and control (ISCO), pp 1–5. https://doi.org/10.1109/ISCO.2015.7282249
13. Yacchirema D, de Puga JS, Palau C, Esteve M (2018) Fall detection system for elderly people using IoT and big data. In: Proceedings of the 2018 international conference on internet of things (iThings) and IEEE green computing and communications (GreenCom) and IEEE cyber, physical and social computing (CPSCom) and IEEE smart data (SmartData), pp 603–610. https://doi.org/10.1016/j.procs.2018.04.072
14. Karar ME, Shehata HI, Reyad O (2022) A survey of IoT-based fall detection for aiding elderly care: sensors, methods, challenges and future trends. In: Proceedings of the 2022 international conference on internet of things (iThings), pp 1–6. https://doi.org/10.1109/iThings53498.2022.9632481
15. Chandra I, Sivakumar N, Gokulnath CB, Parthasarathy P (2019) IoT based fall detection and ambient assisted system for the elderly. In: Proceedings of the 2019 IEEE international conference on internet of things and intelligence system (IoTaIS), pp 1–6. https://doi.org/10.1109/IoTaIS.2019.8898719
16. Hsu CCH, Wang MYC, Shen HC, Chiang RHC, Wen CH (2017) FallCare+: an IoT surveillance system for fall detection. In: Proceedings of the 2017 international conference on applied system innovation (ICASI), pp 921–922. IEEE. https://doi.org/10.1109/ICASI.2017.7988295

Impact of Covid-19 and Subsequent Usage of IoT Sakshi Sharma, Veena Sharma, and Vineet Kumar

Abstract The World Health Organization (WHO) has declared the coronavirus outbreak a pandemic. The virus spreads through person-to-person contact, and governments have implemented various measures to protect their citizens from its spread. The pandemic has raised the importance of remote monitoring, automation, and data-driven decision-making, all of which are among the major benefits of the Internet of Things (IoT). This study examines the impact of COVID-19 on the adoption and usage of IoT. The pandemic has expedited the adoption and deployment of IoT devices in key sectors such as education, healthcare, transportation, industry, tourism, and manufacturing. The adoption of IoT in these sectors is explored and presented in this paper. Overall, the paper reveals that the COVID-19 pandemic has raised awareness and increased adoption of IoT among the masses, a trend that is expected to continue in the post-pandemic era. Keywords Corona virus · COVID-19 pandemic · IoT (internet of things) · GDP (gross domestic product)

1 Introduction

S. Sharma (B) Electronics and Communication Engineering Department, Jawaharlal Nehru Government Engineering College, Sundernagar, H.P., India. e-mail: [email protected]
V. Sharma · V. Kumar Electrical Engineering Department, NIT Hamirpur, Hamirpur, Himachal Pradesh, H.P., India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_13

COVID-19, or the novel coronavirus disease, is an infectious respiratory illness caused by the SARS-CoV-2 virus. The disease was first detected in Wuhan, China in December 2019, and has since rapidly spread worldwide, resulting in a global pandemic [1]. The virus is mainly transmitted through respiratory droplets when an infected individual coughs, sneezes, or talks, and can also spread through contact with contaminated surfaces. The COVID-19 pandemic has significantly affected the world, causing extensive social and cultural transformations, global supply chain


interruptions, and economic downturns. Governments, healthcare organizations, and communities worldwide have taken various measures to address the pandemic, such as implementing social distancing protocols, mandating the use of masks, imposing travel restrictions, and developing and distributing vaccines. As of March 2023, the COVID-19 pandemic continues to evolve, with the emergence of new virus variants and ongoing efforts to distribute vaccines [2]. The pandemic has emphasized the importance of global cooperation and readiness in responding to global health emergencies. IoT technologies have played a significant role in the COVID-19 response, particularly in areas such as healthcare, public safety, and supply chain management. IoT-enabled devices, including sensors and remote patient monitoring systems, have been used to monitor the health of COVID-19 patients, allowing for early symptom detection and timely intervention. Contact tracing, which involves identifying and tracking individuals who have come into contact with a COVID-19-positive person, has been enabled through the use of IoT technologies. Moreover, IoT technologies have been used to monitor public spaces and enforce social distancing protocols, ensuring the effective and efficient distribution of critical resources to areas in need. The COVID-19 pandemic has brought about unprecedented challenges to our global society, and many industries have been forced to adapt to new ways of working and living. One area that has seen significant changes is the use of the Internet of Things (IoT) in healthcare and other essential services. Therefore, the work in this manuscript aims to investigate the impacts of COVID-19 on the different facets of society. Furthermore, the manuscript also addresses the literature on applications of IoT during COVID-19.
The major highlights and contributions of this study are as follows:

• The manuscript examines the various challenges associated with the COVID-19 pandemic. It finds that, despite these challenges, the pandemic has also created new opportunities for innovation, and that IoT technology has played a crucial role in addressing some of the most pressing issues.
• The paper reports the various areas of application of IoT devices in the monitoring and tracking of COVID-19 patients, in enabling remote consultations with doctors, and in ensuring the safety of healthcare workers. This technology has enabled the rapid development of new solutions, such as contact tracing apps and temperature sensors, which have been critical in controlling the spread of the virus.
• The impact of COVID-19 has been far-reaching, but it has also demonstrated the incredible potential of IoT technology to transform our world.

Impact of Covid-19 and Subsequent Usage of IoT

2 Impact of COVID-19

The COVID-19 pandemic had a significant impact on various aspects of life worldwide, including the healthcare system, economy, education, and daily routines [3]. The pandemic resulted in several changes in daily life, such as the recommendation to practice social distancing, wear masks, and follow other safety protocols. The pandemic also caused disruptions in travel and social gatherings. Some of the key effects of COVID-19 in India are as follows:

2.1 Education System

During the COVID-19 pandemic, numerous governments worldwide, including India's, adopted diverse measures to mitigate the spread of the virus and protect public health. One of the most consequential was the closure of schools and colleges throughout the nation [4]. This decision was important for hindering transmission, as educational institutions gather large numbers of people in close contact, increasing the risk of spreading the virus. However, the closure of educational institutions had significant socio-economic ramifications. Students were compelled to shift to online learning, which presented challenges for those without access to technology or stable internet connections [4]. Additionally, parents had to adapt to a new reality in which they had to balance work and childcare, since their children could no longer attend school in person. The closure of educational institutions was a crucial measure against COVID-19, and it emphasized the need for investment in technology and infrastructure to sustain remote learning and ensure continuous education delivery to students during crises.

2.2 Healthcare System

India's healthcare system is still developing and struggles with challenges such as workforce shortages, absenteeism, poor infrastructure, and subpar quality of care. In spite of significant advancements in recent years, the Indian healthcare system confronts several issues that impede the quality of care and hinder access to healthcare services. A dearth of healthcare workers, predominantly in rural areas, is a major concern. Furthermore, the healthcare infrastructure faces various challenges, including insufficient resources to support the delivery of healthcare services, inadequate sanitation and hygiene standards, and a lack of modern medical equipment and facilities, which can lead to disparities in health outcomes and access to care. Access to healthcare services is also problematic in India, especially for marginalized populations, including women, rural communities, and low-income households [5]. At the onset of the COVID-19 pandemic in India, the healthcare system encountered substantial obstacles in coping with the outbreak and dispensing care to patients [6]. The pandemic has had a remarkable influence on healthcare systems worldwide. Some of the key impacts include:


• The COVID-19 pandemic has resulted in an increase in the need for healthcare services, putting a considerable burden on healthcare systems, especially in areas with high numbers of COVID-19 cases.
• Critical medical supplies and equipment, including ventilators and testing kits, have experienced shortages due to the pandemic.
• Non-COVID-19 healthcare services have been disrupted as resources were redirected toward managing the pandemic. This has resulted in the cancellation of elective surgeries and other non-urgent procedures, impacting patients' health and well-being.
• The pandemic has also caused mental health challenges, including increased rates of anxiety, depression, and post-traumatic stress disorder (PTSD) among healthcare workers and the general population.

The pandemic has underscored the requirement for augmented investment in healthcare infrastructure, medical equipment, and technology to facilitate the provision of healthcare services during emergencies. Moreover, it has brought to light the significance of tackling the social determinants of health, such as poverty, inequality, and healthcare accessibility, in order to promote health equity and bolster resilience against forthcoming pandemics.

2.3 Travel and Tourism

The travel and tourism sector's direct contribution to GDP in India increased from 9.9% in 1995 to 10.3% in 2019, and it is a significant source of employment, accounting for 10.4% of the country's employment in 2019. However, the COVID-19 pandemic has had a substantial impact on the global travel and tourism industry, including in India [7]. The pandemic's consequences, such as travel restrictions and border closures, made it difficult for tourists to reach many destinations, reducing demand for travel and tourism services and leading to job losses, business closures, and reduced revenue. Fear of the virus has also shifted travel preferences, with many tourists choosing domestic travel or outdoor destinations that offer opportunities for social distancing. In response, the industry has put safety measures in place, including improved cleaning procedures, social distancing arrangements, and the use of personal protective equipment, to ensure the safety of individuals. Although the industry is slowly recovering as vaccination rates increase and travel restrictions are lifted, the pandemic's long-term effects on the industry remain uncertain.


2.4 Industry and Economy

The COVID-19 pandemic has caused significant economic repercussions in India, resulting in business closures and job losses despite the government's implementation of various relief measures, such as providing financial assistance. Simultaneously, the pandemic has had a positive impact on the online gaming industry: with more people staying at home and turning to online gaming for entertainment, many gaming companies saw a surge in users and higher profits. However, it is important to note that excessive gaming can have adverse effects on mental health and overall well-being, and individuals must maintain a balanced and healthy lifestyle.

The pandemic's impact on industries and the global economy has been substantial, with many businesses compelled to shut down or operate at reduced capacity due to lockdowns and social distancing measures, resulting in widespread job losses and economic disruption. Industries such as travel, tourism, and hospitality, as well as small businesses that depend on in-person interactions, have been hit particularly hard, seeing declines in business that led to layoffs and permanent closures. However, the pandemic has also accelerated the adoption of digital technologies and automation in many sectors. Companies have had to adapt to remote operations and maintain business continuity, leading to a rise in the use of digital tools such as video conferencing, e-commerce, and online services. The pandemic has also highlighted the importance of global supply chains and exposed vulnerabilities in industries that rely heavily on international trade. As a result, many countries have re-evaluated their trade policies and supply chain strategies to improve their resilience against potential future disruptions.

3 Usage of IoT

The COVID-19 pandemic has spurred the adoption of the Internet of Things (IoT) and other automated solutions in various sectors, including healthcare, education, and retail. Measures such as social distancing and remote work have amplified the need for digital solutions that enable remote communication and collaboration, such as mobile learning and video conferencing tools [8]. Furthermore, businesses and governments are exploring innovative solutions to minimize the risk of COVID-19 transmission in the workplace, including touchless devices such as facial recognition for biometric enrolment and access control. IoT-enabled solutions, such as thermal imaging technology and IoT screening devices, are also being deployed to monitor temperatures and restrict access for individuals with elevated readings, thus mitigating the risk of transmission in the workplace.

The COVID-19 pandemic has emphasized the importance of digital solutions and automation in reducing the risk of transmission and ensuring business continuity in the face of unprecedented challenges. Moving forward, we can anticipate continued innovation and adoption of IoT and other digital technologies to address the ongoing impacts of the pandemic and establish a more resilient and sustainable future.
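The temperature-gated access control mentioned above reduces to simple threshold logic at the edge device. The following sketch illustrates the idea; the 38.0 degC cutoff, badge identifiers, and function names are illustrative assumptions, not details of any deployed system described in this paper.

```python
# Hypothetical sketch of an IoT thermal-screening gate. The cutoff value and
# badge IDs below are illustrative assumptions, not values from this paper.

FEVER_THRESHOLD_C = 38.0  # assumed cutoff for an "elevated" reading


def screen_entry(person_id: str, skin_temp_c: float) -> bool:
    """Return True if the gate should admit the person, False otherwise."""
    allowed = skin_temp_c < FEVER_THRESHOLD_C
    status = "ALLOW" if allowed else "DENY"
    print(f"{status} {person_id}: {skin_temp_c:.1f} degC")
    return allowed


# Readings pushed by a thermal camera at the entrance (simulated):
readings = [("emp-001", 36.6), ("emp-002", 38.4), ("emp-003", 37.1)]
admitted = [pid for pid, t in readings if screen_entry(pid, t)]
```

In a real deployment the threshold and the audit log would live on the gateway rather than in the sensor itself, so policy can be updated centrally.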

3.1 Education System

In response to the COVID-19 pandemic, the education sector has leveraged IoT to facilitate remote learning, improve sanitation, and monitor the health of students and staff. IoT devices such as tablets, laptops, and smart boards have enabled students to participate in virtual classes and submit assignments remotely. Furthermore, IoT-enabled software can be used to monitor student progress and provide personalized feedback. IoT-enabled devices such as Bluetooth beacons and Wi-Fi access points can track the movements of students and staff on campus to facilitate contact tracing in the event of a COVID-19 outbreak [9]. Additionally, IoT sensors can monitor the cleanliness of classrooms and common areas in real time, for example the levels of disinfectant in cleaning solutions, to ensure their effectiveness against viruses and bacteria. Wearable IoT devices such as smartwatches and fitness trackers can also monitor the health of students and staff, including vital signs such as heart rate and temperature. These data can be used to identify individuals displaying COVID-19 symptoms and take prompt action to prevent the virus from spreading.
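The beacon-based contact tracing described above can be reduced to a simple co-location query over sighting logs. The sketch below assumes each badge reports (person, beacon_id, minute) tuples; this data model and the 15-minute window are our own illustrative assumptions, not a specific campus deployment.

```python
# Illustrative sketch of beacon-based contact tracing. The sighting format
# (person, beacon_id, minute) and the window are assumed, not from the paper.
from collections import defaultdict
from itertools import combinations


def find_contacts(sightings, window_min=15):
    """Pairs of people seen at the same beacon within `window_min` minutes."""
    by_beacon = defaultdict(list)
    for person, beacon, minute in sightings:
        by_beacon[beacon].append((person, minute))
    contacts = set()
    for entries in by_beacon.values():
        for (p1, t1), (p2, t2) in combinations(entries, 2):
            if p1 != p2 and abs(t1 - t2) <= window_min:
                contacts.add(tuple(sorted((p1, p2))))
    return contacts


# Simulated sightings from campus beacons:
log = [("alice", "lib-door", 100), ("bob", "lib-door", 108),
       ("carol", "gym", 100), ("dave", "lib-door", 300)]
```

With this log, `find_contacts(log)` flags only alice and bob, who passed the same beacon eight minutes apart; widening the window pulls in more candidate pairs, which is the usual sensitivity/precision trade-off in such systems.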

3.2 Healthcare System

The Internet of Things (IoT) has the potential to revolutionize the healthcare system by enhancing patient outcomes, reducing costs, and improving the efficiency of healthcare delivery. IoT sensors and wearable devices can remotely monitor patients, enabling physicians to track patient vital signs and health metrics in real time [10]. IoT-enabled devices and sensors can optimize hospital operations, including patient flow, inventory management, and equipment utilization. For predictive maintenance, IoT sensors can monitor the performance of medical equipment and predict when maintenance is needed, reducing the risk of equipment failure and downtime [11]. IoT-enabled devices can assist patients in managing their medication schedules and dosages, reducing the likelihood of medication errors and improving patient adherence. Finally, IoT sensors and devices can detect and promptly respond to medical emergencies, such as falls or cardiac events.


3.3 Travel and Tourism System

The impact of the Internet of Things (IoT) on the travel and tourism industry has been significant, particularly during the COVID-19 pandemic. IoT technologies have been critical in enabling the industry to adapt and respond to the challenges presented by the crisis [12]. IoT-enabled solutions such as contactless payments, keyless entry systems, and voice-activated controls have become popular in the travel and tourism industry, allowing travelers to have a contactless experience and reducing the risk of infection. IoT devices such as wearables and sensors can monitor the health and safety of travelers, ensuring that they adhere to social distancing guidelines and that their environments are safe. IoT can help businesses optimize their operations by providing real-time data on occupancy levels, energy usage, and other key metrics, which can reduce costs and improve efficiency. IoT-enabled solutions such as smart hotel rooms, personalized recommendations, and voice-activated controls can enhance the overall customer experience, making travel and tourism more enjoyable [13]. By providing contactless solutions, health and safety monitoring, data analysis, improved efficiency, and an enhanced customer experience, IoT has helped ensure that travel and tourism remain viable and attractive options for travelers.

3.4 Industry and Economy System

The response to the COVID-19 pandemic has been shaped by the widespread adoption of the Internet of Things (IoT) across various industries. IoT has facilitated remote work through devices such as smart cameras, sensors, and wearables, which allow workers to monitor and control operations from a distance, ensuring that work continues uninterrupted. Additionally, IoT-enabled devices have played a critical role in supply chain management by tracking goods and materials, monitoring inventory levels, and managing shipping and logistics to keep supply chains operational. The pandemic has accelerated digital transformation across industries, allowing businesses to automate processes, optimize operations, reduce costs, and increase productivity [14]. Furthermore, IoT devices have helped ensure worker and customer safety during the pandemic by monitoring health conditions, tracking exposure to the virus, and enforcing social distancing protocols. Finally, IoT has supported economic recovery by enabling remote work, improving supply chain management, and driving digital transformation. Overall, IoT has played a crucial role in enabling industries and the wider economy to adapt and respond to the challenges presented by the COVID-19 pandemic.


In addition to the aforementioned points, the IoT has applications in a wide variety of areas related to healthcare and to smart and sustainable cities and technologies. The authors of [15] explore the impact of COVID-19 on IoT, highlighting the increased importance of IoT in the pandemic response; they discuss the potential of IoT in healthcare, such as remote patient monitoring and contact tracing, and in the monitoring of critical infrastructure, such as water and energy systems. The paper [16] provides an overview of the different ways IoT can be used to fight pandemics, discussing the use of IoT for monitoring and tracking infectious diseases, such as COVID-19, as well as for ensuring social distancing in public spaces; it also highlights the potential of IoT in the development of smart cities. The work in [17] reviews the different IoT-based tracing approaches that have been used to combat COVID-19. The authors discuss the advantages and disadvantages of various approaches, including Bluetooth-based contact tracing, GPS tracking, and RFID-based tracing, and also address the privacy concerns associated with these tracing methods. Reference [18] explores the role of IoT in the "new normal" after COVID-19, discussing the potential of IoT in enabling remote work and distance learning, as well as in ensuring safe public spaces, and highlighting the importance of IoT in addressing the mental health impacts of the pandemic.

4 Conclusions and Future Scope

This paper has discussed the general impact of COVID-19 on various walks of life and the widespread use of IoT during the pandemic, which brought numerous challenges affecting the way we live, work, and interact. IoT has helped in addressing these challenges, particularly in the areas of education, healthcare, safety, and economic recovery. The pandemic had a significant impact on these areas, leading to increased adoption of IoT devices to facilitate remote monitoring and ensure the safety of individuals. This work concludes that IoT will continue to play a significant role in the well-being of society. The COVID-19 pandemic has accelerated the adoption of IoT technology in healthcare, supply chain management, and public health. IoT-based solutions such as contact tracing, remote patient monitoring, and supply chain optimization have been crucial in fighting the pandemic. However, challenges such as data privacy and security must be addressed to ensure that the benefits of IoT usage are not outweighed by the risks. Keeping these challenges in view, this study proposes the following areas of future work:

i. IoT sensors and devices can be used to manage and optimize city infrastructure, such as traffic flow, waste management, and energy consumption.
ii. IoT devices can be used to control home appliances and monitor home security, enabling more efficient use of energy and enhancing the safety and comfort of our homes.
iii. IoT sensors can be used to monitor air and water quality, enabling early detection of pollution and more effective management of natural resources.
iv. IoT sensors can be used to monitor soil moisture, temperature, and other environmental factors, enabling farmers to optimize crop yields and reduce waste.
v. The pandemic has highlighted the need for smart cities that can respond to crises quickly and effectively. IoT technology can be used to monitor traffic, air quality, and other environmental factors, enabling city planners to make data-driven decisions.

References

1. Christie A, Henley SJ, Mattocks L, Fernando R, Lansky A, Ahmad FB, Beach MJ (2021) Decreases in COVID-19 cases, emergency department visits, hospital admissions, and deaths among older adults following the introduction of COVID-19 vaccine, United States, September 6, 2020–May 1, 2021. Morb Mortal Wkly Rep 70(23):858
2. Prajapati CC, Kaur H, Rakhra M (2021) Role of IoT and fog computing in diagnosis of coronavirus (COVID-19). In: 2021 9th international conference on reliability, infocom technologies and optimization (trends and future directions) (ICRITO), pp 1–6. IEEE
3. Harper L, Kalfa N, Beckers GMA, Kaefer M, Nieuwhof-Leppink AJ, Fossum M, ESPU Research Committee (2020) The impact of COVID-19 on research. J Pediatr Urol 16(5):715
4. Tarkar P (2020) Impact of COVID-19 pandemic on education system. Int J Adv Sci Technol 29(9):3812–3814
5. Kumar A, Nayar KR, Koya SF (2020) COVID-19: challenges and its consequences for rural health care in India. Public Health Pract 1:100009
6. Shreffler J, Petrey J, Huecker M (2020) The impact of COVID-19 on healthcare worker wellness: a scoping review. West J Emerg Med 21(5):1059
7. Škare M, Soriano DR, Porada-Rochoń M (2021) Impact of COVID-19 on the travel and tourism industry. Technol Forecast Soc Chang 163:120469
8. Kumar S, Maheshwari V, Prabhu J, Prasanna M, Jayalakshmi P, Suganya P, Jothikumar R (2020) Social economic impact of COVID-19 outbreak in India. Int J Pervasive Comput Commun 16(4):309–319
9. Rakshit D, Paul A (2020) Impact of COVID-19 on sectors of Indian economy and business survival strategies. Available at SSRN 3620727
10. Sultana N, Tamanna M (2022) Evaluating the potential and challenges of IoT in education and other sectors during the COVID-19 pandemic: the case of Bangladesh. Technol Soc 68:101857
11. Darshan KR, Anandakumar KR (2015) A comprehensive review on usage of Internet of Things (IoT) in healthcare system. In: 2015 international conference on emerging research in electronics, computer science and technology (ICERECT), pp 132–136. IEEE
12. Car T, Stifanich LP, Šimunić M (2019) Internet of things (IoT) in tourism and hospitality: opportunities and challenges. Tour South East Eur 5:163–175
13. Verma A, Shukla V (2019) Analyzing the influence of IoT in tourism industry. In: Proceedings of international conference on sustainable computing in science, technology and management (SUSCOM), Amity University Rajasthan, Jaipur, India
14. Ndiaye M, Oyewobi SS, Abu-Mahfouz AM, Hancke GP, Kurien AM, Djouani K (2020) IoT in the wake of COVID-19: a survey on contributions, challenges and evolution. IEEE Access 8:186821–186839
15. Nasajpour M, Pouriyeh S, Parizi RM, Dorodchi M, Valero M, Arabnia HR (2020) Internet of things for current COVID-19 and future pandemics: an exploratory study. J Healthc Inf Res 4:325–364
16. Muhsen IN, Rasheed OW, Habib EA, Alsaad RK, Maghrabi MK, Rahman MA, Hashmi SK (2021) Current status and future perspectives on the Internet of Things in oncology. Hematol Oncol Stem Cell Ther
17. Jahmunah V, Sudarshan VK, Oh SL, Gururajan R, Gururajan R, Zhou X et al (2021) Future IoT tools for COVID-19 contact tracing and prediction: a review of the state-of-the-science. Int J Imaging Syst Technol 31(2):455–471
18. Nah FFH, Siau K (2020) COVID-19 pandemic: role of technology in transforming business to the new normal. In: HCI international 2020, late breaking papers: interaction, knowledge and social media: 22nd HCI international conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, proceedings 22, pp 585–600. Springer International Publishing

Design of Battery Monitoring System for Converted Electric Cycles

T. Dinesh Kumar, M. A. Archana, K. Umapathy, H. Rakesh, K. Aakkash, and B. R. Shreenidhi

Abstract Battery monitoring is inevitable for any industry that depends on batteries for stored power. Consistent battery monitoring greatly reduces the risk of system failure. Moreover, it helps avoid downtime and loss of business. This paper presents a design that monitors the battery level and manages battery performance in order to extend its life, thus ensuring safe and efficient operation. The system employs sensors and circuits to measure parameters such as voltage, current, temperature, and state of charge, and uses this information to control the charging and discharging processes of the battery. The proposed design thus provides an efficient system to protect the battery of a converted electric cycle from overcharging, over-discharging, and over-temperature conditions.

Keywords Battery · OLED · Microcontroller · Voltage sensors · Hub motor

1 Introduction

A Battery Monitoring System (BMS) is undoubtedly important for managing and extending the performance of a battery. The system monitors parameters such as voltage, current, temperature, and charging state in order to control the charging and discharging processes of a battery, leading to both safe and efficient operation. The integration of a good-quality lithium-ion battery and an appropriate electric motor can provide an efficient solution for applications such as electric bicycles. Appropriate maintenance and usage of a BMS can guarantee consistent and optimum performance, providing longevity for the battery system. The need for BMSs is increasing due to the growing demand for electric vehicles. If a reliable BMS is employed, batteries can be handled and managed safely for a longer period of time. This increases the performance of the battery and reduces the risk of failure and damage, which in turn results in low maintenance cost and a good return on investment [1–3]. This work deals with the specifications of a BMS, such as its functions, internal components, and relevant applications. A clear picture of the working of a BMS makes it easier to use one to obtain the best results [4]. The key objective of a BMS is to prolong battery life for optimum operation. By consistently tracking and controlling the battery, the BMS prevents all activities that decrease the performance and longevity of the battery [5].

T. Dinesh Kumar (B) · M. A. Archana · K. Umapathy · H. Rakesh · K. Aakkash · B. R. Shreenidhi Department of ECE, SCSVMV Deemed University, Kanchipuram 631561, Tamil Nadu, India, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_14

2 Literature Survey

There has been extensive study and research in the domains of renewable energy sources and electric vehicle transportation, with emphasis on evaluating the performance of Battery Monitoring Systems with respect to battery life. The development of novel algorithms for observing and controlling the charging and discharging of batteries is a key area of BMS research. Another important aspect is monitoring battery parameters such as voltage, current, temperature, and charging state. Technologies have been designed and implemented to enable BMSs to operate efficiently [6]. Further studies have evaluated BMSs with respect to battery performance and lifespan; they indicate that integrating a BMS with good-quality batteries can improve performance and lifespan compared to batteries without a BMS. There are a number of ways to measure voltage, and the choice depends on the type of measurement and the concerned application. The typical approaches are as follows:

• Measurement of DC voltage: A digital voltmeter or multimeter is generally used. Controllers and analog-to-digital converters are employed to measure DC voltage with accuracy and precision.
• Measurement of AC voltage: It is generally measured using an AC voltmeter.
• Measurement of high voltage: Appropriate equipment and methods are needed due to risk factors such as electric shock and damage to instruments. Techniques such as voltage dividers and voltage sensors are employed to measure high voltage safely and accurately.
• Measurement of battery voltage: It is generally measured using the concept of a voltage divider.
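The divider-based battery-voltage measurement can be sketched numerically as follows. The resistor values, the 12-bit resolution, and the 3.3 V reference are illustrative assumptions for reading a 24 V pack with a low-voltage microcontroller; they are not values taken from the paper.

```python
# Sketch of battery-voltage measurement through a resistive divider and an
# ADC. All component values below are illustrative assumptions.

R_TOP = 68_000.0      # ohms, from battery+ to the ADC pin (assumed)
R_BOTTOM = 10_000.0   # ohms, from the ADC pin to ground (assumed)
V_REF = 3.3           # ADC reference voltage (assumed)
ADC_MAX = 4095        # full scale of a 12-bit converter


def battery_voltage(adc_count: int) -> float:
    """Convert a raw ADC count back to the pack voltage."""
    v_pin = adc_count / ADC_MAX * V_REF           # voltage at the divider tap
    return v_pin * (R_TOP + R_BOTTOM) / R_BOTTOM  # undo the divider ratio


# A full 24 V pack puts about 24 * 10k / 78k = 3.08 V on the pin, i.e. this count:
count_at_24v = round(24.0 * R_BOTTOM / (R_TOP + R_BOTTOM) / V_REF * ADC_MAX)
```

The divider ratio is chosen so that the highest pack voltage stays safely below the ADC reference; the software simply inverts that ratio.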
Other techniques used for the measurement include Pulse Width Modulation (PWM) and Digital Signal Processing (DSP) for better accuracy and reliability [7–9]. The above methods of voltage measurement are applied widely in the power and electronics industries for renewable energy and transportation. In BMS-oriented systems, a lithium-ion battery is preferred over a lead-acid battery since it occupies less space for the same capacity. This is an important requirement for applications that need a high power-to-weight ratio, especially electric cycles and bikes. Lithium-ion batteries are environmentally friendly because they contain no toxic chemicals or heavy metals. Their other merits include higher energy density, a lower rate of self-discharge, and longer cycle life. However, it is important to ensure that the motor is compatible with the remaining parts of the bicycle before implementation [10, 11]. Soeprapto et al. enunciated a technique to avoid degradation of a battery by implementing a tool for managing battery usage during both charging and discharging; the technique involved capacitors for temporary energy storage and inductors as current suppressors [12, 17]. A BMS based on Android and an ARM microprocessor was presented by Chuanxue Song et al. for direct and convenient management of batteries [13]. Bagul et al. presented a compact, easily implemented BMS that gives consistent real-time data about lead-acid batteries [14]. By using the value of internal resistance in the proposed model, an index is computed to indicate the percentage of degradation by balancing the temperature of the battery [15, 16].

3 Proposed System

The proposed system is designed for flexibility and scalability in order to make it suitable for a wide range of applications such as energy storage systems, electric vehicles, and portable electronics. The system ensures that the battery operates at its full potential with utmost performance and longevity. A tested BMS must track the performance of the battery for optimum results and operation, protecting it from all activities, such as overcharging and excessive temperature, that degrade its performance and lifespan. The data acquisition unit of the system measures voltage and current values, and the state information is transformed into a user-readable form of the battery state. The control point is connected to a safety protection circuit so that overcharging is avoided by disconnecting the battery from the charging point. The system is managed by a thermal arrangement that consistently tracks the performance of the battery and communicates with it. Figure 1 shows the proposed system to monitor the battery level in electric cycles. The internal arrangement of the BMS includes a number of battery cell packs interfaced with measuring elements that gather information about the level of charge in each cell. A cell balancing unit is connected to the discharger. The charging state, capacity estimation, and health status of the cells are monitored consistently. All the data are forwarded to the display unit by means of a CAN bus controller. Figure 2 shows the internal architecture of the battery management system in the converted electric cycle. The following are the devices used in the system.


Fig. 1 Battery management system for electric cycle

Fig. 2 Internal architecture of BMS in converted electric cycles

• OLED Display
• Voltage sensor
• Sine wave controller
• Hub Motor
• DC Battery


Fig. 3 OLED display embedded with the microcontroller

3.1 OLED Display

The system comprises an OLED display interfaced with the controller. The analog input is read by the system and transformed into a user-readable format. The display is interfaced with an accelerometer, which the user can employ for data access. This arrangement reports the battery level in both available and used form so as to alert the user about the need for charging. Figure 3 shows the OLED display embedded with the controller.

3.2 Voltage Sensor

The voltage sensor is located in a sine wave controller unit housed between the microcontroller and the battery. BMSs generally depend on sensors to collect information about battery performance; these sensors measure various parameters connected with the charging and discharging processes of the battery. The sine wave controller generates PWM signals from the throttle input, and those signals are forwarded to the hub motor. The controller processes the information collected from the sensors, which makes it easier to take decisions based on predefined algorithms. By this technique, the rate of charging and discharging can be adjusted and damage to the battery can be avoided. Figure 4 illustrates the sensor used for measuring the voltage.
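The throttle-to-PWM step described above can be sketched as a simple mapping. The raw throttle range, the dead-band, and the 8-bit timer scaling below are illustrative assumptions about the controller, not specifications from the paper.

```python
# Hedged sketch of the throttle-to-PWM mapping inside the controller.
# The 50..1000 raw range and 8-bit timer are illustrative assumptions.

THROTTLE_MIN, THROTTLE_MAX = 50, 1000   # raw ADC counts, with a dead-band


def throttle_to_duty(raw: int) -> float:
    """Map a raw throttle reading to a PWM duty cycle in percent."""
    raw = max(THROTTLE_MIN, min(raw, THROTTLE_MAX))
    return (raw - THROTTLE_MIN) / (THROTTLE_MAX - THROTTLE_MIN) * 100.0


def duty_to_compare(duty_pct: float, top: int = 255) -> int:
    """Scale a duty cycle to an 8-bit timer compare-register value."""
    return round(duty_pct / 100.0 * top)
```

The dead-band at the low end keeps sensor noise near zero throttle from pulsing the motor; a real controller would also ramp-limit the duty cycle rather than jump to the commanded value.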


Fig. 4 Voltage sensor

3.3 Hub Motor

A 24 V hub motor is the electric motor employed in the system for driving the electric bicycle. Hub motors are more efficient than comparable electric motors and transfer power to the wheel directly with less energy loss. Because of these characteristics, the rider experiences smooth riding and low maintenance. The motor is fixed in the wheel centre, which eliminates the need for a chain to transfer power from motor to wheel; the rider pedals to activate the motor. The rating indicates the nominal voltage and the maximum power produced by the motor, which together determine the motor's speed and torque. This motor generally gives lower top speeds and consumes less energy, extending the life span of the battery (Fig. 5).
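From the 24 V, 350 W rating quoted above, the nominal current follows from P = VI and the torque at a given wheel speed from P = &#964;&#969;. A quick check in Python (the 250 rpm wheel speed is an assumed illustrative value, not a figure from the paper):

```python
import math

V, P = 24.0, 350.0                 # nominal hub-motor rating from the paper
current = P / V                    # full-load current drawn from the battery (A)

rpm = 250.0                        # assumed wheel speed, for illustration only
omega = 2 * math.pi * rpm / 60.0   # angular speed in rad/s
torque = P / omega                 # shaft torque at that speed (N*m)

print(round(current, 2))
print(round(torque, 2))
```

The modest full-load current (under 15 A from a 15 Ah pack) is one reason the hub-motor arrangement is gentle on the battery.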

3.4 DC Battery

The integration of a 24 V 15 Ah lithium-ion battery and a 24 V 350 W hub motor with the BMS provides a reliable and optimal solution for electric bike applications. The BMS manages and protects the battery so that long battery life and a smooth riding experience are guaranteed. The hub motor delivers power consistently for better efficiency than other motor types. Compatible components are carefully selected for the system to ensure improved performance and battery longevity.

Fig. 5 Hub motor

Figure 6 shows the 24 V 15 Ah lithium-ion battery used as the power source to drive the vehicle. Figure 7 shows the flowchart of the proposed system used for monitoring the battery level of the electric cycle. It explains the workflow of the BMS, which includes the controller, battery and accelerometer, and provides power maintenance for smooth operation. The algorithm for the BMS is as follows:

Step 1: Start.
Step 2: Obtain voltage values from the sensor interfaced to the battery.
Step 3: Process the voltage values.
Step 4: If the value lies between 19 V and 24 V,
Step 5: display the values on the OLED module;
Step 6: else indicate that the battery is low.
Step 7: Repeat from Step 2.
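The seven steps above reduce to a simple threshold loop. A non-authoritative Python sketch (the `read_voltage` stub and the linear 19-24 V to 0-100% mapping are assumptions for illustration; the paper specifies only the 19-24 V window and the low-battery alert):

```python
V_LOW, V_FULL = 19.0, 24.0   # operating window checked in Step 4

def battery_status(voltage):
    """Steps 3-6: classify one voltage sample and build the display message."""
    if V_LOW <= voltage <= V_FULL:
        percent = (voltage - V_LOW) / (V_FULL - V_LOW) * 100.0
        return f"Battery {percent:.0f}% ({voltage:.1f} V)"   # shown on the OLED
    return "Battery is low"                                  # Step 6 alert

def monitor(read_voltage, samples):
    """Steps 2 and 7: repeatedly sample the sensor and report each reading."""
    return [battery_status(read_voltage()) for _ in range(samples)]

# Example with a stubbed sensor in place of real hardware
readings = iter([24.0, 21.5, 18.7])
for message in monitor(lambda: next(readings), 3):
    print(message)
```

On real hardware the loop would run indefinitely (Step 7) instead of for a fixed number of samples.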


Fig. 6 Battery

Fig. 7 Flowchart for monitoring battery level of electric cycle


4 Results and Discussion

The BMS is installed in the electric cycle; it shows the capacity of the battery and alerts the user whenever the battery is about to drain. Figure 8 shows the implementation process. Figure 9 illustrates the arrangement of the BMS connected to the electric cycle: the green, red and blue wires are the phase wires, which are connected to the phase wires of the hub motor, and the system is connected to the throttle. Figure 10 shows the BMS connected to the electric cycle along with the DC battery. Figure 11 shows the BMS connected to the electric cycle with the OLED display indicating that the battery voltage level is at full charge, Fig. 12 shows the display indicating the battery half charged, and Fig. 13 shows the battery voltage level being low.

Fig. 8 Implementation diagram

Fig. 9 Connection setup of BMS for electric cycle


Fig. 10 BMS connected to electric cycle and battery

Fig. 11 Battery level indications at high level in BMS for electric cycle


Fig. 12 Battery level indications at medium level in BMS for electric cycle

Fig. 13 Battery level indications at low level in BMS for electric cycle


5 Conclusions

The system proposed in this paper monitors the battery level and optimizes battery performance by extending its life and ensuring safe and efficient operation. A set of sensors and circuits is employed to measure parameters such as voltage, current, temperature and state of charge, and these data are used to control the charging and discharging of the battery. The system also provides an efficient arrangement for protecting the battery from overcharging, over-discharging and over-temperature conditions, which would otherwise reduce the performance and life of the battery in converted electric cycles. The BMS can be integrated with IoT so that the battery level can be communicated to the concerned mobile users. The system can be extended with an anti-theft device for alerting the user in case of theft and interfaced with a smart parking system for easy and convenient parking in densely populated areas. By incorporating a GPS-enabled device, the live location of the electric cycle can be tracked easily.


Image Denoising Framework Employing Auto Encoders for Image Reconstruction

Shruti Jain, Monika Bharti, and Himanshu Jindal

Abstract An Auto Encoder (AE) can be used for denoising images. It is a type of neural network that can reconstruct its input. The goal of an auto-encoder is to represent a (sparse) input dataset in a compressed form that retains the most relevant information, such that the input may be reconstructed at the output with minimal loss from the compressed representation. In this paper, a deep AE, a denoising AE, and a variational AE are used. Any AE where an extra constraint is put on the bottleneck to have a low KL divergence from a normal distribution is a variational AE. Variational AEs are used in multiple ways, but the most common use is generative: the decoder on top of the bottleneck can be used to generate new data points. A maximum accuracy of 88.85% is observed using the denoising autoencoder, while 76.25% and 81.44% are observed for the deep and variational autoencoders, respectively. Accuracy improvements of 8.3% and 14.18% are observed over the variational and deep AE, respectively.

Keywords Autoencoders · Noise · Deep · Denoising · Variational

1 Introduction

Autoencoders (AE) can be applied in a variety of situations. One problem an autoencoder performs well on is denoising an image. The basic idea of this model is to learn a compressed representation of the input; it is trained to minimize the reconstruction error [10, 13, 26]. After training, this compressed representation generalizes the input. In a training set, noise can be present in each image. This noise is not part of the structure of the image and should not be learned by the encoder. Therefore, the encoded representation of the training set will only retain the general characteristics of the image [3, 15, 18, 21, 24, 30]. The model will effectively be able to separate the signal from the noise.

AEs are trained to learn a more compact representation of the input data; they do so by encoding the input with a function to a lower dimension and then trying to reconstruct the input using a decoding function [7, 8]. An AE is a three-layer neural network comprising input, hidden, and output layers, where the output units are connected back to the input units, as shown in Fig. 1; the AE decodes its input onto its output. Encoder-decoder architectures are often used in a more general manner: a network that maps one input onto a different output is simply an encoder, whether a statistical model or a neural network. In other words, separate inputs and outputs are the normal case for supervised machine learning models, and an AE is the special case where the inputs equal the outputs [11, 14]. The difference between a convolutional neural network (CNN) and an AE is shown in Fig. 2: a CNN predicts a target y from an input vector x, while an autoencoder predicts x from x. Autoencoders are generally used in unsupervised learning situations, as no labels are required, only data; with a typical autoencoder, a system is created that reduces the dimensionality of the input and extracts important features, without classifying the input based on given labels [27, 32]. Abdella et al. [1] proposed image

Fig. 1 Autoencoder

Fig. 2 a Convolutional neural network, b auto-encoder


reconstruction using variational autoencoders with a K-means back end. Gupta et al. [12] proposed a coupled autoencoder for the reconstruction of images. Wu et al. [37] applied autoencoder neural networks to EMT images; to evaluate the network's applicability and generalizability, data with Gaussian noise and data for flow patterns not included in the training dataset are used, respectively. Priya et al. [25] proposed a thresholding technique and a Wiener filter for the reconstruction of images: the Wiener filter demasked the image using a linear stochastic framework, and the study combines a Wiener filter with wavelet-based NeighSure shrink thresholding; the findings demonstrate improvements in visual quality. Xiao et al. [38] presented a comparative study of different thresholding methods, such as BayesShrink, VisuShrink, SureShrink, and feature-adaptive shrinkage, in wavelet-based image reconstruction [5]. Kumar et al. [22] proposed image denoising based on a Gaussian/bilateral filter. Mupparaju et al. [2] compared various thresholding techniques, SureShrink, BayesShrink, and VisuShrink, for image denoising. Khan et al. [20] proposed denoising images with different wavelet thresholding using various shrinkage methods under basic noise conditions. Neelima et al. [28] proposed wavelet-transform-based image denoising using thresholding techniques. Denoising is a very important as well as very critical step; on the other hand, it can also lead to information loss due to smoothing [19]. Random Gaussian noise injection is quite similar to L2 regularization, and in general, noise injection can be seen as a type of regularization. Among other things, regularization by noise injection prevents certain model parameters from growing too influential and biased toward the training data [4].
If the training data is corrupted at every iteration, then each model parameter will see a corrupted input with a certain probability, so throughout training many model parameters will encounter different corrupted inputs, hopefully preventing most of them from becoming too influential and overfitting the training data [16, 17]. Auto-encoders are very effective in denoising because they compress the input into a compressed representation via the bottleneck design, which retains only the important elements of the input while eliminating insignificant information such as noise. Different types of autoencoders are used in this paper to reconstruct images. The same dataset is used with each type of autoencoder, and training loss and accuracy are evaluated for every autoencoder at different epochs.
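The per-iteration corruption described above can be sketched in a few lines of standard-library Python; each call draws fresh Gaussian noise, so every parameter update sees a different corrupted input (the sigma = 0.1 noise level and the [0, 1] pixel range are illustrative assumptions):

```python
import random

def corrupt(image, sigma=0.1, rng=random):
    """Add independent Gaussian noise to each pixel, clipped to the [0, 1] range."""
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in image]

random.seed(0)
clean = [0.0, 0.5, 1.0]
print(corrupt(clean))
print(corrupt(clean))   # a different corruption of the same image
```

A denoising autoencoder is then trained to map `corrupt(x)` back to the clean `x`.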

2 Methodology

Auto-encoders (the encoder and the decoder) are simple, fully connected, feed-forward neural nets, but nothing prevents replacing these networks with CNNs, RNNs, or other deep architectures. The goal of an auto-encoder is to represent a (sparse) input dataset in a compressed form that retains the most relevant information, such that the input may be reconstructed at the output with minimal loss from the compressed representation. To do this, the input data is passed through an information bottleneck so that the encoder learns the most efficient 'latent representation' of the input rather than just memorizing it [6, 34]. For activity recognition, for example, features can be learned automatically from a few seconds of sensor data instead of extracting heuristic-based features; once these features are learned, a classifier can be used to test performance [35]. However, how many layers to stack and how many hidden neurons to use is always contentious. Denoising is useful because distorting the data and adding some noise to it can help the model generalize to the test set [29]. A stacked AE is trained in an unsupervised manner to obtain the weights. In this paper, different types of autoencoders are used for removing noise. Figure 3 shows the proposed methodology. Deep AE, denoising AE, and variational AE are the types of auto-encoder considered. Variational autoencoders are a type of neural network that belongs to the category of explicit distribution modeling techniques; they are used where the input data is modeled by some distribution whose parameters are to be learned [9, 27, 33]. To learn the latent representation of a particular set of input data, a denoising AE is used, while a variational AE is used to learn the probability distribution of the input data [36]. For the implementation, the online MNIST dataset is used. It comprises images of the digits 0 to 9 in many different designs and shapes, which makes the data useful for a range of image-processing tasks. In total there are 70,000 images, of which 60,000 are used for training and 10,000 for testing. The working of the AE is shown in Fig. 4.

Fig. 3 Image reconstruction by the proposed technique (image dataset, data preprocessing, auto-encoder, reconstructed images, parameter evaluation; types of auto-encoder: deep, denoising, and variational)


Fig. 4 Working of autoencoder

Fig. 5 Block diagram of autoencoder

In Fig. 4, x is the original input and x' is the reconstructed value. An encoder-decoder architecture has an encoder section that takes an input and maps it to a latent space, and a decoder section that takes that latent space and maps it to the output. The block diagram of the autoencoder is represented in Fig. 5, and the stepwise implementation of the autoencoder is shown in Algorithm 1.

Algorithm 1 Implementation steps of the deep autoencoder
Input: Dataset
Output: Reconstructed images
1: begin
2: Load the dataset
3: Initialize the deep auto-encoder model and other hyper-parameters
4: For training, use 100 epochs with MSE loss and the Adam optimizer
5: Evaluate the output for each epoch by averaging the loss over all batches
6: Store the images and their outputs for each epoch
7: Reconstruct each image at every epoch
8: end
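Algorithm 1 can be illustrated with a toy version of the training loop. The sketch below trains a one-hidden-layer linear autoencoder with plain gradient descent on the MSE (the paper uses a deep AE with the Adam optimizer over 100 epochs on MNIST; the tiny network, random data, learning rate and epoch count here are illustrative stand-ins only):

```python
import random

random.seed(1)
D, K, LR, EPOCHS = 4, 2, 0.05, 50          # input dim, bottleneck dim, step, epochs
data = [[random.random() for _ in range(D)] for _ in range(20)]

# Encoder W1 (D x K) and decoder W2 (K x D), small random initialization
W1 = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(D)]
W2 = [[random.uniform(-0.1, 0.1) for _ in range(D)] for _ in range(K)]

def forward(x):
    z = [sum(x[i] * W1[i][k] for i in range(D)) for k in range(K)]    # bottleneck code
    xr = [sum(z[k] * W2[k][j] for k in range(K)) for j in range(D)]   # reconstruction
    return z, xr

def epoch_loss():
    total = 0.0
    for x in data:
        _, xr = forward(x)
        total += sum((xr[j] - x[j]) ** 2 for j in range(D)) / D       # per-sample MSE
    return total / len(data)

losses = []
for _ in range(EPOCHS):
    losses.append(epoch_loss())                 # Step 5: average loss per epoch
    for x in data:                              # gradient descent (stand-in for Adam)
        z, xr = forward(x)
        err = [2.0 * (xr[j] - x[j]) / D for j in range(D)]
        # encoder gradient uses the pre-update decoder weights (chain rule)
        grad_W1 = [[x[i] * sum(err[j] * W2[k][j] for j in range(D))
                    for k in range(K)] for i in range(D)]
        for k in range(K):
            for j in range(D):
                W2[k][j] -= LR * err[j] * z[k]
        for i in range(D):
            for k in range(K):
                W1[i][k] -= LR * grad_W1[i][k]

print(losses[0] > losses[-1])   # the averaged epoch loss decreases, as in Table 1
```

The real pipeline differs only in scale: convolution-free dense layers become deep stacks, SGD becomes Adam, and the toy vectors become MNIST images.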

The model will effectively be able to separate the signal from the noise. Denoising AEs are a robust variant of the standard autoencoders. They have the same structure as standard autoencoders but are trained on samples to which some amount of noise has been added. This ensures that the network does not learn an identity mapping, which would be pointless. Any autoencoder where an extra constraint is put on the bottleneck to have a low KL divergence from a normal distribution is a variational autoencoder. Variational autoencoders are a type of NN that belongs to the category of explicit distribution modeling techniques.
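The KL constraint mentioned above has a closed form when the encoder outputs a diagonal Gaussian q(z|x) = N(mu, sigma^2) and the prior is the standard normal: KL = 1/2 * sum(mu^2 + sigma^2 - 1 - log sigma^2). A small self-contained check (the example mu and log-variance values are illustrative):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ) summed over latent dimensions."""
    return 0.5 * sum(m ** 2 + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, log_var))

# A latent code that already matches the prior incurs no penalty ...
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))   # 0.0

# ... while a code drifting away from N(0, 1) is penalized
print(kl_to_standard_normal([1.0, -1.0], [0.0, 0.0]))  # 1.0
```

In a variational AE this term is added to the reconstruction loss, pulling the bottleneck distribution toward the prior so the decoder can later be sampled generatively.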

3 Results and Discussion

The main aim of this paper is to reconstruct the true scene appearance from a noisy image. There are many different algorithms for denoising, from simple median filtering to sophisticated wavelet methods, and even algorithms evolved using genetic programming; all of them try to recognize anomalous pixel values and modify them to fit better with the rest of the scene. Autoencoders can be applied to a variety of situations, and one problem an autoencoder performs well on is denoising an image. The basic idea of this model is to learn the compressed representation of the input. Images are reconstructed using different auto-encoders, namely deep, denoising, and variational, as shown in Figs. 6, 7 and 8, respectively. For a robust representation of a particular set of input data, denoising is used, while variational encoders are used to learn the probability distribution of the input data. The training loss for the deep, denoising, and variational auto-encoders is shown in Fig. 9, and Table 1 lists the epochs and training-loss values for each autoencoder.

Fig. 6 Reconstructed image using deep autoencoder: a after 1 epoch, b after 65 epochs, c after 80 epochs

Fig. 7 Reconstructed image using denoising autoencoder: a real images, b noised images, c denoised images

Fig. 8 Reconstructed image using variational autoencoder

Table 1 shows that the training loss decreases at every epoch, progressively filling the image out with its features. For the denoising encoder, 30 epochs are used, and for the variational encoder, 3 epochs. Denoising autoencoders are therefore more reliable than plain autoencoders, and they gain a greater understanding of the features in the input than a typical AE. Their training loss is also lower from the first epoch onward, which gives a clearer reconstructed image at the final epoch than the other autoencoders. The variational AE takes 7-8 h for only 3-4 epochs. Table 2 lists the accuracy of the three autoencoders: a maximum accuracy of 88.85% is observed using the denoising autoencoder, while 76.25% and 81.44% are observed for the deep and variational autoencoders, respectively. Table 3 lists the validation loss and validation accuracy of the autoencoders: a maximum validation accuracy of 88.84% is observed using the denoising autoencoder, while 76.35% and 81.45% are observed for the deep and variational autoencoders, respectively. Overall, denoising autoencoders are better than variational and deep autoencoders for reconstructing images and perform well on the MNIST dataset.

Comparison with Existing Work: The proposed model is compared with existing work on the basis of accuracy in Table 4. About 90% accuracy is obtained using denoising autoencoders, while 75%, 78% and 74.5% accuracy are obtained by [31], [6] and [23], respectively. The proposed model thus provides 16.6%, 13.3% and 16.9% accuracy improvements in comparison with [31], [6] and [23], respectively. Thus, it is concluded


Fig. 9 Training loss for a deep autoencoder, b denoising autoencoder, c variational autoencoder

Table 1 Training loss of different autoencoders at various epochs

Deep autoencoder        Denoising autoencoder     Variational autoencoder
Epochs   Train loss     Epochs   Train loss       Epochs   Train loss
1        0.923          1        253.2964         1        0.0600
2        0.918          2        184.4870         2        0.0095
3        0.914          3        166.5534         3        0.0084
…        …              …        …
98       0.879          28       149.5683
99       0.879          29       149.2383
100      0.879          30       149.2969
that denoising autoencoders successfully reconstruct images efficiently while consuming less time.


Table 2 Accuracy of various AE

Deep autoencoder       Denoising autoencoder     Variational autoencoder
Epochs   Accuracy      Epochs   Accuracy         Epochs   Accuracy
1        75.85         1        82.45            1        80.15
2        76.01         2        84.68            2        81.42
3        76.25         3        88.85            3        81.44

Table 3 Validation loss and validation accuracy of various autoencoders

         Deep autoencoder     Denoising autoencoder    Variational autoencoder
Epochs   Loss     Accuracy    Loss      Accuracy       Loss      Accuracy
1        0.906    76.00       192.46    82.85          0.1136    80.79
2        0.904    76.01       192.20    84.68          0.0500    81.12
3        0.894    76.35       189.52    88.84          0.0096    81.45

Table 4 Comparison with existing work

Models                          Accuracy (%)
Denoising autoencoders          88.85
Prentašić and Lončarić [31]     75
Bhatia et al. [6]               78
Lam et al. [23]                 74.5
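The 8.3% and 14.18% improvement figures quoted in the abstract and conclusion appear to be computed relative to the denoising AE's own accuracy, i.e. (88.85 - other) / 88.85; a quick arithmetic check:

```python
best = 88.85                                   # denoising AE accuracy (%)
others = {"variational": 81.44, "deep": 76.25}

for name, acc in others.items():
    improvement = (best - acc) / best * 100.0  # relative to the denoising AE
    print(f"{name}: {improvement:.2f}%")
```

The 16.6% and 13.3% figures in the comparison with existing work appear to follow the same scheme with the rounded 90, 75 and 78 values.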

4 Conclusion and Future Work

Auto-encoders are very effective in denoising because they compress the input into a compressed representation via the bottleneck design, which retains only the important elements of the input while eliminating insignificant information such as noise. In this paper, the reconstruction of noisy images is performed using different autoencoders, namely a deep AE, a denoising AE, and a variational AE. The denoising AE results in 88.85% accuracy, which is better than the other autoencoders; denoising autoencoders gain a greater understanding of the features in the input than the other AEs. Accuracy improvements of 8.3% and 14.18% are observed over the variational and deep autoencoders, respectively. In future work, wavelet-transform denoising techniques can be used for image reconstruction.


References

1. Abdella A, Uysal I (2020) A statistical comparative study on image reconstruction and clustering with novel vae cost function. IEEE Access 8:25626–25637
2. Al Jumah A (2013) Denoising of an image using discrete stationary wavelet transform and various thresholding techniques
3. Bharti M, Jindal H (2020) Automatic rumour detection model on social media. In: 2020 Sixth international conference on parallel, distributed and grid computing (PDGC). IEEE, pp 367–371
4. Bharti M, Jindal H (2019) Modified genetic algorithm for resource selection on internet of things. In: Futuristic trends in networks and computing technologies: second international conference, FTNCT 2019, Chandigarh, India, November 22–23, 2019, Revised Selected Papers 2. Springer, pp 164–176
5. Bharti M, Saxena S, Kumar R (2020) A middleware approach for reliable resource selection on internet-of-things. Int J Commun Syst 33(5):e4278
6. Bhatia K, Arora S, Tomar R (2016) Diagnosis of diabetic retinopathy using machine learning classification algorithm. In: 2016 2nd international conference on next generation computing technologies (NGCT). IEEE, pp 347–351
7. Biswas M, Om H (2016) A new adaptive image denoising method based on neighboring coefficients. J Inst Eng (India) Ser B 97:11–19
8. Ehsaeyan E (2017) A novel neighshrink correction algorithm in image de-noising. Iran J Elect Electron Eng 13(3):246
9. Fredj AH, Malek J (2017) Gpu-based anisotropic diffusion algorithm for video image denoising. Microprocess Microsyst 53:190–201
10. Gai S, Liu P, Liu J, Tang X (2010) A new image denoising algorithm via bivariate shrinkage based on quaternion wavelet transform. J Comput Inf Sys 6(11):3751–3760
11. Garg S, Pundir P, Jindal H, Saini H, Garg S (2021) Towards a multimodal system for precision agriculture using iot and machine learning. In: 2021 12th international conference on computing communication and networking technologies (ICCCNT). IEEE, pp 1–7
12. Gupta K, Bhowmick B (2018) Coupled autoencoder based reconstruction of images from compressively sampled measurements. In: 2018 26th European signal processing conference (EUSIPCO). IEEE, pp 1067–1071
13. He N, Wang JB, Zhang LL, Lu K (2015) An improved fractional-order differentiation model for image denoising. Signal Process 112:180–188
14. Jain P, Tyagi V (2015) An adaptive edge-preserving image denoising technique using tetrolet transforms. Vis Comput 31:657–674
15. Jin J, Yang B, Liang K, Wang X (2014) General image denoising framework based on compressive sensing theory. Comput Graph 38:382–391
16. Jindal H, Bharti M, Kasana SS, Saxena S (2023) An ensemble mosaicing and ridgelet based fusion technique for underwater panoramic image reconstruction and its refinement. Multimed Tools Appl 1–53
17. Jindal H, Kasana SS, Saxena S (2016) A novel image zooming technique using wavelet coefficients. In: Proceedings of the international conference on recent cognizance in wireless communication & image processing: ICRCWIP-2014. Springer, pp 1–7
18. Jindal H, Saxena S, Kasana SS (2017) Sewage water quality monitoring framework using multi-parametric sensors. Wireless Pers Commun 97:881–913
19. Jindal H, Singh H, Bharti M (2018) Modified cuckoo search for resource allocation on social internet-of-things. In: 2018 Fifth international conference on parallel, distributed and grid computing (PDGC). IEEE, pp 465–470
20. Khan S, Jain A, Khare A, RITS B (2013) Denoising of images based on different wavelet thresholding by using various shrinkage methods using basic noise conditions. Int J Eng Res Technol 2(1)
21. Kim JH, Akram F, Choi KN (2017) Image denoising feedback framework using split bregman approach. Expert Syst Appl 87:252–266
22. Kumar BS (2013) Image denoising based on gaussian/bilateral filter and its method noise thresholding. Signal Image Video Process 7(6):1159–1172
23. Lam C, Yi D, Guo M, Lindsey T (2018) Automated detection of diabetic retinopathy using deep learning. AMIA Summits Transl Sci Proceed 2018:147
24. Laparra V, Gutiérrez J, Camps-Valls G, Malo J (2010) Image denoising with kernels based on natural image relations. J Mach Learn Res 11(2)
25. Mahalakshmi B, Anand M (2014) Adaptive wavelet packet decomposition for efficient image denoising by using neighsure shrink method. Int J Comput Sci Inf Technol 5(4):5003–5007
26. Mander K, Jindal H (2017) An improved image compression-decompression technique using block truncation and wavelets. Int J Image Gr Signal Process 9(8):17
27. Manreet K, Monika B (2014) Fog computing providing data security: a review. Int J Comput Sci Softw Eng 4(6):832–834
28. Neelima M, Pasha MM (2014) Wavelet transform based on image denoising using thresholding techniques. Int J Adv Res Comput Commun Eng 3(9):7906–7908
29. Prashar N, Sood M, Jain S (2020) Dual-tree complex wavelet transform technique-based optimal threshold tuning system to deliver denoised ecg signal. Trans Inst Meas Control 42(4):854–869
30. Prashar N, Sood M, Jain S (2021) Design and implementation of a robust noise removal system in ecg signals using dual-tree complex wavelet transform. Biomed Signal Process Control 63:102212
31. Prentašić P, Lončarić S (2016) Detection of exudates in fundus photographs using deep neural networks and anatomical landmark detection fusion. Comput Methods Progr Biomed 137:281–292
32. Roy S, Sinha N, Sen AK (2010) A new hybrid image denoising method. Int J Inf Technol Knowl Manag 2(2):491–497
33. Shahdoosti HR, Hazavei SM (2017) Image denoising in dual contourlet domain using hidden markov tree models. Digital Signal Process 67:17–29
34. Shi W, Li J, Wu M (2010) An image denoising method based on multiscale wavelet thresholding and bilateral filtering. Wuhan Univ J Nat Sci 15(2):148–152
35. Spann SM, Kazimierski KS, Aigner CS, Kraiger M, Bredies K, Stollberger R (2017) Spatio-temporal tgv denoising for asl perfusion imaging. Neuroimage 157:81–96
36. Tian D, Xue D, Wang D (2015) A fractional-order adaptive regularization primal–dual algorithm for image denoising. Inf Sci 296:147–159
37. Wu XJ, Xu MD, Li CD, Ju C, Zhao Q, Liu SX (2021) Research on image reconstruction algorithms based on autoencoder neural network of restricted boltzmann machine (rbm). Flow Meas Instrum 80:102009
38. Xiao F, Zhang Y (2011) A comparative study on thresholding methods in wavelet-based image denoising. Proced Eng 15:3998–4003

Server Access Pattern Analysis Based on Weblogs Classification Methods

Shirish Mohan Dubey, Geeta Tiwari, and Priusha Narwaria

Abstract Today's world relies heavily on the internet, web pages, and applications in daily life. Web usage mining uses data-mining approaches to extract usage patterns from web data in order to better understand and satisfy the requirements of web-based applications. The three phases of web usage mining are preprocessing, pattern identification, and pattern analysis. This work provides in-depth explanations of pattern and prediction analysis based on web usage mining. Owing to its potential applications, web usage mining has seen a substantial increase in interest from the academic and industrial sectors, and the work in this area, including research projects and current web usage patterns, is thoroughly taxonomized here. Many details and runtime data are captured in logs while a system is operating; each entry includes a timestamp and a log message that describe what happened in the system. Recent work is also reviewed, taking into account different data-mining approaches including clustering and classification.

Keywords Web usage mining · Data preprocessing · Log file analysis

1 Introduction Web mining is the practice of autonomously retrieving, extracting, and analysing data from Web pages and services for knowledge discovery using data-mining techniques. Large amounts of data are now generally freely accessible to users thanks S. M. Dubey (B) · G. Tiwari Poornima College of Engineering, Jaipur, India e-mail: [email protected] G. Tiwari e-mail: [email protected] P. Narwaria Institute of Technology & Management, Gwalior, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_16


to the World Wide Web's (WWW) proliferation. It is necessary to manage and organize the various sorts of data so that different users can access them effectively. Hidden information on the Web is uncovered using a variety of data-mining techniques; as a result, more and more researchers are focusing on applying data-mining methods to the Web. Web usage mining comprises three phases: data pre-processing, pattern discovery, and pattern analysis. After presenting two algorithms for field extraction and data cleaning, this paper covers the processes required to create an efficient web usage mining system. The data needed for this research are provided along with a brief explanation of online usage mining. On the other hand, it can be difficult to develop accurate and automatic log processing because many raw log entries are unstructured. In order to analyse internet traffic patterns, numerous studies have concentrated on log gathering, log templating, log vectorization, and log classification. In agreement with the results presented, we propose a model for weblog behaviour analysis that makes use of evolutionary clustering and machine-learning classification models.

2 Literature Review Wang et al. [1] investigate a multi-tier inspection queueing system with finite capacity based on a variety of risk levels. A queueing analysis is carried out for the stochastic process of screening cross-border travellers under differentiated service for examination and quarantine. In addition, the authors created a computing scheme to calculate the steady-state probabilities and a number of performance criteria for the suggested queueing system. A step-by-step explanation of the approach is also provided with a worked illustration. Hassanin et al. [2] introduce a new computational backend paradigm that works with OCR services and uses Arabic document information retrieval (ADIR) as a dataset. Various services are explained that support document analysis, retrieval, processing (including dataset preparation), and recognition. As a result, numerous further services in the OCR domain can be composed using ADIR services' broad Arabic OCR features. The suggested work can also offer access to various document layout analysis methods along with a platform where people can exchange and manage these methods (services) without any setup. One of the datasets used had 800 Arabic letters from 60 authors. Each letter from Alif to Ya was written by each author ten times, twice over. The forms were scanned at 300 DPI and divided into two sets: a training set with 440 letters, with 48 images for each class label, and a testing set with 360 letters, with 120 images for each class label. Arabic handwritten letter classification employs and modifies a convolutional neural network (CNN). The authors demonstrated through an experimental test that the results reached a classification accuracy rate of 100 for the test images. Ahmad et al. [3] use Locky, one of the most dangerous families, as a case study to show how complete behavioural exploration of crypto-ransomware network activity may be done. A technical testbed was developed, and a collection of useful and


instructional network features was collected and divided into various kinds. The implementation of a network-based intrusion detection system included two independent classifiers operating concurrently on distinct levels—the packet and flow levels. The experimental evaluation of the suggested detection system shows that it is very good at tracking ransomware network activity and has a low false-positive rate, valid extracted characteristics, and high detection accuracy. Korine et al. [4] present DAEMON, a novel, dataset-independent malware classifier. One of DAEMON's important characteristics is that the attributes it employs and the way they are mined make it easier to comprehend the specific behaviour of malware families, which helps to explain its classification choices. A sizeable dataset of 86 binaries from a variety of malware families that target Windows-based computers was used to train DAEMON. Jiang et al. [5] examine theoretically the ideal hybrid caching structure. The authors also transformed the original issue into a classification issue. In order to achieve a nearly optimal hybrid caching system that provides high accuracy with little complexity, the authors presented a greedy algorithm. The proposed near-optimal hybrid caching approach outperforms alternatives in numerical results and exhibits flexibility in delay-sensitive and EE-sensitive circumstances. Xu et al. [6] present a novel, effective FL framework dubbed FL-PQSU. Structured pruning, weight quantization, and selective updating are the three stages that make up the pipeline; together they lower the cost of computation, storage, and communication, which speeds up FL training. Using well-known DNN models such as AlexNet and VGG16 and publicly accessible datasets such as MNIST and CIFAR10, the authors show that FL-PQSU can effectively control the training overhead while still ensuring the learning performance. Liao et al. [7] offer a brand-new server-side prefetching system. The authors specifically suggested piggybacking client identity onto I/O requests to contextualize server-side block access history. They used Tarjan's algorithm to reveal cut points in the connected graph after transforming per-client time series of block access sequences into a connected graph on the server side using the visibility graph technique. They devised a pattern-matching approach to detect a corresponding access pattern (i.e., a feature tuple) for a given block access history, expressing these patterns as feature tuples. Chang et al. [8] proposed the three-scale HADIoT framework, in which IoT devices send sensitive data to their local edge servers for local anomaly detection after data refinement, which includes re-framing, normalization, complexity reduction via Principal Component Analysis, and symbol mapping. Local and global anomaly detectors work together to achieve high detection accuracy. Comparing the suggested framework to three standard schemes, simulation results show that it is more effective in terms of true positive rate, false positive rate, precision, accuracy, and F-score. Shao et al. [9] use a learning-based framework to handle this joint uncertain problem due to its complexity, and the significance of using this mechanism in practical operation with real arbitrary BS demand data is also discussed. Finally, to assess the effectiveness of the suggested mechanism, the authors ran extensive real-data


driven simulations. The results demonstrate the utility of the strategy under arbitrary BS conditions. Zhang et al. [10] suggest an adaptive synchronous parallel strategy for distributed ML. The synchronization of each computing node with the parameter server is adaptively modified through a performance-monitoring model by taking into account the overall performance of each node, ensuring improved accuracy. The approach also shields the ML model from being impacted by unrelated jobs residing in the same cluster. Studies reveal that the approach effectively enhances clustering performance, guarantees the model's accuracy and speed of convergence, accelerates model training, and has good scalability.

3 Proposed Methodology Figure 1 shows the complete system framework. This section divides the whole system into four phases (Fig. 2).

3.1 Log Collection When it comes to the monitoring of computer systems, one of the most crucial duties for software developers and operations engineers is log gathering. A computer system or network device can send its logs to a log file, a syslog server, a trap, or even a program API; these are some of the most common methods for receiving logs. Research activity typically involves the use of open log files as raw data sources. The HDFS log file is a dataset that was compiled from more than 200 EC2 machines owned by Amazon. This work uses the HDFS log file as the primary resource for log collection. Fig. 1 Working steps for web server access pattern analysis


Fig. 2 Working flowchart of web server access pattern analysis

4 Data Pre-processing The first step in processing data is called "pre-processing," and it includes both feature extraction and data labelling. More precisely, it refers to the process of extracting the desired features from raw server log files and labelling them with one of two labels: a value of 1 indicates that a unit of data is anomalous, while a value of 0 indicates that the behaviour being measured is normal.
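The labelling step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the record fields and the anomaly criterion (an ERROR-level entry) are hypothetical stand-ins, since the exact labelling rule is not specified in the text.

```python
def label_record(record: dict) -> int:
    """Return 1 for an anomalous record, 0 for a normal one.
    The ERROR-level criterion is an illustrative assumption."""
    return 1 if record.get("level") == "ERROR" else 0

def preprocess(raw_records):
    """Turn raw log records into (features, label) pairs.
    The chosen features here are purely for illustration."""
    dataset = []
    for rec in raw_records:
        features = {"event": rec["event"], "length": len(rec["message"])}
        dataset.append((features, label_record(rec)))
    return dataset

records = [
    {"event": "E1", "level": "INFO", "message": "block served"},
    {"event": "E2", "level": "ERROR", "message": "replica lost"},
]
pairs = preprocess(records)  # [(features, 0), (features, 1)]
```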

4.1 Log Templating Logs are unstructured data made up of free-text information. The purpose of log parsing is to convert the content of these raw messages into organized event templates. Log parsers can be broken down into


three distinct groups. The first category comprises clustering-based methods: the distance between each pair of logs is used as the primary factor in the clustering process, and each cluster is responsible for generating event templates. The second group consists of heuristic-based methods, in which log templates are directly extracted using heuristic criteria; for instance, Drain employs a fixed-depth parse tree to encode the custom-created rules it uses for parsing. The third category comprises NLP-based methods; some examples are DRAIN, Random Forest, and N-gram dictionaries, which accomplish effective log parsing through NLP algorithms. Drain has been shown to provide high accuracy and performance compared to other approaches, so Drain is selected as the log parser in this work. The algorithm used for log templating is illustrated below:
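As a much-simplified stand-in for Drain (which uses a fixed-depth parse tree), the core idea of templating can be shown by masking variable-looking tokens so that different log lines collapse to the same event template. The regexes below are illustrative assumptions, not Drain's actual rules:

```python
import re

def to_template(log_message: str) -> str:
    """Replace variable-looking tokens (HDFS block IDs, IP endpoints,
    bare numbers) with the placeholder <*> to expose the event template."""
    msg = re.sub(r"blk_-?\d+", "<*>", log_message)          # HDFS block IDs
    msg = re.sub(r"\d+\.\d+\.\d+\.\d+(:\d+)?", "<*>", msg)  # IPs / endpoints
    msg = re.sub(r"\b\d+\b", "<*>", msg)                    # remaining numbers
    return msg

# Two distinct raw messages map onto one template.
t1 = to_template("Received block blk_3587508140051953248 of size 67108864 from 10.251.42.84")
t2 = to_template("Received block blk_-190012 of size 512 from 10.0.0.7")
# both → "Received block <*> of size <*> from <*>"
```

Real parsers such as Drain additionally cluster messages by token count and prefix before deciding which positions are variable.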

4.2 Learning and Pattern Evaluation The log parser produces event templates, which are then organized into vectors by the clustering algorithm described in Chap. 3. A log sequence is made up of a collection of event IDs, each of which corresponds to an event template. In the HDFS dataset, a log sequence can be built based on the block ID. There are a variety of approaches to log-pattern analysis; within the scope of this study, the authors compared the effectiveness of several machine-learning models. In this step, the optimized data is fed into the classifier to learn the pattern. The algorithm used for learning and pattern evaluation is illustrated below. For classification, the work analysed the performance of five classifiers, discussed below:
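Before the individual classifiers, the block-ID grouping step described above can be sketched as follows: group parsed lines by block ID and turn each group's event sequence into a fixed-length event-count vector. The event IDs and input structure are illustrative assumptions:

```python
from collections import Counter

def to_event_vectors(parsed_logs, event_ids):
    """Group (block_id, event_id) pairs by block ID and produce one
    event-count vector per block, ordered by `event_ids`."""
    groups = {}
    for block_id, event_id in parsed_logs:
        groups.setdefault(block_id, []).append(event_id)
    vectors = {}
    for block_id, seq in groups.items():
        counts = Counter(seq)
        vectors[block_id] = [counts.get(e, 0) for e in event_ids]
    return vectors

logs = [("blk_1", "E1"), ("blk_1", "E2"), ("blk_1", "E1"), ("blk_2", "E3")]
vecs = to_event_vectors(logs, ["E1", "E2", "E3"])
# vecs["blk_1"] → [2, 1, 0]; vecs["blk_2"] → [0, 0, 1]
```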


4.3 Linear Support Vector Machine (LSVM) A support vector machine is a kind of machine-learning model that can make it easier to distinguish between two different classes, provided enough distributed data is given to the algorithm as part of its training set. As shown in Fig. 3, the primary goal of the SVM is to find the hyperplane that can distinguish between the two classes. A linearly separable 2D dataset is one such case; whether more than one such line exists is immaterial. For genuinely non-linear data, a single line will not be able to adequately separate the two categories, but for the majority of near-linear datasets the line division will still be "good enough," enabling it to correctly segment a number of instances (Fig. 4).
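The decision rule of a trained linear SVM can be sketched in a few lines: the sign of w·x + b tells which side of the separating hyperplane a point lies on. The weights below are hand-picked for illustration; finding the maximum-margin w and b (the actual SVM training step) is omitted:

```python
def linear_svm_predict(w, b, x):
    """Classify x by the sign of the decision function w·x + b."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

# Illustrative hyperplane x0 + x1 = 1 separating two toy 2D classes.
w, b = [1.0, 1.0], -1.0
# linear_svm_predict(w, b, [2.0, 2.0]) → 1 (above the line)
# linear_svm_predict(w, b, [0.0, 0.0]) → 0 (below the line)
```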

Fig. 3 Architecture of SVM classifier


Fig. 4 Architecture of KNN algorithm

4.4 Random Forest Random forest is a well-known supervised machine-learning technique used both for classification and for regression. It rests on the idea of ensemble learning, which refers to integrating several different models to solve a difficult problem and improve effectiveness. The strategy is made up of several decision trees, each constructed from a dataset drawn from the training dataset; these drawn datasets are together referred to as bootstrap samples. It considers the prediction made by each tree and provides a result based on the combination of all the different predictions; thanks to the many trees in the forest it can achieve a higher level of precision and helps avoid the problem of overfitting.
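The two ingredients named above, bootstrap sampling and majority voting, can be sketched as follows. The "trees" here are trivial thresholding rules standing in for real decision trees, whose fitting is beyond this sketch:

```python
import random

def bootstrap_sample(data, rng):
    """Draw a bootstrap sample: len(data) points sampled with replacement."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Combine per-tree predictions into the forest's answer."""
    return max(set(predictions), key=predictions.count)

rng = random.Random(0)
sample = bootstrap_sample([1, 2, 3, 4, 5], rng)

# Stand-ins for trained trees: each "tree" is just a threshold rule.
trees = [lambda x: int(x > 3), lambda x: int(x > 5), lambda x: int(x > 4)]
vote = majority_vote([t(4.5) for t in trees])  # votes [1, 0, 1] → 1
```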

4.5 K-nearest Neighbor K-nearest neighbour is an ML strategy that classifies using the k nearest neighbours. It is known as a non-parametric strategy since it assumes that similar examples lie close to one another rather than making assumptions about the data. It keeps all the training data and categorizes new data points based on how similar they are to those it already knows. The method operates as follows. Choose the number K of neighbours. Calculate the Euclidean distance to the training points. Take the K neighbours that are, in terms of Euclidean distance, the closest. Among these K neighbours, count the number of points that fall within each category. The method is complete once the new data point has been assigned to the category with the most neighbours.
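The steps above translate directly into a short stdlib-only sketch (the training data and labels are illustrative):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-NN classification: sort training points by Euclidean distance
    to `query`, take the k closest, return the majority label.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

train = [
    ([0, 0], "normal"), ([0, 1], "normal"),
    ([5, 5], "anomalous"), ([5, 6], "anomalous"),
]
# knn_classify(train, [0.2, 0.5]) → "normal"
```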


5 Result and Discussion This section presents the analysis of classification algorithms for weblog usage pattern analysis.

5.1 Dataset Description For experimental analysis, we have used the following dataset, as discussed below.

5.1.1 NASA Web Server Log

This data is kept as a server log. A web server log file is a text document that the web server makes available for operation. Log files collect a wide range of information concerning requests to the site server. Date, time, client IP address, referrer, user agent, name of the application, server name, network, etc. are some examples of captured and saved data. A NASA log file example for a site server is shown in Fig. 5.
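The NASA access log follows the Common Log Format, so the fields named above can be pulled out with a regex. This is a hedged sketch: the sample line below is a typical NASA-log entry used only for illustration:

```python
import re

# Common Log Format: host, identity, user, [timestamp], "request", status, size
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_clf(line: str):
    """Return the named fields of one access-log line, or None."""
    m = CLF.match(line)
    return m.groupdict() if m else None

entry = parse_clf(
    '199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] '
    '"GET /history/apollo/ HTTP/1.0" 200 6245'
)
# entry["host"] → "199.72.81.55", entry["status"] → "200"
```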

5.1.2 Parameters Used

Accuracy: It is one of the most important parameters for determining the classifier's performance. Precision: It is calculated as the ratio of properly identified patterns to the total number of access logs (Fig. 6). Figure 6 illustrates the ROC curve of SVM as a graph of the true positive rate versus the false positive rate: the true positive rate rises to 0.9 before remaining constant as the false positive rate rises, and the AUC of the ROC curve stays at 0.75. The ROC curve of the Random Forest is depicted in Fig. 7 as a graph between the true positive rate and false positive rate, where the true positive rate rises to 0.9 before remaining constant while the false positive rate rises, and the AUC of the Random Forest stays at 0.78.
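The two metrics defined above reduce to simple ratios over the confusion-matrix counts; the counts below are made-up numbers for illustration only:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all log entries classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of entries flagged as a pattern that really were one."""
    return tp / (tp + fp)

acc = accuracy(tp=95, tn=880, fp=15, fn=10)  # (95+880)/1000 = 0.975
prec = precision(tp=95, fp=15)               # 95/110 ≈ 0.8636
```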

Fig. 5 NASA server log data sample


Fig. 6 ROC curve of SVM for NASA server logs

Fig. 7 ROC curve of random forest for NASA server logs

The ROC curve for K-NN is depicted in Fig. 8 as a graph between the true positive rate and false positive rate, where the true positive rate rises to 0.9 before remaining constant as the false positive rate rises, and the AUC for K-NN stays at 0.75. Table 1 below presents the results of Linear SVM and Random Forest for the NASA analysis of server access logs. The highest accuracy and precision were achieved by the random forest and SVM classifiers (Fig. 9 and Table 2).

Fig. 8 ROC curve of k-NN for NASA server logs

Table 1 Performance analysis of random forest and linear SVM on NASA weblog dataset

Classifiers      Accuracy (%)    Precision (%)
Random forest    98.61           98.58
Linear SVM       98.61           98.58
KNeighbors       98.48           98.46

Fig. 9 Accuracy analysis of classifiers on NASA weblog dataset

Table 2 Performance analysis of classifiers on APACHE weblog dataset

Classifiers      Accuracy (%)    Precision (%)
Linear SVM       63.03           52.78
Random forest    97.67           94.55
KNeighbors       96.73           93.51

Fig. 10 Precision analysis of classifiers on APACHE

Figure 10 presents the f1-score analysis of classification performance on the APACHE weblog dataset, in which the highest f1-score for web usage behaviour prediction was achieved by the decision tree.

6 Conclusion The K-nearest neighbours principle states that the relevant data points regarded as the closest neighbours are those with the shortest distance from the new data value in the feature space, where K is the number of such data values used throughout system operation. According to the results presented by the different classifiers, it can be concluded that the best performance was achieved by random forest. This analysis motivates the design of an ensemble learning strategy. This paper has therefore presented the result analysis for web usage behaviour pattern analysis using ensemble clustering and embedding learning techniques. In future, performance can be improved further by using other methods.

References
1. Al-Barhamtoshy HM, Jambi KM, Abdou SM, Rashwan MA (2021) Arabic documents information retrieval for printed, handwritten, and calligraphy image. IEEE Access 9:51242–51257. https://doi.org/10.1109/ACCESS.2021.3066477
2. Almashhadani AO, Kaiiali M, Sezer S, O'Kane P (2019) A multi-classifier network-based crypto ransomware detection system: a case study of locky ransomware. IEEE Access 7:47053–47067. https://doi.org/10.1109/ACCESS.2019.2907485
3. Chang H, Feng J, Duan C (2020) HADIoT: a hierarchical anomaly detection framework for IoT. IEEE Access 8:154530–154539. https://doi.org/10.1109/ACCESS.2020.3017763
4. Jiang Y et al (2021) Analysis and optimization of fog radio access networks with hybrid caching: delay and energy efficiency. IEEE Trans Wirel Commun 20(1):69–82. https://doi.org/10.1109/TWC.2020.3023094


5. Korine R, Hendler D (2021) DAEMON: dataset/platform-agnostic explainable malware classification using multi-stage feature mining. IEEE Access 9:78382–78399. https://doi.org/10.1109/ACCESS.2021.3082173
6. Liao J, Trahay F, Gerofi B, Ishikawa Y (2016) Prefetching on storage servers through mining access patterns on blocks. IEEE Trans Parallel Distrib Syst 27(9):2698–2710. https://doi.org/10.1109/TPDS.2015.2496595
7. Shao M, Liu J, Yang Q, Simon G (2020) A learning based framework for MEC server planning with uncertain BSs demands. IEEE Access 8:198832–198844. https://doi.org/10.1109/ACCESS.2020.3034726
8. Wang C-H, Chen Y-T, Wu X (2021) A multi-tier inspection queueing system with finite capacity for differentiated border control measures. IEEE Access 9:60489–60502. https://doi.org/10.1109/ACCESS.2021.3073470
9. Xu W, Fang W, Ding Y, Zou M, Xiong N (2021) Accelerating federated learning for IoT in big data analytics with pruning, quantization and selective updating. IEEE Access 9:38457–38466. https://doi.org/10.1109/ACCESS.2021.3063291
10. Zhang J et al (2018) An adaptive synchronous parallel strategy for distributed machine learning. IEEE Access 6:19222–19230. https://doi.org/10.1109/ACCESS.2018.2820899

Multilingual Emotion Recognition from Continuous Speech Using Transfer Learning Karanjaspreet Singh, Lakshitaa Sehgal, and Naveen Aggarwal

Abstract The ability to recognize emotions from audio is currently in high demand across various fields, such as Intelligence Services, Journalism, and Security. The COVID-19 pandemic has resulted in a shift to online meetings, conferences, and interrogations, making automated emotion detection from audio crucial. It is important to recognize emotions in languages other than English as well, due to the multilingual population. This paper presents a real-time emotion detection system that utilizes both acoustic and linguistic features of audio to predict emotions. A real-time microcontroller is used to capture videos for the proposed Hubert-based model. To train the deep learning model, audio files are extracted from videos in the PU Dataset and the RAVDESS Dataset. A transfer learning approach is used to retrain the proposed model for specific languages separately. For robust training, the multilingual PU Dataset is used, which has videos of individuals from various ethnicities with different linguistic features and accents. The best test accuracies achieved for the English, Hindi, and Punjabi languages are 95.95%, 88.6364%, and 89.70%, respectively. Keywords Emotion recognition · Human audio analysis · Hubert model

1 Introduction Various methods exist to identify the emotions displayed by individuals [1]. Emotions can be recognized by analyzing their facial expressions or through analysis of spoken audio. Facial expressions have been widely studied as they are not dependent on language, to detect emotional states [2]. The accurate identification of emotions from speech signals is crucial in several fields, including affective computing, psychotherapy, and social robotics. Emotion recognition from audio poses a challenging problem in the domains of Artificial Intelligence (AI) and Natural Language Processing (NLP). Researchers K. Singh (B) · L. Sehgal · N. Aggarwal UIET, Panjab University, Chandigarh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_17


have explored Emotion Recognition from Audio in various languages, whereby the audio encompasses both linguistic and acoustic features of the language and the speaker [3]. Several research studies have been carried out in the field of English language emotion recognition, utilizing diverse datasets [4]. One such study achieved the best test accuracy of 92.08% on the RAVDESS [5], as described by Jiaxin Ye et al. [6], which also included SAVEE [7], and IEMOCAP [8] datasets. Harár et al. [9] conducted a study in the German language using the 3-class subset (angry, neutral, sad) of German Corpus (Berlin Database of Emotional Speech) [10]. The study employed deep neural networks [11] and achieved a 96.97% accuracy rate in the classification of three primary emotions (anger, neutral, and sad). Similarly, Sharma et al.’s study [12] using the CNN-LSTM model achieved the highest accuracy of 83.2% on Hindi audio by employing Emotional Speech Corpus in Hindi (ESCH) dataset. In another study by Kaur et al. [13] the highest accuracy attained for Emotion Recognition on Punjabi Language was 93.6575%, using the Punjabi Emotional Speech Database. The potential real-world applications of Emotion Recognition from speech have led to significant interest and engagement from the research community. The studies presented previously were limited in that they only used unilingual datasets, which do not reflect the reality of a world where individuals may speak different languages and possess unique linguistic features and accents. Using the same language and dataset for emotion recognition will not address the challenge of recognizing emotions in a multilingual context. The aim of this study is to develop a deep learning-driven system for detecting emotions in English, Hindi, and Punjabi audio recordings. The system should have the ability to recognize and distinguish different emotions. 
The purpose of this work is to address the difficulties associated with recognizing emotions in multilingual audio data, where variations in accent, pronunciation, and speech style can significantly impact emotion recognition systems. Most research in the realm of emotion detection from audio has focused on the English language, with little attention given to Indian languages such as Hindi and Punjabi. Therefore, we aim to broaden the scope of the research by performing emotion recognition in English, Hindi, and Punjabi to enhance diversity. The research involves the utilization of Hubert-large [14], a deep learning model specifically designed for speech processing tasks, which is based on a transformer architecture that has been fine-tuned for processing audio data, enabling it to learn highly effective representations of speech signals. Furthermore, the system includes a pre-trained language model that has undergone extensive training on a vast corpus of speech data. This capability allows it to leverage knowledge from a diverse range of acoustic environments and speaking styles. Transformer architecture-based models have demonstrated high effectiveness in various Natural Language Processing (NLP) tasks. The model used in the research is a fine-tuned version of Hubert-large [14], fine-tuned on 16 kHz sampled speech audio, which can recognize four basic emotions: happy, sad, neutral, and anger. The research employs datasets obtained from various sources, including the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [5] and the PU Dataset [15]. The model has been re-trained on these datasets, and multiple experiments have been performed, to achieve the best result on a multilingual dataset.


The research employs a methodology that involves extracting audio from video files in the PU Dataset [15] and adding noise to simulate real-world situations where background noise is often present. The study achieved a best test accuracy of 95.9538% for the English language, 88.6364% for the Hindi language, and 89.7% for the Punjabi language. To sum up, this study's primary contributions are:
• Development of a novel deep learning-based emotion recognition system for English, Hindi, and Punjabi audio to analyse the acoustic and textual features of the language.
• Evaluation of the proposed model using large-scale audio datasets, RAVDESS [5] and the PU Dataset [15], with an accuracy of 81.82% on the PU Dataset [15].
• The model is further trained on a larger combined dataset of RAVDESS [5] and the PU Dataset [15], with an improved accuracy of 95.95%.
The following structure is used in the paper: Sect. 2 contains a review of past research focused on emotion recognition from audio. In Sect. 3, we describe the methodology utilized in our proposed model. The outcomes of our experiments are presented in Sect. 4. Finally, Sect. 5 provides a summary of the paper and offers insights into future research directions.

2 Related Work Emotion analysis, also known as affective computing, has been a growing area of research for several years. With the increasing availability of digital devices that can capture audio data, there has been a growing interest in emotion detection from audio signals using deep learning techniques. In this section, we will examine some of the latest studies involving deep learning techniques for analysing emotions in audio. One of the early works in this field is by Schuller et al. [16], who proposed a system for emotion recognition from speech using a combination of spectral and prosodic features. The system achieved an accuracy of 65.4% on the Berlin Emotional Speech Database [17]. Later, several studies focused on exploring the effectiveness of different deep learning models for emotion recognition from audio. In 2013, Deng and Yu [18] proposed a system for speech emotion recognition based on a deep belief network (DBN) [19], which achieved an accuracy of 57.31% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) [8] dataset. In 2017, Zhang et al. [20] used a deep convolutional neural network (CNN) [11] and Discriminant Temporal Pyramid Matching [20] for recognizing emotions in speech. The model achieved 87.31% accuracy on the EMO-DB [10] dataset. The employment of recurrent neural networks (RNNs) [21] in speech emotion recognition has become increasingly popular in recent years. In 2018, Mirsamadi et al. [22] introduced a system for recognizing emotions in speech. The system utilized a combination of convolutional [11] and recurrent neural networks [21] and achieved an accuracy of 67.8% on the MSP-IMPROV dataset [23]. In the same year, Zhang et al. [24] proposed a system that uses a combined form of bidirectional long


short-term memory (BLSTM) [25] and attention mechanisms for speech emotion recognition, achieving accuracies of 94.50% and 72.58% on the EMO-DB [10] and CASIA [26] datasets. Apart from employing various deep learning models, several studies have investigated the efficacy of different audio features in recognizing emotions. For example, Milton et al. [27] developed a speech emotion recognition system that utilized mel-frequency cepstral coefficients (MFCCs) [28]. The accuracy of the system on the RAVDESS dataset [5] was reported to be 64.31%. Overall, the studies reviewed in this section demonstrate that deep learning techniques can effectively capture the complex patterns in audio data for emotion recognition. However, there is still a need for further research to improve the accuracy of these systems and to make them more robust to noise and other real-world factors. This paper proposes a multilingual emotion recognition system to address real-world scenarios where multiple languages are used. The system is further improved by augmenting the data with external noise to improve its practical usefulness. Unlike previous models that solely focus on analyzing the acoustic characteristics of speech to predict emotions, this model considers both acoustic and linguistic features of the speech.

3 Proposed System 3.1 System Setup The real-time microcontroller captures real-time video using a camera/webcam and sends it to a ww for processing. The software program extracts audio files from the video using the Imageio library of Python. The extracted audio files are then passed through language-recognition code which identifies the language of the audio (i.e., English, Hindi, or Punjabi). Based on the language identified, the audio files are passed through a pre-trained Hubert [29] model designed to detect emotion in that language. The Hubert-large [14] model analyses the audio and predicts the emotion of the speaker. The predicted emotion is stored in a database for later use.
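The dispatch logic of the setup above, identify the language and route the clip to the model fine-tuned for it, can be sketched as follows. Every function body here is a hypothetical stand-in: the real system uses a language-recognition module and three fine-tuned Hubert models, neither of which is reproduced here:

```python
EMOTIONS = ["happy", "sad", "neutral", "anger"]

def detect_language(audio_path: str) -> str:
    """Placeholder: the real system runs a language-recognition model.
    Here a filename prefix stands in for its output."""
    return "hindi" if "hi_" in audio_path else "english"

# Stand-ins for the three per-language fine-tuned emotion models.
MODELS = {
    "english": lambda clip: "neutral",
    "hindi": lambda clip: "happy",
    "punjabi": lambda clip: "sad",
}

def predict_emotion(audio_path: str) -> str:
    """Route the clip to the model for its detected language."""
    lang = detect_language(audio_path)
    emotion = MODELS[lang](audio_path)
    assert emotion in EMOTIONS
    return emotion

result = predict_emotion("hi_clip01.wav")  # routed to the Hindi model
```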

3.2 Model Architecture The Hubert-large-superb-er [14] model is a pre-trained neural network designed for audio-based tasks, particularly speech recognition and spoken language understanding. It is built on the Hubert [29] (Hidden-Unit BERT) architecture, which combines unsupervised and supervised learning to enable pre-training on vast amounts of unlabeled data and fine-tuning on specific tasks with limited labeled data.

Multilingual Emotion Recognition from Continuous Speech Using …


The HuBERT [29] model has a smaller but more efficient architecture than comparable models and is particularly useful for real-time speech recognition applications on devices with limited processing power. It uses 8-bit quantization and a time-domain CNN [11] to process audio samples in a sliding-window fashion, followed by a feedforward transformer network that processes the CNN [11] outputs. This model has demonstrated exceptional performance on speech recognition benchmarks, including the LibriSpeech [30] and WSJ [30] datasets, surpassing previous state-of-the-art results (Fig. 1). The hubert-large-superb-er [14] model is a larger and more powerful version of the HuBERT [29] model, specifically designed for high-performance speech recognition applications requiring higher accuracy and larger context modeling. It uses a combination of convolutional and transformer layers to learn complex patterns in speech signals and capture long-term dependencies. The model's name reflects its performance on benchmark datasets and its efficiency in real-time applications. It has been fine-tuned on a speech emotion recognition task using labeled audio data in multiple languages, and includes layers of self-attention, feed-forward networks, and convolutional neural networks [11]. The model processes the audio input in the time domain, using 10 ms frames of audio that are converted into Mel spectrograms [31]. These Mel spectrograms [31] are then passed through a series of convolutional layers, which extract high-level features from the audio input. The hubert-large-superb-er [14] model architecture consists of several layers, including:

• Acoustic features layer: This layer takes audio features as input, typically Mel-frequency cepstral coefficients (MFCCs) [28] or filterbank features, extracted from the audio signal by a signal processing pipeline.
• Transformer encoder layer: This layer utilizes the transformer architecture, which employs self-attention to enable the model to focus on distinct segments of the input sequence with varying degrees of importance. The transformer encoder layer includes multiple sub-layers, such as multi-head self-attention, layer normalization, and feedforward neural network layers.

Fig. 1 Basic flow of the multilingual emotion recognition system


K. Singh et al.

• Connectionist temporal classification (CTC) layer: This layer is used for sequence labeling tasks such as ASR. It maps the output of the transformer encoder layer to a sequence of phonemes or words. The CTC layer also incorporates a language model, which improves accuracy by considering the context of the input sequence.

• Joint network layer: This layer combines the output of the CTC layer and a separate language model to produce the final output of the ASR model. The joint network layer calculates the likelihood of a sequence of phonemes or words given the audio input signal (Table 1).

Overall, the hubert-large-superb-er [14] model has 48 transformer encoder layers and a total of 345 million parameters. It is trained on a large amount of speech data and achieves state-of-the-art performance on several ASR benchmarks.

Table 1 Internal architecture

Layer type | Output shape | Parameters
Input | (batch_size, sequence_length) | 0
Reshape | (batch_size, sequence_length, 1) | 0
Conv1D | (batch_size, sequence_length, 256) | 32,256
Layer normalization | (batch_size, sequence_length, 256) | 512
Activation | (batch_size, sequence_length, 256) | 0
Linear | (batch_size, sequence_length, 1024) | 263,168
Layer normalization | (batch_size, sequence_length, 1024) | 2048
Activation | (batch_size, sequence_length, 1024) | 0
Transformer layer | (batch_size, sequence_length, 1024) | 4,158,208
Linear | (batch_size, sequence_length, 256) | 262,400
Layer normalization | (batch_size, sequence_length, 256) | 512
Activation | (batch_size, sequence_length, 256) | 0
Conv1D | (batch_size, sequence_length, 1024) | 262,400
Layer normalization | (batch_size, sequence_length, 1024) | 2048
Activation | (batch_size, sequence_length, 1024) | 0
Linear | (batch_size, sequence_length, 1024) | 1,049,600
Layer normalization | (batch_size, sequence_length, 1024) | 2048
Activation | (batch_size, sequence_length, 1024) | 0
Linear | (batch_size, sequence_length, 768) | 787,200
Layer normalization | (batch_size, sequence_length, 768) | 1536
Activation | (batch_size, sequence_length, 768) | 0
Linear | (batch_size, sequence_length, 1) | 769
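In practice, a pre-trained checkpoint such as the one cited in [14] (superb/hubert-large-superb-er) can be queried through the Hugging Face `transformers` audio-classification pipeline. The sketch below keeps the classifier as an injected callable so the selection logic is testable; the injected-argument style is an assumption of this sketch, not part of the library API:

```python
def predict_emotion(wav_path, classifier):
    """Return the top emotion label for an audio file.
    `classifier` is any callable with the transformers
    audio-classification pipeline interface: it returns a list of
    {"label": ..., "score": ...} dicts."""
    results = classifier(wav_path)
    return max(results, key=lambda r: r["score"])["label"]

# Real usage (assumes the `transformers` package and a model download):
# from transformers import pipeline
# clf = pipeline("audio-classification", model="superb/hubert-large-superb-er")
# print(predict_emotion("sample.wav", clf))
```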



Table 2 Number of audio files in RAVDESS dataset

Emotion | Number of audio files
Anger | 192
Happy | 192
Neutral | 288
Sad | 192

4 Dataset Description

In this system, two datasets are used to train the model: the RAVDESS Dataset [5] and the PU Dataset [15]. The first is the RAVDESS Dataset [5], a collection of audio recordings of human speech. The second is the PU Dataset [15], a video dataset that contains audio as well. In order to use the audio from the PU Dataset [15] to train the model, the audio files are extracted from the videos and combined with the RAVDESS Dataset [5] to create a new audio dataset. By combining these two datasets, the model can be trained on a larger and more diverse set of audio data, which helps it better recognize and classify different types of sounds.

4.1 RAVDESS Dataset

RAVDESS [5] stands for the Ryerson Audio-Visual Database of Emotional Speech and Song. This dataset contains 7356 files with a total size of 24.8 GB. The audio clips are recorded by 24 professional actors, 12 male and 12 female, all with a North-American accent. The dataset includes audio recordings of seven distinct emotions, namely Anger, Happy, Neutral, Sad, Fear, Disgust, and Surprise. However, only four of these emotions, namely Anger, Happy, Neutral, and Sad, were utilized for training and testing the model. The number of audio files for each emotion is indicated in Table 2.
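RAVDESS encodes clip metadata in its file names as seven hyphen-separated fields, with the emotion code in the third field. A small parser like the following, a sketch rather than the authors' code, can filter the corpus down to the four emotions used here:

```python
# RAVDESS file names look like "03-01-05-01-02-01-12.wav"; the third
# field is the emotion code. Only four emotion classes are kept here.
EMOTION_CODES = {"01": "Neutral", "03": "Happy", "04": "Sad", "05": "Anger"}

def ravdess_emotion(filename):
    """Return the emotion label for a RAVDESS file name, or None if the
    clip's emotion is not one of the four classes used by the model."""
    code = filename.split("-")[2]
    return EMOTION_CODES.get(code)
```

Files whose emotion falls outside the four classes (e.g. fear, disgust, surprise) simply map to None and can be skipped when building the training set.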

4.2 PU Dataset

The PU dataset [15] is a collection of video recordings of 11 speakers, 6 male and 5 female, each recorded saying 4 different sentences in different emotions and in different ways for each emotion. This results in approximately 7–12 video clips per speaker for each emotion and language, for a total of around 1000–1200 video clips. It is a useful resource for training and evaluating models that perform emotion recognition on speech data. The dataset's large size and diversity of speakers,

Table 3 Number of audio files in PU dataset

Language | Emotion | Number of audio files
English | Anger | 83
English | Happy | 89
English | Neutral | 75
English | Sad | 99
Hindi | Anger | 99
Hindi | Happy | 91
Hindi | Neutral | 95
Hindi | Sad | 96
Punjabi | Anger | 84
Punjabi | Happy | 102
Punjabi | Neutral | 96
Punjabi | Sad | 80

emotions, and sentences should help ensure that models are robust and can generalize well to new speakers and scenarios (Table 3).

4.3 PU Dataset with Augmented Noise

The presence of background noise can significantly impact the accuracy of emotion detection models on real-world audio data. The RAVDESS [5] and PU [15] datasets are clean in this respect because they were recorded by professional actors in a noise-proof studio with no external noise. However, the real world is imperfect, and external noise is inevitable. To address this challenge, the existing audio files in the PU dataset were augmented by adding two different noise files to each audio file in random order (Table 4). This not only increased the size of the dataset but also made it more representative of real-world scenarios. The new dataset consists of approximately 160–190 audio files for each emotion, with a length of 5–7 s per clip. This approach resulted in a more comprehensive and robust emotion detection model that can better handle real-world scenarios with noisy audio data.
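One simple way to overlay noise on a clean clip, a NumPy sketch rather than the authors' exact augmentation code, is to scale the noise to a target signal-to-noise ratio before mixing:

```python
import numpy as np

def add_noise(clean, noise, snr_db=10.0):
    """Overlay `noise` on `clean` at the requested signal-to-noise
    ratio in dB (both 1-D float arrays at the same sample rate).
    The noise is tiled/cropped to the clip length before mixing."""
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise
```

Applying this with two different noise files per clip, as described above, roughly doubles the dataset while keeping the clean originals.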

5 Results Analysis

The PU Dataset [15] is first used to evaluate a real-time emotion recognition model [32] designed for analyzing human sentiments through facial images and trained using the MobileNet model [33]. Since the dataset consists of videos, image frames (rather than audio) were extracted from it using MTCNN [34] and used for both

Table 4 Number of audio files in PU dataset with augmented noise

Language | Emotion | Number of audio files
English | Anger | 153
English | Happy | 157
English | Neutral | 135
English | Sad | 159
Hindi | Anger | 196
Hindi | Happy | 171
Hindi | Neutral | 168
Hindi | Sad | 186
Punjabi | Anger | 187
Punjabi | Happy | 242
Punjabi | Neutral | 198
Punjabi | Sad | 199

training and testing. The model achieved a training accuracy of 98.17% and a testing accuracy of 97.50% with a batch size of 60 and a learning rate of 0.001 for 50 epochs. The Hubert-base [35] model was also trained and tested on the PU Dataset [15]. However, when the model was initially trained and tested on the PU English Dataset, it was found to be overfitting, as there was a significant difference between the training and testing accuracy. The same overfitting issue was observed when the model was trained and tested on the PU Hindi and PU Punjabi Datasets. To improve the model, different hyperparameters were tuned.

5.1 Model Training and Hyperparameter Tuning

PU English. This model was trained and evaluated on the English subset of the PU Dataset [15]. After training with a batch size of 4 and an 80:20 split of the dataset, a training accuracy of 94.20% was achieved. The testing accuracy was found to be 81.82%. Initially, the training accuracy increased from epoch 0 to 15, followed by a slight decline until epoch 20, after which it increased again. The accuracy fluctuated throughout the remaining epochs until it ultimately stabilized at 94.20% at epoch 70. As the test accuracy achieved is low, the English subset of the PU Dataset [15] was mixed with the RAVDESS Dataset [5] to enhance the model's accuracy (Fig. 2).

PU English + RAVDESS. After training and testing on a combined dataset of RAVDESS [5] and the English part of the PU Dataset [15], a training accuracy of 98.10% was obtained with a batch size of 8 and a dataset split of 80:20, at the end of the 50th epoch. The model's training accuracy demonstrated an initial increase from epoch 0 to 15, then a slight decrease from epoch 15 to 20, and an increase again from epoch 20 to 35. Finally, the training accuracy decreased from epoch 35 until the final


Fig. 2 PU english dataset: Training accuracy versus epochs (Batch size = 8)

epoch, resulting in a training accuracy of 98.10% at the end of the 50th epoch. The obtained testing accuracy was 95.95% (Fig. 3).

PU Hindi. Upon training and testing the model on the Hindi section of the PU Dataset [15], a training accuracy of 97.80% was achieved at the end of the 70th

Fig. 3 PU english + RAVDESS dataset: Training accuracy versus epochs (Batch Size = 8)


Fig. 4 PU Hindi dataset: Training accuracy versus epochs (Batch size = 8)

epoch with a batch size of 8, following a dataset split of 80:20. The model's training accuracy showed an initial increase from epoch 0 to 20, followed by a slight decrease from epoch 20 to 25, and then increased again. The accuracy continued to fluctuate throughout the remaining epochs until reaching a training accuracy of 97.80% at the end of the 70th epoch. The testing accuracy obtained was 88.64% (Fig. 4).

PU Punjabi. The model was trained and tested on the Punjabi segment of the PU Dataset [15]. A training accuracy of 98.20% was achieved with a batch size of 4 after 70 epochs, following an 80:20 dataset split. The testing accuracy was found to be 89.70%. The training accuracy initially increased from epoch 0 to 20, then declined slightly until epoch 25 before increasing again. The accuracy continued to fluctuate until the 70th epoch, at which point the final training accuracy was recorded as 98.20% (Fig. 5).
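The 80:20 split used for each subset can be reproduced with a simple seeded shuffle; this is a generic sketch, not the authors' exact partitioning code:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=42):
    """Shuffle and split a list of (path, label) pairs into train and
    test portions, e.g. the 80:20 split used for each PU subset.
    A fixed seed keeps the split reproducible across runs."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]
```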

5.2 Effect of Hidden Dropout Rate and Number of Attention Heads

The pre-trained model has a hidden dropout rate of 0.1 and 16 attention heads. The hidden dropout rate controls the probability of dropping out neurons in the network during training. A higher hidden dropout rate reduces overfitting but may also reduce the model's ability to learn complex features, while a lower rate may lead to overfitting. When the hidden dropout rate was increased from 0.1 to 0.4 with the number of attention heads unchanged, the training and testing accuracy decreased

208

K. Singh et al.

Fig. 5 PU Punjabi dataset: Training accuracy versus epochs (Batch size = 4)

significantly, and overfitting increased. Decreasing the hidden dropout rate to 0.0 did not have any significant effect on the accuracy of the model, and no overfitting was observed. The number of attention heads determines the complexity and degree of attentional focus in the attention mechanism. Increasing it can improve the model's ability to identify intricate relationships in the input sequence, but it also increases computational cost. When the number of attention heads was increased from 16 to 32, training accuracy increased slightly, but testing accuracy decreased significantly, indicating overfitting. Decreasing the number of attention heads and the hidden dropout rate resulted in decreased training and testing accuracy, but no overfitting was observed.
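In the Hugging Face implementation, these two hyperparameters correspond to the `hidden_dropout` and `num_attention_heads` fields of the model configuration. The fragment below is a sketch assuming the `transformers` package; it only shows where the baseline values live, not the authors' training code:

```python
# Configuration fragment only; assumes the `transformers` package.
from transformers import HubertConfig

# Baseline discussed above: hidden dropout 0.1 and 16 attention heads.
# The experiments varied hidden_dropout (0.0 to 0.4) and the head
# count around these defaults.
config = HubertConfig(hidden_dropout=0.1, num_attention_heads=16)
```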

5.3 Overview of the Model Training

The hubert-large-superb-er model [14], which is trained on 16 kHz sampled speech audio for the English language, was retrained on the RAVDESS dataset [5], tested on the English part of the PU dataset [15], and achieved an accuracy of about 81%. To further improve the accuracy, the RAVDESS dataset [5] and the English part of the PU dataset [15] were combined and the model was retrained on the new dataset, achieving a significantly better accuracy of 95.95%. After testing on the English language, the model was trained and tested for the Hindi and Punjabi languages using the PU dataset [15]. To depict a real-world scenario, noise was added to the audio files in the Hindi and Punjabi parts of the PU dataset [15], and the new audio files created


Table 5 Summary of the results obtained

Dataset | Best training accuracy (%) | Best testing accuracy (%)
PU English dataset | 94.20 | 81.82
RAVDESS + PU English dataset | 98.01 | 95.95
PU Hindi dataset | 97.80 | 88.64
PU Punjabi dataset | 98.20 | 89.70

by adding noise were mixed with the original dataset. The model was then retrained using the Hindi and Punjabi parts of the PU dataset [15] and achieved approximately 88.64% test accuracy on Hindi and 89.70% test accuracy on Punjabi (Table 5).

6 Conclusion

This paper presents a real-time multilingual emotion detection system that analyzes both the acoustic and textual aspects of three languages, i.e., English, Hindi, and Punjabi, and predicts emotion from audio. The PU Dataset [15] comprises videos, while the RAVDESS Dataset [5] consists of audio files, so the audio files were extracted from the videos in the PU Dataset [15] to train the model on audio input. To train the model for the English language, a combination of the RAVDESS Dataset [5] and the English portion of the PU Dataset [15] was utilized. The PU Dataset [15] comprises recordings by actors from diverse cultures with different linguistic features and languages, helping the model perform well for people from various cultures who speak different languages with distinct accents. In addition to recorded videos and audio files, the model can recognize emotions in real time from live recordings. As the PU Dataset [15] contains videos, the system could also utilize a person's facial expressions to recognize emotions; the model can thus be extended into a multimodal emotion detection model that analyzes both the video (facial expressions) and the audio (acoustic and textual features) of a person. Such a system could be beneficially implemented in settings such as interrogation offices, schools, and mental healthcare facilities to assist doctors in efficiently monitoring patients.


References

1. Koolagudi SG, Rao KS. Emotion recognition from speech: a review. Int J Speech Technol 15
2. Lee JR, Wang L, Wong A (2021) EmotionNet nano: an efficient deep convolutional neural network design for real-time facial expression recognition. Front Artif Intell 13(3):609673
3. Macary M, Tahon M, Estève Y, Rousseau A (2021) On the use of self-supervised pre-trained acoustic and linguistic features for continuous speech emotion recognition. In: 2021 IEEE spoken language technology workshop (SLT), Shenzhen, China
4. Bhattacharya S, Borah S, Mishra BK et al. Emotion detection from multilingual audio using deep analysis. Multimedia Tools Appl 81
5. https://www.kaggle.com/datasets/uwrfkaggler/ravdess-emotional-speech-audio
6. Ye J, Wen X, Wei Y, Xu Y, Liu K, Shan H (2022) Temporal modeling matters: a novel temporal emotional modeling approach for speech emotion recognition
7. https://www.kaggle.com/datasets/ejlok1/surrey-audiovisual-expressed-emotion-savee
8. https://www.kaggle.com/datasets/samuelsamsudinng/iemocap-emotion-speech-database
9. Harár P, Burget R, Dutta MK (2017) Speech emotion recognition with deep learning. In: 2017 4th international conference on signal processing and integrated networks (SPIN), Noida, India
10. Burkhardt F, Paeschke A, Rolfes M, Sendlmeier W, Weiss B (2005) A database of German emotional speech. In: Proceedings of the 2005 IEEE international conference on multimedia and expo. IEEE
11. Aloysius N, Geetha M (2017) A review on deep convolutional neural networks. In: 2017 international conference on communication and signal processing (ICCSP), Chennai, India
12. Sharma A, Kumar A, Kumar V (2021) Emotion recognition in Hindi speech using CNN-LSTM model. Int J Speech Technol
13. Kaur K, Singh P (2021) Punjabi emotional speech database: design, recording and verification. Int J Intell Syst Appl Eng 9(4)
14. https://huggingface.co/superb/hubert-large-superb-er
15. https://github.com/anshal570/PU-DATASET
16. Zhao Z et al. Exploring deep spectrum representations via attention-based recurrent and convolutional neural networks for speech emotion recognition. In: IEEE Access
17. https://www.kaggle.com/datasets/piyushagni5/berlin-database-of-emotional-speech-emodb
18. Deng L, Yu D (2013) Deep learning: methods and applications. Foundations and Trends in Signal Processing
19. Hua Y, Guo J, Zhao H (2015) Deep belief networks and deep learning. In: Proceedings of 2015 international conference on intelligent computing and internet of things, Harbin
20. Zhang S, Zhang S, Huang T, Gao W (2018) Speech emotion recognition using deep convolutional neural network and discriminant temporal pyramid matching. In: IEEE transactions on multimedia, vol 20, no 6
21. Grossberg S (2013) Recurrent neural networks. Scholarpedia 8(2):1888
22. Mirsamadi S, Barsoum E, Zhang C, Sankaranarayanan AC (2018) Automatic speech emotion recognition using recurrent neural networks with local attention. In: 2018 IEEE international conference on acoustics, speech, and signal processing (ICASSP)
23. Busso C, Parthasarathy S, Burmania A, AbdelWahab M, Sadoughi N, Provost EM (2017) MSP-IMPROV: an acted corpus of dyadic interactions to study emotion perception. In: IEEE transactions on affective computing, vol 8, no 1
24. Zhang H, Huang H, Han H. Attention-based convolution skip bidirectional long short-term memory network for speech emotion recognition. In: IEEE Access, vol 9
25. Ray A, Rajeswar S, Chaudhury S (2015) Text recognition using deep BLSTM networks. In: 2015 eighth international conference on advances in pattern recognition (ICAPR), Kolkata, India
26. Song C, Huang Y, Wang W, Wang L (2023) CASIA-E: a large comprehensive dataset for gait recognition. In: IEEE transactions on pattern analysis and machine intelligence, vol 45, no 3, pp 2801–2815


27. Ancilin J, Milton A (2021) Improved speech emotion recognition with Mel frequency magnitude coefficient. Appl Acoust 179
28. Hossan MA, Memon S, Gregory MA (2010) A novel approach for MFCC feature extraction. In: 2010 4th international conference on signal processing and communication systems, Gold Coast, QLD, Australia
29. Hsu W-N, Bolte B, Tsai Y-HH, Lakhotia K, Salakhutdinov R, Mohamed A. HuBERT: self-supervised speech representation learning by masked prediction of hidden units. In: IEEE/ACM transactions on audio, speech, and language processing
30. Kriman S et al (2020) QuartzNet: deep automatic speech recognition with 1D time-channel separable convolutions. In: ICASSP 2020 IEEE international conference on acoustics, speech, and signal processing (ICASSP), Barcelona, Spain
31. Shen J, Pang R, Weiss RJ, Schuster M, Jaitly N, Yang Z, Wu Y (2018) Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions. In: 2018 IEEE international conference on acoustics, speech, and signal processing (ICASSP). IEEE
32. Aggarwal A, Sehgal L, Aggarwal N (2022) SentNet: a system to recognise human sentiments in real time. In: 7th international conference on computing in engineering & technology (ICCET 2022), online conference
33. Zhang N, Luo J, Gao W (2020) Research on face detection technology based on MTCNN. In: 2020 international conference on computer network, electronic and automation (ICCNEA), Xi'an, China
34. Sinha D, El-Sharkawy M. Thin MobileNet: an enhanced MobileNet architecture. In: 2019 IEEE 10th annual ubiquitous computing, electronics & mobile communication conference (UEMCON), New York, NY, USA
35. https://huggingface.co/superb/hubert-base-superb-er

Violence Detection Using DenseNet and LSTM Prashansa Ranjan, Ayushi Gupta, Nandini Jain, Tarushi Goyal, and Krishna Kant Singh

Abstract Detecting suspicious activities with accuracy can curb the increasing and varied crimes in public places manifold. Crimes in communal areas are a global problem. Video surveillance has been in use for more than a decade, but the innovative ways in which crimes are committed every passing day escape the human eye. While the use of cameras for post-crime action is essential, there is a need for real-time surveillance to act as an advance indicator to prevent or eliminate any violence before it takes place. Suspicious activities that take place with harmful intent in sensitive and public areas, like transportation stations, railroad stations, air terminals, banks, shopping centers, schools and universities, parking garages, and streets, can be scrutinized using video surveillance to alert the nearby authorities so that preventive measures can be taken in time. However novel these dubious actions may be, they follow specific common patterns, including psychological oppression, robbery, mishaps, unlawful stopping, defacement, battling, chain snatching, etc. In this paper, an automated ensemble deep learning model is proposed for the recognition of possible suspicious activities. The deep learning models ensembled are DenseNet and LSTM. The model aims to train a feature extraction model for human activity recognition (HAR) of suspicious actions to achieve a high recognition rate. The research is carried out on two popular datasets for violence detection. The classification result is binary: violent or non-violent. The paper also compares the results obtained with existing methods.

Keywords Activity recognition · Convolutional neural networks (CNN) · Long short-term memory (LSTM) · Surveillance · Deep learning

P. Ranjan (B) · A. Gupta · N. Jain · T. Goyal · K. K. Singh Amity School of Engineering and Technology, Amity University, Noida, Uttar Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_18



P. Ranjan et al.

1 Introduction

Today, society is moving towards the idea of training machines to detect human activity. This can be achieved using computer vision and deep neural network architectures. Activity recognition is useful in many fields, like surveillance, robotics, and human–computer interaction. Video action recognition has advanced immensely with the availability of large datasets and new and improved deep neural network architectures. Most earlier works concentrated on a certain action recognition subtask, like spatio-temporal localization of activity or recognition of egocentric action. This paper focuses on one of the major subsets of activity recognition, i.e., violence detection in public places. This system can be applied with the help of a CCTV surveillance system to monitor any suspicious activities in crowded areas. With CCTV cameras available in every nook and corner of the world, analysing the captured videos for any criminal activity becomes very tiring if done manually. Many experiments have been conducted, and their results have proposed numerous methods that can identify violence in surveillance footage automatically and without human involvement to address this problem. The general action recognition task includes violence detection, which centres around identifying violent human actions like fighting, robbery, and riots. At the beginning, research in this field mostly concentrated on identifying the differentiating factors that would effectively capture the specified action present in the video. As the deep learning field developed, many end-to-end trainable models came into existence [1–3]. These models required a negligible amount of pre-processing, which resulted in surpassing the results of the previous works [4–6]. The most recent work done in this respect is a network based on a two-stream CNN-LSTM.
The network required fewer parameters to produce discriminatory spatio-temporal features [7]. Hockey benchmark datasets, which are widely used as standards for comparison, are utilised to verify the efficacy of those approaches. When using these deep learning models for real-world practical applications, accuracy and processing efficiency must both be taken into account. When conducting general action recognition, the environment or background information may offer discriminatory signals. For instance, seeing green grass in the background could be a good indicator that someone is playing cricket. Many aggressive behaviours, however, are characterised by body position, movements, and interactions rather than appearance-based traits like colour, texture, and background information. Background-suppressed frames and frame differences were employed as inputs in the proposed network, since both help provide discriminatory features that can detect hostility. This paper proposes an architecture that combines DenseNet with LSTM (Fig. 1). Straightforward and quick input pre-processing methods are used to capture motion between frames and highlight moving objects while suppressing backgrounds that aren't moving. The advantage of using DenseNet is that direct propagation of the error signal to earlier layers is simple. This is a type of implicit


Fig. 1 A schematic overview of the proposed network

deep supervision, because the final classification layer can exercise close supervision over earlier layers. Long short-term memory has the advantage of being able to learn directly from raw time-series data, eliminating the need for manual feature engineering of the input and freeing up subject-matter expertise. Once the model has learned the internal structure of the time series, it should behave similarly to models fit on a version of the dataset with engineered features. The model is discussed further below. Without combining the time steps, LSTM can detect relationships between data along the time dimension.

2 Related Work

Given the sharp increase in crime rates, violence detection and suspicious activity recognition have attracted the attention of numerous researchers. This has created a pressing need for more precise detection. As a result, many methods for detecting violence have been put forward recently. The characteristics of the video, such as appearance, acceleration, and flow duration, serve as the input parameters for these methods [2]. All of these algorithms follow the same set of steps, including segmenting the entire video into frames and segments, picking out an object from the frame, and feature extraction and detection [8]. The dataset being utilised, together with the object identification, feature extraction, and classification approaches, all affect accuracy. The ResNet 3D network is one of the methods used to recognise human action. Using 3D convolution layers, it can regulate the spatial and temporal dimensions [9]. Another method is the Video Swin Transformer [10], a model that processes image feature maps using the recently released Transformer attention modules. It specifically applies the efficient shifted-window Transformers introduced for image processing [11] along the temporal axis, achieving a good efficiency-effectiveness trade-off. SlowFast is a two-pathway model that captures semantic data from images or sparse frames at slow frame rates in one pathway, while the other pathway focuses on capturing rapidly changing motion. The objective of violence detection methods is to classify videos as violent or non-violent using a binary system. To capture long-term movement information from


video clips, a network described in [5] employs a spatio-temporal encoder based on a conventional convolutional backbone, coupled with the Bidirectional Convolutional LSTM (BiConvLSTM) architecture for feature extraction.

3 Proposed Method

3.1 Background Suppression

Background suppression removes the largely static background from video frames while retaining the moving objects in the scene. It is widely used in object detection applications and is normally the first stage of such systems. The technique computes the foreground by subtracting a background model, containing the static part of the scene (or, more generally, everything that can be considered background given the characteristics of the observed frames), from the current frame:

|I_t − B_t| > T (1)

where T is a predefined threshold. The background update is:

B_{t+1} = α I_t + (1 − α) B_t (2)

where α is kept negligibly small to prevent artificial tails from forming behind moving objects. In the first few frames, the moving objects are identified and saved as foreground pixels. The background is then identified as the pixels that do not belong to the foreground. The movement of foreground objects fills in the remaining unidentified background pixels. Once the complete background has been identified, it is subtracted from subsequent frames to find the moving objects and the foreground. The model creates binary feature images by thresholding and morphologically closing each foreground image. Background subtraction can be accomplished in a variety of ways; each technique has distinct advantages and disadvantages in terms of performance and computational requirements, and the methods are implemented at various levels of complexity. In this paper we discuss the frame difference algorithm for background suppression.
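The running-average background model and the thresholded foreground test can be sketched directly from Eqs. (1) and (2); this NumPy sketch is illustrative, not the authors' implementation:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """One step of the running-average background model:
    B_{t+1} = alpha * I_t + (1 - alpha) * B_t (Eq. 2)."""
    return alpha * frame + (1.0 - alpha) * background

def foreground_mask(frame, background, threshold=25.0):
    """Pixels where |I_t - B_t| > T are marked foreground (Eq. 1)."""
    return np.abs(frame - background) > threshold
```

The `alpha` and `threshold` values are placeholders; in practice they are tuned to the scene and frame rate.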

Violence Detection Using DenseNet and LSTM


3.2 Frame Difference Algorithm

This algorithm compares each frame with the previous frame; as the frame changes, the updates are allowed through and the moving objects are detected:

|f_i − f_{i−1}| > T

(3)

where T is the threshold value, i is the frame number, and f is the frame. Because the comparison is always against the previous frame, slow scene changes are taken into account as updates. The method tracks how the scene changes and moves over time and is therefore able to suppress background motion. It is the only algorithm considered here that correctly handles the motion of a complex background, although it has trouble correctly recognizing foreground objects, and it is the second fastest. Since every frame is refreshed and motion within the frame is checked, the algorithm can ignore background movement and pick out foreground elements. The algorithm was implemented, and the resulting frame difference is shown in Fig. 2a and b.

Fig. 2 a Original image. b Frame difference algorithm
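Equation (3) amounts to one thresholded absolute difference per pixel of consecutive frames; a minimal sketch, with an illustrative threshold value:

```python
import numpy as np

def frame_difference(curr, prev, threshold=25):
    """Motion mask via Eq. (3): a pixel is 'moving' when
    |f_i - f_{i-1}| > T. The threshold of 25 is an illustrative value."""
    return np.abs(curr.astype(float) - prev.astype(float)) > threshold

prev = np.zeros((3, 3))
curr = prev.copy()
curr[1, 1] = 100.0            # a moving object brightens one pixel
motion = frame_difference(curr, prev)
```

Running this over a whole clip (each frame against its predecessor) yields the per-frame motion masks whose thresholded, morphologically closed versions feed the rest of the pipeline.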


3.3 DenseNet

DenseNet is used to encode the optical information of each image (frame); it serves as an image feature encoder that produces a spatial representation of the image. In the DenseNet architecture each layer is connected to every subsequent layer, resulting in a densely connected network. DenseNet encourages feature reuse, helps lower the number of parameters, and achieves a high classification rate. Equation (4) describes how DenseNet operates: the input of the lth layer is the output of all layers before it. By repeatedly concatenating the feature maps of the preceding layers into the input of the subsequent layer, dense connectivity supports the flow of information across levels; this channel-wise concatenation is depicted as curves in Fig. 3.

Densenet(I) = D_l(I, f_1, f_2, …, f_{l−1})

(4)

The function D is formed from convolution, batch normalisation and ReLU operations, where I is the image and f_1, f_2, …, f_{l−1} are the features from the 1st, 2nd, …, (l−1)th layers, respectively. The model used as the image model in these studies consists of 161 layers. DenseNet receives input of the image's dimensions, 224 × 224 × 3, and in this study the output of the final DenseNet layer has a dimension of 2208. DenseNet is composed of transition layers and dense blocks; a dense block is characterised by its growth rate and its bottleneck layers. Because DenseNet uses channel-wise concatenation to connect the feature maps of different layers, it can accumulate a large number of network parameters, which reduces computational efficiency. Figure 4 depicts the bottleneck layer; as mentioned above, this layer reduces the number of input feature maps and improves computational efficiency. The transition layer reduces the number of feature maps as well as their width and height, as seen in Fig. 5. It consists of BN -> ReLU -> Conv(1 × 1) -> Avg pool(2 × 2) and is attached after the dense block. Here the compression factor, a hyper-parameter between 0 and 1, determines how much the feature map is shrunk; if the value is 1, the number of feature maps remains constant. Additionally, DenseNet applies its composite function layer in the order BN -> ReLU -> Conv. This layer reduced the feature size from 2208

Fig. 3 Dense connectivity
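The channel-wise concatenation behind Eq. (4) can be sketched as follows; a random linear map plus ReLU stands in for the BN-ReLU-Conv composite function, so only the channel bookkeeping mirrors DenseNet:

```python
import numpy as np

def dense_block(x, n_layers=4, growth_rate=12, rng=None):
    """Sketch of dense connectivity (Eq. 4): layer l receives the channel-wise
    concatenation of the input and all earlier feature maps. The random linear
    map followed by ReLU is an illustrative stand-in for BN-ReLU-Conv."""
    if rng is None:
        rng = np.random.default_rng(0)
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)    # [I, f_1, ..., f_{l-1}]
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.1
        features.append(np.maximum(inp @ w, 0.0))  # new f_l with k channels
    return np.concatenate(features, axis=-1)

out = dense_block(np.ones((8, 8, 16)))  # 16 input + 4 layers * 12 = 64 channels
```

Each layer adds exactly growth-rate channels, which is why the channel count grows linearly and why bottleneck and transition layers are needed to keep the computation in check.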


Fig. 4 Schematic representation of Bottleneck layer

to 128. Equation (5) gives the output of the layer:

x_{−1} = W_d · Densenet(I)

(5)

where W_d is a kernel matrix of size 128 × 2208.

ReLU(x) = max(0, x)

(6)

x_0 = repeat(x_{−1}, max_cap_len)

(7)

Equation (6) depicts the activation layer. Equation (7) depicts the repeat layer, where the maximum caption length is denoted by max_cap_len.


Fig. 5 Schematic representation of Transition layer

3.4 LSTM

The Long Short-Term Memory (LSTM) network can learn from long sequences of data and remember information for up to 200–400 steps. The structure of an LSTM block is shown in Fig. 6. The top line depicts the cell state, which is at the core of the LSTM: information is added to and removed from the cell state as it moves, much like a conveyor belt, through the gates and on to the next step, which also allows earlier data to directly affect later results. The model supports multiple parallel sequences of data, which it receives from DenseNet's output; it learns to extract features from segments of observations and to map internal features to the various activity types. LSTM has an explicit mechanism for retaining information over the long term, in the form of an input gate and a forget gate that control how new information is appended by comparing the internal memory with the incoming input. A key advantage of using LSTMs for sequence classification is that they can be trained directly on the raw time series data, eliminating the need for manually engineered input features and freeing up domain knowledge. Once trained on an internal representation of the time series data, the model should perform comparably to models fitted on a version of the dataset with engineered features.

f_t = σ_g(W_f · x_t + U_f · h_{t−1} + b_f)

(8)

i_t = σ_g(W_i · x_t + U_i · h_{t−1} + b_i)

(9)

o_t = σ_g(W_o · x_t + U_o · h_{t−1} + b_o)

(10)

Violence Detection Using DenseNet and LSTM

221

Fig. 6 Diagrammatic representation of an LSTM block [12]

c̃_t = σ_c(W_c · x_t + U_c · h_{t−1} + b_c)

(11)

c_t = f_t · c_{t−1} + i_t · c̃_t

(12)

h_t = o_t · σ_c(c_t)

(13)

Here, σ_g denotes the sigmoid activation function, σ_c denotes the tanh activation function, and the dot operator denotes element-wise multiplication. c_t is the cell state, c̃_t the candidate cell state, i_t the input gate, o_t the output gate, f_t the forget gate, and h_t the hidden state (Fig. 6).
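Equations (8)–(13) together describe one LSTM time step; a self-contained NumPy sketch follows (the per-gate dictionary layout of the weights is an illustrative choice, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step following Eqs. (8)-(13). The dictionaries W, U, b
    hold the parameters per gate under keys 'f', 'i', 'o', 'c'."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])    # forget gate, Eq. (8)
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])    # input gate, Eq. (9)
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])    # output gate, Eq. (10)
    c_hat = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate, Eq. (11)
    c_t = f_t * c_prev + i_t * c_hat                          # cell state, Eq. (12)
    h_t = o_t * np.tanh(c_t)                                  # hidden state, Eq. (13)
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = {k: rng.standard_normal((n_hid, n_in)) * 0.1 for k in 'fioc'}
U = {k: rng.standard_normal((n_hid, n_hid)) * 0.1 for k in 'fioc'}
b = {k: np.zeros(n_hid) for k in 'fioc'}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):            # run a short random input sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
```

Note how Eq. (12) is the "conveyor belt": the forget gate scales the old cell state while the input gate admits the new candidate, and Eq. (13) bounds the hidden state inside (−1, 1).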

3.5 DenseLSTM

This paper proposes a hybrid model, referred to as DenseLSTM, that fuses DenseNet and LSTM [13]. The first half of the model is built from the DenseNet structure. To capture the sequential information in the features, the feature map obtained from DenseNet is used as input data for the LSTM, and the hybrid framework classifies using the sigmoid function. The input data consist of frequency, channel and time, specifically image data obtained by applying background suppression to the raw video. The initial Conv layer produces an output feature map with twice as many channels as the growth rate (Fig. 7). The dimensions of the feature map are maintained by 1-pixel zero-padding in the Conv(3 × 3) of each dense block, and every dense block has the same layer count. A transition layer follows each dense block; transition layers use average pooling and Conv(1 × 1) to shrink the feature map. The feature map is then output as a 1-D vector using global average pooling rather


Fig. 7 Proposed architecture

than a fully connected layer, which would have increased the parameter count too much. It is then reshaped into an LSTM-compatible input format and fed into the network, and the sigmoid function classifies the features produced by the LSTM. Figure 7 shows the network architecture of DenseLSTM: the initial convolution layer is a 7 × 7 conv with stride 2, the LSTM stage is followed by a global pooling linear layer, and the classification layer uses a sigmoid activation.
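The overall data flow can be sketched end to end; here a fixed random projection stands in for DenseNet with global average pooling and a simplified tanh recurrence stands in for the LSTM, so this illustrates only the pipeline shape, not trained behaviour:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dense_lstm_score(frames, feat_dim=16, hid=8, seed=0):
    """Data-flow sketch of DenseLSTM: per-frame features (random projection
    standing in for DenseNet + global average pooling) feed a recurrence
    (standing in for the LSTM), and a sigmoid scores the final hidden state
    as a violence probability. All weights are untrained placeholders."""
    rng = np.random.default_rng(seed)
    w_feat = rng.standard_normal((frames.shape[2], feat_dim)) * 0.1
    w_in = rng.standard_normal((feat_dim, hid)) * 0.1
    w_rec = rng.standard_normal((hid, hid)) * 0.1
    w_out = rng.standard_normal(hid) * 0.1
    h = np.zeros(hid)
    for frame in frames:                   # frames: (time, height, width)
        pooled = frame.mean(axis=0)        # crude global pooling over rows
        f = pooled @ w_feat                # stand-in per-frame feature vector
        h = np.tanh(f @ w_in + h @ w_rec)  # stand-in recurrent update
    return sigmoid(h @ w_out)              # probability-like score in (0, 1)

score = dense_lstm_score(np.random.default_rng(1).random((10, 32, 32)))
```

In the actual model, thresholding the sigmoid output (e.g. at 0.5) yields the binary violent/non-violent decision.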

4 Result and Discussion

The Hockey dataset contains video samples from real-life hockey matches: 1000 videos, of which 500 show fighting scenes and 500 show non-fighting scenes. The dataset was obtained from the hockey database at Open-Source Sports. A detailed comparison of the models proposed to date with our proposed model is given in Table 1.

Table 1 Comparisons of classification results on standard benchmark datasets

Method | Accuracy rates (%)
ViF [14] | 82.90
Hough forest + 2D CNN [15] | 94.6
Improved fisher vector [16] | 93.7
Three streams + LSTM [17] | 93.9
ConvLSTM [6] | 97.1
BiConvLSTM [5] | 98.1
Efficient 3D CNN [4] | 98.3
FightNet [18] | 97.0
Proposed (DenseLSTM) | 98.5

5 Conclusion

This paper provides a novel hybrid model, combining DenseNet and LSTM, for predicting suspicious activities in crowded areas; on the Hockey benchmark it achieves 98.5% accuracy (Table 1). The DenseNet component enhances information flow through the network and improves computational efficiency, extending the CNN approach pursued in this paper. The main motivation for using a hybrid technique to recognise activities is the realisation that human activity is a series of acts carrying temporal information. Most of the material in the Movies and Hockey datasets used here is old and not specific enough, so the system must be tested extensively with additional recorded data to ensure its reliability. Nevertheless, the experimental findings and comparisons with previous research demonstrate the reliability and effectiveness of the proposed approach. It is further intended to create a system that recognises human activity in order to assist people in social contexts. An intriguing next step in this research is to develop a violence detection system that can identify the specific regions within frames, or objects such as individuals, that contribute to categorizing a video as violent, and to assess the likelihood of violence on a per-frame basis. This would enhance the interpretability of the violence detection system, making it highly advantageous for real-world monitoring purposes.

References

1. Chen HW, Chen M-Y, Gao C, Bharucha A, Hauptmann A (2008) Recognition of aggressive human behavior using binary local motion descriptors. In: Conference proceedings of the annual international conference of the IEEE engineering in medicine and biology society
2. Deb T, Arman A, Froze A (2018) Machine cognition of violence in videos using novel outlier-resistant VLAD. In: 2018 17th IEEE international conference on machine learning and applications (ICMLA), pp 989–994


3. Senst T, Eiselein V, Kuhn A, Sikora T (2017) Crowd violence detection using global motion-compensated Lagrangian features and scale-sensitive video-level representation. IEEE Trans Inf Forensics Secur 12:2945–2956
4. Li J, Jiang X, Sun T, Xu K (2019) Efficient violence detection using 3D convolutional neural networks. In: 2019 16th IEEE international conference on advanced video and signal based surveillance (AVSS). IEEE
5. Hanson A, Pnvr K, Krishnagopal S, Davis L (2018) Bidirectional convolutional LSTM for the detection of violence in videos. In: Proceedings of the European conference on computer vision (ECCV)
6. Sudhakaran S, Lanz O (2017) Learning to detect violent videos using convolutional long short-term memory. In: 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS). IEEE
7. Islam Z, Rukonuzzaman M, Ahmed R, Kabir MH, Farazi M (2021) Efficient two-stream network for violence detection using separable convolutional LSTM. In: 2021 International joint conference on neural networks (IJCNN). IEEE
8. Chaudhary S, Khan MA, Bhatnagar C (2018) Multiple anomalous activity detection in videos. Proced Comput Sci 125:336–345
9. Tran D, Wang H, Torresani L, Ray J, LeCun Y, Paluri M (2018) A closer look at spatiotemporal convolutions for action recognition. In: Proceedings of the 2018 IEEE/CVF conference on computer vision and pattern recognition, Salt Lake City, UT, USA, 18–23 June 2018
10. Liu Z, Ning J, Cao Y, Wei Y, Zhang Z, Lin S, Hu H (2022) Video swin transformer. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, New Orleans, LA, USA, 18–24 June 2022
11. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF international conference on computer vision, Montreal, QC, Canada, 10–17 October 2021, pp 10012–10022
12. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
13. Ryu S, Joe I (2021) A hybrid DenseNet-LSTM model for epileptic seizure prediction. Appl Sci 11:7661. https://doi.org/10.3390/app11167661
14. Hassner T, Itcher Y, Kliper-Gross O (2012) Violent flows: real-time detection of violent crowd behavior. In: 2012 IEEE computer society conference on computer vision and pattern recognition workshops. IEEE, pp 1–6
15. Serrano I, Deniz O, Espinosa-Aranda JL, Bueno G (2018) Fight recognition in video using Hough forests and 2D convolutional neural network. IEEE Trans Image Process
16. Bilinski P, Bremond F (2016) Human violence recognition and detection in surveillance videos. In: 2016 13th IEEE international conference on advanced video and signal based surveillance (AVSS). IEEE
17. Dong Z, Qin J, Wang Y (2016) Multi-stream deep networks for person to person violence detection in videos. In: Chinese conference on pattern recognition. Springer
18. Zhou P, Ding Q, Luo H, Hou X (2017) Violent interaction detection in video based on deep learning. J Phys Conf Ser 844:012044
19. Feichtenhofer C, Fan H, Malik J, He K (2019) SlowFast networks for video recognition. In: Proceedings of the 2019 IEEE/CVF international conference on computer vision (ICCV), Seoul, Korea, 27 October–2 November 2019
20. Ramzan M, Abid A, Khan H, Awan S, Ismail A, Ilyas M, Mahmood A (2019) A review on state-of-the-art violence detection techniques. IEEE Access 7:107560–107575. https://doi.org/10.1109/ACCESS.2019.2932114
21. He K, Zhang X, Ren S, Sun J (2015) Deep residual learning for image recognition. arXiv:1512.03385
22. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2015) Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567
23. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556

Financial Technology and Competitive Landscape in the Banking Industry of Bangladesh: An Exploratory Focus Nargis Sultana, Kazi Saifur Rahman, Reshma Pervin Lima, and Shakil Ahmad

Abstract The aim of this study is to evaluate the potential of FinTech (financial technology) from the perspective of the banking industry. Several hypotheses are considered to explore the functioning of certain factors that play a vital role in the bank-FinTech alliance. A total of 200 samples were collected through a survey in Bangladesh. SPSS Statistics 25 was applied to analyze the data from a range of dimensions; Cronbach's alpha was used to test reliability, and a PLUM ordinal regression model was applied to show the interplay between the dependent and independent variables. Overall, the alliance proves worthwhile from the perspective of banks, FinTech companies and customers alike.

Keywords FinTech · Banks · Customers · Alliance · Profitability · Digitalization · Cost · Competitive

1 Introduction

Financial technology has become a world trend that was started by innovators, imitated by scholars, and is now drawing the interest of financial technology regulators in the financial and banking sectors, past, present and future [1]. Nowadays, the FinTech sector leverages information technology, primarily centered on smartphones

N. Sultana Department of Finance and Banking, Comilla University, Cumilla, Bangladesh
K. S. Rahman Department of Finance and Banking, Jatiya Kabi Kazi Nazrul Islam University, Mymensingh, Bangladesh
R. P. Lima Department of Accounting, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
S. Ahmad (B) University School of Business, Chandigarh University, Mohali, Punjab, India e-mail: [email protected]; [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_19


and cell phones, to increase the effectiveness of the economic and financial system. In the meantime, established financial services, for example lending or banking transactions, are being reshaped by Internet-related technologies such as cloud applications and the mobile Internet [2]. Financial technology competes with the traditional channels available for delivering financial services. It is a thriving industry that uses technology to improve activities in finance: the mass use of smartphones for mobile banking, investment, borrowing and cryptocurrency are examples that bring convenient financial services to the general public. The two sides increasingly work together, even though the affiliation remains intricate. The association is, however, vital for transforming the banking industry from a legacy system to a digital one [3]. Customers have begun to show that they want their banks to provide the latest digital functionalities, and the exceptional circumstances of Covid-19 have further accelerated the digitalization of the banking industry. Many banks clearly realize that novel tasks are not so much threats as opportunities, and both traditional banks and FinTech providers are dedicated to delivering the next-generation banking experience to the consumer. Researchers investigate the potential of FinTech to provide modern facilities to the banking industry in order to reach out to and strengthen the customer base [4]. Despite the technology's potential, FinTech companies face some serious realities. They struggle primarily to understand user behavior and product-market fit and to articulate a compelling argument for their service-based products. FinTech companies rely on venture capital funding to scale up, and these investors seek exclusive, discriminating offers that provide a compelling case for growth.
Regulators are not the only ones who often do not understand how FinTech products work; firms also have to deal with widespread misconceptions about the integrity and reliability of the data that forms the foundation of their products. They must create creative intervention techniques to encourage desired behaviors and build interpersonal and behavioral trust with clients and partners [5]. At least in the foreseeable future, the opportunity of FinTech greatly surpasses its risks. As FinTech usage increases and more transparent regulatory regimes allow them to flourish, FinTech technologies will become ever more pervasive in everyday interactions. The objectives of a clear and realistically grounded conversation between enterprises, FinTech innovators and regulators should be to discuss the growth of FinTech trends, examine the shifts in supply and benefit chains caused by technology choices, and evaluate the influence of differing national regulations on outside funding and performance indicators across markets [6]. A particularly important role is played by analysts working at the intersection of technology, talent and financial services. Compliance toolkits need to mature before they can help FinTech firms comply with intricate, cross-jurisdictional regulatory obligations. There are numerous opportunities for innovation managers to engage regulators in conversation about the implications of rapidly emerging technology for market truth, stability and sustainability. Meaningful customization of tools and procedures is only possible through cooperative and open approaches that build cumulatively on financial technology intelligence [7, 8].


2 Literature Review

The following categories of FinTech breakthroughs were identified by researchers as potentially changing the future of the financial and banking industries: 1. Cryptography is a modern line of development that could lead to an upsurge in banking and related technologies. Blockchains illustrate a shared database in which data is replicated among all network nodes; this distributed ledger technology appears to have the ability to support future cryptocurrency payment services. 2. Other notable emerging technologies with major impact include artificial intelligence and machine learning [9, 10]. In recent years, the digitization of the banking industry has mirrored the growth of financial technology, which stands for the fusion of finance and ICT. According to the researchers, the mechanism of a FinTech is indicated by the way a business model is created, altered or improved; FinTech also offers a vehicle for disruption or partnership, and the use of ICT in finance is another true indicator of FinTech. Most people agree that FinTech plays a weighty role for customers and businesses for a variety of reasons, including fast execution, and it was evident as one of the influential factors of 2021 involving customers, banks and FinTech companies on a large scale [11, 12]. In terms of improving processes, adaptation and delivery, FinTech enables businesses to strategize and to innovate. Developing economies have already embraced the significance of FinTech, which is also drawing in developed economies to sharpen strategy, advance standards and acquire new capabilities [13]. In 2021 there were 113 FinTech startups in Bangladesh, a number that is steadily growing; as per the FinTech World Ranking, Bangladesh holds the 61st position [14]. Asia has started considering FinTech a rising star.
The current valuation is about $1.45 billion and might reach $10 billion in future given its potential. GoMedici (2021) noted that FinTech processes include $ billion of monthly installments. Bangladesh Bank data show a 7% rise in MFS services in the third quarter of 2019–20 compared with the previous one; the platform deserves considerable appreciation [15]. LightCastle identified the lack of interoperability as a challenge to rapid implementation, and for the development of this sector the role of government is imperative; a collective initiative has accordingly been taken up by Bangladesh Bank and a2i for the DFS Lab [16, 17]. Not only e-commerce but also financial literacy has been supported by numerous products and services. In addition, LightCastle drew attention to readymade garment workers receiving salaries, followed by incentives and funds involving safety nets. Financial inclusion showed a constructive rise from 16% to 37% between 2011 and 2018, yet relative to the expanding population it still lags behind, and a lack of integration is evident in the financial system [18]. In areas like personal finance and lending the gap is pronounced. Overall, FinTech in Bangladesh has many opportunities for innovation and development; SMEs have received several forms of assistance in entering financial inclusion that might otherwise be difficult for the banking sector to provide. The aim of this research is to investigate the prospects


of the banking industry in adopting FinTech for the convenience of banks, FinTech companies and customers alike [15, 19].

3 Research Objectives

The broad objective of this research is to explore the relationship between the emergence of FinTech and the competitive landscape in the banking industry.

Specific objectives:
a. To reach a win/win strategy by means of the bank-FinTech association.
b. To help banks obtain multiple facilities to sustain competition.
c. To improve customer retention.
d. To explore the factors that make FinTech worthwhile.
e. To evaluate the challenges created by FinTech.

4 Data and Methodology

Collection of Data—Both secondary and primary data have been used for this study. Questionnaires were distributed to a sample of 200 people (in person, online or over the phone) to collect primary data in light of the objectives of the study.
Sample Size—The purpose and premises of the questionnaire were explained to the selected banks, FinTech companies and customers, assuring the total confidentiality of the data. A total of 200 samples were gathered; the survey was conducted among 75 bank managers, 50 FinTech company officials and 75 customers in Dhaka, Cumilla and Chattogram, Bangladesh.
Analysis of Data—Fundamental statistical tools are applied to interpret the observed variables. Quantitative data were analyzed in IBM SPSS Statistics 25. Cronbach's alpha was used to ascertain the internal consistency and reliability of the questionnaire. Appropriate statistical techniques such as measures of frequency, central tendency and dispersion, PLUM ordinal regression, correlation analysis, frequency tables, and analysis of variance are used to classify, tabulate and interpret the data collected from the respondents.

Hypotheses (Null Hypotheses)
Hypo 1: There is no correlation between efficiency and profitability.
Hypo 2: There is no correlation between cost minimization and profitability.
Hypo 3: Banks' better services cannot ensure customer retention.
Hypo 4: FinTech as a digital way of providing services does not save unnecessary hazards.
Hypo 5: Digitalization of services does not retain customers.
Hypo 6: The association does not ensure the supply of competitive products.


Fig. 1 Conceptual framework: adaptation of latest technologies, going for the digital way, and better services (competitive products) feed into the efficiency of banks' performance

Hypo 7: There is no correlation between the alliance and profitability.
Hypo 8: FinTech does not help banks perform efficiently by offering better products.

Regression Analysis—Multiple linear regression (PLUM ordinal). Except for the use of several independent variables, the multiple linear regression study is substantially the same as the simple linear model. The multiple linear regression technique is mathematically represented as:

Y = a + bX_1 + cX_2 + dX_3 + ε

(1)

where:
Y—Dependent variable: performance of the banking industry
X_1, X_2, X_3—Independent (explanatory) variables: X_1—adoption of FinTech techniques, X_2—providing competitive products, X_3—digital way of providing services
a—Intercept
b, c, d—Slopes
ε—Residual (error)

Conceptual Framework of the Study—The efficiency of banks' performance depends on three independent factors: adaptation of the latest technologies, going for the digital way, and better services (competitive products) (Fig. 1).
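As a rough illustration, Eq. (1) can be fitted by ordinary least squares on synthetic data; the data below are illustrative, not the survey responses, and the paper itself uses SPSS's PLUM ordinal regression rather than plain OLS:

```python
import numpy as np

# Ordinary least-squares fit of Y = a + b*X1 + c*X2 + d*X3 + eps (Eq. 1)
# on synthetic, illustrative data (not the survey responses).
rng = np.random.default_rng(42)
n = 200
X = rng.standard_normal((n, 3))                       # X1, X2, X3
true = np.array([2.0, 0.5, -1.0, 0.3])                # a, b, c, d
Y = true[0] + X @ true[1:] + rng.normal(0.0, 0.1, n)  # add residual eps
design = np.column_stack([np.ones(n), X])             # intercept column
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)     # recovers [a, b, c, d]
```

With low-noise data the recovered coefficients sit close to the true values; ordinal regression would instead model thresholds on a latent scale, but the role of the intercept and slopes is analogous.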

5 Results and Discussion

The acceptable range for Cronbach's alpha runs from 0.7 to 0.9; a score of 0.874 indicates a high level of internal consistency of the questionnaire items measured on the Likert scale. The following tables give the frequencies, valid percentages and descriptive statistics of the age of the respondents chosen for the survey (Tables 1, 2 and 3).
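The reliability statistic reported in Table 1 can be reproduced on any item matrix with the standard Cronbach's alpha formula; the toy data here are illustrative, not the survey responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) Likert matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Sanity check: six perfectly correlated items give alpha = 1.
perfect = np.tile(np.array([[1.0], [2.0], [3.0], [4.0], [5.0]]), (1, 6))
alpha_val = cronbach_alpha(perfect)
```

Values near 1 indicate that the items move together, which is why 0.874 on six items supports treating them as one consistent scale.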


Table 1 Reliability analysis

Cronbach's alpha | Standardized items | Number of items
0.874 | 0.747 | 6

Table 2 Age analysis

 | N | Minimum | Maximum | Mean | Std. deviation
Age | 200 | 29.00 | 43.00 | 37.4650 | 3.64075
Valid N (listwise) | 200 | | | |

Table 3 Age analysis

Age | Frequency | Percent | Valid percent | Cumulative %
29.00 | 13 | 6.5 | 6.5 | 6.5
33.00 | 16 | 8.0 | 8.0 | 14.5
35.00 | 20 | 10.0 | 10.0 | 24.5
36.00 | 28 | 14.0 | 14.0 | 38.5
37.00 | 26 | 13.0 | 13.0 | 51.5
38.00 | 27 | 13.5 | 13.5 | 65.0
39.00 | 18 | 9.0 | 9.0 | 74.0
41.00 | 10 | 5.0 | 5.0 | 79.0
42.00 | 26 | 13.0 | 13.0 | 92.0
43.00 | 16 | 8.0 | 8.0 | 100.0
Total | 200 | 100.0 | 100.0 |

People believe there is no alternative to technology: as today is an era of digitalization, banks should adopt the latest techniques to remain competitive and sustain their place in the market (Tables 4, 5, 6, 7, 8, 9, 10 and 11). Technology also has its disadvantages; the techniques adopted leave banks more prone to cyber-attack, for which banks need to be more conscious and alert (Table 12). In terms of meeting customer requirements, respondents find FinTech effective (Tables 13 and 14).

Table 4 Gender analysis (gender of the respondents)

N | Valid | 200
  | Missing | 0


Table 5 Gender analysis (gender of the respondents)

Gender | Frequency | % | Valid % | Cumulative %
Male | 129 | 64.5 | 64.5 | 64.5
Female | 71 | 35.5 | 35.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 6 City analysis

City | Frequency | % | Valid % | Cumulative %
Dhaka | 89 | 44.5 | 44.5 | 44.5
Chattagram | 68 | 34.0 | 34.0 | 78.5
Cumilla | 43 | 21.5 | 21.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 7 Profitability analysis (FinTech improves profitability)

Response | Frequency | % | Valid % | Cumulative %
SD | 3 | 1.5 | 1.5 | 1.5
D | 19 | 9.5 | 9.5 | 11.0
N | 30 | 15.0 | 15.0 | 26.0
A | 69 | 34.5 | 34.5 | 60.5
SA | 79 | 39.5 | 39.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 8 Efficiency analysis (FinTech allows banks to perform efficiently)

Response | Frequency | % | Valid % | Cumulative %
SD | 3 | 1.5 | 1.5 | 1.5
D | 19 | 9.5 | 9.5 | 11.0
N | 30 | 15.0 | 15.0 | 26.0
A | 69 | 34.5 | 34.5 | 60.5
SA | 79 | 39.5 | 39.5 | 100.0
Total | 200 | 100.0 | 100.0 |

So it can be said that FinTech saves customers from the unnecessary hazards they might encounter in physical transactions (Tables 15 and 16).


Table 9 Cost analysis (FinTech minimizes costs of banks)

Response | Frequency | % | Valid % | Cumulative %
SD | 3 | 1.5 | 1.5 | 1.5
D | 21 | 10.5 | 10.5 | 12.0
N | 28 | 14.0 | 14.0 | 26.0
A | 79 | 39.5 | 39.5 | 65.5
SA | 69 | 34.5 | 34.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 10 Service analysis

(FinTech helps banks offer better services that would take banks years to develop)

Response | Frequency | % | Valid % | Cumulative %
SD | 3 | 1.5 | 1.5 | 1.5
D | 19 | 9.5 | 9.5 | 11.0
N | 30 | 15.0 | 15.0 | 26.0
A | 69 | 34.5 | 34.5 | 60.5
SA | 79 | 39.5 | 39.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 11 Techniques adoption capacity analysis (banks should ensure the adoption of FinTech techniques)

Response | Frequency | % | Valid % | Cumulative %
SD | 3 | 1.5 | 1.5 | 1.5
D | 21 | 10.5 | 10.5 | 12.0
N | 28 | 14.0 | 14.0 | 26.0
A | 79 | 39.5 | 39.5 | 65.5
SA | 69 | 34.5 | 34.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 12 Cyber risk analysis (banks are left vulnerable to cyber-attack)

Response | Frequency | % | Valid % | Cumulative %
Yes | 151 | 75.5 | 75.5 | 75.5
No | 49 | 24.5 | 24.5 | 100.0
Total | 200 | 100.0 | 100.0 |


Table 13 System transformation capability analysis [20] (the association is vital for transforming the banking industry from a backdated system to a digital one)

Response | Frequency | % | Valid % | Cumulative %
Yes | 143 | 71.5 | 71.5 | 71.5
No | 57 | 28.5 | 28.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 14 Customer preference analysis (FinTech meets customer preference)

Response | Frequency | % | Valid % | Cumulative %
SD | 3 | 1.5 | 1.5 | 1.5
D | 21 | 10.5 | 10.5 | 12.0
N | 28 | 14.0 | 14.0 | 26.0
A | 79 | 39.5 | 39.5 | 65.5
SA | 69 | 34.5 | 34.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 15 Time saving capacity analysis [21] (FinTech saves time in customers' transactions)

Response | Frequency | % | Valid % | Cumulative %
Yes | 151 | 75.5 | 75.5 | 75.5
No | 49 | 24.5 | 24.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Table 16 Hazard avoiding capacity analysis

Response | Frequency | % | Valid % | Cumulative %
SD | 6 | 3.0 | 3.0 | 3.0
D | 21 | 10.5 | 10.5 | 13.5
N | 25 | 12.5 | 12.5 | 26.0
A | 79 | 39.5 | 39.5 | 65.5
SA | 69 | 34.5 | 34.5 | 100.0
Total | 200 | 100.0 | 100.0 |

Test of hypothesis and correlation analysis

#H1: There is a correlation between efficiency and profitability (Table 17).
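The Spearman's rho values reported in Tables 17-24 are rank correlations: each pair of Likert-coded survey items is ranked (ties receive average ranks, as in SPSS) and the Pearson correlation of the ranks is taken. A minimal pure-Python sketch follows; the response lists are hypothetical, since the raw survey data are not published:

```python
def ranks(xs):
    # Average (fractional) ranks with ties, as used for Spearman's rho.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the two rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 5-point Likert responses (1 = SD ... 5 = SA) for two items.
efficiency    = [5, 4, 4, 3, 5, 2, 4, 5, 1, 3]
profitability = [5, 4, 4, 3, 5, 2, 4, 5, 1, 3]

rho = spearman_rho(efficiency, profitability)  # identical answers give rho = 1.0
```

With identical response vectors the statistic is exactly 1.000, mirroring the perfect correlations in Tables 17 and 24; real paired items would give intermediate values such as the 0.322 of Table 18.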


Table 17 Hypothesis testing-I: Correlations (Spearman's rho)

Variables: (1) FinTech improves profitability; (2) FinTech allows banks to perform efficiently

                                 (1)       (2)
(1)  Correlation coefficient     1.000     1.000a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     1.000a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)

A perfect positive correlation of 1.000 exists between bank profitability and efficiency, so the alternative hypothesis, which posits a perfect positive correlation between the two variables, should be adopted.

#H1: There is a correlation between cost minimization and profitability (Table 18). Cost and profitability are weakly positively correlated (0.322). Because the P value is less than 0.05, the null hypothesis must be rejected and the alternative hypothesis adopted: the two variables are positively related.

#H1: Banks' better services can ensure customer retention (Table 19).

Table 18 Hypothesis testing-II: Correlations (Spearman's rho)

Variables: (1) FinTech minimizes costs of banks; (2) FinTech improves profitability

                                 (1)       (2)
(1)  Correlation coefficient     1.000     0.322a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     0.322a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)


Table 19 Hypothesis testing-III: Correlations (Spearman's rho)

Variables: (1) FinTech help banks offer better services that would take banks years to develop; (2) FinTech ensures customer retention

                                 (1)       (2)
(1)  Correlation coefficient     1.000     0.995a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     0.995a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)

The relationship between banking services and client retention is positive and nearly perfect (0.995), so the alternative hypothesis of a significant positive correlation between the two variables should be accepted.

#H1: FinTech as a digital way of providing services saves unnecessary hazards (Table 20).

Table 20 Hypothesis testing-IV: Correlations (Spearman's rho)

Variables: (1) FinTech saves the unnecessary hazards of physical transactions; (2) FinTech is a digital way of providing services

                                 (1)       (2)
(1)  Correlation coefficient     1.000     0.319a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     0.319a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)


A positive r value denotes a favorable correlation between two variables. The alternative hypothesis should be accepted because the P value is smaller than 0.05, indicating a low positive correlation between the two variables.

#H1: Digitalization of services retains customers (Table 21). The provision of digital services and client retention are strongly positively correlated (0.995). The alternative hypothesis should be accepted because the P value is less than 0.05, indicating a significant positive correlation between the two variables.

#H1: The association ensures the supply of competitive products (Table 22).

Table 21 Hypothesis testing-V: Correlations (Spearman's rho)

Variables: (1) FinTech is a digital way of providing services; (2) FinTech ensures customer retention

                                 (1)       (2)
(1)  Correlation coefficient     1.000     0.995a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     0.995a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)

Table 22 Hypothesis testing-VI: Correlations (Spearman's rho)

Variables: (1) Association is vital for transforming backdated system to the digital one; (2) FinTech enables to provide competitive products

                                 (1)       (2)
(1)  Correlation coefficient     1.000     0.902a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     0.902a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)


A correlation value of 0.902 indicates that the association and the availability of competitive products are strongly positively correlated. H1 should be adopted because the P value is smaller than 0.05, indicating a significant positive correlation between the two variables.

#H1: There is a correlation between the alliance and profitability (Table 23). There is a strong positive correlation, which indicates that the alliance enhances profitability. As the P value is less than 0.05, the null hypothesis should be rejected and the alternative hypothesis accepted, signifying a strong positive correlation between the two variables.

#H1: FinTech helps banks perform efficiently by offering better products (Table 24). The two variables show a perfect positive correlation (+1.000), and the P value indicates rejection of the null hypothesis.

PLUM Ordinal Regression and Model

The regression is conducted with one dependent variable and three independent variables: the efficiency of banks' performance is modelled on the adoption of the latest technologies, the move to digital delivery, and better services (competitive products) (Tables 25 and 26). As the significance value is less than 0.05, there is a difference between the baseline (intercept-only) model and the final model (Table 27). The goodness-of-fit significance values are greater than 0.05, implying that the observed data fit the fitted model well (Table 28). Cox and Snell's R square is based on the ratio of the log-likelihood of the fitted model to that of a reference model; it has a theoretical maximum of less than 1 for categorical outcomes,

Table 23 Hypothesis testing-VII: Correlations (Spearman's rho)

Variables: (1) Alliance with banking industry might be a win–win solution; (2) FinTech improves profitability

                                 (1)       (2)
(1)  Correlation coefficient     1.000     0.995a
     Sig. (2-tailed)             –         0.000
     N                           200       200
(2)  Correlation coefficient     0.995a    1.000
     Sig. (2-tailed)             0.000     –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)


Table 24 Hypothesis testing-VIII: Correlations (Spearman's rho)

Variables: (1) FinTech help banks offer better services that would take banks years to develop; (2) FinTech allows banks to perform efficiently

                                 (1)       (2)
(1)  Correlation coefficient     1.000     1.000a
     Sig. (2-tailed)             –         –
     N                           200       200
(2)  Correlation coefficient     1.000a    1.000
     Sig. (2-tailed)             –         –
     N                           200       200

a Correlation is significant at the 0.01 level (2-tailed)

The two variables show a perfect positive correlation (+1.000), and the P value denotes the rejection of the null hypothesis.

even with a "perfect" model. The Nagelkerke R square typically ranges from 0 to 1, and here it reaches 1.000; a McFadden pseudo-R square of 1 likewise indicates a perfectly predictive model. R square quantifies the proportion of the dependent variable's variance that the independent variables can account for.
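The pseudo-R-square statistics can be reproduced from the −2 log-likelihood values in Table 26 (522.095 for the intercept-only model, 0.000 for the final model) with n = 200; a short sketch of the standard formulas:

```python
import math

n = 200                      # sample size (Table 25)
ll_null = -522.095 / 2.0     # intercept-only model (-2LL = 522.095, Table 26)
ll_full = -0.000 / 2.0       # final model (-2LL = 0.000, Table 26)

# Likelihood-ratio chi-square of the model fitting test (Table 26).
lr_chi2 = -2.0 * ll_null - (-2.0 * ll_full)

# Pseudo R-square statistics (Table 28).
cox_snell = 1.0 - math.exp((2.0 / n) * (ll_null - ll_full))
nagelkerke = cox_snell / (1.0 - math.exp((2.0 / n) * ll_null))
mcfadden = 1.0 - ll_full / ll_null
```

Evaluating these expressions recovers the tabulated values: chi-square 522.095 with Cox and Snell 0.927, Nagelkerke 1.000, and McFadden 1.000, which is why Cox and Snell's statistic stays below 1 even for a model that fits perfectly.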

6 Recommendation

1. FinTech appears to be a cost-effective alliance from the perspectives of banks, FinTech companies, and customers, so the alliance should be administered efficiently.
2. The process is highly recommended for customers, who can avail themselves of all financial services in an economical and convenient way, consistent with their rising demand.
3. In terms of time savings, FinTech is an appropriate solution for all potential customers: everything is a click away, and accounts can be accessed for transactions at any time.
4. FinTech enables banks to offer services that would otherwise take them years to develop, so banks should adopt those products and use the technology to perform transactions well.
5. With digitalization, banks should be more careful to avoid cyber-attacks, which have become common. Security systems should be developed and monitored strictly so that attackers can find no loopholes.


Table 25 Case processing summary

                                                        N     Marginal percentage
FinTech allows banks to perform efficiently       SD    3     1.5%
                                                  D     19    9.5%
                                                  N     30    15.0%
                                                  A     69    34.5%
                                                  SA    79    39.5%
Banks should ensure the adoption of techniques    SD    3     1.5%
of FinTech                                        D     21    10.5%
                                                  N     28    14.0%
                                                  A     79    39.5%
                                                  SA    69    34.5%
FinTech is a digital way of providing services    SD    3     1.5%
                                                  D     19    9.5%
                                                  N     30    15.0%
                                                  A     69    34.5%
                                                  SA    79    39.5%
FinTech help banks offer better services that     SD    3     1.5%
would take banks years to develop                 D     19    9.5%
                                                  N     30    15.0%
                                                  A     69    34.5%
                                                  SA    79    39.5%
Valid                                                   200   100.0%
Missing                                                 0
Total                                                   200

Table 26 Model fitting info

Model            −2 Log likelihood   Chi-Square   df   Sig
Intercept only   522.095
Final            0.000               522.095      5    0.000

Table 27 Goodness of fit

           Chi-Square   df   Sig
Pearson    0.466        15   1.000
Deviance   0.930        15   1.000

Table 28 Pseudo R square

Cox and Snell   0.927
Nagelkerke      1.000
McFadden        1.000


6. As FinTech helps banks obtain competitive products, banks should focus on the alliance. Providing better services that meet customer preferences helps banks sustain themselves and gain a competitive advantage.
7. Unnecessary hazards can easily be avoided by adopting FinTech. Customers are dissatisfied by such hazards, so banks should always try to minimize them.
8. FinTech presents a win–win situation for all concerned parties by creating a plethora of positive impacts on them.
9. To stay competitive in the market by capturing customers and generating revenue while minimizing time and cost, FinTech is a must.

7 Concluding Remarks

The purpose of this study is to explore the potential of FinTech for the banking industry, FinTech companies, and customers. Several correlations have been examined, and many factors of FinTech were found to be linked to profitability, cost reduction, time saving, customer retention, revenue generation, etc. A bank-FinTech association can create a favorable strategy for both parties, and FinTech paves a great way for banks to hold customers and survive in the current market. However, some risks also arise with the adoption of financial technology [22]. One of them is digital insecurity in the form of cyber-attacks, so it is the responsibility of banks and FinTech companies to ensure tight security for the safety of funds and clients. Close monitoring of banks is also required to avoid any unnecessary hazards. Due to COVID-19, the impact of FinTech has been widespread, and its necessity has been keenly felt by customers. The efficiency of performance is also greatly influenced by the use of technology [23].

References

1. Lee I, Shin YJ (2018) Fintech: ecosystem, business models, investment decisions, and challenges. Bus Horiz 61(1):35–46
2. Varga D (2017) Fintech, the new era of financial services. Vezetéstudomány-Budapest Manage Rev 48(11):22–32
3. Akter R, Ahmad S, Kulsum U, Hira NJ, Akhter S, Islam MS (2019) A study on the implementation of Basel III: Bangladesh perspective. Acad Strateg Manag J 18(6):1–10
4. Alt R, Beck R, Smits MT (2018) FinTech and the transformation of the financial industry. Electron Mark 28:235–243
5. Arner DW, Barberis J, Buckley RP (2015) The evolution of Fintech: a new post-crisis paradigm. Geo J Int'l L 47:1271
6. Iman N (2020) The rise and rise of financial technology: the good, the bad, and the verdict. Cogent Bus Manage 7(1):1725309
7. Rahman B, Ahmed O, Shakil S (2021) Fintech in Bangladesh: ecosystem, opportunities and challenges. Int J Bus Technopreneurship 11:73–90
8. Raj R, Dixit A, Saravanakumar D, Dornadula A, Ahmad S (2021) Comprehensive review of functions of blockchain and crypto currency in finance and banking. Des Eng 2021:3649–3655
9. Anagnostopoulos I (2018) Fintech and regtech: impact on regulators and banks. J Econ Bus 100:7–25
10. Taher SA, Tsuji M (2022) An overview of FinTech in Bangladesh: problems and prospects. FinTech development for financial inclusiveness, pp 82–95
11. Arner DW, Barberis J, Buckey RP (2016) FinTech, RegTech, and the reconceptualization of financial regulation. Nw J Int'l L & Bus 37:371
12. Nakashima T (2018) Creating credit by making use of mobility with FinTech and IoT. IATSS Res 42(2):61–66
13. Pousttchi K, Dehnert M (2018) Exploring the digitalization impact on consumer decision-making in retail banking. Electron Mark 28:265–286
14. Jakšič M, Marinč M (2019) Relationship banking and information technology: the role of artificial intelligence and fintech. Risk Manage 21:1–18
15. Mention AL (2019) The future of fintech. Res-Technol Manage 62(4):59–63
16. Romānova I, Kudinska M (2016) Banking and fintech: a challenge or opportunity? In: Contemporary issues in finance: current challenges from across Europe, vol 98. Emerald Group Publishing Limited, pp 21–35
17. Ahmad S, Saxena C (2022) Internet of Things and blockchain technologies in the insurance sector. In: 2022 3rd international conference on computing, analytics and networks (ICAN). IEEE, pp 1–6
18. Akter R, Ahmad S, Islam MS (2018) CAMELS model application of non-bank financial institution: Bangladesh perspective. Acad Account Financial Stud J 22(1):1–10
19. Goldstein I, Jiang W, Karolyi GA (2019) To FinTech and beyond. Rev Financial Stud 32(5):1647–1661
20. Singh D, Bal MK (2020) Backward region grant fund (BRGF) program with reference to the state of Chhattisgarh: a review. Pramana Res J 10(5)
21. Nuseir MT (2016) Exploring the use of online marketing strategies and digital media to improve the brand loyalty and customer retention. Int J Bus Manage 11(4):228–238
22. Rajput S, Ahmad S (2022) Challenges and opportunities in creating digital insurance business in Bangladesh. Int J Early Childhood Special Educ 14(5):6690–6693
23. Legowo MB, Subanidja S, Sorongan FA (2021) Fintech and bank: past, present, and future. Jurnal Teknik Komputer AMIK BSI 7(1):94–99

Review on Deep Learning-Based Classification Techniques for Cocoa Quality Testing Richard Essah, Darpan Anand, and Abhishek Kumar

Abstract Cocoa is an essential ingredient in chocolate and chocolate-flavoured treats, and chocolate is one of the world's most popular foods. Testing the quality of cocoa is necessary to produce high-quality chocolate, so early prediction and classification of cocoa beans by computer vision systems are required. In computer vision systems, deep learning methods are used for accurate prediction based on the morphological characteristics of cocoa and its types. This manuscript reviews deep learning-based techniques used for testing cocoa beans on the basis of morphological characteristics and cocoa types.

Keywords Cocoa beans · Deep learning · Morphological characteristics · Theobroma tree

1 Introduction

Cocoa beans are produced by fermenting the ripe seeds of the Theobroma tree. Fermentation is the first step in turning cocoa beans, chocolate's main raw component, into edible chocolate [1]. The aroma, color, and taste of finished cocoa products are all affected by the fermentation process: the flavor of chocolate is the result of chemical changes induced in the cocoa bean during fermentation [2]. Once the cocoa pods have been harvested, they are opened, the cocoa seeds and pulp are removed, and the pulp is placed in wooden boxes or containers for fermentation. Products can be made to a higher standard when fermented cocoa beans are used. The quality determination of dried cocoa beans is a laborious and delicate procedure, necessitating the use of computer vision to classify an image into the appropriate category [3]. Using machine learning and deep learning, computer vision has recently had a substantial impact on agricultural production. Three of the most popular deep learning-based image analysis methods are convolutional neural network fine-tuning, transfer learning with deep neural networks, and CNNs trained from scratch. Each method was used to identify dried cocoa seeds and then evaluated against the others to determine which was most reliable [4]. With the right integration architecture in place, the deep learning methodology may become an industrial application, replacing the standard method of quality checking for cocoa beans [5].

R. Essah (B) · A. Kumar, Department of Computer Science, Chandigarh University, Mohali 140413, India. e-mail: [email protected]
D. Anand, Department of CSE, Sir Padampat Singhania University, Udaipur 313001, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_20

2 Literature Review

Hidayat et al. [6] note that classifying three quality grades of Java cocoa beans and bulk cocoa beans is more difficult than merely distinguishing high-quality from low-quality beans. Their E-nose-MLP-ANN method yielded the highest predictive classification performance, with an overall accuracy of 99% on the training dataset and 95% on the external validation dataset. Nazli et al. [7] observe that Theobroma cacao beans are used extensively in chocolate production, making it vital to categorize them properly for optimal flavor synthesis, yet cocoa beans pictured side by side appear virtually identical. In their work the raw data were saved in Microsoft Excel and classification was done in Octave. The e-nose is a network of chemical sensors that detects the gas vapors released by the cocoa bean. Features were retrieved using a mean calculation and normalized to improve classification accuracy; after case-based reasoning (CBR) was applied to categorize the features, a similarity score was calculated. The outcomes demonstrate that CBR achieves 100% classification accuracy, specificity, and sensitivity. Lomotey et al. [8] address smart farming, where mobile technologies are used to obtain real-time crop data. There is a significant digital divide between the agricultural output of some economies and the mobile infrastructure needed to support it, so their objective is a smartphone app that provides smart, real-time access to agronomic data in an effort to bridge this gap. The proposed study is tailored to the unique circumstances of cocoa farming in Ghana and can help farmers in the country by providing them with such access. Adhitya et al. [9] note that every country relies heavily on agriculture and that agricultural technology has greatly benefited from the rise of machine learning. To improve precision and discover answers to pressing issues, the agricultural sector is increasingly turning to artificial intelligence methods, and the widespread use of convolutional neural network (CNN) based applications suggests that a CNN-based machine learning scheme can readily be applied to agriculture. In their analysis, photos of cocoa beans from different areas of South Sulawesi, Indonesia were divided into 30% for training and 70% for testing; seven types of cocoa bean photos were classified with an accuracy of 82.14% using only 5 CNN layers.


Zhang et al. [10] note that people are becoming more aware of food safety issues as the economy, science, and technology advance, and testing methods for ensuring that food is safe to eat have advanced swiftly to keep up. Among detection technologies, biosensors stand out for their exceptional sensitivity and specificity, and biosensor technology has advanced alongside developments such as 3D printing, cell phones, the Internet of Things (IoT), and artificial sensing. Intelligent, portable, and sensitive food detection systems are necessary for future progress; the authors survey recent studies on biosensor intelligence, elaborate on a variety of intelligent biosensor technologies and equipment, and make predictions about their near-term use. Oliveira et al. [11] manually constructed characteristics from images of delivered beans and classified the samples with a decision-forest classifier. Kaghi et al. [12] used a pre-trained AlexNet CNN as a generic feature extractor on 2D images whose dimensionality was reduced by principal component analysis before categorization by basic machine learning methods such as KNN and the naive Bayes classifier; these results are consistent with using a CNN Softmax classifier. Barbon et al. [13] used machine learning-based algorithms to estimate the storage time of pork, with accuracies between 78.16 and 94.41%. Using an image processing technique, the classification procedure can be divided into two separate stages: feature extraction and fruit categorization. Renjith [14] offers a novel architecture for distinguishing varieties of durian fruit by their defining traits, achieving accurate feature extraction through edge detection and colour extraction. Non-destructive machine learning techniques such as support vector machines, gradient-boosted decision trees, and random forests were presented by Harel et al. [15] to categorize ripeness, with random forest shown to be quite robust. Offline Arabic handwriting recognition was investigated by Elleuch et al. [16], who examined a technique that prioritizes two classifiers, CNN and SVM; compared to the state of the art in optical Arabic character recognition, their approaches performed well.

3 Deep Learning

Deep learning is a branch of machine learning that uses artificial neural networks to simulate how the human brain processes information. It is a hierarchical kind of machine learning: a model has several tiers, and machines may chain together specific procedures that humans have developed. By applying many algorithms in sequence, deep learning mimics the human approach to solving difficult problems.


(i) Convolution Neural Network (CNN): The CNN is one example of a deep neural network. In image classification, image recognition, computer vision, and many other fields [17–20], CNNs have proven both effective and widely used. The input is reduced to smaller samples by convolution and pooling operations, passed through an activation function in a series of partially connected hidden layers, and finally through a fully connected output layer (Fig. 1). (ii) Deep Neural Network (DNN): For purposes of this discussion, a "deep neural network" is a network having more than two hidden layers. Deep neural networks perform intricate data processing via elaborate mathematical modelling, and morphology can be utilised as input for DNN prediction. The "nodes" of a neural network are its interconnected components [23, 24]; similar nodes are collected into layers, and the layers between the input and the output are processed in order to complete a task (Fig. 2).

Fig. 1 CNN architecture
Fig. 2 DNN architecture (input, hidden layers, output)
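The convolution, activation, and pooling operations summarized above can be illustrated with a toy single-channel example. Everything here is illustrative (a hypothetical 6×6 "image" and a hand-picked vertical-edge kernel; real CNNs learn their kernels during training):

```python
def conv2d(img, kernel):
    # Valid 2-D convolution (strictly, cross-correlation, as in most DL libraries).
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(fmap):
    # Elementwise activation: negative responses are clipped to zero.
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool2x2(fmap):
    # 2x2 max pooling with stride 2: halves each spatial dimension.
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 6x6 "image" with a bright vertical stripe, and a vertical-edge kernel.
image = [[1.0 if j == 2 else 0.0 for j in range(6)] for i in range(6)]
edge_kernel = [[1.0, 0.0, -1.0]] * 3

features = maxpool2x2(relu(conv2d(image, edge_kernel)))  # 2x2 feature map
```

The 6×6 input shrinks to a 4×4 convolution response and then to a 2×2 pooled map, showing how convolution and pooling progressively reduce the input to smaller samples before the fully connected layers.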


Fig. 3 VGG-16

(iii) VGG-16: VGG-16 [25] is one of the top computer vision models available today. It has 16 weight layers and roughly 138 million trainable parameters (Fig. 3). (iv) GoogLeNet: GoogLeNet makes use of Inception modules, which give the network the flexibility to select from a range of convolutional filter sizes within a given block. In an Inception network, these modules are stacked one on top of the other, with occasional max-pooling layers at stride 2 to reduce the grid resolution by a factor of 2. The Inception network [26] is a deep convolutional neural network developed by Google's own engineers; GoogLeNet is a 22-layer form of Inception. (v) Transfer Learning: In deep learning, transfer learning is the process of applying a model learned on one task to a different task. The machine draws on what it has already learned to improve its performance in a new setting, frequently using previously acquired expertise from the source task to speed development on the target task. Knowledge transfer involves applying and mapping the traits and characteristics of the source task to the target task; if the target task is performed less effectively after the transfer, this is called negative transfer. Ensuring positive transfer between related activities while preventing negative transfer between less-related tasks is a significant difficulty when working with transfer learning methods.
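The feature-extractor style of transfer learning described above can be sketched in miniature: a frozen "pretrained" mapping supplies features, and only a small classifier head is trained on the target task. Everything below (the `frozen_features` mapping, the XOR-style toy task, the training loop) is an illustrative assumption, not a model from any of the reviewed papers:

```python
import math

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Frozen "pretrained" feature extractor (stands in for a convolutional base):
# its mapping is fixed and never updated during target-task training.
def frozen_features(x):
    a, b = x
    return [a, b, a * b, a * a, b * b, 1.0]

# Toy target task: XOR-style labels, not linearly separable in the raw inputs
# but separable in the frozen feature space (thanks to the a*b term).
data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 0)]

# Train only the small logistic-regression "head" on top of the frozen features.
w = [0.0] * 6
lr = 0.5
for _ in range(3000):
    for x, y in data:
        f = frozen_features(x)
        p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)))
        g = p - y  # gradient of the log-loss with respect to the logit
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]

preds = [int(sigmoid(sum(wi * fi for wi, fi in zip(w, frozen_features(x)))) > 0.5)
         for x, _ in data]
```

In this sketch the transfer is positive because the frozen features make the target task easier; negative transfer would correspond to frozen features that make the target task harder than learning from the raw inputs.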

4 Analysis

The quality of a cocoa bean may be judged by its appearance, size, and structure. Traditionally, the quality of cocoa beans is inspected manually, by eye inspection and individual selection: any defect on the cocoa beans must be visible to the naked eye, and inspectors commonly have little more than their own knowledge and background to go on. Manual inspection also brings difficulties such as eye strain and inconsistencies in analytical findings. The approach takes a long time, is very subjective, and carries a significant degree of qualitative uncertainty; moreover, it may not be effective for large quantities of cocoa beans due to human error.


Manual analysis requires a greater degree of skill, expertise, and time, so quality control checks on industrial cocoa beans may not lend themselves well to manual examination. A fast and reliable procedure for sorting cocoa beans is therefore crucial for quality assurance. These days, machine vision is widely used to determine the quality of food and farming supplies. The limitations of human inspection can be circumvented through automation technologies enhanced by AI. Computer vision, a discipline that spans image analysis and machine learning, is deployed to facilitate automated inspection: images are analysed and processed to offer the user useful information, and size, shape, and texture are retrieved from the image in order to determine its morphology.
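The size-and-shape feature extraction step can be sketched as follows; the 8×8 binary mask and the particular feature set are illustrative assumptions, not the pipeline of any cited system:

```python
# Toy 8x8 binary mask of a segmented bean (1 = bean pixel), illustrative only.
mask = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

def size_shape_features(mask):
    # Area, bounding box, aspect ratio, and a 4-connected perimeter estimate.
    pts = [(i, j) for i, row in enumerate(mask) for j, v in enumerate(row) if v]
    area = len(pts)
    rows = [i for i, _ in pts]
    cols = [j for _, j in pts]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    # Perimeter: count exposed edges of each foreground pixel.
    on = set(pts)
    perim = 0
    for i, j in pts:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) not in on:
                perim += 1
    return {"area": area, "height": height, "width": width,
            "aspect_ratio": width / height, "perimeter": perim}

feats = size_shape_features(mask)
```

Features like these (together with texture descriptors) form the morphological input vector that a classifier, whether a decision forest or a deep network, consumes.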

4.1 Types of Cocoa Beans

The three most popular types of cocoa tree are Forastero, Criollo, and Trinitario. Criollo cocoa beans are rarer and more valued for their fine flavour; few countries still farm the Criollo variety because it is less resistant to the different diseases that afflict the cocoa plant. Venezuela is a prominent player in the global market as a big exporter of Criollo beans (Chuao and Porcelana). Trinitario is a mix of Criollo and Forastero that originated in Trinidad; in quality, yield, and disease resistance it exceeds Criollo, and it is commonly regarded as superior to Forastero [6]. (i) Criollo Cocoa: Most cocoa populations produced now have been subjected to the genetic influence of other varieties, so the genetic purity of cocoas labeled Criollo is questionable. Criollo is particularly difficult to cultivate, since the trees are subject to a wide variety of environmental challenges and produce relatively little cocoa per plant. (ii) Forastero Cocoa: The Amazon Basin is believed to be the origin of the Forastero family, which contains both wild and cultivated kinds of cocoa beans; only the Forastero variety of cocoa is grown in Africa. Forastero trees are far more resistant to drought and prolific than Criollo. The vast majority of chocolate on the market is derived from Forastero cocoas, which have a high concentration of the primary "chocolate" flavour but a short finish and no additional tastes, resulting in a quite bland product. (iii) Nacional Cocoa: The extremely rare and unique Nacional cocoa bean is native to Ecuador and Peru in South America. Some experts believed the Nacional bean extinct after an outbreak of "Witches' Broom" disease swept through these countries in 1916 and nearly wiped out the variety. Most Nacional variants have since been cross-bred with other cocoa bean cultivars, making pure genotypes exceedingly rare.
The origins of Ecuadorian Nacional may be traced back 5,300 years to the first cocoa trees to be domesticated. Since the eighteenth and nineteenth centuries, European chocolatiers have considered Nacional to be the world’s finest cacao due to its exquisite floral aroma and complex flavour.


(iv) Trinitario Cocoa: The Trinitario is a naturally occurring fusion of the Criollo and the Forastero; farmers in Trinidad began crossing Criollo with Forastero, resulting in the formation of Trinitario. In the past 50 years, the Forastero and Trinitario cultivars, both of lower quality than Criollo, have accounted for almost all cocoa production. Morphological identifiers for cocoa trees include habit, flower stalk color, young fruit color, and ripe fruit color; the varieties are compared on these characters in Table 2, where the Cundeamor cocoa cultivar stands out for its superior habitus. Varieties of cocoa plants are distinguished not only by their physical appearance but also by a number of other characteristics. Table 1 depicts comparisons between deep learning models based on the types of attributes used in the cocoa bean dataset; according to this comparison, VGG-16 is the best model because it uses the most attributes for testing the quality of cocoa beans. Furthermore, Table 3 compares the deep learning models on performance metrics: the accuracy of predicting cocoa bean quality is high for VGG-16 and low for DNN relative to the other models.

Table 1 Comparison of various deep learning models based on attributes used for cocoa bean testing

Attributes: Cocoa_Bean_Fraction, Cocoa_Broken_Beans, Cocoa_Fermented, Cocoa_Moldy, Unfermented_Cocoa, Cocoa_Whole_Beans
Models compared: CNN, DNN, Transfer learning, GoogLeNet, VGG-16

Table 2 Comparison between various types of cocoa plants with morphological characters

Morphological parameter  | Forastero       | Criollo      | Nacional          | Trinitario
Habitat form             | Satisfactory    | Excellent    | Less satisfactory | Poor
Jorquette formation      | Plenty          | Median       | Median            | Median
Tinge of old leaf color  | Medium powerful | Low powerful | Powerful          | Medium powerful
Flower stalk color       | Reddish green   | Green        | Red               | Reddish
Young fruit color        | Green           | Green        | Red               | Yellowish
Ripe fruit color         | Yellow          | Orange       | Orange            | Reddish

250

R. Essah et al.

Table 3 Comparison of deep learning models based on performance metrics (H = high, M = medium, L = low)

Deep learning model | Accuracy | Precision | Recall | F-Measure
CNN                 | M        | M         | M      | H
DNN                 | L        | L         | L      | M
Transfer learning   | H        | H         | M      | H
GoogLeNet           | L        | M         | L      | M
VGG-16              | H        | H         | H      | H
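The H/M/L ratings above summarize the standard classification metrics. For reference, a minimal pure-Python sketch of how accuracy, precision, recall, and F-measure are computed from prediction counts (the counts below are illustrative, not taken from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute standard binary-classification metrics from counts of
    true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Illustrative counts for a hypothetical "good bean vs. defective bean" test
acc, prec, rec, f1 = classification_metrics(tp=80, fp=10, fn=20, tn=90)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# → 0.85 0.889 0.8 0.842
```

The same definitions extend to the multi-class setting by averaging per-class scores, which is what library implementations such as Scikit-learn's metrics do.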

5 Conclusion

Analyzing the cocoa production process necessitates the development of automated systems for evaluating the final product's quality. In this paper, we presented comparisons of various deep learning models based on morphological parameters, types of cocoa beans, and different performance metrics. Based on these comparisons, we conclude that transfer learning and VGG-16 are the best deep learning models for testing the quality of cocoa beans. In the future, with more study, we may take advantage of pre-trained CNNs and texture extraction methods for higher accuracy. A study of cocoa bean quality based on the maturation of cocoa pods would also reveal which pods hold the best beans. Low-quality beans can be harvested from unripe pods, whereas beans in overripe pods may have already begun to germinate or undergone other changes within the pod. Categorizing cocoa beans according to their shape or morphological traits will also help in the manufacture of high-quality beans.

References

1. Ayikpa KJ, Diarra M, Ballo AB, Gouton P, Jérôme AK (2022) Application based on hybrid CNN-SVM and PCA-SVM approaches for classification of cocoa beans. Int J Adv Comput Sci Appl 13(9)
2. Anand D, Essah R (2021) Proposal on automatic cocoa quality testing and procurement in Ghana. Asian J Res Comput Sci 132–146
3. Urbańska B, Kowalska J (2019) Comparison of the total polyphenol content and antioxidant activity of chocolate obtained from roasted and unroasted cocoa beans from different regions of the world. Antioxidants 8(8):283
4. Essah R, Anand D, Singh S (2022) Empirical analysis of existing procurement and crop testing process for cocoa beans in Ghana. In: Mobile radio communications and 5G networks: proceedings of third MRCN, pp 229–244
5. Essah R, Anand D, Singh S (2022) An intelligent cocoa quality testing framework based on deep learning techniques. Measurement: Sens 24:100466
6. Hidayat SN, Rusman A, Julian T, Triyana K, Veloso AC, Peres AM (2019) Electronic nose coupled with linear and nonlinear supervised learning methods for rapid discriminating quality grades of superior java cocoa beans. Int J Intell Eng Syst 12(6):167–176
7. Nazli NA, Najib MS, Mohd Daud S, Mohammad M, Baharum Z, Ishak MY (2021) Intelligent classification of cocoa bean using E-nose. Mekatronika 2(2):28–35


8. Lomotey RK, Mammay A, Orji R (2018) Mobile technology for smart agriculture: deployment case for cocoa production. Int J Sustain Agric Manage Inform 4(2):83–97
9. Adhitya Y, Prakosa SW, Köppen M, Leu JS (2019) Convolutional neural network application in smart farming. In: International conference on soft computing in data science. Springer, Singapore, pp 287–297
10. Zhang J, Huang H, Song G, Huang K, Luo Y, Liu Q, Cheng N (2022) Intelligent biosensing strategies for rapid detection in food safety: a review. Biosens Bioelectron 114003
11. Oliveira MM, Cerqueira BV, Barbon S Jr, Barbin DF (2021) Classification of fermented cocoa beans (cut test) using computer vision. J Food Compos Anal 97:103771
12. Khagi B, Lee CG, Kwon GR (2018) Alzheimer's disease classification from brain MRI based on transfer learning from CNN. In: IEEE 11th biomedical engineering international conference (BMEiCON), pp 1–4
13. Barbon APA, Barbon S Jr, Mantovani RG, Fuzyi EM, Peres LM, Bridi AM (2016) Storage time prediction of pork by computational intelligence. Comput Electron Agric 127:368–375
14. Renjith PN (2020) Classification of durian fruits based on ripening with machine learning techniques. In: IEEE 3rd international conference on intelligent sustainable systems (ICISS), pp 542–547
15. Harel B, Parmet Y, Edan Y (2020) Maturity classification of sweet peppers using image datasets acquired in different times. Comput Ind 121:103274
16. Elleuch M, Maalej R, Kherallah M (2016) A new design based-SVM of the CNN classifier architecture with dropout for offline Arabic handwritten recognition. Procedia Comput Sci 80:1712–1723
17. Bhatt D, Patel C, Talsania H, Patel J, Vaghela R, Pandya S, Modi K, Ghayvat H (2021) CNN variants for computer vision: history, architecture, application, challenges and future scope. Electronics 10(20):2470
18. Samek W, Binder A, Montavon G, Lapuschkin S, Müller KR (2016) Evaluating the visualization of what a deep neural network has learned. IEEE Trans Neural Netw Learn Syst 28(11):2660–2673
19. Mascarenhas S, Agarwal M (2021) A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for image classification. In: International conference on disruptive technologies for multi-disciplinary research and applications (CENTCON), vol 1, pp 96–99
20. Al-Qizwini M, Barjasteh I, Al-Qassab H, Radha H (2017) Deep learning algorithm for autonomous driving using GoogLeNet. In: IEEE intelligent vehicles symposium (IV), pp 89–96
21. Zhuang F, Qi Z, Duan K, Xi D, Zhu Y, Zhu H, Xiong H, He Q (2020) A comprehensive survey on transfer learning. Proc IEEE 109(1):43–76

A Curated Study on Machine Learning Based Algorithms and Sensors for Drone Technology in Various Applications

Digant Raj, Garima Thakur, and Arti

Abstract Drones are used in many fields, including delivery, agriculture, construction, and surveillance. Even so, drones are prone to crashes. The main goal of this paper is to present a case study of different machine learning algorithms and sensors, such as radar sensors. These algorithms and sensors can be connected to a Raspberry Pi, which makes it possible to access all of them together so that they can be applied in different sectors. The case study shows how such a system can help a drone avoid crashes and be used in different fields without problems, and it describes the advantages of machine learning capabilities such as autonomous navigation and anomaly detection for different uses of the drone.

Keywords Drones · Radar sensor · Autonomous navigation · Machine learning

1 Introduction

Unmanned aerial vehicles (UAVs), commonly called drones, can navigate autonomously, without human control or beyond the line of sight. In this paper, the authors explain different applications enabled by machine learning and artificial intelligence, and how helpful and important a drone can be for different applications in regular life if it is correctly used with machine learning and AI technologies. Several machine learning capabilities are used here. The authors use autonomous navigation, in which machine learning algorithms navigate the drone autonomously, avoiding obstacles and adapting to changing environmental conditions. This can be useful in fields like search and rescue, where time is of the essence and human pilots may not be able to react quickly enough. Santamaria-Navarro et al. [1] give a good account of using this algorithm in drones and its advantages in different conditions and extreme environments [1]. Object recognition is used so that the drone can recognise animals,

D. Raj · G. Thakur (B) · Arti
Chandigarh University, Punjab, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_21


254

D. Raj et al.

crops, plants, humans, etc. This can be helpful in agriculture, where drones can be used to monitor crop health and identify problems where attention is needed. Anomaly detection is used in the drone to detect anomalies in the drone's data, such as a sudden change in temperature, humidity, or air quality. This can be very useful in environmental monitoring, where drones can be used to inspect an environment or assess the damage in a place. Lu et al. [2] have used it with sensors and a Raspberry Pi, which gave good results [2]. Then there is the main algorithm: swarm intelligence can be used to enable drones to work in a coordinated manner, forming a swarm that can perform tasks more efficiently than individual drones. Cui et al. [3] explain the proper use of swarm intelligence in drones; their work focuses on the comparison and analysis of two representative drone swarm communication techniques to solve the challenges of drone swarm communication design [3]. Swarms can be used in applications such as surveillance and delivery, covering a large area and providing real-time data to the operators. Figure 1 shows the system working diagram of the drone. Drones and artificial intelligence are a match made in technological heaven. A human-like eye in the sky is made possible for ground-level operators by combining AI's real-time machine learning technology with unmanned drones' exploring skills. Drones are now more important than ever in a wide range of industries, including construction, agriculture, natural disaster response, and security. Drones have become essential tools for everyone from farmers to firefighters due to their capacity to boost productivity and improve safety. In fact, smart UAVs are now employed on more than 400,000 construction sites globally due to their immense popularity.
Authors in [4–7] have proposed good ideas for machine learning based drones in different fields; this study draws on those ideas to present a proper combination of different algorithms in drones that can be used across fields. The remainder of this paper is organized as follows. In Sect. 2, we review related work on radar sensors and deep learning methods. Section 3 introduces the overview of

Fig. 1 System working diagram of drone [2]

A Curated Study on Machine Learning Based Algorithms and Sensors …

255

the proposed system. Comparison and results are reported in Sect. 4. Finally, Sect. 5 presents the conclusions.

2 Related Work

Authors in [8–10] have used radar technology, which can provide long-range detection depending on the radar cross section (RCS) and whose performance is not easily affected by adverse light and weather conditions. There are also different technologies for detecting low-altitude, slow-moving objects such as UAVs. Detection and classification then require radar signal processing algorithms that detect targets and extract features for automatic classification with a machine learning algorithm. Here, deep learning techniques are used to process raw data into a suitable representation for target detection and classification [11]. Optical data appears to be a particularly important source of information that might offer crucial clues to a UAV detection system given current developments in neural networks and deep learning techniques. Since its tremendous success in classifying pictures on the well-known ImageNet dataset [12] during the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) competition in 2012, research has developed around the deep learning paradigm. The Faster-RCNN detection pipeline was explored for the purpose of UAV identification in Saqib et al.'s study [13]. They conducted several trials employing various baseline models inside the detection pipeline (e.g., VGG-16 [14], ZF-net [15], etc.). According to their research, VGG-16 outperforms the other options for the base DNN. Additionally, they contended that the presence of birds may impair the detector's function by raising the rate of false positive detections. To drive the network to learn more finely detailed patterns between UAVs and birds, they suggested that birds not be overlooked during the training process but rather be included as a distinct class, resulting in a more effective way to distinguish them. Additionally, Nalamati et al. conducted comparable research using more modern models as its foundation [16].
Figure 2 shows the various sensor technologies in modern drones. Drones are being used in delivery services to transport packages and goods. They can be used to deliver items to remote areas that are difficult to access by traditional methods. Drones can also deliver items quickly, which can be especially beneficial for time-sensitive items such as medical supplies. A study by Li et al. [18] found that using drones for delivery services reduced delivery time by 30% [18]. Duchi's work [19] suggested a machine learning-based system for self-navigating drones [19]. The algorithm enables the drone to travel through complex surroundings and avoid obstacles by combining deep neural networks and reinforcement learning methods. The suggested algorithm was put to the test in both simulations and actual experiments, and the results in terms of accuracy and efficiency were encouraging. The training process involves different crashes, and the model learns from this experience to improve its performance. The multi-dimensional goals that require diverse technological support are shown in Fig. 3.


Fig. 2 Various sensor technologies are used in a variety of ways in modern drones [17]

Fig. 3 Safe drone operation is a multi-dimensional goal that requires diverse technological support [4]

3 System Overview

Machine learning, a subset of artificial intelligence (AI), involves teaching algorithms to make predictions or choices based on data rather than being explicitly programmed to do so. Construction of mathematical models capable of data analysis and pattern recognition is at the heart of machine learning. These models can then be applied to draw conclusions or make predictions about newly encountered data [20–27]. The different types of machine learning are shown in Fig. 4. Machine learning algorithms include supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. In supervised learning, the algorithm is trained on a labeled dataset, where the input data is paired with the


Fig. 4 Different types of machine learning: supervised, unsupervised, semi-supervised, and reinforcement

correct output or label. In unsupervised learning, the algorithm is not given any labels and must find patterns or structure in the data on its own. Reinforcement learning involves an agent learning to interact with an environment and receiving rewards or punishments based on its actions. In semi-supervised learning, the model is trained using both labelled and unlabelled data. In classical supervised learning, the model is trained exclusively on labelled data, where each sample is connected to a target label; however, getting labelled data can be costly or time-consuming in many real-world settings. In supervised learning, the goal is to learn a function that maps inputs to outputs based on labelled examples of input-output pairs. In regression, the output variable is continuous, and the goal is to learn a function that predicts the output variable from the input variables, which we will use in this case study. We can put autonomous navigation, object detection, anomaly detection, swarm intelligence, and sensor algorithms in the drone so that it can work properly in different fields. In delivery, for example, we know that a drone is prone to crashes, so we can add autonomous navigation so that it can deliver under unexpected conditions and avoid obstacles; object detection and anomaly detection are installed to help detect temperature, pressure, humans, animals, etc. All of this is collectively managed by the operators through swarm intelligence, so that they can easily see the details of the drones, which reduces manpower and helps the field grow. Sensors and algorithms can be installed on a Raspberry Pi, a series of small microcomputers to which various applications and algorithms can be added; installing a Raspberry Pi inside the drone is therefore very helpful in drone making. Figure 5 shows the structure of multi-sensor data.
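As a small concrete illustration of the supervised regression described above, the sketch below fits a line to labelled input-output pairs by ordinary least squares in pure Python; the payload/flight-time numbers are made up for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical labelled data: payload mass (kg) -> flight time (min)
payload = [0.0, 0.5, 1.0, 1.5, 2.0]
flight_time = [24.0, 22.0, 20.0, 18.0, 16.0]  # perfectly linear for clarity
slope, intercept = fit_line(payload, flight_time)
print(slope, intercept)  # → -4.0 24.0
predicted = slope * 1.2 + intercept  # predict flight time for a new 1.2 kg payload
```

A library regressor (e.g., from Scikit-learn) would replace `fit_line` in practice, but the labelled-examples-to-function idea is the same.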
Many sensors are used in drone making to enhance drones with the latest technology. GPS (Global Positioning System) is used for the drone's location, altitude, and speed. The Inertial Measurement Unit (IMU) is composed of an accelerometer, a gyroscope, and a magnetometer, which together provide data on the drone's orientation and movement. A barometer is used to measure altitude by detecting changes in air pressure. An optical flow sensor is used to measure the drone's horizontal speed and position by analysing the movement of visual patterns on the ground. An ultrasonic sensor is used to measure altitude by emitting sound waves and measuring the time it takes for them to bounce back. LiDAR (Light Detection and Ranging) is used to create high-resolution 3D maps of the drone's surroundings by emitting laser pulses and measuring the time it takes for them to reflect back. Cameras are used for visual navigation, obstacle


Fig. 5 Structure of multi sensor data: sensor modalities (radar sensor, electro-optical camera, thermal camera, acoustic sensor) feed multi-sensor fusion schemes; unimodal UAV detection and classification is handled by deep learning and non-deep learning methods, alongside multi-sensor fusion for UAV detection with DL

avoidance, and capturing images or video. The specific sensors used in a drone depend on its intended use, size, and cost. More advanced and expensive drones may use a combination of these sensors to enhance their capabilities and performance. To add sensors and machine learning algorithms to a drone, we follow these steps:

(1) First, determine which sensors the application needs; for delivery, for example, we use a GPS sensor, a radar sensor, and gyroscopes, while for temperature and humidity we have to add different sensors.
(2) Then physically install the sensors on the drone after deciding which sensors to use. Depending on the kind of sensors employed, this may entail mounting them on the drone body or securing them to the drone's payload.
(3) The sensors must be connected to the drone's control system after installation. This may entail using wireless communication technologies like Bluetooth or Wi-Fi, or directly linking the sensors to the drone's onboard computer.
(4) To implement machine learning algorithms in the drone, develop software that can analyse the data collected by the sensors and make decisions based on that data. This can involve machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
(5) The last step is to test the installed sensors and machine learning algorithms in real-world conditions to see how well the system performs. Based on the results, the system should be refined and adjusted to improve its accuracy and reliability.

Sensors and algorithms can be attached to a Raspberry Pi, and since it is a small microcomputer, it can easily be fixed inside the drone, letting the drone run every algorithm together in a collective manner. With this technology we can make a drone that can work in every field and do its best to complete the work.
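The decision-making software of step (4) can be sketched in miniature. The example below is a hypothetical pure-Python stand-in: `read_telemetry` fakes the sensor feed that would really come from hardware attached to the Raspberry Pi, and the anomaly detector is a simple rolling z-score threshold rather than a trained model:

```python
from collections import deque

class RollingAnomalyDetector:
    """Flag a reading as anomalous if it deviates from the recent mean
    by more than `threshold` standard deviations."""
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        anomalous = False
        if len(self.history) >= 5:  # need a few samples before judging
            mean = sum(self.history) / len(self.history)
            var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.threshold * std:
                anomalous = True
        if not anomalous:
            self.history.append(value)  # keep the baseline free of outliers
        return anomalous

def read_telemetry():
    """Hypothetical stand-in for polling a temperature sensor."""
    readings = [25.0, 25.1, 24.9, 25.0, 25.2, 25.1, 24.8, 25.0, 60.0, 25.1]
    yield from readings

detector = RollingAnomalyDetector(window=20, threshold=3.0)
alerts = [t for t in read_telemetry() if detector.update(t)]
print(alerts)  # the 60.0 spike is flagged
```

A model trained with TensorFlow or Scikit-learn could replace the threshold rule in step (4) without changing the surrounding loop.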

A Curated Study on Machine Learning Based Algorithms and Sensors …

259

4 Results/Comparison

Getting drones to their intended locations safely and reliably is one of the biggest challenges. This presents a number of difficulties, including avoiding accidents with other drones and objects, precisely locating the delivery site, and making sure the product is delivered securely and undamaged. For the best results, authors add detection technology to drones, such as radar, acoustic, visual, and radio frequency (RF) signal-based detection; Table 1 compares these technologies.

Table 1 Detection technology

Radar
- Advantage: Contrary to visual detection, low-cost frequency modulated continuous wave (FMCW) radars are impervious to fog, clouds, and dust. There is no need for a line of sight with radar. Radars with higher frequencies, such as mmWave radars, have better range resolution and can record the micro-Doppler signature (MDS).
- Disadvantage: Small radar cross sections (RCS) on drones make detection more difficult. The range of drone detection is constrained by mmWave's increased path loss.

Acoustic
- Advantage: Has no need for a LOS; therefore, it functions in low-visibility settings. Inexpensive, depending on the microphone arrays used.
- Disadvantage: Sensitive to background noise, particularly in noisy places. Detection performance is affected by wind conditions. Requires a library of acoustic signatures for various drones for training and testing purposes.

Visual
- Advantage: Low-priced screens make it simpler for humans to evaluate detection findings than other modalities, depending on the cameras and optical sensors that are used or whether current surveillance cameras are reused.
- Disadvantage: Dust, fog, clouds, and daylight all reduce visibility. Wide-field-of-vision, expensive thermal, and laser cameras could be needed. LOS is required.

Radio-frequency signal-based detection
- Advantage: Low-cost RF sensors. No LOS is required. Long detection range.
- Disadvantage: Not appropriate for detecting drones that fly independently without any form of communication. Learning the characteristics of RF signals takes training.
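The radar row above notes that a drone's small RCS makes detection harder. The standard radar range equation (a textbook relation, not a result from this paper) says maximum detection range scales with the fourth root of RCS, which the following sketch with hypothetical numbers illustrates:

```python
def detection_range(reference_range_m, reference_rcs_m2, target_rcs_m2):
    """Scale a radar's detection range by the fourth root of the RCS ratio,
    per the standard radar range equation (R_max is proportional to RCS^(1/4))."""
    return reference_range_m * (target_rcs_m2 / reference_rcs_m2) ** 0.25

# Hypothetical numbers: a radar that detects a 1 m^2 target at 2,000 m
# still sees a 0.01 m^2 small drone out to roughly 632 m.
print(round(detection_range(2000.0, 1.0, 0.01)))  # → 632
```

The fourth-root scaling is why a 100-fold drop in RCS shortens range by only about a factor of three, not a factor of 100.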


We can make different types of rotors and wings for different applications; for delivery we can use a multi-rotor drone, whereas for surveillance a fixed wing or single rotor is used so that it can move fast. Some types of radar systems and the features used in different research are also given; they help us analyse the result and accuracy of the models and compare the sensor systems that should be used with the machine learning algorithms fixed inside the drone via the Raspberry Pi microprocessor. Table 2 shows the types of drones with different rotor systems, and Table 3 shows the types of radar systems.

Table 2 Types of drones with different rotor systems [25]

Multi-rotor
- Most suitable applications: infrastructure mapping; site inspections; real estate assessment; corridor mapping (water, rail); environ mapping (shorelines)
- Typical max. payload: 10 kg (22 lbs)
- Typical sensors: electro optics (RGB), Lidar, IR cameras
- Typical max. flight time: 24 min w/ payload
- Range: N/A

Fixed wing
- Most suitable applications: corridor mapping (rail, pipeline); precision agriculture; military and intelligence
- Typical max. payload: 56 kg (125 lbs)
- Typical sensors: electro optics, IR cameras
- Typical max. flight time: 500 h
- Range: 9 miles (15 km)

Single-rotor
- Most suitable applications: corridor mapping (rail, pipeline); precision agriculture; military and intelligence; emergency response
- Typical max. payload: 25 kg (55 lbs)
- Typical sensors: electro optics (RGB), Lidar, IR cameras
- Typical max. flight time: 50–180 min
- Range: 2 miles (3.2 km)

Table 3 Types of radar system

Jahangir and Baker [20]
- Radar system: L-band holographic radar
- Range: 1 km; altitude 500 ft
- Features: height, speed, jerk
- Results: detection probability: 88%
- Definition: A form of radar device that creates a 3D image of a subject is called an L-band holographic radar. Due to its ability to effectively penetrate a variety of materials, including earth, foliage, and snow, the L-band frequency range, usually between 1 and 2 GHz, is frequently used in radar systems.

[21]
- Radar system: S-band bird radar
- Range: 0.3–0.4 km
- Features: polarimetric features
- Results: classification accuracy: 100%
- Definition: S-band bird radar is a form of radar device created specially to find and monitor the movement of birds. It works with radio waves in the S-band frequency region, usually between 2 and 4 GHz.

[22]
- Radar system: X-band CW radar
- Range: less than or equal to 30 m
- Features: eigenvectors and eigenvalues of the MDS
- Results: classification accuracy: 95%
- Definition: Continuous-wave (CW) transmissions are used by X-band CW radar, a form of radar system that detects and tracks objects while operating in the X-band frequency region, usually between 8 and 12 GHz. A constant-frequency signal is emitted by CW radar, which then monitors any frequency variations brought on by reflections off adjacent items.

[23]
- Radar system: S-band pulsed radar
- Features: 20 features extracted from track probability
- Results: classification accuracy: 100%
- Definition: Radar systems that use frequencies between 2 and 4 GHz are known as "S-band radar." Weather forecasting, aviation traffic management, and military monitoring are just a few of its many uses.

[24]
- Radar system: CW K-band radar
- Features: cadence frequency spectrum (CFS)
- Results: classification accuracy between 90 and 98%
- Definition: CW K-band radar refers to a radar device operating in the K-band frequency region, which usually spans frequencies between 18 and 27 gigahertz (GHz).
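The CW radar definitions above rely on the Doppler shift of the returned signal, which follows the textbook relation f_d = 2 * v * f0 / c; an illustrative calculation (not a result from the cited works):

```python
def doppler_shift_hz(radial_speed_mps, carrier_freq_hz, c=3.0e8):
    """Doppler shift of a radar return: f_d = 2 * v * f0 / c."""
    return 2.0 * radial_speed_mps * carrier_freq_hz / c

# A drone approaching at 10 m/s, seen by a 24 GHz K-band CW radar
print(doppler_shift_hz(10.0, 24e9))  # → 1600.0 (Hz)
```

Rotor blades add their own, much faster periodic Doppler components, which is exactly the micro-Doppler signature the table's classifiers exploit.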

5 Conclusion

In this paper, the authors have presented a case study of different machine learning algorithms and sensors that can be used in drones for different applications. Machine learning algorithms can be used to analyse data collected by sensors on the drone, such as images, videos, and infrared data, to identify patterns, make predictions, and perform tasks such as object recognition, anomaly detection, and mapping. Drones, machine learning, and sensors can work together to solve problems that would otherwise be expensive or risky to complete manually. Additionally, this technology may make data collection and analysis more exact and accurate, improving results and decision-making. Overall, the combination of drones with sensors and machine learning is a fascinating and quickly developing topic with considerable


potential for influence across many sectors. In the future, we plan to make a prototype combining the algorithms and sensors in a drone. In addition, we will try to use a PID [26, 27] rather than a Raspberry Pi to lighten the drone and boost its computational power.

References

1. Santamaria-Navarro A, Thakker R, Fan DD, Morrell B, Mohammadi AAA (2022) Towards resilient autonomous navigation of drones. In: Robotics research: the 19th international symposium ISRR. Springer International Publishing, Cham
2. Lu H, Li Y, Mu S, Wang D, Kim H, Serikawa S (2018) Motor anomaly detection for unmanned aerial vehicles using reinforcement learning. IEEE Internet Things J 5(4):2315–2322
3. Cui Q, Liu P, Wang J, Yu J (2017) Brief analysis of drone swarms' communication. In: 2017 IEEE international conference on unmanned systems (ICUS). Beijing, China, pp 463–466
4. Taha B, Shoufan A (2019) Machine learning-based drone detection and classification: state-of-the-art in research. IEEE Access 7:138669–138682
5. Yazdinejad A, Rabieinejad E, Dehghantanha A, Parizi RM, Srivastava G (2021) A machine learning-based SDN controller framework for drone management. In: 2021 IEEE Globecom workshops (GC Wkshps). Madrid, Spain, pp 1–6
6. Shan L, Miura R, Kagawa T, Ono F, Li H-B, Kojima F (2019) Machine learning-based field data analysis and modeling for drone communications. IEEE Access 7:79127–79135
7. Samaras S et al (2019) Deep learning on multi sensor data for counter UAV applications—a systematic review. Sensors 19(22):4837
8. Knott EF, Schaeffer JF, Tulley MT (2004) Radar cross section. SciTech Publishing, New York, NY, USA
9. Molchanov P, Harmanny RI, Wit JJ, Egiazarian K, Astola J (2014) Classification of small UAVs and birds by micro-Doppler signatures. Int J Microw Wirel Technol 6:435–444
10. Tait P (2005) Introduction to radar target recognition. IET, London, UK, p 18
11. Jokanovic B, Amin M, Ahmad F (2016) Radar fall motion detection using deep learning. In: Proceedings of the 2016 IEEE radar conference (RadarConf). Philadelphia, PA, USA, pp 1–6
12. Krizhevsky A, Sutskever I, Hinton GE (2017) ImageNet classification with deep convolutional neural networks. Commun ACM 60(6):84–90
13. Saqib M, Khan SD, Sharma N, Blumenstein M (2017) A study on detecting drones using deep convolutional neural networks. In: 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS). Lecce, Italy, pp 1–5
14. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition
15. Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer vision—ECCV 2014. Lecture Notes in Computer Science, vol 8689
16. Nalamati M, Kapoor A, Saqib M, Sharma N, Blumenstein M (2019) Drone detection in long-range surveillance videos. In: 2019 16th IEEE international conference on advanced video and signal based surveillance (AVSS). Taipei, Taiwan, pp 1–6
17. https://qtxasset.com/files/sensorsmag/nodes/2016/22743/FIG_1a.png
18. Li Y, Huang X, Zhang Y, Luo W (2018) Delivery by drone: an evaluation of unmanned aerial vehicle technology in reducing delivery time. IEEE Trans Eng Manage 65(4):494–505
19. Duchi E (2020) Learning to fly by crashing. IEEE Robot Autom Lett 5(2):337–344
20. Jahangir M, Baker C (2016) Robust detection of micro-UAS drones with L-band 3-D holographic radar. In: Proceedings of the IEEE sensor signal processing for defence (SSPD), pp 1–5


21. Torvik B, Olsen KE, Griffiths H (2016) Classification of birds and UAVs based on radar polarimetry. IEEE Geosci Remote Sens Lett 13(9):1305–1309
22. Molchanov P, Harmanny RIA, Wit JJM, Egiazarian K, Astola J (2014) Classification of small UAVs and birds by micro-Doppler signatures. Int J Microw Wirel Technol 6(3–4):435–444
23. Mohajerin N, Histon J, Dizaji R, Waslander SL (2014) Feature extraction and radar track classification for detecting UAVs in civilian airspace. In: Proceedings of the IEEE national radar conference, pp 674–679
24. Zhang W, Li G (2018) Detection of multiple micro-drones via cadence velocity diagram analysis. Electron Lett 54(7):441–443
25. https://ocumap.com/wpcontent/uploads/2020/08/Reality-IMT_Drone-Mapping-1024x572.png
26. Lu H, Serikawa S (2013) Design of freely configurable safety light curtain using hemispherical mirrors. IEEJ Trans Electr Electron Eng 8(S1):110–111
27. Lu H, Li Y, Li Y, Serikawa S, Kim H (2017) Highly accurate energy-conserving flexible touch sensors. Sens Mater 29(6):1–7

Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach

B. Ashreetha, A. Harshith, A. Sai Ram Charan, A. Janardhan Reddy, A. Abhiram, and B. Rajesh Reddy

Abstract Automated flaw identification in medical imaging is a promising new topic with many medical diagnostic applications. Automatic tumour diagnosis using magnetic resonance imaging (MRI) provides crucial data for therapeutic decision making. When looking for errors in brain MRIs, human evaluation is the gold standard; however, this strategy is impractical given the enormous quantity of data being handled. For this reason, reliable and automated classification methods are essential for reducing human mortality. Since saving the radiologist's time while achieving proven accuracy is a priority, automated tumor detection systems are being developed. Due to the complexity and diversity of brain tumors, detecting them using MRI is challenging. To address the limitations of previous approaches to tumor detection in brain MRI, we suggest using transfer learning with the InceptionV3, VGG19, ResNet50, and MobileNetV2 deep learning models. Utilizing a deep learning framework and an image classifier, brain cancer may be detected via MRI with remarkable accuracy. We also use the Flask framework to predict the presence of tumors in web applications.

Keywords Efficientnetb3 · Transfer learning · Weights · Layer · Deep learning · Neural network

B. Ashreetha (B) Department of ECE, School of Engineering, Mohan Babu University, Sree Vidyanikethan Engineering College, Tirupati 517102, Andhra Pradesh, India e-mail: [email protected] A. Harshith · A. S. R. Charan · A. J. Reddy · A. Abhiram · B. R. Reddy Department of Electronics and Communication Engineering, Sree Vidyanikethan Engineering College, Tirupati, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_22


1 Introduction

Diagnosing and treating a brain tumor is among the most difficult medical challenges. In the preliminary stage of tumor development, effective and fast analysis is always crucial for the radiologist. Conventionally, histological grading based on a stereotactic biopsy is the most accurate way to determine the severity of a brain tumor [1]. For a biopsy, a neurosurgeon makes a small incision in the patient's skull and removes a sample of tissue through it. The procedure carries a high risk of serious complications such as infection, brain hemorrhage, seizures, severe migraines, stroke, coma, and even death. A further concern is that stereotactic biopsy is not always precise, which can lead to a false diagnosis and improper clinical care of the illness.

Using deep learning for brain tumor detection and classification involves training a deep neural network on a large dataset of brain images, typically with supervised learning. The network learns to automatically extract features from the images and predict the presence and type of brain tumors. Several deep neural network architectures have been used for brain tumor prediction, including InceptionV3, VGG19, ResNet50, and MobileNetV2, each with its strengths and weaknesses. These architectures can be fine-tuned or trained from scratch, depending on the size and diversity of the available dataset. Deep learning-based approaches to brain tumor prediction have shown promising results, with high accuracy in detecting and classifying brain tumors [2–4]. Clinicians might use this information to better assess the prognosis of brain tumors and decide on the best course of therapy for their patients.

2 Literature Review

"Deep Learning for Brain Tumor Classification" by Havaei et al. [11] presents a deep convolutional neural network with long short-term memory (CNN-LSTM) to classify brain tumors.

"3D Convolutional Neural Networks for Brain Tumor Classification" by Wu et al. proposes a 3D CNN-LSTM model for brain tumor classification, which achieved high accuracy rates on a public brain tumor dataset.

"Deep Learning-Based Detection of Brain Tumors Using MR Images: A Systematic Review" by Singh et al. summarizes recent advances in brain tumor detection using deep learning techniques, including LSTM and CNN, and provides an overview of the challenges and future directions in this field.

"Brain Tumor Classification Using Deep Learning Based on MRI Images" by Zhang et al. [1] presents a deep CNN model for brain tumor classification, achieving high accuracy rates on a public brain tumor dataset [5].


"Brain Tumor Detection and Classification Using Convolutional Neural Networks" by Liu et al. [4] presents a deep CNN model for brain tumor detection and classification, achieving high accuracy rates on a private dataset.

"Brain Tumor Classification Using Convolutional Neural Networks with Dynamic Contrast-Enhanced MRI" by Velthuizen et al. [6] introduces a convolutional neural network with long short-term memory (CNN-LSTM) as a means of classifying brain tumors [6].

"Deep Convolutional Neural Networks for Brain Tumor Classification: A Comparative Study" by Ammar et al. [7] compares several alternative CNN architectures for the classification of brain tumors and suggests a deep CNN model that showed excellent performance on a publicly available brain tumor dataset [7].

"Brain Tumor Classification Using Deep Convolutional Neural Networks on MRI Images" by Akter et al. [8] proposes a deep CNN model that achieved high accuracy rates on a publicly available brain tumor dataset [8].

"Deep Convolutional Neural Networks for Brain Tumor Classification and Segmentation: A Review" by Islam et al. [9] provides an overview of recent advances in brain tumor classification using deep CNNs and discusses the challenges and future directions in this field [9].

"Brain Tumor Classification with Convolutional Neural Networks: A Comparative Study with Radiomics Features" by Islam et al. [10] compares the performance of deep CNNs with radiomics features for brain tumor classification and proposes a hybrid model that achieves high accuracy rates on a public brain tumor dataset [10].

3 Methodology

The methodology for using InceptionV3, VGG19, ResNet50, and MobileNetV2 for brain tumor classification is as follows.

Data collection and pre-processing: To train the deep learning models, a dataset of brain MRI scans is first collected. The images are pre-processed to make them suitable for the models; this may include noise removal, intensity normalization, and resizing to a fixed size.

Feature extraction: The pre-processed MRI images are fed into pre-trained InceptionV3, VGG19, ResNet50, or MobileNetV2 deep neural networks to extract high-level features [12–14]. The networks' convolutional layers serve as feature extractors, drawing out characteristics that are unique to tumors.

Transfer learning: Through their extensive training on millions of images, the pre-trained networks have learned a great deal about the characteristics found in photos. Using transfer learning, we take advantage of the pre-trained networks and tweak them to better perform the categorization of brain tumors. We modify the last few layers of the networks and replace them with new layers suited to the new task. These new layers are trained on the brain tumor dataset, while the earlier layers are frozen so that


the features learned on a large dataset are not lost. This helps the model generalize better and avoid overfitting [15].

Model training and evaluation: The adapted networks are trained on the brain tumor dataset using backpropagation and gradient descent. Accuracy, recall, precision, and F1-score are among the measures used to assess the models' efficacy. The models are tested on a separate validation set or held-out test set to evaluate their generalization performance.

Model comparison: The performance of the InceptionV3, VGG19, ResNet50, and MobileNetV2 models is compared to select the best-performing model for the brain tumor classification task. This comparison may involve the metrics mentioned earlier, as well as the computational efficiency of the models.

Visualization and interpretation: The learned features and decision boundaries of the models are visualized and interpreted to gain insights into the classification task and the underlying biology of brain tumors. This may involve techniques such as saliency maps or activation maps to understand what parts of the image the model focuses on when making its predictions [16].

Overall, the methodology involves adapting pre-trained deep learning models to the specific task of identifying tumors in MRI scans, training and evaluating the models on a brain tumor dataset, and comparing the performance of different models to select the best-performing one. The methodology has proven effective for this task and can be adapted for other medical imaging applications [17].
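The freeze-and-replace scheme described above can be sketched in Keras. The backbone choice, input size, and the two-layer head below are illustrative assumptions; the paper does not specify its exact head architecture or hyperparameters.

```python
# Sketch of the transfer-learning setup described above (assumed
# hyperparameters, not the authors' exact configuration).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(weights="imagenet", input_shape=(224, 224, 3)):
    # Pre-trained backbone with its ImageNet classification head removed.
    base = tf.keras.applications.MobileNetV2(
        weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze earlier layers so learned features are kept

    # New task-specific layers, trained on the brain-tumor dataset.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary: tumor / no tumor
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping `MobileNetV2` for `InceptionV3`, `VGG19`, or `ResNet50` changes only the backbone line, which is what makes the four-way comparison in this paper cheap to run.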

3.1 Architecture

See Fig. 1.

3.2 Data Collection

The dataset contains two directories: one for development and one for evaluation. 220 HGG patients and 27 LGG patients are in separate subfolders under the "train" folder. Brain images of 110 patients with HGG and LGG are included in the "test" folder. Every patient's MRI data includes five distinct images: T1, T2, OT, FLAIR, and T1C (ground truth of tumor segmentation). These images are 240 × 240 in size with a resolution of 1 mm³, have had the skull removed, and are all recorded in the .mha file format. Every voxel in the ground-truth images is labeled with a zero to represent a normal pixel or a non-zero value to represent a portion of a tumor cell [18] (Fig. 2).
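The zero/non-zero labeling convention above maps directly to an image-level tumor label. A minimal NumPy sketch (the function name and array shapes are illustrative, not part of the dataset's tooling):

```python
import numpy as np

def has_tumor(ground_truth):
    # The ground-truth images mark normal tissue with 0 and tumor-cell
    # regions with non-zero labels, so any non-zero voxel means the
    # scan contains (part of) a tumor.
    return bool((np.asarray(ground_truth) != 0).any())

normal = np.zeros((240, 240))        # every voxel labeled 0 (normal)
lesion = normal.copy()
lesion[100:110, 120:130] = 2         # a small non-zero tumor region
```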

Fig. 1 Architecture of proposed model (pipeline: MRI dataset, data augmentation with rescale, shear, zoom, and horizontal flip; training and testing of InceptionV3/VGG19/ResNet50/MobileNetV2; best model saved as an H5 file; Flask framework serving a web application that takes an image input and predicts Brain Tumor-Yes or Brain Tumor-No)

Fig. 2 Dataset images considered

3.3 Pre-processing

The ImageDataGenerator class from the tensorflow.keras.preprocessing.image module is used to create the image data generator. The rescale argument scales the pixel values of the input images to a range of 0 to 1, making it easier for the neural


network to learn from the data. The horizontal_flip and vertical_flip arguments are data augmentation techniques that randomly flip the images horizontally or vertically, which increases the amount of training data available and can help prevent overfitting. The validation_split argument specifies the percentage of images to be used for validation.

Two data sets, a training set and a testing set, are generated using the flow_from_directory method of the ImageDataGenerator class. The directory argument specifies the path to the directory containing the image files. The shuffle argument shuffles the data before each epoch, which helps prevent overfitting. The target_size argument specifies the size of the images after they are resized. The subset argument specifies whether to generate the training or testing set. The color_mode option specifies the image's color mode. Given that there are only two classes to distinguish (tumor or non-tumor), the class_mode parameter designates binary labels for the classification process [19]. The resulting training_set and testing_set objects are then used to train and evaluate the deep neural network model for brain tumor classification.
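The pipeline described above can be sketched as follows. The directory path, image size, and split fraction are placeholders rather than the authors' values; the synthetic batch at the end is only there to show that the rescaling is applied.

```python
# Data-pipeline sketch matching the arguments described above.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,      # pixel values scaled into [0, 1]
    horizontal_flip=True,   # augmentation: random horizontal flips
    vertical_flip=True,     # augmentation: random vertical flips
    validation_split=0.2,   # fraction of images held out for validation
)

# With a directory of images the two sets would be built like this
# (hypothetical path):
# training_set = datagen.flow_from_directory(
#     "data/brain_mri", target_size=(224, 224), color_mode="rgb",
#     class_mode="binary", shuffle=True, subset="training")
# testing_set = datagen.flow_from_directory(
#     "data/brain_mri", target_size=(224, 224), color_mode="rgb",
#     class_mode="binary", shuffle=True, subset="validation")

# Demonstration on a synthetic in-memory batch (flow() accepts arrays):
x = np.random.randint(0, 256, size=(10, 224, 224, 3)).astype("float32")
y = np.random.randint(0, 2, size=(10,))
batch_x, batch_y = next(datagen.flow(x, y, batch_size=4))
```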

3.4 Model Training

InceptionV3

InceptionV3 can be used for transfer learning: it is trained on a large dataset (such as ImageNet) and then fine-tuned for a specific task (such as brain tumor prediction). This allows the model to leverage the pre-trained features from ImageNet and adapt them to the task at hand, which can improve performance and reduce the amount of training data required (Fig. 3).

The training-accuracy curve shows how well the model performed on the training dataset over time. A value of 98% means the model correctly classified or predicted 98% of the training data, indicating that it has learned well from the training dataset. The testing-accuracy curve shows how well the model performs on a dataset it was not exposed to during training. A value of 93% indicates that the model correctly classifies 93% of the testing data. This is good performance and shows that the model has generalized well to new data, which is important for the model to be useful in real-world scenarios. Overall, the accuracy plot shows that the model classifies brain tumors accurately on both the training and testing datasets (Fig. 4).

A loss plot for a machine learning model shows how well the model is minimizing the difference between actual and predicted output [20]. The training-loss curve shows the loss of the model on the training dataset over time; the value of 0.1 indicates that the model minimizes the error to a great extent on the training dataset. The testing-loss curve shows the loss of the model on a separate testing dataset


Fig. 3 InceptionV3 accuracy plot

Fig. 4 InceptionV3 loss plot


Fig. 5 InceptionV3 ROC plot

that the model has not seen during training. The value of 0.7 indicates that the model cannot minimize the error to the same extent on the testing dataset as on the training dataset (Fig. 5).

MobileNetV2

MobileNetV2 is designed to be lightweight: it has fewer parameters than other deep neural network architectures like InceptionV3 or ResNet50. This makes it well suited to resource-constrained environments like mobile devices, where computational resources may be limited (Figs. 6, 7 and 8).

VGG19

Despite its simplicity, VGG19 has been shown to achieve high accuracy on various image classification tasks, including medical imaging. This is due to its deep architecture, which allows it to learn complex features and patterns from the input images (Figs. 9, 10 and 11).

ResNet50

ResNet50 uses residual connections, which allow it to learn features more effectively and avoid the problem of vanishing gradients. This is important for deep neural network architectures like ResNet50, which has many layers (Figs. 12, 13 and 14).


Fig. 6 MobileNetV2 accuracy plot

Fig. 7 MobileNetV2 loss plot


Fig. 8 MobileNetV2 ROC plot

Fig. 9 VGG19 accuracy plot


Fig. 10 VGG19 loss plot

Fig. 11 VGG19 ROC plot

3.5 Evaluation Metrics

Accuracy: This measures the rate at which brain tumor images are diagnosed correctly. It is determined by dividing the number of correct predictions by the total number of predictions.


Fig. 12 Resnet50 accuracy plot

Fig. 13 Resnet50 loss plot


Fig. 14 Resnet50 ROC plot

Precision: This metric assesses the accuracy of positive predictions. It is calculated by dividing the number of true positives by the sum of true positives and false positives.

Recall: This metric measures the fraction of actual positive cases that are correctly identified. It is calculated by dividing the number of true positives by the sum of true positives and false negatives.

F1 score: The harmonic mean of precision and recall is a popular statistic for evaluating classifier effectiveness. It is determined by taking twice the product of precision and recall and dividing it by the sum of precision and recall.

Confusion matrix: An overview of the classification outcomes is provided as a table, which details the counts of correct and incorrect classifications for each class. Accuracy, recall, precision, and the F1 score may all be derived from this data (Table 1).

Table 1 Comparative results

Model         Accuracy   F1-score   Precision   Recall
MobileNetV2   0.9956     0.9973     0.9970      0.9949
ResNet50      0.8559     0.8865     0.8723      0.9252
InceptionV3   0.9782     0.9840     0.9888      0.9806
VGG19         0.9563     0.9625     0.9558      0.9759
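The four metric definitions can be written directly from the confusion-matrix counts. This is a generic sketch, not the paper's own evaluation code, and it assumes non-degenerate counts (no division by zero).

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels,
    computed from the confusion-matrix counts TP/TN/FP/FN."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())  # true positives
    tn = int(((y_true == 0) & (y_pred == 0)).sum())  # true negatives
    fp = int(((y_true == 0) & (y_pred == 1)).sum())  # false positives
    fn = int(((y_true == 1) & (y_pred == 0)).sum())  # false negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```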


Fig. 15 Prediction describes no tumor

3.6 Prediction

The front end refers to the part of a website that the user sees and interacts with directly. It includes everything the user sees, from the text and its formatting to the pictures and videos, charts and tables, menus and buttons. The front end is coded in several markup and scripting languages, including HTML5, CSS3, and JavaScript; the Python web application itself is created with Flask (Fig. 15).

The trained deep learning models for brain tumor classification (InceptionV3, VGG19, ResNet50, or MobileNetV2) can be served with Flask, a popular Python web framework, to create a web application allowing users to upload MRI scans and receive a prediction of whether or not a tumor is present. The basic steps for using Flask for brain tumor prediction are:

Install Flask: Flask may be installed via the Python package manager pip by entering "pip install Flask" in the terminal.

Set up your Flask app: Create a new Python file and import the necessary libraries, including Flask, NumPy, and Keras. Set up a Flask app and define a route to handle file uploads.

Load the model: Load the trained model into your Flask app using Keras. This allows you to use the model to make predictions on the uploaded MRI scans.

Handle file uploads: Use Flask's request module to handle file uploads. The uploaded file will be in the form of a byte stream, so you will need to decode it and convert it to a NumPy array [21].

Preprocess the input: Preprocess the input MRI scan by applying the same preprocessing steps that were used during model training. This may include normalizing the intensity, resizing the image, and converting it to a 3D array.

Make a prediction:


Use the loaded model to make a prediction on the preprocessed input. The prediction will be a probability score indicating the likelihood of a tumor being present.

Return the prediction: Return the prediction to the user using Flask's render_template method. You can display the prediction as text or create a visualization to show the user where the model detected the tumor [22].

Deploy your app: Once development is complete, you can deploy the Flask app to a server or hosting platform to make it accessible to users. Flask provides a flexible and powerful platform for building web applications.
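The steps above can be condensed into one route. The route name, upload field, 0.5 threshold, and the stub model are illustrative assumptions; in the real app the stub would be replaced by a Keras `load_model("best_model.h5")` call, and a proper image decoder would replace the raw-bytes reshape.

```python
import io
import numpy as np
from flask import Flask, request

app = Flask(__name__)

# Stand-in for the trained Keras model (assumption: a callable that
# returns a tumor probability per input). In the real app:
#   model = tensorflow.keras.models.load_model("best_model.h5")
def model_predict(batch):
    return np.full((batch.shape[0], 1), 0.9)

def preprocess(raw_bytes, target_size=(224, 224)):
    # Decode the uploaded byte stream and mirror training-time
    # preprocessing: scale to [0, 1] and add a batch dimension.
    # (Here the upload is assumed to be raw RGB bytes of target_size.)
    arr = np.frombuffer(raw_bytes, dtype=np.uint8)
    arr = arr[: target_size[0] * target_size[1] * 3].reshape(
        target_size[0], target_size[1], 3)
    return arr.astype("float32")[np.newaxis] / 255.0

@app.route("/predict", methods=["POST"])
def predict():
    raw = request.files["scan"].read()
    prob = float(model_predict(preprocess(raw))[0, 0])
    verdict = "Brain Tumor-Yes" if prob >= 0.5 else "Brain Tumor-No"
    return {"probability": prob, "prediction": verdict}
```

During development the route can be exercised without a running server via `app.test_client()`, which posts a multipart upload exactly as a browser form would.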

4 Conclusion

In conclusion, the study evaluated the performance of four popular deep learning architectures, InceptionV3, VGG19, MobileNetV2, and ResNet50, for brain tumor classification. The models were trained on a dataset of brain MRI images and tested on a separate set of images. All four models exhibited promising results in classifying brain tumor images, with MobileNetV2 achieving the highest accuracy. This supports the efficacy of deep learning methods in automating brain tumor categorization and, by extension, contributing to the detection of brain cancer. While MobileNetV2 achieved the highest accuracy, the other models also performed well and may be useful in certain contexts or with different types of data. The study highlights the importance of considering different deep learning architectures and selecting the one that best fits the specific problem at hand. Overall, the findings demonstrate the potential of deep learning approaches in medical imaging, specifically in brain tumor classification, and may be valuable for further research in developing more accurate and efficient methods for automated brain tumor diagnosis.

References

1. Zhang H, Zhang L, Zhang H, Wang Y, Qiu X (2020) Brain tumor classification using deep learning based on MRI images. J Med Imaging Health Inform 10(4):841–847
2. Kamnitsas K, Ledig C, Newcombe VFJ, Simpson JP, Kane AD, Menon DK, Rueckert D, Glocker B (2017) A comparison of 2D and 3D convolutional neural networks for brain tumor segmentation. In: Proceedings of the 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017). Melbourne, Australia, pp 324–328
3. Ronneberger O, Fischer P, Brox T (2015) Automated brain tumor detection and segmentation using U-Net based fully convolutional networks. In: Proceedings of the 18th international conference on medical image computing and computer-assisted intervention (MICCAI 2015). Munich, Germany, pp 234–241
4. Liu H, Liu Y, Zhang Y, Yang Y (2019) Brain tumor detection and classification using convolutional neural networks. In: Proceedings of the 2019 IEEE international conference on mechatronics and automation (ICMA). Tianjin, China, pp 472–477


5. Wang Z, Wang Y, Chen J (2020) A survey of deep learning-based brain tumor detection and segmentation. J Healthc Eng 2020(8895501):1–17
6. Velthuizen RP, Ramaswamy N, Liu D, Yankeelov TE, Graves EE (2021) Brain tumor classification using convolutional neural networks with dynamic contrast-enhanced MRI. Front Oncol 11:711941
7. Ammar S, Kamel M, Salem AH (2021) Deep convolutional neural networks for brain tumor classification: a comparative study. J Med Imaging Health Inform 11(8):1945–1957
8. Akter B, Hossain MI, Islam SAM (2021) Brain tumor classification using deep convolutional neural networks on MRI images. In: Proceedings of the 2021 international conference on electrical, computer and communication engineering (ECCE). Cox's Bazar, Bangladesh, pp 1–5
9. Islam SAM, Hossain MI, Abdullah RF (2021) Deep convolutional neural networks for brain tumor classification and segmentation: a review. SN Comput Sci 2(6):1–21
10. Islam SAM, Hossain MI, Al Mamun MA, Hasan MA (2020) Brain tumor classification with convolutional neural networks: a comparative study with radiomics features. In: Proceedings of the 2020 IEEE region 10 symposium (TENSYMP). Dhaka, Bangladesh, pp 450–453
11. Havaei M, Davy A, Warde-Farley D, Biard A, Courville A, Bengio Y, Pal C, Jodoin PM, Larochelle H (2017) Brain tumor segmentation with deep neural networks. Med Image Anal 35:18–31
12. Chang K, Balachandar N, Lam C, Yi D, Brown J, Beers A, Rosen B, Rubin DL, Kalpathy-Cramer J, Napel S (2018) Distributed deep learning networks among institutions for medical imaging. J Am Med Inform Assoc 25(8):945–954
13. Kumaran N, Begum IP, Ramani R, Pournima S, Rani DL, Radhika A (2023) Brain disease diagnosis prediction model for fuzzy based generic shaped clustering and HPU-Net. Int J Intell Syst Appl Eng 12(1s):291–301. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/3416
14. Iscan O, Gül F (2019) Brain tumor detection using convolutional neural networks. Turk J Electr Eng Comput Sci 27(3):1873–1885
15. Dankan Gowda V, Prasad K, Anil Kumar N, Venkatakiran S, Ashreetha B, Reddy NS (2023) Implementation of a machine learning-based model for cardiovascular disease post exposure prophylaxis. In: 2023 international conference for advancement in technology (ICONAT). Goa, India, pp 1–5. https://doi.org/10.1109/ICONAT57137.2023.10080833
16. Praveena K, Venkatesh US, Sahoo NK, Ramanan SV, Bee MKM, Darwante NK (2022) Brain tumor detection using ANFIS classifier and segmentation. Int J Health Sci 6(S3):11817–11828
17. Pitchai R, Praveena K, Murugeswari P, Kumar A, Bee MM, Alyami NM, Sundaram RS, Srinivas B, Vadda L, Prince T (2022) Region convolutional neural network for brain tumor segmentation. Comput Intell Neurosci 2022, Article ID 8335255, 9
18. Selvakanmani S, Ashreetha B, Naga Rama Devi G, Misra S, Jayavadivel R, Suresh Babu P (2022) Deep learning approach to solve image retrieval issues associated with IOT sensors. Measurement: Sens 24:100458. ISSN 2665-9174
19. Punitha S, Selvaraj M, Kumar NA, Nagarajan G, Kiran CS, Karyemsett N (2022) Development of hybrid optimum model to determine the brake uncertainties. In: 2022 IEEE 2nd Mysore sub section international conference (MysuruCon), pp 1–4
20. Anil kumar N, Bhatt BR, Anitha P, Yadav AK, Devi KK, Joshi VC (2022) A new diagnosis using a Parkinson's disease XGBoost and CNN-based classification model using ML techniques. In: 2022 international conference on advanced computing technologies and applications (ICACTA), pp 1–6
21. Praveena K, Vimala C, Hemachandra S, Praveena K (2023) Lung carcinoma detection using deep learning. In: 2023 international conference on advances in electronics, communication, computing and intelligent information systems (ICAECIS). Bangalore, India, pp 177–182. https://doi.org/10.1109/ICAECIS58353.2023.10170278
22. Abd Algani YM, Rao BN, Kaur C, Ashreetha B, Sagar KVD, Baker El-Ebiary YA (2023) A novel hybrid deep learning framework for detection and categorization of brain tumor from magnetic resonance images. Int J Adv Comput Sci Appl (IJACSA) 14(2). https://doi.org/10.14569/IJACSA.2023.0140261

DeepPose: A 2D Image Based Automated Framework for Human Pose Detection and a Trainer App Using Deep Learning

Amrita Kaur, Anshu Parashar, and Anupam Garg

Abstract Tracking and estimating human postures through videos taken by various cameras has always been a very important and challenging task. Human posture estimation is not only a significant computer vision problem, but it also plays a crucial role in a variety of real-world applications such as video surveillance, Human–Computer Interaction (HCI), medical imaging, etc. The aim of the proposed work is to train people in different activity domains such as sports, gym, exercise, yoga, etc., in settings where physical interaction between instructor and trainee is not possible. Here, we detect human poses and check whether the pose made by the person is correct or not, based on correct input data points. A 2D stick figure model of the person is used, consisting of joint points, which is displayed to the user along with the predicted accuracy of their posture. The performance of the proposed framework is compared with existing models and gives remarkable results in terms of accuracy, i.e., 94%. The framework helps the person improve their posture so that health risks are minimized. By adjusting one's posture and training style, the proposed framework can help to avoid these mishaps. It will also assist athletes in improving their techniques, avoiding injury, and increasing their endurance.

Keywords HCI · Human pose estimation · Medical imaging · Convolutional neural networks · Deep learning

A. Kaur · A. Garg Thapar Institute of Engineering and Technology, Patiala, India e-mail: [email protected] A. Garg e-mail: [email protected] A. Parashar (B) National Institute of Technology, Kurukshetra, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_23


1 Introduction

The method of determining the locations of 2D- or 3D-based human body parts from still photos or films is known as Human Pose Estimation (HPE). Traditional HPE techniques typically use additional hardware devices to record human poses and build a human skeleton using the recorded body joints [1]. These techniques are either costly or ineffective. The HPE problem in the field of computer vision has received a lot of attention over the last ten years [2], and techniques for estimating human poses have improved steadily over the past few decades. As a result of the broad interest in various fields, new applications regularly appear alongside technological advancements.

In addition to being a significant computer vision challenge, human pose estimation is crucial in a number of real-world applications. In specific situations, video surveillance tries to track and observe the locations and movements of pedestrians; it represents the initial use for HPE technologies. Grocery stores and airport hallways, for example, are frequent locations. The development of cutting-edge HCI systems using human pose estimation has been swift; these devices provide precise instruction analysis by capturing human body positions [3]. Moreover, intelligent driving has become a fresh practical application in recent years. The field of digital entertainment, which includes computer games, computer animation, and movies, has also grown significantly; people, for instance, enjoy playing games with body sensors. The automated medical industry has likewise made extensive use of HPE: it can be utilised to help clinicians monitor patients' daily activities from a distance, considerably streamlining the therapy procedure [3, 4]. HPE is also prompted to get involved in sports reporting and live broadcasting, where it is used to monitor the athletes' movements and activity.
The estimated postures also make it possible to analyse the precise movements of the athletes' actions. Military applications, infant brain development, virtual reality, and other uses are among the further applications [5, 6]. Due to the large number of applications that can significantly benefit humans, a lot of effort has been put into this subject over the past 15 years [7]. For instance, HPE enables a greater level of reasoning when it comes to activity recognition and human–computer interaction. It is a fundamental component of marker-free motion capture (MoCap) technology, whose applications include character animation and the clinical study of gait disorders [8].

The proposed work is software which will train people in different activity domains such as sports, gym, exercise, yoga, etc. [9]. This system will detect human poses and check whether the pose made by the person is correct or not on the basis of correct input data points uploaded into the system [10]. Here, correct input data means the ideal human pose data, against which the user pose will be compared and marked. This way, the software can report the accuracy and correctness of a posture. An attempt is made to propose a framework which builds an entire 2D stick figure model of the person, consisting of joint points, with the help of advanced deep learning models such as Convolutional Neural Networks (CNN), Distribution-Aware coordinate Representation of Key-point (DARK), UniPose LSTM, Deep Neural Networks, the PoseHD framework, etc. The framework is designed to work for both


posture correction and as a trainer for teaching sports, yoga, dance and gym exercises. The framework also helps sportspersons improve their techniques, avoid injury, and train their endurance.

1.1 Motivation and Objectives

Extensive research has been done on detecting and estimating body posture using various deep learning techniques like CNN regression, and researchers have achieved good accuracy. However, sparse work has been done on handling occlusion in pose estimation and on real-time superimposition of a 2D stick figure over the human body. Development of a real-time 2D stick figure over the human body will help people learn different poses with precise technique.

The scope of Human Pose Estimation (HPE) and computer-vision-based pose recognition is immense, and the field can take several future directions. Similar projects which could be executed in the near future include robbery detection, intrusion detection, human–object interaction, and training robots. These and a vast variety of activity classes could bring a major revolution in the area of science and real-time operations with the help of emerging technologies.

The primary objectives of the work are as under:
• To locate the joints in the human body from the image or video.
• To render a 2D stick figure from the joints identified above.
• To use the generated stick figure to check whether the pose made by the user is correct or not with respect to correct postures from sports activities, gym, physiotherapy exercises, dance steps and yoga.

This work has the motive of applying novel technologies to bring the Human Pose Estimation (HPE) system into use and maximize its potential for deployment in real-life scenarios.
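The third objective, scoring a user's pose against ideal joint points, can be sketched by comparing limb directions between the two stick figures. The joint layout, limb list, and cosine-similarity scoring rule below are illustrative assumptions, not the framework's actual method.

```python
import numpy as np

# Illustrative skeleton: pairs of joint indices forming limbs (e.g.
# shoulder-elbow, elbow-wrist); a real stick-figure model would define
# its own joint layout.
LIMBS = [(0, 1), (1, 2), (3, 4), (4, 5)]

def pose_similarity(user, reference, limbs=LIMBS):
    """Mean cosine similarity between corresponding limb vectors of a
    user pose and the ideal reference pose (both (n_joints, 2) arrays
    of 2D joint coordinates). 1.0 means every limb points the same way
    as in the reference, regardless of position or scale."""
    user = np.asarray(user, dtype=float)
    reference = np.asarray(reference, dtype=float)
    sims = []
    for a, b in limbs:
        u = user[b] - user[a]            # limb vector in the user pose
        r = reference[b] - reference[a]  # same limb in the reference pose
        sims.append(np.dot(u, r) / (np.linalg.norm(u) * np.linalg.norm(r)))
    return float(np.mean(sims))
```

Because only limb directions are compared, the score is invariant to where the person stands and how large they appear in the frame, which is the behaviour a trainer app needs.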

A. Kaur et al.

2 Literature Survey

Deep learning determines different poses nearly accurately and has applications in many different fields. The issue of localizing human joints, also referred to as Human Pose Estimation (HPE), has attracted interest in computer vision, and the literature surveyed here presents a comprehensive picture of human pose estimation, including several articulated poses. One line of work estimates human postures using a DNN-based regression algorithm [8–10] and has the benefit of predicting poses holistically; it is a straightforward but effective formulation, since regression using DNNs captures context and pose-related reasoning holistically. The typical method for estimating human pose in videos involves matching the 2D image elements and extracting the pertinent features [11]. In the retrieval process, the mapping between the images and the 2D stick figure is critical. Traditional methods [12–17] map global joint localization or local joint detection, which limits the performance of a network. Continuous learning is used to estimate the overall perspective of an image in order to overcome the drawback of global joint localization [13]. The various parameters are calculated via multiple manifold learning [14]. The estimation process is improved with the use of a tree structure, which has the benefit of efficient calculation via dynamic programming. The suggested methodology employs a multitask learning approach to Human Pose Estimation (HPE). The Multi-task learning Autoencoder Model (MTAM) first extracts various features from both global and local body parts [15]. To incorporate those properties, a multigraph learning-based method is suggested. In order to acquire the hidden representations for both global and local sections, MTAM offers a single shared autoencoder model [16, 17]. Finally, joint localization detection tasks are added to MTAM, a new architecture for estimating a person's pose that makes use of a multi-layer convolutional network architecture [18]. The most challenging task is to extract human pose from monocular RGB images with no specification or prior assumption; this specific variation of deep learning achieves great performance compared to traditional architectures [5]. Researchers have done tremendous work on increasing the accuracy of posture detection, but more research is still needed on handling occlusion and lowering the computational load. For instance, auto_fit is one application that suggests workouts and tracks them [16]. Auto_fit performs pose estimation with PoseNet to find 17 body keypoints, followed by a DNN classifier to identify the state of the exercise [16].
It takes a live video feed and counts the repetitions of the exercise performed; the app mainly covers two exercises, jumping jacks and shoulder lateral raises. WrnchCaptureStream is another iOS application that performs markerless full-body motion capture, recording human movements from an iOS device without requiring the person to wear tracking sensors [19]. Powerful AI algorithms run against the live camera feed to identify skeletal joints, poses, and movement, and real-time visualizations of the person in motion are displayed on the iOS device's screen with an overlay of a 3D skeleton mesh and avatar. Smart Mirror E-Health Assistant is another application, consisting of a smart mirror that works on its own algorithm and behaves like a smart assistant [5, 19]. Smart Mirror uses face recognition, and its algorithm identifies the person's posture and analyzes the movement, improving an individual's upright posture at a considerable rate. Motivated by the above literature, the main aim of the proposed framework is to help people develop better posture while doing physiotherapy exercises or any other workout. Table 1 gives a summary of the related works.


Table 1 Research findings for existing literature

Reference | Datasets | Image | Techniques | Performance parameters | Future scope
Bulat et al. [2] | MPII, LSP | 2-D | Deep learning, CNNs, soft-gated skip connections | PCKh on MPII: 94.4%; PCKh on LSP: 93.1% | Applying techniques to 3-D models
Zhang et al. [6] | COCO, MPII, Human3.6M | 2-D | Convolutional Neural Networks (CNN), Distribution-Aware coordinate Representation of Keypoint (DARK) | Other methods PCKh@0.5 (mean): 90.2%; DARK PCKh@0.5 (mean): 90.6% | Including more datasets; applying techniques to 3-D models
Artacho et al. [7] | LSP, MPII, Penn Action, BBC Pose | 2-D | UniPose LSTM | (LSP) Percentage of Correct Parts: 72.8%; (MPII) Percentage of Correct Keypoints (PCKh@0.5): 94.5% | Multi-human pose estimation; applying techniques to 3-D models
Moon et al. [9] | ICVL, NYU, MSRA, HANDS 2017, ITOP | 2-D and 3-D | V2V-PoseNet, 2D CNNs | Other methods: 13.2 mm (mean error); V2V method: 7.49 mm (mean error) | Including more datasets for 3D human pose estimation
Sun et al. [10] | MS COCO, MPII | 2-D | Deep CNNs | Other methods: 90.8% accurate (average); MSPN: 92.6% accurate | Including more datasets; applying techniques to 3-D models
Toshev et al. [11] | LSP, FLIC | 2-D | Deep Neural Networks (DNN) | Other methods: 0.55 (average PCP); DeepPose: 0.61 (average PCP) | Including more datasets; applying techniques to 3-D models
Rafi et al. [12] | FLIC, LSP, MPII | 2-D | Fully convolutional deep network | Accuracy: 92.9% on FLIC; 83.8% on LSP; 85.7% on MPII | —
Chen et al. [13] | Extended LSP, MPII human pose | 2-D | Deep convolutional neural networks (DCNNs) | PCKh on MPII: 91.9%; PCK on LSP: 93.1% | Future frame prediction using GAN networks
Cao et al. [14] | MPII, COCO | 2-D | Part affinity fields (PAFs), multi-stage CNN | COCO top-down approach: 62.7%; bottom-up approach: 58.4%; MPII: 75.6% | Detection and generation of 3D human mesh using PAFs

3 Proposed Framework

The scope of Human Pose Estimation (HPE) and Computer Vision based pose recognition is immense. The framework employed is a pose estimation system that uses multiple algorithms to handle the various cases. It takes webcam video input and checks for human activity in the frame. The system focuses on two major areas of concern: face detection, followed by pose detection and estimation. The block diagram of the framework is shown in Fig. 1, which depicts the basic architecture of the adopted methodology.

Fig. 1 Block diagram of the proposed framework

The steps involved in the whole process can be summarized as follows:

1. Video pre-processing and conversion to individual frames. The running video stream is given to the system, which generates a set of distinct frames at approximately 15 frames per second of video footage. This is done using the OpenCV library. An object detection algorithm is then run to create a bounding box around the human.
2. Filtration and noise removal. This involves noise and background subtraction from the extracted frames, using correlation filters.
3. Human pose detection. The face of the subject provides the neural network with the strongest signal regarding the location of the body, due to its high-contrast features and comparably small variation in appearance. By making the strong assumption that the head is visible in our single-person use case, which holds for many mobile and web applications, we can create a quick and lightweight pose detector.
4. Accuracy calculation and wrong pose alert generation. The keypoints of the instructor are fetched from the database and compared with the keypoints detected for the learner in the previous step to calculate the learner's accuracy. The keypoints are not compared directly: each set is first expressed relative to the bounding box generated for that person, so that differing positions of the instructor and the learner with respect to their webcams do not affect the comparison. A wrong pose alert is generated if the calculated accuracy falls below a threshold value. The system also highlights the portion of the body where the fault occurs, so that the learner knows where to improve the pose.
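The bounding-box normalization of step 4 can be sketched as follows. This is a minimal illustration: the function names and the linear distance-to-score mapping are our assumptions, not the framework's exact code.

```python
def normalize(keypoints, box):
    """Map (x, y) pixel keypoints into coordinates relative to a bounding box."""
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    return [((x - x0) / w, (y - y0) / h) for x, y in keypoints]


def pose_accuracy(learner_kp, learner_box, instructor_kp, instructor_box):
    """Mean per-joint score in [0, 1]; each joint's score decays linearly with
    the Euclidean distance between the box-normalized keypoints, so identical
    poses score 1.0 regardless of where each person stands in frame."""
    a = normalize(learner_kp, learner_box)
    b = normalize(instructor_kp, instructor_box)
    scores = []
    for (xa, ya), (xb, yb) in zip(a, b):
        d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        scores.append(max(0.0, 1.0 - d))  # 0 once joints are a full box apart
    return sum(scores) / len(scores)
```

Because both poses are expressed relative to their own bounding boxes, a learner standing far to one side of their webcam is still compared fairly against a centered instructor.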

3.1 Dataset Description

We employ three separate validation datasets, covering different verticals: Yoga [4], Dance [8], and High-Intensity Interval Training (HIIT) [14], to compare the quality of our models against other highly effective publicly available solutions. Only one person is visible in each image, at 2–4 m from the camera. Only the 17 keypoints of the COCO topology are evaluated, for consistency with the other solutions.

4 Experimentation and Results

OpenCV is used to develop Computer Vision applications; the OpenCV library enables us to run Computer Vision algorithms efficiently and optimally. A perception pipeline can be created using MediaPipe as a network of modular components, such as inference models (e.g., TensorFlow, TFLite) and media processing operations. Moreover, the COCO-SSD object detection model is used; it is supported by the TensorFlow object detection API. SSD stands for Single-Shot MultiBox Detection; this model can identify 90 classes of the COCO dataset [20]. The procedural workflow of the framework is described in Fig. 2.

Fig. 2 Pipeline diagram of the framework for the proposed methodology

The experiments comprise the following steps:

(a) Input video to model. The video is fed into the system from the live webcam feed.
(b) Pre-processing of input. Pre-processing deals with techniques like cleaning, integration, transformation, and reduction of the input. Proper pre-processing is required before deployment on a model so that the format of the provided input matches the format of the expected input, which enables an appropriate result according to the requirements of the model. In general, pre-processing of video sequences involves segregating the input into short time frames and converting them into a sequence of images extracted from the real-time video stream; these frames are then converted to images in the form of 2D arrays of pixels. The model works on these smallest units of input, and further pre-processing may be required to get better results, carried out according to the requirements of the model and the expected use case; different models may require different amounts of transformation or integration. In our case, the primary pre-processing step is to break the video into frames so the input is handled image by image, and to rescale each frame before it is fed into the model.
(c) Display video output. The video feed is displayed on the portal. The estimated pose is displayed as a stick figure overlaying the human in the frame. Different colours are used to represent the left and right sides of the human, body parts at different depths from the camera, and the 33 detected keypoints and the sticks connecting them. Everything is calculated in real time with minimal lag, and as the subject moves, the stick figure moves with them.
(d) Accuracy calculation. The estimated poses of the instructor and the learner are compared by finding the Euclidean distance between their corresponding 33 keypoints. To aid this calculation, translation and scaling are used: translation brings the estimated poses to a common reference point, and scaling makes the height and width of the two people equal. After these steps, the accuracy can be calculated without errors.
(e) Wrong pose alert generation. When the accuracy calculated in the previous step falls below a certain threshold, the system alerts the learner that they are making an incorrect pose.

Figure 3 and Table 2 depict the measured accuracy of the various applied models.
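The translation-and-scaling alignment of step (d) can be sketched in a simplified 2D form. The alignment recipe (centroid translation, bounding-extent scaling) and the threshold value below are illustrative assumptions, not the deployed implementation.

```python
def align(keypoints):
    """Translate a pose so its centroid is at the origin, then scale it so its
    bounding width and height are 1 (the `or 1.0` guards degenerate poses)."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - cx) / w, (y - cy) / h) for x, y in keypoints]


def mean_joint_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding aligned keypoints."""
    a, b = align(pose_a), align(pose_b)
    dists = [((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
             for (xa, ya), (xb, yb) in zip(a, b)]
    return sum(dists) / len(dists)


THRESHOLD = 0.1  # illustrative value; tuned per exercise in practice


def wrong_pose_alert(learner, instructor, threshold=THRESHOLD):
    """Fire an alert when the aligned poses diverge beyond the threshold."""
    return mean_joint_distance(learner, instructor) > threshold
```

After alignment, the same pose performed at a different position or body size yields a near-zero distance, so only genuine shape differences trigger the alert.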


Fig. 3 Graphical representation of the predicted accuracy metric of different models

Table 2 Performance evaluation (accuracy) of the various models applied on the datasets [20]

Method | Yoga mAP | Yoga PCK@0.2 | Dance mAP | Dance PCK@0.2
BlazePose GHUM Heavy | 68.1 | 96.4 | 73.0 | 97.2
BlazePose GHUM Full | 62.6 | 95.5 | 67.4 | 96.3
BlazePose GHUM Lite | 45.0 | 90.2 | 53.6 | 92.5
AlphaPose ResNet50 | 63.4 | 96.0 | 57.8 | 95.5
Apple Vision | 32.8 | 82.7 | 36.4 | 91.4

PCK@0.2: percentage of correct keypoints with error less than 0.2 × torso diameter

We have tested our system in different scenarios, a few of which are mentioned here. The primary goal of the testing is to verify the working of the human pose detection system, which should run efficiently in real time with negligible lag; the results also need to be shown to the user in real time without lag. This means the system should accurately predict the keypoints and do so quickly enough to cater for the movement of the subject. Along with these features, the behaviour of the system on video inputs of different quality is also tested. The result of test case 1 is shown in Fig. 4: the model does not start until a face is detected, as expected. The result of test case 2 is shown in Fig. 5: the model automatically pauses and resumes when a face is detected, as expected. The result of test case 3 is shown in Fig. 6: an error message appears on the screen informing the learner about the wrong pose, as expected. The result of test case 4 is shown in Fig. 7: the accuracy is shown on the learner's screen, as expected.


Fig. 4 Screenshot of test case 1 with no face detection

Fig. 5 Screenshot of test case 2 with face detection and pose estimation

Our framework detects human joints with an accuracy of 94% and handles the problem of occlusion efficiently. It distinguishes between the left and right portions of the human body: the 2D stick figure has red-coloured joints for the right portion and green-coloured joints for the left. It detects 33 joints over the whole human body and reports their confidence scores, the average of which is displayed to the user.
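The per-joint confidence reporting can be sketched as below; the joint-record fields and the 0.5 cut-off for flagging weak detections are illustrative assumptions, not the framework's exact data model.

```python
def summarize_confidence(joints, low=0.5):
    """joints: list of dicts with 'name' and 'score' in [0, 1].

    Returns the average score (the value shown to the user) and the names
    of low-confidence joints, e.g. occluded parts the UI could highlight.
    """
    avg = sum(j["score"] for j in joints) / len(joints)
    weak = [j["name"] for j in joints if j["score"] < low]
    return avg, weak
```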


Fig. 6 Screenshot of test case 3 with wrong pose estimation

Fig. 7 Screenshot of test case 4 showing the value of performance metric on accurate pose estimation

5 Conclusions and Future Directions

In the proposed framework, a deep learning model locates the joints in the human body from the image or video provided to the system, and the framework handles the problem of occlusion efficiently. From the joints detected, we render the joints and connect them according to their confidence scores to form a 2D stick figure. The framework detects human joints with an accuracy of 94%; it detects 33 joints over the whole human body and reports their confidence scores, the average of which is displayed to the user. The framework can be further extended to implement 3D mesh-like structures instead of 2D stick figures. Moreover, the learner's session could also be recorded so that it can be saved in the database.

References

1. Liu Z, Zhu J, Bu J, Chen C (2015) A survey of human pose estimation: the body parts parsing based methods. J Vis Commun Image Represent 32:10–19
2. Bulat A, Tzimiropoulos G (2016) Human pose estimation via convolutional part heatmap regression. In: European conference on computer vision. Springer, pp 717–732
3. Cao Z, Simon T, Wei SE, Sheikh Y (2017) Realtime multi-person 2D pose estimation using part affinity fields. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7291–7299
4. Singh A, Agarwal S, Nagrath P, Saxena A, Thakur N (2019) Human pose estimation using convolutional neural networks. In: International conference on artificial intelligence (AICAI). IEEE, pp 946–952
5. Güler RA, Neverova N, Kokkinos I (2018) DensePose: dense human pose estimation in the wild. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7297–7306
6. Zhang F, Zhu X, Dai H, Ye M, Zhu C (2020) Distribution-aware coordinate representation for human pose estimation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 7093–7102
7. Artacho B, Savakis A (2020) UniPose: unified human pose estimation in single images and videos. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 7035–7044
8. Kreiss S, Bertoni L, Alahi A (2019) PifPaf: composite fields for human pose estimation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 11977–11986
9. Moon G, Chang JY, Lee KM (2018) V2V-PoseNet: voxel-to-voxel prediction network for accurate 3D hand and human pose estimation from a single depth map. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5079–5088
10. Li W, Wang Z, Yin B, Peng Q, Du Y, Xiao T, Sun J (2019) Rethinking on multi-stage networks for human pose estimation. arXiv preprint arXiv:1901.00148
11. Toshev A, Szegedy C (2014) DeepPose: human pose estimation via deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1653–1660
12. Rafi U, Leibe B, Gall J, Kostrikov I (2016) An efficient convolutional network for human pose estimation. In: BMVC, vol 1, p 2
13. Chen Y, Shen C, Wei XS, Liu L, Yang J (2017) Adversarial PoseNet: a structure-aware convolutional network for human pose estimation. In: Proceedings of the IEEE international conference on computer vision, pp 1212–1221
14. Cao Z, Simon T, Wei SE, Sheikh Y (2017) Realtime multi-person 2D pose estimation using part affinity fields. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7291–7299
15. Omran M, Lassner C, Pons-Moll G, Gehler P, Schiele B (2018) Neural body fitting: unifying deep learning and model-based human pose and shape estimation. In: International conference on 3D vision (3DV). IEEE, pp 484–494
16. Dang Q, Yin J, Wang B, Zheng W (2019) Deep learning based 2D human pose estimation: a survey. Tsinghua Sci Technol 24(6):663–676
17. Kreiss S, Bertoni L, Alahi A (2019) PifPaf: composite fields for human pose estimation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 11977–11986
18. Munea TL, Jembre YZ, Weldegebriel HT, Chen L, Huang C, Yang C (2020) The progress of human pose estimation: a survey and taxonomy of models applied in 2D human pose estimation. IEEE Access 8:133330–133348
19. Omran M, Lassner C, Pons-Moll G, Gehler P, Schiele B (2018) Neural body fitting: unifying deep learning and model-based human pose and shape estimation. In: International conference on 3D vision (3DV). IEEE, pp 484–494
20. https://github.com/google/mediapipe/blob/master/docs/solutions/pose.md

Phylogenetic Study of Surface Glycoprotein (S1 Spike Protein) Sequence of SARS-CoV-2 Virus

R. S. Upendra, Sanjay Shrinivas Nagar, R. S. Preetham, Sanjana Mathias, Hiba Muskan, and R. Ananya

Abstract In recent times, the pandemic of severe acute respiratory syndrome caused by coronavirus 2 (SARS-CoV-2) infection became a major global health emergency, termed COVID-19 disease by the WHO. Countries across the globe have witnessed the rapid spread of COVID-19 to date, primarily due to the continuous emergence of mutant variants of the SARS-CoV-2 virus. In order to control COVID-19, it is important to understand the origin and the rate of emergence of these mutant variants. With this aim, the present study analyzes the phylogeny of the SARS-CoV-2 virus to determine its association with other viruses and to aid the search for potential treatment regimens against SARS-CoV-2 infection. Surface glycoprotein (S1) sequences of various strains of the SARS-CoV-2 virus from different regions of the globe were considered for building the genetic tree. A character-based phylogenetic tree applying the Jones-Taylor-Thornton (JTT) based Maximum Likelihood (ML) method was developed to compare the standard SARS-CoV-2 strain (WEU54352) with the other 99 selected SARS-CoV-2 sequence variants representing different countries of the globe. The phylogenetic results were validated by applying 500 bootstrap replications. The developed phylogenetic tree displayed two major clades with 58 and 42 identical mutant sequences, respectively. Among the 99 S1 spike protein sequences, 6 displayed higher similarity with the standard sequence, showing minimum divergence. Among these 6 sequences, UZD77448 and UZD77771, obtained from the USA (California) and the USA (New Jersey) on 22/10/2022 respectively, displayed the closest similarity to the standard sequence, with a bootstrap value of 53, and were present in the same major clade as the standard sequence. With this evidence, the study concludes that its inferences can help in understanding the mutation rates of the virus and further aid the control of SARS-CoV-2 infections by selecting suitable treatment regimens.

R. S. Upendra (B) · S. S. Nagar · R. S. Preetham · S. Mathias · H. Muskan · R. Ananya
Department of Bioelectronics and School of Electronics and Communication, REVA University, Kattigenahalli, Bengaluru 560064, India
e-mail: [email protected]
S. S. Nagar
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_24


R. S. Upendra et al.

Keywords Severe acute respiratory syndrome · Surface glycoprotein · SARS-CoV-2 · ACE2 · Phylogenetic analysis · Maximum likelihood · MEGA 11

1 Introduction

Infectious diseases are a major global health threat, causing morbidity and mortality in both developed and developing countries. They may result from a myriad of organisms, including bacteria, parasites, viruses, and fungi. The World Health Organization (WHO) has estimated that 36 million people die every year due to the spread of infectious diseases. Coronaviruses are a family of viruses that are obligate intracellular parasites and can cause respiratory infections in both mammals and birds. They are enveloped single-stranded RNA viruses belonging to group four of the Baltimore classification system. The initial pandemic that arose from the outbreak of SARS-CoV-1 in 2002 proved highly pathogenic, inducing respiratory infections among infected individuals (WHO). SARS-CoV-1 was found to have originated in China and subsequently spread to 26 neighboring countries [8]; as reported by the WHO, a total of 8,098 people worldwide were infected with the SARS-CoV-1 virus. The Middle East respiratory syndrome (MERS) outbreak of 2012 was another coronavirus emergency, which infected 2,519 people and resulted in 866 deaths, as stated by the WHO. A strain of the virus SARS-CoV-2 was detected in the seafood market of China's Wuhan province in late 2019 [12]. The resulting outbreak instigated a worldwide pandemic, named COVID-19, caused by SARS-CoV-2. Various autonomous research groups have discovered that the genome of SARS-CoV-2 is 96.2% identical to the genome of bat coronaviruses [12]. As of March 21, 2023, the WHO has confirmed 761,071,826 infected cases and 6,879,677 deaths globally due to COVID-19 disease.
Severe acute respiratory syndrome is a pulmonary disease caused by the coronavirus SARS-CoV-2, which belongs to the genus Betacoronavirus and subgenus Sarbecovirus (lineage B) [9] and hence correlates closely with SARS-CoV-1, sharing similar pathological properties and symptoms in its hosts. The zoonotic origins of the virus suggest that SARS-CoV-2 shows significant overlap and homology with bat coronaviruses [4]. The virus of concern displays the spike (S), nucleocapsid (N), membrane (M), and envelope (E) proteins on its surface, alongside a single-stranded RNA as the genomic material. The surface glycoprotein consists of three monomers fused to form a homotrimer. Of the four structural proteins (S, N, M, E), the S protein is the most significant: it is about 20–40 nm in length and facilitates the attachment between the SARS-CoV-2 virus and the Angiotensin Converting Enzyme 2 (ACE2) receptor of the host cell. The monomers of S are composed of approximately 1,282 amino acids, which are further divided into two major functional domains known as S1 and S2; three S1 and S2 subunits of the S protein are coiled together to form trimers. The S1 domain (Fig. 1) mediates the recognition of suitable sites on the ACE2 receptor, and the S2 domain facilitates the binding between the S protein of the SARS-CoV-2 virus and the ACE2 of the host cell.

Fig. 1 Surface spike protein structure with the structural description of SARS-CoV-2 virus

The mutative nature of viruses makes it difficult to study the virus efficiently; hence, conducting a phylogenetic analysis provides information regarding the genetic changes and similarities between viral protein sequences. The S1 spike proteins of SARS-CoV-2 strains share similar sequence patterns. The present study demonstrates a phylogenetic analysis of the evolutionary relationships among S1 spike protein sequences obtained from various strains of the SARS-CoV-2 virus.

2 Literature Review

Through extensive studies, the primary source of SARS-CoV-2 has been mapped to bats via an intermediate host. One research study investigated the secondary or intermediate host in order to prevent the transmission of SARS-CoV-2 and hence reduce the spread of future pandemics. The study assessed a total of forty-three spike protein sequences of the SARS-CoV-2 virus isolated from distinct species, obtained from the NCBI database, which then underwent computer-assisted structural and genomic analysis. The sequences were compared to identify similarities and differences through in-silico approaches such as pairwise sequence alignment and Multiple Sequence Alignment (MSA). A phylogenetic analysis was performed to survey the relationships among the selected species using MEGA software. Subsequently, a relative analysis of the S protein was conducted using the UCSF Chimera software, in reference to the structural model. As a result of this structural and genetic analysis of spike proteins from different viral strains of SARS-CoV-2, the researchers predicted pangolins to be an intermediate host in the transmission of SARS-CoV-2 from bats to humans [8]. The amino acid sequences of TMPRSS2 and ACE2 between hosts and the virus were subjected to an evolutionary study. SARS-like viral strains that infected bats and pangolins are closely related to the SARS-CoV-2 isolated from afflicted humans, according to phylogenetic analysis. The pangolin ACE2 receptor sequence shows less evolutionary divergence from the human sequence than TMPRSS2, which is more divergent from that of bats. When comparing the origins of SARS-CoV and SARS-CoV-2 (intermediary hosts), pangolins show a lower level of ACE2 evolutionary divergence from humans than civets do. As a result, pangolins are considered a suitable intermediate host for SARS-CoV-2 transmission from bats to humans [1, 5]. At the onset of the COVID-19 outbreak, a full-length genomic comparison was conducted, revealing that SARS-CoV-2 shares 79.6% sequence identity with the SARS-CoV virus. Bat-CoV RaTG13, formerly recognized in Rhinolophus affinis from Yunnan Province, located more than 1,500 km away from Wuhan, is highly similar to it (96.2%) at the whole-genome level. Although bats are the most probable SARS-CoV-2 reservoir hosts, it remains uncertain whether Bat-CoV RaTG13 jumped straight to people or passed through intermediary hosts to enable animal-to-human transmission. Researchers were unable to obtain an intermediate host sample during the first cluster of infections at the Huanan Seafood and Wildlife Market in Wuhan, where the trading of wild animals could have been the origin of the zoonotic transmission [3].

3 Methodology

3.1 Selection of SARS-CoV-2 Virus S1 Spike Protein Sequences

SARS-CoV-2 surface glycoprotein sequence IDs from all around the world were retrieved from GenBank (NCBI) in FASTA format. MSA is the process of aligning three or more biological sequences, including DNA, RNA, or protein sequences. After the declaration of the SARS-CoV-2 virus as a new pandemic-causing virus, it became crucial to analyze the protein sequences to determine the mutation and variation rate. Hence, multiple sequence alignment of the surface glycoprotein 1 sequences was carried out in MEGA 11 [11] using the MUSCLE algorithm, which can produce alignments of both amino acid and nucleotide sequences [10].
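MEGA 11 and MUSCLE handle retrieval, alignment, and comparison through their own interfaces; purely as an illustration of the underlying operations, a minimal pure-Python sketch of FASTA parsing and pairwise identity over an already-aligned pair of sequences (function names are ours, not part of the study's toolchain):

```python
def read_fasta(text):
    """Parse FASTA-formatted text into {id: sequence}; the id is the first
    word of each '>' header line, and sequence lines are concatenated."""
    records, seq_id, chunks = {}, None, []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            if seq_id is not None:
                records[seq_id] = "".join(chunks)
            seq_id, chunks = line[1:].split()[0], []
        elif line:
            chunks.append(line)
    if seq_id is not None:
        records[seq_id] = "".join(chunks)
    return records


def percent_identity(a, b):
    """Percentage of identical positions between two equal-length aligned
    sequences, ignoring columns where either sequence has a gap ('-')."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)
```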


Table 1 Parameters and corresponding values used to build the phylogenetic tree

Parameter | Value
Statistical technique | Maximum likelihood
Substitution type | Amino acids
Substitution model | Jones-Taylor-Thornton (JTT) model
Test of phylogeny | Bootstrap method
No. of bootstrap replications | 500
ML heuristic method | Nearest-neighbor-interchange
Rates among sites | Uniform rates
No. of threads | 7

3.2 Building of the Phylogenetic Tree

Phylogenetic tree construction with the maximum likelihood (ML) method allows rapid analysis of DNA and RNA sequences and is frequently employed in the investigation of infectious diseases. The ML approach is widely used in constructing phylogenetic trees from species distances and aims to reduce the sum of all branch lengths in the resulting tree. A standard SARS-CoV-2 sequence, WEU54352, was used in the study for comparison with the other strains of the SARS-CoV-2 virus. The initial tree was generated automatically, and missing-site treatment was applied to all studied sites [9]. The parameters listed in Table 1 were used to build the tree.
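The bootstrap test of phylogeny in Table 1 resamples alignment columns with replacement and rebuilds the tree for each replicate; clade support is the fraction of replicates in which a clade recurs. A minimal sketch of the resampling step (pure Python; the three-sequence toy alignment is illustrative, not the study data, and tree rebuilding itself is left to MEGA):

```python
import random

def p_distance(a: str, b: str) -> float:
    """Proportion of ungapped aligned sites at which two sequences differ."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

def bootstrap_columns(alignment, rng):
    """One bootstrap pseudo-alignment: sample columns with replacement."""
    n_sites = len(alignment[0])
    cols = [rng.randrange(n_sites) for _ in range(n_sites)]
    return ["".join(seq[c] for c in cols) for seq in alignment]

aln = ["MFVFLVLLPL", "MFVFLVLQPL", "MFVGLVLQSL"]  # toy alignment
rng = random.Random(42)  # seeded for reproducibility
for rep in range(3):  # the study used 500 replicates
    pseudo = bootstrap_columns(aln, rng)
    print(f"replicate {rep}: d(0,1) = {p_distance(pseudo[0], pseudo[1]):.2f}")
```

Each pseudo-alignment keeps the original number of sites, so distances computed per replicate are comparable with those from the real alignment.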

3.3 Visualization of the Phylogenetic Tree

The phylogenetic tree was visualized using IROKI (http://virome.dbi.udel.edu), a web tool for automatically creating customized phylogenetic and taxonomic trees with associated qualitative and quantitative data [7].
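IROKI consumes standard Newick trees such as those exported from MEGA. As a quick sanity check before upload, one can confirm that the exported tree names all 100 taxa; a rough sketch (pure Python; the three-taxon tree is hypothetical, and the regex assumes every leaf carries a branch length):

```python
import re

def newick_leaves(newick: str):
    """Extract leaf names from a Newick string (leaf names precede ':' lengths)."""
    # A leaf name follows '(' or ',' and is terminated by ':', ',' or ')'
    return re.findall(r"[(,]\s*([A-Za-z0-9_.|-]+)\s*[:,)]", newick)

tree = "((UZD77448:0.01,UZD77771:0.01):0.002,WEU54352:0.0);"
print(newick_leaves(tree))  # ['UZD77448', 'UZD77771', 'WEU54352']
```

For the full study tree, one would assert that the returned list has 100 entries and contains the standard accession WEU54352.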

4 Results

4.1 Selection of SARS-CoV-2 Virus S1 Spike Protein Sequences

Sequences of the surface glycoprotein (SGP) of different variants of the SARS-CoV-2 virus were obtained from the NCBI protein database for different countries and used for the construction of a phylogenetic tree. The SGP of the SARS-CoV-2 standard

R. S. Upendra et al.

Table 2 Spike protein sequences of the SARS-CoV-2 virus from different countries

Serial no.  Country          No. of sequences
1           USA              18
2           Denmark          2
3           South Africa     12
4           Kenya            2
5           India            10
6           Japan            8
7           New Zealand      6
8           Australia        6
9           Palau            5
10          Argentina        2
11          Brazil           4
12          Chile            2
13          Colombia         1
14          Ecuador          1
15          Paraguay         1
16          Peru             2
17          Suriname         1
18          Uruguay          1
19          Venezuela        2
20          United Kingdom   6
21          France           7
22          Seychelles       1
            Total            100

virus from the USA (Florida) was selected as the standard variant for the study, with the aim of finding the closest phylogenetic match. A total of 100 sequences (99 variants and 1 standard sequence) were obtained from 22 different countries globally and used as input for building the phylogenetic tree (Tables 2 and 3).

4.2 Building of the Phylogenetic Tree

The resulting phylogenetic tree showed the similarity between the standard variant and the other variants. The tree was further visualized using tools such as IROKI. Figure 2 is a section of the phylogenetic tree that depicts the phylogenetic relationship between the sequences and the standard sequence of the surface glycoprotein obtained


Table 3 S1 spike protein sequence IDs obtained from different countries

Accession no.  Country                       Collection date
UZC49087       Kenya                         12/21/2021
UZC49694       Seychelles                    8/22/2022
UYS57713       South Africa: Western Cape    9/29/2022
UYS57725       South Africa: Western Cape    10/2/2022
UYS57737       South Africa: Western Cape    10/5/2022
UYS57749       South Africa: Northern Cape   10/5/2022
UYS57761       South Africa: Northern Cape   10/7/2022
UYS57773       South Africa: Northern Cape   9/25/2022
UYS57821       South Africa: Western Cape    10/5/2022
UYR57516       South Africa: Western Cape    10/6/2022
UYR57528       South Africa: Western Cape    10/7/2022
UYR57606       South Africa: Western Cape    9/15/2022
UYR57652       South Africa: Western Cape    9/19/2022
UYR57699       South Africa: Western Cape    9/26/2022
UYO69841       Kenya                         7/5/2022
UZC40608       India                         5/6/2021
UZC40621       India                         5/6/2021
UZC40658       India                         1/19/2022
UZC40670       India                         1/20/2022
UZC40684       India                         1/20/2022
UZC40697       India                         4/15/2021
UZC40709       India                         4/15/2021
UZC40743       India                         4/15/2021
UZC40755       India                         4/15/2021
BDT54564       Japan: Shizuoka, west         9/7/2022
BDT54576       Japan: Shizuoka, west         9/8/2022
BDT54588       Japan: Shizuoka, east         9/5/2022
BDT54599       Japan: Shizuoka, east         9/5/2022
BDT54611       Japan: Shizuoka, east         9/5/2022
BDT54623       Japan: Shizuoka, east         9/8/2022
UYP61551       New Zealand                   1/11/2022
UYP61899       New Zealand                   2/25/2022
UYP61935       New Zealand                   3/6/2022
UYP61959       New Zealand                   3/6/2022
UYP63825       New Zealand                   2/1/2022
UYP64056       New Zealand                   2/17/2022
UQT27285       Australia: South Brisbane     2/15/2022
UQT27309       Australia: South Brisbane     2/15/2022
UQT27321       Australia: South Brisbane     2/15/2022
(continued)

Table 3 (continued)

Accession no.  Country                                    Collection date
QPK67828       Australia: Victoria                        8/5/2020
QPK67852       Australia: Victoria                        8/15/2020
QPK67876       Australia: Victoria                        7/31/2020
ULE90226       Palau: PW                                  9/11/2021
ULE90250       Palau: PW                                  9/14/2021
ULE90274       Palau: PW                                  9/17/2021
ULE90298       Palau: PW                                  9/19/2021
ULE90334       Palau: PW                                  9/23/2021
UHM88525       Argentina                                  3/15/2021
QRC49972       Argentina                                  6/4/2020
UEO57425       Brazil: Mato Grosso do Sul, Campo Grande   7/10/2020
UEO57795       Brazil: Mato Grosso do Sul, Campo Grande   11/30/2020
UYF31271       Brazil                                     3/4/2022
UHM62947       Brazil                                     4/22/2020
UIC73875       Chile                                      8/8/2021
QPV15059       Chile                                      5/9/2020
QIS30054       Colombia: Antioquia                        3/11/2020
QPN53404       Ecuador                                    3/23/2020
UNJ22125       Paraguay                                   8/19/2020
UZC60063       Peru                                       7/11/2022
QNV50226       Peru                                       5/4/2020
USC26255       Suriname                                   12/16/2020
UCI34968       Uruguay                                    7/16/2020
QXU68215       Venezuela                                  1/1/2021
QNH88943       Venezuela                                  5/22/2020
UZC45624       United Kingdom: Scotland                   3/24/2020
UZC45744       United Kingdom: Scotland                   4/2/2020
UZC45803       United Kingdom: Scotland                   4/9/2020
UZC46255       United Kingdom: Scotland                   5/11/2020
UYK43732       United Kingdom                             3/28/2022
UYF26308       France                                     3/19/2020
UYF26356       France                                     3/25/2020
UYF26368       France                                     3/26/2020
UYF26667       France                                     4/27/2020
UYF26727       France                                     7/13/2020
UYF27084       France                                     8/18/2020
UYF27036       France                                     8/17/2020
UYQ90680       Denmark                                    7/11/2022
(continued)

Table 3 (continued)

Accession no.  Country             Collection date
UYQ90692       Denmark             4/24/2022
UYK43720       United Kingdom      3/28/2022
UZD77257       USA: California     10/21/2022
UZD77389       USA: Arizona        10/22/2022
UZD77365       USA: Michigan       10/22/2022
UZD77448       USA: California     10/22/2022
UZD77652       USA: California     10/22/2022
UZD77700       USA: Indiana        10/22/2022
UZD77747       USA: Texas          10/22/2022
UZD77771       USA: New Jersey     10/22/2022
UZD78834       USA: Pennsylvania   10/24/2022
UZD78857       USA: Georgia        10/24/2022
UZD79213       USA: Michigan       10/24/2022
UZD79249       USA: Georgia        10/24/2022
UZD79775       USA: Texas          10/24/2022
UZD79846       USA: California     10/24/2022
UZD79858       USA: California     10/24/2022
UZD79691       USA: Pennsylvania   10/24/2022
UZF96676       USA: Arkansas       11/1/2022
WEU54352       USA: Florida        3/4/2023
BDZ29174       Japan: Shizuoka     2/20/2023
BDZ28912       Japan: Shizuoka     2/10/2023
WEW51865       India               3/11/2023

from different countries. The phylogenetic tree depicts the bootstrap values, indicating the degree of relationship between the sequences. The remaining sequences, along with the complete phylogenetic tree, are provided in the supplementary material.

4.3 Visualization of the Phylogenetic Tree

Figure 3 depicts the phylogenetic tree visualized with the help of the IROKI software. The software makes the tree visually clearer and more effective for analysis and supports the proper classification of nodes, branches, and clades in a phylogenetic tree. It is observed from Fig. 3 that the tree consists of two major clades, clade-1 and clade-2, where subclade-1 has four identical sequences (UZD79691, UZD77747, UZD78857, UZD77389) and subclade-2 has three identical sequences (UZD77771, UZD77448, and WEU54352, the standard). Among these


Fig. 2 SARS-CoV-2 phylogenetic tree

six sequences, UZD77448 obtained from the USA (California) and UZD77771 from the USA (New Jersey), both reported on 22/10/2022, displayed the closest similarity to the standard sequence, with a bootstrap value of 53, and were present in the same subclade as the standard sequence (Fig. 3; supplementary material).

5 Discussion

The COVID-19 pandemic, which emerged in China in December 2019, affected approximately 188 countries worldwide. To investigate the genetic epidemiology of the outbreak in Russia, Libov and a team of scientists conducted a study involving the isolation and genetic characterization of two strains of SARS-CoV-2. Through a phylogenetic analysis of SARS-CoV-2 sequences from infected patients, the researchers evaluated the molecular epidemiology and spread pattern of COVID-19 in Russia by comparing the obtained results with epidemiological data.



Fig. 3 Visualization of phylogenetic tree with IROKI software

The whole-genome analysis of the isolated strains showed seven prevalent mutations in comparison to the original Wuhan virus. The phylogenetic analysis revealed various patterns of SARS-CoV-2 spread within Russia. The study conveys that phylogenetic analysis of COVID-19 variants is an important step in analyzing the mutation and transmission of the SARS-CoV-2 virus [2]. Since the emergence of the SARS-CoV-2 virus, researchers have searched for potential drugs to fight SARS-CoV-2 viral infection. The SARS-CoV-2 virus displays a high rate of mutation in its genome sequence. Anamika and colleagues constructed a first phylogenetic tree from SARS-CoV-2 genome sequences obtained from the Global Initiative on Sharing Avian Influenza Data (GISAID) database. The SARS-CoV-2 strain obtained as the root of the first phylogenetic analysis was then analyzed together with genomic sequences of different human viruses to recognize the nearest viral neighbor through a second phylogenetic tree. Additionally, an in-silico approach was employed to repurpose existing drugs against the coronavirus by utilizing the phylogenetic correlation between different human viruses. The phylogenetic analysis revealed an evolutionary relationship between the SARS-CoV-2 virus and adenovirus. Hence, drugs that were used against adenovirus were repurposed against the SARS-CoV-2 virus through an in-silico process. As part of the screening, various protein targets, including protease, phosphatase, methyltransferase, and spike protein, were evaluated against drugs previously used to treat adenovirus. Ribavirin, a drug known for its effectiveness against adenoviral infections, demonstrated the highest docking score among the drugs tested.
That study, however, analyzed drugs used for other human viruses instead of repurposing drugs used for the treatment of previous COVID-19 viral


variants. Hence, there is a need to perform a drug-repurposing study with drugs known for previous COVID-19 variants [6]. From this understanding, the present study developed a phylogenetic tree of different strains of the SARS-CoV-2 virus based on the similarity between surface glycoprotein sequences obtained from those strains. A total of 100 sequences (99 variants and 1 standard sequence) of the surface glycoprotein (S1 spike protein) were obtained from 22 different countries across the globe, such as the USA, Denmark, India, and Japan, and selected for constructing the phylogenetic tree. The standard sequence, WEU54352, was obtained from the USA (Florida) on 04/03/2023. Among the remaining 99 mutant sequences of the surface glycoprotein (S1 spike protein), 6 sequences displayed higher similarity with, and minimum divergence from, the standard sequence. Among these 6 sequences, UZD77448 obtained from the USA (California) and UZD77771 from the USA (New Jersey), both reported on 22/10/2022, displayed the closest similarity to the standard sequence, with a bootstrap value of 53, and were present in the same subclade as the standard sequence. The phylogenetic analysis helps to find sequence similarity between the newly mutated spike protein sequence and previously studied spike protein sequences. Compounds tested for the strains UZD77448 and UZD77771 may play a significant role in the treatment of COVID-19 disease caused by the newly mutated standard strain (WEU54352).

6 Conclusion

The phylogenetic analysis demonstrated the contrast between the standard strain (WEU54352) of SARS-CoV-2 and other variants. Currently, therapies are available for previously investigated SARS-CoV-2 virus variants. The evidence presented in this research suggests that the medications available for infection caused by the strains UZD77448 and UZD77771 can be assessed for the treatment of COVID-19 disease caused by the newly mutated standard strain (WEU54352). In conclusion, this study provides valuable insights into the mutation rates of the SARS-CoV-2 virus and their implications for the development of effective treatment regimens; it may reduce the time required for the in-silico approach and boost the drug discovery process. By taking into account the specific characteristics of the virus and the way it is evolving, we can work towards more effective control of SARS-CoV-2 infections, ultimately enhancing the overall health and wellness of individuals around the world.

Supplementary Material Please refer to the supplementary material for more information regarding the phylogenetic tree.


References
1. Junejo Y, Ozaslan M, Safdar M, Khailany RA, Rehman S, Yousaf W, Khan MA (2020) Novel SARS-CoV-2/COVID-19: origin, pathogenesis, genes and genetic variations, immune responses and phylogenetic analysis. Gene Reports 20:100752
2. Kozlovskaya L, Piniaeva A, Ignatyev G, Selivanov A, Shishova A, Kovpak A, Gordeychuk I, Ivin Y, Berestovskaya A, Prokhortchouk E, Protsenko D (2020) Isolation and phylogenetic analysis of SARS-CoV-2 variants collected in Russia during the COVID-19 outbreak. Int J Infect Dis 99:40–46
3. Liu YC, Kuo RL, Shih SR (2020) COVID-19: the first documented coronavirus pandemic in history. Biomed J 43(4):328–333
4. Lohrasbi-Nejad A (2022) Detection of homologous recombination events in SARS-CoV-2. Biotechnol Lett 2022:1–6
5. Lopes LR, de Mattos Cardillo G, Paiva PB (2020) Molecular evolution and phylogenetic analysis of SARS-CoV-2 and hosts ACE2 protein suggest Malayan pangolin as intermediary host. Braz J Microbiol 51(4):1593–1599
6. Mishra A, Mulpuru V, Mishra N (2022) Identification of SARS-CoV-2 inhibitors through phylogenetics and drug repurposing. Struct Chem 33(5):1789–1797
7. Moore RM, Harrison AO, McAllister SM, Polson SW, Wommack KE (2020) Iroki: automatic customization and visualization of phylogenetic trees. PeerJ 8:e8584
8. Raza S, Navid MT, Zahir W, Khan MN, Awais M, Yaqub T, Rabbani M, Rashid M, Saddick S, Rasheed MA (2021) Analysis of the spike proteins suggest pangolin as an intermediate host of COVID-19 (SARS-CoV-2). Int J Agric Biol 25(3)
9. Tabibzadeh A, Esghaei M, Soltani S, Yousefi P, Taherizadeh M, Safarnezhad Tameshkel F, Golahdooz M, Panahi M, Ajdarkosh H, Zamani F, Karbalaie Niya MH (2021) Evolutionary study of COVID-19, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) as an emerging coronavirus: phylogenetic analysis and literature review. Vet Med Sci 7(2):559–571
10. Takaoka Y, Sugano A, Morinaga Y, Ohta M, Miura K, Kataguchi H, Kumaoka M, Kimura S, Maniwa Y (2022) Prediction of infectivity of SARS-CoV-2: mathematical model with analysis of docking simulation for spike proteins and angiotensin-converting enzyme 2. Microbial Risk Anal 22:100227
11. Tamura K, Stecher G, Kumar S (2021) MEGA11: molecular evolutionary genetics analysis version 11. Mol Biol Evol 38(7):3022–3027
12. ur Rehman MF, Fariha C, Anwar A, Shahzad N, Ahmad M, Mukhtar S, Haque MF (2021) Novel coronavirus disease (COVID-19) pandemic: a recent mini review. Comput Struct Biotechnol J 19:612–623

Pervasive and Wearable Computing and Networks

Jatin Verma and Tejinder Kaur

Abstract Pervasive and wearable computing refers to the development of technology that seamlessly integrates into everyday life, often in the form of small, wearable devices that are constantly connected to the internet. These devices are designed to be unobtrusive and to provide users with a range of functions, from tracking their health and fitness to providing real-time information about their environment. The development of pervasive and wearable computing has been driven by advances in technologies such as sensors, wireless communication, and low-power computing. These devices are typically small and lightweight and can be worn on the body, making them ideal for use in a wide range of contexts; they communicate over Bluetooth, Zigbee, Wi-Fi, and cellular networks, among others. The potential applications of pervasive and wearable computing are vast, ranging from healthcare and wellness to entertainment and gaming. For example, wearable devices can be used to monitor health parameters such as heart rate, blood pressure, and oxygen levels, and provide feedback to users in real time. Overall, the development of pervasive and wearable computing is poised to transform the way we interact with technology and the world around us, creating new opportunities for innovation and growth in a wide range of fields.

Keywords Computing · Pervasive · Technology · Network

J. Verma (B) Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India e-mail: [email protected] T. Kaur (B) Quest Group of Institutions, Landran-Sirhind Highway, Jhanjeri, Sahibzada Ajit Singh Nagar, Punjab, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_25


1 Introduction

The history of pervasive and wearable computing can be traced back to the 1970s with the development of the first portable computer, the Alto, by Xerox PARC. In the 1980s, the concept of ubiquitous computing was introduced by Mark Weiser, who envisioned a future where computing would be integrated seamlessly into everyday life. Weiser's vision inspired the development of numerous technologies and devices that aimed to make computing more pervasive and integrated. The first wearable computing device, the MIT wearable computer, was developed in the early 1990s by Steve Mann. Since then, numerous wearable computing devices have been developed, including smart watches, fitness trackers, and augmented reality glasses [1]. These devices are equipped with sensors that can monitor a wide range of physiological and environmental data, such as heart rate, temperature, and location. Pervasive and wearable computing has numerous applications in various fields. In healthcare, wearable devices can be used to monitor patients' vital signs, track medication adherence, and provide remote patient monitoring. Wearable devices can also be used to help people with disabilities, for example as hearing aids and prosthetic limbs. In entertainment, pervasive computing has enabled the development of immersive experiences, such as virtual reality and augmented reality. Wearable devices can also be used to provide personalized entertainment experiences, such as personalized music playlists and movie recommendations [2]. In sports, wearable devices can be used to monitor athletes' performance, track their movements, and provide real-time feedback. Wearable devices can also be used to improve safety in contact sports, such as football and hockey. In education, wearable devices can be used to provide personalized learning experiences, such as adaptive learning platforms that adjust to students' learning styles and abilities.
Wearable devices can also be used to provide real-time feedback to teachers, allowing them to adjust their teaching methods based on students’ responses. Despite the many benefits of pervasive and wearable computing, there are several challenges that must be addressed to ensure their widespread adoption. One of the main challenges is privacy and security [2]. Pervasive and wearable computing devices collect vast amounts of data, including personal information, health information, and location data. This data can be used for nefarious purposes, such as identity theft and stalking, and must be protected from unauthorized access. Another challenge is interoperability. With so many different types of pervasive and wearable computing devices on the market, it can be difficult to ensure that they all work together seamlessly. Standards and protocols must be established to ensure that different devices can communicate with each other and share data [3]. A third challenge is power consumption. Many wearable devices have limited battery life, which can be a major inconvenience for users. New technologies, such as energy harvesting and low-power wireless communication, must be developed to extend the battery life of wearable devices. Finally, there is the challenge of social acceptance. While many people embrace new technologies, others may be hesitant to adopt them due to concerns about privacy, security, and the potential for addiction [4]. Educating


the public about the benefits and risks of pervasive and wearable computing is essential for their widespread adoption. The future of pervasive and wearable computing and networks is promising, with new technologies and trends emerging all the time. These include the Internet of Things (IoT), edge computing, and artificial intelligence (AI). The IoT refers to the network of interconnected devices, including pervasive and wearable computing devices, that can communicate with each other and with cloud-based servers. Edge computing involves processing data locally on the device, rather than sending it to a central server, which can reduce latency and improve the performance of pervasive and wearable computing devices. AI can be used to analyze the vast amounts of data collected by pervasive and wearable computing devices, enabling new applications and insights [4] (Fig. 1). Other future directions for pervasive and wearable computing include the development of more sophisticated and personalized devices, such as smart clothing that can adjust its temperature based on the wearer's preferences, and augmented reality contact lenses that can provide real-time information about the user's surroundings. Advances in materials science and nanotechnology may also enable the development of new types of pervasive and wearable computing devices, such as flexible and stretchable sensors that can conform to the body [5]. In conclusion, pervasive and wearable computing and networks have the potential to transform many aspects of society, including healthcare, entertainment, sports, and education. While there

Fig. 1 Pairing heterogeneous devices [1]


are several challenges that must be addressed, including privacy and security, interoperability, power consumption, and social acceptance, the future of pervasive and wearable computing is promising, with new technologies and trends emerging all the time. As these technologies continue to evolve and improve, they will likely become even more pervasive and integrated into everyday life, enabling new applications and enhancing our quality of life. The rapid advancement of technology has led to the development of various forms of computing devices that have revolutionized the way people interact with technology. In recent years, pervasive and wearable computing has emerged as a popular trend in the field of computing and networking. Pervasive computing refers to the integration of computing technology into everyday life, while wearable computing refers to the use of portable computing devices that can be worn on the body. Pervasive and wearable computing and networks have a wide range of applications, from healthcare to entertainment, and have the potential to transform many aspects of society. This paper aims to provide an overview of pervasive and wearable computing and networks, including their history, applications, challenges, and future directions [6].

2 History of Pervasive and Wearable Computing

The history of pervasive and wearable computing can be traced back to the 1970s, when the first portable computer was developed by Xerox PARC. The device, called the Alto, was a precursor to modern laptops and featured a graphical user interface and a mouse. In the 1980s, the concept of ubiquitous computing was introduced by Mark Weiser, who envisioned a future where computing would be integrated seamlessly into everyday life. Weiser's vision inspired the development of numerous technologies and devices that aimed to make computing more pervasive and integrated. The first wearable computing device, the MIT wearable computer, was developed in the early 1990s by Steve Mann. The device was a head-mounted computer that allowed the wearer to access computing resources on the go. Since then, numerous wearable computing devices have been developed, including smart watches, fitness trackers, and augmented reality glasses [7].

Applications of Pervasive and Wearable Computing: Pervasive and wearable computing have numerous applications in various fields, including healthcare, entertainment, sports, and education. In healthcare, wearable devices can be used to monitor patients' vital signs, track medication adherence, and provide remote patient monitoring. Wearable devices can also be used to help people with disabilities, for example as hearing aids and prosthetic limbs. In entertainment, pervasive computing has enabled the development of immersive experiences, such as virtual reality and augmented reality. Wearable devices can also be used to provide personalized entertainment experiences, such as personalized music playlists and movie recommendations. In sports, wearable devices can be used to monitor athletes' performance, track their movements, and provide real-time feedback. Wearable devices can also be used to improve safety in contact sports, such as football and hockey. In education, wearable


devices can be used to provide personalized learning experiences, such as adaptive learning platforms that adjust to students’ learning styles and abilities. Wearable devices can also be used to provide real-time feedback to teachers, allowing them to adjust their teaching methods based on students’ responses [8].

3 Literature Review

Pervasive and wearable computing is a rapidly growing field, and as such, there is a wealth of literature available on the topic. Here are some key findings from recent research:

Health and fitness tracking: One of the most popular applications of wearable computing is in the area of health and fitness tracking. Research has shown that wearable devices can be effective in motivating individuals to adopt healthier behaviors and can provide real-time feedback on their progress.

Privacy and security: As with any technology that collects personal data, there are concerns about the privacy and security of pervasive and wearable computing devices. Researchers have identified several potential vulnerabilities in these devices, including the risk of data breaches and the potential for devices to be used for surveillance [8].

Human–computer interaction: Pervasive and wearable computing devices require new forms of interaction between humans and machines. Researchers have explored various techniques for interacting with wearable devices, including touch, voice, and gesture-based interfaces [14].

Design and aesthetics: The design and aesthetics of wearable computing devices are important factors in their adoption and use. Research has shown that factors such as comfort, style, and customization can all impact the user experience.

Social and cultural implications: The widespread adoption of pervasive and wearable computing devices has significant social and cultural implications. Researchers have explored topics such as the impact of these devices on social interactions, the potential for discrimination based on data collected by these devices, and the ethical implications of data collection and use.

Overall, the literature on pervasive and wearable computing highlights both the potential benefits and risks associated with this technology.
As the field continues to evolve, it will be important for researchers and developers to address these issues in order to create devices that are both effective and ethical [9].


4 Challenges and Review of Pervasive and Wearable Computing

Despite the many benefits of pervasive and wearable computing, there are several challenges that must be addressed to ensure their widespread adoption. One of the main challenges is privacy and security. Pervasive and wearable computing devices collect vast amounts of data, including personal information, health information, and location data. This data can be used for nefarious purposes, such as identity theft and stalking, and must be protected from unauthorized access [9]. Another challenge is interoperability. With so many different types of pervasive and wearable computing devices on the market, it can be difficult to ensure that they all work together seamlessly. Standards and protocols must be established to ensure that different devices can communicate with each other and share data (Fig. 2). A third challenge is power consumption. Many wearable devices have limited battery life, which can be a major inconvenience for users. New technologies, such as energy harvesting and low-power wireless communication, must be developed to extend the battery life of wearable devices. Finally, there is the challenge of social acceptance. Wearable technology has become increasingly popular in recent years

Fig. 2 Pairing heterogeneous devices [2]


due to its ability to track and monitor various aspects of our lives. Wearable devices have the potential to transform many aspects of society, including healthcare, entertainment, sports, and education. This paper will explore the technology behind wearable devices, their applications, and the challenges that must be addressed to ensure their widespread adoption [10].

Technology behind Wearable Devices: Wearable devices are equipped with sensors that can monitor a wide range of physiological and environmental data, such as heart rate, temperature, and location. These sensors can be integrated into various types of wearable devices, including smart watches, fitness trackers, and augmented reality glasses. One of the key technologies behind wearable devices is low-power wireless communication. Many wearable devices use Bluetooth Low Energy (BLE), a wireless technology that allows devices to communicate with each other over short distances while consuming minimal power. Other wireless technologies used in wearable devices include Wi-Fi, NFC, and Zigbee. Another important technology behind wearable devices is energy harvesting. Many wearable devices have limited battery life, which can be a major inconvenience for users. Energy harvesting involves capturing energy from the environment, such as from the wearer's movements or from solar power, to power the device. This technology can extend the battery life of wearable devices and reduce the need for frequent recharging. Sensors are also a critical technology in wearable devices. Wearable devices can be equipped with various types of sensors, including accelerometers, gyroscopes, and magnetometers, which can be used to track movement, orientation, and location. Other types of sensors used in wearable devices include heart rate sensors, temperature sensors, and humidity sensors [11].
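Raw readings from such sensors are noisy, and a common first processing step is a small moving-average filter applied on-device to the sampled stream before it is transmitted. An illustrative sketch (the heart-rate samples below are made up, not from any real device):

```python
from collections import deque

def rolling_mean(samples, window=5):
    """Smooth a stream of sensor readings with a fixed-size moving average."""
    buf = deque(maxlen=window)  # keeps only the last `window` samples
    smoothed = []
    for s in samples:
        buf.append(s)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# Simulated heart-rate stream (bpm) with one motion-artifact spike
hr = [72, 74, 71, 120, 73, 75, 74]
print([round(x, 1) for x in rolling_mean(hr, window=3)])
```

Filtering on the device in this way is a simple example of the edge-processing idea discussed earlier: it reduces both noise and the volume of data that must be sent over a low-power link such as BLE.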
Applications of Wearable Technology: Wearable technology has numerous applications in various fields, including healthcare, entertainment, sports, and education. In healthcare, wearable devices can be used to monitor patients' vital signs, track medication adherence, and provide remote patient monitoring. Wearable devices can also be used to help people with disabilities, for example as hearing aids and prosthetic limbs. Smart glasses, for instance, can help people with visual impairments by providing real-time audio feedback about their surroundings. In entertainment, pervasive computing has enabled the development of immersive experiences, such as virtual reality and augmented reality. Wearable devices can also be used to provide personalized entertainment experiences, such as personalized music playlists and movie recommendations. For example, the Bose Frames audio sunglasses combine sunglasses and headphones into a single wearable device that allows users to listen to music and take phone calls while still being able to hear their surroundings. In sports, wearable devices can be used to monitor athletes' performance, track their movements, and provide real-time feedback. Wearable devices can also be used to improve safety in contact sports, such as football and hockey. For example, the Riddell SpeedFlex football helmet is equipped with sensors that can measure the severity of impacts and alert coaches when a player may have suffered a concussion [12] (Fig. 3).


J. Verma and T. Kaur

Fig. 3 Scope of wearable VR in the future [3]

In education, wearable devices can be used to provide personalized learning experiences, such as adaptive learning platforms that adjust to students’ learning styles and abilities. Wearable devices can also provide real-time feedback to teachers, allowing them to adjust their teaching methods based on students’ responses. For example, the Emotiv Insight is a wearable device that can detect the wearer’s emotional state and provide feedback to teachers about how engaged students are in a lesson [13].

Challenges of Wearable Technology
Despite the many benefits of wearable technology, several challenges must be addressed to ensure widespread adoption. One of the main challenges is privacy and security. Wearable devices collect vast amounts of data, including personal information, health information, and location data. This data can be used for nefarious purposes, such as identity theft and stalking, and must be protected from unauthorized access.

Another challenge of wearable technology is interoperability. There are many different types of wearable devices on the market, and they often use different operating systems, communication protocols, and data formats. This can make it difficult for devices to communicate with each other and for users to access their data across different devices. Interoperability standards are needed to ensure that wearable devices can work together seamlessly and that users can access their data regardless of the device they are using [14].

Power consumption is another challenge for wearable technology. Wearable devices are often small and have limited battery life, which can be a major inconvenience for users. Energy harvesting technologies can help extend battery life, but

Pervasive and Wearable Computing and Networks


more efficient energy storage and management solutions are needed to ensure that wearable devices can run for long periods without requiring frequent recharging. Social acceptance is also a challenge for wearable technology. While many people have embraced wearable devices, others are concerned about the potential privacy and security risks and the impact that wearable devices may have on social interactions. Wearable devices must be designed with privacy and security in mind and should be unobtrusive to ensure that they do not interfere with social interactions [15].

5 Methodology

Wearable computing and networks is a complex field that involves a variety of methods, technologies, and applications. Key methods used in the development and deployment of wearable computing and networks include:
• Sensor technology: Wearable devices rely on a variety of sensors to collect data about the wearer’s environment, physiology, and behaviour. These sensors can include accelerometers, gyroscopes, magnetometers, temperature sensors, heart rate monitors, and electroencephalography (EEG) sensors, among others (Fig. 4).
• Data processing: Once the data is collected, it must be processed and analyzed to extract meaningful insights. This may involve filtering, feature extraction, statistical analysis, or machine learning algorithms, depending on the application.

Fig. 4 Wearable internal methodology [4]


• Wireless communication: Wearable devices rely on wireless communication protocols, such as Bluetooth, Wi-Fi, or cellular networks, to transmit data to other devices or systems.
• Power management: Wearable devices are typically battery-powered and must be designed to minimize power consumption while still providing sufficient functionality [15].
• Human factors: Wearable devices must be designed with the user in mind, taking into account factors such as comfort, usability, and safety.
• Integration with other systems: Wearable devices are often part of a larger ecosystem of devices and systems, such as smartphones, cloud computing platforms, or healthcare information systems. As such, they must be designed to integrate seamlessly with these other components.

Overall, the development and deployment of wearable computing and networks require a multidisciplinary approach that involves expertise in areas such as electrical engineering, computer science, material science, human–computer interaction, and data analytics [15].
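To make the data-processing step above concrete, the following sketch turns a stream of raw (x, y, z) accelerometer samples into simple per-window features (mean and standard deviation of acceleration magnitude). The window size and the chosen features are illustrative assumptions, not taken from the text:

```python
import math

def window_features(samples, window=4):
    """Split (x, y, z) accelerometer samples into fixed-size windows and
    compute two simple features per window: the mean of the acceleration
    magnitude and its standard deviation."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    features = []
    for i in range(0, len(mags) - window + 1, window):
        w = mags[i:i + window]
        mean = sum(w) / window
        var = sum((m - mean) ** 2 for m in w) / window
        features.append((mean, math.sqrt(var)))
    return features

# A stationary device reads roughly 1 g on one axis:
still = [(0.0, 0.0, 1.0)] * 4
print(window_features(still))  # [(1.0, 0.0)]
```

Features like these are what a downstream classifier (for activity recognition, fall detection, and so on) would typically consume instead of the raw samples.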

6 Implementation

There are many different implementations of wearable computing and networks, each with its own unique set of technologies, applications, and use cases. Here are a few examples of how wearable computing and networks are being implemented in different domains:
• Healthcare: Wearable devices are being used to monitor patients remotely, providing doctors with real-time data on vital signs, medication adherence, and activity levels. This can help to improve patient outcomes and reduce the need for in-person visits.
• Fitness and sports: Wearable devices are being used to track athletic performance, providing athletes with data on metrics such as heart rate, oxygen levels, and muscle activity. This can help athletes to optimize their training and reduce the risk of injury (Fig. 5).
• Industrial and workplace safety: Wearable devices are being used to detect environmental hazards, such as toxic gases or excessive heat, and to alert workers to potential dangers. This can help to improve workplace safety and reduce the risk of accidents.
• Augmented and virtual reality: Wearable devices, such as smart glasses, are being used to provide users with augmented or virtual reality experiences, enhancing their ability to interact with the world around them [15].
• Military and defence: Wearable devices are being used to provide soldiers with enhanced situational awareness, providing real-time data on enemy positions, weather conditions, and other relevant information [16].


Fig. 5 Wearable devices computing [5]

• Consumer electronics: Wearable devices, such as smart watches and fitness trackers, are becoming increasingly popular as personal health and fitness assistants, providing users with data on their physical activity, sleep patterns, and other health metrics.

Overall, wearable computing and networks are being implemented in a wide range of domains, with new applications and technologies emerging all the time. As these technologies continue to evolve, we can expect to see even more innovative implementations of wearable computing and networks in the future.

7 Results

Here are some recent results in the field of wearable computing and networks. Advancements in material science have enabled the development of more comfortable and flexible wearable devices, such as smart textiles and biocompatible sensors (Fig. 6). Machine learning algorithms are being used to analyze the vast amounts of data collected by wearable devices, enabling more accurate predictions of health outcomes and disease risk. Wearable devices are being used to monitor and improve athletic performance, with sensors tracking metrics such as heart rate, oxygen levels, and muscle activity [16] (Table 1). Wearable devices are being used to enhance safety in the workplace, with sensors detecting environmental hazards and alerting workers to potential dangers. Wearable devices are being used in clinical settings to monitor patients remotely, reducing the need for in-person visits and improving patient outcomes. Wearable devices are being integrated into the Internet of Things (IoT) ecosystem, enabling seamless


Fig. 6 A survey of wearable computing [6]

Table 1 Classification of wearable computing [1]

| Type | Properties | Capabilities | Applications |
|------|------------|--------------|--------------|
| Smart watch | Low operating power; user-friendly interface with both touch and voice commands | Display of specific information, payment, fitness/activity tracking, navigation | Business administration, marketing, insurance, professional sport, training, education, infotainment |
| Smart eyewear | Controlled by touching the screen, head movement, voice command, and hand shake; low operating power | Visualization, language interpretation, communication, task coordination | Surgery, aerospace, logistics, education |
| Fitness tracker | High accuracy | Navigation | Healthcare |
| Smart clothing | Data are obtained by body sensors and actuators | Heart rate, daily activities | Medicine, military, logistics |

connectivity and data sharing between devices and systems [16]. Overall, wearable computing and networks have the potential to transform many aspects of our lives, from healthcare and fitness to safety and productivity. As technology continues to evolve, we can expect to see even more innovative uses for wearable devices in the future [17].


8 Conclusion

Wearable technology has the potential to transform many aspects of society, including healthcare, entertainment, sports, and education. The technology behind wearable devices, including low-power wireless communication, energy harvesting, and sensors, has enabled the development of devices that can monitor and track various aspects of our lives. Wearable devices have numerous applications in various fields, but there are also several challenges that must be addressed to ensure their widespread adoption, including privacy and security, interoperability, power consumption, and social acceptance. As wearable technology continues to evolve and improve, it is likely to become even more integrated into everyday life, enabling new applications and enhancing our quality of life.

References

1. Mattern F, Floerkemeier C (2010) From the internet of computers to the Internet of Things. In: Brauer RD (ed) From active data management to event-based systems and more. Springer, Berlin Heidelberg
2. Krumm J (2009) A survey of computational location privacy. Pers Ubiquit Comput 13(6):391–399
3. Brewster SA (2012) Overview of wearable technology for people with disabilities. J Neuroeng Rehabil 9(1):1–9
4. Han J, Zheng Y, Chen L (2012) Smart city and the applications. In: Proceedings of the international conference on smart city and green computing. IEEE
5. Pentland A (2009) The prosocial fabric of the internet of things. Sci Am 301(4):40–45
6. Hong J, Landay JA (2004) An architecture for privacy-sensitive ubiquitous computing. In: Proceedings of the international conference on ubiquitous computing. ACM
7. Schiele B, Larson KR (2003) Wearable computing and context-awareness. Pers Ubiquit Comput 7(2):77–79
8. Stroulia E (2010) Pervasive computing: a research area evolving towards new challenges. J Ambient Intell Smart Environ 2(1):1–3
9. Gururajan S, Jaeger B (2016) Using smart wearable devices for monitoring physical and cognitive health in older adults. In: Proceedings of the international conference on information and communication technologies for ageing well and e-Health
10. Gu T, Wu J, Luo L (2014) Wearable devices: a new approach to manage health and fitness. J Healthc Eng 5(1):1–8
11. Sigg S, Waldhorst OP (2007) Context-aware mobile computing: affordances of space, social awareness, and sensor technology. In: Proceedings of the international conference on mobile data management. IEEE
12. Fitzpatrick K, Fanning JD, Carlson MJ (2016) My digital health: how wearable technology is changing healthcare. IEEE Pulse 7(2):18–23
13. Mainetti G, Patrono L, Sergi I (2014) Evolution of wireless sensor networks towards the internet of things: a survey. In: Proceedings of the international conference on future Internet of Things and cloud. IEEE
14. Zhang P, Chen S, Chen S (2017) Wearable devices for personalized health monitoring and management. J Healthc Eng 2017:1–12
15. Grinter RE, Palen PL The new landscape of mobile computing research: reflections on the 2007 international conference on ubiquitous computing. Pers Ubiquit Comput 11(4)


16. Sachdeva RK, Bathla P, Rani P, Kukreja V, Ahuja R (2022) A systematic method for breast cancer classification using RFE feature selection. In: 2022 2nd international conference on advance computing and innovative technologies in engineering (ICACITE), pp 1673–1676. https://doi.org/10.1109/ICACITE53722.2022.9823464 17. Kumar Sachdeva R, Garg T, Khaira GS, Mitrav D, Ahuja R (2022) A systematic method for lung cancer classification. In: 2022 10th international conference on reliability, infocom technologies and optimization (Trends and Future Directions) (ICRITO). Noida, India, pp 1–5. https://doi.org/10.1109/ICRITO56286.2022.9964778

Power of Image-Based Digit Recognition with Machine Learning Vipasha Abrol, Nitika, Hari Gobind Pathak, and Aditya Shukla

Abstract Digit character recognition is the task of recognizing and understanding written characters in images or scanned documents. This work uses machine learning techniques such as SVM, KNN, and ANN to gain insights from large datasets of handwritten character examples. The purpose of the recognition process is to accurately convert images of written characters into digital representations that can be used for further processing or analysis. This technology is widely used in many applications such as OCR, signature recognition, and handwriting recognition. The main challenge in character recognition is developing an algorithm that can correctly distinguish characters while remaining robust to variations in pattern, size, and orientation. This is a challenge because handwriting varies between writers and there is often noise and distortion. The algorithm must be able to handle different types of input, such as scanned images, tablet input, and camera images. One way to solve this problem is to use machine learning techniques such as Support Vector Machines (SVM), KNN, and Artificial Neural Networks (ANN). The main purpose of this research is to provide an efficient and effective way to recognize handwritten digits. Although today’s models and ideas are powerful, handwriting recognition remains an important research problem, and new ideas and methods are constantly being produced. The results in this article show that the K-Nearest Neighbor (KNN) classifier achieves the highest accuracy of 97.1%.

Keywords Handwriting recognition · Image recognition · Computer vision · K-nearest neighbor (KNN) · Neural network architecture

V. Abrol (B) · Nitika · H. G. Pathak · A. Shukla Chandigarh University, Gharuan, India e-mail: [email protected] Nitika e-mail: [email protected] H. G. Pathak e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_26


1 Introduction

Digit recognition is the ability to identify, perceive, and classify digits in images. A digit recognition system works by detecting and analyzing the digits in an input image and then converting that image to machine-readable code or ASCII format [1]. It is the study of how a system analyzing its environment can distinguish the digits of interest and make correct predictions about them [2]. This is a difficult task because of variability in handwriting. Despite the development of various algorithms, recognition accuracy is still unsatisfactory. This is because everyone has a unique writing style, and it is difficult to account for all the variations [3]. In addition, factors such as handwriting quality, writing speed, and the type of pen or paper used can affect recognition. Therefore, continuous research is needed to improve the performance of digit recognition algorithms [4]. There are two modes of handwriting recognition: online and offline [5]. Online handwriting recognition systems recognize handwriting as it is written, usually with a pen or stylus; these systems are generally used in devices such as tablets and smartphones. Offline handwriting recognition systems, on the other hand, recognize handwriting from scanned or digital images and are often used for document digitization and archival applications [6]. Similar to how electricity transformed the economy in the nineteenth and twentieth centuries, AI has become a central technology in many areas [7]. Digit recognition in machine learning is an image classification task whose purpose is to identify and recognize digits (0–9) in handwritten text or images. This capability is mainly used in document transcription, form processing, licence-document handling, and other tasks.
The following are the steps in optical digit recognition:
• Image preprocessing: The input image is preprocessed to improve text quality, for example by adjusting brightness and contrast and cropping the image to remove unnecessary background. This step is important because it helps improve the performance of the subsequent OCR steps. Image preprocessing techniques such as image binarization, noise reduction, and deskewing are often used to improve text quality.
• Text segmentation: The image is then segmented into regions that contain text. This step is used to separate the text from the background and other non-textual elements in the image. Text segmentation is usually performed using techniques such as connected component analysis, which groups together pixels that are connected to each other based on their colours.


• Character and text recognition: This step involves extracting features important for digit recognition from the preprocessed image and then processing the image to recognize individual characters. It generally uses machine learning algorithms such as support vector machines (SVM), neural networks, and k-nearest neighbors (k-NN) to classify characters according to their shapes and similar properties. The model is trained on a large dataset of character images.
• Classification: The extracted features are used to train a classifier, such as a neural network, that can recognize the digit contained in the image. The system then outputs the final text in a format that other applications can use, such as plain text or files.

In this paper, we propose a new method for handwritten digit recognition using the KNN algorithm. Our main contributions are as follows:
a. We demonstrate the feasibility of using the KNN algorithm for handwritten digit recognition and show that it can achieve high accuracy rates.
b. We analyse the performance of the KNN algorithm in detail on different datasets and compare it with other state-of-the-art methods.
c. We evaluate our model using precision, recall, and F1-score in addition to accuracy.
d. We evaluate the proposed method on various benchmark data and show that it outperforms other methods in terms of accuracy and computational efficiency.

The remainder of this paper is organized as follows. Related studies are described in Sect. 2, Sect. 3 presents the proposed model, Sect. 4 presents the experimental results and discussion of the study, and Sect. 5 concludes the article.
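The binarization and connected-component segmentation steps described above can be sketched in plain Python. The threshold value and the use of 4-connectivity are illustrative choices, not prescriptions from the text:

```python
def binarize(img, threshold=128):
    """Threshold a grayscale image (rows of 0-255 ints) so that dark
    (ink) pixels become 1 and light (background) pixels become 0."""
    return [[1 if px < threshold else 0 for px in row] for row in img]

def count_components(binary):
    """Count 4-connected components of 1-pixels via iterative flood fill."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] == 1 and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# Two separate dark strokes on a white background:
img = [[0, 255, 0],
       [0, 255, 0]]
print(count_components(binarize(img)))  # 2
```

In a full OCR pipeline, each connected component would then be cropped and passed to the character classifier.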

2 Related Work

Babu et al. [8] and Priya et al. [6] introduce a new method for offline handwritten digit recognition based on features that do not rely on size normalization or refinement. A KNN classifier is used to classify digits after determining the minimum Euclidean distance. Zitouni et al. [9] and Shamim et al. [10] provide an offline digit recognition method based on various machine learning techniques. The main purpose of their work is to guarantee the validity and reliability of the methods used to recognize handwritten digits; WEKA has been used to recognize digits with various machine-learning techniques. Ali et al. [11] advance digit recognition by providing high accuracy and fast computation. Their study uses a convolutional neural network as the classification system, using the DL4J framework to recognize handwritten digits from MNIST with


appropriate training and testing methods. Zhang et al. [12] proposed a new digit recognition method based on a Convolutional Neural Network (CNN). The network is trained on many similar images to learn the locations of the written digits and improve recognition performance.

3 Methodology

Handwritten digit recognition is a task that involves training a machine-learning model to recognize and classify handwritten digits. The training process begins with a dataset of digit images labelled with the correct digits (0–9). This data is preprocessed and turned into a set of features that can be fed into machine learning algorithms. The machine learning algorithm then learns the relationship between the image data (features) and the target (digit label) by adjusting its parameters so that it can predict the category of new images. The algorithm is presented with various handwritten digit examples during training and makes predictions based on these examples. It then adjusts its parameters based on the accuracy of its predictions to improve over time, as shown in Algorithm 1. The choice of machine learning algorithm, feature extraction method, and preprocessing steps can all impact the performance of the model, as shown in Fig. 1. It is important to consider these factors carefully and to perform extensive testing and evaluation of the model to ensure that it generalizes well to new, unseen data. The proposed method for constructing a handwritten digit recognition system includes the following steps:
• Pre-processing: The MNIST database is widely used to train learning models in computer vision. It contains 60,000 28×28 grayscale images of handwritten digits (0–9) with their labels. Each image is represented by a 28×28-pixel matrix, where each pixel value is an 8-bit integer between 0 and 255 representing the intensity of the pixel. The following are the most common preliminary steps performed on MNIST data before training a digit recognizer:
• Normalization: Pixel values in an image are usually normalized to the range 0–1 to help improve the accuracy of the model. This is done by dividing each pixel value by 255.
• Reshaping: The 28×28 matrices representing the images are reshaped into 1D arrays of 784 elements, as most machine learning algorithms take 1D arrays as input.
• One-hot encoding: The target labels are usually one-hot encoded to represent each digit as a vector of 10 elements, where each element is either 0 or 1. For example, the label “5” is represented as [0, 0, 0, 0, 0, 1, 0, 0, 0, 0].


Fig. 1 Block diagram of components in Machine learning

• Splitting into training and validation sets: The available training data is typically split into a training set and a validation set. The validation set is used to assess the model’s performance during training, whereas the training set is used to train the model. This helps prevent overfitting, a condition in which a model performs well on training data but badly on new, unseen data.
• Data augmentation: Data augmentation techniques such as rotation, scaling, and translation can be applied to the data to prevent overfitting and improve the performance of the model.

It is important to take the right preliminary steps to ensure that the model can learn relevant features from the data and generalize to new data. In the context of KNN, the most common type of chart is the scatter plot [13], which is used to visualize the distribution of data points. In the case of KNN, data points can be represented by a two-dimensional


scatterplot. For example, if we have images and their corresponding pixel values, we can plot the intensity of the first and second pixels of each image. This creates a scatterplot in which each data point represents an image and points are coloured by the corresponding digit. Another type of graph used with KNN is the error plot, which shows the accuracy of the model as a function of K. The error plot shows the training error and validation error for different values of K: training error is the number of misclassified samples in the training set, and validation error is the number of misclassified samples in the validation set. The error plot can help determine the optimal K value by choosing a K that minimizes validation error. In summary, scatter plots and error plots are often used to visualize the results of KNN models and determine the optimal value of K, as shown in Fig. 2. Classification is the process of assigning an input to a given class according to its properties. In a digit recognition system, the input is the image of a digit and the task is to assign a digit label to the image. The proposed method uses the KNN classification algorithm to classify images of MNIST digits in the test set using vectors from the training database.
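The preprocessing steps described earlier (normalization, reshaping, one-hot encoding) might be sketched with NumPy as follows; `preprocess` is an illustrative helper, not a function from any particular library:

```python
import numpy as np

def preprocess(images, labels, num_classes=10):
    """Normalize pixel values to [0, 1], flatten 28x28 images into
    784-element vectors, and one-hot encode the labels."""
    x = images.astype(np.float32) / 255.0            # normalization
    x = x.reshape(len(images), -1)                   # 28x28 -> 784
    y = np.eye(num_classes, dtype=np.int64)[labels]  # one-hot encoding
    return x, y

# A single fake 28x28 image labelled "5":
imgs = np.full((1, 28, 28), 255, dtype=np.uint8)
x, y = preprocess(imgs, np.array([5]))
print(x.shape, y[0].tolist())
# (1, 784) [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```

The same transformation would be applied identically to the training, validation, and test splits so the classifier always sees data in the same form.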

Fig. 2 Flow diagram of optical digit recognition


Fig. 3 Illustrative example to show graph of KNN

A small K value means that the model is more sensitive to outliers and noise in the data and therefore more prone to overfitting. This means that the model may make incorrect predictions for samples far from their nearest neighbors, as shown in Fig. 3, resulting in generalization errors. The KNN algorithm’s operation may be summarised in the following steps:
a. Calculate the distance: Using a distance metric such as Euclidean distance or Manhattan distance, calculate the distance between the new data point and all of the data points in the training set.
b. Select the K-nearest neighbours: Using the computed distances, select the K-nearest neighbours of the new data point.
c. Assign weights: Based on their distances from the new data point, assign weights to the K-nearest neighbours. Closer neighbours are given more weight.
d. Make the prediction: For classification tasks, assign the new data point the majority class label among its K-nearest neighbours. For regression tasks, assign the new data point the average of the K-nearest neighbours’ values, weighted by their distance.

Depending on the objective, evaluate the algorithm’s performance using performance measures such as accuracy, precision, recall, F1-score, or mean squared error. The KNN algorithm’s success may be influenced by a number of parameters, including the value of K, the distance metric employed, the data normalization method, and the quality of the input data. To obtain the best results, it is necessary to experiment with different values of K and distance measures, as well as to pre-process the data suitably.
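Step (a) above, the distance computation, can be written directly; both metrics below follow their standard definitions:

```python
import math

def euclidean(a, b):
    """Straight-line (L2) distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block (L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

print(euclidean([0, 0], [3, 4]))  # 5.0
print(manhattan([0, 0], [3, 4]))  # 7
```

For flattened MNIST images, `a` and `b` would simply be the 784-element pixel vectors produced by preprocessing.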


Algorithm 1 Handwritten digit recognition system using KNN
Input: set of pre-labelled images of handwritten digits
Output: predicted class label for each new, unseen image of a handwritten digit

# Load the training dataset of handwritten digits and their labels
training_data = load_training_data()
training_labels = load_training_labels()
# Load the testing dataset of handwritten digits
testing_data = load_testing_data()
# Define the number of nearest neighbors (K)
K = 5
predictions = []
# Loop over each sample in the testing set
for test_sample in testing_data:
    # Calculate the Euclidean distance between the test sample
    # and each training sample
    distances = []
    for j, train_sample in enumerate(training_data):
        distances.append((calculate_distance(test_sample, train_sample), j))
    # Sort the distances in ascending order and select the K nearest neighbors
    distances = sorted(distances, key=lambda x: x[0])
    nearest_neighbors = distances[:K]
    # Determine the majority label among the nearest neighbors
    label_counts = defaultdict(int)
    for _, index in nearest_neighbors:
        label_counts[training_labels[index]] += 1
    # Assign the majority label to the test sample
    predictions.append(max(label_counts, key=label_counts.get))
# Evaluate the accuracy of the model by comparing the predicted labels
# to the true labels in the testing set
accuracy = calculate_accuracy(predictions, true_labels)

Conversely, a large K leads to high bias and low variance, making the model less sensitive to the structure of the data and producing less accurate predictions. In practice, it is important to choose the K value carefully, taking into account the trade-off between bias and variance, and to test and evaluate the performance of models with different K values. A practical approach is to try several K values and measure performance for each; the K value that gives the best overall performance is then chosen as the best value for the classifier. As a result, classification is a crucial step in digit recognition, and the choice of classifier (such as KNN) and hyperparameters (such as K) plays an important role in the accuracy of the system.
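That K-selection procedure — try several candidate values and keep the one with the highest validation accuracy — can be sketched on a toy one-dimensional dataset. All names and data here are illustrative, not from the paper's experiments:

```python
from collections import Counter

def knn_predict(train_x, train_y, query, k):
    """Majority vote among the k training points closest to `query`."""
    dists = sorted((abs(x - query), y) for x, y in zip(train_x, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def best_k(train_x, train_y, val_x, val_y, candidates):
    """Return the candidate K with the highest validation accuracy
    (ties broken in favour of the smallest K)."""
    scores = {}
    for k in candidates:
        preds = [knn_predict(train_x, train_y, q, k) for q in val_x]
        scores[k] = sum(p == t for p, t in zip(preds, val_y)) / len(val_y)
    return max(sorted(candidates), key=lambda k: scores[k])

# Toy 1-D data: two well-separated clusters
train_x, train_y = [0, 1, 2, 10, 11, 12], [0, 0, 0, 1, 1, 1]
print(best_k(train_x, train_y, [0.5, 11.5], [0, 1], [1, 3, 5]))
```

With real image data the same loop would run over the held-out validation split, and the error-plot discussion above corresponds to plotting `scores` against each candidate K.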


4 Dataset Description

The MNIST dataset is a large database of handwritten digits used to train and test machine learning algorithms, particularly in optical character recognition (OCR). It was developed in the early 1990s by Yann LeCun and colleagues at AT&T Bell Laboratories and New York University. The database contains 60,000 training images and 10,000 test images of 28 × 28 pixels, each representing a digit from 0 to 9 (Fig. 8). The MNIST (Modified National Institute of Standards and Technology) dataset was created by extracting and modifying training and testing data from the NIST SD3 and SD7 datasets. The original data was used to test and evaluate optical character recognition (OCR) systems during the first OCR systems census conference.

5 Results and Discussion

In this research paper, a digit recognition system was developed for the ten digit classes. The success of any recognition system lies in the proper extraction of features and the selection of an appropriate classifier, as illustrated in Figs. 4, 5, 6 and 7. The proposed algorithm addresses these two aspects and balances recognition accuracy against computational complexity. The overall recognition accuracy is 97.1%. The results of our research showed that we were able to develop an effective digit recognition system using machine learning, with a high level of accuracy. Specifically, we achieved an accuracy of 97.1% in recognizing handwritten digits using the KNN algorithm. Accuracy is a performance metric that measures the proportion of correctly classified instances among all instances in the dataset. In our study, we used a test dataset to evaluate the accuracy of our system, which measures how well the system can generalize to new, unseen data (Fig. 9). In addition to accuracy, we also evaluated the performance of our system using other metrics such as precision, recall, and F1-score. Precision measures the proportion of true positive predictions among all positive predictions, while recall measures the proportion of true positive predictions among all actual positive instances in the

Fig. 4 Handwriting recognition of 2


Fig. 5 Handwriting recognition of 1

Fig. 6 KNN graph illustration using an example

dataset. The F1-score is the harmonic mean of precision and recall, which provides a balanced evaluation of the system’s performance. Our system achieved high scores in all of these metrics, indicating that it was able to accurately identify handwritten digits with a high level of precision and recall. This is a significant achievement, as accurate digit recognition has numerous practical applications such as improving accessibility for visually impaired individuals and automatic form processing. Overall, our study demonstrates the effectiveness of the KNN algorithm for digit recognition tasks and highlights the potential of machine learning for solving real-world problems. However, it is important to note that the performance of our system can be further improved by exploring other machine-learning algorithms or by improving

Power of Image-Based Digit Recognition with Machine Learning


Fig. 7 Bar graph of count of labels from 0 to 9

Fig. 8 MNIST dataset containing handwritten digits in varying styles

the quality of the input data. Future research could explore these avenues of improvement and extend the system's capabilities to recognize other types of handwritten characters or more complex shapes. This analysis is unique in several ways. Unlike other systems, it does not require normalization of the input size and is not affected by the writer's style. In addition, the algorithm is fast and accurate, making it a strong candidate for further development and improvement. The purpose of this article is to provide a starting point for establishing good optical character recognition (OCR) for English. In the future, we plan to update the algorithm and create a


Fig. 9 Demonstrative instance showcasing the results of a handwriting recognition system powered by machine learning

stronger English OCR with more recognition codes, perhaps using fewer features, and without relying on the classification process.
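The KNN classification step described above can be sketched in a few lines of plain Python. This is an illustrative toy on hand-made 2-D points, not the authors' implementation (which operates on features extracted from digit images); the function name and data are invented for the example.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=7):
    """Classify `query` by majority vote among its k nearest training points."""
    # Indices of training points sorted by Euclidean distance to the query.
    order = sorted(range(len(train)),
                   key=lambda i: math.dist(train[i], query))
    nearest = [labels[i] for i in order[:k]]
    # Majority vote among the k nearest neighbours.
    return Counter(nearest).most_common(1)[0][0]

# Toy data: two well-separated clusters standing in for digit feature vectors.
train = [(0, 0), (1, 0), (0, 1), (1, 1), (9, 9), (8, 9), (9, 8), (8, 8)]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

print(knn_predict(train, labels, (0.5, 0.5), k=7))  # → 0 (first cluster)
```

With k = 7 and only four points per cluster, the vote is decided 4 to 3 by whichever cluster the query sits in, which illustrates why the choice of K matters when classes are small.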

6 Conclusion

Finally, the application of the KNN method resulted in the successful construction of a digit recognition system utilising a machine learning algorithm. The system was trained and tested with a variety of K values, including 5, 6, 7, and 8, and the findings revealed that K = 7 was the best value for obtaining high accuracy in recognizing handwritten numerals (Fig. 10). The accuracy of the system was examined using multiple performance measures such as precision, recall, and F1-score, which revealed that the overall accuracy of the system was high. The capacity of the system to recognize handwritten numbers has a broad range of applications, including automatic form processing, digitalization of historical data, and enhancing information accessibility for visually impaired people. Overall, the construction of this digit identification system utilising machine learning has proved the efficacy of the KNN algorithm and its applicability to a wide range of real-world applications. Additional enhancements to the system might be achieved by including additional machine learning algorithms or by enhancing the quality of the input data (Figs. 11 and 12).
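The per-class precision, recall, and F1-score used to evaluate the system can be computed directly from the confusion counts. The following one-vs-rest sketch uses made-up label vectors purely for illustration.

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Compute precision, recall, and F1 for one class (one-vs-rest)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented labels: the classifier misses one instance of the digit 7.
y_true = [7, 7, 7, 7, 1, 1, 2, 7]
y_pred = [7, 7, 1, 7, 1, 1, 2, 7]
p, r, f = precision_recall_f1(y_true, y_pred, positive=7)
```

Here every predicted 7 is correct (precision 1.0) but one actual 7 is missed (recall 0.8), so F1 lands between the two, which is the "balanced evaluation" the paper refers to.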


Fig. 10 Digit recognition accuracy report using KNN

Fig. 11 Handwritten instances of the numerals 2 and 3


Fig. 12 Handwritten instances of the numerals 8 and 6


Open-Source Gesture-Powered Augmented Reality-Based Remote Assistance Tool for Industrial Application: Challenges and Improvisation Chitra Sharma, Kanika Sharma, Manni Kumar, Pardeep Garg, and Nitin Goyal

Abstract In this paper, development theory and procedure are presented to inspect and enhance the pre-existing gesture interaction with mobile Augmented Reality applications, as well as the application of gesture-enabled augmented reality systems in industry. The prototype has been tested using the standard manipulation tasks that are typical in the Augmented Reality context. Also, a comparative analysis is performed with previously existing gesture-based and touch-based input techniques used in various Augmented Reality applications. The task performance data has been evaluated on the basis of in-person user feedback. When tested, the gesture input methods introduced via the current research were found to be better than the existing interaction techniques. Most of the participants preferred the gesture interaction methodology explained within the current research paper for further Augmented Reality based research and development work. Quick response time and a straightforward approach to implementation were some of the most common feedback obtained from the users. The research paper mainly explains the thorough notion behind current and upcoming gesture-based interaction technologies. Also, a brief discussion targeted toward the future implications of this research work is included for later reference and usage. The limitations and the obstacles in the pathway of the implementation of augmented reality in industries have been discussed within this research paper, and attempts have been made to minimize or eliminate the drawbacks.

Keywords Augmented reality · Gesture interaction · Virtual reality

C. Sharma · K. Sharma
Department of Electronics and Communication Engineering, National Institute of Technical Teachers Training and Research, Chandigarh, India
M. Kumar
Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
P. Garg (B)
Department of Electronics and Communication Engineering, Jaypee University of Information Technology, Waknaghat, Solan, Himachal Pradesh, India
e-mail: [email protected]
N. Goyal
Department of Computer Science and Engineering, School of Engineering and Technology, Central University of Haryana, Mahendragarh, Haryana 123031, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_27

1 Introduction

Augmented reality is one of the most rapidly developing technologies to emerge during the early phase of Industrial Revolution 4.0 [1]. Augmented reality is a child technology that comes under the umbrella technology called extended reality, as depicted in Fig. 1. The fundamental principle that enables developers to design, craft, build, and deploy an augmented reality-based application in real life is to overlay 3D graphics or objects onto the surface of physical objects present in the real world. The prime focus of all the extended reality technologies, including augmented reality (AR), virtual reality (VR), and mixed reality (MR), is to enhance the user's experience of the real world by introducing a layer of one or more artificially created objects into the surroundings. In augmented reality, this is generally done by performing a creative and intuitive amalgamation of the artificially created digital objects and the physical objects naturally present in the real environment. The prime merit of preferring augmented reality over virtual reality and mixed reality is that it does not alter the appearance of reality. The key setbacks that tend to obstruct the development of augmented reality-based applications and may hamper the user experience are the unnatural appearance of the built environment and the lack of flexibility in the methods of interacting with the digital objects [2]. To date, the deployment of augmented reality-based applications has been a critical task, as interaction with AR applications is confined to the deployment device's screen and console buttons, which leads to a complex and hectic user interface. To address this, researchers have devised an advanced interaction technique that allows the user to interact with the application using natural human behaviours and commonly used hand gestures.
Such an advanced technique, i.e., gesture-based interaction methodology, tends to eliminate or minimize the need for human-to-machine interaction via the screen of the deployment gadgets [3]. A variety of real-life applications make use of augmented reality technology, such as cinematography, videography, medicine, education, testing and troubleshooting, entertainment, gaming, architecture, etc. One of the sectors most likely to adopt augmented reality-based solutions in the near future is remote assistance and training. Such areas of industry require the physical presence of a trained professional in the department with the trainees or learning professionals at all times, which is a tedious and time-consuming arrangement. However, numerous augmented reality-based solutions are being proposed by researchers to reduce the complexity of training processes without affecting the speed of production or delivery of services, or the accuracy parameters [4–6]. The troubleshooting and bug resolution phase of an industry is also facilitated

Fig. 1 Classification of extended reality (the umbrella technology XR comprises the sub-technologies AR, VR, and MR)

by employing such solutions. The classification of extended reality in terms of its sub-technologies is depicted in Fig. 1.

2 Related Work

To date, tech enthusiasts have witnessed and experienced gesture-based interaction in extended reality-based applications via a variety of depth-sensing and gesture-tracking modules and gadgets, such as the Microsoft Kinect and the Leap Motion Controller [7]. Such gadgets are equipped with multiple depth sensors and cameras that help track the movement and current position of the user's hands in space and generate the input for the algorithm accordingly. Kour et al. [8] proposed a procedure to select accurate sensors for various IoT applications; this technique can also be used for VR devices. A few of the software development kits allow direct interaction between the user's hands and the application with the help of a few pre-fabricated gestures such as point, pinch, swipe, etc. The existing technologies do provide an approach to gesture-based interaction in Augmented Reality, but lack efficiency in certain respects. The factors primarily responsible for introducing inefficiency into the system include the user's responsibility to bear the additional cost of the implementation gadgets or devices and the slow response rate of the applications [9]. The high-cost factor tends to constrain the popularity and reach of the application and technology, because the number of users who can afford the deployment gadgets and access the application comfortably is reduced. The slow response rate of the


application introduces an inability to establish a comfortable user-machine interface and causes general discomfort to the user while accessing different features of the application. The methods generally used by researchers and developers to implement gesture-based interfaces typically require the user to wear a trackable marker on their hands so that the position and motion of the hand can be traced and the impression formed by the hand can be visualized. The key features required by gesture-based augmented reality applications are the spatial coordinates and the pose [10].

3 Problem Statement

The current practices utilized by industry professionals in the training and assistance departments are executed in physical or manual mode, which tends to cause quite a downfall in production. The lags in the existing operations can be described in three points as follows:
1. The dependency of the learning professional on senior engineers for continuous and frequent monitoring and guidance is usually inconvenient and hectic.
2. The primitive nature of the learning resources, which do not provide an interactive environment to enhance the understanding of the trainee, ultimately affects the performance of the organization.
3. The existing gesture-operable devices and instruments tend to show glitches during operation, as the gestures are not quite stable: the attached augmented object tends to fall off if the gesture goes undetected even for a fraction of a second.
Also, the limitation of the existing gesture interaction frameworks can be comprehended easily by observing the inability of the users to directly establish contact with the augmented or digital objects [11]. This happens because the universal laws and phenomena of physics are directly applicable to real-world objects such as the user's hand. In the best case, the application tends to offer a disruptive user experience and user interface, while in the worst case, the bug tends to cause an application crash.

4 Research Proposal and Description

Our augmented reality remote support solution makes use of a mobile application that can be deployed on any type of portable device, such as a mobile phone or a tablet, carried by an on-site employee who connects with a remote expert. The on-site worker or the concerned employee can access the application by arranging the dials of the augmented locking mechanism as per the unique key, the correct combination of numbers, provided by the remote expert. This key can be altered as per the requirement


to ensure privacy and safety, and is primarily issued by the remote expert to allow only authenticated personnel to access the information and to maintain the connection with the on-site employee. The application has three main execution phases or features, namely gesture interaction, multi-user connectivity, and authenticated access. The user with the authentication key is able to visualize as well as access these characteristic features. Upon entering the application's environment, the user is allowed access to operations such as placing or removing augmented reality objects, writing in the environment with the help of a three-dimensional doodle, and leaving textual notes in the surroundings. All of these operations are depth-enabled and are anchored in place. This means that even after the user finishes drawing structures, writing textual data, or placing augmented objects in the surroundings, the digital objects hold their positions and do not get displaced with the movement of the mobile phone. Also, the multi-user connectivity feature of the application directly links the application data and features to the cloud, thereby allowing different users to access and refer to the same piece of information at the same time from their respective devices [12]. There also exists a provision that allows the user to play a live assembly or simulation video of different industrial machinery parts to provide troubleshooting and operation assistance. Here, augmented reality-based digital objects facilitate the user with different AR annotations in three-dimensional space, thereby providing a labeled visual mode of communication.

4.1 Proposed Architecture

The augmented reality-based remote assistance system proposed in this work has been developed with the help of various open-source software development packages and kits such as Unity 3D, Vuforia, the ManoMotion SDK, and AR Foundation. Unity 3D is used as the development platform, AR Foundation to obtain the features and values of the three-dimensional space, and the ManoMotion SDK to ensure a smooth and uninterrupted interfacing medium between the user and the machine [13]. The architecture of the project primarily consists of the user gadgets/devices and the cloud server. Each gadget equipped with the augmented reality-based remote assistance system is required to input the encryption code to enter the environment. The objects placed by the user in the environment get directly uploaded to the cloud server and can be retrieved as per the user's command. The Google API server is used for this purpose; however, the services provided by other cloud service providers can also be used. The internal architecture of the application contains two main parts, namely, the tracking mechanism and the interface methodology. The tracking mechanism in this case is purely markerless: it scans the surroundings and extracts the detectable features for the accurate deployment of the augmented objects in the surroundings [14]. After the extraction of features, the application casts out an imaginary ray into the surroundings.


The ray, after hitting one or more of the detected features in the surroundings, provides the exact location or position where the augmented object is required to be placed. In a similar way, different forms of gestures such as grab, pinch, swipe (left, right, or vertical), point, etc. are used to form a reliable interface medium between the user and the machine. The application does not require the user to be a skilled operator, as the interface medium is quite user-friendly in nature.
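AR Foundation performs this hit test natively inside Unity; purely to illustrate the geometry of the raycast placement step, a ray-plane intersection can be sketched as follows (all names and coordinates here are hypothetical).

```python
def ray_plane_hit(origin, direction, plane_point, plane_normal, eps=1e-9):
    """Return the point where a ray hits a detected plane, or None if the ray
    is parallel to the plane or the plane lies behind the ray origin."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < eps:
        return None  # ray runs parallel to the plane, no hit
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    if t < 0:
        return None  # the plane is behind the camera
    # Walk t units along the ray to get the placement point.
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking straight down onto a floor plane at y = -2.
hit = ray_plane_hit((0, 0, 0), (0, -1, 0), (0, -2, 0), (0, 1, 0))
```

The returned point is where the augmented object would be anchored; a real AR session repeats this test every frame against the planes the tracking mechanism has detected.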

4.2 Application Usage Description

The application opens with an augmented locking mechanism present in mid-air. The user here is referred to as a client. Firstly, the client is required to arrange the dials of the locking mechanism so that the numbers on the front face of the lock dials match the authentication key provided by the concerned industrial department head. For this purpose, the client makes use of a grab gesture that rotates the dials of the lock in steps; when he/she releases the game object, the rotary motion of the digital object freezes. On entering the authentication key, the user reaches the interface screen of the application. The internal system of the application automatically keeps tracking the surroundings at frequent intervals. The functionality of the application can be easily understood from the flow diagram given in Fig. 2. There are three main features of the application:

AR Doodle The line renderer game object scans the mid-air features of the surroundings and allows the user to craft the doodle in place. The features of the AR doodle include anchorage and depth. If the client intends to highlight any corrupt or malfunctioning item in the surroundings, he/she can directly mark the position of such an object in real life with the help of a digital doodle.
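The stepped dial behaviour of the authentication lock can be approximated by snapping the rotation angle at release to the nearest of ten dial positions. This is a hypothetical sketch of the idea, not the application's actual Unity code; the step count and angles are invented.

```python
def dial_digit(angle_degrees, steps=10):
    """Snap a continuous rotation angle to one of `steps` dial positions (0-9)."""
    step_size = 360 / steps
    return round(angle_degrees / step_size) % steps

def unlocked(dial_angles, key):
    """The lock opens only when every snapped dial digit matches the key."""
    return [dial_digit(a) for a in dial_angles] == list(key)

# Three dials released near 144°, 252°, and 35° snap to the digits 4, 7, 1.
print(unlocked([144.0, 252.0, 35.0], key=(4, 7, 1)))  # → True
```

Snapping at release is what makes the dial "freeze" on a definite digit even though the grab gesture drives it continuously.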

Fig. 2 Application design of proposed work (splash screen, AR authentication lock, and three interfaces leading to the AR Doodle, AR Object, and AR Notes features)


AR Object Placement To place an object in the surroundings, the plane detection option of the AR Foundation package is used, which continuously tracks the base area where the object can be placed. Here, anchorage helps hold the object in position. This particular feature of the augmented reality-based remote assistance system enables the client to place warning signs or check marks onto the objects and machinery present in the environment. The main advantage of this particular feature is the minimal requirement for on-paper documentation [13], which makes it an environment-friendly alternative for maintaining records and raising alerts.

AR Notes In a similar manner to the AR doodle, the AR notes feature helps the user make and paste notes in the environment. Detailed information about malfunctions, failures, troubleshooting guidelines, and other important matters can be listed on the notes, and the notes can be placed on the surface of real-life objects present in the surroundings.

5 Methodology and Hypotheses Development

The framework and hypothesis design and development process take the existing research methodologies and applications as the base for development. AR-based projects have so far been employed in industries such as entertainment, gaming, architecture, engineering, medicine, etc. The main aim of the research is to raise the level of application and usage of Augmented Reality with gesture interaction in other domains. The limitation of the existing gesture interaction frameworks can be comprehended easily by observing the inability of the users to directly establish contact with the augmented or digital objects. This happens because the universal laws and phenomena of physics are directly applicable to real-world objects such as the user's hand. It introduces a significant inefficiency into the system, as the augmented object comes off quickly if the application fails to recognize the gesture even for a fraction of a second. The technique employed in this particular research makes use of an augmented object that is permanently fixed to the central point of the dedicated gesture. The object can be a small three-dimensional box or similar. The object is affixed to the central point of the user's hand gesture in such a way that the object follows the hand with every little movement. With every detected change in the position or posture of the gesture, the position of the attached augmented object is updated as well. An attempt is also made to reduce the interaction complexity between the human hand and the augmented object or digital graphic.
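The tracker-object idea above can be sketched as a small per-frame state machine: the augmented object stays attached to the last known gesture centre, so brief detection dropouts do not dislodge it. Class and parameter names are hypothetical; the real system runs inside Unity with the ManoMotion SDK supplying the detections.

```python
class GestureTracker:
    """Keep an augmented object attached to the hand's last known position,
    tolerating brief detection dropouts instead of dropping the object."""

    def __init__(self, grace_frames=15):
        self.grace_frames = grace_frames
        self.missed = 0
        self.position = None  # last known gesture centre

    def update(self, detected_centre):
        """Feed one frame; `detected_centre` is None when detection fails.
        Returns True while the object should remain attached."""
        if detected_centre is not None:
            self.position = detected_centre
            self.missed = 0
        else:
            self.missed += 1
        # The object stays attached while within the grace window.
        return self.missed <= self.grace_frames and self.position is not None

tracker = GestureTracker(grace_frames=2)
tracker.update((1.0, 2.0))       # hand seen: object snaps to its centre
attached = tracker.update(None)  # one missed frame: object still attached
```

Without the grace window, a single missed frame would detach the object, which is exactly the instability the problem statement describes.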


6 Case Study/Evaluated Result

Of all the latest trends and technologies initiated under Industrial Revolution 4.0, augmented reality is one of the most promising fields of research and development, with the potential to reshape the existing work culture and environment. As per a recent survey, extended reality technologies are among the most interesting technologies from the application, development, and investment perspectives. Technical glitches and complex interface mediums have limited the use of augmented reality-based applications to low-level applications only; this is the main reason that has prevented consumers from deploying augmented reality-based applications in industries. However, with time such limitations have been rectified, due to which Augmented Reality can now be counted among the best tools for industry. This can help researchers and industry professionals understand and resolve common problems in industries. Figure 3a, b depicts the working of the grab gesture. It makes use of a tracker object that has the necessary physics components attached to it to interact with the augmented object placed in the surroundings. The tracker game object is designed in such a manner that it continuously detects the motion of the hand even when no gesture is being communicated, thereby improving the user interface with the application. As a result of the current AR-based research, the interface medium of review and inspection can be flexibly changed from manual mode to augmented mode. This tends to ensure the security and safety of data as well, and such a development procedure helps minimize or eliminate the use of paper. This means that the technique is environment-friendly and ecological in nature.
To understand the gaps in the existing gesture-based interaction methodology and to perform a comparative analysis between the existing and the proposed gesture interaction methodology, an open and interactive survey was circulated among a group of 40–50 people through a questionnaire. The key points of the survey

Fig. 3 a, b Implementation of grab gesture using tracker

Fig. 4 Proposed versus existing hand gesture technique (number of users rating easiness, user interface, and refresh rate for the proposed technique versus the existing gesture-based technology)
included the easiness of gesture-holding capability, the comfort of the user with the application interface, and the refresh rate of the operations while switching from one gesture to another. The outcome of the questionnaire is depicted in Fig. 4 and clearly indicates that the proposed gesture-based technique with a tracker is more effective in terms of smooth user interaction and high refresh rate. Although a variety of pre-existing augmented reality-based applications have attempted to resolve the complexity of man-to-machine interaction, here an attempt has been made to completely resolve the interface issue and to add as many features to the application as possible. The most significant outcome obtained from the interactions with end users is the new version of the remote maintenance system described in this paper, along with its main justification. Research and development are still ongoing to transform this application from a lab prototype into an industrial tool.

7 Conclusion

In this paper, an attempt has been made to introduce a new solution to the gaps found in the existing gesture-based interaction methodologies in various available augmented reality-based applications. It has been observed that a majority of users prefer gesture interaction if the gesture is not lost in cases of corrupted computer vision. It was also observed that the percentage of people who preferred the usage of augmented reality-based applications in training and remote assistance rose significantly after the introduction of the proposed interaction methodology in place of traditional screen interaction. User interaction with augmented reality-based applications is quite complicated if the interaction is limited to the device screen or other hardware equipment. The study mainly targets the demonstration of a remote assistance system based on industry-standard hardware and software components. Descriptions of the system architecture, supporting


software components, and user interfaces are provided, which are further jointly optimized with business partners. Additionally, common problems with the use of remote help have been identified and best-practice solutions provided. Low internet connectivity, hands-free devices, security challenges, performance problems, and the stability of annotations are a few of the common issues faced by consumers. Knowing these difficulties in advance can help with better planning for upcoming remote support applications. Finally, the paper-based maintenance instructions have been contrasted with the AR remote support system. The results obtained are positive and indicate that using AR remote help resulted in fewer errors. This supports our presumption that a remote assistance system should provide a set of capabilities that may be customized.

References
1. Bai H, Lee GA, Ramakrishnan M, Billinghurst M (2014) 3D gesture interaction for handheld augmented reality. In: SIGGRAPH Asia 2014 mobile graphics and interactive applications, pp 1–6
2. Aschauer A, Reisner-Kollmann I, Wolfartsberger J (2021) Creating an open-source augmented reality remote support tool for industry: challenges and learnings. Procedia Comput Sci 180:269–279
3. Kulkov I, Berggren B, Hellström M, Wikström K (2021) Navigating uncharted waters: designing business models for virtual and augmented reality companies in the medical industry. J Eng Tech Manage 59:101614
4. Zarantonello L, Schmitt BH (2023) Experiential AR/VR: a consumer and service framework and research agenda. J Serv Manage 34(1):34–55
5. Schiavi B, Havard V, Beddiar K, Baudry D (2022) BIM data flow architecture with AR/VR technologies: use cases in architecture, engineering, and construction. Autom Constr 134:104054
6. Tan Y, Xu W, Li S, Chen K (2022) Augmented and virtual reality (AR/VR) for education and training in the AEC industry: a systematic review of research and applications. Buildings 12(10):1529
7. Eschen H, Kötter T, Rodeck R, Harnisch M, Schüppstuhl T (2018) Augmented and virtual reality for inspection and maintenance processes in the aviation industry. Procedia Manuf 19:156–163
8. Kour K, Gupta D, Gupta K, Anand D, Elkamchouchi DH, Pérez-Oleaga CM, Ibrahim M, Goyal N (2022) Monitoring ambient parameters in the IoT precision agriculture scenario: an approach to sensor selection and hydroponic saffron cultivation. Sensors 22(22):8905
9. Scurati GW, Gattullo M, Fiorentino M, Ferrise F, Bordegoni M, Uva AE (2018) Converting maintenance actions into standard symbols for augmented reality applications in Industry 4.0. Comput Ind 98:68–79
10. Arena F, Collotta M, Pau G, Termine F (2022) An overview of augmented reality. Computers 11(2):28
11. Buddhan AR, Eswaran SP, Buddhan DE, Sripurushottama S (2019) Event-driven multimodal augmented reality-based command and control systems for the mining industry. Procedia Comput Sci 151:965–970
12. Liang H, Yuan J, Thalmann D, Thalmann NM (2015) AR in hand: egocentric palm pose tracking and gesture recognition for augmented reality applications. In: Proceedings of the 23rd ACM international conference on multimedia, pp 743–744


13. Regenbrecht H (2007) Industrial augmented reality applications. In: Emerging technologies of augmented reality: interfaces and design. IGI Global, pp 283–304
14. Saidin NF, Halim NDA, Yahaya N (2015) A review of research on augmented reality in education: advantages and applications. Int Educ Stud 8(13):1–8

Enhancing Biometric Performance Through Mitigation of Sleep-Related Breaches Urmila Pilania, Manoj Kumar, Sanjay Singh, Yash Madaan, Granth Aggarwal, and Vaibhav Aggrawal

Abstract With advancements in technology, the risk of information security threats continues to grow. One potential solution to these challenges is the use of biometric techniques. Biometrics consists of biological measurements and physical characteristics that can be used to identify an individual; for example, fingerprint mapping, facial recognition, and retinal scans are all forms of biometrics. To enhance the security of biometric techniques, we propose to implement sleeping breaches. Sleeping breaches depend on many parameters, such as body temperature, blood pressure, brain activity, and heartbeat, and improve security while the user is sleeping. The measurements of body temperature, blood pressure, brain activity, and heartbeat differ between the active and resting states of the human body. The biometric is not accessible by the user if any of the sleeping-breach parameters mentioned above meets the test condition, resulting in enhanced security.

Keywords Biometric · Body temperature · Heart beats · Blood pressure · Brain

U. Pilania (B) · M. Kumar · S. Singh · Y. Madaan · G. Aggarwal · V. Aggrawal
Computer Science and Technology, Manav Rachna University, Faridabad, India
e-mail: [email protected]
M. Kumar e-mail: [email protected]
S. Singh e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. S. Jain et al. (eds.), Emergent Converging Technologies and Biomedical Systems, Lecture Notes in Electrical Engineering 1116, https://doi.org/10.1007/978-981-99-8646-0_28

1 Introduction

Since its inception in 1980, biometrics has continuously evolved to become a crucial tool for measuring biological features such as fingerprints, iris patterns, or facial features to identify individuals. Biometrics is commonly used to verify and control access to devices, utilizing mathematical algorithms to analyze an individual's physical characteristics. By providing confidentiality, authentication, integrity, and accessibility of data, biometrics ensures the required security of information, protecting


processes, organizations, and infrastructures from potential threats. This technology supports both physical and logical access controls to secure data. Physical access controls enable authorized personnel to access specific infrastructure and buildings, while logical access controls safeguard computers, network services, and information systems from unauthorized users [4].

In recent years, the use of biometrics has steadily increased for a variety of reasons, the primary purpose being to identify and authenticate individuals. The practical applications of biometrics are also expanding rapidly. Many companies initially thought that biometrics was only for government use; however, they quickly discovered that it has applications far beyond the government sector, including airport security, attendance tracking for both government and private organizations, law enforcement, banking, business, access control, and more. Biometrics offers numerous benefits, such as security, authenticity, reliability, and accountability, which motivate young researchers to work on improving biometric systems [5].

The workflow of a biometric system is depicted in Fig. 1. The system takes a sample input in the form of an image or signal, such as the iris, palm, finger, or face. These inputs are then processed to extract features that are compared against registered templates. The extracted features are matched with the stored features, and the similarity between the two determines the quality of the fitting model. Acceptance or rejection is determined by a predefined threshold value for the matching score. However, signal transmission can introduce noise, sampling error, frequency changes, and other factors that affect the system's performance. Additionally, parameters such as facial expression, makeup, angle, rotation, illumination, skin moisture, and others can influence the accuracy of the biometric system [9].
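The accept/reject step of the workflow above can be sketched as a simple threshold test on the matching score. This is only an illustrative sketch: the cosine-similarity measure, the 0-to-1 score scale, and the threshold value 0.85 are assumptions for demonstration, not parameters taken from the paper.

```python
def cosine_similarity(a, b):
    """Matching score between an extracted feature vector and a stored template
    (close to 1.0 for near-identical non-negative feature vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def verify(sample_features, template_features, threshold=0.85):
    """Accept the claimed identity only if the matching score reaches the
    predefined threshold; otherwise reject."""
    return cosine_similarity(sample_features, template_features) >= threshold

# Example: a probe close to the enrolled template is accepted,
# a dissimilar probe is rejected.
template = [0.9, 0.1, 0.4, 0.7]
print(verify([0.88, 0.12, 0.41, 0.69], template))  # True
print(verify([0.10, 0.90, 0.05, 0.00], template))  # False
```

In a deployed system the threshold trades off false acceptances against false rejections, which is why noise and capture conditions mentioned above matter: they shift the score distribution of genuine samples toward the threshold.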

2 Literature Review

The number of smart devices is increasing day by day; in 2022, the count reached approximately 6.8 billion. These devices carry the confidential data of their users and can now be operated through biometric systems, which raises the issue of personal security. In this work, a survey of biometric systems was conducted; the authors selected performance metrics to evaluate the work and, finally, formulated research questions for improving the existing work [1].

The use of biometric techniques has raised issues of authentication, integrity, confidentiality, and dishonesty. The author specifically focused on concerns related to biometric systems and concluded that present-day biometric systems have many issues, which researchers and academicians should resolve at the earliest to improve their performance [2].

This paper summarizes the status of the current biometric system in India. The author focused on creating a framework for developing and describing


Fig. 1 Biometric workflow [7]

protocols used in the biometric system, to avoid bias in data collection and in the evaluation parameters, so that the best results can be obtained at the testing phase [3].

Through the use of smart devices, urban cities are improving their way of living day by day. These smart devices help people work from their homes. Along with the facilities provided by smart devices, some challenges are also associated with them. Biometrics could be used for access control everywhere, but it has issues with security and authentication. The authors of this work reviewed biometric systems with respect to security and privacy. It has been concluded from the literature survey that many issues still exist with smart devices during information transmission [6].

This paper provided a thorough overview of the latest advancements in fingerprint-based biometric systems, with a focus on enhancing security and recognition accuracy. The paper also outlines the limitations of previous research and suggests future work. The study demonstrates that researchers are still encountering difficulties in addressing the two most significant threats to biometric systems, namely attacks on the user interface and on template databases. Researchers are actively working to develop countermeasures against these attacks while maintaining robust security and high recognition accuracy. Additionally, recognition accuracy in unfavorable situations requires particular attention in biometric system design. The paper also highlights related challenges and current research trends in the field [12].

The combination of smart healthcare and IoT has led to a new standard in the application of biometric data. This paper proposed a new standard for using biometric technology to develop smart healthcare solutions using IoT. This standard aims to provide high data access capacity while being easy to use. Through this standard,


Fig. 2 Issues with existing biometric systems: cost, sleeping breaches, data breaches, and false positives and quality

the authors developed a more secure way of accessing IoT based on biometric identification, which could result in significant advances in smart healthcare systems [13].

Following the literature review, some of the open issues are summarized in Fig. 2. A sleeping breach means that, while the user is sleeping, his or her smart device can be activated by taking the person's biometrics; after activation, a hacker can misuse the device. Data breaches can expose secret data to unauthorized users. Sometimes, because of a lack of training, false positive values may be accepted by the biometric system, resulting in the misuse of confidential data. The cost of a biometric system increases as more and more hardware is used in building smart systems.

In this research paper, we worked on sleeping breaches of the biometric system. During sleep, a person's temperature, heart rate, brain activity, and blood pressure change automatically. So that biometrics cannot be misused while the user sleeps, we designed a UX interface that works on blood pressure, brain activity, body temperature, heart rate, sleeping schedule, etc. We set a test condition for each parameter; if any condition is met, the biometric control cannot be accessed. By implementing all these conditions for sleeping breaches, the security of the biometric system can be improved.

3 Problem Statement

The current research proposes a solution to address security concerns in biometric systems, particularly the issue of "sleeping breaches." Such a breach occurs when a hacker gains access to a user's smart device while the user is asleep, using the sleeping user's fingerprint to unlock it. This type of breach poses a significant threat, as it enables the hacker to access personal data stored on the device, compromising user security.


4 Proposed Work

In this paper, a novel approach to improving biometric security is proposed by addressing the issue of sleeping breaches, as illustrated in Fig. 4. During sleep, physiological parameters such as blood pressure, brain function, body temperature, and heart rate deviate from their normal values. To counteract sleeping breaches, we created a UX interface that measures all of these parameters, as depicted in Fig. 3. Test values are set for each parameter, as presented in Table 1. If any parameter value matches its test value, the user is not allowed to access the smart device. A PIN can also be set for biometric access, since the sleeping-breach parameters take some time to return to their waking values just after the user's rest period. If any
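The gating rule described above, denying biometric access when any monitored parameter meets its sleep-state test condition, can be sketched as follows. The parameter names, threshold values, and function signature here are illustrative assumptions for demonstration; the paper's actual test conditions are the ones defined in Table 1.

```python
# Hypothetical sleep-state test conditions; the real values come from Table 1.
# Each predicate returns True when the reading looks like the sleeping state.
TEST_CONDITIONS = {
    "body_temperature": lambda v: v < 37.0,  # core temperature drops during sleep (deg C)
    "heart_rate":       lambda v: v < 60,    # resting/sleeping heart rate (bpm)
    "systolic_bp":      lambda v: v < 105,   # blood pressure dips at night (mmHg)
    "brain_activity":   lambda v: v < 0.5,   # illustrative 0..1 activity index
}

def biometric_access_allowed(readings):
    """Block biometric unlock if ANY monitored parameter meets its
    sleep-state test condition; allow it only when all readings are
    in the waking range."""
    for name, is_sleeping in TEST_CONDITIONS.items():
        if name in readings and is_sleeping(readings[name]):
            return False  # likely asleep: deny fingerprint/face unlock
    return True

awake  = {"body_temperature": 37.2, "heart_rate": 72, "systolic_bp": 118, "brain_activity": 0.8}
asleep = {"body_temperature": 36.4, "heart_rate": 54, "systolic_bp": 98,  "brain_activity": 0.2}
print(biometric_access_allowed(awake))   # True
print(biometric_access_allowed(asleep))  # False
```

Note that the any-parameter rule is deliberately conservative: a single sleep-like reading is enough to deny access, which is why the paper also suggests a PIN fallback for the period just after waking, when some parameters are still near their sleep values.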

Fig. 3 Sleeping breaches


Fig. 4 Proposed flowchart



Table 1 Biometric performance parameters

Parameters affecting biometric performance | Normal working state | Test condition
Body temperature                           | ≥37 °C               |