Emerging Trends in Expert Applications and Security: Proceedings of 2nd ICETEAS 2023, Volume 2 9819919452, 9789819919451

The book covers current developments in the field of computer system security using cryptographic algorithms and other s


English · Pages: 510 [511] · Year: 2023


Table of contents :
Preface
Acknowledgements
About This Book
Contents
About the Editors
Deep Learning Based Approach in Automatic Microscopic Bacterial Image Classification
1 Introduction
2 Related Work
3 Methods and Material
3.1 Convolutional Neural Network
3.2 Transfer Learning
3.3 Dataset and Pre-Processing
4 Experimentation and Results
5 Conclusion and Future Scope
References
Deep Learning-Based Recognition and Classification of Different Network Attack Patterns in IoT Devices
1 Introduction
2 Literature Survey
3 Proposed Work
4 Results and Analysis
5 Conclusion
References
Lipid Concentration Effects on Blood Flow Through Stenosed Tube
1 Introduction
2 Formulation of the Problem
3 Solution of the Problem
4 Graphical Results and Discussions
5 Conclusion
References
Single-Phase Bi-Directional AC/DC Converters for Fast DC Bus Voltage Controller
1 Introduction
2 Objectives of the Study
2.1 Voltage Control Techniques
2.2 Phase Control Technique
2.3 On-off Control Technique
3 Existing System
4 Proposed Method
4.1 Operation
4.2 Control System
5 Results and Discussion
6 Conclusion and Future Enhancement
References
Padam Persona—Personalizing Text to Image Using Artificial Intelligence
1 Introduction
1.1 Objective
1.2 Project Description
2 System Analysis
2.1 Existing System
2.2 Proposed System
2.3 Technologies Used
3 Related Works
4 System Design
4.1 Architecture Diagram
4.2 System Specification
5 System Implementation
5.1 List of Modules
5.2 Module Description
6 Conclusion and Future Work
6.1 Conclusion
6.2 Future Enhancements
References
Autodubs: Translating and Dubbing Videos
1 Introduction
2 Related Work
3 System Design
4 Methodology
5 System Requirements
6 Algorithm
7 Conclusion and Future Enhancements
References
Deep Neural Based Learning of EEG Features Using Spatial, Temporal and Spectral Dimensions Across Different Cognitive Workload of Human Brain: Dimensions, Methodologies, Research Challenges and Future Scope
1 Introduction
1.1 Deep Learning and EEG Signals
2 Literature Review
3 Experimental Setup
4 Dimensions
5 Proposed Methodology
6 Discussion
7 Conclusion
8 Major Research Challenges, Issues and Future Scope
References
A Framework for Classification of Nematodes Species Using Deep Learning
1 Introduction
2 Related Work
3 Material and Proposed Method
3.1 Description of Dataset
3.2 Proposed Method
4 Experiment Results
5 Conclusion and Future Scope
References
CAD Model for Biomedical Image Processing for Digital Assistance
1 Introduction
2 Literature Review
3 Proposed CNN Model for COVID X-Ray Detection
3.1 Convolutional Neural Network (CNN)
4 Dataset Standardization
5 Implementation of Proposed CNN Model
6 Experimental Results
7 Conclusion
References
Natural Language Processing Workload Optimization Using Container Based Deployment
1 Introduction
1.1 Masked LM (MLM)
1.2 Next Sentence Prediction (NSP)
1.3 NLP and GLUE
2 Literature Review
3 Dataset Characteristics
4 Methodology
5 Implementation
6 Experimental Results
7 Conclusion
References
NDVI Indicator Based Land Use/Land Cover Change Analysis Using Machine Learning and Geospatial Techniques at Rupnarayan River Basin, West Bengal, India
1 Introduction
2 Study Area
3 Materials and Methods
3.1 Data Collection
3.2 Applied Methodology
4 Result and Discussion
5 Conclusion
References
Prediction of Anemia Using Naïve-Bayes Classification Algorithm in Machine Learning
1 Introduction
2 Literature Work and Related Study
3 Objective of Work
3.1 Naïve Bayes Classification Algorithm
3.2 Means of Judgment
3.3 The Matrix of Confusion
3.4 Dataset
4 Methodology
5 Results and Observations
6 Conclusion
References
A Methodology for the Energy Aware Clustering and Aggregate Node Rotation with Sink Relocation in MANET
1 Introduction
2 Protocols
3 EAC-ASR
4 Simulation and Results
5 Problem Formulation and Proposed Solution
6 Result
7 Conclusion and Future Work
References
Differential Evolution Algorithm-Based Optimization of Networked Microgrids
1 Introduction
2 Problem Description
2.1 Objective Function
2.2 Networked Microgrid
2.3 Modes of Operation
3 Contribution of This System
3.1 Objective Function
3.2 Constraint
4 Proposed Methodology
4.1 Proposed Implementation Approach
4.2 Assumption Made
4.3 Proposed Method
5 Results and Discussions
5.1 CASE 1: Normal Mode
5.2 CASE 2: Single Fault Curative Mode
5.3 CASE 3: Multiple Faults Curative Mode
5.4 CASE 4: Forty MG’s Curative Mode
5.5 Chi-Square Test
6 Conclusion
References
OBD II-Based Performance Analysis Based on Fuel Usage of Vehicles
1 Introduction
2 Related Works
3 Experiments and Results
4 Conclusion
References
Performance Analysis of Routing Protocols for WSN-Assisted IoT Networks
1 Introduction
2 Routing Challenges in WSN
3 Literature Survey
4 Results
5 Conclusion
References
A Novel Approach Towards Automated Disease Predictor System Using Machine Learning Algorithms
1 Introduction
2 Literature Survey
3 Methodology
4 Results
5 Conclusion
References
Measure to Improve the Prediction Accuracy of a Convolutional Neural Network Model for Brain Tumor Detection
1 Introduction
2 Classification
2.1 Working of Convolutional Neural Network for Brain Tumor Prediction
2.2 Training of a Convolutional Neural Network Model
3 Normalization
3.1 Methods of Normalization
3.2 Types of Standardization
4 Conclusion
References
Improve Short-Term Stock Price Forecasts Through Deep Learning Algorithms
1 Introduction
1.1 Attention Layer Mechanism
2 Literature Review
3 Methodology
4 Algorithm Used
5 Results
6 Conclusion
References
Multistage Classification of Retinal Images for Prediction of Diabetic Retinopathy-Based Deep Learning Model
1 Introduction
2 Related Work
3 Methodology
3.1 Data Source
3.2 Deep Learning Model for Image Classification
3.3 Proposed Deep Learning Model
3.4 Implementation of Proposed Model
4 Results and Discussion
5 Conclusion and Future Work
References
Crop Disease Detection and Classification Using Deep Learning-Based Classifier Algorithm
1 Introduction
2 Literature Review
3 Proposed Algorithm
3.1 Diseases
3.2 Data Pre-Processing
3.3 Data Processing
4 Implementation of Proposed Algorithm and Result
4.1 Accuracy
4.2 Result
5 Conclusion
References
Comparison and Analysis of Container Placement Algorithms in Cloud Data Center
1 Introduction
2 Literature Review
3 Problem Statement
4 Methodology
4.1 First Fit (FF) Container Placement Algorithm
4.2 First Fit Decreasing (FFD) Container Placement Algorithm
4.3 Random Container Placement Algorithm
4.4 Least Full (LF) Container Placement Algorithm
4.5 Most Full (MF) Container Placement Algorithm
5 Results
6 Future Work and Conclusion
References
Evaluation of Convolution Neural Network Models Using Clinical Datasets
1 Introduction
2 Literature Review
3 Methodology
3.1 Convolution Neural Network (CNN)
3.2 Steps Involved to Develop a Classifier
4 Result and Discussion
5 Conclusion
References
Milk Quality Prediction Using Supervised Machine Learning Technique
1 Introduction
2 Related Works
3 Proposed System
4 System Architecture
5 Implementation
6 System Output
7 Bar Chart Representation and Performance Analysis
8 Conclusion
References
Evaluation of Machine Learning Techniques to Diagnose Polycystic Ovary Syndrome Using Kaggle Dataset
1 Introduction
2 Literature Review
3 Methodology
3.1 SMOTE
3.2 Correlation Based Feature Selection
3.3 Support Vector Machine
3.4 Random Forest
3.5 Dataset Description
3.6 Steps Involved to Develop a Classifier
4 Result and Discussion
5 Conclusion
References
Specifying the Virtual Reality Approach in Mobile Gaming Using Unity Game Engine
1 Introduction
2 Research Methodology
2.1 Use Case Diagram Model
2.2 Class Diagram Design of Hypercasual Infinity
3 Working Process of Hypercasual Infinity
4 Implementation of Hypercasual Infinity
5 Conclusion
References
Phishing Attack Detection Using Machine Learning
1 Introduction
2 Related Works
3 System Architecture
3.1 Proposed Algorithm
3.2 Existing System
4 List of Modules
4.1 Module 1: Detection Technique
4.2 Module 2: Phishing Websites Features
4.3 Module 3: Data Set
5 Implementation
5.1 Input Design
5.2 Objectives
5.3 Output Design
5.4 Data Flow Diagram
5.5 UML Diagrams
5.6 Goals
5.7 Use Case Diagram
5.8 Class Diagram
5.9 Sequence Diagram
5.10 Activity Diagram
6 Conclusions and Future Work
References
Facial Expression Based Smart Music Player
1 Introduction
2 Literature Survey
3 System Architecture
3.1 Architecture
3.2 Flowchart
3.3 Module Mapping
4 Methodology
4.1 Image Dataset
4.2 Song Dataset
5 Modules
5.1 Face Recognition Module
5.2 Emotion Detection Module
5.3 Music Recommendation Module
6 Hardware Requirements
7 Software Requirements
8 Result and Discussion
9 Conclusion
References
Dynamic E-Authentication Attendance System Using QR Code and OTP
1 Introduction
2 Literature Survey
3 System Architecture
4 Modules
4.1 QR Code Generation
4.2 One-Time Password (OTP)
4.3 Student
4.4 Staff
5 System Requirements
6 Use case Diagram
7 Activity Diagram
8 Software and Technologies Description
8.1 Netbeans
9 Conclusion
10 Future Enhancement
References
An Investigative Approach of Context in Internet of Behaviours (IoB)
1 Introduction
1.1 Motivations
2 Related Work on Context
2.1 Parameter of Context
2.2 Context in IoT
2.3 Context Preferences
2.4 Internet of Behaviours (IoB)
2.5 Our Classification of “Context”
3 Challenges in Context Awareness
4 Our Contribution—Three Layer of Context Taxonomy
5 Investigative Approach
6 Context Sensing Algorithms
7 Conclusion
8 Future Work
References
Proposed Convolutional Neural Network with OTSU Thresholding for Accurate Classification of Handwritten Digits
1 Introduction
2 Literature Review
3 Dataset Description
4 OTSU Binarization
5 Proposed CNN Model
6 Results and Discussion
7 Conclusion
References
Impact of Different Batch Sizes on Transfer Learning Models for Multi-class Classification of Alzheimer’s Disease
1 Introduction
2 Literature
2.1 Alzheimer’s Disease Prediction Using Pre-trained CNN Models
3 Dataset
4 Proposed Methodology
5 Experimental Results
5.1 The Training Performance Comparison for Different Models
5.2 State-of-Art Comparison
6 Conclusion
References
VAARTA: A Secure Chatting Application Using Firebase
1 Introduction
2 Literature Survey
2.1 Existing Chat Application System
2.2 Proposed Chat Application System
3 Methodology
3.1 Steps Taken for Developing VAARTA
3.2 Design Model
4 Results
4.1 VAARTA Application Interface
5 Comparative Analysis
6 Conclusion and Future Scope
References
Chicken Quality Evaluation Using Deep Learning
1 Introduction
2 Related Work
3 Materials and Methods
3.1 Dataset
3.2 The Proposed Method
3.3 Pre-trained Models
4 Results and Discussion
5 Conclusion
References
Deep Learning Based Model for Multi-classification of Flower Images
1 Introduction
2 Literature Review
3 Dataset Description
3.1 Dataset Standardization
3.2 Class Imbalance in Image Dataset
3.3 Impact of Class Imbalance on Loss Function
4 Model Methodology
5 Experimental Results
6 Conclusion
References
Image Analysis Aided Freshness Classification of Pool Barb Fish (Puntius sophore)
1 Introduction
2 Literature Review
3 Methodology
3.1 Our Dataset
3.2 Comparative Analysis of Pretrained Models to be Used for Transfer Learning
3.3 Our Proposed Model
4 Results and Conclusion
References
Coating of Graphene on ITO Via Cyclic Voltammetry
1 Introduction
2 Materials and Methods
3 Research and Discussions
4 Summary and Outlook
References
A Deep Learning-Based InceptionResNet V2 Model for Cassava Leaf Disease Detection
1 Introduction
2 Literature
3 Dataset Description
3.1 Data Pre-processing
3.2 Data Normalization
3.3 Data Augmentation
4 Proposed Model Summary
5 Results
5.1 Accuracy and Loss Plots
5.2 Analysis Based on the Characteristics of the Confusion Matrix
6 Conclusion
References
Implementation of SCADA Algorithm Using FPGA for Industry 4.0 Applications
1 Introduction
2 Related Work
3 Utilization of Different Processors for SCADA System
4 Comparative Analysis of Different Processors for SCADA System
5 Conclusion
References
A Survey Based on Privacy-Preserving Over Health Care Data Analysis
1 Introduction
2 Related Works
3 Protecting Health Data Techniques
4 Recent Trends in Health Data Privacy
5 Discussions and Recommendations
6 Conclusions
References
Design of a Robot for Detection of Human Beings Amidst Fire and Wreckage
1 Introduction
1.1 Block Diagram
1.2 Specification of Various Input and Output Devices
2 Implementation of the Circuit
3 Programming the PIC Microcontroller
4 Conclusion
References
To Build 3D Indoor Navigation Application for Museum Visitors
1 Introduction
2 Background and Literature Review
3 Research Method
3.1 Blender
3.2 Unity
4 Conclusion and Future Scope
4.1 Conclusion
4.2 Future Scope
References
Technical Analysis Based on Different Dispatch Strategies of a Smart Off-Grid Hybrid Power Plant Using IoT for SRM IST Delhi-NCR Campus
1 Introduction
2 Methodology
2.1 Dispatch Strategy
3 HPS Components
3.1 Solar PV Array
3.2 Diesel Generator
3.3 Battery Bank
3.4 Converter
4 HPS Economic Parameters
5 HPS Economic Parameters
6 Results and Discussion
6.1 Pollutant Emission
7 Conclusion
References
Novel Two-Bit Magnitude Comparators for IOT Applications
1 Introduction
2 Literature Review
3 Proposed Designs
3.1 Proposed Design I of Two-Bit Magnitude Comparator
3.2 Proposed Design II of Two-Bit Magnitude Comparator
3.3 Proposed Design III of Two-Bit Magnitude Comparator
4 Simulation Results and Discussion
5 Conclusion
References
Machine Translation Based on Computational Linguistics of Sanskrit: A Review
1 Introduction
2 Sanskrit and NLP
2.1 Features of Sanskrit
3 Sanskrit and Machine Translation
3.1 Approaches of Machine Translation
4 Translators Related To Sanskrit
5 Conclusion
References
SVM–Feature Elimination-Based Alzheimer Disease Diagnosis
1 Introduction
2 Problem Statement
3 Result
4 Conclusion
References
Author Index

Lecture Notes in Networks and Systems 682

Vijay Singh Rathore · Vincenzo Piuri · Rosalina Babo · Marta Campos Ferreira, Editors

Emerging Trends in Expert Applications and Security Proceedings of 2nd ICETEAS 2023, Volume 2

Lecture Notes in Networks and Systems Volume 682

Series Editor: Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors: Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil; Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye; Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA, and Institute of Automation, Chinese Academy of Sciences, Beijing, China; Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada, and Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland; Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus; Imre J. Rudas, Óbuda University, Budapest, Hungary; Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and others. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Vijay Singh Rathore · Vincenzo Piuri · Rosalina Babo · Marta Campos Ferreira, Editors

Emerging Trends in Expert Applications and Security Proceedings of 2nd ICETEAS 2023, Volume 2

Editors Vijay Singh Rathore Department of Computer Science and Engineering (CSE) Jaipur Engineering College and Research Centre Jaipur, Rajasthan, India Rosalina Babo Porto Accounting and Business School Polytechnic Institute of Porto Porto, Portugal

Vincenzo Piuri Department of Computer Engineering University of Milan Milan, Italy Marta Campos Ferreira Faculty of Engineering University of Porto Porto, Portugal

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-981-99-1945-1 ISBN 978-981-99-1946-8 (eBook) https://doi.org/10.1007/978-981-99-1946-8 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The 2nd International Conference on Emerging Trends in Expert Applications and Security (ICE-TEAS 2023) was held at Jaipur Engineering College and Research Centre, Jaipur, India, during 17–19 February 2023 in hybrid mode. The conference was organised jointly by the Department of Computer Science Engineering, Department of Information Technology, and Department of Artificial Intelligence and Data Science of JECRC, Jaipur, in association with Springer Nature for publication (LNNS series), and supported by the CSI Jaipur Chapter, the ACM Jaipur Professional Chapter, and the GR Foundation, Ahmedabad. The conference addressed recent technological developments, specifically expert applications and their security. Technology has transformed at great speed in the last few decades, resulting in expert applications that make life easier. The conference raised awareness of issues related to emerging technologies, as well as the growing threats to expert applications and their security, which will aid the creation of better solutions for society. The COVID-19 pandemic has impacted us more than any other event in most of our lifetimes. Companies, associations and destinations globally are trying to navigate their way through this crisis, balancing short-term needs with long-term strategy. While we are all in the same storm, we must realize that we are in different boats, and therefore different solutions and strategies are necessary. To appreciate another dimension of the conference abbreviation, consider the word “ICE”, “the solid state of frozen water”: scientific thoughts frozen in the mind are proposed and discussed after analysing their pros and cons in advance.
Likewise, “TEAS” suggests a hot drink infused with dried, crushed herbs and leaves that brings freshness: fresh, novel solutions for expert applications emerge when scientists and researchers from around the world discuss them during the technical sessions and the tea breaks, so that the frozen ideas melt into proposed solutions. The issues can then be addressed with proper planning and utmost care to benefit all concerned. Thus, through the conference ICE-TEAS 2023, the ‘frozen ideas’ (ICE) concerning rising threats in expert applications were discussed, analysed, and, where possible, resolved during the various tea sessions (and tea breaks) of the conference.

ICE-TEAS 2023 was organized keeping these dimensions at preference. The conference aimed to provide an international platform for researchers, academicians, industry representatives, government officials, students and other stakeholders in the field to explore opportunities, and to disseminate and acquire beneficial knowledge from the issues deliberated in the papers presented on the different themes of the conference. The Technical Program Committee and Advisory Board of ICE-TEAS 2023 included eminent academicians, researchers, and practitioners from across the globe. The conference received an incredible response from both delegates and students in terms of research paper presentations. More than 516 papers were received, of which 98 were selected after impartial plagiarism checks and a rigorous peer review process. All 98 papers have been included in two volumes (Vol. 1 and Vol. 2), each containing 49 papers, which were presented in 12 technical sessions. The conference was held in hybrid mode: about 30% of the participants attended physically from across the globe, while the remaining 70% joined virtually to present their papers and to hear the exemplary speakers. We had international participants and delegates from countries including Italy, Serbia, Norway, Portugal, the USA, Vietnam, Ireland, the Netherlands, Romania, and Poland. We deeply appreciate all our authors for having confidence in us and considering ICE-TEAS 2023 a platform for sharing and presenting their original research work. We also express our sincere gratitude to the focused team of Chairs, Co-Chairs, International Advisory Committee, and Technical Program Committee. We are very thankful to Mr. Aninda Bose (Senior Publishing Editor, Springer Nature, India) for providing continuous guidance and support. Our heartfelt thanks to all the reviewers and Technical Program Committee members for their cooperation and efforts in the peer review process.
We are indeed very thankful to everyone associated directly or indirectly with the conference for forming a committed organizing team and leading it towards grand success. We hope that it meets expectations. We are very grateful to the Patrons, General Chair, Conference Chairs, delegates, participants and researchers for their thought-provoking contributions. We extend our heartiest thanks and best wishes to all concerned. Jaipur, India

Prof. Dr. Vijay Singh Rathore PC Chair & Convenor, ICE-TEAS 2023

Acknowledgements

First and foremost, we would like to thank God, the Almighty, who has granted countless blessings for the grand success of the conference. The organizing committee wishes to acknowledge the financial as well as infrastructural support provided by Jaipur Engineering College and Research Centre and the technical support of the CSI Jaipur Chapter and the ACM Jaipur Professional Chapter. The support of the GR Foundation, Ahmedabad, is also gratefully acknowledged. We are very grateful to Mr. O. P. Agrawal, Chairperson, JECRC; Mr. Amit Agrawal, Vice Chairperson, JECRC; and Mr. Arpit Agrawal, Director, JECRC, for their unstinted support and invaluable guidance throughout, which gave this three-day event its present shape. We are also grateful to Prof. Vinay Kumar Chandna, Principal, JECRC, and General Chair of ICE-TEAS 2023, for the meticulous planning and attention to detail which helped us in organizing this event. Our heartfelt thanks to the PC Chairs and Editors of the ICE-TEAS 2023 Proceedings: Prof. Vincenzo Piuri, Professor, University of Milan, Italy; Prof. Joao Manuel R. S. Tavares, Professor, University of Porto, Portugal; and Prof. Vijay Singh Rathore, Professor-CSE & Director (Outreach), JECRC, Jaipur. Our sincere gratitude to the Co-Chairs and Editors of the ICE-TEAS 2023 Proceedings: Prof. Rosalina B. Babo, Professor, Polytechnic Institute of Porto, Portugal; Dr. Marta Campos Ferreira, Assistant Professor, University of Porto, Portugal; and Dr. B. Surendiran, Associate Professor, NIT Puducherry, India. We are extremely grateful to Prof. S. K. Singh, Vice Chancellor, RTU Kota, who served as Inaugural Chief Guest, and to Prof. R. S. Salaria, Director, Guru Nanak University, Hyderabad, and Dr. Maya Ingle, Director, DDUKK, Devi Ahilya University, Indore, as Inaugural Guests. We would like to extend our deepest gratitude to Mr. Aninda Bose, Senior Editor, Springer Nature, for providing continuous guidance and support. We are very thankful to Prof.
Vijay Kumar Banga, Principal, Amritsar College of Engineering & Technology, Amritsar; Dr. Indira Routaray, Dean, International Education, C. V. Raman Global University, Bhubaneswar; Prof. Mike Hinchey, Professor, University of Limerick, Ireland & President, IFIP; Prof. Marcel Worring, Professor & Director of the Informatics Institute, University of Amsterdam, Netherlands; Prof. Patrizia Pucci, Professor, Department of Mathematics and Informatics,
University of Perugia, Perugia, Italy; Prof. Cornelia-Victoria Anghel Drugărin, Professor, Babeș-Bolyai University Cluj-Napoca, România; Prof. Francesca Di Virgilio, Professor, University of Molise, Italy; Prof. Kirk Hazlett, Adjunct Professor, University of Tampa, Florida, USA; Prof. Milan Tuba, Vice Rector, Singidunum University, Belgrade, Serbia; Prof. Vladan Devedzic, Professor of Computer Science, University of Belgrade, Serbia; Prof. Ibrahim A. Hameed, Professor, ICT, Norwegian University of Science and Technology (NTNU), Ålesund, Norway; Prof. Gabriel Kabanda, Adjunct Professor, Machine Learning, Woxsen University, Hyderabad; Dr. Nguyen Ha Huy Cuong, Professor, Department of Information Technology, The University of Da Nang, College of Information Technology, Da Nang, Vietnam; Dr. Dharm Singh, Namibia University of Science and Technology, Namibia; Prof. Igor Razbornik, CEO, Erasmus+ Projects with Igor, Velenje, Slovenia; Dr. Marta Campos Ferreira, Assistant Professor, Faculty of Engineering, University of Porto, Portugal; Prof. M. Hanumanthappa, Professor & Director, CS, Bangalore University, Bangalore; Dr. Satish Kumar Singh, Professor, IT, IIIT Allahabad; Prof. Durgesh Kumar Mishra, Professor, CSE, Sri Aurobindo Institute of Technology, Indore; Dr. Nilanjan Dey, Asso. Prof., CSE, Techno International New Town, Kolkata; Prof. P. K. Mishra, Institute of Science, BHU, Varanasi; Dr. Ashish Jani, Professor & Head, CSE, Navrachna University, Vadodara, Gujarat; Prof. Reena Dadhich, Professor & Head, Department of Computer Science, University of Kota, Kota; Prof. O. P. Rishi, Professor, CS, University of Kota, Kota; Prof. K. S. Vaisla, Professor, BTKIT, Dwarahat, Uttarakhand; Dr. Vivek Tiwari, Assistant Professor, IIIT Naya Raipur, Chhattisgarh; Prof. Krishna Gupta, Director, UCCS & IT, University of Rajasthan, Jaipur; Prof. Vibhakar Mansotra, Professor, CS, University of Jammu, Jammu; Prof. P. V.
Virparia, Professor & Head, CS, Sardar Patel University, Gujarat; Prof. Ashok Agrawal, Professor, University of Rajasthan, Jaipur; Dr. Neeraj Bhargava, Professor & Head, Department of CS, MDS University, Ajmer; Prof. C. K. Kumbharana, Professor & Head, CS, Saurashtra University, Rajkot; Prof. Atul Gonsai, Professor, CS, Saurashtra University, Rajkot; Prof. Dhiren Patel, Professor, Department of MCA, Gujarat Vidyapeeth University, Ahmedabad; Dr. N. K. Joshi, Professor & Head, CS, MIMT, Kota; Prof. Vinod Sharma, Professor, CS, University of Jammu, Jammu; Dr. Tanupriya Chaudhary, Associate Professor, CS, UPES, Dehradun; Dr. Praveen Kumar Vashishth, Amity University, Tashkent, Uzbekistan; Dr. Meenakshi Tripathi, Associate Professor, CSE, MNIT, Jaipur; Dr. Jatinder Manhas, Sr. Assistant Professor, University of Jammu, J&K; Dr. Ritu Bhargava, Professor-CS, Sophia Girls College, Ajmer; Dr. Nishtha Kesswani, Associate Professor, CUoR, Kishangarh; Dr. Shikha Maheshwari, Associate Professor, Manipal University, Jaipur; Dr. Sumeet Gill, Professor, CS, MDU, Rohtak; Dr. Suresh Kumar, Asso. Prof., Savitha Engineering College, Chennai; Dr. Avinash Panwar, Director, Computer Centre, and Head IT, MLSU, Udaipur; Dr. Avinash Sharma, Principal, MMEC, Ambala, Haryana; Dr. Abid Hussain, Associate Professor, Career Point University, Kota; Dr. Sonali Vyas, Assistant Professor, CS, UPES, Dehradun; Mr. Kamal Tripathi, Founder and CEO,NKM Tech Solutions; Dr. Pritee Parwekar, Associate Professor, CSE, SRM IST, Ghaziabad; Dr. Pinal J. Patel, Dr. Lavika Goel, Assistant Professor, MNIT, Jaipur; Dr. N. Suganthi, Assistant Professor, SRM
University, Ramapuram Campus, Chennai; Dr. Mahipal Singh Deora, Professor, BN University, Udaipur; Dr. Seema Rawat, Associate Professor, Amity University, Tashkent, Uzbekistan; Ms. Gunjan Gupta, Director, A2G Enterprises Pvt Ltd., Noida; Dr. Minakshi Memoria, HOD CSE, UIT, Uttaranchal University, Uttaranchal; Dr. Himanshu Sharma, Assistant Professor, SRM Inst., Delhi NCR; Dr. Paras Kothari, Professor & Head MCA, GITS, Udaipur; Dr. Kapil Joshi, Assistant Professor, CSE, Uttaranchal University, Dehradun; Dr. Amit Sharma, Associate Professor, Career Point University, Kota; Dr. Salini Suresh, Associate Professor, Dayanand Sagar Institutions, Bengaluru; Dr. Bharat Singh Deora, Associate Professor, JRNRVU, Udaipur; Dr. Manju Mandot, Professor, JRNRVU, Udaipur; Dr. Pawanesh Abrol, Professor, Department of Computer Science, University of Jammu, Jammu; Dr. Preeti Tiwari, Professor, ISIM, Jaipur; Dr. Kusum Rajawat, Principal, SKUC, Jaipur; and all other Resource Persons, Experts, Guest Speakers and Session Chairs for their gracious presence. We deeply appreciate all our authors for having confidence in us and considering ICE-TEAS 2023 a platform for sharing and presenting their original research work. We also express our sincere gratitude to the focused team of Chairs, Co-Chairs, Reviewers, International Advisory Committee, and Technical Program Committee. No task in this world can be completed successfully without the support of one's team members. We would like to extend our heartfelt appreciation to our Organizing Secretary, Prof. Sanjay Gaur, Head CSE, JECRC, and our Co-organizing Secretaries, Dr. Vijeta Kumawat, Deputy HoD-CSE, Dr. Smita Agarwal, Head IT, and Dr. Manju Vyas, Head AI & DS, for their seamless contribution towards the systematic planning and execution of the conference, making it a grand success altogether. Thanks to all Organizing Committee members and faculty members of JECRC for their preliminary support.
Heartfelt thanks to the media and promotion team for their support in the wide publicity of this conference in such a short span of time. Finally, we are grateful to one and all who have contributed directly or indirectly to making ICE-TEAS 2023 a grand success.

Jaipur, India

Prof. Dr. Vijay Singh Rathore PC Chair & Convenor, ICE-TEAS 2023

About This Book

This book is Volume 2 of the Proceedings of the International Conference ICE-TEAS 2023 at JECRC, Jaipur. It presents high-quality, peer-reviewed papers from the 2nd International Conference on Emerging Trends in Expert Applications and Security (ICETEAS 2023), held at the Jaipur Engineering College and Research Centre, Jaipur, Rajasthan, India, during 17–19 February 2023. The conference addressed various facets of evolving technologies in Expert Applications, analysed the threats associated with them, and proposed solutions and security measures against those threats. Technology advancements have broadened the horizons for the proliferation of Expert Applications, which cover varied domains, namely design, monitoring, process control, medical, knowledge, finance, commerce, and many more. However, no technology can offer easy and complete solutions, owing to technological limitations, difficult knowledge acquisition, high development and maintenance cost, and other factors. Hence, the emerging trends, the rising threats, and the provision of adequate security in Expert Applications are also issues of concern. Keeping this ideology in mind, the book offers insights that reflect the advances in these fields across the globe, as well as the rising threats. It covers a variety of topics, such as Expert Applications and Artificial Intelligence/Machine Learning; Advanced Web Technologies; IoT, Big Data, and Cloud Computing in Expert Applications; Information and Cyber Security Threats and Solutions; Multimedia Applications in Forensics, Security and Intelligence; advancements in App Development; Management Practices for Expert Applications; and Social and Ethical Aspects of Expert Applications through Applied Sciences. It will surely help those in industry and academia who are working on cutting-edge technology for the advancement of next-generation communication and computational technology to shape real-world applications. The book is appropriate for researchers as well as professionals.
Researchers will be able to save considerable time by getting authenticated technical information on expert applications and security in one place. Professionals will have a readily available, rich set of guidelines and techniques applicable to a wide class of engineering domains.


Contents

Deep Learning Based Approach in Automatic Microscopic Bacterial Image Classification . . . 1
Priya Rani, Shallu Kotwal, and Jatinder Manhas

Deep Learning-Based Recognition and Classification of Different Network Attack Patterns in IoT Devices . . . 11
Hiteshwari Sharma, Jatinder Manhas, and Vinod Sharma

Lipid Concentration Effects on Blood Flow Through Stenosed Tube . . . 21
Neha Phogat, Sumeet Gill, Rajbala Rathee, and Jyoti

Single-Phase Bi-Directional AC/DC Converters for Fast DC Bus Voltage Controller . . . 33
R. Karpaga Priya, S. Kavitha, M. Malathi, P. Sinthia, and K. Suresh Kumar

Padam Persona—Personalizing Text to Image Using Artificial Intelligence . . . 45
N. Velmurugan, P. Sanjeev, S. Vinith, and Mohammed Suhaib

Autodubs: Translating and Dubbing Videos . . . 53
K. Suresh Kumar, S. Aravindhan, K. Pavankumar, and T. Veeramuthuselvan

Deep Neural Based Learning of EEG Features Using Spatial, Temporal and Spectral Dimensions Across Different Cognitive Workload of Human Brain: Dimensions, Methodologies, Research Challenges and Future Scope . . . 61
Ayushi Kotwal, Vinod Sharma, and Jatinder Manhas

A Framework for Classification of Nematodes Species Using Deep Learning . . . 71
Meetali Verma, Jatinder Manhas, Ripu Daman Parihar, and Vinod Sharma

CAD Model for Biomedical Image Processing for Digital Assistance . . . 81
Hitesh Kumar Sharma, Tanupriya Choudhury, Richa Choudhary, Jung Sup Um, and Aarav Sharma

Natural Language Processing Workload Optimization Using Container Based Deployment . . . 93
Hitesh Kumar Sharma, Tanupriya Choudhury, Eshan Dutta, Aniruddh Dev Upadhyay, and Aarav Sharma

NDVI Indicator Based Land Use/Land Cover Change Analysis Using Machine Learning and Geospatial Techniques at Rupnarayan River Basin, West Bengal, India . . . 105
Krati Bansal, Tanupriya Choudhury, Anindita Nath, and Bappaditya Koley

Prediction of Anemia Using Naïve-Bayes Classification Algorithm in Machine Learning . . . 117
Pearl D’Souza and Ritu Bhargava

A Methodology for the Energy Aware Clustering and Aggregate Node Rotation with Sink Relocation in MANET . . . 129
Neeraj Bhargava, Pramod Singh Rathore, and Apoorva Bhowmick

Differential Evolution Algorithm-Based Optimization of Networked Microgrids . . . 141
D. Kavitha and M. Ulagammai

OBD II-Based Performance Analysis Based on Fuel Usage of Vehicles . . . 159
Siddhanta Kumar Singh, Vinod Maan, and Ajay Kumar Singh

Performance Analysis of Routing Protocols for WSN-Assisted IoT Networks . . . 169
Vatan, Sandip Kumar Goyal, and Avinash Sharma

A Novel Approach Towards Automated Disease Predictor System Using Machine Learning Algorithms . . . 179
Roohi Sille, Bhumika Sharma, Tanupriya Choudhury, and Thinagaran Perumal

Measure to Improve the Prediction Accuracy of a Convolutional Neural Network Model for Brain Tumor Detection . . . 191
Abhimanu Singh and Smita Jain

Improve Short-Term Stock Price Forecasts Through Deep Learning Algorithms . . . 203
Jitesh Kumar Meena and Rohitash Kumar Banyal

Multistage Classification of Retinal Images for Prediction of Diabetic Retinopathy-Based Deep Learning Model . . . 213
Amita Meshram and Deepak Dembla

Crop Disease Detection and Classification Using Deep Learning-Based Classifier Algorithm . . . 227
Pradeep Jha, Deepak Dembla, and Widhi Dubey

Comparison and Analysis of Container Placement Algorithms in Cloud Data Center . . . 239
Avita Katal, Tanupriya Choudhury, and Susheela Dahiya

Evaluation of Convolution Neural Network Models Using Clinical Datasets . . . 253
Shikha Prasher, Leema Nelson, and Avinash Sharma

Milk Quality Prediction Using Supervised Machine Learning Technique . . . 267
S. Vidhya, V. Siva Vadivu Ragavi, J. K. Monica, and B. Kanisha

Evaluation of Machine Learning Techniques to Diagnose Polycystic Ovary Syndrome Using Kaggle Dataset . . . 279
Shikha Prasher, Leema Nelson, and Avinash Sharma

Specifying the Virtual Reality Approach in Mobile Gaming Using Unity Game Engine . . . 289
Sharma Avinash, Pratibha Deshmukh, Pallavi Jamsandekar, R. D. Kumbhar, Jyoti Kharade, and Rahul Rajendran

Phishing Attack Detection Using Machine Learning . . . 301
G. NaliniPriya, K. Damoddaram, G. Gopi, and R. Nitish Kumar

Facial Expression Based Smart Music Player . . . 313
G. NaliniPriya, M. Fazil Mohamed, M. Thennarasu, and V. Shyam Prakash

Dynamic E-Authentication Attendance System Using QR Code and OTP . . . 323
B. Hariram, R. N. Karthika, K. Anandasayanam, and G. Maheswara Pandian

An Investigative Approach of Context in Internet of Behaviours (IoB) . . . 333
Pranali G. Chavhan, Ritesh V. Ratil, and Parikshit N. Mahalle

Proposed Convolutional Neural Network with OTSU Thresholding for Accurate Classification of Handwritten Digits . . . 345
Rahul Singh, Avinash Sharma, Neha Sharma, and Rupesh Gupta

Impact of Different Batch Sizes on Transfer Learning Models for Multi-class Classification of Alzheimer’s Disease . . . 355
Kanwarpartap Singh Gill, Avinash Sharma, Vatsala Anand, and Rupesh Gupta

VAARTA: A Secure Chatting Application Using Firebase . . . 367
Aarti, Ujjawal Chauhan, Aditya Goyal, Pratyush Kumar, Richa Choudhary, and Tanupriya Choudhury

Chicken Quality Evaluation Using Deep Learning . . . 381
Rishi Madan, Tanupriya Choudhury, Tanmay Sarkar, Nikunj Bansal, and Teoh Teik Toe

Deep Learning Based Model for Multi-classification of Flower Images . . . 393
Hitesh Kumar Sharma, Tanupriya Choudhury, Rishi Madan, Roohi Sille, and Hussain Mahdi

Image Analysis Aided Freshness Classification of Pool Barb Fish (Puntius sophore) . . . 403
Aniruddh Dev Upadhyay, Tanupriya Choudhury, Tanmay Sarkar, Nikunj Bansal, and Madhu Khurana

Coating of Graphene on ITO Via Cyclic Voltammetry . . . 415
Rudresh Pillai, Varun Chhabra, and Avinash Sharma

A Deep Learning-Based InceptionResNet V2 Model for Cassava Leaf Disease Detection . . . 423
Rahul Singh, Avinash Sharma, Neha Sharma, Kulbhushan Sharma, and Rupesh Gupta

Implementation of SCADA Algorithm Using FPGA for Industry 4.0 Applications . . . 433
Shishir Shrivastava, Avinash Sharma, Amanpreet Kaur, and Rupesh Gupta

A Survey Based on Privacy-Preserving Over Health Care Data Analysis . . . 443
S. P. Panimalar and S. Gunasundari

Design of a Robot for Detection of Human Beings Amidst Fire and Wreckage . . . 457
Monica P. Suresh

To Build 3D Indoor Navigation Application for Museum Visitors . . . 465
Aastha Singh, Anushka Rawat, Bhawana Singore, Sumita Gupta, Sapna Gambhir, and Jitesh H. Panchal

Technical Analysis Based on Different Dispatch Strategies of a Smart Off-Grid Hybrid Power Plant Using IoT for SRM IST Delhi-NCR Campus . . . 481
Shilpa Sambhi, Himanshu Sharma, Vikas Bhadoria, and Pankaj Kumar

Novel Two-Bit Magnitude Comparators for IOT Applications . . . 493
Anju Rajput, Tripti Dua, Sanjay Gour, and Renu Kumawat

Machine Translation Based on Computational Linguistics of Sanskrit: A Review . . . 505
Smita Girish and R. Kamalraj

SVM–Feature Elimination-Based Alzheimer Disease Diagnosis . . . 513
Raghubir Singh Salaria and Neeraj Mohan

Author Index . . . 519

About the Editors

Prof. (Dr.) Vijay Singh Rathore Ph.D. (Computer Science), MCA, M.Tech. (CS), MBA, ICAD-USA, ICDA-USA, Diplomas in French and German, is presently working as Professor-CSE and Director—OutReach, Jaipur Engineering College and Research Centre, Jaipur (India), Membership Chair, ACM Jaipur Chapter, and Past Chairman, CSI Jaipur Chapter. He has teaching experience of 20+ years, has published 5 patents, supervised 20 Ph.D. scholars, published 94 research papers and 10 books, is associated as editor and reviewer with several reputed journals, and has received 20+ national and international awards of repute. His core research areas include Internet security, cloud computing, big data, and IoT. He has organized and participated in 25+ national and international conferences of repute. His foreign visits for various academic activities (Delegate/Invited/Keynote Speaker/Session Chair/Guest) include the USA, UK, Canada, France, Netherlands, Singapore, Thailand, Vietnam, Nepal, etc. He had been a Member of the Indian Higher Education Delegation and visited 20+ leading universities in Canada (May 2019), the UK (July 2017), and the USA (August 2018), supported by AICTE and the GR Foundation. Other delegation visits include the University of Amsterdam (2016), Nanyang Technological University, Singapore (2018), and the University of Lincoln, QMUL, Brunel University, and Oxford Brookes University (2020) for discussions on academic and research collaborations. He is an active academician and always welcomes and fosters innovative ideas in the various dimensions of education and research for the better development of society.

Prof. Vincenzo Piuri received his Ph.D. in computer engineering at Polytechnic of Milan, Italy (1989). He has been a Full Professor in computer engineering at the University of Milan, Italy, since 2000.
He has been Associate Professor at Polytechnic of Milan, Italy, Visiting Professor at the University of Texas at Austin, USA, and Visiting Researcher at George Mason University, USA. His main research interests are artificial intelligence, computational intelligence, intelligent systems, machine learning, pattern analysis and recognition, signal and image processing, biometrics, intelligent measurement systems, industrial applications, digital processing architectures,


fault tolerance, cloud computing infrastructures, and Internet of Things. Original results have been published in 400+ papers in international journals, proceedings of international conferences, books, and book chapters. He is a Fellow of the IEEE, Distinguished Scientist of ACM, and Senior Member of INNS. He is IEEE Region 8 Director-elect (2021–22) and will be IEEE Region 8 Director (2023–24).

Rosalina Babo is the Coordinator Professor (Tenure) of the Information Systems Department, Porto Accounting and Business School of Polytechnic of Porto (ISCAP/P.Porto), Portugal. From 2000 to 2022, Rosalina was Head of the Information Systems Department and for about 12 years acted as a Member of the university scientific board. Rosalina’s international recognition was enhanced by the opportunity to be Visiting Professor at several universities in different countries, namely Belgium (KU LEUVEN), Croatia (University of Split), Kosovo (University of Prishtina), and Latvia (Latvia University of Agriculture). Rosalina was one of the founders of the CEOS.PP (former CEISE/STI) research center and its Director for 5 years. Rosalina has served on committees for international conferences and acts as Reviewer for scientific journals and conferences. As Book Editor, Rosalina collaborates with publishers such as Elsevier, Springer, and IGI Global in the fields of data analysis in social networks and e-learning. Having several published papers, her main areas of research are e-learning, e-assessment, e-business, Internet applications focusing on usability, and social networks.

Marta Campos Ferreira is Researcher and Invited Assistant Professor at the Faculty of Engineering of the University of Porto. She holds a Ph.D. in Transportation Systems from the Faculty of Engineering of the University of Porto (MIT Portugal Program), an M.Sc. in Service Engineering and Management from the Faculty of Engineering of the University of Porto, and a Lic. in Economics from the Faculty of Economics of the University of Porto.
She is Co-Founder and Co-Editor of the Topical Collection “Research and Entrepreneurship: Making the Leap from Research to Business” with SN Applied Sciences and Associate Editor of the International Journal of Management and Decision Making. She has been involved in several R&D projects in areas such as technology-enabled services, transport, and mobility. Her current research interests include service design, human-computer interaction, data science, knowledge extraction, sustainable mobility, and intelligent transport systems.

Deep Learning Based Approach in Automatic Microscopic Bacterial Image Classification Priya Rani, Shallu Kotwal, and Jatinder Manhas

Abstract Bacteria species are essential to our ecosystem. They complete the cycles of different nutrients like carbon, nitrogen, sulphur etc. The study of bacterial species is critical due to its biological importance in the food industry, medical diagnosis, veterinary science, genetic engineering, agriculture, biochemistry, and other allied fields. However, identifying and categorising these species is highly complex, time-consuming and needs a systematic approach. Due to close similarities in their morphological features, identifying them becomes very difficult and tedious. Microbiologists are forced to study both phenotypic and genotypic characteristics of different bacteria species in order to identify and classify them correctly. The entire procedure is human dependent and employs costly equipment. A deep learning based system to classify bacteria images can automate the entire process and reduce the difficulties faced by scientists working in this domain. Deep learning has made significant progress in recent years on complex image classification problems. In this paper, a deep learning method based on transfer learning is proposed for image classification of five bacteria species: Micrococcus Luteus, Bacillus Anthracis, Staphylococcus Aureus, Thermus Sp., and Streptomyces Sp. Microscopic bacteria images have been prepared using the staining method. The dataset contains 2500 images, which have been increased using various image augmentation techniques such as rotation, cropping, flipping and so on. To classify these images, three pre-trained CNN models, VGG16, ResNet50, and Xception, have been fine-tuned. A comparative study of the three different pre-trained CNN models was carried out. It has been observed that the Xception model achieved better classification results than the other two models. The Xception model,

P. Rani (B) Computer Science & IT, University of Jammu, Jammu, India e-mail: [email protected] S. Kotwal Information Technology, Baba Ghulam Shah Badshah University, Rajouri, India J. Manhas Computer Science & IT, Bhaderwah Campus, University of Jammu, Jammu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_1


VGG16 model and ResNet50 model have achieved accuracies of 98.02%, 92.24% and 95.87%, respectively.

Keywords Convolutional neural networks · Image classification · Recognition · Bacteria · Microscopic · Transfer learning

1 Introduction

Bacteria are prokaryotic organisms consisting of a single biological cell. They inhabit water, soil, radioactive wastes etc. Bacteria species play an important role in sewage treatment, the production of cheese, gold recovery etc. The human body also hosts millions of bacteria [1]. Most of them are beneficial, particularly those living in the gut. Some bacteria species are also pathogenic and can cause diseases like tuberculosis, cholera and tetanus [2]. The most fatal diseases caused by bacteria species are respiratory diseases, and antibiotics are used to treat them. To produce different antibiotics, it is essential to classify and categorise different bacteria species. Traditionally, bacteria species classification involves studying both genotypic and phenotypic characteristics [3], including the morphological features and the physiological and metabolic traits of the bacterial cell. However, this whole process is human dependent and costly in terms of time and money. Thus, there is a need for an automatic bacteria classification system that can identify and classify bacteria species from microscopic images themselves. Automation can reduce costs and be less error-prone, and it will also help in identifying novel bacterial strains. With the advent of deep learning in areas like medical imaging, face detection and natural language processing [4], various deep learning models have also been proposed for bacteria image classification. One deep learning algorithm, the convolutional neural network (CNN), has shown good performance in classifying bacteria images. Compared to other machine learning algorithms, a CNN can extract important features automatically, and using these features a model can be trained to classify microscopic bacteria images. In this paper, a CNN has been used for the image classification of five bacteria species, namely Micrococcus Luteus, Bacillus Anthracis, Staphylococcus Aureus, Thermus Sp. and Streptomyces Sp.
For this work, we have designed our own dataset with the help of the National Centre for Cell Science, Pune. The most common problem in training a CNN is model overfitting, which can be caused by a small training dataset. A CNN also requires high computational power. To deal with this, transfer learning has been adopted. Using transfer learning, weights trained on one problem can be stored and re-used for solving another problem. In this paper, three CNN models, namely Xception, VGG16 and ResNet50, pre-trained on the ImageNet dataset, have been employed and compared for feature extraction and bacteria image classification. The rest of the paper is organized as follows: Section 2 discusses related work. Section 3 presents the methods and materials employed. Section 4 presents the experimental results, and Section 5 presents the conclusion and future scope.


2 Related Work

Deep learning techniques have been widely applied by researchers for the image classification of bacteria species. Ferrari et al. [5] developed a deep learning based system to classify and count the colonies formed by different bacteria species using agar plate images. Lopez et al. [6] used a CNN to identify tuberculosis bacteria in sputum smear images. The CNN was trained to classify image patches as negative or positive, and the authors achieved an accuracy of 99%. Zielinski et al. [7] used deep learning to classify 33 bacteria species, employing a CNN for feature extraction and a Support Vector Machine (SVM) for image classification. Hay et al. [8] proposed a deep learning model for classifying bacterial and non-bacterial objects in microscopic images of Zebrafish intestines. The approach involved manually labelling regions of interest with the help of histogram equalization, after which a 3D-CNN was used to classify the regions as bacteria or non-bacteria. Panicker et al. [9] developed a CNN with 3 convolutional layers to classify tuberculosis bacilli in sputum smear images. Traore et al. [10] designed their own CNN with six convolutional layers to classify Plasmodium falciparum and Vibrio cholerae, using a dataset of 400 images, 200 of each. Ahmed et al. [11] used a CNN and SVM combination for image classification of seven bacteria species. The dataset included 800 images; using the obtained feature vectors, an SVM was trained for image classification. Treebupachatsakul et al. [12] proposed a deep learning based method to classify two bacteria species, namely Lactobacillus Delbrueckii and Staphylococcus Aureus, employing the LeNet CNN model for feature extraction and image classification.

3 Methods and Material

3.1 Convolutional Neural Network

A convolutional neural network is a feed-forward neural network [13], mostly applied in areas like object detection, image segmentation, image classification and natural language processing. Figure 1 shows the basic CNN architecture. A CNN consists of an input layer, hidden layers and an output layer. The hidden layers include convolution layers, which can extract important features from the image itself. The features are extracted by sliding filters across the image and computing the dot product between the input and each filter. The extracted features take the form of feature maps, calculated using the following equation:

Fi = X ∗ Wi + yi,  i = 1, 2, 3, . . . , n  (1)

Fig. 1 CNN architecture

where n is the number of filters, Wi is the weight of the ith filter, and Fi is the output associated with the ith filter. To introduce non-linearity, the ReLU activation function is used. ReLU replaces negative values with zero using the following equation:

M = max(0, N)  (2)

where M denotes the output and N denotes the input provided to ReLU. Following ReLU, there is a pooling layer, which down-samples the feature representation. Down-sampling helps minimize the number of parameters and thus the computation. Pooling is of three types: max-pooling, min-pooling and average-pooling. Pooling is similar to the convolution operation, but instead of computing a dot product, the minimum, maximum or average is taken over each overlapped region. After the pooling layer, there is a flattening layer, which converts the feature maps into a one-dimensional array. After the flattening layer, fully-connected layers are added for classification. For binary and multiclass classification problems, Sigmoid and Softmax activation functions are used respectively.
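The convolution (Eq. 1), ReLU (Eq. 2) and max-pooling operations described above can be sketched in a few lines of NumPy. This is an illustrative single-channel, single-filter version with zero bias, not the paper's implementation:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the filter across the image, taking dot products (Eq. 1, one filter, zero bias)
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    # Eq. 2: replace negative values with zero
    return np.maximum(0, x)

def max_pool(x, size=2):
    # Down-sample non-overlapping size x size regions by taking their maximum
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

feature_map = max_pool(relu(conv2d_valid(np.random.rand(8, 8), np.random.rand(3, 3))))
print(feature_map.shape)  # (3, 3)
```

An 8 × 8 input convolved with a 3 × 3 filter yields a 6 × 6 feature map, which the 2 × 2 pooling halves to 3 × 3, matching the layer-by-layer shrinkage described above.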

3.2 Transfer Learning

Training a CNN from scratch requires high computational power, huge datasets and extensive training time. To deal with this problem, transfer learning can be employed. In transfer learning, a pre-trained model is re-used for solving new and related problems. The idea behind transfer learning is that the weights learned on one problem can be saved and applied to another problem. This allows a CNN model to accomplish substantial results with a small amount of training data and less computation. In this paper, using transfer learning, three pre-trained CNN models, the Xception model, VGG16 model and ResNet50 model, have been employed for image classification of five bacteria species.
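The transfer learning setup described here can be sketched with Keras. This is a minimal illustration assuming TensorFlow/Keras; the paper does not state the size of the dense head, so the 256-unit layer below is a hypothetical choice:

```python
import tensorflow as tf

def build_model(weights="imagenet"):
    # Re-use Xception's convolutional base, dropping its original ImageNet classifier head
    base = tf.keras.applications.Xception(
        weights=weights, include_top=False,
        input_shape=(128, 128, 3), pooling="avg")
    base.trainable = False  # freeze the pre-trained feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.5),                    # regularization before the dense head
        tf.keras.layers.Dense(256, activation="relu"),   # head size is a hypothetical choice
        tf.keras.layers.Dense(5, activation="softmax"),  # one output per bacteria species
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

With the base frozen, the pre-trained network acts purely as a feature extractor, so only the small dense head needs to be trained on the bacteria images.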

Fig. 2 Sample images from the dataset: Micrococcus Luteus, Bacillus Anthracis, Staphylococcus Aureus, Thermus Sp., and Streptomyces Sp.

3.3 Dataset and Pre-Processing

In this work, a dataset of digital microscopic images of five bacterial species has been created. The microscopic images were collected from the National Centre for Cell Science, Pune, India. Bacteria samples were prepared with the gram staining method, and images were captured using bright field microscopy at a magnification of 1000x. To provide the CNN with diverse image data, data augmentation operations [14] like flipping, rotation and zooming were also performed. The final dataset contains a total of 5000 images, with 1000 images per bacteria species. The dataset was further split in an 80:20 ratio: the training set includes 3750 images and the test set consists of 1250 images. Further, 15% of the training set is used for validation. The images were pre-processed using a median filter to remove noise, and each image in the dataset was resized to 128 * 128 pixels. Sample images from the dataset are shown in Fig. 2.
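The pre-processing steps above, median filtering for noise removal plus flip/rotation augmentation, can be sketched in NumPy. This is an illustrative version, not the authors' actual pipeline:

```python
import numpy as np

def median_filter3(img):
    # 3x3 median filter for noise removal; edges are handled by replicate padding
    padded = np.pad(img, 1, mode="edge")
    windows = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(windows, axis=0)

def augment(img):
    # Flip/rotation augmentations; each input yields five extra training samples
    return [np.fliplr(img), np.flipud(img),
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3)]

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0            # a single salt-noise pixel
clean = median_filter3(noisy)  # the isolated spike is removed
print(clean[2, 2])  # 0.0
```

The median filter suppresses isolated noise pixels while preserving edges better than a mean filter, which is why it is a common choice for microscopy images.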

4 Experimentation and Results

In this work, three CNN models, the Xception model, VGG16 model and ResNet50 model, have been trained using transfer learning for image classification of five bacteria species. For feature extraction, features trained on the ImageNet dataset have been used. For classification, these features have been provided to the classification layers, which consist of fully-connected layers and a Softmax layer. The size of the fully-connected layers is kept the same in all models. Since ours is a five-class problem, the Softmax layer generates five probabilities as output. One of the major problems encountered in applying transfer learning to a small dataset is model overfitting. To avoid overfitting, a Dropout of value 0.5 has been added before the fully-connected layers. All three models have been trained using the Adam optimizer with learning rate = 0.001 for 50 epochs. Moreover, ReLU and categorical cross-entropy have been used as the activation and loss function respectively. To create the CNN models, the Python programming language along with the Keras and TensorFlow libraries has been used. The experiments have been performed on an Intel(R) Xeon(R) W-1370P @ 3.60 GHz with 32 GB RAM and an NVIDIA RTX A4000. The dataset is divided in an 80:20 ratio: 3187 images for training, 563 images for validation and 1250 images for testing. The models are trained using five-fold cross validation. For performance evaluation of the models, four metrics, namely Accuracy, Recall, Precision and F-score,


have been used. Accuracy gives the percentage of correctly classified samples. Recall measures the model's ability to identify positive samples. Precision gives the fraction of samples predicted as positive that are actually positive. F1-score combines Recall and Precision. The formulas for these metrics are:

Accuracy = (TP + TN)/(TP + TN + FP + FN)  (3)

Recall = TP/(TP + FN)  (4)

Precision = TP/(TP + FP)  (5)

F1-Score = 2 ∗ (Recall ∗ Precision)/(Recall + Precision)  (6)

where TN is True Negative, TP is True Positive, FN is False Negative and FP is False Positive. The performance metrics have been calculated from the confusion matrices. Figure 3 shows the confusion matrices for the Xception, VGG16 and ResNet50 models. Table 1 shows the experimental results achieved using the three models. From the results, it has been observed that the Xception model performed best among all models, with an accuracy of 98.02%. The training and validation accuracy curves have been plotted in Fig. 4. From the curves it can be seen that there is no overfitting, as the training and validation accuracies remain comparable.
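Equations (3)–(6) can be computed directly from confusion-matrix counts. A small sketch with hypothetical counts, not taken from the paper's actual matrices:

```python
def classification_metrics(tp, tn, fp, fn):
    # Eqs. (3)-(6), computed from confusion-matrix counts
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * (recall * precision) / (recall + precision)
    return accuracy, recall, precision, f1

# Hypothetical counts for illustration only
acc, rec, prec, f1 = classification_metrics(tp=90, tn=80, fp=10, fn=20)
print(round(acc, 2), round(f1, 3))  # 0.85 0.857
```

Note that F1 is the harmonic mean of Precision and Recall, so it penalizes a model that trades one for the other, which plain Accuracy does not.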

5 Conclusion and Future Scope

In this paper, a transfer learning based deep learning framework has been proposed for image classification of five bacteria species: Micrococcus Luteus, Bacillus Anthracis, Staphylococcus Aureus, Thermus Sp. and Streptomyces Sp. Microscopic bacteria images were prepared using the staining method. The dataset includes 2500 images that have been increased with image augmentation techniques such as rotation, flipping and cropping, giving 5000 images in the final dataset. To classify these images, three pre-trained CNN models, VGG16, ResNet50, and Xception, have been fine-tuned. A comparative study of the three different pre-trained CNN models was carried out. It has been observed that the Xception model achieved better classification results than the other two models. The Xception model, VGG16 model and ResNet50 model have achieved accuracies of 98.02%, 92.24% and 95.87% respectively. From these results, it is concluded that deep learning has huge scope for accurately classifying bacteria species. Though the proposed model has achieved promising results, this study is limited to only five bacteria species. In future, we will explore the applicability of CNNs for image classification of more bacteria species. The model will also be

Fig. 3 Confusion matrix. a Confusion matrix of VGG16. b Confusion matrix of ResNet50. c Confusion matrix of Xception

Table 1 Classification results of Xception, VGG16 and ResNet50

| Models   | Accuracy (%) | Precision | Recall | F1-score |
|----------|--------------|-----------|--------|----------|
| VGG16    | 92.24        | 0.91      | 0.92   | 0.91     |
| Xception | 98.02        | 0.98      | 0.98   | 0.98     |
| ResNet50 | 95.87        | 0.96      | 0.95   | 0.95     |

extended and fine-tuned for image recognition of identically shaped bacteria species. The dataset used in this work is also small, so its size will be increased in future work. Deep learning based research has a wide range of applications in microbiology; however, combined efforts are required from scientists in fields such as informatics, medicine and biology.
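The augmentation step described in the conclusion (rotation, flipping, cropping) that doubled the dataset from 2,500 to 5,000 images can be sketched with NumPy alone; the random array below stands in for a real micrograph, and the specific augmentation choices are illustrative rather than the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for one grayscale microscopy image with values in [0, 1].
image = rng.random((224, 224))

def augment(img, rng):
    """Return one randomly augmented copy: a 90-degree rotation,
    a horizontal/vertical flip, or a central crop padded back to size."""
    choice = rng.integers(0, 3)
    if choice == 0:
        return np.rot90(img, k=rng.integers(1, 4))
    if choice == 1:
        return img[:, ::-1] if rng.random() < 0.5 else img[::-1, :]
    # Crop roughly 10% off each border, then pad back to the original size.
    h, w = img.shape
    crop = img[h // 10 : -(h // 10), w // 10 : -(w // 10)]
    return np.pad(crop, ((h // 10, h - crop.shape[0] - h // 10),
                         (w // 10, w - crop.shape[1] - w // 10)), mode="edge")

# Doubling the dataset, as in the paper: originals plus one augmented copy each.
dataset = [image]
augmented = [augment(img, rng) for img in dataset]
final = dataset + augmented
print(len(final), final[0].shape == final[1].shape)
```

Keeping the augmented copies the same shape as the originals is what lets them be fed to the same fine-tuned CNN input layer.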


P. Rani et al.

Fig. 4 Accuracy curves. a Accuracy curves of VGG16. b Accuracy curves of ResNet50. c Accuracy curves of Xception

Acknowledgements We would like to thank the National Centre for Cell Science in Pune, India, for their invaluable assistance in creating the bacteria species image dataset.

References

1. Tshikantwa TS, Ullah MW, He F, Yang G (2018) Current trends and potential applications of microbial interactions for human welfare. Front Microbiol 9:1156
2. Luise CC, James GC, Iain MS (2016) Mechanisms linking bacterial infections of the bovine endometrium to disease and infertility. Reprod Biol 16(1):1–7
3. Franco-Duarte R, Cernakova L, Kadam S, Kaushik KS, Salehi B, Bevilacqua A et al (2019) Advances in chemical and biological methods to identify microorganisms—from past to present. Microorganisms 7(5)
4. Guan Y, Han Y, Liu S (2022) Deep learning approaches for image classification techniques. In: 2022 IEEE international conference on electrical engineering, big data and algorithms (EEBDA), pp 1132–1136
5. Ferrari A, Lombardi S, Signoroni A (2017) Bacterial colony counting with convolutional neural networks in digital microbiology imaging. Pattern Recogn 61:629–640


6. López YP, Costa Filho CFF, Aguilera LMR, Costa MGF (2017) Automatic classification of light field smear microscopy patches using convolutional neural networks for identifying Mycobacterium tuberculosis. In: 2017 CHILEAN conference on electrical, electronics engineering, information and communication technologies (CHILECON), pp 1–5
7. Zielinski B, Plichta A, Misztal K, Spurek P, Brzychczy-Wloch M, Ochonska D (2017) Deep learning approach to bacterial colony classification. PLoS ONE 12(9)
8. Hay E, Parthasarathy R (2018) Performance of convolutional neural networks for identification of bacteria in 3D microscopy datasets. PLOS Comput Biol 14(12)
9. Panicker RO, Kalmady KS, Rajan J, Sabu MK (2018) Automatic detection of tuberculosis bacilli from microscopic sputum smear images using deep learning methods. Biocybern Biomed Eng 38(3)
10. Traore BB, Kamsu-Foguem B, Tangara F (2018) Deep convolution neural network for image recognition. Ecol Inform 48:257–268
11. Ahmed T, Wahid MF, Hasan MJ (2019) Combining deep convolutional neural network with support vector machine to classify microscopic bacteria images. In: 2019 international conference on electrical, computer and communication engineering (ECCE), pp 1–5
12. Treebupachatsakul T, Poomrittigul S (2019) Bacteria classification using image processing and deep learning. In: 2019 34th international technical conference on circuits/systems, computers and communications (ITC-CSCC), pp 1–3
13. LeCun Y, Bengio Y (1998) Convolutional networks for images, speech, and time series. In: The handbook of brain theory and neural networks. MIT Press, Cambridge, MA, USA, pp 255–258
14. Shijie J, Ping W, Peiyi J, Siping H (2017) Research on data augmentation for image classification based on convolution neural networks. In: 2017 Chinese automation congress (CAC), pp 4165–4170

Deep Learning-Based Recognition and Classification of Different Network Attack Patterns in IoT Devices

Hiteshwari Sharma, Jatinder Manhas, and Vinod Sharma

Abstract With the advent of a paradigm shift in the area of data communication, the Internet of Things (IoT) has remarkably transformed all facets of information sharing and data aggregation. It comprises numerous heterogeneous devices and sensors with different protocols and standards which collect and exchange data over the Internet through robust connections. The extensive usage of these devices poses a vast number of cyber security threats, as they are subject to a diverse range of attacks due to low power and computational and memory limitations. Various security frameworks and mechanisms have been presented to defend against these attacks. Intrusion detection systems are dedicated hardware- or software-based frameworks that protect these devices from cyber and malicious threats. In this paper, we evaluate different deep learning algorithms on a state-of-the-art IoT dataset, and it has been observed that deep learning-based techniques show sufficient potential in the identification and classification of different network attack patterns in IoT devices.

Keywords Internet of Things · Cyber security · Intrusion detection system · Deep learning

1 Introduction

The Internet of Things (IoT) denotes vast interconnected networks which include multiple sensors, devices and microcontrollers. This large web of networks enables applications such as smart homes, smart cities and healthcare, and its robust, pervasive interconnection with other networks for communication induces greater dependence on these devices. However, they are highly susceptible to a variety of cyber-attacks and thefts because of unstandardized security protocols, memory and computation constraints, and the different

H. Sharma (B) · J. Manhas · V. Sharma Department of Computer Science & IT, Bhaderwah Campus, University of Jammu, Jammu, India e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_2


architectures and standards of the devices involved in the network, and so on. Secure authentication mechanisms and monitoring systems have proven effective in detecting and evading cyber-attacks. Intrusion detection systems are dedicated frameworks which help identify the majority of cyber assaults at different layers of an IoT network. They have high potential to uncover malicious and exotic traffic patterns which can lead to severe security threats. Cyber-attacks in IoT are highly complex and challenging due to the rapid advancement in attack pattern generation. Recent developments in deep learning algorithms have been applied to network anomaly detection, intrusion detection and prevention, and it has been observed that deep learning-based techniques show sufficient potential in the identification and classification of different network attack patterns in IoT devices [1]. In IoT architectures, where data size grows exponentially as billions of connected devices generate enormous quantities of network traffic, deep learning has the potential to process vast datasets successfully. The DL approach helps predict large-scale and varied IoT assaults by revealing hidden patterns in traffic. It is a remarkable field of AI which can enhance security measures and counter multiple attacks by discovering hidden patterns in the training data, and can readily discriminate attacks from routine traffic. In this work, an extensive study of different intrusion detection systems for IoT is carried out along with their potential and challenges. In addition, deep learning-based algorithms are applied on the CIC IoT 2022 dataset, one of the latest benchmark datasets for IoT networks. The major contributions of this research work are:

(i) Different IoT-based intrusion detection systems are studied and analysed.
(ii) IoT-based standard IDS datasets and their features are explored and examined.
(iii) Deep learning-based algorithms are applied on the selected CIC IoT dataset for attack pattern recognition and detection.

2 Literature Survey

A lot of research on IoT security has been undertaken, and attempts have been made to design resilient IDSs and security frameworks for dedicated IoT devices. An IDS can be a software application or hardware device that protects a network or individual system by raising alerts in the event of a security breach, and can further take actions to stop the attacker. Roy et al. [2] proposed a Bidirectional Long Short-Term Memory Recurrent Neural Network (BLSTM-RNN) and performed binary classification of different attack patterns using the UNSW-NB15 dataset (Fig. 1). The proposed study achieved 95% accuracy in attack detection. Deep learning models have further been exploited for detection of DDoS attacks in IoT networks [3]: different DL algorithms like CNN, LSTM and MLP were implemented on the novel CICIDS2017 dataset, followed by comparison with traditional ML algorithms; the CNN + LSTM combination achieved the highest accuracy of 97.16%. Distributed generative adversarial networks (GANs) have been utilized for malicious pattern detection in IoT networks


Fig. 1 IDS placement in IoT network

[4]. Centralized control is utilized only at the training phase, as all the IoTD generators are capable of detecting intrusions at the intrusion detection phase. Simulations were performed on standalone, centralized and distributed GANs using ANNs in the TensorFlow library, and a comparative study was further performed between the standalone and proposed GANs using standard performance measure criteria. Thamilarasu et al. [5] presented an integrated anomaly-based IDS exploiting deep learning models for IoT networks, followed by implementation and evaluation on a real-time IoT testbed. Data collection and feature extraction are handled using perceptual learning, whereas DNNs are used for anomaly detection. A Raspberry Pi and the Keras library were used for implementation, and the Cooja simulator for testing the IDS. Different attacks like wormhole, sinkhole and DDoS were evaluated on the proposed IDS, and an average precision rate of 95% was achieved across attack scenarios. Temporal Convolutional Networks, an integration of CNNs and causal convolution, have been employed for developing an IoT IDS [6]. The proposed IDS was implemented and analysed on the BoT-IoT dataset and compared with ML (Logistic Regression and Random Forest) and deep learning algorithms (CNN and LSTM). Feature reduction and transformation techniques were used together with SMOTE-NC to alleviate the data imbalance problem, and an accuracy of 99.9643% was achieved on the BoT-IoT dataset. Artificial Neural Networks have been used to assess the security concerns of fog nodes in IoT architectures [7]. An IoT testbed composed of different sensors was implemented on a Raspberry Pi 3 configured as a fog node; anomaly behaviour analysis was performed in different phases, followed by testing under normal and flooding attack scenarios, achieving an accuracy of 97.51%. Liang et al. [8] presented a hybrid intrusion detection system which utilizes multi-agent systems, blockchain and deep learning algorithms. The proposed SESS system comprises multiple communication agents which are continuously updated and improved through reinforcement learning in deep neural networks (DNNs).


All the communication transactions are stored on a blockchain for reliability, and only the system manager has access to it. The framework was executed on the NSL-KDD dataset. Convolutional Neural Networks have further been exploited to design an IoT IDS against standard botnet attacks using the BoT-IoT dataset [9]. Deep neural networks have been used for detecting intrusions in MQTT-based IoT devices [10]: the MQTT-IoT-IDS2020 dataset was used for detecting attacks like man-in-the-middle, DoS and intrusion attacks on brokers. The MQTT protocol-based network's features are fed to the input layer of the DNN-based learning model, Rectified Linear Unit (ReLU) activation is utilized in two hidden layers, and sigmoid activation is used for binary classification and SoftMax for multi-class attack classification in the output layer. An accuracy of 97.13% was achieved, outperforming other DL models. Zhong et al. [11] introduced a sequential-model-based IDS for IoT exploiting deep learning algorithms: a Gated Recurrent Unit and a Text-CNN are used for feature extraction, followed by classification with an MLP and SoftMax regression in the output module. The KDD-99 and ADFA-LD datasets were used for this study, and a comparative analysis of traditional ML classifiers and the GRU-TCNN model was performed using the F1-score metric. IDSs in IoT can be host-based, network-based or hybrid [12]. The area of deep learning is being widely explored for IoT security, and a novel deep learning-based IDS named DF-IDS has been put forward for IoT networks which detects intrusions in two parts [13]: feature extraction and selection are executed using standard benchmark techniques like PCA and information gain, followed by training with deep neural networks; a high accuracy of 99.23% was achieved on the NSL-KDD dataset. Le et al. [14] put forward an intelligent IDS, IMIDS, which exploits GANs for training data generation and a convolutional network to categorize different cyber assaults in IoT. Simulation-based experiments on two novel datasets, UNSW-NB15 and CICIDS2017, achieved high accuracies of 96.66% and 95.92%, respectively. The proposed detection system can detect nine different attack categories such as DoS, Generic and Exploits. Idrissi et al. [15] presented and discussed different possibilities of employing DL-based host IDS techniques in a particular real-time IoT environment. A fog node is chosen for the installation, where only inbound traffic is analysed, and the authors suggest designing a customized IDS for each device category due to differing hardware architectural specifications. From this literature review and analysis, it can be concluded that deep learning algorithms are highly capable of detecting and recognizing multiple attack scenarios in IoT environments. However, the dataset is an essential component for deep learning, and hence selection of an IoT-specific dataset is the most intricate task before identifying any DL algorithms and other related techniques (Table 1 and Fig. 2).

3 Proposed Work

This paper proposes a deep learning-based technique for recognition and classification of different attack patterns in IoT networks. Many researchers have presented work on prevalent datasets like KDD CUP 99, NSL-KDD, UNSW-NB15


Table 1 IDS for detection of attack patterns in IoT networks

| Refs. | Proposed IDS | Technique used | Dataset | Results |
|-------|--------------|----------------|---------|---------|
| [2] | BLSTM-RNN IDS | Bidirectional Long Short-Term Memory | UNSW-NB15 | Detection accuracy 95% |
| [3] | DL based IDS | DL algorithms like CNN, LSTM and RNN | CICIDS2017 | Detection accuracy 97.16% |
| [4] | Fully distributed IDS | GANs utilized for malicious pattern detection | Human activity recognition datasets | Detection accuracy 84% |
| [6] | DL based IDS | Temporal CNN and SMOTE-NC | BoT-IoT dataset | Detection accuracy 99.99% |
| [7] | Anomaly-based adaptive IDS | Artificial Neural Networks employed on fog nodes | Dataset generated from IoT testbed | Detection accuracy 97.51% |
| [8] | Hybrid IDS | Reinforcement learning, blockchain and multi-agent system | NSL-KDD dataset | Detection accuracy 97% |
| [9] | Baptized BoT IDS | CNN, RNN, LSTM, GRU | BoT-IoT dataset | Detection accuracy 99.94% |
| [10] | DL based IDS | Deep Neural Networks in MQTT based IoT devices | MQTT-IoT-IDS2020 dataset | Detection accuracy 97.13% |
| [12] | DF-IDS | PCA and Deep Neural Networks | NSL-KDD dataset | Detection accuracy 99.23% |
| [13] | IMIDS | Convolutional Neural Networks | UNSW-NB15 and CICIDS2017 | Detection accuracy 96.6% (UNSW-NB15), 95.92% (CICIDS2017) |

Fig. 2 Standard datasets used by researchers for attack detection in IoT devices


datasets, and many more. However, these datasets are not suitable for detecting IoT-based attacks because of the underlying hardware architectures, distinct protocol stacks and communication standards of IoT devices. We selected one of the latest benchmark IoT datasets, CIC IoT 2022, provided by the Canadian Institute for Cybersecurity. This state-of-the-art dataset includes data captured from a total of 60 IoT sensor devices with diverse standards such as IEEE 802.11, Zigbee and Z-Wave. The network traffic was captured using Wireshark and the tcpdump tool. A diverse range of IoT devices was employed to analyse how devices with different standards and architectures generate traffic in isolation, and how they interface and interact with other devices. The devices were selected under distinct categories like Audio, Camera and Home automation, and different experiments were performed under six environments: Idle, Power, Interactions, Scenarios, Active and Attacks [16]. A total of 48 features were extracted from the network traffic. The framework for the development of an IDS in an IoT environment is proposed in different phases. The initial step captures IoT network traffic, and the data is stored in multiple pcap files under different device categories. We extracted the pcap files by attack category and converted them into .csv format. For example, the Flood Attack directory contains sub-directories for different IoT devices, which in turn include packet capture (.pcap) files under three protocols: HTTP, UDP and TCP. After data preprocessing, all the data is integrated and consolidated into a single csv file, which includes normal traffic and two main attack categories, DoS and Exploits. Our intrusion detection model based on deep learning is implemented on Google Colab using the TensorFlow and Keras libraries. Keras is an open-source high-level neural network library in Python capable of running on top of TensorFlow, Theano or CNTK. Google Colab is a free hosted notebook service from Google for writing and executing Python code; it is widely used in data analytics, machine learning and deep learning. We have employed Simple RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) models for our experimental evaluation. These three RNN classes of DL algorithms are capable of processing and analysing large sets of examples generated from a real-time IoT environment. Recurrent neural networks are an efficient class of deep learning models which are highly effective at representing sequential data such as time series and natural language. An RNN layer iterates across a sequence's timesteps while retaining an internal state that summarizes the timesteps it has already seen. To calculate gradients, a recurrent neural network uses the backpropagation through time (BPTT) algorithm, which differs slightly from conventional backpropagation because it is tailored for sequence data. LSTM, or Long Short-Term Memory, is an advanced RNN which learns long-term dependencies in the input and hence handles the vanishing gradient problem faced by RNNs. It is composed of three gates: an input gate, a forget gate and an output gate. It also maintains an internal cell state which is passed forward along with the hidden state; this is the main way it differs from a basic recurrent neural network. GRUs are a further refinement of the LSTM which do not maintain a separate cell state and have a faster training time. Here the three gates of the LSTM are restructured in the form


of two gates, the Reset Gate and the Update Gate. These gates use sigmoid activations, as in LSTMs, so their values fall within the interval (0, 1). Intuitively, the reset gate regulates how much of the prior state is retained [17]. The whole working methodology, with its different phases, is illustrated in Fig. 3.

Fig. 3 Proposed work methodology
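A single GRU timestep with the reset and update gates just described can be written out explicitly. The gate equations below follow the standard GRU formulation; the layer sizes and random weights are illustrative, not taken from the paper's models:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU timestep: the reset gate r decides how much past state feeds
    the candidate, the update gate z blends old state with the candidate."""
    Wr, Ur, br, Wz, Uz, bz, Wh, Uh, bh = params
    r = sigmoid(Wr @ x + Ur @ h_prev + br)   # reset gate, values in (0, 1)
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)   # update gate, values in (0, 1)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde

rng = np.random.default_rng(1)
n_in, n_hid = 8, 4                           # illustrative sizes
params = [rng.normal(size=s) for s in
          [(n_hid, n_in), (n_hid, n_hid), (n_hid,)] * 3]
h = np.zeros(n_hid)
for _ in range(5):                           # run a short input sequence
    h = gru_step(rng.normal(size=n_in), h, params)
print(h.shape, bool(np.all(np.abs(h) < 1.0)))
```

Because each new state is a gate-weighted blend of the previous state and a tanh candidate, the hidden state stays bounded in (−1, 1), which is part of why GRUs train stably on long IoT traffic sequences.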


4 Results and Analysis

The proposed model achieved significantly good results in detecting flood and brute-force attacks, which are merged under the labels DoS and Exploits in the compiled csv file. The model was trained on 81,173 samples, and all the deep learning models discussed above were implemented with the Keras library. ReLU and sigmoid activation functions are used for the hidden and output layers, respectively, with a dropout value of 0.2. The Adam optimizer is utilized for updating the network weights; it requires less memory and is computationally efficient compared with other optimizers. The experimental results are summarized in Table 2: GRU achieved the highest accuracy of 98.7% for detecting DoS attacks, and all the algorithms attained almost the same accuracy of around 96% for the Exploits category. The training is performed for 50 epochs with a batch size of 10 (Fig. 4).

Table 2 Different DL algorithms trained on CIC IoT 2022 with their corresponding detection accuracy

| S. no | DL algorithm | Detection accuracy, DoS (%) | Detection accuracy, Exploits (%) |
|-------|--------------|-----------------------------|----------------------------------|
| 1 | Simple RNN | 97.8 | 96 |
| 2 | LSTM | 98 | 96.4 |
| 3 | GRU | 98.7 | 96 |

Fig. 4 Graphical comparative analysis of deep learning algorithms (detection accuracy for DoS and Exploits across Simple RNN, LSTM and GRU)
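The Adam update used for training keeps only per-parameter first- and second-moment running averages, which is why it is memory-light compared with other optimizers. A minimal sketch on a toy quadratic loss follows; the hyperparameters are Adam's usual defaults, not values reported in the paper:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), with bias correction for the early steps."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)              # bias-corrected first moment
    v_hat = v / (1 - b2**t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimise f(theta) = ||theta - target||^2.
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
for t in range(1, 5001):
    grad = 2.0 * (theta - target)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
print(np.round(theta, 3))
```

Only `m` and `v` (each the size of the parameters) are stored, so memory cost is twice the parameter count, independent of the dataset size.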


5 Conclusion

An intrusion detection system for IoT can be designed and developed efficiently with an IoT-specific dataset which includes all the device-specific attributes and their corresponding vulnerability statistics. Traditional intrusion detection systems are not robust or efficient enough to detect the latest dynamic attacks in this field. The CIC IoT 2022 dataset is a versatile, real-time, IoT-specific dataset and proves highly appropriate for detecting IoT-related attacks with deep learning algorithms. For future work, other deep learning algorithms such as autoencoders, transformers and GANs can be explored for detecting malicious network traffic and intrusions in IoT networks, and real-time datasets can be generated from vast sets of IoT devices for the detection and analysis of attack patterns. Although dataset generation and aggregation in IoT networks are tedious and challenging due to varied protocol standards and device configurations, genuine device profiling and behaviour analysis help in the easy and timely detection of cyber assaults in IoT networks.

References

1. Khan AR, Kashif M (2022) Deep learning for intrusion detection and security of Internet of Things (IoT): current analysis, challenges, and possible solutions. Security and Communication Networks. Hindawi
2. Roy B, Cheung H (2019) A deep learning approach for intrusion detection in internet of things using bi-directional long short-term memory recurrent neural network. In: International telecommunication network and applications conference. IEEE. https://doi.org/10.1109/ATNAC.2018.8615294
3. Roopak M, Tian GY, Chambers J (2019) Deep learning models for cybersecurity in IoT networks. In: 9th annual computing and communication workshop and conference. IEEE. https://doi.org/10.1109/CCWC.2019.8666588
4. Ferdowsi A, Saad W (2019) Generative adversarial networks for distributed intrusion detection in the internet of things. In: Global communication conference. IEEE. https://doi.org/10.1109/GLOBECOM38437.2019.9014102
5. Thamilarasu G, Chawla S (2019) Towards deep-learning-driven intrusion detection for the internet of things. Sensors
6. Derhab A, Aldweesh A, Emam AZ, Khan FA (2020) Intrusion detection system for internet of things based on temporal convolution neural network and efficient feature engineering. Hindawi
7. Pachecho J, Benitez VH, Felix Herran LC, Satam P (2020) Artificial neural networks-based intrusion detection system for internet of things fog nodes. IEEE Access
8. Liang C, Shanmugam B, Azam S, Karim A, Islam A, Zamani M, Kavianpour S, Bashah N (2020) Intrusion detection system for the internet of things based on blockchain and multi-agent systems. Electronics. https://doi.org/10.3390/electronics9071120
9. Idrissi I, Boukabous AM, Moussaoui O, Fadili HL (2021) Toward a deep learning-based intrusion detection system for IoT against botnet attacks. International Journal of Artificial Intelligence, pp 110–120
10. Khan MA, Khan MA, Jan SU, Ahmad J, Jamal SS (2021) A deep learning-based intrusion detection system for MQTT enabled IoT. Sensors
11. Zhong M, Zhou Y, Chen G (2021) Sequential model based intrusion detection system for IoT servers using deep learning methods. Sensors 21:113


12. Different types of intrusion detection systems (IDS). https://wisdomplexus.com/blogs/different-types-of-intrusion-detection-systems-ids/. Last accessed 02 May 2022
13. Nasir M, Javed AR, Tariq MA, Asim M, Baker T (2021) Feature engineering and deep learning-based intrusion detection framework for securing edge IoT. The Journal of Supercomputing. Springer Nature
14. Le KH, Nguyen MH, Tran DT, Tran ND (2022) IMIDS: an intelligent intrusion detection system against cyber threats in IoT. Electronics 11:524
15. Idrissi I, Azizi M, Moussaoui O (2022) A lightweight optimized deep learning-based host-intrusion detection system deployed on the edge for IoT. Int J Comput Dig Syst. https://doi.org/10.12875/ijcds/110117
16. Dadkhah S, Mahdikhani H, Danso PK, Zohourian A, Truong KA (2022) Towards the development of a realistic multidimensional IoT profiling dataset. In: International conference on privacy, security & trust. IEEE
17. Recurrent Neural Network with Keras. https://www.tensorflow.org/guide/keras/rnn/. Last accessed 20 Dec 2022

Lipid Concentration Effects on Blood Flow Through Stenosed Tube

Neha Phogat, Sumeet Gill, Rajbala Rathee, and Jyoti

Abstract The present investigation analyzes the combined effect of magnetic field and slip velocity on blood flow through a stenosed artery with permeable walls. The blood is treated as an elastic-viscous fluid. Solutions for the effects of temperature and concentration on the streaming blood are obtained by solving non-linear coupled equations using the Homotopy Perturbation Method, and computational results are presented graphically using MATLAB. The Brownian motion and thermophoresis parameters are discussed and emphasized. The study offers a fresh perspective on the role of various parameters related to flow characteristics and the circulation of blood in the human body under magnetic field and slip velocity.

Keywords Elastic-viscous fluid · Wall permeability · Brownian motion · Thermophoresis · Transverse magnetic field · Fluid acceleration · Slip velocity

1 Introduction

The malfunction of the cardiovascular system is one of the prominent causes of death. The prime function of the human cardiovascular system is to deliver oxygen-enriched blood to every cell of the body. Blood flow in the human circulatory system is a continuous process, and irregularities in this process result in a variety of cardiovascular disorders and illnesses such as atherosclerosis (popularly known as stenosis). Such issues make it essential to analyze the blood flow disorders

N. Phogat (B) · S. Gill · Jyoti Department of Mathematics, M.D. University, Rohtak, Haryana 124001, India e-mail: [email protected]

S. Gill e-mail: [email protected]

R. Rathee A.I.J.H. Memorial College, Rohtak, Haryana 124001, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_3


in constricted human arteries. In the case of a stenosed artery, the coupled non-linear equations along with the corresponding boundary conditions are solved using the Homotopy Perturbation Method. Blood flow characteristics through a stenosed vessel with boundary conditions were examined by Beavers and Joseph [1], who emphasized the role of slip velocity at a naturally permeable vessel wall. An improved version, with boundary conditions on fluid flow at the surface of the porous medium, was investigated by Saffman [2]. McDonald [3] examined the flow of blood through a modelled vascular stenosis. Sanyal and Maji [4] graphically explained the effect of pressure gradient and wall shear stress on unsteady blood flow in the presence of mild stenosis. Haldar and Ghosh [5] examined the effect of a magnetic field on blood flow, treating blood as a Newtonian fluid, and analyzed expressions for blood velocity, pressure and flow rate. Varshney et al. [6] investigated the effect of overlapping stenosis and an externally applied magnetic field on non-Newtonian blood flow. The present study explores the combined influence of magnetic field and slip velocity on non-Newtonian blood flow through a stenosed artery with permeable walls, and emphasizes the lipid concentration effects on such flow using the Homotopy Perturbation Method.

2 Formulation of the Problem

We consider blood as a non-Newtonian elastico-viscous, incompressible and electrically conducting fluid with nanoparticles, under the influence of a transversely applied magnetic field. The motion of blood is assumed to be one-dimensional in a straight, rigid, cylindrical stenosed artery through a porous medium. Further, the blood flow is assumed to be laminar, unsteady, axially symmetric and fully developed. A cylindrical co-ordinate system (r, z) is considered such that the velocity component along the z axis is u. The heat and mass transfer phenomenon is described by assigning temperature T and concentration C to the wall of the tube. By Ohm's law, we have

$$\vec{J} = \sigma\left(\vec{E} + \vec{u}\times\vec{\beta}\right) \tag{1}$$

where $\vec{E}$ is the electric field intensity vector, $\sigma$ is the electrical conductivity, $\vec{u}$ is the velocity vector, and $\vec{\beta} = \vec{\beta}_o + \vec{\beta}_1$ is the total magnetic flux intensity vector, in which $\vec{\beta}_1$ is the induced magnetic field vector, which is very small and assumed negligible in comparison with the externally applied magnetic field vector $\vec{\beta}_o$. The electric field intensity vector $\vec{E}$ due to the polarization of charge is also considered negligible.


Fig. 1 Geometry of stenosed artery

Therefore, the electromagnetic force is defined as

$$\vec{F} = \vec{J}\times\vec{\beta} = -\sigma\beta_o^2\,\vec{u} \tag{2}$$

where $\left|\vec{\beta}_o\right| = \beta_o$. Now, the geometry of a symmetric stenosed artery, as given by Haldar and Ghosh [5] in 1994, is shown in Fig. 1:

$$\frac{R(z)}{R_o(z)} = 1 - F\left[l_o^{\,n-1}(z-d) - (z-d)^n\right]; \quad d \le z \le d + l_o \tag{3}$$

where $n \ge 2$ is the parameter determining the shape of the stenosis (the symmetric stenosis corresponds to $n = 2$), $d$ is the position of the stenosis, $l_o$ is the length of the stenosis, $R_o(z)$ is the radius of the unstenosed artery, $R(z)$ is the radius of the circular cylindrical stenosed artery, $z$ indicates the axial position, and the parameter $F$ is given by

$$F = \frac{\varepsilon}{R_o\,l_o^{\,n}}\cdot\frac{n^{n/(n-1)}}{n-1},$$

where $\varepsilon$ is the maximum height of the developed stenosis, located at $z = d + \dfrac{l_o}{n^{1/(n-1)}}$, with $\dfrac{\varepsilon}{R_o(z)} \ll 1$.

Now, the equations governing the flow in cylindrical polar co-ordinates are:

$$\mu\nabla^2 u + \mu_1\frac{\partial}{\partial t}\left(\nabla^2 u\right) - \sigma\beta_o^2 u - \frac{\partial p}{\partial z} - \frac{\mu}{K}\,u + g(\rho\gamma)_{nf}(T - T_\circ) + \rho g\tilde{\alpha}\,(C - C_\circ) = \rho\,\frac{\partial u}{\partial t} \quad (4)$$

$$\frac{\partial T}{\partial t} = \alpha_{nf}\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial T}{\partial r}\right) + \zeta\left[D_B\,\frac{\partial C}{\partial r}\frac{\partial T}{\partial r} + \frac{D_T}{T_0}\left(\frac{\partial T}{\partial r}\right)^{\!2}\right] + \frac{Q_0}{(\rho C_p)_{nf}} \quad (5)$$

N. Phogat et al.

and

$$\frac{\partial C}{\partial t} = D_B\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial C}{\partial r}\right) + \frac{D_T}{T_0}\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial T}{\partial r}\right) \quad (6)$$

where $\nabla^2 = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial}{\partial r}\right)$, $-\frac{\partial p}{\partial z} = A_o + A_1\cos(t/t_o)$, $A_o$ is the constant amplitude of the pressure gradient, $A_1$ is the amplitude of the pulsatile component, $\frac{1}{t_o} = 2\pi f_p$, $f_p$ is the heart pulse frequency, t is the time variable, T is the temperature, C is the concentration, u(r, t) is the axial velocity component, ρ is the density of blood, μ is the viscosity of blood, $\mu_1$ is the elastic viscosity coefficient of blood, g is the acceleration due to gravity, $\zeta = \frac{(\rho C)_{nf}}{(\rho C)_f}$ is the ratio between the effective heat capacity of the nanoparticle and the heat capacity of the fluid, K is the permeability of the isotropic porous medium, $D_B$ is the Brownian diffusion coefficient, $D_T$ is the thermophoretic diffusion coefficient and r is the radial co-ordinate. The boundary conditions are as follows:

$$\frac{\partial C}{\partial r} = 0 \ \text{when}\ r = 0; \quad C = C_o \ \text{when}\ r = R(z)$$

$$\frac{\partial T}{\partial r} = 0 \ \text{when}\ r = 0; \quad T = T_o \ \text{when}\ r = R(z).$$

Defining

$$T = [\theta + 1]T_o; \quad C = [\tilde{\sigma} + 1]C_o; \quad t = \bar{t}\,t_o; \quad r = y\,R_o$$

the non-dimensional forms are as follows:

$$\left(1 + \psi\frac{\partial}{\partial t}\right)\nabla^2 u - M^2 u + N(A_o + A_1\cos t) - H^2 u + G_r\theta + G_f\tilde{\sigma} = \alpha^2\,\frac{\partial u}{\partial t} \quad (7)$$

$$\nabla^2\theta + \beta + N_B\,\frac{\partial\tilde{\sigma}}{\partial y}\frac{\partial\theta}{\partial y} + N_T\left(\frac{\partial\theta}{\partial y}\right)^{\!2} = \frac{1}{\alpha_1^2}\,\frac{\partial\theta}{\partial t} \quad (8)$$

and

$$\nabla^2\tilde{\sigma} + \frac{N_T}{N_B}\,\nabla^2\theta = \frac{1}{\alpha_2^2}\,\frac{\partial\tilde{\sigma}}{\partial t} \quad (9)$$

where $\nabla^2 = \frac{1}{y}\frac{\partial}{\partial y}\left(y\frac{\partial}{\partial y}\right)$, $-\frac{\partial p}{\partial z} = A_o + A_1\cos t$, $\beta = \frac{Q_\circ R_\circ^2}{\alpha_{nf}\,T_\circ\,(\rho C_p)_{nf}}$, $N_B = \frac{\zeta D_B C_\circ}{\alpha_{nf}}$ is the Brownian motion parameter, $N_T = \frac{\zeta D_T}{\alpha_{nf}}$ is the thermophoresis parameter, $\alpha_1^2 = \frac{\alpha_{nf}\,t_\circ}{R_\circ^2}$, $\alpha_2^2 = \frac{D_B\,t_\circ}{R_\circ^2}$, $\psi = \frac{\mu_1}{\mu\,t_o}$, $M = \beta_o R_o\sqrt{\sigma/\mu}$ is the Hartmann number, $H = R_o\sqrt{1/K}$ is the permeability parameter, $\alpha = R_o\sqrt{\rho/(\mu t_o)}$ is the Womersley parameter, $N = \frac{R_o^2}{\mu}$, $G_r = g(\rho\gamma)_{nf}\,\frac{R_o^2 T_o}{\mu}$ is the local temperature Grashof number and $G_f = \frac{g\rho\tilde{\alpha}\,C_o R_o^2}{\mu}$ is the local concentration Grashof number.

Further, it is assumed that for values of t < 0 only the heart's pumping action exists, and at t = 0 the blood flow in the artery starts under the instantaneous pressure gradient, i.e.


$$-\frac{\partial p}{\partial z} = A_o + A_1.$$

The boundary conditions in the non-dimensional form are as follows:

$$\frac{\partial\theta}{\partial y} = 0 \ \text{when}\ y = 0; \quad \theta = 0 \ \text{when}\ y = R(z)$$

$$\frac{\partial\tilde{\sigma}}{\partial y} = 0 \ \text{when}\ y = 0; \quad \tilde{\sigma} = 0 \ \text{when}\ y = R(z).$$

The initial condition is described by the slip condition

$$\frac{\partial u}{\partial r} = -hu, \quad \text{or} \quad \frac{\partial u}{\partial r} + hu = 0 \ \text{when}\ r = a \ \text{and}\ t \ge 0$$

where $h = -\frac{\eta\,R_o(z)}{\sqrt{K}}$ and $a = \frac{R(z)}{R_o(z)}$; η is a constant which depends on the properties of the porous medium and on its structure.

3 Solution of the Problem

The solutions of the non-linear coupled Eqs. (8) and (9) are obtained by the Homotopy Perturbation Method as follows:

$$H(k,\theta) = (1-k)\big[L(\theta) - L(\theta_{10})\big] + k\left[L(\theta) + \beta + N_B\,\frac{\partial\tilde{\sigma}}{\partial y}\frac{\partial\theta}{\partial y} + N_T\left(\frac{\partial\theta}{\partial y}\right)^{\!2} - \frac{1}{\alpha_1^2}\frac{\partial\theta}{\partial t}\right] = 0 \quad (10)$$

$$H(k,\tilde{\sigma}) = (1-k)\big[L(\tilde{\sigma}) - L(\tilde{\sigma}_{10})\big] + k\left[L(\tilde{\sigma}) + \frac{N_T}{N_B}\,\frac{1}{y}\frac{\partial}{\partial y}\!\left(y\frac{\partial\theta}{\partial y}\right) - \frac{1}{\alpha_2^2}\frac{\partial\tilde{\sigma}}{\partial t}\right] = 0 \quad (11)$$

where k is the embedding parameter with range 0 ≤ k ≤ 1 and $L = \frac{1}{y}\frac{\partial}{\partial y}\left(y\frac{\partial}{\partial y}\right)$ is a linear operator. Taking the following initial guesses

$$\theta_{10}(y,z) = -\left(\frac{y^2 - R^2}{4}\right)e^{\omega t}, \qquad \tilde{\sigma}_{10}(y,z) = -\left(\frac{y^2 - R^2}{4}\right)e^{\omega t}$$

where ω is a constant, define

$$\theta = \theta_o + k\,\theta_1 + o(k^2) \quad (12)$$






$$\tilde{\sigma} = \tilde{\sigma}_o + k\,\tilde{\sigma}_1 + o(k^2) \quad (13)$$

Putting Eqs. (12) and (13) in Eqs. (8) and (9) respectively, and taking k → 1, the temperature and concentration profiles are written as follows:

$$\theta(y,t) = -\beta\left(\frac{y^2-R^2}{4}\right) - (N_T+N_B)\,e^{2\omega t}\left(\frac{y^4-R^4}{64}\right) - \frac{\omega\,e^{\omega t}}{\alpha_1^2}\left[\frac{y^4-R^4}{64} - \frac{(y^2-R^2)R^2}{16}\right] \quad (14)$$

$$\tilde{\sigma}(y,t) = \frac{N_T}{N_B}\left(\frac{y^2-R^2}{4}\right)e^{\omega t} - \frac{\omega\,e^{\omega t}}{\alpha_2^2}\left[\frac{y^4-R^4}{64} - \frac{(y^2-R^2)R^2}{16}\right] \quad (15)$$
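As a sanity check, the wall and symmetry boundary conditions (θ = 0 at y = R, ∂θ/∂y = 0 at y = 0) can be verified numerically for a temperature profile of the form of Eq. (14). The Python sketch below uses purely illustrative parameter values, and the coefficients follow the reconstruction given here, so they should be checked against the published form:

```python
import math

def theta(y, t, R=1.0, beta=0.5, NT=0.3, NB=0.4, w=0.2, a1=1.0):
    """Temperature profile of the form of Eq. (14); all parameter values illustrative."""
    q2 = (y ** 2 - R ** 2) / 4.0
    q4 = (y ** 4 - R ** 4) / 64.0
    return (-beta * q2
            - (NT + NB) * math.exp(2 * w * t) * q4
            - (w * math.exp(w * t) / a1 ** 2) * (q4 - (y ** 2 - R ** 2) * R ** 2 / 16.0))
```

Every term vanishes at y = R, and the profile is even in y, so both boundary conditions of the non-dimensional problem hold by construction.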

Applying the Laplace transformation, the finite Hankel transformation and their inverses to Eq. (7), we get the expression for the fluid velocity. Writing, for brevity,

$$\Phi_2 = \frac{a^2-R^2}{4}\,\frac{a}{\lambda_n}J_1(\lambda_n a) - \frac{a^2}{2\lambda_n^2}J_2(\lambda_n a), \qquad \Phi_4 = \frac{a^4-R^4}{64}\,\frac{a}{\lambda_n}J_1(\lambda_n a) - \frac{3a^4}{64\lambda_n^2}J_2(\lambda_n a) + \frac{3a^3}{32\lambda_n^3}J_3(\lambda_n a)$$

we have

$$\begin{aligned}
u(y,t) = \frac{2}{a^2}\sum_{n=1}^{\infty}\frac{\lambda_n^2\,J_o(y\lambda_n)}{\left(h^2+\lambda_n^2\right)J_o^2(a\lambda_n)}\Bigg\{
&\frac{N}{\alpha^2+\psi\lambda_n^2}\left[\frac{A_o}{M^2+H^2+\lambda_n^2}\left(1-e^{-Bt}\right)+\frac{A_o+A_1}{B}\,e^{-Bt}\right]\frac{a}{\lambda_n}J_1(a\lambda_n)\\
&+\frac{A_1 B}{\left(\alpha^2+\psi\lambda_n^2\right)\left(1+B^2\right)}\left[\cos t+\frac{1}{B}\sin t-e^{-Bt}\right]\frac{a}{\lambda_n}J_1(a\lambda_n)\\
&-\frac{e^{-Bt}}{M^2+H^2+\lambda_n^2}\left[G_r N_T+G_r N_B+\frac{\omega}{\alpha_1^2}G_r+\frac{\omega}{\alpha_2^2}G_f\right]\Phi_4\\
&+\frac{e^{-Bt}}{M^2+H^2+\lambda_n^2}\left[\beta G_r+\frac{R^2\omega}{4\alpha_1^2}G_r+G_f+\frac{N_T}{N_B}\frac{R^2\omega}{4\alpha_2^2}G_f\right]\Phi_2\\
&+\frac{G_r}{\alpha^2+\psi\lambda_n^2}\Bigg[\left(-(N_T+N_B)\,\frac{e^{2\omega t}-e^{-Bt}}{B+2\omega}-\frac{\omega\left(e^{\omega t}-e^{-Bt}\right)}{\alpha_1^2\,(B+\omega)}\right)\Phi_4\\
&\qquad\qquad+\frac{R^2\omega\left(e^{\omega t}-e^{-Bt}\right)}{4\alpha_1^2\,(B+\omega)}\,\Phi_2+\frac{\beta\left(1-e^{-Bt}\right)}{B}\,\Phi_2\Bigg]\\
&+\frac{G_f}{\alpha^2+\psi\lambda_n^2}\Bigg[\frac{e^{\omega t}-e^{-Bt}}{B+\omega}\left(\frac{N_T}{N_B}+\frac{R^2\omega}{4\alpha_2^2}\right)\Phi_2-\frac{\omega}{\alpha_2^2}\,\frac{e^{\omega t}-e^{-Bt}}{B+\omega}\,\Phi_4\Bigg]\Bigg\} \quad (16)
\end{aligned}$$
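The finite Hankel inversion in Eq. (16) presupposes eigenvalues λₙ tied to the slip (Robin) boundary condition; the weight (h² + λₙ²)J₀²(aλₙ) is consistent with λₙ being the positive roots of λJ₁(λa) = hJ₀(λa). That defining equation is not reproduced in the text, so it is an assumption here. A self-contained Python sketch for computing such roots, using a series expansion of the Bessel functions and scan-and-bisect root finding:

```python
import math

def bessel_j(nu, x, terms=40):
    """Ascending series for integer-order J_nu(x); adequate for moderate |x|."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.factorial(k + nu)) * (x / 2) ** (2 * k + nu)
    return s

def robin_eigenvalues(a, h, count=3, step=0.01):
    """Positive roots of lambda*J1(lambda*a) - h*J0(lambda*a) = 0 (scan + bisection)."""
    f = lambda lam: lam * bessel_j(1, lam * a) - h * bessel_j(0, lam * a)
    roots, x = [], step
    while len(roots) < count:
        if f(x) * f(x + step) < 0:
            lo, hi = x, x + step
            for _ in range(60):          # bisect to machine-level precision
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) > 0:
                    lo = mid
                else:
                    hi = mid
            roots.append(0.5 * (lo + hi))
        x += step
    return roots

lams = robin_eigenvalues(a=1.0, h=0.5)
```

The values of a and h here are illustrative; in the paper they come from the stenosis geometry and the slip parameter.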

The acceleration of the fluid is obtained by differentiating Eq. (16) term by term:

$$\begin{aligned}
F(y,t) = \frac{\partial u}{\partial t} = \frac{2}{a^2}\sum_{n=1}^{\infty}\frac{\lambda_n^2\,J_o(y\lambda_n)}{\left(h^2+\lambda_n^2\right)J_o^2(a\lambda_n)}\Bigg\{
&\frac{N}{\alpha^2+\psi\lambda_n^2}\left[\frac{A_o B}{M^2+H^2+\lambda_n^2}-(A_o+A_1)\right]e^{-Bt}\,\frac{a}{\lambda_n}J_1(a\lambda_n)\\
&+\frac{A_1 B}{\left(\alpha^2+\psi\lambda_n^2\right)\left(1+B^2\right)}\left[-\sin t+\frac{1}{B}\cos t+B\,e^{-Bt}\right]\frac{a}{\lambda_n}J_1(a\lambda_n)\\
&+\frac{B\,e^{-Bt}}{M^2+H^2+\lambda_n^2}\left[G_r N_T+G_r N_B+\frac{\omega}{\alpha_1^2}G_r+\frac{\omega}{\alpha_2^2}G_f\right]\Phi_4\\
&-\frac{B\,e^{-Bt}}{M^2+H^2+\lambda_n^2}\left[\beta G_r+\frac{R^2\omega}{4\alpha_1^2}G_r+G_f+\frac{N_T}{N_B}\frac{R^2\omega}{4\alpha_2^2}G_f\right]\Phi_2\\
&+\frac{G_r}{\alpha^2+\psi\lambda_n^2}\Bigg[\left(-(N_T+N_B)\,\frac{2\omega e^{2\omega t}+B e^{-Bt}}{B+2\omega}-\frac{\omega\left(\omega e^{\omega t}+B e^{-Bt}\right)}{\alpha_1^2\,(B+\omega)}\right)\Phi_4\\
&\qquad\qquad+\frac{R^2\omega\left(\omega e^{\omega t}+B e^{-Bt}\right)}{4\alpha_1^2\,(B+\omega)}\,\Phi_2+\beta\,e^{-Bt}\,\Phi_2\Bigg]\\
&+\frac{G_f}{\alpha^2+\psi\lambda_n^2}\Bigg[\frac{\omega e^{\omega t}+B e^{-Bt}}{B+\omega}\left(\frac{N_T}{N_B}+\frac{R^2\omega}{4\alpha_2^2}\right)\Phi_2-\frac{\omega}{\alpha_2^2}\,\frac{\omega e^{\omega t}+B e^{-Bt}}{B+\omega}\,\Phi_4\Bigg]\Bigg\} \quad (17)
\end{aligned}$$

where

$$\Phi_2 = \frac{a^2-R^2}{4}\,\frac{a}{\lambda_n}J_1(\lambda_n a) - \frac{a^2}{2\lambda_n^2}J_2(\lambda_n a), \qquad \Phi_4 = \frac{a^4-R^4}{64}\,\frac{a}{\lambda_n}J_1(\lambda_n a) - \frac{3a^4}{64\lambda_n^2}J_2(\lambda_n a) + \frac{3a^3}{32\lambda_n^3}J_3(\lambda_n a).$$

4 Graphical Results and Discussions

In the present analysis, numerical experiments have been conducted for the axial velocity, wall shear stress and volumetric flow rate. The behaviour of the velocity profile (u), concentration (σ) and temperature (θ) along the axial distance z has been examined for different values of the Hartmann number (M), slip parameter (h) and nanoparticle concentration (φ). The flow characteristics of blood are investigated by analyzing the values of the flow parameters at a particular site in the cardiovascular system. This section presents the results for concentration, temperature, axial velocity and shear stress graphically. Figure 2 shows the variation of lipid concentration along the axis of the stenosed tube: the lipid concentration grows as the nanoparticle concentration is raised. Figure 3 shows the variation of temperature along the axis of the constricted tube; the temperature rises as the nanoparticle concentration becomes greater. Figures 4, 5 and 6 show the variation of the axial velocity along the axis of the tube. In Fig. 4, the Hartmann number decreases the axial velocity, and the effect is much more pronounced when the Hartmann number increases from 0 to 2; when it increases from 2 to 8, the further decrease in axial velocity is comparatively small. In Figs. 5 and 6, as the slip parameter and the nanoparticle concentration increase, the fluid velocity at first increases, but the rate of increase of the axial velocity diminishes for higher values of the slip parameter and nanoparticle concentration.

5 Conclusion

In this paper, the effects of slip velocity and an externally applied magnetic field on pulsatile blood flow in a constricted porous artery are evaluated. The present model provides the scope to estimate the effect of the above-mentioned parameters on different flow characteristics and to assess the impact of the various


Fig. 2 Variation of concentration

Fig. 3 Variation of temperature

parameters for a better understanding of the circulation of blood in the human body. The main emphasis of this study is the effect of lipid concentration, slip velocity and nanoparticle concentration at permeable walls. The conclusions of the investigation are as follows: • The LDL effect intensifies with an increase in the nanoparticle concentration, so there are enhanced chances of stenosis developing along the wall of an artery. • Blood pressure increases due to the rise in temperature caused by increasing nanoparticle concentration.


Fig. 4 Variation of axial velocity due to magnetic field

Fig. 5 Variation of axial velocity due to slip parameter

• The impact of a low-strength magnetic field on the blood flow is desirable, but a high-intensity magnetic field decreases the axial velocity, which is dangerous for health. This study provides a general form for the velocity of blood through the stenosed part of an artery.


Fig. 6 Variation of axial velocity due to concentration of nano particles

References
1. Beavers GS, Joseph DD (1967) Boundary conditions at a naturally permeable wall. J Fluid Mech 30:197–207
2. Saffman PG (1971) On the boundary conditions at the surface of a porous medium. Stud Appl Math 50:93–101
3. McDonald DA (1979) On steady blood flow through modelled vascular stenosis. J Biomech 12:303–306
4. Sanyal DC, Maji NK (1999) Unsteady blood flow through an indented tube with atherosclerosis. Indian J Pure Appl Math 30:951–959
5. Haldar K, Ghosh SN (1994) Effect of a magnetic field on blood flow through an indented tube in the presence of erythrocytes. Indian J Pure Appl Math 25:345–352
6. Varshney G, Katiyar VK, Kumar S (2010) Effect of magnetic field on the blood flow in artery having multiple stenosis: a numerical study. Int J Eng Sci Tech 2:67–82

Single-Phase Bi-Directional AC/DC Converters for Fast DC Bus Voltage Controller R. Karpaga Priya, S. Kavitha, M. Malathi, P. Sinthia, and K. Suresh Kumar

Abstract A new approach to controlling the DC bus voltage of single-phase bi-directional AC/DC converters is presented. The suggested controller offers a stable and trustworthy closed-loop control system while significantly enhancing the transient performance of the DC bus voltage control loop. In the proposed technique, a specialized adaptive filter is used to explicitly estimate the DC component of the bus voltage. The recommended filter structure gives a very reliable and trustworthy estimate of the DC component of the DC bus. With this DC-extraction technique, a single-phase AC/DC converter subject to a double-frequency ripple can accurately estimate the DC value. Simulation and experimental sections demonstrate the anticipated closed-loop control performance. Keywords Adaptive filter · Bi-directional converter · DC-extraction · Double-frequency ripple · Grid-connected converter · Power factor correction · State observer

R. Karpaga Priya (B) · S. Kavitha Department of EEE, Saveetha Engineering College, Chennai, Tamilnadu 602105, India e-mail: [email protected] M. Malathi Department of ECE, Rajalakshmi Institute of Technology, Chennai, Tamilnadu 602124, India P. Sinthia Department of Biomedical Engineering, Saveetha Engineering College, Chennai, Tamilnadu 602105, India K. Suresh Kumar Department of IT, Saveetha Engineering College, Chennai, Tamilnadu 602105, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_4


1 Introduction

By placing a thyristor between the load and an AC source of constant voltage, an AC voltage regulator can change the RMS voltage supplied to the load. By adjusting the firing angle of the thyristor, the effective (RMS) value of the AC voltage applied to the load circuit can be varied. An AC voltage controller, or thyristor power converter, is a device that converts a fixed-voltage, fixed-frequency AC source into a variable-voltage output [1]. By modifying the trigger angle, the RMS values of the AC output voltage and current delivered to the load are regulated. Depending on the type of AC supply at the input, AC voltage controllers fall into two categories: single-phase and three-phase AC controllers. In our country, single-phase AC regulators operate from the 230 V RMS, 50 Hz single-phase mains, while three-phase AC controllers are powered from the 400 V RMS, 50 Hz three-phase supply. Each type of controller can be further classified into two groups: unidirectional or half-wave AC controllers and bi-directional or full-wave AC controllers [2]. In summary, there are several kinds of AC voltage controllers: single-phase half-wave (unidirectional) controllers, single-phase full-wave (bi-directional) controllers, three-phase half-wave (unidirectional) controllers and three-phase full-wave (bi-directional) controllers [3]. Applications of this technique include transformer tap changing (on-load tap changing), induction heating, illumination control in AC systems, industrial and residential heating, speed control of single-phase and multi-phase induction motors, and AC magnet drives.

2 Objectives of the Study

2.1 Voltage Control Techniques

Phase control and on-off control are the two forms of thyristor control commonly used to regulate alternating current in practical applications. These are the two methods for regulating the AC output voltage [4]. In on-off control, a thyristor is used as a switch to connect the AC supply to the load for a number of cycles and then disconnect it for a number of cycles [5]. The thyristor thus functions as a high-speed alternating-current switch.


2.2 Phase Control Technique

In phase control, the load circuit is connected to the AC supply for a portion of each input cycle using a thyristor acting as a switch. That is, for a portion of each input cycle, the AC line voltage is chopped by the thyristor. Because the thyristor [6] switch turns on for a portion of each input half cycle and turns off for the remainder, disconnecting the load from the supply, the input source voltage appears across the load only during the conduction interval. The RMS output voltage across the load can therefore be adjusted by varying the trigger (delay) angle. The value of ωt at which the thyristor turns on and the load current begins to flow is known as the trigger or delay angle α. Thyristor AC voltage regulators use AC line commutation (AC phase commutation) [7]. The thyristor in an AC regulator is line (phase) commutated because the power input is AC: during the negative half cycle, the current through the conducting thyristor drops to zero when the AC input voltage reverses and goes negative. As a result, the thyristor turns off automatically when the device current reaches zero [8].
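For a full-wave phase-controlled regulator feeding a resistive load, the standard result relating the RMS output to the delay angle α is V_rms = V_s·sqrt((π − α)/π + sin 2α/(2π)). The Python sketch below cross-checks this closed form against direct numerical integration of the chopped sine (230 V is the mains value quoted above; the comparison itself is independent of that choice):

```python
import math

def vrms_full_wave(vs_rms, alpha):
    """Closed-form RMS output of a full-wave phase-controlled AC regulator, resistive load."""
    return vs_rms * math.sqrt((math.pi - alpha) / math.pi
                              + math.sin(2 * alpha) / (2 * math.pi))

def vrms_numeric(vs_rms, alpha, steps=100000):
    """Direct numerical RMS of the sine chopped at the delay angle alpha."""
    vm = vs_rms * math.sqrt(2.0)
    total = 0.0
    for i in range(steps):
        wt = math.pi * i / steps   # one half cycle suffices by symmetry
        if wt >= alpha:
            total += (vm * math.sin(wt)) ** 2
    return math.sqrt(total / steps)
```

At α = 0 the full supply voltage is delivered; increasing α monotonically reduces the RMS output.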

2.3 On-off Control Technique

The fundamental principle of the on-off control method is explained with reference to the single-phase full-wave AC voltage controller circuit shown below. The thyristor switches T1 and T2 are turned on by applying suitable gate trigger pulses, connecting the input AC supply to the load for 'n' input cycles during the interval t_ON [4]. The thyristor switches T1 and T2 are turned off by blocking the gate trigger pulses for 'm' input cycles during the interval t_OFF. The ON interval t_ON of an AC controller typically contains an integer number of input cycles.
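With n on-cycles followed by m off-cycles, integral-cycle control yields a duty ratio k = n/(n + m) and an RMS load voltage of V_s·√k, since whole cycles are either passed or blocked. A minimal sketch:

```python
import math

def vrms_on_off(vs_rms, n_on, m_off):
    """RMS load voltage under integral-cycle (on-off) control: Vs * sqrt(duty ratio)."""
    duty = n_on / (n_on + m_off)
    return vs_rms * math.sqrt(duty)
```

For example, one cycle on and three cycles off gives a duty ratio of 0.25 and halves the RMS voltage.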

3 Existing System

Miao and Yuan Bing have proposed a circuit that partially resists steep currents to solve the problem of low-voltage-drop operation under heavy load. They achieved an adjustable slope current and slow operation in the clamped state, which prolongs battery life and suits mobile applications [9]. Yahaya et al. demonstrated a synchronous rectifier buck converter capable of ZVS operation. In addition, the conventional drain voltage in the ON state of the switch is reduced. Their comparative evaluations of the switch drain current and the load current show a slight difference in amplitude [10]. Hamid Daneshpajooh and colleagues set out to find the best operating points for a soft-switched half-bridge DC-DC converter. In their work, the converter's soft-switching range and efficiency are greatly improved


by employing the duty cycle as an important control parameter. Jian-Min Wang et al. proposed a control technique that enables a synchronously rectified buck converter to achieve zero-voltage switching under light-load conditions. The synchronous rectifier control technique is applicable to low-voltage outputs because replacing the output rectifier diode with a MOSFET minimizes conduction loss and increases the efficiency of the whole circuit. Jae-Hong Kim et al. investigated how to deal with the voltage imbalance problem on the secondary side of a dual half-bridge converter. The decoupling terms and their compensating controls were derived from the converter performance equations. Mor Mordechai Peretz and Shmuel Ben-Yaakov developed a time-domain design method for digital control of pulse-width-modulated DC-DC converters [11]. This approach exploits the fact that the closed-loop response of a digital control system is largely determined by the first few samples of the compensator; the concept is used to fit a digital PID template to the desired response [12]. They also investigated the realistic closed-loop performance achievable with the PID template controller and the stability limits of the time-domain controller. Lin and Hou proposed the analysis, design and implementation of a DC-DC converter with two series-connected half-bridge converters and no output inductor [13]. On the high-voltage side, the two half-bridge converters are driven with asymmetric pulse-width modulation to achieve zero-voltage switching of all switching devices. The voltage stress on each switch remains clamped to half of the input voltage; therefore, low-voltage-stress active switches can be used in high-input-voltage applications. On the low-voltage side, the secondaries of [14] the two half-bridge converters are connected in parallel and share the load current. The two transformers are connected in series on the primary side of the half-bridge converters. They concluded that either transformer could act as an inductor, so no output inductor is required for the half-bridge converters [15].

4 Proposed Method

Bi-directional single-phase AC/DC converters have found application in vehicle-to-grid (V2G) and grid-to-vehicle (G2V) systems. An electric vehicle can serve distributed generation (DG) as a power source with a large energy storage capacity. This energy storage can effectively support the integration of DG's renewable and intermittent power generation. In addition to energy storage, V2G/G2V can offer load shifting, harmonic filtering, reactive power compensation (VAR compensation), and various other auxiliary services. Given these characteristics, electric vehicles will inevitably become a component of the smart grid. The essential element that gives electric vehicles their V2G/G2V capability is a bi-directional AC/DC converter. Besides efficiently controlling the power flow between the grid and the traction battery, a bi-directional AC/DC converter can provide several auxiliary services. The bi-directional AC/DC converter is shown in the block diagram as the connection between the DG and the traction battery. AC/DC


converter control is crucial for EV integration with DG. The control system is responsible for power flow regulation as well as for other DG support requirements. This work concentrates on the development and implementation of control mechanisms for V2G/G2V-capable bi-directional AC/DC converters.

To match residential power distribution, commercially available automotive AC/DC chargers are normally single-phase. Owing to the double-frequency power ripple caused by single-phase AC/DC power conversion, the control of single-phase converters is typically quite challenging. To prevent this low-frequency ripple from entering the control system, designers employ very sluggish control methods. Controlling the intermediate circuit (DC link) voltage, which is used to store energy between the AC and DC sides, is a particularly difficult problem. The DC bus capacitor provides the voltage decoupling between the two stages and also serves as an energy storage element. DC bus capacitors are used in single-phase energy regulation systems to source the ripple currents and decouple the power ripple. The DC link voltage exhibits a considerable double-frequency ripple because of this power ripple. In closed-loop control systems, low-bandwidth controllers are often employed, acting as filters that block low-frequency ripple. Low-bandwidth controllers, however, respond slowly and have poor transient performance. Steady-state efficiency is also compromised because of the controller's low gain. Due to this implementation impact, the AC/DC converter must be over-dimensioned to ensure high reliability against transient overshoot and undershoot; the double-frequency ripple thus forces the closed loop to be slowed down and leaves it only marginally stable. Efficient and trustworthy control of EV power conditioning systems is essential for the successful integration of EVs into DGs.

Capacitors serve as energy storage devices and give the energy regulation system the ability to adjust the instantaneous energy. Smooth regulation of the energy flow between the vehicles and the DG is necessary for the integration of the vehicles into the DG. The figure shows the active rectifier control for a single-phase AC/DC converter, with the control structure's inner current regulator and outer voltage regulator. The converter's intermediate circuit voltage is controlled by the outer voltage control loop, while the input current is shaped by the inner current control loop. In essence, the voltage regulator adjusts the amplitude of the current drawn from the mains supply so that the power balance between the first and second stages is met or, equivalently, so that the intermediate circuit voltage is held constant. Many commercial bi-directional active rectifiers and power factor correction devices employ this well-known control scheme. This controller is frequently implemented with PI regulators, since the voltage controller handles DC signals. The primary issue with this control strategy is that a sizeable portion of the double-frequency ripple appears on the DC bus voltage because of the power ripple inherent in single-phase power conditioning units. The power flow creates this double-frequency ripple [16], which is reflected on the DC bus voltage and superimposed on its low-frequency (DC) component.
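The double-frequency ripple discussed above follows directly from the instantaneous single-phase power p(t) = v(t)·i(t): at unity power factor, p(t) = V_rms·I_rms·(1 − cos 2ωt). The sketch below (illustrative 230 V / 10 A, 50 Hz values) confirms numerically that the pulsating component sits at exactly twice the line frequency:

```python
import math

vrms, irms = 230.0, 10.0                        # illustrative mains values
N = 2000                                        # samples over one line cycle
# instantaneous power v(t)*i(t) at unity power factor
p = [2.0 * vrms * irms * math.sin(2 * math.pi * k / N) ** 2 for k in range(N)]
avg = sum(p) / N                                # average power = Vrms * Irms

def mag(h):
    """Magnitude of the h-th harmonic of the line frequency in p(t)."""
    re = sum(p[k] * math.cos(2 * math.pi * h * k / N) for k in range(N)) * 2 / N
    im = sum(p[k] * math.sin(2 * math.pi * h * k / N) for k in range(N)) * 2 / N
    return math.hypot(re, im)
```

The average power is 2300 W, and the entire pulsating part appears at the second harmonic of the line frequency, which is exactly the component the DC bus capacitor must absorb.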


4.1 Operation

A, B and C make up the full-bridge inverter's three legs. Each leg consists of two switches with their anti-parallel diodes. The two switches on each leg are driven in complement: when one of them is off, the other is on. In practice the two switches are never switched at exactly the same instant; they are both kept off for a short interval, called the blanking period, which is negligible compared with the on/off times of the switches, to prevent the DC input from being short-circuited. Provided the switches in each leg do not turn off simultaneously, the output current io in Fig. 1 flows continuously, and the output voltage is dictated solely by the status of the switches. For illustration, consider leg A. The output voltage vAo, with respect to the midpoint of the DC source Vd, is determined by the switch states: when TA+ is on, the output current flows through TA+ if io is positive, or through DA if io is negative. To generate the output voltages of leg B and leg C, the same triangular waveform is compared with the sine waves vcontrol,b and vcontrol,c, lagging vcontrol,a by 120° and 240° respectively, as shown in Fig. 4. The waveform of the voltage at point A with reference to the negative DC bus of the three-phase inverter is shown in Fig. 4 as vAN. The waveforms of the voltages at points B and C with reference to the negative DC bus are the same as vAN, but with phase lags of 120° and 240° respectively. The same average DC component is present in the output voltages vAN and vBN. In three-phase inverters, the harmonics of the line-to-line voltages are of more concern than the harmonics in the output of any one leg. The harmonics in, for example, vAN in Fig. 4 are the same as the harmonics in vAo in Fig. 2: only odd harmonics exist, as sidebands centered around mf and its multiples, provided mf is odd. Considering the harmonic at mf (the same applies to its odd multiples), the phase difference between the mf harmonic in vAN and in vBN is (120 mf)°. This phase difference is equivalent to zero (a multiple of 360°) if mf is odd and a multiple of 3. As a consequence, the harmonic at mf is suppressed in the line-to-line voltage vAB [17]. The same argument applies to the suppression of harmonics at the odd multiples of mf when mf is chosen to be an odd multiple of 3.
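The cancellation argument above can be reproduced numerically: with a common triangular carrier and mf an odd multiple of 3, the carrier-frequency harmonic present in each leg voltage cancels in the line-to-line voltage. A natural-sampled SPWM sketch (mf = 15 and ma = 0.8 are illustrative choices; rails are 0 and 1, i.e. Vd = 1):

```python
import math

mf, ma, N = 15, 0.8, 30000   # carrier ratio (odd multiple of 3), modulation index, samples

def tri(x):
    """Unit triangular carrier: period 1, swings -1..+1."""
    x = x % 1.0
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

vAN, vBN = [], []
for k in range(N):                       # one fundamental period
    t = k / N
    c = tri(mf * t)                      # common carrier for all legs
    vAN.append(1.0 if ma * math.sin(2 * math.pi * t) > c else 0.0)
    vBN.append(1.0 if ma * math.sin(2 * math.pi * t - 2 * math.pi / 3) > c else 0.0)
vAB = [p - q for p, q in zip(vAN, vBN)]  # line-to-line voltage

def mag(sig, h):
    """Magnitude of the h-th harmonic of the fundamental."""
    re = sum(s * math.cos(2 * math.pi * h * k / N) for k, s in enumerate(sig)) * 2 / N
    im = sum(s * math.sin(2 * math.pi * h * k / N) for k, s in enumerate(sig)) * 2 / N
    return math.hypot(re, im)
```

Each leg carries a substantial harmonic at mf, while in vAB that harmonic is suppressed, as the phase-displacement argument predicts.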

Fig. 1 Single-phase full-wave AC voltage controller circuit

Fig. 2 Block diagram (generation and distribution grid, active rectifier, dual active bridge, traction circuit)

Linear Modulation (ma ≤ 1.0)

The fundamental-frequency component of the output voltage varies linearly with the amplitude modulation ratio ma in the linear region (ma ≤ 1.0); this region gives the maximum linear utilization of the fundamental frequency in a single inverter leg.

Over-modulation (ma ≥ 1.0)

If the peak of the control voltage exceeds the peak of the triangular waveform, ma becomes ≥ 1.0 and some of the switchings no longer take place near the peak of the waveform vAo; the waveform becomes flat-topped. For sufficiently large values of ma, vAo becomes a square wave. In the over-modulation region, compared with the region with ma ≤ 1.0, more sideband harmonics appear centered around the harmonic frequencies mf and its multiples. In this mode, the line-to-line voltage normalized with respect to the DC supply voltage does not rise linearly with ma.
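The linear and over-modulation behaviour can be checked by measuring the fundamental of a simulated SPWM leg: in the linear region the fundamental equals ma·Vd/2 (here Vd = 1), while for ma > 1 it saturates below the square-wave limit of (4/π)·Vd/2 ≈ 0.637. A sketch with illustrative mf and sample counts:

```python
import math

def fundamental_of_spwm(ma, mf=21, N=40000):
    """Fundamental amplitude of one natural-sampled SPWM leg (rails 0 and 1)."""
    def tri(x):  # unit triangular carrier, period 1, swings -1..+1
        x = x % 1.0
        return 4 * x - 1 if x < 0.5 else 3 - 4 * x
    v = [1.0 if ma * math.sin(2 * math.pi * k / N) > tri(mf * k / N) else 0.0
         for k in range(N)]
    re = sum(s * math.cos(2 * math.pi * k / N) for k, s in enumerate(v)) * 2 / N
    im = sum(s * math.sin(2 * math.pi * k / N) for k, s in enumerate(v)) * 2 / N
    return math.hypot(re, im)
```

At ma = 0.8 the measured fundamental is 0.4 (the linear law); at ma = 2.0 it lies between the linear prediction and the square-wave limit, showing the saturation described above.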

4.2 Control System

Bi-directional AC/DC converters have a rather different control scheme from unidirectional AC/DC converters: the control system must be capable of handling several auxiliary services as well as bi-directional power flow. When the power flows from grid to vehicle (G2V), the active rectifier control must manage the DC link voltage and, as a result, the input current drawn from the mains. When the power flows from vehicle to grid (V2G), the control must inject the proper amount of power into the grid, depending on the active and reactive power requirements of the DG. As a result, the control loop architectures for the two power flow directions are quite dissimilar (Fig. 3).

The block diagram of the single-phase bi-directional AC/DC converter shows that there are two levels in the control system: a low level and a high level. The high-level system is a supervisory control system that provides the reference values for active power, reactive power and DC bus voltage. The reference value for active power is generated depending on the direction of power flow and Pref. When charging the battery, power flows from the mains to the battery (G2V); the battery charging curve, together with the battery voltage VBAT, determines the active power reference value. On the other hand, when the power flow is from battery to grid (V2G), the power value is established by the DG. The DG demand determines the reference point for reactive power, Qref. Reference points for active and reactive power are thus established by the supervisory controller depending on the DG requirement as well as the battery's state. Depending


Fig. 3 Circuit diagram

on the voltage level, load curve and DG demand, the supervisory control establishes the set-points for active and reactive power in V2G operation. High-level power management is primarily handled via supervisory control. To augment intermittent renewable energy sources, the vehicle battery can power the system during the day, when demand is highest, and be charged at night, when energy is cheap and plentiful. Furthermore, this power control is strictly based on the battery's state, and the supervisory control prevents the battery from discharging completely. The reference value of the intermediate circuit voltage Vref is determined by the "intermediate circuit optimizer": based on power, grid voltage and battery voltage, the DC bus optimizer determines the optimum DC bus voltage. Reliable control of the DC bus voltage is essential for the successful integration of vehicles into DG and smart grids.
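The outer voltage loop described above can be sketched as a discrete PI regulator that commands the line-current amplitude so as to hold the DC bus at its reference. The plant below is a deliberately simplified first-order capacitor/load model, and all gains and component values are illustrative, not taken from the paper:

```python
class PIController:
    """Discrete PI regulator with output/integrator clamping (illustrative gains)."""
    def __init__(self, kp, ki, dt, limit):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def update(self, ref, meas):
        err = ref - meas
        self.integral += self.ki * err * self.dt
        # simple anti-windup: clamp the integrator to the output limit
        self.integral = max(-self.limit, min(self.limit, self.integral))
        out = self.kp * err + self.integral
        return max(-self.limit, min(self.limit, out))

# toy plant: DC bus capacitor charged by the commanded current, loaded by a resistor
C, dt, vref, rload = 2e-3, 1e-4, 400.0, 200.0
pi = PIController(kp=0.5, ki=40.0, dt=dt, limit=20.0)
v = 350.0
for _ in range(20000):                  # 2 s of simulated time
    i_amp = pi.update(vref, v)          # commanded current amplitude
    v += dt / C * (i_amp - v / rload)   # C dv/dt = i_in - i_load
```

The integrator settles at the current needed to supply the load (here 2 A at 400 V into 200 Ω), which is the power-balance interpretation of the voltage loop given above.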

5 Results and Discussion

To investigate the practicability of the proposed DC bus voltage regulator and evaluate its closed-loop performance, an experimental 3.3 kW prototype was used to implement the regulator. The control system is executed with the help of a Field Programmable Gate Array (FPGA); FPGAs deliver a very fast and consistent platform for digitally implementing the planned control system. The figure shows a block diagram of the AC/DC converter control system implemented in the FPGA. As the diagram shows, four feedback signals are


Fig. 4 The transient performance of the closed-loop control system

used. The grid current ig and active rectifier current irec stay expended in the stately response K.X to aggressively damp the resonance produced by the LCL filter. The line voltage vg is recycled for harmonization (e.g generating a line current reference signal and obtaining the line frequency ω). The quarter signal of the DC bus voltage vB is used by the adaptive observer to excerpt the DC value VDC of the voltage loop. An application of adaptive observers in digital form. Three integrators make up the suggested adaptive observer. The system is converted into a discrete observer via a well-known bilinear transformation. Compared to Back-Euler and Forward-Euler, bilinear transformations can offer more precise transformations. Therefore, the frequency information provided by the PLL is required by the adaptive observer. A notch filter that is adaptable was used to implement synchronization (ANF). ANF can track grid frequencies accurately and provide frequency components for planned control systems. To keep it self-explanatory, this document contains a concise depiction of ANF-based PLLs. Convertors’ output waveform during steadystate G2V operation. The figure also shows the active rectifier inductance ripple and line inductance ripple, and the rectifier voltage vrec . This figure illustrates the attenuation of switching harmonics by the LCL filter. The transient performance for G2V set-up in a closed-loop control system is shown in Fig. 4, while the transient performance for V2G operation is shown in Fig. 5. Figure attests to the suggested closed-loop control system’s quick and reliable performance during transients. The practical VAR compensating waveform for G2V and V2G operations are shown in the Fig. 6 as a last step. These numbers demonstrate that when directed by DG, the bi-directional AC/DC converter can excellent quality out VAR correction. The converter’s steady-state waveform for V2G operation are shown in the image. 
This demonstrates that, in contrast to a traditional control system, a closed-loop control arrangement using the suggested voltage control loop can supply the converter with highly consistent steady-state power. The

42

R. Karpaga Priya et al.

Fig. 5 Steady-state waveforms

Fig. 6 Comparison waveform

experimental data of the converter were acquired to contrast the transient behavior of the converter under the suggested control method with that under the traditional control system. Relative to the proposed voltage-loop control scheme, the conventional scheme exhibits significant overshoot/undershoot during transients. The

Single-Phase Bi-Directional AC/DC Converters for Fast DC Bus …

43

terminal voltage of the MOSFET can be kept very close to the DC link voltage, which means that a 500 V MOSFET can be used for the second-stage converter. With conventional control techniques, the second-stage converter is often implemented with 600 V or 650 V MOSFETs; with the suggested control system, however, a 500 V MOSFET is sufficient. Because of the high overshoot/undershoot during transients, a conventional control system also demands higher voltage ratings for the DC bus capacitors.

6 Conclusion and Future Enhancement

A new DC bus voltage control approach for bi-directional AC/DC converters is suggested in this study, enabling very fast and reliable DC bus voltage regulation. For V2G/G2V applications the converter needs to manage power flow in both directions, and bi-directional AC/DC converters are well suited to this. The suggested approach for estimating the DC value of the DC bus voltage provides the voltage control loop with a quick and precise estimate. As a result, the converter's steady-state and transient performance outperforms that of the slow voltage regulators of the past. Additionally, the closed-loop scheme smoothly handles both V2G and G2V energy commands. Finally, the proposed control structure can carry out auxiliary tasks such as VAR compensation. Results from simulations and experiments support the suggested control system's good performance.


Padam Persona—Personalizing Text to Image Using Artificial Intelligence

N. Velmurugan, P. Sanjeev, S. Vinith, and Mohammed Suhaib

Abstract Personalized text-to-image models give extra creative freedom to people/artists by converting natural-language concepts into images that depict an idea. However, it is not obvious how this freedom can be used to create personalized and unique concept images, change their appearance, or incorporate them into new characters and scenes. Input from the user is used to generate a painting or portrait based on the user's likeness, or to generate a new product concept, using language-based models. We suggest a straightforward approach that facilitates this kind of imaginative independence. Using only a few training images of user-supplied concepts/prompts (e.g., objects or patterns of text), this AI model represents them with new "images" by converting the text input. The input can also be provided in regional languages such as Hindi, Tamil, Telugu, etc. These AI-generated images can be used for several artistic purposes, which intuitively guide individual creativity. In particular, we find that only one word (text input) is enough to accurately express a wide range of thoughts. We evaluate our approach/system against a broad range of foundational ideas and demonstrate that it provides more precise descriptions of concepts/ideas in a wide range of contexts and activities via the use of unique aesthetic imagery.

Keywords AI · Text to Image · Aesthetic · Personalized · Portrait · Uncopyrightable

N. Velmurugan (B) · P. Sanjeev · S. Vinith · M. Suhaib
Department of IT, Saveetha Engineering College, Chennai, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_5


1 Introduction

1.1 Objective

The main objective of building a TTI (Text-to-Image) software is that it helps us create the image we want to produce based on a description in a common language as well as in our regional language (Tamil). An AI picture generator creates a unique image based on the given text. The produced image is un-copyrightable, since a user gets different images when entering the same text many times. The AI creates a portrait image with an aesthetic look with the help of the DALL-E algorithm. Art is a skill of producing things such as paintings and designs, which everyone can imagine but not everyone can produce. AI helps the user produce fascinating images of their thoughts with two or more words mixed. This software will bring a drastic change to the art and animation industry. Artists have different thoughts while painting and designing; they can share a thought with the AI, which produces an image of it. For example, given "Cat fighting with lion and tiger", the AI will create an image from the mixed text and display it to the artist, who will get an idea by looking at the image.

1.2 Project Description

In this paper, the system aims to achieve creative freedom by generating personalized images from text inputs. The images generated by the AI model are aesthetic, unique, and not the generic images available on the internet. The purpose of a text-to-image model is to read mixed text and display images based on the text input. Artists/creators can use this to gain inspiration from fresh, creative AI-generated images and incorporate them into their own artistic work. They just have to give the words that come to mind as text input, and personalized images matching their prompt are generated within seconds. Several different images are generated by the AI model, and the user can select any concept they like from the options available. The user gets different images even when the same words are given. These words can be given as input not only in mainstream languages such as English and Spanish, but also in regional languages such as Hindi, Tamil, and Telugu. The images produced by the AI are unique, un-copyrightable, and have an aesthetic look.


2 System Analysis

2.1 Existing System

The end objective of personalized text-to-image generation is to generate artistic, one-of-a-kind visual representations of the user's thoughts and feelings based on the user's own natural-language text inputs. To this end, the system uses "textual prompt inversions" in the context of generative AI models: it finds unique words conveying complex conceptions that may stand in for both overarching themes and concrete imagery in artworks within a cluster of text prompts. The current technology facilitates the generation of random pictures based on the supplied text. The given text is always in a mainstream language such as English or Spanish. The AI model uses the DALL-E algorithm for the conversion of text to image. Since only major languages are supported, users who cannot read or write English find the system difficult to operate. Drawbacks: Users searching for personalized/artistic/aesthetic images related to regional/native subjects cannot convey their ideas, because there may be no words in common languages like English that translate to the exact meaning.

2.2 Proposed System

The existing system can only create images from text inputs given in a major language like English. The proposed system deploys multiple languages (English and regional languages) in a single software package, which helps users who do not understand English to type the required text input for their idea. At the initial stage it will convert a few words in regional languages from text to image.
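One simple way to route a prompt between the English and regional-language pipelines is to detect the script from standard Unicode block ranges. The sketch below is illustrative only (the routing logic is our own, not the system's actual implementation):

```python
# Rough script detection for routing prompts to a language-specific
# text-to-image pipeline.  Ranges are the standard Unicode blocks.
SCRIPT_RANGES = {
    "tamil": (0x0B80, 0x0BFF),
    "hindi": (0x0900, 0x097F),   # Devanagari block
    "telugu": (0x0C00, 0x0C7F),
}

def detect_language(text):
    """Return the first matching script for any character, else English."""
    for ch in text:
        for lang, (lo, hi) in SCRIPT_RANGES.items():
            if lo <= ord(ch) <= hi:
                return lang
    return "english"
```

A detected language code could then be used to pick the right tokenizer or translation step before the prompt reaches the image model.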

2.3 Technologies Used

Artificial Intelligence

The term "Artificial Intelligence" refers to the ability of technology, mostly computers, to mimic human intelligence. One branch of the field has the declared mission of developing helpful AI for the benefit of all people. Here, an application program reads natural language and completes tasks for the user; the tasks are performed by the AI with the help of the DALL-E algorithm. The AI system generates a portrait image from a natural-language description: when text is given as input, the AI reads it and converts it into a realistic image.


3 Related Works

One work provides a simple method for converting text to images using a transformer that can treat separate text and picture datasets as a single container of data. In a zero-shot evaluation, this method can compete with methods trained on more particular concepts or prompts, provided there is enough data and scale [1]. Related work has shown the efficacy of DALL-E, a multi-dataset transformational language model, and its variations in generating images from text using a simple architecture and a single training goal, supported by massive amounts of data and compute; that study investigates in depth how such text-to-image generative transformers reason and what social biases they exhibit [2]. Similar to the aforementioned studies, a third begins by breaking an input text down into its component words. The datasets containing the images used as "embeddings" are then supplied to the AI model, with each word treated as a separate input. It therefore seeks novel embedding datasets that can adequately represent novel, domain-specific concept images [3].

4 System Design

4.1 Architecture Diagram

In this architecture, the user gives the text input and the UI feeds it to the AI model. The processing in the AI model is done using the DALL-E algorithm, where the text is recognized and image sets are mapped. The trained model generates images as per the user's request/prompt and the final image is processed. The generated image is displayed on the UI (Fig. 1).

4.2 System Specification

Hardware Requirements
i3 Processor Based Computer or higher
Memory: 1 GB
Hard Drive: 50 GB
Monitor
Internet Connection

Software Requirements
Windows 7 or higher
Visual Studio
Cloud Server


Fig. 1 System architecture

Google Chrome Browser

5 System Implementation

5.1 List of Modules

UI & User Input
AI Model
Text to Image Generation
Output Image

5.2 Module Description

UI & User Input

The UI (User Interface) used in this personalized text-to-image generation app prompts the end user to give an input in English or Tamil and click the generate button. The input indicates what type of images are to be generated. The user provides the input in the given text box, and it is fed into the AI model for further processing. From the user's prompt, the app generates an image of their choice.


AI Model

The AI model uses DALL-E, a simple algorithm that can be used to select the best model for a given dataset. It can also be modified to use a different criterion based on the inputs given. This model maps the input text to the required image data that needs to be generated. The algorithm uses the following steps:
• Find the optimal model using the criterion function.
• Select the feature subset that maximizes the accuracy of this model, using a criterion that depends on the features chosen.
• Return the subset selected by this algorithm as the final model (if no other model satisfies the criteria).

Text to Image Generation

After the AI model has selected the image dataset that matches the user input, the text-to-image generation process starts. Here the image is generated from the prompts given by the user: the keywords are detected first, and then accurate images are generated with the dataset as reference.

Output Image

The output images are generated after going through processes such as text-to-image conversion; the AI model, which uses the DALL-E algorithm, generates personalized images based on the user input. These images are unique and creative, and there are several different variants of the same concept/prompt that the user gave. The image results are displayed on the UI.
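The keyword-detection step above can be sketched as a simple stop-word filter. This is an illustrative stand-in only: the real model learns this mapping implicitly, and the stop-word list below is our own assumption:

```python
# Minimal keyword extraction from a user prompt: drop common function
# words so only the content words are mapped to image concepts.
STOP_WORDS = {"a", "an", "the", "with", "and", "of", "on", "in", "is"}

def extract_keywords(prompt):
    """Lowercase, tokenize, and filter out stop words."""
    words = prompt.lower().replace(",", " ").split()
    return [w for w in words if w not in STOP_WORDS]
```

For the paper's example prompt "Cat fighting with lion and tiger", such a filter keeps only the concept words that would be mapped to image data.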

6 Conclusion and Future Work

6.1 Conclusion

Through the text-to-image application, we serve people who like to share their thoughts as realistic images with the help of AI. The AI model helps us create images that are un-copyrightable, portrait-style, and aesthetic. It lets us give text inputs not only in a mainstream language like English but also in some regional languages (Hindi, Tamil, Telugu, etc.).


6.2 Future Enhancements

Text input support for more regional languages across India will be added in the future. In this way users can gain inspiration from their own culture by searching for regional/native concepts. The system will generate personalized images according to user prompts given in their own regional language.

References

1. Ramesh A, Pavlov M, Goh G, Gray S, Voss C, Radford A, Chen M, Sutskever I (2021) Zero-shot text-to-image generation. https://arxiv.org/abs/2102.12092v2
2. Cho J, Zala A, Bansal M (2022) DALL-Eval: probing the reasoning skills and social biases of text-to-image generative transformers. https://arxiv.org/abs/2202.04053v1
3. Gal R, Alaluf Y, Atzmon Y, Patashnik O, Bermano AH, Chechik G, Cohen-Or D (2022) An image is worth one word: personalizing text-to-image generation using textual inversion. https://arxiv.org/abs/2208.01618

Autodubs: Translating and Dubbing Videos

K. Suresh Kumar, S. Aravindhan, K. Pavankumar, and T. Veeramuthuselvan

Abstract Informative and educational videos are available only in a few selected languages, and because of this, people from different regions of the world are unable to understand or connect with the content efficiently, even with subtitles. Dubbing is a method of audio-visual translation in which the original conversation is translated and then acted out in such a manner that the media seems to be in the target language, without any of the original content being lost in translation. AutoDubs aims to convert the audio-visual (AV) language of a video into the user's desired language, for better understanding and for better connection with the actual content of the original video. The initial work in this system is to translate the audio into a professional transcription and improve the accuracy of the native-language dialogue lines. In this way, people from all over the world can overcome the language barrier and understand the content better.

Keywords Application programming interface (API) · Artificial intelligence (AI) · Speech-to-text (STT) · Text to speech (TTS)

1 Introduction

Autodubs aims to let all audiences see videos in their desired language, using Artificial Intelligence to transcribe, translate, and voice-act videos from one language to another, i.e., "AI-powered video dubs". Scholars from around the world want to learn new things, but a language barrier stands in the way. Making machines dub using AI and cloud APIs helps overcome this barrier. The scope of the system is to provide automatically dubbed videos using Artificial Intelligence, with

K. S. Kumar · S. Aravindhan · K. Pavankumar (B) · T. Veeramuthuselvan
Department of IT, Saveetha Engineering College, Chennai, India
e-mail: [email protected]
K. S. Kumar e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_6


the help of the Artificial Intelligence APIs provided by Google Cloud. The APIs include the Speech-to-Text API, the Translation API, and the Text-to-Speech API. These APIs play an important role: organizing them produces the AI-powered video dub.

2 Related Work

In 1954, Georgetown University introduced the first public machine-translation system, for Russian into English; its repertoire was just around 250 words. The Canadian government developed a completely automated system called METEO to translate weather predictions from French to English. In 1977, the system could translate 7,500 words per day; currently, it can translate close to 80,000 words per day, or over 30 million words per year, and it is presently responsible for completing 91% of the translation work for Environment Canada in Ville Saint-Laurent, Quebec. The SYSTRAN machine translation system combines a "direct translation" approach with a rule-based paradigm to provide accurate translations from French to English. Though it was built for technical documentation, its massive lexicon means it can now translate any text in the public sphere. European languages were the primary focus of development for the early SYSTRAN versions; languages including Arabic, Japanese, Korean, and Chinese are now available, and SYSTRAN was one of the first and most popular commercial machine translation systems. EUROTRA, developed and supported by the European Commission from the late 1970s until 1994, was an ambitious machine translation system. ARIANE and SUSY are two further examples of rule-based machine translation systems. A number of companies are working on voice recognition systems that automatically convert speech signals of actual human speech into a written form. This conversion powers voice commands, assistive devices, bots, and more, yet Indian languages still seriously lack efficient technology of this kind. One paper proposed a wavelet transformer for automatic speech recognition (WTASR) of Indian languages. Linguistic variations cause high- and low-frequency difficulties in speech signals over time; wavelets make it possible for networks to perform multi-scale signal analysis. The signal is wavelet-decomposed and delivered to the network, and a text is formed. Encoder-decoder systems are used in the translator networks to facilitate language translation, and the model was trained using data from India. A comparison with established practices shows that, since WTASR has a low word error rate, it may be effectively used for Indian speech recognition [1]. Another article addresses speech translation (ST). It improves both automatic speech recognition (ASR) and ST, in cascaded and end-to-end systems, by predicting continuous word embeddings. An advantage of word embeddings is that they can be learned from plain-text data, which alleviates the problem of data scarcity; they also provide additional contextual information in the manner of a language model. For ASR, the paper provides a novel decoding method that uses word embeddings to reduce the word error rate and incorporate semantic relationships between words, and using word embeddings as intermediate representations improves translation efficiency for the end-to-end ST model. The investigation shows that it is feasible to connect linguistic signals to semantic space, which should encourage further exploration of the suggested technique with spoken language [2].


knowledge from Word Incorporation as a language model. ASR provides a novel decoding method that uses word embeddings as a normalization that reduces WER and incorporates semantic relationships between words. Using word embeddings as intermediate representations improves translation efficiency for the ST end-to-end model. As a result of our investigation, we now know that it is feasible to connect linguistic signals to semantic space, which should encourage further exploration into using the suggested technique with spoken language [2]. In this work, we report the latest iteration of the NICT/ATR-evolved ChineseJapanese-English portable speech-to-speech translating device, which is now suitable for deployment for tourists. It achieves real-time, location-independent voice translation by centralising all of the speech-to-speech translation features onto a single terminal. When replacing the current noise-suppression method, the speech reputation performance is much enhanced. Coverage of massive, silent topics and language portability are both made possible by corpus—primarily based approaches to voice recognition, device translation, and speech synthesis. The results demonstrate that the individual accuracy of Chinese voice recognition ranges from 82% to 94%, and that the understudy ratings of device translation in a bilingual evaluation range from 0.55 to 0.74 in the Chinese to Japanese and Chinese to English directions, respectively [14]. Expanding a speech-to-speech translation device that will be widely used by many buyers requires more than just improving the fundamental capabilities underneath the experimental environment; the device also necessities to reflect numerous attributes of expressions through the purchasers who are ready to utilize the discourse-todiscourse interpretation gadget. 
Subsequent to preparing countless individuals in view of the review on client needs, this examination has laid out a gigantic language and discourse data set neighboring the climate in what voice-to-discourse interpretation advancements are truly being used. This approach allowed us to achieve outstanding basic overall performance in real-world contexts, such as speech-tospeech translation environments, as opposed to only in the lab. In addition, a userfriendly UI has been created for speech-to-speech translation, and errors have been reduced throughout the translation process, thanks to the extensive use of metrics meant to increase user satisfaction. After imposing the essential services, the large database accrued thru the carrier became moreover administered to the device following a filtering method so as you acquire the best-viable robustness towards each the knowledge and therefore the surroundings of the customers’ utterances. This investigation aims to disclose the methods by which a multilingual speech-to-speech translation device has been effectively developed for mobile devices [13].


3 System Design

A. Problem Description
With new scholars emerging at a rising rate from different regions all over the world, many face language issues with the content of the videos they study from. Manual dubbing requires a lot of time and manpower.

B. Proposed System
To swap out the original audio track, the system proposes using translated dubs: spoken translations of the dialogue in an audiovisual work. The overarching goal of this endeavour is to develop a system that uses AI to dub (re-enact) videos from one language into another; for example, the user may localise Skype conversations and YouTube videos. To begin, the system takes the audio from the original video and transcribes it to text using the Google Cloud Speech-to-Text API. The next step is to use the Translation API to convert the text into another language. Finally, the system uses the Text-to-Speech API to "voice act" the translations, creating human-like voices (Fig. 1). First, the video is uploaded to the Google Cloud platform; the system then extracts the audio from the input video file and uses the Speech-to-Text API to transform the spoken language into text. The transcribed text is separated into individual phrases or paragraphs, and the Translation API converts the text. An audio recording of the translated text being read is then generated, and its tempo is adjusted to that of the video's original speaker. Finally, the new audio track is laid over the video in place of the original audio.

Fig. 1 Architecture diagram
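The "separate the transcribed text" step in the flow above can be sketched with a regular-expression splitter. This is an illustrative approximation of our own; the real system could instead rely on punctuation or timestamps returned by the Speech-to-Text API:

```python
import re

def split_sentences(transcript):
    """Split a transcript into sentence-sized segments for translation.

    Splits on whitespace that follows sentence-ending punctuation,
    keeping the punctuation attached to each segment.
    """
    parts = re.split(r"(?<=[.!?])\s+", transcript.strip())
    return [p for p in parts if p]
```

Translating sentence-sized segments rather than the whole transcript keeps each Translation API request small and preserves sentence boundaries for later audio alignment.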


4 Methodology

In this proposed method, the system works with multiple modules and APIs to deliver the output. The system starts by extracting the sound from the input video using the moviepy module. The extracted audio is converted to text using the Speech-to-Text API. Once the audio has been converted, the transcribed text is divided into sentences/segments for translation. The transcribed text is passed to the Translation API with a request for translation into the user's preferred language. The translated text is received and passed to the Text-to-Speech API to obtain the translated audio. The produced audio is sped up so that it coincides with the video's original speaker. After the new audio is generated, it is superimposed over the video.

User Input
At first, the user downloads the video they want translated and saves it in the local directory. To execute the program, the user gives the video filename, the original audio language of the video, and the desired target language as inputs.

Moviepy
Once the program begins, the moviepy Python library extracts the audio from the input video. The extracted audio is saved in the .wav format and then fed to Speech-to-Text.

Speech-to-Text
The Speech-to-Text API is used to transcribe the words in the audio extracted by moviepy. Synchronous Recognition and Asynchronous Recognition are the two primary voice recognition modes offered by Speech-to-Text. Synchronous Recognition requires the audio data to be shorter than 1 min in length. Asynchronous Recognition initiates a long-running operation when the audio exceeds 1 min; it can handle recordings of any length up to 480 min. The resulting text is fed to the Translation API.

Translation
To translate the text, the Translation API employs a Neural Machine Translation (NMT) model pre-trained by Google. It can easily translate content into 135 languages. The text produced by Speech-to-Text is segmented, and each segment is translated into the user's desired language.

Text-to-Speech
The Text-to-Speech system uses WaveNet, a model trained using machine learning, to create synthetic voices. The network is so advanced that it can make computers sound remarkably like humans. Once the translated audio is generated,


it is embedded into the original video, replacing the original audio content with the help of the moviepy library, and the desired output is saved in the local directory.
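The tempo-matching step mentioned above reduces to computing a speed factor from the two durations. The helper below is a sketch of our own (the actual resampling would be applied with an audio tool such as moviepy or ffmpeg):

```python
def speed_factor(translated_s, original_s):
    """Factor by which synthesized audio must be sped up (>1) or
    slowed down (<1) so the dubbed line occupies the same time span
    as the original speaker's line."""
    if original_s <= 0:
        raise ValueError("original duration must be positive")
    return translated_s / original_s

# A 6 s synthesized line dubbed over a 5 s original must play 1.2x faster.
```

Keeping this factor close to 1 per segment is also a rough proxy for the speech-rate ratio discussed in the conclusion.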

5 System Requirements

Hardware Requirements (min.)
i3 Processor-Based Computer
4 GB Memory or higher
Hard Drive: 5 GB
Internet Connection: 2 Mbps

Software Requirements
Python language
Windows 10 or higher
Google Cloud Platform

6 Algorithm

Step 1: Extract the audio from the input video.
Step 2: Perform the speech-to-text conversion with the help of the Speech-to-Text API.
Step 3: Separate the transcribed text into individual phrases or paragraphs.
Step 4: Translate the text using the Translation API.
Step 5: Perform the text-to-speech conversion with the help of the Text-to-Speech API.
Step 6: Speed up the converted audio to match the original voice actor in the video.
Step 7: Attach the new audio to the existing video.

Pseudocode:
Start
  Input video and target language
  Extract audio from video
  Call Speech_to_Text API


  Send text to Translation API
  Call Text_to_Speech to get desired audio
  Embed audio with video
Stop
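The pseudocode above can be made concrete as a pipeline that receives the three cloud calls as injected functions. The stand-in callables below are purely illustrative, since the real Speech-to-Text, Translation, and Text-to-Speech clients require Google Cloud credentials:

```python
def autodub(audio, target_lang, stt, translate, tts):
    """Orchestrate STT -> split -> translate -> TTS (algorithm steps 2-5)."""
    text = stt(audio)
    segments = [s for s in text.split(". ") if s]          # naive splitter
    translated = [translate(seg, target_lang) for seg in segments]
    return [tts(seg, target_lang) for seg in translated]   # one clip per segment

# Stand-in clients that only mark what they would do:
fake_stt = lambda audio: "hello world. good morning"
fake_translate = lambda s, lang: f"[{lang}] {s}"
fake_tts = lambda s, lang: f"<audio:{s}>"

clips = autodub(b"...", "ta", fake_stt, fake_translate, fake_tts)
```

Injecting the clients this way also makes the pipeline testable offline; in production, the callables would wrap the Google Cloud API clients and the returned clips would be concatenated and embedded into the video with moviepy.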

7 Conclusion and Future Enhancements

The extraction of audio from the input video is done at an accuracy of 100%; in the translation phase, the translated-text accuracy is 80%; and matching the translated audio to the input video reaches an accuracy of 70%. In this project, the Google Cloud APIs are used to achieve AI-powered video dubs in a machine-dubbing scenario. To solve the issue of translating spoken language directly, Google Cloud is developing a new application programming interface: a translation API that translates sounds directly, with no transcribed-text intermediary, and that is simpler and easier to use once deployed. Automatically aligned phrases of similar length produced by this technology were, on average, quite close in duration to the speech-rate ratio of expertly dubbed scenes. Additionally, with properly translated and synced spoken lines, decent or even superior lip-syncing results are feasible. In the future, machine learning technology will be applied in this project to improve accuracy by creating specific datasets for each language containing the literal translation of individual words.

References

1. Choudhary T, Goyal V, Bansal A (2022) WTASR: wavelet transformer for automatic speech recognition of Indian languages. Big Data Mining Anal 6(1):85–91. https://doi.org/10.26599/BDMA.2022.9020017
2. Chuang S-P, Liu AH, Sung T-W, Lee H-Y (2021) Improving automatic speech recognition and speech translation via word embedding prediction. IEEE/ACM Trans Audio Speech Lang Process 29:93–105. https://doi.org/10.1109/TASLP.2020.3037543
3. Lee A, Gong H, Duquenne P-A, Schwenk H, Chen P-J, Wang C, Popuri S, Adi Y, Pino J, Gu J, Hsu W-N (2022) Textless speech-to-speech translation on real data. In: Proceedings of the 2022 conference of the North American chapter of the Association for Computational Linguistics: human language technologies, Seattle, United States. Association for Computational Linguistics, pp 860–872
4. Hayashi K, Yamamoto R, Inoue K, Yoshimura T, Watanabe S, Toda T, Takeda K, Zhang Y, Tan X (2020) ESPnet-TTS: unified, reproducible, and integratable open-source end-to-end text-to-speech toolkit. In: ICASSP 2020–2020 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 7654–7658


K. S. Kumar et al.

5. Zhang Z, Shi Y, Yuan C, Li B, Wang P, Hu W, Zha Z-J (2020) Object-relational graph with teacher recommended learning for video captioning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 13278–13288
6. Harzig P, Einfalt M, Lienhart R (2021) Synchronized audio-visual frames with fractional positional encoding for transformers in video-to-text translation. arXiv preprint arXiv:2112.14088v1
7. Chen S, Jiang Y-G (2019) Motion guided spatial attention for video captioning. Proc AAAI Conf Artif Intell 33:8191–8198
8. Öktem A, Farrús M, Bonafonte A (2019) Prosodic phrase alignment for machine dubbing. arXiv preprint arXiv:1908.07226v1
9. Öktem A, Farrús M, Bonafonte A (2018) Bilingual prosodic dataset compilation for spoken language translation. In: Proceedings of IberSPEECH, Barcelona, Spain
10. Nimbalkar S, Baghele T, Quraishi S, Mahalle S, Junghare M (2020) Personalized speech translation using Google Speech API and Microsoft Translation API. Int Res J Eng Technol (IRJET)
11. Nursetyo B, Moses Setiadi DRI (2018) LatAksLate: Javanese script translator based on Indonesian speech recognition using Sphinx-4 and Google API. In: International seminar on research of information technology and intelligent systems (ISRITI)
12. Do QT, Toda T, Neubig G, Sakti S, Nakamura S (2017) Preserving word-level emphasis in speech-to-speech translation. IEEE/ACM Trans Audio Speech Lang Process 25(3):544–556. https://doi.org/10.1109/TASLP.2016.2643280
13. Yun S, Lee Y-J, Kim S-H (2014) Multilingual speech-to-speech translation system for mobile consumer devices. IEEE Trans Consum Electron 60(3):508–516. https://doi.org/10.1109/TCE.2014.6937337
14. Shimizu T, Ashikari Y, Sumita E, Zhang J, Nakamura S (2008) NICT/ATR Chinese-Japanese-English speech-to-speech translation system. Tsinghua Sci Technol 13(4):540–544. https://doi.org/10.1016/S1007-0214(08)70086-5

Deep Neural Based Learning of EEG Features Using Spatial, Temporal and Spectral Dimensions Across Different Cognitive Workload of Human Brain: Dimensions, Methodologies, Research Challenges and Future Scope

Ayushi Kotwal, Vinod Sharma, and Jatinder Manhas

Abstract In contrast to traditional cognitive workload recognition paradigms, deep learning based models can be trained to learn complicated information from several domains simultaneously. The earlier stated techniques typically collect characteristics from the spectral and temporal views of the data separately. Therefore, the crucial step in establishing reliable EEG representations for the identification of cognitive workload is the selection of deep learning based models. The prime objective of this study is to focus on those deep learning based techniques which are effective and efficient in the identification of cognitive workload using electroencephalogram (EEG) signals. The human brain is a dynamic entity that tends to constantly think about the past and the future instead of relaxing in the present. Many models dealing with cognitive assessment fail to perform well and give poor results due to this property of the human mind. As a result, it becomes essential to remove tension and anxiety, treated as noise, from the collected EEG signals of the human brain using a variety of techniques. A thorough study of the implementation of different deep learning techniques for the identification of EEG-based cognitive or mental workload was carried out to give researchers and scientists complete information in a single platform. The study briefly describes all the concepts of cognitive workload recognition using deep learning in a systematic manner. The study reveals that CNNs outperformed all other deep learning techniques across human brain cognitive state assessments, whereas RNNs gave poor performance.

Keywords Electroencephalogram (EEG) · Cognitive · Mental · Deep learning · Workload

A. Kotwal (B) · V. Sharma · J. Manhas
Department of Computer Science & IT, University of Jammu, Jammu, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_7


A. Kotwal et al.

1 Introduction

Mental or cognitive workload is still a crucial factor in determining how well users perform [1]. The concept of cognitive or mental workload is receiving more and more attention in the current era of technological advancement. Broadly, it can be defined as the ratio of the human resources available to the resources needed to complete a particular activity [2]. Since everyone reacts to stress differently, measuring mental stress can be difficult [3]. In addition, the technique used for measurement and analysis affects how effectively mental stress is measured. Numerous neuroimaging techniques have been applied to directly or indirectly measure brain activity in order to evaluate mental stress: functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI) [4], positron emission tomography (PET) [5], and electroencephalography (EEG) [6]. EEG is one of the most commonly utilized signals among them because of its excellent temporal resolution, applicability, reliability, and affordability. Hence, in this paper, we highlight the assessment of cognitive workload using EEG signals. EEG-based cognitive workload is a crucial indicator of brain activity used in workload analysis applications; monitoring it helps avoid overloading a subject's brain during significant or stressful occasions. The field of study known as cognitive monitoring, which assesses cognitive activity, task engagement, and memory load, combines cognitive neurology, psychology, biomedical science, and brain-computer interfaces (BCI).
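The ratio definition above can be expressed directly. A minimal sketch; the overload interpretation in the docstring is an illustrative assumption, not a threshold from this chapter.

```python
def workload_ratio(resources_available: float, resources_required: float) -> float:
    """Cognitive workload expressed as the ratio described above:
    available mental resources over the resources a task demands.
    A value below 1.0 would indicate the task demands more than is
    available (overload), under the illustrative reading used here."""
    if resources_required <= 0:
        raise ValueError("resources_required must be positive")
    return resources_available / resources_required
```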

1.1 Deep Learning and EEG Signals

Deep learning approaches belong to a branch of machine learning that has advanced recently and demonstrated notable performance by producing better outcomes. Conventional machine learning approaches require a lot of prior knowledge to identify the properties of EEG signals. However, the complex cognitive process differs significantly among participants, making it challenging to determine essential and suitable features; as a result, it is extremely difficult to classify EEG signals accurately. For data pre-processing, several techniques such as down-sampling or artifact removal must be used, including Independent Component Analysis, Blind Source Separation, the Empirical Mode Decomposition method, and so on [7–10]. Characteristics related to cognitive workload are divided into four categories: frequency domain, time domain, time-frequency domain, and non-linear analysis. Deep learning techniques such as DNN, CNN, LSTM, RNN, autoencoders, and others are used for feature extraction and classification. Owing to the remarkable end-to-end self-learning potential of deep learning, complicated high-level feature representation is achieving tremendous results in recognizing


tasks in natural language processing, computer vision, automated speech recognition, bio-informatics, and other allied domains in recent years [11]. The most popular deep learning algorithms include Convolutional Neural Networks (CNN), Deep Belief Networks (DBN), Autoencoders (AE), Recurrent Neural Networks (RNN), Restricted Boltzmann Machines (RBM), Optimized Deep Neural Networks, Multilayer Perceptron Neural Networks (MLPNN), and EEG-Functional Magnetic Resonance Imaging (EEG-fMRI) [12].
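As an illustration of the frequency-domain features mentioned above, band power in the conventional EEG bands can be computed from an FFT periodogram. A minimal NumPy sketch; the band edges are the conventional ones, not values from this chapter, and a real pipeline would first apply the artifact-removal steps described above.

```python
import numpy as np

# Conventional EEG frequency bands (Hz).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal: np.ndarray, fs: float) -> dict:
    """Total periodogram power of one EEG channel in each band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}
```

For instance, a pure 10 Hz sine concentrates its power in the alpha band.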

2 Literature Review

A significant amount of literature on the proposed topic has been explored to support the use of deep learning models for estimating cognitive workload from EEG signals, and this section provides a general review of it. Chakladar et al. [13] proposed a framework for measuring different levels of mental workload based on the Grey Wolf Optimizer (GWO) algorithm and a deep BLSTM-LSTM model. The STEW dataset is used to estimate cognitive load, which is measured in two experiments: a No task condition and a SIMKAP-based multitasking activity. Using subject ratings, the SIMKAP-based multitasking activity is divided into three workload labels (low, moderate, and high), whereas the No task experiment is divided into two (low and high). A variety of features is extracted from the input EEG signal, and the GWO optimizer selects the features that are most relevant to mental workload. After feature selection, the cognitive load is classified with a deep BLSTM-LSTM model, which obtains classification accuracies of 82.57% and 86.33% for the SIMKAP-based multitasking activity and the No task experiment, respectively. In order to estimate workload from EEG during a Sternberg working memory task with both easy and difficult levels, Kwak et al. [14] developed a novel multilevel feature fusion method. To allow a 3D CNN to learn the spatial and spectral patterns over the scalp, the 1D EEG data are transformed into 3D EEG images. The significance of every multilevel feature is then computed by multiplying each feature collected from the 3D convolutional process by a weighting factor, which the proposed network optimizes based on the EEG image. The outcomes demonstrate that the multilevel feature fusion technique enhances the 3D/2D CNN structure's ability to estimate mental workload.
The suggested model achieved an accuracy of 90.8% on the private dataset and 93.9% on the public dataset. Electroencephalogram signals are one of the most frequent modalities for detecting mental activity. To extract spectral and spatial invariant EEG representations, Kuanar et al. [15] employ a CNN, and an RNN model to extract temporal patterns from consecutive frames. The proposed hybrid network improves classification accuracy compared to several current LSTM models. The study demonstrates that an overall accuracy of 92.5% can be achieved when estimating cognitive memory load at four distinct levels during the performance of a memory task. Due


to the ability of electroencephalogram signals to reflect the electrical activity of the brain, EEG is considered one of the most efficient physiological signals for evaluating workload related to different states of mind. Work efficiency may be improved with the right mental workload; an excessive mental workload, on the other hand, can impair human memory, response, and performance. As a result, assessing mental workload remains a critical topic. Zhang et al. [16] implemented the concatenation of a deep recurrent neural network and a 3D convolutional neural network (R3DCNN) trained on spatial, spectral, and temporal EEG features to explore the most effective EEG features for multitasking mental workload evaluation. The RNN layers are used to produce temporal representations, while the 3D CNN learns spatial and spectral features. The suggested model obtains an average accuracy of 88.9%. Hefron et al. [17] proposed deep RNNs to account for temporal representations in electroencephalography workload assessment for MATB tasks. The authors compared several RNN architectures, including highly stacked LSTMs, with existing algorithms and statistically evaluated the mean, variance, skewness, and kurtosis of frequency-domain power distributions, observing that skewness and kurtosis are not statistically significant characteristics whereas mean and variance are. The deep LSTM model performed well overall and obtained an accuracy rate of 93.0%. Almogbel et al. [18] implemented an end-to-end deep neural network that accepts only unprocessed raw EEG data as input in order to distinguish between various levels of a driver's mental workload and driving situation. EEG recordings of a participant driving a vehicle in a high-fidelity driving simulator are used to evaluate the proposed model.
With an accuracy of 96.0%, the suggested model can accurately classify multiple labels of a driver's mental workload and driving situation. Bashivan et al. [19] employ a model which aims to retain the spatial, spectral, and temporal information of EEG, resulting in features that are less sensitive to changes and disturbances within each dimension. The authors convert the EEG data into a series of EEG images, use ConvNets to extract spectral and spatial invariant information from each frame, and use an LSTM to identify temporal patterns across the frame sequence. The method obtained a high accuracy of 91.1%. Zhang et al. [20] employed a novel method for enhancing the characteristics of spectral maps and applied several deep learning milestones to electroencephalogram-based mental workload classification. In this study, a six-channel parallel technique of spectral feature-enhanced maps improves the expression of structural information that would otherwise be compressed by inter- and intra-subject differences. For performance evaluation, the model uses four different CNN structures: AlexNet, VGGNet, DenseNet, and ResNet. As the results show, ResNet achieved the highest classification accuracy of 93.71%. Table 1 outlines earlier research on cognitive workload recognition with the help of deep learning models.


Table 1 Previous studies related to cognitive workload assessment using different deep learning models

Authors | Task | Inputs | Deep learning technique | Accuracy (%)
Chakladar et al. [13] | SIMKAP | Temporal, spectral | BLSTM-LSTM | 86.33, 82.57
Kwak et al. [14] | Sternberg task | Spectral, spatial | 3D CNN | 93.9
Kwak et al. [14] | Sternberg task | Spectral, spatial | 3D CNN | 90.8
Kuanar et al. [15] | Working memory | Spatial, spectral, temporal | RNN | 92.5
Zhang et al. [16] | N-back and arithmetic tasks | Spectral, spatial, temporal | RNN + 3D CNN | 88.90
Hefron et al. [17] | MATB | Spectral-statistical | LSTM | 93.0
Almogbel et al. [18] | Simulated drive | Raw data | CNN | 95.31
Bashivan et al. [19] | Sternberg task | Spatial, spectral, temporal | CNN + RNN | 91.1
Zhang et al. [20] | Sternberg task | Spectral maps | Parallel CNN | 93.71

3 Experimental Setup

The general steps required in the recognition of the cognitive workload are the collection of the data, preprocessing of the EEG signals, extraction of features from the signals, and classification by deep neural network methods, as shown in Fig. 1. After pre-processing, deep neural networks are capable of selecting and extracting features by themselves from the various domains. The several hidden layers of a deep neural network aid the recognition of mental or cognitive workload. More accurate EEG representations are acquired with the application of deep learning based models, such as Convolutional Neural Networks (CNN), Deep Belief Networks (DBN), Recurrent Neural Networks (RNN), and others.

Fig. 1 General steps in the recognition of the cognitive workload (EEG signals → preprocessing of EEG signals → automatic feature extraction → deep neural network models → different levels of cognitive workload)
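Before automatic feature extraction, the continuous recording of Fig. 1 is commonly segmented into fixed-length epochs that the network consumes. A minimal sketch of that step; the 2 s window and 50% overlap are illustrative choices, not values from this chapter.

```python
import numpy as np

def epoch(eeg: np.ndarray, fs: float, win_secs: float = 2.0,
          overlap: float = 0.5) -> np.ndarray:
    """Cut a (n_channels, n_samples) recording into overlapping windows.
    Returns an array of shape (n_epochs, n_channels, win_samples)."""
    win = int(win_secs * fs)
    step = max(1, int(win * (1.0 - overlap)))
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])
```

For example, 10 s of 14-channel EEG at 128 Hz yields nine half-overlapping 2 s epochs.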


4 Dimensions

STEW is a publicly available electroencephalography (EEG) dataset for the single-session simultaneous capacity (SIMKAP) experiment of 48 volunteers. It is used to measure the effects of multitasking on mental workload [21]. In the SIMKAP multitasking examination, participants must check off similar items by evaluating two distinct panes while responding to audio questions which might involve calculations, comparisons, or database lookups. With a sizeable dataset of 48 subjects, it is also possible to investigate methods based on intra-subject and inter-subject classification schemes and to design algorithms for BCI applications. EEGLearn is an open access dataset in which seven male graduate students out of fifteen participants took part in the experiment, completing 240 sessions of a visual Sternberg working memory test. The task required memorizing a short list of English characters (SET) for three seconds, and then determining whether or not a randomly selected test character (TEST) is one of the remembered set [22]. After gathering 2670 trials, four workload levels are determined, each corresponding to a set size of 2, 4, 6, or 8 remembered characters; the corresponding workload level increases with set size. Subject-independent workload classification can be done using this dataset. Research that explores the same datasets with various methodologies provides a more meaningful comparison.
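The derivation of discrete workload labels from STEW subject ratings, as described for the three-class SIMKAP split in Sect. 2, can be sketched as a simple threshold mapping. STEW ratings run from 1 to 9; the cut points below are illustrative assumptions, not the dataset's published thresholds.

```python
def workload_label(rating: int) -> str:
    """Map a STEW subjective rating (1..9) to a coarse workload class.
    Thresholds are illustrative, not from the STEW paper."""
    if not 1 <= rating <= 9:
        raise ValueError("STEW ratings lie in 1..9")
    if rating <= 3:
        return "low"
    if rating <= 6:
        return "moderate"
    return "high"
```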

5 Proposed Methodology

The proposed methodology for the assessment of mental workload comprises five components: recording of EEG signals during mental tasks, data pre-processing, feature extraction from the temporal, spatial, and spectral domains, classification with deep learning models, and metrics for assessing performance (specificity, sensitivity, and accuracy), as shown in Fig. 2. In the collection, analysis, and classification of mental workload aspects throughout EEG signal acquisition, the choice of EEG recording equipment and electrode distribution is crucial. The captured EEG signals, as well as the number and position of electrodes, vary across mental workload EEG experiments.
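The three performance evaluation metrics named in the methodology can be computed directly from binary confusion-matrix counts; a minimal sketch:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate (recall)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)
```

For example, with 40 true positives, 45 true negatives, 5 false positives and 10 false negatives, sensitivity is 0.8, specificity 0.9, and accuracy 0.85.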

Fig. 2 Proposed methodology (pre-processing of raw EEG signals → feature extraction from temporal, spectral and spatial information → classification models of deep learning → estimation of mental workload → performance evaluation metrics)

6 Discussion

Multiple networks in the human brain are in control of a range of specialized tasks, including working memory. It has been demonstrated that a person's ability to complete some cognitive tasks is impaired by their working memory capacity. A person may become confused and lose their capacity to learn if their cognitive load is increased above what is reasonable for them. Therefore, for applications

like brain-computer interfaces and human–computer interaction, individual working memory requirements must be addressed. Even little fatigue can impair motivation and focus, which raises the risk of an accident or harm. Therefore, maintaining mental and physical stability is essential to reducing tension and anxiety during important occasions.

7 Conclusion

Additional training time and vast amounts of data are needed to optimize the architecture and features of deep learning based models in order to improve the accuracy of multi-class classification. Because feature-to-target mappings are temporally non-stationary, cross-day workload estimation using EEG is a challenging task. Images and calculated characteristics are widely utilized as data in research using hybrid neural networks. The most widely used neural network architecture is the CNN, mainly because it is well adapted to end-to-end learning, scales to huge amounts of data, and can therefore exploit the hierarchical structure in natural signals. Data preprocessing measures must be taken to obtain high-quality EEG signals. Because EEG data is so sensitive, even the smallest shift in body position or psychological diversion could have a big impact on the spectral map and the outcome.

8 Major Research Challenges, Issues and Future Scope

The majority of the inputs used by CNN models are spectral maps or pictures. It can be hazardous to elicit authentic cognitive processes, especially in a real-world setting. Thus, in order to allow researchers to examine mental states in the safe setting of a laboratory yet with the same level of accuracy as in real-life scenarios, safe and trustworthy procedures must be created and compared to genuine mental or cognitive


states. Future research in this area still needs to be carried out in much greater detail. A comprehensive analysis of various deep recurrent neural network architectures, including variation in the complexity of hidden-layer sequence-to-sequence connections and the layering of LSTM layers of various sizes, may bring considerable improvement. Only a few deep architectures are considered in this study due to time constraints and computational burden.

References

1. Young MS et al (2014) State of science: mental workload in ergonomics. Ergonomics 58(1):1–17
2. Wickens C (2002) Multiple resources and performance prediction. Theor Iss Ergon Sci 3:159–177
3. Hou X, Liu Y, Sourina O, Tan YRE, Wang L, Mueller-Wittig W (2015) EEG based stress monitoring. In: Proceedings of the 2015 IEEE international conference on systems, man, and cybernetics, Hong Kong, China, 9–12 October 2015, pp 3110–3115
4. Zhang X, Huettel SA, O'Dhaniel A, Guo H, Wang L (2019) Exploring common changes after acute mental stress and acute tryptophan depletion: resting-state fMRI studies. J Psychiatr Res 113:172–180
5. Arrighi JA, Burg M, Cohen IS, Kao AH, Pfau S, Caulin-Glaser T, Zaret BL, Soufer R (2000) Myocardial blood-flow response during mental stress in patients with coronary artery disease. Lancet 356:310–311
6. Al-Shargie F, Tang TB, Badruddin N, Kiguchi M (2015) Mental stress quantification using EEG signals. In: Proceedings of the international conference for innovation in biomedical engineering and life sciences, Putrajaya, Malaysia, 6–8 December 2015, pp 15–19
7. Lotte F (2014) A tutorial on EEG signal-processing techniques for mental-state recognition in brain-computer interfaces. In: Guide to brain-computer music interfacing. Springer, London, pp 133–161
8. Whitham EM, Pope KJ, Fitzgibbon SP, Lewis T, Clark CR, Loveless S, Broberg M, Wallace A, DeLosAngeles D, Lillie P, Hardy A, Fronsko R, Pulbrook A, Willoughby JO (2007) Scalp electrical recording during paralysis: quantitative evidence that EEG frequencies above 20 Hz are contaminated by EMG. Clin Neurophysiol 118(8):1877–1888. https://doi.org/10.1016/j.clinph.2007.04.027
9. Guerrero-Mosquera C, Navia A (2012) Automatic removal of ocular artefacts using adaptive filtering and independent component analysis for electroencephalogram data. IET Signal Process 6(2):99–106. https://doi.org/10.1049/iet-spr.2010.0135
10. Oosugi N, Kitajo K, Hasegawa N, Nagasaka Y, Okanoya K, Fujii N (2017) A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal. Neural Netw 93:1–6. https://doi.org/10.1016/j.neunet.2017.01.005
11. Alzubaidi L, Zhang J, Humaidi AJ, Duan Y, Santamaría J, Fadhel MA, Farhan L (2021) Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 8(1):1–74. https://doi.org/10.1186/s40537-021-00444-8
12. Praveena DM, Sarah DA, George ST (2020) Deep learning techniques for EEG signal applications—a review. IETE J Res. ISSN 0377-2063. https://doi.org/10.1080/03772063.2020.1749143
13. Chakladar DD, Dey S, Roy PP, Dogra DP (2020) EEG-based mental workload estimation using deep BLSTM-LSTM network and evolutionary algorithm. Biomed Signal Process Control 60:101989


14. Kwak Y, Kong K, Song W-J, Min B-K, Kim S-E (2020) Multilevel feature fusion with 3D convolutional neural network for EEG based workload estimation. IEEE Access 99:16009–16021
15. Kuanar S, Athitsos V, Pradhan N, Mishra A, Rao KR (2018) Cognitive analysis of working memory load from EEG, by a deep recurrent neural network. In: Proceedings of the IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 2576–2580. https://doi.org/10.1109/ICASSP.2018.8462243
16. Zhang P, Wang X, Zhang W, Chen J (2019) Learning spatial–spectral–temporal EEG features with recurrent 3D convolutional neural networks for cross-task mental workload assessment. IEEE Trans Neural Syst Rehabil Eng 27(1):31–42. https://doi.org/10.1109/TNSRE.2018.2884641
17. Hefron RG, Borghetti BJ, Christensen JC, Kabban CMS (2017) Deep long short-term memory structures model temporal dependencies improving cognitive workload estimation. Pattern Recogn Lett 94:96–104
18. Almogbel MA, Dang AH, Kameyama W (2018) Cognitive workload detection from raw EEG-signals of vehicle driver using deep learning. In: International conference on advanced communication technology (ICACT), pp 1–6
19. Bashivan P, Rish I, Yeasin M, Codella N (2015) Learning representations from EEG with deep recurrent-convolutional neural networks. arXiv:1511.06448
20. Zhang Y, Shen Y (2019) Parallel mechanism of spectral feature-enhanced maps in EEG-based cognitive workload classification. Sensors 19(4):808
21. Lim WL, Sourina O, Wang LP (2018) STEW: simultaneous task EEG workload data set. IEEE Trans Neural Syst Rehabil Eng 26(11):2106–2114
22. Bashivan P, Yeasin M, Bidelman GM (2015) Single trial prediction of normal and excessive cognitive load through EEG feature fusion. In: IEEE signal processing in medicine and biology symposium, pp 1–5

A Framework for Classification of Nematodes Species Using Deep Learning

Meetali Verma, Jatinder Manhas, Ripu Daman Parihar, and Vinod Sharma

Abstract Worldwide, phytoparasitic nematodes (phytonematodes) are seriously harming crops, which ultimately leads to massive economic losses. According to research, less than 0.01% of these species have been identified to date. The majority of nematodes carry near-identical morphological features and are extremely difficult to classify using traditional technologies. These organisms play a vital role in pest control, soil ecology, biogeography, habitat preservation, and climate change, which creates great demand for their accurate identification and classification. Traditionally, nematode identification has relied only on physical traits such as body length, sexual organ morphology, mouth and tail components, and other physical characteristics. This process is very complex and time-consuming, and it depends entirely on human expertise and expensive equipment. In recent years, deep learning based techniques have shown considerable improvement and provide considerable enhancement in accuracy. In this paper, the deep learning technique InceptionV3 has been implemented to effectively classify phytoparasitic nematode species. The experimental study has been conducted on a state-of-the-art nematodes dataset consisting of two species: Acrobeles and Acrobeloides. The dataset, which comprises 277 digital microscopic nematode images, is further enlarged by using data augmentation techniques like zooming, flipping, and shearing. Training and testing accuracy for the suggested classification model were 99% and 90%, respectively.

Keywords Nematodes · Plant-parasitic nematodes · Acrobeles · Acrobeloides · Deep learning · InceptionV3

M. Verma (B) · V. Sharma
Department of Computer Science & IT, University of Jammu, Jammu, India
e-mail: [email protected]

J. Manhas
Department of Computer Science & IT, Bhaderwah Campus, University of Jammu, Bhaderwah, India

R. D. Parihar
Department of Zoology, University of Jammu, Jammu, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_8


M. Verma et al.

1 Introduction

Nematodes are invertebrates that belong to the phylum Nematoda and are also known as roundworms. These organisms have a cylindrical, transparent, unsegmented body. Nematodes are among the most numerous and diverse animals on Earth; they comprise up to one million different species and can be either parasitic or free-living [1]. The first category is commonly found in animals and plants, whereas the second is found in fresh water, marine environments, hot deserts, soil, and deep inside the Earth's crust, sustaining itself mostly on algae, fungi, bacteria, dead organisms, and the like. Many of these organisms are harmful to people, plants, and other living things: various deadly diseases in human beings, such as ascariasis, ancylostomiasis, trichuriasis [2], hookworm [3], angiostrongyliasis [4], helminth infections [5], and onchocerciasis [6], are caused by these species. Less than 0.01% of these species have been identified as per the research to date [7]. Nematodes play a crucial role in nutrient recycling and pest control, but may also harm plants. The majority of soil nematodes make a significant contribution to nutrient cycling in the natural environment. According to reports, some nematodes are crucial in the fields of medicine and veterinary science [8]. Therefore, precise identification is essential for comprehending nematode diversity and creating efficient management and control plans. Nematodes are divided into different species depending on, among other traits, their size; they occur naturally and are extremely challenging to identify visually because of their many morphological similarities. Nematodes are observed under a microscope using culture techniques to better understand their biological, genetic, and physiological characteristics [9].
Traditionally, nematode identification has relied only on physical traits such as body length, sexual organ morphology, mouth and tail components, and other physical characteristics. Nematodes are differentiated from each other by piercing mouthparts called stylets. This typically leads to the absence of precise categorization among closely related species and is therefore unsatisfactory, especially when a large sample size is involved, because few uniquely visible traits exist and experienced taxonomists are in short supply [10]. Moreover, traditional methods are time-consuming and costly. Morphological identification uses fundamental principles to match patterns against sketches in a standard taxonomic key. Experts use morphological and DNA-based methods to identify nematode species [11]. The said process is very complex and time-consuming, and it depends entirely on human expertise and expensive equipment. AI techniques can solve this problem by recognizing nematode species from their microscopic images, making the identification process much easier and faster and saving time and human effort. Many other applications, including speech recognition [12], health care [13], business forecasting [14], agriculture [15], and others, have made extensive use of ML techniques. In recent years, DL (a subfield of ML) based techniques have shown a considerable amount of improvement in the accuracy of results. DL has already gained a vast following in microscopic image recognition [16], which includes cell segmentation,

A Framework for Classification of Nematodes Species Using Deep …


tissue segmentation, object segmentation and classification [17], pattern recognition [18], autonomous vehicles [19], etc. [20]. ResNet, Inception, Xception, and VGG16 are just a few of the CNN architectures that have been proposed specifically for image classification. In this work, we present a modified InceptionV3 model with higher accuracy than the current state of the art for the image classification of two nematode species, namely, Acrobeles and Acrobeloides. The rest of the paper is organized as follows: Sect. 2 presents the related work. Section 3 presents the materials and the proposed method. Section 4 presents the results and discussion. Finally, Sect. 5 presents the conclusion and future scope.

2 Related Work The studies most relevant to this work are discussed in this section. Automatically classifying nematode species from images involves the following steps: (I) image acquisition; (II) preprocessing; (III) feature extraction and selection; and (IV) classification. Researchers have applied numerous deep learning techniques to the classification of nematode images. Abade et al. [21] implemented a deep learning-based method on the NemaDataset for the classification of 3,063 microscopic images from the five phytonematode species most damaging to the soybean crop. Thirteen CNN models representing the state of the art in object recognition and classification were evaluated on the NemaDataset, and a new CNN model, NemaNet, was compared with the existing models. Trained from scratch, NemaNet reached 96.99% average accuracy with a best evaluation fold of 98.03%; with transfer learning, it reached 98.88% average accuracy with a best evaluation fold of 99.34%. On I-Nema, a public, open-source dataset containing 2,760 microscopic nematode images, Fung et al. [22] conducted two types of experiments using six state-of-the-art CNNs, namely AlexNet, VGG-16, VGG-19, ResNet-34, ResNet-50, and ResNet-101; the models' average accuracy was 79.0%. Entomopathogenic nematodes (EPNs) are parasitic nematodes that harm insects by infecting them with insect-pathogenic bacteria. EPNs have been explored as a potential replacement for chemical pesticides, which can contaminate the environment. Uhlemann et al. [23] studied three EPN species: Heterorhabditis bacteriophora, Steinernema carpocapsae, and Steinernema feltiae, applying transfer learning with readily available state-of-the-art model architectures.
The Keras deep learning library provides 13 CNN architectures that can be used with or without pre-trained weights, including Xception, VGG16, VGG19, ResNet, Inception, InceptionResNet, MobileNet, DenseNet, and NASNet. Their model achieved an average validation accuracy of 88.28% on the juvenile nematode dataset and 69.45% on the adult nematode dataset. Entomopathogenic nematodes are soil-dwelling organisms that have


M. Verma et al.

been widely utilized for biological control of agricultural insect pests. They are one of the finest alternatives to pesticides, since simple procedures for applying them with conventional sprayers have been developed. Counting is the most common, laborious, time-consuming, and error-prone aspect of laboratory studies on entomopathogenic nematodes. Using computer vision, Kurtulmuş et al. [24] developed a new method for detecting and counting dead Heterorhabditis bacteriophora nematodes in microscope images. The technique comprised three primary algorithmic steps: pre-processing to extract the nematode worms' medial axes as precisely as feasible, skeleton analysis to separate overlapping worms, and detection of dead nematodes using two different straight-line detection methods. The method was tested on 68 microscope images containing 935 living and 780 dead worms, and successfully detected worms with recognition rates above 85%. Romain et al. [25] implemented a CNN for image classification of two quarantine nematode species, Globodera pallida and Globodera rostochiensis; the proposed CNN model had an accuracy of 71%. In another study, Lai et al. [26] developed a PPN identification model using the Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture on 9,483 nematode images from 10 PPN species; the model with the ResNet-101 backbone performed best, with a mean average precision of 0.9018. In this study, we use an InceptionV3 model to automatically extract features from and classify digital images of microscopic nematodes of two species, Acrobeles and Acrobeloides. The CNN was developed using Python, the Keras API, and the TensorFlow framework.

3 Material and Proposed Method This section presents the dataset used in this research, along with our proposed approach for classifying the nematode species.

3.1 Description of Dataset In the present work, the state-of-the-art dataset "I-Nema" is used, which contains two plant-parasitic nematode (PPN) species: Acrobeles and Acrobeloides. Further details of the dataset are as follows:
• 227 training images
  – Acrobeles images: 57
  – Acrobeloides images: 170


Fig. 1 Flowchart of the proposed method

• 50 testing images
  – Acrobeles images: 14
  – Acrobeloides images: 36

3.2 Proposed Method The flowchart of the proposed method is shown in Fig. 1. The input image first goes through a preprocessing stage, which involves resizing all images to 299 × 299 pixels and increasing the number of images through data augmentation. After that, the InceptionV3 model is trained and tested.

3.2.1 Data Augmentation

Our dataset is rather small, so we artificially increased the volume of the training data using data augmentation. Data augmentation is a common DL technique for producing the required number of samples; with it, training a network on a small database becomes more effective. Traditional methods


of enhancing data include shifting, rotating, flipping, transformation, and zooming. For training in this study, we applied image augmentations using the Keras ImageDataGenerator. As shown in Sect. 3.1, the class "Acrobeloides" contains 170 images. To maintain an even distribution of images between the two classes under consideration, we expanded the "Acrobeles" class using data augmentation, applying a horizontal flip, a shear of 0.2, and a zoom of 0.2. As a result, we had 170 images for each class.
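The augmentation step above can be sketched with the Keras ImageDataGenerator using the stated settings (horizontal flip, shear 0.2, zoom 0.2); a random array stands in here for a real 299 × 299 micrograph, and the directory layout is omitted.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings mirroring those described in the text.
datagen = ImageDataGenerator(
    horizontal_flip=True,  # random horizontal flips
    shear_range=0.2,       # shear intensity of 0.2
    zoom_range=0.2,        # random zoom of 0.2
)

# A dummy 299 x 299 RGB image in place of a real nematode micrograph.
image = np.random.rand(1, 299, 299, 3).astype("float32")
augmented = next(datagen.flow(image, batch_size=1))
print(augmented.shape)  # (1, 299, 299, 3)
```

In practice the generator is pointed at the training directory, and each pass yields a randomly transformed variant, which is how the "Acrobeles" class was expanded to 170 images.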

3.2.2 Nematodes Classification Using InceptionV3 Model

Figure 2 illustrates the method for classifying nematode species using the InceptionV3 model. On the ImageNet dataset, InceptionV3 has demonstrated accuracy above 78.1%. Convolutions, average pooling, max pooling, concatenations, dropout, and fully connected layers are among the fundamental symmetric and asymmetric components of the model, which frequently applies batch normalization to the activation inputs; softmax is used to calculate the loss. Our modified InceptionV3 begins with three BasicConv2d blocks, each consisting of a convolutional layer and batch normalization, followed by 3 modules of type A, 4 of type B, and 2 of type C, then average pooling, dropout, a linear layer, ReLU, another dropout layer, and a final linear layer.
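A minimal Keras sketch of transfer learning with InceptionV3 for the two nematode classes is shown below. This is not the authors' exact modified architecture: the head's dropout rate and Dense width are our assumptions, and `weights=None` is used here only to avoid downloading weights (in practice `weights="imagenet"` would be used for transfer learning).

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Convolutional base; weights="imagenet" in real transfer-learning use.
base = InceptionV3(weights=None, include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the base so only the head is trained

inputs = layers.Input(shape=(299, 299, 3))
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)        # pool spatial features to a vector
x = layers.Dropout(0.5)(x)                    # assumed dropout rate
x = layers.Dense(128, activation="relu")(x)   # assumed head width
outputs = layers.Dense(1, activation="sigmoid")(x)  # Acrobeles vs Acrobeloides
model = models.Model(inputs, outputs)
print(model.output_shape)  # (None, 1)
```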

Fig. 2 Flowchart of the InceptionV3 for nematodes species classification


Fig. 3 Training and validation loss

Fig. 4 Training and validation accuracy

4 Experiment Results To evaluate the performance of the proposed system, the dataset was randomly divided into 82% for training and 18% for testing: 227 images in the training set and 50 in the test set. We applied data augmentation operations such as flipping, shearing, and zooming to the training data to provide the CNN architecture with varied image data [27]. The InceptionV3 model was trained for 20 epochs to categorize images into two groups: Acrobeles and Acrobeloides. During training, the Adam optimization method [28] was used to iteratively update the weights. The InceptionV3 model attained 99% training accuracy and 90% test accuracy. Without data augmentation, training accuracy was around 86%, so data augmentation had a significant positive impact on training accuracy. Figures 3 and 4 show the results of our model's training and testing.
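The compile/fit calls for Adam-optimized binary classification look as follows. The tiny model and random data are stand-ins purely to illustrate the API; the actual work trains the InceptionV3 model on 299 × 299 images for 20 epochs.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model; the real network is the modified InceptionV3.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",             # Adam iteratively updates weights
              loss="binary_crossentropy",   # two classes
              metrics=["accuracy"])

# Random images/labels in place of the augmented training set.
x = np.random.rand(16, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, size=(16,))
history = model.fit(x, y, epochs=2, verbose=0)  # 20 epochs in the paper
print(len(history.history["loss"]))  # 2
```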

5 Conclusion and Future Scope In this work, we used the transfer learning method of InceptionV3 for the automatic classification of nematode species from digital microscopic images. A public dataset called "I-Nema" containing 277 microscopic images of PPN species was used in


this paper. The CNN was trained to divide images into two categories: Acrobeles and Acrobeloides. The proposed CNN model achieved 90% test accuracy and 99% training accuracy; these results show that deep learning has tremendous potential for accurately classifying nematode species. Training and validation loss are shown in Fig. 3, and training and validation accuracy in Fig. 4. Compared with other recent methods, our proposed method clearly obtains better outcomes. However, this study is limited to two categories of nematodes, and its dataset was comparatively small. In future work we will seek to enhance performance, for example by combining or concatenating deep learning models, and we will also work on multiclass datasets and expand the size of the dataset.

References
1. Abad P, Gouzy J, Aury JM, Castagnone-Sereno P, Danchin EGJ, Deleury E et al (2008) Genome sequence of the metazoan plant-parasitic nematode Meloidogyne incognita. Nat Biotechnol 26(8):909–915
2. Caron Y, Bory S, Pluot M, Nheb M, Chan S, Prum SH et al (2020) Human outbreak of trichinellosis caused by Trichinella papuae nematodes, Central Kampong Thom Province, Cambodia. Emerg Infect Dis 26(8):1759
3. Loukas A, Bethony J, Brooker S, Hotez P (2006) Hookworm vaccines: past, present, and future. Lancet Infect Dis 6(11):733–741
4. Wang Q-P (2008) Human angiostrongyliasis. Lancet Infect Dis 8(10):621–630
5. Hotez PJ, Brindley PJ, Bethony JM, King CH, Pearce EJ, Jacobson J (2008) Helminth infections: the great neglected tropical diseases. J Clin Investig 118(4):1311–1321
6. Sabrosa NA, de Souza EC (2001) Nematode infections of the eye: toxocariasis and diffuse unilateral subacute neuroretinitis. Curr Opin Ophthalmol 12(6):450–454
7. Abebe E, Mekete T, Thomas WK (2011) A critique of current methods in nematode taxonomy. Afr J Biotechnol 10(3):312–323
8. Roeber F, Jex AR, Gasser RB (2013) Next-generation molecular-diagnostic tools for gastrointestinal nematodes of livestock, with an emphasis on small ruminants: a turning point? Adv Parasitol 267–333
9. Bhat KH, Mir RA, Farooq A, Manzoor M, Hami A, Allie KA et al (2022) Advances in nematode identification: a journey from fundamentals to evolutionary aspects. Diversity 14(7):536
10. Oliveira CMG, Monteiro AR, Blok VC (2011) Morphological and molecular diagnostics for plant-parasitic nematodes: working together to get the identification done. Trop Plant Pathol 36(2):65–73
11. Bogale M, Baniya A, DiGennaro P (2020) Nematode identification techniques and recent advances. Plants 9(10):1260
12. Londhe ND, Ahirwal MK, Lodha P (2016) Machine learning paradigms for speech recognition of an Indian dialect.
In: 2016 International conference on communication and signal processing (ICCSP), pp 0780–0786
13. Liu F, Yan J, Wang W, Liu J, Li J, Yang A (2020) Scalable skin lesion multi-classification recognition system. Comput Mater Continua 62(2):801–816
14. Rajab S, Sharma V (2015) Performance evaluation of ANN and neuro-fuzzy system in business forecasting. In: 2nd International conference on computing for sustainable global development (INDIACom), pp 749–754


15. Pandith V, Kour H, Singh S, Manhas J, Sharma V (2020) Performance evaluation of machine learning techniques for mustard crop yield prediction from soil analysis. J Sci Res
16. Rani P, Kotwal S, Manhas J, Sharma V, Sharma S (2021) Machine learning and deep learning based computational approaches in automatic microorganisms image recognition: methodologies, challenges, and developments. Arch Comput Methods Eng 1–37
17. Wu H, Liu Q, Liu X (2019) A review on deep learning approaches to image classification and object segmentation. Comput Mater Continua 60(2):575–597
18. Ameri A, Akhaee MA, Scheme E, Englehart K (2019) A deep transfer learning approach to reducing the effect of electrode shift in EMG pattern recognition-based control. IEEE Trans Neural Syst Rehabil Eng 28(2):370–379
19. Zhang J, Wang W, Lu C, Wang J, Sharma AK (2020) Lightweight deep network for traffic sign classification. Ann Telecommun 75(7):369–379
20. Xing F, Xie Y, Su H, Liu F, Yang L (2018) Deep learning in microscopy image analysis: a survey. IEEE Trans Neural Netw Learn Syst 29(10):4550–4568
21. Abade AS, Porto LF, Ferreira PA, Vidal FB (2022) NemaNet: a convolutional neural network model for identification of soybean nematodes. Biosyst Eng 213:39–62
22. Lu X, Wang Y, Fung S, Qing X (2021) I-Nema: a biological image dataset for nematode recognition
23. Uhlemann J, Cawley O, Duarte TK (2020) Nematode identification using artificial neural networks. In: DeLTA, pp 13–22
24. Kurtulmuş F, Ulu TC (2014) Detection of dead entomopathogenic nematodes in microscope images using computer vision. Biosyst Eng 118:29–38
25. Thevenoux R, Buisson A, Aimar MB, Grenier E, Folcher L, Parisey N et al (2021) Image based species identification of Globodera quarantine nematodes using computer vision and deep learning. Comput Electron Agric 186
26.
Lai HH, Chang YT, Yang JI, Chen SF (2021) Application of convolutional neural networks. In: ASABE annual international virtual meeting. American Society of Agricultural and Biological Engineers (ASABE)
27. Shijie J, Ping W, Peiyi J, Siping H (2017) Research on data augmentation for image classification based on convolution neural networks. In: Chinese automation congress (CAC), pp 4165–4170. IEEE
28. Yu Y, Liu F (2019) Effective neural network training with a new weighting mechanism-based optimization algorithm. IEEE Access 7:72403–72410

CAD Model for Biomedical Image Processing for Digital Assistance Hitesh Kumar Sharma, Tanupriya Choudhury, Richa Choudhary, Jung Sup Um, and Aarav Sharma

Abstract The insufficiency of doctors poses a major challenge for biomedical engineering: to develop efficient assistance tools or applications that can be used as supporting systems (e.g., CAD) for doctors to diagnose diseases using biomedical images such as CT scans, X-rays, and MRI. COVID-19 is a highly communicable and extremely dangerous disease. Almost 1.36 billion people in the world have been affected by the coronavirus to date, and two million people have died. This deadly disease has affected more than 150 countries in the world, and thousands of people are still being exposed to it daily, so early detection is really important and critical to saving a person's life. Coronavirus can be detected effectively through chest X-rays, and with the help of deep learning and neural networks, COVID detection can be done quickly and cheaply. So, we make use of a CNN model to quickly detect whether a person has COVID-19 or not by inputting X-ray images. Keywords CNN · COVID-19 · Deep learning · Chest X-ray

All authors contributed equally and are the first authors. H. K. Sharma · R. Choudhary · A. Sharma School of Computer Science, University of Petroleum & Energy Studies, Energy Acres, Bidholi, Dehradun 248007, Uttarakhand, India e-mail: [email protected] J. S. Um Kyungpook National University, Daegu, South Korea e-mail: [email protected] T. Choudhury (B) Informatics Cluster, SoCS, University of Petroleum and Energy Studies (UPES), Dehradun 248007, Uttarakhand, India e-mail: [email protected]; [email protected]; [email protected] Adjunct Professor, CSE Dept., Graphic Era Hill University, Dehradun 248002, Uttarakhand, India Director Research (Honorary), The AI University, Cutbank, MT 59427, USA © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_9



1 Introduction COVID-19, also called coronavirus disease, is a very deadly viral respiratory disease caused by the SARS-CoV-2 virus. Billions of people have already been affected by this disease and millions have died. The common symptoms experienced by infected persons are the same as those of flu or a common viral infection, i.e., high fever, dry cough, and trouble breathing; the symptoms that distinguish it from other viral diseases are loss of smell or taste and chest pain. COVID-19 has spread throughout the world, and almost every country has been affected. The first case of coronavirus was found in Wuhan, China, in November 2019, and the virus is still affecting thousands of people daily across the world. The disease's origin and etiology are still a mystery. It is a highly communicable disease: when an infected person encounters an uninfected person, tiny droplets from the infected individual's mouth or nose typically transfer the virus to the mouth or nose of the other person. Because of this, wearing a mask is one of the ways we can protect ourselves from coronavirus. There are several methods for determining whether someone has coronavirus. One method of diagnosis is the RT-PCR test, generally known as a swab test. An antigen test is the alternative method of detection; results are available in just 30 min, although it is not always accurate. In order to rapidly and easily detect and diagnose COVID-19 using chest X-rays, we propose to develop a deep learning model using a convolutional neural network. Using X-ray images as the dataset, we develop a binary image classifier that identifies whether a person has coronavirus or not with high precision.

2 Literature Review In March 2020, the WHO (World Health Organization) declared COVID-19 a pandemic. Many countries imposed strict lockdowns to minimize the impact of such a critical communicable disease. In 2019, 2020, and 2021 many researchers took the initiative and provided solutions for early detection and diagnosis of coronavirus disease. AI (artificial intelligence), ML (machine learning), DL (deep learning), and image processing are advanced IT technologies that helped doctors diagnose this disease quickly, so that more patients could be examined. X-ray images are one of the main medical imaging modalities to which deep learning algorithms can be applied with high accuracy. Researchers worldwide have produced various research outputs to identify this pandemic disease at an early stage and fight against it [1]. In their research work, the authors of [1] explain the definition and source of the coronavirus. The authors in [2] described the various forms of coronavirus and the medical tests to identify it. The authors of [3–5] proposed automated approaches in the field of medical sciences. In [6] the authors proposed a CNN-based algorithm to predict COVID-19


disease using X-rays. In [7–9], transfer learning and CNN-based models are used for diagnosis. The authors in [9, 10] proposed AI-based deep learning models to detect COVID-19 from medical images. The authors in [11, 12] suggested the use of supervised machine learning algorithms for diagnosis of COVID-19 in its early stages. The authors in [13] proposed the DenseNet [14] model, a CNN model specially designed for medical image processing. In [15], the author demonstrated the significance of X-ray images for fast diagnosis of this disease.

3 Proposed CNN Model for COVID X-Ray Detection This section includes information on the fundamental architecture of CNN, the architecture of the proposed CNN model and the dataset used.

3.1 Convolutional Neural Network (CNN) A CNN [16] is a particular kind of deep learning neural network that is typically used with datasets of 2D images. A basic neural network has three layers: input, hidden, and output; a CNN has several layers, each made up of neurons. Images are fed into the CNN, which applies weights and biases to them. The advantage a CNN has over other learning algorithms is that it can automatically extract significant features from images: for example, to detect images containing skin cancer, it can itself learn the features that distinguish a cancerous skin mole from a non-cancerous one. Its other main edge over other learning algorithms is its efficiency and accuracy [17] (Fig. 1). There are five different layers in a CNN: 1. Input Layer

Fig. 1 Example of convolutional neural network


2. Convolutional Layer
3. Pooling Layer
4. Optimizer Layer
5. Output Layer (Fig. 2)

A CNN's input layer is its first layer. It holds the image as a matrix, each value of which represents a pixel determined by the RGB color of that pixel, before processing by the subsequent layers. The convolutional layer is the CNN's feature extraction layer; it extracts key features from the images using activation functions such as ReLU (which changes all negative values to zero) [18, 19]. The pooling (or filter) layer, employed between processing layers, decreases the spatial volume of the images and is crucial for accelerating computation. Depending on the classification we want in our model, a softmax or logistic layer is used at the end of the neural network: the logistic function for binary classification, and the softmax layer for multiclass classification. The output layer, at the network's conclusion, gives us the value we require for categorization [20] (Fig. 3).
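The layer types just described can be illustrated with a minimal Keras stack. The input size, filter count, and the three-class softmax output here are arbitrary examples chosen only to show the layer roles, not the proposed model.

```python
from tensorflow.keras import layers, models

# Minimal CNN illustrating input, convolution + ReLU, pooling, and a
# softmax output layer (the logistic variant would use a single sigmoid unit).
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),              # input layer (grayscale image)
    layers.Conv2D(8, (3, 3), activation="relu"),  # feature extraction, ReLU zeroes negatives
    layers.MaxPooling2D((2, 2)),                  # pooling shrinks spatial volume
    layers.Flatten(),
    layers.Dense(3, activation="softmax"),        # softmax layer for 3-class output
])
print(model.output_shape)  # (None, 3)
```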

Fig. 2 Different layers of convolutional neural network

Fig. 3 Filter layer breakdown in a CNN model


4 Dataset Standardization An image generator was created to standardize the input images. It adjusts each image so that the mean pixel intensity becomes zero and the standard deviation becomes 1. Each old pixel value is replaced by a new value calculated with the following formula, in which the mean is subtracted from each value and the result is divided by the standard deviation:

x_i' = (x_i − μ) / σ
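A minimal NumPy sketch of this per-image standardization follows (the function name `standardize` is ours, not from the paper):

```python
import numpy as np

# Subtract the mean and divide by the standard deviation, so the
# standardized pixels have mean 0 and standard deviation 1.
def standardize(image):
    return (image - image.mean()) / image.std()

img = np.random.rand(224, 224)   # dummy image, intensities in [0, 1)
std_img = standardize(img)
print(std_img.mean(), std_img.std())  # approximately 0.0 and 1.0
```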

The results before and after applying this standardization formula are shown below through pixel intensity histograms. As shown in Fig. 4a, the pixel intensity originally ranges from 0.0 to 1.0; in this case the mean is never zero and the standard deviation is never 1. The dataset specification is given in Table 1. To eliminate this problem, the standardization formula above is applied to each pixel value, and the resulting values are plotted in the intensity histogram of Fig. 4b, where the pink bars represent the calculated intensity values. As the second histogram shows, the intensity mean is now 0 and the standard deviation is 1. The cross-entropy loss formula is given in the following equation:


Fig. 4 Pixel intensity distribution of an X-ray image a before standardization b after standardization

Table 1 Dataset details [21]

Total images in dataset: 2483
No. of classes for classification: 2
Dataset original image dimension: 224 × 224
Images used in training the model: 2085
Images used in testing the model: 191


L_cross-entropy(X_i) = −(Y_i log(f(X_i)) + (1 − Y_i) log(1 − f(X_i)))

X_i and Y_i denote the input feature and the corresponding label, and f(X_i) represents the output of the model, i.e., the probability that the example is positive. The overall average cross-entropy loss for a complete dataset D of size N is:

L_cross-entropy(D) = −(1/N) (Σ_positive log(f(X_i)) + Σ_negative log(1 − f(X_i)))

This formulation shows clearly that the loss will be dominated by negative labels if there is a large imbalance in the dataset with very few positive labels. The class frequencies are defined as:

Freq_positive = (No. of positive samples) / N
Freq_negative = (No. of negative samples) / N

However, for accurate results these contributions should be equal. One possible way to make the contributions equal is to multiply each class frequency by a class-specific weight factor, W_positive or W_negative, so that each class contributes equally to the classification model. The condition is:

W_positive × Freq_positive = W_negative × Freq_negative

which can be satisfied simply by taking

W_positive = Freq_negative
W_negative = Freq_positive

Using this formulation, we deal with the class imbalance problem. To verify it, we have plotted the frequency graph, which shows the expected chart: as we can see in Fig. 5, both frequencies are balanced.
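The weighting scheme can be checked numerically; the sample counts below are hypothetical, chosen only to create an imbalance, and are not the paper's.

```python
# Numerical check that setting each class's weight to the other class's
# frequency equalizes their contributions to the loss.
n_positive, n_negative = 400, 2083   # hypothetical imbalanced counts
N = n_positive + n_negative

freq_positive = n_positive / N
freq_negative = n_negative / N

w_positive = freq_negative   # W_positive = Freq_negative
w_negative = freq_positive   # W_negative = Freq_positive

# Both classes now contribute equally to the weighted loss:
print(w_positive * freq_positive == w_negative * freq_negative)  # True
```

In Keras, such per-class weights can be supplied to `model.fit` through its `class_weight` argument, e.g. `class_weight={0: w_negative, 1: w_positive}`.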


Fig. 5 X-ray image of COVID +ve person and COVID −ve person

5 Implementation of Proposed CNN Model In our model, we used COVID X-ray images as our dataset. The dataset for positive COVID cases has been taken from https://github.com/ieee8023/covid-chestxray-dataset and the dataset for negative COVID cases from Kaggle (COVID X-ray detection dataset [21]) (Fig. 5). An overview of the CNN model that was trained on the sampled images is given below; details of the layer configuration are provided in Table 2.

Table 2 Model configuration layer wise (basic CNN model configuration)

Layers                                  Details of layers of model
Optimization layer                      Adam optimizer (Layer_1)
Convolutional 2D layer                  64 conv filters, 3 × 3 filter size (Layer_2)
Max pooling layer for size reduction    2 × 2 kernel size for pooling (Layer_3)
Dropout layer for reducing parameters   15% (Layer_4)
Convolutional 2D layer                  32 conv filters, 5 × 5 filter size (Layer_5)
Max pooling layer for size reduction    2 × 2 kernel size for pooling (Layer_6)
Dropout layer for reducing parameters   15% (Layer_7)
Convolutional 2D layer                  265 conv filters, 3 × 3 filter size (Layer_7)
Max pooling layer for size reduction    2 × 2 kernel size for pooling (Layer_8)
FC layer                                Classification


The proposed model was trained with 5,668,097 total parameters.
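The layer-wise configuration in Table 2 can be sketched in Keras as below. The input size (224 × 224 grayscale, matching the dataset's original dimension) and the default "valid" padding are assumptions, so this sketch will not necessarily reproduce the reported parameter count of 5,668,097.

```python
from tensorflow.keras import layers, models

# Keras sketch of Table 2; layer labels follow the table.
model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),              # assumed grayscale input
    layers.Conv2D(64, (3, 3), activation="relu"),   # Layer_2
    layers.MaxPooling2D((2, 2)),                    # Layer_3
    layers.Dropout(0.15),                           # Layer_4
    layers.Conv2D(32, (5, 5), activation="relu"),   # Layer_5
    layers.MaxPooling2D((2, 2)),                    # Layer_6
    layers.Dropout(0.15),                           # Layer_7
    layers.Conv2D(265, (3, 3), activation="relu"),  # 265 filters as listed in Table 2
    layers.MaxPooling2D((2, 2)),                    # Layer_8
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),          # FC classification layer
])
model.compile(optimizer="adam",                     # Adam optimizer (Layer_1)
              loss="binary_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 1)
```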

6 Experimental Results The outcomes of our proposed model are as follows. The proposed CNN model was trained on the training dataset over 10 epochs. At the 10th epoch we obtained a good training accuracy of 90.43% and a validation accuracy of 92.19%; this can be improved further by increasing the number of epochs.

Running this model on the testing dataset, we obtained an accuracy of 89.45%. The model was also tested by uploading a COVID-positive X-ray image as input (Fig. 6), and the result was correct (Figs. 7 and 8). The training and validation graphs for accuracy and loss show that the model achieves more than 90% accuracy with minimal loss (Fig. 9).

7 Conclusion COVID-19 is a very deadly and serious disease. It has hit almost the entire world, the pandemic is still ongoing, thousands of people are exposed to it every day, and the situation is worsening. It is important to stop the spread of the virus as much as we can, and detecting COVID through X-ray images with the help of a CNN model is one step toward preventing that spread. Using convolutional neural networks, we have created a machine learning

Fig. 6 Input image to the model

Fig. 7 Code to test the model for the X-ray image

Fig. 8 Result that was presented by the model after testing it on a COVID X-ray image

Fig. 9 Training and validation accuracy and loss


model to diagnose the coronavirus quickly and efficiently. We can currently detect the coronavirus with an accuracy of more than 90%, which makes this a highly accurate model.

References
1. Filali Rotbi M, Motahhir S, El Ghzizal A. Blockchain technology for a safe and transparent Covid-19 vaccination. https://arxiv.org/ftp/arxiv/papers/2104/2104.05428.pdf
2. Shi F, Wang J, Shi J, Wu Z. Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. https://ieeexplore.ieee.org/document/9069255
3. Sharma HK, Patni JC, Ahlawat P, Biswas SS (2020) Sensors based smart healthcare framework using internet of things (IoT). Int J Sci Technol Res 9(2):1228–1234
4. Sharma HK, Choudhury T, Mor A (2022) Application of bioinformatics in telemedicine system. In: Choudhury T, Katal A, Um JS, Rana A, Al-Akaidi M (eds) Telemedicine: the computer transformation of healthcare. TELe-Health. Springer, Cham. https://doi.org/10.1007/978-3-030-99457-0_15
5. Mandal B, Choudhury T (2016) A key agreement scheme for smart cards using biometrics. In: IEEE international conference ICCCA 2016, Galgotias University
6. Wang L, Lin ZQ, Wong A (2020) COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Article number: 19549
7. Gao T. Chest X-ray image analysis and classification for COVID-19 pneumonia detection using deep CNN. https://doi.org/10.1101/2020.08.20.20178913
8. Dong D, Tang Z, Wang S, Hui H, Gong L. The role of imaging in the detection and management of COVID-19: a review. https://ieeexplore.ieee.org/document/9079648
9. Yang D, Martinez C, Visuña L et al (2021) Detection and analysis of COVID-19 in medical images using deep learning techniques. Sci Rep 11:19638. https://doi.org/10.1038/s41598-021-99015-3
10. Liu T, Siegel E, Shen D (2022) Deep learning and medical image analysis for COVID-19 diagnosis and prediction. Annu Rev Biomed Eng 24(1):179–201
11. Muhammad LJ, Algehyne EA, Usman SS et al (2021) Supervised machine learning models for prediction of COVID-19 infection using epidemiology dataset. SN Comput Sci 2:11.
https://doi.org/10.1007/s42979-020-00394-7
12. Khanchi I, Ahmed E, Sharma HK (2019) Automated framework for real-time sentiment analysis. In: 5th International conference on next generation computing technologies (NGCT-2019)
13. Chen X, Williams BM, Vallabhaneni SR, Czanner G, Williams R, Zheng Y (2019) Learning active contour models for medical image segmentation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp 11632–11640
14. Iandola F, Moskewicz M, Karayev S, Girshick R, Darrell T, Keutzer K (2014) DenseNet: implementing efficient ConvNet descriptor pyramids. arXiv:1404.1869
15. Almalki YE et al (2021) A novel method for COVID-19 diagnosis using artificial intelligence in chest X-ray images. Healthcare 9(5):522. https://doi.org/10.3390/healthcare9050522
16. Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Summers RM (2016) Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 35(5):1285–1298
17. de Carvalho RGR (2021) Multi-modal tasking for skin lesion classification using deep neural networks
18. Ide H, Kurita T (2017) Improvement of learning for CNN with ReLU activation by sparse regularization. In: 2017 International joint conference on neural networks (IJCNN), Anchorage, AK, USA, pp 2684–2691. https://doi.org/10.1109/IJCNN.2017.7966185

CAD Model for Biomedical Image Processing for Digital Assistance

91

19. Li Y, Yuan Y (2017) Convergence analysis of two-layer neural networks with ReLU activation. Advances in neural information processing systems, vol 30 (NIPS 2017) 20. Dunne RA, Campbell NA (1997) On the pairing of the softmax activation and cross-entropy penalty functions and the derivation of the softmax activation function. In: Proceedings of the 8th Australian conference on the Neural Networks, Melbourne, vol 181, p 185. Citeseer 21. https://www.kaggle.com/competitions/stat946winter2021/data

Natural Language Processing Workload Optimization Using Container Based Deployment Hitesh Kumar Sharma, Tanupriya Choudhury, Eshan Dutta, Aniruddh Dev Upadhyay, and Aarav Sharma

Abstract BERT stands for Bidirectional Encoder Representations from Transformers. It is a pre-trained model that conditions on both left and right context to learn deep bidirectional representations from unlabeled text, which allows a broad range of NLP tasks to be fine-tuned by adding just one output layer. The GLUE Benchmark is a set of resources used to train, evaluate, and analyze natural language understanding systems, with the final goal of stimulating research into general and reliable NLU systems. BERT was developed by Google AI researchers and first released in late 2018, and it has quickly become one of the most popular methods for natural language processing (NLP) tasks such as text classification, translation, and sentiment analysis. The aim of this research is to fine-tune the BERT model so that it can perform GLUE tasks in NLP workloads (Dewangan et al. in IET Commun 15:1869–1882, 2021 [1]) such as CoLA (Corpus of Linguistic Acceptability), SST-2 (Stanford Sentiment Treebank), MRPC (Microsoft Research Paraphrase Corpus), QQP (Quora Question Pairs2), MNLI (Multi-Genre Natural Language Inference), QNLI (Question-answering Natural Language Inference), RTE (Recognizing Textual Entailment) and WNLI (Winograd Natural Language Inference). These are all important NLP tasks that have been used to evaluate a variety of different models. So far, the results have been promising: the BERT model has achieved state-of-the-art performance on many GLUE tasks when compared to other pre-trained models such as XLNet and GPT-2. This suggests that the BERT model may be a good choice for applications where natural language understanding is required.

Hitesh Kumar Sharma, Tanupriya Choudhury, and Eshan Dutta contributed equally and are the first authors.

H. K. Sharma · E. Dutta · A. D. Upadhyay · A. Sharma
Department of Informatics, School of CS, University of Petroleum & Energy Studies (UPES), Dehradun 248007, Uttarakhand, India
e-mail: [email protected]

E. Dutta
e-mail: [email protected]

T. Choudhury (B)
Informatics Cluster, University of Petroleum and Energy Studies (UPES), Dehradun 248007, Uttarakhand, India
e-mail: [email protected]; [email protected]; [email protected]
Adjunct Professor, CSE Dept., Graphic Era Hill University, Dehradun 248002, Uttarakhand, India
Director Research (Honorary), The AI University, Cutbank, MT 59427, US

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_10

Keywords BERT · NLP · Language modelling · Transfer learning · Natural language processing · Containerization

1 Introduction

In its simplest form, the Transformer includes two distinct mechanisms: an encoder that reads the text input and a decoder that generates a prediction for the task. Contrary to directional models, which read the text input sequentially (left-to-right or right-to-left), the Transformer encoder reads the entire sequence of words at once. This is crucial because it lets the model learn the context of a word from both its left and right surroundings, enabling more accurate predictions. The Transformer's intricate operation is described in a paper published by Google [2]. A bidirectional recurrent neural network (BRNN), which is used in natural language processing, employs two unidirectional RNNs, one of which processes input sequences from left to right and the other from right to left. BERT carries this idea of bidirectionality further, replacing recurrence with Transformer self-attention, and has shown promising results on a wide range of natural language processing tasks. BERT employs two training methodologies, masked language modelling and next sentence prediction, together with position-aware word embeddings, which allow it to learn context-dependent relationships between words more effectively than traditional bidirectional RNN models. As a result, BERT achieves state-of-the-art results on many tasks including sentence comprehension, question answering, and machine translation.

1.1 Masked LM (MLM)

Before feeding word sequences into BERT, some words in each sequence (generally fifteen percent of the total words) are substituted with a masking token [MASK] to produce a masked input. The task is then to predict the original unmasked words at the masked positions from the surrounding context, without losing the essence of the sequence. A transformation matrix maps the numeric vector encoding at each masked position back to a token in the lexicon, and softmax is used as the activation function to calculate the probability of each candidate word (Fig. 1). The image shown in Fig. 1 is taken from [Image Source: https://www.lyrn.ai/2018/11/07/explained-bert-state-of-the-art-language-model-for-nlp/] as it is a standard BERT Transformer Encoder Framework for NSP. The BERT loss function considers only the prediction of the masked values and disregards the prediction of the non-masked words. As a result, the model converges more slowly than directional models, but this is usually offset by its improved context awareness.
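As an illustrative sketch of the masking step described above (the helper name `mask_tokens` is hypothetical, and the real BERT recipe additionally replaces some selected tokens with random or unchanged tokens rather than always using [MASK]):

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Replace roughly mask_prob of the tokens with [MASK] and record the
    original tokens at those positions as the MLM prediction targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # the model must recover this token
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(tokens)
```

The loss is then computed only over the positions stored in `targets`, matching the behaviour described above where non-masked words are ignored.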

Fig. 1 Transformer architecture of BERT

Fig. 2 BERT language framework for NLP


1.2 Next Sentence Prediction (NSP)

For this approach, the incoming training corpus is arranged into sentence pairs such that, for 50% of the pairs, the second sentence actually follows the first in the original text, while for the remaining 50% the second sentence is chosen at random from the corpus. The objective is to predict whether the second sentence follows the first, where the first sentence is given as the premise; the random 50% split is what makes Next Sentence Prediction a non-trivial task. Before entering the model, the input is processed in the following way to help the model distinguish between the two sentences during training (Fig. 2). The following steps decide whether the second sentence is really related to the first sentence as its follower: the Transformer model processes the complete input sequence; using a straightforward classification layer, the [CLS] token's output is transformed into a 2×1 shaped vector (via learned matrices of weights and biases); and softmax is used to calculate the probability of IsNextSequence. Masked LM and Next Sentence Prediction are coupled when training the BERT model, with the objective of minimizing the combined loss function of the two techniques.
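A minimal sketch of the 50/50 pair construction described above (the function name and toy corpus are assumptions for illustration, not taken from the paper):

```python
import random

def make_nsp_pairs(sentences, seed=0):
    """For each sentence, with probability 0.5 pair it with its true follower
    (label 1 = IsNextSequence), otherwise with a random sentence (label 0)."""
    rng = random.Random(seed)
    pairs = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            pairs.append((sentences[i], sentences[i + 1], 1))   # true follower
        else:
            pairs.append((sentences[i], rng.choice(sentences), 0))  # random
    return pairs

corpus = ["s0", "s1", "s2", "s3", "s4", "s5"]
pairs = make_nsp_pairs(corpus)
```

During training, BERT's [CLS] output is asked to recover the third element (the label) of each pair.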

1.3 NLP and GLUE

The GLUE benchmark is a set of resources for training, evaluating, and analyzing natural language understanding systems [3]. It consists of: a benchmark of nine sentence- or sentence-pair language understanding tasks built on existing datasets and chosen to cover a broad variety of dataset sizes, text genres, and degrees of difficulty; a diagnostic dataset for evaluating and analyzing model performance with respect to a wide range of linguistic phenomena; a public leaderboard for tracking benchmark performance; and a dashboard for viewing model performance on the diagnostic set. The BERT [4] transformer model can be employed to solve a variety of natural language processing [5] problems. From the General Language Understanding Evaluation benchmark, we may fine-tune BERT for a multitude of tasks:

• CoLA is a dataset which checks whether a sentence is grammatically acceptable or not. It stands for Corpus of Linguistic Acceptability [6].
• SST-2 is a dataset which predicts the sentiment of a given sentence, whether positive or negative. It stands for Stanford Sentiment Treebank.
• MRPC is a dataset which checks whether two sentences are paraphrases of one another. It stands for Microsoft Research Paraphrase Corpus.
• QQP is a dataset which checks whether two questions are semantically equivalent. It stands for Quora Question Pairs2.


• MNLI is a dataset whose aim is to determine whether the premise entails the hypothesis, contradicts the hypothesis, or neither entails nor contradicts it. It stands for Multi-Genre Natural Language Inference.
• QNLI is a dataset which determines whether the sentence given as context contains the answer to the question asked. It stands for Question-answering Natural Language Inference.
• RTE is a dataset which checks whether a given hypothesis is entailed by the given sentence. It stands for Recognizing Textual Entailment.
• WNLI is a dataset whose aim is to check whether a sentence with some pronouns substituted for others is still entailed by the sentence given originally. It stands for Winograd Natural Language Inference.
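The tasks above split into single-sentence and sentence-pair problems, which determines how many text inputs the fine-tuned model receives. A small sketch (the helper and lookup table are illustrative, not part of GLUE itself):

```python
# Number of text inputs each GLUE task feeds to the model:
# CoLA and SST-2 classify a single sentence; the other six compare two.
GLUE_INPUTS = {
    "CoLA": 1, "SST-2": 1,
    "MRPC": 2, "QQP": 2, "MNLI": 2,
    "QNLI": 2, "RTE": 2, "WNLI": 2,
}

def input_columns(task):
    """Return how many sentences the model receives for a given GLUE task."""
    return GLUE_INPUTS[task]
```

This distinction matters later, since a two-sentence task needs the [SEP]-joined pair encoding while a one-sentence task does not.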

2 Literature Review

GLUE is a state-of-the-art natural language understanding (NLU) platform that allows developers to build models that can analyze any text, regardless of task, genre or data. GLUE has been shown to be more effective than other NLU platforms because it incorporates processes that probe and analyze language correctly and that are independent of any particular task, genre or dataset. This makes GLUE versatile and adaptable, allowing it to be used in a wide range of applications. The authors offer the General Language Understanding Evaluation (GLUE) benchmark to achieve this goal: a collection of datasets, processes and tools for gauging and rating the robustness, performance and correctness of a language model over a plethora of existing natural language tasks. The poor results of their best model trained on individual tasks, on the other hand, highlight the need for stronger generic NLU systems [7]. "Towards a Deep and Unified Understanding of Deep Neural Models in NLP" explains how the hidden and inner layers of a deep neural network, be it an RNN, CNN, ANN or a transformer [8, 9], understand the input words and sentences, and it expresses a singular metric for a quantitative explanation of this.

3 Dataset Characteristics

The datasets are a conglomeration of data collected in independent research studies by a multitude of organizations and institutions (Table 1). The datasets are now used as a benchmark to identify, regulate and evaluate models in the Natural Language Understanding (NLU) domain [10]. These datasets give a well-defined framework for a plethora of natural language processing tasks such as sentiment analysis, grammar checking, Boolean analysis, hypothesis testing, logical analysis, question-logic theory, paraphrasing, text chaining and similarity testing. These tasks can be


Table 1 List of elements of models

Model name | Train elements | Validation elements | Test elements | Dataset source
CoLA       | 8551           | 1063                | 1043          | https://nyu-mll.github.io/CoLA/cola_public_1.1.zip (Ref.: [11])
SST2       | 67,349         | 1821                | 872           | https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2 (Ref.: [12])
MRPC       | 3668           | 408                 | 1725          | https://www.microsoft.com/en-us/download/confirmation.aspx?id=52398 (Ref.: [13])
QQP        | 363,846        | 40,430              | 390,965       | https://www.kaggle.com/competitions/quora-question-pairs/data (Ref.: [14])
QNLI       | 104,743        | 5,463               | 5,463         |

Fig. 3 CoLA dataset specification (bar chart of the weightage of sentences across sentence types: Simple, Adjunct, Comp Clauses, to-VP, Arg Altern, Binding, Questions, Violations)

used to test the efficiency and robustness of any NLP model. These datasets are maintained by the TensorFlow community and updated every night. Because the datasets are very new, their values may change nightly; hence testing and modelling must be done on a specific version of a dataset. Additionally, being new and frequently updated, these datasets give researchers a chance to find discrepancies and remove them, improving dataset quality (Fig. 3).

4 Methodology

The goal is to achieve GLUE tasks such as CoLA, SST-2, MRPC, QQP, etc., in the domain of natural language processing using the power of transfer learning with the BERT model. The steps are:

• Identifying the most suitable BERT model and architecture.
• Choosing GLUE tasks for BERT and finding a suitable dataset corpus.


Fig. 4 Architecture of the NLP based transformer model

• Preprocessing and augmenting the textual corpus for use in BERT.
• Fine-tuning BERT for the chosen task (e.g., CoLA, SST-2, MRPC, QQP, etc., depending on the corpus).
• Saving the trained model and containerizing it.

5 Implementation

The aim and purpose of the research work is to create a single architecture that can be tested on the benchmark datasets of the GLUE framework. We create one transformer-based architecture for NLP tasks and train it on the different datasets CoLA, SST2, MRPC, QQP, MNLI, QNLI, WNLI and RTE, hoping to achieve good results on all datasets irrespective of which is chosen. Any NLP-based transformer model architecture has to have three parts (Fig. 4): a text preprocessor, a BERT-based encoder, and Keras layers for regularization. The model architecture used during training varied along the way but is based on the architecture above (Fig. 5). The actual architectures from training are shown below: the model for two-sentence input (Fig. 6) and the model for one-sentence input (Fig. 7).
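The three-part structure described above can be sketched as a simple composition. The stage implementations here are toy stand-ins (in the actual work they would be a BERT tokenizer/preprocessor, a BERT encoder, and a Keras classification head with regularization):

```python
class Pipeline:
    """Compose preprocessing, encoding, and a classification head in order."""

    def __init__(self, *stages):
        self.stages = stages

    def __call__(self, text):
        out = text
        for stage in self.stages:
            out = stage(out)
        return out

# Toy stand-ins for the three stages (illustrative only):
preprocess = lambda s: s.lower().split()        # text preprocessor
encode = lambda toks: [len(t) for t in toks]    # stand-in "encoder"
head = lambda vec: int(sum(vec) % 2)            # stand-in binary classifier

model = Pipeline(preprocess, encode, head)
```

Swapping the head (binary for CoLA/SST-2, three-way for MNLI, and so on) while keeping the preprocessor and encoder fixed is what makes a single architecture reusable across the GLUE tasks.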

6 Experimental Results

Our model training parameters are based on:


Fig. 5 BERT model architecture

Fig. 6 Model for 2 sentence input

Fig. 7 Model for 1 sentence input

Loss (Sparse Categorical Cross Entropy) [15] and Accuracy (Sparse Categorical Accuracy) [16]. Our validation parameters are: Loss (Sparse Categorical Cross Entropy), Accuracy (Sparse Categorical Accuracy), and the confusion matrix. The confusion matrices and learning curves for the different models are shown below (Fig. 8). As the charts show, different datasets give different results, as expected. The confusion matrix, validation accuracy and loss curve are shown for CoLA, SST2, MRPC, QQP and QNLI. As per the graphs in Fig. 8, SST2 shows the highest accuracy, but its loss increases as well, whereas MRPC shows increasing validation accuracy with decreasing loss.
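For reference, the two training metrics named above can be written out directly. This is a plain-Python sketch of what the Keras metrics compute, assuming each row of `y_pred` is already a valid probability distribution:

```python
import math

def sparse_categorical_cross_entropy(y_true, y_pred):
    """Mean negative log-probability assigned to the true class index."""
    losses = [-math.log(row[label]) for label, row in zip(y_true, y_pred)]
    return sum(losses) / len(losses)

def sparse_categorical_accuracy(y_true, y_pred):
    """Fraction of rows whose argmax matches the true class index."""
    hits = [row.index(max(row)) == label for label, row in zip(y_true, y_pred)]
    return sum(hits) / len(hits)

y_true = [0, 1]                      # integer labels, not one-hot
y_pred = [[0.9, 0.1], [0.2, 0.8]]    # model-predicted probabilities
loss = sparse_categorical_cross_entropy(y_true, y_pred)
acc = sparse_categorical_accuracy(y_true, y_pred)
```

The "sparse" variants are used because the GLUE labels are integer class indices; the dense variants would require one-hot encoded targets instead.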

Fig. 8 Confusion matrix and loss curve for various models (panels: CoLA, SST2, MRPC, QQP, QNLI)

7 Conclusion

The application code, configuration, and dependencies are packaged by the container management tool Docker into a portable image that can be shared and run on any system or platform. Since they all use the same underlying services, Docker enables us to containerize numerous apps and run them on the same device or system. The trained and fine-tuned deep learning BERT model can be packaged as a Docker image, and it can then be made available as a web-accessible console or as an API.
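As a sketch of this packaging step (file names such as `serve.py`, the `saved_model/` directory, and the dependency list are assumptions for illustration, not taken from the paper), a Dockerfile might look like:

```dockerfile
# Hypothetical Dockerfile: package the fine-tuned BERT model with an API server.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # e.g. tensorflow, flask
COPY saved_model/ ./saved_model/                     # exported fine-tuned BERT
COPY serve.py .                                      # loads the model, exposes an API
EXPOSE 8080
CMD ["python", "serve.py"]
```

The image could then be built with `docker build -t bert-glue .` and run on any Docker host with `docker run -p 8080:8080 bert-glue`.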

References

1. Dewangan BK, Agarwal A, Choudhury T, Pasricha A (2021) Workload aware autonomic resource management scheme using grey wolf optimization in cloud environment. IET Commun 15:1869–1882
2. Mridha MF, Lima AA, Nur K, Das SC, Hasan M, Kabir MM (2021) A survey of automatic text summarization: progress, process and challenges. IEEE Access
3. Zafrir O, Boudoukh G, Izsak P, Wasserblat M (2019) Q8BERT: quantized 8bit BERT. In: 2019 Fifth workshop on energy efficient machine learning and cognitive computing—NeurIPS Edition (EMC2-NIPS)
4. Maheshwarkar A, Kumar A, Gupta M (2021) Analysis of written interactions in open-source communities using RCNN. In: 2021 3rd International conference on advances in computing, communication control and networking (ICAC3N)
5. Gulia S, Choudhury T (2016) An efficient automated design to generate UML diagram from natural language specifications. In: 2016 6th International conference—cloud system and big data engineering (Confluence), Noida, India, pp 641–648. https://doi.org/10.1109/CONFLUENCE.2016.7508197
6. Babaeianjelodar M, Lorenz S, Gordon J, Matthews J, Freitag E (2020) Quantifying gender bias in different corpora. In: Companion proceedings of the web conference 2020
7. Dewangan BK, Agarwal A, Choudhury T, Pasricha A (2020) Cloud resource optimization system based on time and cost. Int J Math Eng Manag Sci 5:758–768
8. Biswas R et al (2012) A framework for automated database tuning using dynamic SGA parameters and basic operating system utilities. Database Syst J III(4)
9. Kshitiz K et al (2017) Detecting hate speech and insults on social commentary using NLP and machine learning. Int J Eng Technol Sci Res 4(12):279–285
10. Kumar S, Dubey S, Gupta P (2015) Auto-selection and management of dynamic SGA parameters in RDBMS. In: 2015 2nd International conference on computing for sustainable global development (INDIACom), pp 1763–1768
11. Warstadt A, Singh A, Bowman SR (2018) Neural network acceptability judgments. arXiv:1805.12471
12. Socher R, Perelygin A, Wu J, Chuang J, Manning C, Ng A, Potts C (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In: Conference on empirical methods in natural language processing (EMNLP 2013)
13. Dolan WB, Brockett C. Automatically constructing a corpus of sentential paraphrases. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/I05-50025B15D.pdf
14. Rajpurkar P, Zhang J, Lopyrev K, Liang P (2016) SQuAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 2383–2392
15. Zimnickas T, Vanagas J, Dambrauskas K, Kalvaitis A (2020) A technique for frequency converter-fed asynchronous motor vibration monitoring and fault classification, applying continuous wavelet transform and convolutional neural networks. Energies
16. Sharma HK (2013) E-COCOMO: the extended cost constructive model for cleanroom software engineering. Database Syst J 4(4):3–11

NDVI Indicator Based Land Use/Land Cover Change Analysis Using Machine Learning and Geospatial Techniques at Rupnarayan River Basin, West Bengal, India Krati Bansal, Tanupriya Choudhury, Anindita Nath, and Bappaditya Koley

Abstract The normalized difference vegetation index (NDVI) is an essential classification method for identifying changes in the dynamics of land use/land cover (LULC) and planning for sustainable services. Machine learning and geospatial techniques are the most effective tools for change detection of LULC. The study was executed to assess dynamic changes of LULC with the help of NDVI classification using machine learning and geospatial techniques. Landsat 5 and 8 images from 2000 and 2020 (20 years apart) are used to extract NDVI values. The NDVI values are classified into five categories, and two NDVI maps, for 2000 and 2020, are generated. Five LULC classes are identified: Water (Deep and Shallow), Built-up/River Sand, Fallow/Wasteland, Agricultural Land/Crop Land, and Dense Vegetation. The present study shows that the area's water areas decreased from 4.1% in 2000 to 1.9% in 2020, and Built-up/River Sand also decreased from 8.1% to 1.7% over the same period. The dense vegetation area was found at 11.8% in 2020, and Agroforestry/Sparse Vegetation areas increased from 2.1 to 34.7% in the last 20 years.

All authors contributed equally and are the first authors.

K. Bansal
School of Computer Science, SoCS, University of Petroleum and Energy Studies (UPES), Dehradun, Uttarakhand, India
e-mail: [email protected]

T. Choudhury (B)
Informatics Cluster, University of Petroleum and Energy Studies (UPES), Dehradun 248007, Uttarakhand, India
e-mail: [email protected]; [email protected]; [email protected]
Adjunct Professor, CSE Dept., Graphic Era Hill University, Dehradun 248002, Uttarakhand, India
Director Research (Honorary), The AI University, Cutbank, MT 59427, US

A. Nath · B. Koley
Department of Geography, Bankim Sardar College, South 24 Parganas, West Bengal, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_11


Keywords Machine learning · Geospatial techniques · NDVI · Multi-spectral images and sensor · Rupnarayan River

1 Introduction

Worldwide, our natural surroundings are very precious. They change and are damaged both by nature and by human activities, and they should be conserved for future generations. Natural hazards, namely floods, cyclones, and droughts, change the Earth's surface and have also affected the climate [1]. Several studies [2–8] reveal that human activities are causing global warming, a rise in average temperatures around the world over this century. Detecting changes in the Earth's landscape through analytical study helps to build understanding and to take precautions in time [4]. The changing rate of LULC plays a significant role in studying global change. Different techniques for LULC mapping and change-pattern analysis have been developed throughout the globe in the last few decades [5–8]. Climate change is a global phenomenon that has been linked to anthropogenic activities such as burning fossil fuels, deforestation and industrial agriculture, and these activities are having a profound effect on LULC. Dynamic changes of LULC due to increased human activity are leading to an increase in natural hazards such as flooding, drought, and tropical cyclones [9, 10]. Presently, earth observation data are widely accepted as a source from which to extract valuable information on land use and land cover [11]. Earth observation systems and geospatial techniques are becoming increasingly important for assessing land use and land cover change over large areas [7, 11, 12]. The spatial and spectral resolution of remote sensing data enables such analysis to work at the micro level [13, 14].
Several methods, such as object-based classification [14, 15], comparison of spectral indices and principal-component-based methods [16], cross-correlation analysis [17], image fusion, and post-classification comparison [9, 18, 19], have been used for monitoring the change dynamics of LULC with geospatial techniques and earth observation data [5]. This paper presents the variation of the vegetation index of the Rupnarayan River Basin using geospatial and machine learning techniques; various previous researchers have applied these techniques to other river basins. This paper aims to show the spatiotemporal changes in LULC of the Rupnarayan River Basin from 2000 to 2020. Landsat 8 provides data over the period with suitable spectral and spatial resolution. The paper's prime objective is identifying and assessing LULC vegetation through the NDVI indicator for the Rupnarayan basin. This study can be significant for flood hazard and coastal management in the coastal environment.


2 Study Area

The Rupnarayan River Basin is an important area of study due to its diverse geography, ecology and human activities. It is located in the eastern part of India and covers around 10,797 km². The river begins in the Chhota Nagpur plateau foothills at Dhaleswari, near Purulia. It flows south-easterly past the town of Bankura, where it is known as the Dwarakeswar River. The Shilabati River merges with it near Bandal, after which it takes the name Rupnarayan, and it is joined by the Hooghly River at Geonkhal. The basin experiences a typical tropical monsoonal climate, characterized by hot, humid summers with heavy rainfall and mild winters that are relatively dry [20]. The gradient of the lower basin area is almost gentle to flat, composed of a lower alluvial plain and deltaic floodplain. The elevation in this area is 10 m, making it an ideal location for human settlements [20]. The angle of the Shilabati River with respect to the Rupnarayan River is 230° [21]. Geographically, the Rupnarayan River is a sensitive area in terms of environmental features; the river flows through three types of topography: Plain, Rarh, and Deltaic. The Rupnarayan River flows through a densely populated, agricultural area, with only a minimal area occupied by vegetation (Fig. 1).

Fig. 1 Location of the Rupnarayan River Basin (RRB)


Table 1 Data sources used for NDVI analysis of the study area

Data                 | Date acquired | Path/Row | Resolution       | Source
Landsat 8-9 OLI/TIRS | 2020-05-15    | 139/044  | 30 m; 15 m (PAN) | USGS
                     | 2020-04-06    | 138/045  |                  |
Landsat 5 TM         | 2000-01-26    | 138/045  | 30 m             | USGS
                     | 2000-09-29    | 139/044  |                  |
3 Materials and Methods

3.1 Data Collection

Earth observation data were derived from the United States Geological Survey (https://earthexplorer.usgs.gov/) for the present study. Landsat 5 TM (30 m) data were used for the NDVI analysis of the year 2000, and Landsat 8/9 OLI/TIRS (30 m) data were applied for the NDVI analysis of the year 2020. The UTM projection and WGS84 datum were used for geo-referencing the earth observation data. Detailed information on the data sources is listed in Table 1.

3.2 Applied Methodology

A statistical technique has been implemented to analyze spatial and temporal changes in vegetation; land cover change analysis is done on geo-satellite images using classification methods. Landsat 8 and Landsat 5 TM images are two of the most widely used satellite imagery sources for Earth observation. Satellite images of the Rupnarayan River Basin were collected and combined using a mosaic method in ArcGIS. The resolution of these images is 30 m per pixel, and they have been classified into six areas based on the digital number (DN) of landscape elements. NDVI maps covering the last 20 years were prepared on the ArcGIS 10.8 platform, and LULC classification was done using NDVI classification methods. The present study has considered six classes: water, built-up, wasteland, agricultural area, sparse vegetation, and dense vegetation (Fig. 2).

Data Processing and Analysis

Satellite data have been used for monitoring land cover and its changes since 1970 [22]. Multi-spectral images of the study area for 2000 and 2020 were derived from the open-source USGS archive. After collecting all data, the study area was clipped using a vector image (shapefile). From the satellite images, Bands 5 and 4 of Landsat 8/9 OLI/TIRS (2020) and Bands 4 and 3 of Landsat 5 TM (2000) were used to understand the actual changes of the LULC. After extraction of the study area from the selected bands, machine learning techniques were used to prepare NDVI maps and analyze the overall changes of LULC. The present study used the


Fig. 2 Illustration of the adopted methodology of the study area

multi-spectral images of the Rupnarayan River Basin to calculate the Normalized Difference Vegetation Index values. After processing, the Landsat 8 OLI (bands 5, 4) and Landsat 5 TM (bands 4, 3) images were used for calculation of the NDVI of 2020 and 2000, respectively, using geospatial techniques that detect the change dynamics of the LULC area. NDVI has been calculated from the at-sensor spectral radiance of the red and near-infrared bands of the Landsat 8 OLI and Landsat 5 TM images. Theoretically, NDVI values vary between −1.0 and +1.0. NDVI represents the state of vegetation and can be computed with the help of a GIS tool [23]. It is a widely used geospatial tool to measure the health of vegetation in an area, calculated from the measured canopy reflectance in the red and near-infrared bands. This computation examines the ratio between the red (visible) and near-infrared bands of the Landsat 8 OLI and Landsat 5 TM images, and it is used here to identify areas containing significant changes in LULC, as well as other land cover features. The NDVI maps for 2000 and 2020 were calculated using the raster calculator in the spatial analysis tools. Based on the ratio between the red and near-infrared bands, which are extremely sensitive to green biomass [23], NDVI is defined as:

NDVI = (NIR − RED) / (NIR + RED)

where 'NIR' is the near-infrared band of the satellite image and 'RED' is the visible red band.


The NDVI maps for the years 2000 and 2020 were prepared using the raster calculator of the spatial analysis tool. Based on the computation over both bands (red and infrared), the output NDVI map was produced as a raster layer in gray scale, with index values ranging between −1.0 and +1.0, and the thematic colors were then changed as per the present study's requirements. NDVI analyses the vegetation that reflects in the satellite bands and detects the change dynamics in vegetation cover; the spatial and spectral resolution of the remote sensing data enables the analysis to work at the micro level. Because the main problem of this research is river-related, we used the multi-spectral images of the Rupnarayan River Basin to calculate the NDVI values from the red band (band 4) and the near-infrared band (band 5). NDVI is calculated from the canopy reflectance in the red and near-infrared bands, two different wavelengths of light that can be detected by satellite imagery, and it has been found useful for determining areas with healthy vegetation, such as forests or agricultural fields.

For Landsat 8, NDVI is estimated as

NDVI = (Band 5 − Band 4) / (Band 5 + Band 4)

For Landsat 5, NDVI is estimated as

NDVI = (Band 4 − Band 3) / (Band 4 + Band 3)

After analysis, a raster map was built with an index ranging from −1 to +1, and a color was selected for each band as required. A flow chart shows the entire set of steps performed to analyze the difference map of the Rupnarayan River Basin.
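The band arithmetic above is straightforward to state in code. Below is a minimal per-pixel sketch in plain Python over flattened band arrays (the actual study performed this in the ArcGIS raster calculator, and the guard against a zero denominator is an added assumption, not part of the published workflow):

```python
def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - RED) / (NIR + RED); 0.0 where both bands are 0."""
    values = []
    for n, r in zip(nir, red):
        total = n + r
        values.append((n - r) / total if total else 0.0)
    return values

# Toy reflectance values: vegetation is bright in NIR and dark in red.
nir = [0.50, 0.30, 0.10]
red = [0.08, 0.25, 0.10]
index = ndvi(nir, red)
```

For Landsat 8 the `nir`/`red` arrays would come from bands 5 and 4, and for Landsat 5 from bands 4 and 3, matching the two formulas above.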

4 Result and Discussion

Agriculture is the major resource in the study area. By calculating the ratio between the red and near-infrared wavelengths, it is possible to determine how much vegetation there is in an area. Satellite images covering the 20 years of the present study have been used to measure changes in land cover over time and to assess the impact on local ecosystems. LULC classification was done through NDVI and classified into six categories in the study area. Generally, six NDVI classes were classified for the present study's

NDVI Indicator Based Land Use/Land Cover Change Analysis Using …


analysis. In NDVI 2020, the NDVI values range from −0.204 to 0.591 and have been classified into six LULC classes: Water (−0.204 to −0.01), Built-up and/or River Sand (−0.011 to 0.15), Fallow/Wasteland (0.151–0.25), Agricultural Land/Crop Land (0.251–0.35), Agroforestry/Sparse Vegetation (0.351–0.45), and Dense Vegetation (0.451–0.591). In NDVI 2000, the NDVI values range from −0.371 to 0.474 and are likewise classified into six LULC classes: Water (−0.371 to 0.0), Built-up/River Sand (0.011–0.15), Fallow/Wasteland (0.151–0.25), Agricultural Land/Crop Land (0.251–0.35), Agroforestry/Sparse Vegetation (0.351–0.45), and Dense Vegetation (0.451–0.474). The LULC analysis based on the 2000 and 2020 NDVI reveals significant changes in the study area. The outcome maps show that Agricultural Land/Crop Land occupied the maximum area (60.7%) in 2000 but declined rapidly, from 60.7% (2000) to 36.5% (2020). Agroforestry/Sparse Vegetation grew rapidly, from 2.1 to 34.7%, between 2000 and 2020, while the water area was reduced from 4.1 to 1.9% over the same period (Table 2). Figure 3 portrays the total changes in percentage terms and shows the significant shifts between 2000 and 2020: sparse vegetation increases from about 2 to 35%, dense vegetation increases from 0 to 12%, agricultural areas reduce from 61 to 36%, and the built-up/river sand area reduces from 8.1 to 1.7% (Fig. 3). Figure 4 shows the modification of the LULC areas between 2000 and 2020 and reveals that Agroforestry/Sparse Vegetation and Dense Vegetation increased over the last two decades. On the other hand, significant changes have been noticed in

Table 2 Area statistics for various LULC features of the study area

| NDVI range (2000) | Area in sq. km (2000) | Area (%) | LULC class | NDVI range (2020) | Area in sq. km (2020) | Area (%) |
| −0.371 to −0.01 | 440.20 | 4.1 | Water (deep and shallow) | −0.204 to −0.01 | 204.83 | 1.9 |
| −0.011 to 0.15 | 872.46 | 8.1 | Built-up/river sand | −0.011 to 0.15 | 178.39 | 1.7 |
| 0.151–0.25 | 2712.3 | 25.1 | Fallow/wasteland | 0.151–0.25 | 1451.5 | 13.4 |
| 0.251–0.35 | 6549.46 | 60.7 | Agricultural land/crop land | 0.251–0.35 | 3941.55 | 36.5 |
| 0.351–0.45 | 222.48 | 2.1 | Agroforestry/sparse vegetation | 0.351–0.45 | 3748.93 | 34.7 |
| 0.451–0.474 | 0.14 | 0.0 | Dense vegetation | 0.451–0.591 | 1271.84 | 11.8 |
| Total | 10,797.04 | 100.0 | | Total | 10,797.04 | 100.0 |

K. Bansal et al.

Fig. 3 Distribution of various LULC features of the Rupnarayan River Basin (bar chart of area percentages by LULC feature for 2000 and 2020; y-axis: Area in %)

Water, Built-up/River Sand, Fallow/Wasteland, and Agricultural Land/Crop Land areas, which have been drastically reduced over the last 20 years (Fig. 5).
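The percentage figures quoted above can be re-derived directly from the square-kilometre areas in Table 2; the sketch below simply recomputes the 2020 shares and the net changes from those tabulated values:

```python
# Area statistics (sq. km) for 2000 and 2020, as tabulated in Table 2.
area_2000 = {"Water": 440.20, "Built-up/River Sand": 872.46,
             "Fallow/Wasteland": 2712.3, "Agricultural Land/Crop Land": 6549.46,
             "Agroforestry/Sparse Vegetation": 222.48, "Dense Vegetation": 0.14}
area_2020 = {"Water": 204.83, "Built-up/River Sand": 178.39,
             "Fallow/Wasteland": 1451.5, "Agricultural Land/Crop Land": 3941.55,
             "Agroforestry/Sparse Vegetation": 3748.93, "Dense Vegetation": 1271.84}

total = sum(area_2000.values())  # basin area, ~10,797 sq. km in both years

# Net change per class (sq. km) and 2020 share of the basin (%).
change = {c: round(area_2020[c] - area_2000[c], 2) for c in area_2000}
share_2020 = {c: round(100 * a / total, 1) for c, a in area_2020.items()}
```

For example, agricultural land changes by 3941.55 − 6549.46 = −2607.91 sq. km, and its 2020 share works out to 36.5%, matching the figures in the text.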

Fig. 4 Change directions of the LULC features of the Rupnarayan River Basin (bar chart of area in sq. km: LULC 2000, LULC 2020, and change)


Fig. 5 LULC map of the Rupnarayan River Basin: (a) 2000 and (b) 2020


5 Conclusion

The present study examined the LULC of the Rupnarayan River Basin using NDVI indicators, spatial analysis tools, and machine learning applications, with the aim of measuring the changes in the basin's LULC dynamics over time. Earth observation data gives researchers insights into LULC changes that are not easily obtained from other sources, and it has been utilized in a variety of ways to identify changes in LULC over the study period. The results demonstrated that the agricultural area was drastically reduced between 2000 and 2020, and the LULC analysis also showed that the fallow/wasteland area changed rapidly over the period. Finally, this research documents the changes in the river basin. LULC maps are essential tools for decision-makers in identifying and protecting flood-susceptible zones: they provide a comprehensive overview of the landscape, showing how land is being used or managed by humans, and can thereby help identify areas that are particularly vulnerable to floods due to physical characteristics such as low-lying terrain or a high water table. These results will also assist in devising better mitigation methods for the at-risk zones of the river basin.

References

1. Seneviratne S, Nicholls N, Easterling D, Goodess C, Kanae S, Kossin J, Zwiers FW (2012) Changes in climate extremes and their impacts on the natural physical environment. https://academiccommons.columbia.edu/doi/10.7916/d8-6nbt-s431
2. USGCRP (2017) Climate science special report: fourth national climate assessment, vol 1. In: Wuebbles DJ, Fahey DW, Hibbard KA, Dokken DJ, Stewart BC, Maycock TK (eds) U.S. Global Change Research Program, Washington, DC, USA, 470 pp. https://doi.org/10.7930/J0J964J6
3. Friedlingstein P, Jones MW, O’Sullivan M, Andrew RM, Hauck J, Peters GP, Peters W, Pongratz J, Sitch S, Le Quéré C, Bakker DCE, Canadell JG, Ciais P, Jackson RB, Anthoni P, Barbero L, Bastos A, Bastrikov V, Becker M, Zaehle S (2019) Global carbon budget 2019. Earth Syst Sci Data 11(4):1783–1838. https://doi.org/10.3929/ethz-b-000385668
4. Hu Y, Zhang Q, Zhang Y, Yan H (2018) A deep convolution neural network method for land cover mapping: a case study of Qinhuangdao, China. Remote Sens 10(12):2053. https://doi.org/10.3390/rs10122053
5. Abebe G, Getachew D, Ewunetu A (2021) Analysing land use/land cover changes and its dynamics using remote sensing and GIS in Gubalafito district, Northeastern Ethiopia. SN Appl Sci 4:30. https://doi.org/10.1007/s42452-021-04915-8
6. Arulbalaji P (2019) Analysis of land use/land cover changes using geospatial techniques in Salem district, Tamil Nadu, South India. SN Appl Sci 1:462. https://doi.org/10.1007/s42452-019-0485-5
7. Nath A, Koley B, Saraswati S, Bhatta B, Ray BC (2021) Shoreline change and its impact on land use pattern and vice versa—a critical analysis in and around Digha area between 2000 and 2018 using geospatial techniques. Pertanika J Sci Technol 29(1):331–348
8. Nath A, Koley B, Saraswati S, Ray BC (2020) Identification of the coastal hazard zone between the areas of Rasulpur and Subarnarekha estuary, east coast of India using multi-criteria evaluation method. Model Earth Syst Environ 7:2251–2265. https://doi.org/10.1007/s40808-020-00986-5
9. Hassan M, Ding W, Shi Z, Zhao S (2016) Methane enhancement through co-digestion of chicken manure and thermo-oxidative cleaved wheat straw with waste activated sludge: a C/N optimization case. Biores Technol 211:534–541
10. Dwivedi RS, Sreenivas K, Ramana KV (2005) Land-use/land-cover change analysis in part of Ethiopia using Landsat thematic mapper data. Int J Remote Sens 26(7):1285–1287. https://doi.org/10.1080/01431160512331337763
11. Mishra PK, Rai A, Rai SC (2020) Land use and land cover change detection using geospatial techniques in the Sikkim Himalaya, India. Egypt J Remote Sens Space Sci 23(2):133–143
12. Alam A, Bhat MS, Maheen M (2020) Using Landsat satellite data for assessing the land use and land cover change in Kashmir valley. GeoJournal 85:1529–1543. https://doi.org/10.1007/s10708-019-10037-x
13. Liang S, Wang J (2020) A systematic view of remote sensing. Adv Remote Sens 1–57. https://doi.org/10.1016/b978-0-12-815826-5.00001-5
14. Nath A, Koley B, Saraswati S, Choudhury T, Um JS (2022) Geospatial analysis of short term shoreline change behavior between Subarnarekha and Rasulpur estuary, east coast of India using intelligent techniques (DSAS). GeoJournal. https://doi.org/10.1007/s10708-022-10683-8
15. Dingle Robertson L, King DJ (2011) Comparison of pixel- and object-based classification in land cover change mapping. Int J Remote Sens 32:1505–1529. https://doi.org/10.1080/01431160903571791
16. Yanan L, Yuliang Q, Yue Z (2011) Dynamic monitoring and driving force analysis on rivers and lakes in Zhuhai City using remote sensing technologies. Procedia Environ Sci 10:2677–2683. https://doi.org/10.1016/j.proenv.2011.09.416
17. Jones DA, Hansen AJ, Bly K (2009) Monitoring land use and cover around parks: a conceptual approach. Remote Sens Environ 113:1346–1356. https://doi.org/10.1016/j.rse.2008.08.018
18. Koley B, Nath A, Saraswati S, Ray BC (2020) Assessment of 2016 Mantam landslide at Mangan, north Sikkim Himalayas using geospatial techniques. J Sci Res 64(2):1–9. https://doi.org/10.37398/JSR.2020.640201
19. Koley B, Nath A, Saraswati S, Chatterjee U, Bandyopadhyay K, Bhatta B, Ray BC (2022) Assessment of spatial distribution of rain-induced and earthquake-triggered landslides using geospatial techniques along North Sikkim Road Corridor in Sikkim Himalayas, India. GeoJournal. https://doi.org/10.1007/s10708-022-10585-9
20. Maity SK, Maiti R (2018) Introduction. In: Sedimentation in the Rupnarayan River. SpringerBriefs in Earth Sciences. Springer, Cham. https://doi.org/10.1007/978-3-319-62304-7_1
21. Das B, Bandyopadhyay A (2015) Flood risk reduction of Rupnarayana River, towards disaster management—a case study at Bandar of Ghatal block in Gangetic delta. J Geogr Nat Disasters 5:1. https://doi.org/10.4172/2167-0587.1000135
22. Lillesand TM, Kiefer RW, Chipman JW (2004) Remote sensing and image interpretation, 5th edn. Wiley, New York
23. Pande CB, Moharir KN, Khadri SFR (2021) Assessment of land-use and land-cover changes in Pangari watershed area (MS), India, based on the remote sensing and GIS techniques. Appl Water Sci 11:96. https://doi.org/10.1007/s13201-021-01425-1

Prediction of Anemia Using Naïve-Bayes Classification Algorithm in Machine Learning

Pearl D’Souza and Ritu Bhargava

Abstract To make accurate forecasts, machine learning algorithms rely on past data, using statistical methods and algorithms that are trained to make classifications or predictions. Consider the prediction of anemia, the most common hematological disease, in the medical machine learning domain: when there are not enough healthy red blood cells in the body, oxygen cannot go where it needs to go. This paper aims to use a machine learning algorithm for the early detection of anemia in patients with low hemoglobin (HGB) counts, and to determine which parameters have the greatest impact. The Naïve Bayes algorithm with the multiple instance learning method is the main algorithm used in the research, and the analysis is carried out using the WEKA utility tool. A training dataset of 500 instances with 7 fields (RBC, MCV, MCHC, TLC, PLT, HGB, and DECISION) is used to conduct the prediction. The real data, constructed from HGB test results collected from patients against the range (11.6–16.6), is trained and tested using the Naïve Bayes algorithm, which performs best with 90% accuracy at a percentage split of 66%. The WEKA experimenter shows that the Naïve Bayes algorithm gives the best performance in F-measure, sensitivity, true positive rate (TP Rate), and precision, together with the lowest false positive rate (FP Rate). The performance chart curve for predicting anemia shows the highest weight, as visualized further.

Keywords WEKA · Anemia · Kaggle · HGB · LD

P. D’Souza
Principal, Sophia Girls’ College (Autonomous) Ajmer, Ajmer, India

R. Bhargava (B)
Associate Professor, Department of Computer Science, Sophia Girls’ College (Autonomous) Ajmer, Ajmer, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 682, https://doi.org/10.1007/978-981-99-1946-8_12


1 Introduction

The basic tenet of machine learning is the systematic examination of data for the purpose of discovering previously unsuspected connections and patterns [1]. The machine learning process consists of three key components: classification (clustering), analytical rules, and process modelling. Through classification analysis, a set of output decision rules is developed for a given dataset [2]. Extracting data from a training set and transforming it into full structures enables machine learning systems, or database machine learning processes, to accurately identify trends in new datasets. Naïve Bayes classification algorithms are used in medical healthcare to improve the processing of medical data.

Anemia is a medical condition characterized by a reduction in hemoglobin or red blood cells in human blood [3]. When a CBC (complete blood count) test is conducted for patients in a laboratory, various features and attribute values affect the anemia assessment. The HGB (hemoglobin) count is among the most influential: a low HGB (a lack of enough red blood cells in the body) causes symptoms such as fatigue, headaches, dizziness, and shortness of breath [4].

In this work, a machine learning tool is used for the analysis of medical data, within a framework that efficiently employs a number of different categorization techniques [5]. The tool processes the datasets by filtering out irrelevant data, and the remaining data is passed into the training and testing sets. Data mining in the medical domain, particularly in databases, involves analysing massive amounts of data to predict specific patterns of information [6]. This paper works with a set of 500 instances associated with HGB (hemoglobin) tests of patients, and the results predict anemic patients relative to the normal range taken (11.6–16.6), assisting doctors in providing immediate treatment [7].
The accuracy of the Naïve Bayes classification algorithm in predicting anemia in patients turned out to be 90% [8]. The evaluation takes 66% of the classified data as a training set and uses it to train the algorithm [9]; the remaining test data (34%) is then classified based on the decision rules found in the training set. Using 7 predictive features, the Naïve Bayes classification algorithm computes the best prediction of anemia from hemoglobin count (HGB) data [10].

2 Literature Work and Related Study

Pouria et al. [2] presented work on the Naïve Bayes algorithm and its applications: Naïve Bayes text classification, spam filtration, and sentiment analysis with zero conditional probability estimation.

Green et al. [3] discussed an improved accuracy of 92% achieved in predicting whether a patient had chronic kidney disease using Naïve Bayes. At


first, eGFR was used to forecast whether or not a patient would be sick, and the resulting information helped those who were sick by suggesting which foods they should eat and which they should avoid.

Aidaroos et al. [9] used the Naïve Bayes approach to mine medical datasets from various perspectives, highlighting the main features of each dataset based on the requirements. Based on the experimental results compared with those of other approaches, it was determined that NB (Naïve Bayes) is the best approach for the majority of the medical datasets used.

Manal et al. [11] discussed anemia type prediction based on data mining algorithms applied to a sample of 41 patients, with CBC results used to construct the outcomes in the WEKA experimenter. Among the algorithms undertaken, the J48 decision tree algorithm gave the best performance, with an accuracy score of 93.75% at a percentage split of 60%.

Ninad et al. [12] presented work on the pre-detection of heart diseases and diabetes from health-related report values. In their proposed work, the Naïve Bayes algorithm is used to classify the dataset and provide accurate results. From the generated results, heart diseases among the patients are predicted, leading to a successful evaluation of the ailment.

3 Objective of Work

1. Data pre-processing step.
2. To analyse medical data using the Naïve Bayes algorithm.
3. To analyse and predict the accuracy of the result by forming a confusion matrix.
4. Visualization of the test result obtained through the threshold curve.

3.1 Naïve Bayes Classification Algorithm

This Bayes theorem-based supervised learning technique is primarily employed for classification problems. It is particularly popular in text classification, where a high-dimensional training dataset is required, because it is a probabilistic classifier, making predictions solely based on the probability of an object [11]. Bayes' theorem is also referred to as "Bayes' rule" or "Bayes' law." Here, the conditional probability is what ultimately decides the likelihood. The formula of Bayes' theorem is:

P(A|B) = P(B|A) · P(A) / P(B)

In this case, P(A|B) stands for the posterior probability, i.e., the likelihood of hypothesis A in light of evidence for event B.
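As a worked example of the rule with invented numbers: suppose 20% of patients are anemic, 90% of anemic patients show low HGB, and 25% of all patients show low HGB. The posterior probability of anemia given low HGB then follows directly:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# All probabilities below are invented for illustration only.
p_a = 0.2           # prior: patient is anemic
p_b_given_a = 0.9   # likelihood: low HGB given anemia
p_b = 0.25          # evidence: low HGB across all patients

p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # posterior probability of anemia given low HGB (~0.72)
```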


The likelihood, denoted by P(B|A), is the probability of observing the evidence given that the hypothesis holds. The prior probability, P(A), is the likelihood of the hypothesis before any evidence is observed. The marginal probability, P(B), stands for the probability of the evidence.

In this paper, we have a dataset of medical conditions with the corresponding target variable "Anemia"; using this dataset, we must decide whether or not an individual is suffering from anemia [12]. The Naïve Bayes algorithm is one of the fastest and easiest machine learning algorithms for predicting the class of a dataset [11]. It can be used for both binary and multi-class classification, and it has become the most preferred choice for text-classification problems [13].
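The study itself runs Naïve Bayes inside the WEKA tool. Purely as an illustrative sketch of the underlying idea, a one-feature Gaussian Naïve Bayes classifier over HGB counts can be written from scratch; the training values and equal class priors below are invented, not taken from the study's dataset:

```python
import math

# Invented HGB values (g/dL); values below the normal range are labelled anemic.
hgb_anemic = [8.2, 9.5, 10.1, 10.8, 11.0]
hgb_normal = [12.5, 13.1, 14.0, 14.8, 15.6]

def gaussian_pdf(x, mean, var):
    # Likelihood of x under a normal distribution with the given mean/variance
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(values):
    # Estimate per-class mean and (population) variance from training data
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

params = {"anemic": fit(hgb_anemic), "normal": fit(hgb_normal)}
priors = {"anemic": 0.5, "normal": 0.5}  # assumed equal class priors

def predict(hgb):
    # Posterior is proportional to likelihood x prior; the evidence term
    # P(B) cancels when taking the argmax over classes.
    scores = {c: gaussian_pdf(hgb, *params[c]) * priors[c] for c in params}
    return max(scores, key=scores.get)
```

For instance, `predict(9.0)` falls near the anemic class mean and is classified as anemic, while `predict(14.5)` is classified as normal.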

3.2 Means of Judgment

A precise set of measurements is one in which the values are relatively close to one another, while the accuracy of a measurement set is determined by whether its mean is close to the actual value of the quantity being measured. Data points from repeated measurements of the same quantity are required if one wishes to measure more than two terms [13].

Accuracy = (TP + TN) / (TP + FP + TN + FN)
Precision = TP / (TP + FP)

Here TP (True Positive) is returned if both the estimated and real values are positive, TN (True Negative) if both are negative, FP (False Positive) if the predicted value is positive when the true value is negative, and FN (False Negative) if the predicted value is negative when the true value is positive [13].
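These formulas can be exercised on a small confusion matrix; the counts below are invented, chosen so that accuracy comes out at the 90% figure the study reports:

```python
# Invented confusion-matrix counts for a two-class (anemic / not anemic) test set.
tp, fp, tn, fn = 80, 6, 100, 14

accuracy = (tp + tn) / (tp + fp + tn + fn)   # (80 + 100) / 200 = 0.9
precision = tp / (tp + fp)                   # 80 / 86
recall = tp / (tp + fn)                      # 80 / 94, aka sensitivity / TP rate
```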

3.3 The Matrix of Confusion

The Naïve Bayes classifier will now be further evaluated using a confusion matrix. This is a table based on a classifier's performance on test data with known labels (the "true values"), and it helps identify comparisons between classes (Table 1).


Table 1 The confusion matrix generated from the classifier's performance on the test data, including the feature vectors from which classification of the data is possible

Target vector