Modeling, Simulation and Optimization: Proceedings of CoMSO 2022 (Smart Innovation, Systems and Technologies, 373) 9819968658, 9789819968657

This book includes selected peer-reviewed papers presented at the International Conference on Modeling, Simulation and Optimization (CoMSO 2022).


English · 624 pages [601] · 2024


Table of contents :
Preface
Contents
About the Editors
1 On P-refinement in Topology Optimization
1.1 Introduction
1.2 Finite Element Analysis
1.2.1 Finite Element Analysis Using Single and Dual Mesh
1.3 Compliance Minimization Problem
1.3.1 Sensitivity Analysis for Single and Dual Mesh
1.4 Description of Test Cases
1.5 Results and Discussion
1.6 Conclusion
References
2 Computational Analysis of Darrieus Vertical Axis Wind Turbines for Exhaust Air Energy Extraction
2.1 Introduction
2.1.1 Wind Turbines
2.2 Methodology
2.2.1 Key Performance Parameters
2.2.2 Computational Domain and Boundary Conditions
2.2.3 Physics Conditions
2.2.4 Model Validation
2.3 Results
2.3.1 Coefficient of Moment and Coefficient of Power of H Darrieus VAWT
2.3.2 Velocity and Pressure Analysis of H-Darrieus VAWT
2.3.3 Coefficient of Moment and Coefficient of Power of Helical Darrieus VAWT
2.3.4 Velocity and Pressure Analysis for Helical Darrieus VAWT
2.3.5 Comparison of H-Darrieus and Helical Darrieus VAWT
2.4 Conclusions
References
3 A Wideband Circular Slot-Loaded Hexagon-Shaped Microstrip Antenna for Microwave RF Energy Harvesting
3.1 Introduction
3.2 Antenna Geometry
3.3 Result and Discussion
3.4 Conclusion
References
4 A New Method of Population Initialization for Enhancing Performance of Evolutionary Algorithms
4.1 Introduction
4.2 Differential Evolution (DE)
4.3 Concept of Opposite Point
4.4 Proposed OPDE Algorithm
4.5 Experimental Results and Discussion
4.6 Conclusions
References
5 Performance Analysis of Metaheuristic Methods in the Classification of Different Human Behavioural Disorders
5.1 Introduction
5.1.1 Contribution
5.2 Literature Review
5.3 Material and Methods
5.3.1 Dataset
5.4 Results and Analysis
5.5 Conclusion
References
6 The Influence of Wall Temperature on Total Pressure Drop During Condensation of R134a Inside a Dimpled Tube
6.1 Introduction
6.2 Governing Equations
6.3 Results and Discussions
6.4 Conclusion
References
7 Optimisation of Biodiesel Production Using Heterogeneous Catalyst from Palm Oil by Taguchi Method
7.1 Introduction
7.2 Materials and Methodology
7.2.1 Catalyst Preparation
7.2.2 Characterisation of Catalyst
7.2.3 Transesterification Process
7.2.4 Design of Experiment Using Taguchi Methodology
7.2.5 Signal-to-Noise Ratio (SNR) and Analysis of Variance (ANOVA)
7.3 Results and Discussion
7.3.1 SEM Analysis
7.3.2 Taguchi Method for Process Parameter Optimisation
7.3.3 Analysis of Variance (ANOVA)
7.3.4 Maximum Yield Prediction and Validation
7.3.5 Physicochemical Properties of POME
7.4 Conclusion
References
8 Optimization of Heterogeneous Biomass-Based Nano-Catalyzed Transesterification of Karanja Seed Oil for Production of Biodiesel
8.1 Introduction
8.2 Materials and Methodology
8.2.1 Karanja Oil and Chemicals
8.2.2 Synthesis of Catalyst
8.2.3 Characterization of Catalyst
8.2.4 Transesterification of Karanja Oil to Produce Biodiesel
8.2.5 Design of Experiment
8.3 Result and Discussion
8.3.1 XRD Analysis of Grape Fruit Peel Ash Catalyst
8.3.2 EDS and SEM Analysis of GFP Catalyst
8.3.3 Statistical Data Analysis
8.3.4 Effect of Catalyst Loading and the M/O Molar Ratio on KOME Yield (%)
8.3.5 The Influence of Catalyst Amount and Reaction Time on the Yield (%) of KOME
8.3.6 Effect of Methanol-to-Oil Molar Ratios and Reaction Time on KOME Yield (%)
8.3.7 Optimization of Transesterification Reaction Parameters
8.4 Conclusions
References
9 Population Diversity-Aided Adaptive Cuckoo Search
9.1 Introduction
9.1.1 Evolutionary Algorithms
9.1.2 Exploration and Exploitation
9.1.3 Population Diversity
9.2 The Cuckoo Search Algorithm
9.3 The Proposed Adaptive Cuckoo Search Algorithm
9.4 Evaluation Process
9.4.1 Evaluating Algorithms
9.4.2 Evaluation Strategy
9.4.3 Performance Metrics
9.5 Result Analysis
9.6 Discussions and Conclusion
References
10 Solitons of the Modified KdV Equation with Variable Coefficients
10.1 Introduction
10.2 Governing Equations
10.2.1 Solution for Bright Soliton
10.2.2 Solution for Dark Soliton
10.2.3 Solution for Singular Soliton
10.3 Result and Discussion
10.4 Conclusion
References
11 A Review on Lung Cancer Detection and Classification Using Deep Learning Techniques
11.1 Introduction
11.2 Data Set Used in Literature Survey
11.2.1 LC25000
11.2.2 ACDC@LungHP
11.2.3 LIDC-IDRI
11.2.4 LUNA-16
11.3 Literature Survey
11.3.1 Lung Cancer Detection and Classification Using CNN
11.3.2 Lung Cancer Detection and Classification Using R-CNN
11.3.3 Lung Cancer Detection and Classification Using ANN
11.3.4 Lung Cancer Detection and Classification Using U-Net
11.4 Summary
11.5 Challenges
11.6 Future Scope
11.7 Conclusion
References
12 Enhancement and Gray-Level Optimization of Low Light Images
12.1 Introduction
12.2 Related Work
12.3 Image Intensity Transformation Techniques
12.3.1 Linear Transformation
12.3.2 Logarithmic Transformation
12.3.3 Power Law Transformation
12.4 Proposed Model
12.5 Proposed Experimental Results and Discussion
12.6 Conclusion
References
13 Prediction of Liver Disease Using Machine Learning Approaches Based on KNN Model
13.1 Introduction
13.2 Related Work
13.3 Proposed Methodology
13.3.1 Dataset Acquisition
13.3.2 Data Pre-processing
13.3.3 Model Training
13.4 Comparative Result Analysis
13.5 Conclusion
References
14 Modelling of Embedded Cracks by NURBS-Based Extended Isogeometric Analysis
14.1 Introduction
14.2 Extended Isogeometric Analysis
14.2.1 Basis Function
14.2.2 XIGA Discretization
14.2.3 Selection and Enrichment of Control Points
14.2.4 Integration of the Elements
14.2.5 Approximation
14.2.6 Crack Formulation
14.3 Evaluation of SIF
14.4 Result and Discussions
14.5 Conclusion
References
15 A New Version of Artificial Rabbits Optimization for Solving Complex Bridge Network Optimization Problem
15.1 Introduction
15.2 Artificial Rabbit Optimization
15.3 The Proposed Method
15.3.1 Chaotic Map for ARO
15.3.2 Cauchy Operator
15.4 Experimental Results and Analysis
15.4.1 Parameter Setup
15.4.2 Results and Analysis
15.4.3 Investigation of the Exploitative Ability of the CCARO
15.4.4 Investigation of the Exploratory Ability of the CCARO
15.4.5 Complex Bridge Network Optimization Problem
15.5 Conclusion
References
16 Regression Analysis on Synthesis of Biodiesel from Rice Bran Oil
16.1 Introduction
16.2 Materials and Equipment
16.2.1 Materials
16.2.2 Equipment
16.3 Biodiesel Production
16.3.1 Experimental Design
16.3.2 Steps Involved in Production of RBO Biodiesel
16.4 Regression Analysis
16.5 Results and Discussions
16.5.1 Physicochemical Properties of Biodiesel Feedstock
16.5.2 Design of Experiment (DOE) and Data Collection
16.5.3 Regression Analysis on Biodiesel Yields
16.5.4 Analysis of Variance (ANOVA)
16.5.5 Effect of Various Control Factors
16.6 Conclusion
References
17 A Study on Vision-Based Human Activity Recognition Approaches
17.1 Introduction
17.2 Handcrafted Feature-Based HAR
17.2.1 Spatiotemporal Approaches
17.2.2 Appearance-Based Approaches
17.3 Feature Learning-Based HAR
17.3.1 Traditional Learning
17.3.2 Deep Learning Architectures
17.4 HAR Datasets
17.5 Evaluation Metrics
17.6 Limitation of Existing HAR Systems
17.7 Challenges in HAR
17.8 Conclusion
References
18 In-Store Monitoring of Harvested Tomatoes Using Internet of Things
18.1 Introduction
18.2 Related Work
18.2.1 Tracking System for Food and Grain Warehouse
18.2.2 Integrated Monitoring and Control System for Food Grain Wastage
18.2.3 Control and Monitoring System for Cold Storage Environments
18.2.4 The Use of IoT and Blockchain Technology to Monitor and Classify Crop Quality
18.2.5 Intelligent Warehousing Based on Machine Learning
18.2.6 A System that Protects Crops from Post-Harvest Damage Using IoT
18.2.7 Utilization of Cooling Technologies to Improve Vegetable Storage in Mali
18.2.8 A Comparison of ECC, ZEBC, and Pot-in-Pot Coolers for Storage Post-harvest
18.2.9 An Automated System for Controlling Cold Storage Remotely
18.3 Methodology
18.3.1 IoT-Enabled System Architecture
18.4 Experiment Results and Analysis
18.5 Conclusion
References
19 A Compact Microstrip Hexagonal Patch Antenna with a Slotted Ground Plane for RF Energy Harvesting Applications
19.1 Introduction
19.2 Antenna Design
19.3 Results and Analysis
19.4 Conclusion
References
20 Hierarchical Aadhaar-Based Anonymous eSign Based on Group Signatures
20.1 Introduction
20.2 Related Work
20.3 Preliminaries
20.3.1 eSign
20.3.2 Signature of Knowledge
20.3.3 Group Signature
20.4 Proposed Scheme
20.4.1 Architecture
20.4.2 System Initialization
20.4.3 Registration of Members (in First Level Group)
20.4.4 Signatures by Members (of First Level Group)
20.4.5 Signature Verification (of First Level Group Signer)
20.4.6 Group Manager Signature Opening
20.4.7 Creation of Second Level Group
20.4.8 Registration of Members (in Second Level Group)
20.4.9 Signatures by Members (of Second Level Group)
20.4.10 Signature Verification (of Second Level Group Signer)
20.5 Security Considerations
20.6 Conclusion
References
21 Swarm Intelligence for Estimating Model Parameters in Thermodynamic Systems
21.1 Introduction
21.2 Particle Swarm Optimization (PSO)
21.3 Problem Formulation
21.4 Results and Discussion
21.5 Conclusion
References
22 Development of a Protocol on Various IoT-Based Devices Available for Women Safety
22.1 Introduction
22.1.1 Literature Review
22.2 Methodology
22.2.1 Purpose/Use of Different Components
22.2.2 Algorithm
22.2.3 Method
22.2.4 Working
22.3 Results and Discussion
22.4 Conclusion
References
23 A Review on Indian Language Identification Using Deep Learning
23.1 Introduction
23.1.1 Literature Survey
23.1.2 Comparison of Performance in Various Indian Language Identification and Classification Techniques
23.2 Summary of Review
23.3 Conclusion
References
24 Markov Process Based IoT Model for Road Traffic Prediction
24.1 Introduction
24.2 Problem Analysis and System Design
24.3 Result Analysis
24.4 Conclusions
References
25 Optimization of Process Parameters in Biodiesel Production from Waste Cooking Oil Using Taguchi-Grey Relational Analysis
25.1 Introduction
25.2 Work Procedures
25.2.1 Collection of the Feedstock
25.2.2 Transesterification Methods
25.2.3 Design Strategy
25.2.4 Taguchi-Grey Relational Analysis
25.2.5 Analysis of Variance (ANOVA)
25.3 Results and Discussion
25.4 Conclusion
References
26 A Novel D-Latch Design for Low-Power and Improved Immunity
26.1 Introduction
26.2 ASAP7: A 7 nm FinFET PDK
26.3 FinFET INDEP Technique
26.4 Proposed D-Latch FinFET Designs at 7 nm Technology Node
26.4.1 Layout Description of D-Latch Circuit
26.5 Monte Carlo Analysis of FinFET D-Latch Designs at 7 nm Technology Node
26.6 Conclusion
References
27 Anonymous and Privacy Preserving Attribute-Based Decentralized DigiLocker Using Blockchain Technology
27.1 Introduction
27.2 Related Work
27.3 Preliminaries
27.4 System and Threat Model
27.4.1 System Model
27.4.2 Threat Model
27.5 Proposed Scheme
27.5.1 Key Management
27.5.2 Smart Contract System
27.5.3 Main Methods of the Scheme
27.6 Security and Privacy Analysis
27.6.1 Data Confidentiality
27.6.2 User Privacy
27.6.3 User Linkability
27.6.4 User and Data Linkability
27.7 Conclusion
References
28 FIFO Memory Implementation with Reduced Metastability
28.1 Introduction
28.2 Earlier Work
28.3 Methodology
28.4 Results and Discussion
28.5 Conclusion
References
29 Diabetic Retinopathy Detection Using Ensemble of CNN Architectures
29.1 Introduction
29.2 Related Work
29.3 Proposed Methodology
29.4 Experiments and Results
29.5 Conclusions and Future Scope
References
30 Narrow Band 5G Antenna
30.1 Introduction
30.2 Antenna Design
30.3 Antenna Parameter Analysis
30.3.1 S-Parameter
30.3.2 Directivity
30.3.3 Antenna Parameter
30.3.4 Maximum Field Location
30.3.5 Radiating Pattern
30.4 Conclusion
References
31 Rectangular and Cylindrical Slotted Microstrip Patch Antenna Design for Biomedical Application
31.1 Introduction
31.2 Antenna Design
31.2.1 Design Equations
31.3 Bending Analysis of Design Antenna
31.4 Result
31.5 Conclusion
References
32 An Implementation of Machine Learning-Based Healthcare Chabot for Disease Prediction (MIBOT)
32.1 Introduction
32.2 Literature Survey
32.3 Proposed Method
32.3.1 Proposed Algorithm
32.4 Results Analysis
32.5 Performance Analysis
32.5.1 Algorithm Comparison
32.6 Conclusion and Future Work
References
33 Impact of Communication Delay in a Coordinated Control VPP Model with Demand Side Flexibility: A Case Study
33.1 Introduction
33.2 Proposed VPP Model, Design, and Simulation with Coordinated Control Strategy
33.2.1 Solar Photovoltaic (SPV)
33.2.2 Parabolic Trough Solar Thermal System (PTSTS)
33.2.3 Electric Vehicle (EV)
33.2.4 Conventional Synchronous Generator
33.2.5 Communication Delay Block
33.2.6 Proposed VPP Coordinated Control Design
33.2.7 Adopted Control Strategy
33.3 Results and Discussion
33.3.1 Case Study 1: Effect of Increasing Communication Delay on System Response
33.3.2 Different Load Change Scenarios in the VPP Area
33.3.3 Presence of EVs for Charge/Discharge into the Grid
33.4 Conclusion
References
34 Fabrication of Patient Specific Distal Femur with Additive Manufacturing
34.1 Introduction
34.2 Methodology
34.2.1 Generating of 3D Model Using 3D Slicer
34.2.2 Refining the Generated 3D Model Using Ansys Spaceclaim
34.2.3 Preparing the Model for 3D Printing Using Ultimaker Cura
34.2.4 Printing of the Model
34.3 Results and Discussion
34.3.1 Model Creation from DICOM File
34.3.2 Error Correction in the STL File Format 3D Model
34.3.3 Smoothening the Model Using Ansys Spaceclaim
34.3.4 Final Preparation of Model Using Ultimaker Cura
34.3.5 3D Printed Model Using FDM Printer
34.4 Conclusion
References
35 Performance Comparison of Cuckoo Search and Ant Colony Optimization for Identification of Parkinson’s Disease Using Optimal Feature Selection
35.1 Introduction
35.2 Methodology
35.2.1 Cuckoo Search Algorithm
35.2.2 Ant Colony Optimization Algorithm
35.3 Implementation of the Proposed Algorithms
35.3.1 Input Parameters
35.3.2 Dataset
35.4 Result
35.4.1 Result on Speech PD Dataset
35.4.2 Comparison of the Proposed Algorithms with Other Optimization Algorithms
35.5 Conclusions
References
36 Simulation and Modelling of Task Migration in Distributed Systems Using SimGrid
36.1 Introduction
36.2 Related Work
36.3 SimGrid
36.4 Network Configuration
36.5 Simulation Results
36.6 Conclusion
References
37 An Approach to Bodo Word Sense Disambiguation (WSD) Using Word2Vec
37.1 Introduction
37.2 Related Works
37.3 Proposed Methodology
37.3.1 Architecture for the Proposed Method
37.3.1.1 Data Preprocessing
37.4 Embedding Techniques for Building Corpora
37.4.1 Word Embedding
37.4.1.1 Word2Vec
37.4.1.2 Continuous Bag-of-Words (CBOW) Model
37.4.1.3 Skip-Gram Model
37.5 Calculation of Testing Procedure
37.5.1 Cosine Similarity
37.5.2 Sentence Similarity
37.6 Evaluation and Discussion of Results
37.7 Conclusion and Future Work
References
38 Effect of Ambient Conditions on Energy and Exergy Efficiency of Combined Cycle Power Plant
38.1 Introduction
38.2 Material and Methodology
38.3 Results and Discussion
38.4 Conclusion
References
39 Design and Analysis of Single and Multi-Band Rectangular Microstrip Patch Antenna Using Coplanar Wave Guide
39.1 Introduction
39.2 Methodology Used
39.2.1 UWB-MIMO Antenna System
39.2.2 Designing Mathematical Equations for MIMO Microstrip Antenna
39.2.3 Methods to Improve Isolation and Bandwidth Enhancement
39.3 Results and Discussion
39.3.1 Proposed Outputs
39.4 Comparison of the Proposed Design
39.5 Conclusion
References
40 Software in the Loop (SITL) Simulation of Aircraft Navigation, Guidance, and Control Using Waypoints
40.1 Introduction
40.2 Autopilot Design
40.2.1 Design and Control Law
40.2.2 Tuning and Testing Platform
40.2.3 Response Curves
40.3 Waypoint Navigation
40.4 Results and Discussion
40.5 Conclusion
References
41 Optimization of Biodiesel Production from Waste Cooking Oil Using Taguchi Method
41.1 Introduction
41.2 Materials and Method
41.2.1 Materials Required
41.2.2 Synthesis of Catalyst
41.2.3 Preliminary Waste Cooking Oil Analysis
41.2.4 Design of Experiments Based on Taguchi L9 Methodology
41.2.5 Control Parameters Selection and Levels
41.2.6 Experimentation Based on the Taguchi Orthogonal Array Design Matrix
41.3 Results and Discussion
41.3.1 Catalyst Characterization
41.3.2 Statistical Analysis
41.4 Conclusions
References
42 Design and Development of Energy Meter for Energy Consumption
42.1 Introduction
42.2 Methodology
42.2.1 Components Selected: Arduino UNO
42.2.2 16*2 LCD Display
42.2.3 Resistor
42.2.4 Diode
42.2.5 Capacitor
42.2.6 Internal Battery
42.2.7 Inductor
42.2.8 Microchip
42.2.9 Tact Switch
42.2.10 Working Principle
42.3 Conclusion
42.3.1 Future Scope
References
43 Thermal Performance Study of a Flat Plate Solar Air Heater Using Different Insulating Materials
43.1 Introduction
43.2 Methodology
43.2.1 Model Description
43.2.2 Thermal Modelling Equations
43.3 Results and Discussion
43.4 Conclusion
References
44 L and S Beacon Dualband Antenna for Biomedical Application
44.1 Introduction
44.2 Antenna Design
44.2.1 Antenna Parametric Analysis
44.2.2 Antenna Parameters
44.3 Conclusion
References
45 A Comprehensive Review and Performance Analysis of Different 7T and 9T SRAM Bit Cells
45.1 Introduction
45.2 9T Cell Architectures
45.2.1 9T-1 SRAM Cell
45.2.2 9T-2
45.3 7T Cell Architecture
45.3.1 7T-1
45.3.2 7T-2
45.4 Static Noise Margin (SNM) Analysis
45.5 Temperature Variation Analysis
45.6 Analysis of Leakage Current
45.7 Conclusion
References
Author Index

Smart Innovation, Systems and Technologies 373

Biplab Das · Ripon Patgiri · Sivaji Bandyopadhyay · Valentina Emilia Balas · Sukanta Roy, Editors

Modeling, Simulation and Optimization Proceedings of CoMSO 2022

Smart Innovation, Systems and Technologies Volume 373

Series Editors Robert J. Howlett, KES International Research, Shoreham-by-Sea, UK Lakhmi C. Jain, KES International, Shoreham-by-Sea, UK

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to provide a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes, in order to make the latest results available in a readily accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought.

The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope includes systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focuses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities.

The series includes conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High-quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing and adhere to KES quality principles.

Indexed by SCOPUS, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), SCImago, DBLP. All books published in the series are submitted for consideration in Web of Science.


Editors

Biplab Das, Department of Mechanical Engineering, National Institute of Technology Silchar, Cachar, Assam, India
Ripon Patgiri, Department of Computer Science and Engineering, National Institute of Technology Silchar, Cachar, Assam, India
Sivaji Bandyopadhyay, National Institute of Technology Silchar, Cachar, India
Valentina Emilia Balas, Aurel Vlaicu University of Arad, Arad, Romania
Sukanta Roy, Department of Mechanical Engineering, Curtin University, Miri, Malaysia

ISSN 2190-3018 · ISSN 2190-3026 (electronic)
Smart Innovation, Systems and Technologies
ISBN 978-981-99-6865-7 · ISBN 978-981-99-6866-4 (eBook)
https://doi.org/10.1007/978-981-99-6866-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.

Paper in this product is recyclable.

Preface

The International Conference on Modeling, Simulation, and Optimization (CoMSO 2022) focuses on both theory and applications in the broad areas of big data and machine learning. The conference aims to bring together academics, researchers, developers, and practitioners from scientific organizations and industry to share and disseminate recent research findings in the fields of modeling, simulation, and optimization and their applications. CoMSO is an outstanding platform for discussing key findings, exchanging novel ideas, listening to world-class leaders, and sharing experiences with peer groups. The conference also offers the research community opportunities for collaboration with national and international organizations of repute. CoMSO attracts a large number of participants and submissions from around the world: of the 139 submissions received, CoMSO 2022 accepted 49, and the accepted papers are published in these proceedings. The program also featured world-leading keynote speakers, namely Prof. Amir H. Gandomi, Prof. Uday S. Dixit, Prof. Sivaji Bandyopadhyay, Prof. Soumendu Raha, Prof. Mamdouh El Haj Assad, Prof. Bale Reddy, Prof. Xiangrong Chen, Dr. Jayanta Mandol, and Dr. Arnab Basu. The conference concluded with great success.

Biplab Das, Cachar, India
Ripon Patgiri, Cachar, India
Sivaji Bandyopadhyay, Cachar, India
Valentina Emilia Balas, Arad, Romania
Sukanta Roy, Miri, Malaysia

Contents

1

On P-refinement in Topology Optimization . . . . . . . . . . . . . . . . . . . . . . Sougata Mukherjee, Balaji Raghavan, Subhrajit Dutta, and Piotr Breitkopf

2

Computational Analysis of Darrieus Vertical Axis Wind Turbines for Exhaust Air Energy Extraction . . . . . . . . . . . . . . . . . . . . . Reuben Brandon Huan Chung Lee, Sukanta Roy, Yam Ke San, and Aja Ogboo Chikere

3

4

5

6

7

8

1

19

A Wideband Circular Slot-Loaded Hexagon-Shaped Microstrip Antenna for Microwave RF Energy Harvesting . . . . . . . . Pradeep Chindhi, H. P. Rajani, and Geeta Kalkhambkar

37

A New Method of Population Initialization for Enhancing Performance of Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . Swati Yadav and Rakesh Angira

51

Performance Analysis of Metaheuristic Methods in the Classification of Different Human Behavioural Disorders . . . Preeti Monga and Manik Sharma

65

The Influence of Wall Temperature on Total Pressure Drop During Condensation of R134a Inside a Dimpled Tube . . . . . . . . . . . . N. V. S. M. Reddy, K. Satyanarayana, Rosang Pongen, and S. Venugopal Optimisation of Biodiesel Production Using Heterogeneous Catalyst from Palm Oil by Taguchi Method . . . . . . . . . . . . . . . . . . . . . . Bidisha Chetia and Sumita Debbarma

79

89

Optimization of Heterogeneous Biomass-Based Nano-Catalyzed Transesterification of Karanja Seed Oil for Production of Biodiesel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 Abhishek Bharti and Sumita Debbarma

vii

viii

9

Contents

Population Diversity-Aided Adaptive Cuckoo Search . . . . . . . . . . . . . 121 Debojyoti Sarkar and Anupam Biswas

10 Solitons of the Modified KdV Equation with Variable Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 Priyanka Sharma, Sandip Saha, and Pankaj Biswas 11 A Review on Lung Cancer Detection and Classification Using Deep Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 Jyoti Kumari, Sapna Sinha, and Laxman Singh 12 Enhancement and Gray-Level Optimization of Low Light Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Aashi Shrivastava and M. P. Parsai 13 Prediction of Liver Disease Using Machine Learning Approaches Based on KNN Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Souptik Dutta, Subhash Mondal, and Amitava Nag 14 Modelling of Embedded Cracks by NURBS-Based Extended Isogeometric Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 Vibhushit Gupta, Sahil Thappa, Shubham Kumar Verma, Sanjeev Anand, Azher Jameel, and Yatheshth Anand 15 A New Version of Artificial Rabbits Optimization for Solving Complex Bridge Network Optimization Problem . . . . . . . . . . . . . . . . . 205 Y. Ramu Naidu 16 Regression Analysis on Synthesis of Biodiesel from Rice Bran Oil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 Deepjyoti Patowary, Jitumoni Kumar, and Prasanta Kumar Choudhury 17 A Study on Vision-Based Human Activity Recognition Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 S. L. Reeja, T. Soumya, and P. S. Deepthi 18 In-Store Monitoring of Harvested Tomatoes Using Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
249 Rohit Kumar Kasera, Shivashish Gour, and Tapodhir Acharjee 19 A Compact Microstrip Hexagonal Patch Antenna with a Slotted Ground Plane for RF Energy Harvesting Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Pradeep S. Chindhi, H. P. Rajani, and Geeta Kalkhambkar 20 Hierarchical Aadhaar-Based Anonymous eSign Based on Group Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 Puneet Bakshi and Sukumar Nandi

Contents

ix

21 Swarm Intelligence for Estimating Model Parameters in Thermodynamic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 Swati Yadav, Pragya Palak, and Rakesh Angira 22 Development of a Protocol on Various IoT-Based Devices Available for Women Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 Nishant Kulkarni, Shreya Gore, Avanti Dethe, Tushar Dhote, Rohit Dhage, and Manasvi Dhoke 23 A Review on Indian Language Identification Using Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 Swapnil Sawalkar and Pinky Roy 24 Markov Process Based IoT Model for Road Traffic Prediction . . . . . 329 V. Sreelatha, E. Mamatha, S. Krishna Anand, and Nayana H. Reddy 25 Optimization of Process Parameters in Biodiesel Production from Waste Cooking Oil Using Taguchi-Grey Relational Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Farhina Ahmed and Sumita Debbarma 26 A Novel D-Latch Design for Low-Power and Improved Immunity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 Umayia Mushtaq, Md. Waseem Akram, Dinesh Prasad, and Bal Chand Nagar 27 Anonymous and Privacy Preserving Attribute-Based Decentralized DigiLocker Using Blockchain Technology . . . . . . . . . . 361 Puneet Bakshi and Sukumar Nandi 28 FIFO Memory Implementation with Reduced Metastability . . . . . . . 373 M. S. Mallikarjunaswamy, M. U. Anusha, B. R. Sriraksha, Harsha Bhat, K. Adithi, and P. Pragna 29 Diabetic Retinopathy Detection Using Ensemble of CNN Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385 B. Bhargavi, Lahari Madishetty, and Jyoshna Kandi 30 Narrow Band 5G Antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 P. M. Preethi, P. V. 
Yalini Shree, N. Mohammed Shaqeeb, C. Kavya, and K. C. Raja Rajeshwari 31 Rectangular and Cylindrical Slotted Microstrip Patch Antenna Design for Biomedical Application . . . . . . . . . . . . . . . . . . . . . . 405 Sonam Gour, Mithlesh Arya, Ghanshyam Singh, and Amit Rathi 32 An Implementation of Machine Learning-Based Healthcare Chabot for Disease Prediction (MIBOT) . . . . . . . . . . . . . . . . . . . . . . . . . 419 Sauvik Bal, Kiran Jash, and Lopa Mandal

x

Contents

33 Impact of Communication Delay in a Coordinated Control VPP Model with Demand Side Flexibility: A Case Study . . . 431
Smriti Jaiswal, Mausri Bhuyan, and Dulal Chandra Das
34 Fabrication of Patient Specific Distal Femur with Additive Manufacturing . . . 445
Thoudam Kheljeet Singh and Anil Kumar Birru
35 Performance Comparison of Cuckoo Search and Ant Colony Optimization for Identification of Parkinson's Disease Using Optimal Feature Selection . . . 459
Neha Singh, Sapna Sinha, and Laxman Singh
36 Simulation and Modelling of Task Migration in Distributed Systems Using SimGrid . . . 475
Ehab Saleh and Chandrasekar Shastry
37 An Approach to Bodo Word Sense Disambiguation (WSD) Using Word2Vec . . . 487
Subungshri Basumatary, Karmabir Brahma, Anup Kumar Barman, and Amitava Nag
38 Effect of Ambient Conditions on Energy and Exergy Efficiency of Combined Cycle Power Plant . . . 501
Pravin Gorakh Maske, Vartika Narayani Srinet, and A. K. Yadav
39 Design and Analysis of Single and Multi-Band Rectangular Microstrip Patch Antenna Using Coplanar Wave Guide . . . 511
K. R. Kavitha, S. Vijayalakshmi, B. Murali Babu, E. Glenda Lorelle Ritu, and M. Naveen Balaji
40 Software in the Loop (SITL) Simulation of Aircraft Navigation, Guidance, and Control Using Waypoints . . . 523
G. Anitha, E. Saravanan, Jayaram Murugan, and Sudheer Kumar Nagothu
41 Optimization of Biodiesel Production from Waste Cooking Oil Using Taguchi Method . . . 545
Subham Chetri and Sumita Debbarma
42 Design and Development of Energy Meter for Energy Consumption . . . 563
K. Hariharan, Mathiarasan Vivek Ramanan, Naresh Kumar, D. Kesava Krishna, Arockia Dhanraj Joshuva, and S. K. Indumathi
43 Thermal Performance Study of a Flat Plate Solar Air Heater Using Different Insulating Materials . . . 571
Pijush Sarma, Monoj Bardalai, Partha Pratim Dutta, and Harjyoti Das


44 L and S Beacon Dualband Antenna for Biomedical Application . . . 587
H. Naveen Mugesh, K. Nandha Kishore, M. Harini, and K. C. RajaRajeshwari
45 A Comprehensive Review and Performance Analysis of Different 7T and 9T SRAM Bit Cells . . . 595
Manthan Garg, Mridul Chaturvedi, Poornima Mittal, and Anamika Chauhan
Author Index . . . 607

About the Editors

Dr. Biplab Das is presently working as Associate Professor in the Department of Mechanical Engineering, National Institute of Technology Silchar, India. Dr. Das completed his Ph.D. at NERIST, Itanagar, India, in 2014. Later, he pursued postdoctoral research at the University of Idaho, USA. He is the recipient of the prestigious Bhaskara Advance Solar Energy (BASE) fellowship from IUSSTF and DST, Government of India. He has also been awarded the "DBT Associateship" by the Department of Biotechnology, Government of India. He has 14+ years of experience in teaching and research and has published more than 120 refereed international/national journal/conference papers. Dr. Das is actively involved in 10 sponsored projects to develop solar thermal systems for Northeast India, worth 0.268 billion INR, sponsored by SERB, DST, the Ministry of Power, and the Ministry of Climate Change, Government of India. He has already completed eight sponsored projects. He has guided nine Ph.D. scholars and is currently guiding seven more. He has ongoing research activities in collaboration with Jadavpur University, India; IIT Guwahati, India; University of Idaho, USA; and Ulster University, UK.

Dr. Ripon Patgiri received his bachelor's degree from the Institution of Electronics and Telecommunication Engineers, New Delhi, in 2009, his M.Tech. degree from the Indian Institute of Technology Guwahati in 2012, and his Doctor of Philosophy from the National Institute of Technology Silchar in 2019. After his M.Tech. degree, he joined the Department of Computer Science and Engineering, National Institute of Technology Silchar, as Assistant Professor in 2013. He has published numerous papers in reputed journals, conferences, and books, and has been awarded several international patents. He is a Senior Member of IEEE, a Member of EAI, a Lifetime Member of ACCS, India, and an Associate Member of IETE.

He was General Chair of the 6th International Conference on Advanced Computing, Networking, and Informatics (ICACNI 2018) and the International Conference on Big Data, Machine Learning and Applications (BigDML 2019). He is Organizing Chair of the 25th International Symposium on Frontiers of Research in Speech and Music (FRSM 2020) and the International Conference on Modeling, Simulations and Applications (CoMSO 2020). He is Convenor, Organizing Chair, and


Program Chair of the 26th annual International Conference on Advanced Computing and Communications (ADCOM 2020). He is also Editor of a multi-authored book, titled Health Informatics: A Computational Perspective in Healthcare, in the book series Studies in Computational Intelligence, Springer. He is also writing a monograph, titled Bloom Filter: A Data Structure for Computer Networking, Big Data, Cloud Computing, Internet of Things, Bioinformatics and Beyond, Elsevier.

Prof. Sivaji Bandyopadhyay has been Director of the National Institute of Technology Silchar since December 2017. He is Professor in the Department of Computer Science and Engineering, Jadavpur University, India, where he has been serving since 1989, and is attached as Professor to the Computer Science and Engineering Department, National Institute of Technology Silchar. He has more than 300 publications in reputed journals and conferences and has edited two books so far. His research interests are in the areas of Natural Language Processing, Machine Translation, Sentiment Analysis, and Medical Imaging, among others. He has organized several conferences and has been a Program Committee Member and Area Chair at several reputed conferences. He has completed internationally funded projects with France, Japan, and Mexico. At the national level, he has been Principal Investigator of several consortium-mode projects in the areas of machine translation, cross-lingual information access, and treebank development. At present, he is Principal Investigator of an Indo-German SPARC project with Saarland University, Germany, on Multimodal Machine Translation, and Co-PI of several other international projects.

Prof. Valentina Emilia Balas is currently Full Professor in the Department of Automatics and Applied Software at the Faculty of Engineering, "Aurel Vlaicu" University of Arad, Romania. She holds a Ph.D. in Applied Electronics and Telecommunications from the Polytechnic University of Timisoara. Dr. Balas is the author of more than 300 research papers in refereed journals and international conferences. Her research interests are in Intelligent Systems, Fuzzy Control, Soft Computing, Smart Sensors, Information Fusion, Modeling, and Simulation. She is Editor-in-Chief of the International Journal of Advanced Intelligence Paradigms (IJAIP) and the International Journal of Computational Systems Engineering (IJCSysE), a Member of the Editorial Board of several national and international journals, and an expert evaluator for national and international projects and Ph.D. theses. Dr. Balas is Director of the Intelligent Systems Research Centre at Aurel Vlaicu University of Arad and Director of the Department of International Relations, Programs and Projects at the same university. She served as General Chair of the International Workshop on Soft Computing and Applications (SOFA) in eight editions held in Romania and Hungary during 2005–2020. Dr. Balas has participated in many international conferences as Organizer, Honorary Chair, Session Chair, and Member of Steering, Advisory, or International Program Committees. She is a Member of EUSFLAT and SIAM, a Senior Member of IEEE, a Member in TC—Fuzzy Systems (IEEE CIS), Chair of TF 14 in TC—Emergent Technologies (IEEE CIS), and a Member in TC—Soft Computing (IEEE SMCS). Dr. Balas was past Vice President (Awards) of the IFSA International Fuzzy Systems Association


Council (2013–2015) and is Joint Secretary of the Governing Council of the Forum for Interdisciplinary Mathematics (FIM)—A Multidisciplinary Academic Body, India.

Dr. Sukanta Roy is presently working as Head of Department and Associate Professor of Mechanical Engineering, Curtin University, Malaysia Campus, which he joined in September 2017. Prior to joining Curtin University, he worked as a Postdoctoral Fellow at IRPHE, Aix-Marseille University, France, from November 2015 to June 2017, on the variable pitch control of vertical axis wind turbines. Dr. Roy also worked as a Postdoctoral Fellow at LHEEA, Ecole Centrale de Nantes, France, from September 2014 to October 2015, through an Erasmus Mundus Postdoctoral Fellowship (Asia to Europe) from the European Commission, on the flow physics of micro-wind turbines. Prior to working in France, he completed his Ph.D. in Mechanical Engineering at the Indian Institute of Technology Guwahati, India (2010–2014). Dr. Roy is a Fellow of Advance HE (Higher Education Academy), UK, and a winner of the Curtin Citations for Outstanding Contributions to Student Learning 2019. Dr. Roy is presently supervising a number of Ph.D. and M.Phil. students and is a recognized reviewer for more than 10 reputed international journals. His research interests include Experimental and Computational Fluid Dynamics, Wind Turbine Aerodynamics and Hydro-kinetic Turbines, Turbo-machinery, Renewable Energy, and Heat Transfer Applications.

Chapter 1

On P-refinement in Topology Optimization

Sougata Mukherjee, Balaji Raghavan, Subhrajit Dutta, and Piotr Breitkopf

Abstract Structural topology optimization is carried out by optimizing the relative density, i.e., the design variable assigned to each element. The design is commonly refined by refining the FE mesh size, which scales up the number of density elements; an alternative is to increase the interpolation order of the finite elements. Higher-order elements can yield more realistic solutions than lower-order elements, including improved stability of the numerical solution. This work examines the advantages of p-refinement in topology optimization with the solid isotropic material with penalization (SIMP) scheme, comparing compliance values and computational time for various topology optimization problems.

1.1 Introduction

The relative density is optimally distributed over the design domain by optimizing the topology of the structure, with the objective of minimizing compliance (i.e., maximizing stiffness) subject to the finite element equilibrium equations, a volume constraint, and bounds on the design variables. The design domain is discretized using finite elements, and a design variable is assigned to every element. Structural topology optimization was introduced in [1], where a detailed description was given. Large-scale topology optimization problems can be solved using various numerical optimization methods. The optimal layout of a structural component is obtained by topology optimization [2] subject to predefined performance goals.

S. Mukherjee (B) · S. Dutta
Department of Civil Engineering, National Institute of Technology, 788010 Silchar, India
e-mail: [email protected]

B. Raghavan
Laboratoire de Genie Civil et Genie Mecanique EA 3913, Institut National des Sciences Appliquees de Rennes, Rennes, France

P. Breitkopf
Alliance Sorbonne Universites, Laboratoire Roberval FRE UTC-CNRS 2012, Universitè de Technologie de Compiègne, Compiègne, France

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_1

The main hindrances of topology optimization are:


1. Topology optimization solutions are considered impractical without additive manufacturing, since these solutions are difficult to interpret in standard CAD/CAE software.
2. A higher-resolution solution demands high computational effort, since multi-physics problems with complex interactions must be resolved.

The first issue can be addressed by a high-resolution density map, where accurate resolution of the equilibrium equations requires an FE model [3] that remains accurate for large-scale design problems. The usual approach, which associates a single material density with every finite element, ties the fidelity of the analysis to the resolution of the topology-optimized solution, so the finite elements must be refined accordingly. The traditional topology optimization method [2] progressed with h-refinement: the number of elements discretizing the structure was increased, and the results obtained were much more prominent [2], since the approach is simple to program [4] and the sensitivity analysis is direct. The major limitation of this method is its high computational time, irrespective of geometry or loading conditions, particularly where the stress field is singular, which can distort optimized solutions. The structural layout discretization can alternatively be refined by p-refinement of the finite element analysis [5] to obtain a topology-optimized solution: the same mesh resolution is kept, but the polynomial degree of the finite element interpolation [6] is increased. P-refinement in finite elements has seen widespread use in linear and nonlinear, static and dynamic engineering problems, for 2D and 3D applications in mechanics [7]. The p-version FEM [8] improves the stability of the topology-optimized numerical solution and mitigates the checkerboarding problem, which is otherwise avoided by a filtering radius. For problems with high element aspect ratios, the convergence rate of p-refinement outperforms the mesh (h-) refinement strategy at the same solver cost.
Different optimization solvers have been benchmarked [5] when applied to finite-element-based structural topology optimization [9], comparing minimum compliance values, volume, and mechanism design problems of different sizes. The objective of [10] was to show that non-conforming four-node finite elements reduce the numerical instability characterized by the checkerboard pattern. The convergence of the non-conforming element does not depend on the Lame parameter, and it exhibits limiting values that prohibit the formation of checkerboards while optimizing the topology of a structure; the method is employed to obtain stable numerical solutions with non-conforming elements. Structured quadrilateral finite elements were used in [11] to optimize the topology. They showed that low-order triangular elements can create checkerboard patterns, and a checkerboard-free triangular method was developed by restricting the design space: a multiscale design space was mapped to a single-scale design space, and a triangular mesh subdivision was employed for a checkerboard-free design. The numerical issue related to the checkerboard pattern was also shown [12] to be contained by the finite volume theory. The finite volume theory [13] satisfies the equilibrium equations at the subvolume level, together with continuity across interfaces of adjacent subvolumes, in continuum mechanics problems. This yields a numerically efficient computational model that obtains checkerboard-free topologies without filtering techniques.


The mesh dependence is addressed by the sensitivity filtering technique. A different formulation of structural topology optimization is proposed in [14], based on an FE discretization rather than a displacement-based approach; stresses are accounted for in addition to displacements. Dual variational principles are proposed [14] in continuous and discretized forms and solved using the method of moving asymptotes (MMA). This formulation yields 0–1 designs with checkerboard-free solutions. A new way of avoiding checkerboard patterns in optimized topologies is investigated in [15], using in-plane rotational degrees of freedom, since the checkerboard pattern is a general phenomenon when the optimality criteria method is used and the domain is discretized with standard finite elements. In [16], structural topology optimization was conducted using a new sensitivity filter with chamfered and rounded domains. This method yields optimal solutions free of checkerboarding and mesh dependence, and prevents one-node hinges in the optimized topology. Geometric and material nonlinearities are accommodated in structural topology optimization in [17]. A length scale is enforced by filtering the material density fields, which are penalized to remove intermediate densities; the problem of non-existent solutions is resolved by applying the filtering scheme to the solid-void topology problem. Compliant mechanisms are also designed by structural topology optimization. A stochastic direct search method was presented [18] for optimizing the topology of continuum structures: the element exchange method (EEM) switches intermediate solid elements into void elements and intermediate elements of higher value into solids, resulting in a more pronounced 0–1 topology. The objective function considered is compliance minimization, with element strain energy as the principal criterion and a checkerboard control scheme for eliminating checkerboard regions. The characteristics of EEM are presented [18] along with efficient and accurate solutions. FE meshes with Lagrangian elements show checkerboard problems, which can be eliminated by the hexagonal element [19] with its shape functions. In a hexagonal mesh there are two-node connections among elements and edge-based symmetry lines per element; Wachspress-type shape functions are used for the optimization of elements, the material distribution, and the imposition of a minimum length scale. The fundamental issues of structural topology optimization are explored in [20], presenting a generalized topology optimization procedure: accurate discretization of the design domain is suggested to avoid mesh dependence, the checkerboard pattern is hindered by higher-order elements and continuity analysis, and the whole process is applied to two- and three-dimensional problems to demonstrate its effectiveness. A classical density approach [21] is used for topology optimization, estimating the error of the finite element method to prevent checkerboarding and to provide higher regularity and robust results; the structural stiffness of a domain is maximized under a volume constraint. An element replaceable method (ERPM) for optimizing the topology of structures was proposed in [22] and compared to existing TO methods. A new checkerboard control algorithm is proposed in that work, proceeding by filling or deleting elements depending upon sensitivity values; results showed that the ERPM is efficient in solving static and dynamic problems. The simulated annealing (SA) technique [23] has also been applied to structural topology optimization. The multiresolution design variable (MRDV) is considered a numerical tool for enhancing the search by SA for topology optimization; the checkerboard pattern is dealt with by ADD and chromosome-repairing techniques, and MRDV is considered a powerful tool for enhancing SA performance in solving structural topology problems. A new TO method [24] is implemented to eliminate the checkerboard pattern by constraining the maximum weight; it converts topology patterns into a network for searching the optimized solution. A kriging-based model is also proposed [24] to predict sensitivities and reduce the heavy computational burden; the proposed method reduces the checkerboard pattern and significantly improves the final topology while reducing computational cost. A detailed study [25] describes the advantages and disadvantages of multi-resolution topology optimization with refinement of the polynomial order using the finite cell method. A length scale was applied to the solution to avoid overestimation of stiffness using filter methods, and the relation between stiffness overestimation and length scale was determined such that a high-resolution topology is maintained. A ground-structure topology optimization was formulated to investigate higher-order beams for thin-walled box beam structures. Timoshenko and Euler beams were used [26] to form the ground structure, since thin-walled closed beams were not suitable, and sectional deformations of the box beam, such as distortion and warping, were taken into account. A significant difference was shown [26] between the optimized beam layouts for Timoshenko beam theory and the higher-order thin-walled box beam. The material interpolation technique with the NMM technique was used [27] for continuum structural topology optimization, and the analysis accuracy was further improved [28] by increasing the order of the local approximation functions. It is shown that the HONM method can increase analysis accuracy in topology optimization problems and gives checkerboard-free solutions.
A methodology was presented [7] in which topology optimization and reduced-order modeling were combined. The higher-dimensional equations were projected onto a lower-dimensional space, where a proper orthogonal decomposition was constructed using reduced basis vectors; the vectors were updated in an "on-the-fly" manner. The topology optimization proceeds by integrating the global equations for higher-resolution solutions, and the precision of this method was compared with traditional methods for compliant mechanisms and compliance minimization problems. A first-order reliability method and a topology description function were used [29] for reliability calculation and topology optimization; the efficiency and validity of the proposed method were validated by numerical examples, and the effect of structural reliability indices on the final layout was discussed [29]. A new TO solver was proposed [30] in which inexact finite element analysis (FEA) and projected gradient descent (PGD) algorithms were combined; it was also shown that this method converges to a first-order critical point. Several important problems in topology optimization were solved using the first-order bilevel topology optimization (FBTO) method under external loads and self-weight, and FBTO was compared with other TO approaches for both two-dimensional and three-dimensional problems. Structural topology optimization was also applied to problems with fatigue constraints [31]. High-cycle fatigue analysis was used to calculate the principal stress, in order to obtain a topology whose mass withstands variable-amplitude loading conditions within a lifetime. Optimal conceptual designs of structural components were generated with fatigue life as the dimensioning factor; this method separates the fatigue analysis from the topology optimization, and the number of constraints is kept low since they are applied to stress clusters. A comparison between fatigue and static stress constraints is shown in [31]. An approach is presented [32] for solving truss topology optimization problems with uncertainty in the location of the structural nodes. The nodal locations in the truss are assumed random, and the uncertainty is modeled by probabilistic methods; the objective was to minimize the mean compliance of the truss structure under nodal uncertainty. The optimization problem is recast as a simple deterministic structural problem by a Neumann series expansion, and a gradient-based method is used to compare the accuracy and efficiency of the topology-optimized solutions. The high efficiency and accuracy of the solution were ensured using the proposed approach [32], where the optimal truss topology was impacted by the location of the nodes. The fundamental issues affecting the quality of structural topology results were explored [20], and a generalized topology optimization procedure was presented. Irregular design domains with geometric constraints were accepted, and the design domain was meshed with an automatic mesh generator; the number of elements needed to avoid mesh dependence was suggested. Checkerboard problems were avoided by higher-order elements, the porous topology was dealt with by continuity equations, and two filters were implemented for separating solids from voids. The process is applicable to 2D and 3D structures. A topology optimization approach for metamaterial design was proposed [33], exhibiting negative refraction at a given angle and frequency with high transmission. A finite slab was considered as the metamaterial, with designable axisymmetric unit cells subjected to exterior fields; the desired properties are achieved by tailoring the responses of the metamaterial slab to the exterior fields. This approach can be directly applied to physical problems. A topology optimization approach based on the level-set method was presented [34] to control high-frequency electromagnetic waves in periodic microstructures (unit cells). A homogenization method is applied to characterize the macroscopic high-frequency wave, an IGA discretization procedure is applied with B-spline basis functions, and the effectiveness of the proposed method was demonstrated against standard FE analysis. Optimizing the topology with higher polynomial orders, i.e., Q8 and Q9 elements, can reduce the checkerboard problem in topology optimization, which cannot be achieved with first-order elements. The checkerboard pattern can be described as alternating 0 (void) and 1 (solid) values within an immediate neighborhood of the domain. If the structure is discretized with second-order elements this error can be avoided, but minimizing the checkerboard pattern is rather a tertiary advantage of p-refinement, since regularization is still needed to avoid mesh dependence even when the structure is discretized with higher-order elements. In the traditional element-based approach where the mesh size is refined, the filtering radius should be chosen such that there is no region of the design domain in which the filtering radius contains no elements or no design variables. A recent study used the p-version FEM for topology optimization, in which finite elements with a refined polynomial order were used to generate solutions with higher resolution.


In this work, topology optimization with the p-refinement approach is conducted, and the structure is additionally discretized using separate analysis and density meshes; compliance values, computation time, and optimized topologies are compared.

1.2 Finite Element Analysis

Topology optimization seeks an optimized distribution of the relative density map, with a design variable in each element of the design domain, using nodal displacements [35] or stresses as state variables. A non-conforming voxel mesh of quadrilateral elements is used to discretize the domain; the quadrilateral elements considered [36] are 4-noded, 8-noded, and 9-noded. Small strains are assumed in a linear isotropic elastic material for the SIMP model, which scales the Young's modulus with the relative density ρe (Figs. 1.1 and 1.2):

E(\rho_e) = E_{\min} + \rho_e^{\,p} \, (E_0 - E_{\min}) \quad (1.1)
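As a concrete illustration, Eq. (1.1) can be evaluated elementwise. The values E0 = 1.0, Emin = 1e-9, and p = 3 below are conventional SIMP defaults assumed for the sketch, not values prescribed in this chapter:

```python
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP interpolation of Eq. (1.1): E(rho) = Emin + rho^p (E0 - Emin).

    rho  : elementwise relative densities in [0, 1].
    Emin : small positive floor that keeps the stiffness matrix of
           void elements non-singular.
    """
    rho = np.asarray(rho, dtype=float)
    return Emin + rho**p * (E0 - Emin)

# Penalization drives intermediate densities toward 0 or 1:
# a half-dense element contributes only 0.5^3 = 12.5% of full stiffness.
E = simp_modulus([0.0, 0.5, 1.0])
```

This is why the penalty exponent p makes "gray" intermediate densities uneconomical for the optimizer.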

The stiffness matrix for each element is derived from the strain energy and represented as:

U = \frac{1}{2} \sum_e \int_{\Omega_e} \{\sigma^e\}^t \{\varepsilon^e\} \, dV = \frac{1}{2} \sum_e \int_{\Omega_e} \{\sigma^e\}^t \{\varepsilon^e\} \, t \, dA = \frac{1}{2} \sum_e \int_{\xi} \int_{\eta} \{\sigma^e\}^t \{\varepsilon^e\} \, J \, t \, d\eta \, d\xi \quad (1.2)

Fig. 1.1 4-noded, 8-noded, and 9-noded isoparametric elements


Fig. 1.2 Isoparametric shape functions of Q4, Q8, and Q9 elements

where \{\sigma^e\} = [D]\{\varepsilon^e\} = [D][B]\{q^e\} and

[B] = [B](\xi, \eta) = \begin{bmatrix} N_{1,\xi} & 0 & N_{2,\xi} & 0 & \cdots & N_{n,\xi} & 0 \\ 0 & N_{1,\eta} & 0 & N_{2,\eta} & \cdots & 0 & N_{n,\eta} \\ N_{1,\eta} & N_{1,\xi} & N_{2,\eta} & N_{2,\xi} & \cdots & N_{n,\eta} & N_{n,\xi} \end{bmatrix} \quad (1.3)

giving the expression

U = \frac{1}{2} \sum_e \int_{\xi=-1}^{1} \int_{\eta=-1}^{1} \{q^e\}^t [B]^t [D] [B] \{q^e\} \, J \, t \, d\eta \, d\xi = \frac{1}{2} \sum_e \{q^e\}^t [K^e] \{q^e\} \quad (1.4)
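To make the [B] matrix of Eq. (1.3) concrete, the sketch below assembles it for a bilinear Q4 element at one Gauss point. The function name, node ordering, and unit-square nodal coordinates are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def q4_B_and_detJ(xi, eta, coords):
    """Strain-displacement matrix [B] (Eq. 1.3, n = 4) and Jacobian
    determinant for a Q4 element.

    coords : (4, 2) nodal coordinates, counterclockwise node ordering.
    Shape functions: N_i = 1/4 (1 + xi_i xi)(1 + eta_i eta).
    """
    # Derivatives of the four shape functions w.r.t. (xi, eta)
    dN = 0.25 * np.array([
        [-(1 - eta), -(1 - xi)],
        [ (1 - eta), -(1 + xi)],
        [ (1 + eta),  (1 + xi)],
        [-(1 + eta),  (1 - xi)]]).T        # shape (2, 4): rows d/dxi, d/deta
    J = dN @ coords                         # 2x2 Jacobian of the mapping
    dNxy = np.linalg.solve(J, dN)           # derivatives w.r.t. (x, y)
    B = np.zeros((3, 8))
    B[0, 0::2] = dNxy[0]                    # row 1: N_{i,x} in u-columns
    B[1, 1::2] = dNxy[1]                    # row 2: N_{i,y} in v-columns
    B[2, 0::2] = dNxy[1]                    # row 3: shear strain terms
    B[2, 1::2] = dNxy[0]
    return B, np.linalg.det(J)

coords = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # assumed unit square
B, detJ = q4_B_and_detJ(0.0, 0.0, coords)
```

Summing B^t D B det(J) over the Gauss points, as in Eq. (1.4), then gives the element stiffness matrix [K^e].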

Topology optimization using finite elements proceeds with the computation of the element stiffness matrix, integrating the product of the strain-displacement matrix and the constitutive (deformation) matrix at the Gauss points. The displacements and their derivatives are calculated for every element, together with the design variables, to minimize the compliance of the structure. This finite element procedure is not efficient for optimizing the topology on a high-resolution mesh, which is necessary for the smooth topologies required for manufacturing goals. A topology optimization approach using a dual mesh strategy is therefore proposed in this work, taking advantage of the higher-order interpolation functions: a relative density is assigned at each Gauss point of every element, ensuring a higher-resolution density map. The node numbering is represented in Fig. 1.3. Numerical integration at the Gauss quadrature points is used in the finite element analysis to construct the element stiffness matrix, but the polynomial degree [37] of the design variable field is difficult to match with a quadrature rule. The density field is treated as a local element density, and a lower-order Gauss quadrature is used for integrating the design variable. The finite element technique [38] used for topology optimization divides the structural domain into smaller regions called density elements; density elements of uniform size are considered for discretizing the domain.

Fig. 1.3 Node numbering for Q8 and Q9 elements

1.2.1 Finite Element Analysis Using Single and Dual Mesh

The topology of the MBB beam is generally optimized by scaling the element Young's moduli (stiffness) as a function of relative density using the SIMP model. In the single mesh case, one density per element maps directly to the actual stiffness of that element, and the sensitivities are calculated accordingly.


Fig. 1.4 A density and analysis mesh represented separately within a Q9 element [39] representing dual mesh

K^e_{\text{single}}(x_e) = \int_{\xi=-1}^{1} \int_{\eta=-1}^{1} B^t D(\rho_e) B \, J \, t \, d\eta \, d\xi = \sum_{i=1}^{N_g} B({}^e x_{g_i}, {}^e y_{g_i})^t \, D(x_e) \, B({}^e x_{g_i}, {}^e y_{g_i}) \, t \, J = (x_e)^p \sum_{i=1}^{N_g} B({}^e x_{g_i}, {}^e y_{g_i})^t \, D_0 \, B({}^e x_{g_i}, {}^e y_{g_i}) \, t \, J \quad (1.5)

The p-refinement approach improves the numerical solution (better geometry interpolation and numerical stability), but on its own it falls short of the high-resolution density map needed for 3D printing. It can be significantly improved, without increasing the cost of the numerical solver, by using separate analysis and sensitivity meshes for the optimization. Since higher-order elements increase the solver cost, a separate density grid is used for the optimization: each finite element is subdivided into nine sensitivity cells (the dual mesh), which raises the resolution of the output density map ninefold at the same computational cost (Fig. 1.4).

K^e_{\text{dual}} = K^e_{\text{dual}}(\rho_{e_1}, \ldots, \rho_{e_{N_g}}) = \sum_{i=1}^{N_g} B({}^e x_{g_i}, {}^e y_{g_i})^t \, D(\rho_{g_i}) \, B({}^e x_{g_i}, {}^e y_{g_i}) = \sum_{i=1}^{N_g} (\rho_{e_i})^p \, B({}^e x_{g_i}, {}^e y_{g_i})^t \, D_0 \, B({}^e x_{g_i}, {}^e y_{g_i}) \quad (1.6)
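The contrast between Eqs. (1.5) and (1.6) can be sketched numerically: with a single density per element, the penalized density factors out of the Gauss-point sum, whereas the dual-mesh element carries one density per Gauss (sensitivity) cell. The per-point matrices below are random symmetric stand-ins for B^t D0 B J, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Ng = 9                                     # 3x3 Gauss rule, as for a Q9 element
k_gauss = rng.random((Ng, 8, 8))           # stand-ins for B^t D0 B * J per point
k_gauss = (k_gauss + k_gauss.transpose(0, 2, 1)) / 2   # keep them symmetric
p = 3.0

def k_single(x_e):
    """Eq. (1.5): one density per element; penalty factors out of the sum."""
    return x_e**p * k_gauss.sum(axis=0)

def k_dual(rho):
    """Eq. (1.6): one density per Gauss cell of the dual (density) mesh."""
    rho = np.asarray(rho, dtype=float)
    return ((rho**p)[:, None, None] * k_gauss).sum(axis=0)

# With a uniform dual density the two formulations coincide, while a
# non-uniform dual density resolves the topology inside the element.
```

The design choice here is that the dual mesh refines only the density field; the analysis mesh, and hence the solver cost, is unchanged.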


1.3 Compliance Minimization Problem

The structure is meshed by finite elements, with the same E for every element. The optimization problem (i.e., compliance minimization) can be written as:

\min_{x_e}: \; c = f^t u = u^t K u = \sum_{e=1}^{N} (\rho_e(x))^p \, u_e^t K_e u_e \quad (1.7)

\text{s.t.:} \quad \sum_{e=1}^{N} v_e x_e \le V \quad \text{(volume fraction)}, \qquad \left( \sum_{e=1}^{N} x_e^p K_e \right) u = f \quad \text{(FE analysis)}, \qquad 0 < x_{\min} \le x_e \le 1 \quad \text{(design variable bounds)}
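A minimal sketch of evaluating the objective of Eq. (1.7) elementwise, assuming the displacement vector u has already been obtained from the FE solve; the function name, element DOF indices, and uniform-mesh assumption are illustrative:

```python
import numpy as np

def compliance(x, u, edofs, k0, p=3.0):
    """c = sum_e (x_e)^p u_e^t K_e u_e  (Eq. 1.7), with K_e = x_e^p * k0.

    x     : (N,) design variables (relative densities)
    u     : global displacement vector from the equilibrium solve K(x) u = f
    edofs : (N, ndof_e) element DOF indices into u
    k0    : (ndof_e, ndof_e) solid-element stiffness (uniform mesh assumed)
    """
    ue = u[edofs]                                   # (N, ndof_e) element displacements
    # einsum computes u_e^t k0 u_e for every element at once
    return float(np.sum(x**p * np.einsum('ei,ij,ej->e', ue, k0, ue)))
```

With the adjoint method, the same elementwise energies also give the sensitivities dc/dx_e up to the factor -p x_e^{p-1}.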

It can be seen that TC2 has a high overall acceleration rate (AR2) for low-dimensional problems and TC1 has a high overall acceleration rate (AR1) for high-dimensional problems, among all three termination criteria (TC1, TC2, and TC3).

Table 4.2 Comparison of overall acceleration rate (AR) for DE and OPDE

Group    N*   ΣNFE (DE)                          ΣNFE (OPDE)                        AR1 (%)   AR2 (%)   AR3 (%)
              TC1        TC2        TC3          TC1        TC2        TC3
D ≤ 10   11   419,050    1,307,879  449,238      345,578    1,019,389  434,404     17.53     22.06     3.30
D > 10   13   1,051,284  3,810,035  3,607,763    1,014,174  3,793,298  3,578,102   3.53      0.44      0.82

*N indicates the number of test functions in each group

4 A New Method of Population Initialization for Enhancing Performance … 57


S. Yadav and R. Angira

Table 4.3 Comparison of overall acceleration rate (AR) for OPDE and DEo

Group    N    ΣNFE (OPDE), TC1   ΣNFE (DEo), TC1   AR4 (%)
D ≤ 10   11   345,578            710,424           51.36
D > 10   13   1,014,174          1,069,717         5.19

Similarly, a comparison of the overall acceleration rate (AR) of OPDE with reference to DEo for TC1 is presented in Table 4.3; it is calculated as

AR_4 = \left(1 - \frac{\sum_{i=1}^{N} NFE(\mathrm{OPDE})_i}{\sum_{i=1}^{N} NFE(\mathrm{DE}_o)_i}\right) \times 100\% \quad (4.4)
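Equation (4.4) can be checked directly against the group totals in Table 4.3. The function name below is an assumption; the NFE totals are the values reported in the table:

```python
def acceleration_rate(nfe_a, nfe_b):
    """AR = (1 - sum(NFE_a) / sum(NFE_b)) * 100, as in Eq. (4.4)."""
    return (1.0 - sum(nfe_a) / sum(nfe_b)) * 100.0

# Group totals from Table 4.3 (already summed over each group's functions):
ar_low  = acceleration_rate([345_578], [710_424])      # D <= 10
ar_high = acceleration_rate([1_014_174], [1_069_717])  # D > 10
# ar_low rounds to 51.36 %, ar_high to 5.19 %, matching the table.
```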

It clearly shows that the overall acceleration rate for low-dimensional problems is higher than for high-dimensional problems. For 84.62% of cases (11 out of 13), OPDE outperforms DEo in group 2 (D > 10); for the first group (D ≤ 10), OPDE outperforms DEo in 90.91% of the cases (10 out of 11). Overall, OPDE outperforms DEo in 87.50% of cases (21 out of 24 functions). This indicates that the performance of OPDE is better than that of DEo, even though DEo starts with the fitter initial population. Nine convergence graphs comparing the performance of DE and OPDE on benchmark functions are shown in Fig. 4.1. These convergence profiles indicate that OPDE converges toward the optimal solution faster than DE. Intuitively, the termination criteria TC3 and TC2 are comparatively better than TC1 if one intends to seek a quality solution.

4.6 Conclusions

This paper presents and evaluates a new method of population initialization based on the opposite point method. This initialization technique is embedded in differential evolution (DE) to accelerate its convergence. The performance of the OPDE algorithm is evaluated on an extensive set of twenty-four well-known benchmark functions and compared with that of DEo [16]. The results indicate that the proposed OPDE algorithm outperforms DE and DEo on most functions, with a higher convergence rate. This shows that the population initialization method proposed in the present study is better than that reported in the literature [16]; the same method can therefore be used in other evolutionary algorithms to enhance their convergence speed. OPDE appears to be a promising algorithm for difficult problems encountered in science and engineering disciplines.
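The opposite-point initialization that OPDE embeds in DE can be sketched generically as follows. This is a minimal illustration of the opposition-based idea (cf. [7, 16]), not the authors' implementation; the sphere objective, bounds, and population size are arbitrary choices for the example.

```python
import numpy as np

def opposition_based_init(pop_size, low, high, fitness, rng):
    """Draw a random population, form its opposite population
    (x_opp = low + high - x), and keep the fittest pop_size points
    from the union (minimization)."""
    X = rng.uniform(low, high, size=(pop_size, low.size))
    X_opp = low + high - X                    # opposite points
    candidates = np.vstack([X, X_opp])        # union of both populations
    scores = np.array([fitness(x) for x in candidates])
    return candidates[np.argsort(scores)[:pop_size]]

rng = np.random.default_rng(0)
low, high = np.full(5, -10.0), np.full(5, 10.0)
sphere = lambda x: float(np.sum(x * x))       # toy objective, assumed here
pop = opposition_based_init(20, low, high, sphere, rng)
print(pop.shape)  # (20, 5)
```

The selected population is at least as fit as a purely random one of the same size, which is what yields the acceleration observed in the experiments.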

Fig. 4.1 Convergence profiles of DE and OPDE for three benchmark functions under different termination criteria. [Nine plots of fmin versus NFE: Easom (f11), Schwefel's 2.21 (f17), and Pathological (f24), each shown for termination criterion TC1 (panels a–c), TC2 (panels d–f), and TC3 (panels g–i); every panel plots fmin (DE) and fmin (OPDE).]

Acknowledgements Financial support from the Guru Gobind Singh Indraprastha University is gratefully acknowledged.

References

1. Goudos, S.K., Baltzis, K.B., Antoniadis, K., Zaharis, Z.D., Hilas, C.S.: A comparative study of common and self-adaptive differential evolution strategies on numerical benchmark functions. Procedia Comput. Sci. 3, 83–88 (2011). https://doi.org/10.1016/j.procs.2010.12.015
2. Deka, D., Datta, D.: Optimization of crude oil preheating process using evolutionary algorithms. In: Das, B., Patgiri, R., Bandyopadhyay, S., Balas, V.E. (eds.) Modeling, Simulation and Optimization. Smart Innovation, Systems and Technologies, vol. 206. Springer, Singapore (2021). https://doi.org/10.1007/978-981-15-9829-6
3. Das, S., Suganthan, P.N.: Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15, 4–31 (2011). https://doi.org/10.1109/tevc.2010.2059031
4. Storn, R., Price, K.: Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. J. Glob. Optim. 23 (1995)
5. Storn, R., Price, K.: Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11, 341–359 (1997)
6. Price, K., Storn, R., Lampinen, J.: Differential Evolution—A Practical Approach to Global Optimization. Springer, Berlin, Germany (2005)
7. Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.A.: Opposition-based differential evolution. IEEE Trans. Evol. Comput. 12, 64–79 (2008). https://doi.org/10.1109/tevc.2007.894200
8. Liu, J., Lampinen, J.: A fuzzy adaptive differential evolution algorithm. Soft. Comput. 9, 448–462 (2005). https://doi.org/10.1007/s00500-004-0363-x
9. Brest, J., Greiner, S., Boskovic, B., Mernik, M., Zumer, V.: Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans. Evolut. Comput. 10, 646–657 (2006). https://doi.org/10.1109/tevc.2006.872133
10. Teo, J.: Exploring dynamic self-adaptive populations in differential evolution. Soft. Comput. 10, 673–686 (2006). https://doi.org/10.1007/s00500-005-0537-1
11. Ali, M.M., Törn, A.: Population set-based global optimization algorithms: some modifications and numerical studies. Comput. Oper. Res. 31, 1703–1725 (2004). https://doi.org/10.1016/s0305-0548(03)00116-3
12. Tasoulis, D.K., Pavlidis, N.G., Plagianakos, V.P., Vrahatis, M.N.: Parallel differential evolution. In: Proceedings of 2004 Congress on Evolutionary Computation (CEC-2004), pp. 2023–2029 (2004). https://doi.org/10.1109/cec.2004.1331145
13. Noman, N., Iba, H.: Enhancing differential evolution performance with local search for high dimensional function optimization. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO 2005), pp. 967–974, Washington DC, USA, June 25–29 (2005). https://doi.org/10.1145/1068009.1068174
14. Fan, H.Y., Lampinen, J.: A trigonometric mutation operation to differential evolution. J. Glob. Optim. 27, 105–129 (2003). https://doi.org/10.1023/a:1024653025686
15. Kaelo, P., Ali, M.M.: Probabilistic adaptation of point generation schemes in some global optimization algorithms. Optim. Methods Softw. 21, 343–357 (2006). https://doi.org/10.1080/10556780500094671
16. Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.A.: A novel population initialization method for accelerating evolutionary algorithms. Comput. Math. Appl. 53, 1605–1614 (2007). https://doi.org/10.1016/j.camwa.2006.07.013
17. Esmailzadeh, A., Rahnamayan, S.: Opposition-based differential evolution with protective generation jumping. In: IEEE Symposium on Differential Evolution, pp. 1–8 (2011). https://doi.org/10.1109/sde.2011.5952059


18. Onwubolu, G.C., Babu, B.V.: New Optimization Techniques in Engineering. Springer, Berlin, New York (2004)
19. Storn, R.: On the usage of differential evolution for function optimization. In: Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, pp. 519–523, June 19–22 (1996). https://doi.org/10.1109/nafips.1996.534789
20. Sarker, R.A., Elsayed, S.M., Ray, T.: Differential evolution with dynamic parameters selection for optimization problems. IEEE Trans. Evolut. Comput. 18, 689–707 (2014). https://doi.org/10.1109/tevc.2013.2281528
21. Tizhoosh, H.R.: Reinforcement learning based on actions and opposite actions. In: International Conference on Advances and Applications of Artificial Intelligence and Machine Learning (AIML-2005), Cairo, Egypt (2005)

Chapter 5

Performance Analysis of Metaheuristic Methods in the Classification of Different Human Behavioural Disorders

Preeti Monga and Manik Sharma

Abstract The primary intention of this study is to evaluate the performance of various swarm intelligent algorithms for identifying behavioural disorders. Four separate datasets were investigated and analysed to assess behavioural problems such as anxiety, ADHD, and conduct disorder. Ten different swarm intelligence (SI)-based metaheuristic techniques, viz. slime mould algorithm (SMA), butterfly optimization algorithm (BOA), emperor penguin optimizer (EPO), whale optimization algorithm (WOA), ant lion optimization (ALO), grey wolf optimizer (GWO), firefly algorithm (FF), cuckoo search algorithm (CS), ant colony optimization (ACO), and particle swarm optimization (PSO), have been employed to solve the feature selection problem for behavioural disorders. The performance of the ten methods has been compared based on accuracy, sensitivity, and specificity metrics. PSO was shown to have the highest accuracy for three of the four datasets.

5.1 Introduction

Behavioural disorders are mental anomalies that cause repeating behaviour patterns which affect daily life activities [1]. In recent years, they have become more apparent in the academic environment as students deal with academic stress and associated problems. In India, estimates of the prevalence of behavioural disorders in children and adolescents range from 2.6 to 35.6% [2, 3]. These disorders can lead to life-threatening and chronic conditions, and their repercussions hurt the well-being of the individual, their family, and society, so identifying mental health issues as early as possible is crucial [4, 5]. Though clinical diagnosis by psychologists and psychiatric specialists is the most reliable method, it is critical to improve other diagnostic options. Artificial intelligence can augment a psychologist's function: algorithmic and data mining techniques may be used for clinical reasoning or logical operations on large databases. Medical diagnostic challenges demand a collection of disease-specific features. The main objective is to obtain an intelligent feature subset for enhanced diagnostic performance.

P. Monga (B) · M. Sharma
Department of CSA, DAV University, Jalandhar, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_5

5.1.1 Contribution

The key contribution of this study is to evaluate the performance of different SI-based metaheuristic feature selection (FS) techniques in terms of metrics such as accuracy, sensitivity, and specificity. This work emphasizes a comparative analysis of ten emerging SI techniques, slime mould algorithm (SMA, 2020) [16], emperor penguin optimizer (EPO, 2018) [17], whale optimization algorithm (WOA, 2016) [18], butterfly optimization algorithm (BOA, 2015) [19], ant lion optimization (ALO, 2015) [20], grey wolf optimizer (GWO, 2014) [21], firefly algorithm (FA, 2010) [22], cuckoo search (CS, 2009) [23], ant colony optimization (ACO, 1999) [24], and particle swarm optimization (PSO, 1995) [25], for finding the optimal set of features for the diagnosis of behavioural disorders (BDs). Four datasets based on the most common BDs, viz. ADHD, conduct disorder, and anxiety, were selected for this work. The experiments evaluated the average, maximum, and minimum accuracy, sensitivity, and specificity, along with the standard deviation.

5.2 Literature Review

Past research has shown that numerous human disorders have been successfully diagnosed using SI techniques [6–8]. While soft computing/ML techniques have been successful for other diseases, their importance and performance in the diagnosis of BDs are yet to be investigated. Katsis et al. [9] proposed a neuro-fuzzy model to predict anxiety disorder which achieved 84.3% accuracy. Grossi et al. [10] analysed EEG signals using random forest and ANN to diagnose autism in children; the random forest showed the best predictive accuracy, i.e. 92.8%. Koh et al. [11] used entropy features with ECG signals to classify ADHD and CD disorders; their model, using tenfold cross-validation with adaptive synthetic sampling (ADASYN), achieved an accuracy of 87.19%. Alabi et al. [12] designed a hybrid technique based on random forest and artificial neural network (RF-ANN) to diagnose mental disorders which achieved an accuracy of 74%. Mohana and Poonkuzhali [13] used feature relevance analysis to locate the best classifier for an autism dataset; using Runs Filtering, classification algorithms such as BVM, CVM, and MLR produced a high accuracy of 95.21% and correctly classified the test samples. Radhamani and Krishnaveni [14] proposed a model using MLP and SVM classifiers for the diagnosis of ADHD; compared to the SVM classifier, the MLP algorithm achieved a higher accuracy of 95.2% for ADHD data classification. Ahmed et al. [15] designed a technique using a convolutional neural network (CNN) for anxiety and depression diagnosis; the model achieves the highest accuracy of 96% for anxiety and 96.8% for depression.

5.3 Material and Methods

Swarm intelligent methods represent technological advancements that have been ubiquitously adopted to solve feature selection problems in a variety of domains. Many algorithms have been developed that are inspired by the cooperative behaviour of birds, insects, and other animals, making swarm intelligence one of the most researched areas within the field of nature-inspired computing. In swarm intelligence, the agents collectively form the population, and they communicate and collaborate with one another and with the environment. In this research, ten different metaheuristic algorithms have been used as feature selection techniques; brief details of these methods are given in Table 5.1.

Table 5.1 Classification of the ten chosen swarm intelligent algorithms

Algorithm | Year | Working principle | Author
Slime mould algorithm (SMA) | 2020 | Oscillation behaviour | Li et al. [16]
Emperor penguin optimizer (EPO) | 2018 | Radiation from the body of penguins regulates their body temperature | Dhiman and Kumar [17]
Whale optimization algorithm (WOA) | 2016 | Bubble-net attacking method | Mirjalili and Lewis [18]
Butterfly optimization algorithm (BOA) | 2015 | Food search and breeding behaviour | Arora and Singh [19]
Ant lion optimizer (ALO) | 2015 | Hunting behaviour | Mirjalili [20]
Grey wolf optimizer (GWO) | 2014 | Hunting behaviour of wolves | Mirjalili et al. [21]
Firefly algorithm (FF) | 2010 | Flashing behaviour of fireflies | Yang [22]
Cuckoo search (CS) | 2009 | Laying eggs in the nests of other bird species | Yang and Deb [23]
Ant colony optimization (ACO) | 1999 | Pheromones for communication | Dorigo and Di Caro [24]
Particle swarm optimization (PSO) | 1995 | Social interaction | Kennedy and Eberhart [25]

In recent years, several metaheuristic techniques have been applied to scientific applications, with significant impact in areas such as query optimization [26], disease diagnosis [27], sentiment analysis [28], stock data mining [29], bioinformatics [30], education [31], robotics [32], power systems [33], and cloud computing [34]. In complex scientific applications, it is difficult to solve an optimization problem having several features. To overcome this issue, one has to select an optimal set of features to obtain an effective solution. Several researchers have efficiently used different metaheuristic techniques to solve the FS problem in the above domains. In this research work, emphasis has been given to SMA, BOA, EPO, WOA, ALO, GWO, CS, FF, ACO, and PSO.
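In a wrapper formulation of feature selection, each agent of a swarm encodes a binary mask over the features, and its fitness is the accuracy of a classifier trained on the selected columns. The sketch below illustrates this coupling with a small hand-rolled k-nearest-neighbour scorer; the classifier, distance metric, and accuracy-only objective are assumptions for illustration, not the exact setup of this study.

```python
import numpy as np

def knn_accuracy(train_X, train_y, test_X, test_y, k=5):
    """Accuracy of a simple k-nearest-neighbour majority vote (illustrative)."""
    preds = []
    for x in test_X:
        nearest = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
        preds.append(np.bincount(train_y[nearest]).argmax())
    return float(np.mean(np.array(preds) == test_y))

def feature_subset_fitness(mask, train_X, train_y, test_X, test_y):
    """Fitness of a binary feature mask: classification accuracy on the
    selected columns (higher is better); an empty mask scores zero."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return knn_accuracy(train_X[:, cols], train_y,
                        test_X[:, cols], test_y)
```

A metaheuristic such as PSO or GWO then searches over masks, keeping the mask with the best fitness as the selected feature subset.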

5.3.1 Dataset

Four different datasets, D1 (anxiety dataset), D2 (disruptive behaviour disorder rating scale), D3 (mental health in tech survey), and D4 (manifest anxiety scale responses), have been considered to evaluate the performance of the ten metaheuristic techniques; they are described in Table 5.2. The two benchmark datasets were collected from Kaggle [35, 36], and the two remaining datasets were collected through a questionnaire. The questionnaire was formulated based on the SCARED anxiety scale (child version) [37] and the disruptive behaviour disorder rating scale (DBDRS) [38]. Both students and teachers participated in the data collection: the SCARED was filled in by the students and the DBDRS by the teachers. The data was collected through Google Forms and stored in Microsoft Excel, with rows as instances and columns as attributes. The binary class label was generated from the scoring system of each rating scale. The SCARED scale is a three-point scale from not true (0) to very true (2). Of the 1474 student entries, 589 were from class 10, 413 from class 11, and 470 from class 12. The DBDRS is rated on a four-point scale, where each item is scored from not at all (0) to very much (3). In normal educational settings, the anxiety rating scale and the DBD rating scale can be applied to aid in the diagnosis of children.

Table 5.2 Dataset description

Name of the dataset | No. of features* | No. of instances | Type of dataset | Interpretation of class label
Anxiety dataset (D1) | 41 | 1474 | Collected | 1 presence, 0 absence
Disruptive behaviour disorder rating scale (D2) | 45 | 879 | Collected | 1 presence, 0 absence
Mental health in tech survey (D3) | 22 | 893 | Benchmark | 1 presence, 0 absence
Manifest anxiety scale responses (D4) | 50 | 5411 | Benchmark | 1 presence, 0 absence

* Without a class label

Both the collected datasets have


personal attributes (such as name, school, class, and gender) and a set of features (to diagnose the disorder). All four datasets have a binary class label (to classify whether the person has a disorder or not). D3 is the benchmark OSMI mental health in tech survey 2016 dataset obtained from the Kaggle dataset repository. This dataset is based on the ongoing 2016 survey, which has 23 attributes and has received over 1400 responses. It intends to analyse the mental health of employees in the IT sector and investigates the prevalence of mental health illnesses among them. After preprocessing, 893 instances were selected from this dataset. The D4 dataset is obtained from the Taylor Manifest Anxiety Scale, which is available online. A total of 5410 instances were collected and stored on OpenPsychometrics.org, a non-profit organization dedicated to educating the public about psychology. The Taylor Manifest Anxiety Scale comprises 50 true-or-false questions to measure anxiety; this scale is used to distinguish between normal and pathologically anxious persons. Age and gender were the physical attributes. The 50 attributes were rated on a three-point psychometric scale: not answered (0), true (1), and false (2).
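The binary class labels of the collected datasets come from summing each respondent's item ratings and thresholding the total, as the scoring systems of the two scales prescribe. The snippet below sketches that step; the cut-off value shown is a placeholder assumption, not the clinical cut-off used in the study.

```python
def label_from_scale(item_scores, cutoff):
    """Sum the per-item ratings of one respondent and emit a binary
    class label: 1 (presence of disorder) if the total reaches the
    scale-specific cut-off, else 0 (absence)."""
    return 1 if sum(item_scores) >= cutoff else 0

# Items on a three-point scale (0 = not true ... 2 = very true);
# the cut-off of 25 here is purely illustrative.
print(label_from_scale([2, 1, 0, 2] * 10, 25))  # 1 (total = 50)
print(label_from_scale([0, 1, 0, 0] * 10, 25))  # 0 (total = 10)
```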

5.4 Results and Analysis

The entire dataset is divided into a training set (80%) and a test set (20%). To design a predictive classifier model based on the features selected by each SI algorithm, a K-fold cross-validation approach is used: ten folds are employed for each selected technique to evaluate performance. This approach assesses the efficiency of a model by training it on a sample of the input data and testing it on an unseen subset of that data. A K-nearest neighbour (KNN) classifier is used as the classification model to predict samples of behavioural disorders; it is a nonparametric supervised learning method that makes no assumptions about the data distribution. Samples without any disorder were bucketed into the "negative class" (0) and samples with a disorder into the "positive class" (1). The models were evaluated on various quality measures, based on the predictive classifier outputs described in Table 5.3 [39]. Several experiments were conducted by varying the testing/training ratio; empirically, the 80/20 ratio was found to be the best fit for this evaluation. Ten iterations were performed for each experiment, and the average accuracy, average sensitivity, and average specificity were reported for each dataset. The results are given in Tables 5.4, 5.5, 5.6, and 5.7; for each performance metric, the maximum, minimum, and average values along with the standard deviation are computed.
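The evaluation protocol described above (a held-out test split plus k-fold cross-validation with a plug-in classifier) can be sketched generically. This is not the authors' code; the 1-nearest-neighbour stand-in and the synthetic data are assumptions for illustration.

```python
import numpy as np

def kfold_indices(n, k, rng):
    """Shuffle n sample indices and split them into k roughly equal folds."""
    return np.array_split(rng.permutation(n), k)

def cross_validate(X, y, k, fit_predict, rng):
    """Mean hold-out accuracy over k folds; fit_predict(train_X,
    train_y, test_X) must return predicted labels for test_X."""
    accs = []
    for fold in kfold_indices(len(y), k, rng):
        train = np.setdiff1d(np.arange(len(y)), fold)
        preds = fit_predict(X[train], y[train], X[fold])
        accs.append(np.mean(preds == y[fold]))
    return float(np.mean(accs))

def one_nn(tr_X, tr_y, te_X):
    """Toy 1-nearest-neighbour classifier standing in for KNN."""
    return np.array([tr_y[np.argmin(np.linalg.norm(tr_X - x, axis=1))]
                     for x in te_X])

# Two well-separated synthetic classes, evaluated with 10 folds
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (20, 3)), rng.normal(5.0, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(cross_validate(X, y, 10, one_nn, np.random.default_rng(3)))  # 1.0
```

Averaging over folds, rather than reporting a single split, is what yields the stable mean and standard deviation figures reported in Tables 5.4 to 5.7.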

Table 5.3 Performance metrics

True positive (TP): number of samples with the presence of disorder predicted as a disorder.
False positive (FP): number of samples with the absence of disorder predicted as a disorder.
True negative (TN): number of samples with the absence of disorder predicted as the absence of disorder.
False negative (FN): number of samples with the presence of disorder predicted as the absence of disorder.
Accuracy: evaluated on the training data over 10 rounds; Accuracy = (TP + TN)/(TP + TN + FP + FN).
Std deviation: measures how dispersed the values are; a low standard deviation suggests most values are close to the mean, whereas a high value indicates that the values are spread over a wider range.
Sensitivity: the proportion of positive observations that are accurately predicted.
Specificity: the accuracy with which the model predicts real negatives, i.e. the fraction of true negatives for which the model makes accurate predictions.
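The metrics defined in Table 5.3 follow directly from the four confusion-matrix counts; the sketch below is a direct transcription of those formulas (the example counts are made up):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from confusion-matrix counts, per Table 5.3."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # share of real positives recovered
    specificity = tn / (tn + fp)   # share of real negatives recovered
    return accuracy, sensitivity, specificity

acc, sen, spe = classification_metrics(tp=90, tn=80, fp=20, fn=10)
print(acc, sen, spe)  # 0.85 0.9 0.8
```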

Table 5.4 compares all 10 algorithms based on the average, maximum, and minimum accuracy for each of the four datasets. For datasets D1, D2, and D4, the PSO algorithm exhibits the highest average accuracy, while for dataset D3 the GWO algorithm does. The lowest minimum accuracy for all four datasets was exhibited by the BOA algorithm. Table 5.5 depicts the average, maximum, and minimum sensitivity. For our study, sensitivity measures the proportion of individuals with the disorder who were accurately identified as having it. PSO gave the highest average sensitivity of 97.37% for D1, 99.08% for D2, and 90.79% for D4, while GWO gave 85.66% for D3. Table 5.6 depicts the average, maximum, and minimum specificity, which quantifies the proportion of those not suffering from the disorder that were accurately predicted. The PSO algorithm achieved an average specificity of 90.90%, 92.41%, and 90.17% for the D1, D2, and D4 datasets, and GWO achieved 95.90% for the D3 dataset. Table 5.7 depicts the standard deviation of the accuracy, sensitivity, and specificity.

Table 5.4 Average, max, and min accuracy of the ten algorithms (ACO, ALO, BOA, CS, EPO, FA, GWO, PSO, SMA, WOA) on the four datasets D1–D4. Bold values indicate the highest average, minimum, and maximum accuracy among the 10 algorithms. [The individual cell values could not be reliably recovered from this extraction; the highest average accuracies are 94.00% (PSO) on D1, 96.67% (PSO) on D2, 90.84% (GWO) on D3, and 90.58% (PSO) on D4.]

Table 5.5 Average, max, and min sensitivity of the ten algorithms on the four datasets D1–D4. Bold values indicate the highest average, minimum, and maximum sensitivity among the 10 algorithms. [The individual cell values could not be reliably recovered from this extraction; the highest average sensitivities are 97.37% (PSO) on D1, 99.08% (PSO) on D2, 85.66% (GWO) on D3, and 90.79% (PSO) on D4.]

Table 5.6 Average, max, and min specificity of the ten algorithms on the four datasets D1–D4. Bold values indicate the highest average, minimum, and maximum specificity among the 10 algorithms. [The individual cell values could not be reliably recovered from this extraction; the highest average specificities are 90.90% (PSO) on D1, 92.41% (PSO) on D2, 95.90% (GWO) on D3, and 90.17% (PSO) on D4.]

Table 5.7 Standard deviation of accuracy, sensitivity, and specificity for each algorithm on the four datasets D1–D4. [The individual cell values could not be reliably recovered from this extraction; the reported standard deviations are small, roughly in the 0.6–4.2% range, indicating stable performance across the ten folds.]

5.5 Conclusion

In this study, the performance of ten SI techniques (SMA, EPO, WOA, BOA, ALO, GWO, CS, FA, ACO, and PSO) has been explored for the diagnosis of behavioural disorders. Performance metrics such as accuracy, sensitivity, and specificity were calculated and examined, and four different datasets were mined using these ten metaheuristic techniques. The results show that PSO, one of the most widely used metaheuristic techniques, is still the most promising for selecting the optimal set of features for the diagnosis of BDs: the accuracy achieved by the PSO algorithm is 94.00% for D1, 96.67% for D2, and 90.58% for D4, while for the D3 dataset GWO attained the highest accuracy of 90.84%. In future, the performance of amalgamations of these methods can be explored; moreover, the use of chaotic maps, Lévy flight, and neural networks along with these metaheuristic techniques can be assessed.

References

1. Ogundele, M.O.: Behavioural and emotional disorders in childhood: a brief overview for paediatricians. World J. Clin. Paediatrics 7(1), 9 (2018)
2. Reddy, V.M., Chandrashekar, C.R.: Prevalence of mental and behavioural disorders in India: a meta-analysis. Indian J. Psychiatry 40(2), 149 (1998)
3. Datta, P., Ganguly, S., Roy, B.N.: The prevalence of behavioural disorders among children under parental care and out of parental care: a comparative study in India. Int. J. Pediatr. Adolesc. Med. 5(4), 145–151 (2018)
4. Saxena, S., Jané-Llopis, E.V.A., Hosman, C.: Prevention of mental and behavioural disorders: implications for policy and practice. World Psychiatry 5(1), 5 (2006)
5. McCarthy, G., Janeway, J., Geddes, A.: The impact of emotional and behavioural problems on the lives of children growing up in the care system. Adopt. Foster. 27(3), 14–19 (2003)
6. Gautam, R., Kaur, P., Sharma, M.: A comprehensive review on nature inspired computing algorithms for the diagnosis of chronic disorders in human beings. Prog. Artif. Intell. 8(4), 401–424 (2019)
7. Kaur, P., Sharma, M.: Diagnosis of human psychological disorders using supervised learning and nature-inspired computing techniques: a meta-analysis. J. Med. Syst. 43(7), 1–30 (2019)
8. Monga, P., Sharma, M., Sharma, S.K.: Performance analysis of machine learning and soft computing techniques in diagnosis of behavioral disorders. In: Electronic Systems and Intelligent Computing, pp. 85–99. Springer, Singapore (2022)
9. Katsis, C.D., Katertsidis, N.S., Fotiadis, D.I.: An integrated system based on physiological signals for the assessment of affective states in patients with anxiety disorders. Biomed. Sig. Process. Control 6(3), 261–268 (2011)
10. Grossi, E., Olivieri, C., Buscema, M.: Diagnosis of autism through EEG processed by advanced computational algorithms: a pilot study. Comput. Methods Prog. Biomed. 142, 73–79 (2017)
11. Koh, J.E., Ooi, C.P., Lim-Ashworth, N.S., Vicnesh, J., Tor, H.T., Lih, O.S., Fung, D.S.S., et al.: Automated classification of attention deficit hyperactivity disorder and conduct disorder using entropy features with ECG signals. Comput. Biol. Med. 140, 105120 (2022)
12. Alabi, E.O., Adeniji, O.D., Awoyelu, T.M., Fasae, O.D.: Hybridization of machine learning techniques in predicting mental disorder. Int. J. Human Comput. Stud. 3(6), 22–30 (2021)
13. Mohana, E., Poonkuzhali, S.: Categorizing the risk level of autistic children using data mining techniques. Int. J. Adv. Res. Sci. Eng. 4(1), 223–230 (2015)

76

P. Monga and M. Sharma

14. Radhamani, E., Krishnaveni, K.: Diagnosis and evaluation of ADHD using MLP and SVM classifiers. Indian J. Sci. Technol. 9(19), 1–7 (2016) 15. Ahmed, A., Sultana, R., Ullas, M.T.R., Begom, M., Rahi, M.M.I., Alam, M.A.: A machine learning approach to detect depression and anxiety using supervised learning. In: 2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), pp. 1–6. IEEE (2020, December) 16. Li, S., Chen, H., Wang, M., Heidari, A.A., Mirjalili, S.: Slime mould algorithm: a new method for stochastic optimization. Futur. Gener. Comput. Syst. 111, 300–323 (2020) 17. Dhiman, G., Kumar, V.: Emperor penguin optimizer: a bio-inspired algorithm for engineering problems. Knowl.-Based Syst. 159, 20–50 (2018) 18. Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016) 19. Arora, S., Singh, S.: Butterfly optimization algorithm: a novel approach for global optimization. Soft. Comput. 23(3), 715–734 (2019) 20. Mirjalili, S.: The ant lion optimizer. Adv. Eng. Softw. 83, 80–98 (2015) 21. Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014) 22. Yang, X.S.: Firefly algorithm, stochastic test functions and design optimisation (2010). arXiv preprint arXiv:1003.1409 23. Yang, X.S., Deb, S.: Cuckoo search via Lévy flights. In: 2009 World Congress on Nature and Biologically Inspired Computing (NaBIC), pp. 210–214. IEEE (2009, December) 24. Dorigo, M., Di Caro, G.: Ant colony optimization: a new meta-heuristic. In: Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), vol. 2, pp. 1470–1477. IEEE (1999, July) 25. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN’95International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995, November) 26. Sharma, M., Singh, G., Singh, R., Singh, G.: Analysis of DSS queries using entropy based restricted genetic algorithm. Appl. Math. Inform. Sci. 
9(5), 2599 (2015) 27. Sharma, M., Romero, N.: Future prospective of soft computing techniques in psychiatric disorder diagnosis. EAI Endorsed Trans. Pervasive Health Technol. 4(15), e1–e1 (2018) 28. Sharma, M., Singh, G., Singh, R.: Design of GA and ontology based NLP frameworks for online opinion mining. Recent Patents Eng. 13(2), 159–165 (2019) 29. Elhoseny, M., Metawa, N., El-hasnony, I.M.: A new metaheuristic optimization model for financial crisis prediction: towards sustainable development. Sustain. Comput. Inform. Syst. 35, 100778 (2022) 30. Calvet, L., Benito, S., Juan, A.A., Prados, F.: On the role of metaheuristic optimization in bioinformatics. Int. Trans. Oper. Res. (2022) 31. Gutjahr, G., Menon, R., Nedungadi, P.: Comparison of metaheuristics for the allocation of resources for an after-school program in remote areas of India. In: Symposium on Machine Learning and Metaheuristics Algorithms, and Applications, pp. 225–233. Springer, Singapore (2019, December) 32. Wong, W.K., Ming, C.I.: A review on metaheuristic algorithms: recent trends, benchmarking and applications. In: 2019 7th International Conference on Smart Computing and Communications (ICSCC), pp. 1–5. IEEE (2019, June) 33. Papadimitrakis, M., Giamarelos, N., Stogiannos, M., Zois, E.N., Livanos, N.I., Alexandridis, A.: Metaheuristic search in smart grid: a review with emphasis on planning, scheduling and power flow optimization applications. Renew. Sustain. Energ. Rev. 145, 111072 (2021) 34. Singh, H., Tyagi, S., Kumar, P., Gill, S.S., Buyya, R.: Metaheuristics for scheduling of heterogeneous tasks in cloud computing environments: analysis, performance evaluation, and future directions. Simul. Model. Pract. Theor. 111, 102353 (2021) 35. Mental Health in Tech Survey. (n.d.). Retrieved 10 Oct 2022, from https://www.kaggle.com/ datasets/osmi/mental-health-in-tech-survey 36. Manifest Anxiety Scale Responses. (n.d.). Retrieved 14 Oct 2022, from https://www.kaggle. 
com/datasets/lucasgreenwell/manifest-anxiety-scale-responses 37. Birmaher, B., Khetarpal, S., Brent, D., Cully, M., Balach, L., Kaufman, J., Neer, S.M.: The screen for child anxiety related emotional disorders (SCARED): scale construction and psychometric characteristics. J. Am. Acad. Child Adolesc. Psychiatry 36(4), 545–553 (1997)

5 Performance Analysis of Metaheuristic Methods in the Classification …

77

38. Silva, R.R., Alpert, M., Pouget, E., Silva, V., Trosper, S., Reyes, K., Dummit, S.: A rating scale for disruptive behavior disorders, based on the DSM-IV item pool. Psychiatr. Q. 76(4), 327–339 (2005) 39. Maloof, M.A.: Some basic concept of machine learning and data mining. In: Machine Learning and Data Mining for Computer Security, pp. 23–43. Springer, London (2006)

Chapter 6

The Influence of Wall Temperature on Total Pressure Drop During Condensation of R134a Inside a Dimpled Tube

N. V. S. M. Reddy, K. Satyanarayana, Rosang Pongen, and S. Venugopal

Abstract The present study discusses the influence of wall temperature during condensation of R134a in a dimpled tube. The inside diameter and length of the dimpled section were 8.38 mm and 500 mm, respectively. The dimpled tube specifications were a 5.08 mm axial pitch, 1 mm dimple diameter, and 4.99 mm dimple pitch. The numerical simulations were performed at mass velocities of 25–125 kg/m2 s, a saturation temperature of 313 K, and wall temperatures of 308 K and 310 K. The mathematical equations were solved using ANSYS FLUENT, and the flow field was considered transient. The variation of total pressure drop, volume fraction, and mixture velocity is discussed at different mass velocities and wall temperatures. Increasing the wall temperature reduced the total pressure drop and increased the void fraction. The wall shear stress and total pressure drop of smooth and dimpled tubes were also compared.

N. V. S. M. Reddy (B) · K. Satyanarayana · R. Pongen · S. Venugopal Department of Mechanical Engineering, National Institute of Technology Nagaland, Dimapur, India e-mail: [email protected] K. Satyanarayana e-mail: [email protected] R. Pongen e-mail: [email protected] S. Venugopal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_6


6.1 Introduction

Passive heat transfer techniques are used to enhance heat transfer rates with minimal pressure drop. The study of vapor volume fractions, mixture velocities, and wall shear stresses is crucial to enhancing the performance of heat-exchanging devices. The impact of mass velocity and saturation temperature during condensation of R134a inside a plain geometry was studied in [1]. The influence of the temperature difference between the refrigerant and the tube wall during condensation of R134a was studied in [2], and the influence of wall temperature in a smooth tube was studied in [3]. The impact of dimple geometry on the thermal and hydraulic properties of R134a was studied in [4]; the authors noticed that the frictional pressure drop increases with mass velocity. The influence of low mass fluxes on the hydraulic properties of refrigerant R134a in a dimpled geometry was studied in [5]. They found that the pressure drop decreased as the saturation temperature increased, and they extended their investigations to low-GWP alternatives to R134a in a dimpled tube [6, 7]. The effects of wall temperature and mass velocity in a dimpled tube were studied in this numerical analysis. Also, the wall shear stress and total pressure drop were compared with those of a smooth tube at different wall temperatures.

6.2 Governing Equations

The dimpled geometry specifications were a 1.0 mm dimple diameter, 4.99 mm dimple pitch, and 5.08 mm axial pitch. The inner diameter and length of the dimpled tube were 8.38 mm and 500 mm, respectively. The numerical simulations were performed at a mass velocity of 25–125 kg/m2 s, a vapor quality of 50%, a saturation temperature of 313 K, and wall temperatures of 308 K and 310 K. The governing equations were solved by ANSYS FLUENT, and the fluid domain was treated as turbulent, three-dimensional, and unsteady. The refrigerant R134a was used as the working substance. The VOF model [8–10] was used to identify the volume fraction contours and velocity profiles in the dimpled tube. The phase volume fractions satisfy

\alpha_l + \alpha_g = 1, \quad (6.1)

where \alpha is the volume fraction and the subscripts l and g denote the liquid and vapor phases.


The momentum and energy equations were considered as:

\frac{\partial}{\partial t}(\rho_m \vec{u}) + \nabla \cdot (\rho_m \vec{u}\,\vec{u}) = -\nabla P + \nabla \cdot \left[ \mu_{m,\mathrm{eff}} \left( \nabla \vec{u} + (\nabla \vec{u})^{T} \right) \right] + \rho_m \vec{g} + \vec{F}_\sigma, \quad (6.2)

\frac{\partial}{\partial t}(\rho_m E) + \nabla \cdot \left[ \vec{u}(\rho_m E + P) \right] = \nabla \cdot \left( k_{m,\mathrm{eff}} \nabla T \right) + S_E, \quad (6.3)

where k is the thermal conductivity, \rho the density, and \vec{F}_\sigma the surface tension force. The mathematical formulations were solved using the ANSYS FLUENT software. The standard k-\varepsilon turbulence model was implemented to solve for the turbulence parameters.
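In the VOF framework, the constraint of Eq. (6.1) lets mixture properties such as the mixture density be formed as volume-fraction-weighted averages of the phase properties. A minimal sketch of that bookkeeping follows; the saturated R134a density values are rough illustrative assumptions, not values taken from the study.

```python
# VOF mixture bookkeeping: Eq. (6.1) gives alpha_l = 1 - alpha_g, and a
# mixture property is the volume-fraction-weighted average of the phase
# properties (standard VOF convention).

def mixture_property(alpha_g: float, prop_l: float, prop_g: float) -> float:
    """Volume-fraction-weighted mixture property."""
    alpha_l = 1.0 - alpha_g  # Eq. (6.1): alpha_l + alpha_g = 1
    return alpha_l * prop_l + alpha_g * prop_g

# Rough illustrative (assumed) saturated R134a densities near 313 K, in kg/m^3:
rho_l, rho_g = 1147.0, 50.0
print(mixture_property(0.5, rho_l, rho_g))  # mixture density at 50% void fraction
```

The same weighting is applied to viscosity and thermal conductivity when forming the effective mixture coefficients in Eqs. (6.2) and (6.3).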

6.3 Results and Discussions

Figure 6.1 represents the influence of mass velocity and tube wall temperature on the total pressure drop in a dimpled geometry. The tube wall temperatures were considered at 308 and 310 K. The total pressure drop decreased with rising tube wall temperature and increased with rising refrigerant mass velocity. A higher refrigerant mass flux increases the two-phase flow velocity, which enhances the total pressure drop during condensation. Also, the dimples on the tube surface create turbulence, which increases the frictional pressure drop. Figure 6.2 shows the effect of mass velocity and tube wall temperature on the void fraction at mass velocities ranging between 25 and 125 kg/m2 s. The volume fraction in the dimpled geometry increases as the mass velocity rises. Also, the void fraction increases as the tube wall temperature increases from 308 to 310 K during condensation. Figure 6.3 compares the influence of refrigerant mass velocity and tube wall temperature on the mixture velocity of the refrigerant in a dimpled geometry at mass velocities of 25–125 kg/m2 s. An increase in mass velocity enhances the two-phase flow of the working substance, increasing the refrigerant mixture velocity, and the turbulence created by the dimples raises the axial flow velocity of the refrigerant. Also, at constant mass flux, the refrigerant mixture velocity was almost the same at both wall temperatures of 308 and 310 K; the figure shows only a slight variation in the mixture velocity at different wall temperatures.


Fig. 6.1 Influence of mass velocity and tube wall temperature on total pressure drop in a dimpled geometry

Fig. 6.2 Influence of wall temperature on void fraction inside a dimpled tube


Fig. 6.3 Impact of wall temperature on two-phase mixture velocity in a dimpled geometry

Figure 6.4 shows the velocity contours in the dimpled tube during condensation of R134a, presented at a tube wall temperature of 310 K and mass velocities from 75 to 125 kg/m2 s. The results show that the mixture velocity rises with mass velocity. The velocity was zero at the tube wall and reached its maximum at the center, increasing with mass velocity. The axial component also raises the flow velocity of the mixture. Figure 6.5 illustrates the comparison of the total pressure drop inside smooth and dimpled geometries with tetrafluoroethane. Figure 6.5a, b represents the total pressure drop comparison at wall temperatures of 308 and 310 K during condensation. The total pressure drop was 36.28% higher for the dimpled tube at a wall temperature of 308 K and 39.54% higher at a wall temperature of 310 K. Thus, the total pressure drop rose on modifying the geometry to a dimpled tube during condensation of R134a. This is because the frictional pressure drop increases when the tube structure is modified to a dimpled geometry, which enhances the total pressure drop; the dimple protrusions generate turbulence inside the two-phase flow, resulting in a higher pressure drop. Figure 6.6 illustrates the volume fraction contours inside the dimpled geometry, presented at a tube wall temperature of 308 K and mass velocities of 25, 50, and 75 kg/m2 s. The colors red and blue indicate the vapor and liquid phases, respectively. The figure shows the volume fraction contours at the outlet portion of the test section. From the results obtained, the pure vapor before condensation was converted to a liquid–vapor mixture after condensation, and the


Fig. 6.4 Velocity contours at a tube wall temperature of 310 K and mass velocities of a 75 kg/m2 s, b 100 kg/m2 s, and c 125 kg/m2 s

Fig. 6.5 Comparison of total pressure drops inside a plain and dimpled geometry at a 308 K and b 310 K


Fig. 6.6 Volume fraction contours at a wall temperature of 308 K and mass velocities of a 25 kg/m2 s, b 50 kg/m2 s, and c 75 kg/m2 s

entire condensed liquid refrigerant stagnated at the bottom of the geometry due to gravity. Also, stratified and stratified-wavy flow structures were obtained during condensation. Figure 6.7 shows the comparison of wall shear stress inside smooth and dimpled tubes, presented at a tube wall temperature of 308 K and mass velocities of 75, 100, and 125 kg/m2 s. The shear stress along the tube walls increased with the refrigerant mass velocity from 75 to 125 kg/m2 s, and the flow structures were stratified.


Fig. 6.7 Comparison of wall shear stress at a wall temperature of 308 K and mass velocities of 75, 100, and 125 kg/m2 s in a smooth tube and b dimple geometry

6.4 Conclusion

The influence of refrigerant mass flux and tube wall temperature was studied inside a dimpled geometry with an 8.38 mm inside diameter and a 9.54 mm outside diameter during condensation of R134a. The numerical simulations were performed at mass velocities of 25–125 kg/m2 s, a saturation temperature of 40 °C (313 K), and wall temperatures of 308 K and 310 K. The governing equations were solved by ANSYS FLUENT. The total pressure drop rose by 36.28 and 39.54% on changing the smooth structure to a dimpled structure at tube wall temperatures of 308 and 310 K, respectively, and the flow structures and velocity contours during condensation were presented. Also, the tube wall shear stress was compared between the smooth and dimpled tubes.


References

1. Dalkilic, A.S., Laohalertdecha, S., Wongwises, S.: Experimental investigation of heat transfer coefficient of R134a during condensation in vertical downward flow at high mass flux in a smooth tube. Int. Commun. Heat Mass Transf. 36(10), 1036–1043 (2009)
2. Arslan, G., Eskin, N.: Heat transfer characteristics for condensation of R134a in a vertical smooth tube. Exp. Heat Transf. 28(5), 430–445 (2015)
3. Macdonald, M., Garimella, S.: Effect of temperature difference on in-tube condensation heat transfer coefficients. J. Heat Transf. 139(1) (2017)
4. Aroonrat, K., Wongwises, S.: Experimental study on two-phase condensation heat transfer and pressure drop of R-134a flowing in a dimpled tube. Int. J. Heat Mass Transf. 106, 437–448 (2017)
5. Reddy, N.V.S.M., Satyanarayana, K., Venugopal, S.: Influence of saturation temperature on pressure drop during condensation of R-134a inside a dimpled tube: a numerical study. Theor. Found. Chem. Eng. 56(3), 395–406 (2022)
6. Reddy, N.V.S.M., Satyanarayana, K., Venugopal, S.: Comparative numerical study of R134a and low global warming potential refrigerants during condensation inside a smooth and dimpled tube. Heat Mass Transf. 59(3), 393–408 (2023)
7. Reddy, N.V.S.M., Satyanarayana, K., Venugopal, S.: Numerical investigation during condensation of R134a inside a smooth and dimpled tube: comparison with low-GWP alternatives R1234yf, R1234ze and R290. Theor. Found. Chem. Eng. 56(5), 791–801 (2022)
8. Satyanarayana, K., Reddy, N., Venugopal, S.: Numerical investigation of single turn pulsating heat pipe with additional branch for the enhancement of heat transfer coefficient and flow velocity. Heat Transf. Res. 52(4), 45–62 (2021)
9. Satyanarayana, K., Reddy, N.V., Venugopal, S.: Numerical study to recover low-grade waste heat using pulsating heat pipes and a comparative study on performance of conventional pulsating heat pipe and additional branch pulsating heat pipe. Numer. Heat Transf. Part A Appl. 83(3), 248–264 (2023)
10. Hirt, C.W., Nichols, B.D.: Volume of fluid (VOF) method for the dynamics of free boundaries. J. Comput. Phys. 39, 201–225 (1981)

Chapter 7

Optimisation of Biodiesel Production Using Heterogeneous Catalyst from Palm Oil by Taguchi Method

Bidisha Chetia and Sumita Debbarma

Abstract Over the past few years, biodiesel has drawn much interest as a clean substitute for conventional diesel. However, not all fatty acid chains are converted to alkyl esters in the biodiesel synthesis process, and this phenomenon affects the quality and yield of the biodiesel. To achieve the highest yield, optimisation of biodiesel production is essential. Taguchi's signal-to-noise ratio (SNR) was applied in this work to maximise biodiesel yield using the L9 orthogonal array. This study produced biodiesel from palm oil by optimising the process parameters of catalyst concentration (1.5–3.5 w/w%), methanol-to-oil ratio (6:1–12:1), reaction temperature (45–65 °C), and reaction time (60–180 min). The experimental results showed a maximum yield of 95% at the optimised methanol-to-oil ratio of 12:1, a reaction time of 180 min, a reaction temperature of 65 °C, and a catalyst concentration of 1.5%. ANOVA reveals that reaction temperature has the most significant effect on biodiesel yield, with a contribution of 72.62%. The physicochemical properties of the produced biodiesel have also been estimated.

7.1 Introduction

In today's energy scenario, biofuels have become the most promising option among alternative energy sources. The limited sources, increasing demand, and detrimental effects of fossil fuels on the environment have encouraged researchers and scientists to opt for alternative or renewable fuels. Burning fossil fuels also puts humans at substantial acute and chronic health risk through prolonged exposure to air pollutants such as NOx, carbon monoxide (CO), SOx, and particulate matter (PM) [16]. Another justification for replacing fossil fuels with biofuels is their non-toxic behaviour towards the environment. Among renewable fuels, biodiesel, bioethanol, and biogas have gained prominent attention as alternatives for existing fossil fuels

B. Chetia (B) · S. Debbarma Department of Mechanical Engineering, NIT Silchar, Cachar, Assam 788010, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_7


or non-renewable fuels [4]. Some barriers to the penetration of renewable energy in society have, however, been identified: feedstock availability and production costs are the main barriers to the large-scale acceptance of biodiesel. Instead of being used directly as fuel in CI engines, raw biodiesel performs better when blended with conventional diesel [15]. However, the growing requirement for a sufficient quantity of edible oils has prompted questions about food security and the ethics of using food as fuel. The cost of first-generation biodiesel fuel has increased by around 30% compared to petroleum-derived fuel due to the usage of expensive, unprocessed raw oils [7]. Sourcing and locating affordable and underutilised feedstock are crucial for biodiesel manufacturing. Consequently, a recent trend has come to light regarding manufacturing biodiesel at a reduced cost by concentrating on non-edible feedstock, including discarded cooking oils, waste animal fats, and non-edible vegetable oils [14, 22]. Biodiesel made from these oils is known as second-generation biodiesel. To make biodiesel more comparable with conventional fuels, alternative low-cost raw materials that can substitute conventional ones are being researched for the production process [10]. Among the various production methods, the transesterification reaction is found to be suitable and convenient. The transesterification reaction converts various triglyceride sources into biodiesel. The type of alcohol utilised in the transesterification, such as methanol or ethanol, the catalyst weight, and the alcohol/oil molar ratio all have a significant impact on the reaction yield and the characteristics of the fuel produced. Karabas [17] found the optimal concentration of KOH catalyst to be 0.7 wt% in producing biodiesel from crude acorn kernel oil; the yield with kernel oil was 90% under this optimal condition. Dhawane et al.
[9] obtained an optimised condition in biodiesel production from rubber seed oil using an iron-doped catalyst with a loading of 4.5%. The choice of an appropriate catalyst in accordance with the composition of the oil is one of the key challenges facing the biodiesel synthesis process. The functional efficacy and adverse consequences of catalysts during transesterification have received extensive research and debate. Homogeneous catalysts, chemically created substances that are toxic as well as caustic in nature, are frequently used in the literature to synthesise biodiesel. When a homogeneous catalyst is used, the cost of disposing of the wastewater produced when the catalyst is separated from the reaction mixture increases. On the other hand, a heterogeneous solid catalyst is considered favourable because it is not soluble in solvents or esters, making it simple to recover and reuse. The production of the waste stream is reduced, which simplifies subsequent separation and purification. Heterogeneous catalysts are typically non-corrosive and lower the production of pollutants, thereby improving the transesterification process [13, 29]. The porous nature and greater surface area of the catalyst can aid triglyceride molecules in attaching themselves and reacting effectively. By employing heterogeneous catalysts and optimising the yield process, biodiesel production can be improved, which will help address the growing issue of traditional feedstocks and their costs. A drawback of heterogeneous catalysts is the formation of three phases with oil and alcohol; the diffusion limitation slows down the reaction rate. However, using a co-solvent can help to avoid this issue [3]. Earlier studies report the use of lipase [1], magnesium oxide


[2], and metal oxides [31, 32] as catalysts in biofuel production. An acidic catalyst of H2SO4 was used by Murmu et al. [23] for biodiesel production from Kusum oil, with an optimal condition obtained at 4% v/v of H2SO4. Bio-based heterogeneous catalysts are also an emerging topic because of their strong catalytic activity. Many researchers are using waste agro-based materials as well as animal waste to produce biodiesel, which aids waste management. Buasri et al. [6] used scallop waste shells as a catalyst in the synthesis of fatty acid methyl ester (FAME) from palm oil; the scallop shells showed high catalytic activity with ecologically friendly properties. Waste mango peels (Mangifera indica) were employed by Laskar et al. [21] as a catalyst in the generation of biodiesel from soybean oil. They obtained a yield of 98% at 6% catalyst weight, and the catalyst could be reused up to the 4th cycle. Sharma et al. [26] used calcined chicken egg shells, which are rich in calcium oxide, for the synthesis of Karanja oil methyl ester and obtained a biodiesel yield of 95%. The optimised values obtained were a catalyst concentration of 2.5 wt%, a molar ratio of 8:1 (methanol/oil), and a reaction time of 2.5 h at 65 ± 0.5 °C, which resulted in a high conversion of 97.43% from triglyceride to methyl ester. Balaji and Niju [5] used calcined red banana (Musa acuminata) peduncle (CRBP) for the transesterification of Ceiba pentandra oil to Ceiba pentandra methyl ester (CPME). They characterised the catalyst by various techniques, such as scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM–EDS), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and Brunauer–Emmett–Teller (BET) analysis. With process settings of 2.68% catalyst weight percentage, an 11.46:1 methanol-to-oil ratio, and a 106 min reaction time, a maximum CPME conversion of 98.73 ± 0.50% was attained.
The temperature and stirring speed were maintained at 65 °C and 450 rpm, respectively. The utilisation of conventional experimental design methodologies along with manual optimisation is complicated and time-consuming. Additionally, as process parameters and levels increase, the number of trials rises substantially. The Taguchi technique addresses these issues by reducing the design variance to reach mean target values for the output parameters, and it studies all the design elements using the least possible number of experiments. Authors have used this technique [1, 2, 6, 9, 17, 19, 23, 24] to arrive at the optimal solutions for their respective conditions. Kumar et al. [20] obtained optimised conditions of 50 °C, 1% catalyst weightage, 90 min reaction time, and a 6:1 methanol-to-oil ratio for the production of Manilkara zapota methyl ester. Karmakar et al. [18] used the Taguchi L16 approach for the production of biodiesel from castor oil. They obtained optimised conditions of a reaction temperature of 50 °C, a methanol-to-oil ratio of 20:1, a reaction duration of 1 h, a 1% w/w catalyst concentration, and an agitation speed of 700 rpm, which gave a maximum conversion of 90.83%. They also statistically analysed the parameters to define relationships using analysis of variance (ANOVA). These Taguchi-optimised conditions produced an experimental yield of 94.83%. The literature research leads to the conclusion that the Taguchi technique is a useful tool for optimising the process parameters for the generation of palm oil biodiesel.


Fig. 7.1 a-c Process of making calcined wood ash

7.2 Materials and Methodology

In this study, the biodiesel feedstock used is palm oil, collected from a local grocery store, and the wood chips used for the catalyst were collected from a furniture shop near the college campus. All other chemicals used, such as methanol, are of analytical grade. The molecular weight and density of the oil were measured to be 854.3 g/mol and 0.792 g/cc, respectively.

7.2.1 Catalyst Preparation

Firstly, the wood sawdust is washed with deionized water and then sun-dried for 3 days continuously. The dried sawdust is then burned in open air to make ash. The prepared ash was noticeably coarse; with the help of a mortar, it is ground to powder form and sieved to a uniform size. The powdered ash is calcined at 700 °C for 4 h in a muffle furnace to remove the carbon content. Finally, the prepared catalyst is kept in a desiccator for further use. Figure 7.1 shows the process of making calcined wood ash from dried wood sawdust.

7.2.2 Characterisation of Catalyst

7.2.2.1 SEM Analysis

Scanning electron microscopy (SEM) is used to determine the surface texture and morphology of the prepared calcined catalyst by using a ZEISS instrument. The prepared catalyst is mounted on a stub before being observed by the instrument.


7.2.3 Transesterification Process

For biodiesel production, 100 ml of the palm oil is filtered with filter paper and preheated at 60 °C for 1 h to remove any suspended particles and the moisture content present in the oil. A certain amount of methanol and the prepared catalyst are mixed to form a solution in an Erlenmeyer flask. The transesterification reaction occurs once the solution is poured into the oil sample at a certain temperature and stirred with a magnetic stirrer at a speed of 400 rpm. The flask is connected to a reflux condenser to prevent methanol evaporation. The reaction conditions considered for the transesterification reaction are the methanol-to-oil ratio, catalyst concentration, reaction temperature, and reaction time. Once the desired reaction time is reached, the flask is immediately put in a water bath to stop any further reaction. After that, the oil solution is centrifuged to separate the catalyst and produce FAME. The FAME is rinsed with warm distilled water, followed by a drying process to remove water particles. Figure 7.2 depicts the transesterification reaction set-up, and Fig. 7.3 shows the produced palm oil methyl ester (POME). This procedure was repeated for each experimental run according to the design matrix. Eventually, the biodiesel yield percentage was calculated as follows:

\text{Biodiesel yield}\,(\%) = \frac{\text{Weight of biodiesel produced}}{\text{Weight of oil}} \times 100. \quad (7.1)

Fig. 7.2 Transesterification reaction


Fig. 7.3 POME

7.2.4 Design of Experiment Using Taguchi Methodology

Dr. G. Taguchi developed a revolutionary technique to explore the effects of process variables on the mean and variance of the output characteristic that governs the proper operation of a process. To handle a large number of process-influencing parameters, this method of experimental design employs orthogonal arrays. Its advantage over other methods is that it considers only a limited number of parameter-pair combinations. The strategy provides a way of collecting data to identify the elements that most significantly affect product quality with the fewest possible experiments, saving time and money. This approach is quite efficient when there are a small number of factors, few interactions between them, and a small number of substantial contributions [27, 28]. In this study, only four essential parameters at three levels have been taken into account. Reaction time, reaction temperature, catalyst type and concentration, alcohol type and quantity, and stirring rate are only a few of the variables that might affect the production of biodiesel. By selecting the L9 orthogonal array, the impacts of the four selected parameters, namely reaction time, methanol-to-oil ratio, reaction temperature, and catalyst concentration, at three distinct levels have been investigated. Table 7.1 shows the design strategy for this work. The factors are coded as A, B, C, and D, with the three distinct levels coded from lower to higher values as 1, 2, and 3.


Table 7.1 Design of experiment

Factors                       Designation   Level 1   Level 2   Level 3
Methanol-to-oil ratio         A             6:1       9:1       12:1
Reaction time (min)           B             60        120       180
Reaction temperature (°C)     C             45        55        65
Catalyst concentration (%)    D             1.5       2.5       3.5
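The L9 selection described above can be made concrete. Below is a sketch of the standard Taguchi L9(3^4) orthogonal array (coded levels 1–3) mapped onto the factor levels of Table 7.1; the array layout is the textbook one, and the chapter's actual run ordering may differ.

```python
# Standard Taguchi L9(3^4) orthogonal array: 9 runs cover 4 factors at
# 3 levels each, instead of the 3^4 = 81 runs of a full factorial.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Factor levels from Table 7.1 (factors A-D at levels 1-3).
levels = {
    "A: methanol-to-oil": ["6:1", "9:1", "12:1"],
    "B: time (min)": [60, 120, 180],
    "C: temperature (C)": [45, 55, 65],
    "D: catalyst (%)": [1.5, 2.5, 3.5],
}

# Translate coded levels into the actual settings of each experimental run.
for run, coded in enumerate(L9, start=1):
    actual = [vals[c - 1] for c, vals in zip(coded, levels.values())]
    print(run, actual)
```

Orthogonality means every level of every factor appears three times, and every pair of levels of any two factors appears exactly once; this balance is what lets the main effects be separated from only nine trials.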

7.2.5 Signal-to-Noise Ratio (SNR) and Analysis of Variance (ANOVA)

To quantify the deviation of the experimental value of a performance attribute from its desired value, Taguchi recommended using a loss function. An SNR is derived from the value of the loss function; this logarithmic function of the expected outcome serves as the objective of the optimisation problem, and the SNR measures the deviation of the quality characteristic from its predicted value. Depending on the objective of the problem, the Taguchi method employs three types of SNR: Larger-the-Better (LTB) for maximisation problems, Smaller-the-Better (STB) for minimisation problems, and Nominal-the-Better (NTB) for problems requiring normalisation. The experimental data were evaluated using the SNR to determine the ideal parameter combinations. Because the main purpose of the current study is to maximise biodiesel yield, the LTB criterion has been employed, and the best level of a design parameter is therefore the one with the highest SNR [25]. SNR analysis identifies the optimal values and settings of the parameters for the highest biodiesel yield, but it cannot determine which factors had a significant impact on the output or the proportion that each factor contributed to it. For that purpose, an analysis of variance (ANOVA) of the response data is employed, which requires the computation of sums of squares [1, 11, 20]. The SNR, measured in dB, is calculated as follows for the NTB, STB, and LTB models:

Nominal-the-Better:  $\mathrm{SNR}_i = 10 \log_{10}\left( \dfrac{\bar{y}_i^2}{s_i^2} \right)$,   (7.2)

Smaller-the-Better:  $\mathrm{SNR} = -10 \log_{10}\left( \dfrac{1}{n} \sum_{j=1}^{n} y_j^2 \right)$,   (7.3)

Larger-the-Better:  $\mathrm{SNR} = -10 \log_{10}\left( \dfrac{1}{n} \sum_{j=1}^{n} \dfrac{1}{y_j^2} \right)$,   (7.4)


B. Chetia and S. Debbarma

where

$\bar{y}_i = \dfrac{1}{n} \sum_{j=1}^{n} y_{i,j}$  (mean value of the response),   (7.5)

$s_i^2 = \dfrac{1}{n-1} \sum_{j=1}^{n} \left( y_{i,j} - \bar{y}_i \right)^2$  (variance),   (7.6)

where i is the experiment number, j is the trial number, and n is the number of trials. The following equations were used to determine the proportion of contribution:

$\%\,\text{contribution of factor} = \dfrac{SS_f}{SS_T} \times 100$,   (7.7)

$SS_f = \sum_{j=1}^{3} n \left[ (\mathrm{SNR}_L)_{fj} - \mathrm{SNR}_T \right]^2$,   (7.8)

$SS_T = \sum_{i=1}^{9} \left[ \mathrm{SNR}_i - \mathrm{SNR}_T \right]^2$,   (7.9)

where SSf is the sum of the squares for f th parameter and SST is the total sum of the squares of all parameters. N is the number of experiments at level j of factor f .

7.3 Results and Discussion

7.3.1 SEM Analysis

The SEM micrographs of calcined wood ash are depicted in Fig. 7.4, captured at magnifications of 30.00 KX (Fig. 7.4a) and 50.00 KX (Fig. 7.4b). The figure illustrates the porous, spongy, and agglomerated structure of the wood ash obtained after calcination. The greater surface area of the catalyst enables stronger interaction between its active sites and the reactants, and the porous nature of the catalyst provides more active sites. Moreover, the metal oxides formed through calcination can effectively catalyse the transesterification of palm oil into POME [3, 12].

7 Optimisation of Biodiesel Production Using Heterogeneous Catalyst …


Fig. 7.4 a–b SEM micrographs of calcined wood ash

7.3.2 Taguchi Method for Process Parameter Optimisation

Process parameter optimisation is the crucial stage of the Taguchi method for achieving high-quality output without increasing cost; the optimised parameters are also insensitive to changes in the outside environment and other noise sources [17]. Table 7.2 lists the SNR and the percentage yield of POME for each of the nine designed sets of tests. In this work, the Larger-the-Better SNR model has been chosen because the objective is to maximise the biodiesel yield. According to the experimental results, run 9 gave the maximum yield of 94.35%, whereas run 1 gave the minimum yield of 90.30%, although this set of parameters need not be the optimal combination. The SNR at each level of all four factors, together with the range (delta) and rank, is tabulated in Table 7.3; all values were obtained using the Minitab 21 software. The rank of each parameter was determined from its range, the parameter with the highest range being assigned rank 1. By this ranking, reaction temperature is the most influential parameter on POME yield.

Table 7.2 Observed yield of POME according to the L9 orthogonal array

Experimental run   A   B   C   D   Yield (%)   SNR
1                  1   1   1   1   90.30       39.1138
2                  1   2   2   2   92.30       39.3040
3                  1   3   3   3   93.00       39.3697
4                  2   1   2   3   92.50       39.3228
5                  2   2   3   1   94.00       39.4626
6                  2   3   1   2   91.00       39.1808
7                  3   1   3   2   93.90       39.4533
8                  3   2   1   3   90.60       39.1426
9                  3   3   2   1   94.35       39.4948


Table 7.3 Response table for signal-to-noise ratio

Level   A       B       C       D
1       39.26   39.30   39.15   39.36
2       39.32   39.30   39.37   39.31
3       39.36   39.35   39.43   39.28
Delta   0.10    0.05    0.28    0.08
Rank    2       4       1       3

Fig. 7.5 SNR of each parameter at different levels

Figure 7.5 displays the influence of each parameter at its distinct levels in terms of SNR. The maximum value in each chart indicates the best level of that parameter. For maximum biodiesel yield, the optimum settings are a 12:1 methanol-to-oil ratio, a reaction time of 180 min, a reaction temperature of 65 °C, and a 1.5% catalyst concentration.
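The level means behind this selection can be recomputed from the yields of Table 7.2. The sketch below (variable names are ours, not from the study) uses the single-trial LTB SNR, 20·log10(y), averages it at each level of each factor, and picks the level with the highest mean, reproducing the response table and the optimum levels quoted above.

```python
import math

# L9 design (levels of factors A, B, C, D per run) and yields from Table 7.2.
levels = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3), (2, 1, 2, 3), (2, 2, 3, 1),
          (2, 3, 1, 2), (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]
yields = [90.30, 92.30, 93.00, 92.50, 94.00, 91.00, 93.90, 90.60, 94.35]
snrs = [20 * math.log10(y) for y in yields]  # LTB SNR, one trial per run

# Mean SNR at each level of each factor (the response table, Table 7.3).
response = {}
for f, name in enumerate("ABCD"):
    response[name] = [
        sum(s for lv, s in zip(levels, snrs) if lv[f] == L) / 3
        for L in (1, 2, 3)
    ]

# The best level of each factor is the one with the highest mean SNR.
for name, means in response.items():
    best = means.index(max(means)) + 1
    print(name, [round(m, 2) for m in means], "best level:", best)
```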

7.3.3 Analysis of Variance (ANOVA)

Table 7.4 shows that the significant controlling parameter for POME yield is reaction temperature, followed by the methanol-to-oil ratio, catalyst concentration, and reaction time. A larger F-value for a particular parameter indicates that the parameter has


Table 7.4 Analysis of variance

Source       DF   Seq. SS   Contribution (%)   Adj. SS   Adj. MS   F-value   P-value
Regression   4    16.7979   90.37              16.7979   4.1995    9.38      0.026
A            1    1.7604    9.47               1.7604    1.7604    3.93      0.118
B            1    0.4537    2.44               0.4537    0.4537    1.01      0.371
C            1    13.5000   72.62              13.5000   13.5000   30.15     0.005
D            1    1.0837    5.83               1.0837    1.0837    2.42      0.195
Error        4    1.7910    9.63               1.7910    0.4477
Total        8    18.5889   100.00

the highest significance for the output result. Reaction temperature is thus the most significant parameter, contributing 72.62% to POME yield with calcined wood ash as the catalyst. The methanol-to-oil ratio, catalyst concentration, and reaction time contribute 9.47%, 5.83%, and 2.44%, respectively.
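These contribution percentages follow directly from Eq. 7.7 applied to the sums of squares reported in Table 7.4. A minimal sketch:

```python
# Sequential sums of squares from Table 7.4 (Minitab output). Eq. 7.7 gives
# each source's percentage contribution as SS_f / SS_T * 100.
ss = {"A": 1.7604, "B": 0.4537, "C": 13.5000, "D": 1.0837, "Error": 1.7910}
ss_total = 18.5889

for source, s in ss.items():
    print(f"{source}: {s / ss_total * 100:.2f}%")  # C gives 72.62%
```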

7.3.4 Maximum Yield Prediction and Validation

Using the Minitab 21 software, the maximum yield of POME at the optimum conditions is predicted to be 94.93%. To confirm the yield percentage of POME at the optimum conditions, the transesterification reaction was repeated. The observed yield of 95% closely matches the predicted value; the slight variation could be due to the influence of extraneous variables. As the reaction temperature increases, the POME content of the product increases, reaching an optimum level with a yield of 95%.

7.3.5 Physicochemical Properties of POME

Table 7.5 shows the physicochemical properties of POME, raw palm oil, and diesel. All the properties lie within the range of the biodiesel standard EN 14214, which implies that the produced POME has the potential to serve as an alternative fuel. These results are also in accordance with those of Debbarma et al. [8] and Verma et al. [30].


Table 7.5 Physicochemical properties of palm oil, POME, and diesel

Property                       Unit    EN 14214 standard   Palm oil   POME   Diesel
Density at 15 °C               g/cm³   0.86–0.90           0.87       0.88   0.84
Kinematic viscosity at 40 °C   mm²/s   3.5–5               10.7       4.6    2.8
Flash point                    °C      > 120               210.5      164    70
Pour point                     °C      -                   −6         −4     −33
Cloud point                    °C      -                   −4         −3     −16

7.4 Conclusion

The present work examines process parameter optimisation and its effect on the production of biodiesel from palm oil. Four process parameters, the methanol-to-oil ratio, reaction time, reaction temperature, and catalyst concentration, were considered for optimising the biodiesel yield using the Taguchi approach. According to the investigation, the optimum parameters for maximising biodiesel production were a 12:1 methanol-to-oil ratio, a 180-min reaction time, a 65 °C reaction temperature, and a 1.5% catalyst concentration. The ANOVA study shows that reaction temperature has the greatest impact on the production process, contributing 72.62% of the total. The physicochemical properties of the POME are also found to satisfy the EN 14214 biodiesel standard. It can be inferred that POME produced with calcined wood ash as catalyst may be a viable alternative to conventional diesel and may help to resolve the world's energy crisis.

References

1. Adewale, P., Vithanage, L.N., Christopher, L.: Optimization of enzyme-catalyzed biodiesel production from crude tall oil using Taguchi method. Energy Convers. Manage. 154, 81–91 (2017)
2. Aniza, R., Chen, W.H., Yang, F.C., Pugazhendh, A., Singh, Y.: Integrating Taguchi method and artificial neural network for predicting and maximizing biofuel production via torrefaction and pyrolysis. Biores. Technol. 343, 126140 (2022)
3. Arumugam, A., Sankaranarayanan, P.: Biodiesel production and parameter optimization: an approach to utilize residual ash from sugarcane leaf, a novel heterogeneous catalyst, from Calophyllum inophyllum oil. Renew. Energy 153, 1272–1282 (2020)
4. Awogbemi, O., Von Kallon, D.V., Aigbodion, V.S.: Trends in the development and utilization of agricultural wastes as heterogeneous catalyst for biodiesel production. J. Energy Inst. 98, 244–258 (2021)
5. Balajii, M., Niju, S.: A novel biobased heterogeneous catalyst derived from Musa acuminata peduncle for biodiesel production—process optimization using central composite design. Energy Convers. Manage. 189, 118–131 (2019)
6. Buasri, A., Worawanitchaphong, P., Trongyong, S., Loryuenyong, V.: Utilization of scallop waste shell for biodiesel production from palm oil—optimization using Taguchi method. APCBEE Proc. 8, 216–221 (2014)


7. Christopher, L.P., Kumar, H., Zambare, V.P.: Enzymatic biodiesel: challenges and opportunities. Appl. Energy 119, 497–520 (2014)
8. Debbarma, S., Misra, R.D., Das, B.: Performance of graphene-added palm biodiesel in a diesel engine. Clean Technol. Environ. Policy 22(2), 523–534 (2020)
9. Dhawane, S.H., Bora, A.P., Kumar, T., Halder, G.: Parametric optimization of biodiesel synthesis from rubber seed oil using iron doped carbon catalyst by Taguchi approach. Renew. Energy 105, 616–624 (2017)
10. Falowo, O.A., Oloko-Oba, M.I., Betiku, E.: Biodiesel production intensification via microwave irradiation-assisted transesterification of oil blend using nanoparticles from elephant-ear tree pod husk as a base heterogeneous catalyst. Chem. Eng. Process.-Process Intensif. 140, 157–170 (2019)
11. Goh, C.M., Tan, Y.H., Mubarak, N.M., Kansedo, J., Rashid, U., Khalid, M., Walvekar, R.: Synthesis of magnetic basic palm kernel shell catalyst for biodiesel production and characterisation and optimisation by Taguchi method. Appl. Nanosci. 3, 1–3 (2021)
12. Gohain, M., Devi, A., Deka, D.: Musa balbisiana Colla peel as highly effective renewable heterogeneous base catalyst for biodiesel production. Ind. Crops Prod. 109, 8–18 (2017)
13. Gohain, M., Laskar, K., Phukon, H., Bora, U., Kalita, D., Deka, D.: Towards sustainable biodiesel and chemical production: multifunctional use of heterogeneous catalyst from littered Tectona grandis leaves. Waste Manage. 102, 212–221 (2020)
14. Kanth, S., Ananad, T., Debbarma, S., Das, B.: Effect of fuel opening injection pressure and injection timing of hydrogen enriched rice bran biodiesel fuelled in CI engine. Int. J. Hydrogen Energy 46(56), 28789–28800 (2021)
15. Kanth, S., Debbarma, S., Das, B.: Experimental investigation of rice bran biodiesel with hydrogen enrichment in diesel engine. Energy Sour. Part A Recov. Util. Environ. Effects 5, 1–8 (2020)
16. Kanth, S., Debbarma, S., Das, B.: Experimental investigations on the effect of fuel injection parameters on diesel engine fuelled with biodiesel blend in diesel with hydrogen enrichment. Int. J. Hydrogen Energy 47(83), 35468–35483 (2022)
17. Karabas, H.: Biodiesel production from crude acorn (Quercus frainetto L.) kernel oil: an optimisation process using the Taguchi method. Renew. Energy 53, 384–388 (2013)
18. Karmakar, B., Dhawane, S.H., Halder, G.: Optimization of biodiesel production from castor oil by Taguchi design. J. Environ. Chem. Eng. 6(2), 2684–2695 (2018)
19. Kumar, N., Mohapatra, S.K., Ragit, S.S., Kundu, K., Karmakar, R.: Optimization of safflower oil transesterification using the Taguchi approach. Pet. Sci. 14(4), 798–805 (2017)
20. Kumar, R.S., Sureshkumar, K., Velraj, R.: Optimization of biodiesel production from Manilkara zapota (L.) seed oil using Taguchi method. Fuel 140, 90–96 (2015)
21. Laskar, I.B., Gupta, R., Chatterjee, S., Vanlalveni, C., Rokhum, L.: Taming waste: waste Mangifera indica peel as a sustainable catalyst for biodiesel production at room temperature. Renew. Energy 161, 207–220 (2020)
22. Mkhize, N.M., Sithole, B.B., Ntunka, M.G.: Heterogeneous acid-catalyzed biodiesel production from crude tall oil: a low-grade and less expensive feedstock. J. Wood Chem. Technol. 35(5), 374–385 (2015)
23. Murmu, R., Sutar, H., Patra, S.: Experimental investigation and process optimization of biodiesel production from kusum oil using Taguchi method. Adv. Chem. Eng. Sci. 7(04), 464 (2017)
24. Priyadarshi, D., Paul, K.K.: Optimisation of biodiesel production using Taguchi model. Waste Biomass Valorization 10(6), 1547–1559 (2019)
25. Saravanakumar, A., Avinash, A., Saravanakumar, R.: Optimization of biodiesel production from Pungamia oil by Taguchi's technique. Energy Sour. Part A Recov. Util. Environ. Effects 38(17), 2524–2529 (2016)
26. Sharma, Y.C., Singh, B., Korstad, J.: Application of an efficient nonconventional heterogeneous catalyst for biodiesel synthesis from Pongamia pinnata oil. Energy Fuels 24(5), 3223–3231 (2010)


27. Singh, A., Sinha, S., Choudhary, A.K., Chelladurai, H.: Biodiesel production using heterogeneous catalyst, application of Taguchi robust design and response surface methodology to optimise diesel engine performance fuelled with Jatropha biodiesel blends. Int. J. Ambient Energy 43(1), 2976–2987 (2022)
28. Singh, G., Mohapatra, S.K., Ragit, S., Kundu, K.: Optimization of biodiesel production from grape seed oil using Taguchi's orthogonal array. Energy Sour. Part A Recov. Util. Environ. Effects 40(18), 2144–2153 (2018)
29. Thangaraj, B., Solomon, P.R., Muniyandi, B., Ranganathan, S., Lin, L.: Catalysis in biodiesel production—a review. Clean Energy 3(1), 2–3 (2019)
30. Verma, P., Sharma, M.P., Dwivedi, G.: Evaluation and enhancement of cold flow properties of palm oil and its biodiesel. Energy Rep. 2, 8–13 (2016)
31. Wu, Q., Jiang, L., Wang, Y., Dai, L., Liu, Y., Zou, R., Tian, X., Ke, L., Yang, X., Ruan, R.: Pyrolysis of soybean soapstock for hydrocarbon bio-oil over a microwave-responsive catalyst in a series microwave system. Biores. Technol. 341, 125800 (2021)
32. Zheng, Y., Wang, J., Liu, C., Lu, Y., Lin, X., Li, W., Zheng, Z.: Catalytic copyrolysis of metal impregnated biomass and plastic with Ni-based HZSM-5 catalyst: synergistic effects, kinetics and product distribution. Int. J. Energy Res. 44(7), 5917–5935 (2020)

Chapter 8

Optimization of Heterogeneous Biomass-Based Nano-Catalyzed Transesterification of Karanja Seed Oil for Production of Biodiesel

Abhishek Bharti and Sumita Debbarma

Abstract The current work focuses on the optimization of transesterification reaction parameters for the synthesis of Karanja oil methyl ester (KOME) using an efficient grapefruit peel catalyst. The reaction parameters, including catalyst loading (2–11 wt%), M/O molar ratio (5:1–15:1), and reaction duration (45–90 min), were optimized using a central composite design (CCD) of response surface methodology (RSM). Of these variables, catalyst loading has the highest F-value (120.51) and the lowest p-value (< 0.0001); hence it is the most significant parameter. The model depicted the highest KOME yield of 95.62% under optimal reaction conditions of 2.47 wt% catalyst loading, a 5:1 methanol-to-oil molar ratio, and a 45-min reaction time. Furthermore, the suggested KOME yield (%) was validated by an observed KOME yield of 94.12 ± 0.5% under the suggested optimum conditions. Therefore, the CCD of RSM is a suitable tool for the optimization of reaction parameters for the synthesis of biodiesel.

8.1 Introduction

The increase in the world's population, as well as in people's living standards, has increased the demand for diesel fuels. Apart from that, the hazardous emissions ejected during the combustion of diesel fuels are largely responsible for global warming and create an imbalance in the ecological system. Owing to the high dependency on diesel fuel, its price is expected to rise sharply in the near future. To overcome this problem, it is essential to find an alternative to fossil fuel that is environment-friendly. Biodiesel has become such an alternative to diesel fuels because its physicochemical properties are closely analogous to those of diesel fuel, and because it is developed from biomass it is environmentally friendly [1].

A. Bharti (B) · S. Debbarma
Department of Mechanical Engineering, National Institute of Technology, Silchar, Assam 788010, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_8


Biodiesel is produced by the alcoholysis of various feedstocks containing triglycerides in the presence of a suitable catalyst, and it may be used as a fuel for conventional diesel engines by simply blending it with diesel [2]. Researchers use edible oils, non-edible oils, and animal fats as feedstocks, together with methanol, for obtaining biodiesel. When edible oil is used as a feedstock, the price of the oil rises and there is a risk of a food-versus-fuel crisis; moreover, edible oilseed crops grown on arable land cause deforestation, which directly affects the environment. Non-edible oils have therefore recently become more suitable for biodiesel production [3]. For making biodiesel, homogeneous catalysts, heterogeneous catalysts, and biocatalysts are mostly used. With a homogeneous catalyst, soap forms during the reaction, and a large volume of wastewater is required to wash the obtained biodiesel, which increases the production cost [4]. To avoid saponification, heterogeneous catalysts are used instead. Heterogeneous catalysts may be metal- or non-metal-based, and they can be derived from natural waste resources, which makes them environment-friendly and cost-effective [5]. Catalysts derived from natural resources such as banana peel, mango peel, orange peel, waste chicken bone, waste fish bone, and waste grapefruit peel are cheaper than heterogeneous chemical catalysts [6]. Basumatary et al. [7] prepared a green catalyst from sugarcane bagasse by calcination (550 °C, 2 h) and used non-edible jatropha oil as the feedstock for biodiesel synthesis; the best yield was obtained with a 9:1 M/O molar ratio, 10 wt% catalyst loading, a 65 °C temperature, and a 285-min reaction period. Balaji and Niju [8] used red banana peduncle to synthesise an innovative, cheaper, and ecologically responsible biocatalyst for the production of biodiesel.

Ceiba pentandra oil was transesterified with methanol to create biodiesel, and the highest yield of 98.73% C. pentandra methyl ester was produced at 2.68 wt% catalyst loading, an 11.46:1 M/O molar ratio, and a 106-min reaction time, with the reaction held at a constant 65 °C. Optimising the transesterification reaction parameters is essential for enhancing the biodiesel yield percentage as well as reducing the process cost. Several optimization techniques have been used by researchers for the production of methyl ester from various feedstocks, mainly full and fractional factorial design, least-squares fitting, the Taguchi orthogonal array method, and response surface methodology; each of these methods has advantages and disadvantages [9]. According to Fauzi and Amin [10], response surface methodology (RSM) is a beneficial optimization technique for examining the impact of transesterification reaction parameters, using a central composite design (CCD), on single as well as multiple responses, and for optimizing the independent process parameters. RSM not only requires fewer trials to provide sufficient information for statistically acceptable outcomes, but also requires less time and fewer resources to obtain experimental data. Ahmed et al. [11] analyzed the optimization of independent process parameters for optimal biodiesel production using the face-centered central composite design (FCCD) technique of the response surface approach. In that work, 29 experimental runs were performed to thoroughly assess the effects of reaction


variables like catalyst loading, methanol-to-oil (M/O) molar ratio, reaction temperature, and reaction time. With a 5.9:1 methanol-to-oil ratio, 0.51 wt% catalyst loading, a 59 °C temperature, and a 33-min reaction time, the FCCD model's highest biodiesel output was 99.5%; a 98 ± 2% biodiesel yield was found after experimental verification under the same optimum reaction conditions. Chang et al. [12] utilized RSM and a five-factor central composite rotatable design (CCRD) to optimize the parameters of biodiesel synthesis using the catalyst Novozym 435, canola oil feedstock, and methanol. The CCRD model estimated a 99.4% maximum biodiesel production at the optimal process parameters of 42.3 wt% catalyst concentration, a 3.5:1 methanol-to-canola-oil ratio, a 38 °C reaction temperature, and a 12.4-h reaction time; experiments showed an actual biodiesel yield of 97.9%, very close to the prediction. Sharma et al. [13] optimized the microwave-assisted transesterification reaction parameters for synthesizing biodiesel from cottonseed oil using CaO catalysts, applying the full factorial design method of response surface methodology. Under the optimal conditions of a 9.6:1 methanol-to-oil molar ratio, 1.33 wt% catalyst concentration, and a 9.7-min reaction time, the expected yield was 89.94%; experiments under the same optimum parameters achieved a biodiesel yield of 90.41 ± 0.02%, close to the predicted value. Rodrigues et al. [14] produced biodiesel by transesterifying soybean oil and methanol using a lipase catalyst from Thermomyces lanuginosus, with RSM and a central composite design (CCD) used to find the best reaction parameters. The maximum biodiesel production of 96% was obtained with a 7.5:1 M/O ratio, 15% catalyst concentration, a 31.5 °C temperature, and a 7-h reaction time.

However, the CCD model of RSM predicted a biodiesel yield of 94.4% under the same reaction conditions. Changmai et al. [15] carried out research on the synthesis of a nanocatalyst from Citrus sinensis peel ash (CSPA)/Fe3O4. Owing to its high potassium and calcium content, the catalyst is strongly basic and hence increases the rate of transesterification of WCO. Transesterification using the CSPA/Fe3O4 catalyst provided a maximum biodiesel yield of 98% under the ideal reaction conditions of a 6:1 methanol/oil molar ratio, 6 wt% catalyst loading, a 65 °C temperature, and a 3-h reaction time. A reusability study revealed that the catalyst has high physical stability and is reusable for up to 9 consecutive cycles, indicating that it is a capable solid base catalyst for continuous biodiesel production. Ramirez et al. [16] used moringa leaf ash for the synthesis of an efficient heterogeneous catalyst, prepared simply by calcining at 500 °C for 2 h and then used directly in the reaction of oil and methanol to make biodiesel. The best conditions for producing 86.7% biodiesel were a 6:1 methanol-to-oil molar ratio, a 120-min reaction time, a 65 °C temperature, and 6 wt% catalyst loading. Arumugam et al. [17] conducted a study on the utilization of waste sugarcane leaf ash as a catalyst for the production of Calophyllum inophyllum methyl esters, with the reaction parameters optimized by an RSM-based CCD model. Under the optimal conditions of a 19:1 methanol-to-oil ratio, 5% catalyst, and 64 °C, a maximum FAME yield of 97% was obtained, and the catalyst was reused for up to six cycles. Gouran et al. [18] utilized a waste biomass (wheat bran ash) to develop


a heterogeneous catalyst, which was used to produce biodiesel from waste cooking oil (WCO). Under the optimal conditions of a 1.46:1 methanol-to-oil volume ratio, an 11.66 wt% catalyst concentration, and a temperature of 54.6 °C for 114.21 min, a biodiesel purity of 93.6% was achieved. Husin et al. [19] conducted a study on the utilization of coconut husk as a solid catalyst for the transesterification of Cerbera manghas oil into biodiesel; the optimal conditions of 10% catalyst loading, a 60 °C reaction temperature for 3 h, and a 6:1 methanol-to-oil ratio produced the highest biodiesel yield (88.6%). The literature thus shows that RSM is an efficient optimization tool for single and multiple responses. In the current study, a CCD model of RSM is employed to optimize the transesterification reaction parameters for biodiesel production from Karanja oil. The primary goals of this study are to produce biodiesel from Karanja oil and to optimize the process variables so as to minimize the tests necessary to obtain an efficient biodiesel yield.

8.2 Materials and Methodology

8.2.1 Karanja Oil and Chemicals

Karanja seeds were collected from the forest in the Giridih district of Jharkhand, India, and Karanja oil was extracted from the seeds in the advanced biofuel laboratory, NIT Silchar, Assam, India. A total of 9.75 kg of oil extracted from 30 kg of seeds indicates that the average oil content of Karanja seeds is 32.5%. The Karanja oil extraction process is depicted in Fig. 8.2. Grapefruit peel, used for catalyst preparation, was collected from the juice corner of the NIT Silchar campus. Methanol of 99% purity, manufactured by Sisco Research Laboratories Pvt. Ltd., was purchased from a local retailer.

8.2.2 Synthesis of Catalyst

For the synthesis of a green, efficient catalyst, the collected grapefruit peel (GFP) was washed several times with deionized water to remove unwanted impurities sticking to it and sun-dried for three days. Fine GFP ash was obtained by open-air burning and sieving through a 0.044 mm sieve. To reduce the carbon content of the GFP ash, it was calcined at 850 °C for 4 h in a muffle furnace [20]. The obtained solid catalyst was stored in an airtight vial for further use. The catalyst preparation process is presented in Fig. 8.1.


Fig. 8.1 Catalyst preparation process

Fig. 8.2 Karanja oil extraction process

8.2.3 Characterization of Catalyst

X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDS) were used to determine the structure, morphology, and elemental composition of the catalyst. XRD analysis was carried out on a PANalytical X'Pert3 powder instrument; the EDS and SEM were performed on a ZEISS instrument.

8.2.4 Transesterification of Karanja Oil to Produce Biodiesel

For the transesterification process, 60 ml of Karanja oil is taken in a 250 ml conical glass flask and heated on a magnetic stirrer hot plate. The quantity


Table 8.1 Physicochemical properties of diesel and Karanja biodiesel

Property                              ASTM method   Diesel   Karanja biodiesel
Calorific value (kJ/kg)               D-224         45,908   34,400
Density (kg/m³)                       D-4052        830.9    882.43
Kinematic viscosity @ 40 °C (mm²/s)   D-445         3.21     7.37
Flash point (°C)                      D-92          75       175
Cetane number                         -             47       54

of methanol and GFP ash catalyst was decided by the experimental design matrix and introduced into the flask when the oil temperature reached 65 °C. The mixture of methanol, oil, and catalyst was then stirred continuously at a constant temperature of 65 °C [21]. After the reaction was completed, the resultant solution was separated using a centrifuge at 3000 rpm into three layers: the top layer is biodiesel, the middle layer is glycerol, and the bottom layer is the solid catalyst [22]. The excess alcohol in the biodiesel was removed by heating the biodiesel again at 60 °C for 30 min. The acquired biodiesel was coded as Karanja oil methyl ester (KOME), and its yield percentage is calculated as in Eq. 8.1. The properties of the obtained biodiesel are depicted in Table 8.1.

$\text{KOME yield}\,(\%) = \dfrac{\text{Weight of obtained KOME}}{\text{Weight of Karanja oil}} \times 100$   (8.1)
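Equation 8.1 is a simple mass ratio, sketched below for illustration. The function name and the 47 g / 50 g masses are hypothetical examples, not figures from the study.

```python
def kome_yield(mass_kome_g, mass_oil_g):
    """KOME yield (%) per Eq. 8.1: ester product mass over feed oil mass."""
    return mass_kome_g / mass_oil_g * 100

# Hypothetical example: 47 g of methyl ester recovered from 50 g of oil.
print(kome_yield(47.0, 50.0))  # 94.0
```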

8.2.5 Design of Experiment

A three-factor, five-level CCD model of RSM was employed for the design of the experiment. Three reaction parameters, catalyst loading (2–11 wt%), M/O molar ratio (5:1–15:1), and transesterification time (45–90 min), were chosen as factors, with yield as the response. The CCD model produces a total of 17 runs, comprising eight factorial points, six axial points, and three centre points, to determine the influence of the independent parameters on KOME production. Design-Expert 13.0.5.0 software was used to assess the experimental data [23]. The independent input factors and their levels are represented in Table 8.2. The biodiesel obtained in the 17 experimental runs is depicted in Fig. 8.4, and the experimentally observed values, predicted values, and residuals based on the CCD design matrix are presented in Table 8.3.


Table 8.2 Independent factors employed for CCD in the transesterification of Karanja oil

Input factor                      Unit   Axial (−α)   −1   0       +1   Axial (+α)
Catalyst loading (A)              wt%    1.07         2    6.5     11   14.07
Methanol-to-oil molar ratio (B)   -      1.59         5    10      15   18.41
Reaction time (C)                 min    29.66        45   67.50   90   105.34
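For the reaction-time and molar-ratio factors, the axial settings above are consistent with a rotatable three-factor CCD, whose axial distance is α = (2³)^(1/4) ≈ 1.682. The sketch below assumes this standard rotatable-CCD formula, which the chapter does not state explicitly, and illustrates it for those two factors only.

```python
# Rotatable three-factor CCD: axial distance alpha = (number of factorial
# points)^(1/4) = (2**3)**0.25. Axial settings are center ± alpha * step,
# where step is the distance from the center (coded 0) to coded ±1.
alpha = (2 ** 3) ** 0.25  # ≈ 1.682

def axial_points(center, step):
    """Return the low and high axial (star) settings for one factor."""
    return (center - alpha * step, center + alpha * step)

# Reaction time: center 67.5 min, step 22.5 min -> 29.66 and 105.34 min.
print([round(v, 2) for v in axial_points(67.5, 22.5)])
# Methanol-to-oil molar ratio: center 10, step 5 -> 1.59 and 18.41.
print([round(v, 2) for v in axial_points(10, 5)])
```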

Fig. 8.3 SEM image of GFP before calcination (a), SEM image of GFP after calcination (b)

8.3 Result and Discussion

8.3.1 XRD Analysis of Grapefruit Peel Ash Catalyst

The crystalline structure of the catalyst was investigated using XRD analysis; the XRD spectrum is depicted in Fig. 8.5. Peaks of K2O were found at 2θ angles of 27.699°, 39.574°, 48.989°, 57.207°, 64.722°, and 71.795°, verified by JCPDS reference code 96-900-9056. Peaks of CaO were detected at 2θ angles of 32.400°, 37.621°, 54.258°, 64.647°, and 67.901° (JCPDS reference code 96-101-1328), and peaks of CaCO3 at 2θ angles of 23.022°, 29.406°, 35.966°, 39.402°, and 43.156° (JCPDS reference code 00-005-0586).

8.3.2 EDS and SEM Analysis of GFP Catalyst

The EDS spectra of the prepared catalyst are depicted in Fig. 8.6: Fig. 8.6a presents the spectrum before calcination, and Fig. 8.6b the spectrum after calcination. The EDS data confirm that the elements present in the catalyst are K, Ca, Mg, Na, and P. The elemental composition of the catalyst before and after calcination is shown in Table 8.4; the contents of K, Ca, O, and Mg are observed to increase after calcination.


Fig. 8.4 Image of the yielded biodiesel

The morphology of the GFP catalyst was examined using SEM, and the average particle size was determined using the ImageJ software. The average GFP particle size before calcination was 138.683 nm; after calcination, it became 57.446 nm. The SEM images of the catalyst are depicted in Fig. 8.3.

8.3.3 Statistical Data Analysis

The predicted conversion of Karanja oil to KOME is related to the observed KOME yield by the quadratic polynomial equation given in Eq. 8.2 [24]. P-values less than 0.0500 indicate that a model term is significant, and the high F-value (53.85) of the model implies that the regression is significant. According to the analysis of variance (ANOVA) depicted in Table 8.5, the adjusted coefficient of determination (R²) is 0.9691, close to the predicted R² of 0.8834, which further confirms a significant model.


Table 8.3 CCD design of experiment

Run   A: catalyst loading (%)   B: M/O molar ratio   C: time (min)   Predicted yield (%)   Observed yield (%)   Residual
1     2                         15                   90              76.28                 76.67                0.3868
2     11                        5                    45              71.42                 71.67                0.2519
3     6.5                       18.409               67.5            81.54                 81.00                −0.5436
4     2                         5                    45              96.01                 95.50                −0.5050
5     6.5                       10                   29.6597         81.98                 82.50                0.5170
6     6.5                       10                   67.5            80.20                 80.65                0.4462
7     1.06807                   10                   67.5            84.84                 83.97                −0.8736
8     6.5                       10                   105.34          78.09                 76.67                −1.42
9     11                        15                   45              78.83                 78.33                −0.5008
10    6.5                       10                   67.5            80.20                 80.21                0.0062
11    6.5                       10                   67.5            80.20                 80.33                0.1262
12    2                         15                   45              81.93                 82.33                0.4023
13    11                        15                   90              87.53                 88.67                1.14
14    14.0681                   10                   67.5            75.78                 75.33                −0.4533
15    11                        5                    90              72.43                 72.67                0.2365
16    6.5                       1.59104              67.5            80.69                 80.33                −0.3597
17    2                         5                    90              82.68                 83.82                1.14

Fig. 8.5 XRD spectrum of GFP catalyst


Fig. 8.6 EDS spectrum of GFP catalyst before calcination (a) and after calcination (b)

Table 8.4 Elemental composition of GFP before and after calcination

S. No. | Element | GFP (before calcination) (wt%) | GFP (after calcination) (wt%)
1 | C  | 32.02 | 4.24
2 | O  | 38.10 | 46.95
3 | K  | 14.44 | 20.69
4 | Ca | 11.49 | 19.12
5 | Na | 0.77  | 1.46
6 | Mg | 1.20  | 4.61
7 | P  | 0.77  | 1.25

KOME(%) = b_0 + Σ_{a=1}^{n} b_a X_a + Σ_{a=1}^{n} b_{aa} X_a² + Σ_{a=1}^{n−1} Σ_{j=a+1}^{n} b_{aj} X_a X_j   (8.2)

where b_0 is the constant coefficient, b_a are the coefficients of the linear terms, b_{aa} the coefficients of the quadratic terms, b_{aj} the coefficients of the interaction terms, and X_a to X_j represent the input variable parameters: catalyst loading, methanol-to-oil molar ratio, and reaction duration. The relationship between the predicted and actual values of the KOME% conversion is shown in Fig. 8.7.


Fig. 8.7 Interaction between predicted and actual KOME yield

Table 8.5 ANOVA table for CCD

Source | Sum of squares | df | Mean square | F-value | P-value | Remark
Model | 514.92 | 9 | 57.21 | 53.85 | < 0.0001 | Significant
A-catalyst loading | 128.03 | 1 | 128.03 | 120.51 | < 0.0001 |
B-methanol-to-oil ratio | 0.8800 | 1 | 0.8800 | 0.8284 | 0.3930 |
C-time | 18.29 | 1 | 18.29 | 17.22 | 0.0043 |
AB | 230.91 | 1 | 230.91 | 217.35 | < 0.0001 |
AC | 102.82 | 1 | 102.82 | 96.78 | < 0.0001 |
BC | 29.49 | 1 | 29.49 | 27.76 | 0.0012 |
A² | 1.42 | 1 | 1.42 | 1.34 | 0.2853 |
B² | 1.22 | 1 | 1.22 | 1.15 | 0.3185 |
C² | 0.0411 | 1 | 0.0411 | 0.0387 | 0.8497 |
Residual | 7.44 | 7 | 1.06 | | |
Lack of fit | 7.33 | 5 | 1.47 | 28.35 | 0.0344 | Not significant
Pure error | 0.1035 | 2 | 0.0517 | | |
Cor total | 522.36 | 16 | | | |


8.3.4 Effect of Catalyst Loading and the M/O Molar Ratio on KOME Yield (%)

The 3D surface graph of the interaction between catalyst loading and methanol-to-Karanja oil molar ratio and its impact on the KOME yield is presented in Fig. 8.8. The ranges of the catalyst loading (A), methanol-to-Karanja oil molar ratio (B), and reaction duration (C) are 2–11 wt%, 5:1–15:1, and 45–90 min, respectively. The reaction temperature is maintained at 65 °C. The results reveal that, with the methanol-to-oil molar ratio at its minimum level (5:1), the KOME yield increases up to a specific limit; increasing parameter A beyond about 3 wt%, however, decreases the yield percentage of KOME. The decrease in KOME conversion is attributed to a shift in equilibrium in the direction of the product during the transesterification process caused by adding more methanol [25]. According to the ANOVA table (Table 8.5), parameter A is highly significant due to its high F-value (120.51) and very small p-value. Apart from this, increasing parameter B beyond 7:1 while parameter A is at its minimum level (2 wt%) again decreases the yield percentage of KOME. This is because excess methanol in the reaction mixture inhibits the catalytic activity; hence, the conversion rate of KOME decreases [26].

Fig. 8.8 Interaction effect of catalyst loading and methanol-to-oil molar ratio on KOME yield


8.3.5 The Influence of Catalyst Amount and Reaction Time on the Yield (%) of KOME

The interaction of the reaction parameters A (catalyst loading) and C (reaction time) on the yield of KOME is shown in Fig. 8.9 as a 3D surface plot. The methanol-to-oil molar ratio is kept constant at 5:1. The plot shows that at the minimum values of parameters A and C, maximum conversion (> 90%) of KOME occurs. This is attributed to the catalyst's strong catalytic action. Furthermore, increasing the catalyst loading above 3 wt% at the minimum reaction time (45 min) decreases the conversion rate of KOME, which can be ascribed to the increase in the reactant's viscosity at high catalyst loading. A similar finding was reported by Dhawane et al. [27]. It is also noted that the conversion rate of KOME declines rapidly after 63 min of reaction time, which is associated with soap formation [25]. From the 3D surface plot, it is clear that the maximum KOME conversion occurs at values of A and C of 2–3 wt% and 45–63 min, respectively.

Fig. 8.9 Interaction effect of catalyst loading and reaction time on KOME yield


Fig. 8.10 Interaction effect of methanol-to-oil ratio and reaction time on KOME yield

8.3.6 Effect of Methanol-to-Oil Molar Ratio and Reaction Time on KOME Yield (%)

The combined effect of the process parameters reaction time (C) and methanol-to-oil molar ratio (B) on the yield of KOME is depicted in Fig. 8.10. For this analysis, parameter A is held constant at 2 wt%. The 3D surface plot shows that the highest KOME yield (> 90%) is achieved at the smallest values of parameters B and C. Moreover, when B and C exceed 7:1 and 54 min, respectively, the KOME conversion decreases continually. This is mainly due to glycerol solubilization and dilution of the catalyst concentration during the transesterification process, caused by an excess of methanol in the reaction mixture [28].

8.3.7 Optimization of Transesterification Reaction Parameters

The independent input reaction parameters and the observed responses are taken from the CCD of the RSM-based design of experiments generated by Design-Expert 13.0.5.0 software [25]. By solving Eq. 8.3, the software estimated a maximum KOME yield of 94.62% at 2.47 wt% catalyst loading, a 5:1 methanol-to-oil molar ratio, and a 45 min reaction time [29].


KOME Yield(%) = 80.20 − 3.34A + 0.2539B − 1.16C + 5.37AB + 3.58AC + 1.92BC + 0.4207A² + 0.3227B² − 0.0591C²   (8.3)

where A, B, and C are the input reaction parameters: catalyst loading, methanol-to-oil molar ratio, and reaction duration, respectively. The software-predicted KOME yield (94.62%) was confirmed by executing three separate experiments, whose average was a 94.12% yield of KOME under the predicted optimum reaction parameters. The observed yield of 94.12 ± 0.5% indicates that the regression model used is acceptable.
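The predicted optimum can be cross-checked numerically. The sketch below is an illustration of ours, not the authors' code: it assumes the model of Eq. 8.3 is expressed in coded units, with factor centers and half-widths inferred from the factorial ranges in Table 8.3. Under that assumption, evaluating the model at the reported optimum reproduces a yield of about 94.6%.

```python
def kome_yield(A, B, C):
    """Quadratic RSM model of Eq. 8.3 (factors assumed to be in coded units)."""
    return (80.20 - 3.34 * A + 0.2539 * B - 1.16 * C
            + 5.37 * A * B + 3.58 * A * C + 1.92 * B * C
            + 0.4207 * A ** 2 + 0.3227 * B ** 2 - 0.0591 * C ** 2)

def code(x, center, half_width):
    """Map an actual factor level to its coded value."""
    return (x - center) / half_width

# Reported optimum: 2.47 wt% catalyst, 5:1 methanol-to-oil ratio, 45 min
A = code(2.47, 6.5, 4.5)    # catalyst loading, factorial range 2-11 wt%
B = code(5.0, 10.0, 5.0)    # methanol-to-oil molar ratio, range 5-15
C = code(45.0, 67.5, 22.5)  # reaction time, range 45-90 min

print(round(kome_yield(A, B, C), 2))  # 94.63, matching the reported 94.62%
```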

8.4 Conclusions

A green and efficient catalyst was developed from waste grapefruit peel to synthesize biodiesel from non-edible Karanja oil. Maximum biodiesel output at the lowest catalyst loading demonstrates the strong catalytic activity of the catalyst. A significant regression model based on the CCD of RSM, with a high F-value (53.85) and low p-value (< 0.0001), was used to develop the design of experiments (17 runs) for optimizing the reaction parameters for maximum conversion of KOME. The model's predicted KOME yield is 94.62% at process parameters of 2.47 wt% catalyst loading, a 5:1 methanol-to-oil molar ratio, a 45 min reaction duration, and a reaction temperature of 65 °C. Experiments conducted under the predicted reaction parameters gave a maximum KOME yield of 94.12 ± 0.5%. Consequently, the CCD of RSM is a suitable optimization tool.

References

1. Debbarma, S., Misra, R.D., Das, B.: Performance of graphene-added palm biodiesel in a diesel engine. Clean Technol. Environ. Policy 22(2), 523–534
2. Sadaf, S., Iqbal, J., Ullah, I., Bhatti, H.N., Nouren, S., Nisar, J., Iqbal, M.: Biodiesel production from waste cooking oil: an efficient technique to convert waste into biodiesel. Sustain. Cities Soc. 41, 220–226 (2018)
3. Sajjadi, B., Raman, A.A.A., Arandiyan, H.: A comprehensive review on properties of edible and non-edible vegetable oil-based biodiesel: composition, specifications and prediction models. Renew. Sustain. Energy Rev. 63, 62–92 (2016)
4. Tariq, M., Ali, S., Khalid, N.: Activity of homogeneous and heterogeneous catalysts, spectroscopic and chromatographic characterization of biodiesel: a review. Renew. Sustain. Energy Rev. 16(8), 6303–6316 (2012)
5. Yusuff, A.S., Adeniyi, O.D., Olutoye, M.A., Akpan, U.G.: A review on application of heterogeneous catalyst in the production of biodiesel from vegetable oils. J. Appl. Sci. Process Eng. 4(2), 142–157 (2017)
6. Alagumalai, A., Mahian, O., Hollmann, F., Zhang, W.: Environmentally benign solid catalysts for sustainable biodiesel production: a critical review. Sci. Total Environ. 768, 144856 (2021)
7. Basumatary, B., Das, B., Nath, B., Basumatary, S.: Synthesis and characterization of heterogeneous catalyst from sugarcane bagasse: production of jatropha seed oil methyl esters. Curr. Res. Green Sustain. Chem. 4, 100082 (2021)
8. Balajii, M., Niju, S.: A novel biobased heterogeneous catalyst derived from Musa acuminata peduncle for biodiesel production—process optimization using central composite design. Energy Convers. Manage. 189, 118–131 (2019)
9. Stamenković, O.S., Veličković, A.V., Veljković, V.B.: The production of biodiesel from vegetable oils by ethanolysis: current state and perspectives. Fuel 90(11), 3141–3155 (2011)
10. Fauzi, A., Hafiidz, M., Amin, N.A.S.: Optimization of oleic acid esterification catalyzed by ionic liquid for green biodiesel synthesis. Energy Convers. Manage. 76, 818–827 (2013)
11. Ahmad, T., Danish, M., Kale, P., Geremew, B., Adeloju, S.B., Nizami, M., Ayoub, M.: Optimization of process variables for biodiesel production by transesterification of flaxseed oil and produced biodiesel characterizations. Renew. Energy 139, 1272–1280 (2019)
12. Chang, H.-M., Liao, H.-F., Lee, C.-C., Shieh, C.-J.: Optimized synthesis of lipase-catalyzed biodiesel by Novozym 435. J. Chem. Technol. Biotechnol. Int. Res. Process Environ. Clean Technol. 80(3), 307–312 (2005)
13. Sharma, A., Kodgire, P., Kachhwaha, S.S.: Biodiesel production from waste cotton-seed cooking oil using microwave-assisted transesterification: optimization and kinetic modeling. Renew. Sustain. Energy Rev. 116, 109394 (2019)
14. Rodrigues, R.C., Volpato, G., Ayub, M.A.Z., Wada, K.: Lipase-catalyzed ethanolysis of soybean oil in a solvent-free system using central composite design and response surface methodology. J. Chem. Technol. Biotechnol. Int. Res. Process Environ. Clean Technol. 83(6), 849–854 (2008)
15. Changmai, B., Rano, R., Vanlalveni, C., Rokhum, L.: A novel Citrus sinensis peel ash coated magnetic nanoparticles as an easily recoverable solid catalyst for biodiesel production. Fuel 286, 119447 (2021)
16. Aleman-Ramirez, J.L., Moreira, J., Torres-Arellano, S., Longoria, A., Okoye, P.U., Sebastian, P.J.: Preparation of a heterogeneous catalyst from moringa leaves as a sustainable precursor for biodiesel production. Fuel 284, 118983 (2021)
17. Arumugam, A., Sankaranarayanan, P.: Biodiesel production and parameter optimization: an approach to utilize residual ash from sugarcane leaf, a novel heterogeneous catalyst, from Calophyllum inophyllum oil. Renew. Energy 153, 1272–1282 (2020)
18. Gouran, A., Aghel, B., Nasirmanesh, F.: Biodiesel production from waste cooking oil using wheat bran ash as a sustainable biomass. Fuel 295, 120542 (2021)
19. Husin, H., Abubakar, A., Ramadhani, S., Sijabat, C.F.B., Hasfita, F.: Coconut husk ash as heterogenous catalyst for biodiesel production from Cerbera manghas seed oil. MATEC Web Conf. 197, 09008 (2018)
20. Oladipo, B., Ojumu, T.V., Latinwo, L.M., Betiku, E.: Pawpaw (Carica papaya) peel waste as a novel green heterogeneous catalyst for moringa oil methyl esters synthesis: process optimization and kinetic study. Energies 13(21), 5834 (2020)
21. Naik, M., Meher, L.C., Naik, S.N., Das, L.M.: Production of biodiesel from high free fatty acid Karanja (Pongamia pinnata) oil. Biomass Bioenergy 32(4), 354–357 (2008)
22. Laskar, I.B., Gupta, R., Chatterjee, S., Vanlalveni, C., Rokhum, L.: Taming waste: waste Mangifera indica peel as a sustainable catalyst for biodiesel production at room temperature. Renew. Energy 161, 207–220 (2020)
23. Adepoju, T.F., Ibeh, M.A., Babatunde, E.O., Asquo, A.J.: Methanolysis of CaO based catalyst derived from egg shell-snail shell-wood ash mixed for fatty acid methyl ester (FAME) synthesis from a ternary mixture of Irvingia gabonensis–Pentaclethra macrophylla–Elais guineensis oil blend: an application of simplex lattice and central composite design optimization. Fuel 275, 117997 (2020)
24. Niju, S., Janaranjani, A., Nanthini, R., Sindhu, P.A., Balajii, M.: Valorization of banana pseudostem as a catalyst for transesterification process and its optimization studies. Biomass Convers. Biorefinery, 1–14 (2021)
25. Mendonça, I.M., Paes, O.A.R.L., Maia, P.J.S., Souza, M.P., Almeida, R.A., Silva, C.C., Duvoisin, Jr., S., de Freitas, F.A.: New heterogeneous catalyst for biodiesel production from waste Tucumã peels (Astrocaryum aculeatum Meyer): parameters optimization study. Renew. Energy 130, 103–110 (2019)
26. Gohain, M., Devi, A., Deka, D.: Musa balbisiana Colla peel as highly effective renewable heterogeneous base catalyst for biodiesel production. Ind. Crops Prod. 109, 8–18 (2017)
27. Dhawane, S.H., Kumar, T., Halder, G.: Parametric effects and optimization on synthesis of iron (II) doped carbonaceous catalyst for the production of biodiesel. Energy Convers. Manage. 122, 310–320 (2016)
28. Kamalini, A., Muthusamy, S., Ramapriya, R., Muthusamy, B., Pugazhendhi, A.: Optimization of sugar recovery efficiency using microwave assisted alkaline pretreatment of cassava stem using response surface methodology and its structural characterization. J. Mol. Liq. 254, 55–63 (2018)
29. Betiku, E., Omilakin, O.R., Ajala, S.O., Okeleye, A.A., Taiwo, A.E., Solomon, B.O.: Mathematical modeling and process parameters optimization studies by artificial neural network and response surface methodology: a case of non-edible neem (Azadirachta indica) seed oil biodiesel synthesis. Energy 72, 266–273 (2014)

Chapter 9

Population Diversity-Aided Adaptive Cuckoo Search

Debojyoti Sarkar and Anupam Biswas

Abstract Cuckoo search (CS) is an evolutionary algorithm based on the Lévy flight distribution (LFD). The exploration and exploitation of CS depend largely on a probabilistic parameter Pa, whose value ranges between 0 and 1. A smaller Pa value promises higher exploration, while a higher value leads to higher exploitation. The original CS algorithm fixes Pa at 0.25. A controlled, varying Pa suggests a richer exploration-exploitation balance. In this article, we present a version of the algorithm with a dynamic Pa value guided by real-time population diversity and report improved efficiency and accuracy when tested on fifteen CEC-15 benchmark functions.

9.1 Introduction

9.1.1 Evolutionary Algorithms

Evolutionary algorithms (EAs) such as particle swarm [1], ant colony [2], cuckoo search [3], and bat colony [4] have been engaged in solving optimization problems for decades. Optimization problems are those which require minimization or maximization of a certain parameter. EAs, with their population-based stochastic methods, have time and again proven their efficiency in solving optimization problems. EAs are modeled on Charles Darwin's theory of evolution [5]. A population is composed of individuals, each representing a potential solution in a solution space. Initially, the individuals are scattered throughout the solution space; as the iterations progress, i.e., as the population 'evolves,' the individuals communicate and learn from each other to improve their locations, finally converging to a certain point known as the optimal point.

D. Sarkar (B) · A. Biswas
Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar 788010, Assam, India
e-mail: [email protected]
A. Biswas
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_9

The final population


is expected to have 'evolved' and hence to contain the best individuals (or solutions). The best individual in the final population is considered the optimal solution.

9.1.2 Exploration and Exploitation

Population-based optimization algorithms rest on two fundamental processes: exploration and exploitation. Exploration is the knowledge-gathering phase in which all potential solution regions are identified. These solution regions must then be searched intensively, in other terms 'exploited,' to reach the optimal solution. Conventionally, an algorithm starts with maximum exploration in the initial iterations and gradually switches to exploitation at a later stage. It is fair to state that exploration and exploitation are two ends of a continuum, since there is no sharp boundary between the two processes [6]. The efficiency of an optimization algorithm depends largely on its ability to strike a balance between exploration and exploitation [7]. An algorithm that emphasizes exploration may identify a large number of potential solutions but may take longer to converge to the optimal solution. On the other hand, an algorithm that emphasizes exploitation may converge faster but may miss the global optimum due to premature convergence. To achieve an optimal balance between the two, researchers have proposed various modifications to existing algorithms. For example, the adaptive cuckoo search algorithm presented in this study uses a dynamic Pa value guided by real-time population diversity to strike a balance between exploration and exploitation.

9.1.3 Population Diversity

Exploration and exploitation can also be viewed from the perspective of population demography. Scattered individuals enable a wide search throughout the search space; hence, we can say that the population is exploring. Likewise, as the population shrinks, with the individuals converging in groups, we may safely say that the exploitation rate is increasing. Population diversity is simply a measure of how sparse or dense a population is. There are numerous ways to estimate diversity [8]:

1. Radius: the distance of the farthest individual from the center.
2. Diameter: the distance between the two farthest individuals.
3. Average distance around the center: the mean distance of all individuals from the center.
4. Normalized average distance around the center: the average distance around the center divided by either the radius or the diameter.


5. Average of average distances: the mean of the average distances of each individual from all other individuals in the population.

The upcoming sections are dedicated to the study of an adaptive version of cuckoo search (ACS) built on the premise of a balanced exploration-exploitation. Section 9.2 explains the original cuckoo search algorithm, and Sect. 9.3 the proposed version of CS. Section 9.4 deals with the experimental setup. The results and analysis are shown in Sect. 9.5, while Sect. 9.6 is reserved for discussions, conclusions, and acknowledgments.

9.2 The Cuckoo Search Algorithm

Xin-She Yang and Suash Deb devised the cuckoo search algorithm in 2009 [3]. The motivation lies in the parasitic nature of certain cuckoo species that lay their eggs in the nests of other host birds. When the host bird discovers the alien eggs, it can make either of two decisions: destroying the eggs or abandoning the nest. Some cuckoos have evolved to replicate the color and shape of the host bird's eggs, which reduces the probability of the eggs being discovered. Generally, a cuckoo egg hatches faster than the host bird's eggs. After a cuckoo egg hatches, the cuckoo chick destroys the other eggs, which helps it gain a larger share of food. The idea is to engage the potentially better solutions (cuckoos) to replace the not-so-good solutions. Each egg in a nest represents a solution, and each cuckoo egg represents a new solution. The algorithm is based on three rules: (a) each cuckoo drops a single egg in a randomly chosen nest; (b) nests with high-quality eggs carry over to the next generation; (c) the number of host nests is fixed, and the host bird can discover the cuckoo egg with a probability Pa ∈ [0, 1].

Algorithm 1 Cuckoo Search Algorithm
procedure CuckooSearch
    Generate an initial population with n host nests
    while t < max_iter
        Get a cuckoo randomly by Lévy distribution
        Calculate its fitness value, Fi
        Choose a nest j randomly
        Calculate its fitness value, Fj
        if Fi > Fj
            Replace j by the new solution
        end if
        Abandon the worse nests with a probability Pa
        Build new nests at new locations via Lévy flights
        Keep the best solutions (the nests with quality solutions)
        Rank the solutions and find the current best
    end while
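The three rules above can be turned into a minimal Python sketch. This is our own illustration, not the authors' code: the 0.01 step scale, the per-dimension abandonment mask, and the greedy retention of rebuilt nests are implementation choices of this sketch, with Lévy-distributed steps drawn via Mantegna's algorithm.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-stable step vector via Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lb, ub, n_nests=25, pa=0.25, max_iter=500, seed=0):
    """Minimize f over the box [lb, ub] with a basic cuckoo search loop."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(max_iter):
        # Rule (a): each cuckoo lays one egg via a Levy flight around the best nest
        for i in range(n_nests):
            step = 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
            new = np.clip(nests[i] + step, lb, ub)
            j = rng.integers(n_nests)          # drop the egg in a random nest
            f_new = f(new)
            if f_new < fitness[j]:             # rule (b): the better egg survives
                nests[j], fitness[j] = new, f_new
        # Rule (c): abandon a fraction pa of nest components and rebuild them;
        # rebuilt nests are kept only if they improve (a greedy simplification)
        abandon = rng.random((n_nests, dim)) < pa
        diff = rng.permutation(nests) - rng.permutation(nests)
        candidates = np.where(abandon, np.clip(nests + rng.random() * diff, lb, ub), nests)
        cand_fit = np.array([f(x) for x in candidates])
        better = cand_fit < fitness
        nests[better], fitness[better] = candidates[better], cand_fit[better]
        best = nests[fitness.argmin()].copy()
    return best, float(fitness.min())
```

Running `cuckoo_search(lambda x: float((x ** 2).sum()), [-5, -5], [5, 5])` drives the best fitness of a 2-D sphere function toward zero.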


9.3 The Proposed Adaptive Cuckoo Search Algorithm

The switching parameter Pa has a considerable influence on the exploration and exploitation of the algorithm [3]. An increase in the Pa value implies decreasing exploration and increasing exploitation. It is self-explanatory why Pa should start with a small value during the initial iterations of the algorithm. In the base variant of CS, however, Pa is fixed at 0.25 throughout all iterations [3]. The proposed variant of CS uses a dynamically varying Pa. The population radius is observed after each iteration. The radius naturally drops with increasing iterations, barring a negligible number of cases where it remains the same or increases. A drop in population radius indicates reduced exploration (or increased exploitation). A rapidly shrinking population indeed speeds up the search process, but at the cost of missing out on other potential solutions, leading to local optima stagnation. The idea is to limit this rapid collapse of the population. Therefore, at every iteration, the radius drop is observed and the Pa value is set accordingly: if the drop is high, a low Pa value is set, and vice versa, in order to restrain a rapidly exploiting or lazily exploring population.

Population Radius. The population radius R is the distance of the farthest individual from the center of the population:

R = max_{i ∈ [1, |S|]} √( Σ_{k=1}^{I} (x_{ik} − x̄_k)² )

Here, |S| is the population size and I is the number of dimensions; x_{ik} is the kth dimension of the ith individual's position, and x̄ is the center of the population, taken here as the centroid.

The prime challenge in implementing this idea is the quantification of the decrease in radius. We cannot know whether a radius drop is actually large unless we have the radii for all iterations. Since Pa is determined dynamically, i.e., at each iteration, based on the population radius of the previous iteration, a normalization of the radius drop over the full run is out of the equation. We have, however, worked around this problem by determining the possible upper and lower bounds of the radius drop by pre-running the algorithm on the same problem set with a constant Pa. The lower bound is mostly 0, because the radii cease to differ once there is convergence. The upper bound changes with respect to the number of dimensions and the objective function.
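For instance, with the population stored as a NumPy array of shape (|S|, I), the radius can be computed as follows (a small helper sketch of ours, not the authors' code):

```python
import numpy as np

def population_radius(pop):
    """Distance of the farthest individual from the population centroid.

    pop: array of shape (|S|, I) -- one row per individual.
    """
    centroid = pop.mean(axis=0)                            # x-bar
    distances = np.sqrt(((pop - centroid) ** 2).sum(axis=1))
    return distances.max()

# Four corners of a square: centroid (1, 1), every corner sqrt(2) away
pop = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
print(population_radius(pop))  # 1.4142135623730951
```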


Algorithm 2 Switching parameter adaptation
procedure initialisePa
    Take the pre-determined upper bound, max_drop, and lower bound, min_drop, as inputs
    Calculate the difference in radius, drop = previous_radius − current_radius
    if drop ≥ 0
        if drop < min_drop
            Set Pa ← 1 (drop is smaller than desired; trigger exploitation)
        elseif drop > max_drop
            Set Pa ← 0 (drop is greater than desired; trigger exploration)
        else
            Calculate the normalized drop,
            n_drop = (drop − min_drop) / (max_drop − min_drop)
            Set Pa ← 1 − n_drop
        endif
    endif
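Algorithm 2 translates almost directly into code. The sketch below is our own rendering; the handling of a negative drop (a growing radius), which Algorithm 2 leaves unspecified, is resolved here by keeping the previous Pa.

```python
def adapt_pa(prev_radius, curr_radius, min_drop, max_drop, pa_prev):
    """Set Pa from the latest radius drop, following Algorithm 2."""
    drop = prev_radius - curr_radius
    if drop < 0:
        return pa_prev  # radius grew: case not covered by Algorithm 2, keep old Pa
    if drop < min_drop:
        return 1.0      # drop smaller than desired: trigger exploitation
    if drop > max_drop:
        return 0.0      # drop greater than desired: trigger exploration
    n_drop = (drop - min_drop) / (max_drop - min_drop)  # normalized drop
    return 1.0 - n_drop

print(adapt_pa(10.0, 9.0, min_drop=0.5, max_drop=2.0, pa_prev=0.25))  # ~0.667
```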

9.4 Evaluation Process

9.4.1 Evaluating Algorithms

The performance of ACS is compared with that of four other optimization algorithms: the traditional CS algorithm, the gradient-based optimizer (GBO) [9, 10], the spotted hyena optimizer (SHO) [11, 12], and the improved moth-flame optimization (IMFO) [13, 14].

The GBO is inspired by the gradient-based Newton's method. Its exploration-exploitation balance is attained through two operators, namely the gradient search rule (GSR) and the local escaping operator (LEO). The GSR explores all the potential search areas in the solution space, while the LEO stops the search from being stuck at a local optimum [10]. The SHO algorithm is based on the hunting dynamics of spotted hyenas. This strategy comprises four steps: (a) encircling prey, (b) hunting, (c) attacking prey (exploitation), and (d) searching for prey (exploration) [11]. The IMFO is an improvement over the moth-flame optimizer (MFO). MFO is inspired by the movement of moths at night, where they display varied behaviors under moonlight and artificial light. The traditional MFO suffered from slow convergence and local optima stagnation. IMFO attempts to overcome these flaws with two distinct mechanisms: first, the idea of the historical best flame average is incorporated to help the algorithm escape local optima; second, quasi-opposition-based learning (QOBL) is used to perturb the locations, hence increasing the population diversity and, in turn, improving the rate of convergence [14].


9.4.2 Evaluation Strategy

The success rate of all four algorithms is compared with that of ACS, while the decrease in function evaluations and the proximity to optima are compared solely with the traditional CS algorithm. The CS algorithm with a fixed Pa value (= 0.25) is run over fifteen CEC-15 benchmark functions [15], and the lowest observed difference of radius is taken as the upper bound for the radius drop; the lower bound is taken to be 0. The adaptive CS is run 30 times over the same set of functions. The Pa value for the first iteration is fixed at 0 for maximum exploration. The number of iterations is taken to be √D × 2000 × q, where D is the number of dimensions and q is the number of optima. All experiments are carried out on a 64-bit PC running Microsoft Windows 10 Pro with 128 GB of RAM and an Intel(R) Xeon(R) W-2155 CPU.

9.4.3 Performance Metrics

The experimental analysis is carried out based on three performance metrics:

1. Success rate (SR): the percentage of runs (30 in this case) in which the algorithm successfully reaches the optimal solution with a relaxation of 0.1 (in our case). This means the algorithm may not find the exact optimal solution, but rather a solution that is within 0.1 of the optimal value. The SR indicates how well the algorithm performs in terms of finding the optimal solution. The optimum can be either a local or the global optimum.

SR = (No. of runs reaching the optimal solution) / (Total no. of runs)

2. Decrease in function evaluations (FE): compares the number of function evaluations required by ACS to reach a solution with the number required by CS. This metric indicates the efficiency of ACS; a higher percentage decrease in FE suggests that ACS converges faster and is more efficient in finding the optimal solution.

decrease in FE = (FE in CS − FE in ACS) / (FE in CS) × 100

3. Proximity to optima: measures the closeness of the average fitness values obtained by ACS to the optimal function value. The closer the average fitness values are to the optimal value, the better the algorithm's accuracy.

Proximity = mean fitness value over all runs − optimal function value
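The three metrics are straightforward to compute from per-run results; a short sketch with our own function names follows:

```python
def success_rate(run_results, optimum, tol=0.1):
    """Percentage of runs whose best fitness lands within tol of the optimum."""
    hits = sum(abs(r - optimum) <= tol for r in run_results)
    return 100.0 * hits / len(run_results)

def fe_decrease(fe_cs, fe_acs):
    """Percentage reduction in function evaluations of ACS relative to CS."""
    return 100.0 * (fe_cs - fe_acs) / fe_cs

def proximity(run_results, optimum):
    """Mean fitness over all runs minus the optimal function value."""
    return sum(run_results) / len(run_results) - optimum

print(success_rate([100.05, 100.5, 100.0], optimum=100.0))  # ~66.67 (2 of 3 runs)
print(fe_decrease(fe_cs=1000, fe_acs=700))                  # 30.0
print(proximity([1.0, 3.0], optimum=0.5))                   # 1.5
```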


Table 9.1 Success rate (%) of ACS as compared to those of the four algorithms: GBO, IMFO, SHO, and CS (with Pa = 0.25)

Function | Dimension | CS (Pa = 0.25) | GBO | IMFO | SHO | ACS | Decrease in function evaluations (%)^a
f1  | 5  | 93.33 | 98.04 | 88.24 | 0     | 100   | 31.5
f1  | 10 | 96.67 | 13.72 | 3.92  | 0     | 100   | 29.83
f1  | 20 | 0     | 0     | 0     | 0     | 20    | NA
f2  | 2  | 100   | 100   | 100   | 0     | 100   | 33.57
f2  | 5  | 100   | 100   | 100   | 0     | 100   | NA
f2  | 8  | 100   | 58.82 | 58.82 | 0     | 100   | NA
f3  | 2  | 100   | 100   | 100   | 29.42 | 100   | 28.35
f3  | 3  | 100   | 100   | 100   | 0     | 100   | 25.7
f3  | 4  | 100   | 100   | 100   | 1.96  | 100   | 30
f4  | 5  | 73.33 | 19.6  | 9.8   | 0     | 100   | 0.8
f4  | 10 | 0     | 1.96  | 0     | 0     | 86.67 | 0.68
f4  | 20 | 0     | 0     | 0     | 0     | 3.33  | NA
f5  | 2  | 100   | 100   | 100   | 19.61 | 100   | 28.15
f5  | 3  | 100   | 100   | 100   | 0     | 100   | 34.5
f5  | 4  | 100   | 100   | 100   | 0     | 100   | − 12.11
f6  | 4  | 100   | 100   | 100   | 0     | 100   | 10.72
f6  | 6  | 100   | 96.07 | 84.31 | 0     | 100   | − 2.72
f6  | 8  | 100   | 100   | 100   | 0     | 100   | 13.8
f7  | 6  | 100   | 86.27 | 96.08 | 0     | 100   | − 2.24
f7  | 10 | 100   | 47.05 | 27.45 | 0     | 100   | 1.6
f7  | 16 | 100   | 11.76 | 1.96  | 0     | 100   | 9.11
f8  | 2  | 100   | 100   | 100   | 52.94 | 100   | 17.54
f8  | 3  | 100   | 100   | 100   | 0     | 100   | 0.65
f8  | 4  | 100   | 100   | 92.16 | 0     | 100   | 5.1
f9  | 10 | 100   | 0     | 0     | 0     | 100   | NA
f9  | 20 | 90    | 0     | 0     | 0     | 100   | NA
f9  | 30 | 0     | 0     | 0     | 0     | 3.33  | NA
f10 | 10 | 100   | 100   | 96.08 | 0     | 100   | NA
f10 | 20 | 100   | 100   | 70.59 | 0     | 100   | NA
f10 | 30 | 100   | 100   | 64.7  | 0     | 100   | NA
f11 | 10 | 0     | 1.96  | 0     | 0     | 26.67 | NA
f11 | 20 | 0     | 0     | 0     | 0     | 3.33  | NA
f11 | 30 | 0     | 0     | 0     | 0     | 0     | NA
f12 | 10 | 53.33 | 68.62 | 21.57 | 0     | 100   | NA
f12 | 20 | 0     | 0     | 0     | 0     | 60    | NA
f12 | 30 | 0     | 0     | 0     | 0     | 0     | NA
f13 | 10 | 6.67  | 0     | 1.96  | 0     | 30    | NA
f13 | 20 | 0     | 0     | 0     | 0     | 0     | NA
f13 | 30 | 0     | 0     | 0     | 0     | 3.33  | NA

^a The column shows the percentage reduction in the number of function evaluations in ACS as compared to CS with Pa = 0.25

Table 9.2 Proximity analysis for hybrid functions

Function | Dimension | CS (Pa = 0.25) | ACS
f9  | 10 | 0.015  | 3.15
f9  | 20 | 0.779  | 14.66
f9  | 30 | 15.91  | 29.75
f10 | 10 | 70     | 70
f10 | 20 | 68.46  | 76.91
f10 | 30 | 71.81  | 78.66
f11 | 10 | 0.088  | 0.44
f11 | 20 | 0.158  | 2.11
f11 | 30 | 2.4    | 2.59
f12 | 10 | 13.30  | 0.09
f12 | 20 | 848    | 397
f12 | 30 | 2061   | 1682
f13 | 10 | 144.69 | 84.78
f13 | 20 | 409.75 | 311.93
f13 | 30 | 820.36 | 580.58
f14 | 10 | 800    | 369
f14 | 20 | 2502   | 1221
f14 | 30 | 4296   | 1946
f15 | 10 | 140    | 140
f15 | 20 | 556    | 140
f15 | 30 | 802    | 140


Fig. 9.1 Convergence on functions 1–8: (a) f1, 5D; (b) f2, 2D; (c) f3, 4D; (d) f4, 5D; (e) f5, 3D; (f) f6, 4D

Fig. 9.1 (continued): (g) f7, 6D; (h) f8, 2D

9.5 Result Analysis

An improved success rate can be observed for ACS on functions f1, f4, f11, f12, and f13, as shown in Table 9.1. A striking reduction in the number of function evaluations can be seen for ACS on almost all the functions (f1–f8) except the hybrid functions. The percentage of reduced function evaluations with respect to CS indicates a faster convergence of ACS. The decrease in function evaluations is inapplicable for the hybrid functions, as the algorithm does not fully converge within the maximum number of iterations. A proximity measure is shown in Table 9.2, suggesting the 'closeness' of the ACS solutions even though they do not converge. For functions f12–f15, ACS provides better solutions than CS, while nearly matching the performance of CS on the remaining functions f9–f11. The convergence curves in Figs. 9.1 and 9.2 are relatively stable and non-fluctuating for ACS, suggesting a steady exploration-exploitation balance.

9.6 Discussions and Conclusion

The dependence on experimental results to estimate the upper and lower bounds of the radius drop does not comply with a purely theoretical model, but in practice a larger portion of the iterations is unaffected by the upper-bound value of the radius drop. Therefore, radius drops larger than the upper bound or smaller than the lower bound can simply be clipped, as they affect only a meager section of the entire algorithmic run. Although there are a few variants where Pa is changed linearly, as well as a few other adaptive CS versions [16–19], this work focuses on a more dynamic way of setting the Pa value with a view to balancing exploration and exploitation. Proper tuning of exploration-exploitation significantly aids in enhancing an algorithm's efficacy and efficiency. Targeting

9 Adaptive Cuckoo Search


Fig. 9.2 Convergence on hybrid functions 9–15: panels (a) f9, 10D; (b) f10, 10D; (c) f11, 10D; (d) f12, 10D; (e) f13, 10D; (f) f14, 10D

Fig. 9.2 (continued): panel (g) f15, 10D

and tuning the switching parameter have effectively increased the efficiency of the algorithm manifold on some functions from the CEC-15 framework. In conclusion, while the study provides valuable insights into the performance of the four optimization algorithms, there are many areas for further analysis and exploration. By examining the behavior of each algorithm in different regions of the search space, evaluating their computational efficiency, and exploring the potential for hybridization, future research can build on these findings to develop even more effective optimization techniques for a wide range of real-world applications.

Acknowledgements This work is supported by the Science and Engineering Research Board (SERB), Department of Science and Technology (DST) of the Government of India under Grant No. ECR/2018/000204 and Grant No. EEQ/2019/000657.
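The switching behavior discussed above can be illustrated with a minimal cuckoo search in which the switching parameter Pa decays over the run. This is an illustrative sketch only: the linear Pa schedule, the step scale, and all parameter values below are assumptions, not the chapter's actual ACS update rule (which ties Pa to the radius drop and is not reproduced in this excerpt).

```python
import math
import numpy as np

def adaptive_cuckoo_search(f, dim, n_nests=25, iters=200, pa0=0.5,
                           pa_min=0.05, lb=-5.0, ub=5.0, seed=0):
    """Minimal cuckoo search with a decaying switching parameter Pa (illustrative)."""
    rng = np.random.default_rng(seed)
    beta = 1.5
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fit = np.array([f(n) for n in nests])
    i = int(np.argmin(fit))
    best_x, best_f = nests[i].copy(), float(fit[i])
    for k in range(iters):
        pa = pa0 - (pa0 - pa_min) * k / iters   # assumed linear decay of Pa
        # Levy flights biased toward the current best nest
        u = rng.normal(0.0, sigma, (n_nests, dim))
        v = rng.normal(0.0, 1.0, (n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)
        cand = np.clip(nests + 0.01 * step * (nests - best_x), lb, ub)
        cand_fit = np.array([f(c) for c in cand])
        improved = cand_fit < fit
        nests[improved], fit[improved] = cand[improved], cand_fit[improved]
        # abandon a fraction Pa of nests and re-seed them at random
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.uniform(lb, ub, (int(abandon.sum()), dim))
            fit[abandon] = np.array([f(n) for n in nests[abandon]])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_x, best_f = nests[i].copy(), float(fit[i])
    return best_x, best_f
```

On a simple 2-D sphere function, this sketch typically drives the best objective value well below 1 within a couple of hundred iterations; an early large Pa favors exploration, while the decayed Pa late in the run favors exploitation.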

References

1. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95 International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
2. Dorigo, M., Birattari, M., Stutzle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006)
3. Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights. In: World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214. IEEE (2009)
4. Yang, X.-S., Gandomi, A.H.: Bat algorithm: a novel approach for global engineering optimization. Eng. Comput. (2012)
5. Back, T.: Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press (1996)
6. Črepinšek, M., Liu, S.-H., Mernik, M.: Exploration and exploitation in evolutionary algorithms: a survey. ACM Comput. Surv. (CSUR) 45(3), 1–33 (2013)
7. Eiben, A.E., Schippers, C.A.: On evolutionary exploration and exploitation. Fundamenta Informaticae 35(1–4), 35–50 (1998)
8. Olorunda, O., Engelbrecht, A.P.: Measuring exploration/exploitation in particle swarms using swarm diversity. In: IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1128–1134. IEEE (2008)


9. Bengio, Y.: Gradient-based optimization of hyperparameters. Neural Comput. 12(8), 1889–1900 (2000)
10. Ahmadianfar, I., Bozorg-Haddad, O., Chu, X.: Gradient-based optimizer: a new metaheuristic optimization algorithm. Inf. Sci. 540, 131–159 (2020). ISSN: 0020-0255
11. Dhiman, G., Kumar, V.: Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70 (2017). ISSN: 0965-9978
12. Dhiman, G., Kumar, V.: Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70 (2017)
13. Pelusi, D., Mascella, R., Tallini, L., Nayak, J., Naik, B., Deng, Y.: An improved moth-flame optimization algorithm with hybrid search phase. Knowl.-Based Syst. 191, 105277 (2020)
14. Dai, X., Wei, Y.: Application of improved moth-flame optimization algorithm for robot path planning. IEEE Access 9, 105914–105925 (2021)
15. Liang, J., Qu, B., Suganthan, P., Chen, Q.: Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. Technical Report 201411A, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Technical Report, vol. 29, pp. 625–640. Nanyang Technological University, Singapore (2014)
16. Zhang, Y., Wang, L., Wu, Q.: Modified adaptive cuckoo search (MACS) algorithm and formal description for global optimisation. Int. J. Comput. Appl. Technol. 44(2), 73–79 (2012)
17. Naik, M., Nath, M.R., Wunnava, A., Sahany, S., Panda, R.: A new adaptive cuckoo search algorithm. In: 2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS), pp. 1–5. IEEE (2015)
18. Mlakar, U., Fister, I., Jr., Fister, I.: Hybrid self-adaptive cuckoo search for global optimization. Swarm Evol. Comput. 29, 47–72 (2016)
19. Mareli, M., Twala, B.: An adaptive cuckoo search algorithm for optimisation. Appl. Comput. Inform. 14(2), 107–115 (2018)

Chapter 10

Solitons of the Modified KdV Equation with Variable Coefficients Priyanka Sharma, Sandip Saha, and Pankaj Biswas

Abstract Nonlinear waves are one of nature's most fundamental phenomena, and their propagation in dynamical systems has sparked researchers' interest. This article establishes the three soliton solutions (dark, bright, and singular) of the MKdV equation with variable coefficients using the ansatz approach, with the inverse width, amplitude, and velocity as parameters, and then presents graphs for variations of the inverse width, amplitude, and velocity. The solutions of the present study will be very helpful for various engineering applications and for the research community interested in this field, especially in the field of plasma physics.

10.1 Introduction

Nonlinearity refers to the direct interplay between the dependent and independent variables on which this work is based. In a dynamical system, the propagation of nonlinear waves plays an essential role in the new age of physics. To analyze deep-water wave surfaces, a large number of authors [1–5] have studied the KdV equation using various methods, viz. the solitary wave ansatz method, the inverse scattering transform method, the tanh-sech method, and the homogeneous balance method, respectively [6–8]. Many researchers have used various powerful methods, such as the extended tanh, homogeneous balance, Jacobi elliptic, Hirota bilinear, and wave ansatz methods, to investigate solitary waves. Schamel [1] studied the MKdV equation for ion acoustic waves due to resonant electrons. By assuming an electron equation of state and observing flat-topped electron distribution functions, he investigated the dependence of the asymptotic behavior of small ion acoustic waves on the number of resonant electrons and concluded that the MKdV equation has stronger nonlinearity. Based on the study of Schamel [1], Grosse

P. Sharma · P. Biswas
Department of Mathematics, National Institute of Technology, Silchar, Assam, India

S. Saha (B)
Division of Mathematics, School of Advanced Sciences, Vellore Institute of Technology Chennai, Chennai, Tamil Nadu, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_10


[2] investigated the solitons of the MKdV equation, relating them to reflectionless potentials of the one-dimensional charge-symmetric Dirac operator. For a plasma with negative ions, Nakamura et al. [3] studied the profile of nonlinear ion acoustic waves in a multi-component plasma. MKdV equations are used to describe positive and negative soliton collisions. In addition, the pseudopotential approach has been applied to measure the speed and width of the solitons. Zhang et al. [9] investigated new exact solutions for the GMKdV equation with variable coefficients. A simple direct method is used to obtain the solutions, incorporating four new kinds of Jacobi elliptic functions which degenerate into solitary and triangular waves in the limiting case. Wazwaz et al. [10] studied multiple soliton solutions for the integrable couplings of the MKdV equations. The Bäcklund transformation and the simplified Hirota's method are used to develop two types of nonlinear integrable couplings of the MKdV equation. They revealed that the multiple solitary wave solutions are identical to the multiple soliton solutions of the MKdV equation; however, the Bäcklund transformation's coefficients differ. Pelinovsky et al. [11] studied different types of solitons of assorted polarities within the framework of the MKdV equation. Exchange and overtaking (for positive solitons) and absorbent (for solitons of varied polarities) interactions are considered to introduce the solitons of equal polarity and the wave field moments. In 2017, Wazwaz et al. [12] studied a two-mode MKdV equation (TMKdV) using the simplified Hirota's method together with the tanh/coth technique. They found multiple soliton solutions for variations of the nonlinearity and dispersion parameters. Using the inverse scattering method, Zhang et al. [13] developed a Riemann–Hilbert problem (RHP) for the complex MKdV equation.
The RHP was solved for the case in which the reflection coefficient has multiple higher-order poles. The novel soliton molecules and breather-positon solutions on zero background for the complex MKdV equation were studied by Zhang et al. [14]. Breather solutions were generated for the complex modified equation, and the Darboux transformation was used for the study of higher-order breather-positons. Umer Farooq et al. [15] discovered several new exact solutions to the fractional potential KdV equation. To convert the fractional potential KdV equation into an equivalent fractional ODE, they employed a complex wave transformation with Jumarie's Riemann–Liouville (R–L) derivative, and they used this technique to generate several forms of exact soliton solutions. For the study of magneto-acoustic waves in plasma, Goswami et al. [16] used a numerical approach based on the homotopy perturbation transform method (HPTM) to find precise and approximate solutions of nonlinear fifth-order KdV equations; the method combines the ordinary Laplace transform with the homotopy perturbation technique. In the current research, we have studied the three types of soliton solutions (dark, bright, and singular) of the MKdV equation with the aid of the ansatz method. From the above literature survey, it has been found that most of the previous studies have been performed on solitary waves. To the best of the authors' knowledge, no such work has been done on the MKdV equation by analyzing the dark, bright, and singular solitons. The present analysis of soliton solutions will be very helpful for developing electric circuits, multi-component plasmas, electrodynamics, and traffic-flow models.


10.2 Governing Equations

The generalized equation is

$$\frac{\partial g}{\partial t} + q(t)\,g^{2}\frac{\partial g}{\partial x} + c(t)\frac{\partial^{2} g}{\partial x^{2}} + d(t)\frac{\partial^{3} g}{\partial x^{3}} = 0, \tag{10.1}$$

where q(t), c(t), and d(t) are the variable coefficients.

10.2.1 Solution for Bright Soliton

The bright soliton is a transient surface soliton that causes a momentary increase in the solitary wave's amplitude. For the solitary wave ansatz method, we take

$$g(x,t) = \frac{X}{\cosh^{b}\xi}, \tag{10.2}$$

where $\xi = W(x - vt)$ and $b > 0$. Here, W and X are parameters, with W the inverse width and X the amplitude of the soliton, and the velocity of the soliton is denoted by v. Taking partial derivatives of g with respect to t and x,

$$g_{t} = \frac{1}{\cosh^{b}\xi}\frac{dX}{dt} - X b \left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right)\frac{\tanh\xi}{\cosh^{b}\xi} \tag{10.3}$$

$$g_{x} = -X W b\,\operatorname{sech}^{b}\xi\,\tanh\xi \tag{10.4}$$

$$g_{xx} = \frac{b^{2} X W^{2}}{\cosh^{b}\xi} - \frac{b(b+1) X W^{2}}{\cosh^{b+2}\xi} \tag{10.5}$$

$$g_{xxx} = -\frac{b^{3} X W^{3}\tanh\xi}{\cosh^{b}\xi} + \frac{b(b+1)(b+2) X W^{3}\tanh\xi}{\cosh^{b+2}\xi} \tag{10.6}$$

Substituting $g_{t}$, $g_{x}$, $g_{xx}$, $g_{xxx}$ into Eq. (10.1),

$$\frac{1}{\cosh^{b}\xi}\frac{dX}{dt} - Xb\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right)\frac{\tanh\xi}{\cosh^{b}\xi} - q(t)\frac{b X^{3} W \tanh\xi}{\cosh^{3b}\xi} + c(t)\left(\frac{b^{2} X W^{2}}{\cosh^{b}\xi} - \frac{b(b+1) X W^{2}}{\cosh^{b+2}\xi}\right) + d(t)\left(-\frac{b^{3} X W^{3}\tanh\xi}{\cosh^{b}\xi} + \frac{b(b+1)(b+2) X W^{3}\tanh\xi}{\cosh^{b+2}\xi}\right) = 0$$


Note that b = 1 is obtained by equating the exponents 3b and b + 2. Moreover, setting the coefficients of the linearly independent terms to zero yields

$$-q(t)\, X^{3} W + 6\, d(t)\, X W^{3} = 0 \tag{10.7}$$

$$-X\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right) - d(t)\, X W^{3} = 0 \tag{10.8}$$

$$\frac{dX}{dt} + c(t)\, X W^{2} = 0 \tag{10.9}$$

Evaluating the above equations at b = 1, we get

$$X = X_{0}\, e^{-\int c(t')\, W^{2}(t')\, dt'} \tag{10.10}$$

$$v = \frac{-1}{t\, W(t)} \int_{0}^{t} \left(-d(t')\, W^{3}(t')\right) dt' \tag{10.11}$$

that is,

$$v = \frac{1}{t\, W(t)} \int_{0}^{t} d(t')\, W^{3}(t')\, dt'. \tag{10.12}$$

Hence, the modified KdV equation with variable coefficients has the bright soliton solution

$$g(x,t) = \frac{X}{\cosh(W(x - vt))}.$$
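As a quick consistency check (not part of the original derivation), the bright soliton can be verified symbolically in the constant-coefficient special case with c(t) = 0, where (10.7) and (10.8) reduce to X = (6d/q)^{1/2} W and v = d W^2. A SymPy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t')
q, d, W = sp.symbols('q d W', positive=True)

# Constant-coefficient case of Eq. (10.1) with c(t) = 0:
# (10.7) gives X = sqrt(6*d/q)*W and (10.8) gives v = d*W**2.
X = sp.sqrt(6*d/q) * W
v = d * W**2
g = X / sp.cosh(W*(x - v*t))

residual = sp.diff(g, t) + q*g**2*sp.diff(g, x) + d*sp.diff(g, x, 3)

# The residual vanishes identically; check it numerically at one point.
num = residual.subs({q: 2, d: 1, W: sp.Rational(1, 2), x: 0.3, t: 0.7})
print(abs(float(num)) < 1e-9)
```

The same check fails if either relation is perturbed, which makes it a handy guard when re-deriving the coefficient equations.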

10.2.2 Solution for Dark Soliton

The dark soliton is a transient surface soliton that causes a momentary decrease in the wave's amplitude. For the solitary wave ansatz method, we write

$$g(x,t) = X \tanh^{b}\xi, \tag{10.13}$$

where $\xi = W(x - vt)$ and $b > 0$; W and X are parameters, with W the inverse width and X the amplitude of the soliton, and the velocity of the soliton is denoted by v. Taking partial derivatives of g with respect to t and x,

$$g_{t} = \frac{dX}{dt}\tanh^{b}\xi + X b\left(\tanh^{b-1}\xi - \tanh^{b+1}\xi\right)\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right) \tag{10.14}$$

$$g_{x} = X W b\left(\tanh^{b-1}\xi - \tanh^{b+1}\xi\right) \tag{10.15}$$

$$g^{2} g_{x} = X^{3} W b\left(\tanh^{3b-1}\xi - \tanh^{3b+1}\xi\right) \tag{10.16}$$

$$g_{xx} = X W^{2} b\left[(b-1)\left(\tanh^{b-2}\xi - \tanh^{b}\xi\right) - (b+1)\left(\tanh^{b}\xi - \tanh^{b+2}\xi\right)\right] \tag{10.17}$$

$$g_{xxx} = X W^{3}\left(b^{3} - 3b^{2} + 2b\right)\left(\tanh^{b-3}\xi - \tanh^{b-1}\xi\right) + X W^{3}\, b(b+1)(b+2)\left(\tanh^{b+1}\xi - \tanh^{b+3}\xi\right) + 2 X W^{3} b^{3}\left(\tanh^{b+1}\xi - \tanh^{b-1}\xi\right) \tag{10.18}$$

Substituting Eqs. (10.14)–(10.18) into Eq. (10.1),

$$\frac{dX}{dt}\tanh^{b}\xi + X b\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right)\left(\tanh^{b-1}\xi - \tanh^{b+1}\xi\right) + q(t)\, X^{3} W b\left(\tanh^{3b-1}\xi - \tanh^{3b+1}\xi\right) + c(t)\, X W^{2} b\left[(b-1)\left(\tanh^{b-2}\xi - \tanh^{b}\xi\right) - (b+1)\left(\tanh^{b}\xi - \tanh^{b+2}\xi\right)\right] + d(t)\left[X W^{3}\left(b^{3} - 3b^{2} + 2b\right)\left(\tanh^{b-3}\xi - \tanh^{b-1}\xi\right) + X W^{3}\, b(b+1)(b+2)\left(\tanh^{b+1}\xi - \tanh^{b+3}\xi\right) + 2 X W^{3} b^{3}\left(\tanh^{b+1}\xi - \tanh^{b-1}\xi\right)\right] = 0$$

Note that b = 1 is obtained by equating the exponents 3b + 1 and b + 3. Moreover, setting the coefficients of the linearly independent terms to zero yields

$$-q(t)\, X^{3} W - 6\, d(t)\, X W^{3} = 0 \tag{10.19}$$

$$X\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right) - 2\, d(t)\, X W^{3} = 0 \tag{10.20}$$

$$\frac{dX}{dt} - 2\, c(t)\, X W^{2} = 0 \tag{10.21}$$

$$-X\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right) + q(t)\, X^{3} W + 2\, d(t)\, X W^{3} = 0 \tag{10.22}$$

$$2\, c(t)\, X W^{2} = 0 \tag{10.23}$$

From Eq. (10.19), we get

$$X = \left(\frac{-6\, d(t)\, W^{2}}{q(t)}\right)^{1/2}$$

Substituting this value of X in Eq. (10.20),

$$v = \frac{2}{t\, W(t)} \int_{0}^{t} \left(-d(t')\, W^{3}(t')\right) dt'.$$

Hence, the MKdV equation with variable coefficients has the dark soliton solution

$$g(x,t) = X \tanh(W(x - vt)).$$
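An analogous symbolic check works for the dark soliton. For constant coefficients with c(t) = 0, Eq. (10.19) requires q and d to have opposite signs, q X^2 = -6 d W^2, and Eq. (10.20) gives v = -2 d W^2. A SymPy sketch (the choice q = -m with m > 0 is only for illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')
d, W, m = sp.symbols('d W m', positive=True)
q = -m                   # (10.19) forces q(t) and d(t) to have opposite signs

X = W * sp.sqrt(6*d/m)   # from (10.19): q*X**2 + 6*d*W**2 = 0
v = -2*d*W**2            # from (10.20) with constant coefficients
g = X * sp.tanh(W*(x - v*t))

residual = sp.diff(g, t) + q*g**2*sp.diff(g, x) + d*sp.diff(g, x, 3)

# The residual vanishes identically; check it numerically at one point.
num = residual.subs({d: 1, m: 2, W: sp.Rational(1, 2), x: 0.4, t: 0.6})
print(abs(float(num)) < 1e-9)
```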

10.2.3 Solution for Singular Soliton

The singular soliton can be described as the unstable soliton solution of the MKdV equation with variable coefficients. For the solitary wave ansatz method, we write

$$g(x,t) = \frac{X}{\sinh^{b}\xi}, \tag{10.24}$$

where $\xi = W(x - vt)$ and $b > 0$; W and X are parameters, with W the inverse width and X the amplitude of the soliton, and the velocity of the soliton is denoted by v. Now, taking partial derivatives of g with respect to t and x,

$$g_{t} = \frac{1}{\sinh^{b}\xi}\frac{dX}{dt} - X b\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right)\frac{1}{\sinh^{b}\xi\,\tanh\xi} \tag{10.25}$$

$$g_{x} = -b X W\, \frac{1}{\sinh^{b}\xi\,\tanh\xi} \tag{10.26}$$

$$g_{xx} = \frac{b^{2} X W^{2}}{\sinh^{b}\xi} + \frac{b(b+1)\, X W^{2}}{\sinh^{b+2}\xi} \tag{10.27}$$

$$g_{xxx} = -\frac{b^{3} X W^{3}}{\sinh^{b}\xi\,\tanh\xi} - \frac{b(b+1)(b+2)\, X W^{3}}{\sinh^{b+2}\xi\,\tanh\xi} \tag{10.28}$$

Substituting Eqs. (10.25)–(10.28) into Eq. (10.1), we get

$$\frac{1}{\sinh^{b}\xi}\frac{dX}{dt} - X b\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right)\frac{1}{\sinh^{b}\xi\,\tanh\xi} + \frac{q(t)\, b X^{3} W}{\sinh^{3b}\xi\,\tanh\xi} + \frac{c(t)\, b^{2} X W^{2}}{\sinh^{b}\xi} + \frac{c(t)\left(b^{2} + b\right) X W^{2}}{\sinh^{b+2}\xi} + \frac{d(t)\, b^{3} X W^{3}}{\sinh^{b}\xi\,\tanh\xi} - \frac{d(t)\left(b^{3} + 3b^{2} + 2b\right) X W^{3}}{\sinh^{b+2}\xi\,\tanh\xi} = 0$$

Note that b = 1 is obtained by equating the exponents 3b and b + 2. Moreover, setting the coefficients of the linearly independent terms to zero yields

$$q(t)\, X^{3} W - 6\, d(t)\, X W^{3} = 0 \tag{10.29}$$

$$-X\left(x\frac{dW}{dt} - \frac{d(tWv)}{dt}\right) + d(t)\, X W^{3} = 0 \tag{10.30}$$

$$\frac{dX}{dt} + c(t)\, X W^{2} = 0 \tag{10.31}$$

From Eq. (10.31), we get

$$X = X_{0}\, e^{-\int c(t')\, W^{2}(t')\, dt'}$$

Substituting X in Eq. (10.30), we get

$$v = \frac{1}{t\, W(t)} \int_{0}^{t} \left(-d(t')\, W^{3}(t')\right) dt'.$$

Hence, the modified KdV equation with variable coefficients has the singular soliton solution

$$g(x,t) = \frac{X}{\sinh[W(x - vt)]}.$$


10.3 Result and Discussion

In this section, the results obtained numerically are highlighted with the help of graphs. Based on our approach, the analysis of the obtained solutions is presented in the figures below. Figures 10.1, 10.2 and 10.3 show the solitary waves of the bright soliton (Fig. 10.1), dark soliton (Fig. 10.2), and singular soliton (Fig. 10.3), respectively.

Figure 10.1a–c presents 3D plots illustrating the obtained soliton solutions. Figure 10.1a shows the graphical representation of the bright solitary wave with an initial amplitude of 1000 m, an inverse width of 200 m, and a velocity of 50 m/s. When the amplitude, inverse width, and velocity decrease to 100 m, 2 m, and 0.005 m/s, respectively, the soliton solution is as represented in Fig. 10.1b. The graphical changes indicate that the surface becomes smooth from the left when the values of these three parameters are reduced. The graph becomes flattened from above with a slanted leg when the values of these three parameters become negative, as represented in Fig. 10.1c.

Figure 10.2a–c presents 3D plots illustrating the obtained soliton solutions. Figure 10.2a shows the graphical representation of a dark solitary wave with an initial amplitude of 1000 m, an inverse width of 200 m, and a velocity of 50 m/s. When the amplitude, inverse width, and velocity are reduced to 100 m, 2 m, and 0.005 m/s, the graph shows that the surface becomes bent from the left. When the values of these three parameters become negative, the graph becomes flattened from above and the leg is no longer smooth (Fig. 10.2c).

Figure 10.3a–c presents 3D plots illustrating the obtained soliton solutions. A singular solitary wave with an initial amplitude of 1000 m, an inverse width of 200 m, and a velocity of 50 m/s is depicted graphically in Fig. 10.3a. The graphical adjustment shows that there is a curve from the left when the amplitude, inverse width, and velocity are reduced to 100 m, 2 m, and 0.005 m/s, respectively. The graph in Fig. 10.3c depicts an L-shaped figure when the values of these three parameters become negative.

10.4 Conclusion

The present study analyzes the solution procedure of the MKdV equation by the ansatz approach with the aid of three soliton ansätze. From the present study, the main concluding remarks are as follows:

• Three soliton solutions are found with the help of the ansatz method for the MKdV equation, with the amplitude, inverse width, and velocity varied as parameters.
• It has been found that when the values of the inverse width, amplitude, and velocity decrease, the graph is smooth from the left; but when the inverse width, amplitude, and velocity become negative, the graph no longer remains a smooth curve from the left and is instead flattened.
• Finally, it has been revealed that the method approached here is helpful in the field of plasma physics.

Fig. 10.1 Numerical solutions for bright solitary waves


Fig. 10.2 Numerical solutions for dark solitary waves


Fig. 10.3 Numerical solutions for singular solitary waves


References

1. Schamel, H.: A MKdV equation for ion acoustic waves due to resonant electrons. J. Plasma Phys. 9(3), 377–387 (1973)
2. Grosse, H.: Solitons of the MKdV equation. Lett. Math. Phys. 8, 313–319 (1984)
3. Nakamura, Y., Tsukabayashi, I.: MKdV ion-acoustic solitons in a plasma. J. Plasma Phys. 34(3), 401–415 (1985)
4. Clarke, S., Grimshaw, R., Miller, P.: On the generation of solitons and breathers in the MKdV equation. Chaos 10, 383 (2000)
5. Raut, S., Saha, S., Das, A.N.: Effect of kinematic viscosity on ion acoustic waves in superthermal plasma comprising cylindrical and spherical geometry. Int. J. Appl. Comput. Math. 8(4), 1–12 (2022)
6. Raut, S., Saha, S., Das, A.N.: Studies on the dust-ion acoustic solitary wave in planar and non-planar super-thermal plasmas with trapped electrons. Plasma Phys. Rep. 48(6), 627–637 (2022)
7. Das, A.N., Saha, S., Raut, S., Talukdar, P.: Studies on ion-acoustic solitary waves in plasmas with positrons and two-temperature superthermal electrons through the damped Zakharov-Kuznetsov-Burgers equation. Plasma Phys. Rep. 49(4), 454–466 (2023)
8. Raut, S., Saha, S., Das, A.N., Talukdar, P.: Complete discrimination system method for finding exact solutions, dynamical properties of the combined Zakharov-Kuznetsov-modified Zakharov-Kuznetsov equation. Alex. Eng. J. 76, 247–257 (2023)
9. Zhang, J., Caier, Y.: New exact solutions of a variable-coefficient KdV equation. Appl. Math. Sci. 7(36), 1769–1776 (2013)
10. Wazwaz, A.M.: Multiple solitons solution for the two integrable couplings of the MKdV equation. 14, 219–225 (2013)
11. Pelinovsky, E.N., Shurgalina, E.G.: Two-soliton interaction within the framework of the MKdV equation. Radiophys. Quantum Electron. 57, 35–47 (2015)
12. Wazwaz, A.M.: A two-mode MKdV equation with multiple soliton solutions. Appl. Math. Lett. 70, 1–6 (2017)
13. Zhang, G., Yan, Z.: Inverse scattering transforms and soliton solutions of focusing and defocusing nonlocal MKdV equations with non-zero boundary conditions. Physica D 402, 132–170 (2019)
14. Zhang, Z., Biao, X.Y.: Novel soliton molecules and breather-positon on zero background for the complex MKdV equation. Nonlin. Dyn. 100, 1551–1557 (2015)
15. Farooq, U., Ahmed, N.: Some new exact solutions to fractional potential KdV equation (2019). 978-1-7281-2353-0
16. Goswami, A., Singh, J., Kumar, D.: Numerical simulation of fifth order KdV equations occurring in magneto-acoustic waves. Ain Shams Eng. J. 2265–2273 (2018)

Chapter 11

A Review on Lung Cancer Detection and Classification Using Deep Learning Techniques Jyoti Kumari, Sapna Sinha, and Laxman Singh

Abstract Cancer is a deadly illness caused by a confluence of genetic disorders and metabolic abnormalities. Cancer is the second biggest cause of mortality worldwide, with lung cancer having a substantially higher death rate than other types of cancer. The histopathological discovery of such nodules/cancers is typically the most critical factor in selecting the optimal sequence of therapy. Early disease detection on both fronts dramatically lowers the risk of death. To identify cancer, DL platforms are thought to be the best choice for detecting cancer in the most accurate and stress-free manner for clinicians. While other review publications have examined different system elements, this review focuses on segmenting and categorising lung cancer. Specifically, in this review paper, research works have been selected based on detection and classification using different neural networks. Along with the information utilised in each study, tables have been prepared to fully describe the vital processes of lung nodule identification and diagnosis. As a result, this review gives readers a foundational understanding of the subject.

11.1 Introduction

The health and lives of people are seriously challenged by lung cancer, among the deadliest malignancies in the world. Lung cancer incidence and mortality rates have risen considerably in many countries during the previous 50 years. The considerable reductions in stroke and cardiovascular disease mortality rates compared

J. Kumari (B) · S. Sinha
Amity Institute of Information Technology, Amity University, Noida, India
e-mail: [email protected]

S. Sinha
e-mail: [email protected]

L. Singh
Department of Computer Science and Engineering (AI-ML), KIET Group of Institutions, Ghaziabad, U.P., India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_11


to cancer in several nations have contributed to cancer's gaining prominence as a major cause of death. According to the Indian Council of Medical Research (ICMR) 2020 report, 13.9 lakh new cancer patients are identified in India each year, with 8.5 lakh cancer-related deaths documented. The most recent GLOBOCAN 2018 projections indicate that lung cancer is the most prevalent malignancy (2.1 million new instances, or 11.6% of all incident cancer cases in 2018), with an age-standardized incidence rate of 22.5 per 100,000 person-years globally (31.5 in men, 14.6 in women) [1–10]. Lung nodules are a key radiological indicator for earlier detection, and their identification typically marks the beginning of lung cancer diagnosis. The diameter of the nodule determines its malignancy. The rising prevalence of pulmonary nodules places a burden on hospital systems and clinicians to distinguish between healthy and unhealthy pulmonary tumors. Between 70 and 97% of incidental lesions, and more than 90% of nodules seen during screening, are benign [11–13]. Lung nodules come in many kinds, dimensions, and shapes. While a few nodules have intricate arterial linkages and are challenging to identify, others are spherical nodules with sizes ranging from 2 to 30 mm found in regions with numerous vessels. The majority of studies used the large and publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database to train and test their algorithms, making the studies homogeneous [14]. Since they provide intriguing solutions for medical applications, ML and DL methods have been used in recent years to analyze and interpret medical images and to diagnose disorders. It is still difficult to provide a prediction system that delivers reliable diagnoses, and research in this area is ongoing.
This study aims to assess what has been discovered about enhancing the effectiveness and precision of lung cancer detection technologies over the preceding 5 years [15]. This analysis will offer a realistic picture of every research report, beginning with data analysis. The review covers the pre-processing approaches, techniques for model development, model evaluation, analysis of the results, investigation of the difficulties found, the released databases with model performance, sensitivity, and specificity, and lastly the researchers' recommendations for further research [16–18].

11.2 Data Set Used in Literature Survey

11.2.1 LC25000

With five classes of 5000 photos each, the LC25000 dataset has 25,000 color images in total. Each image is 768 by 768 pixels in JPEG format. The five classes are benign colonic tissue, colon adenocarcinoma, benign lung tissue, lung adenocarcinoma, and lung squamous cell carcinoma.


11.2.2 ACDC@LungHP

The ACDC@LungHP (Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology) competition compares several computer-aided diagnosis (CAD) approaches for the automatic detection of lung cancer. ACDC@LungHP 2019 used an annotated dataset of 150 training images and 50 testing images from 200 patients to assess segmentation (pixel-wise detection of cancer tissue in whole slide imaging (WSI)).

11.2.3 LIDC-IDRI

The 1018 cases in the LIDC-IDRI dataset each contain clinical thoracic CT scan images and an accompanying XML document that reports the outcomes of a two-phase image annotation process carried out by four experienced thoracic radiologists. During the first blinded-read stage, every radiologist individually evaluated every CT scan and assigned the lesions to one of three groups (nodule ≥ 3 mm, nodule < 3 mm, or non-nodule ≥ 3 mm).

11.2.4 LUNA-16

LUNA16 (Lung Nodule Analysis) is a dataset for lung nodule detection and segmentation. It includes 1,186 lung nodules annotated across 888 CT images. Figure 11.1 represents the datasets used in the literature survey.

Fig. 11.1 Data set used in literature survey (real-time data 42%, LIDC-IDRI 39%, LUNA-16 13%, LC25000 3%, ACDC@LungHP 3%)


11.3 Literature Survey

This section contains a collection of research papers on the topic of employing neural networks to identify lung cancer. The directions are:

1. Lung cancer detection and classification using CNN
2. Lung cancer detection and classification using R-CNN
3. Lung cancer detection and classification using ANN
4. Lung cancer detection and classification using U-Net

11.3.1 Lung Cancer Detection and Classification Using CNN

The work by Hatuwal et al. [19] demonstrates how to identify lung cancer from histopathological images. Three classifications were made using a CNN: benign, adenocarcinoma, and squamous cell carcinoma. The model's accuracy during training and validation was 96.11% and 97.20%, respectively. A fully automated method for identifying lung cancer in whole slide images of lung tissue sections was proposed by Sari et al. [20]. CNN backbone architectures such as VGG and ResNet are used for image patch classification and their results were compared; however, this model lacks image augmentation. Togaçar et al. [21] proposed deep learning models to identify lung tumors using CT and MRI images. Image augmentation methods such as cutting, zooming, horizontal rotation, and filling were applied to the database. Future technological advancements might enable the creation of powerful CAD systems for more medical imaging purposes. Bonavita et al. [22] recommended that nodule malignancy be evaluated using 3D CNNs incorporated into an existing automated end-to-end lung cancer detection pipeline; however, this model lacks nodule malignancy assessment. Moitra et al. [23] used the Cancer Imaging Archive's (TCIA) revised NSCLC Radiogenomics Collection. A hybrid feature identification and extraction approach (MSER-SURF) was applied to segmented tumour images. The efficiency and overall ROC-AUC rating were higher than those of the other main ML algorithms. Neal Joshua et al. [24] examined lung nodule identification using an enhanced 3D AlexNet with a lightweight design. They performed binary categorization (benign and malignant) on computed tomography (CT) pictures from the LUNA16 dataset combined with the Image Database Resource Initiative datasets. The findings were obtained using tenfold cross-validation. Kasinathan et al. [25] predicted the centroid displacement and contour points using a curve evolution technique, which led to much more accurate forecasts of contour alterations. The obtained images were then classified using an Enhanced CNN Classifier. A computerized diagnostic of a lung tumour can be made

Fig. 11.2 Chart for year-wise papers for CNN (2019: 20%, 2020: 60%, 2021: 20%)

with great precision using computer-assisted diagnostics (CAD). More work may be done with a focus on other pre-processing approaches, as well as using the Enhanced CNN algorithm findings for higher accuracy in age and stage categorization. Anwer et al. [26] proposed an identification method for distinguishing between different types of lung cancer. The suggested approach is founded on CNNs and deep learning. The system is constructed using MATLAB GUI transfer learning and is trained and evaluated using data obtained from K1 hospital in Kirkuk, Iraq. Sajja et al. [27] used a pre-trained CNN, GoogLeNet, as the foundation for a deep neural network. In the dropout layers, 60% of all neurons are used, which reduces computation costs and avoids overfitting. The proposed network's effectiveness was assessed with and without dropout at various dropout ratios. Mohammed et al. [28] used pre-trained convolutional neural networks for training and categorisation. To obtain high performance and specifically detect lung cancer on CT images, CNN and transfer learning (TL) are utilised. The confusion matrix, precision, recall, specificity, and F1-score are a few metrics used to evaluate the models. Figure 11.2 represents the year-wise papers with CNN.
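The evaluation metrics mentioned above can all be derived from the binary confusion matrix. A minimal sketch (the encoding 1 = malignant, 0 = benign is an illustrative assumption):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix based metrics for a binary (e.g. malignant vs. benign) task."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))   # true positives
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))   # true negatives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))   # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0        # a.k.a. sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"confusion": [[tn, fp], [fn, tp]], "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```

For example, `binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])` yields a precision and recall of 2/3 and a specificity of 0.5, matching a hand count of the confusion matrix.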

11.3.2 Lung Cancer Detection and Classification Using R-CNN Zhang et al. [29] first created three versions of Mask R-CNN, each optimised using specific training data containing images at three distinct scales; each training set contained 594 lung tumour-containing slices. To reduce the likelihood of false-positive results, the three masks were combined by weighted voting. By further collaborating with clinics and gathering sufficient imaging data for testing, the authors hope to increase the precision of this technique in the long term. Su et al. [30] suggested a Faster R-CNN method for detecting lung nodules. Lung nodules can be found using the Faster R-CNN


J. Kumari et al.

algorithm, and the training set is used to show how useful the method is. In future work, the authors should include benign tiny nodules in the dataset; additionally, this software can assist scientists and radiologists in planning and creating a lung nodule recognition structure. The efficient object-detection network Mask R-CNN, which provides contour data, was used by Liu et al. [31] for lung nodule identification, and the classification scheme with the highest accuracy was chosen. To obtain more accurate nodule segmentation on CT images, the authors want to convert the 2D network model into a 3D model in the future. Wang et al. [32] used a dynamic region-based CNN in a two-stage detection approach for lung cancer. After experimental validation, the model performed remarkably well despite a strict IoU (intersection over union) criterion, with a mean average precision (mAP) of 88.1% on the lung cancer dataset. In the future, further optimization methods could be applied, such as monarch butterfly optimization (MBO) or the moth search (MS) procedure. Guo and Bai [33] suggested a novel model for multi-scale pulmonary nodule detection combining Cascade R-CNN and a feature pyramid network (FPN); on the Lung Nodule Analysis 2016 (LUNA16) database, this algorithm achieved an average precision (AP) of 0.879. The ability to identify pulmonary nodules in 3D rather than 2D images will be crucial in the future. Yan et al. [34] suggested a technique to segment lung nodules based on Mask R-CNN. Due to the non-uniformity of CT data, feature dimensionality reduction with the Laplacian operator is performed to filter out some of the noise, and the Focal Loss function is employed to suppress correctly categorised samples. The average lung nodule accuracy is 78%, but more study and model development are required. Khairandish et al.
[35] analysed MRI scans to locate tumours and classify them as benign or malignant; using a combination of Fast R-CNN and SVM, an enhanced deep-learning approach, MRI scans of patients with tumours were classified with 98.81% accuracy. Future research could compare these results with other approaches (e.g., KNN, RF) to identify new gains and attempt tumour removal. Cai et al. [36] describe an approach for 3D reconstruction of pulmonary nodules and lungs developed by researchers at Aberystwyth University in the UK; it is founded on the ray-casting volume rendering technique and the Mask Region-based Convolutional Neural Network (Mask R-CNN). The multiview technique will be further investigated to improve overall segmentation and detection accuracy. Li et al. [37] employed transfer learning to increase classification efficiency and reduce overfitting caused by the small number of tagged lung cancer samples; other CNN designs, such as ResNet-50 and VGG, should be used in future research. Zia et al. [38] presented the Two-step Deep Network (TsDN), a fully autonomous lung CT method for the diagnosis of cancer, whose two components are nodule identification and categorization. To

Fig. 11.3 Chart for year-wise papers on R-CNN: 2018, 11%; 2019, 22%; 2020, 34%; 2021, 33%

recognise nodules, an enhanced 3D Faster R-CNN with a U-Net-like encoder-decoder structure is used; for the classification of pulmonary nodules, the Multi-scale Multi-crop Convolutional Neural Network (MsMc-CNN) was developed. Fig. 11.3 represents the year-wise papers on R-CNN.
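The IoU criterion used to judge these detectors can be sketched for axis-aligned bounding boxes; the coordinates below are illustrative only:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive when its IoU with the
# ground-truth box exceeds a chosen threshold (e.g. 0.5).
score = iou((0, 0, 10, 10), (5, 0, 15, 10))   # two half-overlapping boxes
```

mAP figures such as the 88.1% above are averages of precision over detections ranked by confidence, with correctness decided by exactly this IoU test.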

11.3.3 Lung Cancer Detection and Classification Using ANN According to Nasser and Abu-Naser [39], an artificial neural network can identify the presence or absence of lung cancer with 96.67% accuracy. The model was trained on symptoms such as yellow fingers, anxiety, chronic sickness, fatigue, allergies, wheezing, coughing, shortness of breath, difficulty swallowing, and chest discomfort; additional data will be required for more precise prediction. Adetiba and Olugbara [40] evaluated SVM and ANN ensembles for their ability to forecast lung cancer. These ML algorithms were trained on data from the IGDB: epidermal growth factor receptor, Kirsten rat sarcoma viral oncogene, and tumour suppressor p53 mutant nucleotide corpora from NSCLC patients. In the future, the authors intend to include more biomarkers and conduct more rigorous comparisons with other cutting-edge machine learning algorithms. Apsari et al. [41] provided an automatic digital qualitative method for lung cancer diagnosis on computed tomography images, based on an ANN with the self-organizing map (SOM) method; a CT scan of the thorax reveals both healthy tissue and cancerous stage I and II lung infections. Khobragade et al. [42] demonstrated lung segmentation, feature extraction, and classification with an ANN approach for identifying lung illnesses such as tuberculosis, lung cancer, and pneumonia; statistical and geometrical features are extracted. The recommended process's weakness is that it is not robust when the size and position of the chest X-ray image vary.
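The symptom-based ANN classifiers surveyed here reduce to a small feed-forward network over binary symptom features; a minimal forward-pass sketch in NumPy (the weights and feature vector are illustrative, not taken from the cited works):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ann_forward(x, w1, b1, w2, b2):
    """One hidden layer with sigmoid activations; the output is read
    as the probability that lung cancer is present."""
    hidden = sigmoid(x @ w1 + b1)
    return sigmoid(hidden @ w2 + b2)

# Ten binary symptom features (e.g. wheezing, coughing, chest discomfort).
rng = np.random.default_rng(42)
w1, b1 = rng.normal(size=(10, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], dtype=float)
p = float(ann_forward(x, w1, b1, w2, b2)[0])   # probability in (0, 1)
```

Training would fit `w1`, `b1`, `w2`, `b2` by backpropagation on labelled patient records; only the inference pass is shown here.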

Fig. 11.4 Chart for year-wise papers on ANN: 2015, 20%; 2016, 20%; 2018, 20%; 2019, 40%

According to Kaur et al. [43], a lung cancer detection method that uses an ANN together with image filtering to categorise lung cancer stages achieves an accuracy of 93.3% on lung CT scans taken from a private clinic. In the future, ant colony optimization may be combined with the ANN to achieve better results. An ANN approach for estimating tumour type was built and assessed by Nasser and Abu-Naser [44]. The model's input parameters were age, gender, histologic type, degree of differentiation, and the condition of bone, bone marrow, lung, pleura, peritoneum, liver, brain, skin, neck, supraclavicular, axillar, mediastinum, and abdomen. The ANN model can identify the tumour type with 76.67% accuracy. Figure 11.4 represents the year-wise papers used for the survey of ANN.

11.3.4 Lung Cancer Detection and Classification Using U-Net Chen et al. [45] compared segmented lung CT results from a U-Net network fused with dilated convolution (DC-U-Net) against Otsu thresholding and region growing, using Intersection over Union (IoU), the Dice coefficient, accuracy, and recall to estimate the efficiency of the three techniques. This system streamlines the steps needed to segment medical images and enhances segmentation of the trachea, lung blood vessels, and other tissues; however, the dataset used in this research is very limited. Shaziya et al. [46] note that the U-Net convolutional network was devised and constructed specifically for the segmentation of biomedical images. In this work, U-Net ConvNet was used to segment lungs on a dataset of 267 lung CT images and their corresponding segmentation maps. The achieved accuracy and loss are 0.9678 and 0.0871, respectively; U-Net ConvNet can therefore be used to delineate the lungs in CT scans. By training the U-Net on the actual image size of 128 × 128 pixels,


the accuracy may be increased even further, and the number of convolution operations and the filter sizes can be raised to improve accuracy. Yang et al. [47] note that automated lung tumour segmentation is difficult due to the large variation in the targeted tumours' appearance and shape. The problem is addressed, and the network's ability to acquire richer representations of lung tumours from both global and local perspectives is increased, by introducing a powerful 3D U-Net with ResNet together with a two-pathway deep supervision strategy. The findings show that the suggested 3D MSDS-UNet outperforms the most recent segmentation models for all tumour sizes, notably for small tumours. In the future, the authors will examine extending MSDS-UNet to more sophisticated encoder backbones, including HRNet. Cheng et al. [48] created a deep-learning-based method for identifying lung nodules: suspected lung nodules are first extracted using a segmentation network, and their precise identification is then carried out by a classification network. On the one hand, ResNet's residual structure with shortcut connections may effectively address the vanishing-gradient problem and allow more accurate acquisition of attributes; by incorporating the ResNet network into the U-Net network, the authors propose a modified U-Net Block network that increases the precision of segmenting suspicious lung nodules. On the other hand, as more elements are added, the gradient vanishes and the training effect becomes less effective. Ali et al. [49] proposed an efficient end-to-end classification algorithm based on densely connected dilated convolutions, along with an improved feature-learning approach. Lung ROIs are initially extracted from the CT scans using k-means clustering with morphological techniques to reduce the model's search space, rather than using complete CT scan pictures or nodule patches.
The effectiveness of the suggested approach is assessed using the publicly accessible LIDC-IDRI dataset. Suzuki et al. [50] created a three-dimensional (3D) U-Net deep-learning model that can detect lung nodules on chest CT images. In both internal and external validation of the model, the CPM was 94.7% (95% CI: 89.1%-98.6%). The CPM on the Japanese sample was somewhat lower, most likely owing to discrepancies in nodule labelling and CT scan characteristics. Fig. 11.5 represents the year-wise papers on U-Net.
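The Dice coefficient and IoU used throughout these segmentation studies can be sketched on toy binary masks; the masks below are illustrative, not data from any cited paper:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    """Overlap metrics for binary segmentation masks.
    Dice = 2|A∩B| / (|A| + |B|);  IoU = |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

# Toy 4x4 masks: the prediction covers 3 of the 4 true pixels
# plus 1 false-positive pixel.
truth = np.zeros((4, 4), int)
truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), int)
pred[1:3, 1:2] = 1
pred[1, 2] = 1
pred[0, 0] = 1

dice, iou = dice_and_iou(pred, truth)
```

Dice weights the overlap twice, so it is always at least as large as IoU; both reach 1.0 only for a perfect mask.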

Fig. 11.5 Chart for year-wise papers on U-Net: 2018, 20%; 2019, 20%; 2020, 20%; 2021, 40%

Table 11.1 below gives a systematic summary of each neural network's performance, dataset, and results. According to Table 11.1, the neural-network-based CAD design is the most effective method for identifying and categorizing lung nodules in CT images. Researchers are using CNNs for the initial recognition and classification of malignancies as a result of the significant success they have had with nodule identification and classification.

11.4 Summary Deep learning (DL) platforms are considered the most effective and stress-free means for physicians to diagnose cancer. This review focuses on segmentation and classification, whereas other review papers have examined different system components. Specifically, this study used data from neural networks such as CNN, R-CNN, ANN, and U-Net that perform detection and classification. According to the literature review, the most accurate prediction of lung cancer can be obtained using a neural network. To increase the efficiency and precision of the procedure, several issues need to be resolved. The most critical aspect of cancer detection and classification is accurate diagnosis. However, in a few cases, such as a lack of datasets and tiny nodules, early cancer detection can be challenging. The scanning process is time-consuming, and clinicians must manually inspect the report to find small nodules.

11.5 Challenges To improve current infrastructure and suggest alternatives, further study is needed. According to the survey, building a CADe solution that meets all of the aforementioned objectives with a fast processing cycle requires collective projects through the establishment of software communities. The issues in creating CADe algorithms for identifying pulmonary nodules are therefore as follows:
• Development of new or improved approaches for segmenting lung images to enable higher levels of automation, including cases with severe pathologies, tiny nodules (3 mm), and ground-glass opacity.
• Development of systems that can recognise nodules, determine their characteristics (malignancy, volume, calcification presence and arrangement, contours, edges, and interior parts), and evaluate the status of oncological treatment and its prospective prognosis.
• Larger databases should be made available for effective system validation.
• The sensitivity of CADe systems is comparatively good, yet the number of FP (false positive) findings is considerable in comparison to the performance of radiologists.


Table 11.1 Systematic explanation of the surveyed methods with performance (SN sensitivity, SP specificity, ACC accuracy; "–" not reported)

| Ref | Year | Tools and segment | Classified type of cancer | Data set | SN | SP | ACC |
|-----|------|-------------------|---------------------------|----------|-----|-----|------|
| 19 | 2020 | CNN/support vector machine | Adenocarcinoma, squamous cell carcinoma | LC25000 lung and colon histopathological image dataset | – | – | 97.20% |
| 20 | 2019 | CNN | – | ACDC@LUNGHP | – | – | 97.9% |
| 21 | 2020 | CNN/GLCM method | – | LIDC-IDRI | 99.32% | 99.71% | 99.51% |
| 22 | 2020 | CNN/re-sampled CTs | – | LIDC and LUNA16 datasets | – | – | 96 ± 2% |
| 23 | 2020 | CNN/PET/CT (DICOM) tumour images | Non-small cell lung cancer | – | – | – | 96 ± 3% |
| 24 | 2021 | CNN | Thoracic cancer | LUNA16 dataset | – | – | 97.17% |
| 25 | 2019 | CNN/computed tomography (CT) images | – | LIDC-IDRI | 89.0% | 91% | 97% |
| 26 | 2020 | CNN | Non-small cell lung cancer | Real-time data collected from K1 hospital in Kirkuk, Iraq | 92.59% | – | 93.33% |
| 27 | 2020 | CNN/CT scan | Non-small cell lung cancer | LIDC-IDRI | – | – | 99.03% |
| 28 | 2021 | CNN/sparse autoencoder | – | LIDC-IDRI | 94% | – | 88.41% |
| 29 | 2019 | R-CNN/support vector machine | Non-small cell lung cancer | Real-time dataset | – | – | – |
| 30 | 2021 | R-CNN | – | LIDC-IDRI | – | – | 83.9% |
| 31 | 2018 | R-CNN/CT scan | – | LIDC-IDRI | – | – | – |
| 32 | 2022 | R-CNN | Adenocarcinoma, small cell carcinoma, and squamous cell carcinoma | Real-time dataset (34,056 pathological images of 261 patients) | – | – | 88.1% (mAP) |
| 33 | 2021 | R-CNN | – | LUNA16 | 86.5% | 91.5% | 98.6% |
| 34 | 2019 | R-CNN/CT scan | Non-small cell lung cancer | LIDC-IDRI | – | – | 78% |
| 35 | 2020 | R-CNN/MRI image | – | BRATS 2015 | – | – | 91.66% |
| 36 | 2020 | R-CNN/CT scan | – | LIDC-IDRI | 88.7% | – | 88.1% |
| 37 | 2021 | R-CNN/microarray camera image | Lung squamous cell carcinoma, lung adenocarcinoma | Real-time dataset (Heilongjiang Provincial Hospital, 400 CT images) | – | – | 89.7% |
| 38 | 2020 | R-CNN/semantic image | – | LIDC-IDRI | 88.5% | 92.2% | 94.6% |
| 39 | 2019 | ANN/CT scan | – | Real-time dataset | – | – | 96.67% |
| 40 | 2015 | ANN/CT scan | – | Real-time dataset | – | – | 87.6% |
| 41 | 2021 | ANN/CT scan | CT thorax | Real-time dataset | – | – | 87% |
| 42 | 2016 | ANN/CT scan | Lung cancer, TB, pneumonia | Real-time dataset | – | – | 92% |
| 43 | 2018 | ANN/CT scan | – | Real-time dataset (Aastha Hospital) | – | – | 93.3% |
| 44 | 2019 | ANN/CT scan | Primary tumour | Real-time dataset | – | – | 76.67% |
| 45 | 2021 | U-Net/CT scan | – | LUNA16 | – | – | – |
| 46 | 2018 | U-Net/CT scan | – | Real-time dataset (267 images of lung CT scan) | – | – | 96.78% |
| 47 | 2021 | U-Net/CT scan | Small tumour | Real-time dataset (Liaoning Cancer Hospital, 220 cases) | 74.6% | – | – |
| 48 | 2019 | U-Net/CT scan | Small nodules | Real-time dataset (contains more than 76,000 CT scans) | 80% | – | 73% |
| 49 | 2022 | U-Net/CT scan | Small nodules | LIDC-IDRI | 70.2% | – | – |
| 50 | 2020 | U-Net/CT scan | – | LIDC-IDRI | 82% | – | 94.7% |

11.6 Future Scope Future studies on the creation of neural-network-based CAD systems for identifying and categorizing lung nodules should largely concentrate on:
• Resolving the challenge of deploying computerized tools in routine medical activity by creating new, more reliable methods for nodule recognition that improve sensitivity while maintaining a small proportion of false positives.
• Developing a brand-new CAD structure based on cutting-edge feature-map visualisation capabilities, to more effectively evaluate and relay the CNN's judgement to radiologists.
• Developing a DL system that can recognize several lung nodule histological categories (solid, non-solid, mixed) in different locations (isolated, juxta-vascular, or juxta-pleural), with diameters under 3 mm.

11.7 Conclusion Recent research using DL techniques has produced encouraging findings for the CT-based detection of lung nodules. Segmenting and classifying lung nodules for identification and treatment remains difficult. Numerous methods for segmenting lung nodules rely on either a general or a multiview neural-network design. The bulk of studies that fed multiple views of the pulmonary nodules into neural networks incorporated several unique design techniques, while the general neural-network-based techniques were mostly built on the U-Net design. In parallel, numerous lung nodule types, including boundary, juxta-pleural, tiny, well-circumscribed, and large nodules, were segmented using classification methods. Despite these constraints, our review of the literature demonstrates the need to develop robust DL architectures that properly and quickly segment and categorize lung nodules. Finally, future studies should concentrate on creating fresh proposals with computer-based techniques that are easy to use and will be useful to both academics and medical professionals.


References
1. Zhang, S., Sun, K., Zheng, R., Zeng, H., Wang, S., Chen, R., et al.: Cancer incidence and mortality in China, 2015. J. National Cancer Center 1(1), 2–11 (2021)
2. Sung, H., Ferlay, J., Siegel, R.L., Laversanne, M., Soerjomataram, I., Jemal, A., Bray, F.: Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 71(3), 209–249 (2021)
3. Xia, C., Dong, X., Li, H., Cao, M., Sun, D., He, S., et al.: Cancer statistics in China and United States, 2022: profiles, trends, and determinants. Chin. Med. J. 135(05), 584–590 (2022)
4. Lei, S., Zheng, R., Zhang, S., Wang, S., Chen, R., Sun, K., et al.: Global patterns of breast cancer incidence and mortality: a population-based cancer registry data analysis from 2000 to 2020. Cancer Commun. 41(11), 1183–1194 (2021)
5. Liu, S., Chen, Q., Guo, L., Cao, X., Sun, X., Chen, W., He, J.: Incidence and mortality of lung cancer in China, 2008–2012. Chin. J. Cancer Res. 30(6), 580 (2018)
6. Brustugun, O.T., Grønberg, B.H., Fjellbirkeland, L., Helbekkmo, N., Aanerud, M., Grimsrud, T.K., et al.: Substantial nation-wide improvement in lung cancer relative survival in Norway from 2000 to 2016. Lung Cancer 122, 138–145 (2018)
7. Lin, H.T., Liu, F.C., Wu, C.Y., Kuo, C.F., Lan, W.C., Yu, H.P.: Epidemiology and survival outcomes of lung cancer: a population-based study. BioMed Res. Int. 2019, 1–19 (2019)
8. Li, R., Xiao, C., Huang, Y., Hassan, H., Huang, B.: Deep learning applications in computed tomography images for pulmonary nodule detection and diagnosis: a review. Diagnostics 12(2), 298 (2022)
9. Dunke, S.R., Tarade, S.S.: Lung cancer detection using deep learning 2582, 7421 (2022). www.ijrpr.com
10. Talukder, M.A., Islam, M.M., Uddin, M.A., Akhter, A., Hasan, K.F., Moni, M.A.: Machine learning-based lung and colon cancer detection using deep feature extraction and ensemble learning. Expert Syst. Appl. 117695 (2022)
11. Afshar, P., Naderkhani, F., Oikonomou, A., Rafiee, M.J., Mohammadi, A., Plataniotis, K.N.: MIXCAPS: a capsule network-based mixture of experts for lung nodule malignancy prediction. Pattern Recogn. 116, 107942 (2021)
12. Madariaga, M.L., Lennes, I.T., Best, T., Shepard, J.A.O., Fintelmann, F.J., Mathisen, D.J., et al.: Multidisciplinary selection of pulmonary nodules for surgical resection: diagnostic results and long-term outcomes. J. Thoracic Cardiovas. Surg. 159(4), 1558–1566 (2020)
13. Karunakaran, N., Nishy Reshmi, S.: Survey on computerized lung segmentation and detection
14. Laursen, C.B., Clive, A., Hallifax, R., Pietersen, P.I., Asciak, R., Davidsen, J.R., et al.: European Respiratory Society statement on thoracic ultrasound. Eur. Respir. J. 57(3), 2001519 (2021)
15. Prabukumar, M., Agilandeeswari, L., Ganesan, K.: An intelligent lung cancer diagnosis system using cuckoo search optimization and support vector machine classifier. J. Ambient. Intell. Humaniz. Comput. 10(1), 267–293 (2019)
16. Jassim, M.M., Jaber, M.M.: Systematic review for lung cancer detection and lung nodule classification: taxonomy, challenges, and recommendation future works. J. Intell. Syst. 31(1), 944–964 (2022)
17. Halder, A., Dey, D., Sadhu, A.K.: Lung nodule detection from feature engineering to deep learning in thoracic CT images: a comprehensive review. J. Digit. Imaging 33(3), 655–677 (2020)
18. Meng, Q., Ren, P., Gao, P., Dou, X., Chen, X., Guo, L., Song, Y.: Effectiveness and feasibility of complementary lung-RADS version 1.1 in risk stratification for pGGN in LDCT lung cancer screening in a Chinese population. Cancer Manag. Res. 12, 189 (2020)
19. Hatuwal, B.K., Thapa, H.C.: Lung cancer detection using convolutional neural network on histopathological images. Int. J. Comput. Trends Technol. 68(10), 21–24 (2020)
20. Šarić, M., Russo, M., Stella, M., Sikora, M.: CNN-based method for lung cancer detection in whole slide histopathology images. In: 2019 4th International Conference on Smart and Sustainable Technologies (SpliTech), pp. 1–4. IEEE (2019)


21. Toğaçar, M., Ergen, B., Cömert, Z.: Detection of lung cancer on chest CT images using minimum redundancy maximum relevance feature selection method with convolutional neural networks. Biocybern. Biomed. Eng. 40(1), 23–39 (2020)
22. Bonavita, I., Rafael-Palou, X., Ceresa, M., Piella, G., Ribas, V., Ballester, M.A.G.: Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline. Comput. Methods Programs Biomed. 185, 105172 (2020)
23. Moitra, D., Mandal, R.K.: Classification of non-small cell lung cancer using one-dimensional convolutional neural network. Expert Syst. Appl. 159, 113564 (2020)
24. Neal Joshua, E.S., Bhattacharyya, D., Chakkravarthy, M., Byun, Y.C.: 3D CNN with visual insights for early detection of lung cancer using gradient-weighted class activation. J. Healthcare Eng. 2021, 1–11 (2021)
25. Kasinathan, G., Jayakumar, S., Gandomi, A.H., Ramachandran, M., Fong, S.J., Patan, R.: Automated 3-D lung tumor detection and classification by an active contour model and CNN classifier. Expert Syst. Appl. 134, 112–119 (2019)
26. Anwer, D.N., Ozbay, S.: Lung cancer classification and detection using convolutional neural networks. In: Proceedings of the 6th International Conference on Engineering & MIS 2020, pp. 1–8 (2020)
27. Sajja, T., Devarapalli, R., Kalluri, H.: Lung cancer detection based on CT scan images by using deep transfer learning. Traitement du Signal 36(4), 339–344 (2019)
28. Mohammed, S.H., Çinar, A.: Lung cancer classification with convolutional neural network architectures. Qubahan Acad. J. 1(1), 33–39 (2021)
29. Zhang, R., Cheng, C., Zhao, X., Li, X.: Multiscale mask R-CNN-based lung tumor detection using PET imaging. Mol. Imaging 18, 1536012119863531 (2019)
30. Su, Y., Li, D., Chen, X.: Lung nodule detection based on faster R-CNN framework. Comput. Methods Programs Biomed. 200, 105866 (2021)
31. Liu, M., Dong, J., Dong, X., Yu, H., Qi, L.: Segmentation of lung nodule in CT images based on mask R-CNN. In: 2018 9th International Conference on Awareness Science and Technology (iCAST), pp. 1–6. IEEE (2018)
32. Wang, X., Wang, L., Zheng, P.: SC-dynamic R-CNN: a self-calibrated dynamic R-CNN model for lung cancer lesion detection. Comput. Math. Methods Med. 2022, 1–9 (2022)
33. Guo, N., Bai, Z.: Multi-scale pulmonary nodule detection by fusion of cascade R-CNN and FPN. In: 2021 International Conference on Computer Communication and Artificial Intelligence (CCAI), pp. 15–19. IEEE (2021)
34. Yan, H., Lu, H., Ye, M., Yan, K., Xu, Y., Jin, Q.: Improved Mask R-CNN for lung nodule segmentation. In: 2019 10th International Conference on Information Technology in Medicine and Education (ITME), pp. 137–141. IEEE (2019)
35. Khairandish, M.O., Gurta, R., Sharma, M.: A hybrid model of faster R-CNN and SVM for tumor detection and classification of MRI brain images. Int. J. Mech. Prod. Eng. Res. Dev. 10(3), 6863–6876 (2020)
36. Cai, L., Long, T., Dai, Y., Huang, Y.: Mask R-CNN-based detection and segmentation for pulmonary nodule 3D visualization diagnosis. IEEE Access 8, 44400–44409 (2020)
37. Li, S., Liu, D.: Automated classification of solitary pulmonary nodules using convolutional neural network based on transfer learning strategy. J. Mech. Med. Biol. 21(05), 2140002 (2021)
38. Zia, M.B., Xiao, Z.J.J.N.: Detection and classification of lung nodule in diagnostic CT: a TsDN method based on improved 3D-faster R-CNN and multi-scale multi-crop convolutional neural network. Int. J. Hybrid Inf. Technol. 13(2), 45–56 (2020)
39. Nasser, I.M., Abu-Naser, S.S.: Lung cancer detection using artificial neural network. Int. J. Eng. Inf. Syst. 3(3), 17–23 (2019)
40. Adetiba, E., Olugbara, O.O.: Lung cancer prediction using neural network ensemble with histogram of oriented gradient genomic features. Sci. World J. 2015, 1–17 (2015)
41. Apsari, R., Aditya, Y.N., Purwanti, E., Arof, H.: Development of lung cancer classification system for computed tomography images using artificial neural network. In: AIP Conference Proceedings, vol. 2329, no. 1, p. 050013. AIP Publishing LLC (2021)


42. Khobragade, S., Tiwari, A., Patil, C.Y., Narke, V.: Automatic detection of major lung diseases using chest radiographs and classification by feed-forward artificial neural network. In: 2016 IEEE 1st International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), pp. 1–5. IEEE (2016)
43. Kaur, L., Sharma, M., Dharwal, R., Bakshi, A.: Lung cancer detection using CT scan with artificial neural network. In: 2018 International Conference on Recent Innovations in Electrical, Electronics & Communication Engineering (ICRIEECE), pp. 1624–1629. IEEE (2018)
44. Nasser, I.M., Abu-Naser, S.S.: Predicting tumor category using artificial neural networks
45. Chen, K.B., Xuan, Y., Lin, A.J., Guo, S.H.: Lung computed tomography image segmentation based on U-Net network fused with dilated convolution. Comput. Methods Programs Biomed. 207, 106170 (2021)
46. Shaziya, H., Shyamala, K., Zaheer, R.: Automatic lung segmentation on thoracic CT scans using U-net convolutional network. In: 2018 International Conference on Communication and Signal Processing (ICCSP), pp. 0643–0647. IEEE (2018)
47. Yang, J., Wu, B., Li, L., Cao, P., Zaiane, O.: MSDS-UNet: a multi-scale deeply supervised 3D U-Net for automatic segmentation of lung tumor in CT. Comput. Med. Imaging Graph. 92, 101957 (2021)
48. Cheng, H., Zhu, Y., Pan, H.: Modified U-net block network for lung nodule detection. In: 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), pp. 599–605. IEEE (2019)
49. Ali, Z., Irtaza, A., Maqsood, M.: An efficient U-Net framework for lung nodule detection using densely connected dilated convolutions. J. Supercomput. 78(2), 1602–1623 (2022)
50. Suzuki, K., Otsuka, Y., Nomura, Y., Kumamaru, K.K., Kuwatsuru, R., Aoki, S.: Development and validation of a modified three-dimensional U-Net deep-learning model for automated detection of lung nodules on chest CT images from the lung image database consortium and Japanese datasets. Acad. Radiol. (2020)

Chapter 12

Enhancement and Gray-Level Optimization of Low Light Images Aashi Shrivastava and M. P. Parsai

Abstract The paper elaborates the transformations needed to enhance hazy, low light images. Low light hides the necessary details of the image and concentrates the image frequencies in a darker band. Transforming the image on a logarithmic scale drastically increases its white balance, which in turn creates the need to refine the black region of the image; this is performed by transforming the image with a Laplacian model. These mathematical models are applied consecutively to generate an optimally refined image.

12.1 Introduction An image is a two-dimensional representation of a three-dimensional object. The amplitude of this image, defined as a function f (x, y), gives the intensity levels of the image [1]. An image conveys a substantial amount of information quickly through its visual representation of the pictured object's details; thus, the use of images to procure data is significant in engineering, research, marketing, the medical sciences and other fields. But the data input to a system is rarely pure and is often subject to noise, and so it is with images. There are multiple ways to remove this noise and extract useful data from the image. The sequence of processes used to transform the image so that it can deliver vital information is called image processing. Image processing involves calculations that transform the image, such as rotation, brightness adjustment, contrast adjustment, colour levelling, image enhancement, noise deletion and many other operations. This finds multiple uses in remote sensing, where suspended particles in the atmosphere such as dust and water, along with reflectivity and low light, subject the captured image to grain, noise and excessively dark and light bands, which necessarily requires multiple image processing operations. Here, one such method to overcome the darker A. Shrivastava (B) · M. P. Parsai Jabalpur Engineering College Jabalpur, Jabalpur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_12


regions is the logarithmic transformation of the image; to find details of the image in excessively bright regions, the Laplacian transformation is suggested.

12.2 Related Work Image enhancement has been a continuous area of development due to the varied uses found for it in a number of engineering applications. The model suggested by Junyi Xie et al. [2] deals with semantically guided low light image enhancement; it works for both RGB and grayscale images and avoids unnecessary enhancements that would bring new challenges. Srinivas et al. [3] focus on low contrast image enhancement using spatial contextual similarity histogram computation and colour reconstruction, and propose colour restoration and contrast enhancement techniques. For correcting low illumination images [4], an image enhancement technique is used which applies a 'k' factor to adaptively adjust the luminance of the image. To deal with the haziness and limited luminance of underwater images, an enhancement model [5] based on local network modelling is proposed. The work of Iqbal et al. [6] gives an image enhancement technique using low- and high-frequency decomposition, colour balancing and luminance balancing. Song et al. [7] propose a grayscale-inversion and rotation invariant texture description which focuses on processing the local gradient patterns found extensively in many grayscale images. To compute and process the image with an effective level of brightness, a logarithmic transformation [8] is used which increases the white aspect of the image; this model uses a multiplying factor with the logarithmic transform to increase the white balance. In a grayscale image, excessive expansion of the white-scale histogram hides details in the image, which then requires processing the image toward the black-scale frequencies. This is done by applying the Laplacian model as proposed by Vimal et al. [9].

12.3 Image Intensity Transformation Techniques

An image of an object is made up of a number of pixels, whose values determine the intensity of the image. Transforming the intensities of an image therefore means varying these pixel values. The values are related by the expression s = f(r), where r and s are the values of a pixel before and after processing, respectively. As we are dealing with digital quantities, the values of the transformation function are stored in a one-dimensional array and the mapping of pixels from r to s is implemented through it. For an 8-bit environment, the table containing the values of f has 256 entries, indexed from 0 to 255. Three types of functions are primarily used for image enhancement: linear transformations (negative and identity), power law transformations (nth power and nth root) and log transformations (log and inverse log) [10].
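Because an 8-bit transformation is fully specified by its 256-entry table of f, it can be built once as a lookup table and applied to every pixel by indexing. The following NumPy sketch is illustrative only (the chapter's own experiments use MATLAB); `apply_lut` is a hypothetical helper name.

```python
import numpy as np

def apply_lut(image, f):
    """Apply an intensity transformation s = f(r) through a 256-entry lookup table."""
    # Build the table once: one output value for each 8-bit input level 0..255.
    lut = np.array([f(r) for r in range(256)], dtype=np.uint8)
    # Fancy indexing maps every pixel value r to lut[r] in a single operation.
    return lut[image]

img = np.array([[0, 128, 255]], dtype=np.uint8)
identity = apply_lut(img, lambda r: r)        # identity transformation
negated = apply_lut(img, lambda r: 255 - r)   # negative transformation
```

Any of the transformations below (negative, log, power law) can be passed in as f in the same way.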

12 Enhancement and Gray-Level Optimization of Low Light Images


12.3.1 Linear Transformation

This type of transformation uses a linear relationship to obtain the intensity level of each output pixel from the intensity level of the corresponding input pixel, s = f(r). The identity transformation maps each pixel to itself; the negative transformation is another example of a linear transformation.

Negative Transformation: For an 8-bit gray input image, this transformation subtracts the pixel intensity level from 255 and reproduces the result as output. Mathematically, s = f(r) = 255 − r. This simply means that an input value of 0 (black) gets mapped to 255 (white) and vice versa. Assuming the input pixels are from an 8-bit grayscale colour space, 0 ≤ r ≤ 255, hence 0 ≥ −r ≥ −255, 255 ≥ 255 − r ≥ 0, and therefore 0 ≤ s ≤ 255.
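The negative transformation s = 255 − r can be written directly on the pixel array. This NumPy fragment is an illustration (the chapter's own processing is done in MATLAB):

```python
import numpy as np

def negative(image):
    """Negative transformation for an 8-bit grayscale image: s = 255 - r."""
    return 255 - image.astype(np.uint8)

img = np.array([[0, 100, 255]], dtype=np.uint8)
out = negative(img)  # black (0) maps to white (255) and vice versa
```

Applying the transformation twice recovers the original image, since 255 − (255 − r) = r.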

12.3.2 Logarithmic Transformation

The general form of the log transformation is s = c ∗ log(1 + r), where c is a constant and r ≥ 0. The output value is plotted on the y-axis and the input value on the x-axis, as shown in Fig. 12.1. The transformation maps a narrow range of low intensity values in the input onto a wider range of output levels. The compression and spreading of intensity levels on the logarithmic scale is quite pronounced, and the versatility it lacks is supplied by the power law transformation discussed below. More details on the logarithmic model used for image enhancement may be found in [8].
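A sketch of the log transformation s = c ∗ log(1 + r). The default c = 255/log(256) is an illustrative choice that maps the 8-bit input range back onto 0..255; it is not a value prescribed by the chapter.

```python
import numpy as np

def log_transform(image, c=None):
    """Log transformation s = c * log(1 + r); expands the dark range."""
    r = image.astype(np.float64)
    if c is None:
        c = 255.0 / np.log(256.0)  # maps r = 255 back to s = 255
    return np.clip(c * np.log1p(r), 0, 255).round().astype(np.uint8)

img = np.array([[0, 10, 255]], dtype=np.uint8)
out = log_transform(img)  # 0 stays 0, 255 stays 255, the dark value 10 is lifted
```

Larger values of c push more of the output towards white, which is exactly the over-brightening the proposed model later compensates for.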

12.3.3 Power Law Transformation

The power law takes the basic form [10] s = cr^a, where c and a are positive constants. The plot for various values of 'a' is shown in Fig. 12.2; the change in the slope of the curve for different values of 'a' determines the intensity of the output pixel. In applications dealing with low contrast images, it is assumed that the peaks of background and foreground have almost merged together. Suppose rmax is the major peak of the frequency histogram. Contrast stretching of the histogram is then obtained by choosing a < 1 for darker images [9]. Let 'm' be the slope of the gamma transformation function; we then need the value of 'a' that maximizes m at r = rmax. Let N be the number


Fig. 12.1 Some basic intensity transformations: log and inverse log

of grayscale levels in the colour space we are dealing with. Since the values of r and s range from 0 to N − 1, c can be expressed as c = k(N − 1)^(1−a), where k is a positive constant. Now, m = ds/dr = kar^(a−1). To maximize m, we take the first-order derivative with respect to a and set it to zero:

dm/da = k (rmax/(N − 1))^(1−a) − ka ln(rmax/(N − 1)) (rmax/(N − 1))^(1−a) = 0.

Thus, a = 1/ln(rmax/(N − 1)).

Hence, for a given value of rmax, the highest peak in the histogram, the corresponding value of 'a' that yields the maximum extent of contrast in the image can be determined. The importance of the gamma transform lies in how the shape of the gamma curve changes mathematically: for a > 1, the intensity levels of the output image are the complete opposite of those obtained with a < 1.


Fig. 12.2 Plot of the equation s = cr^a for various values of a

12.4 Proposed Model

Low light images contain large dark regions with high concentrations of black-level intensity values, and these dark regions often hide essential details. To overcome this, the logarithmic model is applied to transform the pixel intensities of the image. The log model has a multiplying factor c, so the output pixel intensities increase markedly, as shown in Fig. 12.1. This extensive increase of white pixel regions produces a foggy effect in the image, as shown in Fig. 12.3. The problem is overcome by applying a limited degree of log transformation followed by a power law transformation. The output intensities are then raised slowly to an effective value, between the overexposed and under-exposed regions that the logarithmic transformation alone would have produced. The model thus uses a two-step approach combining the benefit of the logarithmic model, the quick variation of output due to the slope of the log curve, with that of the gamma transformation, the versatility to slowly change the slope of the gamma curve towards the required output pixel intensities. In this way the optimal intensity levels, lying between the under- and overexposed pixel intensities, are obtained. The use of the gamma transform is found to be the decisive step in obtaining the optimized image. As shown in Fig. 12.3, the intermediate images are often under-exposed or overexposed; to find the right balance of output intensity levels, the gamma function is used. To optimize an under-exposed image, we use a < 1 in the power function, as shown in Fig. 12.4.
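The two-step model can be sketched as a log step followed by a gamma step. The constants c and a below are illustrative placeholders; in the chapter they are chosen per image so that the result lands between the under- and overexposed extremes:

```python
import numpy as np

def enhance_low_light(image, c=1.2, a=0.5):
    """Two-step sketch: log transform first, then gamma correction."""
    r = image.astype(np.float64) / 255.0
    # Step 1: log transform; log1p(r)/log(2) maps [0, 1] back onto [0, 1].
    intermediate = np.clip(c * np.log1p(r) / np.log(2.0), 0.0, 1.0)
    # Step 2: gamma transform (a < 1 for an under-exposed intermediate,
    # a > 1 for an overexposed one).
    refined = np.clip(intermediate ** a, 0.0, 1.0)
    return (refined * 255.0).round().astype(np.uint8)

dark_img = np.full((2, 2), 15, dtype=np.uint8)  # a uniformly dark toy image
out = enhance_low_light(dark_img)               # every pixel is brightened
```

The design point of the two steps is that the log stage does the coarse lifting of the dark range while the gamma stage provides the fine, slope-controlled adjustment.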


Fig. 12.3 Log transform applied to the original image, producing under-exposed and overexposed regions in the image

The results of applying the gamma transform are shown in Fig. 12.4. With a gamma factor of 0.3, the under-exposed image was optimized to reveal its details without compromising the background contrast or producing a blurred image. The intermediate overexposed image, which resulted from applying higher values of the log transform, is further optimized using a > 1 in the power function of the gamma transformation, as shown in Fig. 12.5. With a gamma factor of 4.0, the overexposed image was optimized to reveal its details with higher contrast.

12.5 Experimental Results and Discussion

The proposed two-step model uses a logarithmic transform with an appropriate multiplying factor c to obtain an intermediate image. This intermediate image is then transformed with a gamma transformation at a suitable value of 'a' to reach an optimally refined image. The two-step model is represented pictorially in Fig. 12.6. The mathematical transformations were applied to the images using MATLAB. The image enhancement steps of the proposed model, applied to a low light image, are shown in Fig. 12.7; the final optimized image reveals the important details captured in the image. The image processing operations were applied to three sets of images under varying degrees of low light, and the final results show the optimal intensity images obtained under these conditions, which quite effectively reveal the essential details present in the pictures, as shown in Figs. 12.8, 12.9 and 12.10. The histogram distribution


Fig. 12.4 a Under-exposed image of forest. b–d Results of applying gamma transform with c = 1 and a = 0.6, 0.4, 0.3, respectively

for each of the three deep low light images shown in Fig. 12.10 is presented in Fig. 12.11. The variation of gray intensity levels defines the black and white regions of the image. Compared with the intermediate images, which overexpose the white levels and produce hazy results, the histogram of the optimally enhanced image shows that the pixels now span the complete band of the gray-level intensity spectrum; the skewness seen in the histogram of the optimal intensity image brings out the details more effectively, as shown in Fig. 12.11c.


Fig. 12.5 a Overexposed image of a tree. b–d Results of applying gamma transform with c = 1 and a = 2.0, 3.0, 4.0, respectively
Fig. 12.6 Flowchart of proposed model


Fig. 12.7 Visual results of low light image subjected to proposed model

Fig. 12.8 Image processing of low light image


Fig. 12.9 Image processing of medium low light image

Fig. 12.10 Image processing of deep low light image

Fig. 12.11 Histogram distribution of the deep low light image


12.6 Conclusion

This paper addresses the need to reveal the details of images taken under varied low light conditions. Such images are dominated by the darker regions of the intensity spectrum. Various image-enhancing methods are available for these low light images; one such method is the logarithmic transformation, which is found to often produce under-exposed or overexposed images. On these images a secondary transformation step, the power law transformation, is performed. The power law, represented by s = cr^a, is applied with varied values of a > 1 or a < 1, for overexposed and under-exposed images respectively, to finally obtain an optimal image. The study presented in this paper may suggest further ideas for the processing of low light images in future work.

References

1. Sundararajan, D.: Digital Image Processing: A Signal Processing and Algorithmic Approach. Springer Nature Singapore Pte Ltd. (2017)
2. Xie, J., Bian, H., Wu, Y., Zhao, Y., Shana, L., Hao, S.: Semantically guided low-light image enhancement. Pattern Recogn. Lett. 308–314 (2020)
3. Srinivas, K., Bhandari, A.K., Singh, A.: J. Franklin Inst. 353, 13941–13963
4. Wang, W., Chen, Z., Yuan, X., Wu, X.: Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 496, 25–41 (2019)
5. Fu, X., Cao, X.: Underwater image enhancement with global–local networks and compressed-histogram equalization. Signal Process. Image Commun. 86, 115892 (2020)
6. Iqbal, M., Ali, S.S., Riaz, M.M., Ghafoor, A., Ahmad, A.: Color and white balancing in low-light image enhancement. Optik Int. J. Light Electron. Opt. 209, 164260 (2020)
7. Song, T., Xin, L., Gao, C., Zhang, G., Zhang, T.: Grayscale inversion and rotation invariant texture description using sorted local gradient pattern. IEEE Signal Process. Lett. 25(5), 625–629 (2018)
8. Pătraşcu, V.: Gray level image enhancement method using the logarithmic model. Acta Tehnica Napocensis Electron. Telecommun. 43 (2003)
9. Vimal, S.P., Thiruvikraman, P.K.: Automated Image Enhancement Using Power Law Transformations, vol. 37, Part 6, pp. 739–745, Dec 2012
10. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn., p. 108. Pearson Prentice Hall, Upper Saddle River, NJ (2008)

Chapter 13

Prediction of Liver Disease Using Machine Learning Approaches Based on KNN Model

Souptik Dutta, Subhash Mondal, and Amitava Nag

Abstract The liver, the most crucial internal organ of the human body, performs functions of metabolism control and food digestion. Liver disorders can sometimes prove fatal, and appropriate treatment at the right time can save many lives. Research has been conducted to predict and diagnose liver diseases for quite a while, and Machine Learning (ML) tools have proven very effective. We have considered eight ML models for this study, working on the Liver Patient Dataset, which contains more than 30k instances, to accurately predict liver diseases; the study includes some boosting algorithms as well. For a proper judgment of the performance of the proposed models, commonly used performance metrics such as accuracy, RoC-AuC, F1 score, precision, and recall have been used. We have inferred that the k-Nearest Neighbor (KNN) model produced the most accurate results at 92.345%. To ensure that the models are not overfitted, they are k-fold cross-validated, and hence the standard deviation of each model is low. The standard deviation of KNN is 0.815, and its False Negative (FN) rate comes out to be 3.7%.

S. Dutta · S. Mondal (B) · A. Nag
Department of Computer Science and Engineering, Central Institute of Technology Kokrajhar, Kokrajhar, Assam 783370, India
e-mail: [email protected]
S. Dutta e-mail: [email protected]
A. Nag e-mail: [email protected]
S. Mondal
Department of Computer Science and Engineering, Meghnad Saha Institute of Technology, Kolkata, West Bengal 700150, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_13


13.1 Introduction

The liver is undoubtedly the most important of all the organs and organ systems that constitute the human body. It plays a very significant role in ensuring the proper functioning of the body, including removing toxic substances, digesting food, and maintaining metabolism. Keeping the liver healthy should be a person's foremost priority. Despite this, the liver falls prey to various severe ailments like liver cancer, Gilbert's syndrome, etc. According to many reports, liver and biliary diseases affect approximately 1 in 10 Americans [1]. What is alarming is that a person who has a liver disease may not be aware of it, since at times there are hardly any symptoms. The burden of liver disease in India is significant: India alone contributed around 18.3% of the two million global liver disease-related deaths in 2015 [2]. With technology advancing each day, medicine and health care have made rapid strides, which is a ray of hope for reducing deaths every year. Machine Learning (ML), a subdomain of Artificial Intelligence (AI), is a potent tool, and more so in the field of health care. Put simply, a system is trained with some dataset from which it acquires knowledge; this is later used to make valid predictions and gain valuable insights on other data presented to the system. The Liver Patient Dataset, with a sufficiently large number of instances, has been considered for this study. To infer whether a given patient is diagnosed with a liver ailment or not, some well-known ML algorithms were trained on the dataset, namely k-Nearest Neighbor (KNN), Support Vector Machine (SVM), Logistic Regression (LR), Bernoulli Naïve Bayes (BNB), and Gaussian Naïve Bayes (GNB). In addition, several boosting algorithms, Xtreme Gradient Boost (XGB), Gradient Boost (GB), and AdaBoost (AB), were also implemented.
Upon observation, it was found that the accuracy for KNN was 92.35%, the best among all the models. The study is organized as follows: related work is discussed in Sect. 13.2 and the proposed methodology in Sect. 13.3. The comparative result analysis is discussed in Sect. 13.4, followed by Sect. 13.5, which concludes the study.

13.2 Related Work

Studies on liver patients have been carried out recently, albeit on a small scale. In [3], the authors incorporated LR and SVM algorithms on a dataset involving over 500 patients, aiming to enhance the quality of prediction and classification; the model was 75% accurate. Ambesange et al. [4] extensively used the KNN classification method on the Indian Liver Patient Dataset; hyperparameter tuning was implemented to increase the accuracy, and the model was impressively accurate at 91%. The authors used models like LR, KNN,


and SVM in [5] to predict liver disease from the patients' dataset; tenfold cross-validation was used for training and testing, and KNN proved effective with an accuracy of 73.97%. In [6], the authors focused mainly on LR, along with KNN, SVM, NB, and Artificial Neural Networks (ANN), to gain insights and classify liver patients; applying the k-fold technique, LR turned out to be the most effective algorithm with an accuracy of 74%. Auxilia et al. [7] used various ML classification techniques like SVM, ANN, NB, Decision Tree (DT), and Random Forest (RF) in their study; DT was the best algorithm, 81% accurate and significantly ahead of the other methods. In [8], the authors considered techniques such as Light Gradient Boosting Machine (LGBM), Multilayer Perceptron (MLP), and Stacking along with commonly used robust algorithms like XGB, DT, KNN, LR, RF, and GB. The performance of RF proved to be the best, and interestingly, the accuracy and all the other performance metrics yielded the same value of 0.88; XGB and LGBM also came very close in accuracy. In [9], the authors worked mainly with KNN and NB algorithms in their predictive analysis and obtained an AUC value of 0.725 for NB. The authors in [10] recommend CatBoost (CB) as a very effective boosting algorithm, after their study on the ILPD dataset revealed that CB outperformed GB and LGBM with an accuracy of 86.8%. The work in [11], aimed at a more accurate diagnosis using LR, NB, and KNN with an AI-dependent classifier, obtained a 75% accurate result. Newaz et al. [12] performed their study on the ILPD dataset with ED-SVM and CD-SVM classifiers; the CD-SVM classifier performed better and yielded an accuracy of 67.40%. The authors in [13] researched the most effective ML technique for diagnosing liver disease using LR, SVM, RF, MLP, KNN, LGBM, XGB, and ET.
Almost all the algorithms performed well on the dataset used, but ET produced outstanding results with 89% accuracy and 93% precision. In [14], the study was conducted using algorithms like LR, DT, RF, NB, KNN, LGBM, XGB, GB, AB, and Stacking; in this case, RF produced the most accurate results with 63% accuracy. The authors in [15] made a predictive analysis using the W-LR-XGB technique while also using LR, NB, KNN, and SVM; this research was also fruitful, as the XGB classifier achieved an impressive 83% accuracy. The authors in [16] used supervised learning techniques like LR, RF, DT, NB, and KNN; upon evaluating all the models, they obtained almost 72% accuracy with RF.

13.3 Proposed Methodology

This section covers all the steps in predicting patients' liver disease. First, data pre-processing is performed on the dataset. After that, the various ML models are applied to train and test on the data and reach a final solution. The dataset used here is sufficiently large and diverse, so there was no need to apply the models to a separate dataset for validation. The following


Fig. 13.1 Workflow diagram of the proposed model

subsections contain detailed discussions about all the steps involved. The workflow diagram of this study is presented at a high level in Fig. 13.1.

13.3.1 Dataset Acquisition

The study is conducted on the Liver Disease Patient Dataset [17]. This is a moderately imbalanced but very diverse dataset comprising 30,691 instances with 10 features and one binary outcome. Among the features, nine are numerical and one is categorical. The information and description of the dataset, and the graphical representation of the features and the label, are depicted in the


Table 13.1 Features description with null values

#  | Column/Attribute                     | Null values count | Data type
1  | Age of the patient                   | 2   | float64
2  | Gender of the patient                | 902 | object
3  | Total bilirubin                      | 648 | float64
4  | Direct bilirubin                     | 561 | float64
5  | Alkphos alkaline phosphatase         | 796 | float64
6  | Sgpt alamine aminotransferase        | 538 | float64
7  | Sgot aspartate aminotransferase      | 462 | float64
8  | Total proteins                       | 463 | float64
9  | ALB albumin                          | 494 | float64
10 | A/G ratio albumin and globulin ratio | 559 | float64
11 | Result                               | 0   | int64

figures below. Table 13.1 represents all attributes’ descriptions, data types, and the number of null values in the original dataset.

13.3.2 Data Pre-processing

The most crucial step after data acquisition is data pre-processing. A prediction model works efficiently only when it is fed good-quality data; accuracy suffers if the dataset is filled with irrelevant data, missing values, and outliers. We have used null value handling, feature encoding, and data normalization as part of the pre-processing. The correlation heat map among all the features is shown in Fig. 13.2, and the data distribution of each feature, checked during the data analysis phase, is depicted in Fig. 13.3.

Handling missing or null values: The working dataset had many NaN/null values throughout the columns. These were imputed with mean values for the respective columns.

Feature encoding: The column 'Gender of the patient' contains categorical values. Label encoding has been applied, assigning the value 0 to Female and 1 to Male.

Data normalization: Features prone to outliers and those on varying scales had to be rescaled into a specific range; the Standard Scaler function has been used to scale the data values.

The dataset was imbalanced owing to the unequal counts of the target outcome values; given the large number of instances, no data balancing technique was applied. Finally, we partitioned the pre-processed dataset in the proportion 0.90:0.10 for the training and testing of the ML classification models.
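The pre-processing steps above can be sketched on toy values. Mean imputation, the 0/1 gender encoding and the 90:10 split follow the text; the min-max rescaling shown here is one illustrative way of bringing a feature into a fixed range, not the study's exact scaler:

```python
import numpy as np

# Toy stand-in for one numeric feature column with missing entries.
col = np.array([45.0, np.nan, 60.0, 33.0])

# 1. Null handling: impute missing entries with the column mean.
col[np.isnan(col)] = np.nanmean(col)

# 2. Feature encoding: Gender mapped to 0 (Female) / 1 (Male).
gender = ['Male', 'Female', 'Female', 'Male']
gender_enc = np.array([1 if g == 'Male' else 0 for g in gender])

# 3. Normalization: rescale the feature into a fixed range.
scaled = (col - col.min()) / (col.max() - col.min())

# 4. Partition the pre-processed data 0.90:0.10 for training and testing.
idx = np.random.default_rng(0).permutation(len(col))
split = int(0.9 * len(col))
train_idx, test_idx = idx[:split], idx[split:]
```

On the real dataset the same four operations are applied column by column before any model sees the data.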


Fig. 13.2 Feature correlation heat map for the data

Fig. 13.3 Feature data values distribution


Table 13.2 Comparisons of different models along with their performance metrics without hyperparameter tuning

Model | A      | MA     | SD    | RA    | P     | R     | F     | CK
KNN   | 92.345 | 91.481 | 0.815 | 0.905 | 0.857 | 0.865 | 0.861 | 0.808
XGB   | 87.980 | 87.151 | 0.633 | 0.799 | 0.914 | 0.620 | 0.739 | 0.664
GB    | 87.329 | 88.223 | 0.849 | 0.788 | 0.908 | 0.599 | 0.722 | 0.644
AB    | 77.980 | 79.715 | 0.833 | 0.689 | 0.627 | 0.488 | 0.549 | 0.406
SVM   | 73.713 | 72.611 | 0.361 | 0.535 | 0.658 | 0.087 | 0.153 | 0.095
LR    | 72.476 | 72.025 | 0.483 | 0.556 | 0.495 | 0.183 | 0.267 | 0.140
BNB   | 65.147 | 65.494 | 0.807 | 0.605 | 0.394 | 0.502 | 0.442 | 0.194
GNB   | 55.147 | 56.015 | 0.800 | 0.678 | 0.375 | 0.957 | 0.539 | 0.240

Working with a diverse dataset like this, hyperparameter tuning was not considered necessary since that would not have altered our results to a great extent. Instead, k-fold cross-validation is used to overcome the problem of overfitting.
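The k-fold procedure mentioned above can be sketched as an index generator; each model is then fitted k times, and the per-fold accuracies yield the mean accuracy (MA) and standard deviation (SD) reported in Table 13.2. This is an illustrative sketch, not the authors' code:

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        # Training set = all samples outside the i-th validation fold.
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

splits = list(kfold_indices(100, k=10))  # 10 disjoint validation folds
```

Because every sample appears in exactly one validation fold, the averaged fold accuracy is a less optimistic estimate than a single train/test split, which is why it guards against overfitting.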

13.3.3 Model Training

In this study, eight distinct ML models were trained on the pre-processed dataset using 90% of the total data. The models were KNN, SVM, LR, XGB, GB, AB, BNB, and GNB. To test the effectiveness of the models, several performance metrics were utilized: accuracy (A), k-fold mean accuracy (MA), RoC-AuC (RA), precision (P), recall (R), F1-score (F), standard deviation (SD), and Cohen kappa (CK). The boosting algorithms performed on expected lines and produced the desired results. On detailed analysis of the performance of all the models, the KNN classifier outperformed the others, including the boosting algorithms, with 92.34% prediction accuracy. The XGB and GB classifiers came impressively close, with 87.98% and 87.33% accuracy, and the AB classifier also produced promising results at 77.98%. The detailed test results are given in Table 13.2. The confusion matrices of the best-performing models are presented as heat maps in Fig. 13.4, and the RoC-AuC curves for the models are shown in Fig. 13.5.
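The core of the best-performing model can be sketched as a distance-based majority vote. The toy two-cluster data below stands in for the scaled liver features and is purely illustrative, not the study's implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Minimal k-Nearest Neighbor classifier: majority vote among the
    k closest training points under Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)      # distance to every training point
        nearest = y_train[np.argsort(d)[:k]]         # labels of the k nearest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, size=(20, 2)),   # class-0 cluster
               rng.normal(1.0, 0.05, size=(20, 2))])  # class-1 cluster
y = np.array([0] * 20 + [1] * 20)
acc = (knn_predict(X, y, X, k=5) == y).mean()
```

Because KNN votes over raw distances, the feature scaling step of Sect. 13.3.2 directly affects which neighbours are "near", which is one reason the scaled dataset suits this model.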

13.4 Comparative Result Analysis

This section puts forward a comparative analysis of our results against the existing literature, considering the different performance metrics presented in the table below. It is to be noted that the blank regions indicate that the corresponding


Fig. 13.4 Confusion matrices for the top performing models (KNN, XGB, AB, GB)

Fig. 13.5 RoC-AuC curve for all the deployed models


Table 13.3 Comparative results analysis with the proposed model

Ref. #   | Models used                                       | A     | P     | R     | F     | RA
[3]      | LR, SVM                                           | 75.04 | 0.77  | 0.79  |       | 0.95
[4]      | KNN                                               | 91.00 |       |       |       |
[5]      | LR, KNN, SVM                                      | 73.97 |       |       |       |
[6]      | LR, KNN, SVM, NB, ANN, DT C4.5                    | 74.00 | 0.72  | 0.74  | 0.69  |
[7]      | DT, RF, SVM, ANN, NB                              | 81.00 |       |       |       |
[8]      | XGB, DT, KNN, LR, RF, GB, AB, LGBM, MLP, Stacking | 86.00 | 0.86  | 0.86  | 0.86  |
[9]      | NB, KNN                                           |       |       |       |       | 0.72
[10]     | CB, GB, LGBM                                      | 86.80 | 0.90  | 0.81  | 0.85  |
[11]     | LR, NB, KNN                                       | 75.00 |       |       |       |
[12]     | ED-SVM, CD-SVM                                    | 67.40 |       |       |       |
[13]     | LR, SVM, RF, MLP, KNN, ET, XGB, LGBM              | 88.94 | 0.93  | 0.84  | 0.88  |
[14]     | LR, DT, RF, NB, KNN, LGBM, XGB, GB, AB, Stacking  | 63.00 | 0.64  | 0.63  | 0.63  |
[15]     | LR, NB, KNN, SVM, XGB                             | 83.00 | 0.83  | 0.82  | 0.81  |
[16]     | LR, RF, DT, NB, KNN                               | 71.90 | 0.62  | 0.71  | 0.61  | 0.70
Proposed | KNN                                               | 92.35 | 0.857 | 0.865 | 0.861 | 0.905

authors have not claimed any fruitful results of that parameter. The claimed result or the model displaying the best result has been highlighted in bold and represented in Table 13.3.

13.5 Conclusion

This study has performed binary classification of liver patient disease using eight different ML models with tenfold cross-validation. KNN showed the best performance, with an accuracy of more than 92%, a RoC-AuC score above 0.90, and a False Negative (FN) rate of only 3.7%. The boosting algorithms XGB and GB also performed well enough, each being over 87% accurate. These models stand out from the existing literature through the approaches taken to increase the accuracy and other performance metrics; hyperparameter tuning was not applied, and k-fold cross-validation is instead used to overcome the problem of overfitting. This work will be further expanded with hyperparameter tuning on balanced and unbalanced datasets, considering other ML and ANN models for more accurate prediction


results. Also, ensemble approaches will be applied, and the models' performance will be compared to reach the most accurate results.

References

1. Liver Disease Facts. SLU Care Physician Group (2022). [Online]. Available: https://slucare.com/gastroenterology-hepatology/liver-center/liver-disease-facts. Accessed 15 Oct 2022
2. Mondal, D.: Epidemiology of Liver Diseases in India. PMC PubMed Central (2022). [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8958241/. Accessed 15 Oct 2022
3. Geetha, C., Arunachalam, A.: Evaluation based approaches for liver disease prediction using machine learning algorithms. In: 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India (2021)
4. Ambesange, S., Nadagoudar, R., Uppin, R., Patil, V.: Liver diseases prediction using KNN with hyper parameter tuning techniques. In: 2020 IEEE Bangalore Humanitarian Technology Conference (B-HTC), Vijayapur, India (2020)
5. Thirunavukkarasu, K., Singh, A.S., Irfan, M., Chowdhury, A.: Prediction of liver disease using classification algorithms. In: 2018 4th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India (2018)
6. Adil, S.H., Ebrahim, M., Raza, K., Azhar Ali, S.S.: Liver patient classification using logistic regression. In: 2018 4th International Conference on Computer and Information Sciences (ICCOINS), Kuala Lumpur, Malaysia (2018)
7. Auxilia, L.A.: Accuracy prediction using machine learning techniques for Indian patient. In: 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India (2018)
8. Kuzhippallil, M.A., Joseph, C., Kannan, A.: Comparative analysis of machine learning techniques for Indian liver disease patients. In: 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India (2020)
9. Hartatik, H., Badri Tamam, M., Setyanto, A.: Prediction for diagnosing liver disease in patients using KNN and Naïve Bayes algorithms. In: 2020 2nd International Conference on Cybernetics and Intelligent System (ICORIS), Manado, Indonesia (2020)
10. Afreen, N., Patel, R., Ahmed, M., Sameer, M.: A novel machine learning approach using boosting algorithm for liver disease classification. In: 2021 5th International Conference on Information Systems and Computer Networks (ISCON), Mathura, India (2021)
11. Srivastava, A., Vineeth Kumar, V., Mahesh, T.R., Vivek, V.: Automated prediction of liver disease using machine learning (ML) algorithms. In: 2022 Second International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT), Bhilai, India (2022)
12. Newaz, A., Ahmed, N., Shahriyar Haq, F.: Diagnosis of liver disease using cost-sensitive. In: 2021 International Conference on Computational Performance Evaluation (ComPE), Shillong, India (2021)
13. Minnoor, M., Baths, V.: Liver disease diagnosis using machine learning. In: 2022 IEEE World Conference on Applied Intelligence and Computing (AIC), Sonbhadra, India (2022)
14. Gupta, K., Jiwani, N., Afreen, N., Divyarani, D.: Liver disease prediction using machine learning classification techniques. In: 2022 IEEE 11th International Conference on Communication Systems and Network Technologies (CSNT), Indore, India (2022)
15. Zhao, R., Wen, X., Pang, H., Ma, Z.: Liver disease prediction using W-LR-XGB algorithm. In: 2021 International Conference on Computer, Blockchain and Financial Development (CBFD), Nanjing, China (2021)


16. Dwivedi, D., Garg, S., Jayaraman, R.: Liver failure prediction using supervised machine learning. In: 2022 7th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India (2022)
17. Liver Disease Patient Dataset 30K Train Data. Kaggle (2020). [Online]. Available: https://www.kaggle.com/datasets/abhi8923shriv/liver-disease-patient-dataset. Accessed 15 Aug 2022

Chapter 14

Modelling of Embedded Cracks by NURBS-Based Extended Isogeometric Analysis

Vibhushit Gupta, Sahil Thappa, Shubham Kumar Verma, Sanjeev Anand, Azher Jameel, and Yatheshth Anand

Abstract The current work employs extended isogeometric analysis (XIGA) for the modelling of embedded discontinuities in engineering components. The use of CAD basis functions both for the approximation and for defining the geometry makes XIGA a suitable technique for the efficient analysis of cracked domains. The employment of enrichment functions, i.e. Heaviside and crack-tip functions, in XIGA eliminates grid-related issues like remeshing, conformal meshing and mesh distortion. Gauss quadrature rules are also modified in extended isogeometric analysis to improve the accuracy of integration. Further, mixed-mode stress intensity factors (SIFs) have been calculated using the domain-based interaction integral method. Based on this model, a two-dimensional homogeneous plate containing a centre crack has been considered for the study, and the effect of the same crack at different inclination angles has been examined. The investigation reports some key findings related to the variation in stress intensity factors, which have also been validated against the existing literature.

V. Gupta · S. Thappa · S. K. Verma · S. Anand · Y. Anand (B) SMVDU, Katra, J&K, India e-mail: [email protected] A. Jameel (B) NIT, Srinagar, J&K, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_14


14.1 Introduction

Owing to increasing demands in the development of mechanical components and structures, cracks may initiate or grow under complex and varied loadings. The presence of cracks severely affects the working life and strength of structures and leads to failures. These failures may occur regardless of the elasticity theory and strength of the material. Thus, the study and analysis of cracks is a much-needed field in fracture mechanics. To analyse such problems, various numerical approaches have been developed, such as finite element analysis [1, 2], the boundary element method [3], the extended finite element method (XFEM) [4, 5], isogeometric analysis (IGA) [6, 7], meshfree methods [8, 9], coupled techniques [10, 11], the scaled boundary finite element method [12] and extended isogeometric analysis [10, 13–15]. However, the remeshing effort and associated cost of traditional FEA reduce the accuracy of the solution. Fractures have therefore been simulated efficiently by introducing enrichment functions into the standard FEA approximation, which provide additional degrees of freedom (DOFs). In XFEM and the coupled FE–meshfree method, however, the description of the geometry and of the solution require separate basis functions. Hughes et al. [16] suggested the concept of IGA, in which the unknown solution field and the geometry are described with a common basis function, i.e. non-uniform rational B-splines (NURBS). The use of a common basis function forges a tight link between design and analysis by generating accurate mesh geometries with highly precise and well-converged results. Over the years, this technique has been applied efficiently and successfully to various engineering problems, such as structures [17, 18], fluids [19, 20], thermal analysis [21–23], fluid–structure interaction [24], biomedical applications [25], electromagnetics [26] and many more.
For fractures, too, IGA has proved to be a potent numerical technique [11, 27, 28]. However, near cracks the knots are repeated, which lowers the continuity, whereas the use of higher-order approximation functions can help reduce such issues. In the past few years, the IGA technique has been extended by incorporating the partition-of-unity concept, yielding what is termed extended isogeometric analysis (XIGA) [29]. Various researchers have used the XIGA concept for solving different sets of fracture problems [30–34]. As a result, XIGA has emerged as an accurate technique for solving static fracture problems. On this basis, a NURBS-based XIGA approach is adopted in this study for analysing static cracks at different inclinations. The use of NURBS as a basis function makes the analysis more accurate by resolving issues like interior boundaries and partitioning. Other CAD functions, such as T-splines [35], LR B-splines [36], Bézier-extracted NURBS [37] and PHT-splines [38], have also been utilized for developing the XIGA technique; NURBS is chosen as the basis function for the crack analysis in this study owing to its pervasive use. The study begins with a demonstration of the XIGA approach for analysing static cracks. The next section presents the procedure for evaluating the SIFs, followed by the results and discussion; the final section concludes the study.


14.2 Extended Isogeometric Analysis

In this technique, a refined grid is utilized in the crack regions, whereas a local enrichment approach is adopted for defining the geometry of the discontinuities, independently of the structural grid. In this section, the NURBS-based XIGA technique is discussed with emphasis on the analysis of static centre cracks.

14.2.1 Basis Function

In this subsection, the basic concept of NURBS is presented as the basis function for XIGA. This basis/approximation function is the generalized form of the B-spline function, which is derived from a knot vector of non-decreasing values,

\Xi = \{\xi_1, \ldots, \xi_{n+p+1}\}. \quad (14.1)

Here n denotes the number of basis functions, p the spline order, and \xi_i the ith knot value, which splits the B-spline into a set of subdomains. Knot vectors are categorized into two forms: (i) uniform and (ii) non-uniform. If the distance between consecutive knots is the same throughout, the knot vector is referred to as uniform; otherwise it is non-uniform. In CAD parlance, these knot vectors are also known as patches. A knot vector whose first and last knots are repeated p + 1 times is termed an open knot vector. The B-spline functions are generated by the Cox–de Boor recursion formula [39]. For p > 0,

N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\, N_{i,p-1}(\xi) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\, N_{i+1,p-1}(\xi), \quad (14.2)

and for p = 0,

N_{i,0}(\xi) = \begin{cases} 1 & \text{if } \xi_i \le \xi < \xi_{i+1}, \\ 0 & \text{otherwise}. \end{cases} \quad (14.3)
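As a concrete sketch of Eqs. (14.2)–(14.3), the Cox–de Boor recursion can be implemented directly; the function name and the sample knot vector below are illustrative and not taken from the chapter.

```python
def bspline_basis(i, p, xi, knots):
    """Evaluate the B-spline basis function N_{i,p}(xi) by the
    Cox-de Boor recursion of Eqs. (14.2)-(14.3).
    `knots` is the non-decreasing knot vector Xi; 0/0 terms are treated as 0."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left, right = 0.0, 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (xi - knots[i]) / d1 * bspline_basis(i, p - 1, xi, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = (knots[i + p + 1] - xi) / d2 * bspline_basis(i + 1, p - 1, xi, knots)
    return left + right

# Quadratic basis (p = 2) on the open knot vector {0,0,0,1,2,3,3,3}: n = 5 functions.
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
vals = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
print(sum(vals))  # partition of unity: the values sum to 1
```

The half-open interval in the p = 0 case makes the basis well defined at interior knots; the right end of the last span needs special handling in production code.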


The B-spline basis functions possess some good properties, such as linear independence, partition of unity, i.e. \sum_{i=1}^{n} N_{i,p}(\xi) = 1, non-negativity N_{i,p}(\xi) \ge 0 \;\forall \xi, the variation diminishing property and the Kronecker delta property. Regarding continuity, the B-spline function is C^0 at the boundaries of a patch, whereas C^{p-1} continuity is seen at an interior knot \xi. This continuity is reduced to C^{p-1-k}, where k is the number of times the knot is repeated. Multivariate basis functions are derived as tensor products of univariate functions; the bivariate and trivariate basis functions are given as

N_{i,j}^{p,q}(\xi, \eta) = N_{i,p}(\xi)\, M_{j,q}(\eta), \quad (14.4)

N_{i,j,k}^{p,q,r}(\xi, \eta, \zeta) = N_{i,p}(\xi)\, M_{j,q}(\eta)\, L_{k,r}(\zeta). \quad (14.5)

Further, based on N_{i,p}(\xi), the one-dimensional NURBS function R_i^p(\xi) is developed as

R_i^p(\xi) = \frac{N_{i,p}(\xi)\, w_i}{\sum_{i=1}^{n} N_{i,p}(\xi)\, w_i}. \quad (14.6)

Here the weight is denoted by w_i. Analogously, the two- and three-dimensional NURBS functions are derived as

R_{i,j}^{p,q}(\xi, \eta) = \frac{N_{i,p}(\xi)\, M_{j,q}(\eta)\, w_{i,j}}{\sum_{i=1}^{n} \sum_{j=1}^{m} N_{i,p}(\xi)\, M_{j,q}(\eta)\, w_{i,j}}, \quad (14.7)

R_{i,j,k}^{p,q,r}(\xi, \eta, \zeta) = \frac{N_{i,p}(\xi)\, M_{j,q}(\eta)\, L_{k,r}(\zeta)\, w_{i,j,k}}{\sum_{i=1}^{n} \sum_{j=1}^{m} \sum_{k=1}^{l} N_{i,p}(\xi)\, M_{j,q}(\eta)\, L_{k,r}(\zeta)\, w_{i,j,k}}, \quad (14.8)

where N_{i,p}(\xi) and M_{j,q}(\eta) are the univariate basis functions of orders p and q, respectively. Following Eqs. (14.7) and (14.8), the equations of a surface and a solid are constructed as

Z(\xi, \eta) = \sum_{i=1}^{n} \sum_{j=1}^{m} R_{i,j}^{p,q}(\xi, \eta)\, B_{i,j}, \quad (14.9)

U(\xi, \eta, \zeta) = \sum_{i=1}^{n} \sum_{j=1}^{m} \sum_{k=1}^{l} R_{i,j,k}^{p,q,r}(\xi, \eta, \zeta)\, C_{i,j,k}. \quad (14.10)
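As a sketch of the rational basis of Eq. (14.6), the B-spline values are weighted and normalized. The quadratic knot vector and the circular-arc weights below are illustrative choices, not taken from the chapter.

```python
def bspline_all(p, xi, knots):
    # Evaluate all n B-spline basis functions N_{i,p}(xi) via a Cox-de Boor table.
    n = len(knots) - p - 1
    N = [[0.0] * (len(knots) - 1) for _ in range(p + 1)]
    for i in range(len(knots) - 1):
        N[0][i] = 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    for q in range(1, p + 1):
        for i in range(len(knots) - q - 1):
            a = (xi - knots[i]) / (knots[i + q] - knots[i]) if knots[i + q] > knots[i] else 0.0
            b = (knots[i + q + 1] - xi) / (knots[i + q + 1] - knots[i + 1]) \
                if knots[i + q + 1] > knots[i + 1] else 0.0
            N[q][i] = a * N[q - 1][i] + b * N[q - 1][i + 1]
    return N[p][:n]

def nurbs_basis(p, xi, knots, weights):
    # Rational basis R_i^p(xi) of Eq. (14.6): weighted B-splines, normalized.
    N = bspline_all(p, xi, knots)
    wN = [Ni * wi for Ni, wi in zip(N, weights)]
    W = sum(wN)
    return [v / W for v in wN]

knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # a single quadratic Bezier patch
weights = [1.0, 0.70710678, 1.0]         # weights of a 90-degree circular arc
R = nurbs_basis(2, 0.5, knots, weights)
print(R)
```

With all weights equal to 1, Eq. (14.6) reduces to the plain B-spline basis, which is a useful sanity check on any implementation.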


14.2.2 XIGA Discretization

For the XIGA discretization, consider an arbitrary linear elastic, isotropic, homogeneous, two-dimensional domain \Omega, depicted in Fig. 14.1. Its boundary is partitioned into the traction boundary (\Gamma_t), the displacement boundary (\Gamma_u) and the traction-free crack boundary (\Gamma_c). When the domain \Omega is subjected to mechanical loading, the solution must satisfy the equilibrium condition [40]

\nabla \cdot \sigma + f = 0. \quad (14.11)

In Eq. (14.11), \sigma, \nabla and f denote the Cauchy stress tensor, the divergence operator and the body force vector, respectively. The boundary conditions for the domain under consideration are given as

u = \bar{u} \text{ on } \Gamma_u, \quad (14.12)

\sigma \cdot \hat{n} = 0 \text{ on } \Gamma_c, \quad (14.13)

\sigma \cdot \hat{n} = \hat{t} \text{ on } \Gamma_t. \quad (14.14)

Equation (14.11) expresses mechanical equilibrium, while the stresses and strains are related through Hooke's law. Thus, the constitutive equation for a homogeneous elastic material is

\sigma = C\,\epsilon, \quad (14.15)

Fig. 14.1 Homogeneous two-dimensional cracked domain

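To make Eq. (14.15) concrete, the elasticity matrix C of an isotropic material can be written in Voigt notation as sketched below; the Poisson ratio ν = 0.3 is an assumed value (the chapter specifies only E = 74 × 10^3 MPa), and the function name is illustrative.

```python
import numpy as np

def elasticity_matrix(E, nu, plane_stress=True):
    """Isotropic elasticity matrix C (Voigt notation) relating
    [s11, s22, s12] = C @ [e11, e22, 2*e12], cf. Eq. (14.15)."""
    if plane_stress:
        c = E / (1.0 - nu**2)
        return c * np.array([[1.0, nu, 0.0],
                             [nu, 1.0, 0.0],
                             [0.0, 0.0, (1.0 - nu) / 2.0]])
    # Plane strain
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return c * np.array([[1.0 - nu, nu, 0.0],
                         [nu, 1.0 - nu, 0.0],
                         [0.0, 0.0, (1.0 - 2.0 * nu) / 2.0]])

C = elasticity_matrix(E=74e3, nu=0.3)   # E from Sect. 14.4; nu assumed
print(C[0, 0])
```

This matrix is what appears as C inside the stiffness integrals of Eq. (14.23) later in the chapter.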


where the elasticity tensor and the strain tensor are denoted by C and \epsilon, respectively. Based on this, the weak form, following Belytschko and Black [41], is stated as

\int_{\Omega} \sigma(u) : \epsilon(u)\, d\Omega = \int_{\Omega} b \cdot u\, d\Omega + \int_{\Gamma_t} \hat{t} \cdot u\, d\Gamma. \quad (14.16)

Substituting the test and trial functions and invoking the arbitrariness of the control point variations, the discretized system of equations takes the form

[K]\{d\} = \{f\}. \quad (14.17)

In Eq. (14.17), f, d and K denote the external force vector, the control point displacement vector and the global stiffness matrix, respectively.

14.2.3 Selection and Enrichment of Control Points

In XIGA, the number of control points equals the number of approximation functions, and each approximation function is uniquely associated with its corresponding control point [29]. Each approximation function has its own domain of influence and vanishes outside it. Control points need not lie within the supports of their basis functions, as they may not even be located in the physical space [29]. This feature is utilized for selecting the control points affected by discontinuities. Thus, a control point whose basis function support is intersected by the crack face is enriched with the HF, whereas a control point whose basis function support contains the crack tip is enriched with the CTEF. To simplify this selection process, the parametric coordinates of the crack tip are first evaluated, followed by the calculation of the corresponding basis functions; nonzero basis function values indicate supports containing the discontinuity, be it the crack surface or the crack tip. A detailed illustration of the selection of control points for enrichment is shown in Fig. 14.2, which is a modification of Fig. 14.1. In this illustration of the domain \Omega, standard control points are represented by red dots, whereas the enriched control points (ECP) of the crack tip and crack face are shown by pink hexagonal dots and green diamond dots, respectively.

14.2.4 Integration of the Elements

The accuracy of the integration is significantly reduced by the presence of discontinuities in the domain; an efficient integration strategy is therefore required. The Gauss quadrature rule is applied in this work to improve the accuracy of the solutions, and the sub-triangulation strategy is adopted to increase the accuracy of integration [42]. The knot spans, or elements, containing a crack tip or crack face are divided into a set of triangular subregions, and the Gaussian rule is employed on each triangle. In this case, a three-stage mapping is needed: at the last step, an extra step beyond the typical mappings of IGA is added, in which a Newton–Raphson algorithm is used to evaluate the position of the crack tip in the parent domain. For elements containing crack faces, a similar nonlinear solver can be utilized to obtain the coordinates of any other physical point in the parent domain. The sub-triangulation strategy for a crack is illustrated in Fig. 14.3, which presents an enlarged elemental view of the domain containing a crack. The figure clearly shows that the elements intersected by the crack are subdivided into sub-triangles on both sides of the crack.

Fig. 14.2 Representation of control point selection for enrichment
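The mapping of quadrature points from a reference triangle onto a physical sub-triangle, as used in the sub-triangulation strategy, can be sketched as follows; the three-point rule and the function name are illustrative.

```python
def triangle_gauss_points(v1, v2, v3):
    """Map a 3-point Gauss rule from the reference triangle to a physical
    sub-triangle with vertices v1, v2, v3 (each an (x, y) tuple).
    Returns (points, weights); the weights sum to the triangle's area."""
    # Midpoint rule on the reference triangle: degree-2 exact, weights 1/6 each.
    ref = [(0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
    wref = [1.0 / 6.0] * 3
    (x1, y1), (x2, y2), (x3, y3) = v1, v2, v3
    detJ = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # 2 * signed area
    pts, wts = [], []
    for (r, s), w in zip(ref, wref):
        x = x1 + (x2 - x1) * r + (x3 - x1) * s
        y = y1 + (y2 - y1) * r + (y3 - y1) * s
        pts.append((x, y))
        wts.append(w * abs(detJ))
    return pts, wts

pts, wts = triangle_gauss_points((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
print(sum(wts))  # total weight equals the triangle area (~2.0)
```

In an actual XIGA code the physical points would further be pulled back to the parent domain via the inverse (Newton–Raphson) mapping mentioned above before the enriched basis is evaluated.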

14.2.5 Approximation

The XIGA approximation is constructed in a similar manner to that of XFEM, in which a local enrichment approach is utilized to describe cracks independently of the element grid. The displacement approximation in XIGA is thus expressed as

u^h(\xi) = \sum_{i=1}^{n_{be}} R_i^p(\xi)\, u_i + \sum_{j=1}^{n_s} R_j^p(\xi)\, [W(\xi) - W(\xi_j)]\, a_j + \sum_{k=1}^{n_T} R_k^p(\xi) \sum_{\alpha=1}^{4} [\beta_\alpha(\xi) - \beta_\alpha(\xi_k)]\, b_k^\alpha. \quad (14.18)

Fig. 14.3 Sub-triangulation integration strategy

In Eq. (14.18), u_i denotes the standard degrees of freedom at the control points; n_{be} is the number of shape functions per element; R_i^p is the NURBS basis function; n_s is the number of split (Heaviside) enriched control points; n_T is the number of tip enriched control points; a_j are the enriched DOFs of the split elements associated with the Heaviside function (HF) W(\xi); and b_k^\alpha are the enriched DOFs of the tip elements associated with the crack tip enrichment functions (CTEF) \beta_\alpha(\xi). The purpose of the HF W(\xi) is to produce the displacement discontinuity across the crack surface; it takes the values -1 and +1 and is defined as

W(\xi) = \begin{cases} +1 & \text{if } (\xi - \xi^*) \cdot \hat{n} \ge 0, \\ -1 & \text{otherwise}, \end{cases} \quad (14.19)

where \xi^* is the closest point on the crack surface and \hat{n} is the unit normal to the crack at \xi^*.

The CTEF are utilized for enriching the tip elements. These functions, defined by Chopp and Sukumar [43], are

\beta_\alpha(\xi) = \left[ \sqrt{r}\cos\frac{\theta}{2},\; \sqrt{r}\sin\frac{\theta}{2},\; \sqrt{r}\cos\frac{\theta}{2}\sin\theta,\; \sqrt{r}\sin\frac{\theta}{2}\sin\theta \right], \quad (14.20)

where r and \theta are the polar coordinates defined at the crack tip with respect to the crack line.
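Equations (14.19) and (14.20) can be sketched numerically as follows; the crack-tip position and crack-direction arguments, and the function names, are illustrative assumptions.

```python
import math

def heaviside(x, x_star, normal):
    """Generalized Heaviside function of Eq. (14.19): +1 on the side of the
    crack pointed to by `normal`, -1 on the other side."""
    d = (x[0] - x_star[0]) * normal[0] + (x[1] - x_star[1]) * normal[1]
    return 1.0 if d >= 0.0 else -1.0

def crack_tip_functions(x, tip, crack_dir):
    """Branch functions beta_alpha of Eq. (14.20) in tip polar coordinates
    (r, theta), with theta measured from the crack line direction."""
    dx, dy = x[0] - tip[0], x[1] - tip[1]
    phi = math.atan2(crack_dir[1], crack_dir[0])
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx) - phi
    sr = math.sqrt(r)
    return [sr * math.cos(theta / 2.0),
            sr * math.sin(theta / 2.0),
            sr * math.cos(theta / 2.0) * math.sin(theta),
            sr * math.sin(theta / 2.0) * math.sin(theta)]

# Point directly ahead of a tip at the origin, crack along +x: theta = 0.
b = crack_tip_functions((1.0, 0.0), (0.0, 0.0), (1.0, 0.0))
print(b)  # -> [1.0, 0.0, 0.0, 0.0]
```

Only the second function, sqrt(r) sin(theta/2), is discontinuous across the crack faces (theta = ±π), which is what lets the tip enrichment reproduce the near-tip displacement jump.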


14.2.6 Crack Formulation

Using the approximation u^h(\xi), the element matrices K and f are obtained as

K_{ij} = \begin{bmatrix} K_{ij}^{uu} & K_{ij}^{ua} & K_{ij}^{ub} \\ K_{ij}^{au} & K_{ij}^{aa} & K_{ij}^{ab} \\ K_{ij}^{bu} & K_{ij}^{ba} & K_{ij}^{bb} \end{bmatrix}, \quad (14.21)

f_i = \left\{ f_i^u \;\; f_i^a \;\; f_i^{b_1} \;\; f_i^{b_2} \;\; f_i^{b_3} \;\; f_i^{b_4} \right\}^T, \quad (14.22)

K_{ij}^{rs} = \int_{\Omega_e} (B_i^r)^T C\, B_j^s \, d\Omega, \quad r, s = u, a, b, \quad (14.23)

f_i^u = \int_{\Omega_e} (R_i)^T b \, d\Omega + \int_{\Gamma_t} (R_i)^T \hat{t} \, d\Gamma, \quad (14.24)

f_i^a = \int_{\Omega_e} (R_i)^T W b \, d\Omega + \int_{\Gamma_t} (R_i)^T W \hat{t} \, d\Gamma, \quad (14.25)

f_i^{b_\alpha} = \int_{\Omega_e} (R_i)^T \beta_\alpha b \, d\Omega + \int_{\Gamma_t} (R_i)^T \beta_\alpha \hat{t} \, d\Gamma. \quad (14.26)

In these equations, R_i is the NURBS basis function, and B_i^u, B_i^a and B_i^{b_\alpha} are the derivative matrices of R_i, given as

B_i^u = \begin{bmatrix} R_{i,X_1} & 0 \\ 0 & R_{i,X_2} \\ R_{i,X_2} & R_{i,X_1} \end{bmatrix}, \quad (14.27)

B_i^a = \begin{bmatrix} R_{i,X_1} W & 0 \\ 0 & R_{i,X_2} W \\ R_{i,X_2} W & R_{i,X_1} W \end{bmatrix}, \quad (14.28)

B_i^{b_\alpha} = \begin{bmatrix} (R_i \beta_\alpha)_{,X_1} & 0 \\ 0 & (R_i \beta_\alpha)_{,X_2} \\ (R_i \beta_\alpha)_{,X_2} & (R_i \beta_\alpha)_{,X_1} \end{bmatrix}, \quad (14.29)

B_i^b = \left[ B_i^{b_1} \;\; B_i^{b_2} \;\; B_i^{b_3} \;\; B_i^{b_4} \right]. \quad (14.30)


14.3 Evaluation of SIF

In this section, the SIFs are evaluated using the domain-based interaction integral [44]. The concept of evaluating SIFs via an interaction integral was proposed by Yau et al. [45], who considered two states of an elastic solid, namely an actual and an auxiliary state. This method was adopted and modified by various researchers for SIF evaluation; for mixed mode fracture problems, Kim and Paulino [46] developed a strategy for evaluating mixed mode SIFs using the M-integral. A modified version of the M-integral strategy was proposed by Sukumar et al. [47], which employs a domain form for the extraction of SIFs by providing the auxiliary field for the crack. The mixed mode interaction integral is given as

M^{(1,2)} = \int_A \left[ \sigma_{ij}^{(1)} \frac{\partial u_i^{(2)}}{\partial x_1} + \sigma_{ij}^{(2)} \frac{\partial u_i^{(1)}}{\partial x_1} - Z^{(1,2)} \delta_{1j} \right] \frac{\partial s}{\partial x_j} \, dA, \quad (14.32)

Z^{(1,2)} = \frac{1}{2} \left( \sigma_{ij}^{(1)} \epsilon_{ij}^{(2)} + \sigma_{ij}^{(2)} \epsilon_{ij}^{(1)} \right). \quad (14.33)

In these equations, s is the scalar weight function, which ranges between 0 and 1; Z^{(1,2)} is the mutual strain energy; \epsilon and \sigma denote the strain and stress; and the superscripts '1' and '2' indicate the actual and auxiliary states, respectively. The SIFs and M^{(1,2)} are related as

M^{(1,2)} = \frac{2}{E^*} \left( K_I^{(1)} K_I^{(2)} + K_{II}^{(1)} K_{II}^{(2)} \right), \quad (14.34)

where E^* = E for plane stress and E^* = E/(1 - \nu^2) for plane strain. Further, an individual SIF under mixed loading can be extracted by an appropriate choice of the auxiliary state: for mode I, K_I^{(2)} = 1 and K_{II}^{(2)} = 0; for mode II, K_I^{(2)} = 0 and K_{II}^{(2)} = 1.
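The extraction step implied by Eq. (14.34) can be sketched as follows: choosing the auxiliary state as pure mode I or pure mode II turns each interaction integral into one SIF. The function names and the numerical values of the integrals are purely illustrative.

```python
def effective_modulus(E, nu, plane_stress=True):
    # E* of Eq. (14.34): E for plane stress, E / (1 - nu^2) for plane strain.
    return E if plane_stress else E / (1.0 - nu**2)

def sif_from_interaction(M1, M2, E_star):
    """Recover (K_I, K_II) from two interaction integrals M^(1,2):
    M1 uses the auxiliary state (K_I, K_II) = (1, 0), M2 uses (0, 1).
    Eq. (14.34) then reduces to M = (2 / E*) * K, i.e. K = E* * M / 2."""
    return 0.5 * E_star * M1, 0.5 * E_star * M2

E_star = effective_modulus(E=74e3, nu=0.3)          # plane stress -> E* = E
KI, KII = sif_from_interaction(M1=0.25, M2=0.0, E_star=E_star)
print(KI, KII)  # -> 9250.0 0.0
```

The domain integrals M1 and M2 themselves would come from evaluating Eq. (14.32) numerically over a ring of elements around the crack tip.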

14.4 Results and Discussion

A centre-cracked homogeneous structure, shown in Fig. 14.4a, is considered for the analysis. The dimensions of the plate are 100 × 200 mm with a crack length of 30 mm. The elastic modulus and fracture toughness of the plate material are taken as 74 × 10^3 MPa and 1897.36 MPa·mm^{1/2}, respectively, while the loading is considered monotonic [48] until the SIFs reach the fracture toughness. First-order NURBS-based XIGA is considered in the study, since the weight components associated with the control points then become similar in the case


Fig. 14.4 a Centre cracked homogeneous rectangular plate. b Control net representation of a cracked plate

of XFEM. Thus, the NURBS elements become equivalent to Lagrange elements, and the comparison becomes viable. In XIGA, the plate is discretized with a 40 × 80 control point distribution, shown in Fig. 14.4b. The selection of ECP has been performed following the methodology of Sect. 14.2.3. A magnified view of the centre crack interface with the ECP is presented in Fig. 14.5: the Heaviside-function ECP are evident on both sides of the crack (red stars), while, since a centre crack is considered, CTEF enrichment is evident at both crack tips (red squares). The sub-triangulation strategy adopted for accuracy enhancement in this case is represented in Fig. 14.6, which shows a higher order of integration (red dots) around the crack tips and the crack interface, and a lower order over the remaining domain. Further, the top edge of the structure is loaded with 240 N/mm, while the bottom edge is constrained. For validation, load versus SIF curves are evaluated for XIGA and XFEM in Fig. 14.7a–d for both mode I and mode II loading and are found to be in good agreement. Moreover, with an increase in the inclination angle (α) of the crack, the mode I stress intensity factor decreases while mode II increases. Thus, the proposed model is validated. The model has then been used to examine the effect of the crack orientation from 0° to 90°. The domain representation of the crack at various orientations is presented in Fig. 14.8, and the observed mode I and II SIFs at the different orientations are presented in Fig. 14.9. The figure shows that the K-I SIF decreases with increasing orientation and becomes zero when the crack angle reaches 90°. This decrease in mode I with increasing α is due to the decreasing force contribution acting perpendicular to the crack, as similarly noticed by Fayed (2017) [44].
Fig. 14.5 ECP for centre crack

Fig. 14.6 Gauss point distribution for centre crack

Fig. 14.7 Variation of SIFs concerning load a XIGA K-I, b XIGA K-II, c XFEM K-I [48] and d XFEM K-II [48]

Furthermore, in the case of the mode II SIF, the value increases as α grows from zero towards its maximum and then decreases as the angle advances to 90°; the maximum value of K-II is noticed between 40° and 50°. This is consistent with the edge-crack case reported by Sharma [49]. It means that the tendency for crack propagation is lower when the angle is either very low or very high, for loading perpendicular to the crack. Finally, the SIFs evaluated from XIGA are also compared with the XFEM results [42], which show a very close agreement between the two techniques.


Fig. 14.8 Domain representation of different orientations of centred cracked rectangular plate

Fig. 14.9 Variation of SIF with the orientation of the crack


14.5 Conclusion

Extended isogeometric analysis (XIGA) has been implemented in this article for analysing an embedded centre crack in a homogeneous plate structure. The NURBS-based XIGA technique resolves issues related to discretization errors and remeshing, which are unavoidable in some other numerical techniques. The cracked plate, subjected to a monotonic load at the top and constrained at the bottom, is considered for computing the mixed mode SIFs (K-I and K-II). Further, the effect of different crack inclination/orientation angles on the SIFs has also been examined. The results obtained from the analysis are compared with XFEM results from the open literature and show remarkably good agreement. The main conclusions of the study are as follows:

• The crack analysis at different inclinations shows that the K-I SIF decreases with increasing inclination, whereas K-II first increases and, beyond about 45°, drops towards a minimum.
• The sub-triangulation strategy for the numerical integration of the elements helps increase the accuracy of the solution and yields better-validated results.
• The results reported in this study reveal that the XIGA technique is very efficient in analysing the discontinuous unknown field variables near cracks.

It is evident that the developed XIGA model analyses static cracks at different inclinations efficiently and robustly. In future work, this model can be modified for analysing propagating cracks at different inclination angles; the analysis of curved cracks at different inclinations can also be a topic of research.

References 1. Hansbo, A., Hansbo, P.: A finite element method for the simulation of strong and weak discontinuities in solid mechanics. Comput. Methods Appl. Mech. Eng. 193(33), 3523–3540 (2004) 2. Amin Sheikh, U., Jameel, A.: Elasto-plastic large deformation analysis of bi-material components by FEM. Mater. Today Proc. 26, 1795–1802 (2020) 3. Dumont, N.A., Mamani, E.Y., Cardoso, M.L.: A boundary element implementation for fracture mechanics problems using generalised Westergaard stress functions. Eur. J. Comput. Mech. 27(5–6), 401–424 (2018) 4. Faron, A., Rombach, G.A.: Simulation of crack growth in reinforced concrete beams using extended finite element method. Eng. Fail. Anal. 116, 104698 (2020) 5. Jameel, A., Harmain, G.A.: Effect of material irregularities on fatigue crack growth by enriched techniques. Int. J. Comput. Methods Eng. Sci. Mech. 21(3), 109–133 (2020) 6. Verhoosel, C.V., Scott, M.A., de Borst, R., Hughes, T.J.R.: An isogeometric approach to cohesive zone modeling. Int. J. Numer. Meth. Eng. 87(1–5), 336–360 (2011) 7. Gupta, V., Jameel, A., Verma, S.K., Anand, S., Anand, Y.: An insight on NURBS based isogeometric analysis, its current status and involvement in mechanical applications. Arch. Comput. Methods Eng. 30(2), 1187–1230 (2023)


8. Jameel, A., Harmain, G.A.: Fatigue crack growth in presence of material discontinuities by EFGM. Int. J. Fatigue 81, 105–116 (2015) 9. Harmain, G.A., Jameel, A., Najar, F.A., Masoodi, J.H.: Large elasto-plastic deformations in bi-material components by coupled FE-EFGM. IOP Conf. Ser. Mater. Sci. Eng. 225, 012295 (2017) 10. Jameel, A., Harmain, G.A.: Large deformation in bi-material components by XIGA and coupled FE-IGA techniques. Mech. Adv. Mater. Struct. 29(6), 850–872 (2022) 11. Jameel, A., Harmain, G.A.: A coupled FE-IGA technique for modeling fatigue crack growth in engineering materials. Mech. Adv. Mater. Struct. 26(21), 1764–1775 (2019) 12. Jiang, S.-Y., Du, C.-B., Ooi, E.T.: Modelling strong and weak discontinuities with the scaled boundary finite element method through enrichment. Eng. Fract. Mech. 222, 106734 (2019) 13. Shoheib Mohammad, M., Shahrooi, S., Shishehsaz, M., Hamzehei, M.: Fatigue crack propagation of welded steel pipeline under cyclic internal pressure by Bézier extraction based XIGA. J. Pipeline Syst. Eng. Pract. 13(2), 04022001 (2022) 14. Yadav, A., Patil, R.U., Singh, S.K., Godara, R.K., Bhardwaj, G.: A thermo-mechanical fracture analysis of linear elastic materials using XIGA. Mech. Adv. Mater. Struct. 29(12), 1730–1755 (2022) 15. Kumar Singh, A., Jameel, A., Harmain, G.A.: Investigations on crack tip plastic zones by the extended iso-geometric analysis. Mater. Today Proc. 5(9, Part 3), 19284–19293 (2018) 16. Hughes, T.J.R., Cottrell, J.A., Bazilevs, Y.: Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Eng. 194(39), 4135–4195 (2005) 17. Cottrell, J.A., Reali, A., Bazilevs, Y., Hughes, T.J.R.: Isogeometric analysis of structural vibrations. Comput. Methods Appl. Mech. Eng. 195(41), 5257–5296 (2006) 18. Gupta, V., Jameel, A., Anand, S., Anand, Y.: Analysis of composite plates using isogeometric analysis: a discussion. Mater. Today Proc. 44, 1190–1194 (2021) 19. 
Akkerman, I., Bazilevs, Y., Kees, C.E., Farthing, M.W.: Isogeometric analysis of free-surface flow. J. Comput. Phys. 230(11), 4137–4152 (2011) 20. Tagliabue, A., Dedè, L., Quarteroni, A.: Isogeometric Analysis and error estimates for high order partial differential equations in fluid dynamics. Comput. Fluids 102, 277–303 (2014) 21. Duvigneau, R.: An introduction to isogeometric analysis with application to thermal conduction. INRIA 28 (2009) 22. Fang, W., An, Z., Yu, T., Bui, T.Q.: Isogeometric boundary element analysis for twodimensional thermoelasticity with variable temperature. Eng. Anal. Boundary Elem. 110, 80–94 (2020) 23. Gupta, V., Verma, S.K., Anand, S., Jameel, A., Anand, Y.: Transient isogeometric heat conduction analysis of stationary fluid in a container. Proc. Inst. Mech. Eng. Part E J. Process Mech. Eng. 09544089221125718 (2022) 24. Bazilevs, Y., Calo, V.M., Hughes, T.J.R., Zhang, Y.: Isogeometric fluid-structure interaction: theory, algorithms, and computations. Comput. Mech. 43(1), 3–37 (2008) 25. Morganti, S., Auricchio, F., Benson, D.J., Gambarin, F.I., Hartmann, S., Hughes, T.J.R., Reali, A.: Patient-specific isogeometric structural analysis of aortic valve closure. Comput. Methods Appl. Mech. Eng. 284, 508–520 (2015) 26. Buffa, A., Sangalli, G., Vázquez, R.: Isogeometric methods for computational electromagnetics: B-spline and T-spline discretizations. J. Comput. Phys. 257, 1291–1320 (2014) 27. Nguyen, V.P., Anitescu, C., Bordas, S.P.A., Rabczuk, T.: Isogeometric analysis: an overview and computer implementation aspects. Math. Comput. Simul. 117, 89–116 (2015) 28. Wang, Y., Gao, L., Qu, J., Xia, Z., Deng, X.: Isogeometric analysis based on geometric reconstruction models. Front. Mech. Eng. 16(4), 782–797 (2021) 29. Ghorashi, S.S., Valizadeh, N., Mohammadi, S.: Extended isogeometric analysis for simulation of stationary and propagating cracks. Int. J. Numer. Meth. Eng. 89(9), 1069–1101 (2012) 30. 
Benson, D.J., Bazilevs, Y., De Luycker, E., Hsu, M.C., Scott, M., Hughes, T.J.R., Belytschko, T.: A generalized finite element formulation for arbitrary basis functions: from isogeometric analysis to XFEM. Int. J. Numer. Meth. Eng. 83(6), 765–785 (2010)


31. De Luycker, E., Benson, D.J., Belytschko, T., Bazilevs, Y., Hsu, M.C.: X-FEM in isogeometric analysis for linear fracture mechanics. Int. J. Numer. Meth. Eng. 87(6), 541–565 (2011) 32. Nguyen-Thanh, N., et al.: An extended isogeometric thin shell analysis based on KirchhoffLove theory. Comput. Methods Appl. Mech. Eng. 284, 265–291 (2015) 33. Bhardwaj, G., Singh, I.V., Mishra, B.K.: Fatigue crack growth in functionally graded material using homogenized XIGA. Compos. Struct. 134, 269–284 (2015) 34. Singh, S.K., Singh, I.V.: Extended isogeometric analysis for fracture in functionally graded magneto-electro-elastic material. Eng. Fract. Mech. 247, 107640 (2021) 35. Ghorashi, S.S., Valizadeh, N., Mohammadi, S., Rabczuk, T.: T-spline based XIGA for fracture analysis of orthotropic media. Comput. Struct. 147, 138–146 (2015) 36. Gu, J., Yu, T., Van Lich, L., Nguyen, T.-T., Tanaka, S., Bui, T.Q.: Multi-inclusions modeling by adaptive XIGA based on LR B-splines and multiple level sets. Finite Elem. Anal. Des. 148, 48–66 (2018) 37. Singh, S.K., Singh, I.V., Mishra, B.K., Bhardwaj, G., Bui, T.Q.: A simple, efficient and accurate Bézier extraction based T-spline XIGA for crack simulations. Theoret. Appl. Fract. Mech. 88, 74–96 (2017) 38. Nguyen-Thanh, N., Zhou, K.: Extended isogeometric analysis based on PHT-splines for crack propagation near inclusions. Int. J. Numer. Meth. Eng. 112(12), 1777–1800 (2017) 39. Piegl, L., Tiller, W.: The NURBS Book. Springer, Berlin (2012) 40. Rabczuk, T., Song, J.H., Zhuang, X., Anitescu, C.: Extended Finite Element and Meshfree Methods. Elsevier Science (2019) 41. Belytschko, T., Black, T.: Elastic crack growth in finite elements with minimal remeshing. Int. J. Numer. Meth. Eng. 45(5), 601–620 (1999) 42. Dolbow, J.E.: An extended finite element method with discontinuous enrichment for applied mechanics. In: Theoretical and Applied Mechanics. Northwestern University, Evanston (1999) 43. 
Chopp, D.L., Sukumar, N.: Fatigue crack propagation of multiple coplanar cracks with the coupled extended finite element/fast marching method. Int. J. Eng. Sci. 41(8), 845–869 (2003) 44. Singh, I.V., Mishra, B.K., Bhattacharya, S., Patil, R.U.: The numerical simulation of fatigue crack growth using extended finite element method. Int. J. Fatigue 36(1), 109–119 (2012) 45. Yau, J.F., Wang, S.S., Corten, H.T.: A mixed-mode crack analysis of isotropic solids using conservation laws of elasticity. J. Appl. Mech. 47(2), 335–341 (1980) 46. Kim, J.-H., Paulino, G.H.: T-stress, mixed-mode stress intensity factors, and crack initiation angles in functionally graded materials: a unified approach using the interaction integral method. Comput. Methods Appl. Mech. Eng. 192(11), 1463–1494 (2003) 47. Sukumar, N., Huang, Z.Y., Prévost, J.H., Suo, Z.: Partition of unity enrichment for bimaterial interface cracks. Int. J. Numer. Meth. Eng. 59(8), 1075–1102 (2004) 48. Jameel, A.: Applications of Enriched Methods in Solving Problems Containing Discontinuities. Department of Mechanical Engineering, National Institute of Technology Srinagar, Srinagar 235 (2016) 49. Sharma, K.: Crack interaction studies using XFEM technique. J. Solid Mech. 6(4), 410–421 (2014)

Chapter 15

A New Version of Artificial Rabbits Optimization for Solving Complex Bridge Network Optimization Problem Y. Ramu Naidu

Abstract In recent years, meta-heuristic optimizers have been essential tools for solving real-world problems in many fields, such as engineering, science, and business. A recently introduced optimizer, artificial rabbits optimization (ARO), is based on the survival strategies of rabbits in nature, namely detour foraging and random hiding. Its performance was tested on 31 benchmark problems and five engineering design problems. It is observed that ARO may suffer from trapping in local optima and premature convergence. To remedy these shortcomings, a new version of ARO, named CCARO, is developed in this paper based on a chaotic map (the Chebyshev map) and a Cauchy-distribution-based mutation. The refined ARO is tested on 13 benchmarks, and the findings are compared with those of the classical ARO and the equilibrium optimizer (EO). CCARO outperforms its competitors on most of the tasks, demonstrating its ability to solve complex optimization problems. In addition, a reliability redundancy allocation problem, the complex bridge network optimization problem (CBNOP), is solved using CCARO to broaden its range of applicability. It shows superior performance compared with the literature.

15.1 Introduction

Nowadays, optimization is an indispensable tool for solving real-world applications in broad fields, including engineering, science, and business. Many real-world applications have been modeled as optimization problems with complex constraints, objective functions, and many decision variables. Meta-heuristic optimizers have been helping solvers tackle such complex problems. In fact, meta-heuristics do not need gradient information of the objective function or of the constraints; they are therefore considered more effective optimizers than traditional ones. To the best of our knowledge, more than two hundred

Y. Ramu Naidu (B) School of Sciences, National Institute of Technology Andhra Pradesh, Tadepalligudem, Andhra Pradesh 534102, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_15


meta-heuristics have been developed so far, based on natural phenomena, human psychology, physics, and mathematical concepts. Some of them are listed below: particle swarm optimization (PSO) [1], differential evolution (DE) [2], bat algorithm (BA) [3], brainstorm optimization (BSO) [4], cuckoo search (CS) [5], teaching-learning-based optimization (TLBO) [6], sine cosine algorithm (SCA) [7], soccer league competition (SLC) [8], gradient-based optimizer (GBO) [9], gravitational search algorithm (GSA) [10], and equilibrium optimizer (EO) [11]. According to the no-free-lunch theorem, no single optimizer exists that can solve all types of optimization problems. Therefore, many researchers have been focusing on either introducing new meta-heuristics or developing new variants of existing optimizers. For instance, Wei-Chiang Hong proposed a chaotic particle swarm optimization to solve nonlinear regression and time series problems. Wang et al. developed a krill herd algorithm based on a chaotic map to accelerate the global convergence speed [12]. A major challenge in meta-heuristics is maintaining a trade-off between exploration and exploitation of the search space. To address this challenge, Mirjalili and Gandomi [13] incorporated chaotic maps into GSA and succeeded to some extent in obtaining more accurate values compared to the classical GSA. In the literature [14], the standard fruit fly optimization (FFO) was improved by incorporating a new chaos-based parameter. It outperformed the FFO, the chaotic firefly algorithm, the chaotic bat algorithm, the chaotic artificial bee colony algorithm, chaotic accelerated particle swarm optimization, and chaotic cuckoo search. Moreover, some literature concerning new variants of meta-heuristics based on the Cauchy distribution is reported here. Gupta and Deep introduced a new version of GWO by incorporating the Cauchy operator [15]. This operator has improved the performance of the classical GWO remarkably.
Owing to the good exploratory ability of the Cauchy distribution, Choi et al. [16] enhanced the convergence speed of DE. In this research work, a new variant of the ARO is developed, which depends on a chaotic map and a Cauchy-distributed random number. Like other meta-heuristics, the ARO also faces inherent difficulties, such as getting trapped at local optima, less accurate solutions, and premature convergence. To avoid these difficulties and enhance the performance of the ARO, a chaotic map and a Cauchy-distributed random number are inserted into the detour foraging and random hiding strategies of the ARO. To demonstrate the efficiency, reliability, and robustness of the CCARO, it is tested on thirteen classical problems taken from the literature, and the findings are compared with those of prominent optimizers. In addition, the CBNOP is solved using the CCARO as a real-life application, and the obtained results are compared with the literature. The structure of the paper is as follows: Sect. 15.2 presents the classical ARO. The developed algorithm is described in Sect. 15.3. In Sect. 15.4, the experimental results are reported and analyzed using statistical measurements and a hypothesis test, the Wilcoxon signed-rank test. Finally, Sect. 15.5 provides some conclusions and possible future research directions.

15 A New Version of Artificial Rabbits Optimization …


15.2 Artificial Rabbit Optimization

The ARO is a very recent optimizer from the meta-heuristic family, inspired by the survival and foraging behavior of rabbits in nature; it is classified as a population-based optimizer [17]. Rabbits are small mammals with fluffy fur, distinctive long ears, whiskers, and short tails. They are herbivores; i.e., they have a plant-based diet and do not eat meat. They have three survival strategies in nature. The first is detour foraging, in which, instead of eating grass in their own region, rabbits eat grass in far-away regions; in this way they can escape from their enemies and predators. The authors model it mathematically as follows:

$$
\begin{aligned}
v_p(g_n+1) ={}& R_q(g_n) + S\cdot\big(R_p(g_n)-R_q(g_n)\big) + \mathrm{round}\big(0.5\cdot(0.05+u_1)\big)\cdot n_1,\\
&\quad p,q = 1,2,\dots,N \text{ and } q \neq p\\
S ={}& K\cdot c\\
K ={}& \left(e - \exp\!\left(\Big(\tfrac{g_n-1}{M_{g_n}}\Big)^{2}\right)\right)\cdot \sin(2\pi u_2)\\
c(l) ={}& \begin{cases}1 & \text{if } l = T(\kappa)\\ 0 & \text{else}\end{cases}\quad l=1,\dots,D \text{ and } \kappa = 1,\dots,\lceil u_3\cdot D\rceil\\
T ={}& \mathrm{randperm}(D)\\
n_1 \sim{}& \mathcal{N}(0,1)
\end{aligned}
\tag{15.1}
$$

where $v_p(g_n+1)$ and $R_p(g_n)$ are the candidate positions of the $p$th rabbit at times $g_n+1$ and $g_n$, respectively, $N$ is the rabbit population size, $D$ is the dimension of the region, $\lceil\cdot\rceil$ denotes the ceiling function, and $u_1$, $u_2$, and $u_3$ are uniform random numbers in $[0,1]$. $M_{g_n}$ represents the total number of generations, and $\mathcal{N}(0,1)$ denotes a standard normal random number. Rabbits are constantly threatened by predators like owls, hawks, eagles, falcons, wild dogs, feral cats, and ground squirrels. Rabbits make burrows around their nest to elude predators, since burrows provide some safety. This kind of strategy is called a random hiding strategy; it is the second survival strategy of rabbits. In the random hiding strategy, the rabbit chooses one of its burrows along each dimension of the region, which gives the algorithm its exploitation ability. Equation (15.2) gives the mathematical definition of the random hiding strategy.


$$
\begin{aligned}
B_{p,q}(g_n) ={}& R_p(g_n) + H\cdot V\cdot R_p(g_n),\quad p = 1,\dots,N \text{ and } q = 1,\dots,D\\
H ={}& \frac{M_{g_n}-g_n+1}{M_{g_n}}\cdot u_4,\qquad n_2 \sim \mathcal{N}(0,1)\\
V(k) ={}& \begin{cases}1 & \text{if } k = q\\ 0 & \text{else}\end{cases}\quad k = 1,\dots,D.
\end{aligned}
\tag{15.2}
$$

Here, $B_{p,q}(g_n)$ is the $q$th burrow of the $p$th rabbit at time $g_n$, and $H$ is the hiding parameter, which decreases linearly from $1$ to $\tfrac{1}{M_{g_n}}$. The new position of the $p$th rabbit is given in Eq. (15.3):

$$
\begin{aligned}
v_p(g_n+1) ={}& R_p(g_n) + S\cdot\big(u_4\cdot B_{p,r}(g_n) - R_p(g_n)\big),\quad p = 1,\dots,N\\
V_r(k) ={}& \begin{cases}1 & \text{if } k = \lceil u_5\cdot D\rceil\\ 0 & \text{else,}\end{cases}
\end{aligned}
\tag{15.3}
$$

where $u_4$ and $u_5$ are two random numbers in $(0,1)$ and $B_{p,r}(g_n)$ is a burrow selected from the set of $D$ burrows. The third phase in ARO is the energy shrink. In the early stages of the iteration process, rabbits engage in detour foraging, while in the later stages they engage in random hiding. As a consequence of this switch-over, the energy of the rabbits deteriorates over the course of time. This energy loss drives the switch from detour foraging (exploration) to random hiding (exploitation). The mathematical definition of this energy factor is as follows:

$$
E(g_n) = 4\left(1-\frac{g_n}{M_{g_n}}\right)\ln\!\left(\frac{1}{u}\right),
\tag{15.4}
$$

where $u$ is a uniform random number in $(0,1)$. Finally, the rabbit position is updated by Eq. (15.5):

$$
R_p(g_n+1) = \begin{cases}
R_p(g_n) & \text{if } f\big(R_p(g_n)\big) \le f\big(v_p(g_n+1)\big)\\
v_p(g_n+1) & \text{if } f\big(R_p(g_n)\big) > f\big(v_p(g_n+1)\big),
\end{cases}
\tag{15.5}
$$

where $f(\cdot)$ is the objective function. The aforementioned three phases are repeated in each iteration, and a description of the ARO is given in Algorithm 1.


Algorithm 1 Pseudocode of ARO
1: procedure
2:   Initialize a population of rabbits (solutions) and find their fitness values using the given objective function
3:   while the stopping criterion is not satisfied do
4:     Find the energy factor E
5:     for p = 1 : N do    ▷ N – population size
6:       if E > 1 then
7:         Find the new rabbit position using Eq. (15.1)
8:         Check whether the space bounds are satisfied
9:       else
10:        Find the new rabbit position using Eq. (15.3)
11:        Check whether the space bounds are satisfied
12:      end if
13:      Update the rabbit position using greedy selection
14:      Update the best rabbit position (best solution)
15:    end for
16:  end while
17:  Report the best solution found so far
18: end procedure
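Algorithm 1 can be sketched end-to-end as follows. This is an illustrative NumPy reading of Eqs. (15.1)–(15.5) on the sphere function, not the authors' code; the per-dimension burrow structure of Eq. (15.2) is collapsed to a single burrow, and all names are my own.

```python
import numpy as np

rng = np.random.default_rng(1)

def aro(f, lb, ub, N=30, Mgn=200):
    """Illustrative ARO loop: detour foraging (Eq. 15.1), simplified random
    hiding (Eqs. 15.2-15.3), energy factor (Eq. 15.4), greedy update (Eq. 15.5)."""
    D = len(lb)
    R = lb + rng.random((N, D)) * (ub - lb)           # initial rabbit positions
    fit = np.array([f(r) for r in R])
    for gn in range(1, Mgn + 1):
        # energy factor, Eq. (15.4): large early (exploration), small late
        E = 4 * (1 - gn / Mgn) * np.log(1 / rng.random())
        for p in range(N):
            if E > 1:                                  # detour foraging, Eq. (15.1)
                q = rng.choice([i for i in range(N) if i != p])
                u1, u2, u3 = rng.random(3)
                K = (np.e - np.exp(((gn - 1) / Mgn) ** 2)) * np.sin(2 * np.pi * u2)
                c = np.zeros(D)                        # mapping vector: random subset of dims
                c[rng.permutation(D)[: int(np.ceil(u3 * D))]] = 1.0
                v = (R[q] + K * c * (R[p] - R[q])
                     + np.round(0.5 * (0.05 + u1)) * rng.standard_normal(D))
            else:                                      # random hiding, Eq. (15.3)
                u4 = rng.random()
                H = (Mgn - gn + 1) / Mgn * u4          # hiding parameter, Eq. (15.2)
                B = R[p] + H * R[p]                    # one burrow (per-dim choice omitted)
                v = R[p] + u4 * (B - R[p])
            v = np.clip(v, lb, ub)                     # keep within the space bounds
            fv = f(v)
            if fv < fit[p]:                            # greedy selection, Eq. (15.5)
                R[p], fit[p] = v, fv
    best = int(np.argmin(fit))
    return R[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = aro(sphere, lb=np.full(3, -5.0), ub=np.full(3, 5.0))
```

The greedy update guarantees the population fitness never worsens, so the best objective value decreases monotonically across generations.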

15.3 The Proposed Method

This section is dedicated to the exposition of the proposed algorithm, CCARO. All steps and analysis of the CCARO are given below.

15.3.1 Chaotic Map for ARO

In order to improve the performance of the ARO, a chaotic map is adopted in place of a uniform random number. Chaotic maps are very useful for avoiding entrapment at local optima due to their chaotic behavior [12]. For this reason, in this work the Chebyshev chaotic map is integrated to enhance the effectiveness and robustness of the ARO. The definition of the Chebyshev map is as follows:

$$
x_{p+1} = \cos\!\big(p\,\cos^{-1}(x_p)\big).
\tag{15.6}
$$

The range of the Chebyshev map is from $-1$ to $1$. With the Chebyshev map replacing the random number, Eqs. (15.1) and (15.3) are rewritten as


$$
\begin{aligned}
R_p(g+1) ={}& R_q(g) + S\cdot\big(R_p(g)-R_q(g)\big) + \mathrm{round}\big(0.5\cdot(0.05+C_s)\big)\cdot n_1,\\
&\quad p,q = 1,\dots,N \text{ and } q \neq p\\
S ={}& K\cdot c\\
K ={}& \left(e - \exp\!\left(\Big(\tfrac{g-1}{M_g}\Big)^{2}\right)\right)\cdot \sin(2\pi C_s)\\
c(l) ={}& \begin{cases}1 & \text{if } l = T(\kappa)\\ 0 & \text{else}\end{cases}\quad l=1,\dots,D \text{ and } \kappa = 1,\dots,\lceil u_3\cdot D\rceil\\
T ={}& \mathrm{randperm}(D)\\
n_1 \sim{}& \mathcal{N}(0,1)
\end{aligned}
\tag{15.7}
$$

$$
\begin{aligned}
v_p(g+1) ={}& R_p(g) + S\cdot\big(C_s\cdot B_{p,r}(g) - R_p(g)\big),\quad p = 1,\dots,N\\
V_r(k) ={}& \begin{cases}1 & \text{if } k = \lceil u_5\cdot D\rceil\\ 0 & \text{else,}\end{cases}
\end{aligned}
\tag{15.8}
$$

where $C_s$ is the chaotic number generated by the Chebyshev map.
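A sequence of chaotic numbers $C_s$ from the Chebyshev map of Eq. (15.6) can be generated as in the sketch below. The iteration follows the equation as printed, with the iteration index as the map order; the seed value 0.7 and the sequence length are arbitrary choices of mine.

```python
import math

def chebyshev_sequence(x0=0.7, n=10):
    """Iterate x_{p+1} = cos(p * arccos(x_p)); every value stays in [-1, 1]."""
    xs = [x0]
    for p in range(2, n + 1):
        # arccos is always defined here because each x is a cosine value
        xs.append(math.cos(p * math.acos(xs[-1])))
    return xs

cs = chebyshev_sequence()
```

Because each new value is the cosine of a real argument, the sequence never leaves the map's stated range of [-1, 1].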

15.3.2 Cauchy Operator

In this work, one more operator is embedded along with the chaotic map, the so-called Cauchy operator, to increase the exploratory ability and convergence speed of the ARO. In general, the probability density function of the Cauchy distribution (CD) with parameters $\lambda$ and $\mu$ is defined as:

$$
f_Y(y) = \frac{\lambda}{\pi\big(\lambda^2 + (y-\mu)^2\big)},\quad -\infty < y < \infty,\ \lambda > 0.
\tag{15.9}
$$

Based on Eq. (15.9), the cumulative distribution function of the CD is given below:

$$
F(y;\lambda,\mu) = \frac{1}{2} + \frac{1}{\pi}\tan^{-1}\!\left(\frac{y-\mu}{\lambda}\right).
\tag{15.10}
$$

From Eq. (15.10), the Cauchy random number $C_r$ is generated as follows:

$$
C_r = \mu + \lambda\tan\big(\pi(\mathrm{rand} - 0.5)\big),
\tag{15.11}
$$

where rand is a uniform random number in $[0, 1]$.
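Equation (15.11) is simply inverse-transform sampling of the CDF in Eq. (15.10); a minimal sketch (function name is mine):

```python
import math

def cauchy_number(rand, mu=0.0, lam=1.0):
    """Invert the Cauchy CDF of Eq. (15.10):
    setting F(y) = rand gives y = mu + lam * tan(pi * (rand - 0.5))."""
    return mu + lam * math.tan(math.pi * (rand - 0.5))

# rand = 0.5 sits at the median, so the sample equals mu;
# rand = 0.75 gives tan(pi/4) = 1, i.e. mu + lam.
samples = [cauchy_number(r) for r in (0.5, 0.75)]
```

The heavy tails of the resulting samples are what give the operator its long exploratory jumps.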


The new rabbit position (offspring) is generated in a similar way to [18]:

$$
\begin{aligned}
R_p ={}& u\,R_{p1} + (1-u)\,R_{p2}\\
R_{p1} ={}& R_p + C_r\,(\mathrm{Best}_R - R_p)\\
R_{p2} ={}& R_q + C_r\,(\mathrm{Best}_R - R_q),
\end{aligned}
\tag{15.12}
$$

where $R_p$ and $R_q$ are two rabbit solutions selected randomly from the population, $\mathrm{Best}_R$ is the best rabbit solution found so far, and $u$ is a uniform random number between $0$ and $1$. Like other meta-heuristic optimizers, the CCARO also starts with a random population of artificial rabbits, which is used to initiate the iterative process. In the population, each candidate rabbit represents a solution. For more details, the CCARO is given in Algorithm 2.
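The Cauchy-mutation offspring of Eqs. (15.11)–(15.12) can be sketched as below; the parent rabbits and best solution are random placeholders, and the function names are mine.

```python
import numpy as np

rng = np.random.default_rng(2)

def cauchy(size=None, mu=0.0, lam=1.0):
    """Cauchy random numbers via the inverse CDF, Eq. (15.11)."""
    return mu + lam * np.tan(np.pi * (rng.random(size) - 0.5))

def cauchy_offspring(R_p, R_q, best_R):
    """New rabbit position per Eq. (15.12): blend of two Cauchy-perturbed
    moves toward the best rabbit found so far."""
    u = rng.random()
    R_p1 = R_p + cauchy(size=R_p.shape) * (best_R - R_p)
    R_p2 = R_q + cauchy(size=R_q.shape) * (best_R - R_q)
    return u * R_p1 + (1 - u) * R_p2

best_R = np.zeros(3)                              # placeholder best solution
child = cauchy_offspring(rng.random(3), rng.random(3), best_R)
```

The Cauchy factors occasionally take very large values, which is exactly the long-jump behavior used here to escape local optima.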

Algorithm 2 Pseudocode of CCARO
1: procedure
2:   Initialize a population of rabbits (solutions) and find their fitness values using the objective function
3:   while the stopping criterion is not satisfied do
4:     Find the energy factor E
5:     for p = 1 : N do
6:       if E > 1 then
7:         if rand

Factor | Coefficient | p-value | Significance
Catalyst amount | − 9.141 | 0.0897 (> 0.05) | Provisionally significant
Reaction time | − 0.9779 | 0.0963 (> 0.05) | Quasi-significant
Reaction temperature | − 0.1480 | 0.2075 (≫ 0.05) | Non-significant

From the analysis, it is observed that the methanol to oil molar ratio is the most significant factor, followed by the concentration of catalyst and the time of reaction. In this study, the temperature for the transesterification reaction is found to be insignificant.
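A multivariate linear yield model of this kind can be fitted and scored with ordinary least squares; below is a generic NumPy sketch on synthetic data (all numbers are illustrative placeholders shaped like the reported factor ranges, not the paper's measurements).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic design matrix over the experimental ranges:
# M:O molar ratio, catalyst wt%, reaction time (min), reaction temperature (degC)
n = 20
X = np.column_stack([
    rng.uniform(6, 15, n),
    rng.uniform(0.9, 2.0, n),
    rng.uniform(75, 120, n),
    rng.uniform(50, 60, n),
])
# Illustrative "true" relationship loosely shaped like the reported coefficients
y = 60 + 2.3 * X[:, 0] - 9.1 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 1.5, n)

A = np.column_stack([np.ones(n), X])             # add intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # OLS coefficients
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

The R² computed this way is the "fraction of the data set fitted by the model" quoted in the conclusion; with an intercept term it always lies between 0 and 1.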

16.5.5 Effect of Various Control Factors

In this experiment, the impact of each factor on the biodiesel yield was studied individually. Accordingly, the characteristic curves of yield percentage versus each control factor have been analyzed separately.

M:O Molar Ratio. For producing RBO biodiesel through the transesterification process, the amount of methanol should be kept at a definite level. Beyond a certain value, an excessive amount of methanol will not improve the yield of biodiesel; it will rather increase the production cost [9]. Also, the presence of methanol in the yield is not desirable, as it can lower the flash point of the produced biodiesel [1]. Table 16.10 shows that the significance of the M:O molar ratio is very high, as the p-value is 0.0092 (≪ 0.05). Here, the coefficient of the M:O molar ratio, 2.3183, indicates an appreciable increase in yield with the M:O molar ratio (see Fig. 16.9).

16 Regression Analysis on Synthesis of Biodiesel from Rice Bran Oil


Fig. 16.9 Scatter plot of yield percentage versus M:O molar ratio

Catalyst Amount. A catalyst is a material that quickens the reaction without itself being consumed. A longer time and higher temperature would be required in the absence of a catalyst to convert the RBO feedstock into biodiesel. However, it is observed that, on increasing the catalyst amount beyond a certain limit, soap forms [17], which adversely affects the yield of RBO biodiesel. The p-value obtained for the catalyst amount from Table 16.10 is 0.0897 (> 0.05), which shows that the catalyst amount is a provisionally significant factor. Also, the yield response falls sharply with increasing catalyst amount, as its coefficient, − 9.141, has the largest absolute value of all the coefficients (see Fig. 16.10).

Reaction Time. In this study, reaction time is found to be a quasi-significant factor in the production of RBO biodiesel. This is based on the p-value of 0.0963 (> 0.05) obtained from the analysis given in Table 16.7. It should be maintained within a certain range, given that both excessively short and long reaction times could result in a

Fig. 16.10 Scatter plot of yield percentage versus catalyst amount


D. Patowary et al.

decrease in yield [9]. Here, corresponding to the reaction-time coefficient of − 0.9779, the yield shows a decreasing trend with its variation (see Fig. 16.11).

Reaction Temperature. The temperature range considered for this experiment is 50–60 °C. The reaction temperature is a necessary factor for increasing the reaction rate, as a certain amount of heat is required to make the molecules collide with each other [1]. But from Table 16.10, the obtained p-value is 0.2075 (≫ 0.05), which indicates that it is a relatively insignificant factor in producing RBO biodiesel. Here, the coefficient of reaction temperature, − 0.1480, indicates that the yield reduces moderately with its variation (see Fig. 16.12).

Fig. 16.11 Scatter plot of yield percentage versus reaction time

Fig. 16.12 Scatter plot of yield percentage versus reaction temperature


16.6 Conclusion

In recent times, biodiesel has gained worldwide attention as a blended replacement for conventional diesel in CI engines. An attempt has been made to compare the experimental and predicted yield percentages of biodiesel from RBO using multivariate linear regression analysis. The study was carried out under the following conditions: methanol to oil molar ratio from 6:1 to 15:1, catalyst concentration from 0.9 to 2% by weight of RBO, reaction time from 75 to 120 min, and reaction temperature from 50 to 60 °C. From the linear regression analysis, it can be concluded that the alcohol to oil molar ratio has the most significant effect, followed by the concentration of catalyst and the time of reaction. The reaction temperature is found to be a relatively insignificant factor in the production of RBO biodiesel. From the regression statistics, the obtained R² value of 0.8948, i.e., a fit of 89.48% of the data set by the developed yield model of RBO biodiesel, indicates a good fit.

Acknowledgements The authors want to acknowledge the Mechanical Engineering Department of Assam Engineering College for providing the laboratory space needed to produce RBO biodiesel. The authors also want to thank Assam Science and Technology University for letting them use their Energy Laboratory to evaluate the physicochemical characteristics of the feedstock as well as the produced biodiesel. Also, the authors would like to thank Bidyut Kalita, Rubul Phukon, and Dhrutisman Dey for their support and assistance in some laboratory-related activities throughout the preparation of this paper.

References
1. Ahmed Elgharbawy, A.: Production of biodiesel from used cooking using linear regression analysis. J. Petrol. Min. Eng. 22(2), 92–99 (2020). https://doi.org/10.21608/jpme.2020.39252.1044
2. Singh, B.: Production of biodiesel from plant oils-an overview. J. Biotechnol. Bioinform. Bioeng. 1, 33–42 (2014). https://doi.org/10.12966/jbbb.08.02.2014
3. Huang, D., Zhou, H., Lin, L.: Biodiesel: an alternative to conventional fuel. Energy Procedia 16, 1874–1885 (2012). https://doi.org/10.1016/j.egypro.2012.01.287
4. Demirbas, A.: Comparison of transesterification methods for production of biodiesel from vegetable oils and fats. Energy Convers. Manage. 49(1), 125–130 (2008). https://doi.org/10.1016/j.enconman.2007.05.002
5. Zullaikah, S., Lai, C.C., Vali, S.R., Ju, Y.H.: A two-step acid-catalyzed process for the production of biodiesel from rice bran oil. Biores. Technol. 96(17), 1889–1896 (2005). https://doi.org/10.1016/j.biortech.2005.01.028
6. Zaidel, D.N.A., Muhamad, I.I., Daud, N.S.M., Muttalib, N.A.A., Khairuddin, N., Lazim, N.A.M.: Production of biodiesel from rice bran oil. In: Biomass, Biopolymer-Based Materials, and Bioenergy: Construction, Biomedical, and Other Industrial Applications, pp. 409–447. Elsevier (2019). https://doi.org/10.1016/B978-0-08-102426-3.00018-7
7. Hoang, A.T., et al.: Rice bran oil-based biodiesel as a promising renewable fuel alternative to petrodiesel: a review. Renew. Sustain. Energy Rev. 135 (2021). https://doi.org/10.1016/j.rser.2020.110204
8. Rajalingam, A., Jani, S.P., Kumar, A.S., Khan, M.A.: Production methods of biodiesel. J. Chem. Pharm. Res. 8(3), 170–173 (2016). Online available: www.jocpr.com
9. Mathiyazhagan, M., Ganapathi, A.: Factors affecting biodiesel production. Res. Plant Biol. 1(2), 1–5. Online available: www.resplantbiol.com
10. Mahesh, N., Talupula, N., Rao, S., Sudheer, B., Kumar, P.: Taguchi's method for optimization of parameters involved in biodiesel production using benne seed oil. Int. J. Adv. Trends Eng. Technol. (2019)
11. Karmakar, B., Dhawane, S., Halder, G.: Optimization of biodiesel production from castor oil by Taguchi design. J. Environ. Chem. Eng. 6 (2018). https://doi.org/10.1016/j.jece.2018.04.019
12. Christian, K.T.R., Pascal Agbangnan D., C., Dominique, S.C.K.: Comparative study of transesterification processes for biodiesel production (a review). 20(2018), 51235–51242 (2018)
13. Gülüm, M., Bilgin, A.: Regression models for predicting some important fuel properties of Corn and Hazelnut oil biodiesel-diesel fuel blends. In: Exergetic, Energetic and Environmental Dimensions, pp. 829–850. Elsevier Inc. (2018). https://doi.org/10.1016/B978-0-12-813734-5.00047-0
14. Elgharbawy, A.S., Sadik, W.A., Sadek, O.M., Kasaby, M.A.: Maximizing biodiesel production from high free fatty acids feedstocks through glycerolysis treatment. Biomass Bioenergy 146 (2021). https://doi.org/10.1016/j.biombioe.2021.105997
15. Harding, K.G.: Alternative Testing Methods to Determine the Quality of Biodiesel (2014). [Online]. Available: https://www.researchgate.net/publication/263543689. Last accessed 20 Oct 2022
16. Suhane, A., Sarviya, R.M., Siddiqui, A.R., Khaira, H.K.: Optimization of wear performance of Castor oil-based lubricant using Taguchi technique. Mater. Today Proc. 4(2), 2095–2104 (2017). https://doi.org/10.1016/j.matpr.2017.02.055
17. Leung, D.Y.C., Guo, Y.: Transesterification of neat and used frying oil: optimization for biodiesel production. Fuel Process. Technol. 87(10), 883–890 (2006). https://doi.org/10.1016/j.fuproc.2006.06.003

Chapter 17

A Study on Vision-Based Human Activity Recognition Approaches S. L. Reeja, T. Soumya, and P. S. Deepthi

Abstract The objective of human activity recognition (HAR) is to categorize actions from subject behavior and environmental factors. Systems for automatically identifying and analyzing human activities make use of data collected from many types of sensors. Although numerous in-depth review articles on general HAR themes have already been published, the area requires ongoing updates due to the developing technology and its multidisciplinary nature. This study makes an effort to recapitulate the development of HAR from a computer vision standpoint. HAR tasks are closely related to the majority of computer vision applications, including surveillance, security, virtual reality, and smart homes. The improvements of cutting-edge activity recognition techniques are highlighted in this review, particularly the activity representation and classification approaches. The research timeline is organized on the basis of representation techniques. We discuss a number of widely used classification approaches under the categories of discriminative, template-oriented, and generative models. This study also focuses on the major drawbacks and potential solutions.

17.1 Introduction

S. L. Reeja (B), LBS Institute of Technology for Women, Marian Engineering College, APJ Abdul Kalam Technological University, Poojappura, Thiruvananthapuram, Kerala, India. e-mail: [email protected]
T. Soumya, College of Engineering Muttathara, Thiruvananthapuram, Kerala, India
P. S. Deepthi, LBS Institute of Technology for Women, Poojappura, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024. B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_17

In personal and social interactions, human activity recognition (HAR) is extremely significant. HAR is a computer vision sub-domain with a lot of potential for developing life-improving solutions. The goal of HAR, depending on the application, is to recognize a specific individual's or a group of humans' physical activities. It



enables computers to recognize how people behave in specific environments and, as an outcome, become proactive in a variety of situations. Some of these tasks, such as running, jumping, walking, and sitting, can be executed by a person through changes in the entire body. Some activities, such as making hand gestures, are performed by moving a specific body part. Ambient assisted living, patient care, home health monitoring, surveillance of rehabilitation activities, and human–computer interaction are just a few of the successful applications of HAR. As a result of this research, HAR systems are now required in many application domains, including security solutions, human–machine interaction, and robotic systems for the characterization of human behavior. Human posture recognition is an important aspect of human interaction that has recently become a research hotspot as a natural interaction mode. The method of study examines input limb movements such as a person's body contour, gestures, and joint positions, among other things. Recognizing gestures and accurately interpreting intentions is the primary objective of non-contact human–computer interaction. Everybody has their own distinct characteristics, and the purpose of recognition is achieved by analyzing a person's behavior, pace, posture, and other information. Gesture recognition devices try to correct and instruct the movements of special education users, such as athletes and dance students, so that they can learn without teachers. In vision-based HAR, camera signals are examined, whereas a sensor-based method recognizes sensor signals (accelerometer, gyroscope, radar, and magnetometer). Owing to its relatively low cost, compact size, and ease of handling, the accelerometer is the most popular HAR sensor. Object sensors, such as RFID tags, are used in the living environment despite their difficulty in deployment. Sensor-based HAR, according to studies, is more efficient and retains more anonymity than vision-based HAR.
Furthermore, vision-based HAR is highly affected by acquisition angles, illumination, and differences between individuals. Due to a variety of factors such as background ambiguity, partial occlusion, changes in viewpoint and lighting, and camera angle, it can be difficult to recognize human activity from still images or video sequences. Wearable sensors can remain with the user throughout the day and monitor activities in real time; as a result, sensor-based HAR has become increasingly popular. Deep learning (DL) algorithms have generated a lot of interest as a result of their ability to automatically extract features from image or video data as well as temporal information. Human actions have a built-in hierarchy that indicates their various levels, as depicted in Fig. 17.1. There is an atomic element at the lowest level, and these action primitives make up more complicated human actions. In this review, the terms "actions" and "activities" are used interchangeably to refer to whole-body movements made up of a number of action primitives performed in a temporally sequential order by a single person without the assistance of another person or other objects. The presence of a second individual or object is a crucial aspect of engagement. Human activities are viewed as a way for people to interact with one another, and there is a hierarchy that ranges in complexity from straightforward actions to complex events, as depicted in Fig. 17.2.


Fig. 17.1 Human body parts performing actions (ImageNet Dataset [1])

Fig. 17.2 Hierarchy of human activities (ImageNet Dataset [1])

Basic human acts comprise small, deliberate, and voluntary body motions that serve as the building blocks for more complicated actions, like "raising the left hand" or "walking." A gesture is a kind of nonverbal interaction that conveys important thoughts or commands. Gestures are a different category of actions that can be either conscious (applauding) or unconscious ("huddling hands over the face when feeling timid"). Behaviors refer to a person's collection of observable physical acts and reactions in a given situation. Interactions are acts that influence the behavior of the participants or the items they involve. Group activities include things like "cuddling" that a group of individuals do together. These actions are generally complex and challenging to measure or identify. Events occur in a particular setting that expresses interpersonal societal interactions, such as weddings and parties. The structure of this vision-based HAR study and its major aspects are depicted in Fig. 17.3. In order to extract discriminating characteristics, handcrafted feature-based schemes rely on estimation and prior information. Action segmentation begins with foreground detection, which is followed by expert feature extraction, expert feature selection, and action classification. The most important features are extracted from the input photos or video frames before the description is built. Spatial-temporal


Fig. 17.3 Aspects of vision-based HAR

Fig. 17.4 Vision-based HAR categories

schemes rely on additional techniques based on constructed features. The orientation and location of the limb in space are used to describe static activities, and the movements of these static activities are used to characterize dynamic activities. Categorization of HAR schemes on the basis of feature extraction is summarized in Fig. 17.4.

17.2 Handcrafted Feature-Based HAR

In comparison with global features, most current local features have been shown to be robust under low illumination and noise. The Bag-of-Visual-Words (BoVW) model is typically integrated with local characteristics to produce the general pipeline of the


most cutting-edge local representation techniques. In a conventional BoVW pipeline, interest points and local patches are first obtained using detectors or dense sampling. Local features are then extracted from these interest points or patches. These descriptors are then clustered, with a codeword at the center of each cluster. Various sub-categories of this approach are explained in the following sections.
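The BoVW pipeline just described (local descriptors, clustering into a codebook, histogram encoding) can be sketched as below. This is a toy NumPy version with random stand-in "descriptors"; a real system would use, e.g., HOG/HOF features and a library k-means implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=20):
    """Plain k-means: returns the codebook (cluster centers)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword; return normalized counts."""
    labels = np.argmin(((descriptors[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

train_desc = rng.random((200, 16))    # stand-in for extracted local features
codebook = kmeans(train_desc, k=8)
video_desc = rng.random((40, 16))     # local features of one query video
h = bovw_histogram(video_desc, codebook)
```

The normalized histogram `h` is the fixed-length video representation that a downstream classifier (e.g., an SVM) would consume.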

17.2.1 Spatiotemporal Approaches

By extending Harris detectors, Zhao et al. [2] developed 3D interest points in space and time. When pixel values exhibit large local fluctuations in both space and time, the Laplacian operator is used to find points of interest. Saliency describes how some elements of an image are quickly and subtly distinguishable. Khan et al. [3] made the initial suggestion for 2D salient point detection; it was extended to 3D by Wang et al. [4]. Their suggested activity classification approach effectively uses the prominent points as local attributes. Accurate spatiotemporal (ST) corners are rather uncommon in some cases, and 3D analogues are insufficient. False alarms regularly occur as a result of unintended alterations in appearance. Ji et al. [5] noted the scarcity of ST points and the challenges it caused in the recognition process; utilizing a Gabor filter as the detector, more interest points are discovered. Li et al. [6] used two examples to demonstrate how the original detectors may not work when there are no abrupt extremes in the motions. Additionally, a method for efficiently computing scale-invariant spatiotemporal characteristics is created utilizing integral video. Wang et al. [7] eliminated the time-consuming iterative technique by combining point localization and scale selection directly, utilizing the 3D Hessian matrix. Principal component analysis (PCA) was introduced by Khelalef and Benoudjit [8] to address the problems of tracking and activity detection at the same time; the descriptor was employed to represent athletes. Sahoo et al. [9] expanded the Histogram of Gradients (HOG) descriptor to cover video sequences. For efficient 3D gradient calculation, integral images are extended to integral videos. For orientation quantization, polyhedrons are used as an analogue to the polygons of 2D HOG.
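The integral-image/integral-video trick mentioned above lets any box sum be read off in constant time from four table look-ups; a 2-D sketch (the 3-D video case adds one more cumulative-sum axis):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero-padded first row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from four table look-ups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(12.0).reshape(3, 4)
ii = integral_image(img)
s = box_sum(ii, 1, 1, 3, 3)   # equals img[1:3, 1:3].sum()
```

This is what makes dense 3D gradient or box-filter responses affordable over whole video volumes.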

17.2.2 Appearance-Based Approaches

Global representations encode global descriptors as a single feature by directly extracting them from the original videos or images. In this depiction, the Region of Interest (ROI) is localized and separated by utilizing background subtraction techniques. Some techniques encode the ROI, from which they extract descriptors such as edges, corners, and optical flow. The silhouette image is stacked in other silhouette-based global representation techniques. Additionally, the Discrete Fourier Transform (DFT), which is likewise a global technique, uses the ROI's frequency-domain information for recognition.


(a) Shape: To identify human actions in videos, it makes sense to segregate the human body from the surrounding scenery. This process is also known as foreground extraction or background subtraction. In the global representation technique, the region of interest covering the entire object is known as the silhouette, which is the extracted foreground in HAR. Before extracting silhouettes, it is crucial to calculate the background model. A feature tracker approach was utilized by Baradel et al. [10], which depicted activities as a sequence of postures. By comparing the key and actual frames, it is possible to identify a precise posture. Since individuals move around, the background is obviously dynamic, which makes it challenging to model for background removal. Vishwakarma et al. [11] divided vertical and horizontal channels using normalized kernels. The final descriptor considers that a collection of photos with activity sequences can be thought of as an activity video. The ST shape was initially developed to describe human activities by Bulbul et al. [12]. By merely stacking the silhouette regions within photos, the space–time shape is created. Arunnehru et al. [13] introduced a method motivated by the search for sequences with similar behaviors. This approach is capable of handling complicated scenes and detecting many activities without the need for prior modeling or learning of activities. Their methodology is partially resistant to changes in scale and direction. Over-segmented regions, also known as "super-voxels," are matched. (b) Depth: Previous HAR research has primarily focused on video sequences recorded by conventional RGB cameras. However, the use of depth cameras has been restricted because of their high price and difficult operation. A less complicated and more economical technique to acquire depth maps is now available due to the development of sophisticated depth sensors.
These sensors, together with adaptive algorithms, can instantly determine the positions of skeleton joints. The readily available depth maps and skeleton data, depicted in Fig. 17.5, have made significant contributions to the computer vision community. The action graph model, used by Silva and Marana [15], represents activities using a number of prominent postures, each represented by a collection of points in a 3D depth map. Huang et al. [16] developed a framework and presented an optimal plan for combining RGB and depth-map features for HAR; ST interest points are generated only for the RGB channels, and HOG is computed to create the descriptor. HOG on depth maps was proposed by Yao et al. [17]: depth maps are acquired and HOG is then computed on them. A histogram-based 4D descriptor, a further generalization of HOG, is presented in Kumar [18]. The strategy of Yang and Tian [19] is likewise founded on polynormals, collections of neighboring normals from the ST depth volume. Low-level polynormals in each adaptive ST cell are aggregated using a designed scheme, and the final representation of a depth sequence is created by concatenating the feature vectors derived from each spatiotemporal cell.

(c) Skeleton: Skeletons and joint locations are features derived from depth maps. Because skeletons and joints can be obtained so easily, the Kinect device is popular in this representation. 20 joints are generated by applications in the

17 A Study on Vision-Based Human Activity Recognition Approaches


Fig. 17.5 Depth sensing model a Kinect v1 sensor, b Kinect v2 sensor, c original image, d depth map, e skeleton by Kinect v1, f skeleton by Kinect v2 (RGB-D Dataset [14])

Kinect v1 SDK, while 25 joints are generated by applications in the Kinect v2 SDK, which adds 5 joints, as depicted in Fig. 17.6. Li et al. [20] obtained noisy skeletons owing to intrinsic sensor noise; the bones and joints may therefore carry many incorrect features, so this scheme is frequently combined with features resistant to occlusion. A skeleton-based model, known as HOJ3D,

Fig. 17.6 Skeleton model a Kinect v1 output b Kinect v2 output (RGB-D Dataset [14])


was proposed by Franco et al. [21]. Features are reprojected, grouped into words, and then supplied to a Hidden Markov Model (HMM). Because of its architecture and reliability, HOJ3D is resistant to changes in perspective. Eigen joints, a new category of feature, were proposed by Liu et al. [22]: three different types of activity information are characterized using 3D position differences of joints. PCA selects the effective leading eigenvectors in order to eliminate noise and redundancy, and the resulting features are then used to improve the performance of the Naive Bayes Nearest Neighbor (NBNN) algorithm. According to Li and Li [23], representing an action solely by joint positions is insufficient, especially when object interaction is involved. Zhu et al. [24] designed a regularized Long Short-Term Memory (LSTM) recurrent neural network to learn the co-occurrences of joints. Shahroudy et al. [25] combined the complementary HOG and HMM into a multimodal multipart scheme for activity recognition in depth-map sequences. A two-level hierarchical architecture based on skeletons was proposed by Chen et al. [26]. The first layer uses a clustering-based feature vector to identify the most pertinent joints and cluster them into an initial classification; the recognition task is thereby broken up into a number of smaller, simpler tasks completed within a particular cluster. Since different sequences of the same activity are categorized into separate clusters, this helps to address the large intraclass variance.
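The eigen-joints idea above can be sketched as follows: frame descriptors are built from 3D joint position differences, and PCA keeps the leading eigenvectors to suppress noise and redundancy. This is a simplified, hypothetical reading of the method; the exact difference sets and normalization in [22] differ.

```python
import numpy as np

def eigen_joint_features(joints, k=8):
    """Sketch of eigen-joints: describe each frame by 3D joint position
    differences, then reduce noise/redundancy with PCA via SVD.
    `joints` has shape (frames, num_joints, 3)."""
    T, J, _ = joints.shape
    feats = []
    for t in range(T):
        intra = (joints[t][:, None, :] - joints[t][None, :, :]).ravel()  # pose
        inter = (joints[t] - joints[0]).ravel()    # motion relative to first frame
        feats.append(np.concatenate([intra, inter]))
    X = np.array(feats)
    X -= X.mean(axis=0)                            # centre before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                            # project onto k leading eigenvectors

rng = np.random.default_rng(0)
seq = rng.normal(size=(10, 20, 3))                 # 10 frames, 20 joints (Kinect v1)
F = eigen_joint_features(seq, k=8)
print(F.shape)                                     # (10, 8)
```

The reduced features `F` would then feed a classifier such as NBNN, as described in the text.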

17.3 Feature Learning-Based HAR

The recent decade has seen increased use of feature learning in various computer vision applications, including anomaly detection, pedestrian identification, image categorization, and more. Many HAR methods based on feature learning are likewise available; these schemes convert pixels into action categories. Learning methods for HAR are further divided into traditional and deep learning schemes.

17.3.1 Traditional Learning

Traditional learning schemes solve computer vision problems using exact steps and mathematical modeling, so the accuracy of HAR does not vary strongly with the dataset used for training. The main types of traditional learning schemes used in HAR are explained here.

(a) Dictionary Learning: The input data is given a sparse representation over a set of base dictionary atoms. The system learns both the dictionary and the related classifier in an end-to-end unsupervised process. Using a supervised dictionary learning approach, Sun et al. [27] transferred understanding from one dataset


of action to another. Without using any prior knowledge, this system learns a pair of dictionaries as well as the corresponding classifier parameters. Since over-complete dictionaries may create even more compact representations, their usage turns out to be more intriguing. Accordingly, to provide a HAR result from a partially missing video, Xu et al. [28] integrated dictionary learning with an HMM.

(b) Bayesian Networks: According to Mojarad et al. [29], a Dynamic Bayesian Network (DBN) is a topology unrolled along the axis of time. A key extension of the DBN is that its state space contains several random variables, as opposed to the HMM's single random variable; the HMM can thus be thought of as a simplified DBN with a static graph topology and a limited number of random variables. A typical DBN structure with three hidden variables was suggested by Suk et al. [30] for two-hand gesture detection. Five features, including motion and spatial relationships, serve as observations; the DBN structure is then constructed and simplified using first-order Markov assumptions. Hartmann et al. [31] developed a DBN customized for hand gesture recognition. For identifying five two-person encounters, Rodriguez et al. [32] suggested a hierarchical DBN. This scheme first estimates body regions independently; the total body posture of a person in each frame is then estimated by integrating the several Bayesian networks into a hierarchy. Finally, the posture estimation results are taken into account using the DBN technique. The conflict between robustness and computational complexity was pointed out by Gedamu et al. [33]. Average templates in the DBN accommodate intraclass variations while minimizing complexity; this solution provides an average-value template along with different feature representations to balance the two and produce quality results.
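The remark that an HMM is a simplified DBN can be made concrete with the forward algorithm, which computes the likelihood of an observation sequence by propagating a single hidden state variable through time slices. The two-state gesture model parameters below are hypothetical, chosen only for illustration.

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence under an
    HMM, i.e. the simplest DBN with one hidden state variable per slice.
    pi: initial state probs (S,); A: transitions (S,S); B: emissions (S,O)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by evidence
    return alpha.sum()

# Toy two-state gesture model with hypothetical parameters.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.9, 0.1],
               [0.3, 0.7]])
p = hmm_forward(pi, A, B, obs=[0, 0, 1])
print(round(p, 4))                      # 0.1462
```

A richer DBN would replace the single state variable by several coupled variables per time slice; the forward recursion generalizes to inference over that joint state space.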
(c) Conditional Random Fields: The conditional probability of a label sequence Y given a particular observation X is compactly represented using Conditional Random Fields (CRFs), an undirected graphical model. HMMs and CRFs were compared for activity recognition by Pramono et al. [34]. To investigate the impact on both models, a test was conducted that included variables violating the premise of observational independence; the results show the effect when independence assumptions are severely violated. Using CRFs, Liu et al. [35] provided a method for identifying activities. First, Mocap data for known activities is used to produce synthetic poses from various views. The shape, flow, duration conditional random field (SFD-CRF) is an improved representation of these fundamental potentials that includes terms expressing spatial and temporal constraints. In their trial, distinct human behaviors such as sitting down or standing up were identified. Agahian et al. [36] substituted a latent pose estimator for the observation layer of the CRF. The suggested approach enables transfer learning to exploit knowledge and data on the relationship between an image and a pose.


17.3.2 Deep Learning Architectures

Among DL architectures, the Convolutional Neural Network (CNN) is the most frequently employed. The first deep CNN training dataset used by Basly et al. [37] comprised over 15 million labeled images. Owing to such excellent results, CNNs are widely employed in many pattern recognition areas. According to Gaikwal and Admuthe [38], CNNs can learn representational features automatically, in contrast to classic machine learning methods and their hand-crafted features. Swathi et al. [39] used a CNN for feature extraction, followed by a multilayer perceptron for classification. More training cases can be produced, or HAR can be transformed into a still-image classification problem, to take advantage of enormous image datasets (such as ImageNet) and pretrain the CNN.

(a) Generative Models: The learned formulation conforms to the data distribution and decreases data dimensionality. The primary goal of generative models is to comprehend the data distribution, including the attributes corresponding to each class, in order to recreate the genuine data distribution of the training set. Auto-encoders, Variational Auto-encoders (VAE), and Generative Adversarial Networks (GAN) are the most widely used and effective techniques. As an illustration, an end-to-end deep generative model has been used for unusual activity identification in videos: similar to the GAN structure, the two networks in the design compete during training while cooperating on the identification task.

(b) Discriminative Models: These supervised systems categorize raw input data into several output categories using a hierarchical learning model made up of numerous hidden layers. Deep Neural Networks (DNN), Recurrent Neural Networks (RNN), and CNNs are the most popular.
Work on long-term temporal convolutions shows their benefits and the significance of high-quality optical flow estimates for learning precise video representations for HAR; to this end, long-term video representations and long-term temporal convolutional designs have been investigated. The LSTM has also been extended to differential Recurrent Neural Networks (RNNs), which learn salient ST representations of activities by calculating derivatives of states that are sensitive to the ST structure.

17.4 HAR Datasets

Public datasets can be used to compare various methods against the same standards, hence hastening the development of HAR techniques. Several typical datasets are examined in this section, organized according to the categories specified at the beginning of this analysis. A fair overview of the existing significant public datasets was published [40]; however, it mostly concentrated on traditional RGB datasets and overlooked the more recent depth-based datasets.

The MSR DailyActivity3D Dataset (2012), depicted in Fig. 17.7, is an interactive recording made with a Kinect device [41] that extends the earlier MSR action datasets with depth maps, video, and skeletal joints.

Fig. 17.7 MSR DailyActivity3D Dataset [41]

It contains 16 activity classes performed by 10 subjects, including drinking, eating, reading, etc. In the NTU-MSR Dataset (2013), gestures are represented as color images and associated depth maps captured by sensors. A total of 1000 instances of each of the 10 gestures were recorded by 10 individuals. Complex backdrops make it a difficult real-life dataset. Additionally, the subjects pose differently for each gesture in terms of hand position, scale, articulation, and so forth.

17.5 Evaluation Metrics

For recognizing human actions, a number of performance indicators from various classification disciplines have been adapted and applied. Frequently used measures include accuracy, sensitivity, specificity, and precision. Sensitivity, also known as the probability of detection, true positive rate, or recall, measures the fraction of actual positive cases that were predicted to be positive. Precision, also known as Positive Predictive Value (PPV), measures how likely it is that an activity detected by the recognizer actually occurred; equivalently, it reflects how often the recognizer's detections are correct. Specificity measures the fraction of actual negative cases that were predicted to be negative. Accuracy gauges the proportion of correct predictions over the total sample size; when the classes are equally sampled, accuracy is a meaningful summary.
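These metrics follow directly from the confusion-matrix counts; a small sketch with hypothetical counts for one activity class:

```python
def har_metrics(tp, fp, tn, fn):
    """Standard per-class HAR evaluation metrics from confusion counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),   # PPV: fraction of detections that are correct
        "sensitivity": tp / (tp + fn),   # recall / true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }

# Hypothetical counts for one activity class, e.g. "walking".
m = har_metrics(tp=80, fp=10, tn=95, fn=15)
print(m)   # accuracy 0.875, precision ~0.889, sensitivity ~0.842, specificity ~0.905
```

In a multi-class HAR setting these are typically computed per class (one-vs-rest) and then averaged.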

17.6 Limitation of Existing HAR Systems

Lighting change impacts the quality of images and, consequently, the information extracted, which is the fundamental challenge for vision-based HAR. Present systems, based on a single-view capturing device, also have limitations related to viewpoint change: this limits the visibility of the actions being evaluated and restricts the quantity of information that can be extracted. It also encompasses occlusion in all of its forms, including self-occlusion, which is a significant drawback of HAR systems. Due to potential data-linkage issues, the range of connected gestures and the similarity between activities are additional challenges. To implement complete, reliable, and efficient HAR systems in diverse circumstances, these issues need to be solved.


17.7 Challenges in HAR

Modern HAR systems must overcome a wide range of obstacles to ensure the effectiveness of the results delivered. The usage of these systems for surveillance, aged care, and medical monitoring, along with the rising costs of implementation, creates new societal issues, such as societal acceptance and privacy intrusion. Tracking devices at home are often regarded as an invasion of privacy and intimacy. It is intriguing to look into how HAR systems are being developed for smartphones as a solution to this last issue: since the user's own device would store the recorded data, this could alleviate the user's privacy concern while also cutting down the computing time required to transmit data between the device and a remote server. It can also be difficult to comprehend and identify daily activities in long videos, because long recordings of daily living are made up of a number of intricate actions. These activities are challenging to model due to their complicated structure and the wide range of ways to carry out the same task. The overlap between the beginning and end times of individual activities is a further problem. Additionally, distinguishing between deliberate and involuntary behaviors remains very difficult.

17.8 Conclusion

In computer vision, human-computer interaction, robotics, and security, the necessity to comprehend human activities is inevitable. This review discussed and analyzed the current techniques employed for HAR, and it offered a taxonomy of various human activities as well as the techniques applied to action representations. Intraclass variance and interclass resemblance remain the main difficulties encountered when dealing with HAR. Features and learning type are the categories we used to organize human activity approaches in this review. Owing to their advances and encouraging recognition results, DL schemes are widely appreciated these days. In addition, it is important to investigate how HAR systems might be integrated into smartphones, which have gained widespread acceptance and support real-time applications. Real-time HAR systems require a high level of accuracy, and their computational cost is also high, so there is a need to develop HAR systems with better accuracy and low computational complexity.

References

1. https://www.image-net.org/index.php
2. Zhao, C., Chen, M., Zhao, J., Wang, Q., Shen, Y.: 3D behavior recognition based on multi-modal deep space-time learning. Appl. Sci. 9(4), 716 (2019)


3. Khan, M.A., Sharif, M., Akram, T., Raza, M., Saba, T., Rehman, A.: Hand-crafted and deep convolutional neural network features fusion and selection strategy: an application to intelligent human action recognition. Appl. Soft Comput. 87, 105986 (2020)
4. Wang, L., Xu, Y., Yin, J., Wu, J.: Human action recognition by learning spatio-temporal features with deep neural networks. IEEE Access 6, 17913–17922 (2018)
5. Ji, X., Zhao, Q., Cheng, J., Ma, C.: Exploiting spatio-temporal representation for 3D human action recognition from depth map sequences. Knowl. Based Syst. 227 (2021)
6. Li, C., Zhong, Q., Xie, D., Pu, S.: Collaborative spatiotemporal feature learning for video action recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7872–7881 (2019)
7. Wang, Q., Sun, G., Dong, J., Ding, Z.: Continuous multi-view human action recognition. IEEE Trans. Circ. Syst. Video Technol. (2021)
8. Khelalef, A., Benoudjit, N.: An efficient human activity recognition technique based on deep learning. Pattern Recognit. Image Anal. 29(4), 702–715 (2019)
9. Sahoo, S.P., Srinivasu, U., Ari, S.: 3D features for human action recognition with semi-supervised learning. IET Image Proc. 13(6), 983–990 (2019)
10. Baradel, F., Wolf, C., Mille, J., Taylor, G.W.: Glimpse clouds: human activity recognition from unstructured feature points. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 469–478 (2018)
11. Vishwakarma, D.K., Dhiman, C.: A unified model for human activity recognition using spatial distribution of gradients and difference of Gaussian kernel. Vis. Comput. 35(11), 1595–1613 (2019)
12. Bulbul, M.F., Tabussum, S., Ali, H., Zheng, W., Lee, M.Y., Ullah, A.: Exploring 3D human action recognition using STACOG on multi-view depth motion maps sequences. Sensors 21(11), 3642 (2021)
13. Arunnehru, J., Thalapathiraj, S., Dhanasekar, R., Vijayaraja, L., Kannadasan, R., Khan, A.A., Haq, M.A., Alshehri, M., Alwanain, M.I., Keshta, I.: Machine vision-based human action recognition using spatio-temporal motion features (STMF) with difference intensity distance group pattern (DIDGP). Electronics 11(15), 2363 (2022)
14. Lai, K., Bo, L., Ren, X., Fox, D.: A large-scale hierarchical multi-view RGB-D object dataset. In: 2011 IEEE International Conference on Robotics and Automation, pp. 1817–1824. IEEE (2011)
15. Silva, M.V., Marana, A.N.: Human action recognition in videos based on spatiotemporal features and bag-of-poses. Appl. Soft Comput. 95, 106513 (2020)
16. Huang, N., Liu, Y., Zhang, Q., Han, J.: Joint cross-modal and unimodal features for RGB-D salient object detection. IEEE Trans. Multimedia 23, 2428–2441 (2020)
17. Yao, H., Yang, M., Chen, T., Wei, Y., Zhang, Y.: Depth-based human activity recognition via multi-level fused features and fast broad learning system. Int. J. Distrib. Sens. Netw. 16(2) (2020)
18. Kumar, N.: Better performance in human action recognition from spatiotemporal depth information features classification. In: Computational Network Application Tools for Performance Management, pp. 39–51. Springer, Singapore (2020)
19. Yang, X., Tian, Y.: Super normal vector for human activity recognition with depth cameras. IEEE Trans. Pattern Anal. Mach. Intell. 39, 1028–1039 (2017)
20. Li, W., Zhang, Z., Liu, Z.: Action recognition based on a bag of 3D points. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 9–14, San Francisco, CA, USA (2010)
21. Franco, A., Magnani, A., Maio, D.: A multimodal approach for human activity recognition based on skeleton and RGB data. Pattern Recogn. Lett. 131, 293–299 (2020)
22. Liu, J., Wang, Z., Liu, H.: HDS-SP: a novel descriptor for skeleton-based human action recognition. Neurocomputing 385, 22–32 (2020)
23. Li, G., Li, C.: Learning skeleton information for human action analysis using Kinect. Sig. Process. Image Commun. 84, 115814 (2020)


24. Zhu, W., Lan, C., Xing, J., et al.: Co-occurrence feature learning for skeleton based action recognition using regularized deep LSTM networks, vol. 2, p. 8 (2016). arXiv preprint
25. Shahroudy, A., Ng, T.T., Yang, Q., Wang, G.: Multimodal multipart learning for action recognition in depth videos. IEEE Trans. Pattern Anal. Mach. Intell. 38(10), 2123–2129 (2016)
26. Chen, H., Wang, G., Xue, J.H., He, L.: A novel hierarchical framework for human action recognition. Pattern Recogn. 55, 148–159 (2016)
27. Sun, B., Kong, D., Wang, S., Wang, L., Yin, B.: Joint transferable dictionary learning and view adaptation for multi-view human action recognition. ACM Trans. Knowl. Discovery Data (TKDD) 15(2), 1–23 (2021)
28. Xu, K., Qin, Z., Wang, G.: Recognize human activities from multi-part missing videos. In: IEEE International Conference on Multimedia and Expo, ICME 2016, pp. 976–990 (2016)
29. Mojarad, R., Attal, F., Chibani, A., Amirat, Y.: Automatic classification error detection and correction for robust human activity recognition. IEEE Robot. Autom. Lett. 5(2), 2208–2215 (2020)
30. Suk, H.I., Sin, B.K., Lee, S.W.: Hand gesture recognition based on dynamic Bayesian network framework. Pattern Recogn. 43, 3059–3072 (2016)
31. Hartmann, Y., Liu, H., Lahrberg, S., Schultz, T.: Interpretable high-level features for human activity recognition. In: BIOSIGNALS, pp. 40–49 (2022)
32. Rodriguez Lera, F.J., Martin Rico, F., Guerrero Higueras, A.M., Olivera, V.M.: A context awareness model for activity recognition in robot assisted scenarios. Expert. Syst. 37(2), e12481 (2020)
33. Gedamu, K., Ji, Y., Yang, Y., Gao, L., Shen, H.T.: Arbitrary-view human action recognition via novel-view action generation. Pattern Recogn. 118, 108043 (2021)
34. Pramono, R.R.A., Chen, Y.T., Fang, W.H.: Empowering relational network by self-attention augmented conditional random fields for group activity recognition. In: European Conference on Computer Vision, pp. 71–90. Springer, Cham (2020)
35. Liu, W., Piao, Z., Tu, Z., Luo, W., Ma, L., Gao, S.: Liquid warping GAN with attention: a unified framework for human image synthesis. IEEE Trans. Pattern Anal. Mach. Intell. (2021)
36. Agahian, S., Negin, F., Köse, C.: An efficient human action recognition framework with pose-based spatiotemporal features. Int. J. Eng. Sci. Technol. 23(1), 196–203 (2020)
37. Basly, H., Ouarda, W., Sayadi, F.E., Ouni, B., Alimi, A.M.: DTR-HAR: deep temporal residual representation for human activity recognition. Vis. Comput. 38(3), 993–1013 (2022)
38. Gaikwal, R.S., Admuthe, L.S.: A review of various sign language techniques. In: Conference Proceedings of COSMO 2021, SIST. Springer (2021)
39. Swathi, K., Rao, J.N., Gargi, M., VaniSri, K.L., Shyamala, B.: Human activities recognition using OpenCV and deep learning techniques. Int. J. Future Gener. Commun. Netw. 13(3), 717–724 (2020)
40. Chaquet, J.M., Carmona, E.J., Fernández-Caballero, A.: A survey of video datasets for human action and activity recognition. Comput. Vis. Image Underst. 117, 633–659 (2013)
41. https://sites.google.com/view/wanqingli/data-sets/msr-dailyactivity3d

Chapter 18

In-Store Monitoring of Harvested Tomatoes Using Internet of Things Rohit Kumar Kasera , Shivashish Gour , and Tapodhir Acharjee

Abstract Agriculture is one of the main pillars of human civilization. Recently, smart agriculture techniques have been used by stakeholders in different steps of the pre- and post-harvesting process. Due to large output, there is a high risk of fruit and vegetable deterioration; fruits and vegetables are lost for a variety of reasons, including poor warehouse management and ignorance. This paper covers one of these vegetables, the tomato, which experiences substantial losses as a result of a variety of factors, including the environment, insects, and others. Here we propose a module featuring a four-layered architecture in which an Arduino and a desktop machine establish machine-to-machine communication. Sensors including "DHT22," "BMP180," "MQ135," "PIR Motion," and "Microphone sensor" have been installed to track the conditions of the tomato warehouse in real time. The desktop computer functions as the IoT gateway controlling the warehouse system; it serves as the storage and management layer, controlling data storage and offering managerial features for businesses. Information from the sensors was gathered using the Arduino microcontroller board. The experiment's goal was to examine how tomato survival rates changed as a result of environmental conditions. Since the system is designed at low cost, it can detect and report the problems that most producers experience.

18.1 Introduction

Agricultural production is considered one of the world's most significant economic pillars and a crucial source of food. Several countries, such as India, depend heavily on it for employment. According to a United Nations Food and Agriculture Organization (FAO) study, the population of the world is increasing by three people every second, or 250,000 people per day, and is projected to reach "8 billion by 2025 and 9.6 billion by 2050" [1]. Food consumption will double by 2040 as a result of population growth.

R. K. Kasera (B) · S. Gour · T. Acharjee
Department of Computer Science and Engineering, Triguna Sen School of Technology, Assam University, Silchar, Assam, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_18


R. K. Kasera et al.

Even though India is the world's top producer of fruits and vegetables, there is a high risk of perishing of delicate produce because of the huge volume of production. Sometimes improper management of storage facilities is to blame for the loss of fruits and vegetables, and several studies find that most losses occur during post-harvest storage. Due to both physical and quality losses, post-harvest vegetable losses can reach up to 80% of total production, which lowers the economic worth of vegetables or crops [2, 3]. Farmers are also unable to store and maintain produce at their own sites for lack of smart facilities; their produce goes to waste and their revenue is impacted by the unavailability of these facilities [4]. Some foods, like tomatoes, carry a greater risk of loss. High input costs and delicate weather conditions, such as persistent rain, insect or rodent sightings, excessive heat or cold, and poor air quality, are blamed for this loss. These elements contribute to a greater loss of tomatoes during production and distribution; quality and quantity are thus impacted, and tomatoes become more expensive and riskier for Indian farmers to grow [5]. So there is a need for automated systems that can monitor and control variables such as temperature, humidity, air quality, and air pressure while storing and selling tomatoes. Such a system reduces the possibility of tomatoes rotting, and the tomatoes remain fresh for a longer period of time [6]. Using the Internet of Things (IoT), such automated systems can be developed. Some work related to this problem has been completed in the past [7–9], but it is still prone to errors. When developing such a system, which can detect insects and measure temperature, humidity, and gas quality, certain factors must be considered correctly.
Before being examined in the cloud, the data gathered from the sensors needs to be appropriately cleaned and preprocessed, so that real-time decisions can be made automatically based on the data analysis. In this study, we created a machine-to-machine (M2M) connection utilizing an Arduino Uno microcontroller board and a desktop computer as an IoT gateway to monitor the tomato storage environment in real time. The contributions of this paper are as follows:

1. Designed an IoT four-layered architecture for temperature and humidity monitoring, CO2 gas monitoring, air pressure monitoring, rodent detection, and insect sound detection.
2. Reduced the flaws in earlier existing studies.
3. Compared the suggested model with previous existing models.

The detailed introduction is given in Sect. 18.1, a literature review in Sect. 18.2, the proposed methodology in Sect. 18.3, a detailed discussion and comparative analysis of previously developed work in Sect. 18.4, and the conclusion with future enhancements in Sect. 18.5.
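A sketch of how the gateway-side rule layer described above might check sensor readings against safe storage ranges for tomatoes. All threshold values below are illustrative assumptions, not figures from this paper.

```python
# Hypothetical safe storage ranges per sensor; the sensor names map to the
# hardware listed in the abstract (DHT22, BMP180, MQ135).
SAFE_RANGES = {
    "temperature_c": (10.0, 15.0),    # DHT22
    "humidity_pct":  (85.0, 95.0),    # DHT22
    "pressure_hpa":  (980.0, 1030.0), # BMP180
    "co2_ppm":       (0.0, 1000.0),   # MQ135
}

def check_reading(sensor, value):
    """Return an alert string if `value` is outside the safe range, else None."""
    low, high = SAFE_RANGES[sensor]
    if value < low:
        return f"ALERT: {sensor} low ({value} < {low})"
    if value > high:
        return f"ALERT: {sensor} high ({value} > {high})"
    return None

print(check_reading("temperature_c", 22.5))  # → ALERT: temperature_c high (22.5 > 15.0)
print(check_reading("humidity_pct", 90.0))   # → None
```

In the full system, readings arriving from the Arduino over serial would be cleaned, passed through such checks, logged locally, and forwarded to the cloud.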

18 In-Store Monitoring of Harvested Tomatoes Using Internet of Things


18.2 Related Work

18.2.1 Tracking System for Food and Grain Warehouse

This work led to the design and implementation of a platform for monitoring food storage systems [10]. To reduce food waste and unneeded financial losses, the warehouse management system is improved using technology. Temperature and humidity sensors measure the respective parameters in the warehouse, a gas sensor detects hazardous CO2, and shock sensors identify tilts or lateral rack movements. To safeguard grains from fire, an alarm is set off when a fire flare is discovered. Live sensor data can be monitored at any time using the Node-RED dashboard, and warehouse parameters are sent by email and SMS via an MQTT broker. Minimal cost and real-time monitoring capability are two of its benefits, while the system lacks insect detection as well as fire detection.

18.2.2 Integrated Monitoring and Control System for Food Grain Wastage

In the idea put forth by Devi et al., IoT is utilized to monitor and manage food grain waste in a warehouse [11]. A wireless sensor network was created that includes a DHT11 temperature and humidity sensor, a CO2 air quality sensor, a PIR moving-object sensor, and a repellent device for rodent and insect presence, all integrated with an Arduino, which controls everything. If the temperature and humidity are higher or lower than the threshold values, a fan is turned on to cool the room. Real-time monitoring is an advantage of this work; its main flaw is that the ultrasonic devices cannot detect rodents or insects reliably. Additionally, no validation experiments comparing the existing system with other related methodologies have been conducted.

18.2.3 Control and Monitoring System for Cold Storage Environments

Siddiqua et al. present an IoT-based real-time monitoring system that can monitor temperature, humidity, luminance, and gas concentration in cold storage and alert the user to dangerous levels [12]. Sensor data are controlled and transmitted to the ESP32 using an Arduino Uno. The MIT App Inventor application can be run on computers and Android phones. To monitor the temperature and humidity, a web server creates and stores a database, which is linked to the app. Low cost is one of the advantages of this system, while there are some limitations to this work, including


the lack of a system for monitoring insects and rodents, the unclear handling of the monitoring software, and the absence of real-time data logging.

18.2.4 The Use of IoT and Blockchain Technology to Monitor and Classify Crop Quality

The work by Sangeetha et al. outlines a system for measuring crop quality and tracking storage using a combination of blockchain technologies, Ganache and MetaMask, Ethereum-based tools that connect farmers directly to distributors [13]. Data is collected in real time both in the warehouse and in the field using MQ137, DHT-11, and MG811 sensors, with an Arduino and a Raspberry Pi as the IoT gateway. The IoT gateway stores data in Azure SQL databases running on Azure SQL servers. This method's advantage is that farmers can access real-time information via a mobile app. However, sensor data for consumers is not managed by a separate server in the cloud, and although field data and warehouse data are both categorized, the backend process used to do so is not covered.

18.2.5 Intelligent Warehousing Based on Machine Learning To help farmers, Anoop et al. created an IoT-enabled warehouse management system [14]. Temperature and humidity sensors measure the warehouse conditions, which are accessed via a NodeMCU microcontroller. ThingSpeak analyzes and displays the warehouse climate based on the sensor data, and a machine learning model examines the climate data to assist decision making. This technique has the advantages of being inexpensive and of focusing on preventing food waste and financial loss to maintain healthy crops.

18.2.6 A System that Protects Crops from Post-Harvest Damage Using IoT Jose et al. propose a smart warehouse and crop monitoring system [15]. IoT is used to notify the farmer about abnormal parameter changes in the crop warehouse via a buzzer alarm and through the Blynk smart mobile application. An ESP32 microcontroller receives the sensor data, and temperature, humidity, PIR motion, and MQ2 sensors monitor the warehouse. A major advantage of this system is its low cost. It lacks a monitoring system for insects and rodents, relies entirely on an Internet connection, and cannot process or store sensor data locally.

18 In-Store Monitoring of Harvested Tomatoes Using Internet of Things


18.2.7 Utilization of Cooling Technologies to Improve Vegetable Storage in Mali Using non-electric devices, Verploegen et al. propose solutions to agricultural post-harvest storage problems in rural Mali [16]. A two-class cooling system made up of clay pot coolers and evaporative cooling chambers was built and tested. Electronic sensors were used to track the sand's temperature, humidity, and moisture content, and sensor data were gathered every five minutes for three to five months. The planned system has the benefit of extending the shelf life of stored vegetables through evaporative cooling; it helps lower water loss and offers a stable storage environment with low temperature and high humidity. As flaws of this study, methods for animal protection, specialized equipment, and position tracking are not thoroughly examined.

18.2.8 A Comparison of ECC, ZEBC, and Pot-in-Pot Coolers for Post-Harvest Storage To maintain post-harvest storage quality and lengthen the shelf life of tomato crops in Malawi, Manyozo et al. suggested a technique to quantify the efficacy of evaporative cooling technologies (ECT) [17]. Tomatoes were kept for 24 days in a "zero energy brick cooler (ZEBC)" and an "evaporative charcoal cooler (ECC)" in order to assess the efficacy of the system. Shelf life was calculated as the length of time it took the vegetables to reach the final stage of ripening, at which point they were still marketable. The inadequacies of this approach include the lack of automation, manual counting of vegetable shelf life, and disregard for protecting the stored vegetables from pests and animals.

18.2.9 An Automated System for Controlling Cold Storage Remotely Mohammed et al. designed and constructed a smart IoT-based system using sensors and actuators [18]. The system is used for controlling, monitoring, and risk identification to enhance the quality of food and cold storage facilities. To assess fruit quality, a comparison was made between a "modified cold storage room" and a conventional cold storage room. Following a real-time data analysis, the approach demonstrated how effectively the proposed system controlled cold storage. The entire system was built around an Arduino-compatible ESP8266 microcontroller board. The absence of monitoring for rats, insects, and other indoor animals is one of the system's flaws, and how the IoT data is transmitted to the cloud server is not fully discussed.


18.3 Methodology The proposed work has been built module by module using the IoT four-layered architecture given in Fig. 18.1. In one of their studies on IoT architecture, Zhong et al. [19] describe the IoT layer architecture as follows: Physical layer: The physical layer of IoT, also known as the device or perception layer, is the foundational level of the architecture. It comprises the physical IoT devices and objects, for instance, various sensors, actuators, transmitter modules, positioning modules (GPS), cameras, etc. The network layer gives this layer instructions on how to access specific devices' information. Once the physical objects have finished gathering information at this layer, it is passed on to the next layer.

Fig. 18.1 IoT four-layered architecture


Access layer: The access layer (network layer) of an IoT architecture provides, through various networks, safe, fast, and reliable communication between the perception layer, the abstraction layer, and the application layer, based on information obtained by the perception layer. Its data transmission functions cover both short-distance and remote transmission: short-distance transmission uses communication or sensor networks such as Wi-Fi, ad hoc, mesh, Zigbee, industrial buses, etc., while Internet-based remote data transmission relies on a variety of special networks, mobile communication networks, satellite communication networks, etc.

Abstraction layer: The application layer collects the required data, i.e., data satisfying specific requirements, from the data abstraction layer's store and processes it. The abstraction layer presents data and its management in ways that make applications simpler to develop and better performing.

Application layer: The application layer is the top layer of an IoT architecture. It analyzes and processes the input from the abstraction, network, and perception layers to create the IoT application. Together with user-specific demands, the application layer can be thought of as the interface between IoT and its various users, whether a person or a system, realizing intelligent IoT applications such as industrial automation, smart car navigation, smart manufacturing, intelligent buildings, and intelligent traffic. Of course, supporting information technologies, including cloud computing, database systems, data mining, gateway technology, multimedia, deep learning, etc., are still required for the intelligent applications of IoT.

In Fig. 18.1, the different physical sensor networks are incorporated in the connectivity layer and coupled to the Arduino microcontroller board.
To send the data to an IoT gateway, the Arduino board is serially connected to a desktop computer acting as the IoT gateway. At the second (access) layer, raw data is delivered to the global network through the IoT gateway. The access layer covers connection setup, scheduling of data transfers, packet transmission between sensor nodes and the IoT gateway, description of the network topology, and network commencement. In the third (abstraction) layer, all sensor nodes of the warehouse module are temporarily linked together, which acts as a platform for the various communication systems. As part of the service layer, data is gathered, authorizations are granted, and data is combined to improve service intelligence. Finally, the storage and management layer controls the warehouse system, manages data storage, and handles all business management-related tasks.

18.3.1 IoT-Enabled System Architecture The architecture of the proposed tomatoes warehouse monitoring system is presented in Fig. 18.2.


Fig. 18.2 Architecture of tomatoes warehouse monitoring system

The sensor nodes in the tomato bucket are fixed as per the architecture described in Fig. 18.2. These sensor nodes sense the tomato storage environment day and night, based on numerous characteristics including temperature and humidity, air pressure, air quality, motion detection, and insect detection. The sensor nodes connect to an Arduino microcontroller board and are programmed such that, if a sensor value exceeds a predetermined threshold, the Arduino sends the data via a serial communication link to the gateway. The IoT gateway receives the sensor node values in a single string variable; this string is then cleaned and preprocessed to separate the individual sensor node values, each of which is stored in its own variable and then in a database. A SQLite database is used in this work to record the sensor node values in real time. Although the sensors sample the tomato warehouse environment every five seconds, the sensor node values are saved to the database at five-minute intervals. Each sensor node in this study has its own table in the database, for example air pressure, air quality, temperature, humidity, and insect detection. A RESTful API has been developed on top of this database; it enables a variety of business logic tasks, and the sensor node values are viewed and examined on the cloud server through it. Tomato bucket temperature and humidity are measured using a DHT22 sensor, and the relationship between them is modeled using the relative humidity equations [20]. The relative humidity is defined as the ratio of the vapor pressure Pv to the saturation vapor pressure Pvs at the given temperature, Eq. (18.1):

RH = 100 · (Pv / Pvs)    (18.1)
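The gateway pipeline described above (a single serial string, cleaned and split into per-sensor values, then stored in one SQLite table per sensor node) can be sketched in Python. The `T:…,H:…` line format is a hypothetical example; the paper does not specify the exact serial encoding.

```python
import sqlite3

# Hypothetical serial line format: one comma-separated string per reading,
# e.g. "T:31.2,H:66.3,AP:1001.8,AQ:64,M:0,I:512"
SENSORS = ("T", "H", "AP", "AQ", "M", "I")

def parse_reading(line):
    """Split the single string received from the Arduino into per-sensor values."""
    parts = dict(item.split(":") for item in line.strip().split(","))
    return {key: float(parts[key]) for key in SENSORS}

def store_reading(conn, reading, timestamp):
    """One table per sensor node, as in this study; insert the latest values."""
    for key, value in reading.items():
        conn.execute(f"CREATE TABLE IF NOT EXISTS {key} (ts TEXT, value REAL)")
        conn.execute(f"INSERT INTO {key} VALUES (?, ?)", (timestamp, value))
    conn.commit()

conn = sqlite3.connect(":memory:")
store_reading(conn, parse_reading("T:31.2,H:66.3,AP:1001.8,AQ:64,M:0,I:512"),
              "2022-10-12 09:49")
```

A Flask view can then expose each table as a REST endpoint; the sketch keeps the database in memory only for illustration.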

Wp = γ · [ln(RH/100) + β·T/(γ + T)] / [β − ln(RH/100) − β·T/(γ + T)]    (18.2)

Relative humidity is measured as a percentage. Equation (18.2) relates the dew point, the air temperature, and the relative humidity: Wp represents the dew point temperature in degrees centigrade, RH the relative humidity, T the air temperature, β = 17.625, and γ = 243.04 °C. Solving Eq. (18.2) for RH yields Eq. (18.3):

RH = 100 · exp(β·Wp / (γ + Wp)) / exp(β·T / (γ + T))    (18.3)
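Equations (18.2) and (18.3) are the standard Magnus-type conversions described by Lawrence [20]; a direct Python transcription (a sketch, not the authors' gateway code) is:

```python
import math

BETA = 17.625    # beta from Eq. (18.2), dimensionless
GAMMA = 243.04   # gamma from Eq. (18.2), in deg C

def dew_point(temp_c, rh_percent):
    """Dew point Wp in deg C from air temperature and RH, Eq. (18.2)."""
    alpha = math.log(rh_percent / 100.0) + (BETA * temp_c) / (GAMMA + temp_c)
    return GAMMA * alpha / (BETA - alpha)

def relative_humidity(temp_c, dew_point_c):
    """RH in percent from air temperature and dew point, Eq. (18.3)."""
    return 100.0 * (math.exp(BETA * dew_point_c / (GAMMA + dew_point_c))
                    / math.exp(BETA * temp_c / (GAMMA + temp_c)))
```

The two functions are exact inverses of each other, which gives a quick way to unit-test the gateway's conversion step.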

Using the BMP180 sensor, air pressure is measured via the barometric formula, Eq. (18.4):

AP = AP0 · exp(−g·M·(alt − alt0) / (R·T))    (18.4)
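Equation (18.4) transcribes directly, using the constants quoted with the equation (a sketch of the conversion, not the BMP180 driver itself):

```python
import math

G = 9.80        # gravitational acceleration, m/s^2
M_AIR = 0.0289  # molar mass of air, kg/mol
R_GAS = 8.31    # universal gas constant, J/(mol*K)

def pressure_at_altitude(ap0_pa, alt_m, temp_k, alt0_m=0.0):
    """Air pressure at altitude alt via the barometric formula, Eq. (18.4)."""
    return ap0_pa * math.exp(-G * M_AIR * (alt_m - alt0_m) / (R_GAS * temp_k))
```

At the reference altitude the formula returns AP0 unchanged, and pressure decays exponentially with height, matching the sign of the exponent in Eq. (18.4).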

In Eq. (18.4), the altitude "alt" determines the pressure: "AP" is the air pressure at altitude "alt", "AP0" is the pressure at the reference level "alt0", where alt0 = 0 is the sea-level reference, T is the temperature at altitude "alt", g is the gravitational acceleration of 9.80 m/s², M is the molar mass of air, 0.0289 kg/mol, and R is the universal gas constant, 8.31 J/(mol·K). An MQ135 sensor is utilized to keep track of the air quality in the tomato warehouse. The output of the MQ135 sensor is converted to an air quality value in parts per million (PPM) [21]. The PPM value is calculated using Eq. (18.5). μ = λ·χ + τ

(18.5)

Expressing Eq. (18.5) in logarithmic notation, Eq. (18.6) is obtained: log10 μ = λ · log10 χ + τ

(18.6)
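Following the logarithmic form of Eq. (18.6), the PPM value comes from the straight line fitted to the MQ135's log-log response curve. The slope λ and intercept τ below are illustrative placeholders, not calibration figures from the paper.

```python
import math

LAMBDA = -0.42   # slope of the MQ135 log-log calibration line (illustrative)
TAU = 1.92       # intercept of the calibration line (illustrative)

def ppm_from_ratio(rs_over_r0):
    """Gas concentration in PPM from the sensor resistance ratio, per
    Eq. (18.6): log10(ppm) = LAMBDA * log10(Rs/R0) + TAU."""
    return 10 ** (LAMBDA * math.log10(rs_over_r0) + TAU)
```

With a negative slope, a larger resistance ratio maps to a lower concentration, as expected for MQ-series sensors.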

Using Eqs. (18.5) and (18.6), the PPM gas value is calculated. A PIR motion sensor senses moving activity inside the tomato storage bucket, and a microphone sensor listens for rodent or insect sounds; an insect is recognized when the microphone's analogue value crosses the detection threshold. The overall operation of this system is depicted in Fig. 18.3. An Arduino configuration file is used to connect and read all sensor nodes simultaneously. A notification to reconnect the sensor is displayed if any of the sensor node values for temperature (T), humidity (H), air pressure (AP), air quality (AQ), insect sound (I), or motion detection (M) is null. The sensor data are obtained via each sensor node's own read routine.
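The null check described above can be sketched as a minimal Python routine on the gateway side (the dictionary shape of a reading is an assumption carried over from the serial-parsing step):

```python
SENSORS = ("T", "H", "AP", "AQ", "I", "M")

def check_nodes(reading):
    """Return the sensor nodes whose value is missing (null), so a
    'reconnect sensor' notification can be raised for each of them."""
    return [name for name in SENSORS if reading.get(name) is None]

# Example: a reading in which the humidity node has dropped off the bus
missing = check_nodes({"T": 31.2, "H": None, "AP": 1001.8, "AQ": 64, "I": 512, "M": 0})
for name in missing:
    print(f"Reconnect sensor node {name}")
```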


Fig. 18.3 Tomatoes storage system flow diagram

All sensor node values are read over serial communication at the IoT gateway using a Python script, in which Formulas (18.1)-(18.3) are used to calculate the values of T and H. If the T and H values exceed their thresholds continuously for five minutes, the AQ and AP values are checked. A tomato warehouse environment change is triggered if the AQ and AP values are also above their thresholds. During cold weather, if the AQ and AP values are above threshold and the T and H values are also above threshold, an alert alarm is triggered to change the tomato storage environment. To detect insects or rodents, the sensor values M and I are scanned every five seconds. When M equals 1 and I crosses the detection threshold (values below 510 in this system), an alert is generated, indicating that a rodent or insect has been detected near the tomatoes. Various software tools have been used in the implementation of this system, including Arduino programming, Python3, Flask, SQLite3, and MS Excel for visualizing data from the tomato storage environment. The hardware includes an Arduino Uno microcontroller board, a desktop computer, a DHT22 sensor, an MQ135 sensor, a PIR sensor, a sound detection sensor, a passive buzzer, and a BMP180 barometric


Table 18.1 Optimum storage parameter values for tomatoes

Vegetable | Optimal temperature (°C) | Optimal humidity (%) | Maximum storage life | Ethylene production
Tomatoes  | 12.8–21.1                | 90–95                | 1–3 weeks            | Medium

pressure sensor. The optimum threshold values for the tomato storage parameters are retrieved from the "Engineering ToolBox" dataset [22] and are shown in Table 18.1.
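Putting the pieces together, Table 18.1's optimum ranges and the alert rules of Fig. 18.3 can be sketched in Python. The 60-sample window assumes one reading every 5 s over five minutes, and the AQ/AP threshold values are placeholders, not figures from the paper.

```python
# Tomato thresholds: T and H from Table 18.1; AQ/AP are placeholder values.
THRESHOLDS = {"T": 21.1, "H": 95.0, "AQ": 220.0, "AP": 1012.0}
INSECT_SOUND_THRESHOLD = 510   # microphone analogue value used in this system
SAMPLES_PER_5_MIN = 60         # assuming one reading every 5 s

def environment_alert(t_h_history, aq, ap, th=THRESHOLDS):
    """True when T and H exceed their thresholds for five continuous minutes
    and AQ and AP are also above threshold (Fig. 18.3 decision flow)."""
    recent = t_h_history[-SAMPLES_PER_5_MIN:]
    sustained = len(recent) == SAMPLES_PER_5_MIN and all(
        t > th["T"] and h > th["H"] for t, h in recent)
    return sustained and aq > th["AQ"] and ap > th["AP"]

def insect_alert(motion, insect_value):
    """Rodent/insect alert: PIR motion flag set and microphone value below 510."""
    return motion == 1 and insect_value < INSECT_SOUND_THRESHOLD
```

On an alert, the gateway would drive the passive buzzer and log the event through the RESTful API.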

18.4 Experiment Results and Analysis For testing, the tomato storage house was set up in a cardboard box, in which the DHT22, BMP180, MQ135, PIR, and microphone sound sensors are installed. A passive buzzer is installed near the IoT gateway machine. Figure 18.4 illustrates the overall tomato storage setup. The system was tested twice between October and the first week of November 2022, with monitoring running day and night. The first test was conducted in Varanasi, Uttar Pradesh, India, by storing 2.50 kg of red tomatoes between 12-10-2022 and 17-10-2022. Whenever the Arduino Uno boots, the sensors read the values of T, H, AQ, AP, M, and I. During the first three nights, between 6:00 PM and 5:00 AM, the average temperature was 32.56 °C, the humidity 66.64%, the air pressure 1004.71 pascal (Pa), the air quality 78 PPM, and the average insect detection measurement greater than 516. During daytime, from 6:00 AM to 4:00 PM, the average temperature was 31.23 °C, the humidity 66.34%, the average air quality 64.56 PPM, the air pressure 1001.89 Pa, and the insect detection measure 516. A separate plot of day and

Fig. 18.4 A view of the proposed tomato storage system


Fig. 18.5 First three days' experiment results during daytime as on 12-10-2022 (temperature, humidity, air pressure, and air quality plotted against the "last status" timestamps)

nighttime temperature, humidity, air pressure, and air quality is shown in Figs. 18.5 and 18.6. The X-axis in Figs. 18.5 and 18.6 displays the date and time of the storage environment, labeled "last status," and the Y-axis lists the data points for the storage parameter values T (°C), H (%), AQ (%), and AP (Pa). In the first investigation, tomato quality was found to be normal from day one to day three. However, on 17-10-2022 at 1:13 PM, it was noticed that some of the tomatoes had changed color and been affected, as shown in Fig. 18.7; this color change indicates that the tomatoes' shelf life is reduced. Research by Wang and Handa indicates that tomatoes can be stored at 10–15 °C after harvesting to increase the shelf life of fresh tomatoes [23]. The second experiment was conducted in the first week of November, between 05-11-2022 at 16:58:52 and 10-11-2022 at 09:59:55, day and night, with 5 kg of tomatoes stored at Assam University, Silchar, Assam, India. Figures 18.8 and 18.9 depict graphs that were continuously monitored in real time and visualize the tomato storage across the various gathered parameters. In Figs. 18.8 and 18.9, the x-axis denotes the date and time labels as the "last status" of the warehouse monitoring, and the y-axis denotes the storage parameter data points, i.e., T (°C), H (%), AQ (%), and AP (Pa) at a particular date

Fig. 18.6 First three days' experiment results during nighttime as on 12-10-2022 (temperature, humidity, air pressure, and air quality plotted against the "last status" timestamps)

Fig. 18.7 Status of tomatoes in the storage warehouse between 12-10-2022 and 17-10-2022

Fig. 18.8 Visualization result of the tomato storage system on various parameters at daytime as on 05-11-2022 (temperature, humidity, air pressure, and air quality plotted against the "last status" timestamps)

and time. For the first two days, the average temperature recorded was 27 °C, the humidity 75.75%, the air quality 264.17 PPM, and the air pressure 1015.64 Pa. However, as Fig. 18.10 shows, on 06-11-2022 at 03:06:50 AM an insect detection value of 502 was recorded, indicating a danger alert near the tomato storage; insect values were recorded again continuously on 06-11-2022 between 06:03:00 AM and 06:56:25 AM, varying among 427, 480, and 467, all below the 510 threshold. This indicates that a rodent or bug had been present around the tomatoes for some time. After two days of storage, certain tomatoes were harmed by heat, fungi, insects, or rat infestation. According to studies by Thole et al. in 2020 [24] and Beckles in 2012 [25] on the post-harvest qualities of tomatoes, tomato shelf life is significantly shortened if the fruit is affected by fungi, high temperatures, insects, or rodent-borne diseases. Figure 18.11 is a column chart representing the average insect/rodent detection rate: insects or rodents were detected in 4.89% of the scans and not detected in the remaining 95.11%. Based on the monitoring in the first and second experiments, it was determined that when the temperature is below 30 °C, the humidity is greater than 73%, the air quality exceeds 220 PPM, the air pressure is well above 1012 Pa, and


Fig. 18.9 Visualization result of the tomato storage system on various parameters at nighttime as on 05-11-2022 (temperature, humidity, air pressure, and air quality plotted against the "last status" timestamps)

Fig. 18.10 Stored tomatoes affected by fungi, rodents, or insects

the rate of insect or rodent detection is greater than 4% every hour, the survival rate of the tomatoes is reduced. A comparison of the proposed system with existing systems is given in Table 18.2, which shows that the proposed IoT-enabled methodology achieves better results by removing the shortcomings of the existing work.
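The Fig. 18.11 summary (4.89% detected vs. 95.11% not detected) is simply the share of scans in which the combined motion-and-sound rule fired; a minimal sketch, assuming each scan yields a (motion, sound) pair:

```python
def detection_rate(samples):
    """Percentage of scans flagging an insect/rodent (PIR motion set and
    microphone value below the 510 threshold), as summarized in Fig. 18.11."""
    flagged = sum(1 for motion, sound in samples if motion == 1 and sound < 510)
    return round(100.0 * flagged / len(samples), 2)
```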


Fig. 18.11 Analysis report of the insect/rodent detection rate (column chart: detected 4.89%, not detected 95.11%; last updated 11-6-22 7:19)

Table 18.2 Comparative analysis table

Characteristics                            | [10] | [12] | [6] | Proposed model
IoT enabled                                | Yes  | Yes  | Yes | Yes
Low cost                                   | No   | Yes  | No  | Yes
Automated controlling                      | Yes  | No   | Yes | Yes
Database storage enabled (cloud and local) | No   | No   | No  | Yes
Insect and rodent detection system         | No   | No   | No  | Yes
Human interaction                          | No   | Yes  | No  | No

18.5 Conclusion The suggested approach includes an automatic tomato storage monitoring mechanism that has been tested in two separate locations. At the first location, 2.50 kg of tomatoes were monitored in real time between October 10 and October 17, 2022; at the second, 5 kg of tomatoes were monitored in real time from November 5, 2022, to November 10, 2022. The results indicated that the second site was more challenging than the first on account of its high air pressure, high air-quality readings, and insect detections; because of that, the tomato warehouse system issued an alert after just two days. The proposed strategy has also been compared to other approaches already in use. According to this analysis, the suggested model has a low cost and a


better level of performance. Future improvements to the IoT-enabled tomato warehouse monitoring system will use an AI/scheduling algorithm to estimate how many more days the tomatoes can survive after a given day, improve insect and rodent detection accuracy (different insects and rodents make different noises, and there should be an alert corresponding to each), implement an extended version of software-defined network architecture to make the system better, and use an independent cloud server.

References 1. Rezk, N.G., Hemdan, E.E.-D., Attia, A.-F., El-Sayed, A., El-Rashidy, M.A.: An efficient IoT based smart farming system using machine learning algorithms. Multimedia Tools Appl. 80, 773–797 (2021). https://doi.org/10.1007/s11042-020-09740-6 2. Majumder, S., Bala, B.K., Arshad, F.M., Haque, M.A., Hossain, M.A.: Food security through increasing technical efficiency and reducing postharvest losses of rice production systems in Bangladesh. Food Secur. 8, 361–374 (2016). https://doi.org/10.1007/s12571-016-0558-x 3. Kumar, D., Kalita, P.: Reducing postharvest losses during storage of grain crops to strengthen food security in developing countries. Foods 6(1), 8 (2017). https://doi.org/10.3390/foods6 010008 4. Karpagalakshmi, R.C., Bharathkumar, S., Vignesh, R., Yogaraj, M., Naveenkumar, A.: Implementation of automated low cost data warehouse for preserving and monitoring vegetables and fruits. J. Eng. Sci. 11(11), 146–151 (2020) 5. Lamba, S.: Tracking food loss in the tomato value chain. https://idronline.org/article/agricu lture/photo-essay-tracking-food-loss-in-the-tomato-value-chain/?utm_source=facebookinst agram&utm_medium=social_media&utm_campaign=2022_Articles&utm_content=GU_nyi shi_tribes_hornbills. Last accessed 26 May 2022 6. Shukla, A., Jain, G., Chaurasia, K., Venkanna, U.: Smart fruit warehouse and control system using IoT. In: 2019 International Conference on Data Science and Engineering (ICDSE), pp. 40– 45. IEEE, Patna (2019). https://doi.org/10.1109/ICDSE47409.2019.8971474 7. Tervonen, J.: Experiment of the quality control of vegetable storage based on the Internetof-Things. Procedia Comput. Sci. 130, 440–447 (2018). https://doi.org/10.1016/j.procs.2018. 04.065 8. Purandare, H., Ketkar, N., Pansare, S., Padhye, P., Ghotkar, A.: Analysis of post-harvest losses: an Internet of Things and machine learning approach. In: 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), pp. 222–226. IEEE, Pune (2016). 
https://doi.org/10.1109/ICACDOT.2016.7877583 9. Olayinka, A.S., Adetunji, C.O., Nwankwo, W., Olugbemi, O.T., Olayinka, T.C.: A study on the application of Bayesian learning and decision trees IoT-enabled system in postharvest storage. In: Pal, S., De, D., Buyya, R. (eds.) Artificial Intelligence-Based Internet of Things Systems. Internet of Things, pp. 467–491. Springer, Cham (2022). https://doi.org/10.1007/978-3-03087059-1_18 10. Banerjee, S., Saini, A.K., Nigam, H., Vijay, V.: IoT Instrumented food and grain warehouse traceability system for farmers. In: 2020 International Conference on Artificial Intelligence and Signal Processing (AISP), pp. 1–4. IEEE, Amaravati (2020). https://doi.org/10.1109/AIS P48273.2020.9073248 11. Devi, A., Julie Therese, M., Dharanyadevi, P., Pravinkumar, K.: IoT based food grain wastage monitoring and controlling system for warehouse. In: 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), pp. 1–5. IEEE, Puducherry (2021). https://doi.org/10.1109/ICSCAN53069.2021.9526400


12. Siddiqua, F., et al.: IoT-based low-cost cold storage atmosphere monitoring and controlling system. In: 2022 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET), pp. 311–315. IEEE, Chennai (2022). https://doi.org/10.1109/ WiSPNET54241.2022.9767151 13. Sangeetha, M., et al.: Design and development of a crop quality monitoring and classification system using IoT and blockchain. J. Phys.: Conf. Ser. 1964, 062011 (2021). https://doi.org/10. 1088/1742-6596/1964/6/062011 14. Anoop, A., Thomas, M., Sachin, K.: IoT based smart warehousing using machine learning. In: 2021 Asian Conference on Innovation in Technology (ASIANCON), pp. 1–6. IEEE, Pune (2021). https://doi.org/10.1109/ASIANCON51346.2021.9544579 15. Jose, J., Samhitha, B.K., Maheswari, M., Selvi, M., Mana, S.C.: IoT based smart warehouse and crop monitoring system. In: 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), pp. 473–476. IEEE, Tirunelveli (2021). https://doi.org/10.1109/ICO EI51242.2021.9453029 16. Verploegen, E., Sanogo, O., Chagomoka, T.: Evaluation of low-cost evaporative cooling technologies for improved vegetable storage in Mali. In: 2018 IEEE Global Humanitarian Technology Conference (GHTC), pp. 1–8. IEEE, San Jose (2018). https://doi.org/10.1109/GHTC. 2018.8601894 17. Manyozo, F.N., Ambuko, J., Hutchinson, M.J., Kamanula, J.F.: Effectiveness of evaporative cooling technologies to preserve the postharvest quality of tomato. Int. J. Agron. Agric. Res. (IJAAR) 13(2), 114–127 (2018) 18. Mohammed, M., Riad, K., Alqahtani, N.: Design of a smart IoT-based control system for remotely managing cold storage facilities. Sensors 22(13), 4680 (2022). https://doi.org/10. 3390/s22134680 19. Zhong, C., Zhu, Z., Huang, R.-G.: Study on the IOT architecture and access technology. In: 2017 16th International Symposium on Distributed Computing and Applications to Business, Engineering and Science (DCABES), pp. 113–116. 
IEEE, Anyang (2017). https://doi.org/10. 1109/DCABES.2017.32 20. Lawrence, M.G.: The relationship between relative humidity and the dewpoint temperature in moist air: a simple conversion and applications. Bull. Am. Meteorol. Soc. 86(2), 225–234 (2005). https://doi.org/10.1175/BAMS-86-2-225 21. Kumar Sai, K.B., Mukherjee, S., Parveen Sultana, H.: Low cost IoT based air quality monitoring setup using Arduino and MQ series sensors with dataset analysis. Procedia Comput. Sci. 165, 322–327 (2019). https://doi.org/10.1016/j.procs.2020.01.043 22. ToolBox, E.: Fruits and Vegetables—Optimal Storage Conditions (2004). https://www.engine eringtoolbox.com/fruits-vegetables-storage-conditions-d_710.html. Last accessed 12 Oct 2022 23. Wang, K., Handa, A.K.: Understanding and improving the shelf life of tomatoes. In: Mattoo, A., Handa, A. (eds.) Achieving Sustainable Cultivation of Tomatoes, 1st edn., pp. 315–342. Burleigh Dodds Science Publishing, USA (2017) 24. Thole, V., et al.: Analysis of tomato post-harvest properties: fruit color, shelf life, and fungal susceptibility. Curr. Protoc. Plant Biol. 5 (2020). https://doi.org/10.1002/cppb.20108 25. Beckles, D.M.: Factors affecting the postharvest soluble solids and sugar content of tomato (Solanum lycopersicum L.) fruit. Postharvest Biol. Technol. 63(1), 129–140 (2012). https:// doi.org/10.1016/j.postharvbio.2011.05.016

Chapter 19

A Compact Microstrip Hexagonal Patch Antenna with a Slotted Ground Plane for RF Energy Harvesting Applications Pradeep S. Chindhi, H. P. Rajani, and Geeta Kalkhambkar

Abstract In this research work, a hexagonal-shaped microstrip patch antenna is introduced to cover two 5G frequency bands, n78 (3300–3800 MHz) and n77 (3300–4200 MHz), for radio frequency (RF) energy harvesting applications. The resonant frequencies for the 5G n78 and n77 bands are 3.55 GHz and 3.75 GHz, respectively. To improve the characteristics of the antenna at the anticipated resonant frequency, a hexagonal slot is embedded in the ground plane. A working bandwidth of 2.0 GHz (2.80–4.88 GHz), centred at 3.7 GHz (5G n77), is achieved. The proposed antenna exhibits a bidirectional radiation pattern with a peak gain of 3.74 dBi at 3.7 GHz. The height of the FR4 substrate is fixed at 1.6 mm. A comparative study of different substrate materials has been performed to determine the effect of dielectric properties on the proposed hexagonal-shaped microstrip patch antenna (HSMPA).

P. S. Chindhi, G. Kalkhambkar: SGMRH&RC, Sant Gajanan Maharaj College of Engineering, Kolhapur, India
H. P. Rajani (B): Jain Group of Institutions, Jain College of Engineering, Belagavi, India; e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_19

19.1 Introduction In the present 5G era, radio frequency energy harvesting (RFEH) and wireless power transfer (WPT) studies are drawing increasing attention. In miniaturized wireless systems and ultra-low-power integrated electronic devices (IED), RFEH and WPT technologies are potential alternatives for generating clean and sustainable energy, and they are undoubtedly promising schemes for future batteryless and self-sufficient Internet of Things (IoT) applications [1–3]. In an RFEH system, an antenna is the key component used to scavenge RF energy; its gain and directivity are two important performance indicators. A rectifier circuit converts the RF energy received by the antenna into direct current (DC). In order to increase RF-to-DC conversion, the RF energy

267

268

P. S. Chindhi et al.

harvesting antenna should have high gain and directivity [4]. To enhance antenna gain different techniques have been introduced in previous studies. A square shape microstrip patch antenna with modified inset feed to enhance the gain is discussed in [5]. Multi-substrate (FR4-Air-FR4) large aperture circular shape patch antenna with air gap technique to increase bandwidth and gain is presented in [6], and RFEH system gets bulky due to multi-substrate and large aperture. A 1X4 hexagonal slot antenna array with Wilkinson power divider to enhance the gain and power sensitivity is experimented in [1]. 1X4 antenna array with Wilkinson power divider increases the design complexity of the RFEH system. This paper focuses on the design and simulation of a compact RFEH HSMPA operating in the 5G n78 (3300–3800 MHz) and 5G n77 (3300–4200 MHz) frequency bands. The paper is prepared as follows: Sect. 19.2 presents the structural details of the proposed HSMPA. Section 19.3 illustrates software-based performance analysis and compares the proposed work with earlier literature. Finally, Sect. 19.4 presents conclusion on the proposed HSMPA for RFEH system followed by a list of references.

19.2 Antenna Design

Although many different types of antennas for RFEH are described in the literature [7–10], the HSMPA was selected for this study because of some noticeable advantages: it is inexpensive, compact, lightweight and conformal. Figure 19.1 shows the overall size of the proposed HSMPA structure. The initial dimensions are 40 mm × 48 mm × 1.6 mm (Lg × Wg × Hs). The optimal dimensions are: Lg = 40 mm, Wg = 48 mm, Wf = 0.8 mm, Lf = 7.47 mm, LHMPA = 11 mm, LHMPAG = 18 mm.

Fig. 19.1 Proposed hexagonal-shaped patch antenna with hexagonal-shaped slotted ground. a Top view. b Side view
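As a rough sanity check on these dimensions, the hexagonal patch can be approximated by a circular patch of equal area and its fundamental TM11 resonance estimated with the standard cavity model. The script below is a sketch under that assumption: it takes the hexagon side as LHMPA = 11 mm on FR4 (εr ≈ 4.3, h = 1.6 mm) and ignores the ground slot, which in the actual design shifts the resonance downwards toward 3.7 GHz.

```python
import math

# Cavity-model estimate of the fundamental (TM11) resonance of the
# hexagonal patch, modeled as a circular patch of equal area.
# Assumptions: side length L = 11 mm, FR4 (eps_r = 4.3), h = 1.6 mm;
# the hexagonal ground slot is ignored.
c = 3e8          # speed of light (m/s)
L = 11e-3        # hexagon side length (m)
h = 1.6e-3       # substrate height (m)
eps_r = 4.3      # FR4 relative permittivity

# Equal-area circular patch: pi * a^2 = (3*sqrt(3)/2) * L^2
a = L * math.sqrt(3 * math.sqrt(3) / (2 * math.pi))

# Fringing-field correction for the effective radius
a_e = a * math.sqrt(1 + (2 * h / (math.pi * a * eps_r))
                    * (math.log(math.pi * a / (2 * h)) + 1.7726))

# TM11 mode: f = 1.8412 * c / (2 * pi * a_e * sqrt(eps_r))
f_r = 1.8412 * c / (2 * math.pi * a_e * math.sqrt(eps_r))
print(f"estimated resonance: {f_r / 1e9:.2f} GHz")
```

The estimate lands near 4 GHz, plausibly above the simulated 3.7 GHz resonance because the slotted ground is not modeled.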

19 A Compact Microstrip Hexagonal Patch Antenna with a Slotted Ground …


19.3 Results and Analysis

This section presents a parametric study of the optimized antenna to observe the impact of key parameters on the antenna performance in terms of the reflection coefficient, gain and directivity.

(a) Antenna performance variation (|S11| and gain) with different feedline widths (Wf)

The performance of the proposed HSMPA is observed by altering the width of the feedline (Wf), and the results are shown in Fig. 19.2a and b. It is observed that as the feedline width increases from 0.5 to 3.5 mm, the resonance frequency moves towards higher frequencies, while the S11 and gain of the antenna degrade. A good impedance match at 3.7 GHz is observed with Wf = 0.8 mm; as a result, the S11 and gain are improved, as shown in Fig. 19.2a and b.

(b) Antenna performance variation (|S11| and gain) with different feedline lengths (Lf)

The effect on the performance parameters of the proposed antenna is further studied by shifting the position and length of the feedline. The top-layer hexagon is shifted upwards in steps of 1 mm, which changes the length of the feedline and thus helps to match the impedance at 3.7 GHz. The effect of the change in Lf on the S11 and gain is visualized in Fig. 19.3a and b, respectively. It is observed that the FR4 substrate offers better S11 and gain for Lf = 7.47 mm at 3.7 GHz.

(c) Antenna performance variation (|S11| and gain) with different substrate materials (optimized antenna dimensions)

Figure 19.4b and d illustrates the radiation and total efficiency of the HSMPA for different substrate materials. It is noticed that the FR4 and RO3006 substrates give improved total efficiency in the 5G n78 (3.3–3.8 GHz) band, as highlighted in Fig. 19.4d. Though the total efficiency curves in Fig. 19.4d show better total efficiency for other substrate materials compared to FR4, the S11 parameters and gain corresponding to FR4 show better performance in the second band, 5G n77 (3.3–4.2 GHz), as shown in Fig. 19.4a and b. The radiation efficiency for the different substrate materials varies from 86.0% to 92% for the proposed HSMPA, as shown in Fig. 19.4b. The best antenna performance in terms of S11, gain, radiation efficiency and total efficiency is observed at 3.7 GHz for the FR4 substrate material.

(d) Antenna performance variation (|S11| and gain) for the FR4 substrate material (optimized antenna dimensions)

Figure 19.5a shows the simulated surface current distribution over the patch and ground plane and the 3D radiation pattern of the intended antenna at 3.7 GHz. From Fig. 19.5a, it is observed that at 3.7 GHz the maximum surface current distribution is on


Fig. 19.2 Performance variation of proposed HSMPA with different feedline width (Wf). a S11 (dB) with different feedline width (Wf). b Gain with different feedline width (Wf)

the edges of the hexagon and on the microstrip feedline. Figure 19.5b shows a bidirectional polar radiation pattern at 3.7 GHz with main lobe directions of 169° and 180° and 3 dB angular widths of 81.2° and 87.7°, respectively; thus, the proposed HSMPA can receive RF signals from diverse angles, which is suitable for RFEH (Table 19.1). The performance of the optimized HSMPA in terms of directivity, gain and efficiency at 3.7 GHz is shown in Fig. 19.6. The proposed antenna offers a peak realized and


Fig. 19.3 Performance variation of proposed HSMPA with different feedline length (Lf). a S11 (dB) with different feedline length (Lf). b Gain with different feedline length (Lf)


Fig. 19.4 Performance variation of proposed HSMPA with different substrate material. a S11 (dB) for different substrate material. b Gain for different substrate material. c Radiation efficiency for different substrate material. d Total efficiency for different substrate material

IEEE gain of 3.74 dBi with a directivity of 4.4 dBi at 3.7 GHz. Further, an antenna radiation efficiency and total efficiency of 86.0% are noted. Almost equal efficiencies for the 5G n77 and n78 bands are observed, as shown in Fig. 19.6.


Fig. 19.5 Surface current distribution, 3D and 2D radiation pattern. a Surface current distribution and 3D radiation pattern at 3.7 GHz. b 2D polar radiation pattern at 3.7 GHz


Table 19.1 Performance comparison of the proposed antenna with earlier literature

| References    | Operating frequency (GHz) | Maximum gain (dBi) | Maximum directivity (dBi) | Antenna dimension (mm) | Type of material |
| [1]           | 1.62, 2.52                | 0.8, 3.80          | –                         | 60 × 80 × 1.6          | FR-4 (ε = 4.4)   |
| [4]           | 2.4                       | 6.28               | 7.54                      | 86.7.6 × 58.4 × 1.6*   | FR-4 (ε = 4.6)   |
| [6]           | 1.57                      | 9.61               | –                         | 275 × 205 × 6          | FR4-air-FR4      |
| [8]           | 2.45                      | 6.14               | –                         | 100 × 100 × 1.6        | FR-4 (ε = 4.4)   |
| [9]           | 2.45                      | 8.02               | –                         | 100 × 100 × 1.6        | FR-4 (ε = 4.4)   |
| Proposed work | 3.7                       | 3.74               | 4.4                       | 48 × 40 × 1.6          | FR-4 (ε = 4.3)   |

Fig. 19.6 Optimized performance of proposed HSMPA

19.4 Conclusion

In this work, an HSMPA is designed and investigated for use in RFEH applications in the 5G n77 and n78 bands. The dimensions of the feedline are parametrically studied to achieve resonance at 3.7 GHz. A hexagonal slot is embedded in the ground plane to enhance the bandwidth of the HSMPA. Further, the optimized geometry has been studied to observe the impact of different substrate materials. It is concluded that the FR4 substrate gives better performance in terms of S11, gain and efficiency for the 5G n77 and n78 bands. A gain of 3.74 dBi and efficiencies (total and radiation) of 86.0% are achieved. A positive gain response and almost constant efficiencies are observed across the 5G n77 and n78 frequency bands. The bidirectional radiation pattern of the proposed antenna enables the harvesting of RF energy from different angles.


References
1. Zhang, J., Bai, X., Han, W., Zhao, B., Xu, L., Wei, J.: The design of radio frequency energy harvesting and radio frequency-based wireless power transfer system for battery-less self-sustaining applications. Int. J. RF Microw. Comput. Aided Eng. e21658 (2018). https://doi.org/10.1002/mmce.21658
2. Chindhi, P., Rajani, H.P., Kalkhambkar, G.: A spurious free dual band microstrip patch antenna for radio frequency energy harvesting. Indian J. Sci. Technol. 15(7), 266–275 (2022). https://doi.org/10.17485/IJST/v15i7.2025
3. Chindhi, P., Rajani, H.P., Kalkhambkar, G., Kandasamy, N.: A compact corner truncated microstrip patch antenna for radio frequency energy harvesting to low power electronic devices and wireless sensors. In: Printed Antennas: Design and Challenges. CRC Press (2022). ISBN: 9781000801125. https://doi.org/10.1201/9781003347057-4
4. Olowoleni, J.O., Awosope, C.O.A., Adoghe, A.U., Obinna, O., Udo, U.E.: Design and simulation of a novel 3-point star rectifying antenna for RF energy harvesting at 2.4 GHz. Cogent Eng. 8, 1943153 (2021)
5. Chindhi, P.S., Rajani, H.P., Kalkhambkar, G.B., Khanai, R.: Characteristics mode analysis of modified inset-fed microstrip antenna for radio frequency energy harvesting. Biosc. Biotech. Res. Comm. 13(13), 171–176 (2020). https://doi.org/10.21786/bbrc/13.13/24
6. Savitri, I., Anwar, R., Amrullah, Y.S., Nurmantris, D.A.: Development of large aperture microstrip antenna for radio wave energy harvesting. Prog. Electromagnet. Res. Lett. 74, 137–143 (2018). https://doi.org/10.2528/PIERL18030305
7. Ojha, S.S., Singhal, P.K., Thakare, V.V.: Dual-band rectenna system for biomedical wireless applications. Meas. Sens. 24, 100532 (2022). https://doi.org/10.1016/j.measen.2022.100532
8. Surender, D., Khan, T., Talukdar, F.A.: A hexagonal-shaped microstrip patch antenna with notch included partial ground plane for 2.45 GHz Wi-Fi band RF energy harvesting applications. In: 2020 7th International Conference on Signal Processing and Integrated Networks (SPIN), pp. 966–969 (2020). https://doi.org/10.1109/SPIN48934.2020.9071389
9. Surender, D., Khan, T., Talukdar, F.A.: A pentagon-shaped microstrip patch antenna with slotted ground plane for RF energy harvesting. In: 2020 URSI Regional Conference on Radio Science (URSI-RCRS), pp. 1–4 (2020). https://doi.org/10.23919/URSIRCRS49211.2020.9113536
10. Pandey, R., Shankhwar, A.K., Singh, A.: Design and analysis of rectenna at 2.42 GHz for Wi-Fi energy harvesting. Prog. Electromagnet. Res. C 117, 89–98 (2021). https://doi.org/10.2528/PIERC21100409

Chapter 20

Hierarchical Aadhaar-Based Anonymous eSign Based on Group Signatures Puneet Bakshi and Sukumar Nandi

Abstract In the past few years, besides traditional security requirements, privacy and anonymity have been gaining increased importance. For example, in the case of the online electronic signature service eSign, an eSigner may want to hide his/her identity and sign a document anonymously. At present, though eSign maintains traditional security requirements such as confidentiality, integrity and non-repudiation, anonymity of the eSigner is yet to be realized. This paper proposes an anonymous eSign scheme based on group signatures to achieve anonymity and conditional privacy. Security and performance analysis shows that the scheme is secure and efficient.

20.1 Introduction

In 2009, the Government of India started a mission to assign a unique digital identity to every resident of India [1]. A resident can enroll in the program by providing personal information such as biometrics (fingerprint, iris scan), a photograph, demographic information, an email id, and a phone number. On successful enrollment, the resident is assigned a unique 12-digit number referred to as Aadhaar [2]. The Government of India entrusted UIDAI to implement this mission. UIDAI provides an Aadhaar-based online authentication service that authenticates a resident based on his/her registered biometrics/phone number. Several other online services such as eSign [3] and DigiLocker [4] are built on this Aadhaar-based authentication service.

The Information Technology Act 2000 (ITA-2000) [5] is the primary legal act in India for electronic commerce. ITA-2000 provides legal sanctity to electronic signatures and electronic signature certificates; with its passage, electronic signatures are considered at par with the corresponding handwritten signatures. ITA-2000 also introduced the Controller of Certifying Authority (CCA) [6], which regulates the operations of certification authorities in India and established the Root Certifying Authority of India (RCAI) to electronically sign digital documents. Only designated agencies, referred to as eSign Service Providers (ESPs), can provide the electronic signature service to applications. An Application Service Provider (ASP) integrates with an ESP to provide the electronic signature service to the end users of its applications. eSign in India uses the Public Key Infrastructure (PKI) established by the CCA. At present, every eSigned document is endorsed with the identity information of the signer, and it is difficult for the signer to eSign anonymously.

In recent times, other than traditional security requirements, privacy and anonymity are also receiving increased attention. One way to achieve anonymity is to let a group of members be created for a defined purpose. An anonymous eSign scheme should facilitate any member of the group to sign as a group member without disclosing his/her identity. At the same time, the signature should be verifiable to ensure that it is indeed made by a valid group member. The group signature [7] is a scheme in which any member of a group can sign on behalf of the group. The scheme has three salient properties: first, it ensures that only members of the group can produce such signatures; second, the validity of a signature can be verified without knowing which member produced it; third, in case of a dispute, the member who produced the signature can be identified. In general, a group signature scheme has three participants: the group manager, the group members, and the signature verifier. Although Aadhaar-based services use pseudonyms and virtual identifiers, their capabilities to provide privacy and anonymity are very limited.

P. Bakshi (B) · S. Nandi
Indian Institute of Technology, 781039 Guwahati, Assam, India
e-mail: [email protected]
S. Nandi
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_20
Though group signature schemes can be used to provide anonymity in the present eSign scheme, there are challenges such as multiple authoritative groups, delegation of group creation authority, key management, and how to adapt such a scheme to the present model of eSign. This paper presents an eSign scheme based on group signatures which lets users sign anonymously and lets the authority of group creation percolate down from the apex member of the organization to the approved members, from approved members to others, and so on. The proposed scheme consists of the following phases: system initialization, formation of the initial set of groups, enrolling members into a group, anonymous eSign by a group member, and eSign verification. If permitted, a group member can further create a second level group, and so on. The proposed scheme is a hierarchical Aadhaar-based anonymous scheme in which the apex member of the organization creates an initial set of groups and is the designated person who approves the members of these groups. The group members can further create subgroups and approve the membership of other members in their respective groups, but the formation of a group requires approval from the apex member.

The paper is organized as follows. Section 20.2 presents some of the related work. Section 20.3 presents preliminaries such as eSign, signatures of knowledge, and group signatures. Section 20.4 presents the proposed scheme and its protocols: system initialization, registration of the group manager, signature by the group manager, verification of the signature, opening of the signature, registration of a member in a second level group, signature by


the second level group member, and the signature verification. Section 20.5 presents the security and the efficiency considerations, and Sect. 20.6 presents the conclusion of the paper.

20.2 Related Work

Privacy- and anonymity-related security requirements are yet to be fully addressed in Aadhaar-based services. Most Aadhaar-based services, including eSign, are built using PKI [8], in which each subscriber has descriptive information and is associated with a private key and a corresponding public key. A Certifying Authority (CA) issues a Digital Signature Certificate (DSC) which attests the public key with the subscriber's descriptive information. The present model of eSign ensures partial privacy of the document by using a cryptographic hash of the document instead of the whole document. However, privacy of the identity of the user still needs to be addressed. In [9], the author provides a scheme to improve the privacy of the eSigner using an attribute-based scheme: multiple authorities can assign attributes to a user, and the user can eSign using these attributes without revealing his/her actual identity. In another scheme [10], the author used digital tokens to encode the privacy rules of the users. Using these rules, a user can govern the usage of his/her data, such as what data can be used, for what purpose, by whom, and for how long.

The group signature scheme was first introduced by Chaum and van Heyst [11]. One limitation of that scheme was that the size of a group signature is linear in the size of the group. Later, many authors proposed schemes in which the size of the signature is independent of the size of the group. One prominent scheme was proposed by Camenisch and Groth [12]; another is the short group signature scheme proposed by Boneh et al. [13]. Since its introduction, the group signature and its variants have been used considerably in various contexts to minimize the disclosure of personal information. For example, [14–16] used it to improve privacy in vehicular networks, and [17] used it to improve the location privacy of the user in an electronic toll pricing system.

20.3 Preliminaries This section presents some of the preliminaries required for this paper.

20.3.1 eSign

eSign is an Aadhaar-based online electronic signature service in India which is governed by the CCA and involves four major participants, namely the CIDR [18], the ESP, the ASP, and the user. An ASP may request a user to sign a document. The user provides a cryptographic hash of the document to the ASP, which sends the same to the ESP. The ESP authenticates the user using the Aadhaar-based authentication service. Once the user is authenticated, the ESP sends a request to the CIDR for the eKYC information of the user. Once received, the ESP creates the DSC and sends it to the ASP, which in turn provides it to the user. Refer to Fig. 20.1.

Fig. 20.1 eSign data flow (user → ASP → ESP → CIDR)

20.3.2 Signature of Knowledge

In a traditional signature scheme [19], a signature on a message is associated with a public key whose corresponding private key is in the secure possession of the signer. In contrast, in a signature of knowledge scheme [20], a signature on a message is associated with a statement x ∈ L (for an NP language L) whose (hard-to-find) witness w is in the secure possession of the signer.

Definition 1 A signature of knowledge of the discrete logarithm of an element y ∈ G to the base g on a message m is a pair (p, q) ∈ {0, 1}^k × Z*_n satisfying p = H(m||y||g||g^q y^p). Such a signature can be computed only if the secret key x = log_g(y) is known: one chooses r at random from Z*_n and computes p = H(m||y||g||g^r) and q = r − px (mod n).

Definition 2 A signature of knowledge of a double discrete logarithm of an element y ∈ G to the bases g and a on a message m (generally represented as SKLOGLOG[α : y = g^(a^α)](m)) is an (l + 1)-tuple (p, q_1, ..., q_l) ∈ {0, 1}^k × Z^l (where l ≤ k is a security parameter) satisfying p = H(m||y||g||a||t_1||...||t_l), with

    t_i = g^(a^{q_i})  if p[i] = 0,
    t_i = y^(a^{q_i})  otherwise.

Such a signature can be computed only if the double discrete logarithm x of the group element y to the bases g and a is known.
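To make Definition 1 concrete, the following toy sketch implements a Schnorr-style signature of knowledge of a discrete logarithm. The parameters are illustrative assumptions, not the chapter's actual setup: a deliberately tiny prime-order subgroup (modulus 23, generator 4) and SHA-256 as H; real deployments require cryptographically large parameters.

```python
import hashlib

# Toy parameters (assumption: tiny group for illustration only).
P = 23   # modulus of the group G
Q = 11   # order of the subgroup generated by g
g = 4    # generator of that order-11 subgroup

def H(*parts):
    # Hash of "||"-concatenated values, as in p = H(m||y||g||...)
    data = "||".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def sk_sign(m, x, y, r):
    # Definition 1: p = H(m||y||g||g^r), q = r - p*x (mod Q)
    p_val = H(m, y, g, pow(g, r, P))
    q_val = (r - p_val * x) % Q
    return p_val, q_val

def sk_verify(m, y, p_val, q_val):
    # Accept iff p = H(m||y||g||g^q * y^p)
    return p_val == H(m, y, g, (pow(g, q_val, P) * pow(y, p_val, P)) % P)

x = 7                  # secret: x = log_g(y)
y = pow(g, x, P)
p_val, q_val = sk_sign("message", x, y, r=5)
assert sk_verify("message", y, p_val, q_val)
```

Only the holder of x can produce a pair that verifies; changing the message or y breaks the hash equality.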


20.3.3 Group Signature

A group signature scheme comprises four methods. First is the Setup method, which generates the public key of the group, the secret keys of all group members and the secret key of the group manager. Second is the Sign method, which takes as input a message and the secret key of a group member and returns a signature on the message. Third is the Verify method, which takes as input a message, a signature and the group's public key and returns whether the signature is a valid group signature on the message. Fourth is the Open method, which takes as input a signature and the group manager's secret key and returns the identity of the group member who signed the message. The communication channel between the group manager and the members of the group is assumed to be secure.

A group signature scheme must satisfy the following three properties. First, only group members can sign messages. Second, it is not possible to find either which group member signed a message (anonymity) or whether two signatures were issued by the same group member (unlinkability). Third, it is not possible for a group member either to prevent the opening of a signature or to sign on behalf of another group member (including the group manager).

20.4 Proposed Scheme This section presents the proposed scheme of anonymous eSign based on group signatures.

20.4.1 Architecture

CIDR is proposed to maintain an organization-wide object which can be modified only by the apex member of the organization. This object maintains public information about groups, such as public parameters and pointers to the group managers. CIDR is also proposed to expand the eKYC object to include secret information such as the private key and the membership certificate for each participating group. If user U_i is a member of group G_j, (s)he keeps the private key x_ij and the membership certificate MC_ij in the extended eKYC_i object. If user U_i is the group manager of group G_j, (s)he keeps the private key d_ij in the extended eKYC_i object (Fig. 20.2).

20.4.2 System Initialization

The apex member of organization i, such as the Director General (DG), instantiates the organizational object objORG_i. This object includes public information about groups, such as the group name, group purpose, public parameters, and pointers to the respective group managers. To instantiate this object, the apex member sends a request to the ASP. The request is forwarded to the ESP, which authenticates the apex member and takes the following actions on his/her behalf: it creates a private key SK_DG, stores the key in the extended eKYC_DG, creates an organization object objORG_i, publishes the corresponding public key PB_DG in objORG_i, sets the version of the object to an initial version, and signs the object with SK_DG.

Fig. 20.2 OrgOBJ and extended eKYC

Once objORG_i is instantiated, the apex member may want to create some initial groups, for example, G_EMP for employees, G_SDIR for senior officials, G_Di for department i, etc. To create a group, the apex member sends a request to an ASP. The request is forwarded to the ESP, which authenticates the apex member and takes the following actions on his/her behalf: it chooses the group's private key d_Gi, stores it in the extended eKYC_DG, computes the group's public key Pb_Gi, publishes it in objORG_i, increments the version of objORG_i, and signs this object with the private key of the apex member, i.e., SK_DG.

Fig. 20.3 System initialization (the DG creates the organizational groups via ASP/ESP; the public parameters are published in the organizational eKYC object)

ESP computes the public key Pb_Gi of group G_i in the following steps (Fig. 20.3).

1. Choose an RSA public key (n, e) corresponding to the private key d_Gi.
2. Choose a cyclic group G of order n in which computing discrete logarithms is infeasible. Let g be a generator of the group.
3. Choose an element a ∈ Z*_n such that it is of large multiplicative order modulo both prime factors of n.
4. Select an upper bound λ on the length of the secret keys and a constant ε > 1. These parameters are required for the SKLOGLOG signatures.
5. The group's public key is Pb_Gi = (n, e, G, g, a, λ, ε).
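The RSA part of this setup (steps 1 and 5) can be sketched as follows. Everything here is an illustrative assumption: the tiny primes, the placeholder values for g and a, and the bounds λ and ε; a real system chooses these parameters cryptographically.

```python
from math import gcd

# Toy Setup sketch: derive an RSA pair (n, e) with private exponent d_Gi,
# then assemble the group's public parameters Pb_Gi (step 5).
# Assumptions: tiny primes and placeholder g, a, lambda, epsilon.
p, q = 61, 53
n = p * q                      # RSA modulus (3233)
phi = (p - 1) * (q - 1)
e = 17
assert gcd(e, phi) == 1        # e must be invertible mod phi(n)
d_Gi = pow(e, -1, phi)         # group's private key (step 1)

g, a = 2, 5                    # placeholders for steps 2-3
lam, eps = 16, 2.0             # bounds for SKLOGLOG signatures (step 4)
Pb_Gi = (n, e, g, a, lam, eps) # group's public key (step 5)

# Sanity check: d_Gi really inverts e (RSA round trip on a residue)
assert pow(pow(42, e, n), d_Gi, n) == 42
```

Only d_Gi stays secret with the group manager; the tuple Pb_Gi is published in objORG_i.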

20.4.3 Registration of Members (in First Level Group)

To join a group, member M_i sends a request to the ASP, which is forwarded to the ESP, which authenticates the member and takes the following actions on the member's behalf.

1. Generates a private key SK_Mi.
2. Generates a secret x ∈_R {0, ..., 2^λ − 1}.
3. Computes y = a^x (mod n) and the membership key z = g^y, and signs them using the private key of the member, SK_Mi, to yield {y}_SK_Mi and {z}_SK_Mi.
4. Finds the group manager GM_i of the group to be joined from the organizational object objORG_i.
5. Sends a membership approval request including {y}_SK_Mi and {z}_SK_Mi to GM_i.

This can be implemented by maintaining a queue of pending membership requests which is monitored periodically by a designated ESP. When the ESP designated for a group manager GM_i finds a new membership request pending in the queue, the ESP takes the following actions (Fig. 20.4).

Fig. 20.4 Group manager registration (message sequence: RegisterInGroupReq from M_i via ASP/ESP, authentication, eKYC verification by the DG, issuance of the membership certificate MC′, and storage of x′ and MC′ in eKYC → pvt)


1. Using the received {y}_SK_Mi and {z}_SK_Mi, verifies that the requesting member knows the discrete logarithm of y to the base a.
2. Provides a notification to the group manager GM_i for pending membership approval from member M_i.
3. If approved, computes the membership certificate

    MC_i′ = (y + 1)^(1/e) (mod n).

4. The membership certificate is returned to the member M_i.

It should be noted that it is not feasible to construct such a triple (x, y, MC_i′) without the help of the group manager. When ESP_Mi finds an approval response, it stores x and MC_i′ in eKYC → pvt.
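Computing (y + 1)^(1/e) (mod n) is exactly an RSA decryption with the group's private exponent, so only the group manager can issue certificates, while anyone can check one against the public key (n, e). The parameters below (tiny primes, small member secret) are illustrative assumptions only.

```python
# Toy issuance of MC' = (y + 1)^(1/e) (mod n): the e-th root is computed
# with the group's RSA private exponent d, known only to the group manager.
# Assumptions: tiny primes and a small member secret, for illustration.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)           # group manager's secret

a = 5                         # public base from the group parameters
x = 9                         # member's secret key
y = pow(a, x, n)              # y = a^x mod n, sent (signed) to the manager
MC = pow(y + 1, d, n)         # membership certificate MC'

# Anyone holding the public key (n, e) can verify the certificate:
assert pow(MC, e, n) == (y + 1) % n
```

Without d, extracting an e-th root modulo n is as hard as breaking RSA, which is why a member cannot forge a certificate on his/her own.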

20.4.4 Signatures by Members (of First Level Group)

To sign a message m, a group member M_i sends a request to the ASP, which is forwarded to the ESP, which authenticates the member and takes the following actions on the member's behalf (Fig. 20.5).

1. Computes ḡ = g^r, where r ∈_R Z*_n.
2. Computes z̄ = ḡ^y.
3. Computes V1 = SKLOGLOG[α : z̄ = ḡ^(a^α)].
4. Computes V2 = SKROOTLOG[β : z̄ḡ = ḡ^(β^e)].
5. The signature on message m is σ = (ḡ, z̄, V1, V2).

20.4.5 Signature Verification (of First Level Group Signer)

The correctness of the signature σ can be verified by checking the validity of the signatures of knowledge V1 and V2 against the corresponding public parameters of the group published in OrgOBJ_i. The signature proves that the member belongs to the group for two reasons. First, V1 implies that z̄ḡ must be of the form z̄ḡ = ḡ^(a^α + 1) for an integer α the member knows. Second, V2 implies that the member knows an eth root of (a^α + 1), which means that the member is in possession of the secret key and the corresponding membership certificate (Fig. 20.6).


Fig. 20.5 Group manager signature

20.4.6 Group Manager Signature Opening

Finding whether two signatures σ1 = (ḡ, z̄, V1, V2) and σ2 = (ḡ′, z̄′, V1′, V2′) were issued by the same group member is possible only if one can decide whether log_ḡ z̄ = log_ḡ′ z̄′, which is an infeasible problem in general, thus ensuring anonymity and unlinkability. In contrast, since the group manager holds the relatively few discrete logarithms (to the base g) of the membership keys, (s)he can decide this. Given only a signature (ḡ, z̄, V1, V2) for a message m, the group manager can find the group member who issued it by the following test


Fig. 20.6 Group manager signature verification

    ḡ^(Y_P) ?= z̄   for all group members P,

where Y_P denotes the discrete logarithm of P's membership key z_P to the base g.
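The opening test can be sketched directly: the manager, who learned each member's membership-key exponent Y_P at registration, raises the signature's ḡ to every Y_P until z̄ matches. The group parameters and member table below are toy assumptions for illustration.

```python
# Toy signature opening. Assumptions: a tiny prime-order group and a
# manager-side table mapping member ids to their exponents Y_P.
P_MOD = 23                     # group modulus
g = 4                          # generator of the order-11 subgroup
members = {"M1": 7, "M2": 9}   # member id -> Y_P known to the manager

def sign_blinded(y, r):
    # The signature's blinded pair: gbar = g^r, zbar = gbar^y
    gbar = pow(g, r, P_MOD)
    return gbar, pow(gbar, y, P_MOD)

def open_signature(gbar, zbar):
    # Manager tests gbar^{Y_P} == zbar for each registered member
    for name, Y in members.items():
        if pow(gbar, Y, P_MOD) == zbar:
            return name
    return None

gbar, zbar = sign_blinded(members["M2"], r=5)
assert open_signature(gbar, zbar) == "M2"
```

Without the table of Y_P values, deciding whether two (ḡ, z̄) pairs share an exponent is a discrete-logarithm-type problem, which is what preserves anonymity against everyone except the manager.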

20.4.7 Creation of Second Level Group

The procedure to create a second level group is similar to the one explained in Sect. 20.4.2. To create a second level group, the group member M_i sends a request to an ASP. The request is forwarded to the ESP, which authenticates the member M_i and takes the following actions on the member's behalf: it chooses the group's private key d_Gi, stores it in the extended eKYC_Mi, computes the group's public key Pb_Gi, publishes it in objORG_i, increments the version of objORG_i, and signs this object with the member's private key, i.e., SK_Mi. The ESP computes the public key Pb_Gi of group G_i as explained in Sect. 20.4.2.

20.4.8 Registration of Members (in Second Level Group)

To join a second level group, a member sends a request to the ASP, which is forwarded to ESP_M, which authenticates the member and takes the following actions on the member's behalf.

1. Generates a private key SK_Ui.
2. Generates a secret x ∈_R {0, ..., 2^λ − 1}.


3. Computes y = a^x (mod n) and the membership key z = g^y. Signs them using the private key to yield {y}SK_Ui and {z}SK_Ui.
4. Finds the group manager GM_i from the organizational object objORG_i.
5. Sends a membership approval request including {y}SK_Ui and {z}SK_Ui to GM_i.

This can be implemented by maintaining a queue of pending membership requests which is monitored periodically by a designated ESP_GM. When the ESP_GM designated for a group manager GM_i finds a new membership request pending in the queue, the ESP_GM takes the following actions.

1. Using the received {y}SK_Ui and {z}SK_Ui, verifies that the requesting member knows the discrete logarithm of y to the base a.
2. Provides a notification to the group manager for the pending membership approval from the member.
3. If approved, computes the following membership certificate:

   MC = (MC′ + 1)^(1/e₂) = ((y′ + 1)^(1/e₁) + 1)^(1/e₂)

4. The membership certificate is returned to the member. When ESP_M finds an approval response, it stores x and MC in its eKYC → pvt (Fig. 20.7).
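For intuition, the registration computation in step 3 above (y = a^x mod n and the membership key z = g^y mod n) can be reproduced with Python's built-in three-argument modular exponentiation. All numbers below are toy values chosen purely for illustration, not parameters of the scheme; a real deployment would use a cryptographically sized RSA modulus.

```python
# Toy illustration of the registration computation: y = a^x (mod n), z = g^y (mod n).
# All values are small demonstration numbers, NOT secure parameters.
n = 3233   # toy RSA modulus (61 * 53); real schemes use moduli of >= 2048 bits
a = 5      # public base for the member's secret exponent
g = 7      # public base for membership keys
x = 123    # member's secret, drawn from {0, ..., 2^lambda - 1} in the scheme

y = pow(a, x, n)   # y = a^x mod n
z = pow(g, y, n)   # membership key z = g^y mod n
```

Only y and z (signed with the member's private key) leave the member's side; the secret x stays in the member's eKYC.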

20.4.9 Signatures by Members (of Second Level Group)

To sign a message m, a second level group member sends a request to the ASP, which is forwarded to the ESP, which authenticates the member and takes the following actions on the member's behalf (Fig. 20.8).

1. Computes DocHash.
2. Computes ḡ′ = g^r, where r ∈_R Z*_n.
3. Computes z̄′ = (ḡ′)^(y′).
4. Computes V1 = SKLL(x′ : z̄′ = (ḡ′)^(a^x′)).
5. Computes V2 = SKLL(β′ : z̄′ḡ′ = (ḡ′)^(β′^e₂)).
6. Computes V3 = SKLL(x : z̄ = ḡ^(a^x)).
7. Computes V4 = SKLL(β : z̄ḡ = ḡ^(β^e₁)).
8. The signature on message m is σ = ⟨ḡ′, z̄′, V1, V2, V3, V4⟩.

20 Hierarchical Aadhaar-Based Anonymous eSign Based on Group Signatures

Fig. 20.7 Member registration


Fig. 20.8 Member signature

20.4.10 Signature Verification (of Second Level Group Signer)

The correctness of the signature σ can be verified by checking the validity of the signatures of knowledge V1, V2, V3 and V4 against the corresponding public parameters of the group published in OrgOBJ_i. The signature proves that the member belongs to the group for four reasons. First, V1 implies that z̄ḡ must be of the form z̄ḡ = ḡ^(a^α + 1) for an integer α the member knows. Second, V2 implies that the member knows an e-th root of (a^α + 1), which means that the member is in possession of the secret key and the corresponding membership certificate. Third, V3 implies that z̄′ḡ′ must be of the form z̄′ḡ′ = (ḡ′)^(a^α + 1) for an integer α the member knows. Fourth, V4 implies that the member knows an e²-th root of (a^α + 1), which means that the member is in possession of the secret key and the corresponding membership certificate (Fig. 20.9).

20.5 Security Considerations

The security of the proposed scheme is based primarily on the discrete logarithm problem and on the security of the RSA [19] and Schnorr [21] signature schemes. If the factorization of the modulus n is unknown, computing membership certificates


Fig. 20.9 Member signature verification

is infeasible. The unlinkability of the members in the proposed scheme rests on the hardness of deciding equality between two discrete logarithms. Anonymity is ensured by the property of the group signature which makes it infeasible to determine which group member has signed the message.

20.6 Conclusion

eSign is an online Aadhaar-based electronic signature service in India. Though eSign ensures traditional security requirements such as confidentiality, integrity, and availability, privacy and anonymity are yet to be addressed. An eSigner may want to sign a document anonymously. This paper proposed an Aadhaar-based anonymous eSign scheme based on group signatures. The security of the proposed scheme is based on the discrete logarithm problem.

References

1. Digital India. Government of India. https://www.digitalindia.gov.in (2021)
2. Rao, U., Nair, V.: Aadhaar: governing with biometrics. South Asia J. South Asian Stud. 42(3), 469–481 (2019)
3. eSign. Controller of Certifying Authorities. https://cca.gov.in/eSign.html (2021)
4. DigiLocker. Government of India. https://www.digilocker.gov.in/dashboard (2021)
5. The Information Technology Act 2000. Government of India. https://www.meity.gov.in/writereaddata/files/itbill2000.pdf (2021)
6. Controller of Certifying Authorities. Government of India. https://cca.gov.in (2021)
7. Chaum, D., van Heyst, E.: Group signatures. In: Workshop on the Theory and Application of Cryptographic Techniques. Springer, Berlin, Heidelberg (1991)


8. Diffie, W., Hellman, M.: New directions in cryptography. IEEE Trans. Inf. Theor. 22(6), 644–654 (1976)
9. Bakshi, P., Nandi, S.: Privacy enhanced attribute based eSign. In: CS & IT Conference Proceedings, vol. 10, no. 4 (2020)
10. Bakshi, P., Nandi, S.: Using Privacy Enhancing and Fine-Grained Access Controlled eKYC to Implement Privacy Aware eSign
11. Chaum, D., van Heyst, E.: Group signatures. In: Workshop on the Theory and Application of Cryptographic Techniques. Springer, Berlin, Heidelberg (1991)
12. Camenisch, J., Stadler, M.: Efficient group signature schemes for large groups. In: Annual International Cryptology Conference. Springer, Berlin, Heidelberg (1997)
13. Boneh, D., Boyen, X., Shacham, H.: Short group signatures. In: Annual International Cryptology Conference. Springer, Berlin, Heidelberg (2004)
14. Guo, J., Baugh, J.P., Wang, S.: A group signature based secure and privacy-preserving vehicular communication framework. In: 2007 Mobile Networking for Vehicular Environments. IEEE (2007)
15. Zhang, J., et al.: Privacy-preserving authentication based on short group signature in vehicular networks. In: The First International Symposium on Data, Privacy, and E-Commerce (ISDPE 2007). IEEE (2007)
16. Chaurasia, B.K., Verma, S., Bhasker, S.M.: Message broadcast in VANETs using group signature. In: 2008 Fourth International Conference on Wireless Communication and Sensor Networks. IEEE (2008)
17. Chen, X., et al.: A group signature based electronic toll pricing system. In: 2012 Seventh International Conference on Availability, Reliability and Security. IEEE (2012)
18. Aadhaar Authentication API Specification. UIDAI. https://uidai.gov.in/images/FrontPageUpdates/aadhaar_authentication_api_2_0.pdf (2021)
19. Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 21(2), 120–126 (1978)
20. Camenisch, J., Stadler, M.: Efficient group signature schemes for large groups. In: Annual International Cryptology Conference. Springer, Berlin, Heidelberg (1997)
21. Schnorr, C.-P.: Efficient signature generation by smart cards. J. Cryptol. 4(3), 161–174 (1991)

Chapter 21

Swarm Intelligence for Estimating Model Parameters in Thermodynamic Systems

Swati Yadav, Pragya Palak, and Rakesh Angira

Abstract Parameter estimation from VLE data has drawn ample attention in nonlinear vapor–liquid thermodynamic modeling problems. Traditional optimization methods are very sensitive to the initial guesses of the unknown parameters and often fail to converge to the global optimum of these nonlinear parameter estimation problems. In this work, we demonstrate the application of a swarm intelligence-based algorithm, the Particle Swarm Optimization (PSO) algorithm. It solves nonlinear parameter estimation problems efficiently and finds the global optimum with high probability. Furthermore, it is not sensitive to the initial estimates of the unknown parameters. PSO is easy to implement and has been empirically shown to perform well on many optimization problems. The results obtained using PSO are compared with those reported in the literature, and new results obtained using PSO are presented.

21.1 Introduction

In numerous engineering applications, parameter estimation is a common problem; it mainly deals with comparing a set of measurements with model-predicted values to obtain the parameter values [1]. Some of the difficulties that may be faced in parameter estimation for VLE modeling are convergence to a local minimum, a flat objective function in the neighborhood of the globally optimal solution, badly scaled model functions, and non-differentiable terms in thermodynamic equations [2]. In the frame of VLE data modeling, recent studies have shown that use of locally optimal parameters may result in incorrect predictions of azeotropic states with local composition models and in qualitative discrepancies of the phase behavior, such as prediction of a spurious phase split and modeling of homogeneous azeotropes as heterogeneous. These failures are indubitably potential sources of problems for the design of separation processes [2, 3].

S. Yadav · P. Palak · R. Angira (B) Process Systems Engineering Laboratory, University School of Chemical Technology, Guru Gobind Singh Indraprastha University, New Delhi 110078, India e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_21


Generally, parameter estimation of thermodynamic models is based on classical least squares or maximum likelihood approaches [4–6]. The formulation of both approaches involves the minimization of an objective function subject to constraints representing the model equations. Specifically, calculation of model parameters is a data fitting process, where the mathematical function is the main equation of the thermodynamic model and the unknowns are the estimated parameters [7]. The activity coefficient models of Margules and NRTL are among the most widely applied chemical thermodynamic models in phase equilibria calculations and materials behavior prediction [8]. Accurate and reliable determination of these model parameters requires experimental data on binary systems; after regressing and fitting such data, these model parameters are obtained for a considerable number of systems [7]. In this paper, we demonstrate the application of the particle swarm optimization (PSO) algorithm for estimation of model parameters of vapor–liquid equilibrium (VLE) thermodynamic problems. PSO has been applied to eight different VLE binary systems at various pressures. The Margules and non-random two liquid (NRTL) models are used for correlation of the experimental VLE data for different binary systems at the equilibrium temperature. The results for the model parameters obtained using PSO are compared with those reported in the literature.

21.2 Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is a relatively new method and, being one of the evolutionary computation techniques, it shares a lot of similarity with them. PSO emulates the swarm behavior of insects, herding animals, flocking birds, and schooling fish, where these swarms search for food in a collaborative manner. Each member of the swarm adapts its search pattern by learning from its own experience and that of other members. These phenomena have been studied and mathematical models constructed from them [9, 10]. The PSO algorithm is easy to implement and has been empirically shown to perform well on many optimization problems. The individuals represent points in the D-dimensional search space, and a particle represents a potential solution. The velocity of each particle is updated as

V_i(t + 1) = w·V_i(t) + c1·r1·(pbest_i − X_i(t)) + c2·r2·(gbest − X_i(t))    (21.1)

and at the next iteration, the position of each particle is updated as

X_i(t + 1) = X_i(t) + V_i(t + 1)    (21.2)

where r1 and r2 are random numbers in the range [0, 1] and the constants w, c1, and c2 are parameters of the PSO algorithm. A schematic representation of the PSO algorithm is presented in Fig. 21.1.
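The update rules (21.1) and (21.2) translate into a short loop. The sketch below is a minimal plain-Python illustration: the coefficient values match those reported later in Sect. 21.4, but the 200-iteration budget and the sphere test function are our own assumptions, not the chapter's MATLAB implementation.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.8, c1=1.5, c2=1.5):
    """Minimal PSO implementing the velocity update (21.1) and position update (21.2)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # best position found by each particle
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (21.1): inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Eq. (21.2): move the particle
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# Illustrative run on the 2-D sphere function (an assumed test problem)
best, best_val = pso(lambda p: sum(v * v for v in p), dim=2, bounds=(-5.0, 5.0))
```

With these settings the swarm contracts toward gbest, and best_val approaches zero for the sphere function.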


Fig. 21.1 Schematic representation of PSO algorithm

21.3 Problem Formulation

An important aspect of a process model is validation, which requires fitting of experimental data by a certain kind of correlation using a global optimization technique. Separation systems contain a variety of vapor–liquid equilibrium (VLE) correlations in their mathematical models that are specific to the binary or multicomponent system of interest. Prediction of vapor–liquid phase equilibrium is of great importance in the design and successful operation of separation systems, as a result of which a lot of attention has been given to modeling the thermodynamics of phase equilibrium in fluid mixtures. These models generally take the form of excess Gibbs energy models or equation of state models, with model parameters obtained by parameter estimation from binary experimental data.


We consider here the estimation of parameters in eight binary VLE systems using two thermodynamic models, the liquid phase activity coefficients of which are as follows:

I. Margules Equation. The Margules equation is a simple empirical model introduced in 1895 by Max Margules [11] for the correlation of the excess Gibbs free energy of a liquid mixture with activity coefficients. For a binary system, the Margules equation is

G^E/(RT) = (A21·x1 + A12·x2)·x1·x2    (21.3)

ln γ1 = x2²·[A12 + 2(A21 − A12)·x1]    (21.4)

ln γ2 = x1²·[A21 + 2(A12 − A21)·x2]    (21.5)

where γ1 and γ2 denote the liquid-phase activity coefficients of components 1 and 2, respectively. The Margules parameters A12 and A21 are expressed by

A12 = (2 − 1/x2)·(ln γ1)/x2 + 2·(ln γ2)/x1    (21.6)

A21 = (2 − 1/x1)·(ln γ2)/x1 + 2·(ln γ1)/x2    (21.7)
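Equations (21.4)–(21.7) can be checked numerically. The Python sketch below is our illustration, not code from the chapter: it evaluates the activity coefficients for assumed parameter values and then recovers A12 and A21 from a single (x1, γ1, γ2) point via Eqs. (21.6)–(21.7).

```python
def margules_lngamma(x1, A12, A21):
    """ln(gamma1) and ln(gamma2) from the two-parameter Margules model, Eqs. (21.4)-(21.5)."""
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1 ** 2 * (A21 + 2.0 * (A12 - A21) * x2)
    return ln_g1, ln_g2

def margules_params(x1, ln_g1, ln_g2):
    """A12 and A21 recovered from one (x1, gamma1, gamma2) point, Eqs. (21.6)-(21.7)."""
    x2 = 1.0 - x1
    A12 = (2.0 - 1.0 / x2) * ln_g1 / x2 + 2.0 * ln_g2 / x1
    A21 = (2.0 - 1.0 / x1) * ln_g2 / x1 + 2.0 * ln_g1 / x2
    return A12, A21

# Round trip with assumed parameters: (A12, A21) = (0.5, 1.0) -> ln(gammas) -> (0.5, 1.0)
lg1, lg2 = margules_lngamma(0.4, A12=0.5, A21=1.0)
A12, A21 = margules_params(0.4, lg1, lg2)
```

The round trip recovering the assumed (A12, A21) is a useful sanity check on Eqs. (21.6)–(21.7) before they are used in a fit.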

II. NRTL Equation. The non-random two liquid (NRTL) equation is a local composition model introduced by Renon and Prausnitz [12] in 1968 for the correlation of activity coefficients of binary systems and is used widely in phase equilibria calculations. The NRTL equation for a binary system is

G^E/(x1·x2·RT) = τ21·G21/(x1 + x2·G21) + τ12·G12/(x2 + x1·G12)    (21.8)

ln γ1 = x2²·[τ21·(G21/(x1 + x2·G21))² + τ12·G12/(x2 + x1·G12)²]    (21.9)

ln γ2 = x1²·[τ12·(G12/(x2 + x1·G12))² + τ21·G21/(x1 + x2·G21)²]    (21.10)

G12 = exp(−α·τ12)    G21 = exp(−α·τ21)    (21.11)

τ12 = b12/(RT)    τ21 = b21/(RT)    (21.12)


Table 21.1 Binary systems used for testing the PSO algorithm

S. No | System                              | N  | T (°C) | References
P1    | Methyl ethyl ketone(1) + toluene(2) | 11 | 50     | [8]
P2    | Chloroform(1) + 1,4-dioxane(2)      | 13 | 50     | [8]
P3    | Diethyl ketone(1) + n-hexane(2)     | 13 | 65     | [8]
P4    | 5-Nonanone(1) + n-hexane(2)         | 20 | 60     | [13]
P5    | 5-Nonanone(1) + 1-hexene(2)         | 10 | 60     | [13]
P6    | 5-Nonanone(1) + DMSO(2)             | 15 | 60     | [13]
P7    | 5-Nonanone(1) + DMSO(2)             | 15 | 70     | [13]
P8    | 5-Nonanone(1) + DMSO(2)             | 15 | 80     | [13]

where τ12 and τ21 are the NRTL parameters. Also, α, b12, and b21 are parameters specific to a particular pair of species and independent of composition and temperature. The value of α varies between 0.25 and 0.50. The isothermal binary systems used in this work are listed in Table 21.1.
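Likewise, Eqs. (21.9)–(21.11) are straightforward to code. The following Python sketch is our illustration with arbitrary parameter values; τ12 and τ21 are passed directly rather than computed from b12, b21, and T via Eq. (21.12).

```python
import math

def nrtl_lngamma(x1, tau12, tau21, alpha):
    """ln(gamma1) and ln(gamma2) from the NRTL model, Eqs. (21.9)-(21.11)."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)   # Eq. (21.11)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)   # Eq. (21.9)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)   # Eq. (21.10)
    return ln_g1, ln_g2

# At infinite dilution of component 1 (x1 -> 0), Eq. (21.9) reduces to
# ln(gamma1_inf) = tau21 + tau12 * exp(-alpha * tau12), a handy check.
lg1_inf, _ = nrtl_lngamma(1e-9, tau12=0.6942, tau21=1.7746, alpha=0.4)
```

The infinite-dilution limit is a convenient unit test: it depends only on the parameters, not on composition.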

21.4 Results and Discussion

This section presents the results obtained using the PSO algorithm on eight isothermal VLE thermodynamic systems at various pressures and their comparison with those reported in the literature. All the experimental data used in this study have been taken from the literature cited in Tables 21.1 and 21.2 under the head References. The PSO parameters used in the present study are as follows: particle size (NP) = 30, inertia factor (w) = 0.8, and acceleration coefficients (c1 and c2) = 1.5. For each problem, 100 simulation runs were carried out using MATLAB. Execution of the algorithm terminates automatically when the maximum number of generations (i.e., 2000) has been completed. The number of runs converged (NRC) has been found to be 100% for all the thermodynamic systems. The activity coefficients of the binary systems were correlated by the Margules and non-random two liquid (NRTL) equations for the equilibrium temperature. This correlation procedure was based on the minimization of the objective function given below:

Fobj = Σ_{j=1}^{n} (p_j^calc − p_j^expt)²    (21.13)
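Equation (21.13) is the quantity PSO minimizes over the model parameters. A sketch of its evaluation follows; the pressure model here is a deliberately trivial stand-in (in the chapter, p_calc would come from the Margules or NRTL activity coefficients combined with a vapor-pressure relation), so the function and data below are illustrative assumptions only.

```python
def f_obj(params, x_data, p_expt, p_calc):
    """Eq. (21.13): sum of squared deviations between calculated and
    experimental total pressures over the n data points."""
    return sum((p_calc(params, x) - p) ** 2 for x, p in zip(x_data, p_expt))

# Deliberately trivial stand-in pressure model (NOT a real VLE model):
def toy_p_calc(params, x1):
    a, b = params
    return a + b * x1

x_data = [0.1, 0.4, 0.7]                                # liquid mole fractions
p_expt = [toy_p_calc((30.0, 50.0), x) for x in x_data]  # synthetic "experimental" data
perfect_fit = f_obj((30.0, 50.0), x_data, p_expt, toy_p_calc)  # zero by construction
```

In the actual workflow, f_obj (with the real p_calc) is the function handed to the PSO loop as f, and params are the Margules or NRTL parameters being estimated.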

Table 21.2 shows the comparison of the results obtained using the PSO algorithm with those reported in the literature [8, 13]. In the present study, the objective function value under the head Literature has been calculated using the model parameters reported in [8, 13] for a fair comparison of results. It is clear from Table 21.2 that there is an improvement in the objective function value using PSO for all the problems studied. The percentage


Table 21.2 Comparison of results obtained using PSO with those reported in literature

S. No | Parameters   | PSO: Model parameters  | PSO: Fobj | Literature: Model parameters | Literature: Fobj | Ref
P1    | (A12, A21)   | (0.3473, 0.1955)       | 0.0079    | (0.372, 0.198)               | 0.0419           | [8]
P2    | (A12, A21)   | (−0.7205, −1.3730)     | 0.7199    | (−0.72, −1.27)               | 2.3456           | [8]
P3    | (A12, A21)   | (1.1543, 0.5673)       | 13.5294   | (1.153, 0.596)               | 14.0865          | [8]
P4    | (τ12, τ21)/α | (−0.3147, 0.9776)/0.3  | 41.6454   | (1.197, −0.407)/0.3          | 2404.8           | [13]
P5    | (τ12, τ21)/α | (−0.5218, 1.0046)/0.3  | 10.6482   | (1.178, −0.575)/0.3          | 763.133          | [13]
P6    | (τ12, τ21)/α | (0.6942, 1.7746)/0.4   | 0.0857    | (0.675, 1.75)/0.4            | 0.1164           | [13]
P7    | (τ12, τ21)/α | (0.5921, 1.7134)/0.4   | 0.0711    | (0.583, 1.727)/0.4           | 0.072            | [13]
P8    | (τ12, τ21)/α | (0.4734, 1.7085)/0.4   | 0.1144    | (0.473, 1.701)/0.4           | 0.1199           | [13]

improvement in the objective function value has been found to be 81.14%, 69.30%, 98.26%, 98.60%, and 26.37% for systems P1, P2, P4, P5, and P6, respectively. Also, it is to be noted that the model parameters for systems P4 and P5 are quite different from those reported in the literature. Figure 21.2 shows the deviations between the experimental and predicted/calculated values of total pressure for four of the thermodynamic systems. From Fig. 21.2, it can be observed that systems P2, P4, P5, and P6 have high deviations in the calculated value of total pressure; this is further supported by the objective function values obtained for these systems. Figure 21.3 shows the convergence history for a typical run on five thermodynamic systems considered in the present study. It indicates that the global optimum value is reached within 100 to 200 iterations of the PSO algorithm.


[Fig. 21.2 Experimental and calculated pressures for P2, P4, P5, and P6: panels (a) P2, (b) P4, (c) P5, and (d) P6 each plot pressure against liquid mole fraction for Pexpt, Pcalc(PSO), and Pcalc(lit.)]

21.5 Conclusion

This work presents the application of PSO for estimating model parameters in VLE thermodynamic systems. The results show that the model parameters for P1, P2, P3, P7, and P8 are approximately the same as those reported in the literature, but the model parameters for P4, P5, and P6 are quite different from, and more accurate than, those reported in the literature. This


[Fig. 21.3 Convergence history of P1, P3, P5, P6, and P7: panels (a) P1, (b) P3, (c) P5, (d) P6, and (e) P7]

study clearly indicates the advantage of using the PSO algorithm for obtaining correct values of VLE model parameters. Hence, PSO has the potential to solve more complex thermodynamic problems.

Nomenclature

γi         Liquid phase activity coefficient for component i
xi         Liquid phase mole fraction of component i
w          Inertia weight constant
c1, c2     Acceleration coefficients
pbest_i    Position of best f(X) value explored by particle i
gbest      Position of best f(X) value explored by all particles in the swarm
p_j^calc   Calculated pressure value
p_j^expt   Experimental pressure value
Fobj       Objective function value to be minimized
N          Number of experimental data points
G^E        Molar excess Gibbs energy
R          The gas constant
T          Temperature


Acknowledgements Financial support from the Guru Gobind Singh Indraprastha University is gratefully acknowledged. The work has been supported under Faculty Research Grant Scheme (FRGS) for the year 2022-23 (F. No. GGSIPU/FRGS/2022/1223/35).

References

1. Bonilla-Petriciolet, A., Bravo-Sanchez, U.I., Castillo-Borja, F., Zapiain-Salinas, J.G., Soto-Bernal, J.J.: The performance of simulated annealing in parameter estimation for vapor-liquid equilibrium modelling. Braz. J. Chem. Eng. 24, 151–162 (2007). https://doi.org/10.1590/s0104-66322007000100014
2. Erodotou, P., Voustas, E., Sarimveis, H.: A genetic algorithm approach for parameter estimation in vapour-liquid thermodynamic modelling problems. Comput. Chem. Eng. 134, 106684 (2020). https://doi.org/10.1016/j.compchemeng.2019.106684
3. Bollas, G.M., Barton, P.I., Mitsos, A.: Bilevel optimization formulation for parameter estimation in vapor-liquid(-liquid) phase equilibrium problems. Chem. Eng. Sci. 64, 1768–1783 (2009). https://doi.org/10.1016/j.ces.2009.01.003
4. Esposito, W.R., Floudas, C.A.: Global optimization in parameter estimation of nonlinear algebraic models via the error-in-variables approach. Ind. Eng. Chem. Res. 37, 1841–1858 (1998). https://doi.org/10.1021/ie970852g
5. Gau, C.Y., Brennecke, J.F., Stadtherr, M.A.: Reliable nonlinear parameter estimation in VLE modelling. Fluid Ph. Equilibria 168, 1–18 (2000). https://doi.org/10.1016/s0378-3812(99)00332-5
6. Gau, C.Y., Stadtherr, M.A.: Deterministic global optimization for error-in-variables parameter estimation. AIChE J. 48, 1192–1197 (2002). https://doi.org/10.1002/aic.690480607
7. Farajnezhad, A., Afshar, O.A., Khansary, M.A., Shirazian, S., Ghadiri, M.: Correlation of interaction parameters in Wilson, NRTL and UNIQUAC models using theoretical methods. Fluid Ph. Equilibria 417, 181–186 (2016). https://doi.org/10.1016/j.fluid.2016.02.041
8. Smith, J.M., Van Ness, H.C., Abbott, M.M.: Introduction to Chemical Engineering Thermodynamics, 7th Int. Edn. McGraw-Hill, Boston (2005), 430–482
9. Liang, J.J., Qin, A.K., Suganthan, P.N., Bhaskar, S.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evolut. Comput. 10, 281–295 (2006). https://doi.org/10.1109/tevc.2005.857610
10. Chakraborty, S., Nair, R.G., Seban, L.: Dye sensitized solar cell parameter extraction using particle swarm optimization. In: Das, B., Patgiri, R., Bandyopadhyay, S., Balas, V.E. (eds.) Modeling, Simulation and Optimization. Smart Innovation, Systems and Technologies, vol. 206. Springer, Singapore (2021). https://doi.org/10.1007/978-981-15-9829-6
11. Margules, M.: Über die Zusammensetzung der gesättigten Dämpfe von Mischungen. Sitzber. Kais. Akad. Wiss. Wien, Math.-Naturwiss. Klasse II 104, 1243–1278 (1895)
12. Renon, H., Prausnitz, J.M.: Local compositions in thermodynamic excess functions for liquid mixtures. AIChE J. 14, 135–144 (1968). https://doi.org/10.1002/aic.690140124
13. Renon, H., Prausnitz, J.M.: Liquid-liquid and vapor-liquid equilibria for binary and ternary systems with dibutyl ketone, dimethyl sulfoxide, n-hexane, and 1-hexene. Ind. Eng. Chem. Process. Des. Dev. 7, 220–225 (1968). https://doi.org/10.1021/i260026a011

Chapter 22

Development of a Protocol on Various IoT-Based Devices Available for Women Safety

Nishant Kulkarni, Shreya Gore, Avanti Dethe, Tushar Dhote, Rohit Dhage, and Manasvi Dhoke

Abstract In today's world, women are confronted with a slew of issues, including harassment, so ensuring the safety of women is of utmost importance. Physical and/or sexual violence is believed to have affected 35% of women at some point in their lives. A protocol has been designed and developed to ensure the safety of women. The device primarily has three features: the first is sending an emergency SMS to friends and family, as well as alerting the nearest police station with the current GPS location so that aid can arrive as soon as possible; the second is a missed call; and the third is audio recording. These features can be used depending on the situation the woman is in. In the future, this technology might be downsized and embedded in clothing, handbags, phones, and other items to make it more portable.

22.1 Introduction

Women's safety has become a serious worry as crimes against women have increased in recent years. In India, women are beginning to escape from their cocoons and chase their dreams. They are becoming more self-reliant and less dependent on male relatives. They have lost their fear. They participate energetically in both domestic and official activities. Simultaneously, crimes against them have escalated, and they are being repressed. According to the most recent statistics, around 34,651 rape incidents are registered each year, with many more cases going unreported. Many women find it impossible to leave their homes or offices alone at night because of such events. Women's safety and security have become a national concern that requires attention [1]. Once again, technology has come to our rescue, and a women's safety device has been invented. This document includes preventive devices that can

N. Kulkarni Department of Mechanical Engineering, Vishwakarma Institute of Technology, Pune, India

S. Gore · A. Dethe (B) · T. Dhote · R. Dhage · M. Dhoke Department of Chemical Engineering, Vishwakarma Institute of Technology, Pune, India e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_22


be utilized by women and girls to fight against the difficulties they face. This device carries three buttons: the first is red, which sends a text message with a location alert to a designated contact number; the second is blue, which sends a missed call alert; the third is green, which can help the victim collect evidence by recording audio.

22.1.1 Literature Review

Monisha et al. [2] proposed FEMME, an ARM controller-based safety application and device designed specifically for women and girls. The appliance is turned on by simultaneously pressing the power button and a key. On launching the app, four key icons appear: hidden camera detection, audio recorder, security optimization service message, and video recording. Each function can be triggered by pressing the corresponding button. The device is connected to the phone and features two buttons: one for emergency use and the other for hidden camera identification.

A miniature device has been proposed by Sog et al. [1]; its small size allows it to be worn as a watch or pendant. It also has a spoken keyword recognition feature that can send a crisis alert notification to the device's contacts.

Chougula et al. [3] suggested a portable gadget that looks like a sash, created following the success of a few apps and devices, such as VithUapp, which was inspired by a popular television show. If the victim is traveling, the GPS module broadcasts the victim's speed and direction as well as the associated longitude and latitude. It aids in tracking the victim and makes it easy to locate them, along with the current date and time. The gadget is activated when a threshold is crossed, and the GPS module updates the location every minute and sends it to pre-determined contacts and other crisis services, such as the police control center. In an emergency, a shrill alarm and sirens will sound, alerting people nearby that there is a problem and that assistance is needed.

Deepali et al. [4] offered a wearable safety device that uses GPS to track the victim's whereabouts; it also contains a camera to capture the assaulter's image and an emergency button for delivering notifications. The purpose of this program is to ensure that kids and working women are safe. As a result, a portable wireless women's safety device and a school bus tracking system were modified to include an electronic camera to capture an image of the incident and an emergency button to warn individuals around. The device offers various advantages as well as limitations; the main limitation is excessive energy usage, and the efficiency of the device can be improved using a battery backup option.


Deepali et al. [4] also offer a smart women's safety device that includes a Raspberry Pi 2 with a Global System for Mobile Communications SIM900A module, a Global Positioning System receiver, live video streaming, and other capabilities. The proposed design is a portable wristband tool with the following features. The Raspberry Pi 2 belongs to a series of credit-card-sized single-board computers produced by the Raspberry Pi Foundation in the United Kingdom. The embedded system integration process is made easier with this module, which supports up to 51 channels. Live video streaming is enabled by setting up Wireless Fidelity (Wi-Fi), using a Logitech C270 webcam, and installing the software on a Raspberry Pi 2 Model B. The advantage here is the ability to watch the live streaming video or download it.

Deepali et al. [4] proposed a portable device with SMS capabilities, alert sensors, and security features that efficiently suits almost all needs. Adding a few sensors, such as pressure sensors, and the ability to locate hidden cameras can be extremely beneficial.

Jain et al. [5] suggested a band model that ensures women's safety. The band has a transmitter with an ARM7 microcontroller, a temperature sensor, a movement sensor, a heart rate sensor, GSM, GPS, and an alert button, to which the electrical power (battery) is linked. This setup works in two ways. First, with the help of a GPS module, the victim's location is tracked and reported to the designated contacts via an emergency message that eventually reaches a Raspberry Pi or laptop; to receive data from the sender, the computer must have an Internet connection, and assistance can then be provided. Second, if there is a hazard, the victim's body may freeze while the sensory nerves react; the sensors continuously relay their values to a microcontroller, which compares them to threshold values. As the Internet is not always available, alarms can be added to make the device more useful. This will alert the victim's surroundings so that instant protection can be provided.

Looking at all the devices and apps developed earlier, it is necessary to have one low-cost and easily operable device to ensure women's safety. The proposed protocol is a step taken to achieve this goal. The objectives behind the development of this protocol are:

1. Defend every woman in need or trouble.
2. Prevent societal crimes.
3. Provide a trustworthy security system for women who are alone or feel insecure.
4. Assist women in becoming more peaceful, understanding, and flexible, in gaining body and mind control, and in becoming more responsive rather than reactive, more observant, and cognitively aware [6].

22.2 Methodology

An Arduino Nano serves as the main board for all three features; a GSM module, a GPS module (NEO-6M), a microSD card, male–female wires, push buttons, and other components are used with it. The initial version of the proposed protocol is developed


using Arduino Nano as the main board because it incorporates all the three features mentioned earlier, the following components are used in the final assembly.

22.2.1 Purpose/Use of Different Components • GSM module: The GSM module allows a computer to communicate with a Global System for mobiles and General Packet Radio Service (GSM-GPRS) system. A SIM card for the GSM is placed into the mobile device for the purpose of transmitting and receiving messages via GPRS. The proposed system saves the number of GSM SIM card. With the rise in popularity of GSM, reciprocation have expanded to include a wide range of specialized applications, machine automation, and machine-to-machine connection in addition to speech. It has a frequency range of 900MHz to 1800MHz. • GPS modules: A position tracker is the Global Positioning System. It keeps track of the current location using longitude and latitude. [1] This information will be used by the GPS which is directly attached to the microcontroller’s USART, to search for a specific address along with the street name and adjacent intersection with actual position. The GPS Coder Module, which is directly connected to the microcontroller’s USART, provides reliable location, navigation, and timing services to consumers continuously on a continuous basis in all weather conditions, at any time of day or night, and anywhere on or near the Earth. If GPS is disconnected, the system will only send text messages with longitude and latitude. As a result, Internet access is necessary. • Micro SD: MicroSD cards are used to store significant amounts of data in devices that profit from their small size. Standard SD cards are used in heavy/ large devices because of their large storage capacity MicroSD cards are preferred in devices like smartphones because of their small size. In this device, for the feature of audio recording, a micro SD card plays an important role. It can store a recording of around 10 min. This section explains the proposed protocol of women’s safety device. The proposed system’s block diagram is shown in Fig. 22.1 Step 1: Depending on the situation, press either of the buttons. 
Step 2: If GPS receives a signal, it will begin computing the victim's current latitude and longitude and send them via SMS to the registered mobile phone number through the GSM module.
Step 3: If the missed-call button is pressed, the GPS module obtains the most recent location and sends it to the GSM module.
Step 4: If the audio button is pressed, the recording begins.

22 Development of a Protocol on Various IoT-Based Devices Available …


Fig. 22.1 Block diagram of the proposed system

22.2.2 Algorithm

SMS and Missed-Call Algorithm:
1. Grant permission for location and SMS.
2. Register an emergency contact number for sending the SMS and the current location of the user.
3. The current location consists of latitude and longitude.
4. The SMS is sent to the registered contact number, e.g.:
   Im in Trouble! Sending My Location: http://maps.google.com/maps?q=loc:18.4678156,73.8679419
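The SMS format used in the last step can be sketched in a few lines of Python (a minimal illustration only; the function name is hypothetical and not part of the device firmware):

```python
def format_distress_sms(lat, lon):
    # Build the distress SMS with a Google Maps location link,
    # mirroring the sample message shown in the algorithm above.
    return ("Im in Trouble! Sending My Location: "
            f"http://maps.google.com/maps?q=loc:{lat},{lon}")

print(format_distress_sms(18.4678156, 73.8679419))
```

On the real device, this string would be handed to the GSM module for transmission.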

Audio Algorithm:
1. Initializing the connection.
2. Setting a time limit of 2 min for the audio to be recorded.
3. Audio files are recorded in a loop process, each of 2 min.
4. The audio file is stored in a memory card.
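As a rough illustration of the loop process above, the 2-min chunks can be modeled as a ring buffer whose capacity matches the card's roughly 10-min limit. The overwrite-oldest behavior and the file names are assumptions for the sketch, not details stated in the paper:

```python
from collections import deque

CHUNK_MIN = 2       # each audio file lasts 2 min (step 2)
CAPACITY_MIN = 10   # the micro SD card stores roughly 10 min of audio

def record_loop(total_minutes):
    # Keep only the newest files once the card is full,
    # discarding the oldest chunk on each overflow.
    card = deque(maxlen=CAPACITY_MIN // CHUNK_MIN)
    for i in range(total_minutes // CHUNK_MIN):
        card.append(f"audio_{i:03d}.wav")  # hypothetical file naming
    return list(card)
```

For example, `record_loop(14)` keeps only the five most recent 2-min files.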

22.2.3 Method

The methods utilized in this paper are depicted in Fig. 22.2. By pressing any of the appropriate buttons, the device can be activated, depending on the situation. The suggested device has three buttons in total. Button one, SMS with location (red),


N. Kulkarni et al.

when pressed, sends an instant location with a distress message to the registered mobile number and even to pre-set local police numbers via a GSM module [1]. This distress message includes the current location of the victim, which is tracked by the Global Positioning System (u-blox) and transferred to the GSM module; the location and default distress message are then sent to the pre-stored contacts. The second, missed-call button, which is blue in color, sends a missed call to the designated single contact number when pressed. At least one contact from the user's end will be available for selection. The audio-recording button, which is green in color, is the last one; evidence is collected with this button. The call is made with a GSM modem, and the audio is recorded with an audio recorder. This GSM modem (SIM900) works as a phone with its own unique number and accepts a SIM card from any GSM network operator. This modem has the advantage of being able to send and receive SMS as well as record audio. It is a complete system, so there is no need to carry several gadgets. When the user presses any of the buttons, the GPS tracking feature keeps track of them, and audio recordings are taken that can be used in future investigations. When the battery is low, the location is automatically sent to the pre-stored contacts [7].
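The three-button behavior described above can be summarized as a simple dispatch table. This is a sketch only; the handler names are illustrative, and on the real device each button is wired to the Arduino and drives the GSM/GPS modules and SD-card recorder:

```python
# Stand-in handlers for the three actions described in the text.
def send_sms_with_location():
    return "distress SMS with GPS location sent"

def give_missed_call():
    return "missed call placed to the registered contact"

def start_audio_recording():
    return "audio recording started"

BUTTONS = {
    "red":   send_sms_with_location,   # button 1: SMS with location
    "blue":  give_missed_call,         # button 2: missed call
    "green": start_audio_recording,    # button 3: audio evidence
}

def on_button_press(color):
    return BUTTONS[color]()
```

A dispatch table like this keeps the firmware's main loop a single lookup per button press.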

22.2.4 Working

Part A: For part A, the simulation is done in Proteus. Here the sound-sensor library has been used with some LEDs and the logic kept straightforward. Since no sound can be created in Proteus, a test pin has been placed: if this test pin is low, no sound or noise is detected in the surroundings; if it is high, the sensor is detecting sound. The output is in digital format. This part is shown below in Figs. 22.3 and 22.4.

Part B: The Global Positioning System (GPS) module, the GSM module, and an Arduino Uno are the three key components of part B. An SMS containing the user's live coordinates, obtained from the GPS receiver module, is sent via the GSM module, with the Arduino as the main board. To operate or send an SMS, the switch-input-based GPS requires a manual effort. This is shown in Fig. 22.5. With just a few simple tweaks, a wide range of sensors can be integrated with the circuit to detect a flame, vibration, or similar events and act automatically on particular events: for instance, using a flame-detection sensor to communicate the position of a fire hazard to a rescue team, or sending the location to a rescue squad when a car accident occurs.


Fig. 22.2 Flow chart of the proposed system

22.3 Results and Discussion

The purpose of the proposed protocol is to keep women safe and secure in an emergency. A woman presses the button when she feels unsafe. On pressing the button, the microcontroller is commanded, and the current latitude and longitude of the victim


Fig. 22.3 Audio button (GREEN)

Fig. 22.4 Circuit for the audio button (sound sensor module, Arduino Nano, SD card module)

are calculated by GPS. The registered phone and the nearby police station's phone will receive SMS messages with the latitude and longitude values from the GSM module; the SMS is delivered to the registered phone numbers through GSM. Figure 22.6 shows the SMS sent to the registered mobile number. All the features were tested multiple times; however, because of limited battery backup there is a chance of the device switching off, so the device shows around 96–98% accurate output.


Fig. 22.5 Circuit for SMS and missed call button

Fig. 22.6 Result

22.4 Conclusion

An initial version of the proposed protocol (Woman Security Device) has been created in response to the rising levels of violence against women. This technique is intended to assist women in difficulty both in calling for aid and in alerting others in the area. The proposed design will address and help to resolve significant difficulties that women encounter, and it is also useful for elderly people. The device is made with cutting-edge technology and processors. The proposed device could be expanded further into a wearable gadget; the design could be made smaller and lighter to make it more convenient and easier to carry. An Android app can be used to improve the device's efficiency: it may allow the user to enter different contact details depending on their needs, and it can also be used to access the recorded evidence. More defense features that can be managed by various monitoring systems can be added.

Acknowledgements We would like to thank the authorities of Vishwakarma Institute of Technology, Pune for providing this opportunity and necessary help to work on such a skill-developing project.


References

1. Sogi, N.R.: MARISA: a raspberry pi based smart ring for women's safety using IoT. In: 2018 International Conference on Inventive Research in Computing Applications (CIRCA). IEEE (2018)
2. Monisha, D.G.: Women safety device and application-FEMME. Indian J. Sci. Technol. 9(10), 1–6 (2016)
3. Chougula, B., Naik, A., Monu, M., Patil, P., Das, P.: Smart girl's security system. Int. J. Appl. Innov. Eng. Manag. 3(4) (2014)
4. Bhavanivale, M., Deepali, M.: IoT based unified approach for women and children security using wireless and GPS. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 5 (2016)
5. Jain, R.A.: Women's safety using IoT. Int. Res. J. Eng. Technol. (IRJET) 4(05), 2336–2338 (2017)
6. Tejesh, B.S.S.: A smart women protection system using internet of things and open source technology. In: 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). IEEE (2020)
7. Tejesh, B.S.S.: A smart women protection system using internet of things and open source technology. In: 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). IEEE (2020)
8. Helen, A.: A smartwatch for women security based on IoT concept 'watch me'. In: 2017 2nd International Conference on Computing and Communications Technologies (ICCCT). IEEE (2017)
9. Bhardwaj, N., Aggarwal, N.: Design and development of "Suraksha"-a women safety device. Int. J. Inform. Comput. Technol. 4(8), 787–792 (2014)

Chapter 23

A Review on Indian Language Identification Using Deep Learning Swapnil Sawalkar and Pinky Roy

Abstract Indian language identification and classification has several practical applications and has received a great deal of interest from the research community over the last few decades; its significance is boosted by its applications in deep learning. Although various language identification and classification techniques based on deep learning have been presented previously, no study has focused on reviewing hybrid features and attention across various languages while contrasting simple architectures with hybrid architectures. Hence, this review provides an overview of various strategies for Indian language identification and classification using hybrid feature extraction and hybrid deep learning, also considering hybrid features, from simple to hybrid architectures. An effective comparison has been made based on the performance analysis of various simple and hybrid language identification techniques. The state-of-the-art research relevant to each aspect of language identification and classification is presented, evaluating its strengths, weaknesses, and overall applicability for deep learning applications. Finally, concluding remarks based on the drawbacks and their respective solutions have been provided.

23.1 Introduction

Language Identification (LID) is a method for recognizing a speaker's language, regardless of gender, accent, or pronunciation [1]. In a LID system for Indian languages, languages are first pre-classified into tonal and non-tonal categories, and then specific languages are found among the languages of the appropriate category. Any LID system's performance is influenced by a variety of variables, and it should be able to accurately identify a large number of target languages.

S. Sawalkar (B)
Sipna College of Engineering and Technology Amravati M.S., Amravati, India
e-mail: [email protected]
P. Roy
National Institute of Technology Silchar Assam, Silchar, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_23


S. Sawalkar and P. Roy

Language identification becomes more difficult in nations like India, where the majority of the languages share a common phoneme set. Languages can be pre-classified either into distinct sub-language families or into different categories in order to prepare a LID system for detecting either a large number of target languages or closely related languages with improved accuracy [2, 3]. With the development of deep learning techniques, new methods using deep learning are being considered for reasoning about language data. Modeling many languages has taken a great deal of effort from a computational perspective. Early research focused on making voice signals a smooth input for machines in the speech recognition and speaker recognition fields. Deep learning has significantly increased the predictive power of computing devices, owing to the availability of massive data and the use of advanced learning algorithms. Deep learning methods have drawn the attention of scientists working in every field due to their superior and consistent performance [4, 5]. Fully connected deep neural networks and CNN architectures were evaluated in a deep learning-based language identification system and found to perform similarly, while the CNN model required far fewer parameters; for each of the in-set languages concerned, training i-vectors and test i-vectors are used. The fundamental difficulty in identifying spoken languages is finding meaningful audio feature representations that are resistant to both individual pronunciation differences and cross-linguistic similarity [6, 7]. Because of this, building reliable deep learning models using these datasets is quite challenging; hence, many attempts have been made to develop a reliable representation for language identification.
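The tonal/non-tonal pre-classification scheme mentioned in the introduction can be sketched as a two-stage pipeline. The category members and the classifier callables below are illustrative placeholders, not taken from any cited system:

```python
# Stage 1 decides tonal vs. non-tonal; stage 2 searches only within
# the languages of the chosen category.
TONAL = ["Manipuri", "Mizo"]          # illustrative category members
NON_TONAL = ["Bengali", "Hindi"]

def identify_language(features, tonal_clf, lang_clf):
    candidates = TONAL if tonal_clf(features) else NON_TONAL
    return lang_clf(features, candidates)

# Dummy stand-in classifiers for demonstration only.
def dummy_tonal(f):
    return f["f0_variance"] > 0.5     # high pitch variance -> tonal

def dummy_lang(f, candidates):
    return candidates[0]              # always pick the first candidate
```

The benefit of the two-stage design is that the second classifier only has to discriminate among the (usually few) languages of one category.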
Many researchers have evaluated language classification systems; however, they rarely summarize methodologies for analyzing hybrid features for language classification that consider combined feature attention and multilingual deep learning. The main contributions of this review paper are as follows:

• Simple and hybrid deep learning architectures used for Indian language identification, along with the hybrid features of various language identification and classification approaches, are surveyed with a comparison study.

23.1.1 Literature Survey

In this section, the review discusses the salient characteristics of various language identification and classification approaches in deep learning and the hybrid feature extraction methods used to accomplish the identification task. The method of reviewing the various language identification and classification techniques is shown in Fig. 23.1. The review has been made in three distinct directions related to language identification and classification, namely simple deep learning methods, hybrid feature extraction methods, and hybrid deep learning architectures.


Fig. 23.1 Direction of reviewing various Indian language identification and classification approaches

Review on hybrid feature extraction methods for Indian language identification

Biswas et al. [8] worked on SLID, which has already been established as a crucial first step in all multilingual voice recognition systems; its significance has grown owing to recent advances in ASR technologies. In this study, a paradigm for distinguishing between languages spoken in India and other countries is developed. The data are then supplemented with noise of varied loudness obtained from various contexts, with the aim of making the model resilient to noise from ordinary life. Aggregated macro-level features are extracted from the MFCC time series of this enriched data, and feature selection is performed using the Feature Extraction based on Scalable Hypothesis testing (FRESH) algorithm. However, the quality of the proposed system's results still needs to be improved by utilizing varied, lower loudness during the MFCC phase.

Das et al. [9] noted that the influence of spoken-language-identification-based applications on the daily lives of regular people has grown as a result of recent developments in machine learning and artificial intelligence. Applications based on spoken language recognition have long been available in the West but, due to various difficulties, have not yet gained much traction in multilingual nations like India. This paper addresses the problem by attempting to distinguish between different Indian languages using a variety of well-known features, such as MFCC, Linear Prediction Coefficients (LPC), the Discrete Wavelet Transform (DWT), and Gammatone Frequency Cepstral Coefficients (GFCC). However, due to the voice signal's high unpredictability, it is preferable to carry out some feature extraction to reduce that fluctuation.
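At its simplest, the hybrid-feature idea recurring in these works is frame-level concatenation of several feature streams into one vector. A minimal sketch (the feature names and dimensions here are illustrative, not values from any cited paper):

```python
def fuse_features(**streams):
    # Concatenate per-frame feature vectors (e.g. MFCC, LPC, GFCC)
    # into one hybrid vector; a fixed key order keeps it reproducible.
    fused = []
    for name in sorted(streams):
        fused.extend(streams[name])
    return fused

hybrid = fuse_features(mfcc=[12.1, -3.4], lpc=[0.9, 0.1], gfcc=[5.0])
```

The fused vector is then what a classifier (SVM, ANN, BPNN, etc.) is trained on in the hybrid-feature systems reviewed below.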


China et al. [10] used multi-level prosodic and spectral data to create an automatic tonal and non-tonal pre-classification-based LID system. Languages are first divided into tonal and non-tonal groupings, and then specific languages are found within those groups. For the pre-classification job, the system uses MFCCs, Mean Hilbert Envelope Coefficients (MHECs), and Shifted Delta Cepstral coefficients of MFCCs and MHECs. The complementarity of syllable, word, and phrase level (spectral + prosody) features for the pre-classification-based LID task has also been explored, as has multi-level analysis of spectral data. However, prosody is still difficult to exploit because some methods do not support fine-grained alterations.

Deshwal et al. [11] presented LID as the precise identification of an unidentified language through the comparison of speech biometrics from a test speech sample with a library of language models. Different methods are used independently during the feature extraction stage, including MFCCs, perceptual linear prediction features (PLP), and relative spectral perceptual linear prediction features (RASTA-PLP). Later, the effectiveness of the LID system is examined using a variety of hybrid feature combinations, including MFCC and PLP combined with their first-order derivatives, MFCC + RASTA-PLP, and MFCC + Shifted Delta Cepstral coefficients (SDC). However, the effectiveness of a LID system with hybrid features has not been fully investigated.

Biswas et al. [12] observed that spoken language has been the most natural form of communication for a very long time, which motivates automatic speech recognition (ASR). As ASR research advances, language identification would be required for any multilingual ASR system. India has a large variety of languages, each with hundreds of dialects, making this issue particularly relevant to that country. To achieve the optimum performance, the parameters for silence removal are optimized.
Second, Mel Frequency Cepstral Coefficient (MFCC) features are extracted from the voice signals. However, several methods employing lexical and structural aspects, such as characters, words, part-of-speech, n-grams, and dependency linkages, have not been investigated.

Sangwan et al. [1] covered the use of hybrid characteristics and artificial neural networks (ANN) for spoken language identification (LID). They employed RASTA-PLP features for feature extraction and, later, hybrid features that combined RASTA-PLP features with cutting-edge MFCC features. The results show that MFCC + RASTA-PLP features perform better than RASTA-PLP features on their own. With the help of the "trainlm" network training function and the MFCC + RASTA-PLP hybrid features, the classification accuracy was 94.6%. However, an effective preprocessing approach is still needed to improve the accuracy of language identification in the hybrid feature extraction method.

Birajadar et al. [13] noted that a language identification system's primary goal is to accurately identify the language from a speech utterance, a method frequently used in multilingual speech applications. Visual representations of speech samples can be thought of as texture images that depict changes in energy in various frequency bands over time. The second step involves extracting textural information from the visual representation using the Weber local descriptor (WLD), linear phase quantization (LPQ), and the completed linear binary pattern (CLBP). However, there is still a


need to improve the time–frequency performance of language identification under partial or heavy occlusion.

Bakshi et al. [14] built a novel duration-normalized feature selection method and a two-step modified hierarchical classifier to enhance the accuracy of spoken language identification (SLID) for Indian languages under duration-mismatch conditions. The SLID system's performance is noticeably worse for short-duration utterances, even though accuracy for mismatched training and testing durations is improved, and a series of inter- and intra-family and additional classes is used to enhance the estimation of bogus language families. However, the accuracy of the SLID system still needs to be enhanced.

Sangwan et al. [15] presented the depiction of high-quality audio features as the primary condition for enhancing any system's identification performance. In order to produce new feature representations of the speech signals, this research presents a hybrid approach to feature extraction that combines MFCC and Shifted Delta Cepstral (SDC) coefficient features. The learning outputs of the hidden layers of a DBN are used to establish a three-layer back-propagation neural network (BPNN) classifier, which is used to detect language from speech. However, these characteristics are sensitive to a number of factors, including noise and acoustic changes, and extracting robust characteristics of high-quality audio features remains a challenge.

From Table 23.1, it is clear that Indian language identification has been performed with various standard and well-known hybrid features from datasets. These datasets include non-linear data, spectral data, and large-margin data, which enhance loudness from various contexts with the help of classifiers. However, these datasets do not consider semantic and syntactic information from all speech signals.
Also, the hybrid features in the SLID system need to be improved so as to provide continuous updating without time-dependent changes in dataset processing.

Review on multilingual deep learning-based acoustic modeling for Indian language identification

Basu et al. [16] observed that the lack of appropriate voice corpora makes research and development of speech technology applications in low-resource languages (LRL) difficult. The construction of such an LRL corpus, which includes sixteen infrequently studied Eastern and Northeastern (E&NE) Indian languages, is illustrated in this work. The data variability is presented using various statistics. MFCC, SDC, and RASTA-PLP features are the spectral characteristics used to examine the presence of speaker- and language-specific information. However, the majority of these models need a lot of data to train, and in low-resource situations the training data is frequently insufficient to support such learning objectives.

Verma et al. [17] noted that Internet service quality has dramatically improved, leading to a surge in content creation and consumption. As a result, there is a growing diversity of viewers who wish to consume media in languages they are unfamiliar with or prefer. Real-time and fine-grained content analysis services, such as language identification, content transcription, and analysis, are thus becoming


Table 23.1 Review on hybrid feature extraction methods for Indian language identification

[8] Technique: Automatic spoken language identification using MFCC. Significance: SLID has grown owing to recent advances in ASR technologies; noise of varied loudness obtained from various contexts is added with the aim of making the model resilient to noise from ordinary life. Limitation: Varied, less-loud noise is needed during the MFCC phase.

[9] Technique: Hybrid meta-heuristic feature selection method. Significance: Spoken-language-identification-based applications have a growing influence on the daily lives of regular people as a result of recent developments in machine learning and artificial intelligence. Limitation: Less improvement in loudness due to the highly unpredictable voice signal, which reduces fluctuation.

[10] Technique: Multi-modeling multi-level prosody method-based language identification. Significance: Spoken language recognition has long been available in the West; the pre-classification-based LID task has been explored, as has multi-level analysis of spectral data. Limitation: Prosody is difficult to exploit because fine-grained alterations are not supported.

[11] Technique: Language identification using hybrid features and a back-propagation neural network. Significance: The complementarity of syllable, word, and phrase level (spectral + prosody) features for the pre-classification-based LID task has been explored, as has multi-level analysis of spectral data. Limitation: The effectiveness of the LID system is not properly investigated.

[12] Technique: Spoken language identification of Indian languages using MFCC features. Significance: The effectiveness of the LID system is examined using a variety of hybrid feature combinations, including MFCC, PLP, combined with their first-order derivatives, MFCC + RASTA-PLP, and MFCC + SDC. Limitation: Some methods, such as lexical and structural aspects, have not been implemented.

[1] Technique: Performance of language identification using an ANN learning algorithm. Significance: An SVM classifier is trained using these features and has proven to be the best classifier due to its large-margin classification property and capacity for classifying complex and non-linear data. Limitation: Insufficient preprocessing approach to enhance the hybrid features.

[13] Technique: Indian language identification using time–frequency texture features. Significance: Visual representations of speech samples can be treated as texture images depicting changes in energy in various frequency bands over time. Limitation: Time–frequency performance has to be improved under partial or heavy occlusion.

[14] Technique: Indian spoken language identification using feature selection in a duration-mismatch framework. Significance: The SLID system's performance is noticeably worse for short-duration utterances, although accuracy for mismatched training and testing durations is improved. Limitation: Low accuracy of the hybrid-feature SLID system.

[15] Technique: Isolated-word language identification system using hybrid features from a deep learning network. Significance: A hybrid feature extraction approach that combines MFCC and SDC coefficient features. Limitation: Extracting robust characteristics of audio signals remains a challenge.


more and more necessary. However, Capsule Networks have not previously been demonstrated to be effective at object detection; this work extends the use of Capsule Networks, because of their special capacity to retain information about relative orientation in image observations, to address the issue of language identification.

Mandava et al. [18] investigated field-effect transistor (FET) threshold voltage variation caused by random telegraph signals in a percolative channel using a novel graphic technique. First, a minimum Vth and a critical curve in a mloc-loc plot are generated using technology computer-aided design simulation with no percolation. The former represents a statistical distribution that differs greatly from the typical log-normal distribution. The critical mloc-loc curve, which separates the plot into the permitted region and the banned zone, decreases if gate size is increased. The allowed region's Vth outlines are then produced graphically. However, the crucial mloc-loc curve takes precedence if the coupled mloc and loc fall in the restricted region.

Saha et al. [19] noted that scene text analysis is a challenging topic of study because of the complexity of the background, the image quality, the orientation of the text, and the size of the text. The majority of scene text identification methods take either a feature-based or a deep learning-based approach. In this work, an end-to-end system is provided for scene text detection, localization, and language identification that combines feature-based and deep learning-based approaches. The language of the detected scene texts is finally identified using a Convolutional Neural Network-based model. However, despite their robustness, such models have some flaws that affect their classification performance.

Ranasinghe et al.
[20] proposed the BRUMS submission to the shared task Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC) 2019, a multilingual deep learning model for detecting hate speech and abusive words on social media. In the English track of HASOC sub-task 1, their top-performing system was ranked third out of 79 entries. Only architectures that use character embeddings are subject to the final preprocessing phase, which changes the text to lower-case letters for the fastText pretrained character embedding models. BERT is utilized on the dataset cases containing languages such as Tamil, English, Malayalam, and German. However, a BERT-based architecture was not used for this preprocessing step.

Ranasinghe et al. [20] also evaluated offensive content, which is common on social media and a source of concern for businesses and government agencies. Several papers have lately been published investigating approaches for detecting various types of such content (e.g., hate speech, cyberbullying, and cyber-aggression). This method outperforms the top systems submitted to recent shared challenges in these three languages, demonstrating the durability of cross-lingual contextual embeddings and transfer learning for this task. However, cutting-edge cross-lingual contextual embeddings like XLM-R have not been applied to abusive language detection.

Pitenis et al. [21] noted that, to cope with offensive language in online communities and social media platforms, researchers have been developing algorithms to detect its various manifestations, such as cyberbullying, hate speech, and hostility. To remedy this deficiency, they provide the Offensive


Greek Tweet Dataset, the first Greek annotated dataset for offensive language identification (OGTD). OGTD is a manually annotated dataset containing 4779 tweets classified as offensive or not offensive. Numerous computational models built and tested on this data are evaluated, in addition to a full description of the dataset. However, fine-tuning the BERT-Base Multilingual Cased model did not yield satisfactory results.

Ranasinghe et al. [22] presented that offensive content is common on social media, which concerns companies and government agencies. Several studies have lately been published that look into ways of detecting various types of such information (e.g., hate speech, cyberbullying, and cyber-aggression). Taking advantage of existing English datasets, cross-lingual contextual word embeddings and transfer learning are used to make predictions in low-resource languages. However, in Bengali, increasing the number of training instances from 0 to 100 reduced the transfer learning outcomes slightly, most likely because a certain number of examples is required to train the softmax layer.

Zhao et al. [23] noted that the growth of online media platforms has increased individuals' ability to freely post and comment, yet the detrimental impact of abusive language has become more obvious, making automatic detection of offensive words critical. Hence, a system based on the multilingual models XLM-RoBERTa and DPCNN is proposed to perform this task. The results of the system's tests on the official test data set show its effectiveness. However, when the amount of training data is very limited, the large-scale model is not able to achieve satisfactory results, so addressing the problem of model training with a small amount of target data has become a research hotspot.

Bharathi et al. [24] observed that social networks have a significant impact in practically every field.
Text messaging over the Internet or cellular phones has grown in popularity as a means of personal and commercial communication. It is the moderator's choice which comments to delete from the platform due to violations, so automatic tools for recognizing abusive phrases would be handy. This is a shared task in DravidianLangTech-EACL2021; its purpose is to identify foul-language material in a code-mixed dataset of social media comments/posts in Dravidian languages. However, the occlusion information carried by text messaging needs to be used to improve detection accuracy.

Hou et al. [25] proposed a large-scale, language-independent, end-to-end multilingual model for integrated ASR and Indian language identification. With around 5000 h of training data, this model achieves a word error rate (WER) of 52.8 and LID accuracy of 93.5 on 42 languages. Furthermore, models trained in a multilingual manner may share information across languages, which aids the performance of low-resource language tasks. A Transformer architecture was used to perform multilingual ASR on low-resource languages, achieving a 10% relative improvement over the baseline, along with a large-scale streaming end-to-end model trained on nine Indian languages using CTC. However, the processing of balanced training data needs to be improved in order to use data more efficiently.

Tong et al. [26] noted that ASR models supporting several languages are appealing since they have been found to benefit from more training data and to better adapt to

23 A Review on Indian Language Identification Using Deep Learning


languages with limited resources. However, initialization from monolingual context-dependent models causes an explosion of context-dependent states. Language-adaptive training is examined using Learning Hidden Unit Contribution (LHUC). To address the overfitting issue, dropout during cross-lingual adaptation is also investigated. However, the significant rise in context-dependent labels brought on by the phone-set mismatch poses extra difficulties for multilingual and cross-lingual ASR. From Table 23.2, it is clearly understood which multilingual deep learning-based acoustic modeling approaches have been considered for Indian language identification. These techniques improve automatic detection, support low-resource languages, and exploit cross-lingual contextual words. However, performance does not improve beyond a certain level of detection, and the systems still need to handle different languages with diverse senses of phrases in order to move beyond merely average accuracy rates.

23.1.2 Comparison of Performance in Various Indian Language Identification and Classification Techniques

The comparison of performance of hybrid deep learning in terms of accuracy is shown in Fig. 23.2. The hybrid deep learning techniques CNN, GRU, LSTM, and CNN-BiLSTM all achieve accuracy above 0.6; however, high accuracy is still not obtained. Hence, these techniques require an improved labeling process to extend the identification process and eliminate false positives.
Figure 23.3 depicts the performance comparison of multilingual deep learning-based acoustic modeling for language identification in terms of accuracy. The multilingual deep learning techniques considered are MLP, CapsNet, DPCNN, and SVM; the maximum value (MLP) is 100 and the minimum value (SVM) is 82. However, these multilingual deep learning models are not effective at text detection due to their special capacity. Hence, they require further improvement in text detection and language classification to raise the accuracy level.
Figure 23.4 depicts the performance comparison of a hybrid attention-based multimodal network for language identification in terms of accuracy. The hybrid attention techniques considered are DNN, RNN, CNN, and BiLSTM; the maximum accuracy (RNN) is 98 and the minimum (BiLSTM) is 82. However, the false temporal attention mechanism needs to be reduced in both the CNN and RNN models. Hence, these techniques require further improvement in the language recognition process to improve detection of offensive speech on social media.
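The stated extremes can be tabulated for a quick sanity check; a small sketch using only the maxima and minima quoted above (intermediate techniques are omitted because their exact values are not reported in the text):

```python
# Extreme accuracy values quoted in the review (Figs. 23.3 and 23.4 only).
reported = {
    "multilingual acoustic modeling": {"MLP": 100, "SVM": 82},
    "hybrid attention network": {"RNN": 98, "BiLSTM": 82},
}

# Spread between the best- and worst-performing technique in each family.
spread = {family: max(vals.values()) - min(vals.values())
          for family, vals in reported.items()}
print(spread)  # {'multilingual acoustic modeling': 18, 'hybrid attention network': 16}
```

Both families show a spread of 16–18 accuracy points between their best and worst techniques, which supports the observation that the choice of architecture matters substantially.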


S. Sawalkar and P. Roy

Table 23.2 Review on multilingual deep learning-based acoustic modeling for Indian language identification

Ref no | Techniques used | Significance | Limitations
[16] | Multilingual speech corpus in low-resource eastern and north-eastern Indian languages for language identification | MFCC, SDC, and RASTA-PLP features are spectral characteristics used to examine the presence of speaker- and language-specific information | In low-resource situations, training data is not sufficient
[17] | Fine-grained language identification with multilingual CapsNet model | CapsNet architecture is used to detect spoken language in real time from noisy and easily accessible data sources | Capsule networks are not effective at object detection due to their special capacity
[18] | LSTM-CTC-based joint acoustic model for Indian language identification | The critical mloc curve, which separates the plot into the permitted region and the banned zone, decreases if gate size is increased | The critical mloc curve is not supported in the restricted region
[19] | Multilingual scene text detection and language identification | The model generates text proposals using Maximally Stable Extremal Regions and the Stroke Width Transform, then refines the proposals using a Generative Adversarial Network | Despite their robustness, the models have flaws which affect their language identification
[20] | Deep learning models for multilingual hate speech and offensive language identification | Embeddings are subject to a final preprocessing phase; lower-case letters were used in the fastText pretrained character embedding models | BERT-based architectures were not used for the preprocessing step on the datasets
[21] | Multilingual offensive language identification with cross-lingual embeddings | English data is used to make predictions in languages with fewer resources via cross-lingual contextual word embeddings and transfer learning | Cross-lingual contextual embeddings like XLM-R had not previously been applied to abusive language detection
[22] | Multilingual offensive language identification for low-resource languages | Taking advantage of existing English datasets, cross-lingual contextual word embeddings and transfer learning are used to make predictions in low-resource languages | Transfer learning outcomes were slightly reduced because a certain number of examples is required to train the softmax layer
[23] | Offensive language identification in Greek using multilingual deep learning | Numerous computational models were built and tested on this data, in addition to a full description of the dataset being provided | Fine-tuning the BERT-Base Multilingual Cased model did not achieve good results
[24] | Offensive language identification based on XLM-RoBERTa with DPCNN | The detrimental impact of abusive language has grown more obvious, making automatic detection of offensive words critical | When the amount of training data is very limited, the large-scale model cannot achieve satisfactory results
[25] | Offensive language identification on multilingual code-mixed text | Moderators decide which comments to delete from the platform due to violations; automatic tools for recognizing abusive phrases would be handy | To improve detection accuracy, there is a need to use occlusion information
[26] | Large-scale end-to-end multilingual speech recognition and language identification | Used a Transformer architecture to perform multilingual ASR on low-resource languages and achieved a 10% relative improvement over the baseline | There is a need to improve the processing of balanced training data
[27] | Multilingual training and cross-lingual adaptation on CTC-based acoustic models | ASR models that support several languages are appealing since they benefit from more training data and adapt better to languages with limited resources | Poses difficulties in multilingual and cross-lingual ASR due to the rise in context-dependent labels

Fig. 23.2 Performance comparison of hybrid deep learning architecture (bar chart; techniques: CNN, GRU, LSTM, CNN-BiLSTM; y-axis: accuracy, 0–0.7)


Fig. 23.3 Performance comparison of multilingual deep learning-based acoustic modeling (bar chart; techniques: MLP, CapsNet, DPCNN, SVM; y-axis: accuracy, 0–120)

Fig. 23.4 Performance comparison of hybrid attention-based multimodal network (bar chart; techniques: DNN, RNN, CNN, BiLSTM; y-axis: accuracy, 75–105)

23.2 Summary of Review

Indian language identification and classification have been analyzed across various deep learning techniques. The findings for the reviewed identification and classification techniques are summarized as follows:


• Hybrid feature extraction methods such as MFCC, PLP, RASTA-PLP, and MFCC + RASTA-PLP yield spectral and large-margin information from regional datasets and are better at language identification. However, these techniques did not consider semantic and syntactic information from the hybrid features in the SLID system and used only regional datasets.
• Multilingual DL-based acoustic modeling with techniques such as MLP, CapsNet, DPCNN, and SVM is not very effective for detecting different senses in languages, and identifying the diverse senses of phrases is not considered. Hence, there is a need to improve system performance for different languages with contextual embeddings.
• Hybrid DL models such as CNN, GRU, LSTM, and CNN-BiLSTM detect various languages amid the fast expansion of social media platforms. However, multi-accent recognition is difficult with these techniques due to the lack of labeled data. Hence, these hybrid architectures need improvement, balancing complexity and accuracy.

23.3 Conclusion

In this research, a review has been conducted of various techniques for Indian language identification and classification using deep learning. This study reviewed simple deep learning architectures, hybrid feature extraction methods, and hybrid deep learning architectures for Indian language detection and classification. Furthermore, it analyzed the deep learning strategies used in various language identification models, evaluated the performance of the identification and classification techniques, and found that these techniques achieve only average accuracy values. Finally, the last section summarized how the various detection and classification techniques work and indicated the improvements required to eliminate the remaining challenges in language identification. In this review, three aspects of language identification using deep learning techniques are presented along with their importance and challenges.

References

1. Sangwan, P., Deshwal, D., Dahiya, N.: Performance of a language identification system using hybrid features and ANN learning algorithms. Appl. Acoust. 175, 107815 (2021)
2. Aarti, B., Kopparapu, S.K.: Spoken Indian language identification: a review of features and databases. Sādhanā 43(4), 1–14 (2018)
3. Bhanja, C.C., Bisharad, D., Laskar, R.H.: Deep residual networks for pre-classification based Indian language identification. J. Intell. Fuzzy Syst. 36(3), 2207–2218 (2019)
4. Singh, G., et al.: Spoken language identification using deep learning. Comput. Intell. Neurosci. (2021)
5. Draghici, A., Abeßer, J., Lukashevich, H.: A study on spoken language identification using deep neural networks. In: Proceedings of the 15th International Conference on Audio Mostly (2020)


6. Founta, A.M., et al.: A unified deep learning architecture for abuse detection. In: Proceedings of the 10th ACM Conference on Web Science (2019)
7. Rosenthal, S., et al.: A large-scale semi-supervised dataset for offensive language identification. arXiv preprint arXiv:2004.14454 (2020)
8. Biswas, M., Rahaman, S., Ahmadian, A., Subari, K., Singh, P.K.: Automatic spoken language identification using MFCC based time series features. Multimedia Tools Appl. 1–31 (2022)
9. Das, A., Guha, S., Singh, P.K., Ahmadian, A., Senu, N., Sarkar, R.: A hybrid meta-heuristic feature selection method for identification of Indian spoken languages from audio signals. IEEE Access 8, 181432–181449 (2020)
10. Bhanja, C.C., Laskar, M.A., Laskar, R.H.: Modelling multi-level prosody and spectral features using deep neural network for an automatic tonal and non-tonal pre-classification-based Indian language identification system. Lang. Resour. Eval. 55(3), 689–730
11. Deshwal, D., Sangwan, P., Kumar, D.: A language identification system using hybrid features and back-propagation neural network. Appl. Acoust. 164, 107289 (2020)
12. Biswas, M., Rahaman, S., Kundu, S., Singh, P.K., Sarkar, R.: Spoken language identification of Indian languages using MFCC features. In: Machine Learning for Intelligent Multimedia Analytics, pp. 249–272. Springer, Singapore (2021)
13. Birajdar, G.K., Raveendran, S.: Indian language identification using time-frequency texture features and kernel ELM. J. Ambient Intell. Humanized Comput. 1–14 (2022)
14. Bakshi, A., Kopparapu, S.K.: Improving Indian spoken-language identification by feature selection in duration mismatch framework. SN Comput. Sci. 2(6), 1–16 (2021)
15. Sangwan, P., et al.: Isolated word language identification system with hybrid features from a deep belief network. Int. J. Commun. Syst. e4418 (2020)
16.
Basu, J., Khan, S., Roy, R., Basu, T.K., Majumder, S.: Multilingual speech corpus in low-resource eastern and northeastern Indian languages for speaker and language identification. Circ. Syst. Sig. Process. 40(10), 4986–5013 (2021)
17. Verma, M., Buduru, A.B.: Fine-grained language identification with multilingual CapsNet model. In: 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), pp. 94–102. IEEE (2020)
18. Tirusha, M., Vuddagiri, R.K., Vydana, H.K., Vuppala, A.K.: An investigation of LSTM-CTC based joint acoustic model for Indian language identification. In: 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 389–396. IEEE (2019)
19. Saha, S., Chakraborty, N., Kundu, S., Paul, S., Mollah, A.F., Basu, S., Sarkar, R.: Multi-lingual scene text detection and language identification. Pattern Recogn. Lett. 138, 16–22 (2020)
20. Ranasinghe, T., Zampieri, M., Hettiarachchi, H.: BRUMS at HASOC 2019: deep learning models for multilingual hate speech and offensive language identification. FIRE (Working Notes) (2019)
21. Ranasinghe, T., Zampieri, M.: Multilingual offensive language identification with cross-lingual embeddings. arXiv preprint arXiv:2010.05324 (2020)
22. Ranasinghe, T., Zampieri, M.: Multilingual offensive language identification for low-resource languages. Trans. Asian Low-Resour. Lang. Inf. Process. 21(1), 1–13 (2021)
23. Pitenis, Z., Zampieri, M., Ranasinghe, T.: Offensive language identification in Greek. arXiv preprint arXiv:2003.07459 (2020)
24. Zhao, Y., Tao, X.: ZYJ123@DravidianLangTech-EACL2021: offensive language identification based on XLM-RoBERTa with DPCNN. In: Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages (2021)
25. Bharathi, B.: SSNCSE_NLP@DravidianLangTech-EACL2021: offensive language identification on multilingual code mixing text. In: Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages (2021)
26.
Hou, W., et al.: Large-scale end-to-end multilingual speech recognition and language identification with multi-task learning (2020)
27. Tong, S., Garner, P.N., Bourlard, H.: Multilingual training and cross-lingual adaptation on CTC-based acoustic model. arXiv preprint arXiv:1711.10025 (2017)

Chapter 24

Markov Process Based IoT Model for Road Traffic Prediction V. Sreelatha, E. Mamatha, S. Krishna Anand, and Nayana H. Reddy

Abstract IoT-based network models play a vital role in estimating and predicting the behavior of vehicle movement. Drastic changes in economics have led to high usage of personal vehicles by common people. This in turn leads to traffic jams and road blockages and makes areas accident-prone, so controlling the system and keeping it running smoothly becomes a demanding task. In this paper, the authors propose an IoT-based network model for predicting traffic in busy areas using binary Markovian events. First, a hidden Markov model (HMM) forward algorithm is proposed to schedule and allocate available resources to the devices with the highest possible activation probabilities. While estimating performance, a regret metric is maintained to check how many transmission slots are being wasted. Second, a model is proposed to optimize the Age of Information (AoI) by keeping this regret as low as possible. Finally, an algorithm is proposed to estimate activation probabilities for online-learning prediction of real-time traffic problems. Simulation results are presented to demonstrate the efficiency of the proposed algorithm.

24.1 Introduction

In recent decades, progress in socio-economic conditions and human lifestyle, particularly in metro cities, has brought rapid growth in the use of personal transport by common people. As the number of motor vehicles on the roads drastically increases, traffic congestion grows and accidents may follow. To overcome this problem, new robust intelligent automated models have been extensively adopted in traffic control systems [1]. Among these methods, short-term traffic flow estimation is widely adopted as part of intelligent transport systems [2]. It is a semi-automated
V. Sreelatha (B) · E. Mamatha · N. H. Reddy, GITAM University, Bangalore, India; e-mail: [email protected]
S. K. Anand, Anurag University, Hyderabad, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_24



system, and it started the era of moving from manual control to streamlined robotic systems, which improves traffic control and guidance and sets the ground rules for traffic maintenance in busy areas. One foremost advantage of such a system is that it gives necessary support to pedestrians and enables effective decision-making for travelers. In traffic engineering research, traffic prediction and short-term traffic flow estimation are key concerns for enhancing traffic control systems and reducing congestion and accidents [3, 4]. Initial research primarily depended on stochastic learning models to improve short-term traffic flow forecasting with traditional mathematical tools [7, 8]. With the assistance of probability distributions such as the Exponential, Poisson, and Gaussian distributions, the parameters of a stochastic forecasting model can be estimated based on theoretical inference, with strong illustrative results. In the early days, models based on traditional techniques focused on Kalman filter models, non-parametric regression methods, and time series models. With the vast increase in traffic volume, the existing methods have proved less effective, and there is an urgent need to improve the system. In the automation era, a wide range of computerized applications have evolved to monitor traffic information. Some of the devices adopted in traffic control systems are geometric and radar detectors, video monitoring systems, inductive detectors, radio frequency identification technology, and floating vehicle detection [5, 6]. Seamlessly monitoring the traffic control system requires a huge amount of data, and these devices provide the required information [9–11].
At the same time, with the swift advancement of deep learning and artificial intelligence, computers have come into real-life use in almost all fields. Their applications span several domains such as machine learning, speech and pattern recognition, and image processing, and they gradually extend to traffic measurement systems. With recent developments in computing and automation and the assistance of the Internet of Things (IoT), the industrial sector has moved to deploy a wide range of machine-type communication (MTC) instruments to gather real-time information. IoT-MTC devices, being portable and cost-effective, have become part of daily life, for example in remote surgery, environment and climate-change observation, vehicle automation, and cyber-security monitoring [12]. The immense growth of the communication sector with 5G technology serves the move toward robotization magnificently. In service mode, Quality of Service (QoS) is a major concern; many recent use cases impose stringent demands, requiring tremendously low end-to-end latency for the deployment of IoT devices to gather data. Traffic analysis for human-type traditional communication devices (HTDs) is entirely different from that for MTC devices: MTD-based traffic is analog and homogeneous with high correlation, while HTD traffic is uncorrelated, volatile, and non-homogeneous. Traffic correlation using MTC devices is depicted in Fig. 24.1. A wide range of sensor devices is arranged to monitor traffic movement on the roads, and the information is correlated by exchanging it through the communication network. Here, the vehicle movement


Fig. 24.1 Traffic correlation scenario model with IoT devices installed on the roadsides with connection to the control room—Base station

is monitored with the support of fixed and motion sensor detectors. If any vehicle violates the rules, the corresponding IoT devices immediately send the information to the concerned authority to take appropriate action. As long as vehicles move at normal speed and strictly follow the rules, no MTC device activates; otherwise, the devices adjust the speed-limit alarm and traffic lights depending on vehicle movement. If an animal or human is crossing the road, the human-detector device immediately takes effect and sends a safety alarm signal to the nearest base station (BS). As a precautionary step, the base station instantly directs the nearest fast-moving vehicle to brake or reduce speed, which increases the survival chances of the human or animal crossing the road. All these actions must take place within a fraction of a second to avoid accidents. For accurate results, one crucial requirement is the freshness of the latest information received from an IoT device, measured as the Age of Information (AoI). To design a learning-based scheduling algorithm, it is essential to minimize the AoI in the IoT network. The design of the access protocol is a central factor in exchanging information between the devices in a systematic manner. Devices in the system communicate with the base station in a random manner: a device requests a direct connection to the BS, which reacts with a contention resolution message. This approach involves high overhead signaling and end-to-end latency and fails at low latency levels. Another major setback is the high level of collisions when a large number of devices sporadically try to access the network at the same time. A good number of alternative techniques have been proposed


in IoT network systems to avoid collisions and overhead signaling. Some of these techniques are access class barring (ACB), time-division multiple access (TDMA), and grant-free (GF) schemes. TDMA is a straightforward approach in which resources are distributed equally among all devices without any scheduling algorithm; it is efficient for periodic signals but not for aperiodic or sporadic ones. To overcome this problem, GF schemes were proposed to minimize signaling overhead, but they perform poorly when the available resources do not match the number of potentially active devices; moreover, they cause high AoI and suffer many collisions between devices. ACB is among the most promising of these alternatives. To efficiently utilize resource allocation in IoT network systems, a number of learning-based algorithms have been developed. Laner et al. [11] proposed a coupled Markov modulated Poisson process (CMMPP) and Grigoreva et al. [13] a coupled Markovian arrival process (CMAP) for device activation in traffic control systems, but these models do not explain the resource allocation process. Rossi et al. [14] developed an HMM model to correlate time among binary sources in a wireless sensor network (WSN) with a decision fusion algorithm [15]. To overcome the large signaling and collision problems, Zhou et al. [16] presented a hybrid resource allocation model based on message replication in GF transmission. Ali et al. [17] presented a multi-armed bandit algorithm for fast uplink (FU) grants in IoT networks. However, these models do not effectively exploit the road traffic correlation on an event-temporal basis. Most of the mentioned models depend on machine learning mechanisms and reinforcement learning algorithms.
These models mostly depend on a large number of complex calculations either at the IoT devices or at the base station; implementing them therefore requires powerful hardware and long training of the self-learning algorithms to overcome the challenges of machine learning in communication systems [18]. In general, IoT network systems are interactive applications in which observations are delivered over time, so these models become learning-based algorithms, and real-time IoT networks require tremendously powerful hardware to run the complex machine learning schemes behind learning-based scheduling. The authors of this work propose a stochastic model that is efficient in terms of accuracy and simpler than presently available machine learning solutions.
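The HMM forward algorithm that this scheduling approach builds on is standard; as background, a minimal textbook sketch in Python (a generic recursion, not the authors' exact scheduler; the function name `forward` and the list-based matrices are illustrative assumptions):

```python
def forward(pi, A, B, obs):
    """Textbook HMM forward recursion.

    pi:  initial state probabilities, pi[i] = Pr(state_0 = i)
    A:   transition matrix, A[i][j] = Pr(state_{t+1} = j | state_t = i)
    B:   emission matrix, B[i][o] = Pr(observation o | state i)
    obs: observation sequence

    Returns the likelihood Pr(obs), obtained by summing the final forward
    variables alpha[i] = Pr(o_1..o_t, state_t = i).
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[i][o] * sum(alpha[j] * A[j][i] for j in range(n))
                 for i in range(n)]
    return sum(alpha)
```

In the scheduling context described above, such a recursion would let the base station track the likelihood of the hidden ON–OFF states from observed device activity.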

24.2 Problem Analysis and System Design

Many learning-based algorithms have been presented for studying traffic control with IoT networks. To model the system, consider a Narrow-Band IoT (NB-IoT) network with K IoT devices, all of which transmit their information to a single base station (BS), as portrayed in Fig. 24.2. The linked transmission resources in the conventional LTE fast uplink are divided into time slots, so that the BS can schedule these slots to L devices to transmit their information


Fig. 24.2 Markov process-based N activation model with ON–OFF states to control k devices

in L frequency slots [19]. In this process, all transmission slots are assigned to scheduled devices, so these devices relay information to the base station provided they have data to transfer. If a device is not active, the uplink resource is wasted during that transmission schedule. To design the model, denote the discrete time slots t = 1, 2, … and the activity of device k by the random variable a_t^k, where

a_t^k = 1 if device k is active at time t, and 0 otherwise.

At time t, the activation of the IoT devices is represented by the vector:

A_t = (a_t^1, a_t^2, …, a_t^K)

N independent two-state Markov processes are adopted to govern the devices' activity, with each Markov state oscillating between ON and OFF. At time t the Markov states S_t evolve with the transition probabilities stated in Fig. 24.2. In general, the one-step transition probability from state i to state j of a Markov chain with time index n is:

P_{i,j} = Pr{X_n = j | X_{n−1} = i}    (24.1)

Applying this to the given system with X_t ∈ {0, 1}:

P_{1,1} = Pr{X_t = 1 | X_{t−1} = 1} = 1 − ε_0
P_{1,0} = Pr{X_t = 0 | X_{t−1} = 1} = ε_0
P_{0,1} = Pr{X_t = 1 | X_{t−1} = 0} = ε_1
P_{0,0} = Pr{X_t = 0 | X_{t−1} = 0} = 1 − ε_1    (24.2)
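The ON–OFF chain of Eq. (24.2) is easy to simulate; a minimal sketch (the function names are illustrative, not from the paper):

```python
import random

def step(state, eps0, eps1):
    """One transition of the two-state chain in Eq. (24.2):
    from ON (1) the chain drops to OFF with probability eps0,
    from OFF (0) it switches to ON with probability eps1."""
    if state == 1:
        return 0 if random.random() < eps0 else 1
    return 1 if random.random() < eps1 else 0

def stationary_on(eps0, eps1):
    """Long-run probability of the ON state, eps1 / (eps0 + eps1),
    obtained by balancing the ON->OFF and OFF->ON probability flows."""
    return eps1 / (eps0 + eps1)
```

Simulating many calls to `step` reproduces `stationary_on` empirically, which is a convenient sanity check when choosing ε_0 and ε_1 for experiments.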


If the state S_t = 1, then the nth-stage Markov process triggers a specific IoT device k with probability rate q_n^k. In particular, a device is active if its corresponding hidden Markov states are active. At time t, the active-state probability of the kth device is:

Pr{A_t^k = 1 | S_t} = 1 − ∏_{n=1}^{N} (1 − q_n^k)^{S_t} = 1 − ∏_{n=1}^{N} h(n)    (24.3)

where

h(n) = 1 − ε_1 q_n^k if S_t = 1, and h(n) = 1 − (1 − ε_0) q_n^k if S_t = 0    (24.4)
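Under one common reading of Eq. (24.3) — each active hidden event n independently triggers device k with probability q_n^k — the activation probability can be sketched as follows (the helper name and the per-event state vector `s` are illustrative assumptions):

```python
def activation_prob(q_k, s):
    """Pr{A_t^k = 1 | S_t}: device k stays silent only if every active
    hidden event n (s[n] == 1) fails to trigger it, each failing
    independently with probability 1 - q_k[n]."""
    p_silent = 1.0
    for q_n, s_n in zip(q_k, s):
        if s_n:
            p_silent *= 1.0 - q_n
    return 1.0 - p_silent
```

With all hidden events inactive the device is silent with certainty, matching the intuition that activations are driven entirely by the hidden Markov states.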

The main focus of this research work is to estimate performance measures for the traffic system. Some important metrics are age of information, system usage, regret, and traffic preconditions. In learning-based scheduling algorithms, regret is considered the vital measurement in traffic prediction: one unit of regret corresponds to a transmission slot granted to an inactive device while it goes unused. The number of incorrect allotments at time t can be estimated from the difference between the uplink grant vector u_t^k and the activation vector a_t^k of the kth device:

ω_t = Σ_{k=1}^{K} max(0, u_t^k − a_t^k)    (24.5)

The next parameter, which plays a vital role in traffic congestion estimation, is the Age of Information (AoI), used to measure the degree of data fairness and freshness during scheduling of the IoT devices. It is measured as the time elapsed since the kth device last transmitted a data packet:

Δ_k = t − t_k    (24.6)

where t is always greater than t_k. The mean age per device at an instant in time can be computed as:

Δ̄ = (1/K) Σ_{k=1}^{K} Δ_k    (24.7)
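Equations (24.6) and (24.7) amount to a few lines of bookkeeping; a minimal sketch (function and argument names are illustrative):

```python
def aoi(t, t_k):
    """Per-device age Delta_k = t - t_k (Eq. 24.6), where t_k is the time
    of device k's last transmitted packet."""
    return t - t_k

def mean_aoi(t, last_updates):
    """Mean age per device (Eq. 24.7): the average of Delta_k over the
    K devices, given each device's last update time."""
    return sum(aoi(t, t_k) for t_k in last_updates) / len(last_updates)
```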

Apart from this, another parameter is the corresponding difference between the activation vector a_t^k and the uplink grant vector u_t^k at instant t:

μ_t = Σ_{k=1}^{K} max(0, a_t^k − u_t^k)    (24.8)

At time t, the regret value is measured as:

R(t) = min(μ_t, ω_t)    (24.9)
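Equations (24.5), (24.8), and (24.9) can be combined into a single helper; a sketch under the assumption that the activation and grant vectors are 0/1 lists indexed by device:

```python
def regret(a, u):
    """R(t) = min(mu_t, omega_t), per Eqs. (24.5), (24.8), (24.9).

    omega_t: slots granted to inactive devices (wasted grants, Eq. 24.5);
    mu_t:    active devices left without a grant (missed traffic, Eq. 24.8).
    """
    omega = sum(max(0, u_k - a_k) for a_k, u_k in zip(a, u))
    mu = sum(max(0, a_k - u_k) for a_k, u_k in zip(a, u))
    return min(mu, omega)
```

Taking the minimum means a scheduler is only penalized when grants are both wasted on inactive devices and simultaneously missing for active ones.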

The other parametric measure for traffic prediction is system usage, which helps in measuring the efficacy of the presented algorithm. The mean system usage up to time t, relating the number of productively utilized transmission slots to the L available slots, can be defined as:

η_t = (1/t) Σ_{k=0}^{t} (L − ω_k)/L    (24.10)

The mean system usage represents the percentage of effectively utilized uplink transmission slots across all the IoT devices.
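Equation (24.10) averages per-slot utilization over the elapsed time slots; a minimal sketch (argument names are illustrative), where `omegas[k]` holds the wasted-slot count ω_k for time slot k:

```python
def mean_usage(omegas, L):
    """eta_t (Eq. 24.10): average fraction of the L uplink slots that
    were productively used, over the elapsed time slots."""
    return sum((L - w) / L for w in omegas) / len(omegas)
```

A perfect scheduler wastes no slots (all ω_k = 0) and yields a usage of 1.0.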

24.3 Result Analysis

Simulation results are presented in this section. To generate them, the number of frequency slots L is taken as 10, the number of sensors K as 50, and the number of hidden Markov events N as 5. The uniformly distributed transition probabilities for the temporal state ε are defined on the interval [0, 0.5], whereas the activation probabilities are defined on the interval q_n^k ∈ [0, 1] (Figs. 24.3, 24.4, 24.5 and 24.6).

24.4 Conclusions

With rapid technological development in the educational, industrial, and management sectors, metro cities are emerging as smart cities. The lifestyle of people, together with the gigantic growth of urban populations, leads to proportionate problems in terms of job opportunities, retention of skilled labor, and traffic congestion. To overcome these problems, people have looked toward smart technology and the adoption of machine learning systems. With the motto of developing a robust transport-monitoring mechanism for highly populated cities, the authors propose a hidden Markov process model to predict traffic congestion and estimate vehicle movement using massively deployed IoT devices. The proposed model rigorously forecasts traffic movement in busy areas surrounded by IoT devices and activates their configuration with the highest likelihood of transition probabilities. Various metrics such as age of information, regret, and mean data transmission have been computed through simulation experiments and presented in graphical form. Results show that the proposed online-learning model achieves


Fig. 24.3 Target region for AoI corresponding to regret with various devices

Fig. 24.4 Time slot analysis with respect to regret with geographically located IoT devices


Fig. 24.5 Time slot analysis with average age of information wireless connected IoT devices

Fig. 24.6 Traffic congestion probability rate for priority customer vehicles with available road traffic facilities

better results than available techniques in solving general traffic-related problems at overloaded traffic gateways. To generate better results, equipment such as smart control signal systems, video monitoring systems, IoT devices, and big data analytic tools are integrated into the transport network system.


References

1. Eldeeb, E., Shehab, M., Kalør, A.E., Popovski, P., Alves, H.: Traffic prediction and fast uplink for hidden Markov IoT models. IEEE Internet Things J. 9(18), 17172–17184 (2022)
2. Mamatha, E., Reddy, C.S., Prasad, K.R.: Antialiased digital pixel plotting for raster scan lines using area evaluation. In: Emerging Research in Computing, Information, Communication and Applications, pp. 461–468. Springer, Singapore (2016)
3. Reddy, C.S., et al.: Obtaining description for simple images using surface realization techniques and natural language processing. Indian J. Sci. Technol. 9(22) (2016)
4. Elliriki, M., Reddy, C.C.S., Anand, K.: An efficient line clipping algorithm in 2D space. Int. Arab J. Inf. Technol. 16(5), 798–807 (2019)
5. Mamatha, E., Saritha, S., Reddy, C.S., Rajadurai, P.: Mathematical modelling and performance analysis of single server queuing system-eigenspectrum. Int. J. Math. Oper. Res. 16(4), 455–468 (2020)
6. Saifuzzaman, M., Moon, N.N., Nur, F.N.: IoT based street lighting and traffic management system. In: 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), pp. 121–124. IEEE (2017)
7. Saritha, S., Mamatha, E., Reddy, C.S., Rajadurai, P.: A model for overflow queuing network with two-station heterogeneous system. Int. J. Process Manage. Benchmarking 12(2), 147–158 (2022)
8. Sarrab, M., Pulparambil, S., Awadalla, M.: Development of an IoT based real-time traffic monitoring system for city governance. Glob. Trans. 2, 230–245 (2020)
9. Elliriki, M., Reddy, C.S., Anand, K., Saritha, S.: Multi server queuing system with crashes and alternative repair strategies. Commun. Stat. Theor. Methods 51(23), 8173–8185 (2022)
10. Saritha, S., Mamatha, E., Reddy, C.S.: Performance measures of online warehouse service system with replenishment policy. J. Europeen Des Syst. Automatises 52(6), 631–638 (2019)
11. Laner, M., Svoboda, P., Nikaein, N., Rupp, M.: Traffic models for machine type communications. In: ISWCS 2013, The Tenth International Symposium on Wireless Communication Systems, pp. 1–5. VDE (2013)
12. Bedewy, A.M., Sun, Y., Shroff, N.B.: Minimizing the age of information through queues. IEEE Trans. Inf. Theor. 65(8), 5215–5232 (2019)
13. Grigoreva, E., Laurer, M., Vilgelm, M., Gehrsitz, T., Kellerer, W.: Coupled Markovian arrival process for automotive machine type communication traffic modeling. In: 2017 IEEE International Conference on Communications (ICC), pp. 1–6. IEEE (2017)
14. Rossi, P.S., Ciuonzo, D., Ekman, T.: HMM-based decision fusion in wireless sensor networks with noncoherent multiple access. IEEE Commun. Lett. 19(5), 871–874 (2015)
15. Anand, K., Mamatha, E., Reddy, C.S., Prabha, M.: Design of neural network based expert system for automated lime kiln system. J. Européen des Syst. Automatisés 52(4), 369–376 (2019)
16. Zhou, J., Gao, D., Zhang, D.: Moving vehicle detection for automatic traffic monitoring. IEEE Trans. Veh. Technol. 56(1), 51–59 (2007)
17. Ali, F., Ali, A., Imran, M., Naqvi, R.A., Siddiqi, M.H., Kwak, K.S.: Traffic accident detection and condition analysis based on social networking data. Accid. Anal. Prev. 151, 105973 (2021)
18. Saritha, S., Mamatha, E., Reddy, C.S., Anand, K.: A model for compound Poisson process queuing system with batch arrivals and services. J. Europeen des Syst. Automatises 53(1), 81–86 (2019)
19. Mamatha, E., Sasritha, S., Reddy, C.S.: Expert system and heuristics algorithm for cloud resource scheduling. Rom. Stat. Rev. 65(1), 3–18 (2017)

Chapter 25

Optimization of Process Parameters in Biodiesel Production from Waste Cooking Oil Using Taguchi-Grey Relational Analysis Farhina Ahmed and Sumita Debbarma

Abstract The environmental scenario demands safeguarding natural resources owing to the degradation of the environment in day-to-day life. The increasing demand for fossil fuel energy is a threat to the environment, as it has many negative impacts. Hence, the production of alternative fuels developed from renewable sources has been increasing in recent years. Biodiesel production is increasing as an alternative fuel because biodiesel is biodegradable and non-toxic. It can be derived from renewable and domestic resources and, compared to petroleum-based diesel, has many advantages such as low emissions of carbon monoxide, particulate matter, and unburned hydrocarbons. In the present study, an attempt was made to produce biodiesel from waste cooking oil by the method of transesterification. The catalyst used was KOH. The highest yield obtained was 86.3%. Taguchi-grey relational analysis was used as the optimization tool to optimize the yield of the produced biodiesel. To find the significance of each selected process parameter, analysis of variance (ANOVA) was performed. The parameter with the highest contribution was found to be reaction time, followed by alcohol-to-oil molar ratio and catalyst concentration. Reaction temperature was not found to be significant and had the least contribution to the output response.

25.1 Introduction

The increasing demand for energy in today's world, due to the rapid growth of population and industrialization, has created a threat to the environment. Natural resources are declining with time, and the use of fossil fuels causes environmental degradation as it emits several greenhouse gases. Petroleum diesel is also a significant producer of air contaminants like NOx, particulate matter, SOx, volatile organic compounds, and CO. Due to their persistence in the environment, emissions

F. Ahmed (B) · S. Debbarma Department of Mechanical Engineering, NIT Silchar, Silchar, Assam 788010, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_25


of such pollutants have severe effects on human health as well as the environment as a whole [1]. Thinking about the healthy future of our environment, researchers have come forward with alternative fuels that are renewable and self-reliant, based on agricultural sources. From the studies, it is known that biodiesel is gaining attention as an alternative fuel because it is non-toxic, biodegradable, and can be made from renewable sources and domestic resources. Biodiesel offers a better combustion emission profile than petroleum-based diesel, with lower emissions of carbon monoxide, particulate matter, and unburned hydrocarbons. As it has a high flash point (150 °C), it is less flammable and safer to carry or handle than petroleum diesel [2]. Biodiesel is defined as the mono-alkyl esters of long-chain fatty acids. It can be obtained from various vegetable oils and other sources, like animal fats, waste cooking oils, greases, and algae [3]. Biodiesel is preferable in today's world because it can be produced from renewable sources, and its properties are also within the ASTM range. In our country, too, vast quantities of animal fats and waste cooking oils are available. Waste cooking oil is highly available wherever foods are cooked or fried in oil, mainly in restaurants, street food stalls, and local markets. These frying oils are derived from vegetable oils (rice bran, linseed, castor, soybean, cottonseed, peanut, sunflower, rapeseed, sesame, corn, olive, palm, palm kernel, coconut, and a wide range of other plant sources) as well as from animal fats/oils [4]. From an economic perspective, commercial set-ups use the same oil/fats many times or continuously, but repeated frying causes major physical changes in the oil. Some common physical changes are increased viscosity, increased free fatty acids, an increment in the specific heat, and a change in colour. This way of cooking also affects the vegetable oil: diglycerides, monoglycerides, and free fatty acids (FFAs) are produced as the triglycerides degrade, and the FFAs in the WCO increase as a result of repeatedly frying the same oil. So, consuming such oils is not healthy [1, 5]. To increase the production of biodiesel, optimization techniques are used. Various optimization techniques have been developed in recent times. The available optimization tools are helpful for saving both cost and time, and their results are reproducible. This technique is recommended for multiple objectives: it converts the different responses into a unified grey relational grade, and after this transformation a response table is developed from which the best process variables can be chosen [6]. Taguchi-grey relational analysis has been used in this paper to carry out the optimization of the process parameters. The experiments in the present study were performed with an L9 orthogonal layout, with the parameters (reaction temperature, catalyst concentration, reaction time, and alcohol-to-oil ratio) each at three levels. A variety of techniques can be used for the production of biodiesel; the procedure utilized to make biodiesel in this work was the transesterification method.


25.2 Work Procedures

25.2.1 Collection of the Feedstock

Waste cooking oil was used as the feedstock in this work to produce biodiesel. Waste cooking oil (WCO) is highly available in our country, as many restaurants and street food stalls use a lot of oil for frying various dishes, which is later wasted. Its disposal creates a major problem for the environment. While some of this WCO is used to make soap, the majority of it is released into the environment, and some is illegally dumped into rivers, causing environmental pollution, human health concerns, and disturbance of the aquatic ecosystem. So, instead of causing problems to the environment, it can be used effectively as a feedstock for biodiesel production, which is also cost-efficient. It offers significant advantages such as a reduction in environmental pollution, and it reduces the cost of biodiesel production, since it is otherwise a wasted product, readily available and inexpensive compared to pure vegetable oils. Consequently, using WCO as a feedstock greatly improves the financial profitability of biodiesel production [7]. In this work, the feedstock was collected from the NIT Silchar hostel mess.

25.2.2 Transesterification Method

Different methods can be chosen for the production of biodiesel; in this work, transesterification was used. From the literature, this is considered the most widespread chemical route: alcohol reacts with the triglycerides of fatty acids (vegetable oil) in the presence of a catalyst, and glycerol is created as a by-product [8]. In the present work, the alkali-catalytic route was used, as the FFA content of the feedstock was less than 1% [9]. Here, the triglyceride and alcohol react in the presence of an alkaline catalyst. Potassium hydroxide (KOH) and sodium hydroxide (NaOH) are the most widely utilized homogeneous base catalysts because they catalyse reactions at lower temperatures and pressures, give higher conversion rates, and are conveniently accessible and inexpensive [10, 11]. The alcohol used in the present work was methanol. Several process variables can influence the yield of the biodiesel produced by transesterification, such as reaction time, alcohol-to-oil molar ratio, reaction temperature, catalyst concentration, the amount of free fatty acids present, and the water content. After transesterification, warm distilled water is used to wash the FAME, and after washing, the FAME is dried by heating on a magnetic stirrer to remove the moisture. The process of transesterification is shown in Fig. 25.1.


Fig. 25.1 Preparation of biodiesel: a transesterification process; b separation of FAME and glycerol; c washing process of FAME; d drying process of FAME

25.2.3 Design Strategy

In this work, Taguchi's design of experiments is applied to construct the experimental array while selecting four process parameters, i.e. catalyst concentration (%), alcohol-to-oil molar ratio, reaction temperature (°C), and reaction time (min). As seen in Table 25.1, these parameters are varied over three levels. In the Taguchi approach, predefined orthogonal arrays are available which help to perform the experiment with the minimum number of runs at the different specified levels [12, 13]. The process parameters are considered the main factors affecting the objective function, i.e. the yield of the biodiesel. The levels and parameters were determined with the help of a literature review.

Table 25.1 Process parameters and their levels

Designation | Process parameters          | Level 1 | Level 2 | Level 3
A           | Methanol to oil molar ratio | 4:1     | 6:1     | 9:1
B           | Catalyst amount (% w/w)     | 0.5     | 1       | 1.5
C           | Reaction time (min)         | 60      | 75      | 90
D           | Reaction temperature (°C)   | 50      | 55      | 60


Table 25.2 Design of experiments using L9 orthogonal array

Run | A | B | C | D
1   | 1 | 1 | 1 | 1
2   | 1 | 2 | 2 | 2
3   | 1 | 3 | 3 | 3
4   | 2 | 1 | 2 | 3
5   | 2 | 2 | 3 | 1
6   | 2 | 3 | 1 | 2
7   | 3 | 1 | 3 | 2
8   | 3 | 2 | 1 | 3
9   | 3 | 3 | 2 | 1

25.2.3.1 The Fewest Number of Runs Needed

In this study, the least number of experiments necessary for the Taguchi design of experiments can be calculated by the degree-of-freedom approach. The minimum number of experiments must be greater than or equal to the total degrees of freedom:

Degrees of freedom = F × (L − 1) + 1 = 4 × (3 − 1) + 1 = 9 = L9,  (25.1)

where F is the number of factors and L is the number of levels. The L9 orthogonal array design for the selected process parameters and their levels is given in Table 25.2.
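The degree-of-freedom rule of Eq. (25.1) and the balance of the L9 array in Table 25.2 can be checked with a short sketch (illustrative code, not part of the original chapter):

```python
# Minimum number of Taguchi runs from the degree-of-freedom rule, Eq. (25.1).
def min_runs(n_factors, n_levels):
    return n_factors * (n_levels - 1) + 1

print(min_runs(4, 3))   # 9, hence the L9 orthogonal array

# L9 orthogonal array from Table 25.2 (columns A, B, C, D; levels 1-3).
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Balance check: every level appears exactly three times in every column.
for col in range(4):
    counts = {lvl: sum(1 for row in L9 if row[col] == lvl) for lvl in (1, 2, 3)}
    assert counts == {1: 3, 2: 3, 3: 3}
```

The array is also pairwise orthogonal: for any two columns, each of the nine level combinations occurs exactly once, which is what lets nine runs estimate all four main effects.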

25.2.4 Taguchi-Grey Relational Analysis

The Taguchi approach handles single-response optimization; for multiple-response optimization, the method becomes complex [14]. The grey Taguchi approach, an advanced form of the Taguchi method, is recommended for multiple-response optimization. In grey relational analysis, the experimental values are first normalized between 0 and 1 using the equations given below. The grey relational coefficient is then found from the normalized data [15, 16]. The next step is to calculate the cumulative grey relational grade for the corresponding responses, obtained by averaging the determined grey coefficients [17]. Two conditions are applicable in this procedure, i.e. "larger-the-better" or "smaller-the-better". In this work, the "larger-the-better" condition shown in Eq. (25.2) is used to maximize the production of biodiesel made


from used cooking oil.

x*_i(k) = (x^o_i(k) − min x^o_i(k)) / (max x^o_i(k) − min x^o_i(k)),  (25.2)

where x*_i(k) is the normalized value, x^o_i(k) is the initial sequence, i stands for the number of the experiment (i = 1, 2, …), and k represents the number of the response (k = 1, 2, …). The grey relational coefficient is expressed as

ζ_i(k) = (Δmin + ζ · Δmax) / (Δ_oi(k) + ζ · Δmax),  (25.3)

where Δ_oi(k) is the deviation from the reference sequence, and Δmin and Δmax are its minimum and maximum values. In the calculation of the grey relational coefficient, the distinguishing factor ζ is used, whose value lies between 0 and 1; usually ζ is taken as 0.5. The coefficients for each level of the inputs are averaged to determine the grey grade from the calculated grey coefficients. Its value lies within the range 0–1.
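As an illustration, Eqs. (25.2) and (25.3) can be applied to the yields of Table 25.3 to reproduce the grades of Table 25.4 and the response-table entries of Table 25.5. This is a sketch assuming ζ = 0.5 and a single response, so the grade equals the coefficient:

```python
# Yields from Table 25.3, in run order.
yields = [81.92, 78.86, 78.53, 82.48, 79.63, 86.3, 79.24, 83.49, 84.53]

y_min, y_max = min(yields), max(yields)

# Eq. (25.2): larger-the-better normalization to [0, 1].
norm = [(y - y_min) / (y_max - y_min) for y in yields]

# Deviation sequence, then Eq. (25.3) with distinguishing factor zeta = 0.5.
zeta = 0.5
dev = [1 - x for x in norm]
grc = [(min(dev) + zeta * max(dev)) / (d + zeta * max(dev)) for d in dev]

# With a single response the grey relational grade equals the coefficient.
grade = grc

# Response table entry (Table 25.5): mean grade at level 1 of factor C
# (reaction time), using the C column of the L9 array in Table 25.2.
C = [1, 2, 3, 2, 3, 1, 3, 1, 2]
c1 = sum(g for g, c in zip(grade, C) if c == 1) / 3
print(round(c1, 5))   # matches the C / Level 1 cell of Table 25.5
```

Run 6 gets grade 1 (the best yield, rank 1), and the level means reproduce Table 25.5 to the printed precision.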

25.2.5 Analysis of Variance (ANOVA)

ANOVA is a method for determining the importance of each selected process parameter to the outcome response. With the help of this technique, the role that each process parameter plays towards the output can be determined, and by controlling these parameters the process can be improved [17].
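A minimal sketch of the ANOVA bookkeeping follows, taking the sums of squares and degrees of freedom from Table 25.6 as given (the raw replicate data behind the SS values are not reproduced in the chapter):

```python
# Sums of squares (SS) and degrees of freedom as reported in Table 25.6.
ss = {
    "Alcohol to oil ratio": 33.4,
    "Catalyst concentration": 25.65,
    "Reaction time": 60.34,
    "Reaction temperature": 1.038,
}
ss_error, dof_factor, dof_error, f_tab = 3.456, 2, 9, 4.26

ss_total = sum(ss.values()) + ss_error
mse = ss_error / dof_error          # mean square of the error term

anova = {}
for name, s in ss.items():
    ms = s / dof_factor             # MS = SS / DOF
    f_cal = ms / mse                # F value = MS / MSE
    pct = 100.0 * s / ss_total      # percentage contribution
    anova[name] = (f_cal, pct, f_cal > f_tab)

for name, (f_cal, pct, sig) in anova.items():
    print(f"{name}: F = {f_cal:.2f}, {pct:.2f}%, significant = {sig}")
```

The percentage contributions match Table 25.6 to two decimals; the F values differ slightly from the tabulated ones because the table rounds the error mean square to 0.38 before dividing.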

25.3 Results and Discussion

The yield of the biodiesel produced from used cooking oil was obtained with the assistance of the L9 orthogonal array. The transesterification method was carried out in the present work, as this method is commonly used for biodiesel production. The catalyst plays a significant role in the reaction, and the required temperature needs to be maintained throughout the reaction time. Following the separation of glycerol and fatty acid methyl ester (FAME), a washing procedure is required to remove the soap from the biodiesel. Thereafter, the FAME is dried so that the moisture can be removed. The yields of the nine experiments are given in Table 25.3. It can be noticed from the table that a maximum yield of 86.3% was achieved from waste cooking oil biodiesel. To optimize the yield percentage, Taguchi-grey relational analysis has been used in this work. Following


Table 25.3 Obtained yield percentage

Run | A | B | C | D | Yield (%)
1   | 1 | 1 | 1 | 1 | 81.92
2   | 1 | 2 | 2 | 2 | 78.86
3   | 1 | 3 | 3 | 3 | 78.53
4   | 2 | 1 | 2 | 3 | 82.48
5   | 2 | 2 | 3 | 1 | 79.63
6   | 2 | 3 | 1 | 2 | 86.3
7   | 3 | 1 | 3 | 2 | 79.24
8   | 3 | 2 | 1 | 3 | 83.49
9   | 3 | 3 | 2 | 1 | 84.53

the normalization of the data and the deviation sequence, the grey relational coefficient is generated for each response. The larger-the-better criterion is employed since the yield needs to be higher. Table 25.4 shows how the grey relational grade is calculated. Based on Table 25.5, it can be observed that the setting with the highest level is A2, B3, C1, and D2, i.e. alcohol-to-oil ratio 6:1, catalyst concentration 1.5%, reaction time 60 min, and reaction temperature 55 °C. The parameter with the highest value is reaction time. Therefore, to determine the importance of each process variable and its contribution to the output response, ANOVA is performed. The percentage contributions are given in Table 25.6. The F_tab value is 4.26; the parameters whose F_cal value is greater than F_tab are considered significant. From the table, it can be observed that alcohol-to-oil ratio, reaction time, and catalyst concentration are considered to be

Table 25.4 Normalizing, deviation sequence, grey relational coefficient, grey relational grade, and rank

Run | Yield (%) | Normalizing | Deviation | Grey relational coefficient | Grade   | Rank
1   | 81.92     | 0.43629     | 0.56371   | 0.47005                     | 0.47005 | 5
2   | 78.86     | 0.04247     | 0.95753   | 0.34305                     | 0.34305 | 8
3   | 78.53     | 0           | 1         | 0.33333                     | 0.33333 | 9
4   | 82.48     | 0.50836     | 0.49163   | 0.50422                     | 0.50422 | 4
5   | 79.63     | 0.14157     | 0.85848   | 0.36807                     | 0.36807 | 6
6   | 86.3      | 1           | 0         | 1                           | 1       | 1
7   | 79.24     | 0.09137     | 0.90862   | 0.35496                     | 0.35496 | 7
8   | 83.49     | 0.63835     | 0.36165   | 0.58028                     | 0.58028 | 3
9   | 84.53     | 0.77220     | 0.22780   | 0.68700                     | 0.68700 | 2


Table 25.5 Response table of grey relational grade

Parameters | Level 1 | Level 2  | Level 3 | Max–min | Rank
A          | 0.38214 | 0.62409  | 0.54074 | 0.24195 | 3
B          | 0.44307 | 0.43046  | 0.67343 | 0.24297 | 2
C          | 0.68344 | 0.51142  | 0.35211 | 0.33133 | 1
D          | 0.50837 | 0.566    | 0.4726  | 0.0934  | 4

Table 25.6 ANOVA for yield percentage

Parameters             | SS     | DOF | MS = SS/DOF | F value = MS/MSE | F 0.05,2,9 | % contribution
Alcohol to oil ratio   | 33.4   | 2   | 16.7        | 43.94            | 4.26       | 26.95
Catalyst concentration | 25.65  | 2   | 12.82       | 33.73            | 4.26       | 20.70
Reaction time          | 60.34  | 2   | 30.17       | 79.39            | 4.26       | 48.70
Reaction temperature   | 1.038  | 2   | 0.51        | 1.34             | 4.26       | 0.83
Error                  | 3.456  | 9   | 0.38        |                  |            |
Total                  | 123.89 | 17  |             |                  |            |

significant, while reaction temperature is the insignificant parameter. The parameter with the highest contribution to the output response is reaction time (48.70%), followed by alcohol-to-oil ratio (26.95%) and catalyst concentration (20.70%). The least contribution was from reaction temperature, i.e. 0.83%. Hence, it can be observed that this parameter did not significantly affect the production of biodiesel made from waste cooking oil (WCO) as compared to the other parameters.

25.4 Conclusion

Biodiesel has been gaining attention worldwide in recent days as a non-conventional fuel that can be made from renewable and domestic sources. In this work, biodiesel was created by transesterification utilizing waste cooking oil as feedstock. Based on the literature, the selected process parameters were alcohol-to-oil molar ratio, reaction time, reaction temperature, and catalyst concentration. The alcohol chosen for the reaction was methanol. It was observed in this study that the catalyst plays an important part in the making of biodiesel; the catalyst used in the present study was potassium hydroxide (KOH). The optimization tool used in this study was Taguchi-grey relational analysis. The levels with the highest values were found to be A2, B3, C1, and D2. To find the significance of each factor, ANOVA was used. The parameters found to be significant were


alcohol-to-oil ratio, reaction time, and catalyst concentration. The parameter identified as insignificant was reaction temperature. The highest contributing parameter towards the output response (yield) was reaction time (48.70%), followed by alcohol-to-oil molar ratio (26.95%) and then catalyst concentration (20.70%). The least contribution was from reaction temperature (0.83%). Hence, it can be concluded that, rather than disposing of the waste cooking oil and polluting the environment, it can be efficiently used for the production of biodiesel.

References

1. Kulkarni, M.G., Dalai, A.K.: Waste cooking oil: an economic source for biodiesel: a review. Ind. Eng. Chem. Res. 45, 2901–2913 (2006)
2. Zhang, Y., Dube, M.A., McLean, D.D., Kates, M.: Biodiesel production from waste cooking oil: 1. Process design and technological assessment. Bioresour. Technol. 89, 1–16 (2003)
3. Wayan Sutapa, I., Latuputy, L., Tellusa, I.: Production of biodiesel from Calophyllum inophyllum L. oil by lipase enzyme as biocatalyst. Int. J. Eng. Sci. Res. Technol. ISSN: 2277-9655
4. Said, N.H., et al.: Review of the production of biodiesel from waste cooking oil using solid catalysts. J. Mech. Eng. Sci. (JMES) (2015)
5. Raqeeb, M.A., Bhargavi, R.: Biodiesel production from waste cooking oil. J. Chem. Pharm. Res. 7(12), 670–681 (2015)
6. Cortes, G.C., Carrillo, T.R., Peasco, I.Z., Avila, J.R., Ake, J.Q., Cruz, J.M.: Microcosm assays and Taguchi experimental design for treatment of oil sludge containing high concentration of hydrocarbons. Bioresour. Technol. 100, 5671–5677 (2009)
7. Chhetri, A.B., et al.: Waste cooking oil as an alternate feedstock for biodiesel production. Energies 1, 3–18 (2008)
8. Sinha, S., Agarwal, A.K., Garg, S.: Biodiesel development from rice bran oil: transesterification process optimization and fuel characterization. Energy Convers. Manag. 49, 1248–1257 (2008)
9. Karmakar, A., Karmakar, S., Mukherjee, S.: Properties of various plants and animals feedstock for biodiesel production. Bioresour. Technol. 101, 7201–7210 (2010)
10. Gebremarian, S.N., Marchetti, J.M.: Biodiesel production technologies: review. AIMS Energy 5(3), 425–457 (2017)
11. Talha, N.S., Sulaiman, S.: Overview of catalysts in biodiesel production. ARPN J. Eng. Appl. Sci. 11(1) (2016). ISSN: 1819-6608
12. Kechagias, J., Aslani, K.E., Fountas, N.A., Vaxevanidis, N.M., Manolakos, D.E.: A comparative investigation of Taguchi and full factorial design for machinability prediction in turning of a titanium alloy. Measurement 151, 107213 (2020)
13. Padhi, S.K., Sahu, R.K., Mahapatra, S.S., et al.: Optimization of fused deposition modelling using finite element analysis and knowledge-based library. Virtual Phys. Prototyp. 13, 177–190 (2017)
14. Chen, K.T., Kao, J.Y., Hsu, C.Y., Da Hong, P.: Multi-response optimisation of mechanical properties for ZrWN films grown using grey Taguchi approach. Ceram. Int. 45, 327–333 (2019)
15. Senthikumar, N., Tamizharasan, T., Anandakrishnan, V.: Experimental investigation and performance analysis of cemented carbide inserts of different geometries using Taguchi based grey relational analysis. Measurement 58, 520–536 (2014)


16. Sahu, P.K., Pal, S.: Multi-response optimization of process parameters in friction stir welded AM20 magnesium alloy by Taguchi grey relational analysis. J. Magnesium Alloys 3, 36–46 (2015)
17. Deepanraj, B., Sivasubramanian, V., Jayaraj, S.: Multi-response optimization of process parameters in biogas production from food waste using Taguchi-grey relational analysis. Energy Convers. Manag. 141, 420–438 (2017)

Chapter 26

A Novel D-Latch Design for Low-Power and Improved Immunity Umayia Mushtaq , Md. Waseem Akram , Dinesh Prasad , and Bal Chand Nagar

Abstract In this paper, different D-Latch designs are proposed at the 7 nm technology node using the fin field-effect transistor (FinFET) device. FinFET D-Latch circuits are designed using the INDEP technique, and a comparative analysis is carried out against conventional FinFET D-Latch designs. In addition, transmission gate-based FinFET D-Latches are designed using the ASAP7 PDK at the 7 nm technology node. Different performance parameters are analyzed, including average power dissipation, propagation delay, power delay product, and area. The average power dissipation is reduced by 7.7% in the FinFET INDEP D-Latch in comparison with the conventional FinFET D-Latch. Moreover, the power delay product is decreased in the proposed FinFET INDEP D-Latch in comparison with the conventional design. In the case of the transmission gate-based INDEP FinFET D-Latch design, average power dissipation is reduced by 14.5% as compared to the transmission gate-based FinFET D-Latch. The reliability of the FinFET D-Latch is examined using the Monte Carlo approach for 10,000 runs, which clearly demonstrates the superior performance of the FinFET INDEP D-Latch as compared to the conventional FinFET D-Latch circuit.

U. Mushtaq (B) · Md. Waseem Akram · D. Prasad Department of Electronics and Communication Engineering, Jamia Millia Islamia, New Delhi 110025, India e-mail: [email protected] Md. Waseem Akram e-mail: [email protected] B. Chand Nagar Department of Electronics and Communication Engineering, NIT Patna, Ashok Rajpath, Patna 800005, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_26


26.1 Introduction

From one technology node to the next, continuous scaling of CMOS over the past few decades has resulted in performance improvements in digital logic circuits. However, scaling of bulk CMOS poses significant challenges as a result of fundamental limitations in material and process technology. Problems with conventional bulk CMOS devices include sub-threshold leakage, gate dielectric leakage, short channel effects, and device-to-device variations [1, 2]. Nowadays, two major challenges are the increasing variability in device characteristics and the increase in standby power dissipation, which affect circuit and system reliability at advanced technology nodes. As bulk CMOS scaling reaches the physical limits of quantum mechanics and the atomic level, these challenges become more significant [3, 4]. In order to extend silicon scaling, continuous innovations in device structures and materials are taking place. One important innovation is the FinFET device. Besides this, many other devices have come into existence, including the double-gate FinFET, tri-gate FinFET, π-gate, Omega-gate, and gate-all-around (GAA) devices [5]. The electrostatic control increases from SOI FinFET to GAA devices, but at the same time the difficulty of fabrication also increases. The FinFET device has fabrication and layout similarity with conventional MOSFET devices. Therefore, FinFET devices are one of the finest options for designing digital logic circuits due to their low leakage current, high drain current, and reduced switching voltage [6]. In multi-gate devices, CMOS scaling can be continued by vanquishing the obstacles posed by continuous scaling. The best feature of FinFET technology is the ability to control the front and back gates independently with proper biasing. This feature of the FinFET double-gate device helps to control the device current and threshold voltage, and by controlling threshold voltage changes, power dissipation can be regulated. These potential capabilities of the FinFET device bridge the gap between non-silicon devices and bulk CMOS [5]. FinFET could be among the best alternatives to bulk CMOS transistors in many designs. Due to the huge demand for and utilization of sequential logic circuits and storage elements in the modern semiconductor industry, low-area and high-performance designs of different logic circuits are devised from time to time. FinFET finds numerous applications in digital logic circuit design, such as FinFET SRAM design at the 7 nm technology node [7], combinational circuit design, and different logic gate designs, due to its low leakage power dissipation [8]. D-Latch circuits, being an important component of sequential logic design, have been designed from time to time by various researchers throughout the world. In order to achieve an optimal design of the D-Latch in relation to power dissipation, propagation delay, and area, different designs are available in the literature [9–12]. For amplification and switching applications in electronic circuit and system design, the MOSFET is the most widely used device. But as technology scales below 30 nm, short channel effects increase drastically in conventional MOSFET devices. The problems faced by conventional MOSFET design can be overcome by the use of multi-gate devices like double-gate FinFETs and tri-gate FinFET devices. These devices have better electrostatic control over the channel due to the increased number of gates. In addition, if one gate fails to maintain control


Fig. 26.1 a Planar device [13]; b FinFET device [13]

of the channel, the other gate takes control, which reduces leakage current and short channel effects. The multi-gate FET (FinFET), shown in Fig. 26.1b, shows better performance and lower power dissipation than the planar MOSFET shown in Fig. 26.1a [13]. Mostly, CMOS technology is used in the design of conventional circuits. Here, circuits are designed using FinFET technology to provide significant improvements in propagation delay, power dissipation, and power delay product using the ASAP7 PDK [14]. In this paper, a FinFET D-Latch is designed using the FinFET INDEP technique, and a comparative analysis is performed. In Sect. 26.2, a brief description of the ASAP7 PDK is given. Section 26.3 considers the low-power FinFET INDEP technique used to design the D-Latch. In Sect. 26.4, the proposed D-Latch circuits are discussed and the simulation results are presented; a comparison based on area is also performed in this section. Section 26.5 describes the Monte Carlo analysis, and Sect. 26.6 includes the conclusion.
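The figures of merit used throughout the comparison can be sketched as follows. The formulas are standard definitions, but the numeric values below are illustrative placeholders on nW/ps scales typical of 7 nm logic, not the paper's measured results:

```python
def power_delay_product(p_avg, t_pd):
    """Energy per switching event (joules) from average power (W) and delay (s)."""
    return p_avg * t_pd

def percent_reduction(baseline, proposed):
    """How much lower `proposed` is than `baseline`, in percent."""
    return 100.0 * (baseline - proposed) / baseline

# Illustrative placeholder numbers only, NOT values from the paper:
pdp_conventional = power_delay_product(120e-9, 15e-12)     # conventional D-Latch
pdp_indep = power_delay_product(110.8e-9, 15.5e-12)        # INDEP D-Latch

print(f"PDP reduced by {percent_reduction(pdp_conventional, pdp_indep):.1f}%")
```

The same `percent_reduction` arithmetic is what underlies the reported 7.7% and 14.5% power savings.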

26.2 ASAP7: A 7 nm FinFET PDK

In this paper, different D-Latch styles are designed using the ASAP7 process design kit (PDK). The ASAP7 PDK is used because it reflects current technology capability and realistic design assumptions regarding the lithographic steps [14]. The parameters used in our design are a nominal supply voltage of 0.7 V, a gate length of 20 nm, a fin width of 7 nm, a fin height of 32 nm, and an oxide thickness of 2.1 nm. A comparative analysis is provided between different conventional D-Latch designs in FinFET technology and the proposed ones based on different leakage-reducing techniques. The ASAP7 PDK provides BSIM-CMG SPICE models for several device flavors: LVT, SLVT, RVT, and SRAM. The SRAM devices in the ASAP7 PDK can be used to design retention latches and other low-power circuits because of their reduced overlap capacitance and gate-induced drain leakage. The device used in our design is the SRAM device of the ASAP7 PDK at the TT process corner.


U. Mushtaq et al.

26.3 FinFET INDEP Technique

A major issue facing the modern electronics industry is the increase in power dissipation at lower technology nodes, and many low-leakage techniques have therefore been proposed over time. In this paper, different D-Latch circuits, in both the conventional and transmission gate styles, are proposed using 7 nm FinFET technology. D-Latch designs are also devised using the FinFET INDEP technique [15], and a comparison is carried out between the conventional circuits and those using the FinFET INDEP technique. The parameters used for the analysis are propagation delay, average power dissipation, power delay product, and area. In this technique, two extra FinFET transistors (one p-type and one n-type) are placed between the pull-up and pull-down networks, as shown in Fig. 26.2. These extra transistors are input-dependent (INDEP) transistors. By controlling their inputs, the leakage power dissipation can be decreased: the inputs are adjusted to increase the number of OFF transistors in the path from vdd to ground without affecting the logic circuit outputs. As the number of OFF transistors increases, the resistance from vdd to ground increases, which reduces the leakage power dissipation. The technique increases propagation delay to a small extent, but proper selection of the inputs to the FinFET INDEP transistors keeps this increase small. Stacked transistors usually reduce leakage current at the cost of a delay penalty; in the FinFET INDEP technique, the proper selection of inputs to the INDEP FinFET transistors reduces this penalty.
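The intuition behind the OFF-transistor stacking described above can be captured in a deliberately simple model (this is an illustrative sketch, not the paper's SPICE simulation; the off-resistance value is an arbitrary assumption, and real stacking effects are stronger than this linear series-resistance picture suggests):

```python
# Toy illustration of the INDEP/stacking intuition: model each OFF transistor
# as a series off-resistance, so more OFF devices between vdd and ground mean
# higher total resistance and lower leakage current.

def leakage_current(vdd: float, n_off: int, r_off: float = 1e9) -> float:
    """Leakage (A) through n_off series OFF transistors of off-resistance r_off (ohm)."""
    if n_off < 1:
        raise ValueError("at least one OFF transistor is required")
    return vdd / (n_off * r_off)

base = leakage_current(0.7, 1)      # conventional path: one OFF device
stacked = leakage_current(0.7, 3)   # INDEP path: two extra OFF transistors
assert stacked < base               # more OFF devices -> less leakage
```

In the actual circuit the reduction is larger than this linear model predicts, because stacking also raises the intermediate node voltages and the effective threshold, but the monotonic trend is the point of the technique.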
Besides this, the propagation delay is reduced by instantly turning ON the FinFET transistors that form the Boolean logic for any particular input and by increasing the width of the extra inserted INDEP FinFET transistors.

Fig. 26.2 Schematic of FinFET INDEP technique [15]


26.4 Proposed D-Latch FinFET Designs at 7 nm Technology Node

Nearly every sequential circuit contains latches, so latches can be considered the building blocks of sequential circuits. A latch stores a single bit of information. When the clock input is low, the output of the FinFET D-Latch remains in its previous state; when the clock input is high, the output of the D-Latch follows the ‘Din’ input. Figure 26.3 shows the conventional FinFET D-Latch [13]. The working of the FinFET D-Latch using the ASAP7 PDK at the 7 nm technology node is validated as displayed in Fig. 26.4. The operation of the D-Latch can be verified using Fig. 26.3. When the clock (CLK) input is high, transistors MP2 and MN1 are ON. So, if Din equals logic “1”, MP1 is turned OFF and MN2 is turned ON; logic “0” is sent to the next node, the logic at this node is inverted, and the output “Q” equals “1”. The output data in this case is stored by MP4 and MP5. Now, when the CLK input is low, the FinFET transistors MP2 and MN1

Fig. 26.3 Schematic of FinFET D-Latch [13]
Fig. 26.4 Transient behavior of FinFET D-Latch


are OFF. So, if Din equals “0”, no data is transferred to the next node. Transistors MP5 and MN4 are ON because the CLK input is low. Since the previous state of output “Q” is logic “1”, transistors MP4 and MN5 are OFF and ON, respectively, so logic “0” is sent to the next node. This logic is inverted by the inverter formed by transistors MP3 and MN3; therefore, output “Q” equals logic “1”. From this discussion we conclude that when the CLK input is high the output equals “Din”, but when the CLK input goes low the previous state of the output is retained, as Fig. 26.4 also verifies. Figure 26.5 illustrates the FinFET INDEP D-Latch with two additional inserted FinFET transistors. The extra transistors MP6 and MN6 are input-dependent and are referred to as FinFET INDEP transistors. The inputs V1 and V2 are adjusted so that the output of the FinFET D-Latch is not affected: they increase the number of OFF transistors in the path from vdd (supply voltage) to ground without affecting the functionality of the D-Latch. The increased resistance from vdd to ground caused by the larger number of OFF transistors lowers the power dissipation of the D-Latch circuit. Due to the INDEP FinFET transistors, the propagation delay of the FinFET INDEP D-Latch is only slightly higher than that of the conventional FinFET D-Latch circuit; the penalty is kept small by the proper choice of inputs to the FinFET INDEP transistors MN6 and MP6 in Fig. 26.5. The voltage levels selected for V1 and V2 from the different input voltage levels are shown in Fig. 26.6, which also verifies the functionality of the FinFET INDEP D-Latch. All simulations are performed using the Cadence Virtuoso tool.
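The transparent/opaque behavior just described can be summarized by a small behavioral model (a sketch of the level-sensitive latch semantics only, not of the transistor-level circuit):

```python
# Behavioral model of the level-sensitive D-Latch described above:
# transparent when CLK is high (Q follows Din), opaque when CLK is low
# (Q holds its previous value), mirroring the waveforms in Fig. 26.4.

class DLatch:
    def __init__(self, q: int = 0):
        self.q = q  # stored output bit

    def step(self, clk: int, din: int) -> int:
        if clk:          # transparent mode: output follows the data input
            self.q = din
        return self.q    # opaque mode: previous state is retained

latch = DLatch()
assert latch.step(clk=1, din=1) == 1  # CLK high: Q = Din
assert latch.step(clk=0, din=0) == 1  # CLK low: Q holds the previous "1"
```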
Different performance parameters like average power dissipation, propagation delay, and power delay product are calculated for FinFET D-Latch and INDEP FinFET D-Latch at 7 nm technology node, and the results are summarized in Table 26.1.

Fig. 26.5 Schematic of proposed FinFET INDEP D-Latch


Fig. 26.6 Transient behavior of proposed FinFET INDEP D-Latch

Table 26.1 Performance parameters of different D-Latch designs

Technique                    | Avg. power dissipation (nW) | % avg. power saving | Propagation delay (ps) | Power delay product (aJ)
Conventional FinFET D-Latch  | 8.07                        | –                   | 16.82                  | 0.135
INDEP FinFET D-Latch         | 7.45                        | 7.7                 | 17.57                  | 0.130
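As a sanity check, the power delay products and percentage power saving quoted in Table 26.1 can be recomputed from the power and delay figures (this only verifies arithmetic consistency of the reported numbers; it is not a re-simulation):

```python
# Cross-check of Table 26.1: power delay product = average power x delay,
# percentage saving = relative power reduction vs. the conventional design.

def pdp_aj(power_nw: float, delay_ps: float) -> float:
    """Power delay product in attojoules, from power in nW and delay in ps."""
    return power_nw * 1e-9 * delay_ps * 1e-12 / 1e-18

def saving_pct(ref: float, new: float) -> float:
    """Percentage reduction of `new` relative to `ref`."""
    return 100.0 * (ref - new) / ref

assert abs(pdp_aj(8.07, 16.82) - 0.135) < 0.001  # conventional: ~0.135 aJ
assert abs(pdp_aj(7.45, 17.57) - 0.130) < 0.001  # INDEP: ~0.130 aJ
assert abs(saving_pct(8.07, 7.45) - 7.7) < 0.1   # ~7.7% power saving
```

The same two helpers reproduce the TG-based figures reported later (14.5% power saving and the ~10% power delay product reduction).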

It is clear from Table 26.1 that the average power dissipation of the INDEP FinFET D-Latch is 7.7% lower than that of the conventional FinFET D-Latch circuit at the 7 nm technology node. The propagation delay is slightly higher in the INDEP FinFET D-Latch, but the power delay product is lower than in the conventional design. D-Latches are also designed using transmission gates (TG) [10]. The FinFET transmission gate consists of a p-type and an n-type FinFET device connected at their drain and source terminals. Both device types are used so that the transmission gate passes the full logic swing: the n-type device passes a low level well and the p-type device passes a high level well. The input is passed through the transmission gate when both transistors are ON. The design of the TG-based FinFET D-Latch is shown in Fig. 26.7, and its transient behavior in Fig. 26.8. When the CLK input is high, TG1 is ON and TG2 is OFF; the output “Q” equals “Din”, and the latch is in transparent mode. When the CLK input is low, TG1 is OFF and TG2 is ON; the value of “Q” present just before CLK went low is fed back, inverted, and fed to the other inverter. As long as TG2 is ON, this value of “Q” circulates between the two inverters and is thus stored. This mode of operation is called opaque mode, and it explains how the latch stores data. In the same way, an INDEP FinFET transmission


Fig. 26.7 Schematic of transmission gate-based FinFET D-Latch [10]
Fig. 26.8 Transient behavior of transmission gate-based FinFET D-Latch

D-Latch is designed and different performance parameters are calculated, as shown in Table 26.2. In the transmission gate FinFET INDEP D-Latch, power dissipation is reduced by 14.5% and power delay product by 10.25% compared with the conventional TG-based D-Latch circuit, as is evident from Table 26.2. This shows that the FinFET INDEP technique gives better results at the 7 nm technology node and can be applied to design low-power circuits at advanced technology nodes.

Table 26.2 Performance parameters of TG-based different D-Latch designs

Technique                       | Avg. power dissipation (nW) | % avg. power saving | Propagation delay (ps) | Power delay product (aJ)
Conventional TG FinFET D-Latch  | 9.78                        | –                   | 16.04                  | 0.156
INDEP TG FinFET D-Latch         | 8.36                        | 14.5                | 16.78                  | 0.140


Fig. 26.9 Layout of FinFET D-Latch circuit at 7 nm technology node

26.4.1 Layout Description of D-Latch Circuit

The layouts of the FinFET D-Latch and the INDEP FinFET D-Latch are designed at the 7 nm technology node using the Microwind layout design tool [16]. The two layouts, shown in Figs. 26.9 and 26.10 respectively, are compared in terms of area. The FinFET D-Latch layout has a fin count of 1, a fin thickness of 4 nm, and a fin height of 35 nm. The area of the FinFET D-Latch is 0.119 μm², while that of the INDEP FinFET D-Latch is 0.126 μm², an increase of only 5.8%. The area increases slightly because of the additional INDEP FinFET transistors. The same approach may be applied to the design of further D-Latch circuits, and the trend of area increase is expected to be similar.
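The quoted area overhead can be recomputed directly from the two layout areas (a consistency check only; the ~0.1% difference from the quoted 5.8% is rounding):

```python
# Relative area overhead of the INDEP FinFET D-Latch layout over the
# plain FinFET D-Latch layout, from the areas reported above.
area_conventional = 0.119  # um^2, FinFET D-Latch
area_indep = 0.126         # um^2, INDEP FinFET D-Latch

increase_pct = 100.0 * (area_indep - area_conventional) / area_conventional
assert 5.5 < increase_pct < 6.0  # ~5.9%, consistent with the ~5.8% quoted
```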

26.5 Monte Carlo Analysis of FinFET D-Latch Designs at 7 nm Technology Node

At lower technology nodes, the performance of logic circuits is deeply affected by process, voltage, and temperature (PVT) variations. The Monte Carlo approach is used to evaluate the reliability of logic circuits [7]. The PVT parameters, which include channel width, channel length, supply voltage, temperature, and threshold voltage of the devices, are varied by ±10% from their nominal values with a Gaussian 3σ distribution [17]. The analysis is performed for 10,000 samples at the TT process corner. Uncertainties in terms of statistical parameters, namely the mean (μ) and


Fig. 26.10 Layout of proposed INDEP FinFET D-Latch circuit at 7 nm technology node

standard deviation (σ), are calculated for the different FinFET D-Latch designs, as shown in Tables 26.3 and 26.4. Figures 26.11 and 26.12 show the Monte Carlo distributions of the average power dissipation of the FinFET D-Latch and the FinFET INDEP D-Latch, respectively, at the 7 nm technology node. From Tables 26.3 and 26.4, it can be seen that the FinFET INDEP D-Latch has better performance parameters than the conventional FinFET D-Latch at the 7 nm technology node, both with and without transmission gates. Therefore, the FinFET INDEP approach can be used to design D-Latches at lower technology
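The sampling procedure described above can be sketched as follows (a hedged illustration: the ±10% Gaussian 3σ sampling and 10,000-run count come from the text, but the quadratic power-vs-supply model is an arbitrary stand-in for the real circuit, which was simulated in SPICE):

```python
# Sketch of the Monte Carlo procedure: draw a PVT parameter from a Gaussian
# whose 3-sigma spread is 10% of nominal, evaluate a toy power metric, and
# collect mean (mu) and standard deviation (sigma) over 10,000 samples.
import random

random.seed(42)                # reproducible run
N = 10_000                     # samples, as in the paper
VDD_NOM = 0.7                  # nominal supply voltage (V)
SIGMA = 0.10 * VDD_NOM / 3     # Gaussian 3-sigma = 10% of nominal

# Toy power metric (assumption): dynamic power scales roughly with vdd^2.
samples = [random.gauss(VDD_NOM, SIGMA) ** 2 for _ in range(N)]
mu = sum(samples) / N
sigma = (sum((s - mu) ** 2 for s in samples) / N) ** 0.5
print(f"mu = {mu:.4f}, sigma = {sigma:.4f}")
assert abs(mu - VDD_NOM ** 2) < 0.01  # mean stays close to the nominal value
```

A small σ relative to μ, as in Tables 26.3 and 26.4, is what indicates a design that is robust to PVT variation.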

26.6 Conclusion

One of the main issues the semiconductor industry is dealing with in battery-operated peripheral devices is the increase in power dissipation. In this paper, FinFET D-Latches are designed using the ASAP7 PDK at the 7 nm technology node and compared with designs using the FinFET INDEP technique. Transmission gate-based D-Latches are also designed at the 7 nm technology node, and a comparative analysis in terms of average power dissipation, propagation delay, power delay product, and area is performed against the FinFET INDEP versions. The FinFET INDEP technique reduces power dissipation and power delay product compared with the conventional FinFET D-Latch at the 7 nm technology node. A Monte Carlo analysis with 10,000 runs is performed to check the reliability of the different FinFET D-Latch designs. In light of the entire analysis, it is concluded that FinFET INDEP D-Latches have better performance parameters than conventional FinFET D-Latch circuits.

Acknowledgements We acknowledge the support of ni logic Pvt. Ltd. (ni2designs), India, for providing the Microwind software used to design the D-Latch layouts with the 7 nm FinFET foundry.


References

1. Frank, D.J., Dennard, R.H., Nowak, E., Solomon, P.M., Taur, Y., Wong, H.S.: Device scaling limits of Si MOSFETs and their application dependencies. Proc. IEEE 89(3), 259–288 (2001)
2. Hajare, R., Lakshminarayana, C., Sumanth, S.C., Anish, A.R.: Design and evaluation of FinFET based digital circuits for high speed ICs. In: 2015 International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT), pp. 162–167. IEEE (2015)
3. Chen, T.C.: Overcoming research challenges for CMOS scaling: industry directions. In: 2006 8th International Conference on Solid-State and Integrated Circuit Technology Proceedings, pp. 4–7. IEEE (2006)
4. Alioto, M.: Ultra-low power VLSI circuit design demystified and explained: a tutorial. IEEE Trans. Circuits Syst. I: Reg. Pap. 59(1), 3–29 (2012)
5. Colinge, J.P.: The SOI MOSFET: from single gate to multigate. In: FinFETs and Other Multi-gate Transistors, pp. 1–48. Springer, Boston, MA (2008)
6. Tahrim, A.B.A., Tan, M.L.P.: Design and implementation of a 1-bit FinFET full adder cell for ALU in subthreshold region. In: 2014 IEEE International Conference on Semiconductor Electronics (ICSE2014), pp. 44–47. IEEE (2014)
7. Mushtaq, U., Sharma, V.K.: Design and analysis of INDEP FinFET SRAM cell at 7-nm technology. Int. J. Numer. Model. Electron. Netw. Devices Fields 33(5), e2730 (2020)
8. Mushtaq, U., Akram, M.W., Prasad, D.: Energy efficient and variability immune adder circuits using short gate FinFET INDEP technique at 10 nm technology node. Aust. J. Electr. Electron. Eng. 1–12 (2022)
9. Alioto, M., Mita, R., Palumbo, G.: Analysis and comparison of low-voltage CML D-latch. In: 9th International Conference on Electronics, Circuits and Systems, vol. 2, pp. 737–740. IEEE (2002)
10. Rabaey, J.M., Chandrakasan, A.P., Nikolic, B.: Digital Integrated Circuits. Prentice Hall, Englewood Cliffs (2002)
11. Gerosa, G., Gary, S., Dietz, C., Pham, D., Hoover, K., Alvarez, J., Sanchez, H., Ippolito, P., Ngo, T., Litch, S., Eno, J.: A 2.2 W, 80 MHz superscalar RISC microprocessor. IEEE J. Solid-State Circuits 29(12), 1440–1454 (1994)
12. Partovi, H., Burd, R., Salim, U., Weber, F., DiGregorio, L., Draper, D.: Flow-through latch and edge-triggered flip-flop hybrid elements. In: 1996 IEEE International Solid-State Circuits Conference, Digest of Technical Papers, ISSCC, pp. 138–139. IEEE (1996)
13. Vallabhuni, R.R., Yamini, G., Vinitha, T., Reddy, S.S.: Performance analysis: D-Latch modules designed using 18 nm FinFET technology. In: 2020 International Conference on Smart Electronics and Communication (ICOSEC), pp. 1169–1174. IEEE (2020)
14. Clark, L.T., Vashishtha, V., Shifren, L., Gujja, A., Sinha, S., Cline, B., Ramamurthy, C., Yeric, G.: ASAP7: a 7-nm FinFET predictive process design kit. Microelectron. J. 53, 105–115 (2016)
15. Mushtaq, U., Akram, M., Prasad, D.: Design and analysis of energy-efficient logic gates using INDEP short gate FinFETs at 10 nm technology node. In: Advances in Micro-electronics, Embedded Systems and IoT, pp. 19–28. Springer, Singapore (2022)
16. Sicard, E.: Introducing 7-nm FinFET technology in Microwind (2017). https://hal.archives-ouvertes.fr/hal-01558775
17. Meinhardt, C., Zimpeck, A.L., Reis, R.A.: Predictive evaluation of electrical characteristics of sub-22 nm FinFET technologies under device geometry variations. Microelectron. Reliab. 54(9–10), 2319–2324 (2014)

Chapter 27

Anonymous and Privacy Preserving Attribute-Based Decentralized DigiLocker Using Blockchain Technology Puneet Bakshi and Sukumar Nandi

Abstract DigiLocker is an Aadhaar-based sharable public cloud service from the Government of India. Like other centralized services, DigiLocker may operate the system at its full discretion, which may not be fully aligned with the expected usage policy. Though DigiLocker ensures data integrity and secure data access, privacy and anonymity concerns are yet to be addressed in full; for example, an adversary may trace a user based on his/her encrypted communication messages. This paper presents a privacy preserving blockchain-based DigiLocker service which addresses these challenges using attribute-based encryption and blockchain technology.

27.1 Introduction

In 2009, the Government of India initiated a mission to assign a unique digital identity, a 12-digit number referred to as Aadhaar [1], to all citizens of India. DigiLocker is an Aadhaar-based online service which provides secure cloud storage to its subscribers. It ensures data security by mandating that all documents, requests, and responses be digitally signed by the issuer, the sender, and the receiver, respectively. All participating entities must adhere to the Digital Locker Technical Specification (DLTS) [2]. At present, DigiLocker is a centralized service which must be trusted by its registered users and agencies. Being centralized, DigiLocker can allow or disallow access to a document at its full discretion, which may not be aligned with the policies agreed upon by the owner of the document. This deprives users of the full ownership of their documents. Moreover, an adversary may trace messages from a specific user and may establish a correlation between a user

P. Bakshi (B) · S. Nandi
Indian Institute of Technology, Guwahati 781039, Assam, India
e-mail: [email protected]
S. Nandi
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_27

P. Bakshi and S. Nandi

and the data. This may deter users from accessing documents in sensitive contexts: a user may want to access a document anonymously and may also want to keep the identity of the document issuer anonymous. In this context, blockchain [3] and the InterPlanetary File System (IPFS) [4] are two promising technologies which may help address these two challenges effectively. Blockchain is a decentralized, distributed, and immutable ledger technology which facilitates tracking of assets by recording ledger transactions. IPFS is a peer-to-peer network technology which facilitates distributed sharing of data; data in IPFS is uniquely identified by the cryptographic hash of its contents.

Contributions: The proposed work makes primarily two contributions.

1. It presents a novel DigiLocker platform which is decentralized and does not require its subscribers to trust a central organization.
2. It presents a novel mechanism by which users of DigiLocker can request and retrieve data anonymously.

The remainder of this paper is organized as follows. Section 27.2 presents the related work. Section 27.3 presents the preliminaries. Section 27.4 presents the system and threat model. Section 27.5 presents the proposed scheme. Section 27.6 presents the security and privacy analysis, and Sect. 27.7 presents the conclusion.

27.2 Related Work

Though research on improving the privacy, anonymity, and trust of DigiLocker is quite nascent, a good amount of research has been done on peer-to-peer shared and decentralized environments. Notable works on anonymous peer-to-peer data sharing networks are Freenet [5] and Free Haven [6]. Freenet ensures the anonymity of issuers and requesters by data encryption and by request relaying (a virtual circuit). Freenet also emphasizes server anonymity, i.e., keeping a document anonymous from its hosting server, which it achieves by encrypting the document with a key that is external to the server. In Free Haven, documents are not encrypted, but they are divided into parts and each part is stored on a different server (a part may move dynamically from one server to another). Though both Freenet and Free Haven aim to achieve anonymity in a decentralized system, their support for providing access to specific users is still limited. Some other works use group signatures to share data anonymously [7, 8]. A group signature scheme consists of a group manager and a set of group members. The group manager holds a special group manager private key from which the private keys of all other group members are derived, and publishes a single public key for the whole group. A group member can create a signature which can be validated using the group public key without revealing the individual identity of the signer, hence keeping the signer anonymous. Though group signatures are an effective scheme, they require group members to trust the group manager.


Another set of works uses Ciphertext-Policy Attribute-Based Encryption (CP-ABE) [9] to achieve anonymity. In CP-ABE, the private keys of users are generated based on the users' attributes, and ciphertexts are generated based on access policies. Only if a user's attributes match the access policy can (s)he decrypt the ciphertext. Though CP-ABE is a promising scheme, the access policy is itself sensitive data, and making it available along with the ciphertext may reveal important information.

27.3 Preliminaries

DigiLocker DigiLocker is an online cloud storage service which lets agencies (issuers) issue documents to document owners (users) and lets clients (requesters) seek documents from document owners. Each document is uniquely identified by a Uniform Resource Identifier (URI) of the form ⟨IssuerID::DocType::DocID⟩, where IssuerID is a unique identifier of the issuer, DocType is the document classification from the issuer, and DocID is an issuer-defined document identifier. DigiLocker provides the necessary interfaces for the user to add (pushDoc) and retrieve (pullDoc) documents. Similarly, an issuer provides the necessary interfaces for DigiLocker to retrieve (pullDoc) documents.

Blockchain Blockchain is a secure, immutable, distributed ledger technology which maintains a chain of blocks consisting of transactions made to access designated assets. Blockchain was introduced in 2008 and is the fundamental technology enabling Bitcoin [10] and Ethereum [11].

IPFS IPFS is a distributed, immutable, peer-to-peer data store in which the stored data is identified by the cryptographic hash of its content. IPFS maintains a Distributed Hash Table (DHT) to map content addresses to the nodes which hold the corresponding data.

Ethereum Ethereum is a blockchain-based platform with smart contract functionality. A smart contract is self-executing code along with state data and associated functions (referred to as Application Binary Interfaces (ABIs)). Smart contracts are most commonly written in the Solidity language and are compiled into Ethereum Virtual Machine (EVM) bytecode. Since smart contracts are deployed on the blockchain, their code is immutable. Ethereum has two types of accounts: an Externally Owned Account (EOA), which belongs to an entity holding a private key, and a contract account, which belongs to a smart contract deployed on the Ethereum blockchain.
A transaction in Ethereum is a digitally signed request from one EOA address to either another EOA address or to a smart contract address.
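The URI syntax and IPFS-style content addressing described above can be sketched briefly (an illustration only: the issuer and document identifiers are hypothetical, and a real IPFS CID uses multihash/multibase encoding rather than a bare SHA-256 hex digest):

```python
# Sketch of the two identification schemes in the preliminaries:
# (1) a DigiLocker URI of the form IssuerID::DocType::DocID, and
# (2) a content address derived from the cryptographic hash of the data,
#     in the spirit of IPFS content addressing.
import hashlib
from typing import NamedTuple

class DocURI(NamedTuple):
    issuer_id: str
    doc_type: str
    doc_id: str

def parse_uri(uri: str) -> DocURI:
    issuer_id, doc_type, doc_id = uri.split("::")
    return DocURI(issuer_id, doc_type, doc_id)

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

uri = parse_uri("in.gov.example::DRVLC::DL-1234")  # hypothetical identifiers
assert uri.doc_type == "DRVLC"
# Content addressing: identical content always maps to the same address.
assert content_address(b"doc") == content_address(b"doc")
```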


27.4 System and Threat Model

27.4.1 System Model

The proposed scheme includes the following five entities.

1. DigiLocker: A decentralized blockchain-based service which lets issuers issue documents to registered users.
2. User: A citizen who is registered with the DigiLocker service.
3. Issuer: An agency registered with the DigiLocker service which issues documents to users. Each document is associated with an access policy which dictates which users may access the document; an access policy is typically described in terms of the attributes required of the requester. Both the document and the access policy are stored in IPFS after encryption.
4. Attribute Authority: An agency which may assign attributes to users.
5. Requester: An entity which seeks access to a document. A requester needs to register itself with DigiLocker before it may seek access to a document.

27.4.2 Threat Model

The proposed scheme aims to achieve the following security and privacy requirements.

1. Data Confidentiality: No one other than the sender or the receiver should be able to decipher the shared data, either at rest or in transit.
2. User Privacy: The sender and receiver share the data anonymously, and no one else should be able to determine who the sender and the receiver are.
3. User Linkability: It should not be possible to link a user with a set of activities; for example, it should not be possible to determine how many times a user has requested or provided data.
4. User and Data Linkability: It should not be possible to link a user with any shared data.

27.5 Proposed Scheme

This section presents the proposed scheme, including key management, smart contracts, and the main methods.


27.5.1 Key Management

The following four categories of asymmetric keys are envisaged in the proposed work.

Identity Key Each subscriber has an asymmetric identity keypair and registers itself in the system using its public identity key. The identity keypairs of users, issuers, senders, and receivers are denoted by (SK_Ui, PK_Ui), (SK_Ii, PK_Ii), (SK_Si, PK_Si), and (SK_Ri, PK_Ri), respectively.

Smart Contract Key

Transient Key To maintain the anonymity of the sender, the sender uses a transient key rather than its identity key. A transient keypair is an asymmetric keypair which is used only once. The transient keypair between a user Ui and DigiLocker is denoted by (SK_TRAN−Ui−DL, PK_TRAN−Ui−DL).

Anonymous Key To maintain the anonymity of the receiver, the sender sends the transaction to an anonymous address derived from an anonymous key. The anonymous keypair between a user Ui and DigiLocker is denoted by (SK_ANON−Ui−DL, PK_ANON−Ui−DL).

27.5.2 Smart Contract System

As illustrated in Figs. 27.1, 27.2, 27.3 and 27.4, the proposed system consists primarily of three smart contracts. The Identity Contract (IC) maintains the identities of registered users, issuers, and attribute authorities. During registration, an issuer provides the address of a Document Data Contract (DDC), which hosts documents from the respective issuer; the DDC indexes documents from URI to the IPFS addresses of the encrypted document and the encrypted access policy. Similarly, during registration, an attribute authority provides the address of a User Attributes Data Contract (UADC), which hosts user attributes from the respective authority; the UADC indexes from a user's EOA to the IPFS address of the encrypted attributes. The DigiLocker Contract (DLC) is the orchestration contract which is used by users to pull documents and is integrated with issuers and attribute authorities.

The IC is deployed by an apex governing body and maintains the identities of all participating entities. Its state variables include lists of users, issuers, and attribute authorities, indexed by their respective EOAs and containing identity and other information. For each issuer, the list also contains the address of the smart contract which manages the documents issued by that issuer; for each attribute authority, it contains the address of the smart contract which manages the attributes assigned to users by that authority. An IC may include ABIs to manage users (addUser, deleteUser, retreiveUser), to manage issuers (addIssuer, deleteIssuer, retrieveIssuer), and to manage attribute authorities


Fig. 27.1 Identity contract

Fig. 27.2 Document data contract

(addAA, deleteAA, retrieveAA), to retrieve the address of the smart contract managing documents (getDOCsSCAddr), and to retrieve the address of the smart contract managing a user's attributes (getUserAttributeSCAddr). The DDC is deployed by an issuer and is registered in the IC. The state variables in the DDC include a list of document types and a list of document objects; the list of document objects is a nested mapping from a document type to a document identifier to the IPFS addresses of the encrypted document data and the associated encrypted access policy. It should be noted that both the document data and the access policy are stored in IPFS after encryption. A DDC may include ABIs to add the IPFS address of an encrypted document (addDocData), to add the IPFS address of an encrypted access policy (addDocAP), to delete


Fig. 27.3 User attributes data contract

Fig. 27.4 DigiLocker contract

a document (deleteDoc), to update the IPFS address of an encrypted document (updateDocData), to update the IPFS address of an encrypted access policy (updateDocAP), to retrieve the IPFS address of an encrypted document (getEncDocData), and to retrieve the IPFS address of an encrypted access policy (getEncDocAP). The UADC is deployed by an attribute authority and is registered in the IC. The state variables in the UADC include a mapping from a user's EOA to the IPFS address where the user's encrypted attributes are stored; it should be noted that the user's attributes are stored in IPFS after encryption. A UADC may include ABIs to add the IPFS address of a user's encrypted attributes (addAttribs), to delete it (deleteAttribs), to update it (upgradeAttribs), and to retrieve it (getEncAttribs).


The DLC is deployed by an apex governing body and implements the mechanism to pull and push documents. It may include ABIs to let a user pull a document referenced by a URI (pullDoc) and to push a document to a specific subscriber (pushDoc).

27.5.3 Main Methods of the Scheme

User Registration. A user registers itself in the Identity Contract as well as in the DigiLocker Contract. To register in the Identity Contract, the user generates an identity keypair (SK_IDENT-Ui, PK_IDENT-Ui) according to Algorithm 1 and registers itself using its public key; the private key is stored securely in the user's wallet. Before registering in the DigiLocker Contract, the user generates a secret key SK_Ui-DL. The user registers its EOA with the DLC through a transaction and shares SK_Ui-DL with the DigiLocker wallet securely offline.

Algorithm 1 Identity Key Generation
Require: Cyclic group G of order p, with generator g
Ensure: PK_IDENT, SK_IDENT
1: Choose a random number x_IDENT ∈_R Z_p
2: Set SK_IDENT = x_IDENT
3: Set PK_IDENT = g^x_IDENT
4: Return PK_IDENT, SK_IDENT
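Algorithm 1 can be sketched as follows over the multiplicative group modulo a prime. The concrete parameters (a Mersenne prime and generator 3) are toy assumptions for illustration; a deployment would use a standard, vetted group.

```python
import secrets

# Toy instantiation of Algorithm 1. The prime and generator below are
# illustrative assumptions, not the paper's parameters.
P = 2**127 - 1   # a Mersenne prime, used here as a toy modulus
G = 3            # assumed generator g

def identity_keygen():
    """Algorithm 1: choose x at random, output (SK, PK) = (x, g^x)."""
    x = secrets.randbelow(P - 2) + 1   # x in [1, P-2]
    return x, pow(G, x, P)

sk_ident, pk_ident = identity_keygen()
assert pow(G, sk_ident, P) == pk_ident   # PK is g^SK by construction
```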

Issuer Registration. An issuer registers itself in the Identity Contract as well as in the DigiLocker Contract. To register in the Identity Contract, the issuer, like a user, generates an identity keypair (SK_IDENT-Ii, PK_IDENT-Ii) according to Algorithm 1 and registers itself using its public key; the private key is stored securely in the issuer's wallet. Before registering in the DigiLocker Contract, the issuer generates a master key MK_Ii-DL, chooses a key derivation function KDF_Ii-DL, and deploys a Document Data Contract (DDC), which manages the documents issued by the issuer. To register with the DLC, the issuer registers its EOA and the address of the DDC through a transaction and shares the secrets ⟨MK_Ii-DL, KDF_Ii-DL⟩ with the DigiLocker wallet securely offline.

Attribute Authority Registration. Similarly, an attribute authority registers itself in the Identity Contract as well as in the DigiLocker Contract. To register in the Identity Contract, it generates an identity keypair (SK_IDENT-AAi, PK_IDENT-AAi) according to Algorithm 1 and registers itself using its public key; the private key is stored securely in the authority's wallet. Before registering in the DigiLocker Contract, the attribute authority generates a master key MK_AAi-DL, chooses a key derivation function KDF_AAi-DL, and deploys a User Attributes Data Contract (UADC), which manages the attributes associated with users. To register with the DLC, the attribute authority registers its EOA and the address of the UADC through a transaction and shares the secrets ⟨MK_AAi-DL, KDF_AAi-DL⟩ with the DigiLocker wallet securely offline.


Anonymous Transaction. A sender Si may want to send an anonymous transaction to a receiver Ri such that the transaction reveals to others neither the real address of the sender nor the real address of the receiver, while the receiver, and only the receiver, is able to recover the real address of the sender from the transaction. Algorithm 3 presents the proposed scheme for creating an anonymous transaction.

Algorithm 2 Anonymous Key Generation
Require: SK_TRAN-S = x_TRAN-S, PK_IDENT-R = g^x_IDENT-R
Ensure: PK_ANON-S-R
1: Compute z = H(SK_TRAN-S · PK_IDENT-R) = H(x_TRAN-S · g^x_IDENT-R)
2: Compute PK_ANON-S-R = g^z · PK_IDENT-R = g^z · g^x_IDENT-R = g^(z + x_IDENT-R)
3: Hence SK_ANON-S-R = z + x_IDENT-R = H(x_TRAN-S · g^x_IDENT-R) + x_IDENT-R
4: Return PK_ANON-S-R

The receiver Ri scans the blockchain for any new anonymous transaction and, if one is present, attempts to compute the anonymous private key SK_ANON-Si-Ri. Using this anonymous private key, the receiver Ri decrypts the contents of the anonymous transaction and retrieves the real identity PK_IDENT-Si of the sender and the shared secret SK_Si-Ri.

Algorithm 3 An Anonymous Transaction
Require: PK_IDENT-R, SK_S-R, Algorithm 2
Ensure: An anonymous transaction
1: Generate a transient keypair (SK_TRAN-S, PK_TRAN-S), with SK_TRAN-S = x_TRAN and PK_TRAN-S = g^x_TRAN
2: Generate an anonymous keypair (SK_ANON-S-R, PK_ANON-S-R) using Algorithm 2
3: Encrypt the sender's identity public key with the anonymous public key, i.e., {PK_IDENT-S}_PK_ANON-S-R
4: Encrypt SK_S-R with the anonymous public key, i.e., {SK_S-R}_PK_ANON-S-R
5: Include other objects in the transaction, preferably encrypted with SK_S-R
6: Create a transaction including [3], [4], and [5] and sign it with the transient key SK_TRAN-S; the destination address of the transaction is PK_ANON-S-R
7: Return the anonymous transaction just created
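The interplay of Algorithms 2 and 3 can be sketched end to end as below. The product inside H(·) is interpreted here as the Diffie-Hellman value g^(x_TRAN·x_R), so that sender and receiver derive the same z; this interpretation, the choice of SHA-256 for H, and the toy group parameters are assumptions of the sketch, not the authors' exact construction.

```python
import hashlib
import secrets

# Toy run of Algorithms 2 and 3 in the group of Algorithm 1.
P = 2**127 - 1          # toy prime modulus (assumed)
G = 3                   # assumed generator
Q = P - 1               # exponent arithmetic is modulo the group order

def h(val: int) -> int:
    """Assumed hash H: SHA-256, reduced into the exponent range."""
    return int.from_bytes(hashlib.sha256(str(val).encode()).digest(), "big") % Q

# Receiver's long-term identity keypair (Algorithm 1).
x_r = secrets.randbelow(Q - 1) + 1
pk_r = pow(G, x_r, P)

# Sender: transient keypair, then the anonymous public key (Algorithm 2).
x_tran = secrets.randbelow(Q - 1) + 1
pk_tran = pow(G, x_tran, P)            # published in the transaction
z = h(pow(pk_r, x_tran, P))            # z = H(shared secret)
pk_anon = (pow(G, z, P) * pk_r) % P    # PK_ANON = g^z * PK_R = g^(z + x_R)

# Receiver scans the chain: recomputes z from PK_TRAN, derives SK_ANON.
z_recv = h(pow(pk_tran, x_r, P))
sk_anon = (z_recv + x_r) % Q           # SK_ANON = z + x_R

# Only the holder of x_r can derive the private key of the destination.
assert pow(G, sk_anon, P) == pk_anon
```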

Document Uploading (pushDoc()). Each issuer manages its repository of documents with a Document Data Contract (DDC), whose address is available through the DLC. Each document is encrypted and stored in IPFS by the issuer, and the elements needed to decrypt it are shared with the DigiLocker wallet securely offline. For each document, the issuer derives a unique document-specific secret key SK_URIi-Ij-DL from the key derivation function KDF_Ij-DL, the master key MK_Ij-DL, and the document URIi. The issuer encrypts the document using SK_URIi-Ij-DL and stores the encrypted document in IPFS; let the encrypted document be stored at IPFS address F_DATAENCij. The issuer also encrypts the corresponding access policy using the same key SK_URIi-Ij-DL and stores it in IPFS; let the encrypted access policy be stored at IPFS address F_APENCij.

Document Retrieval (pullDoc()). When a user requests a document, the DLC uses the master key MK_Ij-DL, the key derivation function KDF_Ij-DL, and the document URIi to regenerate SK_URIi-Ij-DL. Using the issuer's EOA and the document URIi, the DLC retrieves the IPFS addresses F_DATAENCij of the encrypted document data and F_APENCij of the encrypted access policy, and decrypts both using SK_URIi-Ij-DL. Next, the DLC uses the master key MK_AAi-DL, the key derivation function KDF_AAi-DL, and the user's EOA to regenerate SK_UA-AA-DL. Using its internal mapping, the DLC retrieves the IPFS addresses F_UATTRIBENC-AA1, F_UATTRIBENC-AA2, ..., F_UATTRIBENC-AAN of the user's encrypted attributes from all attribute authorities and decrypts them using SK_UA-AA1-DL, ..., SK_UA-AAN-DL. The DLC then verifies whether the user's attributes match the document's access policy; if they match, the DLC proceeds, otherwise it rejects the request. To proceed, the DLC re-encrypts the document data using the user's shared secret key SK_Ui-DL and stores it in IPFS at a new address F_DATAENCTij. The DLC now needs to share F_DATAENCTij and SK_Ui-DL with the requesting user anonymously.
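The per-document key derivation used by pushDoc() and pullDoc() can be sketched as follows. HMAC-SHA256 as the KDF and the XOR keystream cipher are assumed stand-ins — the paper only requires some KDF and some symmetric encryption — and the master key and URI values are hypothetical.

```python
import hashlib
import hmac

def derive_doc_key(master_key: bytes, uri: str) -> bytes:
    """SK_URI = KDF(MK, URI); HMAC-SHA256 is an assumed KDF choice."""
    return hmac.new(master_key, uri.encode(), hashlib.sha256).digest()

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """Illustrative stream cipher: keystream = SHA-256(key || counter)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

mk = b"issuer-master-key"                        # hypothetical MK_Ij-DL
key = derive_doc_key(mk, "in.gov.transport-DRVLC-42")   # hypothetical URI
doc = b"name: A; licence: DL-42"
enc = xor_encrypt(key, doc)                      # what would be stored in IPFS
assert xor_encrypt(key, enc) == doc              # DLC regenerates the same key
```

Because the key is a deterministic function of the master key and the URI, the DLC never needs to store per-document keys — it re-derives them on demand, exactly as the retrieval step describes.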
The three privacy- and anonymity-related challenges in this workflow are to limit knowledge of SK_Ui-DL, of the sender's address, and of the receiver's address to the user and the DigiLocker. To address them, the DigiLocker uses anonymous keys (SK_ANON-Ui-DL, PK_ANON-Ui-DL) and transient keys (SK_TRAN-Ui-DL, PK_TRAN-Ui-DL). The DLC generates the two keypairs, encrypts SK_Ui-DL using the anonymous public key PK_ANON-Ui-DL, and creates an anonymous transaction (Algorithm 3) that includes F_DATAENCTij and the encrypted SK_Ui-DL. The user observes the new transaction in the blockchain, regenerates the anonymous private key SK_ANON-Ui-DL, and keeps this key securely in its wallet. Since the transaction is sent from one anonymous address to another, other users cannot derive the address of either the sender or the receiver. An implementation of the proposed scheme is available on GitHub [13].

27.6 Security and Privacy Analysis

This section presents the security and privacy analysis of the proposed scheme.


27.6.1 Data Confidentiality

The IPFS address shared with a user points to encrypted document data. The decryption key is shared with the user in an anonymous transaction, in such a way that only the intended receiver can reconstruct the key from the transaction. Moreover, since each decryption key is used only once, even a compromised key cannot be used to decrypt documents shared earlier or later. As with receiver anonymity (Sect. 27.6.2), this argument ultimately rests on the hardness of the discrete logarithm problem in the underlying group.

27.6.2 User Privacy

When the DLC needs to provide data to a requesting user, it generates a new transient keypair (SK_TRAN-DL, PK_TRAN-DL) and signs the transaction using the transient private key SK_TRAN-DL. Since PK_TRAN-DL is a fresh address that is not registered anywhere, other entities cannot determine the real sender of the transaction, which preserves the sender's anonymity. The destination address of the transaction is an anonymous address PK_ANON-S-R, which is likewise a fresh, unregistered address. To regenerate SK_ANON-S-R, one must know SK_IDENT-R, which only the intended recipient does. Moreover, under the Discrete Logarithm Assumption, an adversary cannot deduce z from g^z and hence cannot deduce SK_ANON-S-R from PK_ANON-S-R. Other entities therefore cannot determine the real receiver, which preserves the receiver's anonymity.

27.6.3 User Unlinkability

An adversary cannot link the activities of a given user. A user's request is sent from an anonymous address to an anonymous address, and the transaction is signed with a transient key; the same holds for the response. Any two request (or response) transactions therefore have different source addresses, destination addresses, and signatures, so an adversary cannot link the activities of a given user.

27.6.4 User and Data Unlinkability

This property follows directly from the preceding analysis: since a user's transactions cannot be linked to the user and the document data is encrypted under one-time keys, an adversary cannot link a user to the data it accesses.


27.7 Conclusion

This paper proposed an anonymous and privacy-preserving blockchain-based decentralized DigiLocker system. The system ensures data confidentiality, forward secrecy, user privacy, user unlinkability, and unlinkability between user and data, while data access remains auditable. The paper also presented an implementation of the scheme and its performance evaluation.

References

1. Rao, U., Nair, V.: Aadhaar: governing with biometrics. South Asia: J. South Asian Stud. 42(3), 469–481 (2019)
2. DLTS, https://docs.ipfs.tech/concepts/what-is-ipfs
3. Zheng, Z., et al.: Blockchain challenges and opportunities: a survey. Int. J. Web Grid Serv. 14(4), 352–375 (2018)
4. Psaras, Y., Dias, D.: The interplanetary file system and the filecoin network. In: 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks—Supplemental Volume (DSN-S). IEEE (2020)
5. Clarke, I., et al.: Freenet: a distributed anonymous information storage and retrieval system. In: Designing Privacy Enhancing Technologies. Springer, Berlin, Heidelberg (2001)
6. Dingledine, R., Freedman, M.J., Molnar, D.: The free haven project: distributed anonymous storage service. In: Designing Privacy Enhancing Technologies. Springer, Berlin, Heidelberg (2001)
7. Chaum, D., van Heyst, E.: Group signatures. In: Workshop on the Theory and Application of Cryptographic Techniques. Springer, Berlin, Heidelberg (1991)
8. Chen, L., Pedersen, T.P.: New group signature schemes. In: Workshop on the Theory and Application of Cryptographic Techniques. Springer, Berlin, Heidelberg (1994)
9. Bethencourt, J., Sahai, A., Waters, B.: Ciphertext-policy attribute-based encryption. In: 2007 IEEE Symposium on Security and Privacy (SP'07). IEEE (2007)
10. Franco, P.: Understanding Bitcoin: Cryptography, Engineering and Economics. Wiley (2014)
11. Wood, G.: Ethereum: a secure decentralised generalised transaction ledger. Ethereum project yellow paper 151.2014, pp. 1–32 (2014)
12. Digital India, Government of India, https://www.digitalindia.gov.in (2021)
13. Bakshi, P.: Blockchain Based DigiLocker, https://github.com/bakshipuneet/BlockhainBasedDigiLocker (2021)

Chapter 28

FIFO Memory Implementation with Reduced Metastability

M. S. Mallikarjunaswamy, M. U. Anusha, B. R. Sriraksha, Harsha Bhat, K. Adithi, and P. Pragna

Abstract Clock domain crossing, that is, data interchange between systems operating at different clock frequencies, is a common situation in digital engineering. Modern computers have clock speeds in the GHz range and very high data rates, and the major problem encountered in such systems is data loss: since different subsystems operate at different clock speeds, there is a high probability of data loss depending on those speeds. It is therefore necessary to prevent this data loss, and the best way of doing so is with a first-in first-out (FIFO) buffer. A FIFO can be used as temporary storage for the data being transferred between two subsystems of a computer. During the design, the area, cost, and power dissipation of the memory are very important constraints. In the designed system, data loss is significantly reduced because multi-level synchronization techniques are utilized; due to the relatively small amount of processing taking place, the overall speed of the system is significantly increased and its power consumption is considerably reduced. Since several status flags are used, the overflow and underflow conditions are handled. In this work, synchronous and asynchronous FIFOs with a single and a dual clock are designed and implemented to overcome the synchronization problem and to reduce the metastability that arises while working with asynchronous read and write inputs, providing stable inputs to the FIFO. The design is simulated using the Verilog Hardware Description Language (HDL) and verified on EDA tools using timing analysis.

28.1 Introduction

In digital systems, data is transferred between the different elements of the system at variable rates. Storage, or a buffer, is always necessary when data transfer takes place, as typical processors cannot process data arriving at very high input rates, as in the case of sensors. Sensors continuously sense the surroundings and

M. S. Mallikarjunaswamy · M. U. Anusha (B) · B. R. Sriraksha · H. Bhat · K. Adithi · P. Pragna
Department of Electronics and Instrumentation Engineering, Sri Jayachamarajendra College of Engineering, JSS Science and Technology University, Mysuru, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_28


M. S. Mallikarjunaswamy et al.

provide data at high speeds. If buffers are not used, the slowest component determines the overall operating speed of the others. For example, in a CD drive, the speed of disk rotation determines the data rate, which is not comparable to the data rate of the ADC, controlled by a quartz crystal. Hence, these variable data rates are compensated by placing buffers in the circuits before the processors. Modern processors frequently outperform the peripherals attached to them. By using a first-in first-out (FIFO) buffer, a processor's processing speed need not be slowed down when exchanging data with a peripheral; if the peripheral is occasionally quicker than the processor, a FIFO can again alleviate the issue. Depending on the specific problem, various circuit arrangements are conceivable. Data transfers through parallel/serial ports, and to external memory devices via controllers, are the most common cases; in these circumstances, a bidirectional FIFO may be used, which speeds up data movement from a processor or RAM to a peripheral. FIFOs can be implemented in software or in hardware; hardware is less adaptable than software, but the speed of hardware FIFOs demonstrates their benefit. The FIFO in the synchronous design has a depth of 16 addresses and a width of 8 bits. The local clock (CLK), which generates continuous pulses, is provided as the synchronous input. One of the main issues that arises while using an asynchronous FIFO is the difficulty of synchronizing two circuits that operate at different frequencies. Generally, synchronization circuits on the asynchronous inputs are used to solve this issue. Such a circuit consists of a D flip-flop whose D input is driven by the asynchronous signal and which is clocked by the other domain's clock. However, this can violate the setup and hold times, which allows the output to remain in a metastable condition for an indefinite period.

To reduce this risk, synchronization is done on two levels using two D flip-flops, with the second flip-flop's input connected to the first flip-flop's output. With an appropriate clock period, the second flip-flop practically never enters the metastable state, delivering synchronized inputs to the FIFO memory. One approach to pointer synchronization is to encode the pointers in Gray code rather than binary: with Gray-coded pointers, the difficulty of synchronizing multiple bits changing on the same clock edge is eliminated. Potential future work is to use this design as a processor's cache memory.
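The Gray-code property relied on for pointer synchronization can be checked directly: consecutive Gray codes differ in exactly one bit, so a pointer sampled in the other clock domain is at worst off by one position rather than garbled. A small sketch of the conversions:

```python
# Binary <-> Gray conversion, as used for FIFO pointer synchronization.

def bin_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive codes differ in exactly one bit (the XOR is a power of two).
for i in range(15):
    d = bin_to_gray(i) ^ bin_to_gray(i + 1)
    assert d != 0 and d & (d - 1) == 0

# The two conversions are inverses of each other.
assert all(gray_to_bin(bin_to_gray(i)) == i for i in range(256))
```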

28.2 Earlier Work

The literature available on FIFO implementation has been studied, and some of it is reported here. Avinash et al. [1] designed a novel architecture for an asynchronous FIFO, using Gray code converters and a dual-flop synchronization mechanism to create an overrun and underrun flag generation mechanism. Because it is built as a programmable subsystem, the FIFO architecture described in their work can be reused as an IP in any system-on-chip (SoC) design. The complete design is implemented in synthesizable Verilog RTL code and verified with the Cadence NC simulator. Mohini et al.


[2] worked on the design and verification of a generic FIFO using a layered test bench and assertion techniques. Here the pointers indicate the status of the FIFO through flags such as full, empty, last, second last, and first; the FIFO has a synchronous reset capability and is used as the device under test (DUT) in a verification environment. Lincy et al. [3] developed an asynchronous FIFO design in Verilog, in which data values are sequentially written into a FIFO buffer from one clock domain while data values are sequentially read from the same buffer in a different clock domain, the two clock domains being asynchronous with respect to one another. Making an asynchronous comparison of the pointers before setting the full or empty status bits is an intriguing and novel method of generating the full and empty flags. Sanskriti et al. [4] designed an FPGA implementation of a reconfigurable FIFO-based high-speed data acquisition IoT architecture model, in which a reconfigurable FIFO captures data to enable effective communication across various IoT application nodes. In addition to cutting down overall sensing time, this also saves energy, a major concern in IoT devices for offline settings. Maurya [5] worked on the design of an RTL-synthesizable 32-bit FIFO memory in which single-bit memory cells form an 8 × 32 buffer usable in synchronous or asynchronous FIFOs. The architecture includes a hardware signal to prevent overflow (the almost-full signal is asserted, providing credit-based information), along with a few sequential latches and digital logic gates; the asynchronous FIFO was built and synthesized at register transfer level (RTL) using Verilog, with gate-level simulation in the Xilinx ISE 13.3 tool providing delay, power, and area estimates. Upadhyay et al. [6] designed a scheme to use cache memory effectively under the direction of a cache controller while consuming less power. This architecture simplifies the model and also avoids cache coherency issues; since the cache is only a temporary memory, computation time is decreased, and operations on an operand become faster than with main memory. Clifford [7] developed an asynchronous FIFO design to transfer data securely from one clock domain to another. Ashour [8] designed and modeled an asynchronous FIFO with parameterizable data interface width and memory depth; the FIFO flag thresholds can be changed at runtime, and the implementation and its interface with other system parts follow a modular design strategy. Nagarajan [9] worked on the design and verification of a synchronous FIFO module using System Verilog-based Universal Verification Methodology (UVM); to prevent data underflow or overflow, it can alert the concerned modules about its empty and full status. Because clocks govern the read and write processes, this FIFO is categorized as synchronous. Using dual-port RAM or an array of flip-flops in the design, read and write operations take place at the same time. The synchronous FIFO is verified using UVM after being designed, and the verification plan and test results are covered in great depth. Satyendra et al. [10] conducted a study on FIFO buffer adoption schemes for high-speed data links, completed in three stages; the basic architectures of the many varieties of FIFO were first investigated and presented. Chakraborty


[11] has designed and implemented an 8-bit read-write memory using the FIFO algorithm. An address generation unit locates the memory addresses in the RAM chips used in the design so that an 8-bit data word can be written; a 2:1 bus multiplexer unit then chooses whether incoming data words should be written, read, or rewritten. Hemant et al. [12] proposed an asynchronous FIFO architecture that, in addition to the standard status signals, includes a few extra status signals for a more streamlined user interface and increased security; Gray code pointers are utilized, and two synchronizer modules, each with two D flip-flops, perform the synchronization. Thillaikkarasi et al. [13] designed a resourceful counter for a FIFO: a status counter handles the full and empty states, incrementing on every write and decrementing on every read, so the FIFO is full when the counter reaches the maximum FIFO depth and empty when it reaches zero. Amit et al. [14] observed that asynchronous FIFOs are most commonly employed in SoC architectures for data buffering and flow control; because an SoC uses numerous IPs operating at various speeds, an asynchronous FIFO is typically employed when a write operation is quicker than a read operation. Navaid et al. [15] implemented and verified a synchronous FIFO using a System Verilog verification methodology, building a verification environment able to find the majority of defects relevant to the correct operation of the synchronous FIFO model. Ramesh et al. [16] implemented an asynchronous FIFO design with Gray code pointers for a high-speed Advanced Microcontroller Bus Architecture (AMBA)-compliant memory controller, in which data values are sequentially written into a FIFO buffer from one clock domain and sequentially read from the same buffer in a different clock domain, the two clock domains being asynchronous to one another. Wolf et al. [17] worked on a FIFO memory with decreased fall-through delay, the time it takes for data to move from the input to the output of the FIFO; the main goal was a FIFO data store that can buffer complete blocks of data with extremely quick fall-through times and extremely quick shift-in and shift-out rates. Camilleri et al. [18] invented a method and structure for controlling an asynchronous FIFO memory system and for determining the amount of data currently stored in its FIFO memory. Nguyen et al. [19] presented the design and implementation of an efficient asynchronous FIFO architecture well suited to the synchronization unit in multi-synchronous networks-on-chip; a token ring structure, register-based memory, and modified asynchronous-assertion/synchronous-de-assertion techniques are applied to improve its performance. Kumari et al. [20] implemented an asynchronous FIFO and interfaced it with a UART as a demonstration. Harpreet et al. [21] presented a low-power double-edge-triggered address pointer circuit for FIFO memory design, which significantly reduces the cumulative capacitive load on the pointer clock path and hence consumes less power than other designs; it uses a true single-phase clock and is suitable for low-voltage, high-speed applications. After studying the existing literature, the objective of our


work is set to design and implement a cost-effective FIFO with reduced metastability. The FIFO makes use of SRAM cells instead of registers.

28.3 Methodology

The design has been simulated in EDA Playground with the Aldec Riviera-Pro tool, using EPWave to display the output. The synchronous FIFO has a depth of 16 addresses and a width of 8 bits and is implemented with shift registers. To allow data to be read and written whenever needed, write_enable and read_enable signals are used: data can be written into the FIFO whenever write_enable (W_en) is set to logic high, and read from it whenever read_enable (R_en) is set to logic high. The entire system runs on the local clock (Clk), which provides constant pulses. A reset block is provided which resets the entire FIFO and can be used by the processor to turn the FIFO on and off; the system is reset by driving the reset input to logic high. The synchronous FIFO design has five status bits: full (F), empty (E), almost_full (AF), almost_empty (AE), and half_full (HF). The input signals and status bits of the synchronous FIFO are depicted in Fig. 28.1. These status bits are determined by the read and write pointers. The full bit is set when all the addresses of the FIFO contain data; similarly, the empty bit is set when all the addresses are empty. The almost_full bit is set when the addresses are close to full, and the almost_empty bit is set when they are close to empty. The half_full bit is set when 8 of the FIFO's addresses contain data. When power is turned on, the FIFO is in the reset state; this instruction is provided by the processor and erases all the memory in the FIFO. It also sets the empty status bit, indicating that the FIFO is empty. The designed memory for the synchronous FIFO has a depth of 16 addresses and a data width of 8 bits at each address, so it can hold up to a maximum of 128 bits or 16 bytes.
This capacity can be increased at any point of time depending upon the user’s requirement. The block diagram of synchronous

Fig. 28.1 A general synchronous FIFO module and its status bits


Fig. 28.2 Block diagram of synchronous FIFO module

FIFO is shown in Fig. 28.2; the memory module is controlled by various other sub-modules and circuits. It has a provision to mark the entry of the 8-bit input, or write data, and similarly a provision to mark the exit of the 8-bit output, or read data. At the positive edge of clk, if the reset bit is high, all the bits in the memory are cleared, the empty bit goes high, and all other status bits go low. If the reset bit is low and wr_en is high and the FIFO is not full, data is written into the memory and the addr pointer is incremented to point to the next memory location. Otherwise, if read_en is high and the FIFO is not empty, data is read from the memory by decrementing the addr pointer. Figure 28.3 shows the flowchart of synchronous FIFO operations. As shown in Fig. 28.4, the designed memory for the asynchronous FIFO has a depth of 8 addresses and a data width of 8 bits at each address, so it can hold up to a maximum of 64 bits or 8 bytes. This capacity can be increased at any time depending upon the user's requirement. The block diagram contains the major block, the memory block, which is controlled by various other blocks and circuits, with provisions to mark the entry of the 8-bit write data and the exit of the 8-bit read data. A flowchart of asynchronous FIFO operations is shown in Fig. 28.5. The block diagram also shows a reset block, which determines whether the FIFO is in the off state or the on state: if reset is asserted, the entire FIFO is off; if it is de-asserted, the FIFO is on. In addition, the block diagram contains read control and write control blocks, which control the reading and writing of data from and into the memory. The write control block has two signals, the write clock (clk) and the write enable (wr_en) signal.
When the write enable is asserted, and only then, data can be written into the memory in synchronism with the write clock provided. Similarly, the read control logic has two signals, the read clock (clk) and the read enable (read_en) signal. When the read enable signal is asserted, only then the data


Fig. 28.3 Flowchart of synchronous FIFO operations

can be read from the FIFO memory, in synchronism with the read clock provided. The block diagram also contains read and write pointers to mark the addresses of the FIFO memory: the write pointer denotes the address where data is written, and the read pointer the address from which data is read. During data transfer, if the writing data rate is much higher than the reading data rate, the data can be temporarily stored in the FIFO; this gives the reading clock domain sufficient time to retrieve the data accurately, without loss. Hence, the greater the storage capacity of the FIFO, the smaller the data loss, and the computer can run smoothly without interruptions. Concurrent read/write FIFOs fall into two groups, depending on the control signals for writing and reading: synchronous FIFOs and asynchronous FIFOs. A synchronous FIFO is a queue in which a single clock governs both data write and data read, so the read and write operations are performed at the same rate. An asynchronous FIFO is one in which data values are written at one rate and read at a different rate, both at the same time.
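The full and empty detection from the read and write pointers of an asynchronous FIFO can be sketched as below. The comparison rule — pointers carry one extra MSB, and "full" means the Gray-coded pointers match except in their top two bits — is the widely used convention and is assumed here rather than quoted from the chapter's RTL; depth 8 matches the design above.

```python
# Gray-pointer full/empty detection for an asynchronous FIFO of depth 8.
DEPTH_BITS = 3                            # depth = 2**3 = 8
PTR_MASK = (1 << (DEPTH_BITS + 1)) - 1    # 4-bit binary pointers (extra MSB)

def gray(n: int) -> int:
    return n ^ (n >> 1)

def is_empty(wptr: int, rptr: int) -> bool:
    # Empty: the Gray-coded pointers are identical.
    return gray(wptr) == gray(rptr)

def is_full(wptr: int, rptr: int) -> bool:
    # Full: the Gray-coded pointers match except in the top two bits.
    return gray(wptr) == gray(rptr) ^ (0b11 << (DEPTH_BITS - 1))

wptr = rptr = 0
assert is_empty(wptr, rptr)
for _ in range(8):                        # eight writes fill the FIFO
    wptr = (wptr + 1) & PTR_MASK
assert is_full(wptr, rptr) and not is_empty(wptr, rptr)
rptr = (rptr + 1) & PTR_MASK              # one read: no longer full
assert not is_full(wptr, rptr) and not is_empty(wptr, rptr)
```

The extra MSB is what lets the same comparison distinguish a wrapped-around full FIFO from an empty one.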


Fig. 28.4 Block diagram of Asynchronous FIFO module

28.4 Results and Discussion

Upon simulation using Verilog in EDA Playground, the following timing analysis was done, verifying in detail all the signals used in the synchronous and asynchronous FIFOs. The empty flag switches from logic high to logic low after the first bit is stored in the synchronous FIFO, in synchronism with the local clock (clk). A clock with a period of 10 ns is used for the analysis. The 8-bit input data (din) is fed in; the first input is the hexadecimal value FF, and until this data is written into the FIFO, the empty bit remains set. After the data is written, the empty bit is reset. The full flag is asserted only when all the FIFO memory addresses are filled: the synchronous FIFO contains 16 addresses, and after the 16th address is filled, the full status bit is set. Since the synchronous FIFO can store data in its 16 addresses, the user or the processor has to be notified when the memory is almost full. This is crucial in the operation and analysis of the FIFO memory, as it helps overcome the overflow condition. Similar to the almost_full bit, the almost_empty (alme) status bit indicates the almost-empty condition of the synchronous FIFO memory; it is equally important since it can prevent the underflow condition. According to the program written, the threshold for de-asserting the almost_empty condition is 3, so after the third address is filled with 8-bit data, the almost_empty bit is reset. A half_full status bit indicates that half of the synchronous FIFO memory is filled; it is reset only when further data is written or data is erased from the memory. When the write_enable is


Fig. 28.5 Flowchart of asynchronous FIFO operations

set, 8-bit data starts to occupy the 16 addresses of the designed FIFO memory, one address at a time, incrementing the write pointer. read_enable works in the same way for reading. The synchronous FIFO was simulated and verified on EDA Playground, and the results are shown in Fig. 28.6. The asynchronous FIFO was likewise simulated and verified on EDA Playground, and the results are shown in Fig. 28.7.
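The flag behaviour verified in these timing diagrams can also be captured in a small behavioural model. The sketch below is a Python illustration only (the actual design is in Verilog); the depth of 16, the 8-bit data path, and the almost-empty threshold of 3 follow the text, while the class and member names are our own.

```python
class SyncFifo:
    """Behavioural sketch of the 16-deep, 8-bit synchronous FIFO
    described in the text, with the status flags discussed above."""

    def __init__(self, depth=16, alme_threshold=3):
        self.depth = depth
        self.alme_threshold = alme_threshold
        self.mem = []  # front of the list is the oldest entry

    def write(self, data):
        if self.full:                 # the full flag guards against overflow
            raise OverflowError("FIFO full")
        self.mem.append(data & 0xFF)  # 8-bit data path

    def read(self):
        if self.empty:                # the empty flag guards against underflow
            raise IndexError("FIFO empty")
        return self.mem.pop(0)        # first in, first out

    @property
    def empty(self):
        return len(self.mem) == 0

    @property
    def full(self):
        return len(self.mem) == self.depth

    @property
    def almost_empty(self):
        # deasserted once more than `alme_threshold` addresses are filled
        return len(self.mem) <= self.alme_threshold

    @property
    def half_full(self):
        return len(self.mem) >= self.depth // 2
```

Writing 0xFF into an empty FIFO clears the empty flag, and a sixteenth write raises the full flag, mirroring the sequence described for the timing diagram of Fig. 28.6.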


Fig. 28.6 Timing diagram of synchronous FIFO operations

Fig. 28.7 Timing diagram of asynchronous FIFO operations

28.5 Conclusion

In this work, synchronous and asynchronous FIFOs were implemented successfully. The design describes an approach for implementing a verification environment for the FIFO memory. It is a cost-effective way of implementing FIFOs using SRAM cells instead of registers. The disadvantage is a higher latency per instruction: either the processor clock must run slower, or the register file must be pipelined, so each instruction takes more cycles. The design uses high-speed SRAM cells. The developed FIFOs are useful for business and corporate networks and settings. Since the speed and performance of a system are directly and consistently


affected by the size of the RAM, a good FIFO arrangement must be ensured. In both the synchronous and asynchronous models, data is written through the write control and read out through the read control. When all the locations of the FIFO are full, the full bit goes high; similarly, the empty bit goes high when the FIFO is empty. Provisions are included for preventing the overflow and underflow conditions. The pointer synchronization problem has been overcome and, most importantly, metastability has been reduced.
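The pointer synchronization and metastability reduction mentioned here are conventionally achieved by exchanging the read and write pointers in Gray code (as in the gray-code-pointer design of reference [16]): only one bit changes per increment, so a pointer sampled mid-transition in the other clock domain is off by at most one count. A small Python sketch of the two conversions, for illustration:

```python
def bin_to_gray(n: int) -> int:
    """Binary to Gray code. Adjacent Gray codes differ in exactly one
    bit, which is what makes cross-clock-domain sampling safe."""
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    """Gray code back to binary, by XOR-folding the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For a 16-deep FIFO, the 4-bit (plus wrap bit) pointers would be converted with `bin_to_gray` before crossing clock domains and converted back with `gray_to_bin` for address comparison.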

References

1. Yadlapati, A., Kakarla, H.K.: Design and verification of asynchronous FIFO with novel architecture using Verilog HDL. J. Eng. Appl. Sci. 14(1), 159–163 (2019)
2. Akhare, M., Narkhede, N.: Design and verification of generic FIFO using layered test bench and assertion technique. Int. J. Eng. Adv. Technol. (IJEAT) 8(6), 5254–5260 (2019). ISSN: 2249-8958 (Online)
3. Lincy, D.F., Thenappan, S.: Asynchronous FIFO design using Verilog. Int. Res. J. Eng. Technol. (IRJET) 7(9), 2147–2151 (2020). ISSN: 2395-0056
4. Gupta, S., Sharma, M., Chawla, R.: FPGA implementation of r-FIFO-based high-speed data acquisition IOT architecture model. SN Appl. Sci. 2, article no. 661 (2020)
5. Maurya, S.: Design of RTL synthesizable 32-bit FIFO memory. Int. J. Eng. Res. Technol. (IJERT) 5(11) (2016). ISSN: 2278-0181
6. Upadhyay, A., Sahu, V., Roy, S.K., Singh, D.: Design and implementation of cache memory with FIFO cache-control. i-manager's J. Commun. Eng. Syst. 7(1), 16–21 (2018)
7. Cummings, C.E.: Simulation and synthesis techniques for asynchronous FIFO design. SNUG 2001, Section MC1, 3rd paper (2001)
8. Ashour, H.: Design, simulation and realization of a parametrizable, configurable and modular asynchronous FIFO. In: Science and Information Conference (SAI), pp. 1391–1395 (2015)
9. Nagarajan, V.: The design and verification of a synchronous first-in first-out (FIFO) module using System Verilog based universal verification methodology (UVM). RIT Scholar Works (2018)
10. Chandravanshi, S., Moyal, V.: A study of FIFO buffer adoption scheme for high speed data links. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 4(5), 3944–3948 (2015)
11. Chakraborty, R.: Design and implementation of 8-bit read write memory using FIFO algorithm. In: Proceedings of the World Congress on Engineering (WCE), vol. II (2011)
12. Kaushal, H., Puri, T.: Design of synthesizable asynchronous FIFO and implementation on FPGA. Int. J. Eng. Res. Dev. 13(7), 06–12 (2017)
13. Janani, S., Thillaikkarasi, S.: Designing resourceful counter for FIFO. Int. J. Comput. Sci. Mob. Comput. 3(10), 231–237 (2014)
14. Kumar, A., Shankar, Sharma, N.: Verification of asynchronous FIFO using System Verilog. Int. J. Comput. Appl. 86(11), 16–20 (2014). ISSN: 0975-8887
15. Rizvi, N.Z., Arora, R., Agrawal, N.: Implementation and verification of synchronous FIFO using System Verilog verification methodology. J. Commun. Technol. Electron. Comput. Sci. 2, 18–23 (2015)
16. Ramesh, G., Shivaraj Kumar, V., Jeevan Reddy, K.: Asynchronous FIFO design with gray code pointer for high speed AMBA AHB compliant memory controller. IOSR J. VLSI Signal Process. 1(3), 32–37 (2012)
17. Wolf, Bessolo: FIFO memory with decreased fall through delay. US Patent 4,833,655 (23 May 1989)
18. Camilleri, Ebeling: FIFO memory system and method with improved determination of full and empty conditions and amount of data stored. US Patent 6,434,642 B1 (13 Aug 2002), Appl. No. 09/414,987, filed 7 Oct 1999


19. Nguyen, T.-T., Tran, X.-T.: A novel asynchronous first-in-first-out adapting to multisynchronous network-on-chips. Microprocess. Microsyst. 37(4–5) (2013)
20. Kumari, P., Sehrawat, A., Yadav, N.: Implementation of asynchronous FIFO and interface it with UART. Int. J. VLSI Syst. Des. Commun. Syst. 4(5), 0390–0394 (2016). ISSN: 2322-0929
21. Harpreet, Bhutani, S.: Use of low power DET address pointer circuit for FIFO memory design. Int. J. Educ. Sci. Res. Rev. 1(3) (2014). ISSN: 2348-6457
22. Janani, S., Muthukrishnan, S., Saravana Kumar, S.: Designing of FIFO for the high speed memory access. Int. J. Sci. Res. Dev. (IJSRD) 3(2) (2015). ISSN (online): 2321-0613
23. Dour, P., Kinkar, C.: Throughput improvement in asynchronous FIFO queue in wired and wireless communication. Int. J. Eng. Res. Technol. (IJERT) 5(12) (2016)

Chapter 29

Diabetic Retinopathy Detection Using Ensemble of CNN Architectures

B. Bhargavi, Lahari Madishetty, and Jyoshna Kandi

Abstract Diabetic retinopathy (DR) is an eye disease that affects diabetes patients and damages their retina if not detected early. Until recently, DR has been screened manually by an ophthalmologist, which is a time-consuming procedure. The objective is to use AI technology to automatically scan photographs for the disease and provide information on the severity of the condition. We propose a deep learning model that can automatically analyze a patient's ocular picture and determine the severity of blindness faster. Based on this severity, DR screening and treatment can proceed on a wide scale with pace. The proposed computationally efficient ensemble of CNN architectures is used to correctly detect and classify diabetic retinopathy. The model is less complicated and achieves 72% accuracy on a retina fundus image dataset.

29.1 Introduction

Diabetic retinopathy (DR) is one of the gravest complications of diabetes, causing damage to the retina and eventually blindness. One of the challenges is the ability to automatically detect diabetic retinopathy in photographs and provide information on the severity of the condition. A convolutional neural network (CNN) model can be used to automatically analyze a patient's ocular picture and determine the severity of blindness. Using this severity, we can monitor the progress of diabetic retinopathy treatment on a large scale, saving a lot of time. Our proposed technique can assist doctors in initiating diabetic retinopathy treatment at the appropriate time and in diagnosing the disease in its early stages. The main objective of this paper is to propose an efficient CNN model that helps recognize and classify the stage of diabetic retinopathy that a patient is experiencing as well as the severity of the condition.

B. Bhargavi (B)
GITAM University, Hyderabad, Telangana, India
e-mail: [email protected]
L. Madishetty · J. Kandi
Chaitanya Bharathi Institute of Technology, Hyderabad, Telangana, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_29


Table 29.1 Types of CNN architectures

Architecture name (year)   No. of layers   Dataset    Limitations
LeNet [7]                  7               MNIST      Not designed to work on large images
AlexNet [8]                8               CIFAR-10   Difficult to learn features from image sets
VGG-16 [9]                 16              ImageNet   Vanishing gradient problem
Inception [10]             22              ImageNet   Overfitting problem
ResNet [11]                56              ImageNet   Deeper network requires weeks for training
Xception [12]              36              ImageNet   Depth-wise separable convolutions may not be optimal

Section 29.2 describes the different CNN types and existing systems. In Sect. 29.3, we describe the proposed system to classify the fundus images. Section 29.4 discusses the experimentation and result analysis. Finally, in Sect. 29.5, we provide conclusions and scope for further research.

29.2 Related Work

Different machine learning and deep learning models have been applied to detect and classify diabetic retinopathy [1–6]. In this paper, we focus on deep learning techniques based on CNN architectures. A CNN model is a deep learning system that takes an input image and assigns relevance, through learnable weights and biases, to various aspects of the image. The preprocessing required by a CNN model is much less than that required by other classification methods. CNN architectures come in a wide range of shapes and sizes and are efficient for different applications. Table 29.1 lists the CNN architectures used in the current literature, together with our observations about each: the number of layers, the dataset on which the model was applied, and its limitations.

LeCun et al. developed LeNet-5 [7], a 7-level convolutional network that recognizes handwritten numbers scanned in 32×32-pixel grayscale input images and classifies them as digits. LeNet-5 used a Softmax classifier to divide the images into categories. AlexNet [8] is another CNN-based model, made up of eight layers, each with its own set of learnable parameters. The model used the ReLU activation function and dropout layers to prevent over-fitting. VGGNet [9] consists of 16 convolutional layers, similar to AlexNet but with a large number of 3×3 convolution filters and far more learnable parameters. It is currently used mainly to extract features from images.

GoogLeNet, or Inception V1 [10], used 1×1 convolutions in the middle of its 22-layer architecture along with global average pooling. Residual neural networks (ResNet) [11] reformulate layers as learning residual functions. ResNets have been evaluated at depths up to 8 times that of VGG nets while still having lower complexity. The Google model Xception [12], which stands for "extreme Inception", includes modified depth-wise separable convolution layers with residual connections. The distinction between Inception and Xception lies in the nonlinearity after the original operation: in the Inception model, a ReLU nonlinearity follows both operations, whereas Xception does not introduce any intermediate nonlinearity.

Among existing DR systems, Mishra et al. [1] developed CNN-based techniques for diabetic retinopathy detection. They developed a DenseNet model, an advanced version of the ResNet architecture, and compared it to VGG-16. Using the APTOS 2019 fundus image dataset, they validated the models' accuracies without and with ImageNet pretraining: without ImageNet, VGG-16 achieved a lower accuracy of 0.73, while with ImageNet, DenseNet achieved 0.96, better than VGG-16. Gangwar et al. [3] applied transfer learning and deep learning techniques to detect blindness in DR; transfer learning on a pretrained Inception-ResNet with a custom block of CNN layers added on top gave an accuracy of 0.82 on the APTOS2019 retina fundus image dataset. Qiao et al. [4] developed deep learning techniques based on microaneurysm features for early diagnosis of DR. Tymchenko et al. [13] developed a multistage deep learning approach for early detection of DR.

29.3 Proposed Methodology

In our proposed system, we developed a CNN model using the ResNet architecture, trained the model, and calculated the performance metrics. To improve the accuracy of the CNN model, we ensembled the previous model's results with the VGG architecture, fine-tuned the resulting model, and re-calculated the metrics. Further, we ensembled the ResNet model with the Xception architecture and fine-tuned the model again to ensure that the train and validation losses did not increase. The implementation of the proposed system consists of several steps, shown in Fig. 29.1 and described as follows:

Step 1: Dataset (Fundus/Retinal images) We define fundus imaging as the process of employing reflected light to create a two-dimensional (2D) representation of the three-dimensional (3D) semitransparent retinal tissues projected onto the imaging plane.


Fig. 29.1 Block diagram of proposed diabetic retinopathy detection system

Step 2: Data Preprocessing In data preprocessing, we transform the initial data into a useful format by cropping, resizing, and cleaning the images.

Step 3: Data Augmentation In data augmentation, we increase the quantity of data available from the existing data. It helps to avoid over-fitting when training a machine learning model. It involves rotating, flipping, and mirroring the images to balance the dataset if the dataset is imbalanced.

Step 4: Feature Extraction The features extracted from the retina images include microaneurysm detection, exudate detection, and blood vessel extraction [5]. Microaneurysms are outpouchings of capillary walls that can leak fluid, causing intraretinal edema and hemorrhages. Because microaneurysms are the first clinically noticeable indicator of diabetic non-proliferative eye illness [4], recognizing them can be the first step in secondary prevention of the disease progressing to the advanced stage and eventually resulting in severe vision loss. We observed that the most prevalent signs of DR are the development of exudates and abnormal blood vessels on the retina. The development of exudates on the retina is the most prevalent sign of diabetic retinopathy. Exudates are one of the first clinical indications of DR, so detecting them is a useful addition to the mass screening assignment, as well as a necessary step toward automatic disease grading and monitoring. Diabetic retinopathy, glaucoma, arteriosclerosis, and hypertension are all diseases in which the retinal blood vessels play a role in diagnosis and treatment. As a result, extracting the retinal vasculature is essential for supporting experts in the diagnosis and treatment of systemic illnesses.

Step 5: Data Splitting In machine learning, data splitting is widely used to divide data into train, test, and validation sets. In this case, we divided the train images into 20% validation images and 80% training images.

Step 6: CNN Model and Evaluation Here, we build the CNN models, that is, ResNet, the ensemble of ResNet and VGG with fine-tuning, and the ensemble of ResNet and Xception, and train them. We evaluate our CNN models and calculate performance metrics such as accuracy and loss.

Step 7: Identification of Diagnosis Level We test our trained CNN model using the test images and obtain the diagnosis level of DR as the final output.
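The combination rule in Step 6 is not spelled out in the text; a common choice, assumed here for illustration, is to average the two models' per-class softmax outputs and take the argmax as the predicted diagnosis level (0–4). A minimal sketch:

```python
import numpy as np

def ensemble_predict(prob_a, prob_b):
    """Average two models' per-class softmax outputs and take the argmax
    as the ensemble's diagnosis level (0-4). Simple averaging is an
    assumption; the text does not state the exact combination rule."""
    avg = (np.asarray(prob_a, dtype=float) + np.asarray(prob_b, dtype=float)) / 2.0
    return avg.argmax(axis=-1)
```

In practice, `prob_a` and `prob_b` would be the softmax outputs of the trained ResNet and Xception branches for a batch of fundus images.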


29.4 Experiments and Results

We conducted our experiments on a Dell laptop with a Core i3 processor and 8 GB RAM, in Python using the Keras framework. We used the APTOS 2019 Blindness Detection (APTOS2019) [14] dataset to evaluate our proposed ensemble models. The full dataset consists of fundus photographs; it includes 3662 training images and is 454 MB in size. Diabetic retinopathy is classified into stages that allow us to determine the severity of the disease and, as a result, develop prevention strategies. The stages of diabetic retinopathy are divided into 5 diagnosis levels: level 0 (No DR), level 1 (Mild DR), level 2 (Moderate DR), level 3 (Severe DR), and level 4 (Proliferative DR). Figures 29.2 and 29.3 show bar plots of the distribution of training and test images in the dataset belonging to each stage of DR.

1. No DR: The disease has not yet progressed to the point where it affects the eye. In other words, the eye is in excellent condition.
2. Mild DR: In the first stage of mild non-proliferative DR, the symptoms include swelling in the tiny parts of the blood vessels in the retina.
3. Moderate DR: In the second stage, there are visible signs of blockage of some of the blood vessels in the retina.
4. Severe DR: The third stage of DR causes more blood vessels to clog, resulting in inadequate blood flow to portions of the retina.
5. Proliferative DR: In the fourth and final stage, new fragile and abnormal blood vessels begin to grow in the retina. These fragile vessels may leak, leading to vision loss and possibly blindness.

Fig. 29.2 APTOS2019 dataset trained images distribution


Fig. 29.3 Distribution of test images of APTOS2019 dataset

From the experiments on the CNN models, i.e., ResNet, the ensemble of ResNet and VGG, and the ensemble of ResNet and Xception, we observe that the accuracies gradually increased and the losses decreased. For the first model, ResNet, the validation loss is greater than the train loss (Fig. 29.4) and the validation accuracy is less than the train accuracy (Fig. 29.5). For the second model, the ensemble of ResNet and VGG (Figs. 29.6 and 29.7), the train and validation losses gradually decreased to similar values; the training accuracy increased gradually while the validation accuracy eventually fell, leaving the train accuracy above the validation accuracy. In the third model, the ensemble of ResNet and Xception (Figs. 29.8 and 29.9), the train and validation losses gradually decreased, with the validation loss below the train loss, and both accuracies increased, with the train accuracy below the validation accuracy. Table 29.2 reports the accuracies of ResNet and the proposed ensemble models on the APTOS 2019 dataset. We achieved an accuracy of 0.72 with our proposed ensemble of ResNet and Xception. Figure 29.10 shows the confusion matrix with the accuracies achieved at different diagnosis levels using our proposed ensemble model. Figure 29.11 shows our designed GUI detecting the DR stage from a fundus test image at severity level 0, which means the patient has no DR. Similarly, Fig. 29.12 shows our designed GUI to detect the DR stages based on fundus test images at


Fig. 29.4 Train and validation loss of ResNet graphs

Fig. 29.5 Train and validation accuracy of ResNet graphs

Fig. 29.6 Train and validation loss of ensemble of ResNet and VGG

diagnosis level 1 which means the patient has mild DR. Figures 29.13, 29.14 and 29.15 illustrate the designed GUI to detect the DR stages based on fundus test images at diagnosis levels 2, 3, and 4 which means the patient has moderate DR, severe DR, and proliferative DR, respectively.


Fig. 29.7 Train and validation accuracy of ensemble of ResNet and VGG

Fig. 29.8 Train and validation loss ensemble of ResNet and Xception

Fig. 29.9 Train and validation accuracy of ensemble of ResNet and Xception

Table 29.2 Results of the ensemble architectures

Architecture                  Accuracy   Precision   Recall   F1 score
ResNet                        0.49       0.25        0.49     0.33
Ensemble (ResNet, VGG)        0.66       0.52        0.66     0.58
Ensemble (ResNet, Xception)   0.72       0.69        0.72     0.67
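Figures such as those in Table 29.2 can be derived from a confusion matrix like the one in Fig. 29.10. The sketch below is an illustration, not the authors' code: it computes accuracy and support-weighted precision, recall, and F1, an averaging convention under which recall equals accuracy, matching the pattern in the table.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Accuracy and support-weighted precision/recall/F1 from a confusion
    matrix whose rows are true diagnosis levels and columns predictions.
    The support-weighted averaging convention is our assumption."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    accuracy = tp.sum() / cm.sum()
    prec = tp / np.maximum(cm.sum(axis=0), 1e-12)  # per-class precision
    rec = tp / np.maximum(cm.sum(axis=1), 1e-12)   # per-class recall
    f1 = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
    w = cm.sum(axis=1) / cm.sum()                  # class support weights
    return float(accuracy), float(prec @ w), float(rec @ w), float(f1 @ w)
```

With a 5×5 matrix over the diagnosis levels, the returned tuple corresponds to one row of Table 29.2.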


Fig. 29.10 Confusion matrix of ensemble of ResNet and Xception

Fig. 29.11 Test image with diagnosis level-0

29.5 Conclusions and Future Scope

The main objective of our proposed system is to use fundus photographs to estimate the severity of diabetic retinopathy with an efficient CNN model. For an effective treatment regimen, diagnostic measures should aim for precision. We proposed an approach of ensembling the ResNet and Xception architectures for diabetic retinopathy detection. In comparison with other approaches, our proposed approach performed well


Fig. 29.12 Test image with diagnosis level-1

Fig. 29.13 Test image with diagnosis level-2


Fig. 29.14 Test image with diagnosis level-3

Fig. 29.15 Test image with diagnosis level-4


that resulted in an increase in train and validation accuracies and a decrease in train and validation losses. We were able to achieve a high level of accuracy in the diagnosis outcomes. However, the proposed model is not compatible with mobile-like devices. To make the model compatible with such devices, we can extend our work by developing lightweight models [15]. We can further improve the accuracy by evaluating other ensembling techniques [5] on CNN architectures.

References

1. Mishra, S., Hanchate, S., Saquib, Z.: Diabetic retinopathy detection using deep learning. In: 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), pp. 515–520. IEEE (2020)
2. Beede, E., Baylor, et al.: A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–12 (2020)
3. Gangwar, A.K., Ravi, V.: Diabetic retinopathy detection using transfer learning and deep learning. In: Evolution in Computational Intelligence, pp. 679–689. Springer, Singapore (2021)
4. Qiao, L., Zhu, Y., Zhou, H.: Diabetic retinopathy detection using prognosis of microaneurysm and early diagnosis system for non-proliferative diabetic retinopathy based on deep learning algorithms. IEEE Access 8, 104292–104302 (2020)
5. Tsiknakis, N., Theodoropoulos, D., et al.: Deep learning for diabetic retinopathy detection and classification based on fundus images: a review. Comput. Biol. Med. 135, 104599 (2021)
6. Jain, A., et al.: Deep learning for detection and severity classification of diabetic retinopathy. In: 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT), pp. 1–6. IEEE (2019)
7. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
8. Das, S.: CNN architectures. https://medium.com/analytics-vidhya/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5 (2017)
9. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
10. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
11. He, K., Zhang, et al.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034 (2015)
12. Chollet, F.: Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258 (2017)
13. Tymchenko, B., Marchenko, P., Spodarets, D.: Deep learning approach to diabetic retinopathy detection. arXiv preprint arXiv:2003.02261 (2020)
14. APTOS 2019 Blindness Detection Dataset. https://www.kaggle.com/c/aptos2019-blindness-detection/ (2019)
15. Kolla, M., Venugopal, T.: Diabetic retinopathy classification using lightweight CNN model. In: ICCCE 2021, pp. 1263–1269. Springer, Singapore (2022)

Chapter 30

Narrow Band 5G Antenna

P. M. Preethi, P. V. Yalini Shree, N. Mohammed Shaqeeb, C. Kavya, and K. C. Raja Rajeshwari

Abstract We designed a Pick Axe antenna that can operate in the Ka-band (26–40 GHz) and can be used for satellite communication. A ground plane was developed for this Pick Axe antenna to increase its gain, directivity, and efficiency. This omnidirectional antenna, which works at high frequency, can be used in military communication. It has low return loss and high gain. Applications include satellite communication, close-range targeting radars, military aircraft, telescopes, TV, cell phones, GPS, etc. The software used was Advanced Design System (ADS), so the design cost was very low. Our main purpose in designing this 5G antenna is to produce a highly efficient antenna that can operate in the Ultra High Frequency Range (UHF) while being cost-effective and simple to design and use. The proposed 5G Pick Axe antenna is a narrow band antenna that radiates with high efficiency at 30 GHz and has a good VSWR value.

30.1 Introduction

An antenna, often known as an aerial, is a piece of electrical equipment that transforms electrical current into radio waves and vice versa. It frequently functions in tandem with a radio transmitter or receiver. A radio transmitter supplies the antenna's terminals with an alternating current (AC) pulsing at radio frequency [1]. The antenna then emits the current's energy as electromagnetic radiation. During reception, an antenna should be designed to intercept some of the power of the electromagnetic wave in order to produce a tiny voltage at its terminals; this voltage is then passed on to a receiver, where it is amplified.

P. M. Preethi · P. V. Yalini Shree · N. Mohammed Shaqeeb · C. Kavya · K. C. Raja Rajeshwari (B)
Assistant Professor, Department of Electronics and Communication Engineering, Dr. Mahalingam College of Engineering and Technology, Pollachi, Tamilnadu 642003, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_30


So, each piece of radio-using equipment must include an antenna [2]. Antennas are implemented in a variety of platforms, notably radar, two-way radio, broadcast television, and wireless transmission. Radio frequency (RF) fields are converted into alternating current (AC), or vice versa, by a special transducer known as an antenna. There are two basic types: the transmitting antenna, which produces an RF field while receiving AC from electronic devices, and the receiving antenna, which detects RF energy while supplying AC to the same devices. Improving the bandwidth performance of antenna systems is crucial to their ability to function with 5G compatibility. The main objective is to achieve high bandwidth to accommodate 5G systems' high throughput. Due to the enormous data demand, the mm-wave spectrum requires large bandwidth. By using more of the spectrum resources, from the sub-3 GHz employed in 4G to 100 GHz and beyond, greater bandwidths are made available with 5G. Both lower bands (such as sub-6 GHz) and millimetre wave (such as 24 GHz and higher) are capable of delivering 5G, which delivers very high bandwidth, multi-Gbps speed, and extremely low latency [3]. The band is known as Ka, short for "K-above", because it is the top part of the original NATO K band, which was split into three bands after the atmospheric moisture resonance peak at 22.24 GHz (1.35 cm) rendered the centre useless for long-distance transmission. KSAT has integrated Ka-band on the KSATlite network as well as large, highly effective antenna systems. Ka-band is meant to give better data rates from space to the ground, and smaller antennas can be used at Ka-band frequencies.

The Ka-band frequency has a wide range of possible applications, including high-resolution short-range target radar systems, combat aircraft, space telescopes, industrial wireless point-to-point microwave communication systems, vehicle speed detection systems, and satellite communications [4].
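The link between Ka-band operation and smaller antennas follows from the free-space wavelength λ = c/f: about 10 mm at 30 GHz, and 1.35 cm at the 22.24 GHz moisture-resonance peak quoted above. A quick check:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimetres for a frequency in gigahertz."""
    return C / (freq_ghz * 1e9) * 1e3
```

`wavelength_mm(30)` gives roughly 10 mm, which is why Ka-band antenna elements, typically sized in fractions of a wavelength, can be so compact.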

30.2 Antenna Design

At first, we created a loop antenna, but on checking the outputs, the gain and directivity were not favourable. Hence, we reshaped the antenna and arrived at this uniquely shaped design. On checking the results, the gain and directivity were much closer to our expectations. Thus, we arrived at this antenna (Figs. 30.1 and 30.2).


Fig. 30.1 Antenna design

Fig. 30.2 Design in software

30.3 Antenna Parameter Analysis

30.3.1 S-Parameter

S-parameters, also known as scattering parameters, are used to calculate how much energy is transported across an electrical network. S-parameters illustrate how different ports relate to one another when it is essential to characterize a network in terms of phase and amplitude versus frequency rather than voltages and currents. A complex system may be represented using S-parameters as a straightforward "black box", which makes it easy to describe what happens to the signal inside it [5] (Fig. 30.3).

Fig. 30.3 S-parameter
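For a one-port antenna, the relevant scattering parameter is S11, whose magnitude yields the return loss and VSWR figures discussed in this paper. A small illustrative conversion (not taken from the authors' ADS workflow):

```python
import math

def return_loss_db(s11_mag: float) -> float:
    """Return loss in positive dB from the linear magnitude of S11."""
    return -20.0 * math.log10(s11_mag)

def vswr(s11_mag: float) -> float:
    """Voltage standing wave ratio from the linear magnitude of S11."""
    return (1.0 + s11_mag) / (1.0 - s11_mag)
```

An |S11| of 0.1, for example, corresponds to a 20 dB return loss and a VSWR of about 1.22.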

30.3.2 Directivity

Aerials or RF antennas do not radiate equally in all directions; any realistic RF antenna emits more in some directions than in others. The actual pattern is influenced by the type of antenna, its size, the environment, and a range of other factors. This directional pattern can be exploited to focus the radiated power in the desired direction [6]. An antenna's directivity is frequently defined as the ratio of the highest power density P(θ, φ)max to its average over a sphere, as observed in the far field of the antenna (Fig. 30.4).
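The ratio definition above can be evaluated numerically for any sampled pattern. The sketch below is illustrative (not the ADS computation): it integrates a radiation intensity U(θ, φ) over the sphere and, for a short dipole with U ∝ sin²θ, recovers the textbook value D = 1.5.

```python
import numpy as np

def directivity(pattern, n_theta=400, n_phi=400):
    """Estimate directivity D = 4*pi*U_max / P_rad for a radiation
    intensity pattern U(theta, phi), supplied as a callable, using
    midpoint integration over the sphere. Grid sizes are arbitrary."""
    d_theta = np.pi / n_theta
    d_phi = 2.0 * np.pi / n_phi
    theta = (np.arange(n_theta) + 0.5) * d_theta    # polar angle samples
    phi = (np.arange(n_phi) + 0.5) * d_phi          # azimuth samples
    T, P = np.meshgrid(theta, phi, indexing="ij")
    U = pattern(T, P)                               # radiation intensity
    p_rad = np.sum(U * np.sin(T)) * d_theta * d_phi # total radiated power
    return float(4.0 * np.pi * U.max() / p_rad)
```

Since the average power density over the sphere is P_rad/(4π), this is exactly the maximum-to-average ratio stated in the text.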

30.3.3 Antenna Parameter In electromagnetics, an antenna’s power gain, often known as gain, is a crucial performance indicator that combines the directivity and coefficient of performance of the antenna. The gain of a transmitting antenna shows how successfully it converts input power into radio waves that go in a certain direction. How successfully a receiving antenna converts radio waves originating from a certain direction into electrical power is measured by its gain. Gain is interpreted to imply the gain’s

30 Narrow Band 5G Antenna

401

Fig. 30.4 Antenna directivity

highest value when no orientation is provided. The radiation pattern shows gain as a function of orientation [7] (Fig. 30.5).
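The gain described here combines directivity D with radiation efficiency e as G = e·D. Expressed in dBi (decibels relative to an isotropic radiator), for illustration:

```python
import math

def gain_dbi(directivity: float, efficiency: float) -> float:
    """Antenna gain in dBi from linear directivity and radiation
    efficiency (0..1): G = efficiency * directivity."""
    return 10.0 * math.log10(efficiency * directivity)
```

A lossless antenna (efficiency 1) has gain equal to its directivity, while a 50% efficient antenna gives up about 3 dB.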

Fig. 30.5 Antenna parameters


Fig. 30.6 Maximum field location

30.3.4 Maximum Field Location

See Fig. 30.6.

30.3.5 Radiating Pattern

"Radiation" refers both to the strength of the wave front and to its emission from or reception at an antenna. An antenna's radiation pattern is the contour created to depict the radiation of that antenna in a given representation [8]. The radiation pattern represents the energy emitted by an antenna, depicting diagrammatically the spread of radiated energy into space [9] (Fig. 30.7).


Fig. 30.7 Radiation pattern

30.4 Conclusion

We designed a narrow band UHF antenna that operates efficiently at 30 GHz with high gain, high directivity, and low return loss, and can be used in satellite and mobile communication. Since we used ADS software, the antenna is easy to design, and it is a cost-effective method of obtaining output in the Ka-band range.

References
1. Filice, F., et al.: Wideband aperture coupled patch antenna for Ka-band exploiting the generation of surface waves. In: 2020 IEEE International Symposium on Antennas and Propagation and North American Radio Science Meeting, pp. 623–624 (2020)
2. Tsuji, H., et al.: Effective use of Ka-band based on antenna and radio wave propagation for mobile satellite communications. In: 2020 International Symposium on Antennas and Propagation (ISAP), pp. 57–58 (2021)
3. Segura-Gómez, C., Palomares-Caballero, Á., Alex-Amor, A., Valenzuela-Valdés, J., Padilla, P.: Design of compact H-plane SIW antenna at Ka band. In: 2020 14th European Conference on Antennas and Propagation (EuCAP), pp. 1–4 (2020)
4. Mei, P., Zhang, S., Pedersen, G.F.: A dual-polarized and high-gain X-/Ka-band shared-aperture antenna with high aperture reuse efficiency. IEEE Trans. Antennas Propag. 69(3), 1334–1344 (2021)
5. Zhang, J., Mao, J.: A high-gain Ka-band microstrip patch antenna with simple slot structure. In: 2020 International Conference on Microwave and Millimeter Wave Technology (ICMMT), pp. 1–3 (2020)
6. Sun, M.-J., Liu, N.-W., Fu, G., Zhu, L.: A wideband microstrip patch antenna with dual square open-loop resonators. In: 2019 International Symposium on Antennas and Propagation (ISAP), pp. 1–3 (2019); Ji, P., Qi, Z., Huang, X., Zhao, W., Zhu, Y., Li, X.: K-band wideband microstrip antenna array with sidelobe level reduction. In: 2020 International Conference on Microwave and Millimeter Wave Technology (ICMMT), pp. 1–3 (2020)
7. Kurniawan, F., Sri Sumantyo, J.T., Sitompul, P.P., Prabowo, G.S., Aribowo, A., Bintoro, A.: Comparison design of X-band microstrip antenna for SAR application. In: 2018 Progress in Electromagnetics Research Symposium (PIERS-Toyama), pp. 854–857 (2018)
8. Dhara, R., Kumar Jana, S., Mitra, M., Chatterjee, A.: A circularly polarized T-shaped patch antenna for wireless communication application. In: 2018 IEEE Indian Conference on Antennas and Propagation (InCAP), pp. 1–5 (2018)
9. Jyosthna, R., Sunny, R.A., Jugale, A.A., Ahmed, M.R.: Microstrip patch antenna design for space applications. In: 2020 International Conference on Communication and Signal Processing (ICCSP), pp. 406–410 (2020)

Chapter 31

Rectangular and Cylindrical Slotted Microstrip Patch Antenna Design for Biomedical Application Sonam Gour, Mithlesh Arya, Ghanshyam Singh, and Amit Rathi

Abstract This paper proposes a microstrip circular patch antenna at 2.45 GHz. A microstrip patch antenna is used for its low profile, conformability, and ease of fabrication. The proposed antenna is designed for biomedical applications in the industrial, scientific, and medical (ISM) band. It can be used inside or outside the body for detecting tumors or disease, and the bending results show that it can be implanted anywhere inside the body. Circular slots are cut in the patch to achieve miniaturization. The Rogers 3010 substrate has been used for the antenna because of its biocompatible nature. The achieved return loss is − 47 dB at the resonating frequency of 2.45 GHz, and the directivity of the antenna after bending is 5.38 dB.

31.1 Introduction

In recent years, the use of antennas in medical equipment has been increasing day by day. They are used in the human body for analyzing blood pressure, heart rate, and improper tissue growth. The use of microstrip antennas in biomedical equipment has grown because of their compact size, easy fabrication, and low profile. The designed antenna operates in the industrial, scientific, and medical (ISM) band at 2.4–2.48 GHz. Many different variations of the patch have been explored by researchers to achieve miniaturization and biocompatibility. Because the orientation of an implanted device relative to the receiver is uncertain, circular polarization (CP) is often required for reliably receiving the signal; CP radiation is also used for better gain and bandwidth enhancement, and its value increases because of the high ohmic loss inside the body. A hemispherical shape in the antenna is modeled inside the

S. Gour · M. Arya · G. Singh Department of Computer Engineering, Poornima College of Engineering, Jaipur, India
S. Gour · A. Rathi (B) Department of Electronics & Communication Engineering, Manipal University Jaipur, Jaipur, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_31


body phantom to achieve circular polarization [1]. A survey of nanosensor techniques shows that antennas can be designed using different nanostructures and biocompatible materials [2]. Flexible materials are increasingly used so that the antenna can be easily implanted and adjusted to the body structure [3]. Resonance of the antenna is achieved by adding a shorting pin near the feed point [4]. Further improvement in the design is achieved by changing the ground structure: a defected ground structure is used to shift the resonance frequency and increase sensor sensitivity [5]. The design presented in that paper used defected ground structures to shift the frequency. Linear radiation is achieved by using a cross-slot design in the patch antenna, and the radiation is further increased by using shorting pins and a patch-edged surface; for the implantable scenario, crossed slots are etched in the center of the patch to obtain polarized performance [6]. A reactive impedance substrate in the microstrip patch antenna is used to widen the bandwidth and axial ratio, enhancing the antenna's performance in the lower ISM band [7]. The current path of the antenna is further lengthened by slotted structures in the patch and by shorting the patch to the ground substrate [8–11]. Arc-shaped slots and an additional U-shaped design reduce the patch size and provide a dual-band antenna design [12]. Open-ended slots are added to achieve miniaturization in a circular antenna [13]. Circularly polarized antennas are designed with different cuts in the patch, such as circular, triangular, and rectangular cuts [14, 15]. The size of the antenna can be further reduced by using a swarm optimization technique [16].
The use of MSAs in WiMAX applications is also increasing day by day [17–20]. In this paper, a microstrip circular patch antenna is proposed for the industrial, scientific, and medical band (2.4–2.48 GHz) for biomedical applications. To realize the desired properties, a new method is proposed in which two connected rectangular slots are etched from the patch, in the horizontal and vertical directions, so that the antenna works in the TEM mode. Miniaturization of the design is achieved by removing four cylindrical slots. The slots improve the return loss and minimize the size of the antenna.

31.2 Antenna Design

The proposed design for the implantable medical device is simulated and then checked further under bending. Bending analysis is necessary for this biomedical application: if the antenna has to be bent to suit its placement inside the body, its performance should not deviate. The designed structure works in the ISM band (2.4–2.48 GHz), and the achieved resonance frequency is 2.44 GHz. The antenna substrate is Rogers 3010, chosen because it is biocompatible and readily available. Bending occurs because of the uneven structure of the body, so the bending behavior of any implanted antenna is important. The bending analysis of an antenna for biomedical applications examines its behavior during mechanical deformation, such as bending or twisting. This is significant because the antenna must endure the stresses and strains of the body or of movement in applications where it is implanted or linked to a wearable device. The antenna's material characteristics, its shape, and the kind of bending or deformation the antenna is anticipated to experience are all important considerations when performing bending analysis. Finite element analysis (FEA) is an effective method for simulating the deformation behavior of antennas. The design equations used for the antenna are presented in the next section. It is also critical to consider how bending affects the antenna's electromagnetic performance: bending can shift the resonance frequency, distort the radiation pattern, and degrade the impedance matching, significantly impacting the antenna's capacity to transmit and receive signals. As a result, both the mechanical deformation and the ensuing changes to the antenna's electromagnetic properties must be considered when bending an antenna for biomedical purposes. This knowledge can be utilized to tailor the antenna's design for specific biomedical uses, such as wearable sensors or implantable devices.

31.2.1 Design Equations

The design parameters of an antenna are defined by its design equations. The radius of the circular patch is obtained from

a = \frac{F}{\sqrt{1 + \dfrac{2h}{\pi \varepsilon_r F}\left[\ln\left(\dfrac{\pi F}{2h}\right) + 1.7726\right]}}    (31.1)

F = \frac{8.791 \times 10^{9}}{f_r \sqrt{\varepsilon_r}}    (31.2)

Ground Plane Width (W) = 6h + a    (31.3)

Ground Plane Length (L) = 6h + a    (31.4)

where f_r is the resonance frequency in Hz, \varepsilon_r is the substrate permittivity, and h, F, and a are in centimeters.

As per these design equations, the initial size of the antenna is obtained; further size reduction is achieved by using rectangular and cylindrical slots. All design parameters are listed in Table 31.1.

Design Evaluation. The antenna substrate is Rogers 3010 (εr = 10.2, tan δ = 0.0035), chosen as a biocompatible material. The required result has been achieved through several design changes; some of the design evolution is represented in Fig. 31.1. The crossed-shaped slots are used to meander the current path and increase the effective electrical length of the patch.
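As an illustration, equations (31.1)–(31.4) can be evaluated numerically. This minimal sketch assumes the paper's inputs (fr = 2.45 GHz, εr = 10.2, h = 1.6 mm); note that the closed-form radius it returns is for the plain unslotted patch, so it differs from the final slotted dimensions given in Table 31.1.

```python
import math

# Evaluate design equations (31.1)-(31.4) for the circular patch.
fr = 2.45e9          # resonance frequency (Hz)
eps_r = 10.2         # Rogers 3010 relative permittivity
h = 0.16             # substrate height in cm (the formula works in cm)

F = 8.791e9 / (fr * math.sqrt(eps_r))                            # Eq. (31.2), cm
a = F / math.sqrt(1 + (2 * h / (math.pi * eps_r * F))
                  * (math.log(math.pi * F / (2 * h)) + 1.7726))  # Eq. (31.1), cm
W = L = 6 * h + a                                                # Eqs. (31.3)-(31.4)
print(f"a = {10 * a:.2f} mm, W = L = {10 * W:.2f} mm")  # a ≈ 11.0 mm with these inputs
```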

Table 31.1 Design parameters

Parameter                       Value (mm)
Ground width (W)                54.76
Ground length (L)               54.76
Height of ground (Ht)           0.035
Height of substrate (H)         1.6
Cylindrical patch radius (a)    17.6
Feed width (Wf)                 1
Inset width (g)                 0.5
Inset length (Lg)               11.69

Fig. 31.1 Design evaluation in the antenna design

The final design (design 4) provides a better result than the previous designs; design 4 is compared with them in terms of return loss, VSWR, and gain. The comparative return-loss analysis is represented in Fig. 31.2. Inserting the first rectangular slots lowers the resonant dip and shifts it further along the frequency range, and adding the cylindrical slots in design 4 deepens the dip of the signal toward the lower region. The ground is used in the antenna to control ohmic losses and parameter coupling; evolving the ground style further increases the return loss. The back portion of the ground plane prevents undesired effects by scattering the signal and avoiding omnidirectional interference. By adding these slots to the design, a similar mode can be maintained while the current path is lengthened without disturbing the antenna's performance. The increase in return loss indicates that more power has been transmitted by the antenna. The effect of the ground is represented in Fig. 31.3.

Fig. 31.2 Return loss versus frequency (1.0–4.8 GHz) for designs 1–4

By arranging a series of slots or resonators in the antenna's ground plane, the defected ground structure (DGS) design technique helps antennas operate better. This construction changes the electromagnetic characteristics of the antenna and improves its performance in terms of radiation efficiency, bandwidth, and directivity. The DGS can be used in a variety of antenna designs, such as microstrip patch antennas, slot antennas, and planar inverted-F antennas (PIFAs). Typical uses of the DGS in antenna design include bandwidth improvement, since incorporating a DGS can widen the antenna's bandwidth, which is necessary for applications that call for a broad frequency range, and polarization control, since changing the size and placement of the DGS elements can modify the polarization of the radiated signal.

Fig. 31.3 Changes in the ground shape for further enhancement of the result


Fig. 31.4 Return loss versus frequency for defected ground

Cross-polarization suppression: adding a DGS makes it feasible to reduce cross-polarization radiation and enhance the polarization purity of the antenna. Overall, the DGS methodology is a flexible and effective way to improve antenna performance, with a wide range of potential applications in fields such as satellite communication, wireless communication, and radar systems. The evolved design provides a better result than the previous one, as shown in Fig. 31.4: the achieved return loss is − 47 dB at the resonance frequency of 2.45 GHz. The demonstrated design provides very little bandwidth, which can be enhanced using other techniques. The peak result shows that the design can be taken forward for further processing, and the ISM-band response shows that it can be used for biomedical applications.

31.3 Bending Analysis of the Designed Antenna

For implanting any device in the human body, bending analysis is very important because of the uneven structure of the body: the designed antenna should be able to sit in any part of it. The designed antenna is simulated for the biomedical application and should be capable of placement on any body structure [13]. The bending analysis is carried out with the CST simulation software, and the bending direction of the antenna is represented in Fig. 31.5.

Fig. 31.5 Bending of the antenna in a certain direction

The results of the bending process show that the antenna can be used as an implantable device, and they remain within the range of the ISM band. Figure 31.6 represents the return loss of the bent antenna; the antenna still works well even when bent to match the given body structure. The simulated VSWR of the bent antenna is 1.6, represented in Fig. 31.7, indicating that only a small fraction of the signal is reflected. The bending simulation shows that the antenna can be used for biomedical purposes with increased directivity. The achieved results have been compared with previous work, and the analysis shows that the designed antenna provides a better result than the previous designs (Fig. 31.8).
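The return-loss and VSWR figures quoted here are two views of the same reflection coefficient. A small sketch of the standard conversion (the function name is illustrative):

```python
import math

def s11_db_to_vswr(s11_db: float) -> float:
    """Convert an S11 / return-loss figure (dB, negative) to VSWR.
    |Gamma| = 10**(S11/20); VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    gamma = 10 ** (s11_db / 20)
    return (1 + gamma) / (1 - gamma)

print(round(s11_db_to_vswr(-47), 3))  # flat antenna: ≈ 1.009 (almost no reflection)
print(round(s11_db_to_vswr(-13), 2))  # bent antenna: ≈ 1.58, close to the simulated 1.6
```

This is consistent with the paper's numbers: a −13 dB return loss corresponds to a VSWR of about 1.58, matching the simulated value of roughly 1.6 for the bent antenna.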

Fig. 31.6 Achieved return loss after bending the antenna


Fig. 31.7 Achieved VSWR after bending the design

Fig. 31.8 Directivity of antenna after bending process

31.4 Result

After bending the antenna in a certain direction, the antenna parameters remain the same; no changes occur in the parametric values of the designs. The final results, such as the E-field, H-field, and surface current, are represented in Figs. 31.9, 31.10, and 31.11, and all of the results lie in the parametric range (Fig. 31.12). An antenna, a device used to transmit or receive electromagnetic waves, can store energy in two ways: electrically and magnetically. The energy stored in the electric and magnetic fields surrounding an antenna is known as its electric energy density and magnetic energy density, respectively.

Fig. 31.9 E-field of antenna after bending process

Fig. 31.10 H-field of antenna after bending process

Every electromagnetic wave contains both electric and magnetic fields, and their relative strength depends on the frequency of the wave and the properties of the antenna. The simulated electric and magnetic fields of the antenna are 45,611 V/m and 248 A/m, respectively, and the electric and magnetic energy densities are 0.00866 J/m3 and 0.00882 J/m3. In biomedical applications, an antenna's power loss density is defined as the amount of power lost in the antenna per unit volume. A significant factor that affects


Fig. 31.11 Electric energy density of antenna after bending process

Fig. 31.12 Magnetic energy density of antenna after bending process


Fig. 31.13 Power loss density of antenna after bending process

the quantity of heat produced by the antenna and the possibility of thermal or tissue injury is the power loss density (Fig. 31.13). Antennas are frequently employed in wearable technology or medical implants, and in these biomedical contexts the power loss density is a key factor in determining how much electromagnetic radiation the human body is exposed to. To protect the patient's or user's safety, it is crucial to prevent excessive power loss density, because it can lead to tissue heating or injury. The simulated antenna design, using Rogers 3010 as the substrate, provides biocompatibility and a deep reflection dip. The simulated result shows that the antenna can serve the ISM band, since it provides a low S11 value at the resonance frequency of 2.45 GHz. The achieved bandwidth of the antenna is 25 MHz; it can be increased by further design evolution in future research. The achieved return loss is − 47 dB. The electric and magnetic energy densities are 0.00866 J/m3 and 0.00882 J/m3, respectively. The design is also simulated under bending; the results show that it can be used for body implantation, with an achieved return loss after bending of − 13 dB.
The antenna becomes more directive after the bending process; the achieved directivity is 5.38 dB.


Table 31.2 Comparison with previous work

References   Substrate       Frequency band                         Return loss   Res. freq (GHz)
[21]         FR4             2.45 GHz                               − 38 dB       2.45
[22]         Rogers 3010     Implantable antenna                    − 21.6 dB     2.44
[23]         FR4             Tumor detection in breast and brain    − 37.56 dB    2.48
[9]          Rogers RT6010   MICS and ISM                           − 43.19 dB    0.402/2.4
[10]         FR4             ISM band/biomedical application        − 58 dB       2.475
Proposed     Rogers 3010     ISM band                               − 47 dB       2.45

Finally, the simulated design is compared with previous work; the comparison shows that it performs better than the earlier designs (Table 31.2).

31.5 Conclusion

In this communication, a rectangular slot antenna with cylindrical cuts is presented to achieve good radiation and return loss. The designed antenna works in the ISM band and also shows good results under the bending analysis process. The simulated bandwidth of the antenna is 25 MHz. A good dip in the signal is achieved by inserting rectangular slots in the design, and the electrical current path is further lengthened by inserting circular slots. The designed antenna shows a deep return-loss dip, with a value of − 47 dB. The biocompatibility of the design is checked through the bending analysis: the bending process shows that the antenna could be implanted in the body without affecting the working of the device. The return loss of the bent antenna is − 13 dB, and the VSWR is 1.5. The results are in the range of the ISM band, so the designed antenna can be used for biomedical applications. Enhancement of the directivity is achieved after the bending process; hence, it can be said that bending increases the directivity of the designed antenna.

References
1. Ouerghi, K., Fadlallah, N., Smida, A., Ghayoula, R., Fattahi, J., Boulejfen, N.: Circular antenna array design for breast cancer detection. In: 2017 Sensors Networks Smart and Emerging Technologies (SENSET), 12–14 Sept (2017)
2. Gour, S., Chaudhary, P., Rathi, A.: Survey of microstrip antenna in nanotechnology using different nanostructures. In: Dwivedi, S., Singh, S., Tiwari, M., Shrivastava, A. (eds.) Flexible Electronics for Electric Vehicles. Lecture Notes in Electrical Engineering, vol. 863. Springer, Singapore. https://doi.org/10.1007/978-981-19-0588-9_4
3. Sharma, A., Saini, Y., Singh, A.K., Rathi, A.: Recent advancements and technological challenges in flexible electronics: mm wave wearable array for 5G networks. In: AIP Conference Proceedings, vol. 2294(1), p. 020007. AIP Publishing LLC (2020)
4. Soni, B.K., Singh, K., Rathi, A., Sancheti, S.: Performance improvement of aperture coupled MSA through Si micromachining. Int. J. Circuits Syst. Signal Process. 16 (2022)
5. Alrayes, N., Hussein, M.I.: Metamaterial-based sensor design using split ring resonator and Hilbert fractal for biomedical application. Sens. Bio-Sensing Res. (2021)
6. Usluer, M., Basaran, S.C.: Circularly polarized implantable antenna with improved impedance matching. In: URSI International Symposium on Electromagnetic Theory (2016)
7. Samanta, G., Mitra, D.: Dual band circular polarized flexible implantable antenna using reactive impedance substrate. IEEE Trans. Antennas Propagat. (2019)
8. Shubair, R.M., Salah, A., Abbas, A.K.: Novel implantable miniaturized circular microstrip antenna for biomedical telemetry. In: IEEE Conference (2015)
9. Mahbub, F., Islam, R., Banerjee Akash, S., Tanseer Ali, M.: Design and implementation of a microstrip patch antenna for the detection of cancers and tumors in skeletal muscle of the human body using ISM band. In: 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON) (2021)
10. Mainul, E.A., Hossain, Md.F.: Numerical design of a miniature dual-band DGS implantable antenna for biotelemetry. In: 2021 International Conference on Science & Contemporary Technologies (ICSCT) (2021)
11. Singh, G., Kumar, M.: Design of frequency reconfigurable microstrip patch antenna. In: 2011 6th International Conference on Industrial and Information Systems. IEEE (2011)
12. Xiao, S.Q., Li, R.Q.: Antennas design for implantable medical devices. In: IEEE Conference (2015)
13. Mendes, C., Peixeiro, C.: On-body off-body dual mode microstrip antenna for body area network applications. In: 10th European Conference on Antennas and Propagation (EuCAP) (2016)
14. Faisal, F., Amin, Y., Cho, Y., Yoo, H.: Compact and flexible novel wideband flower-shaped CPW-fed antennas for high data wireless applications. IEEE Trans. Antennas Propagat. (2019)
15. Rawat, A., Tiwari, A., Gour, S., Joshi, R.: Enhanced performance of metamaterials loaded substrate integrated waveguide antenna for multiband application. In: IEEE International Conference on Mobile (2021)
16. Rathi, A., Rathi, P., Vijay, R.: Optimization of MSA with swift particle swarm optimization. Int. J. Comput. Appl. 975, 8887 (2010)
17. Suvalka, R., Agrahari, S., Rathi, A.: CPW fed dual notched UWB antenna: WiMAX and X-band notched. Suranaree J. Sci. Technol. 29(4) (2022)
18. Suvalka, R., Agrahari, S., Yadav, A.K.S., Rathi, A.: EBG and SRR loaded triple band notched UWB antenna. Scientia Iranica (2022)
19. Gour, S., Rathi, A.: Analyzing performance of microstrip patch antenna for detecting the tumor in the human body. Telecommun. Radio Eng. 81(10) (2022)
20. Sharma, A., Suvalka, R., Singh, A.K., Agrahari, S., Rathi, A.: A rectangular annular slotted frequency reconfigurable patch antenna. In: International Conference on Communication, Devices and Networking, pp. 255–261. Springer, Singapore (2019)
21. Wang, H., Zhou, J., Huang, Y., Wang, J.: Low-profile capacitive fed air-supported microstrip antenna at UHF band for biomedical application. In: International Microwave Workshop Series on RF and Wireless Technologies for Biomedical and Healthcare Applications (2013)
22. Biplob Hossain, Md., Faruque Hossain, Md.: A dual band microstrip patch antenna with metamaterial superstrate for biomedical applications. In: International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, 14–16 Sept 2021 (2021)
23. Jeyakumar, V., Nachiappan, M., Alagarsamy, T.: Design of compact coplanar waveguide feed inverted-P antenna for biomedical implants. In: 19th International Conference on Smart Technologies (2021)

Chapter 32

An Implementation of Machine Learning-Based Healthcare Chatbot for Disease Prediction (MIBOT) Sauvik Bal, Kiran Jash, and Lopa Mandal

Abstract Every person needs health care for a good start in life, but when it comes to health issues it can be extremely difficult to consult a doctor or visit a hospital, particularly in a pandemic situation. Natural language processing (NLP) and machine learning concepts are applied to the development of a chatbot application. A chatbot system based on supervised machine learning is proposed that provides disease diagnosis and treatment, with detailed descriptions of different diseases, before the user consults a doctor. The proposed system provides a user-friendly, GUI-based text assistant that can communicate with the bot. The bot reports the symptoms and risk factors associated with the user's disease, suggests suitable analgesics, and provides the best recommendation; it also clarifies when to see a doctor physically. Research indicates that this type of system is underused and that people do not know of its benefits. By using this free application, every individual can avoid the expensive and time-consuming process of visiting a hospital.

32.1 Introduction

Most people are unaware of the growing amount of health information on the Internet. When people search for information on health, they are influenced by a variety of factors. In a busy life schedule, it becomes very difficult for people to stay aware of and careful about their health issues. Most working people claim that their hectic schedules do not allow them to consult their physician on a regular basis, and they ignore any discomfort they feel until it becomes too intense. Consulting reputable medical information covering diseases, symptoms, and treatments is important before visiting a doctor, a medical center, or a shop for assistance with a common illness. However, users with limited computer knowledge face access difficulties. A number of health applications, such as 'Doctor Me', 'MedBot', and 'MedChat', are available to inform everyone; even so, multiple steps are required to reach the desired information. The proposed system involves the development of an intelligent agent (MIBOT) that facilitates interaction between users and a chatbot that returns the diagnosis of different diseases based on the 51 symptoms users provide. Through the chatbot's interface, the system detects symptoms from user input with standard precision. Through the proposed system, people will become more aware of the need to take steps to remain healthy and of the ways to do so. Without such a system, many people ignore their health because of the lengthy hospital appointment process. Users can converse with the chatbot just as with a regular human and carry on with their work while chatting, so nothing disturbs their working hours, making the system easy to access. By using the chatbot, users are made aware of their health and can take action to improve it, thereby contributing to health care. Major diseases are likely to develop when hospital treatment for minor ailments is avoided; this approach can help resolve that problem. The aim is a chatbot that is available at all times and completely free of charge; its free, user-friendly nature makes it appealing, it is accessible from anywhere, and it reduces the cost of consulting specialized doctors.

S. Bal (B) Techno India University, Kolkata, West Bengal, India, e-mail: [email protected]
K. Jash University of Engineering and Management, Jaipur, Rajasthan, India
L. Mandal Alliance University, Bengaluru, Karnataka, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_32
The paper is organized as follows: the literature survey is discussed in Sect. 32.2; the proposed methodology is described in Sect. 32.3, with examples and a demonstration of the algorithm and system diagram; the results are demonstrated in Sect. 32.4; and a performance analysis of six different types of algorithms is presented in Sect. 32.5.

32.2 Literature Survey

Several works have been written, developed, and published on this topic. Using NLP, text is automatically translated and classified into categories; from the extracted symptoms, a chatbot diagnoses and prescribes a treatment for the disease [1, 2]. Neural networks (NNs) and Bayesian networks can be used for static defect prediction. Texts are vectorized with TF-IDF, and the cosine similarity measure is used to score similarity between them [3, 4]. A chatbot can easily be accommodated in instant messaging (IM) applications [5]. Databases have been used as a method for storing knowledge, and interpreters as a method for storing programs of operations for pattern-matching requirements [6]. MedBot is a well-known chatbot idea; in that paper, 16 symptoms are described as the knowledge base for conversation, and using the provided APIs the chatbot can be easily integrated into IM applications as well as online chat applications like Facebook, Hangouts, and Line [7]. Users can converse with a chatbot via text-to-text to ask about health problems. By analyzing natural language, such a system can serve elderly or less technical users who would have difficulty communicating their symptoms, and it would be relatively straightforward to add NLG components to support spoken language [8]. Another chatbot idea is MedChat, which applies the K-nearest neighbor (KNN) algorithm with the help of natural language processing [9]. The cancer chatbot described by Belfin R. V. is designed specifically for cancer patients, who can ask about everything related to the disease, including symptoms, treatments, survival, etc.; sentiment analysis is conducted to identify users' moods and provide them with a human-like experience [10, 11]. Another chatbot technique uses TF-IDF and the cosine similarity measure to vectorize and compare user-input symptoms with the training data, and an SVM classifier is then used to classify the symptoms [12]. Another chatbot is Medibot, which uses three algorithms to boost performance: KNN and naive Bayes handle fast, simple classification, while SVM handles complex classification [13]. SVM is a machine learning algorithm that determines decision boundaries in the problem space using hyperplanes [14, 15]. Medical chatbots can diagnose patients through simple symptom analysis, proving that they can diagnose patients somewhat accurately, and natural language processing is used to create a conversational approach [16]. Medical chatbots significantly affect the health culture of a state, and human error is less likely with such a system because of its increased reliability. Nowadays, many people are addicted to social media and the Internet and do not pay attention to their well-being [17].
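The TF-IDF vectorization, cosine-similarity matching, and SVM classification that recur in these systems can be sketched with scikit-learn. The four symptom/disease pairs below are invented toy data for illustration, not the datasets used by the cited works:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

# Toy symptom/disease pairs (assumed data, standing in for a real training set).
symptoms = [
    "high fever severe headache pain behind eyes joint pain",
    "fever dry cough tiredness loss of taste and smell",
    "increased thirst frequent urination fatigue blurred vision",
    "persistent sadness loss of interest sleep problems",
]
diseases = ["dengue", "covid-19", "diabetes", "depression"]

# Vectorize the symptom texts with TF-IDF, then fit an SVM on the vectors.
vec = TfidfVectorizer()
X = vec.fit_transform(symptoms)
clf = LinearSVC().fit(X, diseases)

# Classify a new symptom description and find the nearest training example.
query = vec.transform(["dry cough and fever and loss of taste"])
print(clf.predict(query)[0])                 # SVM class prediction
print(cosine_similarity(query, X).argmax())  # index of most similar training row
```

With these toy examples, the query overlaps mostly with the covid-like training row, so both the SVM prediction and the cosine-similarity match point to it.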

32.3 Proposed Method

In the proposed system, 'MIBOT' is designed to serve as a conversational agent that facilitates the discussion of health concerns based on the symptoms provided. Through a user interface, the chatbot can identify different diseases based on input from the user. The NLP technology used in the chatbot's diagnosis and treatment of the disease relies on automatic translation and categorization of the text based on the extracted symptoms. Because the proposed system is text-based, commands are entered in written form. At the start of the process, the bot is ready to respond to the user's opening query, for example:

You: 'Hi' or 'how are you' or 'is anyone there?' …
MIBOT: 'Hello' or 'Hi there, how can I help?'

Chatbots face many challenges with natural language input from the user. Input can take any form and any structure: the same user may phrase a request in different ways at different times, and different users phrase it differently again, such as 'what is your name', 'what should I call you', 'What's your name?'

422

S. Bal et al.

Meanwhile, the chatbot awaits a command. The commands in our system can be divided into two types: (1) disease classification and (2) general. The core task of the system is the disease classification command. For example:

1. 'How to get over depression?'
2. 'What is the risk factor of dengue?'
3. 'What are the symptoms of COVID-19?'
4. 'How to prevent diabetes?'

The system then asks for symptoms. In a single turn it accepts only one symptom of a disease; entering too many symptoms at once is likewise not expected. Users are nevertheless encouraged to enter as many symptoms as possible, since the actual disease can be predicted more accurately when more symptoms are available. After a user inputs a symptom, the bot instantly identifies the pattern, using TF-IDF to vectorize the user-input symptoms and the training set features, and cosine similarity to measure how closely they match. In our system, the SVM classifier gives the highest accuracy on both the training and test sets. On the basis of the classification result, the system generates appropriate suggestions. General commands, on the other hand, are analyzed for keywords to determine whether only medical information is required. For example:

1. 'What's your Name?'
2. 'What is MI store?'
3. 'From where you collect medicine?'
4. 'How much you charge?'

An enormous collection of disease information is available in our enriched knowledge base, collected from several authentic Internet sources. Whenever a command asks about a disease, the bot retrieves the relevant information from the knowledge base using a similarity measure (cosine similarity). One of the most important tasks of the system is the creation of the test and training datasets: a reliable source of information is needed to classify real-life symptoms into categories that correspond to real-life illnesses. Independent test datasets based on the most commonly used disease classification training data are also available [9, 18, 19]. In addition, a total of 4940 samples were gathered by contacting doctors and hospitals directly and collecting data from them (Fig. 32.1). There


Fig. 32.1 Schematic diagram of the proposed system

Table 32.1 Overview of source test and train datasets (sample size 4940)

Training dataset   Test dataset
3743               1247


are 51 unique diseases used for the testing and training datasets. Table 32.1 summarizes the source datasets. Using natural language inputs from users, the dataset complexity is reduced without losing originality. As mentioned earlier, the symptoms users report usually vary and are few. Additionally, the datasets have a high number of features, with '0' indicating a negative and '1' a positive. Here is an example of abstract data for 'Dehydration'.

Dizziness: 1, Itching: 0, Lightheadedness: 1, Dry mouth: 1, Fever: 0 → Dehydration

The first step is to remove all the negative features and keep only the positive ones. For example:

Lightheadedness: 1, Dizziness: 1, Dry mouth: 1 → Dehydration

After that, in step 2, we keep the positive feature names in place of the flags:

Dizziness, Lightheadedness, Dry mouth → Dehydration
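The two preprocessing steps above can be sketched in a few lines of Python. The column names follow the dehydration example in the text; the row itself is illustrative, not a record from the actual dataset.

```python
# Sketch of the preprocessing: drop negative ('0') features, then keep only
# the names of the positive ('1') features for each labeled row.
row = {"dizziness": 1, "itching": 0, "lightheadedness": 1,
       "dry mouth": 1, "fever": 0}
label = "dehydration"

# Steps 1 and 2 combined: retain only positively marked symptom names
positive_symptoms = [name for name, flag in row.items() if flag == 1]

print(positive_symptoms, "->", label)
# ['dizziness', 'lightheadedness', 'dry mouth'] -> dehydration
```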

The disease classification datasets we create are customized. We merged the source testing and training datasets and divided them into two parts for new testing and training data: the training dataset contains 3743 instances (75%), and the test dataset contains 1247 instances (25%). We also maintain a file that maps each disease name to the type of doctor and specialty. Table 32.2 illustrates the relationship between prognosis, specialty, and type of doctor. After classification, the bot displays suggestions: based on the mapping, the ailment is identified and the user is referred to the appropriate doctor.

Table 32.2 Prognosis and specialty mapping example

Prognosis          Specialty           Doctor
Dehydration        Gastroenterology    Gastroenterologist
Fungal infection   Dermatology         Dermatologist


32.3.1 Proposed Algorithm

1:  take basic input from user
2:  MIBOT: reply according to user's input
3:  command ← take command input
4:  command ← filter command using tokenization (e.g., lowercase)
5:  keyword ← extract keywords from command
6:  type ← detect command type from keyword
7:  c ← 0
8:  if command type is disease classification then
9:      symptom ← take a symptom input
10:     c ← c + 1
11:     fetsymptom ← match feature with symptom using cosine similarity measure
12:     if c is less than the threshold then
13:         go to 9
14:     else
15:         if the user wants to enter more symptoms then
16:             go to 9
17:         else
18:             classify symptoms using SVM based on fetsymptom
19:             display appropriate suggestion accordingly
20:         end if
21:     end if
22: else
23:     if user needs information then
24:         reply from database accordingly (disease information, diagnosis, etc.)
25:     else
26:         show "MIBOT: I am just a chatbot. Please consult a doctor for your query."
27:     end if
28: end if
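The control flow of the algorithm can be condensed into a short Python sketch. The helper callables (`classify_symptoms`, `fetch_info`) and the threshold value are hypothetical placeholders standing in for the cosine-similarity/SVM components, not MIBOT's actual code.

```python
# Condensed sketch of the algorithm's command dispatch (steps 8-28).
THRESHOLD = 3  # minimum symptoms gathered before classification (illustrative)

def handle_command(command_type, symptom_stream, classify_symptoms,
                   fetch_info=None):
    if command_type == "disease_classification":
        symptoms = []
        for symptom in symptom_stream:       # steps 9-17: gather symptoms
            symptoms.append(symptom)
            if len(symptoms) >= THRESHOLD:
                break
        return classify_symptoms(symptoms)   # step 18: SVM classification
    if fetch_info is not None:               # steps 23-24: general query
        return fetch_info()
    # step 26: fallback reply
    return "MIBOT: I am just a chatbot. Please consult a doctor for your query."

result = handle_command(
    "disease_classification",
    iter(["dizziness", "dry mouth", "lightheadedness"]),
    classify_symptoms=lambda s: f"{len(s)} symptoms classified")
print(result)  # 3 symptoms classified
```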

32.4 Results Analysis

The new 'MIBOT' application has been tested with sick people, who used the chatbot to check the state of their health. The chatbot is designed to integrate with a web browser. The presence of a cold fever is characterized by symptoms such as coughing, headaches, and body aches; on the basis of those symptoms, MIBOT was able to correctly predict cold fever from the dataset. The dataset and training


Fig. 32.2 Different output of the proposed system

requirements vary between algorithms, and different algorithms reach different accuracy levels. In our experiment, SVM performed best, with the highest accuracy of 98.47%, the best result among all the algorithms. It is clear, therefore, that SVM should be the system's central classifier (Fig. 32.2).

32.5 Performance Analysis

Experiments were performed on a system with 32 GB RAM, an Intel processor, and an 8 GB graphics card, using Python 3.7. The classifier has been validated using an independent test set as well as K-fold cross-validation. In K-fold CV, the complete dataset is randomly subdivided into K equal-sized subsamples; K − 1 subsamples are employed as training data, while one subsample is used as test data, and K cross-validation runs are performed. Our experiment used a tenfold CV. Table 32.4 summarizes the average tenfold scores (Table 32.3).

Table 32.3 Comparison of old and new systems

                          MedBot                    DoctorMe              MIBOT
Chatbot type              AI-based                  AI-rule-based         AI-based
Installation of program   Not required to install   Required to install   Not required to install
Usability                 Not needed                Needed                Not prerequired
Symptoms covered          160 symptoms              Various symptoms      51 symptoms
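The tenfold cross-validation protocol described in Sect. 32.5 can be sketched with scikit-learn (an assumption about tooling) on a synthetic stand-in for the 4940-sample symptom dataset.

```python
# Sketch of K-fold cross-validation with K = 10: the data are split into
# ten equal folds; each fold serves once as the test set while the other
# nine train the classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the binary symptom-feature dataset
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)

scores = cross_val_score(SVC(), X, y, cv=10)  # one score per fold
print(round(scores.mean(), 4))                # average tenfold score
```

The reported K-fold averages of Table 32.4 correspond to `scores.mean()` computed per algorithm in this fashion.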

Table 32.4 K-fold average score summary

Algorithms                 K-fold average score
Multinomial Naive Bayes    0.9625
Decision trees             0.9663
Random forest              0.9753
KNN                        0.9713
AdaBoost                   0.9687
SVM                        0.9856

32.5.1 Algorithm Comparison

Two datasets were used in this medical chatbot: a training dataset (75%) and a testing dataset (25%). Decision tree, random forest, multinomial NB, KNN, SVM, and AdaBoost algorithms were used for testing. In most cases SVM provides near-perfect results, handles large datasets well, and works faster; a column chart (Fig. 32.3) displays the comparisons of Table 32.5. SVM produces 98.47% accuracy, random forest 97.85%, AdaBoost 97.46%, decision tree 97.28%, KNN 96.16%, and multinomial Naive Bayes 95.77%. The model is also tested on our own independent test dataset. Three performance metrics are used in our experiment: F1-score (weighted average), precision (weighted average), and accuracy. The test set contains a range of samples from each class, and precision and F1-score are weighted according to the number of samples from each class. The metrics are defined as follows:

F1 Score = 2 × ((Recall × Precision)/(Recall + Precision))    (32.1)

Table 32.5 Overview of experimental results

Algorithms                 Accuracy   Precision (weighted avg.)   F1-score (weighted avg.)
Multinomial Naive Bayes    95.77      96.89                       96.10
Decision trees             97.28      98.64                       97.68
Random forest              97.85      98.98                       98.25
KNN                        96.16      97.36                       96.48
AdaBoost                   97.46      98.89                       97.92
SVM                        98.47      99.25                       98.65


[Fig. 32.3 shows a column chart of test accuracy and weighted-average precision per algorithm (multinomial Naive Bayes, decision tree, random forest, KNN, SVM, AdaBoost); the plotted values correspond to Table 32.5.]

Fig. 32.3 Graphical representation of test accuracy

Recall = True Positive/(True Positive + False Negative)    (32.2)

Precision = True Positive/(True Positive + False Positive)    (32.3)

Accuracy = (True Positive + True Negative)/(True Positive + True Negative + False Positive + False Negative)    (32.4)
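Equations (32.1)–(32.4) can be implemented directly for a single binary confusion matrix; the counts below are invented for illustration, not taken from the experiments.

```python
# Direct implementation of Eqs. (32.1)-(32.4) from confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    recall = tp / (tp + fn)                               # Eq. (32.2)
    precision = tp / (tp + fp)                            # Eq. (32.3)
    f1 = 2 * (recall * precision) / (recall + precision)  # Eq. (32.1)
    accuracy = (tp + tn) / (tp + tn + fp + fn)            # Eq. (32.4)
    return recall, precision, f1, accuracy

# Illustrative counts: 90 true positives, 80 true negatives, etc.
r, p, f1, acc = metrics(tp=90, tn=80, fp=10, fn=20)
print(r, p, round(f1, 4), acc)
```

For multi-class results like Table 32.5, these per-class values are averaged with weights proportional to class sizes ("weighted average").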

Although the SVM algorithm is the most accurate, with the best performance at 98.47%, all algorithms showed high accuracy (Fig. 32.3). Among the other models, random forest performs best with a 97.85% accuracy rate, whereas multinomial Naive Bayes gives the worst result at 95.77%. SVM is clearly the most suitable classifier for the system.

32.6 Conclusion and Future Work

This experiment illustrates the deployment of a healthcare chatbot built on machine learning; the steps of building the chatbot, including its customized datasets, are demonstrated step by step. SVM was found to be the most effective among six different machine learning algorithms. The goal has been achieved, but the system still has some minor shortcomings. The main limitation of the study was the inability to capture enough symptoms from the participant: the typical user tends to list very few symptoms of the problem. Making an incorrect decision


could result from this. The authors are considering several approaches to resolve this issue. The next step is to increase the accuracy and robustness of the proposed system.

References

1. Huang, J., Zhou, M., Yang, D.: Extracting chatbot knowledge from online discussion forums. In: IJCAI'07: 20th International Joint Conference on Artificial Intelligence
2. Taneja, S., Gupta, C., Goyal, K., Gureja, D.: An enhanced K-nearest neighbor algorithm using information gain and clustering. In: 2014 Fourth International Conference on Advanced Computing & Communication Technologies. https://doi.org/10.1109/ACCT.2014
3. Rahutomo, F., Kitasuka, T., Aritsugi, M.: Semantic cosine similarity. In: The 7th International Student Conference on Advanced Science and Technology ICAST (2012)
4. Qaiser, S., Ali, R.: Text mining: use of TF-IDF to examine the relevance of words to documents. Int. J. Comput. Appl. (2018). https://doi.org/10.5120/ijca2018917395
5. Tripathy, A.K., Carvalho, R., Pawaskar, K., Yadav, S., Yadav, V.: Mobile based healthcare management using artificial intelligence. In: 2015 International Conference on Technologies for Sustainable Development (ICTSD-2015), 04–06 Feb 2015, Mumbai, India
6. Setiaji, B., Wibowo, F.W.: Chatbot using a knowledge in database: human to machine conversation modeling. In: 2016 7th International Conference on Intelligent System Modeling and Simulation (ISMS), pp. 72–77 (2016)
7. Rosruen, N., Samanchuen, T.: Chatbot utilization for medical consultant system. In: The 2018 Technology Innovation Management and Engineering Science International Conference (TIMES-iCON2018)
8. Divya, I., Ishwarya, P., Devi, K.: A self-diagnosis medical chatbot using artificial intelligence. J. Web Dev. Web Des. 3(1)
9. Mathew, R.B., Varghese, S., Joy, S.E.: Chatbot for disease prediction and treatment recommendation using machine learning. In: Proceedings of the Third International Conference on Trends in Electronics and Informatics (ICOEI 2019). IEEE Xplore Part Number: CFP19J32-ART; ISBN: 978-1-5386-9439-8
10. Belfin, R.V., Shobana, A.J., Manilal, M., Mathew, A.A., Babu, B.: A graph based chatbot for cancer patients. In: 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS)
11. Bal, S., Choudhary, A.K., Majumdar, S., Pal, D., Mandal, L.: Sentiment analysis of online reviews for educational institutions. In: Applications of Machine Intelligence in Engineering, 1st edn. CRC Press. https://doi.org/10.4324/9780000000002-1
12. Moshiul Rahaman, M., Amin, R., Liton, M.N.K., Hossain, N.: Disha: an implementation of machine learning based Bangla healthcare chatbot. In: 2019 22nd International Conference on Computer and Information Technology (ICCIT), 18–20 Dec 2019
13. Srivastava, P., Singh, N.: Automatized medical chatbot (Medibot). In: 2020 International Conference on Power Electronics & IoT Applications in Renewable Energy and its Control (PARC), GLA University, Mathura, UP, India, 28–29 Feb 2020
14. Bal, S., Mahanta, S., Mandal, L., Parekh, R.: Bilingual machine translation: English to Bengali. In: Proceedings of International Ethical Hacking Conference 2018. Advances in Intelligent Systems and Computing, vol. 811. Springer, Singapore. https://doi.org/10.1007/978-981-131544-2_21
15. Mohan, L., Pant, J., Suyal, P., Kumar, A.: Support vector machine accuracy improvement with classification. In: 12th International Conference on Computational Intelligence and Communication Networks

430

S. Bal et al.

16. Kandpal, P., Raut, K.J.R., Bhorge, S.: Contextual chatbot for healthcare purpose (using deep learning). In: 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability
17. Bal, S., Mahanta, S., Mandal, L.: Bilingual machine translation: Bengali to English. In: Proceedings of International Conference on Computational Intelligence, Data Science and Cloud Computing. Lecture Notes on Data Engineering and Communications Technologies, vol. 62 (2021). Springer, Singapore. https://doi.org/10.1007/978-981-33-4968-1_31
18. Athota, L., Shukla, V.K., Pandey, N., Rana, A.: Chatbot for healthcare system using artificial intelligence. In: 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO), Amity University, Noida, India, 4–5 June 2020
19. Disease Prediction Using Machine Learning with GUI. https://www.kaggle.com/datasets/neelima98/disease-prediction-using-machine-learning. Last accessed: 03 Dec 2022, 17:37

Chapter 33

Impact of Communication Delay in a Coordinated Control VPP Model with Demand Side Flexibility: A Case Study Smriti Jaiswal , Mausri Bhuyan , and Dulal Chandra Das

Abstract Conventional power systems are based on the production of electrical energy largely from fossil fuels (coal, oil, and natural gas), with a low overall energy-conversion efficiency (typically 30–40%). Due to energy security and climate change, there has been a shift toward non-conventional energy sources, which has given rise to new technologies and concepts such as virtual power plants (VPP), distributed generation (DG), smart grids, and many more. The fast-paced integration of distributed energy resources (DER) requires new research and innovation to overcome newly emerging issues in this regard. The idea of a virtual power plant (VPP) has evolved in recent literature to incorporate DGs and widen their distribution in electricity grids. It combines various small, medium, and large DG units with existing conventional power plants into a 'virtually single' generating facility. This work focuses on modeling a VPP with solar PV energy generation, a parabolic trough solar thermal system, an electric vehicle, and a conventional thermal power plant, and on designing an appropriate control strategy for it. Modern optimization tools such as the grasshopper optimization algorithm (GOA), the salp swarm algorithm (SSA), and the sine cosine algorithm (SCA) have been investigated to obtain the optimum coordinated control of the VPP.

S. Jaiswal (B) · D. C. Das
National Institute of Technology, Silchar, Assam 788010, India
e-mail: [email protected]
D. C. Das
e-mail: [email protected]
M. Bhuyan
New Horizon College of Engineering, Bengaluru, Karnataka 560103, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_33

431

432

S. Jaiswal et al.

33.1 Introduction

Modern power systems are extensively sizeable interconnected networks of generators, transformers, transmission and distribution utilities, substations, and many other components. In recent years, more and more DERs are being integrated into power grids. The whole concept of VPPs and microgrids is based on renewable energy sources and their sustainability; the VPP is a technology for the future and has the potential to substantially reduce the reliance on fossil fuels. Microgrids and virtual power plants (VPPs) have immensely helped in integrating distributed energy resources (DER) by coordinating and aggregating them. Microgrids are constituted by the physical integration of the DERs and loads [1] and can operate in isolated and grid-connected modes. A VPP is a software (virtual) integration of different DERs that are geographically dispersed, also known as the 'Internet of Energy' [2]. By incorporating a hybrid control scheme consisting of coordinated centralized control of commercial/utility-scale DERs and decentralized control of dozens of small, residential-scale DERs using standardized interoperability functions, we envisage a virtual power plant capable of providing primary frequency response reserve benchmarks in this VPP design. Relying only on centralized control of DERs is not an option for primary frequency response reserves, since communication latencies would prohibit the aggregation from satisfying the response and ramp time parameters [3]. The integration of distributed generation, including RES and energy storage devices with artificial intelligence and communication in virtual power plants, is a trending demand, supported by new developments in ICT such as the Internet of things (IoT); wired media including twisted pair, power line communication (PLC), and optical fiber; and wireless media including cellular communication, wireless local area networks (WLAN), satellite communication, ZigBee, etc.
The mismatch between power generation and load demand causes frequency deviations that may lead to tripping of the tie-lines, system collapse, and power blackouts [1]. The core framework of VPP and ICT is the energy management system (EMS). The presented work is directed toward the automatic frequency control of a VPP with several generating sources, considering the communication delay arising from the integration of RESs. The proposed VPP model in Fig. 33.1 is considered to achieve four major objectives:

(a) A coordinated active power control of a grid-integrated VPP with high-penetration renewable energy.
(b) Development of a grid-integrated VPP model incorporating a parabolic trough solar thermal system (PTSTS), a solar PV system, and an electric vehicle (EV), using MATLAB/Simulink as the simulation platform.
(c) Incorporation of the renewable PTSTS unit with thermal energy storage as a dispatchable unit in the VPP.
(d) Devising a control strategy for the VPP model's active power regulation, incorporating communication delays, starting from those available for modified forms of the interconnected microgrid.

The paper is structured as follows: a brief introduction of the multi-objective VPP model architecture in the first section, followed by the description of the transfer function models of the system components, then the VPP model design and simulation with the proposed communication control strategy in the next section. The subsequent section presents a comparative analysis to test the controller's suitability, which is further studied under three different cases before the concluding section.

33 Impact of Communication Delay in a Coordinated Control VPP Model …

433

Fig. 33.1 An integrated multi-objective performance VPP model

proposed communication control strategy in the next section. Another section in the sequence represents a comparative analysis to test the controller’s suitability which is further studied under three different cases before concluding the section.

33.2 Proposed VPP Model, Design, and Simulation with Coordinated Control Strategy

Modern smart technology and information technology (IT) are the drivers of the VPP. The VPP specifically relies on data control centers to accumulate data from the participating DERs [4]. Data connections that are protected from other data traffic by encryption methods are used to convey control commands and data [5, 6]. Overall, the idea is to produce a resilient network of flexible power consumers with DERs and related storage systems, capable of monitoring, predicting, optimizing, and regulating their supply or consumption, or functioning without the assistance of a faulty DER. The significant components of a VPP, as per the integrated multi-objective performance shown in Fig. 33.1, include dispatchable power plants and intermittent generating units, energy storage systems (ESS), flexible loads, information and communication technology (ICT), and energy management systems (EMS). Flexible or controllable loads are those that have the capability to change regular consumption patterns according to changes in the tariff of electricity [7]. The VPP also offers a high degree of adaptability and superior optimization, encourages inbound and outbound supply chains, and works with ICT to support continuous process traceability [8]. The transfer function models of these components are discussed in the following sections.


33.2.1 Solar Photovoltaic (SPV)

The solar PV module array is connected in a suitable series–parallel arrangement so as to accumulate the power generated by each solar cell [8]. The SPV transfer function model is given below [9, 10], where K_SPV and T_SPV are the gain and time constant, respectively. From [7], K_SPV = 1.8 and T_SPV = 1 s, and from [8], K_SPV = 1 and T_SPV = 1.8 s.

G_SPV(s) = K_SPV / (T_SPV s + 1)    (33.1)
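As a rough check of Eq. (33.1), the step response of the first-order SPV block can be integrated numerically. This is an illustrative Python sketch (the chapter's simulations are done in MATLAB/Simulink), using K_SPV = 1 and T_SPV = 1.8 s as quoted from [8]:

```python
# Forward-Euler step response of G(s) = K_SPV/(T_SPV s + 1):
# in the time domain, T_SPV * dy/dt + y = K_SPV * u with a unit step u = 1.
K_SPV, T_SPV = 1.0, 1.8
dt, t_end = 0.001, 12.0

y, t = 0.0, 0.0
while t < t_end:
    y += dt * (K_SPV * 1.0 - y) / T_SPV  # Euler integration step
    t += dt

print(round(y, 4))  # approaches the DC gain K_SPV = 1
```

After several time constants (t ≫ T_SPV) the output settles at the DC gain, as expected from setting s = 0 in Eq. (33.1).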


33.2.2 Parabolic Trough Solar Thermal System (PTSTS)

The output of parabolic trough power plants, or parabolic trough collectors (PTC), depends on the solar irradiance and the temperature of the fluid [8]. A heat transfer fluid (HTF), such as synthetic oil at about 400 °C, preheats water into steam to power the turbine and generate power [10].

G_PTSTS(s) = (K_RF / (T_RF s + 1)) (K_RV / (T_RV s + 1)) (K_G / (T_G s + 1)) (K_T / (T_T s + 1))    (33.2)

(K_RF, T_RF), (K_RV, T_RV), (K_G, T_G), and (K_T, T_T) are the gains and time constants of the refocus, receiver, governor, and turbine, respectively [11].

33.2.3 Electric Vehicle (EV)

During peak hours in the power grid, EVs can discharge their stored energy to provide generation support to the grid using power electronic converters with bidirectional power transfer capability [10]. This enables them to serve as an emergency backup power source in the VPP with the help of V2G technology. The transfer function model of the EV is given below [11, 12].

G_EV(s) = 1 / (T_EV s + 1)    (33.3)


33.2.4 Conventional Synchronous Generator

In this work, the conventional thermal power plant comprises a non-reheat type turbine and includes the following parts: the governor, the turbine, and the load and machine, with transfer functions defined by the available time constants T_G, T_T, and T_P [11].

G_g(s) = 1 / (T_G s + 1)    (33.4)

G_t(s) = 1 / (T_T s + 1)    (33.5)

G_p(s) = K_p / (T_p s + 1)    (33.6)

33.2.5 Communication Delay Block

In the ICT network, a piece of information or command takes a finite transit time as it propagates during the bidirectional exchange of information between a VPP element and the VPP's master control center. This communication delay, called the dead time T_D = 0.7 s, affects various operational aspects; a rational (first-order Padé) approximation of the delay transfer function is given as [12]

e^(−sT_D) ≈ (1 − sT_D/2) / (1 + sT_D/2) = (2 − sT_D) / (2 + sT_D)    (33.7)

33.2.6 Proposed VPP Coordinated Control Design

The model has been developed on the MATLAB/Simulink platform for carrying out the simulation and analyses. The control model comprises two areas, viz. the VPP as area 1 and the conventional thermal power plant as area 2. Three primary variations are considered for the realization of the coordinated control strategy, i.e., load change in area 1 (the VPP area), load change in area 2 (the conventional thermal power plant area), and the power generated by the solar PV system, as shown in Fig. 33.2. In order to avoid the repercussions of frequency deviation, the coordinated control strategy adopted in this study acts on the difference between the power demand reference P_d and the total power generation P_g in a particular area [8]. The total power generated in the VPP area, P_g, is the sum of the output power of the PTSTS, SPV, and EV.


Fig. 33.2 Proposed VPP control architecture model

For a system with finite inertia M and damping constant D, where K_SYS is the gain constant of the considered VPP area, the grid frequency variation is given by

ΔP_e = P_g − P_d    (33.8)

P_g = P_PTSTS + P_SPV ± P_EV    (33.9)

Δf = G_SYS(s) ΔP_e    (33.10)

G_SYS(s) = Δf / ΔP_e = 1 / (K_SYS (1 + sT_SYS)) = 1 / (Ms + D)    (33.11)

33.2.7 Adopted Control Strategy

The control center(s) of the VPP triggers the DERs with control commands to produce sufficient power to cater to the aggregate load power [7]. In the central control topology shown in Fig. 33.1, the master control accumulates status information from the various DERs; the energy management system (EMS) of the VPP then computes the generation profiles of the DERs and demand responses (DRs) and schedules the active power generation profile for optimal operation [13]. A comparative analysis of generic and trending optimization algorithms, namely particle swarm optimization (PSO), GOA, SCA, and SSA, is conducted. The objective function 'J' to be minimized is defined as:


J = ∫₀ᵀ [ (Δf₁)² + (Δf₂)² + (Δp_tie)² ] dt    (33.12)
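The cost function of Eq. (33.12) can be evaluated by numerical integration of the squared deviations over T = 120 s. In the sketch below the deviation signals are synthetic decaying oscillations chosen for illustration, not output of the actual VPP simulation:

```python
# Trapezoidal-rule evaluation of J = integral of (df1^2 + df2^2 + dptie^2) dt
# over T = 120 s, on synthetic decaying-oscillation deviation signals.
import math

dt, T = 0.01, 120.0
n = int(T / dt) + 1
ts = [i * dt for i in range(n)]

# Illustrative deviation signals (amplitudes and decay rates are made up)
df1 = [0.01 * math.exp(-0.1 * t) * math.cos(t) for t in ts]
df2 = [0.008 * math.exp(-0.1 * t) * math.cos(t) for t in ts]
dptie = [0.005 * math.exp(-0.1 * t) * math.sin(t) for t in ts]

integrand = [a * a + b * b + c * c for a, b, c in zip(df1, df2, dptie)]
J = sum((integrand[i] + integrand[i + 1]) * dt / 2 for i in range(n - 1))
print(J)  # the PID gains are tuned (GOA/PSO/SCA/SSA) to minimize this value
```

Each optimizer iteration simulates the closed-loop model with candidate PID gains, evaluates J this way, and keeps the gains that reduce it, which is how the J_min values compared in the text arise.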

Here, T = 120 s; Δf₁, Δp_tie, and Δf₂ are the VPP-area frequency deviation, the tie-line power flow, and the conventional-power-plant-area frequency deviation, respectively. K_P, K_I, and K_D are the tuning parameters of the PID controller. The input parameters in Table 33.1 are used to run the different algorithms for the minimization of the objective cost function, subject to the constraints below:

K_Pmin ≤ K_P ≤ K_Pmax
K_Imin ≤ K_I ≤ K_Imax
K_Dmin ≤ K_D ≤ K_Dmax

As shown in Table 33.2, the proposed GOA provides the lowest final value of the objective function, J_min = 0.04426, compared to SSA, SCA, and PSO. Figure 33.3 depicts the performance of the system in terms of frequency deviation in both areas and tie-line power: the peak overshoot and undershoot values vary only marginally, but GOA outperforms the rest of the algorithms considered in this study on the steady-state error and settling-time parameters. It also highlights that, among PD, PI, ID, and PID controllers, the PID controller performs best

Table 33.1 Value of various parameters involved in the VPP model cost optimization

Parameters                                                           Symbol(s)               Value(s)
Gain and time constant of SPV                                        K_SPV, T_SPV            1, 1.8 s
Gain and time constant of EV                                         K_EV, T_EV              1, 0.15 s
Time constant of turbine, governor, receiver, and refocus of PTSTS   T_T, T_G, T_R, T_RF     1, 0.08, 4, 1.33 s
Gain of turbine, governor, receiver, and refocus of PTSTS            K_T, K_G, K_RV, K_RF    1, 1, 1, 1
Time constant of governor and turbine                                T_G, T_T                0.08, 0.3 s
System characteristics                                               K_sys1, K_sys2          1, 120
Constant droop value of areas 1, 2                                   R_1, R_2                2.4 Hz/p.u.
Synchronizing tie-line coefficient                                   T_12                    0.007
Moment of inertia of the VPP and conventional generator              M_1, M_2                3, 0.1667

Algorithm settings               GOA     PSO     SSA     SCA
Upper bound of PID constants     0.075   0.075   0.075   0.075
Lower bound of PID constants     0.005   0.005   0.005   0.005
Iterations                       100     100     100     100
Search agents' count             30      50      50      50


in terms of convergence of the objective function. The comparison of the GOA, PSO, SCA, and SSA algorithms indicates the suitability of the controllers.

Table 33.2 Final values of optimized PID constants after 100 iterations

Parameters           GOA           PSO           SCA           SSA
K_P1                 −0.02829      −0.03774      −0.00674      0.005
K_I1                 −0.06558      −0.05482      −0.00657      −0.03389
K_D1                 −0.05146      −0.01717      −0.075        −0.01678
K_P2                 −0.07335      −0.075        −0.075        −0.07474
K_I2                 −0.06191      −0.075        −0.075        −0.075
K_D2                 −0.0414       −0.075        −0.06864      0.002328
K_P3                 −1.75E−04     −0.01272      −0.01141      −0.00607
K_I3                 −1.02E−04     −1.37E−04     −3.04E−04     −8.12E−05
K_D3                 −0.005        4.12E−04      0.00103       0.00378
Δf₁ (Hz)
  Peak overshoot     0.01486      0.01325       0.01341       0.01606
  Peak undershoot    −0.00395     −0.003487     −0.00354      −0.00465
  Settling time (s)  80.09        > 120         106.1         102
Δf₂ (Hz)
  Peak overshoot     0.01357      0.01071       0.01087       0.01227
  Peak undershoot    −0.00347     −0.002626     −0.00261      −0.00403
  Settling time (s)  79           > 120         119.7         119.6
Δp_tie (p.u.)
  Peak overshoot     0.006065     0.006567      0.00642       0.00619
  Peak undershoot    −0.00141     −0.001511     −0.00133      −0.00138
  Settling time (s)  78.52        ~ 120         112           98

Fig. 33.3 Comparison of different algorithms in terms of frequency deviation in area 1, area 2, and tie-line with convergence curve

33 Impact of Communication Delay in a Coordinated Control VPP Model …


33.3 Results and Discussion

Considering the possibility of different operating conditions, the results of three case studies are presented in this section.

33.3.1 Case Study 1: Effect of Increasing Communication Delay on System Response

The analysis is performed by increasing the communication delay in area 1 of the VPP model in steps of 0.25 s, starting with an initial communication delay of 0.5 s (see Table 33.3, Fig. 33.4). In this case study, the subsystems of the proposed VPP model are subjected to the following constraints: for Area 1, with PTSTS, SPV, EV, and load, the delay time constants T_D1 = 0.5, 0.75, 1.0, and 1.25 s are executed with a simulation time of 120 s under these operating conditions:

P_SPV = { 0.1 p.u.,  0 < t < 20 s
        { 0.11 p.u., t > 20 s                                  (33.13)

Load in area 1 = { 0.02 p.u.,  0 < t < 40 s
                 { 0.03 p.u.,  40 s < t < 60 s
                 { 0.045 p.u., t > 60 s                        (33.14)
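The operating conditions (33.13) and (33.14) are piecewise-constant signals and can be sketched directly (boundary instants, which the equations leave undefined, are assigned to the later interval here):

```python
def p_spv(t):
    """Solar PV output in area 1 (p.u.), per Eq. (33.13)."""
    return 0.1 if t < 20 else 0.11

def load_area1(t):
    """Step load in area 1 (p.u.), per Eq. (33.14)."""
    if t < 40:
        return 0.02
    if t < 60:
        return 0.03
    return 0.045
```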

In other words, the system gradually becomes more oscillatory, leading to the eventual possibility of the system becoming unstable beyond some particular limit of communication delay [14].

Table 33.3 Transfer function of communication delay block, peak overshoot, and peak undershoot values for increasing communication delay

Comm. delay in VPP | T_D1 = 0.5 | T_D1 = 0.75 | T_D1 = 1 | T_D1 = 1.25
Δf1 (Hz): Peak overshoot | 0.01486 | 0.01618 | 0.0178 | 0.01966
Δf1 (Hz): Peak undershoot | −0.00393 | −0.00438 | −0.00488 | −0.00541
Δf2 (Hz): Peak overshoot | 0.01357 | 0.01462 | 0.01595 | 0.01756
Δf2 (Hz): Peak undershoot | −0.00347 | −0.00378 | −0.00416 | −0.0046
ΔPtie (p.u.): Peak overshoot | 0.006065 | 0.006558 | 0.007165 | 0.007877
ΔPtie (p.u.): Peak undershoot | −0.00142 | −0.00156 | −0.00174 | −0.00194
Transfer function | (−0.125s + 0.5)/(0.125s + 0.5) | (−0.1875s + 0.5)/(0.1875s + 0.5) | (−0.25s + 0.5)/(0.25s + 0.5) | (−0.3125s + 0.5)/(0.3125s + 0.5)
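The tabulated transfer functions follow the first-order rational (Padé-type) approximation of a delay, e^(−s·T_D) ≈ (1 − s·T_D/2)/(1 + s·T_D/2) [13], written with both numerator and denominator scaled by 0.5. A sketch that reproduces the tabulated coefficients:

```python
def delay_tf(t_d):
    """Coefficients (s-term, constant) of num/den of the delay approximation.

    e^(-s*t_d) ~ (0.5 - 0.25*t_d*s) / (0.5 + 0.25*t_d*s), i.e. the classic
    first-order Pade form with both sides scaled by 0.5, as in Table 33.3.
    """
    num = (-0.25 * t_d, 0.5)
    den = (0.25 * t_d, 0.5)
    return num, den
```

For example, delay_tf(0.5) gives the first column of the transfer-function row, and delay_tf(1.25) the last.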


Fig. 33.4 Impact of increasing communication delay due to DER on the VPP's power and frequency response

33.3.2 Different Load Change Scenarios in the VPP Area

The two-area VPP model is tested against various load conditions in the VPP area. A step load change represents a sudden increase of load in the area; a constant load change represents the switching-in of some constant load; and a decreasing step load represents a sudden decrease of load in two steps. Randomly fluctuating load changes closely resemble the practical load changes occurring in a typical power system. All of these load variations occur frequently in practical power systems; the time delay constants are T_D1 = 0.5 s for Area 1 and T_D2 = 0.6 s for Area 2 in each load case. The frequency response for the first sub-case of step, constant, and decreasing step load changes is obtained as in Fig. 33.5. Next, with gradually increasing load (Fig. 33.6) and with fluctuating load changes (Fig. 33.7) in the VPP area, the system frequency deviations and tie-line power flow eventually settle to zero in the steady state after some initial oscillations. Compared with the first sub-case, with gradually increasing load the system

Fig. 33.5 Plot of Δf 1 (Hz) in VPP area due to step, constant, and decreasing step load changes under sub-case 1


Fig. 33.6 System response due to gradually increasing load in VPP (Δf 1 )

oscillations are considerably smaller, both in the number and in the magnitude of the overshoots and undershoots, as given in Table 33.4.

Fig. 33.7 System response due to fluctuating load changes in VPP (Δf 1 )

Table 33.4 Peak overshoot and peak undershoot values due to gradually increasing load in VPP

 | Δf1 (Hz) | Δf2 (Hz) | ΔPtie (p.u.)
System response due to gradually increasing load under sub-case 2:
Peak overshoot | 0.01748 | 0.01594 | 0.007121
Peak undershoot | −0.002284 | −0.002151 | −0.0008123
Steady-state value | −2.11 × 10⁻⁶ | −2.092 × 10⁻⁶ | 4.449 × 10⁻⁵
System response due to randomly fluctuating load under sub-case 3:
Peak overshoot | 0.1332 | 0.1215 | 0.05423
Peak undershoot | −0.0979 | −0.09491 | −0.04042
Steady-state value | 1.916 × 10⁻⁷ | 1.964 × 10⁻⁷ | 1.156 × 10⁻⁵
Following the sharp decline of load at t = 30 s | 0.1332 | 0.1215 | 0.05423
Following the sharp rise of load at t = 60 s | −0.03126 | −0.01362 | −0.03442


Fig. 33.8 System response when the EV fleet serves as V2G (column 1) versus G2V (column 2)

33.3.3 Presence of EVs for Charge/Discharge into the Grid

This case study considers a particular scenario in which the electric vehicle fleet of the VPP system operates as V2G. The frequency of the two areas does not settle to zero in steady state, and ΔPtie (p.u.) likewise fails to settle, with Δf1 (Hz) reaching a value of 0.03774 and Δf2 (Hz) a value of 0.03762. The countermeasure for this situation, with the EVs available as G2V instead, is presented in column 2 (see Fig. 33.8).

33.4 Conclusion

The PID controller performs better than the PD, PI, and ID controllers in terms of convergence of the objective function. Of the four distinct optimization algorithms investigated in this work, the grasshopper optimization algorithm (GOA) performs best compared to the other candidate algorithms, PSO, SSA, and SCA. In any practical grid it is of utmost importance to keep the frequency within tolerance limits to avoid the repercussions of frequency deviation, and the adopted control strategy keeps the model robust in this respect. With increasing communication delay in the VPP, the overshoots and undershoots of the time responses increase; the system thus becomes more oscillatory and its stability eventually becomes vulnerable. The model also demonstrates the fast, responsive nature of the adopted control strategy in the event of a steep rise or fall


of load and in the absence of solar PV power generation. As there has been notable research and innovation in the domain of energy storage devices, the incorporation of such elements into the present model is a potential direction for further investigation.

References

1. Zhang, G., Jiang, C., Wang, X.: A comprehensive review on structure and operation of virtual power plant in electrical system. IET Gener. Transm. Distrib. 13 (2018). https://doi.org/10.1049/iet-gtd.2018.5880
2. Hussain, I., Ranjan, S., Das, D., Sinha, N.: Performance analysis of flower pollination algorithm optimized PID controller for Wind-PV-SMES-BESS-diesel autonomous hybrid power system. Int. J. Renew. Energy Res. 7, 643–651 (2017)
3. Johnson, J., et al.: Design and Evaluation of a Secure Virtual Power Plant. Sandia National Laboratories (2017). https://doi.org/10.13140/RG.2.2.36603.62244
4. Ray, P., Ekka, S.: BFO Optimized Automatic Load Frequency Control of a Multi-Area Power System (2016). https://doi.org/10.4018/978-1-5225-0427-6.ch016
5. Urcan, C., Bica, D.: Simulation concept of a virtual power plant based on real-time data acquisition, 1–4 (2019). https://doi.org/10.1109/UPEC.2019.8893565
6. El Bakari, K., Kling, W.L.: Virtual power plants: an answer to increasing distributed generation. In: Innovative Smart Grid Technologies Conference Europe (ISGT Europe), 1–6 (2010). https://doi.org/10.1109/ISGTEUROPE.2010.5638984
7. Vandoorn, T., Zwaenepoel, B., Kooning, J., Meersman, B., Vandevelde, L.: Smart microgrids and virtual power plants in a hierarchical control structure. In: 2011 2nd IEEE PES International Conference and Exhibition on Innovative Smart Grid Technologies, 7 (2011). https://doi.org/10.1109/ISGTEurope.2011.6162830
8. Saliba, S.: Virtual reality, ABB Power Systems, Power Generation, ABB Review 415, Germany (2015)
9. Saboori, H., Mohammadi, M., Taghe, R.: Virtual power plant (VPP), definition, concept, components, and types. In: 2011 Asia-Pacific Power and Energy Engineering Conference, 1–4 (2011). https://doi.org/10.1109/APPEEC.2011.5749026
10. Bhuyan, M., Barik, A.K., Das, D.C.: GOA optimised frequency control of solar-thermal/sea-wave/biodiesel generator based interconnected hybrid microgrids with DC link. Int. J. Sustain. Energy 39, 615–633 (2020)
11. Ranjan, S., Das, D., Latif, A., Sinha, N.: LFC for autonomous hybrid micro grid system of 3 unequal renewable areas using mine blast algorithm. Int. J. Renew. Energy Res. 8, 1297–1308 (2018)
12. Dey, P., Das, D., Latif, A., Hussain, S., Ustun, T.S.: Active power management of virtual power plant under penetration of central receiver solar thermal-wind using butterfly optimization technique. Sustainability 12, 6979 (2020). https://doi.org/10.3390/su12176979
13. Hanta, V., Procházka, A.: Rational approximation of time delay (2009)
14. Wu, H., Ni, H., Heydt, G.T.: The Impact of Time Delay on Robust Control Design in Power Systems, pp. 1–6. IEEE, Tempe, AZ, USA (2002)

Chapter 34

Fabrication of Patient Specific Distal Femur with Additive Manufacturing

Thoudam Kheljeet Singh and Anil Kumar Birru

Abstract The application of additive manufacturing in the field of medical science is vast and far reaching, from anatomical models and prosthetics to biomedical research, medical education, surgery planning, etc. Designing or generating a 3D CAD model is a prerequisite for additive manufacturing. In medical science, the 3D model is mainly generated from medical imaging such as computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, ultrasound scans, etc. However, the generated 3D model always carries a risk of errors such as dimensional inaccuracy and rough surface finish due to errors in the generated stereolithography (STL) file. Therefore, optimization of the model is generally required. This paper presents a holistic approach to modelling the distal femur for 3D printing using three software packages: (i) 3D Slicer for converting the CT scan, which is in DICOM format, to a 3D CAD model in STL file format; (ii) Ansys Spaceclaim for correcting STL file errors which may otherwise lead to dimensional inaccuracy, surface roughness, etc.; and (iii) Ultimaker Cura for setting the printing parameters of the model. The model processed in this way from medical images such as CT scans, MRI scans, etc. is therefore free from errors, ensuring a better 3D printed model. Finally, the model is printed on an FDM printer with polylactic acid (PLA) filament as the printing material. The dimensional deviation of the printed model from the 3D CAD model was found to be 0.22 mm when measured along the transepicondylar axis (TEA) of the distal femur, with a good surface finish.

T. K. Singh Department of Mechanical Engineering, Bir Tikendrajit University, Chanchipur, Manipur, India A. K. Birru (B) Department of Mechanical Engineering, National Institute of Technology, Imphal, Manipur, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_34



34.1 Introduction

The ability of additive manufacturing to fabricate complex shapes and structures has made it applicable in every field. A prerequisite for additive manufacturing is the availability of a 3D CAD model. An anatomical model is difficult to model directly in CAD software; therefore, the 3D model is generally generated from medical imaging such as CT scans, MRI scans, ultrasound scans, etc., which come in the DICOM file format containing meta-information and a stack of 2D images combined into a single document [1, 2]. Various software packages such as 3D Slicer, Mimics, InVesalius, OsiriX, MATLAB, etc., are used to segment the stack of 2D images into a 3D model in a file format such as STL, OBJ, VRML, AMF, or 3MF that is supported by the printer [3–5]. Among these, the STL and OBJ file formats are predominantly used [6]. In the STL file format, the 3D model surface is described by a series of linked triangular facets of different sizes and shapes [7], generally generated by the built-in algorithm of the software used [8]. However, the STL format has very limited support for representing colour, texture, and other attributes [6, 8]. One major drawback of STL is that the file size can become very large when generating facets for a highly complex curved body [6]; another is the presence of a large number of redundant vertices [9]. As the curved surface is divided into triangular facets, the final STL file may contain dimensional errors, especially in the case of human anatomy [10], along with some noise introduced as the model is converted from the DICOM file format. With an increase in the number of triangular facets, the rendering or processing time also increases. These errors must be corrected in the model before it is finalized for 3D printing, so as to ensure that the 3D printed product is free of errors.
A CT scan of the knee is used in this paper, and the focus is on the distal femur, the lower end of the femur bone located just above the knee joint. It is made up of the medial and lateral condyles, the intercondylar fossa, and the patellar surface. The femoral condyles articulate with the tibial condyles, forming the tibiofemoral joint, which allows knee flexion and extension in the sagittal plane and internal and external rotation in the horizontal plane. Many researchers have worked on additive manufacturing topics such as optimal 3D printing parameters and the minimization of defects like rough finish, dimensional inaccuracy, weak infill, etc., but very few have focused on the generation of the triangular mesh in the STL file format 3D model, and research is still lacking on the errors in STL file format 3D models, especially in the printing of anatomical models. Thus, this paper presents a holistic approach to 3D modelling for generating an error-free STL file format 3D model, converted from CT scan data using 3D Slicer. Ansys Spaceclaim is used for the correction and modification of the STL file 3D model, Ultimaker Cura software for the preparation for 3D printing, a Creality Ender 3 Pro fused deposition modelling (FDM) 3D printer for printing, and PLA Pro filament of 1.75 mm diameter from WOFD 3D as the printing material.
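The redundant per-facet vertices that inflate STL files [9] can be merged into an indexed mesh. A minimal illustration (not part of any of the tools used in this paper):

```python
def index_triangles(triangles):
    """Merge duplicated STL vertices into a shared vertex list.

    triangles: list of facets, each a tuple of three (x, y, z) vertices,
    stored redundantly as in an STL file.
    Returns (vertices, faces), where faces hold indices into vertices.
    """
    vertices, lookup, faces = [], {}, []
    for tri in triangles:
        face = []
        for v in tri:
            if v not in lookup:
                lookup[v] = len(vertices)
                vertices.append(v)
            face.append(lookup[v])
        faces.append(tuple(face))
    return vertices, faces
```

Two adjacent facets store six vertices in an STL file but share an edge, so only four unique vertices remain after indexing.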


34.2 Methodology

A post-operation CT scan of a 33-year-old male patient with a femoral condyle fracture from a traumatic injury is obtained. In order to hold the femoral condyle in place for healing, two screws were inserted into the bone.

34.2.1 Generating the 3D Model Using 3D Slicer

3D Slicer version 4.11.20210226 was used for the conversion and generation of the 3D model in STL file format from the CT scan data, which is in DICOM file format. The steps involved in generating the model using 3D Slicer are as follows:

Importing and loading the patient CT scan data into 3D Slicer: Using 'Import DICOM files' after clicking the 'Load DICOM Data' option on the start page of 3D Slicer, the directory where the patient's DICOM file is saved on the computer is selected. The CT scan data of the patient, along with information such as name, gender, study date, etc., is loaded into 3D Slicer. After clicking the load button, the file is loaded, showing the CT scan data in axial, sagittal, and coronal views along with a view showing the rendered 3D image, as in Fig. 34.1.

Volume rendering of the imported CT scan data: Volume rendering is a visualization technique in which a 3D object is displayed directly from a set of 2D image volumes without segmentation. In 3D Slicer, the 'Volume Rendering' module is selected, in which the VTK ray casting method is used for displaying the 3D model of the imported CT scan. 'Bone' is selected among the presets available under the Volume Rendering module. This renders a 3D model of the bone from the CT scan while hiding the other tissue, as shown in Fig. 34.2.

Cropping of the model using the Crop Volume module: To reduce the computational requirements and to extract only the required region of the model, cropping is required. For this, the 'Crop Volume' module is used, which helps in the extraction of image sub-volumes as described by a region of interest (ROI).

Processing and segmentation of the CT scan data: Segmentation is a procedure that delineates regions in an image. The Segment Editor module is used for specifying segments in both 2D and 3D images [11].
It offers editing tools such as threshold, erase, draw, paint, etc., to reach the final segmented 3D model. Using 'Add segment' under the Segment Editor module, a new segment is added and selected, with 'knee' under the property type and 'left' under the modifier. The following steps are employed to obtain the final segmented 3D model.

Threshold: This automatically outlines the structure in the set of DICOM images based on the greyscale intensity of the CT scan. The lower slider excludes low intensities, while the upper slider excludes high intensities. The threshold intensity


Fig. 34.1 Default display of CT scan DICOM file in 3D slicer. Top left panel shows axial plane, bottom left shows coronal plane, bottom right shows sagittal plane and top right shows 3D model as per the type of rendering

of 232 is set at the lower slider, while the upper slider is kept at maximum; the resulting 3D model is shown in Fig. 34.3.

Paint and erase: Even though thresholding includes the major portion of the area required to generate the model, it also includes noise and excludes certain required regions of the image. To correct this, paint is used to add layers to the segmented model, and the erase function removes noise from the segmented 3D model. The final model obtained is shown in Fig. 34.4.

Exporting the segmented model: Under the 'export to file' option in the 'Segmentation' module, the final 3D model is saved in the desired directory in STL file format.
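The thresholding step described above can be illustrated on a toy 2D slice. A sketch (plain nested lists stand in for the CT volume; 232 is the lower intensity used in this paper):

```python
def threshold_mask(image, lower=232, upper=None):
    """Binary mask: True where the pixel intensity lies within [lower, upper].

    image: 2D list of intensities (one CT slice); upper=None means no upper cut,
    mirroring the upper slider being kept at maximum.
    """
    return [[lower <= px and (upper is None or px <= upper) for px in row]
            for row in image]
```

Applied slice by slice, the stack of masks outlines the bone voxels that the segment is built from.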


Fig. 34.2 Volume rendered 3D model of the patient’s CT scan data using VTK ray casting method

Fig. 34.3 Generated 3D model using the threshold function after it has been cropped using region of interest (ROI)


Fig. 34.4 3D model generated after processing using paint and erase function

34.2.2 Refining the Generated 3D Model Using Ansys Spaceclaim

Ansys Spaceclaim version 2021 R2 is used for optimizing the model. The steps involved in correcting the errors and optimizing the STL model generated with 3D Slicer are as follows:

Importing the 3D model into Ansys Spaceclaim: The file in STL format is dragged and dropped into the Ansys Spaceclaim software environment.

Checking the facets of the STL file for errors: Using the 'Check facet' function under the 'Facet' menu, the STL file is checked for any errors that would adversely affect the 3D printing process. The following errors were found: self-intersecting mesh, inconsistent triangle orientation, over-connected mesh, non-manifold vertices, and a mesh with multiple pieces. These errors mostly crept in through the noise in the 3D model generated with 3D Slicer.

Correcting the errors in the STL file: The following steps are taken to correct the generated 3D model:

• As the noise is generally not connected to the main 3D model, the 'Separate All' function under the 'Facet' menu is used to separate the noise from the main model as different facets, as shown in Fig. 34.5. All facets other than the main 3D model are deleted, thereby removing the noise associated with the STL file.

• A facet check after the deletion of the noise from the 3D model shows the presence of non-manifold vertices, as shown in Fig. 34.6.


Fig. 34.5 Noise separated as different facets in the STL file 3D model, shown with the main model hidden

Fig. 34.6 Two red circles indicate the presence of non-manifold vertex in the noise-free model

• The region where the non-manifold vertex is located is deleted and patched again, so that no non-manifold vertex remains in the model. The facet check is run again, and the geometry is found to be free of errors.

Smoothening the 3D model: Since the exported STL 3D model has a rough surface, the 'smooth' function under the Facet menu is used to reduce the roughness


to an extent. An angle threshold of 60° is maintained, and the 'flatten peaks' type is used.

Checking and correcting facet errors caused by the smoothening operation: As the meshes in the 3D model are rearranged by adding facets in the valleys and deleting them at the peaks, the facet check is run again to ascertain that the geometry is free of errors. This check found a self-intersecting mesh in the geometry. The error is corrected using the automatic 'Autofix' function under the Facet menu. The facet check is then run once more, and the geometry is found to be free of errors.

Exporting the file in STL format: The model optimized with Ansys Spaceclaim 2021 R2 is saved using the save option in the desired directory, selecting the STL option under 'save as type'.
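Spaceclaim's 'smooth' operation is proprietary, but its effect resembles classic Laplacian smoothing, in which each vertex moves part-way toward the centroid of its neighbours. An illustrative sketch only (the 60° angle threshold and the 'flatten peaks' behaviour are not modelled):

```python
def laplacian_smooth(vertices, faces, iterations=1, factor=0.5):
    """Move each vertex by `factor` toward the centroid of its mesh neighbours."""
    # build vertex adjacency from the triangular faces
    neighbours = {i: set() for i in range(len(vertices))}
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbours[i]
            if not nbrs:
                new.append(v)
                continue
            centroid = [sum(verts[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
            new.append([v[k] + factor * (centroid[k] - v[k]) for k in range(3)])
        verts = new
    return verts
```

On a spiky vertex surrounded by flat neighbours, one iteration pulls the spike halfway down, which is the "knock down the peaks" effect described above.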

34.2.3 Preparing the Model for 3D Printing Using Ultimaker Cura

Ultimaker Cura is an open-source slicing software package for rapid prototyping or 3D printing. For this paper, Ultimaker Cura 4.12.1 is used to prepare the model for 3D printing.

• The model is first imported into Ultimaker Cura 4.12.1.
• The model is oriented so that its flat surface rests on the platform, using the 'lay flat' option under the rotation tab.
• Under the print settings, the parameters are set, and supports are generated during the slicing process wherever the model has an overhanging part.
• Finally, the model is sliced, and a preview is generated in order to view how the model will be printed on a fused deposition modelling (FDM) printer.

34.2.4 Printing of the Model

Slicing the model generates a G-code file, which is sent to the FDM 3D printer (Creality Ender 3 Pro). PLA filament (WOFD 3D PLA Pro) of 1.75 mm diameter is used as the printing material. Postprocessing, such as removal of the supports and adhesive, is performed on the printed model.
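The G-code produced by slicing is essentially a sequence of coordinated moves. A toy illustration of one square perimeter at a given layer height (not Cura's actual output, which also handles extrusion amounts, temperatures, retraction, etc.; the square size and feed rate are arbitrary):

```python
def square_perimeter_gcode(z, size=20.0, feed=1200):
    """Emit G1 moves tracing a square of side `size` mm at height z mm."""
    corners = [(0.0, 0.0), (size, 0.0), (size, size), (0.0, size), (0.0, 0.0)]
    lines = [f"G1 Z{z:.2f} F{feed}"]          # lift to the layer height
    lines += [f"G1 X{x:.2f} Y{y:.2f} F{feed}" for x, y in corners]
    return "\n".join(lines)
```

A real sliced file repeats this pattern layer by layer, following the outline of each slice of the STL model instead of a square.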


34.3 Results and Discussion

34.3.1 Model Creation from DICOM File

A 3D model of the distal femur with the condyle fracture, along with the screws holding the condyle in position, is successfully generated using 3D Slicer, as shown in Fig. 34.7. This file is exported in STL file format, which is recognized by a 3D printer. While generating an STL file format 3D model with 3D Slicer, structures unconnected to the main model, i.e. noise, are usually included and are either missed or cannot be removed. This noise can cause inconsistent triangle orientation, self-intersecting meshes, etc., which hinder the quality of the 3D printed object. The errors in the model due to the noise were checked using the 'Check facet' function of Ansys Spaceclaim.

Fig. 34.7 Different views of the STL file format 3D model of the distal femur


34.3.2 Error Correction in the STL File Format 3D Model

The 'Autofix' function in Ansys Spaceclaim is a powerful tool for fixing errors in a 3D model in STL file format, but it does not separate noise from the main model. As the result of the facet check after applying the Autofix function in Fig. 34.8 shows, all errors are corrected except for the presence of multiple pieces, which indicates that there is more than one facet body in the model, and the presence of non-manifold vertices, which is discussed later in this section. As most of the errors are caused by the noise present in the model, deleting the noise eliminates them. Therefore, instead of directly using the Autofix function, the 'Separate All' function is used first to separate the noise present in the model by splitting facets off from the main model. The separated facets, i.e. the noise, are shown in Fig. 34.5. These separated facets are deleted, thereby eliminating the noise. Even after the removal of the noise, an error was still found in the model: the presence of non-manifold vertices. A non-manifold vertex arises from the presence of more than one vertex at the same location, and similarly a non-manifold edge or face arises from more than one edge or face at one location, resulting in non-manifold geometry. In an STL file, this is usually due to the presence of a large number of redundant vertices [9]. The non-manifold geometry in the 3D model is shown in Fig. 34.9. These non-manifold vertices are corrected by deleting the triangular mesh in that region and patching it, thereby restoring the watertightness of the model.
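A closely related defect check, for non-manifold edges, can be expressed compactly: in a watertight triangle mesh every edge belongs to exactly two faces, so an edge shared by more faces signals a defect and an edge with only one face marks an open boundary. A sketch (illustrative only; Spaceclaim's facet checks are more extensive):

```python
from collections import Counter

def edge_defects(faces):
    """Classify mesh edges: boundary (1 face) and non-manifold (>2 faces).

    faces: triples of vertex indices. Returns (boundary_edges, nonmanifold_edges).
    """
    count = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1   # undirected edge key
    boundary = [e for e, n in count.items() if n == 1]
    nonmanifold = [e for e, n in count.items() if n > 2]
    return boundary, nonmanifold
```

An empty result for both lists is one necessary condition for the watertightness that the patching step above restores.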

Fig. 34.8 Result of check facet after applying the ‘Autofix’ function


Fig. 34.9 Red circle denotes the presence of non-manifold geometry in the 3D model

34.3.3 Smoothening the Model Using Ansys Spaceclaim

To reduce the surface roughness of the model, the smoothing function of Ansys Spaceclaim is used. It offers three types, i.e. flatten peaks, add facets, and volume aware. Add facets and volume aware would have more than doubled the counts already present in the model, i.e. 747,962 faces and 373,923 vertices. Therefore, to smoothen the surface, the flatten peaks type is used, so as to knock down the spikes and fill the valleys without changing the number of triangular faces and vertices. During the smoothening operation the triangular faces are rearranged, and as a result certain errors occurred in the meshing of the model: the mesh of the smoothened model was found to be self-intersecting when checked for errors. This was corrected using the Autofix function of Ansys Spaceclaim, which involves the deletion and creation of triangular meshes in the model, resulting in an increase to 373,998 vertices and 748,112 faces. Through this method the surface is smoothened by rearranging the mesh, and an error-free 3D model in STL file format is obtained through the deletion and correction of noise, non-manifold vertices, etc.


34.3.4 Final Preparation of the Model Using Ultimaker Cura

Using the Ultimaker Cura software, the various printing parameters such as printing speed, nozzle temperature, bed temperature, etc., are set. The required support structures are also generated to support the overhanging areas of the model. The model is then sliced. The slicing algorithm slices the model into individual layers and converts the STL file into G-code. The G-code drives the tool, i.e. the nozzle of the FDM 3D printer, according to the layer information generated during the slicing process.

34.3.5 3D Printed Model Using FDM Printer

The printed model shown in Fig. 34.10 has a good surface finish, with a dimensional deviation of 0.22 mm from the 3D CAD model when measured along the transepicondylar axis (TEA) of the distal femur.
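The reported deviation amounts to comparing the same landmark-to-landmark distance on the CAD model and on the print. A sketch with hypothetical landmark coordinates (not the actual measurement data from this study):

```python
import math

def tea_deviation(cad_medial, cad_lateral, print_medial, print_lateral):
    """Absolute difference (mm) of the transepicondylar distances on CAD vs print.

    Each argument is an (x, y, z) epicondyle landmark in mm.
    """
    d_cad = math.dist(cad_medial, cad_lateral)
    d_print = math.dist(print_medial, print_lateral)
    return abs(d_cad - d_print)
```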

Fig. 34.10 3D printed model of the distal femur using FDM 3D printer


34.4 Conclusion

This paper presents a holistic approach to the modelling of a 3D anatomical model and its preparation for 3D printing. The generated error-free STL file format 3D model is a prerequisite both for 3D printing and for conversion to CAD geometry for finite element analysis (FEA) of any anatomical 3D model obtained from a DICOM file format source such as a CT scan, MRI scan, etc. Nevertheless, the model can also be used for better visualization in pre-operative planning or post-operative follow-up in case of any complication. The steps used to reach the 3D printed model with good dimensional accuracy are as follows:

• 3D Slicer is used for the generation of the 3D model in STL file format from the CT scan data (DICOM file).
• The noise present in the generated model and meshing errors such as non-manifold vertices and intersecting meshes, which can hinder the print quality, are corrected using Ansys Spaceclaim.
• Ultimaker Cura is used for setting the printing parameters, along with supports for the overhanging areas of the model, and for slicing the model.
• A Creality Ender 3 Pro FDM printer and PLA filament of 1.75 mm diameter from WOFD 3D are used for printing the model. The printed model is shown in Fig. 34.10.

References

1. Marro, A., Bandukwala, T., Mak, W.: 3D printing and medical imaging: a review of the methods and applications. Curr. Probl. Diagn. Radiol. 45(1), 2–9 (2016)
2. Mamdouh, R., El-Bakry, H.M., Riad, A.E., Elkhamisy, N.: Converting 2D-medical image files "DICOM" into 3D-models, based on image processing, and analysing their results with python programming. WSEAS Trans. Comput. 19, 10–20 (2020)
3. Smitha, T.V., Madhura, S.B., Brundha, R.C.: 2D image-based higher-order meshing for 3D modelling in MATLAB. IOP Conf. Ser. Mater. Sci. Eng. 1070, 12–17 (2021)
4. Kamio, T., Suzuki, M., Asaumi, R., Kawai, T.: DICOM segmentation and STL creation for 3D printing: a process and software package comparison for osseous anatomy. 3D Print. Med. 6(17) (2020)
5. Durnea, C.M., Siddiqi, S., Nazarian, D., Munneke, G., Sedgwick, P.M., Doumouchtsis, S.K.: 3D-volume rendering of the pelvis with emphasis on paraurethral structures based on MRI scans and comparisons between 3D Slicer and OsiriX. J. Med. Syst. 45(3), 27 (2021)
6. Dasgupta, P.B.: Compressed representation of colour information for converting 2D images into 3D models. Int. J. Comput. Trends Technol. 68(11), 59–63 (2020)
7. Rypl, D., Bittnar, Z.: Generation of computational surface meshes of STL models. J. Comput. Appl. Math. 192(1), 148–151 (2006)
8. Szilvi-Nagy, M., Mátyási, G.: Analysis of STL files. Math. Comput. Model. 38, 945–960 (2003)
9. Wang, C.-S., Chang, T.-R., Hu, Y.-N., Hsiao, C.-Y., Teng, C.-K.: STL mesh re-triangulation in rapid prototyping manufacturing. In: IEEE International Conference on Mechatronics, ICM'05, 492–497 (2005)
10. Manmadhachary, A., Kumar, Y.R., Krishnanand, L.: Improve the accuracy, surface smoothing and material adaption in STL file for RP medical models. J. Manuf. Process. 21, 46–55 (2016)


11. Pinter, C.S., Lasso, A., Fichtinger, G.: Polymorph segmentation representation for medical image computing. Comput. Methods Progr. Biomed. 171, 19–26 (2019)
12. Zhang, X., Zhang, K., Pan, Q., Chang, J.: Three-dimensional reconstruction of medical images based on 3D Slicer. J. Complexity Health Sci. 2(1), 1–12 (2019)
13. Cheng, G.Z., Jose Estepar, R.S., Folch, E., Onieva, J.O., Gangadharan, S., Majid, A.: 3D printing and 3D Slicer—powerful tools in understanding and treating structural lung disease. Chest 149(5), 1136–1142 (2016)
14. Nguyen, V.S., Tran, M.H., Quang Vu, H.M.: A research on 3D model construction from 2D DICOM. In: 2016 International Conference on Advanced Computing and Applications (ACOMP), 158–163 (2016)
15. Chunxiang, W., Zhenhua, L.: Optimization and application of STL model slicing algorithm in rapid prototyping. In: Advanced Materials Research 605–607, 669–672, Trans Tech Publications, Switzerland (2013)
16. Nizamuddin, M., Kirthana, S.: Reconstruction of human femur bone from CT scan images using CAD techniques. IOP Conf. Ser. Mater. Sci. Eng. 455, 012103 (2018)

Chapter 35

Performance Comparison of Cuckoo Search and Ant Colony Optimization for Identification of Parkinson's Disease Using Optimal Feature Selection

Neha Singh, Sapna Sinha, and Laxman Singh

Abstract Parkinson’s disease (PD) is a chronic central nervous system condition that largely affects the body movement. If left untreated at a nascent stage, it could be life threatening. PD leads to sluggishness of movement and causes muscle inflexibility and tremors. There are numerous methods available in literatures such as speechbased method, gait-based methods, handwriting-based methods for detection of PD. However, speech-based method is known to be an efficient and competitive method as compared other two methods. Hence, in this study, we consider speech-based method in which two nature-inspired algorithm, viz., cuckoo search and ant colony optimization (ACO) have been used to select the optimal features for classification of the Parkinson patient with respect to healthy ones. The simulation results reveal that the cuckoo search algorithm obtained the better accuracy and achieved minimal subset of features with more stability compared to ACO algorithm.

35.1 Introduction Parkinson’s disease (PD) [1] is a slowly progressing condition of the central nervous system that mostly impairs bodily movement. Loss of brain cells leads to Parkinson’s disease, an illness of the neurological system. Nerve cells in the patient steadily lose the ability to communicate with one another as a result of this disease, which results in nervous system problems like depression, etc. [2]. Slowness of movement, N. Singh (B) · S. Sinha Amity Institute of Information Technology, Amity University, Noida, India e-mail: [email protected] S. Sinha e-mail: [email protected] L. Singh Department of Computer Science and Engineering (AI-ML), KIET Group of Institutions, Ghaziabad, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_35

459

460

N. Singh et al.

tremors, muscle rigidity, poor posture, imbalance, deviation in speech, and uneven handwriting strokes are a few of the symptoms that first manifest [3]. These signs gradually start to show. Since this disorder is incurable, it should be found in its early stages. Early-stage diagnosis of this disease is aided by symptoms including irregular handwriting and speech patterns. Medical professionals can propose pathology lab testing for specific Parkinson’s disease symptoms if they are aware of their exact nature and relative importance. If this is the case, the disease can be identified during the initial consultation itself. For the detection of PD, numerous methods exist in the literature, including speech-based methods, gait-based methods, and handwriting-based methods. Compared to the other two methods, the speech-based method is recognized as the most effective and competitive. Unified Parkinson’s Disease Rating Scale (UPDRS) is widely utilized to quantify the stages of Parkinson’s disease severity [4]. It consists of a series of tests performed by the patient and evaluated by the clinician based on a set of predefined criteria. Popular applications of these scores in machine learning include learning models that use the speech signals of subjects to predict the subject’s UPDRS score. One such popular dataset was produced by the University of Oxford in collaboration with the National Centre for Voice and Speech in Denver, Colorado, and uploaded to the UCI-Machine Learning data repository. (UCI-ML). Patients with PD may speak with a modest to significant alteration in their voice’s acoustic characteristics, such as flutter and fundamental frequencies. This serves as a helpful sign for classifying Parkinson’s patients automatically. Recent research has demonstrated that by determining a link between the minimum and maximum frequencies, measures of varying frequencies variation, etc., speech patterns of the patients have been effective to predicting the same [4]. 
In the paper [5], Aculab collected voice data from two men aged 59 with similar voices. One participant was healthy, while the other had Parkinson's disease (see Fig. 35.1). Both men were asked to say 'aah', and the waveforms illustrate the differences in their voices. The healthy speaker's speech is regular, whereas the Parkinson's patient's speech is extremely irregular, with high values of jitter and shimmer and a low harmonic-to-noise ratio.

Fig. 35.1 Voice data of healthy and PD-affected person [5]

A large amount of data increases the complexity and cost of computation. Increased feature and instance counts require a significant amount of preprocessing and processing work; data treatment is therefore required. Feature selection, also known as variable or attribute selection, is useful for managing data volume, which lowers processing cost and complexity [6]. A subset is chosen from all the available features. The primary factor taken into account is accuracy, which is calculated both before and after feature selection [7]. The primary goal is to use fewer features while still achieving better, or at least equivalent, classifier performance [8, 9]. The use of bio-inspired algorithms [10] in optimization techniques [11] is becoming increasingly popular. These are sometimes known as evolutionary optimization algorithms or nature-inspired algorithms. Examples of optimization methods that draw inspiration from nature include the genetic algorithm, particle swarm optimization, cuckoo search, ant colony optimization, the peacock algorithm [12], and others. A swarm refers to any controlled collection of cooperating agents. Swarm intelligence [13] refers to the intelligence of a group of simple agents as a whole. Inspired by the cooperative nature of social insect colonies and other animal societies, particularly beehives, termite hills, wasp nests, and ant colonies, the study of 'swarm intelligence' has been used to develop new algorithms and other complex problem-solving devices. When the intelligence, ability, and skill of multiple such agents are pooled, the agents not only find their food source quickly but also find one that is good in quality and quantity and requires a shorter travel distance. Several optimization problems have been successfully tackled by utilizing swarm intelligence. Swarm optimization is the practice of utilizing and exploiting the collective intelligence of a group of agents with the aim of optimizing some overarching goal. In our work, we have used two swarm optimization algorithms [14], cuckoo search and ant colony optimization (ACO), for feature selection. The primary goal of this study is to narrow the search space by obtaining the smallest subset of features that allows us to classify the patient as healthy or non-healthy (i.e., suffering from Parkinson's disease) with high accuracy. The rest of the paper is organized as follows: Sect. 35.2 describes the two bio-inspired algorithms, cuckoo search and ant colony optimization (ACO), for feature selection. Section 35.3 outlines the implementation of the two nature-inspired algorithms for selecting an optimal feature subset. Results are discussed in Sect. 35.4, followed by a comparison with some other related optimization algorithms. The paper concludes with Sect. 35.5.
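The wrapper idea described above (classifier accuracy computed on a candidate feature subset) can be sketched in a few lines of Python. This is an illustrative stand-in only: the nearest-centroid base learner, the function name, and the toy data layout are our assumptions, not the classifier used in this chapter.

```python
import statistics

def wrapper_accuracy(feature_idx, X, y):
    """Accuracy of a tiny nearest-centroid classifier restricted to the
    columns listed in feature_idx (evaluated on the training data for brevity)."""
    centroids = {}
    for label in sorted(set(y)):
        rows = [[row[j] for j in feature_idx]
                for row, lab in zip(X, y) if lab == label]
        centroids[label] = [statistics.fmean(col) for col in zip(*rows)]
    correct = 0
    for row, lab in zip(X, y):
        v = [row[j] for j in feature_idx]
        pred = min(centroids, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(v, centroids[c])))
        correct += (pred == lab)
    return correct / len(y)
```

A feature-selection search would call such a function as its fitness, keeping the subset with the highest score; this is the quantity that the two swarm algorithms below optimize.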

35.2 Methodology

This section briefly describes the two swarm optimization algorithms: cuckoo search and the ant colony optimization algorithm.

35.2.1 Cuckoo Search Algorithm

Cuckoo search [15, 16] is inspired by the parasitic nature of the cuckoo bird. A unique trait of these birds is that they choose a random nest belonging to a host bird and lay their eggs there instead of in their own. Cuckoo birds lay only one egg at a time. Furthermore, they readily adopt the physical properties (color, shape, and spots) of the host bird's eggs, increasing the chances of their eggs' survival. The host bird, on the other hand, may determine that an egg in her nest does not belong to her. As a result, she has the option of either discarding the egg or leaving her nest to construct a new one somewhere else. Therefore, the cuckoo eggs that are most similar to those of the host bird have the best chance of survival. Figure 35.2 explains this process. A key aspect of the cuckoo search algorithm is the construction of a new nest when the old one is abandoned. The development of new nests is governed by random walks implemented via Levy flights [18, 19]: the nest with the best eggs is carried forward, while the others are dropped and replaced by new random nests. In the quest for the best quality food, animals and insects walk randomly through their environment. This flight behavior of certain animals and insects is known as Levy flight. Levy flight, a form of random search, depends on the idea of steps and the size of each step. The term 'step' refers to the next potential move, while 'step-size' defines the amount by which the current step will vary to generate the next step based on a

Fig. 35.2 Cuckoo search [17]


Fig. 35.3 Levy flight distribution [22]

probability distribution. We have implemented Mantegna's algorithm [20] for Levy flights [21]. Figure 35.3 shows the comparison between the Levy flight, normal, and Cauchy distributions. The equation for generating a nest step is given below:

Step = u / |v|^(1/β),  (35.1)

where β is a parameter in the interval [1, 2], assumed here to be 1.5. The formulas for u and v are:

u = randn(size(s, 1), size(s, 2)) × sigma,  (35.2)

v = randn(size(s, 1), size(s, 2)),  (35.3)

s = nest,  (35.4)

where

sigma = [ (sin(π × β/2) × Γ(1 + β)) / ( Γ((1 + β)/2) × β × 2^((β−1)/2) ) ]^(1/β).  (35.5)
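Equations (35.1)–(35.5) can be checked with a short, dependency-free Python sketch of Mantegna's algorithm; the function and variable names below are our own, and the Gaussian draws play the role of the randn matrices above.

```python
import math
import random

def mantegna_sigma(beta):
    """Scale factor sigma from Eq. (35.5)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_steps(n, beta=1.5, seed=None):
    """n Levy-distributed step sizes: Step = u / |v|**(1/beta), Eq. (35.1),
    with u ~ N(0, sigma^2) and v ~ N(0, 1) as in Eqs. (35.2)-(35.3)."""
    rng = random.Random(seed)
    sigma = mantegna_sigma(beta)
    return [rng.gauss(0.0, sigma) / abs(rng.gauss(0.0, 1.0)) ** (1 / beta)
            for _ in range(n)]
```

For β = 1.5 the scale factor is roughly 0.697, and the resulting steps mix many small moves with occasional long jumps, which is what lets cuckoo search escape local optima.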

The proposed algorithm’s main steps are as follows: 1. Initialization of nest: A binary string of ones and zeroes is used to represent each nest. Each bit of this string represents an egg that corresponds to one feature. If the bit string is ‘1’, this means that particular feature is present, whereas ‘0’

464

2. 3.

4. 5. 6.

N. Singh et al.

means that particular feature is not present. Initially, the population of nest is generated randomly. Each nest contains N eggs which is equal to the number of features in the dataset. Fitness of nest: Each nest’s fitness is calculated as the classification accuracy using only the features present in the nest. Replacing the worst nest: Every iteration, the best nest up to that point is identified, and only those nests with fitness greater than some probabilities are carried over to the following generation, while the others are replaced by new nest. Step Size: Using Levy flight, each nest is updated, resulting in a new population of nests. Stopping criterion: Steps 1–4 are repeated till the iteration exhausts. Output: The algorithm returns the nest with maximum fitness, i.e., highest classifier’s accuracy and a bit string that specifies the optimal feature subset.
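The six steps above can be sketched as a compact binary cuckoo search loop in Python. This is a hedged illustration, not the authors' code: the Levy move is approximated by a single random bit flip (a common binarization choice), `fitness` stands in for the classifier accuracy of step 2, and all names are ours.

```python
import random

def binary_cuckoo_search(n_features, fitness, n_nests=15, pa=0.75,
                         n_iters=50, seed=0):
    """Return (best_nest, best_fitness); nests are 0/1 feature masks."""
    rng = random.Random(seed)
    # Step 1: random initial population of nests
    nests = [[rng.randint(0, 1) for _ in range(n_features)]
             for _ in range(n_nests)]
    best = max(nests, key=fitness)
    for _ in range(n_iters):
        for i, nest in enumerate(nests):
            cand = nest[:]                      # step 4: move each nest
            cand[rng.randrange(n_features)] ^= 1
            if fitness(cand) > fitness(nest):   # keep the better nest
                nests[i] = cand
        nests.sort(key=fitness)                 # step 3: abandon worst nests
        n_drop = int(pa * n_nests)
        for i in range(n_drop):
            nests[i] = [rng.randint(0, 1) for _ in range(n_features)]
        best = max(nests + [best], key=fitness)  # elitism on the best so far
    return best, fitness(best)                   # step 6: output
```

Here `pa` plays the role of the replacement probability from Table 35.2; with a wrapper accuracy as `fitness`, the returned bit string is the selected feature subset.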

35.2.2 Ant Colony Optimization Algorithm

ACO [23] represents the swarm intelligence of ants searching for food, which find the shortest path to food sources through the release of a chemical called pheromone. Pheromone is a mark left by ants (which decays/evaporates over time) to help peer ants follow the path and reach the food source. Thus, the path richest in pheromone will be the most popular and will be the shortest path. As a result of pheromone decay, the less popular paths will have fading pheromone over time, and these paths will thus be eliminated from the competition to determine the fastest route to the ideal food source. Figure 35.4 explains this process. As an optimization method, ACO can be used to find optimal solutions to feature selection problems, where it is desired to use the fewest possible features to predict a class label without sacrificing accuracy. ACO searches the space of possible feature subsets for the best one. For the purposes of the ACO implementation, we have assigned a pheromone to each feature. To determine the accuracy of bit strings, the wrapper method is used for feature selection. The proposed algorithm's main steps are as follows:

1. Initialization of ants: Each ant is represented as a bit vector of '0' and '1'; '0' in the bit string indicates that the feature is not selected and '1' that it is selected. For the first iteration, the ants are randomly generated with 'm' features.
2. Solution construction: For each subsequent iteration, we start with 'm − p' features on and keep adding features until the number of features equals 'm'. The addition of a feature is governed by the Updated Selection Measure (USM) factor: each ant selects the feature that has the maximum USM, given by:

USM(j) = (Classifier_accuracy(j) × pher(j)) / (Classifier_accuracy(i) × pher(i)),  (35.6)


Fig. 35.4 ACO [24]

where feature i does not belong to the subset being considered, feature j belongs to the subset being considered, and pher(j) is the pheromone of the jth feature.

3. Generating population for next iteration: Using the 'k' best subsets, we generate the next generation of ants.
4. Updation: The pheromone for each feature is updated by:

pher(i) = (a × r1) + (b × r2) + (c × {1 − r3}) + d,  (35.7)

where a, b, c, and d are constants; r1 is a ratio that indicates the frequency of occurrence of a feature in the best population; r2 denotes the ratio between the occurrence of a feature in the best half of the subsets and the overall occurrence of the feature; and r3 denotes the overall occurrence of the feature.
5. Stopping criteria: The algorithm stops when either the termination criterion is met (a predefined maximum number of iterations), or the space has been searched under the constraints that the minimum number of features is chosen and that the accuracy of the chosen subset in determining the class is not significantly different from that of the entire feature set.


6. Output: The algorithm outputs the best solution, i.e., the bit string (corresponding to the feature subset selected) with maximum classifier’s accuracy.
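A simplified, dependency-free Python sketch of these steps follows. It is illustrative only: the roulette-wheel choice over pheromone stands in for the USM factor of Eq. (35.6), a plain evaporation-plus-deposit rule stands in for the occurrence-ratio update of Eq. (35.7), and all names and constants are our assumptions.

```python
import random

def aco_feature_selection(n_features, fitness, n_ants=6, n_iters=20,
                          subset_size=4, evap=0.2, seed=7):
    """Return (best_subset, best_fitness); subsets are sorted feature indices."""
    rng = random.Random(seed)
    pher = [1.0] * n_features             # initial pheromone per feature
    best_subset, best_fit = None, float("-inf")
    for _ in range(n_iters):
        subsets = []
        for _ in range(n_ants):           # each ant builds one subset
            chosen = set()
            while len(chosen) < subset_size:
                remaining = [j for j in range(n_features) if j not in chosen]
                weights = [pher[j] for j in remaining]
                chosen.add(rng.choices(remaining, weights=weights)[0])
            subsets.append(sorted(chosen))
        for s in subsets:                 # evaluate, track global best
            f = fitness(s)
            if f > best_fit:
                best_subset, best_fit = s, f
        it_best = max(subsets, key=fitness)
        for j in range(n_features):       # evaporate, then deposit on
            pher[j] *= (1 - evap)         # the iteration's best subset
            if j in it_best:
                pher[j] += 1.0
    return best_subset, best_fit
```

On the speech data, `fitness` would be the wrapper accuracy of the classifier on the chosen columns, and the pheromone vector plays the role of pher in Table 35.3.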

35.3 Implementation of the Proposed Algorithms

This section specifies the input parameters and the dataset required for the implementation of the two proposed algorithms, i.e., the cuckoo search and ant colony optimization algorithms.

35.3.1 Input Parameters

Table 35.1 specifies the input variables that are common to, and should be stated at the beginning of, the cuckoo search and ant colony optimization algorithms. The input variables in Tables 35.2 and 35.3 are to be specified at the beginning of the cuckoo search and ant colony optimization algorithms, respectively.

Table 35.1 Input variables common to cuckoo search and ACO algorithms

Parameter                | Value    | Description
fv                       | 195 × 22 | Total number of features in dataset
grv                      | 195 × 1  | Group vector (0 for healthy person; 1 for PD-affected person)
Activation function      | sig      | Activation function type*: 'sig' for sigmoidal, 'sin' for sine, 'hardlim' for hardlim, 'tribas' for triangular basis, 'radbas' for radial basis function
Number of hidden neurons | 80       | Number of hidden neurons assigned
Type                     | 1        | Type of classification (0 for regression; 1 for classification, both binary and multi-class)
iter                     | 10       | Total iteration count

* Source: https://dr.ntu.edu.sg/

Table 35.2 Input variables used only for the cuckoo search algorithm

Parameter | Value | Description
N         | 22    | Number of nests (feature count in dataset)
Pa        | 0.75  | Probability with which eggs are replaced
Lb        | 0     | Lower boundary limit
Ub        | 1     | Upper boundary limit

Table 35.3 Input variables used only for the ACO algorithm

Parameter | Value | Description
Na        | 6     | Number of ants
pher      | 1     | Pheromone value of each feature
Li        | 1     | Local feature importance

35.3.2 Dataset

In this work, we have used the open-source Parkinson's disease dataset that is freely available on the Kaggle website (https://www.kaggle.com/datasets/gargmanas/parkinsonsdataset). This dataset contains 195 biomedical voice measurements. Of the 195 instances, 147 belong to persons with PD, while 48 belong to persons without PD. There are 23 attributes in the dataset. Each attribute corresponds to a dataset column indicating a type of voice measure, and the last column states whether the person is healthy or affected by Parkinson's disease: a value of '0' in the last column means the person is healthy, and '1' means the person is affected by Parkinson's disease. Table 35.4 presents the details of the features of the dataset.

Table 35.4 Attribute information of Parkinson's disease dataset [25]

Feature label | Description
MDVP: Fo (Hz) | Average fundamental frequency of the voice
MDVP: Fhi (Hz) | Maximum fundamental frequency of the voice
MDVP: Flo (Hz) | Minimum fundamental frequency of the voice
MDVP: jitter (%), MDVP: jitter (Abs), MDVP: RAP, MDVP: PPQ, jitter: DDP | Several measures of variation in fundamental frequency
MDVP: shimmer, MDVP: shimmer (dB), shimmer: APQ3, shimmer: APQ5, MDVP: APQ, shimmer: DDA | Several measures of variation in amplitude
NHR, HNR | Two measures of the ratio of noise to tonal components in the voice
RPDE, D2 | Two nonlinear dynamical complexity measures
DFA | Signal fractal scaling exponent
Spread1, spread2, PPE | Three nonlinear measures of fundamental frequency variation
Status | Health status of the subject (one: Parkinson's, zero: healthy)

The main characteristics of the dataset are:
• Dataset characteristics: multivariate.
• Number of instances: 195.
• Area: life.
• Attribute characteristics: real.
• Number of attributes: 23.
• Associated tasks: classification.
• Missing values: N/A.

Figure 35.5 shows the scatter distribution of the speech dataset using principal component analysis (PCA). PCA [26] is a technique used in data analysis to reduce the dimensionality of the data while retaining the most crucial information. Mean-centred data, the covariance matrix, eigenvalues and eigenvectors, principal components, and loading scores are the components of PCA. The built-in function 'pca' in MATLAB was used to perform PCA. The output of this function includes the principal component coefficients (coeff), the principal component scores (score), and the eigenvalues of the covariance matrix (latent). Taken together, these outputs provide a comprehensive breakdown of the outcomes of a PCA analysis: the coefficient matrix depicts the principal component structure, the score matrix shows each observation's relationship to the principal components, and the eigenvalues give the variance explained by each principal component. These findings can be used to understand the fundamental structure of the data, spot trends or clusters, or reduce the dimensionality of the data for more in-depth examination. The coefficients of each original variable in a PC are determined by the eigenvectors of the data's covariance matrix; each PC is a linear combination of the original variables. The first PC captures the most variation in the data, then the second PC, and so on. The data were projected onto the new coordinate system determined by the principal components using the principal component scores, and the same data were plotted using MATLAB's gscatter function.

Figure 35.5 shows the scatter distribution of the speech dataset using principal component analysis (PCA). Data analysis uses the principal component analysis[26] (PCA) technique to reduce the dimensionality of the data while retaining the most crucial information. Mean-Centered Data, Covariance Matrix, Eigenvalues and Eigenvectors, Principal Components, and Loading Scores are the components of PCA. The built-in function ‘pca’ in MATLAB was used to perform PCA. The principal component coefficients (coeff), principal component scores (score), and the eigenvalues of the covariance matrix are all included in the output of this function. (latent). These outputs provide a comprehensive breakdown of the outcomes of a PCA analysis when taken together. The principal component structure is depicted by the coefficient matrix, each observation’s relationship to the principal components is shown by the score matrix, and the variance explained by each principal component is shown by the eigenvalues. These findings can be used to comprehend the fundamental structure of the data, spot trends or clusters, or reduce the dimensionality of the data for more in-depth examination. The coefficients of each original variable in a PC are determined by the eigenvectors of the data’s covariance matrix. Each PC is a linear combination of the original variables. The first PC, then the second PC, and so on capture the most variation in the data. The data were projected onto the new coordinate system determined by the principal components using the principal component scores, and the same data were plotted using MATLAB’s gscatter function.

Fig. 35.5 Scatter plot of dataset using principal component analysis (PCA)
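The MATLAB pca/gscatter pipeline described above can be mirrored in a dependency-free Python sketch. The power-iteration routine below is our own stand-in for MATLAB's pca, returning analogues of coeff (the component vectors) and score (the projected data); it is not the code used to produce Fig. 35.5.

```python
import math

def pca_2d(X, n_power_iters=200):
    """First two principal components of the rows of X via power iteration
    with deflation; returns (components, scores)."""
    n, d = len(X), len(X[0])
    means = [sum(col) / n for col in zip(*X)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]    # mean-centred
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)   # covariance
          for b in range(d)] for a in range(d)]
    comps = []
    for _ in range(2):
        v = [1.0] * d
        for _ in range(n_power_iters):           # power iteration
            w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = math.sqrt(sum(x * x for x in w)) or 1.0
            v = [x / norm for x in w]
        lam = sum(v[a] * sum(C[a][b] * v[b] for b in range(d))
                  for a in range(d))             # Rayleigh quotient = eigenvalue
        comps.append(v)
        C = [[C[a][b] - lam * v[a] * v[b] for b in range(d)]
             for a in range(d)]                  # deflate the found component
    scores = [[sum(Xc[i][j] * c[j] for j in range(d)) for c in comps]
              for i in range(n)]
    return comps, scores
```

Plotting the two score columns, coloured by the status label, reproduces the kind of scatter shown in Fig. 35.5.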


35.4 Result

This section presents the results of both algorithms (i.e., cuckoo search and ACO) on the Parkinson's disease dataset. The algorithms were tested on the above-mentioned dataset in terms of the number of features selected, the accuracy returned by the classifier, and the computational cost (in seconds) taken by the classifier. Table 35.5 shows the number of features selected by the cuckoo search and ACO algorithms, Table 35.6 shows the accuracy returned by the cuckoo search and ACO algorithms, and Table 35.7 shows the computational cost (in seconds) of the cuckoo search and ant colony algorithms.

35.4.1 Result on Speech PD Dataset

In the PD dataset, the total number of features was 23. After implementation of the optimization algorithms, the features were reduced from 23 to 9 with a maximum accuracy of 91.43% using the cuckoo search algorithm, and from 23 to 4 with a maximum accuracy of 85.14% using the ACO algorithm. The results demonstrated in Tables 35.5 and 35.6 show that the cuckoo search algorithm obtained nine features with higher accuracy, while the ACO algorithm reduced the feature set further to 4 with slightly lower accuracy, when both were implemented on the same hardware and software with 10 iterations. Besides this, the computational cost of the cuckoo search algorithm was slightly lower than that of the ACO algorithm, indicating its comparatively higher speed.

Table 35.5 Features selected by cuckoo search and ACO

Total features in dataset: 23
Features selected using cuckoo search algorithm: 9
Features selected using ACO algorithm: 4

Table 35.6 Accuracy obtained by cuckoo search and ACO

Total features: 23
Accuracy (%) returned by cuckoo search algorithm: 91.43
Accuracy (%) returned by ACO algorithm: 85.14

Table 35.7 Computational cost (in seconds) of cuckoo search and ACO

Total features: 23
Computational cost (in seconds) of cuckoo search algorithm: 0.3220
Computational cost (in seconds) of ACO algorithm: 3.294

Fig. 35.6 Accuracy comparison of cuckoo search and ACO with TCFA and OCFA

35.4.2 Comparison of the Proposed Algorithms with Other Optimization Algorithms

The accuracy achieved by cuckoo search and ant colony optimization is compared with that achieved by the traditional cuttlefish algorithm (TCFA) and the optimized cuttlefish algorithm (OCFA) [27] in Fig. 35.6. Figure 35.7 shows the number of features selected by cuckoo search, ACO, TCFA, and OCFA. Figure 35.8 shows the comparison of the computational cost (in seconds) of the cuckoo search, ACO, TCFA, and OCFA algorithms. From the results obtained, it is evident that the classifier returned higher accuracy for the subsets of features selected by the two proposed algorithms than for all features present in the dataset. Therefore, the two algorithms were implemented successfully and achieve the desired objective. Hence, these figures imply that the cuckoo search and ACO algorithms select a minimal subset from the full set of features, which helps attain increased classifier accuracy.

Fig. 35.7 Comparison of features selected using cuckoo search and ACO with TCFA and OCFA

Fig. 35.8 Comparison of computational cost of cuckoo search and ACO with TCFA and OCFA

35.5 Conclusions

In this paper, we have evaluated the performance of two swarm optimization algorithms, cuckoo search and ACO, for classifying Parkinson's disease. The performance of these algorithms was evaluated in terms of the optimal subset of features, accuracy, and computational cost using the Parkinson's disease dataset procured from Kaggle.com. Based on the numerical results shown in Tables 35.5 and 35.6, it can be inferred that both algorithms performed well in terms of accuracy and the selection of an optimal number of features. The cuckoo search algorithm obtained nine features with 91.43% accuracy, while the ACO algorithm reduced the feature set further to 4 with 85.14% accuracy when implemented on the same hardware and software with 10 iterations. In terms of computational cost, the cuckoo search algorithm was found to be slightly faster than the ACO algorithm, as shown in Table 35.7. Hence, more concisely, we can say that the cuckoo search algorithm can be employed more efficiently in applications where accuracy is of prime importance, whereas the ACO algorithm is preferable in tasks where dimensionality reduction is of greater importance than accuracy.

References

1. Demir, F., Siddique, K., Alswaitti, M., Demir, K., Sengur, A.: A simple and effective approach based on a multi-level feature selection for automated Parkinson's disease detection. J. Pers. Med. 12, 55 (2022). https://doi.org/10.3390/jpm12010055
2. Sehgal, S., Agarwal, M., Gupta, D., Sundaram, S., Bashambu, A.: Optimized grass hopper algorithm for diagnosis of Parkinson's disease. SN Appl. Sci. 2, 999 (2020). https://doi.org/10.1007/s42452-020-2826-9
3. Sharma, P., Jain, R., Sharma, M., Gupta, D.: Parkinson's diagnosis using ant-lion optimisation algorithm. IJICA 10, 138 (2019). https://doi.org/10.1504/IJICA.2019.103370
4. Unified Parkinson's Disease Rating Scale Characteristics and Structure. The Cooperative Multicentric Group—PubMed. https://pubmed.ncbi.nlm.nih.gov/8139608/. Last accessed 03 Aug 2022
5. Aculab voice detection system aids diagnosis of Parkinson's disease. https://www.digitalhealth.net/2017/06/voice-detection-system-used-to-aid-diagnosis-of-parkinsons-disease/. Last accessed 30 Mar 2023
6. EBSCOhost|65237668|Feature Subset Selection Based on Bio-Inspired Algorithms. Last accessed 03 Aug 2022
7. Tran, B., Xue, B., Zhang, M.: Overview of particle swarm optimisation for feature selection in classification. In: Dick, G., Browne, W.N., Whigham, P., Zhang, M., Bui, L.T., Ishibuchi, H., Jin, Y., Li, X., Shi, Y., Singh, P., Tan, K.C., Tang, K. (eds.) Simulated Evolution and Learning, pp. 605–617. Springer International Publishing, Cham (2014). https://doi.org/10.1007/978-3-319-13563-2_51
8. Chen, G., Chen, J.: A novel wrapper method for feature selection and its applications. Neurocomputing 159, 219–226 (2015). https://doi.org/10.1016/j.neucom.2015.01.070
9. Maldonado, S., Weber, R.: A wrapper method for feature selection using support vector machines. Inf. Sci. 179, 2208–2217 (2009). https://doi.org/10.1016/j.ins.2009.02.014
10. Chaudhary, R., Banati, H.: Study of population partitioning techniques on efficiency of swarm algorithms. Swarm Evol. Comput. 55, 100672 (2020). https://doi.org/10.1016/j.swevo.2020.100672
11. Fister, Jr., I., Yang, X.-S., Fister, I., Brest, J., Fister, D.: A brief review of nature-inspired algorithms for optimization. http://arxiv.org/abs/1307.4186 (2013). https://doi.org/10.48550/arXiv.1307.4186
12. Chaudhary, R., Banati, H.: Peacock algorithm. In: 2019 IEEE Congress on Evolutionary Computation (CEC), pp. 2331–2338 (2019). https://doi.org/10.1109/CEC.2019.8790371
13. Beni, G.: From swarm intelligence to swarm robotics. In: Şahin, E., Spears, W.M. (eds.) Swarm Robotics, pp. 1–9. Springer, Berlin, Heidelberg (2005). https://doi.org/10.1007/978-3-540-30552-1_1
14. Nature-Inspired Swarm Intelligence and Its Applications (PDF). https://www.researchgate.net/publication/287566299_Nature-Inspired_Swarm_Intelligence_and_Its_Applications
15. Jiang, Y., Liu, X., Yan, G., Xiao, J.: Modified binary cuckoo search for feature selection: a hybrid filter-wrapper approach. In: 2017 13th International Conference on Computational Intelligence and Security (CIS), pp. 488–491 (2017). https://doi.org/10.1109/CIS.2017.00113
16. Zhao, M., Qin, Y.: Feature selection on elite hybrid binary cuckoo search in binary label classification. Comput. Math. Methods Med. 2021, 1–13 (2021). https://doi.org/10.1155/2021/5588385
17. Xu, W., Yu, X.: Adaptive guided spatial compressive cuckoo search for optimization problems. Mathematics 10, 495 (2022). https://doi.org/10.3390/math10030495
18. Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214 (2009). https://doi.org/10.1109/NABIC.2009.5393690
19. Cuckoo Search via Lévy Flights (PDF). https://www.researchgate.net/publication/45904981_Cuckoo_Search_via_Levy_Flights. Last accessed 07 Nov 2022
20. Nasa-ngium, P., Sunat, K., Chiewchanwattana, S.: Enhancing modified cuckoo search by using Mantegna Lévy flights and chaotic sequences. In: The 2013 10th International Joint Conference on Computer Science and Software Engineering (JCSSE), pp. 53–57 (2013). https://doi.org/10.1109/JCSSE.2013.6567319
21. Modified Lévy flight distribution algorithm for global optimization and parameters estimation of modified three-diode photovoltaic model. SpringerLink. https://doi.org/10.1007/s10489-022-03977-4. Last accessed 27 Mar 2023
22. Nolan, J.P.: Univariate Stable Distributions: Models for Heavy Tailed Data. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-52915-4
23. A New Feature Selection Method Based on Ant Colony and Genetic Algorithm on Persian Font Recognition (PDF). https://www.researchgate.net/publication/272910647_A_New_Feature_Selection_Method_Based_on_Ant_Colony_and_Genetic_Algorithm_on_Persian_Font_Recognition. Last accessed 03 Aug 2022
24. Liu, Y., Cao, B., Li, H.: Improving ant colony optimization algorithm with epsilon greedy and Lévy flight. Complex Intell. Syst. 7, 1711–1722 (2021). https://doi.org/10.1007/s40747-020-00138-3
25. Hindawi: Table 1. Diagnosing Parkinson's Diseases Using Fuzzy Neural System. https://www.hindawi.com/journals/cmmm/2016/1267919/tab1/. Last accessed 30 Mar 2023
26. Principal component analysis of raw data—MATLAB PCA—MathWorks India. https://in.mathworks.com/help/stats/pca.html. Last accessed 30 Mar 2023
27. Gupta, D., Julka, A., Jain, S., Aggarwal, T., Khanna, A., Arunkumar, N., de Albuquerque, V.H.C.: Optimized cuttlefish algorithm for diagnosis of Parkinson's disease. Cogn. Syst. Res. 52, 36–48 (2018). https://doi.org/10.1016/j.cogsys.2018.06.006

Chapter 36

Simulation and Modelling of Task Migration in Distributed Systems Using SimGrid

Ehab Saleh and Chandrasekar Shastry

Abstract Developing protocols and algorithms in the field of distributed computing necessitates obtaining comparable results from a large number of back-to-back experiments on real distributed systems. However, multithreading cannot provide performance representative of a real distributed system, and using real systems is usually prohibitively expensive and necessitates highly skilled resources and power management. Simulation is used as a solution to these issues: it not only allows for reliable results at a low cost and in a reasonable amount of time but also allows for the exploration of a wide range of platforms and scenarios. Churn is the most concerning issue when modelling and simulating a distributed network. It refers to the unpredictability of a large number of arriving and departing devices in a short period of time. In response to churn, the server initiates task migration, which entails relocating the remaining jobs to another device in the same network. In this paper, we use the simulation framework SimGrid to run two experiments on task migration in distributed systems. For the same task, we use a different network size in each experiment. To ensure the network's heterogeneity, we select the dataset GWA-T-13 Materna, which contains performance metrics, described as trace files, of over 1500 VMs from the distributed Materna data centres in Dortmund, Germany. In both experiments, the simulation results show how the distributed system responds to churn in a way that is reflected in the execution time and power consumption of the devices that initiate task migration.

C. Shastry Department of Computer Science and Engineering, Jain University, Bengaluru, India e-mail: [email protected] E. Saleh (B) Leibniz Supercomputing Center of the Bavarian Academy of Science and Humanities, Munich, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_36


36.1 Introduction

A distributed system is a network of computing devices that are physically distributed across geographical areas and linked via the same or different networks to perform a common task that requires more computing power and memory than a single device can provide. The most important characteristic of a real distributed network is heterogeneity: the ability of any machine, with any architecture and running any operating system, to join the network and participate in solving the main task. However, aside from the large monetary budget required to run such a complex and heterogeneous system, the devices' computing resources are dedicated solely to running their share of the tasks, so they must be available most of the time, which makes developing such a distributed system challenging in terms of resource and power management. These issues can be addressed through simulation. SimGrid [2] is an open-source simulation framework first introduced in 2000 to investigate scheduling algorithms for distributed applications in a distributed environment. Nowadays, it is used to develop and support distributed algorithms and protocols in a wide range of distributed networks. To model the behaviour of an algorithm in SimGrid, we write the actual code in C++, Java, or Python, using any operating system and any IDE. Churn [5] refers to the unpredictability of a large number of arriving and departing devices in a short period of time in distributed networks. Churn is most common in peer-to-peer (P2P) networks, where there is no specific topology that the network follows and a large number of peers join and leave the network, since in most cases peers are anonymous (unknown to the server and therefore inaccessible) and autonomous (participants can join and leave at their own discretion).
Responding to Churn, the server initiates task migration, which allocates new available devices in the same network to continue executing the remaining jobs. In this paper, we model and simulate task migration in a P2P distributed network using SimGrid. We take the standard approach of task migration between peers, in which the remaining tasks or jobs are shifted to the fastest peer, i.e. the one that completes its execution first. To ensure the network's heterogeneity, we selected the dataset GWA-T-13 Materna [3], which contains performance metrics, described as trace files, of over 1500 VMs from the distributed Materna Data Centers in Dortmund, Germany. On the other hand, to mimic the behaviour of Churn in our networks, we consider the availability of connected devices: the server considers devices with computing power below a predefined threshold to have left the network and to be no longer available.
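To make this churn rule concrete, the following sketch (our own illustration, not the paper's actual SimGrid code; the threshold value and peer names are hypothetical) marks peers whose computing power falls below the threshold as departed and selects the fastest remaining peer as the migration target:

```python
THRESHOLD_GFLOPS = 1.0  # hypothetical availability cut-off

def detect_churn(peers):
    """Split peers into available and departed sets based on the threshold.
    `peers` maps a peer name to its current computing power (GFLOPS)."""
    available = {p: g for p, g in peers.items() if g >= THRESHOLD_GFLOPS}
    departed = {p: g for p, g in peers.items() if g < THRESHOLD_GFLOPS}
    return available, departed

def migration_target(available):
    """The remaining jobs are shifted to the fastest available peer."""
    return max(available, key=available.get)

peers = {"peer-a": 5.0, "peer-b": 0.2, "peer-c": 8.0}
available, departed = detect_churn(peers)
print(migration_target(available))  # peer-c
```

Here "fastest" is approximated by raw computing power; in the experiments it is the peer that actually completes its execution first.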


36.2 Related Work

Task migration in distributed systems is the process of shifting a task from one network host to another when its execution is interrupted for any of a variety of reasons. The primary motivations for task migration are load balancing and increased availability [10]. Task migration is also referred to as job migration or process migration, and any of these terms can denote the transmission of unfinished work from one computing element to another in environments such as symmetric multiprocessors, Non-Uniform Memory Access (NUMA) multiprocessors, Massively Parallel Processors (MPP) and Local Area Networks (LAN). Operating systems and distributed systems use several methods to perform task migration when necessary, but the general steps of each method are:

1. The source node sends a migration request to the destination node.
2. The process is halted in the source node and removed from its execution context.
3. The process states are extracted from the source node. These states could be the address space and the channels that are available.
4. The destination node creates an instantiated space before the source node transmits the process to be executed.
5. The process, along with all necessary states, is sent to the destination node by the source node.
6. The process is imported into the instantiated space by the destination node.
7. The imported process instance is activated for execution in the destination node.

Figure 36.1 depicts the three primary components involved in task migration: the source instance, the destination instance, and the communication process that transfers the states between the two migration processes.

Fig. 36.1 High-level view of task migration

The concept of migrating tasks has been developed in a range of domains in distributed systems, including load balancing, with the goal of improving resource utilisation and response time. Suen and Wong proposed a network protocol and a fully distributed algorithm for task migration in [11] in order to reduce both the response time and the communication cost. The framework is based on finite projective planes, which Maekawa used to reduce the communication cost of a distributed mutual exclusion algorithm in [7]. The proposed approach initiates task migration whenever it is feasible, that is, whenever there is at least one lightly loaded CPU and at least one heavily loaded CPU. In [6], Ma and Wang suggested using Java-based lightweight task migration to accelerate computation. The middleware used supports an asynchronous migration technique that allows migrations to occur virtually anywhere in the task. Zhu et al. [12] carried out several task migration experiments to investigate the impact of process migration on system performance. All experiments were conducted on the distributed system Amoeba [1]. Because the cost of the migrations is offset by the gain from using idle cycles, the results showed a slight improvement in system performance. The purpose of this research is to perform task migration on two different network sizes. However, rather than conducting migrations on a single central server, as is typically done in studies where the network is client-server, we select a P2P overlay in which every peer can act as both client and server at the same time; each peer acting as a server initiates task migration when necessary and shifts the remaining jobs to the fastest device in the same sub-network, i.e. the one that completes its execution first.
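The general migration steps listed earlier in this section can be sketched as follows (an illustrative model of ours, not tied to any particular operating system's migration mechanism):

```python
class Node:
    """Minimal model of a network host holding process states."""
    def __init__(self, name):
        self.name = name
        self.processes = {}  # pid -> process state

def migrate(pid, source, destination):
    # Steps 1-2: the migration request is accepted; the process is halted
    # in the source node and removed from its execution context.
    state = source.processes.pop(pid)
    state["running"] = False
    # Steps 3-5: the state (e.g. address space, open channels) is extracted
    # and transmitted to the destination node.
    # Step 6: the destination imports it into a newly instantiated space.
    destination.processes[pid] = state
    # Step 7: the imported process instance is activated for execution.
    destination.processes[pid]["running"] = True

src, dst = Node("source"), Node("destination")
src.processes[42] = {"address_space": [], "running": True}
migrate(42, src, dst)
print(42 in dst.processes, 42 in src.processes)  # True False
```

The transfer itself is modelled as a dictionary move; a real system would serialize and ship the address space and channel state over the network.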

36.3 SimGrid

SimGrid is a simulation framework for heterogeneous distributed systems. It is primarily used to evaluate and test existing algorithms and protocols in a wide range of distributed network architectures, including volunteer computing, peer-to-peer, fog computing and others. SimGrid is a free library rather than API software with graphical user interfaces (GUIs) or command-line applications; the user must therefore use an IDE to write their own code representing the behaviour of the proposed algorithm or protocol. Users can write Python, C++, or Java code that runs on Linux, macOS, or Windows. In general, three interfaces are used to express a SimGrid application:

1. S4U Interface: SimGrid for You (S4U) is the most recent and powerful SimGrid library, which adds new functions that are critical in most distributed systems, including Cloud, P2P, HPC, IoT, and similar environments.
2. SimGrid Message Passing Interface (SMPI): SMPI is a SimGrid tool for developing MPI applications. SMPI operates in two modes: online, where communication is simulated while computation is performed in real time, and offline, where computations are skipped, making it faster than online but only applicable to applications with consistent execution and communication patterns.
3. The MSG Interface: MSG is an API that lets developers write applications as Concurrent Sequential Processes (CSP) that interact via messages, in order to simplify the use of distributed systems. It was the main API of SimGrid 3, but with SimGrid 4 it is no longer supported and will most likely never be updated. Instead, S4U is the better choice for starting a new distributed-systems project with SimGrid.

SimGrid includes plugins that allow distributed applications to obtain additional metrics that would not be available otherwise. The following plugins are available:

1. Host Energy: Generally speaking, the power consumption of any peer in the network can be divided into two parts: a static part, which represents power usage when the node is idle, turned off or turned on, and a dynamic part, which represents power consumption when the CPU is working. Whereas the static part of the total consumed energy is simple to compute, the dynamic part is linearly proportional to the CPU load [8]. The plugin plugin_host_energy [4] gives the simulator the ability to monitor the energy dissipated by each device: it calculates the power consumed by each node in the network by adding the static and dynamic parts of the consumed power. It uses Formula (36.1) to calculate the total power consumption of a machine i with frequency f, workload w and usage percentage u:

P_{i,f,w} = P_{i}^{static} + P_{i,f,w}^{dynamic} × u.    (36.1)

During modelling, there are four parameters that indicate the amount of consumed power, given as follows:

1. Idle: Wattage when the device is up but not executing any task.
2. Epsilon: Wattage when all cores are idle but the host is on.
3. AllCores: Wattage when all cores are executing the task.
4. Off: Wattage when the device is turned off.

Idle is used for the missing Epsilon value if only two values are provided. For example, in Listing 36.1 we describe the following parameters for a 4-core CPU: Off is 10 W, Idle is 90 W, Epsilon is 100 W and AllCores is 180 W. This is sufficient to calculate the wattage as a function of the number of loaded cores. We used linear extrapolation between two known parameters to estimate the dynamic consumed power as the number of cores increases. Table 36.1 shows the amount of wattage consumed as the number of running cores is gradually increased from 0 (Idle) to 4 (AllCores).


Table 36.1 Wattage specification at each core

No. cores     Wattage (W)   Explanation
0 (idle)      90            Consumed wattage when all cores are idle (Idle)
0 (not idle)  100           Consumed wattage when all cores are idle but the host is on (Epsilon)
1             120           Linear extrapolation between Epsilon and AllCores
2             140           Linear extrapolation between Epsilon and AllCores
3             160           Linear extrapolation between Epsilon and AllCores
4             180           Consumed wattage when all cores are fully functional (AllCores)

When the number of cores is increased by one, we compute the linear extrapolation between Epsilon and AllCores to obtain the slope value that is added to the previous wattage value.

2. Link Energy: The plugin plugin_link_energy computes the energy consumed in the links during the simulation. The energy consumption of a link is directly proportional to its current traffic load. In the example of Listing 36.2 we specify the consumption parameters of a link: the property wattage_range indicates the dissipated energy of the link link_0, where the first value (100) is the dissipated power when the link is on but carries no traffic, and the second value (200) is the dissipated power when the link is fully loaded. The second property, wattage_off, indicates the consumed power when the host is turned off.
3. WiFi Energy: Like plugin_link_energy, the plugin plugin_link_energy_wifi estimates the dissipated energy of WiFi links.
4. Host Load: The plugin_host_load plugin assigns an extension to each peer to store data and places callbacks on signals such as whether the execution is complete, whether the load has increased or decreased, and what to do when the host is suspended or turned off.
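The linear extrapolation used to fill Table 36.1 can be written as a small function (our illustration, using the example values Epsilon = 100 W and AllCores = 180 W for the quad-core CPU of Listing 36.1):

```python
def wattage(n_loaded, epsilon, allcores, n_cores):
    """Dynamic wattage when `n_loaded` of `n_cores` cores are busy:
    linear extrapolation between Epsilon (0 busy) and AllCores (all busy)."""
    slope = (allcores - epsilon) / n_cores  # watts added per loaded core
    return epsilon + slope * n_loaded

for n in range(5):
    print(n, wattage(n, 100, 180, 4))  # 100.0, 120.0, 140.0, 160.0, 180.0
```

The values reproduce the per-core column of Table 36.1 exactly.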

Listing 36.1 Power consumption parameters of a quad-core CPU
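Since the body of Listing 36.1 did not survive extraction, the following is a reconstruction of what such a host description typically looks like in a SimGrid platform file, using the wattage values quoted in the text (the host id and speed are hypothetical):

```xml
<host id="peer-0" speed="5Gf" core="4">
  <!-- Idle : Epsilon : AllCores, in watts -->
  <prop id="wattage_per_state" value="90.0:100.0:180.0"/>
  <!-- Wattage when the host is turned off -->
  <prop id="wattage_off" value="10"/>
</host>
```

The plugin interpolates between Epsilon and AllCores according to the number of loaded cores, as in Table 36.1.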


Table 36.2 Peer configuration based on the number of cores

                  Peak GFLOPS   No. peers (first/    Power consumption parameters (W)
                  per core      second network)      Idle   AllCores   Epsilon   Off
One-core CPU      5             202/74               100    140        120       10
Two-core CPU      5             953/318              100    160        120       10
Four-core CPU     5             242/74               100    200        120       10
Six-core CPU      5             39/13                100    240        120       10
Eight-core CPU    5             64/21                100    280        120       10

Listing 36.2 Power consumption parameters of a connection link
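As with Listing 36.1, the body of Listing 36.2 was lost in extraction; a typical SimGrid link description with the wattage_range values quoted in the text would look as follows (the bandwidth, latency and wattage_off values are hypothetical):

```xml
<link id="link_0" bandwidth="100MBps" latency="10us">
  <!-- Watts at zero traffic load : watts at full load -->
  <prop id="wattage_range" value="100.0:200.0"/>
  <!-- Wattage when the device is turned off -->
  <prop id="wattage_off" value="10.0"/>
</link>
```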

36.4 Network Configuration

We chose a structured P2P network in which each peer represents an actual network device. In this type of network, each peer can act as both a server and a client at the same time; some common tasks can be performed in either mode, such as responding to an inquiring request and performing the allotted work, while other tasks, such as distributing the allotted work, are implemented only in peers that act solely as servers. The network's overlay is a structured P2P overlay, which follows a specific topology that shapes the network's final structure and can be represented by various topologies, such as bus, star or tree. Instead of randomly flooding messages to all peers, following a structured overlay reduces the effort each peer expends in routing to find other peers or resources in the same network. This is accomplished by storing peer-related information in lookup and hash tables accessible to all network peers. Figures 36.2 and 36.3 depict the virtual network overlays of the two networks used in the simulation stage. The network shape follows a structured topology constructed as a result of the real-world epidemic spread of volunteer computing middleware. Peers perform differently because they differ in the number of cores per processor; this ensures the heterogeneity of our network.


Table 36.2 groups the peers based on the number of cores they have; each group is also assigned its own energy consumption model.

36.5 Simulation Results

We ran two different task migration experiments. In the first experiment, we chose a network of 500 peers, and in the second, a network of 1500 peers. In both experiments, the same task migration approach was used: when a super-peer receives a result from a sub-peer and there is still work to be done, it initiates task migration. In this case, the super-peer selects only the fastest peer, i.e. the one that completed its execution first, and assigns it work to execute. We chose a main task of one ExaFLOPS (10^18 flops) to be executed in both networks. For both experiments, we assigned this task to a randomly selected peer that is connected to several sub-peers and also has its own sub-peers. In order to ensure that the main task is distributed evenly across all connected peers, we provide the global scheduler to each peer that can act as a server. The primary goal of this scheduler is to calculate the workload portion of the main task to

Fig. 36.2 Network overlay of 1500 peers


Fig. 36.3 Network overlay of 500 peers

be distributed to each network sub-peer in such a way that all devices exhibit equal effort in the two main metrics: energy use and execution time. This global scheduler is described in [9]; it is based on calculating the maximum computational power of each peer that can serve as a server.
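The core idea of the global scheduler of [9], splitting the task in proportion to each peer's maximum computational power so that all peers finish in roughly the same time, can be sketched as follows (a simplification of ours; peer names and powers are illustrative, and the paper's actual task is 10^18 flops):

```python
def split_workload(total_flops, peak_flops):
    """Give each peer a share of `total_flops` proportional to its peak
    computing power, so that all peers need the same execution time."""
    total_power = sum(peak_flops.values())
    return {peer: total_flops * power / total_power
            for peer, power in peak_flops.items()}

shares = split_workload(100.0, {"fast": 8.0, "slow": 2.0})
print(shares)  # {'fast': 80.0, 'slow': 20.0}
```

A peer with four times the computing power receives four times the work, so both finish together in the ideal case.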

Fig. 36.4 Execution time for each peer when conducting the first experiment of task migration


Fig. 36.5 Power consumption for each peer when conducting the first experiment of task migration

Fig. 36.6 Execution time for each peer when conducting the second experiment of task migration

Figures 36.4 and 36.5 show the execution time and power consumption values, respectively, for the first experiment. While most peers have similar execution times, some peers have higher values than other peers in the same sub-network (under the same super-peer). This is because the fastest peer is selected to execute the unfinished work left behind by peers whose computing availability fell below the threshold, resulting in an unbalanced execution time for some peers and extending the task's overall execution time. Similarly, as shown in Fig. 36.5, power consumption follows the same trend as execution time, with some peers consuming more power than others: because they finished their own work earlier, they were assigned the unfinished work and therefore ran longer. Figures 36.6 and 36.7 show the execution time and power consumption values, respectively, for the 1500 peers that participated in the second experiment. Again, selecting the peers with the fastest computing performance to complete the work left unfinished by peers whose computing availability fell below the threshold results in abnormal execution times for these fastest peers, thus increasing the overall execution time of the task. As with execution time, power consumption follows the same trend, with several peers consuming more power than others because they finished their own work earlier and then spent additional time completing the unfinished work of the slower peers, as shown in Fig. 36.7.


Fig. 36.7 Power consumption for each peer when conducting the second experiment of task migration

Table 36.3 Total execution time and estimated power consumption in both experiments of task migration

                    Total execution time (s)   Total power consumption (GJ)
First experiment    295,535                    18.04
Second experiment   119,329                    21.12

Table 36.3 displays the total execution time and estimated power consumption for the two experiments; the figures include the time wasted due to network link latency. As a result of involving more peers in the computation, the second experiment achieved a 60% reduction in total execution time. However, although the total task was completed in less time in the second experiment, power consumption did not improve: total power consumption increased by 17% compared with the first experiment, again because involving more peers increased the accumulated power consumed.
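The two percentages quoted above follow directly from the values in Table 36.3 and can be checked with a few lines:

```python
t1, t2 = 295_535, 119_329  # total execution time (s)
e1, e2 = 18.04, 21.12      # total power consumption (GJ)

time_reduction = 1 - t2 / t1  # second experiment vs. first
power_increase = e2 / e1 - 1

print(round(time_reduction * 100))  # 60
print(round(power_increase * 100))  # 17
```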

36.6 Conclusion

Simulation plays a major role in developing and evaluating new algorithms in distributed systems because it allows reliable results on a variety of platforms and experiments at a low cost and within a reasonable amount of time, without the need to perform these experiments on real-world distributed systems. Because the Churn effect is one of the most concerning issues in distributed networks, we used the simulation framework SimGrid in this paper to model and simulate two experiments of task migration, which is performed as a response to Churn. Each experiment used a different network size to perform a task of one ExaFLOPS (10^18 flops). In both experiments, the simulation results show how the network responded to Churn, which is reflected in the execution time and energy consumed by those peers that initiate task migration.


References

1. Amoeba: A Distributed Operating System. https://fsd-amoeba.sourceforge.net/amoeba.html/. Accessed 30 Apr 2022
2. Casanova, H.: SimGrid: a toolkit for the simulation of application scheduling. In: Proceedings of the First IEEE/ACM International Symposium on Cluster Computing and the Grid, pp. 430–437 (2001). https://doi.org/10.1109/CCGRID.2001.923223
3. GWA-T-13 Materna. http://gwa.ewi.tudelft.nl/datasets/gwa-t-13-materna. Accessed 1 Mar 2022
4. Heinrich, F.C., Cornebize, T., Degomme, A., Legrand, A., Carpen-Amarie, A., Hunold, S., Orgerie, A.C., Quinson, M.: Predicting the energy-consumption of MPI applications at scale using only a single node. In: 2017 IEEE International Conference on Cluster Computing (CLUSTER), pp. 92–102 (2017). https://doi.org/10.1109/CLUSTER.2017.66
5. Ho, C.Y., Chung, M.C., Yen, L.H., Tseng, C.C.: Churn: a key effect on real-world P2P software. In: 2013 42nd International Conference on Parallel Processing, pp. 140–149 (2013). https://doi.org/10.1109/ICPP.2013.23
6. Ma, R.K., Wang, C.L.: Lightweight application-level task migration for mobile cloud computing. In: 2012 IEEE 26th International Conference on Advanced Information Networking and Applications, pp. 550–557 (2012). https://doi.org/10.1109/AINA.2012.124
7. Maekawa, M.: A sqrt(N) algorithm for mutual exclusion in decentralized systems. ACM Trans. Comput. Syst. 3(2), 145–159 (1985)
8. Orgerie, A.C., de Assunção, M.D., Lefèvre, L.: A survey on techniques for improving the energy efficiency of large-scale distributed systems. ACM Comput. Surv. 46, 1–31 (2013)
9. Saleh, E., Shastry, C.: A new approach for global task scheduling in volunteer computing systems. Int. J. Inf. Technol. (2022). https://doi.org/10.1007/s41870-022-01090-w
10. Smith, P., Hutchinson, N.C.: Heterogeneous process migration: the Tui system. Softw. Pract. Exp. 28, 611–639 (1998)
11. Suen, T., Wong, J.: Efficient task migration algorithm for distributed systems. IEEE Trans. Parallel Distrib. Syst. 3(4), 488–499 (1992). https://doi.org/10.1109/71.149966
12. Zhu, W., Socko, P., Kiepuszewski, B.: Migration impact on load balancing: an experience on Amoeba. SIGOPS Oper. Syst. Rev. 31(1), 43–53 (1997)

Chapter 37

An Approach to Bodo Word Sense Disambiguation (WSD) Using Word2Vec

Subungshri Basumatary, Karmabir Brahma, Anup Kumar Barman and Amitava Nag

Abstract Natural language processing (NLP) is one of the most rapidly proliferating research fields in artificial intelligence and machine learning. NLP covers numerous applications such as sentiment analysis, speech recognition, text summarization and social media analytics. However, one of the significant challenges in developing NLP tools is word ambiguity, i.e., a word can have more than one meaning. The process of determining an ambiguous word's precise meaning in a given context is known as word sense disambiguation (WSD). In this research work, we propose a WSD framework for the Bodo language using Word2Vec. Cosine similarity is used to assess how similar a sentence is to each corpus in order to interpret the ambiguous word. To the best of our knowledge, this is the first work on Bodo WSD.

37.1 Introduction Natural language processing (NLP) is an extremely challenging field of research in computer science that deals with human communication. NLP includes a set of methods and techniques for assisting computers in comprehending, interpreting, and producing human language. Text Analytics, Semantic Parsing, Information Extraction, Text Classification, and Text Summarization are some of the widely S. Basumatary (*) · K. Brahma · A.K. Barman · A. Nag  Department of Computer Science and Engineering, Central Institute of Technology Kokrajhar, BTR, Kokrajhar, Assam, India e-mail: [email protected] K. Brahma e-mail: [email protected] A.K. Barman e-mail: [email protected] A. Nag e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_37


used NLP applications [1]. Since human languages are so varied and context-sensitive, there are many different ambiguities [2]. In NLP, a few words can signify different meanings depending on the context word. Humans are capable of distinguishing it with ease because ambiguous word meanings can be understood from the perspective of the word given [3]. An automatic determination of the correct meaning of words is aided by its perspective and a long-standing NLP research area known as “Word Sense Disambiguation” (WSD). WSD is considered one of the most challenging issues in the field of computational linguistics when it comes to computing the correct meaning of ambiguous words [4, 5]. The WSD approach is used in solving several issues, including sentiment analysis, machine translation, etc. Depending on the context word, different definitions can apply to several multiple meanings of words [3]. “Train” is an example of an ambiguous word; according to WordNet, it has different alternative meanings as given in the examples: Sense1: This train will stop at the next station. Sense2: I train her to take over my job when I retire. The word “train” has a different sense in the sentences. As it has been observed that at the beginning of a sentence, the word “train” indicates an object for travel. The second sentence of the word “train” indicates how to take over the responsibilities of the job. A machine will struggle to understand the meaning of a sentence if a word is ambiguous, but a human being can easily understand it. As a result, it has become a major problem in the field of NLP research. To automatically determine a word's meaning based on context, experts use word sense disambiguation technology. WSD usually determines which sense of a word is being utilized in a sentence when it has more than one meaning [6]. In the last few years, the majority of the research studies have been observed concentrating on using vectors to represent words in a multidimensional space. 
Nowadays, the word embedding model is gaining increasing interest from researchers because of its remarkable capacity to represent words in a low-dimensional vector space [7]. In 2013, Mikolov et al. first introduced the word embedding model that is regarded as the most important development in NLP for feature extraction and language modeling [2]. A word embedding is a persistent representation of words in a specified vector space, and Word2Vec is one of the methods that may be used to create it; CBOW and skip-gram are the two most popular word embedding models [7]. In this paper, we extend the work of Nurifan et al. [3] for Bodo WSD using Word2Vec. In the initial phase, we develop two datasets manually. In the next phase, a preprocessing step reduces the word variants. The Word2Vec model is then used to build the corpora from the preprocessed data, and ambiguous word meanings are generated using the corpora. Cosine similarity is used to determine how similar a sentence is to the first and second corpora; based on the resulting similarity score, the meaning of an ambiguous word is determined. This paper is divided into several sections: related work is described in Sect. 37.2, and the proposed methodology in Sect. 37.3.


The techniques for building a corpus are described in Sect. 37.4. The calculation of the testing method is discussed in Sect. 37.5. The evaluation and discussion of the results are presented in Sect. 37.6, and Sect. 37.7 provides ideas for future work.
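The disambiguation step outlined above, scoring the test sentence against the two sense corpora with cosine similarity and picking the higher score, can be sketched as follows (our illustration; the vectors are stand-ins for the Word2Vec sentence representations, and the sense names are hypothetical):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def pick_sense(sentence_vec, sense_vecs):
    """Return the sense whose corpus vector is most similar to the sentence."""
    return max(sense_vecs,
               key=lambda s: cosine_similarity(sentence_vec, sense_vecs[s]))

senses = {"sense1": [0.9, 0.1, 0.0], "sense2": [0.1, 0.8, 0.3]}
print(pick_sense([0.8, 0.2, 0.1], senses))  # sense1
```

In the full framework, these vectors would be produced by the Word2Vec models trained on the two corpora.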

37.2 Related Works

Many natural language processing (NLP) applications use word embedding techniques because they can capture both the syntactic and semantic aspects of texts from unlabeled data. Word embedding is a crucial method because it enables neural networks to encode the information a word is intended to carry. These models can be utilized as supplementary word features in different learning methods, which makes them helpful for NLP systems. A word embedding technique for NLP applications has been developed by Kumari et al. [7]. Distributed representations of words were first proposed in 1986 and successfully used in language models by Collobert and Weston [8]. Similarly, these models were used for a variety of tasks, including chunking and Named Entity Recognition (NER), by Turian and Ratinov [9]. Bengio and Ducharme [10] carried out research on distributed word representations along with neural networks for probability estimation. Zhong and Ng [11] introduced It Makes Sense (IMS), a supervised learning-based English all-words WSD system. Taghipour and Ng [12] evaluated the performance of word embeddings for word sense disambiguation, an area where a significant quantity of research has been conducted; in 2015, word embeddings based on feedforward neural networks were used for supervised WSD. Mikolov et al. [13] developed other significant work, presenting innovative perspectives on word embeddings that employ neural networks: Continuous Bag-of-Words (CBOW) and Skip-Gram (SG) are two prediction-based techniques for creating a word embedding model. Levy and Goldberg [14] discovered that Word2Vec implicitly factorizes the word–context matrix, in which each entry is the point-wise mutual information (PMI) of a word and context pair.
Lund and Burgess [15] developed a matrix factorization method to determine word semantic similarity and showed the relationship between words and context words by constructing a co-occurrence matrix. Rohde et al. [16] introduced an upgraded semantic similarity model based on lexical co-occurrence. Pennington et al. [17] focus on Global Vectors (GloVe), a representation of the global context of words: researchers can make use of the statistics of word co-occurrences in context in order to capture the whole context. To overcome the shortcomings of both strategies, the GloVe model, a hybrid combining context window-based analysis with matrix factorization, was created. Bojanowski et al. [18] were the first to propose using morphological information when determining semantic links between words. Gaikwad and Haribhakta [19] carried out research on fastText and


adaptive models for Hindi word embedding. Joulin et al. [20] proposed fastText word embeddings with n-gram features and showed performance superior to deep learning methods. Several important studies on sentence embeddings, including Sent2Vec, were developed by Pagliardini et al. [21]. Moreover, Hill et al. [22] introduced unlabeled distributed representations of sentences as FastSent. Later, Le [23] proposed distributed representations of sentences as DocVec. Rothe and Schütze [24] worked to enhance the accuracy of WSD models and developed a technique called AutoExtend that learns embeddings for lexemes and synonym sets from regular word embeddings. Yadav et al. [25] made the initial effort to create an efficient WSD for an Indian language, making use of the Hindi WordNet lexical database created by IIT Bombay. Tandon [26] proposed a technique that measures the overlap between a sense's dictionary definition and the rest of the sentence containing the ambiguous word, choosing the sense with the greatest overlap. Singh and Siddiqui [27] examined the impact of context window size, stop word removal, and stemming for Hindi WSD; they utilized Hindi WordNet to determine the correct meaning of ambiguous Hindi words and employed an overlapping strategy between the target words and their sense definitions. Kumari and Lobiyal [2] proposed a Lesk technique extension for Hindi WSD based on the definitional overlap between the context and the sense: by collecting terms from glosses, synonyms, hypernyms, sample sentences, etc., the sense meaning is expanded, and a bag of context is created using words close to the ambiguous word. Kowsher et al. [28] proposed a word embedding model for the Bangla language.

37.3 Proposed Methodology

The primary aim of this research work is to create datasets and utilize them as a resource to resolve the WSD issue.

37.3.1 Architecture for the Proposed Method

The overall architecture of the proposed WSD method is illustrated in Fig. 37.1. The two corpora are taken as input for data preprocessing. After the preprocessing techniques (punctuation removal, tokenization, and stop word removal) are applied, a Word2Vec model is created, and from this model a projection of the preprocessed data is produced.

37  An Approach to Bodo Word Sense Disambiguation (WSD) …


Fig. 37.1  Overall architecture for the proposed methods

37.3.1.1 Data Preprocessing

The sentences in the corpus are extremely diverse and are preprocessed before applying Word2Vec. The preprocessing architecture requires three steps [3]:
1. Punctuation removal: For developing the corpora, we create the words and test them using our testing data. As a result, we eliminate punctuation marks such as ',', '।', '(', ')', '-', '……', '.' from our dataset [28].
2. Tokenization: To make the input easier to process, we tokenize sentences and break them into words. For example, "आंनि आदैयाव दुखु मोनदों" (I got hurt in my legs) after tokenization becomes: "आंनि", "आदैयाव", "दुखु", "मोनदों" [28].
3. Stop word removal: Stop words are words that occur frequently in sentences but carry no meaning when they are used to build a corpus. For example, आरो (and), एबा (or), जेरैनों (who's), सोर (who), etc., do not give the context of the sentences any true meaning [28] (Table 37.1).
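The three preprocessing steps can be sketched in a few lines of Python; the punctuation set and stop-word list below contain only the examples given in the text, so a real pipeline would use fuller lists:

```python
# Minimal sketch of the three preprocessing steps described above.
# PUNCTUATION and STOP_WORDS hold only the examples from the text.

PUNCTUATION = {",", "।", "(", ")", "-", "……", ".", "‘", "’"}
STOP_WORDS = {"आरो", "एबा", "जेरैनों", "सोर"}  # and, or, who's, who

def preprocess(sentence: str) -> list[str]:
    # 1. Remove punctuation characters.
    for p in PUNCTUATION:
        sentence = sentence.replace(p, " ")
    # 2. Tokenize: split the sentence into words.
    tokens = sentence.split()
    # 3. Remove stop words.
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("आंनि आदैयाव दुखु मोनदों।"))
# -> ['आंनि', 'आदैयाव', 'दुखु', 'मोनदों']
```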

Table 37.1  Result of preprocessing steps

Sentence | Input | Output
1 | मादैनि फिसाज्लाया दिनै हाथाइनि डाक्टरनाव थांदोंमोन (Today aunt's son has visited the dentist) | मादै फिसाज्लाया दिनै हाथाइडाक्टरनाव थांदों (Today aunt son visit dentist)
2 | दिनै गावदांनि बयबो लोगोफोरा गसाइगावहाथाइयाव फैयदोंमोन (Today all the friends of Gaodang have visited the Gossaigaon market) | दिनै गावदां बयबो लोगो गसाइगावहाथाइ फैयदों (Today friend Gaodang visit Gossaigaon market)


37.4 Embedding Techniques for Building Corpora

37.4.1 Word Embedding

Word embedding models are one of the prominent topics in NLP research and industry because they can generate vector spaces in which linguistic information is retained and accessible to later applications [4]. The idea behind these models is to use the distributed-representation paradigm to map related words to neighboring points. In other words, words that appear in related contexts have related vector representations, and the geometric proximity between them indicates how closely they are related [29].

37.4.1.1 Word2Vec

Mikolov et al. proposed a well-known word-based model known as Word2Vec [29], which is the most popular and commonly used word embedding model [28]. Word2Vec is a prediction model created using deep learning techniques. It is an unsupervised approach that takes a huge amount of unlabeled textual data as input, creates a vocabulary of every term that occurs, and then turns that vocabulary into embeddings in a vector space, one for each word. Words occurring in alike contexts receive similar word embeddings and similar meanings [2]. Word2Vec uses two well-known, complementary models: (1) CBOW and (2) Skip-Gram [2]. Figures 37.2 and 37.3 illustrate the CBOW and Skip-Gram models for Word2Vec word embedding, which are discussed with an example in the subsequent sub-sections.

Fig. 37.2  CBOW model of Bodo text


Fig. 37.3  Skip-gram model of Bodo text

37.4.1.2 Continuous Bag-of-Words (CBOW) Model

The Continuous Bag-of-Words (CBOW) model is widely used in language processing tasks to convert words into word embeddings. In contrast to skip-gram, it predicts a target word based on the context of certain provided words given as input, using continuous representations of the context distributions. In CBOW, a predefined window is created over a word sequence, and the model uses a log-linear classifier to predict the center word of the window based on the past and future words [2, 28]. For example: "दिनै आंनि हाथाया जीबीद सादों" (Today my teeth are paining too much). In this example, y(t) indicates the target word and y(t − 2), …, y(t + 2) indicate the context words. The mathematical formulation is given below:

P0 = (1/T) Σ_{t=1}^{T} log p(y_t | y_{t−n}, …, y_{t−1}, y_{t+1}, …, y_{t+n}).    (37.1)

Here, P0 denotes the average log probability of the target words; the technique uses a context window of size n at each timestamp t, where n is the number of words before and after the target word y_t.

37.4.1.3 Skip-Gram Model

The skip-gram model uses unsupervised learning techniques to determine how semantically similar two words are, relying on the given context. From the specified target word, the skip-gram model identifies the context words around it [2, 28]. For example: "दिनै आंनि हाथाया जीबीद सादों" (Today my teeth are paining too much). Here, y(t) indicates the target word and y(t − 2), …, y(t + 2) indicate the context words. The model maximizes the average logarithmic probability given by the following equation [2].


P0 = (1/T) Σ_{t=1}^{T} Σ_{−n ≤ k ≤ n, k ≠ 0} log p(y_{t+k} | y_t).    (37.2)

In the skip-gram model, D is the set of all possible target-context pairs, y_{t+k} is the context word, y_t is the target word, and V is the vocabulary. The goal of the model is to maximize the probability of identifying each context word given its target word over all training target-context pairs:

Σ_{(i,j)∈D} log p(y_j | y_i).    (37.3)
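As an illustration of how the training set D is formed, the following sketch (function name is ours, not from the paper) enumerates all target-context pairs of a tokenized sentence for a window size n:

```python
# Enumerate the target-context training pairs D used by the skip-gram
# objective: for each position t, pair y_t with every word within n
# positions of it, skipping offset 0 (a word is not its own context).

def skipgram_pairs(tokens: list[str], n: int = 2) -> list[tuple[str, str]]:
    pairs = []
    for t, target in enumerate(tokens):
        for k in range(-n, n + 1):
            if k == 0:
                continue
            if 0 <= t + k < len(tokens):
                pairs.append((target, tokens[t + k]))
    return pairs

tokens = ["दिनै", "आंनि", "हाथाया", "जीबीद", "सादों"]
pairs = skipgram_pairs(tokens, n=2)
print(len(pairs))  # 14 pairs for a 5-word sentence with n = 2
```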

37.5 Calculation of the Testing Procedure

A corpus was created manually for testing purposes, and the same preprocessing was applied to the testing datasets.

37.5.1 Cosine Similarity

Cosine similarity is the cosine of the angle between two vectors; the metric represents how similar two texts are to one another, irrespective of their dimensions. Cosine similarity produces an output in the interval between −1 and 1 [3], and it has an inverse relationship with the distance between the vectors. The formula for the cosine similarity between two vectors M and N is

Cosine similarity = cos(θ) = M · N / (||M|| ||N||),    (37.4)

cos(M, N) = Σ_{j=1}^{n} M_j N_j / (√(Σ_{j=1}^{n} M_j²) √(Σ_{j=1}^{n} N_j²)).    (37.5)
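Equation (37.5) translates directly into code; a minimal sketch:

```python
import math

def cosine_similarity(m: list[float], n: list[float]) -> float:
    """Eq. (37.5): dot(M, N) / (||M|| * ||N||)."""
    dot = sum(mj * nj for mj, nj in zip(m, n))
    norm_m = math.sqrt(sum(mj * mj for mj in m))
    norm_n = math.sqrt(sum(nj * nj for nj in n))
    return dot / (norm_m * norm_n)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))   # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   # 0.0 (orthogonal)
print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # -1.0 (opposite)
```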

37.5.2 Sentence Similarity

For sentence similarity, every word in a sentence except the ambiguous word is compared, using cosine similarity, with the ambiguous word present in the corpus. The ambiguous word itself, and any word of the sentence that is not present in the corpus, is assigned a value of 0. Using cosine similarity, the words of a sentence are then scored as [3]


Sentence similarity = (1/n) Σ_{j=1}^{n} c_j.    (37.6)

Here, c_j stands for the cosine similarity of the j-th word of the sentence with the ambiguous term, and n stands for the number of words in the sentence. Figures 37.4 and 37.5 show the outcome in a 2D chart for corpus1 and corpus2 using the Word2Vec model, where the X-axis represents the vector1 value and the Y-axis represents the vector2 value, with window size = 2.

37.6 Evaluation and Discussion of Results

The sentence similarity score is computed to determine the intended sense of an ambiguous word in a sentence: the sense is assigned from corpus one if the sentence similarity score against corpus one is higher than against corpus two, and vice versa. Table 37.2 shows the calculation for the preprocessed input sentence1, "मादैनि फिसाज्लाया दिनै हाथाइनि डाक्टरनाव थांदोंमोन" (Today aunt's son has visited the dentist), where corpus1 represents "हाथाइ" (teeth). In the same manner, Table 37.3 shows the preprocessed input sentence2, "दिनै गावदांनि बयबो लोगोफोरा गसाइगावहाथाइयाव फैयदोंमोन" (Today all the friends of Gaodang have visited the Gossaigaon market), where corpus2 represents "हाथाइ" (market). To assess this model precisely, we used the Bodo datasets, divided into training and testing sets, and applied the Word2Vec technique along with pretrained skip-gram models. Tables 37.2 and 37.3 present the scores for the testing data, i.e., the evaluation of the results obtained through cosine similarity.
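The decision rule described above can be sketched as follows, using the per-word similarity values of Table 37.2 for sentence1; Eq. (37.6) is applied with 0 for the ambiguous word and for words absent from a corpus, and the averages come out close to the 0.136 and 0.081 reported in Table 37.2:

```python
def sentence_similarity(scores: list[float]) -> float:
    # Eq. (37.6): average of per-word cosine similarities; the ambiguous
    # word itself and words absent from the corpus contribute 0.
    return sum(scores) / len(scores)

# Per-word similarities of sentence1 against each corpus (Table 37.2).
corpus1 = [0.18495189, 0.190039, 0.13545047, 0.0, 0.17953852, 0.13160141]
corpus2 = [0.082451, 0.1651276, 0.17643221, 0.0, 0.0, 0.063112]

s1, s2 = sentence_similarity(corpus1), sentence_similarity(corpus2)
sense = "teeth (corpus 1)" if s1 > s2 else "market (corpus 2)"
print(round(s1, 3), round(s2, 3), sense)
```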

Fig. 37.4  Projection model for teeth (partially)


Fig. 37.5  Projection model for the market (partially)

Table 37.2  Cosine similarity result for sentence1: similarity of each word of sentence1 with हाथाइ (teeth), in corpus 1 and in corpus 2

Word from sentence1 | In corpus 1 | In corpus 2
मादै (aunt) | 0.18495189 | 0.082451
फिसाज्ला (son) | 0.190039 | 0.1651276
दिनै (today) | 0.13545047 | 0.17643221
हाथाइ (teeth) | 0 (ambiguous) | 0 (ambiguous)
डाक्टरनाव (doctor) | 0.17953852 | 0 (not in corpus 2)
थांदों (going) | 0.13160141 | 0.063112
Avg. sentence similarity | 0.136 | 0.081

Table 37.3  Cosine similarity result for sentence2: similarity of each word of sentence2 with हाथाइ (market), in corpus 1 and in corpus 2

Word from sentence2 | In corpus 1 | In corpus 2
दिनै (today) | 0.13545047 | 0.15686543
गावदां (gaodang) | 0 (not in corpus 1) | 0.132763
बयबो (everyone) | 0.139231 | 0.10919978
लोगो (friend) | 0.11860887 | 0.119178
गसाइगाव (gossaigaon) | 0 (not in corpus 1) | 0.18965437
हाथाइ (market) | 0 (ambiguous) | 0 (ambiguous)
फैयदों (came) | 0 (not in corpus 1) | 0 (not in corpus 2)
Avg. sentence similarity | 0.065 | 0.1007


37.7 Conclusion and Future Work

In this research work, we presented the use of Word2Vec word embedding techniques to build corpora for Bodo text. The presented approach involves two stages: developing a word embedding model from the corpora, and scoring sentence similarity against each corpus. The result using cosine similarity for sentence1 is higher against corpus1 than against corpus2; similarly, the score for sentence2 is lower against corpus1 than against corpus2. From the above experiment, we obtained average sentence similarity scores of 0.136 and 0.081 for sentence1 and 0.065 and 0.1007 for sentence2. In the future, we intend to extend the datasets for better performance.

References
1. Kamath, U., Liu, J., Whitaker, J.: Deep Learning for Natural Language Processing (NLP) and Speech Recognition (2019). https://doi.org/10.1007/978-3-030-14596-5
2. Kumari, A., Lobiyal, D.K.: Efficient estimation of Hindi WSD with distributed word representation in vector space. J. King Saud Univ. Comput. Inf. Sci. (2021). https://doi.org/10.1016/j.jksuci.2021.03.008
3. Nurifan, F., Sarno, R., Wahyuni, C.S.: Developing corpora using word2vec and Wikipedia for word sense disambiguation. Indonesian J. Electr. Eng. Comput. Sci. 12(3), 1239–1246 (2018). https://doi.org/10.11591/ijeecs.v12.i3.pp1239-1246
4. Duarte, J.M., Sousa, S., Milios, E., Berton, L.: Deep analysis of word sense disambiguation via semi-supervised learning and neural word representations. Inf. Sci. 570, 278–297 (2021). https://doi.org/10.1016/j.ins.2021.04.006
5. Nithyanandan, S., Raseek, C.: Deep learning models for word sense disambiguation: a comparative study. SSRN Electron. J. (2019). https://doi.org/10.2139/ssrn.3437615
6. Husein Wattiheluw, F., Sarno, R.: Developing word sense disambiguation corpuses using Word2vec and Wu Palmer for disambiguation. In: Proceedings of the 2018 International Seminar on Application for Technology of Information and Communication: Creative Technology for Human Life, iSemantic 2018, 244–248 (2018). https://doi.org/10.1109/ISEMANTIC.2018.8549843
7. Kumari, A.: Word2vec's distributed word representation for Hindi word sense disambiguation, 325–335 (2020)
8. Collobert, R., Weston, J.: A unified architecture for natural language processing: deep neural networks with multitask learning. In: Proceedings of the 25th International Conference on Machine Learning, 160–167 (2008)
9. Turian, J., Ratinov, L., Bengio, Y.: Word representations: a simple and general method for semi-supervised learning. In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 384–394 (2010)
10. Bengio, Y., Ducharme, R., Vincent, P.: A neural probabilistic language model (short version). Adv. Neural Inf. Process. Syst. (2001). https://proceedings.neurips.cc/paper/2000/file/728f206c2a01bf572b5940d7d9a8fa4c-Paper.pdf
11. Zhong, Z., Ng, H.T.: It makes sense: a wide-coverage word sense disambiguation system for free text. In: ACL 2010, 48th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, 78–83 (2010)


12. Taghipour, K., Ng, H.T.: Semi-supervised word sense disambiguation using word embeddings in general and specific domains. In: NAACL HLT 2015, Proceedings of the Conference, 314–323 (2015). https://doi.org/10.3115/v1/n15-1035
13. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: 1st International Conference on Learning Representations, ICLR 2013, Workshop Track Proceedings, 1–12 (2013)
14. Levy, O., Goldberg, Y.: Dependency-based word embeddings (n.d.)
15. Lund, K., Burgess, C.: Producing high-dimensional semantic spaces from lexical co-occurrence. Behav. Res. Methods Instrum. Comput. 28(2), 203–208 (1996). https://doi.org/10.3758/BF03204766
16. Rohde, D.L.T., Gonnerman, L.M., Plaut, D.C.: An improved model of semantic similarity based on lexical co-occurrence. Cognitive Sci. 1–33 (2009). https://pdfs.semanticscholar.org/73e6/351a8fb61afc810a8bb3feaa44c41e5c5d7b.pdf
17. Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532–1543 (2014)
18. Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching word vectors with subword information. Trans. Assoc. Comput. Linguist. 5, 135–146 (2017). https://doi.org/10.1162/tacl_a_00051
19. Gaikwad, V., Haribhakta, Y.: Adaptive GloVe and fastText model for Hindi word embeddings. In: ACM International Conference Proceeding Series, 175–179 (2020). https://doi.org/10.1145/3371158.3371179
20. Joulin, A., Grave, E., Bojanowski, P., Mikolov, T.: Bag of tricks for efficient text classification. In: 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Proceedings of Conference, 2, 427–431 (2017). https://doi.org/10.18653/v1/e17-2068
21. Pagliardini, M., Gupta, P., Jaggi, M.: Unsupervised learning of sentence embeddings using compositional n-gram features. In: NAACL HLT 2018, Proceedings of the Conference, 1, 528–540 (2018). https://doi.org/10.18653/v1/n18-1049
22. Hill, F., Cho, K., Korhonen, A.: Learning distributed representations of sentences from unlabelled data. In: NAACL HLT 2016, Proceedings of the Conference, 1367–1377 (2016). https://doi.org/10.18653/v1/n16-1162
23. Le, Q., Mikolov, T.: Distributed representations of sentences and documents. In: Proceedings of the 31st International Conference on Machine Learning, 1188–1196 (2014)
24. Rothe, S., Schütze, H.: AutoExtend: extending word embeddings to embeddings for synsets and lexemes. In: ACL-IJCNLP 2015, Proceedings of the Conference, 1, 1793–1803 (2015). https://doi.org/10.3115/v1/p15-1173
25. Yadav, P., Husain, M.S.: Study of Hindi word sense disambiguation based on Hindi WordNet. IJRASET, ISSN: 2321-9653, 2(V) (2014)
26. Tandon, R.: Word sense disambiguation using Hindi WordNet, 1–3 (2009)


27. Singh, S., Siddiqui, T.J.: Evaluating effect of context window size, stemming and stop word removal on Hindi word sense disambiguation. In: Proceedings of the 2012 International Conference on Information Retrieval and Knowledge Management, CAMP'12, 1–5 (2012). https://doi.org/10.1109/InfRKM.2012.6204972
28. Kowsher, M., Uddin, M.J., Tahabilder, A., Prottasha, N.J., Ahmed, M., Alam, K.M.R., Sultana, T.: BnVec: towards the development of word embedding for Bangla language processing. Int. J. Eng. Technol. 10(2), 95 (2021). https://doi.org/10.14419/ijet.v10i2.31538
29. Hasni, S., Faiz, S.: Word embeddings and deep learning for location prediction: tracking Coronavirus from British and American tweets. Soc. Netw. Anal. Min. 11(1), 1–20 (2021). https://doi.org/10.1007/s13278-021-00777-5

Chapter 38

Effect of Ambient Conditions on Energy and Exergy Efficiency of Combined Cycle Power Plant Pravin Gorakh Maske, Vartika Narayani Srinet, and A. K. Yadav

Abstract Exergy analysis acts as a powerful tool for clearly distinguishing between energy losses to the surroundings and irreversibilities in the system. This paper evaluates the effect of ambient conditions on the energy and exergy efficiency of a 50 MW capacity combined cycle power plant. To carry out the performance analysis, a code was developed in the Python language, and the results were first validated against the component-wise exergy efficiency data of Ersayin and Ozgener (Renew Sustain Energ Rev. 43:832–842, 2015). For ambient temperature varying from 17 to 42 °C, the energy efficiency reduced from 64.1 to 54.5%, while the overall exergy efficiency of the plant reduced from 57.5 to 48%. The rise in ambient temperature increased the specific volume of the inlet air to the gas turbine unit and, subsequently, the compressor power consumption. It also increased the heat recovery steam condenser back pressure from 7260 to 10,520 Pa. Although the steam quality at the exit of the low pressure (LP) turbine improved from 0.881 to 0.8925, the rise in back pressure reduced the LP turbine output and the energy-exergy efficiency of the plant. Based on the exergy analysis of the plant, it is found that the combustion chamber had the highest exergy destruction rate among all the system components.

Nomenclature

ex      Exergy
tamb    Ambient temperature
h       Heat transfer coefficient
Tsc     Saturation temperature
Pb      Back pressure at exit
Δt      Temperature difference of cooling water

P. G. Maske · V. N. Srinet · A. K. Yadav (B) Department of Mechanical Engineering, Motilal Nehru National Institute of Technology Allahabad, Prayagraj 211004, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_38


38.1 Introduction

The gap between energy supply and demand is rising, and studies reveal that world electricity demand is growing at an average rate of 6% annually. According to the International Energy Agency [2], 42% of electricity supply comes mainly from coal power plants. The latest energy policies encourage more-efficient power generation systems with minimum wastage of energy and boost the dependence on renewable energy sources. Woudstra et al. [3] evaluated alternative designs of combined cycle plants and carried out exergy destruction calculations for each component. Petrakopoulou et al. [4] analyzed a combined cycle power plant by splitting the exergy destruction term into avoidable, unavoidable, endogenous, and exogenous parts. Bagdanavicius et al. [5] identified the most exergy-efficient community energy supply system with the lowest exergy cost of electricity. Bassily [6] found that the irreversibility of the combined cycle heat recovery steam generator (HRSG) reduced with a decrease in the pinch-point temperature difference and the steam drum mass flow rates. Kaviri et al. [7] investigated the effect of mass flow rate and inlet gas temperature on the efficiency of the CCPP; the increase in inlet temperature had a negative effect on the efficiency. Ahmadi et al. [8] carried out an exergo-environmental analysis of a polygeneration system and compared the results with a combined cycle power plant. Kaushik et al. [9] carried out a detailed review of different studies on thermal power plants fired by natural gas and coal. Popli et al. [10] investigated the integration of a trigeneration scheme within a CCPP and found significant economic benefits with the implementation. Reddy et al. [11] performed energy, exergy, and thermo-economic analyses of conventional and solar-collector-assisted CCPPs. As the ambient temperature decreases, the temperature of the cooling water in the cooling tower is also reduced, which has a positive effect on the overall efficiency of the combined cycle plant [12].

In the present study, a Python code was developed to assess the performance of a 50 MW combined cycle power plant based on energy and exergy efficiency. The effect of ambient conditions on the back pressure at the LP turbine and on plant efficiency was modeled.

38.2 Material and Methodology

The code carried out energy and exergy analysis based on the following assumptions:
• Flow was considered to be in steady state.
• Heat transfer between the components of the plant and the environment was negligible.
• Atmospheric temperature and pressure were taken as 295 K and 101.325 kPa.
• Specific heats of air and gas were constant, and the lower heating value of the fuel was taken as 47,100 kJ/kg.


Figure 38.1 shows the general layout of the 50 MW CCPP selected for the present investigation. The technical specification of the CCPP is given in Table 38.1. To calculate the exergy destruction rate and exergy efficiency, the following equations are used [13]:

e_x,heat transfer = (1 − T_amb/T_i) × Q̇_i,    (38.1)

e_x = e_x,physical + e_x,chemical.    (38.2)

To calculate the physical exergy of water and steam, the equation below was used:

e_x,i = ṁ × {(h_i − h_amb) − (s_i − s_amb) × T_amb}.    (38.3)

For an ideal gas,

e_x,i = ṁ × {C_pg × [(T_i − T_amb) − T_amb × ln(T_i/T_amb)] + R × T_amb × ln(P_i/P_amb)}.    (38.4)

The heat recovery system acts as a heat exchanger in which hot gases exchange heat with water to form steam. The energy balance for the HRSG is as follows:

Fig. 38.1 General layout of CCPP
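Under the stated assumptions, Eqs. (38.3) and (38.4) translate directly into code; the stream values in the example are illustrative assumptions, not plant data:

```python
import math

def exergy_steam(m_dot, h, h_amb, s, s_amb, t_amb):
    # Eq. (38.3): physical exergy flow of a water/steam stream
    # (kW, with h in kJ/kg, s in kJ/(kg K), m_dot in kg/s, t_amb in K).
    return m_dot * ((h - h_amb) - t_amb * (s - s_amb))

def exergy_ideal_gas(m_dot, cp, t, t_amb, p, p_amb, r=0.287):
    # Eq. (38.4): physical exergy flow of an ideal-gas stream;
    # r defaults to the gas constant of air in kJ/(kg K).
    return m_dot * (cp * ((t - t_amb) - t_amb * math.log(t / t_amb))
                    + r * t_amb * math.log(p / p_amb))

# Illustrative stream (not plant data): hot gas at 800 K, ambient pressure.
print(round(exergy_ideal_gas(50.0, 1.005, 800.0, 295.0, 101.325, 101.325), 1), "kW")
```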


Table 38.1 Technical specifications of CCPP

Specifications of gas turbine
1. Type of plant: GE LM 6000 PC
2. Capacity of power plant: 50 MW
3. Compression ratio: 30.3
4. Lower heating value of fuel: 47,100 kJ/kg
5. Isentropic efficiency of gas turbine: 0.88

Specifications of condenser
6. Number of tubes (n_t): 9000
7. Number of water passages: 1
8. Water velocity (v_w): 1.950 m/s
9. Temperature difference: 10 °C
10. Factor influencing condenser steam load (ϕ_δ): 1
11. Tube diameter (D) and length (L): 0.024 m, 5 m

ṁ_w,HP × (h_in,HP − h_out,HP) = ṁ_g × C_pg × (T_Out,Gt − T_f),    (38.5)

where T_f is the gas temperature leaving the low pressure HRSG.

η_I,ccpp = (Ẇ_net,gas cycle + Ẇ_net,steam cycle) / Q_added,    (38.6)

η_II,ccpp = (Ẇ_net,gas cycle + Ẇ_net,steam cycle) / E_x,chemical.    (38.7)
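Equations (38.6) and (38.7) can be sketched as a small helper; the heat input and fuel exergy below are assumed values, chosen only so that the example reproduces the design-point efficiencies quoted in the abstract (64.1% and 57.5%):

```python
def ccpp_efficiencies(w_gas, w_steam, q_added, ex_chemical):
    # Eqs. (38.6)-(38.7): first-law (energy) and second-law (exergy)
    # efficiencies of the combined cycle; all quantities in consistent
    # units, e.g. MW.
    w_net = w_gas + w_steam
    return w_net / q_added, w_net / ex_chemical

# Illustrative split of the 50 MW output between the gas and steam cycles;
# heat input and chemical exergy of the fuel are assumed values.
eta_i, eta_ii = ccpp_efficiencies(33.0, 17.0, 78.0, 87.0)
print(f"energy efficiency = {eta_i:.1%}, exergy efficiency = {eta_ii:.1%}")
```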

The heat transfer coefficient inside the condenser tubes was calculated by Berman's method [14]:

A_s = π × D × L × n_t,    (38.8)

ṁ_w,cooling = (π × n_t × D² × ρ × v_w) / 4,    (38.9)

h = 3500 × β × (1.1 × v_w / D^0.25)^x × [1 − (0.42 × β^0.5 / 1000) × (35 − T_w,i)²] × ϕ_z × ϕ_δ,    (38.10)

x = 0.12 × β × (1 + 0.15 × T_w,i),    (38.11)

ϕ_z = 1 + ((n − 2)/10) × (1 − T_w,i/35).    (38.12)
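A sketch of Eqs. (38.8)-(38.12) using the condenser geometry of Table 38.1; the correlation in `condenser_htc` follows the equations as reconstructed above, and β is left as a user-supplied empirical coefficient:

```python
import math

def condenser_htc(beta, v_w, d, t_wi, n_passes, phi_delta=1.0):
    # Eqs. (38.10)-(38.12), Berman-type correlation: beta is an empirical
    # coefficient, v_w water velocity (m/s), d tube diameter (m), t_wi
    # inlet cooling-water temperature (deg C).
    x = 0.12 * beta * (1 + 0.15 * t_wi)                    # Eq. (38.11)
    phi_z = 1 + (n_passes - 2) / 10 * (1 - t_wi / 35)      # Eq. (38.12)
    return (3500 * beta * (1.1 * v_w / d ** 0.25) ** x
            * (1 - (0.42 * math.sqrt(beta) / 1000) * (35 - t_wi) ** 2)
            * phi_z * phi_delta)                           # Eq. (38.10)

# Geometry of Table 38.1: 9000 tubes, D = 0.024 m, L = 5 m, v_w = 1.95 m/s.
area = math.pi * 0.024 * 5 * 9000                          # Eq. (38.8), m^2
m_cool = math.pi * 9000 * 0.024 ** 2 * 1000 * 1.95 / 4     # Eq. (38.9), kg/s (rho = 1000)
print(round(area, 1), "m^2,", round(m_cool, 1), "kg/s")
```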


The following equations were used to calculate the saturation temperature and back pressure in the condenser of the bottoming cycle:

Δt = T_w,o − T_w,i,    (38.13)

δt = Δt / (e^((h × A)/(ṁ_w,cooling × C_pw)) − 1),    (38.14)

T_sc = Δt + δt + T_w,i,    (38.15)

P_b = ((T_sc + 100)/57.66)^7.46 × 9.8.    (38.16)
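Chaining Eqs. (38.13)-(38.16) gives the condenser back pressure from the cooling water temperatures; the heat transfer coefficient and flow values below are illustrative assumptions, not the plant's operating data:

```python
import math

def condenser_back_pressure(t_wi, t_wo, h, area, m_cool, cp_w=4.187):
    # Eqs. (38.13)-(38.16): terminal temperature difference, saturation
    # temperature (deg C) and back pressure (Pa); h in kW/(m^2 K),
    # area in m^2, m_cool in kg/s, cp_w in kJ/(kg K).
    dt = t_wo - t_wi                                             # Eq. (38.13)
    delta_t = dt / (math.exp(h * area / (m_cool * cp_w)) - 1.0)  # Eq. (38.14)
    t_sc = dt + delta_t + t_wi                                   # Eq. (38.15)
    p_b = ((t_sc + 100.0) / 57.66) ** 7.46 * 9.8                 # Eq. (38.16)
    return t_sc, p_b

# Illustrative case: cooling water heated from 20 to 30 deg C, with an
# assumed heat transfer coefficient of 6.85 kW/(m^2 K) and the surface
# area / cooling flow computed from the Table 38.1 geometry.
t_sc, p_b = condenser_back_pressure(20.0, 30.0, 6.85, 3392.9, 7939.4)
print(round(t_sc, 1), "deg C,", round(p_b), "Pa")
```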

38.3 Results and Discussion

Figure 38.2 shows the validation of the code output against the results of Ersayin and Ozgener [1]. The energy and exergy efficiency predicted by the code were in good agreement with the published results for the 50 MW CCPP. Figure 38.3 shows the variation of steam quality with back pressure. With an increase in ambient temperature, the inlet water temperature to the condenser increases. This results in a rise in back pressure (inside the condenser) and in the steam quality at the exit of the LP turbine. Figure 38.4 shows the variation of energy and exergy efficiency with steam quality. Both efficiencies decreased with the increase in steam quality at the exit of the LP turbine due to the high enthalpy loss at the exit. The variations of the first and second law efficiencies with ambient conditions are shown in Fig. 38.5a, b, respectively. For ambient temperature varying from 17 to 42 °C, the energy efficiency reduced from 64.1 to 54.5%, while the overall exergy efficiency of the plant reduced from 57.5 to 48%. The ambient temperature rise led to an increase in the specific volume of the inlet air to the gas turbine unit, and subsequently the compressor power consumption increased. Figure 38.6 shows the component-wise exergy destruction for the 50 MW CCPP. The irreversibility due to the temperature difference inside the combustion chamber caused the maximum exergy destruction.

38.4 Conclusion

To analyze the performance of the combined cycle power plant, both the energy (thermal) and exergy efficiencies are considered in the present investigation. With the rise in ambient temperature from 17 to 42 °C, the energy efficiency reduced from

Fig. 38.2 Validation of code with published results (plant energy and exergy efficiencies; present work compared with Ersayin and Ozgener [1])

Fig. 38.3 Variation of steam quality with back pressure (Pa)


Fig. 38.4 Variation of efficiency with steam quality

Fig. 38.5 First and second law efficiency variations with cooling water inlet temperature, for ambient temperatures from 17 to 42 °C: (a) first law efficiency, (b) second law efficiency

64.1 to 54.5%, and the exergy efficiency of the plant reduced from 57.5 to 48%. The ambient temperature rise led to an increase in the specific volume of the inlet air to the gas turbine unit, and subsequently the compressor power consumption increased. The rise in ambient temperature also increased the heat recovery steam condenser back pressure. Although the steam quality at the exit of the low pressure (LP) turbine improved from 0.881


Fig. 38.6 Percentage of component wise exergy destruction

to 0.8925, the rise in back pressure reduced the LP turbine output and the energy-exergy efficiency of the plant. Based on the exergy analysis of the plant, it is found that the combustion chamber had the highest exergy destruction rate among all the system components. Hence, exergy analysis is a suitable and efficient method to identify the types, magnitudes, and locations of irreversibilities in a thermodynamic system.

References
1. Ersayin, E., Ozgener, L.: Performance analysis of combined cycle power plants: a case study. Renew. Sustain. Energ. Rev. 43, 832–842 (2015). https://doi.org/10.1016/j.rser.2014.11.082
2. International Energy Agency, Nuclear Energy Agency, Organisation for Economic Co-operation and Development: Projected Costs of Generating Electricity. Report, ISBN 978-92-64-24443-6, Paris, France (2015)
3. Woudstra, N., Woudstra, T., Pirone, A., Stelt, T.: Thermodynamic evaluation of combined cycle plants. Energ. Convers. Manage. 51, 1099–1110 (2010)
4. Petrakopoulou, F., Tsatsaronis, G., Morosuk, T., Carassai, A.: Conventional and advanced exergetic analyses applied to a combined cycle power plant. Energy 41, 146–152 (2012)
5. Bagdanavicius, A., Jenkins, N., Hammond, G.: Assessment of community energy supply systems using energy, exergy and exergoeconomic analysis. Energy 45, 247–255 (2012)
6. Bassily, A.M.: Modeling, analysis, and modifications of different GT cooling techniques for modern commercial combined cycle power plants with reducing the irreversibility of the HRSG. Appl. Therm. Eng. 53, 131–146 (2013)
7. Kaviri, A., Jaafar, M., Lazim, T., Barzegaravval, H.: Exergoenvironmental optimization of heat recovery steam generators in combined cycle power plant through energy and exergy analysis. Energ. Convers. Manage. 67, 27–33 (2013)
8. Ahmadi, P., et al.: Greenhouse gas emission and exergo-environmental analyses of a trigeneration system. Int. J. Greenhouse Gas Control (2011). https://doi.org/10.1016/j.ijggc.2011.08.011
9. Kaushik, S., Reddy, V., Tyagi, S.: Energy and exergy analyses of thermal power plants: a review. Renew. Sustain. Energ. Rev. 15, 1857–1872 (2011)


10. Popli, S., Rodgers, P., Eveloy, V.: Trigeneration scheme for energy efficiency enhancement in a natural gas processing plant through turbine exhaust gas waste heat utilization. Appl. Energ. 93, 624–636 (2012)
11. Reddy, S., Kaushik, S., Tyagi, S.: Exergetic analysis of solar concentrator aided natural gas fired combined cycle power plant. Renew. Energ. 39, 114–125 (2012)
12. Chuang, Ch., Sue, D.: Performance effects of combined cycle power plant with variable condenser pressure and loading. Energy 30, 1793–1801 (2005)
13. Kotas, T.J.: The Exergy Method of Thermal Plant Analysis, Reprint ed. Krieger Publishing Company, Florida (1995)
14. Laković, M.S., Stojiljković, M.M., Laković, S.V., Stefanović, V.P., Mitrović, D.D.: Impact of the cold end operating conditions on energy efficiency of the steam power plants. Therm. Sci. 14(Suppl. 1) (2010). https://doi.org/10.2298/TSCI100415066L

Chapter 39

Design and Analysis of Single and Multi-Band Rectangular Microstrip Patch Antenna Using Coplanar Wave Guide K. R. Kavitha , S. Vijayalakshmi , B. Murali Babu , E. Glenda Lorelle Ritu, and M. Naveen Balaji

Abstract For use in ultra-wideband (UWB) applications, a coplanar waveguide (CPW)-fed elliptical slot antenna with a widely tunable dual band-notched function and a frequency reconfigurable feature is devised. With an S-shaped resonator etched into the circular ring radiation patch and a parallel stub-loaded resonator etched into the CPW transmission line, the dual band-notched function is made possible. It has been shown that introducing a pair of shunt-connected slots across the host CPW's slots results in a reconfigurable device. The design technique has been verified using electromagnetic simulations.

39.1 Introduction

An antenna that can significantly change its frequency and radiating characteristics in a controlled and reversible manner is referred to as a reconfigurable antenna. Reconfigurable antennas incorporate an inner mechanism that allows the deliberate redistribution of the RF currents over the antenna surface and produces reversible variations in its properties in order to provide a dynamic response. Reconfigurable antennas are employed to maximise antenna performance in a changing scenario or to satisfy changing operational conditions. A variety of designs have been suggested for UWB antennas, including square slots, C-shaped slots, pi-shaped slots, E-shaped

K. R. Kavitha (B) · S. Vijayalakshmi · E. Glenda Lorelle Ritu Sona College of Technology, Salem, Tamil Nadu, India e-mail: [email protected] B. Murali Babu Paavai Engineering College, Namakkal, Tamil Nadu, India M. Naveen Balaji Amrita Vishwa Vidyapeetham, Coimbatore Campus, Coimbatore, Tamil Nadu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_39


notches, H-shaped notches, and U-shaped notches, obtained by etching various slots on either the radiating patch or the ground plane. A planar left-handed propagation medium, made up of a coplanar waveguide inductively coupled to split ring resonators and periodically loaded with thin metallic wires, was proposed by Martín, Bonache, et al. The wires turn the structure into a microwave waveguide with a wide frequency range and a negative effective permittivity. Due to the limited space available, Borja et al. [2] explored the possibility that mobile devices might include more than 20 different radios. The new antenna's impedance bandwidth was chosen to accommodate both present and future demands within the industry. Atallah et al. [3] suggested, for an overlay on cognitive radio systems, a new small coplanar waveguide ultra-wideband antenna with an electrically controllable notched band. According to experimental results, the proposed antenna can effectively block the primary users operating in bands such as the WLAN (5.15–5.35 GHz; 5.725–5.825 GHz) and WiMAX bands. The proposed antenna can place a band notch over a continuous operating band of about 1.44 GHz, from 4.77 to 6.21 GHz (5.25–5.825 GHz).

39.2 Methodology Used Two-port DRA arrays (TPDRAA) and CPW-fed monopole antenna arrays (TPCFMAA) are developed, constructed, and tested. For frequencies between 2.6 and 4.4 GHz, the TPCFMAA has been shown to exhibit mutual coupling below − 20 dB. By integrating meta-grid lines in the TPCFMAA ground plane, the envelope correlation coefficient decreases from 0.072 to 0.026. The inter-element spacing between the antennas is reported to be 0.015 λ0 at 3.5 GHz. The measured − 10 dB impedance bandwidth for the TPDRAA is between 5.65 and 6.55 GHz. For a 4.8 mm (0.090 λ0) inter-element spacing, the mutual coupling between the antenna elements is observed to be below − 16 dB. Within the bandwidth, the TPDRAA's measured gain ranges from 4.17 to 5.2 dBi. The suggested structures offer polarisation-dependent tunability in addition to multi-band resonance frequency functionality. The observed tunability of the SRRs is found to depend on the polarisation of the incident terahertz radiation. Both electromagnetic simulation and experimental measurements show this polarisation-dependent tunability.

39.2.1 UWB-MIMO Antenna System Because UWB uses such a huge portion of the available bandwidth, there is the potential for very high theoretical capacity and very high data rates. This is captured by Shannon's capacity equation:

C = B × log2(1 + S/N)    (39.1)

where C is the maximum channel capacity, B is the signal bandwidth, S is the signal power, and N is the noise power.
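As a quick numerical illustration of Eq. (39.1), the following sketch evaluates the capacity for the full 3.1–10.6 GHz UWB allocation; the bandwidth and SNR values are illustrative only.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Eq. (39.1): maximum channel capacity C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# The 3.1-10.6 GHz UWB allocation spans 7.5 GHz of bandwidth; even at a
# modest S/N of 1 (0 dB) the theoretical capacity is enormous.
print(shannon_capacity(7.5e9, 1.0) / 1e9)  # 7.5 (Gbit/s)
```

Even at unit SNR the capacity scales linearly with the huge UWB bandwidth, which is the point made above.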

39.2.2 Designing Mathematical Equations for MIMO Microstrip Antenna

Step 1: Calculation of lambda (λ)

Step 2: Calculation of L and W

The centre frequency is approximately given by (Fig. 39.1):

f_c = C / (2 L √ε_r)    (39.2)

L = C / (2 f_c √ε_r)    (39.3)

W = (C / (2 f_r)) × √( 2 / (ε_r + 1) )

Step 3: Feed width calculation

Z_o = (C / √ε_r) × ln( 8 H/W_f + 0.25 W_f/H )    (39.4)

Fig. 39.1 UWB spectrum band
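The patch dimension equations can be evaluated numerically. The sketch below assumes an FR-4 substrate (ε_r ≈ 4.4, matching the substrate used in Sect. 39.3) and a 4.9 GHz design frequency; both values are taken for illustration only.

```python
import math

C = 3e8  # speed of light in free space, m/s

def patch_length(fc_hz, eps_r):
    """Eq. (39.3): L = C / (2 * fc * sqrt(eps_r))."""
    return C / (2 * fc_hz * math.sqrt(eps_r))

def patch_width(fr_hz, eps_r):
    """W = (C / (2 * fr)) * sqrt(2 / (eps_r + 1)), as reconstructed above."""
    return (C / (2 * fr_hz)) * math.sqrt(2 / (eps_r + 1))

# Illustrative values only: 4.9 GHz design frequency on FR-4 (eps_r ~ 4.4).
f = 4.9e9
print(f"L = {patch_length(f, 4.4) * 1e3:.2f} mm, W = {patch_width(f, 4.4) * 1e3:.2f} mm")
```

In practice L is further shortened to account for fringing fields, which these closed-form expressions neglect.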


Fig. 39.2 Comparison of UWB and narrowband modulation schemes

These are the standard dimensions for obtaining better isolation in a MIMO antenna. The difference between narrowband and wideband schemes is illustrated in Fig. 39.2.

39.2.3 Methods to Improve Isolation and Bandwidth Enhancement
1. Modifying the patch: the microstrip patch must be modified from the traditionally available patches for better results in the antenna simulation.
2. Introducing an electromagnetic band gap (EBG): EBG structures are used as a filter for unwanted signals; they offer efficient suppression of interference by introducing a wide omnidirectional stop band in the operating frequency.
3. Introducing a stub: isolation enhancement can be achieved by adding an external feed network or stub to the antenna.

39.3 Results and Discussion A split ring resonator structure is designed on an FR-4 substrate and simulated.


39.3.1 Proposed Outputs Split ring resonator 2: Figure 39.3 shows the rectangular microstrip patch antenna designed without the coplanar wave guide. The resonant frequency is reached at 4.9 GHz with a return loss of − 18 dB, as shown in Fig. 39.4. Radiation pattern: see Fig. 39.5.

Fig. 39.3 Split ring resonator (rectangular shape)

Fig. 39.4 Output of split ring resonator 2 (rectangular shaped)


Fig. 39.5 Radiation pattern of split ring resonator (rectangular shaped)

Gain The antenna parameters are shown in Fig. 39.6. A gain of 2.98 dB is attained for the split ring resonator construction with the notch added to the lower C-shaped slot. Split ring resonator 2 (rectangular shaped with S-shape): Figure 39.7 shows the rectangular microstrip patch antenna designed with a coplanar wave guide and an S-shaped resonator. The resonant frequency is reached at 5.2 GHz with a return loss of − 11 dB, as seen in Fig. 39.8. Radiation pattern: Figure 39.9 shows the radiation pattern of the split ring resonator. The gain for the split ring resonator construction in Fig. 39.10 is measured at 5.2 dB; the design includes both an upper and a lower C-shaped slot with a notch.

39 Design and Analysis of Single and Multi-Band Rectangular Microstrip …

Fig. 39.6 Gain of rectangular shaped

Fig. 39.7 Split ring resonator 3 (A slot added with upper C shape)


Fig. 39.8 Output of split ring resonator 2 (A slot added with upper C shape)

Fig. 39.9 Radiation pattern of split ring resonator (rectangular shaped with S-shaped)


Fig. 39.10 Gain of split ring resonator 3

39.4 Comparison of the Proposed Design See Table 39.1.

Table 39.1 Comparison of the split ring resonator designs

| Design | Resonant frequency (GHz) | Return loss (dB) | Gain (dB) |
| Design 1 | 8.7 | − 27 | 2.5477 |
| Design 2 | 17.4 | − 16 | 2.7974 |
| Design 3 | 9.3, 17.7 | − 17.7, − 27.3 | 2.2016 |
| Design 4 | 7.5 | | 2.98 |
| Design 5 | 11.7, 12.5, 13.7, 15.4, 16.2, 16.6, 18, 19.7 | − 13.6, − 15.51, − 25.61, − 20.3, − 25.42, − 28.05, − 17.44, 16.94 | 2.14, 12.13 |

39.5 Conclusion The utilisation of S-SRs excited by counter-directional magnetic fluxes of a CPW has been demonstrated for reconfigurable and tunable operation. A UWB antenna design with an adjustable notch band for rejecting interference from WiMAX or WLAN services has been shown, illustrating the potential applicability of the suggested structure. The design technique has been verified using electromagnetic simulations.

References 1. Aghdam, S.A.: A novel UWB monopole antenna with tunable notched behavior using varactor diode. IEEE Antennas Wirel. Propag. Lett. 13, 1243–1246 (2014) 2. Anagnostou, D., Zheng, G., Chryssomallis, M., Lyke, J., Ponchak, G., Papapolymerou, J., Christodoulou, C.: Design, fabrication, and measurements of an RF-MEMS-based self-similar reconfigurable antenna. IEEE Trans. Antennas Propag. 54(2), 422–432 (2006) 3. Anagnostou, D.E., Chryssomallis, M.T., Braaten, B.D., Ebel, J.L., Sepulveda, N.: Reconfigurable UWB antenna with RF-MEMS for on-demand WLAN rejection. IEEE Trans. Antennas Propag. 62(2), 602–608 (2014) 4. Antonino-Daviu, E., Cabedo-Fabres, M., Ferrando-Bataller, M., Rodrigo-Penarrocha, V.: Active UWB antenna with tunable band-notched behavior. IET Semin. Digests 2032(18), 720–720 (2007) 5. Aznar, F., García-García, J., Gil, M., Bonache, J., Martín, F.: Strategies for the miniaturization of metamaterial resonators. IEEE Microwave Wirel. Compon. Lett. 50(5), 1263–1270 (2008) 6. Bonache, J., Gil, I., García-García, J., Martín, F.: Novel microstrip bandpass filters based on complementary split-ring resonators. IEEE Trans. Microwave Theor. Tech. 54(1), 265–271 (2006) 7. Chen, H., Ran, L., Huangfu, J., Zhang, X., Chen, K., Grzegorczyk, T.M., Kong, J.A.: Left-handed materials composed of only S-shaped resonators. Phys. Rev. E 70(5), 057605 (2004)


8. Christodoulou, C.G., Tawk, Y., Lane, S.A., Erwin, S.R.: Reconfigurable antennas for wireless and space applications. Proc. IEEE 100(7), 2250–2261 (2012) 9. Ebrahimi, E., Kelly, J.R., Hall, P.S.: Integrated wide-narrowband antenna for multi-standard radio. IEEE Trans. Antennas Propag. 59(7), 2628–2635 (2011) 10. Herraiz-Martinez, F.J., Zamora, G., Paredes, F., Martin, F., Bonache, J.: Multiband printed monopole antennas loaded with OCSRRs for PANs and WLANs. IEEE Antennas Wirel. Propag. Lett. 10, 1528–1531 (2011) 11. Horestani, A.K., Fumeaux, C., Al-Sarawi, S.F., Abbott, D.: Displacement sensor based on diamond-shaped tapered split ring resonator. IEEE Sens. J. 13(4), 1153–1160 (2013) 12. Horestani, A.K., Shaterian, Z., Kaufmann, T., Fumeaux, C.: Single and dual band-notched ultra-wideband antenna based on dumbbell-shaped defects and complementary split ring resonators. In: German Microwave Conference (2015) 13. Horestani, A.K., Withayachumnankul, W., Chahadih, A., Ghaddar, A., Zehar, M., Abbott, M., Fumeaux, C., Akalin, T.: Metamaterial-inspired bandpass filters for terahertz surface waves on Goubau lines. IEEE Trans. Terahertz Sci. Technol. 3(6), 851–858 (2013)

Chapter 40

Software in the Loop (SITL) Simulation of Aircraft Navigation, Guidance, and Control Using Waypoints G. Anitha, E. Saravanan, Jayaram Murugan, and Sudheer Kumar Nagothu

Abstract Fixed-wing UAVs are used for a variety of purposes, including surveillance, delivery, and military operations. Waypoint navigation is extensively used in fixed-wing UAVs to autonomously guide them. However, the takeoff and landing phases require precise and careful design due to the high risk of damage to the vehicle during these phases. This paper describes the design of an autopilot for the Cessna 172 Skyhawk using MATLAB Simulink and X-Plane. The PID controllers in the autopilot system are tuned on a SITL platform with Simulink and X-Plane. The PID controller is tuned, and the effects of the coefficients on the aircraft's response are studied. The tuned autopilot is implemented for waypoint navigation based on the haversine equations. Moreover, from a developer's point of view, navigating using waypoints paves the way for a simpler User Interface (UI).

40.1 Introduction Autopilot systems are crucial to the advancement of aviation since they aid in the enhancement of navigation procedures, flight management, and aircraft control. This system is used to maintain a parameter at a given setpoint, such as pitch, roll, or heading. The sensors on the aircraft provide the current flight parameters, such as pitch, roll, and yaw, which the autopilot compares with the set point to calculate the error. This error is then fed through a PID controller to estimate the control signal to be given to the actuator. The current state of the art in aviation, especially for unmanned aerial vehicles (UAV), involves navigating the vehicle from its home location to the destination via a pre-planned route. For this purpose, the vehicle has an on-board computer with a flight management system (FMS), which gives the desired values of the flight parameters at that instant to reach the destination, and a flight control G. Anitha (B) · E. Saravanan · J. murugan · S. K. Nagothu Division of Avionics, Department of Aerospace Engineering, Madras Institute of Technology, Anna University, Chennai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_40


Fig. 40.1 SITL block diagram

system (FCS), which provides the necessary control input to the control actuators. The navigation process is accomplished inside the FMS, where the attributes of the specified route are used to arrive at the required flight parameters at any instant of flight. The SITL simulation using MATLAB and X-Plane is depicted in Fig. 40.1. Waypoint navigation is provided by calculating the heading based on Line of Sight (LOS), an inverse tangent relation involving the latitudes and longitudes of home and destination. The practical application of haversine relations is used in recommender systems for finding the nearest waypoint [1, 2]. Aircraft Navigation, Guidance, and Control Using Waypoints are examined and implemented using the interfacing of MATLAB Simulink and X-Plane [3]. Sending data to X-Plane as well as using the SITL simulation as a test platform have also been implemented. A state space model based on numerical integration of the flight data is discussed in certain research papers [5], and a controller designed for an aircraft whose state space model is constructed is also presented [6, 7]. The main challenge in navigating a fixed-wing aircraft is the takeoff and landing process, which is chosen to be the prime objective of this work [8, 9]. To aid this challenge, this paper presents the waypoint navigation of a fixed-wing aircraft using an autopilot designed in Simulink. The SITL simulation with Simulink and X-Plane is carried out by splitting the flight trajectory into the “Takeoff and Cruise” and “Descent and Landing” phases [10]. Based on the literature survey carried out, the objectives of our work are • To design an autopilot for a Cessna 172 Skyhawk aircraft model in MATLAB Simulink. • Interface Simulink with X-Plane via an Ethernet cable using UDP communication and compare the manual control with the autopilot. • Perform waypoint navigation with the designed autopilot by splitting the flight trajectory into takeoff, cruise, and landing phases.


40.2 Autopilot Design 40.2.1 Design and Control Law The autopilot control system is designed with the help of MATLAB Simulink. A closed-loop feedback system is built for the control surfaces consisting of an aileron, rudder, and elevator. The elevator PID feedback system helps to control the pitch and, most importantly, the flight path angle. The rudder and aileron PID feedback systems help control the heading and roll, respectively. Throttle feedback controls the velocity. A control law to calculate the control signal from the set point and current flight parameters involves passing the error between the set point and current value to a PID controller whose output is scaled and trimmed to the range of − 1 to 1, consistent with the range accepted by X-Plane. The scaling and trimming are accomplished with a gain block followed by a saturation block. For velocity control, an additional block is used to map the control signal from the range − 1 to 1 to the range 0 to 1, as negative throttle input is irrelevant for the chosen aircraft, the Cessna 172 Skyhawk.
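The control law above, a PID whose output is scaled and trimmed to [− 1, 1] plus a throttle remap to [0, 1], can be sketched in code. The paper implements this as Simulink blocks; the class below only mirrors that structure, and the names and defaults are illustrative assumptions.

```python
# Sketch of the control law: PID, gain (scale) block, then saturation to the
# [-1, 1] range accepted by X-Plane. Structure mirrors the Simulink design;
# class names and default limits are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt, lo=-1.0, hi=1.0, scale=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.lo, self.hi, self.scale = lo, hi, scale
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Gain block followed by a saturation block: scale, then trim to [lo, hi].
        return min(self.hi, max(self.lo, self.scale * u))

def throttle_map(u):
    """Map a control signal from [-1, 1] to [0, 1]; negative throttle is
    irrelevant for the Cessna 172 Skyhawk."""
    return (u + 1.0) / 2.0

# Elevator controller with the gains finally chosen in Sect. 40.2.3.
elevator = PID(kp=3.0, ki=1.5, kd=0.5, dt=0.02)
print(elevator.step(setpoint=5.0, measurement=0.0))  # saturates at 1.0
```

The 0.02 s time step matches the 50 packets/s UDP rate used for the X-Plane interface.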

40.2.2 Tuning and Testing Platform X-Plane allows the user to receive, manipulate, and send data back to it. It sends data through UDP communication to the specified IP address in packets. The procedure for establishing UDP communication between the two software programmes running on the same system is described below.
• Inside X-Plane, open "Settings." Move to the "Data Output" tab and check the "Send network data output" checkbox (Fig. 40.2).
• Select "Enter IP Address" in the drop-down list box, enter "127.0.0.1" as the IP address and "49004" as the port, and set the UDP rate to 50 packets/s.
• Under the "Network" tab, set the send and receive ports to 49004 and 49000, respectively, under UDP PORTS.
• Open Simulink, and inside the "UDP Receive" block, enter "127.0.0.1" as the remote IP address, "49004" as the local IP port, "401" for the maximum length of the message, and "0.02" for the sampling and blocking time.
• Inside the "UDP Send" block, set the same IP address and 49000 as the port. The rest of the options are left unchanged in both blocks. In this setup, X-Plane sends data to Simulink via port 49004, and the opposite happens through port 49000 (Figs. 40.3 and 40.4).
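The same port wiring can be mirrored with plain sockets. This is a self-contained sketch, not the Simulink blocks used in the paper: here we stand in for X-Plane's data stream ourselves so the loop can run without the simulator.

```python
import socket

# X-Plane streams flight data to 127.0.0.1:49004 and listens on 49000; the
# paper wires these ports into Simulink's "UDP Receive"/"UDP Send" blocks.
RECV_ADDR = ("127.0.0.1", 49004)   # where flight data arrives
SEND_ADDR = ("127.0.0.1", 49000)   # where control commands would go

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(RECV_ADDR)
rx.settimeout(1.0)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"DATA*...flight parameters...", RECV_ADDR)  # simulate X-Plane's output

packet, _ = rx.recvfrom(401)       # 401 bytes: the max message length set in Simulink
print(packet[:5])                  # b'DATA*'
rx.close(); tx.close()
```

A real receiver would parse the payload into the pitch, roll, heading, and velocity channels that feed the PID controllers.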


Fig. 40.2 X-plane data input and output settings

Fig. 40.3 “UDP Receive block” in simulink



Fig. 40.4 “UDP Send” block in simulink

40.2.3 Response Curves This process involves manually changing the controller's coefficients [10] and selecting a suitable value based on the transient response of the plant. The integral coefficient (K i ) and the derivative coefficient (K d ) are first set to zero. The proportional coefficient (K p ) is increased until the required rise time is reached. Then the integral coefficient (K i ) is increased until the steady-state error is within acceptable limits. Finally, the derivative coefficient (K d ) is increased to reduce overshoot and oscillation and to smooth the response. The K p , K i , and K d values are fixed after observing the transient response of the aircraft with variations in the coefficients. Figures 40.5 and 40.6 show the flight path angle response of the aircraft for K p less than 1 and greater than 1, respectively. Table 40.1 shows the corresponding transient characteristics. The lowest peak value occurs at K p = 3, and this is chosen as the K p for elevator control. Figure 40.7 shows the flight path angle response of the aircraft for K p fixed at 3 and varying K i values. Table 40.2 shows the corresponding transient characteristics; K i is fixed at 1.5, which has the least rise time and an acceptable settling time. Figure 40.8 shows the flight path angle response of the aircraft for K p fixed at 3, K i fixed at 1.5, and varying K d values. Table 40.3 shows the corresponding transient characteristics. K d is fixed at 0.5, observing that it has the least overshoot and rise time. Finally, for elevator control, the coefficients are K p = 3, K i = 1.5, and K d = 0.5. The chosen coefficients and their responses are tabulated in Table 40.4. The following can be observed from the table: adding an integral term to the controller has given a tremendous improvement in the settling time, decreasing it by 49.02 s. However, the overshoot percentage increases by 12.263 points; the steady-state error has


Fig. 40.5 Response of system with P-controller for K p < 1

Fig. 40.6 Response of system with P-controller for K p > 1

Table 40.1 Time domain analysis for system with P controller

| Kp | Ki | Kd | Peak value (°) | Steady-state value (°) | Overshoot (°) | Overshoot % | Settling time (s) | Rise time (s) |
| 0.2 | 0 | 0 | 25.330 | 5.354 | 19.976 | 373.074 | 68.620 | 1.100 |
| 0.3 | 0 | 0 | 22.407 | 5.309 | 17.098 | 322.035 | 60.300 | 1.060 |
| 0.5 | 0 | 0 | 18.313 | 5.264 | 13.049 | 247.884 | 52.020 | 0.974 |
| 0.7 | 0 | 0 | 15.678 | 5.247 | 10.431 | 198.818 | 51.120 | 0.917 |
| 1 | 0 | 0 | 13.158 | 5.261 | 7.897 | 150.105 | 51.860 | 0.862 |
| 3 | 0 | 0 | 8.647 | 5.328 | 3.320 | 62.315 | 53.700 | 0.505 |
| 5 | 0 | 0 | 10.176 | 5.275 | 4.901 | 92.914 | 56.340 | 0.396 |
| 7 | 0 | 0 | 11.534 | 5.164 | 6.371 | 123.373 | 53.360 | 0.345 |
| 10 | 0 | 0 | 12.145 | 5.166 | 6.979 | 135.087 | – | 0.378 |


Fig. 40.7 Response of system with PI-controller

Table 40.2 Time domain analysis for system with PI controller

| Kp | Ki | Kd | Peak value (°) | Steady-state value (°) | Overshoot (°) | Overshoot % | Settling time (s) | Rise time (s) |
| 3 | 0.75 | 0 | 8.458 | 4.902 | 3.557 | 72.567 | 9.940 | 0.421 |
| 3 | 1 | 0 | 8.580 | 4.923 | 3.657 | 74.287 | 7.800 | 0.476 |
| 3 | 1.25 | 0 | 8.654 | 4.952 | 3.702 | 74.761 | 5.740 | 0.356 |
| 3 | 1.5 | 0 | 8.645 | 4.952 | 3.693 | 74.578 | 4.680 | 0.411 |
| 3 | 1.75 | 0 | 8.681 | 4.979 | 3.702 | 74.353 | 3.820 | 0.432 |
| 3 | 2 | 0 | 8.555 | 4.965 | 3.590 | 72.305 | 3.600 | 0.434 |

Fig. 40.8 Response of system with PID-controller for K p = 3, K i = 1.5


Table 40.3 Time domain analysis for system with PID controller

| Kp | Ki | Kd | Peak value (°) | Steady-state value (°) | Overshoot (°) | Overshoot % | Settling time (s) | Rise time (s) |
| 3 | 1.5 | 0.2 | 8.354 | 4.962 | 3.392 | 68.352 | 4.840 | 0.452 |
| 3 | 1.5 | 0.35 | 8.057 | 4.965 | 3.092 | 62.270 | 4.900 | 0.422 |
| 3 | 1.5 | 0.5 | 7.697 | 4.965 | 2.732 | 55.015 | 4.820 | 0.443 |
| 3 | 1.5 | 0.75 | 8.078 | 4.969 | 3.109 | 62.559 | 5.280 | 0.715 |
| 3 | 1.5 | 1 | 8.059 | 4.961 | 3.098 | 62.448 | 5.580 | 0.822 |
| 3 | 1.5 | 1.25 | 8.124 | 4.970 | 3.153 | 63.443 | 6.000 | 0.830 |

decreased by 0.28°, or 85.37% in magnitude; and the rise time has decreased by 0.094 s. Not much change is observed in the peak value. Adding a derivative term to the controller has decreased the overshoot percentage by 19 points compared to the PI controller and 7.3 points compared to the P controller. Slight increases in settling time and rise time, of magnitudes 0.14 s and 0.029 s, are observed, respectively. The peak value has decreased by 0.948°, and the steady-state error has decreased by 0.013°, or 27.08%, compared to the PI controller. A high overshoot was observed while tuning the PID to control the flight path angle, unlike the ones seen in model-based tuning [4]. Regardless of how tightly the controller's coefficients were tuned, a minimum of 50% overshoot was always observed. This could be attributed to intercoupling with the other controlled parameters and to the dependency of the flight path angle on several other flight parameters that are not accounted for. One such parameter is velocity: the throttle was simply kept at full capacity, yet the velocity kept wavering around 127 knots. Observing the effect of other flight parameters on the flight path angle could resolve the unusual overshoot observed. Alternatively, the control scheme can be varied to study its effects on the overshoot. Similarly, the roll, heading, and velocity controllers are tuned. Their coefficients are shown in Table 40.5.

Table 40.4 P, PI, and PID comparison

| Kp | Ki | Kd | Peak value (°) | Steady-state value (°) | Overshoot (°) | Overshoot % | Settling time (s) | Rise time (s) |
| 3 | 0 | 0 | 8.647 | 5.328 | 3.320 | 62.315 | 53.700 | 0.505 |
| 3 | 1.5 | 0 | 8.645 | 4.952 | 3.693 | 74.578 | 4.680 | 0.411 |
| 3 | 1.5 | 0.5 | 7.697 | 4.965 | 2.732 | 55.015 | 4.820 | 0.443 |

Table 40.5 Coefficients of FPA, heading, roll, and velocity control PIDs

| Controller | Kp | Ki | Kd |
| FPA | 3 | 1.5 | 0.5 |
| Heading | 1.5 | 2.3 | 0.5 |
| Roll | 0.65 | 0.5 | 0.1 |
| Velocity | 2 | 0.1 | 0 |
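The manual tuning steps of Sect. 40.2.3 can be illustrated on a stand-in plant; the second-order model, its damping, and the time step below are assumptions, since the paper tunes against the X-Plane aircraft model itself rather than an analytic plant.

```python
# Stand-in second-order plant for illustrating the tuning procedure; the plant
# model and damping are assumptions, not the aircraft dynamics.
def step_response(kp, ki, kd, dt=0.02, t_end=20.0, setpoint=5.0):
    y = v = integral = 0.0
    prev_err = setpoint
    peak = 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        v += dt * (u - 0.5 * v)  # lightly damped plant dynamics
        y += dt * v
        peak = max(peak, y)
    overshoot_pct = 100.0 * (peak - setpoint) / setpoint
    return y, overshoot_pct

# Raising Kd damps the large overshoot left after the P and I stages.
for kd in (0.0, 0.5, 2.0):
    final, os_pct = step_response(kp=3.0, ki=0.0, kd=kd)
    print(f"Kd={kd}: final={final:.2f}, overshoot={os_pct:.0f}%")
```

On this toy plant, as in Tables 40.1 and 40.3, the proportional gain sets the speed of the response while the derivative gain trades overshoot against smoothness.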

40.3 Waypoint Navigation The path to be followed by an air vehicle is conveniently specified by waypoints. A waypoint refers to the coordinates of a single point, such as latitude, longitude, and altitude in a spherical system or x, y, and z distances in a linear system. Waypoint navigation is the process of directing the aircraft by estimating the required parameters at any instant, given the current location and the next waypoint location. The simplest way to accomplish this is to guide the aircraft to some radial threshold distance near the next waypoint, say, 200 m, and then switch to the subsequent waypoint. The current altitude, longitude, and latitude and those values corresponding to the next waypoint are used to calculate the current desired flight path angle, distance left to cover, and heading. Throughout the flight, the desired roll is fixed to be zero, which means the turn is executed without banking. The desired cruise speed is set to 100 knots. Figure 40.9 shows the FMS block with its inputs and outputs. From Fig. 40.10, if P and Q are two points on earth, then the shortest distance (d) between the two is given by haversine Eq. (40.1). Point P represents the current position, and point Q represents the destination. d = R × c2

(40.1)

[ √ ] c1 c2 = 2 tan √ 1 − c1

(40.2)

−1

Fig. 40.9 FMS block

532

G. Anitha et al.

Fig. 40.10 Shortest distance between two points on a sphere

[ c1 = sin d R Θ ∅

2

[ ] { ]} θ2 − θ1 2 ∅2 − ∅1 + cos(θ1 ) × cos θ2 × sin 2 2

(40.3)

The shortest distance between the points on the sphere (m) Radius of the earth (637,100 m) Latitude of a given point on the sphere (°) Longitude of a given point on the sphere (°). The bearing of a line made by any two points is given by β = tan

−1

[

sin(∅2 − ∅1 ) × cos θ2 cos θ1 sin θ2 − sin θ1 cos θ2 cos(∅2 − ∅1 ) [ ] h des − h curr γ = tan−1 d

] (40.4) (40.5)

where h_des is the destination altitude and h_curr is the current altitude. The bearing angle is measured as shown in Fig. 40.11, and the flight path angle as per Fig. 40.12. Figure 40.13 shows the sequence of waypoints corresponding to takeoff navigation. Waypoints 1 and 2 represent the start and end of the runway. The altitudes at waypoints 3, 4, and 5 are fixed at 1000 feet, and the path from 2 to 3 represents the climb phase. After the last cruise waypoint is crossed, the descent phase commences, in which a negative flight path angle is maintained to reach the desired altitude. The required flight path angle is calculated using the inverse tangent relationship. The speed is gradually decreased to 70 knots during this phase. At touchdown, the throttle is completely reduced. The velocity keeps reducing gradually, and at 40 knots, the brakes are slowly engaged until the aircraft comes to a halt.
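Equations (40.1)-(40.5) translate directly into code. The sketch below works in radians, as in Table 40.6; the leg chosen for the demonstration is illustrative.

```python
import math

R_EARTH = 6_371_000.0  # mean radius of the earth, m

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance d = R * c2, Eqs. (40.1)-(40.3); angles in radians."""
    c1 = (math.sin((lat2 - lat1) / 2) ** 2
          + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    c2 = 2 * math.atan2(math.sqrt(c1), math.sqrt(1 - c1))
    return R_EARTH * c2

def bearing(lat1, lon1, lat2, lon2):
    """Bearing of the line joining two points, Eq. (40.4); radians."""
    return math.atan2(
        math.sin(lon2 - lon1) * math.cos(lat2),
        math.cos(lat1) * math.sin(lat2)
        - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))

def flight_path_angle(h_des, h_curr, d):
    """Desired flight path angle, Eq. (40.5); radians."""
    return math.atan2(h_des - h_curr, d)

# Leg from takeoff waypoint 1 to waypoint 2 (radian coordinates from Table 40.6).
d = haversine_distance(0.2266, 1.3989, 0.2268, 1.3995)
print(f"distance = {d:.0f} m, bearing = {bearing(0.2266, 1.3989, 0.2268, 1.3995):.4f} rad")
```

With these three functions, the waypoint switch described above reduces to checking whether the remaining distance to the next waypoint has dropped below the radial threshold (200 m).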


Fig. 40.11 Bearing angle

Fig. 40.12 Flight path angle

Figure 40.14 shows the waypoints corresponding to landing navigation. An additional waypoint '4' is specified to ensure proper alignment with the runway before landing.

RMSE = √( Σ(Response − Setpoint)² / N )    (40.6)


Fig. 40.13 Takeoff waypoints

Fig. 40.14 Landing waypoints

For each of the four flight variables, namely flight path angle, heading, roll, and velocity, which are shown as graphs, the Root Mean Squared Error (RMSE) is calculated to quantify the mean deviation from the set point. Figure 40.15 shows the aircraft flight path angle during the takeoff phase.
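Equation (40.6) in code, a minimal sketch; the sample trace values are made up for illustration.

```python
import math

def rmse(response, setpoint):
    """Eq. (40.6): root mean squared deviation of a response from its setpoint."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(response, setpoint)) / len(response))

# Toy roll trace against its constant zero-roll setpoint (values made up).
roll = [0.4, -0.3, 0.2, -0.1, 0.0]
print(round(rmse(roll, [0.0] * len(roll)), 4))  # 0.2449
```

The same function applied to the logged FPA, heading, roll, and velocity traces yields the values reported in Table 40.8.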

Fig. 40.15 Takeoff-flight path angle


Figures 40.16, 40.17, and 40.18 show the corresponding response curves along with the setpoint at every instant during takeoff. Figure 40.19 shows the aircraft flight path angle (FPA) during the landing phase. Figures 40.20, 40.21, and 40.22 show the corresponding response curves along with the setpoint at every instant during landing. The takeoff and landing waypoint coordinates and turning angles are given in Tables 40.6 and 40.7.

Fig. 40.16 Takeoff-heading

Fig. 40.17 Takeoff-roll

Fig. 40.18 Takeoff-velocity


Fig. 40.19 Landing-flight path angle

Fig. 40.20 Landing-heading

Fig. 40.21 Landing-roll

Fig. 40.22 Landing-velocity



Table 40.6 Takeoff waypoint coordinates and turning angles

| Wp No | Latitude (deg) | Latitude (rad) | Longitude (deg) | Longitude (rad) | Altitude (ft) | Bearing (rad) | Turning angle (rad) | Turning angle (deg) |
| 1 | 12.9837 | 0.2266 | 80.1517 | 1.3989 | 0 | – | – | – |
| 2 | 12.9959 | 0.2268 | 80.1842 | 1.3995 | 0 | 1.2029 | – | – |
| 3 | 13.0266 | 0.2274 | 80.2365 | 1.4004 | 1000 | 1.0293 | 0.1736 | 9.9452 |
| 4 | 13.0658 | 0.2280 | 80.2895 | 1.4013 | 1000 | 0.9208 | 0.1085 | 6.2143 |
| 5 | 13.1178 | 0.2289 | 80.3281 | 1.4020 | 1000 | 0.6252 | 0.2957 | 16.9397 |

Table 40.7 Landing waypoint coordinates and turning angles

| Wp No | Latitude (deg) | Latitude (rad) | Longitude (deg) | Longitude (rad) | Altitude (ft) | Bearing (rad) | Turning angle (rad) | Turning angle (deg) |
| 1 | 13.0643 | 0.2280 | 80.2923 | 1.4014 | 1000 | – | – | – |
| 2 | 13.0397 | 0.2276 | 80.2629 | 1.4009 | 750 | 0.8614 | – | – |
| 3 | 13.0157 | 0.2272 | 80.2260 | 1.4002 | 500 | 0.9810 | − 0.1196 | − 6.8520 |
| 4 | 12.9960 | 0.2268 | 80.1844 | 1.3995 | 100 | 1.1189 | − 0.1379 | − 7.9018 |

40.4 Results and Discussion The observations made during various stages of the work are summarised below.
• The response curves had several unusual behaviours that could not have been observed in model-based tuning of the autopilot.
• The overshoot was unusually high, about 50%. Further research is required to study the effect of various flight parameters on the observed overshoot.
• An interesting trend switch is observed at K p = 1 for the P controller, due to which the graphs are plotted separately and the trends have been compared.
• Since only the rudder is used to make a turn in the cruise phase, maintaining the roll at zero, a limit exists on the minimum radius of turn for a given speed. Further work could include a coordinated turn, where bank and yaw are used together to make the aircraft follow a curved path; this would further decrease the turn radius.
• A minimum distance must be allowed between any two waypoints for the aircraft to reach a steady state along the specified heading. Otherwise, the aircraft will keep travelling along S-shaped routes, even after the FMS has progressed to a subsequent waypoint.
• For a Cessna 172 Skyhawk travelling at 120 knots, keeping the turn angle between the legs around 15° and the minimum distance between waypoints at 5 km was seen to give appreciable results for an unbanked turn.
• Splitting the route into several smaller segments helped in the proper confinement of the aircraft along the specified route.


• The control surface actuator dynamics are not included in the X-Plane software, which means a control surface can be deflected instantly from one angle to another. This causes issues at very high K d values, where the control surface switches rapidly between − 1 and 1 while trying to reach the set point, referred to as high-frequency noise [9]. This can be overcome by adding an additional block representing the actuator dynamics between the autopilot and the "X-Plane" block.
• The X-Plane software is prone to wind gusts even when the weather is set to calm, which makes it difficult to observe the response. However, these gusts and disturbances help to observe the system's recovery from a disturbance.
• Of all the parameters being controlled, the velocity poses the highest difficulty in control, as observed from its high RMSE values (Table 40.8) compared to the other parameters. This is because of the delay between a throttle input and the velocity variation. This delay makes it imperative to go for a controller with a higher settling time and minimum oscillation and overshoot. Even though the PID for velocity control can be further tuned to reduce the overshoot, the settling time would remain as high as 120 s [4] (Figs. 40.23, 40.24, 40.25, 40.26, and 40.27).

Table 40.8 RMSE during takeoff and landing

| Flight phase | Flight parameter | RMSE |
| Takeoff | FPA (°) | 0.1603 |
| Takeoff | Roll (°) | 0.3784 |
| Takeoff | Heading (°) | 1.0303 |
| Takeoff | Velocity (knots) | 7.8124 |
| Landing | FPA (°) | 2.5158 |
| Landing | Roll (°) | 0.2945 |
| Landing | Heading (°) | 0.4731 |
| Landing | Velocity (knots) | 19.4547 |

Fig. 40.23 Waypoint navigation block diagram


Fig. 40.24 Simulation setup block diagram

Fig. 40.25 SITL via Ethernet

From the takeoff simulation graphs, during the ground run, the flight path angle (FPA) is seen to oscillate as the autopilot tries to maintain an FPA of zero degrees in order to keep the aircraft on the ground. Every time the FPA increases, the autopilot immediately responds by decreasing the FPA value, which overshoots the desired value, hence the oscillation. However, these oscillations were not seen to have an intense effect during the ground run when observed visually. The path from the second to the third waypoint has several disturbances, but they are successfully rejected by the autopilot. The roll seems to oscillate whenever the heading is changed;


Fig. 40.26 Aircraft in the takeoff phase

however, it is immediately corrected by the autopilot. The heading control is robust. Velocity control seems to require further tuning and an alternate control law; it takes a very long time to settle, as seen in [4]. From the landing simulation graphs, the response is very similar to that of the takeoff simulation, including the oscillations during the ground run, except for the velocity response after landing. The RMSE of the FPA is seen to increase during landing, which arises from not accounting for the elevation of the centre of gravity above ground level. Roll has a low RMSE, which is to be expected given that its set point remains at zero throughout the flight. As flaps were not implemented and brakes were applied only after the aircraft decelerated to 40 knots, during the period of velocity change from 70 to 40 knots the deceleration depends entirely on the drag on the aircraft and ground resistance, which makes the graph appear to show a poor response. A positive steady-state error is observed after the


Fig. 40.27 Aircraft in the landing phase

aircraft has landed, because the centre of gravity of the aircraft always sits at an elevation above the ground. This makes the FMS predict the negative FPA required to bring the aircraft to an altitude of zero. Some roll disturbances are observed towards the end of the ground run.


40.5 Conclusion

Through this work, the basic aspects of autopilot systems, their design and tuning, waypoint navigation, and the setup of a simulation environment using Simulink and X-Plane were studied and successfully implemented. Waypoint navigation remains an area open to vast innovation and can be used to test new ideas for improving the control systems of existing aircraft as well as for developing control systems for UAVs. This project has provided a keen understanding of various aspects of aircraft control. The future scope consists of numerous improvements that can be made to the current design. In autopilot design, the control law or scheme can be varied; newer types of controllers, such as adaptive flight controllers, fuzzy logic-based controllers, and reinforcement learning-based controllers, can be implemented and their responses tested on the SITL platform. In waypoint navigation, a banked or coordinated turn can be implemented, which would enable flight along a curved path without dividing it into several linear segments. The prototype testing support in MATLAB can be used to implement the designed systems in the real world, especially in the field of drones.

References

1. Haq, A., Reddy, G.M.S., Raj, P.C.P.: Design, modeling and tuning of modified PID controller for autopilot in MAVs. Int. J. Sci. Eng. Res. 5(12), 506–513 (2014)
2. Ang, K.H., Chong, G.: PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 13(4), 559–576 (2005)
3. Bittar, A., Figueredo, H.V., Guimaraes, P.A., Mendes, A.C.: Guidance software-in-the-loop simulation using X-Plane and Simulink for UAVs. In: International Conference on Unmanned Aircraft Systems, pp. 993–1002 (2014)
4. Cetin, E.: System identification and control of a fixed wing aircraft by using flight data obtained from X-Plane flight simulator. Master of Science thesis, Middle East Technical University open archive (2018)
5. Hartanto, S., Furqan, M., Siahaan, A.P.U., Fitriani, W.: Haversine method in looking for the nearest masjid. Int. J. Eng. Res. 3, 187–195 (2017)
6. Jalovecký, R., Bystřický, R.: Online analysis of data from the simulator X-Plane in MATLAB. In: International Conference on Military Technologies (ICMT), pp. 592–597 (2017)
7. Jamhuri, M., Irawan, M.I., Mukhlash, I.: Similarity analysis of user trajectories based on Haversine distance and Needleman–Wunsch algorithm. Elkawnie J. Islamic Sci. Technol. 7(2), 263–276 (2021)
8. Mahmoud, H., Akkari, N.: Shortest path calculation: a comparative study for location-based recommender system. In: World Symposium on Computer Applications and Research, pp. 1–5 (2016)
9. Moin, L., Baig, A.Z., Uddin, V.: State space model of an aircraft using Simulink. Int. J. Syst. Model. Simul. 2(4), 1–6 (2017)
10. Ige, O.O.S.: Automatic tuning of PID controllers. Master of Science thesis, University of Southeast Norway open archive (2018)
11. Paing, H.S., Anatoli, S., Naing, Z.M., Htun, H.M.: Designing, simulation and control of autopilot using PID controller. In: IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), pp. 2672–2675 (2021)


12. Randhawa, P.: Designing and implementation of an optimized PID controller for longitudinal autopilot. Adv. Res. J. Multi-Disc. Discov. 11, 21–26 (2017)
13. Rahul, B., Dharani, J.: Genetic algorithm tuned PID controller for aircraft pitch control. Int. J. Res. Eng. Appl. Manage. (IJREAM) 4, 303–307 (2019)
14. Ribeiro, L.R., Oliveira, N.M.: UAV autopilot controllers test platform using MATLAB/Simulink and X-Plane. In: ASEE/IEEE Frontiers in Education Conference (FIE), vol. 6(12), pp. 1301–1315 (2010)
15. Sofwan, A., Soetrisno, Y.A., Ramadhani, N.P., Rahmani, A., Handoyo, E., Arfan, M.: Vehicle distance measurement tuning using Haversine and micro-segmentation. In: International Seminar on Intelligent Technology and Its Applications (ISITIA), pp. 239–243 (2019)
16. Szabolcsi, R.: Optimal PID controller-based autopilot design and system modelling for small unmanned aerial vehicle. Rev. Air Force Acad. 3, 43–58 (2018)

Chapter 41

Optimization of Biodiesel Production from Waste Cooking Oil Using Taguchi Method

Subham Chetri and Sumita Debbarma

Abstract Mustard oil is commonly used for cooking. After even a single use, cooking oil starts to deteriorate, and further consumption is inadvisable because it becomes hazardous to health. Owing to the huge daily consumption of mustard oil, tons of waste cooking oil are discarded into the environment, which increases pollution. Converting this waste into biodiesel is therefore a promising solution. Biodiesel yield is a very important parameter of the whole production process, and optimization techniques are widely used to maximize the yield with a minimal number of experiments. Accordingly, nine transesterification experiments were performed in the presence of an Azolla algae-based heterogeneous catalyst following the Taguchi orthogonal array technique. The highest yield of 91% was attained at a methanol-to-oil molar ratio of 9:1, a reaction temperature of 55 °C, a catalyst concentration of 3 wt%, and a 2-h reaction period. The methanol-to-oil ratio and catalyst concentration were observed to be the most influential factors on biodiesel yield.

41.1 Introduction

In 2022, global energy investment is expected to grow by 8% to reach USD 2.4 trillion, driven by the phenomenal rise of the clean energy sector. Clean energy has become essential for sustainable development. According to the International Energy Agency, crude oil consumption is expected to rise by 3.461%, from 96.2 million barrels per day to 99.53 million barrels per day, in 2022 [1]. This large consumption results in an increased release of carbon dioxide and harmful gases. Therefore, the focus on alternative energy is very important.

S. Chetri (B) · S. Debbarma
Department of Mechanical Engineering, NIT Silchar, Silchar, Assam 788010, India
e-mail: [email protected]
S. Debbarma
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_41


Biofuel, on the other hand, has emerged as an excellent alternative to fossil fuel. It is derived from widely available biomass; it is renewable, broadly available, carbon neutral, and has the potential to replace fossil fuels in the future. Most biofuels, especially biodiesel and ethanol, are used as transportation fuels. Many researchers have shown interest in blending biodiesel into diesel to make it more effective and efficient. Various studies have addressed the production of biodiesel. Techniques such as dilution, pyrolysis, transesterification, and micro-emulsification can be used to create biodiesel. The simplest and most affordable production process is transesterification, in which triglycerides react with methanol in the presence of a catalyst to produce fatty acid methyl esters and glycerol [2]. Any type of lipid feedstock, including edible and inedible oils, animal fat, and used cooking oil, contains triglycerides. The production of biodiesel is increased by using a catalyst to speed up the reaction. Biodiesel can be prepared from both edible and non-edible oils. Non-edible oil crops can grow in environments with little water, high temperatures, and sandy soils [3]. Non-edible oils are considered the best feedstock among the available options because they are inexpensive and readily available locally year-round. More importantly, they do not create a food-versus-fuel conflict, unlike edible oils. Oils from these resources contain toxic compounds that make them unsuitable for human consumption, and they are considered waste when not used. For instance, the main toxic compounds are the protein curcin and purgative agents in jatropha, the flavonoids pongamol and karanjin in karanja, and oleandrin and neriifolin in oleander. Waste cooking oil is likewise considered a good source of raw material for biodiesel production.
Biodiesel produced from waste cooking oils, vegetable oils, and animal fats draws on a huge source of feedstock [4]. Repeatedly heating vegetable oil for frying can generate a variety of compounds, including polycyclic aromatic hydrocarbons, some of which are carcinogenic. Used cooking oil is harmful for human and animal consumption and also affects the environment if disposed of unscientifically. Because toxic compounds form in it and dangerous compounds could return to the food chain through animal flesh, the European Union has banned feeding used cooking oil to animals [5]. The quantity of waste cooking oil generated per year is among the highest of any waste lipid feedstock, and it can be used for biodiesel production very efficiently. There is a gap between diesel fuel demand and waste cooking oil availability, so waste cooking oil cannot completely replace diesel fuel; however, it lessens dependence on petroleum-derived fuels to some extent [6]. The physicochemical properties of the waste cooking oil and the produced biodiesel, compared with American Society for Testing and Materials (ASTM) standards, are shown in Tables 41.1 and 41.2, respectively [7]. When homogeneous catalysts are used to produce biodiesel, several problems arise, including the difficulty of catalyst recovery, a high likelihood of free fatty acids forming soap, and the need for costly purification and catalyst separation steps, which render the entire process uneconomical [8]. Heterogeneous catalysts are used instead because they do not dissolve or get consumed during the transesterification process, which increases the efficiency of the process [9]. Additionally, the heterogeneous catalyst has a number of benefits


Table 41.1 Physicochemical properties of waste cooking oil

| Properties | Value |
|---|---|
| Density (g/ml) at 25 °C | 0.96 |
| Dynamic viscosity at 40 °C | 37.65 |
| Acid value | 0.272 |
| Free fatty acid content (wt%) | 0.136 |
| Saponification number (mg KOH/g oil) | 216 |
| Mean molecular weight (g/mol) | 866.31 |

Table 41.2 Properties of biodiesel produced from waste cooking oil

| Properties | ASTM test method | B100 limit (based on ASTM D6751) | Test results |
|---|---|---|---|
| Density at 20 °C, g/ml | D341 | – | 0.8763 |
| Flashpoint, °C | D93 | Min. 93 | 96 |
| Cloud point, °C | D2500 | Report | 9 |
| Kinematic viscosity at 40 °C, mm²/s | D445 | 1.9–6.0 | 5.2 |
| Acid value, mg KOH/g | D974 | Max. 0.500 (D874) | 0.497 |
| Sulfur content, mass% | D4294 | Max. 0.05 | 0.0143 |

including reusability, environmental friendliness, non-toxicity, and high-temperature resistance [10]. Since conventional heterogeneous catalysts are costly, biomass-based heterogeneous catalysts have attracted the attention of many researchers as a way to make the entire biodiesel production more economical. Several researchers have investigated biodiesel generation using different biomass ashes as heterogeneous catalysts, such as Brassica nigra waste [11], rice straw waste [12], moringa leaves [13], banana peduncle [14], pineapple leaves [15], Sesamum indicum plant waste [16], waste ginger straw [17], Musa acuminata peduncle [18], raw sugar beet agro-industrial waste [19], wood ash [20], tucuma peels [21], and Musa balbisiana Colla peels and underground stem [22]. Manually varying one factor at a time for optimization takes a lot of time and work. To enhance the effectiveness of optimization, soft computational methods are therefore employed, including central composite design (CCD), Box–Behnken design (BBD), Taguchi orthogonal array (OA), and Plackett–Burman design (PBD) [23]. Among these, RSM-based techniques such as CCD and BBD optimize the conversion process by repeating runs around the central points and demonstrate interaction effects on the process response. In contrast, factorial approaches such as PBD and the Taguchi OA report the effects of individual parameters on the process response without requiring a large number of repeated experiments. The Taguchi OA method is preferred because it examines only a small number of useful


parameters rather than all possible parameter combinations. It saves time and effort by concentrating, with fewer runs, on identifying the variables that have the greatest impact on product quality. The Taguchi technique has been successfully used in many studies as a tool for optimizing biodiesel yield. It was used to determine the best process variables for producing biodiesel from Manilkara zapota (L.) seed: 50 °C reaction temperature, 6:1 methanol-to-oil ratio, 90 min reaction duration, and 1 wt% catalyst concentration [24]. The Taguchi technique was also used by Mahamuni et al. [25] to investigate the impact of process variables on the transesterification of soybean oil using high-frequency ultrasound. The authors found the following settings most effective for producing biodiesel with a yield of more than 92.5% in less than 30 min: 6:1 methanol-to-oil molar ratio, 143 W, and 0.75% (w/w) KOH loading at an ultrasonic frequency of 581 kHz. At 4.5 wt% catalyst loading, a temperature of 60 °C, a 9:1 methanol-to-oil ratio, and an agitation speed of 1250 rpm, Dhawane et al. [26] obtained a maximum yield of 97.5% rubber seed oil methyl ester. The optimization of biodiesel yield from waste cooking oil with an Azolla algae-derived heterogeneous catalyst using the Taguchi technique, however, is not widely documented in the literature. The current study uses the Taguchi orthogonal array technique to optimize the yield of biodiesel from used cooking oil. Dry Azolla algae ash was chosen as the catalyst to increase the reaction rate. Algae are widely available photosynthetic organisms that possess the photosynthetic pigment chlorophyll. The aquatic fern Azolla has a short, branching, floating stalk with roots that extend into the water.
Each leaf has a thick aerial dorsal lobe filled with green chlorophyll and a slightly bigger, thin, colorless, floating ventral lobe; the leaves are arranged alternately. Under certain circumstances, a reddish-brown pigment called anthocyanin lends the fern its color. The plant diameter varies from 1–2.5 cm for small species like Azolla pinnata to 15 cm or more for Azolla nilotica. Azolla plants are triangular or polygonal in form and float alone or in mats on the water's surface [27]. They can be grown anywhere with minimal effort, making Azolla an economically suitable material for biodiesel production. Algae can be found everywhere, which reduces the overall cost of biodiesel production. Since most past studies employed heterogeneous catalysts for their efficacy, an algae-derived heterogeneous catalyst could be the best alternative to homogeneous catalysts owing to its low cost and availability.

41.2 Materials and Method

41.2.1 Materials Required

Waste cooking oil was collected from the boys' hostel mess of NIT Silchar, India. Unwanted material was removed from the waste cooking oil using a fine mesh sieve. For the transesterification process, methanol was chosen as the reactant, which was


purchased from a local chemical retailer. To determine the amount of free fatty acids present in the oil, a predetermined amount of sample, dissolved in ethanol, was titrated against sodium hydroxide (NaOH), with a few drops of phenolphthalein serving as the endpoint indicator. Density and kinematic viscosity were determined with a VDM300 Viscodense meter.

41.2.2 Synthesis of Catalyst

Azolla algae were collected from the lake on the NIT Silchar campus, properly cleaned with distilled water to remove impurities, and placed in direct sunlight for 2 weeks to remove their entire moisture content. The dry Azolla algae were then converted into ash by direct open burning, and the ash was placed in a muffle furnace for calcination, performed for 4 h at 900 °C. Calcination converts the calcium carbonate present in the catalyst into the desirable calcium oxide. The calcination temperature is the most important factor affecting the transesterification process: catalysts calcined above 800 °C show better catalytic activity than catalysts calcined at lower temperatures [28]. Based on past studies, a calcination temperature of 900 °C for a 4-h duration was selected. After calcination, the catalyst was stored in an airtight container to avoid contact with air (Fig. 41.1).

Fig. 41.1 Catalyst preparation


41.2.3 Preliminary Waste Cooking Oil Analysis

The acid value and free fatty acid content of the oil were determined by titration using sodium hydroxide, with phenolphthalein as the endpoint indicator. For the process, 0.4 g of oil was dissolved in 50 ml of ethanol and heated at 40 °C for 5 min. A few (4–5) drops of phenolphthalein indicator were added. The mixture was then titrated against a 0.1 M sodium hydroxide solution until every free fatty acid in the oil was neutralized; the appearance of a light pink color indicates the titration endpoint. The initial and final burette readings were noted. The free fatty acid (FFA) content and acid value (AV) were calculated using the following equations:

FFA% = (28.2 × N × V) / sample weight   (41.1)

AV = (56.1 × N × V) / sample weight   (41.2)

Here, N denotes the normality and V the titer value. To determine the saponification value of the raw oil, 50 ml of alcoholic KOH (0.5 N) was added to 5 g of the sample. The mixture was then boiled for an hour to fully saponify the oil. Once it had cooled, phenolphthalein indicator was added to the solution. The solution was then titrated against HCl solution, and the amount of HCl required for neutralization was measured. The saponification number is determined using the following formula:

Saponification No. = ((K − H) × 28) / W,

where K, H, and W denote the volume of KOH solution in milliliters, the volume of HCl solution in milliliters, and the mass of the oil sample in grams, respectively.
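The titration arithmetic of Eqs. (41.1) and (41.2) and the saponification formula can be sketched as follows. The normality, titer volumes, and sample weights in the example calls are illustrative values, not measurements reported in the study; note that the acid value is always 56.1/28.2 ≈ 1.99 times the FFA percentage, since both come from the same titration.

```python
def ffa_percent(normality, titer_ml, sample_g):
    """Free fatty acid content (%) from NaOH titration, Eq. (41.1)."""
    return 28.2 * normality * titer_ml / sample_g

def acid_value(normality, titer_ml, sample_g):
    """Acid value (mg KOH/g) from the same titration, Eq. (41.2)."""
    return 56.1 * normality * titer_ml / sample_g

def saponification_number(koh_ml, hcl_ml, sample_g):
    """Saponification number (mg KOH/g oil) from back-titration with HCl."""
    return (koh_ml - hcl_ml) * 28 / sample_g

# Illustrative titration: 0.4 g of oil, 0.1 M NaOH, hypothetical 0.19 ml titer
ffa = ffa_percent(0.1, 0.19, 0.4)   # ~1.34 %, below the 2 % limit for
av = acid_value(0.1, 0.19, 0.4)     # direct transesterification
print(f"FFA = {ffa:.2f} %, AV = {av:.2f} mg KOH/g")

# Illustrative saponification back-titration: 5 g oil, hypothetical volumes
sap = saponification_number(koh_ml=50.0, hcl_ml=11.4, sample_g=5.0)
print(f"Saponification number = {sap:.1f} mg KOH/g oil")
```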

41.2.4 Design of Experiments Based on Taguchi L9 Methodology

The Taguchi technique was created by Dr. Genichi Taguchi to analyze the different variables that affect the mean and variation of performance parameters, which define how well a process works. To run the fewest possible tests, the Taguchi experiment design makes use of a set of orthogonal arrays [29], from which the number of experimental runs and their parameters can be chosen. The variation of each parameter was chosen in accordance with prior research, and the parameters with respect to which a result is required were identified. The minimum number of experiments was determined by the following equation:


N = (L − 1)P + 1   (41.3)

where N, L, and P denote the number of experimental runs required, the number of levels, and the number of chosen control parameters, respectively [24].
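For this study (L = 3 levels, P = 4 parameters), Eq. (41.3) gives N = (3 − 1) × 4 + 1 = 9 runs. A small sketch, with the L9 array hard-coded from Table 41.4 using level indices 0–2, also confirms the defining balance property of an orthogonal array: each level of each factor occurs in exactly N/L = 3 runs.

```python
def min_runs(levels, parameters):
    """Minimum number of Taguchi runs, Eq. (41.3): N = (L - 1)P + 1."""
    return (levels - 1) * parameters + 1

print(min_runs(3, 4))  # 9

# L9 orthogonal array from Table 41.4; column order:
# (methanol-to-oil ratio, temperature, reaction time, catalyst)
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Balance check: every level of every factor appears exactly 3 times
for col in range(4):
    counts = [sum(1 for row in L9 if row[col] == lvl) for lvl in range(3)]
    assert counts == [3, 3, 3]
```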

41.2.5 Control Parameters Selection and Levels

Among the many variables affecting biodiesel production (reaction temperature, methanol-to-oil ratio, alcohol type and concentration, reaction time, catalyst type and concentration, moisture content of the oil, and agitation or stirring speed), only the four most significant were selected for this study: methanol-to-oil ratio, reaction time, reaction temperature, and catalyst concentration. The effects of these four factors at three levels were examined using the Taguchi L9 orthogonal array approach (levels: 3, parameters: 4), as indicated in Table 41.3. According to the Taguchi L9 orthogonal array, nine tests in total were carried out, as indicated in Table 41.4.

Table 41.3 Parameters and levels

| | Parameters | Level 1 | Level 2 | Level 3 |
|---|---|---|---|---|
| A | Methanol-to-oil molar ratio | 9:1 | 12:1 | 15:1 |
| B | Catalyst concentration (wt%) | 3 | 4.5 | 6 |
| C | Time of reaction (h) | 2 | 3.5 | 5 |
| D | Reaction temperature (°C) | 55 | 65 | 75 |

Table 41.4 L9 orthogonal array for design of experiment (DOE)

| Experiment No | Methanol-to-oil molar ratio | Temperature (°C) | Reaction time (h) | Catalyst (w/w) | Yield (%) |
|---|---|---|---|---|---|
| 1 | 9:1 | 55 | 2 | 3 | 91 |
| 2 | 9:1 | 65 | 3.5 | 4.5 | 81 |
| 3 | 9:1 | 75 | 5 | 6 | 83 |
| 4 | 12:1 | 55 | 3.5 | 6 | 79 |
| 5 | 12:1 | 65 | 5 | 3 | 83 |
| 6 | 12:1 | 75 | 2 | 4.5 | 82 |
| 7 | 15:1 | 55 | 5 | 4.5 | 78 |
| 8 | 15:1 | 65 | 2 | 6 | 80 |
| 9 | 15:1 | 75 | 3.5 | 3 | 82 |


41.2.6 Experimentation Based on the Taguchi Orthogonal Array Design Matrix

Based on the orthogonal array design, a 100 ml sample of filtered waste cooking oil was poured into a conical flask and heated on a hot plate to remove moisture. Since the free fatty acid value of the oil was 1.34% (less than 2%), direct transesterification was possible. The color of the sample before and after reaching the endpoint of the titration procedure is depicted in Figs. 41.2 and 41.3, respectively. The catalyst was weighed as per the design matrix, and methanol was measured according to the methanol-to-oil molar ratio. When the heated oil reached the required temperature, both catalyst and methanol were poured into the conical flask containing the heated oil sample, and a magnetic stirrer was used for better mixing and reaction. As methanol boils at about 64.7 °C, a condenser was used to avoid its evaporation. Once the reaction time had elapsed, the mixture was emptied into a centrifuge tube and separated in a centrifuge for 20 min. A clear separation of excess methanol, biodiesel, and glycerol was observed, as shown in Fig. 41.5: the top layer was the unreacted methanol, the second layer the biodiesel, the third layer the glycerol, and the last layer at the bottom of the tube the catalyst. The biodiesel phase was separated, poured into a beaker, and then heated to 110 °C to remove the remaining methanol and water vapor. The final sample was then washed with distilled water to remove any remaining traces of the catalyst. The same procedure was repeated for each row of the orthogonal design matrix, and the biodiesel yield was determined using the equation below (Fig. 41.4):

Yield% = (weight of biodiesel obtained / weight of raw oil used) × 100   (41.4)
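Equation (41.4) is simple enough to state as a one-liner; the masses below are hypothetical, chosen only to illustrate how the best observed yield of 91% would arise from 100 ml of oil at the density reported in Table 41.1.

```python
def biodiesel_yield(biodiesel_g, raw_oil_g):
    """Biodiesel yield (%) per Eq. (41.4)."""
    return biodiesel_g / raw_oil_g * 100

# Hypothetical run: 100 ml of oil at 0.96 g/ml gives 96 g of feedstock,
# and 87.36 g of recovered biodiesel corresponds to the best observed yield
print(f"{biodiesel_yield(87.36, 96.0):.0f} %")  # 91 %
```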

41.3 Results and Discussion

41.3.1 Catalyst Characterization

SEM examination was carried out at various magnifications to examine the surface morphology of the raw and calcined algae catalysts. Figure 41.6 shows SEM micrographs at various magnifications. The uncalcined sample shows an amorphous crystal structure and appears to contain a wide size distribution of uneven, irregular particles. The calcined algae catalyst shows a more even particle distribution and a more porous structure, thus indicating a large specific

Fig. 41.2 Sample before titration

Fig. 41.3 Sample after attaining endpoint in titration


Fig. 41.4 Biodiesel samples

Fig. 41.5 Flowchart of waste cooking oil biodiesel production


area. More catalytic sites and higher surface areas are crucial for heterogeneous processes, so highly porous catalysts result in improved conversion efficiency [30]. After calcination of the algae catalyst, sintering of aggregates was seen, which caused agglomeration and a spongy character. The elemental compositions of the samples were revealed by the EDX results shown in Figs. 41.7 and 41.8. The primary constituents of the composite catalyst were potassium, magnesium, calcium, silicon, aluminum, and iron. This blend of alkaline components (calcium, magnesium, and potassium) and acidic components (silicon and iron) gives the catalyst a dual-use capacity, indicating that it can catalyze esterification and transesterification simultaneously when generating biodiesel. These very active substances could also give

Fig. 41.6 SEM analysis: a uncalcined algae catalyst at 20 kX magnification, b calcined algae catalyst at 5 kX magnification, c uncalcined algae catalyst at 30 kX magnification, d calcined algae catalyst at 30 kX magnification


Fig. 41.7 EDAX analysis of uncalcined algae catalyst

Fig. 41.8 EDAX analysis of calcined algae catalyst

the catalyst its catalytic activity [31]. A heterogeneous catalyst was created from banana peels by Pathak et al. [32], who attributed the catalyst's action to the presence of elements such as silicon, calcium, and potassium (Figs. 41.7 and 41.8).

41.3.2 Statistical Analysis

Before conducting the statistical analysis, the Taguchi design must be set up; in this regard, the orthogonal array was framed as shown in Table 41.4. According to the orthogonal array, nine experiments were conducted with


different variable parameters, and the yield of biodiesel (output) obtained in each experiment was noted. Typically, three mathematical criteria, Higher-the-Better, Lower-the-Better, and Nominal-the-Better, are used to examine the output value of a signal-to-noise ratio (SNR). For maximization problems, Higher-the-Better is preferable; similarly, Lower-the-Better is used for minimization problems and Nominal-the-Better for normalization problems. Higher-the-Better was chosen because the goal of this study is to maximize the output yield under ideal conditions. Signal-to-noise analysis identifies the optimum value of each parameter and the ideal combination of parameters for the highest biodiesel yield, but it cannot identify the factor(s) that had a substantial impact on the outcome or the relative contribution of each factor. For this, the response data can be subjected to a statistical analysis of variance (ANOVA), which requires computing sums of squares. The following equations were used to calculate the contribution percentage:

% contribution of factor j = (S_j / S_t) × 100   (41.5)

where S_t represents the total sum of squares for all parameters and S_j represents the sum of squares for the jth parameter,

S_j = n ∑_{i=1}^{3} [(SNR_L)_{ji} − SNR_T]²   (41.6)

where n is the number of tests performed at level i of factor j, and

S_t = ∑_{k=1}^{9} (SNR_k − SNR_T)²   (41.7)
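The full analysis chain (the per-run SNR of Table 41.5, the response-table deltas of Table 41.6, and the percentage contributions of Table 41.7) can be reproduced from Table 41.4 alone. This sketch makes two assumptions explicit: with a single replicate per run, the Higher-the-Better SNR reduces to 20·log10(y); and the tabulated S_j values are recovered when the raw yields, rather than the SNR values, are inserted into Eq. (41.6).

```python
import math

# Yields of the nine runs (Table 41.4) and the level index (0-2) of each
# factor per run: (methanol-to-oil ratio, temperature, time, catalyst)
yields = [91, 81, 83, 79, 83, 82, 78, 80, 82]
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

# Higher-the-Better SNR; with one replicate this is 20*log10(y)
snr = [20 * math.log10(y) for y in yields]

def level_means(values, col):
    """Mean response at each level of factor `col` (rows of Table 41.6)."""
    return [sum(v for v, row in zip(values, L9) if row[col] == lvl) / 3
            for lvl in range(3)]

# Response-table deltas: max - min of the level-mean SNRs per factor
deltas = [max(level_means(snr, c)) - min(level_means(snr, c)) for c in range(4)]

# Sum of squares per factor, Eq. (41.6), computed on the raw yields
grand = sum(yields) / 9
S = [3 * sum((m - grand) ** 2 for m in level_means(yields, c)) for c in range(4)]
contrib = [s / sum(S) * 100 for s in S]  # Eq. (41.5)

print([round(x, 4) for x in snr[:3]])  # [39.1808, 38.1697, 38.3816]
print([round(c, 2) for c in contrib])  # [35.63, 2.56, 20.28, 41.54]
```

The asserted values match the published tables, which supports both assumptions; the factor ranks follow directly from sorting the deltas.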

As shown in Table 41.5, the ideal conditions for biodiesel production are a reaction temperature of 55 °C, a 2-h reaction time, a 9:1 methanol-to-oil molar ratio, and a catalyst concentration of 3 wt%. The biodiesel yield under these ideal circumstances was 91%, a significant amount. In addition, Fig. 41.9 displays the signal-to-noise ratio (SNR) for Taguchi method parameter optimization. A higher SNR correlates with better performance characteristics and a smaller variance of the output characteristic around the desired value [33]. The biodiesel yield decreased as the catalyst content increased because of the high degree of saponification. According to the response table for signal-to-noise ratios shown in Table 41.6, the delta SNR values of the methanol-to-oil ratio and the catalyst concentration are equal and the highest among all factors; thus, rank 1 and rank 2 are assigned to the methanol-to-oil ratio and catalyst concentration, respectively. Based


on the rank, the methanol-to-oil ratio is the most influential parameter on the yield of waste cooking oil biodiesel, followed by the catalyst concentration. The signal-to-noise ratio is a metric that compares the strength of the desired signal with that of the background noise; a higher SNR indicates a response closer to the target. According to Fig. 41.9, the signal-to-noise ratio reaches its highest point under the ideal circumstances (SNR = 39.1808). The most important process variable was identified by measuring the percentage impact of each variable on the biodiesel yield. Table 41.7 summarizes the calculated S_j values and percentage contributions. According to the contribution table, the two major factors were the catalyst concentration, which contributed 41.54%, and the methanol-to-oil molar ratio, which contributed 35.63% to the biodiesel yield. Reaction temperature was the least influential process variable, contributing only 2.56%. This indicates that the majority of the biodiesel conversion occurred soon after the reaction started: the conversion rate is high at the beginning of the reaction and is unaffected once the reaction reaches a steady state (Fig. 41.9).

Table 41.5 Biodiesel yield and SNR

| Run no | SNR | Biodiesel yield (%) |
|---|---|---|
| 1 | 39.1808 | 91 |
| 2 | 38.1697 | 81 |
| 3 | 38.3816 | 83 |
| 4 | 37.9525 | 79 |
| 5 | 38.3816 | 83 |
| 6 | 38.2763 | 82 |
| 7 | 37.8419 | 78 |
| 8 | 38.0618 | 80 |
| 9 | 38.2763 | 82 |

Table 41.6 Response table for SNR

| Level | Methanol-to-oil ratio | Temperature (°C) | Reaction time (h) | Catalyst (w/w) |
|---|---|---|---|---|
| 1 | 38.58 | 38.33 | 38.51 | 38.61 |
| 2 | 38.20 | 38.20 | 38.31 | 38.10 |
| 3 | 38.06 | 38.31 | 38.20 | 38.13 |
| Delta | 0.52 | 0.12 | 0.37 | 0.52 |
| Rank | 1 | 4 | 3 | 2 |

Table 41.7 Percentage of a process parameter's contribution

| Parameter | S_j | % Contribution |
|---|---|---|
| Methanol-to-oil molar ratio | 40.222 | 35.63 |
| Temperature (°C) | 2.889 | 2.56 |
| Reaction time (h) | 22.889 | 20.28 |
| Catalyst (w/w) | 46.889 | 41.54 |

Fig. 41.9 Signal-to-noise ratio (SNR) for the optimization of parameters using the Taguchi method

41.4 Conclusions

The main objective of this experimental investigation is the development and effective application of the Taguchi technique for optimizing numerous process parameters so as to produce the largest biodiesel output from spent cooking oil. To convert the raw oil to methyl ester, nine experiments were carried out based on the orthogonal array design matrix. On the basis of the experimental investigation, the following conclusions are drawn:

• The transesterification process used to convert used cooking oil to methyl ester was successful.
• The highest signal-to-noise ratio (SNR) denotes the ideal circumstances for obtaining the highest biodiesel yield. At a catalyst concentration of 3 wt%, a reaction temperature of 55 °C, and a 2-h reaction time, a yield of 91% was attained.
• The most influential parameters are the methanol-to-oil ratio and the catalyst concentration.


S. Chetri and S. Debbarma

• According to the analysis of variance (ANOVA), the catalyst concentration contributed the most (41.54%).
• When compared with other feedstocks, the cost of producing biodiesel from used cooking oil is low. It is therefore advisable to use waste cooking oil biodiesel as an alternative fuel in current diesel internal combustion engines.

References 1. Gould, T.: IEA [Online] June 22, 2022 [Cited: June 22, 2022.]. https://www.iea.org/news/rec ord-clean-energy-spending-is-set-to-help-global-energy-investment-grow-by-8-in-2022 2. Ma, F., Hanna, M.A.: Biodiesel production: a review. Biores. Technol. 70(1), 1–5 (1999) 3. Kanth, S., Debbarma, S., Das, B.: Effect of hydrogen enrichment in the intake air of diesel engine fuelled with honge biodiesel blend and diesel. Int. J. Hydrogen Energy 45(56), 32521–32533 (2020) 4. Kanth, S., Ananad, T., Debbarma, S., Das, B.: Effect of fuel opening injection pressure and injection timing of hydrogen enriched rice bran biodiesel fuelled in CI engine. Int. J. Hydrogen Energy 46(56), 28789–28800 (2021) 5. Cvengroš, J., Cvengrošová, Z.: Used frying oils and fats and their utilization in the production of methyl esters of higher fatty acids. Biomass Bioenerg. 27(2), 173–181 (2004) 6. Kulkarni, M.G., Dalai, A.K.: Waste cooking oil an economical source for biodiesel: a review. Ind. Eng. Chem. Res. 45(9), 2901–2913 (2006) 7. Degfie, T.A., Mamo, T.T., Mekonnen, Y.S.: Optimized biodiesel production from waste cooking oil (WCO) using calcium oxide (CaO) nano-catalyst. Sci. Rep. 9(1), 1–8 (2019) 8. Anwar, M., Rasul, M.G., Ashwath, N.: Production optimization and quality assessment of papaya (Carica papaya) biodiesel with response surface methodology. Energy Convers. Manage. 156, 103–112 (2018) 9. Balajii, M., Niju, S.: Biochar-derived heterogeneous catalysts for biodiesel production. Environ. Chem. Lett. 17(4), 1447–1469 (2017) 10. Foroutan, R., Mohammadi, R., Razeghi, J., Ramavandi, B.: Biodiesel production from edible oils using algal biochar/CaO/K2 CO3 as a heterogeneous and recyclable catalyst. Renew. Energy 168, 1207–1216 (2021) 11. Nath, B., Das, B., Kalita, P., Basumatary, S.: Waste to value addition: utilization of waste Brassica nigra plant derived novel green heterogeneous base catalyst for effective synthesis of biodiesel. J. Clean. Prod. 239, 118112 (2019) 12. 
Sahu, O.: Characterisation and utilization of heterogeneous catalyst from waste rice-straw for biodiesel conversion. Fuel 287, 119543 (2021) 13. Aleman-Ramirez, J.L., Moreira, J., Torres-Arellano, S., Longoria, A., Okoye, P.U., Sebastian, P.J.: Preparation of a heterogeneous catalyst from moringa leaves as a sustainable precursor for biodiesel production. Fuel 284, 118983 (2021) 14. Balajii, M., Niju, S.: Banana peduncle—a green and renewable heterogeneous base catalyst for biodiesel production from Ceiba pentandra oil. Renew. Energy 146, 2255–2269 (2020) 15. Barros, S.D., Junior, W.A., Sá, I.S., Takeno, M.L., Nobre, F.X., Pinheiro, W., Manzato, L., Iglauer, S., de Freitas, F.A.: Pineapple (Ananás comosus) leaves ash as a solid base catalyst for biodiesel synthesis. Biores. Technol. 312, 123569 (2020) 16. Nath, B., Kalita, P., Das, B., Basumatary, S.: Highly efficient renewable heterogeneous base catalyst derived from waste Sesamum indicum plant for synthesis of biodiesel. Renew. Energy 151, 295–310 (2020) 17. Yu, H., Cao, Y., Li, H., Zhao, G., Zhang, X., Cheng, S., Wei, W.: An efficient heterogeneous acid catalyst derived from waste ginger straw for biodiesel production. Renew. Energy 176, 533–542 (2021)


18. Balajii, M., Niju, S.: A novel biobased heterogeneous catalyst derived from Musa acuminata peduncle for biodiesel production—process optimization using central composite design. Energy Convers. Manage. 189, 118–131 (2019) 19. Abdelhady, H.H., Elazab, H.A., Ewais, E.M., Saber, M., El-Deab, M.S.: Efficient catalytic production of biodiesel using nano-sized sugar beet agro-industrial waste. Fuel 261, 116481 (2020) 20. Sharma, M., Khan, A.A., Puri, S.K., Tuli, D.K.: Wood ash as a potential heterogeneous catalyst for biodiesel synthesis. Biomass Bioenerg. 41, 94–106 (2012) 21. Mendonça, I.M., Paes, O.A., Maia, P.J., Souza, M.P., Almeida, R.A., Silva, C.C., Duvoisin, S., Jr., de Freitas, F.A.: New heterogeneous catalyst for biodiesel production from waste tucumã peels (Astrocaryum aculeatum Meyer): parameters optimization study. Renew. Energy 130, 103–110 (2019) 22. Gohain, M., Devi, A., Deka, D.: Musa balbisiana Colla peel as highly effective renewable heterogeneous base catalyst for biodiesel production. Ind. Crops Prod. 109, 8–18 (2017) 23. Karmakar, B., Ghosh, B., Samanta, S., Halder, G.: Sulfonated catalytic esterification of Madhuca indica oil using waste Delonix regia: L16 Taguchi optimization and kinetics. Sustain. Energy Technol. Assess. 37, 100568 (2020) 24. Kumar, R.S., Sureshkumar, K., Velraj, R.: Optimization of biodiesel production from Manilkara zapota (L.) seed oil using Taguchi method. Fuel 140, 90–96 (2015) 25. Mahamuni, N.N., Adewuyi, Y.G.: Application of Taguchi method to investigate the effects of process parameters on the transesterification of soybean oil using high frequency ultrasound. Energy Fuels 24(3), 2120–2126 (2010) 26. Dhawane, S.H., Bora, A.P., Kumar, T., Halder, G.: Parametric optimization of biodiesel synthesis from rubber seed oil using iron doped carbon catalyst by Taguchi approach. Renew. Energy 105, 616–624 (2017) 27. Lumpkin, T.A., Plucknett, D.L.: Azolla: botany, physiology, and use as a green manure. Econ. Bot. 
34, 111–153 (1980) 28. Boro, J., Thakur, A.J., Deka, D.: Solid oxide derived from waste shells of Turbonilla striatula as a renewable catalyst for biodiesel production. Fuel Process. Technol. 92(10), 2061–2067 (2011) 29. Kim, S., Yim, B., Park, Y.: Application of Taguchi experimental design for the optimization of effective parameters on the rapeseed methyl ester production. Environ. Eng. Res. 15(3), 129–134 (2010) 30. Betiku, E., Akintunde, A.M., Ojumu, T.V.: Banana peels as a biobase catalyst for fatty acid methyl esters production using Napoleon’s plume (Bauhinia monandra) seed oil: a process parameters optimization study. Energy 103, 797–806 (2016) 31. Yusuff, A.S., Adeniyi, O.D., Azeez, S.O., Olutoye, M.A., Akpan, U.G.: Synthesis and characterization of anthill-eggshell-Ni-Co mixed oxides composite catalyst for biodiesel production from waste frying oil. Biofuels Bioprod. Biorefin. 13(1), 37–47 (2019) 32. Pathak, G., Das, D., Rajkumari, K., Rokhum, S.L.: Exploiting waste: towards a sustainable production of biodiesel using Musa acuminata peel ash as a heterogeneous catalyst. Green Chem. 20(10), 2365–2373 (2018) 33. Reddy, M.M., Joshua, C.X.: Performance of Silicon carbide whisker reinforced ceramic inserts on Inconel 718 in end milling process. In: IOP Conference Series: Materials Science and Engineering 2016, vol. 121, no. 1, p. 012005. IOP Publishing (2016).

Chapter 42

Design and Development of Energy Meter for Energy Consumption K. Hariharan, Mathiarasan Vivek Ramanan, Naresh Kumar, D. Kesava Krishna, Arockia Dhanraj Joshuva, and S. K. Indumathi

Abstract In recent days, IoT-based applications have been growing in popularity as they offer efficient solutions to many real-time problems. This article proposes an Internet of Things (IoT)-based electricity metering system in which the meter is monitored using an Android application. It is intended to reduce the manual effort of reading power units and to relieve users of worry about excessive power consumption. An optical sensor and an Arduino UNO were used to capture the electricity meter pulses. The system reduces human reading errors, relates energy consumption to cost, and implements a low-cost wireless setup that can autonomously interpret meter units. IoT is a new field, and IoT-based devices have revolutionized electronics and IT. The main purpose of this work is to raise awareness of power consumption and to save energy by using household appliances efficiently. Because of their manual workflow, EB billing systems have a major disadvantage. The system uses IoT to provide information on meter reading,
K. Hariharan · M. Vivek Ramanan · N. Kumar · D. K. Krishna · A. D. Joshuva (B) · S. K. Indumathi
Centre of Automation and Robotics (ANRO), Department of Mechatronics Engineering, Hindustan Institute of Technology and Science, Chennai 603103, Tamilnadu, India
e-mail: [email protected]
K. Hariharan e-mail: [email protected]
M. Vivek Ramanan e-mail: [email protected]
N. Kumar e-mail: [email protected]
D. K. Krishna e-mail: [email protected]
S. K. Indumathi e-mail: [email protected]
A. D. Joshuva, University Center for Research & Development (UCRD), Chandigarh University, Mohali 140413, Punjab, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_42


power outages, and warnings that generate alarms when power consumption exceeds specified limits. This idea is introduced in order to reduce human dependency in collecting monthly readings and to reduce technical issues in the billing process. The advantage of this system is that users can easily understand the daily energy consumption of household electrical equipment and control it, thereby contributing to energy saving. The Electricity Supply Department will provide consumers with information on billing amounts, payments, and details of pre-scheduled outages.

42.1 Introduction A crucial aspect of energy distribution is meter billing. The use of manual systems is not advisable: power companies have trouble resolving human errors, and as a result customers must visit the company's offices, wait in line, and have their bills rectified. The accuracy of the accounts has to be improved, and the accounting process can be automated by using an automatic reading system [1]. The meters currently introduced in the country can only measure and monitor electricity; remote access is not possible. The existing system also requires many labourers, takes a long time, and is prone to mistakes [2]. These issues can be resolved with "smart power meters," which offer services to customers via SMS in addition to other built-in capabilities like tamper protection, malfunction detection, etc. The completion of this project will improve energy management, promote energy conservation, and remove the unnecessary difficulties caused by improper invoicing [3]. Consumption will be tracked by the billing system, which will also resolve any billing and consumption conflicts. Theft of electricity is another frequent issue. An "ordinary" meter's principal drawback is that it is less dependable, less precise, and more susceptible to tampering; even modern computerized electricity meters can be manipulated unlawfully to some extent [4]. The purpose-built energy meter can verify the distribution transformer's power supply status in order to identify distribution system issues. The implementation of a smart energy meter (SEM) with an Android application results in a fully automated electricity billing system [5]. The aim is to design an SEM that is tamper-proof, supports automatic metering and invoicing, and at the same time helps locate transmission line faults; telemetric communication is used to read the household's energy usage and automatically produce a bill. Smart energy meter surveillance using IoT generates an alert when the power consumption exceeds a specified amount, while meter readings, power outages, and power cuts are recorded. Smart power monitoring using IoT helps optimize and reduce power consumption [7]. An SEM that supports home automation using IoT and enables wireless communication would be a big leap toward Digital India. Electromechanical meters operate in a straightforward manner: an internal non-magnetic metallic disc rotates in accordance with the amount of power flowing through the meter. The disc therefore rotates faster when the power passing through


is high and slower when it is low. The reading on the electricity meter is then determined by the rate of rotation: the reading increases as the rotation increases, and vice versa [8]. Since a disc must rotate for the meter to work, the disc inevitably consumes some electrical energy, around 2 W. The main objective of this project is to monitor the readings of a specific electrical appliance, to reduce unwanted electricity cost, and to manage the power consumed by electrical appliances.

42.2 Methodology See Fig. 42.1.

42.2.1 Components Selected Arduino UNO The Arduino UNO is a microcontroller board based on the Microchip ATmega328P microcontroller. It is simple to connect to a computer. Arduino board designs use a variety of microprocessors and controllers [9]. The board offers digital and analogue input/output pins that may be connected to breadboards (used for prototyping) and other circuits.

Fig. 42.1 Working diagram


42.2.2 16*2 LCD Display LCD 16×2 refers to an electronic display module that shows information or messages. As its name implies, it can display a maximum of 32 characters (16 × 2 = 32), each composed of 5 × 8 = 40 dots, i.e. 1280 pixels in total. LCD displays are thinner than CRTs, and LCD panels use less power than LED screens because they operate on the principle of blocking light rather than emitting it.

42.2.3 Resistor Resistor colour codes consist of three or four colour bands followed by a tolerance band. If a temperature coefficient band is present, it appears as a broad band to the right of the tolerance band, usually near the end cap. The resistors are mounted on a board assembled from sandwiched layers of insulating phenolic fibre board and conductive copper strips, on which the components are connected and secured. The project uses a total of 8 resistors.

42.2.4 Diode A diode permits current flow in only one direction above a certain voltage level. An ideal diode has infinite resistance under reverse bias and no resistance under forward bias. The most popular kind of diode is the semiconductor diode, which begins to conduct current only in the presence of a specific forward-bias threshold voltage. The "reverse breakdown voltage" is the voltage at which reverse breakdown takes place [2]. The diode can conduct reverse current if the circuit voltage is greater than the reverse breakdown voltage; therefore, rather than infinite resistance, blocking diodes in practice present a very high resistance. There are three diodes in this project: D1 (M7), D2 (ES1J), and D3 (RS1M).

42.2.5 Capacitor A capacitor is a device that stores energy in the form of electrical charge. Compared with a battery of the same size, a capacitor stores much less energy, about 10,000 times less, but it is useful enough for many circuit designs.


42.2.6 Internal Battery This allows the meter to communicate with the utility (via modem) if the meter loses power.

42.2.7 Inductor Inductors are passive electrical components consisting of coils of wire designed to take advantage of the interaction between magnetism and electricity by forming a magnetic flux in the coil itself or in the device on which it acts. In other words, an inductor is an electrical device with inductance.

42.2.8 Microchip A microchip, also known as an IC, is a group of electronic circuits on a compact, flat piece of silicon. On a chip, transistors act as switches that turn the current passing through them on and off. A fine switch pattern is created on a silicon wafer using a multi-layer lattice of connected forms.

42.2.9 Tact Switch Tactile switches, also known as tactile pushbutton switches, are used in a wide range of products such as home appliances, office equipment, and retail and industrial equipment. Users can select optional properties for specific applications, e.g. size (external dimensions), board mounting, and sealing structure.

42.2.10 Working Principle Depending on whether gas or electricity is being measured, smart meters operate differently. Connected to the grid, smart electricity meters keep track of power consumption. The smart meter communicates the same data to the In-Home Display (IHD) in the home. Readings of voltage, current, power consumption, number of units, and the associated price are computed and shown on the 16*2 LCD display module and in the mobile application (Figs. 42.2, 42.3 and 42.4).
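The pulse-to-unit conversion behind this working principle can be sketched as follows. This is an illustrative Python model, not the chapter's firmware; the meter constant and tariff are assumed values, not figures given in the chapter:

```python
# Hypothetical pulse-to-bill calculation: the optical sensor counts meter LED
# pulses, and the Arduino converts the count into units (kWh) and a price.
IMP_PER_KWH = 3200          # assumed meter constant (impulses per kWh)
PRICE_PER_UNIT = 6.0        # assumed flat tariff, currency units per kWh

def units_consumed(pulse_count):
    """kWh corresponding to a number of optical pulses."""
    return pulse_count / IMP_PER_KWH

def bill_amount(pulse_count):
    """Billing amount for the given pulse count at the flat tariff."""
    return units_consumed(pulse_count) * PRICE_PER_UNIT

pulses = 16000              # e.g. pulses counted since the last billing cycle
print(units_consumed(pulses))   # 5.0 kWh
print(bill_amount(pulses))      # 30.0
```

The same two quantities are what the system shows on the LCD module and in the Android application.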


Fig. 42.2 TINKER CAD simulation

Fig. 42.3 Before simulation

Fig. 42.4 After the simulation


42.3 Conclusion The common model is used to calculate the units consumed at home and is also convenient for reading energy units. This reduces energy wastage and raises public awareness, and manual work is also reduced. The proposed system reduces consumer stress and alerts users to excessive power consumption and malfunctioning appliances at home. It allows customers to easily check the total impulse count, total number of units, and total electricity bill. The system is readable and reliable. Data stored in the cloud is very valuable for future data mining of electricity meter readings. Distributors like DPDC can broadly observe local consumption patterns; this observation is useful for load balancing in a specific area.

42.3.1 Future Scope
1. GSM module for sending and receiving information to users.
2. Solar panels could be used for better usage of smart energy meters.
3. Develop more settings and options in the GUI application.
4. Interface smoke and fire detection, and an automatic alarm.

References 1. Joshi, D., Kolvekar, S., Raj, Y., Singh, S.: IoT based smart energy meter. Bonfring Int. J. Res. Commun. Eng. 6(Special Issue), 89–91 (2016) 2. Kádár, P.: Smart meters based on energy management systems. Renew. Energy Power Qual. J. 1, 1160–1163 (2011) 3. Reddy, V.: GSM based smart energy meter using with the help of Arduino. Int. J. Res. Appl. Sci. Eng. Technol. 7(3), 2073–2080 (2019) 4. Shinde, M., Yadav, M., Zapake, M.: IoT based smart energy meter. Int. J. Trend Sci. Res. Dev. 1(6), 1151–1153 (2017) 5. Estibeiro, C.R.: IOT based energy meter reading system. Int. J. Adv. Res. Sci. Commun. Technol. 2, 35–39 (2022) 6. Nallasivam, M., Saravanavasan, R., Vignesh, A., Nandhu, S.: Design of smart energy meter with IOT using Arduino mega compositions. ECS Trans. 107(1), 19435–19442 (2022) 7. Abdollahi, A., Dehghani, M., Zamanzadeh, N.: SMS-based automatic meter reading system. In: IEEE International Conference on Control Applications (CCA 2007), pp. 1103–1107 (2007) 8. Hao, Q., Song, Z.: The status of development in the intelligent automatic meter reading system. In: Proceedings of China Science and Technology Information, no. 19, p. 72 (2005) 9. Maitra, S.: Embedded energy meter: a new concept to measure the energy consumed by a consumer and to pay the bill. In: Power System Technology and IEEE Power India Conference (2008)

Chapter 43

Thermal Performance Study of a Flat Plate Solar Air Heater Using Different Insulating Materials Pijush Sarma, Monoj Bardalai, Partha Pratim Dutta, and Harjyoti Das

Abstract This study focuses on the evaluation of the thermal performance of a flat plate solar air heater (SAH), one of the most important applications of solar thermal energy. In this work, a simulation using the MATLAB software is carried out to study the behaviour of the various parameters which influence the thermal performance of the SAH. The ranges of parameters chosen are mass flow rate (m = 0.042–0.068 kg/s), thickness of insulation (t_i = 0.02–0.08 m), absorber plate area (A_p = 1–2.5 m²), overall heat loss coefficient (0–10 W/m² K) and collector tilt (β = 0°–60°). Also, an investigation is carried out on different insulating materials, namely thermocol, pine wood, sugarcane bagasse and bubble wrap, that can be used to prevent losses from the bottom and side walls of the SAH. The simulation findings showed that the heat transfer coefficient (h) increases linearly as the mass flow rate rises. When the insulation thickness is increased, the side and bottom loss coefficients fall, and materials with high thermal conductivity have higher side and bottom loss coefficients. Additionally, a linear relationship between the collector efficiency factor (F) and the insulation thickness (t_i) is evident from this study.

43.1 Introduction Solar energy is one of the most promising and easily accessible renewable energy sources. One of the most popular types of solar energy utilization systems is the SAH. These devices absorb solar radiation, transform it into heat energy at the absorbing surface, and then transfer it to the fluid flowing in the duct. Benefits of the SAH include easy construction, few parts, low cost and longer maintenance intervals, which reduce overall expenditure. Ducts, an absorber tray, glazing and insulation are an SAH's main components [1, 2]. The rate of convective heat transfer between the air and the absorber plate determines the heater's overall efficacy. However, the 'h' between the collector plate and
P. Sarma · M. Bardalai (B) · P. Pratim Dutta · H. Das
Tezpur University, Tezpur, Assam 784028, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_43


the air is minimal, which limits the advancement of SAHs and their thermal efficiency. Nevertheless, a number of researchers have proposed various methods to improve the thermal performance, such as adding booster mirrors to enhance solar insolation, altering the shape of the collector plate to enhance 'h', jet impingement, attaching baffles or fins, and using vortex generators to increase turbulence; sensible heat storage (SHS) materials such as pebbles, desert sand, metal chips and gravel can boost thermal efficiency and decrease heat losses, and phase change materials (PCM) greatly affect the air exit temperature and are an efficient way to boost the heater's efficiency [3–5]. The thermal performance of an SAH with a conical surface was studied by Abuşka and Kayapunar [6], which showed that flat absorber plates had lower thermal efficiency than conical ones. The use of vortex generators to increase the turbulence of the air flow enhances heat transfer rates and friction factors, along with energy and exergy efficiency [7, 8]. Akhbari et al. [9] constructed a pilot plant of a triangular-channel SAH which demonstrated a linear relationship between thermal efficiency and 'm'. The correct positioning of baffles on the collector plate plays a significant role in determining the SAH's thermal efficiency, as investigated by Bensaci et al. [10]. The performance of flat and curved SAHs was evaluated by Singh and Singh [11], which showed the curved solar panel had a higher outlet air temperature than the flat plate one. Parameters like solar insolation, 'm' and 'A_p' all play an important part in deciding the efficiency of the SAH, as shown by Abuşka [12]. An experimental study on three SAHs with various cross-sectional shapes was conducted by Abdullah et al. [13], which showed the circular arrangement has the highest efficiency. Experimental research on an SAH channel with integrated winglets and wavy grooves was done by Skullong et al. [14], which demonstrated that the newly developed model induces a shift in flow direction, significantly increasing heat transfer. The drying of edible products like black tea and Garcinia pedunculata by upgraded new designs of SAH-cum-dryers showed effective solar drying compared with conventional sun drying [2, 15–20]. Comparative studies related to the modification of the collector plate along with ducts of different configurations have been put forward by various researchers to study the impact on the performance of these devices [21–24]. An SAH has several benefits; however, because of its low 'h', its thermal performance is still far from its optimal rate. The literature showed that several variables, including 'h', 'm', solar radiation and the surface structure of the collector plate, affect the SAH's thermal efficacy. However, the use of different insulation materials to reduce the various losses from the heater has not received much attention yet. Moreover, there are very few works where the parameters involving the various losses are studied to see their effect on the thermal performance. Hence, in this report the thermal performance of an SAH is studied using the MATLAB software. The modelling equations for the thermal performance calculation are given by Singh and Singh [12] and Saxena et al. [25], and require the collector thermal energy gain, the solar radiation, and the various losses to the surroundings. In addition, the use of various insulation materials to study the heat loss coefficients that affect the heater's overall performance is focused upon. The results of the simulation demonstrated that 'h' grows linearly as the mass flow rate rises. The side and bottom loss coefficients


decrease with increasing insulation thickness, and they are higher for materials with high thermal conductivity. It is clear from this analysis that the variation of the 'F' factor with 't_i' is linear. Furthermore, there exists a linear relation between 'm', the solar insolation and the thermal efficiency of the heater.

43.2 Methodology This report focuses on how different performance parameters affect the thermal performance of the SAH. The various effects are studied using the MATLAB software, and the ranges of the parameters, taken from the literature, namely mass flow rate (m), thickness of insulation (t_i), absorber plate area (A_p), overall heat loss coefficient (U_l) and collector tilt (β), are listed in Table 43.1. Also, by providing insulation materials of a specific thickness, side and bottom losses from the SAH can be decreased, which is expressed in terms of the overall loss coefficient. Five different insulating materials, namely thermocol, glass wool, bubble wrap, sugarcane bagasse and wood, are used in this study to examine the impact on thermal performance; they are listed in Table 43.2 along with their thermal conductivities. On a particular day in the month of September 2022, the average intensity of solar radiation (I) in Tezpur, Assam was 997 W/m². In Fig. 43.1, a pyranometer with a range of 0–1999 W/m² is used to measure the solar insolation. The ambient temperatures are measured using a thermocouple with a range of −10 to 120 °C, as shown in Fig. 43.2, and the average ambient temperature that

Table 43.1 Searched range of design and operating parameters

Sl. No | Parameters                          | Range
1      | Mass flow rate (m)                  | 0.036–0.068 kg/s
2      | Overall heat loss coefficient (U_l) | 0–10 W/m² K
3      | Thickness of insulation (t_i)       | 0.02–0.08 m
4      | Absorber plate area (A_p)           | 0.5–2.5 m²
5      | Collector tilt (β)                  | 0°–60°

Table 43.2 Thermal conductivities of different insulation materials

Sl. No | Materials         | Thermal conductivity (W/mK)
1      | Thermocol         | 0.050
2      | Glass wool        | 0.031
3      | Bubble wrap       | 0.038
4      | Sugarcane bagasse | 0.048
5      | Wood              | 0.150


was observed was about 33 °C. The average wind speed was around 0.8 m/s, and an anemometer, shown in Fig. 43.3, is used to measure the wind speed.

Fig. 43.1 Pyranometer

Fig. 43.2 Thermocouple


Fig. 43.3 Anemometer

43.2.1 Model Description Figure 43.4 depicts the SAH in schematic view. The cross-sectional dimensions of the air heater are W = 1 m (width) and H = 0.06 m (duct height), and its length is L = 2 m. The absorber plate has a specific selective coating with a high absorptivity (α), and the upper side of the SAH is protected by a glass cover with high transmissivity (τ). Moreover, the air duct sidewalls and the bottom wall of the absorber are assumed to be thermally insulated.

Fig. 43.4 Schematic view of SAH


43.2.2 Thermal Modelling Equations The following assumptions are made to analyse the SAH:
1. The collector functions only under steady-state conditions.
2. The average air temperature in the ducts and the operating temperatures of the SAH components are assumed uniform.
3. Only the flow direction causes the air temperature to vary.
4. The air channel is assumed to be leak-free.
5. The inlet temperature is assumed identical to the ambient temperature.

The equations used to evaluate the different parameters that reflect the thermal performance of the solar heater are presented below [2, 12, 25].

Useful heat gain,

Q_u = F_R A_p [I(τα) − U_l (T_o − T_i)]  (43.1)

where F_R is the collector heat-removal factor, (τα) is the transmittance-absorptance product of the glass cover, and T_o and T_i are the outlet and inlet air temperatures. The expression for F_R is

F_R = (ṁ C_p / (U_l A_p)) [1 − exp(−F U_l A_p / (ṁ C_p))]  (43.2)

where C_p is the specific heat of air in J/kg K.

Collector efficiency factor,

F = h / (h + U_l)  (43.3)

The outlet temperature is expressed as

T_o = Q_u / (ṁ C_p) + T_i  (43.4)

Heat transfer coefficient,

h = Q_u / [A_p (T_pm − T_a)]  (43.5)

where T_a is the ambient temperature in K.

Mean plate temperature,

T_pm = T_a + F_R I(τα) [(1 − F_R) / (F_R U_l) + (T_o − T_i) / (I(τα))]  (43.6)

Overall heat loss coefficient,

U_l = U_b + U_s + U_t  (43.7)

The overall heat loss is given by

Q_l = U_l A_p (T_pm − T_a)  (43.8)

The bottom loss (U_b), side loss (U_s) and top loss (U_t) coefficients are given as

U_b = k_i / t_i  (43.9)

U_s = (L + W) H k_i / (L W t_i)  (43.10)

where k_i is the thermal conductivity of the insulation and t_i is the thickness of the insulation, and

U_t = { M / [(C / T_pm) ((T_pm − T_a) / (M + f))^0.33] + 1 / h_w }^(−1) + σ (T_pm² + T_a²)(T_pm + T_a) / [1 / (ε_p + 0.05 M (1 − ε_p)) + (2M + f − 1) / ε_g − M]  (43.11)

where M is the number of glass covers, σ is the Stefan-Boltzmann constant, ε_p is the emissivity of the absorber plate and ε_g is the emissivity of the glass cover, with

f = (1 + 0.089 h_w − 0.1166 h_w ε_p)(1 + 0.0786 M)  (43.12)

C = 520 (1 − 0.0000513 β²)  (43.13)

where β is the tilt angle in degrees. The wind heat transfer coefficient is

h_w = 5.7 + 3.8 V_w  (43.14)

where V_w is the wind velocity.

Thermal efficiency,

η_th = Q_u / (I A_p) = F_R [(τα) − U_l (T_o − T_i) / I]  (43.15)
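The core of the model above can be coded compactly. The following Python sketch implements Eqs. 43.2, 43.3, 43.9, 43.10 and 43.15 (a translation of the relations used in the MATLAB simulation; the numerical inputs are illustrative values drawn from the ranges in Tables 43.1 and 43.2 and an assumed transmittance-absorptance product, not results reported in this chapter):

```python
import math

# Duct geometry from Sect. 43.2.1 and air properties.
L, W, H = 2.0, 1.0, 0.06        # length, width, duct height (m)
Ap = L * W                      # absorber plate area (m^2)
Cp = 1005.0                     # specific heat of air (J/kg K)

def bottom_loss(k_i, t_i):
    """Bottom loss coefficient U_b = k_i / t_i (Eq. 43.9)."""
    return k_i / t_i

def side_loss(k_i, t_i):
    """Side loss coefficient U_s = (L + W) H k_i / (L W t_i) (Eq. 43.10)."""
    return (L + W) * H * k_i / (L * W * t_i)

def efficiency(m_dot, Ul, h, I, To, Ti, tau_alpha=0.85):
    """Thermal efficiency via F (Eq. 43.3), F_R (Eq. 43.2), eta (Eq. 43.15).
    tau_alpha = 0.85 is an assumed transmittance-absorptance product."""
    F = h / (h + Ul)
    a = F * Ul * Ap / (m_dot * Cp)
    FR = (m_dot * Cp) / (Ul * Ap) * (1 - math.exp(-a))
    return FR * (tau_alpha - Ul * (To - Ti) / I)

# Glass wool insulation (k = 0.031 W/mK, Table 43.2) at t_i = 0.05 m:
print(round(bottom_loss(0.031, 0.05), 3))   # 0.62 W/m^2 K
print(round(side_loss(0.031, 0.05), 4))     # 0.0558 W/m^2 K
print(efficiency(0.05, 6.0, 25.0, 997.0, 335.0, 309.0))
```

The small side and bottom loss values at 0.05 m are consistent with the flattening of the loss curves beyond that thickness discussed in the next section.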

43.3 Results and Discussion Using the MATLAB software, it has been studied how different performance factors affect the SAH's thermal performance, as demonstrated in Figs. 43.5–43.12. For various ambient temperatures, Fig. 43.5 illustrates the variation of mass flow rate (0.036–0.068 kg/s) with 'h'. It is observed that 'h' increases linearly as 'm' increases. Additionally, a higher 'h' is obtained at higher ambient temperatures. Figure 43.6 illustrates the similar variation of mass flow rate and convective heat transfer coefficient for absorber plates of varied areas (1, 1.5, 2 and 2.5 m²). This variation also exhibits a linear relationship, with the largest 'h' for an 'A_p' of 1 m² and the lowest for 2.5 m². This is because 'h' increases for a lower 'A_p', as is evident from Eq. 43.5. By providing insulation materials of a specific thickness, side and bottom losses from the SAH can be decreased. Five different insulating materials, including thermocol, glass wool, bubble wrap, sugarcane bagasse and wood, were used in this study to

[Figure: plot of convective heat transfer coefficient (W/m²K) against mass flow rate of air (kg/s), for Ap = 2 m², To = 335 K, Ti = 309 K and ambient temperatures of 299–309 K]

Fig. 43.5 'm' versus 'h' for different 'Ta'

43 Thermal Performance Study of a Flat Plate Solar Air Heater Using …


[Figure: plot of convective heat transfer coefficient (W/m²K) against mass flow rate of air (kg/s), for Ti = 309 K, To = 335 K and absorber plate areas of 1, 1.5, 2 and 2.5 m²]

Fig. 43.6 'm' versus 'h' for different 'Ap'

Fig. 43.7 ‘t i ’ versus side and bottom loss coefficients for different insulating materials


Fig. 43.8 Collector thermal energy gain versus ‘U l ’ for different ‘Ap ’

Fig. 43.9 ‘U l ’ versus ‘β’ for different insulating materials


Fig. 43.10 ‘t i ’ versus ‘F’ for different numbers of glass covers

Fig. 43.11 ‘t i ’ versus ‘F’ for different insulating materials


Fig. 43.12 Thermal efficiency versus ‘m’ for different solar insolation

examine the effect on thermal performance. The variation of the side and bottom loss coefficients with insulation thickness 'ti' (0.02–0.08 m) for the various insulating materials is shown in Fig. 43.7. The side and bottom loss coefficients decrease as insulation thickness increases; furthermore, they become virtually constant beyond an insulation thickness of 0.05 m.
The overall heat loss coefficient plays an important role in determining the SAH's thermal performance: the greater the heat loss, the lower the collector thermal energy gain. Figure 43.8 shows the variation of thermal energy gain with the overall heat loss coefficient for different absorber plate areas. With an increase in 'Ul', the collector energy gain decreases parabolically, so a lower 'Ul' yields a higher energy gain; this can be achieved by using materials of suitable thermal conductivity at an appropriate insulation thickness. Additionally, higher collector energy gains were obtained for higher values of 'Ap', which ultimately impacts the SAH's efficiency.
The overall loss coefficient is the sum of the top, bottom and side loss coefficients, and Eq. 43.13 shows that 'Ut' depends on the tilt angle 'β'. Figure 43.9 depicts how the overall loss coefficient changes with collector tilt for different insulating materials at a fixed number of glass covers. The relationship is linear, with materials of higher 'ki' having a higher overall loss coefficient. A lower heat loss coefficient in turn reduces the overall heat loss from the system, as is also evident from Eq. 43.8. For reduced top loss coefficients, smaller collector tilts of roughly 25°–30° may be employed.
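Rearranging Eq. 43.15 gives the useful gain as Q_u = F_R A_p [I(ατ) − U_l(T_o − T_i)], which makes the inverse trend of Fig. 43.8 easy to reproduce. The sketch below is hypothetical: I, F_R and (ατ) are assumed values, not the paper's inputs, and this first-order relation is linear in U_l (the parabolic shape in Fig. 43.8 presumably arises because F_R and the plate temperature themselves vary with U_l in the full model).

```python
def collector_gain(U_l, A_p, I=900.0, F_R=0.8, alpha_tau=0.85,
                   T_o=335.0, T_i=309.0):
    """Useful collector gain Q_u (W), rearranged from Eq. 43.15.

    I (W/m^2), F_R and alpha_tau are assumed illustrative values.
    """
    return F_R * A_p * (I * alpha_tau - U_l * (T_o - T_i))

# gain falls as U_l rises and grows with plate area, as in Fig. 43.8
for A_p in (1.0, 1.5, 2.0, 2.5):
    print(A_p, [round(collector_gain(U_l, A_p), 1) for U_l in (2, 4, 6)])
```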


The overall heat gain can be enhanced by increasing the collector efficiency factor, as can be observed from Eqs. 43.2 and 43.3. Additionally, 'F' is affected by the overall heat loss coefficient, which in turn depends on 'ti'. The variation of 'F' with 'ti' (0.02–0.08 m) for different numbers of glass covers 'M' is shown in Fig. 43.10. It is evident that 'F' increases with insulation thickness. More glass covers also result in a larger value of 'F', although the difference between these values is not particularly large. Similarly, Fig. 43.11 shows the variation of 'F' with 'ti' for insulating materials of different thermal conductivities. For materials with lower thermal conductivities, 'F' grows gradually with increasing 'ti' and becomes nearly constant beyond ti = 0.06 m, while for materials like wood (ki = 0.15 W/mK), which have a higher 'ki', 'F' increases rapidly until it reaches about 0.80, a value comparable to that of the other insulating materials. Figure 43.12 depicts the relationship between 'm' and the SAH's thermal efficiency; a linear relation exists between the two, as is also evident from Eqs. 43.1 and 43.2. Efficiency increases rapidly with increasing 'm', and higher efficiency is obtained at higher solar radiation, so the relationship between solar insolation and thermal efficiency is also linear.

43.4 Conclusion

This report focuses on how different performance parameters affect the thermal performance of the SAH. The various effects are studied using MATLAB, and different ranges of design parameters are considered, namely m, ti, Ap and Ul. By providing insulation materials of a specific thickness, side and bottom losses from the SAH can be decreased, which is quantified in terms of the overall loss coefficient. Five insulating materials, including thermocol, glass wool, bubble wrap and sugarcane bagasse, were used in this study to examine the impact on thermal performance. From the simulation results, some of the important conclusions are:
• 'h' increases linearly as the mass flow rate increases. Additionally, higher 'h' is obtained at higher ambient temperatures.
• Higher values of 'h' are obtained for a small absorber plate area.
• The side and bottom loss coefficients decrease as insulation thickness increases and become virtually constant beyond a 'ti' of 0.05 m.
• Materials with high thermal conductivity have high side and bottom loss coefficients.
• There exists an inverse relationship between the collector energy gain and 'Ul' for the various 'Ap'.


• The variation of 'F' with 'ti' for different numbers of glass covers shows a linear relationship. More glass covers result in a larger value of 'F', although the difference between these values is small.
• For materials with lower thermal conductivities, 'F' grows gradually with 'ti' and becomes nearly constant beyond ti = 0.06 m.
• There exists a linear relationship between 'm' and the thermal efficiency, with higher 'm' yielding higher efficiency.

References
1. Sukhatme, S.P., Nayak, J.K.: Solar Energy, 4th edn. McGraw-Hill Education, New Delhi (2017)
2. Saxena, A., El-Sebaii, A.A.: A thermodynamic review of solar air heaters. Renew. Sustain. Energy Rev. 43, 863–890 (2015). https://doi.org/10.1016/j.rser.2014.11.059
3. Jia, B., Liu, F., Wang, D.: Experimental study on the performance of spiral solar air heater. Sol. Energy 182, 16–21 (2019). https://doi.org/10.1016/j.solener.2019.02.033
4. Olivkar, P.R., Katekar, V.P., Deshmukh, S.S., Palatkar, S.V.: Effect of sensible heat storage materials on the thermal performance of solar air heaters: state-of-the-art review. Renew. Sustain. Energy Rev. 157, 112085 (2022). https://doi.org/10.1016/j.rser.2022.112085
5. Singh, A.K., Agarwal, N., Saxena, A.: Effect of extended geometry filled with and without phase change material on the thermal performance of solar air heater. J. Energy Storage 39, 102627 (2021). https://doi.org/10.1016/j.est.2021.102627
6. Abuşka, M., Kayapunar, A.: Experimental and numerical investigation of thermal performance in solar air heater with conical surface. Heat Mass Transf. 57, 1791–1806 (2021). https://doi.org/10.1007/s00231-021-03054-5
7. Baissi, M.T., Brima, A., Aoues, K., Khanniche, R., Moummi, N.: Thermal behavior in a solar air heater channel roughened with delta-shaped vortex generators. Appl. Therm. Eng. 165, 113563 (2020). https://doi.org/10.1016/j.applthermaleng.2019.03.134
8. Xiao, H., Dong, Z., Liu, Z., Liu, W.: Heat transfer performance and flow characteristics of solar air heaters with inclined trapezoidal vortex generators. Appl. Therm. Eng. 179, 115484 (2020). https://doi.org/10.1016/j.applthermaleng.2020.115484
9. Akhbari, M., Rahimi, A., Hatamipour, M.S.: Modeling and experimental study of a triangular channel solar air heater. Appl. Therm. Eng. 170, 114902 (2020). https://doi.org/10.1016/j.applthermaleng.2020.114902
10. Bensaci, C.E., Moummi, A., De la Flor, F.J., Jara, E.A., Rincon-Casado, A., Ruiz-Pardo, A.: Numerical and experimental study of the heat transfer and hydraulic performance of solar air heaters with different baffle positions. Renew. Energy 155, 1231–1244 (2020). https://doi.org/10.1016/j.renene.2020.04.017
11. Singh, A.P., Singh, O.P.: Curved vs flat solar air heater: performance evaluation under diverse environmental conditions. Renew. Energy 145, 2056–2073 (2020). https://doi.org/10.1016/j.renene.2019.07.090
12. Abuşka, M.: Energy and exergy analysis of solar air heater having new design absorber plate with conical surface. Appl. Therm. Eng. 131, 115–124 (2018). https://doi.org/10.1016/j.applthermaleng.2017.11.129
13. Abdullah, A.S., El-Samadony, Y.A., Omara, Z.M.: Performance evaluation of plastic solar air heater with different cross-sectional configuration. Appl. Therm. Eng. 121, 218–223 (2017). https://doi.org/10.1016/j.applthermaleng.2017.04.067


14. Skullong, S., Promvonge, P., Thianpong, C., Jayranaiwachira, N., Pimsarn, M.: Heat transfer augmentation in a solar air heater channel with combined winglets and wavy grooves on absorber plate. Appl. Therm. Eng. 122, 268–284 (2017). https://doi.org/10.1016/j.applthermaleng.2017.04.158
15. Dutta, P.P.: Prospect of renewable thermal energy in black tea processing in Assam: an investigation for energy resources and technology. Ph.D. dissertation, Tezpur University, India (2017)
16. Dutta, P.P., Kumar, A.: Development and performance study of solar air heater for solar drying applications. In: Solar Drying Technology: Concept, Design, Testing, Modeling, Economics, and Environment, pp. 579–601. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-3833-4_21
17. Dutta, P., Dutta, P.P., Kalita, P.: Thermal performance studies for drying of Garcinia pedunculata in a free convection corrugated type of solar dryer. Renew. Energy 163, 599–612 (2021). https://doi.org/10.1016/j.renene.2020.08.118
18. Sharma, A., Dutta, P.P.: Scientific and technological aspects of tea drying and withering: a review. Agric. Eng. Int. CIGR J. 20, 210–220 (2018)
19. Sharma, A., Dutta, A.K., Bora, M.K., Dutta, P.P.: Study of energy management in a tea processing industry. In: AIP Conference Proceedings 2091, Assam, pp. 1–7 (2019). https://doi.org/10.1063/1.5096503
20. Sharma, A., Dutta, P.P.: Exergy analysis of a solar thermal energy powered tea withering trough. Mater. Today Proc. 47(11), 3123–3128 (2021). https://doi.org/10.1016/j.matpr.2021.06.181
21. Dutta, P.P., Goswami, P.: A comparative thermal performance studies between two different configurations of absorber plates in a double pass solar air heater using CFD and experimental analysis. Adv. Sci. Technol. 2, 178–183 (2021)
22. Dutta, P.P., Begum, S.S., Jangid, H., Goswami, A.P., Bardalai, M., Dutta, P.P.: Modeling and performance evaluation of a small solar parabolic trough collector (PTC) for possible purification of drained water. Mater. Today Proc. (2021). https://doi.org/10.1016/j.matpr.2021.04.489
23. Dutta, P.P., Kakati, H., Bardalai, M., Dutta, P.P.: Performance studies of trapezoidal, sinusoidal and square corrugated aluminium alloy (AlMnCu) plate ducts. Model. Simul. Optim. Smart Innov. Syst. Technol. 206, 751–774 (2021). https://doi.org/10.1007/978-981-15-9829-6_59
24. Sharma, A., Dutta, P., Goswami, P., Dutta, P.P., Das, H.: Possibility of waste aluminum can for solar air heater through thermal performance studies. In: International Conference on Waste Management Towards Circular Economy, KIIT University (2019)
25. Somayeh, S.D., Shadi, M.: Optimization-decision making of roughened solar air heaters with impingement jets based on 3E analysis. Int. Commun. Heat Mass Transfer 205, 106607 (2021). https://doi.org/10.1016/j.icheatmasstransfer.2021.105742

Chapter 44

L and S Beacon Dualband Antenna for Biomedical Application H. Naveen Mugesh, K. Nandha Kishore, M. Harini, and K. C. RajaRajeshwari

Abstract In this work, a beacon antenna was designed to cover the S band (2–4 GHz) and L band (1–2 GHz) for medical purposes; the proposed antenna covers both bands. A ground plane was developed from the beacon antenna to increase the gain, directivity, and efficiency. The antenna covers the S-band and L-band frequency ranges with low return loss and high gain, resonating at 2.1 GHz (−38 dB) in the S band and at 1.3 GHz (−28 dB) for L-band applications. Based on this bandwidth, the data rate can be enhanced to improve throughput and latency. The result is a low-cost, lightweight, low-profile antenna capable of sustaining good performance over a wide range of frequencies. ADS was used to produce the antenna simulations. The numerical results are presented in terms of resonant frequency, reflection coefficient, voltage standing wave ratio, radiation efficiency, receiver sensitivity, and beam width. Applications include biomedical systems, satellite navigation, digital audio and video broadcasting, and weather surveillance radar for air traffic control.

44.1 Introduction

An antenna is a device used to transmit and receive radio frequency signals. It is a specialised transducer that comes in all shapes and sizes, from the small antennas in mobile phones to large satellite dishes miles and miles away. Antennas are of two types: receiving and transmitting. A receiving antenna receives the radio frequency signal from the transmitter and converts it into an AC signal, while a transmitting antenna converts an AC signal into a radio frequency signal and transmits it to the receiver [1]. Based on the frequency band, antennas are classified into seven types. The L band (1–2 GHz) covers wavelengths of 15–30 cm; its applications include mobile communication, satellite navigation, and aircraft surveillance [2]. The next frequency band is the S band (2–4 GHz)

H. Naveen Mugesh · K. Nandha Kishore · M. Harini · K. C. RajaRajeshwari (B)
Department of Electronics and Communication Engineering, Dr Mahalingam College of Engineering and Technology, Pollachi, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_44


H. Naveen Mugesh et al.

which covers wavelengths of 7.5–15 cm and is used for applications such as optical communication. The C band (4–8 GHz) covers wavelengths of 3.75–7.5 cm and is used to measure climate-related conditions. The X band operates at 7–11.2 GHz, covering wavelengths of 2.5–3.75 cm; its applications include radar and space communication. The Kurtz-under band (Ku band, 12–18 GHz) has wavelengths of 16.7–25 mm and is mostly used for satellite television and satellite mobile phones. The K band (18–27 GHz, where K stands for Kurz) has wavelengths of 1.11–1.67 cm and is normally used in satellite broadcasting. Finally, the Ka band (26.5–40 GHz) has wavelengths of 7.5–11.1 mm, with applications in satellite communication, close-range targeting radars, military aircraft, space telescopes, etc. Among these, our antenna works in the dual-band frequency range of the S band and L band. In fact, antenna technology has three main uses in the contemporary clinical sector: (1) telecommunication, (2) assessment, and (3) medication [3]. Wireless capsule endoscopy is already employed in everyday practice for transmitting information. Portable or implanted information-security devices, such as Radio Frequency Identification (RFID) tags, can be researched and remarked on. Transmitter technology is necessary for diagnosis in Magnetic Resonance Imaging (MRI) or fMRI devices [4]. Infrared tomography is a potential diagnostic technology. Finally, the energy created by directional antenna radiation is utilised for treatment; thermal cancer treatments include hyperthermia and blood coagulation [5]. The proposed antenna is based on the S-band and L-band frequency ranges, integrating typical applications in a single slotted antenna. These frequency ranges are included in the S-band and L-band allocations [6].
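The band limits quoted above follow directly from λ = c/f. A small sketch verifying the quoted wavelength ranges (rounded to the usual 15–30 cm and 7.5–15 cm figures):

```python
C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def wavelength_cm(freq_ghz):
    """Free-space wavelength in centimetres for a frequency in GHz."""
    return C0 / (freq_ghz * 1e9) * 100.0

# L band: 1-2 GHz -> roughly 15-30 cm; S band: 2-4 GHz -> roughly 7.5-15 cm
for name, (lo, hi) in {"L": (1.0, 2.0), "S": (2.0, 4.0)}.items():
    print(f"{name} band: {wavelength_cm(hi):.1f}-{wavelength_cm(lo):.1f} cm")
```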
By implementing dual-band operation in the designed antenna, good efficiency is maintained across both bands, and the downlink frequency allocation, separated by a guard band, can be accommodated.

44.2 Antenna Design

Our antenna works at frequencies of 1.3 and 2.1 GHz, which suit biomedical purposes (Fig. 44.1). We first planned the antenna in a plain microstrip shape, which did not work at the desired frequencies; attaching triangles to the sides of the patch also did not produce the desired output. By adding further triangles and a ground plane at the bottom, the desired output was obtained in the frequency range of 0–2.2 GHz. Since this antenna works at low frequencies, up to a maximum of about 2.2 GHz, it is harmless to the human body, and its size is small and compact [7].

44 L and S Beacon Dualband Antenna for Biomedical Application


Fig. 44.1 Antenna design

44.2.1 Antenna Parametric Analysis

The radiating element is designed with a perfect boundary; the finite-conductivity region can be identified as the beacon. The simulated model of the proposed patch is shown in Fig. 44.2.


Fig. 44.2 Design in software

44.2.2 Antenna Parameters Gain, bandwidth, radiation pattern, beam width, polarisation, and impedance are common antenna properties (Fig. 44.3). The antenna pattern is the way the antenna reacts to a plane wave coming at it from a specific angle or the relative strength of the wave the antenna is sending out in a specific direction [8].

Fig. 44.3 Antenna parameters


Fig. 44.4 Antenna directivity

44.2.2.1 Antenna Directivity

Directivity, gain, and radiated power are important parameters in determining the efficiency of an antenna. A gain of 3.48 is achieved (Figs. 44.4 and 44.5). Figure 44.4 shows the far-field radiation of the antenna along its patch; the red portion indicates the most strongly radiated region and the green colour the lightly radiated region, both within the major lobe [9].

44.2.2.2 Maximum Field Location

Signals that must travel long distances, and are thus deemed to be in the far-field region, are often sent via antennas (Fig. 44.6). One condition for the standard far-field solution is that the separation from the station be much greater than the antenna's dimensions and the operating wavelength [10].

44.2.2.3 S-Parameter

Figure 44.7 shows the reflection scattering parameter at the resonant frequencies for L- and S-band applications. The antenna resonates at 1.3 GHz, which is for the L band, and at 2.1 GHz, which is for S-band applications. Based on this bandwidth, the data rate can be enhanced to improve throughput and latency [5].
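The return-loss figures quoted above imply a near-unity VSWR at both resonances. A quick sketch of the standard conversion (this calculation is ours, not taken from the paper):

```python
def vswr_from_return_loss(rl_db):
    """Convert return loss (dB, positive) to VSWR via |Gamma| = 10^(-RL/20)."""
    gamma = 10 ** (-rl_db / 20.0)
    return (1 + gamma) / (1 - gamma)

print(vswr_from_return_loss(38.0))  # 2.1 GHz resonance, about 1.03
print(vswr_from_return_loss(28.0))  # 1.3 GHz resonance, about 1.08
```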


Fig. 44.5 a Directivity at 1.2 GHz, b Directivity at 1.3 GHz, c Directivity at 1.6 GHz, d Directivity at 2.1 GHz, e Directivity at 2.5 GHz, f Directivity at 2.9 GHz


Fig. 44.6 Maximum field location

Fig. 44.7 S-parameter

44.3 Conclusion

The proposed antenna operates over dual-band frequency ranges covering biomedical and commercial applications. It maintains good efficiency in both the S band and the L band, resonating at 1.3 and 2.1 GHz, which makes it well suited for biomedical applications.


References
1. Malik, N.A., Sant, P., Ajmal, T., Ur-Rehman, M.: Implantable antennas for bio-medical applications. IEEE J. Electromagn. RF Microw. Med. Biol. 5(1), 84–96 (2021). https://doi.org/10.1109/JERM.2020.3026588
2. Bashir, Z., et al.: A miniaturized wide band implantable antenna for biomedical application. In: 2019 UK/China Emerging Technologies (UCET), pp. 1–4 (2019). https://doi.org/10.1109/UCET.2019.8881849
3. Noghanian, S.: Research on antennas for biomedical applications. In: 2018 18th International Symposium on Antenna Technology and Applied Electromagnetics (ANTEM), pp. 1–2 (2018). https://doi.org/10.1109/ANTEM.2018.8572928
4. Bhavani, S., Shanmuganantham, T.: Wearable antenna for bio medical applications. In: 2022 IEEE Delhi Section Conference (DELCON), pp. 1–5 (2022). https://doi.org/10.1109/DELCON54057.2022.9753038
5. Rajarajeshwari, K.C., Poornima, T., Gokul Anand, K.R., Kumari, S.V.: SIMO array characterized THz antenna resonating at multiband ultra high frequency range for 6G wireless applications. In: Das, S., Nella, A., Patel, S.K. (eds.) Terahertz Devices, Circuits and Systems. Springer, Singapore (2022). https://doi.org/10.1007/978-981-19-4105-4_7
6. Malik, N.A., Ajmal, T., Sant, P., Ur-Rehman, M.: A compact size implantable antenna for biomedical applications. In: 2020 International Conference on UK-China Emerging Technologies (UCET), pp. 1–4 (2020). https://doi.org/10.1109/UCET51115.2020.9205350
7. Kumar, S.A., Shanmuganantham, T.: Scalp-implantable antenna for biomedical applications. In: 2020 URSI Regional Conference on Radio Science (URSI-RCRS), pp. 1–4 (2020). https://doi.org/10.23919/URSIRCRS49211.2020.9113574
8. Dawood, H., Zahid, M., Awais, H., Shoaib, S., Hussain, A., Jamil, A.: A high gain flexible antenna for biomedical applications. In: 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), pp. 1–4 (2020). https://doi.org/10.1109/ICECCE49384.2020.9179186
9. Mishra, P.K., Raj, S., Tripathi, V.S.: A novel skin-implantable patch antenna for biomedical applications. In: 2020 IEEE 7th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), pp. 1–5 (2020). https://doi.org/10.1109/UPCON50219.2020.9376443
10. Yan, L.-M., et al.: Compact magnetically symmetric antenna design for implantable biomedical applications. In: 2021 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting (APS/URSI), pp. 1277–1278 (2021). https://doi.org/10.1109/APS/URSI47566.2021.9704642

Chapter 45

A Comprehensive Review and Performance Analysis of Different 7T and 9T SRAM Bit Cells Manthan Garg, Mridul Chaturvedi, Poornima Mittal , and Anamika Chauhan

Abstract In this paper, the performance and durability of two 9T SRAM cells, namely 9T-1 and 9T-2, and two 7T SRAM cells, namely 7T-1 and 7T-2, are examined and contrasted against one another. The outcome is presented by checking the static margins (write, hold, read) and a leakage current analysis. All the cells are simulated at a 90-nm technology node, and every simulation is performed with a constant 1 V power supply at 27 °C. The best hold static noise margin, 0.466 V, is obtained for the 7T-1 cell. The best read static noise margin is that of the 9T-1 cell, calculated as 0.414 V. To find the variation in performance of the bit cells with temperature, the static noise margin is also calculated over a range from −10 to 110 °C. The best variance in hold mode is 0.17 mV/°C for the 9T-2 cell, and in read mode 0.05 mV/°C for the 7T-2 cell. The write margin of the 9T-2 cell is the highest at 0.991 V, while the other cells range from 0.365 to 0.604 V. The temperature variance in write margin is worst for the 9T-1 cell at 0.827 mV/°C, while the other cells perform relatively better. Finally, the leakage current of the above cells is analyzed, and the least leakage is found in the 7T-2 cell at 0.0262 nA.

45.1 Introduction

In the modern scenario, there is an exponential increase in the demand for portable, highly efficient digital memories that are not only durable against variation but also give reliable performance over different temperature ranges. Checking the durability of cells over varying temperature ranges has become more important than ever, as even an average user may see their system heat up to 60–70 °C. Memories based on SRAM architectures make up a major portion of the power consumption of an SoC [1]. The absence of any refresh requirement adds to the features of SRAM memory [2, 3]. The bit stored in the SRAM bit cell remains available till

M. Garg (B) · M. Chaturvedi · P. Mittal · A. Chauhan
Delhi Technological University, New Delhi, Delhi, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4_45


M. Garg et al.

the power supply is not cut off, making SRAM cells a necessity in the memory world [4]. Many researchers have proposed various improvements to the given SRAM architectures to make them more robust and efficient [5–8]. But with the continuous decrease in the technology node and the reduction in supply voltage (VDD), the trade-off between cell size and performance continues to grow. The first shortcoming of SRAM cells is their large size, coupled with higher vulnerability to process breakdowns in upcoming technologies [9]. The next challenge is the higher power loss and poor efficiency caused by high leakage currents [10]. Motivated by these flaws, researchers have been trying to reduce leakage current while keeping the cell as compact as possible, thus reducing the power loss. Power and size improvements in the architecture of one cell have a compounding effect, resulting in substantial enhancements in the total effectiveness of the cache memory. In this paper, four SRAM cells are analyzed to identify the best and worst features of each architecture. The analyses are based on read and hold static noise margins (RSNM and HSNM, respectively), write margin (WM), and leakage current. Along with this, the robustness of each cell is checked by finding the variation in its performance with the temperature at which it operates. The remainder of the paper is structured as follows: Sect. 45.2 contains the cell architectures for the 9T SRAM cells and Sect. 45.3 those for the 7T SRAM cells. Section 45.4 describes the static analysis and its results. Section 45.5 describes the temperature variation analysis and the performance of each bit cell in it. Section 45.6 provides insights into the leakage current of the given SRAM cells. Finally, the paper is concluded in Sect. 45.7.
Several attempts have been made over the generations to improve the SRAM cell architecture so that operations can be performed at low supply voltages with better bit density [11]. The SRAM 6T design presented in Fig. 45.1 is the conventional six-transistor architecture. It contains P1–N1 along with P2–N2, which create an inverter-pair system for the memory core; AC1–AC2 are used as the access transistors. The design's compact area and differential sensing make it effectively the benchmark for commercial use. However, the 6T cell's area footprint and high power and current dissipation make it difficult to implement at scale [12]. Extensions that address these shortcomings are explored in the architectures shown in Fig. 45.2.

45.2 9T Cell Architectures

45.2.1 9T-1 SRAM Cell

This SRAM cell contains nine transistors and uses a CMOS inverter-pair core along with extra transistors to make distinct write and read ports; hence, it separates the write/read access ports of the SRAM cell. Figure 45.2a presents the diagram for the

45 A Comprehensive Review and Performance Analysis of Different 7T …

Fig. 45.1 SRAM 6T circuit diagram

Fig. 45.2 Schematic diagram for a 9T-1, b 9T-2, c 7T-1 and d 7T-2 SRAM bit cell


given cell architecture. A common pair of bitlines (BL and BLB) is used to carry out the write and read operations [13]. During a write, the bit to be stored is placed on BL, with WL set high and RD at '0'. During read mode, RD is '1' while WL is kept at '0'; this switches transistors R1/R2 on. Hence, whichever node carries a 0 will lead to a potential drop on the corresponding bitline, and this drop is sensed to detect the bit stored in the cell. Hold mode requires both RD and WL to be at a low potential. Since the read and write operations use the same ports, less standby power is needed [14].

45.2.2 9T-2

The 9T-2 SRAM cell [15] contains nine transistors, as displayed in Fig. 45.2b. The transistor pairs P1–N1 and P2–N2 form the main memory of the bit cell. For write functionality, transistors N3–P3 form a transmission gate controlled by WL/WLB on the left side, while transistors N4–N5 create a separate read structure. The RE signal, controlling transistor N6, is used to cut the feedback loop, and a virtual ground signal, VGND, is used to reduce leakage current. In hold mode, WLB is '1' and WL is '0', isolating BLB from the rest of the cell. With RE kept high, transistor N6 is turned on to maintain the feedback pathway; VGND is set low, RBL is kept at '1', and RD is kept at '0' to ensure current discharge. In read mode, RD is kept high to turn on transistor N4, while WL is set to 0 and WLB to 1, keeping BLB separated from the rest of the cell. RE is at logic '1' to maintain the feedback path, a low potential is maintained at VGND, and RBL is precharged. During the write operation, WLB is set low and WL is kept high, connecting BLB to the rest of the cell. RE and RD are at 0, VGND is kept high, and RBL is precharged. BLB is set low or high depending on the value to be written (for 0, BLB is '0'; for 1, BLB is '1') [15].
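The 9T-2 control-signal settings described above can be summarised in a small table. This is only a restatement of the text in executable form (logic levels, with RBL = 1 meaning precharged); it is not from the authors' netlist:

```python
# 9T-2 control signals per operating mode, as described in the text.
MODES_9T2 = {
    "hold":  {"WL": 0, "WLB": 1, "RD": 0, "RE": 1, "VGND": 0, "RBL": 1},
    "read":  {"WL": 0, "WLB": 1, "RD": 1, "RE": 1, "VGND": 0, "RBL": 1},
    "write": {"WL": 1, "WLB": 0, "RD": 0, "RE": 0, "VGND": 1, "RBL": 1},
}

def bitline_isolated(mode):
    """BLB is isolated from the cell whenever WLB is high."""
    return MODES_9T2[mode]["WLB"] == 1
```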

45.3 7T Cell Architecture

45.3.1 7T-1

The 7T-1 bit cell is a minor variation of the standard 6T SRAM design [16]. The memory core of the cell comprises two inverter pairs, P1–N1 and P2–N2, connected through a feedback connection; the pairs and the schematic diagram of the architecture can be seen in Fig. 45.2c. Transistor N5 enables the feedback connection; its function is to break or establish this inverter-pair connection. Discharging, being a crucial step required to swap the bit stored in the bit cell, cannot be eliminated in the given SRAM architecture. A conventional bit-cell design requires both bitlines to be discharged, making the cell highly susceptible to noise and errors; better results are obtained by using a single bitline for the write operation. This cell follows a differential-ended read operation along with a single-ended write.

45.3.2 7T-2

The 7T-2 architecture presented here uses differential write/read operations. Compared with the standard 6T design, this architecture includes an extra transistor, N3, controlled by WLRB, as shown in Fig. 45.2d. Transistor N3 forms a stacked inverter configuration in this cell, improving the cell's power utilization, though reducing the ON current. To balance the deficit created in the differential read, the architecture places transistor N3 in the '0' state to improve the RSNM. Write functionality in the given SRAM cell is highly time-constrained: the WLRB signal must be configured carefully along with the WL signal; if this is not done properly, the cell can have difficulty with its write operation, which can further increase the write delay of the architecture [17].

45.4 Static Noise Margin (SNM) Analysis

Static noise margin (SNM) is the maximum noise voltage fluctuation that an SRAM bit cell can tolerate before it deviates from normal operation [18]. The SNM value equals the side of the largest square that can be inscribed in a lobe of the butterfly curve [19, 20]. SNM is therefore a measure of the robustness of an architecture: a higher SNM means the cell can withstand a greater variation in supply voltage, making it more durable. The HSNM and RSNM values obtained for the cells under study are shown in Fig. 45.3. The HSNM values of the 7T-1, 7T-2, 9T-1 and 9T-2 cells are 0.466, 0.443, 0.422 and 0.387 V, respectively, so the 7T-1 cell performs best. The RSNM values for the 7T-1, 7T-2, 9T-1 and 9T-2 cells are 0.169, 0.145, 0.414 and 0.388 V, respectively, so the 9T-1 cell is superior to the others.

The write margin (WM) is the smallest voltage essential to write a bit into the bit cell [14] and is used to judge the performance of a cell during a write. A low WM makes the write operation difficult; a higher WM eases the write operation but also makes the SRAM cell prone to undesirable noise, so a balanced WM is essential for effective writing. The WM values obtained for 7T-1, 7T-2, 9T-1 and 9T-2 are 0.604, 0.522, 0.365 and 0.991 V, respectively; 9T-2, having the highest WM, will have the easiest write operation but will also be the most susceptible to noise.
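The reported margins can also be compared programmatically. The short sketch below tabulates the values from the text (in volts) and picks the largest per metric; note that for WM the discussion above argues a balanced value is desirable, so "highest" is not automatically "best".

```python
# Hold SNM, read SNM and write margin (in volts) for the four cells,
# as reported in the text.
margins = {
    "7T-1": {"HSNM": 0.466, "RSNM": 0.169, "WM": 0.604},
    "7T-2": {"HSNM": 0.443, "RSNM": 0.145, "WM": 0.522},
    "9T-1": {"HSNM": 0.422, "RSNM": 0.414, "WM": 0.365},
    "9T-2": {"HSNM": 0.387, "RSNM": 0.388, "WM": 0.991},
}

def best_cell(metric: str) -> str:
    """Cell with the largest value of the given margin."""
    return max(margins, key=lambda cell: margins[cell][metric])

print(best_cell("HSNM"))  # 7T-1, matching the discussion above
print(best_cell("RSNM"))  # 9T-1
print(best_cell("WM"))    # 9T-2 (highest WM, but most noise-prone)
```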

Fig. 45.3 Graphical comparison of the static noise margin and write margin for the different SRAM bit cells

M. Garg et al.


45.5 Temperature Variation Analysis

Every electronic circuit or system is subjected to a range of temperatures while it operates, so circuits and systems must perform consistently across these temperature variations, staying as close as possible to their room-temperature behaviour. A range of −10 to 110 °C is taken to analyze and compare the SRAM cells. The SNM of each cell is calculated across this temperature range, and the difference between the values obtained at the highest and lowest temperatures is taken [21]. This difference is then divided by 120 °C, i.e. the span between the maximum and minimum temperatures in the range of analysis, so the results have the dimensions of mV/°C. A smaller temperature variation is favourable, as it means the design is less susceptible to temperature changes and hence provides consistent performance. All the results are shown in Fig. 45.4. The variation observed in HSNM for 7T-1, 7T-2, 9T-1 and 9T-2 is 0.342, 0.258, 0.2 and 0.17 mV/°C, respectively. The RSNM records a variation for 7T-1, 7T-2, 9T-1 and 9T-2 of 0.45, 0.05, 0.14 and 0.23 mV/°C, respectively. Finally, the variation in WM is found to be 0.291, 0.51, 0.827 and 0.14 mV/°C for the 7T-1, 7T-2, 9T-1 and 9T-2 bit cells, respectively.
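The sensitivity computation described above is simple arithmetic and can be sketched directly. The endpoint SNM values in the example call are hypothetical (the paper only reports the resulting mV/°C figures, not the raw endpoint margins).

```python
def temp_variation_mv_per_c(snm_cold_v, snm_hot_v, t_cold_c=-10.0, t_hot_c=110.0):
    """Temperature sensitivity in mV/°C over the analysis range.

    Takes the SNM (in volts) at the lowest and highest temperatures and
    divides their difference by the 120 °C span, as described above.
    """
    delta_mv = abs(snm_hot_v - snm_cold_v) * 1000.0  # V -> mV
    return delta_mv / (t_hot_c - t_cold_c)

# Hypothetical endpoints for illustration only: an HSNM of 0.480 V at -10 °C
# falling to 0.456 V at 110 °C corresponds to a sensitivity of 0.2 mV/°C.
print(round(temp_variation_mv_per_c(0.480, 0.456), 3))
```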

45.6 Analysis of Leakage Current

At nanometer-scale technology nodes, an increasing share of an SRAM's current dissipation is attributed to leakage, and the leakage current is directly proportional to the static power loss. The access transistors of a cell are examined for unwanted current flow while the cell is kept in the hold state; the greatest value of this unwanted OFF-state current is called the leakage current. A smaller leakage current signifies less power dissipation and is therefore more favourable [22]. Figure 45.5 shows the graphical representation of the analysis, with the SRAM 7T-1, 7T-2, 9T-1 and 9T-2 cells recording leakage currents of 15.286, 0.262, 109.4 and 95.55 nA, respectively.
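Since static power scales directly with leakage current (P = VDD · I_leak), the reported currents translate straight into hold-mode power. The sketch below assumes VDD = 1.0 V as a representative value for a 90 nm node; this supply value is an assumption, not a figure from the paper.

```python
# Leakage currents (nA) reported above; static power follows as
# P = VDD * I_leak.
VDD = 1.0  # volts -- assumed representative supply, not from the paper
leakage_na = {"7T-1": 15.286, "7T-2": 0.262, "9T-1": 109.4, "9T-2": 95.55}

# With VDD in volts and current in nA, the product is directly in nW.
static_power_nw = {cell: VDD * i_na for cell, i_na in leakage_na.items()}

for cell, p in sorted(static_power_nw.items(), key=lambda kv: kv[1]):
    print(f"{cell}: {p:.3f} nW")  # 7T-2 dissipates the least hold-mode power
```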

45.7 Conclusion

In this paper an attempt is made to compare the performance of two 9T SRAM cells with that of two 7T SRAM cells, and to find out how these architectures perform against one another under different forms of analysis. Static noise margin and temperature variation analyses are applied to the write, read and hold modes of the bit cells, and leakage current analysis to the hold mode, to identify the superior cell in each category. In our analysis, all the cells have been simulated at a 90 nm technology node. For the hold static noise margin, it is found that all the cells have more or less the

Fig. 45.4 Temperature variation analysis in the range of −10 to 110 °C for a Hold, b Read and c Write modes of the SRAM bit cells



Fig. 45.5 Leakage current values recorded for SRAM 7T-1, 7T-2, 9T-1 and 9T-2 bit cells

same value, the maximum difference between the values being 0.08 V; hence all the architectures perform comparably in hold mode. For the read static noise margin, the 9T architectures are more robust than the 7T architectures: the 9T cells have RSNM values around 0.4 V, while the 7T cells have values around 0.15 V. In the WM analysis, the 9T-2 architecture displays the highest WM of 0.991 V, making its write functionality the easiest but also the most susceptible to noise, while the other cells have values in the range 0.365–0.604 V. In the temperature variation analysis of the hold mode, the 9T cells are better, showing 0.2 and 0.17 mV/°C against the 0.342 and 0.258 mV/°C of the 7T cells. In the read mode the best variation is shown by the 7T-2 SRAM cell, at just 0.05 mV/°C, and the worst by the 7T-1 cell, at 0.45 mV/°C, while the 9T SRAM cells show comparable variations of around 0.18 mV/°C. In the write operation, SRAM 9T-1 shows the worst variation, at 0.827 mV/°C. Finally, in the leakage current analysis, the 7T architecture is clearly superior to the 9T architecture: the best cell, 7T-2 at 0.262 nA, outperforms both 9T cells, which record 109.4 and 95.55 nA, respectively. Based on these findings, no architecture is best suited for all scenarios: the 9T architectures perform best in RSNM, but their leakage currents exceed those of the best 7T counterpart by more than two orders of magnitude. The 9T architectures show balanced performance in the temperature variation analysis, while one might lean towards the 7T-2 design for the best results if the work is centred on the read operation. It is therefore concluded that one must decide upon an objective and a desired outcome to choose the right cell for a given application.


References

1. Rawat, G., Rathore, K., Goyal, S., Kala, S., Mittal, P.: Design and analysis of ALU: vedic mathematics approach. In: International Conference on Computing, Communication and Automation, pp. 1372–1376 (2015)
2. Chuang, C.-T., Mukhopadhyay, S., Kim, J.-J., Kim, K.: High-performance SRAM in nanoscale CMOS: design challenges and techniques. In: IEEE International Workshop on Memory Technology, Design and Testing, pp. 4–12 (2007)
3. Pavlov, A., Sachdev, M.: CMOS SRAM Circuit Design and Parametric Test in Nano Scaled Technologies. Springer, Netherlands (2008)
4. Rawat, B., Gupta, K., Goel, N.: Low voltage 7T SRAM cell in 32 nm CMOS technology node. In: 2018 International Conference on Computing, Power and Communication Technologies (GUCON) (2018)
5. Chang, M.-H., Chiu, Y.-T., Hwang, W.: Design and iso-area Vmin analysis of 9T subthreshold SRAM with bit-interleaving scheme in 65-nm CMOS. IEEE Trans. Circuits Syst. II Exp. Briefs 59(7), 429–433 (2012)
6. Chang, I.J., Kim, J.-J., Park, S.P., Roy, K.: A 32 kb 10T subthreshold SRAM cell array with bit-interleaving and differential read scheme in 90 nm CMOS. IEEE J. Solid-State Circuits 44(2), 650–658 (2009)
7. Chang, L., et al.: An 8T-SRAM for variability tolerance and low-voltage operation in high-performance caches. IEEE J. Solid-State Circuits 43(4), 956–963 (2008)
8. Mishra, N., Mittal, P., Kumar, B.: Analytical modeling for static and dynamic response of organic pseudo all-p inverter circuits. J. Comput. Electron. 18(4), 1490–1500 (2019)
9. Yoshinobu, N., Masahi, H., Takayuki, K., Itoh, K.: Review and future prospects of low-voltage RAM circuits. IBM J. Res. Devel. 47(5/6), 525–552 (2003)
10. Rawat, B., Mittal, P.: Single bit line accessed high performance ultra low voltage operating 7T SRAM bit cell with improved read stability. Int. J. Circuit Theory Appl. 49(5), 1435–1449 (2021)
11. Rawat, B., Mittal, P.: Single bit line accessed high performance ultra low voltage operating 7T SRAM bit cell with improved read stability. Int. J. Circuit Theory Appl. 49(5), 1435–1449 (2021)
12. Giterman, R., Bonetti, A., Bravo, E.V., Noy, T., Teman, A., Burg, A.: Current-based data-retention-time characterization of gain-cell embedded DRAMs across the design and variations space. IEEE Trans. Circ. Syst. I Reg. Papers 67(4), 1207–1217 (2020)
13. Liu, Z., Kursun, V.: Characterization of a novel nine-transistor SRAM cell. IEEE Trans. Very Large Scale Integr. Syst. 16(4), 488–492 (2008)
14. Yang, Y., Jeong, H., Song, S.C., Wang, J., Yeap, G., Jung, S.: Single bit-line 7T SRAM cell for near-threshold voltage operation with enhanced performance and energy in 14 nm FinFET technology. IEEE Trans. Circuits Syst. I Regular Pap. 63(7), 1023–1032 (2016)
15. Sachdeva, A., Tomar, V.K.: Design of multi-cell upset immune single-end SRAM for low power applications. Int. J. Electron. Commun. 128, 153516 (2020)
16. Aly, R.E., Bayoumi, M.A.: Low-power cache design using 7T SRAM cell. IEEE Trans. Circuits Syst. II 54, 318 (2007)
17. Oh, J.S., Park, J., Cho, K., Oh, T.W., Jung, S.O.: Differential read/write 7T SRAM with bit-interleaved structure for near-threshold operation. IEEE Access 9, 64104 (2021)
18. Kumar, N., Mittal, P.: Performance analysis of FinFET based 2:1 multiplexers for low power applications. In: 6th Students' Conference on Engineering and Systems (SCES-2020) (2020)
19. Seevinck, E., List, F.J., Lohstroh, J.: Static-noise margin analysis of MOS SRAM cells. IEEE J. Solid-State Circuits 22(5), 748–754 (1987)
20. Singh, J., Mohanty, S.P., Pradhan, D.K.: Robust SRAM Designs and Analysis. Springer, Berlin (2013)


21. Chaturvedi, M., Garg, M., Rawat, B., Mittal, P.: A read stability enhanced, temperature tolerant 8T SRAM cell. In: 2021 International Conference on Simulation, Automation & Smart Manufacturing (SASM) (2021)
22. Rawat, B., Mittal, P.: Analysis of varied architectural configuration for 7T SRAM bit cell. In: 4th International Conference on Recent Trend in Communication & Electronics (ICCE-2020), Proceedings Published by Taylor and Francis (2020)

Author Index

A
Aashi Shrivastava, 163
Abhishek Bharti, 103
Adithi, K., 373
Amitava Nag, 175, 487
Amit Rathi, 405
Anamika Chauhan, 595
Anil Kumar Birru, 445
Anitha, G., 523
Anupam Biswas, 121
Anup Kumar Barman, 487
Anusha, M. U., 373
Arockia Dhanraj Joshuva, 563
Avanti Dethe, 305
Azher Jameel, 187

F
Farhina Ahmed, 339

B
Bal Chand Nagar, 349
Bhargavi, B., 385
Bidisha Chetia, 89
Breitkopf, Piotr, 1

I
Indumathi, S. K., 563

C
Chandrasekar Shastry, 475
Chikere, Aja Ogboo, 19

D
Debojyoti Sarkar, 121
Deepjyoti Patowary, 219
Deepthi, P. S., 235
Dinesh Prasad, 349
Dulal Chandra Das, 431

G
Geeta Kalkhambkar, 37, 267
Ghanshyam Singh, 405
Glenda Lorelle Ritu, E., 511

H
Hariharan, K., 563
Harini, M., 587
Harjyoti Das, 571
Harsha Bhat, 373

J
Jayaram murugan, 523
Jitumoni Kumar, 219
Jyoshna Kandi, 385
Jyoti Kumari, 147

K
Karmabir Brahma, 487
Kavitha, K. R., 511
Kavya, C., 397
Kesava Krishna, D., 563
Kiran Jash, 419
Krishna Anand, S., 329

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 373, https://doi.org/10.1007/978-981-99-6866-4


L
Lahari Madishetty, 385
Laxman Singh, 147, 459
Lee, Reuben Brandon Huan Chung, 19
Lopa Mandal, 419

M
Mallikarjunaswamy, M. S., 373
Mamatha, E., 329
Manasvi Dhoke, 305
Manik Sharma, 65
Manthan Garg, 595
Mathiarasan Vivek Ramanan, 563
Mausri Bhuyan, 431
Md. Waseem Akram, 349
Mithlesh Arya, 405
Mohammed Shaqeeb, N., 397
Monoj Bardalai, 571
Mridul Chaturvedi, 595
Murali Babu, B., 511

N
Nandha Kishore, K., 587
Naresh Kumar, 563
Naveen Balaji, M., 511
Naveen Mugesh, H., 587
Nayana H. Reddy, 329
Neha Singh, 459
Nishant Kulkarni, 305

P
Pankaj Biswas, 135
Parsai, M. P., 163
Partha Pratim Dutta, 571
Pijush Sarma, 571
Pinky Roy, 315
Poornima Mittal, 595
Pradeep Chindhi, 37
Pradeep S. Chindhi, 267
Pragna, P., 373
Pragya Palak, 293
Prasanta Kumar Choudhury, 219
Pravin Gorakh Maske, 501
Preethi, P. M., 397
Preeti Monga, 65
Priyanka Sharma, 135
Puneet Bakshi, 277, 361

R
Raghavan, Balaji, 1

Rajani, H. P., 37, 267
Raja Rajeshwari, K. C., 397, 587
Rakesh Angira, 51, 293
Ramu Naidu, Y., 205
Reddy, N. V. S. M., 79
Reeja, S. L., 235
Rohit Dhage, 305
Rohit Kumar Kasera, 249
Rosang Pongen, 79

S
Sahil Thappa, 187
Saleh, Ehab, 475
Sandip Saha, 135
Sanjeev Anand, 187
San, Yam Ke, 19
Sapna Sinha, 147, 459
Saravanan, E., 523
Satyanarayana, K., 79
Sauvik Bal, 419
Shivashish Gour, 249
Shreya Gore, 305
Shubham Kumar Verma, 187
Smriti Jaiswal, 431
Sonam Gour, 405
Sougata Mukherjee, 1
Soumya, T., 235
Souptik Dutta, 175
Sreelatha, V., 329
Sriraksha, B. R., 373
Subham Chetri, 545
Subhash Mondal, 175
Subhrajit Dutta, 1
Subungshri Basumatary, 487
Sudheer Kumar Nagothu, 523
Sukanta Roy, 19
Sukumar Nandi, 277, 361
Sumita Debbarma, 89, 103, 339, 545
Swapnil Sawalkar, 315
Swati Yadav, 51, 293

T
Tapodhir Acharjee, 249
Thoudam Kheljeet Singh, 445
Tushar Dhote, 305

U
Umayia Mushtaq, 349

V
Vartika Narayani Srinet, 501
Venugopal, S., 79
Vibhushit Gupta, 187
Vijayalakshmi, S., 511

Y
Yadav, A. K., 501
Yalini Shree, P. V., 397
Yatheshth Anand, 187