Modeling, Simulation and Optimization: Proceedings of CoMSO 2021 (Smart Innovation, Systems and Technologies, 292). ISBN 9811908354, 9789811908354

This book includes selected peer-reviewed papers presented at the International Conference on Modeling, Simulation and Optimization (CoMSO 2021).

Language: English · Pages: 684 [661] · Year: 2022

Table of contents:
Preface
Contents
About the Editors
1 Empirical Study of Far-Field Crop Quality Examination Models: A Numerical Outlook
1.1 Introduction
1.2 Literature Review
1.3 Statistical Analysis
1.4 Conclusion
References
2 A Contemporary Initiative to Uphold Cease COVID-19 Using Keras and Tensorflow
2.1 Introduction
2.2 Related Works
2.3 Dataset
2.4 Proposed Methodology
2.4.1 Packages
2.4.2 Mask Detection
2.4.3 Face Detection
2.4.4 Alarm Activation
2.5 Result
2.6 Conclusion and Future Works
References
3 A Comparative Study of Various Traditional and Hybrid Cryptography Algorithm Models for Data Security
3.1 Introduction
3.2 Literature Review
3.3 Generalized Hybrid Cryptographic Algorithm Models
3.4 Experimental Setup and Results
3.5 Conclusion and Future Work
References
4 V2G/G2V Bi-directional On-Board EV Charger Using Two Phase Interleaved DC-DC Converter
4.1 Introduction
4.2 System Configuration
4.2.1 Forward Operation Mode
4.2.2 Reverse Operation Mode
4.3 Design of Two-Phase Interleaved DC-DC Converter
4.4 G2V V2G Control Operation
4.5 Simulation and Results
4.6 Conclusion
References
5 A Comparative Performance Analysis of Varied 10T SRAM Cell Topologies at 32 nm Technology Node
5.1 Introduction
5.2 Existing 10T SRAM Bit-Cell Topologies
5.2.1 10T’1
5.2.2 10T’2
5.2.3 10T’3
5.2.4 10T’4
5.3 Static Noise Margin Analysis
5.3.1 Hold Static Noise Margin Analysis
5.3.2 Read Static Noise Margin Analysis
5.3.3 Write Static Noise Margin Analysis
5.4 Temperature Analysis
5.4.1 Temperature Analysis for Hold Operation
5.4.2 Temperature Analysis of Read Operation
5.4.3 Temperature Analysis of Write Operation
5.5 Leakage Current Analysis
5.6 Conclusion and Future Scope
References
6 Hydrodynamic Coupling Between Comoving Microrobots
6.1 Introduction
6.2 Modeling of Microrobots in COMSOL Multiphysics
6.3 Results
6.4 Discussion
6.5 Conclusion
References
7 Chemical Reaction Optimization (CRO) for Maximum Clique Problem
7.1 Introduction
7.1.1 Problem Statement
7.2 Related Work
7.3 Proposed Method for Solving MCP Using Chemical Reaction Optimization
7.3.1 Population Generation and Initialization
7.3.2 Operator Design
7.3.3 Experimental Results and Comparisons
7.4 Conclusion
References
8 Fast Implementation for Computational Method of Optimum Attacking Play in Rugby Sevens
8.1 Introduction
8.2 Related Work
8.3 Computational Method for Optimum Attacking Play
8.3.1 Optimum Attacking Play
8.3.2 Run Play Simulation
8.3.3 Hand-Pass Simulation
8.4 Improvements for Fast Implementation
8.4.1 Limitation for Angle of Run Play Simulations
8.4.2 Lower Bound Modification for Bounding Operation
8.4.3 Parallel Computing for Hand-Pass Simulations
8.5 Validation Experiments
8.5.1 Target Examples
8.5.2 Optimum Attacking Plays Computed
8.5.3 Comparison of Processing Times
8.6 Conclusions
References
9 A Review of Various Sign Language Recognition Techniques
9.1 Introduction
9.2 Data Acquisition and Preprocessing Methods
9.3 Recognition Methods
9.3.1 Combinational Neural Networks
9.3.2 Speech Translating Gloves
9.3.3 Support Vector Machine Classifier
9.3.4 Hidden Markov Model and K-Nearest Neighbour
9.3.5 Surface Electromyography, Accelerometer and Gyroscope Sensors
9.3.6 Dynamic Time Warping Algorithm
9.3.7 Transition Movement Model
9.3.8 Image Processing and SLFG Module
9.3.9 Principal Component Analysis
9.3.10 3-D Contour Model and Pixel Classifier
9.3.11 Self-Organizing Maps
9.3.12 General Fuzzy Minmax and Grammar Rules
9.4 Analysis and Discussions
9.5 Conclusion
References
10 A Complete Solution to a Long-Run Sand Augmentation Problem Under Uncertainty
10.1 Introduction
10.2 Control Problem
10.2.1 Stochastic Dynamical System
10.2.2 Objective and HJB Equation
10.3 Exact Solution
10.3.1 Guessed Solution and Its Optimality
10.3.2 Remark and Application
10.4 Conclusion
References
11 Real-Time One-Hand Indian Sign Language Alphabets and Numbers Recognition in Live Video Using Fingertip Distance Feature
11.1 Introduction
11.2 Related Work
11.3 Proposed Work
11.3.1 Live Video Acquisition
11.3.2 Frame Extraction
11.3.3 Hand Tracking
11.3.4 Feature Extraction and Representation
11.3.5 Classification
11.3.6 Recognition and Text Output
11.4 Conclusion
References
12 Structural and Optical Analysis of Hydrothermally Synthesized Molybdenum Disulfide Nanostructures
12.1 Introduction
12.2 Experimental Procedure
12.2.1 Chemicals Used
12.2.2 Synthesis of Molybdenum Disulfide (MoS2)
12.2.3 Instrumentation
12.3 Results and Discussions
12.3.1 Optical Analysis of MoS2
12.3.2 Structural Studies
12.3.3 Morphological Studies
12.4 Conclusion
References
13 IRIS Image Encryption and Decryption Based Application Using Chaos System and Confusion Technique
13.1 Introduction
13.2 Literature Survey
13.3 Methodology
13.3.1 Image Compression Using Singular Value Decomposition Technique
13.3.2 Chaotic Logistic Map
13.3.3 Key Generation
13.3.4 Confusion Technique
13.3.5 IRIS Database
13.4 Block Diagram
13.5 Results
13.5.1 Android Application
13.5.2 Image Restoration
13.6 Parametric Measurements
13.6.1 Compression Ratio
13.6.2 Computation Time
13.6.3 Peak Signal to Noise Ratio (PSNR)
13.6.4 Structural Similarity Index (SSIM)
13.6.5 Number of Pixel Changing Rate (NPCR)
13.7 Conclusion
References
14 Comparative Study of Aero and Non-aero Formula Student Type Race Car Using Optimum Lap
14.1 Introduction
14.2 Background Simulation Process of OL
14.3 Mathematical Analogy of Forces Acting on Vehicle Body at Any Location of Track
14.4 Limitations of OL
14.5 Model Development
14.5.1 Vehicle Model in OL
14.5.2 Track Modeling in OL
14.6 Case Study of an FS Aero and Non-aero Vehicle on 2 Different Tracks
14.7 Results
14.8 Conclusion
References
15 Simulation and Stabilization of a Custom-Made Quadcopter in Gazebo Using ArduPilot and QGroundControl
15.1 Introduction
15.1.1 Architecture
15.1.2 Quadcopter Model Description
15.1.3 LiftDragPlugin
15.2 Experimental Set-up
15.3 Results and Conclusion
References
16 Construction of Reliability Sampling Plans Using Dagum Distribution Under Type-I Censoring
16.1 Introduction
16.2 Dagum Distribution
16.3 Procedure to Determine the Operating Characteristics
16.4 Empirical Analysis of Operating Characteristic Curves
16.5 Procedure for the Construction of Reliability Single Sampling Plan
16.6 Conclusion
References
17 Defect Detection Using Correlation Approach for Frequency Modulated Thermal Wave Imaging
17.1 Introduction
17.2 Theory
17.2.1 Data Processing Approach
17.3 Modeling and Analysis
17.4 Results and Discussion
17.5 Conclusion
References
18 Machine Learning Models for Predictive Analytics in Personal Finance
18.1 Introduction
18.2 Review of the Role of Machine Learning in Personal Finance
18.2.1 Insurance
18.2.2 Chatbots/User-Interaction Systems
18.2.3 Investment Planning
18.3 Proposed Work
18.3.1 Linear Regression Model for Budgeting and Expense Management
18.3.2 RNN-Based Model for Investment Portfolio Management
18.3.3 Logistic Regression Based Retirement Prediction
18.4 Experimental Results
18.5 Conclusion and Future Scope
References
19 MNIST Image Classification Using Convolutional Neural Networks
19.1 Introduction
19.2 Problem Formulation
19.2.1 Convolutional Neural Network
19.2.2 Data Augmentation
19.2.3 Batch Normalization
19.2.4 Dataset
19.3 Proposed Model
19.4 Results and Discussion
19.5 Conclusion
References
20 Double Sampling Plans for Life Test Based on Marshall–Olkin Extended Exponential Distribution
20.1 Introduction
20.2 Marshall–Olkin Extended Exponential Distribution
20.3 Reliability Double Sampling Plans with Smaller Number of Failures
20.4 Search Procedure for the Selection of DSP-(n1, n2)
20.5 Procedure for Selection of DSP-(n1, n2)
20.6 Conclusion
References
21 Detection of Objects Using a Fast R-CNN-Based Approach
21.1 Introduction
21.2 Problem Formulation
21.2.1 Architecture of Region Proposal Network (RPN)
21.3 The Proposed Approach
21.4 Base Network
21.4.1 The Region Proposal Network (RPN)
21.4.2 Context-Aware RoI Pooling
21.4.3 Classifier
21.5 Results and Discussion
21.5.1 Dataset
21.5.2 Metrics Used for Evaluation
21.5.3 Model Training
21.5.4 Performance Results
21.5.5 Loss Value Comparison
21.6 Conclusion and Future Work
References
22 Modeling and Simulation of Electric Vehicle with Synchronous Reluctance Motor
22.1 Introduction
22.2 Methodology
22.3 Modeling and Simulation
22.4 Result and Analysis
22.5 Conclusion
References
23 Performance Analysis of Disc Type Magnetorheological Brake with Tapered Disc
23.1 Introduction
23.2 Design and Modelling of MR Brake
23.2.1 Structural Model of the Disc Type MR Brake
23.2.2 Mathematical Modelling
23.3 Magnetic Analysis of Disc Type MR Brake
23.4 Results and Discussions
23.5 Conclusions
References
24 Dynamic Characteristics Analysis of Kirloskar Turn Master35 Machine Tool Bed with Different Polymer Concrete Materials
24.1 Introduction
24.2 Simulation Modeling
24.2.1 Geometric Data
24.2.2 Material Properties
24.2.3 Meshing
24.2.4 Analysis Settings
24.3 Simulation Results and Discussion
24.3.1 Modal Analysis Results
24.3.2 Harmonic Response Results
24.4 Conclusions
References
25 Comparative Study of Magnetorheological Brakes from Magnetic Theory Perspective by Finite Element Methods
25.1 Introduction
25.2 Literature Review
25.3 Design of Magnetorheological Brake
25.4 Methodology
25.4.1 Phase 1: MR Fluid Analysis
25.4.2 Phase 2: Optimization of Brake Geometry
25.4.3 Phase 3: Variation in Power Input Parameters
25.5 Results
25.5.1 Comparative Study of MR Fluids
25.5.2 Optimization of Brake Geometry
25.5.3 Variation in Power Input Parameters
25.6 Conclusion
References
26 Modal and Fatigue Analysis of Ultrasonic Machining Tool for Performance Analysis
26.1 Introduction
26.2 Finite Element Model of the Tool
26.3 Modal Analysis
26.4 Fatigue Analysis
26.5 Conclusion
References
27 Effect of Suspension Parameter on Lateral Dynamics Study of High Speed Railway Vehicle
27.1 Introduction
27.2 Bondgraph Modelling of 31-DOF Railway Vehicle
27.3 Results and Discussion
27.4 Conclusions
References
28 A Weighted Fuzzy Time Series Forecasting Method Based on Clusters and Probabilistic Fuzzy Set
28.1 Introduction
28.2 Preliminaries
28.2.1 Score and Deviation Functions
28.3 Proposed Forecasting Method
28.4 Forecasting of SBI Share Price using Proposed Method
28.5 Conclusions
References
29 Modeling Clusters in Streamflow Time Series Based on an Affine Process
29.1 Introduction
29.2 Affine Process Model and Clusters
29.2.1 Affine Process Model
29.2.2 Clusters
29.3 Brief Application
29.3.1 Study Site
29.3.2 Computation
29.3.3 Remarks on Water Quality
29.4 Conclusion
References
30 Assessing and Predicting Urban Growth Patterns Using ANN-MLP and CA Model in Jammu Urban Agglomeration, India
30.1 Introduction
30.2 Study Area and Datasets
30.3 Methods
30.4 Result and Discussion
30.4.1 LULC Change Detection
30.4.2 Detect Urban Growth Pattern
30.4.3 Urban Growth Prediction
30.5 Conclusions
References
31 Some Investigations on CdSe/ZnSe Quantum Dot for Solar Cell Applications
31.1 Introduction
31.2 Experimental Details
31.3 Synthesis of Quantum Dots
31.4 Results and Discussions
31.4.1 UV Spectra
31.4.2 Surface Morphological Analysis
31.4.3 Elemental Analysis
31.4.4 FT-IR Characterization
31.4.5 Particle Size Analyzer (PSA)
31.4.6 XRD
31.5 Conclusion
References
32 Design and Finite Element Analysis of a Mechanical Gripper
32.1 Introduction
32.2 Modeling of the Geometry
32.2.1 Design Methodology
32.2.2 Configurations of the Gripper
32.2.3 Finite Element Analysis
32.3 Results and Discussions
32.3.1 Results for the Gear Assembly
32.3.2 Results for the Jaws of the Gripper
32.3.3 Results for the Entire Model of Gripper
32.4 Conclusions
References
33 Analysis of Fractional Calculus-Based MRAC and Modified Optimal FOPID on Unstable FOPTD Processes
33.1 Introduction
33.2 Unstable Process
33.3 Fractional Calculus
33.3.1 G-L Fractional Derivative
33.3.2 R-L Fractional Integral
33.4 Control Law
33.4.1 Model Reference Adaptive Control (MRAC)
33.4.2 FO-Lyapunov Stability Rule
33.4.3 FOPID Rule
33.5 Modified Particle Swarm Optimization (MPSO)
33.6 Result and Analysis
33.6.1 Algorithm Outcome
33.6.2 Case 1 (Time Constant of 1 of Reference Model)
33.6.3 Case 2 (Time Constant of 0.5 of Reference Model)
33.7 Conclusion and Future Scope
References
34 Chaotic Lorenz Time Series Prediction via NLMS Algorithm with Fuzzy Adaptive Step Size
34.1 Introduction
34.2 Autoregressive Moving Average Model
34.3 FASS-NLMS Algorithm
34.4 Steps for Time Series Prediction Based on ARMA Model via FASS-NLMS Algorithm
34.5 Computational Results
34.6 Conclusion
References
35 Channel Adaptive Equalizer Design Based on FIR Filter via FVSS-NLMS Algorithm
35.1 Introduction
35.2 Adaptive Equalization Problem Statement for Communication Channels
35.3 FVSS-NLMS Algorithm
35.4 Simulation and Results
35.4.1 Simulation
35.4.2 Results
35.5 Conclusion
References
36 Direct Adaptive Inverse Control Based on Nonlinear Volterra Model via Fractional LMS Algorithm
36.1 Introduction
36.2 Nonlinear Volterra Model
36.2.1 FLMS Algorithm
36.3 Direct Adaptive Inverse Control
36.4 Computational Results
36.5 Conclusion
References
37 Indirect Adaptive Inverse Control Synthesis via Fractional Least Mean Square Algorithm
37.1 Introduction
37.2 Indirect Adaptive Inverse Control
37.3 Fractional Least Mean Square Algorithm
37.4 Computational Results
37.5 Conclusion
References
38 Thermo-Economic Analysis for the Feasibility Study of a Binary Geothermal Power Plant in India
38.1 Introduction
38.2 Methodology
38.2.1 The Thermodynamic Cycles
38.2.2 Modelling
38.2.3 Modelling Using Aspen Plus
38.3 Results and Discussion
38.3.1 Thermodynamic Analysis
38.3.2 Economic Analysis
38.4 Conclusion
References
39 Hybrid 3D-CNN Based Airborne Hyperspectral Image Classification with Extended Morphological Profiles Features
39.1 Introduction
39.2 Materials and Methods
39.2.1 EM-Hybrid 2D-3D-CNN Architecture
39.3 Results and Discussion
39.4 Conclusion
References
40 Numerical Approximation of System of Singularly Perturbed Convection–Diffusion Problems on Different Layer-Adapted Meshes
40.1 Introduction
40.2 The Continuous Problem-I
40.3 The Discrete Problem
40.3.1 The Generalized S-mesh
40.3.2 The Vulanović L-mesh
40.3.3 The Finite Difference Scheme
40.4 Numerical Results
40.5 The Continuous Problem-II
40.6 The Discrete Problem
40.6.1 Discretization of the Domain
40.6.2 The Finite Difference Scheme
40.7 Numerical Results
40.8 Observations and Concluding Remarks
References
41 Role of Decision Making for Effective Health Care
41.1 Introduction
41.2 Decision Making
41.2.1 Real World Data
41.2.2 Missing Data
41.3 Multimodal Data-Driven Approach
41.3.1 Semantic Perception and Semantic Alignment
41.3.2 Data Fusion and Cross-Border Knowledge Fusion
41.4 Practice-Based Evidence
41.5 Intelligent Decision
41.6 Value of Information
41.7 Security
41.8 Shared Decision Making
41.9 Conclusions
References
42 Data Secrecy: Why Does It Matter in the Cloud Computing Paradigm?
42.1 Introduction
42.2 Preliminaries in Cryptography
42.2.1 Cryptographic Hash Functions
42.2.2 Public Key Cryptography
42.2.3 Symmetric Key Cryptography
42.2.4 Key Agreement Protocol
42.2.5 Connection Establishment
42.3 Email Server
42.3.1 Data Secrecy Issue
42.3.2 Asymmetric Communication Issue
42.3.3 Virus
42.4 Cloud Storage
42.4.1 Data Secrecy
42.4.2 Issues in Secret Key
42.5 Identity Management Server
42.5.1 Issue of Password
42.5.2 Hash Value of Password
42.5.3 Data Secrecy
42.6 Instant Messaging Platforms
42.7 Conclusion
References
43 A Survey on Optimization Parameters and Techniques for Crude Oil Pipeline Transportation
43.1 Introduction
43.1.1 Cost Components of Pipeline Networks
43.2 Parameters in Optimizing Pipeline Operations
43.2.1 Oil Pipeline Design and Network Optimization
43.2.2 Wax Deposition, Chemical Dosing and Drag Reduction
43.2.3 Optimization in Optimizing Oil Pipeline Operations
43.3 Research Gaps
43.4 Conclusions
References
44 Design and Simulation of Higher Efficiency Perovskite Heterojunction Solar Cell Based on a Comparative Study of the Cell Performances with Different Standard HTLs
44.1 Introduction
44.2 Modeling
44.3 Simulation
44.4 Results and Discussion
44.5 Conclusion
References
45 6G Communication: A Vision on Deep Learning in URLLC
45.1 Introduction
45.2 6G Communication
45.2.1 Dimensions to Design 6G
45.3 Applications of 6G Communication
45.4 AI-Based 6G Communication
45.4.1 Transmitters and Receivers Optimize on Their Own
45.4.2 Heavy Usage of Cognitive Spectrum
45.4.3 Context Awareness
45.5 Deep Learning
45.5.1 Deep Learning in URLLC
45.5.2 Supervised Deep Learning for URLLC
45.5.3 Deep Reinforcement Learning in URLLC
45.5.4 A 6G Multi-level Architecture
45.5.5 Deep Transfer Learning in Mobilized Environments
45.6 Discussion
45.7 Conclusion
References
46 AHP-GRA Integrated Methodology for Decision-Making in WEDM of Ti-6Al-4V Alloy
46.1 Introduction
46.2 Multiple Attribute Decision-Making Methods
46.2.1 Analytic Hierarchy Process
46.2.2 Grey Relational Analysis
46.2.3 AHP-GRA Integrated Methodology
46.3 Results and Discussion
46.3.1 Application of AHP
46.3.2 Implementation of AHP-GRA Integrated Methodology
46.4 Conclusions
References
47 Application of Taguchi-DEAR Method for Parametric Optimization of CI Engine with Waste Cooking Oil Biodiesel
47.1 Introduction
47.1.1 An Alternative Fuel
47.1.2 Transesterification
47.1.3 Properties of WCO and Biodiesel
47.2 Engine Testing for Performance and Emission Analysis
47.3 Data Envelopment Analysis-Based Ranking Method (DEAR Method)
47.4 Confirmation Run
47.5 Characteristic Graphs and Observations
47.5.1 Observations for Performance
47.5.2 Observations for Emission Graphs
47.6 Conclusion
References
48 Conjugate Heat Transfer Analysis in a Composite Building Wall: Effect of Double Plaster-Brick-Glass Wool
48.1 Introduction
48.2 Building Materials and Wall Structure
48.3 Mathematical Modeling
48.3.1 Boundary Conditions
48.4 Numerical Procedure and Validation
48.5 Results and Discussion
48.6 Conclusions
References
49 Performance Enhancement of Solar Air Heater with the Application of Trapezoidal Ribs on the Absorber Plate
49.1 Introduction
49.2 Problem Statement
49.3 Mathematical Formulations
49.4 Grid Distribution
49.5 Important Co-relations
49.6 Results and Discussions
49.7 Conclusion
References
50 Investigation of Hybrid Fiber-Reinforced Concrete Beam–Column Joint Behavior Using Fruit Fly Optimal NN
50.1 Introduction
50.2 Proposed Methodology
50.2.1 Experimental Design Materials
50.2.2 Neural Network (NN) for Behavior Analysis
50.2.3 Fruit Fly Optimization (FFO) Model for NN
50.3 Result and Discussion
50.4 Conclusion
References
Author Index

Smart Innovation, Systems and Technologies 292

Biplab Das · Ripon Patgiri · Sivaji Bandyopadhyay · Valentina Emilia Balas, Editors

Modeling, Simulation and Optimization Proceedings of CoMSO 2021

Smart Innovation, Systems and Technologies Volume 292

Series Editors Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-Sea, UK Lakhmi C. Jain, KES International, Shoreham-by-Sea, UK

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles. Indexed by SCOPUS, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), SCImago, DBLP. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/8767

Biplab Das · Ripon Patgiri · Sivaji Bandyopadhyay · Valentina Emilia Balas, Editors

Modeling, Simulation and Optimization Proceedings of CoMSO 2021

Editors

Biplab Das, Department of Mechanical Engineering, National Institute of Technology Silchar, Silchar, India
Ripon Patgiri, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, India
Sivaji Bandyopadhyay, National Institute of Technology Silchar, Silchar, India
Valentina Emilia Balas, Department of Automatics and Applied Software, “Aurel Vlaicu” University of Arad, Arad, Romania

ISSN 2190-3018 ISSN 2190-3026 (electronic) Smart Innovation, Systems and Technologies ISBN 978-981-19-0835-4 ISBN 978-981-19-0836-1 (eBook) https://doi.org/10.1007/978-981-19-0836-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This book is the second edition of the series Modeling, Simulation and Optimization that includes the selected peer-reviewed papers presented at the International Conference on Modeling, Simulation and Optimization, organized by the National Institute of Technology, Silchar, Assam, India, during December 16–18, 2021. This includes papers from various topics like computational modeling and simulation, system modeling and simulation, device/VLSI modeling and simulation, control theory and applications, modeling and simulation of energy systems and optimization. The book presents a variety of system models as well as solutions to growing difficulties in a variety of scientific domains. Engineering and science use modeling and simulation to improve lifestyles, lives and businesses. Improving the employability of developed items in real-world projects is critical in the said industry. It also ensures the generated product's authenticity and value. As a result, it is proposed in this book to address growing difficulties as well as modeling and simulation solutions. The book provides an excellent vehicle for conveying current study findings. The book covers a wide range of modeling and simulation topics, making it an excellent study resource. This book will appeal to a wide audience, making it one of the most sought-after titles. The book aims

• to disseminate various models of diverse systems
• to publish solutions to emerging challenges in diverse scientific fields
• to share recent findings of different research in modeling and simulation
• to widen the readership in various disciplines
• to increase the impact of the book by disseminating top quality chapters.

Biplab Das, Silchar, India
Ripon Patgiri, Silchar, India
Sivaji Bandyopadhyay, Silchar, India
Valentina Emilia Balas, Arad, Romania

Contents

1 Empirical Study of Far-Field Crop Quality Examination Models: A Numerical Outlook - Akshay Dhande and Rahul Malik
2 A Contemporary Initiative to Uphold Cease COVID-19 Using Keras and Tensorflow - S. D. Sujeevan Tej, Natasha J. V. Velmury, V. Bhargav Sai, Vema Rahul Sairam, and T. Anjali
3 A Comparative Study of Various Traditional and Hybrid Cryptography Algorithm Models for Data Security - Pravin Soni and Rahul Malik
4 V2G/G2V Bi-directional On-Board EV Charger Using Two Phase Interleaved DC-DC Converter - K. Vineel Kumar and K. Deepa
5 A Comparative Performance Analysis of Varied 10T SRAM Cell Topologies at 32 nm Technology Node - Siddhant Ahlawat, Siddharth, Bhawna Rawat, and Poornima Mittal
6 Hydrodynamic Coupling Between Comoving Microrobots - S. Sharanya and T. Sonamani Singh
7 Chemical Reaction Optimization (CRO) for Maximum Clique Problem - Mahmudul Hasan, Md. Rafiqul Islam, and Amrita Ghosh Mugdha
8 Fast Implementation for Computational Method of Optimum Attacking Play in Rugby Sevens - Kotaro Yashiro and Yohei Nakada
9 A Review of Various Sign Language Recognition Techniques - Rashmi S. Gaikwad and Lalita S. Admuthe
10 A Complete Solution to a Long-Run Sand Augmentation Problem Under Uncertainty - Hidekazu Yoshioka and Haruka Tomobe
11 Real-Time One-Hand Indian Sign Language Alphabets and Numbers Recognition in Live Video Using Fingertip Distance Feature - Rakesh R. Savant, Jitendra V. Nasriwala, and Preeti P. Bhatt
12 Structural and Optical Analysis of Hydrothermally Synthesized Molybdenum Disulfide Nanostructures - Nipom Sekhar Das, Koustav Kashyap Gogoi, Avijit Chowdhury, and Asim Roy
13 IRIS Image Encryption and Decryption Based Application Using Chaos System and Confusion Technique - K. Archana, Sharath Sashi Kumar, Pradeep P. Gokak, M. Pragna, and M. L. J. Shruthi
14 Comparative Study of Aero and Non-aero Formula Student Type Race Car Using Optimum Lap - Shulabh Yadav, Tirth Lodhiya, Rajan Swami, and Shivam Prajapati
15 Simulation and Stabilization of a Custom-Made Quadcopter in Gazebo Using ArduPilot and QGroundControl - Nakul Nair, K. B. Sareth, Rao R. Bhavani, and Ashish Mohan
16 Construction of Reliability Sampling Plans Using Dagum Distribution Under Type-I Censoring - R. Vijayaraghavan, K. Sathya Narayana Sharma, and C. R. Saranya
17 Defect Detection Using Correlation Approach for Frequency Modulated Thermal Wave Imaging - Anju Rani, Vanita Arora, K. Ramachandra Sekhar, and Ravibabu Mulaveesala
18 Machine Learning Models for Predictive Analytics in Personal Finance - Rishabh Kalai, Rajeev Ramesh, and Karthik Sundararajan
19 MNIST Image Classification Using Convolutional Neural Networks - Ashesh Roy Choudhuri, Barnali Guha Thakurata, Bipsa Debnath, Debanjana Ghosh, Hrittika Maity, Neela Chattopadhyay, and Rupak Chakraborty
20 Double Sampling Plans for Life Test Based on Marshall–Olkin Extended Exponential Distribution - R. Vijayaraghavan, C. R. Saranya, and K. Sathya Narayana Sharma
21 Detection of Objects Using a Fast R-CNN-Based Approach - Amlan Dutta, Ahmad Atik, Mriganka Bhadra, Abhijit Pal, Md Akram Khan, and Rupak Chakraborty
22 Modeling and Simulation of Electric Vehicle with Synchronous Reluctance Motor - Akash S. Prasad, T. Prabu, Prabhu Selvaraj, and M. Govindaraju
23 Performance Analysis of Disc Type Magnetorheological Brake with Tapered Disc - Peri Krishna Karthik and T. Jagadeesha
24 Dynamic Characteristics Analysis of Kirloskar Turn Master35 Machine Tool Bed with Different Polymer Concrete Materials - Shaik Faazil Ahmad and T. Jagadeesha
25 Comparative Study of Magnetorheological Brakes from Magnetic Theory Perspective by Finite Element Methods - A. Hafsana, Athul Vijay, Manas P. Vinayan, Bijja Kavya, and T. Jagadeesha
26 Modal and Fatigue Analysis of Ultrasonic Machining Tool for Performance Analysis - Mehdi Mehtab Mirad, Saka Abhijeet Rajendra, Jasper Ramon, and Bipul Das
27 Effect of Suspension Parameter on Lateral Dynamics Study of High Speed Railway Vehicle - Yamika Patel, Vikas Rastogi, and Wolfgang Borutzky
28 A Weighted Fuzzy Time Series Forecasting Method Based on Clusters and Probabilistic Fuzzy Set - Krishna Kumar Gupta and Sanjay Kumar
29 Modeling Clusters in Streamflow Time Series Based on an Affine Process - Hidekazu Yoshioka and Yumi Yoshioka
30 Assessing and Predicting Urban Growth Patterns Using ANN-MLP and CA Model in Jammu Urban Agglomeration, India - Vishal Chettry and Keerti Manisha
31 Some Investigations on CdSe/ZnSe Quantum Dot for Solar Cell Applications - K. R. Kavitha, S. Vijayalakshmi, B. MuraliBabu, and V. DivyaPriya
32 Design and Finite Element Analysis of a Mechanical Gripper - Mousam Bhagawati, Md. Asaduz Zaman, Rupam Deka, Krishnava Prasad Bora, Partha Protim Gogoi, Maharshi Das, and Nandini Arora
33 Analysis of Fractional Calculus-Based MRAC and Modified Optimal FOPID on Unstable FOPTD Processes - Deep Mukherjee, G. Lloyds Raja, Palash Kundu, and Apurba Ghosh
34 Chaotic Lorenz Time Series Prediction via NLMS Algorithm with Fuzzy Adaptive Step Size - Rodrigo Possidônio Noronha
35 Channel Adaptive Equalizer Design Based on FIR Filter via FVSS-NLMS Algorithm - Rodrigo Possidônio Noronha
36 Direct Adaptive Inverse Control Based on Nonlinear Volterra Model via Fractional LMS Algorithm - Rodrigo Possidônio Noronha
37 Indirect Adaptive Inverse Control Synthesis via Fractional Least Mean Square Algorithm - Rodrigo Possidônio Noronha
38 Thermo-Economic Analysis for the Feasibility Study of a Binary Geothermal Power Plant in India - Shivam Prajapati, Shulabh Yadav, Raghav Khandelwal, Priyam Jain, Ashis Acharjee, and Prasun Chakraborti
39 Hybrid 3D-CNN Based Airborne Hyperspectral Image Classification with Extended Morphological Profiles Features - R. Anand, S. Veni, and P. Geetha
40 Numerical Approximation of System of Singularly Perturbed Convection–Diffusion Problems on Different Layer-Adapted Meshes - Sonu Bose and Kaushik Mukherjee
41 Role of Decision Making for Effective Health Care - Sabuzima Nayak, Manisha Panda, and Ripon Patgiri
42 Data Secrecy: Why Does It Matter in the Cloud Computing Paradigm? - Ripon Patgiri, Malaya Dutta Borah, and Laiphrakpam Dolendro Singh
43 A Survey on Optimization Parameters and Techniques for Crude Oil Pipeline Transportation - Amrit Sarkar and Adarsh Kumar Arya
44 Design and Simulation of Higher Efficiency Perovskite Heterojunction Solar Cell Based on a Comparative Study of the Cell Performances with Different Standard HTLs - Pratik De Sarkar and K. K. Ghosh
45 6G Communication: A Vision on Deep Learning in URLLC - Ashmita Roy Medha, Muskan Gupta, Sabuzima Nayak, and Ripon Patgiri
46 AHP-GRA Integrated Methodology for Decision-Making in WEDM of Ti-6Al-4V Alloy - D. Devarasiddappa, M. Chandrasekaran, and K. K. Mandal
47 Application of Taguchi-DEAR Method for Parametric Optimization of CI Engine with Waste Cooking Oil Biodiesel - Prasanta Kumar Choudhury, Niraj Kashyap, and Biplab Das
48 Conjugate Heat Transfer Analysis in a Composite Building Wall: Effect of Double Plaster-Brick-Glass Wool - Biswajit Nath, Sujit Roy, Ankur Gupta, and Suresh Gogada
49 Performance Enhancement of Solar Air Heater with the Application of Trapezoidal Ribs on the Absorber Plate - Suresh Gogada, Sujit Roy, Ankur Gupta, and Biswajit Nath
50 Investigation of Hybrid Fiber-Reinforced Concrete Beam–Column Joint Behavior Using Fruit Fly Optimal NN - Pallab Das
Author Index

About the Editors

Dr. Biplab Das is presently working as Assistant Professor in the Department of Mechanical Engineering, National Institute of Technology Silchar, India. He completed his Ph.D. from NERIST, Itanagar, India, in the year 2014. Later, he pursued his Post-Doctoral research at the University of Idaho, USA. He is the recipient of the prestigious Bhaskara Advance Solar Energy (BASE) fellowship from IUSSTF and DST, Government of India. He is also awarded the “DBT Associateship” by the Department of Biotechnology, Government of India. He has 12+ years of experience in teaching and research and has published more than 60 refereed international/national journal/conference papers. Presently, he is actively involved in 08 ongoing sponsored projects to develop a solar thermal system for North East India, worth 0.268 billion INR, sponsored by SERB, DST, the Ministry of Power, and the Ministry of Climate Change, Government of India. He is guiding 06 Ph.D. scholars. He has ongoing research activities in collaboration with Jadavpur University, India, IIT Guwahati, India, University of Idaho, USA, and Ulster University, UK. Dr. Ripon Patgiri is Assistant Professor at the Department of Computer Science and Engineering, National Institute of Technology Silchar. He received his Bachelor Degree from the Institution of Electronics and Telecommunication Engineers, New Delhi in 2009, M.Tech. degree from Indian Institute of Technology Guwahati in 2012 and Ph.D. from National Institute of Technology Silchar in 2019. After his M.Tech. degree, he joined as Assistant Professor at the Department of Computer Science and Engineering, National Institute of Technology Silchar in 2013. He has published numerous papers in reputed journals, conferences and books. His research interests include bloom filters, networking, security, privacy, secrecy and communication. He is a senior member of IEEE. He is a member of ACM and EAI. He is a lifetime member of ACCS, India. Also, he is an associate member of IETE. He was General Chair of the 6th International Conference on Advanced Computing, Networking, and Informatics and the International Conference on Big Data, Machine Learning and Applications. He is Organizing Chair of the 25th International Symposium on Frontiers of
Research in Speech and Music and the International Conference on Modeling, Simulations and Applications. He is Convenor, Organizing Chair and Program Chair of the 26th annual International Conference on Advanced Computing and Communications (ADCOM 2020). He is an area editor of the EAI Endorsed Transactions on Internet of Things. He is also an editor of a multi-authored book, titled Health Informatics: A Computational Perspective in Healthcare, in the book series Studies in Computational Intelligence, Springer. Also, he is writing a monograph book, titled Bloom Filter: A Data Structure for Computer Networking, Big Data, Cloud Computing, Internet of Things, Bioinformatics and Beyond, Elsevier. He is also an editor of the contributed volumes Principles of Big Graph: In-depth Insight, Advances in Computers, Elsevier, and Principles of Social Networking: The New Horizon and Emerging Challenges, Smart Innovation, Systems and Technologies (SIST), Springer. He serves as an editor of several conference proceedings, including Proceedings of International Conference on Big Data, Machine Learning and Applications (LNNS, Springer, 2021), Modeling, Simulation and Optimization (SIST, Springer, 2021), and Big Data, Machine Learning, and Applications (CCIS, Springer, 2020). Prof. Sivaji Bandyopadhyay is Director of National Institute of Technology Silchar since December 2017. He is Professor of the Department of Computer Science and Engineering, Jadavpur University, India, where he has been serving since 1989. He is attached as Professor, Computer Science and Engineering Department, National Institute of Technology Silchar. He has more than 300 publications in reputed journals and conferences. He has edited two books so far. His research interests are in the areas of natural language processing, machine translation, sentiment analysis and medical imaging, among others. He has organized several conferences and has been Program Committee Member and Area Chair in several reputed conferences. He has completed international funded projects with France, Japan and Mexico. At the national level, he has been Principal Investigator of several consortium mode projects in the areas of machine translation, cross-lingual information access and treebank development. At present, he is Principal Investigator of an Indo-German SPARC project with the University of Saarlandes, Germany, on Multimodal Machine Translation and the Co-PI of several other international projects. Prof. Valentina Emilia Balas is currently Full Professor in the Department of Automatics and Applied Software at the Faculty of Engineering, “Aurel Vlaicu” University of Arad, Romania. She holds a Ph.D. Cum Laude in Applied Electronics and Telecommunications from the Polytechnic University of Timisoara. She is the author of more than 350 research papers in refereed journals and international conferences. Her research interests are in intelligent systems, fuzzy control, soft computing, smart sensors, information fusion, modeling and simulation. She is Editor-in-Chief of the International Journal of Advanced Intelligence Paradigms (IJAIP) and of the International Journal of Computational Systems Engineering (IJCSysE), a member of the Editorial Board of several national and international journals, and an evaluator expert for national and international projects and Ph.D. theses. She is the director of the Intelligent Systems Research Centre in Aurel Vlaicu University of Arad and Director of
the Department of International Relations, Programs and Projects in the same university. She served as General Chair of the International Workshop Soft Computing and Applications (SOFA) in nine editions organized in the interval 2005–2020 and held in Romania and Hungary. She participated in many international conferences as Organizer, Honorary Chair, Session Chair, Member in Steering, Advisory or International Program Committees and Keynote Speaker. Now she is working in a national project with EU funding support: BioCell-NanoART = Novel Bio-inspired Cellular NanoArchitectures for Digital Integrated Circuits, 3M Euro from the National Authority for Scientific Research and Innovation. She is a member of the European Society for Fuzzy Logic and Technology (EUSFLAT), a member of the Society for Industrial and Applied Mathematics (SIAM) and a senior member of IEEE, a member of the Technical Committee on Fuzzy Systems (IEEE Computational Intelligence Society), chair of Task Force 14 of the Technical Committee on Emergent Technologies (IEEE CIS), and a member of the Technical Committee on Soft Computing (IEEE SMCS). She was past Vice-president (responsible with Awards) of the IFSA (International Fuzzy Systems Association) Council (2013–2015), is a Joint Secretary of the Governing Council of the Forum for Interdisciplinary Mathematics (FIM), a multidisciplinary academic body, India, and recipient of the “Tudor Tanasescu” Prize from the Romanian Academy for contributions in the field of soft computing methods (2019).

Chapter 1

Empirical Study of Far-Field Crop Quality Examination Models: A Numerical Outlook

Akshay Dhande and Rahul Malik

Abstract Analysis of crop quality from image data is a multi-domain task, which includes effective image capture, pre-processing, fusion, segmentation, feature extraction, selection of optimum features, classification and post-processing. In order to perform these tasks with utmost efficiency, several architectures have been proposed by researchers and satellite image processing system designers. These architectures vary widely in terms of performance parameters like accuracy, response time, applicability to different types of imagery, image data requirement, etc. For instance, architectures that are well suited for cotton crop analysis do not give good accuracy for other crop types, while generalized architectures that have high accuracy underperform when applied to specialized crop types. Moreover, selection of algorithms at the micro-level is also difficult due to the large variety of architectures available for each of the internal tasks. In order to reduce this ambiguity, the underlying text analyzes various state-of-the-art recent models and architectures that are proposed for high efficiency crop analysis using satellite image processing. The analysis is supported by statistical comparison and recommendations for these architectures w.r.t. their field of application and desired system performance capabilities. This text also recommends optimizations to the reviewed architectures in order to further improve their parametric performance and large-scale deployability.

1.1 Introduction

In order to design a highly efficient satellite image processing system that can effectively analyze crop quality, disease spread and other quantitative and qualitative features, a large number of inter-dependent multi-domain architectures are used [1].
These architectures include, but are not limited to, efficient data capturing architectures, pre-processing and denoising architectures, data fusion architectures, image segmentation and region-of-interest (ROI) extraction architectures, feature extraction and selection architectures, classification architectures, and post-processing architectures for temporal evaluation. Efficiency of the overall satellite image processing system is highly dependent upon two major factors: the efficiency of the individual architectures, and the efficiency of interfacing the results of one architecture with the input of the consecutive architecture [2]. These micro-level architectures can be observed from Fig. 1.1, wherein operations ranging from image enhancement to feature detection are mentioned. These operations are interdependent on each other; for instance, a hyperspectral image with 'N' bands can use a fusion algorithm (like principal component analysis (PCA), Brovey, etc.) for combining them. The fused image pixels (FIP) can be represented using the following Eq. 1.1,

FIP_{i,j} = F(I_{1_{i,j}}, I_{2_{i,j}}, I_{3_{i,j}}, \ldots, I_{N_{i,j}})    (1.1)

where (i, j) is the pixel position and I_{m_{i,j}} is the m-th hyperspectral band, which is fused via the function 'F'. The output of this fusion block must be processable by the segmentation block in terms of grey level compatibility, colour band compatibility, etc. Moreover, this output must be highly efficient in terms of the quality of image information present after fusion of hyperspectral bands [3]. Therefore, to design a highly efficient satellite image processing system, the internal design of these blocks must be done with high efficiency. To perform this task, a large number of architectures have been proposed by researchers and system designers over the years. Most of these architectures use a combination of pixel-level or window-level processing with deep learning for effectively classifying application-specific satellite images into multiple classes [4]. A survey of these algorithms along with their nuances, drawbacks, and recommended enhancement methodologies can be comprehensively studied from the next section. This is followed by an application specific statistical analysis of these algorithms in terms of accuracy, processing delay, and other parameters. This will assist researchers and system designers to select the most optimum architecture combination for their application. Finally, this text concludes with some interesting observations about these techniques, and recommends ways to further fuse and improve their individual performance.

Fig. 1.1 Steps required for designing a satellite image processing architecture
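
Eq. 1.1 leaves the fusion function 'F' abstract. As one concrete possibility, the sketch below fuses an N-band hyperspectral stack into a single image by projecting onto the first principal component (PCA), one of the fusion choices named above. This is a minimal illustrative sketch, not the implementation of any reviewed system; the function name pca_fuse and the synthetic input are assumptions.

```python
import numpy as np

def pca_fuse(bands):
    """Fuse N hyperspectral bands (each H x W) into one image via the first
    principal component; one possible instantiation of 'F' in Eq. 1.1."""
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(np.float64)   # one row per band
    X -= X.mean(axis=1, keepdims=True)            # centre each band
    cov = X @ X.T / (X.shape[1] - 1)              # N x N inter-band covariance
    _, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    leading = eigvecs[:, -1]                      # weights of the first component
    fused = (leading[:, None] * X).sum(axis=0)    # project pixels onto it
    return fused.reshape(h, w)

# Example: fuse 8 synthetic 64 x 64 bands into a single FIP image
fip = pca_fuse(np.random.rand(8, 64, 64))
print(fip.shape)  # (64, 64)
```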

1.2 Literature Review

A large number of models have been proposed for crop image processing via satellite data, wherein processes like change detection, classification, disease detection, prediction of plantation type, etc. are performed. An example of such a system that performs crop classification via bio-inspired models can be observed in [5], wherein Particle Swarm Optimisation (PSO), Maximum Likelihood Classifier (MLC) and Ant Colony Optimisation (ACO) are used. The PSO algorithm uses a particle velocity update process, wherein the image pixel is considered as a particle, and each pixel's intensity is updated according to the following equation,

P_{i+1} = w \cdot P_i + W_s \cdot r_1 \cdot (P_{best} - P_i) + W_c \cdot r_2 \cdot (PG_{best} - P_i)    (1.2)

where P_i is the current particle intensity and P_{i+1} is the next particle intensity; w, W_s and W_c are the particle weight, social learning weight and cognitive learning update values; r_1 and r_2 are random numbers; while P_{best} and PG_{best} are the local and global best pixel values, which are obtained using expected pixel value functions. Similarly, the MLC algorithm uses a likelihood function for evaluation of the similarity between expected pixel values and current pixel values using the following equation,

G_i(P) = -\log_n \left[ \frac{P - P_{exp}}{N} \ast \frac{P - P_{exp}}{N} \right]    (1.3)

where, ‘P’ is the pixel value, Pexp is expected pixel value, ‘N’ is number of pixels, and ‘G’ is the output pixel value. Due to these identities, the final pixel level closely matches with type f crop being classified. It is observed that MLC has an accuracy of 87.8%, while PSO has an accuracy of 98.56% on the same dataset. ACO provides a similar accuracy of 98%, but with reduced complexity as compared with PSO, thereby making it useful for high performance applications. A similar application of crop classification for agricultural lands can be observed in [6], wherein feature selection process is used in order to improve classification

4

A. Dhande and R. Malik

Fig. 1.2 General-purpose land type classification model [6]

performance for supervised classifiers like Naïve Bayes (NB), J48, Random Forest (RF), and Background Foreground (BF) classifier. The architecture diagram for this model can be observed from Fig. 1.2, wherein general-purpose processing blocks are showcased for land classification. It is observed that BF classifier has an average accuracy of 83%, J48 graft classifier has an accuracy of 83.5%, NB classifier has an accuracy of 77.5%, Binary tree (BTree) classifier has an accuracy of 83.59% and RF classifier has an accuracy of 87.75% for land-type classification. Satellite images can also be used for detection of crop damages using specialized feature extraction models. An example of this can be observed from [7], wherein Novel Disaster Vegetation Damage Index (DVDI) is used for identification of crop damages due to floods. The DVDI evaluation requires information from different arenas, which includes crop type, flood inundation extent, and crop-conditional profile. These inputs are combined in order to generate a crop-specific flood damage model, which assists in estimation of crop damage due to the flood. The value of DVDI is evaluated using Eq. 1.4, wherein surface reflectance values for near-infrared (NIR) and visible red (R) bands is used. ⎛ DVDI = ⎝ 

Pnir −Pr Pnir +Pr Pnir −Pr Pnir +Pr







Pnir −Pr Pnir +Pr







med  ⎠ Pnir −Pr Pnir +Pr med after

−   Pnir −Pr Pnir −Pr − Pnir +Pr Pnir +Pr med    − ⎝ Pnir −Pr −Pr − PPnir Pnir +Pr +P nir r ⎛

max

max

med

⎞ ⎠

(1.4) before

where, Pnir and Pr are the pixel values of NIR and R bands respectively, while ‘med’ and ‘max’ are their maximum values, these values are taken ‘before’ and ‘after’

1 Empirical Study of Far-Field Crop Quality Examination …

5

flooding, in order to find out the extent of flood damage. It is observed that the DVDI feature vector is able to identify damages with over 85% accuracy, thereby making it useful for coarse analysis in multiple applications. This accuracy can be further improved via use of a model that can process multiple image types for evaluation of R and NIR images. Such a model can be observed from [8], wherein images from low orbiting satellites and small unmanned aerial systems (UAS) are taken for quantification of crop lodging. The model evaluated crop surface features, along with crop colour features like percentages of red, green, and blue pixels; excess Green ratio (E × G), Green to blue ratio (GBR), and Green red vegetation index (GRVI) as observed from the following equations, Per.R =

R R+G+B

(1.5)

Per.G =

G R+G+B

(1.6)

Per.B =

B R+G+B

(1.7)

E ×G =2∗G− R− B G B

(1.9)

G−R G+R

(1.10)

GBR = GRVI =

(1.8)

These features are given to a threshold classifier in order to obtain an accuracy of 90% for crop lodging classification. An application of these features and classification models can be observed in [9], wherein olive trees are detected and counted from satellite images. It uses multi-level thresholding scheme along with circular Hough transform in order to achieve an accuracy of 96% for Olive tree type detection, and tree counting. This method can be extended for different crop types and different applications like crop monitoring, disease detection, etc. by modification of multilevel threshold algorithm, via the usage of high complexity classification models like support vector regression (SVR), partial least squares regression (PLSR), random forest regression (RFR), and extreme learning regression (ELR) as suggested in [10]. These models utilize canopy spectral and structural information in order to evaluate properties like leaf area index (LAI), aboveground biomass (AGB), and nitrogen concentration of leaf (NCL). Spectral and structural properties include Red-edge chlorophyll index, Green normalized difference vegetation index, Ratio vegetation index, enhanced vegetation index, Modified chlorophyll absorption in reflectance index, Wide dynamic range vegetation index, Normalized ratio vegetation index, transformed vegetation index, Normalized difference vegetation index, Normalized

6

A. Dhande and R. Malik

Fig. 1.3 Multi-feature and multiple classifier-based crop properties estimation model [10]

difference red-edge, enhanced vegetation index, Modified chlorophyll absorption in reflectance index, Structure insensitive pigment index, and Visible atmospherically resistance index for assisting the classification process. This process can be observed from Fig. 1.3, wherein different operations like atmospheric correction, radiometric calibration, sharpening, reflectance imagery registration, etc. are seen. It is observed that Partial Least Squares Regression (PLSR) method achieves an accuracy of 79.2% for satellite images, 78.7% for UAV images, and 85.2% when satellite images are combined with UAV image; while Support Vector Regression model achieves 85.6%, 76.3%, and 89.8% accuracy for same combination of images. This accuracy is improved using RFR model, wherein accuracies of 81.4%, 76.9%, and 91.8% are achieved, while the ELR model further tunes this performance via extreme feature learning to 87%, 80.5%, and 92.3% respectively for satellite, UAV and combined images. This accuracy can be further improved with extended feature extraction models as suggested in [11], wherein scattering parameters are extracted from multitemporal dual polarized and quad polarized synthetic aperture radar (SAR) images.

1 Empirical Study of Far-Field Crop Quality Examination …

7

Fig. 1.4 SSU and GRU for efficient feature extraction and classification [12]

Due to the use of these scattering parameters, an accuracy of 91% is achieved via support vector machine (SVM) classification model, which is higher than nonscattering parameterized SVM model that has an accuracy of 85% for crop identification. This accuracy can be improved via the use of convolutional neural networks (CNN) and its variants. The work in [12] proposes such a CNN variant that uses Temporal Attention Networks (TAN) for crop classification using multitemporal and multisensory crop data. It uses Spatial Spectral Unification (SSU) features along with gated recurrent units (GRU) for efficient image classification. The model converts query images into different temporal formats via temporal transformation functions (G functions) as observed from Fig. 1.4, wherein a spectral context vector (SCV) is formed via combination of these images.


This SCV is given to a CNN and GRU-based classification model in order to categorize input images into different crop types. Due to the combination of the SSU and GRU models, an accuracy of 99.09% is achieved, which is higher than RF (92.85%), SVM with a radial basis kernel function (SVM-RBF) (94.11%), and a simple CNN-GRU (96.69%) that does not use SSU and SCV feature vectors. Due to this high accuracy on multiple crop types, the proposed model can be used for high-efficiency classification applications. Vegetation indices convey a wide variety of information about satellite images; the work in [13] evaluates these indices and compares their performance for estimating tobacco crop yield. Vegetation indices like the Normalized Difference Index 45 (NDI-45) and Normalized Difference Vegetation Index (NDVI) are evaluated, and their data is given to an artificial neural network (ANN) for effective classification. It is observed that using images from multiple sources and stacking them together to evaluate NDVI and NDI45 values obtains an accuracy of 95.81%, which is higher than the 88.5% accuracy of single-stacked NDVI and NDI45 evaluation, indicating that multiple stacking of images should be performed for high-efficiency classification. This can also be observed from [14], wherein multiple-source remote imagery is combined with multiple-source proximal sensing image data in order to evaluate stacked indices for estimation of weed and crop leaf area index (LAI). These LAI values can be used for yield identification with an accuracy of 85% across multiple crop types. This estimation performance can be improved via the use of Gaussian processes as indicated in [15], wherein moderate resolution imaging spectroradiometer (MODIS) data is combined with soil moisture active passive (SMAP) data. The model is able to evaluate Gaussian probabilities for corn, wheat, and soya crops with an accuracy of 81%, which can be improved via the use of deep learning models for classification. Such deep learning models can be observed from [16], wherein a refined network (RefNet), specific spatial template matching (SSTM), and a single-shot multibox detector (SSMB) are used. The model uses panchromatic images in order to detect satellite image components like ships, planes, storage units, bridges, and harbours, as observed from Fig. 1.5, wherein different training sets are generated for different satellite image types. Due to the use of these deep learning models, an accuracy of 86.67% is achieved, which is higher than faster region-based CNN (Faster R-CNN) (55%), You Only Look Once (YOLO) (62%), SSMB (85.3%), and SSMB with RefNet (86.1%) for satellite image component classification. An application of these models for olive tree detection and classification can be observed from [17], wherein the red band is used. A classification accuracy of 98.6% is achieved on the SIGPAC dataset, which is higher than the Gaussian process classifier (GPC), which has an accuracy of 96.2% for olive classification. Another application of deep learning models, for rice paddy classification, can be observed from [18], wherein the Sentinel-1 dataset is used along with domain-adapted recurrent neural networks (DARNN). The DARNN model is custom designed for the classification application, thereby limiting its scalability but increasing the average classification accuracy.
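Several of the reviewed models ([12], [18]) pair a convolutional feature extractor with a recurrent unit over a temporal stack of images. The hedged Keras sketch below shows a generic CNN-plus-GRU crop classifier of this kind; the input dimensions, filter counts, and class count are placeholder assumptions, and the SSU features and attention mechanism of [12] are not reproduced:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

T, H, W, C, NUM_CLASSES = 6, 64, 64, 4, 8  # assumed: 6 acquisition dates, 4 bands

# A per-date CNN extracts spatial features; a GRU then models the temporal sequence.
model = models.Sequential([
    layers.Input(shape=(T, H, W, C)),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.GRU(64),                                   # temporal aggregation
    layers.Dense(NUM_CLASSES, activation="softmax"),  # crop-type probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```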
Fig. 1.5 Deep learning model based on SSTM, SSMB and RefNet for satellite image processing [16]

It is observed that data augmentation (DA), when combined with semi-supervised classification learning material (SSCLM), outperforms single-image processing, thereby making the augmentation process an integral part of any classification system. The accuracy of DARNN with DA is 96.42% for agricultural, urban, forestry, and water classification, while the accuracy of SSCLM with DARNN is nearly 95.92%, which is higher than primary learning material (PLM) based DARNN (95.45%) and PLM-based fully connected recurrent neural networks (FCRNN) (94.91%), where FCRNNs are used for general-purpose crop classification. This accuracy must be evaluated on a large number of datasets before these models are applied to real-time satellite images. A description of such a large dataset can be observed from [19], wherein the Campo Verde database is introduced. It consists of 513 land-use fields with varying crop and other agricultural classes, along with 14 synthetically processed Sentinel-1 images and 15 LANDSAT-8 Operational Land Imager (OLI) images. When combined, these images form a large dataset that can be used for multiple applications.

Change detection is another application of satellite image processing, wherein temporal changes in images are detected via machine learning models. The work in [20] proposes a Siamese autoencoder (SA) based fully convolutional neural network (FCNN) for detection of change in satellite images. Due to the use of the SAFCNN model along with global max pooling, as observed in Fig. 1.6, an accuracy of 93% is achieved, which is higher than the SAFC difference network (90%), the SAFC concatenated network (91%), the residual network (ResNet) (91%), and the kernel density estimation global algorithm (KDEGA) (78%).

Fig. 1.6 SAFCNN for change detection [20]

Similar systems can be observed from [21] and [22], wherein different vegetation indices, like the leaf area index and the moderate resolution imaging spectroradiometer enhanced vegetation index, are used to estimate crop type with accuracies of 80% and 86.9%, respectively. These accuracies can be improved via the use of deep learning models
as suggested in [23], where two different deep neural network (DNN) models, namely rain type classification U-Net (RTC-UNet) and an RTC fully connected neural network (RTC-FCNN), are used for segmentation of rain pixels from microwave satellite images. An accuracy of 96% is achieved using the RTC-UNet architecture, while an accuracy of 96.6% is achieved with the RTC-FCNN architecture, both of which can be used for real-time classification purposes. Such deep learning models can also be used, with high accuracies, in hybrid applications like land mapping with change detection [24], high-efficiency on-board processing with classification [25], and classification with post-processing [26]. It is observed that a change detection [24] accuracy of 84.1% is achieved using the classification and regression trees (CART) model, 86.27% with the SVM model, 86.46% with the RF model, and 88.56% with the fused CART + SVM + RF model, while the generalized deep clustering model (GDC-SAR) [26] is able to obtain an accuracy of 92.3% for surface classification, 95.5% for double-bounce classification, and 80.5% for volume classification. This performance is superior to Gaussian mixture model clustering (GMMC), which achieves accuracies of 90.5%, 86.3%, and 68% for the respective applications. It is observed from [24] that a hybrid combination of classification algorithms improves classification performance. This can also be observed from [27], wherein an ensemble of CNN models is used for classification of geospatial land images. The model uses a combination of various residual neural networks (ResNets) and dense neural networks (DenseNets) in order to obtain the final classification output. The structure of this ensemble network can be observed from Fig. 1.7, wherein both the training and testing processes can be seen. The model initially loads weights from an ImageNet classification model, and then uses these weights for training both the DenseNet and ResNet classifiers. Once trained, a fusion layer is used to combine these models for final classification. Due to the combination of these networks, an accuracy of 92.4% is achieved, which is higher than DenseNet (91.6%), ResNet (89.24%), and long short-term memory (LSTM) (87.15%) classifiers.

Fig. 1.7 Ensemble CNN models for high efficiency classification [27]

Applications of these classifiers for precision agriculture [28], detection of twister disease in onion [29], accurate carrot yield forecasting [30], automated plant counting [31], irrigation water usage [32], county-level corn yield forecasting [33], and crop nutrition mapping [34] can be observed. These applications use different classifiers to achieve different levels of accuracy; for instance, [29] uses thresholding with vegetation indices (TH VI) to obtain an accuracy of 83.3% for disease detection, [30] uses linear and non-linear classifiers with vegetation indices (LNL-VI) to obtain an accuracy of 90.3% for yield forecasting, [31] uses transfer learning (TL) to obtain an accuracy of 95% for automated plant counting, and [33] uses vegetation indices with a random forest (RF) classifier to obtain an accuracy of 87% for crop yield prediction. Thus, it can be observed that machine learning and deep learning models are highly suited for classification and processing of satellite crop images. This can be further observed from [35, 36] and [37], wherein different machine learning models like Gaussian process regression (GPR), decision tree (DT), back propagation neural network (BPNN), and k-nearest neighbours are used for yield prediction. These models are able to achieve accuracies of 83%, 79%, 85%, and 69.5%, respectively, which can be further improved by adding more parameters for network training and by the use of deep learning models. The former approach can be observed in [38], wherein the effect of satellite surface moisture conditions on the classification performance of deep learning models is evaluated. It is observed that as moisture levels increase, the accuracy of image capture reduces, thereby reducing the overall classification performance of the system. Such observations can be used for improving the overall accuracy of satellite image processing systems. This can also be seen from [39], wherein dust emission probability (DEP) is evaluated using a highly precise image capturing model via estimation of the Normalized Difference Vegetation Index (NDVI) and soil moisture. It is observed that DEP is evaluated with an accuracy of 85.3%, which can be further improved via the use of deep CNN models. Satellite images can also be used to detect temporal variations in crop yield before and after major events. It can be observed from [40] that satellite images were used to identify the crop situation in India before and after the COVID-19 outbreak via NDVI and other vegetation indices. This observation can be combined with evaluation of leaf chlorophyll content and nitrogen levels using the Green Normalized Difference Vegetation Index (GNDVI) with nearly 79% accuracy, as seen in [41]. This evaluation assists in determining yield quality for small- to moderate-scale fields. Secondary research has also been done on evaluating sustainable agriculture from spaceborne images [42], smallholder agriculture analysis [43], fertilizer estimation [44], and moisture-to-crop-pattern mapping [45], with moderate accuracy. The efficacy of these applications can be improved via the use of deep learning models as suggested in [46], wherein classification and segmentation models are proposed for high-efficiency estimation of yield and crop disease types from satellite images. Thus, it can be observed that deep learning models have become a de facto standard for processing satellite images with high accuracy. A statistical analysis of these models, and of the other reviewed models, with respect to their field of application, overall accuracy, and complexity of deployment is presented in the next section. This will assist researchers and system designers in selecting the best possible algorithm type for a given application deployment.
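To make the ensemble-with-fusion idea of [27] concrete, the hedged Keras sketch below averages the softmax outputs of ImageNet-initialized DenseNet and ResNet branches; the backbone choices, input size, class count, and averaging fusion are illustrative assumptions rather than the exact Hydra architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, applications

NUM_CLASSES = 10                      # assumed number of land-cover classes
inputs = layers.Input(shape=(224, 224, 3))

def branch(backbone_fn, name):
    # Backbone initialized with ImageNet weights, topped with its own classifier head.
    base = backbone_fn(include_top=False, weights="imagenet", input_tensor=inputs)
    x = layers.GlobalAveragePooling2D(name=f"{name}_gap")(base.output)
    return layers.Dense(NUM_CLASSES, activation="softmax", name=f"{name}_out")(x)

densenet_out = branch(applications.DenseNet121, "densenet")
resnet_out = branch(applications.ResNet50, "resnet")

# Fusion layer: average the class probabilities of the two branches.
fused = layers.Average(name="fusion")([densenet_out, resnet_out])
ensemble = models.Model(inputs, fused)
ensemble.compile(optimizer="adam", loss="categorical_crossentropy")
```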

1.3 Statistical Analysis

In order to perform a statistical analysis of the reviewed algorithms, each algorithm is compared in terms of its application (App), overall accuracy (OA), and complexity of deployment (CD). The complexity of deployment is a fuzzy variable with values of Low (L), Medium (M), High (H), and Very High (VH), depending upon the number of processing components used for deployment. This analysis can be observed from Table 1.1, wherein each of the reviewed methods and its performance is tabulated. It can be observed that a major portion of the research has been done on classification, yield prediction, change detection, or detection and counting of plant types. Figure 1.8 showcases the performance of these algorithms for change detection, Fig. 1.9 for yield prediction, Fig. 1.10 for detection and counting of crops in satellite images, and Fig. 1.11 for classification (which includes disease detection and crop type classification). It can be observed that the SSU, GRU, and CNN [12], multiple VI with thresholding [8], ensemble CNN [27], and PSO [5] models outperform other linear models for classification of satellite image data, while RTC-FCNN [23], RTC-UNet [23], SAFCNN [20], and SVM + RF + CART [24] outperform other models for change detection.


Table 1.1 Statistical evaluation of different satellite imaging models for crop image processing

Method | App | OA (%) | CD
PSO [5] | Classification | 98.56 | M
ACO [5] | Classification | 98 | M
MLC [5] | Classification | 87.8 | M
RF [6] | Classification | 87.75 | M
DVDI with thresholding [7] | Change detection | 85 | L
Multiple VI with thresholding [8] | Classification | 90 | L
Hough transform with multi-level thresholding [9] | Detection and counting | 96 | M
PLSR Hybrid [10] | Classification | 85.2 | M
SVM UAV [10] | Classification | 85.6 | M
SVM Hybrid [10] | Classification | 89.8 | M
RFR UAV [10] | Classification | 87 | M
RFR Hybrid [10] | Classification | 92.3 | M
SVM with scattering parameters [11] | Classification | 91 | M
SVM with non-scattering parameters [11] | Classification | 85 | L
SSU, GRU, CNN [12] | Classification | 99.09 | VH
SSU and RF [12] | Classification | 92.85 | VH
SSU, SVM-RBF [12] | Classification | 94.11 | H
CNN-GRU [12] | Classification | 96.69 | H
ANN with NDVI and NDI45 multiple stacking [13] | Classification | 95.81 | H
ANN with NDVI and NDI45 single stacking [13] | Classification | 88.5 | H
LAI with thresholding [14] | Yield prediction | 85 | L
RefNet, SSTM, SSMB [16] | Classification | 86.67 | VH
SSMB [16] | Classification | 85.3 | H
RefNet and SSMB [16] | Classification | 86.1 | H
Red band with thresholding [17] | Detection and counting | 98.6 | L
GPC [17] | Detection and counting | 96.2 | M
DARNN and DA [18] | Classification | 96.42 | VH
DARNN and SSCLM [18] | Classification | 95.92 | VH
DARNN and CLM [18] | Classification | 95.45 | VH
FCRNN and PLM [18] | Classification | 94.91 | VH
SAFCNN [20] | Change detection | 93 | VH
SAFC DiffNet [20] | Change detection | 90 | VH
SAFC ConcNet [20] | Change detection | 91 | VH
ResNet [20] | Change detection | 91 | VH
MODIS with thresholding [22] | Change detection | 86.9 | L
RTC-UNet [23] | Change detection | 96 | VH
RTC-FCNN [23] | Change detection | 96 | VH
CART [24] | Change detection | 84.1 | H
SVM [24] | Change detection | 86.27 | H
RF [24] | Change detection | 86.46 | H
SVM + RF + CART [24] | Change detection | 88.56 | VH
GDC-SAR [26] | Classification | 90.5 | H
Ensemble CNN [27] | Classification | 92.4 | VH
DenseNet [27] | Classification | 91.6 | H
ResNet [27] | Classification | 89.24 | H
LSTM [27] | Classification | 87.15 | H
LNL-VI [30] | Yield prediction | 90.3 | L
TL [31] | Detection and counting | 95 | VH
RF [33] | Yield prediction | 87 | M
NDVI with thresholding [39] | Yield prediction | 85.3 | L

Fig. 1.8 Statistical evaluation of change detection models

Moreover, LNL-VI [30] and BPNN [35] outperform linear classification models for yield prediction, and red band thresholding [17] is the most recommended algorithm for detection and counting of crops. These algorithms can be selected by researchers in order to design highly efficient and accurate satellite image processing systems.


Fig. 1.9 Statistical evaluation of yield prediction models for satellite images

Fig. 1.10 Statistical evaluation of detection and counting models for satellite images

Fig. 1.11 Statistical evaluation of classification models for satellite image processing

1.4 Conclusion

From the statistical evaluation it can be observed that CNN-based models like SSU-GRU-CNN, ensemble CNN, RTC-FCNN, SAFCNN, and RTC-UNet are the most suited across different satellite image processing applications when applied to crop-based processing. These methods are matched by linear methods like PSO, multiple vegetation index evaluation, CART, RF, SVM, and BPNN if the number of extracted features is increased. It is highly recommended that CNN-based models be used for high-efficiency system design, and that multiple image sources be used to train these models. Moreover, it is also observed that transfer learning models have better performance than constrained training models, because the former are able to increase the number of training samples available for highly efficient network training. These models, when combined with data augmentation methods, should provide very high accuracy, but will also increase system complexity, which can be reduced by efficient feature selection methods. It is also observed that linear classification models can be improved via the use of CNN models, and their system complexities can be reduced via the use of pipelining and parallel processing components along with highly efficient on-chip image acquisition models.

References

1. Solano-Correa, Y.T., Bovolo, F., Bruzzone, L., Fernández-Prieto, D.: A method for the analysis of small crop fields in Sentinel-2 dense time series. IEEE Trans. Geosci. Remote Sens. 58(3), 2150–2164 (2020). https://doi.org/10.1109/TGRS.2019.2953652
2. Shelestov, A., et al.: Cloud approach to automated crop classification using Sentinel-1 imagery. IEEE Trans. Big Data 6(3), 572–582. https://doi.org/10.1109/TBDATA.2019.2940237
3. Liu, M.W., Ozdogan, M., Zhu, X.: Crop type classification by simultaneous use of satellite images of different resolutions. IEEE Trans. Geosci. Remote Sens. 52(6), 3637–3649 (2014). https://doi.org/10.1109/TGRS.2013.2274431
4. Moumni, A., Lahrouni, A.: Machine learning-based classification for crop-type mapping using the fusion of high-resolution satellite imagery in a semiarid area. Scientifica 2021, Article ID 8810279, 20 p (2021). https://doi.org/10.1155/2021/8810279
5. Omkar, S.N., Senthilnath, J., Mudigere, D., et al.: Crop classification using biologically-inspired techniques with high resolution satellite image. J. Indian Soc. Remote Sens. 36, 175–182 (2008). https://doi.org/10.1007/s12524-008-0018-y
6. Kalaivani, A., Khilar, R.: Crop classification and mapping for agricultural land from satellite images. In: Hemanth, D. (ed.) Artificial Intelligence Techniques for Satellite Image Analysis. Remote Sensing and Digital Image Processing, vol. 24. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-24178-0_10
7. Rahman, M.S., Di, L., Yu, E., et al.: Remote sensing based rapid assessment of flood crop damage using novel disaster vegetation damage index (DVDI). Int. J. Disaster Risk Sci. 12, 90–110 (2021). https://doi.org/10.1007/s13753-020-00305-7
8. Quiros Vargas, J., Khot, L.R., Peters, R.T., Chandel, A.K., Molaei, B.: Low orbiting satellite and small UAS-based high-resolution imagery data to quantify crop lodging: a case study in irrigated spearmint. IEEE Geosci. Remote Sens. Lett. 17(5), 755–759 (2020). https://doi.org/10.1109/LGRS.2019.2935830
9. Khan, A., Khan, U., Waleed, M., Khan, A., Kamal, T., Marwat, S.N.K., Maqsood, M., Aadil, F.: Remote sensing: an automated methodology for olive tree detection and counting in satellite images. IEEE Access, pp. 1–1 (2018). https://doi.org/10.1109/ACCESS.2018.2884199
10. Maimaitijiang, M., Sagan, V., Sidike, P., Daloye, A., Erkbol, H., Fritschi, F.: Crop monitoring using satellite/UAV data fusion and machine learning. Remote Sens. 12 (2020). https://doi.org/10.3390/rs12091357
11. Guo, J., Wei, P.-L., Liu, J., Jin, B., Su, B.-F., Zhou, Z.-S.: Crop classification based on differential characteristics of H/α scattering parameters for multitemporal quad- and dual-polarization SAR images. IEEE Trans. Geosci. Remote Sens., pp. 1–13 (2018). https://doi.org/10.1109/TGRS.2018.2832054
12. Li, Z., Chen, G., Zhang, T.: Temporal attention networks for multitemporal multisensor crop classification. IEEE Access 7, 134677–134690 (2019). https://doi.org/10.1109/ACCESS.2019.2939152
13. Khan, W., et al.: On the performance of temporal stacking and vegetation indices for detection and estimation of tobacco crop. IEEE Access 8, 103020–103033 (2020). https://doi.org/10.1109/ACCESS.2020.2998079
14. Asad, M.H., Bais, A.: Crop and weed leaf area index mapping using multi-source remote and proximal sensing. IEEE Access, pp. 1–1 (2020). https://doi.org/10.1109/ACCESS.2020.3012125
15. Martínez-Ferrer, L., Piles, M., Camps-Valls, G.: Crop yield estimation and interpretability with Gaussian processes. IEEE Geosci. Remote Sens. Lett. https://doi.org/10.1109/LGRS.2020.3016140


16. Hou, B., Ren, Z., Zhao, W., Wu, Q., Jiao, L.: Object detection in high-resolution panchromatic images using deep models and spatial template matching. IEEE Trans. Geosci. Remote Sens. 58(2), 956–970 (2020). https://doi.org/10.1109/TGRS.2019.2942103
17. Waleed, M., Tai-Won, U., Khan, A., Ahmad, Z.: An automated method for detection and enumeration of olive trees through remote sensing. IEEE Access, pp. 1–1 (2020). https://doi.org/10.1109/ACCESS.2020.2999078
18. Jo, H.-W., et al.: Deep learning applications on multitemporal SAR (Sentinel-1) image classification using confined labeled data: the case of detecting rice paddy in South Korea. IEEE Trans. Geosci. Remote Sens. 58(11), 7589–7601 (2020). https://doi.org/10.1109/TGRS.2020.2981671
19. Del'Arco Sanches, I., et al.: Campo Verde database: seeking to improve agricultural remote sensing of tropical areas. IEEE Geosci. Remote Sens. Lett. 15(3), 369–373 (2018). https://doi.org/10.1109/LGRS.2017.2789120
20. Mesquita, D.B., Santos, R.F.d., Macharet, D.G., Campos, M.F.M., Nascimento, E.R.: Fully convolutional siamese autoencoder for change detection in UAV aerial images. IEEE Geosci. Remote Sens. Lett. 17(8), 1455–1459 (2020). https://doi.org/10.1109/LGRS.2019.2945906
21. Zhan, X., Xiao, Z., Jiang, J., Shi, H.: A data assimilation method for simultaneously estimating the multiscale leaf area index from time-series multi-resolution satellite observations. IEEE Trans. Geosci. Remote Sens. 57(11), 9344–9361 (2019). https://doi.org/10.1109/TGRS.2019.2926392
22. Zhang, S., et al.: Developing a method to estimate maize area in North and Northeast of China combining crop phenology information and time-series MODIS EVI. IEEE Access 7, 144861–144873 (2019). https://doi.org/10.1109/ACCESS.2019.2944863
23. Choi, Y., Kim, S.: Rain-type classification from microwave satellite observations using deep neural network segmentation. IEEE Geosci. Remote Sens. Lett. https://doi.org/10.1109/LGRS.2020.3016001
24. Useya, J., Chen, S., Murefu, M.: Cropland mapping and change detection: toward Zimbabwean cropland inventory. IEEE Access 7, 53603–53620 (2019). https://doi.org/10.1109/ACCESS.2019.2912807
25. Horstrand, P., Guerra, R., Rodríguez, A., Díaz, M., López, S., López, J.F.: A UAV platform based on a hyperspectral sensor for image capturing and on-board processing. IEEE Access 7, 66919–66938 (2019). https://doi.org/10.1109/ACCESS.2019.2913957
26. Chatterjee, A., Saha, J., Mukherjee, J., Aikat, S., Misra, A.: Unsupervised land cover classification of hybrid and dual-polarized images using deep convolutional neural network. IEEE Geosci. Remote Sens. Lett. 18(6), 969–973 (2021). https://doi.org/10.1109/LGRS.2020.2993095
27. Minetto, R., Segundo, M., Sarkar, S.: Hydra: an ensemble of convolutional neural networks for geospatial land classification. IEEE Trans. Geosci. Remote Sens. (2018). https://doi.org/10.1109/TGRS.2019.2906883
28. Roberts, D.P., Short, N.M., Sill, J., et al.: Precision agriculture and geospatial techniques for sustainable disease control. Indian Phytopathol. (2021). https://doi.org/10.1007/s42360-021-00334-2
29. Isip, M.F., Alberto, R.T., Biagtan, A.R.: Exploring vegetation indices adequate in detecting twister disease of onion using Sentinel-2 imagery. Spat. Inf. Res. 28, 369–375 (2020). https://doi.org/10.1007/s41324-019-00297-7
30. Suarez, L.A., Robson, A., McPhee, J., et al.: Accuracy of carrot yield forecasting using proximal hyperspectral and satellite multispectral data. Precis. Agric. 21, 1304–1326 (2020). https://doi.org/10.1007/s11119-020-09722-6
31. Valente, J., Sari, B., Kooistra, L., et al.: Automated crop plant counting from very high-resolution aerial imagery. Precis. Agric. 21, 1366–1384 (2020). https://doi.org/10.1007/s11119-020-09725-3
32. Foster, T., Mieno, T., Brozovic, N.: Satellite-based monitoring of irrigation water use: assessing measurement errors and their implications for agricultural water management policy. Water Resources Res. 56, e2020WR028378 (2020). https://doi.org/10.1029/2020WR028378


33. Schwalbert, R., Amado, T., Nieto, L., et al.: Mid-season county-level corn yield forecast for US Corn Belt integrating satellite imagery and weather variables. Crop Sci. 60, 739–750 (2020). https://doi.org/10.1002/csc2.20053
34. Sharifi, A.: Remotely sensed vegetation indices for crop nutrition mapping. J. Sci. Food Agric. 100, 5191–5196 (2020). https://doi.org/10.1002/jsfa.10568
35. Sharifi, A.: Yield prediction with machine learning algorithms and satellite images. J. Sci. Food Agric. 101, 891–896 (2021). https://doi.org/10.1002/jsfa.10696
36. Archontoulis, S.V., Castellano, M.J., Licht, M.A., et al.: Predicting crop yields and soil-plant nitrogen dynamics in the US Corn Belt. Crop Sci. 60, 721–738 (2020). https://doi.org/10.1002/csc2.20039
37. Beal Cohen, A.A., Seifert, C.A., Azzari, G., Lobell, D.B.: Rotation effects on corn and soybean yield inferred from satellite and field-level data. Agron. J. 111, 2940–2948 (2019). https://doi.org/10.2134/agronj2019.03.0157
38. Modanesi, S., Massari, C., Camici, S., Brocca, L., Amarnath, G.: Do satellite surface soil moisture observations better retain information about crop-yield variability in drought conditions? Water Resources Res. 56, e2019WR025855 (2020). https://doi.org/10.1029/2019WR025855
39. Effati, M., Bahrami, H.-A., Gohardoust, M., Babaeian, E., Tuller, M.: Application of satellite remote sensing for estimation of dust emission probability in the Urmia Lake Basin in Iran. Soil Sci. Soc. Am. J. 83, 993–1002 (2019). https://doi.org/10.2136/sssaj2019.01.0018
40. Saxena, S., Rabha, A., Tahlani, P., Ray, S.: Crop situation in India. J. Indian Soc. Remote Sens. (2020). https://doi.org/10.1007/s12524-020-01213-5
41. Croft, H., Arabian, J., Chen, J.M., et al.: Mapping within-field leaf chlorophyll content in agricultural crops for nitrogen management using Landsat-8 imagery. Precis. Agric. 21, 856–880 (2020). https://doi.org/10.1007/s11119-019-09698-y
42. Hank, T.B., Berger, K., Bach, H., et al.: Spaceborne imaging spectroscopy for sustainable agriculture: contributions and challenges. Surv. Geophys. 40, 515–551 (2019). https://doi.org/10.1007/s10712-018-9492-0
43. Cucho-Padin, G., Loayza, H., Palacios, S., et al.: Development of low-cost remote sensing tools and methods for supporting smallholder agriculture. Appl. Geomat. 12, 247–263 (2020). https://doi.org/10.1007/s12518-019-00292-5
44. Jin, Z., Prasad, R., Shriver, J., et al.: Crop model- and satellite imagery-based recommendation tool for variable rate N fertilizer application for the US Corn system. Precis. Agric. 18, 779–800 (2017). https://doi.org/10.1007/s11119-016-9488-z
45. Mapping soil moisture and their correlation with crop pattern using remotely sensed data in arid region. https://www.sciencedirect.com/science/article/pii/S1110982318304551
46. Jayanth, J., Shalini, V.S., Ashok Kumar, T., Koliwad, S.: Classification of field-level crop types with a time series satellite data using deep neural network. In: Hemanth, D. (ed.) Artificial Intelligence Techniques for Satellite Image Analysis. Remote Sensing and Digital Image Processing, vol. 24. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-24178-0_3

Chapter 2

A Contemporary Initiative to Uphold Cease COVID-19 Using Keras and Tensorflow

S. D. Sujeevan Tej, Natasha J. V. Velmury, V. Bhargav Sai, Vema Rahul Sairam, and T. Anjali

Abstract Because of the world's extreme interconnectivity, the next pandemic is likely only a flight away. Viruses like Ebola, Nipah, Zika, and others have emerged and significantly impacted people's lives. Millions of people have died because of these deadly viruses, and new viruses continue to enter the human population every few years, taking many lives. Currently, the coronavirus is an immediate threat to mankind. Each country is establishing a new set of laws and regulations to limit the coronavirus's effect. The most important things for us to remember in this period are to wear a facemask at all times, maintain social distance, sanitize our hands, and follow several other measures. Despite the government's intensified efforts, a small number of people continue to violate the rules and regulations. A chance to limit the virus's spread in its early stages was neglected, with grave repercussions for many citizens. Facemask identification would be one of the most important tools for our society to prevent the virus from spreading further.

S. D. Sujeevan Tej (B) · N. J. V. Velmury · V. Bhargav Sai · V. R. Sairam · T. Anjali Department of Computer Science & Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India e-mail: [email protected] N. J. V. Velmury e-mail: [email protected] V. Bhargav Sai e-mail: [email protected] V. R. Sairam e-mail: [email protected] T. Anjali e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_2


2.1 Introduction

The first case of the deadly coronavirus infection was identified in Wuhan, China, on December 31, 2019. It spread across the world in a matter of months, wreaking havoc on the global economy and killing a large number of people [1]. The World Health Organization (WHO) declared it a global pandemic on March 11, 2020, and urged all countries to take preventive measures [2]. Lockdowns were implemented in countries around the world, resulting in an economic crisis: in just five days, the global stock market lost $6 trillion due to COVID-19. All modes of transportation, including airlines, trains, and waterways, were halted, and all imports and exports between countries were stopped for a time, resulting in a significant loss to the global economy [3]. Many people were infected even after interventions like lockdowns were implemented, and many people died as a result of a lack of adequate care and knowledge about the disease. As a result, several countries began developing vaccines to address this problem. India, Russia, and the United States, among other nations, created vaccines for this virus and began vaccinating people; vaccination has now begun not only in these nations, but in every country on the planet. Some people become infected again even after having been vaccinated [4]. Almost all countries have now lifted their lockdowns, and people have resumed their regular lives; however, many people are still reckless, and some remain unaware of laws and regulations such as wearing proper face masks. The second wave of the coronavirus negatively impacted numerous countries [5], and further variants of the SARS-CoV-2 virus have now been discovered in a few of them as well [6]. To counteract the impact, the government and police are working hard to raise public awareness and encourage people to act responsibly. Security guards are appointed by both businesses and organizational supervisors to ensure the safety of their customers and employees; shopping malls, temples, and banks are among the most common places where security guards are employed. Nonetheless, some persons will probably still violate the COVID-19 regulations. To address this problem, we developed a machine learning model that emits an alert sound when it identifies a person in a live stream video without a face mask. This model was built with TensorFlow, Keras, and OpenCV. We trained our model using a dataset that comprises images of persons with and without face masks. As a result, wearing a mask and maintaining social distance is recommended as one strategy to avoid being infected.

2.2 Related Works

The coronavirus commenced knocking off populations all across the planet, spreading rapidly. The emergence of this virus and its causes and effects were explained in detail by Khan [7]. The key motivation for alerting security personnel was the growing number of COVID-19 cases in our country, which makes it extremely risky for people to venture out into the world, and many people may not adhere to the key curb on social distancing in the outside world. The proper conditions for wearing masks were discussed by Qin [8]. This made us realize that there are a lot of people in places like temples, banks, and other overcrowded locations, and the security guards present will not be able to check everyone to see whether they are wearing their masks or not. As a result, we use surveillance cameras to keep an eye on them and sound an alarm when the rules are breached. The paper "Covid-19 Face Mask Detection Using TensorFlow, Keras, and OpenCV" [9] explains how the authors used the aforementioned packages to correctly detect face masks on two separate datasets with accuracies of 95.77 percent and 94.58 percent, respectively; to avoid overfitting on the images, they used a sequential convolutional neural network. In [10], the authors used a multi-staged CNN architecture with a 2-stage RCNN detector, performing image super-resolution and classification for facemask detection using SRCNet (Dong et al. 2016). Content-based image retrieval (CBIR) has been suggested as an optimal method for proper image retrieval. The authors of [11] suggested a CNN deep learning module for detecting faces and converting facial features to vector embeddings for face recognition. We utilize MobileNetV2 [12, 13], a variant of the CNN model, in our suggested work since it has fewer parameters and provides us with a more accurate and effective trained model. With the help of a beep sound, which alerts security through the monitor, we are able to achieve a higher level of efficiency and to alert those persons who are violating the COVID norms and regulations. The Adam optimizer [14] was used to refine our training model since it combines two gradient descent methodologies, momentum and root mean square propagation; by inheriting the attributes of these two approaches it obtains an efficient gradient descent and can easily reach the minima.

2.3 Dataset

There are 1900 images in this dataset, which are divided into two categories: with masks and without masks. There are 657 images with masks and 658 images without masks in this set. We used a pre-existing dataset from GitHub (Figs. 2.1 and 2.2).

2.4 Proposed Methodology

We propose this paper with the primary goal of detecting people who are breaking the COVID-19 rules and regulations, especially in densely populated areas such as temples or other overcrowded places. The proposed approach is made up of three key components: mask detection, face detection, and an alarm system.

Fig. 2.1 Dataset with the mask https://github.com/balajisrinivas/Face-Mask-Detection/tree/master/dataset

2.4.1 Packages

Keras: Keras is a neural network library that offers a high-level interface. It has a diverse set of deployment options, with at least five backends to choose from (TensorFlow, CNTK, Theano, MXNet, and PlaidML). In our model it is used for preprocessing and all other model-related tasks.

TensorFlow: An open-source machine learning library. Deep neural networks can be trained and run for inference using TensorFlow, and the majority of machine learning and deep neural network algorithms depend on it. TensorFlow is used in our model for a variety of tasks, including data preprocessing.

MobileNetV2: Also described as a lightweight convolutional neural network. We picked MobileNet because of its faster processing speed and its use of fewer parameters. It is available through the Keras and TensorFlow applications modules. Although its accuracy is lower than that of some other models, it is suitable for our domain of face mask detection.

Fig. 2.2 Dataset without a mask https://github.com/balajisrinivas/Face-Mask-Detection/tree/master/dataset

OpenCV: It is used for real-time computing tasks like image processing and video capture, among others. In our model, OpenCV is used to access either the live stream or a video, frame by frame, to check whether a mask is present or not. It is also used to draw the label and bounding boxes around the face in the output.

Sklearn: It is one of the most useful machine learning libraries. It includes a variety of supervised and unsupervised learning algorithms and can be used for a variety of statistical modeling tasks such as classification, regression, and clustering.

2.4.2 Mask Detection

The first step is to preprocess the data, which includes all of the dataset's images. We use the Keras preprocessing utilities to transform all of the images into arrays, and then apply one-hot encoding to the categorical labels 'with mask' and 'without mask' using the sklearn module. In the current scenario, we use MobileNet for the neural model instead of the traditional CNN form. We are not using YOLO because, as discussed in [14], it has a higher localization error than CNN and has trouble detecting smaller objects, whereas MobileNet outperforms both of these modules in this case. This serves as a compelling reason to choose MobileNet over traditional CNN and YOLO. We now feed the preprocessed input image array to MobileNetV2, perform max pooling, flatten the result, and create a fully connected layer through which we obtain the output in the training stage. As previously stated, our platform includes two submodules when using MobileNet: the head and the base models. While developing the base model, it is critical to utilize a pre-trained model appropriate for the image collection we are using, such as ImageNet, which has pre-trained weights for the photos in its model. The output from the base model is sent into the head model, where the remaining tasks like pooling, activation, and so on are completed. The process proceeds through a series of operations in the head model, as follows. To begin, the input is of shape (224, 224, 3). The head model is placed on top of the base model, and max pooling is applied. This helps counter over-fitting by providing the data in an abstracted form; it also reduces computing costs by reducing the number of parameters that must be learned and provides basic translation invariance in the internal representation. The goal is to downsample the input representation, reducing its dimensionality so that assumptions may be made about the features contained in the binned sub-regions. Next we flatten the model, turning the data into a one-dimensional array for further processing; the output of the convolutional layers is flattened to produce a single long feature vector. Following that, a dense layer is created, with about 128 neurons triggered by ReLU activation; we utilize the ReLU activation layer because it works well for nonlinear use cases. This produces 128 outputs, which are subsequently passed to a dropout layer, which prevents the model from becoming overfit. From here, a dense layer with two neurons is added, and the results come out only through these two neurons: 'with mask' and 'without mask'. In the output layer we use the softmax or sigmoid functions, since they are probability-based activation functions; we use softmax here since we are concerned with binary classification. The layers in the base model must be frozen so that they do not shift during the training process. Finally, we use the Adam optimizer to train the model (Fig. 2.3).
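The following is a minimal Keras sketch of the head-on-base construction described above, assuming the layer sizes stated in the text; the pooling window, dropout rate, and learning rate are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models, optimizers

# Base model: MobileNetV2 pre-trained on ImageNet, without its classifier top.
base = MobileNetV2(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze base layers so they do not shift during training

# Head model: pooling -> flatten -> dense(128, relu) -> dropout -> dense(2, softmax).
x = layers.MaxPooling2D(pool_size=(7, 7))(base.output)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(2, activation="softmax")(x)  # 'with mask' / 'without mask'

model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```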

Fig. 2.3 The neural network model https://github.com/balajisrinivas/Face-Mask-Detection/tree/master/dataset

2.4.3 Face Detection

In this instance, we utilize deep learning facial recognition [15]. The original HOG and SVM [16] models were considered; however, for a faster and more accurate result, we utilize a pre-trained deep learning model through OpenCV. This DNN face detector in OpenCV is a Caffe model that is built on the Single Shot Multibox Detector (SSD) and uses ResNet-10 as its backbone. Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework for image classification and segmentation. The prototxt file and the Caffe model weights file are the two files we work with: the prototxt file describes the model architecture, including the list of neural network layers the model has, each layer's characteristics (such as name, type, input dimensions, and output dimensions), and layer connection specifications, while the Caffe model file contains the weights of the actual layers. To improve detection, all of the pictures are reduced to 300 × 300 pixels, because the DNN detector fails to identify larger images (around 3000 × 3000 pixels).
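A minimal OpenCV sketch of this face detection step is shown below; the local file names follow the commonly distributed OpenCV sample model, and the 0.5 confidence threshold is an assumption:

```python
import cv2
import numpy as np

# Assumed local file names for the standard OpenCV sample face detector.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

image = cv2.imread("frame.jpg")                # a single video frame
(h, w) = image.shape[:2]
# Resize to 300x300 and subtract the model's per-channel mean values.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300),
                             (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()                     # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                       # keep confident detections only
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x1, y1, x2, y2) = box.astype("int")
        face = image[y1:y2, x1:x2]             # crop passed to the mask classifier
```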

2.4.4 Alarm Activation

We use a built-in Python module named winsound to alert the surveillance personnel. To verify our performance, we run the real-time video frame by frame using a video stream read through the OpenCV module. We cleaned the dataset by applying the pre-processing techniques before moving on to the model construction, i.e., the mask and face detections described in the previous modules. After recognizing a face, we move on to detecting the mask in the video's live stream.
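A hedged sketch of the alarm loop follows; `detect_mask` is a hypothetical helper standing in for the face-detection and mask-classification steps above, and the tone frequency and duration are arbitrary choices:

```python
import cv2
import winsound  # built-in beep module on Windows

cap = cv2.VideoCapture(0)  # default webcam as the live stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # detect_mask(frame) is a hypothetical helper wrapping the face-detection
    # and mask-classification steps from the previous sections.
    if not detect_mask(frame):
        winsound.Beep(2500, 1000)  # 2500 Hz tone for 1 second as the alert
    cv2.imshow("Monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```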

2.5 Result

We segregated our data into two sections: 80 percent for training and 20 percent for testing. Since we use MobileNetV2 for the neural model, we obtain higher performance than with other models. To boost accuracy and ensure that loss error is treated correctly, we used a low learning rate. As shown in Fig. 2.6, we used the cross-entropy loss function in conjunction with the Adam optimizer to train our model over 20 epochs, achieving an accuracy rate of over 96%. As shown in Fig. 2.4, we obtain a fair model as a result of the efficient compilation; the loss rate is very low.
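A short sketch of this training setup is given below, assuming `data`, `labels`, and `model` from the previous sections; the batch size is an illustrative choice:

```python
from sklearn.model_selection import train_test_split

# data: preprocessed image arrays; labels: one-hot 'with'/'without mask' targets.
(train_x, test_x, train_y, test_y) = train_test_split(
    data, labels, test_size=0.20, stratify=labels, random_state=42)

history = model.fit(train_x, train_y,
                    validation_data=(test_x, test_y),
                    batch_size=32, epochs=20)  # 20 epochs as reported
```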


Fig. 2.4 The loss accuracy validation for the model

Finally, as shown in Figs. 2.5 and 2.6, we were able to achieve a precision of 99 percent for the performance of our proposed design.

Fig. 2.5 The graph between training loss and accuracy

Fig. 2.6 The accuracy table

Fig. 2.7 How to use a facemask

2.6 Conclusion and Future Works

Our model detects violations and emits a beep alarm. In the future, we plan to add a feature that captures the face of a person who is violating the rules and transmits the photo to a nearby security guard in the perimeter using semantic segmentation and image processing [11], and to increase the accuracy and efficiency of the present model by using the CNN methods in [17]. Some health organizations recommend wearing a double mask and publish instructions on which masks to use and how to use them; we would like to keep our model aligned with the rules as they are updated by the government. Figure 2.7 depicts a few of the guidelines recommended by these organizations. The entire world is working hard to eradicate this lethal virus, and several machine learning and artificial intelligence models have emerged to combat the problem. In this article, we explain how to tell whether someone is wearing a facemask and, if they are not, how to notify a security guard or police officer with an alert alarm beep sound, making their work simpler.

References

1. Coronavirus disease 2019 (COVID-19) Situation Report—94, World Health Organisation (WHO), 23 April 2020
2. Ibn Mohammed, T., Mustapha, K.B., Godsell, J., Adamu, Z., Babatunde, K.A., Akintade, D.D., Acquaye, A., Fujii, H., Ndiaye, M.M., Yamoah, F.A., Koh, S.C.L.: A critical analysis of the impacts of COVID-19 on the global economy and ecosystems and opportunities for circular economy strategies. Resources, Conserv. Recycl. 164, 105169 (2021). ISSN 0921-3449
3. Ozili, P.K., Arun, T.: Spillover of COVID-19: impact on the global economy. SSRN Electron. J., March 27, 2020
4. Over 21,000 tested positive for COVID-19 after the first dose of vaccine; 5500 after the second dose. Business Today
5. Rajan, R., Sharma, A., Verma, M.K.: Characterization of the second wave of COVID-19. medRxiv 2021.04.17.21255665
6. About Variants of the Virus that Causes COVID-19. Centres for Disease Control and Prevention (CDC)
7. Shereen, M.A., Khan, S., Kazmi, A., Bashir, N., Siddique, R.: COVID-19 infection: emergence, transmission, and characteristics of human coronaviruses. J. Adv. Res. 24 (2020)
8. Qin, B., Li, D.: Identifying facemask-wearing condition using image super-resolution with classification network to prevent COVID-19. Sensors 20, 5236 (2020). https://doi.org/10.3390/s20185236
9. Das, A., Wasif Ansari, M., Basak, R.: Covid-19 face mask detection using TensorFlow, Keras and OpenCV. In: IEEE 17th India Council International Conference (INDICON) (2020)
10. Chavda, A., Dsouza, J., Badgujar, S., Damani, A.: Multi-stage CNN architecture for face mask detection. In: 6th International Conference for Convergence in Technology (I2CT) (2021)
11. Pradeesh, N., Abhishek, L.B., Padmanabhan, G., Gopikrishnan, R., Anjali, T., Krishnamoorthy, S., Bijlani, K.: Fast and reliable group attendance marking system using face recognition in classrooms. In: 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT) (2019)
12. Chen, H.Y., Su, C.-Y.: An enhanced hybrid MobileNet. In: 9th International Conference on Awareness Science and Technology (ICAST) (2018)
13. Amrita Varshini, J.L., Amrita Sree, J.L., Dineshan, A., Anjali, T., Jayakumar, O.D., Bharadwaj, A.: Face mask detection and recognition using an IoT enabled PDMS-Ag e-skin sensor that works in contact and non-contact modes (2020)
14. Jose, E.K., Veni, S.: YOLO classification with multiple object tracking for vacant parking lot detection. J. Adv. Res. Dyn. Control Syst. 10(3), 683–689 (2018)
15. Meenpal, T., Balakrishnan, A., Verma, A.: Facial mask detection using semantic segmentation. In: 4th International Conference on Computing, Communications and Security (ICCCS) (2019)
16. Subhamastan Rao, T., Anjali Devi, S., Dileep, P., Sitha Ram, M.: A novel approach to detect face mask to control Covid using deep learning. Euro. J. Mol. Clin. Med. 07 (2020). ISSN 2515-8260
17. Aloysius, N., Geetha, M.: A review on deep convolutional neural networks. In: International Conference on Communication and Signal Processing (ICCSP), April 6–8, 2017, India

Chapter 3

A Comparative Study of Various Traditional and Hybrid Cryptography Algorithm Models for Data Security

Pravin Soni and Rahul Malik

Abstract Information security involves various services, like availability, cryptography, and integrity, which help to protect data. Nowadays everyone stores his/her important data in various cloud environments for easy access, without much thought for data privacy. Still, most individuals and many companies prefer to store their business or personal information on computers connected to the Internet. There are well-known algorithms available which help to secure data by using a secret key. Hackers are capable of breaking such keys with the assistance of modern high-end computing machines. Hence information needs to be secured in a more powerful way, so that it can no longer be abused by those who are not entitled to it. The security of data can be increased by hybridization of these well-known algorithms. The paper provides generic models of various hybrid cryptographic schemes which improve the security of data. The paper also provides a comparative study of various traditional and hybrid models used heavily for providing data security. A hybrid model can provide enhanced security over AES and should be selected if extreme security is required.

3.1 Introduction

Cryptography is an art by which one can secure personal computer data and digital communications through the use of complex codes, so that personal data can be read and processed only by a genuine person. In software engineering, cryptography refers to securing records by following data-conversion strategies developed using mathematical concepts and fixed procedure-based calculations, referred to as algorithms, which convert messages into ciphertext that is difficult to read. Various other complex algorithms are used along with the cryptographic

P. Soni (B) · R. Malik Department of Computer Science and Engineering, Lovely Professional University, Phagwara, Punjab, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_3


algorithm to enhance the security of cryptosystem products, involving a digital signature generation and verification phase for identity verification and maintaining data integrity, dynamic key generation and distribution, etc. [1, 2]. A cryptosystem is a combination of three cryptographic algorithms (ciphers), for encryption, decryption, and key generation, and is used to encrypt and decrypt data and to establish secure communications among networks, gadgets, and applications. A cipher suite can make use of different algorithms for different purposes, like encryption, message authentication, and key exchange. These algorithms are embedded in protocols and written in various programming languages that run on operating systems and network devices like computer systems and gateways [1, 2]. Ciphers can be classified into two broad categories, the classical and the modern approach; Fig. 3.1 shows the classification of ciphers.

Fig. 3.1 Classification of ciphers [3]

Symmetric and asymmetric encryption are the two forms of cryptosystem: symmetric encryption algorithms use a single key for both encryption and decryption of data, whereas asymmetric encryption algorithms use paired keys, a public key for encryption and its paired (private) key for decryption [2, 3]. Figures 3.2 and 3.3 depict the simple processes involved in encryption and decryption of data using symmetric and asymmetric encryption algorithms, respectively. A symmetric encryption algorithm generates ciphertext by transforming plaintext using a secret key and an encryption algorithm, whereas the decryption algorithm recovers the plaintext from the ciphertext using the same secret key. In asymmetric encryption, plaintext is converted into ciphertext using one of two keys (the public key) and an encryption algorithm, and is recovered from the ciphertext using the paired secret key (i.e., the corresponding private key) and a decryption algorithm [4].

Fig. 3.2 Block diagram of processing data using symmetric encryption algorithm

Fig. 3.3 Block diagram of processing data using asymmetric encryption algorithm

The primary use of a symmetric block cipher is encryption/decryption, i.e., providing confidentiality to data. As shown in Fig. 3.4, the security of a symmetric block cipher based on the Feistel structure depends on parameters and design features such as the number of rounds, block size, key size, sub-key generation algorithm, and round function [4].
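To make the Feistel structure concrete, here is a minimal educational Python sketch of a balanced Feistel network; the hash-based round function, toy key schedule, and four-round count are illustrative assumptions, not a production cipher:

```python
import hashlib

def round_fn(half: bytes, subkey: bytes) -> bytes:
    # Toy round function: hash the half-block with the subkey, truncate to size.
    return hashlib.sha256(half + subkey).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes, subkeys: list[bytes]) -> bytes:
    left, right = block[:len(block) // 2], block[len(block) // 2:]
    for k in subkeys:                       # one swap-and-mix per round
        left, right = right, xor(left, round_fn(right, k))
    return left + right

def feistel_decrypt(block: bytes, subkeys: list[bytes]) -> bytes:
    left, right = block[:len(block) // 2], block[len(block) // 2:]
    for k in reversed(subkeys):             # same rounds, reversed key order
        right, left = left, xor(right, round_fn(left, k))
    return left + right

subkeys = [bytes([i]) * 4 for i in range(4)]  # toy 4-round key schedule
ct = feistel_encrypt(b"8bytes!!", subkeys)
assert feistel_decrypt(ct, subkeys) == b"8bytes!!"
```

Note that decryption reuses the same round structure with the sub-keys in reverse order, which is exactly why the round function itself need not be invertible in a Feistel design.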


Fig. 3.4 Symmetric block cipher design parameters based on Feistel structure [4]

3.2 Literature Review

Symmetric encryption, or conventional encryption, was the main sort of cryptographic algorithm used for the data confidentiality service prior to the public key encryption approach developed by Diffie and Hellman in the 1970s. It is still the preferred choice among the two widely available types of encryption algorithms, mainly due to its speed. Every symmetric encryption scheme has five main elements: plaintext, encryption algorithm, secret key, ciphertext, and decryption algorithm [4]. Selection of an encryption algorithm depends upon many properties, like security, implementation, speed, and cost. The most important factor among these is security, which encompasses characteristics such as resistance to cryptanalysis, mathematical strength, diffusion and confusion properties (random output), and security relative to other algorithms [5]. Table 3.1 provides a comparison of well-known symmetric algorithms based on attributes like design paradigm, block size, key size, number of rounds, and throughput.

Alterations to cryptographic algorithms may be divided into two categories: "internal" and "external" modifications. Internal modification refers to altering the algorithm's core internal structural unit in order to boost throughput and complexity. Although internal alteration provides excellent cryptographic strength, it needs a tremendous deal of knowledge in order to maintain the fundamental notion of the original encryption technique. Apart from that, a change in the internal part of an algorithm is costly and might take a long time to implement in hardware or software [10].


Table 3.1 Comparison of symmetric algorithms based on attributes

Attribute | Rijndael | MARS | RC6 | Serpent | Twofish | IDEA | Blowfish
Design paradigm [5, 6] | Feistel structure | Extended Feistel | Feistel | Substitution permutation | Feistel | Substitution permutation | Feistel structure
Key size in bits [7] | 128, 192 or 256 | 128–448 | 128, 192 or 256 | 128, 192 or 256 | 128, 192 or 256 | 128 | 128–448
Block size in bits [7] | 128 | 128 | 128 | 128 | 128 | 64 | 64
No. of sub-blocks [5, 8] | 2 | 4 | 4 | 4 | 4 | 4 | 2
No. of rounds [7] | 10, 12 or 14 | 32 | 20 | 32 | 16 | 8 | 16
Key setup speed [8, 9] | Increasing | Constant | Constant | Constant | Increasing | Constant | Increasing
RCPF throughput in MB/s [7] | 383.1 | 154 | 140.1 | 256.3 | 192.4 | 128.4 | 171.1

Internal changes might include increasing the size of the key required by the algorithm or introducing a secondary key. The Feistel structure, which consists of a round repeated a number of times, is seen in several symmetric encryption schemes; based on a secret key, each round executes a substitution and permutation block. A symmetric block cipher's internal alteration might be as simple as changing its S-boxes or increasing the number of rounds in the Feistel structure [4, 10]. Kumar and Rana modified the internal structure of the AES algorithm by increasing the number of rounds to 16 and the key size to 320 bits; the sub-keys required for the rounds are generated using a Polybius square, and the modified AES algorithm performed better than TDES [11]. External modification is the process of combining two or more block algorithms to create a new algorithm without modifying the underlying structure of either. The resulting algorithm is a "hybrid" algorithm made up of several original algorithms. For a programmer with little experience in cryptographic algorithm design, external modification is a simple and convenient approach; furthermore, it provides greater design freedom and saves the time and cost of implementing the change in hardware and software [8, 10]. "Multiple" and "cascaded" encryption are the most common types of external modification. Multiple encryption applies the same encryption technique over and over to encrypt the same plaintext block with different keys. Cascaded encryption is similar, except that it encrypts the same plaintext block with the same or different keys using distinct encryption algorithms one after the other [8, 10]. Table 3.2 gives a brief overview of various existing hybrid models developed for providing data security, such as encrypting with a symmetric algorithm and securing its key with an asymmetric algorithm, cascading encryption algorithms, and splitting the input file into an equal number of parts.
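The following minimal sketch illustrates the cascaded form of external modification described above: the same plaintext block passes through two distinct ciphers in sequence, each with its own key. AES-GCM and ChaCha20-Poly1305 are assumed stand-ins for the algorithm pair; the models surveyed in Table 3.2 cascade other combinations (e.g., AES then Blowfish or Twofish).

```python
# Hedged sketch of cascaded encryption with two distinct ciphers and keys.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

def cascade_encrypt(plaintext: bytes):
    k1, k2 = AESGCM.generate_key(bit_length=256), ChaCha20Poly1305.generate_key()
    n1, n2 = os.urandom(12), os.urandom(12)
    stage1 = AESGCM(k1).encrypt(n1, plaintext, None)         # first cipher
    stage2 = ChaCha20Poly1305(k2).encrypt(n2, stage1, None)  # second cipher
    return stage2, (k1, n1, k2, n2)

def cascade_decrypt(ciphertext: bytes, params):
    k1, n1, k2, n2 = params
    stage1 = ChaCha20Poly1305(k2).decrypt(n2, ciphertext, None)  # undo in reverse
    return AESGCM(k1).decrypt(n1, stage1, None)

ct, params = cascade_encrypt(b"cascade me")
assert cascade_decrypt(ct, params) == b"cascade me"
```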


Table 3.2 Brief overview of various hybrid models

| Study of hybrid model | Algorithm used | Gist |
| Chandu et al. [12] | AES and RSA | AES encrypts the data on the IoT device and its key is encrypted using RSA; the transmitter uses the public key of the data requester to encrypt the symmetric key |
| Zou et al. [13] | AES and RSA | Uses a double encryption model: AES encrypts the input file to produce an intermediate ciphertext, and RSA encrypts the intermediate ciphertext along with the AES key using the receiver's public key to generate the final ciphertext; the hybrid model's efficiency is higher than RSA's |
| Santoso et al. [14] | AES and Twofish | Input is processed twice with symmetric encryption in a fixed sequence, first by AES and then by Twofish; the key is generated using SHA-256 and used by both algorithms |
| Mata et al. [15] | AES and Blowfish | Encrypts the input in a fixed sequence, AES then Blowfish; the hybrid model requires more encryption time than AES |
| Ntshabele et al. [16] | RSA, Blowfish and DES | Two hybrid models are developed; data is encrypted using symmetric algorithms with different keys, and RSA is used to encrypt the secret keys; the 3-tier model (double encryption) shows better security strength |
| Prabu and Vasudevan [17] | AES, Blowfish and RC6 | The file is divided into 3 equal parts and each part is encrypted with the algorithm specified in sequence; split files are uploaded to different clouds for processing; model performance is better than DES |
| Maitri and Verma [18] | AES, Blowfish, RC6 and BRA | The file is divided into 8 equal parts and each part is encrypted with one of the algorithms; multithreading is used for performance enhancement; takes 17–20% less time than AES |
| Gupta and Kapoor [19] | Blowfish and ECC | The input is divided into chunks of equal size; Blowfish encrypts the even chunks and ECC the odd chunks; the ciphertext file is almost 30% larger than the plaintext file |


3.3 Generalized Hybrid Cryptographic Algorithm Models

A hybrid cryptographic algorithm is a grouping of various symmetric encryption algorithms, asymmetric encryption algorithms, or both. Based on a study of various hybrid cryptography algorithm models, they can be classified by how the algorithms are combined for processing the data and/or its corresponding key.

1. Symmetric encryption algorithm used for data confidentiality, with its secret key shared via an asymmetric algorithm

Figure 3.5 shows the generalized model of a hybrid cryptographic algorithm where a symmetric encryption algorithm provides data confidentiality and its secret key is shared via an asymmetric algorithm; the final output is the combination of the outputs of both algorithms.

2. Symmetric and asymmetric encryption algorithms cascaded and used for data encryption

Figure 3.6 shows the generalized model of a hybrid cryptographic algorithm where symmetric and asymmetric encryption algorithms are cascaded for data encryption, so the output is the plaintext processed multiple times. Many researchers [20–22] have suggested that RSA is not suitable for encrypting large amounts of data; in hybrid mode it should instead be used to encrypt the secret key of the symmetric encryption algorithm.
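A minimal sketch of the first hybrid model follows, assuming AES-GCM for the data and RSA-OAEP for wrapping the session key; this mirrors the Chandu et al. [12] and Zou et al. [13] arrangements of Table 3.2, though the exact parameters are illustrative assumptions rather than those papers' settings.

```python
# Hedged sketch of the Fig. 3.5 hybrid model: symmetric cipher for the data,
# receiver's RSA public key for the symmetric key. Parameters are assumed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(data: bytes, receiver_public):
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    enc_data = AESGCM(session_key).encrypt(nonce, data, None)
    enc_key = receiver_public.encrypt(session_key, OAEP)  # key wrapped with RSA
    return enc_data, nonce, enc_key  # final output combines both ciphertexts

enc_data, nonce, enc_key = hybrid_encrypt(b"bulk payload",
                                          receiver_key.public_key())
session_key = receiver_key.decrypt(enc_key, OAEP)
assert AESGCM(session_key).decrypt(nonce, enc_data, None) == b"bulk payload"
```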

Fig. 3.5 Hybrid model where a symmetric encryption algorithm is used for data confidentiality and its secret key is shared via an asymmetric algorithm


Fig. 3.6 Hybrid model where symmetric and asymmetric encryption algorithms are cascaded and used for data encryption

3. Multiple symmetric encryption algorithms cascaded and used for data encryption

Figure 3.7 shows the generalized model of a hybrid cryptographic algorithm where symmetric algorithms are cascaded for data encryption, and the ciphertext is the plaintext processed multiple times. Figure 3.7a shows a hybrid model that cascades multiple algorithms in a fixed order, whereas Fig. 3.7b shows a hybrid model in which the algorithm applied at each stage is selected dynamically based on attributes such as the key, time, or a random number.

4. Different symmetric algorithms applied to equal fragments of the input data

Figure 3.8 shows the generalized model of a hybrid cryptographic algorithm where different symmetric algorithms are applied to equal fragments of the input data, and the combination of all the individual outputs forms the ciphertext of the system.
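Below is a hedged sketch of this file-split model. The fragment count, the two stand-in AEAD ciphers, and the round-robin assignment are illustrative assumptions; the surveyed models [17–19] use their own algorithm sets and split counts.

```python
# Hedged sketch of the Fig. 3.8 split model: equal fragments, each encrypted
# with a different symmetric algorithm (two ciphers cycled for illustration).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

CIPHERS = [AESGCM, ChaCha20Poly1305]

def split_encrypt(data: bytes, n_parts: int):
    size = -(-len(data) // n_parts)  # ceiling division for equal fragments
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    chunks = []
    for i, part in enumerate(parts):
        key, nonce = os.urandom(32), os.urandom(12)
        aead = CIPHERS[i % len(CIPHERS)](key)  # rotate through the algorithms
        chunks.append((aead.encrypt(nonce, part, None), key, nonce))
    return chunks

def split_decrypt(chunks):
    out = b""
    for i, (ct, key, nonce) in enumerate(chunks):
        out += CIPHERS[i % len(CIPHERS)](key).decrypt(nonce, ct, None)
    return out  # merged single output, as in the post-processing step

chunks = split_encrypt(b"0123456789abcdef" * 4, n_parts=4)
assert split_decrypt(chunks) == b"0123456789abcdef" * 4
```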

3.4 Experimental Setup and Results

We have already discussed some common encryption algorithm parameters and several hybrid encryption models that combine conventional (AES, Blowfish,


Fig. 3.7 Hybrid model where symmetric algorithms are cascaded and used for data encryption: a fixed order and b dynamic order

Fig. 3.8 Hybrid model where different symmetric algorithms are applied to equal fragments of the input data


RC6, etc.) and public key (RSA, ECC, etc.) algorithms. This part presents comparative results and analysis of various traditional symmetric key algorithms and the hybrid models developed using them. We selected cipher suites based on the symmetric block encryption algorithms AES, Blowfish, RC6, MARS, Serpent, and Twofish owing to their popularity of use and their presence in well-known cryptography systems such as VeraCrypt [23] and InterCrypto [24]. For comparing the various traditional and hybrid algorithms, we used the Crypto++ library (version 8.2.0) [25] and the Visual C++ compiler; Visual Studio 2013 was used for developing the traditional and hybrid models. The machine used for obtaining results has an Intel Core i3-3217U CPU @ 1.8 GHz, 8 GB of RAM, a 500 GB hard disk, and 64-bit Windows 10 Home Single Language OS. The various traditional and hybrid algorithm models are compared using performance indicators such as encryption time, ciphertext size, throughput, and decryption time, calculated as follows.

• Input Size: size of the plaintext file given as input, in KB
• Ciphertext Size: size of the ciphertext file generated as output, in KB
• Output Size: size of the plaintext file recovered from the ciphertext, in KB
• Encryption Time: time required to process plaintext into ciphertext, in milliseconds

Encryption Time (in ms) = End Time − Start Time of Encryption Process (3.1)

• Decryption Time: time required to obtain plaintext from ciphertext, in milliseconds

Decryption Time (in ms) = End Time − Start Time of Decryption Process (3.2)

• Average Throughput: average speed of generating ciphertext during the encryption process, in Bytes/ms

Average Throughput = (1/n) Σ [Ciphertext Size (in Bytes) / Encryption Time (in ms)] (3.3)

• Pre-Processing Time (in milliseconds): time required for splitting the input file into n equal parts before the encryption process, for the split model
• Post-Processing Time (in milliseconds): time required to obtain the recovered text by combining the n decrypted outputs, for the split model

Figures 3.9, 3.10 and 3.11 show the comparison between the traditional models, the hybrid cascaded model (Level-2), and the hybrid file split model based on symmetric cryptography algorithms in terms of encryption time, decryption time, and average throughput.
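A minimal sketch of the measurement harness implied by Eqs. (3.1) and (3.3) is given below, using Python wall-clock timing around a stand-in cipher; the actual measurements in this chapter were taken with Crypto++ under Visual C++, so the harness and cipher here are assumptions for illustration only.

```python
# Hedged sketch: encryption time as a wall-clock delta (Eq. 3.1) and average
# throughput as the mean of per-file ciphertext-size/time ratios (Eq. 3.3).
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)

def average_throughput(plaintexts):
    ratios = []
    for pt in plaintexts:
        nonce = os.urandom(12)
        start = time.perf_counter()              # start time of encryption
        ct = AESGCM(key).encrypt(nonce, pt, None)
        elapsed_ms = (time.perf_counter() - start) * 1e3  # Eq. (3.1)
        ratios.append(len(ct) / elapsed_ms)      # Bytes per ms for this file
    return sum(ratios) / len(ratios)             # Eq. (3.3): average over n

inputs = [os.urandom(size) for size in (10_000, 100_000, 1_000_000)]
print(f"average throughput: {average_throughput(inputs):.1f} Bytes/ms")
```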


Fig. 3.9 Encryption time required by various cryptography models for different input sizes


Fig. 3.10 Decryption time required by various cryptography models for different output sizes


Fig. 3.11 Average throughput performances of various cryptography models


Fig. 3.12 Pre processing time required by hybrid file split model for dividing file into number of equal parts

Based upon Figs. 3.9a, 3.10a and 3.11a, AES (Rijndael) gives the best performance among the traditional algorithms, followed by RC6, while Serpent performs worst. Based upon Figs. 3.9b, 3.10b and 3.11b, the hybrid cascaded model AES-RC6 gives the best performance, whereas the worst performance is observed for the hybrid cascaded model AES-Serpent. Based upon Figs. 3.9c, 3.10c and 3.11c, the time performance of the hybrid file split models is directly proportional to the number of file splits and the size of the input. Figures 3.12 and 3.13 show the pre-processing time for the input and the post-processing time for output generation required by the hybrid file split model, since the model must split the input into a number of equal parts before encrypting each part with a different algorithm and must merge the outputs into a single output after the decryption process. We observed that the pre-processing time increases with the size and the number of parts of the input (Fig. 3.12), and the post-processing time increases mainly with the output size (Fig. 3.13).

3.5 Conclusion and Future Work

Data confidentiality is very important for business and individual information. Confidentiality is achieved by encrypting data with a secret key (a password or system-generated random text) that is transmitted or kept securely and is required for data access. The AES (Rijndael) algorithm performs best among the various traditional and hybrid models; its throughput is 20% higher than that of RC6, which comes second among the traditional models. Serpent appears to be the slowest, being


Fig. 3.13 Post processing time required by hybrid file split model for merging number of intermediate outputs into single output

62% slower than AES. The various hybrid models provide extra security to data either by cascading (double encryption) or by splitting the input into a number of equal parts, each encrypted with a different algorithm. The hybrid cascaded model performs better than the hybrid file split model because of the pre- and post-processing overhead required for splitting and merging. The hybrid cascaded model AES-RC6 is 16% and 8% faster than the Serpent and MARS encryption algorithms respectively, and 55% slower than AES. Ignoring the pre- and post-processing time of the hybrid file split model, the split-3 configuration performs almost the same as AES. The hybrid models provide enhanced security over AES and should be selected only when extreme security is required irrespective of performance. Hybrid model security can be increased further if the algorithms used for encryption are selected dynamically based upon parameters such as the secret key, a random number, or time, and performance can be improved if the algorithms used in the hybrid structure are modified internally.

References

1. Rouse, M.: What is cryptography? Definition from WhatIs.com (2018). Available: https://searchsecurity.techtarget.com/definition/cryptography. Accessed 10 Feb 2021
2. Stallings, W.: Cryptography and Network Security, 5th edn. Prentice Hall Press, USA (2010)
3. Nazeh Abdul Wahid, M., Ali, A., Esparham, B., Marwan, M.: A comparison of cryptographic algorithms: DES, 3DES, AES, RSA and Blowfish for guessing attacks prevention. J. Comput. Sci. Appl. Inf. Technol. 3(2), 1–7 (2018). https://doi.org/10.15226/2474-9257/3/2/00132


4. Stallings, W.: Network Security Essentials, 4th edn. Prentice Hall Press, USA (2010)
5. Nechvatal, J., Barker, E., Dodson, D., Dworkin, M., Foti, J., Roback, E.: Report on the development of the advanced encryption standard (AES). J. Res. Natl. Inst. Stand. Technol. 104(5), 435–459 (1999). https://doi.org/10.6028/jres.106.023
6. Ebrahim, M., Khan, S., Bin Khalid, U.: Symmetric algorithm survey: a comparative analysis. Int. J. Comput. Appl. 61(20), 12–19 (2013)
7. Jiang, J., Ni, X., Zhang, M.: Reconfigurable cipher processing framework and implementation. In: Zhou, X., Xu, M., Jähnichen, S., Cao, J. (eds.) Advanced Parallel Processing Technologies, APPT 2003. Lecture Notes in Computer Science, vol. 2834, pp. 509–519. Springer, Berlin (2003). https://doi.org/10.1007/978-3-540-39425-9_60
8. Schneier, B.: Applied Cryptography, 2nd edn. Wiley (1996)
9. Schneier, B., Whiting, D.: A performance comparison of the five AES finalists. In: Proceedings of the 3rd Advanced Encryption Standard (AES) Candidate Conference, vol. 3, pp. 123–135 (2000)
10. Marinakis, G.: Modification and customization of cryptographic algorithms. J. Appl. Math. Bioinforma. 9(1), 1–13 (2019)
11. Kumar, P., Rana, S.B.: Development of modified AES algorithm for data security. Optik (Stuttg.) 127(4), 2341–2345 (2016). https://doi.org/10.1016/j.ijleo.2015.11.188
12. Chandu, Y., Rakesh Kumar, K.S., Prabhukhanolkar, N.V., Anish, A.N., Rawal, S.: Design and implementation of hybrid encryption for security of IOT data. In: Proceedings of the 2017 International Conference on Smart Technology for Smart Nation, SmartTechCon 2017, pp. 1228–1231 (2018). https://doi.org/10.1109/SmartTechCon.2017.8358562
13. Zou, L., Ni, M., Huang, Y., Shi, W., Li, X.: Hybrid encryption algorithm based on AES and RSA in file encryption. In: Hung, J., Yen, N., Chang, J.W. (eds.) Frontier Computing, FC 2019. Lecture Notes in Electrical Engineering, vol. 551, pp. 541–551. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-3250-4_68
14. Santoso, K.I., Muin, M.A., Mahmudi, M.A.: Implementation of AES cryptography and Twofish hybrid algorithms for cloud. J. Phys. Conf. Ser. 1517(1) (2020). https://doi.org/10.1088/1742-6596/1517/1/012099
15. Mata, F., Kimwele, M., Okeyo, G.: Enhanced secure data storage in cloud computing using hybrid cryptographic techniques (AES and Blowfish). Int. J. Sci. Res. 6(3), 1702–1708 (2017). https://doi.org/10.21275/ART20171804
16. Ntshabele, K., Isong, B., Moemi, T., Dladlu, N., Gasela, N.: Hybrid encryption model for data security in cloud environment. In: International Conference on Grid, Cloud, & Cluster Computing, pp. 20–25 (2018)
17. Prabu Kanna, G., Vasudevan, V.: A new approach in multi cloud environment to improve data security. In: Proceedings of the 2017 International Conference on Next Generation Computing and Information Systems, ICNGCIS 2017, pp. 19–23 (2017). https://doi.org/10.1109/ICNGCIS.2017.23
18. Maitri, P.V., Verma, A.: Secure file storage in cloud computing using hybrid cryptography algorithm. In: Proceedings of the 2016 IEEE International Conference on Wireless Communications, Signal Processing and Networking, WiSPNET 2016, pp. 1635–1638 (2016). https://doi.org/10.1109/WiSPNET.2016.7566416
19. Gupta, N., Kapoor, V.: Hybrid cryptographic technique to secure data in web application. J. Discret. Math. Sci. Cryptogr. 23(1), 125–135 (2020). https://doi.org/10.1080/09720529.2020.1721872
20. Padmavathi, B., Kumari, S.R.: A survey on performance analysis of DES, AES and RSA algorithm along with LSB substitution technique. Int. J. Sci. Res. 2(4), 2319–7064 (2013)
21. Amit, K.: Encrypt/decrypt a file using RSA public-private key pair. GitHub. Available: https://kulkarniamit.github.io/whatwhyhow/howto/encrypt-decrypt-file-using-rsa-public-private-keys.html. Accessed 15 Mar 2021
22. Google: Encrypting and decrypting data with an asymmetric key. Security and identity products, Cloud KMS. Available: https://cloud.google.com/kms/docs/encrypt-decrypt-rsa. Accessed 22 Mar 2021


23. IDRIX: VeraCrypt - free open source disk encryption with strong security for the paranoid. Available: https://www.veracrypt.fr/en/Home.html. Accessed 22 Mar 2021
24. InterCrypto Ltd.: Files and drives encryption software for Windows Vista. Available: https://www.intercrypto.com/. Accessed 15 Mar 2021
25. Dai, W.: Crypto++ Library 8.2 | Free C++ class library of cryptographic schemes. Available: https://www.cryptopp.com/. Accessed 10 May 2021

Chapter 4

V2G/G2V Bi-directional On-Board EV Charger Using Two Phase Interleaved DC-DC Converter K. Vineel Kumar and K. Deepa

Abstract The future of transportation is electric; this work emphasizes the design and simulation of Vehicle-to-Grid (V2G) and Grid-to-Vehicle (G2V) operation of a bi-directional interleaved DC-DC converter. The rapid advancement of electric vehicles in the transportation sector may increase the peak demand on the current grid structure, and hence V2G power conversion becomes crucial to handle sudden surpluses in energy demand. The proposed topology consists of two parts: (1) a bi-directional AC-DC converter and (2) a BIC. Compared with other bi-directional designs, this design produces less ripple in the output and input currents, which increases the efficiency of the charger. The proposed design and control are validated in MATLAB/Simulink for both G2V and V2G operation.

4.1 Introduction

Environmental problems have been increasing day by day in many countries. Research indicates that most vehicular pollution is caused by internal combustion engine (ICE) vehicles, and hence many countries have shown interest in energy-storage-based electric vehicles [1]. As a result, manufacturing of electric vehicles will increase, and they will play a key role in the future automobile industry. The main reason for using electric vehicles is to reduce the CO2 and other emissions released by ICE vehicles. Of all energy storage devices, the most efficient is the battery, and various types of batteries are available depending on the design requirements. This will make the usage of EVs grow further in the future automobile industry [2]. As the usage of EVs increases day by day, power generating companies have started analyzing the quantity and quality of power supplied by the utility grid


[3]. The power consumption may rise abruptly and lead to power insufficiency due to the consumption of power by EVs; hence, to meet the power demand, new power plants would have to be installed [4]. Thereby, in order to maintain the quantity and quality of power from the grid, a V2G strategy has to be implemented in the vehicle charging system [5]. In this context, several researchers have studied bi-directional charger designs for the EV battery for power balancing at peak demand. In [6], a bi-directional EV charger using a 3-level DC-DC converter is presented. In [7], detailed research on the interleaved converter achieved less current ripple and stable voltage; the design is simple and economical but has the disadvantage of higher inductor ripple. In [8], a BIC is presented and it is concluded that the ripple in the inductor current is reduced. In [9], a fuzzy-based bi-directional charger is developed instead of using a PI controller. A V2V charging design is implemented in [10], and a bi-directional three-level DC-DC converter with experimental results for electric mobility applications is presented in [11]. In this paper, a Bi-directional Interleaved Converter (BIC) is presented which undergoes two conversion stages: (1) a 3-Ø bi-directional AC-DC converter and (2) a bi-directional DC-DC converter. The proposed charger is designed for lithium-ion batteries. During G2V operation, the battery is charged and stores energy; during V2G operation, the energy stored in the battery can be fed back to the utility grid during peak demand hours to earn money, or connected to home appliances [12]. The proposed design and control modes are verified using MATLAB/Simulink. The rest of the paper is organized as follows: Sect. 4.2 discusses the system configuration, Sect. 4.3 tabulates the design parameters and formulas for the converters, Sect. 4.4 discusses the control concept of the proposed topology, Sect. 4.5 presents the open loop and closed loop circuits and analyzes their results, and Sect. 4.6 concludes the paper.

4.2 System Configuration

The proposed bi-directional charger operates in two modes: forward operation for charging the battery and reverse operation for discharging it. In forward operation mode, the first conversion stage converts the three-phase AC grid voltage to a DC voltage using a three-phase voltage source converter. Following it, a BIC is designed such that it operates in buck mode during forward operation and in boost mode during reverse operation [6]. A 3-Ø bi-directional AC-DC converter along with a BIC is shown in Fig. 4.1. This topology helps reduce the current ripple and increases efficiency [7].


Fig. 4.1 Topology of the proposed bi-directional charger

4.2.1 Forward Operation Mode

Bi-directional AC-DC converter: The supply to the 3-Ø AC-DC converter is given by the 3-Ø grid. In the forward mode of operation, this converter works as a 3-Ø diode bridge rectifier, as shown in Fig. 4.2, followed by a DC link connected to the BIC. Bi-directional interleaved DC-DC converter: In this paper, a two-phase BIC is used as the DC-DC converter. The topology of the converter is shown in Fig. 4.3. It

Fig. 4.2 Topology of the bi-directional AC-DC converter


Fig. 4.3 Topology of the two phase bi-directional interleaved DC-DC converter

consists of two inductors connected in parallel. The parallel connection of these two inductors reduces the low frequency harmonics, which further reduces the size and cost of the inductors. As the current from the input source divides between the parallel paths, equal currents pass through the two phases, producing a small input current ripple. In forward operation mode, this converter operates in buck mode. Buck mode: In this mode, the interleaved DC-DC converter acts as a buck converter; the high voltage (HV) side is connected to the DC link and the low voltage (LV) side to the battery. Initially, switches Q1 and Q2 are triggered; during this time Q3 and Q4 act as diodes. The inductors charge during the ON state and discharge during the OFF state through the diodes of Q3 and Q4. The battery is charged by the currents flowing from the two inductors.

4.2.2 Reverse Operation Mode

During reverse operation mode, power flows from the vehicle to the grid. The battery voltage is stepped up to the DC link voltage using the two-phase interleaved DC-DC converter, and the DC link voltage is converted into the AC grid voltage using the bi-directional AC-DC converter, which then operates in inverting mode. Bi-directional interleaved DC-DC converter: Boost mode: In this mode, the interleaved DC-DC converter acts as a boost converter; the LV side is connected to the battery and the HV side to the DC link. Switches Q3 and Q4 are triggered; during this time, switches


Q1 and Q2 act as diodes. The inductors charge while switches Q3 and Q4 are ON; the stored energy in the inductors, along with the battery energy, is discharged when Q3 and Q4 are OFF. Bi-directional AC-DC converter: During reverse operation mode, the bi-directional AC-DC converter acts as an inverter. The DC link voltage is converted into an AC grid voltage by operating the switches in 180° conduction mode. The output of the inverter is a quasi-square waveform; an LC filter connected at the source terminals filters out the harmonics in the output voltage and converts the quasi-square voltage into a nearly sinusoidal voltage. An RL load is connected in the proposed topology, as shown in Fig. 4.1.

4.3 Design of Two-Phase Interleaved DC-DC Converter

The design parameters of the BIC are obtained using Eqs. (4.1)–(4.4) for both buck and boost operation.

Buck converter duty cycle: D = V_O / V_S (4.1)

Boost converter duty cycle: D = 1 − (V_O / V_S) (4.2)

Inductances: L_1 = L_2 = (V_in × D) / (Δi_L × f_s) (4.3)

Capacitance: C_2 = (V_O × D) / (ΔV_O × f_s × R) (4.4)

where f_s is the switching frequency, V_O the output voltage, V_S the source voltage, V_in the input voltage, Δi_L the inductor ripple current, ΔV_O the output voltage ripple, and R the load resistance.
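To make the design procedure concrete, the sketch below evaluates Eqs. (4.1)–(4.4) in Python. The 320 V DC link, 56 V battery, and 1 kW rating are taken from the chapter, while the switching frequency and the ripple targets are assumed values chosen only for illustration; they are not the authors' design inputs.

```python
# Hedged numeric walk-through of Eqs. (4.1)-(4.4); assumed f_s and ripples.
V_S, V_O, P_O = 320.0, 56.0, 1e3   # source (DC link), output (battery), power
f_s = 10e3                          # assumed switching frequency, Hz
delta_iL = 0.5                      # assumed inductor current ripple, A
delta_VO = 0.5                      # assumed output voltage ripple, V

D_buck = V_O / V_S                          # Eq. (4.1)
D_boost = 1 - V_O / V_S                     # Eq. (4.2)
R = V_O ** 2 / P_O                          # equivalent load resistance
L1 = L2 = V_S * D_buck / (delta_iL * f_s)   # Eq. (4.3), with V_in = V_S
C2 = V_O * D_buck / (delta_VO * f_s * R)    # Eq. (4.4)

print(f"D_buck = {D_buck:.3f}, D_boost = {D_boost:.3f}")
print(f"L1 = L2 = {L1 * 1e3:.2f} mH, C2 = {C2 * 1e6:.0f} uF")
```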

4.4 G2V V2G Control Operation

The control operation of V2G and G2V with a PI controller is shown in Figs. 4.4 and 4.5 [8]. In G2V operation, the EV battery is being charged. To regulate the battery

Fig. 4.4 Closed loop control during forward operation [8]


Fig. 4.5 Closed loop control during reverse operation [8]

current, a PI controller is used, for which the reference current is compared with the feedback current from the battery. The error produced by the comparator is given to the PI controller, with a saturation limit, to reduce the steady-state error. The pulses produced are given to switches Q1 and Q2 so that the inductors charge. In V2G mode, a reference voltage is compared with the feedback voltage taken from the battery, and the error is passed through the PI controller with a saturation limit to reduce the steady-state error. The main objective of the controller is to control the EV battery voltage and maintain it at the constant value required by the electric vehicle.
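A hedged sketch of this control loop is given below: a discrete PI controller with output saturation (and a simple anti-windup clamp) drives the battery current toward its reference, with a crude averaged inductor model standing in for the converter. The gains, limits, time step, and plant model are all illustrative assumptions, not the values used in the Simulink model.

```python
# Hedged sketch of the Fig. 4.4 loop: discrete PI with saturation limit.
class SaturatedPI:
    """Discrete PI controller with output saturation and anti-windup."""
    def __init__(self, kp, ki, out_min, out_max):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, reference, feedback, dt):
        error = reference - feedback                     # comparator output
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral    # PI law
        u_sat = min(max(u, self.out_min), self.out_max)  # saturation limit
        if u != u_sat:
            self.integral -= error * dt                  # anti-windup clamp
        return u_sat

# G2V example: drive the battery current toward the 20 A reference.
V_DC, V_BAT, L = 320.0, 56.0, 13.356e-3  # DC link, battery, L from Table 4.1
pi = SaturatedPI(kp=0.05, ki=5.0, out_min=0.0, out_max=1.0)  # assumed gains
i_bat, dt = 0.0, 1e-4
for _ in range(5000):
    duty = pi.step(20.0, i_bat, dt)
    i_bat += (duty * V_DC - V_BAT) / L * dt  # averaged inductor model di/dt
print(f"steady-state battery current ~ {i_bat:.2f} A, duty ~ {duty:.3f}")
```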

4.5 Simulation and Results

The simulation setup of the proposed bi-directional charger with the BIC has been verified in MATLAB, as shown in Figs. 4.6 and 4.10; the parameters used are listed in Table 4.1. The output waveforms from the open loop simulation are shown in Figs. 4.7, 4.8 and 4.9. Figure 4.6 shows the proposed open loop topology, verified in

Fig. 4.6 Open loop simulation topology


Table 4.1 Parameter specifications

| S. No. | Parameter | Symbol | Value |
| 1 | Inductance | L1, L2 | 13.356 mH |
| 2 | Filter capacitance | C1 | 900 µF |
| 3 | Battery voltage | Vbat | 56 V |
| 4 | Output power | Po | 1 kW |
| 5 | Output capacitance | C2 | 200 µF |
| 6 | Source inductance | Ls | 5 mH |
| 7 | Source capacitance | Cs | 1 mF |

Fig. 4.7 Open loop simulation waveforms of buck and boost operation

Fig. 4.8 Inductor and output currents in buck mode


Fig. 4.9 Inductor current ripples

Fig. 4.10 Closed loop simulation topology

MATLAB. Figure 4.7 shows the simulation waveforms of the converter operating in V2G and G2V modes. During G2V, the duty cycle is set to 12% to step the DC link voltage of 320 V down to the battery voltage of 56 V; the initial SOC is taken as 80%. From Fig. 4.7 it can be seen that the SOC increases as the battery charges; the output current is 20 A with an output power of 1 kW. During V2G, the duty cycle is set to 88% to step the DC link voltage of 320 V up from the battery voltage of 57 V; the initial SOC is again 80%, and from Fig. 4.7a it can be seen that the SOC decreases as the battery discharges. Figure 4.9 shows the variation in the inductor currents during G2V operation: the ripple in the inductor current is small because the two inductors are connected in parallel. Figure 4.8 shows the open loop simulation waveforms of


both the inductor currents and the output current in buck mode. An output current of 20 A passes through the battery, which is the rated battery charging current. The output waveforms from the closed loop simulation are shown in Figs. 4.11, 4.12, 4.13 and 4.14. Figure 4.10 shows the proposed closed loop topology, verified in MATLAB. Figure 4.11 represents the battery parameters during G2V mode. With an initial reference current of 20 A, operating the circuit in closed loop yields an output power of 1 kW with the voltage stepped down from the 320 V DC link to the 56 V battery voltage; an increase in SOC from the initial 80% is also observed. Figure 4.13 shows the variation in the inductor currents and output current, where the 20 A reference current can be seen on the battery side. Figure 4.12 represents the closed loop simulation waveforms during boost operation; the SOC decreases from the given initial

Fig. 4.11 Battery parameters during G2V operation

Fig. 4.12 Battery parameters during V2G operation


Fig. 4.13 Closed loop simulation waveforms of inductor and output currents

Fig. 4.14 Load voltage at the grid during V2G operation (Without Filter)

SOC of 80%. In closed loop boost operation, the main objective is to control the DC link voltage and hold it constant; in Fig. 4.12 a DC link voltage of 400 V, the given reference value, is observed. Figure 4.15 shows the filtered load voltage waveform during V2G operation. The total harmonic distortion (THD) of the source (grid) voltage is calculated using Eq. (4.5) and shown in Figs. 4.16 and 4.17, where I_1 is the fundamental-frequency component and I_2, I_3, I_4, … are the harmonic components. Figure 4.16 shows that the THD of the output voltage is about 58.31% without the filter, while Fig. 4.17 shows a THD of 1.38%. From this it can be concluded that adding an LC filter at the source terminals reduces the harmonics in the waveform.

%THD = (√(I_2² + I_3² + I_4² + ···) / I_1) × 100 (4.5)
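The sketch below applies Eq. (4.5) numerically to a synthetic square wave using an FFT, the role played by MATLAB's FFT analysis tool in the chapter; the waveform, sampling rate, and fundamental frequency are assumed values for illustration.

```python
# Hedged sketch of the Eq. (4.5) THD computation on a synthetic waveform.
import numpy as np

f0, fs_samp, cycles = 50.0, 100_000.0, 10        # assumed grid f0 and sampling
t = np.arange(0, cycles / f0, 1 / fs_samp)
signal = np.sign(np.sin(2 * np.pi * f0 * t))     # square wave: odd harmonics

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs_samp)
bin_f0 = np.argmin(np.abs(freqs - f0))           # fundamental bin (I_1)
fund = spectrum[bin_f0]
harmonics = spectrum[2 * bin_f0::bin_f0]         # bins at 2*f0, 3*f0, ...

thd = np.sqrt(np.sum(harmonics ** 2)) / fund * 100  # Eq. (4.5)
print(f"THD = {thd:.1f} %")  # ~48% for an ideal square wave
```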


Fig. 4.15 Output voltage during the V2G operation with filter

Fig. 4.16 THD analysis of load voltage without filter

4.6 Conclusion

In this paper, the topology of a BIC is proposed and validated by simulation in MATLAB/Simulink. G2V mode (forward operation) converts the 320 V DC link to the 56 V battery voltage at an output current of 20 A; responding to the reference current, the battery is charged with a duty cycle of 12%, and the output power is 1 kW. V2G mode (reverse operation) converts the 56 V of a charged battery to the 320 V DC link. The bi-directional AC-DC power converter undergoes two


Fig. 4.17 THD analysis of load voltage with filter

conversions: rectifying mode during G2V and inverter mode during V2G. Similarly, the BIC undergoes two conversion stages: buck mode during G2V and boost mode during V2G.

References

1. Poornesh, K., Nivya, K.P., Sireesha, K.: A comparative study on electric vehicle and internal combustion engine vehicles. In: Proceedings of the International Conference on Smart Electronics and Communications, pp. 1179–1183 (2020)
2. de Melo, R.R., Tofoli, F.L., Daher, S., Antunes, F.L.M.: Interleaved bidirectional DC-DC converter for electric vehicle applications based on multiple energy storage devices. Electr. Eng. (2020)
3. Kang, T., Chae, B., Suh, Y.: Control algorithm of bi-directional power flow rapid charging system for electric vehicle using Li-ion polymer battery. In: IEEE ECCE Asia Downunder
4. Thiruvonasundari, D., Deepa, K.: Electric vehicle battery modelling methods based on state of charge: review. J. Green Eng. 10 (2020)
5. Rahul, K., Ramprabhakar, J., Shankar, S.: Comparative study on modelling and estimation of state of charge in battery. pp. 1610–1615 (2017)
6. Chaurasiya, S., Singh, B.: A G2V/V2G off-board fast charger for charging of lithium-ion based electric vehicles. In: IEEE International Conference on Environment and Electrical Engineering (2019)
7. Li, Y., Diao, L.: Research on interleaved bidirectional DC/DC converter. In: Lecture Notes in Electrical Engineering, Chap. 42, pp. 409–420 (2016)
8. Phimphui, A., Supatti, U.: V2G and G2V using interleaved converter for a single-phase onboard bidirectional charger. In: IEEE Transportation Electrification Conference and Expo, Asia-Pacific (2019)


9. Srilakshmi, S., Mohankrishna, S., Deepa, K.: Bidirectional converter using fuzzy for battery charging of electric vehicle. In: IEEE Transportation Electrification Conference (ITEC India) (2019)
10. Vempalli, S., Deepa, K., Prabhakar, G.: A novel V2V charging method addressing the last mile connectivity. In: IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES) (2018)
11. Monteiro, V., et al.: Experimental validation of a bidirectional three-level DC-DC converter for on-board or off-board EV battery chargers. In: IECON 2019, 45th Annual Conference of the IEEE Industrial Electronics Society, vol. 1. IEEE, New York (2019)
12. Maalandish, M., et al.: Six-phase interleaved boost DC/DC converter with high-voltage gain and reduced voltage stress. IET Power Electron. 10(14), 1904–1914 (2017)

Chapter 5

A Comparative Performance Analysis of Varied 10T SRAM Cell Topologies at 32 nm Technology Node Siddhant Ahlawat, Siddharth, Bhawna Rawat , and Poornima Mittal

Abstract SRAM is a key component of most embedded systems; it consumes a major portion of the total area as well as the total power of the system. As a result, the design and stability of the SRAM cell have a huge influence on the overall performance and cost of building the system. Practical aspects such as variations in performance due to temperature changes in the environment are paramount in defining the trustworthiness of the bit-cell, and power wastage in the form of leakage current is also crucial in determining its effectiveness. In this manuscript, four different 10T SRAM cell configurations are designed at the 32 nm technology node and simulated at a supply voltage of 1 V to evaluate and compare their stability against noise in hold, read, and write modes. On the basis of SNM, the 10T'2 bit-cell has the most holistic performance, with HSNM, RSNM, and WSNM values of 0.4004, 0.5584, and 0.52 V respectively, followed closely by the 10T'4 SRAM bit-cell. The vulnerability to temperature variation has also been evaluated to record the variation in static noise margin with temperature. The temperature analysis reveals that the 10T'3 cell has the best tolerance against temperature variation, as its change in noise margin per degree Celsius is the minimum, at 0.051, 0.058, and 0.123 mV/°C. Finally, the leakage current associated with each design is compared, which allows the cells to be compared on the basis of static power losses.

5.1 Introduction

The modern world is highly digital, and most electronic devices with processing capability rely on a static random access memory (SRAM)-based cache for optimal performance. Thus, the performance of the SRAM bit-cell plays a crucial role in the overall performance of a microprocessor memory [1]. Conventionally, a latching structure comprising two inverters connected by mutual feedback forms the core of a bit-cell. This memory block is accessed through access transistors connected in a pass-transistor configuration. The SRAM


cells can be designed using devices such as metal-oxide-semiconductor field effect transistors (MOSFETs) or bipolar junction transistors (BJTs). BJT-based designs bestow high-speed performance but consume high power, whereas MOSFETs provide low power designs and are thus commonly used in caches [2]. The advantages of a MOSFET cache in a microprocessor are high speed, increased reliability, and long-duration data retention, but large-scale use of SRAM is limited by high cost, low density, and higher power consumption during read and write operations [3, 4]. The most primitive SRAM structure is composed of six transistors (6T): four transistors form the cross-coupled inverter-based memory core that stores the data in binary format, and the data is accessed with the help of two more transistors. The memory core can store two stable states, '0' and '1'. The access transistors act in conjunction with two bit lines and are controlled via signals to steer the cell into read, hold, and write operations. In the recent decade, besides the 6T cell, various other bit-cell topologies such as 7T, 8T, 9T, 10T, 11T, 12T, and 13T [5–11] have been reported in the literature. An increase in transistor count is observed with decreasing technology node, to uplift the performance of the bit-cell. Power consumption and area usually increase with transistor count, but improvements in performance parameters such as noise margin, speed, and reliability against variation overshadow the increased area cost. The need for such high transistor count cells is also necessitated by the high power consumption and lower read-mode stability of 6T bit-cell topologies [12]. Among the different cells reported in the literature, 10T cells are gaining popularity at lower technology nodes due to their higher immunity to noise and reduced leakage current. Consequently, in this paper four pre-existing 10T SRAM bit-cell topologies are designed and analyzed at the 32 nm technology node; this analysis helps identify the merits and shortcomings of each topology. The increasing dependence of civilization on technology has exponentially increased the demand for SRAM; therefore, with the growing plethora of applications, it is important to categorize the different SRAM bit-cells based on their merits for optimal utilization. In this paper, a comprehensive analysis of different 10T bit-cell configurations is presented using the Predictive Technology Model; each 10T bit-cell is designed at the 32 nm technology node and simulated with a supply voltage of 1 V. The rest of the manuscript is organized as follows: Sect. 5.2 elucidates the different 10T topologies pre-existing in the literature; Sect. 5.3 explains the static noise margin analysis under hold, read, and write conditions; Sect. 5.4 evaluates the stability of each cell against temperature variation under the three conditions; Sect. 5.5 evaluates the leakage current, which reflects the static power losses through each cell; and Sect. 5.6 summarizes the findings and future scope.


5.2 Existing 10T SRAM Bit-Cell Topologies

5.2.1 10T'1

A 10T bit-cell (10T'1) was reported in 2021 by Sachdeva and Tomar [8] to improve leakage power performance. The schematic diagram of the 10T'1 bit-cell is depicted in Fig. 5.1a. This bit-cell bestows enhanced read performance and an expanded write margin compared with the conventional 6T cell. Additional transistors are added to improve drainage and to disconnect the feedback loop during the write operation, thereby improving efficiency and speed. The cross-coupled inverter configuration is reduced to a cascaded inverter setup: data is written from a bit line to the first inverter, and the second inverter develops the correct value through the cascading effect. This alteration makes the write operation single ended, which increases the write margin of the bit-cell. The disconnection of the feedback between the inverter pair is facilitated by an additional control signal CSR. Transistors N5 and N6 are used as buffers

Fig. 5.1 Schematic diagram for a 10T’1, b 10T’2, c 10T’3, and d 10T’4 SRAM bit-cell


for the read operation; they are controlled by nodes QB and Q, while transistor N8 acts as a buffer stacked on top of N5 and N6 to further enhance read stability.

5.2.2 10T'2

Another 10T bit-cell topology (10T'2) was reported by Mansore and Gamad [13] in 2018; its schematic is depicted in Fig. 5.1b. The major highlights of this cell are a write operation with interrupted-power-supply assist [1], reduced leakage power, a read buffer to increase stability, and a structure that enables a bit-interleaving arrangement. An added advantage is that even though the transistor count has increased, the number of control signals is kept in check, which keeps the peripheral circuitry of the cache in check as well.

5.2.3 10T'3

A technique for designing SRAM bit-cells using Schmitt Trigger-based inverters has been gaining momentum over the last few years. A 10T bit-cell (10T'3) reliant on a Schmitt Trigger-based inverter core was reported by Mansore and Gamad in 2019. Schmitt Trigger-based inverters can alter their switching voltage in keeping with the direction of input change and consequently improve the stability of the bit-cell [14]. Figure 5.1c depicts the schematic diagram of the 10T'3 bit-cell. The 10T'3 consists of two Schmitt Trigger-based inverters that help enhance read stability; transistor M3 acts as a barrier between the Q node and the discharge path of RBL through M8-M3-M10, thus preserving the value stored at data node Q. An OR gate is used to control transistor M10 to increase the write margin of the cell, while the PQB node ensures that no subthreshold current flows through M4-M5-M6 by increasing the source voltage of M5 (Vg − Vs < Vth) during the read and hold operations.

5.2.4 10T'4

Yet another 10T bit-cell (10T'4) configuration was reported by Zhang et al. [15] in 2019; its schematic is depicted in Fig. 5.1d. The 10T'4 cell consists of two orthodox inverters and a modified inverter with a PMOS (PR2) in between to separate the Q node from the SQ node. The write operation, performed simultaneously for nodes QB and SQ, enhances the write margin of the cell. This cell also employs a read assist scheme: for Q = '0', there are two discharge paths to improve read stability. This is achieved as BL can


discharge through ACR-NR1, which raises the SQ node voltage by only a very small amount; this in turn has no impact on Q, and thus no destructive read operation is registered by the cell.

5.3 Static Noise Margin Analysis

One of the important parameters in analyzing the implementation and functioning of an SRAM cell is the static noise margin (SNM). It is a measure of the noise immunity of the circuit, defined as the maximum noise voltage a cell can tolerate before the information inside the cell changes, i.e., bit inversion takes place. The SNM is measured in each mode of operation, so for each cell three distinct SNM values are recorded: hold static noise margin (HSNM), read static noise margin (RSNM), and write static noise margin (WSNM).

5.3.1 Hold Static Noise Margin Analysis

An SRAM bit-cell is maintained in hold mode for the majority of its operation, and therefore the HSNM measurement is of utmost importance in determining SRAM stability. The HSNM is the lowest noise voltage necessary to flip the state of the cell during the hold condition [16]. It is measured as the side of the largest square that can fit inside the butterfly curve [17]. The HSNM values attained for all the bit-cells are compared graphically in Fig. 5.2a. It can be deduced from the figure that the 10T'3 bit-cell has the best hold noise performance, at 0.436 V, in comparison with the other cells, while the 10T'2 and 10T'4 cells have similar HSNM values of 0.4004 and 0.392 V respectively. The bit-cell with the least hold stability is 10T'1, with an HSNM of only 108 mV.
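As a hedged illustration of this largest-square extraction, the sketch below implements the 45-degree rotation construction of Seevinck et al. [16] on idealized tanh-shaped inverter VTCs; real butterfly curves would come from SPICE sweeps of the bit-cells, and the VTC shape and gain here are assumptions.

```python
# Hedged sketch of largest-square SNM extraction on idealized tanh VTCs.
import numpy as np

VDD = 1.0
x = np.linspace(0.0, VDD, 4001)

def f(v):  # idealized inverter voltage transfer curve (assumed gain of 8)
    return 0.5 * VDD * (1 - np.tanh(8 * (v - 0.5 * VDD)))

ax, ay = x, f(x)   # inverter 1: (Vin, Vout)
bx, by = f(x), x   # inverter 2 with axes swapped

# Rotate 45 deg so each curve is single-valued in u; the largest inscribed
# square has its diagonal vertical in this frame, with length side*sqrt(2).
ua, va = (ax - ay) / np.sqrt(2), (ax + ay) / np.sqrt(2)
ub, vb = (bx - by) / np.sqrt(2), (bx + by) / np.sqrt(2)
ub, vb = ub[::-1], vb[::-1]  # make ub ascending for interpolation

u = np.linspace(max(ua.min(), ub.min()), min(ua.max(), ub.max()), 4001)
gap = np.interp(u, ua, va) - np.interp(u, ub, vb)  # vertical gap = diagonal

# One lobe gives positive gaps, the other negative; SNM is the worse lobe.
snm = min(gap.max(), -gap.min()) / np.sqrt(2)
print(f"SNM of the idealized cell ~ {snm * 1e3:.0f} mV")
```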

5.3.2 Read Static Noise Margin Analysis

The read operation makes the memory core of the bit-cell vulnerable because it is being accessed via the bit lines. The RSNM is the lowest noise voltage at the storage nodes during the read operation that is sufficient to flip the state of the cell. Like the HSNM, it is measured as the side of the largest square that can fit inside the read butterfly curve. The RSNM values acquired for all the 10T SRAM bit-cells are compared in Fig. 5.2b. The cell with the most superior read stability is the 10T'2 cell, with an RSNM of 0.4584 V, followed by 10T'4, 10T'3, and 10T'1 with relatively lower RSNM values of 0.383, 0.28, and 0.156 V. The most important observation


Fig. 5.2 Graphical comparison for a HSNM, b RSNM, and c WSNM values for various 10T SRAM bit-cells

among the RSNM values is that the 10T'3 cell is the only cell that registers a decline in performance relative to its HSNM value; the other cells have a read assist mechanism to boost the strength of the cell during the read operation, and this can be confirmed from their RSNM values.

5.3.3 Write Static Noise Margin Analysis

During the write operation, the data on the bit lines is written into the memory core of the bit-cell. The WSNM measures the robustness of the write operation: it is the lowest noise voltage needed to unintentionally write a logic value onto the nodes of the cell. It is calculated as the difference between the logic-high level of the cell and the bit line voltage at which bit inversion takes place while sweeping the word line of the bit-cell [18]. The WSNM values obtained for the various 10T SRAM bit-cells are compared in Fig. 5.2c. Theoretically, the best WSNM a bit-cell can obtain is half of the supply voltage at which it operates; all the cells in this paper are analyzed at a 1 V supply, so the most ideal value for the WSNM is in the vicinity of 0.5 V. The cell that records a similar value is 10T'2, at 0.52 V. The other cells either fall short of the ideal value, such as 10T'1


and 10T'3 at 0.45 V and 0.375 V respectively, or overshoot the mark, as in the case of 10T'4 at 0.667 V. Another important observation is that despite having relatively low hold and read noise immunity, the 10T'1 cell shows a slightly better write noise margin, because its inverter loop is broken during the write operation. The 10T'3 cell, in contrast, gives a relatively low WSNM because of its simple and direct single-ended write operation.

5.4 Temperature Analysis

Another significant aspect of designing an SRAM cell is temperature analysis. An SRAM bit-cell may be subjected to a wide temperature range and is expected to maintain its functionality, so temperature variation analysis is an essential part of evaluating an SRAM bit-cell. All the cells in this analysis are evaluated for the shift in static noise margin as the operating temperature varies linearly from −10 to 110 °C.

5.4.1 Temperature Analysis for Hold Operation

The variation in the HSNM value for each cell is presented in terms of butterfly curves in Fig. 5.3. The butterfly curve for 10T'1 is the narrowest, and consequently its HSNM is the least: the HSNM of 10T'1 decreases from a highest value of 0.108 V at −10 °C to a lowest value of 0.099 V at 85 °C, whereas for 10T'2 the HSNM decreases from 0.403 V at −10 °C to 0.3844 V at 85 °C. It is observed from Fig. 5.3c, d that the HSNM of 10T'3 decreases from a highest value of 0.43 V at 0 °C to a minimum of 0.392 V at 90 °C, and the HSNM of 10T'4 decreases from 0.392 V at 0 °C to 0.364 V at 90 °C. For ease of comparison, the change in HSNM with temperature is depicted in Fig. 5.4a. It can be concluded that 10T'2 is affected the most by temperature change, closely followed by the 10T'1 and 10T'4 SRAM bit-cells, while the 10T'3 cell performs quite well under temperature variation.
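The per-degree drift figures used in this section can be obtained by fitting a line to (temperature, noise margin) samples, as in the hedged sketch below; the endpoint values are the 10T'1 hold-mode numbers quoted above, while the intermediate samples are invented for illustration.

```python
# Hedged sketch: least-squares slope of noise margin vs temperature.
import numpy as np

temp_C = np.array([-10.0, 25.0, 60.0, 85.0])       # assumed sweep points
hsnm_V = np.array([0.108, 0.1047, 0.1014, 0.099])  # endpoints from the text

slope_mV_per_C = np.polyfit(temp_C, hsnm_V, 1)[0] * 1e3
print(f"HSNM drift ~ {slope_mV_per_C:.3f} mV/degC")  # negative: margin shrinks
```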

5.4.2 Temperature Analysis of Read Operation

For the read operation, the temperature variation analysis examines the impact of temperature on the RSNM values of each cell. The read butterfly curve for each 10T bit-cell is presented in Fig. 5.5. It can be noticed from Fig. 5.5a, b that the RSNM of 10T'1 decreases from a highest value of 0.1701 V at −10 °C to


Fig. 5.3 Butterfly curve for hold operation of a 10T'1, b 10T'2, c 10T'3, and d 10T'4 cell under temperature variation

a minimum value of 0.1485 V at 85 °C, and the RSNM of 10T'2 decreases from a highest value of 0.403 V at −10 °C to a minimum of 0.3844 V at 85 °C. From Fig. 5.5c, d it is observed that the RSNM of 10T'3 decreases from a highest value of 0.285 V at 0 °C to a minimum of 0.280 V at 90 °C, and the RSNM of 10T'4 decreases from 0.38 V at 0 °C to 0.36 V at 90 °C. The change in noise margin with temperature in read mode is exhibited in Fig. 5.4b. It can be concluded that 10T'3 holds up best under changing temperature, followed by 10T'2, while 10T'4 and 10T'1 are affected relatively more by temperature variation.

5.4.3 Temperature Analysis of Write Operation

The variation in the WSNM of the 10T SRAM bit-cells with temperature is depicted in Fig. 5.6. It can be deduced from Fig. 5.6a that the WSNM of 10T'1 decreases from a highest value of 405.018 mV at −10 °C to a minimum value of


Fig. 5.4 Variation in noise margin with temperature for a HSNM, b RSNM, and c WSNM for all 10T SRAM bit-cells

367.26 mV at 110 °C. In Fig. 5.6b, the WSNM of 10T'2 decreases from a highest value of 472.556 mV at −10 °C to a minimum of 458.16 mV at 110 °C, while from Fig. 5.6c it is observed that the WSNM of 10T'3 decreases by 0.125 mV for every 1 °C increase in temperature. From Fig. 5.6d, the WSNM of 10T'4 decreases at a linear pace of 0.83 mV/°C from 15 to 85 °C and then changes pattern due to the decrease in the switching threshold of the inverter formed by PR1 and NR2. Initially, when QB = '1' and Q = '0', the SQB node retains the value '1' and keeps charging the QB node. As WWL keeps increasing, QB keeps decreasing and reaches a point where it enters a metastable state, discharging through WBL while simultaneously being charged from the SQB node. When the temperature is increased, the switching threshold of the inverter formed by PR1 and NR2 decreases, so QB has to decrease further in order to cross the new, lower threshold. This leads to an unintended write of '1' at Q and a decrease in the write margin. The variation in WSNM with temperature is shown in Fig. 5.4c: 10T'4 is affected the most by temperature change, while 10T'2 performs the best against it, closely followed by 10T'3.


Fig. 5.5 Butterfly curve for read operation of a 10T’1, b 10T’2, c 10T’3, and d 10T’4 cell under temperature variation

5.5 Leakage Current Analysis

An SRAM bit-cell remains in the hold state for most of its operation. During this state only the transistors of the memory core are functional, while the other transistors in the cell should be off and should therefore conduct no current. But even when a transistor is off, a small amount of current may flow through it, resulting in leakage current through the cell. This leakage current is responsible for a large amount of power wasted in the hold condition; therefore, for optimal cell performance the leakage current should be as small as possible. All the bit-cells in this paper were accordingly analyzed for leakage current to identify the cell with minimal static power loss. The leakage current values obtained for each SRAM bit-cell are depicted in Fig. 5.7. The highest leakage current, 994.721 pA (picoamperes), is observed for the 10T'4 SRAM bit-cell, whereas the 10T'1 and 10T'2 bit-cells have comparable leakage currents of 330 and 316 pA. The least leakage current is reported for the 10T'3 SRAM bit-cell, at only 9.206 pA.
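Since hold-state leakage translates directly into static power at the 1 V supply used throughout (P_static = VDD × I_leak), the reported currents can be turned into static power figures as in the short sketch below.

```python
# Hedged sketch: static power implied by the reported leakage currents.
VDD = 1.0  # supply voltage used for all cells in this chapter (V)
leakage_pA = {"10T'1": 330.0, "10T'2": 316.0, "10T'3": 9.206, "10T'4": 994.721}

for cell, i_leak in sorted(leakage_pA.items(), key=lambda kv: kv[1]):
    print(f"{cell}: {i_leak:8.3f} pA  ->  {VDD * i_leak:8.3f} pW static power")
```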


Fig. 5.6 Write margin curve of a 10T’1, b 10T’2, c 10T’3, and d 10T’4 cell under temperature variation

Fig. 5.7 Leakage current values recorded for different 10T SRAM bit-cells


5.6 Conclusion and Future Scope

In this paper, four different 10T SRAM bit-cell topologies were designed at the 32 nm technology node and their simulation results analyzed at a 1 V supply voltage. Each bit-cell was evaluated in terms of static noise margin, variation in static performance with temperature, and leakage current. On the basis of SNM, the 10T'2 bit-cell has the most holistic performance, with HSNM, RSNM, and WSNM values of 0.4004, 0.5584, and 0.52 V respectively, followed closely by the 10T'4 SRAM bit-cell. The cell with the weakest static performance is the 10T'1, with relatively low HSNM and RSNM values of 0.10 and 0.158 V and an average WSNM; its poor performance in read and hold conditions is its overpowering demerit. The temperature analysis reveals that the 10T'3 cell has the best tolerance against temperature variation, as its change in noise margin per degree Celsius is the minimum, at 0.051, 0.058, and 0.123 mV/°C; the other cells show approximately similar, moderate levels of temperature tolerance that are not at par with the 10T'3 cell. The leakage current analysis, which reflects the static power dissipation, shows that the 10T'3 cell outperformed the other designs in this context. As for the future scope, the same designs can be implemented using FinFET devices, as they have significantly faster switching times and are far denser than CMOS at lower technology nodes; FinFETs also possess lower gate resistance, which further helps improve noise immunity. A major dissimilarity between FinFET-based designs and those using orthodox planar (CMOS) devices is that the freedom to choose a device's drive strength is compromised, especially for devices close to the smallest size. This comparative study can help designers incorporate the best aspects of each design to generate a new and better bit-cell configuration. A compatible sense amplifier, an important peripheral circuit component that aids the read operation, can also be designed.

References

1. Rawat, B., Mittal, P.: Analysis of varied architectural configuration for 7T SRAM bit-cell. In: 4th International Conference on Recent Trends in Communication & Electronics (ICCE-2020), 28–29 Nov 2020. Taylor and Francis
2. Mittal, P., Kumar, N.: Comparative analysis of 90 nm MOSFET and 18 nm FinFET based different multiplexers for low power digital circuits. Int. J. Adv. Sci. Technol. 29(8), 4089–4096 (2020)
3. Rawat, B., Mittal, P.: Single bit line accessed high performance ultra low voltage operating 7T SRAM bit-cell with improved read stability. Int. J. Circuit Theory Appl. 49(5), 1435–1449 (2021)
4. Kumar, N., Mittal, P., Mittal, M.: Performance analysis of FinFET based 2:1 multiplexers for low power application. In: 2020 IEEE Students Conference on Engineering and Systems (2020)
5. Rawat, B., Mittal, P.: A 32 nm single ended single port 7T SRAM for low power utilization. Semicond. Sci. Technol. (2021). https://doi.org/10.1088/1361-6641/ac07c8

5 A Comparative Performance Analysis …

75

6. Pasandi, G., Fakhraie, S.M.: A new sub-300 mV 8T SRAM cell design in 90 nm CMOS. In: The 17th CSI International Symposium on Computer Architecture & Digital Systems (CADS 2013), pp. 39–44 (2013). https://doi.org/10.1109/CADS.2013.6714235 7. Dhindsa, A.S., Saini, S.: A novel differential 9T cell SRAM with reduced sub threshold leakage power. In: 2014 International Conference on Advances in Engineering & Technology Research (ICAETR - 2014), pp. 1–5 (2014). https://doi.org/10.1109/ICAETR.2014.7012808 8. Sachdeva, A., Tomar, V.: Design of 10T SRAM cell with improved read performance and expanded write margin. IET Circuits Devices Syst. 15(1), 42 (2021) 9. He, Y., Zhang J., Wu, X., Si, X., Zhen, S., Zhang, B.: A half-select disturb-free 11T SRAM cell with built-in write/read-assist scheme for ultralow-voltage operations. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 27(10), 2344–2353 (2019). https://doi.org/10.1109/TVLSI. 2019.2919104 10. Sharma, A., Bharti, M.: SA novel low power 12T SRAM Cell with Improved SNM. In: 2019 6th International Conference on Computing for Sustainable Global Development (INDIACom), 98–101 (2019) 11. Atias, L., Teman, A., Giterman, R., Meinerzhagen, P., Fish, A.: A low-voltage radiationhardened 13T SRAM bitcell for ultralow power space applications. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 24(8), 2622–2633 (2016). https://doi.org/10.1109/TVLSI.2016.251 8220 12. Ajoy, C.A., Kumar, A., Anjo, C.A., Raja, V.: Design and analysis of low power SRAM using cadence tool in 180 nm technology. IJCST (2014) 13. Mansore, S., Gamad, R.: A data-aware write-assist 10T SRAM cell with bit-interleaving capability. Turkish J. Electr. Eng. Comput. Sci. 26, 2361–2373 (2018) 14. Mansore, S.R., Gamad, R.S.: Single-ended 10T SRAM cell with improved stability. J. VLSI Des. Sign. Process. 5, 19–25 (2019). https://doi.org/10.5281/zenodo.3491402 15. Zhang, J., Wu X., Yi, X., Lv, J., He, Y.: A subthreshold 10T SRAM cell with enhanced read and write operations. In: 2019 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–4 (2019). https://doi.org/10.1109/ISCAS 16. Seevinck, E., List, F.J.: Lohstroh: Static-noise margin analysis of MOS SRAM cells. IEEE J. Solid-State Circuits 22(5), 748–754 (1987) 17. Kumar, B., Kaushik, B.K., Negi, Y.S.: Design and analysis of noise margin, write ability and read stability of organic and hybrid 6-T SRAM cell. Microelectron. Relab. 54(12), 2801–2812 (2014) 18. Rawat, B., Mittal, P.: A 32 nm single ended single port 7T SRAM for low power utilization. Semiconductor Sci. Technol. (2021)

Chapter 6

Hydrodynamic Coupling Between Comoving Microrobots S. Sharanya and T. Sonamani Singh

Abstract Locomotion of a micron-size body in a fluid medium falls under low Reynolds number (Re) fluid–structure interaction dynamics. Studies of biological and artificial microswimmers have paved the way for the research and development of clinical microrobots for biomedical applications. This paper presents the modeling and analysis of the swimming of two multilink microrobots, where each robot is composed of a rigid spherical magnetic head attached to a rigid slender tail via a torsional spring. The microrobots are modeled in the presence of both intra- and inter-hydrodynamic coupling using the finite element method in COMSOL. The effect of hydrodynamic coupling is analyzed for two swimming modes: side-by-side and front-and-back configurations. The performances of the microrobots are compared between the two modes of swimming and with that of an isolated microrobot. The hydrodynamic coupling is found to affect the performances of the robots, and the achievable collective performance depends on the swimming mode and actuation frequency.

6.1 Introduction

Microrobots are micron-size devices powered by an external field (magnetic, electric, acoustic, or light), chemicals, or biological motors to execute certain functions [1]. Research in microrobotics is mainly driven by its potential for minimally invasive biomedical applications such as smart drug delivery, detoxification, microsurgery, and in-vivo sensing and imaging [2]. The domain of microrobotics in clinical biomedicine is relatively new and still in the development stage. In the last two decades, many innovative designs (bioinspired, artificial, and hybrid), actuation mechanisms, and control strategies have been proposed with a vision toward applications in the biomedical domain [1, 2].



The locomotion of microrobots in a fluid environment differs from that of their macro-scale counterparts: at micron-scale dimensions, the fluid–structure interaction is governed by low Reynolds number (Re) dynamics, where dissipative viscous forces dominate over inertial forces [3]. This imposes a constraint on a micro-scale body to execute a nonreciprocal motion in order to displace itself in the fluid medium [3]. Microorganisms are perfect examples of low Re swimmers, and many microrobot designs are inspired by E. coli bacteria, spermatozoa, and Chlamydomonas [4]. In some microrobot designs, hybrid models are also proposed in which the functionalities of both artificial and biological components are exploited [1]. In the literature, microrobots are also known by synonyms such as microswimmers, micromotors, or micromachines. Developments in fabrication technology and micromachining techniques have further made it feasible to realize robots on the micron scale. But compared with the functionalities and on-board facilities (power supplies, sensors, and actuators) of macro-scale robots, the field of microrobots is still immature, and, concerning the types of biomedical applications it is aiming for, many challenges remain unaddressed [5]. These microdevices are mainly intended to operate in complex biological environments, so the probability that a single robot completes the desired task is relatively low. A possible solution is to use a swarm of microrobots that cooperate collectively to significantly enhance the capability. Deploying a swarm of robots might increase the probability of completing the task, but it simultaneously increases the complexity due to hydrodynamic interactions between the robots and with the environment. Hydrodynamic interactions play an important role in the dynamics of low Re swimmers. Their effect is also altered by the swimmer components [6], material properties [7, 8], nature of actuation and mode of swimming [9, 10], and morphology [11]. In studies where complete hydrodynamic interactions (intra and inter) are considered, the simulations are computationally very expensive; on the other end, studies using leading-order approximations of the hydrodynamic interactions have resorted to simpler models or small-amplitude approximations. In this paper, we propose a simpler approach considering both the intra- and inter-hydrodynamic interactions, using the COMSOL software to study the dynamics of comoving microrobots. The modeling details of the microrobots in a fluid medium and their hydrodynamic interactions in COMSOL are presented in Sect. 6.2. The influence of the hydrodynamic interactions is quantified in terms of the performance (time-averaged velocity) of the microrobots. The analysis of the dependencies of the performances on the actuation and coupling parameters is described in Sect. 6.3. This is followed by the discussion in Sect. 6.4 and the conclusion in Sect. 6.5.


6.2 Modeling of Microrobots in COMSOL Multiphysics

The modeling of the microrobots swimming in the fluid medium is done in COMSOL. The geometry is built in a 2-D plane, assuming that the locomotion of the robots is confined to the x–y plane and the rotations of the robots are about the z-axis. Each robot is a two-link structure composed of a rigid magnetic head and a rigid tail joined by an elastic torsional joint, as shown in Fig. 6.1a; the two swimming configurations, side-by-side and front-and-back, are shown in Fig. 6.1b, c, respectively. The parameter h denotes the average separation distance between the two robots. The robots are immersed in a fluid chamber consisting of water, and the dimensions of the chamber walls are taken to be very large relative to the dimensions of the robot, so that the walls do not influence the dynamics of the robots. The fluid is modeled as Stokes (inertialess) flow, as the locomotion of microrobots falls in the low Re regime [3]:

−(μ/ρ) ∇²U_fluid + (1/ρ) ∇P = F,  (6.1)

where μ and ρ are the dynamic viscosity and the density of the fluid, respectively, U_fluid is the velocity and P is the pressure of the fluid medium, and F is the external force. The inter- and intra-hydrodynamic interactions between the microrobots are solved using the arbitrary Lagrangian–Eulerian (ALE) method. In COMSOL, this method is implemented using the fluid–multibody interaction assembly feature, which provides two-way coupling.


Fig. 6.1 a Schematic diagram of the microrobot model. The microrobot is a two-link robot consisting of a magnetic rigid head connected to the rigid tail by an elastic torsional spring. b Microrobots swimming side-by-side. c Microrobots swimming front-and-back

Table 6.1 Parameters used in the simulation
Radius of head 'r': 100 µm
Length × width of tail (l × a): 322 µm × 42 µm
Magnetization of head: 1.6 × 10⁻⁷ A m²
Magnitude of magnetic field: 5 mT
Torsional spring stiffness 'k': 1.5 × 10⁻⁹ N/m
Robot material (Young's modulus): 4 × 10⁹ Pa

Here, two-way coupling means that a moving part of the robot will disturb the nearby fluid environment, and this disturbance will affect the remaining parts of the same robot (intra-hydrodynamic coupling) and the neighboring robot (inter-hydrodynamic coupling). Among the different methods of driving microrobots, magnetic actuation is the most popular because of its wireless power transfer capability and its minimal interaction with the internal tissues of the body when operated at low magnitude [2]. In this type of actuation, a part of the robot is made of magnetic material, and an external magnetic field drives the robot by imparting a magnetic torque. For a rigid spherical magnetic head with constant magnetization M_Head placed in an oscillating external magnetic field B = B₀ cos(ψ(t)) x̂ + B₀ sin(ψ(t)) ŷ, the external magnetic torque experienced by the head is given by

τ_ext = M_Head B₀ sin[ψ(t) − θ] ẑ,  (6.2)

where ψ(t) = ψ₀ sin(ωt), ψ₀ and ω are the maximum angular magnitude and the frequency of actuation, and θ is the angular displacement of the head. For our microrobot model, we apply a moment of the form of Eq. (6.2) on the head of the robot to mimic the magnetic actuation. The parameters of the microrobot and the fluid medium used in the simulation are given in Table 6.1; other parameters used in generating the results are mentioned in the figure captions. The measurement and analysis of the swimming microrobots are done in the spatial frame (laboratory frame of reference).
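To make the actuation model concrete, the following toy Python sketch integrates the field torque of Eq. (6.2) for the magnetic head alone; it is not the two-way-coupled COMSOL model. The overdamped-rotation assumption, the spherical rotational drag coefficient 8πμr³, and the explicit Euler time step are our own illustrative choices, while the head and field parameters are taken from Table 6.1.

import math

# Toy, head-only illustration (not the COMSOL two-way-coupled model):
# overdamped rotation of the magnetic head under the torque of Eq. (6.2).
MU   = 1.0e-3              # dynamic viscosity of water [Pa s]
R    = 100e-6              # head radius [m] (Table 6.1)
M    = 1.6e-7              # head magnetization [A m^2] (Table 6.1)
B0   = 5e-3                # field magnitude [T] (Table 6.1)
PSI0 = 0.2                 # maximum angular magnitude [rad]
FREQ = 1.0                 # actuation frequency [Hz]
XI_R = 8.0 * math.pi * MU * R**3   # rotational Stokes drag of a sphere

def head_angle(t_end=1.0, dt=1e-5):
    """Integrate d(theta)/dt = tau_ext / xi_r with explicit Euler."""
    omega = 2.0 * math.pi * FREQ
    theta, t = 0.0, 0.0
    while t < t_end:
        psi = PSI0 * math.sin(omega * t)          # field angle psi(t)
        tau = M * B0 * math.sin(psi - theta)      # Eq. (6.2)
        theta += dt * tau / XI_R
        t += dt
    return theta

print(head_angle())   # at f = 1 Hz the head angle closely tracks psi(t)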

6.3 Results

The robots are characterized by the following performance measures: the time-averaged displacement X_avg and the time-averaged velocity V_avg, where the average is taken per actuation time cycle. The velocity distributions of the fluid due to the motion of the robots at three different time instants for side-by-side (Fig. 6.2a–c) and front-and-back (Fig. 6.2d–f) swimming are shown in Fig. 6.2. From these figures, we can see that the velocity flow distributions around the robots differ between configurations: for the side-by-side configuration, the flow distribution is similar around the two robots, whereas for front-and-back, it is different. The impact of these differences in hydrodynamic interactions on the performances of the robots is described below.

Fig. 6.2 Velocity profile of the fluid surrounding the microrobots at three different time instants for side-by-side swimming a–c and front-and-back swimming d–f. The other parameters used are: f = 1 Hz, h = 4r, ψ₀ = 0.2 rad


Fig. 6.3 Time-averaged velocity of the microrobots as a function of actuation frequency a for side-by-side swimming and b for front-and-back swimming. For comparison, the time-averaged velocity for a single isolated swimmer is also included. The other parameters used are: h = 4r, ψ₀ = 0.2 rad

Figure 6.3a shows V_avg as a function of frequency in the side-by-side configuration for robot A (top) and robot B (bottom); for reference, we have included the performance of a single robot swimming alone. Note that the robots move in the negative x-direction (refer to Fig. 6.1), which is why the time-averaged velocity is negative. From this plot, we can see that in the side-by-side configuration the collective swimming enhances the performances of both robots A and B, but only in a certain frequency range (0.5 Hz < f < 5 Hz); outside this range, the performance is almost equal to that of the isolated single robot. Also, the two robots A and B swim with similar velocities. The performance for the front-and-back configuration is shown in Fig. 6.3b. At lower frequencies (f < 5 Hz), the swimming velocities of robots C and D are almost similar and comparable to that of an isolated robot, while at higher actuation frequencies robot D (the back robot) swims slower. The variation of the performances as a function of h for the side-by-side configuration is shown in Fig. 6.4; the performance decreases as the average separation is increased.

6.4 Discussion

Microrobots with locomotion capabilities and functionalities are going to be a desirable choice for minimally invasive medical diagnosis and treatment due to their miniature size and their ability to penetrate complex in-vivo environments that normal clinical techniques cannot access. But remotely controlling and tracking their motion in real time in a biological fluid medium with proper precision is not easy, given the scale being dealt with. Also, the biocompatibility and non-toxicity required of a microrobot to prevent undesirable effects inside the human body make material design a challenging task. To tackle these challenges, researchers have looked for


Fig. 6.4 Time-averaged displacement and velocity as a function of the average separation distance for robots A and B in the side-by-side configuration. The other parameters used are: f = 1 Hz, ψ₀ = 0.2 rad

inspiration from the micro-biological world and developed relatively simple microrobots, and then tried to add additional features and tune them for biomedical applications. The control of magnetic microrobots is mainly based on the structural morphology, the magnetic properties of the material used, and the type of actuation field (planar or rotating). In addition to these factors, hydrodynamic interaction is inherently present in the dynamics. This interaction can be indirectly controlled by planning the deployment strategy so that the distribution of the microrobots yields optimal collective performance. As mentioned in Sect. 6.1, it is desirable to use more than one robot to increase the probability of achieving the task, and this also increases the role of hydrodynamics. From the results presented in Fig. 6.2, we can see that the hydrodynamic interactions between the comoving robots change significantly when we change the alignment of the robots. These changes are seen to impact the velocities of the robots (refer to Fig. 6.3). In the side-by-side configuration, due to the symmetry in the hydrodynamic coupling, the robots swim with similar velocities, and in certain frequency ranges their velocity is greater than that of an isolated swimming robot. For the front-and-back case, there is an asymmetry in the coupling and the velocities are different. A similar type of behavior is reported in biological microswimmers [7]. The hydrodynamic coupling depends on the average separation between the robots; from the results shown in Fig. 6.4, we can see that as the coupling between the robots is reduced in the side-by-side configuration, the performance decreases.


6.5 Conclusion

The paper concludes that hydrodynamic interactions influence the collective performance of microrobots swimming in the vicinity of each other. For comoving microrobots, the deployment strategy and the actuation frequency are important for maximizing the performance of the robots. In the side-by-side configuration, the velocities of the robots can be enhanced compared to an isolated robot by tuning the actuation frequency. The velocities of the robots in the front-and-back configuration are almost similar to that of an isolated robot.

Acknowledgements The author Sharanya S. is thankful to the Ministry of Education (MoE) for awarding the HTRA fellowship (NIT Trichy) for pursuing this study.

References
1. Ceylan, H., Giltinan, J., Kozielskia, K., Sitti, M.: Mobile microrobots for bioengineering applications. Lab Chip 17, 1705–1724 (2017)
2. Koleoso, M., Feng, X., Xue, Y., Li, Q., Munshi, T., Chen, X.: Micro/nanoscale magnetic robots for biomedical applications. Mater. Today Bio 8, 100085 (2020)
3. Purcell, E.M.: Life at low Reynolds number. Am. J. Phys. 45, 3–11 (1977)
4. Palagi, S., Fischer, P.: Bioinspired microrobots. Nat. Rev. Mater. 3, 113–124 (2018)
5. Soto, F., Wang, J., Ahmed, R., Demirci, U.: Medical micro/nanorobots in precision medicine. Adv. Sci. 7, 2002203 (2020)
6. Giuliani, N., Heltai, L., DeSimone, A.: Predicting and optimizing microswimmer performance from the hydrodynamics of its components: the relevance of interactions. Soft Rob. 5, 410–424 (2018)
7. Taketoshi, N., Omori, T., Ishikawa, T.: Elasto-hydrodynamic interaction of two swimming spermatozoa. Phys. Fluids 32, 101901 (2020)
8. Kuroda, M., Yasuda, K., Komura, S.: Hydrodynamic interaction between two elastic microswimmers. J. Phys. Soc. Japan 88, 054804 (2019)
9. Keaveny, E.E., Maxey, M.R.: Interactions between comoving magnetic microswimmers. Phys. Rev. E 77, 041910 (2008)
10. Alexander, G.P., Yeomans, J.M.: Hydrodynamic interactions at low Reynolds number. Exp. Mech. 50, 1283–1292 (2010)
11. Singh, T.S., Singh, P., Yadava, R.D.S.: Effect of interfilament hydrodynamic interaction on swimming performance of two-filament microswimmers. Soft Matter 14, 7748–7758 (2018)

Chapter 7

Chemical Reaction Optimization (CRO) for Maximum Clique Problem Mahmudul Hasan, Md. Rafiqul Islam, and Amrita Ghosh Mugdha

Abstract The maximum clique problem (MCP) finds the largest complete subgraph, or clique, of a given graph; the target is to maximize the size of the clique. In this article, we propose a chemical reaction optimization (CRO)-based metaheuristic algorithm to solve the maximum clique problem. Two datasets have been used to measure the performance of the proposed method. The proposed method gives better results, with lower average errors, in comparison with the state-of-the-art methods on the two datasets.

7.1 Introduction

A maximum clique is a clique with the maximum number of vertices, in which each vertex is connected to all the others and which cannot be enlarged by adding more vertices. In this problem, the input is a simple undirected graph, and the output is a clique of nodes of that graph. The maximum clique problem is considered one of the first problems proven to be NP-hard [1]. The maximum clique problem plays an important role in graph-theoretic applications, including graph coloring and fractional graph coloring, and in solving important problems such as constraint satisfaction, subgraph isomorphism, and vertex covering analysis [2]. The problem has a wide range of applications in both theory and practice, such as social network communication analysis, computer network analysis, information recovery, signal transmission theory, pattern recognition, molecular biology, and bioinformatics [3]. Many techniques have been developed to solve the maximum clique problem. Many algorithms were proposed by researchers, for example, the genetic algorithm (GA) [4], simulated annealing (SA) [5, 6], tabu search [7], reactive local search (RLS) [8], and ant colony optimization (ACO) [9]. All of the proposed algorithms have


played a great role in their place in solving the problem. Although several algorithms have been proposed to solve the MCP, there is no general solution that can give optimal results in polynomial time. We have solved the MCP using the chemical reaction optimization (CRO) algorithm. CRO is a population-based, nature-inspired metaheuristic approach that performs very well owing to its searching capability: it can search the solution space both locally and globally through its four operators, which makes the algorithm highly effective. Many optimization problems have been solved by CRO in recent years with better results than other existing metaheuristic algorithms [10–16]. Contributions of our proposed work: the four reaction operators, along with a repair operator, have been redesigned, and better results were obtained on the datasets compared with other metaheuristic algorithms. In this article, we demonstrate the basic ideas of MCP with the problem statement in Sect. 7.1. In Sect. 7.2, some related works are described; we describe our proposed method in Sect. 7.3, and Sect. 7.4 concludes the work.

7.1.1 Problem Statement

A clique is a complete subgraph, meaning that the nodes in a clique are pairwise adjacent. A maximum clique is the largest complete subgraph of a graph, and the maximum clique problem (MCP) finds the largest possible such set of vertices. The MCP is considered NP-hard because no polynomial-time algorithm is known for it [2]. Let a graph G have a clique Q, a vertex set in which every pair of nodes is adjacent. Given a node v outside the clique, v can be added to Q if it is adjacent to every node in Q. The largest complete subgraph of a given graph is the maximum clique. Let us consider the graph in Fig. 7.1, where graph G = (V, E),

Fig. 7.1 Arbitrary graph


V = {1, 2, 3, 4, 5, 6} and E = {(1, 2), (1, 6), (2, 3), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6), (4, 5), (5, 6)}. The cliques of the given graph are {1, 2, 6}, {2, 5, 6}, {2, 3, 5}, {2, 3, 6}, {3, 5, 6}, {2, 3, 5, 6}, and {3, 4, 5}. The clique set {2, 3, 5, 6} has the maximum size among all cliques, so the maximum clique is {2, 3, 5, 6}, and its size is 4. The MCP works with a given number of vertices in a graph. To build maximum cliques, first start from a single vertex, then grow it by looping through the graph's remaining vertices. For every vertex v in this loop, v is appended to the clique if it is adjacent to all the nodes already in the clique; otherwise, v is discarded. Given a graph G = (V, E), where V is the vertex set and E is the edge set, a clique is a subset of vertices. The maximum clique size of the graph G is denoted ω(G). The objective of the MCP is to find a clique of maximum size |Q| [17]:

ω(G) = max{|Q| : Q is a clique in G}.  (7.1)

For the MCP, the objective is to maximize the size of the clique; that is, the target is to search for complete subgraphs of the given graph with the maximum number of vertices.
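As an illustration (not part of the proposed method), the following brute-force Python enumeration verifies Eq. (7.1) on this example graph; such enumeration is feasible only for very small graphs.

from itertools import combinations

# Brute-force check of Eq. (7.1) on the example graph of Fig. 7.1.
V = [1, 2, 3, 4, 5, 6]
E = {(1, 2), (1, 6), (2, 3), (2, 5), (2, 6),
     (3, 4), (3, 5), (3, 6), (4, 5), (5, 6)}

def adjacent(u, v):
    return (u, v) in E or (v, u) in E

def is_clique(nodes):
    return all(adjacent(u, v) for u, v in combinations(nodes, 2))

best = max((set(c) for k in range(1, len(V) + 1)
            for c in combinations(V, k) if is_clique(c)), key=len)
print(best, len(best))   # {2, 3, 5, 6} 4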

7.2 Related Work

Among the many techniques proposed to solve the maximum clique problem, some existing algorithms are described in brief. Moussa et al. [17] proposed a method to solve the maximum clique problem based on the genetic algorithm. Initially, the algorithm randomly creates chromosomes as an initial feasible solution, considered the population. Over the generations, the crossover and mutation operators successively evolve the cliques inside the chromosomes. To expand the size of the clique, these operators are always followed by an operator, named ExpandClique, that adds random vertices to the clique while ensuring that the expanded or new chromosome is still a clique. The algorithm repeats for a certain number of generations to get the optimized result. Assad et al. [18] proposed a method to solve the maximum clique problem using harmony search. The paper proposed two methods for the MCP and compared the results between them: one (HS_MCP) is a generic HS and the other (HSI_MCP) an idiosyncratic HS, both based on the harmony search algorithm. The proposed algorithm starts with the harmony memory initialization. When a new component is added to harmony memory, a new HM is generated, and the new component may come from an existing HM or be another value generated randomly. As the new harmony is generated randomly, it may not represent a clique; to convert the newly generated harmony into a clique, it is passed through an additional


repair operator. Enlarge, extraction, and extension are the three main phases of the repair procedure. HS_MCP gives better results on some of the graph types from the DIMACS benchmark set [20]. Evin [19] proposed a new hybrid method based on the genetic algorithm for solving the MCP and compared the results with similar literature results. The rHGA method [19] used a binary representation to generate chromosomes for initializing the population. The roulette wheel method was used to choose parents for the next generations. The single-point crossover was used as the crossover operator. As elitism, the best 5% and the worst 5% of the chromosomes were transferred to the next generation without any change. Three additional operators were used: a drop operator that removes a node from the chromosome, an add operator that adds a node to the chromosome, and a swap operator that replaces a node in the chromosome with another node not in the chromosome. Finally, in the mutation operator, 10% of the vertices of the chromosome were randomly selected and deleted, which increased diversity. Each chromosome goes through these operators in an attempt to find better solutions. This algorithm was tested on the DIMACS [20] and BHOSLIB [21] benchmark instances.

7.3 Proposed Method for Solving MCP Using Chemical Reaction Optimization

Chemical reaction optimization (CRO) mimics the process of chemical reactions to solve an optimization problem. The two laws of thermodynamics govern the operation of CRO. The first law states that energy cannot be created or destroyed but can be transformed from one form to another, so that the total amount of energy never changes. Every chemical element has PE, KE, and buffer, where PE and KE refer to potential and kinetic energy, and the energy of the environment around the substance is referred to as the buffer. The second law ensures the transformation of energy among the molecules, since PE is converted to KE during the iteration stage. CRO follows these two rules to arrive at a better solution to an optimization problem. Initialization, iteration, and the final output stage are the three basic stages of CRO. In the initialization step, the initial values of the algorithm parameters are defined and initialized. After initializing the parameters, CRO generates an initial population of size PopSize, on which the operations are performed in the iteration step; this stage stops when the stopping criterion is met. Four elementary reactions of CRO occur in the iterations to obtain stability of the molecules: synthesis, decomposition, on-wall ineffective collision, and intermolecular ineffective collision. These four reaction operators and their searching capability make CRO a strong approach for optimization problems. Besides, CRO permits additional repair operators to be incorporated if necessary. The algorithm outputs the best solution in the final output stage. The MCP has been solved with the help of CRO, and we have named the proposed method MCP_CRO. After initializing the initial parameters of CRO, the initial population is generated randomly; then the parameters and population are passed


through the MCP_CRO function to perform the iteration step. After the stopping criterion is met, MCP_CRO measures the output. The algorithm outputs the maximum clique size and the vertices in the clique.
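The control flow of the iteration stage can be sketched as follows. This is a heavily simplified, runnable Python skeleton of the canonical CRO loop, not the authors' implementation: the four operators are reduced to stubs, and the decomposition threshold, the KE decay rule, and the energy bookkeeping are our own placeholder assumptions.

import random
from dataclasses import dataclass

# Sketch of the CRO iteration-stage control flow only; operator bodies
# are stubs (the MCP-specific versions are described in Sect. 7.3.2).
@dataclass
class Molecule:
    clique: list
    num_hit: int = 0
    min_hit: int = 0
    ke: float = 1.0

def on_wall(m, ke_loss_rate=0.7):
    m.num_hit += 1
    m.ke *= (1.0 - ke_loss_rate)        # part of KE goes to the buffer

def intermolecular(m1, m2):
    on_wall(m1); on_wall(m2)

def decomposition(pop, m):
    pop.remove(m)
    pop += [Molecule(m.clique[:]), Molecule(m.clique[:])]

def synthesis(pop, m1, m2):
    pop.remove(m1); pop.remove(m2)
    pop.append(Molecule(sorted(set(m1.clique) & set(m2.clique))))

def cro_loop(pop, iters, mole_coll=0.3, beta=0.85, decomp_threshold=5):
    for _ in range(iters):
        if random.random() > mole_coll or len(pop) < 2:
            m = random.choice(pop)      # unimolecular collision
            if m.num_hit - m.min_hit > decomp_threshold:
                decomposition(pop, m)
            else:
                on_wall(m)
        else:                           # intermolecular collision
            m1, m2 = random.sample(pop, 2)
            if m1.ke < beta and m2.ke < beta:
                synthesis(pop, m1, m2)
            else:
                intermolecular(m1, m2)
    return max(pop, key=lambda m: len(m.clique))

pop = [Molecule([1, 4]), Molecule([2, 5]), Molecule([0, 1])]
print(cro_loop(pop, 100).clique)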

7.3.1 Population Generation and Initialization

In CRO, a single unit from the whole population is called a molecule. The molecule is also known as the manipulated agent, as it does the actual manipulation for retrieving the solution. PopSize molecules make up the whole population. Each molecule has several parameters. The initial parameters of CRO are: PopSize, buffer, MoleColl, MinStruct, InitialKE, α and β, NumHit, MinPE, MinHit, and KELossRate. In the solution generation section, we describe how we have redesigned the CRO operators to solve the problem. Population Generation: The initial population is generated based on random selection (a sketch is given below). In this representation, each molecule is initially an array of size n, where n is the size of the input graph. Each index of the array takes the value 0 or 1, selected randomly, giving an n-bit binary array. As the values of a molecule are chosen randomly, it may not represent a clique; therefore, the molecule is converted to a clique using a heuristic approach. Then, another array is created containing the index numbers of the previous array whose value is 1. By looping from 0 to PopSize, all the molecules are generated accordingly; thus, the initial population is created. Solution Representation: Basically, each molecule in the population contains a clique of the graph. For example, we consider a given graph with 200 vertices from the DIMACS clique benchmark graphs [20]. A molecule generated from the graph is shown in Table 7.1.
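A minimal Python sketch of this population generation step, under our own reading of the description, is given below; the greedy clique extraction and the 0–5 renumbering of the Fig. 7.1 graph are our own illustrative choices.

import random

# Sketch: draw an n-bit binary array, then keep a subset of the
# selected vertices that forms a clique via a greedy scan.
def random_molecule(n, adj):
    bits = [random.randint(0, 1) for _ in range(n)]
    picked = [v for v in range(n) if bits[v] == 1]
    clique = []
    for v in picked:                    # heuristic clique extraction
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def generate_population(pop_size, n, adj):
    return [random_molecule(n, adj) for _ in range(pop_size)]

# Adjacency map of the Fig. 7.1 graph, renumbered to vertices 0-5.
adj = {0: {1, 5}, 1: {0, 2, 4, 5}, 2: {1, 3, 4, 5},
       3: {2, 4}, 4: {1, 2, 3, 5}, 5: {0, 1, 2, 4}}
print(generate_population(3, 6, adj))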

7.3.2 Operator Design

In the iteration stage of the CRO algorithm, four elementary reactions occur. In the case of the maximum clique problem, an additional operator named ExpandClique is used, which is called after each of the CRO operators. This operator helps to enlarge the size of the clique found after the operation of each of the basic operators of CRO.

Table 7.1 Solution representation
Index: 0  1  2  3  4  5  6  7  8  9   10  11
Nodes: 12 14 8 15 26 30 33 41 55 111 145 185


Fig. 7.2 On-wall ineffective collision operator

On-wall Ineffective Collision: Let the molecule size be 10. Here, a molecule m produces a new molecule m′. In molecule m′, we append an adjacent node of node 14, where node 14 and the appended adjacent node are both selected randomly, as shown in Fig. 7.2. After the on-wall ineffective collision, the ExpandClique operator is executed to enlarge the size of the clique:

m → m′.  (7.2)

ExpandClique Operator: The ExpandClique operator tries to increase the size of the clique found so far (a sketch follows below). First, we copy the nodes of the clique found after the execution of a basic operator into a vector V′. Then, the operator selects a vertex V_i randomly from the graph and iterates from that vertex to the last vertex of the graph. At each iteration, the operator checks whether the corresponding vertex is adjacent to all existing vertices in the present clique; if it is, the vertex is added to the clique. In the end, the output is a larger clique.

Decomposition: In this elementary reaction, two new molecules are generated from one molecule, and the two newly generated molecules differ in structure from the old molecule. Let molecule m produce two new molecules m′₁ and m′₂. Initially, molecule m is divided into two parts of random size. The new molecule m′₁ is formed from the first part of molecule m, with the rest filled by adding random nodes; similarly, molecule m′₂ is formed from the last part of molecule m, with the rest random. The scenario is shown in Fig. 7.3. Right after the decomposition operator, the algorithm uses the ExpandClique operator:

m → m′₁ + m′₂.  (7.3)

Intermolecular Ineffective Collision: In this collision, two molecules hit one another and create two new molecules. Let two molecules m₁ and m₂ collide and produce molecules m′₁ and m′₂. In molecule m′₁, we append an adjacent node of node 14, and in molecule m′₂, we append an adjacent node of node 18. Here, nodes 14 and 18 and the appended adjacent nodes are selected randomly, as shown in Fig. 7.4. We call the ExpandClique operator after the intermolecular ineffective collision.
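A minimal Python sketch of the ExpandClique operator, under our reading of the description above, follows; the 0–5 renumbering of the Fig. 7.1 graph is our own convention.

import random

# Sketch of ExpandClique: scan from a randomly selected vertex to the
# last vertex of the graph and append every vertex adjacent to all
# vertices already in the clique.
def expand_clique(clique, adj, n):
    grown = list(clique)
    start = random.randrange(n)         # randomly selected vertex V_i
    for v in range(start, n):
        if v not in grown and all(u in adj[v] for u in grown):
            grown.append(v)
    return grown

# With the 0-5 renumbered Fig. 7.1 graph, [1, 4] can grow to
# [1, 4, 2, 5], i.e. the maximum clique {2, 3, 5, 6} in the
# original numbering (depending on the random starting vertex).
adj = {0: {1, 5}, 1: {0, 2, 4, 5}, 2: {1, 3, 4, 5},
       3: {2, 4}, 4: {1, 2, 3, 5}, 5: {0, 1, 2, 4}}
print(expand_clique([1, 4], adj, 6))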


Fig. 7.3 Decomposition operator

Fig. 7.4 Intermolecular ineffective collision operator

Fig. 7.5 Synthesis operator

m₁ + m₂ → m′₁ + m′₂.  (7.4)

Synthesis: The synthesis operator is the reverse of decomposition: two molecules hit each other and create one new molecule. In our case, from the molecules m₁ and m₂ we take the common nodes to form the new molecule m and then check whether it is a valid clique; if it is, we have a new molecule combining the two molecules. Let m₁ and m₂ be two molecules. After the collision, molecule m is created; the operation is shown in Fig. 7.5. To improve the result after the synthesis operator, the ExpandClique operator is called:

m₁ + m₂ → m.  (7.5)

Repair Operator: This operator is applied to the solution found in an iteration to increase the clique size by local search. First, the nodes of the clique found after execution of the basic operators are copied into a vector. Let Q be the clique found after the iteration.


Table 7.2 Determination of adjoinable vertices of the present clique
Adjoinable vertex v of Q | Adjoinable vertices of Q ∪ {v} | ρ(Q ∪ {v})
2  | 14, 16, 19 | 3
10 | 17, 19     | 2
14 | 2, 17, 19  | 3
16 | 2, 17      | 2
17 | 10, 14, 16 | 3
19 | 2, 10, 14  | 3

Fig. 7.6 Flowchart for MCP_ CRO

If there are no adjoinable vertices for Q, then the output is Q. Otherwise, let v be an adjoinable vertex of the clique Q. The operator finds the number of adjoinable vertices of Q ∪ {v}, denoted ρ(Q ∪ {v}). If a vertex v_max has the maximum number of adjoinable vertices for Q ∪ {v}, then v_max is added to Q, and the clique Q ∪ {v_max} is obtained. This process is repeated for every adjoinable vertex of Q. The idea of the procedure has been taken from [22]. Let the clique found in an iteration be Q = {1, 3, 4, 11}, of size 4. From Table 7.2, for the clique Q, the maximum ρ(Q ∪ {v}) = 3 is attained for v = 2, so vertex 2 is added to the clique Q. Now Q = {1, 2, 3, 4, 11}, of size 5. The process is repeated until the clique has no adjoinable vertex. A flowchart of the proposed algorithm (MCP_CRO) is shown in Fig. 7.6.
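A minimal Python sketch of this repair procedure follows; it implements the adjoinable-vertex selection illustrated in Table 7.2, with the 0–5 renumbering of the Fig. 7.1 graph as our own convention.

# Sketch of the repair operator: repeatedly adjoin the vertex v_max
# whose addition leaves the largest number of adjoinable vertices,
# until no adjoinable vertex remains.
def adjoinable(Q, adj, n):
    return [v for v in range(n) if v not in Q and all(u in adj[v] for u in Q)]

def repair(Q, adj, n):
    Q = list(Q)
    while True:
        cand = adjoinable(Q, adj, n)
        if not cand:
            return Q
        # rho(Q U {v}) = number of adjoinable vertices after adding v
        v_max = max(cand, key=lambda v: len(adjoinable(Q + [v], adj, n)))
        Q.append(v_max)

# With the 0-5 renumbered Fig. 7.1 graph, repairing [1] grows it to the
# maximum clique {1, 2, 4, 5} ({2, 3, 5, 6} in the original numbering).
adj = {0: {1, 5}, 1: {0, 2, 4, 5}, 2: {1, 3, 4, 5},
       3: {2, 4}, 4: {1, 2, 3, 5}, 5: {0, 1, 2, 4}}
print(repair([1], adj, 6))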


7.3.3 Experimental Results and Comparisons

We implemented the proposed algorithm and evaluated it on the DIMACS [20] and BHOSLIB [21] clique graphs. The algorithm outputs the maximum clique size and the nodes in the clique. The comparison between MCP_CRO and GA_MC [17] is shown in Table 7.3. Results on the BHOSLIB clique graphs [21] and the comparison between MCP_CRO and rHGA [19] are given in Table 7.4. In Table 7.3, EGA_MC and EMCP_CRO denote the errors of the GA_MC and MCP_CRO methods with respect

Table 7.3 Comparison of optimal solutions on DIMACS instances [20] for MCP_CRO and GA_MC [17]
Instance | Best known | Edge density | GA_MC | EGA_MC (%) | MCP_CRO | EMCP_CRO (%)
brock200_2 | 12 | 0.500 | 12 | 0.00 | 12 | 0.00
brock200_4 | 17 | 0.650 | 17 | 0.00 | 17 | 0.00
brock400_2 | 29 | 0.750 | 25 | 13.79 | 25 | 13.79
brock400_4 | 33 | 0.750 | 33 | 0.00 | 33 | 0.00
brock800_2 | 24 | 0.650 | 20 | 16.67 | 20 | 16.67
brock800_4 | 26 | 0.650 | 21 | 19.23 | 20 | 23.08
C125.9 | 34 | 0.898 | 34 | 0.00 | 34 | 0.00
C250.9 | 44 | 0.899 | 44 | 0.00 | 44 | 0.00
C500.9 | 57 | 0.900 | 56 | 1.75 | 55 | 3.51
C1000.9 | 68 | 0.901 | 64 | 5.88 | 66 | 2.94
C2000.9 | 80 | 0.900 | 70 | 12.50 | 73 | 8.75
keller4 | 11 | 0.650 | 11 | 0.00 | 11 | 0.00
keller5 | 27 | 0.753 | 27 | 0.00 | 27 | 0.00
keller6 | 58 | 0.819 | 54 | 6.90 | 53 | 8.60
hamming8-4 | 16 | 0.830 | 16 | 0.00 | 16 | 0.00
hamming10-4 | 40 | 0.640 | 40 | 0.00 | 40 | 0.00
gen200_p0.9_44 | 44 | 0.899 | 44 | 0.00 | 44 | 0.00
gen200_p0.9_55 | 55 | 0.899 | 55 | 0.00 | 55 | 0.00
gen400_p0.9_55 | 55 | 0.899 | 52 | 5.45 | 55 | 0.00
gen400_p0.9_65 | 65 | 0.899 | 65 | 0.00 | 65 | 0.00
gen400_p0.9_75 | 75 | 0.899 | 75 | 0.00 | 75 | 0.00
p_hat300-1 | 8 | 0.245 | 8 | 0.00 | 8 | 0.00
p_hat300-2 | 25 | 0.490 | 25 | 0.00 | 25 | 0.00
p_hat300-3 | 36 | 0.745 | 36 | 0.00 | 36 | 0.00
p_hat700-1 | 11 | 0.250 | 11 | 0.00 | 11 | 0.00
p_hat700-2 | 44 | 0.499 | 44 | 0.00 | 44 | 0.00
p_hat700-3 | 62 | 0.749 | 62 | 0.00 | 62 | 0.00
p_hat1500-1 | 12 | 0.254 | 12 | 0.00 | 12 | 0.00
p_hat1500-2 | 65 | 0.507 | 65 | 0.00 | 65 | 0.00
p_hat1500-3 | 94 | 0.755 | 93 | 1.06 | 94 | 0.00
Average error | | | | 2.77 | | 2.58


Table 7.4 Comparison of optimal solutions on BHOSLIB instances [21] for MCP_CRO and rHGA [19]
Instance | Best known | Edge density | rHGA | ErHGA (%) | MCP_CRO | EMCP_CRO (%)
frb30-15-1 | 30 | 0.824 | 30 | 0.00 | 30 | 0.00
frb30-15-2 | 30 | 0.823 | 29 | 3.33 | 30 | 0.00
frb30-15-3 | 30 | 0.824 | 28 | 6.67 | 29 | 3.33
frb30-15-4 | 30 | 0.823 | 29 | 3.33 | 30 | 0.00
frb30-15-5 | 30 | 0.824 | 28 | 6.67 | 30 | 0.00
frb35-17-1 | 35 | 0.842 | 33 | 5.71 | 35 | 0.00
frb35-17-2 | 35 | 0.842 | 34 | 2.86 | 34 | 2.86
frb35-17-3 | 35 | 0.842 | 34 | 2.86 | 35 | 0.00
frb35-17-4 | 35 | 0.842 | 33 | 5.71 | 34 | 2.86
frb35-17-5 | 35 | 0.841 | 34 | 2.86 | 34 | 2.86
frb40-19-1 | 40 | 0.857 | 38 | 5.00 | 39 | 2.50
frb40-19-2 | 40 | 0.857 | 38 | 5.00 | 39 | 2.50
frb40-19-3 | 40 | 0.858 | 38 | 5.00 | 39 | 2.50
frb40-19-4 | 40 | 0.856 | 38 | 5.00 | 38 | 5.00
frb40-19-5 | 40 | 0.856 | 38 | 5.00 | 38 | 5.00
frb45-21-1 | 45 | 0.867 | 42 | 6.67 | 43 | 4.44
frb45-21-2 | 45 | 0.869 | 42 | 6.67 | 43 | 4.44
frb45-21-3 | 45 | 0.867 | 42 | 6.67 | 43 | 4.44
frb45-21-4 | 45 | 0.869 | 43 | 4.44 | 43 | 4.44
frb45-21-5 | 45 | 0.869 | 42 | 6.67 | 43 | 4.44
frb50-23-1 | 50 | 0.879 | 47 | 6.00 | 48 | 4.00
frb50-23-2 | 50 | 0.878 | 46 | 8.00 | 49 | 2.00
frb50-23-3 | 50 | 0.877 | 47 | 6.00 | 48 | 4.00
frb50-23-4 | 50 | 0.879 | 50 | 0.00 | 47 | 6.00
frb50-23-5 | 50 | 0.879 | 47 | 6.00 | 48 | 4.00
frb53-24-1 | 53 | 0.883 | 49 | 7.55 | 50 | 5.66
frb53-24-2 | 53 | 0.883 | 49 | 7.55 | 50 | 5.66
frb53-24-3 | 53 | 0.884 | 49 | 7.55 | 50 | 5.66
frb53-24-4 | 53 | 0.883 | 48 | 9.43 | 50 | 5.66
frb53-24-5 | 53 | 0.883 | 49 | 7.55 | 50 | 5.66
frb59-26-1 | 59 | 0.892 | 55 | 6.78 | 56 | 5.08
frb59-26-2 | 59 | 0.893 | 54 | 8.47 | 56 | 5.08
frb59-26-3 | 59 | 0.893 | 54 | 8.47 | 56 | 5.08
frb59-26-4 | 59 | 0.892 | 55 | 6.78 | 55 | 6.78
frb59-26-5 | 59 | 0.893 | 55 | 6.78 | 56 | 5.08
frb100-40 | 100 | 0.928 | 89 | 11.00 | 92 | 8.00
Average error | | | | 5.83 | | 3.75


to the best-known results; similar notations are used in Table 7.4. The proposed MCP_CRO algorithm was implemented in C++ and executed on a laptop with an Intel Core i5-7200U CPU. We used 5000 iterations and PopSize = 10, MoleColl = 0.3, KELossRate = 0.7, InitialKE = 0, buffer = 0, α = 0.90, and β = 0.85 to obtain the results. The average errors for the two datasets were calculated, and it can be observed that the MCP_CRO algorithm has lower average errors on both datasets.

7.4 Conclusion

We have proposed a CRO-based method to solve the MCP. The challenging task was to design the four CRO operators. A repair operator has been used to improve the results. The algorithm gives better results with lower error rates than GA_MC and rHGA, and in almost all cases, the proposed method outputs the best-known results. The algorithm was evaluated on only two datasets; as further work, we will improve the proposed method and perform experiments on all benchmark datasets of the MCP.

References
1. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness (1979)
2. Solnon, C., Fenet, S.: A study of ACO capabilities for solving the maximum clique problem. J. Heurist. 12, 155–180 (2006)
3. Marchiori, E.: A simple heuristic based genetic algorithm for the maximum clique problem. In: Proceedings of the 1998 ACM Symposium on Applied Computing, vol. 27 (1998)
4. Johnson, D.S., McGeoch, L.A.: The traveling salesman problem: a case study in local optimization. Local Search Comb. Optim. 1, 215–310 (1997)
5. Xu, X., Ma, J., Lei, J.: An improved ant colony optimization for the maximum clique problem. In: Third International Conference on Natural Computation (ICNC 2007), vol. 4, pp. 766–770. IEEE (2007)
6. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220, 671–680 (1983)
7. Al-Fayoumi, M., Banerjee, S., Mahanti, P.K.: Analysis of social network using clever ant colony metaphor. World Acad. Sci. Eng. Technol. 5, 970–974 (2009)
8. Battiti, R., Protasi, M.: Reactive local search for the maximum clique problem. Algorithmica 29, 610–637 (2001)
9. Fenet, S., Solnon, C.: Searching for maximum cliques with ant colony optimization. In: Workshops on Applications of Evolutionary Computation. Springer, Berlin, Heidelberg (2003)
10. Xu, J., Lam, A.Y., Li, V.O.: Chemical reaction optimization for task scheduling in grid computing. IEEE Trans. Parallel Distrib. Syst. 22, 1624–1631 (2011)
11. Saifullah, C.M.K., Rafiqul Islam, Md.: Solving shortest common supersequence problem using chemical reaction optimization. In: 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV). IEEE (2016)
12. Kabir, R., Islam, R.: Chemical reaction optimization for RNA structure prediction. Appl. Intell. 49, 352–375 (2019)


13. Riaz Mahmud, Md., Pritom, R.M., Rafiqul Islam, Md.: Optimization of collaborative transportation scheduling in supply chain management with TPL using chemical reaction optimization. In: 2017 20th International Conference of Computer and Information Technology (ICCIT). IEEE (2017)
14. Islam, M.R., Islam, M.S., Sakeef, N.: RNA secondary structure prediction with pseudoknots using chemical reaction optimization algorithm. IEEE/ACM Trans. Comput. Biol. Bioinform. (2019)
15. Rafiqul Islam, Md., et al.: Optimization of protein folding using chemical reaction optimization in HP cubic lattice model. Neural Comput. Appl. 32(8), 3117–3134 (2020)
16. Saha, S.K., Islam, M.R., Hasan, M.: DNA motif discovery using chemical reaction optimization. Evol. Intell., 1–20 (2020)
17. Moussa, R., Akiki, R., Harmanani, H.: A genetic algorithm for the maximum clique problem. In: 16th International Conference on Information Technology—New Generations (ITNG 2019). Springer, Cham (2019)
18. Assad, A., Deep, K.: A heuristic based harmony search algorithm for maximum clique problem. Opsearch 55, 411–433 (2018)
19. Evin, G.K.: A new genetic algorithm for the maximum clique problem. In: The International Conference on Artificial Intelligence and Applied Mathematics in Engineering. Springer, Cham (2019)
20. Second DIMACS challenge on cliques, coloring and satisfiability: DIMACS benchmark (1993). Available at http://dimacs.rutgers.edu/programs/challenge
21. BHOSLIB: benchmarks with hidden optimum solutions for graph problems. Available online: http://iridia.ulb.ac.be//fmascia/maximumclique/BHOSLIB-benchmark (17 June 2020)
22. Dharwadker, A.: The Clique Algorithm. Institute of Mathematics, Haryana, India (2006)

Chapter 8

Fast Implementation for Computational Method of Optimum Attacking Play in Rugby Sevens Kotaro Yashiro and Yohei Nakada

Abstract Owing to the first Rugby World Cup in 1987 and the inclusion of rugby sevens in the Olympic Games in 2016, worldwide interest in rugby has been growing. Therefore, presenting explanatory information during streaming or broadcasting of rugby matches is essential for easy understanding of plays, tactics, developments, and rules. Against such a background, we have proposed a method for computing the optimum attacking play combining run and hand-pass plays from players' positional data in rugby sevens. However, to perform this method in real time or semi-real time, its computational processes must be accelerated. To do this, we introduce three improvements from different viewpoints. The effectiveness of these improvements is validated via the same pseudo formation examples used in our previous work.

8.1 Introduction

The world's interest in rugby has been growing since its World Cup was first held in 1987 and its inclusion in the Olympic Games in 2016. Although there is increasing interest in rugby around the world, there may be difficulties in increasing its fan base because its plays, tactics, developments, and rules seem complex to potential fans. Furthermore, in various team sports including rugby, positional data of the players and the ball are increasingly observed and stored. It has, therefore, become a necessity to present explanatory information using the positional data, to ease the understanding of plays, tactics, developments, and rules. Therefore, we proposed a method to compute the optimum attacking play for a try from the positional data of the players and the ball in rugby sevens in our previous work [1]. This method considers attacking plays combining run and hand-pass plays simulated using motion models. The optimum attacking play is obtained via the branch-and-bound (B&B) method [2, 3] from a set of executable attacking plays stored as a multi-branch tree. It was also validated that this method can compute


realistic and reasonable attacking plays for several pseudo formation examples. Moreover, there can be several applications of this method, for example, try-scoring opportunity detection, attack scene classification, and game summarization. However, to perform this method in real time or semi-real time during streaming and/or broadcasting, its computational processes must be accelerated. To achieve this aim, we introduce three improvements from different viewpoints: the first is related to the range of the directional angle in the run play simulations of the ball carrier; the second involves the bounding operation in the B&B method [2, 3] for the optimization of attacking plays; the last is associated with parallel computing of the hand-pass simulations. These improvements are expected to accelerate the computation of the optimum attacking plays. Their effectiveness is validated via the same pseudo formation examples used in our previous work [1]. The rest of this paper is organized as follows: in Sect. 8.2, we explain work related to this study from several viewpoints. Section 8.3 describes the essence of the computational method in [1]. In Sect. 8.4, the three improvements proposed in this study are explained. Section 8.5 reports the results of the validation experiment using four pseudo formation examples. In Sect. 8.6, we present conclusions and future work.

8.2 Related Work

The physical abilities of rugby players have been reported in many references. For example, in [4], the influence of ball-carrying conditions on sprint speed was examined by measuring the sprint times of international rugby players under three different conditions. In [5], several physical characteristics of players were compared between rugby union and rugby sevens. The effect of training with heavy rugby balls on youth rugby union players was analyzed in [6] by comparing their speeds and effective ranges before and after the training. The launch characteristics of kicking and passing by national rugby players were examined using a high-speed camera in [7]. Moreover, papers [8] and [9] have comprehensively reviewed a vast number of studies related to the science of rugby. In our method, the players' reachable areas, which can be calculated from the motion models in [10] and the players' positional and velocity information, are used to obtain the optimum attacking plays. Such reachable areas are also used effectively in other methods. For example, in [10], they are used to compute player dominant regions, which are an extension of Voronoi regions [11]. They are also utilized to predict players who could receive possible passes in basketball games in [12]. In [13], adjacency graphs computed from the players' dominant regions, which are a kind of Delaunay graph [11], are employed for determining important players and detecting similar scenes in soccer. In [14], both the dominant regions and the adjacency graphs are computed to extract formation features for input into quantification models of the effectiveness and risk of pass plays in soccer.


In addition, other studies have presented explanatory information using players' positional data in team sports. For example, in [15], several mathematical analyses of players' positional data in soccer, including geometrical analyses, were discussed. Hidden Markov model (HMM)-based methods using players' positional data were proposed in [16] and [17]. In [18], a method for visualizing formation patterns computed from players' positional data was introduced and applied to soccer data. Moreover, many methods for presenting explanatory information for team sports are reviewed and/or explained in [19–22]. Furthermore, several tools applicable to the analysis of rugby matches can also be found (e.g., [23–26]). Although many methods and tools exist, as described here, to the best of our investigation, no method or tool other than ours can compute or visualize the optimum attacking play in rugby.

8.3 Computational Method for Optimum Attacking Play

This section describes the essence of the computational method in [1]. For more details, refer to [1].

8.3.1 Optimum Attacking Play

This method evaluates an attacking play for a try from two viewpoints: (i) the shortest time up to a try and (ii) the expected count of tackles. Specifically, this method considers the following optimization problem:

min_{p ∈ P*} F(p) = T(p) + λN(p).  (8.1)

Here, the symbols p and P* represent an attacking play combining run and hand-pass plays and the set of executable attacking plays, respectively. The functions T(·) and N(·) represent (i) the shortest time up to a try and (ii) the expected count of tackles, respectively. The constant λ ∈ [0, ∞) denotes the trade-off parameter controlling the weights of the two values. In the actual implementation, this method discretizes time with the step δ [s], because of the difficulties of simulating plays in continuous time. Furthermore, it uses another time step Δ [s] that represents the time interval for the decision-making of the ball carriers, to overcome computational difficulties; here, the condition Δ = hδ is satisfied for some natural number h. Thus, the attacking plays p can be stored as a multi-branch tree, where a node expresses a corresponding decision-making point of a ball carrier. Figure 8.1 shows a schematic diagram of such a decision tree of the ball carrier.


Fig. 8.1 Decision tree of ball carrier in one step


Consequently, this method can optimize the attacking play via the B&B method [2, 3] (for minimization problems), which consists of two operations: (i) a branching operation for dividing a problem into sub-problems and (ii) a bounding operation for eliminating unnecessary sub-problems. If the lower bound of the objective function value at a specific node is larger than its upper bound for the optimum attacking play, the bounding operation eliminates the corresponding unnecessary sub-problems. In the actual implementation of [1], the lower bound and the upper bound are obtained as the objective function value up to the node and as the smallest value among all objective function values of the executable attacking plays already simulated up to a try, respectively. Moreover, the bounding operation eliminates sub-problems corresponding to teammate players not having any executable hand-pass. Notably, this implementation applies best-first search, using the lower bounds up to nodes to determine the order of the branching operation.

8.3.2 Run Play Simulation

In this method, the player motion model shown in (8.2) and (8.3) is used for simulating run plays:

dr(t)/dt = v(t),  (8.2)

dv(t)/dt = aV u − a v(t).  (8.3)

Here, the two-dimensional vector-valued functions r(t) and v(t) express the position and velocity vectors of the player at time t, respectively. The two-dimensional vector u represents a unit direction vector for the driving force. The constants V [m/s] and a [s⁻¹] denote the maximal speed and the acceleration capability, respectively. This model is identical to the one described in [10]. In the actual run play simulations in our previous study [1] and in this study, it is assumed that the ball carriers move according to the above motion model with the


direction vector u at the optimum angle in (8.3). In addition, this method determines this optimum angle by minimizing the objective function under the assumption that the ball carrier keeps running with the same vector u up to a try. Moreover, when the ball carrier decides to continue a run play at a decision-making point, this method updates the optimum angle of the vector u by considering the same minimization. Note that the directional angle of the unit direction vector u is discretized with the width ξ [deg] to ease the implementation of such an optimization.
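For illustration, the following Python sketch integrates the motion model (8.2)–(8.3) for a single run play with an explicit Euler scheme; the values of V, a, the time step, and the number of steps are our own illustrative choices, not those of [1].

import math

# Euler integration of the run play motion model (8.2)-(8.3).
# V = 8.0 m/s, a = 1.5 1/s, delta = 0.05 s are illustrative values.
def simulate_run(r0, v0, angle_deg, V=8.0, a=1.5, delta=0.05, steps=40):
    ux = math.cos(math.radians(angle_deg))   # unit direction vector u
    uy = math.sin(math.radians(angle_deg))
    (x, y), (vx, vy) = r0, v0
    for _ in range(steps):
        vx += delta * (a * V * ux - a * vx)  # Eq. (8.3)
        vy += delta * (a * V * uy - a * vy)
        x += delta * vx                      # Eq. (8.2)
        y += delta * vy
    return (x, y), (vx, vy)

# Starting at rest, the player accelerates toward the driving direction
# and approaches the maximal speed V.
print(simulate_run(r0=(0.0, 0.0), v0=(0.0, 0.0), angle_deg=180.0))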

8.3.3 Hand-Pass Simulation

For simplicity, the two-dimensional ball position b(t) is simulated using a linear motion model with constant velocity as

db(t)/dt = γν,  (8.4)

where the parameter γ [m/s] represents the pass-ball speed, and the two-dimensional vector ν expresses a unit direction vector. To avoid violating a rule for hand-pass plays in rugby, all ball carriers execute only hand-passes with directions (i.e., directions of the vector ν) from 90 − ε to −(90 − ε) [deg]. Moreover, it is also assumed that the ball in a hand-pass can move only within the time range [t_p, t_p + Λ/γ]. Here, the time point t_p [s] and the constant Λ [m] represent the ball-releasing time point from the previous ball carrier and the effective range for a hand-pass ball, respectively. Figure 8.2 shows the area for an executable pass. Note that this method also determines hand-passes caught first by an opposing player in the simulation as inexecutable plays. Moreover, in this method, the teammate player cannot catch the ball at positions nearer the goal line than the position where the ball carrier releases the ball, to avoid violating a rule.

Fig. 8.2 Area for executable pass [1]. The constants Λ [m] and ε [deg] denote the effective range and the angle value for defining executable passes, respectively

[deg] Position of ball carrier [m] Direction of attack

− (90

Effective range of hand-pass Area for executable pass

) [deg]


searching executable passes of all teammate players. Note that the angle of the vector ν in (8.4) is also discretized with the width ζ [deg] to ease the implementation. In the second, the optimum passes for all teammate players are evaluated by minimizing the same objective function (8.1). Here, this minimization is conducted under the condition that all teammate ball catchers keep running after the catch, up to a try, with the optimum angle of the vector u in (8.3).
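A minimal sketch of the executable-pass test implied by (8.4) and Fig. 8.2 is given below; the parameter values for γ, ℓ, and ε are illustrative, not those used in the paper:

    public class HandPassCheck {
        static boolean isExecutable(double passAngleDeg, double catchTime,
                                    double releaseTime, double gamma, double ell,
                                    double eps) {
            // pass direction restricted to [-(90 - eps), 90 - eps] [deg]
            boolean angleOk = passAngleDeg <= 90 - eps && passAngleDeg >= -(90 - eps);
            // the ball is catchable only within [t_p, t_p + ell / gamma]
            boolean timeOk = catchTime >= releaseTime
                    && catchTime <= releaseTime + ell / gamma;
            return angleOk && timeOk;
        }

        public static void main(String[] args) {
            System.out.println(isExecutable(45.0, 0.8, 0.0, 10.0, 10.0, 5.0)); // true
            System.out.println(isExecutable(95.0, 0.8, 0.0, 10.0, 10.0, 5.0)); // false
        }
    }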

8.4 Improvements for Fast Implementation

In this section, we present three improvements to the method in [1]. The first improvement is a limitation of the range of the direction vector u in the player motion model (8.3) for the run play simulations. The second is related to the lower bound evaluation in the bounding operation of the B&B method for the optimization problem (8.1). The third is associated with the parallel computing of the hand-pass simulations with the ball motion model (8.4). Notably, we used Java SE Development Kit 11 [27] to implement this method in this study, although the previous program in [1] was implemented with Processing 3 [28].

8.4.1 Limitation for Angle of Run Play Simulations

In the first improvement, the angle range of the run play simulations is reduced from all directions to the attacking direction, owing to a characteristic of rugby sevens. That is, the directional angle of the vector u in the player motion model (8.3) is limited to the range [90, 270]. This improvement is expected to decrease the processing time for the run play simulations by half. Note that there is no theoretical guarantee that the method with this improvement outputs the same attacking plays as the method without it in every case. However, owing to the characteristics of rugby, the ball carrier rarely runs in the direction opposite to the attack during attacking plays. Moreover, for all pseudo formation examples in the validation experiment, we obtained the same optimum attacking plays with and without this improvement.

8.4.2 Lower Bound Modification for Bounding Operation

The processing time of the B&B method depends on the lower bound evaluation for sub-problems, particularly in cases using the best-first search. In the second improvement, this lower bound evaluation in the B&B method is replaced. In our previous work [1], the objective function value (8.1) at the time t was used as the lower bound, because it is monotonically non-decreasing with respect to time t ∈ [0, T(p)].


Contrarily, this improvement adds to the previous lower bound a lower bound on the time up to a try from the time point t, which can be obtained from the ball position at time t, the shortest distance to the goal line, and the upper bound on the speed of all teammate players, as follows:

L(p, t) = f(p, t) + (x_p(t) − x_G)/V*. (8.5)

Here, the function L(·) expresses the modified lower bound, and the function f(·) calculates the objective function value (8.1) up to time t in the attacking play p, which was used in [1]. The function x_p(·) and the constant x_G denote the position of the ball at time t in the attacking play p and the position of the goal line on the long axis of the court, respectively. The constant V* represents the upper bound of the speed of all teammate players. We can also easily verify that the two lower bounds L(·) and f(·) satisfy the following relation with respect to t ∈ [0, T(p)] for any executable attacking play p:

F(p) ≥ L(p, t) ≥ f(p, t). (8.6)

That is, the modified lower bound L(·) is tighter than the previous lower bound f(·). Therefore, this improvement is expected to eliminate unnecessary sub-problems more quickly and thus to reduce the processing time.
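The modified lower bound (8.5) is a one-line computation; the following Java sketch (our naming, not the authors' code) illustrates it:

    public class LowerBound {
        // f: accumulated objective value up to time t; xBall and xGoal: positions
        // on the long axis of the court; vMax: the speed upper bound V*.
        static double modifiedLowerBound(double f, double xBall, double xGoal,
                                         double vMax) {
            // L(p, t) = f(p, t) + (distance to goal line) / V*
            return f + Math.abs(xBall - xGoal) / vMax;
        }

        public static void main(String[] args) {
            // a play 30 m from the goal line, accumulated cost 4.0, V* = 10 m/s
            System.out.println(modifiedLowerBound(4.0, 70.0, 100.0, 10.0)); // 7.0
        }
    }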

8.4.3 Parallel Computing for Hand-Pass Simulations

In addition, we make a further improvement by applying parallel computing to the simulation of the hand-pass plays. As described in the previous section, the computational method [1] discretizes the angle of the directional vector ν from 90 − ε to −(90 − ε) [deg] with a width ζ [deg] to ease the search for executable passes for each teammate player. Furthermore, this method evaluates the optimum hand-passes for all teammate players with respect to the objective function (8.1) by simulating all run plays of teammate players from all catching positions, assuming they continue running with the maximal driving force in the same direction up to a try. Therefore, the hand-pass simulations account for a large proportion of the processing time. However, it is relatively easy to parallelize the hand-pass simulations with different (discretized) directional angles, and the effectiveness of doing so is expected to be relatively high even considering the influence of overheads. Therefore, we implemented the present program with this parallel computing by using the CompletableFuture class [29], which was added in Java 8 and enables us to build efficient asynchronous computations. In addition, we conducted the validation experiment described later on a 24-core 48-thread machine.
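A minimal Java sketch of this parallelization pattern with CompletableFuture is shown below; simulatePass() is a placeholder for the actual hand-pass and follow-up run simulation, and all parameter values are illustrative:

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class ParallelHandPass {
        static double simulatePass(double angleDeg) {
            // placeholder: objective value of the best play using this pass angle
            return Math.abs(angleDeg) / 90.0 + 1.0;
        }

        public static void main(String[] args) {
            double eps = 5.0, zeta = 1.0;               // illustrative values
            int n = (int) (2 * (90 - eps) / zeta) + 1;  // number of discretized angles
            List<CompletableFuture<Double>> futures = IntStream.range(0, n)
                    .mapToObj(i -> -(90 - eps) + i * zeta)
                    .map(angle -> CompletableFuture.supplyAsync(() -> simulatePass(angle)))
                    .collect(Collectors.toList());
            double best = futures.stream()
                    .mapToDouble(CompletableFuture::join)  // wait for all simulations
                    .min().orElse(Double.POSITIVE_INFINITY);
            System.out.println("best objective value over all pass angles: " + best);
        }
    }

Each discretized angle is simulated as an independent asynchronous task on the common fork-join pool, and the minimum over the joined results yields the optimum pass.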


8.5 Validation Experiments

In this section, we explain the validation experiment using four pseudo formation examples in rugby sevens. This experiment has two aims: (i) validating that there is no difference in the computed optimum attacking plays depending on the improvements proposed in this study and (ii) confirming the effectiveness of the proposed three improvements. Therefore, we used the same formation examples as in [1] for comparisons. In addition, we set all parameters of the computational method with or without the improvements to the same values as in [1]. Moreover, as described in the previous section, this validation experiment is conducted on a 24-core 48-thread machine [OS: CentOS Linux release 7.6.1810 (Core), CPU: AMD EPYC 7402 (24-core 2.8 GHz 128 MB) × 2, memory: 16 GB DDR4-3200 REG ECC (256 GB in total)]. The program was implemented in Java (jdk-11.0.7).

8.5.1 Target Examples

In this experiment, we apply the computational method to the four pseudo formation examples. Figure 8.3 describes the initial positions of all players. The initial velocities are set to 0 [m/s] for each player in each example.

[Fig. 8.3 Initial positions on the court for all examples [1]: (a) Example 1, (b) Example 2, (c) Example 3, (d) Example 4. The dots describe the initial positions of players. The teammate and the opposing players are colored blue and red, respectively.]


In Example 1, the lowest teammate player (colored blue in Fig. 8.3) on the short axis of the court is the first ball carrier. In Fig. 8.3, there are opposing players to the upper left of the ball carrier; however, there are none to the lower left or in the front area of the ball carrier. Additionally, the teammate players are located only on the upper side of the ball carrier. Therefore, an appropriate attacking play for this example is obviously a run play of the first ball carrier to the lower left side up to a try without any hand-passes. The first ball carrier in Example 2 is also the lowest teammate player on the short axis. There are two main differences from Example 1: (i) the ball carrier is distant from the other teammate players and (ii) an opposing player is located in front of the ball carrier. For such an example, one may consider an appropriate attacking play to be a run play of the first ball carrier up to a try, shoving or dodging the opposing player in front. On the contrary, valid attacking plays are expected to include hand-pass plays for Examples 3 and 4, where different positions are set for the opposing players and the same positions for the teammate players. For both examples, the third lowest teammate player on the short axis is set as the first ball carrier, with two teammate players on the lower side. For Example 3, a hand-pass play to the lowest teammate player is more appropriate than one to the second lowest, because there is no opposing player in front of him. Contrarily, it is preferable to pass to the second lowest teammate player in Example 4, because of the two opposing players in front of the lowest teammate player.

8.5.2 Optimum Attacking Plays Computed

Figure 8.4 describes the optimum attacking plays obtained by our method for all examples. Table 8.1 lists the evaluated values of the obtained optimum attacking plays. We verified that the results shown in this figure and table are obtained under all conditions related to the proposed improvements. Additionally, these results are the same as the ones described in [1], which were obtained from our previous program implementing our computational method without the proposed improvements.

8.5.3 Comparison of Processing Times

Table 8.2 shows the comparison of the average processing time for each example over ten trials under the following four conditions: (i) without any improvement, (ii) with the first proposed improvement related to the angle range of the run play simulations, (iii) with the first improvement and the second improvement, which is relevant to the lower bound evaluation of the B&B method, and (iv) with all three improvements, including the parallel computing of the hand-pass simulations. From this table, we can observe that a substantial acceleration is achieved with condition (ii) compared with the condition without improvement (i.e., condition (i)).


[Fig. 8.4 Optimum attacking plays shown on the court for all examples, (a) Example 1, (b) Example 2, (c) Example 3, (d) Example 4, at the time when the last ball carriers arrived at a goal line. The dots describe the positions of the ball carriers and the center points of the extended reachable areas [1] of the non-ball carriers, while the circles show the border lines of the extended reachable areas of the non-ball carriers. Those of the teammate and the opposing players are colored blue and red, respectively. The yellow lines indicate the orbits of the ball.]

Table 8.1 Evaluated values for optimum attacking play

Example | Shortest time up to a try (s) | Expectation count of tackles (people) | Objective function value
1 | 5.65 | 1.00 | 7.65
2 | 4.30 | 1.00 | 6.30
3 | 6.85 | 3.00 | 12.85
4 | 5.75 | 4.00 | 13.75

Table 8.2 Comparison of average processing times with ten runs (seconds)

Condition | Example 1 | Example 2 | Example 3 | Example 4 | Average
(i) | 19.375 | 4.315 | 68.804 | 1786.619 | 469.778
(ii) | 8.200 | 2.158 | 17.614 | 339.955 | 91.982
(iii) | 8.068 | 2.064 | 17.550 | 30.296 | 14.495
(iv) | 3.072 | 1.720 | 3.545 | 4.107 | 3.111


Although there are only small differences between conditions (ii) and (iii) in Examples 1–3, the processing time with condition (iii) is reduced to roughly one-tenth of that with condition (ii) for Example 4. This is because there are a large number of branches in the optimization process for attacking plays in Example 4. Comparing conditions (iii) and (iv), the parallel computing of the hand-pass simulations contributes substantially to accelerating the computation. Additionally, we can observe that the average processing time with condition (iv) is reduced to less than 1/150 of that with condition (i). Furthermore, the average processing times for all examples are several seconds under condition (iv); thus, this method is applicable to replay scenes during live streaming and/or live broadcasting. In addition, this suggests that it may be performed in real time and/or semi-real time by using a suitable multiple-machine environment.

8.6 Conclusions

In this study, we introduced three improvements to the computational method of optimum attacking plays in rugby sevens to accelerate its computational process. These improvements were validated via the same pseudo formation examples used in our previous work [1]. The results showed that they can adequately accelerate the computational process. There are two ongoing projects by the authors: (i) additional validations with various formation examples extracted from real matches and (ii) model improvement by considering kick plays, including kick-passes. A part of their results has already been presented in [30]. Moreover, additional work includes (iii) improvement of the implementation for further acceleration, (iv) more suitable parameter settings based on statistical approaches with real data, and (v) improvement of visualization by reproducing realistic 3D CG and/or drawing on real videos.

Acknowledgements We are grateful to S. Ujihara and H. Asao, formerly at our laboratory, for their cooperation and contributions to this study. We also thank S. Ryuzaki and R. Mizuno in our laboratory for their advice.

References

1. Yashiro, K., Nakada, Y.: Computational method for optimal attack play consisting of run plays and hand-pass plays for seven-a-side rugby. In: Proceedings of 22nd IEEE International Symposium on Multimedia, pp. 145–148, IEEE, Naples, Italy (2020)
2. Land, A.H., Doig, A.G.: An automatic method of solving discrete programming problems. Econometrica 28(3), 497–520 (1960)
3. Hromkovič, J.: Algorithmics for Hard Problems: Introduction to Combinatorial Optimization, Randomization, Approximation, and Heuristics. Springer, Berlin Heidelberg (2010)


4. Barr, M.J., Sheppard, J.M., Gabbett, T.J., Newton, R.U.: The effect of ball carrying on the sprinting speed of international rugby union players. Int. J. Sports Sci. Coach. 10(1), 1–9 (2015)
5. Ross, A., Gill, N., Cronin, J.: Match analysis and player characteristics in rugby sevens. Sports Med. 44(3), 357–367 (2014)
6. Hooper, J.J., James, S.D., Jones, D.C., Lee, D.M., Gál, J.M.: The influence of training with heavy rugby balls on selected spin pass variables in youth rugby union players. J. Sci. Med. Sport 11(2), 209–213 (2008)
7. Holmes, C., Jones, R., Harland, A., Petzing, J.: Ball launch characteristics for elite rugby union players. In: Moritz, E.F., Haake, S. (eds.) The Engineering of Sport 6, pp. 211–216. Springer, New York (2006)
8. Gabbett, T.J.: Science of rugby league football: a review. J. Sports Sci. 23(9), 961–976 (2005)
9. Johnston, R.D., Gabbett, T.J., Jenkins, D.G.: Applied sport science of rugby league. Sports Med. 44, 1087–1100 (2014)
10. Fujimura, A., Sugihara, K.: Geometric analysis and quantitative evaluation of sport teamwork. Syst. Comput. Jpn. 36(6), 49–58 (2005)
11. Aurenhammer, F., Klein, R., Lee, D.T.: Voronoi Diagrams and Delaunay Triangulations. World Scientific Publishing Company, London (2013)
12. Sano, Y., Nakada, Y.: Improving prediction of pass receivable players in basketball: simulation-based approach with kinetic models. In: Proceedings of the 10th International Symposium on Information and Communication Technology, pp. 328–335, ACM, Hanoi—Ha Long Bay, Vietnam (2019)
13. Takahashi, S., Haseyama, M.: A note on network analysis based detection of important player and similar scenes in soccer videos. ITE Tech. Rep. 38(51), 1–4 (2014) (in Japanese)
14. Mimura, T., Nakada, Y.: Quantification of pass plays based on geometric features of formations in team sports. In: Proceedings of the 10th International Symposium on Information and Communication Technology, pp. 306–313, ACM, Hanoi—Ha Long Bay, Vietnam (2019)
15. Sumpter, D.J.T.: Soccermatics: Mathematical Adventures in the Beautiful Game. Bloomsbury Sigma, London (2016)
16. Motoi, S., Misu, T., Nakada, Y., Yazaki, T., Kobayashi, G., Matsumoto, T., Yagi, N.: Bayesian event detection for sport games with hidden Markov model. Pattern Anal. Appl. 15(1), 59–72 (2012)
17. Kobayashi, G., Hatakeyama, H., Ota, K., Nakada, Y., Kaburagi, T., Matsumoto, T.: Predicting viewer-perceived activity/dominance in soccer games with stick-breaking HMM using data from a fixed set of cameras. Multimed. Tools Appl. 75(6), 3081–3119 (2016)
18. Yamamoto, R., Abe, T., Nakada, Y.: Visualization method of relationship among team sports formation components in shoot scenes. In: Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence, pp. 1299–1306, IEEE, Honolulu, HI, USA (2017)
19. Gudmundsson, J., Horton, M.: Spatio-temporal analysis of team sports. ACM Comput. Surv. 50(2), article no. 22, 1–34 (2017)
20. Lord, F., Pyne, D.B., Welvaert, M., Mara, J.K.: Methods of performance analysis in team invasion sports: a systematic review. J. Sports Sci. 38(20), 1–12 (2020)
21. Fister, I., Fister, I., Jr., Fister, D.: Computational Intelligence in Sports, 1st edn. Springer, Berlin (2019)
22. Araújo, D., Couceiro, M.S., Seifert, L., Sarmento, H., Davids, K.: Artificial Intelligence in Sport Performance Analysis, 1st edn. Routledge, New York (2021)
23. Agile Sports Technologies, Inc.: Hudl Sportscode, https://www.hudl.com/products/sportscode. Accessed 26 Nov 2021
24. Coach Logic: Coach Logic, https://www.coach-logic.com/. Accessed 26 Nov 2021
25. iSportsAnalysis Ltd.: Rugby Video Analysis, https://www.isportsanalysis.com/rugby-videoanalysis.php. Accessed 26 Nov 2021
26. Rugby Assistant: https://www.rugbyassistant.org/. Accessed 26 Nov 2021
27. Oracle.com: Java SE Development Kit 11 Downloads, https://www.oracle.com/java/technologies/javase-jdk11-downloads.html. Accessed 26 Nov 2021


28. Processing.org: https://processing.org/. Accessed 26 Nov 2021
29. Oracle.com: CompletableFuture (Java Platform SE 8), https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html. Accessed 26 Nov 2021
30. Ryuzaki, S., Yashiro, K., Nakada, Y.: Computational method of optimal kick-pass plays considering run plays after catches in seven-a-side rugby. In: Proceedings of the 2021 IEEE Multimedia Big Data, pp. 40–48, IEEE, Taichung, Taiwan (2021)

Chapter 9

A Review of Various Sign Language Recognition Techniques

Rashmi S. Gaikwad and Lalita S. Admuthe

Abstract Sign language gesture recognition is a vast area of research, as it is a means of communication between people without hearing disability and deaf and dumb people. Many researchers have contributed to developing systems which detect the signs of letters and words and generate a meaningful output that helps the two communities interact with each other. This paper is a review of various sign language recognition techniques which have been developed to solve the problem of sign language recognition, along with their advantages and disadvantages. The aim of this research is to find the best possible system to implement for sign language recognition, one which has the highest accuracy and requires minimum or no hardware.

9.1 Introduction

Countries all over the world have different languages, so they also have different sign languages. Some of them are the American Sign Language (ASL), Chinese Sign Language (CSL) and Indian Sign Language (ISL). A sign language recognition (SLR) system bridges the gap between the deaf and dumb community and normal people. The deaf and dumb people know the sign language and can communicate through signs, but normal people do not understand sign language. So, a problem arises when the deaf and dumb people want to convey some information through sign language but normal people do not understand what the person performing the signs is trying to say. To overcome this problem, a communication system is required which not only translates the sign language but is intelligent enough to generate meaningful sentences and convey the information that the user is trying to communicate. This will not only help the deaf and dumb people but will also help people without hearing disability to understand the sign language, and they will be required to make minimal signs to convey information to the deaf and dumb people.

R. S. Gaikwad (B) · L. S. Admuthe DKTE Society’s Textile & Engineering Institute, Ichalkaranji, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_9


Researchers have tried to develop systems to recognize signs performed in different sign languages. To design the most accurate SLR system, the data acquisition method and the processing module play a very important role. Every system has its own advantages and disadvantages. In this research, different ways of data acquisition and different SLR techniques are discussed, and the accuracy, advantages and drawbacks of all the systems are tabulated.

9.2 Data Acquisition and Preprocessing Methods

There are many data acquisition methods that can be used to generate the dataset for the implementation of an SLR system. The main aim of the data acquisition process is to obtain the distinct signs made by the signer to communicate. The dataset can consist of static signs or dynamic signs. Static signs are stationary and do not involve movement, whereas dynamic signs involve movement. So, the SLR system must be able to detect both static and dynamic signs successfully. In [1], SLR systems are classified into two types: glove-based systems and vision-based systems. Glove-based systems require the user or signer to wear specific hand gloves implanted with sensors to perform signs every time. Vision-based systems are based on a machine learning process where artificial intelligence does the work of recognizing signs and the user is not required to wear any type of gloves. The basic device used for capturing signs performed by a signer is a camera. Today's cameras come with higher resolution and support many video formats. Morphological operations, filtering and background subtraction techniques are then applied to the obtained image frames. Signs can be performed by wearing hand gloves with specific sensors like motion sensors, accelerometer and gyroscope in [2], the Kinect sensor used in [1, 3, 4], the Leap motion sensor used in [5], etc. In some research work, signs are directly performed with bare hands and the user is not required to wear gloves. After the video or images of signs are captured, they can be converted to different colour model formats like Red, Green, Blue (RGB) in [6], Hue, Saturation, Value (HSV) in [8, 9] or Hue, Saturation, Intensity (HSI) in [10]. Table 9.1 lists the data acquisition devices and techniques used by the researchers.

Table 9.1 Data acquisition devices and techniques

Author | Data acquisition
[6, 10, 14, 15, 18, 20–29] | Bare hands and web camera
[2, 7, 11, 30–32] | Hand gloves with sensors and gyroscope
[1, 3, 4, 12] | Kinect sensor
[5, 8] | Leap motion controller


9.3 Recognition Methods

Sign language recognition is a vast area of research, and many researchers have devised techniques, each with its own advantages and disadvantages over the others. Some of the major research work in this field is reviewed below.

9.3.1 Combinational Neural Networks

In [6], sign language recognition is done using inflated 3-D convolutional networks, i.e. I3-D ConvNets. The ChaLearn249 dataset is used, which consists of sign language images and videos. The system is tested on RGB data, optical flow data and both combined; the accuracies are found to be 57.73%, 57.68% and 64.44%, respectively. In [11], deep learning using a CNN is used to locate the signer's hands with 5 layers of CNN and average pooling. The activation function used is CReLU. A long short-term memory (LSTM) neural network is used for encoding and decoding input frames of variable length by obtaining the temporal structure information. The recognition accuracy of the network used in this paper is 99%. Reference [12] proposed a 3-D convolutional neural network sign language recognition system that extracts spatio-temporal information of signs. The region of interest (ROI) is the part of an image frame which contains the information about the gestures; all the background is subtracted from the image frames obtained from the signer video. The attention-based mechanism is a training mechanism used to train the 3-D CNN by feeding the network with spatio-temporal data of images. The work is focused on large vocabulary Chinese sign language recognition. This system is only implemented for static images, and only the signs of words are detected. An accuracy of 94.3% is achieved using this technique. Reference [13] proposed a real-time traffic sign recognition system using Faster RCNN and MobileNet. The limitations of colours and space, which vary from sign to sign, have been eliminated by using the RGBN colour space and the detection of contours and centres of traffic signals. An accuracy of 84.5% and a recall of 94% are achieved using this method. Reference [14] proposed an American Sign Language recognition system using recurrent neural networks and the leap motion controller. The hand position, fingertip positions and the angles between different fingers whilst performing signs are considered for feature extraction; for this purpose the leap motion controller is used, and an accuracy of 91.51% is achieved. Reference [15] used a fully convolutional neural network for the detection and recognition of traffic signs. The work is divided into two stages: in the first stage, the size and orientation of the traffic sign are detected, and in the second stage, the text of the traffic sign is detected. A precision of 93.5% and a recall of 94% are achieved using this technique. Reference [16] proposed a method to detect signs of English letters from A to Z in American Sign Language by using neural networks. The video of a signer performing


signs of different letters is recorded, and image frames of the video are generated and stored. Filtering and background subtraction techniques are applied to the obtained image frames. These background-subtracted images are then subjected to a feature extraction stage where a feature vector for each sign is generated as a single row–column matrix of N elements. This acts as a database for training the neural network. Once the neural network is trained, input images of signs can be fed to it, and the network compares the test image with the database and produces the output in the form of text. The results in this paper show that the speed of processing is improved by the use of the neural network and that the dataset can be increased to any number of variables. Figure 9.1 shows the American Sign Language signs for the letters A to Z.

Fig. 9.1 American sign language for alphabets A–Z [16]


Reference [10] compared the efficiency of different types of neural networks in Arabic Sign Language gesture recognition for static as well as dynamic signs and showed that fully recurrent neural networks have the highest accuracy and minimum error rate. The features of each image in the database are calculated by determining the segmented colour regions which are taken from the fingertips and their relative positions with respect to the wrist and to each other. The signer is required to wear a special coloured glove to separate the signer's hand from the background. A database of signs for 28 letters of Arabic Sign Language is generated for training the neural network, and the test image is compared with the database and signs are detected. The accuracy of recognition using this technique is found to be 95.11%. A convolutional neural network in combination with long short-term memory (LSTM) in [25] is the most recent development in sign language recognition, where the data is collected and signs are recognized using OpenCV and computer vision. This method achieved an accuracy of 95.5%. In [18], reinforcement learning using a spatial–temporal CNN and bidirectional LSTM is used to recognize signs of words using the CSL Split II and RWTH-PHOENIX-Weather 2014 datasets; here, the word error rate is 28.6%. In [31], a dynamic hand gesture recognition system is implemented by using a fusion of 1-D and 2-D CNNs with a temporal convolutional network. Sixteen features of each sample are generated using data gloves.
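To make the preceding discussion concrete, the following self-contained Java sketch illustrates two elementary building blocks that the CNN-based recognizers above stack into deep networks, a 2-D convolution followed by a ReLU activation; it is a didactic toy of our own construction, not any of the reviewed systems, and the filter values are arbitrary:

    public class ConvDemo {
        // valid 2-D convolution of an image with a kernel, followed by ReLU
        static double[][] conv2dValidRelu(double[][] img, double[][] k) {
            int h = img.length - k.length + 1, w = img[0].length - k[0].length + 1;
            double[][] out = new double[h][w];
            for (int i = 0; i < h; i++)
                for (int j = 0; j < w; j++) {
                    double s = 0;
                    for (int a = 0; a < k.length; a++)
                        for (int b = 0; b < k[0].length; b++)
                            s += img[i + a][j + b] * k[a][b];
                    out[i][j] = Math.max(0, s);  // ReLU activation
                }
            return out;
        }

        public static void main(String[] args) {
            double[][] img = {{0,0,1,1},{0,0,1,1},{0,0,1,1},{0,0,1,1}};
            double[][] edge = {{-1,1},{-1,1}};  // crude vertical-edge filter
            for (double[] row : conv2dValidRelu(img, edge))
                System.out.println(java.util.Arrays.toString(row));
        }
    }

In a trained CNN the filter values are learned from data rather than hand-picked, and many such feature maps are combined across layers.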

9.3.2 Speech Translating Gloves

Reference [2] proposed a method to recognize Indian sign language gestures with the use of flex sensors, a gyroscope, an accelerometer and a microcontroller. The hardware and software platform, i.e. microcontroller programming and interfacing, is realized using Arduino. In this system, first a database of signs of alphabets and words in Indian sign language is created. A hand glove implanted with flex sensors, a gyroscope, an accelerometer and a microcontroller captures the motion of the hand and sends the readings to an Android application, which compares the data of the performed sign with the database and produces the output in the form of speech. The flex sensors calculate the bend in each finger whilst performing the sign, the gyroscope calculates the angular movement in space, and the accelerometer calculates the orientation of the hands in space. The data from these three devices is given to the microcontroller, which determines the position and hand gesture. The data from the microcontroller is sent to the Android phone via Bluetooth. The advantage of this system is that it integrates the results of sign language recognition with a smart phone, making it user friendly and portable, but its major disadvantage is that the user is required to wear gloves all the time. If the database increases, the accuracy of the system will decrease, as machine learning is not used to train the gloves.


9.3.3 Support Vector Machine Classifier

Reference [5] presented a framework to recognize natural signs and finger spellings using the Leap Motion sensor. A support vector machine classifier has been used to differentiate between natural and finger spelling gestures. This system achieves an overall accuracy of 63.57% in real-time recognition of sign gestures, which is very low compared to other recognition systems. Reference [17] proposed a vision-based multi-feature support vector machine (SVM) classifier to resolve the recognition problems by video sequence appearance modelling. The recognition of Chinese Sign Language signs for the English letters A to Z, as well as some Chinese words, is done in this paper. A video is recorded of a signer performing signs of letters in Chinese sign language, and a total of 195 images for each letter were captured. The feature extraction stage is divided into two parts: temporal characteristics extraction and spatial characteristics expansion. The boundaries of the sign shape are determined using Fourier descriptors. The results in this paper show that the recognition rate is between 90 and 99% using the SVM. In [30], a combination of ANN, k-nearest neighbour and support vector machine is used for Italian sign language recognition. This system achieves an accuracy of 93.1%.

9.3.4 Hidden Markov Model and K-Nearest Neighbour

Reference [19] describes an Arabic sign language recognition technique using the DG5-V Hand data glove and the Polhemus G4 motion sensor. The data glove consists of five bend sensors and an embedded accelerometer. Two data sets were obtained from two different users, one by using the Polhemus G4 motion sensor and the other by using the DG5-V glove. All the sensor readings belonging to each word in a sentence were labelled by synchronizing a camera with the gloves to detect the boundary between two words. Feature extraction is done by a sliding-window-based approach, and features like the mean, standard deviation, covariance and entropy are extracted. The classification and recognition of signs is done by using a Hidden Markov Model (HMM) and a modified k-nearest neighbour (MKNN) classifier. The results obtained by the HMM and KNN are compared at the end, and the two data sets are tested. The accuracy of recognition of signs obtained by the motion tracker is 97%, but data gloves with a motion sensor are required to be worn by the signer.
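As a didactic illustration of the KNN half of such classifiers, the following self-contained Java sketch classifies a feature vector (e.g., window-level mean/standard-deviation features as above) by majority vote among its k nearest training vectors; the feature values and labels are toy data of our own:

    import java.util.Arrays;
    import java.util.Comparator;

    public class Knn {
        static int classify(double[][] train, int[] labels, double[] x, int k) {
            Integer[] idx = new Integer[train.length];
            for (int i = 0; i < idx.length; i++) idx[i] = i;
            // sort training indices by Euclidean distance to the query vector
            Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> dist(train[i], x)));
            int[] votes = new int[Arrays.stream(labels).max().orElse(0) + 1];
            for (int i = 0; i < k; i++) votes[labels[idx[i]]]++;  // vote of k nearest
            int best = 0;
            for (int c = 1; c < votes.length; c++) if (votes[c] > votes[best]) best = c;
            return best;
        }

        static double dist(double[] a, double[] b) {
            double s = 0;
            for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
            return Math.sqrt(s);
        }

        public static void main(String[] args) {
            double[][] train = {{0.1, 0.2}, {0.15, 0.22}, {0.9, 0.8}, {0.85, 0.75}};
            int[] labels = {0, 0, 1, 1};
            System.out.println(classify(train, labels, new double[]{0.88, 0.79}, 3)); // 1
        }
    }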


9.3.5 Surface Electromyography, Accelerometer and Gyroscope Sensors

Reference [20] explores Chinese sign language recognition based on surface electromyography, accelerometer and gyroscope sensors. The classification abilities of the three sensors and their combinations for three common sign components, i.e. one- or two-handed, hand orientation and hand amplitude, were computed first, and then an optimized tree-structure classification framework was developed for Chinese sign language recognition. Eight subjects participated in this research, and recognition experiments under dissimilar testing conditions were realized on a target set consisting of 150 sub-words of Chinese sign language. Overall recognition accuracies of 94.31% and 87.02% were attained for the 150 CSL sub-words in the user-specific test and the user-independent test, respectively. Reference [21] also proposed a sign language recognition system using the surface electromyogram (sEMG) obtained from the user's forearm and a tri-axial accelerometer placed on the user's wrist. The sEMG signals are obtained with non-invasive surface electrodes from the skin above the muscles involved during gesturing. This paper investigated the classification power of one feature, the sample entropy combined with empirical mode decomposition (EMD), also called intrinsic mode entropy (IMEn), for the classification of 60 GSL signs when applied to sEMG and 3-D accelerometer data. Experimental results show that IMEn applied to sEMG and 3-D accelerometer data for 60 GSL signs from three signers provides effective GSL gesture recognition with low intra- and inter-signer performance variability. The accuracy of this system is between 77 and 99%.
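The sample entropy statistic underlying IMEn can be sketched as follows (standard textbook definition with embedding length m and tolerance r under the Chebyshev distance; the signal and parameter values are illustrative, and [21] applies the statistic to EMD intrinsic mode functions rather than the raw signal):

    public class SampleEntropy {
        static double sampEn(double[] x, int m, double r) {
            long b = countMatches(x, m, r);      // template matches of length m
            long a = countMatches(x, m + 1, r);  // template matches of length m + 1
            return -Math.log((double) a / b);    // low value = regular signal
        }

        static long countMatches(double[] x, int m, double r) {
            long count = 0;
            for (int i = 0; i + m <= x.length; i++)
                for (int j = i + 1; j + m <= x.length; j++) {
                    double max = 0;  // Chebyshev distance between the two templates
                    for (int k = 0; k < m; k++)
                        max = Math.max(max, Math.abs(x[i + k] - x[j + k]));
                    if (max <= r) count++;
                }
            return count;
        }

        public static void main(String[] args) {
            double[] signal = {0.1, 0.4, 0.2, 0.5, 0.1, 0.4, 0.2, 0.5, 0.1, 0.4};
            System.out.println(sampEn(signal, 2, 0.15));  // near 0 for this periodic toy
        }
    }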

9.3.6 Dynamic Time Warping Algorithm

Reference [3] proposed a method for static and dynamic hand gesture recognition using the dynamic time warping (DTW) algorithm. A Kinect sensor is used to capture the signs. The centre of the palm is taken as the centre of the largest circle in the contour, and the fingertips are localized by using the k-curvature algorithm. The hand information is stored in a fixed-size First In First Out (FIFO) list, and a database is created as a set of reference gestures. The captured gesture is compared with the reference gestures using the DTW algorithm, which calculates the dissimilarity between two data series obtained at different times. The minimal cost between the two series is calculated by using a matrix containing Euclidean distances at aligned points over the two sequences. The algorithm used in this system has an accuracy of 92.4% for one or multiple hand static and dynamic gestures. Problems were encountered during this research when the user wore a bracelet, and the system requires training twice at different depths.
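A minimal self-contained sketch of the DTW cost computation described above, over two 2-D point sequences (e.g., palm-center trajectories), is given below; it is a textbook implementation, not the code of [3]:

    public class Dtw {
        static double dtw(double[][] s, double[][] t) {
            int n = s.length, m = t.length;
            double[][] d = new double[n + 1][m + 1];
            for (double[] row : d) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
            d[0][0] = 0;
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= m; j++) {
                    // Euclidean distance between the two aligned 2-D points
                    double cost = Math.hypot(s[i - 1][0] - t[j - 1][0],
                                             s[i - 1][1] - t[j - 1][1]);
                    d[i][j] = cost + Math.min(d[i - 1][j - 1],
                                     Math.min(d[i - 1][j], d[i][j - 1]));
                }
            return d[n][m];  // minimal alignment cost = dissimilarity
        }

        public static void main(String[] args) {
            double[][] captured  = {{0, 0}, {1, 1}, {2, 2}};
            double[][] reference = {{0, 0}, {1, 1}, {1, 1}, {2, 2}};
            System.out.println(dtw(captured, reference));  // 0.0: same path, warped
        }
    }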


9.3.7 Transition Movement Model

Reference [22] proposed transition movement models (TMMs) to handle the transition parts between two adjacent signs. For tackling the mass of transition movements due to a large vocabulary size, a temporal clustering algorithm based on dynamic time warping (DTW) is proposed to dynamically cluster them, and an iterative segmentation algorithm is used for automatically segmenting transition parts from continuous sentences and training the TMMs together with the sign models. Transition movements are related to the end of the preceding sign and the start of the following sign, so TMMs have many identical and similar clusters. Clustering transition movements reduces the amount of data and also avoids the inadequacy of training data. This is useful for large vocabulary continuous SLR and gives an accuracy of 94.3%.

9.3.8 Image Processing and SLFG Module

Reference [23] described real-time Indian sign language recognition using image processing techniques. In the data acquisition stage, images of the user's hand are captured instead of video. The user is required to wear a black wrist band to indicate the end of the palm, and the background is kept black to reduce shadows. The captured images are resized and converted to black and white images so that an edge detection technique can be applied. The feature extraction and testing process consists of scanning the edge image to locate the tip positions and the reference point at the bottom of the palm. Two scans are performed, one from left to right and the other from right to left, to determine feature points in the binary number system. A binary number is determined by this process for each sign, and these values are stored as a database in the system. The heights of the fingers and the angles between the lines joining the feature points with the reference point are also calculated to create the database. In the testing phase, the test image is simply compared with the stored database, i.e. the binary values from the database and those of the test image are compared, and the output is given in the form of the corresponding text. This method has a sign recognition accuracy of 98.125%, but it is applicable only to static images; also, as the number of signs increases, it poses limitations on the system. Reference [24] proposed a dynamic sign language recognition system for smart home interaction. The proposed system makes use of an image processing module for feature extraction of the signs performed by a signer in a video. 3D histograms of gradient orientation descriptors are used to represent features. To cluster the visual words, the k-means++ method is used, and gesture classification is done using bags of features (BOFs) and a nonlinear support vector machine. Meaningful sentences are formed by the use of a stochastic linear formal grammar (SLFG) module.


This system has not been applied to real-time videos, and it can be extended to sign language communication in real time. The accuracy of this system is 98.65%.

9.3.9 Principal Component Analysis

Reference [26] describes a method for gesture recognition of static signs using Principal Component Analysis (PCA). Image acquisition is done by using a camera, and signs were recorded. The image frames are first converted from RGB to YCbCr, a hand segmentation step in which the hand image is subtracted from the background by skin colour segmentation. The feature extraction and gesture recognition technique used in this paper is PCA. The first stage is the training stage, in which the images are mapped to the eigenspace and the database is generated; the test images are then compared with this database. The application of this system is in human–robot interaction for medical purposes, but the system has a number of limitations: it does not support dynamic gestures, and it cannot recognize signs performed against a light-coloured background. Considering these drawbacks, the accuracy of recognition is 91.25%. Reference [27] discussed two approaches, image-based and sensor-based, for Arabic sign language recognition of only the Arabic alphabet and compares the results and accuracies of both approaches. This paper also explains the advantages and disadvantages of different types of neural networks which can be used for sign language recognition.

9.3.10 3-D Contour Model and Pixel Classifier

Reference [1] presented a 3-D contour model for hand gesture recognition using a Kinect sensor. A pixel classifier is used to recognize the different parts of a hand. The Kinect sensor provides both RGB and depth information. Using the training samples, a database of hand contours is created. The 3-D hand contour model with labelled information provides a simplified hand model to facilitate building a real-time bare-hand-controlled interface. However, this system has two drawbacks: first, segmenting the hand from a long arm or a body is not handled properly by the proposed hand extraction and hand parts classification, so the form of the camera set-up is limited; second, the accuracy of the contour model is limited by the classification result of the hand parts. This system is a desktop application, and illumination changes of the background do not affect the recognition system because of the use of the Kinect camera. It gives an accuracy of 74.65%.


9.3.11 Self-Organizing Maps

Reference [28] proposed a human posture recognition system for video surveillance using different classifiers: k-means, fuzzy c-means, multilayer perceptron, self-organizing maps and neural networks. Different videos for different human postures were recorded with different background conditions, and each video was tested using all the classifiers mentioned above. A comparison of the sign detection accuracy at the end of the paper indicates that the self-organizing maps and multilayer perceptron have the highest accuracy, followed by the neural network. All the test conditions were applied to static signs, not dynamic ones, and the background was kept constant. Reference [9] described how the problem of analysis and validation of a sentence is resolved using semantic context. This system is applied to Italian sign language recognition. A video is processed in order to obtain a feature vector by standard image processing algorithms. The classification of a single sign is done using a self-organizing map (SOM) neural network, which provides a list of probable meanings. A common-sense module chooses the right meaning depending on the context of the whole sentence, and sentence generation achieves an accuracy of 97.5%.

9.3.12 General Fuzzy Minmax and Grammar Rules

Reference [29] describes the use of general fuzzy MinMax neural networks and grammar rules for SLR. A video is recorded of the signer performing different signs, and frames are extracted from the video. The individual frames which represent the exact signs of words in ISL are extracted, and the frames which are disoriented are discarded. For feature extraction, Fourier descriptors are used. The training and learning part is carried out by a general fuzzy MinMax (GFMM) neural network in MATLAB. After training, the system goes through the same process during the testing phase: gestures are identified in text form, and by using the constructed grammar, meaningful sentences are formed with an accuracy of 92.92%. Reference [8] proposed a new fuzzy hand posture model, in which a modified circular fuzzy neural network (CFNN) architecture is described to reduce the training time. The advantage of this system is that the robustness and reliability of hand gesture identification are improved, and the complexity and training time of the neural network are significantly decreased, with an accuracy of 96%. The authors state that a limitation of this system is the need for a homogeneous background, and that the reliability can be increased by applying the fault-tolerant hand posture and hand gesture set definition concept. Fuzzy clustering is also used in [7], which is the state-of-the-art work for detecting hand gestures using EMG, with a test accuracy of 100%.


9.4 Analysis and Discussions

It is observed that most of the research work in sign language recognition has been done on American Sign Language, Chinese Sign Language, Italian Sign Language and Arabic Sign Language; much less work has been done on the recognition of Indian Sign Language. Vision-based systems are more advantageous than glove-based systems because the signer is not required to wear specific hardware like hand gloves with sensors. From the research, it can be seen that data acquisition is mostly done using a web camera, and the input images obtained are then subjected to preprocessing for obtaining prominent features of hand gestures. Videos or images obtained by using a leap motion controller or Kinect sensor are more accurate and prominent, but these devices are costly. Deep learning with 3-D CNNs is the most widely used recognition method, as its recognition accuracy is close to 99%; the Faster RCNN method has also achieved higher accuracy compared to other methods. The general fuzzy MinMax neural network in fusion with grammar rules is used to recognize words and generate sentences. SVMs and HMMs are also popular methods for sign language recognition but are more complex compared to CNNs. The overall summary of the performance parameters and detection accuracies of the different recognition methods and the datasets used is given in Table 9.2.

9.5 Conclusion

From the literature review, it is observed that relatively little research work has been carried out on Indian sign language recognition. Sign language recognition systems suffer from background conditions, and they work more accurately if artificial intelligence techniques like Hidden Markov Models, neural networks, fuzzy logic, etc. are used. Implementation of a CNN could be done using OpenCV and computer vision for sign language recognition. The effects of occlusion and co-articulation problems in image frames obtained from the video of signs need more attention during the data acquisition phase. Most of the research work has been done on sign language recognition of alphabets, numerical digits or words, but very little research work has been carried out on sentence recognition. To aid hearing-impaired people with a means of communication and eliminate the need to wear any mechanical gloves or sensors to perform signs, an intelligent system must be designed which will recognize the signs of words from an input video and generate meaningful sentences accurately.


Table 9.2 Comparison of results of different recognition methods

Reference no. | Recognition method | Performance | Data set | Remarks
[7] | Fuzzy c-means clustering algorithms | 100% test accuracy | EMG dataset consisting of 10 classes of hand gestures | Classification of EMG hand gestures
[11] | 3-D CNN and faster RCNN | Accuracy = 99% | Global, local and 3-D features (40 common words and 10,000 sign language images) | Uses RGB data stream
[24] | SLFG module and SVM | Accuracy = 98.65% | Six dynamic gestures consisting of circular, goodbye, rectangular, rotate, triangular and wait gestures (total 160) | Implemented for smart home interaction
[23] | Edge detection | Accuracy = 98.125% | Set of 320 images used for training without angular measurements and set of 160 images (5 images of each sign) with angular measurements | Recognizes the letters of Tamil sign language
[9] | SOM and neural network | Correctly translated sentences: 83.3% for (A), 82.5% for (B) | (A) a test set of 30 videos of sentences using 20 signs; (B) 80 videos of sentences using 40 signs | Italian sign language sentence recognition capturing lip and hand movements
[19] | HMM and k-nearest neighbour | Accuracy = 97% | Uses two datasets: the first was collected using DG5-V Hand data gloves and the second using the Polhemus G4 tracker. Both datasets consist of 40 Arabic sentences with 80-word perplexity | Arabic sign language hand gesture recognition
[8] | Circular fuzzy neural network | Accuracy = 96% | Predefined hand gestures stored in the database as linguistic variables | Recognition of hand gestures
[25] | Convolutional neural network (CNN) and long short-term memory (LSTM) | Accuracy = 95.52% | Captured images using OpenCV | American sign language recognition using computer vision
[10] | Recurrent neural network | Accuracy = 95.11% | 900 coloured images representing 30 different hand gestures used as a training set and another 900 images used as a test set | Arabic sign language (ArSL) hand gesture recognition
[12] | 3-D CNN and LSTM | Accuracy = 94.3% | A Chinese sign language dataset with 500 categories of signs; the other is the ChaLearn14 gesture dataset | Isolated sign language recognition
[22] | Transition movement models and DTW | Accuracy = 94.3% | Dataset consists of 3000 sentence samples with 750 different sentences over a vocabulary of 5113 signs | Chinese sign language recognition
[4] | Finger earth mover's distance (FEMD) and Kinect sensor | Accuracy = 93.9% | 10-gesture data set | Distance metric calculation based on part-based representation
[30] | Artificial neural network (ANN), k-nearest neighbours and SVM | Accuracy = 93.91% | Use of wearable electronics with sensors for dataset generation. Dataset consists of ten signs in Italian sign language performed by 17 signers | Automatic signer-independent recognition of Italian sign language
[15] | Deep learning neural network | Precision = 93.5%, Recall = 94% | Publicly available traffic sign data set, i.e. Traffic guide panel data set | Recognition of Chinese and English traffic signs
[29] | General fuzzy MinMax neural network | Accuracy = 92.92% | Dataset consists of Indian sign language containing 4–5 sentences, each having 2-to-3-word gestures | The video of a full sentence gesture is converted to text
[3] | Dynamic time warping algorithm | Accuracy = 92.4% | 55 static and dynamic gestures collected by the Windows Software Development Kit 1.8 for Kinect | Static and dynamic hand gesture recognition
[14] | RNN and LSTM | Accuracy = 91.51% | First dataset consists of a large number of gestures defined by the American sign language; second is the SHREC dataset, a wide collection of semaphoric hand gestures | Recognition of semaphoric hand gestures
[26] | Principal component analysis | Accuracy = 91.25% | 4 gestures with 5 different poses per gesture from 4 subjects, making 20 images per gesture | Vision-based hand gesture recognition system
[13] | Faster RCNN and MobileNet | Accuracy = 84.5%, Recall = 94% | GTSRB database with 39,209 training images and 12,630 testing images | Can detect all categories of traffic signs
[31] | fCNN (fusion of 1-D CNN and 2-D CNN) and temporal convolutional network (TCN) | Accuracy = 77.42%, F1 score = 0.6842, Recall = 0.7035, Precision = 0.7019 | Data gloves used to capture finger movement and bending; 16 features of each sample in the gesture data set (pitch, roll and yaw angles of the left and right hands, and the bending resistance corresponding to the five fingers of each hand) | Dynamic hand gesture recognition
[6] | Inflated two-stream 3-D convolutional networks (I3-D ConvNets) | 57.73% on RGB data, 57.68% on optical flow data, 64.44% on the combination of RGB and optical flow | ChaLearn 249 gesture database | Recognition of isolated sign language gestures on RGB data
[5] | Support vector machine and leap motion sensor | Accuracy = 63.57% | A dataset of 2240 gestures consisting of 28 isolated manual signs and 28 finger spelling words recorded from 10 users | Real-time gesture recognition
[32] | Attention-based encoder–decoder model with a multi-channel convolutional neural network | WER = 10.8% | Data sets are collected from 34 volunteers, and each set contains 30 × 10 instances; thus, a total of 34 × 30 × 10 + 34 × 30 × 10 = 20,400 instances are present in the dataset | Translates Chinese sign language into voices
[18] | Reinforcement learning with spatial–temporal CNN and bidirectional LSTM | WER = 28.6% | CSL Split II and RWTH-PHOENIX-Weather 2014 | Signs of words in a video are recognized

References

1. Yao, Y., Fu, Y.: Contour model-based hand-gesture recognition using the Kinect sensor. IEEE Trans. Circuits Syst. Video Technol. 24(11), 1935–1944 (2014)
2. Heera, S.Y., Murthy, M.K., Sravanti, V.S., Salvi, S.: Talking hands—an Indian sign language to speech translating gloves. In: International Conference on Innovative Mechanisms for Industry Applications (ICIMIA 2017), IEEE (2017)
3. Plouffe, G., Cretu, A.: Static and dynamic hand gesture recognition in depth data using dynamic time warping. IEEE Trans. Instrum. Meas. (2015)
4. Ren, Z., Yuan, J., Meng, J., Zhang, Z.: Robust part-based hand gesture recognition using Kinect sensor. IEEE Trans. Multimedia 15(5) (2013)
5. Saini, P., Saini, R., Behera, S.K., Dogra, D.P., Roy, P.P.: Real-time recognition of sign language gestures and air-writing using leap motion. In: 15th IAPR International Conference on Machine Vision Applications (MVA), Nagoya University, Nagoya, Japan, May 8–12 (2017)
6. Sarhan, N., Frintrop, S.: Transfer learning for videos: from action recognition to sign language recognition. In: IEEE ICIP (2020)
7. Jia, G., Lam, H., Ma, S., Yang, Z., Xu, Y., Xiao, B.: Classification of electromyographic hand gesture signals using modified fuzzy c-means clustering and two-step machine learning approach. IEEE Trans. Neural Syst. Rehabil. Eng. 28(6) (2020)
8. Várkonyi-Kóczy, A.R., Tusor, B.: Human–computer interaction for smart environment applications using fuzzy hand posture and gesture models. IEEE Trans. Instrum. Meas. 60(5), 1505–1514 (2011)
9. Infantino, I., Rizzo, R., Gagli, S.: A framework for sign language sentence recognition by common-sense context. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 37(5) (2007)
10. Maraqa, M., Abu-Zaiter, R.: Recognition of Arabic sign language (ArSL) using recurrent neural networks. Appl. Dig. Inform. Web Technol. 478–481 (2008)
11. Siming, H.: Research of a sign language translation system based on deep learning. In: IEEE International Conference on Artificial Intelligence and Advanced Manufacturing (AIAM) (2019)
12. Huang, J., Zhou, W., Li, H., Li, W.: Attention based 3D CNNs for large vocabulary sign language recognition. IEEE Trans. Circuits Syst. Video Technol. (2018)
13. Li, J., Wang, Z.: Real-time traffic sign recognition based on efficient CNNs in the wild. IEEE Trans. Intell. Transport. Syst. (2018)
14. Avola, D., Bernardi, M., Cinque, L., Foresti, G.L., Massaroni, C.: Exploiting recurrent neural networks and leap motion controller for the recognition of sign language and semaphoric hand gestures. IEEE Trans. Multimedia (2018)
15. Zhu, Y., Liao, M., Yang, M., Liu, W.: Cascaded segmentation-detection networks for text-based traffic sign detection. IEEE Trans. Intell. Transport. Syst. 19(1) (2018)
16. Mekala, P., Gao, Y., Fan, J., Davari, A.: Real-time sign language recognition based on neural network architecture. IEEE (2011)


17. Quan, Y.: Chinese sign language recognition based on video sequence appearance modelling. In: 5th IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 1537–1542 (2010)
18. Wei, C., Zhao, J., Zhou, W., Li, H.: Semantic boundary detection with reinforcement learning for continuous sign language recognition. IEEE Trans. Circuits Syst. Video Technol. 31(3) (2021)
19. Hassan, M., Assaleh, K., Shanableh, T.: User-dependent sign language recognition using motion detection. In: International Conference on Computational Science and Computational Intelligence, IEEE (2016)
20. Yang, X., Chen, X., Cao, X., Wei, S., Zhang, X.: Chinese sign language recognition based on an optimized tree. IEEE (2016)
21. Kosmidou, V.E., Hadjileontiadis, L.J.: Sign language recognition using intrinsic-mode sample entropy on sEMG and accelerometer data. IEEE Trans. Biomed. Eng. 56(12) (2009)
22. Fang, G., Gao, W., Zhao, D.: Large-vocabulary continuous sign language recognition based on transition-movement models. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 37(1), 1–9 (2007)
23. Subha Rajam, P., Balakrishnan, G.: Real time Indian sign language recognition system to aid deaf-dumb people. IEEE, pp. 737–742 (2011)
24. Abid, M.R., Petriu, E.M., Amjadian, E.: Dynamic sign language recognition for smart home interactive application using stochastic linear formal grammar. IEEE Trans. Instrum. Meas. 64(3), 596–605 (2015)
25. Li, W., Pu, H., Wang, R.: Sign language recognition based on computer vision. In: IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA) (2021)
26. Ahuja, M., Singh, A.: Static vision based hand gesture recognition using principal component analysis. IEEE (2015)
27. Mohandes, M., Deriche, M., Liu, J.: Image-based and sensor-based approaches to Arabic sign language recognition. IEEE Trans. Human Mach. Syst. (2014)
28. Htike, K.K., Khalifa, O.O., Ramli, H.A.M., Abushariah, A.M.: Human activity recognition for video surveillance using sequences of postures. IEEE (2014)
29. Futane, P.R., Dharaskar, R.V.: Video gestures identification and recognition using Fourier descriptor and general fuzzy minmax neural network for subset of Indian sign language. IEEE (2012)
30. Calado, A., Errico, V., Saggio, G.: Toward the minimum number of wearables to recognize signer-independent Italian sign language with machine-learning algorithms. IEEE Trans. Instrum. Meas. 70 (2021). https://doi.org/10.1109/TIM.2021.3109732
31. Liu, J., Yan, W., Dong, Y.: Dynamic hand gesture recognition based on signals from specialized data glove and deep learning algorithms. IEEE Trans. Instrum. Meas. 70 (2021)
32. Wang, Z., Zhao, T., Ma, J., Chen, H., Liu, K., Shao, H., Wang, Q., Ren, J.: Hear sign language: a real-time end-to-end sign language recognition system. IEEE Trans. Mob. Comput. (2020)

Chapter 10

A Complete Solution to a Long-Run Sand Augmentation Problem Under Uncertainty

Hidekazu Yoshioka and Haruka Tomobe

Abstract We propose a long-run stochastic control model of a sand augmentation problem. This is an ergodic control problem corresponding to major environmental problems that many rivers and coasts encounter. The objective of the control problem is to minimize the probability of sand depletion in a target aquatic environment by augmenting sand from elsewhere by paying costs. The sand storage dynamics follow a controlled piecewise deterministic Markov process. The optimal augmentation policy is derived from a discontinuous viscosity solution solving a Hamilton–Jacobi–Bellman equation. Its solution structure is completely determined by mathematical analysis. Our contribution serves as a new application of a (discontinuous) viscosity solution to an engineering problem.

10.1 Introduction

Sand is a major earth material found in most terrestrial and aquatic environments. In a water body such as a river or a lake, sand is transported hydrodynamically along water flows, causing deposition and erosion of the bottom. Recently, severe human interventions in aquatic environments have been critically modifying the natural erosion and deposition processes. As an example, constructing a huge dam across a river cross-section disconnects the sediment flux from its upstream to downstream [1], causing severe erosion of the downstream river channel. Sea-level rise affects the utility of beaches as recreational places, forcing the manager to conduct beach nourishment projects [2]. Sand augmentation is an environmental restoration project to transport sand from a source place to a target place facing severe erosion [2, 3].


This is a costly civil-engineering work because a huge amount of sand (several hundred to several thousand cubic meters) must be transported by the workforce at each augmentation opportunity. Therefore, each project should be efficient and effective enough to mitigate the erosion. To date, mathematical models based on optimal and stochastic control theories have been developed and analyzed [4, 5]; however, most of them neglect the stochastic nature of the hydrological processes in aquatic environments and/or are not solvable analytically, and hence are not always easy to use in applications. This research background motivates us to develop a reasonable and tractable mathematical model representing an optimization procedure for sand augmentation problems that can account for this stochasticity. Our objective is to propose and solve such a control model based on a piecewise deterministic Markov process (PDMP). Our problem is an ergodic control problem whose optimality equation, called the Hamilton–Jacobi–Bellman (HJB) equation, admits a (discontinuous) viscosity solution [6]. Our contribution is a unique application of ergodic control to an engineering problem.

10.2 Control Problem

10.2.1 Stochastic Dynamical System

The system we consider is a controlled PDMP that governs the continuous-time evolution of the sediment storage X = (X_t)_{t≥0} in an aquatic environment (Fig. 10.1):

$$\mathrm{d}X_t = -S\,\chi_{\{X_t>0,\ \alpha_t=1\}}\,\mathrm{d}t + \mathrm{d}N_t \quad \text{for } t>0, \qquad X_0 \in \Omega \equiv [0,1]. \tag{10.1}$$

Here, t is time, S > 0 is the erosion speed, N = (N_t)_{t≥0} represents a compound Poisson process having the intensity ω > 0 and the jump size η = (η_t)_{t≥0} such that η_t ∈ {0, 1 − X_t}, and α = (α_t)_{t≥0} is a continuous-time, two-state Markov chain valued in {0, 1} whose transition rate from state 0 to 1 is μ > 0 (resp., from 1 to 0 is λ > 0). The indicator function χ_A equals 1 if the statement A is true and 0 otherwise. The maximum storage of the sand is normalized to 1 without any loss of generality. The Markov chain α represents an aquatic environment with a high-flow regime where the sand is eroded (α = 1) and a low-flow regime where the sand is not eroded (α = 0). Jump times of the compound Poisson process represent opportunities of sand augmentation with the mean time interval ω⁻¹ > 0, and η represents the size of augmentation. Hence, the PDMP (10.1) represents a controlled sand management process in which the erosion is counteracted by the augmentation. Its solution should be understood in a non-smooth, Filippov-like sense. We assume that η is Markovian and adapted to the natural filtration generated by α and N, as in conventional control models. In addition, we assume that the augmentation is allowed only in the low-flow regime (α = 0) because working in the high-flow regime is dangerous.

Fig. 10.1 Conceptual image of the control problem
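To make the dynamics concrete, the following is a minimal simulation sketch of the PDMP (10.1) under a simple threshold-type policy; it is an illustration, not the authors' code. The rate values are those quoted later in Sect. 10.3.2, while the threshold and time step are assumptions for illustration.

```python
import numpy as np

# Minimal Euler-type simulation of the PDMP (10.1) under a threshold policy
# (illustrative sketch; x_bar and dt are assumed values, not from the chapter).
rng = np.random.default_rng(0)
S, lam, mu, omega = 0.0065, 0.0072, 0.0077, 0.0050   # rates in 1/h (Sect. 10.3.2)
dt, T, x_bar = 1.0, 10_000.0, 0.5                    # hypothetical threshold x_bar

x, a = 1.0, 0   # storage in [0, 1]; regime 0 = low flow, 1 = high flow
for _ in range(int(T / dt)):
    if a == 1 and x > 0:
        x = max(x - S * dt, 0.0)                     # erosion in the high-flow regime
    # regime switching with rates mu (0 -> 1) and lam (1 -> 0)
    if a == 0 and rng.random() < mu * dt:
        a = 1
    elif a == 1 and rng.random() < lam * dt:
        a = 0
    # augmentation opportunities arrive at rate omega; act only in the low-flow regime
    if rng.random() < omega * dt and a == 0 and x <= x_bar:
        x = 1.0                                      # jump size eta = 1 - x
```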

10.2.2 Objective and HJB Equation

The objective to be minimized by choosing η at each jump time {τ_i}_{i=1,2,3,...} of N is

$$H = \inf_{\eta}\ \limsup_{T\to+\infty}\frac{1}{T}\,\mathbb{E}\left[\int_0^T \chi_{\{X_s=0\}}\,\mathrm{d}s + \sum_{\tau_i\le T}\left(c\,\eta_{\tau_i} + d\,\chi_{\{\eta_{\tau_i}>0\}}\right)\right],$$

where 𝔼 is the expectation. The first term corresponds to the probability of sand depletion (X_s = 0), while the second term to the augmentation cost, consisting of the proportional cost to buy the sand for the augmentation (cη_t) and a fixed cost representing labor costs (dχ_{η_t>0}) with parameters c, d > 0 unless otherwise specified. By invoking a verification argument [7], we infer that the optimality equation governing the optimal augmentation policy η* = (η*_t)_{t≥0} is the HJB equation

$$\begin{cases} H + S\,\chi_{\{x>0\}}\dfrac{\mathrm{d}\Phi(x)}{\mathrm{d}x} + \lambda\left(\Phi(x) - \Psi(x)\right) - \chi_{\{x=0\}} = 0 \\[6pt] H + \mu\left(\Psi(x) - \Phi(x)\right) + \omega\left(\Psi(x) - M\Psi(x)\right) - \chi_{\{x=0\}} = 0 \end{cases}, \quad x \in \Omega \tag{10.2}$$

with

$$M\Psi(x) = \min_{u\in\{0,\ 1-x\}}\left[\Psi(x+u) + cu + d\,\chi_{\{u>0\}}\right], \tag{10.3}$$


where Φ and Ψ are some functions from Ω to ℝ. The first and second equations of (10.2) correspond to the optimality in the high-flow and low-flow regimes, respectively. We infer that the optimal control η* depends on the coupled state (X_t, α_t) as

$$\eta_t^* = \chi_{\{\alpha_t=0\}}\,\arg\min_{u\in\{0,\ 1-X_t\}}\left[\Psi(X_t+u) + cu + d\,\chi_{\{u>0\}}\right].$$

We will show this is true.

10.3 Exact Solution

10.3.1 Guessed Solution and Its Optimality

The following proposition is the main result of this short paper, stating that the HJB equation admits an exact solution and that the associated optimal control is of a threshold type. For simplicity, set

$$\alpha = 1 + \frac{\lambda+S}{\mu+\omega} - S(c+d) \quad \text{and} \quad \beta = 1 + \frac{\lambda}{\mu} - S(c+d). \tag{10.4}$$

Hereafter, C is an arbitrary real constant, arising due to the non-uniqueness of Φ and Ψ up to a real constant, which is a common feature of ergodic control problems [7].

Proposition 10.1 Assume α ≥ 0. Then, the HJB Eq. (10.2) admits a discontinuous viscosity solution (H, Φ(·), Ψ(·)) of the form

$$H = \frac{S\mu}{\lambda+\mu}\left(c + d\,\frac{1}{1-\bar{x}}\right), \tag{10.5}$$

$$\Phi(x) = C + \begin{cases} -cx + C_1\left(1 - e^{-C_2 x}\right) & (0 \le x \le \bar{x}) \\[6pt] -c\bar{x} + C_1\left(1 - e^{-C_2 \bar{x}}\right) - \dfrac{(\mu+\lambda)H}{S\mu}\,(x-\bar{x}) & (\bar{x} < x \le 1) \end{cases} \tag{10.6}$$

with

$$C_1 = -\frac{\mu+\lambda+\omega}{\lambda\omega} + \frac{S(\mu+\omega)}{\lambda\omega}\,c \quad \text{and} \quad C_2 = \frac{\lambda\omega}{S(\mu+\omega)}, \tag{10.7}$$

and

$$\Psi(x) = C + \begin{cases} (H-1)/\lambda & (x = 0) \\[6pt] \dfrac{\mu\lambda C_3\left(1-e^{-C_2 x}\right) - (\mu+\omega+\lambda) + (\mu+\omega)H}{\lambda(\mu+\omega)} - cx & (0 < x \le \bar{x}) \\[6pt] \Phi(x) - H/\mu & (\bar{x} < x \le 1) \end{cases} \tag{10.8}$$

Here, x̄ ∈ [0, 1) is uniquely determined from

$$\left(\frac{\mu+\omega+\lambda}{\mu+\omega} - Sc\right)e^{-C_2\bar{x}} = \frac{Sd}{1-\bar{x}}. \tag{10.9}$$

In addition, the optimal control η* at each jump time τ_i is given by

$$\eta^*\!\left(X_{\tau_i}, \alpha_{\tau_i}\right) = \begin{cases} 1 - X_{\tau_i} & \left(0 \le X_{\tau_i} \le \bar{x},\ \alpha_{\tau_i} = 0\right) \\ 0 & (\text{otherwise}) \end{cases} \tag{10.10}$$

This proposition is proven as follows. Firstly, we guess the optimal control of the form in the proposition with some x̄ ∈ [0, 1), and invoke

$$\eta^*(x, 0) = \arg\min_{u\in\{0,\ 1-x\}}\left[\Psi(x+u) + cu + d\,\chi_{\{u>0\}}\right] = \chi_{\{x\le\bar{x}\}}\,(1-x). \tag{10.11}$$

Then, substituting this into the HJB equation together with the continuity assumptions Φ ∈ C(Ω) and Ψ ∈ C((0, 1]) (justified later) leads to them and the effective Hamiltonian H presented in the proposition. It is shown by a classical intermediate value theorem that the algebraic equation determining x̄ ∈ [0, 1) is uniquely solvable. This equation can be solved by a fixed-point iteration. The viscosity property [6, 7] is trivially satisfied except at x = 0 and x = x̄, at which Ψ is discontinuous and Φ may not be differentiable, respectively. The former case is important but not technically problematic because the HJB equation does not include differentials of Ψ. It turns out that only the viscosity property at x = x̄ needs to be checked. To proceed, notice the smoothness (not assumed a priori)

$$\begin{aligned} \frac{\mathrm{d}\Phi(\bar{x}+0)}{\mathrm{d}x} - \frac{\mathrm{d}\Phi(\bar{x}-0)}{\mathrm{d}x} &= c - C_1 C_2\, e^{-C_2\bar{x}} - \frac{\mu+\lambda}{\mu}\frac{H}{S} \\ &= c - \frac{S(\mu+\omega)c - (\mu+\omega+\lambda)}{S(\mu+\omega)}\,e^{-C_2\bar{x}} - \frac{1}{\mu}\,\frac{\mu+\lambda}{S}\,\frac{S\mu}{\lambda+\mu}\left(c + d\,\frac{1}{1-\bar{x}}\right) \\ &= c + \left(\frac{\mu+\omega+\lambda}{S(\mu+\omega)} - c\right)e^{-C_2\bar{x}} - \left(c + d\,\frac{1}{1-\bar{x}}\right) = 0. \end{aligned} \tag{10.12}$$

Consequently, the triplet (H, Φ, Ψ) is a viscosity solution; actually, it becomes a classical solution, but this may be a fortunate case. The optimality of the solution follows from the verification argument similar to [7] with an Itô formula for bounded and finite jump processes. We do not know whether the similar smoothness follows for more complicated cases. In such a case, an Itô–Tanaka formula can be utilized.


Fig. 10.2 The threshold x̄. "Do nothing" is optimal in the white area. The black line is x̄ = 0

10.3.2 Remark and Application

The control with x̄ = 0 is optimal if the cost is moderately large (α < 0, β ≥ 0). If α < 0 and β < 0, then the threshold-type control is sub-optimal and doing nothing (η* ≡ 0) is optimal. The parameters in (10.4) thus govern the project feasibility. Proposition 10.1 still holds true even if c ≤ 0, provided it is close to 0. For example, this case represents the situation where some construction surplus soils are available. We compute x̄ by statistically analyzing hydrological data of the Hii River, Japan, from June 2019 to May 2020 using a Recking-based formula. By identifying the high and low flows using the threshold discharge 5 (m³/s), we get S = 0.0065 (1/h), λ = 0.0072 (1/h), and μ = 0.0077 (1/h). We use ω = 0.0050 (1/h) and examine a variety of the cost parameters c, d, where the storable sand amount is set as 400 m³ (Fig. 10.2).
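For concreteness, the threshold equation (10.9) can be solved numerically by the fixed-point iteration mentioned in the proof. The following is a minimal sketch (not the authors' code) using the Hii River rates quoted above; the cost parameters c and d are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Fixed-point iteration for Eq. (10.9), rearranged as
#   x = 1 - S*d*exp(C2*x) / ((mu+omega+lam)/(mu+omega) - S*c).
S, lam, mu, omega = 0.0065, 0.0072, 0.0077, 0.0050   # 1/h (Sect. 10.3.2)
c, d = 1.0, 0.5                                      # hypothetical cost parameters

C2 = lam * omega / (S * (mu + omega))
A = (mu + omega + lam) / (mu + omega) - S * c        # left-hand coefficient of (10.9)

x = 0.0
for _ in range(200):
    x_new = 1.0 - S * d * np.exp(C2 * x) / A
    if abs(x_new - x) < 1e-12:
        break
    x = x_new
print(f"threshold x_bar = {x:.6f}")
```

For these parameter values the right-hand side is a mild contraction, so the iteration converges in a few steps.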

10.4 Conclusion

We will examine whether the model can be made closer to real problems without losing tractability. Our HJB equation can become a unique benchmark for checking the convergence of numerical methods because exact discontinuous solutions are rarely available. The explicit nature allows for applying the proposed model to cost-constrained (i.e., expectation-constrained) optimization problems based on impulse controls subject to discrete observations, which is currently under investigation.

Acknowledgements This research is supported by The Yanmar Environmental Sustainability Support Association No. KI0212021.


References

1. Downs, P.W., Dusterhoff, S.R., Leverich, G.T., Soar, P.J., Napolitano, M.B.: Fluvial system dynamics derived from distributed sediment budgets: perspectives from an uncertainty-bounded application. Earth Surf. Proc. Land. 43, 1335–1354 (2018)
2. Jin, D., Hoagland, P., Ashton, A.D.: Risk averse choices of managed beach widths under environmental uncertainty. Nat. Resource Model. e12324 (2021)
3. Huizer, S., Oude Essink, G.H., Bierkens, M.F.: Fresh groundwater resources in a large sand replenishment. Hydrol. Earth Syst. Sci. 20, 3149–3166 (2016)
4. Gopalakrishnan, S., McNamara, D., Smith, M.D., Murray, A.B.: Decentralized management hinders coastal climate adaptation: the spatial-dynamics of beach nourishment. Environ. Resource Econ. 67(4), 761–787 (2017)
5. Yoshioka, H., Tsujimura, M., Hamagami, K., Yaegashi, Y., Yoshioka, Y.: HJB and Fokker–Planck equations for river environmental management based on stochastic impulse control with discrete and random observation. Comput. Math. Appl. 96, 131–154 (2021)
6. Crandall, M.G., Ishii, H., Lions, P.L.: User's guide to viscosity solutions of second order partial differential equations. Bull. AMS 27(1), 1–67 (1992)
7. Yoshioka, H., Tsujimura, M.: Hamilton–Jacobi–Bellman–Isaacs equation for rational inattention in the long-run management of river environments under uncertainty. arXiv preprint arXiv:2107.12526 (2021)

Chapter 11

Real-Time One-Hand Indian Sign Language Alphabets and Numbers Recognition in Live Video Using Fingertip Distance Feature

Rakesh R. Savant, Jitendra V. Nasriwala, and Preeti P. Bhatt

Abstract Sign language is the mode of communication used by hearing- and speech-impaired people to communicate with each other. The majority of the general population does not understand sign language; to bridge this gap, an automated sign language translator is needed. Much research has been done on Indian Sign Language Recognition (SLR), and real-time sign language recognition is the need of the hour: we need to develop SLR systems that work efficiently in real-time situations. Our study covers the development of a real-time Indian Sign Language recognition system for one-handed ISL alphabets and numbers. We have used a hand landmark-based hand tracking module for detecting hands in real-time live video. The Euclidean Distance between hand landmarks is used as a feature to feed a multilayer perceptron neural network. We get 80% accuracy on training the model on the dataset. The prototype is tested by deploying the model for real-time SLR in live video, and it recognizes each one-handed gesture other than the gesture for the alphabet 'V'.

11.1 Introduction

Communication is a primary need of humankind. Humans use different mediums to communicate with each other, the most common being spoken language. Without spoken language, it would not be possible for a large proportion of the population to communicate. Even with spoken language, a significant part of the population with speech and hearing disabilities cannot communicate with other people. Deaf and mute people use sign language as a communication medium to communicate with each other, while only a limited portion of the speaking population understands sign language. An automatic sign language recognition system is therefore needed for communication between hearing- and speech-impaired people and the hearing population [1]. Sign language uses hands, postures of hands, movements


of hands, and expressions to represent an alphabet, a number, or a dictionary word. Indian Sign Language has its own alphabets, numbers, and a rich set of word dictionaries. In sign language, the signer performs the signs using one hand or both hands [2]. A particular trait of Indian Sign Language (ISL) is that the majority of its alphabets and numbers use both arms to symbolize the alphabet or digit [3]. A real-time sign language translator is a complicated task; the biggest challenge is the accurate segmentation of hands from a cluttered environment (uncontrolled signer space). Effective segmentation of the hands and face allows us to retrieve good features from the region of interest (the hands). Sign Language Recognition is classified into the following subcategories: Static Sign Language Recognition, Isolated Sign Language Recognition, and Continuous Sign Language Recognition. Static Sign Language Recognition covers the recognition of finger-spelled gestures of alphabets and digits. Isolated Sign Language Recognition covers the recognition of dynamic gestures corresponding to words. Lastly, Continuous Sign Language Recognition covers recognizing a sequence of dynamic gestures to form specific sentences [1]. Our study aims at the recognition of single-handed ISL alphabets and numbers from live video in real-time situations. The dataset used in the study has 14 different classes of single-handed ISL characters (nine digits and five alphabets of ISL), with 1200 images per class, for a total of 16,800 images [4]. We have pre-processed the given dataset with our machine learning-based skin-segmentation algorithm and designed fingertip distance-based features to recognize the appropriate single-handed sign alphabets and digits in live video. The next section describes the related work. Further sections cover the proposed methodology, with a detailed description of ISL alphabet and number recognition in real-time live video. Finally, the conclusion is given in the last section.

11.2 Related Work

Much work has been reported for sign language recognition in controlled environments, as per the literature review given by Savant and Ajay [5]. ISL recognition in real time is challenging. Many works reported real-time sign language recognition with colored hand gloves [6–9]. Hardware-based gesture recognition is another approach, where the signer wears sensory gloves with additional hardware [10]. For effective recognition of signs in real-time situations, effective hand segmentation methods are needed; much work has been reported on skin color-based segmentation approaches using color spaces like HSV and YCbCr [11–18]. Real-time sign language recognition with a computer vision approach is reported in [19–23]. In [24], the authors discussed an approach for static American Sign Language alphabets using an Edge Oriented Histogram with a multiclass SVM. In [3],


they used a deep learning method with the pre-trained Inception v3 model to recognize static sign language from images. The survey concludes that fewer works have been reported for real-time sign language recognition from live video. Our study aims to recognize static one-handed ISL alphabets and numbers in real time by using fingertip distance as a feature, and to develop a robust sign language system that works in real-time situations.

11.3 Proposed Work

We intend to develop an ISL one-handed alphabet and number recognition system that works in real-time situations. Here, we have used the fingertip distance as a feature to train and test the classifier. The proposed method works under versatile background conditions and recognizes the one-handed alphabets and numbers. The steps included in the gesture recognition process are video capturing, frame extraction, hand tracking, feature extraction and representation, classification, and recognition, as shown in Fig. 11.1.

11.3.1 Live Video Acquisition

Video acquisition is the first step in ISL recognition. At first, the camera starts to capture video, and it gives a live video feed as an input to the system.

Fig. 11.1 System overview


11.3.2 Frame Extraction

In this step, frames are extracted from the live video, and these frames are input to the next module. We have considered a frame rate of 30 FPS.

11.3.3 Hand Tracking

In this step, the hand tracking module is used to detect and track hands in a live video feed. The extracted continuous frames are processed in this module to track the hand. For hand tracking in real-time live video, we have used the method of [25], which gives the hand skeleton with 21 different landmarks on the hand.
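As an illustration of this step, the following is a minimal sketch of live hand tracking with MediaPipe Hands [25, 26]; the parameter values are assumptions for illustration, not the authors' exact configuration.

```python
import cv2
import mediapipe as mp

# Minimal sketch of the hand-tracking step using MediaPipe Hands [25, 26],
# which returns 21 landmarks per detected hand.
mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # live video feed
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark  # 21 (x, y, z) points
            # lm[0] is the wrist/palm point used as the reference below
cap.release()
```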

11.3.4 Feature Extraction and Representation

The hand skeleton with 21 hand landmarks is used as the reference for feature extraction. Before that, a hand analysis is done to understand the distance feature between hand tips. The 21 hand landmarks are represented on the hand, and each landmark is denoted by a unique number, as per Fig. 11.2.

11.3.4.1 Feature Analysis

We have derived the following characteristics by analyzing the hand skeleton while the signer performs sign language gestures; these are used to design novel features for ISL recognition. The palm is denoted by the number 0. This is an essential point in the experiments, as the distances of the other landmarks are taken from the palm point.

Fig. 11.2 Hand and landmark references [26]


Fig. 11.3 One-handed ISL alphabets and numbers

Hand landmarks 4, 8, 12, 16, and 20 are the fingertips. If we consider the distance of these points from point 0, they are the maximum-distance points from the palm point 0 when the entire hand is open. This unique characteristic makes the distance parameter suitable as a feature for ISL recognition. Hand landmarks 5, 9, 13, and 17 are the points where the fingers join the palm, and they have the unique characteristic that their distance to the palm point 0 is always similar.

11.3.4.2 Feature Extraction and Representation

The main objective of our study is to validate fingertip-based features for ISL recognition. Based on the analysis of the hand skeleton, we have considered the distance between these landmarks as a feature. The Euclidean Distance from point 0 to each of the other hand landmarks is calculated. We have considered 14 one-handed ISL alphabets and numbers (1, 2, 3, 4, 5, 6, 7, 8, 9, C, I, L, V, and U). The gestures considered in our study are shown in Fig. 11.3. The significant difference between the ISL characters lies in the finger movements and positions. To perform a specific gesture, the signer has to generate different hand postures, such as folding and opening the fingers. As these finger movements are performed, they generate different distance values for each hand landmark from the palm point 0. We have calculated the distance of each hand landmark from the palm point using the Euclidean Distance, as given in (11.1):

$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \tag{11.1}$$


Fig. 11.4 Hand landmarks (average distance calculation)

For checking whether a finger is open or closed, we have calculated the average of the distances of points 5, 9, 13, and 17, as given in Fig. 11.4. If the distances of the fingertip (4, 8, 12, 16, or 20) and the second fingertip point (7, 11, 15, or 19) of any finger are greater than the calculated average, the finger is considered open; otherwise, the finger is closed. Table 11.1 gives the analysis of the ISL one-handed gestures. When a person folds the hand, all fingers fold from points 3, 6, 10, 14, and 18, respectively. This analysis of the hand helps us define the characteristics of each alphabet and digit of ISL under study. Based on the gesture analysis given in Table 11.1 for the ISL one-handed gestures, the next step is to represent the features and generate feature vectors to train and test the classifier. For feature vector generation, we have taken the Euclidean Distance of each of the 21 points from the palm point 0. In a real-time video, the signer's hand constantly moves, leading to distance variations between hand landmarks in each frame; on the other hand, the dataset used for classification has images with a fixed hand size (128 × 128). To overcome this scaling issue of hand landmark distances, we normalize the feature vectors.
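A minimal sketch of this distance-feature computation is given below (illustrative helper functions, not the authors' code); `landmarks` is assumed to be a (21, 2) array of (x, y) coordinates with index 0 as the palm point.

```python
import numpy as np

def extract_features(landmarks: np.ndarray) -> np.ndarray:
    """Euclidean Distance of every landmark from palm point 0, Eq. (11.1)."""
    palm = landmarks[0]
    dists = np.linalg.norm(landmarks - palm, axis=1)
    return dists / dists.max()  # normalize for scale invariance across frames

def open_fingers(landmarks: np.ndarray) -> dict:
    """Open/closed check: fingertip and second-tip distances vs. the average
    distance of the finger-base points 5, 9, 13, 17 from the palm."""
    palm = landmarks[0]
    dists = np.linalg.norm(landmarks - palm, axis=1)
    avg = dists[[5, 9, 13, 17]].mean()
    tips = {"thumb": [4], "index": [8, 7], "middle": [12, 11],
            "ring": [16, 15], "pinky": [20, 19]}
    return {name: all(dists[i] > avg for i in idx) for name, idx in tips.items()}
```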

11.3.5 Classification

In this step, we have performed the classification of the gestures given in Fig. 11.3. We read the images one by one from the dataset for each class and calculated and generated the feature vectors as discussed in the previous step. After generating the feature vectors, we split all the samples using an 80–20 split: 80% of the samples are used to train and 20% to test the classifier. To maintain uniformity across experiments, we set the stratify parameter based on the classes and kept


Table 11.1 One-handed ISL characters distance feature analysis for each character

Sign: 1
• The visual appearance of number 1—the index finger is open
• Landmarks 6, 7, and 8 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 2
• The visual appearance of number 2—the index and middle fingers are open, and the hand orientation is the front side of the hand
• Landmarks 6, 7, 8, 10, 11, and 12 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 3
• The visual appearance of number 3—the index, middle, and ring fingers are open
• Landmarks 6, 7, 8, 10, 11, 12, 14, 15, and 16 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 4
• The visual appearance of number 4—the index, middle, ring, and pinky fingers are open
• Landmarks 6, 7, 8, 10, 11, 12, 14, 15, 16, 18, 19, and 20 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 5
• The visual appearance of number 5—the index, middle, ring, pinky, and thumb fingers are open
• Landmarks 4, 6, 7, 8, 10, 11, 12, 14, 15, 16, 18, 19, and 20 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 6
• The visual appearance of number 6—the pinky finger is open
• Landmarks 18, 19, and 20 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 7
• The visual appearance of number 7—the index finger is opened and half folded
• Landmarks 6, 7, and 8 have a distance greater than the average distance of points 5, 9, 13, and 17, and point 8 has nearly the same distance as point 6

Sign: 8
• The visual appearance of number 8—the index, middle, and thumb fingers are open
• Landmarks 4, 6, 7, 8, 10, 11, and 12 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: 9
• The visual appearance of number 9—the thumb finger is open
• Landmark 4 has a distance greater than the average distance of points 5, 9, 13, and 17

Sign: C
• The visual appearance of alphabet C—the index and thumb fingers are opened, and the index finger is half folded
• Landmarks 4, 6, 7, and 8 have a distance greater than the average distance of points 5, 9, 13, and 17, and point 8 has nearly the same distance as point 6

Sign: I
• The visual appearance of the alphabet I—the index finger is open
• Landmarks 6, 7, and 8 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: L
• The visual appearance of the alphabet L—the index and thumb fingers are open
• Landmarks 4, 6, 7, and 8 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: V
• The visual appearance of the alphabet V—the index and middle fingers are open, and the hand orientation is the backside of the hand
• Landmarks 6, 7, 8, 10, 11, and 12 have a distance greater than the average distance of points 5, 9, 13, and 17

Sign: U
• The visual appearance of alphabet U—the index and thumb fingers are opened, and the index finger is half folded
• Landmarks 4, 6, 7, and 8 have a distance greater than the average distance of points 5, 9, 13, and 17


Table 11.2 Real-time gesture recognition in live video

the random state value 10 for splitting. We have used a multilayer perceptron neural network (MLP) with (7, 7, 7) hidden layers, the 'relu' activation function, and the 'adam' solver for classification. The classifier was tested on 60–40, 70–30, and 80–20 splits. With the 80–20 split, the MLP model gives the best prediction accuracy of the three, i.e., 80%. The trained model is saved for use in the real-time gesture recognition prototype.
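A minimal sketch of this training step with scikit-learn is shown below, assuming `X` holds the normalized distance feature vectors and `y` the 14 class labels; the variable names and data loading are placeholders, not the authors' code.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import joblib

# 80-20 stratified split with random_state=10, as described in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=10)

clf = MLPClassifier(hidden_layer_sizes=(7, 7, 7), activation="relu",
                    solver="adam", max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # ~0.80 reported in the text

joblib.dump(clf, "isl_mlp.joblib")  # saved for the real-time prototype
```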

11.3.6 Recognition and Text Output

The saved model is used for recognizing one-handed ISL gestures in a live video feed. The model performed well in the real-time prototype and recognized every gesture in the real-time live video feed except the gesture for the alphabet 'V'. The gestures '2' and 'V' are similar-looking, and since we have not used any other parameters for feature generation, our model failed to recognize the gesture 'V'. Table 11.2 shows frames from the live video for real-time one-hand ISL alphabet and number recognition.

11.4 Conclusion

Real-time Sign Language Recognition is a challenging task to accomplish, and much research is going on in this domain. We need solutions that may be deployable


on hand-held devices like mobile phones. Hand detection is an essential step in a gesture recognition system. In our study, we have used a hand tracking module to detect hands in real-time live video. The hand landmarks are used as reference points to generate the feature vectors, and the distances between these landmarks are taken as features to feed the multilayer perceptron neural network classifier. With this distance feature, we achieve 80% accuracy on the dataset. While testing the real-time SLR, the model recognized every gesture except 'V' accurately in live video.

References

1. Aloysius, N., Geetha, M.: Understanding vision-based continuous sign language recognition. Multimedia Tools Appl. 79, 22177–22209 (2020)
2. Rathi, S., Gawande, U.: Development of full-duplex intelligent communication system for deaf and dumb people. In: 2017 7th International Conference on Cloud Computing, Data Science and Engineering-Confluence, pp. 733–738. IEEE (2017)
3. Das, A., Gawde, S., Suratwala, K., Kalbande, D.: Sign language recognition using deep learning on custom processed static gesture images. In: 2018 International Conference on Smart City and Emerging Technology (ICSCET), pp. 1–6. IEEE (2018)
4. Dutta, K.K., Bellary, S.A.S.: Machine learning techniques for Indian sign language recognition. In: 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC), pp. 333–336. IEEE (2017)
5. Savant, R., Ajay, A.: Indian sign language recognition system for deaf and dumb using image processing and fingerspelling: a technical review. Nat. J. Syst. Inf. Technol. 11(1), 23 (2018)
6. Rao, G.A., Kishore, P.V.V.: Sign language recognition system simulated for video captured with smart phone front camera. Int. J. Electr. Comp. Eng. (2088-8708) 6(5) (2016)
7. Chattoraj, S., Vishwakarma, K., Paul, T.: Assistive system for physically disabled people using gesture recognition. In: 2017 IEEE 2nd International Conference on Signal and Image Processing (ICSIP), pp. 60–65. IEEE (2017)
8. Kishore, P.V.V., Kishore, S.R.C., Prasad, M.V.D.: Conglomeration of hand shapes and texture information for recognizing gestures of Indian sign language using feed-forward neural networks. Int. J. Eng. Technol. (IJET) 5(5), 3742–3756 (2013)
9. Saqib, S., Kazmi, S.A.R.: Repository of static and dynamic signs. Int. J. Adv. Comput. Sci. Appl. 8, 101–105 (2017)
10. Savant, R., Nasriwala, J.: Indian sign language recognition system: approaches and challenges. Adv. Innov. Res. 76 (2019)
11. Shaik, K.B., Ganesan, P., Kalist, V., Sathish, B.S., Jenitha, J.M.M.: Comparative study of skin color detection and segmentation in HSV and YCbCr color space. Proc. Comp. Sci. 57, 41–48 (2015)
12. Tripathi, K., Nandi, N.B.G.: Continuous Indian sign language gesture recognition and sentence formation. Proc. Comp. Sci. 54, 523–531 (2015)
13. Reshna, S., Jayaraju, M.: Spotting and recognition of hand gesture for Indian sign language recognition system with skin segmentation and SVM. In: 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 386–390. IEEE (2017)
14. Rautaray, S.S., Agrawal, A.: Vision based hand gesture recognition for human computer interaction: a survey. Artif. Intell. Rev. 43(1), 1–54 (2015)
15. Sajanraj, T.D., Beena, M.V.: Indian sign language numeral recognition using region of interest convolutional neural network. In: 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), pp. 636–640. IEEE (2018)
16. Kaur, G., Kaur, P.: Analysis of face recognition using YCbCr and CIElab skin color segmentation methods. Int. J. Adv. Res. Comp. Sci. 6(6) (2015)
17. Phung, S.L., Bouzerdoum, A., Chai, D.: A novel skin color model in YCbCr color space and its application to human face detection. In: Proceedings International Conference on Image Processing, vol. 1, pp. I-I. IEEE (2002)
18. Sikandar, T., Ghazali, K.H., Mohd, I.I., Rabbi, M.F.: Skin color pixel classification for face detection with hijab and niqab. In: Proceedings of the International Conference on Imaging, Signal Processing and Communication, pp. 1–4 (2017)
19. Masood, S., Srivastava, A., Thuwal, H.C., Ahmad, M.: Real-time sign language gesture (word) recognition from video sequences using CNN and RNN. In: Intelligent Engineering Informatics, pp. 623–632. Springer, Singapore (2018)
20. Ahire, P.G., Tilekar, K.B., Jawake, T.A., Warale, P.B.: Two way communicator between deaf and dumb people and normal people. In: 2015 International Conference on Computing Communication Control and Automation, pp. 641–644. IEEE (2015)
21. Garcia, B., Viesca, S.A.: Real-time American sign language recognition with convolutional neural networks. Convolution. Neural Netw. Vis. Recogn. 2, 225–232 (2016)
22. Hsieh, C.C., Liou, D.H.: Novel Haar features for real-time hand gesture recognition using SVM. J. Real-Time Image Proc. 10(2), 357–370 (2015)
23. Köpüklü, O., Gunduz, A., Kose, N., Rigoll, G.: Real-time hand gesture detection and classification using convolutional neural networks. In: 2019 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019), pp. 1–8. IEEE (2019)
24. Nagarajan, S., Subashini, T.S.: Static hand gesture recognition for sign language alphabets using edge oriented histogram and multi class SVM. Int. J. Comp. Appl. 82(4) (2013)
25. Zhang, F., Bazarevsky, V., Vakunov, A., Tkachenka, A., Sung, G., Chang, C.L., Grundmann, M.: Mediapipe hands: on-device real-time hand tracking. arXiv preprint arXiv:2006.10214 (2020)
26. MediaPipe Hands. https://google.github.io/mediapipe/solutions/hands

Chapter 12

Structural and Optical Analysis of Hydrothermally Synthesized Molybdenum Disulfide Nanostructures

Nipom Sekhar Das, Koustav Kashyap Gogoi, Avijit Chowdhury, and Asim Roy

Abstract Layered transition metal dichalcogenides (TMDCs) have drawn immense attraction owing to their unique physical, chemical, structural, and tunable electronic characteristics. Among the TMDCs, layered molybdenum disulfide (MoS2) exhibits wide application prospects due to its high stability, cost effectiveness, simple processability, and non-toxicity. Herein, the structural and spectroscopic behavior of MoS2 synthesized via a hydrothermal method is reported. The X-ray diffraction pattern exhibits sharp peaks located at 2θ ≈ 14.86°, 35.49°, and 57.65°, which correspond to reflections from the (002), (100), and (110) planes, respectively, representing the crystalline characteristics of MoS2. Also, the interplanar spacing and average crystallite size are estimated to be ~5.95 Å and 21.51 nm, respectively. The UV–visible absorption spectrum exhibits two characteristic peaks positioned at ~272 and 325 nm, whereas the PL spectrum shows the presence of a sharp emission peak at ~434 nm. Furthermore, the optical band gap energies are calculated to be ~1.55 and 2.91 eV from Tauc's plot. The FESEM image reveals the rod-like shape of the material.

12.1 Introduction

Layered graphene-like materials and their derivatives have received enormous research interest and have broad application prospects in diverse research fields. However, the lack of an energy band gap and chemical inertness curtails their mass-scale utilization as semiconducting materials. Therefore, researchers have focused on similar


Fig. 12.1 Schematic representation of planar view of molecular structure of MoS2

2D-layered materials such as TMDCs to meet expectations and expand their applicability in recent years. The outstanding physical, structural, and tunable electronic properties of TMDCs make them ideal candidates. Among the various TMDCs, MoS2 has gained the utmost popularity due to its layer-dependent tunable electronic and optoelectronic properties. MoS2 is constituted of a layer of molybdenum atoms sandwiched between two layers of sulfur atoms held together by weak van der Waals interaction. Because of the weak interactions between the sulfur layers, few-layer-thick sheets can be derived from the bulk structure [1, 2]. Furthermore, both exfoliated and pristine MoS2 feature a large number of other applications such as energy storage [3], optoelectronic devices including FETs, photovoltaic cells, photodetectors, and LEDs [4–7], catalysis [8, 9], lubrication [10, 11], etc. MoS2 is anticipated to have indirect and direct band gaps of roughly 1.3 and 1.8 eV, respectively [12, 13]. Figure 12.1 demonstrates the planar view of the MoS2 structure.

12.2 Experimental Procedure

12.2.1 Chemicals Used

Thiourea (CH4N2S; M.W.: 76.12 g/mol; Merck), ammonium heptamolybdate tetrahydrate [(NH4)6Mo7O24·4H2O; M.W.: 1235.86 g/mol; Merck], N,N-dimethylformamide (DMF; HCON(CH3)2; Merck), ethanol, and de-ionized water are the chemicals used.

12.2.2 Synthesis of Molybdenum Disulfide (MoS2)

MoS2 was prepared by using a simple hydrothermal method, as demonstrated in Fig. 12.2 [14]. The detailed procedure includes the preparation of a solution of


Fig. 12.2 Schematic diagram of synthesis of MoS2

ammonium heptamolybdate tetrahydrate (0.185 g) and thiourea (0.319 g) in 30 mL of DMF through vigorous stirring for 5 h. The homogeneous solution was placed in a Teflon-lined stainless steel autoclave and maintained at 200 °C for 24 h. The autoclave was allowed to cool down to ambient temperature naturally. Finally, the black MoS2 powder was obtained after centrifugal washing of the hydrothermally treated product with DI water and ethanol, followed by oven drying at 80 °C for 14 h. The material was then analyzed through different optical, structural, and morphological characterization techniques.

12.2.3 Instrumentation

The sample was characterized using Photoluminescence (PL) spectroscopy (Horiba, Model: Fluormax 4C), a field emission scanning electron microscope (FESEM) (FEI, Quanta 250FEG), an X-ray diffractometer (XRD) (Panalytical, Model: Xpert3 MRD XL), UV–visible absorption spectroscopy (Agilent Technologies, Model: Cary 60), and Fourier transform infrared spectroscopy (FTIR) (Bruker ALPHA II compact spectrophotometer).


Fig. 12.3 a UV–visible absorption spectrum of MoS2 b Tauc’s plot of MoS2

12.3 Results and Discussions

12.3.1 Optical Analysis of MoS2

12.3.1.1 UV–Visible Absorption Spectroscopy

Figure 12.3a displays the absorption characteristics of MoS2 in the wavelength band of 200–800 nm. The spectrum displays two absorption peaks at ~272 and ~325 nm, which may be due to interband electron transitions from occupied to unoccupied orbitals [15–18]. Tauc's plot method is used to estimate the optical band gap energies, and the values are estimated to be 1.55 and 2.91 eV, as shown in Fig. 12.3b [19, 20].
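For reference, the band gap extraction relies on the standard Tauc relation (a textbook formula, not reproduced in the chapter):

$$(\alpha h\nu)^{1/n} = A\,(h\nu - E_g),$$

where α is the absorption coefficient, hν the photon energy, A a constant, E_g the optical band gap, and n depends on the transition type (n = 1/2 for direct allowed transitions, so (αhν)² is plotted against hν and the linear region is extrapolated to zero to read off E_g).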

12.3.1.2 Photoluminescence (PL) Spectroscopy

The PL spectrum of MoS2 was recorded with λex ~ 325 nm, as shown in Fig. 12.4a. MoS2 exhibits a broad emission peak near 434 nm for the 325 nm excitation wavelength; the corresponding emission energy is estimated to be 2.85 eV, as displayed in Fig. 12.4b [21, 22].
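As a quick consistency check (a standard conversion, not stated in the chapter), the emission energy follows from the photon-energy relation

$$E\,(\text{eV}) = \frac{hc}{\lambda} \approx \frac{1240}{\lambda\,(\text{nm})}, \qquad E(434\ \text{nm}) \approx 2.86\ \text{eV},$$

in agreement with the reported value of 2.85 eV.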

12.3.1.3 Fourier Transform Infrared Spectroscopy (FTIR)

The existence of functional groups in MoS2 is identified by FTIR spectroscopy. The stretching vibration of the O–H bond is shown by the peak appearing at ~3110 cm⁻¹, and the Mo–O vibration is represented by the peak appearing at 619 cm⁻¹. Furthermore, the peaks at ~912, 1408, and 1600 cm⁻¹ represent the characteristic peaks of MoS2, as demonstrated in Fig. 12.5 [23, 24].


Fig. 12.4 a Photoluminescence spectra of MoS2 b emission energy curve of MoS2

Fig. 12.5 FTIR spectra of MoS2

12.3.2 Structural Studies

12.3.2.1 X-ray Diffraction (XRD)

The XRD analysis of the synthesized MoS2 was done over the 2θ range from 10° to 80°, and the corresponding pattern is shown in Fig. 12.6. In the figure, an intense peak is observed at 2θ ~ 14.86°, which is due to the reflection from the (002) plane of MoS2 [25, 26]. The interplanar spacing related to the most intense peak is estimated by using Bragg's law and is obtained to be ~5.95 Å. The interlayer spacing of the (002) plane is found to be smaller compared to bulk MoS2 [27, 28]. Two additional peaks at 2θ ~ 35.49° and 57.65° are observed, which are formed by the reflections from the (100) and (110) planes, respectively [26]. The Scherrer equation $D = \frac{k\lambda}{\beta\cos\theta}$ is used to identify


Fig. 12.6 XRD pattern of MoS2

the crystallite size, and the estimated value is 21.51 nm, where k and β represent the shape factor and the full width at half maximum (FWHM), respectively, θ represents the angle of diffraction, and λ corresponds to the wavelength of the X-rays.
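The two estimates above can be reproduced with a short calculation; the sketch below assumes the standard Cu Kα wavelength of 1.5406 Å (not stated in the chapter) and an illustrative FWHM value.

```python
import numpy as np

lam = 1.5406                     # X-ray wavelength in angstrom (assumed Cu K-alpha)
two_theta = np.radians(14.86)    # position of the (002) peak

# Bragg's law: n*lam = 2*d*sin(theta), with n = 1
d = lam / (2 * np.sin(two_theta / 2))
print(f"d(002) = {d:.2f} angstrom")   # ~5.95 angstrom, matching the text

# Scherrer equation: D = k*lam / (beta * cos(theta))
k, beta = 0.9, np.radians(0.38)       # shape factor; FWHM (illustrative value)
D = k * lam * 0.1 / (beta * np.cos(two_theta / 2))  # 0.1 converts angstrom -> nm
print(f"crystallite size D = {D:.1f} nm")            # ~21 nm for this beta
```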

12.3.3 Morphological Studies

12.3.3.1 Field Emission Scanning Electron Microscopy (FESEM)

The FESEM images of MoS2 are shown in Fig. 12.7a, b. A rod-like nature of the material is observed, with a rod diameter of around 98.83 nm and a rod length of around 2.58 μm, as shown in Fig. 12.7a [29, 30].

12.4 Conclusion

In this article, MoS2 was successfully synthesized by using a cost-effective hydrothermal method and was subjected to extensive optical, structural, and morphological characterizations. The hydrothermally synthesized MoS2 exhibited optical band gaps of ~1.55 and 2.91 eV along with an interplanar spacing of ~5.95 Å between its layers, which is smaller than the interlayer spacing of bulk MoS2. Also, the average crystallite size of MoS2 was estimated to be ~21.51 nm. From the PL spectra, the emission energy was calculated to be 2.85 eV. The FESEM image shows the rod-like nature of the synthesized material. This synthesized material can be exploited in various fields like


Fig. 12.7 a, b FESEM image of MoS2

electronic and optoelectronic devices, including resistive switching memory devices, in the future.

Acknowledgements The authors acknowledge the Central Instrumentation Facility, NIT Silchar, the SMaRT Laboratory, Department of Physics, NIT Silchar, and SAIF, S. N. Bose National Centre for Basic Sciences, for providing material characterization.

References

1. Panigrahi, P.K., Pathak, A.: Aqueous medium synthesis route for randomly stacked molybdenum disulfide. J. Nanopart. 1–10 (2013)
2. Cho, M.H., Ju, J., Kim, S.J., Jang, H.: Tribological properties of solid lubricants (graphite, Sb2S3, MoS2) for automotive brake friction materials. Wear 260, 855–860 (2006)
3. Zang, X., Chen, Q., Li, P., He, Y., Li, X., Zhu, M., Li, X., Wang, K., Zhong, M., Wu, D., Zhu, H.: Highly flexible and adaptable, all solid state supercapacitors based on graphene woven-fabric film electrodes. Small 10(13), 2583–2588 (2014)
4. Li, X., Zhu, H.: Two-dimensional MoS2: properties, preparation, and applications. J. Mater. 1(1), 33–44 (2015)
5. Radisavljevic, B., Radenovic, A., Brivio, J., Giacometti, V., Kis, A.: Single-layer MoS2 transistors. Nat. Nanotechnol. 6(3), 147–150 (2011)
6. Yang, X., Fu, W., Liu, W., Hong, J., Cai, Y., Jin, C., Xu, M., Wang, H., Yang, D., Chen, H.: Engineering crystalline structures of two dimensional MoS2 sheets for high-performance organic solar cells. J. Mater. Chem. A 2(21), 7727–7733 (2014)
7. Chang, Y.H., Zhang, W., Zhu, Y., Han, Y., Pu, J., Chang, J.K., Hsu, W.T., Huang, J.K., Hsu, C.L., Chiu, M.H., Takenobu, T., Li, H., Wu, C.I., Chang, W.H., Wee, A.T.S., Li, L.J.: Monolayer MoSe2 grown by chemical vapor deposition for fast photodetection. ACS Nano 8(8), 8582–8590 (2014)
8. Britnell, L., Ribeiro, R.M., Eckmann, A., Jalil, R., Belle, B.D., Mishchenko, A., Kim, Y.J., Gorbachev, R.V., Georgiou, T., Morozov, S.V., Grigorenko, A.N., Geim, A.K., Casiraghi, C., Neto, A.H.C., Novoselov, K.S.: Strong light-matter interactions in heterostructures of atomically thin films. Science 340(6138), 1311–1314 (2013)
9. Laursen, A.B., Kegnaes, S., Dahl, S., Chorkendorff, I.: Molybdenum sulfides—efficient and viable materials for electro- and photoelectrocatalytic hydrogen evolution. Energy Environ. Sci. 5, 5577–5591 (2012)
10. Chhowalla, M., Amaratunga, G.A.J.: Thin films of fullerene-like MoS2 nanoparticles with ultra-low friction and wear. Nature 407, 164–167 (2000)
11. Zhang, Y., Chang, T.R., Zhou, B., Cui, Y.T., Yan, H., Liu, Z., Schmitt, F., Lee, J., Moore, R., Chen, Y., Lin, H., Jeng, H.-T., Mo, S.-K., Hussain, Z., Bansil, A., Shen, Z.-X.: Direct observation of the transition from indirect to direct bandgap in atomically thin epitaxial MoSe2. Nat. Nanotechnol. 9(2), 111–115 (2013)
12. Novoselov, K.S., Jiang, D., Schedin, F., Booth, T.J., Khotkevich, V.V., Morozov, S.V., Geim, A.K.: Two-dimensional atomic crystals. Proc. Natl. Acad. Sci. 102(30), 10451–10453 (2005)
13. Lee, H.S., Min, S.-W., Chang, Y.-G., Park, M.K., Nam, T., Kim, H., Kim, J.H., Ryu, S., Im, S.: MoS2 nanosheet phototransistors with thickness-modulated optical energy gap. Nano Lett. 12(7), 3695–3700 (2012)
14. Splendiani, A., Sun, L., Zhang, Y., Li, T., Kim, J., Chim, C.-Y., Galli, G., Wang, F.: Emerging photoluminescence in monolayer MoS2. Nano Lett. 10(4), 1271–1275 (2010)
15. Zuo, L., Qu, R., Gao, H., Guan, X., Qi, X., Liu, C., Zhang, Z., Lei, X.: MoS2/RGO hybrids prepared by a hydrothermal route as a highly efficient catalytic for sonocatalytic degradation of methylene blue. Results Phys. 14, 102458 (2019)
16. Ahmad, R., Srivastava, R., Yadav, S., Singh, D., Gupta, G., Chand, S., Sapra, S.: Functionalized molybdenum disulphide nanosheets for 0D–2D hybrid nanostructure: photoinduced charge transfer and enhanced photoresponse. J. Phys. Chem. Lett. 8(8), 1729–1738 (2017)
17. Pathak, P.K.B., Nandi, T., Srivastava, J., Ghosh, S.K., Prasad, N.E.: Synthesis of MoS2 ultrafine particles: influence of reaction condition on the shape and size of particles. Int. J. Nanoparticles Nanotechnol. 5 (2019)
18. Ali, A., Mangrio, F.A., Chen, X., Dai, Y., Chen, K., Xu, X., Xia, R., Zhu, L.: Ultrathin MoS2 nanosheets for high-performance photoelectrochemical applications via plasmonic coupling with Au nanocrystals. Nanoscale 11, 7813–7824 (2019)
19. Liu, G., Ma, H., Teixeira, I., Sun, Z., Xia, Q., Hong, X., Tsang, S.C.E.: Hydrazine-assisted liquid exfoliation of MoS2 for catalytic hydrodeoxygenation of 4-methylphenol. Chem. Eur. J. 22(9), 2910–2914 (2016)
20. Ali, G.A.M., Thalji, M.R., Soh, W.C., Algarni, H., Chong, K.F.: One-step electrochemical synthesis of MoS2/graphene composite for supercapacitor application. J. Solid State Electrochem. 24, 25–34 (2020)
21. Saha, N., Sarkar, A., Ghosh, A.B., Dutta, A.K., Bhadu, G.R., Paul, P., Adhikary, B.: Highly active spherical amorphous MoS2: facile synthesis and application in photocatalytic degradation of rose bengal dye and hydrogenation of nitroarenes. RSC Adv. 5, 88848–88856 (2015)
22. Wu, J.-Y., Lin, M.N., Wang, L.D., Zhang, T.: Photoluminescence of MoS2 prepared by effective grinding-assisted sonication exfoliation. J. Nanomater. 2014, 1–7 (2014)
23. Mouri, S., Miyauchi, Y., Matsuda, K.: Tunable photoluminescence of monolayer MoS2 via chemical doping. Nano Lett. 13(12), 5944–5948 (2013)
24. Zeng, Y.-X., Zhong, X.-W., Liu, Z.-Q., Chen, S., Li, N.: Preparation and enhancement of thermal conductivity of heat transfer oil-based MoS2 nanofluids. J. Nanomater. 4, 1–6 (2013)
25. Lalithambika, K.C., Shanmugapriya, K., Sriram, S.: Photocatalytic activity of MoS2 nanoparticles: an experimental and DFT analysis. Appl. Phys. A 125(12), 817 (2019)
26. Zhang, X., Tang, H., Xue, M., Li, C.: Facile synthesis and characterization of ultrathin MoS2 nanosheets. Mater. Lett. 130, 83–86 (2014)
27. Muralikrishna, S., Manjunath, K., Samrat, D., Reddy, V., Ramakrishnappa, T., Nagaraju, D.H.: Hydrothermal synthesis of 2D MoS2 nanosheets for electrocatalytic hydrogen evolution reaction. RSC Adv. 5(109), 89389–89396 (2015)
28. Liu, Y., Nan, H., Wu, X., Pan, W., Wang, W., Bai, J., Zhao, W., Sun, L., Wang, X., Ni, Z.: Layer by layer thinning of MoS2 by plasma. ACS Nano 7(5), 4202–4209 (2013)
29. Viswan, G., Reshmi, S., Sachidanand, P.S., Mohan, M., Bhattacharjee, K.: Electrical characterization of tailored MoS2 nanostructures. IOP Conf. Ser.: Mater. Sci. Eng. 577, 012163 (2019)
30. Su, L., Xiao, Y., Han, G.: Synthesis of highly active cobalt molybdenum sulfide nanosheets by a one-step hydrothermal method for use in dye-sensitized solar cells. J. Mater. Sci. 52, 13541–13551 (2017)
28. Liu, Y., Nan, H., Wu, X., Pan, W., Wang, W., Bai, J., Zhao, W., Sun, L., Wang, X., Ni, Z.: Layer by layer thinning of MoS2 by plasma. ACS Nano 7(5), 4202–4209 (2013) 29. Viswan, G., Reshmi, S., Sachidanand, P.S., Mohan, M., Bhattacharjee, K.: Electrical characterization of tailored MoS2 nanostructures. IOP Conf. Ser.: Mater. Sci. Eng. 577, 012163 (2019) 30. Su, L., Xiao, Y., Han, G.: Synthesis of highly active cobalt molybdenum sulfide nanosheets by a one-step hydrothermal method for use in dye-sensitized solar cells. J. Mater. Sci. 52, 13541–13551 (2017)

Chapter 13

IRIS Image Encryption and Decryption Based Application Using Chaos System and Confusion Technique K. Archana, Sharath Sashi Kumar, Pradeep P. Gokak, M. Pragna, and M. L. J. Shruthi Abstract In today’s era, Digital Image Security is of primary importance and is receiving a huge amount of consideration. Because digital photographs are used in almost all technological fields, confidential data may be present in these images. To maintain the security of these images, several image encryption and decryption techniques have been presented by various authors and academics. We propose a novel approach for IRIS image-based algorithm to perform encryption and decryption in this paper. The image is compressed using the SVD technique, then shuffled based on the confusion pattern. Encryption and decryption are performed using a logistic mapping-based chaos technique. The proposed algorithm is compared to several existing techniques in terms of compression ratio, computation time, PSNR, SSIM, and NPCR values.

13.1 Introduction As technology becomes more prevalent in everyday life, massive amounts of data are generated on a daily basis. Text, audio, video, photos, and other types of data can all be used to store this information. Images are one of the most used ways to share data. We frequently trade photographs to convey information because “A picture is worth a thousand words”. The advancement of smartphones and smart devices has made it easier for the users to use the image data. In terms of storage costs, storing uncompressed data proves to be prohibitively expensive. Furthermore, data transmission in an uncompressed form will take in large bandwidth. Due to this factor, data compression techniques have become increasingly popular in some research areas such as signal processing and artificial intelligence before storage and transmission.

K. Archana (B) · S. S. Kumar · P. P. Gokak · M. Pragna · M. L. J. Shruthi Department of Electronics and Communication Engineering, PES University, RR Campus, Bangalore, Karnataka, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_13

155

156

K. Archana et al.

Compression of images reduces the quantity of data necessary for image representation. With the use of these techniques, it should be ensured that the quality of the image is not compromised at any point. The primary aim of compression is to represent all the digital images with as few bits as possible and with the highest quality possible. Because of the prevalence of multimedia applications, it has become necessary to secure multimedia data, particularly images. Image encryption is important in the field of Information Security. Some of the inherent image characteristics include large data capacity and high correlation between image pixels which differentiates image encryption with that of text encryption. Chaotic maps can be used to create new encryption algorithms based on properties like initial conditions and random behavior. In an era where safety and authentication are of utmost importance, there has been a lot of interest in biometric systems. Every individual’s physical or behavioral qualities are determined using biometrics. Iris is a biometric physiological feature which, even between identical twins, is universally unique and distinctive. The following are some of the applications for an iris-based imaging system: • • • • •

Ensure safe access to classified documents. To avoid cross-border infiltration. Highly secured state facilities. Maintain reports on attendance in large business systems. For authentication in ATMs.

In addition to hardening the system, confusion techniques are used to make it more difficult for hackers to obtain secure information. The final image becomes more jumbled as a result of these algorithms. The pixels are reconfigured using a mixture of confusion patterns to secure the image. This paper explains the outcome of the proposed method and compares it to existing methods. The study based on some performance parameters is also discussed. The following are the sections that make up the paper. The basic overview of Confusion Technique, Image Compression, and Image Cryptography is all covered in Sect. 13.1. Section 13.2 deals with the literature survey of various research publications. The proposed algorithm’s methodology is covered in Sect. 13.3. Section 13.4 contains the Block Diagram of the proposed algorithm. The acquired results and comparison analysis of parametric measurements are presented in Sect. 13.5. Section 13.6 discusses the conclusion of work.

13.2 Literature Survey Cryptographic algorithms based on chaos have revealed novel approaches to developing effective image encryption systems. Chaos technique has been broadly adopted for image encryption due to many of its capabilities.

13 IRIS Image Encryption and Decryption Based Application …

157

The foundation of chaos encryption is the ability of dynamic systems to generate a random sequence of integers. We will examine different existing image encryption and decryption approaches in this section. The image compression with regards to lossy techniques works by transforming domain conversion followed by quantization to eliminate coefficients with minimal contribution. The paper [1] proposes an orthonormal property-based SVD decomposition method. Mohamed Gafsi et al. employ a symmetrical, confusion-based, and diffusion-based cryptographic scheme [2]. For the generation of high quality keys, a complex chaos-based technique PRNG (Pseudo Random Number Generator) technique is used. For the initialization of the system, an external secret 256-bit key was created using SHA-256. This key generated is highly random, entropic, and complex. The architecture was appropriate for meeting the demands of real-time applications as it was easy to compare and execute. Henon map, 2d logistic map, and baker map were some of the chaotic maps used. To compress and encrypt the plain image, Jie et al. [3] suggested using a Random Bernoulli measurement matrix. In order to increase security, 2D-LASM was used for sequence generation which is permuted. The following parameters were used in this study: x0 = 0. 754, 686, 681, 982, 361, y0 = 0. 276, 025, 076, 998, 578, u = 0.9, threshold = 50, compression ratio = 0.25. The image quality of the restored image relates with the various carrier images. This scheme achieves good concealment and is secure in terms of image appearance. The encryption happens without any dependency on carrier images. The image is usually encrypted with a secret key that the attacker does not have. The paper [4] is based on the entropy values of the image itself as a secret key. As all of the old keys are no longer valid, this technique has the potential to solve security issues caused by large data capacities and high correlation among pixels in color image encryption. The results of the experiment show that the scrambled image has a nearly uniform histogram pattern and that the high security provided can withstand the vast majority of common attacks. The intrinsic features of any image such as correlation between pixels and the mass data capacity make some of the techniques such as AES, DES, and RSA not practical encryption algorithms. Epin et al. [5] conducted a survey of various chaosbased image encryption techniques. All of these techniques are being researched and analyzed in order to ensure the necessary security procedures. Furthermore, most of these techniques can be implemented in real time. Digital information in multimedia technologies should be secure to avoid unauthorized entities. Scanning techniques have been used successfully in a wide range of encryption methods. Cell shuffling-based image encryption algorithm is proposed in [6]. There are two stages to the method used here. One method is to divide and shuffle the image pixels into several blocks. The second method is used to get the encoded image by using a spiral wave-based scan model. The results are satisfactory. However, only one scan pattern was used. Another novel image encryption method has been implemented. In [7], some of the random scan patterns such as Continuous raster C, diagonal D, spiral S, and so on were used. In this technique, SCAN patterns were used to first encrypt the cells based on random permutation followed by Hill cipher technique using a complex chain of random scan patterns.


Sai Subha et al. [8] presented a data encryption method that combines a scan pattern with partial image encryption. It includes basic scan patterns such as zig-zag, spiral out, and others. The findings of that study confirm the efficiency of scan patterns in image encryption. The SCAN pattern is based on spatial access technology, which can generate a large number of scanning paths. The encryption approach in [9] is based on patterns created using the SCAN approach, and lossless image encryption is performed and recorded. In this work, we present an improved symmetric image encryption and decryption system based on chaos, followed by a confusion strategy. The aim is to ensure high security and performance with minimal computational complexity and resources.

13.3 Methodology

13.3.1 Image Compression Using Singular Value Decomposition Technique

Image compression refers to the removal of irrelevant or redundant information from an image. It is a technique for reducing image storage space without sacrificing image quality. There are two kinds of compression techniques.

Lossy compression: This method is suitable for applications that can accept a minor loss of fidelity; the image data are lost in such a way that the image cannot be reconstructed exactly once it undergoes the compression process. This technique is used in the majority of applications, one of which is the SVD technique.

Lossless compression: This method is mostly suitable for images used in medical applications, as the compressed image appears identical to the original. The process is reversible, i.e., the image can be reconstructed. The number of components that can be used is limited in such applications.

Singular Value Decomposition (SVD) is a linear-algebra factorization of the image matrix:

1. SVD is a factorization of a rectangular matrix A into $U \Sigma V^T$, in which U and V are orthonormal matrices and $\Sigma$ is a diagonal matrix composed of the singular values of A.
2. The singular values $\sigma_1 > \sigma_2 > \cdots > \sigma_n > 0$ appear in descending order along the main diagonal of $\Sigma$.
3. As the image quality gradually increases, the memory required to store the image also increases. Thus, it is better to use smaller SVD approximations, as they involve smaller ranks.

$$A = U \Sigma V^{T} \tag{13.1}$$

$$U = \left[\, U_1 \;\cdots\; U_n \,\right], \quad
\Sigma = \begin{bmatrix} \sigma_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_n \end{bmatrix}, \quad
V^{T} = \begin{bmatrix} V_1 \\ \vdots \\ V_n \end{bmatrix}^{T} \tag{13.2}$$
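As a concrete illustration of this rank truncation, a minimal NumPy sketch is given below; the rank k and the random test image are illustrative placeholders, not values from the paper.

```python
import numpy as np

def svd_compress(image: np.ndarray, k: int) -> np.ndarray:
    """Approximate an image by the rank-k truncation of its SVD (Eq. 13.1)."""
    U, s, Vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    # Singular values arrive in descending order, so the first k carry
    # the most visual information (Eq. 13.2).
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return np.clip(A_k, 0, 255).astype(np.uint8)

# Illustration on a random 256 x 256 "image": keep only 32 singular values.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
approx = svd_compress(img, k=32)
```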

13.3.2 Chaotic Logistic Map

The chaos system exhibits unpredictability and sensitivity to initial values: even small changes in the initial conditions can lead to completely unrelated sequences. The logistic function is one such chaotic function; it is highly sensitive to initial conditions and generates a non-periodic, pseudo-random sequence that is completely unpredictable if the bifurcation parameter μ is chosen correctly. The commonly used 1-D logistic mapping function is

$$X_{n+1} = \mu X_n (1 - X_n) \tag{13.3}$$

Here μ is known as the bifurcation parameter. As per published results, the system exhibits chaotic behavior when μ lies in [3.56994, 4]. The initial value X_0 ranges over [0, 1]. A chaotic sequence X_n is obtained for specified values of μ and X_0. A value of μ close to 4 means that a non-convergent, acyclic sequence is generated and that X_n proves to be more evenly distributed. Figure 13.1 depicts the bifurcation diagram of the logistic map, which demonstrates that the map is mostly chaotic for values of μ close to 4.
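A minimal sketch of generating such a sequence follows; the values of μ and X_0 below are arbitrary illustrative choices within the stated ranges.

```python
def logistic_sequence(mu: float, x0: float, n: int) -> list:
    """Iterate X_{n+1} = mu * X_n * (1 - X_n)  (Eq. 13.3) n times."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

# mu close to 4 yields a non-convergent, nearly uniform chaotic sequence.
seq = logistic_sequence(mu=3.999, x0=0.5, n=1000)
```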

Fig. 13.1 Chaotic behavior of logistic map


In the plot, the values visited asymptotically by the logistic function from almost all initial conditions are placed on the y-axis, and the bifurcation parameter μ is presented on the horizontal axis.

13.3.3 Key Generation

The proposed cryptosystem employs a 128-bit symmetric key, and a 1D logistic chaotic map is used to generate a unique key based on Eq. 13.3. The sensitivity of the keys is evaluated by providing the correct key once and an incorrect key once during the decryption stage. The image decrypted with the incorrect key reveals no part of the plain image; the encrypted image must be decrypted with the appropriate key. To withstand a brute-force attack, a cryptosystem should currently have a key space larger than 2^100. The proposed system uses a 128-bit secret key, so 2^128 possible keys can be generated. This key is generated by the concatenation of two 64-bit inputs (μ and X_n) in IEEE 754 format. IEEE 754, established in 1985, is the technical standard for floating-point arithmetic used to represent real numbers on Intel-based PCs and various Unix platforms. Figure 13.2 represents the IEEE 754 floating-point standard.
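A sketch of assembling the 128-bit key from two IEEE 754 doubles is given below; the paper does not specify the byte order or the exact operands, so big-endian packing of μ and an initial map value is assumed here.

```python
import struct

def make_key(mu: float, x0: float) -> bytes:
    """Concatenate two 64-bit IEEE 754 doubles into one 128-bit key."""
    # '>d' packs each float as a big-endian IEEE 754 double (8 bytes).
    return struct.pack('>d', mu) + struct.pack('>d', x0)

key = make_key(3.999, 0.5)
assert len(key) * 8 == 128  # key space of 2**128 values
```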

13.3.4 Confusion Technique

The confusion technique is used to alter the positions of the pixels in an image. When a confusion technique is applied to a matrix, the pixels are rearranged according to a pattern; this reduces the high correlation that exists between neighboring image pixels and thus increases the security level of the encryption algorithm. Numerous confusion techniques are available, such as the zigzag, Hilbert, and spiral scanning techniques.

Fig. 13.2 IEEE 754 floating point standard


The selection of each block and of the pixels within each block is performed by means of some of the existing confusion techniques [10], as shown in Fig. 13.3. After studying these, we devised our own confusion technique, which improves security by shuffling pixels while producing the desired output efficiently. The sparse coefficient matrix of the plain image is confused using our technique, which disrupts the highly correlated image pixels and thus increases the encryption algorithm's level of safety.
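The authors' specific rearrangement is the one illustrated in Figs. 13.4 and 13.5 and is not reproduced here; as a generic stand-in, a chaos-driven pixel permutation and its inverse can be sketched as follows.

```python
import numpy as np

def confuse(image: np.ndarray, chaos: np.ndarray):
    """Shuffle pixels with a permutation derived from a chaotic sequence."""
    flat = image.ravel()
    # Sorting the chaotic values yields a pseudo-random permutation.
    perm = np.argsort(chaos[:flat.size])
    return flat[perm].reshape(image.shape), perm

def deconfuse(shuffled: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Invert the permutation to restore the original pixel order."""
    flat = np.empty_like(shuffled.ravel())
    flat[perm] = shuffled.ravel()
    return flat.reshape(shuffled.shape)
```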

Fig. 13.3 Existing confusion techniques


Figure 13.4 shows the proposed confusion technique of our algorithm, and Fig. 13.5 shows the output matrix obtained after rearranging the confused pixels using the proposed technique.

Fig. 13.4 Input matrix and the sequence of rearrangement in the proposed confusion technique

Fig. 13.5 Output matrix


13.3.5 IRIS Database

An iris database is a collection of photographs that, at a minimum, include the iris region of the eye. Sensors operating in the visible (380–750 nm) or near-infrared (700–900 nm) spectrum are generally used to collect the images [11]. Figure 13.6 shows sample IRIS images from the database [12].

Fig. 13.6 IRIS sample images from bio-metrics Ideal test database


13.4 Block Diagram

13.5 Results

See Figs. 13.7, 13.8, 13.9, 13.10, and 13.11.


Fig. 13.7 Input image and its histogram representation

Fig. 13.8 Encrypted image and its histogram representation

Fig. 13.9 Decrypted image and its histogram representation


Fig. 13.10 Results obtained after applying confusion technique

Fig. 13.11 Comparison of the histograms of original image and reconstructed image


Fig. 13.12 Authentication screen

13.5.1 Android Application

Using the implemented Android app, users can send an IRIS image along with the key to the receiver. Even a minor change in the key value at the receiver end results in the user receiving the encrypted image itself. Figure 13.12 gives an overview of the authentication screens in the Android application that the user must complete in order to send the image (Figs. 13.13 and 13.14).

13.5.2 Image Restoration

Image restoration techniques estimate a clear image free from noise. They are used to reduce or eliminate noise and to recover from resolution loss; the aim of restoration is to compensate for or eliminate the flaws that degrade an image. The Wiener filter is a popular image restoration method: it simultaneously removes additive noise and inverts blurring. It is based on the assumption that the additive noise is a stationary random process unaffected by pixel location, and it minimizes the mean square error between the original and reconstructed images.
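A minimal restoration step using SciPy's adaptive Wiener filter is sketched below; the window size is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.signal import wiener

def restore(decrypted: np.ndarray, window: int = 5) -> np.ndarray:
    """Suppress residual additive noise with an adaptive Wiener filter."""
    filtered = wiener(decrypted.astype(np.float64), mysize=window)
    return np.clip(filtered, 0, 255).astype(np.uint8)
```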


Fig. 13.13 Main screen

13.6 Parametric Measurements

13.6.1 Compression Ratio

The image compression ratio is defined as a parameter for comparing the memory size of the original image with that of the compressed image:

$$\text{Compression Ratio} = \frac{\text{Size of Original Image}}{\text{Size of Compressed Image}} \tag{13.4}$$

Table 13.1 from [13] shows the comparison of the compression ratio values obtained using the DCT method with our proposed method for specified threshold values. Figure 13.15 depicts the compression ratio versus threshold plot for the set of values shown in Table 13.1.


Fig. 13.14 Encryption of the image

Table 13.1 Comparison of compression ratio of existing methods versus proposed method

T (threshold value) | Compression ratio % (existing algorithm) | Compression ratio % (proposed algorithm)
28 | 19.8225 | 22.23
30 | 21.9399 | 23.82
32 | 24.1728 | 25.41
34 | 26.6253 | 26.99
36 | 29.0583 | 29.58

13.6.2 Computation Time

The total time required to execute a computational process is referred to as the computation time or running time. We consider the computation time of the compression technique in our paper. Table 13.2 from [13] presents the comparison of the computation time obtained using the DCT method with that of our proposed method for specified threshold values. Figure 13.16 depicts a plot of computation time versus threshold for the set of values listed in Table 13.2.


Fig. 13.15 Plot of compression ratio versus threshold

Table 13.2 Comparison of computation time of existing methods versus proposed method

T (threshold value) | Computation time (existing algorithm) (ms) | Computation time (proposed algorithm) (ms)
28 | 385.1 | 171.8
30 | 354.4 | 156.2
32 | 329.8 | 187.5
34 | 371.4 | 156.25
36 | 371.0 | 140.625

For a given threshold, the compression ratio obtained by the proposed method is greater than that obtained by the DCT method, and the computation time of the proposed method is shorter than that of DCT. The proposed method therefore suits applications that require higher compression and lower computation time without compromising quality.

13.6.3 Peak Signal to Noise Ratio (PSNR)

As a measure of the fidelity of a signal, the Peak Signal-to-Noise Ratio (PSNR) is the ratio between the maximum power of the signal and the power of any corrupting noise.


Fig. 13.16 Plot of computation time versus threshold

$$\mathrm{PSNR} = 20 \log_{10} \frac{\mathrm{MAX}_I}{\sqrt{\mathrm{MSE}}} \tag{13.5}$$

where MAX_I indicates the highest possible pixel value of the image. Its value is taken as 255 in our paper, as 8 bits per sample are used to represent the image pixels.
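Equation 13.5 translates directly into code for 8-bit images; a minimal sketch with MAX_I = 255 as stated above:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """PSNR = 20 * log10(MAX_I / sqrt(MSE))  (Eq. 13.5), MAX_I = 255."""
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 20.0 * np.log10(255.0 / np.sqrt(mse))
```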

13.6.4 Structural Similarity Index (SSIM)

SSIM is a statistic for comparing the similarity between two images. It is a modern assessment method that focuses on three factors, namely brightness, contrast, and structure, to better match the workings of the human visual system. SSIM, in contrast to PSNR, is based on observable image structures. In comparison with the values in [3], the proposed algorithm provides better PSNR and SSIM values (Table 13.3). In general, a greater PSNR value indicates a higher-quality reconstruction.

Table 13.3 Comparison of PSNR and SSIM values

 | PSNR (dB) | SSIM
Jie et al. [3] | 29.2742 | 0.7308
Proposed algorithm | 34.0619430 | 0.8500343


Fig. 13.17 Comparison of SSIM and PSNR values of the existing algorithm and proposed algorithm

Table 13.4 NPCR values of existing and proposed methods

Method | NPCR (%)
Khaled Loukhaoukha et al. [14] | 99.5850
Sathish Kumar et al. [15] | 98.4754
Huang et al. [16] | 99.5400
Proposed algorithm | 99.60483

The SSIM index is a decimal value between 0 and 1; a value of 1, obtained for two identical sets of data, represents perfect structural similarity, while a value of zero indicates no structural similarity (Fig. 13.17).
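In practice, SSIM can be computed with scikit-image's implementation; a sketch assuming 8-bit grayscale inputs:

```python
from skimage.metrics import structural_similarity

def ssim_score(original, reconstructed) -> float:
    """Mean SSIM over the image; 1.0 means structurally identical."""
    return structural_similarity(original, reconstructed, data_range=255)
```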

13.6.5 Number of Pixel Changing Rate (NPCR)

Image encryption algorithms/ciphers are often evaluated against differential attacks using the number of pixel changing rate (NPCR). A high NPCR score has traditionally been viewed as strong resistance against differential attacks. The proposed approach has an NPCR value of 99.604%; a value greater than 99.5% confirms resistance against differential attacks. As a result, when compared with existing approaches, our algorithm outperforms them in terms of image encryption security and performance (Table 13.4 and Fig. 13.18).
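NPCR is typically computed between two cipher images whose plain images differ in a single pixel; a minimal sketch:

```python
import numpy as np

def npcr(cipher_a: np.ndarray, cipher_b: np.ndarray) -> float:
    """Percentage of pixel positions whose values differ between ciphers."""
    return 100.0 * float(np.mean(cipher_a != cipher_b))
```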

13.7 Conclusion

In this paper, we describe an improved algorithm for the encryption and decryption of IRIS images. The experiments were conducted on an Intel Core i5 processor with Python 3.9.


Fig. 13.18 Comparison of NPCR values of the existing algorithm and proposed algorithm

This algorithm is also used in the development of an Android application that transmits confidential encrypted IRIS images. The algorithm consists of the following steps:

1. Image compression using the SVD technique.
2. The pixels of the input image are shuffled based on the proposed confusion pattern.
3. A one-dimensional chaotic map, the logistic map, encrypts each pixel of the shuffled image.
4. Decryption is performed with the same chaotic map and symmetric key.
5. Reverse shuffling is performed by applying the inverse confusion pattern to the decrypted image.
6. The decrypted image undergoes restoration using a Wiener filter to improve the quality of the final image.
7. Comparative analysis of various performance parameters against the existing algorithms.
8. Development of an Android application for transmission of the encrypted IRIS images.

In comparison with existing encryption algorithms, the experimental results demonstrate the proposed scheme's higher efficiency and security in terms of parameters such as compression ratio, computation time, PSNR, SSIM, and NPCR values. These improvements can motivate practical applications of chaos-based image cryptosystems.

References

1. Yeganegi, F., Hassanzadeh, V., Ahadi, S.M.: Comparative performance evaluation of SVD-based image compression. In: Electrical Engineering (ICEE), Iranian Conference on, 2018, pp. 464–469. doi: https://doi.org/10.1109/ICEE.2018.8472544 (2018)
2. Mishra, K., Kumar Singh, S., Nagabhushan, P.: An improved SVD based image compression. In: Conference on Information and Communication Technology (CICT), 2018, pp. 1–5. doi: https://doi.org/10.1109/INFOCOMTECH.2018.8722414 (2018)
3. Hnesh, A.M.G., Demirel, H.: DWT-DCT-SVD based hybrid lossy image compression technique. In: International Image Processing, Applications and Systems (IPAS), 2016, pp. 1–5. doi: https://doi.org/10.1109/IPAS.2016.7880068 (2016)
4. Raghavendra, M.J., Prasantha, H.S., Sandya, S.: DCT SVD based hybrid transform coding for image compression. Int. J. Recent Innov. Trends Comput. Commun. 3 (2015)
5. Swathi, H., Shah, S., Gopichand, S.G.: Image compression using singular value decomposition. In: IOP Conference Series: Materials Science and Engineering, vol. 263, p. 042082. https://doi.org/10.1088/1757-899X/263/4/042082 (2017)
6. Sakshi, P., Bharath, K.P., Rajesh, M.: Image Encryption Decryption Using Chaotic Logistic Mapping and DNA Encoding (2020)
7. Gafsi, M., Abbassi, N., Ali Hajjaji, M., Malek, J., Mtibaa, A.: Improved chaos-based cryptosystem for medical image encryption and decryption. Volume 2020, Article ID 6612390. doi: https://doi.org/10.1155/2020/6612390 (2020)
8. Li, X., Zhang, Y.: Digital image encryption and decryption algorithm based on wavelet transform and chaos system. In: IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), 2016, pp. 253–257. doi: https://doi.org/10.1109/IMCEC.2016.7867211 (2016)
9. Jie, F., Ping, P., Zeyu, G., Yingchi, M.: A meaningful visually secure image encryption scheme. In: 2019 IEEE Fifth International Conference on Big Data Computing Service and Applications (BigDataService), 2019, pp. 199–204. doi: https://doi.org/10.1109/BigDataService.2019.00034 (2019)
10. Mohammed, A., Diaa, M., Al-Husainy, M.: Image encryption technique based on the entropy value of a random block. Int. J. Adv. Comp. Sci. Appl. 8. https://doi.org/10.14569/IJACSA.2017.080735 (2017)
11. Ephin, M., Vasanthi, N.A., Joy, J.: Survey of chaos based image encryption and decryption techniques (2013)
12. Arun Raj, R., George, S.N., Deepthi, P.P.: An expeditious chaos based digital image encryption algorithm. In: 2012 1st International Conference on Recent Advances in Information Technology (RAIT), 2012, pp. 14–18. doi: https://doi.org/10.1109/RAIT.2012.6194471 (2012)
13. Gonzalo, A., Li, S.: Some basic cryptographic requirements for chaos-based cryptosystems. Int. J. Bifurcation Chaos 16, 2129–2151. https://doi.org/10.1142/S0218127406015970 (2006)
14. Janakiraman, S., Thenmozhi, K., Rayappan, J.B.B., Amirtharajan, R.: Lightweight chaotic image encryption algorithm for real-time embedded system: implementation and analysis on 32-bit microcontroller. Microprocess. Microsyst. 56. https://doi.org/10.1016/j.micpro.2017.10.013 (2018)
15. Yuan, F., Wang, G.-Y., Cai, B.-Z.: Android SMS Encryption System based on Chaos, pp. 856–862. https://doi.org/10.1109/ICCT.2015.7399961 (2015)
16. Winston, J., Jude, D.: A comprehensive review on iris image-based biometric systems. Soft Comput. 23. https://doi.org/10.1007/s00500-018-3497-y (2019)
17. Marsden, E., Mackey, A., Plonsky, L.: The IRIS repository: advancing research practice and methodology. In: Mackey, A., Marsden, E. (eds.) Advancing Methodology and Practice: The IRIS Repository of Instruments for Research into Second Languages, pp. 1–21. Routledge, New York (2016)
18. http://biometrics.idealtest.org/
19. Wu, Y.: NPCR and UACI randomness tests for image encryption. Cyber J. J. Select. Areas Telecommun. (2011)
20. Singar, C.P., Bharti, J., Pateriya, R.K.: Image encryption based on cell shuffling and scanning techniques. In: 2017 International Conference on Recent Innovations in Signal Processing and Embedded Systems (RISE), 2017, pp. 257–263. doi: https://doi.org/10.1109/RISE.2017.8378163 (2017)
21. Praveena, V., Manohara Varma, P.H., Raju, B.: Image encryption using SCAN patterns and Hill Cipher. In: 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information and Communication Technology (RTEICT), 2018, pp. 1983–1988. doi: https://doi.org/10.1109/RTEICT42901.2018.9012499 (2018)
22. Geetha, R., Geetha, S.: A multi-layered "plus-minus one" reversible data embedding scheme. Multimedia Tools Appl. 80. https://doi.org/10.1007/s11042-021-10514-x (2021)
23. Sai Subha, V., Priyanka, U., Remya, K.R., Reenu, R.: Image encryption using scan pattern. Int. J. Soft Comput. Artif. Intell. (IJSCAI) 1(1), 18–21 (2013)
24. Chen, C., Chen, R.: Image encryption and decryption using SCAN methodology. In: 2006 Seventh International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'06), 2006, pp. 61–66. doi: https://doi.org/10.1109/PDCAT.2006.71 (2006)
25. Maniccam, S.S., Bourbakis, N.G.: Image and video encryption using SCAN patterns. Pattern Recogn. 37(4), 725–737. doi: https://doi.org/10.1016/j.patcog.2003.08.011 (2004)
26. Diaconu, A.-V., Loukhaoukha, K.: An improved secure image encryption algorithm based on Rubik's cube principle and digital chaotic cipher. Volume 2013, Article ID 848392. doi: https://doi.org/10.1155/2013/848392 (2013)
27. Kumar, S., Bagan, K.: A novel image encryption algorithm using pixel shuffling and BASE 64 encoding based chaotic block cipher (IMPSBEC). WSEAS Trans. Comput. (2011)
28. Huang, C.K., Nien, H.-H.: Multi chaotic systems based pixel shuffle for image encryption. Optics Commun. 282, 347–350 (2008)
29. Association for Computing Machinery: Proceedings of the 3rd International Conference on Cryptography, Security and Privacy, New York, NY, USA (2019)
30. Ratheesh Kumar, R., Mathew, J.: Image encryption: traditional methods versus alternative methods. In: 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), 2020, pp. 1–7. doi: https://doi.org/10.1109/ICCMC48092.2020.ICCMC000115 (2020)
31. Pan, F., Ren, C., Gong, J., Song, L.: A fast image encryption based on improved logistic map. In: 2021 International Symposium on Computer Technology and Information Science (ISCTIS), 2021, pp. 337–340. doi: https://doi.org/10.1109/ISCTIS51085.2021.00075 (2021)
32. Cheltha, J.N.C., Rakhra, M., Kumar, R., Walia, H.: A review on data hiding using steganography and cryptography. In: 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 2021, pp. 1–4. doi: https://doi.org/10.1109/ICRITO51393.2021.9596531 (2021)
33. Sun, S., Guo, Y.: A New Hyperchaotic Image Encryption Algorithm Based on Stochastic Signals, vol. 9, pp. 144035–144045. doi: https://doi.org/10.1109/ACCESS.2021.3121588 (2021)
34. Ge, B., Chen, X., Chen, G., Shen, Z.: Secure and Fast Image Encryption Algorithm Using Hyper-Chaos-Based Key Generator and Vector Operation, vol. 9, pp. 137635–137654. doi: https://doi.org/10.1109/ACCESS.2021.3118377 (2021)
35. Erkan, U., Toktas, A., Toktas, F.: A novel Euler chaotic map for image encryption. In: 2021 International Conference on Innovations in Intelligent Systems and Applications (INISTA), 2021, pp. 1–6. doi: https://doi.org/10.1109/INISTA52262.2021.9548443 (2021)
36. Gu, J., Zhang, H., Lu, Y., Li, H., Zhang, J.: Hiding iris biological features with encryption. In: 2021 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), 2021, pp. 501–505. doi: https://doi.org/10.1109/SPAC53836.2021.9539978 (2021)
37. Rupa, H.M.C., Sai, K.P., Pravallika, A., Sowmya, V.K.: Secure medical multimedia data using symmetric cipher based chaotic logistic mapping. In: 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), 2021, pp. 1–6. doi: https://doi.org/10.1109/ICSCAN53069.2021.9526406 (2021)

Chapter 14

Comparative Study of Aero and Non-aero Formula Student Type Race Car Using Optimum Lap

Shulabh Yadav, Tirth Lodhiya, Rajan Swami, and Shivam Prajapati

Abstract Theoretically, there are numerous pros and cons of having or not having an aero package on the car, but knowing all the ups and downs alone does not tell one whether to design an aero or a non-aero car to obtain the overall minimum lap time; the decision should be mathematically grounded. This paper presents a statistical analysis of both types of vehicles. Lap Time Simulations (LTS) must be performed with appropriate assumptions in the initial phases. Among the various methods of performing LTS, the simplest method is used here: Optimum Lap (OL) is used to perform the iterations. Starting with an understanding of the backend of the software, a mathematical analogy is explained. Some limitations of OL are then discussed so that, after designing, one does not rely on this data blindly and performs actual tests on track. Further, the vehicle and track modeling are explained, along with a case study comparing the lap times of aero and non-aero cars on two different tracks, Suzuka International Racing Course (SIRC) and Autódromo Concepción del Uruguay (ACDU), using OL. In the end, the results are compared on the basis of lap time, and several conclusions are drawn with respect to the event to be participated in.

14.1 Introduction

Formula Student is an international competition consisting of student teams competing in an F4 formula event. Students from around the world design, optimize, and manufacture open-wheel, open-cockpit formula student-type cars to participate in the grand event. Students need to work under the restrictions of a rulebook and come up with Formula cars capable of competing with each other. The event is divided into two parts, namely static events and dynamic events. Dynamic events include


Endurance, Autocross, Skid-pad, and Acceleration. The dynamic events carry the maximum points and are the deciding factor for winning teams; achieving minimum lap times in all the dynamic events can account for the team's victory.

In the pre-design phase, one does not have real-time track data for the vehicle and needs to decide on objectives at that point. One needs the velocity, the longitudinal acceleration, the track dimensions, the race line to be followed, etc., at every smallest section of the track. All of this data should be compared along with the distance traveled and the time taken by the vehicle to travel that distance. This is known as Lap Time Simulation (LTS). It is used to derive a rough approximation of the lap times achievable under different track and vehicle conditions. Without LTS, it is impossible to set the objectives to be achieved in the real scenario.

FS cars can go up to 100–120 kmph, so the aerodynamic forces acting on the vehicle can either hinder the performance or be used to its advantage. Formula Student cars can experience large lateral acceleration; to cope with this, the tires need to maintain traction while taking turns, and the better the traction, the higher the velocity achieved while cornering, resulting in a better lap time. Traction can be increased by using aerodynamic elements to produce enough downforce while cornering, giving a swifter and smoother cornering performance. Similarly, while covering the straights, the acceleration of the vehicle is limited by the maximum traction the tires can produce, and using aerodynamics to increase downforce results in higher acceleration. But another factor in play is the weight of the aerodynamic elements: adding an aerodynamic package increases the vehicle's gross weight, affecting the car. The aerodynamic package also induces a higher drag force and raises the CG height, affecting the dynamics of the vehicle; these factors need to be taken into consideration when deciding on an aero package.

While cornering, an increase in downforce increases the capacity to endure lateral acceleration, so a higher cornering velocity can be maintained. On straight sections, when the vehicle is accelerating, a non-aero vehicle has a smaller frontal area than an aero vehicle, which leads to less drag. Along with the higher speed in the corners, adding the aerodynamic package increases the weight of the vehicle, which reduces overall performance: an increase in weight means the maximum acceleration achievable by the engine is reduced. Also, the height of the CG increases due to the aero package, which affects the vehicle dynamics; the increased moment generated by lateral forces causes the vehicle to topple while cornering at a lower velocity than a non-aero car. The aero package also increases the drag coefficient of the vehicle, resulting in higher resistance to motion and hence a higher power requirement. In non-aero vehicles, the straights are faster than the corners. So, one should decide whether to add an aero package by performing LTS in software, and whichever configuration gives the minimum lap time should be adopted.
There are many paid and free software packages available in the market, but most of them require many vehicle parameters of which designers are not aware in the initial stages, and they also demand expertise. Optimum Lap (OL) is a free and


simplified LTS tool that requires very few inputs and can build a virtual vehicle in a very short time. By simulating in OL, a large amount of data is obtained, represented through interactive graphing tools that help in understanding the data quickly. It also becomes easier to compare data of different vehicles on different tracks. Further, the data can be exported as '.csv' for further processing in software such as MS Excel or MATLAB.

14.2 Background Simulation Process of OL

As described in the Introduction, OL is software in which one can design vehicle models, build different tracks (real or imaginary), simulate and analyze lap times on those tracks, and finally optimize the vehicle model. The foremost assumption made in the simulations performed in OL is that the whole vehicle is treated as a point mass; the software is therefore only used to rapidly analyze the characteristics of the vehicle on a given track in the initial stages. It utilizes a quasi-steady-state point-mass vehicle model, which gives it the ability to be accurate with respect to the combined states that the vehicle can achieve on the track. As the vehicle can corner while accelerating or decelerating simultaneously, one must understand the process followed by OL to interpret its results. As shown in Fig. 14.1, it calculates the corner speed first, then the speed at which the vehicle would be thrown out of the corner while accelerating, and finally the distance before which deceleration should begin in order to arrive at the maximum permissible speed at the apex of the corner.

From the graph shown in Fig. 14.1 it is clear that, theoretically, the vehicle can be driven on two different lines: the accelerating line (accelerating blue dashed line – cornering entry black dashed line – cornering red continuous line) or the combined continuous line (accelerating blue dashed line – braking red dashed

Fig. 14.1 Process followed in OL


Fig. 14.2 Smoother transition between different ride conditions [1]

line – cornering entry black dashed line – cornering red continuous line). Practically, it is not possible to drive any vehicle on the accelerating line, as the vehicle would be thrown out of the corner as soon as the driver starts cornering because of inertia and the excessive lateral forces acting upon it. When the combined states are applied, a smoother transition between the different ride conditions is achieved, as shown in Fig. 14.2.

14.3 Mathematical Analogy of Forces Acting on Vehicle Body at Any Location of Track

To work with any software, one should be aware of the mathematics used behind the screen. Let us take an example to develop the mathematical model of the car. The forces acting on any vehicle are the driving force, the braking force, the cornering force, the drag force, and the lift/downforce [4]. The driving force is responsible for moving the vehicle and is also known as the traction force. It depends upon the


Fig. 14.3 FBD of a point mass a from side view, b from top view

engine's driving force, the normal load of the vehicle, and the longitudinal force generated by the friction between the road and the tire. When an external force is applied in the direction opposite to the driving force to lower the speed of the vehicle, that force is known as the braking force; it depends upon the tire's longitudinal friction coefficient and the normal load acting upon it. When a vehicle moves through corners, it experiences centrifugal forces, which are termed lateral forces. The lateral forces depend on the normal load acting on the vehicle and the lateral friction coefficient of the tires. If an aerodynamic package is included on the vehicle, the aerodynamic forces increase. The increased drag force decreases the speed of the vehicle on straight paths as the opposing forces increase. The increased downforce raises the vertical load of the vehicle; it increases the traction of the vehicle in the corners and allows the vehicle to sustain more lateral g-force. If the aerodynamic package is not installed, the drag force is still present but smaller than for the aero vehicle, and the downforce is so small for the non-aero vehicle that it can be neglected. The rolling resistance can also be neglected in some cases, as it is very small compared with the traction forces. Figure 14.3 shows the free-body diagram of the car with the above forces acting upon a point mass of the vehicle.

Now the track is broken into segments to understand the vehicle's condition in each segment, which can later be integrated. The segments are categorized on the basis of the state of the vehicle in the given segment: the vehicle can be braking, cornering, or accelerating in any segment. All the sections of the track can be understood by considering a tight U-turn. Let us examine all the states and get an idea of how the calculations are made. In the following, ρ stands for air density, A for frontal area, v for vehicle speed, C_d for drag coefficient, μ_x for longitudinal friction coefficient, μ_y for lateral friction coefficient, R for turning radius, and m for the mass of the vehicle.

$$F_t = \text{Normal Load} \times \mu_x \tag{14.1}$$

$$F_d = \frac{1}{2}\, \rho\, C_d\, A\, v^2 \tag{14.2}$$

The beginning of a U-turn is the braking state. As shown in Fig. 14.4, the major forces considered while braking are Normal load and Drag Force. The normal load can be assumed to be equal to 100% of the weight. The tractive force due to braking


Fig. 14.4 Segment-wise illustration of the braking zone forces

can be calculated from the normal load if the longitudinal friction coefficient is known, as shown in Eq. 14.1. The drag force assists the braking force. The drag force is directly proportional to the air density and the frontal area, and proportional to the square of the vehicle velocity, as shown in Eq. 14.2 [2]. Using Newton's second law (Eq. 14.3), the maximum vehicle deceleration in the given segment is calculated (Eq. 14.4).

$$\sum F_{\text{ext}} = m \times a \tag{14.3}$$

$$a = \frac{F_t + F_d}{m} \tag{14.4}$$

$$v = -a \times t + v_0 \tag{14.5}$$

Here the time period is the time taken to cross the current segment, which is directly proportional to the size of the segment. The section following braking is cornering. As shown in Fig. 14.5, when the vehicle enters the cornering state, the lateral friction force applied by the vehicle and the centrifugal force are balanced, limited by the lateral friction coefficient of the tire. As all four wheels are in contact with the ground, lateral force is applied by all of them; hence the maximum normal load applies in this case (Eq. 14.6). By Newton's second law, the centrifugal acceleration


Fig. 14.5 Segment-wise illustration of cornering zone forces

(Eq. 14.3) is equated to the lateral acceleration (Eq. 14.7) applied by the vehicle to the tarmac. This gives the maximum possible velocity for the given segment of the corner, which is then integrated over all segments of the cornering region (Eq. 14.8).

$$F_y = \text{Normal Load} \times \mu_y \tag{14.6}$$

$$a = \frac{v^2}{R} = \frac{F_y}{m} \tag{14.7}$$

$$\therefore\; v = \sqrt{\frac{F_y \times R}{m}} \tag{14.8}$$

The third section is when the car leaves the corner and starts attaining speed. This is the accelerating section, as shown in Fig. 14.6. The major forces to be considered here are the engine driving force (Eq. 14.9) and the drag force (Eq. 14.2). The accelerating force can be calculated from the longitudinal friction coefficient and the normal load. The normal load varies depending on whether the vehicle is 2WD or 4WD. The position of the CG determines the percentage of the normal load acting on the driving tires, which also limits the maximum force applied by the driving wheels. For the sake of simplicity, we shall assume a 2WD vehicle with a 50% distribution of load to the driving wheels. This means the driving force applied by the engine is small compared with the braking force or lateral force that can be exerted.


Fig. 14.6 Section-wise illustration of acceleration zone forces

$$F_s = \text{Engine Driving Force} \quad (\text{as long as it is smaller than Normal Load} \times \mu_x) \tag{14.9}$$

In this case, the drag force opposes the driving force. As in the braking case, the drag force can be calculated from the air density, drag coefficient, frontal area, and vehicle velocity (Eq. 14.2). Similarly, by applying Newton's second law of motion (Eq. 14.3), the equation for the final velocity at the end of the segment can be derived. The final equation (Eq. 14.10) differs from Eq. 14.4 in the direction of the acceleration.

$$v = \left(\frac{F_t - F_d}{m}\right) t + v_0 \tag{14.10}$$

This velocity is integrated over all the segments in the accelerating state to build the velocity profile of the whole section. The velocity keeps increasing until the limit of the engine driving force is reached or the acceleration region ends, depending on the track geometry. This is the method used to compute the maximum velocity achievable in the accelerating region. This process is followed in each segment throughout the track, with the restricting condition that the vehicle can accelerate or decelerate in a corner only until it reaches the maximum allowed corner velocity; if the vehicle were allowed to exceed the permitted corner velocity, it would be thrown off the track, as discussed above. The solver takes care of the


transition between segments, which gives smooth velocity and acceleration profiles throughout the track. The total lap time is the sum of the times taken to cover each segment, which concludes the LTS.
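A compact sketch of this segment-wise point-mass logic (Eqs. 14.1–14.10) follows; the parameter values are illustrative (loosely taken from Table 14.1), and OL's actual solver additionally smooths the transitions between segments.

```python
import math

# Illustrative parameters, loosely following Table 14.1 (aero vehicle).
RHO, CD, AREA = 1.2, 0.8, 1.4      # air density, drag coeff., frontal area
M, MU_X, MU_Y = 300.0, 1.1, 1.1    # mass (kg), friction coefficients
G = 9.81

def drag(v: float) -> float:                    # Eq. 14.2
    return 0.5 * RHO * CD * AREA * v ** 2

def corner_speed(radius: float) -> float:       # Eqs. 14.6-14.8
    f_y = M * G * MU_Y                          # all four tires grip laterally
    return math.sqrt(f_y * radius / M)

def brake_step(v0: float, dt: float) -> float:  # Eqs. 14.1, 14.3-14.5
    f_t = M * G * MU_X                          # full normal load under braking
    a = (f_t + drag(v0)) / M                    # drag assists the braking force
    return max(v0 - a * dt, 0.0)

def accel_step(v0: float, dt: float, engine_force: float) -> float:
    # Eqs. 14.9-14.10: 2WD with ~50% of the load on the driven axle.
    f_t = min(engine_force, 0.5 * M * G * MU_X)
    a = (f_t - drag(v0)) / M                    # drag opposes the driving force
    return v0 + a * dt
```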

14.4 Limitations of OL

The lap time calculated in OL is for the ideal case, meaning the driver makes no mistake, performs every gear shift without error, and controls the throttle and brakes exactly as plotted on the lap-time graph. This is a hypothetical situation in which the driver and the vehicle are both at 100% of their capabilities, which is never achievable; in actual scenarios, the lap time achieved on track is always higher than the software results. Beyond the performance of the driver and the vehicle, there are certain limitations of the OL software that one should take into account before performing iterations in order to obtain the most appropriate results. The limitations are mentioned below:

• The software considers the vehicle as a point mass, so no load transfer is included at all. The suspension characteristics and the inertia are also not considered, so the tire traction becomes a linear function.
• No real tire model is used, so no suspension kinematics are considered. The effects of camber behavior, slip angle, and slip ratio are not taken into account. Tire temperature and pressure also play no role here, whereas in the actual scenario the grip increases as the tire surface temperature rises, so the lap time reduces considerably after some laps. Tire load sensitivity is also not considered: the tires are treated as rigid objects that provide the same longitudinal and lateral friction over the whole track. In reality, a tire is modeled as a spring with a vertical stiffness, and the friction it provides is a non-linear function of temperature, pressure, the load acting on it, etc.
• The yaw moment generated in the vehicle is also not considered: as there is no CG location or wheelbase/track width, the vehicle shows no tendency to understeer or oversteer. It is assumed that the vehicle always moves in a perfect-steer condition.
• Track banking is not considered; the track is treated as a 2D map only. No increase or decrease in traction due to centrifugal forces or load transfer is included. The transient effects of damping and inertia are also not modeled.

Due to these reasons, a designer cannot rely completely on the results of the software, as the final outcome depends on the sensitivity of the developed vehicle model and the vehicle parameters. However, one can obtain results within about a 10% error range in the pre-design phase by supplying the few most appropriate parameters for that stage; the assumptions will vary over subsequent iterations


and it is then easy to alter those data to obtain the most accurate lap time for a given vehicle and track configuration.

14.5 Model Development

Patil et al. [3] have performed LTS in OL and compared the results with actual race-track data; they explain the complete process of performing simulations in OL in their paper. A basic summary of the process for generating an engine model and a track is given below.

14.5.1 Vehicle Model in OL

Vehicle modeling in OL means the input of various parameters which in turn help simulate the vehicle behavior on the track. The parameters fall into several categories. General data consists of vehicle type, vehicle mass, and drive type (2WD/4WD). Aero data consists of the drag coefficient/efficiency of lift, downforce coefficient, frontal area, and air density. Tire data consists of tire dimensions, rolling resistance, and the longitudinal and lateral friction coefficients. The engine data category consists of the dynamometer characteristics of the corresponding engine, in the form of engine speed and the engine torque available at that speed, plus thermal efficiency and fuel energy density (optional, for fuel consumption and energy consumption calculations). The transmission data category includes the transmission type (sequential gearbox/continuously variable transmission), gear ratios for the corresponding gears (for a sequential gearbox only), final drive ratio, and drive efficiency. Finally, the scaling parameters category includes the power factor, aero factor, and grip factor.

After these parameters are entered, the software outputs plots of the expected behavior of the vehicle: the engine model, which gives plots of engine torque versus engine speed and engine power versus engine speed; the driveline model; the gearing model; and the traction model, which also marks the point where the tractive force equals the drag force, i.e., the top speed the vehicle can reach. The vehicle model then becomes ready to iterate on different tracks.

14.5.2 Track Modeling in OL

One can import standard tracks uploaded on the OL site or generate one's own track by providing data such as the track type and the track dimensions. The available track types are permanent circuit, temporary circuit, rally stage, oval circuit, autocross, and drag strip. The most appropriate track type can be selected by


the designer, who can then build the track by adding data in the track configurator. Straight paths and left or right turns with the appropriate cornering radii can be added in terms of the section lengths of the track. After finishing the configuration of the track, sections of different lengths can also be defined.

14.6 Case Study of an FS Aero and Non-aero Vehicle on 2 Different Tracks

After studying all the parameters and making appropriate assumptions for the initial stage, two vehicle models were developed and iterated on two different international tracks, namely Suzuka International Racing Course (SIRC) and Autódromo Concepción del Uruguay (ACDU). It was ensured that both vehicles have the same engine model (shown in Fig. 14.7) and the same tire data; only the aerodynamic parameters and the vehicle weight were kept different, as these vary between the two cases. The detailed input parameters are shown in Table 14.1. Usually the tracks are selected as per the competition site, but here two tracks with different track parameters were selected to show which configuration each track favors. The data for both tracks used in this paper are available on the OptimumG website [1]. The SIRC is 5.8 km long and has 18 turns per lap [5], whereas the ACDU track is 4.279 km long and has 9 turns per lap [6]. Both tracks were imported into the software, the simulations were performed from the simulation tab, and the results were compared.

Fig. 14.7 Engine characteristics

Table 14.1 Input parameters of FS aero and non-aero vehicles for OL

Parameter | FS non-aero vehicle | FS aero vehicle
Mass | 270 kg | 300 kg
Driven type | 2WD | 2WD
Drag coefficient | 0.49 | 0.8
Downforce coefficient | −0.06 | 1.6
Frontal area | 1 m² | 1.4 m²
Air density | 1.2 kg/m³ | 1.2 kg/m³
Tire radius | 0.26 m | 0.26 m
Rolling resistance coefficient | 0.015 | 0.015
μx | 1.1 | 1.1
μy | 1.1 | 1.1
Fuel used | Gasoline (47.2 MJ/kg) | Gasoline (47.2 MJ/kg)
Transmission type | Sequential gearbox | Sequential gearbox

14.7 Results

When both vehicle models were simulated on both tracks, the results were initially obtained only in the form of lap time; one needs to select the proper view to see a graphical representation of the results. Some of the results are color-coded: the color code is explained in Fig. 14.8 and is used in Fig. 14.9, which compares the speed of both vehicle models on the Suzuka International Racing Course, and in Fig. 14.10, which shows the same results for the Autódromo Concepción del Uruguay. It can be seen clearly from Fig. 14.9a, b or Fig. 14.10a, b that both vehicles achieve the maximum possible speed (i.e., 114.876 kmph) on both tracks. Also, the speed of the aero vehicle in the corners is higher than that of the non-aero vehicle because of the downforce. The total lap times taken by both vehicles on both tracks are shown in Table 14.2. The lap time of the non-aero vehicle is 2.26 s longer on the Suzuka track and 1.18 s longer on the Autódromo track; the percentage differences are −1.21% and −0.89% for the two tracks, respectively.

Fig. 14.8 Color code explanation for velocity graphs


Fig. 14.9 Velocity graph for SIRC a non-aero vehicle, b aero vehicle

Fig. 14.10 Velocity graph for ACDU a non-aero vehicle, b aero vehicle

Table 14.2 Lap time comparison of both the vehicles on both the tracks

 | SIRC | ACDU
Aero vehicle (s) | 186.46 | 132.90
Non-aero vehicle (s) | 188.72 | 134.08
Lap time difference (s) | −2.26 | −1.18
% Diff. w.r.t. aero vehicle (%) | −1.21 | −0.89

14.8 Conclusion

The lap times for the two vehicles come out very close, but if one is designing a vehicle for racing purposes, then even one thousandth of a second matters. If the vehicle is to be designed with the parameters given in Table 14.1 and raced on the given tracks, then one should definitely go for an aero vehicle. When a sector-wise observation was carried out, we noticed that in some places the non-aero vehicle reached its maximum speed faster than the aero vehicle: in straight sections, an aero vehicle becomes slower because of the increased drag due to its larger frontal area. So, if the vehicle is to be designed for drag racing or some oval-shaped track racing, then the non-aero type vehicle should be designed, keeping all other parameters constant.


If the vehicle is to be designed for a standard racing format such as Formula Student or FSAE, then the decision should be made only after performing the procedure given here: after comparing the virtual track data, the configuration that achieves the minimum lap time on the given race track should be chosen.

References

1. Kennard, C., Antunes, J., Optimum G (2021): OptimumG.com. Available at: http://www.optimumg.com. Accessed 24 Nov 2021
2. Milliken, W.F., Milliken, D.L.: Race Car Vehicle Dynamics. SAE International, Warrendale (1995)
3. Patil, M., et al.: Analyzing the performance of a formula type race car using lap time simulation. Indian J. Sci. Technol. 9(39) (2016). doi: https://doi.org/10.17485/ijst/2016/v9i39/94146
4. Wikipedia contributors: Suzuka International Racing Course, Wikipedia, The Free Encyclopedia (2021). Available at: https://en.wikipedia.org/w/index.php?title=Suzuka_International_Racing_Course&oldid=1055001499. Accessed 24 Nov 2021
5. Wikipedia contributors: Autódromo de Concepción del Uruguay, Wikipedia, The Free Encyclopedia (2019). Available at: https://en.wikipedia.org/w/index.php?title=Aut%C3%B3dromo_de_Concepci%C3%B3n_del_Uruguay&oldid=905432622. Accessed 24 Nov 2021
6. Yarin, L.P.: Drag force acting on a body moving in viscous fluid. In: The Pi-Theorem. Springer, Berlin, pp. 71–102 (2012)

Chapter 15

Simulation and Stabilization of a Custom-Made Quadcopter in Gazebo Using ArduPilot and QGroundControl

Nakul Nair, K. B. Sareth, Rao R. Bhavani, and Ashish Mohan

Abstract Simulation has become an integral part of the research and development domain in the current era. Especially in the field of aerial robotics, the introduction of simulation techniques and several simulation platforms has been a great breakthrough, giving researchers many options to iterate on the design and functioning of the final model without building it in real life, which eliminates the financial cost of building the model merely for testing purposes. In this paper, we propose a systematic approach through which one can design and simulate a custom quadcopter in a platform like Gazebo with the help of the ArduPilot flight stack and the QGroundControl software, and then stabilize it in simulation with the help of the LiftDragPlugin. An experimental set-up was designed and carried out to prove that the quadcopter was stable before continuing to build the physical model. The main objective of this paper is to demonstrate the idea of building a personalized quadcopter and integrating it with major simulation platforms like Gazebo and ArduPilot to make the vehicle work and fly stably. This work aims to inspire creative aerial robotics researchers to think outside the box and to motivate them to come up with their own vehicles for their applications rather than depending on the already established set of simulated vehicles available.


15.1 Introduction

The quadcopter is one of the most used multirotor frames for various robotics applications like disaster management [1], package delivery, first-aid assistance, etc., and the most important aspect of a quadrotor, or any other multirotor frame, to be used for any real-life application is its stability [2]. Stability is very important when considering applications that involve flying in the real world to perform various tasks, and stability of the vehicle under idle as well as real-life atmospheric and environmental conditions is crucial in designing and building a quadrotor [3]. Using simulation platforms to make sure that the vehicle is properly stable under every possible circumstance has become a normal and mandatory practice amongst researchers as well as aerial vehicle enthusiasts [4]. The real challenge of simulating a drone in a platform like Gazebo arises when the quadcopter needs to be built from scratch: designing a quadcopter from scratch and making it compatible with Gazebo simulation is a systematic process involving several additional steps and software tools.

Since the stability of a quadcopter has an extremely important place in the research and development of the unmanned aerial vehicle domain, proper stability of the vehicle can be pursued using simulation platforms like MATLAB and Simulink; stability is achieved by tuning the gain parameters of the PID controllers used in the flight system [5, 6]. Simulation of an unmanned aerial vehicle is an explicit part of testing the vehicle, showing how it would respond if it were tested in the real world. Simulating the vehicle in a virtual platform reduces the risk of damaging it while testing its performance and allows perfecting the vehicle before finalizing the model [4, 7]. Simulating a custom-designed quadcopter in a platform like Gazebo can be challenging, as the parameters used for simulating and stabilizing the vehicle must be accurate and compatible with the design. Gazebo is one of the prominent simulation platforms of recent times; it interfaces effectively with ROS and ArduPilot, which makes the simulation process straightforward. Other simulation platforms like X-Plane, FlightGear, JAVSim, Microsoft AirSim, and the UEASim simulator are also preferred by researchers across the globe [8]. Using ROS with Gazebo and Simulink can also yield a working simulation environment in which researchers can test the performance of already available quadcopter simulations [7], making various upgrades to the quadcopter parameters virtually so that performance testing is done in a virtual environment, which makes the job of physical implementation as easy as possible [9].

ArduPilot is a widely used simulation platform for simulating and controlling unmanned vehicles. It establishes a connection with the vehicle simulated in Gazebo through TCP and UDP port communication, which in turn allows the user to control the vehicle in Gazebo through the terminal using different predefined commands [10]. ArduPilot can make the process of testing a quadcopter much easier by providing many convenient functions which can be used as per the user's demand [11].


QGroundControl acts as the virtual equivalent of a ground control station for the quadcopter in simulation. It can also display the desired location to which the vehicle is about to fly using GPS connectivity. Through QGroundControl, functions like calibrating the vehicle, setting up missions for the vehicle to fly, and analysing the values of different parameters like the roll, pitch, and yaw of the vehicle can be performed without any complexity [12]. This paper puts forward a systematic approach to simulating a custom-made quadcopter in the Gazebo simulator and stabilizing it by tuning the LiftDragPlugin [13] with the help of the ArduPilot flight stack and the QGroundControl software.

15.1.1 Architecture

The main components of the simulation system are a 3D model of the custom quadcopter interfaced with Gazebo, SITL Pixhawk with the ArduPilot firmware [11], and QGroundControl [12]. SITL is the component responsible for establishing the connection between terminal commands and the quadcopter model spawned in Gazebo. The LiftDragPlugin [13] is responsible for lifting the quadcopter, given the necessary values of the plug-in parameters, namely the angle of attack, the coefficient of drag, the coefficient of lift, etc. This plug-in is incorporated in the SDF file containing the model of the quadcopter (see Fig. 15.1).

ArduPilot is an open-source flight stack capable of controlling unmanned vehicles both in simulation and in real-life conditions [11]. The architecture of the SITL system of ArduPilot is given in Fig. 15.2. The simulation is started by establishing the SITL and MAVProxy through the sim_vehicle.py startup file, in which all the details regarding the simulation and the parameters for the SITL are described. MAVProxy, a functionality within the ArduPilot firmware, helps create a simple and consistent command-line interface through which ground control station (GCS) processes like setting

Fig. 15.1 Architecture of the proposed approach


Fig. 15.2 Software in the loop (SITL) architecture of ArduPilot

MAVProxy and ArduPilot are interfaced through transmission control protocol (TCP) communication. With the help of predefined keywords, the user can also change the default parameters already set in the start-up file. For example, using the command sim_vehicle.py -v ArduPlane sets up the simulation with ArduPlane as the vehicle. The vehicle is spawned in Gazebo directly using the world file of the vehicle, which eventually calls the SDF file of the custom-built 3D model of the quadcopter, via the gazebo --verbose command. For this project, we use another virtual ground control station, QGroundControl, which is connected to the ArduPilot SITL through the User Datagram Protocol (UDP). Once the connection is made, QGroundControl acts as the GCS through which we can control the flight of the vehicle in simulation, adjust different parameters and set missions for the vehicle [11, 14].
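As a minimal sketch of this workflow (the copter frame name and the world file name here are illustrative assumptions, not values taken from this paper):

# Terminal 1: start the ArduPilot SITL together with MAVProxy for a copter vehicle
# (-v selects the vehicle type, -f selects the frame; gazebo-iris is assumed here)
sim_vehicle.py -v ArduCopter -f gazebo-iris --console --map

# Terminal 2: spawn the custom world (which loads the quadcopter SDF) with verbose logging
gazebo --verbose custom_quadcopter.world

# QGroundControl, once opened, auto-connects to the SITL over UDP (port 14550 by default)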

15.1.2 Quadcopter Model Description The quadcopter CAD model is created in SOLIDWORKS. After creating the final assembly file for the model, the CAD file is converted to the URDF file format, which can be done easily using the SOLIDWORKS-to-URDF converter plug-in. The Unified Robotic Description Format (URDF) and the Simulation Description Format (SDF) are two XML formats that describe models and environments for simulation, visualization and control.


Fig. 15.3 Drawing of the proposed model with SDF file description

The format used in this experimentation is SDF, as it is one of the more widely used file formats in robotic simulations. The model of the quadcopter consists of a base link with four arms and a rotor at the end of each arm link (see Fig. 15.3). In addition to the visible links, the quadcopter also carries a flight controller with a GPS and an IMU, of which an accelerometer, a barometer and a magnetometer are a part. To aid the rotation of the quadcopter, dedicated plug-ins are provided for each of the rotors. The Iris model quadcopter is taken as the reference for the development of this proposed custom quadcopter model. The SDF file used for bringing up the quadcopter model in Gazebo contains the important kinematic and dynamic properties of the robot, which are defined using various tags (see Fig. 15.4). Link is the tag used to define the basic features and properties of each link in the model; it can specify details like the name of the link and its position. Visual is the tag responsible for the visual description, dimensions and origin of the model, and in some cases the visual tag also contains the path to the mesh file of the 3D model. The collision tag describes the collision boundary for the defined links; like the visual tag, it contains sub-tags such as geometry and material. The joint tag establishes and specifies the type of joint which holds the links together; different joints like fixed, revolute and prismatic can be defined using joint tags.
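A skeletal SDF description consistent with the tags discussed above is sketched here; the link names, dimensions and mass are illustrative assumptions, not the actual values of the proposed model.

<?xml version="1.0"?>
<sdf version="1.6">
  <model name="custom_quad">
    <link name="base_link">                        <!-- central body of the quadcopter -->
      <inertial><mass>1.5</mass></inertial>        <!-- assumed mass in kg -->
      <visual name="base_visual">
        <geometry><box><size>0.2 0.2 0.05</size></box></geometry>
      </visual>
      <collision name="base_collision">            <!-- collision boundary of the link -->
        <geometry><box><size>0.2 0.2 0.05</size></box></geometry>
      </collision>
    </link>
    <link name="rotor_0">                          <!-- one such link per rotor, four in total -->
      <visual name="rotor_0_visual">
        <geometry><cylinder><radius>0.1</radius><length>0.005</length></cylinder></geometry>
      </visual>
    </link>
    <joint name="rotor_0_joint" type="revolute">   <!-- each rotor spins about its vertical axis -->
      <parent>base_link</parent>
      <child>rotor_0</child>
      <axis><xyz>0 0 1</xyz></axis>
    </joint>
  </model>
</sdf>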


Fig. 15.4 Example format for the SDF file

After completion of the SDF file for the quadcopter model, the vehicle can be visualized in the Gazebo window by loading the SDF file into the Gazebo platform (see Fig. 15.5).

Fig. 15.5 Quadcopter model spawned in Gazebo environment


15.1.3 LiftDragPlugin This is one of the most used plug-ins when it comes to stabilizing an unmanned aerial vehicle in a simulation environment. The plug-in is responsible for generating virtual lift and drag forces, which results in the proper functioning of the vehicle [13]. Some of the vehicles which incorporate the LiftDragPlugin are fixed-wing and VTOL aircraft, multi-copters, etc. The basic skeleton of the plug-in in XML format is given below. A plug-in is defined inside the plugin tag, and all the parameters responsible for the action of that plug-in are given inside this tag itself (see Fig. 15.6). Each parameter of the plug-in has its own tag associated with it, and the value of each parameter is specified inside that tag. The values of the parameters may vary from model to model according to the dimensional characteristics. The values can be found using the conventional trial-and-error method, followed by estimating them from the coefficient of lift/drag versus angle of attack curve (see Fig. 15.7) [13].

Fig. 15.6 LiftDragPlugin format
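For reference, a minimal LiftDragPlugin block of the kind shown in Fig. 15.6 is sketched below, following the Gazebo aerodynamics documentation [13]; the numerical values are placeholders to be tuned as in Sect. 15.3, not the final values of this work.

<plugin name="rotor_0_lift" filename="libLiftDragPlugin.so">
  <a0>0.25</a0>                       <!-- angle of attack (rad); the parameter tuned in Sect. 15.3 -->
  <cla>4.25</cla>                     <!-- slope of the lift coefficient curve (assumed) -->
  <cda>0.10</cda>                     <!-- slope of the drag coefficient curve (assumed) -->
  <alpha_stall>1.4</alpha_stall>      <!-- stall angle in rad (assumed) -->
  <area>0.002</area>                  <!-- blade area in m^2 (assumed) -->
  <air_density>1.2041</air_density>
  <cp>0.084 0 0</cp>                  <!-- centre of pressure relative to the link origin -->
  <forward>1 0 0</forward>            <!-- chordwise direction -->
  <upward>0 0 1</upward>              <!-- direction in which lift acts -->
  <link_name>rotor_0</link_name>
</plugin>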


Fig. 15.7 Coefficient of lift versus angle of attack curve


15.2 Experimental Set-up The experimental set-up is designed under the assumption that no atmospheric conditions affect the flight of the quadcopter. The experimentation is carried out by comparing the results obtained for each of the parameters, namely roll, pitch and yaw, for each value changed in the LiftDragPlugin. As no single fixed value exists for the parameters in the LiftDragPlugin, every unmanned aerial vehicle will have different parameter values. Consequently, the parameters should be found by fixing a particular value for one parameter and then changing the other parametric values [13]. The experimentation is done with the help of QGroundControl, where the drone is simulated using the mission planner, and for each of the values provided in the LiftDragPlugin, the drone exhibits a particular roll, pitch and yaw. Further, a particular mission for the quadcopter stabilization test is designed in QGroundControl, and the same mission is repeated with the same vehicle but with different plug-in values. The response of the vehicle to the different plug-in values can be observed visually in the Gazebo simulation window. The corresponding roll, pitch and yaw values are obtained, and their graphical representation can be found in the application settings option in QGroundControl [12].


Fig. 15.8 QGroundControl window with the desired mission trajectory

The manoeuvre path designed to test the performance of the quadcopter is displayed in the QGroundControl window (see Fig. 15.8). The path includes a take-off, a short forward translation from the take-off point and a landing at the final position.

15.3 Results and Conclusion As part of the experiment, six test cases were tried. For each case, the value of the angle of attack (a0) parameter was chosen through a trial-and-error method, and the values of both the coefficient of lift (cla) and the coefficient of drag (cda) parameters were estimated with the help of the relation deduced from the cla versus a0 and cda versus a0 plots. Initially, the value of a0 was taken as 0.4, and the corresponding values of cla and cda were taken. While performing the mission, the drone was able to complete it successfully, but it went through considerable roll and pitch fluctuation throughout the course of the mission (see Fig. 15.9). This result led to the inference that the chosen value of a0 was not desirable. Taking 0.4 as the reference for experimentation, we took two further values of a0, 0.5 and 0.3, to determine whether a0 should be reduced or increased to attain stable motion.


Fig. 15.9 Roll, pitch versus time (a0 = 0.4)

a0 = 0.5 clearly shows a drastic increase in the roll and pitch motion (see Fig. 15.10). Hence, the value of a0 should be reduced to obtain a stable flight. On taking 0.3 as the value of a0, the roll and pitch during the flight were reduced, but not enough to provide a stable flight for the quadcopter (see Fig. 15.11). On further reducing the value of a0 to 0.2, the drone showed large fluctuations in the roll and pitch values and was unsuccessful in completing the mission, raising a potential thrust loss error (see Fig. 15.12). From the above experiments, it is evident that the appropriate value of a0 for the quadcopter lies between 0.2 and 0.3.

Fig. 15.10 Roll, pitch versus time (a0 = 0.5)

Fig. 15.11 Roll, pitch versus time (a0 = 0.3)


Fig. 15.12 Roll, pitch versus time (a0 = 0.2)

On taking 0.25 as the value of a0, the quadcopter showed promising, stable motion throughout the mission and completed it successfully (see Fig. 15.13). After further experimentation, it was concluded that a0 = 0.22 provided the quadcopter with a much more stable flight (see Fig. 15.14), and any value less than 0.22 resulted in thrust loss, with the vehicle unable to complete the mission. The results obtained from the above experimentation help to accurately determine the values required for the parameters in the LiftDragPlugin so that the custom-designed quadcopter, whose inertia values differ from those of already existing quadcopters, achieves a smooth and stable flight. The values of the various parameters depend strongly on the weight and inertia values of the quadcopter, and any change in the inertia values could cause the quadcopter to crash.

Fig. 15.13 Roll, pitch versus time (a0 = 0.25)

Fig. 15.14 Roll, pitch versus time (a0 = 0.22)


It is concluded that the objectives put forward at the beginning of the paper were accomplished successfully, although some challenges were faced along the way. A custom-designed quadcopter was simulated in the Gazebo platform and completed a trajectory successfully after tuning of the parameters in the LiftDragPlugin with the help of both the ArduPilot and QGroundControl platforms. The future scope is to apply the process explained in this paper to other multirotor unmanned aerial vehicles.

References
1. Nikhil, N., Shreyas, S.M., Vyshnavi, G., Yadav, S.: Unmanned aerial vehicles (UAV) in disaster management applications. In: 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). IEEE, Tirunelveli, India (2020)
2. Gupte, S., Mohandas, P.I.T., Conrad, J.M.: A survey of quadrotor unmanned aerial vehicles. In: 2012 Proceedings of IEEE Southeastcon, pp. 1–6. https://doi.org/10.1109/SECon.2012.6196930 (2012)
3. Katiar, A., Rashdi, R., Ali, Z., Baig, U.: Control and stability analysis of quadcopter. In: 2018 International Conference on Computing, Mathematics and Engineering Technologies—iCoMET (2018)
4. Choi, H., Crump, C., Duriez, C., Elmquist, A., Hager, G., Han, D., Hearl, F., Hodgins, J., Jain, A., Leve, F., Li, C., Meier, F., Negrut, D., Righetti, L., Rodriguez, A., Tan, J., Trinkle, J.: On the use of simulation in robotics: opportunities, challenges, and suggestions for moving forward. PNAS 118(1), e1907856118 (2021)
5. Bianca Sabrina, C.S., Egidio Raimundo, N., Alexandre Baratella, L., João Paulo, C.H.: A quadcopter stability analysis: a case study using simulation. World Acad. Sci. Eng. Technol. Int. J. Aeros. Mech. Eng. 15(1) (2021)
6. Praveen, V., Anju Pillai, S.: Modeling and simulation of quadcopter using PID controller. Int. J. Contr. Theo. Appl. 9, 7151–7158 (2016)
7. Nithya, M., Rashmi, M.R.: Gazebo—ROS—Simulink framework for hover control and trajectory tracking of Crazyflie 2.0. In: TENCON 2019—2019 IEEE Region 10 Conference (TENCON). IEEE, Kochi, India, pp. 649–653 (2019)
8. Idriss Hentati, A., Krichen, L., Fourati, M., Chaari Fourati, L.: Simulation tools, environments and frameworks for UAV systems performance analysis. In: 2018 14th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 1495–1500 (2018)
9. Akhil, M., Anand, M.K., Sreekumar, A., Hithesan, P.: Simulation of the mathematical model of a quad rotor control system using Matlab Simulink. Appl. Mech. Mater. 110–116, 2577–2584 (2012)
10. MAVProxy documentation, www.ArduPilot.org/dev/docs/mavproxy-developer-gcs.html. Last accessed 28 Aug 2021
11. ArduPilot documentation, www.ArduPilot.org/ArduPilot/. Last accessed 28 Aug 2021
12. QGroundControl documentation, www.docs.qgroundcontrol.com. Last accessed 28 Aug 2021
13. LiftDragPlugin documentation, www.gazebosim.org/tutorials?tut=aerodynamics&cat=physics. Last accessed 28 Aug 2021
14. Qays, H.M., Jumaa, B.A., Salman, A.D.: Design and implementation of autonomous quadcopter using SITL simulator. Iraqi J. Comp. Commun. Contr. Syst. Eng. (IJCCCE) 20(1) (2020)

Chapter 16

Construction of Reliability Sampling Plans Using Dagum Distribution Under Type-I Censoring

R. Vijayaraghavan, K. Sathya Narayana Sharma, and C. R. Saranya

Abstract Reliability sampling plans are generally employed to decide on the acceptability or non-acceptability of a lot of finished products by performing tests on product lifetimes and observing the number of failures. Product lifetime is the quality parameter and is modeled by a suitable lifetime probability distribution. In this article, assuming that the product lifetime follows the Dagum distribution, a reliability single sampling plan (RSSP) is constructed. A procedure for selecting the plan parameters, indexed by the acceptable and unacceptable median life, that protects both the producer and the consumer is evolved.

16.1 Introduction A reliability sampling plan, or life test sampling plan, is a set of rules and procedures for taking decisions on the acceptability or non-acceptability of a lot based on the information provided by the test results, examining whether the lifetime of the product attains a specified standard from the sampled lifetime data. In a reliability sampling plan, the lifetime of the product is considered the quality parameter, and it is described by a lifetime distribution. According to Fertig and Mann [1], a life test sampling plan is a technique for making a decision on the inspected lot based on samples, with the concept of censoring used to keep the testing time at an appropriate level. Censoring is the termination of the life testing experiment of the sampled units. Type-I censoring schemes are termed time truncated life tests: in the life test experiment, n sampled items are placed on test until a test time, t, is reached, and the test is then terminated.


The literature in product control records important contributions made during the past five decades in the development of reliability sampling using lifetime distributions, and several continuous probability distributions have proved relevant to the construction of life test sampling plans. While life test sampling plans employing the exponential, Weibull, lognormal and gamma distributions have been developed over this period, the literature also provides applications of numerous other distributions for modeling lifetime data. Epstein [2, 3], Handbook H-108 [4] and Goode and Kao [5–7] proposed the construction of life test sampling plans using the exponential and Weibull distributions. Works on life test sampling plans include Gupta [8], Schilling and Neubauer [9], Balakrishnan et al. [10], Kalaiselvi and Vijayaraghavan [11], Kalaiselvi et al. [12], Loganathan et al. [13], Vijayaraghavan et al. [14], Vijayaraghavan and Uma [15, 16] and Vijayaraghavan et al. [17, 18]. Dagum [19] introduced the Dagum distribution, and Kleiber [20] discussed its applications. The hazard function of the Dagum distribution is either monotonically decreasing or upside-down bathtub shaped. As a positively skewed distribution, the Dagum distribution is considered a lifetime distribution (see Domma et al. [21]). In this research, sampling plans for life tests are formulated assuming that the product lifetime follows the Dagum distribution, indexed by the acceptable and unacceptable median life.

16.2 Dagum Distribution Let T, the lifetime of the components, be considered as a random variable, and assume that T follows the Dagum distribution. The probability density function and the cumulative distribution function of T are, respectively, defined by

$$f(t;\theta,\varphi,\delta)=\frac{\varphi\delta}{\theta}\left(\frac{t}{\theta}\right)^{-(\delta+1)}\left[1+\left(\frac{t}{\theta}\right)^{-\delta}\right]^{-(\varphi+1)},\quad t>0,\ \theta>0,\ \varphi>0,\ \delta>0 \tag{16.1}$$

and

$$F(t;\theta,\varphi,\delta)=\left[1+\left(\frac{t}{\theta}\right)^{-\delta}\right]^{-\varphi},\quad t>0,\ \theta>0,\ \varphi>0,\ \delta>0 \tag{16.2}$$

where θ is the scale parameter, and ϕ and δ are the shape parameters.

The median lifetime and the hazard function for a specified time t under the Dagum distribution are, respectively, given by

$$\mu_d=\theta\left(2^{1/\varphi}-1\right)^{-1/\delta} \tag{16.3}$$

and

$$Z(t)=\frac{\dfrac{\varphi\delta}{\theta}\left(\dfrac{t}{\theta}\right)^{-(\delta+1)}\left[1+\left(\dfrac{t}{\theta}\right)^{-\delta}\right]^{-(\varphi+1)}}{1-\left[1+\left(\dfrac{t}{\theta}\right)^{-\delta}\right]^{-\varphi}} \tag{16.4}$$

The product failure proportion, p, before time t, is expressed by

$$p=P(T\le t)=F(t;\theta,\varphi,\delta). \tag{16.5}$$
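As an illustrative sketch (not part of the original derivation), the quantities in (16.1)–(16.5) translate directly into code; the function and variable names below are our own.

import numpy as np

def dagum_cdf(t, theta, phi, delta):
    # CDF F(t; theta, phi, delta) of Eq. (16.2)
    return (1.0 + (t / theta) ** (-delta)) ** (-phi)

def dagum_pdf(t, theta, phi, delta):
    # PDF f(t; theta, phi, delta) of Eq. (16.1)
    u = (t / theta) ** (-delta)
    return (phi * delta / theta) * (t / theta) ** (-(delta + 1.0)) * (1.0 + u) ** (-(phi + 1.0))

def dagum_median(theta, phi, delta):
    # Median life mu_d of Eq. (16.3)
    return theta * (2.0 ** (1.0 / phi) - 1.0) ** (-1.0 / delta)

def dagum_hazard(t, theta, phi, delta):
    # Hazard function Z(t) of Eq. (16.4)
    return dagum_pdf(t, theta, phi, delta) / (1.0 - dagum_cdf(t, theta, phi, delta))

# Eq. (16.5): the failure proportion before time t is p = F(t); theta here is assumed
p = dagum_cdf(750.0, theta=5342.5, phi=2.0, delta=1.5)   # approx. 0.0025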

16.3 Procedure to Determine the Operating Characteristics The probabilities of acceptance can be obtained using the binomial or Poisson distribution, substituting in (16.6):

$$P_a(p)=\sum_{x=0}^{c}p(x). \tag{16.6}$$

The OC curve of the plan can be obtained based on the following procedure:

Step 1: Specify the values of p, ϕ and δ.
Step 2: Determine the value of t/θ from (16.2) and (16.5).
Step 3: Substituting the specified values of ϕ and δ and the value of t/θ obtained in Step 2, determine t/μd from (16.3).
Step 4: Define the ratio μd/μd0 = Actual Median Life / Assumed Median Life. Given the assumed median life μd0 and the specified values of ϕ and δ, this ratio can be obtained using Eqs. (16.2) and (16.3).
Step 5: Determine μd/μd0 corresponding to each specified value of p.
Step 6: Find the acceptance probabilities using (16.7) or (16.8) for the specified life test plan associated with p or μd/μd0.
Step 7: Plot the acceptance probabilities against the values of μd/μd0 to obtain the required OC curve of the life test plan.

Alternatively, to obtain the OC curve of the plan, the following procedure may be followed:


Step 1: Specify t/μd0, μd/μd0, ϕ and δ.
Step 2: Determine t/θ using the expression (16.3).
Step 3: Corresponding to t/θ, find the value of p utilizing (16.2) and (16.5).
Step 4: Using (16.7) or (16.8) in (16.9), find the acceptance probabilities for the specified plan, for each specified value of p.
Step 5: Plot the acceptance probabilities against the values of μd/μd0. The resulting figure is the required OC curve.

Connected with a stated value of p, there exists a particular value of t/θ, which can be derived as a function of p, ϕ and δ from the CDF of the Dagum distribution. The expression for t/θ is derived using (16.2) and (16.5) as

$$\frac{t}{\theta}=\left(p^{-1/\varphi}-1\right)^{-1/\delta}. \tag{16.7}$$

The expression for t/μd is then derived using (16.3) and (16.7) as

$$\frac{t}{\mu_d}=\left[\frac{2^{1/\varphi}-1}{p^{-1/\varphi}-1}\right]^{1/\delta} \tag{16.8}$$

A single value of p is connected with a distinct value of t/μd, which might be discovered using the steps below:

Step 1: Specify the values of p, ϕ and δ.
Step 2: Obtain the value of t/θ using the expression given as (16.7).
Step 3: Determine t/μd from the expression given as (16.8) for the specified values of ϕ and δ and the value of t/θ obtained from Step 2.

In an analogous manner, for a specified value of t/μd, the value of p can be calculated.
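These conversions can be sketched compactly as follows (our own illustration; scipy is assumed to be available):

from scipy.stats import binom

def t_over_mud_from_p(p, phi, delta):
    # Eq. (16.8): t/mu_d corresponding to a failure proportion p
    return ((2.0 ** (1.0 / phi) - 1.0) / (p ** (-1.0 / phi) - 1.0)) ** (1.0 / delta)

def p_from_t_over_mud(t_over_mud, phi, delta):
    # Inversion of Eq. (16.8): p corresponding to a given t/mu_d
    ratio = (2.0 ** (1.0 / phi) - 1.0) / (t_over_mud ** delta)   # equals p^(-1/phi) - 1
    return (1.0 + ratio) ** (-phi)

def acceptance_probability(n, c, p):
    # Eq. (16.6) with the binomial model: Pa(p) = sum_{x=0}^{c} C(n,x) p^x (1-p)^(n-x)
    return binom.cdf(c, n, p)

# Check against Numerical Illustration 1 below (phi = 1.5, delta = 1, t/mu_d0 = 0.008)
print(p_from_t_over_mud(0.008, 1.5, 1.0))   # approx. 0.00156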

16.4 Empirical Analysis of Operating Characteristic Curves It can be noted that the reliability single sampling plan (RSSP) using the Dagum distribution is specified by the parameters n, c, θ, ϕ and δ. As the failure probability p is associated with the distribution function, which is a function of t/θ, the acceptance probabilities can in turn be computed for given sets of values of n, c, ϕ and δ. The acceptance probabilities of the submitted lot under the single sampling plan for life tests are computed against the ratio μd/μd0, based on the procedure described previously, for different combinations of the parameters n, c, ϕ and δ. It is to be noted that the values of these parameters influence the nature of the OC function. In order to explore their effects, an empirical analysis of the operating characteristic curves drawn for various sets of parameters is carried out.


Fig. 16.1 OC curves of RSSP based on Dagum distribution with n = 100, c = 2, ϕ = 1.5 and varying δ

Figure 16.1 presents a set of operating characteristic curves of the single sampling plans for varying values of δ and fixed n, c and ϕ. The curves display the acceptance probabilities against the values of μd/μd0. It can be observed from the curves that the acceptance probabilities increase as the value of the shape parameter δ increases, for any specified value of μd/μd0. Hence, for higher values of δ, the acceptance probabilities are larger for any given value of μd/μd0. A similar property can be observed from Fig. 16.2: as ϕ increases, the acceptance probabilities for fixed values of n, c and δ increase. Figure 16.3 displays a set of operating characteristic curves of the plan for varying values of n and fixed c, ϕ and δ. It can be observed from the curves that when the sample size is small, the acceptance probability for any specified value of μd/μd0 is higher than the acceptance probability corresponding to a larger sample size. It can be interpreted that when the expected median life is smaller than the assumed median life, the acceptance probability becomes higher when the sample size is smaller. As μd/μd0 increases, the probability of acceptance also increases, irrespective of the sample size. It is also observed that the producer receives greater protection, with larger acceptance probabilities, for smaller sample sizes as the expected median life moves toward the assumed median life, while the consumer receives more protection, with smaller acceptance probabilities, for larger sample sizes when the expected median life is much smaller than the assumed median life. In other words, as the value of n decreases, the producer receives greater protection, while the consumer receives less.


Fig. 16.2 OC curves of RSSP based on Dagum distribution with n = 100, c = 2, δ = 1.5 and varying ϕ

Fig. 16.3 OC curves of RSSP based on Dagum distribution with c = 2, ϕ = 2, δ = 1.5 and varying n


Fig. 16.4 OC curves of RSSP based on Dagum distribution with n = 100, ϕ = 2, δ = 1.5 and varying c

Similar properties can be observed from Fig. 16.4, which presents a set of four OC curves of the plan for varying values of c and fixed values of n, ϕ and δ.

16.5 Procedure for the Construction of Reliability Single Sampling Plan Vijayaraghavan and Uma [16] provided and discussed procedures for associating p0 and p1 with t/μd0 and t/μd1, respectively. In reliability sampling, a specific sampling plan for life tests is obtained such that the OC curve passes through two points, namely (μd0, 1 − α) and (μd1, β), which are associated with the risks α and β. The following two conditions must be satisfied to obtain the optimum plan parameters for fixed values of α and β, respectively:

$$P_a(p_0)\ge 1-\alpha \tag{16.9}$$

and

$$P_a(p_1)\le\beta. \tag{16.10}$$
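A simple search consistent with conditions (16.9) and (16.10) is sketched below (our own illustration, not the authors' implementation): for each acceptance number c, the smallest sample size n meeting the consumer's condition is located and then checked against the producer's condition.

from scipy.stats import binom

def find_plan(p0, p1, alpha=0.05, beta=0.10, c_max=50, n_max=20000):
    # Returns the smallest-n plan (n, c) with Pa(p0) >= 1 - alpha and Pa(p1) <= beta
    best = None
    for c in range(c_max + 1):
        for n in range(c + 1, n_max):
            if binom.cdf(c, n, p1) <= beta:              # consumer's condition (16.10)
                if binom.cdf(c, n, p0) >= 1.0 - alpha:   # producer's condition (16.9)
                    if best is None or n < best[0]:
                        best = (n, c)
                break   # Pa(p1) decreases in n, so the first such n is the candidate for this c
    return best

# For p0 = 0.00156 and p1 = 0.00825 this search should return approximately (809, 3),
# matching Numerical Illustration 1 below.
print(find_plan(0.00156, 0.00825))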


It may be noted that the specification of (p0, 1 − α) and (p1, β) is equivalent to the points (μd0, 1 − α) and (μd1, β), or (t/μd0, 1 − α) and (t/μd1, β). Based on the search procedure, the optimum single sampling plans under the Dagum distribution, fixing the shape parameter values (ϕ, δ) as (1, 1), (1.5, 1), (2, 1), (3, 1), (1, 1.5), (1.5, 1.5), (2, 1.5) and (3, 1.5), are obtained so as to satisfy conditions (16.9) and (16.10). Tables 16.1, 16.2, 16.3, 16.4, 16.5, 16.6, 16.7 and 16.8 display the optimum plan parameter values of n and c, tabulated against several combinations of t/μd0 and t/μd1 for α = 0.05 and β = 0.10.

Numerical Illustration 1 A reliability single sampling plan is to be instituted when the lifetime of the components is assumed to be a random variable distributed according to the Dagum distribution with shape parameters ϕ = 1.5 and δ = 1. It is assumed that the experimenter terminates the test at t = 1000 h, ensures protection to the producer against the acceptable median life of 125,000 h with a producer's risk of 5%, and ensures protection to the consumer against the unacceptable median life of 40,000 h with a consumer's risk of 10%. From the given values, t/μd0 = 0.008 and t/μd1 = 0.025 are evaluated. Based on the procedure designated earlier, the plan parameters are obtained as n = 809 and c = 3. Table 16.2 also displays these parameters against t/μd0 = 0.008 and t/μd1 = 0.025. Corresponding to these values, the acceptable and limiting quality levels are determined as p0 = 0.00156 and p1 = 0.00825. Thus, the desired plan for the given conditions is executed as given below:

1. Choose randomly 809 products from the lot.
2. Conduct the life test experiment on each sampled product.
3. Count the number of failures (x) until time t.
4. Terminate the life test at t0 = 1000 h, or when x exceeds 3 before the test time is reached.
5. If x ≤ 3, accept the lot; otherwise, reject the lot.
6. Treat the items which survive beyond time t0 = 1000 h as passed.

Numerical Illustration 2 Assume that the product lifetime follows the Dagum distribution with shape parameters ϕ = 2 and δ = 1.5. Once the test time t = 750 h is reached, the test is terminated. The producer's quality level is prescribed as p0 = 0.25% with risk α = 0.05, and the consumer's quality level is prescribed as p1 = 3% with risk β = 0.10. The values of t/μd corresponding to the prescribed values of p0 and p1 are determined as t/μd0 = 0.078 and t/μd1 = 0.196. Based on the designated procedure, for the indices t/μd0 and t/μd1, the optimum plan parameters (n, c) of the single sampling plan for life tests under the Dagum distribution with ϕ = 2 and δ = 1.5 are determined as n = 129 and c = 1. The acceptable and unacceptable median life are, respectively, obtained as μd0 = t/0.078 = 9615 h and μd1 = t/0.196 = 3827 h. Figures 16.5 and 16.6 display the OC curves of the single sampling plans obtained in Numerical Illustrations 1 and 2.

Each of the following tables lists the optimum plan parameters (n, c) indexed by combinations of t/μd0 and t/μd1 for α = 0.05 and β = 0.10.

Table 16.1 Optimum reliability sampling plan based on Dagum distribution for ϕ = 1 and δ = 1 (Key: n, c)

Table 16.2 Optimum reliability sampling plan based on Dagum distribution for ϕ = 1.5 and δ = 1 (Key: n, c)

Table 16.3 Optimum reliability sampling plan based on Dagum distribution for ϕ = 2 and δ = 1 (Key: n, c)

Table 16.4 Optimum reliability sampling plan based on Dagum distribution for ϕ = 3 and δ = 1 (Key: n, c)

Table 16.5 Optimum reliability sampling plan based on Dagum distribution for ϕ = 1 and δ = 1.5 (Key: n, c)

Table 16.6 Optimum reliability sampling plan based on Dagum distribution for ϕ = 1.5 and δ = 1.5 (Key: n, c)

Table 16.7 Optimum reliability sampling plan based on Dagum distribution for ϕ = 2 and δ = 1.5 (Key: n, c)

Table 16.8 Optimum reliability sampling plan based on Dagum distribution for ϕ = 3 and δ = 1.5 (Key: n, c)

Fig. 16.5 OC curves of RSSP based on Dagum distribution with n = 809, c = 3, ϕ = 1.5 and δ = 1

Fig. 16.6 OC curves of RSSP based on Dagum distribution with n = 129, c = 1, ϕ = 2 and δ = 1.5

The OC curve of the single sampling plan for life tests (n = 809, c = 3) based on the Dagum distribution passes through the desired points, namely (0.008, 0.96092) and (0.025, 0.099), as displayed in Fig. 16.5. Similarly, in Fig. 16.6, the OC curve of the sampling plan (n = 129, c = 1) passes through the points (0.078, 0.958) and (0.196, 0.098).


16.6 Conclusion Reliability single sampling plans for products are proposed based on the Dagum distribution, and procedures for choosing such plans are developed. Tables are presented for choosing the parameters of reliability sampling plans indexed by the acceptable and unacceptable median life for a preassigned time t, for a few combinations of (ϕ, δ). Industrial practitioners can adapt this procedure to their life tests and can develop the required plans for other choices of ϕ and δ. The proposed plan is widely applicable: it can be applied in manufacturing industries, in the life testing of costly or destructive items and of ball bearings, and in wind-speed data analysis, low-flow analysis, regional flood frequency and survival data analysis, among others. The proposed plan is more effective than the existing plan.

References
1. Fertig, F.W., Mann, N.R.: Life-test sampling plans for two-parameter Weibull populations. Technometrics 22, 165–177 (1980)
2. Epstein, B.: Tests for the validity of the assumption that the underlying distribution of life is exponential, Part I. Technometrics 2, 83–101 (1960)
3. Epstein, B.: Tests for the validity of the assumption that the underlying distribution of life is exponential, Part II. Technometrics 2, 167–183 (1960)
4. Handbook H-108: Sampling Procedures and Tables for Life and Reliability Testing. Quality Control and Reliability, Office of the Assistant Secretary of Defense. US Department of Defense, Washington, D.C. (1960)
5. Goode, H.P., Kao, J.H.K.: Sampling plans based on the Weibull distribution. In: Proceedings of the Seventh National Symposium on Reliability and Quality Control, pp. 24–40. Philadelphia, PA (1961)
6. Goode, H.P., Kao, J.H.K.: Sampling procedures and tables for life and reliability testing based on the Weibull distribution (hazard rate criterion). In: Proceedings of the Eighth National Symposium on Reliability and Quality Control, pp. 37–58. Washington, DC (1962)
7. Goode, H.P., Kao, J.H.K.: Hazard rate sampling plans for the Weibull distribution. Ind. Qual. Control. 20, 30–39 (1964)
8. Gupta, S.S.: Life test sampling plans for normal and lognormal distributions. Technometrics 4, 151–175 (1962)
9. Schilling, E.G., Neubauer, D.V.: Acceptance Sampling in Quality Control. Chapman and Hall, New York, NY (2009)
10. Balakrishnan, N., Leiva, V., López, J.: Acceptance sampling plans from truncated life-test based on the generalized Birnbaum–Saunders distribution. Commun. Stat. Simul. Comput. 36, 643–656 (2007)
11. Kalaiselvi, S., Vijayaraghavan, R.: Designing of Bayesian single sampling plans for Weibull-inverted gamma distribution. In: Recent Trends in Statistical Research, pp. 123–132. Publication Division, M. S. University, Tirunelveli (2010)
12. Kalaiselvi, S., Loganathan, A., Vijayaraghavan, R.: Reliability sampling plans under the conditions of Rayleigh–Maxwell distribution—a Bayesian approach. In: Recent Advances in Statistics and Computer Applications, pp. 280–283. Bharathiar University, Coimbatore (2011)
13. Loganathan, A., Vijayaraghavan, R., Kalaiselvi, S.: Recent developments in designing Bayesian reliability sampling plans—an overview. In: New Methodologies in Statistical Research, pp. 61–68. Publication Division, M. S. University, Tirunelveli (2012)
14. Vijayaraghavan, R., Chandrasekar, K., Uma, S.: Selection of sampling inspection plans for life test based on Weibull–Poisson mixed distribution. In: Proceedings of the International Conference on Frontiers of Statistics and its Applications, pp. 225–232. Coimbatore (2012)
15. Vijayaraghavan, R., Uma, S.: Evaluation of sampling inspection plans for life test based on exponential–Poisson mixed distribution. In: Proceedings of the International Conference on Frontiers of Statistics and its Applications, pp. 233–240. Coimbatore (2012)
16. Vijayaraghavan, R., Uma, S.: Selection of sampling inspection plans for life tests based on lognormal distribution. J. Test. Eval. 44, 1960–1969 (2016)
17. Vijayaraghavan, R., Sathya Narayana Sharma, K., Saranya, C.R.: Reliability sampling plans for life tests based on Pareto distribution. TEST Eng. Manag. 83, 27991–28000 (2020)
18. Vijayaraghavan, R., Saranya, C.R., Sathya Narayana Sharma, K.: Reliability sampling plans based on exponential distribution. TEST Eng. Manag. 83, 28001–28005 (2020)
19. Dagum, C.: A new model for personal income distribution: specification and estimation. Écon. Appliquée 33, 413–437 (1977)
20. Kleiber, C.: A guide to the Dagum distribution. In: Duangkamon, C. (ed.) Modeling Income Distributions and Lorenz Curves. Series: Economic Studies in Inequality, Social Exclusion and Well-Being, vol. 5. Springer, New York, NY (2008)
21. Domma, F., Giordano, S., Zenga, M.: The Fisher Information Matrix in Censored Data from the Dagum Distribution. Working Paper No. 8, Department of Economics and Statistics, University of Calabria, Italy (2009)

Chapter 17

Defect Detection Using Correlation Approach for Frequency Modulated Thermal Wave Imaging

Anju Rani, Vanita Arora, K. Ramachandra Sekhar, and Ravibabu Mulaveesala

Abstract Defect detectability of structures and components is one of the significant parameters for the proper functioning of industries. Active thermography is a safe and reliable technique for the non-destructive testing and evaluation (NDT&E) of these materials during in-service applications. This paper investigates the defect resolvability in a carbon fiber reinforced polymer (CFRP) sample using the frequency modulated thermal wave imaging (FMTWI) technique. The FMTWI performance has been examined using a correlation-based pulse compression (PC) approach and compared with conventional data processing approaches. Results show the high sensitivity and resolution of the correlation approach in resolving deeper defects of varying depths and diameters in the CFRP sample.

17.1 Introduction Infrared thermography (IRT) is one of the popular NDT&E techniques adopted in various industries due to its fast, non-contact and safe detection with wide-area monitoring [1–4]. It can be broadly classified into two modes: passive and active. The passive mode utilizes the temperature gradient inherited by the test object to inspect for any presence of flaws/defects, while the active mode requires external heat sources for the quantitative inspection of deeper defects inside the test object [3]. Various IRT studies have been performed, such as pulse thermography (PT) [4–7], pulse phase thermography (PPT) [8–11] and lock-in thermography (LT) [12–14], for the detection of surface and sub-surface flaws inside a test object.


PT is an easy and fast technique which requires high peak power heat sources for defect inspection [6]. However, it suffers from the dependency of the thermal gradient on variations in surface features, which can be overcome by using a Fourier domain phase analysis data processing approach, as in the case of PPT. PPT, being a phase analysis approach, is insensitive to variations due to surface features but still requires high intensity heat sources for experimentation, similar to PT [9]. LT, on the other hand, utilizes a mono-frequency excitation source varying periodically with respect to time to illuminate the test object. Due to the single frequency excitation, the thermal wavelength inside the test object remains fixed for a single experimentation cycle [13]. Therefore, repetition of the experiment for multiple frequencies is necessary in order to scan the entire sample thickness. The limitations of these conventional thermographic techniques can be overcome by modulating moderate peak power heat sources over a pre-defined frequency sweep in a fixed duration [15–18]. This paper mainly focuses on the detection of flat bottom hole defects of varying depths and diameters using the FMTWI technique. Further, the PC approach has been investigated and compared with other conventional post-processing approaches to estimate the defect detection capabilities.

17.2 Theory The generation of thermal waves inside a finite thickness sample in the absence of an energy source or sink can be expressed as [19]:

$$\frac{\partial T(z,t)}{\partial t}=\alpha\frac{\partial^2 T(z,t)}{\partial z^2},\quad \text{where } 0\le z\le L \text{ for } t\ge 0 \tag{17.1}$$

where T(z, t) is the thermal gradient observed at spatial location z at time t, α is the thermal diffusivity of the specimen at constant pressure, and K (used below) is the thermal conductivity of the test object. Figure 17.1 depicts the set of boundary and initial conditions (ICs) considered for inspecting the thermal gradient over the test object. The specimen is assumed to be illuminated by a frequency modulated thermal excitation given by [17, 18]:

$$-K\frac{\partial T(0,t)}{\partial z}=Q_{fmtwi}(t) \tag{17.2}$$

$$-K\frac{\partial T(0,t)}{\partial z}=Q_o\left[1+\sin\left(2\pi\left(f_o t+\frac{Bt^2}{2\tau}\right)\right)\right] \tag{17.3}$$

where Q_fmtwi represents the frequency modulated (FM) input heat flux illuminating the test object, with peak amplitude Q_o, bandwidth B and initial frequency f_o, applied over a duration τ.


Fig. 17.1 Illustration of boundary and initial conditions for the given specimen

The opposite end of the test object is insulated, which gives:

$$-K\frac{\partial T(L,t)}{\partial z}=0 \tag{17.4}$$

The specimen is assumed to be initially at the room temperature T_o. Therefore, the surface temperature observed for the given excitation can be expressed as:

$$T(z,t)=T_a(z,t)+T_b(z,t) \tag{17.5}$$

such that

$$T_a(z,t)=\sum_{p=0}^{\infty}\left[\frac{a}{L}\int_0^L T_o\cos\left(\frac{p\pi z}{L}\right)\mathrm{d}z\right]\cos\left(\frac{p\pi z}{L}\right)e^{-\alpha t\left(\frac{\pi p}{L}\right)^2} \tag{17.6}$$

and

$$T_b(z,t)=\int_0^t\left[\sum_{p=0}^{\infty}\left[\frac{a}{L}\int_0^L \frac{\alpha}{K}\,Q_{fmtwi}(\tau)\,\mathrm{d}z\right]\cos\left(\frac{p\pi z}{L}\right)e^{-\alpha(t-\tau)\left(\frac{\pi p}{L}\right)^2}\right]\mathrm{d}\tau \tag{17.7}$$

given that a = 1 for p = 0 and a = 2 for p = 1, 2, 3, …, ∞. The temperature response thus observed from Eq. (17.5) is used in the data processing for the detection of defects inside the given specimen.


17.2.1 Data Processing Approach

The defect detection capabilities of any thermographic technique can be improved by applying an appropriate data processing approach to the computed temporal temperature response at a chosen spatial location in the test object. The popular data processing schemes can be broadly classified as: (i) frequency domain data processing approaches [8–11], and (ii) time domain data processing approaches [20–26].

17.2.1.1 Frequency Domain (FD) Data Processing Approach

FD data processing is the conventional scheme to overcome the limitations of surface emissivity and uneven heating present in PT methods. In this approach, the phase information for each frequency component (phasegrams) is extracted by applying the Fast Fourier Transform (FFT) to single-pixel thermal profiles, given by [10]:

$$T(u) = \frac{1}{N} \sum_{n=0}^{N-1} T(n)\, e^{-j 2\pi n u / N} \qquad (17.8)$$

$$T(u) = \mathrm{Re}(u) + j\,\mathrm{Im}(u) \qquad (17.9)$$

where Re(u) and Im(u) are the real and imaginary components of the transformed temperature response T(u), respectively. The phase information φ(u) corresponding to each frequency component can be written as:

$$\phi(u) = \tan^{-1}\!\left( \frac{\mathrm{Im}(u)}{\mathrm{Re}(u)} \right) \qquad (17.10)$$
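A minimal sketch of how the phasegrams of Eqs. (17.8)–(17.10) can be computed per pixel, assuming NumPy and a thermogram stack arranged as (frames, height, width); the array and function names are illustrative, not from the original work:

```python
import numpy as np

def phasegrams(thermogram):
    """Compute FD phase images from a (frames, H, W) thermogram stack."""
    # FFT along the time axis of every pixel profile (Eq. 17.8)
    spectrum = np.fft.fft(thermogram, axis=0) / thermogram.shape[0]
    # Phase of each frequency component (Eq. 17.10); arctan2 is the
    # quadrant-aware form of tan^-1(Im/Re)
    return np.arctan2(spectrum.imag, spectrum.real)
```

Each frequency index of the returned stack is one phasegram.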

17.2.1.2 Time Domain (TD) Data Processing Approach

TD data processing is a correlation-based approach that provides an improved detection range and better depth resolution even in the presence of background noise, as shown in Fig. 17.2. It utilizes the Hilbert Transform (HT) in the time domain to analyze the computed temporal temperature response. The TD phase is computed as [20–23]:

$$\theta = \tan^{-1}\!\left( \frac{\mathrm{IFFT}\{ H(\omega)^{*}\, T(x_r, \omega) \}}{\mathrm{IFFT}\{ T(x_r, \omega)^{*}\, T(x_i, \omega) \}} \right) \qquad (17.11)$$

where * represents the complex conjugate, H(ω) is the Fourier transform of the Hilbert-transformed reference, and T(x_r, ω) and T(x_i, ω) are the Fourier transforms of the temporal temperature responses at the reference location and at any spatial location over the specimen, respectively.

Fig. 17.2 Correlation-based data processing approach

The Correlation Coefficient (CC) in the time domain can be obtained using the Inverse Fast Fourier Transform (IFFT), given by [24–26]:

$$CC = \mathrm{IFFT}\{ T(x_r, \omega)^{*}\, T(x_i, \omega) \} \qquad (17.12)$$
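A minimal NumPy sketch of the cross-correlation of Eq. (17.12); the profile names are illustrative, and the reference is assumed to come from a known non-defective pixel:

```python
import numpy as np

def correlation_coefficient(ref_profile, pixel_profile):
    """Time-domain cross-correlation of a reference (non-defective) pixel
    profile with a test pixel profile, computed via the FFT (Eq. 17.12)."""
    T_ref = np.fft.fft(ref_profile)    # T(x_r, w)
    T_pix = np.fft.fft(pixel_profile)  # T(x_i, w)
    return np.real(np.fft.ifft(np.conj(T_ref) * T_pix))
```

The peak of the returned compressed pulse indicates how strongly the two profiles correlate.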

17.3 Modeling and Analysis

The defect resolvability of the FMTWI approach has been investigated on a CFRP test object having flat bottom holes (FBH) of distinct diameters at different depth locations, as shown in Fig. 17.3. An FM heat flux of 500 W/m² with a frequency sweep of 0.01 to 0.1 Hz is incident on the front end of the specimen for a 100 s duration. The thermal gradient is monitored at a 25 Hz frame rate for 100 s using the COMSOL Multiphysics 5.2 finite element modeling software. The single-pixel profiles thus obtained are analyzed using the discussed data processing approaches to detect sub-surface defects inside the given specimen.

17.4 Results and Discussion

The defect detection ability of the FMTWI technique on the CFRP test object is investigated by illuminating it with an FM heat flux of 500 W/m² modulated between 0.01 and 0.1 Hz for 100 s. Figure 17.4a depicts the thermal gradient over the test object for defects (a–c) having different diameters at various depth locations. It is clearly visible that near-surface flaws exhibit a higher temperature than deeper defects. Figure 17.4b illustrates the thermal response for the defects (d–f) having different diameters located at 0.7 mm from the sample surface. It can be clearly observed that the change in diameter does not have a strong effect on the temperature response, due to the low aspect ratio. Furthermore, the mean rise present in the single-pixel thermal profiles is removed using a first-order polynomial fit to obtain a zero-mean thermal response.
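A small NumPy sketch of this mean-rise removal, fitting and subtracting a first-order polynomial from a single pixel's temporal profile (the 25 Hz frame rate follows Sect. 17.3; the names are illustrative):

```python
import numpy as np

def zero_mean_profile(profile, frame_rate=25.0):
    """Subtract a first-order polynomial fit to remove the mean rise."""
    t = np.arange(len(profile)) / frame_rate
    slope, intercept = np.polyfit(t, profile, deg=1)
    return profile - (slope * t + intercept)
```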


Defect   Diameter (mm)   Depth (mm)
a        10              2
b        8               1.5
c        6               1
d        10              0.7
e        8               0.7
f        6               0.7

Fig. 17.3 Illustration of the CFRP specimen

Fig. 17.4 Temperature response observed at surface for: a Distinct diameters located at different depth locations, and b Distinct diameters located at same depth location

The correlation-based data processing approach is applied to these zero-mean profiles to obtain compressed pulses. PC is performed by computing the CC of the zero-mean thermal response of the defective region with that of the non-defective region. The compressed output pulses concentrate the energy of the input signal into the main lobe, providing high sensitivity and test resolution.


Fig. 17.5 Correlation response obtained for: a Different diameters located at different depth locations, and b Different diameters located at same depth location

Figure 17.5 depicts the correlation response obtained for the six flat bottom hole defects in the CFRP sample. Defects a–c show higher CC values than defects d–f; therefore, defects d–f are resolved with lower contrast in comparison to defects a–c. Furthermore, various data processing approaches have been explored utilizing the FMTWI technique on the CFRP sample. Figure 17.6a depicts the FD phase image obtained at 0.03 Hz using Eq. (17.10), Fig. 17.6b depicts the TD phase image obtained at 1.3 s using Eq. (17.11), and Fig. 17.6c presents the CC image obtained at 63 s using Eq. (17.12). The results show that the correlated response provides better thermal contrast for shallow as well as deeper defects in comparison to the other data processing approaches.

17.5 Conclusion

In this paper, the performance of FMTWI has been examined for detecting six FBH defects of varying depths and diameters in a carbon fiber reinforced polymer (CFRP) sample. The results highlight the high depth resolvability of the proposed FMTWI with the correlation-based PC approach for detecting defects of small lateral dimensions. Furthermore, the performance of the correlation approach has been compared with the conventional data processing approaches to study the sensitivity and depth resolution for the sub-surface defects inside the CFRP sample.


Fig. 17.6 Defect detectability using various data processing approaches: a FD phase image, b TD phase image and c CC phase image

References

1. Almond, D.P., Patel, P.: Photothermal Science and Techniques. Chapman & Hall Publication (1996)
2. Hellier, C.J.: Handbook of Nondestructive Evaluation, 2nd edn. McGraw-Hill Education, New York (2013)
3. Yang, R., He, Y.: Optically and non-optically excited thermography for composites: a review. Infrared Phys. Technol. 75, 26–50 (2016)
4. Ciampa, F., Mahmoodi, P., Pinto, F., Meo, M.: Recent advances in active infrared thermography for non-destructive testing of aerospace components. Sensors (Switzerland) 18(2), art. no. 609 (2018)
5. Parker, W.J., Jenkins, R.J., Butler, C.P., Abbott, G.L.: Flash method of determining thermal diffusivity, heat capacity, and thermal conductivity. J. Appl. Phys. 32(9), 1679–1684 (1961)
6. Vavilov, V.P., Burleigh, D.D.: Review of pulsed thermal NDT: physical principles, theory and data processing. NDT and E Int. 73, 28–52 (2015)


7. Wang, Z., Tian, G., Meo, M., Ciampa, F.: Image processing based quantitative damage evaluation in composites with long pulse thermography. NDT and E Int. 99, 93–104 (2018)
8. Maldague, X., Marinetti, S.: Pulsed phase thermography. J. Appl. Phys. 79, 2694–2698 (1996)
9. Busse, G.: Optoacoustic phase angle measurement for probing a metal. Appl. Phys. Lett. 35(10), 759–760 (1979)
10. Busse, G., Wu, D., Karpen, W.: Thermal wave imaging with phase sensitive modulated thermography. J. Appl. Phys. 71(8), 3962–3965 (1992)
11. Dillenz, A., Zweschper, T., Riegert, G., Busse, G.: Progress in phase angle thermography. Rev. Sci. Instrum. 74(1 II), 417–419 (2003)
12. Rantala, J., Wu, D., Busse, G.: Amplitude-modulated lock-in vibrothermography for NDE of polymers and composites. Res. Nondestr. Eval. 7(4), 215–228 (1996)
13. Wu, D., Busse, G.: Lock-in thermography for nondestructive evaluation of materials. Rev. Gen. Therm. 37(8), 693–703 (1998)
14. Ibarra-Castanedo, C., Piau, J.-M., Guilbert, S., Avdelidis, N., Genest, M., Bendada, A., Maldague, X.P.V.: Comparative study of active thermography techniques for the nondestructive evaluation of honeycomb structures. Res. Nondestr. Eval. 20(1), 1–31 (2009)
15. Mulaveesala, R., Tuli, S.: Theory of frequency modulated thermal wave imaging for nondestructive subsurface defect detection. Appl. Phys. Lett. 89(19), art. no. 191913 (2006)
16. Tabatabaei, N., Mandelis, A.: Thermal-wave radar: a novel subsurface imaging modality with extended depth-resolution dynamic range. Rev. Sci. Instrum. 80(3), art. no. 034902 (2009)
17. Rani, A., Mulaveesala, R.: Pulse compression favorable frequency modulated thermal wave imaging for non-destructive testing and evaluation: an analytical study. IOP SciNotes 2(2), art. no. 024401 (2021)
18. Rani, A., Mulaveesala, R.: Frequency modulated thermal wave imaging for infrared non-destructive testing of mild steel. Mapan—J. Metrol. Soc. India 36(2), 389–393 (2021)
19. Carslaw, H.S., Jaeger, J.C.: Conduction of Heat in Solids. Oxford University Press (1959)
20. Yang, R., He, Y., Mandelis, A., Wang, N., Wu, X., Huang, S.: Induction infrared thermography and thermal-wave-radar analysis for imaging inspection and diagnosis of blade composites. IEEE Trans. Industr. Inf. 14(12), 5637–5647 (2018)
21. Rani, A., Mulaveesala, R.: Investigations on pulse compression favourable thermal imaging approaches for characterization of glass fibre reinforced polymers. Electron. Lett. 56(19), 995–998 (2020)
22. Rani, A., Mulaveesala, R.: Depth resolved pulse compression favourable frequency modulated thermal wave imaging for quantitative characterization of glass fibre reinforced polymer. Infrared Phys. Technol. 110, art. no. 103441 (2020)
23. Tabatabaei, N.: Matched-filter thermography. Appl. Sci. (Switzerland) 8(4), art. no. 581 (2018)
24. Tabatabaei, N., Mandelis, A.: Thermal coherence tomography using match filter binary phase coded diffusion waves. Phys. Rev. Lett. 107(16), art. no. 165901 (2011)
25. Mulaveesala, R., Ghali, V.S.: Cross-correlation-based approach for thermal non-destructive characterisation of carbon fibre reinforced plastics. Insight Non-Destr. Test. Cond. Monit. 53(1), 34–36 (2011)
26. Rani, A., Arora, V., Mulaveesala, R.: Infrared image correlation for non-destructive testing and evaluation. Proc. SPIE 11743, Thermosense: Thermal Infrared Applications XLIII, 1174310 (2021)

Chapter 18

Machine Learning Models for Predictive Analytics in Personal Finance

Rishabh Kalai, Rajeev Ramesh, and Karthik Sundararajan

Abstract Machine learning is an application of artificial intelligence in which statistical data is processed by various, generally automated, algorithms in order to produce insights and inferences. It finds many applications in the field of personal finance, such as portfolio analysis, recommendation engines and financial forecasting tools. Personal finance management is crucial to attaining financial freedom as well as security. Long-term fiscal planning can provide a contingency against uncertainty and promote financial stability. The main objective of this paper is to propose an RNN-based predictive model for personal finance. This paper provides a comprehensive analysis technique that can be utilized to manage the key financial parameters of an individual using machine learning models. In the proposed work, we have implemented three models: a linear regression model for expenditure prediction, an RNN-based model for stock prediction, and a logistic regression model for retirement prediction. These models are trained and tested on the basis of both the individual user's data and external data pertaining to the economy and the financial markets. Experimental results show an accuracy score of 83.55% for linear regression, 86.7% for the RNN, and 84.53% for logistic regression, each of which is used for a different phase of the proposed system.

18.1 Introduction

Personal finance, as a term, encompasses the concepts of management of money, saving and investing. It refers to the entire industry that helps individuals and advises them about financial and investment opportunities [1]: budgeting, banking, insurance, mortgages, investments, retirement planning, tax and estate planning, and so on. Personal finance management can help an individual effectively plan and reach short-term as well as long-term financial goals progressively throughout their income lifetime and post-retirement as well.

R. Kalai (B) · R. Ramesh · K. Sundararajan
Department of CSE, BNM Institute of Technology, Bangalore 560070, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_18


Due to the apparent lack of financial educational courses in school curricula, financial illiteracy is prevalent among the younger generation as well as adults in today's world [2]. Despite adequate resources, there is widespread, behaviorally anchored mismanagement of money and debt, which has consequently resulted in inadequate credit scores among a large percentage of the general population [3]. The main benefits of personal finance management are long-term financial planning, money management, income and asset protection, investments, and retirement/estate planning [4]. With the advent of big data and the rapid proliferation in the availability of user-related personal data, machine learning has gained favor over the last decade among economists and statisticians for data analysis and insight mining [1]. Its plethora of statistical and analytical tools is increasingly being used by businesses to perform comparative analysis studies and even produce personalized recommendations or insights for consumers based on interactions [5]. The purpose of this paper is to propose a model that can be used by an individual to track their personal financial aspects. Algorithms such as linear regression, logistic regression and recurrent neural networks (RNNs) are used for the different elements present within the proposed system. In our review, we aim to answer the following questions:

RQ1: How is machine learning used in the field of personal finance?
RQ2: Can machine learning improve an individual's finances?
RQ3: Which models would be better suited to address the needs of personal finance?

The main contribution of the paper is the detailed description of a plausible system that is capable of tracking, managing and, most importantly, planning the fundamental features of an individual's finances with the help of machine learning. The proposed system can be utilized to efficiently and systematically improve the finances of an individual and increase financial awareness as well.

18.2 Review of the Role of Machine Learning in Personal Finance

Machine learning has a wide array of applications in the finance sector. It is focused on the development of complex algorithms, based on mathematical models and data-based model training, to conduct predictive analysis whenever new data is supplied.


18.2.1 Insurance

Insurance is one of the prevalent fields in which machine learning has many use cases [6]. One commonly observed instance is where computers run simulation models and compare the pictures of a damaged vehicle provided by an insurance claimant to pictures of a new car in order to assess the extent of damage. The automated system also generally checks and verifies the records of driving history and offers the claimant a quote within a few minutes to settle the claim. This process is more efficient and less cumbersome for the claimant as well as the insurance agency involved, compared to the usual tedious procedure [7].

18.2.2 Chatbots/User-Interaction Systems

Machine learning has also been used in chatbots within personal finance management systems [8, 9]. Such a system picks up traits and trends from each conversation with a customer and utilizes them to improve the experience in the future. Natural language processing methods are primarily used in this application to identify the user's mood, after which the system can decide whether the issue at hand can be solved automatically or requires human intervention, in which case it connects the customer to a real-life executive [10]. While this may not be directly related to an individual's finances, such systems are prevalent in most of the popular personal finance management applications on the market today.

18.2.3 Investment Planning

Another useful application is investment planning and portfolio management. One of the key steps in this process is the determination of the optimal distribution of money between savings accounts and current accounts. Reduction in costs and increase in savings can be attained through the automation of various processes and procedures such as asset management and budgeting [11]. With predictive analysis methods, the prediction of stock prices and identification of potential future investments can facilitate the optimization of investments for maximum returns.

18.3 Proposed Work

Figure 18.1 represents the architecture of the proposed system. The proposed work has the following phases: (1) budgeting and expense management, (2) investment portfolio management and (3) retirement prediction.


Fig. 18.1 Architecture of proposed model

18.3.1 Linear Regression Model for Budgeting and Expense Management

Budgeting is the fundamental step in managing an individual's finances [12], and with the advent of automation techniques in finance, budgeting can be managed automatically and more efficiently. The budgeting tool facilitates the tracking of an individual's spending based on the discretionary expenses incurred on a daily basis. The system can then analyze the gathered data to identify spending patterns and areas of potential over-expenditure that the individual might be oblivious to, and consequently outline areas of possible savings.

Zero-Based Budgeting

Zero-based budgeting is a technique based on the allocation of all income and funds to expenses, savings and debt payments. It structures the expenses on the basis of category and also the period in which they occurred. It operates on the principle that total expenditures subtracted from the total income should amount to zero at the end of the month (or the budgeting time period). In the proposed system, we make use of the zero-based budgeting principle and also predict the expected expenditure for a time period, based on the previous expenses of an individual, with the linear regression model.


Fig. 18.2 Categorized expenditure dataset used for linear regression model

The data recorded for an individual represents expenses organized categorically with a time dimension, as shown in Fig. 18.2.

Linear Regression

Linear regression is a supervised machine learning model that is used to model a mathematical relationship between independent or explanatory variables and the dependent or response variable by fitting a linear equation between them. The model utilizes a multivariate linear regression equation to describe the relationship [13]. The response variable y is related to the n independent variables x_1, x_2, …, x_n, such that the linear regression equation describing their relationship is of the form:

$$y = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \varepsilon \qquad (18.1)$$

where β_0 is the intercept, β_1, β_2, …, β_n are the regression coefficients, and ε is the random error [14]. Since the expenses are broken down, prioritized and classified in terms of different time periods and categories, the dependent variable y is the predicted expense for the time period, while the independent variables x_i denote the total expense incurred in each category. The regression coefficients β_1, β_2, …, β_n represent the average functional relationship between the variables of interest.

Fitting Using Least Squares Method

The next step in the construction of a linear regression model is the fitting of the data to the linear equation. Since the constants β_0, β_1, β_2, …, β_n are unknown, they have to be estimated. The least squares estimation principle provides a way of calculating these coefficients effectively. This is done by minimizing the sum of the squared residuals (the distances between the data points and the regression line) from the line described by the linear equation [15]. The method of least squares used to minimize the residuals is described by:

$$\sum_{i=1}^{n} r_i^2 = \min \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 \qquad (18.2)$$


where r_i is the residual, ŷ_i is the value predicted by the linear regression equation, and y_i is the actual value. The model is now trained and fitted to predict the values of future expenditure based on past expenditure.

Prediction

The final step in the process of linear regression is to predict the values for the desired time period. This step gives the final total expected expenditure over all categorical expenses for the desired future time period for which it was projected. As such, the future expenses of an individual are obtained as a function of past expenditures and spending patterns.
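A minimal scikit-learn sketch of this expenditure regression; the category columns and numbers are illustrative placeholders, not data from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows are past periods; columns are categorical expense totals
# (e.g. food, rent, transport). y is the total expenditure per period.
X = np.array([[450, 1200, 150],
              [480, 1200, 170],
              [520, 1250, 160],
              [500, 1250, 180]], dtype=float)
y = np.array([1800.0, 1850.0, 1930.0, 1930.0])

model = LinearRegression().fit(X, y)          # least-squares fit (Eq. 18.2)
forecast = model.predict([[510, 1250, 175]])  # expected future expenditure
print(forecast)
```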

18.3.2 RNN-Based Model for Investment Portfolio Management

Asset Planning

An asset is a resource with economic value, owned by an individual or a company, that will bring future benefits. Assets are an important aspect when it comes to investing and future planning. Multiple artificial intelligence and machine learning techniques can be used for asset planning and optimization [11]. An artificial neural network (ANN) can be defined as a connection of different nodes that are loosely modeled after the neurons in a brain [16, 17]. Feedforward artificial neural networks are commonly used for price forecasting. Generally, the output produced is affected by the size of the training set, the degrees of freedom of the network, and the physical complexity of the given problem [18]. With financial assets and global markets becoming increasingly complex, traditional risk models may no longer be sufficient for risk analysis.

Investment Recommendation

There are many artificial intelligence techniques that can be used for stock prediction, such as artificial neural networks, ordinary least squares regression, elastic nets, LASSO regression and random forests, among which the most applicable are long short-term memory (LSTM), convolutional neural network (CNN) and recurrent neural network (RNN) models. Among these techniques, LSTM and RNN use historical data to predict the prices of any particular stock [19]. Hence, LSTM and RNN are used for identifying long-term dependencies for future prediction.

Retrieval of Stock Data and Simple Moving Average

The first step is to read the historical stock market data through the use of a stock market API. The data returned from the stock API is a time series, a sequence of numbers in chronological order. The stock API provides the following fields: open, highest price of the day, lowest price of the day, close, and volume. The closing stock price is used to train the neural network, as we aim to predict the closing price of the stock for investment advice.


Simple moving average (SMA) is used to calculate the average value of a selected range of prices, in this case, the closing prices of the stock fetched from the stock API, over the number of time periods in that range [20]. The formula for the SMA is:

$$SMA = \frac{\sum_{i=1}^{n} x_i}{n} \qquad (18.3)$$

where x_1, x_2, …, x_n are the prices of the stock over the n periods and n is the total number of time periods.

Data Preprocessing and Training the Neural Network

The next step is to prepare the training data. The training data is prepared with the weekly stock prices and the returned simple moving average. With the window size being 50, the closing values of the previous 50 weeks of the stock serve as the training features and the SMA as the label feature. The dataset is then split into two parts: about 70% is used as the training set, and 30% is used as the validation set. Now that the training dataset is ready, the next step is to create a model for time series prediction. For the model to learn the sequential time series data, recurrent neural network layers are created and LSTM cells are added to the RNN. Along with optimization algorithms for machine learning, the root mean square error is also deployed with the model. With its use, the difference between the predicted values and the actual values can be determined, which enables the model to learn by minimizing the error during the training process.

RNN Algorithm

Input: Time series stock market data with the SMA for every 50 consecutive weeks
Output: Predicted value of the stock

Step 1: set input_layer_neurons, input_layer_shape; set output_layer_neurons, output_layer_shape; define rnn_input_shape, rnn_input_neurons; define rnn_output_shape, rnn_output_neurons
Step 2: fit the model using sequential(); add layer.dense(input_layer_neurons, input_layer_shape); add layer.reshape(rnn_input_shape)
Step 3: lstm_cells -> []; for each index in the layers, push lstm_cells; add lstm_cells to the rnn using model.add()
Step 4: train the model using model.fit() to obtain trained_model; make predictions using trained_model
Step 5: calculate the current state

$$h_t = f(h_{t-1}, x_t) \qquad (18.4)$$

where h_t is the current state, h_{t-1} is the previous state and x_t is the input state; apply the activation function

$$h_t = \tanh(w_{hh} h_{t-1} + w_{xh} x_t) \qquad (18.5)$$

where w_hh is the weight of the recurrent neuron and w_xh is the weight of the input neuron; derive the output by applying

$$y_t = w_{hy} h_t \qquad (18.6)$$

where y_t is the output and w_hy is the weight at the output layer.
Step 6: perform model evaluation to obtain the accuracy.

Validation and Prediction

Now that the model has been trained, the next phase is to prepare it for the prediction of future values. But prior to that, the model has to undergo validation. Since the data has been split into two parts, 70% training and 30% validation, the training set is used for training the model and the validation set for validating it. For prediction, the model uses a window size of 50, that is, the closing values of the previous 50 consecutive periods. Since the training set is incremented daily, the values of the past 50 periods are used as input to predict the value for the 51st period.
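A minimal Keras sketch of the pipeline just described (assumes TensorFlow 2.x; the synthetic price series, layer sizes and epoch count are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np
import tensorflow as tf

WINDOW = 50  # closing prices of the previous 50 periods form one sample

# Illustrative closing-price series; in the paper this comes from a stock API
closes = (np.cumsum(np.random.randn(600)) + 100.0).astype("float32")

# Each sample is a window of closes; its SMA (Eq. 18.3) is the label
X = np.array([closes[i:i + WINDOW] for i in range(len(closes) - WINDOW)])
y = X.mean(axis=1, keepdims=True)
X = X[..., None]  # shape (samples, WINDOW, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # stacked LSTM cells
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted value
])
model.compile(optimizer="rmsprop", loss="mse")  # RMS-based error measure
model.fit(X, y, validation_split=0.3, epochs=10, batch_size=32)
```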

18.3.3 Logistic Regression Based Retirement Prediction

Retirement planning is an essential part of personal finance. As average life expectancy continues to rise [21], retirement planning has gradually become a more crucial part of the financial planning process. The proposed system utilizes a method with which a prediction can be made as to whether the user can retire early (before the designated retirement age), based on the factors observed to be the most crucial in the determination of retirement age. These factors are gender, disease, education level, marital status, income and employment status. Macroeconomic factors such as the unemployment rate and stock market conditions also play a role (Fig. 18.3).

Fig. 18.3 Retirement dataset used for modeling logistic regression predictor


Logistic Regression

Logistic regression is a supervised machine learning model used to calculate the probability of a certain class or event existing. It is a predictive analysis method used to describe the data and explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables [22]. Logistic regression estimates a continuous quantity, i.e., the probability that an event occurs, which is compared with a threshold to take classification decisions [23]. The mathematical formula of a logistic model is:

$$l = \log_b\!\left( \frac{p}{1-p} \right) = \beta_0 + \sum_{i=1}^{n} \beta_i x_i \qquad (18.7)$$

where l is the log odds, x_1, x_2, …, x_n are the predictors, β_0, β_1, …, β_n are the estimated parameters of the model, b is the base of the logarithm function, n is the number of predictors, and p is the probability that the response variable is 1. The probability of early retirement is given by:

$$p = S_b\!\left( \beta_0 + \sum_{i=1}^{n} \beta_i x_i \right) \qquad (18.8)$$

where S_b is a sigmoid function with base b, p is the probability of early retirement, x = {x_1, x_2, …, x_n} is the vector of explanatory values describing the factors affecting retirement age, and the values of β are the parameters to be estimated. The sigmoid function S_b is used to classify whether the individual described by the specific equation retires before or after the age of sixty (Table 18.1).

Table 18.1 Factors affecting the age of retirement

Factors             Value/representation
Gender              Male/female
Disease             Moderate health/perfect health/marginal health/late disease/early disease
Education level     High school/undergraduation (college)/graduate school
Marital status      Married/unmarried
Yearly income       Average annual income before retirement
Employment status   Unemployed/employed
Age                 Age of the person currently

Prediction

The logistic regression model has been trained and validated on the data of working and retired individuals, along with their corresponding data pertaining to the retirement factors. The model is then used to predict whether an individual, described by a specific set of values for each of the retirement factors as well as current age, will retire before or after the age of sixty, thus enabling the augmentation or modification of retirement plans.

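A minimal scikit-learn sketch of such a classifier; the encoded feature rows, their ordering (following Table 18.1) and the label values are hypothetical placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Encoded rows: [gender, disease, education, marital status,
#                yearly income, employment status, age]
X = np.array([[1, 2, 1, 1, 85000, 1, 52],
              [0, 4, 2, 0, 40000, 1, 58],
              [1, 0, 0, 1, 30000, 0, 45],
              [0, 3, 2, 1, 95000, 1, 50]], dtype=float)
y = np.array([1, 0, 0, 1])  # 1 = retired before sixty, 0 = did not

clf = LogisticRegression(max_iter=1000).fit(X, y)  # fits Eq. (18.8)
person = [[1, 2, 1, 1, 70000, 1, 55]]
print(clf.predict(person))        # early-retirement flag (0 or 1)
print(clf.predict_proba(person))  # estimated probability p
```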

18.4 Experimental Results

In the proposed work, the three machine learning models were trained on their respective datasets and evaluated on the basis of prediction accuracy. The accuracy of the models was evaluated using the calculated mean absolute percentage error (MAPE) value:

$$\mathrm{Accuracy}(\%) = \frac{TP + TN}{TP + FP + FN + TN} \times 100 \qquad (18.9)$$

$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\mathrm{Actual} - \mathrm{Predicted}}{\mathrm{Actual}} \right| \times 100 \qquad (18.10)$$

$$\mathrm{Accuracy}(\%) = \max(0, 100 - \mathrm{MAPE}) \qquad (18.11)$$

where n is the number of observations for which the MAPE is calculated. TP, TN, FP and FN in Eq. (18.9) denote true positives, true negatives, false positives and false negatives, respectively. The number of data points over which accuracy is calculated depends on the training/validation dataset size utilized. For the RNN model, the training dataset was set as 80% of the entire dataset, while the linear regression and logistic regression models used the entire dataset. Through experimentation, the accuracy of each of the elements of the proposed system was found and is represented in Fig. 18.4. The accuracy for the linear regression model and the RNN was calculated using Eqs. (18.10) and (18.11), while for the logistic regression model it was calculated using Eq. (18.9).

Fig. 18.4 Performance of the proposed system

For expenditure prediction, linear regression was utilized to generate a trend line capable of being extrapolated to forecast possible future expenditure for a given time period. The graph in Fig. 18.5 displays the linear regression (trend) line generated by regressing the aggregated categorical expenditure over the time period in which the expenses occurred. The green line indicates the mean of the entire expenditure dataset recorded over the period for which the linear regression model is trained; this line allows one to infer the number of above-average expenses incurred. The gray data points are outliers that were imputed.

Fig. 18.5 Linear regression (trend line) of expenditure versus date with mean of the dataset, used for expenditure prediction/forecasting

Early retirement prediction was conducted on the basis of the macro-factors deemed to be the most important determinants/variables. The data was composed of both categorical and numerical variables that were used to perform logistic regression, with the dependent variable being a flag denoting retirement of the individual before 60. The modeled logistic regression predictor was then used to predict early retirement on the testing data; the value 1 indicates true, while 0 represents false. The numbers of correct and false predictions were derived through comparison with the actual labels, and the confusion matrix shown in Fig. 18.6 was constructed to visualize the numbers of correct and incorrect classifications. The terms that compose the confusion matrix are: true positives, denoting correctly classified individuals who retired early; true negatives, denoting correctly classified individuals who did not retire early; false positives, denoting individuals incorrectly classified as having retired early; and false negatives, denoting early retirees incorrectly classified as not retiring early.
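A small NumPy sketch of the MAPE-based accuracy of Eqs. (18.10)–(18.11); the arrays are illustrative values only:

```python
import numpy as np

def mape_accuracy(actual, predicted):
    """Accuracy(%) = max(0, 100 - MAPE), per Eqs. (18.10)-(18.11)."""
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100.0
    return max(0.0, 100.0 - mape)

actual = np.array([1850.0, 1900.0, 1980.0])
predicted = np.array([1800.0, 1925.0, 2000.0])
print(mape_accuracy(actual, predicted))
```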


Fig. 18.6 Confusion matrix of modeled logistic regression classifier for early retirement prediction

Figures 18.7 and 18.8 depict the time series forecasting for the stocks TSLA (Tesla, Inc.) and MSFT (Microsoft Corporation). The blue line depicts the actual price of the stock over the last 20 years, with the green line depicting the value of the stock predicted on the training data by the RNN model. The yellow trend line shows the value of the simple moving average (SMA); this is used to ensure that no steep and sudden increase or decrease (outlier) in the price of the stock over a short period of time significantly degrades the accuracy of the model. The red line shows the projected value of the stock for the future time period for which the prediction is to be made; this is defined by the window size that is set during the retrieval of the stock data and the computation of the SMA. As such, with a reasonable degree of accuracy, the price of a stock can be forecasted as a function of its historical data.

18.5 Conclusion and Future Scope

Personal finance management is crucial in improving the financial condition of individuals and results in potentially increased income and savings, among a wide slew of other benefits, all of which can ultimately lead to long-term financial security and stability.


Fig. 18.7 Time series forecasting for prediction of TSLA (Tesla, Inc.) stock/share price using RNN model based on SMA values of past performance

Fig. 18.8 Time series forecasting for prediction of MSFT (Microsoft Corporation) stock/share price using RNN model based on SMA values of past performance

Machine learning can be utilized in order to further augment and refine the processes present within personal finance management. From the results, it is observed that by using the machine learning algorithms present in the proposed system, predictive analytics can be conducted with a high rate of accuracy for all three phases of the system, each of which encompasses a fundamental aspect pertaining to an individual's financial status and condition, thereby ultimately enabling a more data-oriented analytical approach for personal finance management. In the future, we would like to explore and identify further applications in the domain of personal finance that can be improved with the utilization of machine learning.

References

1. Varian, H.R.: Big data: new tricks for econometrics. J. Econ. Perspect. 28(2), 3–28 (2014)
2. Lusardi, A.: Financial literacy and the need for financial education: evidence and implications. Swiss J. Econ. Stat. 155, 1 (2019). https://doi.org/10.1186/s41937-019-0027-5
3. [Online]. Available: https://builtin.com/artificial-intelligence/machine-learning-finance-examples
4. Garman, E.T., Forgue, R.: Personal Finance. Cengage Learning (2014)
5. Sundararajan, K., Palanisamy, A.: Probabilistic model based context augmented deep learning approach for sarcasm detection in social media. Int. J. Adv. Sci. Technol. 29(06), 8461–8479 (2020). http://sersc.org/journals/index.php/IJAST/article/view/25290
6. 10 Companies Using Machine Learning in Finance to Improve the Entire Industry—Builtin
7. Hanafy, M., Ming, R.: Machine learning approaches for auto insurance big data. Risks 9(2), 42 (2021)
8. Dixon, M.F., Halperin, I., Bilokon, P.: Machine Learning in Finance. Springer International Publishing (2020)
9. Lokman, A.S., Ameedeen, M.A.: Modern chatbot systems: a technical review. In: Proceedings of the Future Technologies Conference. Springer, Cham (2018)
10. Sundararajan, K., Palanisamy, A.: Multi-rule-based ensemble feature selection model for sarcasm type detection in Twitter. Comput. Intell. Neurosci. 2020 (2020)
11. Bartram, S.M., Branke, J., Motahari, M.: Artificial Intelligence in Asset Management. No. 14525. CFA Institute Research Foundation (2020)
12. Thulasimani, M.: Personal Financial Management (2015)
13. Here's how credit scores compare across generations—CNBC. Available: https://www.cnbc.com/2018/09/25/heres-how-credit-scores-compare-across-generations.html
14. Uyanık, G.K., Güler, N.: A study on multiple linear regression analysis. Procedia—Soc. Behav. Sci. 106, 234–240 (2013). https://doi.org/10.1016/j.sbspro.2013.12.027
15. Kong, M., et al.: IOP Conf. Ser.: Earth Environ. Sci. 252, 052158 (2019)
16. Haykin, S.: Neural Networks and Learning Machines, 3rd edn. Pearson, New York (2009)
17. Aggarwal, C.C.: Neural Networks and Deep Learning. Springer, Cham, Switzerland (2018). https://doi.org/10.1007/978-3-319-94463-0
18. Kamruzzaman, J., Sarker, R.A.: ANN-based forecasting of foreign currency exchange rates. Neural Inf. Process.-Lett. Rev. 3(2), 49–58 (2004)
19. Hiransha, M., Gopalakrishnan, E.A., Menon, V.K., Soman, K.P.: NSE stock market prediction using deep-learning models. Procedia Comput. Sci. 132, 1351–1362 (2018). https://doi.org/10.1016/j.procs.2018.05.050
20. Linear regression—Yale [Online]. Available: http://www.stat.yale.edu/Courses/1997-98/101/linreg.html
21. Roser, M., Ortiz-Ospina, E., Ritchie, H.: Life Expectancy—OurWorldInData [Online] (2019). Available: https://ourworldindata.org/life-expectancy
22. Peng, J., Lee, K., Ingersoll, G.: An introduction to logistic regression analysis and reporting. J. Educ. Res. 96, 3–14 (2002). https://doi.org/10.1080/00220670209598786
23. Dreiseitl, S., Ohno-Machado, L.: Logistic regression and artificial neural network classification models: a methodology review. J. Biomed. Inform. 35(5–6), 352–359 (2002)
24. Simple Moving Average (SMA)—Investopedia [Online]. Available: https://www.investopedia.com/terms/s/sma.asp

Chapter 19

MNIST Image Classification Using Convolutional Neural Networks

Ashesh Roy Choudhuri, Barnali Guha Thakurata, Bipsa Debnath, Debanjana Ghosh, Hrittika Maity, Neela Chattopadhyay, and Rupak Chakraborty

Abstract Convolutional neural networks (CNNs) hold the current research interest in the ever-evolving field of image classification. Accurately classifying image data in a minimum of time is highly desired, but the traditional CNN architecture often fails to generate the appropriate outcome for large datasets. A modified CNN approach is therefore proposed here, combining data augmentation and batch normalization embedded within the CNN. Nowadays, identifying or classifying digits accurately across a variety of writing modes is a real challenge. The advantages of the proposed approach are noted when it is applied to the popular MNIST dataset used for digit classification. The proposed approach has been compared with some existing techniques, and the results show that the validation accuracy, training loss, and testing accuracy of the proposed approach are superior to those of the state-of-the-art approaches.

19.1 Introduction

In contemporary times, where digitization has revolutionized the frontier of modern computing and data processing, handwriting recognition systems are an important sector. Extracting information from the various available sources and converting it into digital formats is an important process, as processing digital data is both cost-effective and less time-consuming. Handwritten recognition systems find applications in advanced fingerprint verification systems used for personal identification and investigation [1]. The fingerprints left on various surfaces are collected as specimens by security departments to be used in the investigation of crime scenarios. Such systems are also used in the identification of automobile license plates, bank documents, archeological manuscripts, medical prescriptions, and the list goes on [2].

A. R. Choudhuri · B. G. Thakurata · B. Debnath · D. Ghosh · H. Maity · N. Chattopadhyay · R. Chakraborty (B)
Department of Computer Science and Engineering, Guru Nanak Institute of Technology, Kolkata, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_19


The need for highly accurate, cost- and power-efficient recognition systems is hence a necessity. The influence of deep learning architectures has brought numerous changes to the fields of AI and ML. Deep learning architectures mimic the working of the human brain in processing and recognizing data. They include autoencoders, ANNs, recurrent neural networks, convolutional neural networks, etc. A neural network is a network of artificial neurons or nodes. It has input layers, hidden layers, and an output layer; each layer contains nodes that are connected to nodes in the previous layer. CNN has proved to be a boon to recognition technology, as its simplistic structure makes it very suitable for image processing and recognition. CNNs successfully detect prominent features of an image through the multiple hidden layers, without any human supervision. With the advent of higher computational power and GPU efficiency, convolutional neural networks have become a powerful means to classify image data and continue to retain their popularity in this field. CNN is well designed for the processing of data through multiple layered arrays [3]. The defining feature that differentiates a CNN from any other ordinary neural network is that a CNN gathers input as a 2D array and operates directly on the images rather than focusing on feature extraction. In the past years, almost all state-of-the-art algorithms in the field of image recognition have been based on it. The spatial correlations existing within the input data are used by the CNN. Each synchronic layer of the neural network connects some input neurons; this specific region is called the local receptive field, which focuses on the hidden neurons that process the input data inside the mentioned field, not registering changes outside the specific boundary. Some secure multimedia techniques can also be found in the literature [4–6]. After all, detecting an individual's handwriting is a challenging task, and CNN, due to its several advantages, has been used in several papers for this task; in some papers, the accuracy is as high as 98–99%. In [7], Morph-CNN combines a counter-harmonic mean filter with convolutional layers to perform morphological operations, resulting in enhanced feature maps for digit recognition on the MNIST and SVHN datasets, with accuracies of 99.32% and 92.56%, respectively. Different hidden layers are added to a CNN to note how they affect the overall performance, with epochs also varied and the resulting accuracies noted, to recognize handwritten digits in [8, 9]. A higher accuracy of 99.21% is achieved in less computation time by improving the CNN architecture in [10], using the MNIST dataset and the DL4J framework. The CNN architecture is further improved, and a greater accuracy of 99.87% is gained, in [2]; here, the model is tested with different SGD optimization algorithms, culminating in this high recognition accuracy. Models of CNNs applied to vernacular languages are also popular now. In [11], a LeNet-5 CNN was applied to the MADBase database of Arabic handwritten images for handwritten character recognition. In [12], 92.72% testing accuracy was reached during digit recognition on NumtaDB, a Bangla digit database, using a CNN. A deep CNN is used for Devanagari handwritten digit recognition with the CPAR-2012 and CVPR-ISI datasets in [13]. The problem of classifying images from the MNIST dataset is formulated in Sect. 19.2. Our CNN model is proposed in Sect. 19.3. The results observed are discussed in Sect. 19.4, and our approach is compared with other models and other approaches.


The conclusion and future scope of the paper are eventually discussed in Sect. 19.5.

19.2 Problem Formulation

Our objective is to build an efficient CNN model which detects handwritten digits of the MNIST dataset with accuracy as high, and loss as minimal, as possible. The convolutional neural network, owing to its versatility, has over the years proven to be the best solution to this problem.

19.2.1 Convolutional Neural Network

Multiple layers form a CNN. We have the input layer first, followed by a set of hidden layers, finally ending with the output layer. The main layers and their characteristics are described below:

Convolutional Layer: It is the base layer and main building block of a CNN. A filter is placed over the image, resulting in a convolved feature map, analogous to watching a specific section of an outdoor scene through a window, which makes specific features prominent. Extracting image features is the main use of this layer [14]. On convolving an m × m filter over the n × n input neurons of the input layer, an output of size (n − m + 1) × (n − m + 1) is delivered. These filters or kernels help in extracting specific low-level features from the high-level input image. We get a sum of dot products whenever convolution occurs at an (x, y) coordinate in space between the input and the 2D kernel [15]:

$$\mathrm{Convolution}_{x,y} = \sum_i p_i q_i \qquad (19.1)$$

Here, p_i refers to the convolutional kernel weights, whereas q_i stands for the values of the corresponding spatial extent in the input map (or the output of the preceding layer). A scalar bias is added to obtain the output of the convolutional layer [15]:

$$z_{x,y} = \mathrm{Convolution}_{x,y} + \mathrm{bias} \qquad (19.2)$$
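A minimal NumPy sketch of this valid convolution for a single-channel square input (the names and the nested-loop formulation are illustrative, not the paper's implementation):

```python
import numpy as np

def conv2d_valid(image, kernel, bias=0.0):
    """'Valid' 2D convolution producing an (n-m+1) x (n-m+1) feature map,
    per Eqs. (19.1)-(19.2); assumes square image and kernel."""
    n, m = image.shape[0], kernel.shape[0]
    out = np.empty((n - m + 1, n - m + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            # Sum of dot products of the kernel with the local receptive field
            out[x, y] = np.sum(image[x:x + m, y:y + m] * kernel) + bias
    return out
```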

ReLU: It is used to add non-linearity in order to gain a generalized solution. The output of the convolution layer is added with a bias value (element-wise), and then it is passed through the activation function. It operates on an element z, where any negative value is scaled to zero [15]:

$$g(z) = \begin{cases} z, & \text{if } z \ge 0 \\ 0, & \text{if } z < 0 \end{cases} \qquad (19.3)$$

Without it, data being fed into each layer would lose the dimensionality that we want to maintain. It also helps in reducing the vanishing gradient problem [16], and it does not change the size of its input. Other activation functions are sigmoid, tanh, leaky ReLU, etc.

Pooling Layer: This layer reduces the sample size of a particular feature map. It works by reducing the dimension of the sample, which helps in increasing the computational performance, and it also decreases the chances of overfitting. It introduces non-linearity as well. A pooling operator runs on individual feature channels, merging nearby feature values into one by the application of a suitable operator. Two of the most commonly used pooling techniques are max pooling and average pooling [17].

Fully Connected Layer: For performing classification, several fully connected layers are needed in a CNN architecture. In a fully connected layer, the neurons are attached to all the neurons in the preceding layer, as in any ANN or MLP. Fully connected layers combine features from the convolution and pooling layers to generate a probable class score for the classification of the input images. A non-linear activation function may be used in a fully connected layer to enhance the performance of the network. Here, the classification layer uses the 'softmax' activation function for classifying the generated features of the input image, received from the previous layer, into various classes based on the training data.

19.2.2 Data Augmentation

Data augmentation is a method to increase the dataset's size, as a larger dataset makes a deep learning model more effective. The traditional approach involves creating new images by performing transformations such as rotations, zoom, shear, shift, coloration, filling, etc., over the images. Involving mostly affine transformations, the original image takes the form [18]:

$$y = W x + b \qquad (19.4)$$

Increasing the size of the data helps the model combat noisy images and overfitting [19]. In [20], an unsupervised data augmentation method is used. In Keras, the ImageDataGenerator class is used to perform data augmentation, and the arguments of the relevant data augmentation operations are passed through it [21]. Some of the important operations are:

Rotation: Images are rotated randomly by specifying the degree to the rotation_range argument.
Zoom: Images are zoomed in and out by specifying the range value. For zooming in, the value would be less than 1; for zooming out, it would be greater than 1.


Shift: The image pixels are moved either horizontally or vertically. The width_shift_range argument is used for the horizontal shift, and height_shift_range for the vertical shift.
Fill mode: There are several points outside of the input image's boundaries. This operation is performed to fill those points by specifying the filling mode through the fill_mode argument. The default fill mode is 'nearest', which is applied here.

Apart from creating more data, data augmentation also helps in the generalization of CNN models, so that the trained models perform better with real-world examples. Generative adversarial networks (GANs) are another popular method, in which style transformation renders the input image in different image styles [18].
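A sketch of this augmentation in Keras, using the parameter values that Sect. 19.3 reports for the proposed model (10-degree rotation, 5% zoom, ±5% shifts); the variable names are illustrative:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings matching the operations described above
datagen = ImageDataGenerator(
    rotation_range=10,       # random rotation of up to 10 degrees
    zoom_range=0.05,         # zoom in/out by up to 5%
    width_shift_range=0.05,  # horizontal shift of +/-5%
    height_shift_range=0.05, # vertical shift of +/-5%
    fill_mode="nearest",     # fill points outside the boundaries
)
# x_train must have shape (samples, 28, 28, 1); flow() then yields
# augmented batches during training, e.g.:
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=50)
```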

19.2.3 Batch Normalization

It is tough to train deep neural networks with many layers; overfitting might occur. Moreover, the distribution of the inputs to each layer changes during training as the parameters of the previous layers change, which decelerates the entire training process. This phenomenon is known as the 'internal covariate shift' [4]. Batch normalization is a method to stabilize the training process by normalizing the inputs to each layer for every training mini-batch. This technique reduces the number of epochs required and increases the model's accuracy. In a CNN, suppose a layer receives the output x from the preceding layer and applies an affine transformation to obtain y = Wx + b; it then produces an output h(y), which is considered the input of the next layer. Let the components be designated by y = [y_1, y_2, …, y_n]^T; then h(y) can be written as h(y) = [h(y_1), h(y_2), …, h(y_n)]^T [5]. Batch normalization is based on the transformation [5]:

$$\hat{y}_i \leftarrow \frac{y_i - F[y_i]}{\sqrt{F\left[ (y_i - F[y_i])^2 \right]}} \qquad (19.5)$$

Here, F[y_i] is the mean and √(F[(y_i − F[y_i])²]) is the standard deviation of the random variable y_i. The mean and standard deviation are estimated over a batch of training images at training time.
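A small NumPy sketch of the transformation of Eq. (19.5) applied to a mini-batch; the small epsilon is an added assumption for numerical stability, not part of Eq. (19.5):

```python
import numpy as np

def batch_normalize(y, eps=1e-5):
    """Normalize a mini-batch per feature, as in Eq. (19.5)."""
    mean = y.mean(axis=0)                  # F[y_i] over the batch
    var = ((y - mean) ** 2).mean(axis=0)   # F[(y_i - F[y_i])^2]
    return (y - mean) / np.sqrt(var + eps)
```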

19.2.4 Dataset

The MNIST database, or Modified National Institute of Standards and Technology database, is a subset of a larger dataset available from the NIST database, introduced by LeCun [22] in 1998.

The image dataset in the MNIST database is a combination of two of NIST's databases: one consists of handwritten digits of high school students, and the other contains handwritten digits of employees of the United States Census Bureau. The dataset contains around 70,000 images in total, of which 48,000 images are used for training the model, around 12,000 images for validating the model, and 10,000 images for testing the model. Of the training data, half has been taken from NIST's training dataset and the second half from NIST's testing dataset [23]. A portion of this training set has been used for validation while training our model. The train-validation-test split of this dataset for building our model is described in Table 19.1. Each image is grayscale, of dimension 28 × 28 pixels, representing the digits 0–9. Each element in the image vector represents the intensity of the corresponding pixel, scaled between zero and one [24] (Fig. 19.1).

Table 19.1 Train-validation-test split

Split category    No. of images
Train set         48,000
Validation set    12,000
Test set          10,000

Fig. 19.1 Image samples from the MNIST database
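A short sketch of loading MNIST and forming the split of Table 19.1, using TensorFlow's built-in loader; taking the 48,000/12,000 partition sequentially is an assumption, since the paper does not specify how the validation images were sampled:

```python
import tensorflow as tf

(x_train_full, y_train_full), (x_test, y_test) = \
    tf.keras.datasets.mnist.load_data()

# Scale pixel intensities to [0, 1] and add a channel dimension
x_train_full = x_train_full[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# 48,000 / 12,000 / 10,000 split as in Table 19.1
x_train, x_val = x_train_full[:48000], x_train_full[48000:]
y_train, y_val = y_train_full[:48000], y_train_full[48000:]
```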

19.3 Proposed Model

Our proposed architecture consists of 9 layers, with 1 input layer and 1 output layer, 6 inner layers and 1 dense layer, as described in Fig. 19.2. The input layer takes the 28 × 28 px images from MNIST. The inner layers consist of 2 sets of 2 convolutional layers, each set followed by 1 max pooling layer. The first set of convolutional layers contains 32 filters each, with a filter size of 3 × 3. The next set of convolutional layers contains 64 filters each, also with a filter size of 3 × 3.


Fig. 19.2 9 layered CNN model with both data augmentation and batch normalization

The popular rectified linear unit (ReLU) function, helpful in avoiding the vanishing gradient problem [16], is the activation function for all inner layers. The pool size of each of the max pooling layers is 2 × 2. The dense layer after flattening has 256 neurons and is activated by ReLU. Finally, the softmax activation function is applied over the last dense layer to classify the given input digit image among the 0–9 categories. The 'deepness' of the model enables us to extract more prominent features from the MNIST digits and reach a higher testing accuracy. But the deeper the model is, the more the chances of overfitting increase. Dropouts of 40% are hence added after every convolution-pooling set, and a dropout of 50% is added after the first dense layer. Dropout is a regularization technique which randomly ignores a given number of nodes and helps to reduce overfitting during training. To further increase the test accuracy, data augmentation (DA) and batch normalization (BN) are added. Data augmentation applied over the input data diversifies the dataset and enables the model to learn newer low-level and high-level features. Images have been rotated by 10 degrees, zoomed by 5%, shifted randomly between −5% and +5%, and several further operations are applied. Batch normalization raises the independence of the model, further fulfilling our aim of attaining a higher accuracy. It is added twice, once after every set of convolutional layers, before max pooling. To arrive at a more accurate model, we train the model over 4 cases with different combinations of data augmentation and batch normalization, as in Table 19.2. The overall algorithm followed is the following:

Proposed model algorithm:
(i) Perform data pre-processing on the MNIST dataset
(ii) Perform the train-validation-test split as per Sect. 19.2.4
(iii) for i = 1 to 4 /* i is the model for each of the 4 cases */
(iv) ClassifyMNIST(i, train data, validation data, test data)
(v) /* 1–No DA, No BN; 2–Only DA; 3–Only BN; 4–Both DA and BN */
(vi) end for
(vii) Compare and plot the training-validation accuracy of the 4 cases
(viii) Compare and plot the training-validation loss of the 4 cases.

Table 19.2 Performance of CNN for 4 different cases

Case  DA or BN        Training accuracy (%)  Validation accuracy (%)  Test accuracy (%)  Training time (s)
1     No DA, No BN    99.66                  99.02                    99.20              6688
2     Only DA         99.25                  99.20                    99.28              6700
3     Only BN         99.31                  99.17                    99.22              6692
4     Both DA and BN  99.26                  99.46                    99.68              6693

Fig. 19.3 Flowchart of the working methodology of case 4 with both data augmentation and batch normalization

The model is compiled with the Adam optimizer, the popular gradient descent optimization algorithm, with cross entropy as its loss function. Being a descendant of gradient descent with momentum and the RMSProp algorithm, Adam enjoys their combined benefits, resulting in memory efficiency and high performance of the model (Fig. 19.3).
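Putting the pieces together, a minimal Keras sketch of the case 4 architecture (both DA and BN) as described in this section might look as follows. Layer ordering follows the text; details such as padding, weight initialization, and the optimizer's default hyperparameters are assumptions.

```python
from tensorflow.keras import layers, models

def build_model():
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        # First set: two 3x3 convolutions with 32 filters, then BN and pooling.
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Dropout(0.4),
        # Second set: two 3x3 convolutions with 64 filters, then BN and pooling.
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Dropout(0.4),
        # Classifier head: 256-unit dense layer, dropout, softmax over 10 digits.
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```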

19.4 Results and Discussion

We use the training set to train the model, the validation set to tune the parameters during training, and finally the test set to check the accuracy of our model for all 4 cases. The number of epochs is 50 and the batch size is 32 for all 4 cases. The proposed model was implemented using Keras of the TensorFlow framework on Google Colab. We compared the effectiveness of the augmentation and normalization techniques both individually and combined in our CNN model, as shown in Fig. 19.4 and Table 19.2. Case 1 shows the results for our bare CNN model without data augmentation and batch normalization. We achieve a high final training accuracy of 99.66%, which decreases significantly on the validation and test sets. This shows that despite its high training accuracy, the model does not cope as well with unseen data and incurs a higher test loss. In case 2, we added data augmentation to the case 1 CNN architecture in the hope of diversifying the dataset and improving the intuition of the model. We got a 99.25% training accuracy, which is less than the training accuracy of case 1.


Fig. 19.4 a Training and validation accuracy, b Training and validation loss graphs of the 4 cases


This shows that we have successfully increased the difficulty and diversity of the dataset. We get a similar validation accuracy of 99.20%, which is better than that of case 1 and quite close to the training accuracy of case 2, suggesting a reduction in overfitting. The test accuracy came out as 99.28%, slightly higher than the training accuracy. Similarly, in case 3 we used the batch normalization technique to modify our case 1 model. Case 1 had signs of overfitting, so we added a normalization layer after each set of two convolution layers. We achieved a training accuracy of 99.31%, again less than the case 1 training accuracy. Validation accuracy was 99.17%, which is better than that of our base model. The final test accuracy was 99.22%, an improvement over the case 1 test accuracy and quite close to the case 3 training accuracy, showing that the technique can reduce overfitting and stabilize the training process, enhancing the model's capability to handle unseen data. For case 4, we added both data augmentation and batch normalization to our base CNN model to see if we could combine the desired results of the individual usages. The goal of this test was to obtain an ideal architecture for digit recognition that goes beyond the usual 98% a simple CNN can achieve and to make the model better at handling unseen data. We fed augmented data into the CNN model, and the normalization layers were placed after each set of two convolution layers, as in case 3. We achieved a training accuracy of 99.26% and an overall validation accuracy of 99.46%. The validation accuracy is much better than in the previous cases, showing that the training process has stabilized and overfitting has been reduced. The final test accuracy was found to be 99.68% for case 4, which is quite an improvement over our base CNN model and slightly higher than the training accuracy of the case 1 model. Thus, from the results, DA + BN performed better than the rest, so we adopt it and use it as our proposed CNN. Finally, in Table 19.3, we compared our model with three existing state-of-the-art CNN models, where our CNN clearly achieved superior performance. In [7], an MConv layer, built using a counter-harmonic mean (CHM) filter together with a convolution layer, is used. Alongside convolution, this layer also performs morphology, and the images resulting from this layer are of better quality. These MConv feature maps, along with pooling and dense layers, perform the overall classification of the MNIST and SVHN benchmark datasets, where different combinations of MConv layers are used to realize different amounts of dilation and erosion. Their two-erosion-layer architecture on the MNIST dataset performs best, reaching the highest overall accuracy of 99.29%. In [9], they perform an experimental comparison of different combinations of convolution, max pooling, dropout, and dense layers.

Table 19.3 Other CNN models and their approaches

Paper     | Approach                               | Accuracy (%)
[7]       | Morph-CNN with counter-harmonic filter | 99.29
[9]       | CNN (TensorFlow framework)             | 99.21
[10]      | CNN (DL4J framework)                   | 99.21
Our model | CNN with DA and BN                     | 99.68


Among the different 3- and 4-layer CNN models, they conclude that their 4-layer CNN model with the convolution, pooling, convolution, pooling, dropout, dense, softmax combination works best, with a high overall accuracy of 99.21%. In [10], all operations of CNN modeling, feature extraction, and classification are performed using the DL4J framework. Their model reaches an average accuracy of 99.21%.

19.5 Conclusion

Handwritten digit classification is an important problem in the emerging world of technology, and deep learning's CNN is one of its best solutions. In this paper, we successfully finalized an efficient architecture for classifying the MNIST dataset. After comparing the different combinations of data augmentation and batch normalization, we concluded that for a model with 4 convolution layers, with the first 2 layers having 32 filters and the latter 2 having 64 filters, the one using both data augmentation and batch normalization performs best, with a high test accuracy of 99.68%, higher than many of the existing techniques. The losses of all 4 models were compared as well. In the future, we wish to work on reducing the overall loss of the model as much as possible. Furthermore, we may extend our model to other CNN architectures such as hybrid CNNs, CNN with RNN, genetic algorithms, and other emerging modern techniques. More accurate and efficient CNN models can be built by experimenting with different optimization techniques, convolution layers, batch sizes, etc.

References

1. Minaee, S., Azimi, E., Abdolrashidi, A.: FingerNet: pushing the limits of fingerprint recognition using convolutional neural network. arXiv:1907.12956v1
2. Ahlawat, S., Choudhary, A., Nayyar, A., Singh, S., Yoon, B.: Improved handwritten digit recognition using convolutional neural networks (CNN). Sensors 20(12), 3344 (2020)
3. Pandya, B., Cosma, G., Alani, A.A., Taherkhani, A., Bharadi, V., McGinnity, T.M.: Fingerprint classification using a deep convolutional neural network. In: 2018 4th International Conference on Information Management (ICIM), pp. 86–91. IEEE (2018)
4. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167v3 (2015)
5. Yuan, X., Feng, Z., Norton, M., Li, X.: Generalized batch normalization: towards accelerating deep neural networks. Proc. AAAI Conf. Artif. Intell. 33(01), 1682–1689 (2019)
6. Namasudra, S., Roy, P., Vijayakumar, P., Audithan, S., Balusamy, B.: Time efficient secure DNA based access control model for cloud computing environment. Futur. Gener. Comput. Syst. 73, 90–105 (2017)
7. Mellouli, D., Hamdani, T.M., Sanchez-Medina, J.J., Ayed, M.B., Alimi, A.M.: Morphological convolutional neural network architecture for digit recognition. IEEE Trans. Neural Netw. Learn. Syst. 30(9), 2876–2885 (2019)
8. Arif, R.B., Siddique, M.A.B., Khan, M.M.R., Oishe, M.R.: Study and observation of the variations of accuracies for handwritten digits recognition with various hidden layers and epochs using convolutional neural network. In: 4th International Conference on Electrical Engineering and Information and Communication Technology, pp. 112–117 (2018)
9. Siddique, F., Sakib, S., Siddique, M.A.B.: Recognition of handwritten digit using convolutional neural network in python with tensorflow and comparison of performance for various hidden layers. In: 5th International Conference on Advances in Electrical Engineering, pp. 541–546 (2019)
10. Ali, S., Shaukat, Z., Azeem, M., Sakhawat, Z., Mahmood, T., Rehman, K.U.: An efficient and improved scheme for handwritten digit recognition based on convolutional neural network. SN Appl. Sci. 1, 1125 (2019)
11. El-Sawy, A., El-Bakry, H., Loey, M.: CNN for handwritten arabic digits recognition based on LeNet-5. In: Hassanien, A., Shaalan, K., Gaber, T., Azar, A., Tolba, M. (eds.) Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2016. AISI 2016. Advances in Intelligent Systems and Computing, vol. 533, pp. 566–575. Springer (2017)
12. Shawon, A., Jamil-Ur Rahman, M., Mahmud, F., Arefin Zaman, M.M.: Bangla handwritten digit recognition using deep CNN for large and unbiased dataset. In: 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pp. 1–6. IEEE (2018)
13. Singh, P., Verma, A., Chaudhari, N.S.: Deep convolutional neural network classifier for handwritten devanagari character recognition. In: Satapathy, S., Mandal, J., Udgata, S., Bhateja, V. (eds.) Information Systems Design and Intelligent Applications. Advances in Intelligent Systems and Computing, vol. 434, pp. 551–561. Springer, New Delhi (2016)
14. Saha, C., Faisal, R., Rahman, M.: Bangla handwritten digit recognition using an improved deep convolutional neural network architecture. In: International Conference on Advanced Information and Communication Technology 2020. IEEE, Dhaka (2020)
15. Huynh-The, T., Hua, C., Pham, Q., Kim, D.: MCNet: an efficient CNN architecture for robust automatic modulation classification. IEEE Commun. Lett. 24, 811–815 (2020)
16. Lin, G., Shen, W.: Research on convolutional neural network based on improved relu piecewise activation function. Procedia Comput. Sci. 131, 977–984 (2018)
17. Liu, K., Kang, G., Zhang, N., Hou, B.: Breast cancer classification based on fully-connected layer first convolutional neural networks. IEEE Access 6, 23722–23732 (2018)
18. Wang, J., Perez, L.: The effectiveness of data augmentation in image classification using deep learning. arXiv:1712.04621v1 (2017)
19. Mikolajczyk, A., Grochowski, M.: Data augmentation for improving deep learning in image classification problem. In: International Interdisciplinary PhD Workshop (IIPhDW) 2018, pp. 117–122. IEEE (2018)
20. Shijie, J., Ping, W., Peiyi, J., Siping, H.: Research on data augmentation for image classification based on convolution neural networks. In: Chinese Automation Congress (CAC) 2017, pp. 4165–4170. IEEE (2017)
21. Data augmentation applying methodology. https://machinelearningmastery.com/how-to-configure-image-data-augmentation-when-training-deep-learning-neural-networks/, last accessed 18 Sept 2021
22. LeCun, Y., Cortes, C., Burges, C.J.C.: The MNIST database of handwritten digits (2012)
23. Kussul, E., Baidyk, T.: Improved method of handwritten digit recognition tested on MNIST database. Image Vis. Comput. 22(12), 971–981 (2004)
24. MNIST page. https://web.stanford.edu/~hastie/CASI_files/DATA/MNIST.html, last accessed 18 Sept 2021
25. Namasudra, S., Chakraborty, R., Kadry, S., Manogaran, G., Rawal, B.S.: FAST: fast accessing scheme for data transmission in cloud computing. Peer-to-Peer Netw. Appl. 14, 2430–2442 (2021)
26. Pavithran, P., Mathew, S., Namasudra, S., Lorenz, P.: A novel cryptosystem based on DNA cryptography and randomly generated mealy machine. 104, 102160 (2021)

Chapter 20

Double Sampling Plans for Life Test Based on Marshall–Olkin Extended Exponential Distribution

R. Vijayaraghavan, C. R. Saranya, and K. Sathya Narayana Sharma

Abstract Acceptance sampling or sampling inspection is an essential quality control technique that describes the rules and procedures for making decisions about the acceptance or rejection of a batch of commodities based on the inspection of one or more samples. When the quality of an item is evaluated based on the lifetime of the item, which can be adequately described by a continuous-type probability distribution, the plan is known as a life test sampling plan. In this article, a reliability double sampling plan with smaller acceptance numbers is formulated under time censoring by assuming that the lifetime of the item follows the Marshall–Olkin extended exponential distribution. A procedure for the selection of the plan parameters, indexed by the acceptable mean life and operating ratio, that protects both the producer and the consumer is evolved.

20.1 Introduction

Product control, also termed acceptance sampling, is a statistical methodology widely used in industry to make decisions regarding the disposition of lots of manufactured items based on the evaluation of samples. The lifetime of a commodity is a vital quality feature in assessing the quality of products, as both producers and consumers prefer products with long life. The process of assessing the lifetime of a product or item through experiments is called a life test. Sampling inspection procedures adopted for choosing between acceptance and rejection of a batch of items based on sampled lifetime information are known as life test or reliability sampling plans. The objective of a reliability sampling plan is to determine


whether the lifetimes of products reach the specified standard or not, based on the sampled lifetime observations. A central feature of a life test plan is that it involves the lifetime characteristic as the random variable, which can be adequately described by one of the skewed-type probability distributions such as the exponential, Weibull, lognormal, or gamma distributions. The application of many continuous distributions in the design and assessment of sampling inspection plans for life tests has been significantly marked out in the literature of product control. While important contributions have been made during the past five decades in the development of life test sampling plans employing the exponential, Weibull, lognormal, and gamma distributions as lifetime distributions, the literature also provides applications of several compound distributions for modeling lifetime data. Epstein [1, 2], Handbook H-108 [3], and Goode and Kao [4–6] present the theory and development of reliability sampling plans based on the exponential and Weibull distributions. Gupta and Groll [7] detailed procedures for the construction of sampling inspection plans for life tests based on the gamma distribution. Schilling and Neubauer [8] provided a detailed account of such plans. The recent literature on reliability sampling plans includes the works of Gupta [9], Kantam et al. [10], Baklizi and El Masri [11], Jun et al. [12], Tsai and Wu [13], Balakrishnan et al. [14], Aslam and Jun [15], Aslam et al. [16], Kalaiselvi and Vijayaraghavan [17], Kalaiselvi et al. [18], Loganathan et al. [19], Vijayaraghavan et al. [20], Vijayaraghavan and Uma [21], and Al-Zahrani [22]. The Marshall–Olkin extended exponential distribution (MOEED), introduced by Marshall and Olkin [23], is a generalization of the exponential distribution and has a monotone failure rate. In some applications in biological, agricultural, and entomological studies, the failure rate function of the underlying distribution may be an inverted bathtub-shaped (unimodal) hazard function. When a probability distribution for the lifetime variable has a failure rate function that can take various shapes, it is a natural choice to adopt in practice. According to Marshall and Olkin [23, 24], the MOEED can have a failure rate that decreases with time, a fairly constant failure rate, or a failure rate that increases with time, indicative of infantile or early failures, useful life or random failures, and wear-out failures, respectively. In this article, a specific reliability double sampling plan is devised considering the Marshall–Olkin extended exponential distribution as the model for lifetime. A procedure for the selection of such plans allowing only a small number of failures, indexed by the acceptable and unacceptable mean life, is evolved.

20.2 Marshall–Olkin Extended Exponential Distribution

Let T represent the lifetime random variable. Assume that T follows the Marshall–Olkin extended exponential distribution (MOEED). The probability density function and the cumulative distribution function of T are defined, respectively, by

$$f(t; \gamma, \theta) = \frac{\gamma\, e^{-t/\theta}}{\theta\,\bigl(1 - \bar{\gamma}\, e^{-t/\theta}\bigr)^{2}}, \qquad t, \gamma, \theta > 0, \quad (20.1)$$

and

$$F(t; \gamma, \theta) = \frac{1 - e^{-t/\theta}}{1 - \bar{\gamma}\, e^{-t/\theta}}, \qquad t, \gamma, \theta > 0, \quad (20.2)$$

where $\gamma$ is the shape parameter, $\theta$ is the scale parameter, and $\bar{\gamma} = 1 - \gamma$. The mean lifetime of the MOEED is given by

$$\mu = E(T) = -\frac{\gamma\, \theta \log \gamma}{1 - \gamma}. \quad (20.3)$$

The failure proportion, p, of products is defined by

$$p = P(T \le t) = F(t; \gamma, \theta). \quad (20.4)$$
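To make the distribution concrete, the helper functions below evaluate (20.1)–(20.4) numerically. This is an illustrative Python/NumPy sketch under the parameterization above, not code from the paper; the function names are our own.

```python
import numpy as np

def moeed_pdf(t, gamma, theta):
    """Density (20.1) of the Marshall-Olkin extended exponential distribution."""
    gbar = 1.0 - gamma
    e = np.exp(-t / theta)
    return gamma * e / (theta * (1.0 - gbar * e) ** 2)

def moeed_cdf(t, gamma, theta):
    """Distribution function (20.2); also the failure proportion p of (20.4)."""
    gbar = 1.0 - gamma
    e = np.exp(-t / theta)
    return (1.0 - e) / (1.0 - gbar * e)

def moeed_mean(gamma, theta):
    """Mean lifetime (20.3); undefined at gamma = 1, where the limit is theta."""
    return -gamma * theta * np.log(gamma) / (1.0 - gamma)

# For gamma = 2 the mean reduces to 2 * theta * log(2):
theta = 1000.0
print(moeed_mean(2.0, theta))                                  # ~1386.29
print(moeed_cdf(0.003 * moeed_mean(2.0, theta), 2.0, theta))   # ~0.002
```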

20.3 Reliability Double Sampling Plans with Smaller Number of Failures

In costly or destructive testing, a sampling plan with zero or few failures in the samples is often employed. It can be observed from Fig. 20.1 that the single sampling plan for life tests with acceptance number c = 0, designated SSP(n, 0), does not provide security to the producer against the acceptable reliable life of the product.

Fig. 20.1 Operating characteristic curves of single and double sampling plans for life tests based on MOEED having smaller acceptance numbers
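As a hedged illustration of these shapes, the snippet below plots binomial OC curves for a c = 0 plan, a c = 1 plan, and a double sampling plan of the kind discussed in this section; the sample sizes n = 100 and n2 = 2n are arbitrary choices for display only, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

p = np.linspace(1e-4, 0.08, 400)   # proportion failing before time t
n = 100                            # hypothetical sample size, for display only

pa_c0 = binom.pmf(0, n, p)   # SSP(n, 0): accept only on zero failures
pa_c1 = binom.cdf(1, n, p)   # SSP(n, 1): accept on at most one failure
pa_dsp = binom.pmf(0, n, p) + binom.pmf(1, n, p) * binom.pmf(0, 2 * n, p)

plt.plot(p, pa_c0, label="SSP(n, 0)")
plt.plot(p, pa_c1, label="SSP(n, 1)")
plt.plot(p, pa_dsp, label="DSP-(n1, n2) with n1 = n, n2 = 2n")
plt.xlabel("proportion failing before t, p")
plt.ylabel("probability of acceptance")
plt.legend()
plt.show()
```

The DSP curve necessarily lies between the other two, since its acceptance probability adds a nonnegative term to that of the c = 0 plan and is bounded above by that of the c = 1 plan.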


The OC curves of sampling plans with c = 0 are in a distinctly poor shape, which does not ensure protection to producers, though it safeguards the interests of consumers against unacceptable reliable life of the product. It can be demonstrated that single sampling plans admitting one or more failures in a sample of items lack the undesirable characteristics of SSP(n, 0), but require larger sample sizes. According to Dodge [25], single sampling plans (SSP) with a small number of failures, i.e., acceptance numbers c = 0 and c = 1, can be used. But the OC curves of c = 0 and c = 1 plans expose a conflict of interest between the producer and the consumer: c = 0 plans protect the consumer with a lesser risk of accepting a lot having unacceptable reliable life, while c = 1 plans protect the producer with a lesser risk of rejecting a lot having acceptable reliable life. Such conflict can be resolved if one is able to design a life test plan whose OC curve lies between the OC curves of the c = 0 and c = 1 plans. Vijayaraghavan [26] observed that the OC curve of the double sampling plan DSP-(0, 1) lies between those of the SSPs with c = 0 and c = 1, and that DSP-(0, 1) is an alternative to the ChSP-1 plan. It can also be observed from Fig. 20.1 that there is a wider gap to be filled between the SSPs with c = 0 and c = 1. Hence, it is obviously desirable to determine a plan whose OC curve is expected to lie between those of the c = 0 and c = 1 plans. A double sampling plan with a1 = 0, r1 = 2 and a2 = 1, designated DSP − (n1, n2), overcomes the shortcoming of c = 0 plans to a greater degree by providing an appropriate shape of the OC curve, which is considered satisfactory to both producer and consumer. A special feature of DSP − (n1, n2) is that its OC curve coincides with the OC curve of the c = 1 single sampling plan at the upper portion and with the OC curve of the c = 0 single sampling plan at the lower portion. This feature is of much help in selecting an optimum DSP − (n1, n2) that protects the producer against lot rejection for the specified acceptable reliable life and the consumer against lot acceptance for the specified unacceptable reliable life.

The operating procedure of DSP − (n1, n2) is as follows: A sample of n1 items is taken from a given lot and inspected. If zero failures are found, i.e., m1 = 0, while inspecting the n1 items, the lot is accepted; if one failure is found, i.e., m1 = 1, a second sample of n2 items is taken and the number of failures, m2, is observed. If no failures are found, i.e., m2 = 0, while inspecting the n2 items, the lot is accepted; if one or more failures are found, i.e., m2 ≥ 1, the lot is rejected.

Associated with DSP − (n1, n2) are the performance measures, called the OC and ASN functions, which are, respectively, expressed by

$$P_a(p) = p(0 \mid n_1, p) + p(1 \mid n_1, p)\, p(0 \mid n_2, p) \quad (20.5)$$

and

$$ASN(p) = n_1 + n_2\, p(1 \mid n_1, p), \quad (20.6)$$

where p is the proportion of items failing before the termination time t, and p(0 | n1, p), p(0 | n2, p) and p(1 | n1, p) are obtained either from the binomial distribution or from the Poisson distribution function.
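Under the binomial model, p(0 | n, p) = (1 − p)^n and p(1 | n, p) = n p (1 − p)^(n−1), so (20.5) and (20.6) can be evaluated directly. The sketch below is an illustrative implementation under that binomial assumption, with function names of our choosing.

```python
from scipy.stats import binom

def oc_dsp(p, n1, n2):
    """OC function (20.5) of DSP-(n1, n2): accept on zero failures in the
    first sample, or on one failure there and zero in the second sample."""
    return binom.pmf(0, n1, p) + binom.pmf(1, n1, p) * binom.pmf(0, n2, p)

def asn_dsp(p, n1, n2):
    """ASN function (20.6): the second sample is drawn only when m1 = 1."""
    return n1 + n2 * binom.pmf(1, n1, p)
```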

20.4 Search Procedure for the Selection of DSP − (n1, n2)

In reliability sampling, a specific sampling plan for life tests can be obtained by stipulating the requirement that the OC curve should pass through two points, namely (μ0, α) and (μ1, β), where μ0 and μ1 are the acceptable and unacceptable mean life associated with the risks α and β, respectively. In such a case, the operating ratio R = μ0/μ1, the ratio of the acceptable mean life to the unacceptable mean life, can be used as the measure of discrimination. An optimum double sampling plan for life tests can be obtained by satisfying the following two conditions, with the producer's and consumer's risks fixed at α and β, respectively, while the ASN is minimized:

$$P_a(\mu_0) \ge 1 - \alpha \quad (20.7)$$

and

$$P_a(\mu_1) \le \beta. \quad (20.8)$$

Tables 20.1, 20.2, 20.3, 20.4, 20.5, 20.6, 20.7, 20.8, 20.9 and 20.10 display the optimum values of n1 and n2, along with the ASN, of the double sampling plans for life tests under the MOEED for different sets of values of γ. The optimum plans given in the tables are obtained by a search procedure using expression (20.5) for the OC function and expression (20.4) for the proportion of product failing, in an appropriate manner. The plans, which are tabulated against the operating ratio R = μ0/μ1 and t/μ0, have a maximum of 5 percent producer's risk and a maximum of 10 percent consumer's risk.
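A hedged sketch of such a search procedure follows: for given γ, t/μ0, and R, the two mean-life points are converted to failure proportions via (20.3)–(20.4), and (n1, n2) pairs are scanned for a minimum-ASN plan satisfying (20.7) and (20.8). It reuses moeed_cdf, moeed_mean, oc_dsp, and asn_dsp from the sketches above; evaluating the ASN at p0 is our assumption, and this is a reconstruction of the idea rather than the authors' exact code.

```python
def find_dsp(gamma, t_over_mu0, R, alpha=0.05, beta=0.10, n_max=1000):
    """Grid search for a minimum-ASN DSP-(n1, n2) meeting (20.7) and (20.8)."""
    ratio = moeed_mean(gamma, 1.0)                        # mu / theta for this gamma
    p0 = moeed_cdf(t_over_mu0 * ratio, gamma, 1.0)        # p at mu = mu0, via (20.4)
    p1 = moeed_cdf(t_over_mu0 * R * ratio, gamma, 1.0)    # p at mu = mu1 = mu0 / R
    best = None
    for n1 in range(1, n_max):
        for n2 in range(1, n_max):
            if oc_dsp(p0, n1, n2) < 1.0 - alpha:
                break   # (20.7) fails here and only worsens as n2 grows
            if oc_dsp(p1, n1, n2) <= beta:
                asn = asn_dsp(p0, n1, n2)   # ASN evaluated at p0 (an assumption)
                if best is None or asn < best[2]:
                    best = (n1, n2, asn)
                break   # the smallest feasible n2 minimizes the ASN for this n1
    return best

# Illustration 2 of Sect. 20.5: gamma = 2.0, t/mu0 = 0.003, R = 16
print(find_dsp(2.0, 0.003, 16))   # a plan close to (70, 136) of Table 20.3
```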

20.5 Procedure for Selection of DSP − (n1, n2)

A simple procedure for selecting the parameters of double sampling plans for life tests is given below:

Step 1: Specify the value of the parameter γ or its estimate.
Step 2: Specify the values of t/μ0 and t/μ1 with α = 0.05 and β = 0.10, respectively.
Step 3: Find the operating ratio, R = μ0/μ1.

Table 20.1 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.001. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.2 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.002. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.3 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.003. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.4 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.004. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.5 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.005. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.6 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.006. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.7 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.007. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.8 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.008. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.9 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.009. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.

Table 20.10 Optimum DSP − (n1, n2) with ASN for life tests based on Marshall–Olkin extended exponential distribution for the specified values of shape parameter γ and t/μ0 = 0.010. Entries n1, n2, ASN are tabulated for R = μ0/μ1 = 12.0 to 40.0 (rows) and γ = 0.50, 0.75, 1.50, 2.00, 2.50, 3.00 (columns). Key: n1, n2, ASN.


Step 4: From the table corresponding to the specified t/μ0 (Tables 20.1–20.10), for the specified value of γ, choose the values of n1 and n2 against the value of the operating ratio closest to the R found in Step 3.

Thus, the values of n1 and n2 constitute the required double sampling plan for life tests allowing a maximum of one failed item. The following illustrations demonstrate how an optimum plan is chosen for a given set of entry parameters, namely t/μ0 and the operating ratio μ0/μ1.

Numerical Illustration 1. Assume that a double sampling plan for life tests is to be instituted. It is assumed that the lifetime of the components is a random variable distributed according to a MOEED with shape parameter γ = 1.5. It is expected that the plan shall provide the desired degree of discrimination measured in terms of the operating ratio R = 21, ensure protection to the producer in terms of the acceptable mean life μ0 = 75000 hours with an associated risk of 5%, and ensure protection to the consumer against the unacceptable mean life μ1 = μ0/R ≈ 3571 hours with an associated risk of 10%. Suppose that the experimenter wishes to terminate the life test at t = 75 hours. Entering Table 20.1 with R = μ0/μ1 = 21 and t/μ0 = 0.001, the optimum double sampling plan is chosen with the sample sizes n1 = 136 and n2 = 383, which yield ASN = 174. Thus, the desired plan for the given conditions is implemented as given below:

1. Select a random sample of n1 = 136 items from the lot.
2. Conduct the life test on each of the sampled items.
3. Observe the number, x, of failures before reaching the termination time.
4. Terminate the life test once the termination time, t0 = 75 hours, is reached.
5. If no failures are observed in the 136 items tested by time t0, accept the lot; if one failure is observed, select a random sample of n2 = 383 items.
6. Conduct the life test on each of the 383 items. Accept the lot when there are no failures among the 383 items; if one or more failures are observed, reject the lot.
7. Treat the items which survive beyond time t0 = 75 hours as passed.

Numerical Illustration 2. Suppose that an experimenter is interested in implementing a double sampling plan for life tests for taking a decision about the disposition of a submitted lot of manufactured products whose lifetime follows a MOEED with shape parameter γ = 2.0. It is assumed that the life test will be terminated at t = 45 h. The acceptable and unacceptable proportions of the lot failing before time t are, respectively, prescribed as p0 = 0.002 and p1 = 0.03325, with the associated risks fixed at the levels α = 0.05 and β = 0.10. The values of t/μ0 and t/μ1 corresponding to p0 = 0.002 and p1 = 0.03325 are determined as t/μ0 = 0.003 and t/μ1 = 0.048. Hence, the desired operating ratio is μ0/μ1 = 16. When entered with these indices, Table 20.3 provides the optimum sample sizes of the double sampling plan as n1 = 70 and n2 = 136 with ASN = 88, which satisfy the conditions


(20.7) and (20.8). The acceptable mean life and unacceptable mean life are obtained as μ0 = t/0.003 = 15000 hours and μ1 = t/0.048 = 937.5 hours, respectively.
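Using the helper functions sketched in Sects. 20.2–20.4 (oc_dsp, asn_dsp), the plan of this illustration can be checked directly under the binomial model; the printed values are approximations.

```python
p0, p1 = 0.002, 0.03325
print(oc_dsp(p0, 70, 136))   # ~0.962 >= 0.95, so condition (20.7) holds
print(oc_dsp(p1, 70, 136))   # ~0.096 <= 0.10, so condition (20.8) holds
print(asn_dsp(p0, 70, 136))  # ~86.6, in line with the tabulated ASN of 88
```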

20.6 Conclusion

Double sampling plans for reliability life tests are proposed based on the Marshall–Olkin extended exponential distribution; this methodology is widely applicable in product life test experiments. Tables are presented for choosing the parameters of reliability sampling plans indexed by the acceptable mean life and the operating ratio for a preassigned time t, with a few specified values of the shape parameter γ. Industrial practitioners can adopt this procedure for costly or destructive life tests and can develop the required plans for other choices of γ.


Chapter 21

Detection of Objects Using a Fast R-CNN-Based Approach

Amlan Dutta, Ahmad Atik, Mriganka Bhadra, Abhijit Pal, Md Akram Khan, and Rupak Chakraborty

Abstract Object detection is a current topic of interest concerned with locating objects, and reducing the run time of the algorithms remains a challenge in this field. Deep convolutional neural networks (CNNs) like SPPnet, Fast R-CNN, etc. have been effectively proposed to reduce the execution time. Encouraged by these networks, a Region Proposal Network (RPN) has been proposed here to portray features of images with less computational cost and time. This network is capable of predicting the bounds of an object and generates objectness scores at each position. In the proposed network, Fast R-CNN is also integrated with the RPN so that high-quality region proposals can be generated and the entire network can easily understand which area should be focused on. The proposed network has been tested on the MS COCO and PASCAL VOC 2007 and 2012 datasets. Results infer that the proposed RPN along with Fast R-CNN outperforms some popular models like deep VGG-16 (Du in Journal of Physics 1004(012029):1–9 2018), ImageNet, YOLO, etc. in terms of all the accuracy parameters.

21.1 Introduction

In computer vision, detection of objects is the process of finding objects of known classes in an image. Objects can be localized and interpreted in various ways: by drawing bounding boxes around objects, or by labelling the image pixels that are identified as objects. Objects can be detected in multiple ways: one-step, two-step, and heatmap-based object detection. Object detection can be implemented with networking algorithms such as Mask R-CNN, Faster R-CNN, and R-CNN, where images are processed and objects are detected. Both networks are used to train and test images after rescaling them such that the size of their shorter side becomes 600 px. The total stride for both the ZF and VGG nets on the last convolutional layer becomes 16 px, providing good results [1, 2]. Novel Region Proposal Networks (RPNs) have been introduced which share their convolutional


layers with other popular object detection networks so that the computational cost of computing proposals becomes small [3–5]. The SSD with MobileNetV1 has a high detection speed but low accuracy compared to Faster R-CNN with InceptionV2, which has lower speed but good accuracy [6, 7]. An algorithmic change, computing the proposals with a deep CNN, has been shown to lead to an appropriate and proper solution [1, 8, 9]. The effectiveness of region proposals in images is verified by comparing different object detection techniques: Fast R-CNN shows the highest detection rate for the panel and speech balloon, whereas Faster R-CNN shows the highest detection rate for character, face, and text objects [10, 11]. Our approach is based on the current revolutionary Faster R-CNN model, and two domain adaptation components were designed with the aim of reducing the domain discrepancy [12–14]. A multi-component CNN model has been proposed, each component of which is steered to focus on a different region of the object, thus enforcing diversification of the appearance factors [15–17]. Here, Sect. 21.2 describes the problem formulation in the object detection field. The proposed approach is discussed in Sect. 21.3. The base network is described in Sect. 21.4. Section 21.5 explains the results and discussion of the proposed approach. Finally, Sect. 21.6 draws the conclusion of the paper and its future scope.

21.2 Problem Formulation

Faster R-CNN is the widely used version of R-CNN. It is a method for detecting objects which extracts features from a pre-trained CNN. The network is subdivided into two trainable sub-networks.

21.2.1 Architecture of Region Proposal Network (RPN)

• At first, the feature map is obtained by passing the image as input to the backbone CNN. The input image is rescaled so that it does not exceed 1000 pixels on the longer side and 600 pixels on the shorter side.
• The features of the backbone network are much smaller than the actual input image, reduced according to the stride of the backbone network.
• Taking the output feature map into consideration, the network identifies the object in the input image and approximates its position and dimensions. An anchor set is then placed in the loaded picture for every position of the resultant feature map; this set represents probable objects of different sizes and aspect ratios at that location.
• The output feature map is checked pixel by pixel to find the corresponding anchors and determine whether they contain objects or not, and the frames of the bounding boxes are filtered to give the regions of interest.


Fig. 21.1 Traditional region proposal network (RPN)

• The process is followed by RoI pooling, an upstream classifier, and a bounding box regressor, as in Fast R-CNN (Fig. 21.1).

21.3 The Proposed Approach

The overall framework of the proposed approach is demonstrated in Fig. 21.2; red boxes in Fig. 21.2 mark where the default Faster R-CNN framework is modified by the proposed enhancements. In the initial step, the MobileNet architecture is used to build the base convolution layers instead of the VGG-16 architecture of the default Faster R-CNN framework. The soft-NMS algorithm is utilized to address the issue of heavy car suppression in the RPN. A context-aware RoI

Fig. 21.2 The proposed approach framework


pooling layer replaces the RoI pooling layer to preserve the actual structure of small cars. To classify the proposals, the MobileNet architecture is also used at the closing stage to distinguish cars from the background; it likewise refines the bounding box of every detected car. Our approach is explained below.

21.4 Base Network

VGG-16 is used as the base network by the default Faster R-CNN. Since the base network accounts for 80–85% of the forward time, replacing it makes the whole framework much faster. The MobileNet architecture splits a standard convolution into a 3 × 3 depth-wise convolution and a 1 × 1 pointwise convolution, minimizing the number of parameters and the cost of calculation. MobileNet introduces two coefficients to tune the speed/precision trade-off: a resolution coefficient and a width coefficient. In this paper, MobileNet is adopted instead of VGG-16 to create the base convolutional layers in the default Faster R-CNN object detection framework (Table 21.1). The depth-wise separable convolution has two layers, namely depth-wise convolutions and pointwise convolutions. The number of output feature map channels, the square of the kernel size, and the computational cost reduction are commensurate with each other.

Table 21.1 Comparison of the MobileNet model with VGG

Used models | ImageNet precision (%) | Multiply-adds (in millions) | Attributes (in millions)
MobileNet | 69.2 | 571 | 5.3
VGG-16 | 72.7 | 15,335 | 134

21.4.1 The Region Proposal Network (RPN)

At first, the RPN creates a group of anchor boxes from the convolutional feature map generated by the base network. Three anchor box scales of 128, 256, and 512 and ratios of 1:1, 1:2, and 2:1 are utilized at every anchor position, as a trade-off between recall and processing speed, yielding nine anchors at each position of the sliding window. As shown in Fig. 21.3, there are 1764 anchors for a 14 × 14 convolutional feature map. Then, for each anchor box, the RPN produces two discrete outputs. The first one is the objectness score, which defines the probability that an object is present in the anchor.
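The anchor grid described above is easy to reproduce. The short Python sketch below is our illustration (the function name and coordinate convention are assumptions, not the authors' code); it enumerates the nine anchors per position of a 14 × 14 feature map with stride 16, matching the 1764 anchors mentioned in the text.

```python
import numpy as np

def generate_anchors(fm_h=14, fm_w=14, stride=16,
                     scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Enumerate (x1, y1, x2, y2) anchor boxes for every feature-map position."""
    anchors = []
    for y in range(fm_h):
        for x in range(fm_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride  # anchor centre in pixels
            for s in scales:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5        # preserve the area s * s
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.asarray(anchors)

print(generate_anchors().shape)  # (1764, 4) = 14 x 14 positions x 9 anchors
```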


Fig. 21.3 RPN generated anchor boxes

Fig. 21.4 The network of region proposal

As shown in Fig. 21.3, the second output, bounding box regression, is used to adjust the anchors to fit the objects better. A valid set of proposals for vehicles is generated from the final proposal coordinates and their objectness scores. Proposals normally end up overlapping on the same object, as the anchors do. The problem of duplicate proposals is solved by using the soft non-maximum suppression (NMS) algorithm; in advanced object detection, Faster R-CNN also separates similar proposals. Conventional non-maximum suppression removes any proposal whose overlap with a conquering proposal exceeds a predetermined threshold. With soft-NMS, adjacent proposals overlapping conquering proposals are decayed rather than fully extinguished (Figs. 21.4 and 21.5).

Fig. 21.5 Observation result with (I) soft-NMS and (II) NMS. Due to dense car occlusion, NMS kept only one vehicle in the observation outcome, whereas soft-NMS detected the two vehicles individually

21.4.1.1 Soft Non-Maximum Suppression Algorithm

Let Pm = {p1, p2, p3, …, pn} designate the set of output proposals, in which proposals are ordered by objectness score. The adjacent-proposal threshold T is fixed by cross-validation at 0.5. Let Si denote the objectness score of pi, i.e., the highest value in the classification score vector of pi. Let pi denote the conquering proposal and pj an adjacent proposal of pi. The modified objectness score of pj (denoted S_j^u) is calculated by the formula below:

S_j^u = S_i (1 − O(p_i, p_j))  (21.1)


Fig. 21.6 The soft-NMS algorithm flowchart

Here, O(p_i, p_j) is the intersection over union (IoU) between the conquering proposal p_i and the adjacent proposal p_j, calculated as follows:

O(p_i, p_j) = area(p_i ∩ p_j) / area(p_i ∪ p_j)  (21.2)

The structural outline of the soft-NMS algorithm is shown in Fig. 21.6.
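The flowchart of Fig. 21.6 can be summarised in a few lines of code. The sketch below is our hedged illustration, not the authors' implementation: it applies the common linear soft-NMS convention in the spirit of Eqs. (21.1)–(21.2), decaying each adjacent proposal's own score by (1 − IoU) whenever its overlap with the conquering proposal exceeds T = 0.5.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes, Eq. (21.2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, t=0.5, score_min=1e-3):
    """Linear soft-NMS: decay the neighbours of each conquering proposal."""
    scores = list(scores)
    live = list(range(len(scores)))
    keep = []
    while live:
        m = max(live, key=lambda i: scores[i])     # conquering proposal
        keep.append(m)
        live.remove(m)
        for i in live:
            o = iou(boxes[m], boxes[i])
            if o > t:                              # adjacent proposal
                scores[i] *= (1.0 - o)             # decay instead of deleting
        live = [i for i in live if scores[i] >= score_min]
    return keep
```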

21.4.2 Context-Aware RoI Pooling

The RoI pooling layer is utilized for resizing proposals of varying dimensions to a constant size. To turn the features inside any valid region into a small feature


Fig. 21.7 CARoI pooling strategy. a Proposals and feature maps produced by RPN and base network; b Conventional RoI pooling operation. c Context-aware RoI pooling operation

map with a fixed dimensional magnitude of H × W, the RoI pooling layer applies max pooling. RoI max pooling divides the RoI proposal into an H × W grid of subcells of roughly size (h/H) × (w/W) and max-pools the values in each subcell into the corresponding output grid cell. If a proposal is smaller than H × W, it could be enlarged to H × W by appending duplicate values to fill the generated space. Appending duplicate values into tiny proposals, however, is undesirable, mainly for tiny cars, because it breaks down the actual structure of the tiny cars, and the efficiency of detecting small cars is then reduced. For proposals smaller than the fixed size of the output feature map, we therefore use a deconvolution operation, defined by the following formula:

y_k = F_k ⊕ h_k  (21.3)

Here (Fig. 21.7), y_k is the output feature map, F_k the input proposal, and h_k the kernel of the deconvolution operation.
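For concreteness, here is a minimal single-channel sketch of the grid division just described (our illustration; real implementations operate on multi-channel tensors, and the floor/ceil subcell bounds are one common convention):

```python
import numpy as np

def roi_max_pool(patch, out_h=7, out_w=7):
    """Max-pool an (h, w) RoI patch into an out_h x out_w grid of subcells."""
    h, w = patch.shape                                   # assumes h >= 1 and w >= 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        y0, y1 = (i * h) // out_h, -(-(i + 1) * h // out_h)   # floor / ceil bounds
        for j in range(out_w):
            x0, x1 = (j * w) // out_w, -(-(j + 1) * w // out_w)
            out[i, j] = patch[y0:y1, x0:x1].max()        # subcell of ~ (h/H) x (w/W)
    return out

print(roi_max_pool(np.arange(140.0).reshape(10, 14)).shape)   # (7, 7)
```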

21.4.3 Classifier

The final stage of the proposed framework is the classifier. After the attributes of each individual proposal are extracted through context-aware RoI pooling, these attributes are used for classification. The classifier is divided into two fully connected layers: the first is a box classification layer and the second is a box regression layer. The first fully connected layer feeds a softmax layer to calculate the probabilities of the cars and the background. The other fully connected layer, with a linear activation function, regresses the bounding box of each detected car. Each and


every convolutional layer is followed by a ReLU layer and a batch normalization layer.
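A minimal Keras sketch of the two-headed classifier just described follows; this is our hedged illustration (the feature dimension and layer names are assumptions, not the authors' exact configuration):

```python
import tensorflow as tf

def build_classifier(feature_dim=1024):
    """Two fully connected heads on top of the pooled RoI features."""
    roi_features = tf.keras.Input(shape=(feature_dim,))
    cls = tf.keras.layers.Dense(2, activation="softmax",
                                name="box_cls")(roi_features)   # car vs background
    reg = tf.keras.layers.Dense(4, activation="linear",
                                name="box_reg")(roi_features)   # bounding-box offsets
    return tf.keras.Model(roi_features, [cls, reg])

model = build_classifier()
model.summary()
```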

21.5 Results and Discussion

The COCO dataset is utilized here to compare the outcomes of the proposed approach with different approaches. The experiment has been implemented on the Google Colab platform with a TPU accelerator. The deep CNN frameworks have been implemented using TensorFlow, whereas OpenCV has been chosen for processing real-time data.

21.5.1 Dataset

The popular Common Objects in Context (COCO) dataset has been chosen here for the experiments. This object detection dataset is widely used nowadays for benchmark comparison. In addition, a new custom object detection dataset can also be created, as the COCO dataset follows the standard format for storing data annotations. Annotations for segmentation data are also available in the COCO dataset.

21.5.2 Metrics Used for Evaluation

Performance of the presented approach has been evaluated with two benchmarks: Intersection over Union (IoU) and Average Precision (AP). Three difficulty levels of the COCO dataset are tested here, and different kinds of object detection algorithms have been assessed by these criteria. In this experiment, the IoU threshold has been chosen as 0.7.
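As a concrete reference point (our illustration only; COCO's official AP additionally averages over several IoU thresholds), a PASCAL-style 11-point interpolated AP can be computed as:

```python
import numpy as np

def average_precision(recalls, precisions):
    """11-point interpolated AP from matched precision/recall arrays."""
    recalls, precisions = np.asarray(recalls), np.asarray(precisions)
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        mask = recalls >= t
        ap += (precisions[mask].max() if mask.any() else 0.0) / 11.0
    return ap

print(average_precision([0.1, 0.4, 0.8], [1.0, 0.9, 0.6]))
```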

21.5.3 Model Training

Throughout the paper, the MobileNet model pre-trained on the ImageNet dataset is chosen as the base network and is later fine-tuned on the COCO dataset. The training process is accelerated, and overfitting is reduced, by freezing the weights of every batch normalization layer while training the model. First of all, the Region Proposal Network (RPN) along with the classifier is trained on a mini-batch basis, and later the attributes of the RPN with the modified base network. After that, the RPN generates two kinds of proposals (positive and negative) for training and updating the classifier. Then the attributes of the classifier and base


Table 21.2 Detection results of the different methods applied on the COCO dataset

Method | Average Precision, Easy (%) | Average Precision, Average (%) | Average Precision, High (%) | Extracting time (s)
Faster R-CNN [1] | 87.72 | 82.85 | 72.13 | 1.9
YOLO [18] | 47.69 | 35.74 | 29.65 | 0.03
Proposed approach | 88.21 | 86.85 | 73.73 | 0.14

convolutional layers are updated one by one. The balancing parameter (λ) of the loss function has been fixed. The learning rate of the RPN is initially set to 0.0001, and the learning decay rate is fixed at 0.0005 per mini-batch. We trained the final network for up to 200 epochs to get effective outcomes.
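A hedged TensorFlow/Keras sketch of the batch-normalization freezing and learning-rate setup described above follows. This is our reconstruction, not the paper's training loop; the Keras MobileNet stands in for the base network, and interpreting the 0.0005 per-mini-batch decay as an inverse-time schedule is an assumption.

```python
import tensorflow as tf

# Pre-trained MobileNet backbone on ImageNet, standing in for the base network.
base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))

# Freeze every batch-normalization layer so its statistics and scale/offset
# parameters are not updated during fine-tuning.
for layer in base.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.trainable = False

# Initial learning rate 1e-4 with a decay of 5e-4 per mini-batch, as in the text.
schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=1e-4, decay_steps=1, decay_rate=5e-4)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```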

21.5.4 Performance Results

See Table 21.2 and Fig. 21.8.

21.5.5 Loss Value Comparison

See Fig. 21.9.

21.6 Conclusion and Future Work

We conclude that the RPN can be made more efficient and accurate. Using the proposed approach with Fast R-CNN, we obtained a more optimized loss value, a higher Average Precision, and a lower processing time than Faster R-CNN. In the future, we wish to work on reducing the overall loss of the model as much as possible.


Fig. 21.8 Detection result based on the proposed approach applied to the image


Fig. 21.9 Loss value comparison between Faster R-CNN, the YOLO architecture, and the proposed approach

References

1. Du, J.: Understanding of object detection based on CNN family and YOLO. IOP Conference Series: J. Phys. 1004(012029), 1–9 (2018)
2. Roska, T., Chua, L.O.: The CNN universal machine: an analogic array computer. IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process. 40, 163–173 (1993)
3. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks, pp. 1–14 (2016)
4. Liu, Y., Li, H., Yan, J., Wei, F., Wang, X., Tang, X.: Recurrent scale approximation for object detection in CNN. In: IEEE International Conference on Computer Vision (ICCV), pp. 2–7 (2017)
5. Howard, A.G., Zhu, M., Chen, B., et al.: MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861v1
6. Galvez, R.L., Bandala, A.A., Vicerra, R.R.P., Dadios, E.P., Maningo, J.M.Z.: Object detection using convolutional neural networks. In: Proceedings of TENCON IEEE Region 10 Conference, pp. 28–31 (2018)
7. Cai, Z., Vasconcelos, N.: Cascade R-CNN: delving into high quality object detection. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9 (2018)
8. Sun, P., Zhang, R., Jiang, Y., Kong, T., Xu, C., Zhan, W., Tomizuka, M., Li, L., Yuan, Z., Wang, C., Luo, P.: Sparse R-CNN: end-to-end object detection with learnable proposals. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1–10 (2021)
9. Chua, L.O., Roska, T.: The CNN paradigm. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 40, 147–156 (1993)
10. Yanagisawa, H., Yamashita, T., Watanabe, H.: A study on object detection method from manga images using CNN. In: International Workshop on Advanced Image Technology (IWAIT), pp. 1–4 (2018)
11. Zhang, H., Chang, H., Ma, B., Wang, N., Chen, X.: Dynamic R-CNN: towards high quality object detection via dynamic training. In: European Conference on Computer Vision, pp. 260–275. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58555-6
12. Chen, Y., Li, W., Sakaridis, C., Dai, D., Gool, L.V.: Domain adaptive faster R-CNN for object detection in the wild. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3339–3348 (2018)


13. Hung, J., Carpenter, A.: Applying faster R-CNN for object detection on malaria images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 56–61 (2017)
14. Gupta, H., Jin, K.H., Nguyen, H.Q., McCann, M.T.: CNN-based projected gradient descent for consistent CT image reconstruction. IEEE Trans. Med. Imaging 37, 1–14 (2018)
15. Gidaris, S., Komodakis, N.: Object detection via a multi-region & semantic segmentation-aware CNN model. In: IEEE International Conference on Computer Vision (ICCV), pp. 1134–1142 (2015)
16. Wu, M., Yue, H., Wang, J., Huang, Y., Liu, M., Jiang, Y., Ke, C., Zeng, C.: Object detection based on RGC mask R-CNN. IET Image Proc. 14 (2019). https://doi.org/10.1049/iet-ipr.0057
17. Wang, P., Liu, Y., Guo, Y., Sun, C., Tong, X.: O-CNN: octree-based convolutional neural networks for 3D shape analysis. ACM Trans. Graph. (TOG) 36, 1–11 (2017)
18. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 5–13 (2017)

Chapter 22

Modeling and Simulation of Electric Vehicle with Synchronous Reluctance Motor

Akash S. Prasad, T. Prabu, Prabhu Selvaraj, and M. Govindaraju

Abstract Induction motors were used as traction motors in the automotive industry for a long time. Synchronous reluctance motors, which were invented in the 90s, are now considered superior to induction motors. An increasing concern for making traction motors efficient has made synchronous reluctance motor technology important. Earlier technologies for the synchronous reluctance motor did not produce sufficient saliency in the rotor design, which made them unpopular for traction purposes, but the leading electric vehicle and electric motor manufacturers around the world have since made them a prominent motor for traction. This paper discusses the importance of the synchronous reluctance motor for traction in electric vehicles by modeling, simulating, and controlling the motor and comparing it with the brushless DC motor and the permanent magnet synchronous motor in MATLAB Simulink. Although there are various driving cycles for vehicle simulation in MATLAB Simulink, a standard driving cycle was created to simulate real driving conditions. The vehicle battery is the limiting factor in any electric vehicle, so the energy drain from the battery is optimized to ensure the range of the vehicle is maximized. The temperature dependence models for the brushless DC motor and the permanent magnet synchronous motor are also simulated and discussed.


22.1 Introduction

The electric vehicle was invented years before the fossil fuel vehicle. Unfortunately, the abundance of fossil fuel and its low cost in the early days allowed the internal combustion engine to boom in the automotive sector [1]. In the last few decades, however, the increasing price of fossil fuels and the after-effects of burning them have created a path for exploring various opportunities in electric vehicles. This led to the development of efficient and high-performance motors for traction purposes. The battery with an efficient cooling system and the controller with precise switching gates for the inverter play crucial roles in the total efficiency of the electric vehicle. During the initial days of high-performance electric vehicles, induction motors were used as traction motors. That changed as manufacturers realized that induction motors were not the most efficient choice for traction. A vehicle does not have enough space for a battery pack that would give enough range to compete with a fossil fuel vehicle [1–3]: a typical fossil fuel vehicle can give about a 1000 km range, while EVs are still struggling to reach a 500 km range. This has pushed EV manufacturers to innovate on traction motors and their drives [4]. Currently, most EVs have a permanent magnet synchronous motor (PMSM) as their traction motor, which provides enough power and range. But this can be improved with the synchronous reluctance motor (SynRM), which runs on both electromagnetic torque and reluctance torque, whereas the PMSM runs on electromagnetic torque alone. PMSMs have a high cost due to the use of rare earth permanent magnets [5]. They also exhibit torque ripple due to the interaction between the permanent magnet rotor and the stator windings. The SynRM, on the other hand, has no permanent magnet rotor, which also makes it cheaper. The opportunities in SynRM depend on the structural features of the rotor; the various rotor structures available for the SynRM are discussed in this project, but for the simulation in MATLAB Simulink, only the salient pole rotor is available. Control systems used in AC machines emulate DC machines by orienting the stator fields to keep the angle between the stator flux and the rotor flux at 90 degrees; this is the basic idea of field orientation. A position sensor is needed to feed back the rotor position, which is required for keeping the orientation of the stator fields. The vector control method, which works with the magnitudes and phase angles of the AC quantities, is a powerful technique for field oriented control and has been adopted in various AC motor drives worldwide [6]. The driving cycle is given to the driver unit, which creates the necessary algorithm and provides the acceleration and deceleration signals to the controller. The model consists of only Simscape blocks and no mathematical blocks, which makes it easy for anyone to understand the whole model [7]. The saliency ratio and motor efficiency are determined by the flux barriers, whose shape and number are the crucial parameters. The motor generates the least torque ripple when the number of flux barriers increases; the width of the flux barriers, which determines their shape, also reduces the torque ripple [8]. The battery is the energy storage and energy conversion component in an electric vehicle, from which current is drawn to power the traction motor. Currently on the market, lead-acid battery packs and lithium battery packs like nickel-metal hydride


battery, lithium-cadmium battery, lithium cobalt battery, and lithium ferro phosphate battery packs are mostly used. Due to its simple structure and low price, the lead-acid battery pack has higher usage; but because of its lower life cycle and discharge coefficient, higher internal resistance, and toxicity, it is being replaced by lithium battery packs. The controller used for running all the motors is the field oriented controller (FOC). For the high-speed motors used in electric vehicles, the control system should have parameters that are easy to control, so the AC quantities are transformed into DC quantities. For this transformation, the Clarke transform and the Park transform are used in the algorithm. The Clarke transformation transforms the 3-phase variables into 2-phase variables, known as the αβ transformation. The Park transformation transforms AC variables into DC variables. Implementing field oriented control in Simulink requires these transformations to reduce the complexity of modeling and simulation; FOC is the state-of-the-art controller for traction motors [9]. The simulation results obtained from the three motors will give an idea of the motor efficiency and performance for traction purposes in an electric vehicle.
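As a compact illustration of the two transforms just described, here is a minimal sketch (not the Simulink blocks themselves; the amplitude-invariant convention is assumed):

```python
import numpy as np

def clarke(ia, ib, ic):
    """Clarke transform: three phase quantities -> stationary alpha-beta frame."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (1.0 / np.sqrt(3.0)) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: alpha-beta -> rotating d-q frame, DC-like for balanced inputs."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# Balanced sinusoidal currents map to (nearly) constant d-q values.
t = np.linspace(0.0, 0.02, 5)
theta = 2 * np.pi * 50 * t
ia = np.cos(theta)
ib = np.cos(theta - 2 * np.pi / 3)
ic = np.cos(theta + 2 * np.pi / 3)
print(park(*clarke(ia, ib, ic), theta))  # i_d ~ 1, i_q ~ 0
```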

22.2 Methodology

For the simulation of the electric vehicle, vehicle parameters are required. For reference, the vehicle parameters are taken from a 4-seater hatchback manufactured in India. The actual traction force of the motor and the vehicle speed change with operating conditions such as moving uphill or downhill and acceleration or deceleration. Changes like these are due to the traffic conditions, which vary all the time, and the type and purpose of the vehicle [10]. The required design parameters are shown below in Table 22.1.

Table 22.1 Design parameters

Wheelbase | 2498 mm
Length | 3993 mm
Width | 1811 mm
Gross weight | 1800 kg
Tire size | 215/60 R16
Acceleration time | 20 s
Vehicle speed | 80 kmph
Tire rolling resistance coefficient | 0.01
Air density | 1.202 kg/m³
Acceleration due to gravity | 9.8 m/s²
Aerodynamic drag coefficient | 0.3
Vehicle frontal area | 2.8 m²


The traction power of an electric vehicle is expressed as:

P_t = (δM / (2 t_a)) (V_f² + V_b²) + (2/3) M g f_r V_f + (1/5) ρ_a C_d A_f V_f³  (22.1)

The maximum traction power required to run the vehicle is obtained from (22.1), and the maximum torque output is calculated from it at the rated speed of 1000 rpm. The simulation requires a drive cycle with various input values; in this simulation, rpm values from 0 to 1000 rpm are given as input. The input given to the field oriented controller produces the switching signals for the three-phase inverter. The choice of traction motor for an electric vehicle mainly depends on a number of aspects that include the vehicle parameters, driver expectation, and power source. Vehicle parameters include volume and weight, which depend on the vehicle weight, vehicle type, and vehicle payload. Driver anticipation is defined by the driving style of the driver, which includes the maximum speed, acceleration, and braking. Most electric vehicles manufactured now have a PMSM for traction; the SynRM is new to electric vehicles, and a lot of research is going on in this technology. The BLDC motor is included in the comparison as it is the cheapest motor available for traction in any electric vehicle. From the modeling and simulation done in MATLAB Simulink, we obtained data on torque, speed, velocity, current, and state of charge, giving a detailed comparison between the motors. The stator windings and the rotor heat up while the motor runs, increasing the temperature of the windings and rotor. Heat transfer happens between the windings, the atmosphere, and the rotor: conduction heat transfer between the windings, and convective heat transfer between the windings and the atmosphere, between the windings and the rotor, and within the windings. The temperature dependence models for the BLDC motor and PMSM are simulated, and the temperature characteristic curves are obtained for the windings and the rotor.
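Plugging the Table 22.1 values into (22.1) gives a quick feel for the required power. In this sketch the rotational inertia factor δ and the base speed V_b are our assumptions, since the paper does not list them here:

```python
# Traction power from Eq. (22.1) with Table 22.1 parameters.
M, g, ta = 1800.0, 9.8, 20.0              # mass (kg), gravity (m/s^2), accel. time (s)
fr, rho, Cd, Af = 0.01, 1.202, 0.3, 2.8   # rolling resist., air density, drag coeff., area
Vf = 80.0 / 3.6                           # final vehicle speed, 80 kmph in m/s
delta = 1.05                              # rotational inertia factor (assumed)
Vb = 0.5 * Vf                             # base (rated) speed of the motor (assumed)

Pt = (delta * M / (2.0 * ta)) * (Vf**2 + Vb**2) \
     + (2.0 / 3.0) * M * g * fr * Vf \
     + (1.0 / 5.0) * rho * Cd * Af * Vf**3
print(f"Required traction power: {Pt / 1000.0:.1f} kW")
```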

22.3 Modeling and Simulation

For the modeling and simulation of the models, MATLAB Simulink release R2020a was used. Electric vehicle models with the BLDC motor, PMSM, and SynRM were modeled separately in MATLAB Simulink. In Fig. 22.1, the electric vehicle model with SynRM, everything shown in blue is an electrical component, including the controlled voltage source, controlled current source, voltage sensor, and current sensor. Everything shown in green is a mechanical component, such as the tire, rotational motion sensor, torque sensor, and mechanical rotational reference. Brown lines represent physical signals and black lines represent Simulink signals. There are Simulink-PS converters for converting Simulink signals to physical signals and PS-Simulink converters for the reverse. The vehicle body block has all the vehicle parameters, like vehicle mass, CG height, number of wheels per axle, frontal area, and drag coefficient. Output parameters like voltage,


Fig. 22.1 Electric vehicle model with SynRM

current, torque, rpm, distance, and speed are taken from different sensors such as the voltage sensor, current sensor, torque sensor, and rotational motion sensor. For each block in the model, there are block parameters that control the simulation of the model. The motor parameters, battery parameters, and controller parameters are all given in the respective blocks inside the Simulink workspace. The vehicle body parameters were discussed in Table 22.1 itself; a few of the important block parameters of the BLDC motor, PMSM, and SynRM models are shown below in Table 22.2. After the model is created in the workspace, the simulation is run for 1 s to find the output torque, speed, velocity, and input current. For finding the state of charge of the battery, the model is simulated for 60 s, and then the drive range and drive hours are calculated.

Table 22.2 Block parameters for the models

Parameter | BLDC motor | PMSM | SynRM
Number of pole pairs | 4 | 4 | 4
Stator d-axis inductance | 0.0006033 H | 0.0006033 H | 0.0100 H
Stator q-axis inductance | 0.0006668 H | 0.0006668 H | 0.0011 H
Stator resistance per phase | 0.05 Ω | 0.05 Ω | 0.001 Ω
Rotor inertia | 1 kg m² | 1 kg m² | 1 kg m²
Nominal voltage | 560 V | 560 V | 560 V
Internal resistance | 0.01 Ω | 0.01 Ω | 0.01 Ω
Ampere-hour rating | 100 Ah | 100 Ah | 100 Ah
Switching frequency | 2000 Hz | 2000 Hz | 2000 Hz


Fig. 22.2 Temperature dependence model

In the previous electric vehicle model, the modeling and simulation are done without considering the iron losses and copper losses. The parameters required for modeling the conduction and convection processes are obtained from various papers in the literature. For finding the temperature rise while running the vehicle with all three motors, the simulation is run for 1 s. The model contains convective and conductive blocks interconnected to form a winding network together with the rotor. The three windings in the stator are denoted by A, B, and C. Convection occurs between the windings and the atmosphere and between the windings and the rotor. Four temperature sensors are connected to the windings and the rotor, from which the temperature variation is recorded during the simulation. In MATLAB Simulink, the temperature dependence model is available for the BLDC motor and PMSM only. The temperature dependence model is shown in Fig. 22.2.

22.4 Result and Analysis

The scopes connected to the sensors give the required graphical representation of the results. The results obtained from the simulation contain characteristic curves of speed, current, velocity, torque, and state of charge of the battery for all three electric vehicle models. From Fig. 22.3, the graph representing the speed-time characteristics, it can be seen that the input rpm and output rpm almost align with each other for the BLDC motor and PMSM. For the SynRM, the output rpm takes time to align with the input rpm, but as the speed increases there is no delay or fluctuation. The noticeable delay in gaining speed during the start of the motor is due to the time taken by the reluctance torque to act. In the BLDC motor, as the speed increases the output rpm curve shows slight ripple, which shows that the BLDC motor is not an ideal motor for high-load applications.


Fig. 22.3 Graph showing speed versus time characteristics of a BLDC, b PMSM and c SynRM


From Fig. 22.4, the graph representing the current-time characteristics, it can be seen that the maximum current drawn from the battery is 150 A, 125 A, and 100 A for the BLDC motor, PMSM, and SynRM, respectively. It can also be noted that as the speed increases, the current drawn increases and the number of switchings also increases. From Fig. 22.5, the graph representing the torque-time characteristics, it can be seen that the maximum torque attained by the BLDC motor and PMSM is 200 Nm, but as the speed increases the torque output fluctuates in the case of the BLDC motor. This is known as torque ripple, and torque ripple is high in the BLDC


Torque (Nm)

Fig. 22.4 Graph showing current versus time characteristics of a BLDC motor, b PMSM and c SynRM

Time (sec) (a)

(b)

(c)

Fig. 22.5 Graph showing torque versus time characteristics of a BLDC motor, b PMSM and c SynRM


motor. In the SynRM, the maximum torque attained is 300 Nm; during the initial start of the vehicle the torque is at its maximum, after which it reduces to a lower value and is maintained, with no noticeable torque ripple. From Fig. 22.6, the graph representing the velocity-time characteristics, the curves in red, yellow, and blue represent the SynRM, PMSM, and BLDC motor, respectively. The maximum velocities attained by the vehicle during the 1 s run are 9 kmph, 13 kmph, and 14 kmph with the BLDC motor, PMSM, and SynRM, respectively. From Fig. 22.7, the graph representing the SOC-time characteristics, the curves in red, yellow, and blue again represent the SynRM, PMSM, and BLDC motor, respectively. It can be seen that the state of charge of the battery has dropped to 99.1%,


SOC (%)

Fig. 22.6 Graph showing velocity versus time characteristics of BLDC motor, PMSM and SynRM

Time (sec)

Fig. 22.7 Graph showing SOC versus time characteristics of BLDC motor, PMSM and SynRM


Fig. 22.8 Graph showing temperature vs time characteristics of a BLDC motor and b PMSM

Table 22.3 Comparison between BLDC motor, PMSM and SynRM

Parameter | BLDC motor | PMSM | SynRM
Rotor response | Good | Better | Weak
Input current | High | Average | Low
Output torque | Weak | Good | Good
Vehicle velocity | Low | Good | Good
State of charge | Low | Average | High
Motor temperature | Very high | Low | –

99.31%, and 99.53% during the 60 s drive of the vehicle at an average speed of 80 kmph with the BLDC motor, PMSM, and SynRM, respectively. From Fig. 22.8, the graph representing the temperature-time characteristics, the curves in red, yellow, blue, and green correspond to winding A, winding B, winding C, and the rotor, respectively. It can be seen that the winding temperature is very high in the BLDC motor. A difference in the temperature increase between the windings is seen in the graphs because the windings are energized in a specific order; the energizing order can be seen in the graphs showing the current-time characteristics. The comparison between the BLDC motor, PMSM, and SynRM is summarized in Table 22.3 for the purpose of understanding the results obtained from the simulation; the performance of all three motors can be compared across the various parameters.

22.5 Conclusion

The presented work aims to give a comparison between the BLDC motor, PMSM, and SynRM for an electric vehicle. A field oriented control system is used to run all three motors. The results obtained from the simulation show that the SynRM gives more drive hours, as the current drawn from the battery is lower. The rotor response of the electric vehicle with the SynRM is weak due to the time taken by the


reluctance torque in the rotor to act. The area where an electric vehicle lags behind a fossil fuel vehicle is the range achieved by the vehicle. The SynRM can give a good range for the vehicle, while the PMSM can give a good rotor response thanks to its permanent magnet rotor. The need to reduce the use of permanent magnets has resulted in the permanent magnet assisted synchronous reluctance motor (PMaSynRM), which gives the best of both. As of now, MATLAB Simulink does not have the components required to run simulations of a PMaSynRM; the future scope of this project lies in that area. The temperature dependence model for the SynRM is likewise not yet available in MATLAB Simulink. The temperature dependence models of the BLDC motor and PMSM show that the temperature rise in the BLDC motor is very high. After considering the various parameters for comparison, it can be seen that the SynRM is the more suitable option for an electric vehicle with high-load applications.

References

1. Kıyaklı, A.O., Solmaz, H.: Modeling of an electric vehicle with MATLAB/Simulink. Int. J. Automot. Sci. Technol. 2(4), 9–15 (2018)
2. Betz, R.E., Lagerquist, R., Jovanovic, M., Miller, T.J.E.: Control of synchronous reluctance machines. IEEE Trans. Ind. Appl. 29(6) (2018)
3. Hadj, N.B., Abdelmoula, R., Chaieb, M., Neji, R.: Permanent magnet motor efficiency map calculation and small electric vehicle consumption optimization. J. Electr. Syst. 14(2), 127–147 (2018)
4. Hanselman, D.: Brushless Permanent Magnet Motor Design. Magna Physics Publishing (2006). ISBN 1-881855-15-5
5. Erik, S.: Electrical vehicle design and modeling. In: Electric Vehicles-Modelling and Simulations. InTech (2011)
6. Amin, F., Sulaiman, E., Soomro, H.A.: Field oriented control principles for synchronous motor. Int. J. Mech. Eng. Robot. Res. 8(2) (2019)
7. Shewy, H.M.E., Kader, F.A.A., Kholy, M.M.E., Shahat, A.E.: Dynamic modeling of permanent magnet synchronous motor using MATLAB-Simulink. In: Proceedings of the 6th International Conference on Electrical Engineering (ICEENG), pp. 1–16 (2008)
8. Wu, H., Depernet, D., Lanfranchi, V.: A survey of synchronous reluctance machine used in electric vehicle. In: International Conference on Renewable Energy: Generation and Applications, Belfort, France (2016)
9. Ehsani, M.: Modern Electric, Hybrid Electric, and Fuel Cell Vehicles: Fundamentals, Theory and Design. CRC Press (2005)
10. Aziz, R., Atkinson, G.J., Salimin, S.: Thermal modelling for permanent magnet synchronous machine (PMSM). Int. J. Power Electron. Drive Syst. 8(4) (2017)
11. Rajabi Moghaddam, R.: Synchronous Reluctance Machine (SynRM) in Variable Speed Drives (VSD) Applications. TRITA-EE vol. 038 (2011)
12. Thomas, J., Owen, E.L.: AC adjustable-speed drives at the millennium: how did we get here? IEEE Trans. Power Electron. (2011)
13. Krishnan, R.: Permanent Magnet Synchronous and Brushless DC Motor Drives. Taylor and Francis Group (2010)
14. Energy: https://www.energy.gov/articles/history-electric-car
15. Embitel: https://www.embitel.com/automotive-insights/how-does-motor-control-systemwork-for-electric-vehicles
16. Mathworks: https://www.mathworks.com/help/physmod/sps/electricdrives.html?s_tid=CRUX_topnav


17. Duraisamy, T., Deepa, K.: Electric vehicle battery modelling methods based on state of charge-review. J. Green Eng. 10, 24–61 (2020)
18. Sidharthan, V.P., Lingam, A.S., Vijith, K.: Brushless DC motor driven plug in electric vehicle. Int. J. Appl. Eng. Res. 10, 3420–3424 (2015)
19. Krishna Vempalli, S., Ramprabhakar, J., Shankar, S., Prabhakar, G.: Electric vehicle designing, modelling and simulation. In: 2018 4th International Conference for Convergence in Technology (I2CT), Mangalore, India (2018)
20. Srikanth, V., Dutt, A.: A novel space vector PWM for field oriented control of permanent magnet synchronous motor. In: Proceedings of the 1st NCET'12, Barton Hill, Thiruvananthapuram (2012)
21. Srikanth, V., Dutt, A.: Performance analysis of a permanent magnet synchronous motor using a novel SVPWM. In: 2012 IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES), Bengaluru (2012)

Chapter 23

Performance Analysis of Disc Type Magnetorheological Brake with Tapered Disc

Peri Krishna Karthik and T. Jagadeesha

Abstract Magnetorheological fluid is a smart fluid material that changes its viscosity abruptly on the application of an external magnetic field. Quick response and accurate controllability are its main advantages, so it is being actively employed in many prominent applications such as haptics, dampers, and clutches. At present, major research effort is being put into extending its applications to the field of automotive brakes. Many papers have been published, and much work has been done to investigate its usability and to develop a design of MR brake; agile research in this field is also ongoing to improve and optimise brake designs, and our work is one such attempt. In this work, an analysis was carried out by varying mainly the geometric parameters of the disc of a conventional disc type magnetorheological brake to study their effect on the braking torque obtained. COMSOL Multiphysics software was used for the analysis, and the magnetic analysis of the magnetorheological brake using the same is also discussed.

23.1 Introduction

The passively controlled conventional brakes require complex mechanical parts to function. The system also occupies much space and adds undesirable unsprung weight. One way of solving these problems is to use magnetorheological fluid technology to control the braking system actively. Being quick responsive and accurately controllable, the MR fluid has found potential in many applications like haptics, dampers, clutches, and others, which were discussed by Wang et al. [1]. Extending its applications to brakes, Li et al. [2] designed and experimentally evaluated a braking system based on this MR fluid technology. They found that the transmitted torque increases gradually with increasing magnetic field strength and


rotary speed. A high-efficiency electromagnet was also designed with the help of finite element analysis. Nguyen et al. [3] worked on designs considering different shapes of the MRB envelope, such as rectangular, polygon, and spline shapes; the results showed that the mass of conventional rectangular MRBs can be significantly reduced by using a 5-segment polygon envelope. Rossa et al. [4] investigated modelling of the brake based on torque density, efficiency, bandwidth, and controllability, and the effects of other parameters like the length of the fluid gap, radius, etc., were also analysed. Karakoc et al. [5] worked on designing an MR brake with a focus on circuit optimisation and the material choice of the brake components, and the design was experimentally evaluated. Nguyen et al. [6], in their other research, worked on optimal designs of common types of MRBs, such as disc-type, drum-type, single-coil hybrid-type, two-coil hybrid-type, and T-type, with the aim of minimising the mass of the brake. It was concluded that different values of braking force require different types, and optimising their designs is the best way to suit the requirements. They also published work on a T-shaped MR brake that was optimally designed using three different smart MR fluids, with further research carried out to select the best one among the three [7]. In the recent work of Guoliang et al. [8], brake performance is found to be improved through a design with multiple discs instead of a conventional single disc; the developed design is found to be appropriate for medium- and low-level torque applications. Wang et al. [9] experimentally studied the transient behaviours of MR brakes under compression-shear mode; the effects on the transient torque under different applied currents, rotational speeds, compressive speeds, and compression strains were determined. The results show that the magnetic field strength amplified the torque, and the response time of the MR brake was greatly affected by the method of applying the current, the compressive strain, and the working mode.

23.2 Design and Modelling of MR Brake

23.2.1 Structural Model of the Disc Type MR Brake

The conventional disc type MR brake consists of four main components: the disc, the housing, the coil, and the magnetorheological fluid. The 3D model of the conventional disc brake is shown in Fig. 23.1. In this study, we look at the variation in braking torque when the disc geometry is modified from the conventional one. The analysis is done in two parts: in the first, the disc thickness and the MR fluid clearance gap are varied, and the variation is checked; in the latter, the disc is tapered to check for the variation. The 2D axisymmetric geometry of the brake with the tapered disc is shown in Fig. 23.2. The dimensions of the initial geometry of the MR brake considered are shown in Fig. 23.3.


Fig. 23.1 Schematic design of MR Brake

Fig. 23.2 2D axisymmetric geometry of MR brake with tapered disc

23.2.2 Mathematical Modelling

The amount of torque that can be generated is modelled mathematically by assuming the fluid element to be a ring structure, modelled using the Herschel-Bulkley fluid element. The braking torque is evaluated using the following expression, which is detailed in [6]:

T = 2π ∫ from ri to ro of r² τz dr + 2π ro² ∫ from 0 to bd of τr dz


Fig. 23.3 Dimensions of the 2D—axisymmetric geometry of the conventional MR brake

where
r = radius of the fluid element,
ro = outer radius of the disc,
ri = inner radius of the fluid element,
bd = disc thickness,
τz = shear stress in the axial direction,
τr = shear stress in the radial direction.

The assumptions made while modelling are:
1. Fully developed, laminar, and incompressible flow of the fluid.
2. The effect of body forces is disregarded.
3. The MR fluid is always in contact with the rotor disc and casing.
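To make the torque expression concrete, the sketch below integrates it numerically for an assumed Bingham-style shear stress, a simplification of the Herschel-Bulkley model used in the paper. The yield stress, viscosity, geometry values, and function names are illustrative assumptions, not the paper's data.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (kept local for NumPy-version independence)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def tau(r, omega, gap, tau_y=20e3, mu=0.1):
    """Bingham-style shear stress (Pa): yield stress plus a viscous term."""
    return tau_y + mu * omega * r / gap            # shear rate ~ omega * r / gap

def braking_torque(ri, ro, bd, gap, omega, n=500):
    """T = 2*pi * int_ri^ro r^2 tau_z dr + 2*pi*ro^2 * int_0^bd tau_r dz."""
    r = np.linspace(ri, ro, n)
    t_face = 2.0 * np.pi * trapz(r**2 * tau(r, omega, gap), r)            # disc face term
    z = np.linspace(0.0, bd, n)
    t_rim = 2.0 * np.pi * ro**2 * trapz(np.full_like(z, tau(ro, omega, gap)), z)
    return t_face + t_rim

print(braking_torque(ri=0.02, ro=0.05, bd=0.005, gap=0.001, omega=60.0))  # Nm
```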

23.3 Magnetic Analysis of Disc Type MR Brake

To start the analysis, the 2D axisymmetric geometry was set up to ease the computation. Materials were assigned to the respective domains: the disc and housing are assigned magnetic silicon steel, the coil is assigned copper, and the MR fluid is assigned Lord MRF-132DG, a commercially available MR fluid. The Magnetic Fields physics was set up along with the necessary parameters. The coil contains 136 turns, and the current through the coil is 1 A. Fine meshing was done, and the meshing parameters were optimised for the physics automatically. A stationary solver is used for the study, and the termination condition is set to tolerance or iterations, with a maximum of 100 iterations.

As mentioned previously, the analysis was done in two parts. In the first, the disc thickness and the MR fluid clearance gap are varied: a few values of each parameter are considered, and the braking torque for each of the possible configurations is evaluated using a Parametric Sweep in COMSOL. In the latter part, where the disc is tapered, various taper angles are given as a parameter and the parametric sweep is carried out similarly. The angular velocity of the disc was assumed to be 60 rad/s throughout the analysis. The parametrised values are shown in Table 23.1.

Table 23.1 Parameters varied for analysis

Parameter | Minimum, maximum values | Step size | Parameter unit
Disc thickness | [2, 7] | 0.5 | mm
Axial clearance for MRF | [0.5, 1.9] | 0.2 | mm
Taper angle | [0, 4.2] | 0.2 | degree

23.4 Results and Discussions

The analysis results were post-processed, and the magnetic flux density is plotted as surface plots of the domains. The torque was then evaluated with the inbuilt math tools in COMSOL, using the equations discussed in the previous sections. The obtained parametric solutions are plotted as graphs to depict the behaviour of the MR brake under the varied parameters (Table 23.2). Figure 23.4 shows the surface plot of the braking torque variation with the disc thickness and axial clearance. Although an increase in either parameter decreases the braking torque, the axial clearance for the MR fluid has a greater effect on the braking torque than the disc thickness (Figs. 23.5 and 23.6). From the results of the second part of the analysis, where the disc was tapered, it was found that there is a slight increase in the average magnetic flux density, from 0.424 T at the zero-taper condition to 0.426 T at the 0.4° tapered condition. From there, the average flux density continuously decreased with further increase in taper angle.

Vertical axis parameter (y)

Braking torque Variation with increasing variable parameter

Braking torque for parameters (x, y) Tb(x,y) Nm

Disc thickness (constant)

Axial clearance for MRF

Decreases

Tb(4.5,0.5) = 291.18 Nm Tb(4.5,1.9) = 236.17 Nm

Disc thickness

Axial clearance for MRF (constant)

Decreases

Tb(2,1) = 266.55 Nm Tb(7,1) = 253.52 Nm

326

P. K. Karthik and T. Jagadeesha

Fig. 23.4 Surface Plot for axial clearance for MR fluid versus disc thickness

Since the braking torque is positively related to the average magnetic flux density, it also follows the same behaviour.

23.5 Conclusions In this study, an attempt was made to understand the behaviour of the magnetorheological brake under various geometric configurations of the disc type brake. A detailed methodology on how to model the magnetic simulation analysis of magnetorheological fluid in COMSOL Multiphysics software is also discussed. The magnetic analysis is conducted for various geometries of the disc, which includes discs of varying thickness and clearance gaps and also tapered disc. There were two key findings. The first one is that there isn’t much effect on the braking torque due to varying disc thickness. However, the decrease in axial clearance for magnetorheological fluid increases the average magnetic flux density in that region and thus the braking torque. The latter is that a tapered disc design has less average magnetic flux density than the one with no taper except for the case where the taper angle is very small, in this study, 0.4°, which is significantly small compared to the other geometrical parameters of the disc.


Fig. 23.5 Surface plots of magnetic flux density distribution in the MR Fluid domain for configurations with various taper angles


Fig. 23.6 Variation of average magnetic flux density with the disc taper angle


Chapter 24

Dynamic Characteristics Analysis of Kirloskar Turn Master35 Machine Tool Bed with Different Polymer Concrete Materials

Shaik Faazil Ahmad and T. Jagadeesha

Abstract Nowadays, the capital goods industry needs machine tools with superior damping capabilities so that the goals of precision and ultra-precision machining can be achieved. This can be made possible by incorporating newer materials with better damping properties rather than the conventional material, cast iron. The current work focuses on the dynamic characteristics of the bed of the Kirloskar-made machine tool Turn Master35 with three different polymer concrete materials. The dynamic characteristics, such as natural frequencies and mode shapes, are evaluated with the modal analysis module available in ANSYS Workbench. To understand the bed's behavior at these frequencies, the acceleration-frequency and displacement-frequency curves were plotted from the harmonic response analysis available in the simulation software. The results are compared with the actual cast iron bed. Out of the three polymer concrete materials, Basalt Fiber Polymer Concrete (BFPC) has the best dynamic stability. The results show that BFPC is a better material than cast iron, with a significant decrease in both the acceleration and displacement amplitudes. The other two materials also showed better results than cast iron in terms of acceleration amplitude, but their displacement amplitudes are higher than those of the cast iron bed. The cast iron bed can therefore be replaced with BFPC, which has sufficient damping to absorb the loads acting on the bed structure in operating conditions.

24.1 Introduction

The force generated during the turning operation is resolved along three mutually perpendicular axes. These forces are transmitted to the machine tool structure along with the weight of the moving parts on the saddle and the force due to the feed motion of the saddle. These forces cause vibrations in the machine tool, which lead to chatter and damage the surface finish of the workpiece. To avoid this, machine tools should have better damping properties. Mahendrakumar et al.


[1] used Himalayan nettle as the filler material and polyester as the resin in the manufacture of a micro lathe bed. The torsional and bending stiffness was found experimentally and numerically. Based on the results, design modifications were made, such as using different cross sections and rib configurations in the regions where a high amount of bending and torsion is observed. From the overall analysis, it is evident that polymer concrete materials improve the dynamic characteristics of the micro lathe bed. Shanmugam et al. [2] went a step further to analyze the dynamic characteristics and torsional rigidity of an epoxy granite CNC lathe bed with steel fibers as the reinforcement material. Both experimental and finite element methods were used to obtain the natural frequencies and torsional rigidity. The steel-reinforced epoxy granite bed showed an improvement in the dynamic characteristics by 4–10%, and the mass of the bed was reduced by 22% compared with the conventional cast iron bed. Yang et al. [3] made an attempt to understand the static and dynamic characteristics of the CK6112 CNC lathe bed with cast iron as the structural material. Finite element modeling was used to complete the entire analysis, and the maximum stress, maximum strain and maximum deformation were extracted; modal analysis was performed to calculate the first three natural frequencies and mode shapes. Sun et al. [4] carried out dynamic and static analysis of the key vertical parts of an ultra-precision aspherical machine tool used to manufacture optical lenses, using spring-damper elements to model the joint surfaces of the machine tool. The first-order natural frequencies of the key vertical parts were calculated using modal analysis in commercially available simulation software. Lee et al. [5] designed and manufactured a high speed gantry type milling machine with unsaturated polyester resin and sand as the matrix, along with other toughening and curing agents. Static and dynamic analyses were carried out on the machine using the finite element method, and the impact hammer test was used to determine the damping factor of the machine tool. Kim et al. [6] extensively studied different mixing ratios of the ingredients in polymer concrete composites and found the best ratio for manufacturing an ultra-precision machine tool: 50% weight fraction of pebble, 42.5% weight fraction of sand and 7.5% weight fraction of resin. Chen et al. [7] studied the static and dynamic characteristics of a high-precision machine with artificial granite as the material. The static and dynamic stiffness of the machine were found both numerically and experimentally; compared with the cast iron structure, the dynamic stiffness increased by 0.5–1.3 times and the static stiffness increased by 8%. Sonawane et al. [8] used polymer concrete material as a filler in a vertical milling center column to increase its dynamic stiffness. It is clear from the results obtained that the filled column showed damping improved by about 30 times over the unfilled one. This work highlights the machine tool's dynamic behavior when forces are transmitted to the structure, by analyzing the harmonic frequency response curves for three different polymer concrete materials.


Fig. 24.1 CAD model of Turn Master35

24.2 Simulation Modeling

24.2.1 Geometric Data

The Turn Master35 machine tool bed structure is considered for the analysis. The CAD model of the bed is developed in computer-aided design software and saved in a neutral file format. The file is then imported into the simulation software ANSYS (Fig. 24.1).

24.2.2 Material Properties

Three different types of polymer concrete materials, namely BFPC, unsaturated polyester-sand and epoxy granite, are used. These are compared with the cast iron material. The material data for the analysis is taken from the references, as shown in Table 24.1. As this kind of analysis comes under structural dynamics, it requires the inertia and elastic properties. These polymer concrete materials can be modeled according to Hooke's law as isotropic and homogeneous [9].

Table 24.1 Polymer concrete material properties

Material                     Density (kg/m3)   Modulus of elasticity (MPa)   Poisson's ratio
Cast iron                    7130              11,700                        0.275
BFPC                         2850              35,000                        0.26
Unsaturated polyester sand   2260              25,200                        0.2
Epoxy granite                2900              70,000                        0.25

24.2.3 Meshing

The mesh of the machine tool bed is generated with 10-noded tetrahedral elements. This element was selected because it saves computational time compared with hexahedral elements while producing no considerable change in the results. It is a good general purpose element and can be used for any kind of analysis.

24.2.4 Analysis Settings

The boundary conditions are modeled in this step. To calculate the natural frequencies and mode shapes, modal analysis was carried out for the three polymer concrete materials along with cast iron. The bottom surface of the structure is fixed with bolts to the base of the machine tool, so a fixed support is applied [10]. The frequency range for the harmonic analysis is set from 0 to 2000 Hz. Loads must be applied on the machine tool bed for the harmonic analysis to calculate the acceleration frequency response and displacement frequency response. The criterion followed to model the force flow on the machine tool bed is taken from Ref. [11] (Fig. 24.2).

24.3 Simulation Results and Discussion

24.3.1 Modal Analysis Results

The modal analysis results give the first ten natural frequencies and mode shapes of the machine tool bed for the different polymer concrete materials. This is similar to the free vibration analysis of a spring-mass system. The natural frequencies are listed in Table 24.2.
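As a minimal illustration of what a modal analysis module solves, the following Python sketch extracts natural frequencies and mode shapes from the undamped free-vibration eigenproblem of a small lumped spring-mass chain. The matrices here are placeholder assumptions standing in for the bed's finite element mass and stiffness matrices; they are not the ANSYS model.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 3-DOF lumped spring-mass chain (assumed values):
# undamped free vibration satisfies K x = w^2 M x.
m = 100.0   # kg, lumped mass (assumed)
k = 5.0e7   # N/m, lumped stiffness (assumed)
M = np.diag([m, m, m])
K = np.array([[2 * k, -k,    0.0],
              [-k,    2 * k, -k ],
              [0.0,   -k,     k ]])

# Generalized symmetric eigenproblem; eigh returns w^2 in ascending order.
w2, modes = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)
print("Natural frequencies (Hz):", np.round(freqs_hz, 1))
# Each column of `modes` is the corresponding mode shape.
```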


Fig. 24.2 Loads applied for harmonic analysis

Table 24.2 Ten natural frequencies (Hz) for different materials

Natural frequency   Cast iron   BFPC     Unsaturated polyester sand   Epoxy granite
1st                 205.92      327.68   174.86                       258.13
2nd                 459.41      731.66   391.34                       576.61
3rd                 476.79      761.44   410.53                       600.9
4th                 558.57      890.64   477.75                       702.29
5th                 700.04      1118.4   603.97                       882.86
6th                 747.2       1192.7   642.36                       941.04
7th                 852.14      1362.1   736.92                       1075.6
8th                 922.86      1463.2   785.29                       1153.8
9th                 1101.6      1732.8   940.26                       1368.9
10th                1397.9      2052.6   1103.1                       1614.6

24.3.2 Harmonic Response Results

The acceleration frequency response and displacement frequency response curves of the machine tool structure were evaluated for the different materials. The maximum values of acceleration amplitude and displacement amplitude in the X, Y and Z directions are shown in Table 24.3. The response curves are shown in Figs. 24.3, 24.4, 24.5, 24.6, 24.7, 24.8, 24.9 and 24.10.
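Conceptually, a harmonic response analysis sweeps the forcing frequency and solves the damped steady-state equations at each step. The sketch below shows this for a toy two-DOF system with placeholder matrices; it illustrates the procedure only and is not the ANSYS bed model.

```python
import numpy as np

# Direct frequency-response sweep for a damped MDOF system:
# X(w) = (K + i*w*C - w^2*M)^-1 F, swept over 0-2000 Hz as in the study.
m, k, c = 100.0, 5.0e7, 2.0e3                  # assumed lumped parameters
M = np.diag([m, m])
K = np.array([[2 * k, -k], [-k, k]])
C = np.array([[2 * c, -c], [-c, c]])
F = np.array([0.0, 1.0])                        # unit force on the free end (N)

freqs = np.linspace(1.0, 2000.0, 400)           # Hz
disp = []
for f in freqs:
    w = 2.0 * np.pi * f
    Z = K + 1j * w * C - w**2 * M               # dynamic stiffness matrix
    disp.append(np.abs(np.linalg.solve(Z, F))[1])

# Acceleration amplitude follows as |X| * w^2 at each frequency.
peak = freqs[int(np.argmax(disp))]
print(f"Peak displacement response near {peak:.0f} Hz")
```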


Table 24.3 Maximum amplitude values for acceleration and displacement

Material                      Acceleration amplitude (m/s2)     Displacement amplitude (mm)
                              X        Y        Z               X          Y          Z
Cast iron                     264.26   369.1    320.21          0.083562   0.054045   0.050352
BFPC                          232.42   120.92   112.32          0.071511   0.037587   0.035132
Unsaturated polyester resin   322.06   184.26   144.6           0.12554    0.078202   0.067664
Epoxy granite                 379.36   196.9    235.86          0.063422   0.046703   0.0052488

Fig. 24.3 Acceleration frequency response for cast iron bed


Fig. 24.4 Displacement frequency response for cast iron bed


Fig. 24.5 Acceleration frequency response for unsaturated polyester sand bed


Fig. 24.6 Displacement frequency response for unsaturated polyester sand bed

24.4 Conclusions

The current analysis investigates the best polymer concrete material for the Turn Master35 bed through a finite element model. Out of the three materials, BFPC showed the best results compared to cast iron. The natural frequencies of the BFPC bed are higher than those of the other material beds, so this bed can be used with high speed motors such that the resonance frequency of the motor does not match the natural frequency of the bed.


Fig. 24.7 Acceleration frequency response for epoxy granite bed

Fig. 24.8 Displacement frequency response for epoxy granite bed


Fig. 24.9 Acceleration frequency response for BFPC bed


Fig. 24.10 Displacement frequency response for BFPC bed

By using high speed motors, the spindle speed increases, which results in a higher material removal rate. The acceleration frequency response of the BFPC bed shows a maximum amplitude of 232.42 m/s2 at a frequency of 180.96 Hz in the X-direction. This has no effect on the machine tool because that frequency does not coincide with any of the natural frequencies.


Coming to the Y-direction, the maximum acceleration amplitude is 120.92 m/s2 at a frequency of 428.41 Hz. This frequency is near the third mode of vibration of the machine tool bed, which is bending in the positive Y-direction. To avoid this, sufficient stiffness can be provided by using different rib configurations and cross sections in the regions of bending. The displacement frequency response curves also show that the BFPC bed has a lower amplitude compared with the other materials. These results suggest that the Turn Master35 bed can be replaced with BFPC material to obtain better dynamic stiffness. It will also reduce the production cost of the machine tool and consume less energy in production.

References

1. Mahendrakumar, N., Thyla, P.R., Mohanram, P.V.: Study on static and dynamic characteristics of nettle–polyester composite micro lathe bed. Proc. Inst. Mech. Eng. Part L J. Mater. Des. Appl. 233(2), 141–155 (2019)
2. Chinnuraj, S., Thyla, P.R., Elango, S., Venugopal, P.R., Mohanram, P.V.: Static and dynamic behavior of steel-reinforced epoxy granite CNC lathe bed using finite element analysis. Proc. Inst. Mech. Eng. Part L J. Mater. Des. Appl. 0(0), 1–15 (2020)
3. Yang, H., Zhao, R., Li, W., Yang, C., Zhen, L.: Static and dynamic characteristics modeling for CK61125 CNC lathe bed basing on FEM. Procedia Eng. 174, 489–496 (2017)
4. Sun, L., Yang, S., Zhao, P., Wu, P., Long, X., Jiang, Z.: Dynamic and static analysis of the key vertical parts of a large scale ultra-precision optical aspherical machine tool. Procedia CIRP 27, 247–253 (2015)
5. Suh, J.D., Lee, D.G.: Design and manufacture of hybrid polymer concrete bed for high-speed CNC milling machine, pp. 113–121 (2008)
6. Kim, H.S., Park, K.Y., Lee, D.G.: Mater. Process. Technol. 48, 649–655 (1995)
7. Chen, T., Chen, Y., Hung, M., Hung, J.: Design analysis of machine tool structure with artificial granite material. Adv. Mech. Eng. 8(57), 1–14 (2016)
8. Sonawane, H., Subramanian, T.: Improved dynamic characteristics for machine tools structure using filler materials. Procedia CIRP 58, 399–404 (2017)
9. Brecher, C., Abele, E., Bleicher, F.: Manufacturing technology materials in machine tool structures. CIRP Annals 64, 725–748 (2015)
10. Shen, J., Xu, P., Yu, Y.: Dynamic characteristics analysis and finite element simulation of steel–BFPC machine tool joint surface. J. Manuf. Sci. Eng. 142(1) (2020)
11. Koenigsberger, F.: Design Principles of Metal Cutting Machine Tools (1964)

Chapter 25

Comparative Study of Magnetorheological Brakes from Magnetic Theory Perspective by Finite Element Methods

A. Hafsana, Athul Vijay, Manas P. Vinayan, Bijja Kavya, and T. Jagadeesha

Abstract For over a decade, smart materials and their applications have shown promise in the automobile industry. A lot of research has been carried out in the field of MR brakes, but extensive research on their magnetic properties is comparatively scarce. This research focuses on MR fluid selection and design optimization for an MR brake, with emphasis on magnetic properties. The magnetic properties of various MR fluids are considered for analysis; understanding the properties of these fluids is a stepping stone to developing similar fluids that use natural oils. The obtained results are further used to design the MR brake. Analysis and optimization of different designs are included in this research, and the best brake model is obtained considering the intensity of the magnetic field, the MRF braking torque and other design parameters. The optimized brake model is found to have an increased MRF braking torque of about 50%. Furthermore, a study of torque variation with the current and number of coil turns is performed to gain insight into implementing the best control system for the brake. This work is expected to be applied in the design of advanced MR brakes and similar devices.

25.1 Introduction

One of the widely employed smart fluids in the automobile industry is magnetorheological fluid, or MR fluid. MR fluid is a suspension of micro-particles of ferromagnetic substances with quick magnetizing ability, dispersed in a carrier fluid that can be an organic or synthetic oil. These fluids can change their rheological properties, including flow and deformation-related characteristics, under the action of an external magnetic field. MR brakes are magnetically controlled brakes exercising direct application of MR fluids, where braking torque is obtained from the shear stress developed in the fluid. One of the rheological properties of primary importance in brake applications is the


viscosity of the brake fluid, due to its direct relation to the shear stress developed. The development of shear stress is attributed to the alignment of particles in chain patterns along the direction of the magnetic field. This arrangement imposes flow constraints on the MR fluid particles, creating an internal resistance that increases the yield stress of the fluid. Thus, the fluid's viscosity can be altered by employing a magnetic field to induce polarization effects in the ferromagnetic particles. This variation is nearly instantaneous, permitting a quick response in the retardation torque of the brake. The behavior of MR fluid under a magnetic field is thus essentially non-Newtonian, whereas in the absence of a magnetic field it acts as a Newtonian fluid, i.e., the viscosity remains independent of external fields. The variation of the fluid characteristics of MR fluid with the external magnetic field is widely depicted using the Bingham model of MR fluid. MR fluid is found to be superior in terms of fluid stability and controllability to its electrostatic counterpart, ER fluid. Because of the quick response, accurate control features and easier integration with the electrical interface, coherent safety control systems like ABS (Anti-lock Braking System), VSC (Vehicle Stability Control), etc., can be implemented easily. Braking mechanisms and similar motion control mechanisms require a high standard of reliability and efficiency in delivering their output. Along with the system's efficiency, the weight and structural constraints must be considered while designing the system. This work focuses on studying the magnetorheological braking system from the magnetic theory perspective. Methodologies like changing the core structure, varying the type of MR fluid, the actuating current and the number of turns in the coil were adopted to understand how the braking torque developed due to the properties of the MR fluid varies. The research summary is as follows: in Sect. 25.2, a detailed review of past literature related to MR brake design and optimization is given. In Sect. 25.3, details of the MR brake model chosen for optimization are presented. The methodology (Sect. 25.4) is carried out in three phases: MR fluid analysis, core geometry optimization and power input parameter variation. Finally, the results and conclusions of the research are presented in Sects. 25.5 and 25.6, respectively.

25.2 Literature Review

Sohn et al. [1] proposed an MR brake with a tapered magnetic core. The research showed an increase in the region where magnetic effects are felt, giving a high braking torque for the modified core. Nguyen et al. [2] presented the optimal design of an MR brake absorber for torsional vibration. The braking torque was estimated using the Bingham plastic model of MR fluids, and with the assumption that the MRF braking torque is similar to dry friction torque, the optimum torque was found for the design. Olabi et al. [3] addressed various applications of MR fluid and related technologies, concluding that features like quick response and easy controllability render MRF technology a highly potential candidate for next-generation applications. Huang et al. [4] derived theoretical expressions for the retardation torque and other


parameters, such as the thickness and width of the MR fluid, for cylindrical MR brakes. Park et al. [5] proposed a design for a disc-type MR brake based on multidisciplinary optimization involving magnetostatics, fluid dynamics and thermal analysis. Poznic et al. [6] conducted a comparative study on different MR brake types, and the best model was chosen for experimental analysis; effective magnetic field utilization was recommended to improve the overall braking torque of MR brakes. Yoon et al. [7] proposed an MR damper with a better response time obtained by reducing the eddy current distribution, implemented using a soft material with low electric resistivity as the core material. Shiao et al. [8] presented a model for torque enhancement in an MR brake by enhancing the magnetic field strength; to achieve this, a multi-pole structure was proposed [9–17]. The effects of material and MR fluid thickness were also studied and incorporated in the proposed design.

25.3 Design of Magnetorheological Brake

The basic model of the MR brake (drum type) utilized for optimization in this project is shown in Fig. 25.1. The MR fluid is present between the rotor and the magnetic core. The core is wound with electric coils, through which current is passed to magnetize the core. The properties of the components of the MR brake are tabulated in Table 25.1.

Fig. 25.1 MR fluid brake with conventional core

Table 25.1 Properties of materials used for the components of MR brake

Components                      Material           Density (kg/m3)   Relative permeability   Young's modulus (10^9 Pa)
Coil                            Copper             8960              1                       110
Stator, rotor, magnetic core    Structural steel   7850              100                     200
Caps (lower and upper), shaft   Aluminium          2700              1                       70


In the absence of an external magnetic field, the particles in the MR fluid behave normally, with no opposition to the relative motion of the core and rotor. When an electric current is applied, the core is magnetized, and the magnetic field lines pass through the MR fluid, resulting in the arrangement of the particles into chain-like structures. This provides resistance to fluid motion, and thus an effective increase in shear stress is obtained. This change in shear stress with the application of a magnetic field is reversible and allows the braking torque to be adjusted as required, with high precision and a quick response. The electromagnetic system in the MR brake consists of the core and coil, which on applying an electric current act as an electromagnet and produce the field required for actuation of the MR fluid. Generally, a steel core is preferred because of its high relative permeability, and copper wires are used for the coil. The high permeability of steel allows the field to be directed effectively to the fluid gap, so the input power required can be reduced. The brake system's space requirements restrict the number of turns. Furthermore, an increase in the power supply can improve the response and controllability of the brakes. Although MR brakes appear to be a fitting solution to the many disadvantages of conventional hydraulic brakes, many initial design-related intricacies need to be addressed to employ this technology effectively, viz., the selection of the MR fluid, the geometric parameters and the optimum parameters for the electromagnetic circuit design. An overall improvement in these parameters can drastically improve the braking torque along with the control feasibility.
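A first-cut way to see how the core permeability and the fluid gap set the attainable field is a lumped magnetic-circuit (reluctance) estimate, sketched below in Python. All dimensions and permeabilities are placeholder assumptions for illustration, not the geometry analysed in this chapter.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (H/m)

# Assumed illustrative values (not the paper's geometry):
N, I = 320, 0.5                     # coil turns and current, as in phase 1
A = 4e-4                            # effective flux cross-section (m^2), assumed
l_steel, mur_steel = 0.15, 100.0    # steel path length (m), relative permeability
g_fluid, mur_fluid = 2e-3, 5.0      # total MR fluid gap (m), relative permeability

# Series reluctances of the steel core path and the fluid gap.
R_steel = l_steel / (MU0 * mur_steel * A)
R_fluid = g_fluid / (MU0 * mur_fluid * A)

flux = N * I / (R_steel + R_fluid)  # magnetomotive force / total reluctance (Wb)
B_gap = flux / A                    # flux density in the gap (T)
print(f"Estimated gap flux density: {B_gap:.3f} T")
```

The estimate makes explicit why a high-permeability core and a small fluid gap raise the field for a given coil power, which is the design lever exploited in the optimization phases below.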

25.4 Methodology

The study is carried out in three phases, and the results obtained from each phase are utilized for the subsequent analysis. COMSOL is used throughout for the computational analysis, as it proves to be a convenient platform for handling multiple physics domains. To reduce the computation, two-dimensional axisymmetric analysis is done for all the stages.

25.4.1 Phase 1: MR Fluid Analysis

For this analysis, a conventional model of the MR brake was created in COMSOL, with steel assigned as the material for the rotor, stator and core, and copper as the coil material. Five variants of Lord Corporation's MR fluids, namely MRF122EG, MRF126LF, MRF132DG, MRF140BC and MRF140CG, were considered for the study. The rheological data of each of these variants is provided in Table 25.2. The B-H curve data of the fluid required for the magnetic analysis was digitized and provided as input during the simulation.


Table 25.2 Rheological data of LORD MRF variants

MR fluid                      122EG           126LF           132DG          140BC            140CG
Viscosity, Pa-s @ 40 °C       0.042 ± 0.020   0.070 ± 0.020   0.112 ± 0.02   0.1140 ± 0.040   0.280 ± 0.070
Density, g/cm3                2.28–2.48       2.64–2.84       2.95–3.15      3.75–3.95        3.54–3.74
Solids content by weight, %   72              78              80.98          86               85.44

After creating the geometry and assigning the materials, the boundary conditions were applied. Ampere's Law was applied to all the components, with a separate Ampere's Law boundary condition applied to the MR fluid in order to treat it as a fluid in the COMSOL environment. Magnetic insulation was applied on the outer surface of the stator alone, since all other regions in the brake are magnetically permeable, generating magnetic effects in the entire domain of the system. A coil having 320 turns of 1 mm2 cross-sectional area was given a current of 0.5 A. Following this, physics-controlled meshing was done on the entire system with a fine element size. The study was carried out by varying the MR fluid inputs, and the surface average values of the magnetic field norm were obtained for all the fluids.
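Digitized B-H data enters the solver as a lookup table. The short sketch below shows the equivalent interpolation step in Python; the sample points are invented placeholders standing in for the LORD datasheet curve, not actual fluid data.

```python
import numpy as np

# Invented sample points for illustration only (not datasheet values).
H_pts = np.array([0.0, 50e3, 100e3, 200e3, 400e3])   # field intensity H (A/m)
B_pts = np.array([0.0, 0.45, 0.70, 0.95, 1.20])      # flux density B (T)

def B_of_H(H):
    """Piecewise-linear B(H) lookup from the digitized curve."""
    return np.interp(H, H_pts, B_pts)

print(f"B at H = 150 kA/m: {B_of_H(150e3):.2f} T")
```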

25.4.2 Phase 2: Optimization of Brake Geometry

For the analysis in this phase, the MR brake with the modified core, as shown in Fig. 25.2, was investigated further. The brake geometry was altered primarily by varying the length of the region of magnetization. The choice of MR fluid was made from the preceding analysis. Procedures for the magnetic analysis, similar to phase 1, were carried out. The average magnetic field for each configuration was obtained from the numerical analysis.

Fig. 25.2 MR brake with modified magnetic core


Maximum shear stress and braking torque for all the geometric variants were calculated theoretically and analyzed graphically.

25.4.3 Phase 3: Variation in Power Input Parameters

The optimized design with the chosen MR fluid was analyzed by altering the current input in the range from 0.25 A to 3 A and the number of turns from 280 to 440. This study is performed as initial research to gain insight into implementing a suitable control system for precise and effective control of the proposed MR brake.

25.5 Results

25.5.1 Comparative Study of MR Fluids

The magnetic analysis revealed that the brake with MRF 126LF had the maximum magnetic field intensity out of the five fluids considered. The magnetic field intensities for all the MR fluid variants are shown in Table 25.3 (Fig. 25.3).

Table 25.3 Variation of magnetic field intensity for different MR fluids

MR fluid   Magnetic field norm (A/m)
122EG      13,423
132DG      10,529
140BC      11,429
140CG      7810

Fig. 25.3 The intensity of magnetic field for different MR fluids


Table 25.4 Variation of braking torque due to MRF for different core geometries

Brake          Avg. magnetic field intensity (A/m)   Max. shear stress (Pa)   Length of magnetized region (m)   Braking torque due to MRF (Nm)   Increase in braking torque (%)
Conventional   16,109                                5259.58                  0.028                             0.74025
1              16,053                                5241.3                   0.03                              0.79037                          6.77039
2              15,977                                5216.49                  0.032                             0.83907                          13.3492
3              15,879                                5184.49                  0.034                             0.88604                          19.6948
4              15,756                                5144.33                  0.036                             0.9309                           25.754
5              15,607                                5095.69                  0.038                             0.97332                          31.4851
6              15,428                                5037.24                  0.04                              1.0128                           36.8179
7              15,209                                4965.74                  0.042                             1.04834                          41.6196
8              14,940                                4877.91                  0.044                             1.07884                          45.7393
9              14,614                                4771.47                  0.046                             1.10327                          49.0391
10             14,149                                4619.65                  0.048                             1.1146                           50.5707

25.5.2 Optimization of Brake Geometry

Ten design variants, along with the conventional design, were subjected to magnetic analysis, and the values of braking torque obtained are shown in Table 25.4. Although configuration no. 10 was observed to provide the highest braking torque, it was not chosen for further analysis. The data reveal a saturation of the increase in retardation torque with further increase in the length of the region of magnetic influence, as the difference in percentage increase in braking torque is diminishing; in particular, the percentage increase from configuration no. 9 to configuration no. 10 is significantly small. Stopping at configuration no. 9 also provides additional space for coil windings, contributing to higher input power with less volume consumption. Configuration no. 9 was therefore regarded as the best choice for the subsequent analysis (Figs. 25.4 and 25.5).
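The saturation can be made explicit by computing the configuration-to-configuration gain from the torque column of Table 25.4, as in the short script below (the torque values are taken directly from the table).

```python
# Successive torque gains from Table 25.4, showing the saturation that
# motivated choosing configuration 9 over configuration 10.
torques = [0.74025, 0.79037, 0.83907, 0.88604, 0.9309,
           0.97332, 1.0128, 1.04834, 1.07884, 1.10327, 1.1146]  # Nm

for i in range(1, len(torques)):
    gain = 100.0 * (torques[i] - torques[i - 1]) / torques[i - 1]
    print(f"config {i}: +{gain:.2f}% over the previous configuration")
# The step from configuration 9 to 10 adds only about 1%, while the
# earlier steps each add roughly 3-7%.
```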

25.5.3 Variation in Power Input Parameters

The analysis was carried out, and the braking torque was found to increase linearly with both the current and the number of turns, as expected (Figs. 25.6 and 25.7).


Fig. 25.4 Variation of torque with the length of magnetization

Fig. 25.5 Distribution of magnetic field lines

25.6 Conclusion

A wide range of analyses concerned with the magnetic theory of the braking mechanism has been carried out. The analysis of different magnetic fluids revealed that Lord Corporation's MRF126-LF provided the highest torque for the brake considered. Further analysis by changing the core structure revealed that, even though the magnetic field intensity was reduced, the braking torque obtained was better, as the length of the region where the magnetic field is generated increased. The study also showed an increase in the braking torque when the current and number of coil turns were increased. Overall, the new design has proved to outperform the conventional design in terms of both MRF braking torque and structural weight, and hence provides a better system that can be implemented for motion control. It is expected that this work will save designer time and help with optimizing the magnetic properties of MR brakes.


Fig. 25.6 Variation of braking torque with current


Fig. 25.7 Variation of braking torque with number of turns


References

1. Sohn, J.W.: An experimental study on torque characteristics of magnetorheological brake with modified magnetic core shape. Proc. SPIE, vol. 9431 (2015)
2. Nguyen, Q.H.: Optimal design and selection of magnetorheological brake types based on braking torque and mass. Smart Mater. Struct. 24 (2015)
3. Olabi, A.G.: Design and application of magnetorheological fluid. Mater. Des. 28 (2007)
4. Huang, J.: Analysis and design of a cylindrical magnetorheological fluid brake. J. Mater. Process. Technol. 129 (2002)
5. Park, E.J.: Multidisciplinary design optimization of an automotive magnetorheological brake design. Comput. Struct. 86 (2008)


6. Poznic, A.: Magnetorheological fluid brake basic performances testing with magnetic field efficiency improvement proposal. Hungarian J. Ind. Chem. 40, 113–119 (2012)
7. Yoon, D.-S.: An eddy current effect on the response time of a magnetorheological damper: analysis and experimental validation. Mech. Syst. Sign. Process. 127 (2019)
8. Shiao, Y.: Torque enhancement for a new magnetorheological brake. In: The 7th International Conference on Materials for Advanced Technologies (2013)
9. Wang, J., Meng, G.: Magnetorheological fluid devices: principles, characteristics and applications in mechanical engineering. J. Mater. Design Appl. (2001)
10. Nguyen, Q.H., Choi, S.B.: Optimal design of a novel hybrid MR brake for motorcycles considering axial and radial magnetic flux. Smart Mater. Struct. (2012)
11. Nguyen, Q.H., Choi, S.B.: Optimal design of an automotive magnetorheological brake considering geometric dimensions and zero-field friction heat. Smart Mater. Struct. (2010)
12. Lee, D.Y., Wereley, N.M.: Quasi-steady Herschel–Bulkley analysis of electro and magnetorheological flow mode dampers. J. Intell. Mater. Syst. Struct. (1999)
13. Park, E.J., Stoikov, D., Luz, L.F., Suleman, A.: A performance evaluation of an automotive magnetorheological brake design with a sliding mode controller. Mechatronics (2006)
14. Chen, P., Bai, X.X., Qian, L.J., Choi, S.B.: A magnetorheological fluid mount featuring squeeze mode: analysis and testing. Smart Mater. Struct. (2016)
15. Choi, S.B., Han, Y.M.: Magnetorheological Fluid Technology: Applications in Vehicle Systems (2012)
16. Wang, J., Meng, G.: Magnetorheological fluid devices: principles, characteristics and applications in mechanical engineering. Proc. Inst. Mech. Eng. L. (2001)
17. Liu, B., Li, W.H., Kosasih, P.B., Zhang, X.Z.: Development of an MR-brake-based haptic device. Smart Mater. Struct. (2006)

Chapter 26

Modal and Fatigue Analysis of Ultrasonic Machining Tool for Performance Analysis

Mehdi Mehtab Mirad, Saka Abhijeet Rajendra, Jasper Ramon, and Bipul Das

Abstract Ultrasonic machining (USM) is an unconventional machining process commonly used for the commercial machining of fragile and brittle materials such as ceramics, glass and semiconductor materials. The tool is an essential component of the ultrasonic machining system. In this paper, the emphasis is on evaluating the performance of the tool during the USM process. Previous researchers have paid little attention to analyzing the performance of the tool in the presence of defects. The current study develops a numerical model for modal and fatigue analysis of the tool used in the USM process, with and without various defects. A dedicated finite element method (FEM) is used to understand the physics of the process and the influence of the tool geometry on performance. The mode shapes and fatigue life of the defective and non-defective conical tools are determined through modal and fatigue analysis.

26.1 Introduction

Brittle and hard materials have various industrial applications due to their better physical, mechanical and other properties. Several complicated small-shaped structures need to be fabricated on these materials to bestow them with better performance. Most of the time, apart from the material properties, the machining precision and the damaged surface integrity of the workpiece also affect the final quality of the product. In order to increase the application of brittle and hard materials, the USM process is employed. The ultrasonic tool is one of the most important components of USM. The tool geometry and the stress incorporated in the tool during the USM process for brittle and tough materials have been investigated. A steady state dynamics direct step was included in the analysis after the frequency step to


analyse the displacement and stress of the tool due to the applied load at the longitudinal mode of vibration. The stress distribution in the tool was plotted using the Hencky-von Mises criterion [1]. Modal analysis was performed by FEM on the horn, and the quarter wavelength method, sponge method and fixed–fixed FEM simulation were used to study the input and output parameters of USM [2]. For ultrasonic welding of plastic materials, the design and analysis of acoustic horns was performed: the theoretical dimensions of a conical horn were calculated and compared with the dimensions procured through the commercial horn design software CARD, and the natural frequencies obtained by the theoretical method and by the CARD software were close enough [3]. Design and FEM-based analysis of higher order horn profiles for the USM process has been carried out, with proposed profiles including exponential, double exponential, triple exponential and mixed horn profiles [4]. A hollow circular horn for USM has been designed; a detailed design process was shared, and the modal analysis was done in ANSYS Workbench. The stress components and the amplitude of vibrations produced in the horn were calculated through harmonic analysis, and the proposed design was justified by comparison with an already existing ultrasonic horn design [5]. A computational method for the evaluation of multiaxial fatigue life and wheel dynamic cornering fatigue has been proposed: first a material model is selected, followed by a multi-axial fatigue damage criterion, and the Palmgren–Miner rule is used for predicting the fatigue life of structures under cyclic loading [6]. Experimental and numerical investigations of the fatigue life of butt-welded joints showed that uniaxial fatigue theory cannot predict the damage correctly; instead, the von Mises effective strain method and the SWT equation achieved the best results in evaluating fatigue life [7]. A fatigue damage model for calculating the fatigue life of microelectromechanical systems (MEMS) through a computational method has also been proposed [8]. From the above literature, it is observed that limited work has been carried out on the fatigue failure analysis of the ultrasonic tool for the USM process. In the present study, a conical tool of stainless steel (SS) is considered. A defect is introduced in three different orientations, i.e. along the horizontal, vertical and inclined axes of the tool profile. A comparative study is carried out on the defective and non-defective conical tools through modal and fatigue analysis.

26.2 Finite Element Model of the Tool

Selection and design of an ultrasonic tool profile is very difficult. An ultrasonic conical tool with two different diameters, viz. 5 and 3 mm, and an axial length of 16 mm is modeled as shown in Fig. 26.1. The modeling is carried out with the help of ANSYS Workbench. Mesh generation is performed to achieve the best result for the model; in the present mesh, free tetrahedral elements with a fine element size are used. The properties of the proposed tool material are tabulated in Table 26.1.


Fig. 26.1 Finite element model of a conical tool

Table 26.1 Properties of the proposed tool material

Material          Density (kg/m3)   Young's modulus (GPa)   Poisson's ratio
Stainless steel   7900              200                     0.3

26.3 Modal Analysis

This analysis is performed to obtain the frequencies and mode shapes of the conical tool with and without defects. A defect is introduced in three different orientations, i.e. along the horizontal, vertical and inclined axes of the tool profile. The defect takes the form of a rectangular hole 0.2 mm deep, 0.2 mm wide and 1 mm long. The deformation profile and Eigen frequency of the conical tool where the longitudinal mode of vibration is generated are shown in Fig. 26.2. Six different Eigen frequencies are created in six modes for each of the defective and non-defective conical tools. The preferred longitudinal mode of vibration is observed at the sixth mode of each respective tool; the other five modes have an inadequate longitudinal component of vibration. Mode shapes with their respective Eigen frequencies for the defective and non-defective conical tools are tabulated in Table 26.2. The maximum Eigen frequency (i.e. 95,219 Hz) is generated in the conical tool with a vertical defect.

26.4 Fatigue Analysis

Fatigue analysis is performed to obtain the fatigue life of the ultrasonic tool under cyclic loads. A defect is introduced in three different orientations in the tool profile, as mentioned earlier.


Fig. 26.2 Deformation profile and Eigen frequency of conical tool where longitudinal mode of vibration is generated: a no defect, b horizontal defect, c vertical defect, d inclined defect

Table 26.2 Mode shapes with respective Eigen frequencies (Hz) of conical tool in the presence of defect and non-defect

Tool profile   No. of modes   No defect   Horizontal defect   Vertical defect   Inclined defect
Conical        1              16,151      16,178              16,178            16,178
               2              16,152      16,182              16,182            16,179
               3              64,013      63,983              64,087            63,980
               4              64,013      64,104              64,099            64,100
               5              69,474      69,551              69,544            69,556
               6              95,125      95,191              95,219            95,194

The fatigue life of the conical tool in the presence of defect and non-defect is shown in Fig. 26.3. For the non-defective tool, the fatigue life is found to be 3.617 × 10^7 cycles. Due to the addition of a horizontal defect, the tool life decreases by about 100 times, to 3.53 × 10^5 cycles. With the addition of a vertical or inclined defect, the tool life decreases to 3.402 × 10^6 and 1.448 × 10^5 cycles, respectively.
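Fatigue solvers based on the strain-life approach (the approach named in the conclusion of this study) evaluate a Coffin-Manson-Basquin relation of the form ε_a = (σ'_f/E)(2N)^b + ε'_f(2N)^c and invert it for the life N. The sketch below illustrates that inversion numerically; the coefficients are generic placeholder values for a stainless steel, not the ones used in the ANSYS study.

```python
import numpy as np
from scipy.optimize import brentq

# Generic placeholder strain-life coefficients for a stainless steel
# (assumed for illustration, not the study's material data).
E = 200e9               # Young's modulus (Pa)
sf, b = 920e6, -0.106   # fatigue strength coefficient / exponent
ef, c = 0.213, -0.47    # fatigue ductility coefficient / exponent

def strain_amplitude(N):
    """Coffin-Manson-Basquin strain amplitude at life N (cycles)."""
    return (sf / E) * (2 * N) ** b + ef * (2 * N) ** c

def life_for_strain(eps_a):
    # Invert strain_amplitude(N) = eps_a with a bracketing root search.
    return brentq(lambda N: strain_amplitude(N) - eps_a, 1.0, 1e12)

print(f"Life at eps_a = 0.1%: {life_for_strain(0.001):.3e} cycles")
```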


Fig. 26.3 Fatigue life of conical tool: a no defect, b horizontal defect, c vertical defect, d inclined defect

The stress–strain hysteresis graphs of the defective and non-defective conical tools are shown in Fig. 26.4. In the non-defective tool, the maximum generated stress is 27.441 MPa. In the defective tools, the maximum stresses are 79.846 MPa, 46.517 MPa and 99.839 MPa for defects along the horizontal, vertical and inclined axes, respectively. The maximum stress is thus produced in the tool with a defect along the inclined axis.

26.5 Conclusion

The present study considers the design of an ultrasonic conical tool with the help of the finite element method. A defect is introduced in the tool in three different orientations, i.e. along the horizontal, vertical and inclined axes. The mode shapes and fatigue life of the defective and non-defective tools are determined through modal and fatigue analysis. In the modal analysis, six different axial and non-axial modes are generated in all the defective and non-defective tools, and the preferred longitudinal mode of vibration is generated at the sixth mode in all the tools. The maximum Eigen frequency (i.e. 95,219 Hz) is generated in the tool where a vertical defect is placed. The fatigue life is calculated with the help of a strain-life approach.


Fig. 26.4 Stress–strain hysteresis graph of conical tool: a no defect, b horizontal defect, c vertical defect, d inclined defect

The fatigue life of the non-defective tool is found to be 3.617 × 10^7 cycles. After the addition of a defect, the minimum tool life (i.e. 1.448 × 10^5 cycles) occurs when the defect is placed along the inclined axis.

Acknowledgments The authors would like to acknowledge the Condition Monitoring Laboratory supported by the DST-SERB sponsored research project SRG/2020/000293 at the Department of Mechanical Engineering, National Institute of Technology Silchar for providing the necessary facilities for carrying out the research work.

References

1. Singh, K., Kumar, V.S.: Finite element analysis of ultrasonic machine tool. Int. J. Eng. Res. Technol. 3(7), 1647–1650 (2014)
2. Seah, K.H.W., Wong, Y.S., Lee, L.C.: Design of tool holders for ultrasonic machining using FEM. J. Mater. Process. Technol. 37(1–4), 801–816 (1993)
3. Patel, D.M., Rajurkar, A.U.: Finite element analysis assisted design of ultrasonic horn for plastic welding. In: Proceedings of the International Conference on Computational Methods in Manufacturing (2011)
4. Chhabra, A.D., Kumar, R.V., Vundavilli, P.R., Surekha, B.: Design and analysis of higher order exponential horn profiles for ultrasonic machining. J. Manuf. Sci. Prod. 16(1), 13–19 (2016)


5. Roy, S.: Design of a circular hollow ultrasonic horn for USM using finite element analysis. Int. J. Adv. Manuf. Technol. 93(1), 319–328 (2017)
6. Zheng, Z.G., Sun, T., Xu, X.Y., Pan, S.Q., Yuan, S.: Numerical simulation of steel wheel dynamic cornering fatigue test. Eng. Fail. Anal. 39, 124–134 (2014)
7. Chang, P.H., Teng, T.L.: Numerical and experimental investigation on the fatigue life evaluation of butt-welded joints. Met. Mater. Int. 14(3), 361–372 (2008)
8. Jalalahmadi, B., Sadeghi, F., Peroulis, D.: A numerical fatigue damage model for life scatter of MEMS devices. J. Microelectromech. Syst. 18(5), 1016–1031 (2009)

Chapter 27

Effect of Suspension Parameter on Lateral Dynamics Study of High Speed Railway Vehicle

Yamika Patel, Vikas Rastogi, and Wolfgang Borutzky

Abstract To ensure the safety and stability of high speed railway vehicles (HSRV), the study of the lateral dynamics of railway vehicles is significant. This paper presents a heuristic non-linear creep model and a linear creep model to investigate the effects of suspension parameters and wheel conicity on the lateral dynamics of an HSRV, as the suspension elements and wheel conicity affect the passenger comfort, stability and hunting behaviour of the vehicle. For the analysis, a 31-DOF full car model consisting of a car body, two bogies and two wheelsets per bogie is developed using the bondgraph technique. Parametric variation of the suspension parameters has been performed to study their effect on the critical hunting speed. From the analysis, it has been observed that the primary suspension elements have the more significant influence on vehicle stability. An attempt has been made to establish a correlation of the primary suspension elements and wheel conicity with the critical velocity. It has been observed that the critical velocity increases with increasing values of the suspension parameters, but decreases with increasing wheel conicity.

27.1 Introduction

Many studies have been conducted to investigate the dynamic stability of railway bogies and vehicles running on curved tracks. For analysing the curving performance and stability of railway vehicles, most researchers have employed linear creep models with different degrees of freedom (DOF). A high speed railway vehicle in practice has many non-linear factors, such as the non-linear tangential contact forces and moments acting between wheels and rails, the normal forces acting at the wheel–rail interface, and so on. The application of linear creep models [1, 2] is therefore subject to some inaccuracy.


Polach [3] used linear and non-linear creep models to explore the dynamic responses and critical speeds of railway cars. Cheng et al. [4] utilised both linear and non-linear creep models to investigate the effect of suspension parameters on critical hunting speeds. In their hunting stability analysis, Lee et al. [5, 6] used a 6-DOF model to account for the vertical and rolling motions of the wheelset and bogie systems, and assessed the hunting stability of a high speed railway vehicle for various suspension settings using 8-DOF and 10-DOF models. Although Wang and Li [7] used complicated vehicle models to analyse the derailment ratio, the effect of vehicle speed on derailment quotients for both linear and non-linear creep models with different suspension characteristics has not been explored or compared. Cheng et al. [8] built a 20-DOF model that considered each wheelset's lateral displacement and yaw angle, the lateral and vertical displacements of the bogie and car body, and the roll and yaw angles of the bogie and car body. While constructing the 20-DOF model, however, Cheng et al. neglected the roll angle of each wheelset. In practice, the roll angle of each wheelset is a critical DOF for estimating the contact forces between rails and wheels; if it is not taken into consideration, the dynamic responses may be calculated incorrectly. In this study, the roll angle of the wheelsets is considered while constructing a 31-DOF model, which provides a more accurate picture of the effect of the suspension parameters on the critical hunting speed. The model was created using the bondgraph [9–11] simulation technique, which has the advantage that the suspension forces are intrinsically available as suspension elements in the model; various researchers have used this technique to study rail dynamics [12, 13]. A 31-DOF high speed railway vehicle model is created for analysing the effect of suspension parameters for both the linear and non-linear contact models. This model considers the pitch, lateral, vertical, yaw and roll motions of the vehicle body and the two bogies; for the four identical wheelsets, all motions except pitch were considered.

27.2 Bondgraph Modelling of 31-DOF Railway Vehicle

A schematic diagram of the high speed railway vehicle is shown in Fig. 27.1. It includes a full car body, two bogies and two identical wheelsets for each bogie. The car body and bogie systems are connected through secondary suspension elements in the lateral and vertical directions, while the bogies and wheelsets are connected through primary suspension elements in the vertical and lateral directions. The suspension systems, both primary and secondary, were modelled as combinations of spring-damper elements with motion in the longitudinal, lateral and vertical directions. As a result, the analysis includes all possible motions of the system parts, yielding a system with a total of 31 DOFs. The proposed model is developed using a building block approach, in which all the parts are modelled separately and then integrated to complete the model.


Fig. 27.1 Schematic diagram of high speed railway vehicle: a top view, b front view

Figure 27.2 depicts the fragments of the car body model (CBM), a sub-model of the car body. The kinematic relations above are used to calculate the transformer moduli utilised in the sub-bondgraph model of the car body. The signal bonds, the full arrows attached to the C elements, are utilised to observe the current values of the relevant displacements. The symbol SE in the bondgraph represents a source of effort; it is included at the 1-junctions of the bondgraph to incorporate the force of gravity and the centrifugal force at the car body's centre of gravity in the vertical (z) and lateral (y) directions. Kinematic relationships between the mass centre velocity and the components of rotational velocity, which express the velocities at different locations on the car body, are used to obtain the transformer moduli in each bondgraph sub-model.


Fig. 27.2 Bondgraph model of car body (CBM)

rotational velocity components is used to express the velocities at different locations on the car body and to obtain the transformer moduli in each bondgraph sub-model.

A bogie is defined as a rigid body with mass M_b and second moments of inertia J_bz, J_by and J_bx about the vertical, lateral and longitudinal centroidal axes, respectively. Figure 27.3 depicts a section of the bondgraph model of the truck frame, abbreviated as BM. The model incorporates the vertical, lateral, roll, pitch and yaw motions of the bogie.

The wheelset's primary function is to support the vehicle, and the primary suspension serves as a link between the wheelsets and the bogie frames. This research considers a conical wheelset modelled as a rigid body on a flexible knife-edge rail. Figure 27.4a depicts the bondgraph model of a conical wheelset on a knife-edge rail, and Fig. 27.4b depicts the complete bondgraph model of the wheelset. All the governing equations of the car body, bogie and wheelsets, in differential form, have been taken from Kim et al. [14].

Wheel-rail contact modelling: Garg et al. [15] calculate the non-linear creep forces and moments for the left and right wheels, after transforming the contact plane into an equilibrium coordinate system for small roll and yaw angles of each wheelset, as given in Eqs. 27.1-27.6. Garg and Dukkipati [16] provided the linear creep forces and moments.

F^{nL}_{wLx} = α_{ij} F^{L}_{wlx}    (27.1)

Fig. 27.3 Bondgraph model of truck frame (BM)

Fig. 27.4 a Conical wheelset on knife-edge rail, b complete bondgraph model of wheelset

F^{nR}_{wRx} = α_{ij} F^{R}_{wrx}    (27.2)

F^{nL}_{wLy} = α_{ij} F^{L}_{wly}    (27.3)

F^{nR}_{wRy} = α_{ij} F^{R}_{wry}    (27.4)

M^{nL}_{wLz} = α_{ij} M^{L}_{wlz}    (27.5)

M^{nR}_{wRz} = α_{ij} M^{R}_{wrz}    (27.6)

where F^{L}_{wlx}, F^{L}_{wly}, M^{L}_{wlz}, F^{R}_{wrx}, F^{R}_{wry} and M^{R}_{wrz} are the linear creep forces and moments in the corresponding directions on the left and right wheels. α_{ij} represents the saturation constant in the heuristic creep model obtained from Johnson's approach [15], as given in Eq. 27.7:

α_{ij} = (1/β_{ij})(β_{ij} − β_{ij}²/3 + β_{ij}³/27),  for β_{ij} ≤ 3
α_{ij} = 1/β_{ij},                                      for β_{ij} > 3    (27.7)

where β_{ij} is the non-linearity factor in the heuristic creep model; it is estimated for a vehicle moving over a curved track in quasi-static motion.
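As a quick numerical check of Eq. 27.7, the following Python sketch evaluates the saturation constant for a given non-linearity factor; the function name and the sample values are illustrative and not part of the chapter's model.

```python
import numpy as np

def creep_saturation(beta):
    """Heuristic creep saturation constant alpha_ij of Eq. 27.7 (Johnson's approach).

    For beta <= 3 the cubic polynomial branch applies; beyond that the creep
    force is fully saturated and alpha reduces to 1/beta.
    """
    beta = np.asarray(beta, dtype=float)
    poly = (beta - beta**2 / 3.0 + beta**3 / 27.0) / beta
    saturated = 1.0 / beta
    return np.where(beta <= 3.0, poly, saturated)

# alpha tends to 1 for small beta and decays as 1/beta after saturation;
# the two branches meet continuously at beta = 3 (alpha = 1/3).
print(creep_saturation([0.5, 1.0, 3.0, 6.0]))
```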

27.3 Results and Discussion

To explore the lateral dynamics and hunting behaviour of a railway vehicle, the integrated bondgraph model with 31 degrees of freedom is simulated under different track conditions using the nominal geometrical and inertial parameters listed in Table 27.1. The simulation has been performed using 'Symbol Shakti' to study the effect of the primary suspension elements and wheel conicity on the critical hunting speed of a high speed railway vehicle.

Figures 27.5 and 27.6 show the impact of the suspension elements on the critical hunting velocity. A creep phenomenon occurs because of inertia and gravitational restoring forces: the gravitational force produces a creep that attempts to reduce the amplitude, whereas inertia produces a creep that increases it. The vehicle's stability is therefore governed by the stiffness connecting the wheelset and the bogie, whereas inertia acts to destabilise the motion. As the inertia forces grow, they eventually dominate and the system becomes unstable. Consequently there is a peak in the critical velocity, although the increase appears to be minor. The critical speed is essentially unaffected by the damping coefficients, as shown in Fig. 27.6a, b. The differences appear to be small compared with the earlier cases, implying


Table 27.1 Parameters of the high speed railway vehicle

S. no. | Parameter | Nomenclature | Value
1. | Car body's mass | Mc | 34,000 kg
2. | Bogie's mass | Mb | 3000 kg
3. | Wheelset's mass | Mw | 1400 kg
4. | Car body's moment of inertia about the x axis | Icx | 75.06 × 10³ kg m²
5. | Car body's moment of inertia about the y axis | Icy | 2.08 × 10⁶ kg m²
6. | Car body's moment of inertia about the z axis | Icz | 2.08 × 10⁶ kg m²
7. | Bogie's moment of inertia about the x axis | Ibx | 2260 kg m²
8. | Bogie's moment of inertia about the y axis | Iby | 2710 kg m²
9. | Bogie's moment of inertia about the z axis | Ibz | 3160 kg m²
10. | Wheelset's moment of inertia about the x axis | Iwx | 915 kg m²
11. | Wheelset's moment of inertia about the y axis | Iwy | 140 kg m²
12. | Wheelset's moment of inertia about the z axis | Iwz | 915 kg m²
13. | Wheel radius | r0 | 0.4575 m
14. | Half of the primary longitudinal and vertical suspension arm | lt | 0.978 m
15. | Half of the secondary lateral suspension arm | lC | 1.21 m
16. | Half of the primary lateral suspension arm | lp | 0.978 m
17. | Distance between the vehicle's body and the mass centre of the bogie frame | lS | 9 m
18. | Height of the vehicle body CG above the wheelset | hCG | 1.4 m
19. | Height of the secondary suspension above the bogie CG | h0 | 0.03 m
20. | Height of the bogie CG above the wheelset CG | hg | 0.44 m
21. | Lateral creep coefficient | f11 | 10.2 × 10⁶ N
22. | Forward creep coefficient | f33 | 15 × 10⁷ N
23. | Spin creep coefficient | f22 | 16 N
24. | Lateral/spin creep coefficient | f12 | 3120 N
25. | Track gauge | dp | 0.7465 m
26. | Cant angle | ∅se | 0.0873 rad
27. | Coefficient of friction | μ | 0.2

that the vertical stiffness and damping have little effect on a railway vehicle's hunting behaviour. Figure 27.7 shows the effect of wheel conicity on the critical speed: the critical hunting speed decreases as wheel conicity increases, i.e. the two are inversely related.
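Although the chapter obtains the critical speed from the full 31-DOF bondgraph simulation, the inverse trend can be previewed with Klingel's classical kinematic relation for wheelset hunting; the formula below and the reuse of r0 and dp from Table 27.1 (dp read as half the track gauge) are our illustrative assumptions, not part of the simulation model.

```python
import numpy as np

# Klingel's kinematic hunting wavelength: L = 2*pi*sqrt(r0 * s / (2 * gamma)),
# with wheel radius r0, half gauge s and conicity gamma. For a fixed excitation
# frequency the permissible speed scales with L, so it falls as conicity rises.
r0, s = 0.4575, 0.7465
for gamma in [0.02, 0.03, 0.04, 0.05]:
    wavelength = 2 * np.pi * np.sqrt(r0 * s / (2 * gamma))
    print(f"conicity {gamma:.3f} rad -> hunting wavelength {wavelength:.2f} m")
```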

Fig. 27.5 Variation of critical speed (kmph) with change in primary stiffness parameter: a primary lateral stiffness Kpy × 10⁵ N/m, b primary vertical stiffness Kpz × 10⁵ N/m (linear and non-linear models)

27.4 Conclusions

The modelling and simulation of a railway vehicle with 31 DOFs are presented in this study. The system was developed using a building block technique: every component was modelled separately and then integrated to form the complete model, and the assembled bondgraph model was simulated. The effects of the suspension elements on the vehicle's critical hunting velocity were investigated through a parametric study. The critical hunting speed was found to be particularly sensitive to the primary lateral spring stiffness. The effect of conicity on the critical hunting speed was also investigated, and an inverse relationship was observed.


Fig. 27.6 Variation of critical speed (kmph) with change in primary damping coefficient: a primary lateral damping coefficient Cpy × 10⁵ Ns/m, b primary vertical damping coefficient Cpz × 10⁵ Ns/m (linear and non-linear models)

Fig. 27.7 Variation of critical speed (kmph) with change in wheel conicity (rad), linear and non-linear models


References

1. Bell, C.E., Horak, D.: Forced steering of rail vehicles: stability and curving mechanics. Veh. Syst. Dyn. 10, 357-386 (1981)
2. Wickens, A.H.: Railway vehicles with generic bogies capable of perfect steering. Veh. Syst. Dyn. 25, 389-412 (1996)
3. Polach, O.: On non-linear methods of bogie stability assessment using computer simulations. Proc. IMechE, Part F: J. Rail Rapid Transit 220, 13-27 (2006)
4. Cheng, Y.-C., Hsu, C.-T.: Hunting stability and derailment analysis of a car model of a railway vehicle system. Proc. Inst. Mech. Eng. Part F J. Rail Rapid Transit 226(2), 187-202 (2011)
5. Lee, S.-Y., Cheng, Y.-C.: Influences of the vertical and the roll motions of frames on the hunting stability of trucks moving on curved tracks. J. Sound Vib. 294(3), 441-453 (2006)
6. Lee, S.-Y., Cheng, Y.-C.: Hunting stability analysis of high-speed railway vehicle trucks on tangent tracks. J. Sound Vib. 282(3-5), 881-898 (2005)
7. Wang, W., Li, G.X.: Development of a simulation model of a high-speed vehicle for a derailment mechanism. Proc. IMechE, Part F: J. Rail Rapid Transit 224(2), 103-113 (2010)
8. Cheng, Y.C., Lee, S.Y., Chen, H.H.: Modeling and nonlinear hunting stability analysis of high speed railway vehicle moving on curved tracks. J. Sound Vib. 324, 139-160 (2009)
9. Karnopp, D.: Bond graphs for vehicle dynamics. Veh. Syst. Dyn. 5, 171-184 (1976)
10. Mukherjee, A., Samantaray, A.K.: Bond Graph in Modeling, Simulation and Fault Identification. IK International Pvt Ltd, New Delhi (2006)
11. Pacejka, H.: Modelling complex vehicle systems using bond graphs. J. Franklin Inst. 319, 67-81 (1985)
12. Kumar, V., Rastogi, V.: Investigation of vertical dynamic behaviour and modelling of a typical Indian rail road vehicle through bond graph. World J. Model. Simulat. 5, 130-138 (2009)
13. Kumar, V., Rastogi, V., Pathak, P.: Dynamic analysis of vehicle-track interaction due to wheel flat using bond graph. Proc. IMechE, Part K: J. Multi-body Dyn. 1464419317739754 (2017)
14. Kim, P., Jung, J., Seok, J.: A parametric dynamic study on hunting stability of full dual-bogie railway vehicle. Int. J. Precis. Eng. Manuf. 12(3), 505-519 (2011)
15. Dukkipati, R.V.: Vehicle Dynamics. CRC Press (2000)
16. Garg, V., Dukkipati, R.V.: Dynamics of Railway Vehicle Systems. Academic Press Inc., Toronto (2012)

Chapter 28

A Weighted Fuzzy Time Series Forecasting Method Based on Clusters and Probabilistic Fuzzy Set

Krishna Kumar Gupta and Sanjay Kumar

Abstract Probabilistic fuzzy set (PFS) is an ideal tool to handle uncertainties due to randomness (probabilistic) and fuzziness (non-probabilistic) in a single framework. In the present study, we divide time series data into clusters and propose a novel weighted fuzzy time series (FTS) forecasting method using PFS. The proposed method models non-probabilistic uncertainty due to imprecision and the linguistic representation of time series data, as well as probabilistic uncertainty in assigning membership grades to time series data and in the recurrence of fuzzy logical relations. In the proposed forecasting method, probabilities are assigned to membership grades using the Gaussian probability distribution function (PDF). Time series data of SBI share prices are forecasted using the proposed method in order to show its suitability and applicability. Root mean square error and average forecasting error are used as performance indicators to confirm the outperformance of the proposed weighted fuzzy time series forecasting method based on PFS.

28.1 Introduction

Zadeh introduced fuzzy sets [1] to handle uncertainty in systems due to linguistic representation, imprecision and vagueness. Since statistical time series forecasting models can handle only probabilistic uncertainty, FTS forecasting models [2-4] have an edge over conventional time series forecasting models. After the FTS models due to Song and Chissom [2-4], Chen [5] and many other researchers [6-14] proposed several other FTS forecasting models to improve forecasting accuracy. Sharma et al. [15] proposed a rough set approach for forecasting. Recently, Egrioglu [16] proposed recurrent FTS functions for FTS. Recurrence of fuzzy logical relations (FLRs) has been an issue in FTS forecasting. Weighted FTS models address both the issues of weighting and recurrence in time series


forecasting. Before weighted FTS forecasting, recurrent fuzzy logical relations (FLRs) were neglected, and each FLR was treated as equally important. Yu [17] proposed a weighted FTS forecasting model to resolve this issue and enhance forecast accuracy. In weighted FTS models, every recurrent FLR is taken into account, and different weights are assigned to the recurrent FLRs. Cheng et al. [18], Chen and Phuong [19] and Cheng and Chen [20] proposed FTS methods based on trend weighting, optimal weighting vectors and weighted association rules for financial time series forecasting.

Meghdadi and Akbarzadeh [21] presented the concept of PFS to deal with non-probabilistic and probabilistic uncertainties simultaneously. Liu and Li [22] developed the foundation of a probabilistic fuzzy logic system to manage the impact of stochastic uncertainties and random noise in control problems. Many researchers [23-27] used PFS for different real-life problems of control and classification. Gupta and Kumar [28-30], Pattanayak et al. [31], Efendi et al. [32] and Torbat et al. [33] proposed forecasting methods using PFS, probabilistic intuitionistic fuzzy sets, hesitant probabilistic fuzzy sets, fuzzy random auto-regression time series and a probabilistic fuzzy ARIMA method. MacQueen [34] defined the procedure for partitioning an N-dimensional population on the basis of samples.

The issue of recurrence of FLRs has been ignored in all earlier PFS-based FTS forecasting methods, and the performance of PFS in a clustered environment has also not been tested. With this motivation, a novel weighted FTS forecasting method using PFS and the k-means clustering algorithm [30] is proposed. The proposed method addresses the issue of recurrence in FLRs along with both kinds of uncertainty in a single framework. Its advantages are that it handles the uncertainty caused by the linguistic representation of the data set as well as the uncertainty in assigning membership grades to the data set. Time series data in each cluster are fuzzified using triangular fuzzy sets, and probabilities are assigned to the membership grades of the time series data using a Gaussian PDF. SBI share prices at BSE are forecasted using the proposed method to show its outperformance.

28.2 Preliminaries

This section contains some fundamental definitions of fuzzy set, PFS and FTS.

Definition 1 [1] A fuzzy set A on X = {x_1, x_2, ..., x_n} is a mathematical object of the following form:

A = {⟨x, μ_A(x)⟩ | ∀x ∈ X}    (28.1)

where μ_A : X → [0, 1] determines the membership grades of the elements of X in A.


Definition 2 [22] Let μ be the membership grade of a variable x ∈ X. A PFS is defined on the probability space (V_x, I, P), where I and V_x are the σ-field and the set of all possible events {μ ∈ [0, 1]}, respectively. The probabilities of the elements E_i(μ) in V_x satisfy

P(E_i) ≥ 0,  P(∪ E_i) = Σ P(E_i),  P(V_x) = 1    (28.2)

where P(E_i) is the probability of the event E_i {E_i ⊆ [0, 1]}. A PFS F may be expressed as a union of finite sub-probability spaces:

F ≡ ∪_{x∈X} (V_x, I, P)    (28.3)

Let F_1 and F_2 be two PFSs expressed as F_1 ≡ ∪_{x∈X}(V_{F1}, I_{F1}, P) and F_2 ≡ ∪_{x∈X}(V_{F2}, I_{F2}, P). Liu and Li [22] defined the basic set operations as follows:

1. Union: F_1 ∪ F_2 = ∪_{x∈X}(V_{F1∨F2}, I_{F1∨F2}, P), with P(E) ≥ 0, P(∪E) = ΣP(E) and P(V_{F1∨F2}) = 1. Here, V_{F1∨F2} is {u ⊆ [0, 1], v ⊆ [0, 1]}. For every element E in V_{F1∨F2}, P(E) = P(E_{F1}) · P(E_{F2}) ≥ 0, with E_{F1} in V_{F1} and E_{F2} in V_{F2}.

2. Intersection: F_1 ∩ F_2 = ∪_{x∈X}(V_{F1∧F2}, I_{F1∧F2}, P), with P(E) ≥ 0, P(∪E) = ΣP(E) and P(V_{F1∧F2}) = 1. Here, V_{F1∧F2} is {u ⊆ [0, 1], v ⊆ [0, 1]}. For every element E in V_{F1∧F2}, P(E) = P(E_{F1}) · P(E_{F2}) ≥ 0, with E_{F1} in V_{F1} and E_{F2} in V_{F2}.

3. Complement: F_1^c = ∪_{x∈X}(V_{F1^c}, I_{F1^c}, P), with P(E) ≥ 0, P(∪E) = ΣP(E) and P(V_{F1^c}) = 1. Here, V_{F1^c} is {u ⊆ [0, 1]}, and E is an element event in V_{F1^c}.


Definition 3 [2] Let f_i(t) be fuzzy sets on Y(t) ⊆ ℝ. The set of f_i(t) is denoted by F(t) and is known as an FTS on Y(t). If F(t − 1) causes F(t), i.e. F(t − 1) → F(t), then the relationship may be represented as F(t) = F(t − 1) ∘ R(t, t − 1), where ∘ is the max-min composition operator and R(t, t − 1) is the union of the FLRs.

28.2.1 Score and Deviation Functions

Score and deviation functions are useful for comparing elements of a PFS. In this paper, we use the following score function [35] to rank a probabilistic fuzzy column vector in a PFS. For an element g(μ_i | p_i) (i = 1, 2, ..., k) of a PFS, the score function is defined as

s(g) = Σ_{i=1}^{k} μ_i p_i    (28.4)

where μ_i and p_i are the membership grades and the corresponding probabilities. The deviation function of g(μ_i | p_i) is d(g) = (Σ_{i=1}^{k} (μ_i − s(g))² p_i)^{1/2}. Two elements g_1 and g_2 of a PFS are compared using the following law:

If s(g_1) > s(g_2), then g_1 > g_2.
If s(g_1) = s(g_2), then (i) if d(g_1) > d(g_2), then g_1 < g_2; (ii) if d(g_1) < d(g_2), then g_1 > g_2; (iii) if d(g_1) = d(g_2), then g_1 = g_2.
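A minimal sketch of this ranking law, assuming an element of a PFS is given as paired arrays of membership grades and probabilities (the function names are ours):

```python
import numpy as np

def score(mu, p):
    """Score function s(g) = sum(mu_i * p_i) of Eq. 28.4."""
    return float(np.dot(mu, p))

def deviation(mu, p):
    """Deviation d(g) = sqrt(sum((mu_i - s(g))^2 * p_i))."""
    s = score(mu, p)
    return float(np.sqrt(np.sum((np.asarray(mu) - s) ** 2 * np.asarray(p))))

def compare(g1, g2):
    """Return 1 if g1 > g2, -1 if g1 < g2, 0 if equal under the ranking law."""
    s1, s2 = score(*g1), score(*g2)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    d1, d2 = deviation(*g1), deviation(*g2)
    if d1 != d2:
        return 1 if d1 < d2 else -1  # equal scores: smaller deviation ranks higher
    return 0

g1 = ([0.6, 0.8], [0.5, 0.5])   # illustrative PFS elements
g2 = ([0.5, 0.9], [0.5, 0.5])
print(compare(g1, g2))           # scores tie at 0.7, so g1 wins on deviation
```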

28.3 Proposed Forecasting Method

The proposed FTS forecasting method is a novel application of PFS to time series forecasting. It uses k-means clustering and weighted FLRs, in which the membership grades of the data set are considered as random variables. Probabilities are associated with membership grades using a Gaussian PDF. The method also uses a score function to aggregate the elements of a PFS. The following steps explain the complete forecasting procedure.

Step 1: Define the universe of discourse as X = [E_1, E_2], where E_1 = X_min − X_1 and E_2 = X_max + X_2; here X_max and X_min are the maximum and minimum of the data set, and X_1, X_2 are arbitrary


positive real numbers. Partition X into n equal length intervals (u_i; i = 1, 2, ..., n) and construct triangular fuzzy sets (A_i; i = 1, 2, ..., n) in accordance with u_i using the following parameters:

A_i = [E_1 + (i − 1)h, E_1 + ih, E_1 + (i + 1)h];  i = 1, 2, 3, ..., n − 1
A_i = [E_1 + (i − 1)h, E_1 + ih, E_1 + ih];        i = n    (28.5)

Here, h = (E_2 − E_1)/n.

Step 2: Apply the k-means clustering algorithm to divide the data set into k clusters and find an appropriate number of clusters.

Step 3: Fuzzify the time series data using the fuzzy sets A_i in each cluster and apply the following Gaussian PDF to assign probabilities to the membership grades:

p_{μi} = (l_j / (√(2π) ζ_j)) [exp(−(x_i − (μ_i − 1)l_j − m_j)² / (2ζ_j²)) + exp(−(x_i − (1 − μ_i)l_j − m_j)² / (2ζ_j²))],  μ_i ∈ [0, 1];
p_{μi} = 0, otherwise    (28.6)

Here, p_{μi} ∈ [0, 1], and x_i is a data point in a specific cluster with corresponding membership grade μ_i; ζ_j, m_j and l_j are the standard deviation, mean and interval length of the data that lie in fuzzy set A_i. For fuzzification of the time series data, we take the PFS F_i with the highest probability of membership grades and create probabilistic fuzzy logical relations (PFLRs).

Step 4: Assign weights to the PFLRs as follows. If the FLRs F_1 →(w_1) F_r, F_2 →(w_2) F_r, ..., F_{r−1} →(w_{r−1}) F_r are used in the forecast, then weights are assigned to the high-order PFLRs in ascending order, giving more weight to the next PFLR than to the previous one. Normalized weights are computed as

W_l = (w̄_1, w̄_2, ..., w̄_{r−1}) = (w_1/Σ_{l=1}^{r−1} w_l, w_2/Σ_{l=1}^{r−1} w_l, ..., w_{r−1}/Σ_{l=1}^{r−1} w_l), provided Σ_{l=1}^{r−1} w̄_l = 1.

Step 5: Multiply the elements of each PFLR by its weight to obtain the weighted PFLRs and take the union of the weighted probabilistic fuzzy relations R_l^r to construct the up to (r−1)th order weighted probabilistic fuzzy time invariant relation R^r (i.e. the corresponding probabilities are independent): R^r = ∪_l R_l^r.

Step 6: Apply the score function of Eq. 28.4 to obtain the probabilistic fuzzy row vector with the highest score function value; the forecasted output is computed as

Forecasted output = (Σ_{l=1}^{r−1} m_l) / (r − 1)

Here, m_l is the middle point of the interval corresponding to the membership grade with the highest score function value.
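Steps 1 and 3 can be sketched in a few lines of Python; the triangular partition reproduces Table 28.2, while the statistics passed to the probability assignment (cluster mean and standard deviation) are illustrative stand-ins for the values estimated from the data.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Membership grade of x in the triangular fuzzy set [a, b, c]."""
    if a <= x <= b:
        return (x - a) / (b - a) if b > a else 1.0
    if b < x <= c:
        return (c - x) / (c - b) if c > b else 1.0
    return 0.0

def gaussian_prob(x, mu, l_j, m_j, zeta_j):
    """Probability assigned to membership grade mu of datum x (Eq. 28.6)."""
    coef = l_j / (np.sqrt(2.0 * np.pi) * zeta_j)
    e1 = np.exp(-((x - (mu - 1.0) * l_j - m_j) ** 2) / (2.0 * zeta_j ** 2))
    e2 = np.exp(-((x - (1.0 - mu) * l_j - m_j) ** 2) / (2.0 * zeta_j ** 2))
    return coef * (e1 + e2)

# Fourteen equal intervals on [1100, 2800], as in Table 28.2
E1, E2, n = 1100.0, 2800.0, 14
h = (E2 - E1) / n
sets = [(E1 + (i - 1) * h, E1 + i * h, min(E1 + (i + 1) * h, E2))
        for i in range(1, n + 1)]

x = 1886.0                                # an August 2009 price from Table 28.1
mu = triangular_membership(x, *sets[6])   # grade in A7 = [1828.57, 1950, 2071.43]
# mean 1923.99 is the Cluster-1 mean of Table 28.3; zeta is illustrative
print(mu, gaussian_prob(x, mu, l_j=h, m_j=1923.99, zeta_j=100.0))
```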


28.4 Forecasting of SBI Share Prices Using the Proposed Method

To demonstrate the efficiency and suitability of the proposed weighted FTS forecasting method based on PFS, it is applied to the forecasting of financial time series data of SBI share prices over two financial years. The actual time series data (Table 28.1) are taken from the annual reports of SBI for the years 2008-09 and 2009-10.

Step 1: The minimum and maximum SBI share prices (X_min, X_max) are observed from Table 28.1 to define the universe of discourse, which is further partitioned into fourteen intervals. Fuzzy sets A_i are constructed in accordance with the fourteen intervals. Table 28.2 shows the intervals and the corresponding fuzzy sets.

Table 28.1 Actual SBI share prices

Months | SBI prices | Months | SBI prices | Months | SBI prices
04-08 | 1819.95 | 12-08 | 1325 | 08-09 | 1886
05-08 | 1840 | 01-09 | 1376.4 | 09-09 | 2235
06-08 | 1496.7 | 02-09 | 1205.9 | 10-09 | 2500
07-08 | 1567.5 | 03-09 | 1132.25 | 11-09 | 2394
08-08 | 1638.9 | 04-09 | 1355 | 12-09 | 2374
09-08 | 1618 | 05-09 | 1891 | 01-10 | 2315
10-08 | 1569.9 | 06-09 | 1935 | 02-10 | 2059.95
11-08 | 1375 | 07-09 | 1840 | 03-10 | 2120.05

Table 28.2 Intervals and corresponding fuzzy sets

Interval | Fuzzy set
u1 = [1100, 1221.43] | A1 = [1100, 1221.43, 1342.86]
u2 = [1221.43, 1342.86] | A2 = [1221.43, 1342.86, 1464.29]
u3 = [1342.86, 1464.29] | A3 = [1342.86, 1464.29, 1585.71]
u4 = [1464.29, 1585.71] | A4 = [1464.29, 1585.71, 1707.14]
u5 = [1585.71, 1707.14] | A5 = [1585.71, 1707.14, 1828.57]
u6 = [1707.14, 1828.57] | A6 = [1707.14, 1828.57, 1950]
u7 = [1828.57, 1950] | A7 = [1828.57, 1950, 2071.43]
u8 = [1950, 2071.43] | A8 = [1950, 2071.43, 2192.86]
u9 = [2071.43, 2192.86] | A9 = [2071.43, 2192.86, 2314.29]
u10 = [2192.86, 2314.29] | A10 = [2192.86, 2314.29, 2435.71]
u11 = [2314.29, 2435.71] | A11 = [2314.29, 2435.71, 2557.14]
u12 = [2435.71, 2557.14] | A12 = [2435.71, 2557.14, 2678.57]
u13 = [2557.14, 2678.57] | A13 = [2557.14, 2678.57, 2800]
u14 = [2678.57, 2800] | A14 = [2678.57, 2800, 2800]


Table 28.3 Clusters for time series data of SBI share prices

Cluster | Data in the cluster | Cluster's mean
Cluster-1 | 1819.95, 1840, 1891, 1935, 1840, 1886, 2059.95, 2120.05 | 1923.99
Cluster-2 | 1496.7, 1567.5, 1638.9, 1618, 1569.9 | 1578.2
Cluster-3 | 2235, 2500, 2394, 2374, 2315 | 2363.6
Cluster-4 | 1375, 1325, 1376.4, 1205.9, 1132.25, 1355 | 1294.93

Step 2: The number of clusters and the respective time series data in each cluster are calculated using the 'R' software [36] (Table 28.3).

Step 3: The SBI share prices are fuzzified using the triangular fuzzy sets (Table 28.1), and probabilities are assigned to the membership grades of the SBI share prices using Eq. 28.6. The SBI share prices in each cluster are probabilistically fuzzified by choosing the membership grades with the highest probability. The probabilistic fuzzified SBI share prices and the PFLRs constructed in each cluster are shown in Tables 28.4 and 28.5, respectively.

Step 4: Weights are assigned to the PFLRs and normalized.

Step 5: The elements of each PFLR are multiplied by the corresponding weights, and the weighted PFLRs are combined by taking the union of the weighted PFLRs.

Step 6: The probabilistic fuzzy row vector with the highest score function value is computed.

Table 28.4 Probabilistic fuzzified SBI share prices

Cluster | Actual share price | Probabilistic fuzzified share price
Cluster-1 | 1819.95 | F6
Cluster-1 | 1840 | F6
Cluster-1 | 1891 | F7
Cluster-1 | 1935 | F7
Cluster-1 | 1840 | F6
Cluster-1 | 1886 | F6
Cluster-1 | 2059.95 | F8
Cluster-1 | 2120.05 | F8
Cluster-2 | 1496.7 | F3
Cluster-2 | 1567.5 | F4
Cluster-2 | 1638.9 | F4
Cluster-2 | 1618 | F4
Cluster-2 | 1569.9 | F4
Cluster-3 | 2235 | F9
Cluster-3 | 2500 | F12
Cluster-3 | 2394 | F11
Cluster-3 | 2374 | F10
Cluster-3 | 2315 | F10
Cluster-4 | 1375 | F2
Cluster-4 | 1325 | F2
Cluster-4 | 1376.4 | F2
Cluster-4 | 1205.9 | F1
Cluster-4 | 1132.25 | F1
Cluster-4 | 1355 | F2

Table 28.5 PFLRs for share prices in each cluster

Cluster-1 | Cluster-2 | Cluster-3 | Cluster-4
F6 → F6 | F3 → F4 | F9 → F12 | F2 → F2
F6 → F7 | F4 → F4 | F12 → F11 | F2 → F2
F7 → F7 | F4 → F4 | F11 → F10 | F2 → F1
F7 → F6 | F4 → F4 | F10 → F10 | F1 → F1
F6 → F6 |  |  | F1 → F2
F6 → F8 |  |  | 
F8 → F8 |  |  | 

The abridged RMSE and AFER values (Table 28.6) confirm that the proposed method outperforms MA(3), WMA(3), ARIMA and other existing fuzzy and intuitionistic FTS forecasting methods [5, 6, 12, 29, 37-39] in forecasting SBI share prices.
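For reference, the two indicators can be computed as below; AFER is taken here as the mean absolute percentage error, which is the usual reading, and the sample series is invented for illustration.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean square error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def afer(actual, forecast):
    """Average forecasting error rate, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(a - f) / a) * 100.0)

actual = [2235.0, 2500.0, 2394.0, 2374.0, 2315.0]   # illustrative only
forecast = [2200.0, 2450.0, 2400.0, 2380.0, 2300.0]
print(rmse(actual, forecast), afer(actual, forecast))
```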

28.5 Conclusions

Time series forecasting based on fuzzy sets can handle only the uncertainty that arises from the linguistic representation and imprecision of time series data. In this paper, we have proposed a novel weighted probabilistic FTS forecasting method using k-means clustering and PFS. The proposed method also uses a score function on the weighted probabilistic fuzzy relations to make the defuzzification process simple and less complex. Its advantages are that it deals with the issue of recurrence of FLRs and includes both types of uncertainty simultaneously. The PFS-based method has been applied to forecast financial time series data to show its outperformance and suitability over various existing statistical, FTS and IFTS forecasting methods.

In a PFS-based FTS forecasting method, probabilities are assigned to membership grades using a Gaussian probability distribution function (PDF), assuming that the membership grades follow a normal distribution. Finding a suitable PDF for assigning probabilities to membership grades when they do not follow a normal distribution is a possible direction for further research and modification of this model. It would also be interesting to evaluate the performance of PFSs in FTS forecasting with clustering algorithms other than k-means.

Table 28.6 Forecasted SBI share prices (04-08 to 03-10) using the existing methods MA(3), WMA(3), ARIMA, Chen [5], Huarng [6], Pathak and Singh [10], Joshi and Kumar [37], Kumar and Gangwar [38], Bisht and Kumar [39] and Gupta and Kumar [29], and the proposed method, together with the RMSE and AFER of each method; the proposed method attains the lowest RMSE (84.26) and AFER (4.03) among all the methods compared.


References

1. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338-353 (1965)
2. Song, Q., Chissom, B.S.: Forecasting enrolments with fuzzy time series - part I. Fuzzy Sets Syst. 54(1), 1-9 (1993)
3. Song, Q., Chissom, B.S.: Fuzzy time series and its models. Fuzzy Sets Syst. 54(3), 269-277 (1993)
4. Song, Q., Chissom, B.S.: Forecasting enrolments with fuzzy time series - part II. Fuzzy Sets Syst. 62(1), 1-8 (1994)
5. Chen, S.M.: Forecasting enrolments based on fuzzy time series. Fuzzy Sets Syst. 81(3), 311-319 (1996)
6. Huarng, K.: Effective lengths of intervals to improve forecasting in fuzzy time series. Fuzzy Sets Syst. 123(3), 387-394 (2001)
7. Chen, S.M., Hwang, J.R.: Temperature prediction using fuzzy time series. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 30(2), 263-275 (2000)
8. Song, Q.: A note on fuzzy time series model selection with sample autocorrelation functions. Cybern. Syst. 34(2), 93-107 (2003)
9. Lee, H.S., Chou, M.T.: Fuzzy forecasting based on fuzzy time series. Int. J. Comput. Math. 81(7), 781-789 (2004)
10. Pathak, H.K., Singh, P.: A new bandwidth interval based forecasting method for enrollments using fuzzy time series. Appl. Math. 2(04), 504 (2011)
11. Singh, P., Borah, B.: An efficient time series forecasting model based on fuzzy time series. Eng. Appl. Artif. Intell. 26(10), 2443-2457 (2014)
12. Lu, W., Chen, X., Pedrycz, W., Liu, X., Yang, J.: Using interval information granules to improve forecasting in fuzzy time series. Int. J. Approximate Reasoning 57, 1-18 (2015)
13. Cheng, S.H., Chen, S.M., Jian, W.S.: Fuzzy time series forecasting based on fuzzy logical relationships and similarity measures. Inf. Sci. 327, 272-287 (2016)
14. Bose, M., Mali, K.: A novel data partitioning and rule selection technique for modeling high-order fuzzy time series. Appl. Soft Comput. 63, 87-96 (2018)
15. Sharma, H.K., Kumari, K., Kar, S.: A rough set approach for forecasting models. Decis. Making Appl. Manage. Eng. 3(1), 1-21 (2020)
16. Egrioglu, E., Fildes, R., Bas, E.: Recurrent fuzzy time series functions approaches for forecasting. Granular Comput., 1-8 (2021)
17. Yu, H.K.: Weighted fuzzy time series models for TAIEX forecasting. Physica A 349(3-4), 609-624 (2005)
18. Cheng, C.H., Chen, T.L., Chiang, C.H.: Trend-weighted fuzzy time-series model for TAIEX forecasting. In: International Conference on Neural Information Processing, pp. 469-477. Springer, Berlin, Heidelberg (2006)
19. Chen, S.M., Phuong, B.D.H.: Fuzzy time series forecasting based on optimal partitions of intervals and optimal weighting vectors. Knowl.-Based Syst. 118, 204-216 (2017)
20. Cheng, C.H., Chen, C.H.: Fuzzy time series model based on weighted association rule for financial market forecasting. Expert Syst. 35(4), e12271 (2018)
21. Meghdadi, A.H., Akbarzadeh-T, M.R.: Probabilistic fuzzy logic and probabilistic fuzzy systems. In: Fuzzy Systems, 10th IEEE International Conference, vol. 3, pp. 1127-1130 (2001)
22. Liu, Z., Li, H.X.: A probabilistic fuzzy logic system for modeling and control. IEEE Trans. Fuzzy Systems 13(6), 848-859 (2005)
23. Li, H.X., Liu, Z.: A probabilistic neural-fuzzy learning system for stochastic modeling. IEEE Trans. Fuzzy Syst. 16(4), 898-908 (2008)
24. Hinojosa, W.M., Nefti, S., Kaymak, U.: Systems control with generalized probabilistic fuzzy-reinforcement learning. IEEE Trans. Fuzzy Syst. 19(1), 51-64 (2011)
25. Zhang, G., Li, H.X.: An efficient configuration for probabilistic fuzzy logic system. IEEE Trans. Fuzzy Syst. 20(5), 898-909 (2012)


26. Fialho, A.S., Vieira, S.M., Kaymak, U., Almeida, R.J., Cismondi, F., Reti, S.R., Sousa, J.M.: Mortality prediction of septic shock patients using probabilistic fuzzy systems. Appl. Soft Comput. 42, 194-203 (2016)
27. Li, H.X., Wang, Y., Zhang, G.: Probabilistic fuzzy classification for stochastic data. IEEE Trans. Fuzzy Syst. 25(6), 1391-1402 (2017)
28. Gupta, K.K., Kumar, S.: Fuzzy time series forecasting method using probabilistic fuzzy sets. In: Advanced Computing and Communication Technologies, pp. 35-43. Springer, Singapore (2019)
29. Gupta, K.K., Kumar, S.: Hesitant probabilistic fuzzy set based time series forecasting method. Granular Computing, 1-20 (2018)
30. Gupta, K.K., Kumar, S.: Probabilistic intuitionistic fuzzy set based intuitionistic fuzzy time series forecasting method. In: Manna, S., Datta, B., Ahmed, S. (eds.) Mathematical Modeling and Scientific Computing with Applications. ICMMSC 2018, Springer Proceedings in Mathematics and Statistics, vol. 308. Springer, Singapore (2018)
31. Pattanayak, R.M., Behra, H.S., Panigrahi, S.: A novel probabilistic intuitionistic fuzzy set based model for high order fuzzy time series forecasting. Eng. Appl. Artif. Intell. 99, 104136 (2021)
32. Efendi, R., Arbaiy, N., Deris, M.M.: A new procedure in stock market forecasting based on fuzzy random auto-regression time series model. Inf. Sci. 441, 113-132 (2018)
33. Torbat, S., Khashei, M., Bijari, M.: A hybrid probabilistic fuzzy ARIMA model for consumption forecasting in commodity markets. Econ. Anal. Policy 58, 22-31 (2018)
34. MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, no. 14, pp. 281-297 (1967)
35. Xu, Z., Zhou, W.: Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment. Fuzzy Optim. Decis. Making 16(4), 481-503 (2017)
36. https://cran.r.project.org
37. Joshi, B.P., Kumar, S.: Intuitionistic fuzzy sets based method for fuzzy time series forecasting. Cybern. Syst. 43(1), 34-47 (2012)
38. Kumar, S., Gangwar, S.S.: Intuitionistic fuzzy time series: an approach for handling non-determinism in time series forecasting. IEEE Trans. Fuzzy Syst. 24(6), 1270-1281 (2016)
39. Bisht, K., Kumar, S.: Fuzzy time series forecasting method based on hesitant fuzzy sets. Expert Syst. Appl. 64, 557-568 (2016)

Chapter 29

Modeling Clusters in Streamflow Time Series Based on an Affine Process

Hidekazu Yoshioka and Yumi Yoshioka

Abstract Defining "clusters" in time series data is a ubiquitous issue in many engineering problems. We propose an analytical framework for resolving this issue, focusing on streamflow time series data. We use an affine jump process and define the number of clusters as the number of specific jumps having jump sizes larger than a prescribed threshold value. Our definition is not only analytically tractable but also provides a physically-consistent cluster decomposition formula for streamflow time series. The statistical dependence of clusters on the threshold value is analyzed using real data. We also discuss the transferability of the analysis results to modeling clustered arrivals of water quality load and migratory fish.

29.1 Introduction

Stochastic processes play indispensable roles in civil and environmental engineering and related fields, such as earthquake engineering [1], ecological engineering [2], and hydraulic engineering [3]. Stochastic processes encountered in real-world problems are not simply white noise processes but are more complex ones, possibly having jumps, delays, and sometimes explosion and extinction [4]. Hydrological data, such as time series of water elevation and water quality indices of a surface water body, can be seen as a stochastic process, which often involves clustered jumps corresponding to flood events [5]. Here, a cluster means a mass arrival of something in a short duration. However, for stochastic process models it is still unclear what a cluster is and how clusters can be separated from each other (Fig. 29.1). For example, the Hawkes-type models of self-exciting jump processes are well-known nonlinear stochastic process models that generate "clustered" jump events but are often discussed without explicitly defining clusters [6]. Rigorously


Fig. 29.1 Time series data of the outflow discharge of a dam

defining clusters in time series data would significantly advance our understanding of stochastic process models and hence allow deeper analysis with them. Such an approach, to date, is still rare despite its importance.

We present a new methodology to define flood "clusters" for a class of stochastic process models to be used in simulation and analysis of streamflow time series collected at an observation point in a river. Our model is an affine jump process model whose statistical moments are obtained in closed form [e.g., 7], while the flood and recession events observed in real data are reasonably reproduced. Following the novel definition of clusters in Jiao et al. [8], applicable to affine processes in finance, we define a cluster as the occurrence of a jump event having a jump size larger than a prescribed threshold value. This definition is not only mathematically rigorous but also biologically plausible, because field surveys in a river have implied that the times of fish migration are close to the times of large jumps of discharge [9]. The abovementioned definition is consistent with the mathematical property of (self-exciting) affine processes that a large jump increases the jump intensity and is often followed by other jumps, thereby creating a cluster. Once a definition is given, the statistical properties of clusters can be analyzed either theoretically or numerically. We provide a brief application example later. Our contribution will open a new door for advanced modeling and analysis of stochastic process models.

29.2 Affine Process Model and Clusters

29.2.1 Affine Process Model

We present the affine process model considered in this paper; its specification for an engineering problem is provided in the next section. The model is given as follows:

dX_t = a(b − X_t)dt + c ∫_0^{X_{t−}} ∫_0^{+∞} z N(dt, du, dz),  t > 0,  X_0 > 0,    (29.1)

where t ≥ 0 is time, a, b, c > 0 are parameters, and N is a spatio-temporal Poisson random measure having the compensator dt du v(dz), with a Lévy measure v(dz) of bounded variation in (0, +∞) such that ∫_0^{+∞} z v(dz) < +∞ but not necessarily ∫_0^{+∞} v(dz) < +∞ [9]. We assume a > c ∫_0^{+∞} z v(dz) ≡ cM, and that the process X admits a stationary density and bounded moments E[X_t^k] (k ∈ ℕ) at a stationary state. The recession is hence strong enough to prevent X from blowing up. We then obtain the stationary average E[X_t] = ab/(a − cM) > 0. At the same time, Eq. (29.1) admits a unique non-negative path that is right continuous with left limits.

29.2.2 Clusters

By virtue of the model (29.1), the occurrence time τ_i of the ith cluster for t > 0 is recursively specified based on Jiao et al. [8]:

τ_{i+1} = inf{t; t > τ_i, J_t = J_{t−} + 1},  i = 0, 1, 2, ...,  and τ_0 = 0,    (29.2)

where J is the counting process given by

J_t = ∫_0^t ∫_0^{x_{t−}} ∫_{cy}^{+∞} N(dt, du, dz),  t > 0,  J_0 = 0    (29.3)

with an auxiliary affine process x governed by

dx_t = a(b − x_t)dt + c ∫_0^{x_{t−}} ∫_0^{cy} z N(dt, du, dz),  t > 0,  x_0 = X_0.    (29.4)

Here, y > 0 is a constant parameter and the quantity cy corresponds to the threshold used to judge the occurrence of clusters; multiplying y by c is simply for later use in our application. The counting process J discontinuously increases with the intensity x_t ∫_{cy}^{+∞} v(dz), and its jump times are identified as the occurrence times of clusters. The convention τ_0 = 0 is for brevity. The auxiliary affine process model (29.4) has a form similar to (29.1) but is driven only by the jumps smaller than cy. Based on these preparations, we define the occurrence times of clusters.

Definition 1 For the affine process model (29.1), the occurrence time τ_i of the ith cluster for t > 0 is given by the recursion (29.2), where J is the counting process (29.3).

A more naïve definition of clusters may be to define another counting process K:

K_t = ∫_0^t ∫_0^{X_{t−}} ∫_{cy}^{+∞} N(dt, du, dz),  t > 0,  K_0 = 0,    (29.5)

with which all jumps larger than cy are collected. However, using the process J not only allows us to count a part of the large jumps but also to decompose X as [8]:

X_t = x_t + Σ_{i=1}^{J_t} u^{(i)}_{t−τ_i},  t ≥ 0,    (29.6)

where {u^{(i)}}_{i=1,2,3,...} is a sequence of independent, identically distributed processes. The governing equation of each u^{(i)} has the generic form

du_t = −a u_t dt + c ∫_0^{u_{t−}} ∫_0^{+∞} z N(dt, du, dz),  t > 0,    (29.7)

with an initial condition u_0 taken from the non-singular probability density v(dz)/∫_{cy}^{+∞} v(dz) (cy < z). Each u^{(i)} is positive and decays exponentially on average, ln u^{(i)}_t ∼ −(a − cM)t. From a physical viewpoint, the first term of (29.6) is a small-jump process and the second term is a sum of decaying (flood) clusters. A physically-consistent decomposition like (29.6) is impossible with K of (29.5).
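To make Definition 1 concrete, the sketch below simulates a simplified version of (29.1) with a finite-activity jump measure (rate λ per unit of state, Exp(1) jump sizes) and counts clusters through J alongside the naïve count K; the Euler discretisation, the thinning construction and all parameter values are our own illustrative choices, not the specification fitted in Sect. 29.3.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 1.2, 1.0, 0.2     # recession rate, level and jump scale (illustrative)
lam = 2.0                   # total mass of v: finite-activity measure, Exp(1) sizes
zc = 2.0                    # threshold on the jump size z (the role of cy)
dt, T = 1e-3, 100.0

X = x = b                   # full process X and its small-jump part x (x <= X)
J = K = 0                   # cluster counts: Definition 1 vs. the naive count
for _ in range(int(T / dt)):
    # candidate jumps arrive with intensity lam * X_{t-}; the mark u decides
    # whether the same jump also drives the smaller process x (thinning)
    if rng.random() < lam * X * dt:
        u = rng.uniform(0.0, X)
        z = rng.exponential(1.0)
        if z >= zc:
            K += 1                  # K counts every large jump of X
            if u < x:
                J += 1              # J counts it only against the base flow x
        elif u < x:
            x += c * z              # x receives the small jumps only
        X += c * z                  # X receives all jumps
    X += a * (b - X) * dt           # mean-reverting recession
    x += a * (b - x) * dt

print(f"j ~ J/T = {J / T:.3f}, k ~ K/T = {K / T:.3f} (k >= j always)")
```

Because x never exceeds X, every jump counted by J is also counted by K, reproducing the ordering k ≥ j discussed in Sect. 29.3.2.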

29.3 Brief Application

29.3.1 Study Site

We apply the theory developed in the previous section to real data. The time series that we consider is an hourly record of the outflow discharge of an existing dam (O Dam in H River, Japan). We have been collecting the hourly outflow discharge data in each water year (WY) since 2016; a WY starts at the beginning of June and ends on the last day of the following May (i.e., WY2016 is from June 1, 2016 to May 31, 2017). Based on the previous research [11], we define the discharge at time t as Q_t and set its governing equation as

dQ_t = ρ(Q_min − Q_t)dt + ∫_0^{A+BQ_{t−}} ∫_0^{+∞} z N(dt, du, dz),  t > 0,  Q_0 > 0    (29.8)

with the recession rate ρ > 0, minimum discharge Q_min > 0, and parameters A, B > 0. By the transformation of variables X_t = A + B Q_t, we arrive at the model (29.1) with ρ = a, b = A + B·Q_min, and c = B. The correspondence among the parameters shows that a threshold y on the discharge Q is transferred to the threshold cy = By on X. This preliminary finding motivated the use of the threshold cy in Sect. 29.2.


Fig. 29.2 Comparison of j (1/year) and k (1/year) for different values of the threshold y (m3 /s)

29.3.2 Computation

We compare the expectations j = t⁻¹E[J_t] and k = t⁻¹E[K_t] of the two counting processes J and K as t → +∞. By definition, j and k correspond to the expected number of clusters per unit time (by Definition 1 in the former and by the formal replacement x → X in the latter). It is easy to show k ≥ j by the ordering X_t ≥ x_t, satisfied almost surely. Because the two processes are counting processes, their standard deviations only have the order of √t, which is smaller than the order t of the expectations in the large-time limit. Although not reported in detail here, the model parameters have been successfully identified by matching the first to fourth statistical moments, as in Yoshioka and Tsujimura [11], with the Lévy measure v(dz) = exp(−fz) z^{−(g+1)} dz, f > 0, g < 1.

Figure 29.2 compares the expectations j (1/year) and k (1/year) for different values of the threshold y (m³/s) in WY2018, with an average discharge of 5.38 (m³/s) and a standard deviation of 22.06 (m³/s) (Fig. 29.1). The maximum and minimum discharges in this WY are 368.9 (m³/s) and 1.0 (m³/s), respectively. The two expectations have qualitatively the same shapes but clearly differ from each other, suggesting that clusters should be carefully defined. The expected numbers of clusters in one year in the sense of Definition 1 are as follows: 8.6 [y = 10 (m³/s)], 0.35 [y = 100 (m³/s)], 0.002 [y = 200 (m³/s)], and 5 × 10⁻⁸ [y = 400 (m³/s)].
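A crude data-side counterpart of k can be obtained directly from an hourly record by counting hourly rises of discharge exceeding the threshold y; this simple differencing rule is our own approximation, ignores the model structure, and should only be read as a sanity check against the model-based counts.

```python
import numpy as np

def count_large_rises(q, y, hours_per_year=24 * 365):
    """Number of hourly discharge rises >= y (m^3/s), per year.

    A naive empirical proxy for the count k of (29.5); the count j of
    Definition 1 additionally discounts jumps riding on an already
    elevated flow, so j is no larger than this estimate."""
    q = np.asarray(q, dtype=float)
    rises = np.diff(q)
    years = len(q) / hours_per_year
    return float(np.sum(rises >= y) / years)

# Synthetic one-year hourly record: base flow plus three flood spikes
q = 5.0 + 2.0 * np.random.default_rng(3).random(24 * 365)
q[[700, 3000, 6100]] += [150.0, 90.0, 300.0]
print(count_large_rises(q, y=50.0))   # ~3 events per year in this toy record
```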

29.3.3 Remarks on Water Quality

The results obtained above can be applied to environmental indicators of river water quality, including the dissolved silica (DSi, a primary nutrient of river ecosystems) load [12], in a downstream reach of O Dam. Based on our water sampling surveys since 2019, we found the following relationship between the (daily averaged) discharge Q (m³/s) and the DSi load L (g/s):

L = 5.730Q + 0.851  (R² = 0.99, 52 data samples).    (29.9)

The clustering nature of the discharge can be inherited by the DSi load because of the affine nature of (29.9) and the particular forms of (29.1) and (29.8). It will be


interesting to analyze whether the same holds true in other rivers and/or for other environmental indicators. Note that fish migration has been known to synchronize with flood events [9]; the threshold y can then be tuned by biological reasoning.

29.4 Conclusion

We discussed a definition of clusters applicable to affine process models and applied it to an existing streamflow time series data set. The impact of the threshold value, a parameter that must be specified a priori, was analyzed as well. The definition [8] discussed in this paper is applicable to multi-dimensional affine processes and to polynomial processes [10] with minor modifications. A computational method that accurately handles expectation functionals based on stochastic differential equations would be helpful for efficiently analyzing the more advanced cases.

Acknowledgements The following research grants supported this research: Kurita Water and Environment Foundation 19B018, 20K004, 21K018 and JSPS KAKENHI 19H03073.

References

1. Mitseas, I.P., Beer, M.: First-excursion stochastic incremental dynamics methodology for hysteretic structural systems subject to seismic excitation. Comput. Struct. 242, 106359 (2021)
2. Sun, W., Li, S., Wang, J., Fu, G.: Effects of grazing on plant species and phylogenetic diversity in alpine grasslands, Northern Tibet. Ecol. Eng. 170, 106331 (2021)
3. Tsai, C.W., Hung, S.Y., Wu, T.H.: Stochastic sediment transport: anomalous diffusions and random movement. Stoch. Env. Res. Risk Assess. 34(2), 397-413 (2020)
4. Mu, X., Jiang, D., Hayat, T., Alsaedi, A., Ahmad, B.: Stationary distribution and periodic solution of a stochastic Nicholson's blowflies model with distributed delay. In: Mathematical Methods in the Applied Sciences, in press (2021)
5. Lapides, D.A., Leclerc, C.D., Moidu, H., Dralle, D.N., Hahm, W.J.: Variability of stream extents controlled by flow regime and network hydraulic scaling. Hydrol. Processes 35(3), e14079 (2021)
6. Ma, Y., Pan, D., Wang, T.: Exchange options under clustered jump dynamics. Quant. Fin. 20(6), 949-967 (2020)
7. Fadina, T., Neufeld, A., Schmidt, T.: Affine processes under parameter uncertainty. Prob. Uncertainty Quant. Risk 4(1), 1-35 (2019)
8. Jiao, Y., Ma, C., Scotti, S., Zhou, C.: The Alpha-Heston stochastic volatility model. Math. Financ. 31(3), 943-978 (2021)
9. Bjerck, H.B., Urke, H.A., Haugen, T.O., Alfredsen, J.A., Ulvund, J.B., Kristensen, T.: Synchrony and multimodality in the timing of Atlantic salmon smolt migration in two Norwegian fjords. Sci. Rep. 11(1), 1-14 (2021)
10. Filipović, D., Larsson, M.: Polynomial jump-diffusion models. Stochastic Syst. 10(1), 71-97 (2020)


11. Yoshioka, H., Tsujimura, M.: Hamilton-Jacobi-Bellman-Isaacs equation for rational inattention in the long-run management of river environments under uncertainty. arXiv preprint arXiv:2107.12526 (2021)
12. Yoshioka, H., Yoshioka, Y.: Designing cost-efficient inspection schemes for stochastic streamflow environment using an effective Hamiltonian approach. In: Optimization and Engineering, in press (2021)

Chapter 30

Assessing and Predicting Urban Growth Patterns Using ANN-MLP and CA Model in Jammu Urban Agglomeration, India

Vishal Chettry and Keerti Manisha

Abstract Globally, rapid and haphazard urban growth has induced land use land cover (LULC) transformations in cities and their surroundings. The cities located in the Himalayan foothills have experienced tremendous urban growth in recent years. In this context, urban growth modeling integrated with remote sensing and geoinformatics helps to predict future urban growth patterns. Therefore, the urban growth pattern of Jammu Urban Agglomeration (UA) from 1991 to 2021 is assessed in this paper. Shannon's entropy index assesses the trend of built-up expansion in Jammu UA. Further, the upcoming urban growth for the year 2031 is predicted by integrating an artificial neural network-multi-layer perceptron (ANN-MLP) with a cellular automata (CA) model. The results revealed a substantial rise in built-up land cover while fallow land, vegetation, agriculture, and water body land cover decreased during 1991-2021. The occurrence of dispersed growth in Jammu UA was indicated by the entropy index. The predicted urban growth pattern for the year 2031 showcased a further escalation in built-up land cover, while the other land cover categories continued their declining trend. Overall, such an urban growth pattern is unsustainable for Jammu UA, and urban containment measures are urgently required.



30.1 Introduction

Rapid urbanization and the expansion of cities are among the most important global trends observed in this century. In 1900, only 15% of the entire global population was categorized as urban; this gradually reached 30% in 1950 and exceeded 50% by 2000 [1]. Census calculations of urban growth post-independence (1947 AD onwards) revealed an increase in India's urban population from 17.29% in 1951 to 31.15% in 2011 [2]. Overall, urbanization has promoted tremendous economic and social growth, such as a reduction in poverty, increased literacy rates, and higher employment opportunities in many countries. However, rapid and haphazard urbanization has been promoting urban sprawl and its associated issues.

RS and GIS are combined with multiple methods to categorize and analyze the changes in LULC spatially, both within and outside/around the administrative boundary [3]. The error matrix is prepared to assess the accuracy level of the obtained LULC maps. The LULC transition matrix is used to analyze land cover transformations by assessing well-processed multiple spatio-temporal remote sensing data [4]. Spatial zone division approaches are employed to assess urban growth patterns of cities based on administrative boundaries, circular buffer zones around the city centroid, pie sections based on the cardinal directions, and amalgamations of pie sections and circular buffers. Landscape metrics are also extensively employed to measure the structures and patterns of urban landscape patches. The intensity of dispersion or concentration of built-up units within any geographical area is computed through Shannon's entropy index. The urban expansion index categorizes urban growth patterns into three typologies, i.e., infill development, edge-expansion, and scattered growth [5]. The adjacent neighborhood relationships concept defines the variations in urban growth and classifies it into four typologies, i.e., secondary urban core, urban fringe, linear growth, and leapfrog growth [6].

Knowledge about existing urban growth alone is not sufficient to promote planned urbanization [7]. Thus, prediction of spatial urban growth, coupled with the analysis of current and past trends, is crucial to scientifically evaluate the impact of future urban growth. Traditionally, urban growth in developing countries is forecasted through statistical projection of the future urban population and their requirements and estimation of future land demand [8]. However, with technological advancements, RS and GIS are integrated with various machine learning models to predict urban growth. These include, but are not limited to, models based on cellular automata (CA), such as multi-CA, fuzzy CA, constrained CA, ANN CA, and patch-based CA [9]; agent-based models [10]; system dynamics [11]; slope, land use, exclusion, urban extent, transportation, and hill shade (SLEUTH) [12]; land change modeler (LCM); and weight of evidence (WOE) [13]. These models are suitable for investigating complex urban growth processes.

Research focusing on urban growth prediction using advanced machine learning approaches has been attempted for a few Indian cities. A FUZZY AHP-CA model was utilized to predict future urban growth within five rapidly growing Indian cities


[10]. CA-Markov-based urban growth modeling was compared with agent-based modeling (ABM) for Bhopal city [14]. Multi-criteria cellular automata Markov chain (MCCA-MC), multi-layer perceptron Markov chain (MLP-MC), and SLEUTH models were explored to model the urban growth of Udaipur city [15]. The land change modeler available in the TerrSet software was utilized to forecast future built-up growth in Chennai city [16]. Netlogo-based multi-agent modeling was used to simulate spatio-temporal urban growth patterns of Chandigarh city [17]. The multi-layer perceptron neural network (MLPNN) model included in IDRISI Selva was used to estimate future land demand for urban growth in the Guwahati Municipal Development Authority.

Over the last few decades, the fragile mountainous ecosystem of the Himalayan region has been adversely affected by rapid urban growth combined with intensive anthropogenic activities [18]. However, the investigation of built-up growth patterns in the Himalayan region has received limited attention [19]. Moreover, previous research has only quantified the spatio-temporal growth trend in the urban areas [20, 21]. Therefore, this research attempts to assess and predict the urban growth patterns of one of the fastest growing cities in the Himalayan region using ANN-MLP and CA models. Jammu Urban Agglomeration, India, is selected as a case study to map and analyze the existing urban growth pattern and exhibit the future trends, to better plan for sustainable urbanization.

30.2 Study Area and Datasets

Jammu city is the winter capital of the Jammu and Kashmir union territory of India. Jammu Urban Agglomeration (UA) lies within 32° 48' to 32° 39' N latitudes and 72° 47' to 72° 57' E longitudes, enclosing an area of 198.28 km² (Fig. 30.1). It is located near the bank of the Tawi river and is surrounded by the Himalayas toward the north and the northern plains in the south. Jammu UA experiences a humid sub-tropical climate wherein, during extreme summer, the temperature rises to 46 °C and, during winter, drops to 4 °C. The elevation of Jammu UA varies between 261 and 684 m above mean sea level. The 2011 census estimates the total population of Jammu UA to be 0.65 million, with a decadal population rise of 7.38% during 2001 to 2011. The Jammu city economy is driven by tourism and small industrial estates. The Landsat datasets gathered from USGS to conduct this study are mentioned in Table 30.1.

30.3 Methods

The methodology adopted in this study is exhibited in Fig. 30.2. The downloaded RS images of 1991, 2001, 2011, and 2021 were georeferenced, and the study area was clipped using the Jammu UA boundary layer file (obtained through the census of India) in ArcGIS 10.3. The pre-processing of the RS images included enhancement of image


Fig. 30.1 Jammu UA location map

Table 30.1 Landsat satellite image details for Jammu Urban Agglomeration

Landsat sensor | Scene_ID | Path/row | No. of bands | Acquisition date | Grid cell size (m)
5 TM | LT51490371991030ISP00 | 149/37 | 07 | 1991-01-30 | 30
7 ETM | LE71480372001026SGS00 | 148/37 | 09 | 2001-01-26 | 30
5 TM | LT51490372011021KHC00 | 149/37 | 07 | 2011-01-21 | 30
8 OLI TIRS | LC81480382021025LGN00 | 148/38 | 11 | 2021-01-25 | 30

texture, checking of Gaussian distribution, and false color composite (FCC) preparation. Thereafter, LULC maps for the years 1991, 2001, 2011, and 2021 were prepared using the maximum likelihood classification (MLC) tool. Based on Bayes' theorem, MLC generates LULC maps by estimating the likelihood that each pixel represents a particular land cover category. Overall accuracy and the kappa coefficient (Eqs. 30.1 and 30.2) were determined to check the accuracy of the obtained LULC maps:

OA = (RS / TS) × 100    (30.1)

Fig. 30.2 Research methodology

k_i = (P(o) − P(e)) / (1 − P(e))    (30.2)

where RS is the number of accurately classified reference samples for a LULC category, TS is the total number of samples, P(o) represents the observed magnitude of agreement, and P(e) represents the magnitude of agreement expected by chance.
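As an illustration of Eqs. 30.1-30.2, both measures follow directly from a confusion matrix; the 3-class matrix below is a made-up example, not the study's validation data.

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy (Eq. 30.1) and kappa coefficient (Eq. 30.2) from a
    confusion matrix cm[i, j]: reference class i classified as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    p_o = np.trace(cm) / total                                # observed agreement
    p_e = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total**2  # chance agreement
    return p_o * 100.0, (p_o - p_e) / (1.0 - p_e)

cm = [[48, 1, 1],   # e.g. built-up, vegetation, water (illustrative counts)
      [2, 45, 3],
      [1, 2, 47]]
oa, kappa = accuracy_metrics(cm)
print(f"OA = {oa:.1f}%, kappa = {kappa:.2f}")
```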

n 

Pi log e(Pi)

(30.3)

i=1

Pi = X i /

n 

Xi

(30.4)

i=1

wherein P_i represents the proportion of the variable (built-up area X_i) occurring in zone i, determined as shown in Eq. 30.4. In this case, the study area is divided into zones based on the 8 cardinal directions, hence n = 8.
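A minimal sketch of Eqs. 30.3-30.4, assuming the built-up area has already been measured in each of the eight directional zones (the zone values below are illustrative, not the study's measurements):

```python
import numpy as np

def shannon_entropy(builtup_by_zone):
    """Dispersion of built-up area across zones (Eqs. 30.3-30.4)."""
    x = np.asarray(builtup_by_zone, dtype=float)
    p = x / x.sum()              # P_i = X_i / sum(X_i)
    p = p[p > 0]                 # zones with zero built-up contribute nothing
    return float(-np.sum(p * np.log(p)))

zones = [12.1, 9.8, 14.5, 13.0, 15.2, 11.7, 16.4, 15.7]   # km^2, illustrative
h = shannon_entropy(zones)
print(h, np.log(8))              # compare against the loge(n) ceiling of 2.079
```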


Fig. 30.3 Drivers of urban growth utilized for future prediction of urban growth in Jammu UA

The urban growth was predicted using a plug-in of the QGIS software, i.e., MOLUSCE (modules for land use change evaluation). In this study, urban growth was first simulated for the year 2021 based on the input data, and only after obtaining a satisfactory accuracy level was the urban growth for 2031 forecasted using the MOLUSCE model. The input data contain raster images of the LULC maps of the former (2001 MLC map) and later (2011 MLC map) periods and ten spatial variables as driving factors: slope, hill shade, DEM, and distance from the urban core, suburban centers, waterbodies, railways, major roads, the airport, and future urban centers (Fig. 30.3).

30.4 Result and Discussion 30.4.1 LULC Change Detection The urban growth pattern of Jammu Urban Agglomeration (UA) is exhibited through LULC maps from 1991 to 2021 (Fig. 30.5). The values of the accuracy assessment are exhibited in Table 30.2. The overall accuracy (OA) and kappa coefficients (k_i) were found to be above 85% and were satisfactory for further analysis [24, 25]. The vegetation land


Table 30.2 Accuracy assessment of the LULC maps

| S. no. | Indicator | Landsat 5 TM (%) 1991 | Landsat 7 ETM (%) 2001 | Landsat 5 TM (%) 2011 | Landsat 8 OLI TIRS (%) 2021 |
|---|---|---|---|---|---|
| 1 | Overall accuracy | 90.60 | 92.40 | 91 | 93 |
| 2 | Kappa coefficient | 90.73 | 89 | 89 | 91.49 |

cover experienced a decrease from 38.57 to 31.54 km2 during 1991–2021 (Fig. 30.5). A significant growth was observed in built-up land cover, from 29.10 km2 in 1991 to 108.37 km2 in 2021. The water body land cover reduced from 5.84 km2 in 1991 to 3.76 km2 in 2021. Agricultural land cover reduced from 63.56 km2 in 1991 to 31.30 km2 in 2021. Similarly, open land cover declined from 61.21 km2 in 1991 to 23.31 km2 in 2021. Overall, the pattern of land cover change exhibited a substantial growth in built-up land cover, primarily through the decline of agriculture, open land, and vegetation land cover within Jammu UA (Fig. 30.4). The built-up expansion of the city is strongly influenced by the rapidly growing population of Jammu city. In order to meet the city's expanding demand,

Fig. 30.4 LULC map of Jammu UA during the study period


Fig. 30.5 LULC distribution in Jammu UA during 1991, 2001, 2011, and 2021 (area in km2 by year for vegetation, built-up area, waterbody, agriculture, and open land)

Table 30.3 Shannon's entropy index values in Jammu UA

| Year | 1991 | 2001 | 2011 | 2021 |
|---|---|---|---|---|
| Entropy value | 1.96 | 1.90 | 1.92 | 1.95 |

land has been continuously transformed from agriculture to built-up and other land cover types [23].

30.4.2 Detect Urban Growth Pattern Shannon's entropy index determines the urban growth pattern in Jammu UA. An entropy value close to zero indicates compact growth; values approaching the upper limit log_e(n), i.e., 2.079, indicate a dispersed distribution. The values obtained through Shannon's entropy index (1991–2021), as shown in Table 30.3, exhibit the extent of dispersion. The entropy values remained consistently high throughout the study period, from 1.96 in 1991 to 1.95 in 2021. Thus, it is evident that a dispersed growth pattern is occurring in Jammu UA.

30.4.3 Urban Growth Prediction MOLUSCE plug-in of QGIS 2.8.4 software is programmed jointly by Asia Air Survey Co. Ltd. and NEXTGIS. It was employed to forecast the LULC of Jammu UA in 2031. In this study, artificial neural network-multi-layer perceptron (ANN-MLP) and cellular automata (CA) model were integrated in MOLUSCE. The LULC maps of


Fig. 30.6 Predicted 2031 LULC in Jammu UA

2001 and 2011 along with the ten spatial growth drivers exhibited in Fig. 30.3 were utilized to generate the simulated LULC of 2021. The spatial growth drivers were scaled between −1 and +1 to maintain consistency among the datasets. MOLUSCE exhibited a delta overall accuracy of −0.00315 (the difference between the minimum reached error and the current error) and a minimum validation overall error of 0.01829 (the minimum error reached after validating the sample set). The kappa value represented by the current validation kappa was 0.87521. Subsequently, the accuracy of the 2021 LULC map acquired through simulation was assessed by comparing it with the LULC map generated through the MLC process. The accuracy assessment showed that the overall accuracy was 93% and the kappa coefficient was 91.49%. Hence, the model was suitable to predict the future LULC of Jammu UA for 2031 (Fig. 30.6). The proportion of built-up land cover in the predicted LULC of 2031 revealed a further rise from 108.37 km2 in 2021 to 131.11 km2 in 2031. There was a further decline in the other land covers: vegetation (from 31.54 km2 in 2021 to 26.49 km2 in 2031), water body (from 3.76 km2 to 3.01 km2), agriculture (from 31.30 km2 to 20.73 km2), and open land (from 23.31 km2 to 16.94 km2). The rise in built-up growth was observed primarily toward the south and north directions. The existing industrial estates, i.e., Bari Brahamana in the south, and the real estate growth in the northern direction attracted built-up growth in 2031. The prevalence of a better quality of life in the union territory capital city can also be a factor in the rapid built-up growth.
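As a minimal illustration of the rescaling step mentioned above, a min–max style mapping of each driver layer to the [−1, +1] range can be written as follows; the array values here are placeholders, not the actual driver rasters used in this study.

```python
import numpy as np

def scale_to_unit_range(raster):
    """Linearly rescale a driver layer to [-1, +1] (assumed min-max scheme)."""
    lo, hi = np.nanmin(raster), np.nanmax(raster)
    return 2 * (raster - lo) / (hi - lo) - 1

# Placeholder "distance from urban core" layer (values in metres)
distance = np.array([[120.0, 540.0], [980.0, 1500.0]])
print(scale_to_unit_range(distance))  # values now span -1 .. +1
```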


30.5 Conclusions This paper assessed the existing urban growth pattern in Jammu UA from 1991 to 2021. Jammu UA is located in the foothills of the Himalayas and has experienced significant LULC transformations during the study period. In this study, the Landsat satellite imageries from the USGS for the years 1991, 2001, 2011, and 2021 were downloaded and clipped to the administrative boundary of Jammu UA (obtained through the census of India) in ArcGIS 10.3. The maximum likelihood classification tool in ArcGIS 10.3 was utilized to generate the LULC maps for the study period. The accuracy of the generated LULC maps was verified by ensuring satisfactory overall accuracy and kappa coefficient values. The results indicated a significant growth in built-up units and a decline in agriculture, open land, water body, and vegetation land cover within Jammu UA. Shannon's entropy index exhibited the occurrence of dispersed urban growth. LULC transformation from agriculture, fallow land, and vegetation is one of the underlying reasons for the built-up growth. The MOLUSCE plug-in of the QGIS 2.8.4 software was utilized to forecast urban expansion in Jammu UA. The MOLUSCE model used MLC-based LULC maps of 2001 and 2011 along with ten growth drivers to obtain a simulated LULC map for 2021. The future LULC of 2031 was forecasted by integrating the ANN-MLP and CA models. The forecasted urban growth pattern revealed a continuous decrease in natural land cover and a rise in built-up units. Such an urban growth trend would harm the sensitive environment and ecology of cities located in the foothills of the Himalayan region. Strategies are required that could limit the horizontal expansion of Jammu UA. Despite the limits imposed by the medium-resolution remote sensing images utilized in this study, the approach remains effective because it eliminates the issues associated with multiple-user data interpretation and is comparatively faster than traditional methods. The future scope of this research includes the implications of such rapid expansion on the socioeconomic status of the people residing in Jammu UA. A similar methodology can be adopted for other cities located in such sensitive zones to examine the existing and predict the future urban growth pattern.

References 1. United Nations: World Urbanization Prospects: The 2014 Revision. New York (2015). https:// doi.org/10.18356/527e5125-en 2. Government of India: India HABITAT III National Report. New Delhi (2016) 3. Behera, M.D., Tripathi, P., Das, P., Srivastava, S.K., Roy, P.S., Joshi, C., Behera, P.R., Deka, J., Kumar, P., Khan, M.L., Tripathi, O.P., Dash, T., Krishnamurthy, Y.V.N.: Remote sensing based deforestation analysis in Mahanadi and Brahmaputra river basin in India since 1985. J. Environ. Manage. 206, 1192–1203 (2018). https://doi.org/10.1016/j.jenvman.2017.10.015 4. Nkeki, F.N.: Spatio-temporal analysis of land use transition and urban growth characterization in Benin metropolitan region, Nigeria. Remote Sens. Appl. Soc. Environ. 4, 119–137 (2016). https://doi.org/10.1016/j.rsase.2016.08.002


5. Chettry, V., Surawar, M.: Assessment of urban sprawl characteristics in Indian cities using remote sensing: case studies of Patna, Ranchi, and Srinagar. Environ. Dev. Sustain. 23, 11913– 11935 (2021). https://doi.org/10.1007/s10668-020-01149-3 6. Chettry, V., Surawar, M.: Urban sprawl assessment in Raipur and Bhubaneswar urban agglomerations from 1991 to 2018 using geoinformatics. Arab. J. Geosci. 13, 667 (2020). https://doi. org/10.1007/s12517-020-05693-0 7. Mansour, S., Al-Belushi, M., Al-Awadhi, T.: Monitoring land use and land cover changes in the mountainous cities of Oman using GIS and CA-Markov modelling techniques. Land Use Policy 91 (2020). https://doi.org/10.1016/j.landusepol.2019.104414 8. Yang, Y., Zhang, L., Ye, Y., Wang, Z.: Curbing sprawl with development-limiting boundaries in urban China: A review of literature. J. Plan. Lit. 35, 25–40 (2020). https://doi.org/10.1177/ 0885412219874145 9. Yang, J., Gong, J., Tang, W., Shen, Y., Liu, C., Gao, J.: Delineation of urban growth boundaries using a patch-based cellular automata model under multiple spatial and socio-economic scenarios. Sustainability 11, 6159 (2019). https://doi.org/10.3390/su11216159 10. Bharath, H.A., Chandan, M.C., Vinay, S., Ramachandra, T.V: Modelling urban dynamics in rapidly urbanising Indian cities. Egypt. J. Remote Sens. Sp. Sci. 1–10 (2017). https://doi.org/ 10.1016/j.ejrs.2017.08.002 11. Wang, W., Zhang, X., Wu, Y., Zhou, L., Skitmore, M.: Development priority zoning in China and its impact on urban growth management strategy. Cities 62, 1–9 (2017). https://doi.org/10. 1016/j.cities.2016.11.009 12. Zhuang, Z., Li, K., Liu, J., Cheng, Q., Gao, Y., Shan, J., Cai, L., Huang, Q., Chen, Y., Chen, D.: China’s new urban space regulation policies: a study of urban development boundary delineations. Sustainability 9, 45 (2017). https://doi.org/10.3390/su9010045 13. Zheng, X.Q., Lv, L.N.: A WOE method for urban growth boundary delineation and its applications to land use planning. Int. J. Geogr. Inf. Sci. 30, 691–707 (2016). https://doi.org/10.1080/ 13658816.2015.1091461 14. Bharath, A.H., Vinay, S., Ramachandra, T.V: Agent based modelling urban dynamics of Bhopal, India. J. Settlements Spat. Plan. 7, 1–14 (2016). https://doi.org/10.19188/01JSSP012016 15. Mondal, B., Chakraborti, S., Das, D.N., Joshi, P.K., Maity, S., Pramanik, M.K., Chatterjee, S.: Comparison of spatial modelling approaches to simulate urban growth: a case study on Udaipur city, India. Geocarto Int. 35, 411–433 (2020). https://doi.org/10.1080/10106049.2018.1520922 16. Padmanaban, R., Bhowmik, A.K., Cabral, P., Zamyatin, A., Almegdadi, O., Wang, S.: Modelling urban sprawl using remotely sensed data: a case study of Chennai city, Tamilnadu. Entropy 19, 1–14 (2017). https://doi.org/10.3390/e19040163 17. Pandey, B., Joshi, P.K.: Numerical modelling spatial patterns of urban growth in Chandigarh and surrounding region (India) using multi-agent systems. Model. Earth Syst. Environ. 1, 1–14 (2015). https://doi.org/10.1007/s40808-015-0005-6 18. Dame, J., Schmidt, S., Müller, J., Nüsser, M.: Urbanisation and socio-ecological challenges in high mountain towns : insights from Leh (Ladakh), India. Landsc. Urban Plan. 189, 189–199 (2019). https://doi.org/10.1016/j.landurbplan.2019.04.017 19. Nengroo, Z.A., Bhat, M.S., Kuchay, N.A.: Measuring urban sprawl of Srinagar city, Jammu and Kashmir. India. J. Urban Manag. 6, 45–55 (2017). https://doi.org/10.1016/j.jum.2017.08.001 20. 
Kumar, A.D.: Analysing urban sprawl and land consumption patterns in major capital cities in the Himalayan region using geoinformatics. Appl. Geogr. 89, 112–123 (2017). https://doi.org/ 10.1016/j.apgeog.2017.10.010 21. Singh, L., Singh, H.: Managing natural resources and environmental challenges in the face of urban sprawl in Indian Himalayan City of Jammu. J. Indian Soc. Remote Sens. (2020). https:// doi.org/10.1007/s12524-020-01133-4

Chapter 31

Some Investigations on CdSe/ZnSe Quantum Dot for Solar Cell Applications K. R. Kavitha, S. Vijayalakshmi, B. MuraliBabu, and V. DivyaPriya

Abstract For the next-generation solar cell technology, the quantum dot-sensitized solar cell is a promising candidate. This paper therefore concentrates on the synthesis of core–shell nanoparticles for solar cell applications and analyzes their properties using different measurement techniques. After hydrothermal treatment, CdSe/ZnSe (core–shell) nanoparticles are stabilized in aqueous solutions containing L-cysteine via a chemical precipitation technique. The CdSe core is first produced using precursors with L-cysteine as the capping ligand. The layer-by-layer approach is then used to grow the ZnSe shell onto the CdSe core. The properties of the samples are analyzed by X-ray diffractometer (XRD) and scanning electron microscope (SEM). The absorption and emission properties are analyzed using a UV–vis spectrophotometer and a spectrofluorimeter, respectively. To analyze the composite dots and determine their chemical arrangement, average size, size distribution, shape, and internal structure, TEM, FT-IR, particle size analyzer, and energy-dispersive X-ray spectrometry are used.

31.1 Introduction Nanotechnology employs wide band gap semiconductor materials, which are used in a large number of opto-electronic devices. Semiconductor quantum dots (QDs) are small fluorescent nanocrystals that range in size from 1 to 10 nm. QDs are made up of a semiconductor central core stabilized by an inorganic salt shell [1]. Different types of heterostructures can be created by coalescing different materials in a single NC, such as core/shell NCs or multicomponent heteronanorods and tetrapods [2]. Because of their wide applications in optical devices, the II–VI Group semiconductor nanocrystals, particularly CdSe, CdTe, and CdS, have received much attention [3].

K. R. Kavitha (B) · S. Vijayalakshmi · V. DivyaPriya Sona College of Technology, Salem, Tamilnadu, India e-mail: [email protected] B. MuraliBabu Paavai Engineering College, Namakkal, Tamilnadu, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_31


Among the various compositions of nanocrystals, CdSe nanocrystals are the most interesting material because of their unique properties, which are exploited in a wide range of applications [4]. Cadmium selenide (CdSe) is one of the most popular Group II–VI semiconductors due to its band gap energy (712 nm or 1.74 eV). Thus, CdSe nanoparticles can be tuned to have band gaps throughout the visible range, corresponding to the major part of the energy that arises from the solar spectrum. This property of CdSe, along with its fluorescing properties, is used in an assortment of applications such as solar cells and light emitting diodes [5, 6]. Due to its applications in light emitting diodes, photo-detectors, and full-color displays, ZnSe is an important II–VI n-type direct band gap semiconductor and has received a lot of attention [7]. The wide band gap (bulk band gap 2.7 eV) and large binding energy (21 meV) of ZnSe [8] make it an ideal choice as an inorganic passivation shell for a variety of semiconductor core/shell nanocrystals, with the goal of improving the stability and emission properties of semiconductor core nanocrystals with comparatively narrow band gaps and for efficient room temperature operation [9]. X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), UV–vis spectroscopy, FT-IR, PSA, SEM, and X-ray spectrometry were used to characterize the synthesized CdSe/ZnSe quantum dots. The J–V and IPCE analyses were carried out on a sandwich-type dye-sensitized solar cell (DSSC). The CuCo2 S4 /graphene electrode exhibits a high PCE (11.85%) and long-term stability, according to the findings. The heterostructure, mesoporous nature, and wide surface area of the composite contribute to its greater PCE, which allows for better light harvesting. Due to its strong photovoltaic and electrocatalytic activities, the CuCo2 S4 /graphene heterostructure can be advantageous for energy conversion and storage device applications [10]. Zhang et al. [11] developed water-soluble, highly luminescent NIR-emitting QDs by constructing a CdTe/CdSe/ZnS core/shell/shell nanostructure. The water solubilization of the initially oil-soluble CdTe/CdSe/ZnS QDs was achieved through ligand replacement by 3-mercaptopropionic acid. The as-prepared water-soluble CdTe/CdSe/ZnS QDs possess PL quantum yields as high as 84% in aqueous media, which is one of the best results for luminescent semiconductor nanocrystals [11]. Murali Babu et al. [12] investigated the photovoltaic characteristics and electrochemical impedance spectra of DSSCs in terms of open circuit voltage, short circuit current, fill factor, solar cell efficiency and electron lifetime [12].

31.2 Experimental Details Cadmium oxide, selenium powder, sodium borohydride (NaBH4, 98%), L-cysteine (C3H7NO2S), and sodium hydroxide pellets (NaOH) are used as received without any further purification. All the reactions are performed in standard glassware using double-distilled water under a nitrogen gas atmosphere.


31.3 Synthesis of Quantum Dots The chemical precipitation process followed by hydrothermal treatment is used to make CdSe/ZnSe quantum dots (with 0.005 M as CZ1, 0.03 M as CZ2, and 0.01 M as CZ3). For the synthesis, CdO and L-cysteine are dissolved in a certain amount of distilled water to prepare the Cd precursor. Similarly, the Zn precursor is obtained by dissolving certain mmols of zinc chloride and L-cysteine in distilled water. Both the Cd and Zn precursor solutions are titrated to pH 9 by the addition of 1.0 M NaOH. In two three-necked flasks, stock solutions of selenium precursor are prepared by the reduction reaction of NaBH4 and selenium powder separately, under constant stirring until a clear colorless solution is obtained, in the presence of a nitrogen gas atmosphere. After continuous stirring for about 30 min, the Cd precursor solution is slowly added into the Se precursor in a three-necked flask for the formation of CdSe core nanocrystals. This solution is stirred for about 30 min, followed by the addition of the Zn and Se precursor solutions alternately. All of these reactions take place at room temperature and in the presence of nitrogen gas. Initially, the synthesized solution is yellow in color; after stirring for about 30 min, the solution is transferred into a Teflon-coated autoclave and heated at 160 °C for 1 h. The autoclave is brought back to room temperature, and the obtained solutions are found to be dark reddish in color. The precipitates are collected by centrifugation with distilled water and ethanol and then vacuum-dried to obtain CdSe/ZnSe nanopowders.

31.4 Results and Discussions 31.4.1 UV Spectra The optical study is performed on a UV–vis spectrophotometer and spectrofluorimeter. Figure 31.1 shows the spectra of the synthesized CdSe/ZnSe quantum dots. The absorption spectra of CZ1, CZ2, and CZ3 exhibit a broad absorbance peak between 378 and 410 nm, with an absorption edge at 378 nm for CZ1, 390 nm for CZ2, and 410 nm for CZ3. The particles show a blue shift as a result of the quantum confinement effect, which becomes stronger as the mole ratio decreases [13, 14].
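The blue shift can be made quantitative with the effective-mass (Brus) approximation, which relates the absorption-edge energy to the dot radius. The sketch below is a rough illustration only; the CdSe effective masses and dielectric constant are typical literature values, not parameters reported in this work.

```python
import numpy as np

# Physical constants (SI)
h_bar = 1.054571817e-34   # J s
m0 = 9.1093837015e-31     # electron rest mass, kg
e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m

# Assumed bulk CdSe parameters (typical literature values)
Eg = 1.74 * e                     # bulk band gap, J
me, mh = 0.13 * m0, 0.45 * m0     # effective masses
eps_r = 10.6                      # relative permittivity

def brus_gap(radius_m):
    """E(R) = Eg + hbar^2 pi^2/(2R^2) (1/me + 1/mh) - 1.8 e^2/(4 pi eps eps0 R)."""
    confinement = (h_bar * np.pi) ** 2 / (2 * radius_m ** 2) * (1 / me + 1 / mh)
    coulomb = 1.8 * e ** 2 / (4 * np.pi * eps_r * eps0 * radius_m)
    return Eg + confinement - coulomb

for edge_nm in (378, 390, 410):    # absorption edges reported for CZ1-CZ3
    E = 1239.84 / edge_nm          # photon energy in eV
    radii = np.linspace(1e-9, 10e-9, 5000)
    R = radii[np.argmin(np.abs(brus_gap(radii) / e - E))]  # numeric inversion
    print(f"edge {edge_nm} nm -> E = {E:.2f} eV, estimated radius ~ {R*1e9:.2f} nm")
```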


Fig. 31.1 UV absorption spectra

31.4.2 Surface Morphological Analysis
31.4.2.1 Optical Microscopy Analysis

The surface of the produced nanomaterials was investigated using an optical microscope with a Universal Infinity Optical System and a 1.3 MP Moticam CMOS digital camera. The produced CdSe/ZnSe nanomaterials are stored on a glass slide for optical microscopy investigation and are analyzed using an Olympus (BX41M-N22MB) microscope. Figure 31.2 depicts a CdSe/ZnSe nanomaterials image captured with a

Fig. 31.2 CdSe/ZnSe nanomaterial


Fig. 31.3 SEM image of CdSe/ZnSe nanomaterial

bright-field optical microscope, with black dots indicating the presence of the nanomaterials. Quantum dots in a variety of shapes (pyramid, rod, dot, square) have been observed in a variety of semiconductor materials.

31.4.2.2 SEM Analysis

Figure 31.3 clearly demonstrates that the CdSe/ZnSe products have an agglomerated structure. It can also be seen that there is an irregular size distribution in all products. The SEM observations show that the samples prepared with surfactants are fully aggregated. The size of the prepared CdSe/ZnSe particles is around 100 nm [15].

31.4.2.3 HRTEM Analysis

High-resolution transmission electron microscopy (HRTEM) images were obtained using a JEOL JEM-2100F. Figures 31.4a–c show the HRTEM micrographs of the CZ3 CdSe/ZnSe quantum dots. The size and morphology of the synthesized CZ3 CdSe/ZnSe quantum dots are observed from the HRTEM analysis. The HRTEM micrographs revealed the largely amorphous nature of the synthesized samples; the well-defined images show a clear core–shell structure, and spherical-shaped particles are clearly visible. The diameter of the quantum dots is estimated to be around 80 nm for CZ3. Figure 31.4d shows the SAED image of the CdSe/ZnSe quantum dots, which shows clear lattice fringes [16, 17].


Fig. 31.4 TEM image and SAED pattern

31.4.3 Elemental Analysis Energy-dispersive X-ray spectrometry (EDX) is used for the elemental analysis of the nanomaterials. A JEOL JSM-7001F is used to acquire the EDX data. The EDX analysis of the CdSe/ZnSe nanomaterials is shown in Fig. 31.5, which shows a typical 20 kV X-ray spectrum as well as peak assignments for the elements Zn, Se, Cd, S, and O [18] (Fig. 31.6).

Fig. 31.5 EDX spectra of CdSe/ZnSe nanomaterials

Fig. 31.6 Weight % of CdSe/ZnSe nanomaterials (quantitative results for C, O, Na, Si, S, Zn, Se, Cd)

Table 31.1 Different parameters of CdSe/ZnSe nanomaterials

| Element | App Conc | Intensity (Corrn) | Weight (%) | Weight% (Sigma) | Atomic (%) |
|---|---|---|---|---|---|
| C K | 1.37 | 0.3381 | 10.02 | 1.34 | 16.90 |
| O K | 19.14 | 0.9761 | 48.51 | 0.94 | 61.45 |
| Na K | 5.19 | 0.8532 | 15.02 | 0.45 | 13.24 |
| Si K | 0.11 | 0.6307 | 0.43 | 0.10 | 0.31 |
| S K | 1.11 | 0.8064 | 3.40 | 0.17 | 2.15 |
| Zn K | 1.21 | 0.8232 | 3.64 | 0.37 | 1.13 |
| Se L | 3.90 | 0.5283 | 18.24 | 0.52 | 4.68 |
| Cd L | 0.23 | 0.7630 | 0.75 | 0.19 | 0.13 |
| Total | | | 100.00 | | |

31.4.4 FT-IR Characterization Figure 31.7 shows the FT-IR spectra of the prepared CZ1, CZ2, and CZ3 CdSe/ZnSe quantum dots. The bands at 3000 cm−1 for CZ1, 3200 cm−1 for CZ2, and 3250 cm−1 for CZ3 can be assigned to O–H stretching, which means that water molecules remain intact on the surface of the synthesized CdSe/ZnSe quantum dots. For CZ2, this O–H band shows a shift toward higher wave number with shortened peaks. The bands observed at 750 cm−1 for CZ1, 1100 cm−1 for CZ2, and 1000 cm−1 for CZ3 are attributed to C–O stretching vibrations. On the other hand, most of the absorption bands of the CZ3 quantum dots show a red shift compared to the CZ1 and CZ2 quantum dots [19, 20].

31.4.5 Particle Size Analyzer (PSA) Figure 31.8 shows the particle sizes of the prepared CdSe/ZnSe quantum dots; the sizes are estimated to be around 135.8 nm


Fig. 31.7 FT-IR spectra of CdSe/ZnSe nanoparticles

for CZ1, 52.1 nm for CZ2, and 374 nm for CZ3. The size of the particles decreases when the mole ratio is decreased; CZ3 has a smaller particle size compared to CZ1 and CZ2 [21] (Table 31.2).

31.4.6 XRD The crystallographic properties such as structure, phase purity, and size of the synthesized CdSe/ZnSe quantum dots are determined using X-ray diffraction study. Figure 31.9 shows the XRD pattern of the prepared CdSe/ZnSe core/shell quantum dots (CZ1, CZ2, and CZ3). The strong peaks at 24.45°, 38.15°, and 68.3°, which correspond to crystal planes (111), (220), and (311), respectively, have been identified as CdSe. The diffraction peaks of ZnSe are 30.2°, 44.2°, 53.2°, 55.7°, 64.2°, and 72.4°, respectively, corresponding to (101), (110), (200), (004), (104), and (210) crystal planes[20, 21].
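Although crystallite size is not reported from these patterns, the Debye–Scherrer relation D = Kλ/(β cos θ) is the usual way to estimate it from peak broadening; the FWHM value below is a made-up placeholder, not a measured quantity from this study.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)); beta in radians."""
    theta = np.radians(two_theta_deg / 2)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# CdSe (111) reflection at 2-theta = 24.45 deg; FWHM of 0.8 deg is hypothetical
print(f"D ~ {scherrer_size(24.45, 0.8):.1f} nm")
```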


Fig. 31.8 Particle size analysis of a CZ1 0.05 M ratio, b CZ2 0.01 M ratio, c CZ3 0.03 M ratio
Table 31.2 Particle size for different concentrations

| S. no. | CdSe/ZnSe concentration (M) | Particle size (nm) |
|---|---|---|
| 1 | 0.005 | 135.8 |
| 2 | 0.03 | 374 |
| 3 | 0.01 | 52.2 |

Table 31.4 2θ and hkl values of nanoparticles

| Sample | 2θ (deg) | hkl |
|---|---|---|
| CdSe | 24.45° | (111) |
| | 38.15° | (220) |
| | 68.3° | (311) |
| ZnSe | 30.2° | (101) |
| | 44.2° | (110) |
| | 53.2° | (200) |
| | 55.7° | (004) |
| | 64.2° | (104) |
| | 72.4° | (210) |


Fig. 31.9 XRD of CdSe/ZnSe nanomaterial

31.5 Conclusion CdSe/ZnSe QDs have been synthesized from aqueous solution with L-cysteine as the capping ligand. XRD results indicated that the formation of the core/shell structure is confirmed by the higher-angle shift of the broad diffraction peaks. HRTEM images revealed spherical-shaped particles with a monodisperse nature. The quantum confinement effect achieved by the synthesized QDs is substantiated by the absorbance properties. The elemental analysis of the nanomaterial was performed by energy-dispersive X-ray spectrometry (EDX). The absorption bands of the CZ3 quantum dots show a red shift compared to the CZ1 and CZ2 quantum dots, as confirmed by FT-IR, and from the particle size analyzer, CZ3 has a smaller particle size compared to CZ1 and CZ2.

References 1. van Embden, J., Jasieniak, J., Gómez, D.E., Mulvaney, P., Giersig, M.: Review of the synthetic chemistry involved in the production of core/shell semiconductor nanocrystals. Aust. J. Chem. 60, 457–471 (2007) 2. Ivanov, S.A., Piryatinski, A., Nanda, J., Tretiak, S., Zavadil, K.R., Wallace, W.O., Werder, D., Klimov, V.I.: Type-II core/shell CdS/ZnSe nanocrystals: synthesis, electronic structures and spectroscopic properties. J. Am. Chem. Soc. 129(38), 11709 (2007) 3. Rakgalakane, B.P., Moloto, M.J.: Aqueous synthesis and characterization of CdSe/ZnO core-shell nanoparticles. J. Nanomater. (2011) 4. Mi, W., Tian, J., Jia, J., Tian, W., Dai, J., Wang, X.: Characterization of nucleation and growth kinetics of the formation of water-soluble CdSe quantum dots by their optical properties. J.


Phys. D Appl. Phys. 45(43):435303 (2012) 5. Ganapathy Raman, S., Selvarajan, P., Ratnam, K., Chidambaradhanu: Studies of cadmium selenide (CdSe) nanoparticles synthesized by solvo-thermal route. J Electrochem Soc 160(4):D156–D162 (2013) 6. Vasiliu, I.C., Elisa, M., Niciu, H., Iordanescu, R., Feraru, I., Ghervase, L.: Synthesis and characterization of CdSe-doped Li2O-Al2O3-P2O5 glass. In: 12th IEEE international conference on nanotechnology (IEEE-NANO) (2012) 7. Senthilkumar, K., Kalaivani, T., Kanagesan, S., Balasubramanian, V.: Synthesis and characterization studies of ZnSe quantum dots. J. Mater. Sci.: Mater. Electron. 2, 2048–2052 (2012) 8. Hernández, R., Rosendo, E., Romano-Trujillo, R., Nieto, G.: Obtaining and characterization of ZnSe nanoparticles from aqueous colloidal dispersions. J. Superficies Vacío 27(1), 11–14 (2014) 9. Kumara, P., Singh, K.: Wurtzite ZnSe quantum dots: synthesis, characterization and PL properties. J. Optoelectron. Biomed. Mat. 1(1), 59–69 (2009) 10. Kavitha, K.R., Murali Babu, B., Vadivel, S.: Fabrication of hexagonal shaped CuCo2S4 nanodiscs/graphene composites as advanced electrodes for asymmetric supercapacitors and dye sensitized solar cells (DSSCs). J. Mat. Sci. (2021) 11. Zhang, W., Chen, G., Wang, J., Ye, B.-C., Zhong, X.: Design and synthesis of highly luminescent near-infrared-emitting water-soluble CdTe/CdSe/ZnS core/shell/shell quantum dots. Inorg. Chem. 48(20), 9723–9731 (2009) 12. Murali Babu, B., Shyamala, P., Saravanan, S., Kavitha, K.R.: Fabrication and performance estimation of dye sensitized solar cell based on CdSe/ZnO nanoparticles. J Mat Sci Springer 28(14), 10472–10480 (2017) 13. Jia, G.-Z., F.-N., Jun, W.: Synthesis of water dispersed CdSe/ZnSe type-II core-shell structure quantum dots. Chalcogenide Lett. 7(3), 181–185 (2010) 14. Arthur, E., Akpojivi, L.: Synthesis and characterization of CdS and CdSe quantum dots by UV-VIS spectroscopy. J. Emer. Trends Eng. Appl. Sci. (JETEAS) 4(2):273–280 (2013) 15. Acharya, A., Mishra, R., Roy, G.S.: Characterization of CdSe/polythiophene nanocomposite by TGA/DTA, XRD, UV-Vis spectroscopy, SEM-EDXA and FTIR. Armenian J. Phys. 3(3):195–202 (2010) 16. Jahed, N.M.S.: Spectrally resolved dynamics of synthesized CdSe/ZnS quantum dot/silica nanocrystals for photonic down-shifting applications. IEEE Trans. Nanotechnol. 13(4):825 (2014) 17. Dabbousi, B.O., Rodriguez-Viejo, J., Mikulec, F.V., Heine, J.R., Mattoussi, H., Ober, R., Jensen, K.F., Bawendi, M.G.: (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. J. Phys. Chem. B 101, 9463–9475 (1997) 18. Nemchinov, A., Kirsanova, M., Kasakarage, N.N.H., Zamkov, M.: Synthesis and characterization of type II ZnSe/CdS core/shell nanocrystals. J. Phys. Chem. C 112(25) (2008) 19. Dinga, Y., Sunb, H., Liua, D., Liua, F., Wanga, D., Jianga, Q.: Water-soluble, high-quality ZnSe@ZnS core/shell structure nanocrystals. J. Chinese Adv. Mat. Soc. 1(1), 56–64 (2013) 20. Sukanya, D., Mahesh, R., John Sundaram, S., Delavictoire, M.R.J., Sagayaraj, P.: Synthesis of water dispersible CdSe/ZnSe quantum dots in aqueous media for potential biomedical applications. Der Pharma Chemica 7(6), 271–281 (2015) 21. Ramanery, F.P., Mansur, A.A.P., Mansur, H.S.: Synthesis and characterization of water-dispersed CdSe/CdS core-shell quantum dots prepared via layer-by-layer method capped with carboxylic-functionalized poly(vinyl alcohol). Mat. Res. 17:133–140 (2014)

Chapter 32

Design and Finite Element Analysis of a Mechanical Gripper Mousam Bhagawati, Md. Asaduz Zaman, Rupam Deka, Krishnava Prasad Bora, Partha Protim Gogoi, Maharshi Das, and Nandini Arora
Abstract The present industrial sector is undergoing a revolution due to automation, and in industrial automation, robots are extensively used since they reduce workforce requirements and production time. A gripper serves as a robot's hand and is widely used for different tasks in various fields. In this work, an attempt has been made to design and analyze a mechanical gripper using finite element analysis. The design of the gripper was done with the help of SolidWorks software, and the structural analysis was carried out using the ANSYS-FEA package. The static structural analysis was performed on the mating gears, claws and the overall model of the gripper.

32.1 Introduction In the present scenario, robots are used to supplement humans by doing dull, dirty or dangerous work. With the increase in technology, the demand for quicker and easier methods to operate equipment has increased, to perform tasks like welding, assembly, pick and place of objects, packaging, labeling, product inspection and many more. The robot performs its tasks with the help of a peripheral device known as an end effector, which is attached to the robot's wrist. Most end effectors are mechanical or electromechanical and serve as grippers, process tools or sensors. They range from simple two-fingered grippers for pick and place tasks to complex sensor systems for robotic inspection. A mechanical gripper is used as an end effector for grasping objects with its mechanically operated fingers. In industries, two fingers are enough for holding purposes; more than three fingers may also be used based on the application. The gripper requires either a hydraulic, pneumatic or electric drive system to create input power, which is transmitted to the gripper jaws, claws or fingers to perform opening and closing actions. Also, a sufficient force must be provided to grip the object, which can be obtained by calculation or by a sensory feedback end effector.
M. Bhagawati (B) · Md. Asaduz Zaman · R. Deka · K. P. Bora · P. P. Gogoi · M. Das · N. Arora Dibrugarh University Institute of Engineering and Technology, Dibrugarh University, Dibrugarh, India e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_32


Reddy and Suresh [1] carried out a review on the importance of a universal gripper in industrial robot applications, which deals exclusively with the gripping of a variety of materials/parts in comparison with various other types of grippers. Fuster [2] attempted to design and build a new module for Fable which can grasp common household items; for the gripper to be considered good, at least nine out of ten objects should be able to be picked up without damage. Martinez [3] carried out a project wherein a robot's gripper was designed, manufactured and constructed with a 3D printer; the gripper is composed of two servo motors that allow the movement of the wrist and the movement of the grippers. Razali et al. [4] carried out a study on the design of a gripper tool for an industrial robot. The proposed green robot gripper, determined, selected and designed using CATIA software, has been shown to be able to handle a variety of shapes and sizes. Reddy and Eranki [5] carried out a work focusing on the design and structural analysis of a robotic arm; the work mainly deals with the shearing operation, where the sheet is picked manually and placed on the belt for shearing, which involves a risk factor. Tai et al. [6] discussed how the performance of grippers can be improved in the future in different applications through robust controllers, instrumentation as well as structure design, and also analyzed the implementation of human gripping capabilities on robotic grippers by employing vision sensing, visual feedback, artificial muscles and smart and soft materials. Raut et al. [7] carried out a detailed study of the design of a robot's gripper; the primary goal of that study was to accomplish an optimal design of a robot gripper for various industrial applications. This work primarily focuses on the design and analysis of a functional robotic gripper to achieve simple grasping tasks, i.e., the ability to pick and move an object of a specific load.
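The "sufficient force" requirement mentioned in the introduction can be estimated with a standard friction rule of thumb: the friction generated at the jaw contacts must support the object's weight with a safety margin. The sketch below is a generic illustration with hypothetical numbers, not a calculation from this chapter.

```python
def required_grip_force(mass_kg, mu, n_fingers, g=9.81, safety_factor=2.0):
    """Force each finger must apply so friction supports the object's weight."""
    return safety_factor * mass_kg * g / (mu * n_fingers)

# Hypothetical 0.5 kg rod, rubber-like pads (mu ~ 0.6), two jaws
print(f"{required_grip_force(0.5, 0.6, 2):.1f} N per jaw")
```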

32.2 Modeling of the Geometry The complete design of the gripper has been generated with the help of SolidWorks software. The complete design consists of different parts like the base plate, gear mate, long and short arms, gripper jaws or claws, joint pins and gear pins. A DC servo motor is a perfect fit to deliver the proportional motion among the parts due to its low power requirement and light weight. Firstly, all the components of the gripper are designed individually. All the components are then assembled together to develop the final design of the model with the help of revolute joints, cylindrical joints and fixed joints. The model is then analyzed using ANSYS to evaluate the Von Mises stress, equivalent elastic strain, total deformation and the safety factor of the gear assembly, gripper jaw and also the whole model (Fig. 32.1).


Fig. 32.1 Final assembled model

32.2.1 Design Methodology

3D Modelling using SolidWorks → Finite Element Analysis using ANSYS → Result

32.2.2 Configurations of the Gripper The base plate will be fixed to the robot’s wrists on which all the other parts are connected. The gears are mated together so that power input into one of the gears rotates the other gear. The power input will be delivered by a DC servo motor which will provide a torque of 0.56 N m. The short arms are joined with the gears, with the help of gear pins. The movement of the gears will facilitate the movement of the short arms and the long arms, which are again joined with the short arms. In this

Table 32.1 Configuration of the gripper

| Configuration | Value |
|---|---|
| Degree of freedom | 2 |
| Minimum size of the holdable object | 0.3 mm |
| Maximum size of the holdable object | 137.5 mm |

way, the rotation of the gears is successfully converted to the opening and closing of the gripper jaws (Table 32.1). Material selection and cost analysis to develop an economical model are a complex process. To compare and decide on a suitable material, many parameters need to be considered. In this project, the materials are designated to satisfy the load-bearing capacity without being too expensive. The working condition and load-carrying capacity are the major factors in the material selection process. A material with good yield strength and high tensile strength is necessary in order to obtain maximum load-bearing capacity with minimum weight. Aluminum has good mechanical properties as it exhibits good tensile strength and high yield strength, and it can be very easily extruded. Due to its low density, low cost, ease of fabrication and low weight-to-strength ratio, aluminum has been designated as the gear and short arm material. For the base plate, long arms and pins, structural steel has been selected. Having high strength, stiffness, toughness and ductility, structural steel is one of the most commonly used materials. It has a high strength-to-weight ratio, which means it has high strength per unit mass. Steel can be easily fabricated and mass-produced. For the jaws, polyethylene has been chosen because of its light weight, which aids the function of the motor; heavier jaws would require a higher-powered motor. In addition, polyethylene is a durable thermoplastic with a variable crystalline structure and is one of the most widely produced plastics in the world (Table 32.2).

Table 32.2 Material used

| Parts | Material assigned |
|---|---|
| Base plate | Structural steel |
| Long arm | Structural steel |
| Gear pins and joint pins | Structural steel |
| Short arm | Aluminum |
| Gears | Aluminum |
| Jaw | Polyethylene |


32.2.3 Finite Element Analysis FEA is the simulation of a physical phenomenon using a numerical mathematical technique referred to as the finite element method, or FEM. It is also one of the key principles used in the development of simulation software. Engineers can use FEM to reduce the number of physical prototypes and run virtual experiments to optimize their designs.
Necessity of Structural Analysis. Load-bearing capacity is always a measure of a good design. During a stress analysis, a variety of techniques are used to determine the stresses and strains within materials and structures that have been subjected to forces. In engineering, the main aim of stress analysis is to design structures and artifacts that can withstand a specified load with the least amount of material. The result of the analysis is a description of how the applied forces spread throughout the structure, resulting in stresses, strains and deflections of the entire structure and each component of that structure.
Static Structural Analysis of Gear Mate and Claw in ANSYS. The simulation software package ANSYS 2020 R2 has been applied to verify the design of the multi-objective robot gripper. The static structural analysis has been carried out for the gear mates and jaw in ANSYS under the effect of torque. The results of this analysis are presented in the form of maximum stresses induced, total deformation and the factor of safety (FOS) for the entire structure of the gears under the applied moment (torque) of 0.56 N m.
Boundary Conditions and Loads. For a good simulation, proper loads and boundary conditions must be applied in order to make the model as close to reality as possible. Fixed support is added to the holding plate. A moment of 0.56 N m is applied at the joint of the gear, short arm and base plate. Displacement is applied at the jaws to permit them to perform translational motion. Also, various joints are applied at the connecting points of the parts with each other. A revolute joint allows only rotational motion about a single axis, and a cylindrical joint allows only linear motion along an axis and rotation about that axis. Revolute joints are applied at the connecting points of the baseplate, gears, short arms and the long arms. Fixed support is applied at the base plate and the object (rod). Cylindrical joints are applied at the mating point of the short arm and jaw and also at the mating point of the jaws and the long arm (Figs. 32.2 and 32.3).
Grid-Independent Analysis. A grid-independent study is a process used to find the optimal grid condition that has the smallest number of grids without generating a difference in the numerical results, based on evaluation of various grid conditions. This study is performed to eliminate or reduce the influence of the number of grids on the computational results (Table 32.3). To verify the grid independence of the gripper model for the given element sizes, the gripper model was evaluated by simulating it in ANSYS. The optimal grid resolution is selected by comparing the analyzed results (Table 32.4).


Fig. 32.2 Loads applied on the model

Fig. 32.3 Joints applied on the model


Table 32.3 Grid sizes for grid-independent analysis

| Mesh | Element size (mm) | Number of elements |
|---|---|---|
| A | 3 | 15,510 |
| B | 2 | 36,150 |
| C | 1.5 | 51,382 |

Table 32.4 Grid-independent analysis results

| Mesh | Total deformation (mm) | Maximum Von Mises stress (MPa) | Maximum equivalent elastic strain (mm/mm) |
|---|---|---|---|
| A | 0.084524 | 3.016 | 0.0013176 |
| B | 0.088075 | 3.2233 | 0.0015782 |
| C | 0.088304 | 3.2527 | 0.0015278 |

From Table 32.4, it is observed that the values for Mesh B (36,150 elements) and Mesh C (51,382 elements) produce similar results with very little variation in all three aspects (i.e., total deformation, maximum equivalent stress, maximum equivalent elastic strain), whereas the values for Mesh A show a bigger variation in comparison with either Mesh B or Mesh C. Hence, further analysis has been done by taking the mesh size as 2 mm and the number of elements as 36,150.
Validation. FEA of an assembly of two mating gears similar to that used by Raut et al. [7] was done using ANSYS. Similar boundary conditions and loads were applied as done by Raut et al. [7]. The results obtained from the present analysis were compared with the results of Raut et al. [7] for validation purposes. The calculated values of equivalent stress, elastic strain and total deformation from the numerical simulations were compared with the results obtained by Raut et al. [7] (Table 32.5). Although minor differences exist between the two sets of results, an overall good agreement was observed. Table 32.5 Compared results

| Parameter | FEA simulation results | Results from Raut et al. [7] | % Error |
|---|---|---|---|
| Equivalent stress | 3.3397 MPa | 3.2971 MPa | 1.29 |
| Elastic strain | 4.7388e–05 | 4.6438e–05 | 2.04 |
| Total deformation | 0.0013686 | 0.0013408 | 2.07 |
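The percentage errors in Table 32.5 follow directly from the two result sets; a one-line check (values copied from the table) reproduces them:

```python
def pct_error(ours, ref):
    return abs(ours - ref) / ref * 100

for name, ours, ref in [("Equivalent stress (MPa)", 3.3397, 3.2971),
                        ("Elastic strain", 4.7388e-05, 4.6438e-05),
                        ("Total deformation", 0.0013686, 0.0013408)]:
    print(f"{name}: {pct_error(ours, ref):.2f} %")  # 1.29, 2.04, 2.07
```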


32.3 Results and Discussions Due to the applied loads and supports, the gears and the jaws experience stress and deformation. The solutions for the total deformation and equivalent Von Mises stress are analyzed under a constant load of 0.56 N m torque applied on the gears. The results show the maximum and minimum total deformation, equivalent (Von Mises) stress and safety factor that the gears and the jaws experienced under the given load.

32.3.1 Results for the Gear Assembly Due to applied load, maximum stress has occurred at the root of tooth as the teeth are continuously in contact with each other while transferring torque. The maximum stress induced is 1.8013 MPa which is negligible as compared with the permissible yield strength 280 MPa of the gear material aluminum. It is thus seen that the gear mate assembly is strong enough to operate under the given loads without any failure. Due to transferring of torque, distortion of the gear mate assembly occurs under the action of structural stresses. Maximum deformation occurs at the tip of the gears. The total deformation occurring is negligible for the design of the gears (Figs. 32.4, 32.5, 32.6 and 32.7). The stresses induced in gears cause internal strain within the structure. The gear mate assembly underwent a minute change in dimensions due to the applied moment. The resultant strain is so diminutive that on the removal of applied load, the structure

Fig. 32.4 Von Mises stress induced on the gears


Fig. 32.5 Resulting total deformation on the gears

Fig. 32.6 Elastic strain induced on the gears


Fig. 32.7 Resulting safety factor of the gears

Table 32.6 Results of static structural analysis on gears

| Parameter | Resulting value (maximum) | Resulting value (minimum) |
|---|---|---|
| Von Mises stress (MPa) | 1.8013 | 1.234e–5 |
| Total deformation (mm) | 0.046399 | 0.0126 |
| Equivalent elastic strain | 2.8324e–5 | 3.8714e–10 |
| F.O.S | >15 | >15 |

can transform to its original state. Maximum strain has occurred at the root of the tooth. The safety factor represents the load-carrying capacity of a system beyond what the system actually supports; in other words, the safety factor indicates how much stronger a system is than it needs to be. For the gears, the resulting safety factor is greater than 15 (Table 32.6).
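As a quick sanity check of those safety factors, dividing the yield strengths quoted in the text (280 MPa for aluminum, 25 MPa for polyethylene) by the peak Von Mises stresses gives values above 15 in both cases — note that ANSYS caps the reported safety factor at 15:

```python
def safety_factor(yield_mpa, max_stress_mpa, cap=15.0):
    # FOS = yield strength / maximum equivalent stress, capped as in ANSYS output
    return min(yield_mpa / max_stress_mpa, cap)

print(safety_factor(280, 1.8013))  # gears (aluminum)    -> 15.0 (capped, ~155 uncapped)
print(safety_factor(25, 1.5637))   # jaws (polyethylene) -> 15.0 (capped, ~16 uncapped)
```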

32.3.2 Results for the Jaws of the Gripper Due to gripping operation of the object (a 30 mm diameter aluminum rod), maximum stress has occurred at the point of contact. The maximum stress induced is 1.5637 MPa which is negligible as compared with the permissible yield strength 25 MPa of the jaw material polyethylene. It is thus seen that the polyethylene jaw is strong enough to operate under the given loads without any failure (Figs. 32.8, 32.9, 32.10 and 32.11). Due to the translation movement of the jaws and structural stresses, maximum total deformation has occurred at the central pivot point connecting the gear and the


Fig. 32.8 Von Mises stress induced on the jaws
Fig. 32.9 Resulting total deformation on the jaws


Fig. 32.10 Resulting safety factor of the jaws
Fig. 32.11 Elastic strain induced on the jaws


Table 32.7 Results of static structural analysis on jaws

| Parameter | Resulting value (maximum) | Resulting value (minimum) |
|---|---|---|
| Von Mises stress (MPa) | 1.5637 | 7.9903e–05 |
| Total deformation (mm) | 0.076 | 0.0189 |
| Equivalent elastic strain | 0.0015782 | 1.2149e–7 |
| FOS | >15 | >15 |

Table 32.8 Results of static structural analysis on the model

| Parameter | Resulting value (maximum) | Resulting value (minimum) |
|---|---|---|
| Von Mises stress (MPa) | 3.2233 | 1.2343e–5 |
| Total deformation (mm) | 0.088 | 0 |
| Equivalent elastic strain | 0.0015782 | 3.8714e–10 |

jaw through the short arm. The maximum deformation induced is 0.076 mm. The total deformation of the jaws is marginal, which is admissible in the design of the jaws. Also, the resulting safety factor is greater than 15. The stresses induced cause internal strain within the jaw. The jaws underwent a microscopic change in dimensions due to the applied moment. The resultant strain is very small, and on the removal of the applied load, the structure can return to its original state. The maximum strain has occurred at the contact between the jaw and the object (Tables 32.7 and 32.8).

32.3.3 Results for the Entire Model of Gripper Maximum stress has occurred at the pivot point or joint connecting the base plate and the long arm. Maximum elastic strain has occurred at the point of contact of the jaw and the holding object. Also, maximum total deformation has occurred at the joint, joining the jaws and the short arms. The results are attached below (Figs. 32.12, 32.13 and 32.14).

32.4 Conclusions The primary objective of this work was to design a mechanical gripper and to perform the finite element analysis on the gripper jaw and the gear mate assembly. Initially, the grid-independent analysis was performed to find the optimal grid condition upon

Fig. 32.12 Von Mises stress induced on the model

Fig. 32.13 Total deformation on the model


Fig. 32.14 Equivalent elastic strain induced on the model

which the analysis was executed to determine total deformation, equivalent stress, equivalent elastic strain and factor of safety. The results of the analysis proved that the design of the mechanical gripper is robust and performs as desired under the prescribed loads. The results further identified the areas which were under maximum and minimum stresses, strains and deformations. The values of all the aforementioned parameters were well within permissible limits. Every design has its own limitations, and it is critically important to minimize their scope. Although performing finite element analysis reduced some of the limitations, a few more arise from material selection. The analysis was performed using only one type of gripper jaw material. In the future, further analysis may be carried out using a different jaw material, which may assist in a different field of application.

References 1. Reddy, P.V.P., Suresh, V.V.N.S.: A review on importance of universal in industrial robot applications. Int. J. Mech. Eng. Robot. Res. 2278-0149 2(2) (2013) 2. Fuster, A.M.G.: Gripper design and development for a modular robot, DTU Electrical Engineering (2015)


3. Martinez, A.M.: Mechanical design of a robot’s gripper, Warsaw University of Technology (2015) 4. Razali, Z.B., Othman, M.H., Daud, M.H.: Optimum design of multi-function robot arm gripper for varying shape green product. EDP Sci. (2016). https://doi.org/10.1051/mateconf/201678 01006 5. Reddy, G.R., Eranki, V.K.P.: Design and structural analysis of a robotic arm. Master’s degree thesis, BTH-AMT-EX—2016/D06—SE (2016) 6. Tai, K., El-Sayed, A.R., Shahriari, M., Biglarbegian, M., Mahmud, S.: State of the Art Robotic Grippers and Applications (2016). https://doi.org/10.3390/robotics5020011 7. Raut, V.A., Tambe, N.S., Li, Z.: Optimal design and finite element analysis of robot gripper for industrial application. Int. J. Eng. Sci. 1–9 (2018)

Chapter 33

Analysis of Fractional Calculus-Based MRAC and Modified Optimal FOPID on Unstable FOPTD Processes Deep Mukherjee, G. Lloyds Raja, Palash Kundu, and Apurba Ghosh

Abstract Controlling an unstable continuously stirred tank reactor (CSTR) process using a model reference adaptive control (MRAC) scheme is a very challenging task. Hence, this article shows a novel approach of fractional calculus using both Grünwald–Letnikov (G-L) and Riemann–Liouville (R-L) methods, which are used to develop a fractional order tuning rule of MRAC for a first-order CSTR with time delay. The efficacy of the fractional order proportional–integral–derivative (FOPID) controller is also investigated, and its parameters are computed using a modified particle swarm optimization (PSO) algorithm with IAE, ISE and ITAE as cost functions. Comparative simulation studies are carried out between normal PSO and modified PSO on time domain metrics. Another comparative simulation study is carried out between the fractional order Lyapunov rule and the conventional Lyapunov rule to showcase the efficacy of the fractional order method for controlling the unstable process. The quantitative measurements of both rules are also computed. The effect of perturbation in the plant model is also studied to show the efficacy of the rule.

33.1 Introduction Fractional calculus is the generalization of ordinary differentiation and integration to non-integer (arbitrary) order. The subject is as old as the calculus of differentiation and integration itself and goes back to the times when Leibnitz, Newton and Gauss invented this kind of calculation.
D. Mukherjee (B) School of Electronics Engineering, KIIT University, Bhubaneswar 751024, India e-mail: [email protected]
G. Lloyds Raja Department of Electrical Engineering, NIT Patna, Bihar 80005, India
P. Kundu Department of Electrical Engineering, Jadavpur University, Kolkata 700032, India
A. Ghosh Department of Instrumentation Engineering, Burdwan University, Burdwan 713104, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_33
Studies have shown that PID controller [1] and


FOPID controller [2] have been used to control the temperature and concentration of a nonlinear CSTR. However, in practice, the PID controller introduces several problems that need to be protected against when dealing with imperfect systems. This includes saturation, which is a common nonlinear problem found in real-life situations. So, the FOPID controller is a proper choice to overcome the problems encountered by the PID controller. The FOMCON toolbox [3] helps to evaluate the tuning parameters of the FOPID controller, and beyond the toolbox, several optimization methods such as genetic algorithm [4], particle swarm optimization [5] and artificial bee colony [6] have been used to find optimal solutions using single as well as multi-objective functions for the FOPID controller. From this survey, it is evident that no literature is available on a modified optimization technique to tune the FOPID controller. Hence, a novel dynamic inertia weight-based PSO is approached to bridge that gap and may showcase a satisfactory optimum with a higher rate of operation than normal PSO. It is also observed that adaptive control [7] has become more popular than the FOPID controller as it does not excessively rely on models and does not trade performance for modeling accuracy. Adaptive control improves itself under unforeseen, adverse conditions. So, under adaptive control, the standard model reference adaptive control is chosen in our work. Many experts have made efforts to control the temperature and concentration of the CSTR process using the MIT rule [8] and Lyapunov rule [9] of the MRAC controller, in which it is found that the Lyapunov rule outperforms by choosing a proper Lyapunov function, which also shows the stability of the system. In [10], a fractional order MIT rule is used to develop a control strategy for CSTR. From the above literature survey, it is observed that there is no literature pertaining to fractional order Lyapunov-based MRAC schemes for CSTR. Hence, the novel aspects of this work are as follows:
1. A novel dynamic inertia weight-based particle swarm optimization algorithm is suggested to tune FOPID controller parameters to get improved performance compared to the normal PSO algorithm.
2. An R-L and G-L fractional calculus-based Lyapunov stability rule of the MRAC controller is also carried out and compared with the conventional Lyapunov stability rule using an unstable first-order CSTR model with dead time.

33.2 Unstable Process A continuous stirred tank reactor operates continuously, as it is a steady-state reactor, and manifests highly nonlinear behavior, where temperature and flow rate act as the process and manipulated variables, respectively. The basic schematic of the CSTR process is shown in Fig. 33.1. In the tank, a stirring apparatus moves and agitates the fluid within the reactor to promote better mixing. The CSTR gives a lot of control over the components within the reactor and is very simple and cost effective compared to other, more complex reactors; however, it does not handle viscous fluids well. In this work, a benchmark first-order unstable model of a CSTR [11] is adopted as shown below:


Fig. 33.1 Continuous stirred tank reactor basic schematic

G(s) = \frac{e^{-0.2s}}{s - 1} \quad (33.1)

33.3 Fractional Calculus 33.3.1 G-L Fractional Derivative G-L fractional calculus is a modified approach based on the gamma function, which is given by

f^{\alpha}(x) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{n} (-1)^{j} \binom{\alpha}{j} f(x - jh) \quad (33.2)

where \binom{\alpha}{j} can be approximated as

\binom{\alpha}{j} = \frac{\alpha!}{j!\,(\alpha - j)!} = \frac{\tau(\alpha + 1)}{j!\,\tau(\alpha - j + 1)} \quad (33.3)

D. Mukherjee et al.

where α is the order of differentiation, j is the binomial coefficient, h is step size, and τ is the gamma function. Now, putting the expression of Eq. (33.3) in Eq. (33.2), it is shown below. n 1  τ (α + 1) f (x − j h) f (x) = lim α (−1) j h→0 h j!τ (α − j + 1) j=0 α

(33.4)

The G-L fractional derivative method is represented in a precise way with a modified version of the R-L integration method.

33.3.2 R-L Fractional Integral

The binomial coefficient \binom{\alpha}{j} can be written as

\binom{\alpha}{j} = \frac{\alpha!}{j!(\alpha - j)!} = \frac{\alpha(\alpha - 1)\cdots(\alpha - (j - 1))\,(\alpha - j)!}{j!(\alpha - j)!}    (33.5)

When α → −α,

\binom{-\alpha}{j} = \frac{-\alpha(-\alpha - 1)\cdots(-\alpha - (j - 1))}{j!} = (-1)^{j}\,\frac{\alpha(\alpha + 1)\cdots(\alpha + j - 1)}{j!}    (33.6)

The above expression can be rewritten using the gamma function as

\binom{-\alpha}{j} = (-1)^{j}\,\frac{\Gamma(\alpha + j)}{j!\,\Gamma(\alpha)}    (33.7)

Applying this expression to Eq. (33.4), with the order of differentiation replaced by −α, yields the fractional integral

I^{\alpha}(f) = \lim_{n \to \infty} h^{\alpha} \sum_{j=0}^{n} \frac{\Gamma(\alpha + j)}{j!\,\Gamma(\alpha)}\, f(x - jh)    (33.8)

I^{\alpha}(f) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} (x - u)^{\alpha - 1} f(u)\,du    (33.9)


Fig. 33.2 Model reference adaptive control (MRAC) schematic

33.4 Control Law

33.4.1 Model Reference Adaptive Control (MRAC)

Model reference adaptive control [12] consists mainly of a reference model and a controller. The response of the actual plant must track that of the ideal reference model for a given input. The block diagram of the MRAC scheme is shown in Fig. 33.2. MRAC introduces a controller whose parameters are updated so that the behavior of the system matches that of the ideal model. For a first-order reference model with gain p_m, the general expression is

\left[q_{m_2} s + q_{m_3}\right] y_m = [p_m]\, r    (33.10)

33.4.2 FO-Lyapunov Stability Rule

The Lyapunov method [13] is useful for determining the stability of a nonlinear system. It also applies to stability testing of a linear system and requires no solution of differential equations. The first-order plant and model are chosen as

\frac{dy(t)}{dt} = -a\,y(t) + b\,u(t)    (33.11)


\frac{dy_m(t)}{dt} = -a_m\,y_m(t) + b_m\,u_c(t)    (33.12)

u = \theta\,u_c\,(K_p + K_d s)    (33.13)

where θ is the adjustable controller parameter and u_c is the reference (command) input.

Now, the tracking error between the model and the plant is given by

\dot{e}(t) = \dot{y}(t) - \dot{y}_m(t)    (33.14)

A typical Lyapunov quadratic function is chosen of the form

V(t) = \frac{1}{2} e^{2} + \frac{b}{2\gamma} \left(\theta - \frac{1}{b}\right)^{2}    (33.15)

\dot{V}(t) = -e^{2} + b\left(\theta - \frac{1}{b}\right)\left[e\,u_c + \frac{\dot{\theta}}{\gamma}\right]    (33.16)

where γ is the adaptive gain. The above expression is modified using fractional calculus as

\dot{V}_{\alpha}(t) = -e^{2} + b\left(\theta - \frac{1}{b}\right)\left[e\,u_c + \frac{\dot{\theta}^{\alpha}}{\gamma}\right]    (33.17)

Setting e\,u_c + \dot{\theta}^{\alpha}/\gamma = 0, the adaptation law of the FO-Lyapunov stability rule is obtained as

\theta = -\frac{\gamma\,e\,u_c}{s^{\alpha}}    (33.18)

It is assumed that V is bounded below by zero and decreasing, since \dot{V} \le 0. Considering \dot{V} = -e^{2}, Barbalat's lemma is applied to show that the system is stable, using

\ddot{V} = -2 e \dot{e}    (33.19)

\ddot{V} = -2e\left(-e + (b\theta - 1)\,u_c\right)    (33.20)

\ddot{V} = 2e^{2} - 2e\,(b\theta - 1)\,u_c    (33.21)

The above control law is modified with the following expression:

u = \theta_1\,u_c\,(K_p + K_d s) - \theta_2\,y    (33.22)

Equation (33.14) is rewritten as

\dot{e}(t) = (-a - b\theta_2)\,y + b\theta_1\,u_c\,(K_p + K_d s) + a_m\,y_m - b_m\,u_c\,(K_p + K_d s)    (33.23)

\dot{e}(t) = -a_m (y - y_m) + (a_m - a - b\theta_2)\,y + (b\theta_1 - b_m)\,u_c    (33.24)

A typical Lyapunov function is chosen as

V = \frac{1}{2} e^{2} + \frac{1}{2\gamma b}\,(a_m - a - b\theta_2)^{2} + \frac{1}{2\gamma b}\,(b\theta_1 - b_m)^{2}    (33.25)

\dot{V} = e\left[-a_m e + (a_m - a - b\theta_2)\,y + (b\theta_1 - b_m)\,u_c\,(K_p + K_d s)\right] - \frac{1}{\gamma}(a_m - a - b\theta_2)\,\dot{\theta}_2 + \frac{1}{\gamma}(b\theta_1 - b_m)\,\dot{\theta}_1    (33.26)

\dot{V}_{\alpha} = e\left[-a_m e + (a_m - a - b\theta_2)\,y + (b\theta_1 - b_m)\,u_c\,(K_p + K_d s)\right] - \frac{1}{\gamma}(a_m - a - b\theta_2)\,\dot{\theta}_2^{\alpha} + \frac{1}{\gamma}(b\theta_1 - b_m)\,\dot{\theta}_1^{\alpha}    (33.27)

The adaptation laws are established as follows (Figs. 33.3 and 33.4):

\theta_2 = \frac{\gamma\,e\,y}{s^{\alpha}}    (33.28)

Fig. 33.3 G-L fractional calculus-based FO-Lyapunov



Fig. 33.4 R-L fractional calculus-based FO-Lyapunov

\theta_1 = -\frac{\gamma\,e\,u_c\,(K_p + K_d s)}{s^{\alpha}}    (33.29)
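The adaptation laws (33.18), (33.28) and (33.29) all pass a signal through a fractional integrator γ/s^α. A minimal discrete-time sketch of such a block, using the G-L weights of Eq. (33.8) with a truncated (short) memory, is given below; the class name, sampling period and memory length are illustrative assumptions, not specified in the chapter.

```python
class FractionalIntegrator:
    """Discrete approximation of 1/s**alpha via Grunwald-Letnikov weights,
    i.e. Eq. (33.8) with a finite (short-memory) sum."""
    def __init__(self, alpha, dt, memory=500):
        self.alpha, self.dt = alpha, dt
        self.memory = memory
        # w_j = Gamma(alpha + j) / (j! * Gamma(alpha)), by the recursion
        # w_0 = 1, w_j = w_{j-1} * (alpha + j - 1) / j
        self.w = [1.0]
        for j in range(1, memory):
            self.w.append(self.w[-1] * (alpha + j - 1) / j)
        self.buf = []  # most recent input first

    def step(self, u):
        self.buf.insert(0, u)
        self.buf = self.buf[:self.memory]
        s = sum(wj * uj for wj, uj in zip(self.w, self.buf))
        return s * self.dt ** self.alpha

# e.g. realizing (33.18): theta(k) = frac_int.step(-gamma * e_k * uc_k)
```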

33.4.3 FOPID Rule

The general transfer function of the fractional PID controller [14] is given in (33.30):

G(s) = K_p + \frac{K_I}{s^{\lambda}} + K_d\,s^{\mu}    (33.30)

where, in addition to K_p, K_I and K_d, the two fractional parameters λ and μ act as two extra degrees of freedom. They extend the range over which the exact time-domain behavior of the system can be shaped and also yield frequency-domain magnitude slopes of −20λ dB/dec and +20μ dB/dec.
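As a quick numerical check of the slope claim, the snippet below evaluates the magnitude response of (33.30); the gains are taken from the modified-PSO column of Table 33.1 reported later in the chapter, and the frequency grid is an arbitrary choice.

```python
import numpy as np

# C(s) = Kp + KI/s**lam + Kd*s**mu, evaluated on the jw-axis.
Kp, KI, Kd, lam, mu = 0.3296, 0.2001, 0.3149, 0.46003, 0.69803
w = np.logspace(-4, 4, 9)                 # one point per decade
C = Kp + KI / (1j * w) ** lam + Kd * (1j * w) ** mu
mag_db = 20 * np.log10(np.abs(C))
# Low frequencies: integral term dominates, slope ~ -20*lam dB/dec;
# high frequencies: derivative term dominates, slope ~ +20*mu dB/dec.
print(np.diff(mag_db)[[0, -1]])           # roughly [-9.2, +14.0]
```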

33.5 Modified Particle Swarm Optimization (MPSO)

Tuning the FOPID controller is challenging because of its two extra degrees of freedom. For this unstable process, the range of the two fractional orders is chosen between 0 and 1, and the range of the other gain parameters is chosen between 0 and 100, using the novel dynamic inertia weight-based particle swarm optimization


algorithm [15]. A multi-objective function, rather than a single-objective one, is adopted to improve the optimal performance on the unstable process:

J(K_p, K_i, K_d, \lambda, \mu) = W\left(\int_{0}^{\infty} |e(t)|\,dt + \int_{0}^{\infty} t\,|e(t)|\,dt + \int_{0}^{\infty} |e(t)|^{2}\,dt\right)    (33.31)

The pseudocode is shown below; a Python sketch of the same loop follows the pseudocode.

Input: parameters and DIW = [ChIW, LDIW]
  (ChIW = chaotic inertia weight, LDIW = linearly decreasing inertia weight)
Output: gbest particle
for each individual do
    Initialize the swarm
end for
iter = 0
while criterion not satisfied do
    for each individual do
        Evaluate fitness value
        if value is better than pbest then
            pbest = evaluated fitness value
        end if
    end for
    Find gbest
    if rand() < D then
        i = random integer index
        w = DIW(i)
    end if
    Evaluate velocity and position of each particle
    iter = iter + 1
end while
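The sketch below is one runnable reading of this pseudocode. The chapter does not fix the exact ChIW and LDIW formulas or the acceleration constants, so the ones used here (a logistic-map chaotic weight and a 0.9 to 0.4 linear decrease) are common illustrative choices.

```python
import random

def mpso(cost, lb, ub, n_particles=10, max_iter=50, d_prob=0.5):
    """Dynamic inertia weight PSO sketch following the pseudocode above."""
    dim = len(lb)
    pos = [[random.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    z = random.random()                        # logistic-map state for ChIW
    for it in range(max_iter):
        z = 4.0 * z * (1.0 - z)                # chaotic sequence
        chiw = 0.4 + 0.5 * z                   # chaotic inertia weight (illustrative)
        ldiw = 0.9 - 0.5 * it / max_iter       # linearly decreasing inertia weight
        w = chiw if random.random() < d_prob else ldiw   # dynamic DIW selection
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lb[d]), ub[d])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest_cost[i], pbest[i] = c, pos[i][:]
                if c < gbest_cost:
                    gbest_cost, gbest = c, pos[i][:]
    return gbest, gbest_cost
```

In this setting, cost would simulate the closed loop of plant (33.1) under the candidate FOPID gains and evaluate the multi-objective function (33.31).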


33.6 Result and Analysis

33.6.1 Algorithm Outcome

For the dynamic inertia weight-based particle swarm optimization, the following parameters are selected: population size N_p = 10, maximum cycle number MCN = 50, lb = [0 0 0 0 0] and ub = [100 100 100 1 1]. The optimal values of the FOPID controller are shown in Table 33.1, and the performance of the optimal FOPID controller is shown in Fig. 33.5. From Fig. 33.5 and Tables 33.1 and 33.2, it is evident that dynamic inertia-based PSO shows enhanced performance over normal PSO for all time-domain specifications and error metrics, and that the modified PSO minimizes the objective function more effectively than normal PSO over the iterations.

Table 33.1 Optimal values of modified PSO and normal PSO

Method       | K_P    | K_I    | K_D    | λ       | μ
Modified PSO | 0.3296 | 0.2001 | 0.3149 | 0.46003 | 0.69803
Normal PSO   | 2.678  | 0.6693 | 1.442  | 0.8354  | 0.9665

Fig. 33.5 Performance of optimal FOPID controller (step responses of the proposed modified PSO and normal PSO; Response vs. Time (s))


Table 33.2 Time-domain specifications of the optimal FOPID controller

Method       | Rise time (s) | Settling time (s) | Overshoot (%) | ISE   | IAE   | ITAE
Modified PSO | 2.02          | 8.98              | 18            | 2.786 | 0.456 | 34.67
Normal PSO   | 3.45          | 12.45             | 34            | 10.88 | 3.23  | 84.34

The objective function value for MPSO is 0.02745, whereas for PSO it is 0.4629. With the modified PSO, the system remains more stable, shows less overshoot and reaches the steady-state value more quickly. Next, the performance of the unstable FOPDT process with the FOMRAC controller is investigated, where different adaptive gains of the FO-Lyapunov rule are used to study how well the system tracks the reference model. Under the MRAC controller, the most crucial task is the selection of a proper reference model, so that the error between the reference model and the current plant can be diminished to improve tracking. Two different first-order reference models are investigated, varying only the time constant:

G_1(s) = \frac{1}{s + 1}    (33.32)

G_2(s) = \frac{1}{0.5s + 1}    (33.33)

33.6.2 Case 1 (Reference Model Time Constant of 1)

First, the performance of the system is tested with different adaptive gain values using the conventional Lyapunov rule, with the simulation time set to 10 s to show the direction of tracking. Choosing a proper adaptive gain plays a crucial role in controlling the unstable process: a narrow range of adaptive gains within 0.01–0.9 is usable, and beyond this range the plant is not capable of tracking the ideal model. From Fig. 33.6 and Table 33.3, it is clear that as the adaptive gain increases toward 1, the plant tracks faster and with less overshoot but shows larger deviations and steady-state error. The adaptive gain is therefore chosen as 0.2, since the response then follows the set point closely in steady state with smaller spikes and lower quantitative error measures than the other adaptive gains. G-L and R-L fractional order-based Lyapunov rules are then applied, with the adaptive gain fixed at 0.2, to obtain better tracking than the conventional rule; the extra degree of freedom α is chosen between 0 and 1.


Fig. 33.6 Effect of different adaptive gains using conventional Lyapunov rule

Table 33.3 Performance using different adaptive gains

Gain | ISE   | IAE   | ITAE
0.2  | 55.06 | 16.36 | 112.5
0.08 | 85.34 | 34.27 | 134.9

Using G-L fractional calculus, the performance measures are computed for different values of α between 0 and 1; with α set to 0.5, the results are better in terms of overshoot than the conventional Lyapunov rule. The R-L fractional integral method is then investigated in the same way, and a better result is again achieved with α = 0.5. The performance of the system using the FO-Lyapunov rule is shown in Fig. 33.7. From Fig. 33.7 and Table 33.4, it is observed that the R-L fractional integral method shows smaller deviations than the G-L fractional method. Robustness is also investigated with +10% and −10% variations of the gain of the unstable plant, giving the perturbed transfer functions

G_3(s) = \frac{1.1\,e^{-0.2s}}{s - 1}    (33.34)

G_4(s) = \frac{0.9\,e^{-0.2s}}{s - 1}    (33.35)

The robustness performance using the R-L fractional method is shown in Fig. 33.8, with the simulation time set to 8 s to show the transient part of the servo response. Only a very small change of peak amplitude is observed and the steady-state error barely varies, indicating that the design is robust.


Fig. 33.7 Effect of tracking using the FO-Lyapunov rule (conventional rule with γ = 0.2, α = 1; G-L method with α = 0.5; R-L method with α = 0.5)

Table 33.4 Performance of the fractional order-based Lyapunov rules

Rule       | ISE    | IAE   | ITAE
G-L method | 44.81  | 15.98 | 110.8
R-L method | 10.567 | 7.009 | 37.95

The plant tracks the model at the same speed under both nominal and perturbed conditions, which demonstrates robustness, and the error metrics show no large variation under the perturbed conditions compared with the nominal case (Fig. 33.8).

33.6.3 Case 2 (Reference Model Time Constant of 0.5)

A further investigation is carried out with the conventional and fractional order Lyapunov rules, changing only the time constant of the ideal model. The simulation time is set to 10 s to clearly show the transient part of the response. In this case, the usable range of adaptive gain is again 0.01–0.9; beyond this limit the plant cannot track the model for long. The performance of the system with adaptive gains of 0.08 and 0.2 is shown in Fig. 33.9, where the plant tracks the model with smaller amplitude deviation and higher speed for a gain of 0.2. Keeping the adaptive gain fixed at 0.2, the R-L fractional calculus-based FO-Lyapunov rule is then applied to improve tracking, as in the previous investigation. The comparative study between the conventional and FO-Lyapunov rules is shown in Fig. 33.10.


Fig. 33.8 Perturbed performance (+10% and −10% gain variations)


Fig. 33.9 Nominal performance (adaptive gains 0.08 and 0.2)

From extensive simulation studies, the extra degree of freedom for the R-L FO-Lyapunov rule is chosen as 0.4 for the current plant. From Fig. 33.10 and Table 33.5, it is observed that the FO-Lyapunov rule improves the performance, with smaller amplitude deviation and steady-state error, while tracking the reference model at the same speed as the conventional rule; Table 33.5 likewise shows lower quantitative error values for the R-L-based FO-Lyapunov method.


Fig. 33.10 Effect of performance using the FO-Lyapunov rule (α = 1 vs. α = 0.4)

Table 33.5 Performance of the fractional order-based Lyapunov rule and the conventional rule

Rule        | ISE    | IAE   | ITAE
Lyapunov    | 54.81  | 25.98 | 105.8
FO-Lyapunov | 12.673 | 7.169 | 40.23

33.7 Conclusion and Future Scope

In this work, a novel dynamic inertia weight-based particle swarm optimization method is proposed to tune the FOPID controller parameters, achieving improved performance compared with the normal PSO algorithm. R-L and G-L fractional calculus-based Lyapunov stability rules of the MRAC controller are also derived and compared with the conventional Lyapunov stability rule. The R-L fractional calculus-based Lyapunov method shows somewhat better performance than both the G-L fractional calculus-based Lyapunov method and the conventional Lyapunov stability rule in terms of steady-state error and overshoot. The robustness of the suggested design is also studied. Quantitative performance measures, namely the integral of absolute error (IAE), integral of squared error (ISE) and integral of time-weighted absolute error (ITAE), are computed to compare the proposed control methods. In the future, this work can be extended using a multi-loop method to improve tracking between the actual and desired responses with zero steady-state error, and a comparative study can be carried out with extended versions of other optimization algorithms for tuning FOPID controllers for unstable time-delay processes.

References

1. Jimisha, K., Shinu, M.: Analysis of PID controller for high pressure rated modified CSTR system. IARJSET 3, 274–277 (2016)
2. Dulau, M., Gligor, A.: Fractional order controllers versus integer order controllers. Procedia Eng. 181, 538–545 (2017)
3. Mukherjee, D., Kundu, P., Ghosh, A.: PID controller design for interacting tank level process with time delay using MATLAB FOMCON toolbox. In: IEEE CIEC, pp. 1–5. Kolkata (2016)
4. Vaishnavi, V.: Design of optimal PID control for an unstable system using genetic algorithm. IJIMT 3 (2012)
5. Bingul, Z., Karahan, O.: Comparison of PID and FOPID controllers tuned by PSO and ABC algorithms for unstable and integrating systems with time delay. Optim. Control Appl. Methods 39(4), 1431–1450 (2018)
6. Bijani, V., Khosravi, A.: Robust PID controller design based on a novel constrained ABC algorithm. Trans. Inst. Meas. Control 40, 202–209 (2018)
7. Mushiri, T., Mahachi, A.: A model reference adaptive control system for the pneumatic valve of the bottle washer in beverages using Simulink. Procedia Manuf. 7(3), 364–373 (2017)
8. Adrian, C., Corneliu, A.: The simulation of adaptive systems using the MIT rule. In: 10th WSEAS International Conference on Mathematical Methods and Computational Techniques in Electrical Engineering, pp. 301–305. Bulgaria (2008)
9. Fan, X., Wang, Z.: A fuzzy Lyapunov function method for stability analysis of fractional order T-S fuzzy systems. IEEE Trans. Fuzzy Syst. (Early Access) (2021)
10. Podlubny, I.: Fractional order systems and PIλDμ controllers. IEEE Trans. Autom. Control 44(1), 208–214 (1999)
11. Kishore, C.R., Padmasree, R.: Design of PID controllers for unstable systems using multiple dominant pole placement method. Indian Chem. Eng. 60(4), 356–370 (2018)
12. Wei, Y., Sun, Z.: On fractional order composite model reference adaptive control. Int. J. Syst. Sci. 47(11), 2521–2531 (2016)
13. Mermoud, D., Gallegos, N.: Using general quadratic Lyapunov functions to prove Lyapunov uniform stability for fractional order systems. Commun. Nonlinear Sci. Numer. Simul. 22(1–3), 650–659 (2015)
14. Damara, S., Kundu, M.: Design of robust fractional PID controller using triangular strip operational matrices. Fract. Calc. Appl. Anal. (2015)
15. Shafei, E., Shirzad, A., Rabczuk, T.: Dynamic stability optimization of laminated composite plates: an isogeometric HSDT formulation and PSO algorithm. Compos. Struct. (in press, 2021)

Chapter 34

Chaotic Lorenz Time Series Prediction via NLMS Algorithm with Fuzzy Adaptive Step Size

Rodrigo Possidônio Noronha

Abstract This work proposes chaotic time series prediction based on the autoregressive moving average (ARMA) model via the fuzzy adaptive step size–normalized least mean square (FASS-NLMS) algorithm. In the FASS-NLMS algorithm, which is used to estimate the weight vector of the ARMA model, the step size is adapted via a Mamdani fuzzy inference system (MFIS) whose input variables are the squared residual error and the normalized time instant, and whose output variable is the adapted step size. The proposed methodology is evaluated on the Lorenz time series prediction, and the obtained ARMA model is assessed through statistical metrics for both the training and prediction stages, showing the satisfactory performance of the proposed methodology.

34.1 Introduction

By definition, a time series is an ordered set of samples [1]. Several applications involve time series prediction, such as econometrics [2], meteorology [3], and others. Prediction of chaotic time series is a great challenge, since their temporal behavior is nonlinear: a small variation in an initial condition implies large variations in the dynamics [4]. Nevertheless, owing to their stationary behavior, it is possible to model the dynamics of a chaotic time series, for example, through the autoregressive moving average (ARMA) model [5, 6]. To perform time series prediction based on an ARMA model, the model must first be estimated on a training data set, which amounts to estimating its weight vector. Since the ARMA model is linear, its weight vector can be estimated via stochastic gradient algorithms [7], for example, the least mean square (LMS)


[8] and the normalized least mean square (NLMS) [7]. In this work, the weight vector of the ARMA model is estimated through the NLMS algorithm, because it performs well in the presence of correlated signals [9]. Works on time series prediction using the NLMS algorithm can be found, for example, in [10, 11], whose objective was to predict the reactive power required to operate an electric arc furnace through an ARMA model. Although the NLMS algorithm is widely used in the literature due to its efficiency, its performance is influenced by the step size, whose value governs the trade-off between convergence speed and steady-state mean square error (MSE) [12]. Independent of the application, the performance of the algorithm is influenced by the step size [13]. According to [14], an alternative for obtaining a good trade-off between convergence speed and steady-state MSE is a variable step size [12, 15]. However, the methodologies mentioned above that propose a variable step size require high-order statistical measures, weighting parameters, and so on. A good part of these limitations, such as the dependence on high-order statistical measures, can be addressed when the step size is obtained using a Mamdani fuzzy inference system (MFIS). In this work, to estimate the weight vector of the ARMA model, the fuzzy adaptive step size–normalized least mean square (FASS-NLMS) algorithm proposed in [16] is used. In the FASS-NLMS algorithm, the adaptive step size of the NLMS algorithm is obtained through an MFIS whose antecedent linguistic variables are the squared residual error and the time instant normalized by the min-max method. Since the FASS-NLMS algorithm was published recently, the objective of this work is to evaluate its performance in chaotic time series prediction, here the prediction of the chaotic Lorenz time series. For comparison, the results obtained via the FASS-NLMS algorithm are compared with those obtained via the LMS and NLMS algorithms, and with the algorithm proposed in [17], where the adaptive step size of the NLMS algorithm is obtained through an MFIS with only the squared error as the antecedent linguistic variable. This paper is structured as follows: Sect. 34.2 presents the mathematical representation of the ARMA model; Sect. 34.3 presents the FASS-NLMS algorithm; Sect. 34.4 presents the main steps to estimate the weight vector of the ARMA model via the FASS-NLMS algorithm; Sect. 34.5 presents the computational results; Sect. 34.6 concludes.

34.2 Autoregressive Moving Average Model

The estimation of the weight vector of the ARMA model, according to [18], is performed after obtaining a sample set y(k − 1), y(k − 2), … of the time series y(k) and a noise sample set ε(k − 1), ε(k − 2), …. For time series prediction, after obtaining


the ARMA model on a training data set, it is possible to estimate the future samples. For a training data set, the ARMA(p, q) model is given by

\hat{y}(k) = \psi_1 y(k-1) + \psi_2 y(k-2) + \cdots + \psi_p y(k-p) + \varepsilon(k) - \vartheta_1 \varepsilon(k-1) - \vartheta_2 \varepsilon(k-2) - \cdots - \vartheta_q \varepsilon(k-q)    (34.1)

where \hat{y}(k) is the ARMA(p, q) model output, p and q are, respectively, the maximum delays of the time series and the noise, \psi and \vartheta are the weights of the ARMA(p, q) model, and \varepsilon(k) is a Gaussian noise. Since the ARMA(p, q) model is linear in its weights, (34.1) can be rewritten in vector form as

\hat{y}(k) = (\Theta(k))^{T} \Phi(k) = (\Phi(k))^{T} \Theta(k)    (34.2)

where \Phi(k) = [y(k-1)\; y(k-2) \ldots y(k-p)\; \varepsilon(k)\; -\varepsilon(k-1) \ldots -\varepsilon(k-q)]^{T} \in \mathbb{R}^{(p+q+1) \times 1} is the regressors vector and \Theta(k) = [\psi_1 \ldots \psi_p\; 1\; \vartheta_1 \ldots \vartheta_q]^{T} \in \mathbb{R}^{(p+q+1) \times 1} is the weight vector of the ARMA(p, q) model. The update of the estimate of the weight vector through the LMS algorithm is given by [19]

\Theta(k+1) = \Theta(k) - \frac{1}{2}\,\mu\,\nabla_{\Theta(k)}(e^{2}(k))    (34.3)

where e^{2}(k) = (y(k) - \hat{y}(k))^{2} is the squared error between the time series y(k) and the output \hat{y}(k) of the ARMA(p, q) model, \nabla_{\Theta(k)}(e^{2}(k)) is the gradient of the squared error and μ is the step size. Since \nabla_{\Theta(k)}(e^{2}(k)) = -2\,e(k)\,\Phi(k), (34.3) becomes

\Theta(k+1) = \Theta(k) + \mu\,e(k)\,\Phi(k)    (34.4)

The update equation of the NLMS algorithm is obtained after normalizing (34.4) by (\Phi(k))^{T}\Phi(k), as follows [19]:

\Theta(k+1) = \Theta(k) + \mu \frac{e(k)\,\Phi(k)}{(\Phi(k))^{T}\Phi(k)}, if (\Phi(k))^{T}\Phi(k) \neq 0
\Theta(k+1) = \Theta(k), if (\Phi(k))^{T}\Phi(k) = 0    (34.5)

34.3 FASS-NLMS Algorithm

In Sect. 34.2, the difference equations of the ARMA(p, q) model were developed. As proposed in this work, the weight vector Θ(k) of the ARMA(p, q) model is obtained via the FASS-NLMS algorithm, given by [16]:

\mu(k) = \mathrm{MFIS}(e^{2}(k), K(k))
\Theta(k+1) = \Theta(k) + \mu(k) \frac{e(k)\,\Phi(k)}{(\Phi(k))^{T}\Phi(k)}, if (\Phi(k))^{T}\Phi(k) \neq 0
\Theta(k+1) = \Theta(k), if (\Phi(k))^{T}\Phi(k) = 0, k \in [1, K]    (34.6)

where μ(k) is the adaptive step size, K(k) is the time instant normalized by the min-max method and K is the total number of time instants.

Definition 1 (Membership Function [20]). A function that associates a number x belonging to a universe of discourse U to a fuzzy set F with a given membership degree, through the mapping m(x) : U → [0, 1], is called a membership function (MBF).

Definition 2 (Linguistic Variable [20]). Suppose that x is a variable defined in a universe of discourse U. Through the mapping performed by an MBF, the variable x receives a linguistic value with a certain membership degree; x is then called a linguistic variable. A linguistic value is a linguistic term that a linguistic variable can receive, represented by a fuzzy set and defined by an MBF.

Definition 3 (Fuzzy Rule [20]). A fuzzy rule base is composed of fuzzy rules of the form If antecedent fuzzy propositions then consequent fuzzy propositions. The expert's subjective knowledge about how to solve a given problem is represented in the antecedent and consequent fuzzy propositions. For an MFIS, the antecedent and consequent fuzzy propositions are of the type x is A, where x is a linguistic variable and A is a linguistic value, such that x receives A with a certain membership degree.

As presented in (34.6), the input variables of the MFIS are the squared error e²(k) and the normalized time instant K(k) [16]; these input variables are the antecedent linguistic variables. Through the fuzzification performed by each antecedent MBF on each input value, each antecedent linguistic variable receives an antecedent linguistic value. For the antecedent linguistic variables, three antecedent MBFs m_j(K(k)) and m_j(e²(k)) of triangular type were defined, which perform the mappings m_j(K(k)) : [0, 1] → [0, 1] and m_j(e²(k)) : [1 × 10⁻³, 1.3] → [0, 1] and define the linguistic values small (S) for j = 1, medium (M) for j = 2, and large (L) for j = 3 [16]. The parameters that define the antecedent MBFs of triangular type are presented in Table 34.1.

R1: If K(k) is S and e²(k) is S then μ̄(k) is M
R2: If K(k) is S and e²(k) is M then μ̄(k) is M
R3: If K(k) is S and e²(k) is L then μ̄(k) is M
R4: If K(k) is M and e²(k) is S then μ̄(k) is S
R5: If K(k) is M and e²(k) is M then μ̄(k) is S
R6: If K(k) is M and e²(k) is L then μ̄(k) is L
R7: If K(k) is L and e²(k) is S then μ̄(k) is S
R8: If K(k) is L and e²(k) is M then μ̄(k) is M
R9: If K(k) is L and e²(k) is L then μ̄(k) is L
(34.7)

Table 34.1 Parametric intervals of the triangular MBFs

Variable | S                | M              | L
K(k)     | [0 0.2 0.3]      | [0.2 0.3 0.5]  | [0.3 0.5 1.0]
e²(k)    | [0.001 0.01 0.3] | [0.01 0.3 0.9] | [0.3 0.9 1.3]
μ̄(k)    | [0.1 0.5 1.0]    | [0.5 1.0 1.5]  | [1.0 1.5 2.0]

The consequent linguistic variable is μ̄(k). For it, three consequent MBFs m_j(μ̄(k)) of triangular type were defined, performing the mappings m_j(μ̄(k)) : [0, 0.1] → [0, 1] and defining the consequent linguistic values small (S) for j = 1, medium (M) for j = 2, and large (L) for j = 3; the parameters of the consequent MBFs are presented in Table 34.1 [16]. As noted in Definition 3, the expert's subjective knowledge is represented in the antecedent and consequent fuzzy propositions, and the fuzzy rule base developed for obtaining the adaptive step size is shown in (34.7) [16]. Since the logical connective and is used between the antecedent fuzzy propositions in (34.7), the fuzzy implication input is given by the t-norm between the antecedent MBFs, as follows [20]:

\alpha^{i} = t(K(k), e^{2}(k)) = \min[m_j(K(k)), m_j(e^{2}(k))]    (34.8)

where t : [0, 1] × [0, 1] → [0, 1] is the t-norm. In turn, the fuzzy implication output is defined by an MBF, given by

m_{R^{i}} = \min[\alpha^{i}, m_j(\bar{\mu}(k))]    (34.9)

An MBF as described in (34.9) is obtained for each fuzzy rule. Since each fuzzy rule contributes a response in the form of such an MBF, a single MBF is obtained through fuzzy aggregation, given by [20]:

m_{Total} = \max[m_{R^{1}}, m_{R^{2}}, \ldots, m_{R^{9}}]    (34.10)

The MBF described in (34.10), obtained through fuzzy aggregation, is not a numeric value, whereas the adapted step size μ(k), the output of the MFIS, is a numeric variable. Thus, it is necessary to defuzzify the MBF described in (34.10), which in this work is done with the centroid method. Through the centroid method, a numerical value for the step size is obtained at each time instant, given by [20]:

\mu(k) = \frac{\sum_{i=1}^{9} \bar{\mu}(k)\, m_{Total}(\bar{\mu}(k))}{\sum_{i=1}^{9} m_{Total}(\bar{\mu}(k))}    (34.11)
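A compact Python sketch of this MFIS, using the triangular MBFs of Table 34.1, the rule base (34.7), the min t-norm (34.8), min implication (34.9), max aggregation (34.10) and centroid defuzzification (34.11), is shown below; the defuzzification grid resolution is an implementation choice, not given in the chapter.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Parametric intervals from Table 34.1.
K_MBF  = {'S': (0.0, 0.2, 0.3), 'M': (0.2, 0.3, 0.5), 'L': (0.3, 0.5, 1.0)}
E_MBF  = {'S': (0.001, 0.01, 0.3), 'M': (0.01, 0.3, 0.9), 'L': (0.3, 0.9, 1.3)}
MU_MBF = {'S': (0.1, 0.5, 1.0), 'M': (0.5, 1.0, 1.5), 'L': (1.0, 1.5, 2.0)}
# Rule base (34.7): (K label, e2 label) -> mu_bar label.
RULES = {('S', 'S'): 'M', ('S', 'M'): 'M', ('S', 'L'): 'M',
         ('M', 'S'): 'S', ('M', 'M'): 'S', ('M', 'L'): 'L',
         ('L', 'S'): 'S', ('L', 'M'): 'M', ('L', 'L'): 'L'}

def mfis_step_size(e2, k_norm, grid=np.linspace(0.0, 2.0, 401)):
    """Mamdani inference producing the adapted step size mu(k)."""
    agg = np.zeros_like(grid)
    for (ka, ea), out in RULES.items():
        alpha = min(tri(k_norm, *K_MBF[ka]), tri(e2, *E_MBF[ea]))          # (34.8)
        clipped = np.minimum(alpha, [tri(m, *MU_MBF[out]) for m in grid])  # (34.9)
        agg = np.maximum(agg, clipped)                                     # (34.10)
    if agg.sum() == 0.0:
        return 0.0
    return float((grid * agg).sum() / agg.sum())                           # (34.11)
```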

34.4 Steps for Time Series Prediction Based on ARMA Model via FASS-NLMS Algorithm

The main steps for time series prediction based on the ARMA model via the FASS-NLMS algorithm are presented below; a sketch of these steps in code follows the list.

Step 1: Define the parameters p and q of the ARMA(p, q) model.
Step 2: Obtain the weight vector Θ(k) of the ARMA(p, q) model through the FASS-NLMS algorithm, as presented in (34.6). The training data set should contain information rich enough about the time series dynamics to obtain a good model and, consequently, a good prediction of future values.
Step 3: Evaluate the obtained ARMA(p, q) model through statistical metrics.
Step 4: Perform the prediction of future values of the time series based on the model obtained in Step 2.
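The sketch below follows Steps 1 and 2, assuming y is a 1-D NumPy array and reusing the mfis_step_size sketch above. Since the true noise samples ε(k) are not observed here, a Gaussian surrogate is used, and the clipping of e² into the MFIS domain is an illustrative choice.

```python
import numpy as np

def fass_nlms_arma(y, p=4, q=2, seed=0):
    """Estimate the ARMA(p, q) weight vector with the FASS-NLMS update (34.6)."""
    rng = np.random.default_rng(seed)
    K = len(y)
    eps = rng.standard_normal(K)                   # surrogate noise samples
    theta = np.zeros(p + q + 1)
    for k in range(max(p, q), K):
        phi = np.concatenate((y[k-p:k][::-1],      # y(k-1) ... y(k-p)
                              [eps[k]],            # eps(k)
                              -eps[k-q:k][::-1]))  # -eps(k-1) ... -eps(k-q)
        e = y[k] - theta @ phi
        mu = mfis_step_size(min(e * e, 1.3), k / K)
        denom = phi @ phi
        if denom != 0.0:                           # update rule (34.6)
            theta = theta + mu * e * phi / denom
    return theta
```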

34.5 Computational Results

This section presents the computational results. The ARMA model was obtained via the FASS-NLMS algorithm and applied to chaotic Lorenz time series prediction. The results come from two stages: the training stage, in which the ARMA(p, q) model is obtained on a training data set, and the prediction stage, in which the time series is predicted based on the model obtained in the first stage. For both stages, the performance of the ARMA(p, q) model was evaluated using the following statistical metrics:

• Variance Accounted For (VAF):

\mathrm{VAF}(\%) = \left(1 - \frac{\mathrm{var}(y - \hat{y})}{\mathrm{var}(y)}\right) \times 100    (34.12)

where var(·) is the variance, y is the data vector of a time series and \hat{y} is the data vector estimated by the ARMA(p, q) model.


• Mean Square Error (MSE):

\mathrm{MSE} = \frac{1}{N} \sum_{k=1}^{N} (y(k) - \hat{y}(k))^{2}    (34.13)

where N is the total number of samples.

• Normalized Root Mean Square Error (NRMSE):

\mathrm{NRMSE} = \frac{\sqrt{\frac{1}{N} \sum_{k=1}^{N} (y(k) - \hat{y}(k))^{2}}}{\max(y) - \min(y)}    (34.14)

• Non-Dimensional Error Index (NDEI):

\mathrm{NDEI} = \frac{\mathrm{RMSE}}{\mathrm{std}(y)}    (34.15)

where std(·) is the standard deviation.

• Best Fit Criterion (FIT):

\mathrm{FIT}(\%) = \left(1 - \frac{\|y - \hat{y}\|}{\|y - \bar{y}\|}\right) \times 100    (34.16)

where \bar{y} is the mean value of y and ‖·‖ is the Euclidean norm of a data vector.

The Lorenz time series is obtained from a set of nonlinear differential equations for thermal convection in the lower atmosphere [21]. Lorenz developed this set of equations to predict convection in fluids exhibiting chaotic behavior; the model represents the Earth's atmosphere, heated by the ground and dissipating thermal energy into space, and serves as a popular benchmark for validating methodologies dealing with chaotic nonlinear dynamical systems. The equations are [21]:

\dot{x}(t) = \alpha\,[y(t) - x(t)]
\dot{y}(t) = \beta\,x(t) - y(t) - x(t)\,z(t)
\dot{z}(t) = x(t)\,y(t) - \gamma\,z(t)    (34.17)

where x(t) is proportional to the flow velocity of the fluid, y(t) characterizes the temperature difference between ascending and descending fluid elements, and z(t) is proportional to the vertical temperature deviation from its equilibrium value. The parameter α, associated with the Prandtl number, relates viscosity and thermal conductivity, β is related to the temperature gradient, and γ is a geometric factor. The parameter values were set to α = 18, β = 28 and γ = 8/3. In this work, the set of nonlinear differential equations in (34.17) is solved by the fourth-order Runge–Kutta method with a time step of Δt = 0.01; a sketch of this integration appears below.
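The minimal sketch below integrates (34.17) with classical RK4 using the stated parameters; the initial condition is an illustrative choice.

```python
import numpy as np

def lorenz_rk4(alpha=18.0, beta=28.0, gamma=8.0/3.0, dt=0.01, n=10000):
    """Fourth-order Runge-Kutta integration of the Lorenz system (34.17)."""
    def f(s):
        x, y, z = s
        return np.array([alpha * (y - x),
                         beta * x - y - x * z,
                         x * y - gamma * z])
    traj = np.empty((n, 3))
    s = np.array([1.0, 1.0, 1.0])        # illustrative initial condition
    for k in range(n):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[k] = s
    return traj  # columns x(k), y(k), z(k); e.g. first 8000 rows for training
```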


The results obtained through the FASS-NLMS algorithm were compared with those of the LMS and NLMS algorithms, whose step size was set to μ = 0.003. For the training and prediction stages, 10,000 samples of the Lorenz time series obtained through (34.17) were used: the first 8000 samples in the training stage and the remaining 2000 in the prediction stage, with future values predicted 2 steps ahead. The parameters p and q were set by trial and error to p = 4 and q = 2. Figures 34.1, 34.3 and 34.5 show the outputs estimated by the ARMA(4, 2) model for the training and prediction stages. The results of the statistical metrics for the training and prediction stages, for the Lorenz time series x(k), y(k) and z(k), are presented in Tables 34.2 and 34.3, respectively. The ARMA(4, 2) model obtained through the FASS-NLMS algorithm achieved the best values of VAF(%), MSE, NRMSE, NDEI and FIT(%) for both the training and prediction stages.


Fig. 34.1 Estimation of Lorenz time series x(k) for the training stage (red) and the prediction stage (blue)

Fig. 34.2 Adaptive step size for Lorenz time series x(k)


Fig. 34.3 Estimation of Lorenz time series y(k) for the training stage (red) and the prediction stage (blue)

Table 34.2 Results of the statistical metrics obtained for the training stage

x(k):
Metric | LMS     | NLMS    | FASS-NLMS | Ref. [17]
VAF(%) | 93.9361 | 95.8098 | 97.7622   | 96.4561
MSE    | 0.2393  | 0.2097  | 0.0520    | 0.0871
NRMSE  | 0.0947  | 0.0760  | 0.0147    | 0.0412
NDEI   | 0.0833  | 0.0702  | 0.0117    | 0.0245
FIT(%) | 94.2332 | 95.9792 | 98.6172   | 97.7641

y(k):
Metric | LMS     | NLMS    | FASS-NLMS | Ref. [17]
VAF(%) | 93.9975 | 95.8429 | 97.9290   | 96.1021
MSE    | 0.2405  | 0.2109  | 0.0531    | 0.0901
NRMSE  | 0.0977  | 0.0775  | 0.0168    | 0.0478
NDEI   | 0.0858  | 0.0723  | 0.0128    | 0.0293
FIT(%) | 94.6976 | 96.0027 | 98.6304   | 97.9864

z(k):
Metric | LMS     | NLMS    | FASS-NLMS | Ref. [17]
VAF(%) | 94.4875 | 96.8104 | 98.4114   | 96.8931
MSE    | 0.2291  | 0.2059  | 0.0351    | 0.0418
NRMSE  | 0.0796  | 0.0544  | 0.0098    | 0.0221
NDEI   | 0.0801  | 0.0567  | 0.0104    | 0.0209
FIT(%) | 94.9960 | 96.6448 | 98.8427   | 98.1134


Table 34.3 Results of the statistical metrics obtained for the prediction stage

x(k):
Metric | LMS     | NLMS    | FASS-NLMS | Ref. [17]
VAF(%) | 94.2020 | 96.2011 | 97.9612   | 97.0125
MSE    | 0.2351  | 0.2041  | 0.0501    | 0.0803
NRMSE  | 0.0917  | 0.0721  | 0.0112    | 0.0389
NDEI   | 0.0833  | 0.0702  | 0.0117    | 0.0215
FIT(%) | 94.9156 | 96.3415 | 98.9912   | 97.9815

y(k):
Metric | LMS     | NLMS    | FASS-NLMS | Ref. [17]
VAF(%) | 94.1223 | 96.0019 | 98.1251   | 97.1091
MSE    | 0.2376  | 0.2078  | 0.0502    | 0.0789
NRMSE  | 0.0913  | 0.0721  | 0.0126    | 0.0367
NDEI   | 0.0858  | 0.0723  | 0.0128    | 0.0201
FIT(%) | 94.9812 | 96.4516 | 98.9914   | 98.0012

z(k):
Metric | LMS     | NLMS    | FASS-NLMS | Ref. [17]
VAF(%) | 94.7812 | 97.0104 | 98.8125   | 97.8178
MSE    | 0.2261  | 0.2019  | 0.0311    | 0.0706
NRMSE  | 0.0761  | 0.0511  | 0.0071    | 0.0301
NDEI   | 0.0801  | 0.0567  | 0.0104    | 0.0178
FIT(%) | 95.4589 | 96.9814 | 99.1355   | 98.4329

Fig. 34.4 Adaptive step size for Lorenz time series y(k)


Fig. 34.5 Estimation of Lorenz time series z(k) for the training stage (red) and the prediction stage (blue)

Fig. 34.6 Adaptive step size for Lorenz time series z(k)

The temporal evolutions of the adapted step sizes are presented in Figs. 34.2, 34.4 and 34.6. From the statistical metrics in Tables 34.2 and 34.3, the FASS-NLMS algorithm achieved the best performance in both the training and prediction stages, including in comparison with the NLMS algorithm whose adaptive step size is obtained via the MFIS of [17]. These superior results are possible because the step size is adapted at each time instant via an MFIS whose input variables are the squared residual error between the time series and the ARMA(4, 2) model output and the normalized time instant.


34.6 Conclusion

This paper proposed chaotic Lorenz time series prediction based on an ARMA model whose weight vector is estimated through the FASS-NLMS algorithm. For both the training and prediction stages, the ARMA model estimated via the FASS-NLMS algorithm obtained the best results according to the statistical metrics, compared with the NLMS-based algorithm proposed in [17] and with the LMS and NLMS algorithms with fixed step size. The superior results are due to the step size being adapted through an MFIS whose input variables are the squared residual error between the time series and the ARMA model output and the normalized time instant. The proposed methodology thus proved efficient for chaotic time series prediction.

References

1. Hamilton, J.D.: Time Series Analysis. Princeton University Press (2020)
2. Nonejad, N.: An overview of dynamic model averaging techniques in time-series econometrics. J. Econ. Surv. 35(2), 566–614. Wiley (2021)
3. Lorenz, M., Brunk, M.: Trends of nutrients and metals in precipitation in northern Germany: the role of emissions and meteorology. Environ. Monit. Assess. 193(6), 1–20. Springer (2021)
4. Farmer, D.J., Sidorowich, J.: Predicting chaotic time series. Phys. Rev. Lett. 59(8), 845. APS (1987)
5. Kocak, K.: ARMA(p, q) type high order fuzzy time series forecast method. Appl. Soft Comput. 58, 92–103. Elsevier (2017)
6. Ansari, K., Park, D.K., Kubo, N.: Linear time-series modeling of the GNSS-based TEC variations over southwest Japan during 2011–2018 and comparison against ARMA and GIM models. Acta Astronaut. 165, 248–258. Elsevier (2019)
7. Garroppo, G.R., Callegari, C.: Prediction of mobile networks traffic: enhancement of the NMLS technique. In: 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), pp. 1–6. IEEE (2020)
8. Rahman, S., Rashid, M.M., Alam, Z.M.: A unified analysis of proposed wavelet transform domain LMS algorithm for ARMA process. In: 2019 5th International Conference on Advances in Electrical Engineering (ICAEE), pp. 195–200. IEEE (2019)
9. Pandey, A., Malviya, L., Sharma, V.: Comparative study of LMS and NLMS algorithms in adaptive equalizer. Int. J. Eng. Res. Appl. (IJERA) 2(3), 1584–1587 (2012)
10. Golshan, H.M., Samet, H.: Updating stochastic model coefficients for prediction of arc furnace reactive power. Electr. Power Syst. Res. 79(7), 1114–1120. Elsevier (2009)
11. Samet, H., Mojallal, A., Ghanbari, T.: Employing grey system model for prediction of electric arc furnace reactive power to improve compensator performance. Przeglad Elektrotechniczny 89(12), 110–115 (2013)
12. Aslam, S.M., Shi, P., Lim, C.C.: Self-adapting variable step size strategies for active noise control systems with acoustic feedback. Automatica 123, 109354. Elsevier (2021)
13. Shin, C.H., Sayed, H.A., Song, J.W.: Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 11(2), 132–135. IEEE (2004)
14. Strutz, T.: Estimation of measurement-noise variance for variable-step-size NLMS filters. In: 2019 27th European Signal Processing Conference (EUSIPCO), pp. 1–5. IEEE (2019)
15. Sun, Y., Wang, M., Han, Y., Zhang, C.: An improved VSS NLMS algorithm for active noise cancellation. In: AIP Conference Proceedings, vol. 1864, no. 1, p. 020158. AIP (2017)
16. Noronha, R.P.: Indirect adaptive inverse control design based on the FASS-NLMS algorithm. IFAC-PapersOnLine 54(20), 354–359. Elsevier (2021)
17. Ng, Y., Mohamad, H., Chuah, T.: Block-based fuzzy step size NLMS algorithms for subband adaptive channel equalisation. IET Signal Process. 3(1), 23–32. IET (2009)
18. Makridakis, S., Hibon, M.: ARMA models and the Box–Jenkins methodology. J. Forecast. 16(3), 147–163. Wiley (1997)
19. Diniz, P.S.R.: Adaptive Filtering, vol. 4. Springer (1997)
20. Wang, L.X.: A Course in Fuzzy Systems and Control. Prentice Hall (1997)
21. Karunasinghe, D., Liong, S.: Chaotic time series prediction with a global model: artificial neural network. J. Hydrol. 323(1–4), 92–105. Elsevier (2006)

Chapter 35

Channel Adaptive Equalizer Design Based on FIR Filter via FVSS-NLMS Algorithm

Rodrigo Possidônio Noronha

Abstract Equalization is an important application of signal processing theory to communication theory, since its function is to cancel noise present in communication channels. The performance of an adaptive equalizer is influenced by the step size of the stochastic gradient-based adaptive algorithm used for updating the estimate of its weight vector. Good adaptive equalization performance, in terms of convergence speed and steady-state mean square error (MSE), can be obtained through a variable step size. To this end, in this work the adaptive equalizer design is performed via the fuzzy variable step size–normalized least mean square (FVSS-NLMS) algorithm, whose variable step size is obtained through a Mamdani fuzzy inference system (MFIS). The adaptive equalization methodology was analyzed in three signal-to-noise ratio scenarios.

35.1 Introduction

The demand for fast data transmission in communication channels has been growing steadily. Although the academic community has been successful in developing new methodologies to increase the speed of data transmission, the presence of noise in the channel is a limiting factor for good performance of a communication system. In this sense, the equalization of communication channels is essential to cancel noise and ensure the integrity of the input signal of the communication system [1]. An equalizer can be modeled by an adaptive filter whose weight vector is obtained through an adaptive algorithm based, for example, on stochastic gradient descent [2]. Several such algorithms have been used for equalizer design, most frequently the least mean square (LMS) [1, 3] and normalized least mean square (NLMS) [4, 5] algorithms.



Due to characteristics of the NLMS algorithm that favor its application to communication channel equalization, such as low sensitivity to input signal power variations and good performance on correlated signals, only the NLMS algorithm is the object of study in this work [6]. The performance of the NLMS algorithm, with respect to convergence speed and steady-state mean square error (MSE), is influenced by the step size [7, 8]; the most common solution adopted in the literature is a variable step size [3, 7, 9]. Specifically for adaptive equalizer design, in [10] the variable step size is obtained through the autocorrelation function; in [11], a formulation for a random variable step size was proposed; and in [12], the variable step size is obtained as a function of the input signal energy of the equalizer. A common characteristic of these methodologies is that they require high-order statistical measurements to obtain the variable step size. A viable alternative for the NLMS algorithm is to obtain the variable step size through a Mamdani fuzzy inference system (MFIS). MFISs have recently been used in different areas, offering efficient solutions in control systems [13], system identification [14], robotics [15], and others. Through an MFIS, it is possible to develop a formulation based on fuzzy rules of the type If antecedent fuzzy propositions then consequent fuzzy propositions, based on the expert's knowledge, to obtain the variable step size; a further justification is that an MFIS makes the variable step size independent of high-order statistical measures. There are few contributions in the literature on obtaining the variable step size for the NLMS algorithm through an MFIS. In [16], adaptive equalizers were designed using the NLMS algorithm with a variable step size obtained through an MFIS; however, the fuzzy rule base formulated in [16] has only the squared equalization error as the MFIS input. Since the number of elapsed time instants carries important information about the convergence of the NLMS algorithm, this aspect remains unexplored in [16]. In this sense, this work proposes a new methodology for adaptive equalization of communication channels based on the fuzzy variable step size–normalized least mean square (FVSS-NLMS) algorithm proposed in [17]. The adaptive equalizer is represented mathematically by a finite impulse response (FIR) filter, and the variable step size is obtained through a linguistic description implemented in a fuzzy rule base, such that the MFIS input is composed of the squared equalization error and the time instant normalized by the min-max method. The justification for using the FVSS-NLMS algorithm is to improve the convergence speed and steady-state MSE of the estimation of the adaptive equalizer weight vector compared with the algorithm proposed in [16]. This paper is organized as follows: Sect. 35.2 states the equalization problem; Sect. 35.3 presents the mathematical formulation of the FVSS-NLMS algorithm; Sect. 35.4 presents the results of the adaptive equalizer design in the presence of noise with signal-to-noise ratios of SNR = 18 dB, SNR = 24 dB and SNR = 30 dB; Sect. 35.5 concludes.


35.2 Adaptive Equalization Problem Statement for Communication Channels

The goal of an equalizer is to reconstruct the signal transmitted through a communication channel. According to [18], an equalizer is a filter designed to counteract the uncertainties in a given communication channel that modify the transmitted message signal. Let the equalizer input signal be given by

x(k) = \sum_{i=0}^{M-1} h(i)\,a(k - i) + v(k)    (35.1)

where h(i), i = 0, 1, 2, …, M − 1, is the linear time-invariant channel impulse response, v(k) is the additive Gaussian noise signal with variance σ_v², a(k) is the coded input signal described by a binary sequence ±1, k is the time instant and M is the number of equalizer weights. The equalizer output signal is given by

\hat{d}(k) = (\Theta(k))^{T} X(k) = (X(k))^{T} \Theta(k)    (35.2)

where X(k) = [x(0)\; x(1) \ldots x(M-1)]^{T} \in \mathbb{R}^{M \times 1} is the regressor vector, i.e., the equalizer input vector, and \Theta(k) = [\theta_0\; \theta_1 \ldots \theta_{M-1}]^{T} \in \mathbb{R}^{M \times 1} is the adaptive equalizer weight vector. The equalizer is an adaptive FIR filter whose adaptation criterion is the minimization of the squared equalization error, with e(k) = d(k - L) - \hat{d}(k), where d(k) is the communication channel input signal and L ∈ ℕ is the equalization delay. The following section presents the algorithm used for updating the estimate of the adaptive equalizer weight vector.

35.3 FVSS-NLMS Algorithm

The adaptive equalizer weight vector, using the LMS algorithm, is obtained as follows [19]:

\Theta(k+1) = \Theta(k) - \frac{1}{2}\,\mu\,\nabla_{\Theta(k)}(e^{2}(k))    (35.3)

where \nabla_{\Theta(k)}(e^{2}(k)) is the gradient of the squared equalization error, e^{2}(k) = (d(k - L) - \hat{d}(k))^{2} is the squared equalization error, \hat{d}(k) = (\Theta(k))^{T} X(k) = (X(k))^{T} \Theta(k) is the equalizer output signal and μ is the step size. Since \nabla_{\Theta(k)}(e^{2}(k)) = -2\,e(k)\,X(k), (35.3) can be rewritten as

\Theta(k+1) = \Theta(k) + \mu\,e(k)\,X(k)    (35.4)


Normalizing (35.4) by the input signal power (X(k))^{T} X(k) gives [19]

\Theta(k+1) = \Theta(k) + \mu \frac{e(k)\,X(k)}{(X(k))^{T} X(k)}    (35.5)

From (35.5), the step size is an important parameter for updating the estimate of the adaptive equalizer weight vector; for the equalizer to reconstruct the signal transmitted over the communication channel, the estimate of Θ(k) must be updated at every instant. In this work, the update of the estimate of the adaptive equalizer weight vector Θ(k) is performed by the FVSS-NLMS algorithm, as follows [17]:

if (X(k))T X(k) = 0 if (X(k))T X(k) = 0

(35.6)

where (k) is the adaptive equalizer weight vector, μ(k) is the variable step size obtained by the MFIS, e2 (k) is the squared error used to update the estimate (k), X(k) is the regressors vector and K (k) is the normalized time instante by the minmax method. Definition 1 (Membership Function [20]). A Membership Function (MBF) is a function that maps each element x of a universe of discourse U to a fuzzy set F with a membership degree between 0 and 1, given by the following mapping m F (x) : U → [0, 1], where m F is the MBF of the fuzzy set F. It is important to note that the MBF m F defines the fuzzy set F. Definition 2 (Linguistic Variable [20]). So as numeric variables receive numeric values, linguistic variables receive linguistic values. It is important to note that a linguistic value is represented by a fuzzy set and is defined by an MBF. Then, as presented in Definition 1, each linguistic variable receives a linguistic value with a certain membership degree. Definition 3 (Fuzzy Rule [20]). The subjective knowledge of the expert, with respect to an MFIS, is represented in a fuzzy rule base. A fuzzy rule is composed by antecedent and consequent fuzzy propositions. For example, for a linguistic variable “speed,” a fuzzy proposition is of the type “speed is high,” where “high” is a linguistic value. This fuzzy proposition means that the linguistic variable “speed” is received the linguistic value “high” with a certain membership degree. The fuzzy rule base developed to obtain the variable step size is presented in (35.7). The MFIS input variables are e2 (k) and K (k) also are the antecedent linguistic variables. The linguistic variable of the consequent is μ(k), ¯ and the MFIS output variable is the variable step size μ(k). At every time instant, each antecedent and consequent linguistic variable receives each linguistic value with a certain membership

Title Suppressed Due to Excessive Length

461

degree, through the following mappings m j (K (k)) : [0, 1] → [0, 1], m j (e2 (k)) : ¯ : [1 × 10−3 , 1 × 10−1 ] → [0, 1] [0, 1 × 10−5 , 3 × 10−5 ] → [0, 1] and m j (μ(k)) [17]. R1 R2 R3 R4 R5 R6 R7 R8 R9

: : : : : : : : :

If K If K If K If K If K If K If K If K If K

(k) is (k) is (k) is (k) is (k) is (k) is (k) is (k) is (k) is

S and e2 (k) is S then μ(k) ¯ is M S and e2 (k) is M then μ(k) ¯ is M S and e2 (k) is L then μ(k) ¯ is M M and e2 (k) is S then μ(k) ¯ is S M and e2 (k) is M then μ(k) ¯ is S M and e2 (k) is L then μ(k) ¯ is L L and e2 (k) is S then μ(k) ¯ is S L and e2 (k) is M then μ(k) ¯ is M L and e2 (k) is L then μ(k) ¯ is L

(35.7)

For each antecedent and consequent linguistic variable, three MBFs of triangular type were defined with the linguistic values “Small” (S) for j = 1, “Medium” (M) for j = 2, and “Large” (L) for j = 3. The parameters of MBFs of the antecedent and consequent are shown in Table 35.1. The antecedent fuzzy propositions of each fuzzy rule are connected by the logical connective and, thus forming a compound fuzzy proposition. Each fuzzy rule is activated with a certain degree of activation obtained through the t-norm between the antecedent MBFs, given by: α i = m(K (k), e2 (k)) = min[m j (K (k)), m j (e2 (k))].

(35.8)

It is important to note that antecedent and consequent fuzzy propositions of each fuzzy rule are related through a fuzzy relation, obtained through the fuzzy implication. The input of the fuzzy implication is the activation degree (35.8) and the output is an MBF, given by: m R i = min[α i , m j (μ(k))] ¯ (35.9) Since each fuzzy rule is activity with an activation degree, then for each fuzzy rule is obtained an MBF m R i due to fuzzy implication. The MBF m R i is an individual response of each fuzzy rule. Through the fuzzy aggregation, a single MBF is obtained,

Table 35.1 Interval of triangular MBFs K (k) e2 (k) Interval Interval S

[0 0.2 0.3]

S

M

[0.2 0.3 0.5]

M

L

[0.3 0.5 1]

L

μ(k) ¯ Interval [0.1 0.3 0.9] × S 10−5 [0.3 0.9 1] × M 10−5 [0.9 1 3] × L 10−5

[0.001 0.005 0.015] [0.005 0.015 0.03] [0.015 0.03 0.1]

462

R. P. Noronha

which represents the total response of the fuzzy rule base, and the fuzzy aggregation is performed. The fuzzy aggregation is obtained as follows: m Total = max[m R 1 , m R 2 , . . . , m R 9 ]

(35.10)

After obtained the MBF m Total , in order to obtain the variable step size μ(k), it is necessary to perform the defuzzification of m Total . The defuzzification method used in this work is of centroid type, given by: 9  μ(k)m ¯ ¯ Total (μ(k))

μ(k) =

i=1 9  m Total (μ(k)) ¯

(35.11)

i=1

35.4 Simulation and Results 35.4.1 Simulation The input signal d(k) is a random signal modulated via binary phase shift keying (BPSK) resulting in a coded signal a(k) described by a binary sequence {±1}. The additive noise v(k) is uniformly distributed with signal-to-noise ratios equal to SNR = 18 dB, SNR = 24 dB and SNR = 30 dB. The channel impulsive response is given by h = [0.9 0.3 0.5 − 0.1]. The order of the adaptive equalizer weight vector (k) of was set equal to M = 10. The simulations were performed for 4000 time instants. Furthermore, due to the presence of additive noise on the communication channel, with the objective of obtaining greater statistical consistency in the analysis of the results obtained, 100 independent Monte Carlo simulations were performed referring to the adaptive equalizer design. The results obtained during simulations of the adaptive equalizer designed via FVSS-NLMS algorithm were compared with the results obtained by the proposed algorithm in [16] and by the LMS and NLMS algorithms with fixed step size. The step size for the LMS and NLMS algorithms was set equal to μ = 0.03.

35.4.2 Results In this section, the results obtained through the adaptive equalizer design based on adaptive FIR filtero are presented. In Figs. 35.1, 35.2, and 35.3, the temporal evolutions of MSE for the FVSS-NLMS algorithm, for the proposed algorithm in [16] and for the LMS and NLMS algorithms with fixed step size are presented, with

Title Suppressed Due to Excessive Length

463

MSE (dB)

10 0 -10 Equalizer LMS Equalizer NLMS Equalizer Ref. [16] Equalizer FVSS-NLMS

-20 -30 -40

0

500

1000

1500

2000

2500

3000

3500

4000

k Fig. 35.1 MSE for SNR = 30 dB Fig. 35.2 MSE for SNR = 24 dB

MSE (dB)

20 0 -20 -40

Equalizer LMS Equalizer NLMS Equalizer Ref. [16] Equalizer FVSS-NLMS

0

1000

2000

3000

4000

k

signal-to-noise ratios equal to SNR = 30 dB, SNR = 24 dB, and SNR = 18 dB, respectively. As can be seen in these figures, the FVSS-NLMS algorithm obtained the best performances with respect to convergence speed and steady-state MSE. The temporal evolutions of the variable step size are shown in Figs. 35.4, 35.5, and 35.6, respectively, for the signal-to-noise ratios SNR = 30 dB, SNR = 24 dB, and SNR = 18 dB. When compared to the proposed algorithm in [16], it is possible to note the superior performance of FVSS-NLMS algorithm. The good performance of FVSS-NLMS algorithm is due to obtaining variable step size through a linguistic description based on the expert’s subjective knowledge and implemented in a fuzzy rule base, whose MFIS inputs are the squared equalization error and the normalized time instant by the min-max method. This way, the step size is adapted as a function of the squared equalization error and temporal advance of the estimation of the adaptive equalizer weight vector (k) performed by the FVSS-NLMS algorithm. It is possible to note that the lower the signal-to-noise ratio the worse is the performance of the adaptive equalizer with respect to the steady-state MSE; therefore, the best results were obtained when SNR = 30 dB and the worst results when SNR = 18 dB. To verify if the adaptive equalization of the channel was performed satisfactorily, the results obtained in magnitude (Figs. 35.7, 35.8, and 35.9) and phase (Figs. 35.10, 35.11, and 35.12) responses were compared with the results obtained by the inverse channel. Thus, the magnitude and phase responses of the inverse channel are the reference responses. Through these figures, it is possible to note be seen that

Fig. 35.3 MSE for SNR = 18 dB
Fig. 35.4 Variable step size μ(k) for SNR = 30 dB
Fig. 35.5 Variable step size μ(k) for SNR = 24 dB
Fig. 35.6 Variable step size μ(k) for SNR = 18 dB

Through these figures, it is possible to see that the adaptive equalizer designed by the FVSS-NLMS algorithm obtained the best results with respect to magnitude and phase response tracking.

Fig. 35.7 Equalized channel magnitude response for SNR = 30 dB
Fig. 35.8 Equalized channel magnitude response for SNR = 24 dB
Fig. 35.9 Equalized channel magnitude response for SNR = 18 dB
Fig. 35.10 Equalized channel phase response for SNR = 30 dB
Fig. 35.11 Equalized channel phase response for SNR = 24 dB
Fig. 35.12 Equalized channel phase response for SNR = 18 dB

35.5 Conclusion

In this paper, the adaptive equalization of linear and time-invariant channels was proposed using a new version of the NLMS algorithm with variable step size obtained

through an MFIS, entitled the FVSS-NLMS algorithm. The performance of the adaptive equalizer was evaluated through extensive simulations. In addition, the results obtained by the adaptive equalizer designed through the FVSS-NLMS algorithm were compared with the results obtained by the algorithm proposed in [16] and by the LMS and NLMS algorithms with fixed step size, for signal-to-noise ratios of SNR = 18 dB, SNR = 24 dB, and SNR = 30 dB. As expected, by obtaining the variable step size as a function of a linguistic description based on the expert's knowledge, satisfactory results were obtained by the FVSS-NLMS algorithm with respect to convergence speed and steady-state MSE. In addition, the performance of the adaptive equalizer was compared with the magnitude and phase responses of the inverse channel, where it was possible to note the superior performance of the equalizer designed through the FVSS-NLMS algorithm. It was also possible to note that the FVSS-NLMS algorithm is less sensitive to a decrease in the signal-to-noise ratio: even for SNR = 18 dB, satisfactory results were obtained when compared to the results of the algorithm proposed in [16] and of the LMS and NLMS algorithms with fixed step size.

References 1. Jiaghao, L., Lim, C., Nirmalathas, A.: Indoor optical wireless communications using few-mode based uniform beam shaping and LMS based adaptive equalization. IEEE Photon. Conf. (IPC) 1–2. IEEE (2020) 2. Vlachos, E., Lalos, A., Berberidis, K.: Stochastic gradient pursuit for adaptive equalization of sparse multipath channels. J. Emer. Sel. Topics Circuits Syst. 2(3), 413–423. IEEE (2012) 3. Zhu, P., Xu, X., Zhang, X., Wang, R., Zhang, X.: A new variable step-size lms algorithm for application to underwater acoustic channel equalization. In: IEEE International Conference on Signal Processing, pp. 1–4. IEEE (2017) 4. Khan, S., Wahab, A., Naseem, I., Moinuddin, M.: Comments on “Design of fractional-order variants of complex LMS and NLMs algorithms for adaptive channel equalization”. Nonlinear Dyn. 101(2), 1053–1060. Springer (2020) 5. Khan, A.A., Shah, S.M., Raja, M.A., Chaudhary, N.I., He, Y., Machado, J.A.T.: Fractional LMS and NLMS algorithms for line echo cancellation. Arab. J. Sci. Eng. 46, 9385–9398. Springer (2021) 6. Pandey, A., Malviya, L.D., Sharma, V.: Comparative study of LMS and NLMS algorithms in adaptive equalizer. Int. J. Eng. Res. Appl. (IJERA) 2(3), 1584–1587. Citeseer (2012) 7. Aslam, M.S., Shi, P., Lim, C.-C.: Self-adapting variable step size strategies for active noise control systems with acoustic feedback. Automatica 123, 109354. Elsevier (2021) 8. Strutz, T.: Estimation of measurement-noise variance for variable-step-size NLMS filters. In: 27th European Signal Processing Conference (EUSIPCO), pp. 1–5. IEEE (2019) 9. Casco-Sanchez, F., Lopez-Guerrero, M., Javier-Alvarez, S., Medina-Ramirez, C.R.: A variablestep size nlms algorithm based on the cross-correlation between the squared output error and the near-end input signal. IEEE Trans. Electric. Electron. Eng. 14(8), 1197–1202. IEEE (2019) 10. Tsuda, Y., Shimamura, T.: An improved NLMS algorithm for channel equalization. In: IEEE International Symposium on Circuits and Systems. Proceedings, vol. 5, pp. 353–356. IEEE (2002) 11. Jimaa, S A., Al Saeedi, N., Al-Araji, S., Shubair, R.M.: Performance evaluation of random step-size NLMS in adaptive channel equalization. In: 1st IFIP Wireless Days, pp. 1–5. IEEE (2008)


12. Teja, M.V.S.R., Meghashyam, K., Verma, A.: Comprehensive analysis of LMS and NLMS algorithms using adaptive equalizers. In: International Conference on Communication and Signal Processing, pp. 1101–1104. IEEE (2014) 13. Ren, Q., Bigras, P.: A highly accurate model-free motion control system with a Mamdani fuzzy feedback controller combined with a TSK fuzzy feed-forward controller. J. Intel. Robot. Syst. 86(3–4), 367–379. Springer (2017) 14. Sharma, A.K., Singh, D., Verma, N.K.: Data driven aerodynamic modeling using Mamdani fuzzy inference systems. In: International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC), pp. 359–364. IEEE (2018) 15. Najmurrokhman, A., Kusnandar., Komarudin, U., Sunubroto., Sadiyoko, A., Iskanto, T.Y.: Mamdani based fuzzy logic controller for a wheeled mobile robot with obstacle avoidance capability. In: International Conference on Mechatronics, Robotics and Systems Engineering (MoRSE), pp. 49–53. IEEE (2019) 16. Ng, Y., Mohamad, H., Chuah, T.: Block-based fuzzy step size nlms algorithms for subband adaptive channel equalisation. IET Signal Process. 3(1), 23–32. IEEE (2009) 17. Noronha, R.P.: Adaptive inverse control synthesis subject to sinusoidal disturbance for nonminimum phase plant via FVSS-NLMS algorithm. In: 2021 Australian & New Zealand Control Conference, pp. 179–184. IEEE (2021) 18. Romano, J.M.T., Attuz, R., Cavalcante, C.C., Suyama, R.: Unsupervised signal processing: channel equalization and source separation. CRC Press (2018) 19. Diniz, P.S.R.: Adaptive filtering, vol. 4. Springer (1997) 20. Wang, L.X.: Course in Fuzzy System and Control. Prentice Hall Publications (1997)

Chapter 36

Direct Adaptive Inverse Control Based on Nonlinear Volterra Model via Fractional LMS Algorithm

Rodrigo Possidônio Noronha

Abstract This work addresses the direct adaptive inverse control (DAIC) design via the fractional least mean square (FLMS) algorithm. Since the controller of the DAIC aims to track the plant inverse dynamics as a function of the plant model inverse identification, in this work the controller is based on a Volterra model in order to track nonlinear dynamics of polynomial type. Since the performance of an estimation algorithm is important for updating the estimate of the controller weight vector, the main objective of this work is to analyze the performance of the FLMS algorithm with respect to convergence speed and mean square error (MSE). The proposed analysis was performed on a model containing a polynomial-type nonlinearity, represented by a Nonlinear AutoRegressive with eXogenous inputs (NARX) model. In addition, the analysis was performed in the presence of a sinusoidal disturbance signal and a time-varying reference signal.

36.1 Introduction

Among the various applications based on inverse model identification theory, it is important to highlight the control technique proposed by Widrow and Walach, entitled adaptive inverse control (AIC) [1]. AIC is a control technique whose goal is to obtain a controller that tracks the plant inverse dynamics as a function of the update of the estimate of its weight vector [2]. When the weight vector of the AIC controller is updated as a function of the reference error, obtained between the plant output signal and the reference signal, the direct adaptive inverse control (DAIC) is obtained, proposed

R. P. Noronha (B) Federal University of Maranhão, Graduate Program in Electrical Engineering, São Luís 65080-805, Brazil e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_36


by [3]. All contributions to the DAIC proposed in [4–6] were implemented with the controller based on a linear model. It is important to note that, since for the DAIC the controller is ideally equal to the plant inverse model, the determination of the mathematical representation for the controller is of fundamental importance [7]. For example, if the plant model is nonlinear and the controller is represented by a linear structure, then the tracking of the plant inverse dynamics will be unsatisfactory [8, 9]. Different mathematical representations of nonlinear models have been proposed in the literature, for example, the Hammerstein model [10], the Nonlinear AutoRegressive with eXogenous inputs (NARX) model [11], the Wiener model [12], the Hammerstein–Wiener model [13], and others. Another nonlinear representation widely used in the literature is the Volterra model [14]. Volterra models have been used, for example, in control systems [15, 16], fuzzy systems [17], echo cancelation [18], and others. Volterra models, by definition, are composed of linear terms due to the first-order convolution summation and of nonlinear terms of polynomial type due to the higher order convolution summations [19]. In this work, the DAIC is based on a Volterra model. Although the Volterra model performs well in tracking nonlinear dynamics of polynomial type, a difficulty of this model, which imposes restrictions on the analysis and practical implementation of a control system, is the determination of the number of Volterra kernels [14, 20]. The determination of the number of Volterra kernels is essential for the good performance of the Volterra model [21]. In the literature, a common approach when working with Volterra models is to truncate the series at the second kernel, which yields the second-order Volterra model [21]. Although the Volterra model is nonlinear of polynomial type, it is linear in the weights of the Volterra kernels. Thus, it is possible to estimate the Volterra model weight vector using, for example, estimation algorithms based on the stochastic gradient [19, 22, 23]. Based on fractional calculus, a new version of the least mean square (LMS) algorithm was proposed in [24], entitled fractional least mean square (FLMS). The equation for updating the estimate of the weight vector using the FLMS algorithm is composed of a first-order derivative term and a fractional order derivative term [25]. For the FLMS algorithm, the goal of using a fractional order derivative in the update equation is to obtain a better estimation performance with respect to convergence speed and mean square error (MSE) [24]. Thus, in this work the performance of the FLMS algorithm is analyzed in the DAIC design based on a second-order Volterra model. To date, according to the literature survey performed by the author of this proposal, no performance analysis of the FLMS algorithm in the DAIC design based on a Volterra model has been proposed. The justification for proposing the DAIC based on a Volterra model is that


based on a Volterra model it is possible for the controller to track the plant inverse dynamics through the inverse identification of a plant model containing a nonlinearity of polynomial type. To this end, in this work the plant model containing a nonlinearity of polynomial type, which refers to a variable dissipation electric heater, is of NARX type. In order to obtain results that explore the performance of the FLMS algorithm, a periodic disturbance signal was added to the control signal. In addition, the reference signal is of time-varying type. This paper is structured as follows: in Sect. 36.2, the formulations of the second-order Volterra model are presented; in Sect. 36.2.1, the formulations of the FLMS algorithm are presented; in Sect. 36.3, the mathematical formulations for the DAIC based on the Volterra model are presented; in Sect. 36.4, the computational results obtained are presented; in Sect. 36.5, the conclusion is presented.

36.2 Nonlinear Volterra Model

According to [19], the general form of the m-th order Volterra model is given by:

$$\hat{y}(k) = \sum_{m=1}^{+\infty} \sum_{n_{1}=-\infty}^{+\infty} \cdots \sum_{n_{m}=-\infty}^{+\infty} \theta_{m}(n_{1}, n_{2}, \ldots, n_{m}) \prod_{i=1}^{m} u(k - n_{i}), \quad (36.1)$$

where u(k) is the input signal, ŷ(k) is the output signal, and θ_m(n_1, n_2, ..., n_m) is the m-th Volterra kernel. It is important to note that the Volterra model is not recursive. Truncating the Volterra model (36.1) at the second kernel, the second-order Volterra model is obtained, given by:

$$\hat{y}(k) = \sum_{n_{1}=0}^{N_{1}} \theta_{1}(n_{1})\, u(k - n_{1}) + \sum_{n_{1}=0}^{N_{2}} \sum_{n_{2}=0}^{N_{2}} \theta_{2}(n_{1}, n_{2})\, u(k - n_{1})\, u(k - n_{2}), \quad (36.2)$$

where θ1 (n1 ) is the first-order kernel that composes the first-order convolution summation and θ2 (n1 , n2 ) is the second-order kernel that composes the second-order convolution summation. It is important to note that through the second-order convolution summation, the nonlinear terms of polynomial type of Volterra model are obtained [21, 26, 27].
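To make the structure of (36.2) concrete, the sketch below builds the regressor of a second-order Volterra model with memory N1 = N2 = 2 and evaluates the output as the inner product of (36.19); the upper-triangular ordering of the quadratic terms matches the 9-weight vectors later adopted in (36.26), and the function names are illustrative:

```python
import numpy as np

def volterra_regressor(u, k, N=2):
    # Linear part u(k), ..., u(k-N) plus quadratic products u(k-n1)u(k-n2)
    # with 0 <= n1 <= n2 <= N (symmetric second kernel kept upper-triangular).
    lin = [u[k - n] for n in range(N + 1)]
    quad = [u[k - n1] * u[k - n2]
            for n1 in range(N + 1) for n2 in range(n1, N + 1)]
    return np.array(lin + quad)        # length 3 + 6 = 9 for N = 2

def volterra_output(theta, u, k):
    return theta @ volterra_regressor(u, k)   # y_hat(k) = Theta^T U(k), cf. (36.19)
```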

36.2.1 FLMS Algorithm

For the FLMS algorithm, according to [24], the update of the estimate of the Volterra model weight vector is given by:

$$\Theta(k+1) = \Theta(k) - \mu \frac{\partial J(k)}{\partial \Theta(k)} - \mu_{f} \left(\frac{\partial}{\partial \Theta(k)}\right)^{v} J(k), \quad (36.3)$$

where J(k) is the cost functional described as a function of the squared error e²(k) = (y(k) − ŷ(k))², v is the order of the fractional derivative, y(k) is the system output signal, ŷ(k) is the Volterra model output signal, and μ and μ_f are the step sizes. It is important to note that the weight vector Θ(k) contains the weights of the first-order kernel θ1(n1) and of the second-order kernel θ2(n1, n2). In (36.2), for N1 = N2 = 2, the weight vector Θ(k) ∈ R^{12×1} and the regressor vector U(k) ∈ R^{12×1} are given by:

$$\Theta(k) = [\theta_{1}(0) \ldots \theta_{1}(2)\;\; \theta_{2}(0,0)\; \theta_{2}(0,1) \ldots \theta_{2}(2,1)\; \theta_{2}(2,2)]^{T}$$
$$U(k) = [u(k) \ldots u(k-2)\;\; (u(k))^{2}\; u(k)u(k-1) \ldots u(k-2)u(k-1)\; (u(k-2))^{2}]^{T}. \quad (36.4)$$

As shown in [29], the first-order partial derivative in (36.3), ∂J(k)/∂Θ(k), is given by:

$$\frac{\partial J(k)}{\partial \Theta(k)} = \frac{\partial J(k)}{\partial e(k)} \frac{\partial e(k)}{\partial y(k)} \frac{\partial y(k)}{\partial \Theta(k)} = e(k)\, U(k). \quad (36.5)$$

As shown in [29], the fractional order partial derivative in (36.3) is given by:

$$\left(\frac{\partial}{\partial \Theta(k)}\right)^{v} J(k) = \frac{\partial J(k)}{\partial e(k)} \frac{\partial e(k)}{\partial y(k)} \left(\frac{\partial}{\partial \Theta(k)}\right)^{v} y(k). \quad (36.6)$$

Definition 1 [28]: The fractional derivative of a function f is defined as follows:

$$(D^{v} f)(t) = \frac{1}{\Gamma(k - v)} \left(\frac{d}{dt}\right)^{k} \int_{0}^{t} (t - \tau)^{k - v - 1} f(\tau)\, d\tau \quad (36.7)$$

and, for the power function f(t) = (t − α)^α,

$$D^{v} (t - \alpha)^{\alpha} = \frac{\Gamma(1 + \alpha)}{\Gamma(1 + \alpha - v)} (t - \alpha)^{\alpha - v}, \quad (36.8)$$

where t > 0 and 1 + α − v > 0. It is important to note that D is the differential operator and that the Gamma function is given by:

$$\Gamma(v) = \int_{0}^{\infty} t^{v-1} \exp(-t)\, dt. \quad (36.9)$$

As shown in [29], through Definition 1, it is obtained that (36.6) can be rewritten as:

$$\left(\frac{\partial}{\partial \Theta(k)}\right)^{v} J(k) = -e(k)\, U(k)\, \frac{(\Theta(k))^{1-v}}{\Gamma(2 - v)}. \quad (36.10)$$


With (36.5) and (36.10), it is possible to rewrite (36.3), which is the equation for updating the estimate of the weight vector Θ(k), as follows:

$$\Theta(k+1) = \Theta(k) - \mu\, e(k)\, U(k) - \mu_{f}\, e(k)\, U(k)\, \frac{(\Theta(k))^{1-v}}{\Gamma(2 - v)}. \quad (36.11)$$
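A minimal sketch of the update (36.11), assuming real-valued weights; since the element-wise power (Θ(k))^{1−v} is ill-defined for negative weights, the sketch applies it to |Θ(k)|, a common implementation choice that is not stated in the chapter:

```python
import numpy as np
from math import gamma

def flms_update(theta, u_vec, e, mu=5e-4, mu_f=5e-4, v=0.5):
    # One FLMS step, cf. (36.11): first-order term plus fractional-order term.
    frac = np.abs(theta) ** (1.0 - v) / gamma(2.0 - v)
    return theta - mu * e * u_vec - mu_f * e * u_vec * frac
```

With v = 0.5 and μ = μ_f = 5 × 10⁻⁴, the default arguments match the settings used later in Sect. 36.4.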

36.3 Direct Adaptive Inverse Control

As presented in Sect. 36.1, the plant model P[•], which is a nonlinear and discrete-time stable functional, is described by a NARX model, given by:

$$y(k) = P[y(k-1), \ldots, y(k-n_{y}), u(k), u(k-1), \ldots, u(k-n_{u})], \quad (36.12)$$

assuming that n_y = 2, n_u = 1, and that the plant model is of second degree, it is possible to rewrite (36.12) as:

$$y(k) = c_{0,0} + \sum_{n_{1}=1}^{n_{y}} c_{1,0}(n_{1})\, y(k-n_{1}) + \sum_{n_{1}=1}^{n_{u}} c_{0,1}(n_{1})\, u(k-n_{1}) + \sum_{n_{1}=1}^{n_{y}} \sum_{n_{2}=1}^{n_{y}} c_{2,0}(n_{1},n_{2})\, y(k-n_{1})\, y(k-n_{2}) + \sum_{n_{1}=1}^{n_{y}} \sum_{n_{2}=1}^{n_{u}} c_{1,1}(n_{1},n_{2})\, y(k-n_{1})\, u(k-n_{2}) + \sum_{n_{1}=1}^{n_{u}} \sum_{n_{2}=1}^{n_{u}} c_{0,2}(n_{1},n_{2})\, u(k-n_{1})\, u(k-n_{2}), \quad (36.13)$$

where c_{0,0}, c_{1,0}(n1), c_{0,1}(n1), c_{2,0}(n1, n2), c_{1,1}(n1, n2), and c_{0,2}(n1, n2) are the model kernels. It is important to note that y(k) is the plant output signal and u(k) is the control signal. It is important to remember that the goal of the DAIC is to obtain the controller Ĉ[•] at each time instant through the update of its weight vector, such that the plant inverse dynamics can be tracked. This is only possible through the plant inverse model identification [4]. Since the controller is based on the second-order Volterra model, according to Fig. 36.1, the control signal u(k) is given by:

$$u(k) = \sum_{n_{1}=0}^{2} \theta_{1}(n_{1})\, r(k-n_{1}) + \sum_{n_{1}=0}^{2} \sum_{n_{2}=0}^{2} \theta_{2}(n_{1},n_{2})\, r(k-n_{1})\, r(k-n_{2}), \quad (36.14)$$


where Θ(k) = [θ1(0) ... θ1(2) θ2(0,0) θ2(0,1) ... θ2(2,1) θ2(2,2)]^T ∈ R^{12×1} is the controller weight vector. It is important to note that the first-order Volterra kernel θ1(n1) contains the weights {θ1(0) ... θ1(2)} and that the second-order Volterra kernel θ2(n1, n2) contains the weights {θ2(0,0) θ2(0,1) ... θ2(2,1) θ2(2,2)}. In the form of a difference equation, after developing the summations present in (36.14), the control signal is obtained:

$$u(k) = \theta_{1}(0)r(k) + \theta_{1}(1)r(k-1) + \theta_{1}(2)r(k-2) + \theta_{2}(0,0)(r(k))^{2} + \theta_{2}(0,1)r(k)r(k-1) + \cdots + \theta_{2}(2,1)r(k-2)r(k-1) + \theta_{2}(2,2)(r(k-2))^{2}, \quad (36.15)$$

which, in vector form, allows (36.15) to be rewritten as:

$$u(k) = (R(k))^{T} \Theta(k) = (\Theta(k))^{T} R(k), \quad (36.16)$$

Fig. 36.1 Block diagram for the DAIC based on Volterra model

where R(k) = [r(k) r(k−1) r(k−2) (r(k))² r(k)r(k−1) ... r(k−2)r(k−1) (r(k−2))²]^T ∈ R^{12×1} is the reference signal vector. The update of the estimate of the weight vector Θ(k) is only possible after obtaining the error e_c(k). However, it is previously necessary to update the estimate of the weight vector of the estimated plant model P̂[•]. Since P̂[•] is based on the second-order Volterra model, according to Fig. 36.1, the output signal ŷ(k) of P̂[•] is given by:

$$\hat{y}(k) = \sum_{n_{1}=0}^{2} \pi_{1}(n_{1})\, u(k-n_{1}) + \sum_{n_{1}=0}^{2} \sum_{n_{2}=0}^{2} \pi_{2}(n_{1},n_{2})\, u(k-n_{1})\, u(k-n_{2}), \quad (36.17)$$

where Π(k) = [π1(0) ... π1(2) π2(0,0) π2(0,1) ... π2(2,1) π2(2,2)]^T ∈ R^{12×1} is the weight vector of the estimated plant model. It is important to note that the first-order Volterra kernel π1(n1) contains the weights {π1(0) ... π1(2)} and that the second-order Volterra kernel π2(n1, n2) contains the weights {π2(0,0) π2(0,1) ... π2(2,1) π2(2,2)}. In the form of a difference equation, after developing the summations present in (36.17), it is obtained:

$$\hat{y}(k) = \pi_{1}(0)u(k) + \pi_{1}(1)u(k-1) + \pi_{1}(2)u(k-2) + \pi_{2}(0,0)(u(k))^{2} + \pi_{2}(0,1)u(k)u(k-1) + \cdots + \pi_{2}(2,1)u(k-2)u(k-1) + \pi_{2}(2,2)(u(k-2))^{2}, \quad (36.18)$$

which, in vector form, allows (36.18) to be rewritten as:

$$\hat{y}(k) = (U(k))^{T} \Pi(k) = (\Pi(k))^{T} U(k), \quad (36.19)$$

where U(k) = [u(k) u(k−1) u(k−2) (u(k))² u(k)u(k−1) ... u(k−2)u(k−1) (u(k−2))²]^T ∈ R^{12×1} is the control signal vector. At each time instant, the update of the estimate of the weight vector Π(k) is only possible after obtaining the residual error e_mod(k). According to Fig. 36.1, the residual error e_mod(k) is given by:

$$e_{mod}(k) = y(k) - \hat{y}(k), \quad (36.20)$$

which, in the form of a difference equation, can be rewritten as:

$$e_{mod}(k) = c_{0,0} + \sum_{n_{1}=1}^{n_{y}} c_{1,0}(n_{1}) y(k-n_{1}) + \sum_{n_{1}=1}^{n_{u}} c_{0,1}(n_{1}) u(k-n_{1}) + \sum_{n_{1}=1}^{n_{y}} \sum_{n_{2}=1}^{n_{y}} c_{2,0}(n_{1},n_{2}) y(k-n_{1}) y(k-n_{2}) + \sum_{n_{1}=1}^{n_{y}} \sum_{n_{2}=1}^{n_{u}} c_{1,1}(n_{1},n_{2}) y(k-n_{1}) u(k-n_{2}) + \sum_{n_{1}=1}^{n_{u}} \sum_{n_{2}=1}^{n_{u}} c_{0,2}(n_{1},n_{2}) u(k-n_{1}) u(k-n_{2}) - \sum_{n_{1}=0}^{2} \pi_{1}(n_{1}) u(k-n_{1}) - \sum_{n_{1}=0}^{2} \sum_{n_{2}=0}^{2} \pi_{2}(n_{1},n_{2}) u(k-n_{1}) u(k-n_{2}). \quad (36.21)$$

The error e_c(k) can only be obtained after the estimate of the weight vector Π(k) is updated via the FLMS algorithm. Thus, the error e_c(k) is given by:

$$e_{c}(k) = \sum_{n_{1}=0}^{2} \pi_{1}(n_{1})\, e_{ref}(k-n_{1}) + \sum_{n_{1}=0}^{2} \sum_{n_{2}=0}^{2} \pi_{2}(n_{1},n_{2})\, e_{ref}(k-n_{1})\, e_{ref}(k-n_{2}), \quad (36.22)$$

which, in the form of a difference equation, can be rewritten as:

$$e_{c}(k) = \pi_{1}(0)e_{ref}(k) + \pi_{1}(1)e_{ref}(k-1) + \pi_{1}(2)e_{ref}(k-2) + \pi_{2}(0,0)e_{ref}^{2}(k) + \pi_{2}(0,1)e_{ref}(k)e_{ref}(k-1) + \cdots + \pi_{2}(2,1)e_{ref}(k-2)e_{ref}(k-1) + \pi_{2}(2,2)e_{ref}^{2}(k-2), \quad (36.23)$$

where the reference error is e_ref(k) = r(k) − y(k), with r(k) the reference signal. In the form of a difference equation, the reference error e_ref(k) can be rewritten as:

$$e_{ref}(k) = r(k) - c_{0,0} - \sum_{n_{1}=1}^{n_{y}} c_{1,0}(n_{1}) y(k-n_{1}) - \sum_{n_{1}=1}^{n_{u}} c_{0,1}(n_{1}) u(k-n_{1}) - \sum_{n_{1}=1}^{n_{y}} \sum_{n_{2}=1}^{n_{y}} c_{2,0}(n_{1},n_{2}) y(k-n_{1}) y(k-n_{2}) - \sum_{n_{1}=1}^{n_{y}} \sum_{n_{2}=1}^{n_{u}} c_{1,1}(n_{1},n_{2}) y(k-n_{1}) u(k-n_{2}) - \sum_{n_{1}=1}^{n_{u}} \sum_{n_{2}=1}^{n_{u}} c_{0,2}(n_{1},n_{2}) u(k-n_{1}) u(k-n_{2}). \quad (36.24)$$

36.4 Computational Results

The objective of this section is to present the results and the performance analysis of the FLMS algorithm in the DAIC design. The plant model is of NARX type, referring to a variable dissipation electric heater. For more information about the plant and its model, it is recommended to consult Ref. [30]. The second-degree plant model is given by:

$$y(k) = 0.0007 - 0.3815y(k-1) + 0.3988y(k-2) - 0.8916u(k-1)y(k-1) + 0.6362u(k-2)y(k-1) - 0.8193u(k-1)y(k-2) + 0.6832u(k-2)y(k-2) + 0.3706u(k-1)^{2} + 0.5559u(k-1)u(k-2) - 0.2407u(k-2)^{2}, \quad (36.25)$$

where the sampling period was set equal to T_a = 6 s. In addition, the total number of samples used to obtain the computational results was set equal to 81000. For the purpose of performance comparison, the results obtained through the FLMS and LMS algorithms in the DAIC design based on the second-order Volterra model were compared. For the FLMS algorithm, μ and μ_f were both set equal to 5 × 10⁻⁴, and v = 0.5. Similarly, for the LMS algorithm, μ was set equal to 5 × 10⁻⁴. After extensive computer simulations, the best results with respect to the performance of the FLMS and LMS algorithms were obtained when the weight vectors Θ(k) and Π(k) are given by:

$$\Theta(k) = [\theta_{1}(0)\; \theta_{1}(1)\; \theta_{1}(2)\; \theta_{2}(0,0)\; \theta_{2}(0,1)\; \theta_{2}(0,2)\; \theta_{2}(1,1)\; \theta_{2}(1,2)\; \theta_{2}(2,2)]^{T} \in R^{9}$$
$$\Pi(k) = [\pi_{1}(0)\; \pi_{1}(1)\; \pi_{1}(2)\; \pi_{2}(0,0)\; \pi_{2}(0,1)\; \pi_{2}(0,2)\; \pi_{2}(1,1)\; \pi_{2}(1,2)\; \pi_{2}(2,2)]^{T} \in R^{9}, \quad (36.26)$$

Since the weight vectors Θ(k) and Π(k) are defined as presented in (36.26), the difference equations (36.15), (36.18), and (36.23) were rewritten, respectively, as:

$$u(k) = \theta_{1}(0)r(k) + \theta_{1}(1)r(k-1) + \theta_{1}(2)r(k-2) + \theta_{2}(0,0)(r(k))^{2} + \theta_{2}(0,1)r(k)r(k-1) + \theta_{2}(0,2)r(k)r(k-2) + \theta_{2}(1,1)(r(k-1))^{2} + \theta_{2}(1,2)r(k-1)r(k-2) + \theta_{2}(2,2)(r(k-2))^{2}, \quad (36.27)$$

$$\hat{y}(k) = \pi_{1}(0)u(k) + \pi_{1}(1)u(k-1) + \pi_{1}(2)u(k-2) + \pi_{2}(0,0)(u(k))^{2} + \pi_{2}(0,1)u(k)u(k-1) + \pi_{2}(0,2)u(k)u(k-2) + \pi_{2}(1,1)(u(k-1))^{2} + \pi_{2}(1,2)u(k-1)u(k-2) + \pi_{2}(2,2)(u(k-2))^{2}, \quad (36.28)$$

$$e_{c}(k) = \pi_{1}(0)e_{ref}(k) + \pi_{1}(1)e_{ref}(k-1) + \pi_{1}(2)e_{ref}(k-2) + \pi_{2}(0,0)(e_{ref}(k))^{2} + \pi_{2}(0,1)e_{ref}(k)e_{ref}(k-1) + \pi_{2}(0,2)e_{ref}(k)e_{ref}(k-2) + \pi_{2}(1,1)(e_{ref}(k-1))^{2} + \pi_{2}(1,2)e_{ref}(k-1)e_{ref}(k-2) + \pi_{2}(2,2)(e_{ref}(k-2))^{2}. \quad (36.29)$$

Since for the DAIC the main goal is that the controller tracks the plant inverse dynamics at each time instant, the performance analysis of the FLMS and LMS algorithms was only performed as a function of the controller weight vector Θ(k). The convergence speed was analyzed through the convergence speed of the plant output signal y(k) to the reference signal r(k). The steady-state MSE, on the other hand, was analyzed through the MSE of the error e_c(k).
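As an executable illustration of the identification branch of Fig. 36.1, the sketch below excites the NARX plant (36.25), evaluates the 9-weight second-order Volterra model (36.28), and updates Π(k) with the FLMS rule (36.11). The excitation signal, the initialization, and the error sign (chosen so that (36.11) performs a gradient descent) are illustrative assumptions, not settings reported in the chapter:

```python
import numpy as np
from math import gamma

def reg9(x, k):
    # [x(k), x(k-1), x(k-2), x(k)^2, x(k)x(k-1), x(k)x(k-2),
    #  x(k-1)^2, x(k-1)x(k-2), x(k-2)^2], the ordering of (36.26)
    a, b, c = x[k], x[k - 1], x[k - 2]
    return np.array([a, b, c, a * a, a * b, a * c, b * b, b * c, c * c])

rng = np.random.default_rng(0)
N, v, mu, mu_f = 20000, 0.5, 5e-4, 5e-4
# Small-signal excitation around a nominal operating point keeps the
# recursion (36.25) well-behaved; this input is an illustrative choice.
u = np.clip(0.5 + 0.1 * rng.standard_normal(N), 0.0, 1.0)
y = np.zeros(N)
pi_w = np.zeros(9)

for k in range(2, N):
    y[k] = (0.0007 - 0.3815 * y[k-1] + 0.3988 * y[k-2]
            - 0.8916 * u[k-1] * y[k-1] + 0.6362 * u[k-2] * y[k-1]
            - 0.8193 * u[k-1] * y[k-2] + 0.6832 * u[k-2] * y[k-2]
            + 0.3706 * u[k-1]**2 + 0.5559 * u[k-1] * u[k-2]
            - 0.2407 * u[k-2]**2)                 # plant (36.25)
    U = reg9(u, k)
    e = pi_w @ U - y[k]   # negated residual -e_mod(k), cf. (36.20)
    pi_w = pi_w - mu * e * U - mu_f * e * U * np.abs(pi_w)**(1 - v) / gamma(2 - v)

print("final |e_mod|:", abs(e))
```

Since the Volterra model (36.28) has no output feedback terms, it cannot match the NARX plant exactly; the residual settles at a small but nonzero value.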


In Fig. 36.2, the sinusoidal disturbance signal n(k) is shown. The plant output signal y(k), obtained as a function of the update of the estimates of the weight vectors Θ(k) and Π(k) performed by the FLMS and LMS algorithms, is shown in Fig. 36.3. It is important to note that, in order to obtain nonconservative results in the performance analysis of the FLMS and LMS algorithms, in addition to the sinusoidal disturbance signal n(k), a time-varying reference signal r(k) was used. It is possible to note through Fig. 36.3 that the plant output signal y(k), for the FLMS algorithm, achieved a satisfactory and fast convergence to the reference signal r(k), even with a sinusoidal disturbance signal n(k) added to the control signal u(k). On the other hand, the LMS algorithm showed a lower performance with respect to convergence speed when compared to the FLMS algorithm. In Fig. 36.4, the control signal u(k) developed by the controller is shown, obtained at each update of the controller weight vector Θ(k) via the FLMS and LMS algorithms. For the performance analysis with respect to the steady-state MSE, according to the MSE of the error e_c(k) shown in Fig. 36.5, it is possible to note the superior performance obtained by the FLMS algorithm, where for most time instants the steady-state MSE is lower when compared to the steady-state MSE obtained by the LMS algorithm. Therefore, it is possible to note the superior

Fig. 36.2 Sinusoidal disturbance signal n(k)

Fig. 36.2 Sinusoidal disturbance signal n(k)

120 105 90 75 60 45 30 15 0 0

Reference Signal LMS FLMS

1

2

3

4

k Fig. 36.3 Plant output signal y(k)

5

6

7

8 104


Fig. 36.4 Control signal u(k)

Fig. 36.5 MSE of the error ec(k)

performance obtained by the FLMS algorithm, with respect to convergence speed and steady-state MSE, when compared to the LMS algorithm.

36.5 Conclusion

The objective of this work was the performance analysis of the FLMS algorithm, with respect to convergence speed and steady-state MSE, in the design of the DAIC based on a second-order Volterra model applied to the control of a plant represented by a NARX model with a nonlinearity of polynomial type. In addition, as a complexity increment, the performance analysis was performed in the presence of a sinusoidal disturbance signal added to the control signal and of a time-varying reference signal. A comparison was performed between the results obtained by the FLMS and LMS algorithms, where the superior performance obtained by the FLMS algorithm was noted, such that, even in the presence of the sinusoidal disturbance signal and time-varying reference signal, through the tracking of the plant inverse dynamics, the plant output signal still converged to the reference signal. Thus, the


DAIC design based on the second-order Volterra model via the FLMS algorithm proved to be efficient in controlling plant models with nonlinearity of polynomial type.

References 1. Widrow, B., Walach, E.: Adaptive signal processing for adaptive control. IFAC Proc. Volumes 16(9), 7–12. Elsevier (1983) 2. Widrow, B., Walach, E.: Adaptive Inverse Control: A Signal Processing Approach. Reissue ed. John Wiley & Sons, Inc (2008) 3. Shafiq, M. A., Shafiq, M., Ahmed, N.: Closed loop direct adaptive inverse control for linear plants. Sci. World J. 2014. Hindawi (2014) 4. Shafiq, M., Shafiq, A. M., Yousef, A. H.: Stability and convergence analysis of direct adaptive inverse control. Complexity 2017. Hindawi (2017) 5. Shafiq, M., Al Lawati, M., Yousef, H.: A simple direct adaptive inverse control structure. In: Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–4. IEEE (2016) 6. Noronha, R. P.: Adaptive inverse control synthesis subject to sinusoidal disturbance for nonminimum phase plant via FVSS-NLMS algorithm. In: 2021 Australian & New Zealand Control Conference, pp. 179–184. IEEE (2021) 7. Ahmed, O.H.A.: High performance speed control of direct current motors using adaptive inverse control. WSEAS Trans. Syst. Control. 2, 54–63 (2012) 8. Skrjanc, I., Iglesias, A., Sanchis, A., Leite, D., Lughofer, E., Gomide, F.: Evolving fuzzy and neuro-fuzzy approaches in clustering. Inf. Sci. 490, 344 – 368. Elsevier (2019) 9. Ribeiro, H.A., Tiels, K., Umenberger, J., Schon, B.T., Aguirre, A.L.: On the smoothness of nonlinear system identification. Automatica 121, 109158. Elsevier (2020) 10. Rayouf, Z., Ghorbel, C., Braiek, N.B.: A new Hammerstein model control strategy: feedback stabilization and stability analysis. Int. J. Dyn. Control 7(4), 1453–1461. Springer (2019) 11. Gandhmal, D., Kumar, K.: Wrapper-enabled feature selection and CPLM-based NARX model for stock market prediction. Comput. J. 64(2), 169–184. Oxford University Press (2021) 12. Kang, K.X., Don, Y.H., Jiu, Z.C.: A novel extreme learning machine-based HammersteinWiener model for complex nonlinear industrial processes. Neurocomputing 358, 246–254. Elsevier (2019) 13. Hammar, K., Djamah, T., Bettayeb, M.: Nonlinear system identification using fractional Hammerstein-Wiener models. Nonlinear Dyn. 98(3), 2327–2338. Springer (2019) 14. Doyler, J.F., Pearson, K.R., Ogunnaike, A.B.: Identification and Control Using Volterra Models. Springer (2002) 15. Jajarmi, A., Baleanu, D.: On the fractional optimal control problems with a general derivative operator. Asian J. Control 23(2), 1062–1071. Wiley Online Library (2021) 16. Noronha, R.P.: Nonlinear adaptive inverse control synthesis based on RLS Volterra model. In: 3rd International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA), pp. 143–148. IEEE (2021) 17. Zhang, X., Zhao, Z.: Normalization and stabilization for rectangular singular fractional order TS fuzzy systems. Fuzzy Sets Syst. 381, 140–153. Elsevier (2020) 18. Guerin, A., Faucon, G., Le Bouquin-Jeannes, R.: Nonlinear acoustic echo cancellation based on Volterra filters. IEEE Trans. Speech Audio Process. 11(6), 672 – 683. IEEE (2003) 19. Mortensen, R.: Nonlinear system theory: The volterra/wiener approach. JSTOR (1983) 20. Schoukens, J., Ljung, L.: Nonlinear system identification: A user-oriented road map. IEEE Control Syst. Mag. 39(6), 28–99. IEEE (2019) 21. Shiki, B. S., Lopes, V., da Silva. S.: Identification of nonlinear structures using discrete-time volterra series. J. Braz. Soc. Mech. Scie. Eng. 36(3), 523–532. Springer (2014) 22. 
Kapgate, S.N., Gupta, S., Sahoo, A.K.: Adaptive Volterra modeling for nonlinear systems based on LMS variants. In: 5th International Conference on Signal Processing and Integrated Networks (SPIN), pp. 258–263. IEEE (2018)


23. Mayyas, K., Afeef, L.: A variable step-size partial-update normalized least mean square algorithm for second-order adaptive Volterra filters. Circ. Syst. Signal Process. 39, 6073–6097. Springer (2020) 24. Zahoor, R.M.A., Qureshi, I.M.: Modified least mean square algorithm using fractional derivative and its application to system identification. Eur. J. Sci. Res. 35(9), 1244–1248 (2015) 25. Ahmand, J., Zubair, M., Rizvi, S.S.H., Shaikh, M.S.: Design and analysis of the fractionalorder complex least mean square (FoCLMS) algorithm. Circ. Syst. Signal Process. 40, 1–30. Springer (2021) 26. Da Silva, S.: Non-linear model updating of a three-dimensional portal frame based on wiener series. Int. J. Non-Linear Mech. 46(1), 312–320. Elsevier (2011) 27. Da Silva, S., Cogan, S., Foltête, E.: Nonlinear identification in structural dynamics based on wiener series and Kautz filters. Mech. Syst. Signal Process. 24(1), 52–58. Elsevier (2010) 28. Atangana, A., Gómez-Aguilar, F.F.: Numerical approximation of Riemann-Liouville definition of fractional derivative: from Riemann-Liouville to Atangana-Baleanu. Numer. Methods Partial Diff. Equ. 34(5), 1502–1523. Wiley Online Library (2018) 29. Ahmad, J., Usman, M., Khan, S., Syed, H. J.: Rvp-flms: a robust variable power fractional lms algorithm. In: 6th International Conference on Control System, Computing and Engineering (ICCSCE), pp. 494–497. IEEE (2016) 30. Verly, A.: Caracterização de agrupamentos de termos na seleção de estrutura de modelos polinomiais narx. Master’s thesis, Universidade Federal de Minas Gerais (2012)

Chapter 37

Indirect Adaptive Inverse Control Synthesis via Fractional Least Mean Square Algorithm

Rodrigo Possidônio Noronha

Abstract This work has as its main objective the performance analysis of the fractional least mean square (FLMS) algorithm, with respect to convergence speed and steady-state mean square error (MSE), in the indirect adaptive inverse control (IAIC) design. Since the main goal of the IAIC, through the inverse identification of the plant model, is to obtain a controller that tracks the plant inverse dynamics at each update of the controller weight vector, the performance analysis of the estimation algorithm is of fundamental importance. As a complexity scenario, and aiming to obtain nonconservative results, the performance analysis was performed in the IAIC design for a non-minimum phase plant in the presence of a sinusoidal reference signal and a sinusoidal disturbance signal.

37.1 Introduction

Through model inverse identification, it is possible to control the dynamics of a plant without using a control signal via feedback [1]. In 2008, a new control technique based on model inverse identification was proposed by Widrow and Walach, entitled indirect adaptive inverse control (IAIC) [2]. The goal of the IAIC is to obtain a controller that tracks the plant inverse dynamics at every time instant by updating its weight vector through the plant model inverse identification [2, 3]. For the IAIC, the controller weight vector is updated indirectly with respect to the reference error. Since then, some contributions to the IAIC theory have been proposed [4–7]. In [8], a stability proof of the IAIC with the controller based on an adaptive finite impulse response (FIR) filter was proposed. In [9], the IAIC design based on fuzzy systems in the presence of unknown inputs was proposed. In [10], the IAIC design based on a neuro-fuzzy network was proposed.

R. P. Noronha (B) Federal University of Maranhão, Graduate Program in Electrical Engineering, São Luís 65080-805, Brazil e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_37


As proposed by [2, 8], in this work the IAIC controller is based on an adaptive FIR filter. The justification for using a linear representation of FIR type for the controller is that its weight vector can be estimated via algorithms based on the stochastic gradient [11]. Generally, in the literature, for contributions to the IAIC theory with the controller defined by a linear representation, the controller weight vector is estimated via the least mean square (LMS) [12] and normalized least mean square (NLMS) [8] algorithms. All the algorithms based on the stochastic gradient cited above, used for estimating the controller weight vector, are based on the definitions of traditional calculus with non-fractional order derivatives [13]. Through fractional calculus, it is possible to obtain nonconservative formulations due to the use of fractional order derivatives [14–17]. The fractional calculus theory has been used in different areas of mathematics and engineering, such as control theory, computational intelligence, function optimization, and others [18–22]. Contributions using fractional calculus theory have also been proposed in the theory of estimation algorithms based on the stochastic gradient, such as in [13], where a new version of the LMS algorithm was proposed, entitled fractional least mean square (FLMS). The motivation for proposing the LMS algorithm in the fractional context is to improve its performance during the update of the estimate of the weight vector, with respect to convergence speed and steady-state mean square error (MSE) [23, 24]. According to the literature survey performed by the author of the contribution proposed in this work, the specialized literature on the IAIC and on estimation algorithms based on the stochastic gradient does not contain any performance analysis of the FLMS algorithm in the IAIC design. In this sense, the main contribution of this work is the performance analysis of the FLMS algorithm in the IAIC design, with respect to convergence speed and steady-state MSE. In order to obtain nonconservative results, the performance of the FLMS algorithm was analyzed in the IAIC design applied to a plant with a non-minimum phase model in the presence of a sinusoidal reference signal. In addition, a sinusoidal disturbance signal added to the control signal was used. The results obtained were compared with the results obtained through the LMS algorithm. This paper is organized as follows: in Sect. 37.2, the mathematical formulations for the IAIC are presented; in Sect. 37.3, the FLMS algorithm is shown; in Sect. 37.4, the computational results obtained are presented; in Sect. 37.5, the conclusion is presented. Notation: matrices and vectors are denoted by bold letters. The superscript (•)^T denotes transposition.

37.2 Indirect Adaptive Inverse Control

Let A(q⁻¹) and B(q⁻¹) be the polynomials that form the discrete-time stable plant model P(q⁻¹), such that P(q⁻¹) = B(q⁻¹)/A(q⁻¹). The control signal u(k) and the plant output signal y(k) are related as follows:


Fig. 37.1 Block diagram for the IAIC

A(q −1 )y(k) = q −d B(q −1 )u(k),

(37.1)

where A(q⁻¹) = 1 + a₁q⁻¹ + a₂q⁻² + ··· + aₙq⁻ⁿ, B(q⁻¹) = 1 + b₁q⁻¹ + b₂q⁻² + ··· + b_m q⁻ᵐ, and n, m ∈ N. The polynomials A(q⁻¹) and B(q⁻¹) are described as a function of the delay operator q⁻¹, such that, for example, q⁻ᵐ u(k) = u(k − m), where k ∈ N is the k-th time instant. As proposed by [2, 8], in this work the controller for monovariable, causal, and non-minimum phase plants is based on an adaptive FIR filter. As can be seen in the block diagram shown in Fig. 37.1, the controller is represented by Ĉ(q⁻¹). It is important to note that at each time instant the weight vector of Ĉ(q⁻¹) is updated so that the plant inverse dynamics can be tracked through the plant model inverse identification. Also, it is important to note that a copy of the controller is placed to the left of the plant model, such that the control signal u(k) is the output signal of Ĉ_copy(q⁻¹). At each time instant the controller weight vector Θ(k) = [θ₀ θ₁ ... θ_{M−1}]^T ∈ R^{M×1} is estimated, such that the output signal of Ĉ(q⁻¹) is given by:

$$y_{c}(k) = \hat{C}(q^{-1})\, y(k), \quad (37.2)$$

Developing (37.2), since Ĉ(q⁻¹) = θ₀ + θ₁q⁻¹ + θ₂q⁻² + ··· + θ_{M−1}q^{−M+1}, the following difference equation is obtained:

$$y_{c}(k) = \theta_{0} y(k) + \theta_{1} y(k-1) + \cdots + \theta_{M-1} y(k-M+1), \quad (37.3)$$

which can be rewritten in vector form as:

$$y_{c}(k) = (Y(k))^{T} \Theta(k) = (\Theta(k))^{T} Y(k), \quad (37.4)$$

where Y(k) = [y(k) y(k−1) ... y(k−M+1)]^T ∈ R^{M×1} is the plant output vector.


After the controller weight vector Θ(k) has been obtained, the control signal u(k) is obtained, given by:

$$u(k) = \hat{C}_{copy}(q^{-1})\, r(k), \quad (37.5)$$

Developing (37.5), since Ĉ(q⁻¹) = θ₀ + θ₁q⁻¹ + θ₂q⁻² + ··· + θ_{M−1}q^{−M+1}, the following difference equation is obtained:

$$u(k) = \theta_{0} r(k) + \theta_{1} r(k-1) + \cdots + \theta_{M-1} r(k-M+1), \quad (37.6)$$

which can be rewritten in vector form as:

$$u(k) = (R(k))^{T} \Theta(k) = (\Theta(k))^{T} R(k), \quad (37.7)$$

where R(k) = [r(k) r(k−1) ... r(k−M+1)]^T ∈ R^{M×1} is the reference signal vector. In order to update the estimate of the weight vector Θ(k) at each time instant, it is necessary to obtain the error signal e(k), given by:

$$e(k) = q^{-L}\, \hat{C}_{copy}(q^{-1})\, r(k) - \hat{C}(q^{-1})\, y(k) = \left[q^{-L} - \hat{C}(q^{-1})\, P(q^{-1})\right] u(k), \quad (37.8)$$

Developing (37.8), since P(q⁻¹) = B(q⁻¹)/A(q⁻¹) and Ĉ(q⁻¹) = θ₀ + θ₁q⁻¹ + θ₂q⁻² + ··· + θ_{M−1}q^{−M+1}, the following difference equation is obtained:

$$e(k) = u(k-L) - \left[\theta_{0} y(k) + \cdots + \theta_{M-1} y(k-M+1)\right], \quad (37.9)$$

where L, according to [8], is given by:

$$L \cong \frac{M + d + m}{2}, \quad (37.10)$$

and the reference error e_ref(k) is given by:

$$e_{ref}(k) = y_{d}(k) - y(k) = q^{-L} r(k) - P(q^{-1})\, u(k), \quad (37.11)$$

such that, developing (37.11), since P(q⁻¹) = B(q⁻¹)/A(q⁻¹), the following difference equation is obtained:

$$e_{ref}(k) = r(k-L) - u(k) - \cdots - b_{m} u(k-m) + a_{1} y(k-1) + \cdots + a_{n} y(k-n). \quad (37.12)$$
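A small sketch of how the identification error (37.9) is formed and used, with the FIR controller of order M = 10; d and m are taken from the plant model (37.22) of Sect. 37.4, the buffer handling is an illustrative assumption, and the update shown is the plain LMS form (37.13) of the next section:

```python
import numpy as np

M, d, m = 10, 0, 1            # controller order; d and m read from the plant (37.22)
L = round((M + d + m) / 2)    # (37.10): L ~ 5.5, rounded to 6 as used in Sect. 37.4

def iaic_error(theta, y_buf, u_buf, k):
    # e(k) = u(k - L) - Theta^T Y(k), cf. (37.9)
    Y = y_buf[k - M + 1:k + 1][::-1]     # [y(k), ..., y(k - M + 1)]
    return u_buf[k - L] - theta @ Y, Y

def lms_step(theta, y_buf, u_buf, k, mu=5e-4):
    e, Y = iaic_error(theta, y_buf, u_buf, k)
    return theta + mu * e * Y            # LMS update (37.13)
```

The FLMS version replaces the last line by the update (37.21), adding the fractional term μ_f e(k)Y(k)(Θ(k))^{1−v}/Γ(2 − v).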


37.3 Fractional Least Mean Square Algorithm

Through the LMS algorithm, the update of the estimate of the controller weight vector Θ(k) is given by [11]:

$$\Theta(k+1) = \Theta(k) - \frac{1}{2}\mu \nabla_{\Theta(k)}(e^{2}(k)) = \Theta(k) + \mu\, e(k)\, Y(k), \quad (37.13)$$

where ∇_{Θ(k)}(e²(k)) = −2e(k)Y(k), e(k) = u(k−L) − (Y(k))^T Θ(k), and μ is the step size. According to [13], the update of the estimate of the controller weight vector Θ(k) through the FLMS algorithm is given by:

$$\Theta(k+1) = \Theta(k) - \mu \frac{\partial J(k)}{\partial \Theta(k)} - \mu_{f} \left(\frac{\partial}{\partial \Theta(k)}\right)^{v} J(k), \quad (37.14)$$

where v ∈ (0, 1) is the order of the fractional derivative [14] and J(k) is the cost functional based on the squared error e²(k), whose gradient is minimized through the update of the estimate of the controller weight vector Θ(k). The partial derivatives described in (37.14) are developed below. The first-order partial derivative ∂J(k)/∂Θ(k) is given by [24]:

$$\frac{\partial J(k)}{\partial \Theta(k)} = \frac{\partial J(k)}{\partial e(k)} \frac{\partial e(k)}{\partial y(k)} \frac{\partial y(k)}{\partial \Theta(k)} = e(k)\, Y(k), \quad (37.15)$$

while the fractional order partial derivative (∂/∂Θ(k))^v J(k) results in [24]:

$$\left(\frac{\partial}{\partial \Theta(k)}\right)^{v} J(k) = \frac{\partial J(k)}{\partial e(k)} \frac{\partial e(k)}{\partial y(k)} \left(\frac{\partial}{\partial \Theta(k)}\right)^{v} y(k). \quad (37.16)$$



d dt

v  t (t − τ )k−v−1 f (τ )dτ

(37.17)

0

(1 + α) (t − α)α−v , (1 + α − v)

(37.18)

1 + α − v > 0, D is the differential operator and t > 0. The Gamma (v) is given by:

488

R. P. Noronha

∞ (v) = t v−1 exp(−t)dt,

(37.19)

0

According to [13], through Definition 1, the fractional order partial derivative (37.16) is rewritten as: 

∂ ∂(k)

v J (k) = −e(k)Y(k)

((k))1−v . (2 − v)

(37.20)

As proposed in [13], substituting the results obtained for the partial derivatives (37.15) and (37.20) into (37.7), it is obtained the equation for updating the estimate of the controller weight vector (k), given by: (k + 1) = (k) − μe(k)Y(k) − μ f e(k)Y(k)

((k))1−v , (2 − v)

(37.21)

37.4 Computational Results In this section, the objective is to present the results obtained through IAIC design and performance analysis of FLMS algorithm in the estimation of the controller weight vector (k). As a complexity scenario, the IAIC design was performed for a non-minimum phase plant in the presence of a sinusoidal disturbance signal and sinusoidal reference signal. In this work, the plant model P(q −1 ) is given by: y(k) =

1+

1.8q −1

1 + 2q −1 (u(k) + n(k)), + 1.07q −2 + 0.21q −3

(37.22)

it is possible to note that the plant is non-minimum phase since its model contains a zero located at −2. In addition, the plant model, discrete time stable since, contains three poles located at −0.7, −0.6, and −0.5. It is important to note that the plant model was obtained in [26]. The sampling period was set equal to Ta = 0.025 s. Since the total simulation time was set equal to 40 s, then the total number of time instants k is equal to 1600. It is important to note that the results obtained through FLMS and LMS algorithms, with respect the performance analysis in the IAIC design, were compared. With respect to the FLMS algorithm, the step sizes μ and μ F were set equal to 5 × 10−4 . The value order of the fractional derivative was set equal to v = 0.5. With respect to the LMS algorithm, the step size μ was set equal 5 × 10−4 . The sinusoidal √ disturbance signal was obtained in [26], set equal to n(k) = −3 + cos(0.5 k)e−0.1k − 2sen(π k/20) and shown in Fig. 37.2. The conˆ −1 ) was set equal to M = 10. For the delay block q −L , troller order Cˆ copy (q −1 ) = C(q the value of L was set equal to 6. In addition to the sinusoidal disturbance signal,

Disturbance Signal

37 Indirect Adaptive Inverse Control Synthesis …

489

-1 -2 -3 -4 -5 0

5

10

15

20

25

30

35

40

Time [s] Fig. 37.2 Sinusoidal disturbance signal n(k)

in order to obtain nonconservative results and thus exploit the ability to track the inverse plant dynamics through the plant model inverse identification at each update of the controller weight vector (k) via FLMS and LMS algorithms, the reference signal is of sinusoidal type with frequency f = 0.1 Hz, given by: ⎧ 5sin(ωt), ⎪ ⎪ ⎪ ⎪ 9sin(ωt), ⎪ ⎪ ⎪ ⎪ 15sin(ωt), ⎪ ⎪ ⎨ 12sin(ωt), r (k) = 5sin(ωt), ⎪ ⎪ ⎪ ⎪ 9sin(ωt), ⎪ ⎪ ⎪ ⎪ 15sin(ωt), ⎪ ⎪ ⎩ 12sin(ωt),

1 ≤ k ≤ 200 201 ≤ k ≤ 400 401 ≤ k ≤ 600 601 ≤ k ≤ 800 801 ≤ k ≤ 1000 1001 ≤ k ≤ 1200 1201 ≤ k ≤ 1400 1401 ≤ k ≤ 1600,

if if if if if if if if

(37.23)

Plant Output Signal

where t = Ta k and ω = 2π f . As proposed by [6, 7], the analysis of convergence speed of the controller weight vector (k) was performed through the convergence speed of y(k) to r (k). As proposed by [6, 7], the performance analysis, with respect to the steady-state MSE,

20 10 0 -10 Reference Signal LMS FLMS

-20 -30 0

5

10

15

20

Time [s] Fig. 37.3 Plant output signal y(k)

25

30

35

40

490

R. P. Noronha 10 LMS FLMS

8

eref

6 4 2 0 -2 0

5

10

15

20

25

30

35

40

Time [s] Fig. 37.4 Reference error eref (k)

Control Signal

20 0 -20 LMS FLMS

-40 0

5

10

15

20

25

30

35

40

Time [s] Fig. 37.5 Control signal u(k)

was performed through MSE of the error e(k). With respect to Fig. 37.3, it is possible to note that the plant output signal y(k), obtained through IAIC with the update of the estimate of the controller weight vector (k) performed by the FLMS algorithm, quickly converged to the reference signal r (k). This result demonstrates that the controller was able to satisfactorily track the plant inverse dynamics, even with the sinusoidal disturbance signal n(k) added to the control signal u(k), at each time instant by the update of the estimate of the controller weight vector (k), through the plant model inverse identification. When compared to the LMS algorithm, it is possible to note, with respect to convergence speed, which an unsatisfactory result was obtained. In addition, the satisfactory result obtained by the FLMS algorithm, with respect to convergence speed, can also be noted through the reference error eref (k), shown in Fig. 37.4. The control signal u(k), developed through the update of the estimate of the controller weight vector (k) by the FLMS and LMS algorithms, is shown in Fig. 37.5. With respect to performance analysis of the steady-state MSE, the FLMS algorithm obtained a lower steady-state MSE when compared to the LMS algorithm, as can be noted in Fig. 37.6. Therefore, as proposed by [13], by using the fractional order

37 Indirect Adaptive Inverse Control Synthesis …

491

0.06

MSE

LMS FLMS

0.04

0.02

0 0

5

10

15

20

25

30

35

40

Time [s] Fig. 37.6 MSE of e(k)

derivative in the proposed equation for updating the estimate of the controller weight vector (k), it was possible to obtain satisfactory results in the IAIC design obtained by the FLMS algorithm with respect to convergence speed and steady-state MSE.

37.5 Conclusion In this work, it was performed the performance analysis of FLMS algorithm in the IAIC design applied to the control of a non-minimum phase plant in the presence of a sinusoidal disturbance signal and sinusoidal reference signal. The complexities inserted in the control design allowed to obtain nonconservative results in the performance analysis of FLMS algorithm. Given the complexity scenario, it was possible to note the superior performance of FLMS algorithm, with respect to convergence speed and steady-state MSE, when compared to the LMS algorithm. On the other hand, it is possible to cite some of the problems referring to the FLMS algorithm, such as the choice of the step sizes. These problems can be solved, for example, using the variable step size.

References 1. Widrow, B., Walach, E.: Adaptive signal processing for adaptive control. IFAC Proc. Vol. 16(9), 7–12. Elsevier (1983) 2. Widrow, B., Walach, E.: Adaptive Inverse Control: A Signal Processing Approach, Reissue ed. Wiley, Inc (2008) 3. Shafiq, M., Lawati, A. M., Yousef, H.: A simple direct adaptive inverse control structure. In: Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–4. IEEE (2016) 4. Rigney, B., Pao, L., Lawrence, D.: Adaptive inverse control for settling performance improvements. In: American Control Conference, pp. 190–197. IEEE (2009) 5. Rigney, B.P., Pao, L.Y., Lawrence, D.A.: Nonminimum phase adaptive inverse control for settle performance applications. Mechatronics 20(1), 35–44. Elsevier (2010)

492

R. P. Noronha

6. Noronha, R.P.: Adaptive inverse control synthesis subject to sinusoidal disturbance for nonminimum phase plant via FVSS-NLMS algorithm. In: 2021 Australian & New Zealand Control Conference, pp. 179–184. IEEE (2021) 7. Noronha, R.P.: Indirect adaptive inverse control design based on the FASS-NLMS algorithm. IFAC-PapersOnLine 54(20), 354–359. Elsevier (2021) 8. Shafiq, M., Shiaf, M.A., Yousef, H.A.: Stability and convergence analysis of direct adaptive inverse control. Complexity. Hindawi (2017) 9. Liu, Z., Lu, K., Lai, G., Chen, C.L.P., Zhang, Y.: Indirect fuzzy control of nonlinear systems with unknown input and state hysteresis using an alternative adaptive inverse. IEEE Trans. Fuzzy Syst. 29, 500–514. IEEE (2019) 10. Karatzinis, G., Boutalis, Y.S., Kottas, T.L.: System identification and indirect inverse control using fuzzy cognitive networks with functional weights. In: European Control Conference (ECC), pp. 2069–2074. IEEE (2018) 11. Diniz, P.S.R.: Adaptive filtering, vol. 4. Springer (1997) 12. Wang, X.Y., Wang, Y., Li, Z.S.: Research of the 3-DOF helicopter system based on adaptive inverse control. Appl. Mech. Mat. 389, 623–631. Trans Tech Publ (2013) 13. Zahoor, R.M.A., Qureshi, I.M.: A modified least mean square algorithm using fractional derivative and its application to system identification. Eur. J. Sci. Res. 35(1), 14–21 (2009) 14. Shoaib, B., Qureshi, I.M., Shafqatullah, I.: Adaptive step-size modified fractional least mean square algorithm for chaotic time series prediction. Chinese Phys. B 23(5), 050503. IOP Publishing (2014) 15. Gorenflo, R., Mainardi, F.: Fractional calculus. In: Fractals and Fractional Calculus in Continuum Mechanics, pp. 223–276. Springer (1997) 16. Taraso, V.E.: Handbook of Fractional Calculus with Applications, vol. 5. Gruyter Berlin (2019) 17. Hilfer, R.: Applications of fractional calculus in physics. World scientific (2000) 18. Ren, H.P., Wang, X., Fan, J.T., Kaynak, O.: Fractional order sliding mode control of a pneumatic position servo system. J. Franklin Inst. 356(12), 6160–6174. Elsevier (2019) 19. Khan, S., Naseem, I., Malik, M.A., Togneir, R., Bennamoun, M.: A fractional gradient descentbased rbf neural network. Circuits Syst. Signal Process. 37(12), 5311–5332. Springer (2018) 20. Song, W., Li, M., Li, Y., Cattani, C., Chi, C.H.: Fractional Brownian motion: difference iterative forecasting models. Chaos Solitons Fract. 123, 347–355. Elsevier (2019) 21. Dai, W., Huang, J., Qin, Y., Wang, B.: Regularity and classification of solutions to static Hartree equations involving fractional Laplacians. Discrete Continuous Dyn. Syst. 39(3), 1389. American Institute of Mathematical Sciences (2019) 22. Ahilan, A., Manogaran, G., Raja, C., Kadry, S., Kumar, S.N., Kumar, C.A., Jarin, T., Sujatha, K., Kumar, P.M., Babu, G.C., Murugan, N.S., Parthasarathy: Segmentation by fractional order darwinian particle swarm optimization based multilevel thresholding and improved lossless prediction based compression algorithm for medical images. IEEE Access 7, 89570–89580. IEEE (2019) 23. Khan, S., Wahab, A., Naseem, I., Moinuddin, M.: Comments on Design of fractional-order variants of complex LMS and NLMs algorithms for adaptive channel equalization. Nonlinear Dyn. 101(2), 1053–1060. Springer (2020) 24. Ahmad, J., Usman, M., Khan, S., Naseem, I., Syed, H.J.: Rvp-flms: a robust variable power fractional LMS algorithm. In: 6th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), pp. 494–497. IEEE (2016) 25. 
Atangana, A., Gómez-Aguilar, J.F.: Numerical approximation of Riemann-Liouville definition of fractional derivative: from Riemann-Liouville to Atangana-Baleanu. Numer. Methods Partial Diff. Eq. 34(5), 1502–1523. Wiley Online Library (2018) 26. Lin, S.Y., Yen, J.Y., Chen, M.S., Chang, S.H., Kao, C.Y.: An adaptive unknown periodic input observer for discrete-time LTI SISO systems. IEEE Trans. Autom. Control 62(8), 4073–4079. IEEE (2016)

Chapter 38

Thermo-Economic Analysis for the Feasibility Study of a Binary Geothermal Power Plant in India

Shivam Prajapati, Shulabh Yadav, Raghav Khandelwal, Priyam Jain, Ashis Acharjee, and Prasun Chakraborti

Abstract The objectives of ORC thermodynamic analysis are maximum performance and thermal conversion efficiency. For a feasibility design study, it is important to optimize both energy performance and system cost. Based on typical geothermal resources, this article describes a feasibility study for a binary geothermal plant in India. Thermodynamic and economic evaluations were performed for the important cycle design alternatives, a variety of working fluids, and component selection factors. The objective function for selecting the most thermo-economic designs is calculated by dividing the total Purchased Equipment Cost (PEC) by the net electrical power output (W_net). The working fluids investigated are n-pentane, R245fa, and R134a. The thermodynamic analysis shows that, at a given turbine inlet pressure and mass flow rate of the working fluid, the net electrical power output (W_net) of the cycle design reaches its maximum. Two-stage designs outperform one-stage designs in terms of W_net, thermal efficiency, and exergy efficiency. According to the economic comparison, working fluid types and cycle designs have a significant impact on economic performance as assessed by PEC. The top five options were subjected to profitability analysis. The results show that a normal Rankine cycle with a two-stage turbine employing n-pentane is the most thermo-economical design based on the brine resource and reinjection conditions.

38.1 Introduction

Most countries' energy policies will continue to favour renewable electricity generation. Wind and solar PV are intermittent, lowering the usage factor and limiting the total capacity that can be placed in a grid. Baseload renewables, such as hydro and geothermal, are by far the major suppliers of renewable electricity globally.


Geothermal energy is a renewable source of heat energy that originates under the earth's surface at temperatures ranging from 50 to 3500 °C [1]. Although geothermal reservoirs are finite resources, if they are operated at a rate below their peak production rate and the brine is reinjected into the geological thermal zone, power plants can be built to run for the entire lifetime of the facility. Geothermal power generation offers a lot of room for growth on a global scale [2]. The three main types of geothermal power generation are dry steam, flash steam, and binary cycle [3]. Depending on the generation characteristics of the resources, most power plants generate electricity with steam and bottoming cycles. The most efficient way to utilize high-pressure, high-temperature brine is through flash separation and steam expansion using a steam turbine. Binary power plants based on the Organic Rankine Cycle (ORC) are used for medium and low-temperature sources, as well as a bottoming cycle for the steam turbine. Unlike coal or gas-fired power plants, ORC binary units must be specifically built to take advantage of the temperature and flow available from the geothermal resource. The required reinjection temperature limit to prevent excessive mineral scale deposition in the ORC evaporator heat exchanger is another design consideration. This paper focuses on ORC binary geothermal power plants, and it proposes a thermo-economic approach to cycle and component design, as well as performance and cost co-optimization. For both cycle and component analyses, the ORC working fluid must be chosen carefully. A few writers have examined the relationship between the working fluid structure and thermodynamic parameters, as well as the cycle thermodynamic performance. Saleh et al. [4], Quoilin et al. [5], and Shengjun et al. [6] studied several working fluids for low-temperature applications and concluded that low-temperature hydrofluorocarbons, such as R245fa and R134a, are acceptable for binary ORCs. Based on parametric sensitivity analysis, Aghahosseini et al. [7] studied several pure and zeotropic-mixture working fluids for power generating applications at various operating circumstances. Energy and exergy efficiencies, cycle reversibility rate, external heat demand, and mass flow rate were the thermodynamic cycle performance metrics employed. The working fluid, they concluded, had a significant impact on the cycle's performance. The basic ORC plant schematic diagram for one-stage turbines is shown in Fig. 38.1, and the various cycle configurations for two-stage ORCs are shown in Fig. 38.2. ORC system performance is also influenced by the design of the thermodynamic cycle, or cycle configuration [8]. But only a few articles deal with the design of thermodynamic cycles. Mago et al. [9] used dry organic working fluid to assess and compare the basic and regenerative ORC designs. Based on their findings, the regenerative ORC offers greater first and second law thermodynamic efficiency, as well as diminished irreversibility. ORC plant cost and design have also been the subject of a small number of published research studies. The ORC efficiency of ammonia, n-pentane, and PF 5050 was studied by Hettiarachchi et al. [1]. The objective function was a ratio of the total heat exchanger area to net power generation. They discovered that the cost of a power plant is affected by the working fluid used. Meinel et al. [10] compared conventional and recuperative ORC cycles with the installation of a two-stage ORC with regenerative preheating.


Fig. 38.1 One stage turbines: a standard cycle, b recuperative cycle

Fig. 38.2 Two stage turbines: a standard cycle, b recuperative cycle, c regenerative cycle

The two-stage ORC with regenerative preheating has been proven to provide the most efficient thermodynamic and economic performance. Based on the first and second laws of thermodynamics, Yari [3] and Coskun et al. [11] conducted comparative research on several geothermal power facilities. Single flash, double flash, flash binary, single ORC, ORC with internal heat exchanger, ORC regeneration, ORC regeneration with an internal heat exchanger, and Kalina cycle plants were all studied by Coskun et al. in terms of thermodynamics and economics.

Table 38.1 Geothermal and cooling water source data

Parameter                                   Nominal value
Temperature of geothermal source (°C)       173
Pressure of geothermal source (bar)         9
Mass flow of geothermal source (kg/s)       8
Temperature of cooling water source (°C)    20
Pressure of cooling water source (bar)      1.53
Mass flow of cooling water source (kg/s)    90

When determining turbine performance during thermodynamic analysis, most studies reported in the literature do not treat realistic absolute pressure levels as a limitation. According to Moustapha et al. [12], the pressure ratio, as well as the projected inlet and exit pressures, must be comparable to those of realistic turbine models. When a turbine operates at an absolute pressure away from its design point, the Reynolds number changes, which may affect its performance to varying degrees depending on the turbine's kind and design. As part of the pre-feasibility design process, this research examines a number of binary geothermal power plant configurations using thermodynamic and economic analyses. One- and two-stage designs, with optional recuperator or regenerator upgrades, are considered. For each working fluid, a constant pressure ratio and an absolute pressure level commensurate with known turbines were utilized to increase the accuracy of the turbine models. Actual geothermal well and cooling water data from a site in India's Himalayan Geothermal Zone (TGZ) was obtained from a variety of sources and used in the current feasibility design research, as shown in Table 38.1.

38.2 Methodology

R245fa, n-pentane, and R134a are the working fluids evaluated in this feasibility study since they are the most extensively used in commercial ORC systems. As stated in Table 38.2, the design parameters for the thermodynamic cycles are based on typical assumptions for superheat, subcooling, heat exchanger pinch points, and nominal component performance. For the feasibility study, the objective function of optimization was the net electric power (W_net).

Table 38.2 Properties used for creating the thermodynamic cycles

Parameter                             Value
Superheat (°C)                        5
Subcooling (°C)                       5
Minimum temperature approach (°C)     5
Pump isentropic efficiency (%)        80
Expander isentropic efficiency (%)    85

When it comes to developing practical geothermal power plants, this metric is more important than thermal and exergy efficiencies [13]. First, each cycle was subjected to a thermodynamic study, and W_net was estimated. Second, the cycle and component designs were compared further by calculating the purchased equipment costs (PEC). The ratio of net electrical power output to indicated capital cost (γ = W_net/PEC) was employed as the criterion for selecting the most cost-effective solutions. Finally, to assess the economic performance of the top five economical designs, a profitability analysis was performed.

38.2.1 The Thermodynamic Cycles

The modelled thermodynamic setups are shown in Figs. 38.1 and 38.2. Figure 38.1a, b demonstrates the standard cycle setups that are commonly utilized in ORC research. In the typical ORC cycle, the vapour is expanded through a turbine, which drives an electric generator. After expansion, the vapour condenses, is pressurized by the pump, and goes to the evaporator, where it is evaporated to a superheated state and used as the turbine's input (Fig. 38.1a). In recuperative cycles, heat is reclaimed from the fluid at the turbine outlet by introducing a heat exchanger (the recuperator) after the pump and before the evaporator (Fig. 38.1b). The two-stage standard cycle (2-stage std.) uses two radial turbines with differing operating pressures or an axial turbine with two stages (Fig. 38.2a). The usage of a two-stage turbine provides for a higher cycle pressure ratio. All two-stage turbines are coaxial, with one generator driving them. The two-stage recuperative cycle (Fig. 38.2b) is like the one-stage recuperative cycle, while in the regenerative cycle part of the exhaust from the first turbine stage is used to preheat the fluid before it enters the evaporator (Fig. 38.2c).

38.2.2 Modelling

Thermodynamic modelling: The feasibility analysis employs standard adiabatic component models, first law energy balance, and second law efficiency based on the following assumptions:

• Steady-state.
• Changes in kinetic and potential energy are ignored.
• Fouling in the heat exchangers and pressure drop along pipelines are also ignored.
• Pumps and turbines offer constant isentropic efficiencies.
• Dead state temperature and pressure for the cycles are 20 °C and 1 bar, respectively.
• Geothermal brine is modelled as water.

Mass and energy balances for any control volume at a steady state are:

\dot{m}_{in} = \dot{m}_{out}   (38.1)

\dot{Q} + \dot{W} = \sum \dot{m}_{out} h_{out} - \sum \dot{m}_{in} h_{in}   (38.2)

where the subscripts in and out denote the inlet and outlet, \dot{Q} and \dot{W} are the net heat and work inputs, \dot{m} is the mass flow, and h is the enthalpy [kJ/kg]. The ratio of the net electrical power output to the heat addition is the first-law thermal efficiency of an ORC:

\eta_{th} = \frac{|\dot{W}_{net}|}{\dot{Q}} = \frac{|W_p + W_t|}{\dot{m}_{brine}\,(h_{eva,in} - h_{eva,out})}   (38.3)

where W_p and W_t are the net power of the pump and the turbine, respectively, and the subscript eva refers to the evaporator. According to DiPippo [14] and Preißinger et al. [13], the geothermal plant's overall exergy efficiency can be determined by dividing the net electrical output by the total flow rate of exergy, E_in, of the brine:

\eta_e = \frac{W_{net}}{E_{in}} = \frac{|W_p + W_t|}{\dot{m}_{brine}\,\left[(h_{eva,in} - h_0) - T_0\,(s_{eva,in} - s_0)\right]}   (38.4)

where s is the specific entropy [kJ/(kg K)]. The enthalpy and entropy of the brine are evaluated at the evaporator inlet (subscript eva,in) and at the dead state (subscript 0), and \dot{m}_{brine} is the mass flow rate of the brine. The fraction of the flow rate which flows to the supply water heater tank in a regenerative cycle design (Fig. 38.2c) is calculated by an equation from Mago [9]:

X = \frac{h_6 - h_5}{h_{12} - h_5}   (38.5)
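To make the efficiency definitions concrete, the following minimal Python sketch (not part of the original study) evaluates Eqs. (38.3) and (38.4) for given pump/turbine powers and brine states; all numerical inputs in the example call are illustrative placeholders rather than results from the paper.

```python
# Minimal sketch of Eqs. (38.3)-(38.4); state values below are placeholders.

def thermal_efficiency(w_pump_kw, w_turbine_kw, m_brine, h_eva_in, h_eva_out):
    """First-law thermal efficiency, Eq. (38.3); enthalpies in kJ/kg, flows in kg/s."""
    w_net = abs(w_pump_kw + w_turbine_kw)      # net electrical power, kW
    q_in = m_brine * (h_eva_in - h_eva_out)    # heat added by the brine, kW
    return w_net / q_in

def exergy_efficiency(w_pump_kw, w_turbine_kw, m_brine,
                      h_eva_in, h0, T0, s_eva_in, s0):
    """Second-law (exergy) efficiency, Eq. (38.4); T0 in K, entropies in kJ/(kg K)."""
    w_net = abs(w_pump_kw + w_turbine_kw)
    e_in = m_brine * ((h_eva_in - h0) - T0 * (s_eva_in - s0))  # brine exergy flow, kW
    return w_net / e_in

# Example with placeholder numbers (dead state 20 C = 293.15 K, per the assumptions):
print(thermal_efficiency(-15.0, 200.0, 8.0, 732.0, 504.0))
print(exergy_efficiency(-15.0, 200.0, 8.0, 732.0, 84.0, 293.15, 2.08, 0.30))
```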

Economic Modelling: The purchased equipment cost (PEC) of pumps and turbines is estimated using the correlation from Turton et al. [15]:

PEC = K_1 + K_2\,Y + K_3\,Y^2   (38.6)

where K values are given in Table 38.3 and Y is the power transferred in kW.

Table 38.3 Parameters used in Eq. (38.6) to calculate equipment costs

Component        Y            K1       K2       K3
Pumps            Power [kW]   3.3892   0.0536   0.1538
Axial turbines   Power [kW]   2.7051   1.4398   −0.1776
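As a hedged illustration, the sketch below evaluates Eq. (38.6) exactly as printed with the Table 38.3 coefficients, and escalates the result with the capital-goods indices of Table 38.4 using Eq. (38.8), which is introduced below. Note that in Turton's reference [15] coefficients of this kind appear in a log10 form, so absolute costs from the linear form printed here should be treated with caution.

```python
# Minimal sketch of the PEC correlation (Eq. 38.6) and cost-index update (Eq. 38.8).
# Coefficients are taken verbatim from Table 38.3; index values from Table 38.4.

K = {
    "pump":          (3.3892, 0.0536, 0.1538),
    "axial_turbine": (2.7051, 1.4398, -0.1776),
}

def pec(component: str, power_kw: float) -> float:
    """Purchased equipment cost of a pump or axial turbine, Eq. (38.6) as printed."""
    k1, k2, k3 = K[component]
    return k1 + k2 * power_kw + k3 * power_kw ** 2

def pec_updated(pec_old: float, index_old: float, index_new: float) -> float:
    """Escalate a known cost to a new date with the capital-goods index, Eq. (38.8)."""
    return pec_old * (index_new / index_old)

# Example: update a 50 kW pump cost from the 2001 to the 2014 index of Table 38.4.
print(pec_updated(pec("pump", 50.0), 1047, 1390))
```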

Table 38.4 Index of capital goods prices used in calculating updated PEC prices in Eq. (38.8)

Component        Quarter 2–3 (2001)   Quarter 3–4 (2004)   Quarter 1–2 (2014)
Pump             1047                 –                    1390
Radial turbine   1064                 –                    1088
Tank             –                    1143                 1685

For preheating the working fluid, the regenerative cycle uses a stainless-steel storage tank for direct liquid contact heat exchange. The tank's PEC is estimated as [16]:

PEC = 2.48 \times 10^{3}\, V^{0.597}   (38.7)

where V is the tank volume in m³. Due to changes in economic conditions and inflation, the equation to update PEC [17] is:

C_{new} = C_{old}\left(\frac{I_{new}}{I_{old}}\right)   (38.8)

where C is the cost (referring to PEC) and I is the cost index. Subscripts old and new refer to the base time when the cost is known and to the time when the cost is desired, respectively. The data for the cost index is taken from the Infoshare of Indian statistics [18] in Table 38.4. To compare investment options, the ratio of net electrical power output to total purchased equipment costs is useful. The investment ratio, γ, is like a levelized cost that does not consider the time value of money:

\gamma = \frac{W_{net}}{\sum_{i=1}^{n} \text{Purchased Equipment Cost}_i}   (38.9)

where n is an index number for the main components in the cycle design. Based on direct and indirect costs, the investment cost of the ORC plant can be estimated as shown in Table 38.5, according to Bejan et al. [19]. Net Present Value (NPV) and Discounted Payback Period (DPB) are the two decision variables used to calculate project profitability. Bejan et al. [19] defined NPV as the sum of the present values of cash flows coming into and out of an enterprise over a period:

NPV = \sum_{i=1}^{t} \frac{R_i}{(1+q)^i} - TCI   (38.10)

with t as the life expectancy of the equipment, q as the discount rate, TCI as the total capital investment, and R_i as the annual revenues from electricity sales. The discount rate is usually set by the industry and may be adjusted by the inflation rate [20]. The DPB estimates the years needed to recover the initial capital investment.
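A minimal sketch of the two profitability metrics follows; it is not the authors' code, and the flat annual revenue stream in the example is a placeholder for the escalating electricity-price series used in the study.

```python
# Minimal sketch: NPV per Eq. (38.10) and a discounted payback period (DPB)
# found by scanning the cumulative discounted cash flow.

def npv(revenues, tci, q):
    """Eq. (38.10): annual revenues R_i discounted at rate q, minus capital TCI."""
    return sum(r / (1 + q) ** i for i, r in enumerate(revenues, start=1)) - tci

def discounted_payback(revenues, tci, q):
    """First year in which cumulative discounted revenue recovers TCI (None if never)."""
    cumulative = -tci
    for i, r in enumerate(revenues, start=1):
        cumulative += r / (1 + q) ** i
        if cumulative >= 0:
            return i
    return None

# Illustrative run: 20 equal annual revenues (placeholder) against the TCI of the
# n-pentane 2-Stage_Regenerator design reported in Table 38.9, at a 10% discount rate.
cash_flows = [300_000.0] * 20  # INR/year, assumed
print(npv(cash_flows, 2_115_038.0, 0.10))
print(discounted_payback(cash_flows, 2_115_038.0, 0.10))
```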

Table 38.5 Expense in direct and indirect capital investment [19]

Total capital investment (TCI) in ORC plant

A. Direct costs (DC)
   1. Onsite costs
      1. Purchased equipment costs (PEC)
      2. Piping: 35% PEC
      3. Purchased equipment installation: 45% PEC
      4. Control and instrumentation: 20% PEC
   2. Offsite costs
      5. Civil, architectural, and structural work: 60% PEC
      6. Service facilities: 65% PEC

B. Indirect costs
   1. Engineering + supervision: 8% DC
   2. Construction costs + construction profit: 15% DC
   3. Contingency: 20% (of 1 and 2)
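The cost build-up of Table 38.5 translates directly into code, as in the sketch below. One assumption is flagged: the contingency is read as 20% of the two indirect items listed above it, which is one plausible interpretation of "20% (of 1 and 2)".

```python
# Minimal sketch of the TCI build-up of Table 38.5 [19] from a known PEC.

def total_capital_investment(pec: float) -> float:
    onsite = pec * (1 + 0.35 + 0.45 + 0.20)   # PEC + piping + installation + controls
    offsite = pec * (0.60 + 0.65)             # civil/structural + service facilities
    dc = onsite + offsite                     # A. direct costs
    engineering = 0.08 * dc                   # B.1 engineering + supervision
    construction = 0.15 * dc                  # B.2 construction costs + profit
    contingency = 0.20 * (engineering + construction)  # B.3, assumed "of 1 and 2"
    return dc + engineering + construction + contingency

print(total_capital_investment(500_000.0))   # illustrative PEC value
```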

38.2.3 Modelling Using Aspen Plus

The processes are simulated in Aspen Plus [21] with the cubic Peng-Robinson equation of state (EOS), which was used to calculate the thermodynamic and thermo-physical properties of the working fluids. The most recent version 8.4 of Aspen Exchanger Design and Rating (EDR) was used to compute heat duties, sizes, and prices of heat exchangers. The cost estimate for a heat exchanger is generated by the Aspen EDR utilizing the geometry of the heat exchanger components obtained via thermodynamic modelling. The cost of a heat exchanger includes material costs, labour costs, and material and labour markups.
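The paper's property calculations are performed inside Aspen Plus; as an open-source stand-in, the sketch below shows the kind of state-point lookups a cycle model needs, using CoolProp (a different, Helmholtz-energy-based property library, not the Peng-Robinson EOS used in the study) with the brine conditions of Table 38.1.

```python
# Minimal sketch of property lookups with CoolProp as a stand-in for Aspen Plus.
from CoolProp.CoolProp import PropsSI

T_in = 173.0 + 273.15   # brine temperature, K (Table 38.1)
P_in = 9.0e5            # brine pressure, Pa (Table 38.1)

# Enthalpy and entropy of the brine (modelled as water, per the assumptions):
h_brine = PropsSI("H", "T", T_in, "P", P_in, "Water") / 1e3   # kJ/kg
s_brine = PropsSI("S", "T", T_in, "P", P_in, "Water") / 1e3   # kJ/(kg K)

# Saturation pressure of a candidate working fluid at an illustrative 100 C:
p_sat_r245fa = PropsSI("P", "T", 273.15 + 100.0, "Q", 1, "R245fa")  # Pa

print(h_brine, s_brine, p_sat_r245fa)
```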

38.3 Results and Discussion

38.3.1 Thermodynamic Analysis

According to the assumptions and data in Tables 38.1 and 38.2, thermodynamic cycles were constructed for each working fluid and each cycle configuration in Figs. 38.1 and 38.2. Each cycle configuration has a range of possible evaporator pressures and working fluid mass flow rates.

Influence of turbine inlet pressure and mass flow: Figure 38.3 shows the results of model simulations of standard-cycle power W_net at varying turbine inlet pressures and working fluid mass flows. In single-stage models (Fig. 38.3a), with the largest mass flow rate but lowest turbine inlet pressure, the cycles with R245fa and n-pentane produce a maximum W_net between 400 and 425 kW.


Fig. 38.3 For various working fluids, net power output (W_net) as mass flow rate increases and turbine inlet pressure decreases: a one-stage designs, b two-stage designs


It is expected that this outcome will occur when a reasonable pressure ratio is needed for the turbine, provided the radius of the turbine can be extended to accommodate a greater flow of fluid. In comparison with the other working fluids, R134a achieves its maximum W_net only at an extremely high pressure of 32 bar and approximately 25 kg/s of mass flow, which is unsustainable with either a vaporizer or a turbine. According to Fig. 38.3b, two-stage cycles using R245fa and n-pentane reached the maximum W_net at 22 bar and 11 bar, respectively, at working fluid mass flows of 16.3 kg/s and 6 kg/s. Further investigation will be conducted using the optimal inlet pressure and mass flow rate values from the regenerative-cycle and recuperative-cycle analyses. The two-stage designs would naturally have a lower condenser pressure than one-stage designs. R134a cannot be used in two-stage designs, as the required condenser pressure can lead to condensation in the turbine.

The influence of cycle design on the plant performance: As shown in Fig. 38.4, the model is able to predict the maximum net power from multiple cycle configurations by using the optimum turbine inlet pressure and mass flow rate. The two-stage standard cycle (2-Stage_Std) and the two-stage recuperative cycle (2-Stage_Rec) produce a maximum W_net of 619 kW using R245fa. The findings demonstrate the advantages of adopting multi-stage turbines that are the best fit for the heat and cooling resource. R245fa has a consistently higher W_net than n-pentane. In binary ORC plants, n-pentane is generally used in commercial geothermal power generators. R245fa is a synthetic chemical, whilst n-pentane is a refined petroleum product. As a result, R245fa is more costly and may not be viable in a multi-megawatt ORC plant. It is possible for low-temperature sources such as those in this case study to offset the high cost of materials with improved power generation performance.

Fig. 38.4 Maximum net electrical power output (W net ) for multiple thermodynamic cycles demonstrating the benefits of multi-stage turbines


Fig. 38.5 Thermal and exergy efficiency of different cycle designs and working fluids under maximum power output conditions

Under maximum power output conditions, Fig. 38.5 compares the thermal and exergy efficiencies of the R245fa and n-pentane cycle designs. The recuperative and regenerative cycles have better thermal efficiency than either conventional one- or two-stage cycles. The exergy efficiency of standard and recuperative cycles is similar, but the exergy efficiency of regenerative cycles is lower. The value of W_net affects the exergy efficiency, according to Eq. (38.4): standard and recuperative cycles have the same net power output, but regenerative cycles have a lower net power output. A comparison of one-stage and two-stage designs shows that two-stage designs offer much greater thermal and exergy efficiency. In theory, a supplementary recuperator could be used in a two-stage system with a regenerative cycle (2-Stage_Regen) in order to enhance cycle performance [3]. However, because the temperature of the working fluid at the point following the feedwater heater tank in the regenerative cycle (Fig. 38.2c) is near boiling, this cannot be done in this case study. As a result, a combined regenerative-recuperative design is not practical here, due to the significant chance of pump 2 failing from vaporization within the pump. It is also worth noting that the unfeasible R134a working fluid would have the highest one-stage thermal efficiency (8.78%) and exergy efficiency (40.9%), illustrating that thermodynamic analysis alone is not sufficient for a feasibility study.

38.3.2 Economic Analysis

Purchased equipment costs (PEC): In Fig. 38.6, the cost of purchasing the equipment in each plant configuration with a variety of working fluids is shown.


Fig. 38.6 Cost of all purchased equipment estimated in USD in 2014

This PEC includes a turbine, pump, heat exchangers, and an additional tank for the regenerative cycle. The cost of the working fluid is not included, since the volume of fluid required for each design depends on site characteristics and design features that are not known at the feasibility analysis stage. Note that the cost of the regenerative cycle's feedwater heat tank is computed using Eq. (38.7), assuming a capacity of 6 m³ for n-pentane designs and 8 m³ for R245fa designs. Two-stage models with R245fa have a significantly higher PEC than one-stage models, whilst two-stage models with n-pentane have a lower PEC than one-stage. As a result of the much higher working fluid flow rate required, R245fa designs need much larger heat exchangers, which raises PEC. The heat exchanger costs comprise 78–88% of the total costs for the R245fa systems. The evaporator is the largest component cost, ranging from 50% of PEC for 1-stage to 64% for 2-stage R245fa designs. The heat exchanger costs are related to the required heat transfer areas, which are shown in Fig. 38.7. The one-stage designs using n-pentane need larger heat exchangers than the one-stage designs using R245fa; by contrast, two-stage designs with R245fa require larger heat exchangers than two-stage designs using n-pentane. Because the recuperative and regenerative cycles demand smaller evaporators and condensers, heat exchanger costs can be reduced. The total investment cost will, however, be higher than without a recuperator if the cost of the recuperator exceeds the savings on the evaporator and condenser. Because the feed heater tank is substantially less expensive than a recuperator heat exchanger, the overall cost of heat exchangers in the two-stage regenerative cycle (2-Stage_Regen) is lower; W_net, on the other hand, is lower in this design than in the other two-stage systems. R134a has previously been ruled out as a viable option for this resource because of its technical limitations. However, it is interesting that its one-stage standard cycle design has a total heat exchange area of only 900 m² and the lowest PEC of INR 242,386.


Fig. 38.7 Required areas of the heat exchanger from different cycle designs

Investment ratio: Table 38.6 lists the ratio between power and cost for the technically workable cycle designs. The highest ratio is 1.003 for the two-stage standard cycle with n-pentane working fluid. The highest ratio for R245fa was 0.848 for the one-stage standard ORC. For a given turbine layout, the ratio of power and cost is unaffected by thermodynamic improvements from adding a recuperator or regenerator. For each of the working fluids, however, there is a significant difference between the two one-stage systems. Due to the high cost of heat exchangers, the choice of working fluid can have an impact on power generation economics even when the power output is good. Even though they produce the highest W_net at 619 kW, two-stage R245fa systems have a low power/cost ratio. Using R134a as the working fluid would give the second-highest investment ratio of 0.939, which underscores the significance of combined technical, thermodynamic, and economic analysis at the feasibility stage.

Table 38.6 Investment ratio between W_net and purchased equipment costs (PEC)

γ           1-Stage_Std   1-Stage_Rec   2-Stage_Std   2-Stage_Rec   2-Stage_Regen
n-pentane   0.577         0.602         1.003         0.925         0.934
R245fa      0.848         0.805         0.580         0.581         0.586

Air-cooled condensers and water-cooled condensers: Water-cooled condensers have been used in all of the designs so far. When there is no available cooling water on site, air-cooled condensers must be chosen, despite the fact that they are more expensive than shell-and-tube condensers. Because of the difficulties in obtaining and pumping cooling water, most commercial geothermal binary power plants employ air-cooled condensers. Even though the recuperative and regenerative cycles that use n-pentane have the smallest heat transfer area requirements, their investment ratio is reduced due to higher condenser prices.

Table 38.7 Total investment cost (TIC) and specific investment costs (SIC) of the four optimal ORC cycle designs

Cycle design                    TIC (INR)   SIC (INR/kW)
n-pentane 2-Stage_Standard      1,435,371   3,003
n-pentane 2-Stage_Regenerator   1,364,950   2,856
n-pentane 2-Stage_Recuperator   1,556,202   3,256
R245fa 1-Stage_Standard         1,436,639   3,006

When air-cooled condensers are utilized, the investment ratio lowers from 1.003 to 0.755 for the two-stage n-pentane standard cycle and from 0.934 to 0.733 for the two-stage n-pentane regenerator cycle.

Total investment cost (TIC): Four of the highest investment ratio designs have specific investment costs (SIC) ranging from INR 2856/kW to INR 3006/kW. The TIC for each design is calculated by multiplying SIC by the optimal W_net. Table 38.7 shows the thermo-economic results for each of the four best options. As reported by Gawlik et al. [22], these values are in the range of 2000–4000 INR/kW (about 2500–5000 INR/kW), and they can be higher with exploration and drilling costs included. Roos et al. [23] reported that manufacturers produced typical ORC systems with SIC ranging from INR 2000/kW to INR 4000/kW in 2009. Jung and Krumdieck [24] reported SICs of INR 2000/kW–3500/kW for small commercial ORC systems in 2015 (Table 38.7).

Geothermal development costs: Drilling costs, which have historically accounted for the largest portion of overall geothermal development, must be included in the investment in a binary geothermal power plant. Estimating the costs of geothermal development is difficult because drilling for geothermal energy and reservoir engineering involves a high level of commercial sensitivity and uncertainty, and drilling costs are scarcely reported. Stefansson [25] estimated that drilling costs represent between 20 and 50% of the total cost of a typical geothermal energy plant. Drilling and development can account for up to 70% of the total cost of a European binary power station, according to Kranz [26]. In 2000, Kutscher [27] reported that the cost of exploration and drilling for binary geothermal power plants with a capacity of 5 MW or larger was about INR 500/kW. Geothermal confirmation and site development drilling ranged between INR 600/kW and 1200/kW with a mean of 1000/kW in 2015, according to the Geothermal Energy Association. Over the 10-year period from 2005 to 2015, the Indian producer cost index for mining services, including drilling oil and gas wells, increased by 77.2%. Accordingly, this study used drilling and development costs of INR 1772 per kW for 2015 in the economic analysis.

Profitability analysis: According to the Geothermal Energy Association, geothermal power facilities take 3–5 years to build. The first two years of capital investment are estimated at 20% of TIC for exploration and confirmation of resources, with the remaining 80% committed in the third year. The plant starts generating energy in the fourth year based on the W_net rate multiplied by the plant availability factor, which for geothermal power plants is about 90%. The discount rate is assumed to be 10%. The value of the inflation rate was taken from the Indian Consumer Price Index (CPI), where the inflation rate has averaged around 2.7% since 2000 [28].


Table 38.8 Assumptions for economic modelling in the profitability analysis

Plant lifetime                        20 years
Plant availability                    90%
Electricity revenue unit price        INR 0.083/kWh
O&M cost                              INR 0.013/kWh
Annual electricity price escalation   3.0%
Inflation rate                        2.7%
Discount rate                         10%

Table 38.9 Profitability analysis from thermodynamic and economic modelling results

Cycle design                    Total cost (INR)   NPV (INR)   DPB (years)
n-pentane 2-Stage_Standard      2,282,387          1,004,024   12.9
n-pentane 2-Stage_Regenerator   2,115,038          811,094     13.5
n-pentane 2-Stage_Recuperator   2,403,218          903,541     13.5
R245fa 1-Stage_Standard         2,153,235          664,583     14.3

The price of electricity over the plant's lifetime has been estimated at 0.083 INR/kWh with a 3% annual increase in the electricity rate [24]. Amongst the estimates of David et al. [29] regarding operating and maintenance costs for an ORC plant, 0.01 INR/kWh was the average. Table 38.8 summarizes the parameters used in the economic models. Table 38.9 shows the profitability of the four candidate designs. The NPV has a wide range, between INR 664,583 and INR 1,004,024, but the DPB is more consistent, between 13 and 14 years. The total cost of investment ranges from INR 2,115,038 to INR 2,403,218. These values are consistent with the total investment reported for building the 400 kW geothermal power plant at Tatapani, Chhattisgarh at the end of 2013; the actual expense of that project was INR 2,007,770 [30]. The profitability study of the R134a standard ORC cycle indicates that it is a viable choice: the technique is straightforward, R134a is inexpensive, the heat exchangers are compact, and the one-stage expansion via a simple turbine is appealing. Its overall cost is INR 2,140,946, the net present value (NPV) is INR 830,226, and the DPB is 13.4 years. Figure 38.12 shows the discounted cash flow over the operating years for the standard two-stage ORC cycle designed with n-pentane. The cumulative NPV would increase with a longer plant lifespan and a lower discount rate; if the project were subsidized or publicly funded, the discount rate would be lower.


Fig. 38.12 Cumulative NPV cash flow analysis for the n-pentane 2-Stage_Std ORC

38.4 Conclusion

The thermodynamic, technical, and economic feasibility were investigated for the design of a binary geothermal power plant with different cycle configurations, working fluids, and component options. Five different ORC binary cycle models with a technically feasible pressure ratio were modelled to calculate the turbine efficiency. The analysis employed a typical Indian geothermal resource with a brine temperature of 173 °C, a pressure of 9 bar, and a flow rate of 8 kg/s. The most economically and technically advantageous design for this resource utilizes n-pentane working fluid, uses a two-stage turbine, and uses neither a recuperator nor a regenerator. This design had a net generation capacity of 478 kW with an NPV over the 20-year life of INR 1,004,024 and a DPB of approximately 13 years. The two-stage expansion of the thermodynamic cycle design offers higher net electric power and higher thermal and exergy efficiencies than one-stage designs. Thermodynamic analysis alone would indicate that the two-stage system is the optimum design. However, the analysis of total capital costs and profitability shows that the rising cost of large heat exchangers and the added technical complexity can make the two-stage design less feasible than one-stage designs. For this lower temperature case study, the additional cost of a recuperator heat exchanger and a regenerator mixing tank tends to negate the thermodynamic advantages. The less expensive shell-and-tube water-cooled condenser may not always be an option; despite the thermodynamic equality of the two condensers, the added cost of air-cooled condensers may mean the case study is not economically feasible. A plant's performance and total investment cost are influenced mainly by the type of working fluid and cycle configuration. The cost of working fluids has not been explicitly accounted for in the economic model, but it is likely that the lower cost of n-pentane, coupled with the lower mass flow requirement, favours n-pentane over R245fa. Handling, toxicity, and flammability may


also be critical factors in the selection of working fluids that have not been explicitly considered in this analysis. The contribution of this work is the exploratory feasibility study based on technical, thermodynamic, and economic analysis.

References

1. Madhawa Hettiarachchi, H., et al.: Optimum design criteria for an organic Rankine cycle using low-temperature geothermal heat sources. Energy 32(9), 1698–1706 (2007)
2. Zhou, C., Doroodchi, E., Moghtaderi, B.: An in-depth assessment of hybrid solar–geothermal power generation. Energy Convers. Manage. 74, 88–101 (2013)
3. Yari, M.: Exergetic analysis of various types of geothermal power plants. Renew. Energy 35(1), 112–121 (2010)
4. Saleh, B., et al.: Working fluids for low-temperature organic Rankine cycles. Energy 32(7), 1210–1221 (2007)
5. Quoilin, S., et al.: Thermo-economic optimization of waste heat recovery organic Rankine cycles. Appl. Therm. Eng. 31(14), 2885–2893 (2011)
6. Shengjun, Z., Huaixin, W., Tao, G.: Performance comparison and parametric optimization of subcritical organic Rankine cycle (ORC) and transcritical power cycle system for low-temperature geothermal power generation. Appl. Energy 88(8), 2740–2754 (2011)
7. Aghahosseini, S., Dincer, I.: Comparative performance analysis of low-temperature organic Rankine cycle (ORC) using pure and zeotropic working fluids. Appl. Therm. Eng. 54(1), 35–42 (2013)
8. Branchini, L., De Pascale, A., Peretto, A.: Systematic comparison of ORC configurations by means of comprehensive performance indexes. Appl. Therm. Eng. 61(2), 129–140 (2013)
9. Mago, P.J., et al.: An examination of regenerative organic Rankine cycles using dry fluids. Appl. Therm. Eng. 28(8), 998–1007 (2008)
10. Meinel, D., Wieland, C., Spliethoff, H.: Economic comparison of ORC (organic Rankine cycle) processes at different scales. Energy 74, 694 (2014)
11. Coskun, A., Bolatturk, A., Kanoglu, M.: Thermodynamic and economic analysis and optimization of power cycles for a medium temperature geothermal resource. Energy Convers. Manage. 78, 39–49 (2014)
12. Moustapha, H.: Axial and Radial Turbines. Concepts NREC (2003)
13. Preißinger, M., Heberle, F., Brüggemann, D.: Advanced organic Rankine cycle for geothermal application. Int. J. Low-Carbon Technol. ctt021 (2013)
14. DiPippo, R.: Second law assessment of binary plants generating power from low-temperature geothermal fluids. Geothermics 33(5), 565–586 (2004)
15. Turton, R.: Analysis, Synthesis, and Design of Chemical Processes. Prentice Hall PTR, Upper Saddle River, N.J. (1998)
16. Peters, M.S., Timmerhaus, K.D.: Plant Design and Economics for Chemical Engineers. McGraw-Hill, New York (1991)
17. Turton, R., et al.: Analysis, Synthesis, and Design of Chemical Processes. Pearson Education (2008)
18. Zealand, S.N.: Economic Indicators – Capital Goods Price Index. http://www.stats.govt.nz/infoshare/default.aspx?AspxAutoDetectCookieSupport=1
19. Bejan, A., Moran, M.J.: Thermal Design and Optimization. Wiley (1996)
20. Thuesen, G.J., Fabrycky, W.J.: Engineering Economy. Prentice-Hall, Upper Saddle River, N.J. (2001)
21. Aspen Plus. Aspen Technology, Inc., Wheeler Road, Burlington, Massachusetts, USA. http://support.aspentech.com/
22. Gawlik, K., Kutscher, C.: Investigation of the opportunity for small-scale geothermal power plants in the Western United States. Trans. Geother. Resour. Council 2000, 109–112 (2000)


23. Roos, C.J., Northwest, C., Center, A.: An overview of industrial waste heat recovery technologies for moderate temperatures less than 1000 F. Northwest CHP Application Center (2009)
24. Jung, H., Krumdieck, S., Vranjes, T.: Feasibility assessment of refinery waste heat-to-power conversion using an organic Rankine cycle. Energy Convers. Manage. 77, 396–407 (2014)
25. Stefansson, V.: Investment cost for geothermal power plants. Geothermics 31(2), 263–272 (2002)
26. Kranz, S.: Market Survey Germany, GFZ Potsdam (2007). http://www.lowbin.eu/public/GFZLowBin_marketsituation.pdf
27. Kutscher, C.F.: The status and future of geothermal electric power. In: Proceedings of the Solar Conference. American Solar Energy Society; American Institute of Architects (2000)
28. Reserve Bank of New Zealand Te Putea Matua. http://www.rbnz.govt.nz/statistics/key_graphs/inflation/
29. David, G., Michel, F., Sanchez, L.: Waste heat recovery projects using organic Rankine cycle technology – examples of biogas engines and steel mills applications. In: World Engineers' Convention, Geneva (2011)
30. Power, A.E.A.C.: 400 kW Geothermal Power Plant at Chena Hot Springs, Alaska. Final Report (2007)

Chapter 39

Hybrid 3D-CNN Based Airborne Hyperspectral Image Classification with Extended Morphological Profiles Features

R. Anand, S. Veni, and P. Geetha

Abstract The classification of high-definition hyperspectral images from metropolitan locations needs to resolve specific critical issues. The conventional morphological openings and closings weaken object boundaries and deform item shapes, making the process more challenging. Morphological openings and closings by reconstruction can circumvent this issue to an extent, with a few unintended effects: image structures that are anticipated to vanish at a particular scale remain present after the morphological openings and closings are done. The extended morphological profiles (EMPs), with distinct structuring elements and a growing number of morphological operators, generate extremely high-dimensional data. This high-dimensional data may also contain redundant information, creating a new classification challenge for standard classification algorithms, particularly for classifiers that are not resistant to the Hughes phenomenon. In this article, we examine extended morphological profiles with partial reconstruction and directed MPs to categorize high-definition hyperspectral images from urban locations. Second, we extend them using a hybrid 3D CNN to boost classification performance. In this article, a total accuracy of 99.42% is obtained with a small number of testing samples. Similarly, the average accuracy is reasonable when compared to other 2D and 3D convolutional neural networks.

39.1 Introduction

The advancement in technologies like remote sensing has increased the capability of using hyperspectral images on a larger scale in widespread applications. Precise and accurate classification of hyperspectral images is crucial in research,


and a better classification methodology is essential for getting good results from these hyperspectral images [1]. Lv and Wang [2] grouped the classification methods for hyperspectral images into three forms, namely (i) supervised, (ii) semi-supervised, and (iii) unsupervised classification. Automatic classification of hyperspectral images is widely used to categorize information about the earth's lithosphere and to identify features extracted from other remotely sensed images; it is also used in pattern detection and other applications. In current trends, Deep Learning (DL) based hyperspectral imaging classification has been widely studied, using Convolutional Neural Networks (CNN), Stacked Auto-encoders (SAE), and Deep Boltzmann Machines (DBM) [3–5]. CNNs are widely used in HSI classification. Early work used a naïve five-layer CNN that considers spectral information alone; to address this, a modified convolutional neural network can be used [6]. This model takes a 3-dimensional patch as input, which includes both spatial and spectral data. SAE [7] is an unsupervised type of feature extraction built by stacking a series of auto-encoders; it is capable of extracting both spectral and spatial features [8]. Combined with spatial pyramid pooling, a CNN can be used exclusively for spatial information [9]. A CNN model that uses both 1-dimensional and 2-dimensional features is the dual-channel CNN, whereas a 3-dimensional CNN also uses both spatial and spectral data. A novel pixel-pair methodology is employed to expand the training set used to train the deep learning models. Active Learning (AL) has also been widely studied and experimented with in hyperspectral image classification. AL strategies include six different methods using a Support Vector Machine (SVM) and a Modified Random Forest (MRF) model-based AL framework [10] designed for HSI classification. Among these, a semi-supervised multinomial logistic regression model combines an entropy-based active selection strategy [11]. A Bayesian classification approach with loopy belief propagation, combined with AL strategies [12], has also been studied. Li [13] proposed a method combining an SAE neural network and Multiclass Level Uncertainty (MCLU). Liu et al. [14] offered a technique combining a weighted incremental dictionary learning criterion with an RBM. Haut et al. [15] suggested a method combining six active learning (AL) criteria, including breaking ties, mutual information, random acquisition, and full EP, with a BCNN [16]. Even though all these methodologies provide good performance, the architecture of each model is different. The suggested model uses a unique deep network architecture different from the BCNN, RBM, and SAE [17]. This method uses a new multiclass criterion, known as Best-versus-Second-Best (BvSB), for sampling and selecting informative details [18]. Methods like DA, FT, and BN are used to speed up training and reduce time consumption [19]. The classification of hyperspectral data traditionally follows the classical pattern recognition method, which has two different steps:


(1) handcrafted features are computed from the raw input data [20], and (2) the acquired features are used to learn classifiers like SVM (Support Vector Machines) [21] and NN (Neural Networks) [22]. The high dimensionality and heterogeneity of hyperspectral data are handled by statistical learning methods [23], particularly for high-dimensional data with few available training samples. Since the depicted items are diverse, it is hard to determine which features are crucial and readily available for classification. Unlike the traditional model of image-based recognition, deep learning models [24–28] are families of methods that can learn a hierarchy of features, building high-level features from low-level ones and thereby automating the feature engineering steps for these problems. Large datasets and large images with very high spatial and spectral resolutions are required for more information, and deep learning frameworks are suitable for addressing these problems [29]. Techniques that depend on deep learning have produced accurate results for detecting objects [30] and for classifying hyperspectral data. Usually, the spectral features are combined with spatial data, which are highly dominant at every step; this combination is then given as input to logistic regression classifiers in a unified framework. In the same manner, it is suggested to use a deep learning framework to classify hyperspectral images (HSI) into multiple classes. Both spectral and spatial parameters are combined in a single step to form high-level features. Significantly, the high-level components are manipulated in this modified CNN, and an MLP (Multi-Layer Perceptron) is mainly used, which is responsible for the classification [30]. These systems are highly capable of constructing spectral and spatial features on one side, and achieve a high level of prediction performance due to the CNN and MLP. Building on this, our proposed method addresses high-level classification accuracy for both spatial and spectral data. Here, feature-based information is in high demand to increase the classification rate, so we used an extended-morphological-features-based algorithm with a hybrid 3D CNN architecture. In this paper, the next section describes the materials and methods, followed by the results and discussion; finally, we conclude the paper with the accuracy achieved. The proposed research work is an extended version of the work proposed in our earlier paper, which incorporates deep learning techniques [29].

39.2 Materials and Methods

The proposed flow chart for hyperspectral image classification is shown in Fig. 39.1. Initially, principal component analysis is used to decrease the original HSI data's high spectral dimensionality. A morphological profile is computed for each of the major components and then layered to create the extended morphological profile (EMP), from which the extended EMP cube is constructed. Dilation and erosion are the fundamental morphological processes. They are expressed using a kernel operation on an input binary image, with white pixels representing uniform areas and black pixels representing region borders.

Fig. 39.1 The flow chart of the proposed network: airborne hyperspectral image → image preprocessing (create 3D image cubes) → extended morphological feature extraction (erosion, dilation) → classification algorithms (EMP-2DCONV, EMP-3DCONV, EMP-3DHybridSN) → performance evaluation (OA, AA, Kappa)

In a conceptual sense, erosion and dilation work by moving a structuring element across the image and evaluating kernel and image coordinates [31]. Erosion and dilation can be used in conjunction to accomplish specific filtering tasks; opening, closing, and boundary detection are the most often utilized combinations. At pixel x in the HSI image, the opening profile (OP) and closing profile (CP) are specified as n-dimensional vectors, as shown in Eqs. (39.1) and (39.2) [32]:

OP_i = \gamma_R^{(i)}(x), \quad \forall i \in [0, n]   (39.1)

CP_i = \phi_R^{(i)}(x), \quad \forall i \in [0, n]   (39.2)

where \gamma_R^{(i)} and \phi_R^{(i)} are the opening and closing reconstructions, and n is the number of openings and closings, respectively. By stacking OP and CP around the original image I, the morphological profile is given by Eq. (39.3):

MP(x) = \{OP_n(x), \ldots, I(x), \ldots, CP_n(x)\}   (39.3)

By completing the MPs for every principal component, the EMP is introduced with the help of a 3D cube.
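A minimal sketch of how Eq. (39.3) can be realized with scikit-image is given below; it is not the authors' code, the disc radii are illustrative, and the operators shown are the standard openings/closings by reconstruction rather than the paper's partial-reconstruction variant.

```python
# Minimal EMP sketch with scikit-image openings/closings by reconstruction.
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def morphological_profile(pc_image: np.ndarray, radii=(2, 4, 6, 8)) -> np.ndarray:
    """Stack openings/closings by reconstruction around the original image, Eq. (39.3)."""
    openings, closings = [], []
    for r in radii:
        se = disk(r)
        # Opening by reconstruction: erode, then reconstruct by dilation under the image.
        openings.append(reconstruction(erosion(pc_image, se), pc_image, method="dilation"))
        # Closing by reconstruction: dilate, then reconstruct by erosion above the image.
        closings.append(reconstruction(dilation(pc_image, se), pc_image, method="erosion"))
    return np.stack(openings[::-1] + [pc_image] + closings, axis=-1)

def extended_morphological_profile(pcs: np.ndarray) -> np.ndarray:
    """EMP cube: concatenate the MP of every principal component (H, W, bands)."""
    return np.concatenate([morphological_profile(pcs[..., i])
                           for i in range(pcs.shape[-1])], axis=-1)
```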

39.2.1 EM-Hybrid 2D-3D-CNN Architecture

The Indian Pines [30] airborne hyperspectral dataset was utilized in this experiment. Here, the 145 × 145 × 200 high-dimensional bands of Indian Pines are converted to 145 × 145 × 15 bands using principal component analysis [31]. This output is converted to 3D data cubes of size 25 × 25 × 15 × 1; the detailed layer-by-layer process of the proposed network is shown in Table 39.1. The cube of morphological profiles is reshaped and transferred to the suggested hybrid network. In Fig. 39.2, the ground truth for a sample image from this dataset is shown. The Indian Pines dataset was acquired using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor; this data was gathered in 1992 over north-western Indiana, USA. As shown in Fig. 39.2, each sample contains 224 spectral bands

Table 39.1 Proposed EM-hybrid architecture

Layer                Input shape          Filter size (F1, F2, F3, D)   Output shape
Conv3D_1             25 × 25 × 15 × 1     3 × 3 × 7 × 8                 23 × 23 × 9 × 8
Conv3D_2             23 × 23 × 9 × 8      3 × 3 × 5 × 16                21 × 21 × 5 × 16
Conv3D_3             21 × 21 × 5 × 16     3 × 3 × 3 × 32                19 × 19 × 3 × 32
Reshape 3D to 2D     19 × 19 × 3 × 32     –                             19 × 19 × 96
Conv2D_1             19 × 19 × 96         3 × 3 × 64                    17 × 17 × 64
Flatten layer        17 × 17 × 64         –                             18496 × 1
Dense layer_1        18496 × 1            256                           256 × 1
Dense layer_2        256 × 1              128                           128 × 1
Final output layer   128 × 1              16                            16 × 1

Each convolutional output dimension follows A = (W − F + 2P)/S + 1 with stride S = 1 and padding P = 0. For example, Conv3D_1 maps W1, W2 = 25 and W3 = 15 through F1, F2 = 3 and F3 = 7 to A = B = (25 − 3 + 0)/1 + 1 = 23 and C = (15 − 7 + 0)/1 + 1 = 9. The reshape layer folds the spectral depth into channels (P = A = 19, Q = B = 19, R = C × D = 3 × 32 = 96), and the flatten layer yields P × Q × R = 17 × 17 × 64 = 18496 features.
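The layer stack of Table 39.1 maps directly onto a Keras model, as in the sketch below, which reproduces the printed shapes; the activation functions, optimizer, and loss are assumptions, since they are not specified in the table.

```python
# Minimal Keras sketch of the Table 39.1 layer stack (a HybridSN-style network).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_em_hybrid(input_shape=(25, 25, 15, 1), n_classes=16) -> tf.keras.Model:
    inp = layers.Input(shape=input_shape)
    x = layers.Conv3D(8,  (3, 3, 7), activation="relu")(inp)  # -> 23 x 23 x 9 x 8
    x = layers.Conv3D(16, (3, 3, 5), activation="relu")(x)    # -> 21 x 21 x 5 x 16
    x = layers.Conv3D(32, (3, 3, 3), activation="relu")(x)    # -> 19 x 19 x 3 x 32
    x = layers.Reshape((19, 19, 3 * 32))(x)                   # -> 19 x 19 x 96
    x = layers.Conv2D(64, (3, 3), activation="relu")(x)       # -> 17 x 17 x 64
    x = layers.Flatten()(x)                                   # -> 18496
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_em_hybrid().summary()
```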


Fig. 39.2 Ground truth map of three Indian pines airborne hyperspectral remote sensing scenes

with a spatial dimension of 145 × 145 and associated ground truth data. The training and testing samples used in the proposed work are shown in Table 39.2.

Table 39.2 Training and testing samples used in the proposed work [29] (Indian Pines hyperspectral data)

Class name                     Training samples   Test samples   Total samples
Alfalfa                        13                 32             45
Corn-notill                    413                1000           1413
Corn-mintill                   262                581            843
Corn                           59                 166            225
Grass-pasture                  135                338            473
Grass-trees                    202                511            713
Grass-pasture-mowed            10                 20             30
Hay-windrowed                  120                335            455
Oats                           −1                 14             13
Soybean-notill                 348                680            1028
Soybean-mintill                701                1719           2420
Soybean-clean                  198                415            613
Wheat                          62                 143            205
Woods                          409                886            1295
Buildings grass trees drives   120                270            390
Stone steel towers             28                 65             93

39.3 Results and Discussion

As shown in Fig. 39.3, most of the primary visual information contained inside the hypercube is captured by utilizing only six components. As a result, the EMP technique is employed on these six images as the primary collection of characteristics; the variance explained by each principal component is listed in Table 39.3. In our experiments, a disc-shaped structuring element with a diameter of 6 pixels is used, with a 2-pixel increment step and four openings and closings. Each MP has 2n + 1 features, where n denotes the number of openings/closings. The EMP has a total of (2n + 1) × p features, where p indicates the number of principal components utilized to construct the EMP. Let X ∈ R^(145×145×200) be the data cube for Indian Pines, where 145 × 145 denotes the spatial dimension and 200 denotes the number of spectral bands. By utilizing sparse PCA, the reduced data X ∈ R^(145×145×15) was retrieved. In addition, morphological profiles are generated based on the spectral bands obtained through dimension reduction; the acquired principal components are then stacked to build the 54 EMP features shown in Fig. 39.4. Thus, the retrieved 9375 features are reshaped into a 4D tensor of shape 25 × 25 × 15 × 1 as the input to the proposed hybrid network. Figure 39.5 shows the confusion matrix of the EMP-3D-hybrid classifier [30, 31]. In the proposed method, all the testing samples of the alfalfa, grass/trees, hay-windrowed, oats, and woods classes are correctly classified because the absorption spectra of these classes are entirely different; their samples show a wide variety of spectral reflectance with different spatial and range values. In the case of bldg-grass-tree-drives, out of 1000 samples, 996 samples are correctly classified, and the remaining samples are misclassified as oats and soybeans.

Fig. 39.3 6 Principal components images of Indian pines [29]

Table 39.3 Principal components and their variance [29]

Principal component      Variance
Principal component 1    0.69
Principal component 2    0.24
Principal component 3    0.02
Principal component 4    0.01
Principal component 5    0.01
Principal component 6    0.001


Fig. 39.4 54 EMP features [29]

Fig. 39.5 Confusion matrix for 3D-CNN with EMP



Table 39.4 Performance metrics of airborne Indian Pines hyperspectral data

Metric              2D-CNN   3D-CNN   EMP-3D-hybrid
Kappa coefficient   93.14    94.32    99.34
Overall accuracy    93.87    96.35    99.42
Average accuracy    94.77    95.12    99.17

In corn, 557 samples are correctly classified out of 581 samples, but one sample of corn-mintill is misclassified as buildings-grass-trees-drives; the absorption characteristics of corn-mintill and corn are similar. An overall accuracy of 99.42% is achieved, as shown in Table 39.4. Similarly, the average accuracy is also high compared to the other methods, because the EMP features and the EMP-3D-hybrid network make good use of the training-to-testing ratio.
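The three reported metrics (OA, AA, and Kappa) can be computed from the predicted and true test labels as in the following sketch, which uses scikit-learn and is not the authors' code.

```python
# Minimal sketch of the reported metrics: overall accuracy (OA), average
# accuracy (AA, mean per-class recall), and Cohen's kappa.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

def report_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    cm = confusion_matrix(y_true, y_pred)
    per_class_acc = cm.diagonal() / cm.sum(axis=1)  # recall of each class
    return {
        "overall_accuracy": accuracy_score(y_true, y_pred),
        "average_accuracy": per_class_acc.mean(),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }
```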

39.4 Conclusion

The EMP-3D-HybridSN classifier is used in this work to achieve higher classification accuracy for airborne HSI datasets. It is a hybrid deep learning classifier for HSI data that is well suited to categorizing urban land use and requires a low degree of computing complexity during deployment. The gradient descent algorithm is utilized for training one-vs-all 16-class classifiers for EMP-3D-Hybrid classification; gradient descent can work with gigabytes of training images in few iterations. As a result, an overall accuracy of 99.42% is obtained with a small number of testing samples, and the average accuracy is also high compared to the other methods because the EMP features and the EMP-3D-hybrid network make good use of the training-to-testing ratio.

References

1. Cao, X., Yao, J., Xu, Z., Meng, D.: Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 1–13 (2020). https://doi.org/10.1109/tgrs.2020.2964627
2. Hu, W., Huang, Y., Wei, L., Zhang, F., Li, H.: Deep convolutional neural networks for hyperspectral image classification. J. Sensors 2015, 258619 (2015)
3. Cao, X., Zhou, F., Xu, L., Meng, D., Xu, Z., Paisley, J.: Hyperspectral image classification with Markov random fields and a convolutional neural network. IEEE Trans. Image Process. 27(5), 2354–2367 (2018)
4. Hamida, A.B., Benoit, A., Lambert, P., Amar, C.B.: 3-D deep learning approach for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 56(8), 4420–4434 (2018)
5. Lee, H., Kwon, H.: Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 26(10), 4843–4855 (2017)


6. Zhang, H., Li, Y., Zhang, Y., Shen, Q.: Spectral–spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 8(5), 438–447 (2017)
7. Yue, J., Zhao, W., Mao, S., Liu, H.: Spectral–spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 6(6), 468–477 (2015)
8. Lin, Z., Chen, Y., Zhao, X., Wang, G.: Spectral–spatial classification of the hyperspectral image using autoencoders. In: Proceedings of IEEE 9th International Conference on Information Communication Signal Processing (ICICS), Dec 2013, pp. 1–5
9. Yue, J., Mao, S., Li, M.: A deep learning framework for hyperspectral image classification using spatial pyramid pooling. Remote Sens. Lett. 7(9), 875–884 (2016)
10. Sun, S., Zhong, P., Xiao, H., Wang, R.: An MRF model-based active learning framework for the spectral-spatial classification of hyperspectral imagery. IEEE J. Sel. Top. Signal Process. 9(6), 1074–1088 (2015)
11. Li, J., Bioucas-Dias, J.M., Plaza, A.: Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning. IEEE Trans. Geosci. Remote Sens. 48(11), 4085–4098 (2010)
12. Li, J., Bioucas-Dias, J.M., Plaza, A.: Hyperspectral image segmentation using a new Bayesian approach with active learning. IEEE Trans. Geosci. Remote Sens. 49(10), 3947–3960 (2011)
13. Li, J.: Active learning for hyperspectral image classification with stacked autoencoders based neural network. In: Proceedings of 7th Workshop Hyperspectral Image Signal Processing, Evolution Remote Sensing (WHISPERS), Jun 2015, pp. 1–4
14. Liu, P., Zhang, H., Eom, K.B.: Active deep learning for classification of hyperspectral images. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 10(2), 712–724 (2017)
15. Haut, J.M., Paoletti, M.E., Plaza, J., Li, J., Plaza, A.: Active learning with convolutional neural networks for hyperspectral image classification using a new Bayesian approach. IEEE Trans. Geosci. Remote Sens. 56(11), 6440–6461 (2018)
16. Li, J., Bioucas-Dias, J.M., Plaza, A.: Spectral–spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. IEEE Trans. Geosci. Remote Sens. 50(3), 809–823 (2012)
17. Makantasis, K., Karantzalos, K., Doulamis, A., Doulamis, N.: Deep supervised learning for hyperspectral data classification through convolutional neural networks. In: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (2015). https://doi.org/10.1109/igarss.2015.7326945
18. Chen, Y., Zhao, X., Jia, X.: Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 8(6), 2381–2392 (2015)
19. Midhun, M., Nair, S.R., Prabhakar, V., Kumar, S.S.: Deep model for classification of the hyperspectral image using restricted Boltzmann machine. Proc. ACM Int. Conf. Interdiscipl. Adv. Appl. Comput. 2014, 35 (2014)
20. Chang, C.I.: Hyperspectral Data Processing: Algorithm Design and Analysis. John Wiley and Sons (2013)
21. Camps-Valls, G., Bruzzone, L.: Kernel-based methods for hyperspectral image classification. IEEE Trans. Geos. Rem. Sens. 43 (2005)
22. Camps-Valls, G., Bruzzone, L.: Kernel Methods for Remote Sensing Data Analysis. J. Wiley and Sons, NJ, USA (2009)
23. Camps-Valls, G., Tuia, D., Bruzzone, L., Atli Benediktsson, J.: Advances in hyperspectral image classification: earth monitoring with statistical learning methods. Signal Process. Mag. 31(1), 5–54 (2014)
24. Lecun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
25. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
26. Hinton, G., Osindero, S., Teh, Y.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
27. Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. Adv. NIPS 19, 153–160 (2007)

522

R. Anand et al.

28. Chen, Y., Lin, Z., Zhao, X., Wang, G., Yanfeng, G.: Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7(6), 2094–2107 (2014) 29. Anand, R., Veni, S., Geetha, P., Subramoniam, S.R.: Comprehensive morphological profiles analysis of airborne hyperspectral image classification using machine learning algorithms. Int. J. Intell. Netw. 2, 1–6 (2021) 30. Anand, R., Veni, S., Aravinth, J.: Robust classification technique for hyperspectral images based on 3D-discrete wavelet transform. Remote Sens. 13(7), 1255 (2021) 31. Sunil, A., Sajithvariyar, V.V., Sowmya, V., Sivanpillai, R., Soman, K.P.: Identifying oil pads in high spatial resolution aerial images using faster R-CNN. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 32. Jayaprakash, C., Damodaran, B.B., Viswanathan, S., Soman, K.P.: Randomized independent component analysis and linear discriminant analysis dimensionality reduction methods for hyperspectral image classification. J. Appl. Remote Sens. 14(3), 036507 (2020)

Chapter 40

Numerical Approximation of System of Singularly Perturbed Convection–Diffusion Problems on Different Layer-Adapted Meshes

Sonu Bose and Kaushik Mukherjee

Abstract The aim of this chapter is to investigate efficient numerical approximation of a coupled system of singularly perturbed convection–diffusion boundary-value problems (BVPs) as well as initial-boundary-value problems (IBVPs) for perturbation parameters of different orders of magnitude. Both the system of BVPs and the system of IBVPs are discretized by classical finite difference schemes. To accomplish the purpose, we construct two meshes, the generalized S-mesh and the Vulanović L-mesh, that are appropriately adapted to the overlapping boundary layers for different values of the perturbation parameters. Through the numerical experiments, we investigate uniform convergence of the proposed finite difference schemes in the discrete supremum norm on both layer-adapted meshes. Further, a faster convergence phenomenon is demonstrated for the finite difference approximation on the generalized S-mesh.

40.1 Introduction

Numerical approximation of coupled systems of differential equations has always been a subject of interest to many researchers due to the application of these types of differential equations in modeling various physical problems that arise in biology, epidemiology, ecology, nuclear engineering, etc. For instance, one can consider the reaction–diffusion model of human immunodeficiency virus transmission (see [8]) and the nuclear reactor model in space-time-dependent nuclear reactor dynamics (see [2]). On the other hand, when the small perturbation parameters have different orders of magnitude, such a system of differential equations behaves like a singularly perturbed problem whose solutions possess overlapping boundary


layers. Due to this special characteristic, obtaining an efficient numerical solution of these types of problems becomes an extremely challenging task. In the literature, many research works by several authors are reported on the numerical solution of singularly perturbed systems of ordinary differential equations (ODEs) as well as partial differential equations (PDEs). For instance, Cen in [1] considers a finite difference scheme to solve a singularly perturbed system of BVPs consisting of convection–diffusion ODEs. The analysis is performed on the piecewise-uniform Shishkin mesh, and it is shown that the accuracy of the method is O(N⁻¹ ln N) in the discrete supremum norm. To obtain uniformly convergent numerical solutions for the singularly perturbed system of IBVPs consisting of convection–diffusion PDEs, Singh and Natesan in [4, 5] analyzed first-order and second-order finite difference schemes, respectively, on the piecewise-uniform Shishkin mesh. In this chapter, we computationally investigate numerical solutions of the singularly perturbed system of convection–diffusion BVPs of the form (40.1)–(40.3) and the system of convection–diffusion IBVPs of the form (40.4)–(40.6), by introducing the Vulanović L-mesh and the generalized S-mesh for different values of the perturbation parameters. We discretize the system of BVPs by the standard upwind finite difference scheme and the system of IBVPs by an implicit upwind finite difference scheme. The generalized S-mesh, a generalized form of the piecewise-uniform Shishkin mesh, was introduced by Vulanović in [7] for numerical approximation of a class of two-point BVPs. Further, Vulanović in [6] analyzes the numerical solution of a class of two-point BVPs on a modified version of the Shishkin mesh, which we denote by Vulanović L-mesh. The implementation of the generalized S-mesh as well as the Vulanović L-mesh for solving singularly perturbed systems of convection–diffusion problems with different values of the perturbation parameters is done for the first time in this article. The article is organized as follows: In Sect. 40.2, we consider the continuous problem consisting of the singularly perturbed system of BVPs. Section 40.3 describes the generalized S-mesh and the Vulanović L-mesh; the finite difference scheme is also introduced for discretization of the continuous problem. Numerical results for the system of BVPs are provided in Sect. 40.4. In Sect. 40.5, we consider the continuous problem consisting of the singularly perturbed system of IBVPs. Section 40.6 describes discretization of the domain and introduces the finite difference scheme for discretization of the continuous problem. Numerical results for the system of IBVPs are provided in Sect. 40.7. We conclude this chapter with brief remarks in Sect. 40.8.

40.2 The Continuous Problem-I

Consider the following class of singularly perturbed systems of boundary-value problems (BVPs) on the domain Ω = (0, 1):

$$\mathbf{L}_{\varepsilon}\vec{y} := -E\,\frac{d^{2}\vec{y}(x)}{dx^{2}} + P(x)\,\frac{d\vec{y}(x)}{dx} + Q(x)\,\vec{y}(x) = \vec{f}(x), \quad x \in \Omega, \qquad \vec{y}(0) = \vec{0},\ \vec{y}(1) = \vec{0}, \tag{40.1}$$

where $\vec{y} = (y_1, y_2)^{T}$ and the coefficient matrices are given by

$$E = \begin{pmatrix} \varepsilon_1 & 0 \\ 0 & \varepsilon_2 \end{pmatrix}, \quad P(x) = \begin{pmatrix} p_{11}(x) & 0 \\ 0 & p_{22}(x) \end{pmatrix}, \quad Q(x) = \begin{pmatrix} q_{11}(x) & q_{12}(x) \\ q_{21}(x) & q_{22}(x) \end{pmatrix}.$$

Without loss of generality, we assume that $\varepsilon_1, \varepsilon_2$ satisfy $0 < \varepsilon_1 \le \varepsilon_2 \ll 1$, and

$$p_{11}(x) \ge p_0 > 0, \qquad p_{22}(x) \ge p_0 > 0. \tag{40.2}$$

In addition, we assume that the matrix $Q = \{q_{ij}\}_{i,j=1}^{2}$ is an $L_0$ matrix (off-diagonal entries are non-positive, diagonal entries are positive) with

$$\min_{x \in \overline{\Omega}} \left\{ \sum_{j=1}^{2} q_{1j}(x),\ \sum_{j=1}^{2} q_{2j}(x) \right\} \ge q_0 > 0. \tag{40.3}$$

It is assumed that the entries of the coefficient matrices and the source term $\vec{f}(x) = (f_1(x), f_2(x))^{T}$ are sufficiently smooth functions. The asymptotic behavior of the solutions $y_1(x), y_2(x)$ of the system of BVPs (40.1)–(40.3) and their derivatives can be established by following the approach given in (Lemma 2, [1]). From this result, one can see that the solutions $y_1, y_2$ of the system of BVPs (40.1)–(40.3) possess overlapping boundary layers near the boundary point $x = 1$.

40.3 The Discrete Problem

Here, we give the construction of the mesh $\overline{\Omega}^{N} = \{x_i\}_{i=0}^{N}$ on the domain $\overline{\Omega} = [0, 1]$. Let $h_i = x_i - x_{i-1}$, $i = 1, \ldots, N$, denote the step sizes on $\overline{\Omega}^{N}$.

40.3.1 The Generalized S-mesh

We denote $\ln_1 N = \ln N$, $\ln_2 N = \ln(\ln N)$, and $\ln_i N = \ln(\ln_{i-1} N)$ for $i = 2, 3, \ldots, K$, where $K$ is a positive integer such that $0 < \ln_K N < 1$. Then $k$, $0 < k \le K$, is a fixed integer independent of $N$. We define the transition parameters by

$$\tau^{1}_{\varepsilon_2} = \min\left\{\frac{1}{2},\ \frac{2\varepsilon_2}{p_0}\ln N\right\}, \qquad \tau^{i}_{\varepsilon_2} = \min\left\{\frac{\tau^{i-1}_{\varepsilon_2}}{2},\ \frac{2\varepsilon_2}{p_0}\ln_i N\right\}, \quad i = 2, \ldots, k,$$

$$\tau^{1}_{\varepsilon_1} = \min\left\{\frac{\tau^{k}_{\varepsilon_2}}{2},\ \frac{2\varepsilon_1}{p_0}\ln N\right\}, \qquad \tau^{i}_{\varepsilon_1} = \min\left\{\frac{\tau^{i-1}_{\varepsilon_1}}{2},\ \frac{2\varepsilon_1}{p_0}\ln_i N\right\}, \quad i = 2, \ldots, k.$$

The generalized S-mesh is constructed by dividing the domain $[0, 1]$ into $2k + 1$ subdomains, namely $[0, 1-\tau^{1}_{\varepsilon_2}]$, $[1-\tau^{1}_{\varepsilon_2}, 1-\tau^{2}_{\varepsilon_2}]$, \ldots, $[1-\tau^{k}_{\varepsilon_2}, 1-\tau^{1}_{\varepsilon_1}]$, \ldots, $[1-\tau^{k}_{\varepsilon_1}, 1]$. Then, we divide $[0, 1-\tau^{1}_{\varepsilon_2}]$ into $N/2$ mesh intervals, and divide each of the other $2k$ subintervals into $N_j$, $j = 1, 2, \ldots, 2k$, mesh intervals such that $N_1 + N_2 + \cdots + N_{2k} = N/2$. Let $r_m = N_m/N$, $m = 1, 2, \ldots, 2k$; note that $r_1 + r_2 + \cdots + r_{2k} = \tfrac{1}{2}$. The generalized S-mesh is denoted by the S(k) mesh. In particular, the S(1) mesh is the same as the standard Shishkin mesh on $[0, 1]$ (Fig. 40.1).

Fig. 40.1 Generalized S-mesh
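To make the construction concrete, the following is a minimal Python sketch (ours, not code from the chapter) of generating the S(k) mesh points; the equal split N_j = N/(4k), which matches the choices used in the experiments below, and exact divisibility of N are our assumptions.

```python
import numpy as np

def iterated_log(N, i):
    """ln_i N: ln_1 N = ln N, ln_i N = ln(ln_{i-1} N)."""
    v = np.log(N)
    for _ in range(i - 1):
        v = np.log(v)
    return v

def s_mesh(N, k, eps1, eps2, p0):
    """Generalized S(k) mesh on [0, 1]; k = 1 gives the standard Shishkin mesh."""
    # Transition parameters for eps2, then eps1 (finer layers toward x = 1).
    tau2 = [min(0.5, (2 * eps2 / p0) * np.log(N))]
    for i in range(2, k + 1):
        tau2.append(min(tau2[-1] / 2, (2 * eps2 / p0) * iterated_log(N, i)))
    tau1 = [min(tau2[-1] / 2, (2 * eps1 / p0) * np.log(N))]
    for i in range(2, k + 1):
        tau1.append(min(tau1[-1] / 2, (2 * eps1 / p0) * iterated_log(N, i)))
    # Breakpoints 0, 1 - tau^1_{eps2}, ..., 1 - tau^k_{eps2}, 1 - tau^1_{eps1}, ..., 1.
    bps = [0.0] + [1 - t for t in tau2] + [1 - t for t in tau1] + [1.0]
    # N/2 intervals on the coarse part, N/(4k) on each of the 2k fine parts.
    counts = [N // 2] + [N // (4 * k)] * (2 * k)   # assumes equal N_j
    pieces = [np.linspace(a, b, n + 1)[:-1]
              for a, b, n in zip(bps[:-1], bps[1:], counts)]
    return np.concatenate(pieces + [np.array([1.0])])

x = s_mesh(N=64, k=2, eps1=2**-16, eps2=2**-8, p0=0.5)  # N + 1 mesh points
```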

40.3.2 The Vulanović L-mesh

Let $L = L(N)$ satisfy $\ln(\ln N) \le L \le \ln N$ and $e^{-L} \le \frac{L}{N}$. The mesh is constructed using a mesh generating function $\mathcal{V}$, depending on the transition points:

$$\tau^{L}_{\varepsilon_2} = \min\left\{\frac{1}{2},\ \frac{2}{p_0}\,\varepsilon_2 L\right\}, \qquad \tau^{L}_{\varepsilon_1} = \min\left\{\frac{\tau^{L}_{\varepsilon_2}}{2},\ \frac{2}{p_0}\,\varepsilon_1 L\right\}.$$

The mesh points in $\overline{\Omega}^{N}$ are defined by $x_i = \mathcal{V}(i/N)$, $i = 0, 1, \ldots, N$, where the mesh generating function $\mathcal{V} \in C^{2}[0, 1]$ is given by

$$\mathcal{V}(\xi) = \begin{cases} -8\left(1 - 2\tau^{L}_{\varepsilon_2}\right)\left(1/2 - \xi\right)^{3} - 2\tau^{L}_{\varepsilon_2}\left(1/2 - \xi\right) + 1 - \tau^{L}_{\varepsilon_2}, & 0 \le \xi \le 1/2, \\ 1 - \tau^{L}_{\varepsilon_2} + 4\left(\tau^{L}_{\varepsilon_2} - \tau^{L}_{\varepsilon_1}\right)(\xi - 1/2), & 1/2 \le \xi \le 3/4, \\ 1 - \tau^{L}_{\varepsilon_1} + 4\tau^{L}_{\varepsilon_1}(\xi - 3/4), & 3/4 \le \xi \le 1. \end{cases}$$

The Vulanović L-mesh is constructed by dividing the domain $[0, 1]$ into three subdomains, namely $[0, 1-\tau^{L}_{\varepsilon_2}]$, $[1-\tau^{L}_{\varepsilon_2}, 1-\tau^{L}_{\varepsilon_1}]$, and $[1-\tau^{L}_{\varepsilon_1}, 1]$. Then, we divide $[0, 1-\tau^{L}_{\varepsilon_2}]$ into $N/2$ subintervals and divide each of the other two subintervals into $N/4$ mesh intervals. The Vulanović L-mesh and the standard Shishkin mesh are identical on $[1-\tau^{L}_{\varepsilon_2}, 1]$ when $L = \ln N$; but the coarse part of the Vulanović L-mesh is a smooth continuation of the fine part and is no longer equidistant (Fig. 40.2).

Fig. 40.2 Vulanović L-mesh
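A corresponding sketch for the Vulanović L-mesh follows; again this is our own illustrative code, not the authors', and the fixed-point computation of L* anticipates the choice e^(−L*) = L*/N used in the experiments of Sects. 40.4 and 40.7.

```python
import numpy as np

def vulanovic_L_mesh(N, eps1, eps2, p0, L):
    """Vulanović L-mesh x_i = V(i/N); assumes ln(ln N) <= L <= ln N."""
    t2 = min(0.5, (2.0 / p0) * eps2 * L)        # tau^L_{eps2}
    t1 = min(t2 / 2.0, (2.0 / p0) * eps1 * L)   # tau^L_{eps1}
    def V(xi):
        if xi <= 0.5:    # cubic piece: smoothly graded coarse part
            return -8*(1 - 2*t2)*(0.5 - xi)**3 - 2*t2*(0.5 - xi) + 1 - t2
        if xi <= 0.75:   # fine part resolving the eps2-layer
            return 1 - t2 + 4*(t2 - t1)*(xi - 0.5)
        return 1 - t1 + 4*t1*(xi - 0.75)        # finest part, eps1-layer
    return np.array([V(i / N) for i in range(N + 1)])

def L_star(N, iters=50):
    """L with e^{-L} = L/N, via the fixed-point iteration L = ln(N/L)."""
    L = np.log(N)
    for _ in range(iters):
        L = np.log(N / L)
    return L

x = vulanovic_L_mesh(N=64, eps1=2**-16, eps2=2**-8, p0=0.5, L=L_star(64))
```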

40.3.3 The Finite Difference Scheme

For a given mesh function $v(x_i)$, $x_i \in \overline{\Omega}^{N}$, let us define the forward, backward, and central difference operators $D^{+}_{x}$, $D^{-}_{x}$, and $\delta^{2}_{x}$, respectively, by

$$D^{+}_{x}v(x_i) = \frac{v(x_{i+1}) - v(x_i)}{h_{i+1}}, \quad D^{-}_{x}v(x_i) = \frac{v(x_i) - v(x_{i-1})}{h_i}, \quad \delta^{2}_{x}v(x_i) = \frac{2\left(D^{+}_{x}v(x_i) - D^{-}_{x}v(x_i)\right)}{h_i + h_{i+1}}.$$

For discretization of the continuous problem, the standard upwind finite difference scheme takes the following form on the mesh $\overline{\Omega}^{N}$:

$$\mathbf{L}^{N}\vec{Y}(x_i) = \vec{f}(x_i), \quad i = 1, \ldots, N-1, \qquad \vec{Y}(x_0) = \vec{0},\ \vec{Y}(x_N) = \vec{0},$$

where $\vec{Y} = (Y_1, Y_2)^{T}$ and $\mathbf{L}^{N}\vec{Y} = -E\delta^{2}_{x}\vec{Y} + P(x_i)D^{-}_{x}\vec{Y} + Q(x_i)\vec{Y}$.

Lemma 1 (Discrete Maximum Principle) Assume that the discrete function $\{\vec{v}(x_i)\}_{i=0}^{N}$ satisfies $\vec{v} \ge \vec{0}$ on $\Gamma^{N}$ (the boundary points of $\overline{\Omega}^{N}$). Then, $\mathbf{L}^{N}\vec{v} \ge \vec{0}$ in $\Omega^{N}$ implies that $\vec{v} \ge \vec{0}$ at each point of $\overline{\Omega}^{N}$. This lemma implies parameter-uniform stability of the difference operator $\mathbf{L}^{N}$.
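The scheme leads to a block-tridiagonal linear system coupling Y1 and Y2 through Q(x_i). Below is a minimal dense NumPy sketch (ours, not the authors' code) of its assembly and solution on an arbitrary mesh; production code would use sparse storage.

```python
import numpy as np

def solve_upwind_system(x, E, P, Q, f):
    """Upwind FD solve of -E y'' + P(x) y' + Q(x) y = f(x), y(0) = y(1) = 0.

    x: mesh points (length N+1); E: (eps1, eps2); P, Q, f: callables
    returning a 2x2 diagonal matrix, a 2x2 matrix, and a length-2 array.
    """
    N = len(x) - 1
    h = np.diff(x)                      # h[i] = x[i+1] - x[i]
    n = 2 * (N - 1)                     # two components per interior node
    A = np.zeros((n, n)); b = np.zeros(n)
    for i in range(1, N):
        hi, hi1 = h[i - 1], h[i]        # h_i and h_{i+1} of the text
        Pi, Qi, fi = P(x[i]), Q(x[i]), f(x[i])
        for c in range(2):              # component c = 0, 1
            r = 2 * (i - 1) + c
            # -eps * delta_x^2: stencil weights on Y(i-1), Y(i), Y(i+1)
            cm = -E[c] * 2.0 / (hi * (hi + hi1))
            cp = -E[c] * 2.0 / (hi1 * (hi + hi1))
            cc = -(cm + cp)
            # + p_cc * D_x^-: backward (upwind) first difference
            cm += -Pi[c, c] / hi
            cc += Pi[c, c] / hi
            if i > 1:      A[r, r - 2] += cm   # boundary values are zero
            A[r, r] += cc
            if i < N - 1:  A[r, r + 2] += cp
            for j in range(2):          # Q couples the two components
                A[r, 2 * (i - 1) + j] += Qi[c, j]
            b[r] = fi[c]
    Y = np.linalg.solve(A, b).reshape(N - 1, 2)
    return np.vstack([[0.0, 0.0], Y, [0.0, 0.0]])   # append boundary rows
```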


40.4 Numerical Results

We conduct the numerical experiments for the following test example on the generalized S-mesh and the Vulanović L-mesh.

Example 1 Consider the system of BVPs:

$$-E\,\frac{d^{2}\vec{y}(x)}{dx^{2}} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\frac{d\vec{y}(x)}{dx} + \begin{pmatrix} 2 & -1 \\ -1 & 4 \end{pmatrix}\vec{y}(x) = \vec{f}(x), \quad x \in (0, 1), \qquad \vec{y}(0) = \vec{0},\ \vec{y}(1) = \vec{0},$$

where $f_1(x)$ and $f_2(x)$ are chosen such that

$$y_1(x) = \frac{1 - e^{-(1-x)/\varepsilon_1}}{1 - e^{-1/\varepsilon_1}} + \frac{1 - e^{-(1-x)/\varepsilon_2}}{1 - e^{-1/\varepsilon_2}} - 2\sin\left(\frac{\pi}{2}(1 - x)\right), \qquad y_2(x) = \frac{1 - e^{-(1-x)/\varepsilon_2}}{1 - e^{-1/\varepsilon_2}} - (1 - x)e^{x}.$$

We perform the numerical experiments by choosing the constant $p_0 = 1/2$. For the S(1) mesh, we choose $N_1 = N/4$ and $N_2 = N/4$. For the S(2) mesh, we choose $N_1 = N_2 = N_3 = N_4 = N/8$. Using $L < \ln N$ instead of $\ln N$, the Vulanović L-mesh provides a higher density of mesh points in the layer region. The smallest value of $L$ is chosen to be $L^{*} = L^{*}(N)$, which satisfies $e^{-L^{*}} = L^{*}/N$. The calculated maximum point-wise errors and the corresponding orders of convergence on the S(1) and S(2) meshes and the Vulanović L-mesh (with $L = L^{*}$) are presented in Tables 40.1 and 40.2 for Example 1. In order to reveal the numerical order of convergence, we plot the maximum point-wise errors on a loglog scale in Figs. 40.3 and 40.4.

Table 40.1 Maximum point-wise errors and the corresponding order of convergence of component Y1 for Example 1

                        S(1) mesh              S(2) mesh              Vulanović L-mesh
  N                     Error       Order      Error       Order      Error       Order
  (ε1, ε2) = (2^-16, 2^-8)
  64                    5.5147e−01  0.8182     3.6586e−01  0.8742     3.8666e−01  0.7787
  128                   3.1275e−01  0.8199     1.9959e−01  0.8913     2.2537e−01  0.7904
  256                   1.7716e−01  0.8355     1.0760e−01  0.9095     1.3031e−01  0.8103
  512                   9.9274e−02  0.8528     5.7280e−02  0.9252     7.4304e−02  0.8308
  (ε1, ε2) = (2^-32, 2^-24)
  64                    5.8272e−01  0.8885     3.9278e−01  0.9695     3.8755e−01  0.7897
  128                   3.1476e−01  0.8259     2.0058e−01  0.8986     2.2418e−01  0.7920
  256                   1.7756e−01  0.8369     1.0759e−01  0.9109     1.2947e−01  0.8112
  512                   9.9405e−02  0.8534     5.7221e−02  0.9259     7.3786e−02  0.8311

Table 40.2 Maximum point-wise errors and the corresponding order of convergence of component Y2 for Example 1

                        S(1) mesh              S(2) mesh              Vulanović L-mesh
  N                     Error       Order      Error       Order      Error       Order
  (ε1, ε2) = (2^-16, 2^-8)
  64                    1.2414e−01  0.6040     8.6230e−02  0.6443     9.0431e−02  0.5454
  128                   8.1670e−02  0.6554     5.5168e−02  0.7595     6.1963e−02  0.6741
  256                   5.1850e−02  0.7397     3.2586e−02  0.8370     3.8832e−02  0.7355
  512                   3.1049e−02  0.7927     1.8241e−02  0.8834     2.3322e−02  0.7819
  (ε1, ε2) = (2^-32, 2^-24)
  64                    1.2469e−01  0.6080     8.6030e−02  0.6493     9.0263e−02  0.5457
  128                   8.1810e−02  0.6534     5.4851e−02  0.7591     6.1836e−02  0.6737
  256                   5.2012e−02  0.7381     3.2408e−02  0.8374     3.8762e−02  0.7334
  512                   3.1181e−02  0.7913     1.8137e−02  0.8831     2.3314e−02  0.7809

40.5 The Continuous Problem-II

Consider the following class of singularly perturbed systems of parabolic initial-boundary-value problems (IBVPs) on the domain $D = \Omega \times (0, T]$, $\Omega = (0, 1)$:

$$\begin{cases} \dfrac{\partial \vec{y}}{\partial t} + \mathbf{L}_{x,\varepsilon}\vec{y} = \vec{f}(x, t), & (x, t) \in D, \\ \vec{y}(x, 0) = \vec{0}, & \text{on } \{(x, 0)\colon x \in [0, 1]\}, \\ \vec{y}(0, t) = \vec{0}, & \text{on } \{(0, t)\colon t \in [0, T]\}, \\ \vec{y}(1, t) = \vec{0}, & \text{on } \{(1, t)\colon t \in [0, T]\}, \end{cases} \tag{40.4}$$

where $\vec{y} = (y_1, y_2)^{T}$ and the spatial differential operator $\mathbf{L}_{x,\varepsilon}$ is given by

$$\mathbf{L}_{x,\varepsilon}\vec{y} = (\mathbf{L}_{x,\varepsilon_1}\vec{y},\ \mathbf{L}_{x,\varepsilon_2}\vec{y})^{T} = -E\,\frac{\partial^{2}\vec{y}}{\partial x^{2}} + P(x)\,\frac{\partial \vec{y}}{\partial x} + Q(x, t)\,\vec{y},$$

and the coefficient matrices are given by

$$E = \begin{pmatrix} \varepsilon_1 & 0 \\ 0 & \varepsilon_2 \end{pmatrix}, \quad P(x) = \begin{pmatrix} p_{11}(x) & 0 \\ 0 & p_{22}(x) \end{pmatrix}, \quad Q(x, t) = \begin{pmatrix} q_{11}(x, t) & q_{12}(x, t) \\ q_{21}(x, t) & q_{22}(x, t) \end{pmatrix}.$$

Without loss of generality, we assume that $\varepsilon_1, \varepsilon_2$ satisfy $0 < \varepsilon_1 \le \varepsilon_2 \ll 1$, and

$$p_{11}(x) \ge p_0 > 0, \qquad p_{22}(x) \ge p_0 > 0. \tag{40.5}$$

In addition, we assume that the matrix $Q(x, t) = \{q_{ij}\}_{i,j=1}^{2}$ is an $L_0$ matrix (off-diagonal entries are non-positive, diagonal entries are positive) with

$$\min_{(x,t) \in \overline{D}} \left\{ \sum_{j=1}^{2} q_{1j}(x, t),\ \sum_{j=1}^{2} q_{2j}(x, t) \right\} \ge q_0 > 0. \tag{40.6}$$


It is assumed that the entries of the coefficient matrices and the source term $\vec{f}(x, t) = (f_1(x, t), f_2(x, t))^{T}$ are sufficiently smooth functions. The existence and uniqueness of the solution of our model problem (40.4) can be guaranteed by sufficient smoothness of the initial and boundary data (see [3]). The asymptotic behavior of the analytical solutions $y_1(x, t), y_2(x, t)$ of the system of IBVPs (40.4)–(40.6) and their derivatives is discussed in ([4], Theorem 1).

40.6 The Discrete Problem

Consider the domain $\overline{D} = \overline{\Omega} \times [0, T]$. Here, we construct a rectangular mesh $\overline{D}^{N,\Delta t} = \overline{\Omega}^{N} \times [0, T]^{\Delta t}$.

40.6.2 The Finite Difference Scheme N ,Δt

For a given mesh function Z (xi , tn ), (xi , tn ) ∈ D , let us define the forward, backward, and central difference operators δx+ , δx− and δx2 in space and the backward difference operator δt− in time, respectively, by

δx+ Z (xi , tn ) = δx2 Z (xi , tn ) =

Z (xi+1 ,tn )−Z (xi ,tn ) (xi−1 ,tn ) , δx− Z (xi , tn ) = Z (xi ,tn )−Z , h i+1 hi 2(δx+ Z (xi ,tn )−δx− Z (xi ,tn )) Z (xi ,tn )−Z (xi ,tn−1 ) − and δ Z (x , t ) = . i n t h i +h i+1 Δt

For discretization of the continuous problem, the implicit upwind finite difference N ,Δt scheme takes the following form on the mesh D : ⎧ − → N ,Δt ⎪ ⎨ (δt− + Lx,ε )Y (xi , tn ) = f (xi , tn ), for i = 1, . . . , N − 1, Y (x0 , tn ) = 0, Y (x N , tn ) = 0, ⎪ ⎩ Y (x , 0) = 0, for n = 1, . . . , M, i

(40.7)


where $\vec{Y} = (Y_1, Y_2)^{T}$ and $\mathbf{L}^{N,\Delta t}_{x,\varepsilon}\vec{Y} = -E\delta^{2}_{x}\vec{Y} + P(x_i)\delta^{-}_{x}\vec{Y} + Q(x_i, t_n)\vec{Y}$. For the discrete problem (40.7), we have the following discrete maximum principle.

Lemma 2 (Discrete Maximum Principle) Assume that the discrete function $\{\vec{Z}(x_i, t_n)\}_{i=0,n=0}^{N,M}$ satisfies $\vec{Z} \ge \vec{0}$ on $\partial D^{N,\Delta t}$ (the boundary of $\overline{D}^{N,\Delta t}$). Then, $\left(\delta^{-}_{t} + \mathbf{L}^{N,\Delta t}_{x,\varepsilon}\right)\vec{Z} \ge \vec{0}$ in $D^{N,\Delta t}$ implies that $\vec{Z} \ge \vec{0}$ at each point of $\overline{D}^{N,\Delta t}$. This provides parameter-uniform stability of the difference operator $\left(\delta^{-}_{t} + \mathbf{L}^{N,\Delta t}_{x,\varepsilon}\right)$.

40.7 Numerical Results

We conduct the numerical experiments for the following test example on the generalized S-mesh and the Vulanović L-mesh.

Example 2 Consider the system of parabolic IBVPs:

$$\begin{cases} \dfrac{\partial \vec{y}}{\partial t} - E\,\dfrac{\partial^{2}\vec{y}}{\partial x^{2}} + \begin{pmatrix} 7 & 0 \\ 0 & 7 \end{pmatrix}\dfrac{\partial \vec{y}}{\partial x} + \begin{pmatrix} 9 + x & -8 \\ -4 & 5 + x \end{pmatrix}\vec{y} = \begin{pmatrix} t^{3}(1 - t) \\ x^{2}(1 - x)^{2} \end{pmatrix}, & (x, t) \in D = (0, 1) \times (0, 1], \\ \vec{y}(x, 0) = \vec{0}, \quad x \in [0, 1], \\ \vec{y}(0, t) = \vec{0}, \quad \vec{y}(1, t) = \vec{0}, \quad t \in (0, 1]. \end{cases}$$

As the exact solution of Example 2 is not known, to obtain the accuracy of the numerical solution and also to demonstrate the ε-uniform convergence of the proposed scheme, we use the double mesh principle. We perform the numerical experiments by choosing the constant $p_0 = 2$ and the time step $\Delta t = 1/M$, $M = N/4$. For the S(1) mesh, we choose $N_1 = N/4$ and $N_2 = N/4$. For the S(2) mesh, we choose $N_1 = N_2 = N_3 = N_4 = N/8$. Using $L < \ln N$ instead of $\ln N$, the Vulanović L-mesh provides a higher density of mesh points in the layer region. The smallest value of $L$ is chosen to be $L^{*} = L^{*}(N)$, which satisfies $e^{-L^{*}} = L^{*}/N$. The calculated maximum point-wise errors and the corresponding orders of convergence on the S(1) and S(2) meshes and the Vulanović L-mesh (with $L = L^{*}$) are presented in Tables 40.3 and 40.4 for Example 2. Further, in order to demonstrate the numerical order of convergence, we plot the maximum point-wise errors on a loglog scale in Figs. 40.5 and 40.6.
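For illustration, the convergence orders reported in the tables are consistent with the usual rate formula p_N = log2(e_N / e_2N); the small sketch below (ours, not the authors' code) applies it to one column of Table 40.3.

```python
import numpy as np

def double_mesh_orders(errors):
    """Orders p_N = log2(e_N / e_{2N}) from maximum point-wise errors."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# S(2) mesh, component Y1, (eps1, eps2) = (2^-20, 2^-10), N = 64..512:
errs = [4.6763e-04, 3.0110e-04, 1.8704e-04, 1.0859e-04]
print(double_mesh_orders(errs))  # ~ [0.6351, 0.6869, 0.7844], matching the table
```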

40.8 Observations and Concluding Remarks

From the numerical results, the following observations are made for both the coupled system of singularly perturbed BVPs and the coupled system of singularly perturbed IBVPs of convection–diffusion type.


Table 40.3 Maximum point-wise errors and the corresponding order of convergence of component Y1 for Example 2

                        S(1) mesh              S(2) mesh              Vulanović L-mesh
  N                     Error       Order      Error       Order      Error       Order
  (ε1, ε2) = (2^-20, 2^-10)
  64                    4.5979e−04  0.4676     4.6763e−04  0.6351     4.1656e−04  0.5631
  128                   3.3249e−04  0.5694     3.0110e−04  0.6869     2.8195e−04  0.5822
  256                   2.2405e−04  0.6218     1.8704e−04  0.7844     1.8832e−04  0.6775
  512                   1.4560e−04  0.7303     1.0859e−04  0.8460     1.1775e−04  0.7733
  (ε1, ε2) = (2^-40, 2^-30)
  64                    4.5948e−04  0.4674     4.6749e−04  0.6354     4.1626e−04  0.5633
  128                   3.3232e−04  0.5697     3.0094e−04  0.6867     2.8169e−04  0.5822
  256                   2.2389e−04  0.6207     1.8695e−04  0.7837     1.8815e−04  0.6768
  512                   1.4561e−04  0.7308     1.0859e−04  0.8462     1.1770e−04  0.7739

Table 40.4 Maximum point-wise errors and the corresponding order of convergence of component Y2 for Example 2

                        S(1) mesh              S(2) mesh              Vulanović L-mesh
  N                     Error       Order      Error       Order      Error       Order
  (ε1, ε2) = (2^-20, 2^-10)
  64                    3.0450e−04  0.3672     2.9964e−04  0.4667     2.3675e−04  0.3113
  128                   2.3607e−04  0.4692     2.1682e−04  0.6585     1.9079e−04  0.4896
  256                   1.7053e−04  0.5627     1.3736e−04  0.7362     1.3589e−04  0.6113
  512                   1.1545e−04  0.6947     8.2459e−05  0.8175     8.8952e−05  0.7415
  (ε1, ε2) = (2^-40, 2^-30)
  64                    3.0454e−04  0.3672     2.9968e−04  0.4668     2.3699e−04  0.3120
  128                   2.3610e−04  0.4693     2.1684e−04  0.6585     1.9090e−04  0.4897
  256                   1.7054e−04  0.5627     1.3737e−04  0.7362     1.3594e−04  0.6114
  512                   1.1546e−04  0.6948     8.2463e−05  0.8175     8.8979e−05  0.7415

• The calculated maximum point-wise errors decrease monotonically as N increases, which confirms that the proposed finite difference schemes converge uniformly irrespective of the diffusion parameters having different orders of magnitude.
• The proposed finite difference schemes are first-order uniformly convergent in the discrete supremum norm on both the generalized S-mesh and the Vulanović L-mesh.
• The accuracy of the numerical solutions on the Vulanović L-mesh is found to be better than that on the standard Shishkin mesh (S(1) mesh).
• Moreover, we observe that the numerical solutions are more accurate on the S(2) mesh in comparison with both the standard Shishkin mesh and the Vulanović L-mesh.


Fig. 40.5 Loglog plot of maximum point-wise errors for Y1 for ε1 = 2−40 , ε2 = 2−30 of Example 2

Fig. 40.6 Loglog plot of maximum point-wise errors for Y2 for ε1 = 2−40 , ε2 = 2−30 of Example 2

Our experiments indeed reveal that the generalized S-mesh plays a vital role not only in achieving uniformly convergent numerical solutions, but also in exhibiting a faster convergence phenomenon for the finite difference approximation of the coupled system of singularly perturbed problems. Therefore, we are further pursuing the theoretical analysis to establish the parameter-uniform error estimate for numerical approximation of singularly perturbed problems on the generalized S-mesh.

Acknowledgements The authors would like to express their sincere thanks to the reviewers for their valuable comments and suggestions. The first author wishes to acknowledge the Council of Scientific and Industrial Research, India, for the research grant 09/1187(0004)/2019-EMR-I.


References

1. Cen, Z.: Parameter-uniform finite difference scheme for a system of coupled singularly perturbed convection-diffusion equations. Int. J. Comput. Math. 82, 177–192 (2005)
2. Kastenberg, W.E., Chambré, P.L.: On the stability of nonlinear space-dependent reactor kinetics. Nucl. Sci. Eng. 31(1), 67–79 (1968)
3. Ladyzenskaja, O.A., Solonnikov, V.A., Ural'ceva, N.N.: Linear and Quasi-Linear Equations of Parabolic Type, Translations of Mathematical Monographs, vol. 23. American Mathematical Society (1968)
4. Singh, M.K., Natesan, S.: Numerical analysis of singularly perturbed system of parabolic convection-diffusion problem with regular boundary layers. Differ. Equ. Dyn. Syst. (2019). https://doi.org/10.1007/s12591-019-00462-2
5. Singh, M.K., Natesan, S.: A parameter-uniform hybrid finite difference scheme for singularly perturbed system of parabolic convection-diffusion problems. Int. J. Comput. Math. 97(4), 875–905 (2020)
6. Vulanović, R.: A higher-order scheme for quasilinear boundary value problems with two small parameters. Computing 67(4), 287–303 (2001)
7. Vulanović, R.: A priori meshes for singularly perturbed quasilinear two-point boundary value problems. IMA J. Numer. Anal. 21(1), 349–366 (2001)
8. Yang, Z.P., Pao, C.V.: Positive solutions and dynamics of some reaction diffusion models in HIV transmission. Nonlin. Anal. Theor. Methods Appl. 35(3) (1999)

Chapter 41

Role of Decision Making for Effective Health Care

Sabuzima Nayak, Manisha Panda, and Ripon Patgiri

Abstract A specific disease does not have a unique treatment because hundreds of factors influence a person's health. Thus, determining an effective and efficient treatment or medication for a patient requires a complex decision-making process. In the decision-making process, various parties such as industry, experts, clinicians, and patients are involved. Moreover, the huge volume of medical data and various barriers related to the collection of these data hinder the complete analysis of the data to generate medical evidence for treatment. In this chapter, we discuss some aspects of the decision-making process in health care. We elaborate on the transition of the decision-making process from the evidence-based practice approach to the practice-based evidence approach. Furthermore, a short discussion on the future of decision making, i.e., intelligent decision making, is presented.

41.1 Introduction

Initially, healthcare decision making was easy because it focused on one outcome, i.e., curing the patient of the disease. However, the process of decision making became complex with the advancement of research in the fields of medicine and health care. Decision making in health care includes many aspects; superficially, it can be considered that a decision is made for the most appropriate treatment and medication for a patient. The complexity of the healthcare decision-making process increases manyfold because even the parameters having a minor impact need to be evaluated carefully; a parameter having a minor impact on the decision may have a huge impact on the outcome. Furthermore, the outcome of the decision-making process for one patient may not be applicable to another patient in similar circumstances. For example, some patients having the same disease cannot be given


the same medicine because one may be allergic to one or more components of that medicine. Thus, along with cure, the patient’s overall well-being is equally essential, which makes the healthcare decision-making process complex.

41.2 Decision Making

The main aim of the healthcare system is to achieve health for everyone [14]. Health care aims at preventing the pain of patients using appropriate healthcare interventions. It also emphasizes treating the small populations who are suffering while gaining high benefit for large populations. Moreover, the healthcare system ensures high longevity by combining decision-making wisdom with the healthcare system to generate feasible solutions. During decision making, the factors considered at the individual level are the decision maker's general motivation, the criteria considered and the evidence related to every criterion, and the balance between his interest and the aim of the healthcare system. Similarly, the factors for healthcare decision making at the institutional level [13] are publicity of the rationales for choices, the importance of criteria agreed by the majority of stakeholders, revising old decisions based on new evidence, and implementing the decision taken. Medical decision making is the process used to decide the diagnosis of a particular medical condition from all the vitals available, while avoiding clinical mistakes as far as possible.

41.2.1 Real-World Data

Currently, the pharmaceutical industry, clinicians, and patients are using real-world data for decision making. Real-world data helps to determine the effectiveness, safety, and cost of a treatment in real time. They are healthcare delivery data that are periodically collected from different sources, for example, electronic health records, clinical registries, billing data, surveys, and surveillance related to healthcare activity. Real-world evidence is obtained after the analysis of these data. Medical treatment evidence provides information regarding effectiveness, benefits, risks, and safety. Some common challenges related to the collection and aggregation of medical evidence are as follows [23]:

• Data: It is recorded in different formats. Hence, preprocessing of the data is required before analysis. Data integrity, quality, and security are other important challenges that are faced during its acquisition.
• Gaps in expertise: The volume of data is huge, and all clinical data cannot be analyzed by a single expert. Different fields of data need to be analyzed by different experts. Thus, this gap in expertise, at times, leads to the generation of incorrect medical evidence.


• Observational research: To generate reliable medical evidence, observational research must follow proper standardized guidelines. The research should be transparent and solve the issues related to measurement error, selection bias, and missing data.
• Trust: In health care, various parties such as government, industry, hospitals, academia, and payers collaborate. This collaboration is sometimes complex and not transparent. Thus, building trust is difficult in such a relationship, which makes the sharing of all data and discussion difficult.

41.2.2 Missing Data

Decision making depends on data; hence, missing values have an adverse impact on the decision taken. A missing value refers to a missing measured parameter value. It needs to be taken care of because, in some cases, incomplete data cannot be ignored as they may contain some very important information, for example, primary vital signs (e.g., heart rate). Usually, a primary vital sign (say, a parameter) is influenced by many factors, such as physical activities, health conditions, and environment. Ignoring these factors produces biased estimates, which causes huge errors in health applications and enhances the error rates [1]; hence, the values of these factors are also required to reduce the errors in the data. Missing values are classified into three types [24], namely missing completely at random, missing at random, and not missing at random. Missing completely at random refers to a situation when the missing value is independent of any factor; it occurs due to some random event such as the monitoring device getting removed accidentally, Internet failure, etc. Missing at random occurs when the missing value is independent of any factor, but the reason is known, such as removing the monitoring device while taking a bath or while charging the device. Not missing at random occurs when the missing value is indirectly dependent on a factor, for example, a patient taking out the monitoring device to smoke; in such a situation, smoking is indirectly the reason for the missing value.

One of the common solutions for missing values is deletion [29]. The records having a missing value for any parameter are deleted to prevent that data from being included in the decision-making process. The deletion method is carried out in different ways. In some cases, every record containing a missing value is deleted, resulting in a reduction in the data during analysis. Another way is pairwise deletion, where the record is analyzed and then deleted; this results in the deletion of only some records in a dataset. Another solution is imputation, which fills in the missing value. Various imputation methods are mean imputation, cold-deck imputation, hot-deck imputation, and regression imputation [24, 34]. These are single imputation methods that result in biased estimates because they do not consider the variability of the missing values. Currently, the multiple imputation method is popular because it completes the dataset [12].
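As a toy illustration of the two basic strategies (ours, not from the chapter; the data and column names below are hypothetical), the following pandas sketch contrasts listwise deletion with single mean imputation:

```python
import numpy as np
import pandas as pd

# Hypothetical vitals with missing heart-rate readings.
df = pd.DataFrame({
    "heart_rate": [72, np.nan, 88, 79, np.nan, 91],
    "activity":   [1, 2, 3, 1, 2, 3],
})

# Deletion: drop any record with a missing value (reduces the data).
deleted = df.dropna()

# Single (mean) imputation: fills the gaps but ignores the variability
# of the missing values, which is why it tends to bias estimates.
imputed = df.fillna({"heart_rate": df["heart_rate"].mean()})
```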


41.3 Multimodal Data-Driven Approach

The multimodal data-driven approach has played a pivotal role in smart health care, from data management to decision making. This system has three different approaches [6]: (a) semantic perception and semantic alignment, (b) data fusion and cross-border association, and (c) intelligent decision.

41.3.1 Semantic Perception and Semantic Alignment

The healthcare data is collected from the client using open medical data, IoT, or wearable healthcare devices. The data is collected in its ecological environment for a realistic decision on the target. Privacy is maintained using cryptography with secret keys. Semantic perception uses different techniques such as latent semantic analysis [19], magnetic resonance imaging [28], Markov logic networks [8], and various machine learning techniques. Semantic alignment implements a distributed and parallel computing system for the data acquisition layer [37]. The data is retrieved from the sensors for feature extraction and semantic alignment using a modular coordinated transformation method. Association mining methods are applied to build a prediction model. This involves data mining techniques such as clustering and association for extracting the data and extracting useful information for the required treatment. Thus, involving cross-time-space, cross-modality, and cross-granularity in the perception mechanism, implementing feature extraction methods, and including a decision-oriented multimodal coupling framework with intelligent algorithms can help in developing a smart healthcare system.

41.3.2 Data Fusion and Cross-Border Knowledge Fusion

Multimodal data fusion helps in increasing the accuracy and comprehensiveness of the decision-making system. The data is acquired mainly from four different sources, namely in-hospital medical data, out-of-hospital medical data, data from wearable sensors, and open healthcare data. The best results are achieved in a framework-based approach, where medical images are usually decomposed into low- and high-frequency bands in the non-subsampled contourlet transform domain [3]. This process is also used in detecting Alzheimer's disease [25] by extracting supplemental information from the acquired data. Similarly, cross-border knowledge fusion includes ontology matching, where initially semantic similarities are matched [36]. The knowledge bases are used as the input to establish an efficient alignment of the cross-ontology categories and relations. In another framework, probabilistic risk assessment and neural networks are deployed to set up a knowledge fusion method integrating the previous data acquired from the database, which helps in increasing computational efficiency [9].


41.4 Practice-Based Evidence

For a long time, healthcare decisions have mainly depended on evidence-based practice (EBP) because it leads to a methodological approach, for example, clinical guidelines, to decide on the most appropriate treatment and medicine [27]. The general steps followed to practice EBP are formulating a clinical question, listing the available solutions, checking feasibility, and developing a plan for care [20]. The requirement for clinical information is converted to a question that needs to be solved. Then, all available research solutions which have the potential to provide answers to the question are searched and listed. In the next step, the feasibility of each solution is determined based on the evidence of validity, effect size, and applicability of the solution to the sample population under study. Finally, a plan for care is developed using the available evidence, expert knowledge, and the patient's values and circumstances. However, EBP has many issues that limit its application. Applying EBP to all patients and clinical conditions is difficult and time-consuming. Moreover, the studies and evidence related to the solutions are huge in volume, which further increases the time complexity. The clinical practice guideline (CPG) helps in reducing the time complexity by making the exploration, evaluation, appraisal, and synthesis of study material related to the solutions systematic [30], and it can be used as an alternative. However, CPG advocates the latest best available solution while disregarding some older solutions which may be more effective. Moreover, CPG favors higher-quality evidence while disregarding lower-quality evidence and expert opinion. These limitations of EBP and CPG resulted in the adoption of a new approach called the practice-based evidence approach.

The practice-based evidence (PBE) approach refines EBP by enhancing the clinical evidence with the doctor's practice. The outcome generated by the continuous treatment or care of patients is included in the result. PBE helps in including real-time data in healthcare decision making [18]. The most important advantage of PBE is collecting ethnic and racial disparities and adding them to the healthcare decision-making process. PBE focuses on the needs of patients, families, and clinicians during decision making. Doctors document their clinical practices in electronic health records while following systematic research or clinical guidelines. Electronic health records help in implementing data mining, data analytics, and statistics to extract information from the records. The PBE approach requires a large collection of electronic health records to decide on appropriate treatments and medicines. Hence, collecting, maintaining, and protecting all these records requires implementing a data warehouse. The data warehouse initially cleans the source of data, i.e., removes or corrects any erroneous data; this is performed by consulting medical experts in the respective domain. The preprocessed records follow the extraction, transformation, and load (ETL) process. The extraction process extracts cleansed data sources belonging to different systems. The transformation process converts the medical data into the schema of the data warehouse. The load process stores or loads the medical data into the data warehouse. The data warehouse helps in integrating medical data from different


sources. Moreover, implementing data analysis on the medical data stored in the data warehouse is very easy. Data analysis requires extracting the required data from the data warehouse. This is accomplished using a dependent data mart, which is a subset of the data warehouse [7]. Then, data analytics algorithms can be executed, such as a decision tree or logistic regression.
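A minimal sketch of this last step (ours; the columns and records below are hypothetical stand-ins for a data-mart extract, which in practice would come from the warehouse, e.g., via pd.read_sql) using scikit-learn:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical extract from a dependent data mart.
mart = pd.DataFrame({
    "age":         [54, 61, 47, 70, 58, 43],
    "systolic_bp": [138, 151, 122, 160, 144, 118],
    "outcome":     [0, 1, 0, 1, 1, 0],    # hypothetical treatment outcomes
})

X, y = mart[["age", "systolic_bp"]], mart["outcome"]
model = LogisticRegression().fit(X, y)     # analytics on the mart extract
risk = model.predict_proba(X)[:, 1]        # per-patient predicted risk
```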

41.5 Intelligent Decision

Most decision frameworks are created based on the patient's data, age, previous medical history, and other relevant data [33]. A technology called modeling and simulation can improve efficiency using a logical system architecture for the decision system [35]; this method has high medical efficiency as well as appreciable risk management. Intelligent decision making involves deep learning and machine learning techniques. A classifier for the formation and retrieval of initial clusters, together with fuzzy logic to process that data, helps in providing the most accurate healthcare program [21]. Similarly, a built-in clinical decision support system ensures the privacy of the patients to provide the best diagnosis without compromising privacy [26].

The dynamic and interoperable communication framework (DICF) is a framework that is augmented between humans equipped with wearable sensors (H-WS) and the backbone network-medical center (BN-MC) [2]. DICF optimizes all the issues involved in the framework, e.g., signal accumulation, decision making, and heterogeneous communication. The system has three processes, namely sensing and monitoring, decision making, and communication. In the sensing and monitoring process, the wearable sensor with the radio transmitter interacts with a mobile device. The sensed information (e.g., ECG, EMG, temperature, BP) is transmitted at a periodic interval depending on the type of sensing. When the readings fall out of the standard range, emergency notification and prioritized transmission are initiated by the mobile; thus, the signal is transmitted to the BN-MC. The decision-making process uses machine learning algorithms involving regression and classification. It is used for early diagnosis and first-aid suggestions, helping in prior prediction. The predictive part is optimized by repeatedly analyzing the sensed information, and the predicted result is stored for future reference. This process signifies the decision making for diagnosis recommendation and disease prediction. In case of an abnormality, the regression learning is implemented at two different intervals; when the observed monitoring unit has reliability less than the average reading, it sends a high-alert notification to the mobile. The communication process involves the transmission of the signal as one entity and the reception of notifications as another entity. It is important for the sensed information to be transmitted with respect to the sensor's value to avoid improper prediction of the diagnosis. Thus, ordered information forwarding is implemented in the interoperable communication. The process in DICF is inspected using end-to-end latency.
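A toy sketch of the out-of-range trigger described above (ours; the standard ranges are hypothetical placeholders, whereas a real deployment would personalize them and learn them from the sensed history):

```python
# Hypothetical standard ranges for two vitals.
RANGES = {"heart_rate": (60, 100), "temperature": (36.1, 37.8)}

def classify_reading(sensor, value):
    """Decide between periodic reporting and prioritized emergency transmission."""
    lo, hi = RANGES[sensor]
    if value < lo or value > hi:
        return "emergency"   # prioritized transmission toward the medical center
    return "periodic"        # normal interval reporting

assert classify_reading("heart_rate", 130) == "emergency"
assert classify_reading("temperature", 36.8) == "periodic"
```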


Gope et al. [15] proposed an authentication and fault-tolerant decision-making scheme based on a physical unclonable function (PUF). It is designed to support IoT-based health care. A PUF is a distinctive physical feature of a machine or device (similar to the biometric features of a person). A PUF cannot be regenerated using any cryptographic technique; therefore, a PUF makes every device unique, and the device cannot be cloned or reproduced [16]. The scheme has two phases, namely setup and authentication. During the setup phase, the local processing unit (LPU) enrolls into the Body Sensor Network (BSN) care server. When the enrollment is successful, the LPU receives secret credentials for authentication from the server. The LPU uses the PUF to identify itself uniquely with the server. During the authentication phase, the server authenticates the LPU using the secret credentials. The authentication process performs some manipulation of the PUF to complete the verification of the LPU. After the successful completion of authentication, the LPU can communicate securely with the server. This scheme makes a decision using data collected from multiple sensors rather than a single healthcare sensor, which helps to reduce false positive and false negative decisions drastically. Furthermore, it helps in handling situations when data is not received from a sensor. The scheme uses machine learning for data analysis. Support vector machines are used to learn the patient's normal behavior from the data generated by the healthcare sensors. The compute-intensive computations of the support vector machines are performed in the server nodes to generate a normal-behavior model. When the latest data is received by the scheme, it is first determined whether the data is complete or not. A complete record is checked with the model to determine any anomaly. When normal records are received, they are recorded, and after receiving a certain number of normal records, the normal-behavior model is updated to generate a new model. When an incomplete record is received, the missing data is predicted and used for predicting the behavior.

41.6 Value of Information

"Value of information (VOI) is a tool that can be used to study the uncertainty associated with a coverage decision and its implications [4]." It judges the value of gathering additional information to minimize the uncertainty present in a healthcare decision. Furthermore, it specifies an optimal design for additional research by combining both the probability of loss and the financial loss due to an incorrect decision [5]. VOI calculates the expected value of perfect information (EVPI), which provides the estimated value of eliminating the decision uncertainty. Another estimated value is the expected value of partial perfect information (EVPPI), which is the EVPI considering one or more parameters. VOI is also used to estimate values considering a sample space, called the expected value of sample information (EVSI) [10]. Another estimated value is real options analysis (ROA), which helps to determine whether to adopt a healthcare technology that requires less finance or to wait for more information before deciding. Such estimated values become important if the medical technology required for treatment has a high overall cost [17]. However,


it has many issues that need to be solved before being considered in decision making: (a) Some optimal research plans are infeasible because clinical trials performed on small populations become unethical; a clinical trial of a new drug is necessary, but a trial on a small population may be dangerous. (b) Collecting real-time data is very difficult; the worldwide collection of real data requires secure infrastructure and fast telecommunication with trust and collaboration among various clinicians. (c) Not all uncertainties are evaluated. (d) Sometimes the VOI results yield a distorted view of the existing uncertainties because VOI ignores the structural uncertainties of a model. (e) VOI is complex, and the limited knowledge of policymakers further increases the complexity of interpreting VOI results. (f) VOI has high time complexity.
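To make the EVPI idea concrete, a Monte Carlo sketch (ours; the two-decision net-benefit model is entirely hypothetical) estimates it as the gap between deciding after the uncertainty is resolved and deciding on expected values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical net-benefit samples: rows = parameter draws, cols = decisions.
theta = rng.normal(size=10_000)
nb = np.column_stack([1.0 + 0.0 * theta,    # decision A: certain benefit
                      0.8 + 1.5 * theta])   # decision B: uncertain benefit

# EVPI = E[max over decisions] - max over decisions of E[net benefit].
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVPI ~ {evpi:.3f}")  # the expected gain from resolving uncertainty
```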

41.7 Security

Decision making involves many factors to give the most accurate and precise output. One of the common foundations of smart health care is the body sensor network (BSN). Here, the body sensors acquire different vitals from the patient's body, process the data, and transmit it to the next transitional node for further manipulation. But the crucial concern today is data dependability. The BSN signals are prone to data manipulation attacks, where a compromised signal is injected into the original signal, resulting in faulty results which interfere with the decision-making methodology. Given the priority of dependable data for the patient's health, a three-layer decision-making framework [32] is used to sieve out the dependable data from the available signals. In the first layer, the compromised signals, which can cause faulty analysis, are screened out from the signals received from the sensors attached to the patient's body. There is a fair possibility of injecting suspicious information into the data packets received during local aggregation; this is ruled out by the framework in the next layer, where the decision framework selects and transmits the dependable data to the cloud for storage. In the rearmost layer, the model separates the undependable data, introduced into the data packets by malware, from the signals stored in the cloud. This decision-making framework filters the data at every stage, thus decreasing the amount of data transmission and simultaneously helping to give the best possible treatment to the patients with the least possibility of discrepancies.

41.8 Shared Decision Making

Shared decision making (SDM) involves the patient and their relatives in the healthcare decision-making process. It is necessary to provide patient-centered care [31]. SDM is considered good clinical practice because it relates health care with ethics such as freedom to decide, kindness, and justice [11]. Moreover, SDM is mainly


preferred in the case of preference-sensitive decisions because it results in better decisions, better outcomes, and better patient–clinician relationships. However, many factors influence the adoption of SDM. Joseph-Williams et al. [22] named four factors that influence SDM, namely continuity of care, time, workflow, and healthcare setting characteristics. Continuity of care relates to the relationship of patients with the clinicians. In some scenarios, many clinicians are taking care of the patient, or there is a periodic change of clinicians, or an incomplete information handover between clinicians. This results in the patient's inability to build the trust needed to discuss the information regarding the disease and to participate effectively in the decision-making process. Time is another factor because patients need ample time to understand the information, discuss it with the clinicians, raise concerns, etc. Patients sometimes feel clinicians are busy; hence, the clinicians do not want to discuss with the patient for a long time, or the patient feels guilty for taking a long time. The workflow factor relates to the division of the SDM process among various clinicians. Patients feel more comfortable interacting with nurses, who can act as mediators between doctor and patient. However, sometimes this becomes a barrier because nurses may not be qualified to answer some detailed questions. The characteristics of the healthcare setting are important because inappropriate environmental conditions discourage the interaction of patients with clinicians. Some examples of inappropriate environmental conditions are lack of privacy, a noisy environment, and the requirement of physical examinations.

41.9 Conclusions

The decision-making process in health care has many layers of complexity. The complexity lies at every level of the process: the sources that generate the medical data, the gap in data analysis by the experts, and the lack of healthcare technology. Earlier, health care focused entirely on curing the patient. However, today health care focuses on the most appropriate medication and method for a patient. This change came due to advancements in health care which researched the influence of various external factors on the patient's health, such as diverse environmental conditions or the patient's preferences (e.g., allergies or financial situation). The decision-making process is heavily dependent on technology, mostly to sort the data received from every step of the process. Intelligent decision making is the future of healthcare decision making because smart devices have become the primary devices to record the patient's health round the clock. They give complete data without any time gap, which greatly helps to increase the efficiency of the decision making. Furthermore, smart technology is reducing the technology gap in health care. It is also removing many barriers, for instance, the need for the physical presence of an expert.


References

1. Azimi, I., Pahikkala, T., Rahmani, A.M., Niela-Vilén, H., Axelin, A., Liljeberg, P.: Missing data resilient decision-making for healthcare IoT through personalization: a case study on maternal health. Future Gener. Comput. Syst. 96, 297–308 (2019)
2. Baskar, S., Shakeel, P.M., Kumar, R., Burhanuddin, M., Sampath, R.: A dynamic and interoperable communication framework for controlling the operations of wearable sensors in smart healthcare applications. Comput. Commun. 149, 17–26 (2020)
3. Bhatnagar, G., Wu, Q.J., Liu, Z.: A new contrast based multimodal medical image fusion framework. Neurocomputing 157, 143–152 (2015)
4. Bindels, J., Ramaekers, B., Ramos, I.C., Mohseninejad, L., Knies, S., Grutters, J., Postma, M., Al, M., Feenstra, T., Joore, M.: Use of value of information in healthcare decision making: exploring multiple perspectives. Pharmacoeconomics 34(3), 315–322 (2016)
5. Briggs, A., Sculpher, M., Claxton, K.: Decision Modelling for Health Economic Evaluation. OUP Oxford (2006)
6. Cai, Q., Wang, H., Li, Z., Liu, X.: A survey on multimodal data-driven smart healthcare systems: approaches and applications. IEEE Access 7, 133583–133599 (2019)
7. Chhabra, R., Pahwa, P.: Data mart designing and integration approaches. Int. J. Comput. Sci. Mobile Comput. 3(4), 74–79 (2014)
8. Despotovic, V., Walter, O., Haeb-Umbach, R.: Machine learning techniques for semantic analysis of dysarthric speech: an experimental study. Speech Commun. 99, 242–251 (2018)
9. Dong, X., Gabrilovich, E., Heitz, G., Horn, W., Lao, N., Murphy, K., Strohmann, T., Sun, S., Zhang, W.: Knowledge vault: a web-scale approach to probabilistic knowledge fusion. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 601–610 (2014)
10. Eckermann, S., Willan, A.R.: Expected value of information and decision making in HTA. Health Econ. 16(2), 195–209 (2007)
11. Elwyn, G., Tilburt, J., Montori, V.: The ethical imperative for shared decision-making. Euro. J. Person Centered Healthcare 1(1), 129–131 (2013)
12. Enders, C.K.: Multiple imputation as a flexible tool for missing data handling in clinical research. Behav. Res. Ther. 98, 4–18 (2017)
13. Goetghebeur, M., Castro-Jaramillo, H., Baltussen, R., Daniels, N.: The art of priority setting. Lancet 389(10087), 2368 (2017)
14. Goetghebeur, M.M., Cellier, M.S.: Can reflective multicriteria be the new paradigm for healthcare decision-making? The EVIDEM journey. Cost Effect. Resour. Allocation 16(1), 1–11 (2018)
15. Gope, P., Gheraibia, Y., Kabir, S., Sikdar, B.: A secure IoT-based modern healthcare system with fault-tolerant decision making process. IEEE J. Biomed. Health Inform. (2020)
16. Gope, P., Lee, J., Quek, T.Q.S.: Lightweight and practical anonymous authentication protocol for RFID systems using physically unclonable functions. IEEE Trans. Inf. Forensics Secur. 13(11), 2831–2843 (2018)
17. Grutters, J.P., Abrams, K.R., de Ruysscher, D., Pijls-Johannesma, M., Peters, H.J., Beutner, E., Lambin, P., Joore, M.A.: When to wait for more evidence? Real options analysis in proton therapy. Oncologist 16(12), 1752–1761 (2011)
18. Harrington, L.: Closing the science-practice gap with technology: from evidence-based practice to practice-based evidence. AACN Adv. Crit. Care 28(1), 12–15 (2017)
19. Hui, P.-Y., Meng, H.: Latent semantic analysis for multimodal user input with speech and gestures. IEEE/ACM Trans. Audio Speech Lang. Process. 22(2), 417–429 (2013)
20. Jevsevar, D.S., Bozic, K.J.: Orthopaedic healthcare worldwide: using clinical practice guidelines in clinical decision making. Clin. Orthopaed. Related Res. 473(9), 2762–2764 (2015)
21. Jindal, A., Dua, A., Kumar, N., Das, A.K., Vasilakos, A.V., Rodrigues, J.J.: Providing healthcare-as-a-service using fuzzy rule based big data analytics in cloud computing. IEEE J. Biomed. Health Inform. 22(5), 1605–1618 (2018)
22. Joseph-Williams, N., Elwyn, G., Edwards, A.: Knowledge is not power for patients: a systematic review and thematic synthesis of patient-reported barriers and facilitators to shared decision making. Patient Educ. Counsel. 94(3), 291–309 (2014)
23. Justo, N., Espinoza, M.A., Ratto, B., Nicholson, M., Rosselli, D., Ovcinnikova, O., Garcia Marti, S., Ferraz, M.B., Langsam, M., Drummond, M.F.: Real-world evidence in healthcare decision making: global trends and case studies from Latin America. Value Health 22(6), 739–749 (2019)
24. Little, R.J., Rubin, D.B.: Statistical Analysis with Missing Data, vol. 793. John Wiley & Sons (2019)
25. Liu, S., Liu, S., Cai, W., Che, H., Pujol, S., Kikinis, R., Feng, D., Fulham, M.J., et al.: Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer's disease. IEEE Trans. Biomed. Eng. 62(4), 1132–1140 (2014)
26. Liu, X., Lu, R., Ma, J., Chen, L., Qin, B.: Privacy-preserving patient-centric clinical decision support system on naive Bayesian classification. IEEE J. Biomed. Health Inform. 20(2), 655–668 (2015)
27. Osop, H., Sahama, T.: Electronic health records: improvement to healthcare decision-making. In: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), pp. 1–6 (2016)
28. Pereira, S., Pinto, A., Alves, V., Silva, C.A.: Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans. Med. Imag. 35(5), 1240–1251 (2016)
29. Silva, L.O., Zárate, L.E.: A brief review of the main approaches for treatment of missing data. Intell. Data Anal. 18(6), 1177–1198 (2014)
30. Steinberg, E., Greenfield, S., Wolman, D.M., Mancher, M., Graham, R., et al.: Clinical Practice Guidelines We Can Trust. National Academies Press (2011)
31. Stiggelbout, A.M., Van der Weijden, T., De Wit, M.P., Frosch, D., Légaré, F., Montori, V.M., Trevena, L., Elwyn, G.: Shared decision making: really putting patients at the centre of healthcare. BMJ 344, e256 (2012)
32. Tao, H., Bhuiyan, M.Z.A., Wang, J., Wang, T., Hsu, D.F., Liu, P., Salih, S.Q., Wu, J., Li, Y.: DependData: data collection dependability through three-layer decision-making in BSNs for healthcare monitoring. Inf. Fusion (2020)
33. Tekin, C., Atan, O., Van Der Schaar, M.: Discover the expert: context-adaptive expert selection for medical diagnosis. IEEE Trans. Emerg. Top. Comput. 3(2), 220–234 (2014)
34. Tsai, C.-F., Chang, F.-Y.: Combining instance selection for better missing value imputation. J. Syst. Softw. 122, 63–71 (2016)
35. Zeigler, B.P.: Discrete event system specification framework for self-improving healthcare service systems. IEEE Syst. J. 12(1), 196–207 (2016)
36. Zhang, F., Song, Y., Cai, W., Liu, S., Liu, S., Pujol, S., Kikinis, R., Xia, Y., Fulham, M.J., Feng, D.D., et al.: Pairwise latent semantic association for similarity computation in medical imaging. IEEE Trans. Biomed. Eng. 63(5), 1058–1069 (2015)
37. Zhang, Y., Qiu, M., Tsai, C.-W., Hassan, M.M., Alamri, A.: Health-CPS: healthcare cyber-physical system assisted by cloud and big data. IEEE Syst. J. 11(1), 88–95 (2015)

Chapter 42

Data Secrecy: Why Does It Matter in the Cloud Computing Paradigm?

Ripon Patgiri, Malaya Dutta Borah, and Laiphrakpam Dolendro Singh

Abstract Data secrecy is one of the most prominent issues in security; however, little research has been carried out on it. User data are precious, and users never want their data revealed to any unintended party, including administrators. Yet administrators can read users' data, which is a violation of data secrecy. In this paper, we discuss the issues of data secrecy and its violation. Moreover, we highlight the possible problems and challenges of data secrecy. Most data exchange platforms do not maintain data secrecy, and therefore users' data are free for administrators to read or use. This issue is rarely discussed in scientific forums, and hence we shed light on probable data misuse by administrators. This paper also explores end-to-end encryption systems for various data exchange platforms.

42.1 Introduction

Security is a protocol to prevent misuse of data, as defined in Definition 1. Security is a well-established protocol that has been practiced for a long time. Similarly, privacy protects data from misuse while preserving confidentiality, as defined in Definition 2. Privacy has become more prominent due to the emergence of new technologies, for instance, IoT. Therefore, many researchers are establishing strict privacy protocols [1–3]. On the contrary, secrecy is not getting much attention from the research community.

R. Patgiri (B) · M. Dutta Borah · L. Dolendro Singh Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Cachar 788010, Assam, India e-mail: [email protected] URL: http://cs.nits.ac.in/rp/ M. Dutta Borah e-mail: [email protected] L. Dolendro Singh e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_42


Definition 1 Network security consists of the policies and practices adopted to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources [4].

Definition 2 Data privacy is the protection of data that concerns the proper handling of sensitive data to meet regulatory requirements as well as protecting the confidentiality and immutability of the data [5].

Definition 3 Secrecy is the hiding of data from users who do not need to know them.

Definition 4 Absolute security or hard secrecy is a security protocol that follows the “only me” philosophy: the data are protected from adversaries, administrators, and any other entity.

Definition 5 Soft secrecy is a security protocol designed to maintain secrecy within an intended group. It excludes adversaries, administrators, and any other entity outside the intended parties.

Secrecy refers to keeping a secret as a secret within the intended users, as defined in Definitions 3, 4, and 5. A secret may be kept exclusively, without being shared with anyone, or it may be shared with a group of people. The key difference between secrecy and privacy is that privacy conceals the user's identity, whereas secrecy conceals the data as well as the user's identity. Therefore, data secrecy implies hiding the user's information as well as the data pertaining to it.

Users of cloud computing face violations of data secrecy. It is a severe issue that is rarely discussed in scientific forums. Moreover, many cloud services violate the secrecy of their clients, as outlined below:

• Email administrators can read all emails of their clients, for example, GMail, Yahoo Mail, Outlook, AOL, iCloud Mail, ProtonMail, etc.
• Instant messengers' administrators can read all the messages of their clients, for instance, Google Hangouts, Microsoft Teams, Slack, Discord, Facebook Workplace, Mattermost, etc.
• Cloud storage administrators can read all the stored data of their clients, for instance, Google Drive, OneDrive, Dropbox, iCloud, Icedrive, pCloud, etc.
• Cloud services are prone to misuse by administrators, for example, word processors, spreadsheets, presentations/slides, forms, etc. These documents are saved in cloud storage, and administrators can read all of them.
• Smartphone applications access data from mobile devices. Therefore, many countries are banning Android apps/companies due to trust issues [6, 7]. Also, there are allegations that company owners misuse users' data; however, these remain allegations without solid and valid proof of data misuse against those companies/apps. Thus, company owners are unable to protect themselves from such bans even if they have not committed any crime, data breach, or data misuse. Moreover, people hesitate to use modern technology due to the insecurity of their data.


Obviously, administrators are not valid or intended users of the data owners' or clients' data. Notably, administrators make themselves valid and intended users by default in cloud computing. On the contrary, Facebook and Twitter are used for different purposes, and these platforms make all messages publicly available; therefore, secrecy does not apply to them. The key contribution of this paper is as follows: we expose the possible violation of users' secrecy by unintended users, which holds across the cloud computing paradigm. Initially, we explore the possible exploitation of users' data by the service providers, and we demonstrate the possible issues and challenges in implementation through a few use cases. This paper is organized as follows: Sect. 42.2 establishes the preliminaries of cryptography needed to understand the paper. Section 42.3 discusses the use case of an email server and highlights the possible issues and challenges of implementing a highly secret email server in which the administrator cannot read the users' emails. Moreover, Sect. 42.4 demonstrates cloud storage as a use case of data misuse; it is easy to implement “data as a secret” in cloud storage, but there is an issue of secret key maintenance, which is outlined. Furthermore, Sect. 42.5 demonstrates possible threats to users' secrecy in identity management systems (IDMS); an IDMS stores sensitive data, yet state-of-the-art IDMSs do not maintain users' secrecy. In addition, we demonstrate the use case of instant messaging platforms (IMP) in Sect. 42.6.

42.2 Preliminaries in Cryptography

In the cloud computing paradigm, cryptography is necessary to establish secure communication between two parties. Cryptography protects data from misuse by any unintended user (attacker). There are diverse algorithms for different requirements.

42.2.1 Cryptographic Hash Functions

A cryptographic hash function takes an input and produces a fixed-bit-length output; examples include SHA2 [8], SHA3 [9, 10], SHAKE [9], cSHAKE [9], BLAKE2 [11], BLAKE3 [12], and OSHA [13]. BLAKE2 and BLAKE3 are fast and secure hash functions, while OSHA and SHAKE can produce variable-length output. SHA2 is the most widely used secure hash algorithm. The SHA family produces fixed-length output; for instance, SHA256 produces a 256-bit output, whereas OSHA and SHAKE can produce output of any length required by the user. OSHA changes its entire output if the output size changes, whereas SHAKE outputs for different sizes share a matching prefix [13]. Moreover, OSHA does not use fixed constants; it replaces the constants with variables, which makes OSHA more unpredictable.
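
As a quick illustration of the fixed- versus variable-length behaviour described above, the following Python sketch (using only the standard hashlib module) contrasts SHA-256 with the SHAKE-256 extendable-output function:

```python
import hashlib

msg = b"data secrecy"

# SHA-256: fixed 256-bit (32-byte) digest, identical everywhere for the same input.
print(hashlib.sha256(msg).hexdigest())

# SHAKE-256: extendable-output function (XOF); the caller picks the digest length.
d16 = hashlib.shake_256(msg).hexdigest(16)   # 16-byte output
d32 = hashlib.shake_256(msg).hexdigest(32)   # 32-byte output

# The shorter SHAKE digest is a prefix of the longer one (matching-prefix behaviour).
assert d32.startswith(d16)
print(d16, d32)
```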


42.2.2 Public Key Cryptography

Public key cryptography requires a public and a private key. A message encrypted with the public key can be decrypted only with the corresponding private key. A trusted third party distributes the public key, while the private key is kept secret. The most famous public key cryptosystem is RSA [14]; however, its security rests on the hardness of integer factorization [15–17]. Recent literature suggests that a sufficiently large quantum computer could break RSA in a few hours, which causes serious concern in the cryptography community [18].
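
To make the mechanics concrete, here is a textbook RSA sketch with deliberately tiny primes; it illustrates only the public/private key relationship and is in no way a secure implementation:

```python
# Textbook RSA with tiny primes (p=61, q=53) -- illustration only, NOT secure.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: 2753 (modular inverse, Python 3.8+)

m = 65                    # message encoded as an integer < n
c = pow(m, e, n)          # encrypt with the public key (e, n)
assert pow(c, d, n) == m  # decrypt with the private key (d, n)
```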

42.2.3 Symmetric Key Cryptography

Symmetric key cryptography depends on a shared secret key, and the two parties must be active at the same time to establish it. Asymmetric (public key) cryptography does not require the sender and receiver to be active simultaneously. However, a public key requires periodic renewal; otherwise, it becomes vulnerable to attackers. On the contrary, a symmetric key relies on a short-term shared secret key, which can be shared using a key agreement protocol. The most used symmetric key cipher is AES [19]; however, symmetric key cryptography is vulnerable to cryptanalysis attacks [20, 21]. Recent literature suggests that symKrypt [22] provides strong resistance against such attacks. Moreover, Stealth ensures high security in end-to-end symmetric cryptography [23].
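
As an illustration of symmetric encryption under a shared secret key, the following sketch uses AES in GCM mode via the third-party Python cryptography package; in practice the key would come from a key agreement protocol such as those in Sect. 42.2.4:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared secret key (e.g., from a key agreement)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; must never repeat for the same key

ciphertext = aesgcm.encrypt(nonce, b"secret mail body", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"secret mail body"
```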

42.2.4 Key Agreement Protocol

A key agreement protocol shares a secret key between two parties; a classic example is the Diffie-Hellman algorithm [24]. Diffie-Hellman computes a shared secret key, and it requires a true random number generator (TRNG) [25]. However, it suffers from the Logjam attack [26], which makes the Diffie-Hellman algorithm vulnerable. Therefore, elliptic-curve Diffie-Hellman [27] is used in most modern practice. Moreover, privateDH [28] can also be used to protect high-valued digital assets.
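
The following toy sketch shows the Diffie-Hellman mechanics with a tiny textbook group (p = 23, g = 5); real deployments use large safe primes or elliptic-curve groups (ECDH):

```python
import secrets

# Tiny textbook group -- illustration only; never use such small parameters.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

# Both sides derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
```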

42.2.5 Connection Establishment

The sender initiates the communication with the server and starts the cryptographic procedure. The sender sends a hello message to the server, encrypting it with the server's public key. Then, the sender and the server run the key agreement protocol. The sender A needs shared secret keys, computed, for instance, using Diffie-Hellman or ECDH. However, there is the issue of the man-in-the-middle


(MITM) attack. privateDH [28] provides strong resistance to MITM attacks and Logjam [26]. Alternatively, the sender A generates a secret key using any TRNG [25] and sends it to the mail server S: sender A encrypts the generated secret key first using its own private key and then using the public key of the mail server S. However, this approach is more vulnerable to attack due to the recent emergence of quantum computing [18].
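
A minimal sketch of this alternative, using Python's secrets CSPRNG as a stand-in for a TRNG and RSA-OAEP from the cryptography package to wrap the generated key (the signing step is omitted for brevity):

```python
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Server's long-term key pair (the public half would be distributed by a trusted party).
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender A draws a fresh session key from a CSPRNG (standing in for a TRNG here)
# and wraps it under the server's public key before sending it.
session_key = secrets.token_bytes(32)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = server_key.public_key().encrypt(session_key, oaep)

# Only the holder of the server's private key can unwrap it.
assert server_key.decrypt(wrapped, oaep) == session_key
```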

42.3 Email Server

Figure 42.1 shows the email server's architecture. Let sender A send an email M to B through email server S. In this setting, the sender A encrypts the mail using a secret key shared between the sender and the mail server S. The sender can encrypt the message using AES [19] or symKrypt [22] before sending it to the mail server S. The server S decrypts the mail and stores it in its storage location. Later, the receiver B retrieves the mail from the server S; the receiver B must follow the same process to retrieve mails from the mail server S.

42.3.1 Data Secrecy Issue

Our assumption above is that the sender A sends a mail to B through S. In this case, strict security measures are in place between A and S, and the communication between B and S is also properly secured. But the key issue is S itself: S can decrypt the mail and read it, which violates data secrecy. Even though the mail server S protects the mails from outside attackers, it does not protect them from its own employed administrators. An administrator can read the mails and is therefore termed a

Fig. 42.1 Current state-of-the-art email server’s architecture


“valid and employed Mallory (VEM)” [29]. The VEMs violate data secrecy and can easily misuse the mails; there is no protection against the VEMs. But the users' mails should be protected from the VEMs too.

42.3.2 Asymmetric Communication Issue

Since the sender A and receiver B may not be active at the same time, asymmetric communication through a mail server is required. To bypass the VEMs, the sender A and receiver B need to produce public–private key pairs such that the mail server S cannot decrypt the mail. However, the sender A and receiver B are end-users, and therefore they cannot maintain a public–private key pair for the long term. Moreover, the end-user A or B may switch platforms frequently; thus, the public–private key pair cannot be stored permanently. Furthermore, the device of an end-user may be lost or damaged at any point of time, with no guarantee of recovery [29]. Therefore, a new method is required to bypass the VEMs such that no one except the desired user can decrypt the mails.

42.3.3 Virus

Since the mail server cannot decrypt the mail, sender A can easily send malicious code to B [29]. The receiver B may not have anti-virus software. Consequently, this poses a new challenge to overcome (Fig. 42.2).

Fig. 42.2 Current state-of-the-art cloud storage’s architecture


42.4 Cloud Storage

Cloud storage is booming and gaining popularity due to its various benefits; for instance, user data are stored permanently without any maintenance cost. There are other benefits too: disaster recovery, protection against data loss, lifetime storage, accessibility from anywhere, and high security. The most popular cloud storage services are Google Drive, OneDrive, Dropbox, Amazon Drive, and iCloud. These cloud service providers offer storage as a service. A user establishes a secure connection with the storage server and securely transmits the user data to the storage server for permanent storage. Moreover, a user can retrieve these stored data at any time from anywhere. The cloud service provider decrypts the users' data and stores them in its database; consequently, the cloud service provider can read all the stored user data.

42.4.1 Data Secrecy

The cloud service provider decrypts the data and stores them in storage media. The service provider can read or misuse the data; this is a clear violation of data secrecy. Instead, a user can encrypt the data using their own secret key before sending them to the storage server. The user's secret key is not shared with anyone, including the storage server. Therefore, the cloud storage cannot decrypt the users' data; only the ciphertext is stored in the cloud, and only the owner of the data knows how to decrypt it. Accordingly, future users require secrecy as a service, where users can store their secrets in the cloud.
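
A minimal client-side encryption sketch using Fernet from the Python cryptography package; the provider stores only the ciphertext:

```python
from cryptography.fernet import Fernet

# The key is generated and kept client-side; the cloud provider never sees it.
key = Fernet.generate_key()
f = Fernet(key)

document = b"quarterly financials"
token = f.encrypt(document)       # upload `token` to the cloud, not `document`

# Only the key holder can recover the plaintext after downloading the token.
assert f.decrypt(token) == document
```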

42.4.2 Issues in Secret Key

Users cannot maintain their secret key permanently for various reasons: a lost or damaged device, platform changes, etc. Hence, providing a reliable client-based encryption system is a challenge; it requires secret keys that can be regenerated if lost.

42.5 Identity Management Server

An identity management system (IDMS) is a directory service that maintains users' information. Figure 42.3 shows the architecture of an IDMS. Let user A create an account on the IDMS server (for instance, LDAP or Active Directory). The user A needs to establish a secure connection to the IDMS server to create its directory (account). A conventional IDMS requires a user name, password, user email


Fig. 42.3 Current state-of-the-art identity management system’s architecture

IDs, phone number, communication address, date of birth, and zip code. Most IDMSs store passwords as hash codes instead of raw passwords. Does it make sense?

42.5.1 Issue of Password

PassDB [30] raises the question, “why should anyone see others' passwords in either raw format or hash code?” Consider, for instance, the president of the USA and his or her password hash: if the hash code of a particular person is obtained, then no raw password is required to misuse it. Therefore, storing a password in raw format and storing its hash code have the same effect. Strangely, a few applications even send the raw password back to the user through the “forgot password” function. Therefore, data secrecy is a top concern in IDMS implementation. PassDB proposes a password database separated from user IDs [30]: a password cannot be retrieved using a user ID or vice versa, and the password database returns only true or false. There is no direct mapping between passwords and user IDs.
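
The following simplified sketch captures the spirit of this design: verification returns only true or false, and the store holds no mapping from user IDs to passwords. Note that PassDB's actual construction uses a 3D Bloom filter [30], which this set-based stand-in does not reproduce.

```python
import hashlib, hmac, os

class PasswordStore:
    """Stores only salted digests in a set: verification returns True/False,
    and no stored value maps back to a user ID or a raw password."""

    def __init__(self):
        self._salt = os.urandom(16)
        self._digests = set()

    def _digest(self, user_id: str, password: str) -> bytes:
        data = user_id.encode() + b"\x00" + password.encode()
        return hmac.new(self._salt, data, hashlib.sha256).digest()

    def register(self, user_id: str, password: str) -> None:
        self._digests.add(self._digest(user_id, password))

    def verify(self, user_id: str, password: str) -> bool:
        return self._digest(user_id, password) in self._digests

store = PasswordStore()
store.register("alice", "correct horse battery staple")
assert store.verify("alice", "correct horse battery staple")
assert not store.verify("alice", "wrong guess")
```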

42.5.2 Hash Value of Password

Let us assume that the hash value of a password P is H_p using the hash function SHA256, which produces fixed-length output. SHA256 produces the same output for the same input, no matter where or how many times it is run. It is universal: it does not produce different outputs for the same input at Google, Facebook, Apple, or Amazon. Therefore, it makes little difference whether the IDMS stores passwords in raw format or as hash values. Alternatively, OSHA or HMAC can be used to solve this problem, because they produce different outputs for


the same input under different settings. For instance, a company can specify the length and the number of rounds in OSHA [13], which makes its hash values differ from other companies' hash values for the same input.

42.5.3 Data Secrecy

Apart from the password issue, data secrecy is another important concern: storing the date of birth, email address, communication address, zip code, phone numbers, etc., clearly violates data secrecy, as this information is freely available to the VEMs. Why does an IDMS require the users' dates of birth, user names, communication addresses, and zip codes? Moreover, the user ID should not be stored in raw format; it should be stored as a hash value, except for the email ID and phone numbers. Furthermore, the user ID (hash value), mobile number, and email ID should be separated from the password database. In a conventional IDMS, on the contrary, all information of the user can be read by the VEMs, which clearly violates data secrecy.

42.6 Instant Messaging Platforms

The modern instant messaging platform (IMP) combines synchronous and asynchronous communications; examples include WhatsApp, Signal, and Threema. An IMP is similar to an email server: each message is stored on the chat server, and the server can decrypt and read these messages. Why should two end-users' secrecy be violated? Rösler et al. [31] suggest an efficient solution for end-to-end encryption in group communication. WhatsApp claims to provide end-to-end encryption in its white paper [32]; consequently, a user of WhatsApp can enjoy secrecy. WhatsApp establishes a public key for each user and stores it on the WhatsApp server, while the private keys are stored on the clients' devices. However, there is no information about what happens if a client loses the private key. What happens if the user's device is damaged or lost permanently? How is the private key retrieved if the user changes platform frequently? What happens if the IMP publishes a public key on behalf of another user? These questions pose serious concerns for any IMP (Fig. 42.4).

42.7 Conclusion

In this paper, we demonstrate that standard practices in the cloud computing paradigm violate data secrecy. A few use cases have been presented to establish this as the most crucial issue in the cloud computing paradigm. Nowadays, everything is stored in the cloud, and therefore users' data require a secrecy protocol such that they


Fig. 42.4 Current state-of-the-art instant messaging platform’s architecture

cannot be read by any unintended users, including the VEMs. On the contrary, we have demonstrated that cloud administrators can read users' data, which is a clear violation of data secrecy. Furthermore, we have shown diverse issues in a few use cases of cloud computing. Finally, we have highlighted the major issues and challenges in implementing data secrecy, which requires diverse secrecy algorithms for the various, heterogeneous platforms.

References

1. Ogonji, M.M., Okeyo, G., Wafula, J.M.: A survey on privacy and security of internet of things. Comput. Sci. Rev. 38, 100312 (2020). https://doi.org/10.1016/j.cosrev.2020.100312
2. Tourani, R., Misra, S., Mick, T., Panwar, G.: Security, privacy, and access control in information-centric networking: a survey. IEEE Commun. Surveys Tutor. 20(1), 566–600 (2018). https://doi.org/10.1109/COMST.2017.2749508
3. Wang, T., Zheng, Z., Rehmani, M.H., Yao, S., Huo, Z.: Privacy preservation in big data from the communication perspective-a survey. IEEE Commun. Surveys Tutor. 21(1), 753–778 (2019). https://doi.org/10.1109/COMST.2018.2865107
4. Forcepoint: What is network security? Network security defined, explained, and explored. https://www.forcepoint.com/cyber-edu/network-security (2020). Accessed Oct 2020
5. SNIA: What is data privacy? https://www.snia.org/education/what-is-data-privacy (2020). Accessed Oct 2020
6. Ferek, K.S., McKinnon, J.D.: U.S. bans Chinese apps TikTok and WeChat, citing security concerns. https://www.wsj.com/articles/commerce-secretary-wilbur-ross-says-he-will-ban-wechat-use-in-u-s-after-sunday-night-11600429988 (Sept 18, 2020). Accessed Dec 2020
7. Restuccia, A., McKinnon, J.D.: Trump issues new ban on Alipay and other Chinese apps. https://www.wsj.com/articles/trump-signs-order-banning-alipay-and-other-chinese-apps-11609889364 (January 6, 2021). Accessed Jan 2021
8. FIPS: Announcing approval of Federal Information Processing Standard (FIPS) 180-2, Secure Hash Standard; a revision of FIPS 180-1 (Aug 2002). https://www.federalregister.gov/documents/2002/08/26/02-21599/announcing-approval-of-federal-information-processing-standard-fips-180-2-secure-hash-standard-a. Accessed 23 May 2021
9. Kelsey, J., Chang, S.j., Perlner, R.: SHA-3 derived functions: cSHAKE, KMAC, TupleHash and ParallelHash. Technical Report NIST SP 800-185, National Institute of Standards and Technology, Gaithersburg, MD (Dec 2016). https://doi.org/10.6028/NIST.SP.800-185


10. NIST: SHA-3 standard: permutation-based hash and extendable-output functions. CSRC | NIST (Aug 2015). https://doi.org/10.6028/NIST.FIPS.202
11. Guo, J., Karpman, P., Nikolić, I., Wang, L., Wu, S.: Analysis of BLAKE2. In: Topics in Cryptology—CT-RSA 2014, pp. 402–423. Springer, Cham, Switzerland (Feb 2014). https://doi.org/10.1007/978-3-319-04852-9_21
12. BLAKE3-team: BLAKE3-specs (Apr 2021). https://github.com/BLAKE3-team/BLAKE3-specs/blob/master/blake3.pdf. Accessed Apr 2021
13. Patgiri, R.: OSHA: a general-purpose one-way secure hash algorithm. Cryptology ePrint Archive, Report 2021/689 (2021). https://eprint.iacr.org/2021/689
14. Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 21(2), 120–126 (1978). https://doi.org/10.1145/359340.359342
15. Kleinjung, T., Aoki, K., Franke, J., Lenstra, A.K., Thomé, E., Bos, J.W., Gaudry, P., Kruppa, A., Montgomery, P.L., Osvik, D.A., te Riele, H., Timofeev, A., Zimmermann, P.: Factorization of a 768-bit RSA modulus, pp. 333–350. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14623-7_18
16. Thomé, E.: [Cado-nfs-discuss] 795-bit factoring and discrete logarithms (Dec 2019). https://lists.gforge.inria.fr/pipermail/cado-nfs-discuss/2019-December/001139.html. Accessed Feb 2021
17. Zimmermann, P.: [Cado-nfs-discuss] Factorization of RSA-250 (Feb 2020). https://lists.gforge.inria.fr/pipermail/cado-nfs-discuss/2020-February/001166.html. Accessed Mar 2021
18. Gidney, C., Ekerå, M.: How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits. Quantum 5, 433 (Apr 2021). https://doi.org/10.22331/q-2021-04-15-433
19. Specification for the Advanced Encryption Standard (AES). Federal Information Processing Standards Publication 197 (2001). http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf
20. Alioto, M., Poli, M., Rocchi, S.: Differential power analysis attacks to precharged buses: a general analysis for symmetric-key cryptographic algorithms. IEEE Trans. Dependable Secure Comput. 7(3), 226–239 (2010). https://doi.org/10.1109/TDSC.2009.1
21. Baksi, A., Bhasin, S., Breier, J., Jap, D., Saha, D.: Fault attacks in symmetric key cryptosystems. Cryptology ePrint Archive, Report 2020/1267 (2020). https://eprint.iacr.org/2020/1267
22. Patgiri, R.: symKrypt: a general-purpose and lightweight symmetric-key cryptography. Cryptology ePrint Archive, Report 2021/635 (2021). https://eprint.iacr.org/2021/635
23. Patgiri, R.: Stealth: a highly secured end-to-end symmetric communication protocol. Cryptology ePrint Archive, Report 2021/622 (2021). https://eprint.iacr.org/2021/622
24. Diffie, W., Hellman, M.: New directions in cryptography. IEEE Trans. Inf. Theor. 22(6), 644–654 (1976). https://doi.org/10.1109/TIT.1976.1055638
25. Patgiri, R.: Rando: a general-purpose true random number generator for conventional computers. To appear in: IEEE International Conference on Trust, Security, and Privacy in Computing and Communications (TrustCom 2021), pp. 1–7. Shenyang, China (2021)
26. Adrian, D., Bhargavan, K., Durumeric, Z., Gaudry, P., Green, M., Halderman, J.A., Heninger, N., Springall, D., Thomé, E., Valenta, L., VanderSloot, B., Wustrow, E., Zanella-Béguelin, S., Zimmermann, P.: Imperfect forward secrecy: how Diffie-Hellman fails in practice. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 5–17. CCS ’15, Association for Computing Machinery, New York, NY, USA (2015). https://doi.org/10.1145/2810103.2813707
27. Barker, E., Chen, L., Roginsky, A., Smid, M.: Recommendation for pair-wise key establishment schemes using discrete logarithm cryptography. NIST SP 800-56A (2007). https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-56ar.pdf. Accessed Jan 2021
28. Patgiri, R.: privateDH: an enhanced Diffie-Hellman key-exchange protocol using RSA and AES algorithm. Cryptology ePrint Archive, Report 2021/647 (2021). https://eprint.iacr.org/2021/647


29. Patgiri, R.: Whisper: a curious case of valid and employed Mallory in cloud computing. To appear in: IEEE International Conference on Cyber Security and Cloud Computing (IEEE CSCloud 2021), pp. 1–6. Washington DC, USA (2021)
30. Patgiri, R., Nayak, S., Borgohain, S.K.: PassDB: a password database with strict privacy protocol using 3D Bloom filter. Inf. Sci. 539, 157–176 (2020). https://doi.org/10.1016/j.ins.2020.05.135
31. Rösler, P., Mainka, C., Schwenk, J.: More is less: on the end-to-end security of group chats in Signal, WhatsApp, and Threema. In: 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 415–429 (2018). https://doi.org/10.1109/EuroSP.2018.00036
32. WhatsApp: WhatsApp encryption overview, version 3. https://scontent.whatsapp.net/v/t39.8562-34/122249142_469857720642275_2152527586907531259_n.pdf/WA_Security_WhitePaper.pdf?ccb=1-3&_nc_sid=2fbf2a&_nc_ohc=3CHK2I8TOVQAX_Dvz4i&_nc_ht=scontent.whatsapp.net&oh=e6bc5f73ac2b4101e1d0ec0ada784a0a&oe=60DADED9 (2020). Retrieved May 2021

Chapter 43

A Survey on Optimization Parameters and Techniques for Crude Oil Pipeline Transportation

Amrit Sarkar and Adarsh Kumar Arya

Abstract In recent decades, the high cost of carrying oil and gas via pipelines has compelled analysts to develop optimization methods that assist pipeline managers in operating pipeline grids at the lowest possible cost. The networks and operations of crude oil pipelines are very complicated. Pumps powered by crude oil engines are utilized in most situations, with gas-powered generators or turbines used in a few instances. Energy and heating expenses account for about 45–65% of the operational cost incurred in transporting crude oil to refineries. Pour point depressant chemicals, drag reducing agent (DRA) dosage, and other expenses are also significant. Many optimization studies have been conducted on crude oil pipelines, covering design, operations, networks, scheduling, and maintenance, among other areas. Most works utilize classical algorithms in the early stages and dynamic algorithms in the later stages; recently, however, evolutionary algorithms have become popular for optimizing oil pipeline transportation. The article first reviews the efforts made by researchers in the design stage of the pipeline. Next, the optimization strategies followed in pipeline operations are considered. Later, the rheological properties of waxy crude oil are discussed along with the corresponding optimization strategies. Furthermore, the difficulties associated with improving crude oil pipeline operations and the potential for future optimization in the pipeline sector are addressed in detail. This research will benefit crude oil pipeline operators, engineers, and researchers in the relevant area by providing a comprehensive understanding of the many kinds of optimization that may be carried out in complicated oil transport operations.

A. Sarkar (B) Oil India Limited, Duliajan, Assam, India e-mail: [email protected] Department of Mechanical Engineering, School of Engineering, University of Petroleum and Energy Studies, Energy Acres Building, Bidholi, Dehradun 248007, India A. K. Arya Department of Chemical Engineering, School of Engineering, University of Petroleum and Energy Studies, Energy Acres Building, Bidholi, Dehradun 248007, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_43


43.1 Introduction

Drake [1] made the historic discovery of the world's first commercial crude oil well at Titusville, Pennsylvania, in the United States in 1859. The first crude oil refinery is believed to have been located in Ploiesti, Romania, and the first commercial pipeline, an 8-km-long line with a throughput of 300 m³/day, was built in 1865 to connect Pithole to the Oil Creek railroad [2]. More and more oil discoveries have been made in a variety of nations over the years. The top ten crude-oil-producing nations are the United States, Russia, Saudi Arabia, Iraq, Canada, China, the United Arab Emirates, Kuwait, Brazil, and Iran [3]. India comes in at number 20 on the list; even though India ranks third in the world in terms of crude oil consumption, it produces just 20% of its yearly needs domestically. It must thus import crude oil from other producing nations to meet a significant portion of its consumption. India consumes just 51.4 barrels of crude oil per capita, compared to the United States' 934.3 barrels per capita [4]. As a result, if India's per capita consumption rises steadily, the country's need for crude oil will climb by orders of magnitude. A significant part of this crude oil transportation is handled via underground pipes. There are 2,034,201 km of crude oil pipelines globally, with 40% of the total in North America (154,201 km) [5]. India's total crude oil pipeline length is 10,419 km, of which 9,876.5 km is onshore and 622 km is offshore; India's gas and product pipelines total 27,871 km and 14,727 km, respectively [6]. According to the data, North America and Asia will likely have pipelines totalling more than 85,000 km by 2023. Optimizing pipelines has never been more critical because of the expanding crude oil pipeline network and the maturing of crude oil fields; yet the amount of optimization work done on crude oil pipelines is minor compared to other pipelines. The works in this article are divided into three groups: design stage, rheological properties, and operation stage. Route surveys, geotechnical surveys, GIS, design, and environmental safety are main components of the pipeline industry, as are compliance with growing regulation, risk management and control, corrosion management, SCADA, and leak detection systems. Other components include pigging and emergency pipeline repair in the event of a catastrophe.

43.1.1 Cost Components of Pipeline Networks

The cost of a pipeline network consists of fixed costs and operating costs. The fixed cost includes the cost of pipe, pumping stations, valves, fittings, tanks and manifolds, metering stations, SCADA and telecommunications, engineering and construction management, environment and permitting, right-of-way (RoW) acquisition, and other contingency costs [7]. Different diameters, thicknesses, pipe routes, and pump and motor size choices must be considered for an optimal holistic pipeline design. The pipeline's


net present value (NPV) should then be estimated over a 25–30 year time horizon, and the best pipeline design should use the internal rate of return (IRR) to choose among the many options available. Since a manual calculation under so many constraints can hardly find a suitable optimum, modern optimization techniques and solution tools are necessary. The operating cost of the pipeline network includes equipment maintenance cost, SCADA and telecommunication cost, pipeline maintenance cost, valve and metering station maintenance cost, tank farm operation and maintenance cost, utility cost, ongoing environmental and permitting cost, RoW lease cost, other rental and lease costs, and general and administrative costs including payroll [7], as well as DRA dosing cost, corrosion inhibitor dosing cost, pigging cost, intelligent pigging cost, etc. Since the design is based on assumptions, allowances, and factors of safety, the actual operating scenario can be far from the design criteria. For example, in most cases the pump characteristic curve does not match the system curve, and sometimes the crude oil properties assumed in the design, especially viscosity and wax appearance temperature (WAT), do not match those of the crude oil actually transported. It is impossible to develop a manual solution for the best operating combination that minimizes cost under new operating parameters and constraints. Hence, suitable optimization techniques with modern software need to be explored to trade off between multiple options.
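
As a toy illustration of the NPV comparison described above, the following sketch discounts the costs of two hypothetical design options over a 25-year horizon; all figures and the discount rate are invented.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at year 0 (undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

HORIZON = 25     # years
RATE = 0.10      # discount rate (assumed)

# Hypothetical options: a larger diameter costs more up front but less to run.
small_pipe = [-10e6] + [-1.2e6] * HORIZON   # capex, then yearly energy/O&M cost
large_pipe = [-14e6] + [-0.7e6] * HORIZON

for name, flows in (("small", small_pipe), ("large", large_pipe)):
    print(f"{name} pipe, NPV of costs: {npv(RATE, flows) / 1e6:.1f} M$")
# The design whose cost NPV is closer to zero is preferred.
```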

43.2 Parameters in Optimizing Pipeline Operations

This section discusses the many parameters utilized by researchers to minimize the cost of pipeline networks, pipeline design, drag reducer and chemical injection, and crude oil pipeline operations.

43.2.1 Oil Pipeline Design and Network Optimization

Designing a crude oil pipeline is a long-term plan because the design is costly to change once implemented. NPV calculations for the different options with a time horizon of 25 years are generally considered.

Objective functions: minimization of capital cost or future energy cost.

Constraints and boundary conditions: diameters, thickness, MAOP, length, flow rate, temperature, pressure, viscosity, WAT.

Formulation of problems: the equations are usually modelled considering the pressure and temperature profile of the pipeline, the viscosity-versus-temperature profile, the wax deposition rate, and the DRA dosing rate.
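
A minimal hydraulic building block for such formulations is the Darcy-Weisbach relation; the sketch below computes the frictional head loss and the corresponding pump power for one segment, with all parameter values assumed for illustration only.

```python
import math

# Illustrative hydraulics for one pipeline segment (all parameter values assumed).
rho = 880.0      # crude density, kg/m^3
mu = 0.05        # dynamic viscosity, Pa.s (temperature dependent in practice)
Q = 0.15         # flow rate, m^3/s
D = 0.45         # inner diameter, m
L = 50_000.0     # segment length, m
eta = 0.75       # pump efficiency
g = 9.81

v = Q / (math.pi * D**2 / 4)           # mean velocity, m/s
Re = rho * v * D / mu                  # Reynolds number
# Friction factor: laminar 64/Re, else Blasius correlation for smooth turbulent pipes.
f = 64 / Re if Re < 2300 else 0.3164 / Re**0.25
h_f = f * (L / D) * v**2 / (2 * g)     # Darcy-Weisbach frictional head loss, m

pump_power = rho * g * Q * h_f / eta   # shaft power to overcome friction, W
print(f"Re={Re:,.0f}, head loss={h_f:,.0f} m, pump power={pump_power/1e3:,.0f} kW")
```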


Solution of algorithms and results: the algorithms are solved using either classical or evolutionary methods, depending on the problem. Relevant existing works on the optimization of crude oil pipeline design are reviewed below.

B. Wang et al. developed a model considering the hydraulic and thermal parameters of crude oil and pipeline systems during the design phase [8]. Energy consumption is taken as the objective function, while the other relevant parameters are taken as constraints; in particular, the relationship between viscosity and temperature and the uncertainty of the flow rate are considered to establish a mathematical programming model minimizing energy and construction costs. Two scenarios were considered: pump station locations determined by pressure reduction, and joint pump and heating stations. They found that the resulting construction and operation costs are practically achievable under the thermohydraulic constraints, and the model may be helpful for pipeline designers. B. Wang et al. also considered the optimal design of a pipeline section with a large slope: stochastic mixed-integer linear programming (MILP) models of pipeline construction costs, pressure reduction and pump stations, and pump station operating costs are evaluated to reduce the total yearly cost [9]. The model determines the size of the pipeline, the pressure reduction, and the placement of the pumping stations; using the stochastic MILP model, the placement of pressure reduction stations and pump stations can be determined when designing an oil product pipeline in a large slope area. Other researchers created a model to maximize the efficiency of a Chinese crude oil pipeline network while keeping costs low; the optimal cost of this network was 34.5% lower than the actual total cost, indicating that the oil flow is inefficient overall and has enormous room for improvement.

M. Rizwan et al. analyzed and optimized an existing pipeline to identify requirements for its further expansion [10]. The objectives of this study were to recognize gaps and weaknesses in the system's capability by (1) calculating hydraulic pressures and flows (pressure losses, temperature changes) and estimating pumping requirements from gathering centers, (2) identifying operational constraints/bottlenecks caused by the absence of crucial pipe segments, and (3) improving the network's performance. Owing to the model's accuracy, a few red flags were raised, and the network was subsequently optimized. C. Hongsheng et al. carried out an economic study to design a hot oil pipeline considering all the constraints of such pipelines [11]. They found the optimal flow velocity and diameter for a given throughput, as well as the economical minimum pipeline capacity for a given length and vice versa. Another study designed an optimal pumping combination with the least capital and operating cost [12]; five sets of centrifugal pumps were considered. That study offers an optimal pump design for a crude oil pipeline at the least cost with the highest availability and reliability.


43.2.2 Wax Deposition, Chemical Dosing and Drag Reduction

For a waxy crude oil pipeline, it is essential to know the rheological behaviour of the crude oil and its effect under varied conditions. Nowadays, since easy oil is very tough to extract from oil fields, more and more medium and heavy crude needs to be handled through new or existing pipelines. Existing works on rheological properties and chemical dosing are summarized below.

Wax buildup in pipes is a significant problem for the petroleum business; the wax thickness and pigging frequency may be predicted using wax deposition models [13]. Pumping oil and water alternately and diverting flow to intermediate tanks decreased the backpressure, as a consequence of the higher dispatch temperature. M. Abdulzahra, in his thesis, focuses on detecting blockages using a technique based on a few measurements usually gathered under the normal operating conditions of the pipeline system [14]; a finite-element-like simulator and a genetic algorithm were used to perform the optimization procedure of this technique. H. A. M. Abdou et al. optimized the diesel fuel consumption of a pipeline by comparing the analyses of three options [15]: (1) following a regular and preventive maintenance plan to keep the decreased overall pumping efficiency from degrading further; (2) replacing the current pipeline with a new one with a smoother interior wall to reduce the hydraulic frictional loss (HFL); and (3) achieving an HFL reduction with DRA. Diesel fuel consumption was estimated for the three methods to find the one that uses the least amount of fuel. A. Joseph et al. optimized a pipeline carrying waxy crude, with the objective of minimizing the total pumping energy and chemical injection costs for a particular flow rate [16]. Their findings support previous research: raising the inlet temperature beyond the crude oil's pour point lowers the handling cost per foot, and the economic flow rate is the one that corresponds to the available budget and prevailing economic parameters; any flow rate greater or less than this does not make for optimization. A method was also developed to optimize a DRA degradation model [17]: owing to sources of shear stress in bends and diameter changes, the effectiveness of DRA degrades, and the model allows adjusting for this variation and dosing the optimized DRA amount, thereby increasing operational profits. D. Eskin et al. developed a wax deposition model for oil transport pipelines [18]. A heat and mass transfer analysis was carried out for a turbulent-flow pipeline, and a wax deposition flux model was developed using temperature-dependent wax precipitation kinetics. The heat conductivity of the deposit was modelled using a porous-material correlation. Model performance was illustrated by calculating the amount of wax deposited in a flow loop, and numerical simulations of wax deposition in an oil transport pipeline show how well the simplified model performs.
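
Rheological models of this kind typically start from a viscosity-temperature correlation; the sketch below uses a generic Arrhenius-type form with invented coefficients. In practice, A and B would be fitted to laboratory rheology data for the specific waxy crude.

```python
import math

def viscosity(T_kelvin, A=1.0e-6, B=4500.0):
    """Arrhenius-type correlation mu = A*exp(B/T); A and B are invented
    placeholders -- in practice they are fitted to rheology measurements."""
    return A * math.exp(B / T_kelvin)

for T_c in (60, 45, 30):
    print(f"{T_c} C -> {viscosity(T_c + 273.15):.2f} Pa.s")
# Viscosity climbs steeply as the oil cools toward the wax appearance
# temperature, which is why heating cost trades off against pumping cost.
```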


43.2.3 Optimization in Optimizing Oil Pipeline Operations

This paper concentrates mainly on this topic, since most oil fields worldwide have matured, and producing and transporting more medium and heavy oil has paved the way to increasing production to meet the growing demand. Operation optimization has multiple benefits: savings in costly energy, an improved organizational bottom line, and carbon emission reduction.

Objective functions: mainly minimization of pumping energy cost, furnace heating cost, DRA dosing cost, and chemical additive dosing cost.

Constraints and boundary conditions: mainly inlet and outlet temperatures, pressure, flow rate, ambient temperature, subsoil temperature, WAT, heating load, and the characteristics of pumps and furnaces.

Formulation of problems: the equations are typically modelled considering the pressure and temperature profile of the pipeline, the viscosity-versus-temperature profile, the wax deposition rate, the DRA dosing rate, frictional loss, and the hydro-thermodynamic behaviour of liquids.

Solution of algorithms and results: algorithm solutions are typically based on either classical or evolutionary algorithms (see the toy cost sketch below). Relevant existing works on the optimization of crude oil pipeline operations are reviewed below.

One line of work minimizes the total pump and heating furnace cost with HOPBB (heated oil pipeline branch and bound), an MINLP model [19]. A new branch-and-bound model for heated oil pipelines is established to obtain the global optimum, and non-convex and convex continuous relaxations of the model are proposed. A case study on the operation of the Q-T pipeline in China shows a 6.83% saving in running cost. A hybrid meta-heuristic (particle swarm optimization) method was developed for heated oil pipeline operation optimization, minimizing pumping cost and furnace fuel cost [20]. The algorithm has two stages: in stage 1, the original problem is solved using an intelligent optimization technique; in stage 2, the previous stage's answer is refined using a local search algorithm. The optimized system's operational costs are much lower than those of the conventional design. T. T. Bekibayev et al. developed a new dynamic programming algorithm to minimize pumping and furnace fuel costs [21]; they divided the main model into many overlapping subtasks and exploited the optimal substructure. A genetic algorithm approach was developed to minimize pump power consumption and furnace fuel consumption, with a crude oil S-L pipeline of length 324.3 km and outer diameter DN450 with six pump stations considered as the case study [22]. Optimization software was developed, and the energy-saving opportunity was 5–9%. The method was able to optimize the combination of pumps and the combination of heaters; the software converged fast, gave accurate results, and was applicable to different pipelines besides the studied one, making analysis comfortable for pipeline staff. Genetic algorithms and multi-objective optimization have also been used to minimize pumping cost and furnace fuel cost: a system was designed with the optimization goal of minimizing its overall energy loss [23]. The model was applied to a pipeline in China, and the pipeline transportation system's equipment layout and operating characteristics are optimized hierarchically using the model.
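
Pausing the review for a moment, the following toy sketch makes the operations formulation above concrete: it brute-forces the number of running pumps and the heating outlet temperature against a feasibility constraint, minimizing the daily pumping-plus-heating cost. All coefficients are invented; the cited works use far richer models and algorithms (dynamic programming, GA, PSO).

```python
import itertools

POWER_PRICE = 0.08   # $/kWh of pump electricity (assumed)
FUEL_PRICE = 0.03    # $/kWh of furnace heat (assumed)
HOURS = 24

def required_head(T_out):
    # Frictional head needed to deliver the target flow; falls as hotter oil thins.
    return 1400.0 - 20.0 * (T_out - 40)

def available_head(n_pumps):
    return 520.0 * n_pumps            # identical pumps in series (assumed rating)

candidates = []
for n_pumps, T_out in itertools.product(range(1, 4), range(40, 70, 5)):
    if available_head(n_pumps) >= required_head(T_out):       # feasibility constraint
        pump_cost = 700.0 * n_pumps * POWER_PRICE * HOURS     # 700 kW per pump (assumed)
        heat_cost = 80.0 * (T_out - 35) * FUEL_PRICE * HOURS  # heat duty grows with T_out
        candidates.append((pump_cost + heat_cost, n_pumps, T_out))

cost, n_pumps, T_out = min(candidates)
print(f"run {n_pumps} pump(s), heat to {T_out} C -> daily cost ${cost:,.0f}")
```

Even this toy exhibits the core trade-off: heating the oil reduces the head the pumps must supply, so the cheapest feasible schedule is neither the coolest nor the hottest option.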


Using dynamic programming, a two-layer nesting solution method was used to minimize the thermodynamic and dynamical expenses of the pumps, the furnace, and the whole pipeline [24]. The model was applied to the Shuguang super-heavy oil pipeline, with a design throughput of 1 MTPA; the crude was super heavy, with very high viscosity even at normal temperatures. The model was optimized to determine the optimal station outlet temperature, discharge pressure, and other running parameters, and the analysis found that 10.7–16.4% of energy could be saved. For a hot oil pipeline, frictional loss increases with decreasing flow rate, contrary to the normal expectation that frictional loss decreases as flow rate decreases [25]. The temperature and hydraulic profiles along the pipeline were calculated using theoretical formulas. A case study of a 720-mm-OD pipeline with Daqing crude shows that below the critical flow rate of a hot waxy pipeline, frictional loss increases, and an empirical correlation between the outlet temperature and the critical flow rate was established. The correlation may not transfer to other pipelines, but it helps in understanding hot oil pipeline hydraulics; the safe outlet temperature is optimized in this study, and hot oil pipeline safety necessitates a flow rate greater than the critical rate. A non-linear mixed-integer programming model was developed to minimize the pump and furnace power and fuel costs using a two-level hierarchical model [26]. A software package, HOPOPT (hot oil pipeline optimization), was developed to solve the problem; its practical application shows that operating costs can be reduced by 3% through optimization. The forward recursion method is used to solve the dynamic problem of the pressure head, and the Powell algorithm is used to optimize the oil temperature. E. Liu et al. implemented GA, PSO, and simulated annealing optimization techniques on a pipeline to compare performance and the quantum of energy savings [27]. They considered pump speed, pressure, and temperature as optimization variables and found that PSO was better than the other two at reducing total energy consumption, with faster convergence and a shorter optimization time. L. Gu implemented dynamic programming for the overall cost optimization of a crude oil pipeline; the differences between the actual and theoretical values of the heat transfer coefficient and the frictional pressure-loss coefficient were adjusted with the least squares method [28]. The main decision variables were the outlet temperature and the pump combination scheme, and the main constraints were the inlet and outlet temperatures of the pump and heating stations, flow rates, heat load, pump characteristics, etc. Results showed that energy gains of 23% and 18% were achieved in winter and summer, respectively. When a crude oil pipeline is shut down for operation and maintenance reasons, the complex behaviour of crude oil means the oil tends to solidify and, in a prolonged shutdown, may ultimately congeal and block the pipeline. A thermal model of the crude oil after shutdown was built using an enthalpy porous-medium approach, and numerical simulation and analysis were performed using Fluent software [29]. The effects of various variables, such as the ambient temperature, the initial temperature of the crude oil, and the soil depth, on the total freezing duration of crude oil in the pipeline were examined.
The study's findings may serve as a theoretical foundation for calculating the maximum period a pipeline can be shut down. A hot oil pipeline problem was also modelled using the MINLP


(mixed-integer non-linear programming) approach; for certain circumstances, the authors suggested equivalent non-convex and convex continuous relaxations [30]. A branch-and-bound method was used to find the global optimum of the MINLP model, made more efficient by an outer approximation method and a warm-start technique. The investigated algorithm reduced operating costs by 6.83% compared to the existing practical scheme for a hot oil pipeline. K. Yang et al. solved a mathematical model of commingled crude in different proportions under different weather conditions to minimize the overall energy consumption of a crude oil pipeline [31]. Hierarchical solving was performed with the ant colony optimization method, and the reduction in power consumption after optimization was found to be very significant. M. Zhou et al. developed a mathematical model for a heated crude oil pipeline, taking the minimization of pump energy cost and heating cost as the objective function; particle swarm optimization and a differential evolution algorithm were used to solve the model. The optimization results applied to the crude oil pipeline showed a significant energy saving of 17.59% for a particular flow rate of the existing system [32]. Another model streamlined China's crude oil import transportation network, reducing costs and risks: MATLAB software [33], an ant colony algorithm, and genetic algorithms were combined to create a very efficient crude-oil-import network transportation application. In [34], dynamic programming and non-linear programming were used to reduce a Chinese crude oil pipeline's energy usage and associated costs, evaluated with computer software developed in Visual Basic. Three energy-saving measures were put forward, and it turns out that:

i. If the oil temperature is reduced by about 10%, the pipeline's corresponding energy consumption and cost may be reduced by approximately 10%.
ii. Operating the pipeline according to peak-valley power pricing may result in a 6% reduction in the pipeline's energy cost, though it could also cause energy consumption to increase somewhat.
iii. Operating with a matched combination of flow rates over a specified period may simultaneously reduce the equivalent energy consumption and energy cost, although this strategy's impact is unclear.

It is also possible to decrease the transported oil temperature to minimize energy consumption and costs [35]. That energy consumption optimization model was developed using a non-linear mixed-variable optimization technique, and comprehensive analysis and optimization software for the oil pipeline system's energy usage was created using OLE DB database connection technology, based on the object-oriented program design idea and an Access 2003 database. E. Abbasi et al. developed a model using NSGA-II (non-dominated sorting genetic algorithm) to solve a multi-objective model of pipeline energy and pump maintenance costs [36]. The results were promising: they give pipeline operators a choice among different optimal scenarios on a Pareto front. An approach combining unstructured finite-volume and finite-difference methods was applied to solve the governing equations


[37], in which unstructured grids discretize the soil domain, yielding a model that simulates the buried pipeline numerically; the approach can calculate the hydraulic and temperature profiles of a complex pipeline with different inlet temperatures. C. Li et al. developed a model for a shutdown pipeline to identify the temperature-drop profile and restart the pipeline, using VB and MATLAB [38]. Thanks to the practical use of the VB and MATLAB hybrid programming technique, the time spent on algorithm development and on enhancing the reliability of heated oil pipeline shutdown-and-restart simulation software may be reduced. G. Stanley-Telvent et al. carried out a detailed optimization study of a pipeline using the Energy Management-Power Optimization software (EM-PPO) [39]. The study made it possible to realize operational cost savings in real time and capital cost savings by using the software's 'What-If' feature to select pipeline routes and pump station locations and even to model various shippers' products to quantify their impact on the project. A user could then select the best pipeline route for overall economics and feed this back to the right-of-way group to know what added 'value' to consider for the real estate needed to procure that particular route. A spreadsheet-based study was carried out to find the optimum operating criteria for reducing energy consumption in pipelines: C. V. Barreto et al. used historical operational and other parameters to choose an operating option for a particular throughput. A simulator of oil pipeline networks has also been developed and presented to help schedulers operate the pipeline, including an algorithm for optimizing power costs [40]. The algorithm uses a function that estimates the minimum cost to the goal node together with an accurate simulation of the transportation system, and it has been tested with success on an oil pipeline in northern Spain (Table 43.1).

In the earlier review works listed in Table 43.2, researchers reviewed the application of optimization in the oil and gas sector as a whole or in the gas sector only, whereas in this paper we critically analyze the application of optimization in crude oil pipelines alone; to our knowledge, this has not been done before. Also, the most recent of those reviews was carried out in 2015, and since many technological advances have happened in optimization applications and solution tools in recent years, these are also reviewed in this paper. Recently, renewables have been emphasized to minimize the carbon footprint, and finding easy oil is very tough; moreover, most oil fields worldwide are mature, so heavy and waxy crude will need to be handled in the coming days by oil companies and pipeline operators. Hence, the implications of optimization for handling heavy and waxy crude oil are critically analyzed in this paper.

Multi-Objective Optimization in Crude Oil Pipeline

Complex problems need multi-objective optimization, since pipelines are very complex and their practical constraints are many. Though a lot of multi-objective work has been carried out on water and gas pipelines, few works are available on crude oil pipelines, whether on design, scheduling, networks, or operation. There is much scope to carry out multi-objective optimization work on crude oil pipelines, especially using


Table 43.1 Optimization techniques and objective functions

Citation | Objective function parameters | Optimization technique/other model used
[8] | Minimize energy and construction cost | Thermohydraulics
[9] | Minimize yearly operating cost | Stochastic MILP models
[10] | Network optimization | Network modelling and simulation
[13] | Wax deposition growth | Wax deposition model
[14] | Detecting pipeline blockage | FEM (finite element method) and GA (genetic algorithm)
[16] | Optimize wax handling method | Mathematical model
[17] | Minimize DRA dosing cost | DRA degradation model
[18] | Minimization of wax deposition | Numerical simulation
[19] | Minimization of pumping and heating cost | MINLP model
[20] | Minimization of pumping and heating cost | Particle swarm optimization (PSO)
[21] | Minimization of pumping and heating cost | New dynamic programming (New DP)
[22] | Minimization of pumping and heating cost | Genetic algorithm (GA)
[23] | Minimization of pumping and heating cost | GA and multi-objective optimization
[24] | Minimization of energy cost | Dynamic programming
[25] | Relation between temperature and flow rate | Empirical relation
[26] | Minimization of power and fuel cost | MINLP
[27] | Minimization of total energy cost | GA, PSO and SA (simulated annealing)
[28] | Minimization of total energy cost | Dynamic programming
[30] | Minimization of operating cost | MINLP model
[31] | Minimization of overall energy cost | ACO (ant colony optimization)
[32] | Minimization of energy cost | PSO and DE algorithm
[34] | Minimization of energy use | Dynamic and non-linear programming
[35] | Minimization of energy cost | Non-linear mixed-variable optimization
[36] | Minimize pump energy and maintenance cost | NSGA-II multi-objective optimization
[37] | Pressure and temperature profile | Unstructured finite-volume and finite-difference method
[38] | Optimize shutdown time | Mathematical model
[39] | Operational and capital cost saving | Energy management-power optimization model
[41] | Minimization of energy consumption | Spreadsheet model

evolutionary algorithms. If we better understand crude oil pipeline operation and its optimization, extending the work to the design and scheduling of networks would be easier. Though most of the work was carried out using two objective functions, hardly any work is available that considers three objectives, i.e., minimization of pumping energy cost, furnace heating cost, and chemical dosing cost. Moreover, benchmarking


Table 43.2 Review works on optimization of oil and gas pipeline

Citation | Publication year | Oil/gas | Segment
[42] | 2015 | Oil and gas | Pipeline network layout
[43] | 1998 | Oil and gas | Optimization in pipeline segments, no specific segments
[44] | 2015 | Oil and gas | Problems, methods, and challenges
[45] | 2013 | Oil and gas | Upstream and midstream
[46] | 2015 | Gas | Gathering, transmission and distribution

of energy consumption, i.e., kWh/MT/km, was not considered in any work; it may be considered in future works along with multi-objective algorithms, since the cost will differ at different times of the year due to the rheological properties of crude oil.
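To make the multi-objective setting concrete, the following minimal Python sketch evaluates three objectives of the kind discussed above, pumping energy cost, furnace heating cost, and chemical (DRA) dosing cost, over a coarse grid of operating points and keeps the non-dominated (Pareto) ones. All cost models, coefficients, and operating values here are illustrative assumptions, not taken from any cited work.

# Toy three-objective evaluation for a heated, DRA-dosed crude oil pipeline.
# All models and numbers below are illustrative assumptions.

def objectives(flow_m3_h, outlet_temp_c, dra_ppm):
    """Return (pumping cost, heating cost, DRA cost) per day -- toy models."""
    pumping = 0.02 * flow_m3_h ** 1.8 / (1.0 + 0.005 * dra_ppm)  # DRA cuts friction
    heating = 1.5 * flow_m3_h * max(outlet_temp_c - 30.0, 0.0)   # furnace duty
    dosing  = 0.8 * flow_m3_h * dra_ppm                          # chemical cost
    return (pumping, heating, dosing)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Enumerate a coarse grid of operating points and filter the Pareto set.
candidates = [(q, t, d) for q in (400, 600, 800)
                        for t in (35, 45, 55)
                        for d in (0, 20, 40)]
evaluated = [(c, objectives(*c)) for c in candidates]
pareto = [(c, f) for c, f in evaluated
          if not any(dominates(g, f) for _, g in evaluated)]
for c, f in pareto:
    print(c, tuple(round(x) for x in f))

An evolutionary algorithm such as NSGA-II would replace the grid enumeration with a population-based search over the same three-component objective vector, which is exactly the gap identified in this survey.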

43.3 Research Gaps

In recent years, the application of optimization techniques in pipelines has been increasing gradually. Though multi-objective optimization has been implemented extensively in gas pipelines in the recent past, very few applications are available for crude oil pipelines. From Table 43.1, few works were carried out using evolutionary algorithms or multi-objective optimization with more than two objective functions.

43.4 Conclusions

The study mainly focused on the optimization parameters and techniques used in crude oil pipeline transportation. The literature was divided into three categories: design, rheological properties, and operation. On review, it was found that, though classical and dynamic optimization techniques are used in crude oil pipelines, from Table 43.1, very few works were carried out on complex crude oil pipeline transportation using multi-objective optimization with evolutionary algorithms. From Fig. 43.1, it is found that most works aim to minimize operation and energy cost, because these are a big concern for pipeline operators, and considerable energy cost can be saved in operations. Hence, there is enormous scope to work on crude oil pipeline transportation operation using multi-objective optimization with more than two objective functions and evolutionary algorithms. Researchers who intend to carry out optimization work in crude oil pipelines will find this paper useful, since no review works were done earlier specifically on crude oil pipeline optimization.


Fig. 43.1 Crude oil pipeline optimisation parameters

References

1. History of gas and oil pipelines (2011) [online]. Available http://www.pipelineknowledge.com/
2. This Week in Oil and Gas History: October 10-16. Greasebook. https://greasebook.com/blog/this-week-oil-gas-history-october-10/. Accessed Oct 17, 2021
3. Oil Producing Countries 2021. https://worldpopulationreview.com/country-rankings/oil-producing-countries. Accessed Oct 17, 2021
4. Oil Consumption By Country (2021). https://worldpopulationreview.com/country-rankings/oil-consumption-by-country. Accessed Oct 17, 2021
5. Oil and Gas Pipelines: GlobalData Report Looks at Capacity and Spending. https://www.offshore-technology.com/comment/north-america-has-the-highest-oil-and-gas-pipeline-length-globally/. Accessed Oct 17, 2021
6. Reports and Studies: Petroleum Planning and Analysis Cell. https://www.ppac.gov.in/content/5_1_ReportStudies.aspx. Accessed Oct 18, 2021
7. E.S.: Liquid Pipeline Hydraulics. CRC Press (2004)
8. Wang, B., et al.: Sustainable crude oil transportation: design optimization for pipelines considering thermal and hydraulic energy consumption. Chem. Eng. Res. Des. 151, 23-39 (2019). https://doi.org/10.1016/j.cherd.2019.07.034
9. Wang, B., Yuan, M., Yan, Y., Yang, K., Zhang, H., Liang, Y.: Optimal design of an oil pipeline with a large-slope section. Eng. Optim. 51(9), 1480-1494 (2019). https://doi.org/10.1080/0305215X.2018.1525710
10. Rizwan, M., Al-Otaibi, M.F., Al-Khaledi, S.: Crude Oil Network Modeling, Simulation and Optimization: Novel Approach and Operational Benefits (2013) [online]. Available http://www.asme.org/about-asme/terms-of-use
11. Hongsheng, C., Changchun, W., Xiaokai, X.: Engineering-economic characteristics of hot crude oil pipelines. Proc. Biennial Int. Pipeline Conf. IPC 1, 205-211 (2009). https://doi.org/10.1115/IPC2008-64590
12. Abdulnabi, I.O.N.: Optimization of Centrifugal Pumps Operation for Least Cost and Maximum. MSc by Research, Apr 2008
13. Joshi, R.R.: Modeling and Simulation of Wax Deposition in Crude Oil Pipeline, Nov 2017 [online]. Available http://www.irphouse.com
14. Abdulzahra, M., Khazaali, A.: Optimization Procedure to Identify Blockages in Pipeline Networks via Non-invasive Technique Based on Genetic Algorithms (2017) [online]. Available http://preserve.lehigh.edu/etd


15. Abdou, H.A.M., Company, A.P.: Optimization Consumption for Diesel Fuel Through Crude Oil Pumping in Western Desert, Egypt (2016)
16. Joseph, A., Ajienka, J.A.: An economic approach to the handling of waxy crude oil 3(1), 14-25 (2015)
17. Lara, J., Cuero, P., Reyes, C., Unriza, J.C.: PSIG 1524 Pipeline Optimization Using DRA Degradation Models. Struct. Modif. Over 41, 1-8 (2015)
18. Eskin, D., Ratulowski, J., Akbarzadeh, K.: Modelling wax deposition in oil transport pipelines. Can. J. Chem. Eng. 92(6), 973-988 (2014). https://doi.org/10.1002/cjce.21991
19. Yang, M., Huang, Y., Dai, Y.H., Li, B.: An efficient global optimization algorithm for heated oil pipeline problems. Ind. Eng. Chem. Res. 59(14), 6638-6649 (2020). https://doi.org/10.1021/acs.iecr.0c00039
20. Li, B., He, M., Kang, Y., Shi, H., Zhao, J., Liu, Y.: A hybrid meta-heuristic solution to operation optimization of hot oil pipeline. IOP Conf. Ser. Mater. Sci. Eng. 646(1). https://doi.org/10.1088/1757-899X/646/1/012031
21. Bekibayev, T.T., Zhapbasbayev, U.K., Ramazanova, G.I.: Optimization of oil-mixture 'hot' pumping in main oil pipelines. J. Phys. Conf. Ser. 894(1). https://doi.org/10.1088/1742-6596/894/1/012127
22. Liu, E., Li, C., Yang, L., Liu, S., Wu, M., Wang, D.: Research on the optimal energy consumption of oil pipeline. J. Environ. Biol. (2015)
23. Liu, Y., Cheng, Q., Gan, Y., Wang, Y., Li, Z., Zhao, J.: Multi-objective optimization of energy consumption in crude oil pipeline transportation system operation based on exergy loss analysis. Neurocomputing 332, 100-110 (2019). https://doi.org/10.1016/j.neucom.2018.12.022
24. Wu, Y.: A study on optimal operation and energy saving of super heavy oil pipeline. Proc. Int. Conf. Comput. Distrib. Control Intell. Environ. Monit. CDCIEM 2011, 400-403 (2011). https://doi.org/10.1109/CDCIEM.2011.46
25. Chen, J., Zhang, J., Li, H.: Hydraulic characteristics of pipelines transporting hot waxy crudes. Proc. Biennial Int. Pipeline Conf. IPC 1, 723-727 (2007). https://doi.org/10.1115/IPC2006-10259
26. Changchun, W., Dafan, Y.: Optimized operation of hot oil pipelines. IFAC Proc. 25(18), 347-352 (1992). https://doi.org/10.1016/s1474-6670(17)49995-2
27. Liu, E., Peng, Y., Yi, Y., Lv, L., Qiao, W., Azimi, M.: Research on the steady-state operation optimization technology of oil pipeline. Energy Sci. Eng. 8(11), 4064-4081 (2020). https://doi.org/10.1002/ese3.795
28. Gu, L.: Optimization of long-distance oil pipeline production operation scheme. IOP Conf. Ser. Earth Environ. Sci. 772(1). https://doi.org/10.1088/1755-1315/772/1/012081
29. Zhonghua, D.: Analysis on influencing factors of buried hot oil pipeline. Case Stud. Therm. Eng. 16 (2019). https://doi.org/10.1016/j.csite.2019.100558
30. Yang, M., Huang, Y., Dai, Y.-H., Li, B.: Solving Heated Oil Pipeline Problems Via Mixed Integer Nonlinear Programming Approach, Jul 2019 [online]. Available http://arxiv.org/abs/1907.10812
31. Yang, K., Liu, Y.: Optimization of production operation scheme in the transportation process of different proportions of commingled crude oil. J. Eng. Sci. Technol. Rev. 10(6), 171-178 (2017). https://doi.org/10.25103/jestr.106.22
32. Zhou, M., Zhang, Y., Jin, S.: Dynamic optimization of heated oil pipeline operation using PSO-DE algorithm. Meas. J. Int. Meas. Confed. 59, 344-351 (2015). https://doi.org/10.1016/j.measurement.2014.09.071
33. Wang, Y., Lu, J.: Optimization of China crude oil transportation network with genetic ant colony algorithm. Information (Switzerland) 6(3), 467-480 (2015). https://doi.org/10.3390/info6030467
34. Yu, Y., Wu, C., Xing, X., Zuo, L.: Energy Saving for a Chinese Crude Oil Pipeline. ASME, California, USA, Jul 2014 [online]. Available http://www.asme.org/about-asme/terms-of-use
35. Wang, Y., Liu, Y., Zhao, J., Wei, L.X.: Energy consumption analysis and comprehensive optimization in oil pipeline system. Adv. Mater. Res. 648, 251-254 (2013). https://doi.org/10.4028/www.scientific.net/AMR.648.251


36. Abbasi, E., Garousi, V.: Multi-objective Optimization of both Pumping Energy and Maintenance Costs in Oil Pipeline Networks using Genetic Algorithms (2010) [online]. Available https://www.researchgate.net/publication/221616469
37. Yu, B., et al.: Numerical simulation of a buried hot crude oil pipeline under normal operation. Appl. Therm. Eng. 30(17-18), 2670-2679 (2010). https://doi.org/10.1016/j.applthermaleng.2010.07.016
38. Li, C., Wu, X., Jia, W.: Research of heated oil pipeline shutdown and restart process based on VB and MATLAB. Int. J. Mod. Educ. Comput. Sci. 2(2), 18-24 (2010). https://doi.org/10.5815/ijmecs.2010.02.03
39. Stanley-Telvent, G., Energy, K.T., Dzierwa-Telvent, R.: Power Optimization in Liquid Pipelines (2007)
40. Camacho, E.F., Ridao, M.A., Ternero, J.A., Rodriguez, J.M.: Optimal operation of pipeline transportation systems. IFAC Symp. Ser. Proc. Triennial World Congr. 5(8), 455-460 (1991). https://doi.org/10.1016/s1474-6670(17)51776-0
41. Barreto, C.V., Pires, L.F.G., Azevedo, L.F.A.: Optimization of pump energy consumption in oil pipelines. Proc. Biennial Int. Pipeline Conf. IPC 1, 23-27 (2004). https://doi.org/10.1115/ipc2004-0385
42. Li, F., Liu, Q., Guo, X., Xiao, J.: A Survey of Optimization Method for Oil-gas Pipeline Network Layout (2015)
43. Carter, R., Carter, R.G.: Pipeline Optimization: Dynamic Programming After 30 Years (1998)
44. Wang, Y., Tian, C.H., Yan, J.C., Huang, J.: A survey on oil/gas pipeline optimization: problems, methods and challenges. Proc. 2012 IEEE Int. Conf. Serv. Oper. Logist. Inform. SOLI 2012, 150-155 (2012). https://doi.org/10.1109/SOLI.2012.6273521
45. Niaei, M.S., Iranmanesh, S.H., Turabi, S.A.: A review of mathematical optimization applications in oil and gas upstream and midstream management. Int. J. Energy Stud. 2, 143-154 (2013)
46. Mercado, R., Roger, Z.: Optimization problems in natural gas transportation systems: a state of the art review. Appl. Energy 147(2015), 536-555 (2015)

Chapter 44

Design and Simulation of Higher Efficiency Perovskite Heterojunction Solar Cell Based on a Comparative Study of the Cell Performances with Different Standard HTLs

Pratik De Sarkar and K. K. Ghosh

Abstract The performance of a CH3NH3SnI3−xBrx-based perovskite solar cell is examined by changing the HTL material, viz., PEDOT:PSS, Spiro:MeOTAD, and MoO3, using an appropriate model followed by its simulation. Different design parameters, like the mole fraction of the perovskite material, doping concentration, defect density, and thickness, have been optimized to ensure better performance. With these optimum design parameters and taking MoO3 as the HTL layer, it is observed and concluded that the proposed structure of the solar cell may attain an efficiency as high as 22.11%.

44.1 Introduction

Organometal halide perovskite solar cells (PSCs) have raised interest in the marketplace of non-conventional green energy sources, gradually replacing the age-old silicon technology in photovoltaic research. The ultimate goal of any photovoltaic solar device is to obtain high power conversion efficiency (PCE), high fill factor (FF), and improved short-circuit current with enhanced open-circuit voltage. An understanding of the physical processes at play behind the working of the cell is important to attain an optimal quality solar cell. Without taking any risk of the time and economic losses involved in investigating a high-output, high-efficiency solar device, the best route to fulfill such an objective is to model the solar device and to simulate the results. In contrast to the metallic semiconductors of high susceptibility, which results in low exciton energy, the organic semiconductors of much lower susceptibility result in relatively higher exciton energy. This requires a different strategy to dissociate the exciton. Heterojunctions of the active perovskite layer at its two extremes provide the necessary Coulomb field to separate the excitonic charges

P. De Sarkar (B) · K. K. Ghosh
Institute of Engineering and Management, D-1, EP Block, Sector V, Bidhannagar, Kolkata, West Bengal 700091, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_44


Fig. 44.1 Schematic view of cubic perovskite crystal structure for AMX3 compound. Reproduced with permission [5]. Copyright 2019, Global Challenges

and give transport layers to the separated electrons (electron transport layer, ETL) and holes (hole transport layer, HTL). To improve efficiency, a key requirement of the HTL is a good match of its valence band maximum with the cathode and the perovskite, in addition to a high hole injection efficiency. A low recombination rate in the HTL is also an important factor for consideration. Molybdenum oxide (MoO3) is a high-work-function, wide-gap material whose Fermi level close to its valence band makes it behave as a p-type, hole injection/extraction material, which makes it suitable for organic photovoltaic devices. The material may also behave as n-type, presumably because of a large number of oxygen vacancies. Further, this material has a high transparency in the visible solar spectrum. Its non-toxicity and moderate evaporation temperature give additional benefits in the fabrication process. Hole transfer across the perovskite-MoO3 interface takes place through the valence band maxima of the two materials. The large gap and high conduction band minimum secure electron blocking. Over the past few years, extensive research work has been carried out with MoO3 as HTL to strive for better-performing solar devices [1-4]. Organo-inorganic halide perovskites have the ABX3 structure (Fig. 44.1). While A and B represent organic and inorganic cations, respectively, X represents an anion, e.g., a halide or oxygen. Present-day perovskites replace the toxic Pb with Sn or other suitable non-toxic cations. Ravinder Kour et al. have made a rigorous review to find suitable lead-free perovskite materials for solar cells and concluded that CH3NH3SnI3−xBrx may be one of the most efficient materials to fabricate lead-free PSCs [5]. Feng Hao et al. used this material in their design and observed that, among the three mole fraction values, the best performance is achieved for x = 2, i.e., CH3NH3SnIBr2 [6]. In their design, Spiro:MeOTAD was used as HTL. The schematic energy diagram of this material for various mole fraction values is shown in Fig. 44.2. In addition to this HTL material, we have thoroughly investigated the working of the PSC taking two other very promising HTLs, namely PEDOT:PSS and MoO3. Schematics of the corresponding energy levels are depicted in Figs. 44.3 and 44.4, respectively. The transition metal oxide (TMO) MoO3 is considered as an HTM because of its desired features like high work function (6.9 eV) and large

Fig. 44.2 Schematic energy-level diagram of CH3NH3SnI3−xBrx compounds with Spiro:MeOTAD as HTL. Reproduced with permission [6]. Copyright 2014, Springer Nature

Fig. 44.3 Schematic energy-level diagram of CH3NH3SnI3−xBrx (x = 2) with PEDOT:PSS as HTL

Fig. 44.4 Schematic energy-level diagram of CH3NH3SnI3−xBrx (x = 2) with MoO3 as HTL


bandgap (3 eV). Owing also to its high transparency in the visible region of the solar spectrum and its non-toxicity, this TMO is chosen as a suitable candidate in our cell design.

44.2 Modeling

Solar cells, in general, are modeled by normal P-N junction diodes in an equivalent circuit. The perovskite solar cell under study is modeled as a P-i-N diode (Fig. 44.5) to describe its working performance (Fig. 44.6). The use of large and direct gap materials with high absorption coefficients makes P-i-N solar cells more efficient in power conversion because of the sufficiently high generation of EHPs near the surface [7]. The built-in potential of P-i-N solar cells extends over a wider region because of the intrinsic layer between the P and N layers. This eventually increases the carrier diffusion length and lifetime, which in turn improves the overall performance of the solar cell [8]. We have used the following one-dimensional differential equations in the model to calculate the design parameters and optimized them to achieve the best performance of the device.

Fig. 44.5 Design diagram

Fig. 44.6 Band diagram of a P-i-N solar cell

Poisson's equation:

∂²ϕ(x)/∂x² = (q/ε) [n(x) − p(x) − N_D(x) + N_A(x) − p_t(x) + n_t(x)]   (44.1)

Transport equation:

J_{n,p} = n q μ_n E + q D_n (∂n/∂x) + p q μ_p E − q D_p (∂p/∂x)   (44.2)

Diffusion length:

L_{n,p} = √(D_{n,p} τ_{n,p})   (44.3)

Diffusivity:

D_{n,p} = (k_B T / q) μ_{n,p}   (44.4)

Continuity equation:

∂n,p/∂t = (1/q)(∂J_n/∂x) + (G_n − R_n) + (1/q)(∂J_p/∂x) + (G_p − R_p)   (44.5)

V_OC = (n k_B T / q) ln(I_L/I_0 + 1)   (44.6)

The symbols used in the above equations are defined in Table 44.1 [9, 10].

Table 44.1 Symbol definitions

Symbol | Definition
ϕ(x) | Electrostatic potential
q | Elementary charge
ε | Permittivity
n(x) | Density of free electrons
p(x) | Density of free holes
N_D(x) | Donor concentration
N_A(x) | Acceptor concentration
p_t(x) | Trapped hole density
n_t(x) | Trapped electron density
J_{n,p} | Electron and hole current density
E | Electric field
μ_n | Electron mobility
μ_p | Hole mobility
L_{n,p} | Diffusion length of electrons and holes
D_{n,p} | Electron and hole diffusivity
τ_{n,p} | Lifetime of electrons and holes
G_{n,p} | Optical generation rate
R_{n,p} | Recombination rate
n | Ideality factor
k_B T/q | Thermal voltage
I_L | Light-generated current
I_0 | Reverse saturation current
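As a numeric illustration of Eqs. (44.3), (44.4), and (44.6), a minimal Python sketch is given below; the mobility, lifetime, ideality factor, and current values in it are assumed for illustration only and are not the design values of Table 44.2.

import math

kB = 1.380649e-23     # Boltzmann constant, J/K
q  = 1.602176634e-19  # elementary charge, C
T  = 300.0            # temperature, K

# Eq. (44.4): Einstein relation D = (kB*T/q) * mu
mu_n = 0.5e-4         # electron mobility, m^2/Vs (0.5 cm^2/Vs, assumed)
D_n = (kB * T / q) * mu_n

# Eq. (44.3): diffusion length L = sqrt(D * tau)
tau_n = 1e-8          # carrier lifetime, s (assumed)
L_n = math.sqrt(D_n * tau_n)

# Eq. (44.6): V_OC = (n*kB*T/q) * ln(I_L/I_0 + 1)
n_ideality = 1.5      # ideality factor (assumed)
I_L, I_0 = 20e-3, 1e-12  # light-generated and saturation currents, A (assumed)
V_oc = n_ideality * (kB * T / q) * math.log(I_L / I_0 + 1.0)

print(f"D_n = {D_n:.3e} m^2/s, L_n = {L_n*1e9:.1f} nm, V_OC = {V_oc:.2f} V")

With these assumed inputs, the sketch yields a diffusion length of roughly a hundred nanometers and an open-circuit voltage of about 0.9 V, showing the order of magnitude each relation produces.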

44.3 Simulation

To simulate the proposed model, we use the simulator SCAPS (a solar cell capacitance simulator) [11, 12]. SCAPS is a one-dimensional solar cell simulation program developed at the Department of Electronics and Information Systems (ELIS) of the University of Gent, Belgium. Different AC and DC electrical measurements can be analyzed in both dark and light illumination and also at different temperatures. All the basic one-dimensional semiconductor equations can be solved under steady-state conditions using SCAPS-1D. A schematic diagram of the workflow of the simulator is depicted in Fig. 44.7. In the simulation, we have used the design parameters shown in Table 44.2, as reported in published papers. In the topmost contact layer of our design, fluorine-doped tin oxide (SnO2:F) is used as the transparent conducting oxide (TCO) layer [12]. In the second layer, titanium dioxide is used as the electron transport layer (ETL) [12]. The third and widest layer is the perovskite absorber layer, in which the organometal halide perovskite CH3NH3SnIBr2 is used [6]. In the fourth layer, we


Fig. 44.7 Workflow diagram of SCAPS-1D

Table 44.2 Design parameter values

Parameter | TCO (SnO2:F) [12] | ETL (TiO2) [12] | Perovskite (CH3NH3SnIBr2) [6] | HTL-1 (PEDOT:PSS) [14] | HTL-2 (MoO3) [13]
Thickness (nm) | 500 | 50 | 900 | 350 | 350
Bandgap, Eg (eV) | 3.5 | 3.2 | 1.75 | 2.2 | 3.0
Electron affinity, χ (eV) | 4.0 | 4.0 | 3.78 | 2.9 | 2.5
Dielectric permittivity, εr (relative) | 9.0 | 9.0 | 8.2 | 3.0 | 12.5
CB effective density of states, NC (1/cm3) | 2.2 × 10^18 | 1.0 × 10^19 | 1.8 × 10^18 | 2.2 × 10^15 | 2.2 × 10^18
VB effective density of states, NV (1/cm3) | 1.8 × 10^19 | 1.0 × 10^19 | 1.8 × 10^19 | 1.8 × 10^18 | 1.8 × 10^19
Electron mobility, μn (cm2/Vs) | 20 | 0.02 | 0.5 | 10 | 25
Hole mobility, μp (cm2/Vs) | 10 | 2.0 | 0.5 | 10 | 100
Donor concentration, ND (cm−3) | 2.0 × 10^19 | 1.0 × 10^19 | – | – | –
Acceptor concentration, NA (cm−3) | – | – | 1.0 × 10^16 | 3.17 × 10^14 | 6.0 × 10^18
Defect density, Nt (cm−3) | 1.0 × 10^15 | 2.0 × 10^15 | 1.0 × 10^14 | 2.5 × 10^15 | 1.0 × 10^15


have used two different materials as the hole transport layer (HTL) for the two different designs. In the first design, PEDOT:PSS is used, while in the second design, MoO3 is used as HTL [13, 14]. A very thin layer of gold is used as the bottom contact layer [15]. The photo-generated current is drawn from the top and bottom contact layers of the device. Once the design parameters are defined successfully, defects are introduced in every layer of our device so that the device performance is modeled more realistically [12-14]. After completion of the design process, we specified the testing environment. In our simulation, we tested the device under AM1.5G, one-sun conditions at a temperature of 300 K. For the numerical settings, we set all the parameters to their default values. Once the first simulation with PEDOT:PSS as HTL was over, the second simulation was performed changing the HTL to MoO3. After the two successful simulations, all the data were collected, and the corresponding graphs were plotted for analysis.
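For bookkeeping, the SCAPS inputs just described can be recorded as plain data, as in the Python sketch below. SCAPS itself is operated through its own interface, so this is only a structured record of Table 44.2 and the test conditions, not a SCAPS API.

# Bookkeeping sketch of the simulated device stack (values from Table 44.2)
# and the test environment; not a SCAPS interface, just structured inputs.
stack = [
    {"layer": "TCO",          "material": "SnO2:F",       "thickness_nm": 500},
    {"layer": "ETL",          "material": "TiO2",         "thickness_nm": 50},
    {"layer": "absorber",     "material": "CH3NH3SnIBr2", "thickness_nm": 900},
    {"layer": "HTL",          "material": "MoO3",         "thickness_nm": 350},   # or PEDOT:PSS
    {"layer": "back contact", "material": "Au",           "thickness_nm": None},  # very thin, value not given
]
environment = {"spectrum": "AM1.5G", "illumination_sun": 1.0, "temperature_K": 300.0}

total = sum(l["thickness_nm"] for l in stack if l["thickness_nm"])
print(f"active stack thickness: {total} nm under {environment['spectrum']}")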

44.4 Results and Discussion

After successful simulation of the two different structures having HTL materials PEDOT:PSS and MoO3, respectively, we analyzed the band diagrams of both designs (Fig. 44.8). It is observed from the diagram that, at the ETL-perovskite junction, the conduction band as well as the valence band of the ETL have lower energies than those of the perovskite material. As a consequence, all the photo-generated electrons move freely from the perovskite to the ETL; however, no holes can cross the valence band barrier, i.e., no holes can move to the ETL. Similarly, at the perovskite-HTL junction, the conduction band as well as the valence band of the HTL have higher energies than those of the perovskite material. Hence, all the photo-generated holes can move freely from the perovskite to the HTL; however, no electrons can cross the conduction band barrier, i.e., no electrons can move to the HTL. In summary, it can be said that both designs are in close conformity with the desired actions of the different sections of the solar device.

Fig. 44.8 Energy band diagram of the two designs having HTL material as a PEDOT:PSS and b MoO3
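The band alignment argument above can be checked directly from the electron affinities χ and bandgaps Eg of Table 44.2, taking the vacuum level as zero so that E_CB = −χ and E_VB = −(χ + Eg). A minimal Python sketch:

# Band edges (eV, vacuum level = 0) from Table 44.2: (electron affinity, bandgap)
layers = {
    "TiO2 (ETL)":        (4.00, 3.20),
    "CH3NH3SnIBr2":      (3.78, 1.75),
    "PEDOT:PSS (HTL-1)": (2.90, 2.20),
    "MoO3 (HTL-2)":      (2.50, 3.00),
}
edges = {name: (-chi, -(chi + eg)) for name, (chi, eg) in layers.items()}
for name, (ecb, evb) in edges.items():
    print(f"{name:20s} E_CB = {ecb:6.2f} eV, E_VB = {evb:6.2f} eV")

# Electrons drop from the perovskite CB (-3.78 eV) to the TiO2 CB (-4.00 eV),
# while the deep TiO2 VB (-7.20 eV) blocks holes; conversely, the shallower HTL
# VBs accept holes while the high HTL CBs block electrons.
cbo = edges["CH3NH3SnIBr2"][0] - edges["TiO2 (ETL)"][0]
print(f"CB offset perovskite -> ETL: {cbo:.2f} eV (favours electron extraction)")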


Fig. 44.9 Current density graph of the two designs having HTL material as a PEDOT:PSS and b MoO3

We have tested our designed device with the following performance analysis. A solar cell is basically a power generation device with the objective of producing a workable voltage and current. Figure 44.9 shows the current density graphs of both structures, using PEDOT:PSS and MoO3 as HTLs. Although the graphs look almost similar, a close inspection reveals that the total current is higher in the second structure, with MoO3 as HTL. The negative values on the y-axis (current density) indicate that the current is of generation type. The current-voltage curve gives the most significant information about any solar cell, namely the open-circuit voltage (V_OC), short-circuit current density (J_SC), and fill factor (FF). In our J-V graph, shown in Fig. 44.10, there are two different plots. The blue curve is for the first design having PEDOT:PSS as HTL, and the red curve is for the second design having MoO3 as HTL. It is clearly visible from the graph that the second design has a higher V_OC value, i.e., more output power than the first design. The red curve also covers more area under it, which means the FF of the second design is larger than that of the first design. External quantum efficiency (EQE) is another significant characteristic for measuring the performance of any photovoltaic device. EQE can be described as the ratio of the number of generated electrons to the number of incident photons [16]. Figure 44.11 shows the EQE versus wavelength plot of the two structures. The plots provide a clear indication of the satisfactory external quantum efficiencies of the two proposed designs (design-1 and design-2) in the visible region of the solar spectrum. Further, design-2 credits a higher efficiency. The performance of the solar cell in our design, taking either of the two HTLs, i.e., PEDOT:PSS or MoO3, proves to be better compared with the use of a conventional HTL like Spiro:MeOTAD, as reflected in the data in Table 44.3. To visualize and validate our inference, we give a graphical representation (Fig. 44.12) comparing the performances with Spiro:MeOTAD, PEDOT:PSS, and MoO3 as different HTL materials. According to this figure, our second design (HTL: MoO3) gives better values of V_OC, J_SC, FF, and η (power conversion efficiency). We thus conclusively remark that MoO3 may be the best suited HTL material for this perovskite solar cell to achieve higher efficiency.

Fig. 44.10 Current–voltage (J-V) characteristics

Fig. 44.11 Characteristic of external quantum efficiency versus wavelength

Table 44.3 Characteristics parameters data

Materials in HTL layer | V_OC (V) | J_SC (mA/cm2) | FF (%) | η (%)
Spiro:MeOTAD | 0.82 [6] | 11.73 [6] | 57 [6] | 5.73 [6]
PEDOT:PSS | 1.08 | 19.03 | 77.26 | 15.91
MoO3 | 1.37 | 19.07 | 84.61 | 22.11
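The figures of merit in Table 44.3 can be cross-checked through η = V_OC · J_SC · FF / P_in, with P_in = 100 mW/cm² for AM1.5G, one-sun illumination. The Python sketch below reproduces the two simulated rows to within rounding; the Spiro:MeOTAD row, quoted from [6], deviates slightly, as expected from rounded published values.

# Cross-check of Table 44.3: eta = Voc * Jsc * FF / Pin, Pin = 100 mW/cm^2 (AM1.5G)
cells = {
    "Spiro:MeOTAD": (0.82, 11.73, 0.57),
    "PEDOT:PSS":    (1.08, 19.03, 0.7726),
    "MoO3":         (1.37, 19.07, 0.8461),
}
P_in = 100.0  # incident power, mW/cm^2
for htl, (voc, jsc, ff) in cells.items():
    eta = voc * jsc * ff / P_in * 100.0  # percent
    print(f"{htl:12s} eta = {eta:5.2f} %")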


Fig. 44.12 Visual comparison of the characteristic parameter data

44.5 Conclusion

It has already been realized that CH3NH3SnIBr2 is one of the most promising and better-performing candidates as a lead-free perovskite material for the fabrication of next-generation solar cells. In our work, we have focused mainly on investigating better HTL materials to further improve and enhance the performance of solar cells based on this material composite. After rigorous numerical analysis in a comparative study with standard HTLs, followed by modeling and simulation, it is inferred that the performance of this perovskite material system may be further improved appreciably if MoO3 is used as the HTL material. However, the proposed design may be further optimized by changing the thickness of the layers or by replacing the ETL material.

References

1. Hu, X., Chen, L., Chen, Y.: Universal and versatile MoO3-based hole transport layers for efficient and stable polymer solar cells. J. Phys. Chem. C 118(19), 9930-9938 (2014)
2. Dwivedi, C., Mohammad, T., Bharti, V., Patra, A., Pathak, S., Dutta, V.: CoSP approach for the synthesis of blue MoO3 nanoparticles for application as hole transport layer (HTL) in organic solar cells. Sol. Energy 162, 78-83 (2018)
3. Chaturvedi, N., Swami, S.K., Dutta, V.: Electric field assisted spray deposited MoO3 thin films as a hole transport layer for organic solar cells. Sol. Energy 137, 379-384 (2016)
4. Verma, A.K., Agnihotri, P., Patel, M., Sahu, S., Tiwari, S.: Fabrication and characterization of novel inverted organic solar cells employing ZnO ETL and MoO3 HTL. In: International


Conference on Fibre Optics and Photonics, pp. Tu4A-18. Optical Society of America (2016)
5. Kour, R., Arya, S., Verma, S., Gupta, J., Bandhoria, P., Bharti, V., Datt, R., Gupta, V.: Potential substitutes for replacement of lead in perovskite solar cells: a review. Global Chall. 3(11), 1900050 (2019)
6. Hao, F., Stoumpos, C.C., Cao, D.H., Chang, R.P., Kanatzidis, M.G.: Lead-free solid-state organic-inorganic halide perovskite solar cells. Nat. Photonics 8(6), 489-494 (2014)
7. Sze, S.M., Ng, K.K.: Physics of Semiconductor Devices. John Wiley & Sons (2006)
8. Gray, J.L.: The physics of the solar cell. Handbook Photovol. Sci. Eng. 2, 82-128 (2011)
9. Husainat, A., Ali, W., Cofie, P., Attia, J., Fuller, J.: Simulation and analysis of methylammonium lead iodide (CH3NH3PbI3) perovskite solar cell with Au contact using SCAPS 1D simulator. Am. J. Opt. Photon. 7(2), 33 (2019)
10. Mostefaoui, M., Mazari, H., Khelifi, S., Bouraiou, A., Dabou, R.: Simulation of high efficiency CIGS solar cells with SCAPS-1D software. Energy Proc. 74, 736-744 (2015)
11. Burgelman, M., Decock, K., Niemegeers, A., Verschraegen, J., Degrave, S.: SCAPS Manual (2016)
12. Slami, A., Bouchaour, M., Merad, L.: Numerical Study of Based Perovskite Solar Cells by SCAPS-1D
13. Chowdhury, M.S., Shahahmadi, S.A., Chelvanathan, P., Tiong, S.K., Amin, N., Techato, K., Nuthammachot, N., Chowdhury, T., Suklueng, M.: Effect of deep-level defect density of the absorber layer and n/i interface in perovskite solar cells by SCAPS-1D. Res. Phys. 16, 102839 (2020)
14. Li, W., Li, W., Feng, Y., Yang, C.: Numerical analysis of the back interface for high efficiency wide band gap chalcopyrite solar cells. Sol. Energy 180, 207-215 (2019)
15. Anwar, F., Mahbub, R., Satter, S.S., Ullah, S.M.: Effect of different HTM layers and electrical parameters on ZnO nanorod-based lead-free perovskite solar cell for high-efficiency performance. Int. J. Photoenergy (2017)
16. Husainat, A., Ali, W., Cofie, P., Attia, J., Fuller, J., Darwish, A.: Simulation and analysis method of different back metals contact of CH3NH3PbI3 perovskite solar cell along with electron transport layer TiO2 using MBMT-MAPLE/PLD. Am. J. Opt. Photon. 8(1), 6-26 (2020)

Chapter 45

6G Communication: A Vision on Deep Learning in URLLC

Ashmita Roy Medha, Muskan Gupta, Sabuzima Nayak, and Ripon Patgiri

Abstract Currently, the focus of wireless science is gradually moving toward 6G communication technology. The goals of 6G are ambitious; however, meeting them is a necessity to make many applications a reality. This paper analyzes the targets that must be met to establish the 6G network effectively and efficiently. In addition, the paper presents some future applications that will use 6G. The rapid advances in artificial intelligence and machine learning will significantly help 6G achieve its goals. 6G will integrate artificial intelligence into its every component to move automation from smart to intelligent. One such aspect is using deep learning to achieve Ultra-Reliable and Low-Latency Communication (URLLC) in 6G. This paper elaborates on the role of deep learning in achieving URLLC.

45.1 Introduction

Since the launch of Bell Labs' Advanced Mobile Phone System (AMPS), the first-generation (1G) network, three broad and incremental improvements to cellular telephone networks have resulted in 2G, 3G, and 4G [1]. The progression from 1G to 4G builds on the ground-breaking implementation of wireless telegraphy by Marconi in the nineteenth century and on the theoretical foundation of Shannon's information theory of 1948. The 1G analog cellular wireless communication enabled phone-voice connectivity in the 1980s before the 2G digital

A. Roy Medha · M. Gupta · S. Nayak · R. Patgiri (B)
National Institute of Technology Silchar, Cachar 788010, Assam, India
e-mail: [email protected]
URL: http://cs.nits.ac.in/rp/
A. Roy Medha
e-mail: [email protected]
M. Gupta
e-mail: [email protected]
S. Nayak
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
B. Das et al. (eds.), Modeling, Simulation and Optimization, Smart Innovation, Systems and Technologies 292, https://doi.org/10.1007/978-981-19-0836-1_45


cellular network replaced it in the early 1990s. Owing to digitization, in addition to conventional voice services, 2G was able to provide encrypted and data communications, including the short message service (SMS), whereas 3G provided Internet service, video calls, and mobile TV services. 3G used Code Division Multiple Access 2000 (CDMA2000), Worldwide Interoperability for Microwave Access (WiMAX), Time-Division Synchronous CDMA (TD-SCDMA), and Wideband Code Division Multiple Access (WCDMA) to support numerous data networks throughout the early twenty-first century [5]. Then, 4G was launched in 2009 and has had tremendous success both technologically and commercially. 4G merged various technologies such as Orthogonal Frequency-Division Multiplexing (OFDM), Multiple Input Multiple Output (MIMO), antenna construction, and all-Internet-protocol (IP) networking to achieve high-rate mobile data transfer. The launch of mobile smartphones and tablets providing massive data throughput, together with the hardware and networking processes surrounding 4G, has contributed to shaping society [5, 7, 8]. Currently, many countries are deploying the 5G network. Most carriers and network devices use the 3rd Generation Partnership Project (3GPP) 5G New Radio protocol for crowded urban areas during the early roll-out stage of 5G networks [17]. These 5G networks run in the 2-6 GHz band. With increasing network density, 5G networks utilize massive MIMO as well as millimeter-wave systems. Network slicing is used, particularly for 5G mission-critical solutions. Furthermore, 5G supports traditional augmented reality systems and enhanced reality platforms, high-definition TV streaming, high-connectivity information transmission, and modern virtual reality and extended reality. There are three service offerings for different system situations in a complete 5G network, namely Ultra-Reliable and Low-Latency Communications (URLLC), Massive Machine-Type Communications (mMTC), and enhanced Mobile Broadband (eMBB) [18]. Nevertheless, researchers have started to focus on the next generation after 5G, i.e., 6G [8]. Following the trend of introducing a new version of the cellular framework once every ten years, 6G is expected to be standardized before 2030. However, 6G has many issues and challenges that need to be solved; otherwise, 6G will take more than ten years to enter the market [24]. The most concerning fact about 6G is the inability of current technologies to support 6G communication technology. The requirements of 6G are very high; for example, 6G will operate at 1 THz frequency with a data rate of 1 Tbps [15]. More details are provided in Sect. 45.2. In addition, we have highlighted some applications which will be supported by 6G in Sect. 45.3. Furthermore, Sect. 45.4 precisely elaborates the role of artificial intelligence in 6G. Section 45.5 explains the importance of deep learning in 6G and some issues that require attention. Section 45.6 discusses some points regarding 6G. Finally, Sect. 45.7 concludes the paper.


45.2 6G Communication Here, 5G innovations have to compromise with many problems such as throughput, latency, energy efficiency, implementation cost, durability, and hardware complexity. However, after 2030, 5G will be unable to fulfill the consumer requirements. Thus, 6G has to fulfill the void from 5G to consumer demand. The core priorities for 6G networks are focused on recent developments and projections of future needs [6, 14, 15]: • Extreme data speeds: 6G communication features extreme data speeds that is capable of 1 TBPS. The speed will be achieved by using TeraHertz frequency band for communication. And, 6G may also explore other alternatives for transmission such as visible light communication [3] and molecular communications [11]. • Ultra-low latency: It will provide a extreme low latency of less than 1 millisecond. AI will be used to predict and avoid any possible delay during the communication. • Global connectivity: 6G will achieve this by using multiple network nodes having overlapping domains. Moreover, drones will be a network node. It will be deployed in areas where there is loss of connectivity, for instance, any disaster situation leading to damage of network nodes (e.g., cyclone). • Huge connected devices: During 6G, Internet of Things (IoT) will transfer to Internet of Everything (IoE) where every digital devices are connected to the Internet. • Ultra-high-level reliability: The communication channel, AI, and global connectivity will help in achieving this goal. • Utilizing deep learning to connect intelligence: 6G will use federated learning to share the knowledge of one device with other devices. • Fully AI-driven communication: 6G will explore AI to implement it in every aspect of its communication. AI will be used in making the edge devices self-sufficient in computing and analysis. It will be used in communication layers, specifically, physical, data link, and transport layers to increase their efficiency. • Physical layer security: It is one of the main attractions of 6G. It will use AI in physical layer of the communication system. AI will be helpful for adaptive encoding and decoding, automatic modulation classification, intelligent beamforming, and channel state estimation and prediction. Figure 45.1 illustrates the 6G specifications and features. This 6G will operate at 1 Terahertz (THz) frequency with a data rate of 1 Tbps. The 6G communication technology will be fully satellite integrated for terrestrial communication [15]. It will be capable of providing high quality of services (QoS) to millions of smart devices within a small area. It is achieved by using multiple access points (AP)/base stations (BS) where each APs/BSs has overlapping areas. The 6G network will appear as distributed, cell-less, and massive multiple input multiple output (MIMO) system. Furthermore, drone-assisted and aerial networks will provide support to terrestrial cellular systems. Then, 6G will provide support in air, water, and space [15].


Fig. 45.1 6G specifications and features: throughput and capacity (>100 Gbps, 5 Gbps at the edge); latency (0.1 ms); response time (1 s, zero touch); coverage (global coverage, 10 million devices per sq. km); device (zero energy, intuitive interfaces); accuracy

… 45% for ε1 > 0.3, and for ε1 ≤ 0.3, the heat transfer is influenced by convection bounding the air layer. Therefore, radiation performs a crucial role in transferring the heat for medium and higher values of emissivity. Its contribution is about 52% and 60% for ε1 = 0.5 and 0.9, respectively. A comparison of the thermal resistance (Rt) from the present study with the data mentioned in the International Standard [27] formulation is given in Table 48.7.
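The reported coefficients make the radiation share explicit: with h_conv = 6.407 W/m²K and h_rad = 7.193 W/m²K for the inside air region, radiation carries h_rad/(h_conv + h_rad) ≈ 53% of the gap heat flow, consistent with the roughly 52% quoted above for ε1 = 0.5. The Python sketch below also shows the trend with emissivity using the standard linearized estimate h_rad ≈ 4·ε_eff·σ·T_m³ for a parallel air gap; the facing-surface emissivity and mean gap temperature in it are assumed values, not the paper's.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2 K^4

# Reported coefficients for the inside air region (Type-I wall, 600 W/m^2):
h_conv, h_rad = 6.407, 7.193  # W/m^2 K
share = h_rad / (h_conv + h_rad)
print(f"radiation share of gap heat transfer ~ {share:.0%}")  # ~53%

# Linearized estimate h_rad ~ 4*eps_eff*sigma*Tm^3 for a parallel air gap;
# eps2 and Tm below are assumed values, used only to show the trend.
def h_rad_est(eps1, eps2=0.9, t_mean_k=300.0):
    eps_eff = 1.0 / (1.0 / eps1 + 1.0 / eps2 - 1.0)  # parallel-plate emissivity
    return 4.0 * eps_eff * SIGMA * t_mean_k ** 3

for eps in (0.1, 0.3, 0.5, 0.9):
    h = h_rad_est(eps)
    print(f"eps1 = {eps:.1f} -> h_rad ~ {h:4.1f} W/m^2K, "
          f"share ~ {h / (h + h_conv):.0%}")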

48.6 Conclusions

A computational study is conducted on the double plaster-brick-glass wool wall to observe the contributions of the different heat transfer phenomena, i.e., convection, conduction, and surface radiation, through the walls. The glass wool thickness has little impact on the overall heat transfer. The contributions of convection and radiation to the heat transfer process are observed to differ under different emissivity conditions. Conduction is insignificant owing to the addition of glass wool at the inner surface near the air layer. The significant conclusions drawn from this study are mentioned below.

• The U-value obtained with the Type-I wall is 0.49 W/m²K, 5.76% less than the U-value of the Type-II wall. The inner wall mean temperature with the Type-I combination is 10.8% lower than with the Type-II wall.
• The inner wall temperature is reduced by 0.25% when the glass wool thickness is increased from 2 to 5 cm, although the glass wool thickness has little impact on the overall heat transfer coefficient.


• At a solar radiation of 600 W/m², the convection coefficient, the radiation coefficient, and the heat resistance through the inside air region are 6.407 W/m²K, 7.193 W/m²K, and 0.139 m²K/W, respectively, in the Type-I wall.
• The heat transfer process is dominated by radiation for ε1 > 0.3, whereas convection plays the major role for emissivity ε1 ≤ 0.3. The conduction percentage in the heat transfer process is negligible.
• The low emissivity (