Cognitive Informatics and Soft Computing: Proceeding of CISC 2021 (Lecture Notes in Networks and Systems, 375) 9811687625, 9789811687624



Table of Contents:
Preface
Contents
Editors and Contributors
How Servant Leadership is Effective for Employee Performance with the Use of t-Test, Algorithm and ANOVA
1 Introduction
2 Review of Literature
3 Need of the Study
4 Objectives of the Study
5 Hypotheses of the Study
6 Methodology
7 Results
8 Limitations and Directions for Future Implications
9 Conclusion
References
Interpreting Skilled and Unskilled Tasks Using EEG Signals
1 Introduction
2 Proposed Algorithm
2.1 Methodology
3 Experimental Results
3.1 Results and Conclusion
References
Data Transmission in Clouds Using Heed and Energy-Efficient Routing Algorithm
1 Introduction
2 Critical Areas for Cloud Computing
2.1 Securing Data at Rest
2.2 Securing Data in Transit
2.3 Authentication
3 Problem Definition
4 Methodology
5 Result
6 Conclusion
7 Future Work
References
Modeling and Optimization of Reaction Parameters for Glycerol Production Using Response Surface Methodology
1 Introduction
2 Experimental
2.1 Materials
2.2 Methods
3 Results and Discussion
3.1 Effect of Ammonia/kieselguhr Ratio on the Catalyst
3.2 Digestion Temperature After the Addition of Ammonical Slurry of Ammonium Tungstate
3.3 Digestion Time After Adding Ammonical Slurry of Ammonium Tungsten
4 Optimization
5 Conclusion
References
Allocation of Different Types of DG Sources in a Time-Varying Radial Distribution Networks
1 Introduction
2 Problem Formulation
3 Research Methodology
3.1 Forward Backward Sweep (FBS)
3.2 Particle Swarm Optimization
4 Test Systems
5 Conclusion
References
FSO at Moderate Atmospheric Turbulence Using 16 QAM
1 Introduction
2 Atmosphere Turbulence
3 Variation in Atmospheric Turbulence
4 Results and Discussions
5 Conclusion
References
Performance Enhancement of Planar Antenna for Wireless Applications
1 Introduction
2 Design Configuration
3 Results and Discussion
3.1 Analysis by Varying the Length of the Ground Plane
3.2 Radiation Patterns
3.3 Gain
4 Conclusion
References
Invariant Feature-Based Dynamic Scene Classification Using the Optimized Convolution Neural Network
1 Introduction
2 Literature Survey
3 Architecture of SCM
4 Results and Discussion
5 Conclusion and Future Work
References
Improvement of Solar Panel Efficiency with Automatic Cleaning Robot
1 Introduction
2 Problem Formulation
3 Description of the Hardware Architecture
4 Detailed Working of Automatic Solar Cleaning Robot
5 Simulation of Solar Panel with and Without Dust on Solar Panel
6 Result and Discussion
7 Conclusion
References
Performance Analysis of Grid Connected Distributed Generation Sources (DGS) Using ETAP
1 Introduction
2 Results
2.1 Power Quality Improvement
2.2 Reactive Power Compensation
2.3 Comparative Analysis of Normal Load Flow and Optimal Power Flow
3 Conclusion
References
Modeling and Simulation for Stability Improvement and Harmonic Analysis of Naghlu Hydro Power Plant
1 Introduction
2 Load Flow Analysis of Naghlu Hydro Power Plant
3 Harmonics Analysis
4 Results and Conclusion
References
Sizing and Optimization of Hybrid Energy Storage System with Renewable Energy Generation Plant
1 Introduction
2 Literature Survey
3 Model Description
4 Methodology
5 Simulation
6 Advantages of Using SAPSO Algorithm for Optimization
7 Why PSO Algorithm is Better?
8 Results
9 Conclusion
References
Evaluation of THD and Voltage Instability for Interconnected Hybrid Solar and Wind Power Generation
1 Introduction
2 Literature Review
3 Flow Chart of Methodology
4 Simulation Model and Results
4.1 Simulation of the Model for Hybrid Solar and Wind Power System
4.2 Evaluation and Reduction of the THD (Total Harmonic Distortion) of the System
5 Conclusion
References
Analysis and Optimization of Stability in Hybrid Power System Using Statcom
1 Introduction
2 Power System Stability
2.1 Rotor Angle Stability
2.2 Frequency Stability
2.3 Voltage Stability
3 System Configuration and Models for Stability Analysis of HPS
3.1 Modeling of SPV System
3.2 Modeling of Wind Energy Conservation System
3.3 Modeling of Statcom (D-STATCOM)
4 Anticipated/Planned Model of the HPS (Wind-PV)
5 Results
6 Conclusion
References
Modeling of Proton Exchange Membrane Fuel Cell
1 Introduction
2 Physical Structure and Operating Principle of PEMFC
3 PEMFC Modeling
4 Simulation Results
5 Conclusion
Appendix
References
Q-LEACH Algorithm for Efficiency and Stability in WSN
1 Introduction
2 Algorithm I: Setup Phase
3 Algorithm II: CH Association Phase
4 Results
5 Conclusion
References
Comparative Analysis of Energy Management Systems in Electric Vehicles
1 Introduction
1.1 Classification of Electric Vehicles
1.2 Components of Electric Vehicles
1.3 Energy Management and Storage System
1.4 EV Charging Technology
1.5 Conclusion
1.6 Challenges for EVs
References
Heuristic-Based Test Solution for 3D System on Chip
1 Introduction
2 Literature Review
3 Motivational Example of Testing 3D SoCs with and Without Hierarchical Cores
4 Problem Formulation and Classification
5 Proposed Solutions
6 Simulation Set up and Results
7 Conclusion and Future Work
References
A Comprehensive Study of Edge Computing and the Impact of Distributed Computing on Industrial Automation
1 Introduction
2 What is Edge Computing?
3 Motivation
4 Opportunities in Edge Computing
5 Cloud Computing
6 Limitations of Cloud Computing
7 Key Attributes of Edge Computing
8 Edge Computing Architecture
9 Case Study of Edge Computing
10 Applications of Edge Computing
11 Cloud versus Fog versus Edge Computing
12 Conclusion and Future Scope
References
Optimizing Approach Towards Fibre Refining and Improved Fibre Quality-Development of Carrier Tissue Paper
1 Introduction
2 Materials and Methodology
2.1 Materials
2.2 Methodology
2.3 Analysis
3 Result and Discussion
4 Conclusion
References
A Study of Non-Newtonian Nanofluid Saturated in a Porous Medium Based on Modified Darcy-Maxwell Model
1 Introduction
2 Mathematical Model
2.1 Basic Solution
2.2 Perturbation Solution
3 Linear Stability Analysis
3.1 Analysis at the Marginal State
4 Non-linear Stability Analysis
4.1 Heat and Mass Transport
5 Results and Discussion
5.1 Linear Stability Analysis
5.2 Non-Linear Stability Analysis
6 Conclusion
References
Double-Diffusive Convection in Darcy Oldroyd-B Type Nanofluid: Linear and Non-linear Approach
1 Introduction
2 Mathematical Model
2.1 Non-dimensional Parameters
2.2 Basic State
2.3 Perturbed State
2.4 Linear Analysis
3 Analysis at the Marginal State
3.1 Stationary Convection
3.2 Oscillatory Convection
4 Non-linear Stability Analysis
4.1 Steady Finite Amplitude Motions
4.2 Heat and Mass Transports
5 Results and Discussion
5.1 Linear Stability Analysis
5.2 Non-linear Stability Analysis
6 Conclusion
6.1 Linear Stability Analysis
6.2 Non-linear Stability Analysis
References
Interpretive Psychotherapy of Text Mining Approaches
1 Introduction
2 Related Work
2.1 Extraction of Information
2.2 Information Recovery
2.3 Natural Language Processing
2.4 Grouping
2.5 Text Reviewing
3 Mining of Text Algorithms List
3.1 K-Nearest Neighbor
3.2 Naive Bayes Classifier (NBC)
3.3 Grouping with K-means
3.4 Approach Using Support Vector Machines Techniques
3.5 Approach Using Judgment Tree
3.6 Comprehensive Linear Models (CLM)
3.7 Neural Networks
3.8 Involvement Rules
3.9 Genetic Algorithms
4 Advanced Methods
4.1 Text Classification
4.2 Sentiment Analysis
4.3 Topic Analysis
4.4 N-grams
4.5 Bag of Words (Bow)
4.6 Term Frequency-Inverse Document Frequency
5 Conclusion
6 Future Work
References
Sarcasm Detection Using SVM
1 Introduction
2 Related Work
2.1 Characteristics of Sarcasm
2.2 Types of Sarcasm
2.3 Negation of Sarcasm
3 Dataset
4 Data Preprocessing
4.1 Removal of URL
4.2 Removal of @user
4.3 Tokenization
4.4 Stemming
5 Feature Engineering
5.1 Extracting Hashtag (#)
5.2 Delta TF-IDF
5.3 Word2Vec
5.4 Pattern-Related
6 Classification Model
7 Result
8 Conclusion
References
Text Summarization in Hindi Language Using TF-IDF
1 Introduction
2 Literature Review
3 Proposed Methodology
4 Text Pre-processing and Summarization
4.1 Pre-processing
4.2 Processing
4.3 TF-IDF Algorithm
5 Result
5.1 Dataset
5.2 Pre-processed Data
5.3 Processed Data/Summary
5.4 Summary Evaluation
5.5 Precision, Recall, and F-Score
6 Conclusion
7 Future Scope
References
Low-Voltage Low-Power Acquisition System for Portable Detection of Approximated Biomedical Signals
1 Introduction
2 Literature Review
3 Mathematical Modelling of ECG Signals
4 Addition of Noise
5 Conclusion
References
Antimagic Labeling and Square Difference Labeling for Trees and Complete Bipartite Graph
1 Introduction
2 Main Result 1
3 Main Result 2
4 Conclusion
References
Edge Irregularity Strength Exists in the Anti-theft Network
1 Introduction
2 Basic Definition of Labeling
2.1 Total K-labeling
2.2 Edge Irregularity Strength
2.3 Complete Tripartite Graph
2.4 Barcode
2.5 Scanner
3 Basic Theorem
3.1 Working of Complete Tripartite Graph in Anti-theft Network
4 Main Theorem
5 Conclusion
References
Prediction of Currency Exchange Rate: Performance Analysis Using ANN-GA and ANN-PSO
1 Introduction
1.1 Organization of the Paper
2 Literature Review
3 Objectives
4 Methodology
5 Preparation of Data
6 Implementation Strategy
6.1 ANN Implementation
6.2 Implementation of ANN-GA
6.3 Pattern Formation
6.4 Implementation of ANN-PSO
6.5 Input Data Set
7 Results Analysis
8 Conclusion and Future Scope
References
Gurmukhi Numerals Recognition Using ANN
1 Introduction
2 Literature Review
3 Objective
4 Methodology and Research Design
5 Data Collection
6 Implementation and Results Analysis
7 Conclusion
References
A Review on Internet of Things in Healthcare Applications
1 Introduction
1.1 IoT in Healthcare
1.2 Cloud Based IoT in Healthcare Services
2 Applications of IoT in Healthcare
3 IoT Growth and Development
3.1 Limitations and Challenges
4 Conclusion
References
Inter-IC Sound (I2S) Interface for Dual Mode Bluetooth Controller
1 Introduction
2 Inter-IC Sound (I2S) Interface
2.1 Dual Mode Bluetooth
3 Proposed Methodology
4 Results
5 Conclusion
References
Design of Low Power Vedic Multiplier Using Adiabatic Techniques
1 Introduction
2 Related Works
3 Methodology
3.1 Vedic Mathematics
3.2 URDHVA—TIRYAGBHYAM Step by Step Procedure for a 4 × 4 Multiplier
3.3 Softwares Used
4 Result and Analysis
5 Conclusion
6 Future Work
References
Digital Technology and Artificial Intelligence in Dentistry: Recent Applications and Imminent Perspectives
1 Prelude to Artificial Intelligence
2 Objectives
3 Review of Literature
4 Applications of AI in Dentistry
4.1 Medical-Aided Diagnosis
4.2 Radiology
4.3 Oral and Maxillofacial Surgery
4.4 Cariology and Endodontics
4.5 Periodontics
4.6 Temporomandibular Joint Disorder
4.7 Orthodontics
4.8 Cancer Related to Head and Neck
4.9 Pain Assessment
4.10 Prosthodontics
4.11 Regenerative Dentistry
4.12 Disease Forecast and Outcome
4.13 Dental Implantology
5 Barriers and Challenges
5.1 Data Acquisition
5.2 Interpretability
5.3 Computing Power
5.4 Class Imbalance
5.5 Data Privacy and Security
5.6 Dataset Shifts and Clinical Applicability
6 Conclusion
References
Atmospheric Weather Fluctuation Prediction Using Machine Learning
1 Introduction
2 Background
2.1 Neural Networks for Weather Forecasting
2.2 Related Works
3 Methodology
3.1 Setup
3.2 Gathering Data
3.3 Preprocessing of Data
3.4 Training Models
4 Results
5 Conclusion
References
LGBM-Based Payment Date Prediction for Effective Financial Statement Management
1 Introduction
2 Literature Survey
3 Problem Statement
4 Machine Learning Workflow
4.1 Planning and Implementation
4.2 Modeling and Result Analysis
5 Conclusion and Future Scope
References
A Regression Approach Towards Climate Forecasting Analysis in India
1 Introduction
2 Literature Review
3 Technical Overview
3.1 Data Preparation
3.2 Linear Regression
3.3 Visualization Relationships
3.4 Metrics
4 Proposed Methodology
5 Results and Discussions
6 Conclusion
References
Rice Leaf Disease Classification Using Transfer Learning
1 Introduction
2 Related Work
3 Proposed Method
3.1 Dataset
3.2 Model Architecture
4 Experimental Result
5 Conclusion
References
Real-Time Sign Language Translator
1 Introduction
2 Literature Review
3 Technical Overview
3.1 OpenCV
3.2 Labeling
3.3 Transfer Learning
3.4 TensorFlow Object Detection API
3.5 MobileNet-SSD
3.6 MobileNet
4 Dataset
5 Proposed Methodology
5.1 Process Flow
5.2 Collecting Images Using Python and OpenCV
5.3 Labeling Images for Object Detection Using Labeling Package
5.4 Training Using Transfer Learning and TensorFlow Object Detection API for Sign Language
5.5 Detection of Signs in Real Time
6 Results and Discussions
7 Conclusion
References
Song Recommendation Using Mood Detection with Xception Model
1 Introduction
2 Literature Survey
2.1 Collaborative Filtering
2.2 Content-Based Filtering
2.3 Hybrid Recommendation Systems
3 Dataset
4 Proposed Methodology
5 Results and Analysis
6 Conclusion
References
Diagnosis of Charging Gun Actuator in the Electric Vehicle (EV)
1 Introduction
2 Why Diagnostics?
3 Interaction Between Supply Station and Vehicle
4 Charging Sequences and Transition of States in Charging
5 Proposed Methodology
6 Testing Environment
7 Design Implementation for Actuator Diagnosis
8 Results
9 Conclusion
References
Voice and Text Based Sentiment Analysis Using Natural Language Processing
1 Introduction
2 Literature Survey
3 Data Collection
4 Model Implementation and Analysis
4.1 Text Based Sentiment Analysis
4.2 Voice-Based Sentiment Analysis
4.3 Designing the User Interface
5 Result and Analysis
6 Conclusion and Future Scope
References
Automated Crowd Size Estimation in Dense Crowd Images—Application in Detecting COVID-19 Guideline Violations
1 Introduction
2 Proposed Framework
3 Result and Discussions
4 Conclusion
References
Analysis of NBTI Impact on Clock Path Duty Cycle Degradation
1 Introduction
2 Previous Work
3 Physical Mechanism of NBTI
4 Asymmetric Aging in Devices
5 Aging Static Timing Analysis
6 Simulation Results
7 Conclusion
References
Classification of Brain Images Using Machine Learning Techniques
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Random Forest
3.2 Why Are We Using Random Forest?
3.3 Random Forest Algorithm Description
3.4 Algorithm
3.5 Inverse Discrete Wavelet Transform (IDWT)
3.6 Results
4 Conclusion
References
Analyze the Performance of Stand-Alone PV with BESS
1 Introduction
2 Literature Survey
3 Model Description
4 Analysis of Solar Irradiance
5 Components Description
6 Results and Discussion
7 Conclusion
References
Efficient LUT-Based Technique to Implement the Logarithm Function in Hardware
1 Introduction
2 Literature Review
3 Logarithm Function as Series Expansion
4 Implementation
5 Conclusion and Future Scope
References
Hand Gesture Recognition Using Convolutional Neural Networks and Computer Vision
1 Introduction
2 Literature Survey
3 Proposed System
3.1 System Design
3.2 System Requirements
3.3 Convolutional Neural Network Architecture
3.4 Dataset
3.5 Implementation
4 Results and Discussion
5 Conclusion
6 Future Work
References
An Effective Parking Management and Slot Detection System
1 Introduction
1.1 Online Parking Reservation
1.2 Payment and Reservation
2 Related Work
3 System Model
3.1 Vacant Spot Detection
3.2 Image Processing
3.3 Assignment and Reservation
3.4 Use Case Diagram of the Proposed System
4 User Scenarios and End Application
5 Result and Conclusion
References
Survey on Natural Language-Based Person Search
1 Introduction
2 Question Answering System
2.1 Components of Question Answering System
3 Dataset
3.1 Comparison of Image Caption Method
4 Visual QA
5 Visual-Semantic Embedding
6 Conclusion
References
Optimization of Cloning in Clock Gating Cells for High-Performance Clock Networks
1 Introduction
2 Previous Work
3 Clock Gating
4 Proposed Methodology
4.1 Technique for Inserting CGCs
4.2 Decloning of CGCs
5 Results
6 Conclusion and Future Scope
References
Fruit Freshness Detection Using Machine Learning
1 Introduction
2 Literature Survey
3 Methodology
4 Implementation
5 Results and Discussion
6 Conclusion
References
Automation in Implementation of Asserting Clock Signals in High-Speed Mixed-Signal Circuits to Reduce TAT
1 Introduction
1.1 Verification of Signals
2 Work Proposed
3 Results
4 Analysis of Results
5 Conclusion
References
Supervised Learning Algorithms for Mobile Price Classification
1 Introduction
2 Related Work
3 Methodology
3.1 Data Collection
3.2 Dimensionality Reduction
3.3 Feature Selection
3.4 Classification
4 Classifier Models
4.1 Naive Bayes Classification
4.2 Support Vector Machine Classification
4.3 Random Forest Classification
4.4 Logistic Regression Classification
4.5 Stochastic Gradient Descent Classification
4.6 K-Nearest Neighbor Classification
4.7 Decision Tree Classification
4.8 Artificial Neural Network
5 Implementation and Results
6 Discussions
7 Conclusion and Future Work
References
Farmuser: An Intelligent Chat-Bot Interface for Agricultural Crop Marketing
1 Introduction
2 Literature Review
3 Technology Stack
4 Proposed Work and Methodology
5 Result Analysis
6 Conclusion
References
Prominent Cancer Risk Detection Using Ensemble Learning
1 Introduction
2 Background Study
3 Proposed Cancer Prediction Model
4 Workflow Diagram of the Cancer Prediction Model
5 Detection of Cancers Using Ensemble Classifiers
6 Results and Discussions
7 Conclusion
References
Portfolio Optimization for US-Based Equity Instruments Using Monte-Carlo Simulation
1 Introduction
2 Literature Review
3 Mathematical Background
3.1 Expected Return
3.2 Portfolio Return Variance
3.3 Portfolio Return Volatility (Standard Deviation)
3.4 Sharpe Ratio
3.5 The Efficient Frontier
4 Methodology
4.1 Annual/Yearly Returns and Risk/Standard Deviation (SD)
4.2 Monte-Carlo Simulation
4.3 Portfolio Optimization Process in Python
5 Observation and Results
6 Conclusion
References
A Smart Farming-Based Recommendation System Using Collaborative Machine Learning and Image Processing
1 Introduction
2 Related Works
3 Background Concepts
3.1 Data Acquisition
3.2 Exploratory Data Analysis
3.3 Classification Algorithms
3.4 Convolution Network Architectures
4 Proposed Work
5 Results and Discussion
6 Conclusion
References
Applications of Artificial Intelligence in Small- and Medium-Sized Enterprises (SMEs)
1 Introduction
2 Review of Literature
2.1 Application of AI in SMEs
3 Challenges of AI Adoption in SMEs
4 Discussion
5 Conclusion
References
Applications of Artificial Intelligence in Software Testing
1 Introduction
2 Review of Literature
2.1 Overview of Machine Learning
2.2 Overview of Deep Learning
3 Overview of Software Testing
4 Applications of AI in Software Testing
5 Conclusion
References
OLFACTRO BRAINIAC: Aid-Kit for Person with Smell Sense Disability
1 Introduction
1.1 Sensors
1.2 Pattern Recognition
1.3 Alert System
2 Literature Survey
3 Proposed Method
4 Prototype and Its Working
4.1 Data Cleaning and Preprocessing Template
4.2 Algorithm
5 Description of Dataset
6 Experimentation and Comparison
6.1 Accuracy of ANN Algorithm
6.2 Comparison on the Basis of Accuracy
7 Applications
8 Future Scope
9 Conclusion
References
A Novel Approach for ECG Compression and Use of Cubic Spline Method for Reconstruction
1 Introduction
1.1 Literature Survey
2 Turning Point Algorithm
3 Amplitude Zone Time EPOC Coding Technique
4 Results
5 Conclusion
References
Design and Implementation of a Mixed Signal Filter for Noise Removal in Raw ECG Signal
1 Introduction
2 Methodology
3 Results
4 Conclusion
References
Machine Learning Application in Primitive Diabetes Prediction—A Case of Ensemble Learning
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Dataset
3.2 Data Preparation
3.3 Classification Algorithms
3.4 Classification Model
4 Conclusion
References
Author Index

Lecture Notes in Networks and Systems 375

Pradeep Kumar Mallick, Akash Kumar Bhoi, Paolo Barsocchi, Victor Hugo C. de Albuquerque (Editors)

Cognitive Informatics and Soft Computing Proceeding of CISC 2021

Lecture Notes in Networks and Systems Volume 375

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

More information about this series at https://link.springer.com/bookseries/15179

Pradeep Kumar Mallick · Akash Kumar Bhoi · Paolo Barsocchi · Victor Hugo C. de Albuquerque Editors

Cognitive Informatics and Soft Computing Proceeding of CISC 2021

Editors Pradeep Kumar Mallick School of Computer Engineering KIIT Deemed to be University Bhubaneswar, India Paolo Barsocchi Wireless Networks Research Laboratory National Research Council Institute of Information Science and Technologies Pisa, Italy

Akash Kumar Bhoi KIET Group of Institutions Delhi-NCR Ghaziabad, India Directorate of Research Sikkim Manipal University Gangtok, Sikkim, India Victor Hugo C. de Albuquerque Department of Teleinformatics Engineering Federal University of Ceará Fortaleza/CE, Brazil

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-981-16-8762-4 ISBN 978-981-16-8763-1 (eBook) https://doi.org/10.1007/978-981-16-8763-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

Cognitive Informatics is a cross-disciplinary study of cognition and information-processing technology that explores the structures and processes of human knowledge processing in various engineering applications. Cognitive technology is being used extensively to address prevalent information-processing problems in domains such as artificial intelligence, data science, cognitive science, the Internet of Things, philosophy, and the life sciences, drawing on the natural intelligence of the human brain. Cognitive Informatics offers a comprehensive collection of key hypotheses and modern mathematical models for solving real-time information-processing challenges. Soft computing is an emerging area of computer science that tackles real-time problems which are indistinct and unpredictable in nature, using a collection of robust and computationally efficient approaches to yield a low-cost, near-optimal outcome. Soft computing encompasses a variety of techniques, including neural networks, evolutionary computing, swarm intelligence, fuzzy computing, chaos models, heuristic models, and probabilistic reasoning. Soft-computing models are used extensively across interdisciplinary domains for the effective handling of such problems. This book comprises selected papers from the 4th International Conference on Cognitive Informatics & Soft Computing (CISC-2021), held at Balasore College of Engineering & Technology, Balasore, Odisha, India, from 21st to 22nd August 2021. We would like to thank the authors for their active participation during CISC-2021. We would also like to extend our sincere gratitude to the reviewers, technical committee members, and professionals from national and international forums for their great support during the conference. Bhubaneswar, India; Ghaziabad, India; Pisa, Italy; Fortaleza/CE, Brazil

Dr. Pradeep Kumar Mallick Dr. Akash Kumar Bhoi Dr. Paolo Barsocchi Dr. Victor Hugo C. de Albuquerque


Contents

How Servant Leadership is Effective for Employee Performance with the Use of t-Test, Algorithm and ANOVA (Divya Jyoti Thakur and Pooja Verma), p. 1
Interpreting Skilled and Unskilled Tasks Using EEG Signals (Neeraj Sharma, Hardeep Singh Ryait, and Sudhir Sharma), p. 15
Data Transmission in Clouds Using Heed and Energy-Efficient Routing Algorithm (Amandeep, Tarun Saxena, and Yogesh Kumar), p. 27
Modeling and Optimization of Reaction Parameters for Glycerol Production Using Response Surface Methodology (Tanuja Srivastava, D. C. Saxena, and Renu Sharma), p. 39
Allocation of Different Types of DG Sources in a Time-Varying Radial Distribution Networks (Divesh Kumar and Satish Kansal), p. 49
FSO at Moderate Atmospheric Turbulence Using 16 QAM (Manpreet Singh and Amandeep Singh Sappal), p. 61
Performance Enhancement of Planar Antenna for Wireless Applications (Sushil Kakkar and Shweta Rani), p. 69
Invariant Feature-Based Dynamic Scene Classification Using the Optimized Convolution Neural Network (Surender Singh), p. 77
Improvement of Solar Panel Efficiency with Automatic Cleaning Robot (Zabiullah Haidary and Sarbjeet Kaur), p. 93
Performance Analysis of Grid Connected Distributed Generation Sources (DGS) Using ETAP (Alijan Ranjbar, Sunny Vig, and KamalKant Sharma), p. 105
Modeling and Simulation for Stability Improvement and Harmonic Analysis of Naghlu Hydro Power Plant (Samiullah Sherzay and Rehana Perveen), p. 115
Sizing and Optimization of Hybrid Energy Storage System with Renewable Energy Generation Plant (Neha Bharti, Sachin Kumar, and Paras Chawla), p. 127
Evaluation of THD and Voltage Instability for Interconnected Hybrid Solar and Wind Power Generation (Teena Thakur, Harvinder Singh, Birinderjit Singh Kalyan, and Himani Goyal Sharma), p. 139
Analysis and Optimization of Stability in Hybrid Power System Using Statcom (Ankush Lath, Sarbjeet Kaur, and Surbhi Gupta), p. 149
Modeling of Proton Exchange Membrane Fuel Cell (Reena Yadav, Birinderjit Kalyan, Sunny Vig, and Himani Goyal Sharma), p. 163
Q-LEACH Algorithm for Efficiency and Stability in WSN (Birinderjit Singh Kalyan), p. 173
Comparative Analysis of Energy Management Systems in Electric Vehicles (Sudhir Kumar Sharma and Manpreet Singh Manna), p. 181
Heuristic-Based Test Solution for 3D System on Chip (Harpreet Vohra, Manpreet Singh Manna, and Inderpreet Kaur), p. 201
A Comprehensive Study of Edge Computing and the Impact of Distributed Computing on Industrial Automation (Akansha Singh, Atul Kumar, and Bhavesh Kumar Chauhan), p. 215
Optimizing Approach Towards Fibre Refining and Improved Fibre Quality-Development of Carrier Tissue Paper (Sanjeev Kumar Jain, Dharam Dutt, R. K. Jain, and A. P. Garg), p. 227
A Study of Non-Newtonian Nanofluid Saturated in a Porous Medium Based on Modified Darcy-Maxwell Model (Reema Singh, Vipin Kumar Tyagi, and Jaimala Bishnoi), p. 241
Double-Diffusive Convection in Darcy Oldroyd-B Type Nanofluid: Linear and Non-linear Approach (Devendra Kumar, Vipin Kumar Tyagi, and Reema Singh), p. 267
Interpretive Psychotherapy of Text Mining Approaches (Santosh Kumar Dwivedi, Manpreet Singh Manna, and Rajeev Tripathi), p. 297
Sarcasm Detection Using SVM (Atul Kumar, Pooja Agrawal, Ratnesh Kumar, Sahil Verma, and Divya Shukla), p. 309
Text Summarization in Hindi Language Using TF-IDF (Atul Kumar, Vinodani Katiyar, and Bhavesh Kumar Chauhan), p. 319
Low-Voltage Low-Power Acquisition System for Portable Detection of Approximated Biomedical Signals (Indu Prabha Singh, Manpreet Singh Manna, Vibha Srivastava, and Ananya Pandey), p. 333
Antimagic Labeling and Square Difference Labeling for Trees and Complete Bipartite Graph (S. Sivakumar, S. Vidyanandini, E. Sreedevi, Soumya Ranjan Nayak, and Akash Kumar Bhoi), p. 345
Edge Irregularity Strength Exists in the Anti-theft Network (S. Sivakumar, S. Vidyanandini, E. Sreedevi, Soumya Ranjan Nayak, and Akash Kumar Bhoi), p. 355
Prediction of Currency Exchange Rate: Performance Analysis Using ANN-GA and ANN-PSO (Muskaan, Pradeepta Kumar Sarangi, Sunny Singh, Soumya Ranjan Nayak, and Akash Kumar Bhoi), p. 363
Gurmukhi Numerals Recognition Using ANN (Pradeepta Kumar Sarangi, Ashok Kumar Sahoo, Gagandeep Kaur, Soumya Ranjan Nayak, and Akash Kumar Bhoi), p. 377
A Review on Internet of Things in Healthcare Applications (Abhinav Kislay, Prabhishek Singh, Achyut Shankar, Soumya Ranjan Nayak, and Akash Kumar Bhoi), p. 387
Inter-IC Sound (I2S) Interface for Dual Mode Bluetooth Controller (T. Prajwal and K. B. Sowmya), p. 395
Design of Low Power Vedic Multiplier Using Adiabatic Techniques (S. Giridaran, Prithvik Adithya Ravindran, G. Duruvan Raj, and M.
Janarthanan Digital Technology and Artificial Intelligence in Dentistry: Recent Applications and Imminent Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 Anjana Raut, Swati Samantaray, and Rupsa Rani Sahu Atmospheric Weather Fluctuation Prediction Using Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431 Srishty Singh Chandrayan, Khushal Singh, and Akash Kumar Bhoi


LGBM-Based Payment Date Prediction for Effective Financial Statement Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 Laharika Tutica, K. S. K. Vineel, and Pradeep Kumar Mallick A Regression Approach Towards Climate Forecasting Analysis in India . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 Yashi Mishra, Sushruta Mishra, and Pradeep Kumar Mallick Rice Leaf Disease Classification Using Transfer Learning . . . . . . . . . . . . . 467 Khushbu Sinha, Disha Ghoshal, and Nilotpal Bhunia Real-Time Sign Language Translator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477 Khushbu Sinha, Annie Olivia Miranda, and Sushruta Mishra Song Recommendation Using Mood Detection with Xception Model . . . . 491 Deep Mukherjee, Ishika Raj, and Sushruta Mishra Diagnosis of Charging Gun Actuator in the Electric Vehicle (EV) . . . . . . 503 H. R. Yoganand and K. B. Sowmya Voice and Text Based Sentiment Analysis Using Natural Language Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517 Shreya Anand and Subhra Rani Patra Automated Crowd Size Estimation in Dense Crowd Images—Application in Detecting COVID-19 Guideline Violations . . . . . 531 Nakkala Ganesh, Vibhor Kedawat, and Jeetashree Aparajeet Analysis of NBTI Impact on Clock Path Duty Cycle Degradation . . . . . . 539 Naik Kranti Ramkrishna and Abhay Deshpande Classification of Brain Images Using Machine Learning Techniques . . . . 551 Annapareddy V. N. Reddy, Reva Devi Gundreddy, Moyya Meghana, Kothuru Sai Mounika, and Varikuti Anusha Analyze the Performance of Stand-Alone PV with BESS . . . . . . . . . . . . . . 563 Rahul Manhas, Harpreet Kaur Channi, and Sarbjeet Kaur Efficient LUT-Based Technique to Implement the Logarithm Function in Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . 573 Siddarth Sai Amruth Yetikuri, K. B. Sowmya, Timothy Caputo, and Vishal Abrol Hand Gesture Recognition Using Convolutional Neural Networks and Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583 V. V. Krishna Reddy, K. N. V. S. Bhuvana, K. UmaHarikka, D. Sai Teja, and J. Suguna Kumari An Effective Parking Management and Slot Detection System . . . . . . . . . 595 Saurabh Chandra Pandey, Vinay Kumar Yadav, Rajesh Singh Bohra, and Upendra Kumar Tiwari


Survey on Natural Language-Based Person Search . . . . . . . . . . . . . . . . . . . 609 Snehal Sarangi and Jitendra Kumar Rout Optimization of Cloning in Clock Gating Cells for High-Performance Clock Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619 Mohammed Vazeer Ahmed and B. S. Kariyappa Fruit Freshness Detection Using Machine Learning . . . . . . . . . . . . . . . . . . . 633 K. Anupriya and Gopu Mruudula Sri Automation in Implementation of Asserting Clock Signals in High-Speed Mixed-Signal Circuits to Reduce TAT . . . . . . . . . . . . . . . . . . 643 Anagha Umashankar and B. S. Kariyappa Supervised Learning Algorithms for Mobile Price Classification . . . . . . . 653 Ananya Dutta, Pradeep Kumar Mallick, Niharika Mohanty, and Sriman Srichandan Farmuser: An Intelligent Chat-Bot Interface for Agricultural Crop Marketing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667 P. V. S. Meghana, Debasmita Sarkar, Rajarshi Chowdhury, and Abhigyan Ray Prominent Cancer Risk Detection Using Ensemble Learning . . . . . . . . . . 677 Sanya Raghuwanshi, Manaswini Singh, Srestha Rath, and Sushruta Mishra Portfolio Optimization for US-Based Equity Instruments Using Monte-Carlo Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691 Ayan Mukherjee, Ashish Kumar Singh, Pradeep Kumar Mallick, and Sasmita Rani Samanta A Smart Farming-Based Recommendation System Using Collaborative Machine Learning and Image Processing . . . . . . . . . . . . . . . 703 Soham Chakraborty and Sushruta Mishra Applications of Artificial Intelligence in Small- and Medium-Sized Enterprises (SMEs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717 Samarjeet Borah, Chukwuma Kama, Sandip Rakshit, and Narasimha Rao Vajjhala Applications of Artificial Intelligence in Software Testing . . . . . . . . . . . . . . 
727 Samarjeet Borah, King Chime Aliliele, Sandip Rakshit, and Narasimha Rao Vajjhala OLFACTRO BRAINIAC: Aid-Kit for Person with Smell Sense Disability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737 Avinash Kumar Sharma and Kuldeep Kumar Yogi


A Novel Approach for ECG Compression and Use of Cubic Spline Method for Reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759 Sudeshna Baliarsingh, Rashmi Rekha Sahoo, and Mihir Narayan Mohanty Design and Implementation of a Mixed Signal Filter for Noise Removal in Raw ECG Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771 Mohan Debarchan Mohanty, Priyabrata Pattnayak, and Mihir Narayan Mohanty Machine Learning Application in Primitive Diabetes Prediction—A Case of Ensemble Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 783 Narayan Patra, Jitendra Pramanik, Abhaya Kumar Samal, and Subhendu Kumar Pani Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791

Editors and Contributors

About the Editors

Dr. Pradeep Kumar Mallick is currently working as Senior Associate Professor in the School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Odisha, India. He has also served as Professor and Head, Department of Computer Science and Engineering, Vignana Bharathi Institute of Technology, Hyderabad. He completed his Post-Doctoral Fellowship (PDF) at Kongju National University, South Korea, his Ph.D. at Siksha 'O' Anusandhan University, his M.Tech. (CSE) at Biju Patnaik University of Technology (BPUT), and his MCA at Fakir Mohan University, Balasore, India. Besides academics, he is also involved in various administrative activities: Member of the Board of Studies of C. V. Raman Global University, Bhubaneswar, Member of the Doctoral Research Evaluation Committee, the Admission Committee, etc. His areas of research include Data Mining, Image Processing, Soft Computing, and Machine Learning. He is an editorial member of the Korean Convergence Society for SMB. He has published 19 edited books, one textbook and more than 100 research papers in national and international journals and conference proceedings. He also serves as Guest Editor for special issues of journals from publishers such as Springer Nature and Inderscience.

Akash Kumar Bhoi [B.Tech, M.Tech, Ph.D.] is currently associated with the KIET Group of Institutions, India as Adjunct Faculty and with the Directorate of Research, Sikkim Manipal University as Adjunct Research Faculty. He has been appointed to the honorary title of "Adjunct Fellow" at the Institute for Sustainable Industries & Liveable Cities (ISILC), Victoria University, Melbourne, Australia for the period from 1 August 2021 to 31 July 2022. He is also working as a Research Associate at the Wireless Networks (WN) Research Laboratory, Institute of Information Science and Technologies, National Research Council (ISTI-CNR), Pisa, Italy. He was the University Ph.D.
Course Coordinator for "Research & Publication Ethics (RPE)" at SMU. He is a former Assistant Professor (SG) of Sikkim Manipal Institute of Technology, where he served for about 10 years. He is a member of IEEE, ISEIS, and IAENG, an associate member of IEI, UACEE,


and an editorial board member and reviewer of Indian and international journals. He is also a regular reviewer for reputed journals, namely those of IEEE, Springer, Elsevier, Taylor and Francis, Inderscience, etc. His research areas are Biomedical Technologies, the Internet of Things, Computational Intelligence, Antennas, and Renewable Energy. He has published several papers in national and international journals and conferences. He has 130+ documents registered in the Scopus database as of 2021. He has also served on numerous organizing panels for international conferences and workshops. He is currently editing several books with Springer Nature, Elsevier, and Routledge & CRC Press, and also serves as Guest Editor for special issues of journals from publishers such as Springer Nature and Inderscience.

Paolo Barsocchi [M.Sc.'03, Ph.D.'07] is a senior researcher at the Information Science and Technologies Institute (ISTI) of the National Research Council (CNR) in Pisa, Italy. In 2008 he was a visiting researcher at the Universitat Autònoma de Barcelona (Barcelona, ES). Since 2017 he has been Head of the Wireless Networks Research Laboratory. Dr. Paolo Barsocchi is included in the World's Top 2% Scientists according to the Stanford University List in 2020 and 2021. He has co-authored more than 150 papers published in peer-reviewed international journals and conference proceedings. He has been a reviewer for several international journals and conferences, a member of several international program committees of conferences and workshops, and a member of the editorial board of several journals. From 2012 to 2016 he was one of the founders and chairs of the EvAAL Competition, which aims at establishing benchmarks and evaluation metrics for comparing Ambient Assisted Living solutions.
His research interests are in the areas of the Internet of Things (IoT), wireless sensor networks, cyber-physical systems, machine learning and data analysis techniques, smart environments, ambient assisted living, activity recognition and indoor localization.

Victor Hugo C. de Albuquerque [M'17, SM'19] is a Professor and senior researcher at the Department of Teleinformatics Engineering (DETI)/Graduate Program in Teleinformatics Engineering (PPGETI) at the Federal University of Ceará (UFC), Brazil. He earned a Ph.D. in Mechanical Engineering from the Federal University of Paraíba (UFPB, 2010) and an M.Sc. in Teleinformatics Engineering from the PPGETI/UFC (UFC, 2007). He completed a BSE in Mechatronics Engineering at the Federal Center of Technological Education of Ceará (CEFETCE, 2006). He specializes in Image Data Science, IoT, Machine/Deep Learning, Pattern Recognition, Automation and Control, and Robotics.

Contributors

Vishal Abrol Analog Devices Inc, Bengaluru, Karnataka, India Pooja Agrawal SRMCEM, Lucknow, India


Mohammed Vazeer Ahmed Department of ECE, RV College of Engineering, Bengaluru, India King Chime Aliliele American University of Nigeria, Yola, Nigeria Amandeep CSE Department, BGIET, Sangrur, India Shreya Anand Vellore Institute of Technology, Chennai, India K. Anupriya Department of Information Technology, Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Varikuti Anusha Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Jeetashree Aparajeet SENSE, VIT-Chennai, Chennai, Tamil Nadu, India Sudeshna Baliarsingh Department of ECE, Raajdhani Engineering College, Bhubaneswar, Odisha, India Neha Bharti Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, India Akash Kumar Bhoi KIET Group of Institutions, Delhi-NCR, Ghaziabad, India; Directorate of Research, Sikkim Manipal University, Gangtok, Sikkim, India Nilotpal Bhunia School of Computer Engineering, KIIT (Deemed to be University), Bhubaneswar, India K. N. V. S. Bhuvana Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Jaimala Bishnoi Department of Mathematics, Chaudhry Charan Singh University, Meerut, Uttar Pradesh, India Rajesh Singh Bohra Department of Computer Science and Engineering, ABES Institute of Technology, Affiliated to DR. APJ Abdul Kalam Technical UniversityLucknow, Ghaziabad, Uttar Pradesh, India Samarjeet Borah Sikkim Manipal Institute of Technology, Majhitar, India Timothy Caputo Analog Devices Inc, Bengaluru, Karnataka, India Soham Chakraborty School of Computer Science Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, Odisha, India Srishty Singh Chandrayan School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Bhubaneswar, Odisha, India Harpreet Kaur Channi Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab, India


Bhavesh Kumar Chauhan Department of Computer Science and Engineering, Shri Ramswaroop Memorial Group of Professional Colleges, AKTU, Lucknow, India Paras Chawla Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, India Rajarshi Chowdhury School of Electronics Engineering, Deemed To Be University, Kalinga Institute of Industrial Technology, Bhubaneswar, India Abhay Deshpande Department of ECE, RV College of Engineering, Bengaluru, India Dharam Dutt Department of Paper Technology, Indian Institute of Technology Roorkee, Uttar Pradesh, Roorkee, India Ananya Dutta School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed To Be University, Bhubaneswar, India Santosh Kumar Dwivedi SRMGPC, Lucknow, India Nakkala Ganesh SENSE, VIT-Chennai, Chennai, Tamil Nadu, India A. P. Garg Shobhit Institute of Engineering and Technology (Deemed To Be University), Meerut, Uttar Pradesh, India Disha Ghoshal School of Computer Engineering, KIIT (Deemed to be University), Bhubaneswar, India S. Giridaran Department of Electronics and Communication Engineering, SRM Institute of Science & Technology, Chennai, India Reva Devi Gundreddy Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Surbhi Gupta Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, Punjab, India Zabiullah Haidary Department of Electrical Engineering, Chandigarh University, Mohali, Punjab, India R. K. Jain Shobhit Institute of Engineering and Technology (Deemed To Be University), Meerut, Uttar Pradesh, India Sanjeev Kumar Jain Shobhit Institute of Engineering and Technology (Deemed To Be University), Meerut, Uttar Pradesh, India M. 
Janarthanan Department of Electronics and Communication Engineering, SRM Institute of Science & Technology, Chennai, India Sushil Kakkar ECE Department, Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India Birinderjit Singh Kalyan Electrical Engineering Department, Chandigarh University, Mohali, Punjab, India


Chukwuma Kama American University of Nigeria, Yola, Nigeria Satish Kansal Department of Electrical Engineering, BHSBIET, Lehragaga, India B. S. Kariyappa Department of Electronics and Communication, RV College of Engineering, Bangalore, India Vinodani Katiyar DSMRU, Lucknow, India Gagandeep Kaur Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India Inderpreet Kaur GREAT Alliance Foundation, New Delhi, India Sarbjeet Kaur Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, Punjab, India Vibhor Kedawat SENSE, VIT-Chennai, Chennai, Tamil Nadu, India Abhinav Kislay Amity School of Engineering & Technology, Amity University Uttar Pradesh, Noida, India V. V. Krishna Reddy Department of Information Technology, Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Atul Kumar Department of Computer Science and Engineering, Shri Ramswaroop Memorial Group of Professional Colleges, AKTU, Lucknow, India Devendra Kumar SBAS, Shobhit Institute of Engineering & Technology (Deemed to be University), Meerut, Uttar Pradesh, India Divesh Kumar Department of Electrical Engineering, BGIET, Sangrur, India Ratnesh Kumar SRMCEM, Lucknow, India Sachin Kumar Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, India Yogesh Kumar CSE Department, BGIET, Sangrur, India Ankush Lath Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, Punjab, India Pradeep Kumar Mallick School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Deemed To Be University, Bhubaneswar, India Rahul Manhas Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab, India Manpreet Singh Manna Department of Electrical and Instrumentation Engineering, Sant Longowal Institute of Engineering and Technology, Longowal, 
India Moyya Meghana Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India


P. V. S. Meghana School of Electronics Engineering, Deemed To Be University, Kalinga Institute of Industrial Technology, Bhubaneswar, India Annie Olivia Miranda School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India Sushruta Mishra School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Deemed to be University, Bhubaneswar, Odisha, India Yashi Mishra School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India Mihir Narayan Mohanty Department of ECE, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India Mohan Debarchan Mohanty BPUT, Bhubaneswar, Odisha, India Niharika Mohanty Department of CSE, Balasore College of Engineering and Technology, Balasore, India Kothuru Sai Mounika Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Ayan Mukherjee School of Computer Science & Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, India Deep Mukherjee School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, Odisha, India Muskaan Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India Soumya Ranjan Nayak Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India Ananya Pandey SRMGPC, Lucknow, India Saurabh Chandra Pandey Department of Computer Science and Engineering, ABES Institute of Technology, Affiliated to DR. 
APJ Abdul Kalam Technical University-Lucknow, Ghaziabad, Uttar Pradesh, India Subhendu Kumar Pani Krupajal Engineering College, Bhubaneswar, Odisha, India Narayan Patra Department of Computer Science and Engineering, ITER, SoA Deemed to be University, Bhubaneswar, India Subhra Rani Patra Vellore Institute of Technology, Chennai, India Priyabrata Pattnayak ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India Rehana Perveen Department of Electrical Engineering, Chandigarh University, Mohali, India


T. Prajwal Department of ECE, RV College of Engineering, Bengaluru, India Jitendra Pramanik Centurion University of Technology and Management, Bhubaneswar, Odisha, India Sanya Raghuwanshi School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, Odisha, India G. Duruvan Raj Department of Electronics and Communication Engineering, SRM Institute of Science & Technology, Chennai, India Ishika Raj School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, Odisha, India Sandip Rakshit American University of Nigeria, Yola, Nigeria Naik Kranti Ramkrishna Department of ECE, RV College of Engineering, Bengaluru, India Shweta Rani ECE Department, Giani Zail Singh Campus College of Engineering and Technology, MRSPTU, Bathinda, Punjab, India Alijan Ranjbar M-Tech Research Scholar, Department of Electrical Engineering, Chandigarh University, Mohali, Punjab, India Srestha Rath School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, Odisha, India Anjana Raut Kalinga Institute of Dental Sciences, Bhubaneswar, India Prithvik Adithya Ravindran Department of Electronics and Communication Engineering, SRM Institute of Science & Technology, Chennai, India Abhigyan Ray School of Electronics Engineering, Deemed To Be University, Kalinga Institute of Industrial Technology, Bhubaneswar, India Annapareddy V. N. 
Reddy Department of Information Technology, Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Jitendra Kumar Rout School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India Hardeep Singh Ryait IKGPTU, Jalandhar, Punjab, India Ashok Kumar Sahoo Graphic Era Hill University, Dehradun, India Rashmi Rekha Sahoo Department of I&E, College of Engineering and Technology, Bhubaneswar, Odisha, India Rupsa Rani Sahu Kalinga Institute of Dental Sciences, Bhubaneswar, India D. Sai Teja Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India


Abhaya Kumar Samal Department of CSE, Trident Academy of Technology, Bhubaneswar, Odisha, India Sasmita Rani Samanta Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, India Swati Samantaray School of Humanities, Kalinga Institute of Industrial Technology, Bhubaneswar, India Amandeep Singh Sappal ECE Department, Punjabi University, Patiala, Punjab, India Pradeepta Kumar Sarangi Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India Snehal Sarangi School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India Debasmita Sarkar School of Electronics Engineering, Deemed To Be University, Kalinga Institute of Industrial Technology, Bhubaneswar, India D. C. Saxena Sant Longowal Institute of Engineering and Technology, Sangrur, Punjab, India Tarun Saxena CSE Department, IIIT, Nagpur, India Achyut Shankar Amity School of Engineering & Technology, Amity University Uttar Pradesh, Noida, India Avinash Kumar Sharma Department of CSE, ABES Institute of Technology, Ghaziabad, Uttar Pradesh, India Himani Goyal Sharma Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab, India KamalKant Sharma Associate Professor, Department of Electrical Engineering, Chandigarh University, Mohali, Punjab, India Neeraj Sharma Department of Electronics and Communication Engineering, BBSBEC, Fatehgarh Sahib, Punjab, India Renu Sharma Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India Sudhir Sharma Department of Electrical Engineering, DAVIET, Jalandhar, Punjab, India Sudhir Kumar Sharma Department of Electrical and Instrumentation Engineering, SLIET Longowal, Sangrur, Punjab, India Samiullah Sherzay Department of Electrical Engineering, Chandigarh University, Mohali, India Divya Shukla SRMCEM, Lucknow, India


Akansha Singh Department of Computer Science and Engineering, Shri Ramswaroop Memorial Group of Professional Colleges, AKTU, Lucknow, India Ashish Kumar Singh School of Computer Science & Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, India Harvinder Singh Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab, India Indu Prabha Singh SRMGPC, Lucknow, India Khushal Singh School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Bhubaneswar, Odisha, India Manaswini Singh School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, Odisha, India Manpreet Singh ECE Department, Bhi Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India Prabhishek Singh Amity School of Engineering & Technology, Amity University Uttar Pradesh, Noida, India Reema Singh Department of Mathematics, Chaudhry Charan Singh University, Meerut, Uttar Pradesh, India Sunny Singh Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India Khushbu Sinha School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India S. Sivakumar Department of Computer Applications, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India K. B. Sowmya Department of Electronics and Communication, RV College of Engineering®, Bengaluru, India E. Sreedevi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, K L University, Guntur, India Gopu Mruudula Sri Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Sriman Srichandan Department of CSE, Balasore College of Engineering and Technology, Balasore, India Tanuja Srivastava Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India Vibha Srivastava SRMGPC, Lucknow, India


J. Suguna Kumari Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Surender Singh BGIET, Sangrur, India Divya Jyoti Thakur School of Business Management and Commerce, IEC University, Baddi, Himachal Pradesh, India Teena Thakur Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab, India Upendra Kumar Tiwari Department of Computer Science and Engineering, ABES Institute of Technology, Affiliated to DR. APJ Abdul Kalam Technical University-Lucknow, Ghaziabad, Uttar Pradesh, India Rajeev Tripathi SRMGPC, Lucknow, India Laharika Tutica School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha, India Vipin Kumar Tyagi SBAS, Shobhit Institute of Engineering and Technology (Deemed to be University), Meerut, Uttar Pradesh, India K. UmaHarikka Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India Anagha Umashankar Department of Electronics and Communication, RV College of Engineering, Bangalore, India Narasimha Rao Vajjhala University of New York Tirana, Tirana, Albania Pooja Verma Faculty of Management Sciences and Liberal Arts, Shoolini University, Solan, Himachal Pradesh, India Sahil Verma SRMCEM, Lucknow, India S. Vidyanandini Department of Mathematics, SRM Institute of Science and Technology, Kattankulathur, India Sunny Vig Department of Electrical Engineering, Chandigarh University, Mohali, Punjab, India K. S. K. Vineel School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha, India Harpreet Vohra Electronics and Communication Engineering Department, Thapar Institute of Engineering and Technology, Patiala, India Reena Yadav Electrical Engineering Department, Chandigarh University, Mohali, Punjab, India Vinay Kumar Yadav Department of Computer Science and Engineering, ABES Institute of Technology, Affiliated to DR. APJ Abdul Kalam Technical UniversityLucknow, Ghaziabad, Uttar Pradesh, India


Siddarth Sai Amruth Yetikuri Department of Electronics and Communication, RV College of Engineering®, Bengaluru, India H. R. Yoganand Department of ECE, RV College of Engineering, Bengaluru, India Kuldeep Kumar Yogi Department of CS, Banasthali Vidyapith, Jaipur, Rajsthan, India

How Servant Leadership is Effective for Employee Performance with the Use of t-Test, Algorithm and ANOVA

Divya Jyoti Thakur and Pooja Verma

Abstract Effective leaders are the key to the success of any organization, and this is even truer when it comes to improving the performance of employees. Bearing this in mind, this study investigates the role of leadership style on employee performance in the pharmaceutical sector. It specifically covers the least explored leadership style, namely servant leadership. Moreover, this study also investigates how demographic factors such as the age, gender, education and experience of a leader are associated with his or her leadership style. To conduct this study, 90 employees of nine pharmaceutical companies were contacted, and their responses were measured with a self-designed questionnaire. Employee performance has been studied in the context of organizational structure, environment, skills, attitude, knowledge, etc. The collected data was analyzed using correlation, regression, the t-test, ANOVA and an algorithm with a flowchart. The results of the t-test and ANOVA reveal a difference in the perception of servant leadership style on the basis of age, experience and education, but no significant difference was found on the basis of the gender of respondents. The Pearson coefficient of correlation revealed that servant leadership style has a positive and significant relationship with all the dimensions of employee performance, and the regression results show that it has a positive and significant impact on all of these dimensions. Limitations of the study and implications for further research are highlighted.

Keywords Servant leadership · Employee performance · Organization structure · Work environment · Algorithm

D. J. Thakur (B) School of Business Management and Commerce, IEC University, Baddi, Himachal Pradesh, India P. Verma Faculty of Management Sciences and Liberal Arts, Shoolini University, Solan, Himachal Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_1


1 Introduction

Leadership is a process whereby an individual influences a group of individuals with the purpose of directing them toward a common goal [1, 2]. In other words, leadership is the application of a leading strategy to offer inspiring motives and to augment employees' potential for growth and development [3]. It involves persuasion and explanation besides the ability to identify, affirm and renew the values of the subordinates. An efficient leader has a duty to guide and contribute to the knowledge of the employees in order to lead them toward effective and efficient performance. Glantz [4] emphasized the need of finding and analyzing leadership style. It plays a great role in enhancing or retarding the interest and commitment of employees [5]. Leadership styles can either motivate or discourage employees, which in turn can increase or decrease their level of performance [6]. Motivation, in-role and extra-role performance, and efficiency in resource mobilization, allocation and utilization are other areas greatly influenced by leadership style. The dissatisfaction of individuals with leadership styles, reflected in low morale, high turnover rates, frequent complaints, strikes, low job performance, abuse of office, lack of initiative, corruption and low commitment, is among the negative effects highlighted in studies by Refs. [1, 2, 7–10].

Servant leadership is a style with strong altruistic and ethical overtones that requires leaders to be attentive and empathetic to the needs of their subordinates. This leadership style was conceptualized by Greenleaf [11]. With the development of this concept, the focus of leadership has shifted from the leader's charisma to others. The leader is expected to serve others while guiding and listening to them: "the servant leader operates on an assumption that 'I am the leader, therefore I serve' rather than 'I am the leader, therefore I lead'" [12].
Van Dierendonck and Patterson [13] also emphasized that the primary motive of a servant leader is to serve their followers. Thus, "to serve" and "to lead" are important dimensions of servant leadership [14]. There is a deep connection between servant leadership and compassionate love, which results in the modesty, gratitude, tolerance and selflessness of a servant leader [13]. Leadership style has a strong influence on the performance of employees. Employee performance includes knowledge, skill, attitude toward work, task completion and other parameters. Every organization follows a particular leadership style that enhances the performance of employees. The various problems faced by an organization can be strategically resolved by using an appropriate leadership style. In light of this fact, this research has been conducted to investigate the role of servant leadership style in employee performance.

How Servant Leadership is Effective for Employee Performance …


2 Review of Literature

Sihombing et al. [15] conducted a study to investigate the role of servant leadership style on rewards and organizational culture, and further to analyze the relevance for employee performance. The study was conducted on bank employees of Jakarta. The data for this quantitative research was analyzed using generalized structure component analysis (GSCA). The results showed that servant leadership style has a positive effect on rewards and organizational culture, but does not significantly affect employee performance. Rewards had a significant influence on organizational culture and employee performance. The significant influence of organizational culture on employee performance was also highlighted in this study. Girei [16] investigated 186 employees of a packaged drinking water company in Nigeria to understand the relationship of various leadership styles with employee performance. Four leadership styles, i.e. servant leadership, transformational leadership, transactional leadership and laissez faire leadership, were covered in this study. Standardized tools were adapted to measure all the constructs of the study. Correlation and regression analysis were used to analyze the data, and findings revealed that three leadership styles (servant, transformational and transactional) positively influence the performance of employees. Laissez faire leadership style was not found to influence the performance of employees significantly. Hussain and Ali [17] found that the lack of servant leadership has a negative effect on overall job performance. Chiniara and Bentein [18] revealed that managers practicing servant leadership style provide autonomy and self-sufficiency, which further leads to improvement in task performance. The influence of servant leadership style on job satisfaction, employee retention, productivity and sales has also been reported [5].
In contrast, Lisbijanto and Budiyanto [8] revealed that servant leadership does not significantly impact organization performance. The results of the study by Chiniara and Bentein [18] have also highlighted the positive influence of servant leadership style on employee trust and organizational commitment. Several other studies have found a positive impact of servant leadership on individual performance [19], team performance [20], firm performance [21] and financial performance [22]. Ruschman [22] conducted research on this less researched style of leadership, i.e. the servant leadership style. The researcher made a successful attempt to understand the impact of servant leadership style on employees' performance, studied in the context of task and citizenship behavior. The study investigated the mediating role of trust (cognitive and affective) in the relationship between servant leadership style and employees' performance. For this study, an equal number of 233 supervisors and their subordinates were taken. The findings revealed that servant leadership strongly and positively predicted affective trust and both components of individual performance. Further, the results of mediation analysis revealed that affective trust acts as a full mediator in the relationship between servant leadership style and task performance and a partial mediator


in the relationship between servant leadership style and citizenship behavior. The second component of trust, i.e. cognitive trust does not mediate the relationship of servant leadership style with employee’s performance.

3 Need of the Study

There is no denying the fact that leadership styles play an important role in the performance of an organization. Every leader influences the in-role and extra-role performance of their subordinates. A large number of studies on leadership styles have been undertaken to explain the principles and effects of leadership styles on performance [12, 16, 17, 20–24]. However, few studies have been undertaken on servant leadership style in the Indian context. The majority of studies are related to transformational, transactional and laissez faire leadership styles. Moreover, employee performance has usually been studied as a whole construct. The researcher has not come across any study which investigates the role of servant leadership style on the various components of employee performance.

4 Objectives of the Study

1. To investigate the demographic association of servant leadership style.
2. To explore the impact of servant leadership style on employee performance.

5 Hypotheses of the Study

1. The perception of servant leadership style and employee performance significantly differs on the basis of gender.
2. The perception of servant leadership style and employee performance significantly differs on the basis of age.
3. The perception of servant leadership style and employee performance significantly differs on the basis of education.
4. The perception of servant leadership style and employee performance significantly differs on the basis of experience.
5. Servant leadership style is significantly related to employee performance.
6. Servant leadership style significantly influences employee performance.


6 Methodology

This study has been conducted on 90 employees from 9 pharmaceutical companies. Perception of servant leadership style has been measured with the help of a self-designed questionnaire of 10 items (2 items are reverse coded). The other variable, employee performance, has been measured with a self-designed scale of 25 items. Organization structure, work environment, skills, attitude, rewards and knowledge are the dimensions included in employee performance. Data has been analyzed using correlation, regression, t-test and ANOVA [25, 26], and through an algorithm presented with the help of a flowchart.
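The tests named above can be sketched in a few lines of Python; the group sizes below match Tables 1 and 2, but the scores are synthetic, since the actual survey responses are not reproduced in the chapter.

```python
# Sketch of the analysis pipeline on synthetic Likert-scale scores
# (group sizes from Tables 1 and 2; the scores themselves are made up).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two gender groups with the sizes reported in Table 1 (73 male, 17 female)
male = rng.normal(4.15, 0.48, 73)
female = rng.normal(4.08, 0.36, 17)

# Independent-samples t-test; df = 73 + 17 - 2 = 88, as in Table 1
t_stat, p_gender = stats.ttest_ind(male, female)

# One-way ANOVA across the four age groups of Table 2
age_groups = [rng.normal(m, 0.4, n)
              for m, n in [(4.02, 28), (3.97, 31), (4.23, 18), (4.65, 13)]]
f_stat, p_age = stats.f_oneway(*age_groups)

# Pearson correlation and simple linear regression between two score vectors
x = rng.normal(4.1, 0.5, 90)
y = 0.3 * x + rng.normal(2.9, 0.5, 90)
r, p_corr = stats.pearsonr(x, y)
slope, intercept, r_reg, p_reg, stderr = stats.linregress(x, y)
```

With one predictor, the correlation coefficient returned by `pearsonr` and the `rvalue` from `linregress` coincide, which is the connection between Tables 5 and 6 discussed in the Results.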

7 Results

Table 1 indicates that there exists a small difference in servant leadership style and employee performance on the basis of gender. As the p-value is higher than the assumed level of significance, i.e. 5%, it is interpreted that there is no significant difference in servant leadership style and employee performance on the basis of gender. Table 2 shows that there is a difference in servant leadership style and employee performance among respondents of different age groups. Employees in higher age

Table 1 Descriptive statistics and independent sample t-test for servant leadership style and employee performance with respect to gender

Variable                Group   N   Mean    S.D.     t      df  p
Servant leadership      Male    73  4.1507  0.48279  0.548  88  0.585
                        Female  17  4.0824  0.36269
Organization structure  Male    73  4.0342  0.70306  1.006  88  0.317
                        Female  17  3.8471  0.63553
Work environment        Male    73  4.1822  0.64363  1.256  88  0.213
                        Female  17  3.9647  0.64123
Attitude                Male    73  4.3260  0.66354  1.141  88  0.257
                        Female  17  4.1176  0.74182
Rewards                 Male    73  4.2905  0.65728  1.582  88  0.117
                        Female  17  4.0035  0.74227
Knowledge               Male    73  4.3822  0.64342  1.152  88  0.252
                        Female  17  4.1765  0.74459
Skills                  Male    73  4.3664  0.65641  1.766  88  0.081
                        Female  17  4.0518  0.68511
Employee performance    Male    73  4.2641  0.53815  1.602  88  0.113
                        Female  17  4.0271  0.59831


Table 2 Descriptive statistics and ANOVA for servant leadership style and employee performance with respect to age

Variables               Age group     N   Mean    Std. deviation  F      Sig.
Servant leadership      21–30         28  4.0250  0.40426         9.431  0.000
                        31–40         31  3.9742  0.35681
                        41–50         18  4.2278  0.49326
                        51 and above  13  4.6462  0.39710
                        Total         90  4.1378  0.46145
Organization structure  21–30         28  3.6875  0.62222         6.577  0.000
                        31–40         31  3.8919  0.70381
                        41–50         18  4.3194  0.58035
                        51 and above  13  4.4808  0.54449
                        Total         90  3.9989  0.69132
Work environment        21–30         28  3.7857  0.75214         5.758  0.001
                        31–40         31  4.1710  0.54356
                        41–50         18  4.4444  0.50202
                        51 and above  13  4.4154  0.45064
                        Total         90  4.1411  0.64528
Attitude                21–30         28  4.0250  0.77537         4.821  0.004
                        31–40         31  4.2387  0.61843
                        41–50         18  4.3889  0.58399
                        51 and above  13  4.8231  0.37451
                        Total         90  4.2867  0.67959
Rewards                 21–30         28  3.9236  0.68285         6.794  0.000
                        31–40         31  4.1306  0.62438
                        41–50         18  4.5739  0.62207
                        51 and above  13  4.6946  0.44031
                        Total         90  4.2363  0.67920
Knowledge               21–30         28  4.1000  0.74981         6.490  0.001
                        31–40         31  4.2129  0.63390
                        41–50         18  4.5222  0.52194
                        51 and above  13  4.9308  0.11094
                        Total         90  4.3433  0.66418
Skills                  21–30         28  4.0446  0.75503         4.637  0.005
                        31–40         31  4.3039  0.67992
                        41–50         18  4.3400  0.52315
                        51 and above  13  4.8338  0.18906
                        Total         90  4.3070  0.66955
Employee performance    21–30         28  3.9279  0.59523         8.522  0.000
                        31–40         31  4.1587  0.52389
                        41–50         18  4.4328  0.40469
                        51 and above  13  4.6962  0.20443
                        Total         90  4.2193  0.55439

groups have a higher perception of servant leadership style and employee performance. This difference is tested for statistical significance using one-way ANOVA, and the results reveal that servant leadership style and employee performance significantly vary among the respondents of different age groups. On the basis of this result, the stated hypothesis, "The perception of servant leadership style and employee performance significantly differs on the basis of age", stands accepted.

Table 3 shows the scores for servant leadership style and employee performance among respondents with different educational qualifications. Post-graduate employees have scored high on servant leadership and rewards. For the other dimensions of employee performance, all employees have scored similarly. The difference is tested for statistical significance using one-way ANOVA, and the results reveal that servant leadership style, but not employee performance, significantly varies among respondents with different educational qualifications. On the basis of this result, the stated hypothesis "The perception of servant leadership style and employee performance significantly differs on the basis of education" stands accepted only for servant leadership.

Table 4 shows the difference in servant leadership style and employee performance on the basis of experience. The difference is tested for statistical significance using one-way ANOVA; the p-value in all cases except attitude and skills is lower than the assumed level of significance, i.e. 5%. On the basis of this result, the stated hypothesis "The perception of servant leadership style and employee performance significantly differs on the basis of experience" stands partially accepted.

Table 5 shows the relationship between servant leadership style and employee performance.
Servant leadership style is positively and significantly related to organization structure (r = 0.309), work environment (r = 0.303), attitude (r = 0.217), rewards (r = 0.285), knowledge (r = 0.258), skills (r = 0.238) and employee performance (r = 0.325). Thus, it can be interpreted that servant leadership style is positively related to employee performance. On the basis of this result, the stated hypothesis "Servant leadership style is significantly related to employee performance" stands accepted (Table 6).

In the next step, regression analysis has been used to study the impact of servant leadership style on employee performance. Going by the adjusted R square values in Table 6, servant leadership style accounts for an 8.5% change in organization structure, 8.2% in work environment, 3.6% in attitude, 7.1% in rewards, 5.6% in knowledge, 4.6% in skills and 9.6% in employee performance. On the basis of this result, the stated hypothesis "Servant leadership style significantly influences employee performance" stands accepted.
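Because each regression in Table 6 has a single predictor, its R square is simply the square of the corresponding Pearson r in Table 5, and the adjusted value follows from n = 90 respondents. A quick check against the reported figures (the values agree with the tables to within rounding):

```python
# Verifying the arithmetic linking Table 5 (Pearson r) and Table 6
# (R square, adjusted R square) for simple one-predictor regressions.
corr = {
    "organization structure": 0.309,
    "work environment": 0.303,
    "attitude": 0.217,
    "rewards": 0.285,
    "knowledge": 0.258,
    "skills": 0.238,
    "employee performance": 0.325,
}

# R square = r squared (one predictor only)
r_square = {dim: round(r ** 2, 3) for dim, r in corr.items()}

# Adjusted R square for n = 90 respondents and p = 1 predictor
n, p = 90, 1
adj = {dim: round(1 - (1 - r2) * (n - 1) / (n - p - 1), 3)
       for dim, r2 in r_square.items()}

print(r_square["employee performance"])  # 0.106, matching Table 6
print(adj["employee performance"])       # 0.096, matching Table 6
```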


Table 3 Descriptive statistics and ANOVA for servant leadership style and employee performance with respect to qualification

Variables               Qualification    N   Mean    Std. deviation  F      Sig.
Servant leadership      Diploma          2   4.4000  0.42426         3.581  0.032
                        Graduation       48  4.0208  0.41716
                        Post-graduation  40  4.2650  0.48440
                        Total            90  4.1378  0.46145
Organization structure  Diploma          2   4.7500  0.00000         2.299  0.106
                        Graduation       48  3.8854  0.64815
                        Post-graduation  40  4.0975  0.72774
                        Total            90  3.9989  0.69132
Work environment        Diploma          2   4.5000  0.42426         0.802  0.452
                        Graduation       48  4.0708  0.66715
                        Post-graduation  40  4.2075  0.62609
                        Total            90  4.1411  0.64528
Attitude                Diploma          2   4.8500  0.21213         1.806  0.170
                        Graduation       48  4.1771  0.68144
                        Post-graduation  40  4.3900  0.67170
                        Total            90  4.2867  0.67959
Rewards                 Diploma          2   4.8500  0.21213         3.445  0.036
                        Graduation       48  4.0767  0.67469
                        Post-graduation  40  4.3973  0.65250
                        Total            90  4.2363  0.67920
Knowledge               Diploma          2   4.7500  0.07071         0.713  0.493
                        Graduation       48  4.2813  0.71864
                        Post-graduation  40  4.3975  0.60658
                        Total            90  4.3433  0.66418
Skills                  Diploma          2   4.6850  0.02121         0.498  0.610
                        Graduation       48  4.2596  0.73163
                        Post-graduation  40  4.3450  0.60528
                        Total            90  4.3070  0.66955
Employee performance    Diploma          2   4.7350  0.14849         2.102  0.128
                        Graduation       48  4.1252  0.56494
                        Post-graduation  40  4.3065  0.53279
                        Total            90  4.2193  0.55439


Table 4 Descriptive statistics and ANOVA for servant leadership style and employee performance with respect to experience

Variable                Experience        N   Mean    Std. deviation  F      Sig.
Servant leadership      0–2 yrs           8   4.0875  0.55918         2.572  0.043
                        3–5 yrs           31  4.1452  0.37224
                        6–10 yrs          25  4.0240  0.54105
                        11–15 yrs         14  4.0429  0.34799
                        16 yrs and above  12  4.5000  0.42212
                        Total             90  4.1378  0.46145
Organization structure  0–2 yrs           8   3.8438  0.26517         5.689  0.000
                        3–5 yrs           31  3.6935  0.70033
                        6–10 yrs          25  3.9360  0.63106
                        11–15 yrs         14  4.4643  0.61125
                        16 yrs and above  12  4.4792  0.59790
                        Total             90  3.9989  0.69132
Work environment        0–2 yrs           8   4.0000  0.62335         4.210  0.004
                        3–5 yrs           31  3.9129  0.74509
                        6–10 yrs          25  4.0560  0.54320
                        11–15 yrs         14  4.5857  0.44003
                        16 yrs and above  12  4.4833  0.42176
                        Total             90  4.1411  0.64528
Attitude                0–2 yrs           8   4.5000  0.57570         1.868  0.124
                        3–5 yrs           31  4.0677  0.77261
                        6–10 yrs          25  4.3000  0.70000
                        11–15 yrs         14  4.3286  0.46148
                        16 yrs and above  12  4.6333  0.52455
                        Total             90  4.2867  0.67959
Rewards                 0–2 yrs           8   4.2488  0.55699         3.837  0.006
                        3–5 yrs           31  4.0171  0.67812
                        6–10 yrs          25  4.0904  0.63938
                        11–15 yrs         14  4.7021  0.54043
                        16 yrs and above  12  4.5550  0.68765
                        Total             90  4.2363  0.67920
Knowledge               0–2 yrs           8   4.8000  0.41404         2.674  0.037
                        3–5 yrs           31  4.1097  0.70775
                        6–10 yrs          25  4.2880  0.64635
                        11–15 yrs         14  4.5357  0.57326
                        16 yrs and above  12  4.5333  0.62861
                        Total             90  4.3433  0.66418
Skills                  0–2 yrs           8   4.5675  0.42412         0.958  0.435
                        3–5 yrs           31  4.1500  0.77830
                        6–10 yrs          25  4.3324  0.65797
                        11–15 yrs         14  4.3043  0.54345
                        16 yrs and above  12  4.4892  0.64069
                        Total             90  4.3070  0.66955
Employee performance    0–2 yrs           8   4.3263  0.30496         3.557  0.010
                        3–5 yrs           31  3.9926  0.60695
                        6–10 yrs          25  4.1668  0.54303
                        11–15 yrs         14  4.4879  0.36562
                        16 yrs and above  12  4.5300  0.51473
                        Total             90  4.2193  0.55439

Table 5 Pearson correlation coefficient between study variables

Variables               Servant leadership style
Organization structure  0.309**
Work environment        0.303**
Attitude                0.217*
Rewards                 0.285**
Knowledge               0.258
Skills                  0.238*
Employee performance    0.325**

Table 6 Regression analysis showing impact of servant leadership style on employee performance and its dimensions

Dependent variable      R square  Adjusted R square  Standardized coefficient
Organization structure  0.096     0.085              0.309
Work environment        0.092     0.082              0.303
Attitude                0.047     0.036              0.217
Rewards                 0.081     0.071              0.285
Knowledge               0.066     0.056              0.258
Skills                  0.057     0.046              0.238
Employee performance    0.106     0.096              0.325


Flowchart with the help of Algorithm

[Flowchart: Start → Input questionnaire (DP, SL, OS, WE, A, R, K, S) → Net Employee Performance (EP) = SL + OS + WE + A + R + K + S → Print Net EP → If continue, apply (loop); otherwise End]

The following steps are used in the algorithm:

Step 1: Start
Step 2: Input questionnaire with terms SL (Servant Leadership), OS (Organization Structure), WE (Work Environment), A (Attitude), R (Reward), K (Knowledge) and S (Skills)
Step 3: Net Employee Performance (EP) = SL + OS + WE + A + R + K + S
Step 4: Print Net EP
Step 5: If continue, apply (otherwise repeat algorithm)
Step 6: End
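The algorithm above amounts to summing the seven dimension scores; a minimal sketch follows, with the dimension abbreviations taken from Step 2 and the respondent's values purely hypothetical.

```python
# Sketch of the net employee performance (EP) algorithm: each dimension
# is a questionnaire score, and net EP is their sum (Step 3 above).
DIMENSIONS = ("SL", "OS", "WE", "A", "R", "K", "S")

def net_employee_performance(scores: dict) -> float:
    """Step 3: EP = SL + OS + WE + A + R + K + S."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:  # Step 5: repeat until a complete questionnaire is supplied
        raise ValueError(f"incomplete questionnaire, missing {missing}")
    return sum(scores[d] for d in DIMENSIONS)

# One respondent's (hypothetical) mean scores on a 5-point scale
respondent = {"SL": 4.1, "OS": 4.0, "WE": 4.2, "A": 4.3,
              "R": 4.3, "K": 4.4, "S": 4.4}
print(round(net_employee_performance(respondent), 2))  # 29.7
```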


8 Limitations and Directions for Future Implications

The following are the limitations and future implications of this research:

• The study has been conducted on a small sample of 90 employees. Further studies can increase the sample size to generalize the findings.
• The moderating role of the gender of leaders and employees in influencing performance can be undertaken in future studies.
• This study has been undertaken in the pharmaceutical sector only, so future studies comparing different industries can be undertaken to understand the role of industry type on leadership style.
• Only one leadership style has been covered in this study. Different leadership styles can be included in a single study to find out the most effective leadership style.

9 Conclusion

This research was undertaken with the objective of understanding the role of demographic variables in leadership style and employee performance. Additionally, the impact of servant leadership style on various components of employee performance has also been investigated. The results revealed that age and experience are the two variables which cause a difference in perception of leadership style and employee performance. Educational qualification influences servant leadership but not employee performance. Gender has not been found to cause a significant difference in servant leadership style and employee performance. The study also found that servant leadership style positively and significantly influences employee performance.

References

1. Northouse P (2004) Leadership theory and practice. Pastor Psychol 56(4):403–411
2. Okoh AO (1998) Personnel and human resources management in Nigeria. Amfitop, Lagos
3. Fry LW (2003) Toward a theory of spiritual leadership. Leadersh Q 14(6):693–727
4. Glantz J (2002) Finding your leadership style. A guide for educators. Assoc Supervis Curr Dev 7(5):52–61
5. Timothy CO, Andy TO, Victoria OA, Idowu AN (2011) Effects of leadership style on organizational performance: a survey of selected small scale enterprises in Ikosi-Ketu council development area of Lagos state, Nigeria. Aus J Bus Manage Res 1(7):100–111
6. Belonio JR (2012) The effect of leadership style on employee satisfaction and performance of bank employees in Bangkok. AU-GSB E-J 5(2):111–116
7. Khan SN, Qureshi IM, Ahmad HI (2010) Abusive supervision & negative employee outcomes. Eur J Soc Sci 15(4):490–500
8. Lisbijanto H, Budiyanto (2014) Influence of servant leadership on organization performance through job satisfaction in employees' cooperatives Surabaya. Int J Bus Manage Invent 3(4):1–6
9. Okafor EE (2005) Executive corruption in Nigeria: a critical overview of its socio-economic implications for development. African J Psychol Study Social Issues 8(1):21–41
10. Mallick PK, Bhoi AK, Chae GS, Kalita K (eds) (2021) Advances in electronics, communication and computing: select proceedings of ETAEERE 2020, vol 709. Springer Nature
11. Greenleaf RK (1977) Servant leadership: a journey into the nature of legitimate power and greatness. Paulist Press, Mahwah, NJ
12. Sendjaya S, Sarros JC (2002) Servant leadership: its origin, development, and application in organizations. J Leadership Organ Stud 9(2):57–64
13. Van Dierendonck D, Patterson K (2015) Compassionate love as a cornerstone of servant leadership: an integration of previous theorizing and research. J Bus Ethics 128(1):119–131
14. Ragnarsson S, Kristjánsdóttir ES, Gunnarsdóttir S (2018) To be accountable while showing care: the lived experience of people in a servant leadership organization. SAGE Open 8(3):34–42
15. Sihombing S, Astuti ES, Al Musadieq M, Hamied D, Rahardjo K (2018) The effect of servant leadership on rewards, organizational culture and its implication for employee's performance. Int J Law Manag 60(2):505–516. https://doi.org/10.1108/IJLMA-12-2016-0174
16. Girei AA (2015) Perceived effects of leadership styles on workers' performance in package water producing industry in Adamawa State, Nigeria. Int J Inno Educ Res 3(12):101–112
17. Hussain T, Ali W (2012) Effects of servant leadership on followers' job performance. Sci Tech Dev 31(4):359–368
18. Chiniara M, Bentein K (2016) Linking servant leadership to individual performance: differentiating the mediating role of autonomy, competence and relatedness need satisfaction. Leadersh Q 27(1):124–141
19. Jaramillo F, Bande B, Varela J (2015) Servant leadership and ethics: a dyadic examination of supervisor behaviors and salesperson perceptions. J Pers Sell Sales Manage 35(2):108–124
20. Schaubroeck J, Lam SS, Peng AC (2011) Cognition-based and affect-based trust as mediators of leader behavior influences on team performance. J Appl Psychol 96(4):863–871
21. Peterson SJ, Galvin BM, Lange D (2012) CEO servant leadership: exploring executive characteristics and firm performance. Person Psychol 65(3):565–596
22. Ruschman NL (2002) Servant-leadership and the best companies to work for in America. In: Spears LC, Lawrence M (eds) Focus on leadership: servant-leadership for the twenty-first century. John Wiley & Sons, pp 123–139
23. Chinomona R, Mashiloane M, Pooe D (2013) The influence of servant leadership on employee trust in a leader and commitment to the organization. Mediterr J Soc Sci 4(14):11–17
24. Saleem F, Zhang YZ, Gopinath C, Adeel A (2020) Impact of servant leadership on performance: the mediating role of affective and cognitive trust. SAGE Open 10(1):1–16
25. Kaur I, Theraja P (2010) Impact of attendance on performance of students using ANOVA. Int J Syst Cybern Inform:41–44. https://doi.org/10.2139/ssrn.1894508
26. Kaur I, Gupta N (2009) Statistical analysis of different configurations of hybrid doped fiber. Int J Electr Electron Eng 8(3):519–522

Interpreting Skilled and Unskilled Tasks Using EEG Signals

Neeraj Sharma, Hardeep Singh Ryait, and Sudhir Sharma

Abstract While processing a skilled task, internal and external information must be processed in an effective manner, as the final decision is based upon the relevance of the information. Somatosensory information and focused attention are an important part of this process. Electroencephalographic (EEG) recordings demonstrate the cortical changes while performing a skilled task as compared to a normal habitual task. In line with this concept, subjects were monitored during a grasping task designed in two different ways: one a routine task and the other a skilled task. The rate of success while completing a task (and its related score) and the relevant cortical activity were recorded with an EEG measuring instrument. Subjects performed both tasks (skilled and unskilled). The EEG data was analyzed over the 4–20 Hz frequency range. The analysis has shown considerable changes in alpha, theta and beta power values.

Keywords Skilled and unskilled task · EEG · Somatosensory information · Cortical changes

H. S. Ryait
IKGPTU, Jalandhar, Punjab, India

N. Sharma (B)
Department of Electronics and Communication Engineering, BBSBEC, Fatehgarh Sahib, Punjab, India
e-mail: [email protected]

S. Sharma
Department of Electrical Engineering, DAVIET, Jalandhar, Punjab, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_2

1 Introduction

Electroencephalography (EEG) is a reflection of how the brain functions, and it is done to measure electric potentials. These potentials show the electrical activity happening in the human brain; they are an indirect effect of the action potentials of tens of thousands of pyramidal neurons firing simultaneously. In the diagnosis of the brain and its functioning, and in detecting neurological disorders, EEG recordings are most


relevant for doctors and scientists. EEG recordings are also of utmost use in the diagnosis of neurological diseases such as epilepsy, tumors in the brain, injuries related to the head, difficulties in sleep (or correcting sleep patterns) and dementia [1]. Despite technical advances, EEG remains a visually interpreted test, as automated analysis is not yet accurate and reliable.

The brain is the most important and complex part of the central nervous system. Weighing about 1500 g, the brain is composed of various neurons that act as its data processing units. For the recording of EEG, a set of electrodes is placed on the surface of the scalp to measure the electrical potential [2]. EEG recordings allow real-time monitoring of spontaneous as well as evoked brain activity. Any EEG recording requires a set of electrodes, differential amplifiers for each channel, and filters. Among other parameters, the choice of electrode is also an important factor and certainly affects the overall quality of the EEG signals [3]. The 10–20 electrode placement system is the standard electrode placement system [4]. Extra electrodes are sometimes used to record electrooculography (EOG), electrocardiography and electromyography (for muscle recordings) along with EEG. A single-channel EEG can be used for a specified task such as cognitive task detection and monitoring, brain–computer interfacing, event-related potentials (ERPs) and so on.

The arrangements of electrodes are classified as bipolar or unipolar. In the bipolar arrangement, the potential between a pair of electrodes is measured, whereas in the unipolar arrangement the potential of each electrode is compared with a reference or neutral electrode; the average of all electrodes can also be taken as this reference. EEG signals are continuously changing waveforms that show the potential difference in an oscillating fashion.
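The two electrode arrangements described above can be illustrated on fabricated data; the channel names and pairings below are illustrative choices, not the montage used in this study.

```python
# Illustration of the electrode arrangements described above, on fabricated
# data: a bipolar montage takes differences between electrode pairs, while a
# unipolar (referential) montage compares each electrode with a common
# reference -- here, the average of all electrodes.
import numpy as np

rng = np.random.default_rng(1)
channels = ["Fp1", "Fp2", "C3", "C4", "O1", "O2"]
eeg = rng.normal(0.0, 50.0, size=(len(channels), 1000))  # microvolts

# Bipolar: potential difference between chosen electrode pairs
pairs = [("Fp1", "C3"), ("C3", "O1"), ("Fp2", "C4"), ("C4", "O2")]
bipolar = {f"{a}-{b}": eeg[channels.index(a)] - eeg[channels.index(b)]
           for a, b in pairs}

# Unipolar with a common-average reference: subtract the mean of all channels
car = eeg - eeg.mean(axis=0, keepdims=True)
```

After common-average referencing, the instantaneous mean across channels is zero by construction, which is the defining property of that reference choice.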
Excitation of the brain decides the amplitudes and pattern of EEG signals. The amplitudes measured are in the range of microvolts (µV), with frequencies ranging from 0.5 to 100 Hz. This frequency range has been classified into six bands, namely alpha, beta, theta, delta, gamma and mu [3, 5]; these waves are characterized in Table 1. The rhythms represent the neural activity pattern, which is an indicator of the cognitive or mental state of humans. Human behavior can be described through these brain signals, which in turn can be used in many applications such as predictive analysis [11, 12], human emotion recognition [13, 14], understanding the connection between a person's EEG and genetically specific information [15, 16], and developing advanced methods to detect the drowsiness stage so as to maintain a level of vigilance [17, 18] (Fig. 1).

With the advent of technology, there are a number of techniques for rhythm identification, but a number of errors may arise in this process. The digital signal processing tools in use nowadays present a favorable solution in this regard. Artifacts which hinder the use of EEG data for specific applications include the pulse artifact, muscle artifact, line noise, eye movement/blink artifact, etc. [20]. Eyeblink, a natural phenomenon, is the major contributor to alterations of the EEG pattern. Therefore, the raw EEG signal (inclusive of noise and artifacts) is subjected to preprocessing. Using linear and nonlinear adaptive filtering methods, such noises can be removed [21, 22]. An electromyography artifact, which is due to muscle activity (frequency >30 Hz), is removed using low-pass filters, and


Table 1 Waves and their characteristic features

1. Delta (0.5–4 Hz): amplitude less than 100 µV; region: central cerebrum, mostly in the parietal lobe. Found during deep sleep or in serious brain disease; mostly confused with muscle artifact signals.
2. Theta (4–7 Hz): amplitude less than 100 µV; region: parietal and temporal regions. Found during sleep or in emotional stress conditions; used in stress study analysis.
3. Mu (7–11 Hz): medium amplitude of the alpha rhythm; region: sometimes the left temporal lobe is favored [8–10]. The most characteristic feature of the Mu rhythm is its reactivity to motor activity, thoughts planning motor activity, or somatosensory attention [6, 7].
4. Alpha (8–13 Hz): amplitude >50 µV; region: occipital region. Found in normal persons in the awake state; present during the eyes-closed resting position.
5. Beta (13–30 Hz): low amplitude due to infrequent occurrence; region: frontocentral region of the brain. Found in normal persons during active thinking, concentration and in an excited state; predominant when a healthy individual is having anxiety.
6. Gamma (>30 Hz): used in ERP tasks for special cases and diseases.


N. Sharma et al.

Fig. 1 Brain rhythms and related amplitude levels [19]

for removing low-frequency components such as baseline shift signals, high-pass filters are used.

Table 3 ANOVA for the response surface model

Source | Coefficient | Sum of squares | df | Mean square | F value | p-value (Prob > F)
Model | — | — | — | — | — | 0.0034
X1 | 0.63 | 5.36 | 1 | 5.36 | 1.26 | 0.2877
X2 | 2.15 | 62.94 | 1 | 62.94 | 14.80 | 0.0032
X3 | 1.29 | 22.87 | 1 | 22.87 | 5.38 | 0.0428
X1X2 | 0.16 | 0.19 | 1 | 0.19 | 0.045 | 0.8359
X1X3 | 1.09 | 9.50 | 1 | 9.50 | 2.24 | 0.1658
X2X3 | 0.52 | 2.14 | 1 | 2.14 | 0.50 | 0.4940
X1² | 2.93 | 123.54 | 1 | 123.54 | 29.05 | 0.0003
X2² | 1.33 | 25.49 | 1 | 25.49 | 5.99 | 0.0344
X3² | −0.45 | 2.95 | 1 | 2.95 | 0.69 | 0.4247
Lack of fit | — | 25.39 | 5 | 5.08 | 1.48 | 0.3383
R² = 0.8559; Adjusted R² = 0.7262

X1 = Ammonia/kieselguhr ratio on the catalyst; X2 = Digestion temperature after the addition of ammonical slurry of ammonium tungstate; X3 = Digestion time after adding ammonical slurry of ammonium tungsten

be 0.8559 and 0.7262, respectively. From the R² value, it can be noted that about 86% of the variability of the response is explained by the model. The effects of the different factors on the yield of glycerol are discussed in detail below.
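The adjusted R² follows from the reported R² via the standard correction; the run count and term count below are assumptions (a 20-run design with nine model terms, consistent with a three-factor central composite design) that reproduce the reported value:

```python
r2 = 0.8559   # reported coefficient of determination
n_runs = 20   # assumed number of experimental runs (central composite design)
n_terms = 9   # model terms excluding the intercept (3 linear + 3 interaction + 3 quadratic)

# Standard adjusted-R² correction for model size
adj_r2 = 1 - (1 - r2) * (n_runs - 1) / (n_runs - n_terms - 1)  # ≈ 0.7262
```

The result matches the reported adjusted R² of 0.7262, which supports the assumed design size.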

3.1 Effect of Ammonia/Kieselguhr Ratio on the Catalyst

The ammonia/kieselguhr ratio was varied in the range of 1.0–5.0 (Table 1) to vary the amount of ammonia in its slurry with ammonium tungstate used during catalyst preparation. From Table 3, it can be seen that the ammonia/kieselguhr ratio has a significant positive effect on the glycerol yield. Glycerol yield exhibited an increasing trend before becoming constant beyond a ratio of 2.5. The high sucrose conversion, and consequently higher glycerol yield, at an ammonia/kieselguhr ratio of 2.5 might be owing to the enlarged catalyst surface area. With a further increase in the ammonia/kieselguhr ratio, the catalyst surface area and its nickel percentage decreased; consequently, the yields of the various products and the conversion of sucrose also decreased. It appears that the conversion rate of glycerol decreased much more than its formation rate, resulting in an increasing trend in its yield (Fig. 1). An increase in the ammonia amount beyond 2.5 did not change the catalyst properties; the product yield and sucrose conversion also remained constant.

Modeling and Optimization of Reaction Parameters …

Fig. 1 The variation of glycerol yield with ammonia/kieselguhr ratio on the catalyst and digestion temperature after addition of ammonical slurry of ammonium tungstate

3.2 Digestion Temperature After the Addition of Ammonical Slurry of Ammonium Tungstate

The digestion temperature of the reaction mixture after the addition of the ammonical slurry of ammonium tungstate was optimized to precipitate the maximum tungsten in the catalyst. It was varied from 63 to 97 °C. Initially, the catalyst’s tungsten percentage increases and then becomes constant beyond 71 °C (Fig. 2). This temperature was therefore selected as the optimum digestion temperature after the addition of the ammonical slurry of ammonium tungstate. It was seen that the yield of glycerol also improved with the temperature of the reaction mixture. Similar results were also reported by Rodiansono et al. [13].

3.3 Digestion Time After Adding Ammonical Slurry of Ammonium Tungsten

The digestion time of the reaction mixture was varied from 10 to 110 min. After adding the ammonical tungstate slurry to the reaction mixture, the digestion time was also optimized on the basis of obtaining maximum precipitation of tungsten in the catalyst. The catalyst’s tungsten percentage ceases to increase once the maximum ammonium metatungstate has been formed in the reaction mixture and maximum diffusion within the kieselguhr pores has taken place after its reaction with the other constituents. It was noticed that the tungsten percentage did not increase beyond a

Fig. 2 The variation in glycerol yield with digestion time after addition of ammonical slurry of ammonium tungsten and digestion temperature after addition of ammonical slurry of ammonium tungstate

digestion time of 82 min. From Table 3, it was inferred that the glycerol yield also increased with an increase in the digestion time of the reaction mixture. Figure 3 represents the variation in glycerol yield with respect to digestion time after the addition of ammonical slurry of ammonium tungsten.

4 Optimization

A numerical optimization technique is used to obtain the optimum levels of the different variables. The optimum values that yield maximum glycerol are shown in Table 4. The optimized values obtained for all the variables lie close to the midpoint of the experimental range, which demonstrates the validity of the selected variable ranges.
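The numerical optimization step can be sketched with SciPy on a hypothetical second-order model in coded variables (the coefficients below are illustrative stand-ins, not the fitted model from Table 3):

```python
import numpy as np
from scipy.optimize import minimize

def yield_model(x):
    """Hypothetical quadratic response surface in coded variables x1..x3."""
    x1, x2, x3 = x
    return (38.0 + 0.6 * x1 + 2.1 * x2 + 1.3 * x3
            - 2.9 * x1**2 - 1.3 * x2**2 - 0.5 * x3**2)

# Maximize yield = minimize its negative, within the coded design region
res = minimize(lambda x: -yield_model(x),
               x0=np.zeros(3),
               bounds=[(-1.682, 1.682)] * 3)

best_x, best_yield = res.x, -res.fun
```

With real fitted coefficients, the same call returns the optimum point that a desirability-style numerical optimizer reports.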

Fig. 3 The variation of glycerol yield with ammonia/kieselguhr ratio and digestion time after addition of ammonical slurry of ammonium tungsten

Table 4 Optimum values of independent variables and response

Independent variables and response | Optimum value
Ammonia/kieselguhr ratio on the catalyst | 2.5
Digestion temp. after the addition of ammonical slurry of ammonium tungstate (°C) | 71
Digestion time after adding ammonical slurry of ammonium tungsten (min) | 82
Yield (%) | 38.088

5 Conclusion

The present study concludes that the catalytic hydrogenolysis of sucrose can be efficiently optimized using RSM. The highest glycerol yield (38.088%) was attained with an ammonia/kieselguhr ratio of 2.5, a digestion temperature after the addition of ammonical slurry of ammonium tungstate of 71 °C and a digestion time after the addition of ammonical slurry of ammonium tungsten of 82 min. Thus, the use of a multi-component catalyst is a vital necessity for the chemoselective catalytic transformation of biomass to the preferred product.


T. Srivastava et al.

References

1. Corma A, Iborra S, Velty A (2007) Chemical routes for the transformation of biomass into chemicals. Chem Rev 107:2411–2502
2. Kumar P, Barrett D, Delwiche M, Stroeve P (2009) Methods for pretreatment of lignocellulosic biomass for efficient hydrolysis and biofuel production. Ind Eng Chem Res 48:3713–3729
3. Sheldon RA (2011) Utilisation of biomass for sustainable fuels and chemicals: molecules, methods and metrics. Catal Today 167:3–13
4. Huber GW, Iborra S, Corma A (2006) Synthesis of transportation fuels from biomass: chemistry, catalysts, and engineering. Chem Rev 106:4044–4098
5. Cortright RC, Davda RR, Dumesic JA (2002) Hydrogen from catalytic reforming of biomass-derived hydrocarbons in liquid water. Nature 418:964–967
6. Mascal M, Nikitin EB (2008) Direct high-yield conversion of cellulose into biofuel. Angew Chem Int Ed 47:7924–7926
7. Binder JB, Raines RT (2009) Simple chemical transformation of lignocellulosic biomass into furans for fuels and chemicals. J Am Chem Soc 131:1979–1985
8. Fasolini A, Cespi D, Tabanelli T, Cucciniello R, Cavani F (2019) Hydrogen from renewables: a case study of glycerol reforming. Catalysts 9(9):722
9. Tomohisa M, Shuichi K, Kimio K, Keiichi T (2007) Development of a Ru/C catalyst for glycerol hydrogenolysis in combination with an ion-exchange resin. Appl Catal A 318:244
10. Srivastava T (2013) Glycerol production by hydrogenolysis of sucrose: optimization of (Ni, W, Cu)/Kieselguhr catalyst by response surface methodology and its characterization. J Glob Res Comput Sci Technol 4(2):46–55
11. Li H, Wang W, Deng JF (2000) Glucose hydrogenation to sorbitol over a skeletal Ni-P amorphous alloy catalyst (Raney Ni-P). J Catal 191(1):257–260
12. Li H, Li H, Deng JF (2002) Glucose hydrogenation over Ni–B/SiO2 amorphous alloy catalyst and the promoting effect of metal dopants. Catal Today 74(1–2):53–63
13. Rodiansono, Astuti MD, Mujiyanti DR, Santoso UP (2019) Selective hydrogenation of sucrose into sugar alcohols over supported Raney nickel-based catalysts. Indones J Chem 19:183–190
14. Rodiansono, Shimazu S (2013) The selective hydrogenolysis of sucrose to sorbitol and polyols over nickel-tin nanoparticle catalyst supported on aluminium hydroxide. In: Prosiding Semirata FMIPA Universitas Lampung, pp 351–358
15. Srivastava T, Saxena DC, Sharma R (2015) Optimization of catalyst synthesis parameters by response surface methodology for glycerol production by hydrogenolysis of sucrose. Int J Adv Eng Res Sci 2:56–65
16. Saxena S, Sharma R, Srivastava T (2017) Reaction pathway study of Ni, W, Cu/Kieselguhr catalyst: effects of catalyst reduction temperature, reduction time and amount of catalyst used. Indian J Sci Tech 10:1–6
17. Bond GC (1962) Catalysis by metals. Academic Press, London, p 395

Allocation of Different Types of DG Sources in a Time-Varying Radial Distribution Networks

Divesh Kumar and Satish Kansal

Abstract The Particle Swarm Optimization (PSO) approach is applied in this study to determine the optimal size/generation profile and location of DG that can be integrated into the distribution system to achieve the lowest level of power loss and voltage profile enhancement with the use of different types of DG. The method takes into consideration the effect of yearly load profiles and varying injected power profiles of the substation on the distribution network’s minimum power loss and DG estimates. The optimal size of DG is estimated at each bus using a simple loss formula in the first part, and the optimal position of DG is found using the PSO technique in the second part. The analytical expression is predicated on the power loss formula: the loss formula is used to determine the optimal size of DG for every bus, and the loss sensitivity factor is used to work out the optimal position of DG. The proposed approach is tested on the IEEE 33-bus test system, with the results being compared to exhaustive load flows.

Keywords Particle swarm optimization (PSO) · Distributed generation (DG) · Forward backward sweep method · Optimal size · Optimal location · Power loss

D. Kumar (B)
Department of Electrical Engineering, BGIET, Sangrur, India

S. Kansal
Department of Electrical Engineering, BHSBIET, Lehragaga, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_5


1 Introduction

Developing countries are promoting the use of renewable energy sources for power generation through different policies. To increase the use of renewable energy sources, governments are giving financial incentives as a policy mechanism. India is also a developing country; its industrial and economic growth is demanding more and more energy, which is currently met by coal and oil. Electrical power generation in Punjab state also depends mostly on fossil fuels; Punjab has no oil or coal reserves and depends on other states for the supply of fossil fuels for its thermal power generation [1]. Distributed Generation (DG), a term commonly used for small-scale generation, provides answers to many new challenges. The integration of DG units in power systems is growing. In the last few decades, researchers have shown great interest in DG penetration, driven by aspects such as environmental issues and the development of advanced techniques for small-scale power generation, power electronics and energy storage devices for transient backup in electric distribution systems [2–5]. Apart from these, DGs offer various technical and environmental advantages. The technical advantages include the reduction of power loss by installing DG units, improvement in voltage profile, improved reliability and stability of the system and improvement in overall system efficiency [4–6]. The economic advantages include savings in fuel cost and distribution cost, etc. [6–9]. Presently, large centralized power plants are considered less attractive because of dwindling conventional energy sources, the very high cost of transmission and distribution systems, power deregulation, technological advances and environmental issues. Various optimization techniques such as GA, PSO and hybrid GA-PSO-based techniques have been applied for loss reduction [6].
A number of approaches for evaluating optimum DG positions and sizes to improve the system voltage profile and minimize active loss have been suggested [7, 8, 10–14]. In this research work, a new analytical approach is implemented to determine the size of DG and the optimal bus, so that the power loss is minimum. Time-varying demands and maximum DG limits are considered, and different types of DG are used to check the performance of the system. The types of DG are defined as follows:

Type-I: injecting real power into the system.
Type-II: injecting reactive power into the system.
Type-III: injecting apparent power into the system.
Type-IV: injecting real power and consuming reactive power in the system.

The various performance parameters, such as the optimal location, optimal size and type of DG for minimization of power or energy loss, reliability improvement, voltage deviation reduction and stability improvement, are discussed.
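The four type definitions amount to sign conventions on the injected powers; a small helper (the function name and power-factor handling are illustrative, not from the paper) makes the distinction explicit:

```python
import math

def dg_injection(s_kva, pf, dg_type):
    """Map a DG rating and power factor to (P, Q) injections.
    Sign convention: positive values = injection into the network."""
    p = s_kva * pf
    q = s_kva * math.sqrt(1 - pf * pf)
    if dg_type == 1:       # Type-I: real power only
        return s_kva, 0.0
    if dg_type == 2:       # Type-II: reactive power only
        return 0.0, s_kva
    if dg_type == 3:       # Type-III: injects both real and reactive power
        return p, q
    if dg_type == 4:       # Type-IV: injects real power, consumes reactive power
        return p, -q
    raise ValueError("unknown DG type")
```

For example, a Type-IV unit rated 100 kVA at 0.8 power factor injects 80 kW while drawing 60 kVAR from the network.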


2 Problem Formulation

In a radial distribution system, the R/X ratio is very high and the transmission-line power loss formula is not appropriate; the power loss of the distribution system is defined as follows:

$$P_L = \sum_{i=1}^{n} I_i^2 R_i \tag{1}$$

where $I_i$ is the current in branch $i$, with active component $I_{ai}$ and reactive component $I_{ri}$, so that $I_i^2 = I_{ai}^2 + I_{ri}^2$. The power loss in a line then becomes

$$P_L = \sum_{i=1}^{n} \left(I_{ai}^2 + I_{ri}^2\right) R_i \tag{2}$$

The active and reactive power losses of the system are defined as follows:

$$P_{La} = \sum_{i=1}^{n} I_{ai}^2 R_i \tag{3}$$

$$P_{Lr} = \sum_{i=1}^{n} I_{ri}^2 R_i \tag{4}$$

If a DG is placed in the radial feeder, it injects extra current into the system, and the branch currents change from $I^{\mathrm{old}}$ to $I^{\mathrm{new}}$:

$$I_i^{\mathrm{new}} = I_i^{\mathrm{old}} + D_i I_{DG} \tag{5}$$

where $D_i = 1$ if branch $i$ lies between the source and the DG bus, and $D_i = 0$ otherwise. The system losses are also affected by the presence of this extra current:

$$P_L^{\mathrm{new}} = \sum_{i=1}^{n} \left(I_i^{\mathrm{old}} + D_i I_{DG}\right)^2 R_i \tag{6}$$

The total power saving in the system after installing the DG is

$$S_{\mathrm{save}} = P_{La} - P_{La}^{\mathrm{new}} = -\sum_{i=1}^{n} \left(2 D_i I_{ai} I_{DG} + D_i^2 I_{DG}^2\right) R_i \tag{7}$$

The saving is maximum when $\partial S_{\mathrm{save}} / \partial I_{DG} = 0$, i.e.

$$2 \sum_{i=1}^{n} \left(D_i I_{ai} + D_i^2 I_{DG}\right) R_i = 0 \tag{8}$$

$$I_{DG} = -\frac{\sum_{i=1}^{n} D_i I_{ai} R_i}{\sum_{i=1}^{n} D_i^2 R_i} \tag{9}$$

Since $D_i = 1$ only for the branches feeding the DG bus, $I_{DG}$ can be calculated over those branches as

$$I_{DG} = -\frac{\sum_{i=1}^{n} I_{ai} R_i}{\sum_{i=1}^{n} R_i} \tag{10}$$

$I_{DG}$ is the maximum value of current required at bus $i$. The size of the DG at bus $i$ is obtained by multiplying $I_{DG}$ by $V_m$, the voltage at bus $i$:

$$S_{DG} = V_m I_{DG} \tag{11}$$

The active and reactive sizes of the DG can be calculated from the above as follows:

$$P_{DG} = V_m I_{DG} \tag{12}$$

$$Q_C = V_m I_C \tag{13}$$

$$S_{DG} = \sqrt{P_{DG}^2 + Q_C^2} \tag{14}$$

Equation (14) represents the size of a type-III DG.
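Equations (7)–(11) can be checked numerically. In the sketch below, the branch currents, resistances and bus voltage are illustrative values, not data from the test system:

```python
# Active branch currents and resistances along the path feeding the candidate
# DG bus (illustrative per-unit values)
I_a = [0.35, 0.28, 0.20]   # active components of the branch currents
R   = [0.05, 0.08, 0.06]   # branch resistances

# Eq. (10): DG current that maximizes the loss saving
I_dg = -sum(ia * r for ia, r in zip(I_a, R)) / sum(R)

def saving(i):
    """Eq. (7): loss saving as a function of the injected DG current."""
    return -sum((2 * ia * i + i * i) * r for ia, r in zip(I_a, R))

# Eq. (11): DG size, with an assumed bus voltage of 0.95 pu
V_m = 0.95
S_dg = V_m * I_dg
```

Perturbing `I_dg` in either direction lowers the saving, confirming it is the stationary point of Eq. (8).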

3 Research Methodology

3.1 Forward Backward Sweep (FBS)

FBS is a two-step technique in which voltages are updated in the forward sweep and currents are calculated in the backward sweep. In the forward sweep, the voltage is calculated at every node starting from the source node, with the source voltage set to unity. In the backward sweep, the branch currents are updated in every section using the latest voltages at each node; the node voltages are held constant while the updated branch currents are propagated backward along the feeder.
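The two sweeps can be sketched for a small radial chain; the feeder data below are illustrative, not the IEEE 33-bus parameters:

```python
# Radial chain: bus 0 (slack, 1.0 pu) -> 1 -> 2 -> 3
branches = [(0, 1, 0.02 + 0.01j), (1, 2, 0.03 + 0.015j), (2, 3, 0.025 + 0.012j)]
loads = {1: 0.10 + 0.05j, 2: 0.08 + 0.04j, 3: 0.12 + 0.06j}  # complex power (pu)

def fbs(branches, loads, tol=1e-9, max_iter=100):
    n = max(t for _, t, _ in branches) + 1
    V = [1.0 + 0j] * n
    for _ in range(max_iter):
        # Backward sweep: bus injection currents, accumulated into branch currents
        I = [0j] * n
        for b, s in loads.items():
            I[b] = (s / V[b]).conjugate()
        Ibr = {}
        for f, t, z in reversed(branches):          # leaves first (chain topology)
            downstream = sum(Ibr[(t, c)] for (p, c, _z) in branches if p == t)
            Ibr[(f, t)] = I[t] + downstream
        # Forward sweep: update voltages from the slack bus outward
        Vnew = V[:]
        for f, t, z in branches:
            Vnew[t] = Vnew[f] - z * Ibr[(f, t)]
        if max(abs(Vnew[k] - V[k]) for k in range(n)) < tol:
            return Vnew
        V = Vnew
    return V

V = fbs(branches, loads)
```

Voltage magnitudes fall monotonically along the feeder, as expected for a loaded radial chain.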

3.2 Particle Swarm Optimization

In the PSO technique, each candidate solution is treated as a particle that changes its position with time while moving through an n-dimensional search space. During this movement, every particle adjusts its position according to its own experience — its best position so far, called pbest — and according to the experience of neighboring particles, whose best encountered position is called gbest. The rate at which a particle changes its position is known as the particle velocity.

ALGORITHM
Step 1: Read the input line and bus data, as well as the bus voltage limits.
Step 2: Calculate the loss using the backward sweep–forward sweep distribution load flow.
Step 3: Generate a random population (array) of particles with random positions and velocities in the solution space (minimum loss and maximum saving). Set the iteration counter k = 0.
Step 4: For each particle, calculate the total loss using the loss equation if the bus voltages are within the limits given above; otherwise, the particle is infeasible.
Step 5: Compare each particle’s objective value with its individual best. If the objective value is less than pbest, set this value as the current pbest and record the corresponding particle position.


Step 6: Select the particle with the lowest individual best pbest among all particles, and set this pbest as the new overall best gbest.
Step 7: Update each particle’s velocity and position.
Step 8: If the maximum number of iterations has not been reached, set the iteration index to k = k + 1 and go back to Step 4.
Step 9: Print the best solution found; it contains the best DG positions and sizes.
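Steps 3–8 amount to a standard global-best PSO loop. A minimal sketch on a toy objective (the sphere function stands in for the loss calculation; the inertia and acceleration constants are assumed values):

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    Vel = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # pbest positions
    Pval = [f(x) for x in X]              # pbest objective values
    g = min(range(n_particles), key=lambda i: Pval[i])
    G, Gval = P[g][:], Pval[g]            # gbest position and value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                Vel[i][d] = (w * Vel[i][d]
                             + c1 * rng.random() * (P[i][d] - X[i][d])
                             + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + Vel[i][d]))
            val = f(X[i])
            if val < Pval[i]:             # Step 5: update pbest
                P[i], Pval[i] = X[i][:], val
                if val < Gval:            # Step 6: update gbest
                    G, Gval = X[i][:], val
    return G, Gval

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere)
```

In the paper's setting, `f` would be the load-flow loss with infeasible (voltage-violating) particles penalized with the base-case loss.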

4 Test Systems

The proposed approach is tested on the IEEE 33-bus radial distribution system under variable load (Fig. 1). In the present work, the yearly load curve is considered for the placement of DG. Table 1 shows the system load variation in amperes and the apparent power demand during the entire year, and Fig. 2 shows the system load curve.

Fig. 1 IEEE 33-bus system single line diagram


Table 1 IEEE 33-bus load in ampere/apparent power in kVA

S. No. | Month | Load (A) | Power feed (kVA)
1 | January | 135.1 | 2962.79
2 | February | 110.2 | 2416.36
3 | March | 128.3 | 2813.24
4 | April | 162.7 | 3567.54
5 | May | 171.5 | 3760.50
6 | June | 182.3 | 3997.31
7 | July | 195.5 | 4286.75
8 | August | 188.7 | 4137.64
9 | September | 159.4 | 3495.18
10 | October | 165.2 | 3622.36
11 | November | 145.2 | 3183.81
12 | December | 127.3 | 2791.32

Fig. 2 IEEE 33-bus system load curve

Flowchart of the proposed method: Start → input network data → calculate power loss using BFS → calculate IDG and power saved → determine the maximum saving and DG size for each bus → generate the initial population → check all constraints (set infeasible candidates to the base-case loss) → initialize pbest as the current position of each particle → evaluate each particle → assign gbest from among the pbest values → update the weight, velocity and position of the particles → update pbest → update gbest → print the optimal solution.


Fig. 3 IEEE 33-bus system voltage profile without DG

Table 2 DG size for the 33-bus system

S. No. | Type of DG | Bus No. | Size of DG
1 | Type-I | 18 | 311 kW
2 | Type-II | 12 | 150.2 kVAR
3 | Type-III | 18 | 343.73 kVA

Figure 3 shows the system voltage profile without DG at the different load profiles. The maximum voltage under normal operation is 0.9988 pu and the minimum is 0.9059 pu. With the application of the different kinds of DG in the system, the voltage magnitudes and system losses change. Table 2 shows the sizes of the different types of DG for the network; after applying the DG, the system voltage profile and losses change. System voltage profiles with the different types of DG are shown in Figs. 4, 5, and 6. The system minimum and maximum voltages under different load conditions are shown in

Fig. 4 IEEE 33-bus system voltage profile with Type-I DG


Fig. 5 IEEE 33-bus system voltage profile with Type-II DG

Fig. 6 IEEE 33-bus system voltage profile with Type-III DG

Table 3. System loss under different conditions is shown in Table 4. Bar graphs in Fig. 7 show the system loss comparison.


Table 3 33-bus system voltage with and without DG

S. No. | System | Minimum voltage (pu) | Maximum voltage (pu)
1 | Original system | 0.9059 | 0.9988
2 | Type-I | 0.9308 | 0.999
3 | Type-II | 0.9162 | 0.9989
4 | Type-III | 0.9435 | 0.9995

Table 4 System loss in kW with and without DG

Month | Without DG | With DG-I | With DG-II | With DG-III
JAN | 108.2 | 76.6 | 98.4 | 73.3
FEB | 35.4 | 20.2 | 30 | 21.4
MARCH | 97.1 | 67.6 | 87.6 | 64.8
APRIL | 160.2 | 119.4 | 148 | 113.7
MAY | 179.2 | 135.4 | 166.2 | 128.9
JUNE | 204.2 | 156.4 | 190.1 | 149.1
JULY | 237.4 | 184.6 | 222 | 176
AUG | 219.5 | 167 | 204.9 | 161.6
SEP | 165.5 | 123.8 | 153 | 118
OCT | 153.4 | 113.7 | 141.5 | 108.4
NOV | 125.9 | 91.9 | 116.5 | 86
DEC | 95.5 | 66.3 | 86.4 | 63.7

Fig. 7 33-bus system loss comparison
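The monthly losses in Table 4 can be aggregated into an annual loss-reduction figure; the short script below copies the Without DG, DG-I and DG-III columns from the table:

```python
# Monthly real power loss (kW) from Table 4
no_dg = [108.2, 35.4, 97.1, 160.2, 179.2, 204.2, 237.4, 219.5, 165.5, 153.4, 125.9, 95.5]
dg1   = [76.6, 20.2, 67.6, 119.4, 135.4, 156.4, 184.6, 167.0, 123.8, 113.7, 91.9, 66.3]
dg3   = [73.3, 21.4, 64.8, 113.7, 128.9, 149.1, 176.0, 161.6, 118.0, 108.4, 86.0, 63.7]

# Fractional annual loss reduction for each DG type
reduction_dg1 = 1 - sum(dg1) / sum(no_dg)   # Type-I DG
reduction_dg3 = 1 - sum(dg3) / sum(no_dg)   # Type-III DG
```

Over the year, Type-III DG reduces losses by roughly 29%, compared with roughly 26% for Type-I, consistent with the bar comparison in Fig. 7.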


5 Conclusion

The optimal siting of DG is critical for reducing the overall energy loss in the power delivery system. In this paper, a combined analytical and PSO technique is employed to find the optimal integration of DG. With the analytical method alone, the load voltage limits may be violated, and even when the voltages remain within limits, the computed DG size and line losses can increase. The optimal integration of DG using the PSO approach, considering the voltage limits of the system to minimize the real power loss, improves the results massively. In this work, the variation of the load is taken into account, which is more realistic in practice; the best location or size may not always be achievable due to constraints, such as the required DG size not being available in the market.

References

1. Singh D, Verma K (2008) GA based energy loss minimization approach for optimal sizing & placement of distributed generation. Int J Knowl-Based Intell Eng Syst 12(2):147–156
2. Sanda DE, Paul MM (2014) Simulation of a distributed generation system using specialized programs. In: Proceedings IEEE 2014 international conference on optimization of electrical and electronic equipment (OPTIM), pp 65–70
3. Hung D, Mithulananthan N (2014) Loss reduction and loadability enhancement with DG: a dual-index analytical approach. Appl Energy 11(5):233–241
4. Hien N, Mithulananthan N, Bansal R (2013) Location and sizing of distributed generation units for loadability enhancement in primary feeder. IEEE Syst J 7(4):797–806
5. Atwa Y, El-Saadany E, Salama M, Seethapathy R (2010) Optimal renewable resources mix for distribution system energy loss minimization. IEEE Trans Power Syst 25(1):360–370
6. Acharya N, Mahat P, Mithulananthan N (2006) An analytical approach for DG allocation in primary distribution network. Int J Electr Power Energy Syst 28(10):669–678
7. Ayres H, Salles D, Freitas W (2014) A practical second-order based method for power losses estimation in distribution systems with distributed generation. IEEE Trans Power Syst 29(2):666–674
8. Hung D, Mithulananthan N, Bansal R (2010) Analytical expressions for DG allocation in primary distribution networks. IEEE Trans Energy Conv 25(3):814–820
9. Mahmoud K, Yorino N, Ahmed A (2016) Optimal distributed generation allocation in distribution systems for loss minimization. IEEE Trans Power Syst 31(2):960–969
10. Srinivasu PN, Bhoi AK, Jhaveri RH, Reddy GT, Bilal M (2021) Probabilistic deep Q network for real-time path planning in censorious robotic procedures using force sensors. J Real-Time Image Process:1–13
11. Nayak SR, Sivakumar S, Bhoi AK, Chae GS, Mallick PK (2021) Mixed-mode database miner classifier: parallel computation of graphical processing unit mining. Int J Electr Eng Educ:0020720920988494
12. Shahzad M, Ahmad I, Gawlik W, Palensky P (2016) Load concentration factor based analytical method for optimal placement of multiple distribution generators for loss minimization and voltage profile improvement. Energies 9(4):287–295
13. Hung D, Mithulananthan N, Bansal R (2013) Analytical strategies for renewable distributed generation integration considering energy loss minimization. Appl Energy 105:75–85
14. Liu K, Sheng W, Liu Y, Meng X, Liu Y (2015) Optimal siting and sizing of DGs in distribution system considering time sequence characteristics of loads and DGs. Int J Electr Power Energy Syst 69:430–438

FSO at Moderate Atmospheric Turbulence Using 16 QAM

Manpreet Singh and Amandeep Singh Sappal

Abstract To evaluate the system and to analyse the effect of atmospheric turbulence, a mathematical model for optical communication devices in an open environment was created. The performance of the system is very good for a link range of 1 km at different temperature conditions, and the system is sustainable at a link range of 2–3 km.

Keywords Bit error rate · Free-space optics · Irradiance · Signal-to-noise ratio · Turbulence

1 Introduction

A communication system’s function is to relay data, which can be accomplished in a number of ways. Free-space optical communication is an evolving communication method that relies on the propagation of an optical beam through various media that interact with and influence the quality of the propagating optical signal. Designing sophisticated, accurate and cost-effective FSO links requires an understanding of atmospheric phenomena and how they influence light propagation, as well as dependable networks that can provide uninterrupted service with the required stability and quality [1]. In the last decade, FSO communication has received a great deal of attention for a variety of applications that need high-bandwidth wireless communication links. Satellite-to-satellite connections, up- and down-links between

M. Singh (B)
ECE Department, Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India

A. S. Sappal
ECE Department, Punjabi University, Patiala, Punjab, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_6


space platforms and ships, aircraft and other land platforms, as well as mobile or stationary terminals to solve the last-mile problem through the atmosphere, are just a few of these applications. Even so, there are a number of hazardous atmospheric channel conditions that can cause severe signal fading, if not total signal loss. Dust, gas, aerosols, molecules, water vapour and pollutants make up the atmosphere. Their sizes are comparable to the wavelength of a standard optical carrier, affecting carrier wave transmission in a way that a radio frequency (RF) system does not experience. The transmitted optical signal can be significantly weakened by scattering and absorption due to particulate matter; in addition, the quality of a signal carried by a laser beam propagating through the atmosphere can be severely degraded, leading to intensity fading, random signal losses and higher bit error rates at the receiver. Scintillation is the term used to describe the random fluctuation in the irradiance of a received optical laser beam induced by atmospheric turbulence [2–16]. As a result, the environment can be a limiting factor in the efficiency of a stable high-data-rate wireless FSO communication link, and it is important to understand how an optical wave interacts with the atmosphere in order to predict FSO communication performance over an atmospheric communication channel [1].

2 Atmosphere Turbulence

When defining the probability density function of irradiance fluctuation in a turbulent atmosphere, the beam is first described by its constituent electric field $\vec{E}$. The governing equation can be obtained from Maxwell’s electromagnetic equations for a spatially variable dielectric such as the atmosphere [2]:

$$\nabla^2 \vec{E} + k^2 n_{as}^2 \vec{E} + 2\nabla\left[\vec{E} \cdot \nabla \ln(n_{as})\right] = 0 \tag{1}$$

where the wave number is $k = 2\pi/\lambda$ and the vector gradient operator is

$$\nabla = \frac{\partial}{\partial x}\,\hat{i} + \frac{\partial}{\partial y}\,\hat{j} + \frac{\partial}{\partial z}\,\hat{k} \tag{2}$$

with $\hat{i}$, $\hat{j}$ and $\hat{k}$ being the unit vectors along the x, y and z axes, respectively. The last term on the left-hand side of Eq. 1 represents the turbulence-induced depolarization of the wave.


3 Variation in Atmospheric Turbulence

The magnitude of the refractive index variance and the inhomogeneities present in the atmosphere are used to classify atmospheric turbulence into regimes. These regimes are categorized as strong, moderate, or weak based on the distance travelled by the optical radiation through the atmosphere. Atmospheric turbulence results in signal fading, thus severely impairing the FSO link performance. Different models describe the pdf statistics of the irradiance fluctuation; unfortunately, due to the extreme complexity involved in mathematically modelling atmospheric turbulence, a single model valid for all turbulence regimes does not currently exist. These different models are the gamma–gamma, log-normal, and negative exponential models, whose relevant ranges of validity, according to the literature, lie in the weak, moderate, and saturation regimes. Atmospheric turbulence causes the ambient refractive index to fluctuate randomly along the path of the optical field or radiation traversing the atmosphere. Refractive index fluctuation is the direct result of random fluctuations in air temperature from point to point; air pressure, wind speed and altitude all play a role in these random temperature changes.
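In the weak-fluctuation regime the log-normal model is commonly used; a quick Monte Carlo sketch (the log-irradiance variance of 0.2 is an illustrative choice, not a value from the study) reproduces the scintillation index exp(σ²) − 1:

```python
import math
import random

random.seed(0)
sigma2 = 0.2  # log-irradiance variance (illustrative, weak-to-moderate turbulence)

# Log-normal irradiance samples, normalized so that E[I] = 1
samples = [random.lognormvariate(-sigma2 / 2, math.sqrt(sigma2))
           for _ in range(200_000)]

mean_I = sum(samples) / len(samples)
mean_I2 = sum(s * s for s in samples) / len(samples)
si = mean_I2 / mean_I ** 2 - 1   # scintillation index <I^2>/<I>^2 - 1
```

The estimate converges to exp(0.2) − 1 ≈ 0.221, the analytic scintillation index of this log-normal channel.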

4 Results and Discussions

The paper provides a thorough examination of the free-space optical communication system under various temperature conditions and moderate atmospheric turbulence. Bit error rate and received average irradiance are the two parameters used to determine the efficiency of the FSO communication system; the system's evaluation varies with the link configuration. A simulation setup has been built in Matlab for free-space optical communication (FSO). At different ambient temperatures, graphs of bit error rate versus received average irradiance are plotted. The link range is also increased from 1 to 3 km for each temperature value, i.e. −20, 0, 20, and 40 °C. Figures 1, 2, and 3 show the variations of bit error rate and signal-to-noise ratio due to the effect of temperature at −20 °C, 0 °C, 20 °C, 40 °C, and 60 °C for link ranges from 1 km to 3 km, respectively. The graphs show that only if the link range is kept up to 1 km will the system operate at sustainable values of SNR and BER, with better communication and minimum losses. It is also observed that the characteristics of the system remain approximately the same for all link ranges from 2 to 3 km at temperatures of −20, 0, and 20 °C.
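For reference, the BER of Gray-coded 16-QAM over a plain AWGN channel follows a standard closed form (this is the textbook approximation, not the chapter's turbulent-channel simulation):

```python
import math

def ber_16qam(ebn0_db):
    """Approximate BER of Gray-coded 16-QAM over an AWGN channel.
    BER ≈ (3/8) * erfc(sqrt(0.4 * Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.375 * math.erfc(math.sqrt(0.4 * ebn0))

# BER falls monotonically as Eb/N0 grows from 0 to 20 dB
bers = [ber_16qam(s) for s in range(0, 21, 2)]
```

Turbulence effectively shifts this curve: fading reduces the received average irradiance, and hence the effective SNR, which is why the plotted BER degrades with link range and temperature.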


M. Singh and A. S. Sappal

Fig. 1 BER variations with RAI for link range of 1 km at −20 to 60 °C temperature

5 Conclusion

This paper has presented a thorough analysis of the efficiency of a free-space optical communication system under various temperature conditions and moderate atmospheric turbulence. The parameters Signal-to-Noise Ratio and Received Average Irradiance are used to analyse the system. According to the findings, the effect of temperature is nearly identical for link ranges of 2 and 3 km. The system is highly sustainable only for a link range of up to 1 km.

FSO at Moderate Atmospheric Turbulence Using 16 QAM

Fig. 2 BER variations with RAI for link range of 2 km at −20 to 60 °C temperature


Fig. 3 BER variations with RAI for link range of 3 km at −20 to 60 °C temperature




Performance Enhancement of Planar Antenna for Wireless Applications

Sushil Kakkar and Shweta Rani

Abstract In this article, an approach is detailed to validate the performance behavior of a planar antenna when the dimensional descriptors of its ground plane are perturbed. The controlling capacity of the ground plane is responsible for the 24.35% miniaturization and penta-band operation of the proposed planar antenna. To confirm the achieved results and to acquire the optimal dimensions, a critical comparison has been made by varying the ground plane length.

Keywords Multiband · Miniaturization · Ground plane · Planar antenna · Wireless applications

1 Introduction

The unprecedented development of low-cost miniaturized antennas that operate at various bands for wireless applications has been a hot research area in both industry and academia. Next-generation antennas must support multiband operation and integrate easily with RF circuits. The simple planar microstrip antenna provides a solution to these requirements [1]. Over the last two decades, several multiband antenna designs have been analyzed and described, such as spiral slot antennas [2], planar antennas for dual-band or multiband operation, planar inverted-F antennas [3, 4], monopole antennas with triangular geometry [5], fractal antennas [6–8] and many others. More recently, the development of miniaturized multiband antennas has improved remarkably through the use of perturbed ground planes. This paper proposes a simple method to generate multiband operation by implementing dimensional changes in the planar antenna; the perturbation of the ground plane takes the form of a partial ground plane.

S. Kakkar (B) ECE Department, Bhai Gurdas Institute of Engineering and Technology, Sangrur, Punjab, India
S. Rani ECE Department, Giani Zail Singh Campus College of Engineering and Technology, MRSPTU, Bathinda, Punjab, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_7


2 Design Configuration

The presented structure is inspired by simple square geometry; the square patch is among the oldest and most promising two-dimensional shapes in patch antennas. The radiating element consists of two small square patches connected diagonally to each other. Both square patches are the same size, with a side length 'a' of 20 mm each. FR4 dielectric material with a relative permittivity of 4.4 has been used as the substrate, with the height kept at 1.57 mm for all the presented structures. The square ground plane of the presented planar antenna has finite dimensions with side length 'b' = 40 mm. The geometry and exact dimensions of the radiator are shown in Fig. 1. To further enhance the performance characteristics of the presented antenna, a critical analysis has been conducted by varying the length of the ground plane. The ground plane constitutes an important part of the patch antenna system design, and its dimensional behavior must be taken care of while developing planar antennas. The different performance parameters of a planar patch antenna, such as scattering parameters, gain, and radiation characteristics, depend strongly on the chosen substrate and on the dimensions and shape of the patch. Taking its cost-effectiveness into account, the FR4 substrate has been chosen for the presented antenna.

Fig. 1 Geometry of proposed antenna
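As a rough cross-check of the chosen dimensions, the textbook TM10 resonance of an isolated square patch can be estimated from the side length, substrate permittivity, and height (a sketch using Hammerstad's effective-permittivity formula, with the fringing-length extension ignored). This first-cut estimate applies only to a single isolated patch; the diagonal coupling of the two patches and the later partial ground plane shift the actual resonances, which is why full-wave simulation is still required:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def patch_resonance_ghz(side_m, eps_r, h_m):
    """First-cut TM10 resonant frequency of an isolated square microstrip
    patch: f = c / (2 * L * sqrt(eps_eff)), with Hammerstad's eps_eff."""
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / side_m) ** -0.5
    return C0 / (2 * side_m * math.sqrt(eps_eff)) / 1e9

# Dimensions from the text: a = 20 mm, FR4 (eps_r = 4.4), h = 1.57 mm
f_est = patch_resonance_ghz(side_m=0.020, eps_r=4.4, h_m=0.00157)
```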


3 Results and Discussion

The simulations of the chosen antenna are performed using the IE3D EM simulator. The scattering parameters of this antenna, without any perturbation of the ground, are shown in Fig. 2. The reference antenna exhibits a single band from 6.1 to 7.98 GHz, with a reflection coefficient of −21.08 dB at the dominant frequency of 7.61 GHz.
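A quick conversion shows how good this match is. Given only the reported S11 reading, the reflection-coefficient magnitude, VSWR, and reflected-power fraction follow directly (a sketch; the −21.08 dB value is the one quoted above):

```python
def match_metrics(s11_db):
    """Derive |Gamma|, VSWR, and reflected-power fraction from S11 in dB."""
    gamma = 10 ** (s11_db / 20)      # |reflection coefficient|, voltage ratio
    vswr = (1 + gamma) / (1 - gamma)
    reflected = gamma ** 2           # fraction of incident power reflected
    return gamma, vswr, reflected

gamma, vswr, reflected = match_metrics(-21.08)   # reference antenna at 7.61 GHz
```

Less than 1% of the incident power is reflected, comfortably inside the usual VSWR < 2 (S11 < −10 dB) matching criterion.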

3.1 Analysis by Varying the Length of the Ground Plane

In order to further enhance the resonant and radiation characteristics of the presented planar antenna, a detailed analysis of the ground plane dimensions has been carried out. This analysis consists of reducing the ground plane to a partial ground plane and observing the antenna's output as the length of the partial ground plane is varied. To provide a better understanding of the final structure, Fig. 3 depicts the geometrical design of the antenna with top and bottom views. The resonant characteristics reveal that the proposed structure with a partial ground plane provides multiband operation with better impedance matching, as illustrated in Fig. 4. It may also be observed that a size reduction of

Fig. 2 Resonating parameters of the presented antenna


Fig. 3 a Top view and b bottom view of the proposed antenna

Fig. 4 Comparative results by varying the length of ground plane


Fig. 5 S11 parameters of the antenna with partial ground

24.35% relative to the reference antenna is achieved by utilizing the partial ground plane technique. Etching out 50% of the copper area of the ground plane also leads to a significant reduction in the antenna's manufacturing cost. The optimal dimensions of the partial ground plane have been obtained by observing the variation in ground plane length while keeping the other dimensional parameters constant. The associated results illustrate that the presented antenna performs best at a partial ground plane length of 20 mm among the lengths considered here. Moreover, the resonant performance of the proposed antenna with optimal dimensions exhibits penta-band characteristics. The S-parameters of the proposed antenna with a partial ground plane are given in Fig. 5. The antenna resonates at 1.91 GHz, 3.204 GHz, 5.592 GHz, 7.61 GHz, and 9.265 GHz with reflection coefficients of −19.77 dB, −19.49 dB, −26.05 dB, −17.16 dB, and −17.41 dB, respectively. The shift of the frequency bands toward the lower side after employing the partial ground plane is responsible for the miniaturization.

3.2 Radiation Patterns

Radiation characteristics of the chosen antenna structure are taken at all the resonant frequency points in the lower and upper bands, and the resulting radiation patterns are shown in Fig. 6. The elevation-plane radiation patterns of the presented antenna


Fig. 6 Radiation patterns of antenna a E-plane b H-plane

are dipole-like, with figure-of-eight shapes, whereas the patterns in the azimuthal plane are omni-directional in nature.

3.3 Gain

The gain-versus-frequency plot of the presented antenna (with the optimal partial ground plane) is shown in Fig. 7. The results depict that the maximum gain of the presented antenna is 3.71 dB at 3.204 GHz.

4 Conclusion

The performance enhancement of the presented antenna is obtained by dynamically controlling the dimensions of the ground plane. The obtained results reveal that significant improvement in the resonant and radiation properties of the planar antenna can be achieved using the presented methodology.


Fig. 7 Gain of the presented antenna

References
1. Balanis CA (1997) Antenna theory. John Wiley & Sons, Inc
2. Filipovic DS, Volakis JL (2003) Novel slot spiral antenna designs for dual-band/multiband operation. IEEE Trans Ant Prop 51:430–440
3. Ali M, Hayes GJ, Hwang H-S, Sadler RA (2003) Design of a multiband internal antenna for third generation mobile phone handsets. IEEE Trans Ant Prop 51:1452–1461
4. Elsadek H, Nashaat DM (2007) Quad band compact size trapezoidal PIFA antenna. J Elect Wav Appl 21:865–876
5. Song Y, Jiao YC, Zhao G, Zhang FS (2007) Multiband CPW-fed triangle-shaped monopole antenna for wireless applications. Prog Elect Res PIER 70:329–336
6. Kaur I et al (2015) Design and analysis of proximity fed single band microstrip patch antenna with parasitic lines. In: International conference on modeling and simulation: techniques and applications, June 4–5, 2015, New York
7. Yadav A, Singh VK, Yadav P, Beliya AK, Bhoi AK, Barsocchi P (2020) Design of circularly polarized triple-band wearable textile antenna with safe low SAR for human health. Electronics 9(9):1366
8. Singh P, Singh VK, Lala A, Bhoi AK (2018) Design and analysis of microstrip antenna using multilayer feed-forward back-propagation neural network (MLPFFBP-ANN). In: Advances in communication, devices and networking. Springer, Singapore, pp 393–398

Invariant Feature-Based Dynamic Scene Classification Using the Optimized Convolution Neural Network

Surender Singh

Abstract The classification of a scene from digital images has drawn great attention because of its wide applications in the real world. For more than 40 years, scene classification and recognition have remained a focus of research in computer vision, driven by the abundance of digital image and video databases. Researchers need to be able to access and classify scenes from both digital images and videos effectively and efficiently, which is possible only when the images and/or their contexts are well characterized. Therefore, the ability to accurately describe and recognize scenes is very important for any Scene Classification Module (SCM). In this paper, the proposed research work addresses the classification of (i) Beach, (ii) Boiling Water, (iii) Forest Fire, (iv) Lightning Storm, (v) Snowing, (vi) Street, (vii) Volcano, and (viii) Windmill Farm natural scenes with the designed SCM. The proposed SCM addresses several problems in visual perception, such as object detection and classification according to texture-, shape-, and color-based invariant features. Researchers have already presented scene classification models with good accuracy on a few scene classes, but for several classes the accuracy of existing modules is not acceptable for real-time applications and needs improvement. We have therefore designed an SCM for the eight classes mentioned above, with invariant features based on an optimization approach. For the designed SCM, image enhancement along with image segmentation is used in the pre-processing phase to increase the quality of the desired portion of the image. Here, the Speeded Up Robust Features (SURF) descriptor is used as the invariant feature extraction algorithm, owing to its fast response rate, with the Cuckoo Search Algorithm as the optimization approach.
In the proposed SCM, a Convolutional Neural Network (CNN) is used as a classifier to train and validate the proposed model on an optimized feature set with different classes of scene images taken from the Maryland–YUPenn dataset; the experimental accuracy on this dataset is close to 98.72%, with fast classification time.

Surender Singh (B) BGIET, Sangrur, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_8


Keywords Scene classification · Pattern recognition · Cuckoo search algorithm (CSA) · Scene region segmentation · Speeded up robust features (SURF) · Convolutional neural network (CNN)

1 Introduction

Nowadays, a large number of images are publicly available, from collections of pictures to web pages and video databases, obtained through digital cameras and multimedia services. The ability to categorize images into semantic classes and objects is therefore necessary to manage and organize image databases [1]. The main aim of scene classification is to identify the class or category of new instances according to a training set whose classes are known. Scene classification is the labeling of images into one of a set of predefined categories such as beach, boiling water, forest fire, lightning storm, snowing, street, volcano, and windmill farm [2]. Image classification may assign an image to one or more predefined classes at the same time. The computer vision domain involves image acquisition, processing, understanding, and analysis methods for producing numerical or symbolic information from high-dimensional real-world data. The purpose of image classification is to replicate the ability of human vision by electronically perceiving and observing an image. Sub-domains of computer vision include image classification, event detection, video tracking, object recognition, learning, indexing, motion estimation, and image restoration [3]. The process of scene classification involves the following steps:
• Pre-processing: atmospheric correction, noise removal, image transformation, principal component analysis, etc.
• Object detection: detecting the position and other features of a moving object in an image obtained using a camera, then extracting features from the detected object and estimating the probability that objects are present in the image plane [4].
• Training of the SCM: in this phase, appropriate attributes are selected that describe the pattern of the image correctly.
• Scene classification: the detected objects are classified among the predefined classes using the methods that best match the image and target patterns [5].
Scene classification has become an active study area in computer vision due to its extensive applications in scene interpretation, traffic monitoring, and video content analysis [6]. It is a significant computer vision problem and has gained considerable attention in the recent past. It differs from standard object detection/classification in that a scene consists of several entities that are often arranged in unexpected layouts [7]. In addition, the importance of scene classification also derives from its fundamental role in solving more difficult tasks. As depicted in


Fig. 1 Different types of scenes

Fig. 1, there are large variations in illumination, viewpoint, and scale; even standard shots within the same class vary widely, which makes classification a difficult task. Scene classification in video, compared with classification from a single image, is even more difficult because of the high complexity of extracting spatial and temporal features from videos. Recognizing the category of objects using feed-forward processing in ordinary visual recognition takes only a fraction of a second. Scene classification is a key issue in image understanding. In the past, several automatic techniques for associating scenes with semantic labels have been proposed, which can improve computer vision applications such as browsing, retrieval, and object recognition. For precise scene classification, two steps are necessary. The first is extracting an efficient representation from the visual input; the primary requirement here is to create highly robust, computationally inexpensive features. The second involves algorithms and classifiers that process these representations effectively. With machine learning, models that can analyze complex, large-scale data and generate accurate, fast results can be produced quickly and automatically [8]. Many scenes are important for humans or machines to model automatically, and some of them are shown in Fig. 1. This paper describes the working of a Scene Classification Module (SCM); its benefits are that it does not depend on human error, requires no previous information or experience, and is faster than a supervised method. The main drawback of this technique is highly separable grouping due to special effects on the images. These effects, for different viewing angles, illumination, and scaling, are shown in Fig. 2.
This paper presents invariant feature-based dynamic scene classification using an optimized Convolutional Neural Network (CNN) and compares it with existing approaches. Section 2 presents the literature survey of existing work related to scene classification. The architecture of the proposed SCM is described in Sect. 3. The experimental results are given in Sect. 4, and the conclusion with future directions is discussed in Sect. 5.


Fig. 2 a Variation in viewing angle b changes in illumination c scale variation

2 Literature Survey

This section surveys existing work on scene classification systems using different techniques and algorithms. Huang et al. [1] proposed a unified framework that filters spatial and temporal characteristics to represent a dynamic scene. Two CNN variants were deployed to encode spatial appearance and short-term dynamics into a short-term deep feature (STDF). The authors evaluated the effectiveness of the model for classifying dynamic scenes through extensive experiments on three datasets: YUPENN, Maryland, and UCF101, on all three of which the proposed model achieved better performance. Otter et al. [2] provided a brief introduction to and overview of deep learning architectures and methods, then moved through the plethora of novel research, reviewing areas that span core linguistic processing and its numerous applications. Nowadays, extremely sophisticated state-of-the-art NLP applications are omnipresent; these include machine translators from Google and Microsoft, which more or less competently translate between scores of languages, as well as systems that process and respond to speech instructions. Both convolutional and recurrent neural networks contribute to the state of the art in natural language processing, and neural architectures have instilled more "comprehensive" models of natural language, but it is not yet clear which will yield superior results across the NLP field's rich and varied terrain. A few general trends can be drawn from the evaluation of all the models surveyed. Tong et al. [3] reviewed the classification


of indoor-outdoor scenes, including feature extraction, classification, and the associated datasets, discussing their benefits and disadvantages. They concluded that some difficult issues remain unresolved and suggested possible alternatives. The indoor-outdoor scene classification problem has been studied for almost 20 years and has been widely applied to general scene classification, image retrieval, image processing, and robotics, but there is no agreement on a specific method that can completely solve it. Now that larger datasets have been created and machine learning techniques, particularly deep learning, attain notable efficiency in computer vision, the review strives to guide researchers toward a stronger and more robust solution. Xia et al. [4] suggested a scheme to classify landscape pictures into the distinct groups of sunset, desert, hills, trees, and sea. The suggested approach to image classification makes crucial use of machine learning techniques. Concentrating on deep learning methods for feature extraction and image classification, the authors designed a single model that predicts the probabilities of distinct labels, rather than building separate binary models, and used threshold values on these probabilities to convert them into the presence or absence of each class label. This technique yields greater precision than other techniques and needs less time. Shahriari and Bergevin [5] provided a hierarchical two-stage scene classification structure based on the indoor-versus-outdoor notion. The proposed strategy for scene recognition is a straightforward yet very effective model of global image representation.
Although the notion of indoor versus outdoor has long existed in the literature, the suggested model classifies all outdoor scene classes against an undifferentiated indoor class. To accomplish this, the scene classification problem is posed by characterizing scenes with distributions of quantized filter responses; the classifier then outputs either one of several outdoor scene classes or a generic indoor scene class. Huynh-The et al. [6] presented a robust foreground detection technique that can adapt to scenes with distinct movement speeds. A key contribution of that paper is background estimation using a new algorithm, neighbor-based intensity correction (NIC), which identifies and modifies motion pixels using the background and the current frame difference. The foreground is then identified by background subtraction with an optimal threshold calculated by the Otsu method. The model was evaluated on well-known object detection and tracking datasets; in the experiments, NIC outperformed several sophisticated techniques at suppressing foreground confusions in dynamic scenes caused by light artifacts, lighting changes, and camera jitter. Shao et al. [7] implemented intuitive but efficient time-aware crowd motion channels by evenly slicing the video volume at distinct scales. Multiple CNN structures were proposed with distinct data-fusion approaches and weight-sharing schemes to learn connectivity from these motion channels both spatially and temporally. A new large-scale crowd dataset was built consisting of 10,000 videos from 8257 crowded scenes,


and a set of 94 attributes was built. Extensive tests on crowd video attribute prediction proved the novel method's efficiency over the state of the art.

Based on this survey, we draw some conclusions that help to address the problems researchers face when developing an SCM. An SCM must identify the exact pattern for the exact scene based on texture, color, and other features, and it has helped many researchers go deeper into the computer vision field. From the literature, most models are designed for only 4–6 types of scenes, and they do not distinguish well between visually similar scene types such as forest fire and volcano. So, in this research work, an automated SCM has been designed for 8 types of scenes: beach, boiling water, forest fire, lightning storm, snowing, street, volcano, and windmill farm. Any classification process goes through two stages, namely training followed by classification. The training mechanism requires a set of unique features for the different types of scenes, and this depends on the pre-processing technique used to segment the Region of Interest (ROI) of the scene. In existing work, however, feature uniqueness is limited and needs enhancement through feature selection based on optimization techniques; here, the Cuckoo Search Algorithm (CSA) is used as the optimization technique together with the SURF descriptor. Previous research has utilized CNN as a multiclass classifier, but unless the training set is prepared appropriately, CNN cannot achieve high classification accuracy. The goal of this research work is therefore to enhance the classification accuracy of the SCM using CSA as the optimization technique along with CNN as the classifier for invariant SURF features.
The main contributions of this research work are as follows:
• The SURF descriptor is used as an invariant feature extraction technique for scene images based on their texture, shape, and color.
• CSA is used to improve SURF feature uniqueness with a novel fitness function.
• CNN is used as a classifier to train the proposed SCM.
• Finally, the performance of the system is evaluated using parameters such as precision, recall, f-measure, error, and accuracy, and compared with the existing state of the art to validate the SCM.
The architecture of the designed SCM is described in the next section of this paper.
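The CSA optimization step can be sketched generically as follows. This is a minimal cuckoo search in the style of Yang and Deb (Lévy-flight moves around the best nest, a fraction pa of the worst nests abandoned each generation); the sphere function stands in for the paper's Eq. 1 fitness, and all names and constants here are illustrative, not the authors' implementation:

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(fitness, dim, n_nests=15, pa=0.25, iters=200, lo=-5.0, hi=5.0):
    """Minimize `fitness` over [lo, hi]^dim with a basic cuckoo search."""
    rng = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    clip = lambda x: [min(max(xi, lo), hi) for xi in x]
    nests = [rng() for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Levy flight scaled by the distance to the current best nest
            cand = clip([x + 0.01 * levy_step() * (x - b) * random.gauss(0, 1)
                         for x, b in zip(nest, best)])
            if fitness(cand) < fitness(nest):   # greedy replacement
                nests[i] = cand
        # abandon a fraction pa of the worst nests and rebuild them randomly
        nests.sort(key=fitness)
        n_drop = max(1, int(pa * n_nests))
        nests[-n_drop:] = [rng() for _ in range(n_drop)]
        best = min(nests + [best], key=fitness)
    return best

# Demo objective (stand-in for the paper's Eq. 1 fitness): sphere function
sol = cuckoo_search(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the SCM, the candidate solutions would encode SURF feature subsets and the fitness would compare each feature against the mean-feature threshold described in the algorithm listing below rather than the sphere demo used here.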

3 Architecture of SCM

The proposed invariant feature-based dynamic SCM using the optimized CNN for scene images consists of three main steps:
A. In the starting phase, various image pre-processing techniques are applied to improve scene image quality, and an appropriate segmentation method is applied to extract the exact Region of Interest (ROI).


Fig. 3 Block diagram of proposed SCM (input phase: image acquisition; processing phases: pre-processing and ROI extraction, ROI invariant feature extraction, feature optimization with CSA; output phase: classified output using CNN)

B. After pre-processing the scene image, the most important invariant features are extracted from the segmented ROI using the SURF descriptor.
C. Finally, the classification of the scene is performed using CNN with the optimized SURF features to produce the desired output of the SCM (Fig. 3).

The challenge of this research work is to classify scenes from digital images. Training of the SCM is done using CNN on the basis of an invariant feature set extracted from the segmented ROI using the SURF descriptor, with CSA as the optimization technique to select a set of unique features for each class of scene images. The subsequent steps of the SCM demonstrate the phases that need to be accomplished:
• Initially, a simulator known as the SCM is designed in MATLAB using the GUI (Graphical User Interface) concept.
• After designing the simulator, the first step is to train the SCM on the Maryland–YUPenn database. Eight different types of images have been taken into consideration: (i) Beach, (ii) Boiling Water, (iii) Forest Fire, (iv) Lightning Storm, (v) Snowing, (vi) Street, (vii) Volcano, and (viii) Windmill Farm.
• The training of the SCM has been performed using the CNN classifier with 70% of the database scene images; the remaining 30% are used for testing and validating the SCM. The training of the designed SCM has been analyzed on the basis of the parameters discussed in Fig. 4.
• After uploading a test image, pre-processing steps such as image enhancement and image segmentation are applied. In this step, the quality of the scene image is enhanced. The next process is segmentation, to segment the ROI of the scene

Fig. 4 CNN architecture in SCM


image using the K-means algorithm. The following steps are performed during segmentation with K-means:
– Two different center points are selected randomly from the pixels in the scene image.
– The Euclidean distance between every pixel and each center point is determined, and each pixel is allocated to its nearest group.
– The center point position is reallocated by computing the mean value of each group.
– The new center point positions are reallocated within the same image.
– If the center point values change, the process is repeated until stable center points are obtained.
• In this research, the SURF descriptor is used to determine the desired features of the test image. Since SURF works on gray-scale images, the colored image is first converted to gray scale; SURF then determines the desired features, indicated by green dots on the test image in Fig. 5.
• The features are optimized using the CSA algorithm on the basis of the fitness function of the SCM given in Eq. 1.
• After optimization, the model is trained using CNN; the hybrid algorithm of CNN with CSA is written as:

Fig. 5 SURF invariant features

Invariant Feature-Based Dynamic Scene Classification …


Algorithm: CNN with CSA

Required Input:
  T-Data - SURF feature points as training data
  Cat    - category in terms of classes
  N      - number of carriers in terms of neurons
Obtained Output:
  CR     - classified results and parameters

1   Start
2   To optimize T-Data, the Cuckoo Search Algorithm (CSA) is used
3   Set up the basic parameters of CSA:
      Egg size (E) - based on the number of invariant features
      OT           - other eggs
      OT-Data      - optimized training data
    Fitness function: Eq. 1, where the current term is the SURF feature in T-Data and the threshold term is the mean of all invariant SURF features
4   Calculate the length of T-Data in terms of R
5   Set the optimized training data, OT-Data = []
6   For i = 1 to R
7       Ec = T(i)          // current feature from the feature set
8       Et = mean(OT)      // average of the other eggs
9       BestProp = Fit(f)
10      OT-Data = CSA(Fit(f), T-Data, CSA set-up)
11  End For
12  Network initialization using the following parameters:
      - Number of epochs (E)   // iterations used by the network
      - Number of neurons (N)  // used as carriers
      - Performance: MSE, gradient, mutation and validation
      - Technique: Levenberg–Marquardt
      - Data division: random
13  For i = 1 to length(OT-Data)
14–31   Else-if chain: assign OT-Data(i) to group G(c) for its class c = 1 … 8; otherwise (any other image) G(9) = OT-Data(i)
32      End If
33  End For
34  Initialize the network using the training data and groups
35  SCM-Net = newff(OT-Data, G, N)   // call the neural-network initialization function
36  Set the training parameters according to the requirements and train the system
37  SCM-Net = train(SCM-Net, OT-Data, G)

Classification of the SCM using SCM-Net:
38  Test-Image SURF = SURF features of the test image
39  Classification Result = simulate(SCM-Net, Test-Image SURF)
40  If Classification Result = Matched
41      CR = matched category
42      Return classification parameters
43  Else
44      CR = "Sorry"
45      Return classification parameters
46  End If
47  Return CR as classified results and output parameters
48  End Function
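Two of the steps above, the two-center K-means segmentation and the cuckoo-search feature selection, can be sketched in Python (the paper's implementation is in MATLAB, and since the printed fitness function of Eq. 1 did not survive reproduction, the fitness assumed here simply rewards keeping features stronger than the mean strength, as the surrounding text describes):

```python
import random

def kmeans_pixels(pixels, k=2, max_iter=100, rng=random):
    """Two-center K-means on grayscale intensities (1-D sketch of the
    SCM segmentation step)."""
    centers = rng.sample(pixels, k)                 # random initial centers
    for _ in range(max_iter):
        groups = [[] for _ in range(k)]
        for p in pixels:                            # nearest-center assignment
            groups[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        new_centers = [sum(g) / len(g) if g else centers[c]
                       for c, g in enumerate(groups)]
        if new_centers == centers:                  # stable centers: stop
            break
        centers = new_centers
    return centers

def cuckoo_select(features, n_nests=10, n_iter=50, pa=0.25, seed=0):
    """Simplified cuckoo-search feature selection.  The fitness used here
    (keep features above the mean strength) is an assumption standing in
    for the paper's Eq. 1."""
    rng = random.Random(seed)
    mean_strength = sum(features) / len(features)

    def fitness(mask):
        return sum(f - mean_strength for f, keep in zip(features, mask) if keep)

    nests = [[rng.randint(0, 1) for _ in features] for _ in range(n_nests)]
    for _ in range(n_iter):
        for i in range(n_nests):
            cuckoo = [b ^ int(rng.random() < 0.1) for b in nests[i]]  # random flip walk
            j = rng.randrange(n_nests)
            if fitness(cuckoo) > fitness(nests[j]):
                nests[j] = cuckoo                   # better egg replaces nest j
        nests.sort(key=fitness, reverse=True)       # abandon worst fraction pa
        for i in range(int(n_nests * (1 - pa)), n_nests):
            nests[i] = [rng.randint(0, 1) for _ in features]
    best = max(nests, key=fitness)
    return [f for f, keep in zip(features, best) if keep]
```

The selected subset is then what would be handed to the network-training stage of the algorithm above.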

4 Results and Discussion

In this section, the simulation results of the proposed SCM for scene classification are discussed, and the efficiency of the proposed work is compared with the existing work [1]. The training and testing of the proposed mechanism are evaluated on the Maryland-Yupenn dataset. By applying the proposed algorithms, the outcomes below are computed with quality-based parameters. A comparison is drawn with the existing work [1] to show the effectiveness of the proposed work with respect to different types of scene classes, based on sample data from each category, and presented graphically. The computational parameters used are defined below:

i.   Precision = TP / (TP + FP)                                    (2)
ii.  Recall = TP / (TP + FN)                                       (3)
iii. F-measure = 2 × (Precision × Recall) / (Precision + Recall)   (4)
iv.  Accuracy = (TP + TN) / (TP + TN + FP + FN)                    (5)

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively.

v.   Error: the estimated difference observed during the classification of an image.
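Equations (2)–(5) follow directly from the confusion-matrix counts; a small helper (the counts shown are illustrative only, since the paper reports the resulting rates rather than raw counts):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute Eqs. (2)-(5) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f_measure, accuracy

# Illustrative counts only:
p, r, f, a = classification_metrics(tp=98, tn=100, fp=2, fn=2)
```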


Table 1 Performance parameters

Number of test images  Precision  Recall  F-measure  Accuracy (%)  Error (%)  Execution time (s)
1                      0.986      0.980   0.983      99.97         0.020      0.036
2                      0.986      0.979   0.9832     99.83         0.162      0.005
3                      0.974      0.969   0.971      98.94         0.027      0.021
4                      0.964      0.953   0.958      95.68         0.064      0.028
5                      0.986      0.981   0.983      99.77         0.225      0.002
6                      0.981      0.975   0.977      97.58         0.178      0.015
7                      0.985      0.982   0.983      99.27         0.179      0.052

vi.  Execution time: the time required to provide the results while detecting scene images (Table 1).

The precision measured after uploading the test scene image, for seven different samples, is shown in Fig. 6. From the figure, it is clearly observed that the precision for most of the samples is greater than 98%. A higher precision rate indicates that the system is capable of identifying images with higher accuracy. The recall rate measured for the seven test scene images is shown in Fig. 7; the average recall rate observed for the proposed model is about 0.9741 (Fig. 8). The combination of recall and precision is known as the F-measure, which is used to determine the balance of the classification model; the average rate determined during the experiment is nearly 0.97 (Fig. 9). Detection accuracy is also one of the most essential parameters examined during the detection process of a scene classifier system. The highest accuracy detected among

Fig. 6 Precision of SCM


Fig. 7 Recall of SCM

Fig. 8 F-measure of SCM

Fig. 9 Classification accuracy of SCM


the uploaded test images is about 99.97%, observed at sample 1. The average accuracy over the seven test scene images is 98.72%. The error observed after detecting the scene images is shown in Fig. 10; the minimum error, about 0.02, is observed for sample 1, and the average error is approximately 0.122 (Fig. 11). Execution time represents the duration taken by the system to detect an image and provide results; the average execution time over the seven test samples is 0.022 s. Also, to show the effectiveness of the proposed scene classification system, a comparison between the proposed work and the existing work of Huang et al. is presented in Table 2. The comparative graph for the proposed as well as the existing work is depicted in Fig. 12. From the graph, it is clearly seen that

Fig. 10 Error of SCM

Fig. 11 Execution time (s)

Table 2 Accuracy comparison with Huang et al.

Proposed work (%)  Huang et al. (%)
98.72              95

Fig. 12 Accuracy comparison

the proposed system detects scene images with higher accuracy than the existing work, a relative improvement in classification accuracy of 3.92%.
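The averages and the improvement figure quoted in this section can be recomputed from Tables 1 and 2 (Python is used here purely as a checking tool; note that the 3.92% is the gain relative to the 95% baseline, the absolute gain being 3.72 percentage points):

```python
# Rows of Table 1: (precision, recall, f_measure, accuracy_pct, error_pct, time_s)
rows = [
    (0.986, 0.980, 0.983,  99.97, 0.020, 0.036),
    (0.986, 0.979, 0.9832, 99.83, 0.162, 0.005),
    (0.974, 0.969, 0.971,  98.94, 0.027, 0.021),
    (0.964, 0.953, 0.958,  95.68, 0.064, 0.028),
    (0.986, 0.981, 0.983,  99.77, 0.225, 0.002),
    (0.981, 0.975, 0.977,  97.58, 0.178, 0.015),
    (0.985, 0.982, 0.983,  99.27, 0.179, 0.052),
]
avg = [sum(col) / len(rows) for col in zip(*rows)]
print(round(avg[1], 4))  # average recall   -> 0.9741
print(round(avg[3], 2))  # average accuracy -> 98.72
print(round(avg[4], 3))  # average error    -> 0.122
print(round(avg[5], 4))  # average time     -> 0.0227 (reported as 0.022 s)

# Improvement over the baseline of Huang et al. (Table 2)
proposed, existing = 98.72, 95.0
print(round(100 * (proposed - existing) / existing, 2))  # relative gain -> 3.92
```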

5 Conclusion and Future Work

In this paper, an invariant feature-based dynamic scene classification using an optimized CNN is proposed. It provides a detailed view of the different applications and potential challenges of segmenting and classifying scenes from digital images, which is a difficult task in the real world. Eight different types of scene images, namely (i) Beach, (ii) Boiling Water, (iii) Forest Fire, (iv) Lightning Storm, (v) Snowing, (vi) Street, (vii) Volcano, and (viii) Windmill Farm, have been collected from the Maryland-Yupenn dataset. The system is trained on invariant SURF feature sets from these eight data types. Each category comprises 10 images, so a total of 80 images have been considered in the research. The main purpose of the research is to identify scene images using a combination of feature extraction and classification techniques. The designed SCM achieved 98.72% detection accuracy with an execution time of 0.022 s. From the test results, it was found that the detection accuracy of the proposed work is higher than that of the existing work.


In future, the work can be extended by using other feature extraction techniques such as the Scale Invariant Feature Transform (SIFT) along with different feature optimization schemes such as the Genetic Algorithm (GA), and by analyzing the effect on the performance parameters. The work can also be extended by considering other scene classes such as sky, trees, mountains and many more.

References

1. Huang Y, Cao X, Wang Q, Zhang B, Zhen X, Li X (2018) Long-short-term features for dynamic scene classification. IEEE Trans Circ Syst Video Technol 29(4):1038–1047
2. Otter DW, Medina JR, Kalita JK (2018) A survey of the usages of deep learning in natural language processing. arXiv preprint arXiv:1807.10854
3. Tong Z, Shi D, Yan B, Wei J (2017) A review of indoor-outdoor scene classification. In: 2017 2nd international conference on control, automation and artificial intelligence (CAAI 2017). Atlantis Press
4. Xia GS, Hu J, Hu F, Shi B, Bai X, Zhong Y, Lu X et al (2017) AID: a benchmark data set for performance evaluation of aerial scene classification. IEEE Trans Geosci Remote Sens 55(7):3965–3981
5. Shahriari M, Bergevin R (2016) A two-stage outdoor-indoor scene classification framework: experimental study for the outdoor stage. In: 2016 international conference on digital image computing: techniques and applications (DICTA). IEEE, pp 1–8
6. Huynh-The T, Banos O, Lee S, Kang BH, Kim ES, Le-Tien T (2016) NIC: a robust background extraction algorithm for foreground detection in dynamic scenes. IEEE Trans Circuits Syst Video Technol 27(7):1478–1490
7. Shao J, Loy CC, Kang K, Wang X (2016) Crowded scene understanding by deeply learned volumetric slices. IEEE Trans Circuits Syst Video Technol 27(3):613–623
8. Guo S, Huang W, Wang L, Qiao Y (2016) Locally supervised deep hybrid model for scene recognition. IEEE Trans Image Process 26(2):808–820

Improvement of Solar Panel Efficiency with Automatic Cleaning Robot

Zabiullah Haidary and Sarbjeet Kaur

Abstract Renewable energy exploitation for power is the need of the hour, as load demand is rising sharply day by day and energy demand is expected to increase by up to 50% by 2030. It is therefore the best time to shift toward renewable energy sources for power generation rather than conventional sources of energy. Solar PV is accepted globally and is taking a lead in the market. Solar PV energy is harnessed with solar panels, and the efficiency of PV panels depends upon numerous factors such as irradiance, temperature, and the accumulation of foreign particles like dust, residue, bird droppings and snow, which obstruct high-efficiency operation. To achieve maximum efficiency, solar PV panels must be kept clean and free from dust particles. This paper presents the design of a cleaning robot that detects obstructions and cleans the dust on the photovoltaic surface. This purpose is accomplished with a hardware model; the circuit simulation is done using Proteus software and the results are validated in MATLAB 2020.

Keywords Solar robot · Proteus software · Photovoltaic · Sensor

1 Introduction

The power sector is shifting toward the use of solar energy worldwide due to its merits over conventional sources of energy (fossil fuels), as renewable energy sources are ecofriendly, free of cost, and available in abundance. According to the Ministry of New and Renewable Energy (MNRE), the world's largest renewable energy expansion program aims to achieve 175 GW by 2022 [1]. PV panels are used in many commercial and noncommercial applications such as small-scale industries and residential purposes. Due to dust accumulation, bird droppings, and direct exposure to atmospheric conditions, the surface of PV panels becomes dirty, which leads to a

Z. Haidary (B) · S. Kaur
Department of Electrical Engineering, Chandigarh University, Mohali, Punjab, India
S. Kaur e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_9


reduction in the efficiency of the solar panels [2, 3]. The amount of sunlight that strikes the surface of the solar panel affects its output and efficiency. The dirt deposition covering the surface of the panels causes the shaded cells to appear as small resistors connected in series, which heat up the surface of the PV panels and consume extra power, thereby affecting the performance of the photovoltaic panel [4]. The performance of the PV panel can be enhanced if its surface is kept clean, which can be accomplished with a cleaning mechanism consisting of motors (for propulsion) with brushes attached to clean the surface while rotating [5–9]. A study on cleaning robot control using the Blynk mobile application is elaborated in this work. The automatic solar cleaning robot is designed and programmed to clean the surface at specified times, so no sensor is needed, which improves the overall efficiency of the solar panels.

2 Problem Formulation

The objective of this paper is to design and build an automatic robot to remove dust from the surface of PV panels. The cleaning robot designed in this paper focuses on improving efficiency by cleaning the panel surface. Figure 1 shows the block diagram of the designed mechanism, including the various components of the automatic solar cleaning robot.

Fig. 1 Block diagram of the system


3 Description of the Hardware Architecture

The cleaning mechanism consists of various components such as an L293D motor driver, an AT89S52 microcontroller, a 12 V 2.5 Ah battery, DC motors, and an L7805 voltage regulator; their details are provided in Table 1. The hardware assembly of the automatic solar cleaning robot is shown in Fig. 2.

Table 1 Components of the dust cleaning robot

S. No.  Component used     Features and specification of the component used
1       Battery            A 12 V, 2.5 Ah lead-acid battery; the weight of the battery used is 15 kg
2       Microcontroller    AT89S52; a low-power 8-bit microcontroller with internal flash memory
3       Voltage regulator  L7805; it avoids overvoltage faults in the system to keep the robot in a safe condition
4       Motor driver       L293D, used to drive the DC motors; it contains two H-bridge driver circuits that control the forward and reverse directions of the installed motors
5       DC motor           Five DC motors are used: four for moving the robot in the forward and reverse directions according to the programmed logic, and one to rotate the cleaning brush on the surface of the solar panels. Each motor is rated 4 V, 2 A, 8 W, 40 rpm

Fig. 2 Automatic cleaning robot for solar panels

To increase the efficiency of the solar panels, a total of five DC motors are used, of which four move the robot in the forward and reverse directions and one rotates the brush. The speed of the brush motor is kept constant to avoid any damage to the panel surface and to clean all dust from it. In the beginning, the robot works in the forward direction to clean the dust from the surface of the solar panels; when it reaches the end position, it stops after a predefined time. The robot then works in the reverse direction to remove all remaining specks of dust from the panel surface. This user-defined robot can work in both directions to clean the surface of the PV panel.

Table 2 Output power with dust accumulation on the surface

S. No.  Day     Voltage (V)  Current (A)  Power (W)  Efficiency (%)
1       Day 1   8.52         0.2          1.70       2.10
2       Day 2   8.45         0.2          1.69       2.06
3       Day 3   8.40         0.2          1.68       2.04
4       Day 4   8.49         0.2          1.69       2.06
5       Day 5   8.44         0.2          1.68       2.04
6       Day 6   8.40         0.2          1.68       2.04
7       Day 7   8.56         0.2          1.71       2.05
8       Day 8   8.51         0.2          1.70       2.10
9       Day 9   8.42         0.2          1.68       2.04
10      Day 10  8.59         0.2          1.71       2.05
11      Day 11  8.20         0.2          1.64       2.00
12      Day 12  8.02         0.2          1.60       1.98
13      Day 13  7.55         0.2          1.51       1.96
14      Day 14  7.5          0.2          1.50       1.92
15      Day 15  7.47         0.2          1.49       1.90
16      Day 16  6.60         0.2          1.32       1.82
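The forward-pause-reverse cycle can be sketched as a simple control loop. This Python sketch only mirrors the logic: the actual firmware runs on the 8051-class microcontroller, and the motor-driver functions here are hypothetical stubs standing in for L293D pin toggles.

```python
import time

# Hypothetical stand-ins for the L293D driver outputs; real firmware
# would toggle microcontroller port pins instead of printing.
def drive_forward(): print("wheel motors: forward, brush: on")
def drive_reverse(): print("wheel motors: reverse, brush: on")
def stop_all():      print("all motors: off")

def cleaning_cycle(travel_time_s, pause_s=0.5):
    """One cleaning pass: forward across the panel, stop at the edge
    after a predefined time, then reverse back over the same path."""
    drive_forward()
    time.sleep(travel_time_s)   # predefined time to reach the panel edge
    stop_all()
    time.sleep(pause_s)
    drive_reverse()
    time.sleep(travel_time_s)   # same predefined time back to the start
    stop_all()
```

A scheduler would invoke `cleaning_cycle` twice a day (morning and evening), matching the predefined times described in the next section.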

4 Detailed Working of the Automatic Solar Cleaning Robot

The flow chart in Fig. 3 explains the step-by-step cleaning procedure. The amount of rotation depends on the predefined period of time set in the program. The automatic robot is programmed to start working automatically twice a day (morning and evening) at predefined times.

Total and potential revenue losses for a single panel and for an array of panels:

Single panel. Cost of a kWh = 15 cents. Yearly output of a 2 m² solar panel ≈ 730 kWh. Total generation value of the panel in one year ≈ $109.8/year, assuming an efficiency loss of 12%.


Fig. 3 Flowchart for the detailed working of automatic solar cleaning robot

Lost potential = $13.2/year.

Array of panels. Each panel produces 730 kWh in a year, so the 32 panels produce 21,900 kWh/year. Cost of a kWh = 15 cents. Yearly generation value of the 32 panels = $3288/year. Lost potential = $197.29/year. Loss over seven years = $1381.08 (the design must cost less than this amount to be worth implementing).
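The single-panel arithmetic can be reproduced as follows (with exactly 730 kWh and 15 ¢/kWh the computed figures come out marginally lower than the printed $109.8 and $13.2, which suggest a yearly yield closer to 732 kWh):

```python
def lost_revenue(yearly_kwh, price_per_kwh, loss_fraction, n_panels=1):
    """Yearly generation value and the revenue lost to soiling."""
    value = n_panels * yearly_kwh * price_per_kwh
    return value, value * loss_fraction

value, lost = lost_revenue(yearly_kwh=730, price_per_kwh=0.15, loss_fraction=0.12)
print(round(value, 1), round(lost, 1))   # 109.5 13.1 (printed as ~$109.8 / $13.2)
```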


[System-level design layout: the Solar Panel Automatic Cleaning System (SPACS) is decomposed into a power subsystem (energy storage, docking station, charging, and power connections to transfer power to the device), a cleaning subsystem (cleaning materials, cleaning power system, and traveling system), and a control subsystem (controlling all motors, monitoring cleaning frequency, and setting the machine to run every day or every two days). Stated requirements include running on a regular basis to keep the panels clean, cleaning effectively without water, removing dust and other particles, rotating to actively clean the panels, traveling the length of the array and its frame, and being safe for the public.]

5 Simulation of the Solar Panel with and Without Dust

The hardware for the automatic cleaning robot is designed and verified using Proteus software. The solar panels were continuously checked for one month with a dusty surface, and the surface was then cleaned with the help of the solar cleaning robot. The design of the automatic cleaning robot for the solar panel using Proteus software is shown in Fig. 4, and the program code for the automatic cleaning of the solar panel surface is shown in Fig. 5. The performance of the PV panels has been tested under dust conditions and also after cleaning the surface. The Blynk application is used to monitor the current and


Fig. 4 Design of robot using Proteus software

efficiency, and is interfaced to the PV panels. Afterward, the V-I and P-V characteristics are analyzed, and the results are then validated in MATLAB software, as shown in Figs. 6 and 7. When dust accumulates on the panel surface, the dust cleaning robot comes into action and automatically cleans the surface after sensing the dirt on the panel. The V-I and P-V characteristics before and after cleaning the surface are plotted in Fig. 8.

6 Result and Discussion

The performance of the automatic solar cleaning robot was continuously checked for one month with dirty and clean surfaces, and the performances were compared. Table 2 shows the performance of the PV panels with dust accumulated on them, and Table 3 shows the performance with clean surfaces, continuously for 16 days. Figure 9 presents the monthly observations of the solar panel for some specific dates with and without cleaning the surface; based on this comparative analysis, the efficiency of the solar panel is calculated using Eq. 1.


Fig. 5 Programming code

Efficiency of solar panel (%) = Pout / (Pin × A) × 100    (1)

where A = area of the PV panel in m², Pout = output power (W), and Pin = input irradiance (constant at 1000 W/m²). The average efficiency before cleaning the solar panel is 46.8%, and it increases to 78.12% after cleaning the surface; hence it can be concluded that there is a rise in the efficiency of the solar panels by 31.32%.
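Eq. 1 as a small helper. The panel area is not stated in the paper; 0.081 m² is an inferred value, used here only for illustration, under which Day 1 of Table 2 (1.70 W) reproduces roughly the tabulated 2.10%:

```python
def panel_efficiency(p_out_w, area_m2, p_in_w_per_m2=1000.0):
    """Eq. (1): efficiency (%) = Pout / (Pin * A) * 100."""
    return 100.0 * p_out_w / (p_in_w_per_m2 * area_m2)

# 0.081 m^2 is an assumption inferred from the tabulated values:
print(round(panel_efficiency(1.70, 0.081), 2))   # -> 2.1
```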


Fig. 6 Blynk app interface

Fig. 7 MATLAB simulation

Fig. 8 V-I and P-V characteristics with/without clean surface


Table 3 Efficiency improvement with cleaned PV surface

S. No.  Day     Voltage (V)  Current (A)  Power (W)  Efficiency (%)
1       Day 1   9.65         0.2          1.93       2.23
2       Day 2   9.61         0.2          1.92       2.22
3       Day 3   9.57         0.2          1.91       2.22
4       Day 4   9.54         0.2          1.91       2.21
5       Day 5   9.48         0.2          1.90       2.19
6       Day 6   9.45         0.2          1.89       2.19
7       Day 7   9.41         0.2          1.88       2.18
8       Day 8   9.39         0.2          1.88       2.17
9       Day 9   9.26         0.2          1.85       2.14
10      Day 10  9.05         0.2          1.81       2.09
11      Day 11  8.94         0.2          1.79       2.07
12      Day 12  8.76         0.2          1.75       2.03
13      Day 13  8.61         0.2          1.72       1.99
14      Day 14  8.4          0.2          1.68       1.94
15      Day 15  8.26         0.2          1.65       1.91
16      Day 16  7.96         0.2          1.59       1.84

Table 4 Output power before and after cleaning PV panels

S. No.  Date       Ideal power  Output power before cleaning  Output power after cleaning
1       01/7/2020  9            3.8                           6.5
2       09/7/2020  9            3.7                           6.3
3       15/7/2020  9            4.1                           6.8
4       25/7/2020  9            3.9                           6.7
5       31/7/2020  9            4.2                           6.9
6       07/8/2020  9            4.3                           7.1
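Averaging Table 4 and normalizing by the ideal power reproduces figures close to the reported before/after efficiencies of 46.8% and 78.12% (an approximation from the six sampled dates, not the full month of readings):

```python
ideal_w = 9.0
before = [3.8, 3.7, 4.1, 3.9, 4.2, 4.3]
after  = [6.5, 6.3, 6.8, 6.7, 6.9, 7.1]

pct_before = 100 * sum(before) / len(before) / ideal_w
pct_after = 100 * sum(after) / len(after) / ideal_w
print(round(pct_before, 1), round(pct_after, 1))   # 44.4 74.6
```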


Fig. 9 Graphical representation of obtained powers


7 Conclusion

To sum up, photovoltaic panel surfaces are subject to dust accumulation because they are mostly installed in open-air environments, such as rooftops, to absorb the maximum amount of sunlight; this reduces the power output and affects the performance of the solar panels. The efficiency of a solar panel with a cleaned surface increases by around 31.32% compared to a solar panel covered with dirt.

References

1. Al Baloushi A, Saeed M, Marwan S, Al Gghafri S, Moumouni Y (2018) Portable robot for cleaning photovoltaic system: ensuring consistent and optimal year-round photovoltaic panel performance. In: 2018 advances in science and engineering technology international conferences (ASET). IEEE, pp 1–4
2. Jiang C, Jiang J, Xu J (2019) Solar panel cleaning robot. U.S. Patent 10,511,256, issued December 17, 2019
3. Kumar NM, Navothna B, Minz M (2017) Performance comparison of building integrated multi-wattage photovoltaic generators mounted vertically and horizontally. In: 2017 IEEE international conference on smart technology for smart nation (SmartTechCon), Bangalore, India, pp 709–714
4. Meyer-Vernet N, Maksimovic M, Czechowski A, Mann I, Zouganelis I, Goetz K, Kaiser ML, Cyr OS, Bougeret JL, Bale SD (2009) Dust detection by the wave instrument on STEREO: nanoparticles picked up by the solar wind? Solar Phys 256(1–2):463–474
5. Anderson M, Grandy A, Hastie J, Sweezey A, Ranky R, Mavroid C (2009) Robotic devices for cleaning photovoltaic panel arrays
6. Hamidreza N, Behzad N (2011) Sensitivity analysis of a hybrid photovoltaic thermal solar collector. In: 2011 IEEE electrical power and energy conference, pp 1–6
7. Al-Qubaisi EM et al (2009) Microcontroller based dust cleaning system for a standalone photovoltaic system. In: International conference on electric power and energy conversion systems, EPECS'09. IEEE
8. Gheitasi A, Almaliky A, Albaqawi N (2015) Development of an automatic cleaning system for photovoltaic plants. In: 2015 IEEE PES Asia-Pacific power and energy engineering conference (APPEEC). IEEE, pp 1–4
9. Abhilash B, Panchal AK (2016) Self-cleaning and tracking solar photovoltaic panel for improving efficiency. In: 2016 2nd international conference on advances in electrical, electronics, information, communication and bio-informatics (AEEICB). IEEE, pp 1–4
10. Nazar R (2015) Improvement of efficiency of solar panel using different methods. Int J Electr Electron Engineers 7(1):7–12
11. Hashim N, Mohammed MN, Selvarajan RA, Al-Zubaidi S, Mohammed S (2019) Study on solar panel cleaning robot. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS). IEEE, pp 56–61
12. Zhao X (2012) Asian dust detection from the satellite observations of moderate resolution imaging spectroradiometer (MODIS). Aerosol Air Qual Res 12(6):1073–1080
13. Krüger H et al (2007) Interstellar dust in the solar system. Space Sci Rev 130(1–4):401–408
14. Poppe AR (2016) An improved model for interplanetary dust fluxes in the outer solar system. Icarus 264:369–386
15. Wang S, Lin S-C, Yang Y-C (2016) Modular design of a cylindrical cam assembly for solar-panel clean system. In: 2016 international conference on applied system innovation (ICASI). IEEE
16. Said A, Alaoui SM, Rouas Y, Dambrine G, Menard E, Boardman J, Barhdadi A (2018) Innovative low cost cleaning technique for PV modules on solar tracker. In: 2018 6th international renewable and sustainable energy conference (IRSEC). IEEE, pp 1–4
17. Patil PA, Bagi JS, Wagh MM (2017) A review on cleaning mechanism of solar photovoltaic panel. In: 2017 international conference on energy, communication, data analytics and soft computing (ICECDS). IEEE
18. Alzarooni FI, Alkharji AK, Alsuwaidi AA, E'qab RA (2020) Design and implementation of an automated dry solar-panel cleaning system. In: 2020 advances in science and engineering technology international conferences (ASET). IEEE, pp 1–4
19. Zhang Q, Lu X-L, Hu J-H (2013) A solar panel cleaning system based on a linear piezoelectric actuator. In: 2013 symposium on piezoelectricity, acoustic waves, and device applications. IEEE
20. Kawamoto H, Kato M (2018) Electrostatic cleaning equipment for dust removal from solar panels of mega solar power generation plants. In: 2018 IEEE 7th world conference on photovoltaic energy conversion (WCPEC). IEEE, pp 3648–3652
21. Hashim N, Mohammed MN, Selvarajan RA, Al-Zubaidi S, Mohammed S (2019) Study on solar panel cleaning robot. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), pp 56–61
22. Cai S et al (2019) Parameters optimization of the dust absorbing structure for photovoltaic panel cleaning robot based on orthogonal experiment method. J Cleaner Prod 217:724–731
23. Hashim N, Mohammed MN, Selvarajan RA, Al-Zubaidi S, Mohammed S (2019) Study on solar panel cleaning robot. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), pp 56–61. https://doi.org/10.1109/I2CACIS.2019.8825028

Performance Analysis of Grid Connected Distributed Generation Sources (DGS) Using ETAP

Alijan Ranjbar, Sunny Vig, and KamalKant Sharma

Abstract In this paper, a performance analysis of grid-connected distributed generation sources is implemented in ETAP software. The DGS is a combination of clusters of distributed generators connected together to fulfil the electricity demand efficiently and provide continuity of supply to the load. DGS resources are abundantly available in nature, and they are the best way to provide economical, efficient and reliable electricity from renewable energy sources. Due to the penetration of renewable sources into the power grid, issues may occur such as power quality, voltage stability and reliability issues. The Electrical Transient Analyzer Program (ETAP) provides different approaches to overcome these issues; hence, the main goal of this paper is to mitigate harmonics so as to improve the power quality of the system. Voltage control is also a main concern in power systems, which is achieved by implementing a static VAR compensator in the proposed model. Normal load flow and optimal power flow studies have been conducted, and finally a comparative analysis of the load flow and power flow is presented.

Keywords Distributed generation sources (DGS) · ETAP · Optimal power flow · Inverters · Reactive power compensation · Harmonic mitigation and reliability assessment

A. Ranjbar (B)
M-Tech Research Scholar, Department of Electrical Engineering, Chandigarh University, Mohali, Punjab 140413, India
S. Vig
Assistant Professor, Department of Electrical Engineering, Chandigarh University, Mohali, Punjab 140413, India
K. Sharma
Associate Professor, Department of Electrical Engineering, Chandigarh University, Mohali, Punjab 140413, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_10


1 Introduction

The demand for electricity is increasing day by day, and conventional energy resources will be exhausted in the future. Conventional energy sources are also not ecofriendly, as they are associated with environmental pollution, so renewable energy resources are the best alternative for the future. In this paper, the integration of distributed generation energy sources is implemented in order to harness inexhaustible energy resources, enhance the reliability and continuity of electricity, and inject power into the conventional grid so as to reduce electricity bills. Here, 24 MW of wind power and 4 MW of solar power are taken to supply loads of 25 MVA. During the penetration of renewable sources, issues like power quality and stability occur in the system; this is the main challenge, so it should be minimized as much as possible. The Electrical Transient Analyzer Program (ETAP) is used to analyze and compensate for these issues. Power quality issues cause power downtime, decrease the capacity of the power plant, cause equipment failure and utility penalties, and have a significant financial impact. The static VAR compensator is used to inject a significant amount of reactive power into the system to stabilize and regulate the voltage at the main bus; as a result, a significant amount of active power can flow in the system [1–24]. Load flow analysis is employed to find the voltage, current, and active and reactive power flowing at each bus, and many methods are used for load flow analysis, such as the Adaptive Newton–Raphson, Newton–Raphson, Fast-Decoupled and Accelerated Gauss–Seidel methods. The Newton–Raphson method with 5000 iterations and 0.001 precision has been implemented in this paper [4].
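The Newton–Raphson settings mentioned above (0.001 precision, up to 5000 iterations) map onto the standard iteration. A scalar sketch on a toy power-flow mismatch equation (illustrative values only, not the paper's 24 MW / 4 MW test system):

```python
from math import sin, cos

def newton_raphson(f, df, x0, tol=0.001, max_iter=5000):
    """Newton-Raphson iteration with the solver settings cited in
    the paper: precision 0.001, up to 5000 iterations."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:     # converged within the precision
            return x
    raise RuntimeError("did not converge")

# Toy scalar mismatch: solve P = V1*V2*B*sin(delta) for the angle delta.
V1 = V2 = 1.0
B, P = 10.0, 5.0
delta = newton_raphson(lambda d: V1 * V2 * B * sin(d) - P,
                       lambda d: V1 * V2 * B * cos(d), x0=0.1)
```

ETAP solves the full multi-bus mismatch vector the same way, with the Jacobian replacing the scalar derivative.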
Generally, optimal power flow analysis is used to reduce the active and reactive power losses in the power system network so that a sufficient amount of power can flow in the system. Optimal power flow reduces the fuel cost, reduces transmission and distribution losses, and increases the overall efficiency of the power system significantly. Optimal power flow is implemented using ETAP in the proposed model, and suitable results were obtained (Fig. 1).

2 Results

2.1 Power Quality Improvement

Power quality improvement is one of the main challenges in electrical power systems. To mitigate the fifth and seventh harmonics caused by power electronic devices, such as inverters and the static compensator acting as non-linear loads, two filters have been implemented at the main bus (bus 2) of the proposed system. Figure 2 shows the distorted waveform due to harmonics at the main bus.


Fig. 1 Model of proposed system

Fig. 2 Distorted waveform due to harmonics at the main bus

Filters are designed using ETAP and introduced in the system to mitigate harmonics and hence improve the power quality of the system.

2.1.1 Harmonic Mitigation

Harmonic mitigation is the action by which harmonics are reduced. In this paper, harmonic mitigation is done by implementing harmonic filters in ETAP. Two filters are designed: one mitigates the fifth harmonic and the other mitigates the seventh harmonic.
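ETAP's filter editor essentially solves the standard single-tuned sizing relations: the capacitor is chosen so the L–C branch delivers the desired fundamental reactive power, the reactor is tuned so the branch resonates at the target harmonic, and the resistance follows from the quality factor. The sketch below applies those relations; the 11 kV / 2 Mvar / fifth-harmonic numbers are hypothetical and are not the paper's actual filter data (which appear in Fig. 3).

```python
import math

def single_tuned_filter(v_ll_kv, qc_mvar, h, f1=50.0, qf=30.0):
    """Size a single-tuned harmonic filter branch (illustrative sketch).

    v_ll_kv : line-to-line bus voltage, kV
    qc_mvar : reactive power the branch must deliver at the fundamental, Mvar
    h       : harmonic order to tune to (5 or 7 for this paper's filters)
    qf      : quality factor of the reactor (typical range 20-100)
    Returns (C in uF, L in mH, R in ohm).
    """
    w1 = 2 * math.pi * f1
    # Effective capacitive reactance so the branch supplies qc_mvar at f1.
    xc = (v_ll_kv * 1e3) ** 2 / (qc_mvar * 1e6) * h**2 / (h**2 - 1)
    xl = xc / h**2                    # forces series resonance at h * f1
    C = 1 / (w1 * xc)
    L = xl / w1
    R = h * xl / qf                   # damping resistance from quality factor
    return C * 1e6, L * 1e3, R

# Hypothetical 5th-harmonic filter on an 11 kV bus delivering 2 Mvar.
C, L, R = single_tuned_filter(11, 2.0, 5)   # roughly 50.5 uF and 8.0 mH
```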


A. Ranjbar et al.

Fig. 3 Specification of harmonic filters

The input data of both filters, as designed in the ETAP software for harmonic mitigation and power quality improvement of the proposed system, are given below (Figs. 3, 4 and 5).

Fig. 4 Fifth harmonic mitigation

Fig. 5 Seventh harmonic mitigation


According to the above illustration, before applying the filters the fifth harmonic was around 3.08% and the seventh harmonic around 1.81%; after implementing the filters at the main bus, the fifth and seventh harmonics were mitigated to 1.93% and 0.744%, respectively.

2.2 Reactive Power Compensation

Reactive power plays a vital role in power flow in an electrical power system, as it regulates the voltage and hence improves the power flow. Therefore, a static VAR compensator has been utilized to supply reactive power at the main bus of the proposed system in order to regulate the voltage and thus provide efficient and reliable power flow. Figure 6 shows the specification of the static compensator.

2.2.1 Implementation of Static VAR Compensator

A static VAR compensator provides fast-acting reactive power compensation in a power system; here it supplies 4.14 Mvar of reactive power to regulate the voltage and thereby maintain power stability in the system. Table 1 shows a comparative analysis of the system with and without the static VAR compensator. As is clear from the table, a sufficient amount of

Fig. 6 Specification of static VAR compensator

Table 1 Improved reactive power of the system

                    Without compensator                          With compensator
Sources and loads   Reactive power (Mvar)  Voltage profile (%)   Reactive power (Mvar)  Voltage profile (%)
Wind turbine        −22.3                  95.58                 −21.8                  98.27
Solar array         −0.125                 89.74                 −0.119                 92.33
Grid                38.4                   100                   33.2                   100
Load 1              1.51                   87.74                 1.52                   90.38
Load 2              2.01                   87.74                 2.03                   90.38
Load 3              4.02                   87.74                 4.06                   90.38
Load 4              4.06                   87.74                 4.3                    90.38


Table 2 Load flow analysis data

Sources and loads  MW rating  MVA rating  Voltage (kV)  Current (A)  Active power (MW)  Reactive power (Mvar)  Apparent power (MVA)
Wind               24         –           10.94         1624         22.2               –                      30.8
Solar array        4          –           10.29         192.6        3.43               –                      3.42
Grid               8.6        33.2        11            1628         7.06               30.2                   31
Load 1             2.5        3           10.08         166.4        2.47               1.53                   2.9
Load 2             3.4        4           10.08         221.8        3.29               2.04                   3.87
Load 3             6.8        8           10.08         443.7        6.58               4.08                   7.74
Load 4             8.5        10          10.08         480.7        7.13               4.42                   8.39

reactive power flows in the system; therefore, the voltage profile is increased sufficiently, which leads to voltage stability of the system, and as a result the active power flow improves accordingly.
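As a rough cross-check on Table 1, the per-cent voltage rise produced by a reactive injection at a stiff bus can be estimated as ΔV/V ≈ Q/S_sc, where S_sc is the short-circuit (fault) level at the bus. The sketch below uses the paper's 4.14 Mvar SVC figure together with an assumed fault level of 150 MVA; that fault level is a hypothetical value chosen only for illustration, though the resulting rise is of the same order as the improvement seen in Table 1.

```python
def voltage_rise_pct(q_inj_mvar, s_sc_mva):
    """Approximate per-cent steady-state voltage rise from a reactive
    injection: dV/V ~ Q / S_sc for a stiff, mostly inductive grid."""
    return 100.0 * q_inj_mvar / s_sc_mva

# 4.14 Mvar is the paper's SVC injection; the 150 MVA fault level is assumed.
print(voltage_rise_pct(4.14, 150.0))  # 2.76 (%)
```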

2.3 Comparative Analysis of Normal Load Flow and Optimal Power Flow

2.3.1 Normal Load Flow Analysis

Normal load flow analysis is employed to determine the voltage, real power and reactive power at each and every bus in the system, whereas optimal power flow is used to reduce the reactive and active power losses in the system so as to decrease the fuel cost. In this paper, the Newton–Raphson method is used to calculate the voltage and the active and reactive power at each bus of the system. Table 2 shows the load flow of the system, from which the voltage, current, and active, reactive and apparent power flowing in the system are determined (Fig. 7).

2.3.2 Optimal Power Flow Analysis

The main goal of optimal power flow analysis is to reduce the overall system losses by reducing the losses of the respective branches. Generally, optimal power flow aims to minimize the total generation cost, minimize total power losses and maximize the security of the power system network. In this section, an optimal power flow study has been implemented in ETAP and the desired result has been achieved, as illustrated in Table 3 (Fig. 8). The active power balance at each bus is

$$P_i(V, \theta) = P_{Gi} - P_{Di}$$


Fig. 7 Graphical representation of load flow analysis

Table 3 Optimal power flow data

Sources and loads  MW rating  MVA rating  Voltage (kV)  Current (A)  Active power (MW)  Reactive power (Mvar)  Apparent power (MVA)
Wind               24         –           11.82         1477         22.3               −20.4                  30.2
Solar array        4          –           11.16         108.3        3.48               −0.102                 3.48
Grid               8.6        33.2        11.9          1638         7.58               32.9                   33.8
Load 1             2.5        3           10.96         157.8        2.55               1.58                   3
Load 2             3.4        4           10.96         210.4        3.39               2.1                    3.99
Load 3             6.8        8           10.96         420.9        6.79               4.21                   7.99
Load 4             8.5        10          10.96         522.8        8.43               5.23                   9.92

Fig. 8 Graphical representation of optimal power flow

$$Q_i(V, \theta) = Q_{Gi} - Q_{Di}$$

$$P_{G,\min} \le P_G \le P_{G,\max}$$

$$Q_{G,\min} \le Q_G \le Q_{G,\max}$$


Fig. 9 Graphical representation of normal load flow versus optimal power flow

Table 4 Comparison of normal load flow and optimal power flow

Bus No         Normal load flow voltage profile (%)  Optimal power flow voltage profile (%)
Bus 1 (swing)  100                                   108.2
Bus 2 (PQ)     91.59                                 99.61
Bus 4 (PV)     93.53                                 101.4
Bus 14 (PV)    99.5                                  107.5

$$V_{i,\min} \le V_i \le V_{i,\max}$$

$$P_{l,\min} \le P_l \le P_{l,\max}$$

Table 3 shows the data after implementing optimal power flow; it is clear from the table that the reactive and active power increased sufficiently as the power losses reduced after implementing optimal power flow (Fig. 9). Table 4 represents the comparative analysis of normal load flow and optimal power flow of the system based on voltage profile, and shows how much the voltage profile at each bus is improved after implementing optimal power flow.
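To make the bounded formulation concrete, the sketch below solves a tiny dispatch problem — minimizing quadratic generation cost subject to the generator limits and a single power-balance constraint — by bisection on the incremental cost λ. The two generating units and the 20 MW demand are hypothetical; a full OPF such as the ETAP study above additionally enforces the network equations and the bus-voltage limits.

```python
def dispatch(demand, gens):
    """Equal incremental-cost dispatch by bisection on lambda (sketch).

    gens: list of (a, b, pmin, pmax), each unit costing a*P**2 + b*P.
    Illustrates P_Gmin <= P_G <= P_Gmax together with total-output balance.
    """
    def output(lam):
        # At marginal price lam a unit runs at P = (lam - b) / (2a),
        # clipped to its limits -- total output rises monotonically with lam.
        return [min(max((lam - b) / (2 * a), pmin), pmax)
                for a, b, pmin, pmax in gens]

    lo, hi = 0.0, 1000.0
    for _ in range(100):               # bisect until total output = demand
        mid = (lo + hi) / 2
        if sum(output(mid)) < demand:
            lo = mid
        else:
            hi = mid
    return output((lo + hi) / 2)

# Two hypothetical units, 20 MW demand: the cheaper unit carries the load.
print(dispatch(20.0, [(0.02, 2.0, 0.0, 24.0), (0.05, 3.0, 0.0, 10.0)]))
```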

3 Conclusion

Power quality issues, reactive power compensation and optimal power flow analysis are the main concerns arising from the penetration of renewable sources into the grid. So, in this paper, to improve the power quality of the system, harmonics are mitigated significantly. To enhance the flow of power and to inject reactive power, a static VAR compensator has been introduced at the main bus. Optimal power flow has been analyzed and the result compared with normal load flow. Consequently, a


reduction in active and reactive power losses has been observed with the implementation of the proposed system.

References

1. Hou X (2019) Improvement of frequency regulation in VSG-based AC microgrid via adaptive virtual inertia. IEEE Trans Power Electron 1589–1602
2. Peng Y (2019) Modeling and stability analysis of inverter-based microgrid under harmonic conditions. IEEE Trans Smart Grid 11(2):1330–1342
3. Marini A (2019) Active power filter commitment for harmonic compensation in microgrids. In: IECON 2019–45th annual conference of the IEEE industrial electronics society, vol 1. IEEE
4. Kadukar PR, Shete PS, Gawande SP (2018) Transient analysis of distributed generation AC microgrid using ETAP. In: 2018 international conference on current trends towards converging technologies (ICCTCT). IEEE
5. Prasad PS, Parimi AM (2020) Harmonic mitigation in grid connected and islanded microgrid via adaptive virtual impedance. In: 2020 IEEE international conference on power electronics, smart grid and renewable energy (PESGRE2020). IEEE
6. Adineh B (2020) Review of harmonic mitigation methods in microgrid: from a hierarchical control perspective. IEEE J Emerg Sel Top Power Electron
7. Mao M, Zhu W, Chang L (2018) Stability analysis method for interconnected AC islanded microgrids. In: 2018 IEEE international power electronics and application conference and exposition (PEAC). IEEE
8. Jabr RA, Džafić I, Pal BC (2018) Compensation in complex variables for microgrid power flow. IEEE Trans Power Syst 33(3):3207–3209
9. Chandraratne (2018) Adaptive overcurrent protection for power systems with distributed generators. In: 2018 8th international conference on power and energy systems (ICPES). IEEE
10. Waqfi RR, Nour M (2017) Impact of PV and wind penetration into a distribution network using ETAP. In: 2017 7th international conference on modeling, simulation, and applied optimization (ICMSAO). IEEE
11. Kamaruzaman MZ, Wahab NIA, Nasir MNM (2018) Reliability assessment of power system with renewable source using ETAP. In: 2018 international conference on system modeling & advancement in research trends (SMART). IEEE
12. Mallick PK, Bhoi AK, Chae GS, Kalita K (eds) (2018) Advances in electronics, communication and computing: select proceedings of ETAEERE 2020, vol 709. Springer Nature
13. Narayanan V, Kewat S, Singh B (2019) Standalone PV-BES-DG based microgrid with power quality improvements. In: 2019 IEEE international conference on environment and electrical engineering and 2019 IEEE industrial and commercial power systems europe (EEEIC/I&CPS Europe). IEEE
14. Guo Y (2019) Region-based stability analysis for active dampers in AC microgrids. IEEE Trans Ind Appl 55(6):7671–7682
15. Yan Y (2018) Small-signal stability analysis and performance evaluation of microgrids under distributed control. IEEE Trans Smart Grid 10(5):4848–4858
16. Bhoi AK, Sherpa KS, Kalam A, Chae GS (eds) (2020) Advances in greener energy technologies. Springer
17. Noorpi MNS (2018) Zonal formation for multiple microgrids using load flow sensitivity analysis. In: 2018 international conference on power system technology (POWERCON). IEEE
18. Abu-Elzait S, Parkin R (2019) Economic and environmental advantages of renewable-based microgrids over conventional microgrids. In: 2019 IEEE green technologies conference (GreenTech). IEEE
19. Hamad AA, El Saadany EF (2016) Steady-state analysis for hybrid AC/DC microgrids. In: 2016 IEEE international symposium on circuits and systems (ISCAS). IEEE
20. Priyadarshi N, Azam F, Solanki SS, Sharma AK, Bhoi AK, Almakhles D (2021) A bioinspired chicken swarm optimization-based fuel cell system for electric vehicle applications. Bio-inspired neurocomputing. Springer, Singapore, pp 297–308
21. Pealy SNM (2019) Grid integration issues with hybrid micro grid system. In: 2019 international conference on robotics, electrical and signal processing techniques (ICREST). IEEE
22. Zhou S et al (2019) Research on control strategy of grid-connected inverter in microgrid system. In: 2019 IEEE 3rd conference on energy internet and energy system integration (EI2). IEEE
23. Sultana G, Keshavan BK (2020) Evaluation of performance and reliability indices of a microgrid with distributed generation. In: 2020 IEEE region 10 conference (TENCON), pp 341–346. https://doi.org/10.1109/TENCON50793.2020.9293700
24. Abdi H, Beigvand SD, La Scala M (2017) A review of optimal power flow studies applied to smart grids and microgrids. Renew Sustain Energy Rev 71:742–766

Modeling and Simulation for Stability Improvement and Harmonic Analysis of Naghlu Hydro Power Plant

Samiullah Sherzay and Rehana Perveen

Abstract With the increase in load in day-to-day life, the power sector has also grown to balance the load requirements. Power flow analysis gives information about active and reactive power, voltage magnitude and angle at each bus and branch, voltage drop on lines, power losses of each piece of equipment, and bus loading at each point of the network. In this research paper, load flow and harmonic analyses are performed on the Naghlu hydropower plant to determine the voltage and current total harmonic distortion at the Jalalabad substation caused by the many on-grid rooftop solar projects linked to the network, which introduce harmonic distortion into the system. Performance evaluation of the plant is based upon the voltage profile, real and reactive power, harmonic level on each bus, and voltage drop at different buses.

Keywords Naghlu hydropower plant · Harmonic study · Harmonic filter · Voltage stability · Electrical transient analysis (ETAP) software

1 Introduction

The Naghlu hydropower station, shown in Fig. 1, was built in the 1960s. The plant has four turbines, each with a generation capacity of 25 MW, generating 100 MW in total [1]; it is estimated that it can serve about 100,000 households. De Afghanistan Breshna Sherkat (DABS), a governmental power production company, reconstructed turbine number 3 of the power plant with NHRP support in October 2018. The Naghlu hydropower plant (NHPP) was reinforced by an $83 million grant from the Afghanistan Reconstruction Trust Fund (ARTF) [2] for improving dam sustainability and safety and also to enhance the electricity supply of the power plant. This power plant is of great prominence to the power grid of Afghanistan; it acts as a swing generator in the interconnected power network [3].

S. Sherzay (B) · R. Perveen, Department of Electrical Engineering, Chandigarh University, Mohali 140301, India. R. Perveen e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_11


S. Sherzay and R. Perveen

Fig. 1 Naghlu hydropower plant

2 Load Flow Analysis of Naghlu Hydro Power Plant

Load flow analysis is performed to determine the active power, reactive power, voltage magnitude, and voltage angle at each bus. It is most needed in the planning and initial stages of substations, power plants, transmission systems, distribution networks, and all generation stations [4]. A load flow study basically gives information about the voltage magnitude at each bus, the voltage angle of each bus and branch, the active and reactive power generated and used, and the line losses in a power network [5, 6]. In this paper, a load flow study of the Naghlu hydropower plant is performed as shown in Fig. 2, where all bus voltages are near the rated value (between 96 and 100%). Real and reactive power are within the permissible designed limits (Figs. 3 and 4), and all four generators are operating in normal operating conditions [7].

3 Harmonics Analysis

For any electrical utility company, the main objective is to deliver electricity of the best quality [8], which implies a pure sinusoidal waveform. This objective is disrupted by the fact that there exist nonlinear loads on the consumer end that produce


Fig. 2 Load flow diagram of Naghlu hydropower plant

Fig. 3 Bus loading summary report

harmonics; these harmonic currents yield voltage and current waveforms that can disturb the performance of the network in many different ways [9]. Near-sinusoidal current and voltage components at frequencies that are integer multiples of the fundamental frequency are known as harmonics.


Fig. 4 Branch losses summary report

With the increase in power demand there is, side by side, a need for power generation from renewable energy resources, so more power electronic devices are added to the network; therefore, the harmonic content in the main grid increases, impacting the overall performance of the electrical network [10]. When harmonics exist in the voltage and current, the root mean square (RMS) value can be taken from the resultant waveform; the equations for voltage and current are given as

$$V_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} V(i)^2} \qquad (1)$$

$$I_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} I(i)^2} \qquad (2)$$


If the RMS values of the fundamental and of the harmonic components are known, the RMS of the whole waveform can be calculated by taking the square root of the sum of the squared RMS values of each harmonic and the fundamental. By the first definition, THD is the ratio of the RMS value of the sum of all harmonic components to the RMS value of the fundamental component. Equations 3 and 4 are the formulas for voltage and current total harmonic distortion, respectively:

$$V_{\mathrm{THD-F}} = \frac{100\,\sqrt{\sum_{h=2}^{N} V_{h,\mathrm{rms}}^2}}{V_{1,\mathrm{rms}}} \qquad (3)$$

$$I_{\mathrm{THD-F}} = \frac{100\,\sqrt{\sum_{h=2}^{N} I_{h,\mathrm{rms}}^2}}{I_{1,\mathrm{rms}}} \qquad (4)$$

In the second definition, THD is the ratio of the RMS value of the sum of all harmonic components to the total RMS value, where this total RMS value includes the effects of both the fundamental and the other harmonics in the system [11]. Equations 5 and 6 represent voltage and current harmonic distortion, respectively:

$$V_{\mathrm{THD-R}} = \frac{100\,\sqrt{\sum_{h=2}^{N} V_{h,\mathrm{rms}}^2}}{\sqrt{V_{1,\mathrm{rms}}^2 + \sum_{h=2}^{N} V_{h,\mathrm{rms}}^2}} \qquad (5)$$

$$I_{\mathrm{THD-R}} = \frac{100\,\sqrt{\sum_{h=2}^{N} I_{h,\mathrm{rms}}^2}}{\sqrt{I_{1,\mathrm{rms}}^2 + \sum_{h=2}^{N} I_{h,\mathrm{rms}}^2}} \qquad (6)$$

In Kabul's eastern areas, the majority of consumers use solar energy and backup batteries, which degrade the power quality of the network and increase the harmonic content in the system. To understand the harmonic distortion level of the harmonic sources on the system, a harmonic load flow is needed for the different loading conditions during the summer and winter seasons, as shown in Fig. 5. As per the results of the harmonic load flow, some harmonic frequencies exceed the normal total harmonic distortion and individual harmonic distortion levels [12] (Fig. 6). From the alert view, it is obvious that the 5th, 7th, 11th, and 13th order harmonics are present on the network, and these harmonic orders exceed the prescribed IHD and THD limits (Figs. 7 and 8). As shown in the voltage plots, the effect of harmonics is greater on Jalalabad substation bus 2 (20 kV) and Jalalabad substation bus 1 (20 kV), where both waveforms coincide with each other, as both buses contain the same harmonic sources and the same MW ratings, the harmonic sources being introduced at these buses [13] (Figs. 9 and 10).


Fig. 5 Harmonic load flow result of Jalalabad S/S

Fig. 6 Alert window of harmonic load flow analysis

Eliminating Harmonics Using Harmonic Filters

In this simulation work, four single-tuned filters are used to eliminate different order harmonics; these filters are designed to produce reactive power on the network and reduce the harmonic losses. As already stated, the 5th, 7th, 11th, and 13th order harmonics contribute to the harmonic distortion of the network, so if harmonics of these orders are eliminated, the system nearly comes to a stable condition (Figs. 11, 12, 13 and 14; Tables 1 and 2).
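The frequency-scan plots of Figs. 9 and 10 trace filter-branch impedance against frequency: a single-tuned branch looks capacitive at 50 Hz (so it supplies reactive power) and its impedance collapses to the damping resistance at the tuned harmonic. The sketch below reproduces that behaviour for a hypothetical 11th-order (550 Hz) filter; the R, L, C values are illustrative choices, not the ones in the ETAP model.

```python
import math

def branch_impedance(f, R, L, C):
    """Series R-L-C branch impedance at frequency f (Hz)."""
    w = 2 * math.pi * f
    return complex(R, w * L - 1 / (w * C))

# Hypothetical 11th-harmonic single-tuned filter on a 50 Hz system.
R, L, C = 0.5, 8.2e-3, 1.02e-5
fr = 1 / (2 * math.pi * math.sqrt(L * C))     # series-resonance frequency
print(fr)                                     # ~550 Hz, i.e. 11 * 50 Hz
print(abs(branch_impedance(50.0, R, L, C)))   # large (capacitive) at 50 Hz
print(abs(branch_impedance(fr, R, L, C)))     # collapses to R at resonance
```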


Fig. 7 Voltage spectrum versus harmonics order for three buses

Fig. 8 Harmonic load flow plots for three buses

Fig. 9 Impedance angle versus frequency plot (frequency scan)


Fig. 10 Impedance angle plot (frequency scan)

Fig. 11 Harmonics filter editor window for 11th order harmonic elimination

Fig. 12 Single tuned filter input data


Fig. 13 Voltage spectrum versus harmonics order for three buses after applying harmonic filters

Fig. 14 Harmonic load flow plots for three buses after applying harmonic filters

4 Results and Conclusion

In this paper, a load flow and harmonic study of the Naghlu hydropower plant is performed in ETAP (electrical transient analysis) software. In the load flow study case, bus and branch loading are studied, the state of real and reactive power is shown at each bus, transmission line losses are indicated and decreased to the minimum feasible value, the voltage magnitude of each bus is adjusted to the predetermined range (between 96 and 100%), generator overloading is resolved and all undervoltage issues are solved. Various network issues such as undervoltage, overvoltage, overloading (generator, bus, transformer, and line), losses on the network, and all other

Table 1 Different filters used and their effect on harmonics reduction

                                               RMS voltage   %THD
No filter used
  Jalalabad S/S                                87.49         4.51
  Jalalabad S/S bus 1                          16.19         8.74
  Jalalabad S/S bus 2                          16.19         8.74
HF5 filter used (5th harmonics eliminated)
  Jalalabad S/S                                87.68         3.46
  Jalalabad S/S bus 1                          16.28         5.83
  Jalalabad S/S bus 2                          16.28         5.83
HF7 filter used (7th harmonics eliminated)
  Jalalabad S/S                                87.89         2.53
  Jalalabad S/S bus 1                          16.39         3.64
  Jalalabad S/S bus 2                          16.39         3.64
HF11 filter used (11th harmonics eliminated)
  Jalalabad S/S                                88.1          1.19
  Jalalabad S/S bus 1                          16.51         1.56
  Jalalabad S/S bus 2                          16.51         1.56
HF13 filter used (13th harmonics eliminated)
  Jalalabad S/S                                88.33         0.625
  Jalalabad S/S bus 1                          16.64         0.895
  Jalalabad S/S bus 2                          16.64         0.895

abnormal situations are resolved, so that the network is reliable and in the most efficient operating condition. Harmonic analysis is performed on the Jalalabad substation because, in this remote district of Afghanistan, most people use on-grid solar rooftop systems, which introduce a lot of harmonics into the network. In this research work, the harmonic level of the buses is controlled using harmonic filters, and the network operates within safe and normal operating conditions.

Table 2 THD and voltage level on each substation

                          RMS voltage   %THD
Kabul east substation     108.3         0.36
Kabul east S/S bus 1      19.7          0.275
Kabul east S/S bus 2      19.68         0.269
Jalalabad substation      92.02         0.576
Jalalabad S/S bus 1       16.72         0.895
Jalalabad S/S bus 2       16.72         0.895
Gulbahar substation       91.37         0.598
Gulbahar S/S bus 1        16.17         0.374
Gulbahar S/S bus 2        16.19         0.373

Filter effect on THD reduction

                          %THD (before using filters)   %THD (after using filters)
Kabul east substation     2.34                          0.36
Kabul east S/S bus 1      2.11                          0.275
Kabul east S/S bus 2      2.09                          0.269
Jalalabad substation      4.38                          0.576
Jalalabad S/S bus 1       8.9                           0.895
Jalalabad S/S bus 2       8.9                           0.895
Gulbahar substation       4.37                          0.598
Gulbahar S/S bus 1        3.68                          0.374
Gulbahar S/S bus 2        3.68                          0.373


References

1. Mohla D (2021) IAS standards: built for safe operation and maintenance [standards news]. IEEE Ind Appl Mag 27(4):72–75
2. Garces A (2015) A linear three-phase load flow for power distribution systems. IEEE Trans Power Syst 31(1):827–828
3. Anthony MA, Harvey JR (2015) Coordinating the development cycles of the IEEE 3000 series recommended practices with the NFPA 70 series documents. In: 2015 IEEE/IAS 51st industrial & commercial power systems technical conference (I&CPS). IEEE
4. Jangra J, Vadhera S (2017) Load flow analysis for three phase unbalanced distribution feeders using Matlab. In: 2017 2nd international conference for convergence in technology (I2CT). IEEE
5. Alsulami WA, Kumar RS (2017) Artificial neural network based load flow solution of Saudi national grid. In: 2017 Saudi Arabia smart grid (SASG). IEEE
6. Parihar SS, Malik N (2018) Load flow analysis of radial distribution system with DG and composite load model. In: 2018 international conference on power energy, environment and intelligent control (PEEIC). IEEE
7. Siddique A et al (2019) Load flow analysis of 132/11 KV grid station Bahawalpur region Pakistan and its voltage improvement through FACTS devices using ETAP. In: 2019 IEEE innovative smart grid technologies-Asia (ISGT Asia). IEEE
8. Kalair A et al (2017) Review of harmonic analysis, modeling and mitigation techniques. Renew Sustain Energy Rev 78:1152–1187
9. Khosravi N et al (2021) Improvement the harmonic conditions of the AC/DC microgrids with the presence of filter compensation modules. Renew Sustain Energy Rev 143:110898
10. Sakar S et al (2017) Increasing PV hosting capacity in distorted distribution systems using passive harmonic filtering. Electr Power Syst Res 148:74–86
11. Vinayagam A et al (2019) Harmonics assessment and mitigation in a photovoltaic integrated network. Sustain Energy Grids Netw 20:100264
12. Dai JJ, Shokooh F (2021) Industrial and commercial power system harmonic studies: introduction to IEEE Std. 3002.8–2018. In: 2021 IEEE/IAS 57th industrial and commercial power systems technical conference (I&CPS). IEEE
13. Sharma H, Rylander M, Dorr D (2013) Grid impacts due to increased penetration of newer harmonic sources. In: 2013 IEEE rural electric power conference (REPC). IEEE

Sizing and Optimization of Hybrid Energy Storage System with Renewable Energy Generation Plant

Neha Bharti, Sachin Kumar, and Paras Chawla

Abstract Nowadays, green power technology is widely adopted by power system utilities to maintain a healthy environment. Wind and solar energy are mainly used for green generation. Earlier these sources were used individually, but the new trend is to build a hybrid system to produce more power to meet the energy demand. The hybrid system produces an ample amount of power, which is distributed according to the load demand, with the rest stored in batteries. A new system for storing more power combines lithium-ion batteries with supercapacitors. So, this paper discusses the sizing of renewable power plants, the availability of natural sources in Rampur Bsr., Shimla, and the optimization of the hybrid energy storage system (HESS) with renewable energy plants. HOMER Grid software is used for sizing and checking the availability of sources, and MATLAB is used to optimize the HESS, for which a PSO algorithm is developed.

Keywords Renewable energy sources · HESS · HOMER grid · MATLAB · PSO · Hybrid storage · Solar and wind

1 Introduction These days the consumption of electric power is growing very rapidly which is causing stress over conventional power plants for meeting the energy demand. Due to this, carbon emission is increasing, there is the problem of global warming, and many more. So, the government is focusing on the use of renewable power plants which produce green power and are abundantly available on Earth’s surface. Traditionally renewable sources like solar, wind, tidal, etc., were used individually to generate power, but nowadays these sources are combined and make it a hybrid power N. Bharti · S. Kumar (B) · P. Chawla (B) Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, India e-mail: [email protected] P. Chawla e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_12


N. Bharti et al.

system. Various hybrid power system and commercial software for Evaluation are like HOMER, Hybrid2, RETScreen, TRNSYS, etc. HOMER Grid and HOMER Pro are there, which are most common software for sizing of hybrid model for renewable sources. Data considered in the HOMER is collected from the National Renewable Energy Laboratory or NASA’s prediction of worldwide energy resources. From this software, the sizing of the plant can be determined and it also shows the availability of natural resources. Similarly, Oulis Rousis et al. [1] have performed the case study on residential consumers in islands in which a multi-objective and non-derivative optimization is taken in this residential application; the main aim is to minimize the system cost and no load shedding. Further, the emission of CO2 is calculated to indicate the environmental benefits. The location selected for the renewable power plant is in Rampur Bsr. The availability of the resources is configured by HOMER grid. To reduce the stress on lithiumion batteries; these are combined with supercapacitors making the system hybrid for storage [2], which intends to store 10 times more power as compared to simple Li-ion battery. To optimize the power in a hybrid energy storage system, an algorithm that is used is Particle Swarm Optimization (PSO), which is used to obtain the global optimum solution for the storage system. Battery capacity optimization with renewable energy is done with the help of different algorithms, in which the traditional method is the PSO algorithm which is easy to use. In the PSO algorithm, the data is used in the calculation process, and moreover, the globally optimal and locally optimal information of every iteration is done. As explained in [3], that introduced a system which is energy efficient and utilizes the multi-objective particle swarm optimization for minimizing the cost of operation in micro-grids and increases power generation too. 
The PSO algorithm has the advantage of strong global searchability. The standalone hybrid renewable power plant depends upon a PV panel and a wind turbine with a battery bank so as to meet the load demand in Rampur Bsr. For solving hybrid system optimization problems like optimal sizing, optimal cost of energy (COE), and optimal energy management, the state of charge (SOC) of the storage system is considered. This increases the reliability and planning extensions for future work. So, in this paper, a particle swarm optimization (PSO) algorithm is developed with the help of MATLAB software which optimizes the whole life cycle cost of the system, i.e., the Total Net Present Cost (TNPC). HOMER Grid software helps in managing the size of the hybrid power plant, and the sizing of the different parameters is done.
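As a hedged sketch of what "optimizing TNPC" means numerically: with the project's stated 25-year lifetime and 10% discount rate, recurring annual costs are discounted back to the present through the capital recovery factor (CRF). The helper below illustrates that bookkeeping; it is not HOMER's or the paper's actual implementation, and the example cost figures are arbitrary.

```python
def total_net_present_cost(annual_cost, capital, lifetime=25, discount=0.10):
    """TNPC sketch: capital plus recurring costs discounted over the project.

    Uses the capital recovery factor CRF = i(1+i)^n / ((1+i)^n - 1); the
    25-year lifetime and 10% discount rate mirror the settings used in
    this paper's HOMER model. Illustrative only.
    """
    crf = discount * (1 + discount) ** lifetime / ((1 + discount) ** lifetime - 1)
    return capital + annual_cost / crf

# Rs. 1000/yr of recurring cost over 25 years at 10% is worth ~Rs. 9077 today.
print(total_net_present_cost(1000.0, 0.0))
```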

2 Literature Survey Mbungu et al. [4] have worked on matching the energy flow in the residential application system in combination with intelligent demand management control strategies. Mamun et al. [5] have investigated the energy density of HESS and reduced the cost by 7.3%. Designing and controlling was done with the help of the PSO algorithm.

Sizing and Optimization of Hybrid Energy Storage System …


Kollmeyer et al. [6] have modeled the determination of the power split between the battery and the supercapacitor; a real-time control system is shown which boosts the power capability of the battery pack used. Gabbar et al. [7] have introduced the hybridization of storage for making the system reliable, sustainable, flexible, and cost-efficient with the help of an artificial optimization algorithm for minimizing the net present cost (NPC). Yousaf et al. [8] have performed a case study on smart switching of the algorithm to find the best available generation resource in case of any interruption in power supply. Shafik et al. [9] have developed a framework for operation optimization of hybrid renewable energy resources combining PV/Diesel/Wind units. Different scenarios of hybridization of energy resources were considered using HOMER Pro software.

3 Model Description

A case scenario from Rampur Bsr. of Shimla district, HP, is studied. A small off-grid system is made with the help of HOMER Grid for small-scale consumers in Rampur, as shown in Fig. 1. This system comprises a solar PV panel, wind turbine, generator, converter, and lithium-ion batteries.

Energy Resources Available

• Solar Radiation: The average irradiance available in Rampur Bsr. is shown in Fig. 2. The amount of solar radiation (global horizontal irradiance, i.e., GHI) is taken from HOMER Grid; as perceived from the calculated data, the solar radiation reaches an annual average of 5.19 kWh/m2/day. The average daily radiation observed is 5.27 kWh/m2/day.
• Wind Data: The average wind speed calculated from the software is observed as 2.85 m/s, as shown in Fig. 3. The observed wind speed is rather low for running the wind turbine.

Fig. 1 Proposed hybrid power plant


N. Bharti et al.

Fig. 2 Solar radiation/month

Fig. 3 Wind speed/month

• Load Data: The observed load data for the whole day (24 h) is shown in Fig. 4.

A. Components for Standalone Power System

For making a standalone power system, different components are used, and their simulation is done in HOMER Grid. The project lifetime is presumed to be 25 years with a discount rate of 10%. Following are the components:

a. Diesel Generator: To cover the peak load of 2.32 kW, a generic diesel generator of 3 kW is used. The initial capital cost is Rs. 54,000 and the operation and maintenance cost is taken as 5000 Rs./h. The operating lifetime is 15,000 h.

Fig. 4 Load data/day



The cost of diesel is taken as Rs. 80. Further, the rating of the generator can be varied as per the requirements as 2, 5, and 10 kW; HOMER Grid simulation is used to find the optimal size.

b. Solar PV Panels: The capital cost for 1 kW is taken as Rs. 65,000, whereas Rs. 40,000 and Rs. 10 are the maintenance & operation cost and the replacement cost, respectively. Generic flat-plate panels are considered. Different PV array sizes of 1, 2, and 3 kW are assumed, and the optimal size is calculated through simulation in HOMER Grid. The PV array's lifetime is taken as 25 years.

c. Wind Turbine: The simulation in HOMER Grid used a wind turbine model costing Rs. 40,000/kW of rated capacity. The turbine output is 220 V AC, as it is connected to the AC bus bar. The replacement cost and the operation and maintenance cost are presumed as 29,000 Rs./yr and 150 Rs./yr, respectively. Sizes of 0, 1, 2, and 3 kW are considered in HOMER. The lifetime of the wind turbine is taken as 20 years, and the sensitivity value of the hub height is taken as 13 m.

d. Battery: A lithium-ion battery of 1 kW is selected for improving the system efficiency. The capital cost of the battery is Rs. 7000. The replacement cost and the operation and maintenance cost are presumed as 1000 Rs./yr and 20 Rs./yr, respectively. HOMER selects the optimum configuration from 0, 1, 2, 3, and 24 batteries.

e. Converter: A power converter maintains the flow of energy between the DC and AC buses. The capital cost of a 1 kW converter is Rs. 17,500. The replacement cost and the operation and maintenance cost are presumed as 15,000 Rs./yr and 50 Rs./yr, respectively. The lifetime of the converter is 10 years and its efficiency is taken as 96% (Table 1).

f. PSO Algorithm: Inspired by the collective behavior of birds, fish, ant colonies, etc., this algorithm uses swarm particles that share information to find the optimal solution.
Following are the three equations that this algorithm uses:

$v_i^{t+1} = \omega v_i^t + c_1 r_1 \left(xBest_i^t - x_i^t\right) + c_2 r_2 \left(gBest_i^t - x_i^t\right)$  (1)

$x_i^{t+1} = x_i^t + v_i^{t+1} \Delta t$  (2)

Table 1 Components

Components        Value/Information
PV panel          1 kW, 2 kW, 3 kW; generic flat plate type
Converter         1 kW; off-grid type
Battery (Li-ion)  1 kW, 2 kW, 3 kW; Li-ion
Generator         Diesel type, 3 kW, 1800 RPM


$\omega = \dfrac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}, \quad \varphi = c_1 + c_2 > 4$  (3)

where $i$, $x_i$, and $v_i$ denote the particle, the particle's position vector, and the particle's velocity vector, respectively. $xBest$ and $gBest$ are the particle's best position and the group's best position, respectively. $\omega$, $c_1$, and $c_2$ are the constriction coefficient, the individual acceleration coefficient (2.05), and the social acceleration coefficient (2.05), respectively. $r_1$ and $r_2$ are random numbers within [0, 1] [7].
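The constriction coefficient of Eq. (3) can be checked numerically. Below is a minimal Python sketch (the paper's own implementation is in MATLAB; the function name is ours):

```python
import math

def constriction_coefficient(c1: float, c2: float) -> float:
    """Clerc-Kennedy constriction coefficient, Eq. (3); requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("phi = c1 + c2 must exceed 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# With the paper's values c1 = c2 = 2.05, phi = 4.1:
omega = constriction_coefficient(2.05, 2.05)
print(round(omega, 4))  # -> 0.7298, the widely used constriction value
```

With $c_1 = c_2 = 2.05$ this reproduces the standard $\omega \approx 0.7298$ used in constriction-factor PSO.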

4 Methodology

In Rampur Bsr., the availability of different natural resources, such as solar radiation and wind speed, was checked with the help of the HOMER Grid software. As the proposed area is hilly, only these two resources are available. HOMER Grid calculated all the parameters for the power plant. This paper focuses on making the system reliable, calculating the state of charge of the battery, optimally sizing the system, and planning extensions for future development. Considering economic profitability, the COE and TNPC for the proposed plant are calculated as follows. The net present cost of the system is determined as

$C_{NPC} = C_{ann,tot} / \mathrm{CRF}(i, R_{proj})$  (4)

where $C_{ann,tot}$ is the total annualized cost, $i$ is the annual interest rate, $R_{proj}$ is the project lifetime, and $\mathrm{CRF}(i, N)$ is the capital recovery factor. The cost of energy (COE) is calculated as

$\mathrm{COE} = C_{ann,tot} / E_{prim}$  (5)

where $E_{prim}$ is the primary load served [10]. HOMER describes all the possible solutions for making an off-grid plant, which are then refined by the PSO algorithm.

5 Simulation

MATLAB R2015a is used to optimize the battery used in the renewable power system. The flow chart is shown in Fig. 5.

Step 1: The PSO algorithm reads all the data inserted in the simulation, such as the load of the area, the available resources, the storage system used, and the economic parameters of every component.
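Equations (4) and (5) can be evaluated directly once the capital recovery factor is written out; CRF(i, N) = i(1 + i)^N / ((1 + i)^N - 1) is the standard definition used by HOMER. A Python sketch with illustrative cost figures (not values from the paper):

```python
def crf(i: float, n: int) -> float:
    """Capital recovery factor CRF(i, N) for interest rate i and lifetime n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def net_present_cost(c_ann_tot: float, i: float, n: int) -> float:
    """Eq. (4): C_NPC = C_ann,tot / CRF(i, R_proj)."""
    return c_ann_tot / crf(i, n)

def cost_of_energy(c_ann_tot: float, e_prim: float) -> float:
    """Eq. (5): COE = C_ann,tot / E_prim."""
    return c_ann_tot / e_prim

# Illustrative only: Rs. 12,000/yr annualized cost, 10% discount rate,
# 25-year project, serving 4100 kWh/yr of primary load.
npc = net_present_cost(12_000.0, 0.10, 25)
coe = cost_of_energy(12_000.0, 4_100.0)
```

At a 10% discount rate over 25 years, CRF is about 0.1102, so the NPC is roughly nine times the annualized cost.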


Fig. 5 PSO algorithm

Step 2: The iteration number and population size are initialized, and the different coefficients (inertia, acceleration) and the number of variables are selected. The iteration counts are set to 10 for the PV panels, 100 for the wind turbine, and 100 for the storage (kW). After this, every particle is set to find the global best values, as shown in the flow chart in Fig. 5.

Step 3: The best positions and velocities of the particles are updated using Eqs. (1)–(3). Once the given number of iterations is completed, the simulation ends.
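Steps 1–3 above can be sketched as a compact PSO loop. This is an illustrative Python implementation under the assumption of a generic cost function standing in for the TNPC objective; the particle and iteration counts are parameters:

```python
import random

def pso(cost, lb, ub, n_particles=20, n_iter=100, c1=2.05, c2=2.05, omega=0.7298):
    """Minimize cost(x) over box bounds [lb, ub] with constriction-factor PSO."""
    dim = len(lb)
    # Step 1: initialize particle positions within the bounds, zero velocities.
    x = [[random.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_cost = [cost(xi) for xi in x]
    g = min(range(n_particles), key=lambda k: pbest_cost[k])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    # Steps 2-3: iterate, updating velocities/positions with Eqs. (1)-(2).
    for _ in range(n_iter):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[k][d] = (omega * v[k][d]
                           + c1 * r1 * (pbest[k][d] - x[k][d])
                           + c2 * r2 * (gbest[d] - x[k][d]))
                x[k][d] = min(max(x[k][d] + v[k][d], lb[d]), ub[d])
            c = cost(x[k])
            if c < pbest_cost[k]:
                pbest[k], pbest_cost[k] = x[k][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = x[k][:], c
    return gbest, gbest_cost

# Toy quadratic objective standing in for the TNPC; minimum is at (1, 2).
random.seed(7)
best, best_cost = pso(lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2,
                      [0.0, 0.0], [3.0, 3.0])
```

The same loop applies to the sizing problem once `cost` evaluates the TNPC of a candidate (PV, wind, storage) configuration.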


6 Advantages of Using SAPSO Algorithm for Optimization

• Large spaces can be searched for finding the optimal solution.
• It is a simple concept, easy to implement, robust, and computationally efficient compared with other mathematical algorithms.

7 Why is the PSO Algorithm Better?

• It was developed by Kennedy and Eberhart in 1995, motivated by animal social behavior such as bird flocking and ant colonies.
• It works on the social behavior of particles in a swarm.
• It is easy to use for finding the global best solution for each individual particle.
• It has the ability to converge quickly to a reasonably good solution.

8 Results

After inserting the data in the HOMER Grid software, it is concluded that the consumers in Rampur Bsr. have an average residential consumption of 11.25 kWh/day, as shown in Fig. 4, whereas the production from the PV panels is 0.4 MWh through a daily radiation of 5.27 kWh/m2/day; the radiation data are shown in Fig. 2 (Figs. 6 and 7).

Fig. 6 PV production from panels

Fig. 7 Optimized combination of a 1 kW PV panel and a 2.4 kW diesel generator with a 1.08 kW converter, at a COE of Rs. 2.64


Fig. 8 Power generation of each component

Fig. 9 Electricity production/month

The best solution obtained after the simulation is the combination of a 1 kW PV panel and a 2.4 kW diesel generator with a 1.08 kW converter, connected to a 6 kW lithium-ion battery. The net present cost of the system is Rs. 111,037 with a renewable fraction of 98.3%, as shown in Fig. 8 (Fig. 9). The HOMER Grid results indicate the best available resource power plant that can be built at the proposed location, as also shown in Fig. 7. The PSO algorithm performs the cost optimization of the hybrid storage system used (Li-ion battery and supercapacitor). Figure 10 shows the iterations of the PSO algorithm, and Fig. 11 shows those of MO-PSO. The PSO algorithm yields the best optimal cost for the system, as it calculates the lowest TNPC and also calculates the SOC of the battery, shown in Fig. 12. This work also aimed at making the optimization fast, which makes the system more economical than other hybrid systems. The proposed location has better solar availability than wind. The problem of storing power is solved by combining batteries with supercapacitors; the corresponding TNPC is shown in Fig. 10.


Fig. 10 Iterations of PSO

Fig. 11 Iterations of MO-PSO

Fig. 12 SOC of lithium-ion battery of 6 kW



9 Conclusion

This paper has presented work on a renewable power plant in Rampur Bsr. in Shimla, HP. The problems of optimal sizing, optimal project cost, and building a renewable power plant in villages have been addressed. As per the simulation work, PV production of 0.4 MWh is estimated with a cost of Rs. 111,037 (TNPC) for the best configuration. The lifetime of the project is 25 years. From the data provided by HOMER Grid, the total consumption was found to be 11.25 kWh/day, and the power produced by the PV panels is 0.4 MWh through a daily radiation of 5.27 kWh/m2/day at the location. For storing a larger amount of power, lithium-ion batteries are combined with supercapacitors. This hybrid energy storage system (HESS) makes the system reliable and stores 10 times more power. A MATLAB simulation was carried out to calculate the TNPC of the system, and the PSO algorithm was used to reach the optimal solution at nearly double the speed. The optimal solution of the system was 80% better than that of traditional methods such as the genetic algorithm and simulated annealing, and only 20 iterations were required. The PSO algorithm has a good capability of finding the global best solution in less time. The renewable fraction calculated was 98.3%. For sizing the renewable power plant, the HOMER Grid software was used, and to calculate the optimal cost, the PSO algorithm was designed.

References

1. Oulis Rousis A et al (2018) Design of a hybrid AC/DC microgrid using HOMER Pro: case study on an islanded residential application. Inventions 3(3):55
2. Carreira D, Marques GD, Sousa DM (2014) Hybrid energy storage system joining batteries and supercapacitors. In: 2014 IEEE 5th international symposium on power electronics for distributed generation systems (PEDG). IEEE
3. Elgammal A, El-Naggar M (2018) Energy management in smart grids for the integration of hybrid wind–PV–FC–battery renewable energy resources using multi-objective particle swarm optimisation (MOPSO). J Eng 11:1806–1816
4. Mbungu NT, Bansal RC, Naidoo R (2019) Smart energy coordination of a hybrid wind/PV with battery storage connected to grid. J Eng 18:5109–5113
5. Mamun A-A et al (2018) An integrated design and control optimization framework for hybrid military vehicle using lithium-ion battery and supercapacitor as energy storage devices. IEEE Trans Transp Electrification 5(1):239–251
6. Kollmeyer PJ et al (2019) Real-time control of a full scale Li-ion battery and Li-ion capacitor hybrid energy storage system for a plug-in hybrid vehicle. IEEE Trans Ind Appl 55(4):4204–4214
7. Gabbar HA, Abdussami MR, Adham MI (2020) Optimal planning of nuclear-renewable micro-hybrid energy system by particle swarm optimization. IEEE Access 8:181049–181073
8. Yousaf S et al (2020) A comparative analysis of various controller techniques for optimal control of smart nano-grid using GA and PSO algorithms. IEEE Access 8:205696–205711
9. Shafik MB, Rashed GI, Chen H (2020) Optimizing energy savings and operation of active distribution networks utilizing hybrid energy resources and soft open points: case study in Sohag, Egypt. IEEE Access 8:28704–28717
10. Kachris C, Tomkos I (2014) Energy-efficient optical interconnects in cloud computing infrastructures. In: Communication infrastructures for cloud computing. IGI Global, pp 224–240

Evaluation of THD and Voltage Instability for Interconnected Hybrid Solar and Wind Power Generation

Teena Thakur, Harvinder Singh, Birinderjit Singh Kalyan, and Himani Goyal Sharma

Abstract The demand for renewable energy has increased rapidly in the last few years; wind and solar in particular are the two most used non-conventional energy resources today, and a lot of research work is going on in this field. In this paper, a model of a hybrid solar and wind power system is simulated, with both sources connected to an interconnected grid. The model is evaluated based on the THD (total harmonic distortion) and the voltage instability in the system. A FACTS device is used to increase the system's overall power efficiency.

Keywords Interconnected power network · Wind power · Solar module · THD (total harmonics distortion) · Voltage stability · ETAP software

1 Introduction

Electrical energy is essential for society, especially for today's technology. There are various types of energy sources that can be used for various purposes [1]. Broadly, two types of energy sources are currently in use: renewable energy, which includes wind, solar, and hydro, and conventional energy, which includes oil, gas, and other fossil fuels [2]. Since conventional energy is diminishing and produces gas emissions, a shift toward renewable energy is needed. For supplying electricity to remote areas, renewable energy is critical. Solar energy is the energy emitted by the sun, which is transformed into electrical energy by the use of various components. In wind energy, mechanical energy is produced by wind turbines, and this energy is used to produce electricity [3]. Compared with other Asian countries, India generates the most renewable power [4].

T. Thakur (B) · H. Singh · B. Singh Kalyan · H. G. Sharma
Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab 140301, India
B. Singh Kalyan e-mail: [email protected]
H. G. Sharma e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_13




Fig. 1 Block dig of solar or wind system

Renewable energy sources account for 38% of total power generation as of November 27, 2020, with the Ministry of New and Renewable Energy, Government of India, aiming for a total electricity capacity of about 57% from non-conventional energy sources [5]. The simulation of a model for a hybrid solar and wind power system is carried out in this paper, with the methodology approach shown in Fig. 1. A harmonic filter is used as a controller in the system to minimize THD, and voltage instability is reduced to a large extent using FACTS devices [6]. A notch filter is used in this work to improve power efficiency. Harmonic distortion is minimized, and the single-phase grid is connected using the PCC (point of common coupling).

2 Literature Review

Several IEEE journal and conference articles have addressed this area, each with its specific contribution. The modeling and simulation of hybrid solar and wind power is discussed in [3], while harmonic reduction using various strategies is discussed in [7]. The current-waveform distortion has been decreased and the system's power efficiency increased; as a result, the quality of power is improved, with MATLAB used for simulation [3]. The photovoltaic output is improved using a controller in [8]. To limit total harmonic distortion (THD), the IEEE 519 standard is used [9]. In [10], the grid is linked to the solar PV system, a cascaded H-bridge inverter is used to convert DC to AC, and the model is simulated in MATLAB. A microgrid combining wind and solar PV power with battery energy storage (BES) feeds a nonlinear load through a three-phase grid. The NARMA-L2 controller is used with advanced techniques in the hybrid solar PV system model, and PMSGs are used in the wind system to reduce distortion [11]. The entire paper is based on a phase diagram of a


multi-energy hybrid power system. This multi-energy [12] hybrid power system is investigated using a simulation model: inverters are used to power the wind generator, and the findings reveal that wind and wave energy can be used to produce electrical energy [12, 13]. A notch filter is used in [14] to improve power efficiency. Harmonic distortion is minimized, and the single-phase grid is connected using the PCC [15]. The harmonic-technique-based [16] model FFHD (flexible extended harmonic domain) is used in the standalone PV system [17].

3 Flow Chart of Methodology

See Fig. 2.

Fig. 2 Methodology approach


4 Simulation Model and Results

4.1 Simulation of the Model for Hybrid Solar and Wind Power System

This section presents the results of a model simulation for a hybrid solar and wind power system. Three renewable energy plants (wind, solar, and hydropower) are connected to one network. The voltage on each bus is stabilized near the marginal value (95%), and a load flow of the model is performed to check the voltage status of each bus and line (Fig. 3; Table 1).
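The 95% marginal-limit screening described above can be expressed as a simple per-unit voltage check (a Python sketch; the bus voltages below are illustrative, not the ETAP load-flow output):

```python
def within_marginal_limit(v_pu: float, lower: float = 0.95, upper: float = 1.05) -> bool:
    """True if a per-unit bus voltage lies inside the allowed band."""
    return lower <= v_pu <= upper

# Hypothetical per-unit magnitudes for a few buses.
bus_voltages = {"BUS 1": 0.98, "BUS 3": 1.00, "BUS 5": 0.96, "BUS 13": 0.93}
violations = [bus for bus, v in bus_voltages.items() if not within_marginal_limit(v)]
print(violations)  # -> ['BUS 13'], the bus that would need reactive support
```

Buses flagged this way are the candidates for harmonic filters or FACTS compensation in the later sections.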

Fig. 3 Load flow diagram

Table 1 Generation buses and voltage magnitudes on each generation bus

BUS ID   KV       Type             MW
BUS 1    10.500   Voltage control  44.000
BUS 3    10.000   Voltage control  60.000
BUS 5    11.000   Voltage control  17.000
BUS 6    110.000  Swing
BUS 13   20.000   MVR/PF control   0.106
BUS 15   10.500   MVR/PF control   0.300


Fig. 4 Harmonic source from library

4.2 Evaluation and Reduction of the THD (Total Harmonic Distortion) of the System

A sinusoidal waveform is changed into a non-sinusoidal waveform due to the presence of harmonics in buses 12 and 13 of the network. Harmonics cause interference with communication systems, overheating of equipment, and overload in the system [7]. The harmonic signals are visible in the simulation waveforms. In this paper, harmonic filters are used to reduce the harmonic level in buses 12 and 13 of the network, which otherwise distorts the normal waveform. Reducing the harmonic level is important for increasing the power quality of the network [10]. The power quality of the solar PV and wind system is improved by using a harmonic filter (Figs. 4, 5, 6, 7, 8, 9, 10 and 11; Table 2).
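THD as evaluated here is the ratio of the RMS of the harmonic components to the fundamental. A minimal Python sketch (the spectrum values are illustrative, not the ETAP report):

```python
import math

def thd_percent(fundamental, harmonics):
    """THD (%) = 100 * sqrt(sum of squared harmonic magnitudes) / fundamental."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical bus voltage spectrum: fundamental plus 5th, 7th, 11th harmonics.
thd = thd_percent(100.0, [3.0, 1.5, 0.8])
print(round(thd, 2))  # -> 3.45
```

A harmonic filter works by shrinking the individual harmonic magnitudes, which is what drives the large THD reduction reported in Table 2.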

Fig. 5 Critical report


Fig. 6 VIHD (individual harmonic distortion) report

Fig. 7 Bus 12 and bus 13 voltage spectrum versus harmonics order



Fig. 8 Bus 12 and bus 13 harmonic distorted waveform

Fig. 9 Bus tabulation

Fig. 10 Bus 12 and bus 13 voltage spectrum versus harmonics order




Fig. 11 Bus 12 and bus 13 harmonic distorted waveform

Table 2 Harmonic level before and after using filters

Before using filter
Bus     Bus voltage (kV)  THD (%)
Bus 12  109.5             3.43
Bus 13  20.01             7.02

After using filter
Bus     Bus voltage (kV)  THD (%)
Bus 12  113.2             0.081
Bus 13  23.66             0.244

5 Conclusion

In this paper, the simulation of a model for a hybrid solar and wind power system is performed in which the voltage instability issues are solved, the voltage is within its marginal limit on each bus, and the power factor is within the allowable range. The improvement of power quality, which is the main concern in any hybrid power system, is achieved: voltage swell and dip along with total harmonic distortion are mitigated using harmonic filters and FACTS devices.

1. A simulation model of a hybrid solar and wind power system with an interconnected grid is performed.
2. Total harmonic distortion (THD) is reduced by a harmonic filter, and the harmonic waveforms are shown in the graphs.
3. Voltage is stabilized using FACTS devices and is within the marginal limit on each bus.


References

1. Zade AB et al (2016) Hybrid solar and wind power generation with grid interconnection system for improving power quality. In: 2016 IEEE 1st international conference on power electronics, intelligent control and energy systems (ICPEICES). IEEE
2. Narayanan V, Kewat S, Singh B (2019) Standalone PV-BES-DG based microgrid with power quality improvements. In: 2019 IEEE international conference on environment and electrical engineering and 2019 IEEE industrial and commercial power systems Europe (EEEIC/I&CPS Europe). IEEE
3. Helać V, Hanjalić S (2017) Modeling and the impact on power quality of hybrid solar-wind power plants. In: 2017 6th international youth conference on energy (IYCE). IEEE
4. Gupta TN, Murshid S, Singh B (2018) Power quality improvement of single-phase grid connected hybrid solar PV and wind system. In: 2018 IEEE 8th power India international conference (PIICON). IEEE, pp 1–6
5. Rahimi K, Mohajeryami S, Majzoobi A (2016) Effects of photovoltaic systems on power quality. In: 2016 North American power symposium (NAPS). IEEE
6. Nagaraj C, Sharma KM (2018) Integration of hybrid solar-wind energy sources with utility grid for improving power quality. In: 2018 3rd IEEE international conference on recent trends in electronics, information & communication technology (RTEICT). IEEE
7. Jayasankar VN, Vinatha U (2016) Implementation of adaptive fuzzy controller in a grid connected wind-solar hybrid energy system with power quality improvement features. In: 2016 biennial international conference on power and energy systems: towards sustainable energy (PESTSE). IEEE, pp 1–5
8. Agrawal S et al (2019) Power quality enhancement of soft computing FLC MPPT based standalone photovoltaic system. In: 2019 2nd international conference on power energy, environment and intelligent control (PEEIC). IEEE
9. Parija B et al (2019) Power quality improvement in hybrid power system using D-STATCOM. In: 2019 3rd international conference on computing methodologies and communication (ICCMC). IEEE
10. Mukundan CMN, Jayaprakash P (2018) Cascaded H-bridge multilevel inverter-based grid integration of solar power with PQ improvement. In: 2018 IEEE international conference on power electronics, drives and energy systems (PEDES). IEEE
11. Chen M et al (2017) Design and simulation of multi-energy hybrid power system based on wave and wind energy. In: 2017 20th international conference on electrical machines and systems (ICEMS). IEEE
12. Bhoi AK, Sherpa KS, Kalam A, Chae GS (eds) (2020) Advances in greener energy technologies. Springer
13. Gupta TN, Murshid S, Singh B (2019) Power quality improvement of single-phase weak grid interfaced hybrid solar PV and wind system using double fundamental signal extracter-based control. IET Gener Transm Distrib 13(17):3988–3998
14. Seyedalipour SS, Aalami HA, Barzegar A (2017) A novel control technique for stable operation of four-leg shunt active power filters in electrical grids. In: Conference on electrical power distribution networks (EPDC), Semnan, Iran, pp 175–181
15. Vargas U et al (2019) Harmonic modeling and simulation of a stand-alone photovoltaic-battery-supercapacitor hybrid system. Int J Electr Power Energy Syst 105:70–78
16. Rekioua D, Zaouche F, Hassani H, Rekioua T, Bacha S (2019) Modeling and fuzzy logic control of a stand-alone photovoltaic system with battery storage. Turk J Electromechanics Energy 4(1)
17. Hussain I, Singh B, Mishra S (2019) Optimal operation of PVDG-battery based microgrid with power quality conditioner. IET Renew Power Gener 13(3):418–426



18. Chishti F, Murshid S, Singh B (2019) Development of wind and solar based AC microgrid with power quality improvement for local nonlinear load using MLMS. IEEE Trans Ind Appl 55(6):7134–7145
19. Pancholi R, Chahar S (2020) Improved PV-wind hybrid system with efficacious neural network technique indeed dynamic voltage restorer. In: 2020 international conference for emerging technology (INCET). IEEE

Analysis and Optimization of Stability in Hybrid Power System Using Statcom

Ankush Lath, Sarbjeet Kaur, and Surbhi Gupta

Abstract With the continuous advancement in technology, human society is rapidly facing the problem of high energy consumption, leading to greater pollution and greenhouse effects; hence the need of the hour is to look at alternatives that are not only eco-friendly and viable but also deliver good results. Renewable energy is therefore the way ahead and will play a pivotal role in the coming years. The aim of this paper is to use such renewable or non-conventional energies to reduce carbon emission without sacrificing power output. The paper applies methods to improve the stability of the hybrid power system using a D-STATCOM with hybrid BES (battery energy storage). A hybrid STATCOM control procedure is also explicated for recovering the stability of the system and the power quality of wind turbines. The reactive power, active power, and network voltage are examined along with wind generation fluctuations through the hybrid STATCOM control procedure.

Keywords Hybrid power system · Solar photovoltaic (SPV) · Distributed generation (DG) · Reactive power compensation (RPC) · Total harmonics distortion (THD) · Distributed static compensator (D-STATCOM)

1 Introduction India being a developing country is in the phase of mass production and as per the government’s new norms, renewable energy production has been highly promoted both for cost cutting and for eco-friendly purposes. Hence, going with the need of the hour, wind and photovoltaic solar energy is a very resourceful alternative [1]. A. Lath (B) · S. Kaur · S. Gupta Department of Electrical Engineering, University Institute of Engineering, Chandigarh University, Gharuan, Mohali, Punjab, India S. Kaur e-mail: [email protected] S. Gupta e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_14




The geographical location of our country also helps in achieving this goal. Hence, a hybrid system utilizing wind and solar energy along with BES serves as an alternative to fossil fuels [2]. A hybrid power system is an autonomous power-generating system that combines two or more energy resources (wind and PV) operated together with supporting tools to distribute power to a grid or location [3]. Hybridization technology gives an opportunity to use renewable energy resources for distributing electrical energy to remote or rural areas. With the aid of such a system, CO2 emissions can be lessened. India is a tropical country, and there is great scope here for solar as well as wind energy. Under the Paris Agreement, the Government of India is also taking bold initiatives to reduce carbon emissions by 2030 with the help of renewable energy resources [4]. Although a hybrid system increases energy production over any cycle, stability is the biggest concern, and FACTS devices are used to cope with this problem [5]. This paper mainly focuses on the analysis of a hybrid system with solar and wind penetration, using a STATCOM to improve the stability as well as the quality of power.

Hybrid Power System: These systems are intended for the generation and utilization of electrical power. They are autonomous of a large, unified electricity grid and combine more than one type of power source. They may range in size from relatively large island systems of many megawatts to individual household power supplies on the order of one kilowatt (Fig. 1). Basically, a hybrid power system is a composite of various complementary energy generation systems. It captures the finest features of the respective energy sources and provides better

Fig. 1 Hybrid power system


grid-quality electricity [6]. Such a system has a high level of dependability and efficacy, and provides long-term performance. Commonly, it has been used as an operational backup to the public grid in the event of weak grids and blackouts.

Merits of Hybrid Power System

• Gives higher efficacy.
• Increases the reliability of the energy supply and helps in diminishing carbon emissions.
• Load management of hybrid power systems is better than that of traditional generators.
• Lower operational and maintenance costs than traditional generators.
• Reduced in-situ commissioning time.
• Despite the high installation cost and tedious control process, the hybrid system is an alternative to the energy crisis as it helps balance demand and generation easily.

2 Power System Stability

See Fig. 2.

2.1 Rotor Angle Stability

The ability of the system to synchronize and manage the torque is defined as rotor angle stability. Commonly, this type of stability is further differentiated into two sub-categories:

A. Small signal stability (stability of minor disturbance)
B. Stability of large disturbance (transient stability) [7]

Fig. 2 Classification of power system stability


2.1.1 Stability of Minor Disturbance

It is the capability of the system to maintain synchronism between the external tie line and the machines following a small, slow disturbance. Such minor disturbances may be due to natural load fluctuations and the behavior of the automatic voltage regulator and governor. The system's steady-state stability analysis is performed on a linearized version of the nonlinear model.

2.1.2 Stability of Large Disturbance

It is the ability of the power system to maintain a state of equilibrium even after a significant transient disturbance [8]. Linearization schemes are not permitted in transient stability analysis; the nonlinear equations must be retained. The outcome depends on the severity of the disturbance and also on the system's initial operating condition.

2.2 Frequency Stability

It is the ability of the system to keep the frequency within a tolerable range. It also sustains a balance between generation and load with the least accidental loss of load. Frequency instability manifests as sustained frequency swings [9].

2.3 Voltage Stability

Voltage stability is the ability to retain an appropriate, nearly constant voltage, i.e., 0.95–1.0 per unit, and to balance reactive power [10]. The crucial factor is the voltage reduction under various power flow conditions. Apart from the voltage reduction, a rise in line current and a consequent increase in reactive losses are also experienced. Due to the increased reactive losses, the voltage magnitude diminishes further [11].

Static Compensator (STATCOM): A STATCOM is a power-electronic device that uses self-commutated switches such as IGBTs and GTOs to control the reactive power flow through a power network, thereby raising the stability of the network. The static compensator is a shunt device; static synchronous condenser (STATCON) is another name for a STATCOM [12]. It belongs to the flexible AC transmission system (FACTS) group of devices. "Synchronous" in STATCOM implies that it either absorbs or generates reactive power in synchronization so as to stabilize the voltage of the whole power system. A STATCOM cannot be rated to hold much extra power beyond its


temporary overload capacity, hence to undo the overload, we can use a hybrid of braking resistor and BES. Working of STATCOM In the above reaction power flow equation, point δ is the point in between the range of V 1 and V 2 . Along these lines in the event that we keep up angle δ = 0, at that point, Reactive power flow will turn into Q = (V1 / X )[V1 − V2 ] In addition, the active power flow will change P = V1 ∗ V2 Sinδ/ X = 0 “To sum up, we can say that progression of dynamic power becomes zero, if the angle between the range of V 1 and V 2 is zero, then and the progression of receptive force totally relies upon (V 1 − V 2 ). Hence, for a stream of reactive force, there are two prospects: 1. 2.

If the greatness of V 2 is less than V 1 , at that point, reactive force will move from source V 1 to V 2 . If the greatness of V 1 is less than V 2 , the reactive force will move from source V 2 to V 1 [13]” (Fig. 3)
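These two relations can be checked numerically. The sketch below is illustrative only (the per-unit bus voltages and line reactance are assumed values, not taken from the paper); the general line-flow expressions reduce to the special cases above when δ = 0:

```python
import math

def line_power_flow(v1, v2, x, delta_rad):
    """Active and reactive power sent from bus 1 to bus 2 across a
    purely reactive line of reactance x (per-unit quantities)."""
    p = v1 * v2 * math.sin(delta_rad) / x           # active power flow
    q = (v1 / x) * (v1 - v2 * math.cos(delta_rad))  # reactive power flow
    return p, q

# With delta = 0, active power is zero and reactive power
# reduces to Q = (V1/X)(V1 - V2), as stated in the text.
p, q = line_power_flow(v1=1.00, v2=0.95, x=0.1, delta_rad=0.0)
print(round(p, 6), round(q, 6))  # 0.0 0.5
```

With V2 greater than V1 the sign of Q flips, matching the second possibility listed above.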

Fig. 3 STATCOM


A. Lath et al.

In the above figure, we can see the diagrammatic structure of a grid-connected STATCOM. It is a shunt-connected voltage source converter (VSC) whose DC link is supported by a Battery Energy Storage (BES) unit; the BES adds the supplementary ability to absorb and provide real power. The wind plant output and the terminal voltage are measured and fed to the control circuit. Thus, depending on the application of the circuit, the STATCOM-BES eradicates the disturbances [14].

3 System Configuration and Models for Stability Analysis of HPS

3.1 Modeling of SPV System

The primary component of a PV array is the solar photovoltaic cell. To create an SPV unit/module, solar photovoltaic cells are connected in series with each other; these SPV modules are then combined to form an SPV array. The circuit diagram of a solar photovoltaic cell is shown in Fig. 4. The mathematical expression for an ideal solar photovoltaic cell is given below.

I = Ipv − I0 (e^(q·Vd/(a·k·T)) − 1)  (1)

V = Vd − I·Rs  (2)

Here, Ipv is the current generated by the solar irradiance coming directly from the sun, k is the Boltzmann constant, I0 is the leakage (saturation) current of the diode, q is the electron charge, T is the temperature (in Kelvin) at the P-N junction, Vd is the voltage across the diode, and a is the ideality factor of the diode [15]. Based on the above equations, the simulation model of the solar photovoltaic cell is shown in Figs. 4 and 5.

Fig. 4 Equivalent circuit of SPV cell
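Equations (1)-(2) can be evaluated directly. In the sketch below, the photocurrent, saturation current, ideality factor, series resistance, and operating point are illustrative placeholders, not the values used in the paper's simulation:

```python
import math

# Physical constants appearing in Eq. (1)
q = 1.602176634e-19   # electron charge (C)
k = 1.380649e-23      # Boltzmann constant (J/K)

def spv_cell_current(v_d, i_pv, i_0, a, t):
    """Eq. (1): diode-law current of an ideal SPV cell at diode voltage v_d."""
    return i_pv - i_0 * (math.exp(q * v_d / (a * k * t)) - 1.0)

def spv_cell_voltage(v_d, i, r_s):
    """Eq. (2): terminal voltage after the series-resistance drop."""
    return v_d - i * r_s

# Illustrative operating point (assumed values, cell at 25 degrees C)
i = spv_cell_current(v_d=0.5, i_pv=8.0, i_0=1e-9, a=1.3, t=298.15)
v = spv_cell_voltage(v_d=0.5, i=i, r_s=0.01)
print(i > 0.0 and v < 0.5)  # True
```

At the chosen point the diode term subtracts only a few milliamps from the photocurrent, and the series resistance drops the terminal voltage slightly below the diode voltage, as Eqs. (1)-(2) predict.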

Analysis and Optimization of Stability …


Fig. 5 Comprehensive representation of the model of SPV cell

3.2 Modeling of Wind Energy Conversion System

The amount of power developed by a generator coupled with the turbine of a wind farm is given by Eq. (3):

Pi = 0.5 · Cp · As · V³  (3)

where
Pi is the power delivered by the wind-plant generator,
As is the area swept by the blades (in m²),
V is the velocity of the wind (in m/s),
Cp is the coefficient of performance (COP).

Here, the COP, also called the power coefficient of the wind turbine, depends primarily on the tip speed ratio, i.e., the ratio of the tip speed of the wind turbine to the speed of the wind [16]. The simulation framework of the PMSG-based WECS is shown below in Fig. 6.
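Equation (3) can be checked numerically. The Cp, swept area, and wind speeds below are illustrative; note that the standard aerodynamic power formula also carries an air-density factor, which Eq. (3) as printed omits:

```python
def wind_power(c_p, a_s, v):
    """Eq. (3): P_i = 0.5 * C_p * A_s * V^3.
    Note: the standard aerodynamic formula also includes the air
    density (rho, about 1.225 kg/m^3); Eq. (3) as printed omits it."""
    return 0.5 * c_p * a_s * v ** 3

# Cubic dependence on wind speed: doubling V multiplies the power by 8
p1 = wind_power(c_p=0.45, a_s=1000.0, v=5.0)
p2 = wind_power(c_p=0.45, a_s=1000.0, v=10.0)
print(round(p2 / p1, 6))  # 8.0
```

The cubic sensitivity to wind speed is why the paper evaluates the system at a specific wind speed (5 m/s) in the results section.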

Fig. 6 Detailed simulation model of wind turbine generator

3.3 Modeling of STATCOM (D-STATCOM)

Since the power system deals almost entirely with AC quantities and nearly every load demands reactive power, reactive power compensation (RPC) is among the basic power quality concerns. To supply essential voltage support against the voltage disparity in a wind energy conversion system, the flow of reactive power has to be managed or kept within certain bounds [17]. When the system voltage collapses, the static compensator has a distinct advantage, as it supplies additional capacitive reactive power. A static compensator is a power electronics device capable of producing or absorbing reactive power at its output terminal; if it is linked with a battery, it is capable of managing real power as well. To provide reactive power support to transmission and distribution systems, it does not require large inductive and capacitive machinery. The static compensator requires less installation area because of its small dimensions, and it delivers a higher reactive power output at lower voltages. It also imparts better damping characteristics, providing better dynamic stability to the whole system. These are the foremost advantages of the D-STATCOM. In this paper, to support the power stability and quality of the microgrid of the hybrid wind-PV system, the distributed static compensator has been utilized. To alleviate the voltage- and current-related power quality concerns, it is connected at the Point of Common Coupling (PCC). The STATCOM injects the harmonic and reactive components of the load current to make the source currents more balanced and purely sinusoidal [18]. When engaged in voltage-regulation mode, the voltage at the PCC is synchronized with respect to the reference value in order to safeguard critical loads from large voltage disturbances (Fig. 7).


Fig. 7 Simulation model of distributed static compensator

4 Planned Model of the HPS (Wind-PV)

The hybrid generation system, with a total capacity of 750 kW, consists of a 250 kW solar PV plant pooled with a wind generator of 500 kW. In addition, the hybrid system is integrated with the grid through 25 kV distribution lines. Figure 8 illustrates the outline of the planned hybrid generation system with its inverter arrangement. The wind farm and the solar photovoltaic system are modeled as two separate generation systems, each fitted with its own DC-AC inverter, and on the inverter output side they are connected in parallel with each other (Fig. 9).

Fig. 8 Layout of planned hybrid model of arrangement


Fig. 9 Detailed representation of the planned hybrid model of arrangement with D-STATCOM

5 Results

The stability and quality of the electrical power of the system can be assessed through the gain margin, delay margin, voltage sag, current harmonics, and power factor (PF). In this work, we have evaluated the total harmonic distortion (THD) to determine the stability and quality of the power rendered by the photovoltaic-wind arrangement. For the current waveform, the total harmonic distortion is estimated as

THD = (√(Σ Ih²) / I1) × 100%

where Ih is the harmonic current of order h and I1 is the fundamental component of the current. In this paper, the THD is estimated with the aid of MATLAB's FFT analysis toolbox. The power stability of the planned system has been appraised by determining the current THD delivered to the grid. For clarity, a comparative analysis has been carried out with and without the STATCOM.

From Figs. 10 and 11, it is evaluated that at a wind speed of 5 m/s, the total harmonic distortion of the current without the static compensator is 15.3%, which is very high and outside the limits laid down by IEEE; according to those limits, the THD for acceptable power stability and quality should be below 5%. In this paper, the static compensators (STATCOM) are used to bring the THD within the IEEE limits. Figure 10 shows the THD of the wind-photovoltaic system without the STATCOM, and Fig. 11 shows the assessment of the system with the static compensator. The system with the static compensator reduces the total harmonic distortion of the current supplied by the hybrid photovoltaic-wind arrangement to 1.64%, a much improved outcome compared to the system without the distributed static compensator. This outcome shows that the power stability has been enhanced by the application of the static compensator. Hence, the power stability and quality rendered by the planned wind-photovoltaic system is quite acceptable according to IEEE standards.

Fig. 10 THD of the hybrid photovoltaic-wind arrangement without STATCOM

Fig. 11 Total harmonic distortion of the hybrid system with STATCOM

Figure 12 shows the voltage waveform of the hybrid power generation system without the STATCOM. As seen in the graph, there is a fluctuation in the system; to fix this problem, the STATCOM is used on the distribution side. Figure 13 illustrates that by using the D-STATCOM, the steadiness increases and the fluctuation in the waveform diminishes.

Fig. 12 Voltage waveform of HPS system without STATCOM

Fig. 13 Voltage waveform of HPS system with STATCOM
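The FFT-based THD estimate described above (performed in the paper with MATLAB's FFT toolbox) can be sketched in plain Python as follows; the synthetic test waveform and harmonic range are illustrative assumptions, not the paper's measured grid current:

```python
import math

def thd_percent(samples, cycles):
    """Estimate current THD (%) from a window containing an integer
    number of fundamental cycles, via a direct DFT:
    sqrt(sum of harmonic magnitudes squared) / fundamental magnitude."""
    n = len(samples)
    def mag(h):  # magnitude of harmonic h (h * cycles periods per window)
        re = sum(s * math.cos(2 * math.pi * h * cycles * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * h * cycles * i / n) for i, s in enumerate(samples))
        return math.hypot(re, im)
    i1 = mag(1)
    harmonics = math.sqrt(sum(mag(h) ** 2 for h in range(2, 20)))
    return 100.0 * harmonics / i1

# Fundamental plus a 20% fifth harmonic: the THD should come out near 20%
n, cycles = 1024, 4
wave = [math.sin(2 * math.pi * cycles * i / n)
        + 0.2 * math.sin(2 * math.pi * 5 * cycles * i / n)
        for i in range(n)]
print(round(thd_percent(wave, cycles)))  # 20
```

Because the window holds an integer number of cycles, each harmonic lands exactly on a DFT bin and the ratio recovers the injected distortion level.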

6 Conclusion

It can be concluded from the analyses presented that a hybrid power system, when used in combination with a STATCOM and BES, is a fruitful and viable alternative to traditional generators. The increased stability output of the wind turbine shows that when this combined framework was simulated in a wind plant, there is a sharp improvement in stability, wherein the STATCOM-BES provides 290% and a further 50% is attained by utilizing the braking resistor. In this proposed hybrid photovoltaic-wind system, the basic power stability of the system has been accomplished. In the presence of the distributed static compensator, the improved THD is established with the help of the FFT analysis demonstrated above. In the simulation representation of the hybrid system, the distributed static compensator is used; the outcome reveals that the overall THD is within the 5% limit set by IEEE. This demonstrates the satisfactory performance of the proposed hybrid generation model comprising wind and PV cells.

References

1. Arulampalam A et al (2006) Power quality and stability improvement of a wind farm using STATCOM supported with hybrid battery energy storage. IEE Proc Gener Transm Distrib 153(6):701–710
2. Parija B et al (2019) Power quality improvement in hybrid power system using D-STATCOM. In: 2019 3rd international conference on computing methodologies and communication (ICCMC). IEEE
3. Raju S, Meenakshy K (2015) Power system stability improvement of wind farm fed to a multi-machine system by using STATCOM. In: 2015 international conference on power, instrumentation, control and computing (PICC). IEEE
4. Ou TC, Lu KH, Huang CJ (2017) Improvement of transient stability in a hybrid power multi-system using a designed NIDC (Novel Intelligent Damping Controller). Energies 10(4):488
5. Imanishi T et al (2014) 130MVA-STATCOM for transient stability improvement. In: 2014 international power electronics conference (IPEC-Hiroshima 2014-ECCE ASIA). IEEE
6. Li Z, Tiong T, Wong K (2019) Transient stability improvement by using PSS4C in hybrid PV wind power system. In: 2019 1st international conference on electrical, control and instrumentation engineering (ICECIE). IEEE
7. Wang L, Vo Q-S, Prokhorov AV (2017) Stability improvement of a multimachine power system connected with a large-scale hybrid wind-photovoltaic farm using a supercapacitor. IEEE Trans Ind Appl 54(1):50–60
8. Nurunnabi M et al (2019) Size optimization and sensitivity analysis of hybrid wind/PV micro-grids: a case study for Bangladesh. IEEE Access 7:150120–150140
9. Elgammal A, El-Naggar M (2018) Energy management in smart grids for the integration of hybrid wind–PV–FC–battery renewable energy resources using multi-objective particle swarm optimisation (MOPSO). J Eng 11:1806–1816
10. Nejabatkhah F, Li YW, Tian H (2019) Power quality control of smart hybrid AC/DC microgrids: an overview. IEEE Access 7:52295–52318
11. Li X et al (2019) Enhanced dynamic stability control for low-inertia hybrid AC/DC microgrid with distributed energy storage systems. IEEE Access 7:91234–91242
12. Ingole AS, Rakhonde BS (2015) Hybrid power generation system using wind energy and solar energy. Int J Sci Res Publ 5(3):1–4
13. Lazarov VD et al (2005) Hybrid power systems with renewable energy sources: types, structures, trends for research and development. In: Proceedings of international conference ELMA 2005, Sofia, Bulgaria
14. Onar OC, Uzunoglu M, Alam MS (2006) Dynamic modeling, design and simulation of a wind/fuel cell/ultra-capacitor-based hybrid power generation system. J Power Sources 161(1):707–722
15. Bentouba S, Bourouis M (2016) Feasibility study of a wind–photovoltaic hybrid power generation system for a remote area in the extreme south of Algeria. Appl Therm Eng 99:713–719
16. Lee D-J, Wang L (2008) Small-signal stability analysis of an autonomous hybrid renewable energy power generation/energy storage system part I: time-domain simulations. IEEE Trans Energy Convers 23(1):311–320
17. Saxena NK, Kumar A (2016) Reactive power control in decentralized hybrid power system with STATCOM using GA, ANN and ANFIS methods. Int J Electr Power Energy Syst 83:175–187
18. Chavan PM, Chavan GP (2017) Interfacing of hybrid system to grid using STATCOM & power quality improvement. In: 2017 international conference on information, communication, instrumentation and control (ICICIC). IEEE

Modeling of Proton Exchange Membrane Fuel Cell Reena Yadav, Birinderjit Kalyan, Sunny Vig, and Himani Goyal Sharma

Abstract In this research article, a proton exchange membrane fuel cell model is developed to aid the design of optimal power conditioning systems. The modeling is explained through the structure and working of the proton exchange membrane fuel cell (PEMFC). The output voltage at no load (Voc) is calculated from the Gibbs free energy and the Nernst equation. The three polarization losses are taken into account along with the fuel pressure, flow rate, and operating temperature. The modeling also considers the utilization of the reactants and the effect of thermal dynamics on the performance of the cell, and the results are verified by simulation and experimental results. Keywords Proton exchange membrane fuel cells (PEMFC) · Power conditioning systems (PCS) · Mathematical modeling · Horizon fuel cells

1 Introduction

The rise in pollution levels and the decrease in fossil reserves have made it essential to switch from conventional energy generation techniques to renewable ones. Fuel cells are one of the renewable energy resources expected to play a crucial role in the future. There are many different types of fuel cells, with operating temperatures ranging from room temperature to 1000 °C; they are often classified on the basis of the type of electrolyte material used in the fuel cell. Fuel cells offer several advantages over conventional energy resources, such as high efficiency, high current density, low chemical and particulate emissions, high-quality power, and the ability to co-generate heat and electricity at load centers. The DC output of the fuel cells is converted into AC to run appliances, as most appliances work on AC; for the PCS to work efficiently, it is necessary to study the output characteristics of fuel cells in order to get maximum output from the PCS. A number of fuel cell models have therefore been developed through different methods. The inside and outside characteristics of the fuel cell stack can be studied through computational fluid dynamics (CFD), a very complicated modeling approach based on electrochemical and mechanical equations that captures the complex behavior of the fuel flow in the various parts of the fuel cell [1, 2]. Electrical modeling uses passive elements such as inductance (L), capacitance (C), and resistance (R) [3], and has also been performed using bipolar junction transistor (BJT), diode, inductor (L), and capacitor (C) components [4]. The third type of modeling is mathematical modeling. To design a power conditioning system, power electronics engineers must study the whole fuel cell system; even though the fuel cell equations can be found in any book or the literature, they are difficult to understand in isolation. Therefore, in this paper, we have tried to structurally explain each electrochemical equation using mathematical modeling so that it can be used to design a power conditioning system (PCS). The modeling is done using the Nernst equation, the polarizations, and thermal dynamics, and the results are verified using MATLAB and the experimental results from the Horizon 1 kW fuel cell.

R. Yadav (B) · B. Kalyan · S. Vig · H. G. Sharma
Electrical Engineering Department, Chandigarh University, Mohali, Punjab, India
S. Vig e-mail: [email protected]
H. G. Sharma e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_15

2 Physical Structure and Operating Principle of PEMFC

It is essential to learn the basic concepts and working of a PEMFC before implementing the mathematical modeling. The electrodes are located on the two sides of the membrane, and the assembly is called the membrane electrode assembly (MEA); Nafion is normally used as the proton-conducting membrane in the MEA. Nafion only permits H+ ions to pass. Carbon paper electrodes are used in the PEMFC, and the catalyst is sprayed on the electrodes to increase the rate of the reaction. Gas diffusion layers are used on both sides of the electrodes to support the fuel flow (Fig. 1). Gaskets and rubber are used for sealing the fuel, and the bipolar plates form the flow channels; humidified fuel is passed to the bipolar plates via these channels. The humidification is done to increase the conductivity of the ions. The membrane electrode assembly separates the two gases, hydrogen and oxygen. The reaction at the cathode is reduction, and at the anode it is oxidation: the hydrogen ions from the anode side react with the oxygen molecules to generate water and heat energy. The reactions occurring at the anode and cathode are as follows.

Anode: H2 → 2H+ + 2e−  (1)

Fig. 1 Structure of PEMFC

Cathode: 1/2 O2 + 2H+ + 2e− → H2O  (2)

Overall: H2 + 1/2 O2 → H2O  (3)

3 PEMFC Modeling

A. Nernst Model

The fuel cell converts the chemical energy in the fuel into electrical energy, and the output is given by the Nernst equation. The output of a PEMFC at the reversible condition is 1.229 V; however, the practical output of the fuel cell is less than this ideal value [5].

E = E0 + (R·T/(2·F)) · ln(PH2 · √PO2 / PH2O)  (4)

where
E0 is the reversible cell voltage,
R is the gas constant (8.3144 J/mol K),
T is the temperature on the Kelvin scale (K),
F is the Faraday constant (96,485 C/mol),
P is the partial pressure (Fig. 2).

Fig. 2 Model of Nernst equation
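A quick numerical check of the Nernst relation (this sketch assumes the standard PEMFC form with a square root on the oxygen partial pressure; the partial pressures used are illustrative):

```python
import math

R = 8.3144   # gas constant (J/mol K), as listed in the text
F = 96485.0  # Faraday constant (C/mol)

def nernst_voltage(e0, t, p_h2, p_o2, p_h2o):
    """Open-circuit cell voltage from the Nernst equation:
    E = E0 + (R*T / 2F) * ln(P_H2 * sqrt(P_O2) / P_H2O)."""
    return e0 + (R * t / (2.0 * F)) * math.log(p_h2 * math.sqrt(p_o2) / p_h2o)

# At unit partial pressures the logarithm vanishes and E = E0 = 1.229 V
print(nernst_voltage(1.229, 298.15, 1.0, 1.0, 1.0))  # 1.229
```

Raising the reactant partial pressures above the product pressure makes the logarithm positive and lifts the open-circuit voltage, consistent with the model of Fig. 2.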

B. Three Polarizations

The fuel cell output is less than the calculated value due to the losses occurring in the electrochemical process. These losses are the activation loss, the ohmic loss, and the concentration loss.

V = Voc − Vlosses  (5)

The activation polarization dominates the other polarizations at low current values, owing to the energy required to overcome the energy barriers before current and ions flow. Ohmic losses occur due to the internal resistance of the fuel cell and are directly proportional to the current flowing in the fuel cell. The last one is the gas transport (concentration) loss, which occurs due to the difficulty of supplying enough reactants to the reaction sites; this loss is dominant near the limiting current, although it occurs over the whole current range with a lower effect.

C. Activation Loss

The activation loss occurs because of the slow rate of reaction at the electrodes. It occurs at both electrodes, but since the rate of the oxygen reduction reaction at the cathode is much lower than that of the hydrogen oxidation at the anode, the activation loss occurring at the anode has been neglected. The activation loss in Fig. 3 is modeled using the equation [6]:

Vact1 = ξ1 + ξ2 · T + ξ3 · T · ln(CO2)  (6)

while the rest of the loss is modeled by a resistor, as given by

Ract = −ξ4 · T · ln(J) / I  (7)


where the ξ terms are parametric coefficients that depend on the type of fuel cell used. In general, for a PEMFC, ξ2 is given by

ξ2 = ξ21 + ξ22 · ln(A) + ξ23 · ln(CH2)  (8)

where A is the cell's active area (Fig. 3). CO2 and CH2 are given by

CO2 = PO2 / (5.08 · 10^6 · e^(−498/T))  (9)

CH2 = PH2 / (1.09 · 10^6 · e^(77/T))  (10)

Fig. 3 Model of activation polarization

Fig. 4 Model of concentration polarization

D. Ohmic Loss

The ohmic loss occurs due to the internal resistance of the fuel cell, which comprises the resistance of the membrane, the contact resistance between the membrane and the electrodes, and the resistance of the electrodes themselves. The total ohmic voltage drop can be expressed as

Vohm = Va + Vm + Vc  (11)

where
Va is the ohmic loss on the anode side,
Vc is the ohmic loss on the cathode side,
Vm is the ohmic loss in the membrane.

E. Concentration Loss

During the reaction process, concentration gradients arise due to the diffusion of reactants from the flow channels to the sites where the reaction takes place. The concentration loss occurs due to the delay in transporting the reactants to the reaction sites. The concentration loss [5] is represented by

Vconc = −(R·T/(Z·F)) · ln(Cs/Cb)  (12)

where Cs and Cb are the surface concentration and bulk concentration, respectively. The above equation can also be written as

Vconc = −(R·T/(Z·F)) · ln(1 − IFC/Ilim)  (13)

where Ilim is the limiting current density (A/cm²). The limiting current density can be found from the ideal gas equation and Fick's law:

Ilim = N·F·D·P / (R·T·δ)  (14)

where
D is the diffusion constant,
δ is the thickness of the diffusion layer,
N is the number of electrons,
P is the pressure,
R is the gas constant,
T is the temperature,
F is the Faraday constant (Fig. 4).
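Equations (5)-(13) can be combined into a single operating-voltage calculation. In the sketch below, ξ1, ξ3, and ξ4 follow the paper's Appendix, while ξ21 uses the commonly cited literature value 0.00286 (the Appendix lists 0.0286); the Nernst voltage, concentrations, ohmic resistance, and limiting current are illustrative assumptions, not the paper's fitted data:

```python
import math

# Parametric coefficients (xi1, xi3, xi4 as in the paper's Appendix;
# xi21 is the commonly used literature value, see the lead-in note)
XI1, XI3, XI4 = -0.948, 7.6e-5, -1.93e-4
XI21, XI22, XI23 = 0.00286, 0.0002, 4.3e-5
R, F, Z = 8.3144, 96485.0, 2

def activation_loss(t, c_o2, c_h2, area, i_fc):
    """Eqs. (6)-(8): Amphlett-style activation polarization (positive loss)."""
    xi2 = XI21 + XI22 * math.log(area) + XI23 * math.log(c_h2)
    return -(XI1 + xi2 * t + XI3 * t * math.log(c_o2) + XI4 * t * math.log(i_fc))

def concentration_loss(t, i_fc, i_lim):
    """Eq. (13): concentration (gas transport) polarization (positive loss)."""
    return -(R * t / (Z * F)) * math.log(1.0 - i_fc / i_lim)

def cell_voltage(e_nernst, t, i_fc, r_ohm, c_o2, c_h2, area, i_lim):
    """Eq. (5): V = Voc minus activation, ohmic, and concentration losses."""
    return (e_nernst
            - activation_loss(t, c_o2, c_h2, area, i_fc)
            - i_fc * r_ohm                       # Eq. (11) lumped as I*R
            - concentration_loss(t, i_fc, i_lim))

# Illustrative operating point (assumed values)
v = cell_voltage(e_nernst=1.2, t=343.0, i_fc=20.0, r_ohm=2e-3,
                 c_o2=3e-6, c_h2=5e-5, area=50.0, i_lim=100.0)
print(0.0 < v < 1.2)  # True: practical voltage falls below the Nernst value
```

As the text states, the losses pull the practical cell voltage well below the reversible value, and each term grows in its own current regime.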

F. Reactant Utilization

The reactant utilization has a significant effect on the performance of the fuel cell. The reactant utilization of the PEMFC is defined as the ratio of the amount of hydrogen that reacts with the oxygen to the amount of hydrogen that enters the fuel cell. The utilization factors are given by [7]:

UFH2 = (60000 · R · T · N · Ifc) / (N · F · P · Vfuel · x%)  (15)

UFO2 = (60000 · R · T · N · Ifc) / (2 · n · F · P · Vlpm · y%)  (16)

where
UFH2 and UFO2 are the utilization factors of hydrogen and oxygen,
R is the gas constant,
Ifc is the fuel cell current,
n is the number of electrons,
F is the Faraday constant,
Vfuel is the flow rate of hydrogen,
Vlpm is the flow rate of air,
x% and y% are the percentages of hydrogen and oxygen in the fuel (Fig. 5).

G. Thermal Dynamics

The model for the thermal dynamic behavior of the fuel cell is formed by applying a curve-fitting technique to the experimental data [8].

T(t) = T0 + (Tfinal − T0)(1 − e^(−t/τ))  (17)

where the time constant τ can be expressed as

τ = p1·Ifc² + p2·Ifc + p3  (18)

and Tfinal is the final temperature of the fuel cell stack, given by

Tfinal = p4·Ifc + p5  (19)

where Ifc is the stack current (A). The model of the temperature function is shown in Fig. 6.
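Equations (17)-(19) can be evaluated directly with the p1-p5 values listed in the Appendix; the stack current, starting temperature, and the assumption that t is in seconds are illustrative:

```python
import math

# Curve-fit coefficients p1..p5 from the paper's Appendix
P1, P2, P3 = -0.03802, 0.5095, 172.6
P4, P5 = 1.1, 27.56

def stack_temperature(t_seconds, i_fc, t0):
    """Eqs. (17)-(19): first-order thermal response of the stack
    at constant current i_fc, starting from temperature t0."""
    tau = P1 * i_fc ** 2 + P2 * i_fc + P3   # Eq. (18), time constant
    t_final = P4 * i_fc + P5                # Eq. (19), settling temperature
    return t0 + (t_final - t0) * (1.0 - math.exp(-t_seconds / tau))

i_fc, t0 = 10.0, 25.0
print(round(stack_temperature(0.0, i_fc, t0), 2))  # 25.0  (starts at t0)
print(round(stack_temperature(1e6, i_fc, t0), 2))  # 38.56 (settles at Tfinal)
```

At 10 A the fitted time constant is about 174, so the stack approaches its Eq. (19) settling temperature exponentially, as Fig. 6 models.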


Fig. 5 Model of the utilization factor

Fig. 6 Model of thermal dynamics

4 Simulation Results

The figures present the practical data obtained from the current-voltage characteristics of a 1 kW fuel cell. The proposed PEMFC model is simulated using the Simulink part of MATLAB. The cell output voltage is the difference between the open circuit voltage (Voc) and the losses occurring in the cell.

5 Conclusion

In this research article, a PEMFC model is developed to support the optimal design of the balance of plant (BOP) and power conditioning system (PCS) for a fuel cell. Formulating the elaborated cell equations into the fuel cell model makes it easy for power electronics engineers to carry out the analysis. The modeling also considers the effect of thermal dynamics on the performance of the fuel cell, and the results are verified by simulation and experimental results. While the model simplifies the cell's internal dynamics, it is well suited for this study. Based on this research, a simulator for the PEMFC is proposed, and the obtained results are verified using the practical results of the Horizon 1 kW fuel cell.

Appendix

S. No.  Symbol  Value
1.      ξ1      −0.948 (±0.004)
2.      ξ3      (7.6 ± 0.2) · 10^−5
3.      ξ4      −(1.93 ± 0.05) · 10^−4
4.      ξ21     0.0286
5.      ξ22     0.0002
6.      ξ23     4.3 · 10^−5
7.      p1      −0.03802
8.      p2      0.5095
9.      p3      172.6
10.     p4      1.1
11.     p5      27.56


References

1. Kacor P, Minarik D, Moldrik P (2015) CFD analysis of temperature distribution of PEM type fuel cell. In: Proceedings 2015 16th international scientific conference on electric power engineering (EPE 2015), pp 507–512
2. Priyadarshi N, Azam F, Solanki SS, Sharma AK, Bhoi AK, Almakhles D (2021) A bio-inspired chicken swarm optimization-based fuel cell system for electric vehicle applications. In: Bio-inspired neurocomputing. Springer, Singapore, pp 297–308
3. State A, Art T (2011) A proton exchange membrane fuel cell running as a regulated current source. In: 2011 IEEE vehicle power and propulsion conference (VPPC 2011)
4. Vasilyev A, Andrews J, Jackson LM, Dunnett SJ, Davies B (2017) Component-based modelling of PEM fuel cells with bond graphs. Int J Hydrogen Energy 1–16
5. Soo GCJ, Kang KH, Lee BLW (2007) Proton exchange membrane fuel cell (PEMFC) modeling for high efficiency fuel cell balance of plant (BOP), pp 271–276
6. Restrepo C, Konjedic T, Garces A, Calvente J, Giral R (2015) Identification of a proton-exchange membrane fuel cell's model parameters by means of an evolution strategy. IEEE Trans Ind Inf 11(2):548–559
7. Njoya SM, Tremblay O, Dessaint L-A (2009) A generic fuel cell model for the simulation of fuel cell vehicles, pp 1722–1729
8. Soltani M, Bathaee SMT (2008) A new dynamic model considering effects of temperature, pressure and internal resistance for PEM fuel cell power modules, pp 2757–2762

Q-LEACH Algorithm for Efficiency and Stability in WSN Birinderjit Singh Kalyan

Abstract The topology of a wireless sensor network specifies the wireless network connectivity and significantly impacts the network routing algorithms. Topology also affects other essential network features, such as durability and the cost of communication between nodes. Current research has established the energy consumption of sensor networks as one of the fundamental research problems in wireless sensor networks, and controlling the network topology has turned out to be an effective solution to this problem. Topology control protocols, like all other aspects of wireless sensor networks, have to be developed and implemented subject to extreme computational and energy constraints. Keywords Ad hoc networks · LEACH · Q-LEACH · Nodes · Cluster

1 Introduction

The radio is the primary source of energy dissipation in a sensor node; it consumes power in all four of its operating stages: listening, idling, transmitting, and receiving. Common metrics used to evaluate routing protocol efficiency in wireless ad hoc networks are the number of packets dropped, the routing message overhead, the number of hops, etc. Yet, relative to conventional wired and wireless ad hoc networks, wireless sensor networks should be measured mainly in terms of the energy depletion of the sensor nodes. Sensor nodes have minimal, non-renewable sources of energy; once deployed in a hostile environment, there is rarely any way to recharge a sensor node's battery. Such limitations make the energy metric indicated above a primary concern. One way to optimize energy usage in a wireless sensor network is to selectively switch off the radios of sensor nodes based on the availability of alternate routing paths. Switching off a sensor node's radio is only possible if the topology is built in such a way that the network is not partitioned by those inactive nodes. Thus, successful management of the network topology emerges as a solution to the energy-saving problem for wireless sensor networks. Topology control protocols are designed to leverage network node density to extend network lifetime while providing connectivity. The key principles for designing topology control protocols for wireless sensor networks are the following: sensor nodes should be able to configure themselves to satisfy the varying demands of the network; redundant node selection can be performed on the basis of distributed, localized algorithms; topology control protocols must ensure minimum network connectivity so that the network is not partitioned; and, in large-scale wireless sensor networks, topology control protocols can take advantage of the high node density to reduce the energy dissipated in the network [1–16].

The problems faced by conventional LEACH were overcome in Q-LEACH, which improved:
• The clustering process.
• The stability period.
• The network lifetime, for optimized performance of WSNs.

In Q-LEACH, the division of the network into four equal quadrants resulted in better coverage than conventional LEACH, but the lifetime of the far-away cluster heads was still shorter than that of the cluster heads located near the sink. The decreased lifetime of the far cluster heads degrades system performance, so it was suggested to improve the system's performance by increasing the lifetime of the far cluster heads, thereby improving the lifetime of the whole network. Since Q-LEACH defines the optimum position of cluster heads to reduce the transmission load, it is advisable to transfer the load of the cluster heads located far from the sink through an intermediate node located in between. These intermediate nodes lie on the boundaries of the four quadrants, and they also result in a decreased distance to the sink.

We introduce eight additional nodes, two on each quadrant boundary. These nodes act as intermediates to help the sink communicate with the far-away cluster heads, as shown in Fig. 1. Firstly, the area is divided into four quadrants having an equal number of nodes; the nodes are distributed randomly, and the sink is placed outside the area. Cluster heads are selected randomly on the basis of the energy of the nodes, separately for each sub-area. The cluster heads are the nodes that are then used for communication between the other nodes in the area and the sink. The division of the area into four equal quadrants and the selection of a cluster head for each sub-area is called the setup phase.

B. S. Kalyan (B)
Electrical Engineering, Chandigarh University, Mohali, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_16

2 Algorithm I: Setup Phase Algorithm I shows the setup phase for the distribution of nodes and the selection of cluster heads. The cluster head is selected after each node is allocated a random number between 0 and 1. This number is then compared with a threshold number.

Q-LEACH Algorithm for Efficiency and Stability in WSN

175

Fig. 1 Distribution of nodes into four quadrants, the sink, and the location of additional nodes

If this number is less than the threshold and the node has not already served as a cluster head in the current epoch, the node becomes a cluster head. Cluster heads are selected for each sub-area, and the nodes establish connections with their cluster head to communicate with the sink. The cluster head then assigns time slots to the nodes in its area. The distance of each cluster head from the sink is then measured. Cluster heads far from the sink check their distance to the intermediate nodes, and the distance between those intermediate nodes and the sink is checked as well. In this second phase, each cluster head communicates either directly with the sink or through an intermediate node. This is done to use energy efficiently, because the greater the distance, the more power is required to transmit. This phase is called the cluster-head association phase (Fig. 2).
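The threshold comparison just described matches the standard LEACH election rule, in which a node becomes a cluster head when its random draw falls below T(n) = p / (1 − p·(r mod 1/p)). A minimal sketch, where the cluster-head fraction p and the epoch bookkeeping are assumptions rather than values from the paper:

```python
import random

def elect_cluster_heads(node_ids, eligible, p=0.1, r=0, rng=random.random):
    """LEACH-style election for round r: a node becomes CH when its random
    draw falls below T(n), and then sits out the rest of the epoch."""
    epoch = int(1.0 / p)
    if r % epoch == 0:
        eligible.clear()
        eligible.update(node_ids)          # new epoch: all nodes eligible again
    t = p / (1.0 - p * (r % epoch))        # LEACH threshold T(n)
    heads = [n for n in node_ids if n in eligible and rng() < t]
    for h in heads:
        eligible.discard(h)                # a CH waits out the current epoch
    return heads

eligible = set()
heads = elect_cluster_heads(list(range(100)), eligible, p=0.1, r=0)
```

In a quadrant-based variant, the same election would simply be run once per sub-area so that every quadrant gets its own cluster head.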

3 Algorithm II: CH Association Phase

Algorithm II defines the association phase between the nodes and the cluster heads, between the CHs and the intermediate nodes, and between the intermediate nodes and the sink. The nodes in areas a, b, c and d are classified as normal, intermediate and advanced nodes on the basis of their energy. The CHs prefer only the intermediate nodes, so each cluster head has four candidate intermediate nodes located on the boundaries of its quadrant. The distances of these intermediate nodes from the sink are then compared, and the node with the least distance to the sink is selected. The load of the cluster head is transferred to the selected node, which then passes it on to the sink. This will reduce the


B. S. Kalyan

Fig. 2 Algorithm I: setup phase

load on the farther cluster heads, which slows energy drainage and in turn results in better system performance. The problem of degraded system performance can thus be solved by using the intermediate nodes (Fig. 3).
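The direct-versus-relay decision can be sketched with a first-order radio model in which transmission energy grows with the square of distance. The radio constants, packet size and relay positions below are illustrative assumptions:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tx_cost(d, k=4000, e_elec=50e-9, e_amp=100e-12):
    """First-order radio model: energy to send k bits over distance d (d^2 loss)."""
    return k * e_elec + k * e_amp * d * d

def route_to_sink(ch, sink, relays):
    """Pick direct transmission or the cheapest two-hop path via a boundary relay."""
    direct = tx_cost(dist(ch, sink))
    best = min(relays, key=lambda r: tx_cost(dist(ch, r)) + tx_cost(dist(r, sink)))
    via = tx_cost(dist(ch, best)) + tx_cost(dist(best, sink))
    return ("direct", direct) if direct <= via else (("relay", best), via)
```

Because of the d² term, splitting one long hop into two shorter hops via a boundary relay is often cheaper, which is exactly why the far-away cluster heads benefit from the intermediate nodes.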

4 Results

The introduction of the intermediate nodes provides better results than the Q-LEACH process itself. In the previous technique, the first dead node occurred between 2900 and 3000 rounds, but in the improved method the first dead node occurred between 4700 and 4800 rounds. This is a significant result, as it shows that the system is more stable. Moreover, the method also increased the lifetime per round of the nodes, thus increasing the residual energy of the system. The simulation results are shown in Figs. 4 and 5.

Fig. 3 Algorithm II: CH association phase

Fig. 4 Cluster heads in rounds


Fig. 5 Residual energy of nodes

5 Conclusion

The proposed work can be extended by applying an optimization algorithm to the cluster-head selection methodology, which may perform better than traditional cluster-head selection schemes. The work can also be extended to include other parameters such as throughput, speed and transmission delay.

References

1. Patel R (2011) Energy and throughput analysis of hierarchical routing protocol. Int J Comput Appl (0975–8887) 20(4):32–36
2. Manzoor B (2013) Q-LEACH: a new routing protocol for WSNs. In: International workshop on body area sensor networks, vol 1, 21 March 2013
3. Son B, Her Y, Kim J (2006) A design and implementation of forest-fires surveillance system based on wireless sensor networks for South Korea mountains. IJCSNS 6(9B):124–130
4. Kour H (2012) Hierarchical routing protocols in wireless sensor networks. Int J Inf Technol Knowl Manage 6(1):47–52
5. Akyildiz IF, Melodia T, Chowdhury KR (2007) A survey on wireless multimedia sensor networks. Int J Comput Telecommun Networking 51(4):921–960
6. Nayak SR, Sivakumar S, Bhoi AK, Chae GS, Mallick PK (2021) Mixed-mode database miner classifier: parallel computation of graphical processing unit mining. Int J Electr Eng Educ 0020720920988494
7. Kavitha T, Sridharan D (2010) Security vulnerabilities in wireless sensor networks: a survey. J Inf Assur Secur 5:031–044
8. Srinivasu PN, Bhoi AK, Jhaveri RH, Reddy GT, Bilal M (2021) Probabilistic deep Q network for real-time path planning in censorious robotic procedures using force sensors. J Real-Time Image Process 1–13
9. Villaverde BC, Rea S, Pesch D (2012) InRout: a QoS aware route selection algorithm for industrial wireless sensor networks. Ad Hoc Netw 10(3):458–478


10. Tran DA, Raghavendra H (2006) Congestion adaptive routing in mobile ad hoc networks. IEEE Trans Parallel Distrib Syst 17(11):1294–1305
11. Das ML (2009) Two-factor user authentication in wireless sensor networks. IEEE Trans Wireless Commun 8(3):1086–1090
12. Zhan G, Shi W, Deng J (2012) Design and implementation of TARF: a trust-aware routing framework for WSNs. IEEE Trans Dependable Secure Comput 9(2):184–197
13. Marti S, Giuli TJ, Lai K, Baker M (2000) Mitigating routing misbehavior in mobile ad hoc networks. In: Proceedings of the 6th annual international conference on mobile computing and networking (MOBICOM '00), pp 255–265, ACM, August 2000
14. Kalyan BS, Balwinder S (2015) Design and simulation equivalent model of floating gate transistor. In: 2015 annual IEEE India conference (INDICON), IEEE
15. Kalyan BS, Balwinder S (2018) Quantum dot cellular automata (QCA) based 4-bit shift register using efficient JK flip flop. Int J Pure Appl Math 118(19):143–157
16. Kalyan BS, Balwinder S (2020) Performance analysis of quantum dot cellular automata (QCA) based linear feedback shift register (LFSR). Int J Comput Digital Syst 9(03):545–551

Comparative Analysis of Energy Management Systems in Electric Vehicles Sudhir Kumar Sharma and Manpreet Singh Manna

Abstract Nowadays, the utilization of vehicles has drastically increased due to urbanization, industrialization and changing living standards. Fuel-based and hybrid electric vehicles, which are often used in routine activities, rely on non-renewable resources such as petrol, diesel, LPG and CNG. Consequently, these vehicles adversely affect our environment by emitting harmful gases. To reduce these emissions and make the environment eco-friendly, there is a need to move towards emission-free vehicles, of which the electric vehicle is one alternative. These vehicles are also called Zero Emission Vehicles (ZEVs). With this consideration, the present paper targets the basics of electric vehicles and their components, historical development, and the challenges and issues faced in the present scenario. In addition, a comparison among various vehicles in terms of efficiency, range, speed, acceleration, mileage, cost, etc., is also discussed.

1 Introduction

Conventional vehicles have improved greatly since their inception in the nineteenth century. Early engines used steam for power, were quite bulky and required enormous amounts of fuel, making them unfit for transportation use except in trains. Electric vehicles (EVs) were invented around 1834 but could not survive in competition with vehicles based on the internal combustion engine. The world's first EV reported in the literature was a battery-operated tricycle developed by Thomas Davenport in 1834. Around 1900, because of their exorbitantly high cost, EVs served as road transportation for the wealthy elite. The Ford Motor Company was the first to mass-produce a car, the Model T, which offered greater range at reduced cost compared with EVs. The EVs almost disappeared from the market by the 1930s due to range and cost issues. Interest in EVs started again with the oil shortage and energy crisis of the 1970s [1].

S. K. Sharma (B) · M. S. Manna
Department of Electrical and Instrumentation Engineering, SLIET Longowal, Sangrur, Punjab 148106, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_17


Fig. 1 First tricycle built by Thomas Davenport

In the 1980s, research on EVs was initiated again due to growing concern over pollution and its consequences for the environment. Owing to campaigns for reduced carbon emissions, countries worldwide have decided to adopt EVs (Fig. 1). With advancements in technology, petroleum-fuelled engines came into existence and changed the face of transportation. As time passed, engines became smaller and more powerful, and petrol or diesel vehicles became household items. The increasing number of vehicles started ruining the environment through toxic emissions. The cost of using fossil fuels as energy sources started rising because conventional sources of energy are limited, and with the excess demand caused by the increasing number of vehicles, fuel availability fell. The world started looking for alternative sources to power vehicles and found the solution in electric vehicles, as they are not harmful to the environment. Electric vehicles have compact but powerful motors and batteries. In the early days, motors and batteries were not as powerful and compact as they are today, which prevented their use in vehicles; improvements in technology have made it possible to mount electric motors and batteries in vehicles. Today, electric vehicles are efficient enough to meet day-to-day requirements and powerful enough to challenge conventional vehicles. Continuous development in the renewable energy sector is helping the cause of electric vehicles: if electric vehicles are powered using renewable energy, they pose no harm at all to the environment.


1.1 Classification of Electric Vehicles

Electric vehicles can be classified into various categories depending on the energy source and refuelling method: battery electric vehicles, plug-in hybrid electric vehicles, hybrid electric vehicles and fuel cell electric vehicles, explained as follows [2]:

1.1.1 Battery Electric Vehicles (BEVs)

The electric battery in battery electric vehicles is the only source of power; BEVs have no alternative fuel source to fall back on if the battery's charge runs out. They are pure electric vehicles and eco-friendly. BEVs store electrical energy in a high-capacity battery pack, which is used to run the motor and all electronic parts of the BEV. BEVs are charged from an external source of electricity; however, the battery must be charged for a significantly long time to reach maximum capacity [3, 4]. Three main components are present in this vehicle: the controller, which controls speed and power; the electric motor, which produces the motive force; and the battery, which stores energy. The motor can be of any type: asynchronous, permanent magnet synchronous, DC or BLDC. Similarly, a variety of batteries is available, but the lead–acid battery is commonly used because of its ease of operation and low cost. Tesla vehicles are an example of this type. Various wheel configurations are available for EVs, based on the clutch, gearbox, transmission, number of motors and differentials. Nowadays the in-wheel configuration is widely used, having the advantages of being spacious, flexible and more efficient.

1.1.2 Plug-In Hybrid Electric Vehicles (PHEVs)

PHEVs, which combine an IC engine and an electric motor, have a limited range in electric power mode; when the electric motor no longer provides enough power, PHEVs switch to the conventional fuel-powered IC engine. PHEVs can be charged from an external energy source. These vehicles are well suited to city driving, as PHEVs are most fuel-efficient when driven in the city [5].

1.1.3 Hybrid Electric Vehicles (HEVs)

Hybrid electric vehicles have both an internal combustion engine and an electric motor. They combine two energy sources, such as diesel and electricity, fuel cell and battery, or gasoline and flywheel, in which one acts as storage and the other converts fuel into energy. Because of the presence of the engine, they consume fuel and emit emissions, so they are not fully eco-friendly vehicles; HEVs fall


in the category of ULEVs (Ultra Low Emission Vehicles) or LEVs (Low Emission Vehicles). The main components of an HEV are the engine, electric motor, fuel tank, generator, power-split device, batteries and transmission. In Fig. 2, the general concept of the HEV is described with a block diagram. Further categorization of HEVs, with their respective operation, is given below. HEVs usually cannot be charged from an external energy source; instead, the regenerative braking system generates electric power to recharge the battery while the vehicle is in motion, and the battery is also recharged by the IC engine when the demand for speed is cut off or low. For short-distance travel, such as point-to-point travel within a city, the battery alone can power the HEV [6]. The Toyota Prius is an example of this configuration (Fig. 3).

Fig. 2 Historical development of automobile and development of EV from 1800s to 2020

Fig. 3 Block diagram of BEV (transmission, motor/generator, battery)

1.1.4 Fuel Cell Electric Vehicles (FCEVs)

FCEVs are propelled by an electric motor that gets its energy from a chemical reaction between hydrogen and oxygen. The fuel cell generates power through the reaction of stored hydrogen gas with oxygen absorbed from the air around the vehicle. During this reaction the FCEV produces no harmful emissions; only water is produced as a by-product. There is no need to recharge the fuel cell from an external energy source [7]. The Toyota Mirai and Honda Clarity are commercially available examples (Table 1).

Table 1 Comparison between BEV, PHEV, HEV and FCEV [8]

| Vehicle type | Drive system | Energy system | Charging technology | Advantages | Disadvantages |
|---|---|---|---|---|---|
| BEVs | Electric motor | Battery, flywheel, ultracapacitor | Charging station | Energy efficient; zero emission; independence from fossil fuel; smooth operation; commercialized | Low range; high recharging time; poor dynamic response |
| HEVs | Electric motor and IC engine | Fuel tank, battery, flywheel, ultracapacitor | Refuelling station | High fuel economy; fewer emissions; long-lasting electrics; durable and reliable; commercialized | Expensive and heavy; large component count; highly complex |
| PHEVs | Electric motor and IC engine | Fuel tank, battery, flywheel, ultracapacitor | Refuelling station and charging station | Fewer emissions; highly fuel-efficient; capable of G2V and V2G operation; high driving range; moderately commercialized | Highly complex; high initial cost; complex battery electrification; grid impact |
| FCEVs | Electric motor | Ultracapacitor, battery | Hydrogen refuelling station or hydrogen gas cylinder | Very low emissions; independence from fossil fuel; high efficiency; durable and reliable | Limited driving range; expensive; slow transient response; complex electrification; not commercialized |


1.2 Components of Electric Vehicles

The design of electric vehicle components is divided into two major parts: mechanical parts, such as the vehicle body and chassis system, suspension system and power transmission system, and electric parts, such as the electric motors, the energy management/storage system and small power circuits for ancillary services. In this chapter, the electric parts, especially the energy management and storage system, will be discussed.

1.2.1 Electric Motors

The electric motor drive converts the on-board electrical energy into the desired mechanical energy; the motor is the actuating component of the electric vehicle. Among the various motors available in the market, EV applications require the following special features:

• High power and torque density
• Wide speed range, covering high-speed cruising and low-speed creeping
• High efficiency over a wide torque range
• Wide constant-power operating capability
• High torque capability for hill climbing and pickup
• High intermittent overload capability for overtaking
• High robustness and high reliability
• Cost effectiveness
• Low acoustic noise

Nowadays, many types of electric motor are used for vehicle propulsion in EVs; some special types are discussed here [1, 9].

(i) Permanent Magnet Brushless Motor

The development of PMBL motors picked up in the 1960s with the availability of cost-effective rare-earth permanent magnet materials such as Sm–Co and Nd–Fe–B. These high-energy-density permanent magnet materials reduced the size and losses of the motor. The PMBL motor is the best-fit motor for medium-size industrial drives because of its excellent dynamic capability, high torque-to-weight ratio and low losses [10]. Its features are that it is a type of three-phase synchronous motor, it has permanent magnets on the rotor, and it has no brushes or commutator (a brushless configuration). The PMBL motor is fed through a converter. The broad categories of PMBL motors are the PMBLAC motor and the PMBLDC motor. Brushed and brushless DC motors are shown in Figs. 4, 5 (Fig. 6).

The brushless DC motor is one of the most popular motors for electric vehicle applications because of its traction characteristics. The BLDC motor develops a trapezoidal back electromotive force, and BLDC motors have high torque density, high starting torque and high efficiency of around 95–98%.

Fig. 4 Block diagram of HEV (control system, battery, gasoline engine, motor/generator, transmission, wheels)

Fig. 5 Components of electric vehicles

Fig. 6 Components of brushed DC motor

(ii) Induction Motor

Induction motors have low starting torque and a limited speed range at fixed voltage and fixed frequency; on the basis of these characteristics alone, the induction motor is not suitable for electric vehicles. However, the induction motor admits various speed and torque control techniques, so the incorporation of a power-electronics-based AC drive is almost mandatory for EV applications. Variable-voltage, variable-frequency control is required to match the speed and load torque. The various control techniques for AC drives are field-oriented (vector) control through direct and indirect control,


voltage and current control, direct torque control, pulse-width modulation, space vector modulation, etc. [11–13].

(iii) Permanent Magnet Synchronous Motor

The permanent magnet synchronous motor is a classical salient-pole synchronous AC motor with approximately sinusoidally distributed windings, and it can therefore run from a sinusoidal supply without electronic commutation. When the AC supply is applied, the motor phase windings are excited sequentially, based on the rotor position information from the shaft-position sensor, so as to produce the desired torque and speed [14]. The PMSM has permanent magnets on its rotor, high power density and high efficiency, and it develops a sinusoidal back electromotive force. The PMSM is used for higher-power, high-performance applications such as cars and buses.

1.2.2 Some New Technologies for EV Motors

There is a focus on the development of new types of brushless and commutatorless electric motors, such as doubly salient motors, which have salient poles on both the rotor and the stator. The switched reluctance motor is a type of doubly salient motor with a simple structure. A permanent magnet brushless motor is obtained by incorporating permanent magnets in the stator of a doubly salient motor [1]; the rotor of such a machine carries neither permanent magnets nor windings. This class of motor is simple in construction and robust, making it suitable for electric vehicle applications. It is classified by the location of the permanent magnets as doubly salient permanent magnet (DSPM), flux-switching permanent magnet (FSPM) or flux-reversal permanent magnet (FRPM).

1.3 Energy Management and Storage System

Electric vehicles must have an on-board energy management and storage system, usually electrochemical batteries, ultracapacitors, ultra-high-speed flywheels or fuel cells. A fuel cell generates energy through an electrochemical reaction, while a battery stores energy. An ultracapacitor offers high specific power rather than high specific energy, and it works on the electrostatic principle [15]. Depending on weight and size considerations, a vehicle can also be electrified using flywheel technology. BEVs, PHEVs, HEVs and FCEVs all depend completely on on-board energy management and storage devices for efficiency and fuel economy. To achieve the specified capability and performance over a suitable driving cycle, all energy management and storage devices must be sized so that adequate peak power (kW) and sufficient energy (kWh) can be provided [16]. Electric-vehicle requirements such as life cycle, power density, energy density, size, maintenance, safety and recyclability at an appropriate cost


Fig. 7 Brushless DC motor

Fig. 8 Components of permanent magnet synchronous motor

are the major factors affecting the design of energy management and storage systems. The characteristics of the battery, ultracapacitor, flywheel and fuel cell are compared on the basis of specific power (W/kg) versus specific energy (Wh/kg), as shown in Fig. 9 (see also Figs. 7, 8).
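To make the kW/kWh sizing requirement concrete, the sketch below estimates storage mass for a battery (sized by energy) combined with an ultracapacitor (sized by the power burst). The specific-power and specific-energy figures are placeholder assumptions of a plausible order of magnitude, not measured values:

```python
def size_storage(avg_power_kw, peak_power_kw, drive_hours, batt_wkg=200.0,
                 batt_whkg=120.0, uc_wkg=2000.0, uc_whkg=5.0):
    """Rough mass estimate for a battery (energy-driven) plus an ultracapacitor
    (peak-power-driven) pack.  All specific figures are assumptions."""
    energy_kwh = avg_power_kw * drive_hours
    batt_kg = max(energy_kwh * 1000 / batt_whkg,   # enough stored energy
                  avg_power_kw * 1000 / batt_wkg)  # enough sustained power
    burst_kw = max(peak_power_kw - avg_power_kw, 0.0)
    uc_kg = burst_kw * 1000 / uc_wkg               # cover the power burst
    return {"energy_kwh": energy_kwh, "battery_kg": round(batt_kg, 1),
            "ultracap_kg": round(uc_kg, 1)}

print(size_storage(avg_power_kw=15, peak_power_kw=60, drive_hours=2))
```

With these assumed figures the battery mass is set by the energy requirement while the much lighter ultracapacitor covers the acceleration peaks, which is the usual motivation for hybridizing the two.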

1.3.1 Battery

A battery works on the principle of electrochemical reactions: when charging, it converts electrical energy into chemical energy, and when discharging, it converts chemical energy back into electrical energy. A battery is a combination of electrochemical cells; each individual cell stores chemical energy, and the energy stored by the pack depends on how the cells are connected [4].


Fig. 9 Comparison of specific power v/s specific energy

Electric vehicle electrification is done on the basis of the modularity, affordability, high energy density and flexibility of electrochemical battery technology [17]. Electric vehicle technology stores on-board energy of high power density and high energy density to meet the driving cycles of electric vehicle applications. The required battery characteristics differ with the type of electric vehicle: BEVs require batteries with high energy density; HEVs and FCEVs require batteries with high power density; and PHEVs require an intermediate battery technology combining high energy density (like a BEV battery) with high power density (like an HEV battery). Electric vehicles use batteries based on different chemistries, such as nickel-based batteries (Ni–MH and Ni–Cd), lead–acid batteries, and lithium-based batteries such as lithium-polymer and lithium-ion [1, 16, 18–21]. The lead–acid battery has low energy density, so it is used in the short term; nickel-based batteries are used in the long term; and lithium-based batteries are useful in the medium term. In the near future, the cost of Ni–MH and Ni–Cd batteries should fall significantly as their potential is explored further. Nowadays lithium-based batteries are quite attractive to the automobile industry because of merits such as low weight, low cost and high energy density. The comparison between different battery technologies is shown in Tables 2 and 3.


Table 2 Comparison of different types of battery technology [8]

| Battery parameter | Ni–Cd | Ni–MH | Lead–acid | Li-ion | Li-polymer |
|---|---|---|---|---|---|
| Nominal cell voltage (V) | 1.2 | 1.2 | 2.0 | 3.6 | 3.0 |
| Specific energy (Wh/kg) | 40–60 | 60 | 35 | 35 | 100–200 |
| Self-discharge (%/month) | 10–20 | 30 | 4–8 | 4–8 | ~1 |
| Specific power (W/kg) | 140–220 | 130 | ~200 | ~200 | >200 |
| Power density (W/L) | 220–350 | 475 | ~400 | ~400 | >350 |
| Energy density (Wh/L) | 60–100 | 220 | 70 | 70 | 150–350 |
| Cycle life | 300–700 | 300–500 | 250–500 | 500–1000 | 200–1000 |
| Operating temp (°C) | −40 to 60 | −20 to 60 | −20 to 60 | −20 to 60 | 0 to 60 |
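As a worked example of how such cell-level figures scale up to a pack, the sketch below uses the Li-ion nominal cell voltage from Table 2 (3.6 V) together with an assumed 3 Ah cell capacity and assumed pack targets; the series count sets the pack voltage and the parallel count sets the capacity:

```python
import math

def design_pack(pack_voltage, pack_kwh, cell_v=3.6, cell_ah=3.0):
    """Series/parallel cell counts for a pack (sketch; cell_ah is an assumption)."""
    series = math.ceil(pack_voltage / cell_v)       # cells in series for voltage
    cell_wh = cell_v * cell_ah                      # energy per cell
    total_cells = math.ceil(pack_kwh * 1000 / cell_wh)
    parallel = math.ceil(total_cells / series)      # strings in parallel for energy
    return {"series": series, "parallel": parallel,
            "cells": series * parallel,
            "actual_kwh": round(series * parallel * cell_wh / 1000, 1)}

print(design_pack(pack_voltage=360, pack_kwh=40))
```

For an assumed 360 V, 40 kWh target this yields 100 cells in series and 38 strings in parallel, slightly over-provisioning the energy because cell counts must be integers.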

Table 3 Comparison of advantages between different battery technologies [34]

Table 3 cross-compares the battery chemistries pairwise, listing the advantages of each type (lithium-ion, lithium-polymer, Ni–MH, Ni–Cd and lead–acid) over the others. The criteria compared include gravimetric and volumetric energy density, operating temperature range, cyclability, voltage output, self-discharge rate, design characteristics, safety, price and recyclability.

1.3.2 Ultracapacitor

An ultracapacitor works on the electrostatic principle to store electric energy, and its performance does not degrade with charging or discharging [22–24]. An ultracapacitor can store less energy than a battery but more than a conventional capacitor. In an ultracapacitor, a high surface area is offered by porous carbon electrodes impregnated with electrolyte, and a dielectric separator creates a small charge separation, on the order of 10 Å, between the electrodes. Through changes in fabrication and material selection, the capacitance can be increased to the range of 1000–5000 F [25–28] (Fig. 10). The ultracapacitor is useful for demanding applications because it provides a long life cycle and high power density. Compared with a battery, an ultracapacitor releases energy at high power much faster, and its charge and discharge rates are determined by its physical properties [25–27]. Ultracapacitors are used where burst power is required, such as in IC-engine-based heavy vehicles; in electric vehicles they can be used for starting, hill climbing and acceleration. The energy storage capability can be improved by combining an ultracapacitor with a battery [15]: the ultracapacitor reduces maintenance and battery-swapping costs and also increases battery life. In the near future, the ultracapacitor may become a dominant technology for electric vehicle electrification owing to its energy density at a suitable weight, but price is a concern.


Fig. 10 Construction overview of a capacitor, b ultracapacitor and c battery [30]
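The usable energy of an ultracapacitor follows E = ½·C·(Vmax² − Vmin²). Using a 3000 F, 2.7 V cell as an assumed example within the 1000–5000 F range quoted above:

```python
def ultracap_energy_wh(capacitance_f, v_max, v_min=0.0):
    """Usable energy between two voltages: E = 1/2 * C * (Vmax^2 - Vmin^2)."""
    return 0.5 * capacitance_f * (v_max**2 - v_min**2) / 3600.0

# Assumed 3000 F, 2.7 V cell:
full = ultracap_energy_wh(3000, 2.7)
usable = ultracap_energy_wh(3000, 2.7, 1.35)  # discharge only to half voltage
print(round(full, 2), round(usable, 2))
```

Discharging only down to half the rated voltage still extracts 75% of the stored energy, which is why ultracapacitors are usually operated over a restricted voltage window.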

Table 4 Different parameters of capacitor, ultracapacitor and battery [30]

| Parameter | Capacitor | Ultracapacitor | Battery |
|---|---|---|---|
| Charging time | 10⁻³–10⁻⁶ s | 0.3–30 s | 1–5 h |
| Discharging time | 10⁻³–10⁻⁶ s | 0.3–30 s | 0.3–3 h |
| Energy density (Wh/kg) | – | – | – |
| Lifetime | 20 years | >10 years | – |

In an electric vehicle, a flywheel can convert stored mechanical energy into electrical energy and vice versa using a motor and a generator, respectively. The energy recovered from the regenerative braking system is used to charge the flywheel, which in turn charges the batteries. New flywheels can store more energy and power than existing lead–acid batteries of comparable weight and volume [25, 26]. Unlike ultracapacitors and batteries, flywheels are not affected by deep discharge, which extends their life cycle. Because of the very high rotation speed, the placement of the flywheel in an electric vehicle is a major safety concern [18]. A comparison of all the above-discussed energy management and storage devices (battery, fuel cell, flywheel and ultracapacitor) is shown in Table 5.

1.4 EV Charging Technology

A breakthrough in battery technology is needed for high specific energy and low initial cost, and battery charging technology is being researched intensively. Fast battery chargers, battery swapping and highway charging stations fed from renewable energy are all under development. Wireless power transfer (WPT) can be used to charge the battery based on the move-and-charge (MAC) concept. At a residential charging station, an EV owner can charge the vehicle overnight through a plug. Charging can also be done while parked at a public charging station or at a commercial parking place, for a fee or free. At public charging stations, fast charging is also available, with power above 40 kW, allowing battery swaps and charges in under 15 min (Fig. 12).
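The charging times discussed above follow directly from pack energy and charger power. A sketch with assumed charger ratings, an assumed 40 kWh pack and an assumed 90% charging efficiency:

```python
def charge_time_h(pack_kwh, charger_kw, soc_from=0.2, soc_to=0.8, efficiency=0.9):
    """Hours to move a pack between two states of charge at a given charger power."""
    energy_needed = pack_kwh * (soc_to - soc_from)
    return energy_needed / (charger_kw * efficiency)

# Assumed 40 kWh pack, charged from 20% to 80% state of charge:
for kw, label in [(3.3, "residential"), (22, "public AC"), (50, "fast DC")]:
    print(f"{label}: {charge_time_h(40, kw):.1f} h")
```

The inverse relationship between charger power and charging time is why fast DC stations above 40 kW can bring a session down to well under an hour, while an overnight residential plug needs many hours for the same energy.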


Fig. 12 Electric vehicles charging station

1.5 Conclusion

With today's concern about the environment and the pollution caused by vehicle emissions, the growth of EV technology is rapidly increasing. EVs are good for the environment because they do not emit greenhouse gases, and as such they help keep the atmosphere cleaner. Electricity is less expensive than fossil fuel, so the running cost of EVs is not a burden. The maintenance cost of EVs is lower because EVs have no oil and do not require the maintenance tasks associated with an oil engine, and the brakes of electric vehicles tend to last longer. EVs do not create noise pollution; they are nearly silent in operation. Governments implement several subsidies and fee-free policies for the purchase of electric vehicles. In this paper, we reviewed the evolution and the various types of EVs. HEVs are not clean vehicles, but BEVs and FCEVs have a very bright future and the potential to reduce emissions. The development of batteries of good power capacity will make them much easier to use, and coordination between electrical, chemical and electronics engineering will make it easier to tackle the remaining limitations; the field is the summation of all these areas. Even so, some EVs have short driving ranges; the range achievable with an electric vehicle is constantly improving, but it still needs to be considered and may not be appropriate for those with longer commutes. Charging an EV can take a lot of time: an electric vehicle takes much longer to recharge, and the time investment and necessary planning put some people off. Charging stations are not available everywhere; in a rural or suburban area, it may be harder


to find a charging station. In this chapter, a brief overview of electric vehicles has been provided along with the outcomes of the technology.

1.6 Challenges for EVs

With today's concern about the environment and the pollution caused by vehicle emissions, the growth of EV technology is rapidly increasing. Since EVs arrived, numerous challenges and issues have been faced by companies, manufacturers and consumers; some of these challenges are the same ones that made electric vehicles fall out of use after the 1920s. There is a need to focus on both technological and socio-economic concerns for the development of the vehicle. Among the technological concerns, the range is very limited: for long journeys, EVs are still not viable. The range problem can be solved with the development of high-power batteries. It takes at least one to several hours to recharge a battery, whereas a fuel tank can be refilled within seconds to minutes; moreover, there is a lack of charging-station infrastructure. EVs are not yet economical: their initial cost is very high, although their maintenance cost is very low, and the purchase cost of an EV is high compared with an ICEV, so it is not readily accepted by customers. As the power generated in an FCEV comes from chemical reactions, hydrogen escaping from the tanks is highly flammable and could be hazardous to the public, so the tanks should be properly reinforced with carbon fibre. Fuel cells are expensive, and the sizes of the battery and engine need to be optimized. Another drawback of these vehicles is the lack of infrastructure to produce hydrogen without harming the environment.



S. K. Sharma and M. S. Manna




Heuristic-Based Test Solution for 3D System on Chip

Harpreet Vohra, Manpreet Singh Manna, and Inderpreet Kaur

Abstract In 3D System on Chips (SoCs), the cores may be partitioned onto different layers and interconnected using very small vertical interconnects running Through Silicon Vias (TSVs). Such TSVs cater to multiple accesses, such as functional, control, and test access. Being limited in number, very few of them are available for test purposes. This paper deals with the design of on-chip test architectures for large 3D SoCs that target the various manufacturing defects in a modular fashion. An integrated solution is proposed which, for the first time, incorporates a general solution for coarse-grain partitioned 3D cores comprising interactive and non-interactive wrapper designs, flat and hierarchical cores, and TAM design and assignment, while meeting the tight constraints set by the power budget, the maximum Test Access Mechanism (TAM) width, the test time, and the TSVs available for test. At the same time, an attempt has been made to save memory so as to allow multisite SOC test, which helps reduce the nonrecurring infrastructure cost by avoiding unnecessary bits that result from improper allotment of wires to and within the cores. Simulation results are presented for the different cores of the ITC'02 SOC benchmark circuits. Results show that the approach produces test times comparable to those proposed earlier by different authors, while the actual number of TSVs used remains below the available maximum. Keywords Test wrapper · VLSI testing · System on chip · Test access mechanism · Test scheduling

H. Vohra (B) Electronics and Communication Engineering Department, Thapar Institute of Engineering and Technology, Patiala, India M. S. Manna Electrical and Instrumentation Department, Sant Longowal Institute of Engineering and Technology, Longowal, India I. Kaur GREAT Alliance Foundation, New Delhi, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_18


1 Introduction

With the advancements in the semiconductor industry, the integration of billions of transistors has made a complete system on chip feasible. Driven by the need for improved performance, low power, smaller size, and shorter time to market, it has become possible to bring a heterogeneous mix of digital/analog logic blocks and embedded memories onto a common platform called an SOC. Built on improved semiconductor process technology and design styles, SOCs benefit from the amalgamation of various IP blocks, bought either within the company or from a third party in hard or soft form. Owing to the complexity of the individual cores, it is not uncommon to find multiple levels of hierarchy within intensely complex IP cores. The long interconnects carrying the various signals have started behaving as transmission lines, introducing delays and higher power consumption [1]. Recent developments have led to implementations comprising 3D structures, which make it possible to stack dies with different technology bases and have enabled mixed-signal design as well. Vertical stacking brings the advantages of lower footprint, higher packaging density, increased performance, and reduced interconnect length through the use of small vertical interconnects running through silicon vias [2]. Through Silicon Vias (TSVs) provide the communication platform for the functional, power, clock, and test needs. Effective use of this emerging technology requires effective, commercially available tools and approaches for design and test. To facilitate simple and effective testing of complex cores, the modular testing approach has proven most beneficial: it employs the test infrastructure developed for 2D SOC structures and continues to serve in the 3D case as well.
This test infrastructure consists of a Test Access Mechanism (TAM) and test wrappers [3]. As system complexity increases, so does the test data volume; the ITRS roadmap suggests this data volume will increase many-fold by 2025 [4]. This huge volume increases the test application time, which is directly related to the overall test cost. To reduce this cost, tests need to be performed concurrently while addressing various test conflicts: hierarchical cores, multiple test sets for the same core, cross-core interconnections, through silicon vias, and sharing of the TAM wires, all of which must be considered while scheduling the cores for test. The test sets are highly uncorrelated and cause heavy switching activity, which aggravates the problem of power dissipation. It is important not to exceed the power ratings of the chip, as doing so can lead to hotspots and other thermal issues, including burnout. Several approaches for 2D test wrapper and TAM optimization have been proposed in the past; with the shift towards 3D technology, such approaches need to be improved too. A 3D SOC design can be equipped with either fine-grain partitioned cores or coarse-grain partitioned cores, where the cores are planar but may lie on different layers. However, as suggested by [5–7], fine-grain partitioned 3D cores are not likely to occur in the near future. To perform the testing of the stacked dies, the


conventional problem of modular core-based testing of the 2D SOC [8–11] needs to be extended: a TAM architecture and wrapper design must be determined so that the overall SOC test time [12–15] is minimized while the number of TSVs used by the TAM, the power dissipated at any time during test, and the total bandwidth used remain at or below the global constraints [16–18]. All the constraint parameters involve tight trade-offs, as explained below.
• If the test bandwidth is increased, the test time decreases but the infrastructure cost increases.
• If the test power dissipation during test has to be decreased, there is less leverage to test many cores in parallel, which in turn increases the test time.
• It is not always optimal to treat the cores as flat; the hierarchy must be considered. Considering the hierarchy affects the test time, since it may introduce precedence constraints and affect the ordering of the various cores.
• In a 3D design, the bandwidth has increased through the use of TSVs, but because the TSVs serve various purposes, only a limited number are available for test access.
• With multiple dies being stacked, the dies need to be tested before and after bonding, at complete and partial stack levels, to verify the new types of defects that occur because of fabrication steps such as thinning, bonding, and alignment.
• The ATE memory requirement also increases owing to the huge data volumes.
In this paper, we address the modular core-based test problem of coarse-grain partitioned 3D SOCs using an enhanced genetic algorithm. It incorporates various constraints: test bandwidth, TSVs, test power, precedence, multiple test sets, memory, and cross-core interconnect. It also proposes a hierarchical test wrapper design that allows the parent and child cores to be tested as two disjoint sets.
TAM optimization and assignment are done in such a way that the number of TSVs used stays below the number actually available, by employing a look-ahead scheme. The wrapper design and TAM optimization avoid the idle bits [19] that would further increase the test data volume. The remainder of this paper is organized as follows: Sect. 2 discusses prior work in the area of test design. Sect. 3 describes a motivational example of three SoCs with different TAM styles and core types. Sect. 4 defines a classification of problem definitions for 3D SoCs. Sect. 5 presents details of the proposed approach. Experimental results for the ITC'02 SoC test benchmarks are presented in Sect. 6. Finally, Sect. 7 concludes the paper.


2 Literature Review

The core test wrapper forms the interface between an embedded core and its environment; it serializes the test vectors brought in through the TAM wires and applies them to the respective core elements. It performs test width adaptation as well as core isolation to facilitate modular test. In an SOC, cores may be in either wrapped or unwrapped form. The test wrapper interfaces a core's internal components to the available TAM. Such wrappers are designed to reduce the overall test time of a core by balancing the core elements across the wrapper chains as evenly as possible; the wrapper chains are made equal in number to the TAM wires. Many wrapper designs have been proposed for 2D SOCs, such as the test collar and the test shell proposed by Varma et al. [20]. A P1500 core wrapper [21, 22], shown in Fig. 1, supports both core-internal and core-external testing through the INTEST and EXTEST modes [23]. An approach combining P1500 and the Test Shell has been proposed by Marinissen et al.; a major advantage is the flexible bypass it introduces. Wrapper design optimization is NP-hard [7]. Much research has been done over the last decades to optimize and form balanced wrapper chains that minimize test time, using approaches such as best-fit decreasing, bin packing [22], and largest processing time. Goel et al. [10] presented a P1500-styled test wrapper for hierarchical cores which allows testing in two different modes, i.e., parent INTEST mode and child INTEST mode, as shown in Fig. 1b. Though it shows the distribution of the available TAM between parent and child, still

Fig. 1 a P1500 test wrapper for a flat core. b P1500-styled test wrapper for hierarchical cores: (i) parent INTESTP mode; (ii) child INTESTC mode


no reference is given to the partitioning technique. Kim et al. have proposed a low-area wrapper cell. Another challenging task in the overall test infrastructure optimization is the design of the TAM and the assignment of cores to these TAMs. Since the number of pins is always limited, the test bandwidth needs to be used judiciously. The approaches developed for 2D SOCs are to multiplex the core terminals to chip-level pins, to use isolation rings with an 1149.1 standard-based test access port for core-level DFT [20, 24], to use a partial isolation ring exploiting transparency, or to use the functional access mechanism to perform the test [25]. Aerts and Marinissen [8] described three different bus-based TAMs. The Multiplexed architecture gives each core access to the full TAM width on a time-multiplexed basis, leading to longer test times. The second architecture, the Daisy chain, connects all cores through one long TAM. The Distributed architecture allows parallel testing of different cores through on-chip partitioning. Varma and Bhatia [20] proposed a combination of the multiplexed and distributed architectures, called the Test Bus: several multiplexed buses can be used, which operate independently of each other and hence allow tests to be applied concurrently; however, cores connected to the same Test Bus can only be tested sequentially. Two common classes of test bus architectures are the fixed-width and flexible-width test bus architectures. Given the total available number of TAM wires, we can partition them into several TAMs. In a flexible-width test bus architecture, we do not distinguish several TAMs; instead, we are able to fork and merge TAM wires between cores and thus increase TAM wire utilization. Marinissen et al. [8] proposed a TestRail that combines the daisy chain and distributed architectures and provides a flexible, scalable test access mechanism.
TAM partitioning, distribution, and assignment to the different cores is an NP-hard problem. With the recent drift towards 3D SOCs, researchers have been proposing various solutions for wrapper design and TAM optimization. For fine-grain partitioned SoCs, heuristics have been proposed by Roy et al. [13], where an attempt is made to reduce the test time while keeping the global TSVs as a constraint, though little is done to reduce the number of TSVs used. Wu et al. [14] extended the ILP formulation presented in [9] to span the complete stack of dies; however, it optimized the testing time considering only the bandwidth and TSVs as constraints while ignoring many others. An extension of the IEEE 1149.1 standard is used in [15] to support pre- and post-bond testing; it presents a test optimization technique to reduce the test time for the complete stack with the test bandwidth and the TSVs as constraints. The work was later improved by the same authors by including multiple test insertions; it considers a more realistic model of TSV use per die, supports more optimized die-internal and die-external testing techniques, and includes both thin and fat wrapper designs. In this paper, we propose a generalized solution that covers flexibility in the test design at the core and die level, including multiple test insertions for flat and hierarchical cores with TSV, power, and precedence constraints. The problem is solved as a multi-objective optimization.


3 Motivational Example of Testing 3D SoCs with and Without Hierarchical Cores

Consider a hypothetical SOC consisting of 6 flat cores and 1 hierarchical core spread over three layers, as shown in Fig. 2, with cores 2, 6, and 7 lying on layer 0, cores 1, 3, and 4 on layer 1, and core 5 on layer 2. The objective is to test all the cores with a total available bandwidth B, using the minimum number of TSVs in the minimum possible time, while the power utilization stays below Pmax. There are three approaches to do so. The total bandwidth B is divided into three partitions W1, W2, and W3.

Approach 1: W1, W2, and W3 are spread over the three layers, which allows a complete-stack test and demands that the TAM structure be soft. As shown in the figure, the total number of TSVs used is 2W1 (TAM going upward) + 2W1 (TAM going downward) + W2 (TAM going upward) + W2 (TAM going downward). This solution can be good for testing the complete stack; however, partial-stack testing is not possible, and if the TAMs of the individual dies are hard, this solution may be infeasible or inefficient.

Fig. 2 a Approach 1. b Approach 2. c Approach 3


Approach 2: W1, W2, and W3 are assigned to the different dies, and all cores are treated as flat. This handles the case where the die-internal TAM is fixed. The number of TSVs used is the same as above, but the TAMs on the individual dies can be used judiciously and little effort is required: all that is needed is to schedule the assignment of the cores while satisfying the constraints. Since fixed buses are used, the hierarchy constraint is not satisfied. This approach, however, allows both global-stack and partial-stack tests.

Approach 3: W1, W2, and W3 are assigned to the different dies, and the cores are treated as given by Goel et al. [10]. This handles the case where the die-internal TAM is flexible. Therefore, depending on the flexibility provided, the test engineer can choose one of these solutions.
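The TSV arithmetic of Approach 1 generalizes: a partition of width W that climbs to layer L and routes back to layer 0 crosses L layer boundaries in each direction, at a cost of W TSVs per crossing. A minimal sketch (the function name and layout are ours, not from the paper):

```cpp
// TSVs consumed by one TAM partition of the given width that enters from
// layer 0, reaches 'topLayer', and routes back down to layer 0: each of
// the topLayer boundary crossings costs 'width' TSVs, in each direction.
int tsvForPartition(int width, int topLayer) {
    return 2 * width * topLayer;  // up plus down
}
```

For the example SOC, W1 reaches layer 2 and so consumes 2W1 TSVs upward and 2W1 downward, while W2 reaches layer 1 and consumes W2 in each direction, matching the count quoted above.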

4 Problem Formulation and Classification

In coarse-grain partitioned 3D SOC designs, the cores can lie on different layers, so in order to test each of them the TAM needs to be routed to the different layers to carry the test stimuli and responses between the core under test and the external ATE. For this, the TAMs have to make use of the TSVs, which are limited in number. At the die level, this TAM infrastructure can be either fixed or flexible, which may or may not require the system test engineer to decide the TAM partition and distribution. Similarly, the individual cores can be hard or soft: either the flip-flops are already distributed into predesigned internal scan chains, or the chains must be designed as part of the wrapper design. Depending on the construction details, the cores can be flat or hierarchical, where a megacore itself consists of child IP cores. Also, depending on the flexibility, the child and parent TAM and wrapper design can be predesigned or may need to be designed by the system test integrator. The assignment of the various cores in the SOC needs to be scheduled subject to the constraints set by the hierarchy, the TSV limit, power, bandwidth, precedence, etc. Accordingly, the 3D SOC test design problem has three parameters: wrapper, core type, and TAM. Taking them as three variables, the test problem becomes T(W, CT, T), where the wrapper W can be hard (1) or soft (0), CT can be flat or hierarchical, and the TAM can be fixed or flexible; hence, we may have 8 combinations. The test development problem can be formulated as follows.

Objective 1:

$$\text{Minimize} \; \max_{1 \le j \le B} \sum_{i=1}^{N} T_i(w_j)\, x_{ij} \tag{1}$$

where $x_{ij} = 1$ if TAM $j$ is connected to core $i$, else 0.

Objective 2:

$$\text{Minimize} \; \max_{1 \le j \le B} \text{TSV\_used}(j) \tag{2}$$

where $\text{TSV\_used}(j) = \text{TSV\_X} + \text{TSV\_Y} + \text{TSV\_Z}$, with

$$\text{TSV\_X} = W_j \sum_{i,t=1}^{N} \sum_{k=1}^{L_{max}} x_{ij}\, x_{tj}\, \left| \text{core}_i(j, k+1) - \text{core}_t(j, k) \right|$$

$$\text{TSV\_Y} = W_j \cdot \text{layer\_no(first core of TAM } j\text{)}, \qquad \text{TSV\_Z} = W_j \cdot \text{layer\_no(last core of TAM } j\text{)}$$

Subject to:

$$\sum_{i=1}^{N} \sum_{j=1}^{B} \sum_{l=1}^{T_{max}} \sum_{k=1}^{Set_i} x_{ij}\, t_{ijk}\, P_{ijkl} \le P_{peak}$$

$$\text{Total\_time} \le T_{max}; \qquad \sum_{j=1}^{B} w_j \le W_{max}$$
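To make the formulation concrete, the following sketch evaluates objective (1) and TSV_used(j) for a given assignment. It is illustrative only, with names of our own choosing; it assumes cores on one TAM are tested sequentially and that a TAM visits its cores in the listed layer order, entering from and returning to layer 0.

```cpp
#include <vector>
#include <algorithm>
#include <cstdlib>

// Objective (1): the overall test time is that of the slowest TAM,
// i.e. the maximum over TAMs of the sum of their cores' test times.
long long overallTestTime(const std::vector<std::vector<long long>>& coreTimesPerTam) {
    long long worst = 0;
    for (const auto& tam : coreTimesPerTam) {
        long long t = 0;
        for (long long c : tam) t += c;
        worst = std::max(worst, t);
    }
    return worst;
}

// TSV_used(j) for a TAM of width 'width' visiting cores on the given layers
// in order: TSV_Y and TSV_Z are the climbs from/to layer 0 at the two ends,
// and TSV_X covers the layer changes between consecutive cores.
int tsvUsed(int width, const std::vector<int>& coreLayers) {
    if (coreLayers.empty()) return 0;
    int crossings = coreLayers.front() + coreLayers.back();
    for (std::size_t k = 0; k + 1 < coreLayers.size(); ++k)
        crossings += std::abs(coreLayers[k + 1] - coreLayers[k]);
    return width * crossings;
}
```

For example, a width-4 TAM visiting cores on layers 1, 0, 2 costs 4 × (1 + 1 + 2 + 2) = 24 TSVs under these assumptions.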

5 Proposed Solutions

Test wrapper design: The test wrapper serves as an interface between the core and the TAM. It can be in any of the following modes at any time: normal operation, internal test, external test, or bypass. Cores in the SOC can be wrapped or unwrapped. The test time of a core is a function of the number of TAM wires attached to it and can be calculated by the formula given in Eq. 1. If the test wrapper is hard, the test integrator simply needs to calculate it; if it is soft, the wrapper can be designed per the wrapper design algorithm given in Fig. 3a. An individual core can contain other cores, termed its children. To design the wrapper for such hierarchical cores, the hierarchy and flexibility details impose constraints in addition to the core details. Depending on whether the core is hard or soft, one may be provided only with the number of flip-flops. Accordingly, the test wrapper design can be categorized as described below. The pseudocode of Fig. 3a is:

Wmax = number of TAM connections
Nb = (int)(Wmax / 2)
sc = number of scan chains
Process 'internal chaining':
  Sort the internal scan chains in descending length order
  Select the Nb longest scan chains as the Nb lines
  While (sc > Nb lines):
    Chain the shortest line with the shortest scan chain
    Update sc (sc = sc - 1)
    Update the length of the longest scan chain
    Sort the scan chains in decreasing length order
End process

utilized      /* whether the wire has been allotted at a given point of time (binary) */
tsv_used      /* number of TSVs used by the TAM wire at any point of time */
current_layer /* number of the layer the TAM wire is currently residing on */


Fig. 3 a Pseudocode of Wrapper design. b Data structure of TAM wire
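A possible rendering of the internal-chaining heuristic of Fig. 3a in the paper's implementation language; the function name and container choices are ours. It seeds the Nb wrapper chains with the Nb longest scan chains and then merges each remaining chain, shortest first, into the currently shortest wrapper chain:

```cpp
#include <vector>
#include <algorithm>

// Balance internal scan chains over 'nb' wrapper chains (Fig. 3a sketch).
// Returns the resulting wrapper-chain lengths.
std::vector<int> buildWrapperChains(std::vector<int> scanChains, std::size_t nb) {
    // Sort the internal scan chains in descending length order.
    std::sort(scanChains.begin(), scanChains.end(), std::greater<int>());
    std::size_t seeds = std::min(nb, scanChains.size());
    // The nb longest chains become the wrapper chains ("lines").
    std::vector<int> wrapper(scanChains.begin(), scanChains.begin() + seeds);
    // Chain the shortest remaining scan chain onto the shortest wrapper chain.
    for (std::size_t i = scanChains.size(); i > seeds; --i)
        *std::min_element(wrapper.begin(), wrapper.end()) += scanChains[i - 1];
    return wrapper;
}
```

With scan chains {10, 8, 6, 4, 2} and nb = 2, this yields chains of length 14 and 16, keeping the scan-in/scan-out paths nearly balanced.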


Type 1: Flat cores. Given a core with its number of functional input and output elements, its number of scan chains with their individual lengths, and the placement of the core, determine the test wrapper design such that the test time defined by Eq. 1 is minimized. Keeping in view that all the core's I/O pins lie on one layer (which may be other than the bottom layer), the different elements are stitched into wrapper chains such that their lengths are balanced. If the core is hard, this problem can be solved using the wrapper design algorithm given in Fig. 3a. If the core is soft, with the internal scan chains not fixed and only the number of flip-flops given, the elements can be balanced so as to minimize the difference between the minimum and maximum scan-in and scan-out path lengths.

Type 2: Hierarchical cores. Given a core with the numbers of functional input and output elements and scan chains (with their individual lengths) of the parent and child cores, and the placement of the core, determine the test wrapper design such that the test time defined by Eq. 1 is minimized. Goel et al. [10] identified two INTEST modes, as shown in Fig. 1b, c. A brief summary of such wrapper designs follows.

Parent INTEST mode (INTESTP): In this mode, parent-core internal testing is done. Test data are scanned through the parent core's scan chains, the parent core's wrapper cells, and the child cores' wrapper cells. The child cores need to be in EXTEST mode, as their wrapper output cells are required to apply test stimuli to the parent core while their wrapper input cells capture test responses from the parent core. As a result, test data have to be scanned through both the parent core and the child cores. Hence, the available TAM wires have to be distributed between the parent-core and child-core TAM architectures.
Child INTEST mode (INTESTC): In this mode, child-core internal testing is done; all the child cores are in INTEST mode. The parent core's wrapper elements can be in any mode of operation, since the TAM inputs can transport data to the child cores' terminals regardless of the parent core's own mode. Thus, in this mode, all the TAM wires can be utilized by the child cores for their INTEST testing. Depending on the flexibility of the TAM provided to such cores, the wrapper design can proceed by considering the TAM to be hard, in which case the test bandwidth assigned to parent and child is fixed and the wrapper design for child and parent is done as presented by Goel et al. [26]. If the TAM is soft, we adopt our modified wrapper design, in which the test time is optimized further by partitioning the TAM width on the basis of the Pareto-optimal widths of the child and parent cores so as to reduce the type-2 idle bits [19]. Thereafter, the wrapper design of the child can be done per the algorithm presented above, and the parent INTEST wrapper is formed by stitching the parent input, child output, internal scan chain, child input, and parent output cells.

Problem 2: The goal is to achieve a near-optimal test access mechanism distribution and scheduling technique. We are given the following: the set of cores of an SOC with the details of their functional inputs and outputs, internal scan chains with their lengths,


test power consumption, resource conflicts, precedence constraints if any, placement on the different layers, and the global limit on the through silicon vias. All the TAM chains start from layer zero and have three basic parameters attached to them: the utilization variable, the TSVs used, and the current layer. Figure 3b outlines the data structure of the TAM wires. The utilization of all the TAM wires is initially set to 1, representing that no TAM wire has yet been allotted to any core. Initially, all the cores are checked for compatibility in terms of power consumption to decide which cores can be scheduled in parallel. Depending on the hierarchy, conflicts that prevent parallel execution of the parent INTEST and child INTEST are removed. Similarly, precedence constraints, multiple test sets, and external test considerations are taken into account. The cores that consume higher power are allotted more wires, per the Pareto-optimal widths, so that the testing of such cores finishes as soon as possible. Look-up tables of the Pareto-optimal points are prepared for the various cores. Depending on the flexibility of the TAM design at the various levels, we can further classify the TAM design approach as hard or soft.

Problem 2, type a (TAM soft): Here, in order to avoid type-1 idle bits, we rely on the fork-and-merge technique, which allows efficient allocation of the TAM to individual cores as far as possible. The scheduling is also done in such a way that the number of TSVs used is reduced. The TAM design algorithm for a soft TAM proceeds as given in Fig. 4a.

Algorithm TAMS (enhanced greedy, flexible TAM):
Step 1: Initialize the n cores and the w TAM wires.
Step 2: Sort the cores by power and bandwidth.
Step 3: Allot the maximum available wires to the core with the maximum Pmax; set the allotment variable to 0 to mark them attached, and update the TSVs used and the current layer accordingly.
Step 4: Attempt to schedule further cores with the leftover bandwidth, subject to the power constraint; update the wire parameters accordingly.
Step 5: Check for the cores with the shortest test time; after their wires become free, update the parameters.
Step 6: Check for cores with the same or smaller TAM requirement lying on the same layer; if found, schedule them, else look for those with the minimum TSV need and schedule those cores.
Step 7: Update the wire parameters, the cores scheduled, and the TSVs used.
Step 8: If cores are still left for scheduling, go to Step 3, keeping the TSV difference in view.
Step 9: Calculate the total test time.

Algorithm TAMS (enhanced greedy, fixed TAM):
Step 1: Initialize the n cores and the w TAM wires.
Step 2: Sort the cores by power and bandwidth.
Step 3: Allot the maximum available wires to the core with the maximum Pmax; set the allotment variable to 0 to mark them attached, and update the TSVs used and the current layer accordingly.
Step 4: Attempt to schedule cores to the other TAMs, subject to the power constraint; update the wire parameters accordingly.
Step 5: Check for the cores with the shortest test time that lie on the same layer; schedule the cores per the hierarchy and precedence constraints, and update the parameters after the wires become free.
Step 6: If no core is available on the same layer, find those with the minimum TSV requirement that satisfy the other constraints, and schedule those cores.
Step 7: Update the wire parameters, the cores scheduled, and the TSVs used.
Step 8: If cores are still left for scheduling, go to Step 3, keeping the TSV difference in view.
Step 9: Calculate the total test time.

Fig. 4 a TAM design for flexible TAM. b TAM design for fixed TAM

Heuristic-Based Test Solution for 3D System on Chip


Type b: TAM being hard: Here, in order to avoid the Type 1 idle bits, we rely on the test bus technique, which allows efficient allocation of the TAM to the individual cores as far as possible. The scheduling is also done in such a way that the number of TSVs used is reduced. All the TAM chains start from layer zero and have three basic parameters attached to them, namely, a utilization variable, the TSVs used, and the current layer. The algorithm shown in Fig. 4b outlines the data structure of the TAM wires.
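The per-bus bookkeeping described above (utilization variable, TSVs used, current layer) might be modeled as below. The rule that extending a bus by one layer costs one TSV per wire is our assumption for illustration, not something the text specifies.

```python
from dataclasses import dataclass

@dataclass
class TamBus:
    """Bookkeeping for one test bus: a utilization flag, the TSVs it has
    consumed, and the layer it currently reaches (all buses start at
    layer zero, as in the text)."""
    width: int
    busy: bool = False      # utilization variable
    tsv_used: int = 0
    current_layer: int = 0

    def route_to(self, layer):
        # assumed cost model: crossing one layer consumes one TSV
        # per wire of the bus
        hops = abs(layer - self.current_layer)
        self.tsv_used += hops * self.width
        self.current_layer = layer
```

Tracking `tsv_used` per bus is what lets a scheduler prefer cores on the bus's current layer, which is exactly the preference Step 6 of the algorithm expresses.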

6 Simulation Setup and Results

The detailed information on every benchmark is listed in [26]. The coding is done in C++. It may also be added that the test bus width distribution chosen is fixed and arbitrary for the experimental purpose.

Results for P22810 and d695: TSVs available = 1000.

Case 1: Fork and merge allowed in individual test buses. As evident from Table 1, two conclusions can be drawn: (a) as the peak power limit is increased from 3000 to 6000, more cores can be tested in parallel, which reduces the overall testing time of the SoC; and (b) although fork and merge allows TAM wires to be reassigned from one core to another as soon as they become free, the test time is not reduced drastically until the Pareto-optimal length for each core is reached. Also, in these cases, the hierarchy has been assumed to be flat, leading to sequential testing of child and parent cores, which in turn increases the testing time.

Table 1 Fork and merge allowed in individual test buses

SOC     TAM  Partition       Test time (P1: 3000)  Test time (P2: 6000)
P22810  32   2(16,16)        117,864               115,866
             3(10,10,12)     105,183               107,814
             4(8,8,8,8)      100,156               101,614
        64   2(32,32)        112,116               111,557
             3(22,22,20)     111,158               109,721
             4(16,16,16,16)  112,953               102,951
D695    32   2(32,32)        45,783                45,644
             3(22,22,20)     50,443                50,443
             4(16,16,16,16)  55,633                53,389
        64   2(32,32)        44,633                52,958
             3(22,22,20)     44,633                43,366
             4(16,16,16,16)  43,633                43,233

Case 2: Assignment of different layers to different buses, allowing the child and parent cores of the hierarchical cores to work in parallel. Herein, the TAM is divided


H. Vohra et al.

into two, three, and four partitions, signifying one bus dedicated to each SoC layer. This type of scenario (results shown in Table 2) results in situations where the cores lying on independent dies behave as 2D designs, following which the conventional modular test approaches can be further used for the test architecture design. This scenario provides a lot of flexibility to the test engineer. It may be noted that the results obtained could be improved even more if different partition widths are chosen.

Table 2 Assignment of different layers to different buses allowing the child and parent cores of the hierarchical cores to work in parallel

SOC     TAM  Partition    Test time (P1: 3000)  Test time (P2: 6000)
P22810  32   2(16,16)     112,116               165,430
             3(10,10,12)  124,247               112,116
        64   2(32,32)     112,116               165,430
             3(22,22,20)  151,360               114,692
D695    32   2(16,16)     43,633                48,196
             3(10,10,12)  48,196                43,633
        64   2(32,32)     37,641                43,633
             3(22,22,20)  26,677                48,196

Case 3: No TAM partitioning, full flexibility for routing TAM wires between different layers. This is a very typical case wherein the test wires can be routed freely, and can fork and merge, as per the TAM design needs. The results shown in Table 3 can be greatly improved if the TAM width selection is done considering the Pareto-optimal points. However, this type of TAM design leads to an unaffordable increase in TSV usage, so one needs to be very careful while selecting this option.

Case 4: If more emphasis is put on the test scheduling, then the test application time can be reduced a little. As can be seen from the results shown in columns 4 and 5 of Table 4, this seems to be the best case, wherein the child and parent cores of the hierarchical structures can be tested in parallel. Also, since the test buses are not restricted to any particular layer, the whole 3D structure gets reflected as any 2D SoC, leading to a better time optimization.

Table 3 No TAM partitioning, full flexibility for routing TAM wires between different layers

SOC     TAM  Test time (P1: 3000)  Test time (P2: 6000)
P22810  32   117,106               117,106
        64   111,116               111,116
D695    32   44,633                48,196
        64   43,833                43,833


Table 4 Test bus can execute concurrently: flat hierarchy at core levels with parent cores executing after child cores in hierarchical IP cores

SOC     TAM  Partition       Test time (P1: 3000)  Test time (P2: 6000)
P22810  32   2(16,16)        111,116               110,866
             3(10,10,12)     129,116               127,814
             4(8,8,8,8)      140,156               131,614
        64   2(32,32)        112,116               111,557
             3(22,22,20)     119,158               119,721
             4(16,16,16,16)  121,953               120,951
D695    32   2(16,16)        45,783                45,644
             3(10,10,12)     50,443                50,443
             4(8,8,8,8)      55,633                51,234
        64   2(32,32)        44,633                52,958
             3(22,22,20)     51,633                51,366
             4(16,16,16,16)  54,633                54,833

7 Conclusion and Future Work

As 3D technology was introduced to design more sophisticated SoCs, the test challenge has become more severe than ever before, requiring new DFT techniques. This paper discusses three practical scenarios which can emerge based on the flexibility of the test architecture development. A greedy-algorithm-based heuristic is presented to address the test access mechanism design in 3D designs. It can be concluded that the approaches yield near-optimal test application times. Future work may comprise further optimization of the algorithm so that the testing time can be reduced even more.

References

1. Davis JA, Venkatesan R, Kaloyeros A et al (2001) Interconnect limits on gigascale integration (GSI) in the 21st century. In: Proceedings of the IEEE, pp 305–324
2. Lewis DL, Lee HHS (2009) Test circuit-partitioned 3D IC designs. In: Proceedings of ISVLSI, pp 139–144
3. Zorian Y, Marinissen EJ, Dey S (1999) Testing embedded-core-based system chips. IEEE Comput 32(6):52–60
4. International technology roadmap for semiconductors. http://www.itrs.net/links/2009ITRS/Home2009.htm, July 2012
5. Marinissen EJ (2009) Test challenges for 3D-SICs: all the old, most of the recent, and then some new! In: Proceedings of ITC
6. Ramm P, Armin K, Josef W, Taklo MMV (2010) 3D system-on-chip technologies for more than Moore systems. Microsyst Technol 16:1051–1055
7. Mak TM (2006) Test challenges for 3D circuits. In: Proceedings of the 12th IOLTS, p 79


8. Marinissen EJ, Arendsen R, Bos G et al (1998) A structured and scalable mechanism for test access to embedded reusable cores. In: Proceedings of ITC, pp 284–293
9. Iyengar V, Chakrabarty K, Marinissen EJ (2002) Test wrapper and test access mechanism co-optimization for system-on-chip. J Electron Testing: Theory Appl 18:213–230
10. Goel SK, Marinissen EJ, Sehgal A, Chakrabarty K (2009) Testing of SoCs with hierarchical cores: common fallacies, test access optimization, and test scheduling. IEEE Trans Comput 58(3):409–423
11. Iyengar V, Chakrabarty K, Marinissen EJ (2003) Test access mechanism optimization, test scheduling, and tester data volume reduction for system-on-chip. IEEE Trans Comput 52:1619–1632
12. Xiaoxia W, Yibo C, Krishnendu C, Yuan X (2010) Test-access mechanism optimization for core-based three-dimensional SOCs. Microelectronics J 41:601–615
13. Roy SK, Ghosh S, Rahaman H, Giri C (2010) Test wrapper design for 3D system-on-chip using optimized number of TSVs. In: Proceedings of ISED, pp 197–202
14. Wu X, Falkenstern P, Xie Y (2007) Scan chain design for three dimensional integrated circuits (3D ICs). In: Proceedings of the 25th international conference on computer design, pp 208–214
15. Brandon N, Krishnendu C, Marinissen EJ (2012) Optimization methods for post-bond testing of 3D stacked ICs. J Electron Test 28:103–120
16. Lewis DL, Lee HHS (2009) Test strategies for 3D die-stacked integrated circuits. In: Proceedings of DATE
17. Wu X, Chen Y, Chakrabarty K, Xie Y (2008) Test-access mechanism optimization for core-based three-dimensional SOCs. In: Proceedings of ICCD, pp 212–218
18. Mallick PK, Bhoi AK, Chae GS, Kalita K (eds) (2021) Advances in electronics, communication and computing: select proceedings of ETAEERE 2020, vol 709. Springer, Berlin
19. Goel SK, Marinissen EJ (2003) SOC test architecture design for efficient utilization of test bandwidth. ACM Trans Design Autom Electron Syst 8(4):399–429
20. Varma P, Bhatia S (1998) A structured test re-use methodology for core-based system chips. In: Proceedings of international test conference (ITC), pp 294–302
21. Marinissen EJ, Kapur R, Lousberg M et al (2002) On IEEE P1500's standard for embedded core test. J Electron Test: Theory Appl 18:365–383
22. Marinissen EJ, Goel SK, Lousberg M (2000) Wrapper design for embedded core test. In: Proceedings of international test conference (ITC), Atlantic City, NJ, USA, pp 911–920
23. Kyuchull K, Saluja KK (2009) Low-area wrapper cell design for hierarchical SoC testing. J Electron Test 25:347–352
24. Whetsel L (1997) An IEEE 1149.1 based test access architecture for ICs with embedded cores. In: Proceedings of ITC, pp 69–78
25. Touba N, Pouya B (1997) Using partial isolation rings to test core based designs. IEEE Des Test Comput 14:52–59
26. Marinissen EJ, Iyengar V, Chakrabarty K (1999) A set of benchmarks. In: Proceedings of international test conference, pp 493–498

A Comprehensive Study of Edge Computing and the Impact of Distributed Computing on Industrial Automation

Akansha Singh, Atul Kumar, and Bhavesh Kumar Chauhan

Abstract The fusion of artificial intelligence, ML, and edge computing can promote a smarter version of IoT. We stand in the fourth industrial revolution, where the requirements of the IT industry are growing rapidly. The introduction of the cloud has already changed the game of data storage and infrastructure development around the globe. As most industrial computation resides on the cloud, the requirement of high speed along with high security has become the major concern. The term edge computing was coined for lowering internet latency and providing a safer, faster, and less complicated mode of computation.

Keywords Edge computing · Artificial intelligence · Cloud computing · Fog computing · Deep learning

1 Introduction

The versatility of computer technology is one of the major reasons for rapid advancement in the industry. The impact of computation can be clearly seen in sectors ranging from wireless mobiles to IoT. If we observe the history of computer science, we can classify various eras ranging from mainframes to edge computing to machine learning and deep learning [1]. Cloud computing can be defined as a shared pool of various configurations of network infrastructure and services. The technology has not only resolved the need for high storage spaces but also helped small industries grow using rented infrastructures and platforms. Cloud technology has grown at its highest pace and is currently used by almost all organizations. Cloud computing has been an integrated and crucial part of the industry for the past decade. There are two basic reasons for which the cloud has dominated the industry: firstly, because the cloud provides centralized data access to all remote users without any compulsion regarding the devices used to access the data, and secondly it

A. Singh (B) · A. Kumar · B. K. Chauhan
Department of Computer Science and Engineering, Shri Ramswaroop Memorial Group of Professional Colleges, AKTU, Lucknow, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_19



provided a rented platform where organizations pay only for the services they rent. These benefits indicate that cloud computing technology will last into the future as well. But with the changing needs of applications, there is a requirement for local computing units associated with the end device that can store and compute local data. Edge computing is a new model where real storage and computing resources are embedded in close proximity to the end devices where the data is produced, such as mobiles, laptops, sensors, robots, etc. Edge computing needs to be implemented in real-time projects [1–9]. The cloud provides some very useful features like on-demand network access, security, confidentiality, pace, rented platforms, etc. But considering the growing needs of the industry, a new term was coined, i.e., edge computing. Where cloud computing is strictly associated with big data, data storage, and data mining and warehousing, edge computing on the other hand is associated with local processing at the device end for better performance of the system [10–30]. The integration of deep learning and artificial intelligence in edge computing can mark revolutionary trends in the field of IoT. Some researchers consider edge computing a new technology, but its roots go back to the concept of Content Delivery Networks (CDN). The industry today is ready for investment and research in edge computing, where the computation of the data is done at local nodes, because the technology offers scalability, security, and lower latency. Even with the collaboration of IoT with the cloud, the cloud and fog computing models have some shortcomings. Previously, all end devices were connected to a centralized data center, or simply the cloud, that contained all the computation units, and the end devices therefore used large and expensive data connections between the edge device and the data center for all computation and storage processes.
But as the devices linked with IoT produce a large volume of data, and real-time systems need a very high pace of computation and implementation at the end node or device, the centralized system fails. This can be elaborated with the example of a nuclear power plant containing several chambers connected to a centralized data center. If any one of the chamber or pipeline sensors detects high pressure and sends a signal to the central system, by the time the centralized system detects the problem and issues an automatic shutdown, there is a possibility that the shut-off trigger is received by the edge very late, which can cause high destruction and loss. Therefore, in such real-time systems, we can place a computation unit local to the end device, reducing the latency, and the round-trip time will be significantly reduced. Even with the introduction of the edge, there still exists a need for a central storage unit, whether local or in the cloud. In non-real-time projects, a simple centralized system is also sufficient to complete the process efficiently, where there is no harm to life or damage to resources due to high latency. Also, the introduction of edge computing helps in sorting out all the irrelevant data items that are not necessarily needed in the future and should not be stored on the centralized cloud.


Fig. 1 The relationships between edge computing, cloud computing, and fog computing [1]

A merit of edge computing is that it reduces latency and processing time, along with reducing the expensive long-haul connections to processing and storage centers (Fig. 1).

2 What is Edge Computing?

As already discussed, the end devices connected to the central computational unit produce a large volume of sheer data that faces internet latency and long round-trip times for simple calculations and instructions. Fog computing was introduced to resolve such issues, but there still exists the requirement of introducing edge computing to model the data processing in a more efficient manner. In this section, we briefly observe the reasons due to which the introduction of edge computing is necessary.

2.1. Push from cloud services: It is already proven that utilizing the remote cloud for all computation and storage is an efficient way to manage data, but considering the bottleneck of the internet with bandwidth and latency, if all the computation is done at the cloud server, then the response time will be very high, causing damage to time-bound computations [3].

2.2. Pull from IoT: Sooner or later, almost all kinds of electrical devices will become part of the IoT, and they will produce a large volume of data. These devices will also work as consumers of the technology, requiring time-to-time instructions from the central data processor, such as LED lighting and sensors. These types of electronic devices connected through the internet at the edge will increase with time [3].

2.3. Change from data consumer to producer: In today's scenario, if we consider the example of a simple remote device, we can notice the amount of data being produced at the local device. This data can be sorted at the edge and then uploaded onto the cloud. This process will reduce the amount of data traffic in the network along with increasing security, as the edge can also apply digital signatures and cryptographic algorithms at the edge node only [3] (Fig. 2).

Fig. 2 The relationships between edge computing, cloud computing, and fog computing [3]
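Point 2.3's idea of applying cryptographic protection at the edge node before upload can be sketched as follows. The function names (`sign_at_edge`, `verify_in_cloud`) and the per-device key are hypothetical, and the sketch uses a standard HMAC rather than any scheme from the paper.

```python
import hashlib
import hmac
import json

def sign_at_edge(record, device_key):
    """Illustrative only: the edge node attaches an HMAC tag to a
    reading before uploading it, so the integrity work happens at the
    edge rather than in the cloud. `device_key` is a hypothetical
    per-device secret shared with the cloud."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "tag": tag}

def verify_in_cloud(message, device_key):
    """The cloud side recomputes the tag and compares in constant time."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

Because the tag is attached at the edge, the cloud can reject tampered uploads without the raw device ever talking to it directly.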

3 Motivation

In this paper, we have identified the following needs that motivate edge computing over any other computational method:

3.1. Decentralized cloud and low-latency computing: Centralized computing cannot be declared best universally. It can be the worst strategy, resulting in various problems, for applications that are geographically distributed. Computation and monitoring should always be placed nearest to the source of data to reduce latency and response time.

3.2. Dealing with data explosion and network traffic: The number of devices producing data is growing enormously each day. It is assumed that by the end of this decade, millions of zettabytes of data will be produced that need to be stored and maintained for future analysis and decision-making. In such a case, transmission of such huge data will not only produce network traffic but will also result in slow analysis. There are attempts to mitigate the energy challenge; this can be ensured by using edge computing, where we can perform analytics on the edge device itself.

3.3. Smart computation technologies: Data is usually generated by edge devices and transmitted to the cloud, where analysis and other algorithms related to data security are implemented. This in turn produces latency issues and other problems related to energy [15].

4 Opportunities in Edge Computing

4.1. Standards, Benchmarks, and Marketplace
4.2. Frameworks and Languages
4.3. Lightweight Libraries and Algorithms

5 Cloud Computing

Computing and analyzing data within an organization using its own infrastructure, like desktops or laptops, is neither practical nor feasible. So, computation of the data can be offloaded from the client side to the server side, which is practical, feasible, and also economical. Cloud computing, like any other technology, was introduced to solve this problem faced by industries and organizations. The technology is entirely based on computation of data online using global connections like the internet, and the management of the data is done by an entirely separate industry, called cloud service providers (CSPs), for the ease and growth of organizations. Cloud computing has many advantages and benefits over traditional computing methodologies, which are as follows:

(a) Pay-as-you-go service: Also known as "pay only for used services," this is one of the most influential features behind the popularity of cloud computing. Cloud services allow users to pay only for the services they subscribe to. No extra charges are processed, and no pressure is put on the user to utilize any particular service.

(b) Recovery from natural disaster: Natural disasters are entirely nature's call, and nobody can enforce control over them. Natural disasters are highly damaging to every phase of an organization.

(c) Economic: Cloud computing is an economically feasible technology, as it provides a mechanism to enroll infrastructure according to an organization's dynamic requirements. The cloud provides an easy, cost-efficient way to achieve maximum resource utilization with minimum assets and capital.

(d) Wide insight: Insight here refers to the ability and vision of making future decisions and plans for the growth of the organization and the individual. The cloud provides a summarized and structured form of information out of the random historic data stored in the cloud, which not only helps in decision-making but also saves the industry from damaging plans.

(e) Sustainable growth: Sustainable development refers to the development of society without damaging the environment, saving nature's gifts for future generations. The cloud is environmentally sustainable, without causing damage to Mother Nature.

(f) Security from internal suspicious agents: It is important to secure a company's data from external attackers, but it is also necessary to secure the data from internal traders of data. The cloud is a highly secure system with strong user authentication.

(g) Automatic updating of software: As technology, especially the IT sector, is rapidly growing, new software is introduced each day, and hence it is a special task to ensure updating of the applications used by the entire organization. In the cloud, the user need not attend to such a task, as the applications on the cloud are constantly updated from time to time.

(h) Mobility of data: The key factor for the growth of any organization is directly linked with the appropriate technology used for any particular task. For instance, there exist multiple types of task forces in an organization that require the data set in mobile form. As the cloud provides accessibility of the stored data from any device, be it a cell phone, tablet, laptop, or desktop, it is easy to access the data stored in a cloud.

(i) Flexibility in terms of infrastructure: This benefit refers to the ease with which the user can add or remove any service, application, or infrastructure from its domain when it is needed and when it is not needed, respectively.

6 Limitations of Cloud Computing

6.1. Downtime and Dependability
6.2. Security and Privacy
6.3. Vulnerability to Attack
6.4. Limited Control and Flexibility

7 Key Attributes of Edge Computing

7.1. Connectivity: Connectivity is the basis of edge computing. Edges need data from a very densely connected system.

7.2. First entry of data: As a bridge between the physical and digital worlds, edge computing is the first entry point of real-time data. This is important for applications on the edge, such as predictive maintenance, asset efficiency, and other applications for various data analysis/processing.

7.3. Constraint: In edge computing, all the various hardware and software components need to adapt to the conditions and constraints of the environment, for instance anti-dust, anti-explosion, and anti-vibration technology, and protection against current/voltage fluctuations.

7.4. Distribution: Edge computing delivers distributed computing and storage, distributed intelligence, and distributed security capabilities.

7.5. Convergence: The convergence of Operational Technologies (OT) and Information and Communication Technologies is the basis for digitization in various industrial and non-industrial environments.

8 Edge Computing Architecture

Edge computing consists of three layers: IoT devices, edge nodes, and cloud servers. The basic architecture of edge computing is depicted in Fig. 3.

Fig. 3 Architecture of edge computing [7]
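A minimal sketch of the three-tier flow, with hypothetical function names: the device tier produces raw readings, the edge node filters them, and only the filtered remainder reaches the cloud tier.

```python
def edge_filter(readings, threshold):
    """Edge-node tier: keep only the readings worth sending upstream."""
    return [r for r in readings if r > threshold]

def cloud_summary(filtered):
    """Cloud tier: stand-in for long-term storage and heavy analytics."""
    return {"stored": len(filtered), "max": max(filtered, default=None)}

# Device tier produces raw sensor data; the edge keeps cloud traffic small.
raw = [12, 98, 15, 102, 7]
report = cloud_summary(edge_filter(raw, threshold=90))
```

Here five raw readings shrink to two before crossing the network, which is the traffic reduction the architecture is designed around.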

9 Case Study of Edge Computing

9.1. Smart city: If we consider the concept of a smart city, we may notice that each device established in the city will be connected to the internet for fast and easy monitoring. If we consider all the devices as end nodes of the IoT network and add a small storage unit at each end that can filter and preprocess the data, then the computation time and the storage space on the cloud can both be reduced at a much larger level [12].

9.2. Smart parking/smart interconnected vehicles: Vehicles are equipped with web access that allows one vehicle to connect with other vehicles on the road. The connection scenario can be vehicle to vehicle, vehicle to access point, or access point to access point. Deploying Mobile Edge Computing (MEC) units along the street can enable two-way communication between the moving vehicles. One vehicle can communicate with other nearby vehicles and inform them of any expected hazard or traffic jam, and alert bikers to the presence of pedestrians. Moreover, Mobile Edge Computing enables adaptable, reliable, and distributed setups that are synced with the nearby sensors and refresh the database. We can also define a smart parking system where drivers can find on their smartphones a vacant place to park their vehicle.

9.3. Smart nuclear power plant: Consider a nuclear power plant with a terminal plant whose temperature rise may lead to a hazardous accident and which needs to be shut down immediately once the threshold is reached. If we connect the entire system to the cloud directly, then the chances of an accident are higher due to high latency in the decision-making. If instead we connect the entire plant to individual edges, we can control and minimize the chances of accidents.
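The latency argument in case 9.3 can be made concrete with a toy comparison; the millisecond figures below are illustrative placeholders, not measured values.

```python
def edge_controller(pressure, limit, decision_latency_ms=2):
    """Trip logic co-located with the sensor: the decision is made
    locally, within a few milliseconds (illustrative figure)."""
    action = "SHUTDOWN" if pressure > limit else "OK"
    return action, decision_latency_ms

def cloud_controller(pressure, limit, round_trip_ms=250):
    """Same rule evaluated in a remote data center: correct, but the
    decision only arrives after a WAN round trip (illustrative figure)."""
    action = "SHUTDOWN" if pressure > limit else "OK"
    return action, round_trip_ms
```

Both controllers reach the same decision; the difference is entirely in how long the shut-off trigger takes to reach the actuator, which is what makes the edge placement preferable for safety-critical loops.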

10 Applications of Edge Computing

Edge computing can be considered a new era of computation that can help the evolving IT sector, providing multiple benefits as follows:

10.1. Edge computing and machine learning: Machine learning is a sub-branch of data science in which we train a model from a given set of input parameters, which the model analyzes to make a particular decision. If we combine all the data gathered through edge computing and provide it as input to a machine learning algorithm, the accuracy of the result may improve significantly.

10.2. Edge computing with IoT: IoT, i.e., the Internet of Things, refers to connecting all electronic units to the internet for the benefit and ease of the user. Automatic understanding of IoT sensor data is desired in several verticals, such as wearables for healthcare, smart city, and smart grid. The type of analysis performed on these data depends on the specific IoT domain.

10.3. Edge computing for natural language processing: Deep learning has also become popular for natural language processing tasks, including speech synthesis, named entity recognition (understanding different parts of a sentence), and machine translation (translating from one language to another). For conversational artificial intelligence, latency on the order of hundreds of milliseconds has been achieved in recent systems.

10.4. Edge for computer vision: Image classification and object detection are fundamental computer vision tasks needed in a number of specific domains, such as video surveillance, object counting, and vehicle detection. Such data naturally originate from cameras located at the network edge.

10.5. Edge computing with deep learning: The entire concept of edge computing was recently coined to overcome the limitations of cloud computing. This is ensured via certain characteristics, like performing data processing tasks at the edge of the network. This new generation has shown a significant reduction in system running time, memory cost, and energy consumption for a broad spectrum of big data applications as compared to conventional cloud computing. To cope with the huge network traffic and high computational demands, as well as to improve the system response time, we propose Edge Learning, a complementary service to existing cloud computing platforms.

11 Cloud versus Fog versus Edge Computing

Cloud computing is the delivery of different services through the internet; these resources include tools and applications like data storage, servers, databases, networking, and software. Fog computing, in contrast, supports a decentralized computing structure that works as a mediator between the cloud and local edge devices (Table 1).

12 Conclusion and Future Scope

In this paper, we have briefly discussed the fundamental concepts and architecture of edge computing in comparison with cloud and fog computing. We have also elaborated the challenges and opportunities related to edge computing, and analyzed some case studies in the same context. We conclude that the edge can be considered an alternative solution to the cloud in all IoT-based units and in all sectors where decision-making is crucial in terms of time. It is fair to consider edge computing a powerful technology for the upcoming future. In future work, we plan to deploy advanced learning techniques


Table 1 Cloud versus fog versus edge computing

Architecture
- Cloud computing: central processing-based model
- Fog computing: extends the cloud to the edge of the network
- Edge computing: can be established with or without cloud and fog computing

Pros
- Cloud computing: used to store very large amounts of data on a remote device
- Fog computing: more scalable than edge computing, and also cost-efficient
- Edge computing: used for end processing; classifies the data so that only important data is stored on the cloud; low response time

Cons
- Cloud computing: latency and high response time; not efficient for real-time systems
- Fog computing: depends on many links for transmission of data; less sensitive than the edge, but sends the non-real-time data for storage in the cloud
- Edge computing: less storage space; less scalable than fog computing; intelligence and computing are framed at the end node itself

on edge servers, and compare the performance of different techniques. Edge computing can be used to reduce latency and network issues. The combination of edge computing with IoT, cloud, and data science is the future of the IT world.

References

1. Stankovski S (2020) The impact of edge computing on industrial automation. In: 19th international symposium INFOTEH-JAHORINA, IEEE
2. Satyanarayanan M (2016) The emergence of edge computing. Comput Sci Eng 50:30–39
3. Shi W (2016) Edge computing: vision and challenges. IEEE Internet of Things Journal 3(5):637–646
4. Kim OTT, Tri ND, Tran NH, Hong CS et al (2015) A shared parking model in vehicular network using fog and cloud environment. In: Network operations and management symposium (APNOMS), IEEE
5. Truong NB, Lee GM, Ghamri-Doudane Y (2015) Software defined networking-based vehicular ad hoc network with fog computing. In: Integrated network management (IM), IEEE International
6. Dolui K (2019) Comparison of edge computing implementations: fog computing, cloudlet and mobile edge computing. In: Jiasi Chen, Deep learning with edge computing: a review, IEEE
7. Goodfellow I, Bengio Y, Courville A, Bengio Y (2016) Deep learning, vol 1. MIT Press, Cambridge
8. Satyanarayanan M (2017) The emergence of edge computing. Computer (Long Beach Calif) 50:30–39
9. Bizanis N, Kuipers F (2016) SDN and virtualization solutions for the internet of things: a survey. IEEE Access 4:5591–5606
10. Li J, Peng M, Cheng A, Yu Y, Wang C (2014) Resource allocation optimization for delay-sensitive traffic in fronthaul constrained cloud radio access networks. IEEE Syst J 1–12

A Comprehensive Study of Edge Computing …


11. ETSI (2014) Mobile-edge computing introductory technical white paper. White paper, mobile-edge computing industry initiative
12. Mao Y, You C, Zhang J, Huang K, Letaief KB (2017) A survey on mobile edge computing: the communication perspective. IEEE Commun Surv Tutorials 19:2322–2358
13. Nayak SR, Sivakumar S, Bhoi AK, Chae GS, Mallick PK (2021) Mixed-mode database miner classifier: parallel computation of graphical processing unit mining. Int J Electr Eng Educ 0020720920988494
14. Kaur K, Dhand T, Kumar N, Zeadally S (2017) Container-as-a-service at the edge: trade-off between energy efficiency and service availability at fog nano data centers. IEEE Wireless Commun 24:48–56
15. Mishra S, Mishra D, Mallick PK, Santra GH, Kumar S (2021) A novel borda count based feature ranking and feature fusion strategy to attain effective climatic features for rice yield prediction. Informatica 45(1):13–31
16. Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I, Zaharia M (2010) A view of cloud computing. Commun ACM 53(4):50–58
17. Huang J, Qian F, Gerber A, Mao ZM, Sen S, Spatscheck O (2012) A close examination of performance and power characteristics of 4G LTE networks. In: Proceedings of the 10th international conference on mobile systems, applications, and services. ACM, pp 225–238
18. Satyanarayanan M, Chen Z, Ha K, Hu W, Richter W, Pillai P (2014) Cloudlets: at the leading edge of mobile-cloud convergence. In: 2014 6th international conference on mobile computing, applications and services (MobiCASE), IEEE, pp 1–9
19. Bonomi F, Milito R, Zhu J, Addepalli S (2012) Fog computing and its role in the internet of things. In: Proceedings of the first edition of the MCC workshop on mobile cloud computing, ACM, pp 13–16
20. Kistler JJ, Satyanarayanan M (1992) Disconnected operation in the Coda file system. ACM Trans Comput Syst 10:3–25
21. Elijah (2017) Cloudlet-based edge computing. http://elijah.cs.cmu.edu/. Accessed 19 July 2017
22. Dilley J et al (2002) Globally distributed content delivery. IEEE Internet Comput 6:50–58
23. O'Regan G (2012) A brief history of computing. Springer, London
24. Satyanarayanan M (2017) The emergence of edge computing. Computer 50:30–39
25. Sadeghi A, Sheikholeslami F, Giannakis GB (2018) Optimal and scalable caching for 5G using reinforcement learning of space-time popularities. IEEE J Sel Top Signal Process 12(1):180–190
26. Cucinotta T et al (2009) A real-time service-oriented architecture for industrial automation. IEEE Trans Industr Inf 5(3):267–277
27. Stankovski S, Ostojić G, Zhang X (2016) Influence of industrial internet of things on mechatronics. J Mechatron Autom Ident Technol 1(1):1–6
28. Khan WZ, Ahmed E, Hakak S, Yaqoob I, Ahmede A (2019) Edge computing: a survey. Futur Gener Comput Syst 97:219–235
29. Buyya R, Srirama SN (eds) (2019) Fog and edge computing: principles and paradigms. Wiley, London
30. Varghese B, Wang N, Nikolopoulos DS (2017) Feasibility of fog computing. Springer, London

Optimizing Approach Towards Fibre Refining and Improved Fibre Quality-Development of Carrier Tissue Paper

Sanjeev Kumar Jain, Dharam Dutt, R. K. Jain, and A. P. Garg

Abstract This study is focused on the development of carrier tissue paper through selection of a suitable raw material fibre furnish mix, application of enzymatic treatment to the pulp, and optimization of the degree of mechanical treatment (refining) of the pulp. Compared to other grades of tissue paper, this specialized grade has some significant unique characteristics such as uniform porosity, controlled absorbency, and high wet and dry strength. Wet strength additives were applied to both hardwood and softwood pulp. The pulps were enzymatically treated as an energy-saving initiative and to keep the fibre quality intact. To achieve the required properties in the developed tissue paper, bleached hardwood pulp was blended with softwood pulp in various proportionate ratios (i.e. 50:50, 60:40, 70:30 and 80:20). Properties of hand sheets prepared from 100% softwood pulp and 100% hardwood pulp were compared with those of hand sheets prepared from the various mixed-grade pulps. Based on the test results, a blend of 50% softwood pulp and 50% hardwood pulp was found to be the most suitable for further processing into carrier tissue paper. The mixed pulp was then refined in a PFI mill to achieve the desired freeness, and hand sheets of 15 gsm were prepared. The strength properties of the paper were investigated and analysed. A linear relationship between refining intensity and the mechanical properties of the paper was observed; in contrast, the relationship between refining intensity and the porosity of the tissue paper was non-linear. The higher the refining intensity, the higher the mechanical properties and absorbency, and the lower the porosity. Enzymatically treated softwood pulp required 24.6% fewer revolutions to reach the same freeness as untreated pulp, and a 22.1% reduction in the number of PFI mill revolutions was observed for refining of hardwood pulp after application of a commercial cellulase refining enzyme. On refining the mixed pulp to 40 °SR, tensile strength was 930 N/m and wet strength reached 1650 gmf/15 mm; porosity of the developed paper was found to be 850 ml/min. Combining all these factors, carrier tissue was developed at lab scale with all the required attributes incorporated, superior to the benchmarked sample.

Keywords Carrier tissue paper · Cellulase refining enzyme · Refining · Porosity · Wet strength · Tensile strength · Absorbency

S. K. Jain (B) · R. K. Jain · A. P. Garg
Shobhit Institute of Engineering and Technology (Deemed To Be University), N-58, Modipuram, Meerut, Uttar Pradesh, India
R. K. Jain e-mail: [email protected]
A. P. Garg e-mail: [email protected]
D. Dutt
Department of Paper Technology, Indian Institute of Technology Roorkee, Saharanpur Campus, Saharanpur, Uttar Pradesh, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_20

1 Introduction

Tissue paper is a lightweight paper that can be made from both virgin and recycled paper pulp. It is produced on a paper machine with a single large steam-heated drying cylinder (Yankee dryer) fitted with a hot air hood. The Yankee cylinder is sprayed with adhesives to make the paper stick. The Yankee's "doctor blade," which scrapes the dry paper off the cylinder surface, is responsible for creping. The strength of the adhesive, the geometry of the doctor blade, the speed difference between the Yankee and the pope reel of the paper machine, and the qualities of the pulp are the factors that influence the creping process.

Tissue papers are used in a wide range of situations; they are so common that they can be seen anywhere from the dining table to the craft room. In the blazing sun, people enjoy using disposable paper napkins composed of tissue and scented with their favourite fragrance. A tissue napkin is the quickest way to clean dust, filth, and grease from the face in a few moments, and many women use one to remove their makeup when they are too busy for the whole cleansing process [1]. The properties of the many tissue varieties, such as hygienic tissues, facial tissues, paper towels, wrapping tissues, toilet tissues, and table napkins, are controlled by pulp quality, creping, and additives, both in the base paper and as a coating [2].

Carrier tissue is used as a transfer and cover sheet for absorbent fluff pulp in the manufacturing of incontinence and feminine hygiene products. Carrier tissue paper is primarily divided into two types:

I. High Porosity Carrier Tissue: designed to allow simple air movement, resulting in more precise fluff pulp transfer management.
II. Low Porosity Carrier Tissue: used when an adhesive keeps the fluff in place throughout the manufacturing process.

The scope of carrier tissue paper covers baby diapers, adult diapers, sanitary napkins, hospital underpads, and pet pads, where it serves as a carrier for the pad (the pad is the product's absorbent core) and aids in the reduction of pin holes formed during the compression step of the continuous drum forming system. This shields the inner plastic from damage [3]. It is also noteworthy that, under the new government policy of empowerment of the girl child, considerable thrust is being given to hygienic products; carrier tissue used in sanitary napkins is one of the pillars of this policy. For all the above uses, carrier tissue paper must include the following key technical properties:

1. Excellent dry and wet tensile strength.
2. Uniform porosity along with high elasticity, the most important features of this grade of tissue product.
3. Softness and fleeciness, as in the premium wrap tissue paper used in baby diapers [4].

Carrier tissues have some distinguishing features, such as high absorbability with optimum porosity, high wet strength and bulkiness, which differentiate them from other tissue paper grades. To achieve these properties, proper selection of the raw material furnish, optimization of the mechanical action on the fibres, and formulation of a proper wet end recipe are the prime requirements [5].

A variety of the ultimate features of paper, such as absorptivity, opacity, strength, and ink–paper interactions, are determined by its porosity. A variety of procedures can be used to determine porosity and pore size distribution, although the majority of these techniques have drawbacks. The packing properties of fibres, fines and pigments, chemical additives, and processing methods determine the porosity. Paper porosity is particularly critical for paper goods that must absorb liquids, such as tissue and towel grades. The porosity of the surface layer has a significant impact on printing; paper porosity also affects barrier and strength properties, and ink jet printing performance is influenced by the porosity of the top surface of the paper. Therefore, the porosity of paper is an important topic to study [6]. A number of methods for determining the porosity or void fraction of paper have been reported [7]. In basic terms, porosity is defined as the volume of voids divided by the total volume of the sample. However, because of the complicated nature of paper, measuring either of these quantities is difficult. Other methods fill the voids with a gas or liquid and measure the amount of fluid taken up by the paper sample: the sample is placed in a vacuum, and the amount of fluid that fills the gaps is measured. Wetting fluids such as water or silicone oils can be used to get a good estimate, although their interaction with the fibres raises issues [6]. A paper's porosity is determined by the various steps of the papermaking process. Increased fibre refining leads the fibres to link more securely and tightly together, making the paper denser and reducing the network of air passages and hence the porosity. Surface sizing, coating, calendering, and supercalendering all work to seal and/or compress surface fibres, lowering the amount of dust on the surface.
Porosity influences how quickly and completely liquids are absorbed into a paper, which is generally accomplished through capillary action. Paper with a high porosity promotes absorbency, but it also increases the chance of show-through and/or strike-through in various processes. Low-porosity paper increases liquid holdout while also increasing the chance of smearing or leakage. The porosity of paper is determined quantitatively either as the length of time it takes for a quantity of air to pass through a paper sample (using a Gurley densometer) or as the rate of air passage through a sample (using a Sheffield porosimeter) [8].

Pulp refining is a mechanical process that improves the papermaking qualities of pulp fibres. One refining effect is the "pumping" of water into the cell wall, which makes it much more flexible. Fibrillation, the exposing of cellulose fibrils to increase the surface area of fibres and hence improve fibre–fibre bonding in the finished sheet, is the second effect. Delamination of the cell wall, such as between the primary and secondary layers, is a third effect that improves fibre flexibility. By expanding the surface area of the fibres and making them more pliable to curve around one another, refining improves the strength of fibre-to-fibre connections, resulting in a denser sheet with increased bonding surface area. The majority of paper's tensile qualities improve with pulp refining since they rely on fibre-to-fibre bonding [9].

Enzyme-assisted refining is a green solution that minimises energy usage in the papermaking process: mechanical refining carried out after the pulp has been pre-treated with enzymes such as cellulases and hemicellulases. It not only saves energy but also improves the quality of the end product. Enzymes improve the beatability of pulp at the same refining degree (°SR), allowing for shorter refining times and improved paper qualities.
During enzyme-assisted refining, choosing the right enzyme, optimising the enzyme dose, and choosing the right reaction time are the most important aspects in lowering energy consumption and improving pulp quality [10]. In this study, carrier tissue paper was developed by proper selection of raw material furnish mix and further refining it to get required properties according to the end-use.
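The void-fraction definition of porosity discussed above can be made concrete. As a rough illustration (not part of this study), the void fraction of a sheet can be estimated from its bulk, assuming a typical literature value of about 1.5 g/cm³ for the density of the cellulose cell wall:

```python
def porosity_from_bulk(bulk_cc_per_g: float, fibre_density_g_per_cc: float = 1.5) -> float:
    """Estimate void fraction as 1 - (apparent sheet density / fibre wall density).

    bulk_cc_per_g: sheet bulk (cm^3/g), the reciprocal of apparent density.
    fibre_density_g_per_cc: assumed cell-wall density of cellulose (illustrative).
    """
    apparent_density = 1.0 / bulk_cc_per_g  # g/cm^3
    return 1.0 - apparent_density / fibre_density_g_per_cc

# A bulk of 1.73 cc/g (the benchmark sample value reported later in Table 1)
# gives a void fraction of roughly 0.61
print(round(porosity_from_bulk(1.73), 3))  # -> 0.615
```

This is only a geometric estimate; the Gurley and Sheffield air-flow measurements used in the paper characterize air permeability rather than void fraction directly.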

2 Materials and Methodology

2.1 Materials

The experiments were conducted in a well-equipped research and development laboratory using bleached hardwood and softwood pulp, applying strength additives, refining the pulps with the aid of a commercial cellulase refining enzyme, blending them in various proportionate ratios, and further optimizing the degree of mechanical treatment to achieve the required properties.


2.2 Methodology

In this study, commercial carrier tissue paper samples were collected from two different locations. The properties of the collected samples were investigated thoroughly and benchmarked for reference; Table 1 presents the paper properties of the samples collected from location A and location B. Strength additives, namely dry strength resin (DSR) and wet strength resin (WSR), were added to both hardwood and softwood pulp, which were then refined after treatment with a commercial cellulase refining enzyme. The enzyme functions like a catalyst, and its action continues from the pre-refining application into a post-refining effect: it swells the fibre walls prior to refining, then continues to work on the fibres, improving drainage after refining. It also significantly reduces the specific energy consumption of the refiner. Various experiments were carried out to mix the pulp samples in different proportionate ratios and to select the mixed pulp most suitable for developing a product with characteristics closest to the benchmarked sample. The refining intensity on the mixed pulp was then optimized. A sufficient quantity of bleached hardwood and softwood pulp was selected for the experiments. Carrier tissue paper having high wet and dry strength along with controlled absorbency and porosity, added to softness, can only be developed by the right selection of fibre and by subjecting the pulp to the right amount of refining. The long and strong fibres of softwood pulp contribute to the strength properties and to absorbency, owing to the longer capillaries in softwood fibres, while optimum refining helps to incorporate the required properties. Dry strength resin at a dosage of 3 kg/ton of pulp and WSR at a dosage of 20 kg/ton of pulp were added to both the bleached hardwood and softwood pulps.
Table 1 Comparative test results of commercial samples collected from different locations

| S. No | Parameters | UOM | Carrier tissue sample, location A | Carrier tissue sample, location B |
| 1 | Substance | gsm | 15 | 15 |
| 2 | Bulk | cc/gm | 1.73 | 1.86 |
| 3 | Tensile strength | N/m | 930 | 950 |
| 4 | Wet strength | gmf/15 mm | 1380 | 1420 |
| 5 | Porosity | ml/min | 860 | 840 |
2.2.1 Enzymatic Treatment of Pulp

A broad range of microorganisms, including fungi, bacteria, and actinomycetes, secrete a complex group of enzymes known as cellulases. Cellulase is not a single enzyme but a collection of enzymes consisting of endoglucanases and exoglucanases, including cellobiohydrolases and β-glucosidase. Cellulase can be used as a potential tool for modification of pulp properties, which further helps to save energy during the refining process [11]. Cellulase refining enzyme at a dosage of 30 g/ton of pulp was fed to both pulp batches, i.e. 100% softwood pulp and 100% hardwood pulp. The use of enzymes in the pulp and paper industry has proved highly promising for improving the paper production process, making it economically sustainable while reducing environmental impact at the same time. This is achieved by reducing the amount of chemicals and energy required for the modification of fibres [11].

2.2.2 Selection of Suitable Mixed Pulp Composition

The initial freeness of the bleached hardwood and softwood pulps was 17 °SR and 15 °SR, respectively. Hardwood pulp was refined to a freeness of 25 °SR, and softwood pulp to 40 °SR, using a PFI mill following the standard method ISO 5264-2. Hand sheets of 15 gsm were prepared separately for pure hardwood pulp and softwood pulp using a hand sheet maker according to TAPPI T 205 [12]. The hand sheets of 100% hardwood pulp and 100% softwood pulp were investigated for physical strength properties such as dry and wet strength, porosity and bulk. After analysing the properties of the hardwood and softwood pulps, the two were mixed in different proportions, i.e. in the ratios 50:50, 60:40, 70:30 and 80:20. The mixing ratios of hardwood and softwood pulp are given in Table 3. The properties of the mixed pulps were then studied in detail. The mechanical properties of the mixed pulp consisting of 50% hardwood pulp and 50% softwood pulp were found to be very close to those of the benchmarked sample; based on the study conducted, this pulp composition was found to be the most suitable for developing carrier tissue paper.

Optimization of mechanical action

One of the most important parameters for developing carrier tissue paper is the proper degree of refining. It is the most efficient procedure in the papermaking process for improving the mechanical properties of the final product. The initial freeness (°SR) of the selected mixed pulp was checked by the Schopper-Riegler method. The selected mixed pulp, i.e. the 50% softwood and 50% hardwood mix, was then refined to various degrees of freeness in the PFI mill. Hand sheets were prepared from the refined pulp and tested for different physical strength and surface properties.


Bulk of the sheet was determined according to ISO 12625-3:2014 [13], and tensile strength following the ISO 1924-2:2008 test method [14]. Wet tensile strength of the hand sheets was determined as per test method ISO 12625-5:2016 [15].

2.3 Analysis

Refining of the pulps was carried out in the PFI mill. The initial freeness of the softwood pulp was 15 °SR; it was refined to 40 °SR following the standard method ISO 5264-2, while the freeness of the hardwood pulp was raised from 17 °SR to 25 °SR. It was noticed that the number of revolutions required to raise the freeness per unit °SR was far greater for softwood than for hardwood. This reflects the greater fibre length and strength of softwood relative to hardwood, an intrinsic property that further contributes to the strength and absorbency of carrier tissue paper. Separate hand sheets were made from each set of mixed pulp and their properties analysed in detail. The properties of the pulp mix consisting of 50% hardwood and 50% softwood were found to be closest to those of the benchmarked sample. This mixed pulp was then subjected to various degrees of refining to enhance the properties required in carrier tissue paper, and hand sheets of the refined pulp sets were further analysed for bulk, dry tensile strength, wet tensile strength and porosity.

3 Results and Discussion

The properties of the bleached hardwood and softwood pulps used for the production of carrier tissue were studied. The properties of hardwood and softwood pulp at different proportionate ratios (50:50, 60:40, 70:30 and 80:20) and at various freeness levels were compared with the competitor samples received from locations A and B. Bulk, breaking length and porosity of the mixed pulp samples were examined. Refining is the "backbone" process of tissue paper manufacturing. The properties of hand sheets at 40 °SR with the 50:50 (hardwood and softwood) combination are comparable with the competitor's samples. The refining enzyme not only saved refining energy but also enhanced the pulp properties. The improvement caused by the refining of pulp is mainly attributed to enhanced density and improved bonding between fibres.


Table 2 Impact of application of refining enzyme on refining conditions

| S. No | Parameters | UOM | Softwood pulp, blank | Softwood pulp, enzymatic | Hardwood pulp, blank | Hardwood pulp, enzymatic |
| 1 | Initial freeness | °SR | 15 | 15 | 17 | 17 |
| 2 | Final freeness | °SR | 40 | 40 | 25 | 25 |
| 3 | Revolutions | RPM | 14,000 | 10,550 | 280 | 218 |
| 4 | pH | - | 6.8 | 6.4 | 6.5 | 6.0 |
| 5 | Temperature | °C | Ambient | 50 | Ambient | 50 |
Selection and analysis of suitable pulp mix composition: Separate hand sheets were prepared after addition of DSR at a dosage of 3 kg/ton of pulp and WSR at a dosage of 20 kg/ton of pulp to both bleached hardwood and softwood pulp. Both pulps were refined after treatment with a commercial cellulase refining enzyme. Refining conditions before and after enzyme application were compared; the effect on refining conditions is shown in Table 2. After enzymatic treatment there was a significant decrease in the number of revolutions required to refine both hardwood and softwood pulp, thus saving specific energy. For softwood there was a 24.6% reduction in the number of PFI revolutions required to achieve equal freeness, while for hardwood a 22.1% decrement was noticed. This further decreases the specific energy consumption of the refiner. Besides decreasing the energy requirement, fewer revolutions also preserve fibre quality and fibre length, maintaining the capillary action of the fibres and thereby enhancing the absorbency of the pulp.

After refining, bleached hardwood pulp was mixed with bleached softwood pulp in the proportionate ratios shown in Table 3.

Table 3 Different ratios of hardwood pulp and softwood pulp

| Hardwood pulp (HW) (%) | Softwood pulp (SW) (%) |
| 50 | 50 |
| 60 | 40 |
| 70 | 30 |
| 80 | 20 |

The properties of hand sheets prepared from the different mixed pulps and from the pure pulps were analysed and compared.

Table 4 Comparative test results of mixed pulp in different proportionate ratios

| S. No | Parameters | UOM | Set I: 100% SW | Set II: 100% HW | Set III: 50% HW + 50% SW | Set IV: 60% HW + 40% SW | Set V: 70% HW + 30% SW | Set VI: 80% HW + 20% SW |
| 1 | Freeness | °SR | 40 | 25 | 35 | 35 | 35 | 35 |
| 2 | Substance | gsm | 15 | 15 | 15 | 15 | 15 | 15 |
| 3 | Bulk | cc/gm | 1.87 | 1.60 | 1.80 | 1.73 | 1.67 | 1.60 |
| 4 | Tensile strength | N/m | 1010 | 580 | 730 | 660 | 570 | 500 |
| 5 | Wet strength | gmf/15 mm | 1640 | 520 | 1340 | 1160 | 1080 | 930 |
| 6 | Porosity | ml/min | 1800 | 1500 | 1200 | 1040 | 910 | 790 |

It can be concluded from Table 4 that softwood pulp fibres have much better fibre properties than hardwood pulp fibres; accordingly, as the softwood fraction in the mix decreased, the strength properties declined as well. The pulp mix with equal proportions of softwood and hardwood had the highest tensile strength among the blends, 730 N/m, which gradually decreased to 500 N/m for the mix of 20% softwood and 80% hardwood. The most appropriate tensile strength was found in the mixed pulp comprising 50% hardwood and 50% softwood. The wet strength of 100% softwood pulp was the highest overall, although the mix of 50% softwood and 50% hardwood had the most appropriate wet strength (1340 gmf/15 mm) and porosity (1200 ml/min). All the properties of the 50:50 mixed pulp were found to be very close to the benchmarked sample. The properties are graphically represented in Graphs 1, 2 and 3.

Graph 1 Comparative test results of tensile strength of 100% softwood and hardwood pulp versus mixed pulp in different proportionate ratios

Graph 2 Comparative test results of wet strength of 100% softwood and hardwood pulp versus mixed pulp in different proportionate ratios

Graph 3 Comparative test results of porosity of 100% softwood and hardwood pulp versus mixed pulp in different proportionate ratios

Optimization of mechanical action on pulp: The primary effect of refining on fibre is internal fibrillation. The specific surface area of cellulosic fibres is increased by swelling, which promotes inter-fibre bonding and thereby significantly improves the physical strength of the paper. Refining also produces multifold structural changes in the fibres, including external fibrillation, fines generation and straightening of fibres; changes in cellulose crystallinity and in the surface composition of pulp fibres are likewise caused by mechanical refining. Refining enhances fibre flexibility and helps produce a denser paper, so bulk, opacity and porosity decrease [10]. The pulp mix consisting of equal fractions of hardwood and softwood was therefore refined to a higher intensity to achieve the required strength properties, absorbency and porosity of the tissue paper. The comparative test results depicted in Table 5 highlight the enhancement in tensile strength and the compactness of fibre orientation resulting from optimum refining of the mixed pulp. The results indicate that, with increasing refining intensity, tensile strength increased by 27.4% and wet strength by 23.1%, while


Table 5 Comparative results of mixed pulp (50% SW + 50% HW) at different refining intensities

| S. No | Parameters | UOM | Before 2nd stage refining | After 2nd stage refining |
| 1 | Freeness | °SR | 35 | 40 |
| 2 | Substance | gsm | 15 | 15 |
| 3 | Bulk | cc/gm | 1.80 | 1.73 |
| 4 | Tensile strength | N/m | 730 | 930 |
| 5 | Wet strength | gmf/15 mm | 1340 | 1650 |
| 6 | Porosity | ml/min | 1200 | 850 |
porosity decreased by 29.1%. These results are graphically represented in Graphs 4, 5 and 6. The attributes of the carrier tissue paper developed in the lab are completely in alignment with the characteristics of the benchmarked sample, as can be concluded from the comparative test results highlighted in Table 6.
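The reported percentage changes can be checked against Table 5 (the computation is ours; the porosity figure comes out as 29.2% by straightforward rounding, close to the 29.1% quoted in the text, which appears to be truncated):

```python
# Mixed-pulp properties before (35 degSR) and after (40 degSR) 2nd stage refining, Table 5
before = {"tensile": 730, "wet": 1340, "porosity": 1200}
after = {"tensile": 930, "wet": 1650, "porosity": 850}

for prop in before:
    change = (after[prop] - before[prop]) / before[prop] * 100.0
    print(f"{prop}: {change:+.1f}%")
# tensile: +27.4%, wet: +23.1%, porosity: -29.2%
```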

Graph 4 Impact of refining on tensile strength of carrier tissue paper (730 N/m before vs 930 N/m after 2nd stage refining)

Graph 5 Impact of refining on wet strength of carrier tissue paper (1340 gmf/15 mm before vs 1650 gmf/15 mm after 2nd stage refining)

Graph 6 Comparative results of porosity at different refining intensity (1200 ml/min before vs 850 ml/min after 2nd stage refining)

Table 6 Comparative test results of carrier tissue samples collected from two different locations and the carrier tissue paper developed in the lab (50% softwood pulp + 50% hardwood pulp)

| S. No | Parameters | UOM | Location A | Location B | Lab-developed |
| 1 | Substance | gsm | 15 | 15 | 15 |
| 2 | Bulk | cc/gm | 1.73 | 1.86 | 1.73 |
| 3 | Tensile strength | N/m | 920 | 940 | 930 |
| 4 | Wet strength | gmf/15 mm | 1380 | 1420 | 1650 |
| 5 | Porosity | ml/min | 840 | 850 | 860 |
4 Conclusion

A sustainable, cost-effective and energy-efficient route is the most preferred way of producing high-quality paper. The quality of the cellulosic fibre and the papermaking operations directly affect the quality of the paper produced. For good-quality paper, the fibres must be matted into a uniform sheet and must develop strong bonds at their points of contact. Refining of pulp can alternatively be defined as a fibre-modifying process that alters the structure of the pulp fibres in order to achieve the required paper properties or modify properties of the end product. This study is based on proper selection and blending of the furnish mix, application of a commercial cellulase refining enzyme, and finally optimization of the mechanical action on the pulp fibres. The effects of furnish blending, enzymatic treatment and pulp refining on the wet tensile strength, absorbency and porosity of the developed tissue paper were studied in detail. The laboratory results indicated that refining significantly enhanced the dry and wet tensile strength of the tissue paper while, on the other hand, reducing its porosity.


Selection of a suitable furnish mix and blending it at a 50:50 hardwood-to-softwood ratio resulted in the most appropriate fibre quality. Application of the commercial cellulase enzyme made the process more energy-efficient and economical: enzymatic treatment of the pulp lowered the refining energy required to meet the strength specifications, improved the strength properties and improved formation. It can be well concluded from the study that, with the right selection of fibre, application of a commercial cellulase refining enzyme, proper blending of the furnish mix and optimization of the refining intensity, carrier tissue paper can be successfully developed complying with the required product attributes.


A Study of Non-Newtonian Nanofluid Saturated in a Porous Medium Based on Modified Darcy-Maxwell Model Reema Singh, Vipin Kumar Tyagi, and Jaimala Bishnoi

Abstract An investigation is made into the linear and non-linear thermal instability at the onset of convection in a horizontal layer of a Darcy-Maxwell nanofluid. A macroscopic filtration law known as the modified Darcy-Maxwell model, relevant to the non-Newtonian behavior of real fluids in a porous medium, has been considered. The effects of Brownian motion and thermophoresis have been incorporated. The boundaries are assumed to be horizontal planes which are impermeable and perfectly thermally conducting. Linear stability analysis is used to investigate how the concentration of nanoparticles, the modified Lewis number, the diffusion coefficients and the porosity influence the Rayleigh number at the onset of stationary and oscillatory convection. Using non-linear stability analysis, the variations of the Nusselt number and the concentration Nusselt number have been analyzed for different physical parameters such as the concentration Rayleigh number, relaxation parameter, porosity, heat capacity ratio, Lewis number, and modified diffusion coefficient. Keywords Darcy-Maxwell nanofluid · Natural convection · Heat transfer · Mass transfer · Thermal instability

1 Introduction History provides evidence that medieval artisans used suspensions of gold particles of nanosize to give the windows of cathedrals a deep red color, a basic characteristic of the cathedrals of that time. In the fifteenth century, potters in Italy used fluids with metallic nanoparticles to bring shine and lustre to their pottery. But these artisans probably used such nanosuspensions without being aware of their size, shape, or other extraordinary characteristics.

R. Singh · J. Bishnoi, Department of Mathematics, Chaudhry Charan Singh University, Meerut 250004, Uttar Pradesh, India
V. K. Tyagi (B), Shobhit Institute of Engineering and Technology (Deemed To Be University), SBAS, Meerut 250110, Uttar Pradesh, India, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_21


In 1992, Choi [1] and his team accepted the challenge of the unusual increases in heat loads, heat fluxes, and pressure drops faced by high-tech industries. During their work at the Argonne National Laboratory, USA, they introduced a type of engineered fluid containing nanometer-sized additives suspended in a base fluid and found that the highest possible enhancement in thermal conductivity can be achieved at the smallest possible concentrations of additives in the fluid. Later, Choi and Eastman [2] termed these fluids nanofluids. These fluids exhibit enhanced thermophysical properties, such as thermal conductivity, thermal diffusivity, viscosity, and convective heat transfer coefficients, compared with the conventional base fluids or microfluids. Other characteristics of these fluids, like unexpected enhancement in reactivity and surface wettability; high dispersion stability; reduced pumping power to achieve an equivalent intensity of heat transfer; reduced particle agglomeration; reduced surface erosion; and unusual catalytic, magnetic, or optical behavior, make them applicable in a wide range of fields, such as the aerodynamic design of vehicles and other machinery used in thermal and nuclear power plants, food packaging, soil remediation, solar power plants, and medical science [3–10]. Since their inception, a lot of work devoted to the mechanisms and models of nanofluids has been reported and is well documented by a number of researchers (Buongiorno [11]; Bianco et al. [12]; Aybar et al. [13]; Babita et al. [14]; Nield and Bejan [15]). Convection in porous media saturated with nanofluids, Newtonian or non-Newtonian, occurs in a broad spectrum of disciplines, such as material and food processing, insulation of buildings and equipment, the petroleum industry, energy storage and recovery, geothermal problems, nuclear reactors, and biomechanics.
A considerable number of research investigations based on convection in a horizontal layer of nanofluid in a porous medium have been reported (Nield and Kuznetsov [16]; Kuznetsov and Nield [17–19]; Bhadauria and Agarwal [20]; Bhadauria et al. [21]; Yadav et al. [22, 23]; Chand and Rana [24, 25]; Agarwal et al. [26]; Yadav et al. [27]; Nield and Kuznetsov [28]; Agarwal [29]). To the best of the authors' knowledge, Nield [30] was probably the first who assumed the nanofluid to be a non-Newtonian fluid of power-law type and discussed the convective instability in a horizontal porous layer. The modeling of viscoelastic traditional fluid flow in porous media with specific pore geometries, such as capillary tubes, undulating tubes, and packs of spheres or cylinders, has motivated many research workers (see Skartsis et al. [31] and the references therein). Recently, many researchers have explained various results based on a nanofluid in the presence of a Maxwell fluid [32–36]. Khuzhayorov et al. [37] introduced a generalized macroscopic filtration law for describing transient linear viscoelastic fluid flows in porous media, using the method of homogenization for periodic structures [38, 39]. The macroscopic filtration law is expressed in Fourier space as a generalized Darcy's law with a dynamic permeability tensor. In the limiting case, the law given by Khuzhayorov et al. [37] reduces to the popular phenomenological model of Alishayev [40], valid for the low-Reynolds-number flow of a Maxwell fluid saturated in a Darcy porous medium and known as the modified Darcy-Maxwell model. By analogy with Maxwell's model [41], he suggested the following filtration law:

$$\mathbf{v}_D^* = -\frac{K}{\mu}\left(1 + \lambda^*\,\frac{\partial}{\partial t^*}\right)\nabla p^*, \qquad (1)$$

where asterisks denote dimensional variables. Here v*_D = (u*, v*, w*) is the Darcy velocity, K is the permeability, μ is the viscosity, λ* is the relaxation time, p* is the pressure, and t* is the time. This model has been used by various researchers to study the stability of mono- and double-diffusive convection in a regular fluid. The main aim of the present work is to use the model [40] to study the Bénard convection of a Maxwell nanofluid saturated in a porous medium. The stationary and oscillatory convections are analyzed using linear stability theory, and the patterns of streamlines and the behavior of heat and concentration transfer are unveiled through non-linear stability theory.
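For a time-harmonic pressure gradient, Eq. (1) acts as a complex transfer factor (1 + iωλ*) multiplying the quasi-static Darcy response. The sketch below (illustrative only; the relaxation-time value is assumed, not taken from the chapter) evaluates the magnitude and phase of that factor:

```python
import cmath
import math

def maxwell_darcy_factor(omega, lam):
    """Complex factor multiplying -(K/mu)*grad(p) when Eq. (1) is
    evaluated for a time-harmonic forcing proportional to exp(i*omega*t)."""
    return 1 + 1j * omega * lam

lam = 0.5  # relaxation time lambda* (illustrative value, in seconds)
for omega in (0.0, 1.0, 10.0):
    f = maxwell_darcy_factor(omega, lam)
    # the phase grows with omega: the viscoelastic correction matters
    # only when the forcing varies on the scale of the relaxation time
    print(f"omega={omega:5.1f}  |factor|={abs(f):6.3f}  "
          f"phase={math.degrees(cmath.phase(f)):6.2f} deg")
```

At ω = 0 the factor is exactly 1 and the classical Darcy law is recovered, which is the limiting behavior the text attributes to the model.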

2 Mathematical Model

Let us consider a Buongiorno nanofluid [11] saturating a horizontal Darcy porous layer of thickness d, under the assumptions that the fluid is Maxwell; the flow is incompressible; chemical reaction, external forces, viscous dissipation, and radiative heat transfer are negligible; the nanoparticles and the base fluid are in local thermal equilibrium; and the boundary walls are perfectly heat conducting. A Cartesian coordinate system is chosen with origin at the bottom of the porous layer. The lower and upper boundaries are maintained at the temperatures T*_h and T*_c (T*_h > T*_c), respectively. The nanoparticle volume fractions at the lower and upper walls are φ*_0 and φ*_1 (φ*_1 > φ*_0), respectively. Following Nield and Kuznetsov [16] and Alishayev [40], the conservation equations for total mass, momentum, thermal energy, and nanoparticles are

$$\nabla^* \cdot \mathbf{v}_D^* = 0, \qquad (2)$$

$$\frac{\mu}{K}\,\mathbf{v}_D^* = \left(1 + \lambda^*\frac{\partial}{\partial t^*}\right)\left[-\nabla^* p^* + \left(\phi^*\rho_p + (1-\phi^*)\,\rho\{1-\beta(T^*-T_c^*)\}\right)\mathbf{g}\right], \qquad (3)$$

$$(\rho c)_m \frac{\partial T^*}{\partial t^*} + (\rho c)_f\, \mathbf{v}_D^* \cdot \nabla^* T^* = \kappa_m {\nabla^*}^2 T^* + \varepsilon(\rho c)_p\left[D_B\, \nabla^*\phi^* \cdot \nabla^* T^* + (D_T/T_c^*)\,\nabla^* T^* \cdot \nabla^* T^*\right], \qquad (4)$$

$$\frac{\partial \phi^*}{\partial t^*} + \frac{1}{\varepsilon}\,\mathbf{v}_D^* \cdot \nabla^* \phi^* = D_B\, {\nabla^*}^2 \phi^* + (D_T/T_c^*)\,{\nabla^*}^2 T^*, \qquad (5)$$


where ρ is the density of the base fluid, ρ_p is the density of the nanoparticles, g is the gravitational acceleration acting in the negative vertical z-direction, (ρc)_m is the heat capacity of the medium, (ρc)_f is the heat capacity of the fluid, (ρc)_p is the effective heat capacity of the material constituting the nanoparticles, κ_m (= εκ, where κ is the thermal conductivity of the fluid) is the effective thermal conductivity of the porous medium, D_B is the Brownian diffusion coefficient, D_T is the thermophoretic diffusion coefficient, ε is the porosity, and T*_c is the reference temperature. The boundary conditions are

$$\mathbf{v}_D^* = 0,\quad T^* = T_h^*,\quad \phi^* = \phi_0^* \quad \text{at } z = 0, \qquad (6)$$

$$\mathbf{v}_D^* = 0,\quad T^* = T_c^*,\quad \phi^* = \phi_1^* \quad \text{at } z = d. \qquad (7)$$

Using the dimensionless variables defined as

$$(x, y, z) = \frac{(x^*, y^*, z^*)}{d}, \quad t = \frac{t^* \alpha_m}{\sigma d^2}, \quad (u, v, w) = \frac{(u^*, v^*, w^*)\,d}{\alpha_m}, \quad p = \frac{p^* K}{\mu \alpha_m},$$

$$\phi = \frac{\phi^* - \phi_0^*}{\phi_1^* - \phi_0^*}, \quad T = \frac{T^* - T_c^*}{T_h^* - T_c^*}, \quad \lambda = \frac{\lambda^* \alpha_m}{d^2},$$

where α_m = κ_m/(ρc)_f and σ = (ρc)_m/(ρc)_f, in Eqs. (2)–(7) and replacing v_D by v for convenience, the equations take the following form:

$$\nabla \cdot \mathbf{v} = 0, \qquad (8)$$

$$\mathbf{v} = \left(1 + \frac{\lambda}{\sigma}\frac{\partial}{\partial t}\right)\left[-\nabla p - Rm\,\hat{\mathbf{e}}_z + Ra\,T\,\hat{\mathbf{e}}_z - Rn\,\phi\,\hat{\mathbf{e}}_z\right], \qquad (9)$$

$$\frac{\partial T}{\partial t} + \mathbf{v}\cdot\nabla T = \nabla^2 T + \frac{N_B}{Le}\,\nabla\phi\cdot\nabla T + \frac{N_A N_B}{Le}\,\nabla T\cdot\nabla T, \qquad (10)$$

$$\frac{1}{\sigma}\frac{\partial \phi}{\partial t} + \frac{1}{\varepsilon}\,\mathbf{v}\cdot\nabla\phi = \frac{1}{Le}\nabla^2\phi + \frac{N_A}{Le}\nabla^2 T, \qquad (11)$$

$$\mathbf{v} = 0,\quad T = 1,\quad \phi = 0 \quad \text{at } z = 0, \qquad (12)$$

$$\mathbf{v} = 0,\quad T = 0,\quad \phi = 1 \quad \text{at } z = 1. \qquad (13)$$

Here ∇² ≡ ∂²/∂x² + ∂²/∂y² + ∂²/∂z², and

Ra = ρgβKd(T*_h − T*_c)/(μα_m) (thermal Rayleigh-Darcy number),
Rn = (ρ_p − ρ)(φ*_1 − φ*_0)gKd/(μα_m) (concentration Rayleigh-Darcy number),
Rm = [ρ_p φ*_0 + ρ(1 − φ*_0)]gKd/(μα_m) (basic density Rayleigh-Darcy number),
N_A = D_T(T*_h − T*_c)/[D_B T*_c (φ*_1 − φ*_0)] (modified diffusivity ratio),
N_B = ε(ρc)_p(φ*_1 − φ*_0)/(ρc)_f (modified particle-density increment),
Le = α_m/D_B (Lewis number).

Here, following Buongiorno [11] and Nield and Kuznetsov [16], Rn is taken as positive.

2.1 Basic Solution

At the basic state, the fluid is assumed to be quiescent, so that the various quantities are described as

$$\mathbf{v} = 0,\quad p = p_b(z),\quad T = T_b(z),\quad \phi = \phi_b(z). \qquad (14)$$

Using Eq. (14) in Eqs. (9)–(11), the temperature and the nanoparticle volume fraction satisfy

$$\frac{d^2 T_b}{dz^2} + \frac{N_A N_B}{Le}\frac{d\phi_b}{dz}\frac{dT_b}{dz} + \frac{N_B}{Le}\left(\frac{dT_b}{dz}\right)^2 = 0, \qquad (15)$$

$$\frac{d^2\phi_b}{dz^2} + N_A \frac{d^2 T_b}{dz^2} = 0. \qquad (16)$$

Using the boundary conditions (12) and (13), Eq. (16) provides

$$\phi_b = -N_A T_b + (1 - N_A)z + N_A. \qquad (17)$$

Now using Eq. (17) in Eq. (15), we get

$$\frac{d^2 T_b}{dz^2} + \frac{(1 - N_A)N_B}{Le}\frac{dT_b}{dz} = 0, \qquad (18)$$

which provides the basic temperature as

$$T_b = \frac{1 - e^{-N_B(1-N_A)(1-z)/Le}}{1 - e^{-N_B(1-N_A)/Le}}. \qquad (19)$$

The basic concentration φ_b can be obtained by substituting Eq. (19) into Eq. (17), and finally, using T_b and φ_b in Eq. (9), we get the distribution of the basic pressure p_b. Following Nield and Kuznetsov [16], the temperature and concentration in the basic state satisfying the boundary conditions (12) and (13) are approximated as

$$T_b = 1 - z \qquad (20)$$

and

$$\phi_b = z. \qquad (21)$$
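The linear approximations (20) and (21) are justified when the exponent N_B(1 − N_A)/Le in Eq. (19) is small, which is the usual situation since N_B is small and Le is large for liquids. A minimal sketch (parameter values assumed for illustration) checks how close the exact profile (19) is to 1 − z in that regime:

```python
import math

def T_b(z, N_A, N_B, Le):
    """Exact basic-state temperature profile, Eq. (19)."""
    k = N_B * (1 - N_A) / Le
    return (1 - math.exp(-k * (1 - z))) / (1 - math.exp(-k))

# Illustrative parameter values (assumed, not from the chapter):
N_A, N_B, Le = 0.5, 0.1, 10.0   # here k = N_B*(1-N_A)/Le = 0.005 << 1

# Maximum deviation from the linear approximation T_b = 1 - z over [0, 1]
max_dev = max(abs(T_b(i / 100, N_A, N_B, Le) - (1 - i / 100))
              for i in range(101))
print(f"max |T_b(z) - (1 - z)| = {max_dev:.2e}")
```

The boundary values T_b(0) = 1 and T_b(1) = 0 hold exactly for the profile (19), and the deviation from the straight line is of order k/8, i.e. negligible here.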

2.2 Perturbation Solution

Superimpose infinitesimal perturbations onto the basic state given by Eq. (14) as

$$\mathbf{v} = \mathbf{v}',\quad p = p_b + p',\quad T = T_b + T',\quad \phi = \phi_b + \phi', \qquad (22)$$

where the primes indicate perturbations and the primed quantities are functions of space and time. Using Eq. (22) in Eqs. (8)–(11), we get

$$\nabla\cdot\mathbf{v}' = 0, \qquad (23)$$

$$\mathbf{v}' = \left(1 + \frac{\lambda}{\sigma}\frac{\partial}{\partial t}\right)\left[-\nabla p' - Rn\,\phi'\,\hat{\mathbf{e}}_z + Ra\,T'\,\hat{\mathbf{e}}_z\right], \qquad (24)$$

$$\frac{\partial T'}{\partial t} + \mathbf{v}'\cdot\nabla T' - w' = \nabla^2 T' + \frac{N_B}{Le}\left(\frac{\partial T'}{\partial z} - \frac{\partial \phi'}{\partial z} + \nabla\phi'\cdot\nabla T'\right) - \frac{2 N_A N_B}{Le}\frac{\partial T'}{\partial z} + \frac{N_A N_B}{Le}\,\nabla T'\cdot\nabla T', \qquad (25)$$

$$\frac{1}{\sigma}\frac{\partial \phi'}{\partial t} + \frac{1}{\varepsilon}\,\mathbf{v}'\cdot\nabla\phi' + \frac{1}{\varepsilon}w' = \frac{1}{Le}\nabla^2\phi' + \frac{N_A}{Le}\nabla^2 T'. \qquad (26)$$

The boundary conditions in the perturbed state become

$$w' = 0,\quad T' = 0,\quad \phi' = 0 \quad \text{at } z = 0 \text{ and } z = 1. \qquad (27)$$

Applying the normal mode technique, the perturbed quantities are expressed as

$$(w', T', \phi') = [W(z), \Theta(z), \Phi(z)]\exp(st + ilx + imy), \qquad (28)$$

where l and m are dimensionless wave numbers in the horizontal x and y directions, respectively, and s (= ω_r + iω_i) is a complex time constant governing the growth rate of the disturbances: if ω_r < 0 for all modes the system is stable, while it is unstable if ω_r > 0 even for a single mode. The marginal state ω_r = 0 with s = 0 characterizes stationary convection. Further, if ω_r = 0 but ω_i ≠ 0, the state is one of overstability (periodic oscillatory motion), and if ω_r = 0 necessarily implies ω_i = 0, the system is said to be marginally stable under the principle of exchange of stabilities.

3 Linear Stability Analysis

In order to discuss the stationary and oscillatory convections, linear stability theory is applied. Linearizing Eqs. (23)–(26) and using Eq. (28) in the resulting equations, we get, after some simplification, the following boundary value problem:

$$(D^2 - \alpha^2)W + \alpha^2\left(1 + \frac{\lambda s}{\sigma}\right)(Ra\,\Theta - Rn\,\Phi) = 0, \qquad (29)$$

$$W + \left(D^2 - \alpha^2 + \frac{N_B}{Le}D - \frac{2 N_A N_B}{Le}D - s\right)\Theta - \frac{N_B}{Le}D\Phi = 0, \qquad (30)$$

$$\frac{1}{\varepsilon}W - \frac{N_A}{Le}(D^2 - \alpha^2)\Theta - \left[\frac{1}{Le}(D^2 - \alpha^2) - \frac{s}{\sigma}\right]\Phi = 0, \qquad (31)$$

$$W = 0,\quad \Theta = 0,\quad \Phi = 0 \quad \text{at } z = 0 \text{ and } z = 1, \qquad (32)$$

where D ≡ d/dz and α = (l² + m²)^{1/2} is the dimensionless horizontal wave number. It is to be noted that Eqs. (30) and (31) are already free from the Maxwell parameter λ, and on making Eq. (29) free from λ, the boundary value problem reduces to the Horton-Rogers-Lapwood problem (Horton and Rogers [42]; Lapwood [43]) discussed by Nield and Kuznetsov [16] for a nanofluid. To find the solution of the eigenvalue problem given by Eqs. (29)–(32), we employ a Galerkin-type weighted residual technique and accordingly choose the trial solutions

$$W = \sum_{p=1}^{N} A_p W_p,\quad \Theta = \sum_{p=1}^{N} B_p \Theta_p,\quad \Phi = \sum_{p=1}^{N} C_p \Phi_p, \qquad (33)$$

where W_p = Θ_p = Φ_p = sin pπz, p = 1, 2, 3, …, N, and A_p, B_p, and C_p are constants to be determined in such a way that the boundary conditions given by Eq. (32) are satisfied. For the first approximation, confining to the lowest-order mode (N = 1), we take

$$W = A_1 \sin \pi z,\quad \Theta = B_1 \sin \pi z,\quad \Phi = C_1 \sin \pi z. \qquad (34)$$


Substituting the expressions for W, Θ, and Φ from (34) into Eqs. (29)–(31), we get a system of three homogeneous algebraic equations in the three unknowns A₁, B₁, and C₁:

$$-A_1\delta^2 + \alpha^2\left(1 + \frac{\lambda s}{\sigma}\right)(Ra\,B_1 - Rn\,C_1) = 0, \qquad (35)$$

$$A_1 - (\delta^2 + s)B_1 = 0, \qquad (36)$$

$$\frac{1}{\varepsilon}A_1 + \frac{\delta^2 N_A}{Le}B_1 + \left(\frac{\delta^2}{Le} + \frac{s}{\sigma}\right)C_1 = 0. \qquad (37)$$

The absence of N_B in Eqs. (35)–(37) indicates that the instability depends upon the coupling between buoyancy and the conservation of nanoparticles, which happens indirectly through the effects of Brownian motion and thermophoresis. For a non-trivial solution of the system (35)–(37), we must have

$$\begin{vmatrix} -\delta^2 & \alpha^2\left(1 + \dfrac{\lambda s}{\sigma}\right)Ra & -\alpha^2\left(1 + \dfrac{\lambda s}{\sigma}\right)Rn \\ 1 & -(\delta^2 + s) & 0 \\ \dfrac{1}{\varepsilon} & \dfrac{\delta^2 N_A}{Le} & \dfrac{\delta^2}{Le} + \dfrac{s}{\sigma} \end{vmatrix} = 0, \qquad (38)$$

where δ² = π² + α².

3.1 Analysis at the Marginal State

3.1.1 Stationary Convection

For stationary convection at the marginal state (s = 0), using the value of δ², Eq. (38) provides the Rayleigh number as

$$Ra^{st} = \frac{(\pi^2 + \alpha^2)^2}{\alpha^2} - \left(N_A + \frac{Le}{\varepsilon}\right)Rn.$$

The critical wave number is α = π, providing the critical Rayleigh number

$$Ra_c^{st} = 4\pi^2 - \left(N_A + \frac{Le}{\varepsilon}\right)Rn. \qquad (39)$$

It is to be noted that the critical Rayleigh number given by Eq. (39) is the same as that obtained by Nield and Kuznetsov [16] for a regular nanofluid. Further, in the absence of nanoparticles (Rn = 0), we recover the well-known critical wave number and critical Rayleigh number obtained by Horton and Rogers [42].

3.1.2 Oscillatory Convection

For oscillatory convection at the marginal state, setting s = iω (where ω is real and non-zero) in Eq. (38) and solving for the Rayleigh number gives

$$Ra^{osc} = \frac{\delta^2(\delta^2 + i\omega)}{\alpha^2\left(1 + i\lambda\omega/\sigma\right)} - Rn\,\frac{\delta^2 N_A/Le + (\delta^2 + i\omega)/\varepsilon}{\delta^2/Le + i\omega/\sigma} = \Delta_1 + i\omega\Delta_2, \qquad (40)$$

where the real quantities Δ₁ (41) and Δ₂ (42) are obtained by rationalizing the two complex fractions and collecting, respectively, the real part and the coefficient of iω; both are lengthy algebraic expressions in δ², α², ω², λ, σ, ε, Le, N_A, and Rn. Since the Rayleigh number, being a physical parameter, is real and ω ≠ 0 for oscillatory convection, we necessarily have Δ₂ = 0 [from Eq. (40)]. This condition provides the frequency of oscillation ω² (43), and Eq. (40) then yields the oscillatory Rayleigh number

$$Ra^{osc} = \Delta_1. \qquad (44)$$

Clearly, the oscillatory convection depends upon all the important parameters, including the relaxation time.

4 Non-linear Stability Analysis

In order to predict the amplitude of the convective motion and the rate of heat and nano-concentration transfer, we perform a non-linear stability analysis using a truncated representation of a Fourier series. For simplicity, two-dimensional rolls are considered, so that all physical quantities are assumed to be independent of y. Using the approximated solutions for T_b and φ_b given by Eqs. (20) and (21), respectively, in the perturbed state represented by Eqs. (23)–(26), eliminating the pressure from the resulting equations and introducing the stream function ψ, we obtain

$$\nabla_1^2\psi = \left(1 + \frac{\lambda}{\sigma}\frac{\partial}{\partial t}\right)\left(-Ra\,\frac{\partial T}{\partial x} + Rn\,\frac{\partial \phi}{\partial x}\right), \qquad (45)$$

$$\frac{\partial T}{\partial t} + \frac{\partial \psi}{\partial x} = \nabla_1^2 T + \frac{\partial(\psi, T)}{\partial(x, z)}, \qquad (46)$$

$$\frac{1}{\sigma}\frac{\partial \phi}{\partial t} - \frac{1}{\varepsilon}\frac{\partial \psi}{\partial x} = \frac{1}{Le}\nabla_1^2\phi + \frac{N_A}{Le}\nabla_1^2 T + \frac{1}{\varepsilon}\frac{\partial(\psi, \phi)}{\partial(x, z)}, \qquad (47)$$

where ∇₁² = ∂²/∂x² + ∂²/∂z². Equations (45)–(47) are solved under the idealized conditions of stress-free, isothermal, and iso-nanoconcentration boundaries, so that the perturbations in temperature and nanoconcentration vanish at the walls:

$$\psi = \frac{\partial^2\psi}{\partial z^2} = T = \phi = 0 \quad \text{at } z = 0, 1. \qquad (48)$$

To perform a local non-linear stability analysis, we take a minimal double Fourier series which describes the finite-amplitude convection as

$$\psi = A_{11}(t)\sin(\alpha x)\sin(\pi z), \qquad (49)$$

$$T = B_{11}(t)\cos(\alpha x)\sin(\pi z) + B_{02}(t)\sin(2\pi z), \qquad (50)$$

$$\phi = C_{11}(t)\cos(\alpha x)\sin(\pi z) + C_{02}(t)\sin(2\pi z), \qquad (51)$$

where the coefficients A₁₁(t), B₁₁(t), B₀₂(t), C₁₁(t), and C₀₂(t) are time-dependent amplitudes to be determined from the dynamics of the system. Substituting Eqs. (49)–(51) into Eqs. (45)–(47) and imposing the orthogonality conditions with the eigenfunctions associated with the considered minimal model, we get

$$A_{11} = \frac{\alpha}{\delta^2}\left[Rn\,C_{11} - Ra\,B_{11} + \frac{\lambda}{\sigma}\left(Rn\,\frac{dC_{11}}{dt} - Ra\,\frac{dB_{11}}{dt}\right)\right], \qquad (52)$$

$$\frac{dB_{11}}{dt} = -\left(\delta^2 B_{11} + \alpha A_{11} + \pi\alpha A_{11}B_{02}\right), \qquad (53)$$

$$\frac{dB_{02}}{dt} = \pi\alpha A_{11}B_{11} - 8\pi^2 B_{02}, \qquad (54)$$

$$\frac{dC_{11}}{dt} = -\sigma\left(\frac{\pi\alpha}{\varepsilon}A_{11}C_{02} - \frac{\alpha}{\varepsilon}A_{11} + \frac{\delta^2}{Le}C_{11} + \frac{\delta^2 N_A}{Le}B_{11}\right), \qquad (55)$$

$$\frac{dC_{02}}{dt} = \sigma\left[\frac{\pi\alpha}{\varepsilon}A_{11}C_{11} - 8\pi^2\left(\frac{C_{02}}{Le} + \frac{N_A}{Le}B_{02}\right)\right]. \qquad (56)$$

The above system of simultaneous ordinary differential equations can be solved numerically using the Runge-Kutta-Gill method. In the steady state, Eqs. (53)–(56) provide

$$B_{11} = -\frac{\alpha A_{11}}{\delta^2 + \alpha^2 A_{11}^2/8}, \qquad (57)$$

$$B_{02} = -\frac{\alpha^2 A_{11}^2/8}{\pi\left(\delta^2 + \alpha^2 A_{11}^2/8\right)}, \qquad (58)$$

$$C_{11} = \frac{\varepsilon^2 Le\,\alpha A_{11}}{\varepsilon^2\delta^2 + \alpha^2 Le^2 A_{11}^2/8}\left[\frac{1}{\varepsilon} + \frac{\delta^2 N_A}{Le\left(\delta^2 + \alpha^2 A_{11}^2/8\right)} - \frac{\alpha^2 N_A A_{11}^2/8}{\varepsilon\left(\delta^2 + \alpha^2 A_{11}^2/8\right)}\right], \qquad (59)$$

$$C_{02} = \frac{\varepsilon Le^2 \alpha^2 A_{11}^2/8}{\pi\left(\varepsilon^2\delta^2 + \alpha^2 Le^2 A_{11}^2/8\right)}\left[\frac{1}{\varepsilon} + \frac{\delta^2 N_A}{Le\left(\delta^2 + \alpha^2 A_{11}^2/8\right)} - \frac{\alpha^2 N_A A_{11}^2/8}{\varepsilon\left(\delta^2 + \alpha^2 A_{11}^2/8\right)}\right] + \frac{\alpha^2 N_A A_{11}^2/8}{\pi\left(\delta^2 + \alpha^2 A_{11}^2/8\right)}. \qquad (60)$$

Using the expressions (57)–(60) for B₁₁, B₀₂, C₁₁, and C₀₂, respectively, in Eq. (52), we get

$$\frac{A_{11}^2}{8} = \frac{-W_2 + \sqrt{W_2^2 - 4W_1W_3}}{2W_1}, \qquad (61)$$

where

$$W_1 = \delta^2\alpha^4 Le^3\varepsilon, \qquad (62)$$

$$W_2 = \alpha^2\varepsilon\delta^4 Le\left(Le^2 + \varepsilon^2\right) - Ra\,\alpha^4\varepsilon Le^3 - Rn\,\varepsilon^2 Le^2\alpha^4(1 - N_A), \qquad (63)$$

$$W_3 = \varepsilon^3 Le\,\delta^6 - Rn\,\alpha^2\varepsilon^2 Le\,\delta^2(Le + \varepsilon N_A) - \alpha^2\varepsilon^3\delta^2 Le\,Ra, \qquad (64)$$

and

$$W_2^2 - 4W_1W_3 = \left[\alpha^4 Le^2 Ra + \alpha^2\delta^4\left(\varepsilon^2 - Le^2\right)\right]^2 + Rn^2 Le^2\varepsilon^4\alpha^8(1 - N_A)^2 + 2Ra\,Rn\,\varepsilon^3 Le^5\alpha^8(1 - N_A) + 2Rn\,\varepsilon^3 Le^3\delta^4\alpha^6\left(Le^2 + Le^2 N_A + \varepsilon^2 N_A + 2\varepsilon N_A - \varepsilon^2\right).$$

4.1 Heat and Mass Transport

The thermal Nusselt number Nu(t) is physically defined as

$$Nu(t) = \frac{\text{Heat transport by (conduction + convection)}}{\text{Heat transport by conduction}},$$

i.e.,

$$Nu(t) = 1 + \left[\frac{\int_0^{2\pi/\alpha_c}\left(\dfrac{\partial T}{\partial z}\right)dx}{\int_0^{2\pi/\alpha_c}\left(\dfrac{\partial T_b}{\partial z}\right)dx}\right]_{z=0}. \qquad (65)$$

Substituting the expressions for T_b and T from Eqs. (20) and (50) in Eq. (65), we get

$$Nu(t) = 1 - 2\pi B_{02}(t). \qquad (66)$$

Using B₀₂(t) given by Eq. (58) in Eq. (66), we obtain

$$Nu(t) = 1 + \frac{2\alpha^2 A_{11}^2/8}{\delta^2 + \alpha^2 A_{11}^2/8}. \qquad (67)$$

The nanoparticle concentration Nusselt number, defined in analogy with the thermal Nusselt number, is obtained as

$$Nu_\phi(t) = \left(1 - 2\pi C_{02}(t)\right) + N_A\left(1 - 2\pi B_{02}(t)\right). \qquad (68)$$

Using the expressions for B₀₂(t) and C₀₂(t) given by Eqs. (58) and (60), respectively, in Eq. (68), we get

$$Nu_\phi(t) = 1 - \frac{2\varepsilon Le^2\alpha^2 A_{11}^2/8}{\varepsilon^2\delta^2 + \alpha^2 Le^2 A_{11}^2/8}\left[\frac{1}{\varepsilon} + \frac{\delta^2 N_A}{Le\left(\delta^2 + \alpha^2 A_{11}^2/8\right)} - \frac{\alpha^2 N_A A_{11}^2/8}{\varepsilon\left(\delta^2 + \alpha^2 A_{11}^2/8\right)}\right] - \frac{2\alpha^2 N_A A_{11}^2/8}{\delta^2 + \alpha^2 A_{11}^2/8} + N_A\left(1 + \frac{2\alpha^2 A_{11}^2/8}{\delta^2 + \alpha^2 A_{11}^2/8}\right). \qquad (69)$$

5 Results and Discussion

5.1 Linear Stability Analysis

For stationary convection, Eq. (39) shows that for positive Rn (ρ_p > ρ) and positive N_A (fluid layer heated from below), the critical Rayleigh number is reduced by a notable amount for a nanofluid compared with its value for an ordinary fluid (Nield and Bejan [15]), implying that the nanoparticles set up convection earlier. This is expected on physical grounds, since nanofluids have higher thermal conductivity. The destabilization may be neutralized by reducing N_A or by heating the fluid layer from the top. The critical Rayleigh number also agrees with the one obtained by Nield and Kuznetsov [16], who studied the convection of a Newtonian nanofluid saturated in a Brinkman porous medium. The neutral stability curves of the Rayleigh number Ra versus the wave number α are depicted in Fig. 1a–d for Rn = 0.1, Le = 10, N_A = 1, and ε = 0.4, with variations in one of these parameters. The neutral stability graphs of the Rayleigh-Darcy number Ra plotted against α for various values of the concentration Rayleigh number Rn are shown in Fig. 1a. The curves show that Ra decreases as Rn increases, so Rn promotes instability. This result is in contrast to the one obtained for a Newtonian nanofluid with a bottom-heavy concentration (Bhadauria et al. [20]). Figure 1b shows that the stationary

Fig. 1 Variation of Ra with wave number α for stationary convection for a Rn, b Le, c N_A, and d ε

thermal Rayleigh number decreases with an increase in the Lewis number Le, again indicating that instability sets in earlier for larger Le. Since, by definition, the Brownian diffusion coefficient D_B is inversely proportional to the Lewis number Le, a decrease in the Brownian motion of the nanoparticles, contrary to the result of Bhadauria et al. [20], stabilizes the system. Thus the Brownian motion of nanoparticles in a Maxwell fluid is responsible for enhancing convection. The effect of N_A is shown in Fig. 1c. It predicts that N_A also advances the onset of stationary convection. It is also important to note that the modified diffusivity ratio, i.e., an increased heating rate, destabilizes a Maxwell-Darcy nanofluid just as it does a Newtonian nanofluid in a porous medium (see Bhadauria et al. [20]). Figure 1d shows the effect of the porosity parameter ε. As the porosity increases, the Rayleigh number also increases, so that high porosity is responsible for a delay in convection. In Fig. 2a–f, neutral stability curves of Ra versus α for oscillatory convection are shown for fixed values of λ, σ, ε, Rn, N_A, and Le, with variations in one of these parameters. The effect of the stress relaxation parameter λ is shown in Fig. 2f. It is observed that the oscillatory thermal Rayleigh number increases with an increase in λ, showing that the relaxation parameter suppresses convection. It is concluded that, in the presence of nanoparticles, the role of the relaxation parameter changes and it helps to promote the stability of the system. Figure 2e displays the effect of the heat capacity

Fig. 2 Linear oscillatory convection with wave number α for different values of a Rn, b Le, c N_A, d ε, e σ, and f λ

ratio σ. We observe that an increase in the value of σ decreases the critical Rayleigh number, implying the destabilizing character of σ. It means that, keeping the porosity fixed, the onset of oscillatory convection can be advanced or delayed by increasing or decreasing the heat capacity of the medium. Figure 2a depicts the effect of the nanoparticle concentration Rn on the oscillatory convection. As the value of Rn increases, the critical Rayleigh number Ra also increases for a given wave number α, showing its stabilizing effect.

Fig. 3 Linear stationary and oscillatory convection with wave number α for different values of a Rn, b Le, c N_A, and d ε

Figure 2b–d show that, for oscillatory convection, the effects of the porosity parameter ε, the modified diffusivity ratio N_A, and the Lewis number Le are exactly the same as in the case of stationary convection. The variations of the Rayleigh number for stationary as well as oscillatory convection are compared in Fig. 3a–d. It is interesting to note that in all situations, for small wave numbers α, convection first starts in the oscillatory mode and then shifts to stationary convection. In fact, the oscillatory convection ceases to exist beyond a certain value of the wave number α depending upon Rn (Fig. 3a), ε (Fig. 3d), N_A (Fig. 3c), and Le (Fig. 3b).

5.2 Non-Linear Stability Analysis In studies of convection in nanofluids, the determination of the heat and mass transfer across the layer plays a vital role. Non-linear stability analysis provides information about the physical mechanism in terms of the convection amplitudes and the heat and mass transfer.

5.2.1 Steady Analysis

For the steady state (∂/∂t = 0), the quantity of heat and mass transfer across the layer is given in terms of the Nusselt number Nu and the concentration Nusselt number Nu_φ. The variations of Nu and Nu_φ with the thermal Rayleigh number Ra for the parameters Rn = 4, ε = 0.2, N_A = 2, and Le = 5, with variations in the values of one of these parameters, are unveiled in Figs. 4a–d and 5a–d, respectively. In each case, both the thermal Nusselt number and the concentration Nusselt number increase very sharply up to a certain value of Ra, but a further increase in Ra does not change them significantly; rather, they become almost constant and a steady rate of heat and concentration transfer is achieved. It is clear from Figs. 4a, b and 5a, b that an increase in the concentration Rayleigh number Rn or in the porosity ε decreases both the Nusselt number Nu and the concentration Nusselt number Nu_φ, i.e., the parameters Rn and ε separately inhibit the transfer of heat and mass. Figures 4c and 5c predict that N_A has opposite effects on Nu and Nu_φ: on increasing N_A, Nu is decreased while Nu_φ is increased, i.e., the modified diffusivity ratio is responsible for a decrease in heat transfer and an increase in mass transfer. Figure 4d shows that Le is the only parameter which is responsible

Fig. 4 Variation of Nusselt number with thermal Rayleigh number Ra for different values of a Rn, b Le, c N_A, and d ε

Fig. 5 Variation of concentration Nusselt number with thermal Rayleigh number Ra for different values of a Rn, b Le, c N_A, and d ε

for increasing the heat transfer. It is also responsible for enhancing the transfer of mass (Fig. 5d). The time-independent patterns of streamlines, isotherms and iso-nanohalines are shown in the (x-z) plane for Ra_cr = 4π² and Ra = 10 × Ra_cr in Fig. 6, for fixed Le = 0.05, N_A = 5, ε = 0.04, σ = 0.5, λ = 0.5, and Rn = 4. Figure 6a, b show that the magnitude of the stream function increases with an increase in the value of the critical Rayleigh number. The sense of motion of the streamlines in subsequent cells is alternately identical with and opposite to that of the adjoining cell, indicating symmetry in the formation of the convective cells. Figure 6c, d depict the behavior of the isotherms. It is clear that, with an increase in the value of the Rayleigh number Ra, the conduction mode of heat transfer increases and convection occurs with a greater cell size. Similarly, the patterns of the isohalines depicted in Fig. 6e, f show that, for increasing values of Ra, convection is accompanied by both conduction and strong convection.

A Study of Non-Newtonian Nanofluid Saturated in a Porous …

Fig. 6 Streamlines, isotherms and isohalines for Le = 0.05; NA = 5; ε = 0.04; σ = 0.5; λ = 0.5; Rn = 4; Racr = 4π²; Ra = 40π²

5.2.2 Unsteady Analysis

In Figs. 7a–f and 8a–f, graphs are drawn for the thermal Nusselt number and the concentration Nusselt number with respect to time for different values of the parameters Rn, Le, NA, λ, ε, and σ, respectively. It is clear from Fig. 7a–f that the heat transport increases sharply in the initial range up to t = 0.1 and then, with the passage of time, it attains a constant steady-state rate. Further, Fig. 7a, c, e show that

Fig. 7 Variation of thermal Nusselt number with time t for different values of (a) Rn, (b) Le, (c) NA, (d) ε, (e) σ, and (f) λ

the heat transport decreases with an increase in the concentration Rayleigh number Rn, the modified diffusivity ratio NA, and the heat capacity ratio σ, while the Lewis number Le, the porosity parameter ε, and the relaxation parameter λ enhance it for a given value of t (Fig. 7b, d, f). Figure 8 shows that initially the concentration Nusselt number Nuφ oscillates vigorously, indicating an unsteady rate of mass transfer, but after a certain time (depending upon the values of the parameters) it becomes constant. It is also seen

Fig. 8 Variation of concentration Nusselt number with time t for different values of (a) Rn, (b) Le, (c) NA, (d) ε, (e) σ, and (f) λ

that only for two parameters, NA and ε, is the behavior of mass transfer clear: on increasing either of the two separately, mass transfer is increased (see Fig. 8c, d). Figure 9 shows the time-dependent streamlines, isotherms, and isohalines for different values of ψ, T, and φ. It is evident that with the passage of time, the magnitude of the stream function increases and the sense of motion in the subsequent cells is the same as for steady motion. In the case of isotherms, we observe that initially heat

Fig. 9 Streamlines, isotherms, and isohalines at t = 0.02 and t = 0.2 for Le = 0.05; NA = 5; ε = 0.04; σ = 0.5; λ = 0.5; Rn = 4; Racr = 4π²; Ra = 40π²

transfer starts with convection but, with the passage of time, convection slows down. For the isohalines, it is observed that in the beginning strong convection occurs, but gradually the cell size decreases, showing a weaker convection.
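The transient behaviour summarized in this section — a sharp initial rise of the Nusselt number followed by saturation at a constant value — can be mimicked qualitatively with a one-amplitude relaxation model. The sketch below uses a logistic amplitude equation as a deliberately simplified stand-in (it is not the paper's truncated Fourier system; the rate r and coupling c are arbitrary placeholders):

```python
# Toy model: dA/dt = r*A*(1 - A), with Nu = 1 + c*A^2.
# Qualitatively reproduces a sharp initial rise of Nu followed by saturation.
def nusselt_history(r=50.0, c=0.2, a0=1e-3, dt=1e-3, steps=2000):
    a, hist = a0, []
    for _ in range(steps):
        a += dt * r * a * (1.0 - a)   # explicit Euler step of the amplitude
        hist.append(1.0 + c * a * a)  # heat transport measure
    return hist

nu = nusselt_history()
print(nu[0], nu[-1])  # starts near 1 (pure conduction), saturates near 1 + c
```

The amplitude grows exponentially at first and then locks onto its fixed point, so the "Nusselt number" rises steeply and flattens, matching the shape (though of course not the physics) of the Nu(t) histories in Figs. 7 and 8.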


6 Conclusion

Using a modified Darcy-Maxwell model, linear and non-linear stability analyses of a horizontal permeable layer of Maxwell nanofluid with impermeable boundaries heated from below have been performed. The effect of various parameters has been found. The stationary and oscillatory convections, streamlines, isotherms, and isonanohalines for steady as well as unsteady motion, the thermal Nusselt number, and the concentration Nusselt number are shown graphically. It is found that stationary convection is independent of the relaxation parameter. The critical thermal Rayleigh number can be substantially decreased (increased) for a top-heavy (bottom-heavy) distribution of nanoparticles. Convection sets in earlier in a nanofluid as compared to an ordinary fluid. The porosity parameter ε inhibits instability. In contrast to a Newtonian nanofluid or a single-phase non-Newtonian Maxwell fluid saturated in a porous medium, the concentration Rayleigh number Rn and the Lewis number Le enhance instability. The modified diffusivity ratio NA destabilizes the system as it does for a Newtonian nanofluid. For oscillatory convection, the stress relaxation parameter λ enhances instability, indicating that oscillatory convection occurs more easily in a Maxwell fluid. The Rayleigh number decreases on increasing Rn, making the system more unstable. In contrast to a Newtonian nanofluid, the heat capacity ratio σ promotes instability for a non-Newtonian nanofluid. The porosity parameter ε enhances stability, while the modified diffusivity ratio NA and the Lewis number Le are responsible for early convection. For small wave numbers, convection occurs in the oscillatory mode, but as the value of the wave number increases, the mode of convection becomes stationary. In the steady flow, the thermal Nusselt number and the concentration Nusselt number increase very sharply on increasing the Rayleigh number up to a certain value, but beyond that value both become almost constant.
An increase in the concentration Rayleigh number Rn and the porosity parameter ε decreases the heat transfer, whereas it is increased with the Lewis number Le. The modified diffusivity ratio NA decreases heat transfer and increases mass transfer, while the Lewis number Le increases both. For the steady state, the magnitude of the stream function increases on increasing the value of Ra, and heat transfer as well as mass transfer occur in the form of strong convection. For unsteady flow, heat transport increases sharply in the initial stage and after some time attains an almost constant state. The parameters λ, ε, and Le are helpful in increasing the heat transport, while σ, Rn, and NA decrease it. The concentration Nusselt number oscillates vigorously and after a certain time attains a constant value. The magnitude of the stream function increases as time passes. Heat and mass are transferred through convection, but as time passes the convection becomes weak.
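The benchmark Racr = 4π² quoted in this study is the classical Horton–Rogers–Lapwood value for a Darcy porous layer heated from below: the stationary neutral curve Ra(α) = (π² + α²)²/α² attains its minimum at α = π. The following sketch verifies this standard result numerically (it concerns only the Darcy benchmark, not the nanofluid corrections of the present model):

```python
import math

def neutral_ra(alpha):
    """Stationary neutral curve for the Darcy (Horton-Rogers-Lapwood) layer."""
    return (math.pi**2 + alpha**2) ** 2 / alpha**2

# Scan the wave number for the minimum of the neutral curve.
alphas = [0.01 * k for k in range(100, 1000)]  # alpha in [1, 10)
ra_min, alpha_c = min((neutral_ra(a), a) for a in alphas)

print(alpha_c)  # close to pi ~ 3.1416
print(ra_min)   # close to 4*pi^2 ~ 39.478
```

This is the value against which the destabilizing (Rn, Le, NA) and stabilizing (ε) corrections discussed above are measured.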


Double-Diffusive Convection in Darcy Oldroyd-B Type Nanofluid: Linear and Non-linear Approach

Devendra Kumar, Vipin Kumar Tyagi, and Reema Singh

Abstract The double-diffusive convection of an Oldroyd-B type nanofluid confined between two parallel plates is investigated using both linear and non-linear stability analyses, assuming zero nanoparticle flux at the boundaries. The Oldroyd-B type model is used for writing the momentum equation, and the thermal energy equation includes the diffusion term. The analysis is made using the normal mode technique; the linear theory predicts the onset criterion for stationary as well as oscillatory convection. Based on the truncated Fourier series method, the non-linear theory predicts the behavior of heat, salt, and mass transport phenomena. The character of the relaxation and retardation parameters has been examined for both the non-oscillatory and oscillatory states.

Keywords Double-diffusive convection · Non-linear stability · Oldroyd-B · Nanofluid · Porous medium

D. Kumar · V. K. Tyagi (B), SBAS, Shobhit Institute of Engineering & Technology (Deemed to be University), Meerut, Uttar Pradesh, India; e-mail: [email protected]
R. Singh, C.C.S. University, Meerut, Uttar Pradesh, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_22

Nomenclature

(ρc)m  Effective heat capacity of the medium
(ρc)f  Effective heat capacity of the fluid
(ρc)p  Effective heat capacity of the material constituting nanoparticles
c  Nanofluid specific heat at constant pressure
g  Gravitational acceleration vector
d  Dimensional layer depth
K  Permeability of the porous medium
p*  Dimensionless pressure
p  Pressure
t*  Dimensionless time
t  Time
T*  Temperature
T  Dimensionless temperature
T1*  Temperature at the upper wall
T0*  Temperature at the lower wall
C*  Solute concentration
C  Dimensionless solute concentration
C1*  Concentration at the upper wall
C0*  Concentration at the lower wall
(x*, y*, z*)  Dimensionless Cartesian coordinates
(x, y, z)  Cartesian coordinates
DB  Brownian diffusion coefficient
DT  Thermophoretic diffusion coefficient
Le = αm/DB  Lewis number
Ln = αm/DS  Thermo-solutal Lewis number
NA = DT(T0 − T1)/(DB T1 φ0*)  Modified diffusivity ratio
NB = ε φ0* (ρc)p/(ρc)f  Modified particle-density increment
Ra = ρf g β K d (T0 − T1)/(μ αm)  Thermal Darcy-Rayleigh number
Rs = ρf g K d βC (C0 − C1)/(μ DS)  Solutal Darcy-Rayleigh number
Rm = [ρp φ0* + ρf (1 − φ0*)] g K d/(μ αm)  Basic density Darcy-Rayleigh number
Rn = (ρp − ρf) φ0* g K d/(μ αm)  Concentration Darcy-Rayleigh number
q  Nanofluid velocity
q*  Dimensionless Darcy velocity

Greek Symbols

α  Wave number
αm  Thermal diffusivity of the porous medium
β  Volumetric expansion coefficient of the fluid
ε  Porosity
κ  Thermal conductivity of the nanofluid
κm  Effective thermal conductivity of the porous medium
λ1  Relaxation time
λ2  Retardation time
μ  Viscosity of the fluid
ρ  Fluid density
ρp  Nanoparticle mass density
φ  Nanoparticle volume fraction
φ0*  Reference value of nanoparticle volume fraction
φ*  Dimensionless nanoparticle volume fraction
σ  Heat capacity ratio
ψ  Stream function
ω  Frequency of oscillations

Superscripts

*  Non-dimensional variable
′  Perturbation variable

Subscripts

b  Basic solution

Operators

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²
∇₁² = ∂²/∂x² + ∂²/∂z²
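To make these definitions concrete, the sketch below evaluates three of the dimensionless groups (Le, NA, and Ra) exactly as defined in the Nomenclature, from a set of purely hypothetical property values — every number here is a placeholder for illustration, not data from the paper:

```python
# Hypothetical property values (placeholders, not measurements).
alpha_m = 1.5e-7   # thermal diffusivity of the porous medium, m^2/s
D_B     = 3.0e-11  # Brownian diffusion coefficient, m^2/s
D_T     = 6.0e-12  # thermophoretic diffusion coefficient, m^2/s
T0, T1  = 310.0, 300.0  # lower/upper wall temperatures, K
phi0    = 0.01     # reference nanoparticle volume fraction

rho_f, g, beta = 1.0e3, 9.81, 2.1e-4  # fluid density, gravity, expansion coeff.
K, d, mu = 1.0e-9, 1.0e-2, 1.0e-3     # permeability, layer depth, viscosity

# Dimensionless groups as defined in the Nomenclature.
Le  = alpha_m / D_B
N_A = D_T * (T0 - T1) / (D_B * T1 * phi0)
Ra  = rho_f * g * beta * K * d * (T0 - T1) / (mu * alpha_m)

print(Le, N_A, Ra)
```

For these placeholder values Le = 5000, NA ≈ 0.67, and Ra ≈ 1.37; in practice the groups are tuned over the ranges shown in the figures (e.g., Le = 0.75, Rn = 0.4, ε = 0.4).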

1 Introduction

Investigations in the direction of double-diffusive convection in porous media were initiated several decades ago. This interest was generated by the variety of its applications: thermosolutal convection arises in engineering fields as well as in electro-physics, electro-chemistry, and geophysics. For a comprehensive review of the literature concerning double-diffusive convection in a binary fluid saturated porous medium, we refer to Nield and Bejan [1], Trevisan and Bejan [2], Mamou [3], and Mojtabi and Charrier-Mojtabi [4], who have given excellent reviews on double-diffusive convection in porous media. It has been confirmed that in a system where two diffusing properties are present, instabilities can occur if at least one of the components is destabilizing. It is a well-known fact that the thermal conductivity of solids is, in most cases, greater than that of fluids. Further, conventional heat transfer liquids such as water, ethylene glycol, and engine oil have a wide range of applications. However, such liquids have low thermal conductivity as compared to that of solids, especially the


metals. Hence the addition of metal particles to a fluid can increase its thermal conductivity. If nano-sized particles (1–100 nm) are suspended in conventional heat transfer liquids, they increase the conduction and convection coefficients, allowing more heat transfer out of the coolant. In 1995, Choi [5] named such fluids "nanofluids". Enhanced thermal conductivity, higher viscosity, and an enhanced value of the critical heat flux are some of the notable features of nanofluids. The enhanced thermal conductivity phenomenon was first noticed by Masuda et al. [6]. Choi [7] explained the applicability of nanofluids in a broader sense, i.e., in energy production and supply, electronics, paper production, HVAC, and textiles. Since all of the above-mentioned industries deal with heat transfer in some way or the other, there is a need for improved heat transfer media. In this context, it is important to note that in comparison to conventional solid–liquid suspensions for heat transfer intensification, nanofluids have some potential advantages such as high surface area, Brownian motion, reduced particle clogging, reduced pumping power, and some other adjustable properties (thermal conductivity, surface wettability, and others). Due to the Brownian motion of nanoparticles through fluids, better results are obtained for heat transport because the Brownian motion increases the mode of heat or mass transfer in the system. Further, nanofluids have applications in almost every field (Chakraborty and Panigrahi [8]). Some of them are biomedical science, engine cooling, boiler exhaust flue gas recovery, cooling of electronics, coolants in nuclear systems, lubricants, defense, thermal storage, nano-drug delivery, cancer therapeutics, nanocryosurgery, cryopreservation, and many more. Thus nanofluids are playing an increasingly important role in the fields of nanotechnology and biotechnology worldwide.
The Oldroyd-B model for a non-Newtonian fluid represents highly elastic fluids for which the viscosity remains sensibly constant over a wide range of shear rates. This model has been widely used in experimental measurements and flow visualizations reported on the instability of viscoelastic flows. Malashetty et al. [9] explained the linear non-equilibrium model for a saturated, densely packed horizontal porous layer of Oldroyd-B fluid heated from below and cooled from above. Shivakumara and Sureshkumar [10] studied the stability analysis of an Oldroyd-B binary fluid using a modified Forchheimer-extended Darcy equation. Narayana et al. [11] examined the behavior of double-diffusive magnetoconvection of an Oldroyd-B type fluid. The non-linear stability analysis of thermal convection of an Oldroyd fluid in an anisotropic Darcy porous medium was investigated by Raghunatha et al. [12] using a perturbation method. In view of a different aspect of the Oldroyd fluid, chaotic convection under non-linear analysis in the presence of temperature modulation was considered by Sun et al. [13] using the Darcy-Brinkman model. Shivakumara and Sureshkumar [14] explained the effect of quadratic drag and vertical throughflow for a double-diffusive Oldroyd fluid using linear stability theory. Recently, Raghunatha and Shivakumara [15] analyzed the double-diffusive convection in an Oldroyd-B fluid using a perturbation method for the non-linear approach and discussed bifurcating solutions. Prema et al. [16] explored the role of an Oldroyd fluid in a porous medium for the heat transfer process. Heat and mass transportation


processes with a non-linear approach for a rotating Oldroyd fluid layer were analyzed by Hafeez et al. [17]. Abbas et al. [18] examined the homogeneous/heterogeneous processes of a thin film layer of an Oldroyd fluid subjected to a rotating disk. Ashraf et al. [19] investigated the cross-diffusion effects with convective boundary conditions in a mixed convection flow of an Oldroyd fluid over a stretching surface. The three-dimensional MHD flow of an Oldroyd fluid in the presence of Soret and Dufour effects was examined by Farooq et al. [20] using a convergent analytic scheme. To the best of our knowledge, Sheu [21] studied the thermal behavior of an Oldroyd viscoelastic nanofluid using a modified Darcy model and assuming constant flux at the boundaries. Sheu also introduced the Oldroyd-B fluid model to incorporate the Brownian motion and thermophoresis effects for nanofluids (Sheu [22]). Umavathi [23] examined the impact of convection on a viscoelastic nanofluid saturated medium under thermal modulation. Yadav et al. [24] considered an Oldroyd-B nanofluid saturated porous layer with rotation, subject to viscosity variations, and found that for oscillatory convection the stress relaxation time stabilized the system while the strain retardation parameter destabilized it. Rana et al. [25] analyzed double-diffusive instability in a horizontal layer of a viscoelastic Walters' (model B) nanofluid in the presence of a Darcy permeable medium. Chand and Rana [26] examined the thermal instability of a Walters' B nanofluid saturated porous medium. Shivakumara et al. [27] examined the linear stability analysis of an Oldroyd-B nanofluid in the presence of temperature only. Agarwal and Rana [28] investigated the thermal stability of an Al2O3–EG Oldroyd-B rotating nanofluid layer with a non-equilibrium approach using the Brinkman model. Further, in 2015, Hayat et al. [29] studied the three-dimensional flow of an Oldroyd-B nanofluid in the presence of an induced magnetic field.
Thermal instability of an Oldroyd-B nanofluid with a more realistic approach was discussed by Srivastava and Bhadauria [30]. Umavathi and Kumar [31] constructed a mathematical model to analyze the thermal instability of the onset of convection in an Oldroyd-B nanofluid, assuming that the thermal conductivity and viscosity depend on the nanoparticle volume fraction. The effect of MHD mixed convection of an Oldroyd-B nanofluid between two infinite isothermal stretching disks with a magnetic field was analyzed by Hashmi et al. [32]. Umavathi and Sasso [33] studied the double-diffusive convection in an Oldroyd-B nanofluid using a modified Darcy-Oldroyd model with linear as well as non-linear stability analyses. Recently, Ali et al. [34] presented the three-dimensional behavior of an Oldroyd-B nanofluid using the Cattaneo-Christov heat flux model. The Cattaneo-Christov double diffusion theory in the presence of the Joule effect for an Oldroyd nanofluid was explained by Hafeez et al. [35]. Rawat et al. [36] presented the activation energy and thermal radiation effects on an Oldroyd-B nanofluid. Because of the various applications of non-Newtonian nanofluids, we have investigated, in the present paper, the double-diffusive nanofluid convection using Yoon's model for an Oldroyd fluid, in the presence of active management of the nanoparticle flux at the boundaries. The behavior of various relevant parameters has been examined both qualitatively and quantitatively in the linear and non-linear stability analyses.


2 Mathematical Model

Consider an Oldroyd-B type nanofluid porous layer of thickness 'd' confined between two parallel planes. The origin lies on the lower plane, and the z-axis is vertically upward. The lower plane is kept at temperature T0 and concentration C0, and the upper plane is kept at temperature T1 (< T0). […] Since ρp > ρf, in the presence of nanoparticles the value of the critical Rayleigh number is accordingly reduced in a significant manner. From (61), it is also seen that larger values of Rn foresee a destabilizing character in the system, which needs to be neutralized by cooling the bottom layer relative to the top. Figure 2(a)–(d) show the effect of the parameters Le, Rs, NA, and ε on linear stationary convection for fixed values of Le = 5, Rn = 0.4, Rs = 0.5, NA = 4, and

Fig. 2 Neutral stationary convection with wave number α for different values of (a) Le, (b) NA, (c) Rs, and (d) ε

ε = 0.4, with variations in one of these parameters. The neutral stability curves for stationary convection are obtained as (Ra, α) curves in all the figures. It is clear from the graphs that the critical values of the Darcy-Rayleigh number for stationary convection (Rast) reduce when the parameters Le, Rs, and NA increase, except for the porosity (ε). Thus, Le, Rs, and NA are responsible for enhancing the instability, while the porosity ε suppresses it. Figures 3 and 4 show the effect of various parameters on linear stationary and linear oscillatory convection for fixed values of Le = 0.75, Ln = 50, Rn = 0.4, Rs = 0.5, NA = 4, λ1 = 1.2, λ2 = 0.4, and ε = 0.4, with variations in one of these parameters. The neutral stability curves for both the linear stationary convection and the linear oscillatory convection clearly provide a comparison between the two. We observe that Raosc < Rast, whatever the values of the parameters may be, implying, thereby, that the instability sets in as oscillatory convection. Similar behavior is obtained in the (Ra, α) curves in all the figures. Thus, the system becomes unstable first for oscillatory convection and not for stationary convection. It is clear from the graphs that the values of the stationary and oscillatory Rayleigh numbers (Ra) reduce when any of the parameters Le, Ln, Rn, Rs, NA, and λ1 increases, except for the porosity (ε) and the retardation parameter (λ2). Thus, Le, Ln, Rn, Rs, NA, and λ1 are responsible for enhancing the instability, while ε and λ2 delay it.
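The mode-selection argument used here — instability sets in via whichever branch has the lower neutral-curve minimum over the wave number — can be expressed as a small numerical comparison. In the sketch below both curves are placeholders (the Darcy stationary curve and a uniformly lowered copy standing in for the oscillatory branch, roughly mirroring the Rast ≈ 9200 versus Raosc ≈ 750 levels seen in the figures); the paper's full expressions are not reproduced:

```python
import math

def ra_stationary(alpha):
    # Darcy stationary neutral curve, used here only as a placeholder shape.
    return (math.pi**2 + alpha**2) ** 2 / alpha**2

def ra_oscillatory(alpha):
    # Placeholder: the oscillatory branch lies uniformly below the stationary one.
    return 0.1 * ra_stationary(alpha)

# Minimize each neutral curve over the wave number and compare minima.
alphas = [0.01 * k for k in range(50, 800)]
ra_st = min(ra_stationary(a) for a in alphas)
ra_osc = min(ra_oscillatory(a) for a in alphas)

mode = "oscillatory" if ra_osc < ra_st else "stationary"
print(mode)  # -> oscillatory
```

Since the oscillatory minimum is lower for every parameter choice examined, the onset mode reported by such a comparison is always oscillatory, in line with the discussion above.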

Fig. 3 Linear stationary and oscillatory convection with wave number α for different values of (a) Le, (b) Rs, (c) Ln, and (d) ε

Double-Diffusive Convection in Darcy Oldroyd-B Type Nanofluid …


The stationary and oscillatory neutral curves for different values of the Lewis number Le are shown in Fig. 3a. The effect of the Lewis number is found to advance the onset of both stationary and oscillatory convection. Thus, the larger the value of the Lewis number Le, the smaller the Rayleigh numbers, implying that Le enhances the instability, so that the Lewis number has a destabilizing effect. Figure 3b–c shows that the effect of increasing the thermosolutal Lewis number (Ln) and the solutal Darcy-Rayleigh number (Rs) is to reduce the value of the Rayleigh number for stationary and oscillatory convection; these parameters also accelerate the instability. However, the Rayleigh number increases when we increase the porosity (ε); thus, the porosity delays the convection, as shown in Fig. 3d. Figure 4a, b illustrates the effect of the concentration Rayleigh number (Rn) and the modified diffusivity ratio (NA) on the stationary and oscillatory convections. It is observed that as the values of Rn and NA increase, the Rayleigh number Ra decreases, depicting a destabilizing effect of Rn and NA. In other words, increasing Rn and NA sets in oscillatory convection earlier.

Fig. 4 Linear stationary and oscillatory convection with wave number α for different values of a Rn, b NA, c λ1, and d λ2


D. Kumar et al.

Figure 4c shows that as the relaxation parameter (λ1) increases, the value of the Rayleigh number for oscillatory convection decreases. Thus, the relaxation parameter (λ1) enhances the instability of the system, while the oscillatory Rayleigh number increases with an increase in the value of the retardation parameter (λ2), which means that the retardation parameter (λ2) delays the onset of convection and hence has a stabilizing effect (Fig. 4d).

5.2 Non-linear Stability Analysis

A non-linear stability analysis is carried out to find the amplitude of convection in a fluid motion with temperature and concentration gradients and to examine the transfer of heat, salt, and mass across the porous layer. These are specified in terms of the thermal Nusselt number Nu, the solute concentration Nusselt number NuC, and the mass concentration Nusselt number Nuφ, respectively. In Figs. 5 and 6, the graphs are drawn to analyze the behavior of, as well as the comparison between, the thermal Nusselt number (Nu), the solutal Nusselt number

Fig. 5 Variations of Nu, NuC, and Nuφ versus t for different values of a Le, b Ln, c λ1, and d λ2

(NuC), and the nanoparticle concentration Nusselt number (Nuφ) as functions of time, for the parameter values Le = 0.8, Ln = 15, λ1 = 0.6, λ2 = 0.2, NA = 4, ε = 0.2, Rs = 4, and Rn = 5, with variation in one of these parameters at a time. It is clear from these graphs that, initially, for small values of time, the heat and nanoparticle concentration transports increase sharply and then attain a steady state; however, the solute transfer oscillates for a certain period of time (Figs. 5 and 6), after which a steady state prevails for another range of parameters (for Le = 1.5, Ln = 5 and the same values of the other parameters as given above) (Fig. 7a–h). From Figs. 5 and 6, it is also observed that the heat transport attains its steady state first, then the nanoparticle concentration transport achieves its steady state, and finally the solute transfer reaches the steady state. Figure 7a–h shows that Le, Ln, and ε are responsible for decreasing the heat transfer whereas the other parameters increase it. In the unsteady state, the parameters Le, λ1, λ2, NA, ε, Rs, and Rn individually increase the nanoparticle mass concentration, whereas the thermosolutal Lewis number Ln is the only parameter which decreases

Fig. 6 Variations of Nu, NuC, and Nuφ versus t for different values of a NA, b ε, c Rs, and d Rn


Fig. 7 Variation of NuC versus t for different values of a Le, b Ln, c λ1, d λ2, e NA, f ε, g Rs, and h Rn


it. The time-dependent behavior of the solutal Nusselt number (NuC) shows an increasing rate of salt transfer with respect to the parameters Ln, λ1, NA, Rs, and Rn, whereas the porosity (ε) and the Lewis number (Le) are the only parameters showing a decreasing effect on the heat and mass transfers. However, the retardation parameter (λ2) increases the heat transfer and decreases the solute transfer rate. In Figs. 8, 9, 10 and 11, we draw time-dependent streamlines, isotherms, isohalines, and iso-nano-concentrations, respectively, for Le = 0.5, Ln = 10, λ1 = 0.5, λ2 = 0.2, NA = 3, ε = 0.4, Rs = 0.2, Rn = 2 at different time scales (t = 0.5, 1, 2). From Fig. 8a–c, we observe that the magnitude of the stream function decreases slightly as time increases. In all the streamline plots, the sense of motion in successive cells is alternately identical with and opposite to that of the adjoining cell. At time t = 1, the size of the cell having value 0.03 is smaller than in Fig. 8a. Similarly, in the next panel (at t = 2), that cell no longer exists, i.e., the velocity has decreased to 0.029 (Fig. 8c). Hence, the velocity of the fluid decreases continuously as time increases. For the isotherms, since the fluid is heated from below, on moving from the wall z = 0 to the wall z = 1 the convection cells broaden, which simply means that the convection is stronger near the top layer than near the lower layer. This means that the velocity profile is larger near the top layer, so that the fluid layer has more kinetic energy there. As the heating process continues, the convection becomes weak (Fig. 9a–c). Figure 10a–c shows that the solute concentration is higher near the walls z = 0 and z = 1 than in the middle portion. As time passes, the solute concentration moves further toward the walls and hence enhances the convection toward the walls. This trend is similar to the case of the isotherms.
In the case of the iso-nano concentration, at t = 0.5 there exist both convection and conduction (Fig. 11a). But as time increases (e.g., at t = 1), the conduction ceases to exist and only the convection state is observed. From Fig. 11b, it is observed that the particles are concentrated toward the lower wall, since the cells are broader near z = 0 and narrower toward z = 1. In the next panel, the convection becomes weak with time (Fig. 11c).

6 Conclusion

6.1 Linear Stability Analysis

• The Rayleigh number for stationary convection is independent of both the relaxation and retardation parameters and depends upon Le, NA, ε, Rs, and Rn.
• The parameters Le, Rs, and NA are responsible for enhancing the instability for the stationary convection while the porosity ε suppresses it.
• The system becomes unstable first for oscillatory convection and then for stationary convection.


Fig. 8 Streamlines a t = 0.5, b t = 1, c t = 2 (Le = 0.5, Ln = 10, λ1 = 0.5, λ2 = 0.2, NA = 3, ε = 0.4, Rs = 0.2, Rn = 2)


Fig. 9 Isotherms a t = 0.5, b t = 1, c t = 2 (Le = 0.5, Ln = 10, λ1 = 0.5, λ2 = 0.2, NA = 3, ε = 0.4, Rs = 0.2, Rn = 2)



Fig. 10 Isohalines a t = 0.5, b t = 1, c t = 2 (Le = 0.5, Ln = 10, λ1 = 0.5, λ2 = 0.2, NA = 3, ε = 0.4, Rs = 0.2, Rn = 2)



Fig. 11 Iso-nano concentration a t = 0.5, b t = 1, c t = 2 (Le = 0.5, Ln = 10, λ1 = 0.5, λ2 = 0.2, NA = 3, ε = 0.4, Rs = 0.2, Rn = 2)



• The parameters Le, Ln, λ1, NA, Rs, and Rn enhance the instability of the system, whereas λ2 and ε suppress it for both the stationary and the oscillatory convections.

6.2 Non-linear Stability Analysis

• In the steady state, we recover the results obtained by Jaimala et al. [45] for a Darcy Maxwell nanofluid.
• In the unsteady state, for small values of time, the heat and nanoparticle concentration transports increase sharply and then attain a constant value.
• The transfer of solute concentration occurs in an oscillating mode and, as time passes, it reaches a constant state.
• λ1, NA, Rs, and Rn are helpful in increasing the heat transport, the rate of solute transfer, and the nanoparticle mass transfer, whereas λ2 increases the heat transport and nanoparticle mass transfer but decreases the solute concentration transfer rate.
• Le and ε decrease the heat transfer and the rate of solute transfer while increasing the nanoparticle mass transfer.
• As Ln increases, the heat and nanoparticle mass transfers decrease while the solute concentration transfer rate increases.
• For the unsteady state, the magnitude of the stream function decreases slowly as time passes.
• Heat transfer occurs only in the form of convection, which becomes weak with time. A similar pattern is observed for the isohalines in the unsteady state.
• Initially, the nanoparticle mass transfer occurs in both the convection and the conduction states, but as time passes only the convection mode exists, with smaller magnitude.

Conflict of Interest The authors declare that they have no conflict of interest.

References

1. Nield DA, Bejan A (2017) Convection in porous media, 5th edn. Springer-Verlag, New York, pp 1–988
2. Trevisan OV, Bejan A (1985) Natural convection with combined heat and mass transfer buoyancy effects in a porous medium. Int J Heat Mass Transfer 28(8):1597–1611
3. Mamou M (2003) Stability analysis of the perturbed rest state and of the finite amplitude steady double-diffusive convection in a shallow porous enclosure. Int J Heat Mass Transfer 46(12):2263–2277
4. Mojtabi A, Charrier-Mojtabi MC (2005) Double-diffusive convection in porous media. Handb Porous Media 2:269–320
5. Choi SUS, Eastman JA (1995) Enhancing thermal conductivities of fluids with nanoparticles. ASME Int Mech Eng Congr Expo 12–17


6. Masuda H, Ebata A, Teramae K, Hishinuma N (1993) Alteration of thermal conductivity and viscosity of liquid by dispersing ultra-fine particles. Netsu Bussei 7(4):227–233
7. Choi SUS (2009) Nanofluids: from vision to reality through research. J Heat Transfer 131:1–9
8. Chakraborty S, Panigrahi PK (2020) Stability of nanofluid: a review. Appl Therm Eng 174:1–26
9. Malashetty MS, Shivakumara IS, Kulkarni S, Swamy M (2006) Convective instability of Oldroyd B fluid saturated porous layer heated from below using a thermal nonequilibrium model. Transp Porous Media 64:123–139
10. Shivakumara IS, Sureshkumar S (2007) Convective instabilities in a viscoelastic-fluid-saturated porous medium with throughflow. J Geophys Eng 4(1):104–115
11. Narayana M, Gaikwad SN, Sibanda P, Malge RB (2013) Double-diffusive magneto convection in viscoelastic fluids. Int J Heat Mass Transfer 67:194–201
12. Raghunatha KR, Shivakumara IS, Sowbhagya (2018) Stability of buoyancy-driven convection in an Oldroyd-B fluid-saturated anisotropic porous layer. Appl Math Mech Engl 39(5):653–666
13. Sun Q, Wang S, Zhao M, Yin C, Zhang Q (2019) Weak nonlinear analysis of Darcy-Brinkman convection in Oldroyd-B fluid saturated porous media under temperature modulation. Int J Heat Mass Transfer 138:244–256
14. Shivakumara IS, Sureshkumar S (2008) Effects of throughflow and quadratic drag on the stability of a doubly diffusive Oldroyd-B fluid-saturated porous layer. J Geophys Eng 5(3):268–280
15. Raghunatha KR, Shivakumara IS (2019) Double-diffusive convection in an Oldroyd-B fluid layer-stability of bifurcating equilibrium solutions. J Appl Fluid Mech 12:85–94
16. Prema S, Shankar BM, Seetharamu KN (2020) Convection heat transfer in a porous medium saturated with an Oldroyd B fluid: a review. J Phys Conf Ser 1473:012029
17. Hafeez A, Khan M, Ahmed J (2020) Flow of Oldroyd-B fluid over a rotating disk with Cattaneo-Christov theory for heat and mass fluxes. Comput Methods Programs Biomed 191:105374
18. Abbas SZ, Khan WA, Waqas M, Irfan M, Asghar Z (2020) Exploring the features for flow of Oldroyd-B liquid film subjected to rotating disk with homogeneous/heterogeneous processes. Comput Methods Programs Biomed 189:105323
19. Ashraf MB, Hayat T, Alsaedi A, Shehzad SA (2016) Soret and Dufour effects on the mixed convection flow of an Oldroyd-B fluid with convective boundary conditions. Results Phys 6:917–924
20. Farooq A, Ali R, Benim AC (2018) Soret and Dufour effects on three dimensional Oldroyd-B fluid. Physica A 503:345–354
21. Sheu LJ (2011) Thermal instability in a porous medium layer saturated with a viscoelastic nanofluid. Transp Porous Media 88(3):461–477
22. Sheu LJ (2011) Linear stability of convection in a viscoelastic nanofluid layer. World Acad Sci Eng Technol 5(10):1977–1983
23. Umavathi JC (2013) Effect of modulation on the onset of thermal convection in a viscoelastic fluid-saturated nanofluid porous layer. Int J Eng Res Appl 3(5):923–942
24. Yadav D, Bhargava R, Agrawal GS, Yadav N, Lee J, Kim MC (2014) Thermal instability in a rotating porous layer saturated by a non-Newtonian nanofluid with thermal conductivity and viscosity variation. Microfluid Nanofluidics 16:425–440
25. Rana GC, Thakhur RC, Kango SK (2014) On the onset of thermosolutal instability in a layer of an elastico-viscous nanofluid in porous medium. FME Trans 42:1–9
26. Chand R, Rana GC (2015) Instability of Walter's B' visco-elastic nanofluid layer heated from below. Indian J Pure App Phy 53:759–767
27. Shivakumara IS, Dhananjaya M, Ng C-O (2015) Thermal convective instability in an Oldroyd-B nanofluid saturated porous layer. Int J Heat Mass Transfer 84:167–177
28. Agarwal S, Rana P (2016) Nonlinear convective analysis of a rotating Oldroyd-B nanofluid layer under thermal non-equilibrium utilizing Al2O3-EG colloidal suspension. Eur Phys J Plus 131(4):1–14
29. Hayat T, Muhammad T, Shehzad SA, Alhuthali MS, Lu J (2015) Impact of magnetic field in three-dimensional flow of an Oldroyd-B nanofluid. J Mol Liq 212:272–282


30. Srivastava A, Bhadauria BS (2016) Onset of convection in porous medium saturated by viscoelastic nanofluid: more realistic result. J Appl Fluid Mech 9(6):3317–3325
31. Umavathi JC, Kumar JP (2017) Onset of convection in a porous medium layer saturated with an Oldroyd-B nanofluid. J Heat Transfer 139:1–14
32. Hashmi MS, Khan N, Mahmood T, Shehzad SA (2017) Effect of magnetic field on mixed convection flow of Oldroyd-B nanofluid induced by two infinite isothermal stretching disks. Int J Therm Sci 111:463–474
33. Umavathi JC, Sasso M (2017) Double-diffusive convection in a porous medium layer saturated with an Oldroyd nanofluid. AIP Conf Proc 1798(1):020166
34. Ali B, Hussain S, Nie Y, Hussein AK, Habib D (2021) Finite element investigation of Dufour and Soret impacts on MHD rotating flow of Oldroyd-B nanofluid over a stretching sheet with double diffusion Cattaneo Christov heat flux model. Powder Technol 377:439–452
35. Hafeez A, Khan M, Ahmed A, Ahmed J (2021) Features of Cattaneo-Christov double diffusion theory on the flow of non-Newtonian Oldroyd-B nanofluid with Joule heating. Appl Nanosci 11:1–8
36. Rawat SK, Upreti H, Kumar M (2021) Numerical study of activation energy and thermal radiation effects on Oldroyd-B nanofluid flow using the Cattaneo-Christov double diffusion model over a convectively heated stretching sheet. Heat Transfer
37. Yoon D-Y, Kim MC, Choi CK (2004) The onset of oscillatory convection in a horizontal porous layer saturated with viscoelastic liquid. Transp Porous Media 55:275–284
38. Buongiorno J (2006) Convective transport in nanofluids. ASME J Heat Transfer 128(3):240–250
39. Nield DA, Kuznetsov AV (2014) Thermal instability in a porous medium layer saturated by a nanofluid: a revised model. Int J Heat Mass Transfer 68:211–214
40. Baehr HD, Stephan K (2011) Heat and mass transfer, 3rd edn. Springer, New York
41. Lapwood ER (1948) Convection of a fluid in a porous medium. Proc Camb Phil Soc 44(4):508–521
42. Horton CW, Rogers FT Jr (1945) Convection currents in a porous medium. J Appl Phys 16(6):367–370
43. Jaimala, Singh R, Tyagi VK (2018) Stability of a double diffusive convection in a Darcy porous layer saturated with Maxwell nanofluid under macroscopic filtration law: a realistic approach. Int J Heat Mass Transfer 125:290–209
44. Wang S, Tan W (2008) Stability analysis of double-diffusive convection of Maxwell fluid in a porous medium heated from below. Phys Lett A 372(17):3046–3050
45. Jaimala, Singh R, Tyagi VK (2017) A macroscopic filtration model for natural convection in a Darcy Maxwell nanofluid saturated porous layer with no nanoparticle flux at the boundary. Int J Heat Mass Transfer 111:451–466

Interpretive Psychotherapy of Text Mining Approaches

Santosh Kumar Dwivedi, Manpreet Singh Manna, and Rajeev Tripathi

Abstract In the current scenario, data collection grows rapidly, and most of it is organized in an unstructured format from which it is difficult to extract useful information. Text mining methodologies include summarization, characterization, grouping, information retrieval, and visualization. Text mining can be defined as a process for fetching interesting facts or knowledge from text documents; it also includes the method of finding relevant and heterogeneous patterns or knowledge from text articles. Text mining, or knowledge discovery from textual databases, is regarded as the next big thing in knowledge discovery. This paper proposes improvements in several text mining approaches.

Keywords Data-mining · Text-mining · Clustering · Classifications · Information extraction · Patterns identification

1 Introduction

Data grows at an exponential rate in daily use. Electronic data storage is used by almost all kinds of organizations and corporate sectors. Over the internet, digital libraries, repositories, and other textual material such as blogs, social media networks, and mails exist, and a significant volume of text moves over the internet [1]. Determining appropriate patterns and inclinations to fetch important knowledge from this huge quantity of data is a difficult undertaking [2]. Textual data are hard to manage with conventional data mining methods, since retrieving information takes time and effort. Text mining is a method for fetching relevant and noteworthy patterns from textual data sources in order to discover knowledge [3]. Data mining, information retrieval, machine learning, and computational linguistics are all used in text mining. To discover insights, text mining uses specific techniques. Text mining

S. K. Dwivedi (B) · R. Tripathi
SRMGPC, Lucknow, India

M. S. Manna
SLIET, Longowal, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_23


is associated with natural language text in semi-structured and unstructured formats [4]. Text mining techniques are used in a variety of settings, including industry, academia, online applications, the commercial sector, the internet, and others [5]. Text mining is used for opinion mining, attribute extraction, emotion, prediction, and trend analysis in fields such as search tools, customer relationship management systems, spam filtering, product recommendation analysis, fraud detection, and social media data investigation [6]. Text mining solutions are used to scrutinize digital data from different written sources and social media platforms to spot patterns and trends in brand attraction, product preference, usage patterns, and more. Sentiment analysis is employed to conclude whether a conversation about a brand is positive or negative; to help retailers make better use of text mining solutions, we surveyed shoppers to find out how they feel about vendors using these technologies to improve their electronic commerce experience.

2 Related Work

Text mining is known as drawing interesting patterns from huge data sets in order to discover meaningful information; it is also known as knowledge discovery from text article information. The area of text mining includes information retrieval, text analysis, information extraction, clustering, classification, visualization, and data processing. The future of knowledge discovery carries high commercial expectations. Our research shows that the general structure of text mining includes two phases: text refining, which converts unstructured text data into an intermediate form, and knowledge distillation, which deduces patterns from the intermediate form. Accordingly, we organize the text mining approaches by how they perform text refining into an intermediate structure (Fig. 1).

Text mining thus proceeds in two stages: refinement of text, which converts free-form text content into a chosen intermediate form, and knowledge purification, which extracts patterns or knowledge from the intermediate form. The intermediate form is frequently semi-structured, like a conceptual graph representation, or structured, like relational data expressions. It is either document-based, where each object represents an article, or concept-based, where each object represents an entity or concept of interest in a particular field. Mining a document-based intermediate form deduces patterns and relationships across articles; article clustering, visualization, and classification are based on such intermediate forms. Mining a concept-based intermediate form derives patterns and relationships across entities or concepts; analytic operations such as relationship discovery belong to data mining operations. A document-based intermediate form can be restricted to fetching the content relevant to a specific area, so a concept-based intermediate form is usually domain-dependent.


Fig. 1 Structure of the text mining process

Mining tools adapted to a particular domain of investigation make analysis easier to carry out. Recent text mining tools are built in such a way as to support knowledge workers, and future mining tools will be part of knowledge systems used frequently by organization executives. Some effort is needed to develop systems that interpret verbal requests and perform specific retrieval operations repeatedly. Mining instruments may also embed their own agents: in an agent model, a personalized miner would learn a user's interests, arrange and mine documents itself, and derive further essential values without requiring a specific request.

2.1 Extraction of Information

Information extraction is a method for obtaining useful data from enormous collections of documents. Domain experts determine the domain's properties and relationships. Key elements and entities are extracted from the document using IE systems, and their relationships are recognized. The extracted content is stored in a database to be processed afterward. For reviewing and calculating the relevance of results on the fetched content, the precision and recall procedure is employed. To conduct the information extraction procedure and obtain more effective findings, detailed and complete knowledge of the applicable field is required.
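The precision and recall procedure mentioned above can be sketched in a few lines of Python. The extracted items and the gold-standard annotations below are invented purely for illustration; real IE evaluation compares system output against a hand-annotated set in the same way.

```python
def precision_recall(extracted, relevant):
    """Precision: fraction of extracted items that are in the gold set.
    Recall: fraction of gold items that were actually extracted."""
    extracted, relevant = set(extracted), set(relevant)
    true_positives = len(extracted & relevant)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical IE output vs. a hand-annotated gold standard
extracted = {"Acme Corp", "2021", "Lucknow"}
gold = {"Acme Corp", "Lucknow", "India", "SRMGPC"}
p, r = precision_recall(extracted, gold)
print(round(p, 3), round(r, 3))  # 2 of 3 extractions correct; 2 of 4 gold items found
```

High precision with low recall means the extractor is cautious; the opposite means it over-extracts.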

2.2 Information Recovery

Information recovery is the method of identifying and fetching relevant matches from a corpus of expressions or phrases. Text mining and information recovery for textual data have a close relationship. Different algorithms are employed in IR systems to track the


behavior of users and search for relevant data. Different search engines are increasingly adopting information recovery systems to pull relevant articles from the Web based on a term. To validate results and produce more significant findings, some search engines employ query-based algorithms. Common search tools thereby provide users with more pertinent data that meets their needs.
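A minimal sketch of such query-based ranking, assuming a TF-IDF weighting (the standard IR scheme; the paper does not name a specific algorithm, and the toy corpus is invented):

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Score each document by the summed TF-IDF weight of the query
    terms it contains, and return document indices, best match first."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    doc_freq = Counter()
    for tokens in tokenized:
        doc_freq.update(set(tokens))  # count each term once per document
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append(sum((tf[t] / len(tokens)) * math.log(n / doc_freq[t])
                          for t in query.lower().split() if t in tf))
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

docs = ["cloud data security at rest",
        "mining text data for patterns",
        "text mining articles"]
print(tfidf_rank("text mining", docs))  # → [2, 1, 0]
```

The shortest document containing both query terms gets the highest term-frequency weight, so it ranks first; the document with no query terms ranks last.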

2.3 Natural Language Processing

Unstructured text data are automatically processed and analyzed. NLP performs a variety of analyses, including named entity recognition for abbreviations and matching article extraction to uncover links between entities [7]. From a batch of texts, it identifies all occurrences of a specific object. These entities and their occurrences allow relationships and other data to be classified in order to get at their core notion. This technique, on the other hand, lacks a complete vocabulary of the items used in classification, so complex query-based algorithms must be applied to get acceptable results. In the real world, a single entity might be referred to by a variety of titles, such as TV and television. Using classification techniques, a set of sequential words may have multiple expression names; the borders must be recognized and overlapping difficulties resolved. Approaches usually fall into one of several categories: lexical, rule-based, statistical, or a combination of these. The systems described have attained relevance levels of approximately 74–86%.
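As a toy illustration of the synonym problem mentioned above (TV vs. television), a dictionary-based entity spotter can map every known surface form to one canonical name. The dictionary here is invented for the demo; real NER systems learn such mappings rather than hard-coding them:

```python
import re

# Hypothetical surface-form dictionary mapping variants to a canonical entity
ENTITY_FORMS = {"tv": "television", "television": "television",
                "e-mail": "email", "email": "email"}

def spot_entities(text):
    """Return the canonical entity name for every known surface form."""
    tokens = re.findall(r"[a-z][a-z-]*", text.lower())
    return [ENTITY_FORMS[t] for t in tokens if t in ENTITY_FORMS]

print(spot_entities("The TV ad asked users to send an e-mail."))
# → ['television', 'email']
```

This is the lexical category named above; the rule-based and statistical approaches replace the fixed dictionary with patterns or learned models.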

2.4 Grouping

Grouping (clustering) is an unsupervised technique that uses several grouping algorithms to categorize text articles into groups. Similar terms or patterns are grouped in clusters and extracted from various documents. Grouping is done in both vertical and horizontal tracking approaches. For the examination of unstructured text, many sorts of mining tools and techniques are used with NLP. Diverse clustering techniques include hierarchical, partitioning, density-based, centroid-based, and K-means clustering [8].
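A minimal sketch of similarity-based grouping: a greedy one-pass variant, deliberately far simpler than the hierarchical or K-means methods named above, using Jaccard word overlap. The threshold and documents are invented for the demo:

```python
def jaccard(a, b):
    """Word-overlap similarity between two documents."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def group_documents(docs, threshold=0.25):
    """Greedy single-pass clustering: put each document into the first
    existing group whose seed document is similar enough, else start
    a new group."""
    groups = []
    for doc in docs:
        for group in groups:
            if jaccard(doc, group[0]) >= threshold:
                group.append(doc)
                break
        else:
            groups.append([doc])
    return groups

docs = ["text mining methods",
        "mining text articles",
        "cloud security issues",
        "security of cloud data"]
print(group_documents(docs))  # two groups: text-mining docs, cloud-security docs
```

Real clustering algorithms revisit assignments (K-means) or build a full similarity tree (hierarchical); this one-pass version only shows the core idea of grouping by similarity.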

2.5 Text Summarization

Text summarization is the process of collecting and producing a condensed version of an original article. For summarization, pre-processing and processing stages are applied to the raw text. During pre-processing, tokenization, special-character removal, and stemming methods are used. Lexicon lists are formed during the processing stage. Weighted heuristics extract features from text documents by following specified rules. For producing the summary, classification properties such as phrase length, cue phrases,

Interpretive Psychotherapy of Text Mining Approaches


section position, title words, and capitalized words can be introduced and examined. Text summarization techniques can be applied to many documents at the same time. The nature and theme of the text documents determine the quality and choice of classifiers [6].

3 List of Text Mining Algorithms

3.1 K-Nearest Neighbor

Because of its accuracy and efficiency, K-nearest neighbor (KNN) is one of the most widely used text mining techniques. It is a non-parametric classification algorithm. In a nutshell, KNN is a straightforward procedure that keeps all available data items and categorizes new data objects using a similarity metric. In text analysis, it is used to measure the similarity between a document and the k training documents; the goal is to figure out which class the test document belongs to. One of KNN's most important text mining uses is "concept search" (looking for semantically similar documents), a function in software products that helps organizations find emails, business correspondence, reports, contracts, and other documents.
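As a minimal sketch of this idea (the corpus, labels, and query below are invented for illustration), documents can be vectorized with TF-IDF and a new document assigned the majority class of its k nearest neighbours:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy training corpus: two classes of short documents (invented example).
docs = [
    "invoice payment due report", "quarterly report business payment",
    "birthday party fun music", "music concert party tickets",
]
labels = ["business", "business", "leisure", "leisure"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Classify by majority vote among the 3 nearest training documents.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, labels)

query = vectorizer.transform(["annual payment report"])
print(knn.predict(query)[0])
```

The query shares the terms payment and report with the business documents, so its nearest neighbours come from that class.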

3.2 Naive Bayes Classifier (NBC)

The NBC is not a single computational method but rather a collection of procedures that share the assumption that the features used in classification are independent of one another. Off-the-shelf implementations make coding with it incredibly simple. Naive Bayes is a text classification algorithm with a variety of applications, including e-mail spam detection, document grouping, mail sorting, age/gender classification, language identification, and sentiment analysis, to name a few.
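A hedged sketch of the spam-detection use case named above, using an invented four-message corpus and scikit-learn's multinomial Naive Bayes:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy spam/ham corpus for illustration only.
mails = [
    "win money now claim prize", "free prize click now",
    "meeting agenda for monday", "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(mails)

# Word counts are treated as conditionally independent given the class.
nb = MultinomialNB()
nb.fit(X, labels)

print(nb.predict(vectorizer.transform(["claim your free prize now"])))
```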

3.3 K-Means Clustering

K-means clustering is a popular data analysis tool for identifying groups in a set of data, where K is a variable that represents the number of groups. It is one of the most straightforward unsupervised learning strategies for clustering problems. The main concept is to construct a set of k centroids that will be used to


classify new data points. K-means is a traditional text clustering method, commonly used for document classification, social media post grouping, and search keyword clustering, among other things. Using K-means on text data requires first converting the text into a numeric representation. Users of the R system will know that it comes with a number of packages that make the process easier.
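Although the text mentions R, the same text-to-numeric conversion followed by K-means can be sketched in Python with scikit-learn (the four-document corpus is invented; it contains two obvious themes):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented mini-corpus with two obvious themes (sport vs. finance).
docs = [
    "football match goal score", "goal keeper football league",
    "stock market shares price", "market price shares trading",
]

# Step 1: convert text to numeric TF-IDF vectors.
X = TfidfVectorizer().fit_transform(docs)

# Step 2: fit k = 2 centroids; fixed random_state for reproducibility.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```

The two sport documents end up in one cluster and the two finance documents in the other.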

3.4 Support Vector Machines

The support vector machine (SVM) approach is one of the most effective text mining algorithms for classification. SVM is a supervised machine learning algorithm that is used to solve classification problems and to find outliers; it can also be adapted for regression tasks. SVMs are typically used to separate two classes of data: the algorithm draws a boundary that segregates the groups according to their common characteristics. The aim of an SVM is to construct a hyperplane, and the hyperplane that separates the groups with the maximum margin is best. In the real world, SVMs are applied to problems such as text and image categorization, handwriting recognition, face recognition, and biological sequence analysis. Within text mining, SVMs are mainly used for text classification tasks such as spam detection, sentiment detection, and sorting articles into categories such as web pages, reports, and e-mail messages.
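A small sketch of SVM text classification (the review corpus and labels are invented): a linear SVM fits a maximum-margin hyperplane between two sentiment classes in TF-IDF space.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Invented toy reviews; labels mark positive/negative sentiment.
docs = [
    "great wonderful product love it",
    "excellent quality love this product",
    "terrible awful waste of money",
    "awful product broke quickly terrible",
]
labels = ["pos", "pos", "neg", "neg"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Fit a maximum-margin separating hyperplane between the two classes.
svm = LinearSVC()
svm.fit(X, labels)

print(svm.predict(vectorizer.transform(["wonderful excellent product"])))
```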

3.5 Decision Tree

The decision tree algorithm is a traditional machine learning technique for data mining that builds classification or regression models in the form of a tree. The structure consists of a root node, branches, and leaf nodes. Each internal node represents a test on an attribute, each branch represents the outcome of a test, and each leaf node carries a class label. The decision tree procedure is both non-parametric and straightforward. Decision trees have a wide range of applications as text mining algorithms, including evaluating all of the text generated by customer relationship


departments. They are also used to make medical predictions based on clinical reports and other information.

3.6 Generalized Linear Models (GLM)

Generalized linear models are a common statistical technique for linear modeling. In fact, GLMs subsume a wide range of models, including linear regression, logistic regression, Poisson regression, ANOVA, log-linear models, and so on. Integrating this statistical approach with data mining tools provides a number of advantages, including speeding up the modeling process and increasing accuracy. GLM is one of the primary text mining techniques used by several of the leading content-analysis systems, such as Oracle's.

3.7 Neural Networks

Neural networks are non-linear models inspired by the functioning of the human brain. Despite their sophisticated structure and lengthy training time, neural networks are essential in data analysis and text mining. In content analytics, neural networks can be used to group similar patterns, classify patterns, and so on. Because of characteristics such as self-organization, adaptiveness, parallel processing, fault tolerance, and robustness, neural networks play a critical role in data mining. In applied settings such as banking and advertising, neural networks are used extensively for content mining through article analysis.

3.8 Association Rules

Association rules are simple conditional statements that seek to identify connections between seemingly unrelated items in a repository. They can discover relationships between products that are often purchased together. Market-basket analysis, cross-selling, grouping, classification, and catalogue design are common applications of association rules. For instance, if a customer purchases eggs, he may also purchase milk.
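The eggs-and-milk example can be made concrete with the two basic rule measures, support and confidence, computed over a handful of invented market baskets:

```python
# Invented transaction data: each basket is a set of purchased items.
baskets = [
    {"eggs", "milk", "bread"},
    {"eggs", "milk"},
    {"eggs", "butter"},
    {"milk", "bread"},
]

def support(itemset):
    """Fraction of baskets containing every item of the itemset."""
    hits = sum(1 for b in baskets if itemset <= b)
    return hits / len(baskets)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the baskets."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"eggs", "milk"}))       # joint support of {eggs, milk}
print(confidence({"eggs"}, {"milk"}))  # confidence of eggs -> milk
```

Here eggs and milk appear together in 2 of 4 baskets (support 0.5), and 2 of the 3 baskets containing eggs also contain milk (confidence 2/3).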


3.9 Genetic Algorithms

Genetic algorithms, also known as evolutionary algorithms, are a class of stochastic search methods derived from Neo-Darwinian theory. A genetic algorithm uses chromosomes to encode the characteristics that make up an individual; in essence, it aims to replicate natural evolution. Genetic algorithms are adaptable and robust search methods, which makes them well suited for data mining. Clustering, the development of classification rules, attribute selection, and model building are just a few of the text data mining challenges that genetic algorithms can help with.

3.9.1 Latent Dirichlet Allocation

Latent Dirichlet allocation (LDA) is now widely used in text modeling. It is a generative probabilistic approach over collections of discrete text data that automatically discovers topics in a given set of documents. It has many modern adaptations, such as correlated topic models, and a diversity of uses in information retrieval. Suppose we have a large number of documents, such as e-mails, that we need to read; in this scenario LDA surfaces the topics present, each characterized by its most likely terms.
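As a hedged sketch of LDA topic discovery (the four-document corpus is invented, mixing a food theme and a computing theme), scikit-learn's implementation returns one topic distribution per document:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented mini-corpus mixing two rough themes (food vs. computing).
docs = [
    "apple banana fruit juice", "banana fruit salad apple",
    "cpu memory computer code", "code compiler cpu memory",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # one topic distribution per document
print(doc_topics.shape)
```

Each row of `doc_topics` sums to 1 and gives the document's mixture over the two discovered topics; the most likely terms per topic are available via `lda.components_`.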

4 Advanced Methods

Now that we have covered the fundamentals of text analysis, we will go through text categorization and text extraction.

4.1 Text Classification

Text classification is the practice of assigning predetermined tags or categories to unstructured content. Because it is so adaptable and can organize, arrange, and categorize pretty much anything, it is regarded as one of the most helpful unstructured data processing approaches for delivering relevant data and solving problems.

4.2 Sentiment Analysis

Because emotions are essential to effective human communication, we must teach machines to identify sentiments and categorize text as neutral, positive, or


negative. Sentiment analysis is useful in this situation. It is the computer-assisted technique of deducing an individual's viewpoint on a specific topic from written or spoken language. Additional uses of sentiment classifiers include assessing brand reputation, carrying out market research, and improving products with consumer feedback.

4.3 Topic Analysis

Topic analysis, or, to put it another way, figuring out what a text is about, is another typical example of text classification. It is often used for structuring and organizing data. For illustration, the review "The journal is really good and easy to use" can be classified under ease of use.

4.4 N-grams

N-grams are a vital concept to grasp in content analytics. An N-gram is a contiguous sequence of N elements appearing next to one another in a text, where N is a number denoting how many elements are in the sequence. When we type text into a search tool, we can observe how the search engine's language model begins to predict the next set of terms based on context; this is the search tool's auto-complete function. Using the NLTK Python library we can extract N-grams as follows:

import nltk
from nltk.util import ngrams

text = 'Customer publication'
one_grams = ngrams(nltk.word_tokenize(text), 1)
two_grams = ngrams(nltk.word_tokenize(text), 2)
three_grams = ngrams(nltk.word_tokenize(text), 3)

4.5 Bag of Words (BoW)

The basic idea behind the bag-of-words method is that text must first be transformed into numbers before it can be used in mathematical calculations, and a variety of approaches exist for this conversion. In this method, the representation records how many times each word appears in an article. The goal of BoW is to create a term-document matrix in which the rows represent the terms and the columns represent the document names. The frequency


of each phrase in the document may then be filled into the matrix, ignoring the order of terms and grammar. Assume you want to extract messages from WhatsApp and other social media content that contain the term "NLP". The messages may then be tokenized into words, and the term-document matrix populated with one column per document and one row per term. The frequency of each phrase inside a document is then entered into the matrix. An implementation in Python (get_tweets() and get_fb_statuses() stand in for whatever loaders supply the raw text):

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# get_tweets() and get_fb_statuses() are placeholder loaders for the raw text
docs = get_tweets() + get_fb_statuses()
vectoriser = CountVectorizer()
vec = vectoriser.fit_transform(docs)
# rows are terms, columns are documents
df = pd.DataFrame(vec.toarray().transpose(),
                  index=vectoriser.get_feature_names_out())

4.6 Term Frequency-Inverse Document Frequency

In NLP projects we need to recognize the significance of each term. TF-IDF is an excellent statistical tool for this: it helps in comprehending the importance of a term. A score is computed for each term in a document by performing the following steps (Fig. 2):

1. Compute the term's frequency in the document, called the term frequency (TF). This is done by dividing the number of times the term appears in the document by the total number of terms in the document.
2. Compute the inverse of the term's document frequency. Divide the total number of documents by the number of documents that contain the term, then take the logarithm of this ratio; the result is a positive value called the inverse document frequency (IDF).
3. Finally, multiply the result of step 1 by the result of step 2 to obtain the TF-IDF score.

In Python (getdata() and getstatus() stand in for whatever loaders supply the raw text):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# getdata() and getstatus() are placeholder loaders for the raw text
docs = getdata() + getstatus()
vectoriser = TfidfVectorizer()
vec = vectoriser.fit_transform(docs)
df = pd.DataFrame(vec.toarray().transpose(),
                  index=vectoriser.get_feature_names_out())
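The three steps above can also be sketched by hand, without a library, on an invented three-document corpus:

```python
import math

# Invented three-document corpus.
docs = [
    "nlp makes text mining easy",
    "text mining extracts knowledge",
    "deep learning models need data",
]
tokenized = [d.split() for d in docs]

def tf(term, doc_tokens):
    """Step 1: term count divided by the total terms in the document."""
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term):
    """Step 2: log of (total documents / documents containing the term)."""
    containing = sum(1 for d in tokenized if term in d)
    return math.log(len(tokenized) / containing)

def tf_idf(term, doc_tokens):
    """Step 3: multiply TF by IDF."""
    return tf(term, doc_tokens) * idf(term)

print(tf_idf("mining", tokenized[0]))
```

For "mining" in the first document, TF is 1/5 and IDF is log(3/2), since two of the three documents contain the term.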


Fig. 2 Steps for term frequency calculation

5 Conclusion

In the area of natural language content, text mining procedures are supplementary but precise mining approaches. The content spans several categories, such as e-mail, status updates, proposals, blogs, and other kinds of unclassified content. Text mining methods are designed to extract some knowledge of how text is organized without the need for a human to interpret it; a computer, on the other hand, can only look at the individual letters in each word and how they are arranged.

6 Future Work

E-commerce on the Web has come a long way: we no longer see the actual item or rely on a middleman to recommend a product, but the customer can read reviews from other customers and take as much time as needed to select items [9]. To address this, text mining will be an excellent way to guide the customer toward the relevant products. Next-generation self-service text analytics brings all of the data sources, algorithms, and AI tooling needed into one place: users interact with records through an intuitive visual interface to prepare, investigate, enrich, and build models faster than ever before. Text mining, the practice of examining textual data in order to recognize patterns and gain insights, is increasingly being used by e-commerce vendors to learn more about buyers. By recognizing customer purchase patterns and opinions on particular items, online retailers can target specific individuals or segments with customized deals and discounts to increase sales and build greater customer loyalty. Text analysis has made it clear that any article is driven largely by its keywords. It is therefore always worthwhile to keep track of the most established words that appear in text articles and use them as keywords, so that customers have a sense of


what to focus on, and it gives the business an idea of where the e-commerce market is going.

References

1. Sagayam R (2012) A survey of text mining: retrieval, extraction and indexing techniques. Int J Comput Eng Res 2(5)
2. Padhy N, Mishra D, Panigrahi R et al (2012) The survey of data mining applications and feature scope. arXiv preprint arXiv:1211.5723
3. Fan W, Wallace L, Rich S, Zhang Z (2006) Tapping the power of text mining. Commun ACM 49(9):76–82
4. Weiss SM, Indurkhya N, Zhang T, Damerau F (2010) Text mining: predictive methods for analyzing unstructured information. Springer Science and Business Media
5. Liao S-H, Chu P-H, Hsiao P-Y (2012) Data mining techniques and applications – a decade review from 2000 to 2011. Expert Syst Appl 39(12):11303–11311
6. Al-Hashemi R (2010) Text summarization extraction systems using extracted keywords. Int Arab J e-Technol 1(4):164–168
7. Fayyad U, Piatetsky-Shapiro G, Smyth P (1996) From data mining to knowledge discovery: an overview. In: Fayyad U et al (eds) Advances in knowledge discovery and data mining. MIT Press, Cambridge, Mass., pp 1–36
8. Tripathi R, Dwivedi S (2016) A quick review of data stream mining algorithms. 2(7)
9. Tripathi R, Dwivedi S (2020) Resolution of e-commerce market trend using text mining. IJSRCSE 8(1):01–05

Sarcasm Detection Using SVM

Atul Kumar, Pooja Agrawal, Ratnesh Kumar, Sahil Verma, and Divya Shukla

Abstract Many industries need sentiment analysis for understanding customer reviews. The main hurdle in sentiment analysis is the existence of sarcasm, whose presence can diminish the accuracy of the analysis. Sarcasm can be described as mocking in a convenient way, and it occurs in two forms, numerical and contextual. Numerical sarcasm uses extreme numerical values that convey a contrary sentiment, while contextual sarcasm arises from conflicting sentiment words. There are many approaches to the problem; SVM and neural networks are the most widely applied. The purpose of this study is to improve on previous SVM accuracy through feature engineering. The features used are delta TF-IDF, word2vec, and pattern-related features. The resulting SVM model surpassed the previous SVM model by 7% in F-score on the same dataset.

Keywords Sarcasm · Twitter · SVM · TF-IDF · Stemming · Tokenizer

1 Introduction

Sentiment analysis is one of the most rapidly emerging fields in natural language processing. Sentiment analysis is the process of classifying a given text as positive, neutral, or negative. It plays an important role on social media platforms such as Twitter and on sites such as Amazon, where it is used to derive the sentiment value of a text and classify it as positive, negative, or neutral. Social media is a vast collection of unstructured data [1], and this unstructured data is increasing exponentially. Extracting useful information, such as sentiment, from unstructured text and analysing it to gain insight into the text is called sentiment analysis; companies use it to understand customer reviews. Sarcasm is one of the main hindrances in sentiment analysis [1–11]. Sarcasm can be defined as mocking in a convenient manner. It is hard to detect because it contains both positive and negative sentiment in one text. In other words, sarcasm can be defined as a tool to mock others without the use of harsh or only negative

A. Kumar (B) · P. Agrawal · R. Kumar · S. Verma · D. Shukla
SRMCEM, Lucknow, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_24


words. Sarcastic sentences carry both positive and negative sentiment, which is a hindrance to sentiment analysis [3]. Sarcasm can be used in different ways, such as:

1. Sarcasm as wit
2. Sarcasm as irony
3. Sarcasm as humor

Although sarcasm is used in different ways, it mostly falls under two categories: numerical sarcasm and non-numerical sarcasm. Numerical sarcasm arises from numerical values; about 17% of sarcastic tweets have a number as their origin. For example:

I like working 20 h a day.
That coffee cost me 5 K dollars, isn't that amazing?

Here the sarcasm arises from the numerical values: when we change the numbers to something feasible, the sentences no longer count as sarcasm. These types of sarcasm can be detected if feasible values are known, and template-based models work best for them. Non-numerical sarcasm, on the other hand, does not depend on numerical values; it depends on positive and negative sentiment words. Such sarcasm exists because of incongruity in the text, which can be implicit or explicit. For example:

The backlogs are in love with me, they just don't leave me alone.
WOW! You sing amazing, just like Albert Einstein.

The first example is sarcastic because of explicit incongruity: the words backlogs and love carry different sentiment, and no one loves backlogs. The second example is sarcastic because of implicit incongruity: singing was far from Albert Einstein's talents, so comparing someone's singing to Albert Einstein implies the singing is not good, yet here the singing is called amazing. To detect these types of sarcasm, one needs to know the personality of the person referred to. The rest of the paper is organized as follows: Sect. 2 reviews past research and the progress made so far. Section 3 describes the dataset, its size, and the occurrence of sarcastic words in sarcastic tweets. Section 4 covers the techniques used to clean the dataset, while Sect. 5 presents the feature extraction model; the features are mainly extracted from the pre-processed data. Sections 6 and 7 contain the classification model and the experimental results. Section 8 presents our findings and future improvements that can be made to the model.

2 Related Work

Sarcasm is mocking in a convenient way. Ivanko et al. represented sarcasm as the tuple

< S; H; C; U; p; p' >


where

• S is the speaker and H is the hearer
• C is the context and U is the utterance
• p is the literal proposition
• p' is the intended proposition

There has been much research on sarcasm detection [4]. According to Pushpak Bhattacharyya et al., sarcasm can be classified mainly into two parts: numeric-based sarcasm and text-based sarcasm. Different linguists have described different forms of sarcasm.

2.1 Characteristics of Sarcasm

John D. Campbell et al. showed that sarcasm operates along several dimensions: failed expectation, pragmatic insincerity, negative tension, and the presence of a victim. According to Deirdre Wilson et al., sarcasm arises when there is a conflict between the literal meaning and the intended meaning of the text. Jodi Eisterhold et al. showed that sarcasm can be detected from the reply it draws, which may be laughter, a blank response, a smiley, a change of topic, or a sarcastic text in return [5].

2.2 Types of Sarcasm

Dr. Vadivu Chandra Sekharan et al. divided sarcasm into different classes based on the difference between the intended meaning and the literal meaning:

• Coexistence of positive and negative: a negative sentence followed by a positive sentence, and vice versa
• Dilemma in the sentence: a negative phrase followed by a positive phrase, and vice versa
• Comparison between worse and worst: comparison with something better
• Incongruity in the sentence: no specific positive or negative point

Sarcasm also depends on the situation and surroundings of the speaker, or where the conversation is held.

2.3 Negation of Sarcasm

Sarcasm consists of the use of irony and the presence of ridicule; it can also be detected through negation or through the follow-up message.


3 Dataset

The dataset is one of the central assets for training a strong model. To classify whether a text is sarcastic or not, the model should be trained on a dataset that is generated by the general public and is rich in quality. Data generated by the general public reduces the bias problem, and a model trained on such a dataset will be more accurate in real-world scenarios. A Twitter dataset, generated by the general public in the form of tweets, is used to train the model. Twitter is one of the most used social media platforms for communicating ideas and opinions. Because the tweets are produced by the public as a whole rather than by a single class of users, the Twitter dataset contains generalized tweets, producing a generalized dataset. Tweets can be extracted using an API such as Twitter4J. The Twitter dataset is a massive collection of daily tweets from users all over the globe. Tweets can occur within other tweets, i.e. they can be nested, and nested tweets are harder to train on and learn from. A tweet can contain multiple kinds of data. For example:

Hey that's great info @user I think its an entirely new field https://haventdoneit.com/ #new #goforit

The above tweet contains a username (@user), a URL, and hashtags; sometimes a tweet can also contain meta tags. Two types of dataset are mainly required: one with #sarcasm and one without it, i.e. sarcastic tweets and non-sarcastic tweets. The dataset contains 3 K non-sarcastic tweets and 2.8 K sarcastic tweets. Figure 1 shows a word cloud of the sarcastic data.

Fig. 1 Sarcastic tweets wordcloud


4 Data Preprocessing

The data created by the general public is not suitable for training and processing directly; preprocessing is required to make it suitable. In natural language processing, preprocessing is performed according to the data and the attributes required. The preprocessing can be divided into the following steps.

4.1 Removal of URLs

Some extracted tweets contain URLs, which are redundant for sarcasm prediction: a URL carries no information about whether the tweet is sarcastic or not. URL removal was done using regular expressions; the Python library re was used to remove URLs from the text, as they have a fixed format.

4.2 Removal of @user

For privacy purposes, usernames in the dataset have already been changed to @user, so @user does not carry any meaningful information and removing it benefits the model. Every mention starts with '@' followed by a word; since it has a fixed format, it is removed using a regular expression.
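Both cleaning steps can be sketched with the re library (the sample tweet and the exact patterns are illustrative, not the paper's actual expressions):

```python
import re

tweet = "Hey that's great info @user check https://haventdoneit.com/ now"

# Strip URLs: http/https links have a fixed, easily matched format.
no_url = re.sub(r"https?://\S+", "", tweet)
# Strip mentions: '@' followed by a word.
cleaned = re.sub(r"@\w+", "", no_url)
# Collapse the whitespace left behind by the removals.
cleaned = " ".join(cleaned.split())
print(cleaned)
```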

4.3 Tokenization

Tokenization is an NLP technique for splitting a string into tokens, where a token is an individual word of the string. It can be performed using the nltk tokenize module, which takes a string as input and returns a list of strings, i.e. a tokenized copy of the given string.

4.4 Stemming

Stemming is a natural language processing technique used to extract the stem, or root word. It is useful because it reduces inflected forms to a common base, so words like reading and read, or playing and played, can be analysed as a single term. It can be done with nltk's PorterStemmer.
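A small sketch with NLTK's Porter stemmer (the word list is illustrative); the stemmer is rule-based, so it needs no downloaded corpora:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()  # rule-based; needs no downloaded corpora
words = ["reading", "read", "playing", "played"]
stems = [stemmer.stem(w) for w in words]
print(stems)
```

The inflected forms collapse onto their shared stems, so they can be counted as single terms.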


5 Feature Engineering

Feature engineering is a method of extracting useful features that make a model train faster and better; it enhances the dataset by extracting the values that have the most information gain for the problem. In sarcasm classification, the features are most of the time hidden in the background, and models trained only on pre-processed data have low accuracy because of these hidden features. Extracting the hidden features is the primary goal of this section. The following methods/techniques are used for feature extraction:

• Extracting hashtags
• Delta TF-IDF
• Word2vec
• Pattern-related features

5.1 Extracting Hashtags (#)

Tweets often contain hashtags for better understanding; #sarcasm, in particular, denotes whether a tweet is sarcastic or not. There can be many hashtags expressing the emotions and thoughts of a user, and hashtags can be used to express sarcasm: for example, a tweet with positive sentiment but negative hashtags is one of the key indicators of sarcasm. Since hashtags have a fixed format, starting with # and followed by a word, they can be extracted using a regular expression. A list of hashtags was added to the feature set.
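A one-line regular expression suffices for the extraction (the sample tweet is invented):

```python
import re

tweet = "Loving this 5 hour delay #sarcasm #greatday"
# A hashtag is '#' followed by a word.
hashtags = re.findall(r"#\w+", tweet)
print(hashtags)
```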

5.2 Delta TF-IDF

TF-IDF stands for term frequency-inverse document frequency. Delta TF-IDF is an improved feature extraction method compared with standard TF-IDF: it is based on the difference between the TF-IDF scores a term obtains in the positive and negative classes, which increases the weight and importance of discriminative words in each tweet. Let

T_{i,d} be the number of times term i occurs in document d,
S_i be the number of times term i occurs in tweets labelled sarcastic,
N_i be the number of times term i occurs in tweets labelled non-sarcastic,
S_j be the total number of tweets labelled sarcastic, and
N_j be the total number of tweets labelled non-sarcastic.

Delta TF-IDF can then be calculated as

V_{i,d} = T_{i,d} log2(S_j / S_i) - T_{i,d} log2(N_j / N_i)
        = T_{i,d} log2((S_j N_i) / (S_i N_j))

and, since the dataset is balanced (S_j = N_j),

V_{i,d} = T_{i,d} log2(N_i / S_i).

This makes a clear division between positive and negative tokens in the feature set.
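The combined formula is a one-liner; the counts below are invented purely to exercise it (the term occurs twice in the tweet, once in sarcastic tweets, four times in non-sarcastic ones, with 100 tweets per class):

```python
import math

def delta_tfidf(t_id, s_i, n_i, s_j, n_j):
    """Delta TF-IDF weight V_{i,d} = T_{i,d} * log2((S_j * N_i) / (S_i * N_j))."""
    return t_id * math.log2((s_j * n_i) / (s_i * n_j))

# Invented counts: balanced classes (100 tweets each), so the weight
# reduces to T_{i,d} * log2(N_i / S_i) = 2 * log2(4).
v = delta_tfidf(t_id=2, s_i=1, n_i=4, s_j=100, n_j=100)
print(v)
```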

5.3 Word2Vec

Features such as implicit and explicit incongruity can be extracted using word2vec. Implicit incongruity is expressed in an indirect way, while explicit incongruity is expressed directly. For example:

You sing amazing, just like Albert Einstein.
I love being hated.

The first example shows implicit incongruity, as Albert Einstein was not known for singing, while the second shows explicit incongruity, as love and hated have different polarity. Word2vec maps words into a vector space learned from a large text corpus; its purpose is to cluster words with similar meanings, so that words like love and happy lie close to each other, while words like love and psycho lie far apart. After obtaining the vectors, cosine similarity was used as the distance/similarity index between them; a pair with a lower cosine similarity lies farther apart in the vector space.
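The cosine similarity used here can be sketched directly (the 3-dimensional "embeddings" below are invented stand-ins for real word2vec vectors, which are typically hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented toy embeddings: the first pair points the same way
# (similar words), the second pair is orthogonal (unrelated words).
print(cosine_similarity([1.0, 2.0, 0.0], [2.0, 4.0, 0.0]))
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
```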

5.4 Pattern-Related Features

Sentences containing specific patterns sometimes have a higher probability of being sarcastic. In certain situations, capitalization, special keywords, punctuation marks, and emoticons are used to express sarcasm, and excessive use of these patterns can identify sarcasm in a text; words like "lol", "as if", and "bingo" sometimes indicate sarcasm. In pattern-related feature extraction, the patterns described above are extracted using regular expressions. The features record the presence of the following patterns:

• Repetitive sequences of characters and punctuation
• Capitalized words
• Emoticons
• Slang and booster words
• Exclamation marks
• Idioms

These features are binary: low presence is encoded as 0 and high presence as 1.
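A hedged sketch of a few of these binary flags (the exact patterns and the slang list are illustrative, not the paper's actual expressions):

```python
import re

def pattern_features(text):
    """Binary flags for a few of the surface patterns listed above."""
    return {
        # A character repeated three or more times ("sooo", "!!!").
        "repeated_chars": int(bool(re.search(r"(.)\1{2,}", text))),
        # A fully capitalized word of two or more letters ("WOW").
        "capitalized_word": int(bool(re.search(r"\b[A-Z]{2,}\b", text))),
        "exclamation": int("!" in text),
        # Tiny illustrative slang list.
        "slang": int(bool(re.search(r"\b(lol|bingo)\b", text, re.I))),
    }

print(pattern_features("WOW!!! that went sooo well lol"))
```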


6 Classification Model

The task is to predict whether a given text is sarcastic or not. It is a binary classification problem, since a given text/tweet can either be sarcastic or not. In a classification problem, the aim is, given a set of features, to assign each sample to one of the classes; in sarcasm detection there are two classes, sarcastic and non-sarcastic. The problem can be solved using various algorithms such as naive Bayes, support vector machines, logistic regression, neural networks, etc. In this paper a support vector machine is used, as it has performed well against the other algorithms. Figure 2 shows a t-SNE visualization of a sample of the dataset, drawn from the same distribution, with the parameters perplexity = 50, steps = 5000, and epsilon = 5. The support vector machine, commonly known as SVM, is a supervised learning algorithm that creates a hyperplane to classify the data. The fit of the hyperplane depends mainly on its hyper-parameters, and hyper-parameter tuning is one of the best ways to improve model accuracy. Three hyper-parameters are tuned: the error penalty parameter C, the kernel, and the kernel coefficient gamma. All tuning takes place under sevenfold cross-validation.

Fig. 2 t-SNE visualization of dataset, Sarcastic (blue) and Non-Sarcastic (yellow)
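The tuning of C, kernel, and gamma under sevenfold cross-validation can be sketched with scikit-learn's GridSearchCV; the feature matrix and parameter grid below are stand-ins, not the paper's actual features or search space:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in feature matrix; in the paper these would be the extracted
# delta TF-IDF / word2vec / pattern features.
X, y = make_classification(n_samples=140, n_features=10, random_state=0)

param_grid = {
    "C": [0.1, 1, 10],           # penalty parameter of the error term
    "kernel": ["linear", "rbf"],
    "gamma": ["scale", "auto"],  # kernel coefficient
}

# Sevenfold cross-validation, as used for all tuning in the paper,
# scored with the F-score used as the performance indicator.
search = GridSearchCV(SVC(), param_grid, cv=7, scoring="f1")
search.fit(X, y)
print(search.best_params_)
```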

Table 1 Model evaluation

Model                       F-score
SVM (Ashwin Bhat et al.)    0.81
SVM (delta TF-IDF)          0.74
SVM (word2vec)              0.48
SVM (pattern related)       0.68
SVM (all features)          0.88

7 Result After feature extraction and feature selection, the next step was experimentation. To evaluate the approach, the following performance indicators were used. Recall is the number of tweets correctly classified as sarcastic divided by the total number of sarcastic tweets. Precision is the number of tweets correctly classified as sarcastic divided by the total number of tweets classified as sarcastic. A third indicator, the F-score, measures how well the model performs overall; averaged over the query set it can be written as

p_avg = (1 / |D_q|) * Σ_{d_i ∈ D_q} p(i)

The F-score combines precision and recall, so it is more informative than either alone. The model was first trained on individual features to measure the information gain from each. Delta TF-IDF had the highest individual F-score, 74.22, and the pattern-related feature also performed well; for word2vec, precision was high but recall was low. It was found that training separate models on different features and combining them produces better results, so the final model combined models trained on individual features with a model trained on all features. Table 1 compares our model with the model used by Ashwin Bhat et al., with the F-score as the performance indicator. The model outperformed previous models by 7%, a difference due mainly to the feature extraction process; the use of delta TF-IDF made a large difference in accuracy.
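The three indicators can be computed directly from confusion-matrix counts. The counts below are made up for illustration, not results from the paper.

```python
# Precision, recall, and F-score from true positives (tp), false
# positives (fp), and false negatives (fn). The example counts are
# illustrative only.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)          # correct sarcastic / predicted sarcastic
    recall = tp / (tp + fn)             # correct sarcastic / actual sarcastic
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f = precision_recall_f1(tp=80, fp=20, fn=20)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # → 0.80 each
```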

8 Conclusion Feature selection is one of the most important parts of model training. In feature extraction, delta TF-IDF outperformed all the other features. The F-score of the model trained


on delta TF-IDF was 74.22%, higher than for any other single feature. Word2vec did not perform as well as expected, possibly due to low cosine distance between keywords. The pattern-based model was tuned to the data, and its performance was quite good. The model trained on all features showed better results, with the delta TF-IDF feature contributing the most to the F-score. Because the features were not weighted, each feature had an equal impact on the overall F-score; the model could be further improved with a weighted feature model that gives each feature a different importance, and Naive Bayes would be a good choice for weighting the features independently. Since the dataset was created by the general public, its main disadvantages are spelling mistakes and a few false sarcastic tags, as humans themselves do not understand sarcasm very well. Even so, the model assigns the sarcastic tag with higher accuracy than previous SVM approaches.

References
1. Bamman D, Smith NA (2015) Contextualized sarcasm detection on Twitter. In: Ninth international AAAI conference on web and social media
2. Tungthamthiti P, Shirai K, Mohd M (2018) Recognition of sarcasm in tweets based on concept level sentiment analysis and supervised learning approaches
3. Maynard D, Greenwood MA (2014) Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis
4. Abercrombie G, Hovy D (2016) Putting sarcasm detection into context: effects of class imbalance and manual labelling on supervised machine classification of Twitter conversations. ACL 2016(2016):107
5. Barbieri F, Saggion H, Ronzano F (2014) Modelling sarcasm in Twitter, a novel approach. ACL 2014(2014):50
6. Kumar A et al (2021) A comparative analysis of pre-processing time in summary of Hindi language using Stanza and Spacy. IOP Conf Ser: Mater Sci Eng 1110(1):012019. https://doi.org/10.1088/1757-899x/1110/1/012019
7. Kumar A, Katiyar V (2018) A comparative analysis of different text summarizers. IJRAR—Int J Res Anal Rev 5(4):610–613. E-ISSN 2348-1269, P-ISSN 2349-5138. Available at: http://www.ijrar.org/IJRAR19D2463.pdf
8. Kumar A, Katiyar V (2019) A comparative analysis of sarcasm detection. Int J Recent Eng Res Dev (IJRERD) 04(08):104–108. ISSN: 2455-8761. www.ijrerd.com
9. Kumar A, Kumar R, Shrivastava SK (2020) Describing image using neural networks. In: Khanna A, Gupta D, Bhattacharyya S, Snasel V, Platos J, Hassanien A (eds) International conference on innovative computing and communications. Advances in intelligent systems and computing, vol 1087. Springer, Singapore. https://doi.org/10.1007/978-981-15-1286-5_53
10. Mallick PK, Bhoi AK, Chae GS, Kalita K (eds) (2021) Advances in electronics, communication and computing: select proceedings of ETAEERE 2020, vol 709. Springer Nature
11. Conroy MJ, O'Leary DP (2001) Text summarization via hidden Markov models. In: Proceedings of SIGIR '01, pp 406–407

Text Summarization in Hindi Language Using TF-IDF Atul Kumar, Vinodani Katiyar, and Bhavesh Kumar Chauhan

Abstract Text summarization has attracted growing interest from researchers over the past few years due to the gigantic amount of information available on the Internet. It is impossible for humans to manually summarize such a large amount of information and obtain precise, meaningful summaries in little time; hence, an automatic text summarizer is needed to get the job done. The aim of this research was to create a medium through which Hindi-language text could be summarized and put to practical use. In this paper, we use the extractive method to achieve this goal. Keywords Text summarization · Stop words · Stemming · Tokenization · Pre-processing

1 Introduction The Internet is growing day by day, carrying an enormous amount of diffuse information, and it has become very difficult to retrieve relevant information from the large unstructured data stored online. This retrieval problem can be addressed with the help of the text summarization process. Text summarization, or automatic text summarization, is the procedure by which a program automatically creates an abstract or summary of one or more text documents. The aim of the procedure is to preserve the basic concepts and essence of the original text. Today, text summarization is an enormous domain within machine learning, computational intelligence, and natural language processing research. There are mainly two types of text summarization techniques: generic and query-specific. Generic text summarization extracts

A. Kumar (B) · B. K. Chauhan, SRMGPC, Lucknow, India
V. Katiyar, DSMRU, Lucknow, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_25


the most central sentences from the given input text, independent of any query, and places them into the summary of the original text. Query-specific text summarization retrieves and scores the sentences containing words matching the keywords of the query; with the increase of information available online, such approaches have been developed very extensively. In the realm of automatic summarization, different kinds of summarization have been attempted, with two main approaches to summarizing text documents. Extractive methods select phrases and sentences from the source document to make up the new summary; techniques involve ranking the relevance of phrases in order to choose only those most relevant to the meaning of the source. Abstractive methods generate entirely new phrases and sentences to capture the meaning of the source document; this is a more challenging approach, but it is also the approach ultimately used by humans, whereas classical methods operate by selecting and compressing content from the source document. We have considered Hindi as the language of study. It is written in the Devanagari script, which has a very large alphabet set. Hindi is the official language of India and the native language of most people living in North India, so for people who do not know English but want to read articles on the Internet, automatic summarization would play a great role. A lot of work has been done on the English language, for which ample resources are readily available; relatively few researchers have shown interest in the Hindi language.

2 Literature Review According to Abualigah et al. 2020, text summarization gives us a condensed form of a document, conveying its gist to readers without requiring the entire document [1]. One of the approaches the authors discuss is abstractive summarization, used when we have multiple documents and want an abstract of the important points. The authors note that nowadays the Internet is full of information, and data rules the world; to learn exactly where to look for a certain piece of information, we need an abstract of it, and hence a text summarizer that can provide the gist of a dataset too large for humans to read. This is the reason for the popularity of and need for text summarizers. Finally, the authors survey similar and relevant research in the text summarization domain, including its future scope, which helps fellow researchers interested in serving today's world with a tool that could


effectively save time while providing the required information. According to Verma et al. 2019, text summarization is the process of transforming a large document into a clear and concise form; their paper presents a detailed, comprehensive study of various extractive methods for summarizing large text documents, on Hindi and English datasets of news articles [2]. They considered 13 different summarization techniques, namely TextRank, LexRank, Luhn, LSA, Edmundson, ChunkRank, TGraph, UniRank, NN-ED, NN-SE, FE-SE, SummaRuNNer, and MMR-SE, and evaluated their performance using metrics such as precision, recall, F1, cohesion, non-redundancy, readability, and significance. A thorough analysis across eight exhibits then shows the strengths and disadvantages of the various methods; the authors also tested the impact of language, summary length, and other factors that influence the research. They used ROUGE as the summary evaluation tool, with an extensive programmatic evaluation in Python 3.5 in the Anaconda environment. According to Fuad et al. 2019, the inefficiency of multi-document models and the inaccuracies in representing those models as fixed-size vectors were the inspiration behind their multi-document abstractive summarization approach [3]. The authors also note the lack of human-generated documents or datasets for training an encoder–decoder model, and so designed complementary models for two tasks: sentence clustering and neural sentence fusion. In this work, the risk of producing incorrect facts is minimized by encoding a related set of sentences as the input to the encoder.
The authors applied the complementary models to implement a full abstractive multi-document summarization system that considers factors such as importance, coverage, and diversity under a desired length. Extensive experiments on all the proposed models brought improvements over state-of-the-art methods across different kinds of evaluation metrics. According to Nazari and Mahdavi 2018, text summarization helps produce the summary of a document that a user needs from among the cluster of documents readily available on the Internet, since the amount of data on the web is growing at an exponential rate [4]. The authors describe text summarization as a method of reducing the content of a document by removing redundant material. They also categorize summarization approaches by input as single-document or multi-document: in the first, a single document is used as input and a summary of it is produced as output, while the second takes several documents as input and produces a single summary document as output.


According to Patel et al. 2018, with the large amount of data present on the Internet it is tough for users to find urgently needed information, so automation is required for extracting useful and required information. Text summarization is a probable solution to this problem, as the authors suggest, and it varies from document to document. The multi-document summarization they propose is a means of gaining good content coverage with diverse information [5]. It uses a fuzzy model to handle the various weighted features in the document, and the cosine similarity method to remove redundancy. The authors compared their model with Document Understanding Conference (DUC) participant systems and with other systems such as TexLexAn, ItemSum, Yago Summarizer, MSSF, and PatSum, using the ROUGE measure on the DUC 2004 dataset. Experimental results showed that the proposed work achieves significant performance improvements over summarizers developed so far. According to Rajasekaran 2018, the large amount of data available on the Internet, in different languages and genres, makes it very hard for humans to get what they actually need; for this reason text summarization has gained popularity in the past few years and is growing rapidly, and automating the process is in demand. The author presents an in-depth survey of the various approaches, techniques, and methods involved in automatic text summarization [6]. According to Vázquez et al. 2018, pre-processing, term selection, term weighting, sentence weighting, and sentence selection are among the problems in extractive summarization. Most difficulties are faced in sentence selection, but the authors claim that significant problems arise in all the steps [7].
Thus, to determine the relevant sentences, the authors propose various features, and present a genetic algorithm to optimize the combination of the preceding tasks. The presented method not only performs better than previous work but is also effective in showing the relevance of the features as a solution to the problems. Yao et al. 2018 present a novel extractive document summarization approach based on a Deep Q-Network (DQN) that accounts for the redundancy of sentences in the Q-value approximation, seeking a policy that maximizes the ROUGE score of the generated summaries. The authors design two hierarchical architectures that produce the important and informative features of a document to form the state of the DQN, as well as a list of potential actions from the sentences in the document [8]. The model was trained directly on human-generated references, which eliminated the need for separate extractive models. For testing, the authors used the CNN/Daily corpus, the DUC 2002 dataset, and the DUC 2004 dataset with the ROUGE metric. They finally concluded that the


model worked better than the state of the art on corpora without linguistic annotation, and that this was the first time a DQN had been applied to summarization models. According to Allahyari et al. 2017, vast amounts of data have become available in recent years from various sources, containing information and knowledge that needs to be effectively summarized in order to be useful [9]. Their review paper presents the main approaches to automatic text summarization and reviews the effectiveness and demerits of the various methods used to date. According to Malallah and Ali 2017, automatic summarization of documents plays a major role in various real-life applications [10]. Automatic text summarization is meant to provide curated, condensed content without affecting the quality of the original data. The authors propose a new multi-document summarization method combining an extraction model with a fuzzy logic model: the methodology extracts relevant words from the dataset, fuzzy logic measures the importance of each sentence, and fuzzy inference generates the final summary. They evaluated the model against some pre-existing summary generation systems, and it performed well in coverage results and similarities. Kumar et al. [11] in 2021 proposed a model comparing text pre-processing time in the Hindi language using Stanza and Spacy, finding that the pre-processing time of Stanza is very high compared to Spacy.

3 Proposed Methodology Figure 1 shows our proposed approach for text summarization. A Hindi-language text document is provided as input to the text pre-processing phase, then passes to tokenization, where tokens are created. It finally enters the processing phase, in which we use the TF-IDF method for text summarization: the TF-IDF method selects the most important sentences from the given input text and then generates the summary.

4 Text Pre-processing and Summarization 4.1 Pre-processing Text pre-processing is one of the primary steps in any text summarization; it determines whether our data is fit for the various algorithms we use in our models. It converts the data into a more digestible form that can be used by every subsequent step of the summarization. Pre-processing for text summarization involves the following steps.


Fig. 1 Framework for text summarization

1. Boundary Identification
2. Stop Word Removal
3. Stemming
4. Tokenization.
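The four steps above can be sketched as a minimal pipeline for Hindi text. The stop-word list and suffix list below are tiny illustrative samples, not the resources the authors used, and the suffix-stripping stemmer is a deliberate simplification.

```python
# A rough sketch of the four pre-processing steps for Hindi text.
# Stop words and suffixes are illustrative assumptions.
HINDI_STOPWORDS = {"है", "के", "में", "की", "और", "से", "का"}
HINDI_SUFFIXES = ["ों", "ें", "ाएं"]  # crude suffix stripping

def split_sentences(text: str):
    # Boundary identification: Hindi sentences end with the danda '।'.
    return [s.strip() for s in text.split("।") if s.strip()]

def tokenize(sentence: str):
    # Simple whitespace tokenization.
    return sentence.split()

def stem(word: str):
    # Strip the first matching suffix, if any.
    for suf in HINDI_SUFFIXES:
        if word.endswith(suf):
            return word[: -len(suf)]
    return word

def preprocess(text: str):
    # Boundary identification -> tokenization -> stop-word removal -> stemming.
    return [
        [stem(w) for w in tokenize(s) if w not in HINDI_STOPWORDS]
        for s in split_sentences(text)
    ]

print(preprocess("राम घर में है। सीता विद्यालय गई।"))
```

The output is a list of token lists, one per sentence, ready for TF-IDF scoring in the processing phase.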

4.2 Processing Processing is the phase where the entire algorithm is applied to our data. We have used the TF-IDF algorithm to determine and generate the summary for the input dataset, which here is in the Hindi language. The detailed analysis is as follows.

4.3 TF-IDF Algorithm TF-IDF is the abbreviated form of Term Frequency–Inverse Document Frequency. It is an approach by which we numerically score the importance of a word in each sentence based on how frequently it appears in the given document or the given


dataset. The approach assumes that if a word appears very frequently in each sentence, it should be given importance and its numerical score should be higher. In simpler terms, it is a method to weigh a given keyword and decide whether it is important in the document based on the number of times it appears. Corpus: the collection of documents over which the TF-IDF algorithm checks how relevant a given word or keyword is. A high TF-IDF weight is reached by high term frequency, whereas a low weight is reached by low term frequency. The TF-IDF score is the product of two quantities:

TF(w) = (Number of times term w appears in a document) / (Total number of terms in the document)

IDF(w) = log_e(Total number of documents / Number of documents with term w in it)

Therefore,

TFIDF(w) = TF(w) × IDF(w)
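The formulas above translate directly into code. Here documents are represented as token lists, and the three-document sample corpus is purely illustrative.

```python
import math

# Direct transcription of the TF, IDF, and TF-IDF formulas above.
def tf(word, doc):
    # Frequency of the word divided by the total number of terms.
    return doc.count(word) / len(doc)

def idf(word, corpus):
    # log_e of (total documents / documents containing the word).
    n_containing = sum(1 for doc in corpus if word in doc)
    return math.log(len(corpus) / n_containing)

def tfidf(word, doc, corpus):
    return tf(word, doc) * idf(word, corpus)

# Illustrative corpus of three tiny "documents".
corpus = [
    ["summary", "of", "news"],
    ["news", "today"],
    ["weather", "report"],
]
print(round(tfidf("summary", corpus[0], corpus), 3))  # → 0.366
```

"summary" occurs in one of three documents, so IDF = ln 3 ≈ 1.099, and TF = 1/3, giving a score of about 0.366; "news" occurs in two documents and would score lower.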

5 Result 5.1 Dataset We have used the standard FIRE 2011 news dataset, consisting of historical news articles of different lengths and on different topics. Each document in the collection is supplied with a set of human-generated summaries provided by two different experts, each of whom was asked to generate summaries of different lengths.


5.2 Pre-processed Data

5.3 Processed Data/Summary


5.4 Summary Evaluation The summary generated by our algorithm has been evaluated subjectively by participants in an online survey. The data collected from the survey were randomly sampled and used to generate a normalized representation of our results. The survey consisted of an evaluation of summaries from different articles, with varying summary lengths, and the participants were asked to assign a score representing the relevancy or accuracy of each generated summary with respect to its article. After compiling the data, randomly selecting the results, and removing the outliers (participants assigning only a 0% or 100% score to every summary), we were able to conclude our findings with the tables and graphs shown in the results. For summary analysis, we have taken summaries in the range of 5%–40% for different datasets. Dataset: patrika_different-enjoyment-of-studies [https://www.patrika.com/opinion/different-enjoyment-of-studies-in-your-tongue-6308646/]

Dataset: patrika_politics-and-democracy [https://www.patrika.com/opinion/politics-and-democracy-6334104/]


Dataset: patrika-new-education-policy-in-indian-scenario [https://www.patrika.com/opinion/new-education-policy-in-indian-scenario-6307764/]


5.5 Precision, Recall, and F-Score


6 Conclusion In this paper, we have discussed a text summarization technique using TF-IDF. The initial step was to pre-process the data with algorithms developed especially for the task, without using pre-defined Python libraries such as Spacy and Stanza. The pre-processing phase consists of various algorithms that analyse the entire input and rank the sentences based on importance scores; the sentences with the highest scores are included in the summary and form the final output. The Hindi language poses various limitations, such as the unavailability of training data for a machine learning model, which, if available, would have simplified the task. Nevertheless, with the best possible alternative we obtained summaries with about 50% accuracy, tallied against manual results from various users.

7 Future Scope We plan to add sentiment analysis to our product, which would help users identify parts of the input (a news article, journal, or any data in Hindi) according to sentiment. For example, when reading the news in the morning a user may want only the happy or progressive news; our model could then analyse the input data into categories such as happy, sad, and progressive, and fetch the required output in whatever


sentiment is required, as many people would rather not read newspaper articles about accidental deaths or anything else that hurts their emotional state of mind. Sentiment analysis is the right choice for such situations: it offers a better alternative to spending a lot of time reading content that is of no use and then eliminating the content one does not wish to see or read.

References
1. Abualigah L, Bashabsheh MQ, Alabool H, Shehab M (2020) Text summarization: a brief review. In: Studies in computational intelligence book series: recent advances in NLP: the case of Arabic language, vol 874, pp 1–15. ISBN: 978-3-030-34613-3. https://doi.org/10.1007/978-3-030-34614-0_1
2. Verma P, Pal S, Om H (2019) A comparative analysis on Hindi and English extractive text summarization. ACM Trans Asian Low-Resour Lang Inf Process 18:30–39. https://doi.org/10.1145/3308754
3. Fuad TA, Nayeem MT, Mahmud A, Chali Y (2019) Neural sentence fusion for diversity driven abstractive multi-document summarization. Comput Speech Lang 58:216–230. https://doi.org/10.1016/j.csl.2019.04.006
4. Nazari N, Mahdavi M (2018) A survey on automatic text summarization. J AI Data Mining 7:121–135. https://doi.org/10.22044/jadm.2018.6139.1726
5. Patel D, Shah S, Chhinkaniwala H (2019) Fuzzy logic based multi document summarization with improved sentence scoring and redundancy removal technique. Int J Expert Syst Appl 134:167–177
6. Rajasekaran A, Varalakshmi RD (2018) Review on automatic text summarization. Int J Eng Technol 7:456. https://doi.org/10.14419/ijet.v7i2.33.14210
7. Vázquez E, Hernánde RAG, Ledeneva Y (2018) Sentence features relevance for extractive text summarization using genetic algorithms. J Intell Fuzzy Syst 35:353–365. https://doi.org/10.3233/JIFS-169594
8. Yao K, Zhang L, Luo T, Wu Y (2018) Deep reinforcement learning for extractive document summarization. J Neurocomput 284:52–62. https://doi.org/10.1016/j.neucom.2018.01.020
9. Allahyari M, Pouriyeh S, Assefi M, Safaei S, Trippe ED, Gutierrez JB, Kochut K (2017) Text summarization techniques: a brief survey. Int J Adv Comput Sci Appl 8
10. Malallah S, Ali ZH (2017) Multi-document text summarization using fuzzy logic and association rule mining. J Al Rafidain Univ Col 41:241–258
11. Kumar A et al (2021) A comparative analysis of pre-processing time in summary of Hindi language using Stanza and Spacy. IOP Conf Ser: Mater Sci Eng 1110(1):012019. https://doi.org/10.1088/1757-899x/1110/1/012019

Low-Voltage Low-Power Acquisition System for Portable Detection of Approximated Biomedical Signals Indu Prabha Singh, Manpreet Singh Manna, Vibha Srivastava, and Ananya Pandey

Abstract ECG models are complex: simulating the P, Q, R, S, and T waves individually and then combining them into a complete ECG wave is a complicated process. In this paper an ECG signal is produced in PSpice and noise is then introduced. A low-power, low-voltage circuit using an Operational Transconductance Amplifier (OTA) is proposed for amplification, processing, and extraction of the original ECG signal from the noisy signal. Keywords ECG · Low power circuit · Operational transconductance amplifier · OTA · Signal processing

1 Introduction Our bodies are constantly producing information about our health. This information can be assessed through biomedical signals, which are observations of the physiological activities of organisms. These signals can be captured and processed by electronic instruments able to measure heart rate, blood pressure, oxygen saturation levels, blood glucose, and different activities of the brain. Biomedical signal processing involves extracting significant information from the signals, processing it, and analysing it to provide useful information upon which an appropriate decision can be made. The Electrocardiogram (ECG) is an electrical record of the contractile activity of the heart. It can be recorded with surface electrodes on the limbs or chest, and is simply a representation of the electrical activity of the heart muscles as it changes with time. It picks up the electrical impulses generated by the polarization and depolarization of cardiac tissue and translates them into a waveform [1]. The waveform generated by the cardiac tissue is used to measure the rate and regularity of heartbeats and the size and position

I. P. Singh (B) · V. Srivastava · A. Pandey SRMGPC, Lucknow, India M. S. Manna SLIET, Longowal, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_26


of the chambers. It also reveals the presence of any damage to the heart and the effects of drugs or devices used to regulate it, such as a pacemaker [2]. A high-resolution ECG simulator with small memory can display the bipolar limb and lead ECG signals with a low PRD (percent root-mean-square difference) [3]. The system is based on discrete least-squares estimation equations instead of reading large stored data from a look-up table. The results of this experiment are useful for constructing a high-resolution ECG simulator that takes much less memory and requires smaller hardware.

2 Literature Review Various appropriate medical diagnostic instruments exist for the detection of biomedical signals. These signals are too complex to be analysed directly, so a device or circuit is required to process them for further examination. For this purpose, a low-power, low-voltage Operational Transconductance Amplifier (OTA) incorporated in an instrumentation amplifier and low-pass filter was proposed [4]; the circuitry consumes 106.2 µW and operates at 0.9 V. Later, an acquisition system containing a low-pass filter and a successive approximation analogue-to-digital converter (SA-ADC) was proposed [5]; this acquisition system includes an instrumentation amplifier, high-pass filter, 60 Hz notch filter, and common level adjuster. Shuenn-Yuh Lee et al. introduced a fully differential OTA-C filter for apparatus used to detect heart activity [4]. In this circuit the linearity and noise of the filter depend on the building cell; a precise behavioural model of the OTA circuit was created, operating in the sub-threshold region to save power. A Gaussian wave model simulating the P, Q, R, S, and T components of an ECG wave individually was proposed by Md. Abdul Awal et al. [6]. The model coefficients were calculated using a nonlinear least-squares technique. To verify the developed model, different kinds of time-domain and frequency-domain analyses, such as power spectral density (PSD) and magnitude squared coherence (MSC), were used; the model was also successful in generating noisy ECG signals [7, 8]. Lin et al. proposed a new design in which the normal operational amplifier was replaced by an instrumentation amplifier [9]. A first-order HPF was also integrated, which was insufficient for curbing the low-frequency interference present in the signal; to rectify this discrepancy, a second-order HPF was integrated with the preamplifier to eliminate the low-frequency noise significantly. Rekha S. and Laxminidhi T.
proposed a feed-forward design for an operational transconductance amplifier with a view to improving the gain-bandwidth product [10], but the power consumption of the circuit was considerably high and the device was found to deviate from linear performance. A new compact voltage buffer was proposed as an improvement over the two initial buffers [11]. This compact buffer was found to have the greatest output swing and ensured low-power operation while maintaining high slew-rate capabilities. The

Low-Voltage Low-Power Acquisition System …

335

limitation of the technique was that the output voltage swing depends on the nature of the load, and the voltage gain was also less than unity. A device was proposed that could be used either for acquiring anomalous ECG sequences and storing them in flash memory, or as a warning device during normal activity or an exercise stress test [12, 13]. Siripruchyanun et al. presented the design of a CMOS OTA with very small transconductance (of the order of nanoamperes per volt) for very-low-frequency continuous-time filters [14]. This design used a current-division technique to reduce the transconductance of the OTA.

3 Mathematical Modelling of ECG Signals

Each ECG cycle consists of P, Q, R, S, and T waves corresponding to different phases of the heart activity. The P wave represents the normal atrium (upper heart chambers) depolarization. The QRS complex (one single heart beat), which follows the P wave, represents the depolarization of the right and left ventricles (lower heart chambers), and the T wave represents the repolarization of the ventricles. These are basically bell-shaped curves. The Gaussian-wave-based model is considered for the modelling of the ECG signal, simulating the P, Q, R, S and T components separately [6]. Now, if $i$ is an element of $\{P, Q, R, S, T\}$, then the Gaussian wave for each component of the ECG wave has the following parameters: $M_i$ is the height of the individual peak, $t_i$ is the centre position of the peak, and $W_i$ controls the width of the ECG component. The individual peak equations are given as:

P wave: $M_P \, e^{-\left(\frac{t - t_P}{\sqrt{2}\,W_P}\right)^2}$   (1)

Q wave: $M_{Q_1} e^{-\left(\frac{t - t_{Q_1}}{\sqrt{2}\,W_{Q_1}}\right)^2} + M_{Q_2} e^{-\left(\frac{t - t_{Q_2}}{\sqrt{2}\,W_{Q_2}}\right)^2}$   (2)

R wave: $M_R \, \frac{d}{dt} e^{-\left(\frac{t - t_R}{\sqrt{2}\,W_R}\right)^2}$   (3)

S wave: $-M_S \, e^{-\left(\frac{t - t_S}{\sqrt{2}\,W_S}\right)^2}$   (4)

T wave: $M_T \, e^{-\left(\frac{t - t_T}{\sqrt{2}\,W_T}\right)^2}$   (5)

336

I. P. Singh et al.

Table 1 Values of the coefficients of corresponding ECG components [6]

ECG component     Mi        Wi        ti
P wave            0.185     17.8      236.9
Q wave (j = 1)    −0.1103   30.64     365
Q wave (j = 2)    −0.1375   5.705     390
R wave            1.02      11.1803   478
S wave            −0.509    9.09      540
T wave            0.3255    50        800

Also, the general equation of the modelled ECG signal obtained by combining Eqs. (1)–(5) can be written as

$f(t) = \sum_{i \in \{P,R,S,T\}} \left(\pm\frac{d}{dt}\right)^{i} M_i \, e^{-\left(\frac{t - t_i}{\sqrt{2}\,W_i}\right)^2} + \sum_{i \in Q,\ j=1}^{2} M_{ij} \, e^{-\left(\frac{t - t_{ij}}{\sqrt{2}\,W_{ij}}\right)^2} + N_{j,\mathrm{SNR}}(t)$   (6)

In Eq. (6), $\left(\pm\frac{d}{dt}\right)^{i}$ depends on $i$: $\left(\pm\frac{d}{dt}\right)^{i} = \frac{d}{dt}$ if $i = R$, $\left(\pm\frac{d}{dt}\right)^{i} = -1$ if $i = S$, and the identity otherwise. These equations for the components of the ECG signal are modelled in MATLAB and used to simulate a cycle of the ECG signal. All the separate bell curves, or components, of the ECG signal are simulated separately and then superimposed to make a full cycle of the ECG signal (Table 1). Individual P, Q, R, S and T waves are generated using the respective modelled equations, and all these waves are overlapped to generate the electrocardiogram signal in MATLAB, as shown in Fig. 1.
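As a numerical cross-check, the Gaussian component model above can be reproduced outside MATLAB. The following Python sketch (illustrative only; the paper's own implementation is in MATLAB) builds one ECG cycle from the Table 1 coefficients. The scaling of the derivative-based R component relative to $M_R$ is an assumption of this sketch.

```python
import numpy as np

# One Gaussian component of the model, Eqs. (1)-(5):
#   g_i(t) = M_i * exp(-((t - t_i) / (sqrt(2) * W_i))**2)
def gauss(t, M, W, ti):
    return M * np.exp(-((t - ti) / (np.sqrt(2.0) * W)) ** 2)

# (M_i, W_i, t_i) per component, from Table 1 [6]; t is in samples.
# The S entry already carries the minus sign required by Eq. (4).
COEFFS = {
    "P":  (0.185,   17.8,    236.9),
    "Q1": (-0.1103, 30.64,   365.0),
    "Q2": (-0.1375, 5.705,   390.0),
    "R":  (1.02,    11.1803, 478.0),
    "S":  (-0.509,  9.09,    540.0),
    "T":  (0.3255,  50.0,    800.0),
}

def ecg_cycle(t):
    """Superimpose the modelled components into one ECG cycle.

    The R component is the time derivative of a Gaussian (Eq. 3); how
    its amplitude is scaled against M_R is an assumption here.
    """
    sig = np.zeros_like(t, dtype=float)
    for name, (M, W, ti) in COEFFS.items():
        if name == "R":
            # d/dt exp(-((t-ti)/(sqrt(2) W))^2) = -((t-ti)/W^2) exp(...)
            sig += M * (-(t - ti) / W**2) * np.exp(-((t - ti) / (np.sqrt(2.0) * W)) ** 2)
        else:
            sig += gauss(t, M, W, ti)
    return sig

t = np.arange(0.0, 1000.0)
ecg = ecg_cycle(t)
```

Superimposing the components this way mirrors the MATLAB procedure described in the text: each wave is generated separately and then summed.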

4 Addition of Noise ECG signals, when acquired, are corrupted by various noises such as power-line interference, high-frequency electromyogram (EMG) noise, motion artefacts, baseline drift, electrosurgical noise and white noise. Therefore, consideration of noise is very important. For noise integration with the standard ECG signal, random noise is added using the appropriate functions in MATLAB with an approximate variance of 0.01. After adding noise, all the components of the ECG waves become noisy.


Fig. 1 Individual components of ECG and approximated ECG signal on MATLAB
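The noise-addition step can be sketched in Python as well (the paper uses MATLAB's random-noise functions; the fixed seed and the stand-in waveform below are our own, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def add_noise(signal, variance=0.01):
    """Add zero-mean white Gaussian noise, with the variance of 0.01
    quoted in the text, to a clean modelled ECG cycle."""
    return signal + rng.normal(0.0, np.sqrt(variance), size=signal.shape)

# Stand-in for one modelled ECG cycle.
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
noisy = add_noise(clean)
```

Adding the noise to each component before superposition, as the text describes, is equivalent to adding one combined noise realization to the summed signal.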

By superimposing them, a noisy ECG signal is obtained in MATLAB, as shown in Fig. 2. To generate the noisy ECG signal, all individual waves are simulated again with added Gaussian (random) noise and then overlapped, as obtained in Fig. 4. The amplitude values corresponding to the respective time instances are then taken from the MATLAB workspace and plotted on PSpice using Piecewise Linear (PWL) sources, as shown in Fig. 3. In total, 148 PWL sources are used to generate the pure as well as the noisy ECG signal.

Fig. 2 Individual noisy components of ECG and approximated noisy ECG signal on MATLAB

Fig. 3 ECG signal on PSpice

Fig. 4 Noisy ECG signal on PSpice (random noise variance = 0.01)

An acquisition system must be portable and durable, operate at low voltage and consume low power. Figure 5 shows the block diagram of the proposed biomedical signal


Fig. 5 Block diagram of biomedical signal acquisition system

acquisition system for low voltage and low power consumption that can extract the amplified actual signal from the noisy signal. The ECG signal is passed through an instrumentation amplifier offering high input impedance, high common-mode rejection ratio (CMRR), low passive sensitivity, great accuracy and stability. The amplifier is followed by low-pass and high-pass filters. Physiological signals are often distorted by low-frequency noise interference. These noise signals are generated by the respiration and motion of the person; in addition, a differential DC voltage is generated due to the polarization of the electrodes. If the magnitude of the noise is large, the resolution of analogue-to-digital (A/D) conversion will be limited, and the received data will not have enough precision to extract the actual useful data for interpretation. Therefore, the interference should be attenuated by a high-pass filter before performing A/D conversion. To suppress the high-intensity interference and obtain a sharp cut-off, a second-order high-pass filter is used. The high-pass filter circuitry is combined with the instrumentation amplifier in the preamplifier stage to obtain a sharp-cut-off, noise-free signal. The cut-off frequency of the low-pass filter is 250 Hz. The circuit and its simulated performance are shown in Fig. 6. In the transient analysis of the existing acquisition circuitry, the input voltage swing is 1.5 mV and the output voltage swing is 2.0 mV; this circuitry operates at a supply voltage of 0.9 V. In the proposed biomedical acquisition circuitry, an operational transconductance amplifier (OTA) replaces the operational amplifiers in the instrumentation amplifier to


Fig. 6 An acquisition system [15] and its transient analysis using ECG input

suit the requirements of low voltage and low power. The power consumption of the proposed acquisition circuitry is 103.4 µW, the amplification factor is 1.77 and the supply voltage needed to operate this circuitry is 0.85 V (Fig. 7). The low-voltage, low-power CMOS digitally programmable operational transconductance amplifier (OTA) used is shown in Fig. 8 [16–18]. The preferred OTA maintains a constant current and constant bandwidth for different load capacitors without increasing the standby power consumption.
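The filtering chain described above (a high-pass section to reject baseline and motion drift before A/D conversion, plus a low-pass section with a 250 Hz cut-off) is analogue in the paper. As a purely behavioural illustration, not a model of the OTA circuit, the same band-pass idea can be sketched with discrete first-order RC sections in Python; cascading two high-pass sections would give the second-order response the text uses. The cut-off values in the demo are our own choices except for the 250 Hz low-pass.

```python
import numpy as np

def first_order_lpf(x, fc, fs):
    """Discrete first-order RC low-pass section; the paper's analogue
    low-pass filter has a 250 Hz cut-off."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * fc))
    y = np.zeros_like(x)
    y[0] = alpha * x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

def first_order_hpf(x, fc, fs):
    """Discrete first-order RC high-pass section; two in cascade
    approximate the second-order high-pass of the text."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * np.pi * fc)
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 10000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Baseline drift (here simply DC) is rejected by the high-pass section;
# the 2 Hz cut-off is an illustrative sub-ECG-band value.
drift_out = first_order_hpf(np.ones_like(t), 2.0, fs)
```

Running a high-frequency tone through the 250 Hz low-pass attenuates it strongly, while in-band ECG content (tens of Hz) passes nearly unchanged.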


Fig. 7 Proposed acquisition circuitry and its transient analysis using ECG input

Fig. 8 OTA used for signal processing [15]


Fig. 9 AC analysis of proposed acquisition circuitry

The transient analysis of the proposed acquisition circuitry using the operational transconductance amplifier shows an input voltage swing of 1.3 mV and an output voltage swing of 2.3 mV, with filtering done by the 2nd-order high-pass filter and the 1st-order low-pass filter to remove low- and high-frequency noise interference, respectively. Thus, the output ECG wave obtained is free from any kind of disturbance. This circuitry operates at a supply voltage of 0.85 V (Fig. 9). From the AC analysis of the proposed acquisition circuitry, the 3 dB gain obtained is −115.00 dB, with a lower cut-off frequency of less than 100 mHz and an upper cut-off frequency of greater than 100 Hz.

5 Conclusion A biomedical signal acquisition system using an OTA, requiring a low input voltage and consuming low power, has been simulated in PSpice. The OTA used as the building block of the pre-amplifier stage further ensures a reduction in power consumption and improves the controllability of the circuit. The power consumption of the ECG acquisition system, which governs the battery lifetime of the acquisition device, is in microwatts. The proposed model for generating the ECG signal is capable of replicating many important features of the human ECG wave. From the above, we can conclude that the proposed acquisition circuitry behaves as an amplifier with a band-pass filter that passes the frequency band from roughly 100 mHz to 100 Hz. Thus, filtering of low-frequency noise interference (such as electrode contact noise and respiration) along with high-frequency noise interference (such as electromyogram interference due to the movement of a person) is properly done by the proposed circuit.


References 1. Ashley, EA, Niebauer J (2004) Cardiology explained. Remedica Press, London, Chapter 3, pp 37–52 2. Chung W-Y, Chuang C-C, Zheng Y-H, Wang Y-H (2005) A new low power-low voltage OTA for ECG readout circuit design. J Med Biol Eng 26(4):195–202 3. Oppenheim AV, Willsky AS, Young IT Signals and systems. Prentice Hall Press, USA, Chapter 1, pp 2–8 4. Lee S-Y, Cheng C-J (2009) Systematic design and modeling of a OTA-C filter for portable ECG detection. IEEE Trans Biomed Circuits Syst 3(1):53–64 5. Lee S-Y, Hong J-H, Lee JC, Fang Q (2012) An analogue front-end system with a low-power on-chip filter and ADC for portable ECG detection devices. In: Mills R (ed) Advances in electrocardiograms—methods and analysis 6. Awal MA, Mostafa SS, Ahmad M (2011) Simplified mathematical model for generating ECG signal and fitting the model using non-linear least square technique. In: Proceedings of the international conference on mechanical engineering, Dhaka, Bangladesh, pp 18–20 7. Bhoi AK, Sherpa KS (2014) QRS complex detection and analysis of cardiovascular abnormalities: a review. Int J Bioautomation 18(3):181 8. Lodin O, Kaur I, Kaur H (2020) Design and analysis of THD with single-tuned filter for fivephase DC to AC converter drive, cognitive informatics and soft computing. In: Advances in intelligent systems and computing, vol 1040. Springer, Singapore, Online ISBN978-981-151451-7. https://doi.org/10.1007/978-981-15-1451-7_53 9. Lin Y-D, Tsai C-D, Huang H-H, Chiou D-C, Wu C-P (1999) Preamplifier with a second-order high-pass filtering characteristic. IEEE Trans Biomed Eng 46(5):609–612 10. Rekha S, Laxminidhi T (2011) Low power fully differential, feed-forward compensated bulk driven OTA. In: The 8th electrical engineering/electronics, computer, telecommunications and information technology (ECTI) association of Thailand—conference-2011, pp 90–93 11. Torralba A, Carvajal RG, Galan J, Ramirez-Angulo J (2003) A new compact low-power high slew rate class AB CMOS buffer. 
Circuit Syst IEEE Conf 1:I237–I240 12. Kaur I, Chaudary J (2010) Low power design considerations and power dissipation in CMOS IC’s. In: Proceedings of the international conference on systemics, cybernetics and informatics (ICSCI-2010), Hyderabad, pp 59–62 13. Jovanov E, Gelabert P, Adhami R, Wheelock B, Adams R (1999) Real time Holter monitoring of biomedical signals. In: DSP technology and education conference DSPS’99, Houston, Texas, pp 1–7 14. Siripruchyanun M, Jaikla W (2008) Current controlled current conveyer transconductance amplifier (CCCCTA): a building block for analog signal processing. J Electr Eng 19(6):443–453 15. Mohseni P, Najafi K (2002) A low power fully integrated bandpass operational amplifier for biomedical neural recording applications. In: Engineering in medicine and biology, 2002. 24th annual conference and the annual fall meeting of the biomedical engineering society, EMBF/BMEF conference, 2002, proceedings of the second joint, vol 3, pp 2111–2112 16. Singh IP, Dehran M, Singh K (2015) High performance CMOS low power/low voltage operational transconductance amplifier. In: IEEE international conference on electrical, computer and communication technologies (ICECCT), Coimbatore, India, published in IEEE Explore. https://doi.org/10.1109/ICECCT.2015.7226203 17. Kaur I, Sandhu SP (2009) Algorithm design and performance evaluation of equivalent CMOS model. Int J Electr Electron Eng IJEEE 8(3):505–510 (ISSN: 2073-0535) 18. Bhoi AK, Sherpa KS, Khandelwal B (2018) Ischemia and arrhythmia classification using time-frequency domain features of QRS complex. Procedia Comput Sci 132:606–613

Antimagic Labeling and Square Difference Labeling for Trees and Complete Bipartite Graph S. Sivakumar, S. Vidyanandini, E. Sreedevi, Soumya Ranjan Nayak, and Akash Kumar Bhoi

Abstract Given a graph G, a labeling is a bijection from the edge set E(G) to the set {1, 2, . . . , |E(G)|}. For an antimagic labeling, the sum of the labels of the edges incident to u must differ from the sum of the labels of the edges incident to v for any distinct vertices u and v. A graph is said to be antimagic if it admits an antimagic labeling. Hartsfield and Ringel conjectured in 1990 that every connected graph other than K 2 is antimagic. In this paper, we prove that complete bipartite graphs K m,n admit square difference labeling and that trees of diameter four are antimagic.

1 Introduction Let G = (V, E) be a graph and f : E → {1, 2, 3, . . . , |E|} a bijective mapping. For each vertex u in the graph G, the vertex-sum φ f (u) at u is defined as φ f (u) = Σ e∈E(u) f (e), where E(u) indicates the collection of edges incident to u. If φ f (u) ≠ φ f (v) for any two distinct vertices u, v ∈ V (G), then f is known as an antimagic labeling of G. A graph G is defined as an antimagic graph if S. Sivakumar Department of Computer Applications, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur 603203, Tamil Nadu, India S. Vidyanandini Department of Mathematics, SRM Institute of Science and Technology, Kattankulathur 603203, India E. Sreedevi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, K L University, Vaddeswaram, Guntur 522502, India S. R. Nayak (B) Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India A. K. Bhoi KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India Directorate of Research, Sikkim Manipal University, Gangtok 737102, Sikkim, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_27

345

346

S. Sivakumar et al.

G possesses an antimagic labeling. This study of graphs was initiated by Hartsfield and Ringel [1], who showed that paths, 2-regular graphs and complete graphs are antimagic. They put forth two conjectures on graphs admitting antimagic labeling. Conjecture 1: Every connected graph other than K 2 is antimagic. Conjecture 2: Every tree other than K 2 is antimagic. These conjectures have attracted much interest from researchers, but both remain open. Alon et al. [2] determined that there is an absolute constant C such that graphs with minimum degree δ(G) ≥ C log|V (G)| are antimagic; similarly, graphs with maximum degree Δ(G) ≥ |V (G)| − 2 are antimagic. A special case of Conjecture 1 states that every k-regular graph with k ≥ 2 is antimagic. Hartsfield and Ringel established that every cycle graph is antimagic. It is evident that if every component of a regular graph G is antimagic, then G is antimagic; consequently, every 2-regular graph is antimagic. Cranston determined that every bipartite k-regular graph with k ≥ 3 is antimagic. Yu-Chang Liang and Xuding Zhu determined that cubic graphs are antimagic [3]. Cranston, Liang and Zhu extended this, showing that regular graphs of odd degree are antimagic [4]. Chang, Liang, Pan and Zhu determined that every regular graph of even degree is antimagic. All graphs considered here are finite and simple. For the definitions not given in this paper, refer to [5–7]. Shiama [8, 9] established that the cycle, complete graph, ladder, cycle cactus, wheel, lattice grid and quadrilateral snake are square difference labeling graphs, as are the graph G = K 2 + m K 1, core graphs of paths, square graphs of paths, various path-related graphs, and fan and gear graphs [9–11].
Square sum labeling and square sum graphs were discussed by Ajitha et al. [12], who showed that square sum graphs include the cycle, the complete graph K n, the cycle cactus, the ladder and complete lattice grids. Similarly, Beineke and Hegde [13] studied strongly multiplicative graphs. The square sum labeling of some middle and total graphs [9, 14] is also known. For a complete survey on square difference labeling, the reader can refer to the excellent dynamic survey by Gallian [15]. This paper proves that complete bipartite graphs K m,n admit square difference labeling. Example: See Fig. 1. Regarding Conjecture 2, Kaplan et al. [16] showed that trees containing no vertex of degree 2 are antimagic. Yu-Chang Liang et al. studied the antimagic labeling of trees with degree-2 vertices. For an exhaustive survey on antimagic labeling of trees and graphs, we refer to Gallian's Dynamic Survey of Graph Labeling [17]. For the basic definitions and notation, we refer to [18]. In this paper, we prove that trees of diameter four are antimagic.


Fig. 1 Square difference labeling for K 5


2 Main Result 1 In this section, we prove that trees of diameter four are antimagic. Theorem 1 Every tree T of diameter four is antimagic. Proof Let the vertex set V (T ) be taken as the union of V1 (T ), V2 (T ) and V3 (T ), where V1 (T ) contains the unique central vertex of T , V2 (T ) = {u : deg(u) > 1 and u is not a central vertex} and V3 (T ) = {u : deg(u) = 1}. Since the tree T is of diameter four, |V1 (T )| = 1, the unique central vertex of the tree. It is clear that the vertex subsets V1 (T ), V2 (T ) and V3 (T ) are mutually disjoint subsets of V (T ). Let |V3 (T )| = k. Arbitrarily assign the edge labels 1, 2, . . . , k to the edges incident to the vertices in V3 (T ). Thus the vertex labels assigned to the vertices in V3 (T ) are 1, 2, 3, . . . , k. Let |V2 (T )| = r . Since T is of diameter four, there are r edges connecting the unique central vertex to the vertices in V2 (T ). Let those r edges be {e1 , e2 , . . . , er } and arbitrarily assign the edge labels l(ei ) = xi , for 1 ≤ i ≤ r . Note that the other end vertices of these r edges are in V2 . Display the vertices in V2 according to the label xs + t, where xs is the edge label arbitrarily assigned in the previous step and t is the sum of the labels of the pendant edges incident with the corresponding vertex in V2 . Order the vertices in V2 and assign the edge labels k + 1, k + 2, . . . , k + r in increasing order of t; if there is a tie in the ordering of t, it can be broken arbitrarily. It is evident from this allocation that the edge labels come from the set {1, 2, . . . , |E|} [19], and the vertex-sums are distinct. Hence the proof. Illustration We illustrate the procedure for obtaining an antimagic labeling for the tree T of diameter four shown in Fig. 2. Figure 3 illustrates the vertex and edge labels for the pendant edges of T .
Figure 4 illustrates the arbitrary labeling of the edges whose end vertices are in V2 (T ). Figure 5 illustrates the vertex and edge labels for the end vertices in V2 (T ).


Fig. 2 Input tree T of diameter four and with 14 edges

Fig. 3 Vertex labels for the vertices in V3 (T )

Fig. 4 Edge labels for the edges that are incident with vertices in V2 (T )
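The antimagic condition produced by the procedure above can be verified computationally. This Python sketch (illustrative; the helper name is ours) checks that all vertex-sums are distinct for a small diameter-four tree labeled following the steps of the proof.

```python
from collections import defaultdict

def is_antimagic_labeling(edges, labels):
    """edges: list of (u, v) pairs; labels: parallel list of edge labels.
    The labeling is antimagic when every vertex-sum (the sum of the
    labels of the edges incident to a vertex) is distinct."""
    sums = defaultdict(int)
    for (u, v), lab in zip(edges, labels):
        sums[u] += lab
        sums[v] += lab
    return len(set(sums.values())) == len(sums)

# A diameter-four tree: centre 0; internal vertices 1 and 2;
# pendant vertices 3, 4 (attached to 1) and 5 (attached to 2).
# Following the proof: pendant edges get 1..k (k = 3), then the
# central edges get k+1, k+2 in increasing order of the pendant sums t.
edges = [(1, 3), (1, 4), (2, 5), (0, 2), (0, 1)]
labels = [1, 2, 3, 4, 5]
assert is_antimagic_labeling(edges, labels)
```

As a sanity check on the checker itself, the single-edge tree K2 fails, matching the exclusion of K2 in the conjectures.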

3 Main Result 2 This section determines that complete bipartite graphs K m,n admit square difference labeling. Theorem 1 The complete bipartite graph K m,n for any integers m, n > 0 admits square difference labeling.
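The labeling constructed in the proof that follows can be sanity-checked in code. This Python sketch (illustrative; the function name is ours) applies the vertex labeling f(v_i) = i, f(u_j) = m + j and verifies that the induced edge labels |f(u)² − f(v)²| of K_{m,n} are pairwise distinct for several small cases.

```python
from itertools import product

def square_difference_edge_labels(m, n):
    """Induced edge labels of K_{m,n} under f(v_i) = i on part V1
    and f(u_j) = m + j on part V2: one label |f(u)^2 - f(v)^2|
    per edge (v, u) with v in V1 and u in V2."""
    V1 = range(m)            # vertex labels 0 .. m-1
    V2 = range(m, m + n)     # vertex labels m .. m+n-1
    return [abs(u * u - v * v) for v, u in product(V1, V2)]

for m, n in [(3, 3), (3, 4), (2, 5)]:
    labels = square_difference_edge_labels(m, n)
    assert len(labels) == m * n                 # K_{m,n} has mn edges
    assert len(set(labels)) == len(labels)      # all edge labels distinct
```

For K_{3,4} this reproduces twelve distinct edge labels, consistent with the claim of Theorem 1.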


Fig. 5 Antimagic labeling for the tree T

Proof Let G be the complete bipartite graph K m,n for any positive integers m and n. From the definition of a complete bipartite graph, K m,n has m + n vertices and mn edges. Without loss of generality, assume that m ≤ n. Let |V1 | = m and |V2 | = n, with V1 = {v0 , v1 , . . . , vm−1 } and V2 = {u 0 , u 1 , . . . , u n−1 }. Define the vertex labeling f : V1 ∪ V2 → {0, 1, 2, . . . , (m + n) − 1} as f (vi ) = i for 0 ≤ i ≤ m − 1 and f (u j ) = m + j for 0 ≤ j ≤ n − 1. Define the edge labeling f ∗ as f ∗ (uv) = |[ f (u)]^2 − [ f (v)]^2 | for any edge uv ∈ E(G). From the definition of f , it is clear that the vertex labels of G are distinct and f is bijective. Claim The edge labels of G are distinct. Let u j and u j+1 , for 0 ≤ j ≤ n − 2, denote vertices in V2 whose labels are consecutive. Suppose f (u j ) = r and f (u j+1 ) = r + 1. From the definition of the vertex labels in V2 , it is evident that r ≥ m. Since G is a complete bipartite graph, the vertex u j is adjacent to every vertex vi in V1 , for 0 ≤ i ≤ m − 1; likewise, u j+1 is adjacent to every vertex vi in V1 . Since the labels of the vi and of u j are distinct, the induced labels of the edges u j vi , for 0 ≤ i ≤ m − 1, are distinct, and similarly for the edges u j+1 vi . Moreover, the induced labels of the edges u j vi form a monotonically increasing sequence as i runs from 0 to m − 1, and similarly for the edges u j+1 vi . From the definitions of f and f ∗ ,


f ∗ (u j v0 ) = |[ f (u j )]^2 − [ f (v0 )]^2 | = |r^2 − 0^2 | = r^2

f ∗ (u j+1 vm−1 ) = |[ f (u j+1 )]^2 − [ f (vm−1 )]^2 | = |(r + 1)^2 − (m − 1)^2 | = r^2 + 2r + 1 − m^2 + 2m − 1 = r^2 + 2r + 2m − m^2 > 0 (since r ≥ m)

Thus, the edge labels of G are distinct. Therefore, the complete bipartite graph G = K m,n for any integers m, n > 0 admits square difference labeling (Figs. 6 and 7). Theorem 2 Any tree T with m edges admits square difference labeling. Proof Let T be a tree with m edges and m + 1 vertices. Since every tree is a bipartite graph, consider the bipartition of the vertex set V (T ). Let V (T ) = V1 ∪ V2 . Fig. 6 Square difference labeling for K 3,4


Fig. 7 Square difference labeling for K 3,3



Let |V1 | = p; therefore |V2 | = (m + 1) − p. Let the vertices in V1 be u 0 , u 1 , . . . , u p−1 and order the vertices in V2 as u p , u p+1 , . . . , u m in the bottom-to-top order defined by the bipartition of the vertices of the tree. Define the vertex labeling function f (u i ) = i for 0 ≤ i ≤ m, and define the edge labeling function f ∗ (uv) = |[ f (u)]^2 − [ f (v)]^2 |. From the definition of f , it is clear that the vertex labels are distinct and f is bijective. Claim The edge labels of T are distinct. Let u j and u j+1 , for p ≤ j ≤ m − 1, denote vertices in V2 whose labels are consecutive. Suppose f (u j ) = r and f (u j+1 ) = r + 1. From the definition of the labels assigned to the vertices in V2 , it is evident that r ≥ p. Since T is a tree, consider the neighbours of the vertex u j and, among them, the vertex (say u α ) with maximum label; by the definition of the vertex labeling, f (u α ) = α. Similarly, among the neighbours of u j+1 , consider the vertex (say u β ) with maximum label, so that f (u β ) = β. Thus, it is enough to prove that the induced label of the edge u j u α is strictly less than the induced label of the edge u j+1 u β .

Case 1: f (u α ) = α > f (u β ) = β. From the definitions of f and f ∗ ,

f ∗ (u j+1 u β ) − f ∗ (u j u α ) = |(r + 1)^2 − β^2 | − |r^2 − α^2 | = (r + 1)^2 − r^2 + α^2 − β^2 > 0 (since α > β)

Case 2: f (u β ) = β > f (u α ) = α. Let β = α + s, s ≥ 1. From the definitions of f and f ∗ ,

f ∗ (u j+1 u β ) − f ∗ (u j u α ) = |(r + 1)^2 − β^2 | − |r^2 − α^2 | = (r + 1)^2 − r^2 − β^2 + α^2 = 2r + 1 + α^2 − (α^2 + s^2 + 2αs) = 2r + 1 − 2αs − s^2 > 0 (since r > α and r > s)

Case 3: f (u α ) = α = f (u β ) = β. From the definitions of f and f ∗ ,

f ∗ (u j+1 u β ) − f ∗ (u j u α ) = |(r + 1)^2 − β^2 | − |r^2 − α^2 | = (r + 1)^2 − r^2 = 2r + 1 > 0

Thus, the edge labels of T are distinct. Therefore, any tree T with m edges admits square difference labeling. Example: See Fig. 8.

Fig. 8 Square difference labeling for Tree T with 21 edges


Fig. 9 Square difference labeling for Tree T with 21 edges

Example: See Fig. 9.

4 Conclusion In this paper, we obtain an antimagic labeling for trees T of diameter four, and we prove the generalized results that K m,n , for m, n > 0, and any tree T with m edges admit square difference labeling. Based on these two results, we pose a related question: does the complete tripartite graph K m,n,r for m, n, r > 0 admit square difference labeling?

References 1. Nayak SR, Jena PM, Mishra J (2018) Fractal analysis of image sets using differential box counting techniques. Int J Inf Technol 10(1):39–47 2. Alon N, Kaplan G, Lev A, Roditty Y, Yuster R (2004) Dense graphs are antimagic. J Graph Theory 47:297–309 3. Yu-Chang Liang and Xuding Zhu (2014) Antimagic labeling of cubic graphs. J. Graph Theory 75:31–36 4. Palai G, Nayak B, Sahoo SK, Nayak SR, Tripathy SK (2018) Metamaterial based photonic crystal fiber memory for optical computer. Int J Light Electron Optics 171:393–396 5. West DB (2001) Introduction to graph theory. Prentice-Hall 6. Vidyanandini S, Parvathi N (2015) Graceful labeling for graph Pn K 2 . Int J Sci Eng Res 6(3):96–198 7. Parvathi N, Vidyanandini S (2014) Graceful labeling of a tree from caterpillars. J Inf Optim Sci 35(4):387–393 8. Shiama J (2012) Permutation sum labeling for some shadow graph. Int J Comput Appl 40(6) 9. Shiama J (2012) Square sum labeling for some middle and total graphs. Int J Comput Appl 37(4) 10. Vidyanandini S, Parvathi N, Sivakumar S (2018) On edge irregularity strength of complete graphs and complete bipartite graphs. Int J Pure Appl Math 119(14):341–344 11. Vidyanandini S, Parvathi N (2018) Square difference labeling for complete bipartite graphs and trees. Int J Pure Appl Math 118(10):427–434


12. Ajitha V, Arumugam S, Geemina KA (2006) On square sum graph. AKCE J Graphs Comin 6:1–10 13. Beineke L, Hegde SM (2000) Strongly multipicative graphs. Discuss Math Graph theory 21:63– 75 14. Shiama J (2012) Square difference labeling for some graphs. Int J Comput Appl 44(4) 15. Gallian JA (2010) A dynamic survey of graph labeling. Electron J Combin 7 16. Rajesh Kumar E, Rama Rao KVSN, Nayak SR (2020) Suicidal ideation prediction in Twitter data using machine learning techniques. J Interdiscip Math 23(1):117–125 17. Amiri IS, Al-Zubi JA, Nayak SR, Palai G (2020) Chip to chip communication through the photonic integrated circuit: a new paradigm to optical VLSI. Optik 202:1–6 18. Roy A, Parveen N, Razia S, Nayak SR, Chandra R (2020) Fuzzy rule based intelligent system for user authentication based on user behaviour. J Discrete Math Sci Cryptogr 23(2):409–417 19. Sivakumar S, Nayak SR, Kumar A, Vidyanandini S (2018) An empirical study of supervised learning methods for breast cancer diseases. Optik 175:105–114

Edge Irregularity Strength Exists in the Anti-theft Network S. Sivakumar, S. Vidyanandini, E. Sreedevi, Soumya Ranjan Nayak, and Akash Kumar Bhoi

Abstract For a graph G, an edge irregular total labeling assigns labels 1, 2, . . . , k to the vertices and edges so that the weights of any two distinct edges are different. The weight of an edge is calculated by adding its own label to the labels of its two end vertices. The least value of k for which the graph G admits an edge irregular total k-labeling is termed the total edge irregularity strength, denoted tes(G). For the purpose of sharing data, networks connect computers electronically. In a networking system, common information resources such as files, applications, output devices (printers) and software programs are pooled. In terms of security, efficiency, manageability and cost-effectiveness, the benefit of a network system is evident, as it permits cooperation among consumers in an extensive array. A network typically entails computer hardware elements such as computers, hubs, switches, routers and other appliances, from which a network substructure is formed. Such devices play a significant part in transferring information from one area to another utilizing diverse technologies such as radio waves and wires. In this paper, we determine that total edge irregularity strength exists in a complete tripartite graph. Also, we apply a complete tripartite graph in the anti-theft network.

S. Sivakumar Department of Computer Applications, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur 603203, Tamil Nadu, India S. Vidyanandini Department of Mathematics, SRM Institute of Science and Technology, Kattankulathur 603203, India E. Sreedevi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, K L University, Vaddeswaram, Guntur 522502, India S. R. Nayak (B) Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India A. K. Bhoi KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India Directorate of Research, Sikkim Manipal University, Gangtok 737102, Sikkim, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_28


1 Introduction
Graph labeling is one of the fastest-growing sub-areas of graph theory, which is one of the primary fields of combinatorics. The study of the famous Königsberg bridge problem, posed by Leonhard Euler in 1736, is regarded as the birth of graph theory. The theory of trees and its applications to electrical networks were established by Kirchhoff in 1847. About ten years later, Cayley developed the notion of trees while working to count the isomers of hydrocarbons. In 1936, König published the first book on graph theory. At present, thousands of research papers have been published by eminent authors such as Frank Harary, Paul Erdős, Berge, Gross, West, and Yellen [1, 2]. In 1960, Rosa first introduced the concept of graph labeling, and a vast literature on labeling techniques is now available, both in print and in electronic form. The cardinality of the vertex set of a graph G is denoted by p [3–6], and the cardinality of the edge set by q; a graph with p vertices and q edges is known as a (p, q) graph. Rosa (1967) presented the classified sequence of valuations ρ, σ, β, α for a graph and used them to study cyclic decompositions of complete graphs. In 1972, Golomb called the β-valuation "graceful labeling", the term now most widely used. Golomb defined graceful labeling as follows: let f be an injection from the vertex set of G to the set {0, 1, 2, …, q} such that, when each edge xy is assigned the label |f(x) − f(y)|, the resulting edge labels are all distinct [7–9]. Such a function f is called a graceful labeling of a graph with q edges, and any graph that admits a graceful labeling is known as a graceful graph [10–12].
Graph theory is a branch of discrete mathematics. Graph labeling is considered one of the major tools of mathematical research in many areas, such as medicine, meteorology, electrical engineering, business administration, sociology, computer programming, marketing, networking, economics, and manufacturing. In graph theory we study major themes such as graph labeling, graph coloring, trees, matching, planar graphs, domination, Hamiltonian cycles, and Eulerian tours [13–16]. Interest in graph labeling grew from the study of the decomposition of a graph G, that is, the partition of the edge set E(G) into pairwise edge-disjoint subgraphs. The information updated in the Gallian survey motivates new results in graph labeling [17, 18].

2 Basic Definition of Labeling
Graph labeling is the assignment of labels, represented by integers, to the edges and vertices of a graph (Fig. 1).

Edge Irregularity Strength Exists in the Anti-theft Network


Fig. 1 Edges and vertices of a graph

2.1 Total k-Labeling
A total k-labeling of a graph G is called an edge irregular total k-labeling if, for every two distinct edges e and f of G, wt(e) ≠ wt(f), where the weight wt(e) of an edge is the sum of the edge label and the labels of its two end vertices.

2.2 Edge Irregularity Strength
The total edge irregularity strength of a graph G, denoted tes(G), is the minimal k for which G admits an edge irregular total k-labeling.
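As an illustrative sketch (our own toy example, not part of the paper), the two definitions above can be checked mechanically: compute each edge weight as the sum of the edge label and its two end-vertex labels, then test whether all weights are pairwise distinct. The path P4 and the labels below are arbitrary choices.

```python
# Checking the edge irregular condition wt(e) != wt(f) for a given total labeling.

def edge_weights(vertex_labels, edge_labels):
    """wt(uv) = label(u) + label(uv) + label(v)."""
    return {e: vertex_labels[e[0]] + edge_labels[e] + vertex_labels[e[1]]
            for e in edge_labels}

def is_edge_irregular(vertex_labels, edge_labels):
    w = list(edge_weights(vertex_labels, edge_labels).values())
    return len(w) == len(set(w))  # all edge weights pairwise distinct

# Path P4: vertices 0-1-2-3, all labels drawn from {1, 2} (so this is a 2-labeling)
vlab = {0: 1, 1: 1, 2: 1, 3: 2}
elab = {(0, 1): 1, (1, 2): 2, (2, 3): 2}
print(edge_weights(vlab, elab))       # {(0, 1): 3, (1, 2): 4, (2, 3): 5}
print(is_edge_irregular(vlab, elab))  # True
```

With all labels equal to 1 every edge weight would be 3, so the same functions report that a 1-labeling of P4 is not edge irregular.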

2.3 Complete Tripartite Graph
A complete tripartite graph G, denoted K(m,n,r), has the following features: 1. The vertices can be divided into three groups of sizes m, n, and r. 2. Each vertex of the first group is connected to all vertices of the other two groups, and the same holds for the vertices of the second and third groups. 3. No vertex is adjacent to another vertex of its own group (Fig. 2).

Fig. 2 Complete tripartite graph


Fig. 3 Barcode

2.4 Barcode
A barcode is a machine-readable visual representation of data relating to the item to which it is attached. Originally, barcodes represented data by varying the widths and spacing of parallel lines, and such codes are referred to as linear or one-dimensional. A barcode is essentially a number system: every barcode encodes a numerically ordered number, and its parallel lines can represent cost, quality, manufacturing date, and expiry date [13–15] (Fig. 3).

2.5 Scanner
Barcode scanners are among the earliest, and still the least expensive, scanners; the first were built from a fixed light and a single photosensor that was manually "scrubbed" across the barcode [19]. The scanner is attached to a computer; when a particular item is scanned, the details of that item are displayed on the screen (Fig. 4).

3 Basic Theorem
1. Let G = (V, E) be a graph with vertex set of size |V| = p and edge set of size |E| = q. Suppose I = {u1, u2, . . . , ut} is an independent set in G such that degG(ui) = di for i = 1, 2, . . . , t. If d1 + d2 + · · · + dt ≤ (q − 1)/2, then

Fig. 4 Scanner


tes(G) ≤ q − (d1 + d2 + · · · + dt).

2. The graph Sn = K(1,n) denotes the star graph on n + 1 vertices, n ≥ 1. Its total edge irregularity strength is tes(Sn) = ⌈(n + 1)/2⌉.
3. The graph K(1,n) denotes a star graph on n + 1 vertices, n ≥ 1. Its edge irregularity strength is es(K(1,n)) = n.
4. Let G be a simple graph. Then tes(G) ≤ es(G).
5. Let Pn denote the path on n vertices. Then es(Pn) = ⌈n/2⌉.
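As an aside (our own illustration, not from the paper), items 2, 4, and 5 of the theorem can be verified by exhaustive search on tiny graphs: try every labeling with labels 1, . . . , k for increasing k until an edge irregular one appears. This brute force is feasible only for very small instances.

```python
# Brute-force total edge irregularity strength (tes) and edge irregularity
# strength (es) of a small graph given as an edge list on vertices 0..n-1.
from itertools import product
from math import ceil

def tes(n_vertices, edges, k_max=6):
    """Smallest k admitting an edge irregular total k-labeling."""
    for k in range(1, k_max + 1):
        for vlab in product(range(1, k + 1), repeat=n_vertices):
            for elab in product(range(1, k + 1), repeat=len(edges)):
                w = [vlab[u] + elab[i] + vlab[v] for i, (u, v) in enumerate(edges)]
                if len(w) == len(set(w)):   # all edge weights distinct
                    return k
    return None

def es(n_vertices, edges, k_max=6):
    """Smallest k admitting an edge irregular vertex k-labeling (wt(uv) = f(u) + f(v))."""
    for k in range(1, k_max + 1):
        for vlab in product(range(1, k + 1), repeat=n_vertices):
            w = [vlab[u] + vlab[v] for (u, v) in edges]
            if len(w) == len(set(w)):
                return k
    return None

star3 = [(0, 1), (0, 2), (0, 3)]          # S3 = K(1,3), so n = 3
path5 = [(0, 1), (1, 2), (2, 3), (3, 4)]  # P5, so n = 5
print(tes(4, star3), ceil((3 + 1) / 2))   # item 2: both are 2
print(es(5, path5), ceil(5 / 2))          # item 5: both are 3
```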

3.1 Working of Complete Tripartite Graph in Anti-theft Network
1. Scanner.
2. Barcode.
3. Hard disk containing the item stock.
4. Scanning process.
5. Details of the item displayed on the monitor.

Let the vertices v1, v2, v3 be the three computer scanners, each connected with the three automatic scanners and the three alarms. Let v4, v5, v6 be the vertices representing the three automatic scanners holding the item details, linked with the alarms and the computer scanners. The system has two options, a barcode-neutralizing option and an alarm option, combined with an exit option; on exit, both are neutralized. The vertices v7, v8, v9 are the alarms, connected with the automatic scanners and the computer scanners. When an item is scanned with a computer scanner, the alarm stays off; if the automatic scanner scans the item, the alarm rings.
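The wiring described above can be sketched as the complete tripartite graph K(3,3,3), one part per device type. The part names below follow this section's description; the construction only illustrates that every edge runs between different parts and no edge lies inside a part.

```python
# Anti-theft layout as K(3,3,3): computer scanners v1-v3, automatic scanners
# v4-v6, alarms v7-v9. Edges join vertices of different parts only.
parts = {"computer scanner": [1, 2, 3],
         "automatic scanner": [4, 5, 6],
         "alarm": [7, 8, 9]}

names = sorted(parts)  # fix an order on the three parts
edges = [(u, v)
         for i, a in enumerate(names) for b in names[i + 1:]  # unordered part pairs
         for u in parts[a] for v in parts[b]]

print(len(edges))  # 3*3 + 3*3 + 3*3 = 27 edges of K(3,3,3)
```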

4 Main Theorem
Let K(m,n,r) be a complete tripartite graph with m = n = r = 3. Then es(K(m,n,r)) = mnr + n + r.


Proof Let {y1, y2, y3, x1, x2, x3, z1, z2, z3} be the vertices of the complete tripartite graph K(m,n,r) such that the vertices are equally partitioned. Let ψ: V(K(m,n,r)) → {1, 2, . . . , mnr + n + r} be a vertex labeling such that
ψ(xi) = 2 + n(i − 1) for 1 ≤ i ≤ 3,
ψ(yi) = 12 + n(i − 1) for 1 ≤ i ≤ 3,
ψ(zi) = 23 + 5(i − 1) for 1 ≤ i ≤ 3.
Since the edge weights defined by the vertex labeling ψ lie in {2, 3, . . . , mnr + n + r}, the weights of all pairs of edges in the graph are distinct. Thus, the vertex labeling of the graph admits an edge irregular (mnr + n + r)-labeling. This completes the proof.

5 Conclusion
We conclude that edge irregularity strength exists in a complete tripartite graph. The anti-theft network based on the complete tripartite graph can be applied in textile shops, shopping malls, central railway stations, and jewelry malls.

References
1. Alon N, Kaplan G, Lev A, Roditty Y, Yuster R (2004) Dense graphs are antimagic. J Graph Theory 47:297–309
2. Sivakumar S, Nayak SR, Kumar A, Vidyanandini S (2018) An empirical study of supervised learning methods for breast cancer diseases. Optik 175:105–114
3. Vidyanandini S, Parvathi N, Sivakumar S (2018) On edge irregularity strength of complete graphs and complete bipartite graphs. Int J Pure Appl Math 119(14):341–344
4. Vidyanandini S, Parvathi N (2018) Square difference labeling for complete bipartite graphs and trees. Int J Pure Appl Math 118(10):427–434
5. Vidyanandini S, Parvathi N (2015) Graceful labeling for graph Pn  K2. Int J Sci Eng Res 6(3):96–198
6. Parvathi N, Vidyanandini S (2014) Graceful labeling of a tree from caterpillars. J Inf Optim Sci 35(4):387–393
7. Rajesh Kumar E, Rama Rao KVSN, Nayak SR (2020) Suicidal ideation prediction in Twitter data using machine learning techniques. J Interdiscip Math 23(1):117–125
8. Roy A, Parveen N, Razia S, Nayak SR, Chandra R (2020) Fuzzy rule based intelligent system for user authentication based on user behaviour. J Discrete Math Sci Cryptogr 23(2):409–417
9. Liang YC, Zhu X (2014) Antimagic labeling of cubic graphs. J Graph Theory 75:31–36
10. Ajitha V, Arumugam S, Geemina KA (2006) On square sum graphs. AKCE J Graphs Combin 6:1–10
11. Beineke L, Hegde SM (2000) Strongly multiplicative graphs. Discuss Math Graph Theory 21:63–75
12. Gallian JA (2010) A dynamic survey of graph labeling. Electron J Combin 7
13. Shiama J (2012) Permutation sum labeling for some shadow graphs. Int J Comput Appl 40(6)
14. West DB (2001) Introduction to graph theory. Prentice-Hall


15. Shiama J (2012) Square sum labeling for some middle and total graphs. Int J Comput Appl 37(4)
16. Shiama J (2012) Square difference labeling for some graphs. Int J Comput Appl 44(4)
17. Palai G, Nayak B, Sahoo SK, Nayak SR, Tripathy SK (2018) Metamaterial based photonic crystal fiber memory for optical computer. Int J Light Electron Optics 171:393–396
18. Nayak SR, Jena PM, Mishra J (2018) Fractal analysis of image sets using differential box counting techniques. Int J Inf Technol 10(1):39–47
19. Amiri IS, Al-Zubi JA, Nayak SR, Palai G (2020) Chip to chip communication through the photonic integrated circuit: a new paradigm to optical VLSI. Optik 202:1–6

Prediction of Currency Exchange Rate: Performance Analysis Using ANN-GA and ANN-PSO
Muskaan, Pradeepta Kumar Sarangi, Sunny Singh, Soumya Ranjan Nayak, and Akash Kumar Bhoi

Abstract Currency exchange prediction refers to advance knowledge of the currency exchange rate. It can be done by studying the behavior of historical data and applying mathematical, statistical, or machine learning approaches. A large number of techniques have been applied to predict the currency conversion rate. Nowadays, machine learning approaches are more popular due to their ability to produce more accurate predictions. The Artificial Neural Network (ANN) is a very popular technique in machine learning; however, its performance may be further improved by hybridizing it with optimization techniques, and Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) are two popular choices. This paper implements two hybrid machine learning models: an Artificial Neural Network optimized with a Genetic Algorithm (ANN-GA) and an Artificial Neural Network optimized with Particle Swarm Optimization (ANN-PSO). The data set used for the experiments is the currency exchange data of the Indian Rupee and the US Dollar. The results show that the ANN-PSO model performs better than the ANN-GA model for prediction of currency exchange. Keywords Machine learning · Currency exchange rate prediction · Financial forecasting · ANN-GA · ANN-PSO

Muskaan · P. K. Sarangi (B) · S. Singh
Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
S. R. Nayak
Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India
A. K. Bhoi
KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
Directorate of Research, Sikkim Manipal University, Gangtok 737102, Sikkim, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_29

1 Introduction
Currency market forecasting is one of the most prominent and complex problems involving time series [1, 2]. With the development of the efficient market hypothesis,


financial environments follow spontaneous paths and are thus uncertain [3, 4]. Hence, the need for models and competitive structures continues to attract much attention among researchers [5]. Similarly, a predictive framework capable of producing rates of return above the market indices would not only be significant evidence against the EMH but would also generate significant profits from financial operations. Machine learning applications play an important role in our daily life [6]. However, it is very difficult to predict price time series in forex markets, which are non-linear [7]: complex, unpredictable, noisy, and non-stationary series [8] affected by the overall economy, business, politics, and even investor psychology. Technical developments have made it possible to analyze broad historical data repositories with computational systems [9]. Machine learning is the term used to describe the widespread use of intelligent predictive models in computing. Figure 1 represents the techniques used for prediction. Machine learning techniques have a wide range of applications, such as electrical load forecasting [10, 11], forecasting the consumer price index [12], Indian export predictions [13], trend analysis for the Indian automobile industry [14], forex trend analysis [15], and many others [9, 16, 17]. The literature on currency exchange rate forecasting using machine learning is very wide and broad in terms of technological updates, new models, and methodologies.

Fig. 1 Prediction techniques


1.1 Organization of the Paper The paper is divided into different sections such as (i) Literature review and analysis of existing techniques, (ii) Objectives, (iii) Methodology, (iv) Data Preparation, (v) Result Analysis, and (vi) Conclusion.

2 Literature Review
Refenes et al. [18] designed and trained a system to predict the exchange rate between the US dollar and the Danish krone. Using a perceptron model, the authors showed that a neural network makes accurate predictions with careful network design. Kamruzzaman et al. [19] designed and investigated three Artificial Neural Network (ANN) based forecasting models for Australian foreign exchange, using three different techniques: standard backpropagation, scaled conjugate gradient, and backpropagation with Bayesian regularization. From their experimental results, the authors conclude that an ANN-based model can be reliable for forecasting the forex market. Yu et al. [20] proposed a model known as the Adaptive Smoothing Neural Network (ASNN) to forecast foreign exchange rates. Adaptive smoothing techniques were used in this model to automatically adjust the neural network's learning parameters by monitoring signals in dynamically changing environments; the ASNN model helps to speed up network training and convergence. According to the experimental results, the proposed model can be an important alternative method for forecasting foreign exchange rates. Patel et al. [21] stated that numerous factors influence currency movement; people have begun to use currency futures as a means of investment and can exchange them. The aim of their paper is to identify the major factors that affect currency rates, concentrating on formulas based on economic theory to determine the health of a currency and make useful predictions of the exchange rate. In another work, Galeshchuk et al. [22] implemented deep networks for the prediction of different currency pairs such as EUR, GBP, and JPY against the US dollar; the authors conclude that well-trained deep networks can perform better for currency predictions. The work proposed by Rout et al. [23] demonstrates the use of an adaptive ARMA model to forecast currency exchange rates; the authors conclude that their method is better than other methods in both long-range and short-range predictions. In their review paper, Islam et al. [24] explored various techniques used in forex markets at the international level. According to the authors, India has the


[Pie chart: NN 48%, SVM/SVR 23%, RF/Decision Trees 8%, Sentiment Analysis 6%, ARIMA 5%, Fuzzy Logic 5%, KNN 4%, NB 1%]

Fig. 2 Mostly used prediction techniques in the currency market. Data source: Islam et al. [24]

maximum number of publications on the foreign exchange market. The authors also report that ANN and SVM are the most used techniques. Through the literature review, we found that studies have been carried out on the basis of the forecast techniques used in the currency market. The techniques used for successful currency-market forecasting are shown in Fig. 2. From Fig. 2, it is observed that the most used technique is neural networks (48%), followed by SVM/SVR (23%). Other techniques, such as ARIMA/GARCH, Fuzzy Logic, and Decision Trees, are used rarely in comparison to these two methods. However, no hybrid techniques such as ANN with Genetic Algorithm (ANN-GA) or ANN with PSO (ANN-PSO) are reported. Hence, this work aims at implementing these two models for predicting the currency exchange rate.

3 Objectives
The work aims at the following objectives: • To design and implement two ANN-based hybrid models, i.e., ANN-GA and ANN-PSO, for currency exchange rate prediction. • Performance analysis of the proposed methods using the data set of INR versus USD currency conversion data.


4 Methodology
The methodology adopted is as below:
1. Preparation of the data set.
2. Pattern formation.
3. Implementation.
4. Comparison of the results.
The detailed methodology diagram is given in Fig. 3.

5 Preparation of Data
The data used in this work are daily observations of currency exchange rates (open, high, low, and close) downloaded from investing.com [25]. The data cover a period of five years (1 January 2016 to 25 January 2021). The collected data set is divided into training patterns, test patterns, and validation patterns; the validation data is a random selection of 10% of the training set (Table 1). The division of data into these categories is given in Fig. 4.
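The pattern preparation described above can be sketched as follows: each daily record (open, high, low) becomes one input pattern whose target is the close value. The min-max scaling to [0, 1] is our assumption about the normalization step (the paper shows normalized values in its tables but does not state the formula), and the three sample rows are illustrative, not the actual investing.com series.

```python
# Forming (open, high, low) -> close patterns with min-max scaling.
rows = [  # (open, high, low, close), illustrative INR/USD-style rates
    (66.209, 66.267, 66.130, 66.235),
    (66.165, 66.660, 66.165, 66.579),
    (66.599, 66.663, 66.428, 66.481),
]

lo = min(v for r in rows for v in r)
hi = max(v for r in rows for v in r)

def scale(v):
    return (v - lo) / (hi - lo)  # min-max normalization to [0, 1]

patterns = [([scale(o), scale(h), scale(l)], scale(c)) for o, h, l, c in rows]
print(len(patterns))  # 3 patterns, each with 3 inputs and 1 desired output
```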

6 Implementation Strategy
ANN, ANN-GA, and ANN-PSO have been implemented separately, keeping the architecture and population size the same for both hybrid methods. The best results in terms of Root Mean Squared Error (RMSE) have been compared.

6.1 ANN Implementation
The networks were trained using the back-propagation algorithm, and a variety of ANN architectures were implemented. The training is based on 1304 observations, with forecasts for the next 20 days; the RMSE was determined by comparing the forecasts with the observed values for those 20 days. Figure 5 shows the training of architecture 3-2-1 with learning rate 0.05 and momentum 0.5, and Fig. 6 depicts the actual and forecasted values of the trained model. The implementation results of the ANN model (architectures 3-2-1 and 3-3-1) are given in Table 2. From Table 2, it can be observed that the ANN architecture 3-2-1 performs better than 3-3-1; hence, the 3-2-1 architecture has been used for ANN-GA and ANN-PSO.
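The 3-2-1 network and the RMSE criterion mentioned above can be sketched as follows. The weights are arbitrary placeholders, not the trained values, and the transfer functions (sigmoid hidden units, linear output) are our assumption, since the paper does not state them.

```python
# Forward pass of a 3-2-1 feed-forward network and the RMSE score used to
# compare architectures.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # 3 inputs -> 2 sigmoid hidden units -> 1 linear output
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
         for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * hi for wo, hi in zip(w_out, h)) + b_out

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```

Training would adjust the weight lists to minimize `rmse` over the 1304 training patterns; in the hybrid models below, GA or PSO searches for those weights instead of back-propagation.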


Fig. 3 Flowchart of the methodology


Table 1 Division of data into training and test patterns

Currency indices | Observations | Training patterns (start to end) | Testing patterns (start to end)
INR versus USD   | 1324         | 01.01.2016 to 31.12.2020         | 01.01.2021 to 25.01.2021

Fig. 4 Flow diagram for data division and implementation strategy

Fig. 5 Training of architecture 3-2-1


Fig. 6 Actual and forecasted graph (3-2-1)

Table 2 Results of ANN implementations

Architecture | No. of training data | No. of testing data | RMSE
3-2-1        | 1304                 | 20                  | 0.4213
3-3-1        | 1304                 | 20                  | 0.4449

6.2 Implementation of ANN-GA
The implementation of ANN-GA has been carried out as below:
1. Creation of the architecture.
2. Initialization of the weight matrix with a population size of 10.
3. Applying the weights to the ANN to train the network and to calculate the error.
4. Continuing the process till the desired results are achieved (taking different population sizes such as 10, 20, and 30).
The implementation methodology is given in Fig. 7.
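The four steps above can be sketched as the loop below. The fitness function is a stand-in (a real run would return the negative training error of the 3-2-1 network for a given weight vector), and the operators (tournament selection, uniform crossover, Gaussian mutation, elitism) are our choices, since the paper does not specify them.

```python
# GA searching for the weight vector of a fixed 3-2-1 network.
import random

random.seed(0)
N_WEIGHTS = 3 * 2 + 2 + 2 * 1 + 1   # all weights and biases of a 3-2-1 net = 11

def fitness(w):
    # Stand-in objective: minimize ||w||^2. A real run would evaluate the ANN.
    return -sum(x * x for x in w)

def evolve(pop_size=10, generations=40, p_mut=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [max(pop, key=fitness)]                    # elitism: keep the best
        while len(nxt) < pop_size:
            a = max(random.sample(pop, 2), key=fitness)  # tournament selection
            b = max(random.sample(pop, 2), key=fitness)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]  # uniform crossover
            child = [x + random.gauss(0, 0.1) if random.random() < p_mut else x
                     for x in child]                     # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```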

6.3 Pattern Formation
The entire data set is divided into two types of patterns (training/validation patterns and test patterns). Each pattern has three inputs and one output. The detailed pattern formation is given in Table 3. The results obtained from the implementation of ANN-GA are given in Table 6.


Fig. 7 Implementation of ANN-GA

Table 3 Input data set of ANN-GA

Training patterns
             | Input 1 (open) | Input 2 (high) | Input 3 (low) | Desired
Pattern 1    | 0.2142857143   | 0.2142857143   | 0.2142857143  | 0.2142857143
Pattern 2    | 0.2142857143   | 0.2857142857   | 0.2142857143  | 0.2857142857
Pattern 3    | 0.2857142857   | 0.2857142857   | 0.2142857143  | 0.2142857143
Pattern 4    | 0.2142857143   | 0.2857142857   | 0.2142857143  | 0.2857142857
…            | …              | …              | …             | …
Pattern 1304 | 0.7142857143   | 0.7142857143   | 0.7142857143  | 0.7142857143

Test patterns
             | Input 1 (open) | Input 2 (high) | Input 3 (low) | Desired
Pattern 1    | 0.7142857143   | 0.7142857143   | 0.7142857143  | 0.7142857143
Pattern 2    | 0.7142857143   | 0.7142857143   | 0.7142857143  | 0.7142857143
…            | …              | …              | …             | …
Pattern 20   | 0.7142857143   | 0.7142857143   | 0.7142857143  | 0.7142857143


Fig. 8 Implementation of ANN-PSO Source Raza et al. [26]

6.4 Implementation of ANN-PSO
A simple ANN model trained with the back-propagation algorithm back-propagates the error to update the weights and biases of the network. In the hybrid ANN-PSO model, PSO is used to determine the optimum values of the ANN's weights and biases. The implementation design of the ANN-PSO model is given in Fig. 8.
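The ANN-PSO idea can be sketched as follows: each particle is one candidate weight vector for the network, and the swarm moves toward the personal and global bests of an error function. The error function here is a stand-in (a real run would plug in the ANN's training RMSE), and the inertia and acceleration parameters are commonly used defaults, not values from the paper.

```python
# PSO searching for the weight vector of a fixed 3-2-1 network.
import random

random.seed(1)
DIM = 11                              # weight count of the 3-2-1 network

def error(w):                         # stand-in for the network's training error
    return sum((x - 0.5) ** 2 for x in w)

def pso(n_particles=10, iters=100, inertia=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(n_particles)]
    vel = [[0.0] * DIM for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    gbest = min(pbest, key=error)[:]                  # global best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(DIM):
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if error(p) < error(pbest[i]):
                pbest[i] = p[:]
                if error(p) < error(gbest):
                    gbest = p[:]
    return gbest

best = pso()
```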

6.5 Input Data Set
Table 4 describes the input data set used for the implementation of the ANN-PSO model. The results obtained from the implementation of ANN-PSO are given in Table 6.

7 Results Analysis
The results obtained from the three experiments are shown in Table 5. It can be observed from Table 5 that ANN-GA and ANN-PSO perform better than the simple ANN model. Table 6 depicts the comparison of results of the hybrid models, i.e., ANN-GA and ANN-PSO.


Table 4 Input data set for ANN-PSO

Training patterns
             | Input 1 (open) | Input 2 (high) | Input 3 (low) | Desired
Pattern 1    | 66.209         | 66.267         | 66.13         | 66.235
Pattern 2    | 66.165         | 66.66          | 66.165        | 66.579
Pattern 3    | 66.599         | 66.663         | 66.428        | 66.481
Pattern 4    | 66.499         | 66.905         | 66.479        | 66.698
…            | …              | …              | …             | …
Pattern 1304 | 73.195         | 73.243         | 72.951        | 73.036

Test patterns
             | Input 1 (open) | Input 2 (high) | Input 3 (low) | Desired
Pattern 1    | 73.1           | 73.128         | 73.022        | 73.12
Pattern 2    | 73.106         | 73.134         | 72.83         | 73.07
…            | …              | …              | …             | …
Pattern 20   | 73.022         | 73.186         | 72.893        | 72.931

Table 5 Experimental results

Implementation | Best architecture | RMSE
ANN            | 3-2-1             | 0.4213
ANN-GA         | 3-2-1             | 0.086709
ANN-PSO        | 3-2-1             | 0.074564
(The same data set has been used for all three experiments.)

Table 6 Result comparison of the hybrid models (architecture 3-2-1)

Population size | ANN-GA   | ANN-PSO
10              | 0.217004 | 0.219327
20              | 0.231293 | 0.082591
30              | 0.086709 | 0.100433
40              | 0.226974 | 0.083651
50              | 0.12929  | 0.082552
60              | 0.128311 | 0.078431
70              | 0.144199 | 0.074564


From Table 6, it can be noticed that the hybrid ANN-PSO model is more efficient in predicting short-term currency values than the ANN-GA model. However, below are some observations made while implementing the different ANN and hybrid models. • Selection of appropriate training sets and network architectures affects the performance of the models. • Since foreign exchange markets constitute a very complex system, it is very difficult to decide the exact parameters required for the best analysis. • Other technical parameters can also affect the ANN architecture, such as the number of hidden layers and the type of transfer function, which help to improve the precision.

8 Conclusion and Future Scope
This work experiments with the implementation of an ANN model and two hybrid models, in which GA and PSO have been used to optimize the ANN weights. From the experiments, it is observed that, in the case of INR versus USD predictions, the ANN-PSO model performs better than the ANN and ANN-GA models. However, it is also observed that the performance of the models may vary when the architecture or the data set is changed; the same technique produces different results for varying numbers of training and test patterns. In this work, the performance of ANN-PSO was found to be better than that of ANN-GA under similar criteria: the same architecture, the same population size, and the same input data. As an extension of this work, the proposed models may also be verified with data of different granularities, such as daily, weekly, and monthly prices.

References
1. Singh N, Chauhan RK (2009) Short term load forecasting using neuro genetic hybrid approach: results analysis with different network architectures. J Theor Appl Inf Technol 7(8):109–116
2. Sinha D, Sinha S (2019) Financial modeling using ANN technologies: result analysis with different network architectures and parameters. Indian J Res Capital Markets 6(1):21–33
3. Kumar D, Sarangi PK, Verma R (2021) A systematic review of stock market prediction using machine learning and statistical techniques. Materials Today: Proceedings. https://doi.org/10.1016/j.matpr.2020.11.399
4. Muskaan, Sarangi PK (2020) A literature review on machine learning applications in financial forecasting. J Technol Manage Growing Econ 11(1):23–27. https://doi.org/10.15415/jtmge.2020.111004
5. Muskaan, Sarangi PK (2020) NSE stock prediction using ANN models. Int J Control Autom 13(4):552–559. http://sersc.org/journals/index.php/IJCA/article/view/16476
6. Datta P, Sharma B (2017) A survey on IoT architectures, protocols, security and smart city based applications. In: 2017 8th international conference on computing, communication and networking technologies (ICCCNT), pp 1–5. https://doi.org/10.1109/ICCCNT.2017.8203943
7. Zhang, Lin A, Shang P (2017) Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting. Physica A 477:161–173
8. Bezerra PCS, Albuquerque PHM (2017) Volatility forecasting via SVR–GARCH with mixture of Gaussian kernels. CMS 14(2):179–196
9. Chiang WC, Enke D, Wu T, Wang R (2016) An adaptive stock index trading decision support system. Expert Syst Appl 59:195–207
10. Gupta AK, Sarangi PK (2012) Electrical load forecasting using genetic algorithm based backpropagation method. J Eng Appl Sci 7(8):1017–1020
11. Singh N, Singh R (2009) Short term load forecasting using artificial neural network: a comparison with genetic algorithm implementation. J Eng Appl Sci 4(9):88–93. https://doi.org/10.1109/ICNGIS.2016.7854003
12. Singla C, Sahoo AK (2019) Modelling consumer price index: an empirical analysis using expert modeler. J Technol Manage Growing Econ 10(1):43–50. https://doi.org/10.15415/jtmge.2019.101004
13. Singh S, Sarangi PK (2014) Growth rate of Indian spices exports: past trend and future prospects. Apeejay J Manage Sci Technol 2(1):29–34
14. Pant M, Bano S, Sarangi PK (2014) Future trend in Indian automobile industry: a statistical approach. Apeejay J Manage Sci Technol 1(2):28–32
15. Sarangi PK, Chawla M, Ghosh P, Singh S, Singh PK (2021) FOREX trend analysis using machine learning techniques: INR vs USD currency exchange rate using ANN-GA hybrid approach. Materials Today: Proceedings. https://doi.org/10.1016/j.matpr.2020.10.960
16. Sarangi PK, Sarangi P (2010) Load forecasting using artificial neural network: performance evaluation with different numbers of hidden neurons. IUP J Inf Technol 6(1):34–42
17. Sarangi PK, Sarangi P (2010) Short-term load forecasting using neural network technology. IUP J Comput Sci 4(2):15–23
18. Refenes AN, Azema-Barac M, Chen L, Karoussos SA (1993) Currency exchange rate prediction and neural network design strategies. Neural Comput Appl 1(1):46–58. https://doi.org/10.1007/bf01411374
19. Kamruzzaman J, Sarker RA (2003) Forecasting of currency exchange rates using ANN: a case study. In: Proceedings of the 2003 international conference on neural networks and signal processing, vol 1, pp 793–797
20. Yu L, Wang S, Lai KK (2005) Adaptive smoothing neural networks in foreign exchange rate forecasting. Comput Sci 523–530. https://doi.org/10.1007/11428862_72
21. Patel PJ, Patel NJ, Patel AR (2014) Factors affecting currency exchange rate, economical formulas and prediction models. Int J Appl Innov Eng Manage (IJAIEM) 3(3):53–56
22. Galeshchuk S, Mukherjee S (2017) Deep networks for predicting direction of change in foreign exchange rates. Intell Syst Account Finance Manage 24(3). https://doi.org/10.1002/isaf.1404
23. Rout M, Majhi B, Majhi R, Panda G (2014) Forecasting of currency exchange rates using an adaptive ARMA model with differential evolution based training. J King Saud Univ Comput Inf Sci 26(1):7–18. https://doi.org/10.1016/j.jksuci.2013.01.002
24. Islam MS, Hossain E, Rahman A, Hossain MS, Andersson K (2020) A review on recent advancements in FOREX currency prediction. Algorithms 13(8):186. https://doi.org/10.3390/a13080186
25. https://in.investing.com/currencies/usd-inr-historical-data. Downloaded on 31 Jan 2021
26. Raza MQ, Nadarajah M, Hung DH, Baharudin Z (2017) An intelligent hybrid short-term load forecasting model for smart power grids. Sustain Cities Soc 31:264–275. https://doi.org/10.1016/j.scs.2016.12.006

Gurmukhi Numerals Recognition Using ANN
Pradeepta Kumar Sarangi, Ashok Kumar Sahoo, Gagandeep Kaur, Soumya Ranjan Nayak, and Akash Kumar Bhoi

Abstract Computer recognition of handwritten characters has become a basic requirement nowadays and has been a subject of intensive research for the last few decades. The use of Artificial Intelligence (AI) techniques for recognizing handwritten characters plays an important role in bringing human and machine closer. Gurmukhi, a religion-specific script that originated in India, is one of the most popular writing scripts in the world. The proposed approach applies a neural network model to recognize handwritten Gurmukhi numerals. The data set consists of 1500 numerals that have been tested on the neural network. For classification, 144 binary features have been extracted from each digit, and a recognition accuracy of 93.66% is reported. Keywords Gurmukhi characters · Handwritten character recognition · Neural network · Machine learning

P. K. Sarangi · G. Kaur
Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
A. K. Sahoo
Graphic Era Hill University, Dehradun, India
S. R. Nayak (B)
Amity School of Engineering and Technology, Amity University Uttar Pradesh, Noida, India
A. K. Bhoi
KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
Directorate of Research, Sikkim Manipal University, Gangtok 737102, Sikkim, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_30

1 Introduction
Since its invention, the computer has not only improved its own capabilities but also touched many aspects of human needs [1]. For the past few decades, extensive applications of computers have been reported in our daily life [2, 3]. With growing demand and a wide range of application areas, computing systems are being continuously upgraded and adapted with new technologies such as artificial intelligence [4] in

377


P. K. Sarangi et al.

Fig. 1 Gurmukhi numerals (0–9)

recent times. The recognition of handwritten scripts has been an area of intensive research due to its various application aspects, and the wide scope of this field has attracted many national and international researchers [5]. Computer recognition of handwritten digits is one of the human requirements that has been the subject of intensive research over the past decades, and it is still far from perfect. Machine learning is the family of techniques used to solve these types of problems [6, 7]. Machine learning has a wide range of applications, including the financial sector [8, 9], the business sector [10, 11], natural language processing [12, 13], and many others [14–17]. Optical Character Recognition (OCR), which underpins document retrieval, is one such area with large demand at both the public and government levels. Gurmukhi is a popular script in India, used mostly by Punjabi speakers in the northern part of the country, and it reflects a long history and significance. There is a need to apply OCR techniques to the Gurmukhi script, and to date no standard technique is available that applies to all types of scripts. However, a quasi-general approach can achieve reasonably high accuracy. The numerals of the Gurmukhi script are shown in Fig. 1.

2 Literature Review

Aggarwal et al. [18] proposed two methods for extracting gradient features from Gurmukhi numerals and other characters. The authors used both Gurmukhi characters and Gurmukhi numerals; with a support vector machine as the classifier, they reported an accuracy of 97.38% for Gurmukhi characters and 99.65% for numerals. In another work, Kaur et al. [19] implemented convolutional neural networks for the recognition of printed Gurmukhi numerals; using the K-means and HOG algorithms, they report accuracy in an acceptable range. A recognition system for off-line handwritten Gurmukhi characters was proposed by Kumar et al. [20], who used LIBSVM and k-NN classifiers and reported an accuracy of 98.06%.

Gurmukhi Numerals Recognition …


Kaur and Rani [21] proposed a classifier for text recognition of complex handwritten Gurmukhi document images. To build the feature sets for a given character, the authors used Diagonal, Horizontal Peak Extent, and Zoning feature extraction techniques. Using zoning-based features, the proposed system achieved a maximum recognition accuracy of 92.08% with 90% of the data used for training and the remaining 10% for testing. Siddharth et al. [22] used an SVM with a radial basis function kernel to classify Gurmukhi numerals; they implemented various feature sets and reported a highest accuracy of 99.2% using projection histogram features. For feature extraction from 3500 Gurmukhi characters, Mahto et al. [23] investigated the importance of HOG and PHOG features. The results showed that PHOG features outperformed HOG: with PHOG features, the k-NN classifier achieved 98.0% accuracy, and an SVM classifier with a linear kernel achieved 99.1%. Rekha [24] presented a survey on the application of different feature sets to both printed and handwritten Gurmukhi characters, finding that while a significant amount of work has been done on printed characters, much less has been done on handwritten ones. Sinha et al. [25] presented an overview of various feature extraction techniques for isolated Gurmukhi numerals and characters; using a zone-based feature extraction approach and an SVM classifier, they report an accuracy of 99.73%. Aggarwal et al. [26] experimented with Gurmukhi characters using Zernike moments; they used two mapping schemes, outer circle and inner circle, and concluded that outer-circle mapping performs better. Handwritten character recognition for Gurmukhi digits has also been proposed by Singh et al. [27], who used 32 × 64 pixel binary images.
Using a multilayer neural network as the classifier, the authors reported an accuracy level of 88.83%. The observations from the above review are as follows:

• More research has been done on printed characters than on handwritten characters, particularly handwritten numerals in the case of the Gurmukhi script.
• No standard technique is defined that works for all scripts.
• No standard dataset is available for the Gurmukhi script.
• Recognition accuracy depends on the feature selection, the classifier used, and the data set.

3 Objective

One of the primary objectives of this research is to create a standard dataset for research in the direction of Gurmukhi character recognition. The work in this


Fig. 2 Sample data used in the experiments

research is the implementation of a multilayer perceptron model to recognize handwritten Gurmukhi numerals. A further objective is to produce better results than those reported by other researchers in this area to date.

4 Methodology and Research Design

The methodology adopted in this work is as follows:
a. Creation of the dataset
b. Pre-processing of the dataset
c. Resizing the numerals to 12 × 12 pixels
d. Division of the data into training and testing sets (80% training and 20% testing)
e. Feature extraction
f. Classification through an MLP model using MATLAB.

5 Data Collection

The data used in this work consists of Gurmukhi numerals created in Microsoft Paint using the mouse pointer. A total of 1500 numerals (150 for each digit 0–9) have been generated. A sample of the created data is given in Fig. 2. All numerals are resized to 12 × 12 pixels, making the size of the feature vector 144. Each numeral class is divided into training and test sets.
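A minimal Python sketch of this preprocessing pipeline (nearest-neighbour resize to 12 × 12, binarization, flattening into a 144-element feature vector, and a per-class 80/20 split). The helper names and the synthetic test image are illustrative assumptions, not the authors' MATLAB code:

```python
import numpy as np

def resize_nearest(img, size=12):
    """Nearest-neighbour resize of a 2-D grayscale array to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def numeral_to_features(img):
    """Binarize a resized numeral and flatten it into the
    144-element binary feature vector used in this work."""
    small = resize_nearest(img, 12)
    return (small > 127).astype(np.uint8).ravel()

def train_test_split_per_class(X, y, train_frac=0.8, seed=0):
    """Split each numeral class 80/20, as described above."""
    rng = np.random.default_rng(seed)
    tr, te = [], []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        rng.shuffle(idx)
        cut = int(train_frac * len(idx))
        tr.extend(idx[:cut])
        te.extend(idx[cut:])
    return np.array(tr), np.array(te)

# Synthetic 48x48 "numeral" standing in for a Paint-drawn image
img = np.zeros((48, 48), dtype=np.uint8)
img[8:40, 20:28] = 255  # a vertical stroke
features = numeral_to_features(img)
print(features.shape)  # (144,)
```

A real run would load the 1500 Paint-drawn numeral images in place of the synthetic stroke; with 150 images per digit, the split yields 1200 training and 300 test samples.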

6 Implementation and Results Analysis

The MLP model has been implemented in MATLAB. The steps followed are:
• Creation of the neural network


• Training of the network with correct patterns
• Training of the network with noisy patterns
• Testing of the network

Figures produced during training, testing, and performance evaluation of the network, together with its outputs, are given in Figs. 3, 4, 5, 6, 7 and 8. Each numeral was tested by providing it as input to the neural network. Each output was classified into one of three categories. Correct classification: the input was recognized as the correct class; Misclassification: the input was recognized as another class; Erroneous output: when it was

Fig. 3 Sample training patterns

Fig. 4 a Original pattern, b noisy pattern


Fig. 5 Neural network training

difficult to recognize the output as belonging to any one class. The summary of the result analysis is given in Table 1. From this analysis, it can easily be observed that, except for numerals 7 and 8, all numerals achieved an accuracy of more than 90%. The lower accuracy of numerals 7 and 8 could be due to their similarity with other numerals, as can be seen in the confusion matrix. The confusion matrix for the results obtained from the Gurmukhi character recognition system is presented in Table 2. From the confusion matrix analysis, it can be observed that numeral 1 was recognized as numeral 7 five times. Similarly, numeral 2 was recognized as numeral 7 eight times, numeral 3 as numeral 4 twelve times, numeral 4 as numeral 3 seven times, numeral 7 as numeral 1 fourteen times, and numeral 8 as numeral 9 twenty-four times.
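The three-way categorization of network outputs described here can be sketched as a simple decision rule in Python; the 0.5 acceptance threshold below is an assumption for illustration, since the paper does not state how erroneous outputs were detected:

```python
import numpy as np

def categorize(output, true_label, threshold=0.5):
    """Assign a network output vector (one score per numeral class)
    to one of the three result categories used in this analysis.

    - 'correct': the winning class matches the true label
    - 'misclassified': a different class wins clearly
    - 'erroneous': no class score clears the threshold, so the output
      cannot be assigned to any one class (threshold is an assumption)
    """
    if output.max() < threshold:
        return "erroneous"
    pred = int(np.argmax(output))
    return "correct" if pred == true_label else "misclassified"

# Example outputs for an input numeral '8' (class index 8)
print(categorize(np.array([0, 0, 0, 0, 0, 0, 0, 0, .9, .1]), 8))  # correct
print(categorize(np.array([0, 0, 0, 0, 0, 0, 0, 0, .2, .8]), 8))  # misclassified
print(categorize(np.array([.1] * 8 + [.2, .2]), 8))               # erroneous
```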


Fig. 6 Training performance

Fig. 7 Training state


Fig. 8 (Left) Testing pattern (right) recognized pattern

Table 1 Recognition accuracy of proposed model

| Input pattern | Number of input images | Erroneous output | Misclassification | Correct classification | Recognition accuracy (%) |
|---|---|---|---|---|---|
| 0 | 150 | 2 | 4 | 144 | 96.00 |
| 1 | 150 | 4 | 3 | 143 | 95.33 |
| 2 | 150 | 5 | 3 | 142 | 94.67 |
| 3 | 150 | 3 | 5 | 142 | 94.67 |
| 4 | 150 | 6 | 6 | 138 | 92.00 |
| 5 | 150 | 5 | 6 | 139 | 92.67 |
| 6 | 150 | 4 | 4 | 142 | 94.67 |
| 7 | 150 | 3 | 10 | 137 | 91.33 |
| 8 | 150 | 2 | 16 | 132 | 88.00 |
| 9 | 150 | 2 | 2 | 146 | 97.33 |

Overall accuracy (proposed model): 93.66%

7 Conclusion

Recognition of handwritten scripts is a challenging task due to variations in writing style and size. Machine learning techniques have shown promising results, but no standard technique suitable for all types of scripts is available to date. In India, research on regional scripts is not much explored. This work implements an ANN model for one such regional script, Gurmukhi. The overall accuracy is limited to 93.67%, which is within an acceptable range. The highest accuracy rate is reported for the numeral NINE, as it has a unique writing style and is entirely different from the other numerals except EIGHT. A low accuracy rate of 88% is reported for the digit EIGHT. Because of its similarity of around 80% with the digit NINE,


Table 2 Confusion matrix achieved from proposed model

|   | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 0 | 0 | 142 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0 | 142 | 0 | 0 | 0 | 0 | 0 | 0 |
| 4 | 0 | 0 | 0 | 0 | 138 | 0 | 0 | 0 | 0 | 0 |
| 5 | 0 | 0 | 0 | 0 | 0 | 139 | 0 | 0 | 0 | 0 |
| 6 | 0 | 0 | 0 | 0 | 0 | 0 | 142 | 0 | 0 | 0 |
| 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 137 | 0 | 0 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 132 | 0 |
| 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 146 |
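Since each class has 150 test images, the per-class and overall accuracies of Table 1 follow directly from the diagonal of the confusion matrix; a quick check in Python:

```python
import numpy as np

# Correct classifications per numeral 0-9 (diagonal of Table 2)
diag = np.array([144, 143, 142, 142, 138, 139, 142, 137, 132, 146])

per_class = 100 * diag / 150        # recognition accuracy per numeral
overall = 100 * diag.sum() / 1500   # overall accuracy over all 1500 tests

print(per_class.round(2))  # numeral 8 -> 88.0, numeral 9 -> 97.33, etc.
print(round(overall, 2))   # 93.67 (rounded to 93.66% elsewhere in the paper)
```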

it has a low level of accuracy. In the current research, an approach for the recognition of handwritten numerals has been carried out, which can be extended by the authors and other researchers in the future.

References

1. Singh N, Chauhan RK (2009) Short term load forecasting using neuro genetic hybrid approach: results analysis with different network architectures. J Theor Appl Inf Technol 7(8):109–116
2. Sinha D, Sinha S (2019) Financial modeling using ANN technologies: result analysis with different network architectures and parameters. Indian J Res Capital Markets 6(1):21–33
3. Kumar D, Sarangi PK, Verma R (2021) A systematic review of stock market prediction using machine learning and statistical techniques. Mater Today Proc. https://doi.org/10.1016/j.matpr.2020.11.399
4. Muskaan, Sarangi PK (2020) A literature review on machine learning applications in financial forecasting. J Technol Manage Growing Econ 11(1):23–27. https://doi.org/10.15415/jtmge.2020.111004
5. Muskaan, Sarangi PK (2020) NSE stock prediction using ANN models. Int J Control Autom 13(4):552–559. Retrieved from http://sersc.org/journals/index.php/IJCA/article/view/16476
6. Datta P, Sharma B (2017) A survey on IoT architectures, protocols, security and smart city based applications. In: 2017 8th international conference on computing, communication and networking technologies (ICCCNT), pp 1–5. https://doi.org/10.1109/ICCCNT.2017.8203943
7. Singh N, Singh R (2009) Short term load forecasting using artificial neural network: a comparison with genetic algorithm implementation. J Eng Appl Sci 4(9):88–93. https://doi.org/10.1109/ICNGIS.2016.7854003
8. Singla C, Sahoo AK (2019) Modelling consumer price index: an empirical analysis using expert modeler. J Technol Manage Growing Econ 10(1):43–50. https://doi.org/10.15415/jtmge.2019.101004
9. Singh S, Sarangi PK (2014) Growth rate of Indian spices exports: past trend and future prospects. Apeejay J Manage Sci Technol 2(1):29–34
10. Pant M, Bano S, Sarangi PK (2014) Future trend in Indian automobile industry: a statistical approach. Apeejay J Manage Sci Technol 1(2):28–32


11. Sarangi PK, Chawla M, Ghosh P, Singh S, Singh PK (2021) FOREX trend analysis using machine learning techniques: INR vs USD currency exchange rate using ANN-GA hybrid approach. Mater Today Proc. https://doi.org/10.1016/j.matpr.2020.10.960
12. Ahmed P, Sahoo AK, Sarangi PK (2012) Recognition of isolated handwritten Oriya numerals using Hopfield neural network. Int J Comput Appl 40(8):36–42
13. Sarangi PK, Ahmed P, Ravulakollu KK (2014) Naïve Bayes classifier with LU factorization for recognition of handwritten Odia numerals. Indian J Sci Technol 7(1):35–38
14. Sarangi PK, Ahmed P (2013) Recognition of handwritten Odia numerals using artificial intelligence techniques. Int J Comput Sci 2(02):35–38
15. Singh S, Sarangi PK, Singla C, Sahoo AK (2020) Odia character recognition system: a study on feature extraction and classification techniques. Mater Today Proc 34:742–747. https://doi.org/10.1016/j.matpr.2020.04.680
16. Singla C, Sarangi PK, Sahoo AK, Singh PK (2020) Deep learning enhancement on mammogram images for breast cancer detection. Mater Today Proc. https://doi.org/10.1016/j.matpr.2020.10.951
17. Bindal R, Sarangi PK, Kaur G, Dhiman G (2019) An approach for automatic recognition system for Indian vehicles numbers using k-nearest neighbours and decision tree classifier. Int J Adv Sci Technol 28(9):477–492
18. Aggarwal A, Singh K, Singh K (2015) Use of gradient technique for extracting features from handwritten Gurmukhi characters and numerals. Procedia Comput Sci 46:1716–1723
19. Kaur D, Kaur R (2016) Machine printed Gurumukhi numerals recognition using convolutional neural networks. Int Res J Eng Technol (IRJET) 3(8):555–558
20. Kumar M, Jindal MK, Sharma RK, Jindal SR (2020) Performance evaluation of classifiers for the recognition of offline handwritten Gurmukhi characters and numerals: a study. Artif Intell Rev 53(3):2075–2097
21. Kaur H, Rani S (2017) Handwritten Gurumukhi character recognition using convolution neural network. Int J Comput Intell Res 13(5):933–943
22. Siddharth KS, Dhir R, Rani R (2011) Handwritten Gurmukhi numeral recognition using different feature sets. Int J Comput Appl 28(2). https://doi.org/10.5120/3361-4640
23. Mahto MK, Bhatia K, Sharma RK (2018) Robust offline Gurmukhi handwritten character recognition using multilayer histogram oriented gradient features. Int J Comput Sci Eng 6(6):915–925
24. Rekha A (2012) Offline handwritten Gurmukhi character and numeral recognition using different feature sets and classifiers—a survey. Int J Eng Res Appl 2(3):187–191
25. Sinha G, Rani R, Dhir R (2012) Handwritten Gurmukhi numeral recognition using zone based hybrid feature extraction techniques. Int J Comput Appl 47(21):24–29
26. Aggarwal A, Singh C (2016) Zernike moments-based Gurumukhi character recognition. Appl Artif Intell 30(5):429–444. https://doi.org/10.1080/08839514.2016.1185859
27. Singh P, Budhiraja S (2012) Offline handwritten Gurmukhi numeral recognition using wavelet transforms. Int J Modern Educ Comput Sci 34–39

A Review on Internet of Things in Healthcare Applications Abhinav Kislay, Prabhishek Singh, Achyut Shankar, Soumya Ranjan Nayak, and Akash Kumar Bhoi

Abstract The Internet of Things (IoT) has attracted considerable attention in recent years because of its practical benefits. In healthcare, it helps to enhance services strained by a growing population and a rise in disease. This survey reviews research on the strengths, weaknesses, and overall suitability of IoT for modern healthcare systems. Several challenges that IoT in healthcare must face, such as security, privacy, wearability, and low-power operation, are presented and recommended for future research. It is often said that the internet has fundamentally changed society, but the greatest changes are still ahead of us. Many technologies are now converging in a way that puts the internet on the path of substantial expansion, as objects large and small get connected and assume their own identities. After servers and personal computers became globally connected, and after the internet reached mobile phones, the next phase is the internet of things, in which things big and small are connected to and managed through the virtual world. Smart IoT devices can help provide facilities for remote access to health monitoring and support emergency notification systems. IoT has many sensible applications that make the healthcare system smarter. In the modern healthcare system, policies and strategies support researchers and scientists, and IoT and smart devices are rapidly upgrading the technologies that help them. This paper discusses the connection between IoT technologies and the healthcare system and how that connection makes healthcare more advanced. To monitor healthcare more easily and effectively, we need smart devices and smart objects that reduce the inefficiency of existing healthcare equipment.
Healthcare based on IoT is much enhanced and refurbished compared with the traditional healthcare and medical system.

A. Kislay · P. Singh (B) · A. Shankar · S. R. Nayak Amity School of Engineering & Technology, Amity University Uttar Pradesh, Noida, India
A. K. Bhoi KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
Directorate of Research, Sikkim Manipal University, Gangtok 737102, Sikkim, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_31

387


A. Kislay et al.

Keywords Service healthcare · Future research · Internet of things · Substantial expansion · Monitoring · Smart devices · Smart healthcare

1 Introduction

IoT is a concept that links anyone, anything, anywhere, over any service or protocol. IoT is set to be a leading next-generation technology that can change the entire world of technology and the business spectrum, as it connects smart objects and devices through today's internet with extended benefits [1]. These benefits include advanced connectivity of devices that goes beyond machine-to-machine communication. Here we consider the case of healthcare. Healthcare is a very important part of life, or we may say a major issue of our lives. Unfortunately, with the increasing population, various diseases spread throughout the community, giving rise to illness [2]. This increases the demand for hospital resources, so the load on doctors and nurses is extremely high, and a solution is required to reduce the pressure on the healthcare system [3]. It has been broadly recognized that IoT has the power to alleviate this pressure, and various studies have focused on it. Related research often serves a specific purpose, such as aiding rehabilitation by monitoring a patient's progress. Emergency healthcare has also been considered in related work, but so far not widely. This survey provides a comprehensive overview of IoT in healthcare: how it is useful, its various applications, and how it can change the entire healthcare system, make it more interactive, and become a boon in the future. In this report, by introducing IoT, we present a concept in healthcare that is important to discuss: the medical internet of things, or internet of health things [4].
We analyse how the internet of things has various advantages in the field of medicine and healthcare, where it will have broad prospects of application. With the support of IoT in healthcare, the entire healthcare system can change [5] (Fig. 1).

1.1 IoT in Healthcare

Internet applications are growing rapidly day by day, and IoT is a technology through which we can produce useful applications. IoT is a network in which physical objects are connected to the internet; it is a very smart way to connect physical devices. The technology also has many features that allow devices to be controlled without human interaction [6].


Fig. 1 IoT in healthcare

The IoT connects physical devices and everyday objects. These are equipped with electronics, connectivity, and other hardware such as sensors, so that the devices can communicate with each other over the internet [6]. IoT brings together the convergence of many technologies: real-time analytics, machine learning, sensors, and embedded systems. In the consumer market, IoT technology is most visible in products built around the concept of the smart home, with connected devices and appliances that support one or more ecosystems and can be controlled through associated devices such as smartphones and smart speakers (Fig. 2).

The Internet of Health Things (IoHT) comprises IoT-based solutions that provide network connectivity between the patient and healthcare facilities, for example IoT-based e-heart and electrophoresis devices, diabetes monitoring, and many other kinds of monitoring covering pulse, blood oxygen, blood pressure, accelerometry, and more [7]. Input data is taken from the patient, information is gathered from the various devices, and the details are shown on any device through user-friendly applications on, for example, a computer, smartphone, smartwatch, or smart embedded device [8]. IoHT supports many medical conditions, including patient care, detection of chronic disease, and the management of personal health and fitness, among others. IoHT can be classified into four general categories:

1. Remote healthcare monitoring
2. Healthcare solutions based on smartphones
3. Ambient assisted living
4. Wearable devices.


Fig. 2 Healthcare and trends

1.2 Cloud-Based IoT in Healthcare Services

Cloud technology has been trending in recent years; because of its benefits for big-data storage and analysis, many works have built on it using cloud computing technologies. For IoT devices such as smartwatches and mobile phones, cloud computing is the main enabler that makes IoT a boon for the healthcare industry, since it stores all the vast data in the cloud, from where the data can be explored through embedded devices. For smartphones or any embedded device, this is very useful for data storage and data processing, which is the key advantage of cloud technologies [9]. The use of cloud technology for health resources is valuable because all the data passes through the cloud. Storage must be managed with a focus on how the data will be used for analysis and for determining trends. Each of these areas provides something valuable to the field, and each comes with advantages, challenges, and opportunities. In this section we have outlined how cloud technology is essential, as all data and reports in IoT are stored in it and can be operated on from any device [10].


1.3 Architecture of IoT

The theory behind the conception of the internet of things is considerably more complex and powerful than it first appears. To understand the concept properly, we have to step through the stages of the IoT architecture and make it a little simpler, especially in terms of IoT device management. The fundamentals of each architecture and the general flow of data are almost the same [11]. The four layers of the IoT architecture are described in detail below (Fig. 3):

1.3.1 Things, Sensors, and Controllers

The basic requirement of any IoT system is data. This layer consists of the things (devices, machines, tools, cars, etc.) that are attached to the internet through sensors and actuators, which take information and pass it to the IoT gateway. These are primarily data sources and should be protected end to end. The other element of this layer is the actuator: working alongside the sensors, actuators take data from smart objects and transform it into physical action [10].

1.3.2 Gateway and Data Acquisition

Although this layer works with the sensors and actuators, it is a separate division in the IoT architecture because it is essential to the data-collection process: it collects huge amounts of unprocessed data, filters it, and then

Fig. 3 Architecture of IoT


converts it into digital streams ready for analysis and transfer to the edge infrastructure and cloud-based platforms [11].

1.3.3 Edge Analytics

Edge devices bring various benefits to large-scale IoT projects. Where accessibility is limited and data transfer speeds are not good, an edge system provides faster response times and flexibility in analysing and processing the data. Edge computing has recently seen a sudden increase in popularity in the industrial internet-of-things ecosystem. Edge infrastructure is located physically closer to the data source, which makes data access easier and faster. At this stage, only the portion of data that really needs the power of the cloud is processed and forwarded, minimizing network exposure [8].
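The data flow through the four layers just described can be illustrated with a toy, self-contained Python simulation; the pulse readings, the dropped sample, and the alarm threshold are invented for illustration and do not come from any real device:

```python
# Toy simulation of the four IoT layers described above.
# All readings and thresholds are illustrative assumptions.

def sensor_layer():
    """Layer 1: a 'pulse sensor' producing raw readings (bpm)."""
    return [72, 75, None, 180, 74]  # None models a dropped sample

def gateway_layer(raw):
    """Layer 2: filter out unusable samples before transmission."""
    return [r for r in raw if r is not None]

def edge_layer(samples, alarm_bpm=150):
    """Layer 3: fast local analysis - raise alerts near the source."""
    return [("ALERT" if s > alarm_bpm else "ok", s) for s in samples]

def cloud_layer(events, store):
    """Layer 4: persist everything for long-term trend analysis."""
    store.extend(events)
    return store

store = []
events = cloud_layer(edge_layer(gateway_layer(sensor_layer())), store)
print(events)  # [('ok', 72), ('ok', 75), ('ALERT', 180), ('ok', 74)]
```

Note how the abnormal reading is flagged at the edge, before anything reaches the cloud, which is the response-time advantage this section describes.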

2 Applications of IoT in Healthcare

IoT is capable of performing various types of healthcare services, each of which can independently provide many healthcare solutions. This paper argues that each service is in some way general in nature and has the strength to support a set of solutions. In addition, it should be noted that general services and their protocols working with IoT need to be modified so they can perform even better in the healthcare field. These include resource-sharing services, internet services, notification alerts, and more. A discussion of such general services is beyond the present scope, but the literature cited in this paper can help interested readers learn more about the topic [5]. There are various IoT service devices in healthcare; some of them are as follows:

• Remote patient monitoring
• Glucose monitoring
• Hand hygiene devices
• Contact-less thermometers

3 IoT Growth and Development

As an emerging technology, IoT attracts ever more developers, customers, researchers, and even writers. The growth in the number of customers interested in IoT-based devices can be seen in the graph of devices shipped globally from 2015 to 2020 [11] (Fig. 4).


Fig. 4 IoT growth and development

3.1 Limitations and Challenges

Every coin has two sides: alongside its many advantages, IoT meets several challenges and limitations along the way, including:

• Compatibility
• Complexity
• Privacy/Security
• Safety

4 Conclusion

This paper concludes that, despite some disadvantages, IoT offers many advantages that may totally change the view of health in the future. It opens a world of opportunities in medicine and the field of healthcare. Ordinary medical devices can collect invaluable additional data, giving extra detail for the analysis of symptoms and trends, enabling remote care, and giving patients more insight into their diseases and more control over their lives and treatment; it also helps the doctors. It changes the medium through which facilities are provided to the healthcare industry: through this technology, many products used in the healthcare sector get upgraded and made easily communicable. We can say that the day is not far off when we can easily get treatment while sitting at home; without going anywhere, all our reports are easily delivered to the doctor and


even to our own devices. We will not have to travel a long distance just to show our health reports or to deal with a minor issue; it will be as simple as sending a message to anyone. All our health reports will be easily accessible, and we will get updates regarding our health status.

References

1. Baker SB, Xiang W, Atkinson I (2017) Internet of things for smart healthcare: technologies, challenges, and opportunities. IEEE Access 5:26521–26544
2. Strojnik P, Peckham PH (2000) The biomedical engineering handbook. CRC Press, Boca Raton, FL, USA
3. Heo SP, Noh DH, Moon CB, Kim DS (2015) Trend of IoT-based healthcare service. IEMEK J Embed Syst Appl 10(4):221–231
4. Johnson L (2001) A new method for pulse oximetry processing inherent insensitivity to artifact. IEEE Trans Biomed Eng 5:2110–2118
5. IEEE Standards Association (2011) IEEE Std 802.15.1-2005—Part 15.1: wireless medium access control (MAC) and physical layer (PHY) specifications for wireless personal area networks (WPANs). https://doi.org/10.1109/IEEESTD.2005.96290. Retrieved 30 June 2011
6. Dhaundiyal R, Tripathi A, Joshi K, Diwakar M, Singh P (2020) Clustering based multi-modality medical image fusion. J Phys Conf Ser 1478(1):012024 (IOP Publishing)
7. Singh P, Shree R (2016) Importance of DWT in despeckling SAR images and experimentally analyzing the wavelet based thresholding techniques. Int J Eng Sci Res Technol 5(10)
8. Singh P, Shree R (2016) Speckle noise: modelling and implementation. Int J Control Theory Appl 9(17):8717–8727 (International Science Press)
9. Singh P, Diwakar M, Shankar A, Shree R, Kumar M (2021) A review on SAR image and its despeckling. Arch Comput Methods Eng 1–21
10. Singh P, Shankar A (2021) A novel optical image denoising technique using convolutional neural network and anisotropic diffusion for real-time surveillance applications. J Real-Time Image Process 1–18
11. Tyagi T, Gupta P, Singh P (2020) A hybrid multi-focus image fusion technique using SWT and PCA. In: 2020 10th international conference on cloud computing, data science & engineering (confluence), pp 491–497 (IEEE)

Inter-IC Sound (I2S) Interface for Dual Mode Bluetooth Controller T. Prajwal and K. B. Sowmya

Abstract The Inter-IC Sound (I2S or IIS) interface is the most common serial interface used in applications where digital audio data is transferred from one Integrated Circuit (IC) to another, notably in Bluetooth. The I2S interface is used at the receiver end of Bluetooth, where it converts the received parallel data into serial data that can then be fed to the speakers. A Dual-Mode Bluetooth controller is considered here; it combines classic Bluetooth and Bluetooth Low Energy (BLE) to obtain the good aspects of both technologies, such as high transmission speed and long-distance coverage. The design and testing of the I2S interface are presented: the design consists of one parent module and five child modules developed in Verilog, and the test cases are developed using the Universal Verification Methodology (UVM). The software tool used is Synopsys VCS, a functional verification solution that provides high-performance simulation and constraint-solver engines.

Keywords I2S · UVM · Dual mode bluetooth · Bluetooth classic · Codec · Bluetooth low energy

1 Introduction

Dual-Mode Bluetooth uses the advantages of both Bluetooth Classic and Bluetooth Low Energy (BLE): Bluetooth Classic supports transmission of bulk data at high speed, while BLE can cover long distances with little power. Bluetooth Classic is intended for uninterrupted bidirectional data transmission; it is perfect for establishing connections between cell phones and Bluetooth headphones for phone calls. BLE is designed to consume less power and is intended for low-power sensors and accessories, i.e. applications that do not involve a continuous connection but

T. Prajwal (B) · K. B. Sowmya Department of ECE, RV College of Engineering, Bengaluru 560059, India e-mail: [email protected]
K. B. Sowmya e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_32


depend on long battery life. Dual-Mode Bluetooth improves customer satisfaction by allowing quicker pairing, media access, and high-quality audio streaming. Dual-Mode Bluetooth chipsets are now used in the majority of smartphones and tablets: they connect to and power various smart devices using Bluetooth Low Energy, and use Bluetooth Classic for all high-data-rate applications. Many digital audio devices have been launched into the consumer audio market, such as CD players, digital tape recorders, and microphones. A number of Very Large Scale Integration (VLSI) ICs, such as digital filters and digital amplifiers, process the digital audio signals in these devices. Standardized communication protocols are important to both equipment and IC makers because they improve device flexibility; hence the serial audio interface, the I2S bus, was developed, and it is now used in nearly all digital audio applications. The motivation for this work is that the I2S bus separates the serial data and clock signals, which reduces jitter, and it distinguishes between the left and right channels, which the existing PCM interface does not. This paper discusses the concept and overview of the Inter-IC Sound interface and Dual-Mode Bluetooth, along with the design and testing of the I2S interface, the simulation waveforms of the designed I2S interface for Dual-Mode Bluetooth, and its future scope.

2 Inter-IC Sound (I2S) Interface

I2S is a serial link for connecting digital audio devices; it carries Pulse Code Modulated (PCM) audio data between ICs in an electronic device. An I2S bus consists of three serial lines: a line carrying two time-division-multiplexed (TDM) data channels, known as Serial Data (SD); a Word Select (WS); and a Serial Clock (SCK). Data is sent in two's complement with the most significant bit (MSB) first. The transmitter may change the data on either the rising or the falling edge of SCK, but the receiver latches the data on the rising edge of the Serial Clock. The Word Select line indicates whether the right or the left channel is being transmitted: if WS is '1' the right channel is transmitted, and if it is '0' the left channel is transmitted. The WS line changes one clock period before the data is sent. Because the word lengths of the transmitting and receiving devices may differ, the MSB is sent first. If the transmitted word contains more bits than the receiver can handle, the bits after the receiver's least significant bit (LSB) are ignored; if it contains fewer, zeros are appended after the LSB. The MSB therefore occupies a fixed position, while the position of the LSB depends on the word length. Figure 1 shows the basic I2S interface timing diagram; the signals in Fig. 1 are, from top to bottom, Serial Clock, Word Select, and Serial Data. It can be clearly observed that serial data is sent one clock pulse after Word Select changes. The bus carries only the audio data; other signals, such as those for decoding and coding, are transmitted separately.
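The framing behavior just described can be sketched in a few lines. This is our own Python model for illustration, not part of the paper's Verilog design: one channel slot is serialized MSB-first in two's complement, with zero-padding or truncation when the word lengths differ.

```python
def i2s_serialize(sample: int, sample_bits: int, slot_bits: int) -> list[int]:
    """Serialize one signed sample MSB-first in two's complement.

    If the slot is wider than the sample, zeros are appended after the LSB;
    if it is narrower, the bits beyond the slot's LSB are dropped.
    """
    word = sample & ((1 << sample_bits) - 1)  # two's-complement encoding
    bits = [(word >> (sample_bits - 1 - i)) & 1 for i in range(sample_bits)]
    if slot_bits >= sample_bits:
        return bits + [0] * (slot_bits - sample_bits)  # pad after the LSB
    return bits[:slot_bits]                            # truncate after slot LSB
```

For example, the 4-bit sample -1 (1111 in two's complement) placed into an 8-bit slot becomes 1111 0000: the MSB keeps its fixed leading position, while the LSB position depends on the word length, exactly as described above.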

Inter-IC Sound (I2S) Interface for Dual Mode …


Fig. 1 I2S basic interface timing [1]

2.1 Dual Mode Bluetooth

Bluetooth has advanced from version 1.0 to version 5.0, with each version bringing significant improvements to how Bluetooth devices communicate with one another. The articles [2–4] provide an overview of Bluetooth's history and major design evolution over the last 25 years. Bluetooth Low Energy (BLE) appeared between Bluetooth 3.0 and Bluetooth 4.0, presenting a use case completely different from that of Bluetooth Classic (BTC). This was a challenge for multi-purpose devices such as smartphones, for which supporting both Bluetooth Classic and BLE became critical. Dual-Mode Bluetooth 5.0 represents a step forward in meeting this need: Dual-Mode Bluetooth leverages both technologies. This is the most common arrangement in wireless speakers and headphones, where BLE is used to connect and monitor the devices and Bluetooth Classic is used to stream the audio. Bluetooth Classic is intended for uninterrupted bidirectional data transfer, with a data rate of 2 Mbps. However, it is effective only over short distances, which makes it suitable for streaming video and audio. Bluetooth Classic is primarily used for audio applications such as hands-free phone calls, headsets, speakers, and smartphones. Bluetooth Low Energy is also referred to as BLE, Bluetooth LE, or Bluetooth Smart. BLE transmits data over short distances using a significantly smaller amount of power, with a data rate of 100–250 Kbps. BLE is embedded in many devices such as smartphones, smart watches, fitness trackers, and computers, and it provides a seamless experience between devices.

3 Proposed Methodology

The general block diagram of the proposed I2S interface is shown in Fig. 2. The host interface is a processor through which the Bluetooth controllers communicate with the external world via the Advanced High-Performance Bus (AHB). The AHB is intended to meet the needs of designs that are synthesizable at a high level of performance; it is a bus interface with a single bus master and high-bandwidth


Fig. 2 General block diagram of the proposed I2S interface

operation. The AHB signals are used to transmit data packets from Bluetooth device 1 (BT1) to BT2; this happens in the periphery rather than in actual memory, simply to verify the transmission and reception functionality. Next, data is written into the actual memory of the Transmission First-In First-Out buffer (TXFIFO) with the write-enable signal, and this data is then transmitted to BT2 in packet form using the Radio Interface (RIF) signals. The packet types used here are an Asynchronous Connectionless (ACL) packet, i.e., Data Medium Rate (DM3), and an Extended Synchronous Connection-Oriented (ESCO) packet, i.e., Extended Voice (EV4). These packet structures contain an Access Code (72 bits), a Header (54 bits), and a Payload (up to 2745 bits): the Access Code carries the access code for the physical channel, the Header carries the logical transport identifier (LTI) and the link control protocol (LCP), and the Payload carries the data to be transmitted. The type code in the header determines the packet type: for DM3 the type code is 1010, and for EV4 it is 1100 [5–10]. Once the received data in BT2 has been read through the Receiver FIFO (RXFIFO), it is passed to the CODEC, which consists of different interfaces; a particular interface is selected based on a condition, and here the I2S interface is selected when the i2s_en signal goes high. The data is then passed to the I2S interface, which converts the parallel data into serial data that can be fed to any digital audio application. The design is written in the Verilog hardware description language. Test cases are developed for the following three conditions:
• Sample lengths of 8 bits and 16 bits.
• Mono and stereo data, where mono data is transmitted on a single channel and stereo data on multiple channels.
• I2S and MSB-justified data formats: in the I2S format, data becomes available one clock pulse after WS goes low, whereas in the MSB-justified format, data is available when WS is high.
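The packet framing just described can be captured numerically. The constant and helper names below are ours, for illustration only; the bit widths and type codes are those given in the text.

```python
# Field widths and type codes as stated in the text.
ACCESS_CODE_BITS = 72
HEADER_BITS = 54
MAX_PAYLOAD_BITS = 2745
TYPE_CODES = {"DM3": 0b1010, "EV4": 0b1100}  # appear as 'a' and 'c' in hex

def packet_length_bits(payload_bits: int) -> int:
    """Total length of one packet in bits: access code + header + payload."""
    if not 0 <= payload_bits <= MAX_PAYLOAD_BITS:
        raise ValueError("payload exceeds the 2745-bit maximum")
    return ACCESS_CODE_BITS + HEADER_BITS + payload_bits
```

A maximum-length packet is therefore 72 + 54 + 2745 = 2871 bits, and the type codes 0b1010 and 0b1100 correspond to the hexadecimal 'a' and 'c' later observed on the 'cbk_type' and 'pcd_pkt_type' signals in the simulation results.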
Figure 1 shows the I2S data format, and Fig. 3 shows the MSB-justified data format. The various test cases are developed using the Universal Verification Methodology (UVM). For data transmission, the clock line used by both the transmitting and the receiving


Fig. 3 MSB justified format [1]

device is identical. As the master, the transmitting device is responsible for generating the Serial Clock, Word Select, and Serial Data signals. In complicated systems, however, several transmitters and receivers are present, which makes it hard to designate a master. In such systems, the streaming of digital audio data between the ICs is usually controlled by a system master; as a result, transmitters must generate data under the control of an external clock, effectively acting as slaves. This is how the master and slave modes work. For example, BT1 can be a mobile phone's Bluetooth and BT2 a car's Bluetooth [11–13].

4 Results

The proposed I2S interface for Dual-Mode Bluetooth is simulated in Synopsys VCS. Figure 4 shows the data being written into a specific RAM location of the TXFIFO; the data is written with the enable signals 'we' and 'en', along with the active-low reset signal. The clock frequency used is 16 MHz, and the signal 'cbk_type' indicates the transmission packet type. Figure 5 shows that the data

Fig. 4 Data is written into a specific RAM location

Fig. 5 Data is read from the specific RAM location


is read from a specific RAM location of the RXFIFO; the same enable signals are used to read the data, along with the active-low reset signal. The signal 'pcd_pkt_type' indicates the reception packet type. Figure 6 shows the received data passed to the codec top and then to the I2S top, with the parallel data converted into serial data in Mono-16-bit-I2S format: in this format, the serial data should initially be observed when the Word Select signal goes low, after which data is available for both WS = 1 and WS = 0. Figure 7 shows the corresponding conversion in Stereo-8-bit-MSB format, in which the 16-bit serial data is split into two 8-bit words: the first 8 bits are available when WS is low and the second 8 bits when WS is high. Figure 8 shows the Mono-8-bit-MSB format, in which serial data is available only when WS is high, i.e., the 16-bit serial data is split into two 8-bit words and both are available only when WS is high. Figure 9 shows the Stereo-16-bit-I2S format, in which the 16-bit data is not split and data is available for both WS = 1 and WS = 0. The frequency of SCK is 258 kHz, whereas the frequency of WS is 8 kHz. The simulation results match the proposed methodology. The packet type is shown in hexadecimal format, i.e., 'a' for DM3 and 'c' for EV4, and the serial data (binary format) matches the parallel data (hexadecimal format).

Fig. 6 Serial data in mono-16bit-i2s format

Fig. 7 Serial data in stereo-8bit-msb format


Fig. 8 Serial data in mono-8bit-msb format

Fig. 9 Serial data in stereo-16bit-i2s format

5 Conclusion

This paper presented a basic understanding of Dual-Mode Bluetooth, which includes both Classic Bluetooth and Bluetooth Low Energy. An Inter-IC Sound interface for Dual-Mode Bluetooth was designed, and test cases were developed to verify the working of the interface. The testing of the I2S module was carried out with the Synopsys VCS tool. The I2S bus is a three-line serial bus that connects digital audio devices by converting received parallel data into serial data.

References

1. I2S bus specification. Philips Semiconductors, 2 Jan 2007
2. Zeadally S, Siddiqui F, Baig Z (2019) 25 years of Bluetooth technology. Future Internet 11(9):194
3. Muthu Ganesh V, Janukiruman N (2019) A survey of various effective codec implementation methods with different real time applications. In: 2019 international conference on communication and electronics systems (ICCES), pp 1279–1283. https://doi.org/10.1109/ICCES45898.2019.9002587
4. Kajikawa N, Minami Y, Kohno E, Kakuda Y (2016) On availability and energy consumption of the fast connection establishment method by using Bluetooth Classic and Bluetooth Low Energy. In: Fourth international symposium on computing and networking (CANDAR), pp 286–290. https://doi.org/10.1109/CANDAR.2016.0058


5. Collotta M, Pau G, Talty T, Tonguz OK (2018) Bluetooth 5: a concrete step forward toward the IoT. IEEE Commun Mag 56(7):125–131
6. Bergeron J, Delguste F, Knoeck S, McMaster S, Pratt A, Sharma A (2013) Beyond UVM: creating truly reusable protocol layering. Synopsys Inc.
7. Moskala M, Kloczko P, Cieplucha M, Pleskacz W (2015) UVM-based verification of Bluetooth Low Energy controller. In: 2015 IEEE 18th international symposium on design and diagnostics of electronic circuits & systems, pp 123–124. https://doi.org/10.1109/DDECS.2015.48
8. Rahman MA, Kamal N, Bin Ibne Reaz M, Hashim FH (2015) Dual-mode receiver architecture for Bluetooth and IEEE 802.11b standards. In: 2015 international conference on computer, communications, and control technology (I4CT), pp 117–121. https://doi.org/10.1109/I4CT.2015.7219549
9. Kim D, Kim D, Kim J, Park J, Park C (2011) A novel integrated dual-mode RF front-end module for Wi-Fi and Bluetooth applications. In: IEEE MTT-S international microwave symposium, pp 1–4. https://doi.org/10.1109/MWSYM.2011.5972742
10. Magdy A, Ibrahim S, Khalil AH, Mostafa H (2021) Low power, dual mode Bluetooth 5.1/Bluetooth Low Energy receiver design. In: 2021 IEEE international symposium on circuits and systems (ISCAS), pp 1–5. https://doi.org/10.1109/ISCAS51556.2021.9401748
11. Introduction to Bluetooth device testing. National Instruments (2016)
12. Inter-IC Sound (I2S) bus. Cypress Semiconductor (2016)
13. Kim N (2020) A digital-intensive extended-range dual-mode BLE5.0 and IEEE802.15.4 transceiver SoC. IEEE Trans Microw Theory Tech 68(6):2020–2029. https://doi.org/10.1109/TMTT.2020.2986454

Design of Low Power Vedic Multiplier Using Adiabatic Techniques S. Giridaran, Prithvik Adithya Ravindran, G. Duruvan Raj, and M. Janarthanan

Abstract One of the most critical parameters in today's DSP systems is power utilization, so many techniques have been introduced in CMOS digital design to reduce power consumption. Power consumption in a CMOS digital architecture can be minimized by lowering the supply voltage, the capacitance, and the switching activity, but at today's technology nodes these strategies alone are no longer sufficient. As a result, we concentrate on adiabatic logic, which has been shown to be an outstanding technique for designing low-power digital circuits. Adiabatic logic circuits can recycle power back to the source rather than dissipating it as heat, which is an effective way to address this issue. We use the Enhanced ECRL (E-ECRL) technique in our project to reduce the number of transistors used, resulting in lower power consumption. Adders and multipliers are fundamental components of a wide range of devices, from signal processing to high-level circuits, and therefore their effective low-power design is critical. The multipliers in this work are realized using a Vedic algorithm, the Urdhva-Tiryakbhyam sutra. The static CMOS and adiabatic designs are implemented in Cadence Virtuoso with 90 nm technology. The power consumption of the modified architecture decreases as the number of transitions decreases.

Keywords Adiabatic logic · Vedic multiplier · Low power · Cadence Virtuoso · Enhanced ECRL

1 Introduction

Adiabatic logic takes its name from the thermodynamic notion of a process in which no energy is exchanged with the environment. When it comes to VLSI design, implementing and achieving fully adiabatic operation is an arduous task. Therefore,

S. Giridaran · P. A. Ravindran (B) · G. D. Raj · M. Janarthanan
Department of Electronics and Communication Engineering, SRM Institute of Science & Technology, Ramapuram Campus, Chennai 600089, India
M. Janarthanan
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_33


partial adiabatic logic circuits, which have both adiabatic and non-adiabatic components, are implemented instead. In static CMOS logic circuits, a DC supply powers the circuit. When the output rises from 0 to VDD, the circuit draws an energy of about C_L·V_DD² from the power supply. During this rise, half of this energy, i.e., ½·C_L·V_DD², is dissipated in the pull-up network; the other half is stored on the load capacitance. When the output falls from VDD to 0, the pull-down network dissipates the energy stored on the load capacitance. The energy consumed can be reduced by lowering the load capacitance, the switching activity, or the voltage swing, but the main disadvantage is that we can only reduce the power consumption, not recycle the power supplied to the circuit. Recycling is possible with adiabatic logic, which charges the circuit from a time-varying (power-clock) supply: the energy delivered to the circuit can be retrieved by reversing the current direction and hence reused. In today's world, reducing power consumption is one of the key requirements for DSP processors, AI accelerators, etc., so it is important to apply adiabatic techniques to reduce and recycle power. The multiplier is one of the most important blocks in electronic systems: many digital signal processing applications, such as filtering, FFT, convolution, microprocessor ALUs, and image processing, use multipliers as a main component. Speed and low power consumption have therefore become important parameters for VLSI designers. There are several varieties of multipliers: the Booth multiplier, sequential multiplier, Vedic multiplier, combinational (array) multiplier, Wallace tree multiplier, etc.
The array multiplier is implemented using combinational logic, where all bit products are formed in a single pass; its only hiccup is that it inflates the size of the multiplier and makes it less economical. In a carry-save adder, bits are processed separately to generate the carry inside the adder, so each stage relies on the previous carry, which increases the execution time as the number of bits increases. A Wallace tree feeds three bit signals as input to each full adder inside it, and the carry output of each full adder always propagates to the full adder at the next higher position. This carry propagation is the most problematic downside of the Wallace tree, and it makes the Wallace tree unsuitable for many high-speed tasks. This is the major reason we chose the Vedic multiplier over the Wallace tree [1].
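The static-CMOS switching-energy accounting described in this introduction can be checked numerically. This is a simple illustration of the C_L·V_DD² bookkeeping, not a circuit simulation, and the 10 fF / 1.2 V figures in the usage note are assumed values of our own.

```python
def cmos_switching_energy(c_load: float, vdd: float) -> dict:
    """Energy budget for one full 0 -> VDD -> 0 cycle of a CMOS output node."""
    e_supply = c_load * vdd ** 2  # drawn from the supply on the rising edge
    return {
        "from_supply": e_supply,
        "heat_in_pull_up": e_supply / 2,  # lost while charging C_L
        "stored_on_cl": e_supply / 2,     # later burned in the pull-down network
    }
```

With an assumed 10 fF load at 1.2 V, the supply delivers 14.4 fJ per cycle, all of which eventually ends up as heat; adiabatic logic aims instead to recover the half parked on C_L.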

2 Related Works

The article [2] discusses ECRL and the sleepy-keeper technique. The adder presented there uses less power and has a shorter power delay. Mentor Graphics is used to build the circuits in CMOS technology. At a 1.6 V supply voltage, the full adder has a power dissipation of 128.818 pW and a delay of 136.97 ns. The final results were


compared with previously reported full-adder circuits, and the proposed solution outperformed them [2]. In today's VLSI architecture technology, power consumption is a major factor. Low-power devices are becoming increasingly popular, and adiabatic logic is being touted as a promising solution; a number of experiments have already used the adiabatic technique. Beginning with the fundamentals, this paper examines the literature on adiabatic inverters, and the SPICE simulation results show the power-dissipation differences from traditional CMOS circuits [3]. Analog-to-digital converters (ADCs) are commonly used in signal processing and communication systems, and the layout design's low power consumption remains a significant advantage. Using CMOS 45 nm technology, this research presents a four-bit flash ADC. The design of operational amplifiers, which remain an important part of ADCs, is also discussed. The encoder circuit uses XOR and OR gates to create a logic-based encoder. Cadence Virtuoso is used to build the various layouts and schematics, as well as for verification. The 4-bit flash ADC converts at a speed of 1.11 s and uses 12 mW of power [4]. Positive Feedback Adiabatic Logic (PFAL) circuits are the type of adiabatic logic presented in this paper. Owing to the energy recovery, there is a power loss during the recovery period of the clock supply. A power-dissipation comparison is made with static CMOS logic. The simulation is performed in Cadence Virtuoso using 180 nm CMOS technology; according to the findings, a power reduction of 52–74% over static CMOS can be achieved within a reasonable operating frequency range [5, 6]. One of the most critical parameters in today's DSP systems is power utilization. Adiabatic logic circuits, which recycle power back to the source rather than dissipating it as heat, are an effective way to address this issue.
Adders and multipliers are fundamental components of a wide range of systems, from signal processing to cryptography, so their effective low-power design is critical. The multipliers in that article are realized using a Vedic algorithm, the Urdhva-Tiryakbhyam sutra; in addition, a modification of the existing Vedic multiplier is included. An N × N conventional Vedic multiplier uses four N/2-bit multipliers and three N-bit adders, while the proposed N × N Vedic multiplier needs four N/2-bit multipliers, one N-bit adder, and two 2N-bit adders. Cadence Virtuoso with a 180 nm technology node is used to implement the static CMOS and adiabatic designs, which are functionally tested in the Spectre simulator. The power consumption of the modified architecture decreases as the number of transitions decreases: power saving factors of 9.75 and 9.83 are obtained for the 4 × 4 and 8 × 8 multipliers, respectively, in the updated Vedic architectures [7].
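The conventional four-sub-multiplier decomposition mentioned above can be sketched recursively. This is our own illustrative Python model of the arithmetic identity the hardware realizes, not the cited article's circuit:

```python
def vedic_mul(x: int, y: int, n: int) -> int:
    """N x N multiply built from four N/2-bit sub-multiplies:
    x*y = (xh*yh << n) + ((xh*yl + xl*yh) << n//2) + xl*yl."""
    if n == 1:
        return x & y & 1  # a 1x1 multiplier is just an AND gate
    h = n // 2
    xh, xl = x >> h, x & ((1 << h) - 1)
    yh, yl = y >> h, y & ((1 << h) - 1)
    return ((vedic_mul(xh, yh, h) << n)
            + ((vedic_mul(xh, yl, h) + vedic_mul(xl, yh, h)) << h)
            + vedic_mul(xl, yl, h))
```

The hardware versions differ only in how the three partial results are summed: three N-bit adders in the conventional structure versus one N-bit and two 2N-bit adders in the proposed one.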

3 Methodology

In this project we build on an existing adiabatic technique, Efficient Charge Recovery Logic (ECRL), and propose a new technique, Enhanced Efficient Charge Recovery Logic (E-ECRL). The ECRL circuit consists of two cross-coupled PMOS transistors and one N-functional block. In E-ECRL, the complementary N-functional block of ECRL is


replaced with a capacitor, which is placed to protect the device from damage due to high power surges or other unforeseen situations [4]. The system thus uses two cross-coupled transistors, an N-functional block, and a capacitor of a few hundred attofarads. It uses an AC power supply, which can be either a pulse or a sinusoidal waveform, to recover and reuse the energy. In this circuit, the output of one stage is fed as the input to the next stage, which requires the supply clocks to be phased so as to create a feedback path. The power supply used here is a four-phase clocking supply whose phases are Evaluation, Hold, Recover, and Wait. A full output swing is obtained, but because of the threshold voltage of the PMOS transistor the circuit suffers a non-adiabatic loss: when the supply clock falls below the threshold voltage, the PMOS transistor turns off, shutting down the feedback path, so complete energy recovery cannot be achieved. A major disadvantage of ECRL is the coupling effect: the two complementary outputs are attached to the latch formed by the PMOS pair and can interfere with each other, at times causing logic failures and failures of clock synchronization. These problems are eliminated by the E-ECRL circuit, which also reduces the power consumption below that of ECRL while using fewer transistors than the original ECRL circuit [8]. Power dissipation is often described as the product of the total current supplied to the circuit and the total voltage drop, or the leakage current; where the portability of gadgets is concerned, power dissipation is an unavoidable constraint [9] (Figs. 1 and 2).
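The four supply phases named above (Evaluation, Hold, Recover, Wait) can be pictured with a small trapezoidal power-clock model. This is an idealized sketch with an assumed unit period and amplitude, not extracted from the authors' circuit:

```python
def power_clock(t: float, phase: int, period: float = 1.0, vdd: float = 1.0) -> float:
    """Idealized four-phase trapezoidal power clock.

    Each quarter period is one phase: evaluate (ramp up), hold (flat at VDD),
    recover (ramp down, returning charge), wait (flat at 0). The `phase`
    argument (0..3) shifts each supply by a quarter period.
    """
    x = ((t / period) - phase / 4) % 1.0
    if x < 0.25:
        return vdd * (x / 0.25)           # evaluate: ramp up
    if x < 0.50:
        return vdd                        # hold
    if x < 0.75:
        return vdd * ((0.75 - x) / 0.25)  # recover: ramp down
    return 0.0                            # wait
```

Because each stage's output feeds the next stage, phase k's supply holds its output valid exactly while phase k+1 evaluates, which is the feedback arrangement described in the text.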

Fig. 1 ECRL basic circuit


Fig. 2 E-ECRL basic circuit

3.1 Vedic Mathematics

Vedic mathematics is gaining popularity in the field of computation because of its precise and fast calculation methods. In today's world, speed and accuracy in multipliers are necessary, as they are used in applications such as DSP processors and AI chips. Out of the 16 Vedic sutras, we use Urdhva-Tiryagbhyam, which loosely translates to "vertically and crosswise" (criss-cross). This sutra allows all partial products to be generated simultaneously and then added, resulting in parallelism. For an N × N binary multiplication, the partial-product generation and summation are performed in parallel, making the multiplier independent of the clock frequency. This enables operation at higher frequencies, at the cost of increased power dissipation, which is overcome here by implementing the multiplier in our E-ECRL circuit. The use of Vedic multiplication has the following advantages:
1. Reduced number of computation steps
2. Reduced computational delay
3. Less memory utilization
The 16 sutras of Vedic mathematics are:
• Ekadhikina Purvena
• Nikhilam Navatashcaramam Dashatah
• Urdhva-Tiryagbyham
• Paraavartya Yojayet
• Shunyam Saamyasamuccaye
• (Anurupye) Shunyamanyat
• Sankalana-vyavakalanabhyam
• Puranapuranabyham
• Chalana-Kalanabyham
• Yaavadunam
• Vyashtisamanstih
• Shesanyankena Charamena
• Sopaantyadvayamantyam
• Ekanyunena Purvena
• Gunitasamuchyah
• Gunakasamuchyah

3.2 Urdhva-Tiryagbhyam: Step-by-Step Procedure for a 4 × 4 Multiplier

In this type of Vedic multiplication, two 4-bit numbers are considered. Initially, the first bit of each 4-bit number is multiplied, giving the first result bit S0. In the second step, the partial products A0B1 and A1B0 are summed using a half adder; the result is stored in S1, and the carry produced in this step is fed to the next step, where it is summed with the partial products produced in that step. This process continues until the complete output Cout S6 S5 S4 S3 S2 S1 S0 is obtained (Fig. 3).
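The criss-cross procedure of Fig. 3 can be modeled directly: each output column sums the partial products whose bit indices add up to the column number, and the half/full-adder carries ripple upward. This is a behavioral sketch in Python of our own, not the transistor-level design:

```python
def urdhva_4x4(a: int, b: int) -> int:
    """4x4 Urdhva-Tiryagbhyam: criss-cross column sums S0..S6 plus Cout."""
    A = [(a >> i) & 1 for i in range(4)]  # A0..A3
    B = [(b >> i) & 1 for i in range(4)]  # B0..B3
    # Column k gathers every partial product Ai*Bj with i + j = k.
    cols = [sum(A[i] * B[k - i] for i in range(4) if 0 <= k - i < 4)
            for k in range(7)]
    result, carry = 0, 0
    for k, col in enumerate(cols):        # resolve the adder carries
        total = col + carry
        result |= (total & 1) << k        # bit Sk
        carry = total >> 1
    return result | (carry << 7)          # Cout
```

For instance, urdhva_4x4(0b1010, 0b0100) yields 0b00101000 (decimal 40), the same A = 1010, B = 0100 case exercised in the results section.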

3.3 Software Used

3.3.1 Cadence Virtuoso 6.17

The Cadence Virtuoso 6.17 System Design Platform links world-class Cadence technologies (custom IC design and package/PCB design/analysis), creating a holistic approach that automates and streamlines the design and verification flow for multi-die heterogeneous systems [11] (Fig. 4). By taking maximum advantage of the Virtuoso Schematic Editor and the Virtuoso Analog Design Environment, it offers a single dedicated platform for IC- and package/system-level design capture, analysis, and verification (Fig. 5). The schematic editor interface provides the component-related facilities as well as the simulation-oriented options. The Generic Process Design Kit 90 nm (gpdk090) is used in this project to design the circuit. It is a complete design kit based on a fictitious 90 nm BiCMOS process, and its elements support a front-to-back custom IC design flow based on the Custom IC Platform.


Fig. 3 Step by step procedure for 4 × 4 Vedic multiplier [10]


Fig. 4 Cadence Virtuoso interface

Fig. 5 Cadence Virtuoso schematic editor

4 Result and Analysis

Cadence Virtuoso is used to implement the schematic representation of the Vedic multiplier in CMOS 90 nm technology.


The Cadence Virtuoso simulator is used to perform the design analysis and verify the circuit functionality. The CMOS Vedic multiplier and the Enhanced-ECRL Vedic multiplier are compared in terms of power. The power-dissipation comparison of E-ECRL circuits with conventional CMOS circuits is shown in Table 1: compared with the static CMOS Vedic multiplier, the Enhanced-ECRL Vedic multiplier consumes less power (Tables 2 and 3). To provide a fair comparison between adiabatic circuits and conventional CMOS, we use the Power Saving Factor (PSF), which quantifies the power efficiency of the circuit:

PSF = Power dissipated in the CMOS multiplier / Power dissipated in the adiabatic multiplier
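Plugging the Table 1 figures for the 4 × 4 multiplier into this formula reproduces the numbers reported for the Enhanced-ECRL design (a simple numerical check using the stated nW values; the efficiency expression is our reading of the reported percentage as the fraction of CMOS power saved):

```python
def power_saving_factor(p_cmos: float, p_adiabatic: float) -> float:
    """PSF = power dissipated in the CMOS design / in the adiabatic design."""
    return p_cmos / p_adiabatic

psf = power_saving_factor(47605, 2746)  # 4 x 4 Vedic multiplier, Table 1 (nW)
efficiency = (1 - 2746 / 47605) * 100   # percentage of CMOS power saved
```

This gives a PSF of about 17.33 and an efficiency of about 94.23%, matching the E-ECRL column of Table 3.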

Table 1 Power dissipation comparison of E-ECRL circuits with conventional CMOS circuits

Circuit                  E-ECRL (nW)   CMOS (nW)
Inverter                 18            167
AND                      27.53         585.8
XOR                      103.4         1749
OR                       25.34         613.5
Half adder               61.35         1900
Full adder               581.5         3347
4 × 4 Vedic multiplier   2746          47,605

Table 2 Comparison in terms of transistor count between E-ECRL and conventional ECRL

Circuit                  Transistor count in E-ECRL   Transistor count in ECRL
Inverter                 3                            4
AND                      10                           14
XOR                      10                           14
OR                       12                           18
Half adder               22                           32
Full adder               54                           78
4 × 4 Vedic multiplier   636                          912

Table 3 Comparison in terms of efficiency and PSF (Power Saving Factor) between two adiabatic techniques (PFAL and E-ECRL)

PSF for PFAL   Efficiency for PFAL (%)   PSF for E-ECRL   Efficiency for E-ECRL (%)
9.75           91.74                     17.33            94.23


It should be noted that the 4 × 4 Enhanced-ECRL Vedic multiplier architecture has a PSF of 17.33 and an efficiency of 94.23%. The power dissipation increases as the frequency increases (Figs. 6 and 7). The inputs A = "1010" and B = "0100" are applied to the 4 × 4 Enhanced-ECRL Vedic multiplier, and the multiplier output is "00101000", as shown in Figs. 6 and 7. The inputs of the 4 × 4 Vedic multiplier are the multiplicand A3 A2 A1 A0 and the multiplier B3 B2 B1 B0 (Fig. 8).

Fig. 6 4 × 4 Enhanced-ECRL Vedic multiplier inputs

Fig. 7 4 × 4 Enhanced-ECRL Vedic multiplier outputs


Fig. 8 Power dissipation comparison of E-ECRL circuits with conventional CMOS circuits

5 Conclusion

The most important goals in current circuit design are to reduce the power dissipation, size, and execution time of mobile devices and to improve circuit performance. Adiabatic logic is one of the low-power techniques that can be used for this. In this work, the adiabatic Enhanced-ECRL and static CMOS techniques are used to design 2 × 2 and 4 × 4 Vedic multipliers. The results show that the adiabatic Enhanced-ECRL multipliers use less power and take less time than the static CMOS multipliers. The Vedic multiplier was chosen because it employs an effective algorithmic technique that decreases execution time; its area and delay grow only slowly as the bit width increases, and it is faster than other multipliers. Compared with the static CMOS technique, the adiabatic implementation of these multipliers shows a significant reduction in power dissipation. The PSF of the 4 × 4 PFAL Vedic multiplier is 9.75, while the PSF of the 4 × 4 Enhanced-ECRL Vedic multiplier is 17.33; thus, the Enhanced-ECRL Vedic multiplier consumes less power than the PFAL Vedic multiplier.

6 Future Work

The work presented above is original work, inspired by people working in the fields of VLSI and FETs. We have drawn on the principles of solid-state devices and digital system design and synthesis wherever possible.


In the immediate future, we intend to continue working in these fields, and also to adopt newer technologies wherever possible. Technologies such as the Carbon Nanotube FET (CNTFET) and Quantum-dot Cellular Automata (QCA) are state of the art and have marked a new era in the fields of VLSI and solid-state devices [12]. Beyond these fields, we also expect to study the thermal characteristics of chips and SoCs, which are the basic building blocks of electronics. CNTFET has been studied extensively at Stanford University, which has established a CNTFET model that is referred to by thousands of professionals and students for their projects and publications. QCA, studied as one of the advanced nanotechnologies, offers a potential alternative to CMOS technology itself: with its advances, the power, area, and delay factors can largely be overcome, and better devices can be developed to meet our requirements [13].

References

1. Dutta K, Chattopadhyay S, Biswas V, Ghatak SR (2019) Design of power efficient Vedic multiplier using adiabatic logic. In: 2019 international conference on electrical, electronics and computer engineering (UPCON), Aligarh, India, pp 1–6. https://doi.org/10.1109/UPCON47278.2019.8980057
2. Nandal A, Kumar M (2018) Design and implementation of CMOS full adder circuit with ECRL and sleepy keeper technique. In: 2018 international conference on advances in computing, communication control and networking (ICACCCN), Greater Noida, India, pp 733–738. https://doi.org/10.1109/ICACCCN.2018.8748336
3. Safoev N, Jeon JC (2020) A novel controllable inverter and adder/subtractor in quantum-dot cellular automata using cell interaction based XOR gate. Microelectron Eng 222:111197
4. Urankar V, Patel CR, Vivek BA, Bharadwaj VK (2020) 45 nm CMOS 4-bit flash analog to digital converter. In: 2020 fourth international conference on computing methodologies and communication (ICCMC), pp 27–32. https://doi.org/10.1109/ICCMC48092.2020
5. Nazare N, Bhat P, Jambhe N (2018) Design and analysis of adiabatic Vedic multipliers. Int J Pure Appl Math 119:59
6. Kaza S, Tilak Alapati VN, Rao Kunupalli S, Yarlagadda S (2020) Secured MPFAL logic for IoT applications. In: 2020 IEEE VLSI device circuit and system (VLSI DCS), Kolkata, India, pp 198–202. https://doi.org/10.1109/VLSIDCS47293.2020.9179891
7. Mishra A, Singh N (2014) Low power circuit design using positive feedback adiabatic logic. Int J Sci Res (IJSR) 3(6):43–45. https://www.ijsr.net/search_index_results_paperid.php?id=2014110
8. Bakshi AK, Sharma M (2013) Design of basic gates using ECRL and PFAL. In: 2013 international conference on advances in computing, communications and informatics (ICACCI), pp 580–585. https://doi.org/10.1109/ICACCI.2013.6637237
9. Pravitha B, Vishnu D, Shabeer S (2020) 1-bit full adder output analysis using adiabatic ECRL technique. In: 2020 advanced computing and communication technologies for high performance applications (ACCTHPA), pp 226–230. https://doi.org/10.1109/ACCTHPA49271.2020.9213214

Design of Low Power Vedic Multiplier …


10. Bansal Y, Madhu C, Kaur P (2014) High speed Vedic multiplier designs—a review. In: 2014 recent advances in engineering and computational sciences (RAECS). https://doi.org/10.1109/raecs.2014.6799502
11. Banik S, Rasel MMH, Mahmud T, Hasanuzzaman M (2020) Design and implementation of a low-power 1V, 77.26 µW 6-bit SAR ADC in Cadence 90nm CMOS process for biomedical application. In: 2020 IEEE region 10 symposium (TENSYMP), pp 839–842. https://doi.org/10.1109/TENSYMP50017.2020.9230608
12. Kaushal P, Mehra R (2017) A novel CNTFET based power and delay optimized hybrid full adder. Int J Electr Electron Data Commun (IJEEDC) 5(9):21–27
13. Safoev N, Jeon J-C (2020) Design and evaluation of cell interaction based Vedic multiplier using quantum-dot cellular automata. Electronics 9(6):1036. https://doi.org/10.3390/electronics9061036
14. Kumar A, Sharma M (2013) Design and analysis of Mux using adiabatic techniques ECRL and PFAL. In: 2013 international conference on advances in computing, communications and informatics (ICACCI), pp 1341–1345. https://doi.org/10.1109/ICACCI.2013.6637372
15. Kerur SS, Narchi P, Kittur HM, Girish VA (2014) Implementation of Vedic multiplier in image compression using DCT algorithm. In: 2014 2nd international conference on devices, circuits and systems (ICDCS), pp 1–6. https://doi.org/10.1109/ICDCSyst.2014.6926120
16. Sanadhya M, Vinoth Kumar M (2015) Recent development in efficient adiabatic logic circuits and power analysis with CMOS logic. Procedia Comput Sci 57:1299–1307. https://doi.org/10.1016/j.procs.2015.07.439
17. Kuttappa R, Khoa S, Filippini L, Pano V, Taskin B (2020) Comprehensive low power adiabatic circuit design with resonant power clocking. In: 2020 IEEE international symposium on circuits and systems (ISCAS), pp 1–5. https://doi.org/10.1109/ISCAS45731.2020.9181128

Digital Technology and Artificial Intelligence in Dentistry: Recent Applications and Imminent Perspectives

Anjana Raut, Swati Samantaray, and Rupsa Rani Sahu

Abstract The role of Artificial Intelligence (AI) has expanded exponentially in the healthcare sector, comprising primarily disease diagnosis, data management, treatment planning and administrative tasks. Recently, dental professionals have also shown keen interest in technology-assisted patient care and in third-party involvement in building products with AI capabilities. Artificial Intelligence has widespread applications in dentistry owing to its tremendous diagnostic potential and possible therapeutic applications. It has revolutionized the conventional dental practice system by integrating software applications and machine learning to provide a virtual second opinion in deciding a comprehensive treatment plan. This paper makes a modest attempt to review the existing state-of-the-art applications of AI and emphasizes the increased utilization of these profound technologies to enhance patient care in dentistry.

Keywords Artificial intelligence · Digital dentistry · Neural network · Machine learning · Health care · Holistic care

1 Prelude to Artificial Intelligence

Health care comprises medical tasks and administrative processes that generate an enormous amount of data sharing and documentation. Over the past few decades, artificial intelligence has been applied primarily to assist clinical decision-making, evidence-based diagnosis and treatment planning, and the sharing of information between professionals for timely opinions. Based on available datasets, it becomes easy to predict with precision the drugs and treatment processes most suitable for a patient. Moreover, clinical documents can be analysed and translated into structured, understandable notes that are easily stored for future reference.

A. Raut · R. R. Sahu
Kalinga Institute of Dental Sciences, Bhubaneswar, India
e-mail: [email protected]

S. Samantaray (B)
School of Humanities, Kalinga Institute of Industrial Technology, Bhubaneswar, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_34


Artificial Intelligence (AI) has been contributing significantly to hospital administrative tasks such as claim settlement and payment processing. The smooth workflow and precision it imparted to the medical field encouraged its widespread adoption in dentistry as well. The digital shift has promoted similar AI-based treatment protocols in different spheres of dentistry, exploiting its capability to discover and identify abnormalities that at times cannot be perceived by human vision, thereby paving the way for more extensive procedures. The conventional methods have provided a lot of information, but they have limitations too. Researchers are constantly trying to build a model that simulates the human brain, its signal transmission and its networking. The persistent efforts of scientists and researchers have produced new technological innovations; the AI methods currently practised include artificial neural networks (ANN), genetic algorithms (GA) and mathematical logic.

The term 'Artificial Intelligence' refers to the notion of constructing machines able to discharge functions normally performed by humans. AI consists of mathematical algorithms and their intelligent interpretation. As a whole, AI includes many subfields, such as Machine Learning (ML), Deep Learning (DL) and Neural Networks (NN), and applies these techniques to solve real problems. Machine learning is the subdivision of AI in which an algorithm is developed to execute a task by repeatedly learning patterns or sequences. ML techniques use mathematical logic, neural networks or both to examine information for diverse functions. The most widespread design is the Artificial Neural Network, which has found numerous applications in dentistry. DL consists of Artificial Neural Networks (ANN) with multiple advanced layers. The difference between Deep Learning and conventional Neural Networks, such as feed-forward and feed-backward networks, lies in their architecture: DL combines layers in more advanced ways and uses additional neurons to interpret complex designs precisely, and greater computing power helps it extract information. Such an algorithm uses its many layers to detect simple features such as figures, edges and structures, and then more advanced forms, such as abnormalities of organs, within a data structure.
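The layered structure described above can be illustrated with a minimal sketch of a feed-forward network in plain NumPy. This is a generic textbook construction, not code from any of the cited works; all layer sizes and random weights are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # One fully connected layer: a weight matrix plus a bias vector.
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

def relu(x):
    # Simple non-linearity applied between layers.
    return np.maximum(0, x)

def forward(x, layers):
    # Early layers pick up simple features; later layers combine them
    # into higher-level patterns, as described in the text.
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Three stacked layers: 16 inputs -> 8 -> 4 -> 2 outputs.
net = [layer(16, 8), layer(8, 4), layer(4, 2)]
out = forward(rng.normal(size=(5, 16)), net)
print(out.shape)  # (5, 2): one 2-value output per input sample
```

Adding more layers ("depth") is what distinguishes the DL architectures discussed here from a single-layer network.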

2 Objectives

The objectives of this research work are to critically review
• applications of AI in dentistry,
• barriers encountered and
• the challenges in adaptation.


3 Review of Literature

Many authors have contributed significantly at the intersection of AI and dentistry; some of the important scholarly works in this field (using AI, ML, DL and ANN) are as follows.

Kim et al. (2009) applied an artificial neural network to create a model that predicted toothache from its relationship with brushing cycle, brushing duration, brush replacement pattern, dental floss usage and maintenance of proper oral hygiene. The study successfully developed an efficient toothache-predicting model and concluded that proper food habits, oral hygiene and stress prevention can retard dental pain [1].

Kakilehto et al. (2009) used data mining analysis to determine the mean survival time of different restorative materials, including dental amalgam, glass ionomer and tooth-coloured composite, over a fixed duration. The role of the dental professionals was to gather, compile and tabulate the data [2].

Nieri et al. (2010) studied patients with impacted maxillary canines who underwent surgical intervention along with orthodontic treatment to extrude the impacted tooth. Variables covering population structure, orthodontic status and gingival health were recorded and analysed using Bayesian network analysis, which was concluded to be useful for reporting and managing similar clinical presentations [3].

Yet another study was conducted by Mago et al. (2011) to prepare an expert system to assist dentists in treating mobile teeth. A mathematical logic-based expert system was built to recognize incorrect as well as unclear values of dental signs and symptoms related to tooth mobility. The system suggested treatment plans, and the authors concluded that with this method dental practitioners feel more assured about the treatment planning of mobile teeth, as they can verify their opinion against the expert system [4].

Chen et al. (2015) adopted a genetic algorithm (GA) to improve shade-matching precision and concluded that the proposed technique improves the precision and predictive strength of colour matching in restorative dentistry [5].

Ghinea et al. (2015) developed multiple non-linear regression models that predict the reflectance spectrum of experimental dental composites from the nature and amount of pigments used in their chemical formulation. They concluded that the difference between measured and predicted values is negligible, and that the models are useful under in vitro conditions for controlling the chromatic behaviour of the samples [6].

Tripathi et al. (2019) proposed a system capable of recognizing cavities in X-ray film radiographs. The cavity radiographs contained a bounded range of grey-level pixels that distinguished them from normal teeth. The system utilized Local Binary Patterns (LBP) to extract second-order statistical texture features, which were fed to a backpropagation neural network to characterize the severity of tooth decay. The study concluded that this technique can enhance prediction accuracy [7].
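To make the feature-extraction step concrete, here is a minimal NumPy sketch of the basic 3 × 3 Local Binary Pattern operator. It is a generic textbook formulation, not the cited authors' implementation; the bit ordering and the random test image are arbitrary choices.

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded
    by comparing its 8 neighbours against it (bit = 1 if neighbour >=
    centre), giving an 8-bit texture code per pixel."""
    c = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left; one bit position each.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# The histogram of LBP codes is the texture feature vector that a
# classifier (e.g. a backpropagation network) would consume.
img = np.random.default_rng(1).integers(0, 256, (64, 64))
hist, _ = np.histogram(lbp_8(img), bins=256, range=(0, 256))
print(hist.sum())  # one code per interior pixel: 62 * 62 = 3844
```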

4 Applications of AI in Dentistry

AI has multiple diagnostic and therapeutic applications in different domains of dentistry, and hence it is a useful adjunct in delivering optimum care to the patients (as shown in Fig. 1).

4.1 Medical-Aided Diagnosis

The exact diagnosis of a disease is determined on the basis of clinical manifestations, laboratory findings and other factors, all of which are susceptible to memory imperfections and cognitive bias. Once programmed with information from many cases, AI can go beyond the capacity and reach of a clinician [8].

Fig. 1 Diagrammatic representation of AI-based clinical application


4.2 Radiology

AI can be combined with advanced imaging tools such as magnetic resonance imaging (MRI) and cone-beam computed tomography (CBCT) to spot errors that would otherwise be missed by the human eye. AI can locate major landmarks to yield significant information for cephalometric analysis [9]. Machine learning algorithms can detect lymph nodes and their enlargement in the head and neck [10]. Wang et al. first presented an article that used deep convolutional neural networks (DCNNs) to interpret dental radiographs [11]. More recently, Lee et al. conducted a study using a DCNN-based computer-assisted diagnosis (CAD) system for detecting osteoporosis on panoramic radiographs, and found no disagreement with the diagnoses made by experts [11].

4.3 Oral and Maxillofacial Surgery

Machine learning algorithms, including support vector machines, artificial neural networks, random forests and k-nearest neighbours (as reflected in Fig. 2), are capable of identifying cysts of dental origin, soft and hard tissue tumours and lymph node metastasis. A risk-categorization model based on brush biopsy and cytology was also prepared with ML algorithms such as support vector machines and random forests. Convolutional neural networks have been utilized to differentiate between malignant and highly malignant carcinomas [12]. Preliminary AI models were based on either radiographic results or cytopathologic images; there is a research need to integrate both into a more superior model. Today, AI is capable of tracing crucial anatomic structures such as neurovascular bundles, blood vessels, ducts and the like. Statistically, there is no significant difference between AI-based segmentation

Fig. 2 Commonly used algorithms


and real anatomic position, and this may prevent serious surgical complications [13]. AI is also instrumental in improving the speech comprehensibility of oral surgical patients. In dental extractions, an artificial neural network (ANN) that modeled the relationships between different variables was very accurate in forecasting facial swelling following the removal of impacted mandibular/maxillary third molars [14]. In oral carcinoma prognosis, machine learning-based models were an adjunct in forecasting occult nodal metastasis and recurrence; such an algorithm surpassed an established clinical model based solely on tumour invasion depth. However, owing to the limited data available and the huge number of variables, the accuracy of these predictive models is sometimes questionable.
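Of the algorithms listed above, k-nearest neighbours is the simplest to sketch. The toy example below is not from the cited studies: it classifies hypothetical two-dimensional feature vectors (stand-ins for image-derived features) by majority vote among the k closest training samples.

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    """Classify each row of X_new by majority vote among its k nearest
    training samples (Euclidean distance)."""
    preds = []
    for x in X_new:
        d = np.linalg.norm(X_train - x, axis=1)   # distance to every sample
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)

# Two well-separated synthetic clusters, e.g. hypothetical "benign" (0)
# vs "suspicious" (1) feature vectors.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(knn_predict(X, y, np.array([[0.1, 0.2], [2.9, 3.1]])))  # [0 1]
```

In practice the clinical studies cited use far richer feature vectors, but the voting mechanism is the same.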

4.4 Cariology and Endodontics

Deep learning with convolutional neural networks is employed in the detection of caries and the diagnosis of pathologies of the dental pulp; it can slice the image to enable in-depth assessment. Based on a coding-system algorithm, deep learning-segmented CBCT images are interpreted into forms comparable to a clinician's diagnosis [15]. Artificial neural networks are also capable of locating proximal cavities and accurately determining working length from radiographs, and are sensitive enough to diagnose vertical root fractures detected on digital radiography. Support vector machines (SVM) and artificial neural networks (ANN) achieved more than 90% accuracy in forecasting the severity of problems necessitating root canal treatment [16].

4.5 Periodontics

Periodontal disease is a multifactorial inflammatory condition. Convolutional neural networks can diagnose periodontally compromised posterior teeth with nearly 80% efficacy [17]. Decision trees and support vector machines have performed successfully in classifying periodontal diseases. A multilayer perceptron neural network focusing on immune cells and antibody titres, and a support vector machine concentrating on relative bacterial load, performed adequately in differentiating variants of periodontitis [18]. AI bridges classic markers with immunologic and microbiological parameters for a better understanding of the disease.

4.6 Temporomandibular Joint Disorder

A patient's history and chief complaint give significant information for diagnosing temporomandibular joint disorders (TMDs). Natural Language Processing applies computational analysis of language to differentiate TMD-like conditions from classic TMDs [19]. Artificial neural network-based CBCT assessment agreed very closely with clinician consensus [20]. A computer-assisted diagnosis (CAD) system should improve TMD diagnosis, enabling better management by all dental professionals.

4.7 Orthodontics

A knowledge-based algorithm can help detect important cephalometric landmarks, and subsequent correlation with clinical findings will accurately diagnose jaw discrepancies [21]. A diagnostic model based on lateral cephalometric radiographs was additionally built with machine learning algorithms, such as artificial neural networks, support vector machines, random forests and decision trees, to assess cervical bone maturation as a growth indicator. Among the algorithms tested, the artificial neural network performed best in identifying the cervical bone ossification stage, and the decision tree in classifying vertebral body form [22]. Automated cephalometric analysis offers vast research scope. The Bayesian network agreed with orthodontists in the diagnosis of orthodontic treatment needs [23]. Skilled appraisal of attractiveness depends on the experts' ability to perceive facial landmarks, whereas AI analysis provides a quantitative representation of social attractiveness.

4.8 Cancer Related to Head and Neck

Convolutional neural networks can identify objects with well-defined boundaries and are thus utilized for segmentation of organs at risk in head and neck cancer [24]. Another study found that keywords such as smoking, drinking, chewing, squamous cell carcinoma histology and oncogene were picked up by genetic programming algorithms to simplify the prognosis of head and neck carcinoma. It was additionally found that genetic programming (GP) surpassed the support vector machine and logistic regression (LR) in carcinoma prognosis; genetic programming is also applied in drug discovery [25]. Neural networks could also be important for identifying people at high risk of carcinoma or pre-cancerous conditions [26].

4.9 Pain Assessment

Surprisingly, artificial neural networks have also been used for determining the level of pain perception, and the results are statistically significant in giving accurate responses [27]. Quantitative assessment and its qualitative appreciation have been made possible by computer-based algorithms, resulting in easier clinical interpretation and in deciding the treatment needed to alleviate the pain. Pain can be dull or sharp, short-lived or continuous, localized or generalized, among many other forms, and it can be challenging for the clinician to decide on the exact cause. AI assists by providing a virtual opinion and makes the entire procedure more precise.

4.10 Prosthodontics

The design of aesthetic restorations depends on the patient's morphology and anthropometric factors. Its integration with planning software such as RaPiD can produce excellent cosmetic outcomes with equal precision: RaPiD combines computer-assisted design, knowledge-based systems and databases into a logic-based program. With the support of computing and program algorithms, the art of digital impressions can be easily learnt [28]. Computer-aided design and computer-aided manufacturing (CAD/CAM) are gaining speedy endorsement, and combining AI with CAD/CAM enhances its chair-side implementation [29]. Artificial neural networks developed on various imaging modalities have been researched for tooth segmentation and classification; with clinically significant accuracy noted, information acquisition and CAD/CAM-assisted production could be bridged together with outstanding results [30]. The analysis of fixed prosthodontics by automatic robotic systems still remains underdeveloped. For removable prostheses, a clinical decision support model is needed that can recommend the most suitable design for a typical clinical scenario [31]. However, such models are programmed on similar cases in the database and do not generalize well to atypical presentations; hence, the database needs constant updating.

4.11 Regenerative Dentistry

Research on stem cell regeneration can also employ ANNs as prediction models to check the viability of dental pulp in different concentrations of culture media, analysing regenerative potential and predicting outcomes [32]. Regenerative dentistry has the potential to restore lost body parts (teeth and surrounding structures) and to restore quality of life in patients who have undergone radical surgeries. This branch of dentistry is still under intensive research, and its integration with AI will aid in determining clinical feasibility and broad-spectrum applications.


4.12 Disease Forecast and Outcome

The latest advances in AI offer valuable frameworks for combining all clinical symptoms with available databases to scrutinize risk factors and forecast the long-term consequences of dental ailments by analysing associations between diseases and patient information [19].

4.13 Dental Implantology

Predictive AI models are beneficial in two aspects of dental implantology. First, ML models concentrate on clinical results and predicted bone levels. Second, AI-based support vector regression was roughly comparable to advanced mathematical models in forecasting stress at the implant-bone interface [33].

5 Barriers and Challenges

AI has penetrated every aspect of the healthcare sector, including dentistry, medicine, drug delivery, synthetic biology, training and the like. All the major healthcare enterprises have projected improved productivity, economic growth and greater job opportunities owing to AI. It has enabled a primary focus on critical aspects while leaving routine aspects to a machine, and it has enormous potential for enhancing research and development. Convolutional neural networks came into the limelight after 2015 and have been adopted in dental research settings [34]. Growth has been exponential over the years, resulting in a strong argument for increased utilization of AI-based technologies to improve patient care, business operations and finance (as reflected in Fig. 3) [35]. However, the following difficulties discourage its wider acceptance in the present system (as shown in Fig. 4).

5.1 Data Acquisition

AI in medicine has been emerging for several years, but its application to dental needs is still limited. Besides limitations such as insufficient evidence and evidence sharing, the scarcity of information on data handling and certification is an added weakness in dental AI research [36]. Sample sizes used for training and experimentation are at times uncertain, challenging the completeness and comparability of results. It is essential to improve methods for obtaining optimized data and reporting it. Introducing open access to


Fig. 3 Growth of artificial intelligence in health care (including dentistry)

information derived from different sampling variables is a major requirement for AI to facilitate distinguishing distinct programs.

5.2 Interpretability

The algorithms designed need to relate well to medical events so that their decisions have medical consensus. To be meaningful for healthcare applications, the technology should offer logical descriptions of medical incidents. It is equally difficult to analyse failures when clarity and interpretability are lacking [37].

5.3 Computing Power

AI requires continuous upgrading of processing power for its implementation. The absence of sufficient computational resources for data processing restricts the efficacy of algorithms [38]. The acquisition and processing of information from various data sources require further advances in AI computing.


Fig. 4 Frequently witnessed barriers

5.4 Class Imbalance

Class imbalance refers to the under-representation of minority events, leading to missed diagnoses and inappropriate patient care. To be sensitive and specific, a machine learning model should distinguish minority datasets so as to prevent selection bias. The model cannot perform close to clinical situations unless it contains an acceptable range of high-risk patients [39] (Fig. 4).
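One common mitigation is random oversampling of the minority class. The sketch below is a generic illustration, not a method from the cited work; the class labels and the 95:5 split are hypothetical.

```python
import numpy as np

def oversample_minority(X, y):
    """Naive random oversampling: replicate each class's samples (with
    replacement) until every class matches the majority-class count."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    rng = np.random.default_rng(0)
    parts_X, parts_y = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        take = rng.choice(idx, size=n_max, replace=True)
        parts_X.append(X[take])
        parts_y.append(y[take])
    return np.concatenate(parts_X), np.concatenate(parts_y)

# 95 "healthy" vs 5 "high-risk" records: a 19:1 imbalance.
X = np.random.default_rng(1).normal(size=(100, 4))
y = np.array([0] * 95 + [1] * 5)
Xb, yb = oversample_minority(X, y)
print(np.bincount(yb))  # [95 95]: classes now balanced
```

Oversampling only re-weights the existing minority cases; it does not add the genuinely new high-risk patients that the text argues clinical models need.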

5.5 Data Privacy and Security

There is considerable risk that a patient's health information could be exploited illegally by corporations. For instance, a patient's information could be taken by third parties for targeted marketing, or by insurance companies that may later make unfair decisions on premiums and rebates [39]. Cyber attacks are a major threat in healthcare institutions: through them, treatment modalities can be modified and serious harm may be caused to patients. Measures should be taken against cyber-security threats and unlawful access to patient information [40].


5.6 Dataset Shifts and Clinical Applicability

Dataset shift develops when machine learning models are deployed in non-stationary environments and refers to future data being fundamentally different from the data the model was originally trained on. Periodic observation, retraining and maintenance of the system are important, as machine learning systems have to evolve within the environment in which they operate in order to keep up a high standard of performance and patient safety. The evaluations currently used to assess machine learning models do not measure genuine improvement in clinical outcomes [39]; models can accurately find pathologies in pictures or build prognoses supported by electronic health records, yet this does not in itself demonstrate clinical benefit.
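A very simple form of the periodic observation recommended above is to compare live feature statistics against the training distribution. The sketch below is an illustration, not a method from the cited work: it flags features whose mean has drifted by many training standard deviations.

```python
import numpy as np

def feature_shift(train_X, live_X):
    """Per-feature shift between training data and live (deployment)
    data, measured in training standard deviations. Large values are a
    crude flag that the model may be seeing data it was not trained on."""
    mu = train_X.mean(axis=0)
    sd = train_X.std(axis=0) + 1e-9   # avoid division by zero
    return np.abs(live_X.mean(axis=0) - mu) / sd

rng = np.random.default_rng(3)
train = rng.normal(0, 1, (500, 3))
live_same = train.copy()                               # no shift at all
live_shifted = rng.normal([0, 0, 2.0], 1, (500, 3))    # third feature drifted

print(feature_shift(train, live_same).round(1))     # [0. 0. 0.]
print(feature_shift(train, live_shifted).round(1))  # last value is ~2
```

Real drift monitoring uses richer statistics (distribution distances, per-class checks), but even this crude check catches the gross shifts the text warns about.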

6 Conclusion

Artificial Intelligence is not a fiction or a delusion of the human mind but a reality of the present times. It has large scope in dentistry (and in other spheres of health care too) and has been an innovative turning point. Integrating automation with human effort has helped to build an optimal and intuitive work environment. Machine and deep learning in the dental field pave the way for faster decisions and accurate dental treatment. People are anxious about dental procedures and the fear associated with them; embracing AI software into practice has helped to minimize errors and to gain the trust of patients by providing holistic care. AI-powered dentistry has ensured a better quality of life for dental surgeons as well as patients and continues to reshape dentistry in the most advanced way.

References

1. Kim EY, Lim KO, Rhee HS (2009) Predictive modeling of dental pain using neural network. Study Health Technol Inform 146:745–746
2. Kakilehto T, Salo S, Larmas M (2009) Data mining of clinical oral health documents for analysis of the longevity of different restorative materials in Finland. Int J Med Inf 78:68–74. https://doi.org/10.1016/j.ijmedinf.2009.04.004
3. Nieri M, Crescini A, Rotundo R, Baccetti T, Cortellini P, Prato GP (2010) Factors affecting the clinical approach to impacted maxillary canines: a Bayesian network analysis. Am J Orthod Dentofacial Orthop 137(6):755–762. https://doi.org/10.1016/j.ajodo.2008.08.028
4. Mago VK, Mago A, Sharma P, Mago J (2011) Fuzzy logic based expert system for the treatment of mobile tooth. Softw Tools Algorithms Biol Syst 696:607–614. https://doi.org/10.1007/978-1-4419-7046-6_62
5. Li H, Lai L, Chen L, Lu C, Cai Q (2015) The prediction in computer color matching of dentistry based on GA+BP neural network. Comput Math Methods Med. https://doi.org/10.1155/2015/816719


6. Ghinea R, Pecho O, Herrera LJ, Lonescu AM, de la Cruz Cardona J (2015) Predictive algorithms for determination of reflectance data from quantity of pigments within experimental dental resin composites. BioMed Eng OnLine 14(Suppl 2)
7. Tripathi P, Malathy C, Prabhakaran M (2019) Genetic algorithms based approach for dental caries detection using back propagation neural network. Int J Recent Technol Eng 8:2277–3878
8. Bouletreau P, Makaremi M, Ibrahim B, Louvrier A, Sigaux N (2019) Artificial intelligence: applications in orthognathic surgery. J Stomatol Oral Maxillofac Surg 120(4):347–354. https://doi.org/10.1016/j.jormas.2019.06.001
9. Khanna S (2010) Artificial intelligence: contemporary applications and future compass. Int Dent J 60:269–272
10. Yaji A, Prasad S, Pai A (2019) Artificial intelligence in dento-maxillofacial radiology. Acta Sci Dental Sci 3:116–121
11. Chen YC, Hong DJ, Wu CW, Mupparapu M (2019) The use of deep convolutional neural networks in biomedical imaging: a review. J Orofac Sci 11:3–10. https://doi.org/10.4103/jofs.jofs_55_19
12. Sunny S, Baby A, James BL, Balaji D, Aparna NV, Rana MH, Gurpur P, Skandarajah A, D'Ambrosio M, Ramanjinappa RD, et al (2019) A smart tele-cytology point-of-care platform for oral cancer screening. PLoS One 14(11):e0224885. https://doi.org/10.1371/journal.pone.0224885
13. Gerlach NL, Meijer GJ, Kroon DJ, Bronkhorst EM, Berge SJ, Maal TJ (2014) Evaluation of the potential of automatic segmentation of the mandibular canal using cone-beam computed tomography. Br J Oral Maxillofac Surg 52(9):838–844. https://doi.org/10.1016/j.bjoms.2014.07.253
14. Zhang W, Li J, Li ZB, Li Z (2018) Predicting postoperative facial swelling following impacted mandibular third molars extraction by using artificial neural networks evaluation. Sci Rep 8(1):12281. https://doi.org/10.1038/s41598-018-29934-1
15. Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J (2020) Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J Endod 46(7):987–993. https://doi.org/10.1016/j.joen.2020.03.025
16. Saghiri MA, Asgar K, Boukani KK, Lotfi M, Aghili H, Delvarani A et al (2012) A new approach for locating the minor apical foramen using an artificial neural network. Int Endod J 45:257–265. https://doi.org/10.1111/j.1365-2591.2011.01970.x
17. Lee JH, Kim DH, Jeong SN, Choi SH (2018) Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci 48(2):114–123. https://doi.org/10.5051/jpis.2018.48.2.114
18. Feres M, Louzoun Y, Haber S, Faveri M, Figueiredo LC, Levin L (2018) Support vector machine-based differentiation between aggressive and chronic periodontitis using microbial profiles. Int Dent J 68(1):39–46. https://doi.org/10.1111/idj.12326
19. Shan T, Tay FR, Gu L (2020) Application of artificial intelligence in dentistry. J Dent Res 100(3):233–244. https://doi.org/10.1177/0022034520969115
20. Shoukri B, Prieto JC, Ruellas A, Yatabe M, Sugai J, Styner M, Zhu H, Huang C, Paniagua B, Aronovich S, et al (2019) Minimally invasive approach for diagnosing TMJ osteoarthritis. J Dent Res 98(10):1103–1111. https://doi.org/10.1177/0022034519865187
21. Gupta A, Kharbanda OP, Sardana V, Balachandran R, Sardana HK (2015) A knowledge-based algorithm for automatic detection of cephalometric landmarks on CBCT images. Int J Comput Assist Radiol Surg 10(11):1737–1752. https://doi.org/10.1007/s11548-015-1173-6
22. Amasya H, Yildirim D, Aydogan T, Kemaloglu N, Orhan K (2020) Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: comparison of machine learning classifier models. Dentomaxillofac Radiol 49(5):20190441. https://doi.org/10.1259/dmfr.20190441
23. Thanathornwong B (2018) Bayesian-based decision support system for assessing the needs for orthodontic treatment. Healthc Inform Res 24(1):22–28. https://doi.org/10.4258/hir.2018.24.1.22

430

A. Raut et al.

24. Arik S, Ibragimov B, Xing L (2017) Fully automated quantitative cephalometry using convolutional neural networks. J Med Imag 4(1):014501. https://doi.org/10.1117/1.JMI.4.1. 014501 25. Tan MS, Tan JW, Chang S-W, Yap HJ, Abdul Kareem S, Zain RB (2016) A genetic programming approach to oral cancer prognosis. PeerJ 4:2482. https://doi.org/10.7717/peerj.2482 26. Kalappanavar A, Sneha S, Annigeri RG (2018) Artificial intelligence: A dentist’s perspective. J Med Radiol Pathol Surg. 5:2–4. https://doi.org/10.15713/ins.jmrps.123 27. Hu XS, Nascimento TD, Bender MC, Hall T, Petty S, O’Malley S, et al (2019) Feasibility of a real-time clinical augmented reality and artificial intelligence framework for pain detection and localization from the brain. J Med Internet Res 21:e13594. https://doi.org/10.2196/13594 (2019) 28. Sharma S (2019) Artificial intelligence in dentistry: current concepts and a peek into the future. Int J Adv Res 6(12):5–9 29. Raith S, Vogel EP, Anees N, Keul C, Güth J-F, Edelhoff D, Fischer H (2017) Artificial neural networks as a powerful numerical tool to classify specific features of a tooth based on 3D scan data. Comput Biol Med 80:65–76. https://doi.org/10.1016/j.compbiomed.2016.11.013 30. Wang L, Wang D, Zhang Y, Ma L, Sun Y, Lv P (2014) An automatic robotic system for threedimensional tooth crown preparation using a picoseconds laser. Lasers Surg Med 46(7):573– 581. https://doi.org/10.1002/lsm.22274 31. Chen Q, Wu J, Li S, Lyu P, Wang Y, Li M (2016) An ontology-driven, case based clinical decision support model for removable partial denture design. Sci Rep 6(1):27855. https://doi. org/10.1038/srep27855 32. Bindal P, Bindal U, Lin CW, Kasim NHA, Ramasamy T, Dabbagh A, Salwana E, Shamshirband S. Neuro-fuzzy method for predicting the viability of stem cells treated at different timeconcentration conditions. Technol Health Care 25(6):1041–1051. https://doi.org/10.3233/thc170922 33. 
Papantonopoulos G, Gogos C, Housos E, Bountis T, Loos BG (2017) Prediction of individual implant bone levels and the existence of implant “phenotypes.” Clin Oral Implants Res 28(7):823–832. https://doi.org/10.1111/clr.12887 34. Schwendicke F, Golla T, Dreher M, Krois J (2019) Convolutional neural networks for dental image diagnostics: a scoping review. J Dent 91:103226. https://doi.org/10.1016/j.jdent.2019. 103226 35. Statista Research Department. https://www.statista.com/statistics/607612/worldwide 36. Schwendicke F, Samek W, Krois J (2020) Artificial intelligence in dentistry: chances and challenges. J Dent Res 99(7):769–774. https://doi.org/10.1177/0022034520915714 37. Magrabi F, Ammenwerth E, McNair JB, De Keizer NF, Hyppönen H, Nykänen P, Rigby M, Scott PJ, Vehko T, Wong ZS et al (2019) Artificial intelligence in clinical decision support: challenges for evaluating AI and practical implications. Year Med Inform 28(1):128–134 38. Solenov D, Brieler J, Scherrer JF (2018) The potential of quantum computing and machine learning to advance clinical research and change the practice of medicine. Mo Med 115(5):463– 467 39. Pethani F (2020) Promises and perils of artificial intelligence in dentistry. Aust Dent J. https:// doi.org/10.1111/adj.12812 40. Mirsky Y, Mahler T, Shelef I, Elovici Y (2019) CT-GAN: malicious tampering of 3D medical imagery using deep learning. arXiv:1901.03597

Atmospheric Weather Fluctuation Prediction Using Machine Learning Srishty Singh Chandrayan, Khushal Singh, and Akash Kumar Bhoi

Abstract Weather forecasting predicts future atmospheric conditions. It is crucial because industries such as manufacturing and livestock farming depend on precise forecasts. Different categories of neural networks are presented in this study in the context of efficient weather prediction. The three neural network models investigated are the recurrent neural network (RNN), the multilayer perceptron (MLP), and the radial basis function (RBF) network. The paper also describes the steps taken to achieve the reported results. Weather is a complex, nonlinear process that a neural network can model. This study uses NumPy, Pandas, Keras, Git, TensorFlow, Matplotlib, Google Cloud resources, and Anaconda for weather forecasting. The root mean square error (RMSE) between predicted and actual values, together with the prediction accuracy, is used to evaluate and compare the models. RNN produced the best performance, recording a minimum RMSE of 1.432 over a prediction window of 56 days. The accuracy rates produced by RNN, MLP, and RBF were 94.3, 91.5, and 92.9%, respectively. RNN is therefore considered the most efficient at forecasting the weather. As a result, using a machine learning approach with a recurrent neural network for weather fluctuation prediction provides more relevant and detailed information at a lower cost than other prediction models.

Keywords Neural network · Recurrent neural network (RNN) · Weather forecasting · Multilayer perceptron (MLP) · Radial basis function (RBF)

S. S. Chandrayan (B) · K. Singh
School of Computer Engineering, Kalinga Institute of Industrial Technology (Deemed to be University), Bhubaneswar, Odisha, India
e-mail: [email protected]
K. Singh
e-mail: [email protected]
A. K. Bhoi
KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
Directorate of Research, Sikkim Manipal University, Gangtok 737102, Sikkim, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_35


1 Introduction

Machine learning is a component of artificial intelligence. In machine learning (ML), a program learns from data and experience using various computer algorithms [1]; the computer does not need explicit programming. Machine learning experts are in high demand from businesses, and the technique is used all over the world. Predicting the weather from past data is a difficult and challenging task because it depends on many factors [2]. Traditional methods of weather prediction using satellite images and weather stations are expensive and rely on highly nuanced approaches, whereas machine learning is less costly, faster, and provides real-time, reliable predictions. The first step in forecasting weather with machine learning is to build a model and train it using machine learning algorithms. The model must be provided with data, which is essential for training any machine learning model; raw data first goes through filtering processes such as data cleaning and processing, which allows the model to predict values effectively. The model is trained on previous weather data and predicts from it; some models also combine current weather data with historical data. Machine learning models can be more precise than the older techniques [3]. Such a prediction model could also be made available to the general public via the Internet. Weather predictions are made by gathering comprehensive data on the current condition and past patterns of the atmosphere and predicting, using theoretical knowledge of atmospheric processes, how the atmosphere will change. Weather alerts are critical for the safety of both people and property.
Its applications range from helping a student decide to carry an umbrella when evening rain is forecast, to helping government organizations evacuate residents of a locality when heavy rain is predicted for the region. Farmers can also use rainfall forecasts. Predicting weather patterns is complex and difficult since it depends on a variety of variables, and conditions can change every few hours, sometimes dramatically. Weather forecasting anticipates the state of the atmosphere of a particular locality at a later time, and forecasts serve a wide variety of purposes. Weather alerts are important forecasts because they safeguard people and property. Temperature and rainfall forecasts matter for agriculture, and thus for commodity traders. Utility companies use temperature forecasts to predict demand for the coming days. Most people consult weather forecasts daily to decide what to wear [4]. Heavy rain and snow severely restrict outdoor activities, so forecasts can be used to schedule activities around such events, as well as to anticipate and prepare for them. Previously, weather forecasting was conducted manually using barometers and observations of sky and cloud conditions; nowadays it is carried out by computers, whose outcomes are much more reliable. Forecasting assembles quantitative datasets describing the present scenario and past observations of atmospheric patterns; analyzing these scientifically yields a prediction. With this physical approach, predictions are not very reliable beyond about 10 days, whereas the machine learning approach does not require a deep understanding of the physical process. Hence, machine learning is a useful alternative to the physical approach [5–7].

2 Background

This section reviews previous research on weather forecasting using neural network-based machine learning approaches.

2.1 Neural Networks for Weather Forecasting

When it comes to forecasting, precision is crucial. Different types of data require different techniques, and the inputs to a weather forecasting model must be treated accordingly. Artificial intelligence approaches handle nonlinear data, whereas statistical methods handle linear data. Genetic algorithms, neuro-fuzzy logic, and neural networks are examples of artificial intelligence learning models; among them, neural networks are the most common, and using an ANN improves the accuracy of weather forecasting [8]. Everyday weather statistics include many variables such as temperature, precipitation level, cloud size and distance, humidity, and wind direction and speed. All of these variables are nonlinear, yet they must be combined to determine the temperature, humidity, precipitation, or overall forecast for the next day. Such applications need capable methods: models that can produce the desired result by discovering patterns on their own from the training data provided. Although weather data is nonlinear with a highly irregular pattern, the artificial neural network (ANN) has proven to be an effective technique for revealing the structural relationships among the assorted entities. Three neural network-based models have been used here:

• Recurrent Neural Network (RNN);
• Multilayer Perceptron (MLP);
• Radial Basis Function (RBF) network.

Radial Basis Function Neural Network (RBF) A radial basis function network weights each input by a function of its distance (radius) from a center point; this radial function also serves as the activation function. The network's output is a linear combination of radial basis functions of the inputs and the neurons' parameters.


Two layers make up such a network: the inner layer combines the inputs using the radial basis functions [9], and the outputs of these functions are then taken into account when computing the output at the next time-step.

Multilayer Perceptron (MLP) A multilayer perceptron has three or more layers. It is used to classify data that is not linearly separable, and it is a fully connected artificial neural network [10]: each node in one layer connects to every node in the next layer. The multilayer perceptron uses a nonlinear activation function (mainly the hyperbolic tangent or the logistic function). The input layer receives data, one or more hidden layers provide levels of abstraction, and the output layer makes the prediction. MLPs are well suited to classification tasks in which inputs are assigned classes or labels.

Recurrent Neural Network (RNN) A recurrent neural network is an artificial neural network in which the output of a layer is stored and fed back into the input [11]; this helps in predicting the layer's result. The first layer is built like a feedforward layer, multiplying the inputs by the weights. The recurrent mechanism begins in the subsequent layers: each node remembers some information from the previous time-step to the next. In other words, while computing and performing operations, every node acts as a memory cell. The network starts with forward propagation as usual but saves any information it may need later. This connected structure between nodes lets it exhibit complex temporal behavior.
RNNs, which are derived from feed-forward neural networks, can handle variable-length input sequences by using their internal state (memory). Here, the data was indexed by timestamp and then normalized so that the sigmoid activation function could be applied. A Keras sequential model was then used to predict the weather; it was run for 20 epochs of 100 steps each.
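The single-node recurrence described above can be sketched in plain NumPy. The weights and the three-step input sequence below are illustrative assumptions, not values from the study:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, applied after the inputs are normalized.
    return 1.0 / (1.0 + np.exp(-z))

def rnn_forward(inputs, w_x, w_h, b):
    """Run a single-unit recurrent cell over a sequence.

    Each time-step mixes the current input with the hidden state
    carried over from the previous step -- the node's 'memory'.
    """
    h = 0.0
    states = []
    for x in inputs:
        h = sigmoid(w_x * x + w_h * h + b)
        states.append(h)
    return states

states = rnn_forward([0.2, 0.5, 0.1], w_x=1.0, w_h=0.5, b=0.0)
```

Changing an early input changes every later hidden state, which is what lets the network capture temporal dependence in a weather series.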

2.2 Related Works

Machine learning is a recent trend in the field of weather forecasting [12], and numerous works address the subject with varied and fascinating methods. While most current forecasting technology relies on physics-based simulations and differential equations, many recent artificial intelligence efforts have used machine learning techniques such as genetic algorithms and neural networks, while some draw on probabilistic models such as Bayesian networks.


In the neural network approach, Kaur [13] and Maqsood [14] compared multilayer perceptron networks (MLP) and radial basis function networks (RBFN) and provided a model that could predict hourly temperature, wind speed, and relative humidity 24 h in advance. Hayati and Mohebi [15] studied multilayer perceptron (MLP) neural networks developed and evaluated on 10 years of weather data; the network had three layers, with a logistic sigmoid activation function in the hidden layer and linear functions in the output layer. Holmstrom et al. [16] proposed a technique to forecast the maximum and minimum temperature of the next seven days, given the data for the past two days. Two models were built. The first was a linear regression model predicting low and high temperatures from eight features — the mean humidity, minimum temperature, maximum temperature, and mean atmospheric pressure of the previous two days — though this model cannot use the weather data of each individual day. The second was a variation on functional regression, which searches past weather data for conditions similar to the current weather. Professional weather forecasting services outperformed both models, but for later days and longer time horizons the gap between these models and the professional ones diminished rapidly. Grover et al. [17] studied forecasting with a hybrid model that integrates discriminatively trained statistical models with deep neural networks to capture the joint statistics of a set of weather-related parameters. In the last decade, a range of high-quality machine learning tools has become available, including packages for free programming languages like Python; Google's TensorFlow is one such machine learning framework that runs in Python. For linear systems, the AR, ARX, and ARMAX model variants are widely used in system identification [18].
Krasnopolsky and Fox-Rabinovitz [19] proposed a hybrid model in which neural networks emulate the physics of weather forecasting. Radhika et al. [20] used support vector machines to solve a classification task in weather prediction. Montori et al. [21] used the idea of crowdsensing, in which people share data about environmental factors from their smartphones. The approach of Hosking and Wallis is often used to produce more efficient flood quantile estimators; however, it tends to require extensive computational effort and a vast amount of input data [22].

3 Methodology

The following process is followed; Fig. 1 shows the flowchart of the research methodology. First, the weather data is accumulated and then passed on for processing, which comprises data cleaning, feature selection, and data normalization [23]. Once processing is done, the data is split for training and testing. It then passes through the RNN, MLP, and RBF models, respectively, and is ready for prediction. Finally, the RMSE and accuracy of these models are analyzed.

Fig. 1 Flowchart of the research methodology

The data used here was collected from various airport weather stations in India.

3.1 Setup

The entire environment is set up in Anaconda with Python 3.6. Libraries such as Pandas, NumPy, Sklearn, and Matplotlib are used.

3.2 Gathering Data

For a complete picture, data has been collected from various airport weather stations. These datasets contain a number of attributes:

• mean wind direction at 10–12 m above the earth's surface;
• atmospheric pressure at the level of the weather station;
• air pressure reduced to mean sea level;
• dew point temperature at a height of 2 m above the earth's surface;
• relative humidity (percent) at a height of 2 m above the earth's surface;
• horizontal visibility;
• air temperature at 2 m above the earth's surface;
• total cloud cover.

Data for a couple of days in this dataset is incomplete, but this is offset by the large dataset size.

3.3 Preprocessing of Data

Data preprocessing is a data mining technique that transforms raw data into a usable format [24]. Real-world data is often inconsistent, conflicting, missing many patterns or behaviors, and prone to errors; preprocessing is a proven way of resolving such issues and preparing raw data for further study. Our dataset contains some missing values, so we apply the following preprocessing techniques before training the model.

3.3.1 Cleaning of Data

Data cleaning is the method of removing or correcting duplicated, incorrect, incomplete, irrelevant, or incorrectly formatted data in order to prepare it for analysis [25]. It is not just about erasing data to make space for fresh data; it is about maximizing the consistency of a dataset without deleting it entirely. We assign numeric codes to the different wind directions and map them into the wind-direction columns. We fill missing values in the dew point and rain columns with 0, reflecting no dew and no rain, respectively. Other columns are filled using linear interpolation.
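These cleaning steps can be sketched with Pandas. The column names, the compass-to-degrees mapping, and the toy values below are illustrative assumptions, not the actual station data:

```python
import numpy as np
import pandas as pd

# Toy rows standing in for airport-station records (hypothetical values).
df = pd.DataFrame({
    "wind_dir": ["N", "NE", "E", "N"],
    "dew_point": [2.0, np.nan, 3.0, np.nan],
    "rain": [np.nan, 1.2, np.nan, 0.5],
    "temp": [20.0, np.nan, 24.0, 26.0],
})

# Assign numbers to the different wind directions.
dir_codes = {"N": 0, "NE": 45, "E": 90, "SE": 135,
             "S": 180, "SW": 225, "W": 270, "NW": 315}
df["wind_dir"] = df["wind_dir"].map(dir_codes)

# Missing dew point / rain values become 0: no dew, no rain.
df[["dew_point", "rain"]] = df[["dew_point", "rain"]].fillna(0)

# Remaining gaps are filled by linear interpolation.
df["temp"] = df["temp"].interpolate()
```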

3.3.2 Feature Selection

When creating a predictive model, feature selection is the process of reducing the number of input parameters. Reducing the number of input variables lowers the computational cost of modeling and, in some cases, improves the model's performance [26]. The relationships between the weather data columns are considered, and their covariance matrix is generated, conveying the linear dependency between the distinct variable columns. Columns with high linear dependency do not provide additional predictive information and have thus been excluded from the dataset.
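A minimal version of this dependency screening can be written with NumPy's correlation matrix. The 0.95 threshold and the synthetic feature columns are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
temp = rng.normal(25, 5, 200)
features = np.column_stack([
    temp,                                   # air temperature
    0.9 * temp + rng.normal(0, 0.1, 200),   # near-duplicate of temperature
    rng.normal(50, 10, 200),                # humidity, independent
])

corr = np.corrcoef(features, rowvar=False)  # pairwise linear dependency
threshold = 0.95
keep = []
for j in range(corr.shape[1]):
    # Keep a column only if it is not highly correlated with a kept one.
    if all(abs(corr[j, k]) < threshold for k in keep):
        keep.append(j)

reduced = features[:, keep]  # the redundant near-duplicate column is dropped
```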

3.3.3 Data Normalization

This step removes unstructured data and redundancy (duplicates) to ensure rational data storage. When data normalization is done right, the result is consistently structured data entries.
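In the scaling sense also used in this study (values squashed into a range suitable for the sigmoid activation), normalization can be sketched as column-wise min–max scaling; the two-column sample is illustrative:

```python
import numpy as np

def min_max_normalize(x):
    """Scale each column into [0, 1] so that no variable dominates
    training purely because of its unit range."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

# Temperature (deg C) and station pressure (hPa): very different ranges.
sample = np.array([[10.0, 1000.0],
                   [20.0, 1010.0],
                   [30.0, 1020.0]])
scaled = min_max_normalize(sample)
```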

3.4 Training Models

We used three machine learning methods for the weather forecast: RBF, MLP, and RNN [27]. For research purposes we predicted only temperature, but the same algorithms can be used to forecast other weather features such as rainfall, pressure, and snowfall. The Adam optimizer is used for fast training: 100 iterations were run with a batch size of 32, taking nearly 300 s in total. As is usual for regression, the loss is the root mean square (RMS) error. Figure 2 shows the reduction in the loss per iteration; since the loss levels off by roughly iterations 15–18, that many iterations are not needed and training could be cut down to about 20.
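The RMS-error loss used here is simply the square root of the mean squared difference between actual and predicted temperatures (the three sample values are illustrative):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error between actual and predicted temperatures.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

error = rmse([20.0, 25.0, 30.0], [21.0, 24.0, 31.0])  # -> 1.0
```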

Fig. 2 Reduction in loss per iteration


4 Results

Comparing all three models, the RNN model turns out to be better than MLP and RBF in this scenario over the 56-day prediction window. Table 1 highlights the error rate analysis using the three techniques: the RNN model shows the least RMSE value of 1.432, while RBF records a relatively high RMSE of 6.681. Figure 3 plots the serial number of the data on the x-axis and the temperature on the y-axis; the blue curve is the predicted temperature and the orange curve the real temperature. A vigorous amount of fluctuation is observed, and at certain points a step-like signal appears, with a maximum around 40° and a minimum around 15°. Figure 4 plots the serial number of the data on the x-axis and the temperature on the y-axis; the red curve is the MLP-predicted temperature and the blue curve the real temperature, and both mostly fluctuate in the range of 15–37 on the temperature axis.

Table 1 Comparison of machine learning models used in the study

Model   Prediction window   RMSE error
RNN     56 days             1.432
MLP     56 days             3.287
RBF     56 days             6.681

Fig. 3 Real versus forecast temperature by SVM


Fig. 4 Real versus forecast temperature by ANN

Figure 5 plots the serial number of the data on the x-axis and the temperature on the y-axis; the green curve is the RNN-predicted temperature and the blue curve the real temperature, and both mostly fluctuate in the range of 21–39 on the temperature axis. A comparative accuracy analysis was carried out for the three neural network types: RNN achieved the best accuracy rate at 94.3%, while the lowest accuracy, 91.5%, was noted for the MLP model. The outcome is depicted in Fig. 6.

Fig. 5 Real versus forecast temperature by RNN


Fig. 6 Accuracy rate analysis of types of neural networks (prediction accuracy: RNN 94.3%, MLP 91.5%, RBF 92.9%)

5 Conclusion

Weather is chaotic in nature, so meteorologists have always found it difficult to forecast precisely. Effectively predicting outcomes is a significant challenge for weather forecasting, whose results feed a variety of real-time systems such as energy centers, airports, and tourism hubs [28–30]. The complexity of the parameters makes forecasting challenging, since each parameter has its own range of values, and different methods and new models are continually updated to keep pace with the ever-changing nature of weather. In this work, weather forecasting was carried out using three neural network methods: RNN, MLP, and RBF. Different performance metrics were used for evaluation. RNN generated the best performance, recording a minimum RMSE of 1.432 over a prediction window of 56 days. The accuracy rates produced by RNN, MLP, and RBF were 94.3, 91.5, and 92.9%, respectively. As a result, the neural network with the RNN algorithm appears to be the most suitable technique for precisely forecasting weather.

References

1. Mishra S, Mallick PK, Tripathy HK, Bhoi AK, González-Briones A (2020) Performance evaluation of a proposed machine learning model for chronic disease datasets using an integrated attribute evaluator and an improved decision tree classifier. Appl Sci 10(22):8137
2. Mallick PK, Mishra S, Chae G-S (2020) Digital media news categorization using Bernoulli document model for web content convergence. Pers Ubiquitous Comput 1–16


3. Mishra S, Tripathy HK, Mallick P, Bhoi AK, Barsocchi P (2020) EAGA-MLP—an enhanced and adaptive hybrid classification model for diabetes diagnosis. Sensors 20:4036
4. Pachauri RK, Allen MR, Barros VR, Broome J, Cramer W, Christ R, Church JA, Clarke L, Dahe Q, Dasgupta P, et al (2014) Climate change 2014: synthesis report. Contribution of working groups I, II and III to the fifth assessment report of the intergovernmental panel on climate change. Intergovernmental Panel on Climate Change, Geneva, Switzerland
5. Mishra S, Mallick PK, Jena L, Chae G-S (2020) Optimization of skewed data using sampling-based preprocessing approach. Front Public Health 8:274
6. García MA, Balenzategui J (2004) Estimation of photovoltaic module yearly temperature and performance based on nominal operation cell temperature calculations. Renew Energy 29:1997–2010
7. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2021) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing. Springer, Singapore, pp 485–494
8. Jena L, Mishra S, Nayak S, Ranjan P, Mishra MK (2021) Variable optimization in cervical cancer data using particle swarm optimization. In: Advances in electronics, communication and computing. Springer, Singapore, pp 147–153
9. Smith DM, Cusack S, Colman AW, Folland CK, Harris GR, Murphy JM (2007) Improved surface temperature prediction for the coming decade from a global climate model. Science 317:796–799
10. Sahoo S, Das M, Mishra S, Suman S (2021) A hybrid DTNB model for heart disorders prediction. In: Advances in electronics, communication and computing. Springer, Singapore, pp 155–163
11. Roy SN, Mishra S, Yusof SM (2021) Emergence of drug discovery in machine learning. In: Technical advancements of machine learning in healthcare, p 119
12. Tutica L, Vineel KSK, Mishra S, Mishra MK, Suman S (2021) Invoice deduction classification using LGBM prediction model. In: Advances in electronics, communication and computing. Springer, Singapore, pp 127–137
13. Kaur A, Sharma JK, Agrawal S (2011) Artificial neural networks in forecasting maximum and minimum relative humidity. Int J Comput Sci Netw Security 11(5):197–199
14. Maqsood I, Khan MR, Abraham A (2004) An ensemble of neural networks for weather forecasting. Neural Comput Appl 13:112–122
15. Hayati M, Mohebi Z (2007) Application of artificial neural networks for temperature forecasting. World Acad Sci Eng Technol 28(2):275–279
16. Holmstrom M, Liu D, Vo C (2016) Machine learning applied to weather forecasting
17. Grover A, Kapoor A, Horvitz E (2015) A deep hybrid model for weather forecasting. In: Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 379–386
18. Ljung L (1999) System identification: theory for the user. Prentice Hall information and system sciences series. Prentice Hall PTR. ISBN 9780136566953. https://books.google.no/books?id=nHFoQgAACAAJ
19. Krasnopolsky VM, Fox-Rabinovitz MS (2006) Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction. Neural Netw 19(2):122–134
20. Radhika Y, Shashi M (2009) Atmospheric temperature prediction using support vector machines. Int J Comput Theory Eng 1(1):55
21. Montori F, Bedogni L, Bononi L (2017) A collaborative internet of things architecture for smart cities and environmental monitoring. IEEE Internet Things J
22. Pandey GR, Nguyen VTV (1999) A comparative study of regression based methods in regional flood frequency analysis. J Hydrol 225:92–101
23. Sushruta M, Hrudaya KT, Brojo KM (2017) Filter based attribute optimization: a performance enhancement technique for healthcare experts. Int J Control Theory Appl 10:295–310


24. Mishra S, Tadesse Y, Dash A, Jena L, Ranjan P (2019) Thyroid disorder analysis using random forest classifier. In: Intelligent and cloud computing. Springer, Singapore, pp 385–390
25. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, Udaipur, India (4–5 Mar 2016), pp 1–3
26. Mishra S, Koner D, Jena L, Ranjan P (2021) Leaves shape categorization using convolution neural network model. In: Intelligent and cloud computing. Springer, Singapore, pp 375–383
27. Jena L, Kamila NK, Mishra S (2014) Privacy preserving distributed data mining with evolutionary computing. In: Proceedings of the international conference on frontiers of intelligent computing: theory and applications (FICTA) 2013. Springer, Cham, pp 259–267
28. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies (Mar 2016), pp 1–3
29. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security. IGI Global, pp 84–104
30. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in internet of things. In: Emerging trends and applications in cognitive computing. IGI Global, pp 224–257

LGBM-Based Payment Date Prediction for Effective Financial Statement Management Laharika Tutica, K. S. K. Vineel, and Pradeep Kumar Mallick

Abstract Collections are, predominantly, the payments received for a generated invoice from a client or customer. An Account Receivable (AR) is undoubtedly a huge asset on an organization's financial statements. With B2B transactions amplifying in volume and complexity, misuse of AR can lead to superfluous costs and financial issues. Payment date prediction can prioritize customers who are likely to have large amounts of overdue receivables, which in turn can result in lower Days Sales Outstanding (DSO), better visibility into future cash flow, and reduced payment delays. A varied range of machine learning techniques, namely Random Forest, Decision Tree, and Gradient Boosting, were used for the analysis during this research. Observations showed that the LGBM model furnished the most favorable output, thereby aiding business analysts in handling invoice payments. Hence, by taking the correct measures and actions, collections can be optimized with payment predictions.

Keywords Account receivable · B2B · Days sales outstanding (DSO) · Random forest · Decision tree · Gradient boosting

1 Introduction

An Account Receivable (AR) is a legally enforceable claim for payment held by a business for goods supplied and/or services rendered that customers/clients have ordered but not yet paid for [1]. A business raises an AR, typically in the form of an invoice, and then delivers it to the customer expecting payment within the agreed time frame. It is recorded as an asset, an accounting transaction that covers a customer's bills for goods and services that the customer

L. Tutica (B) · K. S. K. Vineel · P. K. Mallick School of Computer Engineering, Kalinga Institute of Industrial Technology, Bhubaneswar, Odisha, India K. S. K. Vineel e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_36


has ordered. Receivables can be distinguished from notes receivable, i.e. debts created through formal legal instruments called promissory notes [2]. Business Analytics has been adopted as a practice of iterative, methodical analysis of an organization's data with an emphasis on statistical analysis. It is employed by companies with a data collection system to make data-driven decisions [3]. In the last 15 years, however, the business analytics field has undergone a very positive change, as the amount of available information has increased exponentially [4]. Specifically, this flood of information has encouraged automated methods of data analysis, in which researchers look to a machine for help. It is therefore unsurprising that ML algorithms have emerged as a major force in the business analytics industry [5, 6]. Figure 1 shows the scenario of how various organizations get their payments done. The principal motive is to train and test a model on the CSV file dataset using ML algorithms. Additional features, such as the weekday on which a transaction takes place, are derived from the features already present. The final value for a particular day is the output feature to be predicted. Correlation is calculated to understand the relationship between any two features. Techniques such as Linear Regression and Random Forest Regression are used to predict the output feature, while k-fold Cross Validation is used to validate the model and improve its stability [7]. Once the ML algorithm to be used is determined, based on the computed memory and accuracy results, the model is given a set of test data that was kept invisible from the model during training, and the accuracy level is then calculated [8].

Fig. 1 Common scenario of method of payment by organizations


2 Literature Survey

Incorporating machine learning models to make business predictions consists of two parts [9]:
• Formulating the business problem as a predictive model and developing a machine learning algorithm for that model.
• Improving revenue and reducing DSO through proactive collections management.

The Collections Cloud solution provides a complete set of tools to enhance and automate the credit management and collection process and enables prioritization of collection operations [10]. The solution simplifies processing, reduces settlement cycle time and processing cost, and also helps reduce customer delays. Nevertheless, teams of group managers and analysts review many thousands of collections per year [11–13]. The collection output is often enhanced in various ways:
• Risk Identification: identification of all suspicious delinquent accounts by taking into account consumer history information and actual credit analysis of all customer accounts.
• Collection Worklist Prioritization: the creation of a priority task list, supported by the dynamics and changes in the live data with the help of AI and ML [14].
• Better Collection Strategy: the development of advanced collection strategies that emerge as best practices of various industries in line with AI recommendations.
• Effective Order and Credit Management: better prediction of delays, calculated at the time of order creation. This is usually supported by various invoice parameters and customer account history. Customers may be required to prepay certain invoices; payment commitments may be made at the time of ordering, or customer credit terms may be revised, to prevent payment delays [15].

3 Problem Statement

Accounts Receivable is an integral part of a company's accounting budget. Businesses intend to collect all outstanding invoices before they fall due. In order to achieve a lower DSO and better operating costs, organizations need an effective collection strategy to focus on each account. If a company fails to keep up with collections, the business will be at risk [16]. With a structured and consistent collection process, businesses can avoid a negative cash flow situation. The current collection process requires many resources to keep records while dealing with several common tasks. In addition, these jobs have little integration between them. Manual


Table 1 Data insights used in the study

Account used: Company (provided by the organization)
Dimensions: 50,001 × 36
Dimensions (after feature engineering): 44,986 × 12
Features present: Account Id, document number norm, company code, fiscal year, branch, customer number norm, Fk customer map Id, document date norm, baseline date norm, due date norm, invoice number norm, open amount norm, payment terms, clearing date norm, IsOpen, order type, order date, business area, ship date, job Id, tax amt, current dispute amount, document Id, document date, due date, invoice age, IsValid dispute, retainage amount, posting key, Fk currency, debit credit indicator, valid open amount, customer name
Features present after feature engineering: Company code, fiscal year, Fk customer map id, document date norm, due date norm, open amount norm, payment terms, clearing date norm, IsOpen, document Id, actual open amount, currency
Train dataset size: 35,988 of the 44,986 (approximately 80% of the dataset, where the output variable is present)
Test dataset size: 8998 of the 44,986 (approximately 20% of the dataset)
Output variable: Delay (derived from clearing date norm and due date norm, i.e. clearing date − due date)

processes in the collection workflow can create significant inefficiencies, with excess overhead that can have a negative impact on a company's cash flow. Outdated, paper-based processes lead to the write-off of bad debts and set obstacles to various growth goals [17]. The problem statement is to predict the next payment date, in order to avoid delayed payment by a particular customer, by analyzing the historical data available, i.e. a company's data. Keeping the disadvantages of the traditional approach in mind, one can turn to Machine Learning, Artificial Intelligence and Automation, since these optimize the problem, keep clean records and close observation of the solution, and eliminate manual effort. The data samples used in the study are highlighted in Table 1.
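The output variable described in Table 1 (delay = clearing date − due date) can be derived in a few lines of pandas. The sketch below is illustrative only: the rows are made up and the column names are simplified stand-ins for Table 1's fields, not the study's actual code.

```python
import pandas as pd

# Hypothetical invoice records; clearing_date_norm is missing for open invoices.
df = pd.DataFrame({
    "document_id": [1, 2, 3, 4],
    "due_date_norm": pd.to_datetime(
        ["2021-01-10", "2021-02-01", "2021-02-15", "2021-03-01"]),
    "clearing_date_norm": pd.to_datetime(
        ["2021-01-20", "2021-01-30", None, None]),
})

# Output variable: delay = clearing date - due date, in days.
df["delay"] = (df["clearing_date_norm"] - df["due_date_norm"]).dt.days

# Invoices with a known clearing date form the training pool;
# still-open invoices are the ones whose payment date must be predicted.
train = df[df["delay"].notna()]
to_predict = df[df["delay"].isna()]
print(train["delay"].tolist())  # [10.0, -2.0]
```

A negative delay simply means the invoice was paid before its due date.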

4 Machine Learning Workflow

Machine learning is a part of Artificial Intelligence (AI) that gives computers the ability to learn without being explicitly programmed. It focuses on the development of computer programs that change when exposed to new data [18]. This is an iterative


Fig. 2 Machine learning workflow

process which involves validation of the methods used and evaluation of consistent performance. The raw data is divided into three sets, i.e. training, validation and testing sets. The models are first trained using the training data, and then the validity of the model is tested. The data is also split into various subsets and tested across all the conditions and filters that simulate the production requirements [19–23]. The model is further tuned by hyper-parameter tuning. After passing a series of tests, a prototype model is tested with live data and eventually integrated into production. A general workflow of machine learning is illustrated in Fig. 2. The detailed process is as follows.

4.1 Planning and Implementation

The raw data is pulled from the organization's database, ERPs and many other sources, after which pre-processing is performed on the extracted data to remove null columns and missing values using languages like R, Python, etc. Once the data is pre-processed, it is transformed, and the features which can improve the performance of the machine learning algorithms are extracted using domain knowledge. This process is known as Feature Engineering. The data is then split into precisely three datasets, i.e. a train set, a validation set and a test set, so that Exploratory Data Analysis can be done on the data to gather insights and inferences; the best features can then be extracted and, by means of Feature Selection, the modeling error can be minimized, resulting in good accuracy and performance. Machine learning regression techniques like Random Forest Regressor, Linear Regressor, Gradient Boosting techniques, etc. are used to predict the next payment date. The train dataset contains the output variable, i.e. the delay column, so that the machine can learn about the data and the possible answer, while the test dataset does not contain the output variable, so that the machine can now predict the answer


Fig. 3 Proposed methodology for payment prediction using LGBM machine learning approach

to the problem statement. With various ML algorithms, the data can be trained, and it can be seen which models give good results. This also determines how good the fit is: overfitting and underfitting of the model should be avoided, which can be checked by comparing the results on the train and test datasets. For the evaluation part, metrics like MAE, MSE (RMSE), R2, etc. are available. Many of the models are compared by checking the RMSE value, since the MAE does not account for the outliers present, whereas the MSE takes the outliers into consideration; the golden rule is that the lower the RMSE value, the better the model accuracy (best performance), and vice versa. Figure 3 shows the overall proposed working model.
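The train/test loop and the MAE versus RMSE comparison described above can be sketched as follows. This is an illustration on synthetic data, not the study's code: the features, target and model settings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # stand-in invoice features
y = X @ np.array([3.0, -2.0, 1.0, 0.0, 0.5]) + rng.normal(scale=0.5, size=500)

# 80-20 split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

scores = {}
for name, model in [("DT", DecisionTreeRegressor(random_state=0)),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("GB", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # RMSE squares the residuals first, so it penalizes outliers more than MAE;
    # the lower the RMSE, the better the model.
    scores[name] = (mean_absolute_error(y_te, pred),
                    mean_squared_error(y_te, pred) ** 0.5)

for name, (mae, rmse) in scores.items():
    print(f"{name}: MAE={mae:.2f} RMSE={rmse:.2f}")
```

For any model, RMSE is always at least as large as MAE; the gap widens when a few predictions are badly off, which is exactly why the text prefers RMSE for ranking models.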

4.2 Modeling and Result Analysis

The figures below show the model comparison analysis, which inferred that the Gradient Boosting algorithm is superior to the other models. Thus, Light Gradient Boosting Machine (LGBM) and XGBoost are used, as these are the best models for obtaining optimal evaluation metrics. As can be clearly observed in Fig. 4, among machine learning algorithms using regression techniques, Gradient Boosting outperforms numerous others. A generically powerful and flexible tool like gradient boosting becomes the model of choice in most tabular competitions, at least as a starting point. Among the high-profile gradient boosting libraries (LGB, XGBoost and CatBoost), LGB seems to be both the fastest and most flexible.

Fig. 4 Comparison of performance of various machine learning algorithms

Fig. 5 Step-by-step leaf-wise splitting of the tree

In particular, LGB uses highly optimized histogram-based binning of feature values as a means of speeding up the tree-splitting process, as highlighted in Fig. 5. Light GBM grows trees vertically, which means that Light GBM grows trees leaf-wise. To grow, it chooses the leaf with the maximum delta loss. When growing the same leaf, the leaf-wise algorithm can reduce more loss than the standard level-wise algorithm. However, it is not advisable to use LGBM on small datasets: Light GBM is extremely sensitive and can easily overfit small data. There is no limit to the number of rows it can handle. All ensemble techniques were used for the evaluation analysis of the different machine learning algorithms, which included Decision Tree, Random Forest and boosting techniques (XGB and LGBM). Out of all these techniques, the best is the one which gives optimized results and accuracy, judged on metrics like the Mean Square Error (MSE). The data split is 80–20 (80% for the train set and 20% for the test set). Implementing a machine learning algorithm yields a deeper and more effective understanding of how the algorithm works. This information can also help


you to delve deeper into the mathematical meaning of the algorithm, by treating vectors and matrices as first-class objects and understanding how a computer transforms these structures. There are many small decisions required when using a machine learning algorithm, and these decisions are often lost in the textbook definition of the algorithm. Learning the parameters of these methods can quickly lead to an intermediate or advanced level of understanding of a given method, as few take the time to implement other complex algorithms as a learning exercise. The Mean Square Error (MSE) or Mean Square Deviation (MSD) of an estimator measures the average of the squares of the errors, that is, the average squared difference between the estimated values and the actual values. The MSE is a risk function, corresponding to the expected value of the squared error loss. The MSE is a measure of estimator quality: it is always non-negative, and values closer to zero are better. RMSE is the most generally used metric for regression tasks and is the root of the averaged squared difference between the target value and the value predicted by the model. It is preferred in some cases because it penalizes large errors:

RMSE = [ (1/N) · Σ_{i=1}^{N} (z_fi − z_oi)^2 ]^{1/2}    (1)

where
• Σ = summation ("add up")
• (z_fi − z_oi)^2 = difference between predicted and actual values, squared
• N = sample size.

Overall, Fig. 6 illustrates the accuracy rates obtained for the training and testing datasets using the different techniques. The variation arises because the test dataset is kept completely hidden from the model being trained. Substantially, after obtaining the modeling results, it is clearly observed from Fig. 7 that the RMSE value is lowest for the Light GBM technique compared to the other models, and it has proven to attain good results for many more regression problems like this one. Light GBM is a fast gradient boosting framework that utilizes tree-based learning algorithms. It has been designed to be distributed and efficient, with the following benefits:
• Faster training speed and higher efficiency.
• Lower memory usage.
• Better accuracy.
• Support for parallel and GPU learning.
• Capable of handling large-scale data.

It has the highest AUC-ROC, also known as a measure of predictive power; a small graph based on the research demonstrates its efficient results among all the Gradient Boosting techniques.
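Equation (1) can be implemented directly. The sketch below uses illustrative numbers (the z_f/z_o names follow the equation's notation, not the study's code):

```python
import numpy as np

def rmse(z_f, z_o):
    """Root mean square error per Eq. (1):
    RMSE = [ (1/N) * sum_i (z_fi - z_oi)^2 ]^(1/2)"""
    z_f = np.asarray(z_f, dtype=float)  # predicted values
    z_o = np.asarray(z_o, dtype=float)  # observed (actual) values
    return float(np.sqrt(np.mean((z_f - z_o) ** 2)))

predicted = [12.0, 0.0, 4.0, 7.0]  # z_f (illustrative)
actual    = [10.0, 1.0, 4.0, 3.0]  # z_o (illustrative)

print(rmse(predicted, actual))  # sqrt(21/4) ≈ 2.2913
```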


Fig. 6 Training versus testing percentage-wise accuracy rates for DT, RF, KNN, NB, MLP, RBF, XGBoost and LGBM

Fig. 7 Depiction of observed RMSE values w.r.t. various machine learning algorithms: DT 24.13, RF 23.98, KNN 21.67, NB 20.88, MLP 19.55, RBF 18.34, XGBoost 17.56, LGBM 16.24


5 Conclusion and Future Scope

The collections task of organizations is in dire need of new tools that can improve the efficiency of the process and help the finance function. Technology stands as a pillar of the transformation of various industries, bringing improvements to traditional systems. Utilizing the power of technology can play a key role in adopting strategies that prove effective in improving the collection process. Ultimately, it helps businesses collect money faster without hassles and helps maintain customer interest. This system also helps maintain credit with other companies, and it gains the trust and loyalty of the company without resulting disputes. This encourages companies to do future business, and the relationships can be well maintained without roadblocks. The solution also helps us Understand, Simplify, Automate and eliminate manual efforts. The solution can do the following: save many millions with continuous improvement, reduce Days Sales Outstanding (DSO) in less than one year, and provide enhancements in many ways: Risk Identification, Collection Worklist Prioritization, Better Collection Strategies, and Optimized Order and Credit Management.

References

1. Nanda S (2018) Proactive collections management: using artificial intelligence to predict invoice payment dates. Credit Res Found 1Q Credit Finan Manag Rev
2. Mallick PK, Mishra S, Chae GS (2020) Digital media news categorization using Bernoulli document model for web content convergence. Pers Ubiquitous Comput 1–16
3. Mishra S, Tripathy HK, Mallick P, Bhoi AK, Barsocchi P (2020) EAGA-MLP—an enhanced and adaptive hybrid classification model for diabetes diagnosis. Sensors 20:4036
4. Mishra S, Mallick PK, Jena L, Chae G-S (2020) Optimization of skewed data using sampling-based preprocessing approach. Front Public Heal 8:274
5. Zhu J, Zou H, Rosset S, Hastie T (2009) Multi-class AdaBoost. Statist Interface 2:349–360
6. Saha AK, Hasan SM (2014) Efficient receivables management: a case study of Siemens Bangladesh Limited. ResearchGate
7. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2021) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing. Springer, Singapore, pp 485–494
8. Abe N, Melville P, Pendus C, Reddy CK, Jensen DL, Thomas VP, Bennett JJ, Anderson GF, Cooley BR, Kowalczyk M et al (2010) Optimizing debt collections using constrained reinforcement learning. In: Proceedings of the 16th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 75–84
9. Baesens B, Gestel TV, Stepanova M, Van den Poel D, Vanthienen J (2005) Neural network survival analysis for personal loan data. J Oper Res Soc 56(9):1089–1098
10. Mishra S, Koner D, Jena L, Ranjan P (2021) Leaves shape categorization using convolution neural network model. In: Intelligent and cloud computing. Springer, Singapore, pp 375–383
11. Jena L, Mishra S, Nayak S, Ranjan P, Mishra MK (2021) Variable optimization in cervical cancer data using particle swarm optimization. In: Advances in electronics, communication and computing. Springer, Singapore, pp 147–153


12. Bailey DR, Butler B, Smith T, Swift T, Williamson J, Scherer WT (1999) Providian Financial Corporation: collections strategy. In: Systems engineering capstone conference, University of Virginia
13. Breiman L (2001) Random forests. Mach Learn 45(1):5–32. https://doi.org/10.1023/A:1010933404324
14. Tutica L, Vineel KSK, Mishra S, Mishra MK, Suman S (2021) Invoice deduction classification using LGBM prediction model. In: Advances in electronics, communication and computing. Springer, Singapore, pp 127–137
15. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https://doi.org/10.1177/0020720921989015
16. Cao R, Vilar JM, Devia A (2009) Modelling consumer credit risk via survival analysis. SORT: Stat Oper Res Trans 33(1):3–30
17. Cheong MLF, Wen SHI (2018) Customer level predictive modeling for accounts receivable to reduce intervention actions
18. Ray C, Tripathy HK, Mishra S (2019) Assessment of autistic disorder using machine learning approach. In: Proceedings of the international conference on intelligent computing and communication, Hyderabad, India, 9–11 Jan 2019, pp 209–219
19. Chaudhury P, Mishra S, Tripathy HK, Kishore B (2016) Enhancing the capabilities of student result prediction system. In: Proceedings of the 2nd international conference on information and communication technology for competitive strategies, Udaipur, India, vol 88, 4–5 Mar 2016, pp 1–6
20. Roy SN, Mishra S, Yusof SM (2021) Emergence of drug discovery in machine learning. Tech Adv Mach Learn Healthcare 119
21. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, pp 1–3
22. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security. IGI Global, pp 84–104
23. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in Internet of things. In: Emerging trends and applications in cognitive computing. IGI Global, pp 224–257

A Regression Approach Towards Climate Forecasting Analysis in India Yashi Mishra, Sushruta Mishra, and Pradeep Kumar Mallick

Abstract Data analysis has numerous aspects and approaches, encompassing different techniques under a variety of names in contrasting domains, including business, science, and society. Predictive analysis is used to predict trends and behaviour patterns. The main objective of a predictive model is to find the likely solution of a problem based on its past behaviour, and how that past is going to affect the future. Weather prediction analysis is an important application of predictive data analytics. It is important to understand what drives the functioning of the system and to what extent it is predictable. The linear regression curve is used to predict values of Y given values of X. For any given temperature X, we go straight up to the line, then move horizontally to the left to find the value of Y. The predicted temperature is termed the predicted value of Y, denoted Y'. Regression equations are used to form predictions. This regression model uses statistical least-squares equations to produce the output of our working model. The values entered are independent variables, which are cast into mathematical equations to predict average values from the variables. The dataset then needs to be embedded with other values. Thus, we pick the temperature and humidity columns from dataset-2 and provide them to the trained statistical regression model to get values of PM2.5. In this way, a final dataset is created which has all features together, including Wind, Humidity, and PM2.5. Another statistical regression model is then trained on this final dataset with temperature as the target variable. As before, we again plot the heat maps to visualize the correlation of features and target variables in order to filter out features.
The paper aims to contrast why weather prediction using ML should be preferred over traditional weather forecasting. It uses linear regression to

Y. Mishra (B) · S. Mishra · P. K. Mallick School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India
S. Mishra e-mail: [email protected]
P. K. Mallick e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_37


predict values of weather and how they can change over the years, and assesses the respective predictive model. The prediction model involves dataset analysis and applies diverse techniques to it.

Keywords Regression · Weather prediction · Data analysis · Prediction · Machine Learning (ML)

1 Introduction

Weather prediction is done by collecting a huge amount of data about the current state of the atmosphere, particularly temperature, humidity, and wind. A clear understanding of atmospheric processes, with reference to weather forecasting, determines how the atmosphere will evolve in the future. Forecasts are vital to most aspects of day-to-day life, including aviation, boating, other transportation, tourism, sports, etc. Pilots need to know the weather to plan their flights, sailors need to know what the weather is going to be to plan their activities, and farmers need to understand the weather, which helps them plan plantation [1]. Data and knowledge are increasing quickly; because of this rate of growth, information will be available to us in the near future that still goes unpredicted. Data is generated by many users, businesses, and industries as a whole, and to gear up from this we really need to understand its basics as a whole. Different decisions can then be made for our target market [2]. Social media is another example of the increased growth of data, and of how an organization builds changes that are supportive. Analysis that can facilitate making snap decisions is a good start. Businesses that started embracing data science early targeted it with reliable solutions. The choice of technology was geared towards easier implementation, and the same went for statistics, for an easier development process [3]. Earlier, such use cases were more descriptive than predictive. Python and R are the predominant technologies for data science processing chores. They have huge and ever-evolving libraries and integrate with big data platforms as well as visualization products.
Thus, the basic aim lies in contrasting weather forecasting using numerous past data observations with weather prediction using regression. The authors contrast the two using the mean squared error function, which indicates why we must switch over to regression-based prediction rather than the foretelling of weather as done in the past, because of the magnitude of error that was experienced.


2 Literature Review

Keeping an eye on different surveys and conference papers related to weather forecasting, Gheisari [4] performed a study on big data and deep learning and further investigated its multiple research directions. Moreover, they analysed different architectures that may be suitable for big data processing, its challenges, and future scope related to imaging analysis. Saima [5] surveyed assorted weather forecasting models, including statistical, artificial intelligence, and hybrid models using image processing and neural networks. The objective of the survey was to look at how machine learning can improve weather parameter estimation. Nagaraja [6] evaluated various prediction models for wind energy and aimed to identify attested and the most economical models to survey the trend of the wind. Liu [7] primarily focussed on creating a model for wind speed forecasting involving deep learning-based techniques, which processed huge amounts of data to make constructive predictions. Kunjumon [8] performed a study that compared various forecasting models based on artificial neural networks, support vector machines, and decision trees. Naveen and Mohan [9] detailed a variety of application domains majorly focussed on weather forecasting and also investigated some prediction models considering available meteorological information and machine learning techniques. They also discussed different challenges in weather forecasting. Reddy and Babu [10] addressed how improper prediction of natural calamities like earthquakes, storms, cyclones, etc. has cost billions of people's lives. They also examined several big data weather prediction models, including MapReduce models and their limitations, specifically for rainfall forecasting.
Tran-Anh [11] suggested an improved rainfall prediction model based on the Discrete Wavelet Transform and two feed-forward neural networks (Artificial Neural Network and Seasonal Artificial Neural Network). The authors also investigated and contrasted different strategies for monthly rainfall prediction. Tarade and Katti [12] put forward a thorough study of wind speed forecasting systems primarily based on ARIMA, ANN, and Polynomial Curve Fitting. They also came to the conclusion that the LSTM (Long Short-Term Memory) model is more effective and accurate compared to the ARIMA (Auto-Regressive Integrated Moving Average) model. Kulkarni [13] looked into the efficacy of various statistical approaches suitable for forecasting wind speed, such as Curve Fitting, ARIMA, Extrapolation with Periodic Functions, and ANN. Gupta [14] suggested that factors affecting weather include mean temperature, humidity, sea-level pressure, dew point temperature, and wind speed, which have been used for forecasting rainfall. The training dataset is used to train the classifiers with the help of the Classification and Regression Tree algorithm, K-Nearest Neighbour, the Naive Bayes approach, and a 5-10-1 Pattern Recognition Neural Network, whose accuracy is further tested on a test dataset. Hayati [15] emphasized using an ANN for one-day-ahead prediction of temperature. They used an MLP to train and test ten years of datasets. Considering accuracy as one major and important concern in prediction, the datasets were split into four seasons, and then for each season,


one network is presented. For testing performance, any two random unseen days were selected and tested. The error produced after performing the test varied between 0 and 2 MSE. Many other relevant works on weather forecasting analysis have been carried out by different researchers [16–19].

3 Technical Overview

3.1 Data Preparation

The dataset worked upon was obtained through a 1952 study in the journal IEEE Transactions on how various parameters of pollution frequency can help classify climate variations, and also how weather in the upcoming decade can change. By performing a classification on this data, it is hoped to prove that such tests are indeed a well-suited way to predict weather variations. A proprietary clustering method was used to identify the same [20]. After a thorough analysis of the data, the following conclusions have come up.

3.2 Linear Regression

Linear regression uses a group of premises that primarily concern linear relationships between weather variables [21–24], in association with numerous techniques used to predict outcomes for predictor values carrying some uncertainty. Using scikit-learn, we separate our dataset into testing and training sets. The train_test_split() function from the sklearn.model_selection module is used. The data is split into an 80% training and a 20% testing dataset, assigning a random_state to ensure the same random selection of data every time. This random_state parameter is extremely helpful for reproducing the weather forecasting results across the various dataset observations.
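The 80/20 split with a fixed random_state can be sketched as follows. The toy columns are assumptions for illustration, not the study's dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy weather frame; 'meantemp' stands in for the outcome variable.
weather = pd.DataFrame({
    "humidity": [60, 72, 55, 80, 65, 70, 58, 75, 62, 68],
    "wind":     [10, 4, 12, 3, 8, 6, 11, 5, 9, 7],
    "meantemp": [31, 27, 33, 25, 30, 28, 32, 26, 30, 29],
})
X, y = weather[["humidity", "wind"]], weather["meantemp"]

# 80% training / 20% testing; random_state makes the random choice reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=12)
print(len(X_train), len(X_test))  # 8 2
```

Re-running with the same random_state always yields the same partition, which is what makes results comparable across experiments.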

3.3 Visualization Relationships

Visualizing relationships refers to checking and verifying climatic patterns. These patterns are plotted in a graph that confirms the linear relationship of the predictors, using matplotlib's pyplot module. For this plot, we create a grid of plots based on the dependent and predictor variables. The grid graph is plotted using an important pandas plotting function, scatter_matrix().
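A grid of pairwise scatter plots like the one described can be produced with pandas' scatter_matrix; the column names below are hypothetical stand-ins for the paper's predictors and target, and the Agg backend is set so the sketch runs headlessly.

```python
# Sketch of the scatter-plot grid; hypothetical column names.
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

df = pd.DataFrame(np.random.default_rng(1).normal(size=(100, 3)),
                  columns=["humidity", "pressure", "meantemp"])

axes = scatter_matrix(df, figsize=(6, 6))  # one panel per pair of variables
print(axes.shape)  # (3, 3)
```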

A Regression Approach Towards Climate Forecasting Analysis …


3.4 Metrics

We now check the model's validity in the provided context. The score() function of the linear regression model reports that the model explains about 90% of the variance observed in the outcome variable, mean temperature. Moreover, the sklearn.metrics module's mean_absolute_error() shows that the predicted value is off by about 4 degrees Celsius on average, and sometimes by about 3 degrees Celsius.
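The two checks named above, the R² returned by score() and the mean absolute error, can be sketched on stand-in data as follows (the actual weather frame is not reproduced here):

```python
# Stand-in check of the two metrics: score() reports R^2, the fraction of
# variance in the target explained by the fit, and mean_absolute_error()
# reports the average absolute deviation in target units.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                           # fraction of variance explained
mae = mean_absolute_error(y, model.predict(X))   # average absolute error
print(r2 > 0.9, mae < 1.0)  # True True
```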

4 Proposed Methodology

Weather prediction is vital to most aspects of day-to-day life, including aviation, boating, other transportation, tourism, and sports. Weather forecasts have always concerned individuals planning different activities, so that they do not find themselves in situations for which they were unprepared: pilots need to know the weather to plan their flights, sailors need to know what the weather will be to plan their activities, and farmers need to understand the weather to plan their plantation, as shown in Fig. 1. Figure 1 depicts the basic steps involved in predicting the weather of India using regression. The process takes as input a collection of the past 50 years of weather reports. These weather report datasets are pre-processed and analysed using regression analysis; they are then split and trained, and the trained weather datasets are tested over custom inputs, producing outputs under two criteria. The first explains how easy and how little error-prone weather prediction using regression is. The second predicts how the weather will actually turn after a decade: how summers will be hotter and winters will be colder.

5 Results and Discussions

Linear regression is explored and later portrayed through a graph plot. The dark blue line in the graph is the curve of temperature variation in India. The sinusoidal curve depicts how the summers have been getting hotter and hotter, and likewise the winters. The vertical yellow lines on the sinusoidal graph point out the major prediction errors relative to the authors' expectation, i.e. the different climatic conditions driven by factors other than weather alone, such as humidity, dirt, dust, pollution, and other harmful gases, which together have made the climate variations harsh. If the yellow variation curve would


Fig. 1 Regression model depicting a system for weather prediction

have been of smaller amplitude, the errors between weather forecasting using observations and weather prediction using regression would not have differed much. Conversely, where the yellow points lie far from the line, the curve depicts an immense weather forecast mistake. The curve is employed for predictions, and the values in the given graph are the actual dataset fed to the model. The yellow curve and the blue curve compare how much error is made while forecasting. The prediction error for a point is obtained by subtraction, i.e. the observed value minus the predicted temperature outcome: the plot shows the predicted values (Y') and the prediction errors (Y − Y') for reference. The second point has a Y of 1.89 and a predicted Y (called Y') of 1.562; therefore, its prediction error is 0.328. Figure 2 illustrates how the mean squared error varies over the past 50 years of temperature and climate change. The yellow deviation in the graph indicates that the observation-based forecasting report suffers a huge error mark relative to the regression-based prediction report. The graph is plotted with mean squared error on the x-axis and the mean relative error of past years on the y-axis. Figure 3 illustrates how the weather actually changed over the past 50 years on an annual basis, and it is clearly visible from the graph that every year since the 1970s the annual temperature of India has been rising extensively. The


Fig. 2 Weather changes due to current pollution and global warming scenario

Fig. 3 Annual weather change scatter graph

graph's x-axis shows the number of years and its y-axis the annual weather report for each of the past 50 years.
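The prediction-error arithmetic quoted above can be checked directly; only the pair Y = 1.89, Y' = 1.562 comes from the text, and the other value pairs below are made-up illustrations.

```python
# Prediction error for each point is the observed value minus the predicted
# one (Y - Y'); the second pair reproduces the example from the text.
observed = [2.10, 1.89, 1.45]    # Y values (first and third are hypothetical)
predicted = [2.05, 1.562, 1.50]  # Y' values (first and third are hypothetical)

errors = [round(y - y_hat, 3) for y, y_hat in zip(observed, predicted)]
print(errors[1])  # 0.328
```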


6 Conclusion

All the machine learning models, linear regression, multiple linear regression, polynomial regression, logistic regression, random forest regression, and artificial neural networks, were beaten by expert climate forecasting tools, even though the errors in these implementations were reduced considerably. This demonstrates that over a longer period of time these models may turn out to be really effective, eventually beating the skilled experts. Linear regression demonstrated real skill; compared on mean errors, it is one of the most variable models because it is unsteady under different deviations. Thus, one approach is to improve the model by gathering more information suited to a sensible regression. However, there was a strong bias in the demonstrated choice of model, whose predictions cannot be improved simply by adding more datasets. This tendency may be expected when the climate estimates are built solely on the climate of past years, which could be too little to capture the slight changes in climate that a sensible regression needs. On the other hand, since the figures were based on the climatic conditions of the past fifty years, the bias of the regression model is likely to be minimized. In any case, this would need considerably more computation time along with training of the weight vector, so it is deferred to future work. Weather prediction using a regression formula is, by all accounts, the most fitting strategy for estimating climate exactly, as climate forecasting carries the major task of foreseeing the precise outcomes used in various current systems such as power facilities, airports, and tourism centres, i.e. weather forecasting using observations.

The difficulty here lies in the intricate parameters of nature: every single additional parameter acts as an input in producing the best real examples, making our model able to use similar examples for the production of forecasts.

References
1. Chavan G, Momin B (2017) An integrated approach for weather forecasting over internet of things: a brief review. In: 2017 international conference on I-SMAC (IoT in social, mobile, analytics and cloud) (I-SMAC). IEEE, pp 83–88
2. Mishra S, Tripathy HK, Mallick PK, Bhoi AK, Barsocchi P (2020) EAGA-MLP—an enhanced and adaptive hybrid classification model for diabetes diagnosis. Sensors 20(14):4036
3. Mishra S, Mallick PK, Tripathy HK, Bhoi AK, González-Briones A (2020) Performance evaluation of a proposed machine learning model for chronic disease datasets using an integrated attribute evaluator and an improved decision tree classifier. Appl Sci 10(22):8137
4. Gheisari M, Wang G, Bhuiyan MZA (2017) A survey on deep learning in big data. In: 2017 IEEE international conference on computational science and engineering (CSE) and IEEE international conference on embedded and ubiquitous computing (EUC). IEEE, pp 173–180
5. Saima H, Jaafar J, Belhaouari S, Jillani TA (2011) Intelligent methods for weather forecasting: a review. In: 2011 national postgraduate conference. IEEE, pp 1–6
6. Nagaraja Y, Devaraju T, Kumar MV, Madichetty S (2016) A survey on wind energy, load and price forecasting: (forecasting methods). In: 2016 international conference on electrical, electronics, and optimization techniques (ICEEOT). IEEE, pp 783–788
7. Liu H, Chen C, Lv X, Wu X, Liu M (2019) Deterministic wind energy forecasting: a review of intelligent predictors and auxiliary methods. Energy Conv Manag 195:328–345
8. Kunjumon C, Nair SS, Suresh P, Preetha SL (2018) Survey on weather forecasting using data mining. In: 2018 conference on emerging devices and smart systems (ICEDSS). IEEE, pp 262–264
9. Naveen L, Mohan HS (2019) Atmospheric weather prediction using various machine learning techniques: a survey. In: 2019 3rd international conference on computing methodologies and communication (ICCMC). IEEE, pp 422–428
10. Reddy PC, Babu AS (2017) Survey on weather prediction using big data analytics. In: 2017 second international conference on electrical, computer and communication technologies (ICECCT). IEEE, pp 1–6
11. Tran Anh D, Duc Dang T, Pham Van S (2019) Improved rainfall prediction using combined pre-processing methods and feed-forward neural networks. J Multidisciplinary Sci J 2(1):65–83
12. Tarade RS, Katti PK (2011) A comparative analysis for wind speed prediction. In: International conference on energy, automation, and signal (ICEAS). IEEE, pp 1–6
13. Kulkarni MA, Patil S, Rama GV, Sen PN (2008) Wind speed prediction using statistical regression and neural network. J Earth Syst Sci 117(4):457–463
14. Gupta D, Ghose U (2015) A comparative study of classification algorithms for forecasting rainfall. IEEE 978-1-4673-7231-2, © IEEE Publications
15. Hayati M, Mohebi Z (2007) Temperature forecasting based on neural network approach. World Appl Sci J 2(6):613–620, ISSN 818-4952
16. Mishra S, Mallick PK, Jena L, Chae GS (2020) Optimization of skewed data using sampling-based preprocessing approach. Front Public Health 8
17. Mishra S, Dash A, Jena L (2021) Use of deep learning for disease detection and diagnosis. In: Bio-inspired neurocomputing, vol 82, pp 181–201. Springer, Singapore
18. Mallick PK, Mishra S, Chae G-S (2020) Digital media news categorization using Bernoulli document model for web content convergence. Pers Ubiquitous Comput 1–16
19. Chaudhury P, Mishra S, Tripathy HK, Kishore B (2016) Enhancing the capabilities of student result prediction system. In: Proceedings of the 2nd international conference on information and communication technology for competitive strategies, Udaipur, India, 4–5 March 2016, pp 1–6
20. Jena L, Mishra S, Nayak S, Ranjan P, Mishra MK (2021) Variable optimization in cervical cancer data using particle swarm optimization. In: Advances in electronics, communication and computing, pp 147–153. Springer, Singapore
21. Tutica L, Vineel KSK, Mishra S, Mishra MK, Suman S (2021) Invoice deduction classification using LGBM prediction model. In: Advances in electronics, communication and computing, pp 127–137. Springer, Singapore
22. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, pp 1–3
23. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security, pp 84–104. IGI Global
24. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in Internet of things. In: Emerging trends and applications in cognitive computing, pp 224–257. IGI Global

Rice Leaf Disease Classification Using Transfer Learning Khushbu Sinha, Disha Ghoshal, and Nilotpal Bhunia

Abstract Plant diseases have to be properly identified and classified in advance for an agricultural harvest protection system to function optimally. Rice is the staple food of many Asian countries and is very important in maintaining not only dietary but also socio-economic stability in many countries across the world. In this study, we focus mainly on various diseases affecting rice variants and their detection. To properly identify and treat the diseases affecting rice plants, we first gather visual data and then classify the images into four categories: Leaf Blast, Brown Spot, Hispa, and Healthy leaf. Our dataset consists of a total of 3355 images of healthy and diseased rice leaves. To solve this problem, we use a technique called Transfer Learning, employing a pre-trained model named Xception to train our model. Based on depthwise separable convolution layers, this is a convolutional neural network architecture conceptually similar to Inception; however, it outperforms Inception V3 because its model parameters are used more efficiently. Following this methodology, we obtained encouraging results, and this technique could be used for the early detection of diseases in rice varieties. With further improvement, proper implementation of this technique could be attainable in real agricultural fields.

Keywords Rice plant diseases · CNN · Transfer learning · Xception · Image recognition

1 Introduction

The importance of rice as a staple food, not only in Asia but all over the world, is well known. All varieties of rice grown on this earth are prone to plant diseases and fungal infestations. Disease reduces productivity and plant quality, so detecting and treating diseased rice plants at an early stage will prove beneficial. A study done on rice varieties from Bangladesh revealed that there were

K. Sinha (B) · D. Ghoshal · N. Bhunia
School of Computer Engineering, KIIT (Deemed to be University), Bhubaneswar, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_38


K. Sinha et al.

several disease variants: 2 bacterial, 2 viral, 13 fungal, 2 nematode, and 1 micronutrient-deficiency issue. The matter gets worse when we consider that rice forms the staple diet in most Asian rice-growing countries and plays a part in balancing and sustaining socio-economic standards in most developing countries. Any sudden change or mass-scale disease in rice varieties would mean massive unrest and nutritional deprivation in many nations. Farmers generally rely on naked-eye observation to detect diseased and non-diseased rice leaves. This method is not very effective because of the large variety of diseases. To ease the process of disease identification, various research works using machine learning algorithms such as SVM [1, 2] and artificial neural networks [3] have been done. However, the success of these mechanisms is hindered by differences in image quality or background, small datasets, and inadequate research work. One way to tackle this problem is transfer learning, wherein a larger dataset is used to pre-train the model. The pre-trained network can be used as a fixed-feature extractor by removing the last fully connected layer, or by making minor changes to the last couple of layers so that they work specifically with the dataset at hand. Detection of the major diseases found in rice plants, such as Tungro, Bacterial Leaf Blight, Blast, Brown Spot, and Sheath Blight, is difficult using just the naked eye. Much work has been done in the past on the detection of rice leaf disease, fungus, and pests using convolutional neural networks (CNN). Diseases and infections have been classified using CNN architectures such as VGG16, CaffeNet, and GoogleNet. In this paper, we aim to find a method that helps in detecting diseased and healthy rice leaf plants. Since we have a limited dataset, we use a technique called transfer learning for the classification. In transfer learning, pre-trained models are used for training, and here we use Xception as our pre-trained model.

2 Related Work

For the classification and detection of plant diseases, much work has already been done, especially on identifying rice leaf diseases [4]. In this age of science and technology, various new technologies have been developed for the benefit of the agricultural sector. One of them is the computer-aided diagnosis (CAD) system, developed for the recognition of plant disease and very helpful for the agricultural sciences [5]. In [6], a CNN-based framework is used for the identification and classification of three different healthy and diseased rice image classes. Another method for diagnosing plant disease is based on a CNN using the InceptionV3 architecture; it was achieved by selecting learned features through perturbation [7]. To remove around 75% of the parameters without affecting loss or accuracy, mixed layers can be used to generate deep features. Traditional feature extractors, including LBPH and HaarWT, for identifying rice blast diseases are


not as effective as the CNN-based model [8]; it was found that the best result came from LeNet. In [9], better accuracy in detecting plant leaf diseases is obtained using the GoogleNet architecture. A comparative study was done in [10] of different pooling strategies such as max-pooling, stochastic pooling, and mean-pooling, together with the gradient descent algorithm; these were fed to a CNN architecture trained for the detection of diseased rice leaves. For extracting features of fruit crop diseases, VGG16 and CaffeAlexNet were used in [11], and the most discriminative features were classified using a multiclass SVM. For the classification of plant leaf disease, texture features were calculated using the color co-occurrence matrix and Naive Bayes in [12]. In [13], various CNN-based architectures such as AlexNet [14], LeNet [15], and GoogleNet [16] were used for the identification of 10 categories of tomato leaf diseases. Fine-tuning of InceptionV3 and VGG16, state-of-the-art CNN architectures, within a two-stage model was carried out for pest and rice disease identification [17]. Lu et al. [10] performed the classification of 10 categories of rice diseases using a deep CNN: a total of 500 images were used, the model contained three convolutional layers, three stochastic pooling layers, and a softmax layer at the end, and 95.48% classification accuracy was reported. In [18], de-noising images with a deep CNN model was examined, and an SVM was used for the classification of rice disease through color and shape features; seven categories of rice disease were classified from a total of 200 images, achieving 87.50% accuracy. Getting an accurate result with a CNN needs a huge amount of data, and collecting diseased rice leaves is a challenging task. This problem can be solved using transfer learning, which uses weights already trained on large datasets. The current small dataset's weights can be updated from those pre-trained weights. Model performance is affected by the image-capturing conditions and background, so photographing rice or plant leaves becomes a challenging task; some previous work is restricted to a plain background [19–22]. For recognizing plant leaf disease, large-scale network parameters are tuned depending on the CNN architecture, and because of these large-scale network parameters, some architectures are not adequate for deployment on devices with memory restrictions.

3 Proposed Method

For the classification, we have used transfer learning. Its main benefit is that the model does not have to be trained from scratch; instead, it utilizes patterns already learned from solving different problems similar to the one at hand. In this method, we utilized pre-trained models for image classification. The benefit of a pre-trained model is that it has already been trained on a more extensive dataset to solve similar problems. The pre-trained model we are using is Xception.


3.1 Dataset

The dataset for rice leaf disease detection is obtained from Kaggle [23]. The author of the dataset is Huy Minh Do, and its title is the Rice Diseases Image Dataset. The dataset comprises images of healthy rice plants and of plants affected by three diseases: (1) Brown Spot, (2) Hispa, and (3) Leaf Blast. The total count of healthy and diseased leaf images is 3355, distributed over 4 classes: 3 diseased classes and 1 healthy class. The complete distribution of images is given in Table 1, and sample images from the dataset are shown in Fig. 1. The images are of different sizes, and since we are using transfer learning with weights from the ImageNet dataset, the images are resized to 299 × 299.

(1) Brown Spot—Caused by Cochliobolus miyabeanus, this rice disease attacks the plants at emergence and causes weakness in plants, seedling blight, and sparse stands.
(2) Hispa—Here an invasive pest, a leaf beetle called the 'rice hispa', causes significant harm to the rice plants. Under attack, the rice leaves turn white and die. The disease spreads into rice fields from nurseries and can be detected by careful examination or by the sense of touch.
(3) Leaf Blast—Caused by Magnaporthe oryzae, this fungus affects not only rice but also other important cereal crops, namely barley, rye, and wheat. The lesions are spindle-shaped or elliptical, and the leaves form circular spots of grayish-green color with a dark green border. It also affects reproduction by causing the victim plant to produce a diminished quantity of seeds. For analysis, 779 images of leaf blast have been collected.

Table 1 Dataset distribution

Class name of rice disease    Total count of images
Brown spot                    523
Hispa                         565
Leaf blast                    779
Healthy                       1448

Fig. 1 Sample images from dataset
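The resize step mentioned in this section might look as follows; Pillow is assumed as the image library (the paper does not name one), and the blank array stands in for a photograph.

```python
# Sketch of resizing an arbitrary image to Xception's 299x299 input size.
import numpy as np
from PIL import Image

img = Image.fromarray(np.zeros((480, 640, 3), dtype="uint8"))  # 640x480 stand-in
resized = img.resize((299, 299))  # match the ImageNet-weight input size
print(resized.size)  # (299, 299)
```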

3.2 Model Architecture

We used Xception as our pre-trained model; it is based on depthwise separable convolution layers. The specification of the model is shown in Fig. 2. The Xception architecture consists of 36 convolutional layers forming the feature-extraction base of the network. These 36 convolutional layers are structured into 14 modules, all of which have linear residual connections around them except for the first and last modules. In brief, the Xception architecture is a linear stack of depthwise separable convolution layers with residual connections [24–26]. The collected dataset is split into train and test sets. Keeping all the network parameters the same, only the last layer is fine-tuned: on top of the transfer-learning architecture, we added a flatten layer and a dense layer with 4 neurons, because we are classifying between 4 categories. The last layer performs the classification with softmax as the activation function, as this is a multiclass problem, and categorical cross-entropy is used as the loss function. We trained the model for 50 epochs with a batch size of 32. Further, we checked the accuracy of the model by testing it on a few images of diseased rice leaves.
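The head described above, Xception as a frozen base plus a flatten layer and a 4-neuron softmax dense layer, can be sketched in Keras as follows. weights=None is used here only so the sketch builds without downloading anything; the setup described in the text would pass weights="imagenet".

```python
# Sketch of the transfer-learning model: frozen Xception base, then
# Flatten and a 4-way softmax head (3 disease classes + healthy).
from tensorflow.keras.applications import Xception
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Sequential

base = Xception(weights=None, include_top=False,  # weights="imagenet" in practice
                input_shape=(299, 299, 3))
base.trainable = False                            # keep pre-trained parameters fixed

model = Sequential([
    base,
    Flatten(),
    Dense(4, activation="softmax"),               # one neuron per category
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 4)
```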

Fig. 2 Xception model architecture specifications


Fig. 3 Process flow diagram

For the detection of rice leaf disease, we used a dataset containing leaf images of three diseases, Brown Spot, Hispa, and Leaf Blast, along with healthy leaf images. First, the dataset is preprocessed: the images and their associated labels are converted into arrays, and sklearn's LabelBinarizer is applied to the labels. Image classification needs a good number of images, so to increase their count, image augmentations such as width and height shifts, zooming, horizontal flipping, and rotation were performed [27–30]. Using sklearn's train_test_split, the dataset is split into train and test sets with a test size of 0.2 and a random state of 42. Then the model is built using the pre-trained Xception model, whose last layer is fine-tuned by adding a flatten and a dense layer. The model is compiled using categorical cross-entropy as the loss and Adam as the optimizer, and then trained with fit_generator for 50 epochs with a batch size of 32. At last, prediction is performed. To check a prediction on an unseen image, the same preprocessing (conversion to an array and normalization) is applied before predicting; if the predicted class matches the original, the prediction is correct, otherwise it is not. For a better understanding, the model flow diagram is shown in Fig. 3.
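The label-encoding and 80/20 splitting steps above can be sketched with scikit-learn as follows; the zero-valued array stands in for the real pixel data.

```python
# Sketch of label binarization and the 80/20 split with random_state=42.
import numpy as np
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split

classes = ["BrownSpot", "Hispa", "LeafBlast", "Healthy"]
labels = np.array(classes * 10)                        # 40 stand-in labels
images = np.zeros((40, 299, 299, 3), dtype="float32")  # stand-in image batch

onehot = LabelBinarizer().fit_transform(labels)        # one column per class
X_train, X_test, y_train, y_test = train_test_split(
    images, onehot, test_size=0.2, random_state=42)
print(X_train.shape[0], X_test.shape[0], onehot.shape[1])  # 32 8 4
```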

4 Experimental Result

Here, we have applied transfer learning to the classification of rice leaf diseases, using Xception for model building. The complete dataset is


split into train and test sets, where 80% of the data was used for training and 20% for testing. The experiment was carried out for 50 epochs. After experimenting, we obtained a training accuracy of 96.79% and a test accuracy of 82.71%. Graphs of the training and validation accuracy and loss are displayed in Figs. 4 and 5. For the code implementation, a system with a good GPU was required [31, 32], so we opted for a free online platform, Kaggle, as it provides a single

Fig. 4 Training and validation accuracy

Fig. 5 Training and validation loss


15 GB NVIDIA Tesla P100 GPU that can be used for 30 h continuously. The only other requirement is good Internet connectivity. The latest Google Chrome browser on a Windows PC with an Intel® Core™ i5-4210U CPU was used for smooth running of the Kaggle platform.

5 Conclusion

Diseases are the main threat to plant growth, so we implemented a convolutional neural network and transfer learning approach for the identification of healthy and diseased rice plants. We used a public dataset containing 3355 images of diseased and healthy rice leaves, with three categories of diseased rice plant leaves and one class of healthy leaves. Utilizing the concept of transfer learning, we achieved a training accuracy of 96.79% and a validation accuracy of 82.71%. Due to the unavailability of standard labeled rice disease images, benchmarking the proposed model against the literature is not exact. Further improvement in the model's performance is possible with the availability of a larger dataset of rice disease images.

References
1. Gupta T (2017) Plant leaf disease analysis using image processing technique with modified SVM-CS classifier. Int J Eng Manag Technol 5:11–17
2. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https://doi.org/10.1177/0020720921989015
3. Liu L, Zhou G (2009) Extraction of the rice leaf disease image based on BP neural network. In: International conference on computational intelligence and software engineering, pp 1–3
4. Saleem MH, Potgieter J, Arif KM (2019) Plant disease detection and classification by deep learning. Plants 8(11):468
5. Hu YH, Ping XW, Xu MZ, Dan WX, He Y (2016) Detection of late blight disease on potato leaves using hyperspectral imaging technique. Spectrosc Spect Anal 36(2):515–519
6. Bhattacharya S, Mukherjee A, Phadikar S (2020) A deep learning approach for the classification of rice leaf diseases. In: Bhattacharyya S, Mitra S, Dutta P (eds) Intelligence enabled research. AISC, vol 1109, pp 61–69. Springer, Singapore. https://doi.org/10.1007/978-981-15-2021-1_8
7. Toda Y et al (2019) How convolutional neural networks diagnose plant disease. Plant Phenomics 2019:9237136
8. Liang WJ, Zhang H, Zhang GF, Cao HX (2019) Rice blast disease recognition using a deep convolutional neural network. Sci Rep 9(1):1–10
9. Jeon WS, Rhee SY (2017) Plant leaf recognition using a convolution neural network. Int J Fuzzy Logic Intell Syst 17(1):26–34
10. Lu Y, Yi S, Zeng N, Liu Y, Zhang Y (2017) Identification of rice diseases using deep convolutional neural networks. Neurocomputing 267:378–384
11. Khan MA et al (2018) CCDF: automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features. Comput Electron Agric 155:220–236
12. Kaur R, Kaur V (2018) A deterministic approach for disease prediction in plants using deep learning, vol 7
13. Tm P, Pranathi A, SaiAshritha K, Chittaragi NB, Koolagudi SG (2018) Tomato leaf disease detection using convolutional neural networks. In: International conference on contemporary computing (IC3), pp 1–5. IEEE
14. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: NIPS, pp 1097–1105
15. LeCun Y et al (1989) Backpropagation applied to handwritten zip code recognition. Neural Comput 1(4):541–551
16. Szegedy C et al (2015) Going deeper with convolutions. In: IEEE CVPR, pp 1–9
17. Rahman CR et al (2020) Identification and recognition of rice diseases and pests using convolutional neural networks. Biosyst Eng 194:112–120
18. Rajmohan R, Pajany M, Rajesh R, Raman DR, Prabu U (2018) Smart paddy crop disease identification and management using deep convolution neural network and SVM classifier. Int J Pure Appl Math 118(15):255–264
19. Mohanty SP, Hughes DP, Salathe M (2016) Using deep learning for image-based plant disease detection. Front Plant Sci 7:1419
20. Mallick PK, Mishra S, Chae GS (2020) Digital media news categorization using Bernoulli document model for web content convergence. Pers Ubiquitous Comput 1–16
21. Jena L, Kamila NK, Mishra S (2014) Privacy preserving distributed data mining with evolutionary computing. In: Proceedings of the international conference on frontiers of intelligent computing: theory and applications (FICTA) 2013, pp 259–267. Springer, Cham
22. Mishra S, Mallick PK, Jena L, Chae G-S (2020) Optimization of skewed data using sampling-based preprocessing approach. Front Public Health 8:274
23. Mishra S, Mallick PK, Tripathy HK, Bhoi AK, González-Briones A (2020) Performance evaluation of a proposed machine learning model for chronic disease datasets using an integrated attribute evaluator and an improved decision tree classifier. Appl Sci 10(22):8137
24. Patidar S, Pandey A, Shirish BA, Sriram A (2020) Rice plant disease detection and classification using deep residual learning. In: Bhattacharjee A, Borgohain SK, Soni B, Verma G, Gao X-Z (eds) MIND 2020, CCIS, vol 1240, pp 278–293. Springer, Singapore. https://doi.org/10.1007/978-981-15-6315-7-23
25. Mishra S, Tripathy HK, Mishra BK (2018) Implementation of biologically motivated optimisation approach for tumour categorisation. Int J Comput Aided Eng Technol 10:244–256
26. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2019) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing, vol 77, pp 485–494. Springer, Singapore
27. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2021) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing, pp 485–494. Springer, Singapore
28. Roy SN, Mishra S, Yusof SM (2021) Emergence of drug discovery in machine learning. Tech Adv Mach Learn Healthcare 119
29. Tutica L, Vineel KSK, Mishra S, Mishra MK, Suman S (2021) Invoice deduction classification using LGBM prediction model. In: Advances in electronics, communication and computing, pp 127–137. Springer, Singapore
30. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, pp 1–3
31. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security, pp 84–104. IGI Global
32. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in Internet of things. In: Emerging trends and applications in cognitive computing, pp 224–257. IGI Global
Patidar S, Pandey A, Shirish BA, Sriram A (2020) Rice plant disease detection and classifification using deep residual learning. In: Bhattacharjee A, Borgohain SK, Soni B, Verma G, Gao X-Z (eds) MIND 2020, CCIS, vol 1240, pp 278–293. Springer, Singapore. https://doi.org/10. 1007/978-981-15-6315-7-23 25. Mishra S, Tripathy HK, Mishra BK (2018) Implementation of biologically motivated optimisation approach for tumour categorisation. Int J Comput Aided Eng Technol 10:244–256 26. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2019) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing, vol. 77, pp 485–494. Singapore, Springer 27. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2021) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing, pp 485–494. Springer, Singapore 28. Roy SN, Mishra S, Yusof SM (2021) Emergence of drug discovery in machine learning. Tech Adv Mach Learn Healthcare 119 29. Tutica L, Vineel KSK, Mishra S, Mishra MK, Suman S (2021) Invoice deduction classification using LGBM prediction model. In: Advances in electronics, communication and computing, pp 127–137. Springer, Singapore 30. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, pp 1–3 31. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security, pp 84–104. IGI Global 32. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in Internet of things. In: Emerging trends and applications in cognitive computing, pp 224–257. IGI Global

Real-Time Sign Language Translator Khushbu Sinha, Annie Olivia Miranda, and Sushruta Mishra

Abstract The main objective of this work is to build a real-time sign language translator that converts sign language to text using TensorFlow object detection and Python. It would provide an easy way for people to communicate with others using sign language, eliminating the middle person who generally acts as a medium of translation, and would also provide an easy-to-use environment by producing text output for an input sign gesture. Here, the sign language is fed in real time through a webcam, and the input is compared against the trained model to recognize the sign, so that those who do not know sign language can understand it and obtain a text output for the same. Currently, the model was trained for Indian Sign Language (ISL), consisting of the alphabets A to Z and the digits 1 to 9 for a total of 35 signs; we obtained a loss of 0.227 on the trained model, and the model predicted the signs with an accuracy of roughly 50–80% in our case. Keywords Sign detection · OpenCV · TensorFlow object detection API · MobileNet · CNN

K. Sinha (B) · A. O. Miranda · S. Mishra
School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, India
S. Mishra e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_39

1 Introduction The total population of India is around 121 crore, and among those, nearly 2 million people have issues in smooth communication ability according to the 2011 census. For a person to express their views, the only medium is communication. People who face a hurdle in communicating usually use sign language to communicate with the rest of the community, which generally involves hand shapes, finger orientations, and movement of the hands, arms, or body to express the thoughts of the signer. For understanding sign language, other people who are associated with that person


also have to learn it. However, when it comes to learning sign language, hearing people generally do not bother to learn it unless it is required in their own family. In such cases, it becomes a problem for people who have difficulty in speaking to communicate with others. To communicate with them easily, a visual aid or an interpreter becomes a necessity, which is inconvenient and expensive at times. So, to bridge the communication gap that occurs due to a lack of understanding of sign languages by the rest of the population, and to reduce the need for a mediator, we have opted to build a real-time sign language translator that translates fingerspelling, which plays a vital role in sign language, to text using TensorFlow object detection and Python. In fingerspelling, words are spelled out character by character using hand gestures, which in turn helps in making a word-level association to a certain extent without involving much movement of different body parts. A sign language translation and recognition system that can classify fingerspelling can solve the above problem. The algorithm used was trained several times while varying the surrounding conditions and the number of images to find the accuracy that can be attained, and the results were compared with those achieved previously. Generally, sign language recognition takes either a device/glove-based approach, where the user has to wear a glove or a certain device connected to the computer through several wired connections, which is expensive to set up and maintain, or a vision-based approach, which directly captures gestures through a camera and then analyzes them and is less expensive with respect to maintenance and setup [1]. So, here we opt for a vision-based approach since it is more user-friendly.
Various research works have been done with regard to American Sign Language, but the work remains limited when it comes to Indian Sign Language, since the latter combines hand gestures that can result in different interpretations from varied angles with respect to body position. Indian signs comprise both single-hand and double-hand gestures, which can include overlapping of the hands and can result in loss of view of the whole gesture when viewed from a certain fixed position, whereas American Sign Language comprises single-handed gestures. We aim to observe the different types of research already done in this field, analyze the existing detection techniques to effectively detect the signs, and develop a cost-effective model for the same. By using an efficient model, we can reduce the loss and gain a higher percentage of accuracy while identifying the different signs. For proper identification, we tried to take the images in different lighting conditions such that the quality was not compromised, and we implemented the necessary labels for accurate detection of a given image from the testing dataset into different sign categories. Here, we have used the Python language along with OpenCV for capturing our images through a webcam and for training and testing our model in real time. The LabelImg tool was used for extracting and labeling the signs from the whole image, and transfer learning using TensorFlow object detection was used for creating the model. First, the collected images were labeled, and then the model was created. We trained the model using different numbers of images and steps. Initially, we took 13 images of each sign and


10,000 steps, but it was not able to identify the images properly, so we further tried other values. Finally, we got a reasonably good model with 20 images of each sign trained for 15,000 steps and a loss metric of 0.279, which was able to identify 25 out of 35 signs accurately, and the accuracy for sign recognition in real time ranged between 50 and 80%. Efforts were made to address depth perception. Further studies to incorporate adequate knowledge for the identification, annotation, and quantification of relevant features during learning were also done.

2 Literature Review A few major studies have been done to recognize Indian sign language by applying different algorithms. One approach includes the detection of key points in images using SIFT, matching them with the standard alphabet sign images, and classifying each to its nearest match [2]. Another approach included a neural network for training the dataset, but it only consisted of single-handed signs, and mainly the angle between the fingers was considered using a convex hull for the feature vector [3]. In another work, the vector representation of the images was used, the eigenvectors of the covariance matrix were calculated, and the Euclidean distance obtained for a new image was compared with the training dataset to classify the sign image [4]. Work on colored images of Indian sign language was also done using methods such as bag of visual words, Gaussian random, and HoG, which were then trained on a support vector machine and tested on signs given by different users [5]. Raghuveera et al. [6] translated Indian Sign Language gestures to the best possible English sentences using feature ensembling for accurate label prediction and an SVM-based classifier for recognizing the signs. Tripathi et al. [7] extracted gestures by splitting continuous signs from their dataset and isolated them using gradient-based key frame extraction; features were then extracted, and dimensionality reduction was done using an orientation histogram with Principal Component Analysis. They applied various distance classifiers such as correlation, city block distance, Manhattan distance, and Euclidean distance to obtain a comparative analysis of the results. Aditya et al. [8] extracted the hand area using skin color-based segmentation from a whole image, which is a digital image processing technique, and applied a feed-forward neural network to train the model. In Sharma et al.
[9], the authors used Trbaggboost, an ensemble-based transfer learning algorithm, for training an Indian sign language dataset that was obtained using a device- and sensor-based approach for data collection, which tends to be expensive, and compared its performance to basic machine learning algorithms such as SVM, random forest, and decision tree. The accuracy achieved through Trbaggboost was much higher compared to the other conventional algorithms. In [10], the authors created a dataset of 24 Indian sign language gestures commonly used in different sectors like hotels, courts, hospitals, etc. and used the Dynamic Time Warping algorithm, which tracks the trajectory of the hand w.r.t. the distance from the center of the frame and


analyzes the gesture through variations in the location of the hand from the face. It improves the result accuracy by finding the middle ground between the alignment of the stored database and query features. In [11], the capturing of images from the video was done through wavelet-based fusion segmentation and by using elliptical Fourier descriptors to identify the edges of the gesture accurately. Then PCA was used to form a feature vector for a particular gesture from different frames, and finally, it was trained using a backpropagation algorithm and the obtained recognition rate of gestures was around 92.34% after testing the model with various examples.

3 Technical Overview 3.1 OpenCV OpenCV was first introduced by Intel in 1999 as a research effort toward easing CPU-intensive tasks; since 2012, it has been maintained under OpenCV.org. It stands for Open Source Computer Vision Library and delivers a cross-platform framework for advanced computer vision and machine learning tasks, providing more than 2500 optimized algorithms for video and image capturing, processing, and analysis. It is natively written in C++, and its templated interface works with STL containers. It mainly helps in feature extraction, filtering, detection, conversion, transformation, classification, segmentation, tracking, and recognition of images and videos [12].

3.2 Labeling To perform the labeling task for a huge number of images, the LabelImg tool is used. It is mainly used for procuring and labeling a certain part of a whole image by drawing a bounding box around it for image classification or segmentation [13]. In our case, we needed to label the finger representing the sign in the whole image, which also contained the face and background, by extending a bounding box around the sign. We then annotated the images, and an XML file was generated for each image containing information about the width, height, xmin, ymin, xmax, ymax, etc., of the annotated image.

3.3 Transfer Learning It is a technique in machine learning where one model developed to perform a certain task can be refined further or used as it is to perform another task.


It is also used in reducing the complexity and computing time of deep learning algorithms. It also helps in combining various pre-trained models to achieve better accuracy and results. One approach to using transfer learning is the pre-trained model approach, which mainly includes the following three steps:

1. Opting for a source model: Selection of a pre-trained model according to the use case from a wide range of models that are released by organizations after testing on different types of large datasets.
2. Reusing of the model: The selected model can be used as a whole, or a certain part of it can be used in the new task according to the need.
3. Final tuning of the model: Furthermore, according to our requirements, the model can be modified, refined, or used as it is on the dataset [14].
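As a toy illustration of the freeze-and-fine-tune idea in the steps above (not the paper's TensorFlow workflow), the sketch below treats a fixed random projection as a frozen "pre-trained" backbone and trains only a new logistic-regression head on top; all names and data here are invented for the example.

```python
import numpy as np

def log_loss(p, y):
    # Mean binary cross-entropy of predicted probabilities p against labels y
    return float(-(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean())

def fine_tune_head(backbone, X, y, steps=500, lr=0.1):
    # Transfer learning, step 3: the backbone stays frozen; only the
    # weights of a new logistic-regression head are updated
    F = backbone(X)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
        grad = p - y                     # d(log-loss)/dz per sample
        w -= lr * F.T @ grad / len(X)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(4, 8))                  # stand-in "pre-trained" weights
backbone = lambda x: np.maximum(x @ W_frozen, 0.0)  # frozen ReLU feature extractor

X = rng.normal(size=(200, 4))
y = (X.sum(axis=1) > 0).astype(float)               # labels for the new task
w, b = fine_tune_head(backbone, X, y)
```

Only `w` and `b` change during training; in the paper's actual pipeline, the analogous step is fine-tuning the SSD MobileNet checkpoint on the sign dataset.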

3.4 TensorFlow Object Detection API For solving object detection problems, the TensorFlow object detection API can be used, which is a framework for creating a deep learning network. This framework already includes pre-trained models, referred to as the Model Zoo: a collection of models pre-trained on the KITTI dataset, the COCO dataset, and the Open Images Dataset. These models can be used out-of-the-box for inference on the categories present in those datasets, and they are also useful for initializing our own models when training on a novel dataset. The different architectures used in the pre-trained models are given in Table 1.

3.5 MobileNet-SSD The SSD architecture is a single convolution network. This network can predict bounding box locations by learning continuously and can classify these locations in one pass. That is why end-to-end training of SSD can be done. The SSD network contains a base architecture (MobileNet in this case), which is further followed by various convolutional layers.

Table 1 Example of various architectures used as a pre-trained model

Model name                                  | Speed  | COCO mAP | Outputs
ssd_mobilenet_v1_coco                       | Fast   | 21       | Boxes
ssd_inception_v2_coco                       | Fast   | 24       | Boxes
rfcn_resnet101_coco                         | Medium | 30       | Boxes
faster_rcnn_resnet101_coco                  | Medium | 32       | Boxes
faster_rcnn_inception_resnet_v2_atrous_coco | Slow   | 37       | Boxes


Fig. 1 SSD-based detection with MobileNet as backbone

For detecting the location of bounding boxes, SSD operates on a feature map of size Df * Df * M, and k bounding boxes are predicted for each feature map location. Every bounding box contains the following information. • 4 corner bounding box offset locations (cx, cy, w, h). • C class probabilities (c1, c2, …, cp). SSD can only predict the location; it cannot tell anything about the shape of the box, so the box shapes are set before the actual training. For example, in Fig. 1, there are 4 boxes, meaning k = 4. Loss in MobileNet-SSD. When the boxes are finally matched, the loss can be computed as: L = (1/N)(L_class + L_box). Here, N is the total number of matched boxes, L_class is the softmax loss for classification, and L_box is the smooth L1 loss representing the error of the matched boxes. Smooth L1 loss is a modification of L1 loss which is more robust to outliers. In the event that N is 0, the loss is set to 0 as well.
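The matched-box loss described above can be sketched numerically. The helper names below are our own, and this is a simplified stand-in for the actual TensorFlow implementation:

```python
import numpy as np

def smooth_l1(pred, target):
    # Smooth L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise (robust to outliers)
    diff = np.abs(np.asarray(pred) - np.asarray(target))
    return float(np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5).sum())

def softmax_loss(logits, label):
    # Softmax cross-entropy for one matched box's class scores
    z = np.asarray(logits) - np.max(logits)
    return float(np.log(np.exp(z).sum()) - z[label])

def ssd_loss(box_preds, box_targets, cls_logits, cls_labels):
    # L = (1/N) * (L_class + L_box); defined as 0 when no boxes are matched
    n = len(box_preds)
    if n == 0:
        return 0.0
    l_box = sum(smooth_l1(p, t) for p, t in zip(box_preds, box_targets))
    l_cls = sum(softmax_loss(l, y) for l, y in zip(cls_logits, cls_labels))
    return (l_cls + l_box) / n
```

Note how a perfectly regressed box contributes zero localization loss, while large offsets grow only linearly thanks to smooth L1.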


3.6 MobileNet This model is centered on depthwise separable convolutions, which provide a lightweight architecture. These factorize a standard convolution into a depthwise convolution and a 1 × 1 pointwise convolution. In MobileNets, the depthwise convolution applies a single filter to each input channel instead of combining all the channels at once, providing a filtering effect on the input channels. A standard convolution does the work of filtering and combining the inputs into a new set of outputs in a single step, whereas the depthwise separable convolution divides this into two separate layers, i.e., one layer for filtering and another for combining. This division of layers acutely reduces the computation time and model size [15].
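The size reduction can be made concrete by counting weights. For a k × k kernel with `in_ch` input and `out_ch` output channels, a standard convolution needs k·k·in_ch·out_ch parameters, while the depthwise + pointwise pair needs k·k·in_ch + in_ch·out_ch:

```python
def standard_conv_params(k, in_ch, out_ch):
    # One k*k*in_ch filter per output channel: filtering and combining at once
    return k * k * in_ch * out_ch

def depthwise_separable_params(k, in_ch, out_ch):
    # Layer 1 (filtering): one k*k filter per input channel
    # Layer 2 (combining): 1x1 pointwise convolution across channels
    return k * k * in_ch + in_ch * out_ch

# Example: a 3x3 convolution mapping 512 channels to 512 channels
std = standard_conv_params(3, 512, 512)        # 2,359,296 weights
sep = depthwise_separable_params(3, 512, 512)  # 266,752 weights (~8.8x fewer)
```

For typical layer shapes, this works out to roughly an order of magnitude fewer parameters, which is what makes MobileNet suitable as a lightweight SSD backbone.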

4 Dataset Since the availability of a standard dataset for Indian Sign Language was a problem, we created one by writing a Python script that captures images through our webcam using OpenCV and stores them at the desired location in directories named according to the class labels. The dataset consisted of 35 classes: the numbers 1 to 9 and the alphabets A to Z. A collection of these captures is shown in Fig. 2. We skipped the number '0' since it coincided with the letter 'O'. Some of the gestures were similar, such as the number '2' and alphabet 'V', and the number '7' and alphabet 'C'. So, in total, the dataset consisted of 560 images, i.e., 16 images for each of the 35 classes, in RGB format, which were later processed to a resolution of 320 * 320. Further, we also took an extra 14 images for each class for further training.

5 Proposed Methodology 5.1 Process Flow The process flow opted for proceeding with the work mainly consisted of four major steps as shown in Fig. 3. Firstly, we used our webcam to collect images of different signs that we were going to train our model on. Then those images were passed to labelImg and we drew detection/bounding boxes around our different sign language poses and labeled them. Then we divided those images along with the annotation file into train and test datasets. Once we completed that, we went for transfer learning using single-shot detector MobileNet V2 FPNLite 320 * 320 COCO 17 TPU model from the TensorFlow model zoo against our TensorFlow object detection API to be able to train our object detector. Finally, we used Python and OpenCV to detect the signs in real time using our webcam.


Fig. 2 Sign images from the dataset from 1 to Z

Fig. 3 Process flowchart

5.2 Collecting Images Using Python and OpenCV We used Jupyter notebook and Python3 to write the code for collecting the images. Initially, the necessary dependencies such as cv2 for OpenCV, uuid for naming our image files, os for helping us with the file paths, and time for taking a break in between the images while capturing in order to move our hands to collect different angles for our sign language model were imported. Then the image path was specified so that all the images we capture using OpenCV would be collected there. Then we defined the labels that we were going to collect in an array and how many images for each


label we were going to collect, as a variable that represented the different sign language poses [15–18]. In our case, we had 35 labels, i.e., the letters A to Z and the numbers 1 to 9, and we trained labels for each of these poses. We collected 16 images for each sign and later divided them into 13 training images and 3 testing images per sign. Then we created a directory for each label and captured the sign images: the cv2.VideoCapture() method initialized the webcam, cap.read() captured each image, and finally cap.release() was used to release the video capture.
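The collection step above can be sketched as follows; the directory layout, label list, and image counts mirror the paper's description, but the exact paths and file-naming scheme are assumptions made for the example.

```python
import os
import time
import uuid

# Assumed paths and labels mirroring the paper's setup (names are our own)
IMAGES_PATH = os.path.join("Tensorflow", "workspace", "images", "collectedimages")
LABELS = [str(n) for n in range(1, 10)] + [chr(c) for c in range(ord("A"), ord("Z") + 1)]
NUM_IMAGES = 16  # images collected per sign

def image_file_path(label):
    # Each capture gets a unique name inside its label's directory
    return os.path.join(IMAGES_PATH, label, f"{label}.{uuid.uuid1()}.jpg")

def collect_images():
    import cv2  # imported here so the path helpers stay usable without OpenCV
    for label in LABELS:
        os.makedirs(os.path.join(IMAGES_PATH, label), exist_ok=True)
        cap = cv2.VideoCapture(0)          # initialize the webcam
        time.sleep(3)                      # pause to get the hand into position
        for _ in range(NUM_IMAGES):
            ok, frame = cap.read()         # capture one frame
            if ok:
                cv2.imwrite(image_file_path(label), frame)
            time.sleep(2)                  # break between shots to vary the angle
        cap.release()                      # release the capture device
```

The sleep between frames gives time to shift the hand so each sign is captured from slightly different angles, as described above.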

5.3 Labeling Images for Object Detection Using the LabelImg Package Firstly, we cloned the GitHub repository for the labelImg package into our local environment and moved this folder into the same location as our captured images. Then we installed the dependencies required for using labelImg, i.e., pyqt5 and lxml, and set up the required resources. To start labeling, we ran the labelImg Python file. We set up the save directory where our annotations or labels would be saved, opened our collected images folder via Open Dir for labeling, drew the bounding boxes around the required sign in each whole image, and named those signs. Finally, we got an XML file for each image annotation, consisting of the filename, path, size and depth of the RGB image, the label, and the coordinates and details of the bounding box.
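The XML files that labelImg writes follow the Pascal VOC layout, so the fields listed above can be read back with the standard library. This parser is a minimal sketch for illustration, not part of the paper's pipeline:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    # Reads a labelImg (Pascal VOC) annotation: the image size plus one
    # (label, xmin, ymin, xmax, ymax) tuple per bounding box
    root = ET.fromstring(xml_text)
    size = root.find("size")
    width = int(size.find("width").text)
    height = int(size.find("height").text)
    boxes = []
    for obj in root.findall("object"):
        name = obj.find("name").text
        b = obj.find("bndbox")
        boxes.append((name,
                      int(b.find("xmin").text), int(b.find("ymin").text),
                      int(b.find("xmax").text), int(b.find("ymax").text)))
    return width, height, boxes
```

These parsed boxes and labels are exactly what the TF record generation step consumes later in the pipeline.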

5.4 Training Using Transfer Learning and TensorFlow Object Detection API for Sign Language For training, we manually divided the images along with their XML label files into training and testing datasets. The training data comprised 13 images, and the testing data 3 images, for each of the 35 signs. Then, using Jupyter notebook and Python 3, we set up the paths and created the label map, which the TensorFlow object detection library uses to represent our labels and which needs to be in the .pbtxt (protobuf text) format. Next, we created the TF records, which are the representation of our data that the object detection API uses for both the training and testing images. We then downloaded the pre-trained SSD MobileNet V2 FPNLite 320 * 320 COCO 17 TPU model from the TensorFlow Model Zoo, copied its model config to the training folder, and updated that config for transfer learning: after importing some dependencies, we set up the configuration path where the pipeline.config file for our custom model would exist, changed the number of classes to 35, adjusted the batch size, and changed the fine-tune checkpoint path and the fine-tune checkpoint type to detection


from classification. We also specified the paths for the label map and the TF records, and wrote the new pipeline config file to our custom directory. We then trained the model from the command line by specifying the base path of the TensorFlow 2 main model, the location of our custom model directory, the path to its pipeline.config file, and the number of training steps, which was initially set to 10,000 in our case. Finally, we loaded the trained model from the latest checkpoint by importing some dependencies, and wrote the image-detection code in TensorFlow to pre-process each image to the required size, pass it to the model for prediction, and return the detected image after post-processing.
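The label map mentioned above is a plain-text protobuf file with one item block per class; a minimal generator might look like the following, where the output format follows the TF object detection API's label_map.pbtxt convention:

```python
def make_label_map(labels):
    # Produces label_map.pbtxt content: one item {name, id} block per class,
    # with ids starting at 1 as the object detection API expects
    items = []
    for i, name in enumerate(labels, start=1):
        items.append("item {\n"
                     f"  name: '{name}'\n"
                     f"  id: {i}\n"
                     "}\n")
    return "\n".join(items)

# Example: the 35 sign classes used in the paper
labels = [str(n) for n in range(1, 10)] + [chr(c) for c in range(ord("A"), ord("Z") + 1)]
pbtxt_text = make_label_map(labels)  # write this string to label_map.pbtxt
```

The same label list is reused when generating TF records, so the ids stay consistent between annotations and training.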

5.5 Detection of Signs in Real Time For image detection in real time, we imported the cv2 and NumPy library and then we created a category index which would be the representation of our label map and got a key for each label map [19–22]. Then we set up our capture from the webcam and then we wrote the code for real-time object detection and displayed the result on the screen.
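The real-time loop can be sketched as below. `detect_fn` stands for the loaded detection model (its output keys follow the TF object detection API convention), the drawing step is elided, and `filter_detections` is a hypothetical helper we add for score thresholding:

```python
def filter_detections(boxes, classes, scores, min_score=0.5):
    # Keeps only detections above the score threshold before drawing
    # boxes and labels on each webcam frame
    keep = [i for i, s in enumerate(scores) if s >= min_score]
    return ([boxes[i] for i in keep],
            [classes[i] for i in keep],
            [scores[i] for i in keep])

def run_realtime(detect_fn, category_index, min_score=0.5):
    import cv2
    import numpy as np
    cap = cv2.VideoCapture(0)                 # set up capture from the webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        detections = detect_fn(np.expand_dims(frame, 0))   # model inference
        boxes, classes, scores = filter_detections(
            detections["detection_boxes"],
            detections["detection_classes"],
            detections["detection_scores"],
            min_score)
        # ... draw the kept boxes/labels using category_index, then display
        cv2.imshow("sign detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()
```

Raising `min_score` is one way to suppress the spurious second detection noted in the results section, at the cost of occasionally missing a valid sign.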

6 Results and Discussions Initially, we trained the detection model on 13 images of each sign for 10,000 steps and achieved a loss metric of 0.227 as shown in Fig. 4, and the accuracy in identifying the signs in real time was around 50 to 80%, as shown in Fig. 5. Then we trained the model for 20,000 steps and achieved a loss metric of 0.191, but it was not able to detect many signs due to lighting conditions. Further, we trained the model on 20 images of each sign for 15,000 steps with a loss metric of 0.279; it identified the signs fairly accurately, but we encountered the issue of two signs being detected when only one sign was being shown. So, we finally opted for the results of the first trained model, which identified 25 out of 35 signs well.

7 Conclusion We concluded that, although some work has already been done in this field, the method we implemented works faster when using this system for object detection in real time. However, we have to perform pre-training with a better dataset, since in the presently trained model the images were taken under low illumination, which resulted in less accuracy. For better results, we would have to use a dataset consisting of a greater number of images taken under different lighting conditions and with better camera resolution for more precise features and for obtaining


Fig. 4 Loss metric after 10,000 training steps

better accuracy during real-time detection of the signs. The proposed model can be further enhanced in many ways. It can be trained with a greater number of images under different lighting conditions and surroundings to obtain much better accuracy, and it can be tested using different models as well. A structure can be remodeled that learns signs continuously, enabling more signs to be recognized by the system on a long-term basis, and that also provides the output in speech format. After building this structure, it can be implemented as a mobile application using Flutter with a user-friendly interface.


Fig. 5 Output obtained while real-time detection of the signs

References 1. Karishma D, Singh JA (2013) Automatic Indian sign language recognition system. In: Proceedings of the 2013 3rd IEEE international advance computing conference, pp 883–887 2. Goyal S, Sharma I, Sharma S (2013) Sign language recognition system for Deaf and Dumb people. Int J Eng Res Technol 3. Padmavathi S, Saipreethy MS, Valliammai V (2013) Indian sign language character recognition using neural networks. IJCA special issue on recent trends in pattern recognition and image analysis, RTPRIA 4. Singh J, Das K (2013) Indian sign language recognition using eigen value weighted Euclidean distance based classification technique. Int J Adv Comput Sci Appl 5. Jain S, Sameer Raja KV, Indian sign language character recognition. Indian Institute of Technology, Kanpur 6. Raghuveera T, Deepthi R, Mangalashri R, Akshaya R (2020) A depth-based Indian sign language recognition using Microsoft Kinect. Sādhanā 45


7. Kumud T, Neha B, Nandi GC (2015) Continuous Indian sign language gesture recognition and sentence formation. Procedia Comput Sci 54:523–531 8. Adithya V, Vinod PR, Usha G (2013) Artificial neural network based method for Indian sign language recognition. In: Proceedings of IEEE conference on information and communication technologies 9. Sharma S, Gupta R, Kumar A (2020) Trbaggboost: an ensemble-based transfer learning method applied to Indian sign language recognition. J Amb Intell Human Comput 10. Washef A, Kunal C, Soma M (2016) Vision based hand gesture recognition using dynamic time warping for Indian sign language. In: International conference on information science (ICIS) 11. Prasad MVD, Kishore PVV, Kiran Kumar E, Anil Kumar D (2016) Indian sign language recognition system using new fusion based edge operator. J Theor Appl Inf Technol 88(3) 12. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https:// doi.org/10.1177/0020720921989015 13. Mishra S, Mallick PK, Tripathy HK, Bhoi AK, González-Briones A (2020) Performance evaluation of a proposed machine learning model for chronic disease datasets using an integrated attribute evaluator and an improved decision tree classifier. Appl Sci 10(22):8137 14. Mallick PK, Mishra S, Chae GS (2020) Digital media news categorization using Bernoulli document model for web content convergence. Pers Ubiquitous Comput 1–16 15. Mishra S, Tripathy HK, Mallick P, Bhoi AK, Barsocchi P (2020) EAGA-MLP—an enhanced and adaptive hybrid classification model for diabetes diagnosis. Sensors 20:4036 16. Mishra S, Mallick PK, Jena L, Chae G-S (2020) Optimization of skewed data using samplingbased preprocessing approach. Front Public Heal 8:274 17. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2019) Risk prediction of kidney disease using machine learning strategies. 
In: Intelligent and cloud computing, vol 77, pp 485–494. Springer, Singapore 18. Ray C, Tripathy HK, Mishra S (2019) Assessment of autistic disorder using machine learning approach. In: Proceedings of the international conference on intelligent computing and communication, Hyderabad, India, 9–11 January 2019, pp 209–219 19. Chaudhury P, Mishra S, Tripathy HK, Kishore B (2016) Enhancing the capabilities of student result prediction system. In: Proceedings of the 2nd international conference on information and communication technology for competitive strategies, Udaipur, India, vol 88, pp 1–6, 4–5 March 2016 20. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in Internet of things. In: Emerging trends and applications in cognitive computing, pp 224–257. IGI Global 21. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, pp 1–3 22. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security, pp 84–104. IGI Global

Song Recommendation Using Mood Detection with Xception Model Deep Mukherjee, Ishika Raj, and Sushruta Mishra

Abstract Face recognition technology already has a huge following due to its unlimited applications and huge market potential. It is being implemented in various fields like security systems, digital video processing, and many other technological advances. The song industry and other OTT applications are now booming; after the introduction of applications like Spotify and other music applications, the revenue generated was at an all-time high during the pandemic. This paper aims to deliver a unique recommendation system that recommends songs depending on the mood of the person rather than depending on their past data alone. The prime goal of the work is to recommend songs using a deep learning-based approach to facial emotion recognition. This idea can also be extended, after integration with existing algorithms and methodologies, to recommend songs and videos using audio, video, and past data altogether. Moreover, on a larger dimension, this would save the time and labor invested in performing the process manually. Keywords Deep learning · Recommendation system · Facial expression recognition · Image processing

D. Mukherjee · I. Raj · S. Mishra (B)
School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneswar, Odisha, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_40

1 Introduction Computational intelligence is an extensive domain that has attracted a lot of researchers and programs in recent times [1]. This particular domain has taken over the world on very short notice. It is incorporated into daily life in the form of chatbots, digital assistants like Siri, and several other technology-based systems. One of the most prominent powers of artificial intelligence is face recognition. A basic example of its usage is the grouping of Google Photos of a particular person. There are many existing systems that can recognize facial emotions. On the other hand, there are systems that recommend music. Bringing them together, a system that recommends music by recognizing the mood of the user from facial emotions


is a significant development [2]. Facial expression has always been an important part of human communication, but sadly it has never been used in any of the current recommendation systems/algorithms. Recently, after the development of CNNs, researchers have started to pay attention to research on facial features, but due to a lack of proper data, an accurate model is still out of reach. Recommendation systems, on the other hand, have become quite advanced in their capability to deliver near-accurate recommendations. Nearly all portals and applications having direct communication with a human have their own recommendation engines. Companies like Spotify, YouTube, and Netflix have very advanced systems, but all the systems to date are based on the data collected by the applications. By studying the patterns, they can accurately recommend the next best song/movie, etc. In spite of all this, there is still a shortcoming: none of them can recommend a song we want to hear depending on our mood. If we keep listening to sad low-tempo songs, then even if we are happy, the application will keep recommending sad songs [3]. The main objective of the paper is to develop a system, with the help of deep learning and image processing, that can recommend songs depending on the facial emotion of the person using the application. The aim is to make a system that uses not only facial features but also voice to determine the mood of a person. Most recommendation systems only work on past data, like the most listened to and most liked songs, to recommend the optimal song, but for a more tailored experience, we can use facial and audio cues together with the already existing systems for the ultimate smooth experience. The music industry is a very large industry with very few competitors, like Spotify, Hungama, Gaana, etc., but the main problem lies with the conventional systems and their datasets.
There are very few datasets available that categorize songs on the basis of their emotional structure. We have created a small dataset of 70 songs representing 7 different emotions, listing which song people should listen to along with the YouTube link and song name. Another approach to this problem is to use NLP to identify the mood of a song from its lyrics and tempo.

2 Literature Survey

The main motive for this work was the untapped music industry, where a few companies have a strong foothold over the whole market. Spotify, Hungama, Gaana, and a few other companies hold a near-monopoly with their software. The main reason for their success is their advanced song recommendation systems, which use a user's past data together with top-rated songs in a hybrid model that recommends the best possible choice to the user [4]. Where these systems fall short is in judging human emotion. This paper therefore judges a human's emotion through facial expressions and then recommends songs matching that mood. The recommendation system itself is a huge market, and thus integrating new models will surely help upgrade

Song Recommendation Using Mood Detection with Xception Model


Fig. 1 A sample demonstration of collaborative filtering

the existing software. Recommender systems are algorithms that recommend items to the user depending on various factors. A good recommendation system will recommend those objects or products in which the user is most likely to be interested. Companies like Netflix and Amazon use recommender systems to help their users identify the correct product or movie. The three types of recommender systems are:

• Collaborative Filtering
• Content-Based Filtering
• Hybrid Recommendation Systems.

2.1 Collaborative Filtering

Collaborative filtering methods are based mainly on past data. They collect and analyze huge amounts of data about users' preferences and activity and then try to predict what a user may like on the basis of their similarity to other users. One of the main advantages of this method is that it does not depend on the content, whether machine-analyzable or not, and hence it is capable of recommending complex items such as movies and songs without knowing the content explicitly. Figure 1 shows a collaborative filtering system.
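As an illustration, user-based collaborative filtering can be sketched in a few lines of NumPy. The rating matrix here is invented for demonstration; this is a minimal sketch of the idea, not any production recommender:

```python
import numpy as np

# Hypothetical user-item rating matrix: rows = users, columns = songs.
# 0 means the user has not rated that song.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user, ratings):
    """Score unrated items for `user` by similarity-weighted votes of other users."""
    sims = np.array([cosine_sim(ratings[user], ratings[v])
                     for v in range(len(ratings))])
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ ratings               # weighted sum of everyone's ratings
    scores[ratings[user] > 0] = -np.inf   # only recommend unseen items
    return int(np.argmax(scores))

print(recommend(0, ratings))  # prints 2: user 0 resembles user 1, who liked song 2
```

Note that the method never inspects the songs themselves, only the rating pattern, which is exactly why collaborative filtering works for items whose content is hard to analyze.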

2.2 Content-Based Filtering

Content-based filtering methods mainly use the description of an item and the user's preference profile. In these systems, an object's keywords are used to describe the object, and a user profile is built to recognize the user's preferences. The model then tries to recommend things to the user similar to those he has previously liked. A simple example of a content-based recommendation system is depicted in Fig. 2.

Fig. 2 Content-based recommendation system

2.3 Hybrid Recommendation Systems

The most advanced recommendation system developed in recent research is a hybrid approach combining the best of both the content-based and collaborative worlds. This is a more efficient, unifying model that yields more accurate predictions. Figure 3 shows a simple hybrid recommendation system. Netflix is a good example of a hybrid system: it makes recommendations by comparing the watching and searching habits of similar users (collaborative filtering) as well as by offering movies that share characteristics with films a user has rated highly (content-based filtering). The proposed method mixes hybrid filtering with mood-based detection to maximize the efficiency of recommending songs matching the user's needs rather than using only predefined formats. Facial expression is very important for conveying the mood of a human being. After many attempts to automate the task, and finally after the evolution of CNNs and computer vision, scientists have begun to take appropriate steps toward this basic interaction feature. Ekman et al. [5] came up with seven basic emotions which broadly classify human emotions (contempt, surprise, fear, anger [6], disgust, sadness, and happiness). Recently, continuing research on the facial recognition technology (FERET) dataset, Sajid et al. found an impact of facial asymmetry on age estimation [7]. Techniques using a three-dimensional pose-invariant method with subject-specific descriptors have also been developed in recent times [8, 9]. However, a few issues remain, such as excessive makeup [10] and pose and expression [11], which could be addressed using CNNs. Many other similar models have been proposed by several researchers and academicians [12–16].

Fig. 3 Hybrid recommendation approach

3 Dataset

The data samples, as shown in Fig. 4, consist of 48 × 48 pixel grayscale images of faces. The faces have been preprocessed so that each image contains the face at the center with uniform padding on all sides [17]. The target is to classify the images by the emotion they have been labeled with. The dataset is categorized into seven emotions (0 -> Angry, 1 -> Disgust, 2 -> Fear, 3 -> Happy, 4 -> Sad, 5 -> Surprise, 6 -> Neutral). For the labels, the train CSV file has two columns, 'emotions' and 'pixels': the pixels column holds one quoted string per image, and the emotion column holds the predetermined emotion for that image. The test CSV file has the same layout. There are 28,709 images in the training set and 3589 in the final test set. The song dataset contains three columns stating the mood, the link of the song, and the name of the song; once a mood is detected, a random song of that mood is played. To increase the accuracy and train a robust model, we use ImageDataGenerator to apply data augmentation, enlarging the data and making the model robust. For the first part, recognizing facial emotion, we have used Xception, a CNN architecture with 71 layers pretrained on ImageNet.

Fig. 4 Sample images from FER 2013 dataset
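A minimal sketch of how one such 'pixels' string can be parsed into an image array (the pixel values below are synthetic stand-ins; real rows come from the FER2013 CSV):

```python
import numpy as np

# Each FER2013 row stores a 48x48 face as a space-separated string of
# 2304 grayscale values in the 'pixels' column. A synthetic example row:
emotion = 3  # 3 -> Happy
pixel_string = " ".join(str((i * 7) % 256) for i in range(48 * 48))

# Parse the quoted string into a 48x48 image and scale to [0, 1].
face = np.array(pixel_string.split(), dtype=np.float32).reshape(48, 48) / 255.0

# A minimal augmentation in the spirit of ImageDataGenerator:
# horizontal flips are label-preserving for facial emotion.
flipped = face[:, ::-1]

print(face.shape, face.min() >= 0.0, face.max() <= 1.0)  # (48, 48) True True
```

In practice, Keras's ImageDataGenerator also applies rotations, shifts, and zooms on the fly, so the model rarely sees the exact same image twice.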

4 Proposed Methodology

The work can be subdivided into several individual constituents. The first part was to recognize human emotions; the data for this was collected from an ongoing Kaggle competition. Any deep learning project needs a large dataset, and the dataset used here is the FER2013 dataset acquired from Kaggle. The second part is to build a recommendation system, kept simple due to the lack of data: a custom dataset of 70 songs covering 7 different human moods. The final stage was to deploy the model into a web application where the user is prompted to use the webcam and receives song recommendations based on his mood. The application suggests 3 songs with links that open a window playing the song on YouTube.

Xception is a CNN architecture that is 71 layers deep, as shown in Fig. 5. The model has been pretrained on the ImageNet dataset; without any fine-tuning, it can classify up to 1000 classes of objects such as car, dog, and pencil. The input size of the model is 299 × 299, and it has learnt deep and rich features from the ImageNet dataset. We used this model as our classifier after fine-tuning it to our needs to increase accuracy.

Fig. 5 The architecture of Xception (entry flow > middle flow > exit flow)

Like Inception (also known as GoogLeNet), Xception is a very deep CNN architecture; unlike Inception, however, the Inception blocks are swapped for depthwise separable convolution layers. Xception is designed as a linear stack of depthwise separable convolution layers with linear residual connections [18–21]. Each such layer consists of two convolutions: a depthwise convolution, where a spatial convolution is carried out independently in each channel of the input data, and a pointwise convolution, where a 1 × 1 convolution maps the channels produced by the depthwise convolution to a new channel space. Figure 6 shows the accuracy of Xception on ImageNet classification when trained with gradient descent as the optimizer. Figure 7 illustrates the Xception architecture, to help understand the flow of the model and grasp its feature extraction capability.

Fig. 6 ImageNet: validation accuracy against gradient descent steps
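The depthwise-then-pointwise factorization can be illustrated with a toy NumPy implementation (stride 1, valid padding; a sketch of the idea, not Xception's optimized kernels):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (H, W, C_in); dw_kernels: (k, k, C_in); pw_weights: (C_in, C_out).
    A toy version of the Xception building block."""
    H, W, C = x.shape
    k = dw_kernels.shape[0]
    Ho, Wo = H - k + 1, W - k + 1
    # Depthwise step: one spatial filter per input channel, no channel mixing.
    dw = np.zeros((Ho, Wo, C))
    for c in range(C):
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_kernels[:, :, c])
    # Pointwise step: a 1x1 convolution mixes channels into a new channel space.
    return dw @ pw_weights

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
out = depthwise_separable_conv(x,
                               rng.standard_normal((3, 3, 4)),
                               rng.standard_normal((4, 16)))
print(out.shape)  # (6, 6, 16)
```

The payoff is parameter efficiency: a standard 3 × 3 convolution from 4 to 16 channels needs 3·3·4·16 = 576 weights, while the factorized version needs only 3·3·4 + 4·16 = 100.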

5 Results and Analysis

The results we achieved are not perfect because of class imbalance problems in the FER2013 dataset, but the model gives an accuracy of about 85.6%. When executed, the file captures 5 frames via OpenCV and then predicts the mood for each. As seen in Fig. 8, after obtaining the per-frame moods, we find the mood with the maximum frequency among them and return it.
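The max-frequency step over the five captured frames amounts to a simple majority vote, sketched here with hypothetical frame predictions:

```python
from collections import Counter

# Hypothetical per-frame predictions from the classifier over 5 webcam captures.
frame_moods = ["Happy", "Neutral", "Happy", "Happy", "Surprise"]

# Return the mood with the maximum frequency across the captured frames.
mood = Counter(frame_moods).most_common(1)[0][0]
print(mood)  # Happy
```

Voting over several frames smooths out single-frame misclassifications, which is why the application captures five frames instead of one.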


Fig. 7 The illustration of the architecture of Xception

Fig. 8 Frequency analysis through mood detection

As shown in Fig. 9, after the mood is obtained, the application returns the recommended songs. The results are satisfactory, but there is clear scope to increase the mood detection accuracy and to use a better music dataset. Figure 10 is another example of mood detection finding the maximum frequency among the detected moods, and Fig. 11 shows the expected result as per the maximum frequency detected in Fig. 10.

Fig. 9 Song recommended in neutral mood

Fig. 10 Accuracy analysis through mood detection


Fig. 11 Song recommended in surprise mood

The decision to choose Xception as our pretrained architecture was taken after a comparative analysis of 5 pretrained models:

1. InceptionV3
2. VGG16
3. ResNet50
4. DenseNet121
5. Xception

Sr.  Architecture   Validation accuracy
1    VGG16          56.02%
2    ResNet50       66.60%
3    DenseNet121    70.45%
4    InceptionV3    76.78%
5    Xception       85.6%

6 Conclusion

We conclude that FER is an established topic that has been worked on before, but integrating it into a recommendation system is novel and can surely be improved with better datasets and architectures [22–25]. A song dataset gathered by web crawlers, together with images from the respective countries, could enhance the mood recognition capability. The scope of this work is enormous; due to the lack of data, the recommender currently only functions well enough. In future, the project can again be subdivided into multiple projects: one part could detect the emotion of a song based on its tune and lyrics, then integrate this with the existing recommendation system and upgrade it using reviews and the most-played songs. As of now, our application detects human emotion only from facial expressions, but the audio of a person or surroundings could also be integrated to judge the mood even better. Another major extension would be custom playlists depending on the number of people in the frame, i.e., a different playlist for a group of 10, a different one for a group of 5 or fewer, and finally a solo playlist for each user. This application can be used by any industry, e.g., the movie or music industry; it can help tailor the user experience and can be used by commercial companies to make substantial profits.

References

1. Mishra S, Mallick PK, Tripathy HK, Bhoi AK, González-Briones A (2020) Performance evaluation of a proposed machine learning model for chronic disease datasets using an integrated attribute evaluator and an improved decision tree classifier. Appl Sci 10(22):8137
2. Mallick PK, Mishra S, Chae GS (2020) Digital media news categorization using Bernoulli document model for web content convergence. Pers Ubiquitous Comput 1–16
3. Mishra S, Tripathy HK, Mallick P, Bhoi AK, Barsocchi P (2020) EAGA-MLP—an enhanced and adaptive hybrid classification model for diabetes diagnosis. Sensors 20:4036
4. Mishra S, Mallick PK, Jena L, Chae G-S (2020) Optimization of skewed data using sampling-based preprocessing approach. Front Public Heal 8:274
5. Mehrabian A (2017) Nonverbal communication. Routledge, London
6. Bartlett M, Littlewort G, Vural E, Lee K, Cetin M, Ercil A, Movellan J (2008) Data mining spontaneous facial behavior with automatic expression coding. In: Esposito A, Bourbakis NG, Avouris N, Hatzilygeroudis I (eds) Verbal and nonverbal features of human–human and human–machine interaction. Springer, Berlin, pp 1–20
7. Russell JA (1994) Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol Bull 115(1):102
8. Gizatdinova Y, Surakka V (2007) Automatic detection of facial landmarks from AU-coded expressive facial images. In: 14th international conference on image analysis and processing (ICIAP). IEEE, pp 419–424
9. Liu Y, Li Y, Ma X, Song R (2017) Facial expression recognition with fusion features extracted from salient facial areas. Sensors 17(4):712
10. Ekman R (1997) What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (FACS). Oxford University Press, New York
11. Zafar B, Ashraf R, Ali N, Iqbal M, Sajid M, Dar S, Ratyal N (2018) A novel discriminating and relative global spatial image representation with applications in CBIR. Appl Sci 8(11):2242
12. Kim DH, An KH, Ryu YG, Chung MJ (2007) A facial expression imitation system for the primitive of intuitive human-robot interaction. In: Sarkar N (ed) Human robot interaction. IntechOpen, London
13. Ernst H (1934) Evolution of facial musculature and facial expression. J Nerv Ment Dis 79(1):109
14. Roy SN, Mishra S, Yusof SM (2021) Emergence of drug discovery in machine learning. Tech Adv Mach Learn Healthcare 119
15. Tutica L, Vineel KSK, Mishra S, Mishra MK, Suman S (2021) Invoice deduction classification using LGBM prediction model. In: Advances in electronics, communication and computing. Springer, Singapore, pp 127–137
16. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https://doi.org/10.1177/0020720921989015
17. Kumar KC (2012) Morphology based facial feature extraction and facial expression recognition for driver vigilance. Int J Comput Appl 51:2
18. Ray C, Tripathy HK, Mishra S (2019) Assessment of autistic disorder using machine learning approach. In: Proceedings of the international conference on intelligent computing and communication, Hyderabad, India, 9–11, pp 209–219
19. Chaudhury P, Mishra S, Tripathy HK, Kishore B (2016) Enhancing the capabilities of student result prediction system. In: Proceedings of the 2nd international conference on information and communication technology for competitive strategies, Udaipur, India, 4–5 March 2016, vol 88, pp 1–6
20. Mishra S, Tripathy HK, Panda AR (2018) An improved and adaptive attribute selection technique to optimize dengue fever prediction. Int J Eng Technol 7:480–486
21. Sushruta M, Hrudaya KT, Brojo KM (2017) Filter based attribute optimization: a performance enhancement technique for healthcare experts. Int J Control Theory Appl 10:295–310
22. Hernández-Travieso JG, Travieso CM, Pozo-Baños D, Alonso JB et al (2013) Expression detector system based on facial images. In: BIOSIGNALS 2013—proceedings of the international conference on bio-inspired systems and signal processing
23. Mishra S, Chaudhury P, Mishra BK, Tripathy HK (2016) An implementation of feature ranking using machine learning techniques for diabetes disease prediction. In: Proceedings of the second international conference on information and communication technology for competitive strategies, pp 1–3
24. Rath M, Mishra S (2019) Advanced-level security in network and real-time applications using machine learning approaches. In: Machine learning and cognitive science applications in cyber security. IGI Global, pp 84–104
25. Mishra S, Sahoo S, Mishra BK (2019) Addressing security issues and standards in internet of things. In: Emerging trends and applications in cognitive computing. IGI Global, pp 224–257

Diagnosis of Charging Gun Actuator in the Electric Vehicle (EV)

H. R. Yoganand and K. B. Sowmya

Abstract There has been a substantial increase in the number of electric vehicles (EVs) due to increasing environmental pollution. Electric vehicles emit neither tailpipe CO2 nor nitrogen dioxide, although battery manufacturing affects the carbon footprint. EV motors are also simpler and more compact, and the engine noise is comparatively very low; thus, the EV segment is considered attractive and economical for the automobile industry. Nowadays, all vehicles come with an electronic control unit (ECU), a device that controls the electronic features in the vehicle. Some vehicles have multiple ECUs controlling different features, while others have a limited number of ECUs controlling everything, ranging from fuel injection to door locks, the braking unit, suspension, etc. All components in the vehicle are supervised by the ECU, and they interact with each other to monitor the health of the vehicle. ECU failure may result in severe system breakdown or even threats to life, so it is necessary to maintain the ECU in optimum condition. Recharging the battery of an electric vehicle is called charging, similar to filling fuel in gasoline vehicles; a charging gun is used at the charging station to charge the EV. The present project deals with the charging gun actuator, which makes a physical interface with the charging gun while charging. At the point of contact for the charging gun, an actuator locks the gun while charging and unlocks it after the charging process is finished. The diagnosis checks for faults in the actuator of the vehicle to ensure optimal working conditions and complete safety for the customer.

Keywords Electric vehicle · ECU · Actuator · Charging gun · Safety · CCS charging process

H. R. Yoganand · K. B. Sowmya (B) Department of ECE, RV College of Engineering, Bengaluru 560059, India e-mail: [email protected] H. R. Yoganand e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_41


1 Introduction

Diagnosis is the process of identifying faults or issues in a system and taking relevant steps to resolve them. Several approaches can be used to perform it for different faults. In the automobile industry, more specifically, diagnosis can be defined as "identification of faults or defects raised in the system/vehicle and taking suitable actions to resolve them". Nowadays, a vehicle's ECUs control all its electronic features; some vehicles have multiple ECUs controlling different features, while others have a limited number of ECUs controlling everything. The ECUs interact and monitor the health of the vehicle continuously, and any fault raised by any component during operation is reported back to the user so that the respective action can be taken [1]. The motivations to carry out the project are:

(a) Diagnosis is required for ensuring reliable operation.
(b) Diagnosis is required for taking a substitute reaction to ensure safety in case of failure.
(c) Diagnosis ensures the optimum operation of the components.
(d) Diagnosis helps to monitor the health of the system.

2 Why Diagnostics?

In the automobile industry, diagnosis is carried out to ensure the complete safety of the user. The benefits of performing diagnosis are:

1. To identify the issue/problem.
2. To monitor the health of the vehicle.
3. To notify the user of the respective faults and the actions needed to solve them.
4. To monitor the performance of vehicle components and their functionalities.
5. To check minute faults and actuation.

3 Interaction Between Supply Station and Vehicle

The entire charging process and its steps are explained with CCS standards in consideration, as the project uses the combined charging system. Each step and the interaction between the charging gun inlet piles and the electric vehicle charging station is explained below against the schematic architecture. Figure 1 shows the basic model underlying the later steps and actions in the charging communication. The charging steps involved are: (a) Unmated, (b) Mated, (c) Initialize, (d) Cable check, (e) Pre-charge, (f) Charge, and (g) Power down.

Unmated: Stage 1. No contact is made between the charging gun and the vehicle.

Diagnosis of Charging Gun Actuator in the Electric Vehicle (EV)

505

Fig. 1 Interaction between supply station and vehicle [1]

Mated: Stage 2. Contact is established between the charging gun and the vehicle. The first signal state, CP, changes from its initial state and enters state B1 immediately after mating; the vehicle enters an immobilized state [2].

Initialize: Stage 3. Power line communication (PLC) is established. In this state, the two sides exchange the operating limits and parameters with respect to charging. All required checks must pass to enter the next state; if there is any mismatch in values or parameters, the process ceases.

Cable check: Stage 4. The electric vehicle changes its CP state from B to C or D and sets the EV status to the next state, called "Ready".

Pre-charge: Stage 5. The vehicle first sends a pre-charge request consisting of both a DC current of less than 2 A and a DC voltage. The DC supply adapts its DC output voltage within its tolerance range and limits the current to a maximum of 2 A.

Charge: Stage 6. The EV initiates message cycles by requesting current or voltage; the supply responds with current or voltage adjustments as well as present limit and status values. The EV decreases the requested current to complete the energy transfer, and the DC supply follows the requested current with a time delay, reducing the output current to less than 1 A before disabling the output.

Power down: Stage 7. The contactors open their contacts and the DC supply disables its output. If the DC supply reports code -> Not Ready

Volt1) when the clock path is parked at 0 V. The results show that at a higher supply voltage, the DCD is less affected than at lower voltages. In Fig. 11, the plot shows duty cycle degradation with variations in process corners.
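The CCS stage sequence described above can be sketched as a minimal state machine (a hypothetical illustration of the control flow, not the project's actual ECU software):

```python
from enum import Enum, auto

class ChargeState(Enum):
    UNMATED = auto()
    MATED = auto()
    INITIALIZE = auto()
    CABLE_CHECK = auto()
    PRE_CHARGE = auto()
    CHARGE = auto()
    POWER_DOWN = auto()

# Legal forward transitions of the CCS charging sequence described above.
NEXT = {
    ChargeState.UNMATED: ChargeState.MATED,
    ChargeState.MATED: ChargeState.INITIALIZE,
    ChargeState.INITIALIZE: ChargeState.CABLE_CHECK,
    ChargeState.CABLE_CHECK: ChargeState.PRE_CHARGE,
    ChargeState.PRE_CHARGE: ChargeState.CHARGE,
    ChargeState.CHARGE: ChargeState.POWER_DOWN,
}

def advance(state, checks_passed=True):
    """Move to the next charging stage; any failed check ceases the process."""
    if not checks_passed:
        return ChargeState.POWER_DOWN
    return NEXT.get(state, ChargeState.POWER_DOWN)

s = ChargeState.UNMATED
for _ in range(6):
    s = advance(s)
print(s)  # ChargeState.POWER_DOWN
```

Modeling the sequence explicitly makes the failure behavior easy to verify: a mismatch at any stage (e.g., during Initialize) routes directly to Power down, matching the "process ceases" rule above.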
The highest degradation is observed for the process corner where NFETs are fast (tphl is low) and PFETs are slow (tplh is high), and the lowest degradation for the corner where NFETs are slow (tphl is high) and PFETs are fast (tplh is low). Figure 12 shows the increase in duty cycle degradation with increasing frequency, when voltage, temperature, and process are kept constant.

7 Conclusion

The influence of NBTI on clock-path duty cycle degradation has been presented. The aging prediction is obtained by HSPICE simulations on the clock path. To reduce switching activity in the design, clock gating cells are added. During the inactive


N. K. Ramkrishna and A. Deshpande

Fig. 11 DCD with change in process corners

Fig. 12 DCD with change in frequency

mode, the output of the clock gating cells is maintained low, due to which transistors on the clock path are subjected to steady-state (DC) stress, which over time increases the threshold voltage (Vth) of the transistors under stress. The Vth shift in the devices is mainly due to the BTI effect. It was observed that the NBTI effect on duty cycle degradation was mainly due to the affected P-FinFETs on the path. Moreover, the duty cycle values drop as low as 20% at the end stages, which leads to minimum pulse width failures in major parts of the circuit. The analysis also shows the variation in DCD with changes in supply voltage, frequency, and process corner. Many physical mechanisms other than NBTI also lead to circuit reliability problems; future work can pursue methodologies that consider these other aging mechanisms.

Analysis of NBTI Impact …


Acknowledgements The author would like to thank Raghavendra Joishi and Rishabh Govli of Qualcomm, Bangalore for their useful discussions.


Classification of Brain Images Using Machine Learning Techniques

Annapareddy V. N. Reddy, Reva Devi Gundreddy, Moyya Meghana, Kothuru Sai Mounika, and Varikuti Anusha

Abstract Nowadays, many investigators are continually trying to build effective models to identify diseases, and several classification methods have been developed for the classification of brain images. It is therefore essential to provide an appropriate method for classifying such images. By studying and analyzing previous papers, we can find which machine learning algorithm is more accurate and less time-consuming. In this paper, we propose a classification strategy based on the inverse discrete wavelet transform (IDWT) and a random forest classifier. Two steps are involved in the proposed system: in the first step, features are extracted from the images using the IDWT and stored in a feature matrix; in the second step, the random forest classifier is used for the classification of magnetic resonance imaging (MRI) images.

Keywords Magnetic resonance images (MRI) · Inverse discrete wavelet transform (IDWT) · Random forest · Classification · Support vector machine

1 Introduction Magnetic resonance imaging was also known as MRI. This particular imaging technique was described independently by two different scientists Felix Bloch and Edward Purcell in the year 1946. These two scientists have received Nobel prize in the year 1952 for their works. Though these two scientists had described about magnetic resonance imaging, its medical use was reported by Raymond Dalmatian. Magnetic Resonance Imaging helps doctors to find, watch, and treat medical problems. Magnetic Resonance Imaging utilizes strong magnets along with the use of radio waves and computers to make pictures. MRI images are more detailed than A. V. N. Reddy (B) Department of Information Technology, Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India R. D. Gundreddy · M. Meghana · K. S. Mounika · V. Anusha Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_45


those made with other methods. Magnetic Resonance Imaging produces sectional images in any plane. An MRI image does not use ionizing radiation. MRI is sensitive to grey and white images. Because of intricate synapse structure, at times the MRI pictures are sufficiently not to recognize the infections. All things considered; the MRI characterization procedures assume a significant part. In the current times such countless investigates have been done for the arrangement of mind pictures. For that reason, AI and profound learning gives proficient outcomes in the grouping procedures. AI and profound learning are the subset of man-made consciousness. Machine learning algorithms are developed to perform the tasks on labeled data and deep learning algorithms on unlabeled data using neural networks. In both the algorithms, the main important parameter is data which decides the accuracy [1]. Recent work shows that classification of brain MRI images is possible through supervised learning algorithms such as support vector machines, random forest classifier, decision tree classifier, K-nearest neighbor (KNN) and counterfeit neural organizations and unaided learning calculations like fluffy c-means and SOM. In this paper, we are utilizing arbitrary timberland classifier which is the directed learning calculation for the grouping of cerebrum pictures [2].

2 Literature Review

Bash et al. [3] proposed a classification of brain MRI images using KNN, support vector machines, and probabilistic neural networks. In the pre-processing phase, a Gaussian filter was used to remove noisy data, the FCM algorithm was used for partitioning, and median, mean, and kurtosis were used for statistical feature extraction. Overall, 93% accuracy was obtained on the images. Kumar [4] developed a hybrid approach for classification of brain tumors in MRI images, to reduce the time spent on manual labeling and to overcome human errors; the discrete wavelet transform is used for extracting features from the images, principal component analysis for feature reduction, and a support vector machine for classification. Zhang et al. [5] developed a method for classification of brain images to diagnose Alzheimer's disease: the discrete wavelet transform (DWT) is used for feature extraction, principal component analysis for feature reduction, and SVM for the classification of MRI images, with 99% accuracy.

3 Proposed Methodology The proposed method accomplishes the classification of brain images using the Random Forest classifier. The proposed method is a combination of pre-processing, feature extraction using IDWT, and classification of images using Random Forest [6, 7].

Classification of Brain Images Using Machine Learning Techniques


From Figs. 1 and 2: the random forest algorithm is used for classification of the image datasets, covering image segmentation, classification, and reduction. The image datasets are taken from an open source, and after applying the random forest classification technique the images are classified into different disease and abnormality classes. As shown in Fig. 2, a random forest, as its name implies, consists of a large number of individual decision trees that operate as an ensemble. Each individual tree in the random forest outputs a class prediction, and the class with the most votes becomes the model's prediction.

3.1 Random Forest The Random Forest algorithm constructs multiple decision trees and combines them to get higher accuracy on the given dataset. As the number of trees increases, the algorithm gives higher accuracy results and also overcomes the overfitting problem [8].

3.2 Why Use Random Forest? No overfitting: in the Random Forest algorithm, the use of multiple trees decreases the risk of overfitting, and it also takes less time to train the data. The Random Forest algorithm runs efficiently on large datasets and gives the most accurate predictions [9]. Even when a large amount of data is missing from the dataset, the random forest algorithm maintains high accuracy [10].

3.3 Random Forest Algorithm Description The random forest algorithm is one of the most powerful supervised machine learning algorithms. It is capable of performing both classification and regression tasks [11]. As the name suggests, the Random Forest algorithm creates a forest with a number of decision trees. The final decision is made based on the output of the majority of the decision trees [12]. In general, the more trees in the forest, the more robust the prediction. Accordingly, in the random forest classifier, a higher number of trees in the forest gives higher accuracy results [13, 14].


Fig. 1 Schematic chart of the proposed methodology

A. V. N. Reddy et al.


Fig. 2 Random forest

Random Forest training starts from the root node, where all the training data are supplied; splits are then chosen based on parameters such as the Gini index and entropy [15].

Gini index = 1 − Σ_{i=1}^{C} (p_i)²   (1)

Entropy = Σ_{i=1}^{C} −p_i · log2(p_i)   (2)

where p_i is the relative frequency of class i and C is the number of classes.
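As a quick check on Eqs. 1 and 2, both impurity measures are easy to evaluate directly. This is an illustrative Python sketch; the class probabilities below are hypothetical:

```python
import math

def gini_index(p):
    """Gini index of Eq. 1: 1 minus the sum of squared class frequencies."""
    return 1.0 - sum(pi ** 2 for pi in p)

def entropy(p):
    """Entropy of Eq. 2: sum of -p_i * log2(p_i), skipping empty classes."""
    return sum(-pi * math.log2(pi) for pi in p if pi > 0)

# A pure node has zero impurity; an even two-class split is maximal.
print(gini_index([1.0]), entropy([1.0]))            # 0.0 0.0
print(gini_index([0.5, 0.5]), entropy([0.5, 0.5]))  # 0.5 1.0
```

Both measures are minimized by a pure node and maximized by a uniform class split, which is why either can drive the split selection.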

The random forest is a combination of classifiers formed by combining N decision trees. For the actual dataset,

D = {(A_1, b_1), (A_2, b_2), …, (A_n, b_n)}   (3)

3.4 Algorithm • Select N records from the given dataset; a tree is built based on the selected records. • After that, the remaining trees are built in the same way from the dataset.


• For a new record, each tree in the forest predicts a value for Y (the output) in the case of regression. • The final value is then calculated by taking the average of all the values predicted by the trees in the forest. • In the case of a classification problem, each tree in the forest predicts the class to which the new record belongs. • Finally, the new record is assigned to the class that wins the majority vote.
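The steps above can be sketched in a few lines of Python. This is only an illustration of the bootstrap-and-vote idea, not the authors' implementation: each "tree" is reduced to a one-level stump for brevity, and the toy dataset is invented:

```python
import random
from collections import Counter

def majority(labels, fallback):
    """Most common label, with a fallback for an empty branch."""
    return Counter(labels).most_common(1)[0][0] if labels else fallback

def train_stump(X, y):
    """Pick the (feature, threshold) split with the fewest training errors."""
    fallback = majority(y, 0)
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            ll, rl = majority(left, fallback), majority(right, fallback)
            errors = sum(lab != (ll if row[f] <= t else rl)
                         for row, lab in zip(X, y))
            if best is None or errors < best[0]:
                best = (errors, f, t, ll, rl)
    return best[1:]

def stump_predict(stump, row):
    f, t, left_label, right_label = stump
    return left_label if row[f] <= t else right_label

def train_forest(X, y, n_trees=25):
    """Bootstrap N records per tree, then fit one small tree on each sample."""
    forest = []
    for _ in range(n_trees):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    """Each tree votes; the class winning the majority vote is the answer."""
    return Counter(stump_predict(s, row) for s in forest).most_common(1)[0][0]

# Toy, clearly separable data: the class follows the first feature.
random.seed(0)
X = [[0, 5], [1, 3], [2, 8], [8, 1], [9, 4], [10, 7]]
y = [0, 0, 0, 1, 1, 1]
forest = train_forest(X, y)
print([forest_predict(forest, row) for row in X])
```

A real random forest grows full decision trees and also subsamples features at each split; the sketch keeps only the bootstrap sampling and the majority vote that the algorithm description emphasizes.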

3.5 Inverse Discrete Wavelet Transform (IDWT) The IDWT is a one-dimensional wavelet transform used in particular for image reconstruction, inverting the decomposition performed by the DWT [16, 17]. It involves two sets of functions: • Scaling function: performed by a low-pass filter • Wavelet function: performed by a high-pass filter (Fig. 3) The Inverse Discrete Wavelet Transform (IDWT) is expressed as

ϕ(X) = Σ_{K=−∞}^{∞} (−1)^K α_{N−1−K} ϕ(2X − K)   (4)
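Equation 4 defines the wavelet function from the scaling coefficients α; in the simplest (Haar) case the resulting low-pass/high-pass filter pair is just normalized sums and differences. A minimal one-level sketch under that assumption (illustrative, not the method's actual filter bank):

```python
import math

S = math.sqrt(2.0)

def dwt_haar(x):
    """One-level Haar DWT: low-pass (scaling) and high-pass (wavelet) outputs."""
    approx = [(x[2 * i] + x[2 * i + 1]) / S for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / S for i in range(len(x) // 2)]
    return approx, detail

def idwt_haar(approx, detail):
    """Inverse transform: rebuild and interleave the even/odd samples."""
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / S)  # even-indexed sample
        x.append((a - d) / S)  # odd-indexed sample
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = dwt_haar(signal)
rec = idwt_haar(a, d)
print(max(abs(u - v) for u, v in zip(signal, rec)) < 1e-9)  # True
```

Perfect reconstruction of the signal is what makes the wavelet coefficients usable as features: nothing is lost in the decomposition step.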

3.6 Results The proposed method classifies the brain images accurately. The results of the random forest classifier are stored in a confusion matrix, from which the accuracy, specificity, precision, recall, and F1-measure are determined (Tables 1, 2, 3, 4 and 5).

Fig. 3 Inverse discrete wavelet transform

Table 1 Confusion matrix

Actual class \ Estimation class   Positive               Negative
Positive                          TP (true positive)     FN (false negative)
Negative                          FP (false positive)    TN (true negative)

Table 2 Random forest with 1 tree

Performance metrics   Random forest classifier (%)
Accuracy              70
Precision             65
Recall                75
F1-score              95
Specificity           90

Table 3 Random forest with 5 trees

Performance metrics   Random forest classifier (%)
Accuracy              90
Precision             85
Recall                90
F1-score              80
Specificity           95

Table 4 Random forest with 11 trees

Performance metrics   Random forest classifier (%)
Accuracy              95
Precision             90
Recall                100
F1-score              90
Specificity           90

Table 5 Comparison of the proposed model with other classification techniques

Classification algorithms    Accuracy (%)
Random forest + IDWT         93
Random forest                83
Support vector machine       90
Convolution neural network   91


Accuracy = (TP + TN) / (TP + FP + FN + TN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

Specificity = TN / (TN + FP)

F1-measure = 2TP / (2TP + FP + FN)
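The five formulas translate directly into code; the counts used below are hypothetical, purely for illustration:

```python
def classification_metrics(tp, fp, fn, tn):
    """The five reported metrics, derived from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1":          2 * tp / (2 * tp + fp + fn),
    }

# Hypothetical symmetric counts: every metric works out to 0.9 here.
m = classification_metrics(tp=45, fp=5, fn=5, tn=45)
print(m)
```

Note that precision and specificity pull from different columns of Table 1, which is why both are reported alongside accuracy.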

The proposed method has an accuracy of 93%. The proposed method works efficiently when compared with the other techniques.


From the above results, we observed that the random forest algorithm was best suited for this classification task. We compared the proposed model (random forest with IDWT) with SVM, a convolutional neural network, and plain random forest, and the proposed model performs best among these related models.


4 Conclusion This paper studies and analyzes the previous work and suggests a technique which provides better results than the other techniques. The proposed method was developed to classify MRI images. It is a combination of the Inverse Discrete Wavelet Transform (IDWT), used for feature extraction, and the Random Forest classifier, used for classification of brain images. Based on our understanding and analysis, we have chosen the base algorithm which shows the best results and gives high accuracy. Our proposed method has an overall accuracy of 93%. The brain MRI image should be well filtered, free of noise, and clear; at the same time, the features of the MRI brain image should be thoroughly extracted. In future this work may be extended to a few more diseases.

References
1. Chauhan N, Choi B-J. Performance analysis of classification techniques of human brain MRI images. Department of Electronic Engineering, Daegu University, Gyeongsan, Korea
2. El-Dahshan E-SA, Hosny T, Salem A-BM (2010) Hybrid intelligent techniques for MRI brain images classification. Digital Signal Process 20(2):433–441. https://doi.org/10.1016/j.dsp.2009.07.002
3. Basha CZ, Likhitha A, Alekhya P, Aparna V (2020) Computerised classification of MRI images using machine learning algorithms. In: International conference on electronics and sustainable communication systems (ICESC)
4. Kumar S, Dabas C, Godara S (2017) Classification of brain MRI tumor images: a hybrid approach. In: Information technology and quantitative management (ITQM 2017)
5. Zhang Y, Wu L (2012) An MR brain images classifier via principal component analysis and kernel support vector machine. Prog Electromagn Res 130:369–388
6. Rajeshwari S, Sree Sharmila T (2013) Efficient quality analysis of MRI image using preprocessing techniques. In: IEEE conference on information and communication technologies, ICT 2013
7. https://cnx.org/contents/Df3FeQsm@1/The-Inverse-Discrete-Wavelet-Transform
8. https://www.ibm.com/cloud/learn/random-forest
9. https://en.wikipedia.org/wiki/Magnetic_resonance_imaging
10. Kumar S, Dabas C, Godara S (2017) Classification of brain MRI tumor images: a hybrid approach. Proc Comput Sci
11. Sarkar A, Maniruzzaman M, Ahsan MS, Ahmad M, Kadir MI, Taohidul Islam SM (2020) Identification and classification of brain tumor from MRI with feature extraction by support vector machine. In: 2020 international conference for emerging technology (INCET)
12. Ali J, Khan R, Ahmad N, Maqsood I. Random forest and decision trees. Computer Engineering, UET Peshawar, Pakistan
13. http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#prox
14. http://www.ijstr.org/final-print/sep2014/Discrete-Wavelet-Transforms-Of-Haars-Wavelet-pdf
15. Ambily N, Suresh K. Classification of brain MRI images using convolution neural network and transfer learning. Government Engineering College Barton Hill, Thiruvananthapuram
16. Reddy AVN, Krishna CP. A novel data mining approach for detection of brain disorder diseases using an integrated wavelet transform technique. Indian J Public Health Res Dev 9(9):1399–1405


17. Deep learning based image classification and abnormalities analysis of MRI brain images (2019). In: 2019 TEQIP III sponsored international conference on microwave integrated circuits, photonics and wireless networks (IMICPW). https://doi.org/10.1109/IMICPW.2019.8933239
18. Chauhan N, Choi BJ (2019) Performance analysis of classification techniques of human brain MRI images. Int J Fuzzy Logic Intell Syst 19(4):315–332. https://doi.org/10.5391/IJFIS.2019.19.4.315
19. Brain tumor detection using convolutional neural network (2019). In: 1st international conference on advances in science, engineering and robotics technology (ICASERT)
20. Hemanth G, Janardhan M, Sujihelen L (2019) Design and implementing brain tumor detection using machine learning approach. In: 2019 3rd international conference on trends in electronics and informatics (ICOEI). https://doi.org/10.1109/icoei.2019.8862553
21. Cinarer G, Emiroglu BG (2019) Classification of brain tumors by machine learning algorithms. In: 2019 3rd international symposium on multidisciplinary studies and innovative technologies (ISMSIT). https://doi.org/10.1109/ismsit.2019.8932878

Analyze the Performance of Stand-Alone PV with BESS Rahul Manhas, Harpreet Kaur Channi, and Sarbjeet Kaur

Abstract This paper presents a stand-alone PV system integrated with a battery and analyzes the performance of the system by changing the solar irradiation in the proposed model. The system consists of a solar PV array, a lead-acid battery, a voltage source converter, and other components. The performance of the proposed model is demonstrated under different operating conditions. MATLAB/SIMULINK software is used to obtain the results. Keywords Renewable sources · MATLAB · Stand-alone · BESS

1 Introduction The electricity generated with the help of solar PV panels depends on various site conditions. Changes in the ambient temperature result in fluctuations in the delivered output power. There are therefore several challenges, one being that a PV array does not generate energy in a stable form. To eliminate these challenges and cope with these fluctuations, a battery energy storage system (BESS) is integrated with the solar PV panels. For commercial and home loads the battery plays a significant role, as batteries are low-maintenance, offer better stability, and are not excessively priced. They also provide energy when there is no solar generation and store energy which can be utilized later. These days, many remote areas have little or no electricity, for reasons such as the village being located far away from the grid; to overcome these problems, a stand-alone PV system with BESS can be installed in such an area. In the BESS, lead-acid batteries are used for the storage of electrical power. The amount of power generated depends on the solar irradiation in that area.

R. Manhas (B) · H. K. Channi · S. Kaur, Electrical Engineering Department, Chandigarh University, Gharuan, Mohali, Punjab, India. S. Kaur e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_46

So this paper analyzes the performance of the stand-alone


R. Manhas et al.

Fig. 1 Stand-alone PV system

system by changing the amount of solar irradiation in the model through MATLAB software. This helps to locate the areas where the maximum power can be generated. In this model we integrate PV and battery. MPPT is used in the PV control for tracking maximum power. A capacitor, an inductor, and a boost converter are also used in this model (Fig. 1).

2 Literature Survey
1. "Battery optimal sizing for improving frequency stability" by Lasantha Meegahapola: this paper analyzes the impact of large-scale PV generation on power-system frequency stability and proposes a battery-sizing method to mitigate the adverse effects of PV generation on frequency stability.
2. "Designing of PV system and wind battery hybrid for on-grid and off-grid system" by Reshma S. Jadhav: this paper covers various grid-tied hybrid power systems, such as PV, wind energy, and battery storage systems, and compares which system has higher efficiency and is more reliable.
3. "Off-grid PV system design for mapetja village" by Nunu Henroitta: this paper concentrates on the simulation and design of a PV system for the village, which is far away from the national grid, making it capable of being supplied with electrical power from a solar PV system.
4. "Smooth operation for off-grid PV system with diesel generator and Battery energy storage system (BESS)" by H. Kim: this paper proposes a new scheme for smooth operation of a stand-alone or off-grid PV system composed of a BESS, with a diesel generator (DG) also integrated for smooth, reliable, and uninterrupted power supply.
5. "Comparison and analysis of On-grid and Off-grid PV system" by Ananya Nag: in this paper an analysis is done between centralized and decentralized PV systems to check their efficiency and operation, and grid dependency is also examined. After analyzing all these operations, the best-suited system is used for PV generation.

3 Model Description The stand-alone PV system integrated with BESS is presented in Fig. 2, with MPPT control on the PV side and a battery control function on the battery storage side. The scope is connected to the solar irradiation, PV voltage, PV current, and bus voltage signals. The configuration parameters of the PV array and battery are as follows:

Fig. 2 Proposed model of stand-alone PV with BESS using MATLAB/SIMULINK


• Type: Fixed
• Solver: discrete (no continuous states); the sample time for this system is 10e-6 s and the stop time of the model is kept at 10.0 s. The periodic sample time is unconstrained.
• PV array block parameters:
• Maximum power (W) = 223.15
• Open-circuit voltage Voc (V) = 37.3
• Voltage at maximum power point Vmp (V) = 28
• Cells per module (Ncell) = 60
A bus selector is connected to the PV array, and VPV and IPV are connected to it, respectively. An RL circuit is connected on the positive and negative sides of the PV array.

4 Analysis of Solar Irradiance Solar irradiance (Fig. 3): average solar radiation 6.85 kWh/m², maximum solar radiation 7.9 kWh/m², minimum solar radiation 5.23 kWh/m².

Fig. 3 Solar radiations (bar values: 6.74, 7.19, 7.9, 6.46, 7.33, 7.01, 6.05, 6.66, 5.23, 6.66, 5.23, 4.99 kWh/m²)

5 Components Description For making a stand-alone power system different components are used, whose simulation is done in MATLAB. The project lifetime is presumed to be 25 years with a discount rate of 10%. The components are as follows:
(a) Solar PV panels: The capital cost of 1 kW is taken as Rs. 65,000; Rs. 40,000 and Rs. 10 are taken as the replacement cost and the operation and maintenance cost, respectively. Generic flat-plate panels are considered. Different sizes of

PV arrays, like 1, 2, and 3 kW, are assumed, and through simulation in MATLAB the optimal size is calculated. The PV array lifetime is taken as 25 years.
(b) Battery: A lithium-ion battery of 1 kW is selected for improving the system efficiency. The capital cost of the battery is taken as Rs. 7000. The replacement cost and the operation and maintenance cost are presumed to be 1000 Rs/year and 20 Rs/year, respectively. MATLAB selects the optimum configuration, choosing the number of batteries from 0, 1, 2, 3, and 24.
(c) Converter: A power converter is used to maintain the flow of energy between the DC and AC buses. Rs. 17,500 is the capital cost of a 1 kW converter. The replacement cost and the operation and maintenance cost are presumed to be 15,000 Rs/year and 50 Rs/year, respectively. The lifetime of the converter is taken to be 10 years and its efficiency as 96%.
(d) Controller: The controller used in this model is a PID controller. It continuously calculates the error value e(t); in this model the PID controller also regulates flow and temperature.
(e) MATLAB function algorithm:
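The MATLAB function listing itself is not reproduced in the text. As a stand-in, the sketch below is a generic discrete PID update with hypothetical gains and a hypothetical first-order plant, only to illustrate the continuous computation of the error e(t) described above; it is not the paper's controller:

```python
def pid_step(state, setpoint, measurement, kp, ki, kd, dt):
    """One discrete PID update: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    e = setpoint - measurement
    integral = state["integral"] + e * dt
    derivative = (e - state["prev_e"]) / dt
    u = kp * e + ki * integral + kd * derivative
    return u, {"integral": integral, "prev_e": e}

# Hypothetical first-order plant x' = -x + u, driven toward setpoint 1.0.
x, dt = 0.0, 0.01
state = {"integral": 0.0, "prev_e": 0.0}
for _ in range(5000):
    u, state = pid_step(state, setpoint=1.0, measurement=x,
                        kp=2.0, ki=1.0, kd=0.05, dt=dt)
    x += (-x + u) * dt  # Euler integration of the plant
print(round(x, 3))  # settles at the setpoint
```

The integral term is what removes the steady-state error: at equilibrium the integral can only stop changing when e(t) has been driven to zero.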


6 Results and Discussion After completing the PV integration with the battery in MATLAB, various values of solar irradiance were applied to analyze the endurance of the system under different conditions, as shown in the data below (Fig. 4). • Irradiation = 0 • Battery current = 15 A

Fig. 4 For irradiation 0


Fig. 5 For irradiation 300

• This means that the battery is discharging
• PV = 0
• Bus voltage = 4-8 V

Therefore, from the above data it is concluded that the system is not stable here (Fig. 5).

• Irradiation = 300 W/m²
• Battery current decreasing = 5 A
• System not stable
• Bus voltage = 4-8 V
• SOC is increasing, due to which the battery starts charging
• Battery current = 12 A (negative current)
• Bus voltage constant (Fig. 6)

• Max. irradiance = 600 W/m²
• Battery current = -18 A (charging)
• Stable bus voltage
• PV = 1 kW

In the data shown above we have decreased the irradiance from maximum to minimum (Fig. 7).

• Battery charging will be slow
• Battery current is positive
• SOC will be


Fig. 6 For irradiation 600

Fig. 7 Irradiance from maximum to minimum



7 Conclusion If we decrease the solar irradiance step by step from maximum to minimum, the PV power decreases and the battery discharges. To investigate further, we again decreased the values of solar irradiance, and the battery again discharged slowly, because the state of charge was decreasing and the PV side was not enough to supply the load at the minimized values. The SOC increases and decreases depending on the PV side.


Efficient LUT-Based Technique to Implement the Logarithm Function in Hardware Siddarth Sai Amruth Yetikuri, K. B. Sowmya, Timothy Caputo, and Vishal Abrol

Abstract Mathematical computations find extensive usage in various modern-day technologies, ranging from machine learning to digital signal processing. The logarithm function has been fundamental to such computations due to its utility in learning algorithms. Hence, extensive research has gone into algorithms focused on obtaining high-precision values. These implementations, however, are not optimized for hardware realization. The work presented aims to develop an algorithm that achieves up to 7 digits of precision while avoiding the computation of terms greater than the first order of approximation. The foundation for this work is Mitchell's approximation error, which is the error obtained by approximating the logarithm to the first two terms of the log function's Taylor series. Lookup tables and range reduction are used to further reduce the error of the function. Keywords Logarithm function · LUT · Mitchell's approximation · Taylor series · Hardware optimization

S. S. A. Yetikuri (B) · K. B. Sowmya Department of Electronics and Communication, RV College of Engineering®, Bengaluru 560059, India K. B. Sowmya e-mail: [email protected] T. Caputo · V. Abrol Analog Devices Inc, Bengaluru, Karnataka, India e-mail: [email protected] V. Abrol e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_47


S. S. A. Yetikuri et al.

1 Introduction The logarithm function is used for various purposes, such as simplifying multiplication and serving as a likelihood function in machine learning. Conventional implementations, such as library functions in various programming languages, tend to trade speed for high precision, so the log function consumes a large number of cycles. Two popular approximations of the log function are the Taylor series expansion and the Remez algorithm. However, using these formulae requires resource-consuming higher-order implementations unless additional modifications are made. The error obtained when the log function is approximated by its first-order Taylor series expansion is known as Mitchell's error [1]. The resulting maximum absolute error is 0.08639. This error is too high for most applications; hence, methods have been developed to model the error function itself. The most popular method has been the use of lookup tables (LUTs) to store suitable error-adjustment values, which reduces the strain on computation time. This is achieved by performing piecewise linear approximations of the entire error region and then obtaining suitable slopes and adjustments for each region. Authors have investigated different ways of dividing the entire range into linear regions. In this work we aim to achieve faster computation by looking at the algorithms through the a priori and a posteriori approach.
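The error figure above is easy to reproduce numerically. The sketch below (Python, illustrative only, not the hardware algorithm) applies the range reduction x = 2^k·(1+f) and then Mitchell's approximation log2(1+f) ≈ f:

```python
import math

def mitchell_log2(x):
    """Mitchell: for x = 2^k * (1 + f) with f in [0, 1), log2(x) ≈ k + f."""
    m, e = math.frexp(x)      # x = m * 2^e, with m in [0.5, 1)
    return (e - 1) + (2.0 * m - 1.0)

# Scan [1, 2): the worst case of |log2(x) - (x - 1)| is Mitchell's error.
err = max(abs(math.log2(x) - mitchell_log2(x))
          for x in (1.0 + i / 100000.0 for i in range(100000)))
print(round(err, 4))  # about 0.086, consistent with the figure quoted above
```

The worst case occurs where the slope of log2(1+f) equals 1, i.e. at f = 1/ln(2) − 1, which is exactly the expansion-point observation developed in Sect. 3.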

2 Literature Review In general it is observed that lookup table-based implementations produce simpler circuits and scalable accuracy. Another important point is that Mitchell's approximation is generally considered the basis for all calculations, and the aim is to deal with this error. Literature discussing these methods is cited below. In [1] the author introduces a new method to effectively compute the logarithm and antilogarithm of a number in order to perform multiplication and division swiftly. The algorithm reduces the range of the number to a value between one and two by shifting until the leading one occupies the units position. The logarithms of these numbers are then calculated by making the approximation log2(1 + x) ≈ x. The error function resulting from this approximation is known as Mitchell's error. The maximum value of the error obtained is 0.08639. He also proposes that lookup tables may be used in this region, at the cost of time and area. The paper serves as the basis for the following literature. In [2] the authors design a purely combinational logarithm computation circuit that calculates the log approximation in a single cycle. Three serial/parallel leading-one detectors for 4, 16, and 32 bits are designed, which were faster than the state of the art. The grouping of bits reduced the overall dependency. The authors also propose new piecewise linear approximations where the error function is divided into 2, 3, and 6 regions. A maximum positive percentage


error of 0.1538% was achieved in the 6-region approximation, as opposed to the 5.3605% achieved by Mitchell's approximation. The authors of [3] considered a new method wherein the maximum approximation error (MAE) was kept constant for all the regions of operation. This was achieved by assigning the regions based on regular output intervals instead of input intervals. Expressions for minimum MAE and maximum SNR could be solved to provide the minimum number of regions, the interpolation coefficients, as well as the range of each region for a given error constraint. The authors also propose a systematic design methodology to realize low-power, fast hardware circuits. Comparisons with the previous state of the art showed improvements of 12.7% in area as well as 25% in latency. Reference [4] extends Mitchell's approximation method using the operand decomposition method. In this method the product of two numbers is decomposed into the addition of two products. This results in the probability of the occurrence of a 1 decreasing from 1/2 to 1/4. Applying operand decomposition as a preprocessing step improved the accuracy of Mitchell's algorithm. The approach was combined with the divided approximation method from [2], wherein modifications are made to the mantissa based on a 2-region approximation. This results in the average error dropping to 0.2%. Finally, when combined with Mitchell's error-correction term, the error value is 1%. The paper [5] explored the concept of adding an additional error-correction term to the Mitchell algorithm in order to improve the accuracy of the multipliers. The paper discussed three possible methods of optimization, namely linear LUTs, interpolators with non-linear LUTs, and lastly interpolators with linear LUTs. The authors decide to average the error over 8 regions for each input, resulting in an 8 × 8 lookup table being generated.
The improved Mitchell algorithm multiplier gave a saving of 30% in area and power with respect to the standard multiplier. The authors in [6] implemented a multiplier using ROM-based methods. They showed that, as opposed to implementing a normal multiplier with a ROM, the logarithm multiplier produces the same amount of error with considerably fewer bits. This was achieved by first converting the number to an exponent-and-mantissa format, so that the problem reduces to implementing ROMs to calculate the product of the mantissas. For an error of 0.1%, 43,008 bits were required to implement the ROM, as opposed to the 2,359,256 bits conventionally required. The authors of the paper [7] investigated the combined usage of LUTs and interpolators to improve Mitchell's approximation. The error due to Mitchell's approximation corresponding to the first m bits of the mantissa (which contains t bits) is stored in the LUTs. Linear interpolation was used, which resulted in the need for a multiplier. Other methods were explored, such as the quadratic method. The multiplications in these methods were implemented by additional LUTs to compute logarithms and antilogarithms. The least-squares polynomial method with 256-word LUTs achieves an accuracy of 20 bits. The authors of the paper [8] went one step further in modifying the work done in [7]. They proposed the concept of dividing a single sub-region into multiple sub-regions. The sub-regions would all use the same slope term, while changing the


intercept in the linear formula, which would result in no increased load on the multiplier. Implementation and analysis were done for the 2-region, 4-region, and 8-region versions. The maximum savings with respect to [7] were observed in the 8-region implementation. The authors of the paper [9] explored mathematical methods to reduce the MAE by obtaining the most optimal constraints for a linear approximation. The work done is an extension of the piecewise linear region approximations done in [10, 11]. They formulated an error function which they aimed to minimize under given constraints. It was proved that obtaining the constraints analytically was not possible, due to infinitely many solutions to the constraints presented. Numerical investigations resulted in the final constraint for the optimized coefficients. The maximum relative error was observed to be 0.5%. The authors of the paper [12] proposed a new algorithm named REED (relative error equal distribution). The algorithm was used to find ranges of the input for which the maximum error is below a certain threshold. The algorithm could be used to obtain precise non-uniform range values even when the number of segments was not a power of 2. The synthesized implementation produced a 70% improvement in error while resulting in only an acceptable increase in area and timing.

3 Logarithm Function as Series Expansion The Taylor series of a function is an infinite sum of terms that are expressed in terms of the derivatives of the function around a point, shown in Eq. 1. The series represents the function most accurately around this particular point. It was named after Brook Taylor, who introduced it in the year 1715 [13]. The special case where the point of expansion is zero is known as the MacLaurin series, shown in Eq. 3. The series is called an Nth-order expansion if the first N + 1 terms are included in the analysis. The series finds great application in implementing functions for computation, and the order chosen gives the designer a tradeoff between accuracy and design complexity. Equation 2 shows the first-order expansion of log2(x) about the point a = 1/ln(2) that is used in this work.

f(x) = Σ_{n=0}^{∞} f^{(n)}(a)/n! · (x − a)^n   (1)

log2(x) ≈ x − 1/ln(2) − ln(ln(2))/ln(2)   (2)

f(x) = Σ_{n=0}^{∞} f^{(n)}(0)/n! · x^n   (3)

The logarithm function expansion. The logarithm function is expanded in the Taylor series as shown in Eq. 4. This is implemented as Eq. 3 in the algorithm.

Efficient LUT-Based Technique …

577

\ln(x) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} (x - 1)^n    (4)

The popular Mitchell algorithm considers the first two terms of the Taylor series expansion for computation, as shown in Eq. 5.

f(x) = \frac{\ln(a)}{\ln(2)} + \frac{x - a}{a \ln(2)} + O(x)    (5)

\frac{d(\log_2(x))}{dx} = \frac{1}{x \ln(2)}    (6)

x = \frac{1}{\ln(2)}    (7)

An important observation is that the closer the value is to the point of expansion, the lower the error stemming from ignoring the higher-order terms. It is this concept that is used in our implementation when selecting the point of expansion. When the log2(x) function is differentiated (Eq. 6) and its derivative equated to 1, the result is Eq. 7: the slope of the log2 function is exactly 1 at this point, which implies that the first-order terms achieve their best accuracy in this range.

There are many ways to use the Taylor series to represent the same log2(x) function; the choice of the point of expansion is driven by the computational complexity that results. For instance, if the entire range of 1–2 is divided into 1000 regions, the point of expansion for a particular region can be selected as the center of that region. However, this results in a division, which is expensive in area and timing. This concept is explored in a later section. The implication is that a single point of expansion has to be selected, and the previous analysis showed that 1/ln(2) would be the ideal value. The new issue that arises is that the extremes of the input range lie far enough from this point to cause significant error, as shown in Fig. 1, the result of a MATLAB simulation of the algorithm using a single point of expansion.

The solution is to use a multiplier to range the input appropriately. To obtain a reasonable approximation, different multiplier values are used for piecewise regions of the actual range. This gives rise to the concept of bins, which is essentially the number of regions the range is split into. To ease design complexity, the number of regions is chosen as a power of 2, which makes it possible to use the first n bits of the mantissa as an address to look up the corresponding multiplier values.
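The error of a single point of expansion can be checked numerically; a minimal sketch (a hypothetical sweep, assuming the first-order expansion about 1/ln(2) from Eq. 3 is applied over the whole mantissa range [1, 2) with no range reduction):

```python
import math

A = 1 / math.log(2)  # point of expansion, 1/ln(2)

def log2_first_order(x):
    # First-order Taylor expansion of log2(x) about a = 1/ln(2):
    # log2(x) ~ -ln(ln(2))/ln(2) + x - 1/ln(2)
    return -math.log(math.log(2)) / math.log(2) + x - A

# Sweep the mantissa range [1, 2) and record the per-sample error
errors = [abs(log2_first_order(1 + i / 4096) - math.log2(1 + i / 4096))
          for i in range(4096)]
# The error vanishes near x = 1/ln(2) and grows toward the range extremes
```

The worst-case error of roughly 0.086 at the extremes is what motivates the range reduction described next.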
Hence, the need for values extremely close to 1/ln(2) results in the creation of two lookup tables. The first lookup table consists of the multiplier itself; the multiplier values are calibrated by iteratively decrementing each value until the bin's base value times the multiplier is as close to 1/ln(2) as the bit limitations allow. Figure 2 shows the error resulting from using different numbers of bins.
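The two-table range-reduction scheme can be sketched end to end; this is an illustrative floating-point model (the bin count, table construction, and calibration here are assumptions, not the paper's exact fixed-point implementation):

```python
import math

K = 10                        # lookup bits -> 1024 bins over the mantissa range [1, 2)
A = 1 / math.log(2)           # point of expansion, 1/ln(2)
C = -math.log(math.log(2)) / math.log(2)   # constant term of Eq. 3

# Two lookup tables: a per-bin multiplier that ranges the input close to
# 1/ln(2), and a matching adjustment -log2(multiplier) folded into the result.
mult = [A / (1 + i / 2**K) for i in range(2**K)]
adjust = [-math.log2(m) for m in mult]

def log2_approx(x):
    i = int((x - 1) * 2**K)          # first K mantissa bits select the bin
    y = mult[i] * x                  # y now lies very close to 1/ln(2)
    return (C + y - A) + adjust[i]   # linear approximation plus adjustment
```

Since log2(x) = log2(m·x) − log2(m), the adjustment exactly cancels the multiplier, and the linear term is only ever evaluated very close to its tangent point, giving errors of the same order as reported later for the 1024-bin configurations.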

578

S. S. A. Yetikuri et al.

Fig. 1 The error due to a single point of expansion

An important part of the process is the 24-bit × 16-bit multiplication that shifts the range of the input. Since the lookup bits of the input are used to fetch a multiplier value, those bits are always multiplied by the same multiplier. Hence, this partial product is a constant that can be precomputed and stored in the adjustment. This reduces the bits on the input side of the multiplier by the number of lookup bits. This novelty is suggested for the implementation of fixed-point LUT-style algorithms in general: the aim is to pre-calculate all known data and store it, as this demands far fewer computational resources. Reference [9] briefly mentions this mathematically but does not explain how it is to be implemented. The concept can be applied to multiple algorithms and is essentially an important design paradigm, demanding closer observation of the data available before the actual computation. In our situation, the precomputed values can be added to the already available adjustment value. The flowchart of the algorithm is shown in Fig. 3. Since the range of these values is similar to those stored in the adjustment, no extra bits are needed to store them without losing accuracy. Hence, not only is the timing improved by shaving bits off a timing-critical multiplier, but the gate count in the data path also reduces with no increase in the area occupied by storage.
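The constant-partial-product idea can be sketched in integer arithmetic; the bit widths and table contents below are illustrative assumptions, not the exact TIE design:

```python
# Illustrative integer model of the precomputation trick.
LOOKUP_BITS = 10          # top mantissa bits that address the tables
LOW_BITS = 14             # remaining "live" mantissa bits
# Hypothetical 16-bit multiplier table (placeholder values for illustration)
mult_table = [0x8000 + 3 * i for i in range(1 << LOOKUP_BITS)]

# Because the top LOOKUP_BITS of the mantissa always select the same
# multiplier, their partial product is a per-bin constant that can be
# folded into the stored adjustment ahead of time.
precomputed = [(hi << LOW_BITS) * mult_table[hi] for hi in range(1 << LOOKUP_BITS)]

def ranged_product(mantissa):
    hi = mantissa >> LOW_BITS
    lo = mantissa & ((1 << LOW_BITS) - 1)
    # Only a narrow LOW_BITS x 16-bit multiply remains on the critical path
    return precomputed[hi] + lo * mult_table[hi]
```

Because (hi·2^14)·m + lo·m = mantissa·m, the result is bit-exact while the live multiplier shrinks from 24 × 16 to 14 × 16 bits.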


Fig. 2 The errors generated due to using a different number of bins

Fig. 3 Algorithm flowchart

4 Implementation

The software used includes Xtensa's TIE compiler, a C compiler, gnuplot, and Conformal by Cadence. TIE was used to implement the RTL-level design, which becomes available for use in C programs after compilation. The algorithm was implemented

Table 1 The average errors resulting from the C model simulation

Multiplier configuration   Number of bins   Average error
16                         1024             1.329881911e-07
16                         512              4.820578852e-07
16                         256              1.814696702e-06
12                         1024             1.357679906e-07
12                         512              4.901575608e-07
8                          512              1.205637886e-06

using C variables, and sweeps were performed to compare its accuracy with the math.h function. Once verified, a .tie file was generated from the C program. The .tie file is compiled to produce a .tdk file; any errors must be fixed before it is successfully generated. The .tdk file contains a .area file which stores the gate count for the TIE functions. After compilation, the .tdk is attached to a cloned version of the original configuration; once this completes, the TIE header file can be included in any C program where the TIE function is to be used. Finally, the performance of the TIE function is verified against both the C model and the math.h function.

The average and worst-case errors for the 16-bit and 12-bit multiplier configurations are compared in Tables 1 and 2, respectively. The parameters varied are the multiplier bit width (16 or 12) and the number of bins (256, 512, 1024). The performance of these configurations is evaluated using the worst-case error, worst-case bit error, and the average error; one 8-bit multiplier implementation is included for comparison. All computations were performed over 10,000,000 steps in the range 1 to 16. It can be observed that 1024 bins give the best performance irrespective of the multiplier width. For lower bin counts, however, the multiplier width significantly impacts performance, as seen in the increase in average error between the 16-, 12-, and 8-bit multipliers for the 512-bin configurations. Hence the (12, 1024) and (16, 512) configurations seem to provide the best values. Since the test steps covered only about 7% of the entire value set, it was decided to persist with the largest configuration: the 16-bit multiplier, 1024-bin version was compiled using Xtensa and then included in the verification program as a header file to access the functions.

Table 2 The worst-case errors resulting from the C model simulation

Multiplier configuration   Number of bins   Worst magnitude difference   Worst magnitude location
16                         1024             9.536743164e-07              16.00006104
16                         512              1.907348633e-06              16.00006104
16                         256              5.722045898e-06              4.000022888
12                         1024             4.842877388e-07              1.045895934
12                         512              5.483627319e-06              4.007814884
8                          512              1.205637886e-06              7.359377384


Fig. 4 Result of verifying the TIE function with math.h

The result after verification is shown in Fig. 4. 10,000,000 steps were checked for the TIE model, the C model, and the actual log value. The average error is shown to be 1.027e-7, and even the worst-case magnitude error is smaller. This justifies the earlier decision to use 16 multiplier bits with 1024 bins. The optimization from pre-calculating the first 10 bits reduced the 40-bit multiplier to a 30-bit multiplier, resulting in a 5% reduction in gate count for no loss in accuracy.

5 Conclusion and Future Scope

A high-accuracy approximation of the logarithm function has been designed using range reduction and an optimal point of expansion. The error from functional verification after RTL generation was measured to be approximately 1.024e-7. The precomputation optimization resulted in a 5% reduction in gate count compared to the original algorithm when implemented in TIE. The accuracy is much higher than that achieved in the referenced works, although at the cost of more area than in previous literature. The speed, however, is very high, and the design offers a scalable tradeoff between accuracy and area. Furthermore, pre-calculating values by taking advantage of the uniform input range resulted in a faster circuit. The precomputation concept applied in this work is applicable to the general class of series-approximated function implementations that divide the input range into uniform regions. Hence, similar methodologies could be used to design the exponent (antilog) function, CORDIC, etc.

Acknowledgements I would like to express my gratitude to my guide Ms. Sowmya K B and my mentors Mr. Timothy Caputo and Mr. Vishal Abrol for guiding me in every step of the project.


References

1. Mitchell JN (1962) Computer multiplication and division using binary logarithms. IRE Trans Electron Comput EC-11(4):512–517. https://doi.org/10.1109/TEC.1962.5219391
2. Abed K, Siferd R (2003) CMOS VLSI implementation of a low-power logarithmic converter. IEEE Trans Comput 52(11):1421–1433. https://doi.org/10.1109/TC.2003.1244940
3. Liu C-W, Ou S-H, Chang K-C, Lin T-C, Chen S-K (2016) A low-error, cost-efficient design procedure for evaluating logarithms to be used in a logarithmic arithmetic processor. IEEE Trans Comput 65(4):1158–1164. https://doi.org/10.1109/TC.2015.2441696
4. Mahalingam V, Ranganathan N (2006) Improving accuracy in Mitchell's logarithmic multiplication using operand decomposition. IEEE Trans Comput 55(12):1523–1535. https://doi.org/10.1109/TC.2006.198
5. McLaren D (2003) Improved Mitchell-based logarithmic multiplier for low-power DSP applications. In: Proceedings of IEEE international [systems-on-chip] SOC conference 2003, pp 53–56. https://doi.org/10.1109/SOC.2003.1241461
6. Brubaker T, Becker J (1975) Multiplication using logarithms implemented with read-only memory. IEEE Trans Comput C-24(8):761–765. https://doi.org/10.1109/T-C.1975.224307
7. Paul S, Jayakumar N, Khatri SP (2009) A fast hardware approach for approximate, efficient logarithm and antilogarithm computations. IEEE Trans Very Large Scale Integr (VLSI) Syst 17(2):269–277. https://doi.org/10.1109/TVLSI.2008.2003481
8. Ha M, Lee S (2017) Accurate hardware-efficient logarithm circuit. IEEE Trans Circuits Syst II Express Briefs 64(8):967–971. https://doi.org/10.1109/TCSII.2016.2608967
9. De Caro D, Petra N, Strollo AGM (2011) Efficient logarithmic converters for digital signal processing applications. IEEE Trans Circuits Syst II Express Briefs 58(10):667–671. https://doi.org/10.1109/TCSII.2011.2164159
10. Kim H, Nam BG, Sohn JH, Yoo H-J (2005) A 231 MHz, 2.18 mW 32-bit logarithmic arithmetic unit for fixed-point 3D graphics system. In: 2005 IEEE Asian solid-state circuits conference, pp 305–308. https://doi.org/10.1109/ASSCC.2005.251726
11. Nam BG, Kim H, Yoo H-J (2008) Power- and area-efficient unified computation of vector and elementary functions for handheld 3D graphics systems. IEEE Trans Comput 57(4):490–504. https://doi.org/10.1109/TC.2008.12
12. Zhu M, Ha Y, Gu C, Gao L (2016) An optimized logarithmic converter with equal distribution of relative errors. IEEE Trans Circuits Syst II Express Briefs 63(9):848–852. https://doi.org/10.1109/TCSII.2016.2535041
13. Persson LE, Rafeiro H, Wall P (2017) Historical synopsis of the Taylor remainder. Note di Matematica 37:1–21. https://doi.org/10.1285/i15900932v37n1p1

Hand Gesture Recognition Using Convolutional Neural Networks and Computer Vision

V. V. Krishna Reddy, K. N. V. S. Bhuvana, K. UmaHarikka, D. Sai Teja, and J. Suguna Kumari

Abstract Hand gesture recognition is meant for communication with people who are deaf and mute. Communicating with others is always a difficult task for them, and sign language is a boon that helps them communicate with others who know it. Here, the sign language used is American Sign Language (ASL), which is widely used globally. Hand gestures are confusing and difficult for people who have never learnt them. In this study, the user captures images of hand gestures using a web camera, and the model detects the captured image and displays the corresponding symbol. For detecting the hand gesture we use OpenCV, and the background subtraction technique is used to segment the hand region. Convolutional Neural Networks (CNN) are used to train on and classify the images. To reduce the loss and increase accuracy, we used optimization algorithms. This model achieved a remarkable accuracy of above 96%.

Keywords OpenCV · Background subtraction · Convolutional neural networks (CNN) · American sign language (ASL) · Gesture recognition · Optimization algorithms

1 Introduction

According to the National Association of the Deaf (NAD), 18 million people in India are estimated to be deaf. They can communicate among themselves using sign language, but having a conversation with others is a difficult task. Extensive work has been done on American Sign Language (ASL), and by building on it we want to translate sign language into text and sound and enable these people to have an improved conversational experience. This model allows a natural, innovative, and modern way of non-verbal communication. This model is used to

V. V. Krishna Reddy (B) Department of Information Technology, Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India

K. N. V. S. Bhuvana · K. UmaHarikka · D. Sai Teja · J. Suguna Kumari Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_48


584

V. V. Krishna Reddy et al.

recognize hand gestures from a live video sequence and make real-time predictions for sign language. The main aim of our model is to recognize the gestures and communicate efficiently with deaf and mute people. For gesture recognition we use computer vision and a CNN model. The Convolutional Neural Network model is used to recognize and analyse the image and process the data; a CNN model is used over KNN and SVM to get maximum accuracy. Computer vision is a scientific field concerned with how computers understand the data present in images and videos: it trains computers to extract the features present in images and videos, and to identify and classify the objects they contain. This concept is used in our model to identify the gestures.

2 Literature Survey

Kuo and Kang [1] described recognizing hand gestures using an MMG-based wearable device, mainly for gaming. For recognizing the hand gesture, the authors used KNN, SVM, LDA, and DNN, achieving 94.56% accuracy; however, only eight gestures are recognized. To add more gestures and obtain better accuracy, a convolutional neural network model is used.

Yuan and Liu [2] discussed recognizing hand gestures using a data glove. For recognition, the authors used SFM, DFM, and a deep feature fusion network (DFFN). Only 24 of the 26 English alphabets are considered, and transferring data from the hardware to the software components requires Bluetooth or Wi-Fi, whereas our model does not.

Tham and Heng [3] discussed a wireless glove that recognizes only ten gestures, using Naïve Bayes, Logistic Regression, QDA, SVM, KNN, and XRF classifiers. A Raspberry Pi is required for gesture recognition, and only the ten digits (0–9) can be recognized.

Sanchez-Medina and Ayed [4] discussed Morph-CNN for recognizing digits. This model combines CNNs and morphological filters for identifying gestures, but only numbers, and gives 97% accuracy.

Arsalan and Santra [5] discussed character recognition using Long Short-Term Memory and DCNN networks. The gesture needs to be written on a surface for identification, and an alpha–beta tracking technique is used. The accuracy is good, but only the alphabets A–J and the numerals 1–5 can be recognized.

Al-Hammadi and Muhammad [6] discussed a system for dynamic hand gesture recognition using a CNN. That model checks and recognizes the gesture in nine directions, whereas our model generates nearly 280 images

Hand Gesture Recognition Using Convolutional …

585

for recognizing a gesture. The model in [6] gives 80% accuracy for identifying gestures, and the loss value is nearly 1.5.

Yun et al. [7] determined the level of involvement in a specific task using Convolutional Neural Networks, Artificial Neural Networks, and Support Vector Machines. It classifies the children who are involved in a task and those who are not. Stochastic Gradient Descent is used to optimize the model, and the obtained accuracy is 80%.

Ishi et al. [8] developed a system to analyse hand gestures using motion types and android robots. It shows the level of politeness based on human actions such as the position of the fingers and facial expression: it indicates whether a gesture is polite or harsh, and calculates the duration and speed of the gesture.

Al-Hammadi and Muhammad [9] discussed hand gesture recognition using a 3DCNN and compared three datasets classified into signer-dependent and signer-independent. The signer-independent data has a much lower accuracy of 82%, while the signer-dependent data reaches 97%. Optimizers are not used in that model, but we use them.

Haria and Nayak [10] discussed hand gesture recognition for human–computer interaction using OpenCV and the background subtraction technique. Only six static gestures and one dynamic gesture are used, and the clarity of the background matters: accuracy is 94% if the background is plain, but otherwise only 40%.

3 Proposed System

3.1 System Design

The proposed system automatically recognizes hand gestures using OpenCV and a Convolutional Neural Network. A web camera pre-installed in the system captures the live video sequence, and the required portion of the image is extracted using the background subtraction technique. A CNN architecture is used to develop the model and determine the gesture, and the corresponding text for the gesture is displayed. For better accuracy, adaptive moment estimation (Adam) is used. The workflow of our model is shown in Fig. 1.
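The background subtraction step can be sketched with a running-average background model; a minimal NumPy version (the averaging weight and foreground threshold are illustrative assumptions, not the values used by the authors):

```python
import numpy as np

ALPHA = 0.5        # running-average weight (assumed)
THRESHOLD = 25     # foreground threshold (assumed)

def update_background(background, frame):
    # Accumulate a running average of the static scene
    return (1 - ALPHA) * background + ALPHA * frame

def segment_hand(background, frame):
    # Threshold the absolute difference to get a binary hand mask
    diff = np.abs(frame.astype(np.float32) - background)
    return (diff > THRESHOLD).astype(np.uint8) * 255

# Calibrate on empty frames, then segment a frame containing the "hand"
bg = np.zeros((120, 160), np.float32)
for _ in range(30):
    bg = update_background(bg, np.full((120, 160), 100, np.float32))
hand_frame = np.full((120, 160), 100, np.float32)
hand_frame[40:80, 60:100] = 220          # bright synthetic hand region
mask = segment_hand(bg, hand_frame)      # 255 inside the hand, 0 elsewhere
```

In OpenCV the same steps correspond to `cv2.accumulateWeighted`, `cv2.absdiff`, and `cv2.threshold`, after which the hand contour is extracted from the mask.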


Fig. 1 Workflow of hand gesture recognition

3.2 System Requirements

The libraries used to develop the model are given below.

OpenCV: OpenCV is an open-source library for machine learning, image processing, and computer vision functions, which plays a major role nowadays.

PIL: The Python Imaging Library (PIL) is an open-source library used for modifying images and their formats.

Keras: Keras is a Python library used to develop the convolutional neural network model.

TensorFlow: TensorFlow is an open-source Python library used to build models for classification and prediction.


Scikit-learn: Scikit-learn is a machine learning library used for statistical modelling, including classification and regression.

3.3 Convolutional Neural Network Architecture

CNN is a type of artificial neural network used to extract insights from different orientations of an image; it can also identify images and classify them. It is a layered architecture, and each layer performs its own task. The convolution layer extracts characteristics from an input image. The Rectified Linear Unit (ReLU) is a function used to remove unwanted noise from an image without changing its dimensions. The pooling layer pre-processes and compresses the image; this model uses max-pooling to reduce the dimensions. Flattening transforms the entire pooled matrix into a single column, which is sent to the fully connected layer. The fully connected layer then combines all the parts of the image and performs the classification. The working of CNN is shown in Fig. 2.
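The layer operations described above can be illustrated with a toy NumPy forward pass (a single 3 × 3 convolution, ReLU, and 2 × 2 max-pooling; this sketches the operations, not the trained seven-layer model):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 3x3 convolution: slide the kernel and take dot products
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

def relu(x):
    # Zero out negative responses without changing dimensions
    return np.maximum(x, 0)

def max_pool(x):
    # 2x2 max-pooling halves each spatial dimension
    h, w = x.shape
    return x[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))

image = np.random.rand(10, 10)                                # toy input
edge_kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])  # vertical-edge filter
features = max_pool(relu(conv2d(image, edge_kernel)))         # pooled feature map
flat = features.reshape(-1)   # flattening before the fully connected layer
```

A 10 × 10 input shrinks to 8 × 8 after the valid convolution and to 4 × 4 after pooling, which is exactly the dimensionality reduction the pooling layer is described as providing.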

3.4 Dataset

We created our own dataset of American Sign Language (ASL), special characters, and some basic gestures like All the Best, Thank You, etc., with approximately 15,000 images of 52 static signs. These images are obtained using a data augmentation technique in which a new dataset is created by appending new possible images in different orientations. We split the dataset into two sections: 70% for training and 30% for testing. The images in our dataset are shown in Figs. 3 and 4. In Fig. 4, the gestures added are Delete, Drink, Food, Need, How, Phone, Palm, Like, Walk, When, Where, What, Water, Wrong, Yolo, etc.
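The augmentation idea can be sketched with simple array operations (flips and shifts; the actual Keras transforms are parameterized differently, so these variants are illustrative):

```python
import numpy as np

def augment(image, shift=5):
    # Generate simple orientation/position variants of one gesture image
    return [
        np.fliplr(image),                # horizontal flip
        np.flipud(image),                # vertical flip
        np.roll(image, shift, axis=1),   # shift right
        np.roll(image, -shift, axis=0),  # shift up
    ]

base = np.random.rand(100, 89)       # one pre-processed 100 x 89 gesture image
dataset = [base] + augment(base)     # five images from a single capture
```

Applying a handful of such transforms per capture is how a few hundred raw images grow into a dataset of roughly 15,000.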

Fig. 2 CNN architecture


Fig. 3 Gestures for alphabets

Fig. 4 Gestures for common words



3.5 Implementation

The proposed system can be developed by the following steps:

Step 1: Data Acquisition: The data in American Sign Language is in the form of images representing alphabets and gestures. Here, we segment the hand region using the background subtraction technique, which uses the concept of running averages over a sequence of video frames. We then threshold this image to obtain the contour of the hand region.
1.1 We capture the image through the web camera and load it into a folder.

Step 2: Data Pre-processing: For each gesture we get a threshold image, and all these images are pre-processed using the Keras pre-processing model, which performs the necessary resizing, shifts, flips, and orientations to a certain degree.
2.1 The size of the captured images is uneven, so we resize them to 100 × 89 dimensions.
2.2 All the images are loaded into an array and converted to grey using the COLOR_BGR2GRAY function in the cv2 library.
2.3 We perform image augmentation and create new images from the existing dataset.
2.4 For each image, perform Step 2.3.

Step 3:

Feature Extraction: This is the process of extracting the important characteristics and insights present in an image. Capturing images and storing them in the dataset consumes a large amount of space, so feature extraction helps solve this problem by reducing the data to the important extracted features while maintaining the accuracy of the classifier with less complexity.

Step 4:

Classification: The Convolutional Neural Network (CNN) model extracts the characteristics from each picture and determines the hand gesture. Here, the network consists of one fully connected layer and seven hidden convolution layers, with ReLU as the activation function. The classification is done by training the network for about 50 iterations with a batch size of 64; increasing the iterations beyond 50 produces no change in the validation accuracy. For better accuracy, we used the Adam optimizer with a learning rate of 0.001. The model achieves an accuracy of 99% on the validation dataset.


4.1 We split the data into training and testing datasets with a 7:3 ratio.
4.2 We train the CNN model with the training dataset.
4.3 We run the model using the Adam optimizer and fix the learning rate at 0.001.
4.4 Fix the number of epochs at 50.

Step 5:

Gesture Recognition: This model takes a live sequence of video frames from the web camera, fixes the background of the region of interest, extracts the hand region, and finally outputs a threshold image. This threshold image is given as input to the trained model. The image is processed, and our model gives two outputs: Predicted Class and Confidence. The model takes the maximum-confidence class as the Predicted Class.
5.1 Capture an image from a continuous video sequence.
5.2 Using the CNN model, the gesture is detected.

Step 6:

Prediction: As we have created 52 distinct gestures, the model contains 52 Predicted Classes consisting of alphabets, special characters, and custom gestures. Our model predicts and displays the sign of the corresponding gesture. For better understanding and communication, we have added a feature called text mode, which helps in conveying messages between people using signs with ease. We have also added a text-to-sound feature to make conversations easier.
6.1 Predict the class of an image using the CNN model.
6.2 If the image matches a Predicted Class, display the corresponding letter or word.
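The prediction step, taking the maximum-confidence class over the 52 outputs, reduces to a softmax followed by an argmax; a sketch (the label names and logit values are placeholders):

```python
import numpy as np

CLASSES = 52   # alphabets, special characters, and custom gestures

def predict(logits, labels):
    # Softmax turns raw network outputs into confidences summing to 1
    exp = np.exp(logits - np.max(logits))
    confidence = exp / exp.sum()
    best = int(np.argmax(confidence))      # maximum-confidence class
    return labels[best], float(confidence[best])

labels = [f"class_{i}" for i in range(CLASSES)]  # placeholder label names
logits = np.zeros(CLASSES)
logits[7] = 6.0                                  # network strongly favours class 7
label, conf = predict(logits, labels)
```

In text mode, the returned label is appended to a running string, which can later be passed to a text-to-speech package.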

4 Results and Discussion

After training the model, we obtain the total loss and accuracy as results. The model predicts various signs and gestures accurately. For use in text mode, we can simply save the characters in a string for display, and this stored text can be converted into sound using Python packages. The total loss and accuracy after using SGD, RMSPROP, and ADAM are shown in Figs. 5 and 6, and the respective text for hand gestures is shown in Figs. 7 and 8.


Fig. 5 Total loss for optimizers

Fig. 6 Accuracy for optimizers

5 Conclusion

In this paper, we developed a hand gesture recognition model. From a sequence of live video frames, hand gestures are recognized using Convolutional Neural Networks (CNN). The model gives an accuracy of 99%.


Fig. 7 Example for gesture recognition

Fig. 8 Example for gesture recognition

6 Future Work

Because we are limited to a small set of hand signs, we cannot cover the full range of signs. In future, we can improve this model by adding a greater number of signs and gestures to extend its effectiveness further and make communication more efficient.


References

1. Kuo C-K, Kang C (2020) Hand gesture recognition by a MMG-based wearable device
2. Yuan G, Liu X (2020) Hand gesture recognition using deep feature fusion network based on wearable sensors. IEEE
3. Tham C-K, Heng C-H (2020) A wireless multi-channel capacitive sensor system for efficient glove-based gesture recognition with AI at the edge. IEEE
4. Sanchez-Medina JJ, Ayed MB (2019) Morphological convolutional neural network architecture for digit recognition. IEEE
5. Arsalan M, Santra A (2019) Character recognition in air-writing based on network of radars for human-machine interface. IEEE
6. Al-Hammadi M, Muhammad G (2019) Deep learning-based approach for sign language gesture recognition with efficient hand gesture representation. IEEE
7. Yun W-h, Lee D, Park C (2018) Automatic recognition of children engagement from facial video using convolutional neural networks. IEEE
8. Ishi CA, Mikata R, Ishiguro H (2020) Person-directed pointing gestures and inter-personal relationship: expression of politeness to friendliness by android robots. IEEE
9. Al-Hammadi M, Muhammad G (2020) Hand gesture recognition for sign language using 3DCNN. IEEE
10. Haria A, Nayak JS (2018) Hand gesture recognition for human computer interaction. ScienceDirect
11. Hurroo M, Walizad ME (2020) Sign language recognition system using convolutional neural network and computer vision. IEEE
12. Shull PB, Jiang S (2020) Hand gesture recognition and finger angle estimation via wrist-worn modified barometric pressure sensing. IEEE
13. Aslam SM, Samreen S (2020) Gesture recognition algorithm for visually blind touch interaction optimization using crow search method. IEEE
14. Maragliulo S, De Almedia AT (2020) Foot gesture recognition through dual channel wearable EMG system. IEEE

An Effective Parking Management and Slot Detection System

Saurabh Chandra Pandey, Vinay Kumar Yadav, Rajesh Singh Bohra, and Upendra Kumar Tiwari

Abstract With the increase in population, the demand for vehicles has increased. India, a densely populated country, faces many challenges in managing this increased number of vehicles. The rapid growth in the number of vehicles on the roads has increased the need for parking spaces, which has led to the problem of finding parking in metropolitan cities. Moreover, searching for a slot within a parking area is not only an annoyance and time consuming but also wastes fuel. Therefore, solving the problem of finding a parking area and a parking spot is now an urgent issue, and a system and an application are required for managing parking areas and for pre-booking parking spots, which can play an important role in diminishing congestion on the roads. In this paper, the authors propose a way to help and manage vehicles in parking areas and arrange vehicles in allocated positions. In the vehicle parking and management system, a counter variable is used to check the availability of free parking lots, and a security camera is installed at the entrance gate to allow only authorized vehicles. This is achieved using image processing.

Keywords Vehicle parking · Parking spot detection · Reservation · Machine learning · Image processing · Optical character recognition · Cloud database

1 Introduction

In recent years, newly registered vehicles have been added to our congested metropolitan cities on a large scale. However, parking areas and their facilities have not grown at the same rate, so parking spaces are inadequate to meet the increasing parking demand, especially during rush hours. This results in unplanned

S. C. Pandey (B) · V. K. Yadav · R. S. Bohra · U. K. Tiwari Department of Computer Science and Engineering, ABES Institute of Technology, Affiliated to DR. APJ Abdul Kalam Technical University-Lucknow, Ghaziabad, Uttar Pradesh, India

U. K. Tiwari e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_49


596

S. C. Pandey et al.

parking, which causes traffic jams and congestion on roads. With the evolution of technology, there is a need to develop a system that can help drivers seek and reserve vacant parking lots with ease before they arrive, thus efficiently utilizing time, natural resources, and the free lots in parking areas. Through this project, the authors attempt to solve the issues related to parking: the project aims to help users find available parking spaces in any particular area and reserve them.

1.1 Online Parking Reservation

With the help of our web and mobile applications, users can check for vacant parking spaces in different parking areas before or at the time of their arrival. They are also able to reserve their desired parking slot [1] and pay online. When using our application, users are only required to register by creating an account and filling in some details. Users who park daily are given the option to book a fixed parking lot at a particular time, to save time and money [2]. For optimal spot allocation [3], a reserved parking spot is shown as vacant only 15 min after the exit time of the reservation. If a reserved parking spot is occupied by an unreserved user, this irregularity can be reported and a vacant parking lot is immediately allocated to the user [4]. If no vacant space is available, the user gets a full refund and additional compensation for future parking.
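The reservation rule described above (a reserved spot only reappears as vacant 15 minutes after its exit time) can be sketched as follows; the data model is a hypothetical illustration, not the authors' implementation:

```python
from datetime import datetime, timedelta

BUFFER = timedelta(minutes=15)   # grace period after a reservation's exit time

def is_spot_available(reservations, spot_id, now):
    # A spot is shown vacant only once 15 minutes have passed
    # since the exit time of its reservation.
    for r in reservations:
        if r["spot"] == spot_id and now < r["exit"] + BUFFER:
            return False
    return True

reservations = [{"spot": "A-12", "exit": datetime(2021, 6, 1, 10, 0)}]
# Ten minutes after the exit time the spot is still held; twenty minutes after, it is released
held = is_spot_available(reservations, "A-12", datetime(2021, 6, 1, 10, 10))
free = is_spot_available(reservations, "A-12", datetime(2021, 6, 1, 10, 20))
```

The same check, run against a cloud-hosted reservation table, is what the application would consult before showing a slot as bookable.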

1.2 Payment and Reservation

Payment methods for online and offline users are separate, and prices vary dynamically [5]. Users of our applications can pay online at the time of booking; the amount is calculated from the duration of the booked slot. Offline users can pay in cash, or pay online once they create an account and register; their payment is likewise calculated and charged according to the time for which they book the parking space [6]. For reservation, online users can use the application to look for vacant spaces in a parking area and reserve a slot for a time interval before their arrival [7]. Offline users are allocated parking lots dynamically once they are authorized at the entry gate and their booking duration is known.
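The timing-based charge described above can be sketched as follows. The hourly rate, the per-started-hour rounding, and the function name are illustrative assumptions; the paper specifies only that the fee depends on the booked duration.

```python
from datetime import datetime

def parking_fee(time_in: datetime, time_out: datetime,
                rate_per_hour: float = 20.0) -> float:
    """Charge for the booked interval.

    The hourly rate and the rule of charging every started hour are
    illustrative assumptions, not the paper's actual pricing scheme.
    """
    seconds = (time_out - time_in).total_seconds()
    if seconds <= 0:
        raise ValueError("time_out must be after time_in")
    hours = -(-int(seconds) // 3600)  # ceiling division: each started hour is charged
    return hours * rate_per_hour
```

A 2-hour booking is charged for 2 hours, while a 2.5-hour booking is rounded up and charged for 3 under this assumed rule.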

An Effective Parking Management and Slot Detection System


2 Related Work

Many solutions have been proposed to overcome the parking space problem. Existing solutions are mostly based on a combination of hardware and software technologies: wired sensor-based systems [8], wireless sensor-based (IoT) systems [9, 10], image processing-based systems [11], and counter-based systems (a video sensor-based technology) [12].

Wired sensor technology installs ultrasonic sensors at every parking slot. These sensors are wired to a central system that manages and stores the information they perceive; the stored information is then processed and the outcomes are displayed on panels in the parking area.

To overcome the problems of wired systems, wireless technologies came into play. Wireless (IoT-based) strategies are employed in various parking systems: each parking slot is equipped with wireless sensor nodes fused on a sensor board. These nodes use light, temperature, or sound to perceive the environment and pass the information to the sensor boards. However, the overhead of employing sensors at each parking slot persists, which makes the system costlier, as each sensing unit is connected to a processing unit and a transceiver.

RFID tags [13] are widely used because they can uniquely identify vehicles, but their usage has certain problems. A scanner cannot differentiate between two simultaneous readings, i.e., it may read the same tag twice without being able to tell, which can cause errors; in addition, if multiple scanners are used, collisions can occur.

Image-based, also known as video sensor-based, systems use CCTV cameras to capture images and build datasets in order to detect free lots in parking spaces using deep learning methods and various algorithms [14, 15].
Machines can be given the intelligence to understand digital images and videos. In recent years, with the use of graphics cards and advances in computer hardware performance, the field of Convolutional Neural Networks (CNNs) has drawn great attention. In computer vision terms, a CNN processes images of parking areas and detects the availability of parking slots; CNN models use background subtraction methods to improve the results. The disadvantage of this image-based technique is that the video sensors it uses are energy-expensive and generate large amounts of data that are arduous to transmit over a wireless network.

3 System Model

The model of the proposed Vehicle Parking and Management system is discussed below.

Fig. 1 System working architecture

Step 1: Vehicle arrives.
Step 2: The camera processes the vehicle in front of the gate (authorization) (Fig. 1).
Step 3: The processed image is transmitted to the server for verification.
Step 4: Characters recognized from the license plate are matched against the data already present in the database. If a match is found, the entry gate opens and the vehicle can enter the parking area (online user).
Step 5: If the recognized characters are not found in the database, they are saved as a new user entry and a random empty parking slot is allocated to the user (offline user authorization).
Step 6: An acknowledgement is sent from the server to the gatekeeper.
Step 7: The user enters the parking area.

The system is designed to perform the following functions for security reasons and better user satisfaction.


3.1 Vacant Spot Detection

Initially, all parking spots are available; when a user searches for parking, the system shows all unreserved spots. Each parking spot has a state variable whose initial value is UN_RES, which lets the system detect unreserved spots. Once a reservation is made, that spot's variable changes from UN_RES to RES, indicating that the spot is reserved for that period, and the system no longer shows it to new users. For flexibility, the system also shows how long a spot is reserved. If no parking spot is available, the system suggests the earliest time at which a spot will become unreserved, or simply shows another parking lot in the area for the same time the user entered.

Like the user, the owner/gatekeeper can also check the status of the parking lot. This helps with offline parking: the system asks the offline user how long they want to reserve the parking and, according to the time given, allocates a parking spot. If no parking is available, the system suggests the earliest time a spot will be free, or simply allocates no parking at all. Since a counter rather than a camera is used for space detection, the system is cost-effective and has low maintenance costs, as its hardware requirements are minimal.
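The UN_RES/RES bookkeeping described above can be sketched as a minimal in-memory slot registry. The `Slot` class, helper names, and integer hour units are hypothetical, not the authors' implementation:

```python
from dataclasses import dataclass, field

UN_RES, RES = "UN_RES", "RES"

@dataclass
class Slot:
    slot_id: str
    # (start, end) reservation pairs; integer hour units are an assumption
    bookings: list = field(default_factory=list)

    def status(self, start, end):
        """RES if any existing booking overlaps the queried window."""
        for s, e in self.bookings:
            if s < end and start < e:  # standard interval-overlap test
                return RES
        return UN_RES

def vacant_slots(slots, start, end):
    """Spots shown to the user: only those still UN_RES for the window."""
    return [s.slot_id for s in slots if s.status(start, end) == UN_RES]

def earliest_free(slots, start, end):
    """If nothing is vacant, suggest the earliest booking-end time."""
    ends = [e for s in slots for (_, e) in s.bookings
            if s.status(start, end) == RES]
    return min(ends) if ends else start
```

A slot booked for 9–11 is reported RES for a 10–12 query, hidden from the vacant list, and 11 is suggested as the earliest free time.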

3.2 Image Processing

The Automatic Number Plate Recognition (ANPR) method was invented in the United Kingdom in 1976 [16]. It is an image processing technology used to identify a vehicle from its number plate, and it uses an Optical Character Recognition (OCR) algorithm. The ANPR pipeline captures an image, extracts the license plate from it [17], and recognizes the characters on the extracted plate [18]. The LBP (Local Binary Pattern) algorithm is used in the plate region detection phase, after which the region is sent to the subsequent pipeline phases for processing [19]. To give the best possible chance of finding all the characters on the number plate, binarization and the subsequent phases are carried out multiple times, once for each candidate plate region. Binarization converts an image to black and white. If the image is too bright or too dark, a single binarized image may miss characters on the license plate, so multiple binary images are created.
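The multiple-binarization step can be illustrated with plain NumPy thresholding. The particular threshold values are assumptions; the text says only that several binary images are produced:

```python
import numpy as np

def binarize_multi(gray: np.ndarray, thresholds=(64, 128, 192)):
    """Produce several black-and-white versions of a grayscale plate image.

    A single threshold can wash out characters in over- or under-exposed
    images, so one binary image is made per threshold. The threshold
    values here are illustrative, not the ANPR pipeline's actual choices.
    """
    return [(gray >= t).astype(np.uint8) * 255 for t in thresholds]
```

A mid-gray pixel (value 100) survives the low threshold but not the higher ones, so characters lost in one binary image can still appear in another.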

3.2.1 Edge Detection

Different edge detection algorithms/operators, such as Canny–Deriche, Sobel, Differential, Prewitt, and Roberts cross, can be used to detect edges in an image. This detection phase is responsible only for identifying the possible regions where a license plate may exist in the image [20]. The algorithm tries to find the precise left, right, top, and bottom edges of the license plate in the binarized image; several configurable weights are used to determine which edge makes the most sense. The plate region is later remapped to a standard size in the de-skew stage.
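As an illustration, the Sobel operator named above can be applied with a small hand-rolled convolution. This is a didactic sketch, not the pipeline's actual implementation:

```python
import numpy as np

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Gradient magnitude via the Sobel operator, valid-mode convolution.

    Strong responses mark vertical/horizontal intensity transitions such
    as plate borders; no image library is assumed.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(float)
            gx = (patch * kx).sum()
            gy = (patch * ky).sum()
            out[i, j] = np.hypot(gx, gy)  # gradient magnitude
    return out
```

A uniform region yields zero response, while a sharp vertical black-to-white transition yields a strong one.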

3.2.2 Character Analysis and Segmentation

After the number plate area is detected and extracted from the image, blob detection is used to trace points that differ in brightness or color from the surrounding area, and character analysis is done by finding all the connected blobs [21]. In the segmentation phase, the extracted image is divided into segments for further processing. The line separation method is first applied row-wise (row segmentation): pixel values are summed, and if the resulting sum of a row's pixels is greater than zero, it marks the start of a line. In the same way, line separation is done column-wise (column segmentation), which separates individual characters; separate variables store the segregated characters [22]. Connected Component Analysis (CCA), also called the blob extraction algorithm, is used to find the row and column indices of the license plate area. This algorithm is very useful for automated image analysis, gives good performance, and can be used for both license plate and character segmentation.
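The row/column pixel-sum rule described above can be sketched directly, assuming a binary image with 1 for character pixels and 0 for background:

```python
import numpy as np

def row_segments(binary: np.ndarray):
    """Rows whose pixel sum is greater than zero belong to a text line;
    consecutive such rows are grouped into (start, end) index pairs."""
    active = binary.sum(axis=1) > 0
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i            # sum turned positive: a line starts
        elif not on and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

def column_segments(binary: np.ndarray):
    """The same scan column-wise, which separates individual characters."""
    return row_segments(binary.T)
```

On a tiny plate image with two one-pixel-wide "characters", the row scan finds one text line and the column scan isolates each character.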

3.2.3 Optical Character Recognition

OCR is a common technology that lets machines recognize the text inside images, such as photos and scanned documents. It can convert virtually any kind of image containing text (typed, handwritten, or printed) into machine-encoded text data. In this phase, a correlation method analyzes and matches each segregated character separately, and the identified characters are stored in a string variable [23]. For authorization, the value stored in the database is compared with the resulting string (Fig. 2).
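The correlation step can be sketched as template matching over equally sized character glyphs. The normalized-correlation score and the template dictionary are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def correlate(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation between two same-sized binary glyphs
    (1.0 = identical shape after mean-centering)."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recognize(glyph: np.ndarray, templates: dict) -> str:
    """Pick the template character with the highest correlation score."""
    return max(templates, key=lambda ch: correlate(glyph, templates[ch]))
```

Each segmented character is scored against every stored template, and the best-scoring labels are concatenated into the plate string used for the database comparison.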


Fig. 2 Number plate recognition model

3.3 Assignment and Reservation

The system assigns parking spots optimally [24] using a dynamic allocation strategy. Because spot allocation is fragmented by time, two users can collide over the same parking spot: the user who parked first may not leave the spot on time while it is already booked for another user for that period. To avoid such irregularities, the system does not allocate the spot to the second user within 15 min of the first user's time out, i.e., the second user's time_in must be at least 15 min after the first user's time_out.
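The 15-minute buffer rule can be expressed as an interval check. The function and variable names are illustrative:

```python
from datetime import datetime, timedelta

BUFFER = timedelta(minutes=15)

def conflicts(existing, new_in, new_out):
    """True if the requested interval violates the 15-minute buffer
    around any existing booking on the same spot."""
    for t_in, t_out in existing:
        # overlap test against the booking padded by the buffer
        if t_in - BUFFER < new_out and new_in < t_out + BUFFER:
            return True
    return False
```

A request starting exactly 15 minutes after an existing booking's time_out is accepted; one starting only 10 minutes after it is rejected.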


Fig. 3 Use case diagram for online users

3.4 Use Case Diagrams of the Proposed System

See Figs. 3 and 4.

4 User Scenarios and End Application

In this section, the authors describe the end-user application and various user scenarios that demonstrate the functionality of the proposed system. The main aim of this project is to provide users with a system and an application that make it easy and quick to look for a parking space and choose a slot of their own choice at a given time and location. The application starts with a location-finding screen where users enter the location where they want to check for parking. The next screen shows all the parking areas available at that location, together with the total vacant spaces available and the parking price for the date and time the user entered for the reservation.


Fig. 4 Use case diagram for offline users

After the user chooses a parking area, they can look for vacant spots, reserve one, and park their vehicle. A reservation is completed only when the user registers by creating an account and makes the payment at the time of reservation. After the reservation succeeds, the user can check their booking status (Figs. 5, 6, 7 and 8). The authors have tried to design a system that is cheap, easy to implement, and convenient for both online (reserved parking) and offline (non-reserved parking) users. Offline users are asked to scan a QR code at the entry gate with their smartphones, which takes them directly to the web application, where they can register to ease future parking.


Fig. 5 Demonstration of the user interface

Fig. 6 Demonstration of the user interface


Fig. 7 Demonstration of the user interface

5 Result and Conclusion

In this paper, an efficient, low-cost, and convenient system for the reservation and management of parking areas is designed and developed using image processing technology. The system gives an accurate count of vacant spots in a parking area under any conditions and allows users to reserve a spot of their own choice. An incentive-based violation reporting option provides extra user satisfaction. Since the system is software-based, it does not require high maintenance costs and is more fault-tolerant; cameras at the entry and exit are used for authorization purposes only. In the future, the authors want to enhance the system with dynamic allocation of vehicles so that motorbikes can be parked alongside four-wheelers to utilize parking space more optimally. Along with this enhancement, dynamic pricing schemes can be enacted so that real-time parking rates are adjusted according to various factors, including predicted occupancy, once historical occupancy information is gathered.


Fig. 8 Demonstration of the user interface

References

1. Liu W, Yang H, Yin Y (2014) Expirable parking reservations for managing morning commute with parking space constraints. Transp Res Part C Emerg Technol 44:185–201
2. Rashid MM et al (2012) Automatic parking management system and parking fee collection based on number plate recognition. Int J Mach Learn Comput 2(2):94
3. Geng Y, Cassandras CG (2013) New "smart parking" system based on resource allocation and reservations. IEEE Trans Intell Transp Syst 14(3):1129–1139
4. Ma S et al (2017) Research on automatic parking systems based on parking scene recognition. IEEE Access 5:21901–21917


5. Lin KY (2004) A sequential dynamic pricing model and its applications. Naval Res Logistics (NRL) 51(4):501–521
6. Wang H, He W (2011) A reservation-based smart parking system. In: 2011 IEEE conference on computer communications workshops (INFOCOM WKSHPS), IEEE
7. Yimyam W, Ketcham M (2017) The automated parking fee calculation using license plate recognition system. In: 2017 international conference on digital arts, media and technology (ICDAMT), IEEE
8. Shaikh FI et al (2016) Smart parking system based on embedded system and sensor network. Int J Comput Appl 140(12)
9. Taherkhani MA et al (2016) Blueparking: an IoT based parking reservation service for smart cities. In: Proceedings of the second international conference on IoT in urban space
10. Souissi R et al (2011) A parking management system using wireless sensor networks. In: ICM 2011 proceedings, IEEE
11. Idris MY et al (2009) Smart parking system using image processing techniques. J Inf Technol 114–127
12. Duan B et al (2009) Real-time on-road vehicle and motorcycle detection using a single camera. In: 2009 IEEE international conference on industrial technology, IEEE
13. Kannadasan R et al (2016) RFID based automatic parking system. Aust J Basic Appl Sci 10(2):186–191
14. Dow C-R et al. An advising system for parking using Canny and KNN techniques
15. Nyambal J, Klein R (2017) Automated parking space detection using convolutional neural networks. In: 2017 pattern recognition association of South Africa and robotics and mechatronics (PRASA-RobMech), IEEE
16. Patel C, Shah D, Patel A (2013) Automatic number plate recognition system (ANPR): a survey. Int J Comput Appl 69(9)
17. Komarudin A, Satria AT, Atmadja W (2015) Designing license plate identification through digital images with OpenCV. Procedia Comput Sci 59:468–472
18. Kumar M, Singh YG (2009) A real-time vehicle license plate recognition (LPR) system. Dissertation
19. Chen X, Qi C (2011) A super-resolution method for recognition of license plate character using LBP and RBF. In: 2011 IEEE international workshop on machine learning for signal processing, IEEE
20. Chong J, Tianhua C, Linhao J (2013) License plate recognition based on edge detection algorithm. In: 2013 ninth international conference on intelligent information hiding and multimedia signal processing, IEEE
21. Yoon Y et al (2011) Blob extraction based character segmentation method for automatic license plate recognition system. In: 2011 IEEE international conference on systems, man, and cybernetics, IEEE
22. Zhang Y, Zhang C (2003) A new algorithm for character segmentation of license plate. In: IEEE IV2003 intelligent vehicles symposium proceedings (Cat. No. 03TH8683), IEEE
23. Kaur E, Banga VK. Number plate recognition using OCR technique. Int J Res Eng Technol 2(09):286–290
24. Arellano-Verdejo J, Alba E (2016) Optimal allocation of public parking slots using evolutionary algorithms. In: 2016 international conference on intelligent networking and collaborative systems (INCoS), IEEE

Survey on Natural Language-Based Person Search

Snehal Sarangi and Jitendra Kumar Rout

Abstract Website knowledge bases (KBs) contain an enormous amount of information. Humans can easily handle high-complexity questions, such as those based on real scenes and object classes, but a Question Answering (QA) system needs to be trained to do so. Several QA systems have been built over KBs to make these data available; querying both structured and unstructured data is an important aspect. Constructing a QA system that solves many different problems over KBs is challenging, and QA developers usually combine NLP, data acquisition, machine intelligence, and the Semantic Web to address these challenges. The need to query information content in various formats, including structured and unstructured data (natural language text, semi-structured Web documents, structured RDF data on the Semantic Web, etc.), has become increasingly important, and Question Answering Systems (QAS) are essential to satisfy this need. In this paper, we survey various QAS, with statistics, and analyze their results. The survey improves clarity and helps researchers choose the appropriate solution to their problem; researchers can adapt or reuse QAS techniques for specific research issues. The survey covers different approaches to person search using an interactive question-response process.

Keywords Natural language processing · QA system · Image processing · Data mining · Knowledge base

1 Introduction

The role of image captioning, at the crossroads of machine vision and natural language processing, is to describe the details of an image in natural language. As these two research areas are becoming more successful and several recent developments

S. Sarangi · J. K. Rout (B)
School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_50


have taken place, image captioning has naturally grown with them. Improved neural network and computer vision architectures have contributed to better captioning solutions; likewise, more complex attention-based architectures have enabled more detailed caption generation, such as recurrent neural networks with attention mechanisms. Many conventional image captioning methods use an encoder-decoder paradigm inspired by neural machine translation, in which the input image is encoded into an intermediate representation and then decoded into a descriptive text sequence [1]. The representation may consist of a single CNN output feature vector or multiple visual features acquired from various regions within the image; object detectors have consistently improved the sampling or guidance of these regions [2].

Caption generation is a fascinating artificial intelligence topic in which a description is produced for a given picture. To understand image contents, both machine vision methods and a natural language processing model are needed to translate the image into words in the correct order. Image captioning has various uses, such as editing instructions for devices, virtual assistants, image indexing, aids for people with visual disabilities, social media, and more.

The rapid growth of large database collections and the effective use of the Internet encourage researchers, but discovering knowledge in this tremendous amount of information is complex and time-consuming. This challenge led to newly adapted research methods such as question answering. Such a method allows the user to ask a question in natural language and receive a correct answer, rather than an array of supposedly relevant documents as with a search engine.

Searching for a person in a database from a free-form natural language description is a computer vision problem with a wide variety of video monitoring and behavioral applications. Nowadays, metropolitan centers often have thousands of cameras that produce gigabytes of video data per second. Manually investigating alleged criminals in such recordings can take days or even months, whereas automated person search can be performed immediately. Modern person search methods are typically divided, by query type, into image-based and attribute-based queries; however, both approaches are fragile and may not be appropriate for practical use. This paper surveys various methods and the findings and quality of recent articles.

The rest of the paper is organized as follows: Sect. 2 describes what a Question Answering System is, Sect. 3 describes the dataset, Sect. 4 deals with visual QA systems, and Sect. 5 describes visual-semantic embedding, followed by the conclusion in Sect. 6.

2 Question Answering System

In the literature, a Question Answering System is defined as follows: "For human–computer interaction, natural language is the best information access mechanism for humans. Hence, Question Answering Systems (QAS) have special significance and advantages over search engines and are considered the ultimate


goal of semantic Web research for user information needs". Question answering on the Web moves beyond the stage where users type a query and retrieve a ranked ordering of relevant Web pages: users and analysts want targeted answers to their questions without extraneous information.

2.1 Components of a Question Answering System

Figure 1 shows three distinct modules: a Query Processing module, which classifies the questions; a Document Processing module, which helps retrieve information; and an Answer Processing module, which extracts related answers. The Query/Question Processing module focuses on questions, classifying them and reformulating them into related or similar questions, known as query expansion. This retrieval of information is necessary because, if no correct answer is present in the system, further processing is stopped.

Although there are many databases and methods for searching for individuals in natural language, we review the language datasets of different vision tasks and the deep models that can be applied to this problem.

Vision-language datasets: early vision-language datasets include Flickr8K and Flickr30K. The large MS-COCO captioning dataset was chosen as motivating data, with 164,062 MS-COCO images and five sentences identified for each image

Fig. 1 Question answering system architecture (Question Processing: classification, reformulation; Document Processing: data retrieval, data filtering, analyzing, data ordering; Answer Processing: identification, extraction, validation)


with individual labels. Since people are not the primary object class in these datasets, their language descriptions should not be used to train person search algorithms. A summary of different methods and related datasets is given in Table 1.

Deep vision-language models: recurrent neural networks are more effective at processing sequence data, while convolutional neural networks are more effective at image recognition and object detection. Many deep models have been proposed for vision-language tasks.

Table 1 Summary of natural language-based person search with different methods and related datasets

Farhadi et al. (2010) [6]
Methods: Three spaces, namely the image space, the meaning space, and the corresponding sentence space of the image, are described in this system
Dataset: PASCAL 2008
Conclusion: The model has some limitations, including the spatial limitation of the definition, and the results are not exact

Vinyals et al. (2017) [7]
Methods: The authors developed a new approach using a CNN and an RNN for image captioning. A convolutional neural network extracts features from the pictures; the CNN thus serves as an encoder for the classification task, and its last-layer output is given as input to the RNN. The RNN acts as a sentence-generating decoder, implemented as an LSTM (Long Short-Term Memory) network. A model was also proposed that trains detectors to extract various features from the images, with a translation model trained over the captions to provide adequate details for the picture
Dataset: Flickr8k (train 6000, valid 1000, test 1000); MSCOCO (train 82,783, valid 40,504, test 40,775); SBU (train 1 million)
Conclusion: Even when noise or unfavorable conditions are present in images, not all elements but only the essential and specific characteristics are integrated with the RNN, compared to conventional methods

(continued)


Table 1 (continued)

Lu et al. (2017) [8]
Methods: The proposed model concentrates on visual cues and language patterns when creating picture captions. The model decides when the visual signal should focus on the most prominent features in the image; the information is stored in the decoder for short-term and long-term use
Dataset: Flickr30k (31,783 images); COCO dataset (5000 images for testing)
Conclusion: A higher-quality output was achieved with visual attention as the basis of the proposal; most image and visual question answering models attend to the image at all times

Techniques that combine language and images in a shared space for image recognition and retrieval were planned as visual-semantic embedding. Finally, a CNN-RNN model was designed to integrate images with fine-grained visual details into the same space for zero-shot analysis. Text-to-image retrieval can then be performed by measuring distances within the embedding space.

3 Dataset

The dataset contains rich and comprehensive annotations with open-vocabulary descriptions. There were 1993 individual workers in the labeling operation, all with acceptance rates above 95%. Workers were asked to use sentences of 15 or more words to describe all the relevant characteristics in the images. The descriptions in this dataset come from a large number of workers, so the dataset is not restricted to the explanations of only a few workers. The vocabulary, sentence sizes, and phrasing are strong evidence of the capability of this language dataset. In total there are 1,893,118 individual words and a vocabulary of 9408 unique words. The longest sentence consists of 96 words, and the average sentence length is 23.5 words, compared with 5.18 for MS-COCO sub-phrases and 10.45 for the Visual Genome. Most sentences have a length of 20–40 words.


3.1 Comparison of Image Caption Methods

Vinyals et al. [3] and Karpathy and Fei-Fei [4] proposed generating descriptive natural sentences for pictures with recurrent frameworks. We use the code provided by the authors to train the captioning model. In Ref. [5], an image captioning method is employed for text-to-image retrieval at test time: instead of feeding each generated word back as input to the next step to infer the picture description, the LSTM takes a query sentence word by word for an individual image. Cross-entropy losses are computed between each input word and the word predicted by the LSTM. Matching sentence-picture pairs yield smaller average losses, while mismatched pairs yield higher losses.
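The loss-based matching described above can be sketched with stand-in per-step word distributions in place of the LSTM's softmax outputs; all names here are illustrative:

```python
import numpy as np

def caption_loss(word_probs, sentence, vocab):
    """Average cross-entropy of a sentence under per-step word
    distributions (stand-ins for an LSTM's softmax outputs).
    A lower loss means the sentence matches the image better.
    """
    idx = [vocab[w] for w in sentence]
    losses = [-np.log(word_probs[t][i]) for t, i in enumerate(idx)]
    return float(np.mean(losses))
```

With distributions that favor "a car", the matching caption scores a lower loss than the mismatched "a dog", which is the ranking signal used for retrieval.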

4 Visual QA

Question features and image features are combined for joint classification. Since the proposed GNA-RNN has only a single LSTM layer, its LSTM is changed for a fair comparison against the deeper LSTM-Q+norm baseline. The way query and image features are obtained may also influence classification efficiency. The QAWord model blends image features with sentence features generated by the LSTM. A comparative study of different visual QA methods is shown in Table 2.

5 Visual-Semantic Embedding

These approaches aim to map image and sentence features into a joint embedding space. The shared space can then relate image and sentence features: corresponding sentence-image pairs should be minimally distant, which enables zero-shot text-to-image retrieval. After reviewing the current research, several observations follow.
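Distance-based retrieval in a joint embedding space can be sketched with cosine similarity; the embeddings below are stand-ins for learned image and sentence features:

```python
import numpy as np

def retrieve(text_emb: np.ndarray, image_embs: np.ndarray) -> int:
    """Text-to-image retrieval in a joint embedding space: return the
    index of the gallery image whose embedding is closest (by cosine
    similarity) to the query sentence embedding."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    return int(np.argmax(imgs @ t))  # highest cosine = smallest angle
```

In a real system, `text_emb` and `image_embs` would come from the trained sentence and image encoders; here a toy query closest in direction to the second gallery vector retrieves index 1.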

6 Conclusion

This survey of question answering systems over different databases offers a related service to the natural language processing community. The paper is summarized according to various databases and integrated to aid understanding of work on QA systems in natural language processing. One of the essential qualities of QA systems is their ability to respond correctly when queried from many


Table 2 Comparative study of different methods

Kafle and Kanan (2016) [9]
Method: A Bayesian VQA approach in which a query is prepared and applied to the answer type. The possible answer types vary across the databases considered; for example, four COCO-QA answer types are used: object, color, count, and location. The original image features are used in this model
Dataset: DAQUAR (795 train, 654 test); COCO-QA (82,783 train, 40,504 test); COCO-VQA (82,783 train, 40,504 validation, 81,434 test)
Conclusion: The authors implemented simple VQA baselines that feed image or query features into a logistic regression classifier, pass all image and question features to the logistic regressor, or feed a multi-layer perceptron with the same features. The accuracy is 45.17%

Malinowski and Fritz (2014) [10]
Method: This paper provides an opportunity to pose a question. To model ambiguity in segmentation and class labeling, the authors extend the SWQA paradigm
Dataset: DAQUAR (795 training and 654 test images)
Conclusion: This style is called MWQA (Multi-World Question Answering). The accuracy for SWQA and MWQA is 6.96 and 12.73 percent, respectively

Ren et al. (2015) [11]
Method: The Vis+LSTM model parallels the AYN model. It uses the final layer of VGGNet to obtain the image encoding and an LSTM to encode the query. Contrary to the previous model, the first "word" for this LSTM network is the encoded picture, followed by the query. The LSTM output is fully connected to a softmax layer. A 2Vis+BLSTM version with a bidirectional LSTM is also included
Dataset: COCO-QA (78,736 train, 38,948 test)
Conclusion: For the backward LSTM, the encoded image is again the first entry. Both LSTMs are combined and fed into a fully connected softmax layer. The precision is 34.41%

(continued)

616

S. Sarangi and J. K. Rout

Table 2 (continued) Author (year)

Method

Dataset

Conclusion

Lu et al. (2016) [12]

To achieve finer-grained visual information for predicting the response, Stacked Attention Network (SAN) models measure the attention to the image repeatedly. Although this is done word by word in the previous model, this model initially codes the entire query for both LSTM and CNN. To use the encoding of this search to process the image in the same equation as before. The weighted target picture is then coupled with the demand encryption and is then re-measured by the first picture

VQA Dataset The collection of knowledge comprises 248,349 questions, 121,512 questions about validation, 244,302 study questions and 6,141,630 questions for responses

Accuracy is 45.50%

sources. The classification of existing literature in the QA system field is highlighted with evaluating trends in natural language processing. The paper identified the different datasets and methods used to resolve the VQA topic in this paper. In terms of multiword responses and the kinds of questions posed, the system also defined some of the difficulties VQA systems face.

References

1. Frome A, Corrado GS, Shlens J, Bengio S, Dean J, Mikolov T (2013) DeViSE: a deep visual-semantic embedding model. In: NIPS, pp 2121–2129
2. Hu R, Xu H, Rohrbach M, Feng J, Saenko K, Darrell T (2016) Natural language object retrieval. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), Las Vegas, NV, pp 4555–4564. https://doi.org/10.1109/CVPR.2016.493
3. Vinyals O, Toshev A, Bengio S, Erhan D (2015) Show and tell: a neural image caption generator. In: CVPR, pp 3156–3164
4. Karpathy A, Fei-Fei L (2015) Deep visual-semantic alignments for generating image descriptions. In: CVPR, pp 3128–3137
5. Coyne B, Sproat R (2001) WordsEye: an automatic text-to-scene conversion system. In: Proceedings of the 28th annual conference on computer graphics and interactive techniques (SIGGRAPH '01). Association for Computing Machinery, New York, NY, USA, pp 487–496. https://doi.org/10.1145/383259.383316


6. Farhadi A et al (2010) Every picture tells a story: generating sentences from images. In: Daniilidis K, Maragos P, Paragios N (eds) Computer vision—ECCV 2010. Lecture notes in computer science, vol 6314. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15561-1_2
7. Vinyals O, Toshev A, Bengio S, Erhan D (2017) Show and tell: lessons learned from the 2015 MSCOCO image captioning challenge. IEEE Trans Pattern Anal Mach Intell 39(4):652–663
8. Lu J, Xiong C, Parikh D, Socher R (2017) Knowing when to look: adaptive attention via a visual sentinel for image captioning. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), pp 3242–3250. https://doi.org/10.1109/CVPR.2017.345
9. Kafle K, Kanan C (2016) Answer-type prediction for visual question answering. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp 4976–4984. https://doi.org/10.1109/CVPR.2016.538
10. Malinowski M, Fritz M (2014) A multi-world approach to question answering about real-world scenes based on uncertain input. In: Proceedings of the 27th international conference on neural information processing systems (NIPS'14), vol 1. MIT Press, Cambridge, MA, USA, pp 1682–1690
11. Ren M, Kiros R, Zemel RS (2015) Exploring models and data for image question answering. In: Proceedings of the 28th international conference on neural information processing systems, vol 2. MIT Press, Cambridge, MA, USA, pp 2953–2961
12. Lu J, Yang J, Batra D, Parikh D (2016) Hierarchical question-image co-attention for visual question answering. In: Proceedings of the 30th international conference on neural information processing systems. Curran Associates Inc., Red Hook, NY, USA, pp 289–297

Optimization of Cloning in Clock Gating Cells for High-Performance Clock Networks

Mohammed Vazeer Ahmed and B. S. Kariyappa

Abstract In VLSI circuits, power consumption has been a major concern for the usability and reliability of semiconductor products, particularly with the increasing use of portable devices such as smartphones in recent years. The clock tree consumes up to 45% of system power, making it a substantial source of dynamic power consumption. Clock gating is a commonly used technique for reducing switching power. Clock gating cells (CGCs) are introduced by the designer in the register transfer logic (RTL) of the design. These CGCs are cloned at the synthesis stage to obtain predictable timing closure, which leaves many CGCs with low fanout. In this paper, a method is proposed for reducing the count of cloned CGCs with low fanout, as CGCs with low fanout do not contribute much to power savings. Hence, it must be ensured that an inserted clock gate gates off at least a minimum number of registers in order to save a significant amount of active power. The resulting clock gating methodology shows significant improvements in the reduction of CGC count, which improves the overall dynamic power savings and fixes the low-fanout issue of CGCs. The CGC count is reduced by 87.19% at the synthesis stage and by 88.71% after the place and route (PnR) stage, of which 98.31% of CGCs were of low fanout. This resulted in a reduction in total power consumption of 84.41%.

Keywords Clock gating cells · Register transfer logic · Synthesis · Fanout · Place and route

1 Introduction

Electronic circuits are built from semiconductor devices. Power consumption in VLSI has become a serious challenge for semiconductor device reliability as feature sizes are reduced and clock frequencies are increased in integrated digital

M. V. Ahmed (B) · B. S. Kariyappa
Department of ECE, RV College of Engineering, Bengaluru 560059, India
B. S. Kariyappa e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_51


circuits. Various architectural solutions have been introduced and used in modern LSI design to reduce power dissipation and chip area. Power gating and clock gating are widely used methodologies for dynamic and leakage power savings in complex digital designs. In this paper, clock gating for dynamic power savings and various trade-offs between dynamic power savings, fanout limitations, and balancing of clock tree structure are discussed.
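The dynamic power savings discussed above follow from the standard CMOS switching-power relation; the sketch below makes this concrete. All numeric values (capacitance, supply voltage, frequency, and activity factors) are invented for illustration and are not taken from this paper.

```python
# Back-of-the-envelope illustration of why clock gating saves dynamic
# power. P_dyn = a * C * V^2 * f for switching activity a, capacitance C,
# supply V, and frequency f; gating lowers the effective activity on the
# gated subtree. All numbers here are hypothetical.

def dynamic_power(alpha, cap_f, vdd, freq_hz):
    """Average dynamic switching power in watts."""
    return alpha * cap_f * vdd ** 2 * freq_hz

# Hypothetical clock subtree: 50 pF, 0.9 V, 1 GHz.
P_ungated = dynamic_power(alpha=1.0, cap_f=50e-12, vdd=0.9, freq_hz=1e9)
P_gated = dynamic_power(alpha=0.2, cap_f=50e-12, vdd=0.9, freq_hz=1e9)
saving_pct = 100.0 * (1.0 - P_gated / P_ungated)
print(round(saving_pct, 1))  # 80.0
```

Gating a subtree for 80% of cycles removes roughly 80% of that subtree's switching power, which is why the clock tree is such a productive target for optimization.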

2 Previous Work

The overall clock tree switching power is reduced by 10% using a clock gating mechanism. The timing implications of the proposed gating scheme are examined, power savings in a gated clock tree are described with gated-power expressions, and the optimum gated fanout is computed using flip-flop and process technology parameters [1].

Since several clock sources are used in the design, clock latency and skew are reduced, resulting in fewer buffers being needed to fix hold timing. Latency, skew, and power consumption are reduced by 28%, 13%, and 32.8%, respectively, compared to a typical clock distribution network [2].

An approach is proposed for improving clock gates to make clock tree synthesis easier. Fanout cloning and redistribution among existing comparable clock gates are possible with this approach. To geographically partition the design's registers, the approach uses the k-means clustering algorithm. This improves the quality, power, and local skew distribution of the clock tree [3].

The design of two register-based random access memories (RAMs), one with clock gating and the other without, is proposed. The memory's dynamic power was reduced by 25–70% and its total power by 15–32%, owing to the use of clock gating at the register level [4].

In [5], a serial peripheral interface (SPI) design is shown with simulation results for several methods of clock gating at different hierarchical levels. Complex clock gating approaches can save between 30% and 36% on toggle rate compared to no clock gating in the design.

Problem Statement
Optimize the cloned clock gating cells with low fanout, which add more logic to the design and are inefficient in terms of power savings.

Solution
To address the above concerns, a commonly used methodology for clock gate insertion is to insert the clock gates while fixing a lower limit on the number of registers to be gated and fixing upper and lower fanout limits, so that a balanced clock tree is achieved with adequate dynamic power savings.

Fig. 1 Integrated clock gating cell [3]

3 Clock Gating

The dynamic switching of the clock network typically accounts for 30–40% of the overall power dissipation of a modern VLSI design. Clock gating is a popular approach for minimizing dynamic power dissipation [6] in synchronous circuits: the clock signal is cut off while the circuitry is not in use. Clock gating saves power by pruning the clock tree, although it adds logic to the circuit; it remains a cost-effective way to reduce dynamic power in a design. Low-power modes are achieved by placing clock gates in the device and enabling them in different combinations. Clock gating cells are used in almost every complex design. Without clock gating, the clock is highly active, which results in large dynamic power consumption; with clock gating, the clock is kept inactive [7] while the circuitry is not in use, which saves dynamic power.

The issue with clock gating using a plain AND gate is that the circuit may glitch. To avoid this glitch, the integrated clock gating (ICG) concept emerged. Clock gating integrated cells (CGICs) use the enable signal in the design. If CGICs are connected in the clock path, a new circuit emerges and a new timing path, called the clock gating path group, is created [8]. The CGIC is essential for low-power designs since it saves a lot of dynamic power. Figure 1 depicts the circuit design for a clock gating cell using a latch [3]. If the enable EN of the CGIC is logic-1, the clock is passed to the output without any glitch; if EN is logic-0, the clock is gated off at the output. Thus the dynamic power consumed by the circuit is saved.
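The latch-plus-AND structure of Fig. 1 can be simulated behaviorally as follows. This is an illustrative two-phase sketch, not a gate-level model of any particular library cell, and the waveform values are invented.

```python
# Behavioral sketch of a latch-based integrated clock gating (ICG) cell:
# an active-low latch on EN followed by an AND gate with the clock.

class ICGCell:
    def __init__(self):
        self.latched_en = 0

    def step(self, clk, en):
        if clk == 0:
            # Latch is transparent while the clock is low ...
            self.latched_en = en
        # ... and opaque while the clock is high, so EN changes during
        # the high phase cannot glitch the gated clock output.
        return clk & self.latched_en

icg = ICGCell()
clk_wave = [0, 1, 0, 1, 0, 1]
en_wave  = [1, 1, 0, 0, 1, 1]  # enable toggles between cycles
gclk = [icg.step(c, e) for c, e in zip(clk_wave, en_wave)]
print(gclk)  # [0, 1, 0, 0, 0, 1]
```

Clock pulses appear at the output only in cycles where EN was high when the latch last sampled it, which is the glitch-free behavior the ICG cell exists to guarantee.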

4 Proposed Methodology

Figure 2 shows a flowchart depicting the flow of the work. The initial phase is to mine data from clock gating cells (CGCs) in different digital signal processor (DSP) cores. Data mining includes identifying various parameters of these CGCs, such as fanout, fanin, power, slack, and skew. From the analysis, it was found that a large number of CGCs are cloned at the synthesis stage of the design. Cloning these CGCs complicates clock tree balancing and introduces extra logic into the architecture via their enable logic cones. The main cause of the heavy cloning of CGCs was examined, and it was found that CGCs with low fanout were adding more logic to the design while having no impact on power savings. An optimal recipe for fixing clock gating cells is then determined. The final step is fixing these low-fanout cloned CGCs by changing the clock gating styles and honoring gating optimization.

Fig. 2 Methodology depicting the flow of work
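The mining-and-filtering step can be illustrated with a toy sketch. The CGC instance names, fanout values, and threshold below are hypothetical, not taken from the paper's DSP cores or tool flow.

```python
# Hypothetical sketch of the data-mining step: given per-CGC fanout data
# mined from netlist reports, flag the low-fanout clock gates that add
# enable logic without contributing much to power savings.

cgc_fanout = {              # CGC instance name -> number of gated registers
    "cgc_core0_u1": 1,
    "cgc_core0_u2": 2,
    "cgc_core1_u7": 48,
    "cgc_core2_u3": 1,
}
MIN_FANOUT = 4              # assumed lower limit for a "useful" clock gate

low_fanout = sorted(name for name, f in cgc_fanout.items() if f < MIN_FANOUT)
print(low_fanout)  # ['cgc_core0_u1', 'cgc_core0_u2', 'cgc_core2_u3']
```

Gates surviving the filter keep their dedicated CGC; the flagged ones become candidates for the decloning and merging steps described next.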

4.1 Technique for Inserting CGCs

The set_clock_gating_style command is used to insert clock gating cells in the design. This command specifies the clock gating style to be used for clock gating with the compile_ultra -gate_clock and replace_clock_gates commands [9]. Load-enable registers can also be implemented using clock gating of register banks. Power consumption on the clock tree, in the registers, and in the combinational logic is lowered compared to the typical implementation employing recirculating feedback and multiplexers.

The read_verilog command reads design data from Verilog files and stores it in memory. The netlist reader (read with the -netlist option) is automatically invoked if the automatic mode is enabled; if the input contains RTL constructs, the tool automatically invokes the RTL reader (read with the -rtl option) (Fig. 3).

The create_clock command generates a clock object in the current design, declaring the given source objects as clock sources. A single clock can be supplied through a pin or a port. If no source object is given but a clock name is provided, a virtual clock is generated [10], so it is better to use create_clock with a source object in order to avoid virtual clocks in the design.

insert_clock_gating is a command that executes clock gating on a properly constructed netlist. The -gate_clock option is used with the compile or compile_ultra command to insert or optimize clock gating on a mapped netlist. The compilation of these commands then incorporates CGCs into the design.

Fig. 3 Clock gating cells insertion in the design


4.2 Decloning of CGCs

After CGCs are added to the design at the RTL stage, they are cloned during the synthesis stage, resulting in a large number of CGCs in the later stages of the design. This cloning of CGCs adds more logic to the design, impacting its area.

The first step is identification of CGCs in the design. The identify_clock_gating command identifies the clock gating circuitry in the structural netlist that was added by the Design Compiler tool. Identification of gated clocks refers to identifying clock gates and the corresponding gated element associations, as well as assigning various attributes to these objects; these attributes make the tool's later stages flow efficiently. The identify_clock_gating command assumes that the only non-sequential cells in the clock network are single-output cells (inverters or buffers). If it finds a combinational cell with multiple outputs, it stops the traversal, with the exception of pad cells. After the manual identification process is done, the command also performs a design-scoped identification process to search for additional clock gates, if any are present in the design.

The next step is to reduce the number of stages of clock gates feeding the various register banks by setting power_cg_balance to true. This reconfigures the stages of clock gates so that each path from the clock root to the clock pin of every gate-able register bank contains exactly the same number of clock gates.

After reducing the clock gates, identifying further clock gates with low fanout is a major challenge. The clock gates with low fanout can be filtered out by set_clock_gating_style with the -max_fanout option; together with the -minimum_bitwidth option, this removes the CGCs with low fanout.

The merge_clock_gate command analyzes clock gates at each level of hierarchy in the current design and replaces each set of compatible clock gates with a single clock gate. Two clock gates are considered compatible if all their input signals are logically equivalent; merging is not done across the hierarchy. After merging, the merge_clock_gate command cleans up the design to remove any unused combinational logic (Fig. 4).

The compile_ultra command performs a high-effort compilation on the current design for a better quality of results (QoR). This command is intended for high-performance designs that must meet strict timing constraints, and it provides a simple method for achieving substantial delay reduction. Finally, report_clock_gating gives detailed information about the number of gating elements in the design, together with the total numbers of gated and non-gated registers. The design is then sent to the place and route stage.
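The merge rule described above (gates are compatible when their enable inputs are logically equivalent, and merging stays within one hierarchy level) can be sketched as follows. This is an illustration of the idea, not the Design Compiler implementation; the gate names and enable expressions are invented.

```python
# Group clock gates by (hierarchy level, canonical enable expression) and
# collapse each group into one gate whose fanout is the union of the
# group's gated registers.

from collections import defaultdict

gates = [
    # (name, hierarchy level, canonical enable expression, gated registers)
    ("cg1", "core0", "en_a & !stall", {"r0", "r1"}),
    ("cg2", "core0", "en_a & !stall", {"r2"}),
    ("cg3", "core0", "en_b", {"r3"}),
    ("cg4", "core1", "en_a & !stall", {"r4"}),  # other level: never merged in
]

groups = defaultdict(set)
for name, level, enable, regs in gates:
    groups[(level, enable)] |= regs  # merging stays within a hierarchy level

merged = {key: sorted(regs) for key, regs in groups.items()}
print(len(merged))  # 3: four original gates collapse into three
```

A real tool would prove enable equivalence logically rather than by string comparison, but the grouping step is the same: fewer, fatter clock gates mean less enable logic and a simpler clock tree.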

Fig. 4 Methodology for decloning of CGCs

5 Results

CGC count before and after optimization

The CGCs are introduced into the design at the RTL stage. From Fig. 5 it can be observed that the CGC count at the RTL stage is 819, obtained through the report_clock_gating command, which also reports the number of gated and non-gated registers in the design. Figure 6 shows the CGC count at the synthesis stage: the count increases to 8120 due to cloning of CGCs by the synthesis tool (Design Compiler). At the PnR stage the CGC count increases further to 8334, as shown in Fig. 7.

Fig. 5 CGC count at RTL stage

Fig. 6 CGC count at the synthesis stage before optimization

Fig. 7 CGC count at the PnR stage before optimization

Figures 8 and 9 show the CGC count after optimization. At the synthesis stage the count is reduced to 1050, and at the PnR stage to 1129, by following the methodology for decloning of CGCs.

Fig. 8 CGC count at the synthesis stage after optimization

Fig. 9 CGC count at the PnR stage after optimization

Table 1 summarizes the CGC count at different stages before and after optimization: at the synthesis stage the count is reduced from 8120 to 1050 (87.19%), and at the PnR stage from 8334 to 1129 (88.71%).

Table 1 CGC count comparison pre and post optimization

Stage     | CGC (Pre_opt) | CGC (Post_opt) | % Reduction
RTL       | 813           | 813            | 0
Synthesis | 8120          | 1050           | 87.19
PnR       | 8334          | 1129           | 88.71

Fanout distribution before and after optimization

Figure 10 compares fanout against CGC count before and after optimization. A large number of CGCs had a fanout of 1 before optimization; these are underused CGCs with little impact on dynamic power savings. After optimization these underused CGCs are reduced to 78, while almost all CGCs with fanout greater than 10 are retained.

Fig. 10 Comparison of fanout distribution before and after optimization

Power comparison before and after optimization

Figure 11 shows the power consumption of the CGCs before optimization. Net switching power is 0.0501 W (19.37% of total power) and internal power is 0.1134 W (43.83% of total power); the total power of the design is 0.2587 W. Figure 12 shows the power consumption of the CGCs after optimization: net switching power is 48.59% and internal power 40.72% of the total, and the total power consumed by the CGCs is 0.0389 W. Table 2 compares power before and after the optimization of CGCs: the internal, switching, and leakage power due to CGCs are reduced by 86.06%, 62.27%, and 95.47%, respectively, and the total power due to CGCs is reduced by 84.41%.

Fig. 11 Power consumption before CGC optimization

Fig. 12 Power consumption after CGC optimization


Table 2 Power consumption comparison before and after optimization

Power     | Pre_opt (W) | Post_opt (W) | % Reduction in power
Internal  | 0.1134      | 0.0158       | 86.06
Switching | 0.0501      | 0.0189       | 62.27
Leakage   | 0.0952      | 4.154e-03    | 95.47
Total     | 0.2587      | 0.0389       | 84.81

6 Conclusion and Future Scope

The results showed significant improvements in CGC count, in the fanout distribution of CGCs, and in the amount of power consumed. At the synthesis stage the CGC count is reduced by 87.19%, and at the PnR stage by 88.72%. Of the reduced CGCs, 98.31% had low fanout, indicating that they were underutilized in terms of power savings. This decloning of underutilized CGCs yielded an 84.14% improvement in power savings.

This work can be extended by thoroughly removing the CGCs that are cloned by the synthesis tool during the synthesis stage, which would result in even more power savings in the design. This can be accomplished by employing a cloning-based algorithm.

References

1. Wimer S, Koren I (2012) The optimal fan-out of clock network for power minimization by adaptive gating. IEEE Trans Very Large Scale Integr (VLSI) Syst 20(10):1772–1780. https://doi.org/10.1109/TVLSI.2011.2162861
2. Bhutada R, Manoli Y (2007) Complex clock gating with integrated clock gating logic cell. In: 2007 international conference on design & technology of integrated systems in nanoscale era, pp 164–169. https://doi.org/10.1109/DTIS.2007.4449512
3. Vishweshwara R, Mahita N, Venkatraman R (2012) Placement aware clock gate cloning and redistribution methodology. In: Thirteenth international symposium on quality electronic design (ISQED), pp 432–436. https://doi.org/10.1109/ISQED.2012.6187529
4. Srivatsa VG, Chavan AP, Mourya D (2020) Design of low power & high performance multi source H-tree clock distribution network. In: 2020 IEEE VLSI device circuit and system (VLSI DCS), pp 468–473
5. Nejat M, Abdevand MM, Farahani AM (2013) A novel circuit topology for clock-gating-cell suitable for sub/near-threshold designs. In: The 17th CSI international symposium on computer architecture & digital systems (CADS 2013), pp 45–49. https://doi.org/10.1109/CADS.2013.6714236
6. Chen RY, Vijaykrishnan N, Irwin MJ (1999) Clock power issues in system-on-a-chip designs. In: Proceedings IEEE computer society workshop on VLSI '99. System design: towards system-on-a-chip paradigm, pp 48–53. https://doi.org/10.1109/IWV.1999.760472
7. Srivatsava GSR, Singh P, Gaggar S, Vishvakarma SK (2015) Dynamic power reduction through clock gating technique for low power memory applications. In: 2015 IEEE international conference on electrical, computer and communication technologies (ICECCT), pp 1–6. https://doi.org/10.1109/ICECCT.2015.7226204


8. Ravi S, Trehan S, Jain M, Kittur HM (2019) High performance clock path elements for clock skew reduction. In: 2019 2nd international conference on intelligent computing, instrumentation and control technologies (ICICICT), pp 1663–1670. https://doi.org/10.1109/ICICICT46008.2019.8993375
9. Madhushree, Rajan N (2017) Dynamic power optimization using lookahead clock gating technique. In: 2017 2nd IEEE international conference on recent trends in electronics, information & communication technology (RTEICT), pp 261–264. https://doi.org/10.1109/RTEICT.2017.8256598
10. Chindhu T, Shanmugasundaram N (2018) Clock gating techniques: an overview. In: 2018 conference on emerging devices and smart systems (ICEDSS), pp 217–221. https://doi.org/10.1109/ICEDSS.2018.8544281

Fruit Freshness Detection Using Machine Learning

K. Anupriya and Gopu Mruudula Sri

Abstract The quality of fruits and vegetables plays a crucial role in customer consumption. This paper presents a survey and a complete examination of apple fruit images for freshness classification using SVM and a convolutional neural network. For the convolutional neural network, the VGG-16 architecture is used to predict the quality of the apple fruit. The input images are apple fruits of different categories, which are passed as input to the filtration method. The algorithm uses the shape, surface, and size features of the fruit to determine its quality; these features are used to label each apple image as fresh or rotten. This paper implements SVM and the VGG-16 architecture on the apple fruit image dataset and tests the accuracy of the two algorithms. The VGG-16 architecture gave higher accuracy than SVM, implemented with the help of the TensorFlow library.

Keywords SVM · CNN · Fruits freshness detection · Fresh and ripen fruits · Deep learning

1 Introduction

Eating fresh, good-quality fruit is necessary for good health. This work can be useful to farms, shops, supermarkets, daily wage sellers, and blind people. Machine learning aims at developing programs that can access data and use it to learn by themselves. There are numerous algorithms available to train a model to get the desired results. Recent work shows that the classification of fruits can be done by various machine learning algorithms such as SVM, k-nearest neighbor, the Bayesian model, decision trees, and random forests. The convolutional neural network (CNN) is a deep learning algorithm that can classify fruits as fresh or rotten.

K. Anupriya
Department of Information Technology, Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India

G. M. Sri (B)
Lakireddy Bali Reddy College of Engineering, Mylavaram, Andhra Pradesh, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_52


Section 2 of the paper presents a recent survey on fruit freshness detection techniques. Section 3 states the methodology of VGG-16, and Sect. 4 covers the implementation of the CNN and SVM algorithms. The results are discussed in the last section.

2 Literature Survey

Santi Kumari Behera [1] presented a detailed survey of the previous 20 years of work, covering how methodologies for fruit feature extraction have evolved: preprocessing, segmentation, feature extraction, image resolution, and classification techniques. This survey helps researchers understand the methods used for fruit feature extraction and also describes problems that occur during feature extraction.

Aniket Harsh [2] stated that convolution is an operation that extracts the attributes of an input image: it captures the spatial relationship among pixels by learning image properties over small squares of the input image. The model uses 4 × 4 filters for the convolution features. The rectified linear unit (ReLU) performs a nonlinear operation; its key role is to introduce nonlinearity into a ConvNet, since most of the real data a ConvNet must learn is nonlinear.

Richa Shah [3] aims to predict the quality of fruits and vegetables with good accuracy. The quality of fruits and vegetables plays an essential role in consumer consumption and thereby affects sales. In India, most of the population depends on agricultural products, and every business that makes, displays, transports, or prepares food for sale needs to check food quality.

Sai Sudha Sonali Palakodati [4] targets the classification of fresh and rotten fruits, developing a model using the CNN technique for classifying fruits according to their quality. The paper also discusses VGG16, VGG19, MobileNet, and Xception, which can be used in the model for better accuracy.

Frida Femling [5] presented a system that evaluates two convolutional neural network architectures on ten varieties of fruits and vegetables. The input images are captured by a Raspberry Pi Camera Module v2 attached to a Raspberry Pi and fed as input to the classifier; the work thus combines IoT and machine learning techniques.

Nikhitha [6] proposed an algorithm based on the Inception v3 model and transfer learning. These techniques estimate, from a given input image, what percentage of the fruit is damaged and what percentage is fresh; they were chosen because they gave better results. Many researchers have worked in this area [7–10]. The images given as input to the classifiers are collected and stored in different ways; the cloud [11–13] is a place where many images are stored directly from IoT devices, and an image dataset collected from the cloud can be given directly as input to the classifier model.


3 Methodology

The workflow of the proposed system is given in Fig. 1. A dataset is a collection of image data. When a deep learning model is deployed in a real-world application, data must be fed to it constantly to keep improving its performance; in the deep learning era, data is the most valuable resource. The dataset comprises two categories of apples, each separated into good or poor quality. All images are encoded with 16 bits per channel in the RGB color space. These characteristics help to increase the variability of the dataset and offer a more realistic setting. Feature extraction algorithms are used to decrease the number of features; the output of this step is a reduced set of important features that resemble the actual image properties. The CNN algorithm is applied to train the model on the training data, and the extracted features of the test data are then used to test the model. By comparing the actual labels of the test images with the labels assigned by the CNN algorithm, a confusion matrix is constructed to assess the performance of the CNN. The accuracy of the model is evaluated and compared with state-of-the-art models. This model is then used to predict the freshness of the apple fruit.

Fig. 1 Workflow of fruits freshness detection model

Fruit Condition Prediction Using VGG-16 Architecture

To train the CNN model, the VGG-16 architecture shown in Fig. 2 is used. VGG-16 achieves 92.7% top-5 accuracy on ImageNet, which contains nearly 14 million images belonging to 1000 classes. It is an improved algorithm compared to

Fig. 2 VGG-16 input–output. Source https://www.geeksforgeeks.org/vgg-16-cnn-model/

AlexNet as it replaces huge kernel filters with multiple 3 × 3 kernel filters one after another. There are four main operations in the given CNN model: 1.

Convolution Layer Convolution in the occasion of the CNN model is used to extract out attributes and features from an input image. Convolution creates a relationship among pixels by extracting features of a picture with the help of little squares of information in a picture.

2.

Rectified linear unit (ReLU) ReLU, also known as rectified linear unit, helps in nonlinear operation. The main motive and the principle intention of using ReLU is to embed nonlinearity in the CNN network which is shown in Figure 3. ReLU allows us for faster training of the data and helps by removing nonlinearity by converting negatives to zeros in pixels.

3.

Pooling Layer The nonlinear downsampling can be performed to reduce the size of the feature map which is shown in Figure 4. It thinks and considers little volume and blocks of data to produce a single block of output for each block having a maximum

Fig. 3 ReLU working process. Source https://medium.datadriveninvestor.com/convolutional-neu ral-networks-3b241a5da51e

Fruit Freshness Detection Using Machine Learning

value. Here the pooling layer follows the convolution layer and applies the nonlinear downsampling technique to decrease the map size. It considers only small rectangular blocks of the input and creates a single output for each block. This can be done in different ways, but max pooling takes the maximum in each block; consequently, if the block size is 2 × 2, the number of features is reduced by a factor of four.

Fig. 4 Max pooling layer working process. Source https://www.slideshare.net/miladabbasi/convolutional-neural-networks-163522071

4. Image Flattening: After pooling is done, the output is converted to a table-like structure that can be used by an artificial neural network for classification. Here a dropout layer is added to prevent overfitting. Dropout ignores a few of the activation maps while training the data; however, all activation maps are used during the testing phase.

5. Fully Connected Layer: Finally, after a few convolution and max-pooling layers, the extracted features are reshaped into a 1-D vector that is used in the classification process. The fully connected layers perform the classification, using one output unit for every class label. The VGG-16 architecture is shown in Fig. 5.
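The operations above (convolution, ReLU, max pooling, flattening) can be sketched in plain Python. This is an illustrative toy, not the paper's implementation, which uses the full VGG-16 network; the 4 × 4 image and the 2 × 2 filter values below are made up:

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN frameworks)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Negative activations are clamped to zero, introducing nonlinearity."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool_2x2(fmap):
    """2x2 max pooling: each block keeps its maximum, shrinking the map 4x."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

def flatten(fmap):
    """Flattening turns the pooled map into a 1-D vector for the dense layers."""
    return [v for row in fmap for v in row]

image  = [[1, 2, 0, 1],
          [0, 1, 3, 1],
          [2, 1, 0, 2],
          [1, 0, 1, 1]]
kernel = [[1, 0], [0, -1]]          # a made-up edge-like filter

features = flatten(max_pool_2x2(relu(conv2d(image, kernel))))
```

A real VGG-16 stacks many such convolution/pooling stages before the fully connected classifier.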

4 Implementation

The proposed method determines the presence of damaged fruits from the image of fruits. The work starts with data collection from different sources. The fresh



Fig. 5 Architecture of VGG-16. Source https://neurohive.io/en/popular-networks/vgg16/

fruits dataset is collected from Fruits 360, which is available on Kaggle [14], and the images of rotten fruits are downloaded from Google Images. An example from the collected dataset can be found in Fig. 6. The collected data is then split into different sections for testing and training: first the training phase and then the testing phase for the given dataset of fresh and rotten fruits at different stages. The training dataset is used to train the model, whereas the testing dataset is used to check the performance of the model. The training dataset of size 349 MB consists of 1693 fresh apple images

Fig. 6 Sample data collected



and 1033 rotten apple images. The test dataset of size 127 MB consists of 398 fresh apple images and 601 rotten apple images. These datasets are then pre-processed to an equal aspect ratio so that they are ready for training with the model; this makes images of different sizes into one particular size (Fig. 7) [14]. Data augmentation is performed on the training dataset; Fig. 8 shows a sample output after augmentation of the fruit image dataset. The model is trained by extracting features using the VGG-16 algorithm discussed in the previous section. Next, an optimization step is applied to optimize the model and minimize the loss; it helps to reduce the noise generated during training. At last, the model passes through model serialization, after which it is evaluated using the testing dataset and predicts the type of fruit.
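One common way to bring images to an equal aspect ratio is to pad the shorter side until the image is square before scaling to the fixed input size; the paper does not spell out its exact method, so the sketch below is an assumption:

```python
def square_padding(width, height):
    """(left, top, right, bottom) padding that makes a width x height image square."""
    diff = abs(width - height)
    lo, hi = diff // 2, diff - diff // 2
    if width < height:
        return (lo, 0, hi, 0)      # pad left/right
    return (0, lo, 0, hi)          # pad top/bottom (no-op when already square)

def padded_size(width, height):
    left, top, right, bottom = square_padding(width, height)
    return (width + left + right, height + top + bottom)
```

After padding, every image is square and can be resized to one target size, e.g., a 640 × 480 photo gets 80 pixels of padding above and below before resizing.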

Fig. 7 a Image before pre-processing. b Image after pre-processing

Fig. 8 Image after data augmentation



Fig. 9 Graph plot after the training process

5 Results and Discussion

The proposed system can determine the quality of fruits like apples and mangoes. Here we use machine learning techniques to replace actual manual grading. CNN is broadly utilized and gives us certified, fair, and useful classification. The proposed system recognized fruit quality and freshness (good, bad, and worse) despite some genuine difficulties in the dataset. When comparing the classification success rates of several features across different yet closely related tasks, CNNs appear to be the best feature extractors and help in achieving a high accuracy rate. The accuracy for CNN is 92.3% for 15 epochs, and the accuracy for SVM is 52.02% for 15 epochs. The graph plotted for the number of epochs run can be seen in Fig. 9, and the accuracy of SVM and CNN is shown in Fig. 10.

6 Conclusion

In this paper, we surveyed algorithms for grading apples and suggested a technique that provides better results compared to state-of-the-art techniques. The CNN model with the VGG-16 architecture is used to classify freshness and quality with 92.3% accuracy. This also helps in providing efficient treatment in a better way, which eventually reduces the time required and the human error involved in finding damaged fruits. In the future, this may be extended to various fruits, and an application for different kinds of fruits may be developed.



Fig. 10 Graph plot comparing CNN and SVM

References

1. Gamit N, Pande R (2014) Non-destructive quality grading of mango (Mangifera indica L) based on CIELAB colour model and size. In: IEEE international conference on advanced communication control and computing technologies, pp 1246–1251
2. Hariprasad N, Belsha N (2017) An approach for identification of infections in vegetables using image processing techniques. In: International conference on innovations in information, embedded and communication systems (ICIIECS), pp 507–512
3. Bhatt AK, Dwivedi RK, Belwal R, Kumari N (2019) Analysis of support vector machine in defective and non defective mangoes classification. Int J Eng Adv Technol (IJEAT) 8(4):1563–1572
4. Mondal PK, Hassan KM, Risdin F (2020) Convolutional neural networks (CNN) for detecting fruit information using machine learning techniques. IOSR J Comput Eng (IOSR-JCE) 22(2):1–13
5. Gujarathi P, Unadkat P, Mulik R, Bandal Richa Shah AN (2018) Quality analysis of fruits and vegetables using machine learning techniques. Int J Res Eng Sci Manag (IJRESM) 1(4):91–95
6. Aiswarya R, Praveena KG, Joshy M, Reshmi BS, Aradhana BS (2021) Quality and pesticides detection in fruits and vegetables. J Emerg Technol Innov Res (JETIR) 8(5):227–230
7. Kapale ND, Shital AL (2019) Automatic fruit quality detection system. Int Res J Eng Technol (IRJET) 6(6):3873–3876
8. Vinodha T, Suganya P (2019) A fruit quality inspection system using faster region convolutional neural network. Int Res J Eng Technol (IRJET) 6(3):6717–6720
9. Mohammad Safwan K, Shreehari K, Bhuvananda, Vanishree BS, Vaibhav GB (2019) Identification of ripeness of tomatoes. Int J Eng Res Technol (IJERT) 7(8):1–4 (Special Issue)
10. Marathe RS, Jha SK, Ranvare SS, Jayashree VK, Pooja RK (2020) Development of an effective system to identify fruit ripening stage for apple, banana and mango. Int J Adv Sci Technol 29(12):2766–2772
11. Sreelatha M, Koneru A (2018) A blind key signature mechanism for cloud brokerage system, vol 7, no Special Issue-13, pp 770–776
12. Koneru A, Sreelatha M (2018) Broker decision verification system using MR cloud tree. Int J Eng Technol (UAE) 7(4)
13. Koneru A, Sreelatha M (2017) A comprehensive study on cloud service brokering architecture. In: Proceedings of the IEEE 2017 international conference on computing methodologies and communication (ICCMC)
14. Kaggle [Online]. https://www.kaggle.com/moltean/fruits

Automation in Implementation of Asserting Clock Signals in High-Speed Mixed-Signal Circuits to Reduce TAT Anagha Umashankar and B. S. Kariyappa

Abstract Design verification is one of the most important aspects of the VLSI design flow; it consumes more than 65% of the time taken in the VLSI design lifecycle. Design verification gives us confidence that the design is functionally correct and performing what it is intended to. This verification involves simulation of the module and checking various signals against the standard expected parameters of the design. Clock signals comprise one of the most important types of signals to be verified on an SoC. These clock signals cater to digital parts of the chip as a synchronous clock, and in some cases may be a part of the high-speed mixed-signal blocks that are responsible for the generation of clock signals. Assertions are used in the industry to ease the verification process; these are implemented using UVM (SystemVerilog). In this paper, we propose an automation methodology that eases the process of implementing assertions for clock signals in high-speed mixed-signal circuits. The automation has been done using Perl scripting, and verification and testing have been carried out using a combination of verification flow tools, Synopsys Verdi and VCS. We have been able to achieve a reduction of turn-around time from about 3–4 h to a matter of 3–5 min, which amounts to about 97% time-saving.

Keywords SoC · UVM · VLSI · VCS

A. Umashankar (B) · B. S. Kariyappa, Department of Electronics and Communication, RV College of Engineering, Bangalore 560059, India
B. S. Kariyappa e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_53

1 Introduction

Mixed-signal verification is of high importance in today's technology. The techniques to verify analog mixed-signal designs vary slightly from the digital techniques. To speed up the development of methods to improve mixed-signal designs, proper modelling and simulation of designs are the key factors [1]. To avoid design errors,




design verification is done before tape-out; this includes a top-down approach where the behaviours are modelled taking into account the signals that contribute to the proper working of the design [2]. The traditional way of verifying AMS (analog mixed-signal) blocks such as PLLs is through VCS simulations and verifying signals on a waveform window [3]. In the early 2000s, there was a switch to Virtual System Prototypes (VSP), a methodology to reuse design test cases and one more way to reduce verification time [4]. System-level virtual prototyping is another approach that can be used to improve simulation time consumption [5]. The general procedure of monitoring signals includes simulation and manually verifying signals to check if they satisfy the parameters, like frequency, voltage, any transition requirements, and so on. Assertions are executable statements written to monitor signals constantly, with certain conditions put in place according to the design. Thus, assertions ease the verification process; they also save time and in turn improve the TAT of verification. The presence of assertions is one of the major advantages that SystemVerilog offers over Verilog when it comes to verification of VLSI designs.

1.1 Verification of Signals

1. Clock signals comprise a high percentage of signals in an SoC. They are generated on chip in certain cases. These on-chip clock generators are a part of high-speed mixed-signal circuits. The clock generated from such modules is used for analog processing or given as a clock signal to the synchronous digital blocks.
2. Implementation of assertions to monitor clock signals is done using industry-standard procedures. The process requires the engineer to make a list of the clock signals to be monitored, with all the signals specified with their hierarchy. This hierarchy is the path to the clock signal pin at the chip core level from the top-level pins.
3. Different test cases are devised to test different aspects of the design. These test cases are one of the inputs to the verification process. When we want to assert signals, the expected parameters to be monitored by the assertions are specified in the particular test case that is run.
4. In the practical scenario, the clock signals to be monitored are not going to be small in number, rather a minimum of about 40 signals at once. The assertion process is such that we will be able to assert all 40 signals in a single run of simulation in this case.
5. The expected frequency values of the clocks are available in a standard datasheet for every design/module. The preparation of inputs (the signal list with hierarchy in a standard format and the fill-in of expected frequency values in test cases) takes about 1–2 h, if we are going to verify the hierarchy on Verdi before actually listing out the signals. This is a tedious process.
6. These inputs, once prepared, can be run appropriately using commands. The results of assertions will be found in the form of assertion logs. A log is generated for every signal asserted. These logs will provide us data on whether the signal has passed the criteria or not, what actual frequency was observed from the signal, and other relevant data. The information obtained in the logs will then have to be validated against the datasheet of the module that holds the ideal values of parameters for a signal. This gives us a guarantee of the assertions, whether what has been asserted is done according to the requirements of the design or not. Also, in case it matches the requirements of the designer, it gives us factual data of observed behavior through values of parameters.

Ultimately, these validations are going to give us a guarantee that the design is doing what it is intended to. For example, if we are verifying clock signals that have been generated on-chip, then it is a guarantee that the clock signals are matching the expected frequency and duty cycle, not crossing the maximum jitter expected, etc. As we can see from Fig. 1, we can implement assertions to automatically check a signal against an expected clock frequency value, a duty cycle value, a jitter value, and so on. The steps to verify signals to validate a signal generator block using assertions are as follows:

Fig. 1 General procedure of assertions



(A) Make a list of the clock signals to be verified. Every signal should have its hierarchy associated with it. This hierarchy should be verified manually by checking in the Verdi tool by Synopsys.
(B) We need to prepare an input test command which will contain the test-related parameters, like powering up the block that is generating the clock signal, routing the signal from the core level to the top level, etc.
(C) Since we are focusing on verifying multiple clock signals together, we should check the expected frequency values from the datasheet of every clock signal listed in the signal list file. The values of frequencies should be in the same order as in the signal list file.
(D) We need to now run the assertion tool to indicate to it the list of signals and their expected values, so that it will generate the assertion statements for all these signals and their values.
(E) Once the assertions are in place, we need to compile the files and run the simulations. After simulation, there is a log generated for every signal monitored. These logs will be available in the location where the simulation results are usually dumped. For every log file, we run through the steps F, G, and H. The log file will have the following details: name of the signal, its hierarchy, expected frequency value, values observed in every cycle, average cycle time/period (result frequency value), and pass/fail status depending on the number of cycles that have been out of range.
(F) In this step, we check if the expected frequency value from the log file is matching with the expected frequency from the datasheet.
(G) With a tolerance range of 1%, we cross-verify if the result frequency value has passed the test with respect to the expected value from the datasheet.
(H) Cross-verify the pass/fail status of the log file against the analysis from step G.
(I) Debug the reason if there is any mismatch of the pass/fail status.
(J) Debug the reason for failure.
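Steps F and G reduce to a tolerance comparison between the measured clock frequency and the datasheet value. The paper's assertions are written in SystemVerilog; the Python below is only an illustrative sketch, and the edge timestamps are invented:

```python
def freq_from_edges(edge_times_ns):
    """Average clock frequency in MHz from successive rising-edge times (ns)."""
    periods = [b - a for a, b in zip(edge_times_ns, edge_times_ns[1:])]
    avg_period_ns = sum(periods) / len(periods)
    return 1000.0 / avg_period_ns          # a 1 ns period is 1000 MHz

def within_tolerance(result_mhz, datasheet_mhz, tol=0.01):
    """Step G: pass if the result is within 1% of the datasheet frequency."""
    return abs(result_mhz - datasheet_mhz) <= tol * datasheet_mhz
```

For example, edges arriving roughly every 2.35 ns measure about 425.5 MHz, which passes a 426.00 MHz datasheet entry at the 1% tolerance.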

In the above process, we are automating steps A and B in one tool, and E, F, G, and H in the other. We shall call the first tool automation (a) and the second automation (b).

2 Work Proposed

This work proposes an automation methodology in two parts: (a) preparation of the inputs, i.e., the signal list and the test command part where we add the expected parameter values; (b) validation of assertion logs against the datasheet. In Fig. 4, automation (a) is indicated in step 1 and automation (b) is indicated in steps 4 and 5. The automation tool for (a) needs the name of the module under verification, the datasheet, and the frequency



Fig. 2 Signal list structure

Fig. 3 Format of test case skeleton—output

mode in which we are simulating/testing. There is a database created from which the tool picks up the signals to be asserted and puts them out in a format suitable to give as input. The tool also gives out a skeleton of the test command with all the parameter values filled in by looking them up in the datasheet. The signal list and test case skeleton outputs can be seen in Figs. 2 and 3. From Fig. 2, we can see how the signal list is structured with its hierarchy. The format in which we have listed the signals is: "signal_name_with_hierarchy, signal_type, module_name, unique_id". The unique_id is an identifier; we start with 1 for the first signal and increment it for every signal added. In Fig. 3, we can see the test case number as #0, followed by the name of the test. We can see the name of the block under verification as the command argument. We have another set of command arguments where there is a list of frequency values as freq1, freq2, and so on. These frequency values correspond to the signal names from the signal list file. For the signal with id 16 from the signal list, which is the debug_clk_sdc signal, the ideal frequency value is 426.00 units of frequency; our units are in MHz. These frequency values are followed by other test-specific details or arguments, such as including the csv file that is responsible for powering up the module/modules under verification. We have added an id section to the test case command. A folder with the name of the id will be created in the location where the simulation results are dumped, and the results for this particular test case will be dumped in that folder. This helps us to differentiate between the tests that we have run. Using these two outputs of our automation tool, we can run assertion simulations, followed by using the (b) part of our automation tool as shown in Fig. 4.



Fig. 4 Flow of assertions after implementing automation

Tool (b) will need the location where the assertion logs are present and the datasheet for reference. The tool comes up with a consolidated analysis of assertion results. Experimental results are discussed in the next section. As we can see from Fig. 4, the steps to use tools (a) and (b) in the verification flow are as follows:

(1) Use the appropriate command to run tool (a). The tool needs the module name under verification and the frequency mode as inputs. This will automate steps A and B shown in Fig. 1. Outputs will be as shown in Figs. 2 and 3.
(2) Run the assertion enabler tool just like before.
(3) Run simulations once assertions are in place.
(4) Run tool (b). It will need the location of the assertion logs and the datasheet as inputs.
(5) The output will look like Fig. 6. This has the consolidated analysis that we did in steps E, F, G, and H. The data available to us are signal name, status summary from the assertion log, status from our analysis of log file versus datasheet, ideal frequency from the log file, ideal frequency from the datasheet, result frequency, and probable cause of failure.
(6) Check for cases and the cause of any mismatch of pass/fail status of the log file against the datasheet analysis.



This automated process reduces the time, effort, and number of steps in the process.
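Tool (b)'s per-log validation can be sketched as follows. The log format and field names here are hypothetical; only the 1% tolerance and the "Result clock frequency is absent in log file" failure cause come from the paper:

```python
def validate_log(log_text, datasheet_mhz, tol=0.01):
    """Return (status, probable_cause) for one assertion log (format assumed)."""
    fields = dict(line.split(": ", 1) for line in log_text.strip().splitlines())
    if "result_freq_mhz" not in fields:
        # Matches the probable cause reported for dac_clk_src in the Results section.
        return ("FAIL", "Result clock frequency is absent in log file")
    result = float(fields["result_freq_mhz"])
    if abs(result - datasheet_mhz) <= tol * datasheet_mhz:
        return ("PASS", "")
    return ("FAIL", f"Result {result} MHz outside 1% of datasheet {datasheet_mhz} MHz")

good_log = ("signal: debug_clk_sdc\n"
            "expected_freq_mhz: 426.00\n"
            "result_freq_mhz: 425.80")
```

Running this over all 40 logs and writing one row per signal would produce the consolidated sheet described in the next sections.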

3 Results

We have considered a PLL output as a reference to show the need for asserting the frequency of a clock signal. The generated PLL output frequency is going to run/provide a clock to many more blocks in the SoC; hence, there is a need to verify the frequency value of the output clock signal of the PLL. In Fig. 5, we can see that there are three signals: clk, detect_lock, and pllout. The clk is the reference clock that is input to the PLL. The detect_lock signal indicates a lock in the PLL output as desired. The pllout signal is the output waveform of the PLL, which needs to be asserted to check if it satisfies the requirements. The output can be verified either manually or by running simulations with assertions included.

We used assertions to monitor 40 clock signals. Outputs of the first part of our automation (preparation of the input signal list and test command with expected frequency values) are shown in Figs. 2 and 3. The other parts of the test command skeleton where we see a csv file are related to the test-specific data apart from the assertions. These two outputs were used to run the simulation. The simulation results include a set of generated assertion logs; there were 40 logs available. The location of the logs and the datasheet was given as input to the second part of our automation, the validation of assertion results against datasheet parameters.

We can see from Fig. 6 that the probable cause of failure of the signals dac_clk_src and wss_cc_noc_time_clk_src is listed as "Result clock frequency is absent in log file". In this case, our first step toward debugging the cause of failure would be to check the log file for whether the result clock frequency is indeed absent and the other details are available. If from the analysis of the log file we are able to declare that the failure needs more debugging from the design point of view, we look into the design and try to debug what has gone wrong and where. This debugging helps us find out the functional errors in the design, i.e., design verification. The results are as shown in Fig. 6.

Fig. 5 Outputs of a PLL regression

Fig. 6 Output Excel sheet with validation of assertion versus datasheet

Table 1 Report of asserted signals
  Total no. of signals asserted: 40
  No. of signals passed: 37
  No. of signals failed due to design errors: 1
  No. of signals failed due to UVM fatal errors/wrong register programming: 2

As observed from Table 1, 92% of signals on average pass the assertions implemented, and the rest of the signals fail due to wrong programming or design errors. The wrong programming part of failures includes both syntax errors and other UVM errors that may be considered fatal by the simulation engine.
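The counts in Table 1 can be tallied directly; the status list below is synthesized to match the reported counts, giving 37 of 40 passing, i.e., 92.5%, in line with the "about 92%" quoted in the text:

```python
# Synthetic status list matching Table 1's counts (not real simulation output).
statuses = (["pass"] * 37
            + ["design_error"] * 1
            + ["uvm_fatal_or_wrong_programming"] * 2)
pass_rate = statuses.count("pass") / len(statuses)   # 37 / 40 = 0.925
```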

4 Analysis of Results

The final result, the Excel sheet, has consolidated information regarding the signal name, status summary from the log file, status of log file versus datasheet, ideal frequency from the datasheet, ideal frequency from the log file, result frequency, and probable cause of failure. Done manually, such an analysis would need about 90% more human intervention and time. The log files are all parsed one by one and data is collected from each log file regarding different parameters: what is the actual frequency achieved, what is the pass/fail judgment according to our assertions, what is the actual pass/fail result according to our analysis by comparing the log file against the datasheet, the ideal values of parameters, and the probable cause of failure. These tabs help to a great extent. This tool has so far been implemented for clock signals, and it can be extended to other kinds of signals to monitor transitional values and similar parameters.

The manual preparation of the signal list and test case command with frequency parameters to give as inputs for the simulation process would usually take 1–2 h if we manually verify the hierarchy of signals. This is done quickly, in a span of about 2–3 min, using the script/tool. The major part of the analysis is where we validate assertion logs against the datasheet; a lot of manual work and analysis goes in here. Every log file has to be checked and verified for actual frequency values and pass/fail results against what we can make out from the datasheet. There might be instances where there is a mismatch between our analysis and the assertion logs. This entire process is made far easier using the tool. The Excel sheet is mailed to the particular user. It gives more details for the signals that have failed than for the ones that have passed: for the failed signals, all the columns with details are filled. We can see this in Fig. 6. For the signals dac_clk_src and wss_cc_noc_time_clk_src, we can see all the columns filled to ease the analysis and debugging process. However, the other signals which have passed the simulations and assertions have no details except the status from the log file and the log versus datasheet analysis. The analysis part is made easier and we can now focus on the signals that have failed and why. A manual approach would have taken about 2–3 h for 40 signals, whereas the tool does this in 2–3 min. This automated process was incorporated in the industry on different projects, for modules like phase-locked loops, ADCs, and DACs.
Putting all these tasks together, we see a turn-around time (TAT) reduction of about 96–97%.
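The headline figure follows from the numbers given in the abstract (3–4 h manually against 3–5 min with the tools); the quoted 96–97% sits at the conservative end of the range:

```python
# Conservative and optimistic bounds on the turn-around-time reduction.
worst = 1 - 5 / (3 * 60)     # 3 h manual, 5 min automated  -> ~97.2%
best  = 1 - 3 / (4 * 60)     # 4 h manual, 3 min automated  -> ~98.8%
```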

5 Conclusion

The automation in the implementation of asserting clock signals in high-speed mixed-signal circuits has proven to be very productive. The implementation of this automation is done in two steps: (a) preparation of the input signal list and test case command; (b) analytical validation of assertion results against the datasheet. A reduction of about 97% is seen in the particular areas where automation is applied. During the implementation described in this paper, about 92% of signals passed and the remaining failed either due to UVM errors or design errors. The proposed automation may be combined with a tool that automates the assertion statements for the input signals generated as an output of the work proposed. This combination as a whole will remove much of the time and effort that goes into verification of high-speed mixed-signal designs. Verification through manual methods of verifying signals over a waveform window can be replaced with assertions, and the verification method with assertion implementation can be automated to a great extent by using the work proposed in this paper.



References

1. Georgoulopoulos N, Hatzopoulos A (2019) UVM-based verification of a digital PLL using SystemVerilog. In: 2019 29th international symposium on power and timing modeling, optimization and simulation (PATMOS), Rhodes, Greece, pp 23–28. https://doi.org/10.1109/PATMOS.2019.8862105
2. Scholl M, Zhang Y, Wunderlich R, Heinen S (2016) A high efficiency straightforward design and verification methodology for PLL systems. In: 2016 IEEE 59th international midwest symposium on circuits and systems (MWSCAS), Abu Dhabi, United Arab Emirates, pp 1–4. https://doi.org/10.1109/MWSCAS.2016.7870025
3. Das P, Yadav JK, Deb S (2016) Mixed mode simulation and verification of SSCG PLL through real value modeling. In: 2016 29th international conference on VLSI design and 2016 15th international conference on embedded systems (VLSID), Kolkata, India, pp 591–592. https://doi.org/10.1109/VLSID.2016.110
4. Pawankumar B, Bhargava CR, Kariyappa BS, Narayanan S, Kamalakar R (2012) An approach for reusing test source of an IP to reduce verification effort. In: 2012 1st international conference on emerging technology trends in electronics, communication & networking, Surat, India, pp 1–4. https://doi.org/10.1109/ET2ECN.2012.6470077
5. Huang C, Yin Y, Hsu C, Huang TB, Chang T (2011) SoC HW/SW verification and validation. In: 16th Asia and South Pacific design automation conference (ASP-DAC 2011), Yokohama, Japan, pp 297–300. https://doi.org/10.1109/ASPDAC.2011.5722202

Supervised Learning Algorithms for Mobile Price Classification Ananya Dutta, Pradeep Kumar Mallick, Niharika Mohanty, and Sriman Srichandan

Abstract Considering the specifications of a phone, such as the battery, network connectivity, storage, processor, and camera, we can get an idea of the price range that the mobile would fall in. Feature selection algorithms are used to identify and remove features that are unnecessary, redundant, or contribute the least. Determining whether a mobile with given specifications will be expensive or affordable is the fundamental aim of this experimental work. In this paper, classification methods such as stochastic gradient descent, random forest, KNN, SVC, naive Bayes, artificial neural network (ANN), decision trees, and logistic regression are used to recognize and remove the unnecessary characteristics with the least computational complexity. Various classifiers are utilized to attain higher accuracy, and the results are compared in terms of the highest precision. Finally, a conclusion is made regarding the best classifier for the provided dataset, where the best accuracy and results were obtained from SVC, ANN, and the logistic regression classification algorithm across all four classes.

Keywords Feature selection · Stochastic gradient descent · Random forest · KNN · SVC · Naive Bayes · ANN (Artificial Neural Network) · Decision trees · Logistic regression

A. Dutta · P. K. Mallick (B), School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed To Be University, Bhubaneswar, India, e-mail: [email protected]
N. Mohanty · S. Srichandan, Department of CSE, Balasore College of Engineering and Technology, Balasore, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_54

1 Introduction

Data classification applications have spread into a lot of areas such as image processing [1], natural language processing [2], energy optimization [3], medical science [4], risk analysis [5], and macro-economic forecasts [6]. The main purpose




of the classification process is to determine what the data in a dataset means across various categories of data. The aim of the algorithms is to train models using the training data and then classify new data using these trained models. Examples of machine learning-based algorithms include stochastic gradient descent, random forest, KNN, SVC, naive Bayes, artificial neural network (ANN), decision trees, and logistic regression. Here, we got the best accuracy and results from SVC and the logistic regression classification algorithm across all four classes, which performed better than SVM, RF, and many more with well-adjusted parameters [7]. Price is a key attribute in marketing a product, and often the first question a customer asks is about the cost of an item. Every customer first stops to think whether they would be able to buy something with the given features, so estimating the price in advance is the key motivation. This paper is the first step toward the aim referred to above [8]. Diverse kinds of feature selection algorithms are available to pick the best features and trim datasets; this reduces the computational complexity of the problem. As this is an optimization problem, different optimization methods are also used for reducing the dimensionality of this dataset [9]. Price prediction based on a few factors would be easy, but the result might be inaccurate because some excluded factors may also be important in explaining the movement of stock prices. The prices of individual stocks can be affected by various factors, e.g., economic growth. Making the right decision within a timely response poses a number of challenges, as a large amount of information is required for predicting the movement of the stock market price.
This information is important for investors because stock market volatility can lead to considerable investment losses; analyzing such large volumes of information is thus useful both for investors and for forecasting the direction of stock market indexes. Mobile phones, meanwhile, are among the most frequently bought and sold devices: new models with new features are launched every day, and hundreds of thousands of phones change hands daily. Mobile price-class prediction is therefore used here as a case study for this type of problem, i.e., finding the optimal product. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 describes the methodology, and Sect. 4 presents the classifier models. The implementation and results of this work are examined in Sect. 5, and the discussion appears in Sect. 6. Finally, Sect. 7 concludes the paper with some ideas for future work.

2 Related Work

Ibrahim M. Nasser et al. built a model based on a multilayer perceptron topology, developed and trained on a dataset of mobile phone price ranges; their results show that an ANN model can predict the mobile

Supervised Learning Algorithms for Mobile Price Classification


price range from a few factors that influence the classification, with an accuracy of 96.31% [10]. Suiming Guo et al. focused on price prediction in ride-on-demand services, helping users obtain a lower price within a short time; the forecast works by finding the relationship between dynamic prices and the features extracted from multi-source urban data [11]. Kuo-Kun Tseng et al. focused on the rapid progress of the internet and automated data processing. Internet sentiment analysis can uncover many possibilities, from the effect of product news on prices to the impact of sales behavior and brand campaigns; they examined news that tends to alter the price of products and set up a new system for price forecasting [12]. Mehtabhorn Obthong et al. note that stock market trading is a task in which investors require speedy and precise data to make noteworthy decisions; theirs is essentially a survey paper that studied other works in similar fields [13]. Linyu Zheng et al. examined and predicted the share prices of two types of aerospace companies using a hybrid forecast model of recurrent neural networks and PCA, with the LM variant obtaining the best average mean square error [14]. Sreelekshmy Selvin et al. used LSTM, RNN, and CNN-sliding window models for stock price prediction; the best error percentage of 2.36 was achieved by the CNN-sliding window model, surpassing the other two models [15]. Dadabada P. et al. used MLP, GRNN, GMDH, PSOQRNN, random forest, and quantile regression models for forecasting the volatility of financial time series; PSOQRNN ranked first among all the models with the highest accuracy of 99.17% [16]. Table 1 summarizes the techniques used in these works together with their advantages and disadvantages.

3 Methodology

Figure 1 highlights the proposed model for mobile price classification. First, the data is collected; then it is pre-processed and cleaned, and the various classification models are applied.

3.1 Data Collection

The presence of a memory card slot is treated as a binary feature indicating whether it is available. The display size is measured in inches, the weight in grams, the thickness in millimeters, and the internal memory in GB. The camera resolution is in MP, the RAM size in MB, and the battery capacity in mAh. The attributes therefore take real values within the ranges listed in Table 2, which gives the various features of the mobile phones together with their ranges. The dataset was gathered from a public website.

Table 1 Comparison of the different techniques used

S. No | References | Techniques used | Advantages | Disadvantages
1 | [10] | MLP | The accuracy achieved is efficient | Feature selection can be improved to get a more accurate result
2 | [11] | Linear regression, neural networks | Robust to noisy training data | Computationally costly
3 | [12] | ARIMA | Accuracy is significantly high | Sensitive to noise and slow convergence speed
4 | [13] | RF, KNN, SVM, DT, LR, ANN | Gives high precision for both linear and nonlinear functional regressions | Requires more memory to store the model and is computationally costly
5 | [14] | Recurrent neural networks | Training is quicker than the perceptron since no back-propagation learning is involved | Classification process is slower than MLP
6 | [15] | LSTM, RNN, CNN | Makes good forecasts since it examines the interactions and hidden patterns within the data | Efficient feature engineering has not been employed
7 | [16] | MLP, GRNN, GMDH, RF, quantile regression | Can yield precise predictions for challenging problems | Difficult to scale

Fig. 1 Proposed model for mobile price classification

Table 2 Features of mobile phone

Feature | Range
Clock speed | 0.5–3 GHz
Primary camera | 0–20 MP
Resolution height | 0–1960 pixels
Resolution width | 500–1998 pixels
Screen height | 5–19 cm
Screen width | 0–18 cm
Battery power | 500–1999 mAh
4-G network | 0/1
5-G network | 0/1
Bluetooth | 0/1
RAM | 256–3998 MB
3-G network | 0/1
Dual SIM | 0/1
Internal memory | 2–64 GB
Number of cores | 1–8
Front camera | 0–19 MP
Mobile depth | 0.1–1 cm
Mobile weight | 80–200 g

3.2 Dimensionality Reduction

Dimensionality reduction is the process of reducing the number of features under consideration by obtaining a smaller set of principal variables [17]. The higher the number of features, the harder it becomes to visualize the training set and to work with it. Often many of these features are correlated and hence redundant, which is where dimensionality reduction algorithms come in. There are two kinds of dimensionality reduction approaches: feature selection and feature extraction.

3.3 Feature Selection

Feature selection is the process of automatically selecting those features that contribute most to the prediction variable of interest. Irrelevant features in the data can reduce the accuracy of the models and cause the model to learn from superfluous attributes. In the integrated selection used here, we are interested in finding the k out of the d original dimensions that give us the most relevant information, and we discard the other (d − k) dimensions.
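The selection step can be sketched in pure Python: score each feature by its absolute Pearson correlation with the target and keep the k highest-scoring ones. The feature names and values below are made-up toy data, not the paper's dataset.

```python
import math

def pearson(xs, ys):
    # Sample Pearson correlation coefficient between two value lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_k_best(features, target, k):
    """features: dict of name -> value list; keep k names by |correlation|."""
    scores = {name: abs(pearson(vals, target)) for name, vals in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

features = {
    "ram":       [512, 1024, 2048, 3000, 4000],
    "battery":   [900, 1200, 1100, 1500, 1400],
    "bluetooth": [0, 1, 0, 1, 1],
}
target = [0, 1, 2, 3, 3]  # hypothetical price classes
print(select_k_best(features, target, k=2))  # ['ram', 'battery']
```

Here "ram" and "battery" track the price class most closely, so the weakly correlated "bluetooth" feature is discarded.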


Fig. 2 Loss analysis of features and price

3.4 Classification

The last step is classification. As mentioned above, the held-out test set is used for evaluating the classifier and its detection accuracy [18]. Accuracy can be computed from the number of correctly recognized samples of a class (true positives), the number of correctly recognized samples that do not belong to the class (true negatives), and the samples that were either wrongly assigned to the class (false positives) or wrongly rejected from it (false negatives). Accuracy tells us the proportion of correctly classified cases. Figures 2 and 3 show the loss and accuracy analysis of features and price, respectively.

4 Classifier Models

4.1 Naive Bayes Classification

Naive Bayes is a classification technique based on applying Bayes' theorem with the strong assumption that the predictors are independent of one another, i.e., that the presence of a feature in a class is unrelated to the presence of any other feature in the same class. As an example, a phone may be considered smart if it has a touch screen, internet access, a good camera, and so on. Even though these features depend on one another in practice, each independently contributes to the probability that the phone is a smart one. In Bayesian classification, the main goal is


Fig. 3 Accuracy analysis of features and price

to find the posterior probability, i.e., the probability of a label given some observed features:

P(X|F) = P(X)P(F|X) / P(F)    (1)

Equation 1 gives the posterior probability of the class: P(X|F) is the posterior probability of class X given feature F, P(X) is the prior probability of the class, P(F|X) is the likelihood, i.e., the probability of the predictor given the class, and P(F) is the prior probability of the predictor.
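Equation 1 can be checked numerically; the probabilities below are illustrative values, not taken from the paper's dataset.

```python
def posterior(p_class, p_feature_given_class, p_feature):
    # Eq. (1): posterior = prior * likelihood / evidence
    return p_class * p_feature_given_class / p_feature

# Illustrative numbers: P(X) = 0.25 (uniform prior over four price classes),
# P(F|X) = 0.6, P(F) = 0.3
p = posterior(0.25, 0.6, 0.3)
print(p)  # 0.5
```

With these numbers the observed feature F doubles the prior belief in class X, from 0.25 to 0.5.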

4.2 Support Vector Machine Classification

Support vector machines are powerful and well-organized machine learning methods, used mainly for classification and in some cases for regression. SVMs were first introduced around 1960 and were refined in the 1990s. Compared with other machine learning methods, SVMs have a very distinctive way of working, and they remain popular because of their ability to handle multiple continuous and categorical variables. An SVM model represents the different classes by a hyperplane in a multidimensional space. The hyperplane is constructed iteratively by SVM with the objective of minimizing the error. The aim of SVM in this setting,


Fig. 4 Overview of random forest classification

as we can say, is to divide the dataset into classes by finding the maximum marginal hyperplane.
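The maximum-margin idea can be illustrated in one dimension, where a "hyperplane" is just a threshold: for separable classes, the margin-maximizing boundary lies halfway between the closest points of the two classes. The toy values below are hypothetical, not the paper's data.

```python
def max_margin_threshold(neg, pos):
    """1-D analogue of the maximum marginal hyperplane: place the boundary
    halfway between the closest points of the two classes."""
    lo, hi = max(neg), min(pos)          # the 1-D "support vectors"
    assert lo < hi, "classes must be separable"
    return (lo + hi) / 2, (hi - lo) / 2  # threshold, margin

neg = [1.0, 1.5, 2.0]   # e.g. scaled features of low-price phones
pos = [4.0, 4.5, 5.0]   # e.g. scaled features of high-price phones
threshold, margin = max_margin_threshold(neg, pos)
print(threshold, margin)  # 3.0 1.0
```

Only the two boundary points (2.0 and 4.0) determine the decision threshold, which mirrors how only the support vectors matter in a full SVM.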

4.3 Random Forest Classification

Random forest is a supervised learning algorithm. As the name suggests, it is a collection of trees that together make a forest. The trees are built on randomly chosen subsets of the data to obtain different views of the solution to a particular problem. Each tree gives its own decision, so we obtain different solutions from the different trees in the forest, and the most suitable solution is then chosen by polling. It is a very efficient method because we can learn from various observations and pick the most accurate outcome. Figure 4 presents an overview of random forest classification.
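The polling step can be sketched as a majority vote over individual tree predictions. Each "tree" below is just a stump-like function with made-up thresholds, standing in for real fitted trees.

```python
from collections import Counter

def forest_predict(trees, x):
    """Majority vote ('polling') over the predictions of individual trees.
    Each tree is simply a function x -> class label in this sketch."""
    votes = [tree(x) for tree in trees]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical stump-like trees voting on a price class
trees = [
    lambda x: 3 if x["ram"] > 3000 else 1,
    lambda x: 3 if x["battery"] > 1400 else 2,
    lambda x: 3 if x["ram"] > 2500 else 1,
]
print(forest_predict(trees, {"ram": 3500, "battery": 1300}))  # 3
```

Even though one tree disagrees, the majority of votes still settles on class 3, which is exactly how the ensemble smooths out individual-tree errors.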

4.4 Logistic Regression Classification

Logistic regression is a supervised method used to predict the probability of the target variable, also known as the dependent variable. The target variable is dichotomous, i.e., there are only two feasible cases: it can take only two values, 1 for yes and 0 for no. Statistically, logistic regression predicts P(Y = 1) as a function of X. It is widely used in fields such as disease detection and diabetes prediction. In binary logistic regression, the target variable must always be either 0 or 1, with the desired outcome represented by 1. Finally, any


multicollinearity should be absent from the model, i.e., the independent variables should be independent of one another.
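The P(Y = 1) prediction described above is the sigmoid of a linear score, thresholded at 0.5. The weights below are illustrative, not fitted to the paper's dataset.

```python
import math

def predict_proba(x, w, b):
    """P(Y = 1 | x) under a logistic model: sigmoid of the linear score."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b, threshold=0.5):
    # Class 1 when the predicted probability reaches the threshold.
    return 1 if predict_proba(x, w, b) >= threshold else 0

w, b = [1.2, -0.4], -0.5  # made-up weights and bias
print(predict([2.0, 1.0], w, b))  # score = 2.4 - 0.4 - 0.5 = 1.5 -> class 1
print(predict([0.0, 2.0], w, b))  # score = -1.3 -> class 0
```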

4.5 Stochastic Gradient Descent Classification

The word "stochastic" denotes a process associated with random probability. In stochastic gradient descent, a few samples are selected randomly for each iteration instead of the whole dataset. In gradient descent, the term batch denotes the total number of samples from the dataset used to compute the gradient in each iteration; in ordinary batch gradient descent, the batch is the whole dataset. Although using the whole dataset helps reach the minima in a less noisy and less random manner, the problem arises when the dataset gets huge.
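A minimal sketch of the mini-batch idea, on a toy 1-D linear model with squared loss rather than the paper's classifier: each update uses a small random sample of the data instead of the full batch.

```python
import random

def sgd_step(w, b, batch, lr):
    """One stochastic update on mean squared error for y ~ w*x + b,
    computed from a random mini-batch rather than the whole dataset."""
    gw = gb = 0.0
    for x, y in batch:
        err = (w * x + b) - y
        gw += 2 * err * x / len(batch)
        gb += 2 * err / len(batch)
    return w - lr * gw, b - lr * gb

random.seed(0)
data = [(x, 2 * x + 1) for x in range(10)]  # points on the line y = 2x + 1
w = b = 0.0
for _ in range(1000):
    w, b = sgd_step(w, b, random.sample(data, 3), lr=0.005)
print(round(w, 2), round(b, 2))  # close to the true slope 2 and intercept 1
```

Each step sees only three of the ten points, so individual updates are noisy, but over many iterations the parameters still drift to the minimum, which is the trade-off the section describes.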

4.6 K-Nearest Neighbor Classification

KNN is a classification algorithm in which objects are classified by voting among several labeled training examples according to their smallest distance from each object. This method performs well even in multi-class classification tasks. Its drawback is that KNN needs more time to classify objects when a large number of training examples are given, because it must compute the distance of each test object to all of the training examples. It is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure. KNN goes by several alternative names, such as (a) memory-based reasoning, (b) example-based reasoning, (c) instance-based learning, (d) case-based reasoning, and (e) lazy learning. KNN is used for both regression and classification in predictive problems.
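The distance-and-vote procedure can be written from scratch in a few lines. The (RAM, battery) feature pairs below are toy values, not the paper's records.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest labeled
    training points, using Euclidean distance."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

# (feature vector, price class) pairs with made-up values
train = [
    ((512, 900), 0), ((800, 1000), 0),
    ((2000, 1400), 2), ((2200, 1500), 2), ((3900, 1900), 3),
]
print(knn_predict(train, (2100, 1450), k=3))  # 2
```

Note that every prediction sorts the full training set, which is exactly the cost the section flags for large numbers of training examples; in practice features should also be scaled so one dimension does not dominate the distance.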

4.7 Decision Tree Classification

A decision tree text classifier is one in which internal nodes are labeled by terms, branches leaving them are labeled by the weight that the term has in the document, and leaves are labeled by categories. A decision tree is built using a "divide and conquer" strategy. Every node in the tree is associated with a set of cases. The method checks whether all the training examples have the same label and, if not, selects a term that splits the pooled classes of documents into groups sharing the same values for the term, placing each such group in a separate sub-tree. Figure 5 presents an overview of decision tree classification.


Fig. 5 Overview of decision tree classification
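For numeric features, the "divide and conquer" split choice is often made by minimizing an impurity measure such as the Gini index; this is one common criterion, used here as an illustration with hypothetical RAM values rather than the paper's data.

```python
def gini(labels):
    # Gini impurity of a set of class labels: 1 - sum of squared class shares.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Pick the threshold on one numeric feature that minimizes the
    weighted Gini impurity of the two resulting child nodes."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue  # a split must produce two non-empty children
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

ram = [512, 800, 2000, 2200, 3900]   # hypothetical feature values
cls = [0, 0, 2, 2, 3]                # hypothetical price classes
print(best_split(ram, cls))
```

The chosen threshold (RAM <= 800) peels off a pure low-price child node, after which the same procedure would recurse on the remaining cases.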

4.8 Artificial Neural Network

Neural networks are loosely modeled on how the human brain learns. An artificial neural network consists of neurons, which are organized into layers and carry the tunable parameters. The output from each layer is passed directly to the next layer, and each layer applies a nonlinear activation function that aids the learning process. The output layer is also known as the terminal neurons. The weights associated with the neurons, which are responsible for the overall predictions, are updated in each epoch, and the learning is optimized using various optimizers. Every neural network is supplied with a cost function that is minimized as learning proceeds; the best weights, for which the cost function gives the best results, are then used.
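The layer-to-layer flow described above can be sketched as a tiny forward pass; the 2-input, 2-hidden, 1-output network and all weights below are made up for illustration.

```python
import math

def dense(x, weights, biases, activation):
    """One fully connected layer: activation(W x + b), row by row."""
    return [activation(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

x = [1.0, 0.5]                                        # input features
h = dense(x, [[0.4, -0.2], [0.3, 0.8]], [0.0, -0.1], relu)   # hidden layer
y = dense(h, [[1.0, -1.0]], [0.2], sigmoid)                  # output neuron
print([round(v, 3) for v in h], round(y[0], 3))
```

Training would then compare y against the true label, compute the cost, and let an optimizer adjust every weight and bias by back-propagation; only the forward pass is shown here.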

5 Implementation and Results

After data exploration and implementation of the various machine learning and boosting algorithms, it is essential to evaluate the results in terms of evaluation metrics to find the most suitable classifier for the dataset. The performance of all the classifiers is shown in Table 3, where they are compared in terms of accuracy, precision, recall, F1 score, and confusion matrix. The equations are as follows:

Precision = TP / (TP + FP)    (2)

F1 Score = 2TP / (2TP + FP + FN)    (3)

Equations 2 and 3 give the formulas for precision and F1 score, where TP, TN, FP, and FN are the true positives, true negatives, false positives, and false negatives, respectively. The confusion matrices of all the classifiers are shown in Figs. 6 and 7, and from them we calculated all the evaluation metrics: accuracy, precision, recall, and F1 score. Table 3 gives a full comparison of all the models. We can observe that SVC, ANN, and logistic regression performed very well and maintained a good and stable relationship with the dataset. SVC achieved an accuracy of 99.25% on Class 1, 97.50% on Class 2, 96.50% on Class 3, and 98.25% on Class 4. ANN achieved 99.25% on Class 1, 97.50% on Class 2, 97% on Class 3, and 98.75% on Class 4. Logistic regression achieved 99.50% on Class 1, 97.24% on Class 2, 96.23% on Class 3, and 98.49% on Class 4. Across every metric, SVC, ANN, and logistic regression performed outstandingly compared with the other models. KNN and stochastic gradient descent lagged behind in precision, with somewhat low values for all classes; they failed to fit the dataset well, possibly because of the size of the dataset or because their parameters could not be tuned well enough to produce good results. Decision tree and naive Bayes were stable performers across all metrics. From Table 3 we can also observe that random forest was a good overall performer, giving stable and accurate results for mobile price classification.

Fig. 6 Confusion matrix of a KNN, b random forest, c stochastic gradient descent, d SVC

Fig. 7 Confusion matrix of a naive Bayes, b ANN, c decision tree, d logistic regression
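The per-class quantities behind Eqs. 2 and 3 can be computed directly from a confusion matrix. A minimal sketch, using a hypothetical two-class matrix rather than the paper's actual counts:

```python
def metrics_for_class(cm, c):
    """Per-class accuracy, precision, recall and F1 from a confusion
    matrix cm where cm[i][j] = count of true class i predicted as class j."""
    n = sum(sum(row) for row in cm)
    tp = cm[c][c]
    fp = sum(cm[i][c] for i in range(len(cm))) - tp  # predicted c, wrongly
    fn = sum(cm[c]) - tp                             # true c, missed
    tn = n - tp - fp - fn
    return {
        "accuracy": (tp + tn) / n,
        "precision": tp / (tp + fp),                 # Eq. (2)
        "recall": tp / (tp + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),           # Eq. (3)
    }

# Hypothetical 2-class confusion matrix (rows: truth, columns: prediction)
cm = [[90, 10],
      [ 5, 95]]
print(metrics_for_class(cm, 0))
```

For larger matrices the same function applies one-vs-rest, which is how per-class scores such as those in Table 3 can be derived from a single multi-class confusion matrix.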

Table 3 Performance metric evaluation of all models

Classification | Class | n (truth) | n (classified) | Accuracy (%) | Precision | Recall | F1 score
Stochastic gradient descent | 1 | 129 | 94 | 85.75 | 0.88 | 0.64 | 0.74
Stochastic gradient descent | 2 | 125 | 102 | 70.75 | 0.54 | 0.44 | 0.48
Stochastic gradient descent | 3 | 89 | 120 | 73.75 | 0.43 | 0.58 | 0.5
Stochastic gradient descent | 4 | 57 | 84 | 89.25 | 0.58 | 0.86 | 0.7
Random forest | 1 | 92 | 94 | 97 | 0.93 | 0.95 | 0.94
Random forest | 2 | 112 | 102 | 93 | 0.91 | 0.83 | 0.87
Random forest | 3 | 111 | 120 | 93.25 | 0.85 | 0.92 | 0.88
Random forest | 4 | 85 | 84 | 97.25 | 0.94 | 0.93 | 0.93
KNN | 1 | 129 | 94 | 85.75 | 0.88 | 0.64 | 0.74
KNN | 2 | 125 | 102 | 70.75 | 0.54 | 0.44 | 0.48
KNN | 3 | 89 | 120 | 73.75 | 0.43 | 0.58 | 0.5
KNN | 4 | 57 | 84 | 89.25 | 0.58 | 0.86 | 0.7
SVC | 1 | 93 | 94 | 99.25 | 0.98 | 0.99 | 0.98
SVC | 2 | 106 | 102 | 97.50 | 0.97 | 0.93 | 0.95
SVC | 3 | 112 | 120 | 96.50 | 0.91 | 0.97 | 0.94
SVC | 4 | 89 | 84 | 98.25 | 0.99 | 0.93 | 0.96
Naive Bayes | 1 | 90 | 94 | 94.50 | 0.86 | 0.9 | 0.88
Naive Bayes | 2 | 103 | 102 | 83.75 | 0.69 | 0.68 | 0.68
Naive Bayes | 3 | 119 | 120 | 83.75 | 0.72 | 0.73 | 0.73
Naive Bayes | 4 | 88 | 84 | 94.50 | 0.89 | 0.85 | 0.87
ANN | 1 | 95 | 94 | 99.25 | 0.99 | 0.98 | 0.98
ANN | 2 | 100 | 102 | 97.50 | 0.94 | 0.96 | 0.95
ANN | 3 | 118 | 120 | 97 | 0.94 | 0.96 | 0.95
ANN | 4 | 87 | 84 | 98.75 | 0.99 | 0.95 | 0.97
Decision tree | 1 | 89 | 94 | 93.75 | 0.84 | 0.89 | 0.86
Decision tree | 2 | 112 | 102 | 86 | 0.77 | 0.71 | 0.74
Decision tree | 3 | 107 | 120 | 86.75 | 0.72 | 0.81 | 0.77
Decision tree | 4 | 92 | 84 | 94.50 | 0.92 | 0.84 | 0.88
Logistic regression | 1 | 94 | 94 | 99.50 | 0.99 | 0.99 | 0.99
Logistic regression | 2 | 105 | 102 | 97.24 | 0.96 | 0.93 | 0.95
Logistic regression | 3 | 111 | 118 | 96.23 | 0.91 | 0.96 | 0.93
Logistic regression | 4 | 88 | 84 | 98.49 | 0.99 | 0.94 | 0.97


6 Discussions

Figures 6 and 7 show the confusion matrices of all the models: stochastic gradient descent, random forest, KNN, SVC, naive Bayes, ANN, decision tree, and logistic regression.

7 Conclusion and Future Work

This work can be concluded with comparable results from the classification algorithms. It achieved the maximum precision while selecting the fewest but most appropriate features. The overall accuracy of the tested model was 92.12%, the main reason for the lower rate being the small number of instances in the dataset. A number of ML algorithms and techniques have been discussed in terms of their inputs, purposes, benefits, and drawbacks. For mobile price classification, some of the ML algorithms and strategies stood out in terms of their qualities, accuracy, and error rates. The best application of such methods is to find the optimal product, i.e., the one with the lowest cost and the most suitable specifications, so that items can be evaluated with respect to their advantages, cost, manufacturer, and so on, and an affordable product in a given price segment can be recommended to a customer. More refined artificial intelligence techniques can be used to increase the accuracy and predict the exact price of products, and selecting more suitable features can also improve accuracy. The dataset should therefore be large, and more suitable features should be selected, to achieve higher precision.

References 1. Guo J, Wang X (2019) Image Classification Based on SURF and KNN. In: International conference on computer and information science (ICIS), Beijing, China 2. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https:// doi.org/10.1177/0020720921989015 3. Mohapatra SK, Nayak P, Mishra S, Bisoy SK (2019) Green computing: a step towards ecofriendly computing. In: Emerging trends and applications in cognitive computing. IGI Global, pp 124–149 4. Sahoo S, Das M, Mishra S, Suman S (2021) A hybrid DTNB model for heart disorders prediction. In: Advances in electronics, communication and computing. Springer, Singapore, pp 155–163 5. Mishra S, Koner D, Jena L, Ranjan P (2021) Leaves shape categorization using convolution neural network model. In: Intelligent and cloud computing. Springer, Singapore, pp 375–383 6. Maehashi K, Shintani M (2020) Macroeconomic forecasting using factor models and machine learning: an application to Japan. J Jpn Int Econ 58


7. Mukherjee D, Tripathy HK, Mishra S (2021) Scope of medical bots in clinical domain. Tech Adv Mach Learn Healthc 936:339 8. Mishra S, Thakkar H, Mallick PK, Tiwari P, Alamri A (2021) A sustainable IoHT based computationally intelligent healthcare monitoring system for lung cancer risk detection. Sustain Cities Soc, 103079 9. Asim M, Khan Z (2018) Mobile price class prediction using machine learning techniques. Int J Comput Appl 179(29) 10. Nasser IM, Al-Shawwa M (2019) ANN for predicting mobile phone price range. Int J Acad Inf Syst Res (IJAISR) 3(2) 11. Guo S, Chen C, Wang J, Liu Y, Xu K, Chiu DM (2019) Fine-grained dynamic price prediction in ride-on-demand services: models and evaluations. Springer Science+Business Media, LLC, part of Springer Nature 12. Tseng KK, Lin RFY, Zhou H, Kurniajaya KJ, Li Q (2017) Price prediction of e-commerce products through Internet sentiment analysis. Springer Science+Business Media, LLC 13. Obthong M, Tantisantiwong N, Jeamwatthanachai W, Wills G A survey on machine learning for stock price prediction: algorithms and techniques. University of Southampton, Southampton, UK and Nottingham Trent University, Nottingham, UK 14. Zheng L, He H (2020) Share price prediction of aerospace relevant companies with recurrent neural networks based on PCA. AVIC General Huanan Aircraft Industry Co., Ltd., Zhuhai, China, and School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield 15. Selvin S, Vinayakumar R, Gopalakrishnan E, Menon VK, Soman K (2017) Stock price prediction using LSTM, RNN and CNN-sliding window model. In: 2017 international conference on advances in computing, communications and informatics (ICACCI) 16. Kumar P, Ravi (2017) Forecasting financial time series volatility using particle swarm optimization trained quantile regression neural network. Appl Soft Comput 17. Jena L, Kamila NK, Mishra S (2014) Privacy preserving distributed data mining with evolutionary computing. In: Proceedings of FICTA 2013. Springer, Cham, pp 259–267 18. Jena KC, Mishra S, Sahoo S, Mishra BK (2017) Principles, techniques and evaluation of recommendation systems. In: 2017 international conference on inventive systems and control (ICISC), IEEE, pp 1–6

Farmuser: An Intelligent Chat-Bot Interface for Agricultural Crop Marketing P. V. S. Meghana, Debasmita Sarkar, Rajarshi Chowdhury, and Abhigyan Ray

Abstract There has been a drastic shift from offline to online modes of work these days. Keeping the current scenario in mind, and aiming to provide prosperity and comfort to both farmers and consumers, the authors have designed a website called Farmuser integrated with an automated and intelligent chat-bot interface. This work helps eliminate the involvement of middlemen and market sharks, thereby establishing a direct and healthy trade environment. There is no buying or selling of crops on the site itself; instead, it offers the chance to effortlessly market one's crop, with reliable market information and transparency assured through a direct phone deal. This paper is an analysis and presentation of the authors' approach and work.

Keywords Farmers · Chat-bots · Machine learning · Dijkstra's algorithm · Consumer · Trade environment · Agriculture

1 Introduction

P. V. S. Meghana (B) · D. Sarkar · R. Chowdhury · A. Ray, School of Electronics Engineering, Kalinga Institute of Industrial Technology, Deemed To Be University, Bhubaneswar, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_55

It is believed that a nation grows with its farmers. They are the backbone of the Indian agriculture system: they not only feed the country but also contribute almost 17% of the nation's GDP, and over 70% of rural households depend on agriculture. Hence, to contribute to the country's progress, the authors have tried to provide the best platform for finding farm-fresh stock with just a click. Online agriculture markets are becoming more popular among communities due to the continuous demand for local fresh produce. Skimming through various websites and surveys, a problem in the agriculture industry came to the fore: the absence of a direct trade environment, an issue that seemed largely unaddressed. Hence the authors, a group of enthusiastic minds, dedicated their efforts to developing and providing the best trading experience, with a focus on credibility, variety, and user-friendliness. They decided to develop a website that would focus on


Fig. 1 General working representation of Farmuser

marketing the farm produce of a farmer so that interested buyers can contact them and set up a deal that benefits both the farmer and the buyer. Farmuser is a three-step, easy-to-use platform: Login/Register, Advertise (post availability), and Contact. Farmers can update the portal by specifying their location, the quantity of produce, and the payment modes they accept. They can publish free ads for selling their poultry, plants, crops, grains, seeds, fruits, vegetables, etc. Accordingly, buyers searching to purchase their requirements at an affordable price get filtered details of the nearby areas, the availability of the items, and the contact details of the seller. The main feature of this work is that it helps consumers who are unable to fulfill their crop requirements because of unavailability in their usual market: the website uses geographical filtration to make it easy to find the required crop in nearby markets within a limited radius of the user's area. This process eliminates manipulative middlemen and landlords who offer little or negligible profit to the farmer. Figure 1 represents the idea behind this work. The authors have also integrated an AI-based chat-bot into the website for the convenience of users: anyone having trouble using the portal can easily navigate the website by putting his or her queries to the chat-bot.
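The geographical filtration step described above can be sketched with a haversine distance check: keep only the listings posted within a chosen radius of the user's location. The listing data, field names, and coordinates below are hypothetical, not part of the actual Farmuser implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def listings_within(listings, lat, lon, radius_km):
    """Geographic filtration: keep only ads posted within the radius."""
    return [ad for ad in listings
            if haversine_km(lat, lon, ad["lat"], ad["lon"]) <= radius_km]

# Hypothetical farm listings around Bhubaneswar (about 20.30 N, 85.82 E)
ads = [
    {"crop": "rice",  "lat": 20.35, "lon": 85.85},
    {"crop": "mango", "lat": 21.50, "lon": 84.00},
]
print([ad["crop"] for ad in listings_within(ads, 20.30, 85.82, radius_km=25)])
```

Only the nearby rice listing survives the 25 km filter; in a production system this check would typically run as a geospatial database query rather than in application code.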

2 Literature Review

During the last 20 years, Indian agriculture has been facing major challenges such as a deceleration in the rate of growth, degradation of natural resources, inter-sectoral and inter-regional inequity, and declining input efficiency [1]. Compared with existing human information sources such as the KCC and agri-experts, a chat-bot system has the


potential to better serve farmers' needs for continuous learning. However, chat-bots for farmers still require an empirical understanding of the acceptability and usefulness of this new form of technology [2]. Over the past few years, horticulture has made remarkable progress in terms of expansion in area and production, with increases in productivity, crop diversification, and forward linkages through value addition and marketing [3]. Most of the enterprises in this sector are generally still in the initial adoption phase; they ought to mature further through the next phases of e-commerce adoption, as those stages are characterized by dynamic interaction with potential clients [4]. The utilization of ICT tools enhances information flow among users, which enables economic agents to perform economic activities faster by improving access to timely and accurate information; recent studies also suggest that information promotes competition and improves market performance [5]. One study analyzed and compared the optimal effort input equilibrium strategy, the optimal farm produce quality trajectory, and the profit-optimizing function of agricultural e-business, and found that farmers affiliated with the farmer-supermarket docking program could choose the suitable farm produce circulation mode to maximize their interests only by accurately assessing their actual earning capacity [6]. Another work presents a comprehensive review of research dedicated to applications of machine learning in agricultural production systems, with the analyzed works categorized on the basis of crop management [7]. A further study clarifies the influence of the progress of the information network on consumers, and tries to clarify the effectiveness and the limits of the role of e-commerce for consumers in the future; an asymmetry in the balance of information is recognized, and this problem is examined in relation to corporate monopolies [8]. One developed method used satellite imagery and crop growth characteristics fused with soil data for more accurate prediction, and a generalized method for agricultural yield prediction was presented in another study, based on an ENN applied to agronomical data generated over a long period (1997–2014) [7]. Fragmented agricultural markets make an ideal case for a unified platform such as the National Agricultural Market (NAM); growth in the volume of trade on the e-NAM platform will follow strengthened back-end infrastructure for the entire value chain of produce [9]. E-tendering has been successful in reducing transaction time, bringing transparency to price discovery, and increasing market revenue, besides enhancing market competition [10]. Traditionally, agricultural information exchange has been dominated by media such as newspapers, television, and magazines; in recent years, however, technology awareness, computer literacy, and usage of smartphones and the internet have been increasing across all demographics in India [11]. A multinomial regression model was used by B. Jari and G. C. G. Fraser to analyze the factors that influence marketing choices among smallholder and emerging farmers in the area under study; empirical results showed that market information, expertise on grades and standards, contractual agreements, social capital, market infrastructure, group involvement, and tradition significantly influence household marketing behavior [12]. The choice of electronic sales channels depends on the production output of a particular producer: for large producers, the best option is to combine such


P. V. S. Meghana et al.

promotion channels as an electronic commodity exchange and an online store, while for farmers with limited production volume, their own website and an electronic trading platform would be preferable. For a large agricultural enterprise, the preferred channels are an electronic commodity exchange and an online store; for a farm with limited sales volume, an electronic trading platform and a website are preferable [13].

3 Technology Stack

• Hardware specifications: Processor—Intel® Core™ i5-8250U CPU @ 1.60 GHz, Processor Speed—1800 MHz, Operating system—64-bit Windows 10, RAM—8.00 GB, Storage—455 GB, Resolution—1920 × 1080, Graphics—Intel® UHD Graphics 620, OpenGL rendering GPU—GeForce MX130.
• JavaScript has been used to develop the front-end code.
• The front-end styling and front-end design have been done with CSS and HTML, respectively.
• Python has been used to develop the back-end code.
• Django has been used as the back-end framework and to integrate the back-end with the front-end as well as the database.
• The authors chose PostgreSQL for the database because of its compatibility with Django.
• The ML chat-bot has been designed with Dialogflow, which is also used to integrate the chat-bot into the front-end code.

The front-end code is written in HTML and JavaScript, with Visual Studio Code used to run these codes [14]. For the back-end framework, Django (a high-level Python web framework) has been used; Django creates the dynamic content in the web pages, and the database is linked to the back-end and administered through pgAdmin. Here the authors have used PostgreSQL for the database. Migrations had to be made so that data could be added to and fetched from the database; any data entered through the front-end is stored in the database [15]. The aim of this work is to connect farmers and customers directly, so a chat-bot was necessary to assist farmers while using the website. The authors therefore decided to use Dialogflow, a platform by Google for designing a conversational user interface and integrating it into a response-generating bot. Using the chat-bot, farmers or customers can be guided whenever they face a problem.
If the bot is not able to resolve the problem, the user is asked to provide his or her details so that one of the team members can make contact personally and help with the problem. All the chats and the customer information given are stored in the Dialogflow database.
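The Django–PostgreSQL pairing described above is wired up in the project's settings module. The fragment below is a hedged sketch of such a configuration; the database name, user, and password are placeholders, not the authors' actual values:

```python
# settings.py (fragment) -- illustrative values only
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # Django's PostgreSQL backend
        "NAME": "farmuser_db",       # hypothetical database name
        "USER": "farmuser",          # hypothetical role
        "PASSWORD": "change-me",     # placeholder credential
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```

With this in place, `python manage.py makemigrations` and `python manage.py migrate` create the tables that the front-end forms write into, matching the migration step the authors describe.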

Farmuser: An Intelligent Chat-Bot Interface …


4 Proposed Work and Methodology Farmuser is a user-friendly platform that generates market opportunities for verified buyers and sellers. Figure 2 presents the flow diagram of the proposed work. The website has a home page followed by a login page: if users have an account they can log in, or they can create one. For logging in, the user credentials, i.e., email id and password, are checked against the data stored in the database, whereas on signing up, the new user’s information is stored in the database. There are separate web pages for the farmer and customer portals. In the farmer portal, farmers can update the portal by specifying their location, the quantity of vegetables available, and their feasible payment modes. Correspondingly, in the customer portal, a buyer searching to purchase their requirements at an affordable price can get the filtered details of nearby areas along with the availability of the items [16, 17]. If the deal offered by any farmer suits the needs of the buyer, he or she may contact the farmer using the contact details provided and secure an over-the-phone deal. To calculate the shortest distance between a customer and a farmer, the authors have used Dijkstra’s algorithm, which finds the shortest paths between nodes in a graph by generating a shortest-path tree (SPT) with a given source as the root [18–20]. In this work, whenever a customer, after logging in, searches for a specific vegetable, the details of the farmer nearest to the customer, along with his address and contact details, are provided on the HTML page, from which the customer can pick the contact details and reach the farmer. The algorithm continues to run until all of the reachable locations are visited,

Fig. 2 UI flow diagram


Fig. 3 Internal structural view of Farmuser

which means that Dijkstra’s algorithm can be run to find the shortest path between any two reachable places and display the result. To perform this task using Django, Google Maps is integrated with Django by installing the django-google-maps package; Google Maps is then used to calculate the distance between the farmer’s and the customer’s locations. Figure 3 is a pictorial representation of the whole methodology discussed above.
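The shortest-path computation described above can be sketched in plain Python. The road network below is illustrative, not taken from the paper's code; edge weights stand in for distances between a customer and nearby farmers:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with non-negative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source to every reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]                       # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):      # stale heap entry, skip it
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                   # relax edge u -> v
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road network: weights are distances in km.
roads = {
    "customer": [("junction", 2), ("farmer_A", 7)],
    "junction": [("farmer_A", 3), ("farmer_B", 5)],
    "farmer_A": [("farmer_B", 1)],
    "farmer_B": [],
}
print(dijkstra(roads, "customer"))
# farmer_A is reached via the junction (2 + 3 = 5), not by the direct 7 km edge.
```

The algorithm keeps popping the closest unsettled node until every reachable node has a final distance, which matches the behavior described in the text.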

5 Result Analysis The Dijkstra and Bellman-Ford algorithms both solve the single-source shortest-path problem, but the Dijkstra algorithm is better at reducing the time complexity of calculating the distances from the source node to all other nodes, as represented in Fig. 4. The objective of the graph in Fig. 4 is to examine the relationship between execution time and path length; the starting point is the same for both algorithms. The execution time of both algorithms increases with distance: up to a particular distance both take the same time, after which the time for the remaining distance grows gradually for the Dijkstra algorithm and exponentially for the Bellman-Ford algorithm. The results show that the Dijkstra algorithm is much faster than the Bellman-Ford algorithm. Figure 5 highlights the production of crops per year. Though there is a rise in overall agricultural growth and crop output value, farmers’ income has not increased significantly; in recent years no relation between the crop output and the


Fig. 4 Time complexity between the Dijkstra and Bellman-Ford algorithms
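The complexity gap behind Fig. 4 follows from how the two algorithms work: Dijkstra with a binary heap runs in O((V+E) log V), while Bellman-Ford relaxes every edge V−1 times for O(V·E). A minimal Bellman-Ford sketch (the 4-node graph is illustrative, not the paper's data) makes the extra work visible:

```python
def bellman_ford(edges, num_nodes, source):
    """Single-source shortest paths by relaxing all edges num_nodes-1 times.

    edges: list of (u, v, weight) triples with nodes numbered 0..num_nodes-1.
    Total work is O(V * E), regardless of how quickly distances settle.
    """
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):
        for u, v, w in edges:              # every edge, every pass
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w      # relax edge u -> v
    return dist

# Illustrative graph: 0 = customer, 1 = junction, 2 = farmer_A, 3 = farmer_B.
edges = [(0, 1, 2), (0, 2, 7), (1, 2, 3), (1, 3, 5), (2, 3, 1)]
print(bellman_ford(edges, 4, 0))  # [0, 2, 5, 6]
```

Both algorithms produce the same distances on non-negative weights; Bellman-Ford simply spends V−1 full passes over the edge list to get there, which is why its curve in Fig. 4 rises faster.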

Fig. 5 Representation of the increase in the number of crops produced per year

income of farmers is seen. Out of the many factors, one can be the presence of middlemen between the farmer and the customer: they increase the cost of marketing, the price of the product goes up, and the consumer has to pay a higher price. By eliminating the middlemen, the price of vegetables will decrease, which benefits the farmers, who earn more profit, and the consumers, who pay less for the products. With proper utilization of this


Fig. 6 Representation of changing trends of Flask versus Django

research and developed means of transportation, consumers can easily purchase crops/vegetables directly from the farmers without the services of middlemen. Figure 6 represents the comparison between Django and Flask, which are Python-based web frameworks. Django is a full-stack web framework with a ready-to-use admin interface that can be customized, and it allows users to divide a single project into multiple small applications, which makes them easy to develop and maintain, whereas Flask is single-threaded by default and may not perform well under heavy load. Hence Django has been used by the authors.

6 Conclusion The online agriculture market has a bright future. It can help farmers keep their businesses running and conquer a much larger market. Investors may not see the scope now, but this concept is here to stay. There has been a lot of work in the field of agro marketing, and the findings of this study reveal that this rise has driven the agriculture market in terms of both the producer and the consumer. While working on this analysis, many ideas came up on how to make further improvements, what features could be added, and what changes could be made. One of them, for instance, is adding the feature of online money transactions; this would make the website B2B. Another improvement is increasing the efficiency of agricultural marketing through regular training and extension to reach region-specific farmers in their local language: through the website, farmers will be trained in basic communication and marketing skills in the language most common in their locale. Farmers can


also contact vehicle providers, through any online transport service, to move the agricultural produce from their location to the buyer’s destination.

References

1. Sanjay A (2015) A study of recent trends in agriculture. In: National conference on changes in management practices in global scenario, vol 1, February 2015
2. Vota W (2019) FarmChat: using chat-bots to answer farmer queries in India. ICTworks, 2 Jan 2019
3. Neeraj, AC, Bisht V, Johar V (2017) Marketing and production of fruits and vegetables in India. Int J Curr Microbiol Appl Sci, 10 Sept 2017
4. Jena KC, Mishra S, Sahoo S, Mishra BK (2017, January) Principles, techniques and evaluation of recommendation systems. In: 2017 international conference on inventive systems and control (ICISC). IEEE, pp 1–6
5. Katengeza SP, Okello JJ, Jambo N (2011) Use of mobile phone technology in agricultural marketing: the case of smallholder farmers in Malawi. Int J ICT Res Dev Africa
6. Mofan C, Changta C (2017) Analysis and comparison of operational efficiency of rural E-commerce and farmer-supermarket docking modes. School of Economics & Management, Fuzhou University
7. Liakos KG, Busato P, Moshou D, Pearson S, Bochtis D (2018) Machine learning in agriculture: a review. 14 Aug 2018
8. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https://doi.org/10.1177/0020720921989015
9. Bisen J, Kumar R (2018) Agricultural marketing reforms and e-national agricultural market (e-NAM) in India: a review. Agric Econ Res Rev 31 (Conference Number)
10. Pavithra S, Gracy CP, Saxenaa R, Patila GG (2018) Innovations in agricultural marketing: a case study of e-tendering system in Karnataka, India. Agric Econ Res Rev
11. Khou A, Suresh KR (2018, November) A study on the role of social media mobile applications and its impact on agricultural marketing in Puducherry region. J Manage (JOM) 5
12. Jari B, Fraser GCG (2009, November) An analysis of institutional and technical factors influencing agricultural marketing amongst smallholder farmers in the Kat River Valley, Eastern Cape Province, South Africa. Afr J Agric Res 4
13. Alekhina O, Ignatyeva G, Khodov D (2019) Digitalization in the field of agricultural marketing. In: International scientific and practical conference “Digitization of agriculture—development strategy”
14. Mishra S, Thakkar H, Mallick PK, Tiwari P, Alamri A (2021) A sustainable IoHT based computationally intelligent healthcare monitoring system for lung cancer risk detection. Sustain Cities Soc 103079
15. Mukherjee D, Tripathy HK, Mishra S (2021) Scope of medical bots in clinical domain. Tech Adv Mach Learn Healthc 936:339
16. Rath M, Mishra S (2020) Security approaches in machine learning for satellite communication. In: Machine learning and data mining in aerospace technology. Springer, Cham, pp 189–204
17. Dutta A, Misra C, Barik RK, Mishra S (2021) Enhancing mist assisted cloud computing toward secure and scalable architecture for smart healthcare. In: Hura G, Singh A, Siong Hoe L (eds) Advances in communication and computational technology. Lecture notes in electrical engineering, vol 668. Springer, Singapore. https://doi.org/10.1007/978-981-15-5341-7_116
18. Mohapatra SK, Nayak P, Mishra S, Bisoy SK (2019) Green computing: a step towards eco-friendly computing. In: Emerging trends and applications in cognitive computing. IGI Global, pp 124–149
19. Jena L, Kamila NK, Mishra S (2014) Privacy preserving distributed data mining with evolutionary computing. In: Proceedings of the international conference on frontiers of intelligent computing: theory and applications (FICTA) 2013. Springer, Cham, pp 259–267
20. Sahoo S, Das M, Mishra S, Suman S (2021) A hybrid DTNB model for heart disorders prediction. In: Advances in electronics, communication and computing. Springer, Singapore, pp 155–163

Prominent Cancer Risk Detection Using Ensemble Learning Sanya Raghuwanshi, Manaswini Singh, Srestha Rath, and Sushruta Mishra

Abstract The detection of abnormalities and anomalies in the human body is a difficulty faced by field experts around the globe, and one of those many challenges is detecting the various types of cancer in the human body. The reason for choosing ‘Cancer’ as our focus is to emphasize the fatality of the disease and to simultaneously spread awareness about it. The primary aim of this paper is to detect brain, breast, and lung cancer with the assistance of pre-existing data. In this study, an attempt is made to predict these three types of cancer using ensemble machine learning classifiers: the decision tree classifier, K-nearest neighbor, the random forest classifier, logistic regression, and the support vector classifier. This helps present efficient results and also gives a comparative study of the various models. We apply pre-processing, data visualization, and finally the classifier models on the datasets to determine their accuracy and precision. Based on the results, we found the most suitable ensemble classifier to be the random forest classifier, with the highest accuracy and precision. Keywords Random forest · Decision tree · Support vector machine · Logistic regression · Cancer · Ensemble learning classifier · K-nearest neighbor

S. Raghuwanshi · M. Singh · S. Rath · S. Mishra (B) School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, Odisha, India e-mail: [email protected] S. Raghuwanshi e-mail: [email protected] M. Singh e-mail: [email protected] S. Rath e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_56


S. Raghuwanshi et al.

1 Introduction Among all the organisms existing on earth, humans have the greatest reach. We have been around the globe, searched the depths of the oceans, climbed the peaks, and have even been on the moon. With that reach also comes exposure, exposure to things beyond our visible capacity. We as humans risk exposure to various kinds of microscopic organisms that may cause various diseases and health issues. Apart from this, humans have devised ways of living that are self-destructive in nature. We constantly strive for advanced technology that will bring us comfort, forgetting that we might be causing more damage than bringing ease. Humans were not wired to be so heavily dependent on technology or to lead a life of so much comfort, and this ease of life is coming at a heavy cost in the form of fatal lifestyle diseases. What we call development and ease of life is now becoming the reason for our demise. A person’s sedentary lifestyle may stem from work culture, habits, occupation, food habits, or various other reasons; these are some of the causes of lifestyle diseases. Lifestyle diseases are diseases based on the everyday habits of people. These habits reduce a person’s productivity and push them toward a lethargic lifestyle, which can cause a huge number of health issues and become the leading reason for chronic non-communicable diseases with fatal consequences. Some examples of lifestyle diseases are COPD, cancers, diabetes, cirrhosis, Alzheimer’s disease, CAD, and stroke. Cancer is one of the fatal lifestyle diseases that humans know of yet often choose to ignore. Therefore, the onus is on us to be prepared, as prevention is better than cure: knowing in advance the types and variants of the disease, we can be better prepared to find a solution and prevent fatality. For the purpose of cancer detection, we employ ensemble classifiers.
One of the main aspects of any machine learning algorithm is building a fair model from a historical or pre-existing dataset; this process of fitting the model to the data’s features is known as training, and the trained model can also be called a hypothesis. Learning algorithms that construct a set of classifiers and classify new data points by combining the classifiers’ predictions are known as ensemble methods. Ensemble models are generally considered to be much more accurate and effective than the individual classifiers that make up the ensemble, since ensemble methods train multiple hypotheses to solve the same problem. The most widely known example of ensemble modeling is the random forest: a large number of decision trees (the exact number is up to the user, tuned according to the results obtained) is used to predict the outcome. In this paper, we have used the support vector classifier, decision tree, random forest, logistic regression, and K-nearest neighbor classifiers.
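The single-hypothesis versus many-hypotheses idea above can be made concrete with a short scikit-learn sketch; the synthetic dataset and hyperparameters here are illustrative, not taken from the study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data standing in for a real medical dataset.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hypothesis (a single tree) vs. an ensemble of 100 trees, each trained
# on a bootstrap sample with random feature subsets.
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("single tree accuracy :", tree.score(X_te, y_te))
print("random forest accuracy:", forest.score(X_te, y_te))
```

Typically the aggregated forest generalizes better than any single tree, which is the motivation for preferring ensembles in this study.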


2 Background Study Cancer contributes to an estimated 9.6 million deaths, which means that around one in six deaths is caused by cancer. Statistical data from the ‘India-Global Cancer Observatory’ show that 1,324,413 new cases of cancer and 851,678 deaths due to cancer were recorded in 2020. Cancer affects men and women differently, and different types of cancer are prominent among them [1]. The graphs in Figs. 1 and 2 show the cancer type distribution among men and women: Fig. 1 depicts all the new and emerging cases of cancer in males worldwide, and Fig. 2 depicts those among females worldwide. Sung, Ferlay, Siegel, Laversanne, Soerjomataram, Jemal, and Bray [2] clearly explain how breast cancer is a predominant problem among women, while lung and prostate cancer are heavily found among men. Approximately 700,000 people in the United States are living with a primary brain tumor, and it has been predicted that around 85,000 more would be diagnosed in 2021, as also observed in [3]. Therefore, our study is based on cancer in these three organs: breast, lung, and brain. Here, we review the progress made in this field so far. In [4], we see the use of ensemble voting methods for the diagnosis of breast cancer, showing how ANN and logistic algorithms provide an efficient solution. In [5], we see the use of a variety of classifiers such as SVM, decision trees, neural networks, and naïve Bayes (NB) to detect lung cancer from symptoms with higher efficiency. In [6], different kinds

Fig. 1 New cases of cancer worldwide in males


Fig. 2 New cases of cancer worldwide in females

of learning methods, such as simple logistic regression and support vector machine learning with stochastic gradient descent enhancement, as well as neural networks, are used to detect breast cancer, with their F3 scores used for comparison. One kind of neural network used is the multilayer perceptron network. Additionally, a voting mechanism was employed to increase the efficiency of cancer detection. An estimated 19.3 million new cancer cases (18.1 million when nonmelanoma skin cancer, NMSC, is excluded) and around 10 million cancer deaths (excluding NMSC other than basal cell carcinoma) were reported worldwide in 2020, as shown in Table 1.

In recent years, MRI scans have also proven very useful for the detection of brain tumors and cancer, as can be seen from [7]. Apart from that, other prominent tools are blood tests and X-rays; the more common tools used to examine the body are CT and MRI scans. Checking for cancer (or for abnormal cell growth) in people who have no symptoms is called screening. The accuracy and clarity of MRI images are dependent on each other.

Table 1 Number of cases and deaths across the world for the three cancers

Area of cancer    New cases     New deaths
Brain             308,102       251,329
Female breast     2,261,419     684,996
Lungs             2,206,771     1,796,144

Cancer detection systems can be designed and implemented using machine learning algorithms, as seen in [8]. The same holds for breast cancer detection. A comparison of the most favorable ML algorithms and techniques used for breast cancer prediction on the Wisconsin Diagnosis Breast Cancer dataset, namely RF, KNN, and naïve Bayes, is observed in [9], where the accuracy percentage and precision are neck and neck and can be used well for treatment and detection. While experimenting for more accurate results, we come across new findings such as the NB classifier and the KNN classifier for the classification of breast cancer. A detailed differentiation between the two implementations follows, with accuracy evaluated through the procedure known as cross-validation. As depicted in [10], the final outcome shows that KNN provides the highest accuracy of 97.51%, while the NB classifier follows at 96.19% [11]. Finally, for lung cancer detection, [5] attempts to evaluate the predictive power of several kinds of classifiers, with the main aim of increasing the efficiency of symptom-based lung cancer detection. Classifiers such as SVM, decision tree, neural network, and naïve Bayes (NB) were evaluated on a valid and precise dataset obtained from the well-known UCI repository. Their performance was also weighed against many well-known ensembles such as random forest, and against a newer ensemble known as majority voting [12]. After heavy evaluation, it was noted that the gradient-boosted tree in particular outperformed all the other individual and ensemble classifiers, recording an accuracy of 90%. We also see how some methods help in increasing the lung cancer prediction rate.
This process is inspected using MATLAB-based results, which include metrics such as logarithmic loss and mean absolute error; precision, recall, and F-score are further metrics that proved extremely helpful, as in [13]. We also see how a comparison based on ML algorithms and analysis discloses that the proposed RBF classifier achieved a good accuracy of 81.25% and can without doubt be considered an effective technique for classifying and predicting lung cancer [14].

3 Proposed Cancer Prediction Model Ensemble methods combine many ML models into one, which results in greater accuracy; they merge the methodologies of artificial neural networks, extreme learning machines (ELM), and SVM classifiers [15]. Here, the following procedure is used: pre-processing, reduction of the initial dataset by feature identification, and classification. First, pre-processing is performed to refine the data for maximum accuracy. Second, feature extraction is performed with the help of various plots and graphs. Finally, ensemble classifier models are applied to the refined dataset for prediction. In this paper, we have implemented the datasets in Jupyter Notebook, using Python to code the detection of cancer in a patient and to measure the accuracy of the models/classifiers that are


implemented. For the purpose of prediction, ensemble machine learning classifiers have been employed, the reason being to allow more efficient prediction and to improve predictive accuracy and precision [16]. The ensemble classifiers used here are DTC, RFC, LR, SVC, and KNN.

1. Decision tree classifier: As the name suggests, this classifier uses a tree-like structure where every node is a choice, each branch denotes an outcome of the choice, and every leaf node represents an individual class.
2. Random forest classifier: It creates multiple decision trees from random subsets of the training data and aggregates the outcomes of the multiple trees to give a verdict.
3. Logistic regression: This classification algorithm is used to estimate the probability of the dependent variable, which in the case of logistic regression holds only two values (corresponding to two classes).
4. Support vector classifier: The purpose of this model is to fit the input data as well as is feasible and provide a hyperplane that separates the different classes.
5. K-nearest neighbor: It is a classification model that uses historical data for training and classifies new data based on similarity measures.

4 Workflow Diagram of the Cancer Prediction Model Figure 3 shows the workflow diagram of the whole process of analyzing and implementing the models.

Fig. 3 Workflow of the proposed model


5 Detection of Cancers Using Ensemble Classifiers First, we have extracted and differentiated the features present in the cells of patients affected by breast/brain/lung cancer and of normal people without any cancer cells. As a machine learning engineer or data scientist would, we have also used supervised machine learning classifier algorithms, otherwise known here as ensemble classifiers.

1. Import essential libraries: Here we import the four main libraries to begin with: Pandas for data manipulation, NumPy for computational power, Matplotlib for data visualization, and Seaborn for data visualization with Matplotlib’s support.
2. Data load: In this section, we load the datasets (the breast cancer dataset in this case) with the help of the scikit-learn library, which provides a collection of supervised as well as unsupervised algorithms in the Python programming language. It includes algorithms like SVM, KNN, and RF, to name a few, and builds on important scientific and numerical libraries such as NumPy and SciPy; other libraries include Pandas and Seaborn. We load the data using a scikit-learn load_organ-name_cancer class.
3. Data manipulation: In this part of the code, we get a rough view and understanding of the dataset loaded in the ‘data load’ step; here we use the cancer dataset. The type(cancer_dataset) snippet shows that scikit-learn stores the data like a dictionary data type, and the features of each cell appear in numeric format, since this is a numeric dataset. The cancer_dataset.keys() function prints the main keys in the dataset; cancer_dataset[‘data’] gives the features of all the cells in their respective numeric forms; cancer_dataset[‘target’] stores the malignant-or-benign value (as either 1 or 0); and cancer_dataset[‘target_names’] names the tumor in the cells, confirming whether it is malignant or benign. We also obtain a description of the contents of the dataset, the features and their names, and the name of the file in which the dataset is stored.
4. Creating the data frame: Here we create the data frame and then convert it into a CSV file. We can then choose how many values we want as output, starting from the head of the data frame, and likewise for the last few values at the tail of the data frame. We can view detailed information using the cancer_df.info() function, easily see the numerical distribution of the data using the cancer_df.describe() function, and segregate the null values.
5. Data visualization: First we display a pair plot of the cancer dataset and its data frame [17]. The pair plot shows the distribution of values inside a scatter plot for a better understanding of the dataset; we use the pair plot for the features of the sample data frame.
6. Heatmap and correlation bar plot: The first heatmap we make is for the data frame, plotting all the information in the cancer dataset in a generalized format. After that, we plot a heatmap of the correlation matrix, which is useful for finding the correlation between each feature and the target and visualizing it. We also take the relation of each feature with the target and visualize it as a bar plot, printing the main contents of the second data frame after that.
7. Data pre-processing: The first step is splitting the data frame, that is, taking an input variable X and an output variable y and dividing the data frame into training and testing data, using the from sklearn.model_selection import train_test_split function. We then experimentally print the individual values X_train, X_test, y_train, and y_test and inspect the output. The second step is feature scaling, the process of converting data of different units and magnitudes/absolute values into one common scale, using the from sklearn.preprocessing import StandardScaler function. Standard scaling has been used for this research.
8. Implementing machine learning models: We import the required library using the from sklearn.metrics import accuracy_score function, which is used to find the accuracy of the ML models [18]. NOTE: The two steps below are repeated for every classifier used.
   I. Training the model with standard-scaled data: First we train the model with the standard-scaled data, fit the data frames in the classifier, and then find the accuracy of this data for the particular classifier under evaluation. We do this for all five classifiers used.
   II. Finding the precision for the scaled data of the classifier used: We find the precision of the scaled data for all the classifiers used, using the precision_score() function.
9. Comparison report for models: Here we put together and print the accuracy and precision of all the ML models used and compare them to find out which model shows the highest accuracy. For our research, the highest accuracy and precision are displayed by the random forest classifier: 97.66% accuracy for brain cancer, 83.33% for lung cancer, and 97.36% for breast cancer.
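Steps 1–9 above can be condensed into a short runnable sketch. The dataset loader, split ratio, and hyperparameters below are illustrative defaults, not necessarily the authors' exact settings:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Steps 2-4 and 7 (splitting): load the breast cancer dataset and split it.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Step 7 (feature scaling): fit the scaler on training data only.
scaler = StandardScaler().fit(X_train)
X_train_sc, X_test_sc = scaler.transform(X_train), scaler.transform(X_test)

# Steps 8-9: train each classifier on scaled data, report accuracy and precision.
classifiers = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=5000),
    "SVC": SVC(),
    "K-nearest neighbor": KNeighborsClassifier(),
}
report = {}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_train_sc, y_train).predict(X_test_sc)
    report[name] = (accuracy_score(y_test, y_pred),
                    precision_score(y_test, y_pred))
    print(f"{name:20s} acc={report[name][0]:.4f} prec={report[name][1]:.4f}")
```

With scaling applied, all five classifiers perform comparably on this dataset; the comparison report is then simply the `report` dictionary sorted by accuracy.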

6 Results and Discussions The analysis clearly elaborated the working of these ML models. This leads to studying the results obtained and deriving relevant conclusions from them. We have

Prominent Cancer Risk Detection Using Ensemble Learning

Table 2 Breast cancer performance analysis

Classifier                  Model accuracy (%)    Precision
Decision tree classifier    94.73684210526315     [0.77777778, 0.74358974]
Random forest classifier    97.36842105263158     [1.0, 0.70212766]
Logistic regression         96.49122807017544     [0.97826087, 0.95588235]
SVC                         57.89473684210527     [0.97826087, 0.95588235]
K-nearest neighbor          93.85964912280701     [0.0, 0.57894737]

taken five different models for the research paper and calculated their accuracy and precision for comparison.

Accuracy: It calculates the percentage of accurately predicted observations. Accuracy = (True Positive + True Negative)/(True Positive + False Positive + False Negative + True Negative).

Precision: It calculates the ratio of accurately predicted positive observations to the total predicted positive observations. Precision = True Positive/(True Positive + False Positive).

Table 2 depicts the precision and accuracy analysis of breast cancer data samples using the classifiers. Figure 4 depicts the accuracy bar plot of breast cancer using scikit-learn libraries. Table 3 highlights the precision and accuracy metrics for the brain cancer data. Figure 5 depicts the accuracy bar plot of brain cancer using scikit-learn libraries. Random forest shows the best accuracy. All the models have shown similar, closely clustered values; most reach above 90% accuracy because of the clean data available for brain cancer. Table 4 shows the lung cancer evaluation with performance parameters like precision and accuracy rate.
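The two formulas above can be checked directly against scikit-learn's built-in metrics; the labels below are made up for illustration.

```python
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Unpack the confusion-matrix counts
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# Apply the formulas from the text
accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)

# Both agree with scikit-learn's implementations
assert accuracy == accuracy_score(y_true, y_pred)
assert precision == precision_score(y_true, y_pred)
```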

Fig. 4 Accuracy bar plot of breast cancer

S. Raghuwanshi et al.

Table 3 Brain cancer performance analysis

Classifier                  Model accuracy (%)    Precision
Decision tree classifier    94.73684210526315     [0.94915254, 0.94339623]
Random forest classifier    97.6608187134503      [0.97435897, 0.98148148]
Logistic regression         97.07602339181285     [0.96610169, 0.98113208]
SVC                         97.07602339181285     [0.97413793, 0.96363636]
K-nearest neighbor          97.07602339181285     [0.96610169, 0.98113208]

Fig. 5 Accuracy bar plot of brain cancer

Table 4 Lung cancer performance analysis

Classifier                  Model accuracy (%)    Precision
Logistic regression         66.6667               [0.75, 0.5]
Linear SVC                  66.6667               [0.75, 0.5]
KNN                         66.6667               [0.75, 0.5]
Decision tree classifier    66.6667               [0.75, 0.5]
Random forest               83.3333               [0.8, 1.0]

Figure 6 illustrates the accuracy bar plot of lung cancer using scikit-learn libraries. Random forest shows the best accuracy. The five models observed for this research are decision tree, random forest, logistic regression, support vector classifier, and K-nearest neighbor classifiers. The resultant figures from the analysis lead us to the conclusion that ‘random forest classifier’ has


Fig. 6 Accuracy bar plot of lung cancer

the highest accuracy and precision. Now, there are many reasons why random forest gives the best results [19]. They are as follows:

• Versatility: Random forest is very versatile regarding the data types it handles. It can deal with binary, categorical, and numerical features. Additionally, it does not require extensive pre-processing.
• Parallelizable: The work can be split across multiple machines (or cores) to run in parallel, which leads to faster computation time.
• High dimensionality: It works well with high-dimensional data because each split considers only a subset of the features.
• Training speed: It trains comparatively fast because each tree works on only a subset of features, so it is easier to work with many features. The provision of saving generated forests for future use makes the prediction speed faster than the training speed.
• Robust to outliers: It deals with outliers by binning them and is insensitive to nonlinear attributes.
• Handling unbalanced data: This classifier tries to reduce the overall error rate, so in the case of unbalanced data, the larger class has a lower error rate whereas the smaller class has a higher error rate.
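Several of these properties correspond to tunable parameters of scikit-learn's RandomForestClassifier; the settings below are illustrative, not the ones used in the paper, and the dataset is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic high-dimensional data standing in for a cancer dataset
X, y = make_classification(n_samples=300, n_features=40, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,
    max_features="sqrt",      # each split sees only a subset of features
    n_jobs=-1,                # grow the trees in parallel across all cores
    class_weight="balanced",  # one mitigation for unbalanced data
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy
```

The fitted forest can also be persisted (e.g., with joblib) so that prediction later does not repeat the training cost, which is the "saving generated forests" point above.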

7 Conclusion

The focus of this paper, as mentioned earlier, has been to spread awareness about the disease called cancer. It is a lifestyle disease that is often fatal, yet there is a lack of knowledge and awareness about it among people. There are more than 200 types of cancer, and it is very difficult to discuss all of them in one paper, so we focused on three: brain, breast, and lung cancer. Through this paper, we wanted to predict cancer in order to provide appropriate treatment to patients before the condition worsens. For this purpose, we employ ensemble machine learning classifiers. These enable efficient prediction and improve accuracy


and prediction. We used decision tree, random forest, logistic regression, support vector classifier, and K-nearest neighbor classifiers. The final observation concludes that ‘random forest classifier’ shows the best accuracy and precision.

References

1. Jena L, Mishra S, Nayak S, Ranjan P, Mishra MK (2021) Variable optimization in cervical cancer data using particle swarm optimization. In: Advances in electronics, communication and computing. Springer, Singapore, pp 147–153
2. World Health Organization (WHO) (2020) Global health estimates 2020: deaths by cause, age, sex, by country and by region, 2000–2019. WHO. Accessed 11 Dec 2020
3. Cancer by WHO, March 2021. https://www.who.int/news-room/fact-sheets/detail/cancer
4. Khuriwal N, Mishra N (2018) Breast cancer diagnosis using adaptive voting ensemble machine learning algorithm. In: 2018 IEEMA engineer infinite conference (eTechNxT), pp 1–5. https://doi.org/10.1109/ETECHNXT.2018.8385355
5. Faisal MI, Bashir S, Khan ZS, Hassan Khan F (2018) An evaluation of machine learning classifiers and ensembles for early stage prediction of lung cancer. In: 2018 3rd international conference on emerging trends in engineering, sciences and technology (ICEEST), pp 1–4. https://doi.org/10.1109/ICEEST.2018.8643311
6. Assiri AS, Nazir S, Velastin SA (2020) Breast tumor classification using an ensemble machine learning method. J Imaging 6(6):39. https://doi.org/10.3390/jimaging6060039
7. Dutta A, Misra C, Barik RK, Mishra S (2021) Enhancing mist assisted cloud computing toward secure and scalable architecture for smart healthcare. In: Hura G, Singh A, Siong Hoe L (eds) Advances in communication and computational technology. Lecture Notes in Electrical Engineering, vol 668. Springer, Singapore. https://doi.org/10.1007/978-981-15-5341-7_116
8. Hemanth G, Janardhan M, Sujihelen L (2019) Design and implementing brain tumor detection using machine learning approach. In: 2019 3rd international conference on trends in electronics and informatics (ICOEI), pp 1289–1294. https://doi.org/10.1109/ICOEI.2019.8862553
9. Rath M, Mishra S (2020) Security approaches in machine learning for satellite communication. In: Machine learning and data mining in aerospace technology. Springer, Cham, pp 189–204
10. Mishra S, Thakkar H, Mallick PK, Tiwari P, Alamri A (2021) A sustainable IoHT based computationally intelligent healthcare monitoring system for lung cancer risk detection. Sustain Cities Soc 103079
11. Kodali RK, Swamy G, Lakshmi B (2015) An implementation of IoT for healthcare. In: 2015 IEEE recent advances in intelligent computational systems (RAICS). https://ieeexplore.ieee.org/abstract/document/7488451
12. Callahan A, Shah NH (2017) Chapter 19—machine learning in healthcare. https://doi.org/10.1016/B978-0-12-809523-2.00019-4
13. Shakeel PM, Burhanuddin MA, Desa MI (2020) Automatic lung cancer detection from CT image using improved deep neural network and ensemble classifier. Neural Comput Appl. https://doi.org/10.1007/s00521-020-04842-6
14. Zhang ML (2009) ML-RBF: RBF neural networks for multi-label learning. Neural Process Lett 29:61–74. https://doi.org/10.1007/s11063-009-9095-3
15. Mohapatra SK, Nayak P, Mishra S, Bisoy SK (2019) Green computing: a step towards eco-friendly computing. In: Emerging trends and applications in cognitive computing. IGI Global, pp 124–149
16. Jena L, Kamila NK, Mishra S (2014) Privacy preserving distributed data mining with evolutionary computing. In: Proceedings of the international conference on frontiers of intelligent computing: theory and applications (FICTA) 2013. Springer, Cham, pp 259–267


17. Sahoo S, Das M, Mishra S, Suman S (2021) A hybrid DTNB model for heart disorders prediction. In: Advances in electronics, communication and computing. Springer, Singapore, pp 155–163
18. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https://doi.org/10.1177/0020720921989015
19. Jena KC, Mishra S, Sahoo S, Mishra BK (2017, January) Principles, techniques and evaluation of recommendation systems. In: 2017 international conference on inventive systems and control (ICISC). IEEE, pp 1–6

Portfolio Optimization for US-Based Equity Instruments Using Monte-Carlo Simulation

Ayan Mukherjee, Ashish Kumar Singh, Pradeep Kumar Mallick, and Sasmita Rani Samanta

Abstract MPT, or the modern portfolio theory, also known as mean–variance analysis, is a mathematical modeling technique deployed to construct portfolios that maximize the portfolio return for a given amount of risk. In this paper we optimize portfolios in accordance with the modern portfolio theory for US-based equity instruments using Monte-Carlo simulations. For a given portfolio ‘P’ having ‘n’ stocks, with each stock ‘i’ having a weight of ‘wi’, we compute the mean and risk (standard deviation) and optimize our portfolio by optimizing the weights ‘wi’ of the equity instruments using Monte-Carlo simulation.

Keywords Modern portfolio theory (MPT) · Monte-Carlo simulation · Portfolio optimization · Equity instruments · Risk profile

1 Introduction

The modern portfolio theory was devised by Harry Markowitz in 1952, for which he won the 1990 Nobel Prize in Economics. The modern portfolio theory is based on the assumption that investors are risk-averse, i.e., given two portfolios that offer the same expected return, a rational investor will prefer the one with minimal risk [1]. Thus, an investor is willing to take a higher risk only if the same is compensated by a higher expected return [1]. Similarly, if an investor aims for greater expected returns he/she must be willing to accept a higher associated risk.

A. Mukherjee · A. K. Singh · P. K. Mallick (B)
School of Computer Science & Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, India
e-mail: [email protected]
A. Mukherjee
e-mail: [email protected]
S. R. Samanta
Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_57


A. Mukherjee et al.

The explicit trade-off between risk (standard deviation) and expected return cannot be generalized for all investors. Depending on individual risk-aversion characteristics, different investors are bound to evaluate the trade-off differently. The implication is that a rational investor will not invest in a given portfolio if there exists another portfolio with a more advantageous expected return–risk profile. Under the model:

• The expected return of a portfolio is the proportion-weighted combination of the expected returns of its constituent assets.
• The portfolio volatility is a function of the correlations ρij of the constituent assets, for all asset pairs (i, j).

2 Literature Review

Markowitz’s work on the modern portfolio theory can be considered a fundamental breakthrough in the asset allocation and portfolio optimization problem. It postulates that rational investors are risk-averse [1], modeling the problem quantitatively in a mean–variance analysis framework. Under this framework, the portfolio variance is minimized subject to a pre-defined value for the expected return on the entire portfolio [2].

Sharpe proposed the capital asset pricing model (CAPM) in 1964, which takes into account an asset’s sensitivity to non-diversifiable risk when it is added to an existing, well-diversified portfolio. It accounts for the covariance structure of expected portfolio returns, the portfolio variance and its volatility, and the market premium, which is the difference between the expected market return on the asset and the risk-free rate of return [3].

In the paper ‘Portfolio optimization by using linear programming models based on genetic algorithms’, Firman and co-authors arranged the optimization problem into a linear model and determined the optimum solution using a genetic algorithm. The paper conclusively demonstrated that a genetic algorithm has a higher capability of finding the optimum solution than a traditional linear programming algorithm [4].

C. Liu, W. Gan, and Y. Chen, in their paper on portfolio optimization based on affinity propagation and genetic algorithms, used the affinity propagation algorithm to construct a candidate set of portfolios based on correlation analysis of the stock time series. Then, using the Sharpe ratio as the optimization objective function, a genetic algorithm is used to solve for an optimal portfolio strategy with higher return and lower risk [5, 6].
Over the years, there have been tremendous advances in research on exploring return and risk measures to quantify the parameters for optimizing portfolios including constant and time-varying higher moments on returns.


3 Mathematical Background

3.1 Expected Return

$$E(R_p) = \sum_i w_i E(R_i) \tag{1}$$

Equation (1) illustrates the mathematical formula for computing the expected return of the portfolio P, where $R_p$ is the expected return on the portfolio ‘P’, $R_i$ is the expected return on asset i, and $w_i$ is the associated weight of the component asset i.

3.2 Portfolio Return Variance

It is the measure of risk associated with a portfolio in this model. A higher variance is indicative of a higher risk associated with the given asset class and the portfolio.

$$\sigma_p^2 = \sum_i w_i^2 \sigma_i^2 + \sum_i \sum_{j \neq i} w_i w_j \sigma_i \sigma_j \rho_{ij} \tag{2}$$

In Eq. (2), $\sigma_i$ denotes the (sample) standard deviation of the periodic returns on asset i, and $\rho_{ij}$ is the correlation coefficient between the returns on assets i and j. Since $\rho_{ii} = 1$, the expression can alternatively be written with the sums running over all pairs, including i = j, as shown in Eq. (3):

$$\sigma_p^2 = \sum_i \sum_j w_i w_j \sigma_i \sigma_j \rho_{ij} \tag{3}$$

$$\sigma_p^2 = \sum_i \sum_j w_i w_j \sigma_{ij} \tag{4}$$

In Eq. (4), $\sigma_{ij}$ denotes the (sample) covariance of the periodic returns on the two assets, alternatively denoted as $\sigma(i,j)$, $\mathrm{cov}_{ij}$, or $\mathrm{cov}(i,j)$.

3.3 Portfolio Return Volatility (Standard Deviation)

Equation (5) denotes the mathematical expression for computing the portfolio return volatility of the said portfolio P. The portfolio return volatility is also a quantitative measure of the risk associated with the portfolio.

$$\sigma_p = \sqrt{\sigma_p^2} \tag{5}$$


3.4 Sharpe Ratio

It is the measure of the return of an investment in relation to the risk-free rate (treasury rate) and its risk profile. In general, a higher value of SR indicates a better and more lucrative investment [7]. When comparing two portfolios with similar risk profiles, all else being equal, it is better to invest in the portfolio having the higher Sharpe ratio.

$$SR = \frac{R_p - R_f}{\sigma_p} \tag{6}$$

In Eq. (6), which describes the formula for computing the Sharpe ratio, $R_p$ is the return of the portfolio, $R_f$ is the risk-free rate, and $\sigma_p$ denotes the standard deviation of the portfolio’s excess return.
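Equations (1)–(6) translate into a few lines of NumPy; the two-asset weights, returns, correlations, and risk-free rate below are invented for illustration.

```python
import numpy as np

w = np.array([0.6, 0.4])        # portfolio weights w_i, summing to 1
mu = np.array([0.10, 0.06])     # expected returns E(R_i)
sigma = np.array([0.20, 0.10])  # standard deviations sigma_i
rho = np.array([[1.0, 0.3],
                [0.3, 1.0]])    # correlation matrix rho_ij

# Covariance matrix: sigma_ij = sigma_i * sigma_j * rho_ij
cov = np.outer(sigma, sigma) * rho

exp_return = w @ mu                      # Eq. (1)
variance = w @ cov @ w                   # Eq. (4)
volatility = np.sqrt(variance)           # Eq. (5)
rf = 0.02                                # assumed risk-free rate
sharpe = (exp_return - rf) / volatility  # Eq. (6)
```

Writing the variance as the quadratic form wᵀΣw is exactly the double sum in Eq. (4), just expressed with matrix operations.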

3.5 The Efficient Frontier

It is the plot of risk versus expected return and is deployed to identify the most optimal portfolio to invest in, considering the risk profile and characteristics of the given investor [8]. The efficient frontier is the essential part of the curve in the first and second quadrants, depending on investor objectives and characteristics. As illustrated in Fig. 1, the capital allocation line (CAL) is a tangent drawn to the efficient frontier, and the point of tangency is appraised to be the optimal investment, i.e., the portfolio that has the highest expected return for a given level of risk, under normal circumstances.

Fig. 1 Plot of expected return versus standard deviation and CAL


The MPT is instrumental for investors keen on diversifying their portfolios. As a matter of fact, the growth of exchange-traded funds (ETFs) made the theory more relevant by allowing investors easier access to different asset classes. Equity investors manage risk using MPT by putting a small chunk of their portfolios in government-bond ETFs. This way the variance of the portfolio becomes significantly lower, since government bonds have a low correlation with equity instruments, and this loss-reducing effect has no large impact on the expected return.

4 Methodology

4.1 Annual/Yearly Returns and Risk/Standard Deviation (SD)

In this paper, we have considered the daily returns and corresponding standard deviation of three-month historical data of publicly traded equity instruments on the NYSE. In practice, however, institutions and firms work with yearly returns and SD. Equation (7) describes the mathematical relationship between daily and yearly returns and SD:

$$R_A = R_d \cdot 252, \qquad \sigma_A = \sigma_d \cdot \sqrt{252} \tag{7}$$

In Eq. (7), $R_A$ is the annual return, $R_d$ is the daily return, $\sigma_A$ is the annual standard deviation, and $\sigma_d$ is the daily standard deviation. The number of active trading days in a year is taken to be 252.

A dataset containing the % change of the adj. closing price of ~200 equity instruments publicly traded on the NYSE was first created by fetching the historical data over a three-month period. Initially, random weights are assigned to the stocks selected for a particular portfolio, keeping the sum of weights equal to 1. The expected return and SD of the portfolio are then calculated and stored. Monte-Carlo simulations are then used on the portfolio in Python to obtain optimal weights for each equity instrument in the portfolio.
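Under the 252-trading-day convention, this annualization can be written as follows (the daily figures are purely illustrative):

```python
import numpy as np

TRADING_DAYS = 252
r_daily = 0.0005   # illustrative mean daily return
sd_daily = 0.012   # illustrative daily standard deviation

r_annual = r_daily * TRADING_DAYS              # R_A = R_d * 252
sd_annual = sd_daily * np.sqrt(TRADING_DAYS)   # sigma_A = sigma_d * sqrt(252)
```

The standard deviation scales with the square root of time because variances, not standard deviations, add across independent periods.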

4.2 Monte-Carlo Simulation Monte-Carlo simulation is a statistical method of computation that deploys a vast number of random samples to obtain results [9]. In this simulation, random weights are assigned to the equity instruments keeping the sum of weights to be 1. The expected return and SD are then calculated for every combination of these weights and stored. Weights are then again changed, assigned randomly, and the process is repeated.


The number of iterations depends on the error that the investor is willing to accept. With an increase in the number of iterations, the accuracy of the optimization also increases, but at the cost of computation and time. In this paper, 10,000 such iterations have been considered. Out of the 10,000 results of expected return and risk profile generated, portfolio optimization can be achieved by identifying the portfolio which satisfies any of these three conditions:

1. A portfolio that has the lowest risk associated with the desired level of expected return.
2. A portfolio that gives the highest expected return for the desired risk level.
3. A portfolio that has the highest expected return to SD/risk ratio, also known as the Sharpe ratio.
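The simulation loop can be sketched as follows. Synthetic, normally distributed daily returns stand in for the NYSE dataset, and the iteration count is reduced for brevity; the risk-free rate is assumed to be zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days = 4, 63  # ~3 months of daily data
daily = rng.normal(0.0005, 0.01, (n_days, n_assets))  # synthetic % changes
mu, cov = daily.mean(axis=0), np.cov(daily.T)
rf_daily = 0.0  # assumed risk-free rate

n_iter = 2000
results = np.empty((n_iter, 3))   # columns: return, risk, Sharpe ratio
weights = np.empty((n_iter, n_assets))
for k in range(n_iter):
    w = rng.random(n_assets)
    w /= w.sum()                  # random weights summing to 1
    ret = w @ mu
    sd = np.sqrt(w @ cov @ w)
    results[k] = ret, sd, (ret - rf_daily) / sd
    weights[k] = w

# Pick optimal portfolios per the conditions above
best_sharpe = weights[results[:, 2].argmax()]  # condition 3
min_risk = weights[results[:, 1].argmin()]     # lowest-risk portfolio
```

With 10,000 iterations instead of 2,000, the sampled cloud of (risk, return) points approximates the feasible region more densely, which is the accuracy/computation trade-off described above.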

4.3 Portfolio Optimization Process in Python

The stock price data is fetched from the dataset using the pandas library [10–13]. The mean/expected return values of all equity instruments in the portfolio and the corresponding covariance matrix are then generated, as illustrated in Fig. 2. An array is declared to store the results of each iteration. The weights of all equity instruments are stored in the array columns, making the column count subject to change with the number of equity instruments in the portfolio, since the weights of all the stocks are to be stored [14, 15]. The len function has been used to achieve this. The row count of the array equals the number of iterations generated.

Fig. 2 Fetching historical data from dataset, computing mean returns, and generating the covariance matrix


The results are stored in a ‘pandas data frame’ to allow for ease of analysis, as illustrated in Fig. 3. From the data frame, the portfolios satisfying the above three conditions are displayed, as illustrated in Fig. 4. In the output plot generated in Fig. 4, the red star indicates the portfolio with the highest Sharpe ratio and the blue star indicates the portfolio with the lowest standard deviation or risk. From the plot generated (in Fig. 4), we deduce the composition of the optimal portfolio required on the basis of any of the above-mentioned three criteria.

Fig. 3 Results of 10,000 iterations stored in pandas data frame for analysis

Fig. 4 Fetching portfolio with max Sharpe and lowest risk and its corresponding graphical illustration


5 Observation and Results

Multiple optimized portfolios have been generated using the above process by taking equity instruments from two sectors, the results of which are shown below:

1. Technology sector
2. Banking sector

The equity instruments considered are tradable shares of companies from the above two sectors that are listed on the NYSE (New York Stock Exchange). In this paper we have illustrated five such results by constructing five optimal portfolios: three from the technology sector and two from the banking sector.

Portfolio 1: Technology Sector
The first portfolio is constructed by taking the equity instruments of three large-cap companies from the technology sector. The equity instruments considered in this portfolio consist of AMZN (Amazon), ABC (Alphabet Inc./Google), and ORCL (Oracle). Figure 5 illustrates the output for this portfolio.

Portfolio 2: Technology Sector
The second portfolio is constructed by taking the equity instruments of four large-cap companies from the technology sector. The equity instruments considered in this portfolio consist of AAPL (Apple), ABT (Abbott Labs), AMZN (Amazon), and ABC (Alphabet Inc./Google). Figure 6 illustrates the output for this portfolio.

Portfolio 3: Technology Sector
The third portfolio is constructed by taking the equity instruments of three large-cap companies from the technology sector. The equity instruments considered in this portfolio consist of ADI (Analog Devices Inc.), ADSK (Autodesk Inc.), and AAPL (Apple Inc.). Figure 7 illustrates the output for this portfolio.

Fig. 5 Output of Portfolio 1 constructed considering equity instruments of three large-cap tech companies


Fig. 6 Output of Portfolio 2 constructed considering equity instruments of four large-cap tech companies

Fig. 7 Output of Portfolio 3 constructed considering equity instruments of three large-cap tech companies

Portfolio 4: Banking Sector
The fourth portfolio is constructed by taking the equity instruments of four large-cap companies from the banking sector. The equity instruments considered in this portfolio consist of BAC (Bank of America), JPM (J.P. Morgan), GS (Goldman Sachs), and MS (Morgan Stanley). Figure 8 illustrates the output for this portfolio.

Portfolio 5: Banking Sector
The fifth portfolio is constructed by taking the equity instruments of three large-cap companies from the banking sector. The equity instruments considered in this portfolio consist of C (Citi Bank), WFC (Wells Fargo), and BAC (Bank of America). Figure 9 illustrates the output for this portfolio.


Fig. 8 Output of Portfolio 4 constructed considering equity instruments of four large-cap banking companies

Fig. 9 Output of Portfolio 5 constructed considering equity instruments of three large-cap banking companies

6 Conclusion

The literature associated with portfolio optimization and asset allocation problems is quite extensive [16–19]. There exists a wide-ranging diversity of alterations and advancements on the fundamental methods, and significant active research has been done around them. In this paper we have used Monte-Carlo simulation to optimize a portfolio consisting of US-based equity instruments from selected sectors. The same approach can be applied not only to equity instruments from a wide array of sectors but also to different asset classes, so as to diversify investor portfolios and improve the risk to expected return trade-off.


References

1. Bertoluzzo F, Corazza M (2012) Testing different reinforcement learning configurations for financial trading: introduction and applications. Procedia Econ Finance 3:68–77
2. Markowitz HM (1952) Portfolio selection. J Financ 7(1):77–91. https://doi.org/10.2307/297597
3. Markowitz H (2000) Mean-variance analysis in portfolio choice and capital markets. Wiley. ISBN 978-1-883-24975-5
4. Sharpe WF (1964) Capital asset prices: a theory of market equilibrium under conditions of risk. J Financ 19(3):425–442
5. Firman S, Hidayat Y, Lesmana E, Sukma Putra A, Napitupulu H, Supian S (2018) Portfolio optimization by using linear programing models based on genetic algorithm. In: IOP conference series: materials science and engineering, vol 300, p 012001. https://doi.org/10.1088/1757-899X/300/1/012001
6. Liu C, Gan W, Chen Y (2017) Research on portfolio optimization based on affinity propagation and genetic algorithm. In: 2017 14th web information systems and applications conference (WISA), pp 122–126. https://doi.org/10.1109/WISA.2017.9
7. Mohapatra SK, Nayak P, Mishra S, Bisoy SK (2019) Green computing: a step towards eco-friendly computing. In: Emerging trends and applications in cognitive computing. IGI Global, pp 124–149
8. Chekhlov A, Uryasev S, Zabarankin M (2005) Drawdown measure in portfolio optimization. Int J Theor Appl Financ 8(01):13–58
9. Choueifaty Y, Coignard Y (2008) Toward maximum diversification. J Portfolio Mgmt 35(1):40–51
10. Mishra S, Mishra BK, Tripathy HK, Dutta A (2020) Analysis of the role and scope of big data analytics with IoT in health care domain. In: Handbook of data science approaches for biomedical engineering. Academic Press, pp 1–23
11. Rath M, Mishra S (2020) Security approaches in machine learning for satellite communication. In: Machine learning and data mining in aerospace technology. Springer, Cham, pp 189–204
12. Mishra S, Tripathy HK, Kishore B (2017) Filter based attribute optimization: a performance enhancement technique for healthcare experts
13. Mishra S, Tripathy HK, Panda AR (2018) An improved and adaptive attribute selection technique to optimize dengue fever prediction. Int J Eng Technol 7:480–486
14. Jena L, Patra B, Nayak S, Mishra S, Tripathy S (2019) Risk prediction of kidney disease using machine learning strategies. In: Intelligent and cloud computing. Springer, Singapore, pp 485–494
15. Mishra S, Tadesse Y, Dash A, Jena L, Ranjan P (2019) Thyroid disorder analysis using random forest classifier. In: Intelligent and cloud computing. Springer, Singapore, pp 385–390
16. Sahoo S, Das M, Mishra S, Suman S (2021) A hybrid DTNB model for heart disorders prediction. In: Advances in electronics, communication and computing. Springer, Singapore, pp 155–163
17. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Educ. https://doi.org/10.1177/0020720921989015
18. Jena KC, Mishra S, Sahoo S, Mishra BK (2017, January) Principles, techniques and evaluation of recommendation systems. In: 2017 international conference on inventive systems and control (ICISC). IEEE, pp 1–6
19. Mishra S, Thakkar H, Mallick PK, Tiwari P, Alamri A (2021) A sustainable IoHT based computationally intelligent healthcare monitoring system for lung cancer risk detection. Sustain Cities Soc 103079

A Smart Farming-Based Recommendation System Using Collaborative Machine Learning and Image Processing

Soham Chakraborty and Sushruta Mishra

Abstract Most agricultural lands get degraded due to an inappropriate choice of the crops grown on the land. Farmers do not have complete knowledge of soil essentials such as minerals, moisture, and other required factors. So, they often grow inappropriate crops on certain agricultural land, which leads a farmer to suffer financially and mentally due to the resulting economic loss. Our main objective in this research work is to address this issue by building a recommendation system that uses machine learning and image processing techniques. By using our model, farmers can predict the appropriate crop to be grown on the agricultural land and detect the pests that may affect it. We have applied the KNN, XGBoost, random forest, and artificial neural network classification algorithms in this paper and found that the XGBoost classification algorithm outperforms the other algorithms. Along with producing a recommendation for best crop selection using these classification algorithms, we have also introduced a collaborative convolutional neural network model, with the help of which farmers can input the image of a crop, based on which our model can predict whether it is infected or not. We have considered and compared two CNN models, DenseNet and MobileNetV2, between which the DenseNet architecture performed better. The aggregated output of the classification model and the CNN model plays a role in the final decision-making.

Keywords Crop selection · Pest identification · Recommendation system · KNN · XGBoost · Random forest · ANN · CNN · DenseNet · MobileNetV2

1 Introduction

The agricultural sector contributes about 18% to India’s GDP and provides jobs to many people. That clearly shows that agriculture is an important contributor to the overall economy of our country [1]. Poor-quality agriculture can lead to loss of

S. Chakraborty · S. Mishra (B)
School of Computer Science Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed To Be University, Bhubaneswar, Odisha, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_58


S. Chakraborty and S. Mishra

lives through unemployment as well as food shortage. This in turn may even lead to a crisis and impact the economic condition [2]. Recently, agriculture has been impacted. According to the UN’s Food and Agriculture Organization (FAO), one billion tons of food get wasted each year, or one-third of all food produced for human consumption [3]. Many factors contribute to these losses, including insufficient land, soil exhaustion due to lack of manures, improper crop selection, climatic changes, pests, and weeds, among others. Among most of these elements, approximately 30– 33% of general yield is misplaced because of pesticides as said by P.K. Chakrabarty. So, selecting an appropriate crop and preventing this vegetation maintain the same importance. It is crucial in addition to hard as there are numerous elements that determine the crop. This is appropriate for development and there are numerous bugs that may also have an effect on the vegetation differently [4]. With an increase in the era, many up-to-date eras have been carried out in the farming area to enhance the fitness of vegetation named precision agriculture. A higher call for precision agriculture may be “site-particular agriculture”. Indian farmers have a tendency to pick the wrong crop for his/her soil and this hassle may be addressed with the aid of using precision agriculture in which the weather traits like temp, pH, and so forth are used for predicting which crop will be appropriate for a particular agricultural land and weather type. This reduces the hazard of cultivating inappropriate crops which together consequences in higher crop yield from a selected land. Normally agricultural workers save their vegetation with the aid of using prediction of the pesticides manually and putting them off, which calls for severe labor [5]. So, using the era of advice machine we will suggest the vegetation appropriate for developing and a way to eliminate the pest which regularly assaults that vegetation [6]. 
With our research, we aim to help farmers become more technically sound so that they can spread awareness among their fellow farmers about the right crop to choose; additionally, by preventing crop loss each year, they can become financially stable [7]. The remainder of the paper is divided into five sections. In Sect. 2, we describe related works on similar research interests. In Sect. 3, we describe the background details of the research work. In Sect. 4, we discuss our proposed approach. In Sect. 5, we perform the performance analysis. In the last section, Sect. 6, we draw the conclusions from the overall research work.

2 Related Works

S. Babu recommended the need for appropriate farming in India through a software model intended to reach every farmer in South India [8]; his studies mainly emphasized the technological handling of the agricultural sector. The main objective is to help agricultural landholders and workers with the help of technologies, like a smart recommendation system, for growing appropriate crops. This model was mainly developed for Kerala because the average size of


agricultural land here is smaller than in most of India; however, the same model can be applied anywhere with minimal modifications. S. Pudumalar et al. discussed the problems farmers face due to poor selection of the crop for the type of soil available, the cause being the lack of suitable knowledge [9]. They designed a recommendation system that can act as a guide for farmers to choose the right crop for the soil. After a series of tests on the algorithms, using random forest, naive Bayes, the CHAID model, and K-nearest neighbor, they concluded that the designed system can predict suitable results to a fair and decent level, with an accuracy of 88%. Although they tested the model for a specific state, the model, with hardly any amendment to the dataset, could provide accurate recommendations for a bigger geographic region. A. Savla et al. described how pests affect plants the most, leading to huge losses for farmers, and stressed the importance of proper elimination strategies [10]. The methodology proposed there is a PCT-O ontology model describing the occurrence of a pest and how it is eliminated, whether by chemical or physical means. The data is retrieved using an information retrieval (IR) system, and the pests that might affect the crop are anticipated by the recommendation system. R. Kumar et al. described a model that uses the farmer's location to predict the crop appropriate for that region, working with different climatic records and economic statistics at the sub-district level [11]. Chlingaryan et al. conducted a research study on the estimation of nitrogen status using ML [12]. That paper claims that developments in sensor technologies and the widespread use of ML models will strongly impact the growth and development of the agricultural sector.
Elavarasan et al. [13] experimented with different ML models for the prediction of crop yield based on climate-derived features, stressing the importance of finding the appropriate and necessary features that impact crop yield. Liakos et al. [14] published a review of different ML applications in the agricultural sector, focusing mainly on crop management, livestock management, water management, and soil management. Li, Lecourt, and Bishop performed a review study on determining the ripeness of fruits to predict the appropriate time of harvesting and for yield prediction [15]. Mayuri and Priya have also worked in the same field and proposed an image processing technique to detect diseases [16]. Somvanshi and Mishra surveyed several ML approaches along with their respective applications in botany [17]. Gandhi and Armstrong published a review on applying data mining techniques in the agriculture sector in a generic manner, and claimed that specialized training using data mining methods can extract patterns and help understand complex agricultural datasets [18]. Beulah surveyed the different data mining methods that can be applied to crop yield prediction and concluded that by employing data mining techniques the crop yield prediction problem can be solved efficiently [19].


Fig. 1 A pie-chart representation of the balanced dataset

3 Background Concepts

3.1 Data Acquisition

The dataset has been collected from trusted online data repositories, including Kaggle, the UCI machine learning repository, etc. [20]. We considered two large datasets for training our models. The first is a tabular dataset used mainly to train the classification models. It consists of several features: nitrogen level, phosphorus level, potassium level, temperature, pH value, humidity, and amount of rainfall. The pH is crucial as it affects the availability of essential nutrients. Rainfall is likewise a crucial component for crop prediction; the water requirement of every crop differs, so it has also been taken into account. Temperature also plays a very crucial role in crop growth and development. Hence, all the above-mentioned features are important for predicting the crop seed to be cultivated. The categories of the dataset are the names of different crops, with a balanced distribution over all the categories; the data distribution is presented in Fig. 1. The second dataset contains a repository of crop images, along with a CSV file that stores the details of each image file and the category to which it belongs. There are four categories: Healthy, Multiple Diseases, Rust, and Scab. The distribution of the four categories is shown as a pie-chart in Fig. 2.

3.2 Exploratory Data Analysis

After collecting the dataset, some exploratory data analysis was performed to get a better understanding of the data [21]. First, we carried out a univariate analysis of all the features, which gives a notion of the distribution of individual feature values. The univariate analysis is shown in Fig. 3.
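A univariate look at a numeric feature boils down to a handful of summary statistics. As a minimal sketch (the rainfall values below are hypothetical, not taken from the actual dataset):

```python
import statistics

# Hypothetical sample of one feature column (rainfall in mm); the real dataset
# holds N, P, K, temperature, pH, humidity, and rainfall per record.
rainfall = [202.9, 226.6, 263.9, 242.8, 251.1, 181.0, 230.4]

# A univariate analysis of a feature reduces to its location and spread:
summary = {
    "mean":   statistics.mean(rainfall),
    "median": statistics.median(rainfall),
    "stdev":  statistics.stdev(rainfall),
    "min":    min(rainfall),
    "max":    max(rainfall),
}
```

Plotting a histogram of each column, as in Fig. 3, visualizes the same information.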


Fig. 2 A pie-chart representation of the image dataset

Fig. 3 Univariate feature distribution


Fig. 4 Bivariate feature analysis

Fig. 5 Correlation heatmap

Next, we approached multivariate analysis, where the relational distribution between pairs of features (bivariate analysis) was examined. The bivariate analysis is shown in Fig. 4. To understand the correlation between each pair of features, we plotted a heatmap of the correlations, shown in Fig. 5.
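The correlation heatmap is simply a matrix of pairwise Pearson coefficients. As an illustration (the feature values below are made up, not taken from the actual dataset), a minimal pure-Python sketch:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical feature columns (the real dataset has N, P, K, temperature, ...)
features = {
    "temperature": [21.0, 23.5, 26.0, 28.5, 31.0],
    "humidity":    [80.0, 75.0, 71.0, 66.0, 60.0],
    "rainfall":    [200.0, 180.0, 150.0, 120.0, 90.0],
}

# The heatmap is just this matrix of pairwise coefficients.
names = list(features)
corr = {a: {b: pearson(features[a], features[b]) for b in names} for a in names}
```

Each cell of the heatmap corresponds to one entry of `corr`; the diagonal is always 1.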

3.3 Classification Algorithms

1. KNN Classification Algorithm: K-nearest neighbor is one of the simplest machine learning algorithms based on supervised learning techniques [22]. KNN classifies a new observation by assuming similarity with existing records and assigning it to the category to which it most probably belongs. Although KNN can be used in both classification and regression settings, it is more often used for classification. KNN is a non-parametric method, meaning it makes no assumptions about the underlying data. It is known as a lazy learner because it does not undergo a training phase; instead it performs the classification on the fly at prediction time. Here we trained KNN classification models with different K-values in order to choose the K-value that gives the maximum accuracy. The error rate versus K-value is shown in Fig. 6.

Fig. 6 Error value plot for different K-values

2. XGBoost Classification Algorithm: XGBoost stands for "Extreme Gradient Boosting". It is a highly optimized, high-performing library designed to be efficient, flexible, and portable. It implements machine learning algorithms under the gradient boosting framework and offers parallel tree boosting to solve many data science problems in a fast and accurate way.

3. Random Forest Classification Algorithm: Random forest is a supervised learning algorithm. The "forest" it builds is an ensemble of decision trees. The central idea of the bagging approach is that aggregating multiple learners improves the overall result. One big advantage of random forest is that it can be used for both classification and regression problems, which make up the bulk of contemporary machine learning systems. Random forest adds extra randomness to the model while growing the trees: only a random subset of the features is considered by the algorithm when splitting a node, which generally yields a better model.

4. Artificial Neural Network Classification Algorithm: Artificial neural networks (ANNs), commonly called neural networks (NNs), are computing systems loosely inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. A key feature of ANNs is that they can automatically extract important features, unlike traditional machine learning algorithms where the features have to be engineered manually; the feature extraction is performed mainly by the deep hidden layers of the ANN architecture.
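The "lazy learner" behaviour described in item 1 (no training phase; classification by majority vote among the k closest records at prediction time) can be sketched in plain Python. The feature values and crop labels below are hypothetical stand-ins for the real dataset:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs; distance is Euclidean."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy stand-in for the crop dataset: (N, P, rainfall) -> crop label (made-up values)
train = [
    ((90, 40, 200), "rice"), ((85, 45, 210), "rice"), ((88, 42, 190), "rice"),
    ((20, 60, 60), "chickpea"), ((25, 65, 55), "chickpea"), ((22, 58, 65), "chickpea"),
]

print(knn_predict(train, (87, 43, 195), k=3))   # nearest neighbours are all rice
```

Sweeping k over a range and recording the error rate on held-out data produces exactly the kind of plot shown in Fig. 6.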

3.4 Convolution Network Architectures

1. DenseNet Architecture: DenseNet is essentially a modified standard CNN architecture in which each layer is connected to every other layer, hence the name dense convolutional neural network. For N layers, there exist N(N + 1)/2 direct connections. Each layer receives the feature maps of all preceding layers as input, and its own feature maps are passed as input to every subsequent layer; the input of a layer inside DenseNet is thus the concatenation of the feature maps of all previous layers. This dense connectivity is the core idea that makes the architecture so powerful.

2. MobileNet Architecture: MobileNetV2 is a CNN model designed for mobile devices. It is built on an inverted residual structure, where the residual connections are between the bottleneck layers. The intermediate layer uses lightweight depthwise convolutions to filter features, with a nonlinearity applied. Overall, the structure of MobileNetV2 consists of an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers.
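The N(N + 1)/2 connection count, and the channel growth caused by concatenating feature maps in a dense block, can be checked with a short sketch (the 64-channel input and growth rate of 32 are illustrative values, not taken from the trained model):

```python
def dense_connections(n_layers):
    """In a dense block, layer i receives the outputs of all earlier layers
    (plus the block input), giving n*(n+1)/2 direct connections for n layers."""
    return sum(i + 1 for i in range(n_layers))  # layer i has i+1 incoming links

def channels_after_block(input_channels, n_layers, growth_rate):
    """Each layer concatenates its `growth_rate` new feature maps onto its
    input, so the channel count grows linearly through the block."""
    return input_channels + n_layers * growth_rate

print(dense_connections(5))             # 15 = 5*6/2
print(channels_after_block(64, 6, 32))  # 64 + 6*32 = 256
```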

4 Proposed Work

In this research work we propose a collaborative model combining a classification algorithm with a convolutional neural network architecture. This two-fold architecture helps to determine the appropriate crop to plant based on natural conditions and to identify pests in crops beforehand. We considered the four classification algorithms discussed above, namely the KNN, XGBoost, random forest, and ANN classification algorithms. The dataset was divided into train and test splits using stratified sampling, so as to draw a balanced subset from all of the crop subtypes. After the split, all the necessary preprocessing and embedding were performed using techniques like label encoding and target encoding. Following the preprocessing and feature engineering steps, all four classification algorithms were trained on the training dataset, and their individual performance on the test set was observed and noted. Some hyper-parameter tuning was performed so as to obtain the maximum accuracy for all the models on both the train and the test data. The model takes weather parameters as input, such as


Fig. 7 Input/output of the machine learning classification algorithm

nitrogen level, phosphorus level, potassium level, temperature, pH value, humidity, and amount of rainfall, and outputs the name of the crop to plant under those weather conditions. A glimpse of the notebook is shown in Fig. 7. This part dealt with crop recommendation; in the next part, we discuss pest identification from images of crops. The convolutional neural networks used in this research are the DenseNet and MobileNetV2 architectures. In our use case we trained both models on images of crops belonging to four categories: Healthy, Multiple Diseases, Rust, and Scab. Our main target was to obtain a percentage for each category for a given crop image input. This was done by adding a softmax layer after the fully connected layer and training the model over the four categories mentioned earlier. In deployment, this can be used as a tool where farmers simply upload a crop image and receive the percentage to which the crop belongs to each category. A glimpse of the input and output is shown in Fig. 8. The overall process is a two-fold architecture: the first phase recommends a suitable crop before planting, and the second phase helps maintain the crops after planting through a pest identification system, with the help of which farmers can easily segregate diseased plants and save their farms from damage. This workflow is shown in the block diagram in Fig. 9.
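The softmax layer mentioned above converts the raw scores of the final fully connected layer into the per-category percentages shown to the farmer. A minimal sketch (the logit values are hypothetical):

```python
import math

def softmax(logits):
    """Map raw scores from the final fully connected layer to percentages
    that sum to 1 across the four categories."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

categories = ["Healthy", "Multiple Diseases", "Rust", "Scab"]
logits = [2.1, -0.5, 0.3, -1.2]          # hypothetical network outputs
probs = softmax(logits)
for name, p in zip(categories, probs):
    print(f"{name}: {p:.1%}")
```

In Keras this corresponds to ending the model with a 4-unit dense layer that uses a softmax activation.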

5 Results and Discussion

We carried out all the experiments using the Jupyter IDE (Anaconda 3). The libraries used include sklearn, for data preprocessing, the train test split, and classification algorithm experimentation; keras, a tensorflow-based deep learning library, for building the CNN models; and opencv, for image processing and image augmentation. After training all the classification algorithms


Fig. 8 Input/output of the DenseNet (pest identification) algorithm

Fig. 9 High-level workflow of the proposed model

and the CNN architectures, we noted their performance and produced a performance analysis plot. Among all the classification models, the XGBoost classification algorithm outperforms the others, with a training accuracy of 99.77% and a validation accuracy (the same as the test set accuracy here) of 99.95%. The performance analysis plot of the classification algorithms is shown in Fig. 10.


Fig. 10 Performance analysis of all the classification algorithms

The training and validation accuracy at different epochs was noted for both CNN architectures. The accuracy plot for DenseNet is shown in Fig. 11 and the accuracy plot for MobileNetV2 in Fig. 12. As we can see from the plots, MobileNetV2 clearly overfits the training dataset, due to which it cannot perform well on the validation set,

Fig. 11 Training accuracy and validation accuracy plot over different epochs for DenseNet

Fig. 12 Training accuracy and validation accuracy plot over different epochs for MobileNetV2


Fig. 13 Performance analysis of the CNN architectures

which is why its validation accuracy starts to decrease after epoch 5. We created checkpoints of the models at the highest-performing epochs to retain maximum performance. Comparing both CNN architectures, we clearly see that DenseNet performs much better than MobileNetV2. The performance analysis of both CNN models over the train set and validation set (the same as the test set here) is shown in Fig. 13. Thus, the final proposed recommendation system is built on the collaborative architecture of the XGBoost classification algorithm and the DenseNet CNN architecture, which can be used to recommend appropriate crops and identify pests in crops with maximum confidence.
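The checkpointing strategy amounts to restoring the weights saved at the epoch with the highest validation accuracy. A minimal sketch of the selection step, using hypothetical per-epoch accuracies that mimic the overfitting pattern of Fig. 12:

```python
def best_checkpoint(val_accuracies):
    """Return (epoch, accuracy) of the checkpoint with the highest validation
    accuracy: the epoch whose saved weights should be restored after training."""
    best_epoch = max(range(len(val_accuracies)), key=val_accuracies.__getitem__)
    return best_epoch, val_accuracies[best_epoch]

# Hypothetical per-epoch validation accuracies showing overfitting after epoch 5
val_acc = [0.61, 0.72, 0.80, 0.85, 0.88, 0.90, 0.87, 0.84, 0.82, 0.80]
epoch, acc = best_checkpoint(val_acc)
print(epoch, acc)   # 5 0.9
```

In Keras this is typically handled by a checkpoint callback configured to save only the best model according to the monitored validation metric.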

6 Conclusion

Agriculture, being a critical part of our economy, makes it necessary to be certain about even the smallest investment made in the agriculture sector, which needs to be looked after and properly utilized. It is therefore vital to check whether the right crop has been selected for a piece of land, so as to preserve the fertility of the land as well as maximize the outcome for farmers. Our main objectives of growing appropriate crops with accurate maintenance policies have been accomplished by modeling the problem as an intelligent recommendation system based on classification and CNN algorithms. In the future, we aim to build more advanced models on a larger dataset, with more added features.


References

1. Madhusudhan L (2015) Agriculture role on Indian economy. Bus Econ J
2. Mishra S, Mallick PK, Koner D (2021) Significance of IoT in the agricultural sector. In: Smart sensors for industrial internet of things. Springer, Cham, pp 173–194
3. Jena KC, Mishra S, Sahoo S, Mishra BK (2017) Principles, techniques and evaluation of recommendation systems. In: 2017 International conference on inventive systems and control (ICISC). IEEE, pp 1–6
4. Vijayabaskar PS, Sreemathi R, Keertanaa E (2017) Crop prediction using predictive analytics. In: IEEE international conference on computation of power, energy information and communication, Melmaruvathur, India, pp 370–373
5. Mishra S, Koner D, Jena L, Ranjan P (2021) Leaves shape categorization using convolution neural network model. In: Intelligent and cloud computing. Springer, Singapore, pp 375–383
6. Kanaga Subh Raja S, Rishi R, Sundaresan E, Srijit V (2017) Demand based crop recommender system for farmers. In: IEEE technological innovations in ICT for agriculture and rural development, Chennai, India, pp 194–199
7. Rath M, Mishra S (2020) Security approaches in machine learning for satellite communication. In: Machine learning and data mining in aerospace technology. Springer, Cham, pp 189–204
8. Mukherjee D, Tripathy HK, Mishra S (2021) Scope of medical bots in clinical domain. Tech Adv Mach Learn Healthc 936:339
9. Pudumalar S, Ramanujam E, Harine Rajashree R, Kavya C, Kiruthika T, Nisha J (2017) Crop recommendation system for precision agriculture. In: IEEE international conference on advanced computing, Chennai, India, pp 32–36
10. Savla A, Dhawan P, Bhadada H, Israni N, Mandholia A, Bhardwaj S (2015) Survey of classification algorithms for formulating yield prediction accuracy in precision agriculture. In: IEEE international conference on innovations in information, embedded and communication systems, Coimbatore, India, pp 1–5
11. Kumar R, Singh MP, Kumar P, Singh JP (2015) Crop selection method to maximize crop yield rate using machine learning technique. In: IEEE international conference on smart technologies and management for computing, communication, controls, energy and materials, Chennai, India, pp 138–145
12. Chlingaryan A, Sukkarieh S, Whelan B (2018) Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: a review. Comput Electron Agric 151:61–69. https://doi.org/10.1016/j.compag.2018.05.012
13. Elavarasan D, Vincent DR, Sharma V, Zomaya AY, Srinivasan K (2018) Forecasting yield by integrating agrarian factors and machine learning models: a survey. Comput Electron Agric 155:257–282. https://doi.org/10.1016/j.compag.2018.10.024
14. Liakos KG, Busato P, Moshou D, Pearson S, Bochtis D (2018) Machine learning in agriculture: a review. Sensors (Switzerland) 18(8). https://doi.org/10.3390/s18082674
15. Li B, Lecourt J, Bishop G (2018) Advances in non-destructive early assessment of fruit ripeness towards defining optimal time of harvest and yield prediction—a review. Plants 7(1). https://doi.org/10.3390/plants7010003
16. Mayuri PK, Priya VC (n.d.) Role of image processing and machine learning techniques in disease recognition, diagnosis and yield prediction of crops: a review. Int J Adv Res Comput Sci 9(2)
17. Somvanshi P, Mishra BN (2015) Machine learning techniques in plant biology. In: PlantOmics: the omics of plant science. Springer India, New Delhi, pp 731–754. https://doi.org/10.1007/978-81-322-2172-2_26
18. Gandhi N, Armstrong L (2016) Applying data mining techniques to predict yield of rice in humid subtropical climatic zone of India. In: Proceedings of the 10th INDIACom; 2016 3rd international conference on computing for sustainable global development, INDIACom 2016, pp 1901–1906
19. Beulah R (2019) A survey on different data mining techniques for crop yield prediction. Int J Comput Sci Eng 7(1):738–744


20. Jena L, Kamila NK, Mishra S (2014) Privacy preserving distributed data mining with evolutionary computing. In: Proceedings of the international conference on frontiers of intelligent computing: theory and applications (FICTA) 2013. Springer, Cham, pp 259–267
21. Sahoo S, Das M, Mishra S, Suman S (2021) A hybrid DTNB model for heart disorders prediction. In: Advances in electronics, communication and computing. Springer, Singapore, pp 155–163
22. Mishra S, Mallick PK, Tripathy HK, Jena L, Chae G-S (2021) Stacked KNN with hard voting predictive approach to assist hiring process in IT organizations. Int J Electr Eng Edu. https://doi.org/10.1177/0020720921989015

Applications of Artificial Intelligence in Small- and Medium-Sized Enterprises (SMEs) Samarjeet Borah , Chukwuma Kama, Sandip Rakshit , and Narasimha Rao Vajjhala

Abstract The advancements in deep learning methods have brought several new artificial intelligence (AI) applications, making AI important for every enterprise that aims to be competitive. Therefore, not only tech companies but also small- and medium-sized enterprises (SMEs) require AI. This paper discusses AI applications in SMEs and presents the challenges, solutions, and advantages of implementing AI in SMEs. Although some SMEs are wary of building their own applications because of the cost and duration of implementing AI and the resulting high risk of failure, SMEs nevertheless depend on artificial intelligence and cloud-based solutions for growth.

Keywords Artificial intelligence · Deep learning · Machine learning · SMEs · Efficiency · Development · Applications

1 Introduction

Over the years, many definitions of Artificial Intelligence (AI) have surfaced. In 2004, computer scientist John McCarthy defined AI as the engineering and science of building intelligent machines and programs to understand human intelligence [1]. Other definitions describe artificial intelligence as a branch of computer science that simulates human intelligence in intelligent machines, making them capable of carrying out tasks that usually require human effort. However, years before

S. Borah (B) Sikkim Manipal Institute of Technology, Majhitar, India
C. Kama · S. Rakshit American University of Nigeria, Yola, Nigeria
e-mail: [email protected]
S. Rakshit e-mail: [email protected]
N. R. Vajjhala University of New York Tirana, Tirana, Albania
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_59


S. Borah et al.

these definitions, artificial intelligence was first cited in Alan Turing's paper "Computing Machinery and Intelligence," published in 1950. Turing is referred to as the "father of computer science". In this paper, Turing asks the question, "Can machines think?", and then offers a test now popularly known as the "Turing Test," a method used to determine whether or not a machine is capable of thinking like a human [1]. The Turing test is an imitation game with a few modifications: there are three isolated players, one human responder, one computer, and one human interrogator whose job is to identify which of the two other players is the computer [1]. Artificial intelligence comprises various branches, including perception, language understanding, problem-solving, reasoning, and learning [2–5]. AI can be applied to multiple areas of life in the modern world, including chatbots, human resource management, healthcare systems, e-commerce, and logistics [4, 6–15]. Small- and medium-sized enterprises (SMEs) are businesses or firms whose employee count threshold depends on the country [16]; for most countries, 250 is the highest number of employees in an SME. SMEs play a vital role in a country's economy: they are more numerous, and they employ a larger number of people compared to large firms. According to the European Union (EU), small enterprises are businesses with fewer than 50 employees, while medium-sized enterprises are independent businesses with fewer than 250 employees [17, 18]. Some examples of SMEs include a gym, a law office, a dentist, a barbing salon, and a bar. Although the definition of SMEs varies depending on the country, certain characteristics remain constant for identifying them, such as limited investments, labor intensity, and a smaller number of employees.
Nowadays, SMEs have become a major pillar of a country's economy, and a country cannot thrive without them. Among the contributions of SMEs are employment generation, the development of local areas, and opportunities for new entrepreneurs. According to the European Commission (2003), an SME is a business with 0–250 workers, a maximum annual revenue of EUR 50 million, and a maximum annual balance sheet total of EUR 43 million [19].

2 Review of Literature

2.1 Application of AI in SMEs

Artificial intelligence is applied in small- and medium-sized businesses mainly to gain a competitive advantage. The application of AI in SMEs aims to increase company performance and sales, lower costs, save time, and improve customer management. Techniques used to implement artificial intelligence include deep


learning, neural networks, expert systems, machine learning, and fuzzy logic. Across these various techniques, the constant goal of artificial intelligence is to build flexible, interactive, and adaptive solutions to customers' needs and problems [20–23]. With the rate at which technology is advancing, one aspect businesses must pay attention to is artificial intelligence. In today's market, AI has become vital support, with services like social media monitoring, self-driving cars, smart assistants like Siri, virtual travel booking agents, automated financial investing, predicting machinery failures, and so on. While SMEs' innovation processes might sometimes be difficult to generalize, several features are commonly discussed in this regard. In comparison to large businesses, SMEs face a unique set of hurdles that may result in poor innovation performance. Due to a lack of access to capital, an inability to engage in innovation, or compliance with environmental rules, SMEs frequently face market failures that complicate the competitive landscape [24]. Additionally, SMEs have greater workforce constraints due to a shortage of, or insufficiently skilled, people. They lack alternative goods or "cash cows" to offset the low return on investment (ROI) associated with innovation. SMEs frequently struggle to overcome structural hurdles such as a lack of managerial and technical skills, labor market rigidities, and inadequate information about expansion prospects [25]. A few AI applications and tools are commonly used by businesses to improve their competitive advantage or integrate their systems. These business areas include improving sales and marketing efforts, automated customer service and communications, and improving recruitment and HR activities.

Improving Sales and Marketing Efforts

This is considered to be one of the significant benefits of AI [26–28].
It involves a marketing automation system that uses machine learning to improve the business's market targeting and build flexible communication between customers and the business. AI-based marketing increases marketing efficiency by steering efforts toward customers' interests. Typical AI-powered applications that support marketing are Google AdWords, Facebook, and Bing.

Automated Customer Service

An automated chat platform helps SMEs measure customer engagement, complaints, and experiences, reducing the resources needed for customer interactions [29, 30]. This brings about more significant customer engagement, thus increasing the company's revenue and retention. The aim of AI in this business area is to understand customers' needs by determining purchase patterns in order to increase the range of service. Furthermore, to solve customers' issues, artificial intelligence can help organize customers' inquiries, which reduces time wasted by customers who need help with basic queries.

Improving Recruitment and HR Activities

HR analytics uses AI to attract employees and differentiate workers regarding their wages, work conditions, and benefits or responsibilities [31, 32]. These analytics


systems automate routine administrative tasks such as reporting, payroll, and accounting. For instance, an AI applicant tracking tool helps SMEs reduce hiring time and cost by filtering through the job applications of potential candidates and selecting the one best suited for the job.

3 Challenges of AI Adoption in SMEs

Some challenges faced when applying AI to SMEs include the following.

Computing Power

Implementing AI in a business requires a certain amount of computing power, depending mainly on the type of business. Machine learning and deep learning are the stepping stones of AI, and they require a continuously increasing number of cores and GPUs to operate efficiently. Some domains for deploying deep learning frameworks, such as asteroid tracking, tracing cosmic bodies, and healthcare deployment, demand supercomputer-level computing power, although thanks to the current availability of cloud computing and parallel processing systems, developers can now work on AI systems more efficiently.

Trust Deficit

A trust deficit arises from the opaque way deep learning methods produce their outputs [33]. How a unique set of inputs can devise a solution for various problems is difficult to comprehend for most people who are not very familiar with computers. Today, most people are unaware of the existence or efficiency of AI and of how frequently it impacts items such as smart TVs, banking, cars, and smartphones.

Limited Knowledge

There are several places in the market where AI can outperform traditional methods [34]. The primary issue is a lack of information about artificial intelligence. Apart from technology enthusiasts, certain college students, and researchers, only a small percentage of the population is aware of artificial intelligence. For instance, many SMEs could use AI to schedule work or discover novel techniques to boost productivity, manage resources, sell and manage items online, study and understand customer behavior, and respond effectively and efficiently to market changes. Additionally, they are unaware of service providers such as Amazon Web Services, Google Cloud, IBM, and other technology businesses.
Data Privacy and Security Imagine a medical service provider serving many individuals in a city or cooperative, and the personal data of all users falling into the wrong hands due to a cyber-attack. This data may include patients’ health problems, medical history, and other credentials. With this much information gathered from various persons, there are likely to be cases of

Applications of Artificial Intelligence …


data leakage. Most companies have recently started implementing changes to overcome these barriers, such as controlling the data on which intelligent machines are trained.

Data Quality and Quantity The performance of a system is largely determined by the data it receives. An AI system requires a large number of training examples. Artificial intelligence, like humans, learns from available data, but it requires far more data than we do to find patterns. The difference is that AI can process data at a rate humans can only dream of, enabling it to learn quickly. The more accurate the data we submit, the more accurate the results. SMEs that use a large amount of data therefore need quality data and AI tools to maintain their credibility.

Some other common challenges, and their solutions, faced when implementing artificial intelligence in SMEs include the following.

Financial Challenges and Solutions Kusiak [35] suggests that one of the most significant challenges smart manufacturing companies face is accepting that a new age is fast coming, with factory automation and low-cost robotics replacing traditional jobs. Furthermore, he recognizes the importance of understanding future needs in a smart enterprise. This creates opportunities for SMEs, since they will be affected, and adequate optimization from the manufacturing companies is also essential. An SME can be defined by its financial assets, but is more often defined by its number of employees. AI has recently emerged in supply chain management but has not solved some of the problems existing within manufacturing processes. Min [36] suggests that the reason could be that some solutions are either too expensive or too difficult and complex to implement or understand; lack of resources is always an issue when developing new technologies. Kim et al. [37] argue that SMEs may have significant intangible assets, but limited capital and resources are often the problem, and supporting their manufacturing is also necessary. Radziwon et al. [38] likewise point to resource issues and doubt that SMEs can manage the cost of both purchasing technology solutions and maintaining them. Although past research leaves no question that technology adoption is a difficult task for SMEs because of their lack of resources, Haseeb et al. [39] say it could also depend on other market issues, such as drastic changes in the market.

Organizational Challenges and Solutions Processes in an SME are vastly different from those in a large company. SMEs are often more entrepreneurial, flexible, and independent, whereas larger companies are often controlled by processes and protocols. This difference has both positive and negative aspects, but some standardized processes could benefit the corporation and its culture [40]. Compared to larger companies, SMEs can implement digital transformations more rapidly because of their flexibility in processes and technology changes. Rauch et al. [40] discuss strategy in terms of the lack of knowledge transfer from experts to


S. Borah et al.

SMEs, and the lack of risk-management tools for investment in new processes, as challenges to organizational capabilities.

Technical Challenges and Solutions According to Brock and von Wangenheim [41], data is the necessary foundation for AI success. Advanced manufacturing technologies enable SMEs to take a further step toward automation. To track products through the value chain, real-time data, connectivity, and digitalization are important requirements. Vertical data integration should run from sales data in the ERP system, through production planning and control tools, to machine data. The issue with collecting data in real time is converting existing machines if they lack the required technologies; IoT gateways and sensors then have to be installed. Data security is another important requirement to consider, and SMEs often lack a strategy around it. Brock and von Wangenheim [41] suggest that a security strategy is crucial when transforming toward AI. For an SME, a high degree of customized solutions, production agility, and mass customization is critical when introducing new technologies. Manufacturing systems must be flexible, adjustable, and reconfigurable in response to short-term changes in volume or product variants. Another critical necessity is machine learning and smart data analytics, which may be unavailable to SMEs because of their limited organizational skills and expertise [42]. In the coming years, machine learning will become increasingly essential: it can optimize production planning and control techniques and implement predictive maintenance strategies on the shop floor [40].

4 Discussion

The primary objective of artificial intelligence is to gain a competitive edge in manufacturing, e-commerce, finance, human resources, marketing, and consumer relations, among other areas. AI-enabled procedures boost organizational performance, cut expenses, increase sales, streamline customer management, enhance data gathering and processing, save time, and minimize errors. Communication with clients appears to boost attendance, particularly in a customer-oriented scenario. AI marketing is a subset of direct marketing that combines database-marketing approaches with AI frameworks and techniques [43]. AI has the potential to increase brand recognition and reinforcement, increase conversion rates, nurture customer conversion, enhance customer care and upselling, and generate enthusiastic subscribers [43]. Numerous AI approaches, including neural networks, fuzzy control, genetic algorithms, neuro-fuzzy inference systems, knowledge-based systems, swarm intelligence-based techniques, machine learning approaches, and supervised learning, can be used to improve an organization’s performance [43], although AI is a very dynamic field of research, with fresh solutions being developed regularly. Regardless of the variety of methodologies and approaches used,



the primary goal of AI-based processes remains the same: to deliver more adaptable, flexible, and interactive solutions tailored to customers’ demands and interests [43]. In e-commerce, AI is utilized through recommendation algorithms that compare prior purchases against a massive data collection from other consumers. The primary purpose of these algorithms is to recommend a prospective purchase to a consumer. Marketing automation systems may also use machine learning to enhance consumer targeting, draw conclusions about customer actions, and construct more accurate communication [43]. AI can help marketers improve their behavioral customer analysis, particularly by automating it. Thus, companies may profit from implementing AI in various areas of daily performance, thereby increasing their competitive edge.

To integrate AI into a business, the key operations of the organization that may be improved should first be identified, followed by an analysis of the costs and time required to integrate the system. Among these activities, there may be those that need the analysis of huge datasets or the generation of conclusions using professional and specialized knowledge [43]. A decision should then be made on whether to acquire a custom-designed solution or one currently on the market. The first is more appropriate for large businesses, while the second may be more appropriate for small- and medium-sized businesses (SMEs). Nonetheless, integrating AI into a business’s operational processes demands a process-driven strategy [43].

Other benefits of AI in SME business include increasing the marketing advantage, providing better customer engagement, and strengthening cybersecurity.

Increase Marketing Advantage One of the most important benefits of implementing AI is improving marketing automation systems, which can use machine learning to improve customer targeting and build more accurate relationships based on customer behavior.
The advantage of AI is that it facilitates efficient marketing by driving efforts toward the right customers. Thus, AI marketing extends direct marketing by powering database-marketing techniques with AI-based models. AI-powered applications also support the marketing department.

Better Customer Engagement Since introducing automated chat platforms, SMEs have experienced an upward trend in customer engagement and experience. This helps a business understand customers’ needs and determine purchasing patterns and relevant services and offers. AI also helps in responding to specific customer queries using a knowledge base, which can minimize time-wasting. Chatbots and customer relationship management (CRM) systems are AI tools that help improve customer engagement. Many businesses now use chatbots to communicate with their customers and make their services available 24/7. These bots assist with valuable details and ensure that the customer is always engaged; they aim to give customers the feeling that they are interacting with human support, not a machine. Such support systems mostly handle return issues, company information, and other frequent customer issues.
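The recommendation algorithms described above can be illustrated with a minimal co-occurrence sketch: suggest items that customers with overlapping purchase histories also bought. The customer names and items below are hypothetical, and real systems use far richer models (matrix factorization, deep learning) over much larger datasets.

```python
# Minimal co-occurrence recommender sketch: recommend items that customers
# with overlapping purchases also bought. All data here is hypothetical.
from collections import Counter

purchases = {
    "alice": {"laptop", "mouse"},
    "bob": {"laptop", "mouse", "keyboard"},
    "carol": {"laptop", "keyboard"},
}

def recommend(customer, purchases):
    owned = purchases[customer]
    counts = Counter()
    for other, items in purchases.items():
        if other != customer and owned & items:   # overlapping taste
            counts.update(items - owned)          # candidate new items
    return [item for item, _ in counts.most_common()]

print(recommend("alice", purchases))  # items bought by similar customers
```

Even this crude scheme captures the core idea: prior purchases are compared against the purchases of other consumers to propose a prospective purchase.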



Similarly, the customer relationship management system can be embedded with AI to take customer relationships and integration to the next stage. This approach seamlessly provides vital insights for handling interactions with existing customers. AI integrated with CRM can manage real-time data to offer recommendations on business processes and on customers’ and employees’ company data. For example, it can evaluate customer responses and reviews, and it generates valuable predictions based on past data analysis.

Strengthens Cybersecurity Over time, businesses of every size have suffered cyber exploits that affected the business and its reputation. This is why large-, medium-, and small-sized companies implement AI tools to secure their data and block illegal external entries into their systems. With the help of AI and machine learning, a business can also detect strange or unwanted behaviors and spot vulnerabilities in the system. Owing to inadequate security checks, SMEs are more vulnerable to cybersecurity attacks than large enterprises.
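Detecting "strange or unwanted behaviors", as mentioned above, often reduces to flagging statistical outliers in activity logs. The sketch below uses a simple z-score rule on hypothetical daily login counts; the two-standard-deviation threshold and the data are illustrative assumptions, not a product recipe.

```python
# Hedged sketch of anomaly detection: flag days whose login count deviates
# from the mean by more than two standard deviations. Data is illustrative.
import statistics

daily_logins = [12, 14, 11, 13, 12, 15, 13, 90]  # last value is suspicious

mean = statistics.mean(daily_logins)
stdev = statistics.stdev(daily_logins)

anomalies = [x for x in daily_logins if abs(x - mean) > 2 * stdev]
print(anomalies)  # the outlier day(s) to investigate
```

Production systems replace the z-score with learned models (isolation forests, autoencoders), but the principle of learning "normal" behavior and flagging deviations is the same.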

5 Conclusion

Artificial intelligence technologies are nowadays becoming more widespread and extensively recognized, so SMEs now have access to technologies that were previously reserved for huge corporations. Nonetheless, some concerns must be considered. While AI solutions remain expensive, this argument will diminish with time and the further development of these technologies. To be effective, AI-based solutions must be fed well-prepared data from reputable sources. AI-based solutions also contribute to the reduction of employment by automating tasks that humans previously handled. Additionally, AI can rapidly gather, analyze, and forecast client data. While users’ data may be at risk, AI tools nevertheless provide enough security to safeguard their credentials. The study demonstrates that most publications on AI in SMEs are the work of IT and business experts.

References

1. Wooldridge M (2020) Artificial Intelligence requires more than deep learning—but what, exactly? Artif Intell 289:103386
2. Baboota R, Kaur H (2019) Predictive analysis and modelling football results using machine learning approach for English Premier League. Int J Forecast 35(2):741–755
3. Bengio Y, Courville A, Vincent P (2013) Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 35(8):1798–1828
4. Farsal W, Anter S, Ramdani M (2018) Deep learning: an overview. In: Proceedings of the 12th international conference on intelligent systems: theories and applications. Association for Computing Machinery, Rabat, Morocco, Article 38
5. Gong L et al (2019) Empirical evaluation of the impact of class overlap on software defect prediction. In: 2019 34th IEEE/ACM international conference on automated software engineering (ASE)
6. Biba M et al (2010) A novel structure refining algorithm for statistical-logical models. In: 2010 international conference on complex, intelligent and software intensive systems
7. Vajjhala NR et al (2021) Novel user preference recommender system based on Twitter profile analysis. In: Soft computing techniques and applications. Springer Singapore, Singapore
8. Vajjhala NR, Strang KD (2017) Measuring organizational-fit through socio-cultural big data. J New Math Nat Comput 13(2):145–158. https://doi.org/10.1142/S179300571740004X
9. Vajjhala NR, Strang KD (2019) Impact of psycho-demographic factors on smartphone purchase decisions. In: Proceedings of the 2019 international conference on information system and system management. Association for Computing Machinery, Rabat, Morocco, pp 5–10
10. Vajjhala NR, Strang KD, Sun Z (2015) Statistical modeling and visualizing of open big data using a terrorism case study. In: Open big data conference. IEEE, Rome, Italy
11. Ge J, Liu J, Liu W (2018) Comparative study on defect prediction algorithms of supervised learning software based on imbalanced classification data sets. In: 2018 19th IEEE/ACIS international conference on software engineering, artificial intelligence, networking and parallel/distributed computing (SNPD)
12. Ming-Syan C, Jiawei H, Yu PS (1996) Data mining: an overview from a database perspective. IEEE Trans Knowl Data Eng 8(6):866–883
13. Oliveira AL (2019) Biotechnology, big data and artificial intelligence. Biotechnol J 14(8):e1800613
14. Pentland A, Choudhury T (2000) Face recognition for smart environments. Computer 33(2):50–55
15. Song Q, Guo Y, Shepperd M (2019) A comprehensive investigation of the role of imbalanced learning for software defect prediction. IEEE Trans Software Eng 45(12):1253–1269
16. Vajjhala NR, Strang KD (2018) Sociotechnical challenges of transition economy SMEs during EU integration. In: Dima AM (ed) Doing business in Europe—Economic integration processes, policies, and the business environment. Springer, Netherlands, pp 295–313. ISBN: 9783319722399. https://doi.org/10.1007/978-3-319-72239-9. https://www.springer.com/us/book/9783319722382
17. Vajjhala NR (2015) Constructivist grounded theory applied to a culture study. In: Strang KD (ed) The Palgrave handbook of research design in business and management. Palgrave Macmillan US, New York, pp 447–464
18. Vajjhala NR, Strang KD (2019) Impact of psycho-demographic factors on smartphone purchase decisions. In: Qiu E (ed) Proceedings of the information system and system management conference. Rabat University, Morocco. http://www.issm.net/program.html
19. Potluri Rajasekhara M, Vajjhala Narasimha R (2018) A study on application of web 3.0 technologies in small and medium enterprises of India. J Asian Financ Econ Bus 5(2):73–79
20. Shepperd M et al (2013) Data quality: some comments on the NASA software defect datasets. IEEE Trans Software Eng 39(9):1208–1215
21. Strang KD, Sun Z (2019) Managerial controversies in artificial intelligence and big data analytics. In: Sun Z (ed) Managerial perspectives on intelligent big data analytics. IGI Global, Hershey, PA, pp 55–75. https://doi.org/10.4018/978-1-5225-7277-0.ch004. https://www.igi-global.com/chapter/managerial-controversies-in-artificial-intelligence-and-big-data-analytics/224331
22. Tuor A et al (2017) Predicting user roles from computer logs using recurrent neural networks. In: Proceedings of the thirty-first AAAI conference on artificial intelligence. AAAI Press, San Francisco, CA, USA, pp 4993–4994
23. Vinyals O et al (2015) Show and tell: a neural image caption generator. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR)
24. Kursh SR, Gold NA (2016) Adding fintech and blockchain to your curriculum. Bus Educ Innov J 8(2):6–12
25. Strang KD (2007) E-strategy first, e-technology free: building an online university with open source software (case study). In: Innovation, education, technology, and you: online conference for teaching & learning. University of Illinois, Chicago, USA
26. Behera G, Nain N (2019) A comparative study of big mart sales prediction
27. Jain A, Menon MN, Chandra S (2015) Sales forecasting for retail chains
28. Lingxian Y, Jiaqing K, Shihuai W (2019) Online retail sales prediction with integrated framework of K-mean and neural network, pp 115–118
29. Sheet D et al (2015) Deep learning of tissue specific speckle representations in optical coherence tomography and deeper exploration for in situ histology. In: 2015 IEEE 12th international symposium on biomedical imaging (ISBI)
30. Tooher T, Strang KD, Jaafari A (2006) Journey to full competency. In: Engineering heritage Sydney: conserving the engineering of our past. Engineers Australia, Chatswood, NSW, Australia
31. Janice JN-C, Frank L-C (2021) Marketing communication objectives through digital content marketing on social media. Fórum Empresarial 57–82
32. Strang KD (2005) Organizational learning/human resources development, course design. CGI, Fredericton, p 131
33. Pacini C et al (2019) The role of shell entities in fraud and other financial crimes. Manag Audit J 34(3):247–267
34. Agresti A (2018) Statistical methods for the social sciences, 5th edn. Pearson, Boston, MA
35. Kusiak A (2018) Smart manufacturing. Int J Prod Res 56(1–2):508–517
36. Min H (2010) Artificial intelligence in supply chain management: theory and applications. Int J Log Res Appl 13(1):13–39
37. Kim KS, Knotts TL, Jones SC (2008) Characterizing viability of small manufacturing enterprises (SME) in the market. Expert Syst Appl 34(1):128–134
38. Radziwon A et al (2014) The smart factory: exploring adaptive and flexible manufacturing solutions. Procedia Eng 69:1184–1190
39. Haseeb M et al (2019) Industry 4.0: a solution towards technology challenges of sustainable business performance. Soc Sci 8(5)
40. Rauch E, Dallasega P, Unterhofer M (2019) Requirements and barriers for introducing smart manufacturing in small and medium-sized enterprises. IEEE Eng Manage Rev 47(3):87–94
41. Brock JK-U, von Wangenheim F (2019) Demystifying AI: what digital transformation leaders can teach you about realistic artificial intelligence. Calif Manage Rev 61(4):110–134
42. Rönnberg H, Areback J (2020) Initiating transformation towards AI in SMEs
43. Mittal S et al (2020) A smart manufacturing adoption framework for SMEs. Int J Prod Res 58(5):1555–1573

Applications of Artificial Intelligence in Software Testing Samarjeet Borah , King Chime Aliliele, Sandip Rakshit , and Narasimha Rao Vajjhala

Abstract Software testing is a critical aspect of the software development process. It involves test cases or test suites that ensure the software product conforms to the user’s requirements. However, some issues currently affect the software testing process, such as human error, unstable environments, and incorrect or incomplete requirements documentation. This paper explains how artificial intelligence (AI) concepts, including machine learning, deep learning, neural networks, and expert systems, can be applied to software testing to ease its rigorous and tedious process. The paper also gives an overview of the different types of software testing, the advantages of applying the pillars of artificial intelligence to software testing, and the challenges of AI in software testing.

Keywords Artificial intelligence · Software testing · Machine learning · Deep learning · Challenges · Neural networks

1 Introduction

Software testing can be defined as the process of verifying a software product to reveal errors, defects, and unfulfilled requirements [1]. The primary goal of software testing is to ensure that the product being tested conforms to the users’ needs and expectations as stated in the software requirements specification document. Software products operate on a garbage-in, garbage-out model, whereby any error made during the design or development stages affects the output, result, or

S. Borah (B) Sikkim Manipal Institute of Technology, Majhitar, India
K. C. Aliliele · S. Rakshit American University of Nigeria, Yola, Nigeria
e-mail: [email protected]
S. Rakshit e-mail: [email protected]
N. R. Vajjhala University of New York Tirana, Tirana, Albania
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_60




functionality the product is supposed to provide. Unlike physical systems, in which the types of errors or defects and the ways they appear can largely be predicted, software products can fail in unexpected and unpredictable ways. Software faults will always exist inside any software product: not because the programming team is irresponsible or ignorant, but because the complexity of software is inherently intractable and humans have a finite capacity for complexity management [2]. Additionally, every complex system will always have design flaws that cannot be eliminated, and detecting design flaws in software is similarly challenging due to its complexity. It is not possible to verify the accuracy of software and digital systems by checking boundary values alone; while all potential values would have to be evaluated and confirmed, exhaustive testing is not practical [1, 3].

Software testing is a highly rigorous process. It can be time-consuming, expensive, and taxing. Testing is carried out, often unconsciously, throughout software development. For example, when new functionality is added to a software product, it is tested to ensure it is functioning, and this is done for every functionality or operation during development [4, 5]. During testing, challenges such as understanding requirements, choosing which test suites or test cases to execute first, deciding when to stop testing, and testing under time constraints are encountered. This has led to the fusion of artificial intelligence into the software testing process.

2 Review of Literature

Artificial intelligence currently plays a significant role in our lives. Enhanced security checks such as facial recognition and fingerprint recognition, self-driving cars, and the diagnosis of rare diseases have become a reality through machine learning, deep learning, and natural language processing (NLP), which are all critical foundations of artificial intelligence [6–9]. These approaches have resulted in major advancements in a variety of disciplines, most notably across many businesses and domains. Machines now rival or surpass humans at understanding vocal orders, recognizing pictures, driving automobiles, analyzing data, and playing games. This is because AI algorithms and techniques are becoming better understood and developed, and because computer hardware has improved in speed and memory, which has led to AI playing a significant role in many fields, including software testing [10]. The next part provides an insight into machine learning as well as software testing techniques [11].

2.1 Overview of Machine Learning

Machine learning (ML) is a subfield of artificial intelligence defined as the study of computer systems that learn and improve autonomously from their experiences and data. Machine learning uses mathematical models and algorithms



to improve the performance of computers effectively and efficiently [12–14]. ML comprises three categories, namely [15]:

• Supervised Learning: Supervised learning is used by most practical machine learning applications. In supervised learning we have an input variable (call it X) and an output variable (call it Y). The goal is to estimate the mapping function Y = f(X) so that, when we receive a new input data set X, we can predict the outcome variable Y for that data set.
• Unsupervised Learning: This is a class of machine learning algorithms that does not require the user to supervise the model. The process is structured so that the model works independently and discovers patterns in the data that were previously undetected. Unsupervised learning deals mainly with unlabeled data.
• Reinforcement Learning: This is the process of training machine learning models to make sequences of decisions. Reinforcement learning explores how intelligent agents take actions in an environment to maximize a notion of cumulative reward.
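The supervised-learning idea above, estimating the mapping Y = f(X) from labelled examples and then predicting Y for unseen inputs, can be sketched in a few lines. The data and the one-parameter linear model below are illustrative assumptions; real systems use far richer model families.

```python
# Sketch of supervised learning: estimate Y = f(X) from labelled pairs,
# then predict Y for a new input. Data and model are illustrative only.
xs = [1.0, 2.0, 3.0, 4.0]        # input variable X
ys = [2.0, 4.0, 6.0, 8.0]        # output variable Y (here, y = 2x)

# Least-squares fit of y = w*x (one free parameter, no intercept)
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def f(x):
    return w * x                  # the estimated mapping

print(f(5.0))                     # prediction for an unseen input
```

The training step finds the parameter w that best explains the examples; prediction is then just evaluating the learned function on new data.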

2.2 Overview of Deep Learning

Deep learning is an extension of machine learning, with networks capable of learning unsupervised from unstructured or unlabeled data. Computer models with numerous processing layers use deep learning to learn representations of the data at different abstraction levels. These approaches have greatly benefited many disciplines, including drug development and genomics, voice recognition, visual object identification, and object detection [16–18]. By utilizing backpropagation techniques to teach the computer how to modify its internal parameters, deep learning identifies complex structures in big data sets; each layer’s representation is calculated from the preceding layer’s representation using these parameters. Deep convolutional neural networks have improved image, video, speech, and audio recognition, while recurrent networks have improved text and speech processing [19–21]. Deep learning consists of various techniques, which include:

• Convolutional Neural Networks (CNN): A deep learning method characterized by its multi-layered structure, built on the perceptron, a supervised learning algorithm for binary classifiers that Frank Rosenblatt, an American psychologist, created in 1958 [22, 23].
• Recurrent Neural Networks (RNN): This technique was designed to predict sequences [19, 24]. An RNN uses knowledge from previous states as input to make a prediction. It is very efficient for activities that require short-term memory, such as tracking stock prices and real-time data streams.
• Generative Adversarial Networks (GAN): These make use of a Generator and a Discriminator. The generator produces artificial data, and the discriminator learns to discern real data from the generated data.



• Natural Language Processing (NLP): This involves developing systems that enable humans and computer systems to communicate in a convenient or preferred language. Applications of natural language processing include text summarization, expert systems, voice recognition, and text processing [12, 25–27].
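The perceptron mentioned above, the historical building block of these networks, is small enough to sketch in full: a binary classifier whose weights are nudged toward the target on every misclassification. The AND dataset and the learning rate below are illustrative choices.

```python
# Minimal perceptron (Rosenblatt-style learning rule), trained on logical AND.
# Dataset, learning rate, and epoch count are illustrative assumptions.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # Fire (output 1) when the weighted sum crosses the threshold
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):               # a few epochs suffice for separable data
    for x, target in data:
        err = target - predict(x)
        w[0] += lr * err * x[0]   # nudge weights toward the target
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # matches the AND targets
```

Modern deep networks stack many such units into layers and replace this update rule with backpropagation, but the trainable weighted sum at the core is the same.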

3 Overview of Software Testing

Software testing can be defined as the use of test cases and test suites to check whether a software product functions as expected and conforms to requirements [28, 29]. Testing is the method of running a program in order to identify problems. It entails a set of activities used to verify and evaluate a system’s functionality, capability, or attributes and determine whether it meets the desired outcome. Finding defects in a software product is a complicated and complex operation. Checking boundary values alone is insufficient to assure accuracy, since software products are sophisticated systems that cannot be exercised completely. Each potential value would have to be validated and evaluated, but exhaustive testing is not practicable. For example, exhaustively testing even a simple program that adds two 16-bit integer inputs would require over four billion (2^32) test cases, taking weeks even at a pace of thousands of tests every second. In a realistic situation, testing a software module, the difficulties and complexities go far beyond this example: factors such as input from the real world, timing, unpredictable environmental situations, and human interactions make the activity even more challenging.

The primary goals of software testing are verification, validation, and improving the quality of a software product. Testing is not done only to find faults in a software product; it is also a measure to ensure that the product functions as expected. Verification entails ensuring a software product meets its goal without any bugs or defects. Validation can be defined as the process of ensuring that a software product meets the standard requirements. There are various types of software testing, but they are classified into two groups, namely:

• Functional Testing: This form of software testing ensures that the software system meets all functional requirements and specifications.
Functional testing attempts to validate each function of a software program, ensuring that the right input is provided and that the output meets the functional requirements. Functional testing is primarily a black-box concern and is unconcerned with the application’s source code. This testing validates the user interface, application programming interface (API), database, security, client/server interaction, and other functionality, and it can be performed manually or automatically [5].
• Non-Functional Testing: This is a type of software testing that verifies the non-functional aspects of the software application (performance, availability, reliability, etc.). Its goal is to test the readiness of the system based on non-functional parameters



that are not addressed by functional testing. A good example of non-functional testing is checking how many people can log into the software simultaneously. Non-functional tests are just as important as functional tests and affect customer satisfaction.

Additionally, it is critical to understand the two approaches used in software testing: black-box and white-box testing. Black-box testing is a method of software testing in which the testing team or tester is unaware of the software product’s core architecture, layout, implementation, and structure; black-box testing is possible at every level. White-box testing, by contrast, is an approach in which the tester is fully aware of the tested software product’s core architecture, design, deployment, and structure. The purpose of white-box testing is to confirm that the internals of the application under test are functioning properly.

SDLC is an abbreviation for software development life cycle, a term that refers to the numerous activities involved in the creation of a software product: planning and feasibility studies, requirements analysis, design, coding, implementation, testing, and maintenance. Software testing occurs throughout the development process of a software product, analyzing reports and optimizing the application’s performance as new technologies and features are included [7]. Testing is usually carried out through several levels and stages. The most evident levels are the following:

• Development Testing: This consists of the following types of testing:
– Unit Testing: The individual units or modules of a software product are tested.
– Component Testing: Each component is tested separately, without integrating it with other components.
• System Testing: The testing stage in which the complete system is validated.
• Release Testing: This consists of the following:
– Requirements Testing: Creating test cases and test suites based on the requirements.
– Scenario Testing: Using realistic scenarios to test a software product.
• User Testing: This consists of the following:
– Alpha Testing: Testing within the development environment to find bugs before the product is released to the public.
– Beta Testing: The final round of testing, done outside the development environment by individuals who are not part of the development team.
– Acceptance Testing: Testing a software product for acceptability, usually performed by customers.
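The unit-testing level described above can be illustrated with a minimal Python example. The function under test, `parse_version`, is a hypothetical unit invented for this sketch, not something from the chapter; the point is only that a unit test exercises one module in isolation.

```python
import unittest

def parse_version(tag):
    """Hypothetical unit under test: turn 'v1.2.3' into (1, 2, 3)."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

class ParseVersionTest(unittest.TestCase):
    # Unit testing exercises a single module in isolation,
    # with no other components integrated.
    def test_plain_tag(self):
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

    def test_missing_prefix(self):
        self.assertEqual(parse_version("2.0.1"), (2, 0, 1))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Component, system, and release testing then repeat this idea at progressively larger scopes of integration.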

732

S. Borah et al.

4 Applications of AI in Software Testing

Software testing is a comprehensive and rigorous process in the software development life cycle; it can consume as much as 30% of the entire budget of a software product due to rework costs. Applying artificial intelligence to the software testing process would therefore have a highly positive impact on it. Some AI techniques that can be applied to software testing include [30–33]:

• The use of fuzzy logic and predicates to map software specifications: fuzzy-logic operators and predicates are assembled and combined to describe a set of requirements.
• The application of machine learning to software testing: ML has a core advantage when it comes to developing and operating independent end-to-end tests.
• The use of info-fuzzy networks (IFN), a data-mining concept, for black-box testing: for complex software applications, functional requirements can be tested automatically from execution traces.
• Selenium-based testing with ML: Selenium is a framework used specifically for testing web applications, and ML can learn the internal data structure during regular Selenium testing. The Selenic engine monitors each execution and captures detailed information about the web user interface of the application under test; it extracts DOM elements, attributes, locators, etc., and associates them with the actions performed by the UI-driven test. Selenic stores this information in its AI engine using a proprietary data-modeling method, and the model is continually updated by analyzing the historical execution of all tests.
• The use of artificial neural networks (ANN), linear regression, and support vector machines for effective and efficient planning and scheduling of testing activities [34–36].
• The use of natural language processing (NLP) to generate test cases from software requirements (test case creation). Other NLP applications include prioritizing test cases, predicting manual test-case failures, and identifying duplicate defect reports [26, 37].

Artificial intelligence is already playing a significant role in software testing; a detailed mapping of techniques to testing areas is provided in Table 1. The need for software is increasing exponentially, and companies must address their problems while staying one step ahead of their rivals. AI can be applied to software testing to shorten the software development cycle and deploy software products to market more quickly. Some advantages of AI in software testing include the following:

• AI in software testing will lead to quicker development and reduce the time needed to deploy a software product to market.
• AI would enable the development and discovery of more test cases, allowing software products to become more reliable and exceed customer expectations.
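The duplicate-defect-report idea mentioned above can be sketched with a simple bag-of-words cosine similarity. The sample reports and the 0.5 threshold are illustrative assumptions, not values from the cited work, and real systems would use richer NLP features (TF-IDF weighting, embeddings) than raw token counts.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def find_duplicates(reports, threshold=0.5):
    """Flag report pairs whose textual similarity exceeds the threshold."""
    pairs = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            if cosine_similarity(reports[i], reports[j]) >= threshold:
                pairs.append((i, j))
    return pairs

reports = [
    "app crashes when login button is clicked",
    "application crashes when the login button is clicked",
    "slow page load on the dashboard",
]
print(find_duplicates(reports))  # the first two reports are near-duplicates
```

Test-case prioritization works along the same lines: score each test against recently changed requirements text and run the highest-scoring tests first.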


Table 1 AI techniques and software testing areas

AI technique | Software testing area
C4.5 (decision-tree algorithm) | Refining black-box test specifications and improving the category-partition specification
Support vector machines (SVM) | Identifying infeasible GUI test cases
Logistic regression, random forest, AdaBoost, bagging | Optimizing testing effort based on change proneness
Artificial neural networks (ANN), SVM, and linear regression | Planning and scheduling of testing activities
Hybrid genetic algorithms (HGA) | Automated GUI testing, including test-sequence and test-case optimization
Static analysis, NLP, backward slicing, and code summarization | Automatically documenting unit test cases
K-nearest neighbors | Identifying coincidentally correct test cases
NLP | Generation of test cases from software requirements
NLP | Test case prioritization
NLP | Predicting manual test-case failure
NLP | Detection of duplicate defect reports

• The application of AI algorithms and techniques to testing robust software applications would improve customers’ experience while they use the product, as techniques such as machine learning learn from a user’s interaction with a software product to improve that experience.
• AI in software testing will accelerate the development of robust, agile, sophisticated, and smart software systems.
• AI already has the apparatus to adequately test innovative technologies such as big data applications and cloud computing, as AI can play a significant role in generating test cases for these technologies.
• AI applications in the software testing industry will deliver more precise results and shorter development times than traditional software testing techniques.
• AI in software testing will play a critical part in evaluating a customer’s requirements by applying predictive analysis to other, similar goods and services to better understand what new features consumers want.

While applying artificial intelligence to software testing has been shown to have remarkable positive effects, it comes with underlying challenges, chief among them the need for human intervention. Despite the innovation AI brings to software testing, human input in monitoring, feeding, and fine-tuning the behavior of AI techniques and algorithms remains very significant; machine learning, for example, requires extensive training, which in turn requires human input. AI techniques are also very dynamic, so changes are made frequently, and implementing these changes to meet the testing requirements can be expensive, even though machines carry out tasks more efficiently and effectively than humans.

5 Conclusion

Software testing is a rigorous and time-consuming activity. It consumes over 30% of the budget of developing software, as bugs have to be found and effective procedures have to be taken to ensure they are fixed. Artificial intelligence will add a new dimension to software testing, as it can address most, if not all, of the current problems of manual testing. Through the use of sophisticated models and algorithms, artificial intelligence can automatically evaluate complex data. AI has already demonstrated that it can outperform humans in software testing. AI-driven testing will soon bring in a new era of quality assurance: it will handle and oversee most testing areas, adding significant value to the testing outcome and producing more accurate results.

References

1. Assaf AG, Tsionas MG (2019) Diagnostic testing in Bayesian analysis. Int J Contemp Hospitality Manag 32(4):1449–1468
2. Catal C, Diri B (2007) Software fault prediction with object-oriented metrics based artificial immune recognition system. In: Product-focused software process improvement. Springer, Berlin, Heidelberg
3. Abaei G, Selamat A (2014) A survey on software fault detection based on different prediction approaches. Vietnam J Comput Sci 1(2):79–95
4. Ershadi MM, Seifi A (2020) An efficient Bayesian network for differential diagnosis using experts’ knowledge. Int J Intell Comput Cybern 13(1):103–126
5. Hall T et al (2012) A systematic literature review on fault prediction performance in software engineering. IEEE Trans Software Eng 38(6):1276–1304
6. Giraldo JSP, Verhelst M (2018) Laika: a 5 µW programmable LSTM accelerator for always-on keyword spotting in 65 nm CMOS. In: ESSCIRC 2018—IEEE 44th European solid state circuits conference (ESSCIRC)
7. Helmy M, Smith D, Selvarajoo K (2020) Systems biology approaches integrated with artificial intelligence for optimized metabolic engineering. Metab Eng Commun 11:e00149
8. Kim KS, Knotts TL, Jones SC (2008) Characterizing viability of small manufacturing enterprises (SME) in the market. Expert Syst Appl 34(1):128–134
9. Ko BC (2018) A brief review of facial emotion recognition based on visual information. Sensors (Basel, Switzerland) 18(2)
10. Krutz DE, Malachowsky SA, Reichlmayr T (2014) Using a real world project in a software testing course. In: 45th ACM technical symposium on computer science education. IEEE, Atlanta, USA
11. Gong L et al (2019) Empirical evaluation of the impact of class overlap on software defect prediction. In: 2019 34th IEEE/ACM international conference on automated software engineering (ASE)
12. Ahmad A et al (2020) A systematic literature review on using machine learning algorithms for software requirements identification on stack overflow. Secur Commun Netw 1–19


13. Al-Waisy AS et al (2018) A multimodal deep learning framework using local feature representations for face recognition. Mach Vis Appl 29(1):35–54
14. Angione C (2019) Human systems biology and metabolic modelling: a review—from disease metabolism to precision medicine. Biomed Res Int 2019:8304260
15. Vajjhala NR, Strang KD, Sun Z (2015) Statistical modeling and visualizing of open big data using a terrorism case study. In: Open big data conference. IEEE, Rome, Italy
16. Bose A et al (2019) Deep learning for brain computer interfaces. In: Balas VE et al (eds) Handbook of deep learning applications. Springer International Publishing, Cham, pp 333–344
17. Cao C et al (2018) Deep learning and its applications in biomedicine. Genomics Proteomics Bioinformatics 16(1):17–32
18. Farsal W, Anter S, Ramdani M (2018) Deep learning: an overview. In: Proceedings of the 12th international conference on intelligent systems: theories and applications. Association for Computing Machinery, Rabat, Morocco, Article 38
19. Alom MZ et al (2019) A state-of-the-art survey on deep learning theory and architectures. Electronics 8(3)
20. Chen Z et al (2018) Progressive joint modeling in unsupervised single-channel overlapped speech recognition. IEEE/ACM Trans Audio Speech Lang Process 26(1):184–196
21. Zweig G et al (2017) Advances in all-neural speech recognition. In: 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP)
22. Brousseau B, Rose J, Eizenman M (2020) Hybrid eye-tracking on a smartphone with CNN feature extraction and an infrared 3D model. Sensors 20(2):1–21
23. Sharma K, Dahiya PK (2018) A state-of-the-art real-time face detection, tracing and recognition system. IUP J Telecommun 10(4):51–61
24. Amin MR et al (2018) DeepAnnotator: genome annotation with deep learning. In: Proceedings of the 2018 ACM international conference on bioinformatics, computational biology, and health informatics. Association for Computing Machinery, Washington, DC, USA, pp 254–259
25. Agarwal A, Jayant A (2019) Machine learning and natural language processing in supply chain management: a comprehensive review and future research directions. Int J Bus Insights Transform 13(1):3–19
26. Rajput A (2020) Chapter 3—natural language processing, sentiment analysis, and clinical analytics. In: Lytras MD, Sarirete A (eds) Innovation in health informatics. Academic Press, pp 79–97
27. Vajjhala NR et al (2021) Novel user preference recommender system based on Twitter profile analysis. In: Soft computing techniques and applications. Springer Singapore, Singapore
28. Moser R, Pedrycz W, Succi G (2008) A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. In: 2008 ACM/IEEE 30th international conference on software engineering
29. Moustafa S et al (2018) Software bug prediction using weighted majority voting techniques. Alex Eng J 57(4):2763–2774
30. Cuperlovic-Culf M (2018) Machine learning methods for analysis of metabolic data and metabolic pathway modeling. Metabolites 8(1)
31. Li Z, Reformat M (2007) A practical method for the software fault-prediction. In: 2007 IEEE international conference on information reuse and integration
32. Strang KD (2015) Special issue on diverse methods of studying risk through phenomenology, fuzzy analysis, calculus, linear programming, and experiments [editorial preface]. Int J Risk Contingency Manag 4(2):iv–vii
33. Zhao J et al (2018) Safe semi-supervised classification algorithm combined with active learning sampling strategy. J Intell Fuzzy Syst 35(4):4001–4010
34. Baboota R, Kaur H (2019) Predictive analysis and modelling football results using machine learning approach for English Premier League. Int J Forecast 35(2):741–755
35. Chowdhury S et al (2010) A hybrid approach to face recognition using generalized two-dimensional Fisher’s linear discriminant method. In: 2010 3rd international conference on emerging trends in engineering and technology


36. Kavitha G, Elango NM (2020) An approach to feature selection in intrusion detection systems using machine learning algorithms. Int J e-Collab (IJeC) 16(4):48–58
37. Yoosin K, Seung Ryul J (2015) Opinion-mining methodology for social media analytics. KSII Trans Internet Inf Syst 9(1):391–406

OLFACTRO BRAINIAC: Aid-Kit for Person with Smell Sense Disability

Avinash Kumar Sharma and Kuldeep Kumar Yogi

Abstract The threat to life posed by a gas-leakage situation calls for a generic solution that senses gases and creates an artificial olfactory nerve, so that individual gases and their many combinations can be detected using the pattern recognition and feature extraction of an artificial neural network. This paper focuses on these approaches with maximum accuracy and emphasizes the hardware implementation (using a Raspberry Pi 3B+) so that the system can be turned into a product. The workflow starts with the sensors sensing the gases; their detected magnitudes are transferred, and the test set is then run to make the essential predictions. The alert system is the user-interface part, including hardware and software notifications. The research work includes a simulated backpropagation algorithm, which forms the core of the logical decision-making. Developing the core algorithm ourselves is advantageous, as it deals with real-time scenarios better than tool- or framework-based algorithms. The scope of this research is much wider, with appealing applications in agriculture and the food industry. The core algorithm takes all inputs to make predictions, with a cross-validation approach to verify the training and measure the accuracy.

Keywords ANN · Cloud · IoT · Raspberry Pi · Sensors

Nomenclature

RSE: Resistance of the sensor
RL_VALUE: Load resistance on the board
raw_adc: Raw value read from the ADC, representing the voltage
Rs: Resistance of the sensor

A. K. Sharma (B)
Department of CSE, ABES Institute of Technology, Ghaziabad, Uttar Pradesh, India
e-mail: [email protected]

K. K. Yogi
Department of CS, Banasthali Vidyapith, Jaipur, Rajasthan, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_61


Ro: Resistance of the sensor in clean air
x′, y′: Coordinates of the curve
m: Slope of the graph
x: Input training vectors (x1 … xn)
n: Number of attributes in the dataset (values of gases, temperature, humidity)
ŷ: Output target vector (ŷ1 … ŷm)
m: Number of training examples (the categorized degree of danger)
W: Weights vector (W1 … Wn)
Z: Hidden unit vectors (Z1 … Zj)
f1(x): Activation function
b: Bias vector (b1 … bn)
y: Output vector in the dataset (y1 … ym)

1 Introduction

This article presents the concept of an electronic nose that works as an aid kit for a person with a smell-sense disability; it is named Olfactro Brainiac. It includes three major components: sensor coding, pattern recognition for predicting the degree of danger of the gases present in the air, and the alert system. The first component, sensor coding, operates on the input taken from the sensors in the form of voltage and converts these values to ppm. The second component, pattern recognition (the ANN model), helps in predicting whether the ppm content of a detected gas is dangerous or not. Finally, various alert systems notify people with the help of buzzers, messages, etc.

1.1 Sensors

The sensors used in building our prototype (a detailed description of the prototype is given later in the paper) are MQ-series sensors for detecting multiple gases and a DHT11 for measuring temperature and humidity. Using MQ sensors to detect a gas is very easy [1]. A single sensor can detect multiple gases and differentiate them from one another, as every gas shows a different behavior. The values at the sensor pins are directly proportional to the concentration of the gases detected; these values are read as analog values by the Raspberry Pi and converted to digital values using an analog-to-digital converter. Finally, after performing all the calculations in the sensor-coding stage, the output is in the form of ppm values, different for every gas, based on the amount present in the air. The MQ-series sensors used are the MQ2, MQ7, MQ135, and MQ4. These sensors have a wide detection scope, can be calibrated accurately, and offer a fast response and high sensitivity for the gases they detect, computing the corresponding ppm values. Their operating voltage is +5 V, and they are stable and long-lived. The gases detected using the MQ2 sensor [2] are LPG, smoke, methane, and butane. The gases detected by the MQ135 are ammonia, carbon dioxide, alcohol, acetone, toluene, and benzene. The MQ7 sensor detects carbon monoxide, and the DHT11 sensor measures temperature and humidity. The materials used are a gas-sensing layer of SnO2 (tin oxide), an Au (gold) electrode, and a Pt (platinum) electrode line. The ppm values for detected gases [3] are computed using the slope of each MQ sensor’s characteristic graph, plotting and calibrating according to two coordinates of the curve and the slope; the y-axis shows Rs/Ro and the x-axis shows the ppm value of the gases.

1.2 Pattern Recognition

An artificial neural network helps recognize the pattern of the gases [4], i.e., the degree of danger at different levels (safe, harmful, extremely harmful). These degrees are predicted with the help of a training set (a dataset containing thousands of entries of ppm values, temperature, and humidity for every level and every detectable gas) and a test set (a dataset containing the ppm values, temperature, and humidity calculated from the sensors). The concept of an ANN can be implemented with various algorithms, but the one used to develop this prototype is the backpropagation algorithm. The advantage of backpropagation is that it simulates the pattern [5] in every epoch (iteration), moving forward and backward, extracting features through the weights; the activation functions then give the method for predicting the actual output, i.e., the degree of danger. Combinations of gases can be taken into consideration because backpropagation sums the dot products of the inputs. Within backpropagation, we have also used cross-validation, including preprocessing of the dataset: the dataset is cleaned and divided into test and training sets with every fold of a cross-validation iteration. Data cleaning is especially required for scaling the gas data, as the differences between readings can be large and need to be made marginal; this cleaning process solves the problem and yields accurate predictions for mixtures of gases as well. The algorithm works on the concept of initializing and updating weights to minimize the error. The steps involved are forward propagation, backward propagation, and updating of the weights. The training dataset is trained and fitted in a model, which helps in predicting the result of the test set.


1.3 Alert System

As soon as the degree of danger exceeds the threshold value, i.e., becomes harmful, the alert systems come into play. They notify users through various methods, such as ringing an alarm using text-to-speech technology, sending messages through the cloud, and displaying the output result on the screen. The user thus becomes alert and can take precautionary actions. The alert system is divided into four parts, one for every kind of user and scenario:

• The buzzer system
• The notification system
• The display system
• The speaking system

The buzzer system consists of a piece of code and a hardware implementation that connects the buzzer to the power supply and ground. The code contains an if-then condition on the threshold parameter for a gas-leakage situation; if the condition is satisfied, the buzzer emits a beep that can be heard from a long distance.

The notification system uses the cloud, mainly the Twilio API [6]. Its code contains the phone numbers of users and nearby police stations, so that if the threshold value is exceeded, a notification is sent to the phone as a message, without requiring mobile data, containing a specific danger text and a Google Maps link to the location where the gas is leaking.

The display system consists of a 2x16 LCD driven by the RPLCD library, displaying the ppm values and the predicted values with clear vision.

The speaking system consists of the text-to-speech library built into Raspbian OS, with an auxiliary cable as its hardware part; it speaks like various AI-simulated devices [7].
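The threshold check that drives all four alert channels can be sketched as below. The gas names and fixed ppm thresholds are placeholder assumptions (the prototype's actual danger decision comes from the ANN), and the notifier callbacks stand in for the real buzzer GPIO, Twilio SMS, RPLCD display, and text-to-speech calls.

```python
# Hypothetical ppm thresholds; the prototype's real danger decision is
# produced by the ANN, so these fixed values are for illustration only.
THRESHOLDS_PPM = {"lpg": 1000, "co": 50, "smoke": 300}

def check_leak(readings_ppm, thresholds=THRESHOLDS_PPM):
    """Return the list of gases whose reading exceeds its threshold."""
    return [g for g, ppm in readings_ppm.items()
            if g in thresholds and ppm > thresholds[g]]

def raise_alerts(gases, notifiers):
    """Fan a leak event out to every alert channel (buzzer, SMS, LCD, TTS)."""
    if not gases:
        return
    message = "Gas leak detected: " + ", ".join(gases)
    for notify in notifiers:
        notify(message)

alerts_log = []
raise_alerts(check_leak({"lpg": 1500, "co": 20}),
             notifiers=[alerts_log.append])  # stand-in for the four channels
print(alerts_log)
```

In the prototype, each notifier would wrap one channel, e.g. a function toggling the buzzer pin, one calling the Twilio messaging API, one writing to the LCD, and one invoking text-to-speech.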

2 Literature Survey

J. W. Gardner et al. described sensor arrays and their application to odors, using metal-oxide (TGS) sensors: gases are imparted to the sensors, and a dataset is prepared from the readings. The aim is to recognize the pattern in the values coming from the gas-sensor array for various gases such as methanol, butan-1-ol, propan-2-ol, 2-methyl-1-butanol, and ethanol. The detections of these gases were verified and a dataset prepared so that the gas sensor could be integrated with an artificial neural network and make predictions in real time or on a test set. The approach is used for odor detection.

Andrey Somov et al. described a wireless sensor network built by placing nodes at various centers so that gas leakage can be detected over an enhanced range and a large area. It uses various sensors, including the STDS75, HIH-4000-001, Gems 2200/2600 pressure series, a KI Methane 260 and TGS2610 (FIGARO semiconductor) for LP gas, and NAP-66A and Hanwei Electronics MC-series catalytic sensors for combustible gases, together with an alert system that sends message notifications when there is a gas leakage.

Cheah Wai Zhao et al. presented the advantages of using a Raspberry Pi: it runs its own operating system, hardware integration is easily implemented through its pins, and it can be programmed easily in a familiar language. It offers networking features (WiFi, Bluetooth, hotspot) in a synchronized way but lacks an ADC. The Raspberry Pi can be used to construct the desired hardware implementation of a gas-sensor array; it has many built-in libraries, is flexible to use from a system or a phone, and supports notification APIs.

Kui Zhang et al. proposed a study on the conversion of CO2 and CH4 to synthesis gas and higher hydrocarbons by combining catalysts and dielectric-barrier discharges. In the core reaction, one mole of carbon dioxide reacts with one mole of methane to form two moles of hydrogen and two moles of carbon monoxide; since harmful carbon monoxide is formed by this combination of gases, there must be further study to recognize such combinations.

V. Ramya et al. proposed an embedded system for hazardous-gas detection and alerting, targeting gases such as LPG and propane that can leak at home; it shows its output on an LCD and adds a message-based alert system. It can be used as a leakage detector in houses where LPG needs to be monitored, as a small-scale household device.

3 Proposed Method

The approach in [8] uses TGS sensors, which are costly, and trains the gas-identification model using a backpropagation algorithm. As the literature survey shows, existing work on gas detection and pattern recognition of calibrated values is taken into account. The approach in this paper, however, uses MQ sensors; it includes the formulation to detect the gases, and the resulting values are fed into the input layer. The detection and calibration of the sensors, as well as the ANN, are implemented in Python. Feature extraction for gas combinations uses the ppm values in molar form and is computed by the ANN; the prediction is then based on the condition, i.e., safe or not, and even when harmful gases combine, the degree of danger is classified among three categories. The IoT part of the project gives a powerful demonstration of the technology, where lives can be saved using active notification systems. The alert system is highly scalable in every sense, since it combines vision, hearing, and networks to reliably alert nearby people and police stations; it can also be considered a disaster-management device with a very helpful alerting system.


4 Prototype and Its Working

The gas-sensor array comprises multiple MQ gas sensors integrated with a core Raspberry Pi 3B+; an alert system is developed and integrated as shown in Fig. 3. A formula is used for calculating the ppm value, which is fed to the input layer, so data creation is fulfilled by this integrated, cost-efficient, compact solution with a low amount of noise. We use the Raspberry Pi's serial peripheral interface (the spidev library) to take the sensor input and display its output. Since the Raspberry Pi does not have an analog-to-digital converter (ADC), we use an MCP3208 [9] so that analog values can be converted into digital values (Fig. 1).

Why Raspberry Pi 3B+?

• The Raspberry Pi 3 can be connected to Bluetooth devices and the Internet using Ethernet or WiFi, whereas an Arduino cannot be connected without a shield that adds Internet and Bluetooth connectivity.
• The Raspberry Pi has ports such as HDMI, USB, camera, audio, and LCD, whereas an Arduino has none of these.
• The Raspberry Pi 3 is faster than an Arduino [10] (Fig. 2).

For the sensor-value calculation, the variables used are the sensor resistance in clean air (Ro), the number of samples in the calibration phase, the time interval between samples, the value at the pin, and the load resistance (RL_VALUE) [1]. The sensor resistance is calculated from Eq. (1) and the final desired result, in parts per million, from Eq. (2):

RSE = RL_VALUE * (1023.0 - raw_adc) / raw_adc    (1)

ppm = 10^(((log(Rs/Ro) - pcurve[1]) / pcurve[2]) + pcurve[0])    (2)
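Reading a raw ADC code from the MCP3208 over SPI can be sketched as below. The byte layout follows the MCP3208's single-ended command format; the `FakeBus` stand-in is an assumption added so the sketch runs without hardware (on the Pi one would open a real bus with `spidev.SpiDev()`).

```python
def read_mcp3208(spi, channel):
    """Read one single-ended MCP3208 channel (0-7) over SPI.

    Command bytes: start bit + single-ended flag + 3 channel bits;
    the 12-bit result is spread over the last two response bytes.
    """
    reply = spi.xfer2([6 | (channel >> 2), (channel & 3) << 6, 0])
    return ((reply[1] & 0x0F) << 8) | reply[2]

# On the Pi this would be:
#   import spidev
#   spi = spidev.SpiDev(); spi.open(0, 0)
# Here a fake bus stands in so the sketch runs anywhere.
class FakeBus:
    def xfer2(self, data):
        return [0, 0x02, 0x58]  # made-up 12-bit conversion result

print(read_mcp3208(FakeBus(), channel=0))  # raw_adc code from the fake bus
```

Note that the MCP3208 returns 12-bit codes (0-4095); the chapter's Eq. (1) uses the 10-bit constant 1023.0, so the raw code must be scaled to that range before applying it.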

Fig. 1 Prototype of olfactro brainiac

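Equations (1) and (2) translate directly into Python. The `RL_VALUE` and the per-gas `pcurve` coefficients below are placeholder calibration values for illustration, not the prototype's; real values come from each MQ sensor's datasheet curve.

```python
import math

RL_VALUE = 5.0  # load resistance in kilo-ohms (placeholder value)

def sensor_resistance(raw_adc):
    """Eq. (1): sensor resistance from the 10-bit ADC reading."""
    return RL_VALUE * (1023.0 - raw_adc) / raw_adc

def ppm(raw_adc, ro, pcurve):
    """Eq. (2): ppm from the Rs/Ro ratio and the curve (x0, y0, slope)."""
    rs_ro_ratio = sensor_resistance(raw_adc) / ro
    return math.pow(10, (math.log(rs_ro_ratio) - pcurve[1]) / pcurve[2]
                        + pcurve[0])

# Placeholder curve coefficients; real MQ datasheet curves differ per gas.
lpg_curve = (2.3, 0.21, -0.47)
print(ppm(raw_adc=300, ro=10.0, pcurve=lpg_curve))
```

Ro itself is obtained in the calibration phase by averaging several clean-air readings of `sensor_resistance` and dividing by the sensor's known clean-air Rs/Ro ratio.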


Fig. 2 Circuit diagram including the architecture of gas sensor array, display, and alert system

The creation of the neural network is based on the backpropagation algorithm. In the first step, we feed the training set forward through the neural network, which comprises several hidden layers and a categorized output layer with three categories: 0 for safe, 1 for possibly harmful, and 2 for extremely harmful. We use different activation and loss functions, mainly the sigmoid. Backpropagation [9] differentiates the equations used in forward propagation, and the weights for each entity in the input layer are updated according to the learning rate (alpha) of the algorithm; parameters and hyperparameters are updated over the number of iterations (epochs) we give the model. In terms of accuracy, the artificial neural network performs at a higher rate than the other algorithms.

4.1 Data Cleaning and Preprocessing Template

• Data are cleaned by filling each missing value with a mean. When a sensor does not record a value at a particular time, the missing value is filled with the mean of some values above and below it, so that synchronization is maintained in the dataset.
• Features are scaled by subtracting the mean from each value and dividing by the difference between the minimum and maximum values of that feature. Sensor values can be high at one time for one gas and low for another, so to take combinations into consideration, the features must be put on one particular scale that does not change at testing time.
• Cross-validation is the last step of preprocessing; it indirectly builds test and training sets by taking large portions of the dataset at random to check the accuracy. If every fold has the same or approximately the same accuracy, the dataset fits the model well.

4.2 Algorithm

Step 1: Initialize the weights W with small random values.
Step 2: While the epoch (training-iteration) count is not greater than a user-defined constant, repeat Steps 3 to 11.
Step 3: For each training example (row) in the dataset, repeat Steps 4 to 10.

Feedforward
Step 4: Transfer each input unit x (x1 … xn) to the first layer, i.e., the input layer.
Step 5: Each hidden unit Z (Z1 … Zj) is the summation of the weighted (W) input values.
Step 6: Apply the activation function, which computes the output (prediction) of the layer.
Step 7: The output of Step 6 (A^[l]) is served as input to the next layer, and the same process takes place up to the last layer (L):

Z^[2] = W^[2] A^[1] + b^[2]
A^[L] = f1(z^[L]) = ŷ

Backpropagation
Step 8: For each output unit, ŷ is the prediction and y the actual value, so the error is

dz^[L] = ŷ − y

Step 9: Differentiating each hidden layer from L down to 1 minimizes the cost function.
Step 10: Update the weights using dW and the learning rate α (a small constant).
Step 11: Increment the epoch by one.
Step 12: Go to Step 2.
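The steps above can be sketched as a minimal NumPy network with one hidden layer, sigmoid activations, and three danger classes. The layer sizes, learning rate, epoch count, and toy data are all illustrative assumptions; the prototype's real network is trained on the 13-column sensor dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, hidden=8, alpha=1.0, epochs=3000):
    """Steps 1-11: small random weights, then feedforward + backprop."""
    n, k = X.shape[1], Y.shape[1]
    W1, b1 = rng.normal(0, 0.1, (n, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 0.1, (hidden, k)), np.zeros(k)
    for _ in range(epochs):
        # Feedforward (Steps 4-7): Z = XW + b, A = f1(Z)
        A1 = sigmoid(X @ W1 + b1)
        A2 = sigmoid(A1 @ W2 + b2)            # yhat
        # Backpropagation (Steps 8-10): dz[L] = yhat - y, then chain rule
        dZ2 = A2 - Y
        dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1)
        W2 -= alpha * A1.T @ dZ2 / len(X); b2 -= alpha * dZ2.mean(0)
        W1 -= alpha * X.T @ dZ1 / len(X);  b1 -= alpha * dZ1.mean(0)
    return W1, b1, W2, b2

def predict(X, params):
    W1, b1, W2, b2 = params
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).argmax(axis=1)

# Toy stand-in for the gas dataset: 2 features, 3 danger classes (0/1/2).
X = np.array([[0.1, 0.1], [0.2, 0.1], [0.5, 0.5],
              [0.6, 0.4], [0.9, 0.9], [0.8, 1.0]])
labels = np.array([0, 0, 1, 1, 2, 2])
Y = np.eye(3)[labels]                 # one-hot targets
params = train(X, Y)
print((predict(X, params) == labels).mean())  # training accuracy
```

The gradients here are batch-averaged rather than applied per row as in Step 3, a common simplification of the same update rule.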

5 Description of Dataset

See Table 1.

Dataset link: https://github.com/TusharRajVerma/Olfactro-Brainiac/blob/master/main_code/mq_dataset.csv


Table 1 Input features in the dataset

S. No. | Feature | Values
1 | Temperature | Measured in degrees Celsius by sensor DHT11
2 | Humidity | Measured in percentage by sensor DHT11
3 | LPG | Measured in ppm by sensor MQ2
4 | Smoke | Measured in ppm by sensor MQ2
5 | Methane | Measured in ppm by sensor MQ2
6 | Butane | Measured in ppm by sensor MQ2
7 | Carbon monoxide | Measured in ppm by sensor MQ7
8 | Ammonia | Measured in ppm by sensor MQ135
9 | Carbon dioxide | Measured in ppm by sensor MQ135
10 | Alcohol | Measured in ppm by sensor MQ135
11 | Acetone | Measured in ppm by sensor MQ135
12 | Toluene | Measured in ppm by sensor MQ135
13 | Benzene | Measured in ppm by sensor MQ135
The dataset contains thirteen input features, eleven gases plus temperature and humidity, and forms a gas sensor array [11]. The data were populated with the help of gas sensors experimented on different gases [12]. We use the scikit-learn preprocessing template for normalizing, labeling, and scaling the data; these steps are required to increase accuracy so that the loss and activation functions can update the weights more precisely. The training set comprises 2400 entries with 13 columns (11 gas columns plus temperature and humidity). The test set consists of the values recorded in real time by the circuit described above, evaluated with the weights calculated during training of the artificial neural network.

6 Experimentation and Comparison

We compared the following three algorithms.

Artificial Neural Network (ANN): The algorithm used is backpropagation, with the sensors feeding values into the input layer of the neural network; through the feedforward and backward passes the model learns to predict the danger degree accurately, reaching an accuracy of 97%.

Support Vector Machine (SVM): We fitted hyperplanes to the categorical data to obtain the actual boundaries; the dataset falls into three danger categories, and with the least Euclidean distances the SVM reaches an accuracy of 91%.


Naive Bayes: We used a probabilistic approach based on Bayes' theorem; the classifier was fed the ppm values as parameters (features) and achieved an accuracy of 89% (Fig. 3; Tables 2, 3 and 4).

Fig. 3 Graphs for predicting the danger degree for various inputs (Inputs/Outputs 1–8)

Inference from Inputs/Outputs 1 and 4: the concentrations of the gases, temperature, and humidity are balanced, which makes the probability of "safe" highest.
Inference from Inputs/Outputs 2, 5, and 8: variation in some gases, temperature, and humidity makes the probability of "alert" highest.
Inference from Inputs/Outputs 3, 6, and 7: the increase in the concentration of gases, temperature, and humidity makes the probability of "dangerous" highest.


Table 2 Input parameters and values

Input/gases       1     2    3    4    5    6     7    8
LPG               0.1   28   143  0    0    115   8    26
Smoke             0.2   0    4    0    0    5     137  2
Methane           0.1   0    62   120  400  0     0    2
Butane            0.1   2    45   0    0    36    0.5  0
Carbon monoxide   0.1   0    0    0    0    0     0    1
Ammonia           2.3   5    4    8    10   2.3   2    54
Carbon dioxide    2.3   5    4    8    10   2.7   2    53
Alcohol           1.7   0    1    0    0    0.14  145  1
Acetone           0.2   15   0    1    0    0     18   0
Toluene           0.4   2    0    1.6  2    0     0    0
Benzene           7.3   224  90   95   206  75    92   38
Temperature       25    45   25   25   30   45    35   40
Humidity          45    65   45   45   50   65    55   60

Table 3 Probabilistic output

P(O)/algo     1     2     3     4     5     6     7     8
ANN           0.9   0.94  0.9   0.9   0.92  0.97  0.93  0.98
SVM           0.8   0.82  0.8   0.82  0.81  0.82  0.75  0.7
Naive Bayes   0.7   0.7   0.7   0.72  0.83  0.65  0.62  0.75

Table 4 Danger degree prediction (safe—S, alert—A, dangerous—D)

P(O)/algo     1  2  3  4  5  6  7  8
ANN           S  A  D  S  A  D  D  A
SVM           S  A  D  S  A  D  D  A
Naive Bayes   S  A  D  S  A  D  D  A

6.1 Accuracy of ANN Algorithm See Fig. 4.

6.2 Comparison on the Basis of Accuracy

From the experimental results, it is found that the artificial neural network performs best on the dataset, with an accuracy of 97%, compared with the specific machine learning algorithms SVM and Naive Bayes at 91% and 89%,


Fig. 4 Accuracy for ANN

Fig. 5 Graph representing the accuracy comparison between SVM, ANN, and Naive Bayes algorithm

Table 5 Accuracy comparison

Algorithm     Accuracy (%)
ANN           97
SVM           91
Naive Bayes   89

respectively. This is possible because pattern recognition is an important factor for gases and their combinations, and it is conquered by the artificial neural network with backpropagation (Fig. 5; Table 5).
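The three-way comparison can be sketched with scikit-learn as follows; the synthetic 13-feature dataset and the model hyperparameters are placeholder assumptions, so the cross-validated scores will not reproduce the 97%/91%/89% figures reported here:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 2400 x 13 sensor dataset, with three danger
# classes (0 = safe, 1 = alert, 2 = dangerous); purely illustrative data.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 13))
y = (X[:, :3].sum(axis=1) > 0).astype(int) + (X[:, 3:6].sum(axis=1) > 1)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "Naive Bayes": GaussianNB(),
}
results = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```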

7 Applications

• In the agricultural field: a convolutional neural network operating on vectorized pixel values from a webcam or drone can predict crop disease, integrated with the artificial neural network used in this paper to smell the VOCs [13].


• In the food industry: to predict food quality using an automated smell-sensing device; the behaviour of the sensor resistances is recorded to smell the VOCs.

8 Future Scope

The creation of nodes using embedded systems can be carried forward as future work for this prototype. A wireless sensor network with a handy device and an enhanced conversion range would allow the peer-to-peer network used in blockchain technology to be implemented as an actual hardware peer-to-peer network, where the data (or a copy of it) is sent to the immediate node and to every node. The cluster of nodes can also be arranged in a star schema whose central node collects the gas data to be transferred: if the algorithm detects danger at any site, it can alert that site by every possible means, so that data collected from nearby areas is sent to the central node for prediction and notification.

9 Conclusion

An integrated system of booming technologies is presented, and a life-saving, fully functional, scalable device is developed and verified through experimental setups: preparing its dataset, setting up the artificial neural network model, and testing it in real time using sensors. The alert system is functional and reliable under various life-threatening circumstances.

Acknowledgements I would like to acknowledge the support from ABES Institute of Technology, Ghaziabad, Uttar Pradesh, India. Special thanks are extended to my Ph.D. supervisor Dr. Kuldeep Kumar Yogi, Assistant Professor, Department of CS, Banasthali Vidyapith, Jaipur, Rajasthan, India.

References

1. Somov A, Baranov A, Savkin A, Spirjakin D, Spirjakin A, Passerone R (2011) Development of wireless sensor network for combustible gas monitoring. Sens Actuators A 171(2):398–405
2. Heyasa BB, Van Ryan Kristopher RG (2017) Initial development and testing of microcontroller-MQ2 gas sensor for university air quality monitoring. IOSR J Electr Electron Eng (IOSR-JEEE) 12(3):47–53
3. Mujawar TH, Kasbe MS, Mule SS, Deshmukh LP (2016) Development of wireless gas sensing system for home safety. Int J Eng Sci Emerg Technol 8:213–221
4. Keller PE, Kangas LJ, Liden LH, Hashem S, Kouzes RT (1995) Electronic noses and their applications. In: World congress on neural networks (WCNN), pp 928–931


5. Sun P, Ou ZH, Feng X (2013) Combustible gas discrimination by pattern recognition analysis of responses from semiconductor gas sensor array. In: Applied mechanics and materials, vol 303. Trans Tech Publications, pp 876–879
6. Venkatesan S, Jawahar A, Varsha S, Roshne N (2017) Design and implementation of an automated security system using Twilio messaging service. In: 2017 international conference on smart cities, automation & intelligent computing systems (ICON-SONICS). IEEE, pp 59–63
7. Karp AH (2003) E-speak E-xplained. Commun ACM 46(7):112–118
8. Gardner JW, Hines EL, Wilkinson M (1990) Application of artificial neural networks to an electronic olfactory system. Meas Sci Technol 1(5):446
9. Kumar R, Rajasekaran MP (2016) An IoT based patient monitoring system using Raspberry Pi. In: 2016 international conference on computing technologies and intelligent data engineering (ICCTIDE'16). IEEE, pp 1–4
10. Maksimović M, Vujović V, Davidović N, Milošević V, Perišić B (2014) Raspberry Pi as internet of things hardware: performances and constraints. Design Issues 3(8)
11. Fonollosa J, Rodríguez-Luján I, Huerta R (2015) Chemical gas sensor array dataset. Data Brief 1(3):85–89
12. Hoffheins BS (1990) Using sensor arrays and pattern recognition to identify organic compounds. Oak Ridge National Lab., TN (USA)
13. Ghaffari R, Laothawornkitkul J, Iliescu D, Hines E, Leeson M, Napier R, Moore JP, Paul ND, Hewitt CN, Taylor JE (2012) Plant pest and disease diagnosis using electronic nose and support vector machine approach. J Plant Dis Prot 119(5–6):200–207

A Novel Approach for ECG Compression and Use of Cubic Spline Method for Reconstruction Sudeshna Baliarsingh, Rashmi Rekha Sahoo, and Mihir Narayan Mohanty

Abstract Cardiac problems are one of the leading causes of death worldwide. Accurate detection and diagnosis of cardiac disease is a challenging task for physicians as well as patients. The electrocardiogram (ECG) plays an important role in the diagnosis of cardiac diseases, and an accurate, clean cardiac signal is most important for better diagnosis. The authors of this paper use a novel approach for ECG signal compression; by applying the cubic spline method, clean ECG signals are obtained and derived in the results section. It is observed that the proposed method provides better performance compared with other techniques.

Keywords ECG · Cardiac disease · Cubic spline · Noise · Clean ECG

S. Baliarsingh
Department of ECE, Raajdhani Engineering College, Bhubaneswar, Odisha, India

R. R. Sahoo
Department of I&E, College of Engineering and Technology, Bhubaneswar, Odisha, India

M. N. Mohanty (B)
Department of ECE, ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_62

1 Introduction

Researchers have found a variety of ways to detect physiological disorders, and cardiological diseases are among them. With the growing number of deaths due to heart disease, the number of ECGs taken monthly has risen sharply. Despite many evolving technologies, the ECG remains an integral tool for studying heart diseases. Times have changed: people were once unaware of heart disease before reaching their 60s, but heart problems now arise in middle-aged and young people at an increasing rate. Because of this, people of all age groups increasingly seek physical examinations, of which the ECG is a part. The electrocardiogram (ECG) is thus one of the most functional diagnostic tests in emergency medicine: an easy and inexpensive test used routinely in the evaluation of patients with chest pain.

This results in many millions of ECGs taken yearly, and often the patient and the health care provider are not at the same location; telemonitoring is the best solution to this problem. ECG telemonitoring requires continuously monitoring the ECG, storing it, and transmitting it. But the volume of ECG data is so large that efficient storage and power-efficient transmission of electrocardiograph data are difficult to achieve. Moreover, to avoid any possible mistreatment of the patient, which can lead to the patient's death, the cardiac data must be recorded in digital format with sufficient accuracy that even minor changes in the ECG record can be traced; for example, a 3-lead channel of a 24-h ambulatory ECG typically requires over 50 MB of storage. Hence, to reduce the storage demand and to make the transmission of ECG data less power-hungry, ECG data compression plays an important role. This requires a constructive and productive compression technique whose main focus is to achieve maximum data-volume reduction while preserving the significant signal morphology features after reconstruction; in other words, one of the primary aims of ECG data compression is to achieve maximum compression of the data without any loss of the diagnostic features of the signal.

Data compression methods fall into two significant categories: lossless and lossy. Lossless methods obtain an exact reconstruction of the original signal, but low data rates cannot be achieved by these methods. Lossy methods do not give an exact reconstruction, but higher compression can be obtained. The most commonly used ECG compression methods are lossy in nature.
A great deal of survey work has been carried out to compare various compression methods. The main goal here is to find the better-optimized of two direct compression techniques, the Turning Point (TP) algorithm and the Amplitude Zone Time Epoch Coding (AZTEC) technique, using a different approach, cubic spline approximation, for reconstruction of the ECG signal. We therefore intend to propose an efficient direct ECG compression algorithm ensuring that a high compression ratio (CR) can be achieved with a low percent RMS difference (PRD) for the reconstructed signal.

Mammalian hearts can be viewed as two pumps operating in series: the right atrium and right ventricle pump blood from the systemic veins into the pulmonary circulation, and the left atrium and ventricle pump blood from the pulmonary circulation into the systemic circulation (see Fig. 1). Within the heart, the tricuspid valve on the right and the mitral valve on the left (Fig. 1), together called the atrioventricular (AV) valves, prevent blood from flowing backwards from the ventricles into the atria. The semilunar valves, named for their crescent-shaped cusps, separate each ventricle from its great artery. All four of these valves lie in a plane termed the central fibrous body. This connective-tissue skeleton works as an insulator that prevents electrical impulses from being conducted between the atria and the ventricles. The AV bundle, also called the bundle of His, is a strand of specialized cardiac muscle that penetrates this connective-tissue insulator


Fig. 1 Structure of the cardiac system

and provides the only conducting pathway between the atria and the ventricles. If this conducting structure is damaged, it can cause AV block. Five specialized forms of tissue make up the conductive system of the heart: the SA node, the AV node, the bundle of His, the right bundle branch (RBB) and left bundle branch (LBB), and the Purkinje fibers. The entire heart is made up of cardiac muscle, and a specialized cardiac musculature called the nodal tissue is also distributed in the heart. These specialized conductive pathways allow the heart to be activated electrically in a predictable manner. The electrocardiogram (abbreviated ECG from the English spelling, or EKG from the Dutch) uses electrodes placed on the body surface to record the spontaneous electrical activity of the heart. It is the graphical representation of the electrical potentials produced when electric current passes through the heart, i.e., during cardiac conduction. Electrical activity is the basic characteristic of the heart and the stimulus for cardiac contraction. The ECG records the electrical impulses on ECG paper, via electrodes placed on the body surface, as waves or deflections.

1.1 Literature Survey

Guedri and others [1] proposed a new ECG compression method using a fractal technique, exploiting the fact that ECG signals form a fractal curve. The algorithm consists of three steps. First, the original ECG signals are processed and converted into a 2-D array. Then the Douglas-Peucker


algorithm (DP) is used to detect critical points (the compression phase). Finally, fractal interpolation and an Iterated Function System (IFS) are used to generate the missing points (the decompression phase). They used ECG signals in which each record has at least two trajectories with 5000 points. Fractal interpolation can produce more data points than were initially observed in the ECG signals, so there is more natural and realistic detail in the reconstructed signal. Picariello and others [2] gave a new digital CS-based ECG compression technique in which a deterministic sensing matrix, adapted to the acquired signal, is utilized. The proposed sensing matrix does not require the random generation of numbers. Moreover, after being adapted to the signal, the sensing matrix contains information on the signal features and gives better guaranteed reconstruction quality than other deterministic matrices. In particular, the compression algorithm was modified so that the sensing matrix is not evaluated for every frame, but only when there is a significant change in the signal distribution. A preliminary study and analysis of ECG compression algorithms [3] was carried out by Pallavi and Chandrashekar, who implemented four compression algorithms: two direct compression methods and two transform methods. CR and PRD values for all 48 signals available in the MIT-BIH arrhythmia database were calculated and the average values compared. They used the Amplitude Zone Time Epoch Coding (AZTEC) algorithm, the Turning Point (TP) algorithm, compression using the Discrete Cosine Transform (DCT) with backward difference, and compression using Empirical Mode Decomposition (EMD). In the EMD method, the signal is decomposed into Intrinsic Mode Functions (IMFs), which depend on the local character of the data.
Normally an IMF has time-varying frequency and amplitude, and the frequency of the first IMFs can be recognized as noise peaks. Each decomposed IMF satisfies two conditions: first, the total number of extrema and the number of zero crossings must be equal or differ by at most one; second, the mean of the envelopes defined by the local maxima and local minima must be zero. The IMFs are obtained by a sifting process, and at the end the original signal is recovered by adding all the IMFs to the final residue. Another new unified approach was stated by Jalaleddine and his co-authors [4], who noted that direct data compression techniques usually rely on prediction or interpolation algorithms: a prediction algorithm uses prior knowledge of the previous samples, whereas an interpolation method applies prior knowledge of both previous and future samples. A theoretical analysis of such techniques can be found in [5]; methods belonging to this group include the Turning Point (TP) method [6] and the Amplitude Zone Time Epoch Coding (AZTEC) method [7]. A good summary of these methods is presented in that paper. Abrar and other authors [8] stated that, to deal with the large volume of electrocardiogram data for evaluation and analysis, storage, and transmission, a suitable ECG compression technique is needed to reduce the amount of data as much as possible while preserving the clinically significant signal content for cardiac diagnosis. The electrocardiograph signal is analysed for various


parameters such as heart rate, heart abnormalities, and QRS width. The authors stated that these parameters and the compressed signal can be transmitted over less channel capacity by using this algorithm.

2 Turning Point Algorithm

ECG signals are usually sampled at 200 Hz; the turning point (TP) technique was devised to scale this sampling rate back to 100 Hz. If we exclude the one component of the ECG signal that has large amplitude and steep slope, namely the QRS complex, it is sufficient to sample all the other portions at 100 Hz. By the sampling theorem, we are otherwise sampling the ECG signal roughly 4-5 times faster than we truly need: outside the QRS complex, the rest of the ECG has a bandwidth of about 50 Hz, so a sampling frequency of 100 Hz is adequate for complete reconstruction and visualization of those portions. On these grounds, the procedure lets us halve the sampling rate of the signal by storing only the vital values. The TP technique processes three ECG data points in each time frame: the first captured sample is regarded as the reference point x0, and of the next two captured samples, x1 and x2, only one is kept. The choice is made based on which of the points indicates the slope change in the captured signal. In our analysis we observed that this provides a CR of 2:1. The turning points, i.e., the changes in slope, are found as follows: take three points from the captured signal and check

(x1 − x0)(x2 − x1) < 0, or (x1 − x0)(x2 − x1) > 0.

Point x1 is stored if the first condition holds (the slope changes sign at x1); otherwise point x2 is stored. Decompression of the compressed ECG signal is then carried out.
The TP algorithm is thus a direct data compression technique that scales back the ECG sampling frequency without reducing the elevation of large-amplitude QRS complexes [9, 10]. It achieves a fixed CR of 2 with nearly zero reconstruction error, i.e., the reconstructed signal looks like the original ECG. The disadvantage of this approach is its unsuitability for equally spaced time intervals. Its principal concept is to analyse the trend of the sampled points and then to keep just one of each pair of successive points; the algorithm thereby minimizes redundancy in the data sequence. A compression factor greater than two can be accomplished by applying the original algorithm multiple


times repeatedly. The primary idea is the same: preserve the important turning points, where the slope of the curve changes sign. The algorithm examines a number of neighbouring samples and stores one point at every iteration, following the trend of the ECG curve [11, 12]; for that reason it discards the redundant and unimportant information in the signal. In other words, the algorithm processes three data points at a time, a reference point x0 and two successive data points x1 and x2; either x1 or x2 is kept, depending on which point preserves the slope of the original three points. This is clearly illustrated in Fig. 2. From the beginning, the algorithm examines candidate points simultaneously in order to preserve one of them. It stores the very first point and assigns it as the reference point, then determines the positions and values of the minimum and maximum of the following N points. Depending on the values of the extrema relative to the reference point, and on their order, the algorithm preserves one of them: if the sign of the slope of the lines between the three points changes, the middle point is kept; otherwise, the third one is stored. If the middle point is kept, the third point is committed to memory and used in the next iteration as the "first" point [13]. It proceeds in the same way until the last data point, after which a cubic spline approximation is applied to the stored (compressed) data points to reconstruct the signal. Finally, the compression ratio (CR) and the percentage root-mean-square difference (PRD) are computed as the performance metrics. The TP algorithm is found to provide a CR of 2:1 with a minimum error of 0.15.
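The pairwise TP selection described above can be sketched as follows. This is a simplified sketch: it advances the reference by two samples each iteration rather than tracking the kept point, and the short ECG-like segment is made up for illustration:

```python
import numpy as np

def turning_point_compress(x):
    """Keep, from each pair after the reference point, the sample that
    carries the slope change, halving the sampling rate (CR of 2:1)."""
    x = np.asarray(x, dtype=float)
    kept = [x[0]]                  # the first sample is the reference x0
    i = 0
    while i + 2 < len(x):
        x0, x1, x2 = x[i], x[i + 1], x[i + 2]
        if (x1 - x0) * (x2 - x1) < 0:   # slope changes sign at x1
            kept.append(x1)
        else:
            kept.append(x2)
        i += 2
    return np.array(kept)

# Usage on a short, made-up ECG-like segment.
sig = np.array([0.0, 0.2, 0.1, 0.4, 1.2, 0.9, 0.3, 0.1, 0.0])
comp = turning_point_compress(sig)
print(len(sig), "->", len(comp))  # 9 -> 5, roughly a 2:1 reduction
```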

Fig. 2 Block diagram of turning point algorithm


3 Amplitude Zone Time Epoch Coding Technique

The AZTEC method of preprocessing real-time ECGs for rhythm assessment has become a prominent data-reduction algorithm for ECG monitors. The AZTEC algorithm converts raw ECG sample points into plateaus; the amplitude value and length of each plateau are saved for reconstruction. Although the AZTEC strategy can compress with a CR of 10, the reconstruction error is not clinically acceptable: the step-like reconstructed signal may misrepresent the ECG features, particularly in the gradually changing slopes of the P and T peaks. AZTEC was formulated for preprocessing real-time ECGs for rhythm evaluation and has become a popular data-reduction algorithm for ECG monitors and databases, accomplishing a compression ratio of 10:1. However, the reconstructed signal exhibits large discontinuities and distortion, which is not appropriate for rhythm analysis; in particular, most of the signal distortion occurs in the reconstruction of the P and T waves owing to their varying slopes. The algorithm converts raw ECG sample points into plateaus and slopes. The AZTEC plateaus (horizontal lines) are produced using zero-order interpolation; the values saved for each plateau are the amplitude of the line and its length (the number of samples through which the line can be interpolated within the aperture). A slope is stored when a plateau of three or more samples cannot be formed; the stored values of the slope are its duration (number of samples) and its final elevation (the amplitude of the last sample point). Signal reconstruction is accomplished by expanding the AZTEC lines back into a discrete sequence of data points by cubic spline interpolation; refer to Fig. 3.
Although the AZTEC technique gives a high data-reduction ratio, the fidelity of the reconstructed signal is not accepted by cardiologists because of the step-like discontinuities that occur in the reconstructed ECG waveform. An appreciable reduction of such discontinuities can normally be achieved with a smoothing parabolic filter, but the resulting amplitude distortion of the ECG rules it out for clinical examination. AZTEC (amplitude-zone-time-epoch coding) enables real-time analysis of electrocardiographic rhythm. J. R. Cox has discussed this process in detail in his work [14], which can be summarized as follows. The sampling process is initiated by an interrupt. The first sample obtained sets the limits xmax = xmin = x1. Successive samples are continuously compared with these limits; if a sample exceeds a limit, the limit is replaced by the sample itself. During the process, the difference xmax − xmin is continuously monitored. As long as the difference stays within an empirical threshold, the voltage changes are represented by a constant voltage, a "line", midway between the limits. Once a sample would separate the limits by more than the threshold, the average of the two limits is saved in the memory of the computer as the value of the line, together with the duration of the line, recorded since the time the limits were initialized. Figure 6 shows the original and reconstructed ECG signal using the AZTEC technique. Once the difference between the voltage limits exceeds the threshold value and


Fig. 3 Flowchart of the amplitude zone time epoch coding technique

a pair of data words is recorded, the process begins again by setting new limits, xmax and xmin, equal to the last voltage sample. When dealing with a high-frequency, high-amplitude signal such as the QRS complex, the voltage samples change rapidly, so lines of short duration are obtained. Thus the AZTEC representation looks like an ordered set of plateaus and slopes. This technique brings about considerable data reduction, as long runs of ECG samples are reduced to just a couple of numbers (duration and value). Like most data compression techniques, the conventional AZTEC technique is based on a fixed threshold: a small threshold generates an excellent signal but, conversely, the compression obtained is substandard. That is where the need for modified AZTEC arose. In contrast to AZTEC, it works on the notion of a varying threshold. The first step of the algorithm is no exception from the AZTEC method. Let


us consider a series of n samples xm (m = 1, 2, 3, …, n). Assign the maximum and minimum values as xmax = xmin = x1, i.e., the first sample. The subsequent samples x2, x3, …, xn are compared with the current maximum and minimum: if xm > xmax then xmax = xm, and in a similar manner, if xm < xmin then xmin = xm. This procedure continues until the difference between the maximum and minimum becomes greater than a prespecified threshold k. If a particular sample i causes the threshold k to be exceeded, a pair of data (x, i) is stored. The algorithm is self-adaptive in nature: the threshold k depends strongly on the evolution of the signal. For instance, the threshold values for low-information regions such as the baseline are likely to be greater than the value assigned to high-information regions (the P, T, and ST segments); this produces enhanced compression of the low-information regions. The threshold value is preset and limited to the range 4-12. To reconstruct and decompress the signal, an interpolation technique known as cubic spline interpolation is applied.
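The plateau-generation stage just described can be sketched as follows; the slope handling is omitted for brevity, and the threshold k and the test signal are arbitrary illustrative choices:

```python
def aztec_plateaus(x, k):
    """Plateau (zero-order) stage of AZTEC: track running max/min and emit
    an (amplitude, length) pair whenever their spread would exceed k."""
    lines = []
    xmax = xmin = x[0]
    start = 0
    for i in range(1, len(x)):
        new_max, new_min = max(xmax, x[i]), min(xmin, x[i])
        if new_max - new_min > k:
            # Save the finished plateau: its value is midway between the
            # limits, its length is the number of samples it covered.
            lines.append(((xmax + xmin) / 2.0, i - start))
            xmax = xmin = x[i]      # restart the limits at the new sample
            start = i
        else:
            xmax, xmin = new_max, new_min
    lines.append(((xmax + xmin) / 2.0, len(x) - start))
    return lines

# Usage: a flat baseline followed by a step gives one plateau per level.
sig = [0.0, 0.1, 0.0, 0.1, 2.0, 2.1, 2.0]
print(aztec_plateaus(sig, k=0.5))
```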

4 Results

In this section, a simulation is developed in MATLAB and applied to a set of ECG signals in order to examine the quality of the proposed compression method. We analyse the best threshold selection rule among the three proposed threshold selection rules for compressing the ECG signals. Finally, a comparative study between TP and AZTEC is carried out. To evaluate the relative merits of ECG data compression methods, a platform for comparison must be established. To guarantee a solid basis of comparison among these approaches, all comparisons must be made on the identical application while meeting the minimum acceptable error criteria for ECG preservation. In this study, the test data are chosen from the MIT-BIH Arrhythmia Database. Each record contains data from two separate ECG leads, sampled at 360 Hz with 11 bits per sample. A set of ECG records containing an arrhythmia condition was chosen for testing the compression performance; the compression algorithm was tested on record 100 of the MIT-BIH arrhythmia database. The compression ratio (CR) along with the per cent rms difference (PRD) is utilized as a quantitative performance measure. The results are obtained through simulation in MATLAB 2018b and presented in Figs. 4 and 5. The comparison of the various parameters is shown in Table 1.
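As an illustration of the reconstruction and the two quality measures, the Python sketch below interpolates stored (value, duration) pairs with a cubic spline (via SciPy) and computes CR and PRD. The midpoint placement of the spline knots and the 22 bits per stored pair are assumptions, not values from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def reconstruct(pairs, n):
    """Rebuild an n-sample signal by cubic-spline interpolation
    through the plateau midpoints (assumed knot placement)."""
    t, v, pos = [], [], 0
    for value, dur in pairs:
        t.append(pos + dur / 2.0)   # knot at the plateau midpoint
        v.append(value)
        pos += dur
    return CubicSpline(t, v, bc_type="natural")(np.arange(n))

def prd(x, y):
    """Percent rms difference between original x and reconstruction y."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def cr(n_samples, n_pairs, bits_per_sample=11, bits_per_pair=22):
    """Compression ratio: raw bits over stored bits (pair width assumed)."""
    return (n_samples * bits_per_sample) / (n_pairs * bits_per_pair)
```

The 11 bits per sample matches the MIT-BIH sampling format quoted above; a higher CR with a low PRD indicates a better compressor.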

5 Conclusion

A clean ECG signal plays an important role in cardiac disease analysis, as it helps physicians reach a proper diagnosis. In the proposed work, the authors have used a novel


S. Baliarsingh et al.

Fig. 4 Compression result after applying the turning point algorithm to MIT-BIH Arrhythmia database record 100, a lead ML and b V5, in MATLAB 2018b


Fig. 5 Compression result after applying the Amplitude Zone Time Epoch Coding (AZTEC) technique to MIT-BIH arrhythmia database record 100, a lead ML and b V5, in MATLAB 2018b


Table 1 Comparison of experimental results

Comparison techniques | CR of ECG signal 1 | PRD of ECG signal 1 (%) | CR of ECG signal 2 | PRD of ECG signal 2 (%)
TP                    | 2.652              | 2.26                    | 2.654              | 2.2641
AZTEC                 | 2.9508             | 7.69                    | 2.8617             | 10.431

noise cancellation approach for obtaining the clean ECG signal from the noisy signal. The cubic spline algorithm is used in the proposed work, and the performance is compared across different parameters. It is found that the filter provides better results and can be refined in future work to obtain better accuracy.

References

1. Guedri H (2021) ECG compression with Douglas-Peucker algorithm and fractal interpolation. Math Biosci Eng (MBE) 4(18):19
2. Picariello F (2021) A novel compressive sampling method for ECG wearable measurement systems. Sci Direct 167:10
3. PM. a. CH (2016) Study and analysis of ECG compression algorithm. In: International conference on communication and signal processing
4. Jalaleddine SMS, Hutchens CG, Strattan RD, Coberly WA (1990) ECG data compression techniques—a unified approach. IEEE Trans Biomed Eng
5. Davisson LD (1968) The theoretical analysis of data compression. Proc IEEE
6. Muller WC (1978) Arrhythmia detection program for an ambulatory ECG monitor. Biomed Sci Instrum
7. Cox JR (1968) AZTEC, a preprocessing program for real-time ECG rhythm analysis. IEEE Trans Biomed Eng
8. Ahm A (2009) ECG compression and LabVIEW implementation. J Biomed Sci Eng
9. Blanco-Velasco M (2005) On the use of PRD and CR parameters for ECG compression. Med Eng Phys 27
10. Kale S, Gawali D (2016) Review of ECG compression techniques and implementations. In: International conference on global trends in signal processing, information computing and communication
11. Singh B, Kaur A, Singh J (2015) A review of ECG data compression techniques. Int J Comput Appl 116(11):39–44
12. Sahoo GK, Ari S, Patra SK (2015) Performance evaluation of ECG compression techniques. In: 2015 IEEE international conference on electrical computer and communication technologies (ICECCT), pp 1–5
13. Philips W, De Jonghe G (1992) Data compression of ECGs by high degree polynomial approximation. IEEE Trans Biomed Eng 39(4):330–337
14. Anjum MS, Chakraborty M (2014) ECG data compression using turning point algorithm. IJIRMPS Int J Innov Res Eng Multi Phys Sci 2(6)

Design and Implementation of a Mixed Signal Filter for Noise Removal in Raw ECG Signal

Mohan Debarchan Mohanty, Priyabrata Pattnayak, and Mihir Narayan Mohanty

Abstract The ECG signal is a recording, presented as a graph, that represents cardiovascular activity. Health specialists use the ECG to monitor the activity of the heart through the peaks detected in the signal. This paper focuses on two facets: first, the design of a mixed-signal filter implementing the Pan-Tompkins algorithm. Mixed-signal filters are used basically to make the conditioning faster, with a large dynamic range, and allow implementation in nonlinear systems. The ICs and multipliers used in the filters are CMOS based, for which the design has negligible static power consumption and very low dynamic power dissipation during switching. The complexity of the circuit reduces as the 18 nm process CMOS technology is used. An ADC converts the analog output to digital form in order to ease the detection process through various transforms. The work can be further extended in the field of biomedical research by developing a mixed-signal IC for this application for detection purposes.

Keywords ECG signal · Peaks · Filtration · Pan-Tompkins algorithm · Mixed signal · CMOS filter · ADC · 18 nm · Transforms · Analog IC

1 Introduction

Electrocardiography has been the most important weapon in the cardiology sector of biomedical research. This helps in detecting most of the cardiological imperfections inside a human body ranging from arrhythmia to bradycardia and tachycardia. The electrical signal inside the heart is basically a periodic signal consisting of five peaks P, Q, R, S and T. So, to detect the imperfections, it is necessary to detect the Q, R and S peaks of the signal. R-peak is one of the principal segments of the QRS complex which has the fundamental job in deciding and concluding heart rhythm

M. D. Mohanty
BPUT, Bhubaneswar, Odisha, India

P. Pattnayak · M. N. Mohanty (B)
ITER, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_63



M. D. Mohanty et al.

abnormalities and furthermore in determining pulse inconsistency [1]. The first step before detecting the peaks is to filter out the noisy sections of the raw signal; this aspect is considered in this paper. Several attempts have been made to obtain the noiseless signal with a digital filter and then perform transformations such as the Hilbert transform, wavelet transform and Fourier transform to detect the R-peak of the signal [2, 3]. The digital filter output makes it easier to perform the transforms but is not practically suitable, as it is quite slow, taking 100–200 samples per second, for which the analog-based CMOS filter has been introduced here. The analog-based CMOS filter takes 1000 samples per second, and the power dissipation is almost zero except when it performs the switching action. CMOS is used in this work to design a two-stage IC which is further used as a low-pass filter and a high-pass filter. Several other strategies have also been introduced to design a CMOS-based IC. The authors in [4] tried a compensation strategy with a buffer for the CMOS-based IC instead of using lead compensation. The authors in [5] tried a way to reduce the size by taking the 180 nm process technology to design an RC-based IC filter with high gain. Work has also been done to scale the IC-based filter to a reduced size by performing several optimization techniques in order to make it feasible for practical use [6–12]. The CMOS-based IC is not only used for filtering; it has also been used to perform signal conditioning and amplifying actions. The authors in [13–15] have used the IC for signal conditioning with a reduced size in low-power applications. In [16], the researchers have also used it in biomedical applications to prove the efficiency and practicality of the filter.
Here, using CMOS, the IC is designed first, and subsequently it is used in the implementation of the Pan-Tompkins algorithm. This paper has been introduced briefly in this section; it further has three more sections. Section 2 describes the methodology of the design, consisting of two more subsections; the results are discussed in Section 3; and finally, Section 4 concludes the paper along with its future aspects.

2 Methodology

Operational amplifiers (ICs) are voltage-controlled voltage source devices that can perform several mathematical operations ranging from addition and subtraction to integration and differentiation [4]. It is called a voltage-controlled voltage source because, in its ideal state, it can measure the input voltage without feedback and produces an output voltage proportional to the input voltage. Therefore, large input impedance and large gain are obtained, which in turn makes a system more stable and robust. The ICs used in this design perform the filter operation in order to remove the background noise generated during cardiovascular movements.

Design and Implementation of a Mixed Signal Filter for Noise …


In modern times, CMOS has flooded the IC design industry, mostly because of its negligible static power and exceptionally low dynamic power dissipation. This work basically focuses on two aspects of mixed-signal design: first, the design of a CMOS-based two-stage differential IC, used as an application to filter a raw ECG signal from the MIT-BIH database, and second, the design of a flash ADC to convert the analog output into digital form so that it can undergo several transforms for peak detection. The differential CMOS amplifier is basically a combination of both N-type and P-type MOSFETs, and the output is obtained by switching the MOSFETs. The CMOS-based analog filter has certain advantages as compared to the digital filter. The first is that the power consumption of the filter is almost negligible: it uses power only when there is switching between the PMOS and NMOS, and the static power consumption is almost zero except for the static noise generated [17–19]. Secondly, the dynamic range is quite high as compared to the digital filter. The digital filter can process up to 200 samples per second, but in the case of the analog CMOS filter the frequency can vary from 1 to 1000 kHz, which makes the operation faster. The MOSFET is mathematically represented by the drain current equation as follows:

I(d) = µ·C(ox)·(W/L)·[(V(gs) − V(t))·V(ds) − V(ds)²/2]    (1)
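As a quick numerical check of Eq. (1), the triode-region drain current can be evaluated in Python; the device values below are illustrative, not taken from the paper:

```python
def drain_current(mu_cox, w_over_l, v_gs, v_t, v_ds):
    """Triode-region drain current per Eq. (1):
    I(d) = mu*C(ox)*(W/L)*[(V(gs)-V(t))*V(ds) - V(ds)^2/2]."""
    return mu_cox * w_over_l * ((v_gs - v_t) * v_ds - v_ds ** 2 / 2)

# illustrative values: mu*C(ox) = 200 uA/V^2, W/L = 10,
# Vgs = 1 V, Vt = 0.5 V, Vds = 0.1 V  ->  I(d) = 90 uA
i_d = drain_current(200e-6, 10, 1.0, 0.5, 0.1)
```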

where W/L is the width-to-length ratio of the MOSFET, µ is the carrier mobility, C(ox) is the oxide capacitance, V(gs) is the potential difference between the gate and the source, V(ds) is the potential difference between the drain and the source, and V(t) is the threshold voltage of the MOSFET.

1. Design of the two-stage differential CMOS-based IC

The single-stage IC has a remarkably high gain but is not applicable for practical applications like signal conditioning and filtering. This is the main reason to introduce the two-stage IC, where the gain reaches its maximum as compared to the single-stage IC while the bandwidth decreases, since the gain-bandwidth product remains constant [6]. The two-stage IC in Fig. 1 has two stages: the first is the stage where it acquires a large gain, and the other is the one where the high swing is achieved. The high-gain stage can be considered equal to the single-stage IC circuit, that is, a PMOS differential pair along with NMOS current mirrors, but the second stage here is the driving circuit for the first stage, that is, a common-source amplifier, which in turn increases the output swing and, as a result, low output impedance is obtained [20]. The gain of the first stage is given by

A1 = −g(m1)·(r1 ∥ r2) = −g(m1)·(r1·r2)/(r1 + r2)    (2)

where g(m1) is the transconductance of the input PMOS amplifier and r1 and r2 are the input resistances, which when connected in parallel provide the input impedance


Fig. 1 Two-stage CMOS-based IC

for the first stage. The transconductance g(m1) is given as

g(m1) = √(2·µ·C(ox)·(W/L)·I(d))    (3)

where µ is the carrier mobility, C(ox) is the oxide capacitance of the MOSFET, W/L is the ratio of width to length, and I(d) is the drain current of the MOSFET obtained in (1). Similarly, the gain of the second stage is given by

A2 = −g(m7)·(r6 ∥ r7) = −g(m7)·(r6·r7)/(r6 + r7)    (4)

where g(m7) is the transconductance of the input NMOS amplifier and r6 and r7 are the input resistances, which when connected in parallel provide the input impedance for the second stage. Now, the overall gain of the two-stage IC is the product of both gains, given by

A(total) = A1·A2 = g(m1)·g(m7)·(r6 ∥ r7)·(r1 ∥ r2)    (5)

The two-stage ICs here are designed using Cadence Virtuoso and are given in Fig. 2.

2. Design of Filter using Pan-Tompkins algorithm

The raw ECG signal is generated in Cadence Virtuoso with the help of the data obtained from the MIT-BIH arrhythmia database. Before the peak detection, this raw signal must be filtered in order to remove the noise generated from the other


Fig. 2 Design of double-stage CMOS-based IC

movements of the heart and obtain the legitimate cardiovascular behavior. The Pan-Tompkins algorithm is a way of designing a bandpass filter by cascading a second-order low-pass filter and a second-order high-pass filter with a 3 dB passband from about 5–15 Hz [1]. The sampling rate of this design is 1 kHz. The transfer function of the second-order low-pass filter is given by [2]

LPF(Z) = (1 − Z⁻⁶)² / (1 − Z⁻¹)²    (6)

The transfer function of the high-pass filter is given by [3]

HPF(Z) = (Z⁻³² + 32·Z⁻¹⁶ − 1) / (1 + Z⁻¹)    (7)

The gain of the CMOS-based filter is henceforth calculated from the equation

V(out)/V(in) = A(p)·(f/f(l)) / [√(1 + (f/f(l))²) · √(1 + (f/f(h))²)]    (8)

where A(p) is the passband gain, f is the frequency of the applied signal, f(l) is the low passband (cut-off) frequency and f(h) is the high passband (cut-off) frequency.
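A small helper evaluating Eq. (8); treating f as the signal frequency and taking unit passband gain are assumptions, while the 5–15 Hz corners follow the passband stated earlier:

```python
import math

def bandpass_gain(f, a_p=1.0, f_l=5.0, f_h=15.0):
    """Magnitude response per Eq. (8). The 5-15 Hz corner frequencies
    follow the paper's passband; A(p) = 1 is an assumption."""
    return a_p * (f / f_l) / (
        math.sqrt(1 + (f / f_l) ** 2) * math.sqrt(1 + (f / f_h) ** 2)
    )
```

The gain is small well below 5 Hz and well above 15 Hz, and largest inside the passband, as expected for a band-pass response.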

776

M. D. Mohanty et al.

Fig. 3 Design of the 4-bit flash ADC

3. Design of the 4-bit flash ADC

In order to perform the transform to obtain the peaks, the analog output from the CMOS-based filter must be converted to digital form. This is where the flash ADC comes into use. The 4-bit flash ADC is designed as a series of comparators, each comparing the input signal to a unique reference voltage, after which it gives the digital form of the ECG. In the ADC, as the analog input exceeds a comparator's reference voltage, that comparator's output saturates to a high state. The outputs of the series of comparators are then fed through an encoder to encode the result into digital form. The designed 4-bit flash ADC is given in Fig. 3.
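The comparator-plus-encoder behaviour can be sketched as follows; this is an illustrative behavioural model (evenly spaced ladder taps, ideal comparators), not the transistor-level circuit of Fig. 3:

```python
def flash_adc(v_in, v_ref=1.0, bits=4):
    """Behavioural sketch of a flash ADC: 2**bits - 1 comparators against
    evenly spaced ladder taps produce a thermometer code, and the encoder
    reduces it to a binary count."""
    levels = 2 ** bits
    taps = [v_ref * i / levels for i in range(1, levels)]  # ladder references
    thermometer = [v_in > t for t in taps]                 # comparator outputs
    return sum(thermometer)                                # encoded output code
```

Because every comparator switches in parallel, the conversion completes in a single step, which is why the flash architecture suits the fast sampling targeted here.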

3 Results

The ECG signal was generated by passing the values obtained from the MIT-BIH database into Cadence Virtuoso XL [21–23]. The overall circuit design, shown in Fig. 4, consists of two double-stage ICs combined with resistors and capacitors so as to form a low-pass filter in the first part of the cascade and a high-pass filter in the second part. Lead compensation plays an important role in the design of two-stage ICs. To obtain a higher load, the multiplier shown in Fig. 5 is used instead of a lead resistance of greater magnitude, because the higher the resistance, the higher the required W/L ratio and hence the larger the size. The analog-based CMOS filter is used here in order to obtain faster results, as it takes 1000 samples per second. It also has the advantage of low power dissipation, occurring only when


Fig. 4 Design of the filter using the two-stage ICs

Fig. 5 Multiplier design for lead compensation

the CMOS switches from NMOS to PMOS and vice versa. The encoder designed for the ADC is shown in Fig. 6. The raw ECG signal contains various background noises arising from the organs surrounding the cardiac segments as shown in Fig. 7. The filter produces excellent results by filtering out the background noise as seen in Fig. 8. The ADC output converts the filter output into digital form as shown in Fig. 9. The filtered signal can


Fig. 6 3-bit encoder design

Fig. 7 Raw ECG signal generated



Fig. 8 Filtered ECG signal

Fig. 9 ADC output

undergo transformations easily to detect the peaks. The CMOS technology used here is the 18 nm process technology which makes the design more compact and robust.

4 Conclusion

The IC is used basically for signal conditioning or filter design in most practical applications, as its response is considered stable along with higher gain and higher input impedance. The filter design aspect is considered here. The 18 nm process is used in this design and is simulated using Cadence Virtuoso. This technology makes the system compact and paves the way for further research on the design of a mixed-signal IC with this logic. Further, the transformations and peak detection of the signal can also be implemented using mixed-signal circuit design techniques.


References

1. Mohanty MD, Mohanty B, Mohanty MN, R-peak detection using efficient technique for tachycardia detection. In: 2017 2nd international conference on man and machine interfacing (MAMI)
2. Aurobinda A, Mohanty BP, Mohanty MN (2016) R-peak detection of ECG using adaptive thresholding. In: International conference on communication and signal processing (ICCSP). IEEE, pp 0284–0287
3. Park J-S, Lee S-W, Park U (2017) R peak detection method using wavelet transform and modified Shannon energy envelope. Hindawi J Healthc Eng 2017
4. Palmisano G, Palumbo G (1997) A compensation strategy for two-stage CMOS ICs based on current buffer. IEEE Trans Circ Syst I Fundam Theory Appl 44(3)
5. Harrison J, Weste N (2002) 350 MHz IC-RC filter in 0.18 µm CMOS. Electron Lett 38(6)
6. Waykole S, Bendre VS (2018) Performance analysis of classical two stage IC using CMOS and CNFET at 32 nm technology. In: 2018 fourth international conference on computing communication control and automation (ICCUBEA)
7. Nagulapalli R, Hayatleh K, Barker S, Zourob S, Yassine N, Naresh Kumar Reddy B (2018) A technique to reduce the capacitor size in two stage Miller compensated IC. In: 2018 9th international conference on computing, communication and networking technologies (ICCCNT)
8. Ghosh S, Prasad De B, Kar R, Mal AK, Symbiotic search algorithm for optimal design of CMOS two stage IC with nulling resistor and robust bias circuit. IET Circ Dev Syst 13(5)
9. Maji KB, Kar R, Mandal D, Prasanthi B, Ghoshal SP, Design of low-voltage CMOS op-amp using evolutionary optimization techniques. Advances in computer communication and computational sciences, pp 257–267
10. Nagulapalli R, Hayatleh K, Barker S, Naresh Kumar Reddy B, Seetharamulu B (2019) A low power Miller compensation technique for two stage op-amp in 65 nm CMOS technology. In: 2019 10th international conference on computing, communication and networking technologies (ICCCNT)
11. Stillmaker A, Baas B (2017) Scaling equations for the accurate prediction of CMOS device performance from 180 nm to 7 nm. Integration 58:74–81
12. Budanov D, Korotkov A (2019) A design of flash analog-to-digital converter in 180 nm CMOS process with high effective number of bits. J Phys Conf Ser 1236, international conference "Emerging trends in applied and computational physics 2019" (ETACP-2019), Saint-Petersburg, Russian Federation, 21–22 March 2019
13. Papathanasiou K, Ehmann TL (2000) An implantable CMOS signal conditioning system for recording nerve signals with cuff electrodes. In: 2000 IEEE international symposium on circuits and systems (ISCAS)
14. Yoshioka K, Sugimoto T, Waki N, Kim S, Kurose D, Ishii H, Furuta M, Sai A, Itakura T (2017) 28.7 A 0.7 V 12b 160 MS/s 12.8 fJ/conv-step pipelined-SAR ADC in 28 nm CMOS with digital amplifier technique. In: 2017 IEEE international solid-state circuits conference (ISSCC)
15. Yadav C, Prasad S (2017) Low voltage low power sub-threshold operational amplifier in 180 nm CMOS. In: 2017 third international conference on sensing, signal processing and security (ICSSS)
16. Tyagi S, Saurav S, Pandey A, Priyadarshini P, Ray M, Pal BB, Nath V, A 21 nW CMOS operational amplifier for biomedical application. In: Proceedings of the international conference on nano-electronics, circuits & communication systems, pp 389–396
17. Al-Busaidi AM, Khriji L (2013) Digitally filtered ECG signal using low-cost microcontroller. In: 2013 international conference on control, decision and information technologies (CoDIT)
18. Dozio R, Burke MJ (2009) Second and third order analogue high-pass filters for diagnostic quality ECG. In: IET Irish signals and systems conference (ISSC 2009)
19. Kher R, Signal processing techniques for removing noise from ECG signals. J Biomed Eng Res


20. Wadhwani AK, Yadav M, Filtration of ECG signal by using various filters. Int J Modern Eng Res (IJMER) 1(2):658–661
21. https://www.physionet.org/physiobank/database/mitdb/
22. https://physionet.org/physiobank/database/vfdb/
23. https://www.physionet.org/physiobank/database/mvtdb/

Machine Learning Application in Primitive Diabetes Prediction—A Case of Ensemble Learning

Narayan Patra, Jitendra Pramanik, Abhaya Kumar Samal, and Subhendu Kumar Pani

Abstract The presence of a high level of sugar molecules in the blood for a long period of time gives rise to a chronic illness termed diabetes. It severely affects the functioning of other organs in the body. A precise early prediction system can be very helpful in reducing the risk and severity associated with diabetes, with significant influence on having a healthy lifestyle. This paper presents an introductory application of ensemble learning for early diabetes prediction, employing the AdaBoost algorithm with Support Vector Classifier (SVC) and Decision Tree (DT) as base estimators. The performance of the model is verified through different classification metrics. This article is meant to encourage data scientists to apply powerful machine learning models in the field of biomedical analysis.

Keywords AdaBoost learning · Diabetes prediction · Ensemble learning

1 Introduction

A condition is categorized as chronic when the effects of the disease are permanent and severely affect health and quality of life. Across the globe, chronic conditions are a major cause of death in adults. The cost associated with chronic diseases is also

N. Patra Department of Computer Science and Engineering, ITER, SoA Deemed to be University, Bhubaneswar, India e-mail: [email protected] J. Pramanik Centurion University of Technology and Management, Bhubaneswar, Odisha, India e-mail: [email protected] A. K. Samal Department of CSE, Trident Academy of Technology, Bhubaneswar, Odisha, India e-mail: [email protected] S. K. Pani (B) Krupajal Engineering College, Bhubaneswar, Odisha, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1_64



N. Patra et al.

very high, such that a hefty amount of the budget is spent by both governments and individuals [1, 2]. Diabetes, also known as diabetes mellitus (DM), is an exponentially increasing chronic disease in the world. While the cause is not entirely understood, scientists believe both genetic and environmental factors play an important role in a person becoming diabetic. Early detection of diabetes is the need of the hour. Data mining is one of the most prominent tools for biomedical databases, where information can be effectively extracted and utilized to enhance the decision-making process in medical domains [3]. For knowledge discovery and disease prediction, various algorithms have already been utilized [4, 5]. In the case of diabetes, before causing death at the final stage, it gives rise to various types of disorders. In many cases, physicians provide erroneous treatment without proper experience of diabetes [6, 7]. In this manuscript, we propose an early diabetes prediction system using the AdaBoost algorithm with SVC and DT as base classifiers. The rest of the paper is organized as follows: Sect. 2 discusses past work on early prediction, Sect. 3 describes the step-wise implementation of the classification model, and the final section (Sect. 4) gives the concluding remarks of the article.

2 Related Works

The PID dataset of the UCI repository has been used for diabetes diagnosis via machine learning by many researchers. In this context, Shanker [6] performed prediction of DM in the Pima Indian female population near Phoenix, Arizona, with the help of neural networks. Alam et al. [7] demonstrated the effectiveness of ML and data mining tools in the field of disease diagnosis; the article showed a strong association between glucose and BMI and diabetes. In [8], the presented study identifies patients who have a high risk of developing diabetes using supervised algorithms such as gradient boosting and logistic regression. Dazzi et al. [9] used neuro-fuzzy systems for DM, where, for the least number of invasive blood tests, the exact dosage of insulin was predicted; the variation in the dosage of insulin was predicted through BEP and fuzzy logic. In [10], association rules and decision trees are used to predict the occurrence of certain diseases prevalent in diabetic patients. Azrar et al. [11] proposed an early diabetes prediction system by comparing different data mining algorithms on the PID dataset. In [12], diabetes diagnosis was done on an advanced medical dataset using SVM.

3 Materials and Methods

The reflectance values of a hyperspectral image depend mainly on the physical structure and chemical composition of the material. Hyperspectral images consist of a

Machine Learning Application in Primitive Diabetes Prediction …


large number of narrow-band spectra, whose proper analysis reveals fine, precise, and accurate information about an object or material.

3.1 Dataset

The PID dataset from the UCI repository is taken here as input data for the prediction model. A set of assured diagnostic measurements covering different parameters present in the dataset is used to predict whether a particular person is diabetic or not. The dataset has 9 attributes, including 1 class attribute, with 768 records. Of the 768 records, 34.9% are positive instances and 65.1% negative instances.

3.2 Data Preparation

In order to achieve a satisfactory result, the input dataset needs to be of high quality. Handling missing values and inconsistent data is a prominent step in acquiring quality results with any machine learning model. Data cleaning is the most fundamental step in enhancing the quality of the dataset. It normally takes care of removing inconsistent or noisy data and filling in missing values. The factors considered during the process of data cleaning are:

• Irrelevant observations
• Wrong or bad labeling
• Missing/null data points
• Presence of outliers

As we are taking a standard dataset, we can acknowledge that factors 1 and 2 are safely dealt with. With the pandas library, all missing or null data points can be easily extracted. In Fig. 1, it can be observed that the dataset has no missing data points. By taking the histogram of the input dataset, we observed the presence of outliers in some columns. It can be concluded that the given dataset is incomplete and contains

Fig. 1 Observing missing data
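The missing-value check can be sketched with pandas; the column names below mirror the PID dataset on a tiny stand-in, and treating zeros in the clinical columns as missing is an assumption consistent with the outlier observation above:

```python
import pandas as pd

# tiny stand-in for the 768-record PID dataset (column names mirror it)
df = pd.DataFrame({
    "Glucose": [148, 85, 0, 89],
    "BMI": [33.6, 26.6, 23.3, 0.0],
    "Outcome": [1, 0, 1, 0],
})

print(df.isnull().sum())   # no literal NaNs in the raw data

# physiologically impossible zeros are really missing values
cols = ["Glucose", "BMI"]
df[cols] = df[cols].replace(0, pd.NA)
print(df.isnull().sum())   # now one missing entry in each clinical column
```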


misleading information in the Blood Pressure, BMI, and Glucose columns. With some minor adjustments, we proceeded with the given data. The feature distribution of the input dataset has been shown in Fig. 2 along with boxplot (Fig. 3).

Fig. 2 Visualizing the feature distribution

Fig. 3 Visualizing the feature distribution with boxplot


Fig. 4 Heatmap of feature correlations

We computed the correlation between every pair of features (and the outcome variable) and visualized the correlations using a heatmap (Fig. 4). In heatmaps, brighter colors indicate stronger correlation. From the heatmap, we can see that glucose level, age, BMI, and the number of pregnancies all have a significant correlation with the outcome variable.
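The correlation matrix behind such a heatmap can be computed in a few lines; the data below is a synthetic stand-in (outcome driven by glucose, age independent), not the PID data itself:

```python
import numpy as np
import pandas as pd

# synthetic stand-in: outcome is driven by glucose, while age is independent
rng = np.random.default_rng(0)
glucose = rng.normal(120, 30, 500)
age = rng.integers(21, 70, 500).astype(float)
outcome = (glucose + rng.normal(0, 20, 500) > 130).astype(int)

df = pd.DataFrame({"Glucose": glucose, "Age": age, "Outcome": outcome})
corr = df.corr()   # the matrix a seaborn-style heatmap would display
```

Here the Glucose/Outcome cell is large while the Age/Outcome cell is near zero, which is exactly the contrast the heatmap's brightness encodes.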

3.3 Classification Algorithms

Ensemble learning methods are popular and are the go-to technique when the best performance on a predictive modeling project is the most important outcome. An ensemble is a machine learning model that combines the predictions from two or more models. The models that contribute to the ensemble, referred to as ensemble members, may be of the same or different types and may or may not be trained on the same training data. The predictions made by the ensemble members may be combined using statistics, such as the mode or mean, or by more sophisticated methods that learn how much to trust each member and under what conditions. There are two main, related reasons to use an ensemble over a single model:

• Performance: An ensemble can make better predictions and achieve better performance than any single contributing model.
• Robustness: An ensemble reduces the spread or dispersion of the predictions and model performance.


In 1996, Yoav Freund and Robert Schapire proposed an ensemble boosting classifier known as AdaBoost, or Adaptive Boosting. The AdaBoost algorithm works in the following way:

• Random selection of a training subset
• Based on the accurate prediction of the last training round, it selects the training set for the next iteration
• In each iteration, based on the accuracy, it assigns a weight to the trained classifier
• The higher the accuracy, the higher the assigned weight
• It runs until it reaches the maximum number of estimators or fits the training data without error.
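The boosting loop above can be exercised with scikit-learn; the synthetic dataset below is a stand-in for the PID data, and the stump depth and estimator count are illustrative choices, not the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for the PID dataset: 8 features, binary outcome
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)

# AdaBoost with a decision-tree stump as the base estimator (passed
# positionally); an SVC with probability=True could be swapped in
model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                           n_estimators=50, random_state=42)
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

Each of the 50 rounds reweights the training samples that the previous stump misclassified, which is the adaptive step the bullet list describes.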

3.4 Classification Model

In our work, we have implemented the AdaBoost algorithm with two base classifiers: SVC and DT. The reduced dataset contains two classes. The following Python libraries are used to build our ensemble learning model:

• NumPy supports large, multidimensional arrays and matrices, along with a rich set of mathematical functions that operate on this multidimensional data.
• Pandas is mainly used for data analysis and manipulation of data structures and time series data.
• Matplotlib is a plotting library for Python that provides APIs for plotting graphs and other graphical representations of data.
• Scikit-learn is a machine learning library for Python. It includes various algorithms for classification, regression, and clustering.

The reduced dataset is classified with SVM and DT as estimators in the AdaBoost algorithm. The accuracy of the classifiers is assessed with different machine learning metrics (shown in Table 1) along with the confusion matrix. The performance plot of the classifiers is shown in Fig. 5. It is seen that the AdaBoost algorithm with DT as the base estimator has higher accuracy than with SVC as the base estimator (Fig. 6).

Table 1 Performance of AdaBoost algorithm with DT and SVC

Outcomes  Name of classifier  Score     Precision  F1_score
0         AdaBoost—SVC        0.668966  0.668966   0.668966
1         AdaBoost—DT         0.772414  0.772414   0.772414
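The metrics of Table 1 can be computed by hand from the cells of a confusion matrix. The following plain-Python sketch uses hypothetical labels (not the paper's data) and also illustrates how score, precision, and F1 can coincide, as they do in each row of Table 1, when precision equals recall:

```python
# Hypothetical predictions for a binary (non-diabetic = 0 / diabetic = 1) task
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Confusion-matrix cells
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives  -> 4
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives  -> 4
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives -> 1
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives -> 1

accuracy = (tp + tn) / len(y_true)                   # -> 0.8
precision = tp / (tp + fp)                           # -> 0.8
recall = tp / (tp + fn)                              # -> 0.8
f1 = 2 * precision * recall / (precision + recall)   # -> 0.8 (precision == recall)

print([[tn, fp], [fn, tp]])  # confusion matrix: [[4, 1], [1, 4]]
print(accuracy, precision, f1)
```

When the false-positive and false-negative counts are equal, precision equals recall and the harmonic mean (F1) collapses to the same value.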

Machine Learning Application in Primitive Diabetes Prediction …

789

Fig. 5 Confusion matrix: (a) for AdaBoost with SVC; (b) for AdaBoost with DT

4 Conclusion

In this work, we have successfully implemented an early diabetes prediction system using an ensemble model on the PID dataset. With proper preprocessing, missing values in the dataset are filled and outliers are removed, which greatly enhances the dataset quality. The implemented model achieves satisfactory accuracy. The results obtained also leave scope for improving the accuracy level by implementing other potent classifiers.


Fig. 6 Performance plot of classifier for diabetes prediction


Author Index

A Abhaya Kumar Samal, 783 Abhay Deshpande, 539 Abhigyan Ray, 667 Abhinav Kislay, 387 Achyut Shankar, 387 Akansha Singh, 215 Akash Kumar Bhoi, 345, 355, 363, 377, 387, 431 Alijan Ranjbar, 105 Amandeep, 27 Amandeep Singh Sappal, 61 Anagha Umashankar, 643 Ananya Dutta, 653 Ananya Pandey, 333 Anjana Raut, 417 Ankush Lath, 149 Annapareddy V. N. Reddy, 551 Annie Olivia Miranda, 477 Anupriya, K., 633 Ashish Kumar Singh, 691 Ashok Kumar Sahoo, 377 Atul Kumar, 215, 309, 319 Avinash Kumar Sharma, 737 Ayan Mukherjee, 691

B Bhavesh Kumar Chauhan, 215, 319 Bhuvana, K. N. V. S., 583 Birinderjit Singh Kalyan, 139, 163, 173

C Chukwuma Kama, 717

D Debasmita Sarkar, 667 Deep Mukherjee, 491 Devendra Kumar, 267 Dharam Dutt, 227 Disha Ghoshal, 467 Divesh Kumar, 49 Divya Jyoti Thakur, 1 Divya Shukla, 309 Duruvan Raj, G., 403

G Gagandeep Kaur, 377 Garg, A. P., 227 Giridaran, S., 403 Gopu Mruudula Sri, 633

H Hardeep Singh Ryait, 15 Harpreet Kaur Channi, 563 Harpreet Vohra, 201 Harvinder Singh, 139 Himani Goyal Sharma, 139, 163

I Inderpreet Kaur, 201 Indu Prabha Singh, 333 Ishika Raj, 491

J Jaimala Bishnoi, 241 Jain, R. K., 227

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 P. K. Mallick et al. (eds.), Cognitive Informatics and Soft Computing, Lecture Notes in Networks and Systems 375, https://doi.org/10.1007/978-981-16-8763-1


Janarthanan, M., 403 Jeetashree Aparajeet, 531 Jitendra Kumar Rout, 609 Jitendra Pramanik, 783

K KamalKant Sharma, 105 Kariyappa, B. S., 619, 643 Khushal Singh, 431 Khushbu Sinha, 467, 477 King Chime Aliliele, 727 Kothuru Sai Mounika, 551 Krishna Reddy, V. V., 583 Kuldeep Kumar Yogi, 737

L Laharika Tutica, 445

M Manaswini Singh, 677 Manpreet Singh, 61 Manpreet Singh Manna, 181, 201, 297, 333 Meghana, P. V. S., 667 Mihir Narayan Mohanty, 759, 771 Mohammed Vazeer Ahmed, 619 Mohan Debarchan Mohanty, 771 Moyya Meghana, 551 Muskaan, 363

N Naik Kranti Ramkrishna, 539 Nakkala Ganesh, 531 Narasimha Rao Vajjhala, 717, 727 Narayan Patra, 783 Neeraj Sharma, 15 Neha Bharti, 127 Niharika Mohanty, 653 Nilotpal Bhunia, 467

P Paras Chawla, 127 Pooja Agrawal, 309 Pooja Verma, 1 Prabhishek Singh, 387 Pradeep Kumar Mallick, 445, 457, 653, 691 Pradeepta Kumar Sarangi, 363, 377 Prajwal, T., 395 Prithvik Adithya Ravindran, 403 Priyabrata Pattnayak, 771

R Rahul Manhas, 563 Rajarshi Chowdhury, 667 Rajeev Tripathi, 297 Rajesh Singh Bohra, 595 Rashmi Rekha Sahoo, 759 Ratnesh Kumar, 309 Reema Singh, 241, 267 Reena Yadav, 163 Rehana Perveen, 115 Renu Sharma, 39 Reva Devi Gundreddy, 551 Rupsa Rani Sahu, 417

S Sachin Kumar, 127 Sahil Verma, 309 Sai Teja, D., 583 Samarjeet Borah, 717, 727 Samiullah Sherzay, 115 Sandip Rakshit, 717, 727 Sanjeev Kumar Jain, 227 Santosh Kumar Dwivedi, 297 Sanya Raghuwanshi, 677 Sarbjeet Kaur, 93, 149, 563 Sasmita Rani Samanta, 691 Satish Kansal, 49 Saurabh Chandra Pandey, 595 Saxena, D. C., 39 Shreya Anand, 517 Shweta Rani, 69 Siddarth Sai Amruth Yetikuri, 573 Sivakumar, S., 345, 355 Snehal Sarangi, 609 Soham Chakraborty, 703 Sowmya, K. B., 395, 503, 573 Soumya Ranjan Nayak, 345, 355, 363, 377, 387 Sreedevi, E., 345, 355 Srestha Rath, 677 Sriman Srichandan, 653 Srishty Singh Chandrayan, 431 Subhendu Kumar Pani, 783 Subhra Rani Patra, 517 Sudeshna Baliarsingh, 759 Sudhir Kumar Sharma, 181 Sudhir Sharma, 15 Suguna Kumari, J., 583 Sunny Singh, 363 Sunny Vig, 105, 163 Surbhi Gupta, 149 Surender Singh, 77

Sushil Kakkar, 69 Sushruta Mishra, 457, 477, 491, 677, 703 Swati Samantaray, 417

T Tanuja Srivastava, 39 Tarun Saxena, 27 Teena Thakur, 139 Timothy Caputo, 573

U UmaHarikka, K., 583 Upendra Kumar Tiwari, 595

V Varikuti Anusha, 551 Vibha Srivastava, 333 Vibhor Kedawat, 531 Vidyanandini, S., 345, 355 Vinay Kumar Yadav, 595 Vineel, K. S. K., 445 Vinodani Katiyar, 319 Vipin Kumar Tyagi, 241, 267 Vishal Abrol, 573

Y Yashi Mishra, 457 Yoganand, H. R., 503 Yogesh Kumar, 27

Z Zabiullah Haidary, 93