Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications [1 ed.] 1119785375, 9781119785378

Written and edited by a group of renowned specialists in the field, this outstanding new volume addresses primary computational techniques for developing new technologies in terms of soft computing.


English | Pages: 368 [372] | Year: 2021


Table of Contents
Title page
Copyright
Preface
Acknowledgments
1 Certain Investigations on Different Mathematical Models in Machine Learning and Artificial Intelligence
1.1 Introduction
1.2 Mathematical Models of Classification Algorithm of Machine Learning
1.3 Mathematical Models and Covid-19
1.4 Conclusion
References
2 Edge Computing Optimization Using Mathematical Modeling, Deep Learning Models, and Evolutionary Algorithms
2.1 Introduction to Edge Computing and Research Challenges
2.2 Introduction for Computational Offloading in Edge Computing
2.3 Mathematical Model for Offloading
2.4 QoS and Optimization in Edge Computing
2.5 Deep Learning Mathematical Models for Edge Computing
2.6 Evolutionary Algorithm and Edge Computing
2.7 Conclusion
References
3 Mathematical Modelling of Cryptographic Approaches in Cloud Computing Scenario
3.1 Introduction to IoT
3.2 Data Computation Process
3.3 Data Partition Process
3.4 Data Encryption Process
3.5 Results and Discussions
3.6 Overview and Conclusion
References
4 An Exploration of Networking and Communication Methodologies for Security and Privacy Preservation in Edge Computing Platforms
Introduction
4.1 State-of-the-Art Edge Security and Privacy Preservation Protocols
4.2 Authentication and Trust Management in Edge Computing Paradigms
4.3 Key Management in Edge Computing Platforms
4.4 Secure Edge Computing in IoT Platforms
4.5 Secure Edge Computing Architectures Using Block Chain Technologies
4.6 Machine Learning Perspectives on Edge Security
4.7 Privacy Preservation in Edge Computing
4.8 Advances of On-Device Intelligence for Secured Data Transmission
4.9 Security and Privacy Preservation for Edge Intelligence in Beyond 5G Networks
4.10 Providing Cyber Security Using Network and Communication Protocols for Edge Computing Devices
4.11 Conclusion
References
5 Nature Inspired Algorithm for Placing Sensors in Structural Health Monitoring System - Mouth Brooding Fish Approach
5.1 Introduction
5.2 Structural Health Monitoring
5.3 Machine Learning
5.4 Approaches of ML in SHM
5.5 Mouth Brooding Fish Algorithm
5.6 Case Studies on OSP Using Mouth Brooding Fish Algorithms
5.7 Conclusions
References
6 Heat Source/Sink Effects on Convective Flow of a Newtonian Fluid Past an Inclined Vertical Plate in Conducting Field
6.1 Introduction
6.2 Mathematical Formulation and Physical Design
6.3 Discussion of Findings
6.4 Conclusion
References
7 Application of Fuzzy Differential Equations in Digital Images Via Fixed Point Techniques
7.1 Introduction
7.2 Preliminaries
7.3 Applications of Fixed-Point Techniques
7.4 An Application
7.5 Conclusion
References
8 The Convergence of Novel Deep Learning Approaches in Cybersecurity and Digital Forensics
8.1 Introduction
8.2 Digital Forensics
8.3 Biometric Analysis of Crime Scene Traces of Forensic Investigation
8.4 Forensic Data Analytics (FDA) for Risk Management
8.5 Forensic Data Subsets and Open-Source Intelligence for Cybersecurity
8.6 Recent Detection and Prevention Mechanisms for Ensuring Privacy and Security in Forensic Investigation
8.7 Adversarial Deep Learning in Cybersecurity and Privacy
8.8 Efficient Control of System-Environment Interactions Against Cyber Threats
8.9 Incident Response Applications of Digital Forensics
8.10 Deep Learning for Modeling Secure Interactions Between Systems
8.11 Recent Advancements in Internet of Things Forensics
References
9 Mathematical Models for Computer Vision in Cardiovascular Image Segmentation
9.1 Introduction
9.2 Cardiac Image Segmentation Using Deep Learning
9.3 Proposed Method
9.4 Algorithm Behaviors and Characteristics
9.5 Computed Tomography Cardiovascular Data
9.6 Performance Evaluation
9.7 Conclusion
References
10 Modeling of Diabetic Retinopathy Grading Using Deep Learning
10.1 Introduction
10.2 Related Works
10.3 Methodology
10.4 Dataset
10.5 Results and Discussion
10.6 Conclusion
References
11 Novel Deep-Learning Approaches for Future Computing Applications and Services
11.1 Introduction
11.2 Architecture
11.3 Multiple Applications of Deep Learning
11.4 Challenges
11.5 Conclusion and Future Aspects
References
12 Effects of Radiation Absorption and Aligned Magnetic Field on MHD Casson Fluid Past an Inclined Vertical Porous Plate in Porous Media
12.1 Introduction
12.2 Physical Configuration and Mathematical Formulation
12.3 Discussion of Result
12.4 Conclusion
References
13 Integrated Mathematical Modelling and Analysis of Paddy Crop Pest Detection Framework Using Convolutional Classifiers
13.1 Introduction
13.2 Literature Survey
13.3 Proposed System Model
13.4 Paddy Pest Database Model
13.5 Implementation and Results
13.6 Conclusion
References
14 A Novel Machine Learning Approach in Edge Analytics with Mathematical Modeling for IoT Test Optimization
14.1 Introduction: Background and Driving Forces
14.2 Objectives
14.3 Mathematical Model for IoT Test Optimization
14.4 Introduction to Internet of Things (IoT)
14.5 IoT Analytics
14.6 Survey on IoT Testing
14.7 Optimization of End-User Application Testing in IoT
14.8 Machine Learning in Edge Analytics for IoT Testing
14.9 Proposed IoT Operations Framework Using Machine Learning on the Edge
14.10 Expected Advantages and Challenges in Applying Machine Learning Techniques in End-User Application Testing on the Edge
14.11 Conclusion
References
Index
End User License Agreement

List of Illustrations

Chapter 1
Figure 1.1 Gartner hype cycle.
Figure 1.2 Two-state Markov model.
Figure 1.3 Distribution of data points and first obtained representative.
Figure 1.4 SVM.
Figure 1.5 Hyperplane in SVM.
Figure 1.6 Exposed, infection and recovery transmission in SEIR model. (a) Any s...
Figure 1.7 SIR model.

Chapter 2
Figure 2.1 Edge network.
Figure 2.2 Edge computing motivation, challenges and opportunities.
Figure 2.3 Offloading in IoT nodes.
Figure 2.4 Offloading schemes advantages.
Figure 2.5 Computation offloading flow.
Figure 2.6 Offloading techniques.
Figure 2.7 Markov chain-based stochastic process.
Figure 2.8 Offloading in vehicular node.

Chapter 3
Figure 3.1 Integrating IoT and cloud.
Figure 3.2 Star cubing algorithm.
Figure 3.3 Dividing and recreating shares.
Figure 3.4 Encryption and decryption.
Figure 3.5 Encryption and decryption process.
Figure 3.6 Step 1 - Generation of data using Arduino UNO board.
Figure 3.7 Step 2 - Generated data displayed.
Figure 3.8 Step 3 - Converting data into .csv file.
Figure 3.9 Step 4 - Output of Star cubing algorithm.
Figure 3.10 Step 5 - Partitioned secret shares.
Figure 3.11 Step 6 - The encrypted partitioned secret shares.
Figure 3.12 Step 7 - The encrypted shares stored in cloud.
Figure 3.13 Step 8 - The decrypted file shares.
Figure 3.14 Step 9 - The recovered original main file.

Chapter 4
Figure 4.1 Edge computing.
Figure 4.2 Pioneering edge security and privacy preservation.
Figure 4.3 2-way trust management scheme.
Figure 4.4 Encryption mechanisms in edge computing.
Figure 4.5 Dynamic key exchange mechanism.
Figure 4.6 Identity and key management security design pattern.
Figure 4.7 IoT combined with edge gateways.
Figure 4.8 Edge blockchain.
Figure 4.9 Privacy preservation model.
Figure 4.10 Beyond 5G capabilities.
Figure 4.11 Anomaly based IDS.

Chapter 5
Figure 5.1 Damage detection methods [12].
Figure 5.2 (a) and (b) shows the relationship between AI, ML and DL [10].
Figure 5.3 Taxonomy of ML [12].
Figure 5.4 Variety of ML techniques in SHM [12].
Figure 5.5 Model of shear building [13].
Figure 5.6 The discrepancy matrix [13].
Figure 5.7 Test structure [14].
Figure 5.8 ANN performance of the beam model [15].
Figure 5.9 Arrangement of sensors in beam model [16].
Figure 5.10 Methodology of Machine Learning [17].
Figure 5.11 CFRP sandwich structure and PZT distribution [17].
Figure 5.12 Result of four test [18].
Figure 5.13 Sydney Harbour bridge [19].
Figure 5.14 Sensor arrangements and joints of Harbour bridge [19].
Figure 5.15 Top view of bridge [20].
Figure 5.16 Observations [20].
Figure 5.17 Event tree [21].
Figure 5.18 Proposed method [22].
Figure 5.19 Data sets [23].
Figure 5.20 Test setup [25].
Figure 5.21 Arrangement of sensors in panels [26].
Figure 5.22 Similarity between system [27].
Figure 5.23 ROC curve [28].
Figure 5.24 Damages in flange [28].
Figure 5.25 Pictures of mouth brooding fish (a) The way of protection. (b) Left ...
Figure 5.26 Variable representation in cichlids [3].
Figure 5.27 Inspiration of MBF algorithm [29].
Figure 5.28 Application sectors of MBF system [29].
Figure 5.29 Fitness curve of IGA system [31].
Figure 5.30 Optimal domains [32].
Figure 5.31 OSP algorithm for DABC [33].
Figure 5.32 Convergence curves [1].
Figure 5.33 Reinforced concrete one-way slab [35].
Figure 5.34 Convergence curves [35].
Figure 5.35 Expression tree [36].
Figure 5.36 Comparison based on C1 and C3 parameter.
Figure 5.37 Sensor contribution and coverage.

Chapter 6
Figure 6.1 Technical problem design.
Figure 6.2 Velocity curves with several values of α.
Figure 6.3 Variation of "Gr" on velocity.
Figure 6.4 Velocity curves with several values of M.
Figure 6.5 Velocity curves with several values of Ko.
Figure 6.6 Impact of "Gm" on velocity.
Figure 6.7 Velocity curves with several values of Kr.
Figure 6.8 Velocity curves with several values of Sc.
Figure 6.9 Velocity curves with several values of S0.
Figure 6.10 Impact of "Q" on temperature.
Figure 6.11 Impact of "Pr" on temperature.
Figure 6.12 Effect of "Kr" on concentration profile.
Figure 6.13 Concentration curves with several values of Sc.
Figure 6.14 Concentration curves with several values of So.

Chapter 7
Figure 7.1 M0.
Figure 7.2 M1.

Chapter 8
Figure 8.1 Basic structure of deep learning.
Figure 8.2 Process involved in digital forensics.
Figure 8.3 Cybernetics loop.
Figure 8.4 Cybernetics relating method oriented and science oriented services [7...
Figure 8.5 Data acquisition system in biometric identity.
Figure 8.6 Deep learning process involved in fingerprint recognition.
Figure 8.7 Deep learning framework for forensic data analysis.
Figure 8.8 Digital forensic data reduction.
Figure 8.9 Deep auto encoders.
Figure 8.10 Restricted Boltzmann Machine.
Figure 8.11 Federated learning architecture.
Figure 8.12 IoT factors influencing computer forensics.

Chapter 9
Figure 9.1 Structure of the heart.
Figure 9.2 Cardiac image segmentation activities with various imaging types.
Figure 9.3 Shows 3 concentric rings, in the inner rings with a focus sapling and...
Figure 9.4 Shows three algorithms Weak edge actions (a) Circular scale image wit...
Figure 9.5 Memory vs Computation time.

Chapter 10
Figure 10.1 CNN architecture [16].
Figure 10.2 LSTM unit [17].
Figure 10.3 CNN-LSTM model used to classify DR.
Figure 10.4 Hybrid CNN-LSTM architecture.
Figure 10.5 Metrics of CNN-LSTM model.

Chapter 11
Figure 11.1 Machine learning vs. deep learning.
Figure 11.2 Block diagram of auto encoder.
Figure 11.3 Architecture of convolution neural network.
Figure 11.4 Various applications of deep learning.
Figure 11.5 Different big data parameters.
Figure 11.6 Machine learning in healthcare.

Chapter 12
Figure 12.1 Concern physical configuration [1].
Figure 12.2 Velocity outline for several values of β. R=0.5, γ = α π/6, Pr=0.71...
Figure 12.3 Velocity outline for several values of α. R=0.5, β=5, γ = π/6, Pr=0...
Figure 12.4 Velocity outlines for several values of γ. R=0.5, α = π/6, β=5, Sc=...
Figure 12.5 Velocity outline for several values of Gr. R=0.5, Pr=0.71, Ko=1, γ=...
Figure 12.6 Velocity outline for several values of Gm. R=0.5, M=1, Sc=0.6, Pr=0...
Figure 12.7 Velocity outline for several values of M. R=0.5, Sc=0.6, Pr=0.71, K...
Figure 12.8 Velocity outline for several values of Ko. R=0.5, M=1, Sc=0.6, Pr=0...
Figure 12.9 Temperature outline for several values of Q. R=0.5, M=1, Sc=0.6, Pr...
Figure 12.10 Temperature outline for several values of Pr. R=0.5, M=1, Sc=0.6, ...
Figure 12.11 Temperature outline for several values of R. Q=0.5, M=1, Sc=0.6, ...
Figure 12.12 Concentration outline for several values of Kr.
Figure 12.13 Concentration outline for several values of Sc.

Chapter 13
Figure 13.1 Paddy disease detection framework.
Figure 13.2 RGB color image of Gall midge in paddy crop.
Figure 13.3 Pre-processing of Gall midge insect.
Figure 13.4 Deep CNN model.
Figure 13.5 Deep CNN image denoising.
Figure 13.6 Gray scale orientation.
Figure 13.7 Histogram of inclination.
Figure 13.8 Pest spot identification.
Figure 13.9 (a) Input image; (b) Filtered image; (c) Boundary detection; (d) Rem...
Figure 13.10 Accuracy performance analysis.

Chapter 14
Figure 14.1 Proposed framework using machine learning on the edge.
Figure 14.2 Comparison of number of test cases.
Figure 14.3 Comparison of testing time.

List of Tables

Chapter 1
Table 1.1 Accuracy of classifiers.

Chapter 2
Table 2.1 Existing studies using deep learning in edge.

Chapter 4
Table 4.1 Protocols and its features.

Chapter 8
Table 8.1 Performance of biometric in forensic investigation.
Table 8.2 List of datasets for various biometric identity.

Chapter 9
Table 9.1 Acronym used in the chapter.
Table 9.2 Comparison of algorithms.

Chapter 10
Table 10.1 Data type for attributes of dataset.
Table 10.2 Statistical description of dataset.
Table 10.3 Correlation between attributes in dataset.
Table 10.4 Dataset sample.
Table 10.5 Comparison of the evaluation results.

Chapter 11
Table 11.1 Different architecture of deeper learning and its applications.

Chapter 12
Table 12.1 Skin friction (τ).
Table 12.2 Nusselt numeral (Nu).
Table 12.3 Sherwood numeral (Sh).

Chapter 13
Table 13.1 Sensors and their methodologies.
Table 13.2 Pest of rice – sample dataset.
Table 13.3 Gall midge – GLCM features.
Table 13.4 Classification accuracy for paddy insect with SIFT features.

Chapter 14
Table 14.1 Test cases generated for each of the scenarios.
Table 14.2 Comparison of end-user application testing at the edge with ML and ot...

Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106

Modern Mathematics in Computer Science
Series Editors: Hanaa Hachimi, PhD, G. Suseendran, PhD, and Noor Zaman, PhD

Scope: The idea of a series of books on modern math methods used in computer science was conceived to address the great demand for information about today's emerging computer science technologies. Modern math methods, including algorithms, encryptions, security, communication, machine learning, artificial intelligence and other math-based advanced concepts, form the backbone of these technologies and are crucial to them. Modern math plays a vital role in computing technologies by enhancing communication and computing, and extending security through different encryption algorithms. The ever-increasing demand for data storage capacity, from gigabytes to petabytes and higher, has higher requirements that need to be met, and modern math can match those requirements. Empirical studies, theoretical and numerical analysis, and novel research findings are included in this series. The information highlighted in this series encourages cross-fertilization of ideas concerning the application of modern math methods in computer science.

Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])

Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications

Edited by
T. Ananth Kumar, E. Golden Julie, Y. Harold Robinson, and S. M. Jaisakthi

This edition first published 2021 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2021 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

Wiley Global Headquarters
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Library of Congress Cataloging-in-Publication Data
ISBN 9781119785378

Cover image: Mathematical - Jevtic | Dreamstime.com
Cover design by Kris Hackerott
Set in size of 11pt and Minion Pro by Manila Typesetting Company, Makati, Philippines
Printed in the USA
10 9 8 7 6 5 4 3 2 1

Preface

This book addresses primary computational techniques for developing new technologies in terms of soft computing. It also highlights security, privacy, artificial intelligence, and practical approaches in all fields of science and technology. It highlights current research intended to advance not only mathematics but all possible areas of science and technology for research and development. As the book is focused on emerging concepts in machine learning and artificial intelligence algorithmic approaches and soft computing techniques, it will be used by researchers, academicians, data scientists and technology developers.

Chapter 1 deals with investigations on different mathematical models in machine learning and artificial intelligence. It starts with a discussion of knowledge-based expert systems, covering primitive representation and primitive inference. This is followed by problem-solving techniques and mathematical models of classification algorithms. The chapter discusses various mathematical algorithms such as the Markov chain model, automated simulation algorithms, KNN and SVM, and a comparative analysis of KNN and SVM. Finally, it describes the SEIR model for COVID-19.

Chapter 2 mainly discusses edge computing optimization using mathematical modelling. It covers edge computing architecture, challenges, motivation and research directions. This is followed by computational offloading in edge computing applications, its classification, and mathematical schemes such as Markov chain-based schemes, the hidden Markov model, QoS and optimization. The authors then discuss deep learning mathematical models and evolutionary algorithms in edge computing.

Chapter 3 discusses various cryptographic approaches used in cloud computing based on a mathematical model. The chapter starts with an introduction to IoT and the cloud, their integration and applications. This is followed by a discussion of the data computation process and data partitioning, including the Shamir Secret Share (SS) algorithm for data partitioning; data encryption with AES algorithms is discussed along with results.

Chapter 4 deals with security and privacy preservation in edge computing platforms. It covers key management schemes and secure IoT-based edge computing. To provide maximal security, the authors conduct an extensive exploration of the adoption of blockchain technologies across edge computing networks and of privacy preservation practices. Finally, they explore machine learning approaches and advancements of on-device intelligence in edge computing infrastructures.

Chapter 5 is about the Mouth Brooding Fish (MBF) approach for placing sensors in a structural health monitoring system. MBF can handle a wide scope of global optimization problems and can be applied to real-world problems since it is based on a real-life phenomenon. The combination of the MBF-ILS algorithm improves optimal sensor placement and hence reduces the number of sensors used. Due to the ILS algorithm, a proper gap is maintained between the global and local best solutions, which increases the convergence speed of the algorithm.

Chapter 6 mainly deals with the impact of heat source/sink effects on the convective flow of a fluid past an inclined vertical plate in a conducting field. Perturbation techniques are used to solve the fluid velocity, temperature, and concentration equations in terms of dimensionless parameters. The authors then discuss the mathematical formulation and physical design. Finally, they discuss the findings with graphs.

Chapter 7 focuses on the application of fuzzy differential equations in digital images via fixed-point techniques. It begins by discussing the basics of fuzzy logic methods, which seem promising and useful in drug research and design. Digital topology is a developing field that uses objects' topological properties to relate to 2D and 3D digital image features. The fixed-point theorem due to Banach is a valuable method in metric space theory. This chapter applies a well-known fixed-point theorem to study the nature of digital images, established by applying the concept of fuzzy numbers. Sufficient conditions are also determined to obtain the desired result.

Chapter 8 discusses novel deep learning approaches in cybersecurity and digital forensics. Digital forensics plays a vital role in solving cybercrime and identifying the proper solution for threats that occur in the network. The chapter includes biometric analysis of crime scene traces in forensic investigation. Forensic science holds a major position in all the informative and scientific domains due to its significance in social impacts. A variety of forensic data analytical methods have been proposed by various researchers, many concentrating on the domain of physics. Better security can be provided for forensic science through cryptographic algorithms, which perform the authentication verification process effectively.

Chapter 9 deals with mathematical models for computer vision in cardiovascular image segmentation. It gives a detailed review of the state of the art through practitioner processes and methods. Three popular imaging models offer a detailed summary of these DL strategies, providing a broad spectrum of current deep learning methods designed to classify various cardiac functional structures. Across the three methods, the authors highlight the future promise of deep learning-based segmentation approaches and the existing shortcomings that may impede broad practical implications. Deep learning-based approaches have made a massive impact on the segmentation of cardiac images but also raise awareness and understanding problems that demand significant contributions in this area.

Chapter 10 discusses modelling of diabetic retinopathy grading using deep learning. It contains a thorough introduction to diabetic retinopathy grading and a brief review of related work done by various authors. The authors show the application of deep learning to predict DR from retinal images. They propose a hybrid model and present a CNN-LSTM classifier for DR classification using the DRDC dataset. The proposed hybrid model comprises the CNN-LSTM network and has better accuracy. The approach is fast and obtained an accuracy of 98.56% on the DRDC dataset. The training and validation losses of the hybrid model are 0.04 and 0.06, respectively. The AUC is measured at around 99.9%, demonstrating the reliable performance of the hybrid system. The overall processing time of the proposed hybrid system is around seven minutes.

Chapter 11 describes novel deep-learning approaches for future computing applications and services. After their introduction, the authors discuss architecture, autoencoders, the Convolutional Neural Network (CNN), the hierarchy of layers and supervised learning as the important factors for building a successful learning program. The level of layers is important for proper monitoring, and the classification of data shows the advantages of maintaining the database. In the current and forthcoming period, deep learning could serve as a useful security application through facial recognition and mixed speech recognition. Furthermore, electronic image processing is a research discipline that can be used in several settings.

Chapter 12 gives a full analysis of the effects of radiation absorption and an aligned magnetic field on the free convective motion of a viscous, incompressible, electrically conducting fluid past an inclined plate through a porous medium, where the free stream velocity obeys an exponentially increasing small perturbation law. Skin friction is enhanced by increases in (Gr), (Gc), (Ko) and (α), and is reduced by the effects of (M) and (β). The Nusselt number rises with Ec, while under the influence of (Pr) and (Q) it decreases.

Chapter 13 describes paddy crop cultivation, one of the foremost economic activities of the southern provinces of India. Paddy crops are affected by pest attacks and the diseases caused by them. The authors discuss an efficient pest identification framework based on histogram-gradient feature processing, in which a deep CNN algorithm with SVM classification is proposed for improving paddy crop cultivation. The deep CNN algorithm is used for noise reduction in unclassified pest images to improve classification under a linear SVM. The identification of pests from the de-noised images is performed using a linear SVM classifier along with histogram variants embedded with gradient features. Descriptor features such as SIFT, SURF, and HOG are computed for all classifiers. The proposed methodology is shown to achieve improved classification when compared with all other existing algorithms.

Chapter 14 describes the term Edge Analytics, which can be defined as tools and algorithms deployed in the internal storage of IoT devices or IoT gateways that collect, process, and analyse data at the deployment site itself rather than sending the data to the cloud for analytics. The chapter presents novel end-user application testing equipped with ML on the edge of IoT devices, and a novel framework to achieve this is proposed. The case study is a real-time one and has been tested successfully using the test cases generated on the edge.

Acknowledgments

We are deeply indebted to Almighty God for giving us this opportunity; it was only possible with the grace of God. We extend our deep sense of gratitude to our son, Master H. Jubin, for moral support and encouragement at all stages of the successful completion of this book. We extend our deep sense of gratitude to our scholars and friends for writing the chapters on time and helping to complete this book. We sincerely thank our parents and family members for providing the necessary support. We express our sincere thanks to the management of Vellore Institute of Technology, Vellore, India and Anna University, Regional Campus, Tirunelveli. Finally, we would like to take this opportunity to specially thank the Wiley Scrivener publisher for their kind help, encouragement and moral support.
—Y. Harold Robinson, Ph.D.
—E. Golden Julie, Ph.D.

I would like to thank the Almighty for giving me enough mental strength and belief in completing this work successfully. I thank my friends and family members for their help and support. I express my sincere thanks to the management of IFET College of Engineering, Tamilnadu, India. I wish to express my deep sense of gratitude and thanks to the Wiley Scrivener publisher for their valuable suggestions and encouragement.
—T. Ananth Kumar, Ph.D.

I express my sincere thanks to the management of Vellore Institute of Technology, Vellore, India. Also, I would like to thank the Wiley Scrivener Press for giving me the opportunity to edit this book.
—S. M. Jaisakthi, Ph.D.

1 Certain Investigations on Different Mathematical Models in Machine Learning and Artificial Intelligence

Ms. Akshatha Y* and Dr. S Pravinth Raja†
Dept. of CSE, Presidency University, Bengaluru, Karnataka, India

Abstract
Artificial Intelligence (AI) is as wide as the other branches of computer science, encompassing computational methods, language analysis, programming systems, and hardware systems. Machine learning algorithms have brought great change to the field of artificial intelligence, supporting the power of human perception in a splendid way. The field has different sections, of which the most common segment is classification. Decision tree, logistic regression, naïve Bayes, support vector machine, boosted tree, random forest and k-nearest neighbour algorithms come under classification algorithms. The classification process requires some pre-defined method leading the process of choosing training data from the user's sample data. Advanced AI programming languages and methodologies can provide high-level frameworks for implementing numerical models and approaches, resulting in computational mechanics codes that are simpler, easier to write, and more adaptable. A range of heuristic search, planning, and geometric reasoning algorithms can provide efficient and comprehensive mechanisms for resolving problems such as shape description and transformation, and model representation based on constraints. Behind every algorithm there lies a strong mathematical model, often based on conditional probability. This article analyses those mathematical models and the logic behind different classification algorithms that allow users to build the training dataset based on which a computer can predict correctly.

Keywords: Artificial intelligence, classification, computation, machine learning

1.1 Introduction

The increasing availability of large computing power in recent years, together with the availability of big data and relevant developments in algorithms, has contributed to an exponential growth in Machine Learning (ML) applications for predictive tasks related to complex systems. In general, by utilizing an appropriately broad dataset of input features coupled to the corresponding predicted outputs, ML automatically constructs a model of the system under analysis. Although automatically learning data models is an extremely powerful approach, the generalization capability of ML models can easily be reduced in the case of complex system dynamics, i.e., the predictions can be incorrect if the model is extrapolated beyond the limits of the training data [1]. A collection of AI ideas and techniques has the potential to influence mathematical modelling research. In particular, knowledge-based systems and environments may include representations and associated problem-solving techniques that can be used in model generation and result analysis to encode domain knowledge and domain-specific strategies for a variety of ill-structured problems. Advanced AI programming languages and methodologies may include high-level frameworks for implementing numerical models and solutions, resulting in codes for computational mechanics that are cleaner, easier to write and more adaptable. A variety of heuristic search, scheduling, and geometric reasoning algorithms may provide efficient and comprehensive mechanisms for addressing issues such as shape definition and transformation, and model representation based on constraints. We study knowledge-based expert systems and problem-solving methods briefly before exploring the applications of AI in mathematical modelling.

1.1.1 Knowledge-Based Expert Systems

Knowledge-based systems are about a decade old as a distinctly separate AI research field. This decade of study has seen many changes in the emphasis placed on different elements of methodology. The most characteristic is the methodological transition: the emphasis has shifted from application areas and implementation tools to architectures and the unifying concepts underlying a range of problem-solving tasks. In the early days of knowledge-based systems, presentation and analysis were at two levels: 1) the primitive mechanisms of representation (rules, frames, etc.) and their related primitive mechanisms of inference (forward and backward chaining, inheritance, demon firing, etc.), and 2) the definition of the problem.

A level of description is needed that adequately characterizes what heuristic programmers do and know, a computational characterization of their competence that is independent of both the task domain and the programming-language implementation. Recent studies have described many characterizations of generic tasks that exist across a multitude of domains. Generic tasks represent the kind of knowledge they rely on and their control of problem solving. For expert system architectures, generic tasks constitute higher-level building blocks. Their characteristics form the basis for analysing the content of the knowledge base (completeness, accuracy, etc.) in order to explain system operations and limitations and to establish advanced tools for acquiring knowledge.

1.1.2 Problem-Solving Techniques

Several problem-solving tasks can be formulated as a state-space search. A state space is made up of all the domain states and a set of operators that transform one state into another. The states can best be thought of as nodes in a connected graph and the operators as edges. Some nodes are designated as goal nodes, and a problem is said to be solved when a path from an initial state to a goal state has been identified. State spaces can get very big, and different search methods are necessary to control the efficiency of the search [7].

A) Problem Reduction: This strategy transforms the problem space to make searching simpler. Examples of problem reduction include: (a) planning in an abstract space with macro operators before getting to the real operator details; (b) means-ends analysis, which tries to reason backwards from a known objective; and (c) sub-goaling.

B) Search Reduction: This approach involves demonstrating that the solution to the problem cannot depend on searching a certain node. There are several reasons why this may be true: (a) there can be no solution in this node's subtree; this approach has been referred to as "constraint satisfaction" and involves noting that the conditions achievable in the subtree below a node are insufficient to meet some minimum solution requirement; (b) the solution in another direction is superior to any possible solution in the subtree below this node; (c) the node has already been investigated elsewhere in the search.

C) Use of domain knowledge: One way to control the search is to attach additional information to non-goal nodes. This knowledge could take the form of a distance from a hypothetical target, operators that can usefully be applied to the node, possible positions for backtracking, similarities to other nodes that could be used to prune the search, or some general goodness of formation.

D) Adaptive searching techniques: These strategies use evaluation functions to expand the "next best" node, as shown in the sketch below. Certain algorithms (A*) expand the node most likely to contain the optimal solution. Others (B*) expand the node most likely to add the most information to the solution process.
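To make the adaptive-search idea in item D concrete, the following is a minimal, generic sketch of a best-first (A*-style) search, not an algorithm from this chapter; the `neighbors` function and heuristic `h` are hypothetical inputs supplied by the caller, and the toy graph is invented for illustration.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the 'next best' node by f(n) = g(n) + h(n), where g is
    the cost accumulated so far and h estimates the remaining cost."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:          # search reduction: skip re-explored nodes
            continue
        visited.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in visited:
                heapq.heappush(frontier,
                               (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None                      # goal unreachable from start

# Toy state space: nodes are labeled states, edges carry costs.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
print(a_star("S", "G", lambda n: graph[n], lambda n: 0))  # -> ['S', 'B', 'G']
```

With the heuristic set to zero the search degrades gracefully to uniform-cost search; a sharper, admissible h prunes more of the state space, which is exactly the point of the adaptive strategies above.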

1.2 Mathematical Models of Classification Algorithms of Machine Learning

Machine learning algorithms have brought sweeping change to the field of artificial intelligence, supporting the power of human perception in a splendid way. There are various types of algorithms, of which the most common segment is classification. Logistic regression, the naïve Bayes algorithm, decision trees, boosted trees, random forests, the k-nearest neighbour algorithm and support vector machines all fall under classification algorithms. The classification process involves some predefined method that leads the process of selecting training data from the sample data provided by the user. Decision-making is central for all users, and classification algorithms, as supervised learning, proceed from the decisions of the user. Machine learning (ML) and deep learning (DL) are popular right now, as there is a lot of fascinating work going on there, and for good reason. The hype makes it easy to forget about more tried and tested methods of mathematical modelling, but that does not make those methods any less valuable. We can look at the landscape in terms of the Gartner Hype Cycle: Figure 1.1 is a curve that first ramps up to a peak, then falls into a trough and climbs back up to a plateau. We think that ML, and DL in particular, is (or at least is very close to) the Peak of Inflated Expectations. Meanwhile, several other methods sit in the Plateau of Productivity. People understand them and use them all the time, but nobody talks about them. They are workhorses. They are still important, though, and we at Manifold understand that. You also have to deploy the full spectrum of available resources, well beyond ML, to build effective data products. What does that mean in practice?

Figure 1.1 Gartner hype cycle.

1.2.1 Tried and True Tools

Let's look at a few of these established tools that continue to be helpful: control theory, signal processing, and mathematical optimization. Control theory [2], which became its own discipline in the late 1950s, deals with real-time observation, inference, and control of a complex system's (potentially unobserved) states. It is especially useful when you understand the physics of a system, i.e., where the dynamics are not random. This is a big difference, because ML is really useful when we don't completely understand the underlying physics, such as retail demand behaviour or ad buying on the internet. Consider vehicular motion, which follows physical laws that we don't need to learn from an ML algorithm; we know how Newton's equations operate, and we can write down the differential equations that govern a vehicle's motion. Building ML models to learn this physics would burn reams of data and compute cycles to learn something that is already understood; it's wasteful. On the contrary, we can learn something important more quickly by putting the known physics in a state-space model and presenting the assumption in the language of control theory.

Signal processing, which deals with the representation and transformation of any signal, from time series to hyperspectral images, is another useful instrument. Classical signal-processing transformations, such as spectrograms and wavelet transforms, are also useful features to be used with ML techniques. These representations are currently used by many developments in speech ML as inputs to a deep neural network. At the same time, classical signal-processing filters, such as the Kalman filter, are also very effective first solutions to problems that, with 20% of the effort, get you 80% of the way to a solution. Furthermore, strategies such as this are much more interpretable than more advanced DL ones [9]. Mathematical optimization, finally, is concerned with finding optimal solutions to a given objective function. Classical applications include linear programming to optimise product allocation and nonlinear programming to optimise financial portfolio allocations. Advances in DL are partly due to advances in the underlying optimization techniques, such as stochastic gradient descent with momentum, that allow training to escape local minima. Mathematical optimization, as with the other methods, is very complementary to ML. These instruments do not work against each other; instead, they provide interesting ways of combining with one another.

1.2.2 Joining Together Old and New

Many active solutions across different fields combine the modern ML/DL environment with conventional mathematical modelling techniques. For instance, you can combine state-space modelling techniques with ML in a thermodynamic parameter estimation problem to infer unobserved system parameters. Or, you can combine ML-based forecasting of consumer behaviour with a broader mathematical optimization in a marketing coupon optimization problem to optimise the coupons sent. Manifold has extensive experience at the interface of signal processing and ML. Using signal processing for feature engineering and combining it with modern ML to identify temporal events based on these features is a common pattern we have deployed. Features inspired by multivariate time-series signal processing, such as the short-time Fourier transform (STFT), exponential moving averages, and edge finders, allow domain experts to quickly encode information into the modelling problem. Using ML helps the system to learn continuously from additional annotated data and improve its output over time. In the end, that's what's crucial to remember: all of these methods are complementary, and to build data products that solve real business challenges, you need to keep all of them in mind. An unnecessarily limited emphasis on ML misses the forest for the trees.
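As a concrete illustration of the feature-engineering pattern described above, the sketch below computes an exponential moving average and STFT-based spectral features from a noisy time series. This is a minimal sketch assuming NumPy and SciPy are available; the signal, sampling rate, and smoothing constant are invented for demonstration rather than taken from the text.

```python
import numpy as np
from scipy.signal import stft  # short-time Fourier transform

def exponential_moving_average(x, alpha=0.1):
    """Classic signal-processing feature: the EMA smooths a noisy
    series; alpha controls how quickly old samples decay."""
    ema = np.empty(len(x), dtype=float)
    ema[0] = x[0]
    for t in range(1, len(x)):
        ema[t] = alpha * x[t] + (1 - alpha) * ema[t - 1]
    return ema

# Toy time series: a slow sine wave buried in noise.
rng = np.random.default_rng(0)
fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.standard_normal(t.size)

smooth = exponential_moving_average(signal, alpha=0.05)
f, seg_times, Z = stft(signal, fs=fs, nperseg=128)  # spectrogram-style features
features = np.abs(Z).T  # one row of spectral magnitudes per time segment,
                        # ready to feed into a downstream ML model
```

The point of the pattern is visible in the last line: the classical transform does the heavy lifting of representation, and the ML model consumes the resulting feature matrix.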

1.2.3 Markov Chain Model

The Markov chain model is a statistical and mathematical structure with some hidden state configurations; it can be interpreted as a simple Bayesian network whose states are directly visible to the observer. This model makes a remarkable contribution to supervised and unsupervised simulation, reinforcement learning, and pattern recognition. Consider two states, A and B, with four transitions: when the system is in A, it can either remain in A or move to B, and when it is in B, it can either remain in B or move to A (Figure 1.2). In this way a transition matrix is created that defines the probability of each state transformation. The model can thus be built not only for two classes but for any number of classes [3].

Figure 1.2 Two-state Markov model.
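The transition structure just described can be captured directly as a row-stochastic matrix. The following minimal sketch, with made-up transition probabilities for the two-state model of Figure 1.2, simulates the chain and computes its long-run (stationary) distribution:

```python
import numpy as np

# Transition matrix for the two-state chain of Figure 1.2:
# row = current state, column = next state, rows sum to 1.
# The probabilities below are illustrative assumptions.
P = np.array([[0.7, 0.3],   # A -> A, A -> B
              [0.4, 0.6]])  # B -> A, B -> B

def simulate(P, steps, state=0, seed=0):
    """Sample a state sequence by repeatedly drawing the next
    state from the row of P for the current state."""
    rng = np.random.default_rng(seed)
    states = [state]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        states.append(state)
    return states

# The stationary distribution is the left eigenvector of P with
# eigenvalue 1, normalized to sum to 1.
evals, evecs = np.linalg.eig(P.T)
stationary = np.real(evecs[:, np.argmax(np.real(evals))])
stationary /= stationary.sum()
print(simulate(P, 10), stationary)  # stationary is about [0.571, 0.429]
```

For this particular matrix the chain spends roughly 4/7 of its time in state A and 3/7 in state B, regardless of where it starts, which is the kind of class-membership statistic the model-building discussion above refers to.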

1.2.4 Method for Automated Simulation of Dynamical Systems

The problem of performing automated dynamical system simulation, and how to solve it using AI techniques, will be considered. First, we consider some of the key ideas involved in the mathematical model simulation process. Then we explore how these concepts can be implemented as a software program.

a. Simulation of mathematical engineering models

The problem of performing an effective simulation for a specific engineering system can be better understood if we consider a particular mathematical model. Let us consider the model below:

X′ = σ(Y − X)
Y′ = rX − Y − XZ
Z′ = XY − bZ (1.1)

where X, Y, Z, σ, r, b ∈ R, and σ, r and b are three parameters, which are usually taken to be positive, regardless of their physical origins. The equations are also studied for different values of r in 0 < r < ∞. A few researchers have studied this mathematical model to some degree, but several questions remain to be answered regarding its very complicated dynamics for some ranges of parameter values [4]. For example, if we consider simulating eq. (1.1), the problem is choosing the appropriate parameter values for σ, r, b so that the model's interesting dynamic behaviour can be extracted. The problem is not a simple one, as we need to consider a three-dimensional search space (σ, r, b), and there are several possible dynamic behaviours for this model. In this case, the model is composed of three simultaneous differential equations, and the behaviours can range from simple periodic orbits to very complicated chaotic attractors. Once the parameter values are selected, the problem becomes a numerical one, since we then need to iterate an appropriate map to approximate the solutions numerically.

b. Method for automated simulation using AI

Performing automatic simulation for a specific engineering system is then the issue of determining the "best" set of parameter values BP for the mathematical model. Here is where AI techniques are really beneficial. The main concept in AI is that we can use these techniques to simulate human experts in a specific application domain. In this case, we use heuristics and statistical estimates derived from experts in this field to limit the computer program's search space. The algorithm for selecting the "best" set of parameter values can be defined as follows [9].

Step 1: Read the mathematical model M.
Step 2: Analyze the model M to "understand" its complexity.
Step 3: Generate a set of permissible parameter values AP using the model's initial "understanding". This collection is generated by heuristics (expressed in the knowledge base as rules) and by solving some mathematical relationships that will be described later.
Step 4: Select the "best" set of parameter values BP. This set is generated using heuristics (expressed as rules in the knowledge base).
Step 5: Execute the simulations by numerically solving the mathematical model equations. The various forms of complex behaviour are characterized at this stage.

The result of this implementation is a computer program that can be regarded as an intelligent tool for simulating dynamical engineering systems [5].
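As an illustration of the numerical iteration in Step 5, the sketch below integrates eq. (1.1) with a classical fourth-order Runge-Kutta map. This is a minimal sketch rather than the chapter's program; the parameter values σ = 10, r = 28, b = 8/3 are the classic choices known to produce a chaotic attractor, standing in here for a "best" parameter set BP.

```python
import numpy as np

def model(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of eq. (1.1)."""
    x, y, z = state
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step: the 'appropriate map'
    that is iterated once parameter values have been chosen."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])   # arbitrary initial condition
dt = 0.01
trajectory = []
for _ in range(5000):               # 50 time units of the attractor
    state = rk4_step(model, state, dt)
    trajectory.append(state.copy())
# r = 28 puts the system in the chaotic regime; small r instead
# gives the simple decaying/periodic behaviours mentioned above.
```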

1.2.5 kNN is a Case-Based Learning Method

kNN holds all the training data for classification. Being a lazy learning technique limits it in several applications, such as dynamic web mining over large repositories. One way to enhance its effectiveness is to find some representatives to stand in for the entire training data for classification, viz. building an inductive learning model from the training dataset and using this model (the representatives) for classification. There are several existing algorithms originally built to construct such a model, such as decision trees or neural networks, and their efficiency is one of the assessment benchmarks for the various algorithms. Since kNN is a simple but effective classification method, convincingly one of the most effective methods in text categorization for the Reuters corpus of newswire articles, it motivates us to create a model for kNN that boosts its efficiency while maintaining its classification accuracy [9]. Figure 1.3 shows that a frequency distribution in statistics is a representation that displays the number of observations within a given interval.
B. Support Vector Machine (SVM)
Support vector machines [7] (SVMs, also support vector networks) analyse data used for classification and regression analysis in machine learning. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm generates a model that assigns new examples to one category or the other. The picture below has two classes of data, red and blue. In kNN, for test data we measured its distance to all training samples and took the sample with the minimum distance. It takes a lot of time to calculate all the distances and a lot of memory to store all the training samples.

Figure 1.3 Distribution of data points and first obtained representative.
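The lazy, distance-based behaviour of kNN described above can be made concrete in a few lines. The following is a minimal sketch with invented toy data, not the chapter's implementation: every query recomputes the distance to all stored training samples, which is exactly the time and memory cost noted in the text.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Lazy kNN: no model is built ahead of time; each query
    computes the distance to every stored training sample."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance metric
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

# Toy 2-D data: two clusters ("red" = 0, "blue" = 1).
X = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([7.5, 8.5])))    # -> 1 (the "blue" cluster)
```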

Figure 1.4 SVM.
Our primary objective is to find a line that divides the data uniquely into two regions. Data that can be split in two with a straight line (or, in higher dimensions, a hyperplane) is called linearly separable. In Figure 1.4 above, intuitively, the line should pass as far as possible from all the points, since there can be noise in the incoming data that should not affect the accuracy of the classification. Thus, the farthest line offers more immunity against noise. Therefore, SVM finds the straight line (or hyperplane) with the largest minimum distance to the training samples [10].

1.2.6 Comparison of KNN and SVM

KNN classifies data based on a distance metric, while SVM requires a proper training phase. Because of the optimal design of the SVM, it is guaranteed that the separated data are separated optimally. KNN is commonly used as a multi-class classifier, while standard SVM is a binary classifier that assigns data to one of two classes. For a multiclass SVM, a One-vs-One or One-vs-All strategy is used. Figure 1.5 illustrates that hyperplanes are decision boundaries that help classify the data points: data points falling on either side of the hyperplane can be attributed to different classes, and the dimension of the hyperplane depends upon the number of features. In the One-vs-One method, n*(n-1)/2 SVMs must be trained: one SVM for each pair of classes. We feed the unknown pattern to every SVM, and the final decision on the class of the data is determined by a majority vote over all the SVM outcomes. This method is often used in multi-class classification. In the One-vs-All approach, we have to train as many SVMs as there are classes of unlabelled data. The unknown pattern is fed to the system and assigned to the class whose SVM gives the greatest decision value.

Figure 1.5 Hyperplane in SVM.

Although SVMs look more computationally intensive, once training is completed the model can be used to predict classes even when we come across new unlabelled data. In KNN, however, the distance metric is computed anew each time we come across a collection of new unlabelled data, so a distance metric must always be specified. There are two major cases for SVMs: the classes can be linearly or non-linearly separable. When the classes are non-linearly separable, we use kernel functions such as the Gaussian radial basis function or polynomials [11]. Therefore, for KNN classification we only have to set the parameter K and select an appropriate distance metric, while for SVMs, if the classes are not linearly separable, we have to select the regularisation parameter as well as the kernel parameters. Table 1.1 compares the two classifiers with respect to accuracy. In terms of accuracy, SVMs usually score higher than KNN, as shown [6-8]. When performing the Visual Studio tests after integrating the OpenCV libraries, the accuracy was found to be 94 percent for SVM [7] and 93 percent for KNN [6].

Table 1.1 Accuracy of classifiers.

Classifier   Training set   Test set   Accuracy rate (in %)
SVM          10,000         10,000     98.9
KNN          10,000         10,000     96.47

1.3 Mathematical Models and Covid-19

Compartmental models are divided into different groups depending on the nature of the disease and its pattern of spread:

1. Susceptible-Infected-Recovered (SIR): This model divides a population of size N into three epidemiological subpopulations: Susceptible, Infected and Recovered, represented respectively by the variables S, I and R. Birth, mortality and vaccination rates can also be added to the model. Individuals are born into the susceptible class; infected individuals transmit the disease to susceptible individuals and stay in the infected class for the infectious period; and individuals in the recovered class are believed to be immune for life.

2. Susceptible-Exposed-Infected-Recovered (SEIR): This model divides the population of size N into four epidemiological subpopulations: Susceptible, Exposed, Infected and Recovered, represented respectively by the variables S, E, I and R. Birth rate, death rate and vaccination rate, if known or applicable, can also be considered. This is an appropriate model for a disease with a substantial post-infection incubation period during which an infected person is not yet infectious.

3. Susceptible-Infected-Susceptible (SIS): Some diseases, such as the common cold, do not confer long-lasting immunity. Upon recovery from such infections, individuals gain no immunity and become susceptible again.

4. Susceptible-Exposed-Infected-Susceptible (SEIS): This model can be used when there is no immunity to the pathogen (implying that the R class would be empty). Tuberculosis is an example of this model [12].

1.3.1 SEIR Model (Susceptible-Exposed-Infectious-Removed)

To estimate the number of infected individuals, the traditional Susceptible-Exposed-Infected-Recovered (SEIR) model is used. Viruses or bacteria cause infectious diseases such as rubella, mumps, measles and pertussis. Transmission of these diseases involves an incubation period: a period during which people who have been attacked by viruses or bacteria have not yet become able to spread the disease. The SEIR model can reflect the spread of illness by accounting for the incubation time. Immigration also affects the spread of illness, since refugees may bring a disease from their regions to other countries; for this reason, an SEIR model with immigration should be considered. Here we define the SEIR model with immigration, determine an equilibrium point and state the stability of the equilibrium. The model is then applied to the spread of herpes [13]. Mathematical models have become valuable tools in analysing the spread and control of infectious diseases, and most research on outbreak models also addresses the disease's persistence and extinction. The majority of models in mathematical epidemiology are compartmental. For diseases with a longer incubation period, some traditional studies divide a population into susceptible, exposed, infectious and recovered compartments; the model is then called the SEIR model. Figure 1.6 shows the exposure, infection and recovery transmissions in the SEIR model. Figure 1.7 shows the SIR model, which consists of the three compartments described below.

Figure 1.6 Exposed, infection and recovery transmission in the SEIR model: (a) a susceptible individual becomes exposed at a given rate per contact; (b) infection occurs at a given rate per contact; and (c) recovery occurs at a given rate for the infected population.
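For reference, the SEIR dynamics sketched in Figure 1.6 can be written as the standard ODE system below. The rate symbols β (transmission per contact), σ (incubation) and γ (recovery) are our illustrative notation and need not match [13].

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta \frac{S I}{N}, &
\frac{dE}{dt} &= \beta \frac{S I}{N} - \sigma E, \\
\frac{dI}{dt} &= \sigma E - \gamma I, &
\frac{dR}{dt} &= \gamma I,
\end{aligned}
\qquad N = S + E + I + R .
```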

Figure 1.7 SIR model.

S: The number of susceptible individuals. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease and transitions to the infectious compartment.

I: The number of infectious individuals. These are individuals who have been infected and are capable of infecting susceptible individuals.

R: The number of removed (and immune) or deceased individuals. These are individuals who have been infected and have either recovered from the disease and entered the removed compartment, or died. It is assumed that the number of deaths is negligible with respect to the total population. This compartment may also be called "recovered" or "resistant".

1.3.2 SIR Model (Susceptible-Infected-Recovered)

Outbreak prediction has become highly complicated for the scientific community due to the worldwide pandemic of COVID-19. To produce accurate forecasts, new epidemiological mathematical models of spread appear daily. In this analysis, the classical susceptible-infected-recovered (SIR) modelling method was used to analyse the various parameters of this model for India, and the method was studied in the light of the various governmental lockdown initiatives in India [14]. The parameters of the SIR model for India were estimated using an actual data set. Fundamental compartment-based models, as listed below, were used for the epidemic mathematical model:

1. SI model (Susceptible -> Infected),
2. SIS model (Susceptible -> Infected -> Susceptible), and
3. SIR model (Susceptible -> Infected -> Recovered/Removed).

The standard SIR model is basically a system of differential equations over populations classified as susceptible (if previously unexposed to the pandemic disease), infected (if currently overcome by the pandemic disease), and removed (either by death or recovery) [15]. A numerical sketch of this system is given below.
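As a minimal numerical sketch of this system of differential equations, the Python snippet below integrates the standard SIR equations with SciPy; β, γ and the initial state are illustrative values, not parameters fitted to the Indian data in [14, 15].

```python
# Numerical solution of the standard SIR equations referenced above.
# SciPy is an assumed dependency; beta, gamma and the initial state
# are illustrative values, not fitted COVID-19 parameters.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N              # susceptibles becoming infected
    dI = beta * S * I / N - gamma * I   # new infections minus removals
    dR = gamma * I                      # removed (recovered or dead)
    return [dS, dI, dR]

N = 1_000_000
y0 = [N - 10, 10, 0]                    # 10 initial cases
sol = solve_ivp(sir, (0, 180), y0, args=(0.3, 0.1, N),
                t_eval=np.linspace(0, 180, 181))
print(f"peak infections: {sol.y[1].max():.0f} "
      f"on day {sol.t[sol.y[1].argmax()]:.0f}")
```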

1.4 Conclusion

The goal of this chapter was to present some of the principles and methodologies of machine learning and AI and to explore some of their possible applications in different aspects of computational mechanics. The methodologies outlined herein are maturing rapidly, and many new applications are likely to be found in computational mechanics. Undoubtedly, AI methodologies will become, to the same degree as today's "traditional" algorithmic tools, a natural and indispensable part of the set of computer-based engineering resources. These instruments will then greatly elevate the role of computers in engineering, from the current focus on calculation to the much wider field of reasoning.

References

1. Min Cai, Huan-Shui Zhang and Wei Wang, "A power control algorithm based on SIR with multiplicative stochastic noise," Proceedings of 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.04EX826), Shanghai, China, 2004, pp. 2743-2746 vol. 5, doi: 10.1109/ICMLC.2004.1378324.

2. M. Muselli, A. Bertoni, M. Frasca, A. Beghini, F. Ruffino and G. Valentini, "A Mathematical Model for the Validation of Gene Selection Methods," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 8, no. 5, pp. 1385-1392, Sept.-Oct. 2011, doi: 10.1109/TCBB.2010.83.

3. Dutta, Nabanita, Subramaniam, Umashankar and Sanjeevikumar, P. (2018). Mathematical models of classification algorithm of Machine learning. doi: 10.5339/qproc.2019.imat3e2018.3.

4. Gongde Guo, Hui Wang, David Bell, Yaxin Bi and Kieran Greer, "KNN Model-Based Approach in Classification," School of Computing and Mathematics, University of Ulster, Newtownabbey, BT37 0QB, Northern Ireland, UK.

5. S. Ji, L. T. Watson and L. Carin, "Semi-supervised Learning of Hidden Markov Models via a Homotopy Method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 275-287, Feb. 2009, doi: 10.1109/TPAMI.2008.71.

6. Pavithra, M., Rajmohan, R., Kumar, T. A., & Ramya, R. (2021). Prediction and Classification of Breast Cancer Using Discriminative Learning Models and Techniques. Machine Vision Inspection Systems, Volume 2: Machine Learning-Based Approaches, 241-262.

7. Sanghamitra Mohanty, Himadri Nandini Das Bebartta, "Performance Comparison of SVM and K-NN for Oriya Character Recognition," (IJACSA) International Journal of Advanced Computer Science and Applications, Special Issue on Image Processing and Analysis, 2011, pp. 112-115.

8. D. Bouchoffra and F. Ykhlef, "Mathematical models for machine learning and pattern recognition," 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), Algiers, 2013, pp. 27-30, doi: 10.1109/WoSSPA.2013.6602331.

9. R. Veena, S. Fauziah, S. Mathew, I. Petra and J. Hazra, "Data driven models for understanding the wind farm wake propagation pattern," 2016 International Conference on Cogeneration, Small Power Plants and District Energy (ICUE), Bangkok, 2016, pp. 1-5, doi: 10.1109/COGEN.2016.7728969.

10. H. J. Vishnukumar, B. Butting, C. Müller and E. Sax, "Machine learning and deep neural network - Artificial intelligence core for lab and real-world test and validation for ADAS and autonomous vehicles: AI for efficient and quality test and validation," 2017 Intelligent Systems Conference (IntelliSys), London, 2017, pp. 714-721, doi: 10.1109/IntelliSys.2017.8324372.

11. A. Salaün, Y. Petetin and F. Desbouvries, "Comparing the Modeling Powers of RNN and HMM," 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 2019, pp. 1496-1499, doi: 10.1109/ICMLA.2019.00246.

12. S. A. Selvi, T. A. Kumar, R. S. Rajesh and M. A. T. Ajisha, "An Efficient Communication Scheme for Wi-Li-Fi Network Framework," 2019 Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 2019, pp. 697-701, doi: 10.1109/I-SMAC47947.2019.9032650.

13. A. S. Fokas, N. Dikaios and G. A. Kastis, "Mathematical models and deep learning for predicting the number of individuals reported to be infected with SARS-CoV-2."

14. M. A. Bahloul, A. Chahid and T.-M. Laleg-Kirati, "Fractional-Order SEIQRDP Model for Simulating the Dynamics of COVID-19 Epidemic," IEEE Open Journal of Engineering in Medicine and Biology, vol. 1, pp. 249-256, 2020, doi: 10.1109/OJEMB.2020.3019758.

15. Y. Yang, W. Yu and D. Chen, "Prediction of COVID-19 spread via LSTM and the deterministic SEIR model," 2020 39th Chinese Control Conference (CCC), Shenyang, China, 2020, pp. 782-785, doi: 10.23919/CCC50068.2020.9189012.

*Corresponding author: [email protected]
†Corresponding author: [email protected]

2 Edge Computing Optimization Using Mathematical Modeling, Deep Learning Models, and Evolutionary Algorithms

P. Vijayakumar*, Prithiviraj Rajalingam and S. V. K. R. Rajeswari

ECE Department, SRMIST, Kattankulathur, Chennai, India

Abstract

The rapid growth of the Internet of Things (IoT) with advanced applications requires high-speed and real-time computing power. Edge computing brings the computation of data closer to the machine where the data is collected. This decreases latency, bandwidth usage, and server resources and cost. The significant challenges in edge computing are 1) optimal offloading decision making, 2) resource allocation, and 3) meeting Quality-of-Service (QoS) and Quality-of-Experience (QoE) targets. This chapter addresses these challenges using mathematical models, deep learning, and evolutionary algorithms. A deep learning algorithm solves a highly complex problem by developing a model from training data or from observation (reinforcement learning); the deep learning approach converts an edge computing optimization problem into a classification, regression, or intelligent decision-making problem and solves it. An evolutionary algorithm finds an optimum solution for a given problem through a natural process of evolution, which can be used to solve multi-objective optimization problems in edge computing. Evolutionary algorithms such as the genetic algorithm and ant colony optimization can solve several edge computing research problems, such as task scheduling.

Keywords: Edge computing, deep learning, machine learning, evolutionary algorithm

2.1 Introduction to Edge Computing and Research Challenges

Edge computing is a new distributed computing paradigm in which computation and data storage are placed close to the location where the data is produced, before involving the cloud. In simpler terms, edge computing works with smaller, real-time data, whereas the cloud works with big data. Edge computing enables quick response times and also saves bandwidth [1, 2]. In the use case of cloud-based augmented reality applications, latency and processing limitations are key challenges for a pure cloud system because of the geographical distance from the infrastructure. Edge computing comes into the picture as an advancement of cloud gaming, since it allows data to travel only a short distance, reducing lag times and latency [3]. Edge computing mostly plays the role of helping cloud-based IoT systems to provide computational service. A short recap of the cloud-based IoT system is provided below.

2.1.1 Cloud-Based IoT and Need of Edge Computing

The Internet of Things (IoT) plays a vital role in daily human life by connecting all devices through the internet, and it works ingeniously. Day by day, the IoT becomes more crucial in every domain [4]. For example, the IoT provides excellent service to medical applications: patient status, heart rate, blood pressure and sugar level can be tracked and monitored, and if a patient goes into a critical or unstable condition, the doctor can provide solutions based on the report generated by the IoT application [6]. IoT data can also be used to study different patients' lifestyles and activities in order to prevent them from reaching a critical situation. The IoT has therefore created opportunities to provide brilliant solutions rich in prediction and intelligence. IoT devices function correctly because of several supporting technologies, notably cloud computing, which gives IoT devices many advantages, including storage infrastructure, real-time data processing, and high-performance computing. This makes cloud computing a revolutionary enabler for IoT devices, providing smart, self-predicted data [6]. With the evolution of IoT devices, cloud providers take immense advantage of providing the communication and transfer of data between IoT devices. The result is the Cloud of Things, which connects cloud computing and IoT devices. Various cloud providers have built large data centers across the world with the capacity to serve users globally. Hence there is a physical separation between the cloud centers and the users, which creates delays and latency. Another distance-related disadvantage of cloud services is that location information can never be accurate; limited information about users and their mobility is a further drawback of cloud computing. In augmented reality applications such as cloud gaming, as discussed, and in real-time tracking applications such as vehicular systems, a different computing method is required, and this is achieved by edge computing [7].

Edge computing enables the deployment of cloud computing capabilities at the edge of the network. Infrastructure providers own the edge data centers and implement multi-tenant virtualization, and third-party customers, end users and infrastructure providers can all access these data centers. Edge computing services are automated, reducing dependence on a continuous cloud connection, which opens the possibility of a hierarchical, multi-tiered architecture. Edge computing also leads to an open ecosystem in which one trusted domain cooperates with other trusted domains and a multitude of customers are served. Though there are multiple edge paradigms with small differences, there are also similarities [8]. An outline of the edge architecture is provided below, before dealing with the challenges and their solutions using mathematical models (the Markov chain model and game theory), deep learning and evolutionary algorithms.

2.1.2 Edge Architecture

All edge activities are controlled by a two-tier architecture, so it works properly in time-sensitive systems. The design of two-tier structures focuses primarily on efficiency, application management, and edge management. The edge network aims to reduce data strain by moving computation away from data centers towards the network's edge. To provide services for cloud computing, an edge computing network is created by using smart objects and network gateways and by decentralizing the data centers. The edge network consists of three layers, of which edge devices form the first. Edge devices are the data providers, including user gadgets such as sensors, machines, smartphones and wearables, as shown in Figure 2.1; they are responsible for data collection and delivery [7]. Edge nodes, the second layer, are the routers, switches, and small/macro base stations responsible for computing operations, data processing and data routing [9]. The third layer is the cloud, which consists of data centers, servers, databases and storage, and is responsible for data analytics using artificial intelligence, visualization, and other high computational requirements [9].

Figure 2.1 Edge network.

In the three-tier architecture, Edge Computing (EC) acts as a complement to cloud computing. It is appropriate for applications with time-sensitive functions as well as compute-intensive activities [10]. Time-critical operations are managed at the edge, and compute-consolidation tasks are performed in the cloud. The three-tier structure focuses primarily on the interface between the cloud and the end, as well as on the assigned function. Edge computing is part of the cloud-based IoT system and improves the performance of the IoT system. To understand the need for edge computing, we first need to understand how cloud-based IoT functions. Until now, computing has been backed by a centralized network with cloud computing, which allows users to consume large amounts of resources at any time and in different places on a pay-per-use basis. In the cloud computing concept, there is frequent communication between the centralized server and user devices such as smartphones, smartwatches, etc. [5]. The physical distance between the end device and the cloud server is typically vast, which increases response time. In cloud computing, there are many challenges in providing uninterrupted service over a good communication link to the end user, especially when the distance between the cloud server and the end device is large and the device is mobile. For example, a person with a mobile phone who moves from one place to another requires a large number of cloud servers with short response times, and depends on the stability of the cloud nodes. This has triggered a lot of research that moves beyond the cloud towards the edge of the network, as shown in Figure 2.1 [8]. The edge computing system overcomes the above-noted challenges of a cloud-based IoT system. As shown in Figure 2.1, edge network nodes are placed so that they are close to the end device, performing part or all of the cloud server's computation by offloading the load from it. Since the computation is done locally, the system's performance can be improved with significantly shorter response times. An extensive literature survey on edge computing with different applications has been done to improve performance in cloud-based IoT systems.

2.1.3 Edge Computing Motivation, Challenges and Opportunities

As discussed in the preceding section, the aim of edge computation is to move the data strain from the cloud towards the edge of the network. Hence a number of applications can be deployed and run at the edge of a network, and the implementation of edge computing has great potential in many industries.

i) Motivation for edge computing: The motivation for edge computing is drawn from its speed of communication. Facial recognition is one application from which it can be inferred that edge computing excels where short response times are required [1]. As computing is done closer to the source, visual guidance applications like Google Maps create better user experiences. The cloud system becomes less loaded thanks to the offloading offered by edge computing, and thus more energy efficient. Communication between the initial layer, i.e., smart homes, and the cloud drains batteries; the edge is therefore an alternative that can overcome the problem [12]. The explosion of network traffic and data is a further incentive to adopt edge computing, and several more strengths of edge computing are listed in Figure 2.2: achieving cloud decentralization and low-latency computing; overcoming the resource constraints of front-end devices through sustainable energy consumption; and enhancing and supporting smart computing techniques.

ii) Edge computing challenges: In order to implement edge computing, it is vital to take its challenges into account.

Privacy and Security - As edge computing works with various nodes, traveling to and from different networks requires special encryption mechanisms for security (Figure 2.2). With resource constraints being one of the properties of edge computing, suitable security methods are required [1].

Optimization metrics - Edge computing is a distributed paradigm. The workload has to be deployed effectively across various data centres depending on bandwidth, latency, energy and cost, in order to reduce the response delay [12]. Guaranteeing uncompromised QoS is another challenge of edge computing (Figure 2.2).

Figure 2.2 Edge computing motivation, challenges and opportunities.

Programmability - Unlike cloud programming, the edge is a heterogeneous platform, so the runtimes of edge devices differ. The computing stream that determines the computing flow and its efficiency must be addressed when synchronizing the devices [12]. Discovering the edge nodes and providing general-purpose computing are further edge-node challenges mentioned in Figure 2.2.

Naming - As the edge network works with various networking devices, a naming scheme has to be followed to avoid confusion when identifying and programming the different edge networks. Naming is also significant for security, protection, and mobility of the devices [12].

Data Abstraction - The edge network collects data from various data generators, so a lot of inessential data and noise get collected as well. The data is sent to the next layer (the abstraction layer), where data abstraction takes place: unwanted data is filtered out and the remainder is sent to the upper layer for further processing. It is important to filter data effectively, since an application may not work if useful data is mistaken for noise and filtered out [12]. Partitioning a task and applying an offloading mechanism is another challenge when considering edge computing (Figure 2.2). Beyond the many challenges described in Figure 2.2, virtual and physical security also have to be taken into account before adopting edge computing.

iii) Edge opportunities: In light of this motivation and these challenges, edge computing promises a seamless experience for both consumers and businesses. Many opportunities are listed in Figure 2.2, where standards, benchmarking and the marketplace are edge opportunities. The possibility of creating lightweight model frameworks and algorithms at the edge is another opportunity, as are micro-operating systems with virtualization.

Field and Industrial IoT - Power, transportation and manufacturing are the leading contenders for the edge. Industrial devices, including HVAC systems, motors, oil turbines, RFIDs in the supply chain, etc., that use information for security management, predictive maintenance, performance and usage tracking, demand forecasting and so on, can see advances through edge computing [13].

Smart Cities Architecture - Municipalities providing faster urban services, traffic management, green energy, public safety and intelligent bus stops are a few smart-city applications that may benefit from edge computing [11, 13].

Retail and Hospitality - Customer care can be refined at the edge by analyzing customer sentiment using kiosks or point-of-sale terminals in retail and hospitality; this enhances the customer experience [13].

Connected Vehicles - Vehicular systems used for tracking and navigation, e.g., police vehicles, predictive maintenance and the influence of dynamic pricing, are a use case of edge computing [13].

Facial Recognition - Edge computing allows quicker responses in a short time, reducing fraud in banking, institutions and various organizations [13].

iv) Research directions: With the discussed challenges taken into account, the lack of standardization should be considered while developing an edge computing system. Furthermore, research needs to identify and address new challenges such as context awareness, to further define the critical exploitation and deployment of edge techniques in cloud computing, IoT and networks.

2.2 Introduction for Computational Offloading in Edge Computing

Computational offloading is the migration of computational tasks to a different processor or an external device such as a cluster, grid, cloud, base station, or access point. Computation offloading helps promote longer battery lifetime when running on multiple devices, and single users benefit from the enhancement of insufficient local computation [14]. Figure 2.3 shows the offloading process at the IoT nodes. When tasks are time-critical and computationally complex, the IoT nodes offload the task computation to nearby edge nodes so that the task executes with less delay. Every IoT node has to decide whether to offload a task to an edge node or compute it locally, based on the task's time criticality. This offloading-or-local-computation decision can be made using a deep learning algorithm, a Markov chain model, or a game theory-based model (a minimal rule-of-thumb sketch of the decision follows Figure 2.3). Non-time-critical tasks of high computational complexity can be transferred to the cloud computing platform, where they can be executed.

Figure 2.3 Offloading in IoT nodes.
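The snippet below is a minimal rule-of-thumb sketch of the offload-or-compute-locally decision just described, assuming the simple "transfer time plus execution time" latency model; all constants (CPU speeds, uplink bandwidth) are illustrative assumptions rather than measured values.

```python
# Minimal offload-or-not decision sketch using the simple latency
# model "transfer time + execution time". All constants are
# illustrative assumptions, not measured values.
def choose_target(cycles, data_bits, deadline_s,
                  f_local=1e9,      # local CPU, cycles/s
                  f_edge=8e9,       # edge server CPU, cycles/s
                  bw_edge=20e6):    # uplink to the edge, bits/s
    t_local = cycles / f_local
    t_edge = data_bits / bw_edge + cycles / f_edge
    # Prefer whichever meets the deadline; fall back to the faster one.
    if t_local <= deadline_s and t_local <= t_edge:
        return "local", t_local
    if t_edge <= deadline_s:
        return "edge", t_edge
    return ("local", t_local) if t_local < t_edge else ("edge", t_edge)

# A 2-gigacycle task with 1 MB of input and a 1 s deadline:
# t_local = 2 s misses the deadline, t_edge = 0.4 + 0.25 = 0.65 s meets it.
print(choose_target(2e9, 8e6, 1.0))   # -> ('edge', 0.65)
```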

2.2.1 Need of Computational Offloading and Its Benefit

Computational offloading is the process of offloading computation tasks to the edge servers rather than the cloud.

i) Advantages of computational offloading: There are various advantages of offloading, depicted in Figure 2.4. The benefits of performing computational tasks at the edge are [14]:

Energy consumption - When computation tasks are performed at the edge servers instead of in the cloud, less energy is consumed, making the system energy efficient [14] and increasing battery lifetime (Figure 2.4).

Figure 2.4 Offloading schemes advantages.

Complex processing - Because computation tasks are performed at the edge servers, complex computation at the end devices is avoided, which saves the end devices' battery life [14].

Scalability - Thanks to the offloading process at the edge servers, mobile applications, e.g., mobile gaming, mobile healthcare, etc., can run on the end devices (Figure 2.4). End devices need not perform complex computations, as these run at the edge servers [14].

Performance - Offloading tasks to the edge servers yields excellent performance (Figure 2.4), resources, flexibility and cost-effectiveness.

Cost Effective - Another advantage that can be inferred from the figure is that these methods and the ease of computational tasks help reduce the overall cost of the system (Figure 2.4).

Security and Privacy - The overall security concerning privacy (Figure 2.4) is also increased.

New application support - Offloading makes applications easier to run (Figure 2.4).

ii) Applications of computational offloading [14]: Thanks to the advantages of the offloading technique, along with its low latency and high bandwidth, the applications that thrive on offloading techniques are:

Intelligent Transportation Systems - Vehicular systems require ultra-low latency and reliability; thus applications such as road safety services, autonomous driving, and road traffic optimization benefit from offloading techniques.

Serious gaming - Applications in education, healthcare, entertainment, simulation and training focus on low latency requirements; offloading helps satisfy them, with round-trip times down to 1 ms.

Robotics and Telepresence - As response times are needed much more quickly, in terms of milliseconds, applications like earthquake relief, emergency rescue, etc., benefit from offloading techniques.

AR/VR/MR - Augmented reality, virtual reality and mixed reality are services that benefit from the server's low-latency advantage under offloading.

Though the applications are not limited to those described, with the emergence of 5G, offloaded computation can deliver even more efficiency in low latency and high bandwidth.

2.2.2 Computation Offloading Mechanisms

Computational offloading is one of the primary processes in an edge computing environment for reducing delay and improving response time. There are various approaches to computational offloading, as shown in Figure 2.5, based on offloading flow and offloading scenario. Based on offloading goals, computation offloading is divided into two categories. The first category, offloading flow, is divided into four sub-categories: offloading from ED to EC, offloading from EC to cloud computing, offloading from one server to another, and hierarchical offloading. The second category, offloading scenario, is based on one-to-one, one-to-many, many-to-one and many-to-many scenarios [15].

Figure 2.5 Computation offloading flow.

i) Classification based on offloading flow

a) From ED to EC - This first category treats ED and EC together as a whole system. Computational tasks are either executed locally by the ED or offloaded to the EC [15].

b) From EC to CC - The ED generally sends the task to the EC. The EC analyzes the task and decides whether it can perform it; if not, it sends the task to the cloud for completion. This is the second category of offloading flow [15].

c) From EC to other ECs - In this third category, many ECs combine to form an edge system. When an EC receives a task, the EC decides whether to perform the task itself or to offload it to another EC server in the same system, which has a direct impact on offloading performance. To optimize execution delay and power consumption, cluster formation is carried out in a single scenario [15].

d) Hierarchical offloading - The fourth category of offloading flow works in a tiered/hierarchical system. A single task can be offloaded to the local EC, the cloud, or several or a few of the tiers [15].

ii) Classification based on offloading scenarios

a) One-to-one - This is the first offloading scenario. To optimize offloading performance, one entity decides whether or not to offload a particular computational task. This can also give rise to many-to-one offloading, since one entity (ED) can run multiple applications that offload data separately [15].

b) One-to-many - Many EC servers are available in the one-to-many offloading scheme. The ED makes the offloading decision, which includes whether to offload and to which server. This is the second offloading scenario [15].

c) Many-to-one - As the name suggests, in this third offloading scenario many EDs offload their tasks to one server. To optimize the whole system, the decision is made across all the entities; the single server is responsible for making the decision for all EDs [15].

d) Many-to-many - The fourth offloading scenario, many-to-many, is the most complex: it is the combination of one-to-many and many-to-one offloading. Information from both EC and ED is required for decision making in a centralized offloading model in the many-to-many scenario. Due to the complexity of solving the model, distributed offloading methods are much needed [15].

2.2.2.1 Offloading Techniques

A pictorial representation of offloading techniques is given in Figure 2.6. They are classified by the offloading mode selected, the channel model used, the computation method used, and the energy harvesting method chosen [16]. These are the aspects to be considered when solving a computational model. There are four technique dimensions in computation offloading [15].

i) Offloading model - If a task is partitionable, two offloading modes are distinguished: binary offloading, where the whole task is offloaded, and partial offloading, where only part of the task is offloaded.

ii) Channel model - The channel model is divided into an interference model and an interference-free model, depending on the multiple access mode.

Figure 2.6 Offloading techniques.

iii) Computation model - In the computation model, the energy consumption and latency for task execution and task transmission depend on the computation and queue models.

iv) Energy harvesting model - Energy harvesting is also used in managing the energy requirement, and is divided into deterministic and stochastic. Among these techniques, offloading based on a mathematical model is discussed in the next section.

2.3 Mathematical Model for Offloading

Mathematical offloading uses mathematical calculation and programming to compute the offloading of tasks. In general terms, mathematical offloading and modeling convert physical-world challenges into a mathematical form that can generate solutions for any application. The system model is divided into stochastic and deterministic processes, and the dependent factors of the system determine the parameters of the model. Being amenable to scientific investigation is the advantage of the deterministic model; the stochastic model is chosen when entities produce probabilities that vary with time [17]. In a model for offloading, the static variant of the stochastic and deterministic processes represents a system at a given time, while the dynamic variant represents a system with respect to change over time. The dynamic model, whether stochastic or deterministic, is further divided into discrete and continuous: in the discrete case the variables change at a discrete set of points in time, while in the continuous case the variables change continuously with time [18]. A stochastic dynamic system can easily be modeled, and its optimization problem solved, using a Markov chain model. There are various types of Markov chain models for dynamic problem solving, depicted in Figure 2.7, namely the Markov chain, the semi-Markov chain, the Markov decision process, the hidden Markov model and the queuing model. The section below describes these models in detail.

Figure 2.7 Markov chain-based stochastic process.

2.3.1 Introduction to Markov Chain Process and Offloading

Working of the Markov chain model - The Markov chain model is one simple way to statistically model random processes. A Markov chain is defined as "a stochastic process containing random variables, transitioning from one state to another depending on certain assumptions and definite probabilistic rules" [19]. Markov chains are widely used in applications from text generation to financial modeling and auto-completion. Under the Markov property, random variables transition from one state to another: the probability of moving from the current state to a possible future state depends only on the current state and time, and is independent of the series of states that preceded it. A toy simulation of this property for offloading states is sketched below.
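The toy simulation below illustrates the Markov property for a three-state offloading chain; the transition matrix values are illustrative assumptions.

```python
# Toy Markov chain over offloading states, illustrating the Markov
# property: the next state depends only on the current one. The
# transition matrix values are illustrative assumptions.
import numpy as np

states = ["LOCAL", "EDGE", "CLOUD"]
P = np.array([[0.6, 0.3, 0.1],      # from LOCAL
              [0.2, 0.7, 0.1],      # from EDGE
              [0.1, 0.4, 0.5]])     # from CLOUD

rng = np.random.default_rng(1)
s = 0                               # start in LOCAL
for _ in range(5):
    s = rng.choice(3, p=P[s])       # sample next state from row s only
    print(states[s])

# Long-run behaviour: the stationary distribution pi solves pi P = pi,
# i.e., it is the eigenvector of P.T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
print(pi / pi.sum())
```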

Figure 2.7 illustrates the stochastic process, which is further classified into the Markov chain, semi-Markov, Markov decision, hidden Markov and queuing models. A Markov chain describes a sequence of possible events where each event's probability depends on the previous state [20]. The semi-Markov model is a stochastic model that evolves with time by defining the state at every given time [21]. As the name implies, the Markov decision process is a process of making decisions where some decisions are partly random and others are under the decision maker's control [22]. In the hidden Markov model, the states are hidden rather than observable [23]. The queuing model helps predict the queue length and the waiting time [23]. The Markov decision process is a mathematical model that evaluates the system during the transition from one state to another according to probability rules. It can be classified into two types: discrete-time Markov decision chains and continuous-time Markov decision chains. Additionally, it can be classified by the number of past states used to decide the next state, giving first-order, second-order, and higher-order chains. There are many offloading schemes based on the Markov decision process [22, 23].

2.3.1.1 Markov Chain Based Schemes

The Markov chain is the mathematical system that transitions from one state to another. A transition matrix that defines the possibilities of transition from one state to another is used to tally the transitions [24]. The Markov chain is one of the critical stochastic processes: the present state grasps all the information needed for the process's future evolution [25]. The Markov chain model belongs to the stochastic offloading processes [25]. Among the many processes, the Markov chain model is chosen where dynamic decision making is required under environmental parameters such as wireless channel conditions and computation load [24].

2.3.1.2 Schemes Based on Semi-Markov Chain

This section addresses semi-Markov chain based offloading. The semi-Markov chain is developed using the Markov renewal process. It is defined by the renewal kernel and the initial distribution, together with other features equal to the renewal kernel. The semi-Markov chain model differs from the Markov chain model in that the realization of the process in a given state evolves over time through a random process [26]. A state transition model is used to verify the security of an offloading scheme.

2.3.1.3 Schemes Based on the Markov Decision Process

The Markov decision process is a mathematical framework for a discrete-time stochastic process. It assists in decision making for models that are partly random and partly under the control of the decision-maker. Optimization challenges are worked out using dynamic programming [27]. The Markov decision process comprises a list of items, e.g., transition probabilities, states, decision epochs, costs, and actions. The computational cost of the Markov decision process grows very quickly with the number of states; these issues can be addressed by algorithms such as linear programming and value iteration (a value-iteration sketch is given after the next subsection).

2.3.1.4 Schemes Based on Hidden Markov Model

The hidden Markov model is a partially observable statistical Markov model in which the agent only partially observes the states. These models involve "hidden" generative processes with "noisy" observations correlated to the system [28]. Hidden Markov model-based schemes allow the system or device to manage processing latency, power usage and diagnostic accuracy together.
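Returning to the value iteration mentioned in Section 2.3.1.3, the sketch below solves a toy two-state offloading MDP; the states, actions, transition probabilities and costs are illustrative assumptions, not a scheme from the cited literature.

```python
# Value-iteration sketch for a toy offloading MDP. States, actions,
# transition probabilities and costs are illustrative assumptions.
import numpy as np

# States: 0 = short queue, 1 = long queue. Actions: 0 = compute
# locally, 1 = offload. P[a][s, s'] and cost[s, a] define the MDP.
P = [np.array([[0.7, 0.3],           # local: queue tends to grow
               [0.2, 0.8]]),
     np.array([[0.9, 0.1],           # offload: queue tends to drain
               [0.6, 0.4]])]
cost = np.array([[1.0, 2.0],         # offloading costs bandwidth...
                 [4.0, 2.5]])        # ...but is cheap when backlogged

gamma = 0.9                          # discount factor
V = np.zeros(2)
for _ in range(500):                 # iterate the Bellman optimality update
    Q = np.stack([cost[:, a] + gamma * P[a] @ V for a in (0, 1)], axis=1)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal values:", V, "policy:", Q.argmin(axis=1))  # 0=local, 1=offload
```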

2.3.2 Computation Offloading Schemes Based on Game Theory

Game theory is used to model allocation problems for wireless resources. It helps reduce the resource allocation problem by dividing it into distributed decision-making problems. The main advantage of game theory is that it focuses on strategic interactions and eliminates the need for a central controller. Day by day, game theory models are attracting more attention as a way to address wireless communication problems. A game theory model consists of a group of decision-making blocks: the users plan a group of strategies, apply a strategy, and a corresponding payoff is produced. Offloading mobile data can be expressed as a 3-tuple, where A is the set of users, B = {B1, B2, ..., Bn} is the strategy space of the users, and C = {C1, C2, ..., Cn} is the utility of the users after an action. If Bi is the strategy chosen by a single user i, the strategies chosen by the remaining users can be represented as B-i, so B = {Bi, B-i} is the strategy profile formed by the users. The strategy profile must be chosen at an equilibrium, where no user will rationally choose to deviate from their selected strategy, as doing so would decrease their utility.

Game theory models can be divided into two groups: (i) the cooperative game model and (ii) the non-cooperative game model. In the cooperative game model, all the users cooperate to attain an equilibrium state, which provides many benefits and maximizes the utilization factor through cooperative decision-making by all users. This is called Pareto optimality: a user cannot raise his payoff without reducing another user's payoff. In the non-cooperative game model, each user selects his own strategy without coordinating with other users. Each user is concerned only with his own payoff, and all decisions taken by a single user make him more competitive with other users. Computational offloading schemes based on game theory improve the system's design and data offloading optimization, and there are different ways to optimize. (i) Data offloading is always based on a multi-user decision-making problem. The multiple users are the offloading-scheme service providers and those who benefit from the offloading schemes; the advantages of both service providers and service users can be exploited for maximum output [29]. Game theory provides solutions for different problem scenarios, with resources appropriately shared between the users. (ii) Each block in the data offloading game theory weighs each system's advantages and disadvantages. Game theory gives a very efficient way to keep the nodes from acting greedily [30]. A best-response sketch of a non-cooperative offloading game follows.
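The sketch below runs best-response dynamics for a toy non-cooperative offloading game of the kind just described: each user picks local or edge execution, the shared edge server's delay grows with its load, and iteration stops at a Nash equilibrium. All constants are illustrative assumptions.

```python
# Best-response dynamics for a toy non-cooperative offloading game:
# each user chooses local (0) or edge (1); edge delay grows with the
# number of offloaders (congestion). All constants are illustrative.
import numpy as np

n_users = 6
t_local = np.array([2.0, 1.5, 3.0, 2.5, 1.0, 2.8])  # local delays, s

def edge_delay(n_offloaders):
    # Shared edge server: delay grows linearly with its load.
    return 0.5 + 0.4 * n_offloaders

strategy = np.zeros(n_users, dtype=int)   # everyone starts local
changed = True
while changed:                            # iterate until no user deviates
    changed = False
    for i in range(n_users):
        others = strategy.sum() - strategy[i]
        best = int(edge_delay(others + 1) < t_local[i])  # offload if cheaper
        if best != strategy[i]:
            strategy[i], changed = best, True

print("equilibrium strategies:", strategy)
print("edge load:", strategy.sum())
```

Because the delays here depend only on the total edge load, this is a congestion game, so the best-response loop is guaranteed to terminate at a Nash equilibrium after a few passes.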

2.4 QoS and Optimization in Edge Computing

Quality of Service (QoS) describes a system's performance and quality. To improve QoS, edge computing plays a vital role in any application by using network resources in the local network; the applications include IoT devices, e.g., vehicular devices [31]. The challenge is to guarantee delay-bounded QoS when performing task offloading. Delay-bounded QoS becomes hard when a large number of users compete for communication and limited computing resources; a further source of difficulty is the delay and energy consumption from the extra communication overhead of offloading computational tasks to edge servers [32]. There are two approaches to optimizing QoS. The deterministic task offloading approach requires the task to be completed before the deadline with certainty, which is practically impossible due to transient noise, etc. Hence a statistical approach can be applied to complete the task before the deadline with high probability [32]. The section below discusses statistical delay-bounded QoS.

2.4.1 Statistical Delay Bounded QoS

In statistical delay-bounded QoS, tasks are planned to be completed before a deadline with a threshold probability. While implementing a statistical approach, the challenges must first be considered.

i) Correlation between statistical delay and task offloading: The first challenge is defining the correlation between the statistical delay requirement and task offloading. The correlation is quantified under constrained communication and computation resources.

ii) Low complexity: The second challenge in implementing statistical delay-bounded QoS is to provide a holistic solution with low time complexity. The complexity arises from users that are heterogeneous in computing capabilities and task requests [32]. Considerations for a holistic task offloading algorithm are discussed below.

2.4.2 Holistic Task Offloading Algorithm Considerations

The statistical approach is built by leveraging convex optimization theory and the Gibbs sampling method. A statistical computation model and a statistical transmission model are proposed. In the statistical computation model, the CPU clock is configured to save more energy while providing a statistical QoS guarantee. In the statistical transmission model, the traditional transmission rate is endowed with a statistical delay exponent for the QoS guarantee. A MINLP (Mixed Integer Non-Linear Program) is then formulated with a statistical delay constraint for task offloading. Using probability and queueing theory, the statistical delay constraint is converted into constraints on the number of CPU cycles and on the statistical delay. A holistic task offloading algorithm is proposed: the resource allocation problem is solved first to obtain the offloading decisions, and efficiency is met by convergence and scalability. An algorithm designed with these conditions in mind provides a solution with good accuracy and efficiency [32]. The chance-constraint form of the delay requirement is illustrated below.
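As an illustration of the statistical delay constraint described above (the notation is ours, not necessarily that of [32]):

```latex
% Deterministic requirement (hard to guarantee in practice):
%   D_i \le D_i^{\max} for every task i.
% Statistical delay-bounded QoS relaxes it to a chance constraint:
\Pr\left( D_i > D_i^{\max} \right) \;\le\; \varepsilon_i ,
\qquad 0 < \varepsilon_i \ll 1 ,
% which queueing theory then converts into bounds on CPU cycles and
% transmission rates inside the MINLP.
```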

2.5 Deep Learning Mathematical Models for Edge Computing

Deep learning is a machine learning function of artificial intelligence that has been applied in many implementations. Deep learning finds application in fields that require big data, natural language processing, object recognition and detection, and computer vision [33]. Instead of considering explicit data to perform a task, DL uses data representations: the data is arranged in a hierarchy of increasingly abstract representations, enabling good features to be learned [34]. Deep learning has traditionally used cloud computing for computational tasks and storage, but latency, scalability and privacy are the main concerns of cloud computing, which pushes us to choose edge computing over the cloud [33]. Edge computing has solutions for these challenges of latency, scalability and privacy [33]. Edge computing provides the computational resources at the edge, near the devices: the proximity of edge resources to end devices is small, which reduces latency. Edge computing works with a hierarchical plan of end devices, edge compute nodes and cloud data centers, providing computing resources at the edge that scale with the users; due to this property, scalability is never an issue. Because the edge operates very near the source (a trusted edge server), attacks during data transfer are curtailed, protecting data privacy and security [33].

2.5.1 Applications of Deep Learning at the Edge

By providing many solutions, DL finds vast application in changing the world. This section discusses the applications of deep learning at the edge [33].

i) Computer vision - In computer vision, DL helps in image classification and object detection. These computer vision tasks are required in many fields, e.g., video surveillance, object counting, and vehicle detection. Amazon uses DL at the edge for image detection in DeepLens. To reduce latency, image detection is performed locally, and important images of interest are uploaded to the cloud, which further saves bandwidth [33].

ii) Natural Language Processing - Speech synthesis, named entity recognition and machine translation are a few natural language processing fields where DL utilizes the edge. Alexa from Amazon and Siri from Apple are famous examples of voice assistants [33].

iii) Network Functions - Wireless scheduling, intrusion detection and network caching are common edge fields in network functions [33].

iv) Internet of Things - IoT finds applications in many areas. In every field, analysis is required for communication between IoT devices, the cloud and the user, and vice versa. Edge computing is the latest solution for implementing IoT and DL, and DL algorithms have proven successful in much research. Examples of IoT using the edge include human activity recognition, healthcare monitoring and vehicle systems [33].

v) AR and VR - Augmented reality and virtual reality are two areas where the edge provides applications with low latency and bandwidth. DL is considered the key pipeline of AR/VR; object detection is one AR/VR application [33].

2.5.2 Resource Allocation Using Deep Learning

Resource allocation is the optimal assignment of resources to the different parts of an edge system. Deep learning uses various learning methods to allocate resources; in this section, the deep reinforcement learning (DRL) allocation method is discussed. A green-mechanism resource allocation for edge networks has been proposed to satisfy mobile users' requirements, where "green mechanism" means increasing the energy efficiency of the system [33]. Table 2.1 lists existing methods for allocating resources efficiently, together with their challenges. A DRL method is applied at the edge to overcome this challenge, taking the users and the base station as inputs. DRL helps reduce the power and bandwidth used from the base station to the user, making the system energy efficient [33]. The main aim of the DRL method is to provide energy efficiency and a better user experience; another advantage is that it does not exceed the state space of the base station. A convex optimization method is first derived to obtain the minimum transmission energy, with DQN iterations applied on top of it, which also reduces the state space of the network. On the basis of the convex optimization results, the optimal connection and optimal power distribution are found. The agent and the external environment are the two components of DRL: by taking different actions, the external environment state is changed and a reward is received, and the main purpose is to maximize the value of the reward. In the experimental analysis, several users with three base stations were considered, and the number of steps to convergence was measured for each number of users. As the number of users increased, DRL required more steps for convergence, so convergence slowed down, while the efficiency also increased [33]. A toy tabular version of the underlying learning update is sketched below.
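The toy sketch below replaces the deep networks of the DRL scheme in [33] with a tabular Q-learning lookup table over illustrative load and power-level states; the environment dynamics and rewards are assumptions made purely for illustration.

```python
# Tabular Q-learning sketch in the spirit of the DRL allocation scheme
# described above (the cited scheme uses deep networks; this toy uses
# a lookup table). Environment dynamics and rewards are illustrative.
import numpy as np

n_states, n_actions = 5, 3            # load levels x power levels
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # Toy dynamics: higher power serves more load but costs energy.
    served = min(state, action + 1)
    next_state = min(n_states - 1, state - served + rng.integers(0, 2))
    reward = 2.0 * served - 1.0 * (action + 1)   # throughput - energy
    return next_state, reward

state = 0
for _ in range(20000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.random() < eps else Q[state].argmax()
    next_state, reward = step(state, action)
    # Q-learning update toward the Bellman target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("greedy power level per load state:", Q.argmax(axis=1))
```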

Table 2.1 Existing studies using deep learning in edge.

S. no. | Existing methods | Inference
1 | Joint task allocation and resource allocation with multi-user Wi-Fi. | To minimize the energy consumption at the mobile terminal, a Q-learning algorithm is proposed. Energy efficiency is not considered in this method, which leads to additional costs for the system.
2 | Joint task allocation: decoupling bandwidth configuration and content source selection. | An algorithm was proposed for avoiding frequent information exchange, which proved to be less versatile and hence cannot be used in large applications.
3 | Fog computing method for mobile traffic growth and better user experience. | As users are located in different geographical places, implementing fog becomes challenging and requires high maintenance and increased costs.
4 | Deterministic mission arrival scenario. | Each mission starts only after the present mission completes successfully, which does not work when the data source generates tasks continuously; the deterministic method cannot handle this.
5 | Random task arrival model. | This method works on tasks as they arrive and not on the queued tasks, which prevents the system from working efficiently.

2.5.3 Computation Offloading Using Deep Learning

Computation offloading is a great mechanism for offloading extensive tasks to a nearby server and communicating only important/filtered data to the cloud. With the edge, computation offloading has excellent applications for mobile devices, enhancing efficiency. In one study, a dynamic computation offloading mechanism was designed with the objective of reducing the cost of computational resources. Mobile edge computing (MEC) is considered, and a deep learning method, deep supervised learning (DSL), is applied to a network of mobile-based computer systems. A pre-calculated offloading solution is proposed, and the continuous offloading decision problem is formulated as a multi-label classification problem (a toy stand-in for this formulation is sketched below). The experimental analysis shows that the exhaustive strategy suffers exponentially with the increase in the number n of fine-grained components.
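The snippet below is a toy stand-in for that multi-label formulation: each of n components receives its own offload/local label, predicted jointly by one classifier. The data is synthetic, and scikit-learn's MLPClassifier is an assumed dependency, not the model used in the cited study.

```python
# Offloading decisions as multi-label classification, mirroring the
# formulation described above: each of n components gets its own
# offload/local label. Data is synthetic; scikit-learn is an assumed
# dependency, not the framework of the cited study.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_components = 2000, 4

# Features: per-component task sizes plus current channel quality.
X = rng.uniform(0, 1, (n_samples, n_components + 1))
# Toy ground-truth rule: offload a component when its task is large
# and the channel is good.
y = (X[:, :n_components] * X[:, [n_components]] > 0.25).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X[:1500], y[:1500])                 # multi-label fit: y is n x 4

pred = clf.predict(X[1500:])
print("exact-match accuracy:", (pred == y[1500:]).all(axis=1).mean())
```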

2.6 Evolutionary Algorithm and Edge Computing

An evolutionary algorithm is usually used to solve NP-hard problems, where solving the problem with a traditional optimization mechanism is impossible. An evolutionary algorithm takes some random solution vectors from the solution space and tries to reach an optimal solution over n iterations, slowly evolving towards a better one in each iteration through some cost or reward function. There are many evolutionary algorithms, such as 1) particle swarm optimization, 2) the genetic algorithm, and 3) ant colony optimization. In edge computing, many NP-hard optimization problems can be solved using these evolutionary algorithms. In mobile edge computing (MEC), offloading provides low latency and energy efficiency. Security-critical tasks involve more computation and take more time, so offloading them can achieve good performance. To minimize task completion time and energy consumption, particle swarm optimization algorithms have been proposed [35]. Position-based mapping is carried out to map a particle to a scheduling solution, and a new slow-down particle-movement mechanism is introduced in the particle-update step of the algorithm; the new update mechanism with the slow-down process was proved to achieve better performance than the conventional particle swarm optimization algorithm (a generic PSO sketch is given below). Another offloading mechanism for the mobile edge environment combines a queuing network with a genetic algorithm [36]. Predicting the waiting time and service time of the edge server is important for making the offloading decision, so a queuing network model is introduced in this work to model them. The waiting times and service times generated by the queuing network are taken as an indirect indicator of the edge server's load level, which the genetic algorithm uses for the offloading decision. The genetic algorithm is designed to make optimal offloading decisions by minimizing the task response time under constraints on the load level of the edge server and the task transmission time from node to edge server. The proposed algorithm outperformed both particle swarm optimization and traditional round-robin scheduling-based offloading in terms of response time.
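A generic particle swarm optimization sketch is given below; it minimizes an illustrative completion-time-plus-energy cost over per-task offload fractions, and omits the slow-down mechanism of [35].

```python
# Generic PSO sketch for an offloading split, in the spirit of [35]
# but heavily simplified. The cost model and constants are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n_particles, iters = 8, 20, 200
cycles = rng.uniform(1e8, 1e9, n_tasks)        # per-task CPU cycles
data = rng.uniform(1e5, 1e6, n_tasks)          # per-task bits to transfer

def cost(x):
    # x[i] in [0, 1]: fraction of task i executed on the edge server.
    t_local = ((1 - x) * cycles).sum() / 1e9           # local CPU, 1 GHz
    t_edge = (x * data).sum() / 20e6 + (x * cycles).sum() / 8e9
    energy = 0.5 * (1 - x).sum() + 0.1 * x.sum()       # local costs more
    return max(t_local, t_edge) + 0.05 * energy        # time + energy

pos = rng.uniform(0, 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best cost:", pbest_val.min(), "offload fractions:", gbest.round(2))
```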

Using a vehicular network node as an edge node is the latest trend. Figure 2.8 shows the vehicle edge network: the roadside sensor node offloads complex computations to the vehicle computing node; when the vehicle moves, it transfers the load to another vehicle to carry on the computation, and once the computation is finished, the result is returned to the roadside sensor node.

Figure 2.8 Offloading in vehicular node.

A few research works have been done to utilize the vehicle as an edge node. An edge computing model for vehicle network applications, with a scheduling algorithm using ant colony optimization, has been introduced to improve vehicles' computational capacity [37]. This framework makes it possible to use a dynamic vehicle network, and the vehicle as an edge service server, to improve the connected-vehicle application's computational power. The autonomous organization of vehicle edges is also addressed in this work. The proposed job scheduling on the vehicle edge nodes provided excellent performance in urban and highway scenarios, where the presence of many vehicles offers more optimization options. A toy ant colony sketch for such task-to-vehicle scheduling follows.
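The sketch below is a toy ant colony scheduler in the spirit of [37]: ants assign tasks to vehicle edge nodes guided by pheromone trails, good assignments (low makespan) are reinforced, and trails evaporate each iteration. The problem sizes, costs and pheromone rules are illustrative assumptions.

```python
# Ant colony optimization sketch for assigning tasks to vehicular edge
# nodes, loosely in the spirit of the ACO scheduler in [37]; the
# pheromone rules, costs and problem sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_tasks, n_nodes, n_ants, iters = 6, 3, 10, 100
exec_time = rng.uniform(0.5, 2.0, (n_tasks, n_nodes))  # task-on-node times
tau = np.ones((n_tasks, n_nodes))                      # pheromone trails

def makespan(assign):
    # Completion time of the busiest vehicle edge node.
    loads = np.zeros(n_nodes)
    for t, n in enumerate(assign):
        loads[n] += exec_time[t, n]
    return loads.max()

best_cost, best = np.inf, None
for _ in range(iters):
    solutions = []
    for _ant in range(n_ants):
        # Each ant assigns every task to a node, guided by pheromone.
        assign = [int(rng.choice(n_nodes, p=tau[t] / tau[t].sum()))
                  for t in range(n_tasks)]
        solutions.append((makespan(assign), assign))
    tau *= 0.9                                         # evaporation
    for c, assign in solutions:                        # reinforce good trails
        for t, n in enumerate(assign):
            tau[t, n] += 1.0 / c
    it_cost, it_assign = min(solutions)
    if it_cost < best_cost:
        best_cost, best = it_cost, it_assign

print("best makespan:", round(best_cost, 3), "assignment:", best)
```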

2.7 Conclusion

This chapter has given a basic introduction to edge computing and its associated research challenges. The various mathematical models for solving edge computing problems were also explored. An insight into computational offloading and multiple approaches to it was given. The Markov chain-based decision-making approach is an efficient mathematical approach, and its applicability to the computational offloading problem in edge computing was explored. A game theory-based dynamic approach for offload decision making was presented with the available solutions. Achieving target QoS in dynamic edge scenarios is another research challenge, and potential solution approaches were discussed. Deep learning is now widely applied in many domains to solve NP-hard problems; in edge computing, too, many NP-hard problems can be solved using deep learning mechanisms, and a few DL-based edge computing solutions were discussed in this chapter. Various offloading mechanisms using deep reinforcement learning were presented along with the associated offloading challenges. As a second deep learning application, resource allocation in edge computing environments using deep learning mechanisms was also covered. Evolutionary optimization is another potential way to solve multi-objective, multi-constraint optimization problems, and a few edge computing optimization problems solved with evolutionary algorithms such as ant colony optimization, the genetic algorithm, and particle swarm optimization were also described.

References 1. Feng, Jingyun, Zhi Liu, Celimuge Wu, and Yusheng Ji. "AVE: Autonomous vehicular edge computing framework with ACO-based scheduling." IEEE Transactions on Vehicular Technology 66, no. 12 (2017): 10660-10675. 2. "Computation Offloading." Wikipedia, 9 Jan. 2021, en.wikipedia.org/wiki/Computation_offloading. Accessed 17 Jan. 2021.

3. Feng, Jingyun, Zhi Liu, Celimuge Wu, and Yusheng Ji. "AVE: Autonomous vehicular edge computing framework with ACO-based scheduling." IEEE Transactions on Vehicular Technology 66, no. 12 (2017): 10660-10675. 4. Misra, Gourav, et al. "Internet of things (IoT) – a technological analysis and survey on vision, concepts, challenges, innovation directions, technologies, and applications (an upcoming or future generation computer communication system technology)." American Journal of Electrical and Electronic Engineering 4.1 (2016): 23-32. 5. Lee, In, and Kyoochun Lee. "The Internet of Things (IoT): Applications, investments, and challenges for enterprises." Business Horizons 58, no. 4 (2015): 431-440. 6. Petrolo, Riccardo, Valeria Loscri, and Nathalie Mitton. "Towards a smart city based on the cloud of things, a survey on the smart city vision and paradigms." Transactions on Emerging Telecommunications Technologies 28, no. 1 (2017): e2931. 7. M. Caprolu, R. Di Pietro, F. Lombardi and S. Raponi, "Edge Computing Perspectives: Architectures, Technologies, and Open Security Issues," 2019 IEEE International Conference on Edge Computing (EDGE), Milan, Italy, 2019, pp. 116-123, doi: 10.1109/EDGE.2019.00035. 8. Taleb, Tarik, Konstantinos Samdanis, Badr Mada, Hannu Flinck, Sunny Dutta, and Dario Sabella. "On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration." IEEE Communications Surveys & Tutorials 19, no. 3 (2017): 1657-1681. 9. Roman, Rodrigo, et al. "Mobile Edge Computing, Fog et al.: A Survey and Analysis of Security Threats and Challenges." Future Generation Computer Systems, vol. 78, Jan. 2018, pp. 680–698, 10.1016/j.future.2016.11.009. 10. Hai Lin, Sherali Zeadally, Zhihong Chen, Houda Labiod, Lusheng Wang, 2020. A survey on computation offloading modeling for edge computing. Journal of Network and Computer Applications. https://doi.org/10.1016/j.jnca.2020.102781. 11. Santos, José, Philip Leroux, Tim Wauters, Bruno Volckaert, and Filip De Turck. "Anomaly detection for smart city applications over 5G low power wide area networks." In NOMS 2018-2018 IEEE/IFIP Network Operations and Management Symposium, pp. 1-9. IEEE, 2018. 12. Kumar, K. S., Kumar, T. A., Radhamani, A. S., & Sundaresan, S. (2020). Blockchain Technology. Blockchain Technology:

Fundamentals, Applications, and Case Studies, 23. 13. Sittón-Candanedo, Inés, Alonso, Ricardo, García, Óscar, Muñoz, Lilia, & Rodríguez, Sara. (2019). Edge Computing, IoT and Social Computing in Smart Energy Scenarios. Sensors, 19, 3353. 10.3390/s19153353. 14. Chemitiganti, Vamsi. "Edge Computing: Challenges and Opportunities." Medium, 6 June 2019, medium.com/datadriveninvestor/edge-computing-challenges-and-opportunities-9f2dddbda49e. Accessed 16 Jan. 2021. 15. Wikipedia Contributors. "Markov Decision Process." Wikipedia, Wikimedia Foundation, 20 May 2019, en.wikipedia.org/wiki/Markov_decision_process. 16. Cheng, N., Lyu, F., Quan, W., Zhou, C., He, H., Shi, W., Shen, X., 2019. Space/aerial-assisted computing offloading for IoT applications: a learning-based approach. IEEE J. Sel. Area. Commun. 37, 1117–1129. https://doi.org/10.1109/JSAC.2019.2906789. 17. A. M. Ghosh and K. Grolinger, "Deep Learning: Edge-Cloud Data Analytics for IoT," 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada, 2019, pp. 1-7, doi: 10.1109/CCECE.2019.8861806. 18. Adimoolam M., John A., Balamurugan N.M., Ananth Kumar T. (2021) Green ICT Communication, Networking and Data Processing. In: Balusamy B., Chilamkurti N., Kadry S. (eds.) Green Computing in Smart Cities: Simulation and Techniques. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-48141-4_6.

19. Carvalho, Gonçalo, et al. "Computation Offloading in Edge Computing Environments Using Artificial Intelligence Techniques." Engineering Applications of Artificial Intelligence, vol. 95, 1 Oct. 2020, p. 103840, www.sciencedirect.com/science/article/abs/pii/S0952197620302050, 10.1016/j.en Accessed 16 Jan. 2021. 20. Y. Liu, M. Peng, G. Shou, Y. Chen and S. Chen, "Toward Edge Intelligence: Multiaccess Edge Computing for 5G and Internet of Things," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 6722-6747, Aug. 2020, doi: 10.1109/JIOT.2020.3004500. 21. "A Brief Introduction to Markov Chains | Markov Chains in Python." Edureka, 2 July 2019, www.edureka.co/blog/introduction-to-markov-chains/. Accessed 17 Jan. 2021. 22. "Markov Chain." Wikipedia, 11 Jan. 2021,

en.wikipedia.org/wiki/Markov_chain#:~:text=A%20Markov%20chain%20is%20 Accessed 17 Jan. 2021. 23. "Semi-Markov Process - an Overview | ScienceDirect Topics." www.sciencedirect.com/topics/computer-science/semi-markov-process#:~:text=1.1. Accessed 17 Jan. 2021. 24. Wikipedia Contributors. "Markov Decision Process." Wikipedia, Wikimedia Foundation, 20 May 2019, en.wikipedia.org/wiki/Markov_decision_process. 25. "Hidden Markov Models - An Introduction | QuantStart." www.quantstart.com/articles/hidden-markov-models-an-introduction/. Accessed 17 Jan. 2021. 26. Hai Lin, Sherali Zeadally, Zhihong Chen, Houda Labiod, Lusheng Wang. A survey on computation offloading modeling for edge computing. Journal of Network and Computer Applications, Volume 169, 2020, 102781, ISSN 1084-8045, https://doi.org/10.1016/j.jnca.2020.102781. 27. "Markov Chains Explained Visually." Explained Visually, setosa.io/ev/markov-chains/. 28. "Markov Renewal Process." Wikipedia, 15 July 2020, en.wikipedia.org/wiki/Markov_renewal_process. Accessed 19 Jan. 2021. 29. Yick, Jennifer, Biswanath Mukherjee, and Dipak Ghosal. "Wireless sensor network survey." Computer Networks 52, no. 12 (2008): 2292-2330. 30. Bhandary, Vikas, Amita Malik, and Sanjay Kumar. "Routing in wireless multimedia sensor networks: a survey of existing protocols and open research issues." Journal of Engineering 2016 (2016). 31. Q. Li, S. Wang, A. Zhou, X. Ma, F. Yang and A. X. Liu, "QoS Driven Task Offloading with Statistical Guarantee in Mobile Edge Computing," IEEE Transactions on Mobile Computing, doi: 10.1109/TMC.2020.3004225. 32. A. M. Ghosh and K. Grolinger, "Deep Learning: Edge-Cloud Data Analytics for IoT," 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada, 2019, pp. 1-7, doi: 10.1109/CCECE.2019.8861806. 33. M. Yang et al., "Deep Reinforcement Learning based Green Resource Allocation Mechanism in Edge Computing driven Power Internet of Things," 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 2020, pp. 388-393, doi: 10.1109/IWCMC48107.2020.9148169.

34. Chen, Jiasi & Ran, Xukan. (2019). Deep Learning with Edge Computing: A Review. Proceedings of the IEEE, pp. 1-20. 10.1109/JPROC.2019.2921977. 35. Zhang, Yi, Yu Liu, Junlong Zhou, Jin Sun, and Keqin Li. "Slow-movement particle swarm optimization algorithms for scheduling security-critical tasks in resource-limited mobile edge computing." Future Generation Computer Systems (2020). 36. Zakaryia, Samah A., Safaa A. Ahmed, and Mohamed K. Hussein. "Evolutionary offloading in an edge environment." Egyptian Informatics Journal (2020). 37. Feng, Jingyun, Zhi Liu, Celimuge Wu, and Yusheng Ji. "AVE: Autonomous vehicular edge computing framework with ACO-based scheduling." IEEE Transactions on Vehicular Technology 66, no. 12 (2017): 10660-10675. *Corresponding author: [email protected]

3 Mathematical Modelling of Cryptographic Approaches in Cloud Computing Scenario M. Julie Therese1*, A. Devi2, P. Dharanyadevi3 and Dr. G. Kavya4† 1Department of Electronics and Communication Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, India 2Department of Electronics and Communication Engineering, IFET College of Engineering, Tamilnadu, India 3Department of Computer Science and Engineering, Pondicherry University, Puducherry, India 4Department of Electronics and Communication Engineering, SA Engineering College, Chennai, India Abstract The world is packed with innovative technologies like Artificial Intelligence, Augmented Reality, Blockchain, the Internet of Things, 5G wireless technology, etc. The data generated by these technologies are boundless and huge, making them complicated for a traditional data centre to handle. This led to the development of Cloud Computing, a financial and business model that replaces the traditional data centre. It also has many unique properties; unfortunately, since cloud computing is associated with the Internet, it has many vulnerabilities and is prone to attacks arising from pooled, virtualized, and outsourced resources. The key mechanisms for protecting the varying storm of data are access control, auditing, authentication, and authorization. Without the right level of security, the customer may not be able to use the service provider's offerings effectively. A significant contribution to building an integrated security platform that covers all key mechanisms for protecting data comes in three categories: identity management, threat detection, and data encryption. Among the three, data encryption reduces the exposure to data theft and plays a major role in maintaining security while using cloud services. A few existing cryptographic approaches deal with insider and outsider threats; the first is data computation, followed by data partition, and finally data encryption.

Keywords: Security, cloud of things, data computation, data partition, encryption

3.1 Introduction to IoT The IoT requires an integrated security prototype that considers security problems from a holistic viewpoint, one that involves advanced users and their contact with this technology [1]. IoT is a network interconnection of ordinary things that are normally fitted with a significant degree of intelligence. IoT has enormously expanded the use of the Internet by combining every contactable object with embedded systems, leading to a distributed network of human-connected and internet-enabled devices. IoT is a popular phenomenon for enhancing the quality of life, and in the last few years it has been a focal point for the world's scientists and practitioners [2]. IoT is a tool that can capture, store, and process data; services, tracking, and various devices can also be monitored. An advanced security prototype is required that looks at the security problems from a holistic point of view shared by the advanced users of this technology. The Internet is fundamental to IoT, so security loopholes will occur; secured communication is a key issue for all paradigms developed on an IoT-based sensing program.

3.1.1 Introduction to Cloud Cloud computing is one of the most rapidly growing areas of science, and people use cloud technology for various purposes. Cloud computing is an emerging technology that is massively scalable and offers enormous services with the use of the internet. The services offered by the cloud take the form of Infrastructure, Platform, and Software. To utilize these services effectively, proper privacy and security principles must be followed. The use of cell phones and tablet computers has increased considerably in recent years, leading to heavy use of cloud storage. As a result, the protection of individuals as well as of companies in cloud storage is affected, and due to poor safety in cloud storage, some user data leaks can occur. Cloud computing is a methodology with tremendous scalability, versatility, and flexibility that includes certain client services in order to shift resource limitations away from local computers. Cloud technology provides the user with several facilities, which often bring threats to the user's data. Highly centralized cloud storage [35] poses significant security challenges. Users have externalized their data onto cloud servers that are not entirely trustworthy, which makes maintaining data protection and privacy in the cloud very difficult [3, 4].

3.1.2 General Characteristics of Cloud Having covered the basics of IoT and cloud, knowledge of the general characteristics of cloud infrastructure is necessary:
- Elasticity and scalability: resources can be expanded or reduced according to the user's requirement.
- Pay per use: users pay only when they use the resources.
- On-demand access: cloud services are not a permanent part of the IT infrastructure; services are provided when there is a request from the user.
- Resiliency: flexibility in providing service; even when a server fails, the workload is moved to another server without interruption of the service being provided [5–7].
- Multi-tenancy: resource pooling in which the resources, along with their cost, are shared among multiple users.
- Improved security and access control.
Some of the general characteristics of cloud are also essential or key characteristics; the latter additionally include broad network access and measured services. Broad network access allows users to access resources at their convenience, provided the internet and the relevant credentials are available. In the case of measured services, resource usage is measured, controlled, and reported; additionally, transparency is maintained between the service provider and the customer.

3.1.3 Integration of IoT and Cloud IoT and Cloud are two different technologies that complement each other [8, 9]. Nowadays, integrating the two technologies is very important to bring about a Cloud of Things. There are two different layers: the IoT layer and the cloud layer. The data generated by various IoT devices in the IoT layer are stored in the cloud. In order to enhance the security level in the cloud layer, the data undergo three different processes: data computation, data partition, and data encryption, as shown in Figure 3.1.

3.1.4 Security Characteristics of Cloud Since a huge volume of sensitive data is stored in the cloud, it is important to understand the security characteristics of the cloud.

Figure 3.1 Integrating IoT and cloud. Authentication: Authentication in the IoT network is difficult, since heterogeneous authentication is organized across the network. In the course of accessing the network, electronic devices must be detected and approved [10–12]; each device has a unique identifier key for the IoT network. Confidentiality: Privacy guarantees that health information and data from patient records are shielded from unauthorized users [14]. In addition, all private data must not be exposed to non-authenticated identities and must be maintained without risk [15]. Liability: In a healthcare system, if an accident happens, the device should be able to determine who bears responsibility [16, 17]. The chapter structure is as follows: the first section has provided a description of IoT, the cloud, the general characteristics of the cloud, the integration of IoT and cloud, and the security characteristics of the cloud. The next section explores the often complicated data computation process using the star cubing method. Data partitioning using the Shamir Secret Share algorithm is dealt with in section three. In section four, the data encryption process using the Advanced Encryption Standard (AES) is addressed. Section five outlines the results and discussions, and section six offers an overview and conclusion. The next section deals with the cryptographic approaches that provide security of data in the cloud.

3.2 Data Computation Process Data is a gathering of values or amounts; it is the basic resource to

be administered by a computer. Data cube computation is considered one of the key tasks in implementing a data warehouse [18]. In a data warehouse system, the different methods of aggregation play a vital role in the computation of data cubes; as mentioned above, they include multi-dimensional aggregation and multi-array aggregation. The data cube is used for the multiple aggregates that conveniently support functions in OLAP databases. The elementary cube problem is to compute the aggregates at all levels of the cube.

3.2.1 Star Cubing Method for Data Computation Star Cubing is also called a mixed-approach method, as it integrates both top-down and bottom-up computation [19]. The main function of star cubing is to exploit shared dimensions. Let us consider the four dimensions PQRS as shown in Figure 3.2. 3.2.1.1 Star Cubing Algorithm Step 1: Concurrently compute cuboid PQS with cuboid PQ. Step 2: Perform star cubing aggregation in a top-down manner.

Figure 3.2 Star cubing algorithm. Step 3: Assume shared dimensions PQ; PQS/PQ means cuboid PQS computed over the shared dimension PQ. Step 4: Perform apriori pruning in a bottom-up manner. Step 5: Integrate the data for the PQRS dimensions.
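For intuition, the sketch below computes every cuboid of a toy PQRS cube by naive aggregation; it is this repeated scanning across cuboids that star cubing avoids by sharing dimensions, so the code illustrates the problem setting rather than the star cubing algorithm itself. The fact table and measure values are invented.

```python
from itertools import combinations
from collections import defaultdict

# Toy fact table over dimensions P, Q, R, S with a numeric measure.
rows = [
    ("p1", "q1", "r1", "s1", 4),
    ("p1", "q1", "r2", "s1", 7),
    ("p2", "q2", "r1", "s2", 3),
    ("p1", "q2", "r1", "s1", 5),
]
DIMS = ("P", "Q", "R", "S")

def compute_cube(rows):
    """Aggregate every cuboid (subset of PQRS) of the data cube.

    This is the naive full-cube computation that star cubing improves on
    by sharing aggregation across cuboids with common dimensions."""
    cube = {}
    for k in range(len(DIMS) + 1):
        for dims in combinations(range(len(DIMS)), k):
            agg = defaultdict(int)
            for row in rows:
                key = tuple(row[d] for d in dims)
                agg[key] += row[-1]          # SUM measure
            cube[tuple(DIMS[d] for d in dims)] = dict(agg)
    return cube

cube = compute_cube(rows)
print(cube[("P", "Q")])   # the PQ cuboid: {('p1','q1'): 11, ('p2','q2'): 3, ('p1','q2'): 5}
```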

Let M(x) be an irreducible monic polynomial of degree n over T3, where T3 denotes the ternary field GF(3), so that the extension field is GF(3^n) = T3[x]/(M(x)). Since the field has characteristic 3, cubing is the Frobenius map and is linear over T3. The field cubing F_c of an arbitrary element G is given as

\( F_c(G) = G^3 \bmod M(x). \)

Writing \( G(x) = \sum_{i=0}^{n-1} g_i x^i \) with \( g_i \in T_3 \), the polynomial cubing can be computed as

\( G(x)^3 = \sum_{i=0}^{n-1} g_i^3 x^{3i} = \sum_{i=0}^{n-1} g_i x^{3i}, \)

where \( g_i^3 = g_i \) because \( g_i \in T_3 \). The field cubic operation is then performed by reducing the powers \( x^{3i} \) modulo M(x). Determining the constants for the degrees n and 2n, i.e., precomputing \( x^n \bmod M(x) \) and \( x^{2n} \bmod M(x) \) as field constants, fixes the cubic operation; this computation can also be performed in offline mode. The reduction is simplest when the irreducible polynomial M(y) has the trinomial form

\( M(y) = y^n + a\,y^k + b, \quad a, b \in T_3, \; 0 < k < n, \)

i.e., when it is performed with an irreducible trinomial.
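Assuming the characteristic-3 reading above, a small Python sketch of the coefficient-wise cubing and modular reduction might look as follows; the trinomial M(x) = x^3 + 2x + 1 over GF(3) is an illustrative choice (it has no roots in GF(3), hence is irreducible).

```python
# Cubing in GF(3^n) = GF(3)[x]/(M(x)); coefficients stored low-to-high.
M = [1, 2, 0, 1]          # M(x) = 1 + 2x + x^3

def reduce_mod(poly, m):
    """Reduce poly modulo the monic polynomial m over GF(3)."""
    poly = [c % 3 for c in poly]
    n = len(m) - 1                                  # degree of m
    for deg in range(len(poly) - 1, n - 1, -1):
        c = poly[deg]
        if c:
            # x^deg = x^(deg-n) * x^n, and x^n = -(lower terms of m) (mod m)
            for i in range(n):
                poly[deg - n + i] = (poly[deg - n + i] - c * m[i]) % 3
            poly[deg] = 0
    return poly[:n]

def field_cube(g):
    """Cube g in GF(3^n): since g_i^3 = g_i, g_i moves to degree 3i, then reduce."""
    cubed = [0] * (3 * (len(g) - 1) + 1)
    for i, c in enumerate(g):
        cubed[3 * i] = c % 3
    return reduce_mod(cubed, M)

print(field_cube([0, 1, 0]))   # x^3 mod M(x) = 2 + x  ->  [2, 1, 0]
```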

3.3 Data Partition Process In recent times, the amount of data generated via digital resources and other web-related activities has become vast. Storing all the data locally is not advisable [13], as this introduces several risks, so the huge dataset is divided into smaller subsets. To enhance web-related applications, the data are fragmented into numerous parts [20]; this process is named data partitioning. In a large-scale organization, partitioning usually takes place to split the data and improve accessibility, approachability, and scalability. Data partitioning also optimizes the cost of storage. It plays a significant part in distributing the processing across various nodes, and the application availability and response time are improved considerably.

3.3.1 Need for Data Partition Scalability: Partitioning improves the scalability of data. A physical hardware boundary is reached when a single database system is scaled up, but the system can be expanded almost indefinitely when data is split into partitions, each held by an individual server. Performance: Data access operations on each partition take place over a smaller volume of data. The partitioning of data

makes the system well-organized; when an operation spans two or more partitions, it can run in parallel. Security: The data is separated into sensitive and non-sensitive forms, and the appropriate security controls are applied to each; the reliability of data is high. Operational Flexibility: Partitioning offers many opportunities for refining operations, maximizing efficiency, and lowering cost. Different tactics can be implemented to monitor, re-establish, manage, and back up partitions, along with extra tasks based on the significance of the data in the respective partition [21]. Availability: A single point of failure can be prevented by separating data across many servers. If one instance fails, only the data in that partition is inaccessible; operations can resume on the other partitions. For managed PaaS data stores, this is less critical because these services are developed with integrated redundancy.

3.3.2 Shamir Secret (SS) Share Algorithm for Data Partition This approach applies cryptographic methods to a secret, splitting it into different shares and dispersing them among multiple parties; the secret can be reconstructed only when the respective parties come together. The holder of the secret is known as the dealer. Only the dealer creates shares, and the dealer defines the number of shares needed to reconstruct the secret [22]. The dealer distributes the shares, which are held by different parties. Secret sharing is useful because it allows more secure storage of highly sensitive data, including encryption keys, numbered bank accounts, and missile codes. By distributing the data, there is no single point of data loss. Secret cipher shares are important in cloud computing settings because they offer a high level of security for secrets using an algorithm implemented in software. The secret cipher share is a partitioning algorithm that works on the concept of SS sharing. The SS sharing algorithm is a cryptographic scheme created by Adi Shamir. It is a way to share a secret securely in which every user gets a distinct part of the secret. The number of parts required to reconstruct the original secret can be less than the total number of parts; this is the threshold scheme [23]. Otherwise, every user would be needed to reconstruct the original data. SS sharing is applied to share a secret in a secure way; commonly it secures an encryption key. The secret consists of many parts, so-called shares, which are used to reconstruct the secret in its original form [24]. To unlock the secret through SS sharing, a least number of shares is

essential, which is known as the threshold. It denotes the smallest count of shares required to unlock the secret.

3.3.3 Working of Shamir Secret Share Shamir's method of exchanging secrets is based on polynomial interpolation, an algebraic approach that helps us determine unknown values within a distance between two known data points, without knowing what lies on either side [25]. It encodes a "secret", divides it into pieces, and distributes them so that the secret can be recreated effectively without needing every single share, as shown in Figure 3.3. With polynomial interpolation, only a threshold number of shares is required, which provides sufficient data points to correctly recover the values encoded in the shares.
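A minimal sketch of the scheme follows, assuming arithmetic over a prime field with an arbitrarily chosen Mersenne prime as modulus: share creation evaluates a random polynomial whose constant term is the secret, and recovery uses Lagrange interpolation at x = 0.

```python
import random

P = 2_147_483_647      # Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, threshold, n_shares):
    """Split secret into n_shares points on a random polynomial of
    degree threshold-1; any `threshold` points recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=12345, threshold=3, n_shares=5)
print(recover(shares[:3]))   # any 3 of the 5 shares suffice -> 12345
```

Fewer than the threshold number of shares reveal nothing about the secret, which is what makes the scheme attractive for protecting encryption keys in the cloud.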

Figure 3.3 Dividing and recreating shares. Steps for Shamir Secret Sharing: Step 1: A polynomial is constructed.

Step 2: Shares are distributed. Step 3: Secrets are recovered. Let AC, of cardinality m, be a finite field and let h be the secret to share; l shares are required, with threshold d. 1. There exist constants c1 > 0 and c2 > 0 such that

2. For a constant c3, we have

If c = (α − (c1 + c3))/c2, then there is a unique solution of the equation on [0, G], where G = min{a, b, c}. Proof: Define an operator W: C → C by

To prove W:C → C, we have to show that Wψ is continuous. Consider

Consider

So Wχ ∈ C and W maps C into itself. Consider,

We now establish that W is a contraction mapping. For χ1, χ2 ∈ C,

So, W is a contraction self-map on C; it has a fixed point x ∈ C which is unique.

7.4 An Application In this section, we give an application of the Banach fixed point theorem to image compression. Image compression helps to reduce unwanted noise in a digital image, but the method of saving an image poses several problems. Let M0 be a digital representation of an image (see Figure 7.1). Starting from the digital image M0, we may define the following procedure: (i) A duplicate of M0 is made and glued onto the left side. (ii) A duplicate of M0 is made and glued onto the right side.

(iii) We now have a new image represented by M1 (Figure 7.2). (iv) If the same approach is applied to M1, we again obtain a new digital image M2 similar to M1. We would now like to place this procedure in a mathematical context. Let V be the function that takes Mi to V(Mi). If we proceed in this manner, we eventually get V(M2) = M2, i.e., M2 is its fixed point. Proceeding in the same manner, we obtain an infinite sequence of images {Mn}, and the sequence {Mn} converges to M2. M5 cannot be distinguished from M2. As a consequence, for improved resolution, the computer program uses M5 instead of M2; simultaneously, in place of M5, the algorithm can use M2 to evaluate specific digital image properties quickly. From this example, we can see that the Banach fixed point theorem can be more helpful than other techniques.
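The convergence the example relies on can be demonstrated in a few lines of Python: iterating any contraction T from an arbitrary starting point drives successive iterates towards the unique fixed point, exactly as the Banach theorem predicts. The affine map below is an illustrative contraction, not the image-pasting operator V.

```python
import numpy as np

# Iterating a contraction: x_{k+1} = T(x_k) with T(x) = A @ x + b converges
# to the unique fixed point x* = (I - A)^{-1} b whenever ||A|| < 1.
rng = np.random.default_rng(1)
n = 16                                   # think of x as a flattened 4x4 image
A = 0.8 * rng.random((n, n)) / n         # row sums <= 0.8, so T is a contraction
b = rng.random(n)

x = np.zeros(n)
for k in range(200):
    x_next = A @ x + b
    if np.linalg.norm(x_next - x) < 1e-12:
        break                            # successive iterates have stabilised
    x = x_next

x_star = np.linalg.solve(np.eye(n) - A, b)   # closed-form fixed point
print(k, np.allclose(x, x_star))             # iterate agrees with x*
```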

Figure 7.1 M0.

Figure 7.2 M1.

7.5 Conclusion We have provided an application of the Banach fixed point theorem in the digital setting, to understand the nature of images in a better form. In our next work, we will try to provide other fixed-point applications in digitalized images. In this chapter, a well-known fixed point theorem is employed to study the nature of digital images; this is established by applying the concept of fuzzy numbers, and sufficient conditions are determined to obtain the desired result.

*Corresponding author: [email protected], ORCID:

https://orcid.org/0000-0002-7952-1217

8 The Convergence of Novel Deep Learning Approaches in Cybersecurity and Digital Forensics Ramesh S1*, Prathibanandhi K2, Hemalatha P3, Yaashuwanth C4 and Adam Raja Basha A5 1Department of Computer Science & Engineering, Krishnasamy College of Engineering & Technology, Cuddalore, Tamil Nadu, India 2Department of Electrical & Electronics Engineering in Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamil Nadu, India 3Department of Computer Science & Engineering, IFET College of Engineering, (Autonomous), Villupuram, Tamil Nadu, India 4Department of Information Technology, Sri Venkateswara College of Engineering (Autonomous) Sriperumbudhur, Chennai, Tamil Nadu, India 5Department of Electronics and Communication Engineering, Audisankara College of Engineering & Technology (Autonomous) Gudur, Andhra Pradesh, India Abstract Digital forensics has become an inevitable paradigm for creating valuable insights by processing the digital evidence derived from internet sources in cybercrime investigation. Several cybercrime methodologies and tools, including Helix, Encase, and Winhex, provide practical forensic analysis to prevent cyber attacks. It facilitates the investigators to produce the outputs and ascertain the progress of criminal case investigation. Despite the compatibility with forensic devices and the capability to examine digital evidence at all levels, the cyber intrusions performed by computer criminals and sharing digital assets among their peers have also been increasing to a greater extent. In this context, the seamless data exchange of sensitive information and personal data sharing among the interconnected components has a multitude of security vulnerabilities. Deep Learning (DL) models promise to automate the extraction of the optimal values from the network

traffic and perform malware classification, spam and phishing detection, and intrusion detection. The DL-based cyber forensic investigation engine facilitates evidence preservation, analysis, and interpretation through decisions based on evidential weights. This chapter aims to bring out the innovations and advancements in deep learning for cyber forensics and digital forensics. Keywords: Digital forensics, cybersecurity, deep learning (DL), deep neural networks, convolution neural networks (CNN), recursive neural networks (RNN), dynamic convolution neural networks (DCNN)

8.1 Introduction Deep Learning is a branch of Machine Learning (ML) which mimics the intelligence [1] of the human brain in processing data and making decisions based on the analysis of that data. The process of machine learning relies on Artificial Neural Networks (ANN) with representation learning. Deep learning can exploit data in its numerous forms: the data fed to the deep learning process are typically unstructured and would take years to analyze by hand to extract valid information. The term "deep" in deep learning derives from the use of multiple layers for deriving information from the processing network. Hence deep learning can be defined as the type of machine learning technique that utilizes multiple layers to extract high-level features from the given unprocessed input. A deep learning architecture is created by considering the layer-to-layer model of the network to yield better performance. Deep learning algorithms are well suited to unsupervised data, as the amount of unsupervised data is greater than that of supervised data. Deep learning algorithms, termed neural networks, identify and capture the association between the given input and the expected output data. The backbone structure of a deep learning algorithm is depicted in Figure 8.1.

Figure 8.1 Basic structure of deep learning. In Figure 8.1, three levels of layers are shown, namely, the input layer, hidden layer, and output layer. The input layer may have numerous inputs; similarly, any number of hidden layers may be present, depending on the deep learning algorithm employed for exploring the input data. The mapping learned by a deep learning or machine learning model can be written as

(8.1) \( y = f(x; W, b) \)

for input x = {x1, x2, ..., xn}. To decrease the loss function L, the iteratively varying weight is given by the following equation:

(8.2) \( w_{t+1} = w_t - \eta \, \partial L / \partial w_t \)

And the probability distribution function (pdf) over the output classes is given as:

(8.3) \( p(y = j \mid x) = \exp(z_j) / \sum_k \exp(z_k) \)

During the coding of a deep learning algorithm, the ultimate aim is to isolate the various factors and extract the associated information as the output. Deep Learning has had an extended list of applications since 1990 and requires big data to obtain better results. A notable application of Deep Learning is its role in cybersecurity, especially in digital forensics [13]. The challenges faced by digital forensics are classified into three types, namely, technical challenges, resource challenges, and legal challenges.
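To make (8.1)–(8.3) concrete before turning to those challenges, the following is a minimal numpy sketch of one forward pass through a single hidden layer, the softmax output distribution, and one gradient step on the weights; the layer sizes and learning rate are arbitrary illustrative choices.

```python
import numpy as np

# One hidden layer: forward pass, softmax output, and a single gradient
# step that decreases the cross-entropy loss, as in (8.1)-(8.3).
rng = np.random.default_rng(0)
x = rng.random(4)                        # input x = {x1, ..., xn}, n = 4
y = np.array([0.0, 1.0, 0.0])            # one-hot target, 3 classes
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
eta = 0.1                                # learning rate

h = np.tanh(W1 @ x + b1)                 # hidden layer, cf. eq. (8.1)
z = W2 @ h + b2
p = np.exp(z - z.max()); p /= p.sum()    # softmax pdf, cf. eq. (8.3)
print("loss before update:", -np.log(p[y.argmax()]))

# Backpropagate and apply the weight update of eq. (8.2): w <- w - eta dL/dw
dz = p - y                               # dL/dz for softmax + cross-entropy
dh = (W2.T @ dz) * (1 - h**2)
W2 -= eta * np.outer(dz, h); b2 -= eta * dz
W1 -= eta * np.outer(dh, x); b1 -= eta * dh
```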

The technical challenges include anti-forensic techniques such as cryptographic algorithms, hiding sensitive data in unknown storage locations, and channel conversion. Apart from these, cloud security, steganographic techniques, and the skill gap are further challenges. The legal challenges include the poor guidelines and standards followed in the maintenance of digital forensics and the poor preservation of the physical devices that hold the sensitive data. Migration in technology falls under the resource challenges of digital forensics. The remaining sections of this chapter proceed as follows: section 8.2 illustrates the role of deep learning in digital forensics, while section 8.3 mentions the risks involved; the data analytics is covered in section 8.4, along with the data subsets discussed in section 8.5. The later parts of this chapter, sections 8.6 to 8.11, illustrate the detection and control mechanisms against threats to digital forensics.

8.2 Digital Forensics Digital forensics is the science of preserving, identifying, extracting, and documenting digital evidence stored in computing devices so that it can be used by a court of law. This science extracts data from any computing device, not restricted to computers, mobile phones, audio/videotapes, pen drives, hard discs, etc. Digital forensics assists forensic officials in analyzing, identifying, and preserving digital shreds of evidence. The objectives of digital forensics are:
- To identify, analyze, and recover the related digital materials, assist the investigation process, and present the materials to a court of law.
- To scrutinize the crime motive and track the corresponding culprit.
- To carry out data acquisition and duplication processes and recover deleted files from storage devices.
- To identify pieces of evidence faster and preserve the evidence.

Figure 8.2 Process involved in digital forensics. The processes involved in digital forensics are illustrated in Figure 8.2, which shows the four basic steps: collection, examination, analysis, and reporting of the obtained information. Some of the challenges involved in digital forensics are:
- Highly intelligent hackers with easy access to hacking tools
- Lack of physical evidence
- Huge volumes of data, not limited to terabytes, which make the investigation difficult and time-consuming
Certain challenges of digital forensics can be addressed by Deep Learning (DL) algorithms, which process unstructured input over huge data sets to derive the related output that assists the process of digital forensics.

8.2.1 Cybernetics Schemes for Digital Forensics Digital forensics incorporates the utilization of technically driven and proven solutions for the analysis and examination of materials discovered in digital devices. The discovery of materials includes the preservation of digital devices, collection of material evidence, examination, analysis, interpretation, preparation of a proper report, and presentation of the digital evidence. The digital devices that hold materials may be flash drives, hard disk drives, pen drives, CDs/DVDs, audio/videotapes, etc. The data gathered from digital forensics acts as evidence in criminal or civil cases, and the most distinctive process of digital forensics is the recovery of erased or formatted data from digital devices. The erased or vanished data may be related to a criminal case, intellectual property rights, patents, important documents, balance sheets, financial data, or any sensitive data. Cybernetics is an approach for identifying systems that are properly regulated in their structure, concerns, and probabilities. It also incorporates the study of derived concepts, the black box, feedback, and the control of machines, organisms, and organizations. Cybernetics studies offer information about the procedures used to investigate the design or processes of any system and make the study efficient. The architecture of cybernetics was shaped by Horst Rittel [1] and Bruce Archer [2], who addressed complex design challenges over past decades and helped develop the technology behind digital forensics. Figure 8.3 illustrates the cybernetics loop: the system is observed with an appropriate sensor, the output of which is sent to the

controller for decision processing. The decision processed by the controller is followed by an action that is executed with the help of other supporting systems, which may be related to the engineering sciences, biology, or the social sciences. This is depicted in Figure 8.3. The Systematic Digital Forensic Investigation Model (SDFIM) concentrates on the analysis of the crime scene and cybercrime. The model comprises 11 phases, incorporating examination and analysis. The examination phase of the SDFIM focuses on locating and obtaining the evidence from the crime scene; to ease this process, the model incorporates filtering of data, validation, pattern matching, search techniques, and recovery of data sets, whether ASCII or non-ASCII. The Cyber Forensic Field Triage Process Model (CFFTPM) follows a list of procedures that includes analyzing, identifying, and exploring the data sets within a short time frame. Cybernetics draws on method-oriented disciplines like control theory and communication theory, but is not restricted to these topics; it also relates to science-oriented disciplines like the engineering sciences, social sciences, and biology. One well-known domain of cybernetics is digital data forensics science, which has evolved over the last decade. Various software tools have been designed by programmers and cybersecurity [15] experts for application in the digital forensics domain; some of these tools are MailXaminer, MailPro+, SQL Log Analyzer, Exchange Log Analyser, etc. The cybernetics relating method-oriented and science-oriented services is driven by control theory and communication theory, as depicted in Figure 8.4.

Figure 8.3 Cybernetics loop.

Figure 8.4 Cybernetics relating method-oriented and science-oriented services [7, 14].

8.2.2 Deep Learning and Cybernetics Schemes for Digital Forensics Digital forensics is a science of investigating and collecting digital information from the crime scene, while cybernetics illustrates the complex relationship between input and output. The integration of Deep Learning with cybernetics yields various methods of interaction among the connected devices. Deep learning succeeds in applying its algorithms in digital forensics [4] so that pieces of evidence are collected from the crime scene more effectively than a human being could manage. A variety of approaches has been designed and proposed based on neurobiologists' expectations. The relation between the input, the hidden layer, and the output of the deep learning cell is expressed by the standard gate equations below, a relationship well suited to cybernetics applications:

(8.4) \( f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \)
(8.5) \( i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \)
(8.6) \( o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \)

where f_t is the forget gate, and i_t and o_t are the input and output gates.
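A small numpy sketch of one step through the gates (8.4)–(8.6) is given below; the weight matrices are random placeholders rather than trained forensic models, and the layer sizes are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM-style step following (8.4)-(8.6).
d_in, d_hid = 4, 6
rng = np.random.default_rng(0)
Wf, Wi, Wo, Wc = (rng.normal(size=(d_hid, d_in + d_hid)) for _ in range(4))
bf = bi = bo = bc = np.zeros(d_hid)

def step(x_t, h_prev, c_prev):
    v = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(Wf @ v + bf)               # forget gate, eq. (8.4)
    i_t = sigmoid(Wi @ v + bi)               # input gate,  eq. (8.5)
    o_t = sigmoid(Wo @ v + bo)               # output gate, eq. (8.6)
    c_t = f_t * c_prev + i_t * np.tanh(Wc @ v + bc)
    return o_t * np.tanh(c_t), c_t           # new hidden and cell state

h, c = np.zeros(d_hid), np.zeros(d_hid)
h, c = step(rng.random(d_in), h, c)
print(h.round(3))
```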

8.3 Biometric Analysis of Crime Scene Traces of Forensic Investigation

8.3.1 Biometric in Crime Scene Analysis Biometric technology [1] provides a high level of accuracy in the automated evaluation and analysis of crime scene traces. Crime is a pre-planned or emotion-driven act that lies outside the law of a particular nation and may be penalized or punished based on that nation's laws. The crime scene contains the most important information, which must be gathered properly to scrutinize and trap the perpetrator. Forensic investigation at the crime scene is the predominant process for collecting information about the incident. A forensic investigation is the process of safeguarding and collecting information and documenting notes, photographs, drawings, audiotapes, video files, etc.; it also includes the collection of physical information [9] like fingerprints, footprints, tire marks, paints, hair, DNA, shoes and shoe prints, skin, bone material, gadgets, etc. The collected evidence is submitted as proof of the crime and to establish the identity of the criminal. Numerous methods have been proposed to analyze crime scene traces using forensic investigation and biometric analysis. Apart from the aforementioned, forensic anthropometry, forensic dactyloscopy, and forensic documentation are major tools in criminal identification. Certain limitations have been identified in forensic investigation:
- Insufficient or lacking human effort in collecting the shreds of evidence. The existence of tiny physical or biological pieces of evidence hidden at a crime scene is a major and commonly occurring concern; a neglected fingerprint, ear print, tire mark, or shoe print are well-known examples.
- Concealment of identity: criminals involved in the crime scene generally conceal their identity, leaving forensic scientists a hard task in identifying them.
- Huge time consumption: manual forensic investigation is a hugely time-consuming process involving many complex steps.
- No proper usage of standards: there are no uniform standards followed in forensic investigation, so it is not applied consistently worldwide.
To overcome these pitfalls in forensic investigation, it is advisable to employ biometric-based forensic investigation, as it addresses most of the concerns that arise. Different biometric identities possess different levels of performance parameters [16]. The performance parameters [2] are uniqueness, universality,

acceptability, circumvention, and score. Table 8.1 lists the values of these metrics for the various biometrics used in analyzing crime scene traces. Fingerprint: Fingerprints fall into three major classes, namely patent, latent, and plastic. A patent fingerprint is left in a material containing blood stains, dirt, or paint from the finger; a plastic fingerprint is a three-dimensional fingerprint impression; and a latent fingerprint is not easy to identify, as it is not visible to the human eye. Facial Recognition: The human face image has a high probability of duplication and plays a smaller role in digital forensics. Most researchers avoid facial recognition because the duplication probability of this biometric identity is high. Facial recognition possesses three major disadvantages, namely: a) immutability of the face image over time; b) retrieval of the facial image; and c) sketched images perfectly matching the real captured image. Iris Recognition: Iris pattern recognition is a method that has evolved along with the development of technology. Iris recognition possesses a high level of uniqueness, whereas its acceptability rate is 0.0, as it is not widely used in crime scene investigations. Table 8.1 Performance of biometrics in forensic investigation.

Biometric           Uniqueness  Universality  Acceptability  Circumvention
Fingerprint         1.0         0.5           0.5            1.0
Facial Recognition  0.0         1.0           1.0            0.0
Iris Recognition    1.0         1.0           0.0            1.0
DNA                 1.0         0.0           0.0            1.0

DNA: Deoxyribonucleic Acid (DNA) analysis performed in the crime scene investigation has a high level of accuracy in detecting a culprit. DNA cannot be duplicated, but identifying a DNA pattern collected through the hair or nails of the victim or the criminal is an elaborate process requiring a professional. 8.3.1.1 Parameters of Biometric Analysis

The False Rejection Rate and the False Acceptance Rate are the principal parameters in the biometric analysis procedure and include the False Match Rate (FMR) and the False Non-Match Rate (FNMR). The FMR is the rate at which the system decides that a sample I matches a template T when I and T come from independent individuals:

(8.7) \( \mathrm{FMR} = \frac{\text{impostor comparisons declared a match}}{\text{total impostor comparisons}} \)

The FNMR is the rate at which the system decides that I does not match T when I and T actually match each other:

(8.8) \( \mathrm{FNMR} = \frac{\text{genuine comparisons declared a non-match}}{\text{total genuine comparisons}} \)

The Equal Error Rate is the common value of FMR and FNMR at the threshold where the two errors are equal to each other.
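These rates are easy to estimate from comparison scores. The sketch below uses synthetic genuine and impostor score distributions and scans thresholds for the point where FMR and FNMR coincide, i.e., the equal error rate.

```python
import numpy as np

# Estimating FMR (8.7) and FNMR (8.8) at a threshold t from synthetic scores.
rng = np.random.default_rng(0)
genuine  = rng.normal(0.7, 0.1, 1000)    # scores where I and T truly match
impostor = rng.normal(0.4, 0.1, 1000)    # scores from different individuals

def fmr(t):   # impostor comparisons wrongly declared a match
    return np.mean(impostor >= t)

def fnmr(t):  # genuine comparisons wrongly declared a non-match
    return np.mean(genuine < t)

thresholds = np.linspace(0, 1, 1001)
gap = [abs(fmr(t) - fnmr(t)) for t in thresholds]
t_eer = thresholds[int(np.argmin(gap))]  # threshold where the errors cross
print(f"EER ~ {fmr(t_eer):.3f} at threshold {t_eer:.2f}")
```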

8.3.2 Data Acquisition in Biometric Identity The crime scene investigation is a vital process, as it holds information about the criminal and the victim involved in the crime scene. The test sample collected from the crime scene goes by different names: questioned item, reference sample, or test sample. The reference samples are collected manually and fed as input to the Deep Learning algorithm, which relates the information available in the unstructured input data to produce valid information. The process involved in acquiring data from the captured biometric identity [3] is illustrated in Figure 8.5. A suitable sensor collects the reference sample from the crime scene, and the captured sample is fed as input for comparison with the existing records of criminals. This can be achieved automatically with the aid of Deep Learning algorithms that screen and compare the captured sample with the reference database. The result of the comparison is classified as either matching the reference database or mismatching the data sets available in it. The cost function of the system can be determined by (8.9)

Figure 8.5 Data acquisition system in biometric identity.

8.3.3 Deep Learning in Biometric Recognition Deep Learning eases the investigation of the crime scene by extracting valuable information from the unstructured data collected there. In this section, a generic view of Deep Learning algorithms for the analysis and recognition of biometric identity [5] is given. To employ Deep Learning in biometric recognition, it is essential to have a proper dataset so that the Deep Learning algorithm can compare against it across its multiple hidden layers and extract valid information from the reference sample. Some well-known data sets associated with biometric recognition are listed in Table 8.2. Deep Learning in Fingerprint Recognition: Numerous algorithms and models exist in this modern digital era for fingerprint recognition, among which certain notable ones yield a good performance rate. One such minutiae extraction algorithm is MENet, which yields better results on the fingerprint recognition system with the FVC database. A multiview deep representation model based on CNNs is used for recognizing three-dimensional fingerprints with a variety of datasets such as PolyU2D, CASIA, and NIST. This model provides a recognition rate of 98%, considered a high level of accuracy in the fingerprint recognition process. The approach is extended to a Deep Belief Network (DBN) integrated with the LivDet 2013 dataset to yield a better recognition rate. Table 8.2 List of datasets for various biometric identities.

Data sets                                    Biometric identity
FVC Fingerprint Datasets                     Fingerprint
PolyU High-Resolution Fingerprint Database   Fingerprint
CASIA Fingerprint Datasets                   Fingerprint
NIST Fingerprint Datasets                    Fingerprint
CASIA Iris-1000 Database                     Iris Recognition
UBIRIS Datasets                              Iris Recognition
IIT Delhi Iris Datasets                      Iris Recognition
MICHE Datasets                               Iris Recognition
PolyU Multispectral Palmprint Dataset        Palmprint
CASIA Palmprint Database                     Palmprint
IIT Delhi Touchless Palmprint Database       Palmprint

Figure 8.6 illustrates the basic structure of the Deep Learning Algorithm for the fingerprint recognition model.

8.4 Forensic Data Analytics (FDA) for Risk Management Digital transformation and the latest technologies like Deep Learning and its family support various applications in business development and crime scene investigation. One notable application offered by machine learning is Forensic Data Analytics (FDA) for managing and diminishing risk. Risk management is a sequence of processes involving the identification of risk, analysis of the risk, and control and mitigation of the threats that cause it. Risk here refers to the threat or theft of an organization's digital data, which can lead to a severe financial or operational crisis across the organization's entire operations. Forensic Data Analytics performed with Deep Learning technology involves three major processes, namely, acquisition of evidence, preservation, and analysis of evidence; the process ends with the presentation of evidence. Evidence acquisition is the task of collecting volatile, easily erased data from the crime scene, and investigators need to be aware of Deep Learning algorithms before performing forensic data analysis. Improper knowledge of Deep Learning algorithms risks losing the preserved evidence, so the investigator must

possess proper training in Deep Learning methods and their algorithms, together with sufficient knowledge in handling digital pieces of evidence. Deep Learning plays a vital role in the investigation and analysis process, and one should be wise in choosing the appropriate Deep Learning algorithm. On completing the investigation process, the investigator proceeds with the report preparation, which involves risk factors such as threats of attacks on, or attempts to diminish, the report and the shreds of evidence held in the records of forensic investigators.

Figure 8.6 Deep learning process involved in fingerprint recognition.

Figure 8.7 illustrates the procedures involved in the Deep Learning algorithm [6] for performing Forensic Data Analysis. Organizations handle huge amounts of data related to their processes; these data are highly vulnerable to attacks, and attackers tend to access them illegally. In this case, for risk

management, forensic data analysis is used. The digital forensic investigator must be highly competent and able to detect unauthorized access and suspicious movement of data. Besides, the investigator must be aware of the type of files moved and the locations from and to which the data have been moved. The investigator must also be able to determine when a data breach has occurred and be competent enough to locate the file and delete it from the new location. For all these processes, Deep Learning algorithms and frameworks [2] assist Forensic Data Analytics officials in managing risk effectively.

Figure 8.7 Deep learning framework for forensic data analysis.

8.5 Forensic Data Subsets and Open-Source Intelligence for Cybersecurity Data forensics, with its huge volumes of data, has been an area of intense focus in recent years. The reason for the interest in the huge value of data held by private and government organizations is that attackers intend to acquire these data illegally for fraudulent activities. The huge volume of data also yields a positive factor for data managers and forensic investigators, in that they can identify data theft criminals and their circumstances. Hence the management of big data has both advantages and disadvantages. Beyond these pros and cons, managing huge data always involves tedious management tasks and techniques, and it is necessary to reduce the volume and variety of data held in each organization. The reduction of data involves complex tasks, as depicted in Figure 8.8. It is not an easy task, as it affects the processes of the organization, and hence it is advisable to make use of data subsets so that the huge volume of data may be reduced. Reducing the data volume not only makes

the data easier to handle but also enables faster data operations and supports quicker analysis by forensic investigators. Data reduction can be performed with the Data Reduction by Selective Imaging process (DRbSI) [8], a technique whose practical analysis has shown that it can reduce the volume of data to about 0.2% of the original. DRbSI finds fine application in criminal data analytics, where its data subsets make the process easier and faster.

8.5.1 Intelligence Analysis The analysis of criminal intelligence comprises a variety of techniques, namely, Intelligence-Led Policing (ILP), Digital Forensic Intelligence (DFI), and Open-Source Intelligence (OSI). Intelligence-Led Policing can be employed in criminal data intelligence analysis as a decision-making tool to facilitate the reduction and prevention of crime by framing and following strong strategic policies for intelligence analysis. ILP aims at identifying attackers, preserving the crime scene and location, linking incidents to crimes, and determining the measures to be taken to prevent repetition of the crime. Digital Forensic Intelligence (DFI) is a precisely accurate method for intelligent analysis of digital forensic data. It is a forward-looking analysis method and can be described as a predictive approach to protecting data from crime.

Figure 8.8 Digital forensic data reduction.

8.5.2 Open-Source Intelligence Open-Source Intelligence is one of the best tools for criminal intelligence, having originated in crime prevention, data security, and the enforcement of data security laws. Open-source tools include Google, Wikipedia, reports released by governments, blogs, academic papers, YouTube, the Internet, Twitter, Facebook, etc. Apart from these open-source tools, certified and registered security agencies have their own sources, including the Federal Bureau of Investigation in the United States, central intelligence agencies, the Royal Canadian Police, the European Police, etc. Open-Source Intelligence is fast, reliable, easy to share, operational, and involves reduced risk.

8.6 Recent Detection and Prevention Mechanisms for Ensuring Privacy and Security in Forensic Investigation 8.6.1 Threat Investigation The rapid increase in the need for technological innovation and the necessity of information technology in day-to-day life reduce people's awareness of data security. The evolution of advanced technology and the ease of internet usage allow attackers to introduce novel methods of cyber attack for data theft, and these are proving increasingly successful. The threats commonly experienced in forensic investigation are the Dark Web, Ransomware, and Botnets. Dark Web: The dark web was developed by groups of professional hackers. They developed advanced tools using simpler mechanisms like viruses, malware, Trojan horses, logic bombs, adware, etc., to attack the forensic investigation system and hack data illegally. Ransomware: Ransomware is a type of attack software developed by cybercriminals. Once installed in a network or a system, ransomware is capable of bringing all documents under the control of the hacker; the authorized user no longer has control over their system, network, or files. Some common ransomware families are Cerber, Satan, Horstman, and Atom. Botnet: A botnet is an attack product developed by an attacker and sold to others for launching attacks. An attacker who is not technically strong enough to perform an attack will prefer a botnet, which has a greater chance of succeeding. Attackers may be of any type, but the common media through which they initiate attacks are viruses, malware, Trojan horses, unwanted applications, etc. Virus: A virus is a computer program that reaches the targeted system or network and executes its main operation of invading the target software or programs. A forensic investigator, on detecting a harmful virus in a system or network, will not immediately remove the virus or alter the content of the program; the investigator will scan the entire network, check the data targeted by the virus, and then carefully remove the virus from the system or the network.

Trojan Horses: A Trojan horse is a malicious program that breaks the security level of a system or network and is capable of defeating the multiple levels of security implemented to protect it. Trojan horses do not alter the files in the invaded system or network but simply copy their content. The attacker uses a Trojan horse merely to obtain a copy of the files stored in the targeted system or network. On detecting a Trojan horse, forensic investigators tend to remove the targeted files or rename files or directories so that the Trojan horse fails to locate its target.

Logic Bomb: Logic bombs are programs, similar to Trojan horses, that produce disastrous effects in the victim system or network. Logic bombs are capable of breaking security policies and succeed in their attack once a certain set of conditions is met in the targeted system.

8.6.2 Prevention Mechanisms The prevention mechanism against threats to the privacy and security of a forensic investigation [11] starts with threat detection, the design and implementation of security policies, and network analysis. Threat detection is the process of analyzing a network or system for vulnerabilities [10], commonly performed by an Intrusion Detection System (IDS). The IDS scans networking packets before they are transmitted or received, using special sensors to execute the scanning process. It identifies the signature of each packet, allows packets with known and authorized signatures, and blocks packets with malicious signatures. On detecting malicious signatures, forensic investigators must frame a security and privacy policy to safeguard the system, and that policy must be implemented to secure the system from intruders. An Intrusion Prevention System (IPS) maintains the security policies and reviews them at regular intervals so that the network stays protected against new kinds of attack. Two approaches are commonly followed to prevent intrusion in forensic investigation: the first is the intrusion detection system, and the second is the use of novel cryptographic algorithms. A homomorphic encryption scheme is well suited to deep learning algorithms, since computation can be carried out directly on ciphertexts; its basic principle, stated here for an additively homomorphic scheme, is that an operation on ciphertexts corresponds to an operation on the underlying plaintexts:

E(m1) · E(m2) = E(m1 + m2)   (8.10)

The threat in the forensic investigation can be identified by following a set of privacy policies, such as installing an Intrusion Detection System, identifying the network traffic pattern, and tracking router activity for malicious behavior. The threat to digital forensics can be effectively prevented by any of the following mechanisms.

Hash Function: Hash functions such as SHA-1, SHA-256, and SHA-512 are used in the authentication verification process. On successful verification of authentication credentials, access is provided to legitimate users so that sensitive resources cannot be accessed by unauthorized users.

Drive Imaging: Forensic investigators must image the data source before commencing the investigation. Imaging is the process of copying the entire content to another location or drive. During analysis the probability of data loss is high; in such cases, the lost data can be recovered from the copy.
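To make the hash-based authentication step concrete, the following minimal Python sketch stores a salted SHA-256 digest of a credential and verifies it with a constant-time comparison. The function names and the salting scheme are illustrative additions, not taken from the chapter.

import hashlib
import hmac
import os

def hash_credential(password, salt=None):
    """Hash a credential with SHA-256 and a random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify_credential(password, salt, stored_digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_credential("s3cret")
assert verify_credential("s3cret", salt, digest)
assert not verify_credential("wrong", salt, digest)

Salting prevents identical passwords from producing identical digests, and the constant-time comparison avoids timing side channels during verification.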

8.7 Adversarial Deep Learning in Cybersecurity and Privacy Cybersecurity and privacy are highly challenging concerns in this digital era, as attackers tend to stay ahead of technology development when implementing attacks. Forensic investigators and cybersecurity professionals implement various mechanisms to prevent intrusion by attackers. Some of the DL methods implemented in cybersecurity mechanisms are Deep Belief Networks, Deep Autoencoders, and Restricted Boltzmann Machines; other neural networks are implemented in cybersecurity mechanisms as well.

Deep Belief Networks: A Deep Belief Network is composed of multiple hidden layers, with interconnections among the hidden layers, the input layer, and the output layer. For cybersecurity prevention mechanisms, the Deep Belief Network family builds on two classes of models, namely Deep Autoencoders and Restricted Boltzmann Machines.

Deep Autoencoders: The Deep Autoencoder is a class of neural network trained in an unsupervised manner. The input fed to the encoder is compressed in the hidden layers, which are trained so that the network reconstructs an effective model of the input. When the autoencoder is designed to remove noise from its input, it is termed a denoising autoencoder. Figure 8.9 illustrates the process involved in Deep Autoencoders.

Information theory characterizes the mutual dependence between the input layer and the hidden layer through the mutual information

I(x; h) = Σ_{x,h} p(x, h) log [ p(x, h) / (p(x) p(h)) ]   (8.11)

Figure 8.9 Deep auto encoders. The output of the autoencoder depends on the input through the composition of the encoder and decoder:

x̂ = d(e(x))   (8.12)

The autoencoder has multiple hidden layers, with an encoding function e and a corresponding decoding function d. The accuracy of an autoencoder-based detector can be computed as

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (8.13)

where TP is the number of true positives, TN true negatives, FP false positives, and FN false negatives of the predicted malicious-activity class.

The encoding and decoding functions are trained jointly so that the additive noise is not propagated into the reconstruction of the original input.

Restricted Boltzmann Machine: The Restricted Boltzmann Machine (RBM) is a two-layer graphical model and an unsupervised DL method. The RBM functions like the Deep Autoencoder in terms of its hidden layer. The hidden layer in the RBM is unsupervised, and the two layers form a bipartite graph: every hidden node is connected to the visible (input) nodes, with no connections within a layer. Figure 8.10 depicts the RBM model, where each hidden node is connected to the input layer through a weight Wij.

Figure 8.10 Restricted Boltzmann Machine.
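As one possible concretization of the denoising autoencoder and the accuracy measure of Eq. (8.13), the following PyTorch sketch trains an encoder-decoder pair to reconstruct clean inputs from noisy ones. The layer sizes, noise level, and the interpretation of the inputs as traffic features are assumptions for illustration, not details from the chapter.

import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Encoder e(.) and decoder d(.) trained to reconstruct x from a noisy input."""
    def __init__(self, n_in=64, n_hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 64)                       # clean feature vectors (e.g., traffic features)
for _ in range(100):
    noisy = x + 0.1 * torch.randn_like(x)    # additive noise that the model learns to remove
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), x)          # reconstruct the clean input, per Eq. (8.12)
    loss.backward()
    optimizer.step()

def accuracy(tp, tn, fp, fn):
    """Detector accuracy per Eq. (8.13)."""
    return (tp + tn) / (tp + tn + fp + fn)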

8.8 Efficient Control of System-Environment Interactions Against Cyber Threats The system-environment interaction is the relation between the system that holds the digital forensic data and the adaptive environment that makes use of those data. It carries a substantial risk of illegal access to the data stored in the system, and cyber threats, such as attacks launched by professional attackers to gain illegal access, are common. Intelligent systems using deep learning are designed to mediate the interaction between system and environment effectively without leaving room for any kind of attack; such a system is rigid against illegal access to and manipulation of the data. The system-environment interaction process is initiated at the system end by receiving input from the environment. The received input data is processed using a deep learning algorithm with multiple, closely coupled hidden layers to produce a related output. The processed output is fed back to the environment for further processing, such as collecting the reaction of the environment, which becomes the input for the system's next cycle. The purpose of the interaction between the system and the environment varies among applications. Irrespective of that purpose, the process of obtaining input, rendering output to the environment, and once again collecting information from the environment is a routine cyclic process monitored by a controller. The system-environment interaction can be analyzed by assigning the various levels of entropy from communication theory to the exchange between the system and the environment. Let

H(X) be the entropy of the source information,
H(Y) be the entropy of the received information,
H(X|Y) be the equivocation, i.e., the remaining uncertainty about the source given what was received,
H(Y|X) be the entropy of the noise added to the information, and
I(X;Y) = H(X) − H(X|Y) be the information actually transmitted over the medium.

The mapping of entropies in the communication process is determined by the sensors, which estimate the source entropy H(X); the effectors are designed and installed to estimate H(Y), while the central controller is installed to monitor the actual transmitted information I(X;Y).
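The entropy bookkeeping above can be computed directly from a joint distribution of sent and received symbols. The following NumPy sketch, using an arbitrary toy distribution, evaluates H(X), the equivocation H(X|Y), the noise entropy H(Y|X), and the transmitted information I(X;Y):

import numpy as np

def channel_entropies(p_xy):
    """Channel entropies from a joint distribution p(x, y)."""
    p_x = p_xy.sum(axis=1)                       # marginal of the source
    p_y = p_xy.sum(axis=0)                       # marginal of the receiver
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    H_x, H_y, H_xy = h(p_x), h(p_y), h(p_xy)
    H_x_given_y = H_xy - H_y                     # equivocation
    H_y_given_x = H_xy - H_x                     # noise entropy
    I_xy = H_x - H_x_given_y                     # information actually transmitted
    return H_x, H_x_given_y, H_y_given_x, I_xy

# toy joint distribution over two source symbols and two received symbols
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(channel_entropies(p_xy))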

8.9 Incident Response Applications of Digital Forensics Digital forensics concentrates on investigating digital devices procured from the crime scene and exploring those devices to extract sensitive digital information. IT professionals with well-developed forensic investigation skillsets extend their attention to threats against networks; such professionals understand the basics of a threat and can detect it more effectively. Incident response is the complementary set of steps that occurs when a threat or an attack is detected. The investigator's incident response on detecting a threat is critical, and a wrong move can allow the attacker to succeed. IT professionals must be well trained in handling viruses and malware and in mitigating attacks on the network. The skillset involved in the incident response of digital forensics includes:

Acquisition of data
Maintaining transparency of the system and its privacy policies
Proper investigation capability
Reporting on the vulnerability and the attack
The ability to automate iterative processes

The steps involved in incident response are:

Preparing: IT professionals must be well prepared to handle the incident response and security policies.

Identifying: During the identification process, forensic investigators must properly determine the incident, its type and method, its effects, the risks it involves, and more.

Remediating: On detecting an incident, IT professionals must implement plans without violating the security policies and apply the available artifacts to determine the most efficient way of solving the issue.

Incident Recovery and Communication: Once the incident has been identified and the recovery solutions designed, a recovery plan is established to restore the network to its original status. After successful recovery, the incident must be properly communicated to the forensic investigators.

8.10 Deep Learning for Modeling Secure Interactions Between Systems The design of security systems for intrusion prevention is essential in this digital era. With this goal, numerous developments have been made in system security using deep learning concepts: security systems have been designed using transfer learning, multi-task learning, and federated learning. These methods are implemented under various assumptions about the vulnerable incidents.

Transfer Learning: Transfer learning is a technique in which the knowledge of a model built for one incident is used as a reference to model solutions for another system. This reference knowledge is called source information, and it yields an efficient solution because the model is already well trained on the previous incident. For this reason, transfer learning is the most common way of designing incident solutions with DL modeling concepts.

Multi-Task Learning: The main aim of multi-task learning is to learn from the results of several previously occurring, similar incidents. This method is closely related to transfer learning; the difference is that transfer learning refers to a single prior solution methodology, whereas multi-task learning draws on various solutions from previous incidents.

Federated Learning: Federated Learning (FL) is a distributed learning method for building a DL model for security solutions. Models are trained locally on digital equipment such as smartphones, computers, laptops, etc. Because it is capable of continuous learning, this method is considered one of the more efficient DL approaches to designing solutions. The Federated Learning architecture is represented pictorially in Figure 8.11.

Figure 8.11 Federated learning architecture. These deep learning modeling methodologies face characteristic attacks. In transfer learning, an attacker may add a few irrelevant hidden layers with the aim of confusing the DL algorithm into malfunctioning. In federated learning, by contrast, the information is not retrieved from a single source but extracted from multiple prior sources, so attacks that exploit a single point of vulnerability in the federated learning architecture prove largely ineffective.
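A minimal sketch of the federated aggregation principle follows. It uses a simple linear model on synthetic client data and illustrates FedAvg-style weighting of local models by dataset size; it is not the specific architecture of Figure 8.11, and all names are illustrative.

import numpy as np

def local_update(weights, data, lr=0.1):
    """One round of local training on a device (linear model, squared loss)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    """Server step: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(10):
    local_models = [local_update(global_w.copy(), d) for d in clients]
    global_w = federated_average(local_models, [len(d[1]) for d in clients])

Only model parameters travel between devices and server here; the raw client data never leaves the device, which is the privacy property that motivates federated designs.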

8.11 Recent Advancements in Internet of Things Forensics The features of the Internet of Things (IoT) and its ease of use encourage users to apply it in multi-domain applications. One such application of IoT [12] is forensic investigation, which involves many challenging procedures. The complexity of IoT procedures in forensic investigation stems from the huge volume of data generated by IoT devices and the demanding procedures needed to handle that volume. Numerous data reduction techniques have been applied to the generated data so that IoT can be used effectively in forensic applications. With its simplified architecture, IoT can be implemented easily on a smartphone, computer, or laptop, and hence finds many applications in the forensic domain. The factors of IoT influencing computer forensics are digital evidence, big IoT data, data spread over multiple platforms, computer architecture, and specific hardware and software, as shown in Figure 8.12.

Figure 8.12 IoT factors influencing computer forensics.

8.11.1 IoT Advancements in Forensics The advancements of the Internet of Things in computer forensics are:

Digital Equipment Forensic: Digital equipment, including smartphones, computers, and laptops, plays a major role in the forensic investigation process. These devices are location aware and can be used to establish the position of the device at the time of the crime. Forensic Edge Management Systems (FEMS) are employed to provide forensic security for digital equipment by facilitating network management, intrusion detection systems, and intrusion prevention systems.

Forensic Analysis for Smart Vehicles: Information sharing among vehicles is accomplished using IoT smart vehicle applications, which help maintain traffic flow and road safety. Forensic analysis helps maintain the security of the data related to vehicles and road traffic management.

8.11.2 Conclusion From the above study and analysis, digital forensics plays a vital role in solving cybercrime and identifying the proper response to threats that occur in the network. Forensic science holds a major position across informative and scientific domains owing to its social impact. Various threat causes and effects were discussed in this section, together with the corresponding countermeasures. A variety of data forensic analytical methods have been proposed by researchers, many concentrating on specific domains. Better security can be provided for forensic science through cryptographic algorithms, which perform the authentication verification process effectively.

References

1. Jauro, F., Chiroma, H., Gital, A., Almutairi, M., Abdulhamid, S., & Abawajy, J. (2020). Deep learning architectures in emerging cloud computing architectures: Recent development, challenges and next research trend. Applied Soft Computing, 106582. doi:10.1016/j.asoc.2020.106582.
2. Rahayu, S., Robiah, Y., & Sahib, S. (2008). Mapping Process of Digital Forensic Investigation Framework. 8.
3. Awad, A. I., & Hassanien, A. E. (2014). Impact of Some Biometric Modalities on Forensic Science. Studies in Computational Intelligence, 555, 47-62. doi:10.1007/978-3-319-05885-6-3.
4. Tistarelli, M., Grosso, E., & Meuwly, D. (2014). Biometrics in Forensic Science: Challenges, Lessons and New Technologies. 153-164. doi:10.1007/978-3-319-13386-7_12.
5. Minaee, S., Abdolrashidi, A., Su, H., Bennamoun, M., & Zhang, D. (2019). Biometric Recognition Using Deep Learning: A Survey.
6. Koroniotis, N., Moustafa, N., & Sitnikova, E. (2020). A new network forensic framework based on deep learning for Internet of Things networks: A particle deep framework. Future Generation Computer Systems, 110. doi:10.1016/j.future.2020.03.042.
7. Karie, N., Kebande, V., & Venter, H. (2019). Diverging deep learning cognitive computing techniques into cyber forensics. Forensic Science International: Synergy, 1. doi:10.1016/j.fsisyn.2019.03.006.
8. Quick, D., & Choo, K.-K. R. (2016). Digital forensic intelligence: Data subsets and Open Source Intelligence (DFINT+OSINT): A timely and cohesive mix. Future Generation Computer Systems, 78. doi:10.1016/j.future.2016.12.032.
9. Prabakaran, D., & Shyamala, R. (2019). A Review on Performance of Voice Feature Extraction Techniques. 221-231. doi:10.1109/ICCCT2.2019.8824988.
10. Pandi Jain, G., Shah, S., & Wandra, K. H. (2020). Exploration of Vulnerabilities, Threats and Forensic Issues and Its Impact on the Distributed Environment of Cloud and Its Mitigation. Procedia Computer Science, 167, 163-173. doi:10.1016/j.procs.2020.03.194.
11. Gopalakrishnan, A., Bala, P. M., & Kumar, T. A. (2020, July). An Advanced Bio-Inspired Shortest Path Routing Algorithm for SDN Controller over VANET. In 2020 International Conference on System, Computation, Automation and Networking (ICSCAN) (pp. 15). IEEE.
12. Yaqoob, I., Hashem, I., Ahmed, A., Kazmi, S. M., & Hong, C. S. (2018). Internet of things forensics: Recent advances, taxonomy, requirements, and open challenges. Future Generation Computer Systems. doi:10.1016/j.future.2018.09.058.
13. Arslan, B., & Sagiroglu, S. (2019). Fingerprint Forensics in Crime Scene: A Computer Science Approach. 8, 88-113.
14. Ademu, I., & Imafidon, C. (2012). The Influence of Security Threats and Vulnerabilities on Digital Forensic Investigation. doi:10.13140/2.1.2236.8006.
15. Muñoz-González, L., & Lupu, E. (2019). The Security of Machine Learning Systems. doi:10.1007/978-3-319-98842-9_3.

16. Pavithra, M., Rajmohan, R., Kumar, T. A., & Ramya, R. (2021). Prediction and Classification of Breast Cancer Using Discriminative Learning Models and Techniques. Machine Vision Inspection Systems, Volume 2: Machine Learning-Based Approaches, 241-262.

*Corresponding author: [email protected]

S. Ramesh: ORCID: https://orcid.org/0000-0001-6029-9778
K. Prathibanandhi: ORCID: https://orcid.org/0000-0001-7975-0290
P. Hemalatha: ORCID: https://orcid.org/0000-0002-5221-521X
C. Yaashuwanth: ORCID: https://orcid.org/0000-0002-3295-9254
Adam Raja Basha: ORCID: https://orcid.org/0000-0002-2327284

9 Mathematical Models for Computer Vision in Cardiovascular Image Segmentation S. Usharani1*, K. Dhanalakshmi2, P. Manju Bala1, M. Pavithra1 and R. Rajmohan1 1Department of Computer Science and Engineering, IFET College of Engineering, Villupuram, Tamilnadu, India 2Department of Computer Science and Engineering, PSNA College of Engineering and Technology, Dindigul, Tamilnadu, India

Abstract Nowadays, research in Computer Science and Information Technology in the field of Computer Vision has drawn increasing interest owing to its beneficial applications. Computer vision, deep learning, and image processing are applied in numerous areas, including 2D and 3D image processing, face recognition, biometric authentication, and speech interpretation, and several other applications are emerging. Cardiovascular imaging will improve dramatically, powered by the revolution of deep learning (DL). This chapter presents the basic principles behind the efficacy of computer vision based image processing and deep learning algorithms, and surveys deep learning cardiac image segmentation across the basic imaging modalities, MRI, CT, and ultrasound, and heart structures such as the four chambers. Among these modalities, CT is used for the interactive cardiac imaging environment, and graph-oriented methods are used for image segmentation. Graph-oriented segmentation approaches based on graph partitioning can look identical at first sight but show specific behaviors on closer inspection, and it is the distinct existence of certain habits that may render one approach superior to another for a CT application. The effectiveness of each approach in the cardiovascular imaging field is illustrated for a specific segmentation application.

Keywords: Cardiovascular image segmentation, graph oriented approach, multimodal imaging, ringed graph cut, arbitrary rover method

9.1 Introduction 9.1.1 Computer Vision The integrative areas of deep learning, data processing, data acquisition and information retrieval, and image classification form core areas in information technology, systems engineering, and electrical engineering, and have drawn the attention of many experts. Previous and future studies address a broad variety of subjects and activities, from pure research to a large range of practical real-world implementations. Several conventional and modern approaches, frameworks, and strategies have been established to solve these problems, among which evolutionary optimization (EO) has played an increasingly significant role, alongside optimization computation, expert systems, and related concepts. The terms Evolutionary Computer Vision, Evolutionary Image Processing, and Evolutionary Pattern Classification are more and more widely recognized as identifiers of a specifically described experimental field and its group of methods and processes. EO has also been favored by the recent accessibility of hardware and software frameworks such as GPUs and cloud computing systems, whose design and technological model support EO algorithms incredibly well, reducing the relatively heavy computational burden imposed by such approaches and encouraging real-time applications. State-of-the-art accomplishments span the fields of adaptive machine vision, image analysis, and pattern classification, and similar frameworks inform the current findings, improvements, and open issues. Evolutionary techniques such as genetic algorithms (GAs) and genetic optimization (GO), swarm intelligence techniques such as ant colony optimization (ACO), fuzzy logic (FL), and other EO techniques such as micro-genetic optimization and adaptive learning are among the latest principles and techniques in the various EO paradigms. Computer vision and image coding techniques are used for edge identification, image optimization, image retrieval, image feature extraction, image and object identification and categorization, vehicle plate identification, facial identification and classification, image encoding, pattern-based image retrieval, handwriting identification, and visual tracking. Pattern identification activities include categorization, analysis, grouping, feature evaluation, dimensionality reduction, generalization, handling of imbalanced information, and the evaluation of clinical and biomedical information.

9.1.2 Present State of Computer Vision Technology Modern computer vision is enabled by deep learning algorithms that understand images using a particular form of neural network called a convolutional neural network (CNN). These networks are trained on hundreds of example images, which enables the algorithms to recognize and decompose whatever is embedded in an image. CNNs scan images pixel by pixel, recognizing and memorizing structures. By scanning features such as shapes and colors, the network learns the optimal output for each source image and how to categorize image elements. The system then exercises this experience as a guide when examining further images, and the AI framework keeps getting better at producing detailed outcomes. Computer vision mechanisms are used and verified in many areas, such as healthcare, manufacturing, and security applications. In healthcare, computer vision is used to analyze diseases from CT or MRI scans and other medical results. In manufacturing systems, it is used in product construction. In security applications, it is used to recognize faces, fingerprints, and retinas for biometric analysis. In autonomous vehicles, computer vision is used to detect people, obstacles, traffic signals, and signs. Motor-industry systems use computer vision extensively, and the technologies are shared between industries.
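For orientation, the following PyTorch sketch shows the kind of convolutional network described above: convolution layers learn local shape and color patterns and a linear head categorizes the image. The layer sizes, input resolution, and class count are illustrative assumptions.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN: convolutional feature extractor plus a linear classifier."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
logits = model(torch.rand(4, 3, 32, 32))    # batch of four 32x32 RGB images
print(logits.shape)                          # torch.Size([4, 10])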

9.1.3 The Future of Computer Vision With more research into and improvement of the technology, computer vision will come to handle a far wider variety of activities. In addition to being simpler to train, computer vision systems will be able to extract much more from images than they do today. Combined with other techniques or other subfields of AI, this can also create more powerful systems; image analysis software, for example, can be paired with natural language processing (NLP) to describe objects in the world for visually impaired individuals. Computer vision would also play a critical role in the advancement of artificial natural technology (ANT) and automated special technology (AST) by providing the ability to interpret visual data as well as, or even better than, the natural vision system. Despite the capacities of modern computer vision, many technological advantages and developments remain unexplored. The advancement of computer vision would open the way for AI systems that are as human as us. Before that, though, a few obstacles must be solved, the black-box nature of AI being the largest of them. In 20 years, computer vision will be a standardized element comprising remote analytics and software resources within the global computing system, similar to today's broadband network. Software analytics and information, including graphical, speech, written, computational, and sensor analysis, will be applied to all platforms by default within the Internet of Things (IoT). A few new Neural Computation (NC) frameworks, accessible to all types of data, will be standardized. With additional on-chip computing power for image processing and analysis, imaging tools will become more precise. Techniques for image processing will remain close to those used currently, with no dramatic advances anticipated. A few feature identifiers and feature-learning models may standardize the computer vision ecosystem, allowing a common NC framework for software creativity and business development. Computer vision and automation models will be far superior to today's simple deep learning models, integrating deep learning and multiple-linear-regression wide learning with enhanced feature descriptor methods and robust training parameters, enabled by omnipresent databases of labeled examples of any sort of image or data, such as sound, graphical, economic, and other data. Personal secrecy would practically vanish.

9.1.4 Deep Learning Before the growth of DL, conventional machine learning approaches, such as model-based strategies and atlas-based approaches, were used to attain efficiency in cardiac image segmentation. However, to achieve reasonable precision they often need substantial design engineering or prior expertise. Deep learning (DL)-based architectures, on the other hand, are effective at automatically discovering complicated features for object recognition and classification. Using a specific training technique and an end-to-end design, these features are learned directly from data, which enables DL-based techniques to extend easily to other image processing tasks. DL-based segmentation techniques have steadily outpaced previously state-of-the-art conventional approaches, acquiring prominence in research thanks to improved computer hardware as well as the availability of sufficient training data.

9.1.5 Image Segmentation

Image segmentation is the method of separating an image into sections, known as regions, that are homogeneous with respect to some specified property. Such regions are collections of pixels, and the characteristics frequently selected as criteria for region uniformity are grey level, color, and shape. The image condensed by segmentation is much simpler to view and analyze, so segmentation typically follows the application of several image processing and classification methods. Manual segmentation of cardiac CT and MRI images is quite expensive, which is why efforts are being made to build automated image segmentation strategies using convolutional neural networks. A distinction is drawn between 2D segmentation techniques, in which individual image slices are analyzed separately, and 3D segmentation, in which whole volumes are processed simultaneously. For now, 2D methods make it possible to classify images more accurately, but 3D techniques require considerably less training data. Convolutional neural networks are used most widely for 3D models, while RNNs are used in both 2D and 3D techniques. At Hypothetical Research, researchers are running a program creating new methods of coronary artery classification through the study of blood flow in particular cardiac medical systems, exploiting both traditional algorithms and machine learning techniques. Several practical uses can be envisioned beyond the treatment of cardiac disorders. In cardiovascular disease, vessel walls are often laden with calcium, and the analysis of calcium levels in blood vessels from CT and ultrasound images is an important and realistic task [1]; machine learning techniques can assist with this problem. Visualizing the blood vessels is particularly essential, not only in cardiology but also in planning the radiation therapy phase, particularly in the case of breast cancer. The heart is a complicated internal structure, and automatic processing of ultrasound images could substantially improve therapy preparation processes so that they are minimally toxic to vital cells. New machine learning algorithms have therefore been created; to support the contouring method, they can help generate digital cardiovascular images.

9.1.6 Cardiovascular Diseases As per the WHO (World Health Organization), cardiovascular disorders are the chief cause of death in the world: an estimated 17.9 million people died from cardiovascular diseases in 2019, mostly from heart attacks and strokes, and the toll rises every year. Significant advancements in cardiovascular science and practice have been made in modern times, with the goal of improving the identification and handling of cardiac disorders and so reducing the number of deaths from cardiovascular diseases. Advanced health imaging strategies such as MRI, CT, and echocardiography are now commonly used, allowing multivariate evaluation of cardiovascular physiological functions and structures and providing support for diagnosis, disease surveillance, treatment preparation, and prognosis [2]. Of specific interest, cardiac image segmentation is a significant initial step in various applications. It divides the image into a number of regions from which measurable quantities are derived, such as myocardial density, wall width, the volumes of both ventricles (left and right), and the ejection fraction (EF). Usually, the anatomical structures of significance for cardiac image segmentation are the LV, RV, LA, RA, and the coronary arteries.

9.2 Cardiac Image Segmentation Using Deep Learning This section reviews modern deep learning approaches in the three most broadly used modalities for cardiac image segmentation. The acronyms used in the chapter are listed in Table 9.1, and the structure of the heart is shown in Figure 9.1. Figure 9.2 outlines the traditional segmentation tasks relevant to cardiac imaging for the three most broadly used modalities.

9.2.1 MR Image Segmentation Cardiovascular MRI is a clinical imaging approach that can image the structures inside and around the heart. Unlike CT, it requires no exposure to ionizing radiation; rather, it relies on a magnetic field, in combination with radio-frequency pulses, to excite hydrogen atoms in the heart, and then produces an image by evaluating their response. By using various imaging sequences, cardiovascular MRI enables detailed quantitative analysis of both cardiac anatomy and function, as well as pathological tissue such as scars [3]. Consequently, cardiovascular MRI is widely considered the gold standard for cardiovascular quantitative analysis. 9.2.1.1 Ventricle Segmentation 9.2.1.1.1 Fully Convolutional Network Based Segmentation Both ventricles (left and right) and the myocardium are segmented directly in short-axis cardiovascular magnetic resonance images. In terms of

both speed and accuracy, this end-to-end strategy based on Fully Convolutional Networks attained competitive segmentation performance, substantially surpassing conventional strategies. Later work increased the feature-learning ability for segmentation by optimizing the network architecture [4]. Among Fully Convolutional Network based strategies, most methods use 2-dimensional networks for segmentation rather than 3-dimensional networks. This is primarily owing to the low through-plane resolution and motion artefacts of most cardiovascular MRI scans, which restrict the usefulness of 3D networks.

Table 9.1 Acronyms used in the chapter.

CT: Computed Tomography
MRI: Magnetic Resonance Imaging
AO: Aorta
PA: Pulmonary Arteries
PV: Pulmonary Veins
RA: Right Atrium
RV: Right Ventricle
LA: Left Atrium
LV: Left Ventricle
MYO: Myocardium
LVESM: Left ventricular end-systolic mass
RVESM: Right ventricular end-systolic mass
LVEDM: Left ventricular end-diastolic mass
RVEDM: Right ventricular end-diastolic mass
CA: Coronary Artery
AV: Aortic Valve

Figure 9.1 Structure of the heart.

Figure 9.2 Cardiac image segmentation activities with various imaging types.

9.2.1.1.2 Analysis of Images in Both Memory and Time Efficiency The main limitation of 2-dimensional cardiovascular segmentation networks is that they operate slice by slice and therefore do not exploit inter-slice dependencies. As a consequence, 2-dimensional networks can fail to locate and segment the heart on difficult slices, such as apical and basal slices where the ventricular contours are not well defined. A number of works have tried to incorporate additional contextual information to guide 2-dimensional Fully Convolutional Networks and resolve this issue [5]. This prior knowledge may include position priors acquired from labels or from multi-view images. Others obtain spatial information from neighboring slices to support segmentation, using recurrent neural networks or interconnected networks. Such networks can also exploit information across different phases of the cardiac cycle to enhance the spatial and temporal consistency of the segmentation.

9.2.1.1.3 Applying Structural Constraints An additional issue that can limit the segmentation performance of both 2-dimensional and 3-dimensional Fully Convolutional Networks is that they are normally trained only with pixel-wise loss functions. These pixel-wise error terms may not be adequate for learning features that reflect the underlying anatomical structures [6]. Therefore, many methods concentrate on designing and applying structural constraints during training to enhance the reliability and generalizability of the predictions. These constraints are expressed as regularization terms that take into account information on geometry, contours and area, or texture, encouraging the network to produce segmentations that are more anatomically plausible. In contrast to regularizing networks at the training phase, an autoencoder variant has been suggested as a post-processing step to correct implausible segmentations.

9.2.1.1.4 Multi-Task Learning To regularize FCN-based cardiovascular ventricle segmentation during learning, multi-task training has also been investigated by adding auxiliary tasks relevant to the main segmentation task, such as motion estimation, cardiac-phase prediction, identification of ventricular size, and image reconstruction. Training a network for several tasks simultaneously encourages it to identify features that are useful across all the tasks, resulting in better learning efficiency and prediction accuracy.

9.2.1.1.5 Multi-Step Systems There is rising interest in multi-stage systems that split the segmentation problem into sub-tasks solved by separate neural networks [7]. The suggested technique further generalizes to images of different sizes

and backgrounds by explicitly locating the region of interest (ROI) and transforming the input into a canonical orientation.

9.2.1.1.6 Hybrid Segmentation Methods Another research direction aims to combine neural networks with traditional segmentation strategies, e.g., level-set frameworks, deformable structures, atlas-based strategies, and graph-based strategies. Here, neural networks are applied in the feature extraction and model initialization stages, decreasing the reliance on human interaction and enhancing the accuracy of the traditional segmentation approaches applied subsequently. For example, a CNN first locates the LV, and an autoencoder then estimates the LV's shape; the estimated shape is used to initialize follow-up non-rigid shape optimization. As a whole, the embedded non-rigid model converges faster and the segmentation reaches higher precision than with traditional initialization, and the same approach has been applied to segment the RV. While these hybrid models show better segmentation precision than traditional non-deep-learning designs, most of them still need iterative optimization of the shape model, and these techniques are mostly crafted for one particular anatomical structure [8].

9.2.1.2 Atrial Segmentation One of the most common cardiovascular diseases is sinus arrhythmia (SA), affecting millions of people worldwide. Atrial segmentation is therefore of primary clinical concern, enhancing atrial structure analysis in pre-operative assessment, SA ablation planning, and post-operative follow-up tests. In particular, atrium segmentation can serve as a basis for scar segmentation and the quantification of sinus arrhythmia from LGE images. In the past, conventional techniques such as region growing and methods that involve strong prior assumptions have been implemented for automatic left atrium segmentation. The efficiency of these procedures, however, relies heavily on careful configuration and ad hoc preprocessing, restricting rapid clinical adoption. 2-dimensional Fully Convolutional Networks are used to segment both atria (left and right) from standard 2-dimensional long-axis images. Notably, such networks can also be trained, without any changes to the architecture, to segment the ventricles from 2D short-axis slices. In order to segment the atrium accurately, 3-dimensional networks and multi-view Fully Convolutional Networks have also been investigated, obtaining 3-dimensional contextual information from 3-dimensional late gadolinium enhancement (LGE) images. A representative example is a fully automated two-stage segmentation system comprising a first 3-dimensional U-net that roughly localizes the atrial region from down-sampled images, followed by a second 3-dimensional U-net that precisely segments the atrium at full resolution within the cropped parts of the image. This multi-step strategy is time and memory efficient, achieving a mean Dice coefficient of 0.95 on a test dataset of 58 cases for the left atrium segmentation problem [9].

9.2.1.3 Cicatrix Segmentation Cicatrix (scar) segmentation is typically conducted with a contrast-enhanced MR imaging method called LGE MR scanning. LGE MR scanning allows cardiac scars and ventricular thrombosis to be detected, facilitating better management of cardiac arrhythmia and myocardial infarction [10]. Before the emergence of deep learning, scar segmentation was mostly conducted using intensity thresholding or clustering techniques that are sensitive to shifts in local intensity. The key drawback of these approaches is that, in order to minimize the computational complexity and computation time, they typically require manual delineation of the region of interest. As a consequence, these semi-automated approaches are not acceptable for large-scale trials or clinical deployment. For scar segmentation, deep learning techniques have been integrated with conventional segmentation techniques: an atlas-based strategy defines the left atrium, and deep neural networks (DNNs) are then applied within that region to identify necrotic tissue. Compared to such staged strategies, end-to-end DNNs segment the left atrium and the ventricular cicatrix directly and, in general, with better recognition precision, for instance via a multi-tier CNN with a relational attention framework that aggregates features from adjacent views.

9.2.1.4 Aorta Segmentation Segmentation of the thoracic aorta from cine MR images is important for precise structural and physiological assessment of the aortic valve. A prevalent problem for this task is the sparse annotation in aortic cine image series, where only a few frames are labeled. To resolve the issue, a non-rigid image registration approach is used to propagate the labels within the cardiac cycle from the annotated frames to the unlabeled neighboring frames, efficiently producing pseudo-annotated frames that can be used for additional training. This semi-supervised approach obtained a mean Dice metric of 0.96 for the ascending aorta and 0.95 for the descending aorta over a test set of 100 samples [11]. In particular, their method, based on Fully Convolutional Networks and Recurrent Neural Networks, can conduct the segmentation task on an entire image sequence without requiring explicit ROI extraction, in contrast to a prior method based on handcrafted feature extraction.
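The Dice metric quoted in these results measures the overlap between a predicted mask and the ground truth. A minimal NumPy sketch follows; the synthetic masks are illustrative only:

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64)); pred[20:40, 20:40] = 1   # predicted segmentation
gt   = np.zeros((64, 64)); gt[22:42, 22:42] = 1     # ground-truth segmentation
print(round(dice_coefficient(pred, gt), 3))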

9.2.2 CT Image Segmentation for Cardiac Disease Computed tomography is a widely used imaging test for diagnosis. Cardiovascular CT is primarily used to examine cardiac anatomy and especially the coronary arteries. There are mainly two imaging techniques for coronary angiography: contrast and non-contrast CT angiography. Non-contrast CT distinguishes tissues of various intensities, such as fatty tissue, muscle, and air. Contrast-enhanced coronary CT angiography, which is acquired after injection of a contrast agent, is the superior variant for visualizing the coronary tree [12]. In the following, we look at some deep learning based cardiac CT segmentation methods.

9.2.2.1 Segmentation of Cardiac Substructure Adequately delineating the cardiac substructures leads to improved observation of heart function. Generally, the segmented cardiovascular substructures include both ventricles (left and right), both atria (left and right), the myocardium, the aorta, and the pulmonary artery (PA).

9.2.2.1.1 2-Step Segmentation In the deep learning setting, one method uses a two-step image segmentation in which an ROI is retrieved from the image and fed into a CNN for further segmentation. Initially, a threshold value for the LV segmentation is identified, and then voxel classification using a patch-based convolutional neural network (CNN) is executed. U-net is a preferred option for slice detection in cardiac CT images. A comparison of techniques for whole heart segmentation is provided by the Multi-Modality Whole Heart Segmentation challenge (MM-WHS). A few of those techniques use a first ROI segmentation step [13], which helps the system concentrate on structurally important portions of the image and has been shown to perform well for WHS.

9.2.2.1.2 CNN Multiple Views Multi-view imaging is also heavily utilized in studies of the heart. Three independent orthogonal CNNs were able to learn how to segment different views; this approach is particularly effective because it combines the products of independent views.

9.2.2.1.3 Hybrid Loss Several techniques combine various loss functions to address the class imbalance across the ventricular cavity (a code sketch of one such combined loss appears at the end of Section 9.2.2). A related method automatically identifies patients suffering from significant heart disease: a multi-scale Fully Convolutional Network first performs LV segmentation, a convolutional autoencoder (CAE) then characterizes the myocardium within the ventricle, and finally an SVM classifies patients based on the features analyzed by the autoencoder.

9.2.2.2 Angiography Angiography is a vital step in the diagnosis of coronary heart disease, stenosis grading, computation of blood circulation, and the planning of medical procedures. It has been studied for years, yet there have been only a few experiments on how deep learning can be used in this setting [14]. Methods for CA segmentation can be divided into centerline extraction and segmentation of the lumen.

9.2.2.2.1 CNN Post- and (or) Pre-Processing Phases One important problem for cardiovascular centerline extraction is the presence of adjacent cardiovascular structures and veins, as well as motion artefacts in coronary CT. Several deep learning strategies use a CNN as a post-processing step for a conventional approach. One centerline extraction model finds the maximal flow path through the vessel, with a learned classifier predicting anisotropic orientation vectors to guide the flow computation; meanwhile, a CNN classifier has been used to separate true coronary vessels from leaks into non-coronary regions. A multi-task Fully Convolutional Network centerline extraction scheme can produce a single-pixel-wide centerline from images: the Fully Convolutional Network simultaneously predicts centerline distance maps and endpoint confidence maps from coronary artery and ascending aorta binary masks, and these are used as guidance for a subsequent minimal-path extraction step to achieve the final centerline extraction results [15]. In contrast to CNNs used as pre- or post-processing steps, centerline extraction through a 3-dimensional dilated CNN has been proposed, where the CNN is trained on 3-dimensional patches to estimate

the probability distribution over a discrete set of direction options and to evaluate the radius of the vessel at a given point.

9.2.2.2.2 End-to-End CNNs Most approaches train end-to-end convolutional networks for lumen segmentation. In one segmentation approach, a single CNN is trained to handle coronary arteries and brain vasculature in MR images jointly; this experiment showed that multi-purpose networks can perform as well as, or better than, specialized ones. Deep supervision in a 3-dimensional U-net design enables effective feature detection and exact voxel-level predictions. Shape priors can also be introduced into the method: to implicitly enforce a roughly tube-shaped prior for the vessel segments, a regularization term can be introduced to yield reliable segmentation of the source images and to control the topological constraints. Quite recently, graph convolutional networks have been evaluated for CA segmentation in coronary CT angiography, where vertices on the lumen surface are treated as graph nodes and their positions are optimized. Extending the model with LSTM and fully connected layers was found to be considerably better in the healthy group, and the graph convolutional network used in the work can generate smooth surfaces without post-processing.

9.2.2.3 CA Plaque and Calcium Segmentation A major risk factor for heart disease is coronary artery calcium (CAC). The CAC score is quantified using lesion area and lesion density, so precise detection and segmentation of CAC is important for diagnosis and prediction [16].

9.2.2.3.1 Two-Step Segmentation A two-step segmentation scheme has been suggested in deep learning approaches to automatic calcium scoring, and a number of such approaches support this design for CAC classification. In a similar two-stage method, a CNN identifies the likelihood of CAC within a small ROI in the heart.

9.2.2.3.2 Direct Segmentation Combining U-net with an edge-connectivity metric for quantification of CAC has obtained good results. All these approaches handle CAC in the same workflow, identifying it and then quantifying its impact. Another way to solve this problem is to skip the intermediate segmentation step and measure the quantity of interest directly. For non-calcified plaque (NCP) and mixed calcified plaque (MCP) in the coronary arteries, only a restricted number of studies of DL approaches for quantification and segmentation have been described. This is important, because such plaques can rupture and lead to an ischemic event and a major cardiac event. Moreover, the similar appearance and intensity of NCP and MCP relative to adjacent tissue makes their segmentation more challenging, so methods must be high quality, accurate, and fast. Recently, a vessel-focused 3-dimensional convolutional neural network (CNN) with an attention layer was used to segment three types of plaque on extracted and reformatted coronary CT images, and a convolutional RNN was used to detect and grade the significance of coronary plaque in the MPR images.
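As noted in the hybrid loss discussion above, a common instantiation of a combined segmentation loss adds a soft Dice term to cross-entropy to counter class imbalance. The following PyTorch sketch is one such combination; the weighting alpha and tensor shapes are assumptions for illustration, not necessarily the exact loss used in the cited works.

import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, alpha=0.5, eps=1e-6):
    """Weighted sum of cross-entropy and soft Dice loss.

    logits: (N, C, H, W) raw network outputs; target: (N, H, W) integer labels.
    """
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    dice = (2 * intersection + eps) / (probs.sum(dims) + one_hot.sum(dims) + eps)
    return alpha * ce + (1 - alpha) * (1 - dice.mean())

logits = torch.randn(2, 4, 32, 32, requires_grad=True)   # 4-class toy output
target = torch.randint(0, 4, (2, 32, 32))                # toy label map
loss = hybrid_loss(logits, target)
loss.backward()

The Dice term rewards overlap per class regardless of class frequency, which is what compensates for the small foreground (e.g., a thin ventricular wall) that plain cross-entropy tends to under-weight.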

9.2.3 Ultrasound Cardiac Image Segmentation Ultrasound is an indispensable technique for the analysis of cardiovascular diseases; physicians utilize these scans for pre-operative and post-operative planning. Many earlier techniques, including active shape models, active contours, and level sets, have been employed for anatomical segmentation in ultrasound imaging. Ultrasound poses particular problems, such as a low signal-to-noise ratio, variable edge dropout, speckle noise, low image contrast, and shadowing by structures such as dense tissue and bone. As in cardiovascular CT and MR, many deep learning–based methodologies have been proposed in recent times to improve cardiovascular ultrasound segmentation in terms of both accuracy and speed. The majority of these deep learning–based methodologies place the emphasis on LV segmentation; only limited studies address LA segmentation.

9.2.3.1 2-Dimensional Left Ventricle Segmentation 9.2.3.1.1 DL + Deformable Models The low quality of echocardiography makes voxel-wise tissue classification a huge challenge. Attempts have been made to address this challenge by merging deep learning with deformable models in 2D, with the algorithm integrating learned neural network features instead of handcrafted features to strengthen the results. In this design, deep learning drives the localization and initialization of the target object, so the first stage of segmentation does not have to cover the entire image, which decreases the search area of the second-stage segmentation. This method was utilized to segment the LV in apical long-axis echocardiograms: a DBN predicts deformable model parameters for segmentation and rigid transformation parameters for localization, demonstrating the robustness of data-driven models to appearance variations.

9.2.3.1.2 Incorporating Temporal Consistency Cardiac ultrasound is frequently recorded as a sequence of images, and many methodologies use these temporal elements to enhance the precision and reliability of LV segmentation. Researchers have used a sequential Monte Carlo (SMC) based system with a transition model conditioned on the cardiac phases to analyze the data; the outcomes demonstrate that this method outperforms variants that do not take temporal information into account. By combining U-net, LSTM memory, and interpolation between consecutive frames, segmentation accuracy was further improved; this method was more robust to image-quality changes across a sequence than a single-frame U-net.

9.2.3.1.3 Incorporating Unlabeled Data Numerous algorithms have been developed to generate labels for unlabeled images, growing the quantity of training data. In this setting, segmentation and classification networks are first trained using a small group of labeled data and then applied to the full unlabeled data to obtain segmentation and classification scores. The new annotations are reviewed by external classifiers before they are used for retraining the network. One approach applied a Kalman filter to a sequence of images to fine-tune a CNN for a specific domain, training the neural network with manual segmentations as well as the CNN's own labels. With a semi-supervised framework, it is also possible to train on both the labeled and unlabeled pictures; such a framework can manage basic image processing, including re-segmentation, and a generative network can synthesize ultrasound images for the segmentation network to use.

9.2.3.1.4 Incorporating Data from Different Domains Annotated, labeled data from different domains can also help enhance segmentation in a given domain. An innovative feature analysis network has been developed for learning standardized image features from big data; integrated with this influence, the technique has shown greater efficiency than a single-database approach.

9.2.3.2 3-Dimensional Left Ventricle Segmentation Segmenting a 3D ultrasound volume of a subject is even more difficult

than a two-dimensional ultrasound. Although 3D echocardiograms provide more accurate volume indices, their resolution is lower than that of traditional 2D echocardiograms, and the higher-dimensional measurement space poses an increased challenge for DL approaches. One way to minimize the overall expense of deep learning models is to avoid computing on the 3-dimensional data directly: first, coarse segmentation maps are generated on 2-dimensional slices taken from the 3-dimensional volume; these 2D results are then used as input for a 3-dimensional CNN. The coarse 2D segmentation map is also often used to initialize a three-dimensional model that is refined by a deformable model. To address the serious issue of minimal training data, transfer learning has been used: a broad natural-image segmentation model was pre-trained and then fine-tuned for left ventricle segmentation. Another way to boost the segmentation accuracy of deep learning on 3D images is to utilize anatomical structure: an anatomically constrained model places a shape constraint on the training of a deep convolutional network, with the shapes based on features learned by an autoencoder. The texture of the object is described more clearly by joining the neural network with a standard atlas-based segmentation method. Adversarial training has also been applied to improve the performance of the segmentation networks compared to traditional voxel-wise training.

9.2.3.3 Segmentation of Left Atrium Left atrium segmentation is limited by domain shifts triggered by alterations in imaging equipment, acquisition protocol, and patient condition; work in this area aims to enhance model effectiveness on unseen domains.

9.2.3.4 Multi-Chamber Segmentation In addition to LV segmentation, advanced deep learning techniques are appropriate for multi-chamber segmentation. CNNs are useful for image categorization, segmentation, and the prediction of heart disease. One detailed review on a non-public medical dataset demonstrates that clinical indicators derived from automatic image segmentation are superior to those dependent on manual delineation. To simulate real clinical situations, a dataset of clinical cardiovascular ultrasound has been published for teaching and research purposes and serves as a benchmark for evaluating deep learning methods. Due to differences in tissue thickness, LV and LA segments could not always be accurately separated in the dataset. Advanced encoder-decoder CNNs have shown lower variability than inter-observer variability.

9.2.3.5 Aortic Valve Segmentation
Active Shape Models, Deep Neural Networks and MSL have been applied to segment the aortic valve in 3-dimensional cardiovascular ultrasound volumes. An adaptive, sparsely connected neural network with a small number of variables is used to discover bounding-box locations that identify the target structure. This method achieved good run-time while also exhibiting notable improvements over the preceding MSL method. So far, this chapter has addressed significant advances of DL-based image segmentation in the three modalities (i.e., magnetic resonance, ultrasound, and computed tomography) frequently used for cardiovascular conditions. The proposed segmentation technique is discussed next.

9.3 Proposed Method
One such class of approaches is graph partitioning algorithms. In particular, three segmentation strategies based on graph partitioning can seem identical at first sight but display distinct behaviors on closer examination, and the existence of these behaviors may render one superior to another for particular problems. The graph cuts algorithm has been used effectively in several applications, but it is fundamentally a two-label algorithm: in general, identifying the minimum cut separating several terminals is NP-hard, although approximation algorithms exist that come within a bounded factor of the ideal solution. The algorithm discovers the smallest cut between two classes of saplings. In cases of weak object borders or small sapling classes, there is a propensity to return the cut that tightly surrounds the saplings. The arbitrary rover algorithm has a similar user interaction, but it does not suffer from the issue of "small cut" and extends naturally to an arbitrary number of labels. Currently, the computational cost of this approach is greater than that of a binary graph cut on an image graph of comparable size. The algorithm also has a strong connection to the graph cuts algorithm: in producing a segmentation, both the arbitrary rover algorithm and graph cuts rely on saplings selected for each output label. Where a forefront/context segmentation is needed, a user is often more interested in marking just a few pixels in the forefront area than in marking pixels in both forefront and context. Likewise, if one wishes to automate these kinds of algorithms by selecting saplings automatically, it is easier to specify forefront saplings alone than both forefront and context saplings.

The Static strength algorithm naturally extends the arbitrary rover algorithm to the circumstance in which only forefront labels are provided. For the given forefront-labelled pixels, the Static strength solution can be obtained by starting an arbitrary rover at each unmarked pixel and measuring the expected number of steps before the rover reaches a marked sapling. These quantities could in principle be estimated by simulating arbitrary rovers, but they can be computed analytically. The expected number of steps can then be transformed into a forefront/context segmentation by identifying the threshold that generates the minimum Static strength ratio, from which this procedure was originally derived. Not unexpectedly, the arbitrary rover and the Static strength algorithm have an analytic relationship.
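As an illustration of this description, the following minimal Python sketch (our own, purely illustrative code, not the chapter's implementation) estimates by simulation the expected number of steps an unbiased arbitrary rover takes from each pixel of a small 4-connected lattice to reach a forefront sapling; the Static strength algorithm obtains the same quantity analytically, as derived in Section 9.5.4.

import numpy as np

rng = np.random.default_rng(0)

def expected_steps_to_sapling(height, width, saplings, n_walks=200, max_steps=100_000):
    """Monte Carlo estimate of the expected number of steps an unbiased
    arbitrary rover takes from each pixel to reach a marked sapling."""
    moves = ((-1, 0), (1, 0), (0, -1), (0, 1))
    steps = np.zeros((height, width))
    for r in range(height):
        for c in range(width):
            if (r, c) in saplings:
                continue                      # marked pixels need zero steps
            total = 0
            for _ in range(n_walks):
                y, x = r, c
                for t in range(1, max_steps + 1):
                    dy, dx = moves[rng.integers(4)]
                    # clamp at the image border so the rover stays on the lattice
                    y = min(max(y + dy, 0), height - 1)
                    x = min(max(x + dx, 0), width - 1)
                    if (y, x) in saplings:
                        total += t
                        break
            steps[r, c] = total / n_walks
    return steps

# Example: a single forefront sapling at the centre of a 9 x 9 image.
print(expected_steps_to_sapling(9, 9, {(4, 4)}).round(1))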

9.4 Algorithm Behaviors and Characteristics
Table 9.2 summarizes the dissimilarities among these three algorithms. Compared with their similarities, these variations may appear slight. From a practical point of view, however, one must ask whether the algorithms produce equivalent outcomes, or whether one should expect basically analogous behavior. Consider an image of concentric rings: it is not evident that applying each methodology to it would generate the same segmentation. The innermost ring is selected by graph cuts, since it considers that cut to have the lowest cost: the cost of cutting along any black/white transition of a ring is the same per edge, so the ring with the smallest circumference is chosen. This behavior is important, as it discovers the smallest unit entity surrounding a forefront sapling. The drawback is that several more saplings are required to identify a larger target; in addition, this behavior is the root of the issue of small cut, which can lead to recovery of the minimal frontier tightly encircling the saplings. An arbitrary rover that begins from a given node is much more likely to reach the sapling that requires crossing fewer rings; hence, the arbitrary rover algorithm will pick the middle ring for an arbitrary number of concentric rings. Note, however, that the probabilities form a "white frosting" profile between the inner and outer rings that may be used to locate any of the intermediate rings, as shown in Figure 9.3. This behavior of the arbitrary rover algorithm is helpful because it prevents the issue of small cut and finds more "equitable" boundaries between the saplings. Even so, in order to attain the required segmentation, this behavior may require the user to position context saplings close to the forefront target. In addition, if an even number of concentric rings is present, the arbitrary rover algorithm will return a boundary that splits the two middle rings rather than snapping the segmentation to the closest ring, reflecting its equidistance between the saplings.

Table 9.2 Comparison of algorithms.

Algorithms        Task           Actual        Conditions
Graph cuts        F(a) = aᵀMa    ai ∈ {0, 1}   Saplings fixed to {0, 1}
Arbitrary rover   F(a) = aᵀMa    0 ≤ ai ≤ 1    Saplings fixed to {0, 1}
Static strength   F(a) = aᵀMa    0 ≤ ai        Saplings fixed to {0}

Figure 9.3 Three concentric rings, with a forefront sapling inside the inner ring and a context sapling outside the outer ring. Graph cuts will always pick the smallest ring, the arbitrary rover will always choose the middle, and Static strength will choose the largest, irrespective of the number of concentric rings. The Static strength algorithm will also generate a "white frosting" distribution in which each level corresponds to a ring.

In the Static strength algorithm, no context sapling is used. Given its solution, the Static strength algorithm searches for a threshold that generates the cut minimizing the Static strength ratio. Intuitively, this ratio can be thought of as the proportion between the boundary length and the enclosed volume: for a ring of radius r, the ratio behaves like 2πr/πr² = 2/r, which is smaller for a greater radius. The largest ring is therefore recovered from the "white frosting" distribution generated by the solution, as the Static strength algorithm selects the threshold that minimizes the Static strength ratio.

Figure 9.4 Weak-edge behavior of the three algorithms. (a) Grayscale image with saplings defined by the user. (b) "Small cut" behavior of graph cuts. (c) Additional saplings resolve the small-cut problem but create a "squared-off" segmentation. (d) Arbitrary rover. (e) The Static strength algorithm, even without context saplings, identifies the appropriate cut. (f) The solution thresholded to produce the hard segmentation attaining the minimal Static strength ratio.

Figure 9.4 demonstrates another aspect of the algorithms' nature. Here, a black line has been drawn on a white background. Since there is no intensity cue across the gap and the statistics of both regions are similar, all three algorithms must demonstrate the capacity to find the weak border. There are, however, two problems with graph cuts. First, once small sapling classes are placed, the small-cut problem immediately appears. Second, even when the sapling classes are made sufficiently large, the algorithm discovers a "squared-off" cut that is unattractive. The reason for such a cut is that a four-connected lattice is used; consequently, the squared-off cut has the same cut cost as the diagonal cut, so the algorithm can return cuts of this kind. Using a lattice with higher connectivity alleviates this issue at the expense of higher memory usage. The arbitrary rover and Static strength algorithms, in contrast, neither produce a "squared-off" solution nor suffer from the small-cut problem, even on a four-connected lattice.

9.5 Computed Tomography Cardiovascular Data
Over the past 10 years, computed tomography imaging has undergone a resolution transformation. Modern multi-slice scanners now offer between 256 and 320 detector rows with gantry rotation times of less than half a second. These developments brought improvements not only in spatial resolution but also in temporal resolution. Injected contrast, which opacifies the cardiovascular system, is used for high-resolution imaging through computed tomography angiography (CTA). For examining 3D cardiovascular morphology, computed tomography angiography is now the modality of choice. Because of the tremendous amount of data generated by such scanners, automated and semi-automated post-processing approaches are no longer a luxury; they are invaluable resources for the radiologist.

9.5.1 Graph Cuts to Segment Specific Heart Chambers
A mixture of electrophysiological and anatomical criteria today directs catheter-based electrophysiology treatments, such as pulmonary vein isolation for treating atrial fibrillation. Consequently, making 3-dimensional models of the cardiac chamber subject to RF ablation available for pre-procedural planning, intra-procedural guidance, and follow-up is useful for the electrophysiologist. Techniques for heart-chamber segmentation of computed tomography angiography images are required for this. The requirements for the segmentation tools are:
1. Accuracy: the segmentation must be as close as practicable to the ground truth perceived by the user.
2. Ease of use: the tool should require only limited input from the user.
3. Efficiency: the algorithm should be fast and memory-conscious.
Since robustness and consistency remain unresolved issues for fully automated segmentation, interactive segmentation is often more desirable. Interactive techniques benefit from the user's knowledge of the anatomy and enhance the overall reliability of the process.

In contrast, accurate chamber segmentation from limited user input is a difficult problem, mainly because of the weak boundaries between chambers. For instance, owing to direct blood flow through the connecting valve, the left atrium and the left ventricle have similar intensity. Image noise, as well as differing imaging protocols across locations, also challenges the robustness of the segmentation algorithm.

9.5.2 Ringed Graph Cuts with Multi-Resolution
The formulation assumes that certain voxels have been designated, on the basis of prior anatomical knowledge, as subject or context saplings. It computes a globally optimal binary segmentation that completely separates the subject saplings from the context saplings. The key challenge of the graph cuts algorithm, the price of its power to find a global solution, is its large computational expense and memory requirement. A traditional CT scan creates a 3D volume of hundreds of slices; for instance, each 512 x 512 slice has 262,144 pixels, and a graph that keeps a node per voxel quickly exhausts memory. Even with the max-flow method, an image segmentation on a volume of this size takes many seconds. The technique used here to make the graph cuts algorithm practical is a ringed multi-resolution scheme. With basic knowledge of the approximate scale of the chamber, once the user selects a subject sapling within the chamber, the context saplings can be identified automatically; this makes one-click segmentation possible. What allows this to happen is the compact form of the heart cavity and the good contrast within the chamber, thanks to the contrast-agent injection. The method first obtains a hard segmentation on a reduced-resolution graph through graph cuts; the low-resolution approximation is then used to direct a high-resolution ringed cut, as outlined in the sketch after the following steps:
1. On the low-resolution volume, grow a sapling region from the subject sapling. Growing stops upon reaching a predetermined maximum distance that relies on prior knowledge of the typical size of the chamber. The outcome may include isolated leakage into the ventricles, the coronary veins, and the ribs.
2. Dilate a layer from the edge of the grown region; the outermost layer of the dilation is labelled as context saplings.
3. Apply graph cuts at low resolution to obtain a quick segmentation of the chamber. This can be solved easily, since the low-resolution graph has far fewer nodes.
4. Dilate a sheet from the border of the hard segmentation to form a band whose internal boundary is labelled as subject saplings and whose external boundary is labelled as context saplings.
5. Apply graph cuts at high resolution within the band to obtain the final segmentation. This is also quick to solve, since the graph covers only a narrow band comprising just a few voxel layers.
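The sketch below, referenced in the preceding list, outlines these five steps in Python. It is schematic and written under stated assumptions: graph_cut(image, subject_mask, context_mask) stands in for a binary min-cut solver supplied by the caller (e.g., from a max-flow library), and the region-growing tolerance is illustrative, not the chapter's actual setting.

import numpy as np
from scipy import ndimage

def ringed_multires_segmentation(volume, subject_seed, max_radius, graph_cut, factor=4):
    """Sketch of the ringed multi-resolution graph-cuts pipeline.
    graph_cut(image, subject_mask, context_mask) -> bool mask is assumed
    to be a binary min-cut solver supplied by the caller."""
    struct = ndimage.generate_binary_structure(3, 1)       # 6-connectivity

    # Coarse volume by simple subsampling.
    low = volume[::factor, ::factor, ::factor]
    seed_low = tuple(i // factor for i in subject_seed)

    # Step 1: region growing from the subject sapling, limited to max_radius.
    reach = np.zeros(low.shape, bool)
    reach[seed_low] = True
    reach = ndimage.binary_dilation(reach, struct,
                                    iterations=max(1, max_radius // factor))
    tol = 0.2 * (low.max() - low.min())                    # illustrative tolerance
    similar = np.abs(low - low[seed_low]) < tol
    seed_mask = np.zeros(low.shape, bool)
    seed_mask[seed_low] = True
    grown = ndimage.binary_propagation(seed_mask, structure=struct,
                                       mask=similar & reach)

    # Step 2: dilate a layer; its outermost shell becomes the context saplings.
    context_low = (ndimage.binary_dilation(grown, struct, iterations=3)
                   & ~ndimage.binary_dilation(grown, struct, iterations=2))

    # Step 3: quick low-resolution graph cut of the chamber.
    coarse = graph_cut(low, grown, context_low)

    # Step 4: band around the upsampled coarse boundary; inner shell gives the
    # subject saplings, the outer complement gives the context saplings.
    hi = coarse.repeat(factor, 0).repeat(factor, 1).repeat(factor, 2)
    hi = hi[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    inner = ndimage.binary_erosion(hi, struct, iterations=2)
    outer = ~ndimage.binary_dilation(hi, struct, iterations=2)

    # Step 5: final high-resolution cut, effectively confined to the thin band.
    return graph_cut(volume, inner, outer)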

9.5.3 Simultaneous Chamber Segmentation Using Arbitrary Rover
While graph cuts is well equipped for extracting a single object from its background, it has no logical outgrowth to simultaneous multi-label segmentation. Conversely, the arbitrary rover algorithm segments several regions in a single computation stage. Assume the user has labelled pixels with L labels. For each unmarked pixel, we ask: given an arbitrary rover beginning at this site, what is the likelihood that it first reaches each of the L sapling points? It will be demonstrated that this computation can be performed exactly, without the need for an arbitrary rover simulation. Performing it assigns to each pixel an L-tuple that gives the likelihood that an arbitrary rover, beginning at that unmarked pixel, first reaches each of the L sapling points. An initial segmentation can then be obtained from such L-tuples by choosing, for each pixel, the most probable sapling destination of an arbitrary rover. The method considers an image as a strictly discrete entity: a graph with a specified set of vertices and edges. A real-valued weight is attached to each edge, corresponding to the likelihood that an arbitrary rover crosses that edge. It has also been established that the likelihood of an arbitrary rover first touching a given sapling point is exactly the solution to the boundary-value problem with the potential fixed to unity at the sapling point in question and to zero at the others; in the analogous DC circuit, this is a steady-state potential. From this it can be seen that the likelihoods at each node add up to unity. Consequently, given L labels, only L − 1 systems have to be solved, while the remaining one is determined through the unity constraint.

9.5.3.1 The Arbitrary Rover Algorithm
We begin by defining precise terminology for a graph. A graph G = (V, E) consists of nodes v ∈ V and edges e ∈ E ⊆ V × V. An edge e spanning two nodes vi and vj is denoted by eij. A weighted graph assigns a value, called a weight, to each edge; the weight of edge eij is denoted by w(eij) or wij. The degree of a node vi is the sum of the weights of the edges eij incident on vi. Given a set of non-negative weights, the likelihood that an arbitrary rover at node vi moves to node vj is wij divided by the degree of vi. The following also presumes that the graph is connected. To bias the arbitrary rover according to the image content, a function that translates a change in image intensity into a weight must be specified. Although this is a common component of graph-based image processing algorithms, multiple weighting methods are widely used in the research. Here, a function chosen to balance the resulting weights is used: the Gaussian weighting function

wij = exp(−α(gwi − gwj)²), (9.1)

where gwi denotes the image intensity at pixel vi. The value of α is the only free parameter of the algorithm. In practice, the weights are computed as

wij = exp(−α((gwi − gwj)/δ)²) + ε, (9.2)

where ε is a tiny constant, ε = 10⁻⁶, and δ is a normalizing constant, δ = max(gwi − gwj), ∀i, j. The goal of (9.2) is to make the same α meaningful for images of various quantizations and contrasts, and to ensure that none of the weights vanishes. The discrete formulation, and suitable methods for its solution, have been discussed extensively in the research.
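As a sketch of (9.1) and (9.2) in code (our own illustrative implementation; the placement of the normalizing constant δ follows the reconstruction of (9.2) above), the weights of a 4-connected image lattice can be computed as follows.

import numpy as np

def lattice_edges_and_weights(img, alpha=90.0, eps=1e-6):
    """Build the edge list of a 4-connected lattice over `img` and
    the Gaussian edge weights of (9.1)-(9.2).
    alpha: the single free parameter of the algorithm.
    eps:   tiny constant ensuring that no weight vanishes."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    # Horizontal and vertical neighbour pairs of the 4-connected lattice.
    edges = np.concatenate([
        np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1),
        np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1),
    ])
    g = img.astype(float).ravel()
    diff = g[edges[:, 0]] - g[edges[:, 1]]
    delta = max(float(np.abs(diff).max()), 1e-12)   # normalizing constant of (9.2)
    weights = np.exp(-alpha * (diff / delta) ** 2) + eps
    return edges, weights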

The discrete Laplacian matrix is defined as

Mvivj = dvi if i = j; −wij if vi and vj are adjacent nodes; 0 otherwise, (9.3)

where Mvivj indicates the entry of the matrix M indexed by the nodes vi and vj. Split the nodes into two sets, VN (marked) and VU (unmarked), such that VN ∪ VU = V and VN ∩ VU = ∅; note that, irrespective of their label, VN includes every marked point. To represent the subgroups, rearrange the matrix M as

M = [ MN  B ; Bᵀ  MU ]. (9.4)

Denote the likelihood assumed at each vertex vi, for each label l, by ali. Define a labelling function for the marked pixels as Q(vj) = l, ∀vj ∈ VN, where l is an integer, 0 < l ≤ L. For each label l, define the |VN| × 1 labelled vector at vertex vj ∈ VN as

clj = 1 if Q(vj) = l; 0 if Q(vj) ≠ l. (9.5)

The solution to the combinatorial problem will, as shown, be sought by solving

MU al = −Bᵀcl, (9.6)

which is a sparse, symmetric, positive-definite system of linear equations with |VU| unknowns and a number of nonzero entries bounded by 2|E| + |V|. Since MU is presumed to be nonsingular for a connected graph, the solution al is guaranteed to exist and to be unique. Consequently, the potentials for all labels can be found by solving

MU A = −BᵀC, (9.7)

where A has columns taken from each al and C has columns given by each cl. As stated earlier, given L labels, only L − 1 systems need to be solved, because the likelihoods at each vertex must sum to unity.
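The system of (9.6)-(9.7) is readily solved with standard sparse linear algebra. The sketch below (our own code, reusing the edge list and weights produced by the sketch after (9.2)) assembles the Laplacian of (9.3), partitions it as in (9.4), and returns the first-arrival probabilities.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def arbitrary_rover_probabilities(n_nodes, edges, weights, saplings, labels):
    """Solve M_U A = -B^T C of (9.6)-(9.7).
    saplings: indices of the marked nodes V_N.
    labels:   integer label in 0..L-1 for each marked node.
    Returns a (|V_U|, L) array of first-arrival probabilities."""
    i, j = edges[:, 0], edges[:, 1]
    W = coo_matrix((np.r_[weights, weights], (np.r_[i, j], np.r_[j, i])),
                   shape=(n_nodes, n_nodes)).tocsr()
    d = np.asarray(W.sum(axis=1)).ravel()
    diag = coo_matrix((d, (np.arange(n_nodes), np.arange(n_nodes))),
                      shape=(n_nodes, n_nodes)).tocsr()
    M = diag - W                                   # Laplacian of (9.3)

    unmarked = np.setdiff1d(np.arange(n_nodes), saplings)
    M_U = M[unmarked][:, unmarked].tocsc()
    Bt = M[unmarked][:, saplings]                  # the block B^T of (9.4)

    L = int(labels.max()) + 1
    C = np.zeros((len(saplings), L))
    C[np.arange(len(saplings)), labels] = 1.0      # indicator vectors of (9.5)
    # Solve L-1 systems; the last column follows from the unity constraint.
    A = np.column_stack([spsolve(M_U, -Bt @ C[:, l]) for l in range(L - 1)])
    return np.column_stack([A, 1.0 - A.sum(axis=1)])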

9.5.4 Static Strength Algorithm
When only forefront saplings are defined by the user, neither graph cuts nor the arbitrary rover algorithm can be applied. The Static strength algorithm can indeed be considered a natural extension of the arbitrary rover algorithm to this single sapling class. The segmentation is based on the expected number of steps that an arbitrary rover, beginning at each pixel, would take to reach the user's saplings. In its initial formulation, however, the Static strength algorithm was derived from the segmentation objective of minimizing the Static strength ratio,

h(a) = (aᵀMa)/(aᵀd), subject to aᵀd ≤ (1/2)1sᵀd, (9.8)

where 1s is the vector of all ones and d is the vector of node degrees. The indicator variable a is described as

ai = 1 if vi ∈ S; 0 otherwise, (9.9)

in which S denotes the set of forefront vertices. Unfortunately, the combinatorial minimization of this ratio is NP-hard. As a result, to obtain a practical algorithm, the variable is relaxed to take real values and the "volume" is fixed to a constant, i.e., aᵀd = k. Using a Lagrange multiplier to implement the constrained minimization gives the energy

Q(a) = aᵀMa − Λ(aᵀd − k), (9.10)

and the corresponding minimum satisfies 2Ma = Λd. Since only the relative values of the solution matter here, the scalar multiplier is disregarded, holding

Ma = d, (9.11)

which in fact reflects the expected number of steps taken by an arbitrary rover to reach a sapling. While the Laplacian matrix in (9.11) is singular, the inclusion of the saplings defined by the user, i.e., setting ai = 0, ∀vi ∈ VN, eliminates the singularity. The solution ai at node vi obtained from (9.11) gives the expected number of steps for an arbitrary rover to reach a sapling node. Consequently, if (9.11) is solved with two saplings, v1 and vn, the reduced system takes the form given in (9.12), where bU is the vector of degrees of the unmarked nodes and 1s is again the vector of all ones. The reason (9.12) holds is that premultiplying both sides by 1sᵀ yields zero on the left side, so the right side must also balance, giving (9.13). Because the same Laplacian also governs the computation of the solution to the arbitrary rover problem, aCW, we obtain (9.14), where γ denotes the effective conductance between the nodes v1 and vn; the arbitrary rover probabilities are thus provided by the solutions of the two Static strength systems of (9.11). Intuitively it ought to be so: one would assume that an arbitrary rover is more likely to hit first the sapling for which the expected number of steps is smaller.

Computing a solution to (9.11) indicates how "far" a given vertex is from a forefront node, but does not by itself give a hard segmentation. Accordingly, the solution of (9.11) is converted to a hard segmentation by thresholding it at the value that generates the hard segmentation minimizing (9.8). Only n thresholds need to be tested, and the quantities in (9.8) can be calculated easily, so the hard segmentation is produced rapidly. Generating a hard segmentation in this manner also means that the pixels belonging to the forefront segment are connected and, if more than one sapling group is available, that every segment is connected to a sapling. Note that this process is somewhat close to what is done in the NCuts algorithm, which converts a soft segmentation into a hard segmentation by thresholding. To conclude, the steps of the Static strength algorithm, sketched in code below, are:
1. Obtain a collection of labelled pixels that signify the forefront, VN.
2. Map the input image intensities to edge weights on the lattice using (9.1).
3. Solve (9.11) for the expected number of steps an arbitrary rover, beginning from each pixel, takes to reach a node in VN.
4. Obtain a hard segmentation by testing n thresholds β and selecting the segmentation that generates the smallest ratio given in (9.8): allocate each vertex vi to the forefront if ai ≤ β and to the context if ai > β.
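The sketch below, referenced in the list above (again our own illustrative code, reusing the Laplacian construction from the arbitrary rover sketch), carries out steps 3 and 4: it solves (9.11) with the saplings grounded and sweeps the thresholds of step 4 to minimize the ratio of (9.8).

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def static_strength_segmentation(n_nodes, edges, weights, forefront):
    """Steps 3-4 of the Static strength algorithm (illustrative sketch).
    forefront: indices of the user-supplied forefront saplings V_N."""
    i, j = edges[:, 0], edges[:, 1]
    W = coo_matrix((np.r_[weights, weights], (np.r_[i, j], np.r_[j, i])),
                   shape=(n_nodes, n_nodes)).tocsr()
    d = np.asarray(W.sum(axis=1)).ravel()
    diag = coo_matrix((d, (np.arange(n_nodes), np.arange(n_nodes))),
                      shape=(n_nodes, n_nodes)).tocsr()
    M = diag - W

    # Step 3: ground the saplings (a_i = 0 on V_N) and solve M_U a_U = d_U, cf. (9.11).
    a = np.zeros(n_nodes)
    u = np.setdiff1d(np.arange(n_nodes), forefront)
    a[u] = spsolve(M[u][:, u].tocsc(), d[u])

    # Step 4: sweep candidate thresholds beta; keep the partition minimizing
    # the Static strength ratio (a^T M a) / (a^T d) of (9.8).
    best_ratio, best_mask = np.inf, None
    for beta in np.unique(a[u]):
        x = (a <= beta).astype(float)              # forefront indicator of (9.9)
        vol = min(x @ d, d.sum() - x @ d)          # volume of the smaller side
        if vol <= 0:
            continue
        ratio = (x @ (M @ x)) / vol
        if ratio < best_ratio:
            best_ratio, best_mask = ratio, a <= beta
    return best_mask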

9.6 Performance Evaluation
9.6.1 Ringed Graph Cuts with Multi-Resolution
Segmentation of a heart chamber can be accomplished with a single click of the mouse inside the left atrium using the multi-resolution, ringed graph cuts algorithm. The left atrium is segmented from a CT volume with 512 x 512 slices (262,144 pixels per slice) in only around 15 seconds on an Intel Pentium workstation, requiring only around 20 MB of storage. The chambers and great vessels of the heart, including both atria, both ventricles, and the aorta, are distinct structures; the multi-resolution, ringed graph cuts algorithm allows these chambers to be segmented independently.

9.6.2 The Arbitrary Rover Algorithm
Using (9.1), a four-chamber view of a CTA heart volume was converted into a weighted graph and the arbitrary rover algorithm was applied. The arbitrary rover algorithm was selected for this problem since 5 labels are necessary to classify the heart's 4 chambers. The segmentation of this 256 x 256 image required a computation time of around 12 seconds.

Results on a wide range of CTA cardiac data with differing noise levels show that this algorithm is accurate.

9.6.3 Static Strength Algorithm
As a fully automatic application, the Static strength algorithm's output is computationally expensive. The algorithm nevertheless provided interactive-speed output when applied to 256 x 256 x 256 subvolumes in the CT application.

Figure 9.5 Memory vs Computation time.

9.6.4 Comparison of the Three Algorithms
The three algorithms discussed in this chapter demonstrate distinctive behaviors that make each appropriate for a specific problem. Once a certain approach is identified as functioning effectively under the prior assumptions of a given problem, features are either hand-crafted or trained from examples to maximize system results. Improving the computational feasibility of these methods remains a challenge; speedups of one to two orders of magnitude have already been obtained for these problems by translating the key solvers onto commodity graphics hardware. Figure 9.5 shows the comparison chart of the three algorithms.

9.7 Conclusion
Computer vision has been revolutionized by deep learning, which is now used in cardiovascular imaging. This chapter gives a detailed review of the state of the art in practitioners' processes and methods. For three popular imaging modalities, it provides a detailed summary of DL strategies, covering a broad spectrum of current deep learning methods designed to segment the various cardiac structures. Across the three modalities, the review of deep learning-based segmentation approaches highlighted both their future promise and the existing shortcomings that may impede broad practical adoption. Deep learning-based approaches have made a massive impact on the segmentation of cardiac images, but they also raise problems that demand significant contributions in this area. The application of graph-based segmentation methods to CT images, together with their mathematical modeling, also shows that each model has unique advantages: each has its own way of identifying cardiovascular problems using image segmentation.


10 Modeling of Diabetic Retinopathy Grading Using Deep Learning
Balaji Srinivasan1*, Prithiviraj Rajalingam2 and Anish Jeshvina Arokiachamy1
1Loyola-ICAM College of Engineering and Technology, Chennai, India
2SRM University, Chennai, India
*Corresponding author: [email protected]
Abstract
The eye health of diabetic patients can be worsened by diabetic retinopathy. If left untreated, it may cause serious consequences such as blurred vision and, over a long course of time, total loss of vision. The trend lately in healthcare is to deploy deep neural networks because of their learning potential. Modeling neural networks on fundus images and deploying them can assist physicians in faster and more accurate clinical diagnosis. A hybrid deep learning framework for diabetic retinopathy classification is proposed: a two-stage hybrid system in which Convolutional Neural Network (CNN)-based features are extracted and, on top of these, Long Short-Term Memory (LSTM) networks are applied for classification to improve performance. The hybrid model is validated with the Diabetic Retinopathy Dataset Competition (DRDC) datasets and the results are compared with the existing model for accuracy. The hybrid model achieved an accuracy of 98.56%.
Keywords: CNN, LSTM, diabetic retinopathy, modelling

10.1 Introduction
Diabetes is a disease that occurs when the blood glucose (blood sugar) level in the body is very high. The root cause of diabetes is an insufficient amount of insulin in the body, or insulin that is not used efficiently by the body. This leads to severe effects in the human body, such as damage to the blood vessels and the retina, resulting in blood leakage from the small blood vessels. The affected retinal tissue swells, damaging the vision. This condition, which affects both eyes, is referred to as diabetic retinopathy. Some of the severe symptoms of Diabetic Retinopathy (DR) include difficulty in

recognizing colors, blurred vision, and complete loss of sight. Although it takes several years for diabetic retinopathy to reach a stage where it could threaten the eyesight, it can cause blindness if left untreated. Hence early screening is a must for early diagnosis. A study conducted in India in 2016 [1] projected that within two decades the number of patients with DR would be around 22.4 million and the number with STDR around 2 million. In another study [2], it was observed that the patients with DR were predominantly males above the age of 40; incidentally, the patients with DR in that study had visual acuity of 6/18 or better in the worse eye. An International Diabetes Federation report predicts that by the year 2045 there will be around 629 million people with diabetes. In India, the first National Diabetes and Diabetic Retinopathy survey (2015-19) asserted that the prevalence of DR and Sight-Threatening DR (STDR) was 16.9% and 3.6%, respectively. It was also observed that DR was more prevalent in the diabetic population aged up to 50 years. In the diabetic population aged between 60 and 69 years, the prevalence of DR was around 18.6%; for the age group between 70 and 79, around 18.3%; and above 80 years, around 18.4%. The prevalence of blindness among diabetic patients was 2.1% and of visual impairment 13.7%. In this report, females showed a prevalence of 11.7% and males around 11%. In rural areas, most people are out of reach of eye clinics, and awareness is relatively low; where clinics exist, they often lack facilities. With India's enormous population, one doctor per 10,002 people shows the scarcity of resources. Despite the shortage of resources, smartphones have reached even the rural areas. With India ranking third in the world for internet use by volume, tele-density has grown exponentially in recent years. With increasing access to mobile phones and the internet in rural areas, tele-ophthalmology can revolutionize the Indian health care systems. Deep learning is currently employed in multiple fields, such as automation and healthcare. Deep neural networks have been developed to the point where they can assist doctors: their learning capacity allows them to classify or identify the target variable precisely, with human-like accuracy. Thus, deep neural networks can be employed for DR grading. A neural network model trained on fundus images could be deployed as a mobile application: when users scan their eye using a smartphone, they can get their screening results quickly. This could assist in identifying DR, and the user can

then consult an ophthalmologist for further diagnosis. In this paper, various deep neural networks, chosen for their potential, purpose, and function, are trained using fundus images obtained from the Diabetic Retinopathy Dataset Competition (DRDC), and their results are compared. The three popular types of neural networks are Multilayer Perceptrons (MLPs), commonly used for tabular datasets; Convolutional Neural Networks (CNNs), used for face recognition, image classification, human pose estimation and document analysis; and Recurrent Neural Networks (RNNs), suitable for language modelling and text generation. Based on the potential and applications of these networks, the CNN was selected for solving the multiclass image classification problem. The fundus images are classified into 5 grades of DR: 0-No DR, 1-Mild DR, 2-Moderate DR, 3-Severe DR, and 4-Very severe DR. This paper proposes a hybrid deep learning model comprising CNN and Long Short-Term Memory (LSTM) networks to classify DR. In the proposed model, the CNN is used for automatic feature extraction and the LSTM network is applied for classification. The LSTM network enhances learning ability with its long sequence memory; more importantly, it performs better than other artificial neural networks, with significantly less error and high accuracy. The hybrid model using the combination of CNN and LSTM enhances the accuracy of DR classification. The contributions of this paper are briefly summarized below:
a) A hybrid CNN-LSTM model developed to assist DR screening provides earlier support in the diagnosis of patients with diabetic retinopathy.
b) A performance analysis is conducted to observe the accuracy, sensitivity, specificity, F1-score, and Area Under the Curve (AUC).
The rest of the paper is organized as follows. A review of recent related works is given in Section 10.2. Section 10.3 presents the proposed hybrid framework for DR classification. The dataset and its preprocessing are discussed in Section 10.4. Section 10.5 discusses the results and performance of the proposed system. Section 10.6 concludes the paper.

10.2 Related Works
Retinopathy is an illness or disorder that affects the retina of the human eye. Different retinopathies are caused by independent conditions such as diabetes, coronary heart disease, hypertension, and chronic kidney disease.

A retinopathy caused by Diabetes Mellitus (DM) is referred to as diabetic retinopathy. DR is a risky, chronic disorder that causes various impairments, called lesions, in the retina of the human eye. If the person has diabetes with a record of clinical stroke, the risk is increased further. One in three diabetic patients (34.6%) had some DR in the US, Australia, Europe, and Asia, as per a global meta-analysis study report in 2010. DR can be classified broadly as non-proliferative DR (NPDR) and proliferative DR (PDR). The state of the blood vessels in the retina primarily distinguishes NPDR from PDR: in NPDR, blood vessels are harmed and fluid passes into the retina, whereas in PDR new irregular blood vessels develop throughout the retina. PDR is a more advanced DR level; it can cause complete blindness if the disease progresses beyond the PDR stage. Screening aims to diagnose DR early and reduce damage and loss to the retina, and therefore the literature in the current study is considered for the detection of lesions. Micro-aneurysms (MA), Hemorrhages (HEM), Hard Exudates (HE), and soft exudates or cotton-wool spots (CWS) are the main types of lesions present in NPDR photos. Hemorrhages occur in the form of dots and blots. MA and HEM are known as red lesions, often described as dark lesions, while exudates are described as bright lesions. It can be noted from the literature that very few studies have assessed severity using multiple lesions to rate all the stages; however, a variety of clinical studies have shown that the presence of a single lesion type such as MA or HE detects only the early stage (i.e., the moderate stage). Most of these researches have applied the deep CNN approach to classify the DR. The benefit of using an ML-based classifier is that explicit feature selection for severity calculation makes automated grading readily available; machine learning classifiers are also advantageous in terms of time complexity. Traditional approaches use a range of machine learning classification models such as the Gaussian Mixture Model (GMM) and k-nearest neighbor (kNN). However, high sensitivity and high accuracy are generally not feasible with these types of methods. Many researchers have therefore embraced deep learning models for DR detection. Deep learning, an emerging and efficient DR detection method, provides high performance compared to conventional feature-extraction techniques. It is also important to note that the adoption of Graphics Processing Units (GPUs) has accelerated research on DR detection employing deep CNN-based feature extraction. Xu, Feng and Mi [3] stressed the need for identifying the presence of diabetic retinopathy early, as it would be beneficial in the clinical treatment of the patient.

They developed a model applying a deep convolutional neural network for automatic classification of DR from color fundus images. The model achieved an accuracy of 94.5%, significantly higher than previously researched approaches. Similar to the model proposed by Xu et al. [3] that utilized a convolutional neural network, Mansour [4] proposed a multifold approach comprising a DNN and an SVM for computer-aided diagnosis of DR, to help detect the disease early and plan treatment. The model adopted multilevel optimization to enhance feature detection for DR classification and achieved an accuracy of around 97.93%. Gargeya and Leng [5] proposed and assessed a novel data-driven deep learning algorithm for automated DR identification. The algorithm analyzed and graded fundus images as safe (no retinopathy) or vulnerable to DR, thus recognizing medical referral cases of interest. A total of 75,137 public images were used to train and test for DR in the diabetic population, with artificial intelligence used to discern healthy fundi from DR fundi. Before experimentation, a jury of retinal specialists established the ground truth for the proposed data collection. They also tested the proposed model using the MESSIDOR-2 and E-Ophtha datasets for external validation. This entirely data-driven, artificial intelligence-based grading algorithm screened fundus images obtained from diabetic patients and classified them with high efficiency; the assistive approach directs patients to further assessment by an ophthalmologist. On a global scale, implementing such an algorithm could significantly reduce the rate of vision loss attributed to DR. Islam, Hasan and Abdullah [6] built a novel deep convolutional neural network that detected all MAs. The proposed model was evaluated on the largest publicly accessible Kaggle diabetic retinopathy dataset and obtained a 0.851 quadratic weighted kappa score and a 0.844 AUC, delivering state-of-the-art grading results. For early treatment, the proposed model achieved a sensitivity of 98% and a specificity of over 94%, demonstrating the effectiveness of the proposed procedure. Zhang et al. [7] developed the DeepDR system by compiling a wide collection of DR medical images that were labeled by clinical ophthalmologists. They then evaluated the relationship between the amount of available evidence and the number of class labels, as well as the impact of various classifiers on performance. The developed models were evaluated for their accuracy and durability using nine metrics. The results indicate that the authentication process increases efficiency, with a precision of 97.5% and an accuracy of 97.7%.

The grading method has a sensitivity of 98.1% and a specificity of 98.9%. Pao et al. [8] presented an entropy image calculated from the green component of the fundus image. In particular, Un-sharp Masking (UM) image enhancement is used for pre-processing prior to estimating the entropy images. A bi-channel CNN that integrates the gray-level entropy features with the green component pre-processed by UM was suggested to enhance detection efficiency. DR detection was advanced by using the gray-level entropy images and the UM-preprocessed green component as the inputs to the CNN channels. They proposed that deep learning technology will support ophthalmologists in the detection of referable DR and would support the automatic retinal imaging system. Hemanth, Deperlioglu and Kose [9] reported that, in medical image processing, the fundus image is pivotal for effective diabetic retinopathy detection. They proposed a hybrid approach for diabetic retinopathy diagnosis combining classical image processing with deep learning for better diagnosis. The solution used histogram equalization and contrast-limited adaptive histogram equalization techniques; the second step is classification using a CNN. The proposal was validated using the MESSIDOR dataset and achieved an accuracy of 97%. The performance of a deep learning enhanced algorithm was studied by Abràmoff et al. [10] for automated detection of DR. They compared the performance to the already existing Iowa Detection Program (IDP), which does not employ deep learning components, on a set of fundus images from a publicly available database. They observed that the newly proposed system achieved a sensitivity of 96.8% and a specificity of about 87%, with a negative predictive value of around 99% and a reported AUC of 0.98. The deep learning enhanced algorithm thus provided better performance than earlier algorithms for automatic detection of diabetic retinopathy. Lahmiri [11] presented a three-stage hybrid approach for DR classification: a CNN for feature extraction, then Student's t-test for extracting the top features, and finally a non-linear SVM for classification. The CNN-SVM approach achieved an accuracy of 99.11% and bettered combinations of the CNN with other machine learning classifiers. The three-stage hybrid model is accurate and fast in detecting DR from digital retina images with hemorrhage. Seth and Agarwal [12] used the Eyepacs dataset and applied a hybrid deep learning model for DR classification. The approach used the combination of CNN-linear SVM and CNN-softmax for

classification and achieved sensitivities of 0.93 and 0.87, respectively, with satisfactory precision and recall scores. The hybrid CNN-linear SVM and CNN-softmax frameworks achieved specificities of 0.85 and 0.67, respectively. Toledo-Cortés, de la Pava, Perdomo and González [13] used the Eyepacs and Messidor datasets for DR classification. Their framework combined a deep learning network with a machine learning classifier, using a hybrid CNN model with a Gaussian process. The framework achieved better prediction while improving interpretability, since it used grade estimation. The model was validated on the Eyepacs and Messidor datasets for DR classification and achieved AUCs of 0.97 and 0.87, respectively; however, it suffers in prediction of the no-DR class. Lee et al. [14] used a pre-trained model and an SVM for prediction. With a remarkable AUC score, the deep learning model with SVM outperforms in accuracy. The study had limitations, as it used only a small dataset for development and validation; the model used images from a particular geography and so cannot be generalized, and its accuracy is based on only a few features, which will impact accuracy when scaled up. Earlier, Doshi, Shenoy, Sidhpura and Gharpure [15] proposed a deep learning diabetic retinopathy model that applied a GPU-accelerated deep convolutional neural network for automatic diagnosis of DR and its five stages. The model also learns to extract features automatically, which is very important in diagnosing the disease. It presents a GPU design implemented to accelerate the deep convolutional networks for automatic analysis, classifying the retinal images into five stages of the disease based on severity. The GPU-accelerated approach implements the hybrid model for DR classification with higher accuracy and faster computation.

10.3 Methodology
This section highlights the approach proposed in the current work, providing an overview of the deep learning CNN, the LSTM, and the metrics used for evaluation. The twofold approach for the detection of DR in the fundus image employs the deep CNN to auto-extract the features and the LSTM to perform the classification. First, the raw fundus images pass through the data preprocessing pipeline for resizing, shuffling, and normalization. Second, the training and test datasets are obtained from the preprocessed dataset. Third, the hybrid architecture is trained using the training dataset.

Subsequently, the training accuracy and validation accuracy, along with the respective losses, are measured after each epoch. The evaluation metrics used are sensitivity, specificity, accuracy, AUC, confusion matrix, and F1-score. Using the mathematical model of neural networks, comprising summation and convolution, the deep convolutional network is used to classify DR based on DR-related features. To identify the class and the grade of DR, the neural network is modeled to produce the expected output from the given input. As neural network training is iterative, an optimization function is employed, and the combination of hyperparameters is modulated to improve classification accuracy. In contrast to the classical approach of hand-crafted feature extraction, the deep learning CNN uses the data fed into the network to auto-generate complex features; furthermore, it can extract global and local features automatically. In particular, the network ascertains the structure of an intricate feature set using image patches as input, with convolutional filters and local sub-sampling. The deep learning CNN architecture stacks several levels of layers (convolution, pooling and non-linearity); in many cases, the stacked CNN layers are connected to additional convolutional and fully connected layers. Back-propagating gradients through the network enables all weights in all filter banks to be trained. The convolutional layer's function is to discover local combinations of features from the previous layer, while the pooling layer's role is to merge semantically related features into one. Moreover, pooling decreases the resolution of the derived features layer by layer and concurrently increases their robustness. Subsequently, the non-linear activation function receives the locally weighted summation, and a one-dimensional feature vector is obtained from the extracted high-level features using the fully connected layer. Finally, for the classification layer, the extracted features are normalized to a probability distribution over the classes of images. In the hybrid model, the convolutional, pooling, and fully connected layers are the major building blocks of the CNN architecture. The crux of the CNN is first to obtain local features in the higher layers and then carry them to the lower layers to obtain complex features. In the CNN, the tensor of feature maps is determined by a set of kernels. In the striding process, the kernel convolves over the entire input,

so that the output volume dimensions are integers. Because this process reduces the input volume dimensions, zero-padding is used to match the input volume dimension of the low-level features. The Rectified Linear Unit (ReLU) increases the non-linearity in the feature maps. The pooling layer down-samples the dimension of the input to minimize the number of parameters, and the fully connected layers use the features extracted through the convolutional and pooling layers to arrive at a decision. The CNN is proven effective for feature extraction [16]. The feed-forward neural network (vide Figure 10.1) comprises three layer types, convolutional layers, fully connected layers, and pooling layers, stacked together with the input and output layers. Using filters and the convolution and pooling operations, the CNN extracts the features of the dataset; for the further process of classification, the features are fed into the fully connected layer. We use 1D convolution to classify the DR features from the image. As in Figure 10.1, an additional convolutional layer is used so that the data flows to the next layer after the convolution operation and an activation function. Equation (10.1) [16] shows the convolution of the input signal xk with the filter weights Wcnn, where bcnn and σcnn are the bias parameter and the activation function, respectively:

yk = σcnn(Wcnn ∗ xk + bcnn). (10.1)

The challenges of training and scaling the Recurrent Neural Network (RNN) are overcome by the LSTM. The LSTM uses recurrent connections to formulate the output and has a unique structure to capture sequential data using previous data. It comprises memory blocks, memory cells, and gate units: the hidden units of the RNN are replaced with memory cells connected recurrently to one another, and the states of the memory cells are modulated by the gate units. The LSTM improves on the RNN by replacing the ordinary hidden nodes with hidden memory [17]. The LSTM unit [18] comprises three sigmoid gates: the forget gate (f), the input gate (i), and the output gate (o). The input gate characterizes the fraction of the current input that merges into the cell memory, the output gate characterizes the impact of the cell memory on the node output, and the forget gate characterizes the forget rate. For the LSTM unit (Figure 10.2) at time t, the equations for a forward pass are given below:

ft = σ(Wf xt + Uf ht−1 + bf) (10.2)
it = σ(Wi xt + Ui ht−1 + bi) (10.3)
ot = σ(Wo xt + Uo ht−1 + bo) (10.4)
ct = ft ⊙ ct−1 + it ⊙ tanh(Wc xt + Uc ht−1 + bc) (10.5)
ht = ot ⊙ tanh(ct) (10.6)

Figure 10.1 CNN architecture [16].

The flowchart of the proposed CNN-LSTM hybrid model used to classify digital retinal images for DR is shown in Figure 10.3. The hybrid architecture (vide Figure 10.4) is the combination of the CNN and the LSTM for detecting DR: in the hybrid network, the features are extracted using the CNN and the LSTM is the classifier. The hybrid model comprises the CNN layers (convolutional layers, pooling layers and a fully connected layer) followed by an LSTM layer and an output layer with the softmax function. The convolution block comprises the CNN layers, pooling layers and a dropout layer. Feature extraction is done using convolutional layers activated by the Rectified Linear Unit (ReLU) function, the image dimension is reduced using max-pooling layers, and finally the LSTM is used to classify the DR. A minimal sketch of such an architecture is given below.
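The following TensorFlow/Keras sketch illustrates this kind of hybrid model. The layer sizes, input resolution, and use of TimeDistributed wrapping are illustrative assumptions, not the exact configuration used in the experiments.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(8, 128, 128, 3), n_classes=5):
    """Time-distributed CNN blocks extract frame features; an LSTM
    aggregates them and a softmax outputs the five DR grades."""
    model = models.Sequential([
        # input: (time steps, height, width, channels)
        layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"),
                               input_shape=input_shape),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Flatten()),
        layers.Dropout(0.25),                  # convolution block dropout
        layers.LSTM(64),                       # sequence classifier
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()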

Figure 10.2 LSTM unit [17].

Figure 10.3 CNN-LSTM model used to classify DR.

Figure 10.4 Hybrid CNN-LSTM architecture.

10.4 Dataset
We have used the popular DRDC fundus image datasets to evaluate the proposed hybrid deep learning framework. The dataset is partitioned into training and validation sets. Information on retinal, ophthalmic and other diseases such as diabetes and hypertension can be obtained from the fundus image. To understand the fundus image, the extraction of normal and abnormal features is mandatory. The features that can be extracted from the fundus image are exudate area, exudate count, exudate density, microaneurysm count, microaneurysm area, optic cup area, cup-to-disc ratio, optic disc area, blood vessel area, blood vessel density, haemorrhage area and haemorrhage count. The normal features of fundus images include the optic disc, the macula and the blood vessels. Exudates and haemorrhages are the main abnormal features of DR, a significant cause of blindness in the working-age population.

Data analysis and exploration were conducted: Table 10.1 shows the data types of the dataset attributes, Table 10.2 presents a statistical description of the dataset, and Table 10.3 the correlations between attributes. To give an overview of the DRDC dataset, a sample is shown in Table 10.4.

10.5 Results and Discussion
The proposed hybrid model implementation steps are as follows.
Input: DR Dataset
Output: Class Label (DR or No DR)
1. The feature dataset is extracted from the fundus images and standard-scaled.
2. Using the cleaned data, exploratory analysis is conducted to understand the data distribution.
3. A stack of Conv2D and MaxPooling2D layers is structured to the required depth in the 2D convolutional network.
4. The CNN layers are time-distributed, and then the LSTM and output layers are defined.
5. The hybrid model is used to classify the DR.
6. The hybrid model is evaluated for performance.

Table 10.1 Data type for attributes of dataset.

Feature                  Data type
Exudate_Area             float64
Exudate_Count            float64
Exudate_Density          float64
Microaneurysms_Count     int64
Microaneurysms_Area      float64
Optic_Cup_Area           float64
Optic_Disc_Area          float64
Cup_to_Disc_Ratio        float64
Bloodvessel_Area         int64
Blood_Vessel_Density     float64
Hemorrhage_Area          float64
Hemorrhage_Count         int64
DRLevel                  int64

The evaluation metrics used are accuracy, sensitivity, specificity, and F1-score; their definitions are given after this paragraph. In these metrics, TP denotes the fundus images correctly classified as DR, FP denotes the No-DR images misclassified as DR, TN denotes the fundus images correctly classified as No-DR, and FN denotes the DR images misclassified as No-DR.
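These are the standard confusion-matrix definitions of the metrics, added here for reference:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
F1-score = 2TP / (2TP + FP + FN)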

The hybrid model is employed to predict the occurrence of DR. The dataset is validated for the hybrid model and the performance is measured using the performance metrics (Table 10.5). Figure 10.5 shows a visual representation of the evaluation metrics used to validate the hybrid model. The dataset extracted from the fundus images consists of the features of images with and without DR. The deep learning CNN automatically extracts the features from the fundus image. Based on the statistical significance level, the features for the model are selected and split into training and test sets. The results of the proposed system are compared with the other models listed in Table 10.5 for accuracy, sensitivity, specificity, and F1-score. It is inferred from Table 10.5 that the proposed model is more accurate and has an enhanced AUC compared to the other models. With lower false positive and false negative counts, and higher true positive and true negative counts, the proposed hybrid model shows negligible misclassification. At epoch 125, the training and validation accuracy is 98.56% and 98.45%, respectively, which warrants the classification efficiency of the proposed system.

Table 10.2 Statistical description of dataset.

        Exudate_Area   Exudate_Count   Microaneurysms_Count
count   516            516             516
mean    39.633         5.031           1.343
std     133.758        9.541           2.183
min     0              0               0
25%     0              0               0
50%     0              1               0
75%     13.125         6               2
max     1376           87              11

Table 10.3 Correlation between attributes in dataset.

                       Exudate_Area   Exudate_Count   Microaneurysms_Count
Exudate_Area           1              0.448           0.07
Exudate_Count          0.448          1               0.258
Exudate_Density        1              0.448           0.07
Microaneurysms_Count   0.07           0.258           1
Microaneurysms_Area    0.05           0.221           0.955
Optic_Cup_Area         0.092          0.132           -0.023
Optic_Disc_Area        0.044          0.055           -0.021
Cup_to_Disc_Ratio      0.186          0.069           -0.055
Bloodvessel_Area       -0.049         0.038           0.367
Blood_Vessel_Density   -0.049         0.038           0.367
Haemorrhage_Area       0.011          0.068           0.113
Haemorrhage_Count      -0.039         0.055           0.228
DRLevel                -0.041         -0.022          -0.293

Table 10.4 Dataset sample.

Exudate_Area   Exudate_Count   Exudate_Density   Microaneurysms_Count
16             10              7.33E-04          0
1.5            7               6.87E-05          3
226            35              1.04E-02          8
436.5          8               2.00E-02          9
1795.5         20              8.22E-02          0

Table 10.5 Comparison of the evaluation results.

Classifiers   Accuracy (%)    Sensitivity (%)   Specificity (%)   AUC (%)
CNN-LSTM      98.56 ± 0.01    98.55 ± 0.01      99.56 ± 0.01      99.1 ± 0.03
CNN-kNN       92.38 ± 0.02    93.38 ± 0.04      95.27 ± 0.02      98.12 ± 0.02

Figure 10.5 Metrics of CNN-LSTM model.

Besides, the training and validation loss of the hybrid system is 0.04 and 0.06, respectively. The ROC curve between the True Positive Rate (TPR) and the False Positive Rate (FPR) is used to measure the AUC, which is around 99.9%, demonstrating the performance of the hybrid system. The experiment was conducted on an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz, and the computational complexity of the model is measured by its processing time. The overall processing time of the proposed hybrid system is around seven minutes: first, to auto-extract the deep learning features from the database; second, to ascertain the top features; and last, to classify the DR. The hybrid model thus has the double benefit of speed and accuracy. The proposed system performs better than similar models applied to this classification task: it delivered consistent sensitivity and specificity compared to [19, 20], and its accuracy was better than that of [21] and [22]. Adopting the hybrid model lessens the steps in feature extraction, and in particular the proposed model is both accurate and fast. In summary, the highlights of the results are: (1) the deep learning CNN algorithm benefits the application through automatic feature extraction for classification; (2) the hybrid model for DR classification is observed to have high accuracy; (3) besides achieving high accuracy, the hybrid model is also faster.

For a real-time scenario, the model must be deployed so that, when the user gives an input, the model predicts the output. The users need an input web form in the frontend: the values entered in the web form are passed to the model, the target variable is predicted, and the predicted output is displayed on the webpage.

To implement a web application for DR classification, a web framework is needed; the popular Python web framework Flask is used to develop the DR classification webpage. In the implementation, the web page for DR classification is first developed using Flask, and the hybrid model is serialized with the Pickle package. The home page created using Flask displays a web form in which the user can upload the image required for DR classification. Subsequently, the model is loaded and the inputs are passed through the hybrid model for classification. After the classification is completed, the result page displays the screening result based on the predicted DR grade.
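A minimal Flask sketch of this serving flow follows; the file names (`model.pkl`, the HTML templates) and the `extract_features` preprocessing helper are assumptions for illustration, not the chapter's actual artifacts.

```python
# Minimal Flask serving sketch (artifact names are assumed).
import pickle
from flask import Flask, request, render_template

app = Flask(__name__)
with open("model.pkl", "rb") as f:          # hybrid model serialized via Pickle
    model = pickle.load(f)

def extract_features(image_file):
    """Hypothetical preprocessing helper: turn the uploaded fundus image
    into the feature vector the hybrid model expects."""
    raise NotImplementedError

@app.route("/")
def home():
    return render_template("index.html")    # web form to upload the image

@app.route("/predict", methods=["POST"])
def predict():
    image = request.files["fundus_image"]
    features = extract_features(image)
    label = model.predict(features)[0]      # e.g. 0 = No DR, 1 = DR
    return render_template("result.html", prediction=label)

if __name__ == "__main__":
    app.run()
```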

10.6 Conclusion

In summary, this work studied the application of deep learning to predict DR from retinal images. The proposed hybrid model presented a CNN-LSTM classifier for DR classification using the DRDC dataset. The proposed hybrid model comprising the CNN-LSTM network has better accuracy: the approach is faster and obtained an accuracy of 98.56% on the DRDC dataset. Also, the training and validation loss of the hybrid model is 0.04 and 0.06, respectively. The AUC is measured at around 99.9%, demonstrating the reliable performance of the hybrid system. The overall processing time of the proposed hybrid system is around seven minutes. Furthermore, the presented classifier performed satisfactorily compared to the other classifiers. As the accuracy achieved was considerably high with better speed, this hybrid model can in future be deployed as a mobile or web application that can be helpful for users who are out of reach of DR screening facilities.

References

1. Gadkari, S., Q. Maskati, and B. Nayak, Prevalence of diabetic retinopathy in India: The All India Ophthalmological Society Diabetic Retinopathy Eye Screening Study 2014. Indian Journal of Ophthalmology, 2016. 64(1): p. 38-44.
2. Raman, R., L. Gella, S. Srinivasan, and T. Sharma, Diabetic retinopathy: An epidemic at home and around the world. Indian Journal of Ophthalmology, 2016. 64(1): p. 69-75.
3. Xu, K., D. Feng, and H. Mi, Deep convolutional neural network-based early automated detection of diabetic retinopathy using fundus image. Molecules, 2017. 22(12): p. 2054.
4. Mansour, R.F., Deep-learning-based automatic computer-aided

diagnosis system for diabetic retinopathy. Biomedical Engineering Letters, 2018. 8(1): p. 41-57.
5. S. A. Selvi, T. A. kumar, R. S. Rajesh and M. A. T. Ajisha, "An Efficient Communication Scheme for Wi-Li-Fi Network Framework," 2019 Third International conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 2019, pp. 697-701, doi: 10.1109/I-SMAC47947.2019.9032650.
6. Islam, S.M.S., M.M. Hasan, and S. Abdullah, Deep learning based early detection and grading of diabetic retinopathy using retinal fundus images. arXiv preprint arXiv:1812.10595, 2018.
7. Zhang, W., et al., Automated identification and grading system of diabetic retinopathy using deep neural networks. Knowledge-Based Systems, 2019. 175: p. 12-25.
8. Pao, S.-I., et al., Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network. Journal of Ophthalmology, 2020. 2020.
9. Hemanth, D.J., O. Deperlioglu, and U. Kose, An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural network. Neural Computing and Applications, 2020. 32(3): p. 707-721.
10. Abràmoff, M.D., et al., Improved Automated Detection of Diabetic Retinopathy on a Publicly Available Dataset Through Integration of Deep Learning. Investigative Ophthalmology & Visual Science, 2016. 57(13): p. 5200-5206.
11. Lahmiri, S., Hybrid deep learning convolutional neural networks and optimal nonlinear support vector machine to detect presence of hemorrhage in retina. Biomedical Signal Processing and Control, 2020. 60: p. 101978.
12. Seth, S. and B. Agarwal, A hybrid deep learning model for detecting diabetic retinopathy. Journal of Statistics and Management Systems, 2018. 21(4): p. 569-574.
13. Toledo-Cortés, S., M. de la Pava, O. Perdomo, and F.A. González. Hybrid Deep Learning Gaussian Process for Diabetic Retinopathy Diagnosis and Uncertainty Quantification. 2020. Cham: Springer International Publishing.
14. Lee, J., et al., Macular Ganglion Cell-Inner Plexiform Layer Thickness Prediction from Red-free Fundus Photography using Hybrid Deep Learning Model. Scientific Reports, 2020. 10(1): p. 3280.
15. Pavithra, M., Rajmohan, R., Kumar, T. A., & Ramya, R. (2021).

Prediction and Classification of Breast Cancer Using Discriminative Learning Models and Techniques. Machine Vision Inspection Systems, Volume 2: Machine Learning-Based Approaches, 241-262.
16. Lecun, Y., L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. 86(11): p. 2278-2324.
17. Hochreiter, S. and J. Schmidhuber, Long Short-Term Memory. Neural Computation, 1997. 9(8): p. 1735-1780.
18. Liu, P., X. Qiu, and X. Huang, Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101, 2016.
19. Lauer, S.A., et al., The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application. Annals of Internal Medicine, 2020. 172(9): p. 577-582.
20. Islam, M.M., H. Iqbal, M.R. Haque, and M.K. Hasan. Prediction of breast cancer using support vector machine and K-Nearest neighbors. In 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC). 2017.
21. Haque, M.R., M.M. Islam, H. Iqbal, M.S. Reza, and M.K. Hasan. Performance Evaluation of Random Forests and Artificial Neural Networks for the Classification of Liver Disorder. In 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2). 2018.
22. Hasan, M.K., M.M. Islam, and M.M.A. Hashem. Mathematical model development to detect breast cancer using multigene genetic programming. In 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV). 2016.

*Corresponding author: [email protected], ORCID: orcid.org/0000-0003-1156-4744

11 Novel Deep-Learning Approaches for Future Computing Applications and Services

M. Jayalakshmi1*, K. Maharajan2, K. Jayakumar3 and G. Visalaxi4

1National Engineering College, K.R Nagar, Tuticorin, Tamil Nadu, India
2Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
3School of Computer Science and Engineering, Vellore Institute of Technology (VIT), Vellore, India
4Bharath Institute of Higher Education and Research, Chennai, Tamil Nadu, India

Abstract

Deep learning is a current and rapidly reviving field of machine learning. It uses well-organized machine learning approaches that are efficacious, supervised, and economical in both time and computational cost, and it paves the way to solving various complicated problems in computational areas. It has achieved major breakthroughs, with measurable performance gains, in a wide variety of applications, and it is used for exploring complicated structure in vast amounts of data using the backpropagation algorithm. It has shown remarkable performance and advancements in applications such as object detection, stock market analysis, remote sensing, speech recognition, healthcare, parking systems, big data and deep vision systems. Artificial intelligence improves learning algorithms consistently by continuously increasing the amount of data, which drives the efficiency of the training process, since efficiency depends on large volumes of data. Such a preparation method is called deep. It has two phases: training and inference. In the training phase, bundles of data identified by labels are used to take decisions based on their static characteristics; in the inference phase, the final decision is taken on matching data, and previously unexposed data are identified with new labels based on the knowledge gained from traversing the training data.

Keywords: Deep learning, artificial intelligence, training phase,

inference phase

11.1 Introduction

Artificial intelligence (AI) has a subfield known as machine learning (ML), which enables a system to learn automatically without being explicitly programmed with rules and details. From previous experience, such as data patterns and potential outcomes, a model builds sophisticated generalizations over data that involve many nonlinear relationships. Artificial neural networks (ANNs) are the foundation adopted in the deep learning platform, and they continuously run learning algorithms: growing data volumes can improve the efficiency of the training process, and accuracy depends on larger data volumes. The deep learning process rests on two stages, a training stage and an inference stage. The training stage involves labeling large quantities of data and agreeing on their matching characteristics, while the inference stage uses this previous experience to deal with predictions and new unexposed samples. It is an approach that enables the system to comprehend complex vision tasks as precisely as possible. The learning process is supervised or unsupervised, carried out across distinctive phases of abstraction and different layers of representation. Signals are weight-balanced and the combined signals are passed through nonlinear functions to produce outputs. The term "deep learning" captures the notion that the data are transformed through several layers: these structures have a substantial credit assignment path (CAP), meaning the chain of input-to-output conversion steps and the causal relationships between the layers. It is important to remember the distinction between representation learning and deep learning. Representation learning involves method selection that allows the machine to take in raw data and discover representations for detection and classification; deep learning methods are representation-learning approaches with several layers of increasingly abstract representation. Figure 11.1 demonstrates the differences between deep learning and machine learning.

Figure 11.1 Machine learning vs. deep learning.

On wide databases, deep learning strategies employ nonlinear transformations and high-level model abstractions. Deep learning also describes how a computer changes its internal attributes, building up descriptions layer by layer through abstractions and representations. This type of approach is widely used in various fields such as identification, medical treatment, object identification (Salazar et al. [51, 52]), language recognition, image classification, pedestrian detection, language processing, adaptive testing, large-scale data, cancer detection, data flow, document analysis and speech identification. This type of method uses enormous bodies of ground-truth data to classify the specific advantages and summations, and it creates an enhanced feature-extraction and classification model for various fields of application (Wang et al. [62]). The original feature of deep learning is that several systematic and general representations are learned without intervention by human engineering. The key reasons for the deep methodology center on the following: multi-layered nonlinear processing, and supervised or unsupervised learning. Nonlinear multilayer processing is a hierarchical arrangement in which the current layer accepts the preceding layer's outputs and passes its own output to the next layer; a hierarchy between layers is specified to organize the importance of the data. Supervision refers to the target class label: its availability gives a supervised computing system, and its non-availability gives an unsupervised computing system. Soniya et al. [56] described deep learning patterns, models, prototypes and vulnerabilities; they explored these functions, including learning strategies, tuning and methods of optimization, and they also looked at big deep learning datasets.

11.2 Architecture

Deep learning architectures go by various names, such as deep neural networks, recurrent neural networks, and deep belief networks. A deep neural network is configured by joining many hidden layers, placed between the input and output layers of an artificial neural network, using different topologies. The deep neural network is able to model tangled, nonlinear relations, producing prototypes in which an object is represented as a composition of more primitive layer configurations. In a feed-forward network of this type there are no loops, and the data flow from the input layer to the output layer. Deep learning offers many different architectures and algorithms for solving complex problems; a wide range of algorithms and architecture types is available, among which CNN and LSTM are essential and widely used. Table 11.1 shows the different deep learning architectures and their applications.

Table 11.1 Different architecture of deeper learning and its applications.

S. no.   Architecture                    Major application areas
1.       Auto-Encoder                    a. Natural language processing
                                         b. Understanding compact representation of data
2.       Convolutional neural networks   a. Document analysis
                                         b. Face recognition
                                         c. Image recognition
                                         d. NLP (natural language processing)
                                         e. Video analysis
3.       Deep belief networks            a. Failure prediction
                                         b. Information retrieval
                                         c. Image recognition
4.       Deep stacking networks          a. Information retrieval
                                         b. Continuous speech recognition
5.       LSTM/GRU networks               a. Speech recognition
                                         b. Handwriting recognition
                                         c. Image captioning
6.       Recurrent neural networks       Handwriting and speech recognition

Auto-Encoder: This is a type of feed-forward neural network in which the output is intended to match the input. The network compresses the input into a smaller code and then reconstructs the output from this compact representation. The code is a compact "summary" or "compression" of the input, also referred to as the latent-space representation. There are three components: encoder, code and decoder. The encoder compresses the input and creates the code, and the decoder then reconstructs the input using only that code. To build an autoencoder we need three things: an encoding method, a decoding method, and a loss function to evaluate the output. Autoencoders are primarily a dimensionality-reduction (or compression) algorithm with a few crucial properties:

a. Data-specific: Autoencoders can only meaningfully compress data similar to what they have been trained on. Because they learn features specific to the given training data, they are quite different from a general-purpose compression algorithm like gzip; we cannot expect an autoencoder trained on handwritten digits to compress photographs.

b. Lossy: The output of the autoencoder is not exactly the same as the input; it is a close but degraded reconstruction. If you want lossless compression, autoencoders are not the best option.

c. Unsupervised: To train an autoencoder we do not need to do anything special, just feed in the raw data. Autoencoders are regarded as an unsupervised learning technique because they are not trained on explicit labels; more precisely, they are self-supervised, because they generate their own labels from the training data.

Figure 11.2 Block diagram of auto encoder.

Both the encoder and decoder are fully connected feed-forward neural networks, and autoencoders are trained the same way as ANNs, through backpropagation. The code is a single layer of the ANN with a dimensionality of our choice; the number of nodes in the code layer (the code size) is a hyperparameter set before the autoencoder is trained. Figure 11.2 shows the autoencoder block diagram. First, the input passes through the encoder, a fully connected ANN, to produce the code. The decoder, with a structure equivalent to the encoder's, then reconstructs the input using the code. The goal is to obtain an output identical to the input. Note that the decoder architecture is the mirror image of the encoder's. This is not a strict requirement, although it is usually the case; the only necessity is that the input and output have exactly the same dimensionality. Almost anything can be done with the layers in the middle.
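A minimal Keras sketch of such an autoencoder follows; the 784-dimensional input and 32-unit code size are illustrative assumptions.

```python
# Minimal fully connected autoencoder (input size and code size are assumed).
from tensorflow.keras import layers, models

input_dim, code_size = 784, 32               # code size is a hyperparameter

autoencoder = models.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),        # encoder
    layers.Dense(code_size, activation="relu"),  # latent code
    layers.Dense(128, activation="relu"),        # decoder (mirror of encoder)
    layers.Dense(input_dim, activation="sigmoid"),
])
# Unsupervised: the input is its own target, so the loss compares x to x'.
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```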

11.2.1 Convolutional Neural Network (CNN)

A CNN is a multi-level neural system whose design is inspired by the visual cortex of animals. LeCun et al. [36] created the very first CNN. CNN application areas mainly include handwriting identification (Alwzwazy et al. [4]; Sudholt et al. [57]) and image processing (Affonso et al. [3]), e.g., postal code interpretation. From a structural perspective, the earlier layers are used to detect low-level attributes, such as edges, and the later layers recombine these features into higher-level attributes of the input, followed by extraction of the dimensional characteristics. The next step is to perform convolution and then pooling again, correctly matched with a multilayer perceptron. The ultimate aim is to identify the features of an object at the output layer, applying the backpropagation algorithm to attain the desired result. A CNN draws several merits from its various layers, such as deep processing layers, convolutional layers and pooling, along with a fully connected classification layer, and it serves different applications (Araque et al. [6]; Zhou et al. [74]), including speech recognition, health applications, and natural language and video recognition. CNNs produce much better accuracy and increase device functionality because of characteristics like local connectivity and weight sharing. Figure 11.3 shows the organization of a CNN, with the information flow through the input, convolution and pooling layers, hidden layers and outputs.

11.2.2 Restricted Boltzmann Machines and Deep Belief Network

The Restricted Boltzmann Machine (RBM) is an undirected graphical model comprising a hidden layer, a visible layer, and symmetric connections between the two layers; there are no connections within the input layer or within the hidden layer of an RBM. The deep belief network is a multi-layer architecture that uses a new training technique with many layers of abstraction: each pair of linked layers is an RBM, so the network is also described as a stack of restricted Boltzmann machines. The input layer takes the basic sensory input, and the hidden layers characterize abstract explanations of this input; the task of the output layer is usually just classification. Training is conducted in two stages: unsupervised pre-training and supervised fine-tuning. In unsupervised pre-training, the first RBM is trained to reconstruct its input from the first hidden layer. The following RBM is then trained with the first hidden layer treated as its visible (input) layer, so that this RBM is trained using the outputs of the first trained layer. Each layer is therefore pre-trained in turn. Once pre-training is complete, supervised fine-tuning begins: in this phase, the output nodes are marked with values or tags to support the learning process, and full network training is completed with gradient descent or the backpropagation algorithm.
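A hedged sketch of this two-stage idea, using scikit-learn's BernoulliRBM for greedy layer-wise pre-training followed by a supervised classifier; the layer sizes are assumptions, and joint backpropagation fine-tuning of the whole stack is not shown.

```python
# Greedy layer-wise RBM pre-training, then a supervised stage,
# in the spirit of a deep belief network (a sketch, not a full DBN).
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

dbn_like = Pipeline([
    # Each RBM is fit unsupervised on the outputs of the layer below it.
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    # Supervised stage: only the classifier sees the labels here.
    ("clf", LogisticRegression(max_iter=1000)),
])
# dbn_like.fit(X_train, y_train)   # X_train scaled to [0, 1]
```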

Figure 11.3 Architecture of convolution neural network.

LSTM/GRU Network

The LSTM is built around a cell, normally called a memory unit, that can maintain its value over a sufficiently long period of time and treat it as a function of its input; this enables the unit to remember the last computed value. The memory cell has three ports, named "gates," limiting the movement of information into and out of the cell. The input gate manages the flow of new information into memory; the forget gate controls when an existing item of information is discarded so that new information can be stored; and the output gate controls when the stored information contributes to the output. The GRU design is simpler to work with than the LSTM: it can be trained very quickly, and it is considered better in terms of delivery.
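For reference, the standard LSTM gate equations as commonly used today (notation: \(x_t\) input, \(h_t\) hidden state, \(c_t\) cell state, \(\sigma\) the logistic sigmoid, \(\odot\) element-wise product):

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```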

Recurrent Neural Network

The RNN covers a range of structures and is used as a standard multi-purpose network architecture. One key feature of an RNN is that it has connections that feed back into previous layers, in contrast to purely feed-forward connections. It keeps a memory of earlier inputs and models problems in time. It can be trained effectively with backpropagation through time (BPTT).

11.3 Multiple Applications of Deep Learning

Deep learning aims to address advanced facets of the input by using many layers of representation. This new branch of machine learning (Abadi et al. [1]) has done great things in applications such as face, voice and image recognition, recognition of handwriting, natural language processing, medical science, and many more. Its recent research includes optimization and fine-tuning of models by evolutionary algorithms and gradient descent. Key problems for technical advancement across the whole of deep learning are scaling computations, optimizing the parameters of deep neural networks, and learning and design techniques. In addition, thorough analysis of a number of complex neural network variants is an enormous problem for this future field of research. Combining fuzzy logic with deep neural networks is an important, challenging and provocative area to be studied. Several implementations of DL (deep learning) are shown in Figure 11.4.

a. Adaptive Testing

Chandra and Sharma [10] took a new approach by integrating learning rates with the Laplacian for weight updating. They found neurons to be central to weight and learning-rate change: operations are examined, and on the basis of the error gradient the parameters are adjusted. The Laplacian score of a neuron could be used for new weights, as well as for the difficulty of obtaining the full value of the learning rate. This was applied with linear activation on benchmark datasets, with maximal output, and the work showed that classification accuracy improved. A drawback is that Laplacian scores could not be used in an online mode; they therefore proposed an "Exponential LP" with a rectified linear activation function, which is much easier when information is available both in batches and in streams. Xiao et al. [66] suggested a groundbreaking deep-learning adaptive evaluation methodology: without hand-operated intervention, these techniques immediately acquire the data. Two applications were created using DNN features, i.e., dynamic testing and partial testing. These applications were used for making decisions such as failure or passing (Looks et al. [34]) and for good test ordering. The experimental implementations showed reliability enhancement and deep learning success. Falcini et al. [20] developed a disciplined deep-learning life-cycle procedure.

Figure 11.4 Various applications of deep learning.

b. Big Data

Now that data sizes are increasing very fast, deep learning offers a lot of scope and transformative opportunities across the different big data parameters shown in Figure 11.5. Massive information prediction and analytical solutions open up exceptional demands for information and data. Chen and Lin [13] indicated that important issues must be addressed before deep learning can succeed here: the data are immense, with disproportionate labels and heterogeneous, non-stationary distributions, and transformation solutions are essential. Gheisari et al. [21] submitted that deep learning over fundamental data analysis would yield excellent results and also lead to the detection of unknown, useful abstract patterns that were otherwise not understandable.

Figure 11.5 Different big data parameters.

c. Biological Image Classification

Affonso et al. [3] used decision-tree algorithms, neural networks, CNN, nearest neighbor and support vector machines in their experiments on the data set used. The researchers obtained extremely promising results and discovered that deep learning applied to imaging tasks achieved good predictive accuracy; the nature of the image dataset also determines the exactness.

d. Data Flow Graphs

As part of the TensorFlow machine intelligence platform, a TensorFlow graph visualizer was introduced to help explain the machine learning structure using the data flow graph. Serial transformations were used to create the common diagram format, relationships between non-critical nodes were established, and the graph was clustered step by step by extracting the structure from the source code. Edge bundling was also performed to ensure responsive, balanced and modular composition-centered expansion. Ripoll et al. [49] suggested a brand-new methodology for evaluation with neural networks: the technique was developed to determine whether a person has to be transferred from the emergency unit to cardiology, and it was checked on 1,320 people from whom raw, unannotated ECG signals were obtained. For machine learning, the technique helped build static graphs into a dynamic graph of various forms and sizes.

The team also worked on the library, which displays several kinds of graphs.

e. Deep Vision System

Puthussery et al. [47] developed a fresh method for autonomous navigation using machine learning methods. They used a CNN to classify markers in images, developed on a robot operating system, together with a system to steer toward a marker according to its position; they also used a depth camera to assess the distance and navigate to these markers. Abbas et al. [2] surveyed deep learning models for implementing video processing applications in real time. In the technological age, they explained the feasibility of achieving real-time video processing programmes, such as video object detection, tracking and recognition, with efficient learning facilities. Their architecture and new approach addressed problems of computational cost, layer quantity, consistency and accuracy; they used deep learning algorithms that provide a strong and efficient structure with many neuronal levels to manipulate huge volumes of video data effectively. Sanakoyeu et al. [53] addressed visual similarity with deep unsupervised learning. Starting from weak localization, they proposed a new optimization problem to extract batches of samples with mutual relationships; the competing relationships are distributed over the samples, and many batches are formed into separate groups. A convolutional neural network was used to create relationships within and between groups and to represent each sample without a label. The proposed methodology clearly demonstrated strong performance for feature analysis as well as for object categorization.

f. Document Analysis and Recognition

Wu et al. [65] gave a fresh visual and speech recognition approach in which neurons in a map are not required to share the kernel. In the R-CNN, neurons do not share the very same convolutional kernel, and the ATR-CNN also employs an alternate approach. Subramanian and Kannan [28] surveyed deep learning and its applicability to optical character recognition [45] for the Tamil script: they gave a survey of experiments on Tamil applications carried out by experts, and pointed out the steps necessary to boost OCR development through the use of deep learning engineering with simple data analysis. LeCun et al. [36] proposed handwritten [22] ZIP code recognition; the backpropagation algorithm was used in a deep neural network, at the cost of increased training time. Nguyen et al. [41] used deep neural networks [37] to evaluate online handwritten [29] mathematical symbols; the image patterns are based on online patterns. The test was carried out on the CROHME database. They found that the CNN method produces better results compared with

MQDF for evaluating offline mathematical symbols, and BLSTM also performed much better than MRF on online patterns [64]. Xing and Qiao [67] provided an efficient writer recognition [19] method using a multi-stream CNN. Local handwritten [8] patches were taken as input, followed by training under the softmax loss. They designed and enhanced the multi-stream architecture with data-augmented training and thus improved DeepWriter's functionality; DeepWriter was developed for handling variable text images, as a complete multi-stream CNN that learns a writer representation to offer an impressive portrayal. The tests were performed on the HWDB [58] and IAM datasets and demonstrated accuracy rates of nearly 99%. Poznanski and Wolf [46] developed a means of solving the problem of recognizing handwritten images; canonical correlation analysis was used for the comparison of text [9] profiles, and many fully connected branches were used in the CNN. They used a systematic learning approach with the standard removal process. Yadav et al. [69] established a contemporary deep learning method to classify individuals in the multimedia [44] booklet. They used diagonal extraction techniques at the convolutional levels for the experimental analysis [50, 63], after which they ran genetic algorithms to evolve the feed-forward network of the software type. They used a data set consisting of 360 instructions; the diagonal-based recognition method delivered accurate results with less training time.

g. Healthcare Environment

Loh et al. [33] suggested a new approach to rural cardiac health management and diagnostics and discussed the advantages, solutions and challenges, as stated by Lalmuanawma et al. [30], of incorporating deep learning algorithms. Improving rural medical products, including telemedicine and wellness apps, was very important. Computer-assisted systems are still being used to assist in the interpretation and analysis of health imaging [18]. The machine learning architecture shown in Figure 11.6 offers various benefits to patients and doctors, and progress in mobile devices will speed up the proliferation of medical services to those living in poor areas. Dai and Wang [17] launched a health system designed to reduce the enormous workload of nurses and doctors by using artificial intelligence solutions. They found sufficient pattern recognition techniques and a full recognition module to detect patient status on the basis of deep neural networks. They also worked on an excitation assessment aspect based on a Bayesian inference diagram and subsequently developed a simulation environment, including a body simulator, for training. The team used a deep recognition factor to classify body characteristics as well as

an activity evaluation [43] module for documenting and computing statistical evidence, used for example with Bayesian inference diagrams on the full body simulation module. The experiments were most successful with increasing statistical data; they used a data set consisting of nine body constitutional forms for the experiment.

Figure 11.6 Machine learning in healthcare.

h. Medical Applications

Vasconcelos et al. [60] explored how a Deep Convolutional Neural Network (DCNN)-based classifier can be used to handle unbalanced, smaller health-related data, as described by Bhatt et al. [10]. They used systems for data augmentation, integrating final recognition and lesion patterns into group training. In a regularized ensemble setting, Yuan et al. [71] formulated a deep learning strategy to solve multi-class and imbalanced problems. They used stratified class-balanced samples and focused on the unpredictability of weak learners. The sampling method arbitrarily selected examples, and the regularization process was checked to push the classifier to fit the distribution of the data. It also changes the error limits by looking at the classifier

functionality. Besides real-world data sets, 11 separate artificial data sets were used in their experiment. The new approach obtained the highest precision for the minority classes while maintaining the stability of the ensemble. Experiments showed that the proposed approach is accurate and also accounts for the discrepancy between the initial group classifications; they clarified that computational costs are also considerably lower, and that the usefulness of the technique increases as the amount of training data increases. Razzak et al. [48] surveyed highly accurate solutions in medical applications, such as medical imaging, wellbeing, image recognition, computer-aided diagnostics, image segmentation, image storage, image fusion and image processing. Artificial intelligence and machine learning methods have helped doctors understand, anticipate and prevent disease and its risk, and the approach was used by doctors to consider the genetic variants that contribute to a disease's incidence. This includes both classical and deep learning algorithms such as SVM, neural networks (NN), CNN, Extreme Learning Machines (ELM), Generative Adversarial Networks (GANs), the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and many others.

i. Object Detection

A new hybrid Local Multiple System was developed by Ucar et al. [59]. Using multiple CNNs with their feature-extraction capability, the input was first split into local regions for clear classification [70]. Second, they selected discriminative features via principal component analysis (PCA) and transferred both structural and empirical risk reduction to a variety of SVMs. Finally, they fused the SVM outputs. On the Caltech and Caltech101 pedestrian data sets, they performed pedestrian detection and object recognition experiments. Their proposed system achieved better results, with increased accuracy at a reduced failure rate and much better object detection. They built their own data set and also demonstrated good results using a CNN for object detection with a deep learning algorithm. The data requirement for deep learning is great, so applications accumulate huge data sets continuously; experimental deep learning technology is a strong method for replacing hand-made components with high-quality outcomes. Kaushal et al. [27] presented a detailed study of deep learning methods for object detection [25] and video tracking [61]. The survey covered neural networks, fuzzy logic, deep analysis, and the evolutionary algorithms essential for surveillance and detection, and it identified numerous challenges and data sets for the identification [15] of objects and for surveillance based on deep neural networks.

j. Parking System

Chen et al. [11] built a deep learning framework for mobile cloud computing that uses a repository and a cloud training process. Communication was handled with the Git protocol, which helps to transfer information in an unstable environment. While driving, intelligent digital cameras recorded the scene and the video was processed on an NVIDIA Jetson TK2. Experimental results showed that the detection rate of 4 frames per second was an improvement over R-CNN. Amato et al. [5] proposed a new decentralized solution for parking occupancy detection, built on a convolutional neural network (CNN) suitable for smart cameras. Two visual data sets, including CNRPark-EXT, were evaluated; they assembled a data collection that includes images [26] from a real parking lot, collected from nine smart cameras. The photographs were taken in varied light and weather conditions on different days. They used training and validation data collections to detect parking spaces and executed the operation in real time on the smart cameras, without a central server in the experiment; a Raspberry Pi camera module framework was used, and the server requires only the category's binary output to implement the proposed procedure. They found that the CNN remained extremely exact even when conditions, noise, shadows and partial occlusions varied slightly.

k. Person Re-identification

Cheng et al. [14] proposed a visual recognition technique that assists in identifying [40] an individual at another time. Distance metrics between pairs of examples were employed to increase the discriminative power of the features, with positive results [73]. They also proposed the development and testing of a structured Laplacian embedding technique, with all ordered distance links in the Laplacian graph form. By combining the proposed method with the softmax loss used for CNN training, the suggested method obtained deep features that preserve inter-personal dispersion as well as intra-personal compactness, which is important for individual identification.

l. Remote Sensing

Remote sensing studies make wide use of deep learning and deep networks. The proposed approach employed three data augmentation operations, including rotation and translation, which created augmented data and a much more descriptive and efficient model. The proposed method also embedded a simpler data augmentation process in order to overcome the problems of remote sensing image

processing, and it led to future revolutionary changes in the remote sensing scene category. Zhu et al. [74, 75] used deep neural networks that put different decisions before owners, such as global change tracking, or learning strategies when resource consumption has been reduced. For remote sensing researchers this has always been an incredible and difficult toolbox, one that enables them to cross boundaries and tackle real problems with implied models on a broad scale.

m. Semantic Image Segmentation

Chen et al. [12] applied deep neural network techniques to the problem of semantic image segmentation. The report made three primary contributions. First, they experimented with upsampled (atrous) filtering to preserve resolution: feature responses within the neural network are computed for the prediction task without increasing the computation volume, while effectively enlarging the field of view of the filters. They then proposed a multi-scale segmentation method, Atrous Spatial Pyramid Pooling (ASPP), which can operate fully at the feature level by using multiple sampling rates on the filters and by capturing images and artifacts at multiple scales. The methods integrated deep convolutional networks with probabilistic graphical structures. The experiments achieved optimization accuracy together with invariance to both qualitative and quantitative changes.

n. Speech Recognition

Mohamed et al. [39] proposed a new acoustic modeling technique using DBNs. The standard TIMIT corpus was used for data collection, achieving a 23.0% phone error rate (PER). They used the full BP-DBN propagation algorithm, with the associated DBN termed the DBN-AM architecture. The effects of the size and depth of the hidden layers were analyzed, and various methods of reducing overfitting were adopted: bottlenecks helped limit overfitting at the last stage of the BP-DBN, and the biased plus hybrid preparation also helped prevent intrusion in the associative memory of the DBN. In contrast to the alternatives, the results of the DBN structure were the highest. The TIMIT test corpus was used, with a 462-speaker training set; a total of 50 speakers were used for the development results, and models were evaluated on the 24-speaker core test set. For those tests, the Viterbi decoder parameters were tuned on the development set and the phone error rate (PER) was determined on the test set. A deep neural network approach (hidden Markov model and DNN) was proposed by Hamid and Jiang [24]: they used versions of the convolutional neural network to update the adaptation technology using speaker codes, which would

considerably improve the CNN structure. Noda et al. [42] revisited deep neural network speech recognition [31], used especially when noise corrupts the sound quality. Despite many difficulties, deep neural networks can learn robust latent characteristics. A hidden Markov model was involved in their work to identify the voice in the audio; by means of autoencoders and the CNN, a rate of 50% under a 10-decibel signal-to-noise ratio [54] could be achieved. Experimental findings in terms of accuracy and efficiency were excellent, and they showed clearly that the DNN's adaptability and hearing evaluations were much more efficient than the hidden Markov model (HMM) design. The technique also demonstrated the characteristic changes made at the output stage. A hidden Markov model [55] was also implemented by Serizel to build a voice recognition [72] method for a heterogeneous community of speakers. A DNN and vocal tract length normalization (VTLN) were used for the experiment: first the experiments were conducted separately, and then a hybrid approach was used. The mixture of strategies showed that the basic phone error rate improved by 30 to 35% and the basic word error rate by 10%. Xue et al. [68] provided a pattern for deep neural network adaptation, and they recommended different training methods for establishing speaker codes for each test condition, used in three ways within the code-adaptation software: nonlinear normalization, joint speaker adaptation with speaker codes, and direct model adaptation of the DNN based on speaker codes. The proposed approach was then tested on two regular speech recognition benchmarks, one being TIMIT. On a switchboard task, both phone recognition and other speech recognition scores were high; results showed that all three methods were very efficient, with an 8-10% reduction. Markovnikov et al. [38] examined deep neural networks for automatic Russian speech recognition, using RCNN, LSTM and CNN models in their realistic experimental setup on a very large Russian speech corpus [23], obtaining the best outcomes for the data sets at a good tolerance rate, i.e., 7.5%.

o. Stock Market Analysis

Arevalo et al. [7] proposed a precise way of forecasting and evaluating the stock market using Apple stock information, with a prediction target of 19,109 stock price samples. The study spanned three weeks and used deep neural network techniques; lateral accuracy measurements were used to calculate performance. Chong et al. [16] proposed a specific method of analyzing and forecasting stock returns with deep networks. The input from stock reports was used to predict the future with three unsupervised

techniques: Principal Component Analysis (PCA), the Auto-Encoder and the Restricted Boltzmann Machine (RBM). The Korea KOSPI 38 stock returns were the prediction target, with 73,041 samples chosen over a four-year sampling period. For performance measurement with the deep neural network approaches, they used NMSE, RMSE, MAE, and mutual information. The analysis showed that the DNN works far better than a linear, autoregressive model.

11.4 Challenges

Deep learning techniques prove their best and solve different complex applications with high standards and multiple abstraction layers. It is accepted that the precision, accuracy and responsiveness of deep learning methods are nearly equal to, or may even be greater than, those of human experts. But to truly experience the joy of victory, the technology must face several obstacles in the current situation. Here is the list of problems deep learning has to overcome:

- Input information must be handled continuously by deep learning algorithms.
- Algorithms must ensure transparency of implementation.
- Resource infrastructure, such as high-end GPUs and storage, is demanding.
- Enhanced big data analytics techniques are needed.
- Deep networks are referred to as black-box networks.
- Dynamic designs and hyperparameters must be handled.
- High processing capacity is needed.
- Training is affected by local minima.
- It is computationally uncompromising.
- It needs a lot of data.
- It is costly for complex computing problems.
- There is no clear theoretical basis.
- It is hard to find the right topology and complete training information.

Deep learning provides brand-new information storage programmes and infrastructures and enables computers to comprehend representations and objects.

11.5 Conclusion and Future Aspects

Deep learning is an emerging and rapidly moving branch of machine learning. Deep learning algorithms have been used in diverse areas and have increased their adaptability and success. The accomplishments, as well as the high consistency, of this prosperous learning certainly show the importance of the technology, clearly underlining the development trend of deep learning for future research and development. The hierarchy of layers and the supervision of learning are the important factors for a successful learning programme: the depth of layers matters for proper monitoring, and the classification of data benefits from how the database is maintained. Deep learning builds on the optimization of existing machine learning applications and on the innovative nature of its hierarchical levels of processing, and it can deliver improved outcomes in different applications, such as digital image processing and speech recognition. In the current and forthcoming period, deep learning can serve as a useful security application through facial recognition combined with speech recognition. Furthermore, digital image processing is a research discipline that can be applied in several domains. Deep learning is an exciting and contemporary [35] challenge for artificial intelligence, proving to be a real optimization. Finally, we find that, if the present trend continues, with the improved accessibility of computational resources and data, the use of deep learning in most applications will really start to be validated. The research is emphatic, young and confident, and the rapid growth of deep learning into ever more uses with excellent prosperity, e.g., natural language processing, remote sensing and healthcare, is expected to achieve high levels of satisfaction and success.

Future deep learning aspects include:

- Working deep networks with many noise types in complex and non-stationary noisy scenarios.
- Improving the functionality of deep networks by increasing the variety of features.
- Compatibility of deep neural networks within the unattended online learning environment.
- Deep reinforcement learning as the future path.
- Creating deep generative models for parametric speech recognition programmes with outstanding and advanced temporal modeling capabilities.

Deep learning methodologies have been published and attract great interest in every region where regular machine learning techniques have been applicable. Lastly, deep learning is probably the most useful, supervised and stimulating machine learning strategy: it enables researchers to quickly evaluate the unbelievable and hidden concerns associated with a programme to yield accurate and better results.

References

1. Abadi M, Paul B, Jianmin C, Zhifeng C, Andy D, Jeffrey D, Matthieu D (2016) Tensorflow: a system for large-scale machine learning. In: The proceedings of the 12th USENIX symposium on operating systems design and implementation (OSDI'16), vol 16, pp. 265–283.
2. Abbas Q, Ibrahim MEA, Jaffar MA (2018) A comprehensive review of recent advances on deep vision systems. Artif Intell Rev. https://doi.org/10.1007/s10462-018-9633-3
3. Affonso C, Rossi ALD, Vieria FHA, Carvalho ACPDLFD (2017) Deep learning for biological image classification. Expert Syst Appl 85:114–122.
4. Alwzwazy HA, Albehadili HA, Alwan YS, Islam NE (2016) Handwritten digit recognition using convolutional neural networks. In: Proceedings of international journal of innovative research in computer and communication engineering, vol 4(2), pp. 1101–1106.
5. Pavithra, M., Rajmohan, R., Kumar, T. A., & Ramya, R. (2021). Prediction and Classification of Breast Cancer Using Discriminative Learning Models and Techniques. Machine Vision Inspection Systems, Volume 2: Machine Learning-Based Approaches, 241-262.
6. Araque O, Corcuera-Platas I, Sánchez-Rada JF, Iglesias CA (2017) Enhancing deep learning sentiment analysis with ensemble techniques in social applications. Expert Syst Appl 77:236–246.
7. Arevalo A, Niño J, Hernández G, Sandoval J (2016) High-frequency trading strategy based on deep neural networks. In: Proceedings of the international conference on intelligent computing, pp. 424–436.
8. Ashiquzzaman A, Tushar AK (2017) Handwritten arabic numeral recognition using deep learning neural networks. In: Proceedings of IEEE international conference on imaging, vision & pattern recognition, pp. 1–4. https://doi.org/10.1109/ICIVPR.2017.7890866
9. Azar MY, Hamey L (2017) Text summarization using unsupervised deep learning. Expert Syst Appl 68:93–105.
10. Bhatt, C., Kumar, I., Vijayakumar, V., Singh, K. U., & Kumar, A.

(2020). The state of the art of deep learning models in medical science and their challenges. Multimedia Systems, 1-15.
11. Chen CH, Lee CR, Lu WCH (2016) A mobile cloud framework for deep learning and its application to smart car camera. In: Proceedings of the international conference on internet of vehicles, pp. 14–25. https://doi.org/10.1007/978-3-319-51969-2_2
12. Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2018) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans Pattern Anal Mach Intell 40(4):834–848.
13. Chen XW, Lin X (2014) Big data deep learning: challenges and perspectives. IEEE 2:514–525. https://doi.org/10.1109/ACCESS.2014.2325029
14. Cheng D, Gong Y, Changb X, Shia W, Hauptmannb A, Zhenga N (2018) Deep feature learning via structured graph Laplacian embedding for person re-identification. Pattern Recogn 82:94–104.
15. Chu J, Srihari S (2014) Writer identification using a deep neural network. In: Proceedings of the 2014 Indian conference on computer vision graphics and image processing, pp. 1–7.
16. Chong E, Han C, Park FC (2017) Deep learning network for stock market analysis and prediction: methodology, data representations and case studies. Expert Syst Appl 83:187–205.
17. Ciresan D, Meier U, Schmidhuber J (2012) Multi-column deep neural networks for image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3642–3649.
18. Dai Y, Wang G (2018) A deep inference learning framework for healthcare. Pattern Recogn Lett. https://doi.org/10.1016/j.patrec.2018.02.009
19. Dhieb T, Ouarda W, Boubaker H, Alilmi AM (2016) Deep neural network for online writer identification using Beta-elliptic model. In: Proceedings of the international joint conference on neural networks, pp. 1863–1870.
20. Falcini F, Lami G, Costanza AM (2017) Deep learning in automotive software. IEEE Softw 34(3):56–63. https://doi.org/10.1109/MS.2017.79
21. Gheisari M, Wang G, Bhuiyan MZA (2017) A survey on deep learning in big data. In: Proceedings of the IEEE international conference on embedded and ubiquitous computing (EUC), pp. 1–8.

22. Ghosh MMA, Maghari AY (2017) A comparative study on handwriting digit recognition using neural networks. In: Proceedings of the promising electronic technologies (ICPET), pp. 77–81.
23. Gurjar N, Sudholt S, Fink GA (2018) Learning deep representations for word spotting under weak supervision. In: Proceedings of the 13th IAPR international workshop on document analysis systems (DAS), pp. 7–12.
24. Hamid OA, Jiang H (2013) Rapid and effective speaker adaptation of convolutional neural network based models for speech recognition. In: INTERSPEECH, pp. 1248–1252.
25. Tamilarasan, A. K., Krishnadhas, S. K., Sabapathy, S., & Sarasam, A. S. T. (2021). A novel design of Rogers RT/duroid 5880 material based two turn antenna for intracranial pressure monitoring. Microsystem Technologies, 1-10. https://doi.org/10.1007/s00542-020-05122-y
26. Jia X (2017) Image recognition method based on deep learning. In: Proceedings of the 29th IEEE Chinese control and decision conference (CCDC), pp. 4730–4735.
27. Kaushal M, Khehra B, Sharma A (2018) Soft computing based object detection and tracking approaches: state-of-the-art survey. Appl Soft Comput 70:423–464.
28. Kannan RJ, Subramanian S (2015) An adaptive approach of tamil character recognition using deep learning with big data-a survey. Adv Intell Syst Comput: 557–567.
29. Krishnan P, Dutta K, Jawahar CV (2018) Word spotting and recognition using deep embedding. In: Proceedings of 13th IAPR international workshop on document analysis systems (DAS). https://doi.org/10.1109/das.2018.70
30. Lalmuanawma, S., Hussain, J., & Chhakchhuak, L. (2020). Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: A review. Chaos, Solitons & Fractals, 110059.
31. Ling ZH, Kang SY, Zen H, Senior A, Schuster M, Qian XJ, Meng HM, Deng L (2015) Deep learning for acoustic modeling in parametric speech generation: a systematic review of existing techniques and future trends. IEEE Signal Process Mag 32(3):35–52.
32. Liu PH, Su SF, Chen MC, Hsiao CC (2015) Deep learning and its application to general image classification. In: Proceedings of the international conference on informatics and cybernetics for computational social systems, pp. 1–4.

33. Loh BCS, Then PHH (2017) Deep learning for cardiac computeraided diagnosis: benefits, issues & solutions. M Health. https://doi.org/10.21037/mhealth.2017.09.01 34. Looks M, Herreshoff M, Hutchins D, Norvig P (2017) Deep learning with dynamic computation graphs. In: Proceedings of the international conference on learning representation, pp. 1–12. 35. Luckow A, Cook M, Ashcraft N, Weill E, Djerekarov E, Vorster B (2017) Deep learning in the automotive industry: applications and tools. In: Proceedings of the IEEE international conference on big data, pp. 3759–3768. 36. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:1– 10. 37. Lopez D, Rivas E, Gualdron O (2017) Primary user characterization for cognitive radio wireless networks using a neural system based on deep learning. Artif Intell Rev: 1–27. 38. Markovnikov N, Kipyatkova I, Karpov A, Filchenkov A (2018) Deep neural networks in russian speech recognition. Artif Intell Nat Lang Commun Comput Inf Sci 789:54–67. https://doi.org/10.1007/978-3-319-71746-3_5 39. Mohamed A, Dahl G, Geoffrey H (2009) Deep belief networks for phone recognition. In: Proceedings of the nips workshop on deep learning for speech recognition and related applications, pp. 1–9. 40. Mohsen AM, El-Makky NM, Ghanem N (2017) Author identification using deep learning. In: Proceedings of the 15th IEEE international conference on machine learning and applications, pp. 898–903. 41. Nguyen HD, Le AD, Nakagawa M (2015) Deep neural networks for recognizing online handwritten mathematical symbols. In: Proceedings of the 3rd IAPR IEEE Asian conference on pattern recognition (ACPR), pp. 121–125. 42. Noda K, Yamaguchi Y, Nakadai K, Okuno HG, Ogata T (2015) Audio-visual speech recognition using deep learning. Appl Intell 42(4):722–737. 43. Nweke HF, Teh YW, Al-garadi MA, Alo UR (2018) Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: state of the art and research challenges. Expert Syst Appl: 1–87. 44. Ota K, Dao MS, Mezaris V, Natale FGBD (2017) Deep learning for mobile multimedia: a survey. ACM Trans Multimed Comput Commun Appl (TOMM) (TOMM) 13(3s):34.

45. Prabhanjan S, Dinesh R (2017) deep learning approach for devanagari script recognition. Proc Int J Image Graph 17(3):1750016. https://doi.org/10.1142/S0219 46781 75001 64 46. Poznanski A, Wolf L (2016) CNN-N-gram for handwriting word recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2305–2314. 47. Puthussery AR, Haradi KP, Erol BA, Benavidez P, Rad P, Jamshidi M (2017) A deep vision landmark framework for robot navigation. In: Proceedings of the system of systems engineering conference, pp. 1– 6. 48. Razzak MI, Naz S, Zaib A (2018) Deep learning for medical image processing: overview, challenges and the future. ClassifBioApps Lect Notes Comput Vis Biomech 26:323–350. 49. Ripoll VJR, Wojdel A, Romero A, Ramos P, Brugada J (2016) ECG assessment based on neural networks with pertaining. Appl Soft Comput 49:399–406. 50. Rudin F, Li GJ, Wang K (2017) An algorithm for power system fault analysis based on convolutional deep learning neural networks. Int J Res Educ Sci Methods 5(9):11–18. 51. Salazar F, Toledo MA, González JM, Oñate E (2012) Early detection of anomalies in dam performance: a methodology based on boosted regression trees. Struct Control Health Monit 24 (11):2012– 2017. 52. Salazar F, Oñate E, Toledo MA (2017) A machine learning based methodology for anomaly detection in dam behaviour, CIMNE, monograph no M170, 250 pp, Barcelona summarization via deep learning. Expert Syst Appl, pp. 1–10. 53. Sanakoyeu A, Bautista MA, Ommer B (2018) Deep unsupervised learning of visual similarities. Pattern Recogn 78:331–343. 54. Santana LMQD, Santos RM, Matos LN, Macedo HT (2018) Deep neural networks for acoustic modeling in the presence of noise. IEEE Latin Am Trans 16(3):918–925. 55. Schmidt, J., Marques, M.R.G., Botti, S. et al. Recent advances and applications of machine learning in solid-state materials science. npj Comput Mater 5, 83 (2019). https://doi.org/10.1038/s41524-0190221-0 56. Soniya, Paul S, Singh L (2015) A review on advances in deep learning. In: Proceedings of IEEE workshop on computational intelligence: theories, applications and future directions (WCI), pp 1–6. https://doi.org/10.1109/wci.2015.74955 14

57. Sudholt S, Fink GA (2017) Attribute CNNs for word spotting in handwritten documents. Int J Doc Anal Recognit (IJDAR). https://doi.org/10.1007/s1003 2-018-0295-0 58. Thomas S, Chatelain C, Heutte L, Paquet T, Kessentini Y (2015) A deep HMM model for multiple keywords spotting in handwritten documents. Pattern Anal Appl 18(4):1003–1015. 59. Ucar A, Demir Y, Guzelis C (2017) Object recognition and detection with deep learning for autonomous driving applications. Int Trans Soc Model Simul 93(9):759–769. 60. Vasconcelos CN, Vasconcwlos BN (2017) Experiment using deep learning for dermoscopy image analysis. Pattern Recognit Lett. https://doi.org/10.1016/j.patre c.2017.11.005 61. Wang L, Sng D (2015) Deep learning algorithms with applications to video analytics for a smart city: a survey. arXiv, preprint arXiv: 1512.03131 62. Wang T, Wen CK, Wang H, Gao F, Jiang F, Jin S (2017) Deep learning for wireless physical layer: opportunities and challenges. China Commun 14(11):92–111. 63. Wang Y, Liu M, Bao Z (2016) Deep learning neural network for power system fault diagnosis. In: Proceedings of the 35th Chinese control conference, 1–6. 64. Wicht B, Fischer A, Hennebert J (2016) Deep learning features for handwritten keyword spotting. In: Proceedings of the 23rd international conference on pattern recognition (ICPR). https://doi.org/10.1109/icpr.2016.79001 65. 65. Wu Z, Swietozanski P, Veaux C, Renals S (2015) A study of speaker adaptation for DNN- based speech synthesis. In: Proceedings of the interspeech conference, pp. 1–5. 66. Xiao B, Xiong J, Shi Y (2016) Novel applications of deep learning hidden features for adaptive testing. In: Proceedings of the 21st Asia and South Pacific design automation conference, pp. 743–748. 67. Xing L, Qiao Y (2016) DeepWriter: a multi-stream deep CNN for text-independent writer identification. Comput Vis Pattern Recognit. arXiv: 1606.06472 68. Xue S, Hamid OA, Jiang H, Dai L, Liu Q (2014) Fast adaptation of deep neural network based on discriminant codes for speech recognition. IEEE/ACM Trans Audio Speech Lang Process 22(12):1713–1725. 69. Yadav U, Verma S, Xaxa DK, Mahobiya C (2017) A deep learning

based character recognition system from multimedia document. In: Proceedings of the international conference on innovations in power and advanced computing technologies, pp. 1–7. 70. Yu X, Wu X, Luo C, Ren P (2017) Deep learning in remote sensing scene classification: a data augmentation enhanced convolutional neural network framework. GIScience & Remote Sens: 1–19. 71. Yuan X, Xie L, Abouelenien M (2018) A regularized ensemble framework of deep learning for cancer detection from multi-class, imbalanced training data. Pattern Recogn 77:160–172 72. Zhang XL, Wu J (2013) Denoising deep neural networks based voice activity detection. In: Proceedings of the IEEE international conference on acoustics, speech and signal processing, pp. 853–857. 73. Zhao C, Chen K, Wei Z, Chen Y, Miao D, Wang W (2018) Multilevel triplet deep learning model for person reidentification. Pattern Recogn Lett. https://doi.org/10.1016/j.patre c.2018.04.029 74. Zhou X, Gong W, Fu W, Du F (2017) Application of deep learning in object detection. In: Proceedings of the IEEE/ACIS 16th international conference on computer and information science (ICIS), pp. 631–634. 75. Zhu XX, Tuia D, Mou L, Xia GS, Zhang L, Xu F, Fraundorfer F (2017) Deep learning in remote sensing: a comprehensive review and list of resources. IEEE Geosci Remote Sens Mag 5(4):8–36. *Corresponding author: [email protected], ORCID ID: 0000-

0002-7297-9161 K. Maharajan: ORCID ID: 0000-0003-4353-3133 K. Jayakumar: ORCID ID: 0000-0002-6044-6667 G. Visalaxi: ORCID ID: 0000-0002-2548-6587

12 Effects of Radiation Absorption and Aligned Magnetic Field on MHD Casson Fluid Past an Inclined Vertical Porous Plate in Porous Media
Raghunath Kodi1*, Ramachandra Reddy Vaddemani2 and Obulesu Mopuri3
1Department of H&S, Bheema Institute of Technology and Science, Adoni, A.P, India
2Department of H&S, K.S.R.M College of Engineering, Kadapa, A.P, India
3Department of Mathematics, S.K. University, Anantapur, A.P, India
Abstract
This article analyses the effects of an aligned magnetic field, chemical reaction, and radiation absorption on the free convective motion of a viscous, incompressible, electrically conducting fluid past an inclined plate through a porous medium, where the free stream velocity obeys the exponentially increasing small perturbation law. The non-dimensional governing equations are solved using two-term harmonic and non-harmonic functions. Velocity, temperature, and concentration distributions are discussed in detail through graphs for the different parameter values entering the problem. Expressions for the skin friction, rate of heat transfer, and rate of mass transfer are also derived. This study helps in monitoring molten iron flow in the steel industry, liquid metal cooling in nuclear reactors, and meteorology.
Keywords: Aligned magnetic field, radiation absorption, MHD, porous medium, inclined angle, mixed convective flow

12.1 Introduction
With its relevance to geophysics, astrophysics, and many engineering issues, such as coolants for nuclear reactors, boundary control in aerodynamics, and cooling towers, the subject of free convective heat and mass transfer of a fluid across an inclined heated surface under the influence of a magnetic field has become quite fascinating. Mass transfer is one of the most frequent phenomena in the chemical, physical, and biological sectors. When the fluid is at rest, mass transfer occurs only by molecular diffusion resulting from concentration gradients. Convective heat and mass transmission mechanisms are identical in design for low mass concentrations in the fluid and limited mass transfer rates. Several studies of simultaneous mass and heat transmission have been performed in conjunction with various physical environments. In specific chemical engineering systems, a chemical reaction occurs between a foreign mass and the fluid in which the plate is moving. Such processes occur in various manufacturing applications, including the manufacture of polymers, the development of ceramics or glassware, and fruit preparation. Mohammed Ibrahim, Sankar Reddy and Roja [1] numerically researched the problem of an unsteady two-dimensional MHD flow past a porous plate embedded in a porous medium with chemical reactions. Owing to their diverse significance, these flows were examined by several scholars, including Orhan and Ahmet [7], Yih [3], Srinivas and Muthuraj [6], Raju and Varma [5], Elbashbeshy [2], and Mahapatra, Dholey and Gupta [4]. The simultaneous movement of heat and mass through porous media features in multiple technical and physical developments, such as large-scale porous drying, geothermal reservoirs, enhanced oil recovery, thermal insulation, and nuclear reactor cooling. Raghunath, Obulesu, and Siva Prasad [8] derived the heat and mass transmission of unsteady MHD flow through a porous medium between two porous vertical plates. Suresh Babu, Nagesh, Raghunath, and Siva Prasad [9] addressed the finite element analysis of free convection heat transfer flow in a vertical conical annular porous medium. Nagendra Prasad, Nagesh, Raghunath, and Siva Prasad [10] studied MHD visco-elastic fluid flow over a rotating porous plate with heat source and chemical reactions. The impact of thermal radiation on the flow through a wedge embedded in a porous medium filled with a nanofluid was discussed by Chamkha et al. [11]. Kumaran and Sandeep [12] presented the effects of cross-diffusion on the rotating parabolic flow of MHD non-Newtonian fluids. Koriko et al. [13] considered the flow over the upper horizontal surface of a paraboloid of revolution with Brownian motion and thermophoresis. Very recently, researchers [14–16] investigated MHD flow over numerous flow geometries, taking into account thermal radiation and the Cattaneo-Christov heat flux. The influence of nonlinear thermal radiation on the MHD flow between rotating plates with homogeneous-heterogeneous reactions was studied by Ramana Reddy et al. [17]. The effect of radiation and chemical reaction on the MHD boundary-layer flow along a porous vertical wall was addressed by Shaik and Karna [18], who observed that increasing the heat generation parameter or the Eckert number minimises the Nusselt number. The effect of non-uniform heat source/sink parameters on the flow due to a thin stretching sheet was considered by Ramana Reddy et al. [19]; it was observed that the temperature profiles are improved by rising values of the non-uniform heat source/sink parameters. The effect of heat and mass transfer on the flow of a nanofluid over a moving/stationary vertical plate was explored by Mahantesh et al. [20].
This research focuses on studying the free convection movement of a viscous, incompressible, and electrically conducting fluid moving past an inclined vertical plate through a porous medium, with aligned magnetic field, chemical reaction, and radiation absorption effects.

12.2 Physical Configuration and Mathematical Formulation
The laminar, incompressible, viscous, heat-absorbing, electrically conducting fluid flow past an inclined plate embedded in a porous medium, with chemical reaction and Dufour effects, is considered. The flow is assumed to be along the x-direction, taken along the semi-infinite inclined plate, with the y-axis normal to it. A magnetic field of uniform strength B0 is introduced at an angle α to the flow path. The free stream velocity follows the exponentially increasing small perturbation law. The analytical solutions of this research follow [1] (Figure 12.1). (12.1)

Figure 12.1 Physical configuration of the problem [1]. (12.2)

(12.3)

(12.4) The velocity, temperature, and concentration boundary conditions of the flow are (12.5)
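For orientation, the momentum, energy, and species equations for this class of problem conventionally take the following Boussinesq boundary-layer form; this is an illustrative sketch using the symbols of the figures (plate inclination α, alignment angle γ, heat source Q0, radiation absorption coefficient Q1, chemical reaction rate Kr), not a verbatim statement of Equations (12.2)–(12.4):

\[
\frac{\partial u'}{\partial t'} = \nu\Big(1+\frac{1}{\beta}\Big)\frac{\partial^2 u'}{\partial y'^2}
+ g\beta_T\cos\alpha\,(T'-T'_\infty) + g\beta_C\cos\alpha\,(C'-C'_\infty)
- \frac{\sigma B_0^2\sin^2\gamma}{\rho}\,u' - \frac{\nu}{K'}\,u'
\]
\[
\frac{\partial T'}{\partial t'} = \frac{\kappa}{\rho c_p}\,\frac{\partial^2 T'}{\partial y'^2}
- \frac{Q_0}{\rho c_p}\,(T'-T'_\infty) + Q_1\,(C'-C'_\infty)
\]
\[
\frac{\partial C'}{\partial t'} = D\,\frac{\partial^2 C'}{\partial y'^2} - K_r\,(C'-C'_\infty)
\]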

Introducing the following non-dimensional quantities, (12.6)

Using Equation (12.6), the field equations (12.2)–(12.4) can be expressed in non-dimensional form. (12.7)

(12.8)

(12.9) The corresponding boundary conditions reduce to the following dimensionless form. (12.10)

Equations (12.7)–(12.9) form a coupled system of partial differential equations that cannot be solved in closed form. However, they can be reduced to a set of ordinary dimensionless differential equations that can be solved analytically. This is accomplished by representing the velocity, temperature, and concentration as (12.11)
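The two-term expansion typically used here takes the following standard form (an illustrative sketch; the chapter's Equation (12.11) may differ in symbols), where ε ≪ 1 measures the small free-stream disturbance:

\[
u(y,t) = u_0(y) + \varepsilon e^{nt}\,u_1(y) + O(\varepsilon^2),
\]
\[
\theta(y,t) = \theta_0(y) + \varepsilon e^{nt}\,\theta_1(y) + O(\varepsilon^2),
\]
\[
\phi(y,t) = \phi_0(y) + \varepsilon e^{nt}\,\phi_1(y) + O(\varepsilon^2).
\]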

Substituting Equation (12.11) into Equations (12.7)–(12.9) and equating the coefficients of like terms, neglecting higher-order terms in Ec, we obtain:

Zero order terms: (12.12)

(12.13) (12.14) First order terms: (12.15)

(12.16) (12.17) The respective boundary conditions are (12.18) The following solutions of Equations (12.12)–(12.17) are obtained under the constraints (12.18): (12.19) (12.20)

(12.21) (12.22)

(12.23) (12.24) Substituting these solutions into Equation (12.11), we obtain the velocity, temperature, and concentration across the boundary layer.

(12.25)

(12.26)

(12.27)

12.2.1 Skin Friction (12.28)

12.2.2 Nusselt Number (12.29)

12.2.3 Sherwood Number (12.30)
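In non-dimensional form, these wall quantities are conventionally defined as the following wall gradients (an illustrative reconstruction; the chapter's Equations (12.28)–(12.30) may carry additional normalisation factors):

\[
\tau = \Big(1+\frac{1}{\beta}\Big)\left.\frac{\partial u}{\partial y}\right|_{y=0},\qquad
Nu = -\left.\frac{\partial \theta}{\partial y}\right|_{y=0},\qquad
Sh = -\left.\frac{\partial \phi}{\partial y}\right|_{y=0}.
\]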

12.3 Discussion of Results
The findings demonstrate the effects of the governing parameters. In this analysis, the following default values are adopted for the computations: β=5, Gr=5.0, α=30, γ=30, Gm=5.0, R=0.5, Sc=0.6, Q=0.1, M=1, Ko=1, Kr=0.1. All graphs correspond to these values unless otherwise stated in the relevant figure.

12.3.1 Velocity Profiles
Figure 12.2 demonstrates the impact of the Casson parameter β on the velocity profile. We find that the velocity increases with rising values of β. Figure 12.3 indicates the impact of the inclined angle α on the velocity graph; the velocity reduces as α increases. Figure 12.4 demonstrates the effect of the aligned magnetic field parameter γ on the velocity profile: the velocity declines as γ rises. Figures 12.5 and 12.6 display the effect of Gr and Gm on the velocity. These figures indicate that the fluid velocity increases with growing Gr and Gm values; this is attributed to a buoyancy effect that improves the momentum. Figure 12.7 shows the influence of the magnetic field on the velocity distribution. The existence of the magnetic field renders the fluid flow resistive: it is the Lorentz force that slows down the fluid velocity. Figure 12.8 shows the velocity against the span-wise coordinate for different values of the permeability parameter; we conclude that the velocity diminishes as Ko increases.

Figure 12.2 Velocity outline for several values of β. R=0.5, γ = α = π/6, Pr=0.71, Ko=1, Kr=0.1, M=1, Q=0.1, R=1, Gm=5, Ec=0.001, Sc=0.6, Gr=5.

Figure 12.3 Velocity outline for several values of α. R=0.5, β=5, γ = π/6, Pr=0.71, Ko=1, Kr=0.1, M=1, Q=0.1, R=1, Gm=5, Ec=0.001, Gr=5, Sc=0.6.

Figure 12.4 Velocity outlines for several values of γ. R=0.5, α = π/6, β=5, Sc=0.6, Pr=0.71, Gr=5, Ko=1, Kr=0.1, M=1, Q=0.1, R=1, Gm=5, Ec=0.001.

Figure 12.5 Velocity outline for several values of Gr. R=0.5, Pr=0.71, Ko=1, γ=30, α=30, Kr=0.1, M=1, Q=0.1, R=1, Gm=5, Ec=0.001, Sc=0.6.

Figure 12.6 Velocity outline for several values of Gm. R=0.5, M=1, Sc=0.6, Pr=0.71, Ko=1, α=30, γ=30, Kr=0.1, Gr=5, Q=0.1, R=1, Ec=0.001.

Figure 12.7 Velocity outline for several values of M. R=0.5, Sc=0.6, Pr=0.71, Ko=1, γ=30, β=5, α=30, Kr=0.1, Gr=5, Q=0.1, R=1, Gm=5, Ec=0.001.

Figure 12.8 Velocity outline for several values of Ko. R=0.5, M=1, Sc=0.6, Pr=0.71, β=5, α=30, Kr=0.1, γ=30, Gr=5, Q=0.1, R=1, Gm=5, Ec=0.001.

12.3.2 Temperature Profiles
Figure 12.9 presents the effect of the heat source parameter Q on the temperature distribution: the temperature declines as the heat source Q rises. Figure 12.10 illustrates the effect of different Prandtl numbers on the temperature profiles. The temperature falls as the Prandtl number rises. The explanation is that smaller Prandtl numbers correspond to higher thermal conductivity of the fluid, so heat diffuses from the heated surface more quickly than for larger Prandtl numbers. The influence of the radiation absorption parameter R is seen in Figure 12.11: the temperature is found to rise with R.

12.3.3 Concentration Profiles
Figure 12.12 illustrates the concentration profiles for various chemical reaction values Kr. From this figure, the concentration decreases as Kr rises. Figure 12.13 displays species concentration profiles for different Schmidt number Sc values. The concentration boundary-layer thickness diminishes with Sc; the concentration decreases steadily and reaches the free-stream state sooner for high Sc values.

Figure 12.9 Temperature outline for several values of Q. R=0.5, M=1, Sc=0.6, Pr=0.71, Ko=1, β=5, α=30, γ=30, Kr=0.1, Gr=5, R=1, Gm=5, Ec=0.001.

Figure 12.10 Temperature outline for several values of Pr. R=0.5, M=1, Sc=0.6, Q=0.1, Ko=1, β=5, α=30, γ=30, Kr=0.1, Gr=5, R=1, Gm=5, Ec=0.001.

Figure 12.11 Temperature outline for several values of R. Q=0.5, M=1, Sc=0.6, Pr=0.71, Ko=1, β=5, α=30, γ=30, Kr=0.1, Gr=5, R=1, Gm=5, Ec=0.001.

Figure 12.12 Concentration outline for several values of Kr.
Table 12.1 presents numerical skin-friction values for the Grashof number (Gr), modified Grashof number (Gm), magnetic parameter (M), porosity parameter (Ko), aligned magnetic field parameter (γ), inclined angle (α), and Casson parameter (β). Table 12.1 indicates that the skin friction increases with a rise in the Grashof number (Gr), modified Grashof number (Gm), and Casson parameter (β), whereas it decreases under the control of the magnetic parameter, the porosity parameter, the aligned magnetic field parameter, and the angle of inclination. Table 12.2 indicates that the Nusselt number rises with a rise in the Eckert number and the radiation absorption parameter (R); it reduces under the control of the Prandtl number and the heat absorption parameter. Table 12.3 shows that the Sherwood number increases with growing Soret parameter values and reduces with increasing Schmidt number and chemical reaction parameter.

Figure 12.13 Concentration outline for several values of Sc.

Table 12.1 Skin friction (τ); each row varies one parameter with the others held at their default values.
Gr = 5, 10, 15 → τ = 7.1611, 10.6733, 14.1925
Gm = 5, 10, 15 → τ = 7.1611, 10.8155, 14.4673
M = 1.5, 2, 2.5 → τ = 6.9677, 6.4417, 5.8393
Ko = 1, 5, 7.5 → τ = 7.1611, 3.6578, 0.3186
γ = π/10, π/6, π/3 → τ = 0.3187, 0.3186, 0.3185
α = π/10, π/6, π/3 → τ = 0.3499, 0.3186, 0.1840
β = 0.25, 0.50, 0.75 → τ = 5.0642, 5.2755, 5.4549

Table 12.2 Nusselt number (Nu); each row varies one parameter with the others held at their default values.
Pr = 0.75, 0.80, 0.85 → Nu = -0.6312, -0.6828, -0.7342
Ec = 1, 5, 10 → Nu = -0.5879, -0.5810, -0.5724
Q = 0.1, 0.15, 0.20 → Nu = -0.4717, -0.5144, -0.5366
R = 0.25, 0.50, 0.75 → Nu = -7.0111, -4.6990, -2.8333

Table 12.3 Sherwood number (Sh).
Sc = 0.6, 1.2, 1.8 → Sh = -0.6145, -1.1013, -2.5014
Kr = 0.001, 0.003, 0.005 → Sh = 0.3581, 0.1430, 0.0052

12.4 Conclusion
The velocity decreases with increasing inclination angle α, aligned magnetic field parameter γ, magnetic parameter M, and permeability parameter Ko, while the reverse trend is observed for the Grashof number, the modified Grashof number, and the Casson parameter β. The temperature distribution declines as the Prandtl number and the heat source parameter Q rise, and increases with the radiation absorption parameter R. With the rise of the chemical reaction and Schmidt parameters, the concentration boundary layer decreases, while it rises with increasing values of the Soret parameter. The skin friction is enhanced by increases in Gr, Gm, and β, and is reduced by the effects of M, Ko, and the angle of inclination α. The Nusselt number rises with Ec and R, while under the control of Pr and Q it decreases. The Sherwood number rises with a rise in the Soret parameter, while for Sc and Kr the reverse behaviour is observed.

References
1. S Mohammed Ibrahim, T Sankar Reddy, P. Roja, Radiation Effects on Unsteady MHD Free Convective Heat and Mass Transfer Flow Past a Vertical Porous Plate Embedded in a Porous Medium with Viscous Dissipation, International Journal of Innovative Research in Science, Engineering and Technology, Vol. 3, Issue 11.
2. Elbashbeshy EMA. Heat and mass transfer along a vertical plate with variable surface temperature and concentration in the presence of the magnetic field. International Journal of Engineering Sciences 1997; 34:515–22.
3. Yih KA. The effect of transpiration on coupled heat and mass transfer in mixed convection over a vertical plate embedded in a saturated porous medium. International Communication of Heat and Mass Transfer 1997; 24(2):265–75.
4. Mahapatra TR, Dholey S, Gupta AS. Momentum and heat transfer in the MHD stagnation point flow of a visco-elastic fluid toward a stretching surface. Meccanica 2007; 42:263–72.
5. Raju MC, Varma SVK. Unsteady MHD free convection oscillatory Couette flow through a porous medium with periodic wall temperature. J Fut Eng Technol 2011; 6(4):7–12.
6. Srinivas S, Muthuraj R. MHD flow with slip effects and temperature-dependent heat source in a vertical wavy porous space. Chem Eng Commun 2010; 197(11):1387–403.
7. Orhan A, Ahmet K. MHD mixed convective heat transfer flow about an inclined plate. Int J Heat Mass Transfer 2009; 46:129–36.
8. K Raghunath, M Obulesu, R Siva Prasad. Heat and mass transfer on unsteady MHD flow through a porous medium between two vertical porous plates. AIP Conference Proceedings 2220, 130003-1–130003-6 (2020); https://doi.org/10.1063/5.0001103
9. G. Suresh Babu, G. Nagesh, K. Raghunath, R. Siva Prasad. Finite Element Analysis of Free Convection Heat Transfer Flow in a Vertical Conical Annular Porous Medium. International Journal of Applied Engineering Research (Scopus), Volume 14, Number 1 (2019), pp. 262-277.
10. G.V. Nagendra Prasad, G. Nagesh, K. Raghunath, R. Siva Prasad. MHD flow of a visco-elastic fluid over an unbounded rotating porous plate with heat source and chemical reaction. International Journal of Applied Engineering Research (Scopus), Volume 13, Number 24 (2019), pp. 16927-16938.
11. A.J. Chamkha, S. Abbasbandy, A.M. Rashad, K. Vajravelu, Radiation Effects on Mixed Convection over a Wedge Embedded in a Porous Medium Filled with a Nanofluid. Transport in Porous Media 91 (2012) 261–279.
12. G. Kumaran, N. Sandeep, Thermophoresis and Brownian moment effects on parabolic flow of MHD Casson and Williamson fluids with cross diffusion, Journal of Molecular Liquids 233, 262–269, 2017.
13. Tamilarasan, A. K., Krishnadhas, S. K., Sabapathy, S., & Sarasam, A. S. T. (2021). A novel design of Rogers RT/duroid 5880 material based two turn antenna for intracranial pressure monitoring. Microsystem Technologies, 1-10. https://doi.org/10.1007/s00542020-05122-y.
14. S. Saleem, M. Awais, S. Nadeem, N. Sandeep, T. Mustafa, Theoretical analysis of upper-convected Maxwell fluid flow with Cattaneo-Christov heat flux model, Chinese Journal of Physics (In Press), 2017.
15. M. Jayachandra Babu, N. Sandeep, MHD non-Newtonian fluid flow over a slendering stretching sheet in the presence of cross-diffusion effects, Alexandria Engineering Journal (2016) 55, 2193–2201.
16. M. Khader, A. M. Megahed, Numerical Solution for Boundary Layer Flow due to a Nonlinearly Stretching Sheet with Variable Thickness and Slip Velocity, European Physical Journal Plus, 128 (2013) 100-108.
17. Ramana Reddy, J. V., Sugunamma, V., & Sandeep, N. (2016). Effect of non-linear thermal radiation on MHD flow between rotating plates with homogeneous-heterogeneous reactions. International Journal of Engineering Research in Africa, 20, 130-143.
18. Shaik, M. I., & Karna, S. (2016). Influence of Thermo-Diffusion and Heat Source on MHD Free Convective Radiating Dissipative Boundary Layer of Chemically Reacting Fluid Flow in a Porous Vertical Surface. Journal of Advances in Applied Mathematics, 1(1), 17.
19. Ramana Reddy, J. V., Sandeep, N., Sugunamma, V., & Anantha Kumar, K. (2016). Influence of non-uniform heat source/sink on MHD nanofluid flow past a slendering stretching sheet with slip effects. Global Journal of Pure and Applied Mathematics, 12(1), 247-254.
20. Adimoolam M., John A., Balamurugan N.M., Ananth Kumar T. (2021) Green ICT Communication, Networking and Data Processing. In: Balusamy B., Chilamkurti N., Kadry S. (eds.) Green Computing in Smart Cities: Simulation and Techniques. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-03048141-4_6.
*Corresponding author: [email protected], ORCID: orcid.org/0000-0002-1622-3634
Ramachandra Reddy Vaddemani: ORCID: orcid.org/0000-0002-5157-3525
Obulesu Mopuri: ORCID: orcid.org/0000-0001-9483-2411

13 Integrated Mathematical Modelling and Analysis of Paddy Crop Pest Detection Framework Using Convolutional Classifiers
R. Rajmohan*, M. Pavithra, P. Praveen Kumar, S. Usharani, P. Manjubala and N. Padmapriya
Department of CSE, IFET College of Engineering, Villupuram, Tamil Nadu, India
Abstract
Paddy crop cultivation is one of the foremost financial activities of the southern provinces of India. Such paddy crops are affected by the assault of pests and the diseases caused by them. In this manner, paddy crop pest identification is essential to the rural economic improvement of the southern Indian landmass. An efficient pest identification framework based on histogram-gradient feature processing and a deep CNN algorithm with SVM classification is proposed for improving paddy crop cultivation. An acoustic probe is modeled for detecting the presence of pests in the paddy field. A deep CNN algorithm is used for noise reduction in unclassified pest images to improve classification under a linear SVM. The identification of pests from the de-noised images is performed using a linear SVM classifier along with histogram variants embedded with gradient features. The test results show that the deep CNN+SVM using HOG/SIFT/SURF features produces high accuracy in all insect identification through feature properties.
Keywords: SVM, CNN, deep learning, HOG, SURF, SIFT

13.1 Introduction
Because pest images can be further processed after acquisition, images are commonly used as data, and identifying objects in them is one of the most challenging tasks in the field of computer vision (CV) [1]. Computer vision refers to the automated retrieval, examination, and comprehension of useful information from a single image or a series of pictures. Computer vision work involves clustering, localization, object identification, segmentation, etc. Object detection is the mechanism by which real-life instances are detected in images or videos; bounding boxes and class labels are generated to distinguish and locate several targets. In general, object detection consists of three steps: feature extraction from the input image, selection of the detection window, and construction of a classifier. Conventional detection algorithms consist of handcrafted features and shallowly trained architectures that lack the power and resilience to handle heterogeneity. Applying a wireless sensor network for paddy crop monitoring and pest identification [1, 3, 16] is needed nowadays. Joining the sensor data with a mobile application benefits farmers, letting them use their knowledge in a proficient way in order to extract the best outcomes from their paddy crop cultivation. Our system provides a framework that aggregates the sensor data for pest prediction and identification in paddy crops. The framework can scale because every farmer's requests and the subsequently gathered data may represent an essential asset for the future. The major contributions of the research are:
To build a framework for facilitating real-time monitoring and identification of pests in paddy crops.
To utilize sensors for crop field monitoring and pest management.
To construct a database to store paddy pest information.
To find out the insect based on crop images using a pattern-matching algorithm.
To build a database for paddy insect syndromes and treatment possibilities.
The rest of the paper is organized as follows: Section 13.2 introduces the existing methodologies for pest and disease identification in paddy crops. Section 13.3 details the structural archetype of the recommended smart pest and disease prediction and identification in paddy crops. Section 13.4 outlines the database system developed for storing the current paddy crop insect management information. Section 13.5 covers the implementation of deep CNN and SVM under SIFT, SURF, and HOG features. Section 13.6 concludes the proposed work and implicit future deployments.

13.2 Literature Survey
Our system focuses on designing a framework that accumulates sensor data for disease prediction and uses image processing for disease identification. No such frameworks have been developed practically for disease prediction in paddy crops. Many researchers have proposed disease prediction models, but they still lack valid field results. Numerous sensors have been field-implemented for data accumulation on crop quality, but they have data accuracy problems in prediction [2, 17]. Notable sensors are analyzed and summarized in Table 13.1. The SIFT and SURF descriptors were determined and applied under a support vector machine model for classification. An agricultural pest insect identification model [9] was developed, which computes saliency map features for learning-based categorization using deep CNN. Later, a device for identifying planthopper insects [10] was modeled and tested on rice stems using an SVM with GLCM features and an SVM classifier with HOG features. Consequently, a modeling concept [8, 11] was proposed, which identifies pests based on the latitudinal and longitudinal coordinates of the insect position in the input images. The following frontier was determined to be common to all systems and is addressed in our framework: the need for a large dataset for efficient identification. We built our own database, which includes a variety of pest images. With the inclusion of combined gradient and histogram-based features, we perform in-depth CNN training and SVM classification with SIFT, SURF, and HOG features.

13.3 Proposed System Model
The proposed smart paddy pest management model has a sensor-based network incorporated into a deep learning framework. The progressive approach of the recommended system embraces two modules: pest prediction and pest identification. Pest prediction is about ensuring whether the crop is pest-affected based on the sensor data. Pest identification is all about detecting what type of problem has occurred in the paddy crop. The results of both processes are controlled and processed through the deep learning framework.

Table 13.1 Sensors and their methodologies.
Ultrasonic transducers | Method: Damage is detected by measuring the distance taken by the acoustic wave to travel through a leaf; damage is detected beyond 13 mm. | Drawback: Damage is always identified with diameter below 5 mm, so early-stage detection is impossible.
Accelerometer sensor | Method: Damage is detected through vibrations of the leaf. | Drawback: Background noise and wind interference make the prediction of insect sound difficult.
Microphone | Method: Damage in wheat kernels is detected by the acoustic waves of insects. | Drawback: The signals produced by the insects are not properly inferred by microphones.
Optical vibration sensor | Method: Vibration of the leaf is used for pest detection. | Drawback: Measuring leaf vibration is interfered with by external sources.
Accelerometer sensor | Method: The sensor is fixed to the stem for measuring vibration. | Drawback: Sensor attachment will damage the plant growth.
Piezoelectric sensor | Method: Larvae of insects are identified through vibrations. | Drawback: Noisy environments prove hard for finding insect larvae.

13.3.1 Disease Prediction
The proposed disease prediction model is depicted in Figure 13.1 with all the required components. Disease prediction is performed with the help of acoustic probes. The probes are designed as a combination of acoustic sensors and Leaf Area Index (LAI) sensors. Deploying these sensors helps to detect pest infestation in the early stages and reduces the damage to crops to a greater extent. LAI measurement is a profitable method for monitoring the presence of larvae and insects in the paddy field. After detecting insects in the field, the location is intimated to the farmer through a mobile app, as sketched below. The farmer personally verifies the current situation of the crop and takes the necessary actions. Once the farmer is aware of the infected place, a photonic image of the diseased plant is captured and uploaded into the mobile. Image processing is performed to detect the disease and the pest infestation. The precautionary measures to be taken are conveyed to the farmer through the mobile app.
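A schematic Python sketch of this prediction trigger is given below; the threshold values, the ProbeReading fields, and the notify_farmer hook are hypothetical placeholders illustrating the acoustic-plus-LAI rule, not the deployed system.

from dataclasses import dataclass

@dataclass
class ProbeReading:
    acoustic_level: float   # insect-noise level reported by the acoustic sensor
    lai: float              # current Leaf Area Index from the LAI sensor
    location: str           # plot/grid identifier within the paddy field

NOISE_THRESHOLD = 0.7   # hypothetical calibrated insect-noise level
LAI_DROP = 0.15         # hypothetical fractional LAI drop indicating feeding damage

def predict_pest(reading: ProbeReading, baseline_lai: float) -> bool:
    """Flag a plot when insect noise is high or leaf area has dropped markedly."""
    lai_loss = (baseline_lai - reading.lai) / baseline_lai
    return reading.acoustic_level > NOISE_THRESHOLD or lai_loss > LAI_DROP

def notify_farmer(reading: ProbeReading) -> None:
    # Stands in for the mobile-app push notification of the real framework.
    print(f"Possible pest infestation at {reading.location}")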

Figure 13.1 Paddy disease detection framework.

13.3.2 Insect Identification Algorithm
For detecting various paddy crop insects such as Stem borers, Gall midge, Planthopper, Leaf-folder, Case worm, Gundhi Bug, Mealy Bug, Swarming caterpillar, etc., the image processing procedures, namely image acquisition, image pre-processing, image segmentation, feature extraction, and classification of an image, are introduced [5].
1. Image acquisition
The pest-affected paddy insect is captured through a camera, as shown in Figure 13.2. To locate the correct infesting bug, the RGB shade of the cropped picture must be noticeable. This is accomplished with the assistance of a mobile phone camera held at the top of the plant. These pictures are stored in the database in forms capable of further picture augmentation and processing.

Figure 13.2 RGB color image of Gall midge in paddy crop.

2. Image preprocessing
The fundamental idea of this stage is to update the image data and improve the picture properties. Image pre-processing is essential for the display, storage, and transmission of the picture. It achieves picture transformation for paddy insect pictures using an RGB colour format. The working model of the preprocessing step for the proposed framework is depicted in Figure 13.3.
3. Image segmentation
The image quality is influenced by the intensity of the camera, flash, external condition factors, ambient light, frequency distortion, etc. These elements act as noise in pictures. They can be removed by utilizing the deep convolutional neural network (CNN) algorithm. The deep convolutional algorithm is used to reduce noise for insect identification in paddy crops. Utilizing a CNN gives the plant-growth specialist an advantage in recognizing pests at early stages, virtually, by investigating the complex features through the mobile application. CNNs are adopted for disease identification in paddy due to their highly automated feature learning from the processed paddy pictures. Low-resolution paddy insect disease pictures can likewise be recognized using deep learning designs. On the whole, for pattern recognition in paddy crop pictures, the best classification technique is deep learning. A deep CNN architecture is proposed, as shown in Figure 13.4 [7, 18], unlocking a feature-oriented method for the deep learning of image topographies. It is a layer-based design for the in-depth learning of picture attributes, with convergent hidden layers among the convolutional and de-convolutional layers. Continual linear and non-linear functions constitute the deep CNN: the linear functions are expressed explicitly by convolution operations, and the non-linear functions represent the activation processes involved. The convolution layer understands the local properties of the paddy insect images and induces complex feature representations of paddy diseases. The deeper the design, the more the rice disease pictures are abstracted. The fundamental input-output mapping of the deep CNN is articulated as

Figure 13.3 Pre-processing of Gall midge insect.

Figure 13.4 Deep CNN model.
(13.1)
where y is the deep CNN output, x refers to the input data, Ci is the convolution matrix of the i-th layer, Zi is the bias of the i-th convolution layer, fi is a non-linear function, and the composite parameter set includes the Ci and Zi. The objective of the deep CNN is to find an optimum parameter set with M input images that minimizes the noise, as shown in Figure 13.5:
(13.2)
In the above expression, xm and ym are the deciding parameters (the m-th input and target images). Here, l characteristically signifies the image de-noising loss, such as the Euclidean distance in regression.
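As a concrete illustration of the training objective in Equation (13.2), the following minimal sketch uses Keras; the layer sizes and the dataset variables noisy_imgs and clean_imgs are hypothetical placeholders, and the snippet stands in for, rather than reproduces, the chapter's model.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_denoiser() -> tf.keras.Model:
    """Stacked convolutions fi(Ci x + Zi), in the spirit of Equation (13.1)."""
    return models.Sequential([
        layers.Conv2D(64, 3, padding="same", activation="relu",
                      input_shape=(None, None, 3)),  # hidden convolution layers
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(3, 3, padding="same"),          # reconstructed clean image
    ])

model = build_denoiser()
# Euclidean (mean-squared-error) loss l(f(x_m), y_m), averaged over the M pairs.
model.compile(optimizer="adam", loss="mse")
# model.fit(noisy_imgs, clean_imgs, epochs=50, batch_size=8)  # hypothetical data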

4. Feature extraction
The feature extraction process is executed with four different variations, namely the Grey-Level Co-occurrence Matrix (GLCM), Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Histogram of Oriented Gradients (HOG). The paddy insect picture comprises various lesion shapes and lesion colours on account of the several kinds of damage caused by Stem borers, Gall midge, Planthopper, Leaf-folder, Case worm, Gundhi Bug, Mealy Bug, and Swarming caterpillar. Features such as shape and colour play a noteworthy part in pest identification [19]. The condition can be determined by estimating the breadth and height of the paddy insect picture to quantify the object pixel count. The pixels are then used to read the RGB values for computing the Grey-Level Co-occurrence Matrix (GLCM).

Figure 13.5 Deep CNN image denoising.
The technique proposed in [6] is embraced for the abstraction of Red-Green-Blue (RGB) features. The preeminent step is the separation of the RGB components from the original colour pictures. The subsequent stage is calculating the mean, standard deviation, variance, and skewness from the isolated RGB components using the accompanying Equations (13.3) to (13.6). (13.3)

(13.4)

(13.5) (13.6)

Here P is the aggregate count of pixels and Zi is the i-th pixel value. The GLCM Pd(a′, b′) matrix defines how often a pair of grey levels (i, j) appears in the segregated frame at the defined displacement d = (da, db). The following mathematical representations are used for determining the complex texture features of paddy leaf diseased images [6]. (13.7)

(13.8) (13.9)

(13.10) (13.11) (13.12) (13.13)

(13.14)

The diversity among the test pictures is measured in a simple manner, using the average grey level within the matrix and the variation of the grey level with respect to the range between the minimum and maximum grey levels present in the matrix. Consequently, the co-occurrence features, specifically energy, contrast, variance, and range, have been realized in the feature extraction process using Equations (13.7) to (13.14).
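For reference, a minimal Python sketch of these colour-moment and co-occurrence features, using NumPy and scikit-image (version 0.19 or later is assumed for the graycomatrix names), is given below; it is an illustrative stand-in under the definitions above, with displacement d = (da, db) = (1, 0), not the chapter's MATLAB implementation.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def color_moments(channel: np.ndarray) -> dict:
    """Mean, standard deviation, variance and skewness of one RGB channel,
    in the spirit of Equations (13.3) to (13.6)."""
    z = channel.astype(np.float64).ravel()
    mu = z.mean()
    var = ((z - mu) ** 2).mean()
    return {"mean": mu,
            "std": np.sqrt(var),
            "variance": var,
            "skewness": np.cbrt(((z - mu) ** 3).mean())}

def glcm_features(gray: np.ndarray) -> dict:
    """A few co-occurrence texture features at displacement (1, 0)."""
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return {p: float(graycoprops(glcm, p)[0, 0]) for p in props}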

Scale-Invariant Feature Transform (SIFT)
A picture's local features can be identified using the SIFT (Scale-Invariant Feature Transform) algorithm [13]. SIFT key points of an image are extracted, and each of the points is stored in the database. Feature detection and matching for each point of a picture are done using SIFT. An object in a new image can be recognized by comparing its particular features with those in the database. With the help of the Euclidean distance between their feature vectors, candidate matching features can be extracted. Suitable matches are filtered from the whole set of correspondences of a new picture. This is done by finding the subsets of key points in the new image.
Algorithm 1: SIFT [13] - Detection of key points
Step 1: Key points must be determined.
Step 2: The location and scale of the segmented image must be refined.
Step 3: The orientation of the image is determined.
Step 4: At last, descriptors for each key point are determined.
1. Determining key points: The change in intensity can be acquired by taking the difference between nearby values of the Gaussian scales [13]. (13.15)

(13.16) (13.17)
2. Refinement of the segmented image: The key-point location and scale are discrete values, and they are interpolated for greater precision. For this, the DoG function [13] is used, expanded as a second-order Taylor series: (13.18)

3. Determination of the orientation of the image: A small window of the image is taken, and then the gradient magnitude and orientation [13] are computed. (13.19)

(13.20)
4. Finding descriptors for each key point:

A small region is taken around the key point, and it is divided into n × n cells. Each cell has a gradient orientation histogram. The gradient magnitude and a Gaussian weighting function are used to weight each histogram entry, and each orientation histogram is then sorted.
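A minimal sketch of Algorithm 1 using OpenCV's built-in SIFT (assuming opencv-python 4.4 or later) follows; key-point detection, refinement, orientation assignment, and descriptor computation all happen inside detectAndCompute, and the ratio-test threshold is a conventional choice, not the chapter's.

import cv2

def sift_keypoints_and_descriptors(gray):
    """Steps 1 to 4: detect key points and compute their 128-D SIFT descriptors."""
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(gray, None)

def match_to_database(desc_new, desc_db, ratio=0.75):
    """Euclidean-distance matching of a new image against stored descriptors."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_new, desc_db, k=2)
    return [m for m, n in matches if m.distance < ratio * n.distance]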

Speeded-Up Robust Features (SURF)
SURF [14] estimates the DoG with box filters. Instead of Gaussian averaging, box filters are used for the approximation, since the convolution with a box can be computed much faster using the integral image, and it can also be performed in parallel for different scales. SURF uses the determinant of the Hessian matrix, as in blob detection, to explore the points of interest. For the orientation assignment, it uses wavelet responses in the horizontal and vertical directions, with adequate Gaussian weights applied at all scales. SURF also uses wavelet responses for describing the highlighted image regions: an area surrounding the key point is selected and divided into sub-regions, and the wavelet responses of each sub-region are used to define the SURF descriptor. For matching, the sign of the Laplacian, which is already documented during detection, is used: the sign of the wavelet coefficients distinguishes bright blobs on dark backgrounds from the reverse case. In the case of matching, the features are compared only if they have the same type of contrast (based on the sign), which permits speedier matching. Interest point detection [14] is based on the Hessian matrix approximation used in blob detectors, and scale variation is performed using a Gaussian approximation. The following equation is used for defining the input image, Im: (13.21) where Im is the input image, l is the location, and l(m, n) is the input image pixel sum. The scale variation using the Hessian matrix [14] and Gaussian approximation [14] is calculated as: (13.22) where Ή(l, α) is the Gaussian approximation with convolution of the second-order derivative.
Histogram of Oriented Gradients (HOG)
HOG [15] is a feature descriptor that counts gradient orientation occurrences in localized parts of a picture. The HOG

descriptor is invariant to geometric and photometric changes, apart from object orientation. It can depict the outline of paddy crop insects. The HOG features are extracted from paddy crop insect sub-windows distinguished by the top detection layer and scaled to train an SVM classifier.
Algorithm 2: HOG Detection
Step 1: The input image is converted to grayscale orientation, as shown in Figure 13.6.
Step 2: The luminance gradient is ascertained at every pixel using the following equation: (13.23)

Figure 13.6 Gray scale orientation.

Figure 13.7 Histogram of inclination.
Step 3: Create a histogram of inclination for every cell, as shown in Figure 13.7.

Step 4: Apply normalization and form descriptor blocks to determine the feature quantity. (13.24)
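As a concrete illustration of Steps 1 to 4, and previewing the linear SVM classification described next, the following sketch uses scikit-image's hog and scikit-learn's SVC; the cell and block sizes and the dataset variables train_imgs, train_labels, and test_imgs are hypothetical placeholders standing in for the chapter's MATLAB pipeline.

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(gray: np.ndarray) -> np.ndarray:
    # Steps 1-4: gradients, per-cell orientation histograms, block normalisation.
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_and_predict(train_imgs, train_labels, test_imgs):
    X_train = np.stack([hog_descriptor(im) for im in train_imgs])
    X_test = np.stack([hog_descriptor(im) for im in test_imgs])
    clf = SVC(kernel="linear")  # linear SVM classifier on HOG features
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)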

5. Image classification
Several classification methodologies, like Bayesian, artificial neural network (ANN), fuzzy classification, random forest, and SVM, are available. The SVM classifier is beneficial for classifying paddy insects since it uses image features (colour, boundary, shape, spot) together with Grey-Level Co-occurrence, SIFT, SURF, and HOG attributes for insect identification. A support vector machine (SVM) [2] identifies paddy crop insects based on the examination of the plant stem and leaf. Support vector machines are a framework for efficiently training linear learning machines in kernel-induced feature spaces, building on the insights of generalization theory and using optimization theory. A portion of the formulation is given below: (13.25) (13.26)

(13.27)
In the classification procedure, the paddy insect dataset is divided into two sets: the training dataset and the testing dataset. The training dataset is investigated utilizing the deep CNN to extract its features and characteristics for comparison with the testing dataset. The testing dataset is a set of data whose features are analysed and whose bugs are to be classified. The SVM classifier performs an investigation on the testing dataset and classifies it based on the comparison with the training dataset.

Table 13.2 Pest of rice: sample dataset.
Stem borers (Scirpophaga incertulas) | Description: Harm by feeding inside the rice stem or drying of the central whorl of a leaf. | Remedy: Fipronil agent is used for treatment.
Gall midge (Orseolia oryzae) | Description: Feeds inside the developing stem, injecting a poison, cecidogen. | Remedy: Imidacloprid agent is used for treatment.
Case worm (Nymphula depunctalis) | Description: Nourishes by scraping patches of leaf tissue, abandoning a papery epidermis. | Remedy: Foliar sprays to be used.
Swarming caterpillar (Spodoptera mauritia) | Description: Leaves and rice tillers are obliterated. | Remedy: Profenophos to be sprayed.
Climbing cut worm (Mythimna separata) | Description: The leaf margin is eaten away eventually. | Remedy: Beauveria to be sprayed.

13.4 Paddy Pest Database Model
Paddy crops have battled with pests since historical times. Under changing natural conditions and the indiscriminate application of chemicals to the paddy crop, the pests have, as a rule, additionally developed resistance. The agriculturists should be familiar with the paddy pests, the damage indications caused by them, and the management alternatives, in order to maintain a good paddy yield. Here we have endeavored to concentrate on the causative organism, the time and phase of the attack, the attack symptoms, and the diverse management alternatives, both non-chemical and chemical treatments. Database creation is the most basic and tedious undertaking in paddy crop pest control. Based on a vigilant examination of contemporary resources, including published and unpublished literature on crop pests and population dynamics, and of specialists' conclusions, applicable data for the paddy framework was gathered from various sources [4, 12, 16] and recorded in Table 13.2. These sample datasets are stored on a remote server for pattern matching.

Figure 13.8 Pest spot identification.

13.5 Implementation and Results
The recommended model for the detection of paddy plant insects is implemented in MATLAB, and 250 insect test images are collected categorically and used. Deep CNN is used to de-noise the insect image datasets categorically based on relevant features. The number of iterations, training period, and false-alarm rate differed for the different insect images given to the classifier. Of the chosen 250 pictures, 50 de-noised images are used to train the classifiers, namely the K-nearest neighbour, AdaBoost, and deep CNN & SVM classifiers; the remaining 200 images are used for testing. The different paddy plant insects considered are Stem borers, Gall midge, Planthopper, Leaf-folder, Case worm, Gundhi Bug, Mealy Bug, and Swarming caterpillar. The experimental isolation of insects in paddy crops for pests such as Stem borers, Gall midge, and Planthopper is shown in Figure 13.8.

Figure 13.9 (a) Input image; (b) Filtered image; (c) Boundary detection; (d) Remedial cure for Gall midge insect.

Table 13.3 Gall midge, GLCM features.
Feature | False smut value
Mean | 0.659781
Standard Deviation | 0.258473
Thermal | 0.359254
Variance | 0.062712
Skewness | 0.986872
Vitality | 0.842578
Dissimilarity | 1.182381
Inverse Difference | 2.667681
Similarity | 0.978636

The experimental results for false smut are shown in Figure 13.9. Among the 200 diseased test images covering Stem borers, Gall midge, Planthopper, Leaf-folder, Case worm, Gundhi Bug, Mealy Bug, and Swarming caterpillar, only 185 diseased images were correctly identified across the various categories. The detection success rate for paddy crop insect images is therefore 92.50%. Table 13.3 demonstrates the feature values obtained in the detection of the smut pest in the paddy crops. These values are inhibited during the identification of a diseased plant. The proposed methodology (deep CNN & SVM classifier) is compared with existing approaches, namely the K-nearest neighbour and AdaBoost classifiers. The descriptor features SIFT, SURF, and HOG are computed for all classifiers. It is found that the proposed methodology achieves improved classification in terms of accuracy, as shown in Table 13.4 and Figure 13.10. It is inferred that the proposed model outperforms all other classifiers, and also that HOG features have produced more profound outputs when compared to SIFT and SURF.

Table 13.4 Classification accuracy (%) for paddy insects; for each classifier the values are for SIFT, SURF, and HOG features respectively (SIFT and SURF only for Adaboost).
Stem borers | KNN: 81, 85, 87 | SVM: 79, 83, 85 | Adaboost: 77, 81
Gall midge | KNN: 82, 87, 86 | SVM: 80, 85, 84 | Adaboost: 78, 83
Brown plant hopper | KNN: 81, 88, 87 | SVM: 77, 84, 83 | Adaboost: 73, 70
White-backed plant hopper | KNN: 78, 82, 84 | SVM: 80, 84, 86 | Adaboost: 74, 78
Green leafhopper | KNN: 77, 83, 85 | SVM: 77, 83, 85 | Adaboost: 69, 75
Leaf-folder | KNN: 73, 79, 81 | SVM: 77, 81, 85 | Adaboost: 69, 75
Case worm | KNN: 90, 90, 91 | SVM: 84, 88, 89 | Adaboost: 78, 82
Gundhi Bug | KNN: 76, 83, 84 | SVM: 79, 85, 86 | Adaboost: 73, 79
Mealy Bug | KNN: 79, 85, 87 | SVM: 81, 87, 89 | Adaboost: 71, 77
Swarming caterpillar | KNN: 82, 86, 87 | SVM: 82, 86, 87 | Adaboost: 70, 74
Climbing cut worm | KNN: 82, 86, 87 | SVM: 80, 84, 87 | Adaboost: 74, 78

Figure 13.10 Accuracy performance analysis.

13.6 Conclusion
The drastic environmental change urges the need to devise new methodologies for agriculture, i.e., paddy crop plantation. This paper proposed a new mobile-app-based sensor model for pest classification in paddy crops. The acoustic probe, a combination of an acoustic sensor and an LAI sensor, is designed for detecting pests in the paddy field based on the noise level and chlorophyll pigment concentration. The deep CNN has been used for de-noising the pest images, and SVM classification is used for pest classification. The work mainly concentrates on paddy crop insects such as Stem borers, Gall midge, Planthopper, Leaf-folder, Case worm, Gundhi Bug, Mealy Bug, and Swarming caterpillar. The proposed methodology (deep CNN & SVM classifier) is compared with existing approaches, namely the K-nearest neighbour and AdaBoost classifiers. The descriptor features SIFT, SURF, and HOG are computed for all classifiers. It is found that the proposed methodology achieves improved classification compared with all other existing algorithms. We can significantly conclude that the proposed method has an improved accuracy of 82.71% with SIFT, 89.57% with SURF, and 90.78% with HOG features. Multi-view overlapping, feature occlusion, and small-object recognition activities remain incomplete, and they become the next step of this study. We would address the above issues in the future by extending the paddy pesticide database and updating the system framework. Furthermore, the rice pesticide monitoring system has already completed a provisional pilot deployment, and more developments are expected in the next stage.

References 1. Ghyar, B. S., & Birajdar, G. K. (2017, November). Computer vision based approach to detect rice leaf diseases using texture and color descriptors. In 2017 International Conference on Inventive Computing and Informatics (ICICI) (pp. 1074-1078). IEEE. 2. Azfar, S., Nadeem, A., Alkhodre, A., Ahsan, K., Mehmood, N., Alghmdi, T., & Alsaawy, Y. (2018). Monitoring, Detection and Control Techniques of Agriculture Pests and Diseases using Wireless Sensor Network: A Review. Int. J. Adv. Comput. Sci. Appl, 9, 424-433. 3. Azfar, S., Nadeem, A., & Basit, A. (2015). Pest detection and control techniques using wireless sensor network: A review. Journal of Entomology and Zoology Studies, 3(2), 92-99. 4. Dinakaran, D., Gajendran, G., Mohankumar, S., Karthikeyan, G., Thiruvudainambi, S., Jonathan, E. I., Miller, S, et al. (2013).

Evaluation of Integrated Pest and Disease Management Module for Shallots in Tamil Nadu, India: a Farmer’s Participatory Approach. Journal of Integrated Pest Management, 4(2), B1-B9. 5. S. Devadharshini, R. Kalaipriya, R. Rajmohan, M. Pavithra and T. Ananthkumar, Performance Investigation of Hybrid YOLO-VGG16 Based Ship Detection Framework Using SAR Images, 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 2020, pp. 1-6, doi: 10.1109/ICSCAN49426.2020.9262440. 6. Rashno, A., & Sadri, S. (2017, April). Content-based image retrieval with color and texture features in neutrosophic domain. In 2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA) (pp. 50-55). IEEE. 7. D. Jayakumar, A. Elakkiya, R. Rajmohan and M. O. Ramkumar, Automatic Prediction and Classification of Diseases in Melons using Stacked RNN based Deep Learning Model, 2020 International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 2020, pp. 1-5, doi: 10.1109/ICSCAN49426.2020.9262414. 8. Barbedo, Jayme G.A. (2020). Detecting and Classifying Pests in Crops Using Proximal Images and Machine Learning: A Review. AI 1, no. 2: 312-328. 9. Ayan, E., Erbay, H., & Varçın, F. (2020). Crop pest classification with a genetic algorithm-based weighted ensemble of deep convolutional neural networks. Computers and Electronics in Agriculture, 179, 105809. 10. Sethy, P. K., Barpanda, N. K., Rath, A. K., & Behera, S. K. (2020). Deep feature based rice leaf disease identification using support vector machine. Computers and Electronics in Agriculture, 175, 105527. 11. Suthakaran, A., & Premaratne, S. (2020). Detection of the affected area and classification of pests using convolutional neural networks from the leaf images. International Journal of Computer Science Engineering (IJCSE). 12. Horgan, F. G., & Kudavidanage, E. P. (2020). Use and Avoidance of Pesticides as Responses by Farmers to change Impacts in Rice Ecosystems of Southern Sri Lanka. Environmental Management, 117. 13. Jia, S., & Gao, H. (2020, March). Review of Crop Disease and Pest Image Recognition Technology. In IOP Conference Series: Materials Science and Engineering (Vol. 799, No. 1, p. 012045). IOP Publishing.

14. Chen, J., Chen, J., Zhang, D., Sun, Y., & Nanehkaran, Y. A. (2020). Using deep transfer learning for image-based plant disease identification. Computers and Electronics in Agriculture, 173, 105393. 15. Pattnaik, G., & Parvathi, K. (2021). Automatic Detection and Classification of Tomato Pests Using Support Vector Machine Based on HOG and LBP Feature Extraction Technique. In Progress in Advanced Computing and Intelligent Engineering (pp. 49-55). Springer, Singapore. 16. Hussain, M. R., Naim, A., & Khaleel, M. A. (2020). Implementation of Wireless Sensor Network Using Virtual Machine (VM) for Insect Monitoring. In Innovations in Electronics and Communication Engineering (pp. 73-78). Springer, Singapore. 17. Tamilarasan, A.K., Krishnadhas, S.K., Sabapathy, S. et al. A novel design of Rogers RT/duroid 5880 material based two turn antenna for intracranial pressure monitoring. Microsyst Technol (2021). https://doi.org/10.1007/s00542-020-05122-y. 18. Pavithra, M., Rajmohan, R., Kumar, T.A. and Ramya, R. (2021). Prediction and Classification of Breast Cancer Using Discriminative Learning Models and Techniques. In Machine Vision Inspection Systems, Volume 2 (eds M. Malarvel, S.R. Nayak, P.K. Pattnaik and S.N. Panda). https://doi.org/10.1002/9781119786122.ch12 19. Adimoolam M., John A., Balamurugan N.M., Ananth Kumar T. (2021) Green ICT Communication, Networking and Data Processing. In: Balusamy B., Chilamkurti N., Kadry S. (eds.) Green Computing in Smart Cities: Simulation and Techniques. Green Energy and Technology. Springer, Cham. https://doi.org/10.1007/978-3-03048141-4_6. *Corresponding author: [email protected]

14 A Novel Machine Learning Approach in Edge Analytics with Mathematical Modeling for IoT Test Optimization

D. Jeya Mala1* and A. Pradeep Reynold2

1Director of MCA, PG Dept. of Computer Applications, Fatima College, Madurai, India
2Lecturer, Dept. of Civil Engineering, GEMS Polytechnic College, Bihar, India

*Corresponding author: [email protected]

Abstract
Edge analytics marks a paradigm shift from traditional data analytics, in which the data collected from IoT devices is processed through a cloud-based service. Edge analytics can be defined as the tools and algorithms deployed in the internal storage of IoT devices or IoT gateways that collect, process, and analyze data at the point of deployment itself, rather than sending it to the cloud for analysis. When analytics is performed at the edge, privacy and security become primary concerns, and the algorithms that carry out the decision-making process must interface directly with the end-user devices. This implies that the processing elements of these IoT end-user applications must be tested before delivery. Because the data used for edge analytics is generated dynamically from the environment, generating and selecting test cases from the resulting voluminous candidate set is a genuinely challenging task. Moreover, the data processed at the edge feeds end-user applications that comprise several use cases, which bring user experience and usability testing aspects with them. Hence, this chapter proposes a novel mathematical model-based machine learning approach in edge analytics to optimize the testing of IoT applications.

Keywords: Internet of Things (IoT), machine learning (ML), edge analytics, test optimization, cloud testing, use cases

14.1 Introduction: Background and Driving Forces

Edge analytics shifts data analytics from the cloud to the point where the data originates. The Internet of Things (IoT) is a collection of interrelated devices, which may be machines, electrical appliances, electronic gadgets, computer systems, smartphones, objects fitted with sensors, animals carrying RFID tags, and even people with wearable devices, all equipped with the ability to send and/or receive data over a network in an automated way. An IoT platform can connect everyday things that are embedded with electronics, software, and sensors to the internet, enabling them to collect and exchange data [6]. The term "thing" in IoT refers to any device that can automatically collect and transmit data over a network or the internet using any kind of sensor [10].

Edge analytics is an approach in which data is collected and processed at the devices themselves, such as sensors, network switches, or other equipment, instead of waiting for the data to be sent back to a centralized data store [12]. The processing is accomplished through automated analytical computation as the data is generated, which decreases latency in the decision making of connected devices. Placing the analytics algorithms on sensors and network devices also alleviates the processing strain on enterprise data management and analytics systems, even as the number of connected devices deployed by organizations, and the amount of data they generate and collect, keeps increasing.

If analytics is to be done at the edge, the algorithms that carry out the decision-making process must interface with the end-user devices. This indicates that the processing element in these IoT end-user applications must be tested before delivery. As the data used for edge analytics is generated dynamically from the environment, the generation and selection of test cases is a really challenging task. Furthermore, the data processed at the edge is consumed by end-user applications that have several use cases, which include user experience and usability testing aspects [3]. Hence, to address this crucial problem, the application of machine learning techniques could be the right choice for delivering defect-free IoT solutions to customers. In this chapter, the application of various machine learning techniques in IoT testing is discussed, along with a suggestion on how to generate test cases from the sample use cases of an end-user application.

14.2 Objectives

The objective of this chapter is to analyze the application of various machine learning techniques in IoT testing and to suggest how test cases can be generated from the sample use cases of an end-user application.

14.3 Mathematical Model for IoT Test Optimization

The mathematical model for optimization in IoT testing using edge analytics is given below:

$$
\begin{aligned}
\text{Minimize} \quad & \mathrm{Size}(\mathit{test\_suite}),\ \mathrm{Time}(\mathit{test\_process}),\ \mathrm{Cost}(\mathit{test\_process}) \\
\text{Maximize} \quad & \mathrm{Coverage}(F_i) \\
\text{subject to} \quad & V_i \subseteq N_i, \qquad i = 1, \dots, n
\end{aligned}
$$

where $\mathrm{Coverage}(F_i)$ is computed as

$$
\mathrm{Coverage}(F_i) = \frac{\lvert V_i \rvert}{\lvert N_i \rvert}, \qquad i = 1, \dots, n
$$

Here, $F_i$ is the set of features present in the use cases of the given application for $i = 1, \dots, n$. Let $N_i$ be the set of use cases derived from the end-user application, $V_i$ the set of covered use cases, and $V_i'$ the set of nontrivial use cases; the optimistic functionality coverage of the test process is then given as $F_i$.

To achieve this optimization, ML approaches are applied to select only the relevant test cases and relevant use cases, thereby reducing the size of the test suite and the cost and time involved in testing, while simultaneously keeping the coverage of the functionalities, or features, present in the use cases at its maximum.
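To make the size/coverage trade-off concrete, the sketch below implements a simple greedy heuristic: it repeatedly picks the test case that covers the most not-yet-covered use-case features per unit cost, stopping when nothing new can be covered. This is only one plausible way to operationalize the model above; the data structures and names (TestCase, select_test_suite) are illustrative assumptions, not part of the chapter's framework.

```python
# Greedy test-suite minimization sketch: maximize feature coverage
# while keeping suite size and cost low. Names are illustrative;
# the chapter's model only fixes the objectives.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    covers: set = field(default_factory=set)  # use-case features exercised
    cost: float = 1.0                         # execution time/cost estimate

def select_test_suite(candidates, required_features):
    """Pick test cases until all required features are covered."""
    uncovered = set(required_features)
    suite = []
    while uncovered:
        # Best ratio of newly covered features to execution cost.
        best = max(candidates,
                   key=lambda t: len(t.covers & uncovered) / t.cost)
        gain = best.covers & uncovered
        if not gain:          # remaining features cannot be covered
            break
        suite.append(best)
        uncovered -= gain
    return suite, uncovered

if __name__ == "__main__":
    features = {"login", "toggle_light", "schedule", "report"}
    candidates = [
        TestCase("tc1", {"login"}, 0.5),
        TestCase("tc2", {"login", "toggle_light"}, 1.0),
        TestCase("tc3", {"schedule", "report"}, 1.5),
        TestCase("tc4", {"report"}, 0.4),
    ]
    suite, missed = select_test_suite(candidates, features)
    print([t.name for t in suite], "uncovered:", missed)
```

Greedy selection does not guarantee the minimum suite, but it tracks the model's intent: every pick increases coverage, and low-cost, high-gain test cases are preferred.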

14.4 Introduction to the Internet of Things (IoT)

As per an industrial white paper by the DataFlair team [11], the devices in the Internet of Things (IoT) have given organizations a completely new way of collecting data, processing it, and making decisions. IoT helps create a smart working environment in which data collection, monitoring, and control are combined with embedded intelligence to achieve automation. The devices are connected to form a network comprising embedded actuators, sensors, network switches, and code logic that together realize some functionality.

The term "Internet of Things" was first used by Kevin Ashton, co-founder of the Auto-ID Center at MIT, in a 1999 presentation to Procter & Gamble making the case for RFID in tracking and monitoring. In the same year, MIT professor Neil Gershenfeld described the interconnection of devices in his book When Things Start to Think. Even though the term IoT dates back to 1999, the concept of connected devices over embedded internetworking has been in use since the 1970s. From these beginnings, IoT has evolved through the convergence of various technologies such as wireless communication, MEMS, microservices, and the internet. This convergence has enabled the unstructured, machine-generated data of operational technology to be merged with software development in the form of information technology, so that the data can be processed further to gain insight into operations and support decision making [7].

Machine-to-Machine (M2M) communication connects machines over a network, with the data stored in the cloud, so that data can be collected and managed without user interaction. M2M now also interconnects smart devices, applications, and people, establishing a smart communication gateway.

This chapter is further organized into the following sections: IoT architecture; edge analytics and why it is important; IoT testing; machine learning approaches; the application of ML in IoT testing; IoT end-user application testing for a case study application; and, finally, the conclusion.

14.5 IoT Analytics

Before a detailed discussion of IoT analytics, let us first outline the overall process involved. The data collected by sensors from the things, such as smartphones, production machinery, and electronic appliances, is communicated securely through an Internet of Things platform. As per the DataFlair white paper (2018), this IoT platform acts as the backbone of IoT analytics: it collects and combines data from multiple devices and other platforms connected through the internet, and then applies analytics for decision making and for the further processing and transfer of data. IoT analytics thereby offers a futuristic dimension to the business side of application areas such as e-commerce, m-commerce, retail, wearable technologies, and Industry 4.0, in which the devices are connected through the IoT platform.

Data collection from IoT devices is a big challenge: the devices are independent in nature, so collection, aggregation, classification, and decision making are extremely difficult. Hence, today's IoT platforms provide a way to send the sensor-collected data to the cloud, a cluster, a centralized data store, or a data warehouse for analytics. As the amount of data collected from these sensors is voluminous in nature, and this collection is termed big data, collecting it and storing it in the cloud for further processing is the current area of development in IoT analytics. This applies in any IoT architecture: as soon as the sensors collect and store data, that data should be processed to provide useful outcomes for the decision-making process. The data collected and stored in the cloud or on local storage devices is then taken up for analysis so that further action can be taken.

However, when the IoT platform expands to include more sensors, collecting the data and sending it to the cloud or another storage medium for processing becomes a cumbersome task, because in many cases not all the data collected from the sensors is valuable or contributes to any decision-making process. In such situations, simply storing all the collected data without validating it is not advisable; moreover, some important actions or decisions need to be taken immediately after a sensor collects the data, rather than after a round trip of storing and retrieving data from the storage medium. If we try to compress the data to minimize the storage problem, the computational overhead of compression becomes another problem. Even if we add more IoT gateways to handle this voluminous data by distributing the processing overhead among them, we then face the problems of reassembling the data for further processing, of scaling, and so on. Hence, an alternative solution is needed to resolve all of these problems. From the literature, it has been observed that edge analytics is an alternative way to handle these kinds of problems.

14.5.1 Edge Analytics

In the referential architecture discussed earlier, the IoT devices at the bottom carry one or more sensors, switches, actuators, or a combination of these. When such devices constantly generate streams of data, the data volume becomes significantly high, so a highly scalable storage system is necessary in order to access the data later and enable decision making by applying analytics to it. In real-time systems, however, the processing of data and the decision making should be done at the device itself, not by sending the data to the cloud and retrieving it back. In these situations, real-time analytics, namely edge analytics, is required [3].

Edge analytics is an approach to data collection and analysis in which an automated analytical computation is performed on data at a sensor, network switch, or other device, instead of waiting for the data to be sent back to a centralized data store [9]. With the Industry 4.0-based industrial evolution, edge analytics is currently applied in industries that have limited resources in terms of bandwidth, Wi-Fi, and so on. This chapter shows that it can also be applied to applications in which immediate decision making is essential as soon as data collection is done.

Survey results reported in an industrial white paper by Micro Focus [12] indicate that sensors and other forms of data collection are becoming more and more ubiquitous across all walks of life. For example, a single Airbus A350 generates 2.5 TB of data per day, and Cisco estimated that 507.5 zettabytes of data would be generated in 2019 alone. A typical solution involves interim processing, which is increasingly relevant for handling this staggering volume of data, and edge analytics offers a cost-effective, relatively efficient way to provide it. About 40% of IoT data in 2019 was expected to be processed through edge analytics, and this share will surely grow with the IoT.

Edge analytics is performed by tools or algorithms that sit on or close to the IoT devices and collect, process, and analyze data at the source rather than sending it back to the cloud for analysis. This streamlines the data analysis process by performing it in real time and ensures that as much useful information as possible is garnered from the device [15]. A typical real-time example is a traffic management system: a light sensor at a traffic signal can be built with intelligent monitoring for traffic management, and real-time feedback within the device itself ensures immediate and appropriate use of the data it gathers, circumventing the need to send the data elsewhere for outside consideration.

Edge analytics brings several advantages [14]. Pushing analytics algorithms to sensors and network devices alleviates the processing strain on enterprise data management and analytics systems, even as the number of connected devices deployed by organizations and the amount of data being generated and collected increase. Several edge analytics tools have been developed by different organizations. A typical example is the IBM Edge IoT Analytics application, which pushes the processing application modules to the IoT gateways, now called edge gateways, which store only the results of the analysis. The analysis itself is done at the IoT device, thus avoiding sending all the information to the cloud or any other storage medium. If any computationally heavy processing still needs to be done, the essential processed data is sent to the cloud and analyzed there using the Bluemix service, which helps in visualizing the results and other outcomes for the end users [1].

In many organizations, streaming data from manufacturing machines, industrial equipment, pipelines, and other remote devices connected to the IoT creates a massive glut of operational data, which can be difficult, and expensive, to manage [7]. By running the data through an analytics algorithm as it is created, at the edge of the corporate network, companies can set parameters on what information is worth sending to a cloud or on-premises data store for later use.
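The sketch below illustrates this idea of parameterized filtering at the edge: every reading is screened against a configurable rule at the gateway, and only readings worth keeping are forwarded to the central store. The threshold values and the forward_to_cloud stub are illustrative assumptions, not a specific vendor's API.

```python
# Minimal edge-filtering sketch: decide locally which sensor readings
# are worth forwarding to the cloud. Thresholds and the forwarding
# stub are illustrative assumptions, not a specific product's API.
from statistics import mean

WINDOW = []            # recent readings kept in gateway memory
MAX_WINDOW = 50
DELTA_THRESHOLD = 2.0  # forward only if the value moved noticeably

def forward_to_cloud(reading):
    # Placeholder for an MQTT/HTTP upload in a real deployment.
    print("uploading:", reading)

def on_sensor_reading(value):
    """Runs on the edge gateway for every incoming reading."""
    WINDOW.append(value)
    if len(WINDOW) > MAX_WINDOW:
        WINDOW.pop(0)
    baseline = mean(WINDOW)
    # Parameterized rule: forward only significant deviations;
    # everything else is consumed (or discarded) locally.
    if abs(value - baseline) >= DELTA_THRESHOLD:
        forward_to_cloud({"value": value, "baseline": baseline})

for v in [21.0, 21.1, 20.9, 27.5, 21.0]:  # simulated stream
    on_sensor_reading(v)
```

In this toy stream only the 27.5 reading is uploaded; the steady readings around 21 never leave the gateway, which is exactly the bandwidth and storage saving that edge analytics promises.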

14.6 Survey on IoT Testing

The number of IoT applications developed for different purposes is growing enormously, providing a complete IoT space for end users. As these applications grow, there is a growing demand to handle voluminous data and to perform the required functionalities with complete usability. Because IoT applications are now expected to meet diverse customer expectations across various domain requirements, the expected functionality must be provided by efficient algorithms, which realize the functionalities of the various use cases present in the problem domain. To ensure that these algorithms provide the expected functionality, testing is essential. This testing of IoT applications is termed "IoT testing". It covers security, compatibility, performance, device interoperability, and end-user application testing.

Security testing includes testing of data protection, authentication, and security in terms of encryption, security breaches, and so on, which must be done at the device level as well as on the cloud and network side. Compatibility testing checks whether the code is compatible with different kinds of IoT devices, such as mobile devices and desktop machines, with the OS versions on devices, and with the versions of network protocols [2]. As per Ajit Pendse [2], performance testing focuses on response time and turnaround time and normally measures how fast communication over the network is achieved when the internal processing load is high; it can be done at the network layer using HTTP, MQTT, and the like, and also by employing analytics in the processing layer. End-user application testing, the focus of this chapter, tests all the functionalities represented as use cases, along with usability testing: the IoT application is analyzed in full, all its use cases are extracted, and test cases are generated using requirements-based test case generation techniques to validate the outcome of each use case [8]. Device interoperability testing verifies the connectivity of all the IoT devices over the IoT platform; it is normally done at the network layer to assess the level of communication between the devices [8]. In end-user usability testing, the user's experience with the IoT application is the primary factor: the end-user experience is captured through the time taken to operate a particular device, the speed of working with the entire system, and how easily operations are remembered and recalled while operating an IoT agent [8].

All of these tests are crucial, yet performing them is difficult, because the application is not a traditional GUI-based or web-based one in which intermediate results can be inspected to check for fault-free operation. Hence, a testing framework is essential for executing all of these kinds of testing in a very challenging, limited-access environment. Some of the major challenges in testing an IoT application are the complex responsiveness of the system; the need for high speed and low processing time on the gathered data to achieve good performance; security issues; scaling the number of sensors on demand; the many IoT protocols such as MQTT, XMPP, and CoAP; and inter-sensor communication [5].

As per the referential architecture, end-user application testing should be done at the application layer and the services layer, as functional testing is part of them. Moreover, as discussed under edge analytics, most of the processing is now placed at the edge of the IoT device itself, so end-user application testing has shifted toward the sensor layer.

According to a Forbes white paper (2018), "The global IoT market can grow from $157B in 2016 to $457B by 2020, attaining a Compound Annual Growth Rate (CAGR) of 28.5 percent." Delivering a high-quality IoT product is therefore essential, so end-user application testing has gained importance, and an efficient way of performing it is the need of the hour.

Several industries are pursuing testing methods ranging from manual testing to automated testing techniques and tools. With a manual process, the number of test cases quickly becomes voluminous: an IoT application with 20 use cases and 2 serial interfaces yields 40 different combinations to be tested, and if 2,000 test cases are required for each combination, approximately 80,000 test cases are needed to test even such a simple application [4]. Manual testing is therefore impractical in these cases, and automated testing is essential for IoT testing. Test automation helps IoT end-user application testing by providing improved coverage, identification of errors, defect fixing, and faster delivery of quality software.

As noted in an industrial white paper [8], the primary goal of test automation in IoT applications is to support the continuous integration and continuous delivery (CI/CD) imposed by DevOps in Agile-based software development; an automated testing framework thus supports Agile application development activities. The quick delivery of quality software, increased competition in the market, and the adoption of effective testing methodologies have together made test automation the need of the hour. Within such automation, end-user application testing in particular helps organizations benefit from new data streams, analytics on test case results, and reduced human intervention, which is why many industries are focusing on automated testing as the way to achieve an effective testing process.

However, because test cases must be generated for different use case scenarios based on a dynamic real-time environment, the challenge is how to select effective, relevant test cases for each scenario and how to analyze the constantly changing environmental data. Furthermore, testing IoT applications with an emulator in a laboratory environment is completely different from testing them in their deployment environment, and the interfaces between the sensors, between the sensors and the cloud servers, and between the sensors and the web servers are extremely difficult to simulate in the development environment. This makes the selection of proper testing tools critical. Because the applications' behavior depends dynamically on real-time data, identifying proper testing tools for IoT testing becomes a challenging task: these IoT-based applications are close to human life, to business-critical decision making, and to predicting the future, and they require effective and efficient solutions. Simple test automation with tools that merely repeat the same cycle of test case generation, execution, and report generation will not be an effective solution.

Beyond this, IoT testing must cover the complete range of both environmental and functional vulnerabilities, which requires rigorous testing of all the use case scenarios exposed to end users and to other devices that take crucial decisions based on the processed data [8]. Many IoT-based applications need a rigorous testing process because of their critical use case scenarios, so ordinary testing tools may not be very useful. In some cases, testers waste a huge amount of testing time even on trivial use case scenarios, which can lead to skipping, or not rigorously testing, the actually critical use cases.

Hence, current industrial expectations call for human-like intelligence in automated testing tools, so that all the use case scenarios of an IoT application can be tested. This leads to the need for an automated testing tool equipped with intelligence in this IoT era.

14.7 Optimization of End-User Application Testing in IoT

End-user application testing covers all the functional use cases of an IoT application, including user experience and usability testing. It validates each functionality for its correctness. Each IoT application is first analyzed and all of its use cases are extracted from the specification document; each use case is then tested using test cases generated either manually or automatically [5].

Because IoT systems have limited user interfaces, limited processing capacity, and limited storage, they are generally tested in the cloud, based on the processed data collected from the sensors by the processing elements in the cloud. Since storing data in the cloud and retrieving it again is a cumbersome activity, and even deciding whether data is valid and applicable enough to store is itself a challenge, a paradigm shift is needed from this conventional style of data processing to processing the data at the edge. Edge analytics-based end-user application testing helps in validating the use cases and is a real benefit to developers as well as end users. Testing at the edge enables change-based and risk-based analysis in the testing process, test-driven development and decision making, and continuous integration and continuous delivery in the IoT ecosystem. By keeping a log file at each processing step, traceability, debugging, and defect fixing later become easier.

14.8 Machine Learning in Edge Analytics for IoT Testing

Several industries and businesses now apply machine learning (ML) techniques for data analytics and intelligent decision making to increase their productivity and efficiency. As stated in an industrial white paper by ThinkXtream (2018), "IoT Edge Analytics & Machine Learning for Real-time Device Control" [13], ML is a powerful analytical tool for the voluminous data normally termed big data. A similar scenario exists in IoT-based application development: since the data from the sensors of IoT devices is voluminous, collected every millisecond, second, or minute as required, this collected data needs to be processed using ML to make intelligent decisions for further processing.

In the IoT architecture, the data collected from the sensors is stored in the cloud through the IoT gateways; this stored data is then processed and analyzed for visualization, reporting, or decision making. However, because the data collected by the sensors comes from the real-time environment, dynamic processing at the edge of the IoT device, i.e., at the gateway, to take intelligent decisions immediately at that point itself will definitely increase the efficiency of the decision-making process. The turnaround time of storing the data in the cloud, retrieving it for processing, and then applying ML algorithms to derive decisions is a real overhead. For end-user application testing in particular, storing this real-time data in the cloud and then selecting the relevant data to act as test cases for validating the various use case scenarios becomes a cumbersome process, as the amount of data is voluminous [13].

Hence, this chapter proposes a novel framework that applies ML algorithms at the edge of the IoT device to carry out end-user application testing, validating the use case scenarios through behavioral analysis at the edge itself. Because the ML algorithms placed at the edge can filter out most of the noisy data collected by the sensors, only the data relevant for analysis at the edge and in the cloud is stored for decision making. Since these algorithms have intelligence, they can readily be applied to test the use case scenario at that point in time. This can be achieved in two ways: (i) keep limited ML algorithms at the edge, as the IoT devices have only limited processing capability, and evaluate the test results there; or (ii) keep local IoT networks and run the analytics and decision-making ML algorithms on these edge networks to achieve higher-level decision making for evaluating the test results.
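As one concrete illustration of option (i), the sketch below uses a lightweight, off-the-shelf anomaly detector to filter noisy sensor readings at the gateway, so that only plausible readings are retained as candidate test inputs. The use of scikit-learn's IsolationForest, the training data, and the surrounding names are all illustrative assumptions; the chapter does not prescribe a particular model.

```python
# Sketch of option (i): a lightweight ML filter at the edge that
# discards noisy sensor readings and keeps plausible ones as
# candidate test inputs. Model choice is an illustrative assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

# Train on a sample of readings known to be typical for the device.
rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=22.0, scale=0.5, size=(200, 1))
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_readings)

def keep_as_test_input(value):
    """Return True if the reading looks valid (not sensor noise)."""
    return detector.predict([[value]])[0] == 1  # 1 = inlier, -1 = outlier

stream = [21.8, 22.3, 95.0, 22.1, -3.0]   # simulated incoming data
candidates = [v for v in stream if keep_as_test_input(v)]
print("retained as candidate test inputs:", candidates)
```

A model this small can run on a gateway-class device, which matches the constraint that option (i) places on edge-side processing capability.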

14.9 Proposed IoT Operations Framework Using Machine Learning on the Edge

The proposed framework for IoT operations using ML algorithms on the edge of the IoT devices is shown in Figure 14.1. In this framework, the ML algorithm for generating test cases from the use case scenarios runs on the edge of the IoT device itself. On executing each use case scenario, the ML algorithm automatically selects test cases from the sensor data and then applies analytics to analyze and filter only the test cases relevant to that use case scenario.

Figure 14.1 Proposed framework using machine learning on the edge.

Once the adequacy of the test cases has been evaluated, the valid test cases are stored in a repository, which may be a local server or cloud storage, for further processing. From the repository, the test cases are then selected and executed against the coded functionality of the use case scenario. Based on the execution results, a pass or fail status is recorded for each test case and use case scenario. Finally, test reports are generated, and log files are maintained for any future analysis.
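The end-to-end flow just described can be summarized in code form, as sketched below. Every name here (run_scenario, the relevance predicate, the log format) is a hypothetical stand-in for the corresponding framework component, intended only to make the sequence of steps, selection, execution, pass/fail recording, and logging, concrete.

```python
# Hypothetical end-to-end sketch of the proposed pipeline:
# select relevant test cases at the edge, execute them against the
# use case's coded functionality, record pass/fail, and log results.
import json
import time

def run_scenario(scenario_name, candidate_inputs, is_relevant,
                 functionality, expected):
    """One pass of the framework for a single use case scenario."""
    # 1. Edge-side selection: keep only inputs relevant to the scenario
    #    and place them in the test case repository.
    repository = [x for x in candidate_inputs if is_relevant(x)]

    # 2. Execute each stored test case against the coded functionality.
    results = []
    for test_input in repository:
        actual = functionality(test_input)
        results.append({
            "scenario": scenario_name,
            "input": test_input,
            "status": "pass" if actual == expected(test_input) else "fail",
            "timestamp": time.time(),
        })

    # 3. Persist a log for traceability and emit a summary report.
    with open(f"{scenario_name}_log.json", "w") as f:
        json.dump(results, f, indent=2)
    passed = sum(r["status"] == "pass" for r in results)
    print(f"{scenario_name}: {passed}/{len(results)} test cases passed")
    return results

# Example: a 'toggle light' scenario where valid commands are ON/OFF.
run_scenario(
    "toggle_light",
    candidate_inputs=["ON", "OFF", "garbled", "ON"],
    is_relevant=lambda x: x in {"ON", "OFF"},
    functionality=lambda cmd: {"ON": 1, "OFF": 0}[cmd],
    expected=lambda cmd: 1 if cmd == "ON" else 0,
)
```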

14.9.1 Case Study 1 - Home Automation System Using IoT

IoT-enabled home automation has achieved great popularity in the last decade or so, as it increases comfort and quality of life. Our case study is an in-house smartphone-based application used to control and monitor home appliances through different types of communication techniques. The application draws on the working principles of several wireless communication technologies, such as ZigBee, Wi-Fi, Bluetooth, EnOcean, and GSM, which lets users choose their own technology for building a home automation system. Here the user can control the lights and fans in his/her room through Google Assistant as well as through a dynamic HTML page. The lights are connected to a 4-channel relay, which is connected to a NodeMCU board. Using the IFTTT app, the user then specifies the commands for working with Google Assistant as well as Adafruit. After configuring these apps, the user can control the electronic appliances in his/her room from any place. Once this configuration has been set up, test cases are generated for each scenario, as shown in Table 14.1.

Table 14.1 Test cases generated for each of the scenarios.

Use case scenario # | Use case scenario | Test cases for end-user application testing | Selection of test cases using ML
1. | User installs the mobile app for Smart Home Automation System | |
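For readers unfamiliar with the relay setup, the sketch below shows one plausible shape of the command path on the controller side: subscribing to an Adafruit IO feed over MQTT and switching a relay channel when an ON/OFF command arrives. The feed name, credentials, and pin mapping are all hypothetical, and the paho-mqtt client is used here only as a convenient stand-in for whatever firmware stack the actual board runs.

```python
# Hypothetical sketch of the relay command path: listen on an
# Adafruit IO MQTT feed and switch a relay channel on ON/OFF commands.
# Feed name, credentials, and pin mapping are illustrative assumptions.
import paho.mqtt.client as mqtt

AIO_USER = "your_adafruit_user"        # placeholder credentials
AIO_KEY = "your_adafruit_io_key"
FEED = f"{AIO_USER}/feeds/room-light"  # hypothetical feed name

def set_relay(channel, on):
    # On real hardware this would drive a GPIO pin wired to the
    # 4-channel relay; here we just print the action.
    print(f"relay channel {channel} -> {'ON' if on else 'OFF'}")

def on_connect(client, userdata, flags, rc):
    client.subscribe(FEED)             # start receiving commands

def on_message(client, userdata, msg):
    command = msg.payload.decode().strip().upper()
    if command in ("ON", "OFF"):
        set_relay(channel=1, on=(command == "ON"))

client = mqtt.Client()
client.username_pw_set(AIO_USER, AIO_KEY)
client.on_connect = on_connect
client.on_message = on_message
client.connect("io.adafruit.com", 1883)
client.loop_forever()                  # block and dispatch callbacks
```

A test case for use case scenario 1 would then exercise exactly this path: publish a command to the feed and assert that the expected relay channel changes state.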