Emerging Trends in Expert Applications and Security: Proceedings of 2nd ICETEAS 2023, Volume 1 (Lecture Notes in Networks and Systems, 681) 9819919088, 9789819919086

The book covers current developments in the field of computer system security using cryptographic algorithms and other s

English · 585 pages · 2023

Table of contents:
Preface
Acknowledgements
About This Book
Contents
About the Editors
The Study of Cluster-Based Energy-Efficient Algorithms of Flying Ad-Hoc Networks
1 Introduction
2 IoT Architecture and Devices
3 Flying Ad-Hoc Networks (FANETs)
4 Classification of FANET Routing Protocols
5 Practical Approaches of Routing Protocols in FANETs
5.1 Cluster Based
5.2 Non-cluster Based
6 Recent Research and Developments in FANETs
7 Conclusion
References
Current Web Development Technologies: A Comparative Review
1 Introduction
2 Front-End Libraries and Framework
3 Comparison
4 React.JS in Front-End Development
5 Features
6 Angular Versus React
7 Advantages
8 Disadvantages
9 Conclusion
References
R-Peak-Based Arrhythmia Detection as an Impact of COVID-19
1 Introduction
2 Literature Survey
3 Materials and Methods
4 Result
5 Conclusion
References
IoT-Based Water Quality Monitoring and Detection System
1 Introduction
2 Literature Survey
3 Material and Methodology
3.1 Methodology
3.2 System Architecture
4 Implementation
4.1 Sensor Interfacing
4.2 Microcontroller Programming
4.3 Transmitting Sensed Data to the Internet and Projecting the Sensed Data in Web App
5 Results and Discussion
6 Conclusion
References
A Comparative Analysis of Different Diagnostic Imaging Modalities
1 Introduction
2 X-Ray Radiography
3 Magnetic Resonance Imaging (MRI)
4 Ultrasonography
5 Elastography
6 Optical Imaging
7 Comparison of Several Medical Imaging Techniques
8 Conclusion
References
Uncovering the Usability Test Methods for Human–Computer Interaction
1 Introduction
2 User Engagement and Human–Computer Interaction
3 Software Testing and its Need
4 Usability Test Methods
5 Research Gap and Problem Statement
6 Contribution of Paper
7 Research Methodology
8 Proposed Outcomes/Conclusion
References
The Effects of Lipid Concentration on Blood Flow Through Constricted Artery Using Homotopy Perturbation Method
1 Introduction
2 Formulation of the Problem
3 Discussion
4 Conclusion
References
Actual Facial Mask Recognition Utilizing YOLOv3 and Regions with Convolutional Neural Networks
1 Introduction
2 Methodology
2.1 Object Discovery Using CNN
2.2 Faster R-CNN Algorithm
2.3 Region Proposal Network (R.P.N.)
2.4 Object Exposure Using R.P.N. and Quicker-R-CNN
2.5 Image Preprocessing
2.6 Architecture Diagram
2.7 Set of Data
3 Results and Discussion
4 Conclusion
References
Real-Time Smart System for Marking Attendance that Uses Image Processing in a SaaS Cloud Environment
1 Introduction
2 Related Works
3 System Architecture
3.1 Proposed Algorithm
3.2 Facial Recognition
3.3 Deep Learning
3.4 Comparison Between Proposed and Existing Systems
4 Implementation
4.1 Python
4.2 OpenCV
4.3 Finding Faces—HOG
4.4 Projecting and Posing Faces
4.5 Encoding Faces
5 Stepwise Procedure
5.1 Create Dataset
5.2 Train Dataset
5.3 Testing
5.4 Comparison Representation
6 Results and Discussion
7 Advantages
8 Conclusions and Future Work
References
Design of Automatic Clearance System for Emergency Transport and Abused Vehicle Recognition
1 Introduction
2 Related Work
3 Materials and Methods
3.1 Proposed Model
3.2 Arduino UNO
3.3 RFID Tag
3.4 GSM Modem SIM800A
3.5 Infrared Sensors
4 Working
4.1 Density-Based Traffic Control
4.2 Ambulance Go-Ahead System
4.3 Embezzled Vehicle
5 Conclusion
References
Securing Account Hijacking Security Threats in Cloud Environment Using Artificial Neural Networks
1 Introduction
1.1 Feedforward Backpropagation Algorithm of Artificial Neural Networks
2 Experimental Setup
3 Results
4 Conclusions
References
IoT-Based Smart Pill Box and Voice Alert System
1 Introduction
2 Proposed System
3 Literature Survey
4 Methodology
4.1 Take Pills
4.2 Alert System
4.3 Internet of Things
4.4 ESP-12E-Based NodeMCU
5 Results
6 Conclusion
References
Early Identification of Plant Diseases by Image Processing Using Integrated Development Environment
1 Introduction
2 Modeling of the Working System
2.1 Simulation Using Microcontroller Circuit
2.2 Simulation of Image Processing System
2.3 Model Demonstration
3 Conclusion
References
Usability Attributes and Their Mapping in Various Phases of Software Development Life Cycle
1 Introduction
2 Review of Literature
3 Mapping of Usability Attributes in the Different Stages of Software Development Life Cycle
4 Discussion
4.1 Initial Investigation
4.2 Determination of System Needs
4.3 System Design
4.4 Software Development
4.5 Systems Testing
4.6 Implementation and Evaluation
5 Conclusion
References
Controlling Devices from Anywhere Using IoT Including Voice Commands
1 Introduction
2 Related Works
3 System Architecture
4 Modules
4.1 Web Page
4.2 IoT Web Page Control
4.3 Application Control
4.4 Bluetooth
5 System Implementation
5.1 Arduino
5.2 Liquid Crystal Display
5.3 Relay
5.4 Bluetooth
5.5 Internet of Things
5.6 Infrastructure
5.7 ESP-12E Architecture
5.8 Embedded Systems
6 Output
7 Conclusion
8 Future Enhancements
References
IoT-Based Air Quality Monitoring System
1 Introduction
2 Related Works
3 System Architecture
4 Modules
4.1 Arduino
4.2 Bluetooth
5 Implementation
5.1 Hardware Requirements
5.2 Liquid Crystal Display
5.3 Ethernet Connection (LAN) or a Wireless Adapter (Wi-Fi)
5.4 Internet of Things
5.5 Sensors
6 Clustering Methods
7 Result and Findings
8 Conclusion
9 Future Enhancements
References
Modeling of Order Quantity Prediction using Soft Computing Technique: A Fuzzy Logic Approach
1 Introduction
2 Parameters Used
2.1 Time Elapsed After a New Launch
2.2 Stock Level Present in the Storage
2.3 Discount Percentage
3 Methodology
4 Case Study (iPhone 13)
5 Results and Discussion
6 Conclusion
References
Development of Classification Framework Using Machine Learning and Pattern Recognition System
1 Introduction
2 System Architecture
2.1 Methodology
2.2 Feature Extraction
3 Neural Network Training
4 Experiment Results and Discussion
5 Conclusion
References
Human Part Semantic Segmentation Using Custom-CDGNet Network
1 Introduction
2 Related Work
3 Methodology
3.1 Class Distributions
3.2 Distribution Network
3.3 CDGNet Objective
4 Experimental Analysis
4.1 Data Set Used
4.2 Evaluation Parameter
4.3 Quantitative Analysis
5 Conclusion
References
Effect on Compressive Strength of Portland Pozzolana Cement on Adding Admixtures Using Machine Learning Technique
1 Introduction
2 Literature Review
3 Experimental Set-up with Proposed Methodology
3.1 Designing of M40 Grade Concrete
3.2 Ground Granulated Blast Furnace Slag
3.3 Sugar
4 Observation and Results
4.1 Ground Granulated Blast Furnace Slag (GGBS)
4.2 Sugar
5 Conclusion
References
Tech Track for Visually Impaired People
1 Introduction
2 Objective
3 Literature Survey
4 Existing System
5 Proposed System
6 System Architecture
7 Hardware Requirements
8 Software Requirements
9 Materials
9.1 Arduino UNO
9.2 Ultrasonic Sensor
9.3 Vibration Sensor
9.4 Buzzer
9.5 Global Positioning System (GPS)
9.6 GSM Module
10 Working Operation
11 Conclusion
References
Intelligent Compression of Data on Cloud Storage
1 Introduction
2 Literature Survey
3 System Architecture
4 Modules
4.1 User Creation
4.2 Upload Process
4.3 Deduplication
4.4 Proof of Storage
5 System Requirements
6 Algorithm
7 Use Case Diagram
8 Activity Diagram
9 Software and Technologies Description
9.1 Java
9.2 Java Platform
9.3 Java Technology
9.4 Eclipse
10 Conclusion
11 Future Enhancement
References
Diagnosis of Diabetic Retinopathy Using Deep Neural Network
1 Introduction
2 Literature Survey
3 Existing Methodology
4 Proposed Methodology
5 Modules
6 Conclusion
7 Future Enhancement
References
Multi-parameter Sensor-Based Automation Farming
1 Introduction
2 Related Work
3 System Design
4 Methodology
5 System Requirements
6 Algorithm
7 Conclusion and Future
References
Comparing Ensemble Techniques for Bilingual Multiclass Classification of Online Reviews
1 Introduction
1.1 Background
1.2 Related Work
2 Proposed Methodology
2.1 Dataset Collection
2.2 Data Preprocessing
3 Implementation Details
4 Results
5 Future Work
6 Conclusion
References
Detection of Disease in Liver Image Using Deep Learning Technique
1 Introduction
2 Related Work
3 Proposed Method
3.1 Input Layer
3.2 Convolutional Layer
3.3 ReLU Layer
3.4 Max Pooling Layer
3.5 Flattening
3.6 Fully Connected Layer
3.7 Softmax Layer
4 AlexNet
5 Result
6 Conclusion
References
How to Quantify Software Quality Factors for Mobile Applications?: Proposed Criteria
1 Introduction
2 Background Work
3 Software Quality
4 Software Quality Factors
5 ISO/IEC-9126
6 ISO/IEC-25010
7 Quantification and Fuzzy Logic
7.1 Multi-Criteria Decision Approach
7.2 Fuzzy Multi-Criteria Approach
8 Proposed Methodology
8.1 Flow Graph of Methodology
8.2 Procedure
9 Conclusions and Future Scope
References
Dysgraphia Detection Using Machine Learning-Based Techniques: A Survey
1 Introduction
2 Types and Symptoms of Dysgraphia
3 Detection of Dysgraphia
3.1 By Doctors/Experts
3.2 Automated System
4 Data Collections
5 Related Study
6 Conclusion
References
Designing AI for Investment Banking Risk Management a Review, Evaluation and Strategy
1 Introduction
2 Theoretical Framework for Artificial Intelligence Application in This Research
3 Artificial Intelligence in Risk Management and Post-Trade Processing
3.1 Counterparty Credit Risk (CCR)
3.2 Monte Carlo Simulations in Counterparty Credit Risk
3.3 Market Risk
3.4 Liquidity Risk
3.5 Operational Risk
4 Research Gaps
5 Conclusion
6 Future Opportunities for Research
References
A Neutrosophic Cognitive Maps Approach for Pestle Analysis in Food Industry
1 Introduction
2 Implementation
References
Assistive Agricultural Technology—Soil Health and Suitable Crop Prediction
1 Introduction
2 Literature Review
3 Methodology
4 Conclusions and Results
References
Quantum Key Distribution for Underwater Wireless Sensor Network: A Preliminary Survey of the State-of-the-Art
1 Introduction
1.1 Motivation
1.2 Quantum Mechanics for Underwater Wireless Optical Sensor Network
2 Related Work
3 Design Issues and Challenges
4 Innovative Strategy
5 Conclusion
References
Innovations and Insights of Sequence-Based Emotion Detection in Human Face Through Deep Learning
1 Introduction
2 Handy Video Data Sets
3 Survey of Literature
4 Experiment and Set Up
5 Preprocessing
6 Deep Learning and Convolution Neutral Network
7 Final Model
8 Result and Discussion
9 Conclusion
References
Speed Analysis on Client Server Architecture Using HTTP/2 Over HTTP/1: A Generic Review
1 Introduction
2 Limitations of HTTP/1
3 Enhanced Features of HTTP/2
4 Conclusion
References
Smart Chatbot for Guidance About Children’s Legal Rights
1 Introduction
1.1 The Historical Background of AI
2 Necessity
3 Dataset
4 Methodology
5 Results and Evaluation
5.1 Evaluation of the Classification
5.2 User Studies
6 Conclusion and Future Work
References
Automatic Speed Control of Vehicles in Speed Limit Zones Using IR Sensor
1 Introduction
2 Circuit Parts
2.1 Infrared Receiver
2.2 Infrared Transmitter
2.3 Motor
2.4 Relay Driver
2.5 LDR
2.6 Flame Sensor
2.7 Chassis 7805 Voltage Regulator
3 Circuit Diagram
4 Working
5 Application and Advantage
6 Scope for Future
7 Conclusion
References
Ordering Services Modelling in Blockchain Platform for Food Supply Chain Management
1 Introduction
1.1 Transferring of Assets
1.2 List of Stakeholders
1.3 Blockchain Technologies Used for Implementation
2 Blockchain Architecture for Ordering Services Food Supply Chain
3 Smart Contracts Implemented in Ordering Services Food Supply Chain
4 Analysis of SCM Implementation Using BCT
5 Conclusion
References
Impact of Cryptocurrency on Global Economy and Its Influence on Indian Economy
1 Introduction
1.1 History
1.2 Literature Review
2 Bitcoin Versus Other Crypto Currencies
2.1 Security
2.2 Storage
2.3 Portability
2.4 Payment Methods
2.5 Anonymity
3 Bitcoin Challenges
3.1 Volatility
3.2 Inefficiency in Self-regulation
3.3 Cyber Security
3.4 Scalability
3.5 New Era of Technology
3.6 Bitcoin Usage
3.7 Tax Clarity
4 Economic Impact of Cryptocurrencies in Different Nations
5 Advantages that Cryptocurrencies Provide for the Global Economy
6 Effect of Cryptocurrencies on the Indian Economy
6.1 Increasing Openness
6.2 Employment on the Rise
6.3 Boost for the FinTech Industry
6.4 Improve Online Payments
6.5 Realize Atmanirbhar Bharat's Objective
7 Future of Cryptocurrency in India
8 Conclusion
References
Secure Hotel Key Card System Using FIDO Technology
1 Key Card–A Decades Old Security Saga
1.1 Chronology of a Locking Mechanism
1.2 How Did a Hotel Key Card Survive the Rapid Cyber Security Changes?
2 The Need to Replace Hotel Key Card
2.1 The ‘What’ of RFID Technology Used in Key Cards
2.2 Loopholes in the Current System
3 FIDO-Enabled Secure Hotel Key Card
3.1 The Recent Plunge of FIDO Protocols into Security
3.2 FIDO UAF, NFC, and Mobile Key Cards
3.3 Benefits of Mobile Key Cards
4 Future Scope
4.1 Changing Scenarios in Hotel Security
4.2 Sustainability of FIDO Protocols
References
Security Issues in Website Development: An Analysis and Legal Provision
1 Introduction
2 Vulnerabilities in Website Security
2.1 Authentication and Authorisation
2.2 Cross-Site Scripting (XSS)
2.3 Cross-Site Request Forgery
2.4 Data Leakage
2.5 Invalid Clicks
2.6 Importing Modules
2.7 DOS Attack (Denial of Service Attack)
2.8 Injection Flaws
3 Website Security in Internet
4 Necessity to Provide Web Security
5 Resolution as to the Security Issues
5.1 Resolution as to Individual
5.2 Resolution as to an Organisation
6 Legal Provision Associated with Security Aspects of Website
7 Conclusion
References
Fashion Image Classification Using Machine Learning
1 Introduction
2 Objective
3 Literature Review
4 Existing System
5 Proposed System
6 System Architecture
7 Image Processing in Python
8 Morphological Image Processing
9 Gaussian Image Processing
10 Fourier Transform in Image Processing
11 Edge Detection in Image Processing
References
An Expert Sequential Shift-XOR-Based Hash Function Algorithm
1 Introduction
2 Related Work
3 Problem Definition
4 Methodology
5 Description
6 Analysis and Interpretation
7 Conclusion
References
Medical Record Secured Storage Using Blockchain with Fingerprint Authentication
1 Introduction
2 Literature Survey
3 Existing System
4 Proposed System
5 Modules
6 Algorithm
7 Screenshots
8 Conclusion
References
A Dynamic Tourism Recommendation System
1 Introduction
2 Literature Survey
3 System Architecture
4 Modules
4.1 Administrator Module
4.2 User Module
4.3 Tourism and City Guide
4.4 POI Categories
5 System Requirements
6 Algorithm
7 Use Case Diagram
8 Flow Diagram
9 Software and Technologies Description
9.1 Java
9.2 Android Studio
10 Conclusion
11 Future Enhancement
References
ATM Theft Detection Using Artificial Intelligence
1 Introduction
2 Literature Survey
3 Existing System
4 Proposed System
5 System Architecture
6 Software Requirements
7 Hardware Requirements
8 Modules
9 Conclusion
10 Future Enhancement
References
Proposed Model for Design of Decision Support System for Crop Yield Prediction in Rajasthan
1 Introduction
2 Research Methodology
3 Design of Proposed Model
3.1 Factors Affecting Crop Yield
3.2 Data Collection
4 Conclusion
References
Interactive Business Intelligence System Using Data Analytics and Data Reporting
1 Introduction
2 Related Works
2.1 Business Intelligence
2.2 Data, Information, and Knowledge
2.3 Business Intelligence Architectures
2.4 Business Intelligence Capabilities
2.5 Enabling Factors in Business Intelligence Projects
3 Experimental Workflow Analysis
3.1 Workflow
4 Methodology
4.1 Connection
4.2 Integrate
4.3 Publish
5 Results
6 Conclusion and Future Enhancement
6.1 Conclusion
6.2 Future Enhancement
References
Metaverse in Robotics—A Hypothetical Walkthrough to the Future
1 Introduction
2 Evolution of Metaverse and Robotics
2.1 Metaverse
2.2 Robotics
3 Integrating the Metaverse with Robotics
3.1 Hypothesis 1
3.2 Hypothesis 2
4 Strategies for Effective Implementation of Metaverse in Robotics
5 Conclusion
References

Lecture Notes in Networks and Systems 681

Vijay Singh Rathore · João Manuel R. S. Tavares · Vincenzo Piuri · B. Surendiran, Editors

Emerging Trends in Expert Applications and Security Proceedings of 2nd ICETEAS 2023, Volume 1

Lecture Notes in Networks and Systems Volume 681

Series Editor: Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors:
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Vijay Singh Rathore · João Manuel R. S. Tavares · Vincenzo Piuri · B. Surendiran Editors

Emerging Trends in Expert Applications and Security Proceedings of 2nd ICETEAS 2023, Volume 1

Editors
Vijay Singh Rathore, Department of Computer Science and Engineering, Jaipur Engineering College and Research Centre, Jaipur, Rajasthan, India
João Manuel R. S. Tavares, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
Vincenzo Piuri, Department of Computer Engineering, University of Milan, Milan, Italy
B. Surendiran, Department of Computer Science and Engineering, National Institute of Technology, Puducherry, India

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-981-99-1908-6 ISBN 978-981-99-1909-3 (eBook) https://doi.org/10.1007/978-981-99-1909-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The 2nd International Conference on Emerging Trends in Expert Applications and Security (ICE-TEAS 2023) was held at Jaipur Engineering College and Research Centre, Jaipur, India, during 17–19 February 2023 in hybrid mode. The conference was organized collectively by the “Department of Computer Science Engineering, Department of Information Technology, and Department of Artificial Intelligence and Data Science” of JECRC, Jaipur, in association with Springer Nature for publication (LNNS Series), and supported by CSI Jaipur Chapter, ACM Professional Chapter Jaipur, and GR Foundation, Ahmedabad. The conference addressed recent technological developments, specifically “Expert Applications and its Security”. Technology has transformed with great speed in the last few decades, resulting in the development of expert applications that make life more effortless. The conference raised awareness about issues related to emerging technologies as well as the increased threats in expert applications and security, which will aid in the creation of better solutions for society.

The COVID-19 pandemic has impacted us more than any other event in most of our lifetimes. Companies, associations, and destinations globally are trying to navigate their way through this crisis, balancing the short-term need with a long-term strategy. While we are all in the same storm, we must realize that we are in different boats, and therefore different solutions and strategies are necessary.

To understand another dimension of the conference abbreviation, consider the word “ICE”, meaning “the solid state of frozen water”: the scientific thoughts frozen in the mind are proposed and discussed by analysing their pros and cons in advance. “TEAS”, in turn, means “a hot drink infused with dried crushed herbs and leaves which brings freshness”, depicting that scientific ideas arrive at fresh, novel solutions for expert applications through discussions with scientists and researchers globally during the technical sessions and the tea breaks, so that the frozen ideas melt into proposed solutions. The issues can be addressed with proper planning and utmost care to benefit the concerned. Here, through the conference “ICE-TEAS 2023”, the “frozen idea” (ICE) of the rising threats in expert applications would be discussed, analysed, and probably solved, during various tea sessions (and tea breaks) of the conference.

ICE-TEAS 2023 was organized keeping these dimensions at preference. The conference aimed to provide an international platform for researchers, academicians, industry representatives, government officials, students, and other stakeholders in the field to explore opportunities and to disseminate and acquire beneficial knowledge from the various issues deliberated in the papers presented on the different themes of the conference. The Technical Program Committee and Advisory Board of ICE-TEAS 2023 include eminent academicians, researchers, and practitioners from across the globe.

The conference received an incredible response from both delegates and students in terms of research paper presentations. More than 516 papers were received, out of which 98 were selected after impartial plagiarism checks and a rigorous peer-review process. All 98 papers have been included in two volumes (Vol. 1 and Vol. 2), each containing 49 papers, which were presented in 12 different technical sessions. The conference was held in hybrid mode, wherein about 30% of the participants attended physically from across the globe and the remaining 70% joined virtually to present their papers and to hear the exemplary speakers. We had international participants and delegates from countries such as Italy, Serbia, Norway, Portugal, the USA, Vietnam, Ireland, the Netherlands, Romania, and Poland, to name a few.

We deeply appreciate all our authors for having confidence in us and considering ICE-TEAS 2023 a platform for sharing and presenting their original research work. We also express our sincere gratitude to the focused team of chairs, co-chairs, international advisory committee, and technical program committee. We are very much thankful to Mr. Aninda Bose (Senior Publishing Editor, Springer Nature, India) for providing continuous guidance and support. Our heartfelt thanks to all the reviewers and technical program committee members for their cooperation and efforts in the peer-review process. We are indeed very much thankful to everyone associated directly or indirectly with the conference for organizing a firm team and leading it towards grand success. We hope that it meets expectations. We are very much grateful to the patrons, general chair, conference chairs, delegates, participants, and researchers for their thought-provoking contributions. We extend our heartiest thanks and best wishes to all concerned.

Prof. Dr. Vijay Singh Rathore PC Chair and Convenor, ICE-TEAS 2023

Acknowledgements

First and foremost, we would like to thank God, the Almighty, who has granted countless blessings for the grand success of the conference. The organizing committee wishes to acknowledge the financial as well as infrastructural support provided by Jaipur Engineering College and Research Centre and the technical support by CSI Jaipur Chapter and ACM Jaipur Professional Chapter. The support of GR Foundation, Ahmedabad, is also gratefully acknowledged. We are very much grateful to Mr. O. P. Agrawal, Chairperson, JECRC; Mr. Amit Agrawal, Vice-Chairperson, JECRC; and Mr. Arpit Agrawal, Director, JECRC, for their unstinted support and invaluable guidance throughout, which gave this three-day event its present shape. We are also grateful to Prof. Vinay Kumar Chandna, Principal, JECRC, and General Chair of ICE-TEAS 2023, for the meticulous planning and attention to detail which helped us in organizing this event. Our heartfelt thanks to the PC chairs and editors of the ICE-TEAS 2023 Proceedings: Prof. Vincenzo Piuri, Professor at University of Milano, Italy; Prof. João Manuel R. S. Tavares, Professor, University of Porto, Portugal; and Prof. Vijay Singh Rathore, Professor-CSE and Director–Outreach, JECRC, Jaipur. Our sincere gratitude to the co-chairs and editors of the ICE-TEAS 2023 Proceedings: Prof. Rosalina B. Babo, Professor, Polytechnic Institute of Porto, Portugal; Dr. Marta Campos Ferreira, Assistant Professor, University of Porto, Portugal; and Dr. B. Surendiran, Associate Professor, NIT Puducherry, India. We are extremely grateful to Prof. S. K. Singh, Vice-Chancellor, RTU Kota, who served as Inaugural Chief Guest; Prof. R. S. Salaria, Director, Guru Nanak University, Hyderabad; and Dr. Maya Ingle, Director, DDUKK, Devi Ahilya University, Indore, as inaugural guests. We would like to extend our deepest gratitude to Mr. Aninda Bose, Senior Editor, Springer Nature, for providing continuous guidance and support. We are very much thankful to Prof. Vijay Kumar Banga, Principal, Amritsar College of Engineering and Technology, Amritsar; Dr. Indira Routaray, Dean, International Education, C. V. Raman Global University, Bhubaneswar; Prof. Mike Hinchey, Professor, University of Limerick, Ireland, and President, IFIP; Prof. Marcel Worring, Professor and Director of the Informatics Institute, University
of Amsterdam, Netherlands; Prof. Patrizia Pucci, Professor, Department of Mathematics and Informatics, University of Perugia, Perugia, Italy; Prof. Cornelia-Victoria Anghel Drugărin, Professor, Babeș-Bolyai University Cluj-Napoca, România; Prof. Francesca Di Virgilio, Professor, University of Molise, Italy; Prof. Kirk Hazlett, Adjunct Professor, University of Tampa, Florida, USA; Prof. Milan Tuba, Vice-Rector, Singidunum University, Belgrade, Serbia; Prof. Vladan Devedzic, Professor of Computer Science, University of Belgrade, Serbia; Prof. Ibrahim A. Hameed, Professor, ICT, Norwegian University of Science and Technology (NTNU), Ålesund, Norway; Prof. Gabriel Kabanda, Adjunct Professor, Machine Learning, Woxsen University, Hyderabad; Dr. Nguyen Ha Huy Cuong, Professor, Department of Information Technology, The University of Da Nang, College of Information Technology, Da Nang, Vietnam; Dr. Dharm Singh, Namibia University of Science and Technology, Namibia; Prof. Igor Razbornik, CEO, Erasmus+ Projects with Igor, Velenje, Slovenia; Dr. Marta Campos Ferreira, Assistant Professor, Faculty of Engineering, University of Porto, Portugal; Prof. M. Hanumanthappa, Professor and Director, CS, Bangalore University, Bangalore; Dr. Satish Kumar Singh, Professor, IT, IIIT Allahabad; Prof. Durgesh Kumar Mishra, Professor, CSE, Sri Aurobindo Institute of Technology, Indore; Dr. Nilanjan Dey, Associate Prof., CSE, Techno International New Town, Kolkata; Prof. P. K. Mishra, Institute of Science, BHU, Varanasi; Dr. Ashish Jani, Professor and Head, CSE, Navrachna University, Vadodara, Gujarat; Prof. Reena Dadhich, Professor and Head, Department of Computer Science, University of Kota, Kota; Prof. O. P. Rishi, Professor—CS, University of Kota, Kota; Prof. K. S. Vaisla, Professor, BTKIT, Dwarahat, Uttarakhand; Dr. Vivek Tiwari, Assistant Professor, IIIT Naya Raipur, Chhattisgarh; Prof. Krishna Gupta, Director, UCCS and IT, University of Rajasthan, Jaipur; Prof. Vibhakar Mansotra, Professor, CS, University of Jammu, Jammu; Prof. P. V. Virparia, Professor and Head, CS, Sardar Patel University, Gujarat; Prof. Ashok Agrawal, Professor, University of Rajasthan, Jaipur; Dr. Neeraj Bhargava, Professor and Head, Department of CS, MDS University, Ajmer; Prof. C. K. Kumbharana, Professor and Head, CS, Saurashtra University, Rajkot; Prof. Atul Gonsai, Professor, CS, Saurashtra University, Rajkot; Prof. Dhiren Patel, Professor, Department of MCA, Gujarat Vidyapeeth University, Ahmedabad; Dr. N. K. Joshi, Professor and Head, CS, MIMT, Kota; Prof. Vinod Sharma, Professor, CS, University of Jammu, Jammu; Dr. Tanupriya Chaudhary, Associate Professor, CS, UPES, Dehradun; Dr. Praveen Kumar Vashishth, Amity University, Tashkent, Uzbekistan; Dr. Meenakshi Tripathi, Associate Professor, CSE, MNIT, Jaipur; Dr. Jatinder Manhas, Sr. Assistant Professor, University of Jammu, J&K; Dr. Ritu Bhargava, Professor-CS, Sophia Girls College, Ajmer; Dr. Nishtha Kesswani, Associate Professor, CUoR, Kishangarh; Dr. Shikha Maheshwari, Associate Professor, Manipal University, Jaipur; Dr. Sumeet Gill, Professor, CS, MDU, Rohtak; Dr. Suresh Kumar, Associate Prof., Savitha Engineering College, Chennai; Dr. Avinash Panwar, Director, Computer Centre, and Head IT, MLSU, Udaipur; Dr. Avinash Sharma, Principal, MMEC, Ambala, Haryana; Dr. Abid Hussain, Associate Professor, Career Point University, Kota; Dr. Sonali Vyas, Assistant Professor, CS, UPES, Dehradun; Mr. Kamal Tripathi, Founder and CEO, NKM Tech Solutions; Dr.
Pritee Parwekar, Associate Professor, CSE, SRM IST, Ghaziabad; Dr. Pinal J. Patel, Dr. Lavika
Goel, Assistant Professor, MNIT, Jaipur; Dr. N. Suganthi, Assistant Professor, SRM University, Ramapuram Campus, Chennai; Dr. Mahipal Singh Deora, Professor, BN University, Udaipur; Dr. Seema Rawat, Associate Professor., Amity University, Tashkent, Uzbekistan; Ms. Gunjan Gupta, Director, A2G Enterprises Pvt. Ltd., Noida; Dr. Minakshi Memoria, HOD CSE, UIT, Uttaranchal University, Uttaranchal; Dr. Himanshu Sharma, Assistant Professor, SRM Inst., Delhi NCR; Dr. Paras Kothari, Professor and Head MCA, GITS, Udaipur; Dr. Kapil Joshi, Assistant Professor, CSE, Uttaranchal University, Dehradun; Dr. Amit Sharma, Associate Professor, Career Point University, Kota; Dr. Salini Suresh, Associate Professor, Dayanand Sagar Institutions, Bengaluru; Dr. Bharat Singh Deora, Associate Professor, JRNRVU, Udaipur; Dr. Manju Mandot, Professor, JRNRVU, Udaipur; Dr. Pawanesh Abrol, Professor, Department of Computer Science, University of Jammu, Jammu; Dr. Preeti Tiwari, Professor, ISIM, Jaipur; Dr. Kusum Rajawat, Principal, SKUC, Jaipur; and all other resource persons, experts, guest speakers, and session chairs for their gracious presence. We deeply appreciate all our authors for having confidence in us and considering ICE-TEAS 2023 a platform for sharing and presenting their original research work. We also express our sincere gratitude to the focused team of chairs, co-chairs, reviewers, international advisory committee, and technical program committee. No task in this world can be completed successfully without the support of your team members. We would like to extend our heartfelt appreciation to our Organizing Secretary Prof. Sanjay Gaur, Head CSE, JECRC; Co-organizing Secretaries Dr. Vijeta Kumawat, Deputy HoD-CSE, Dr. Smita Agarwal Head IT, and Dr. Manju Vyas Head AI and DS, for their seamless contribution towards systematic planning and execution of the conference and making it a grand success altogether. Thanks to all organizing committee members and faculty members of the JECRC for their preliminary support. Heartfelt thanks to the media and promotion team for their support in wide publicity of this conference in such a short span of time. Finally, we are grateful to one and all, who have contributed directly or indirectly in making ICE-TEAS 2023 a grand success. Jaipur, India

Prof. Dr. Vijay Singh Rathore PC Chair and Convenor, ICE-TEAS 2023

About This Book

This book is Volume 1 of the proceedings of the International Conference ICE-TEAS 2023 at JECRC, Jaipur. It includes high-quality, peer-reviewed papers from the 2nd International Conference on Emerging Trends in Expert Applications and Security (ICE-TEAS 2023), held at the Jaipur Engineering College and Research Centre, Jaipur, Rajasthan, India, during 17–19 February 2023, which addressed various facets of evolving technologies in expert applications, analysed the threats associated with them, and eventually proposed solutions and security against those threats.

Technology advancements have broadened the horizons for the proliferation of expert applications, covering varied domains, namely design, monitoring, process control, medicine, knowledge, finance, commerce, and many more. However, no technology can offer easy and complete solutions owing to technological limitations, difficult knowledge acquisition, high development and maintenance cost, and other factors. Hence, the emerging trends, the rising threats, and the provision of adequate security in expert applications are also issues of concern. Keeping this ideology in mind, the book offers insights that reflect the advances in these fields across the globe and also the rising threats. Covering a variety of topics, such as expert applications and artificial intelligence/machine learning, advanced web technologies, IoT, big data, cloud computing in expert applications, information and cyber security threats and solutions, multimedia applications in forensics, security and intelligence, advancements in app development, management practices for expert applications, and social and ethical aspects in expert applications through applied sciences, it will surely help those in industry and academia working on cutting-edge technology for the advancement of next-generation communication and computational technology to shape real-world applications.

The book is appropriate for researchers as well as professionals. Researchers will be able to save considerable time by finding authenticated technical information on expert applications and security in one place. Professionals will have a readily available rich set of guidelines and techniques applicable to a wide class of engineering domains.


Contents

The Study of Cluster-Based Energy-Efficient Algorithms of Flying Ad-Hoc Networks ... 1
Ayushi Sharma, Sandeep Sharma, and Prabhsimran Singh

Current Web Development Technologies: A Comparative Review ... 13
Vibhakar Pathak, Vishal Shrivastava, Rambabu Buri, Shikha Gupta, and Sangeeta Sharma

R-Peak-Based Arrhythmia Detection as an Impact of COVID-19 ... 25
Supriya Dubey and Pritee Parwekar

IoT-Based Water Quality Monitoring and Detection System ... 35
M. Kanchana, P. V. Gopirajan, K. Sureshkumar, R. Sudharsanan, and N. Suganthi

A Comparative Analysis of Different Diagnostic Imaging Modalities ... 47
Vivek Kumar, Shobhna Poddar, Neha Rastogi, Kapil Joshi, Ashulekha Gupta, and Parul Saxena

Uncovering the Usability Test Methods for Human–Computer Interaction ... 57
Garima Nahar and Sonal Bordia Jain

The Effects of Lipid Concentration on Blood Flow Through Constricted Artery Using Homotopy Perturbation Method ... 69
Jyoti, Sumeet Gill, Rajbala Rathee, and Neha Phogat

Actual Facial Mask Recognition Utilizing YOLOv3 and Regions with Convolutional Neural Networks ... 81
A. Thilagavathy, D. Naveen Raju, S. Priyanka, G. RamBalaji, P. V. Gopirajan, and K. Sureshkumar

Real-Time Smart System for Marking Attendance that Uses Image Processing in a SaaS Cloud Environment ... 95
G. Nalinipriya, Akilla Venkata Seshasai, G. V. Srikanth, E. Saran, and Mohammed Zaid

Design of Automatic Clearance System for Emergency Transport and Abused Vehicle Recognition ... 109
M. Malathi, P. Sinthia, M. R. Karpaga Priya, S. Kavitha, and K. Suresh Kumar

Securing Account Hijacking Security Threats in Cloud Environment Using Artificial Neural Networks ... 119
Renu Devi, Sumeet Gill, and Ekta Narwal

IoT-Based Smart Pill Box and Voice Alert System ... 129
P. Sinthia, R. Karpaga Priya, S. Kavitha, M. Malathi, and K. Suresh Kumar

Early Identification of Plant Diseases by Image Processing Using Integrated Development Environment ... 139
S. Kavitha, M. Malathi, P. Sinthia, R. Karpaga Priya, and K. Suresh Kumar

Usability Attributes and Their Mapping in Various Phases of Software Development Life Cycle ... 153
Ruchira Muchhal, Meena Sharma, and Kshama Paithankar

Controlling Devices from Anywhere Using IoT Including Voice Commands ... 163
S. V. Ravi kkumaar, N. Velmurugan, N. Hemanth, and Valleri Gnana Theja

IoT-Based Air Quality Monitoring System ... 175
Amileneni Dhanush, S. P. Panimalar, Kancharla Likith Chowdary, and Sai Supreeth

Modeling of Order Quantity Prediction using Soft Computing Technique: A Fuzzy Logic Approach ... 185
Anshu Sharma, Sumeet Gill, and Anil Kumar Taneja

Development of Classification Framework Using Machine Learning and Pattern Recognition System ... 195
Kapil Joshi, Ajay Poddar, Vivek Kumar, Jitendra Kumar, S. Umang, and Parul Saxena

Human Part Semantic Segmentation Using Custom-CDGNet Network ... 207
Aditi Verma, Vivek Tiwari, Mayank Lovanshi, Rahul Shrivastava, and Basant Tiwari

Effect on Compressive Strength of Portland Pozzolana Cement on Adding Admixtures Using Machine Learning Technique ... 219
Prafful Negi, Vinod Balmiki, Awadhesh Chandramauli, Kapil Joshi, and Anchit Bijalwan

Tech Track for Visually Impaired People ... 231
G. Abinaya, K. Kanishkar, S. Harish, and D. J. Subbash

Intelligent Compression of Data on Cloud Storage ... 243
E. Mothish, S. Vidhya, G. Jeeva, and K. Gopinathan

Diagnosis of Diabetic Retinopathy Using Deep Neural Network ... 253
S. Dhivya Lakshmi, C. Lalitha Parameswari, and N. Velmurugan

Multi-parameter Sensor-Based Automation Farming ... 267
K. Suresh Kumar, S. Pavithra, K. P. Subiksha, and V. G. Kavya

Comparing Ensemble Techniques for Bilingual Multiclass Classification of Online Reviews ... 275
Priyanka Sharma and Pritee Parwekar

Detection of Disease in Liver Image Using Deep Learning Technique ... 285
T. K. R. Agita, M. Arun, K. Immanuvel Arokia James, S. Arthi, P. Somasundari, M. Moorthi, and K. Sureshkumar

How to Quantify Software Quality Factors for Mobile Applications?: Proposed Criteria ... 299
Manish Mishra and Reena Dadhich

Dysgraphia Detection Using Machine Learning-Based Techniques: A Survey ... 315
Basant Agarwal, Sonal Jain, Priyal Bansal, Sanatan Shrivastava, and Navyug Mohan

Designing AI for Investment Banking Risk Management a Review, Evaluation and Strategy ... 329
Simarjit Singh Lamba and Navroop Kaur

A Neutrosophic Cognitive Maps Approach for Pestle Analysis in Food Industry ... 349
Kanika Bhutani, Sanjay Gaur, Punita Panwar, and Sneha Garg

Assistive Agricultural Technology—Soil Health and Suitable Crop Prediction ... 361
K. Naveen, Saksham Singh, Arihant Jain, Sushant Arora, and Madhulika Bhatia

Quantum Key Distribution for Underwater Wireless Sensor Network: A Preliminary Survey of the State-of-the-Art ... 371
Pooja Ashok Shelar, Parikshit Narendra Mahalle, and Gitanjali Rahul Shinde

Innovations and Insights of Sequence-Based Emotion Detection in Human Face Through Deep Learning ... 385
Krishna Kant and D. B. Shah

Speed Analysis on Client Server Architecture Using HTTP/2 Over HTTP/1: A Generic Review ... 397
Anuj Kumar, Raja Kumar Murugesan, Harshita Chaudhary, Neha Singh, Kapil Joshi, and Umang

Smart Chatbot for Guidance About Children’s Legal Rights ... 405
Akshay Kumar, Pooja Joshi, Ashish Saini, Amrita Kumari, Chetna Chaudhary, and Kapil Joshi

Automatic Speed Control of Vehicles in Speed Limit Zones Using IR Sensor ... 413
Riya Kukreti, Ritu Pal, Pratibha Dimri, Sakshi Koli, and Kapil Joshi

Ordering Services Modelling in Blockchain Platform for Food Supply Chain Management ... 423
Pratibha Deshmukh, Sharma Avinash, Atul M. Gonsai, Sayas S. Sonawane, and Taukir Khan

Impact of Cryptocurrency on Global Economy and Its Influence on Indian Economy ... 433
B. Umamaheswari, Priyanka Mitra, Somya Agrawal, and Vijeta Kumawat

Secure Hotel Key Card System Using FIDO Technology ... 447
Aditi Gupta, Rhytthm Mahajan, and Vijeta Kumawat

Security Issues in Website Development: An Analysis and Legal Provision ... 457
Darashiny Nivasan, Gagandeep Kaur, and Sonali Vyas

Fashion Image Classification Using Machine Learning ... 469
V. Monisree, M. Sneha, and S. V. Sneha

An Expert Sequential Shift-XOR-Based Hash Function Algorithm ... 483
Sunita Bhati, Anita Bhati, and Sanjay Gaur

Medical Record Secured Storage Using Blockchain with Fingerprint Authentication ... 493
R. Jayamahalakshmi, G. Abinaya, S. Oviashree, and S. Yuthika

A Dynamic Tourism Recommendation System ... 505
G. NaliniPriya, Akilla Venkata Sesha Sai, Mohammed Zaid, and E. Sureshram

ATM Theft Detection Using Artificial Intelligence ... 517
S. P. Panimalar, M. Ashwin Kumar, and N. Rohit

Proposed Model for Design of Decision Support System for Crop Yield Prediction in Rajasthan ... 527
Kamini Pareek, Vaibhav Bhatnagar, and Pradeep Tiwari

Interactive Business Intelligence System Using Data Analytics and Data Reporting ... 539
E. Sujatha, V. Loganathan, D. Naveen Raju, and N. Suganthi

Metaverse in Robotics—A Hypothetical Walkthrough to the Future ... 559
Salini Suresh, R. Sujay, and N. Ramachandran

About the Editors

Prof. (Dr.) Vijay Singh Rathore is presently working as Professor-CSE and Director—OutReach, Jaipur Engineering College and Research Centre, Jaipur (India), Membership Chair, ACM Jaipur Chapter, and Past Chairman, CSI Jaipur Chapter. With 20+ years of teaching experience, he has published 5 patents, supervised 20 Ph.D. students, published 94 research papers and 10 books, is associated as Editor and Reviewer with some reputed journals, and has received 20+ national and international awards of repute. His core research areas include Internet security, cloud computing, big data, and IoT. He has organized and participated in 25+ national and international conferences of repute. His foreign visits for various academic activities (Delegate/Invited/Keynote Speaker/Session Chair/Guest) include the USA, UK, Canada, France, Netherlands, Singapore, Thailand, Vietnam, Nepal, etc. He has been a Member of the Indian Higher Education Delegation and visited 20+ leading universities in Canada (May 2019), the UK (July 2017), and the USA (August 2018), supported by AICTE and the GR Foundation. Other delegation visits include the University of Amsterdam (2016), Nanyang Technological University, Singapore (2018), and the University of Lincoln, QMUL, Brunel University, and Oxford Brookes University (2020) for discussions on academic and research collaborations. He is an active academician, always welcoming and forming innovative ideas across the various dimensions of education and research for the better development of society.

João Manuel R. S. Tavares graduated in Mechanical Engineering at the Universidade do Porto, Portugal, in 1992. He earned his M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from the Universidade do Porto in 1995 and 2001 and attained his Habilitation in Mechanical Engineering in 2015. He is Senior Researcher at the Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial (INEGI) and Full Professor at the Department of Mechanical Engineering (DEMec) of the Faculdade de Engenharia da Universidade do Porto (FEUP). João Tavares is Co-editor of more than 75 books and Co-author of more than 50 chapters, 650 articles in international and national journals and conferences, and 3 international and 3 national patents. He has been Committee Member of several international and national journals and conferences, is Co-founder and Co-editor of the book series Lecture Notes in Computational Vision and Biomechanics published by Springer, Founder and Editor-in-Chief of the journal Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization published by Taylor & Francis, Editor-in-Chief of the journal Computer Methods in Biomechanics and Biomedical Engineering published by Taylor & Francis, and Co-founder and Co-chair of the international conference series CompIMAGE, ECCOMAS VipIMAGE, ICCEBS, and BioDental. Additionally, he has been (co-)supervisor of several M.Sc. and Ph.D. theses and supervisor of several postdoc projects, and has participated in many scientific projects both as Researcher and as Scientific Coordinator. His main research areas include computational vision, medical imaging, computational mechanics, scientific visualization, human–computer interaction, and new product development.

Prof. Vincenzo Piuri received his Ph.D. in Computer Engineering at the Polytechnic of Milan, Italy (1989). He is Full Professor in Computer Engineering at the University of Milan, Italy (since 2000). He has been Associate Professor at the Polytechnic of Milan, Italy, Visiting Professor at the University of Texas at Austin, USA, and Visiting Researcher at George Mason University, USA. His main research interests are artificial intelligence, computational intelligence, intelligent systems, machine learning, pattern analysis and recognition, signal and image processing, biometrics, intelligent measurement systems, industrial applications, digital processing architectures, fault tolerance, cloud computing infrastructures, and the Internet of Things. Original results have been published in 400+ papers in international journals, proceedings of international conferences, books, and chapters. He is Fellow of the IEEE, Distinguished Scientist of ACM, and Senior Member of INNS. He is IEEE Region 8 Director-elect (2021–2022) and will be IEEE Region 8 Director (2023–2024).

Dr. B. Surendiran is working as Associate Professor, Department of CSE, NIT Puducherry, Karaikal. He has more than 16 years of teaching and research experience. He completed his Ph.D. in 2012 at NIT Trichy. His research interests are medical image processing, machine learning, dimensionality reduction, etc. He has 50+ international journal and conference publications to his credit. He has delivered 85+ lectures, reviewed 500+ research papers, and guided 12 Ph.D. scholars. He received the Best Reviewer Award in 2019 from publons.com, Web of Science, and the Outstanding Reviewer Award from the Biomedical and Pharmacology Journal in 2020. He is an Active Reviewer for IEEE Sensors, IET Image Processing, and various SCI/Scopus journals.

The Study of Cluster-Based Energy-Efficient Algorithms of Flying Ad-Hoc Networks

Ayushi Sharma, Sandeep Sharma, and Prabhsimran Singh

Abstract In the Internet of Things, where a growing number of physical objects and sensors are connected to the Internet, cloud computing alone cannot handle the resulting data because of its high latency. IoT-based ad-hoc networks are local area networks created spontaneously as devices connect; designed for industrial use, they are readily deployable and configurable. When UAVs communicate and collaborate wirelessly, they establish a flying ad-hoc network (FANET). Urban public infrastructure and services, such as transportation and governance, can be transformed by utilizing the FANET-based Internet of Things (IoT). Sensor-based data collection can be performed on the fly, without the need for fixed hardware, to support a wide range of applications such as automated transportation, smart agriculture, disaster management, security, smart cities, smart housing, smart freight, smart governance, smart mobility and smart surveillance. In this paper, we present a comprehensive analysis of cluster-based and non-cluster-based routing protocols with regard to their parameters, methods, metrics, applications and workings.

Keywords Ad-Hoc networks · FANETs · IoT · Wireless sensor network · Cluster based · Energy efficient · Routing protocols

A. Sharma (B) · S. Sharma · P. Singh
Guru Nanak Dev University, Amritsar, India
e-mail: [email protected]; [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_1

1 Introduction

Many years ago, wired networks were the only method of connecting computers to the Internet. During the last decade, communications over wireless networks have
Fig. 1 Internet of Things (Source Adapted from [28])


are four broad types of UAVs: long-range drones, high-altitude drones, small drones and mini drones. The first two are typically military drones.

2 IoT Architecture and Devices

The Internet of Things architecture consists of multiple elements, including sensors, protocols, actuators, cloud services and layers. Additionally, protocols and gateways distinguish Internet of Things (IoT) architecture layers from devices and sensors. In general, IoT architecture [11] can be divided into the following four layers (Fig. 2):

(1) Sensor layer
(2) Gateway and network layer
(3) Management service layer
(4) Application layer.

IoT devices are hardware devices, such as sensors, gadgets, appliances and other machines, that collect data over the Internet. IoT devices mostly fall into the following categories.

Fig. 2 IoT architecture (Source Adapted from [22])


Fig. 3 IoT devices (Source Adapted from [3])

(1) Sensors/actuators
(2) Gateways and data acquisition
(3) Data center/cloud (Fig. 3).
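These three categories map onto a simple data path: a sensor node takes a reading, a gateway ingests it, and the data center/cloud stores it. The following is a minimal Python sketch of the sensor side of that path; the gateway URL, device ID and `read_temperature_c()` driver are hypothetical placeholders, and the transport (plain HTTP/JSON) is an assumption made purely for illustration.

```python
import json
import urllib.request

def read_temperature_c():
    """Placeholder for a real sensor driver (hypothetical)."""
    return 24.7

def publish(reading, gateway_url="http://gateway.local/ingest"):  # hypothetical endpoint
    # Package the reading and hand it to the gateway, which would forward
    # it onward to the data center/cloud layer
    body = json.dumps({"device_id": "sensor-01", "temperature_c": reading}).encode()
    req = urllib.request.Request(gateway_url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

publish(read_temperature_c())
```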

3 Flying Ad-Hoc Networks (FANETs)

Flying-Things is a promising new area, as unmanned aerial vehicles have emerged in recent years. FANETs are made possible by UAVs, which are highly efficient at completing tasks and organizing the ad-hoc behavior of networks. The formation of aerial nodes is not feasible until UAVs are able to interact with each other on the basis of mobility models. As mobility models help UAVs deploy in the sky, their behavior is dynamic, unlike wireless sensor networks and mobile ad-hoc networks. Infrastructure-based UAV networks can be problematic, and ad-hoc networking can solve those problems. With the rise of small UAVs, low-cost FANETs have been envisioned owing to recent technological advancements in communication systems, sensors and electronics [16] (Fig. 4).

4 Classification of FANET Routing Protocols

Since FANETs are highly dynamic, UAVs cause abrupt changes in topology, making routing among them crucial [32]. A major aspect of FANETs is the use of routing protocols for UAV-to-UAV communication. Some popular routing protocols are as follows:


Fig. 4 FANET (Source Adapted from [14])

(1) Static routing protocols
(2) Proactive routing protocols
(3) Reactive routing protocols
(4) Hybrid routing protocols (Table 1).

Table 1 Parametric analysis of popular FANET routing protocols in IoT (Source Adapted from [21])

| Criteria | Static protocols | Proactive protocols | Reactive protocols | Hybrid protocols |
|---|---|---|---|---|
| Main idea | Static table | Table-driven protocol | On-demand protocol | Combination of proactive and reactive protocols |
| Complexity | Less | Moderate | Average | Average |
| Route | Static | Dynamic | Dynamic | Dynamic |
| Topology size | Compact | Compact | Massive | Compact and massive |
| Fault tolerant | Absent | Absent | Absent | Mostly available |
| Bandwidth utilization | Best possible | Least possible | Best possible | Moderate |
| Convergence time | Quicker | Slower | Mostly fast | Medium |
| Signaling overhead | Absent | Existing | Existing | Existing |
| Communication latency | Less | Less | High | High |
| Missing failure rate | High | Low | Low | Very low |


5 Practical Approaches of Routing Protocols in FANETs

There are two practical approaches used for routing protocols in FANETs:

5.1 Cluster Based

In UAV networks, cluster-based routing protocols can be divided into two categories:

Based on Probabilistic Clustering: In probabilistic cluster-based routing protocols, the cluster head (CH) is elected randomly (Fig. 5 and Table 2).

Based on Deterministic Clustering: In deterministic cluster-based routing protocols, a CH is selected using more reliable metrics (Table 3).
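To make the two election styles concrete, the following is a minimal sketch in Python; it does not implement any of the surveyed protocols, and the node fields (residual energy, neighbour count) and metric weights are illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class UAVNode:
    node_id: int
    residual_energy: float   # joules remaining (assumed field)
    neighbour_count: int     # links to nearby UAVs (assumed field)

def elect_ch_probabilistic(nodes, p=0.2, rng=random.Random(42)):
    """Probabilistic style: every node independently becomes a CH with probability p."""
    return [n for n in nodes if rng.random() < p]

def elect_ch_deterministic(nodes):
    """Deterministic style: rank nodes by a fixed metric (energy weighted by connectivity)."""
    return max(nodes, key=lambda n: 0.7 * n.residual_energy + 0.3 * n.neighbour_count)

nodes = [UAVNode(i, random.uniform(10, 100), random.randint(1, 8)) for i in range(10)]
print("Probabilistic CHs:", [n.node_id for n in elect_ch_probabilistic(nodes)])
print("Deterministic CH:", elect_ch_deterministic(nodes).node_id)
```

In a real FANET, the deterministic metric would come from the protocol's own criteria (e.g., link expiration time, or K-means distance in EALC-style schemes), and the election would be re-run as the topology changes.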

Fig. 5 Cluster network data collection using UAVs (Source Adapted from [4])


Table 2 Probabilistic clustering-based FANET routing protocols

| Name | Details |
|---|---|
| UAV routing protocol (URP) | In URP, data is collected from a specific area based on a dynamic clustering protocol [29]. Mobile sink nodes based on UAVs gather data from dispersed nodes by walking randomly or according to predefined paths |
| UAV-based linear sensor routing protocol (ULSN) | With ULSN, data transmission will be less energy intensive and network lifespan will be longer [15]. A single UAV, relay nodes (RNs), sensor nodes (SNs) and sinks are the four types of nodes used in the system |
| UAV-wireless sensor routing protocol (UAV-WSN) | Two major cooperative behaviors occur between UAVs and WSNs: data from WSNs is used to update flight plans of UAVs, and WSN operations influence UAV routing paths to optimize data location [19] |
| Energy-aware link-based clustering (EALC) | Using EALC, UAV routing can be improved by addressing two major problems: short flight times and inefficiency in routing [1]. Clustering with K-means density resolves both problems |
| Bio-inspired mobility prediction clustering (BIMPC) | The BIMPC addresses the high mobility of drone networks as well as the ability to quickly change network topologies [31] |

5.2 Non-cluster Based

These routing protocols do not form clusters (Table 4).

6 Recent Research and Developments in FANETs

A number of researchers have proposed algorithms for FANETs to enhance network efficiency and stability. A parametric analysis of some popular FANET protocols proposed or analyzed in the last 4 years is shown in Table 5.

7 Conclusion

As one of the most promising technologies, IoT is being used in many organizations to improve efficiency, customer satisfaction and decision making, and to


Table 3 Deterministic clustering-based FANET routing protocols

| Name | Details |
|---|---|
| Density-based spatial clustering of applications with noise (DBSCAN) | The extended Kalman filter (EKF) method is utilized by DBSCAN to approximate the location of moving objects [9]. Using DBSCAN, multiple mobile agents are tracked with the help of an efficient sensor manager and best-path planning |
| Cluster-based location-aided dynamic source routing (CBLADSR) | The CBLADSR protocol is a heuristic-based, location-aided node-weight routing protocol [27]. CH election and cluster formation are accomplished with the help of a node-weight heuristic algorithm |
| Mobility prediction cluster algorithm (MPCA) | In MPCA, prediction of the dictionary structure is combined with clustering of mobility predictors based on link expiration time [33]. MPCA takes node mobility into account when calculating weights |
| Localization multi-hop hierarchical routing (IMRL) | The IMRL cluster routing algorithm is a fuzzy-based algorithm that improves localization accuracy, energy efficiency and data transmission in comparison with existing solutions [18]. In IMRL routing, fuzzy logic inference is used to calculate the position of UAV nodes according to RSSI values based on weighted centroid localization |
| Traffic differentiated routing (TDR) | TDR is an approach that addresses services that are delay-sensitive and rely on reliability [25] |

Table 4 Non-cluster based FANET routing protocols

| Name | Details |
|---|---|
| Topology-based routing | IP addresses are used to identify nodes in this category of routing protocols, and forwarding packets through the appropriate path is determined by the link information in the network |
| Hierarchical-based routing | Routers are divided into regions in hierarchical routing. Each router keeps detailed information about how packets are routed within its own region but is unaware of the internal structure of other regions |
| Heterogeneous-based routing | There is no difference between mobile and fixed nodes on the ground in their interaction with UAVs |
| Energy-based routing | A network energy-balance algorithm is employed. Despite the presence of dead nodes, it promotes network stability and prolongs network lifespan |
| Swarm intelligence-based routing | First used in cellular robotic systems, swarm intelligence (SI) is a self-organizing system [5]. In intelligence theory, the SI can be considered an optimization algorithm |


Table 5 Parametric analysis of FANETs (last 4 years)

| Paper | Parameters used | Outcomes |
|---|---|---|
| ECAD [23] | (1) Lifetime of network (2) Number of path failures | (1) Extending the network's life (2) Reducing failure rates on paths (3) Increasing packet delivery efficiency |
| Topology-based routing protocols and mobility models for FANETs [30] | (1) UAV communication | (1) The industry is expected to see UAV communications become practical and real soon |
| A novel hybrid secure routing for FANETs [26] | (1) Research models (2) Mobility in flying (3) Efficient hybrid (4) Optimized routing scheme (5) Throughput, communication, packet delivery ratio (6) Dynamic and static network performance analysis | (1) Mobility in FANETs is the focus of this research (2) A hybrid routing scheme that is highly efficient and optimized is presented (3) Does not take into account other factors such as throughput, communication overhead and packet delivery ratio (4) Enhances network performance under static and dynamic conditions (5) Dynamic source routing and optimal link state routing are used in the design |
| Energy conservation techniques for FANETs [24] | (1) Energy consumption (2) Network lifetime (3) Valuable load (4) Energy depletion (5) Controlling transmission power (6) Active cycles of nodes | (1) Consumption of energy has been reduced (2) Extending the network's life (3) Valuable load (4) The transmission power can be controlled in order to reduce energy depletion (5) Allocating weights among nodes (6) Node active cycle reduction (7) FANETs require a high level of energy proficiency (8) High-level communication stability |

(continued)

solve day-to-day problems. FANETs are playing a vital role in many IoT applications these days, both underwater and in remote surface areas. Several research areas are needed to support FANETs and deserve scientists' attention and interest. In this paper, FANET routing protocols have been studied, analyzed and classified in terms of parameters like network lifetime, path failure, packet losses, stability and reliability. Comparative studies reveal that the different routing protocols have their own limitations, characteristics, strengths and suitability. The various routing protocols in FANETs have been classified in terms of clustering and non-clustering in this paper. Researchers and engineers may find


Table 5 (continued)

| Paper | Parameters used | Outcomes |
|---|---|---|
| EMASS: a novel energy, safety and mobility aware-based clustering algorithm for FANETs [2] | (1) Autonomous and efficient UAVs (2) Improve stability and reliability | (1) Ensured accuracy through a unique role (2) Energy consumption of FANETs can be reduced, their lifetime extended, and their topology maintained in a stable manner by EMASS (3) Clustering schemes are inefficient when it comes to stability and network latency for high-mobility UAVs (4) Through improved parameters related to awareness of safety distances and mobility, EMASS improves safety and stability of networks, contributing to lower energy consumption and higher cluster stability (5) In comparison to the BICSF and EALC algorithms, this algorithm improves energy efficiency, packet deliverability, delay and cluster stability (6) The proposed EMASS algorithm is used to manage the clusters of FANETs obtained through the use of an SDN framework |

this classification helpful in choosing the most appropriate routing protocol. From the details studied, we conclude that stability and reliability are the two most important factors for FANETs.

References

1. Aadil F, Raza A, Khan M, Maqsood M, Mehmood I, Rho S (2018) Energy aware cluster-based routing in flying ad-hoc networks. Sensors 18(5):1413. https://doi.org/10.3390/s18051413
2. Aissa M, Abdelhafidh M, Mnaouer AB (2021) EMASS: a novel energy, safety and mobility aware-based clustering algorithm for FANETs. IEEE Access 9:105506–105520. https://doi.org/10.1109/access.2021.3097323
3. Allurwar N (2022) How IoT works–explanation of IoT architecture & layers. IoTDunia. https://iotdunia.com/iot-architecture/ (24 May 2022)
4. Arafat MY, Moh S (2019) A survey on cluster-based routing protocols for unmanned aerial vehicle networks. IEEE Access 7:498–516. https://doi.org/10.1109/access.2018.2885539
5. Beni G, Wang J (1993) Swarm intelligence in cellular robotic systems. In: Robots and biological systems: towards a new bionics?, pp 703–712. https://doi.org/10.1007/978-3-642-58069-7_38
6. Cho A, Kim J, Lee S, Kee C (2011) Wind estimation and airspeed calibration using a UAV with a single-antenna GPS receiver and pitot tube. IEEE Trans Aerosp Electron Syst 47(1):109–117. https://doi.org/10.1109/taes.2011.5705663


7. Chowdhury S, Emelogu A, Marufuzzaman M, Nurre SG, Bian L (2017) Drones for disaster response and relief operations: a continuous approximation model. Int J Prod Econ 188:167–184. https://doi.org/10.1016/j.ijpe.2017.03.024
8. de Freitas EP, Heimfarth T, Netto IF, Lino CE, Pereira CE, Ferreira AM, Wagner FR, Larsson T (2010) UAV relay network to support WSN connectivity. In: International congress on ultra modern telecommunications and control systems. https://doi.org/10.1109/icumt.2010.5676621
9. Farmani N, Sun L, Pack DJ (2017) A scalable multitarget tracking system for cooperative unmanned aerial vehicles. IEEE Trans Aerosp Electron Syst 53(4):1947–1961. https://doi.org/10.1109/taes.2017.2677746
10. George J, PB S, Sousa JB (2010) Search strategies for multiple UAV search and destroy missions. J Intell Robot Syst 61(1–4):355–367. https://doi.org/10.1007/s10846-010-9486-8
11. Haas ZJ, Pearlman MR (2001) ZRP: a hybrid framework for routing in ad hoc networks, pp 221–253. Addison-Wesley Longman Publishing Co., Inc. EBooks. https://dl.acm.org/citation.cfm?id=374547.374554
12. Hayat S, Yanmaz E, Muzaffar R (2016) Survey on unmanned aerial vehicle networks for civil applications: a communications viewpoint. IEEE Commun Surv Tutor 18(4):2624–2661. https://doi.org/10.1109/comst.2016.2560343
13. Hossein Motlagh N, Taleb T, Arouk O (2016) Low-altitude unmanned aerial vehicles-based internet of things services: comprehensive survey and future perspectives. IEEE Internet Things J 3(6):899–922. https://doi.org/10.1109/jiot.2016.2612119
14. Azevedo MIB, Coutinho C, Martins Toda E, Costa Carvalho T, Jailton J (2020) Wireless communications challenges to flying ad hoc networks (FANET). Mobile Comput. https://doi.org/10.5772/intechopen.86544
15. Jawhar I, Mohamed N, Al-Jaroodi J, Zhang S (2013) A framework for using unmanned aerial vehicles for data collection in linear wireless sensor networks. J Intell Robot Syst 74(1–2):437–453. https://doi.org/10.1007/s10846-013-9965-9
16. Khan MA, Safi A, Qureshi IM, Khan IU (2017) Flying ad-hoc networks (FANETs): a review of communication architectures, and routing protocols. In: 2017 first international conference on latest trends in electrical engineering and computing technologies (INTELLECT). https://doi.org/10.1109/intellect.2017.8277614
17. Kopfstedt T, Mukai M, Fujita M, Ament C (2008) Control of formations of UAVs for surveillance and reconnaissance missions. IFAC Proc Vol 41(2):5161–5166. https://doi.org/10.3182/20080706-5-kr-1001.00867
18. Liu K, Zhang J, Zhang T (2008) The clustering algorithm of UAV networking in near-space. In: 2008 8th international symposium on antennas, propagation and EM theory. https://doi.org/10.1109/isape.2008.4735528
19. Martinez-de Dios JR, Lferd K, de San Bernabé A, Núñez G, Torres-González A, Ollero A (2012) Cooperation between UAS and wireless sensor networks for efficient data collection in large environments. J Intell Robot Syst. https://doi.org/10.1007/s10846-012-9733-2
20. Merwaday A, Guvenc I (2015) UAV assisted heterogeneous networks for public safety communications. In: 2015 IEEE wireless communications and networking conference workshops (WCNCW). https://doi.org/10.1109/wcncw.2015.7122576
21. Nadeem A, Alghamdi T, Yawar A, Mehmood A, Siddiqui MS (2018) A review and classification of flying ad-hoc network (FANET) routing strategies. Int J Sci Basic Appl Res (IJSBAR)
22. Naveen S (2016) Study of IoT: understanding IoT architecture, applications, issues and challenges. In: International conference on innovations in computing & networking
23. Oubbati OS, Lakas A, Zhou F, Güneş M, Yagoubi MB (2017) A survey on position-based routing protocols for flying ad hoc networks (FANETs). Veh Commun 10:29–56. https://doi.org/10.1016/j.vehcom.2017.10.003
24. Poudel S, Moh S, Shen J (2020) Energy conservation techniques for flying ad hoc networks. In: The 9th international conference on smart media and applications. https://doi.org/10.1145/3426020.3426024
25. Qi W, Song Q, Kong X, Guo L (2019) A traffic-differentiated routing algorithm in flying ad hoc sensor networks with SDN cluster controllers. J Frankl Inst 356(2):766–790. https://doi.org/10.1016/j.jfranklin.2017.11.012


26. Raj SJ (2020) A novel hybrid secure routing for flying ad-hoc networks. J Trends Comput Sci Smart Technol 2(3):155–164. https://doi.org/10.36548/jtcsst.2020.3.005
27. Shi N, Luo X (2012) A novel cluster-based location-aided routing protocol for UAV fleet networks. Int J Digital Content Technol Appl 6(18):376–383. https://doi.org/10.4156/jdcta.vol6.issue18.45
28. Smartwatches as IoT edge devices (2021) Hoylar. https://www.hoylar.com/smartwatches-asiot-edge-devices/ (21 Dec 2021)
29. Uddin MA, Mansour A, Jeune DL, Ayaz M, Aggoune EHM (2018) UAV-assisted dynamic clustering of wireless sensor networks for crop health monitoring. Sensors 18(2):555. https://doi.org/10.3390/s18020555
30. Wheeb AH, Nordin R, Samah AA, Alsharif MH, Khan MA (2021) Topology-based routing protocols and mobility models for flying ad hoc networks: a contemporary review and future research directions. Drones 6(1):9. https://doi.org/10.3390/drones6010009
31. Yu YL, Ru L, Fang K (2016) Bio-inspired mobility prediction clustering algorithm for ad hoc UAV networks 24:328–337
32. Zafar W, Muhammad Khan B (2016) Flying ad-hoc networks: technological and social implications. IEEE Technol Soc Mag 35(2):67–74. https://doi.org/10.1109/mts.2016.2554418
33. Zang C, Zang S (2011) Mobility prediction clustering algorithm for UAV networking. In: 2011 IEEE GLOBECOM workshops (GC Wkshps). https://doi.org/10.1109/glocomw.2011.6162360

Current Web Development Technologies: A Comparative Review Vibhakar Pathak, Vishal Shrivastava, Rambabu Buri, Shikha Gupta, and Sangeeta Sharma

Abstract This paper highlights current web development technologies in the IT industry and compares them. Developers of web-based applications are confronted with a bewildering array of available formats, languages, frameworks, and technical objects. We investigate, identify, and evaluate technologies for creating web applications and find that, although most of the web's connectivity issues have been resolved, the technology landscape for web-based apps remains confusing because the area lacks a reliable model suited to it. ReactJS is a component-based library intended for building smart user interfaces; it is the most well-known front-end JS library at present. In the Model-View-Controller (MVC) scheme, it constitutes the view (V) layer. Facebook, Instagram, as well as individual independent designers and organisations maintain it. Its main focus is the development of highly complex web-based programmes that can modify their data without reloading the page. Better customer experiences and the advancement of web apps that are lightning fast and robust are its main goals. Like Angular JS, ReactJS can be combined with other JavaScript libraries or frameworks in MVC.

Keywords Back-end development · Front-end development

1 Introduction

Designing and managing a website that is accessible through a web browser and hosted on a server (either on-premise hardware or in the cloud) for the Internet or


Fig. 1 Web development technologies

an intranet is known as web development. This ranges from simple plaintext web pages to sophisticated online apps designed for business and community networks. A web application is a distributed application that runs on more than one computer and communicates with a server or database over a network. A web application provides the ability to update and maintain a programme without distributing and installing software on client computers. Dynamic HTML (DHTML), Extensible HTML (XHTML), and XML are some of the technologies that provide interactive features to web applications. Web servers are also being enhanced to respond to clients' requests in a more flexible way than presenting the same content every time. By bringing website browsing closer to the experience of native mobile applications, Progressive Web Apps (PWAs) advance the web. The PWA is the most recent technology for website building, since it offers a feature set that was previously exclusive to native apps. A progressive web app has characteristics of both a website and a locally installed programme, as shown in Fig. 1. Web development technologies refer to all of the many programming languages and development tools that are used to build dynamic, feature-rich websites and applications. They are basically divided into two categories: front-end and back-end technologies.

Front-end technologies: Front-end technologies are used for the "client side" of your website or application. They are applied to generate the user-facing elements of your website and its interactive features. This covers the fonts, colours, and styles of the text and even the images, buttons, and menus. There are numerous front-end technologies. Some of them are:

1. HTML: HTML stands for Hypertext Mark-up Language. It is composed of the words and phrases that web designers use to build web pages. Hypertext is text that contains links that readers may click to get to a different page or section of the same page. The portions of a page, such as headers and footers, as well as other components, such


as tables and graphics, are defined using mark-up language, which employs tags, or plaintext with special markers. HTML is one of the three essential tools for building websites. The structure of HTML determines how text, images, and other elements will be displayed on the page. Colours, format, and layout are just a few of the aesthetic characteristics that cascading style sheets (CSS) control for these elements. Meanwhile, JavaScript modifies how these components behave.

2. CSS: CSS stands for cascading style sheets. It is a language for creating style sheets that specify the layout and appearance of mark-up language documents, giving HTML extra functionality. Typically, it works with HTML to modify the look and feel of web pages and user interfaces. Any XML document type, including plain XML, SVG, and XUL, may be used with it.

2 Front-End Libraries and Framework

There are several JavaScript-based front-end libraries and frameworks, thanks to the innovation of the V8 engine. We gather usage information from Stack Overflow as well as GitHub, the vast hosting site for Git repositories, in order to identify well-known front-end libraries and frameworks that adhere to industry standards; the usage patterns of various front-end frameworks and tools on GitHub and Stack Overflow can be used to gauge the preferences of front-end developers globally [1]. The graph shown in Fig. 2 demonstrates that React continues to hold the top spot. In terms of popularity, Angular 2 continues to hold the second spot, while Vue and Angular 1 are ranked comparatively lower (Fig. 2).

1. VUE.JS: Evan You created the well-known open-source framework Vue.js in 2014; it is based on JavaScript ES6. Vue.js (or just Vue) is a lightweight JavaScript framework for creating responsive online user interfaces. Vue is a robust toolkit for creating the front end of interactive web apps by extending ordinary HTML and CSS. It is an open-source, progressive JavaScript framework used to create dynamic web interfaces and is one of the well-known frameworks for streamlining web development. Vue focuses on the view layer and may be seamlessly incorporated into large front-end development projects.

2. ANGULAR: With the use of HTML and TypeScript, single-page client apps may be created utilising the Angular platform and framework. Angular is written in TypeScript. As a collection of TypeScript libraries that you load into your apps, it offers both core and optional functionality. Because it has strong opinions about how your application should be organised, Angular is regarded as a framework. Additionally, it offers a


Fig. 2 Front-end framework usage data

lot more "out-of-the-box" capabilities. You may immediately start coding without having to choose routing libraries or make other similar decisions.

3. React Native with React.js: Facebook created the React JavaScript library to improve the user experience on the Facebook and Instagram websites. Due to React's robust capabilities, Facebook made the library available as an open-source JavaScript ES6 library to corporations and developers worldwide [2]. Additionally, in 2015, Facebook introduced React Native, a tool for using React to create mobile applications for popular operating systems like iOS and Android.

Back end

The back end is the server-side element of software that oversees application performance, stores data, and does data analysis. Back-end developers carry out a range of responsibilities in addition to creating APIs and libraries and working with system components, business processes, and data architecture. Back-end development exchanges data, interacts with the front end, and delivers the information as a web page, all without being visible to users. The following are top back-end technologies:

JAVASCRIPT: JavaScript features include dynamic typing, lightweight scripting, and object-oriented programming. It also has a large user base.


Additional tools: MeteorJS, Node.js, and Express. Renowned applications: Google, eBay, Facebook, and Netflix.

PYTHON: Python's characteristics include support for GUI programming, object orientation, portability, and a sizable standard library. Additional tools: Django, Flask, Pyramid, and CherryPy. Renowned applications: Spotify, Pinterest, Instagram, and Google.

RUBY: Features of Ruby include object orientation, adaptability, and expressiveness. Additional tools: Ruby on Rails, Sinatra, Grape, and Padrino. Renowned applications: GitHub, Basecamp, Shopify, and Airbnb.

PHP: Cross-platform interoperability, exception reporting, active community support, and real-time access monitoring are some of the features of PHP. Additional tools: Laravel, CakePHP, Symfony, and CodeIgniter. Renowned applications: WordPress, Yahoo, Mailchimp, and Wikipedia.

JAVA: Java's features include object orientation, portability, platform independence, multithreading, and dynamic class loading. Additional tools: Spring, Grails, Blade, and Dropwizard. Renowned applications: mobile applications, desktop GUI programmes, web-based programmes, and games.

GOLANG: Golang's features include a robust standard library, concurrency, testing support, object-oriented programming, and system programming. Additional tools: Gin, Beego, and Echo. Renowned applications: SoundCloud, Badoo, and Uber.

3 Comparison

Django: Besides being lightweight, Django has a feature-rich and powerful set of capabilities that aid in the development of web apps [3]. Django is a high-level Python web framework that enables the quick creation of safe and dependable websites. Created by seasoned programmers, Django handles much of the pain associated with web development, allowing you to concentrate on developing your app without having


to reinvent the wheel [4]. It is free and open source and has a strong community, excellent documentation, and a variety of free and paid support options.

Laravel: Django is followed by Laravel, an open-source, full-stack PHP framework [4]. Even basic knowledge of PHP is sufficient for working with Laravel. Laravel is loaded with features: a collection of tools and resources is available for creating contemporary PHP apps using this free and open-source framework. In recent years, Laravel's popularity has risen quickly as more developers have chosen it as their go-to framework for an efficient development process, thanks to a robust ecosystem that makes use of its built-in capabilities and a range of compatible packages and extensions.

NodeJS: Node.js is an open-source, cross-platform runtime environment used to create networking and server-side applications [5]. Applications for Node.js are written in JavaScript and run on Linux, OS X, and Microsoft Windows using the Node.js runtime. Node.js is designed to use a single thread with one process at a time and run on a dedicated HTTP server [5]. Applications built using Node.js execute asynchronously and on events. The Node platform does not use the conventional receive, process, transmit, wait, receive cycle for its code. Instead, Node issues small requests one after the other without stopping to wait for answers, processing incoming requests in a continual event loop [6] (Figs. 3 and 4).
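As a quick illustration of the "concentrate on your app" style described above, the following is a minimal Django view wired to a URL route. It is a sketch under assumptions: the view name and route are hypothetical, and a real project would first be generated with django-admin startproject.

```python
# views.py -- a minimal Django view returning JSON (hypothetical app)
from django.http import JsonResponse

def health_check(request):
    # Django hands the view a fully parsed HttpRequest; no manual HTTP handling needed
    return JsonResponse({"status": "ok", "method": request.method})


# urls.py -- wiring the view to a route
from django.urls import path

urlpatterns = [
    path("health/", health_check),
]
```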

Fig. 3 Comparison between web technologies


Fig. 4 Framework of web technologies

4 React.JS in Front-End Development

Prior to the appearance of React, Angular was the only significant competitor in the JS framework field. Angular was a complete and genuine framework, but it was extremely difficult for developers because a great deal of coding could be required [7]. It is safe to say that even experienced JavaScript engineers struggled with the coding and searched for a solution to settle their reservations. Using Angular.JS to create JS applications was never the ideal course of action: it has more features than the majority of developers require [8]. When React.js arrived, it revolutionised the development of web applications. React is not an MVC framework, unlike Angular [9]; just call it a library. State managers, routers, and API managers are thus excluded from the core React.js library [9]. To React engineers it might seem to be a limitation, but this is unquestionably the best aspect of building a website, because the code stays quite straightforward across all components and


different aspects [10]. The purpose of this study is to demonstrate the appropriateness and compactness of the React.js framework for enhancing online applications.

5 Features

1. LIGHTWEIGHT DOM: To enhance performance, React provides a more practical and lightweight document object model. React works on a document object model stored in memory rather than on the DOM created by the browser, so the programme executes quickly and effectively [11]. A significant portion of other web development frameworks interact directly with the browser DOM, performing direct manipulation of the full DOM tree on every page-triggering event. Performance is therefore significantly affected when a large chunk of data needs to be altered. ReactJS, by contrast, employs a concept known as a virtual DOM, and the way it works is rather straightforward.

2. EASY LEARNING CURVE: ReactJS is simple and easy to understand, which helps one become familiar with the framework rapidly. The approach to maintaining data is very fundamental and involves almost no complexity [9]. The design is genuinely simple: using JSX feels completely natural and pleasant, so an engineer gets along with the system without any problem. Initial proficiency in the framework can undoubtedly be achieved without any obstacles or confusion.

3. JSX: The JSX language is fundamentally comparable to XML. Using JSX is not in any way required when developing a React-based application, but it is highly regarded among developers because its shorthand makes development simpler when producing mark-up for components and binding the associated events [1]. Human nature's propensity to choose pleasant and simple procedures spreads the word.

4. PERFORMANCE: ReactJS has a reputation for being a particularly strong performer. This is one of the main things that sets the framework apart from other designs. The virtual DOM component of the framework fundamentally underpins its very effective execution. What happens is that ReactJS stores a virtual document object model in memory. As soon as a change has to be reflected on the currently shown web page, adjustments are first performed to the virtual DOM rather than immediately re-rendering the underlying DOM. After modifications are performed to the virtual DOM, a diff() algorithm is used, which examines both the virtual and the


physical DOMs, and only the important and required nodes of the browser DOM tree are re-rendered, resulting in blazingly quick application execution [11].

5. ONE-WAY DATA FLOW: ReactJS is set up to permit and maintain a downstream, unidirectional data flow. If a bidirectional data flow is needed, additional components must be made. This was done so that components remain constant and the data inside them does not change under arbitrary circumstances. As a result, data flowing from one direction stays consistent. React.js is thus exceptional in that authorised data sources remain synchronised among the components that revolve around them, which makes it one of the best frameworks for supporting dynamic web-based applications. The components using that data typically re-render when an upstream modification is made, in order to reflect the changes; this is why they need to stay in sync with the data streaming downstream. React.js thereby offers an alternative to the equivalent data binding of the standard model-view-controller (MVC) pattern [9].

6. VIRTUAL DOM: The virtual document object model, or virtual DOM for short, is another essential component of ReactJS. It is similar to the document object model created by the application, with the caveat that it is stored in memory. The virtual DOM operates in a very straightforward manner. Whenever a request to modify the page content is made, the changes are first reflected in the in-memory virtual DOM. Then, instead of re-rendering the complete DOM, a diff() computation compares the two (the virtual DOM and the browser DOM), and only the necessary modifications are mirrored to the browser DOM. This fundamentally boosts performance when a large number of data adjustments need to be made.
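The diff() idea described in items 4 and 6 can be illustrated with a deliberately simplified sketch. The following is a toy tree comparison in Python, not React's actual reconciliation algorithm; the node structure and patch names are assumptions made for demonstration.

```python
# Toy virtual-DOM diff: compare two trees and collect the minimal set of patches.
# Each node is a dict: {"tag": str, "text": str, "children": [nodes]}.

def diff(old, new, path="root"):
    patches = []
    if old is None:                      # node exists only in the new tree
        patches.append(("CREATE", path, new))
    elif new is None:                    # node was removed
        patches.append(("REMOVE", path, None))
    elif old["tag"] != new["tag"]:       # different element type: replace the subtree
        patches.append(("REPLACE", path, new))
    else:
        if old.get("text") != new.get("text"):
            patches.append(("SET_TEXT", path, new.get("text")))
        old_kids = old.get("children", [])
        new_kids = new.get("children", [])
        for i in range(max(len(old_kids), len(new_kids))):
            o = old_kids[i] if i < len(old_kids) else None
            n = new_kids[i] if i < len(new_kids) else None
            patches.extend(diff(o, n, f"{path}/{i}"))
    return patches

old = {"tag": "ul", "children": [{"tag": "li", "text": "one", "children": []}]}
new = {"tag": "ul", "children": [{"tag": "li", "text": "two", "children": []},
                                 {"tag": "li", "text": "three", "children": []}]}
print(diff(old, new))   # only the changed text and the added node become patches
```

Only the patches returned by diff() would then be applied to the browser DOM, which is the performance point made above.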

6 Angular Versus React

Nowadays, comparing Angular and React is a well-known topic in the tech community. However, a few other frameworks, like Backbone.JS, Ember.JS, React.JS, Vue.JS, and Angular.JS, are available. JavaScript is one of the most popular and straightforward languages used by professionals today. Many professionals enjoy using scripts to build their projects, applications, and web-related items, but they often struggle to find the appropriate environment or library for their tasks. For most purposes the main contenders are Angular and React.js, but most people struggle to decide which one would be the most appropriate for their task. React.js is easier to learn for beginners because it has fewer concepts, and designers generally want something that is simpler and quicker to develop with [9].


React.JS is a library for creating intuitive user interfaces, not a framework; Angular, naturally, is a complete framework.

7 Advantages

React is a JavaScript package used to create reusable UI components; Facebook delivers it for free [1]. The following are some of the factors that contributed to the decision to choose React for front-end development:

1. Simple to learn, and things can be constructed rapidly.
2. It assists us in building a rich interface, as a poorly designed interface does not look good.
3. Quicker development, which can be used to bring in revenue faster as well. Efficiency is a significant component, and React certainly delivers it.
4. Well-known organisations adopt it, and a huge number of organisations employ React to design their sites. Netflix is one of the striking examples.
5. It has more downloads than Angular and strong community support.
6. It is a hot topic, and everyone is enthused about its progress.

Along with all the advantages, there are some disadvantages, as follows.

8 Disadvantages

1. Since React does not support MVC and also lacks a comprehensive structure, it is necessary to import libraries for state management and routing [12].
2. The fact that React departs from class-based modules can hinder object-oriented programming and limit the freedom of the designers.

9 Conclusion

This study examined how to use React, which is essential for front-end development and web apps. React was clearly the preferred option when comparing the front-end architectures Angular and React [12]. React was thoroughly explored, and its benefits and drawbacks were noted as well. In most instances React should be chosen, as the criteria for framework selection were clearly recognised. The advantages of the virtual DOM in React, how it builds the display of the UI, and how it reduces delay were also examined in further detail. In this paper, three potential frameworks and libraries for front-end development are


addressed, along with potential web application development approaches. React, Angular 2, and Vue may be contrasted in a number of aspects. Web development is the process of creating and maintaining websites; it is the labour done in the background to make a website appear nice, function quickly, and provide a positive user experience. This is done by web developers, or "devs", who use a range of coding languages. In the future, we will broaden the scope of our research to incorporate more front-end development techniques. A web-based portal's back-end layout was demonstrated, as were the duties of a back-end web application developer, which include creating database tables and fields with a particular number of rows. The server-side scripting is his responsibility. His main area of interest is the creation of all server-side definitions and the upkeep of the core database, as well as making sure the front end is responsive and performs well. He might also be in charge of integrating other people's front-end components into the application. In this paper, the functions and duties of an administrator were also described using an actual instance [7].

References

1. https://github.com/search?l=JavaScript&o=desc&q=stars%3A%3E1&s=stars&type=Repositories
2. http://strongloop.com/developers/Node-jsinfographic/?utm_source=ourjs.com#3
3. Complete comparison guide (2020). https://medium.com/front-end-weekly/react-vsangularvs-vue-js-a-complete-comparison-guided16faa185d61
4. Nithya Ramesh B, Amballi AR, Mahanta V (2018) Django the python web framework. (April–June 2018)
5. http://Node.js.org/
6. Shah H, Soomro TR (2017) Node.js challenges in implementation. (May 2017)
7. Top 6 reasons to choose react as front-end development (2018)
8. https://brainhub.eu/blog/reasons-to-choose-react-for-frontend-development/
9. How to use axioms with React (2018)
10. https://www.researchgate.net/publication/332456776_Research_and_Analysis_of_the_Frontend_Frameworks_and_Libraries_in_EBusiness_Development
11. Performance comparison (2019)–Angular vs reacts vs vue.js. https://blog.logrocket.com/angular-vs react-vs-Vue
12. Research and analysis of the front-end frameworks and libraries in e-business development (2019)

R-Peak-Based Arrhythmia Detection as an Impact of COVID-19 Supriya Dubey and Pritee Parwekar

Abstract The COVID-19 pandemic has grown into a global crisis, causing a large number of casualties all over the globe and building unavoidable pressure on healthcare resources across the world. Recovered COVID-19 patients in the 30–50 years age group face breathlessness because of pneumonic issues like lung infections, which in turn lead to cardiac problems masked by the cough and breathlessness. Cardiac arrhythmias are the most prominent condition in patients with COVID-19 infection and persist even after recovery. This paper presents a technique to predict arrhythmia by reading the ECG signal of COVID-recovered patients. The proposed technique is based on the Haar transform (discrete wavelet transform) for detecting the PQRS peak complex from the original ECG signal.

Keywords Electrocardiogram · Discrete wavelet transform · Principal component analysis · Arrhythmias

1 Introduction

The COVID-19 pandemic has grown into a global crisis, causing a large number of casualties all over the globe and building unavoidable pressure on healthcare resources across the world. Recovered COVID-19 patients in the 30–50 years age group face breathlessness because of pneumonic and lung infections, which in turn lead to cardiac problems masked by the cough and breathlessness. Post-COVID cardiac problems have been visible even in healthy people due to the viral syndrome, and can vary from elevated blood pressure and minor palpitations to


Fig. 1 Normal versus abnormal heart rhythm

critical conditions such as arrhythmias, heart attacks, heart failure, acute pulmonary embolism, and other vascular events. An irregular pulse rate sometimes results in a medical condition referred to as cardiac arrhythmia. There are many types of arrhythmias, and each type has a pattern associated with it. Broadly, arrhythmias may be classified into:

• Morphological cardiac arrhythmia, for a single irregular heartbeat.
• Rhythmic arrhythmias, for a collection of irregular heartbeats.

An electrocardiogram (ECG) measures the electrical activity of the heart and helps in diagnosing various heart diseases. These signals are a combination of P, Q, R, S and T-waves, and identification of R-peaks is very helpful for predicting arrhythmia. An electrocardiogram consists of a series of repeating PQRST complexes. Initially, a small peak, the P wave, rises from the baseline signal. The declining wave soon shows a downward deflection tagged as the Q wave. A sharp upright deflection just beyond the Q wave forms a tall peak, the R wave. On its decline, a small downward deflection, known as the S wave, marks the end of one QRS phase of the ECG signal (Fig. 1).

2 Literature Survey

Nonlinear Bayesian filtering is used in [1] to reduce artifacts from noisy electrocardiogram (ECG) signals. Many ECG signals are filtered using this method, which involves adding colored and white Gaussian noise to electrocardiograms


and analyzing the signal-to-noise ratio and the filter outputs. Over a wide range of ECG SNRs, this approach yields good outcomes when compared to other approaches such as adaptive filtering, wavelet thresholding, and bandpass filtering [2]. In [3], an application-specific integrated circuit design for signal recording and peak detection is provided. The use of the DPPM architecture reduces energy usage. For quality enhancement of the reconstructed signal, the signal is also decomposed. The QRS peak is detected and delineated by detecting the individual wave peaks to evaluate a single-lead ECG delineation system deploying the wavelet transform (WT). For processing, the R-peak detection technique is used, and variables that affect performance are used to observe QRS detection properties. Out of the two detectors, SMAP comes out marginally ahead of PMAP [4]. In [5], a filter with adaptive properties is utilized to obtain the peaks of the QRS wave while minimizing mean square error. Noise is present in the primary ECG signal input, and the reference input is noise correlated with an impulse train that coincides with the QRS complexes; this is used to detect arrhythmia problems such as P-wave abnormalities, rhythm disturbances, atrial fibrillation, conduction blockage recognition, and ventricular premature complexes [6]. An algorithm for R-peak detection is introduced in [7] and evaluated on the MIT-BIH database using the QRS spectrum. The advantage of DWT for normal and arrhythmic pulses is demonstrated by detecting R locations in normal and abnormal QRS peaks [1]. The difference operation method is described in [2]. In another study, researchers proposed a new technique for detecting the fetal QRS complex from abdominal maternal ECG data. They split the signal into four groups after segmenting it into consecutive frames. To extract the features and reduce the dimensionality, they employed principal component analysis over a Haar wavelet reconstruction. They used interquartile ranges and resampling to deal with an unbalanced class and outliers while keeping the methodology the same. They trained and evaluated K-nearest neighbor (KNN), support vector machine (SVM), and Bayesian network (BN) classifiers over the PhysioNet/CinC dataset using 10-fold stratified cross-validation of four-electrode ECG signals. The KNN accuracy was 89.59%, whereas the SVM accuracy was 89.19%. Although KNN produced more correct results than SVM, SVM remains time-intensive. It is worth noting that the achieved level of accuracy, as well as the testing time, does not appear sufficient for obtaining the best diagnosis over ECG signals. Another study was carried out on the concept of medical device communication, often known as Health Level Seven (HL7). An ontology defined the ECG waveform information, and the encoding supported HL7. Researchers used an automated diagnosis system to diagnose 37 different cardiac problems using HL7-standard ECG data descriptions and cardiac identification rules. They studied ECG waveform data in a variety of formats, on a variety of devices, and on a variety of computing platforms. Because the ECG waveform data were contained in XML format, the collected ECG data from various devices could be viewed in a standard Internet browser. Their experimental section was designed to test the system's ability to share ECG data in a standard web browser


while avoiding errors and maintaining the diagnosis model's accuracy level. They had zero errors in the sharing approach and a total accuracy of 93% for anomaly identification.

3 Materials and Methods

In this approach, the R-peaks are detected by an adaptation of the discrete wavelet transform algorithm. The algorithm works in three stages: QRS region detection, discarding falsely detected regions, and then obtaining the highest peaks in the QRS regions.

• ECG dataset (recording): The dataset was taken from the MIT-BIH arrhythmia database. This database consists of signals sampled at 360 Hz, recorded by technical experts for durations of 30–60 min using a two-lead arrangement.

• Preprocessing: The dataset is then fed through special filters to normalize and de-noise the captured signals. The wavelet transform is well suited to noisy and aperiodic signals, since wavelets can represent signals at both time and frequency levels. The performance of the system is calculated using parameters like mean square error and signal-to-noise ratio.

• Discrete Wavelet Transform (DWT): Wavelets can represent signals at both time and frequency levels. They are useful for analyzing signals having both low and high frequencies, which yields both long-term and short-term trends. This step is based on multi-level decomposition through a low-pass filter and a high-pass filter. Every stage combines two parts: an approximation part representing the low frequencies and a detail part representing the high frequencies. True Positive denotes a correctly detected R-peak, False Positive a wrongly detected R-peak, and False Negative an undetected R-peak. The DWT can be defined as in Eq. 1:

$$\psi(x) = \sum_{k=-\infty}^{\infty} (-1)^k\, a_{N-1-k}\, \psi(2x - k) \qquad (1)$$

where the $a_k$ are a finite set of coefficients and N is an even integer.

• Feature Extraction through the Hilbert Transform: Feature extraction is the next important step; spectral analysis is performed to obtain essential parameters for detecting arrhythmic heartbeats. The Hilbert transform is prominently used for extracting features when analyzing a real-valued signal. It provides


Fig. 2 Flowchart for the proposed approach

a phase shift of the previously obtained output. The Hilbert transform is useful for analyzing the instantaneous amplitude and frequency of a signal. It can be defined as in Eq. 2:

$$\hat{x}(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{x(k)}{t - k}\, dk \qquad (2)$$

• R-peak detection using PCA: Next, we calculate variances to get the feature vectors for detecting arrhythmias. Then, principal component analysis is deployed for dimensionality reduction of the feature vectors. The estimated variances and eigenvalues help to detect arrhythmia in the ECG signal. The covariance matrix explains the relationship between two signals. Principal component analysis yields a series of p unit vectors representing a group of points in a real coordinate space; the i-th vector gives the direction of the line best fitting the data while being orthogonal to the first i−1 vectors. These directions form an orthonormal basis in which the individual dimensions of the data are not linearly correlated (Fig. 2).
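A minimal sketch of the pipeline above is given below, assuming an ECG trace sampled at 360 Hz (the MIT-BIH rate). It uses PyWavelets for the Haar DWT denoising and SciPy for the Hilbert envelope and peak picking; the threshold rule and wavelet level are illustrative assumptions rather than the tuned values of the proposed system, and the PCA stage is omitted for brevity.

```python
import numpy as np
import pywt
from scipy.signal import hilbert, find_peaks

FS = 360  # MIT-BIH sampling rate, Hz

def denoise_dwt(ecg, wavelet="haar", level=4):
    """Multi-level Haar DWT: soft-threshold the detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate (finest details)
    thr = sigma * np.sqrt(2.0 * np.log(len(ecg)))         # universal threshold (assumed)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]

def detect_r_peaks(ecg):
    clean = denoise_dwt(ecg - np.mean(ecg))
    envelope = np.abs(hilbert(clean))                     # instantaneous amplitude
    # QRS complexes assumed at least ~250 ms apart; the height rule is illustrative
    peaks, _ = find_peaks(envelope, distance=int(0.25 * FS),
                          height=0.5 * np.max(envelope))
    return peaks

# Synthetic demo: a 1 Hz spike train plus noise stands in for a real MIT-BIH record
t = np.arange(10 * FS) / FS
ecg = np.where(np.mod(t, 1.0) < 1.0 / FS, 1.0, 0.0) + 0.05 * np.random.randn(t.size)
r_locs = detect_r_peaks(ecg)
print(f"Detected {len(r_locs)} R-peaks; mean RR interval = "
      f"{np.mean(np.diff(r_locs)) / FS:.2f} s")
```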

4 Result

The essential requirement for clinical diagnosis is accurate features, which must be extracted from noise-free (preprocessed) ECG signals. The proposed approach helps to obtain accurate features in terms of PQRS peaks and


associated information. The following screenshots clearly represent the output of each step for one of the recorded signals. Figure 3 shows the noisy, de-trended signals obtained through high-pass filtering and the proposed approach. Figure 4 represents the trending of the signals obtained in the previous steps.

Fig. 3 Acquired input de-trended signals

Fig. 4 Trended signals


Fig. 5 Identified QRS peaks

Figure 5 demonstrates the detection of the Q, R, and S peaks for a sample chunk of the signal stream. The Q, R and S peaks are essential from a feature-extraction point of view. Figure 6 represents various parameters essential for predicting an irregular heartbeat pattern. The performance of the proposed system is estimated in terms of mean square error (MSE) in the Q, R and S peaks, demonstrated in Fig. 7.

5 Conclusion

In this paper, an approach is proposed for accurately detecting QRS peaks in electrocardiogram signals in order to detect arrhythmic heartbeats in COVID-recovered patients. The information obtained about detected R-peaks is extremely helpful for prediction and detection of heart conditions like tachycardia and cardiac arrhythmia. With this approach, cardiac arrhythmia is predicted from R-peak detection in the ECG signal of the patients, and the actual heart condition of COVID-recovered patients can be predicted. The approach consists of the discrete wavelet transform along with


Fig. 6 Parameters associated with the QRS peaks

Fig. 7 Mean square error in Q, R, and S peaks


principal component analysis, and it helps in predicting the heart condition of COVID-recovered patients.

References

1. Sornmo L, Pahlm O, Nygards M-E (1985) Adaptive QRS detection. IEEE Trans Biomed Eng BME-32(6), June 1985. https://doi.org/10.1109/tbme.1982.324901
2. Thakor NV, Zhu Y-S (1991) Applications of adaptive filtering to ECG analysis: noise cancellation and arrhythmia detection. IEEE Trans Biomed Eng 38(8), Aug 1991. https://doi.org/10.1109/10.83591
3. Yeh Y-C, Wang W-J (2008) QRS complexes detection for ECG signal: the difference operation method. Comput Methods Programs Biomed 91(3), Sept 2008. https://doi.org/10.1016/j.cmpb.2008.04.006
4. Kumari CU, Vignesh NA, Panigrahy AK, Ramya L, Padma T (2019) Fungal disease in cotton leaf detection and classification using neural networks and support vector machine. Int J Innov Technol Explor Eng (IJITEE) 8(10):3664–3667, Aug 2019. https://doi.org/10.35940/ijitee.j9648.0881019
5. Kumari CU, Mounika G, Prasad SJ (2019) Identifying obstructive, central and mixed apnea syndrome using discrete wavelet transform. In: International conference on e-business and telecommunications. Springer, Cham, pp 16–22, March 2019. https://doi.org/10.1007/978-3-030-24322-7_3
6. Pavani T (2017) Synthesis of circular antenna arrays using flower pollination algorithm. J Adv Res Dyn Control Syst 14(Special issue):767–778
7. Kumari CU, Kora P, Meenakshi K, Swaraja K, Padma T, Panigrahy AK, Vignesh NA (2020) Feature extraction and detection of obstructive sleep apnea from raw EEG signal. In: International conference on innovative computing and communications. Springer, Singapore, pp 425–433. https://doi.org/10.1007/978-981-15-1286-5_36

IoT-Based Water Quality Monitoring and Detection System M. Kanchana, P. V. Gopirajan, K. Sureshkumar, R. Sudharsanan, and N. Suganthi

Abstract This paper suggests an Internet of things (IoT) and Arduino-based tele-water-quality monitoring system integrated with a web application. The most critical components of the wireless sensor network include a microcontroller for processing the circuit's sensor data, a communication system for inter- and intra-node communication, and a few sensors such as pH, microbial fuel cell (MFC), humidity, and temperature sensors. All data sensed by the sensors is collected at the Arduino using remote monitoring and Internet of things technology (an ESP8266 Wi-Fi module). The real-time data is transmitted through the cloud and displayed in a human-readable format on a client device (PC/mobile) by the web or mobile application. The main objective is a water monitoring system with high mobility (the ability to check water freely, wherever required), high frequency, and low power (being compact, it can be carried and used quickly). Our system will therefore immensely help people with small water reservoirs and water mines, and villagers/farmers with fields, to become conscious of polluted water and to stop contaminating the water. The parameters deciding whether water is toxic or drinkable are based on the potential of hydrogen (pH) value. If the pH value is less than five, the water is acidic, and if the pH

35

36

M. Kanchana et al.

value is above 7.5, then it is primary. The MFC sensor, if the analogy sensor, senses any electricity from the cathode and anode ends (0–200 indicates acidic water, 200– 300 is normal water, and above 300 is essential to water); by this water, we decide whether water toxic in nature and temperature and humidity also plays a role in determining the quality, it can result in an effect on the level of O2 . Also, the capacity of organisms to resist certain pollutants, so different conditions should have different water temperatures. Keywords Microbial fuel cell · Potential of hydrogen · Internet of things · Microcontroller unit · Water quality monitoring

1 Introduction
Water quality is a dominant factor in the world's ecosystem [1–3]. Pure, clean water sustains a diversity of healthy life and wildlife. Even though it may seem unrelated at first, our behavior toward the land eventually affects water quality. Poor-quality water affects not only aquatic life and agricultural land but the diversity of the whole ecosystem [4, 5]. Water quality covers all the parameters that affect water in the terrain; these parameters may be chemical, physical, or biological. Physical parameters include turbidity and temperature; chemical parameters include dissolved oxygen and pH; biological indicators of water quality include phytoplankton and algae [6, 7]. These parameters apply not only to surface water studies of lakes and oceans but also to industrial processes and drainage to groundwater. Monitoring water quality helps researchers forecast and understand natural processes in the terrain and control human impacts on an ecosystem. Scientists determine the quality of water with various measures: acidity (pH), temperature, particulate matter (turbidity), dissolved solids (specific conductance), hardness, suspended sediment, and dissolved oxygen [8]. IoT-based devices help further by suggesting solutions for complex problems autonomously [9]. Together, these parameters give adequate information about the water quality of a water body. Still, a single parameter reading is less crucial than tracking changes over time. For example, suppose you measure the pH of the creek beside your place and find that it is 5.5; you might think it is acidic, yet a pH of 5.5 may be "normal" for that creek. Even so, something affecting water quality may still be happening (presumably upstream). The following are some significant water quality indicators [10–12]:
. Temperature: Water temperature matters to aquatic plants and irrigation because different plants grow only under certain conditions. Temperature affects the level of oxygen in the water and the ability of organisms to resist solid impurities/pollutants.
. Acidity: The potential of hydrogen (pH) measures the concentration of hydrogen ions in the water; from it we can judge whether the water is acidic, basic, or neutral.
. Turbidity: Turbidity makes the water cloudy or non-transparent. It is the amount of particulate matter (such as clay, silt, tiny organisms, or plankton) suspended in the water.
. Specific Conductance: This measures the ability of water to conduct electricity, which depends on the amount of dissolved solids, such as salt, in the water.
. Hardness: The amount of dissolved magnesium (Mg) and calcium (Ca) in water decides its "hardness." The hardness of water varies throughout the world.

2 Literature Survey
Throughout the research process, we identified several good research papers as the foundation of our project and as a way to learn and build on their research work. Our task is to create a portable monitoring system for water quality, so the research papers considered are purely based on methods that address water quality from different approaches. Research papers from 2015 to 2021 were rigorously examined, and the best approach from these examinations was adopted. A sensing framework based on passive sensor tags using surface acoustic wave (SAW) technology and a cellular code-reuse scheme has been suggested for measuring the potential of hydrogen [13]. A reader in every cell can interrogate many tag sensors simultaneously. The SAW tag sensors are designed to be orthogonal to allow continuous detection over a wide study range, and the communication range of the SAW tag sensors can be extended further through reflectors based on resonant loading of the Inter-Digital Transducer (IDT). Other systems steadily monitor the pH and turbidity of the water [14, 15] and proved important in detecting water contamination/pollutants. Their sensors are connected to an Arduino Uno microcontroller, which examines and processes the data; the data is then sent over Wi-Fi connectivity to the cloud/Internet and to a mobile device for monitoring. A mobile application has also been developed to study the physicochemical parameters, and the results indicate that such a system successfully reads, processes, and transmits data. Another IoT-based method comprises temperature, turbidity, and pH sensors, all assimilated into a mobile application for a water monitoring system, using the Bluetooth standard (IEEE 802.15.1) for transferring the sensed data [16–18]. In contrast, the water quality monitoring system in [19] relies on the test water's turbidity, temperature, and pH. This system uses three sensors (temperature, turbidity, and pH) together with a GSM module and an Arduino; the Arduino converts the sensed data from analog to digital and displays it on the LCD screen in the circuit, and if there is any danger, the GSM module sends an SMS to the person, giving the sensed water quality parameters. Others have integrated many sensors with an Arduino to extract data [20–22], sending the extracted data to an LCD screen and a mobile application [23, 24]; in that project, the sensed data was carried using WiMAX and ZigBee technologies. Wang et al. measured the range of water quality impurities by calculating the signal-to-noise ratio of a SAW system and comparing it to a Total Dissolved Solids (TDS) meter, then measured the NaCl impurity content over a larger concentration range to determine the quality [25]. Derbew et al. suggested a wireless sensor monitoring system for water contamination [26]. Researchers have also lacked knowledge regarding the toxicity, the measurements, and how spilled chemicals travel through the water into the surrounding environment [13]. In [17, 22], various approaches to water quality monitoring (WQM) are discussed, ranging from manual techniques to more cutting-edge systems that use wireless sensor networks (WSNs) for in situ WQM. One study has categorized wastewater according to its physicochemical properties [18]. Olatinwo et al. discussed the application of wireless technologies for the purpose of monitoring water quality [27]. To govern and manage raw water supplies, a technology called the Internet of Water Things was proposed [27]. A system for assessing the level of seawater has also been proposed.

3 Material and Methodology

3.1 Methodology
In this work we have integrated four sensors to identify the water quality [28]: an MFC sensor, a pH sensor, a temperature sensor, and a humidity sensor. These four sensors are connected to the Arduino and NodeMCU via the ADC (Analog-to-Digital Converter). The potential-of-hydrogen sensor is adopted to measure the hydrogen ions present in the water [19, 20, 29], so we can determine whether the water is acidic or basic in nature. The MFC sensor harvests the waste energy given off by microbes during cellular respiration: bacteria absorb nutrients and give off electrons, and the trick is harnessing those electrons and putting them to work (microbes can be found everywhere: in water, soil, mines, etc.). The microbes are deprived of oxygen, because oxygen molecules would otherwise bond with the electrons; oxygen thus plays a critical role in the process, since its absence is essentially what causes the electrons to go where we want them. In this process the water is kept in a chamber with cathode and anode electrode rods at its ends, and the wires connected to the anode and cathode are wired to the Arduino, which allows a limited flow of electrons. Microbes present in the water feed on nutrients and release electrons. These electrons are kept cut off from oxygen: air bubbles in the MFC are reduced using a bubble trap (making the environment oxygen-free), preventing interference with the biosensor functionality. The electrons are therefore passed along the path of least resistance, traveling from the anode electrode to the cathode electrode through the wiring; the voltage increases and reaches its potential, and the Arduino converts the analog signal to digital. A value between 200 and 300 indicates good-quality water, and a value above 300 indicates impure water. As for temperature, water temperature matters to aquatic plants and irrigation because different plants grow only under certain conditions; temperature affects the level of oxygen in the water and the ability of organisms to resist solid impurities/pollutants. The data from all of the above sensors is transmitted to the Internet through a cloud server, and the real-time data is displayed in the web application.
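As a rough illustration of the classification logic just described, the following Arduino-style sketch applies the stated thresholds to the two readings. The pin assignments and the ADC-to-pH calibration are our own assumptions for illustration; only the thresholds come from this section.

```
// Minimal sketch of the threshold logic described above.
// A0/A1 pin choices and the ADC-to-pH mapping are assumptions.
const int PH_PIN  = A0;   // pH sensor analog output (assumed)
const int MFC_PIN = A1;   // MFC cathode/anode voltage (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Map the 0-1023 ADC reading onto a 0-14 pH scale (assumed calibration).
  float ph  = analogRead(PH_PIN) * 14.0 / 1023.0;
  int   mfc = analogRead(MFC_PIN);          // raw analog value

  // pH rule from the paper: below 5 acidic, above 7.5 basic.
  if (ph < 5.0)       Serial.println("pH: acidic");
  else if (ph > 7.5)  Serial.println("pH: basic");
  else                Serial.println("pH: near neutral");

  // MFC rule from the paper: 0-200 acidic, 200-300 normal, >300 impure.
  if (mfc < 200)        Serial.println("MFC: acidic water");
  else if (mfc <= 300)  Serial.println("MFC: normal water");
  else                  Serial.println("MFC: impure water");

  delay(2000);                              // sample every 2 s
}
```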

3.2 System Architecture
Figure 1 shows the system architecture of the proposed work. Here, power comes from an adapter (12 V, 1 A) to the Arduino board. We integrate four sensors (pH, temperature, MFC, and humidity sensors) linked to the Arduino (microcontroller), so the data sensed by the sensors is converted from analog to digital values by the Arduino board itself. The Arduino board is connected to the NodeMCU (ESP8266 Wi-Fi module), which transmits the data to the cloud and then to the web application (monitoring unit).

Fig. 1 System architecture
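The Arduino-to-NodeMCU link in Fig. 1 is a plain serial (TX/RX) connection. A minimal sketch of the Arduino side of that link is shown below; the pin choices and the CSV record format are our own illustrative assumptions, not taken from the paper.

```
// Arduino side of the architecture in Fig. 1: read the four analog
// sensors and forward one CSV record per cycle over Serial (TX) to
// the NodeMCU. Pin assignments are assumed for illustration.
void setup() {
  Serial.begin(9600);                  // UART link to the NodeMCU
}

void loop() {
  int ph       = analogRead(A0);       // pH sensor (assumed on A0)
  int mfc      = analogRead(A1);       // MFC electrodes (assumed on A1)
  int temp     = analogRead(A2);       // temperature sensor (assumed on A2)
  int humidity = analogRead(A3);       // humidity sensor (assumed on A3)

  // One comma-separated record per reading cycle, e.g. "512,275,330,410".
  Serial.print(ph);       Serial.print(",");
  Serial.print(mfc);      Serial.print(",");
  Serial.print(temp);     Serial.print(",");
  Serial.println(humidity);

  delay(1000);                         // one record per second
}
```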


4 Implementation

4.1 Sensor Interfacing
A 12 V, 1 A adapter supplies power to the project; it is connected to the Arduino, which in turn provides power to all the other sensors (every sensor is connected to the 5 V input on one end and to the Voltage Common Collector (VCC) on the other). For all sensors, the sensor GND is connected to the power GND on the Arduino. Each sensor's analog data wire is connected to the Arduino inputs A0 to A5, where the analog data is converted into digital data. From the digital PWM pins we send data to an LCD breakout board (to display instant readings), and through TX and RX the data is transferred to the NodeMCU (ESP8266MOD Wi-Fi module); on the ESP8266 module the GND pin is likewise connected to ground, and a voltage regulator is attached to the ESP8266 module to supply a constant 5 V. Table 1 describes the digital pins of the Arduino and the other sensors, giving information about each pin and the type of connection made.

Table 1 Description of digital pins in Arduino

Digital I/O pin number | Name of the pin | Type of connection | Digital pin description
1 | 5 V | I/P | Input power supply for all sensors
2 | Ground | - | Every sensor is grounded
3 | Tx Digital | O/P | Output data, Transistor-Transistor Logic level; the data-transmitting wire from the Arduino to the ESP8266MOD module
4 | Rx Digital | I/P | Input data, Transistor-Transistor Logic level; connected to the pH sensor and LCD
5 | Cathode electrode | O/P | Data sent to the analog input of the Arduino
6 | Discharge air temp (from humidity and temperature sensor) | O/P | Data sent to the analog input of the Arduino
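The LCD side of the wiring above can be driven with the standard Arduino LiquidCrystal library. The sketch below shows the idea; the six digital pins chosen for the LCD and the analog pins for the sensors are hypothetical, since the exact breakout wiring is not specified.

```
#include <LiquidCrystal.h>

// Hypothetical LCD wiring: RS, E, D4-D7 on digital pins 7,8,9,10,11,12.
LiquidCrystal lcd(7, 8, 9, 10, 11, 12);

void setup() {
  lcd.begin(16, 2);                 // 16x2 character LCD
}

void loop() {
  int ph  = analogRead(A0);         // raw pH reading (assumed pin)
  int mfc = analogRead(A1);         // raw MFC reading (assumed pin)

  lcd.setCursor(0, 0);
  lcd.print("pH raw: ");
  lcd.print(ph);
  lcd.setCursor(0, 1);
  lcd.print("MFC raw: ");
  lcd.print(mfc);
  delay(1000);                      // refresh the instant readings
}
```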


Fig. 2 Complete sensor integration

4.2 Microcontroller Programming
The data from the MFC, pH, and temperature sensors connected to the ESP8266 Wi-Fi module is read on the ESP8266 using packages such as DHT and OneWire, and the ESP8266 module connects directly to Wi-Fi [10, 13]. The data is then transmitted over the Internet to the cloud server. The microcontroller is programmed in Embedded C in the Arduino Uno IDE software. In the ESP8266 code we provide the SSID and password, so the module connects directly to the Wi-Fi network, as shown in Fig. 2.
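A minimal sketch of this connection step is shown below. The library headers and Wi-Fi calls are standard ESP8266 Arduino-core usage; the SSID and password are placeholders.

```
#include <ESP8266WiFi.h>   // ESP8266 Arduino-core Wi-Fi library
#include <DHT.h>           // DHT humidity/temperature sensor package
#include <OneWire.h>       // OneWire bus, e.g. for a waterproof temperature probe

const char* ssid     = "your-ssid";      // placeholder credentials
const char* password = "your-password";

void setup() {
  Serial.begin(9600);
  WiFi.begin(ssid, password);            // join the local Wi-Fi network
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);                          // wait until the connection is up
    Serial.print(".");
  }
  Serial.println("\nWi-Fi connected");
}

void loop() {
  // Sensor reads and the cloud upload would go here (see Sect. 4.3).
}
```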

4.3 Transmitting Sensed Data to the Internet and Projecting the Sensed Data in the Web App
The sensed real-time data is uploaded to a cloud server over the Internet using libraries such as ESP8266HTTPClient.h, WiFiClient.h, and SoftwareSerial.h [30]. This is accomplished by creating an Application Programming Interface (API) key, which allows the application to pass the data to, and exchange it with, external software components and operating systems only through that server on the Internet. The web application is created with HTML, PHP, CSS, and JavaScript, and a MySQL database is used to store the data. Table 2 shows the sensed data stored in the database.
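A hedged sketch of the upload step follows. The library calls are standard ESP8266HTTPClient usage, but the endpoint URL, the API-key parameter name, and the field names are illustrative assumptions; the paper does not specify the server's interface.

```
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

// Hypothetical endpoint and API key.
const char* serverUrl = "http://example.com/api/update";
const char* apiKey    = "YOUR_API_KEY";

void sendReadings(float ph, int mfc, float temp, float humidity) {
  WiFiClient client;
  HTTPClient http;

  // Pass the readings as URL query parameters, authorized by the API key.
  String url = String(serverUrl) + "?api_key=" + apiKey +
               "&ph=" + String(ph) + "&mfc=" + String(mfc) +
               "&temp=" + String(temp) + "&humidity=" + String(humidity);

  http.begin(client, url);   // open the HTTP connection
  int code = http.GET();     // send the request to the cloud server
  if (code > 0) {
    Serial.printf("Server responded: %d\n", code);
  }
  http.end();                // release the connection
}
```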


Table 2 Sensed data stored in database

5 Results and Discussion
This water quality monitoring device uses four sensors. The sensor data is converted from analog to digital signals by the NodeMCU. Information is sent to the ESP8266 Wi-Fi module through the TX pin of the Arduino, and the module transmits the data through the local network to the cloud server and on to the web application, where the result can be viewed. The deciding parameters for whether the water is toxic are as follows: a potential-of-hydrogen value below 5 is acidic, and above 7.5 is basic; for the MFC sensor, the analog value sensed between the cathode and anode ends classifies the water (0–200 indicates acidic water, 200–300 normal water, and above 300 impure water). Temperature and humidity also play a role in determining quality, since they affect the level of O2 and the capacity of organisms to resist certain pollutants, so different conditions call for different water temperatures. Figure 3 shows the real-time data on the LCD screen while experimenting with normal water. Figure 4 lists the values of the various parameters: the pH value is 6, the MFC value is 275, and temperature and humidity are also within favorable conditions. The tested water is regular tap water, and the parameter values show that the water is of good quality. Figure 5 shows the acidic solution, where the pH value is 1 and the MFC value is 175; even though temperature and humidity are in favorable conditions, the water is abnormal because the MFC and pH values are out of range.

6 Conclusion
A low-cost, satisfactory, real-time water quality monitoring structure has been implemented and analyzed. With this project, people can keep track of the level of contamination taking place in water mines and get real-time alerts. Through this process we can avoid dangerous diseases caused by contaminated water and by metals ionizing in the water, and immediate steps can be taken to restrain further contamination or pollutants, for example in a village's reservoir or irrigation water mines. This system can be easily and quickly deployed as a monitoring system near the targeted region, and a single trained individual can accomplish the main challenge of monitoring by using the Internet of Things and its services.

Fig. 3 Real-time data on the LCD screen while experimenting with normal water

Fig. 4 Result shown in the web application while testing with normal water

Fig. 5 Result shown in the web application while testing with acidic water

References 1. Bhateria R, Jain D (2016) Water quality assessment of lake water: a review. Sustain Water Resour Manag 2(2):161–173. https://doi.org/10.1007/s40899-015-0014-7 2. Leigh C et al (2019) A framework for automated anomaly detection in high frequency waterquality data from in situ sensors. Sci Total Environ 664:885–898. https://doi.org/10.1016/j.sci totenv.2019.02.085 3. Tyagi S, Sharma B (2014) Water quality assessment in terms of water quality index water quality assessment in terms of water quality index water quality assessment in terms of water quality index. Am J Water Resour 2013 1(3):34–38. https://doi.org/10.12691/ajwr-1-3-3 4. Online IP, Ahmad S, Shah S, Bari F, Saeed K, Akhtar N (2016) Assessment of water quality of River Bashgal, vol 8, no 6, pp 14–25 5. Rawat KS, Singh SK (2018) Water Quality Indices and GIS-based evaluation of a decadal groundwater quality. Geology Ecology Landsc 2(4):240–255. https://doi.org/10.1080/247 49508.2018.1452462 6. Tung TM, Yaseen ZM (2020) A survey on river water quality modelling using artificial intelligence models: 2000–2020. J Hydrol (Amst) 585:2020, May 2020. https://doi.org/10.1016/j. jhydrol.2020.124670. 7. das Kangabam R, Bhoominathan SD, Kanagaraj S, Govindaraju M (2017) Development of a water quality index (WQI) for the Loktak Lake in India. Appl Water Sci 7(6):2907–2918. https://doi.org/10.1007/s13201-017-0579-4 8. Adimalla N, Qian H (2019) Groundwater quality evaluation using water quality index (WQI) for drinking purposes and human health risk (HHR) assessment in an agricultural region of Nanganur, south India. Ecotoxicol Environ Saf 176(126):153–161. https://doi.org/10.1016/j. ecoenv.2019.03.066 9. Sasikumar S, Naveen Raju D, Gopirajan PV, Sureshkumar K, Pradeep R (2022) Deep learning and internet of things (IOT) based irrigation system for cultivation of paddy crop, vol 434. https://doi.org/10.1007/978-981-19-1122-4_35 10. Shanmugasundharam A, Kalpana G, Mahapatra SR, Sudharson ER, Jayaprakash M (2017) Assessment of Groundwater quality in Krishnagiri and Vellore Districts in Tamil Nadu, India. Appl Water Sci 7(4):1869–1879. https://doi.org/10.1007/s13201-015-0361-4 11. Gohin F et al (2019) Twenty years of satellite and in situ observations of surface chlorophyll-a from the northern Bay of Biscay to the eastern English Channel. Is the water quality improving? Remote Sens Environ 233(Sept):111343. https://doi.org/10.1016/j.rse.2019.111343 12. Sagan V et al (2020) Monitoring inland water quality using remote sensing: potential and limitations of spectral indices, bio-optical simulations, machine learning, and cloud computing. Earth Sci Rev 205:103187. https://doi.org/10.1016/j.earscirev.2020.103187 13. Suresh K et al (2022) Simultaneous detection of multiple surface acoustic wave sensor tags for water quality monitoring utilizing cellular code-reuse approach. IEEE Internet Things J 9(16):14385–14399. https://doi.org/10.1109/JIOT.2021.3082141


14. Amruta MK, Satish MT (2013) Solar powered water quality monitoring system using wireless sensor network. In: 2013 International mutli-conference on automation, computing, communication, control and compressed sensing (iMac4s), 2013, pp 281–285. https://doi.org/10.1109/ iMac4s.2013.6526423 15. Bhardwaj RM (2011) Overview of ganga river pollution, Delhi 16. Yadav NK (2012) CPCB’s real time water quality monitoring. https://www.cseindia.org/cpcbsreal-time-water-quality-monitoring--4587. Accessed 13 Dec 2022 17. Adu-Manu K, Tapparello C, Heinzelman W, Katsriku F, Abdulai J-D (2017) Water quality monitoring using wireless sensor networks: current trends and future research directions. ACM Trans Sens Netw 13:1–41. https://doi.org/10.1145/3005719 18. Yue R, Ying T (2011) A water quality monitoring system based on wireless sensor network & solar power supply. In: 2011 IEEE international conference on cyber technology in automation, control, and intelligent systems, pp 126–129 19. Dinh TL, Hu W, Sikka P, Corke P, Overs L, Brosnan S (2007) Design and deployment of a remote robust sensor network: experiences from an outdoor water quality monitoring network. In: 32nd IEEE conference on local computer networks (LCN 2007), 2007, pp 799–806. https:// doi.org/10.1109/LCN.2007.39 20. Qiao T, Song L (2010) The design of multi-parameter online monitoring system of water quality based on GPRS. In: 2010 international conference on multimedia technology, 2010, pp 1–3. https://doi.org/10.1109/ICMULT.2010.5631313 21. He D, Zhang L-X (2012) The water quality monitoring system based on WSN. In: 2012 2nd international conference on consumer electronics, communications and networks (CECNet), 2012, pp 3661–3664. https://doi.org/10.1109/CECNet.2012.6201666 22. World Health Organization (2010) Hardness in drinking-water: background document for development of WHO guidelines for drinking-water quality. World Health Organization, Geneva. https://apps.who.int/iris/handle/10665/70168 23. To˘gaçar M, Cömert Z, Ergen B (2021) Intelligent skin cancer detection applying autoencoder, MobileNetV2 and spiking neural networks. Chaos Solitons Fractals 144:110714. https://doi. org/10.1016/J.CHAOS.2021.110714 24. Elazhary H (2019) Internet of Things (IoT), mobile cloud, cloudlet, mobile IoT, IoT cloud, fog, mobile edge, and edge emerging computing paradigms: Disambiguation and research directions. J Netw Comput Appl 128, no. June 2018, pp. 105–140, 2019, doi: https://doi.org/ 10.1016/j.jnca.2018.10.021. 25. Wang Y, Chen D, Liu W, Chen X, Liu X, Xie J (2018) An AlN based SAW device for detection of impurities in water based on measuring the signal-to-noise ratio. In: 2018 IEEE 13th annual international conference on nano/micro engineered and molecular systems (NEMS), 2018, pp 90–93. https://doi.org/10.1109/NEMS.2018.8556937 26. Derbew Y, Libsie M (2014) A wireless sensor network framework for large-scale industrial water pollution monitoring. In: 2014 IST-Africa conference proceedings, 2014, pp 1–8. https:// doi.org/10.1109/ISTAFRICA.2014.6880619 27. Olatinwo SO, Joubert T-H (2019) Enabling communication networks for water quality monitoring applications: a survey. IEEE Access 7:100332–100362. https://doi.org/10.1109/ACC ESS.2019.2904945 28. Abbasian Dehkordi S, Farajzadeh K, Rezazadeh J, Farahbakhsh R, Sandrasegaran K, Abbasian Dehkordi M (2020) A survey on data aggregation techniques in IoT sensor networks. Wirel Netw 26(2):1243–1263. https://doi.org/10.1007/s11276-019-02142-z 29. 
Petus C, Waterhouse J, Lewis S, Vacher M, Tracey D, Devlin M (2019) A flood of information: using Sentinel-3 water colour products to assure continuity in the monitoring of water quality trends in the Great Barrier Reef (Australia). J Environ Manag 248:109255. https://doi.org/10. 1016/j.jenvman.2019.07.026 30. Silva S, Nguyen HN, Tiporlini V, Alameh K (2011) Web based water quality monitoring with sensor network: employing ZigBee and WiMax technologies. In: 8th International conference on high-capacity optical networks and emerging technologies, 2011, pp 138–142. https://doi. org/10.1109/HONET.2011.6149804

A Comparative Analysis of Different Diagnostic Imaging Modalities Vivek Kumar, Shobhna Poddar, Neha Rastogi, Kapil Joshi, Ashulekha Gupta, and Parul Saxena

Abstract The likelihood of disease prevention and effective treatment has increased because of the development of medical imaging. Medical imaging can offer high-resolution scans of each organ or tissue for diagnosis and detection. The imaging approach produces diagnostic results that are less vulnerable to human error. For an appropriate diagnosis, diagnostic imaging may need to combine several different imaging modalities. In this research paper, several strategies are reviewed and put side by side for comparison. The techniques of concern are X-ray radiography, MRI, ultrasonography, elastography, and optical imaging.

Keywords Radiography · X-ray · Optical imaging · Ultrasonography · MRI · Elastography

V. Kumar · K. Joshi (B)
Department of Computer Science, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, Uttarakhand, India
e-mail: [email protected]

S. Poddar
Om Sterling Global University, Hisar-Chandigarh Road, Hisar, Haryana, India

N. Rastogi
Uttaranchal Institute of Management, Uttaranchal University, Dehradun, India

A. Gupta
Department of Management Studies, Graphic Era (Deemed to be University), Dehradun, Uttarakhand, India

P. Saxena
Department of Computer Science, Soban Singh Jeena University, Campus Almora, Almora, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_5

1 Introduction
Today, medical images are crucial in the detection and diagnosis of many disorders. Medical imaging offers direct ways to visualize a cut through the human body and observe the smallest biochemical and anatomical alterations, characterized through a variety of physical and biological parameters, from anatomical details and functional operations to molecular and cellular expressions. Despite being informative, medical imaging typically requires skilled medical professionals to evaluate the information contained in the images [1]. The living conditions and lifestyles of individuals have undergone significant changes in recent years as a result of China's economic and social development. Chronic diseases, senile diseases, obesity, and sub-health issues are on the rise, and health issues are becoming more prevalent [2]. A notion of holistic health therefore needs to be established immediately in order to address these issues; the term refers to a holistic, all-encompassing approach to life that prioritizes not just the physical and emotional well-being of an individual but also their relationships with others, the environment, and their families and communities [3]. Figure 1 displays the process flow typical of most contemporary medical imaging systems. It begins with the image acquisition stage, in which a raw image is acquired or reconstructed using a CT, MR, PET, US, or other form of scanner; this stage involves a variety of sources, geometries, and detectors, among other properties [4]. The created images are subsequently sent to the image processing stage, the next step in the process flow, where the images are preconditioned for the purpose for which they will be used. Filtering to improve the image, anatomical/feature segmentation, image registration, merging several images into vector images, and volume rendering are a few examples of image processing operations. The output is then sent to the application stage, where doctors, surgeons, engineers, scientists, and students use the image for their individual application needs. The use of measurements and image data gathered from scanners, together with subsequent post-processing, is an example of an application in this category that can be used to direct interventional techniques in the treatment of trauma and disease.

Fig. 1 Process flow diagram used by the majority of contemporary medical imaging programs and systems


To view inside the patient, a number of different procedures might be applied. These methods depend on a signal passing through the patient and interacting with the patient's tissues. By detecting the signal exiting the body, an internal image of the patient can be produced. X-ray radiography, MRI, ultrasonography, elastography, and optical imaging are the approaches of interest in this research [5]. The final section of the study is devoted to a comparison of these methods, which have a great chance of being useful in the medical field in the upcoming months and years.

2 X-Ray Radiography
This is the most common and significant diagnostic imaging technique. Like ultrasonography, it is a minimally invasive technology; it produces a latent representation of the internal body architecture on an X-ray film using very little ionizing radiation from an X-ray machine [6]. An X-ray machine may be static, mobile, or portable. Radiography involves directing X-rays at the body so that they pass through the intended organ before being collected on a flat piece of X-ray film to produce a 2D image [7]. Breast tumors, cancer, and tuberculosis are just a few of the illnesses that radiography can be used to diagnose. Excessive X-ray exposure carries some dangers, including the possibility of skin burns, hair loss, and cancer; however, the advantages typically outweigh the disadvantages (Fig. 2).

Fig. 2 X-ray radiography

3 Magnetic Resonance Imaging (MRI)
MRI is a non-invasive method of visualizing the interior organization and some functions of the body. It uses electromagnetic radiation that is not ionizing and does not seem to pose any exposure risks. While the subject is exposed to carefully controlled magnetic fields, radio-frequency radiation is used to produce high-quality cross-sectional images of the body in any plane [8]. The MR image is produced by placing the subject inside a large magnet, which generates a rather powerful external magnetic field; as a result, the nuclei of several atoms in the body, including hydrogen, align with the magnetic field. After the introduction of an RF signal, the energy released from the body is detected and processed by a computer to produce an MRI image [9]. The whole-body configuration of an MRI system as it is currently configured is depicted in Fig. 3. It consists of a magnet, a set of RF coils that house the subject, a series of magnetic-field gradient coils, and a component that powers the coils and analyzes the signal picked up. The former elements, which consist of many coil kinds, are known as the "magnetic subsystem." The last part could be referred to as an "electric subsystem," because it processes the recorded MR signals and drives power and control signals to the coils [10].

Fig. 3 System for whole-body MRI


4 Ultrasonography
Ultrasonography is a diagnostic technique that creates medical images utilizing megahertz-range, high-frequency, wideband sound waves that differ in how they reflect off tissue. When diagnosing abdominal illnesses, ultrasonography is still the imaging technique of choice [11]. Ultrasonography is non-invasive, generally accessible, affordable, risk-free, and, if necessary, simple to carry out on a regular basis. Transabdominal ultrasonography, a "real-time" imaging method, provides an initial general overview before localizing the "area of interest" to conduct a thorough review and ultimately identify the disease's root cause. Additionally, it can lessen the need for expensive and labor-intensive diagnostic techniques such as CT, MR, pancreatography, endoscopic ultrasonography, and others. Internal bodily structures, such as tendons, muscles, joints, arteries, and internal organs, can be seen with ultrasonography [12]. Sonograms, another name for ultrasound images, are created by delivering ultrasonic pulses to tissue with a probe equipped with a transducer that exploits the piezoelectric effect; as a rule, a lower frequency results in greater tissue penetration but lower potential imaging resolution [13]. Different-shaped fields of view are produced depending on the probe's arrangement and form. The tissue reflects the sound to varying degrees, creating an echo effect, and the operator sees images of these recorded echoes. Since bone blocks ultrasound, it can only be used if the lesion has a bony defect that ultrasonic waves can pass through.

5 Elastography
Recently, there has been growing interest in using elastography techniques to assess the elasticity of various tissues. Early illness diagnosis is enhanced by the capacity to quantify changes in tissue stiffness utilizing elasticity imaging techniques. Elastography is currently utilized in clinical practice to evaluate the liver, breasts, prostate, thyroid, and musculoskeletal system. It is also quite affordable and offers real-time assessments of tissue stiffness [14]. Magnetic resonance elastography (MRE) has recently evolved into a normative clinical tool for staging liver fibrosis on 1.5 T and 3 T scanners [15]. Depending on the preference of the site, MRE can be obtained with or without a contrast agent on board, allowing it to be performed at any time during a typical clinical examination. The liver MRE method can be broken down into the following four components:
. Employing an external mechanical motor to introduce shear wave motion into the liver.
. Using phase-contrast imaging and motion encoding to obtain wave images of the liver [16].
. Applying an inversion method to the wave images to create stiffness maps.
. Using the liver stiffness maps, reporting average stiffness values in ROIs found to have high wave quality and no artifacts.


This paper examines the MRE fundamentals and offers recommended practices for routine clinical application. The MRE technique has a unique advantage for doctors in that all major MR manufacturers have adopted the same hardware and inversion algorithm, and good repeatability has been shown.

6 Optical Imaging
A non-invasive approach called optical imaging makes use of light to display how cells and molecules function within a living organism. Optical imaging is thought to be a powerful tool in deep tissues, where light spreads out diffusely [17]. Contrast is produced by endogenous molecules with optical fingerprints or by the application of exogenous agents that generate signal. The interaction of light with various tissue components makes it possible to visualize tissue anomalies or pathologic processes. An optical imaging device is displayed in Fig. 4 [18]. The illuminator was a xenon arc lamp coupled with a low bandpass filter centered at 440 nm, with a full width at half maximum of 10 nm. The samples were illuminated at about a 50° oblique angle from the sample plane, and the geometry was adjusted to give the imaged field consistent illumination. A CCD camera with a 0.5X Rodenstock lens affixed was used to capture the images [19]. Linearly polarizing filters were positioned in the paths of the light incident on the sample and the light collected by the camera to enable polarization imaging. With the analyzing polarizer positioned parallel and perpendicular to the polarization of the incident light, respectively, reflection co- and cross-polarized images were captured. The system's field of view is 2.8 cm by 2.5 cm, and its lateral resolution is around 15 µm. Less than half a second was needed to acquire optical wide-field images [20].

Fig. 4 System for optical imaging

7 Comparison of Several Medical Imaging Techniques
The first criterion to compare [21] is image quality, which may be summed up, as in Table 1, in terms of spatial resolution and contrast. The second is the availability of the system, illustrated by the price of the system and the accessibility of real-time data, as given in Table 2. The third criterion, safety, can be illustrated by whether ionizing radiation affects the patient and by how heating affects the body, as summarized in Table 3 [22, 23].

Table 1 Comparison of the various medical imaging techniques' image quality

Imaging methods | Spatial resolution | Good contrast
Radiography | 0.9 mm | Soft tissues and fluid
MRI | 0.4 mm | Hard and soft tissue
Ultrasonography | 0.9 mm | Soft tissues
Elastography | 199 µm | Soft tissues
Optical | 99 nm | Soft tissues

Table 2 Comparison of the various medical imaging methods' system availability

Imaging methods | Cost | Real-time information
Radiography | Medium | No
MRI | High | No
Ultrasonography | Low | Yes
Elastography | Medium | Yes
Optical | Low | No

Table 3 Comparison of the safety of several medical imaging techniques

Imaging methods | Ionizing radiation | Heating
Radiography | Yes | Low
MRI | No | Medium
Ultrasonography | No | Negligible
Elastography | No | Low
Optical | No | Medium
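To show how Tables 1–3 can be used together, the short sketch below (our own illustration, not part of the paper) encodes each modality's rows as one record and filters for techniques that provide real-time information without ionizing radiation:

```
#include <iostream>
#include <string>
#include <vector>

// One record per modality, combining its rows from Tables 1-3.
struct Modality {
    std::string name;
    std::string resolution;   // Table 1: spatial resolution
    std::string cost;         // Table 2: Low / Medium / High
    bool realTime;            // Table 2: real-time information
    bool ionizing;            // Table 3: ionizing radiation
    std::string heating;      // Table 3: Low / Medium / Negligible
};

int main() {
    std::vector<Modality> table = {
        {"Radiography",     "0.9 mm", "Medium", false, true,  "Low"},
        {"MRI",             "0.4 mm", "High",   false, false, "Medium"},
        {"Ultrasonography", "0.9 mm", "Low",    true,  false, "Negligible"},
        {"Elastography",    "199 um", "Medium", true,  false, "Low"},
        {"Optical",         "99 nm",  "Low",    false, false, "Medium"},
    };

    // Example query: real-time techniques with no ionizing radiation.
    for (const auto& m : table) {
        if (m.realTime && !m.ionizing) {
            std::cout << m.name << " (resolution " << m.resolution
                      << ", cost " << m.cost << ")\n";
        }
    }
    return 0;   // prints Ultrasonography and Elastography
}
```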


8 Conclusion
A thorough comparison of the various medical imaging techniques has been offered. The techniques of concern are optical imaging, MRI, ultrasound, elastography, and X-ray radiography. These methods have been contrasted from the points of view of image quality, safety, and system availability. The discussion demonstrates that none of these methods is universally reliable in all medical applications.

References 1. Rodriguez JH, Fraile FJC, Conde MJR, Llorente PLG (2016) Computer aided detection and diagnosis in medical imaging: a review of clinical and educational applications. In: International conference on technological ecosystems for enhancing multiculturality, pp 517–524 2. Verma A, Sharma B (2010) Comparative analysis in medical imaging. Int J Comput Appl 1(13):88–93 3. Cui X, Cui X, Liu J, Cheng M, Zhang X (2022) Research on virtual simulation experiment teaching system of integrated medicine based on IPE concept. In: International conference on county economic development, rural revitalization and social sciences, vol 650, pp 5–9 4. Robb RA (2008) Medical imaging and virtual reality: a personal perspective. Virtual Reality, vol 12, pp 235–257 5. Roobottom A, Mitchell G, Hughes GM (2010) Radiation-reduction strategies in cardiac computed tomographic angiography. Clin Radiol 65(11):859–867 6. Umar AA, Atabo SM (2019) A review of Imaging techniques in scientific research/clinical diagnosis. MedCrave step into the world of research, vol 6, pp 175–183 7. Morigi MP, Albertin F (2022) X-ray digital radiography and computed tomography. J Imaging 8 8. Katti G, Ara SA, Shirenn A (2011) Magnetic resonance imaging (MRI)–a review. Int J Dental Clin 3:65–70 9. Moser E, Stadlbauer A, Windischberger C, Quick HH, Ladd ME (2009) Magnetic resonance imaging methodology. Eur J Nucl Med Mol Imaging 36:30–41 10. Kose K (2021) Physical and technical aspects of human magnetic resonance imaging: present status and 50 years historical review. Adv Phys 6 11. Dimcevski G, Erchinger FG, Havre R (2013) Odd Helge Gilja: ultrasonography in diagnosing chronic pancreatitis: new aspects. World J Gastroenterol 19:7247–7257. 12. Caglayan F, Bayrakdar IS (2018) The intraoral ultrasonography in dentistry. Niger J Clin Pract 21:125–133 13. Almutairi FF, Abdeen R, Alyami J, Sultan SR (2022) Effect of depth on ultrasound point shear wave elastography in an elasticity phantom. Appl Sci 12 14. Nanjappa M, Bolster B, Jin N, Kannengießer S, Sellers R, Kolipaka A (2022) Magnetic resonance elastography of the liver: best practices. Abdom Imaging 80 15. Saouli A, Mansour K (2011) Application of the finite elements method in optical medical imaging. In: Mediterranean microwave symposium. Hammamet, pp 117–121 16. Yaroslavsky AN, Joseph C, Patel R, Muzikansky A, Neel VA (2017) Delineating nonmelanoma skin cancer margins using terahertz and optical imaging. J Biomed Photon Eng 3 17. Kumar R, Memoria M, Gupta A, Awasthi M (2021) Critical analysis of genetic algorithm under crossover and mutation rate. In: 2021 3rd international conference on advances in computing, communication control and networking (ICAC3N), 2021, pp 976–980. https://doi.org/10.1109/ ICAC3N53548.2021.9725640


18. Verma S, Raj T, Joshi K, Raturi P, Anandaram H, Gupta A (2022) Indoor real-time location system for efficient location tracking using IoT. In: 2022 IEEE world conference on applied intelligence and computing (AIC), 2022, pp 517–523. https://doi.org/10.1109/AIC55036.2022. 9848912 19. Diwakar M, Tripathi A, Joshi K, Sharma A, Singh P, Memoria M (2021) A comparative review: medical image fusion using SWT and DWT. Mater Today Proc 37:3411–3416 20. Dhaundiyal R, Tripathi A, Joshi K, Diwakar M, Singh P (2020) Clustering based multi-modality medical image fusion. J Phys Conf Ser 1478(1): 012024. (IOP Publishing) 21. Joshi K, Kumar M, Tripathi A, Kumar A, Sehgal J, Barthwal A (2022) Latest trends in multimodality medical image fusion: a generic review. In: Rising threats in expert applications and solutions: proceedings of FICR-TEAS 2022, pp 663–671 22. Diwakar M, Tripathi A, Joshi K, Memoria M, Singh P (2021) Latest trends on heart disease prediction using machine learning and image fusion. Mater Today Proc 37:3213–3218 23. Jain M, Kumar A (2017) RGB channel based decision tree grey-alpha medical image steganography with RSA cryptosystem. Int J Mach Learn Cybern 8:1695–1705

Uncovering the Usability Test Methods for Human–Computer Interaction Garima Nahar and Sonal Bordia Jain

Abstract Software has become one of the most important facets of human life. It is a set of programs or instructions that tells a system what to do. Software development represents a group of activities, rules, and guidelines expressed in the procedure of analysis, requirements gathering and planning, designing and developing, system testing, deploying, and maintaining and supporting software products. HCI has come into prominence in recent decades. For our research purposes, we want to understand both fields and their relationship, and to gain knowledge of the HCI design process, its steps, and HCI testing techniques, so that we can further decide on our area of research. The paper focuses on a study of Software Engineering and the Human–Computer Interface, along with the Human-Centered System Development Life Cycle (HCSDLC) model. We also review how various researchers have discussed Software Engineering together with HCI, since the user is now considered the most important part of any development. User satisfaction is a major concern; therefore, Agile development and Agile HCI are considered especially important. In addition to the study of the HCI development process, we discuss several types of usability testing techniques and compare them. Project managers and other team members, including the testing team, always find it challenging to select the most affordable and acceptable HCI usability testing method for a software project; when choosing a testing method, a variety of desired qualities have to be considered. A systematic and efficient literature review is required to assist software project managers in making this difficult and significant choice, so that the most suitable HCI usability testing method, or combination of testing methods, can be chosen.

G. Nahar
RTU, Kota, Rajasthan, India
e-mail: [email protected]

S.S. Jain Subodh P.G. Mahila Mahavidyalaya, Rambagh, Jaipur, Rajasthan, India

S. B. Jain (B)
S. S. Jain Subodh P.G. College, Jaipur, Rajasthan, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_6


Keywords Human–computer interface · HCI · Agile · Usability testing · User

1 Introduction
Software has become one of the most important facets of human life. It is a group of instructions or programs that tells a system what to do. Software development represents a collection of activities, rules, and guidelines expressed in the procedure of analysis, planning, designing, developing, testing, deploying, maintaining, and supporting software products [1]. Software development is significant due to its pervasive nature: it enables businesses and organizations to differentiate themselves from others and to be more competitive and viable, and it improves customer experience through more efficient and productive operations. Programmers, software developers, and software engineers are the people who primarily accomplish the software development process. IBM Vice-President and blogger Dibbe Edwards draws attention to this: "Software has emerged as a key differentiator in many products—from cars to washing machines to thermostats—with a growing Internet of Things connecting them" [2]. Software Engineering is a discipline that studies the nature of software and the approaches, methodologies, theories, and laws of software behavior; its ultimate aims are high productivity, high quality, low cost, on-time completion, easy updatability, and, most importantly, software that is easy to use, user-friendly, and useful to the user [3]. User involvement has become the most important concern in software development activity. Majid et al. [4] stated that involving the user in all stages of the SDLC leads to increased software usability; they performed a study and survey with 32 software experts on user participation in the System Development Life Cycle process, utilizing the Human-Centered System Development Life Cycle (HCSDLC) model. The most traditional approach to software development is a time-tested, linear, sequential, and easy-to-understand model, the Waterfall Model, in which one activity is completed before the next begins. These activities can be Requirement Analysis, Planning, System Design, Coding, Implementation or Deployment, Testing, Operation, Maintenance, and Feedback [5]. Each phase has a distinct goal, and because of the structured and process-centered nature of this model there is no possibility of returning to a previous phase; backflows should be avoided [6]. However, it is observed during the SDLC that some modifications or revisions become essential as a result of user requirements, and prohibiting such revisiting, or backflow, makes software non-reusable and makes upgrading the system clumsy and expensive. Nowadays, customer satisfaction at software delivery is considered the primary goal, instead of conforming to the plan as in the traditional Waterfall Model. In most software projects, the changes come from the environment outside the software development team, i.e., the customer (human). The major changes are done in


requirements, interface, and technology [7]. Moreover, these changes cannot be left unaddressed, ignored, or eliminated without risking business failure. To eliminate complete rework, a new low-cost model, the Agile Model, has been introduced in response to these customer expectations. The Agile Software Development Model pursues customer or end-user satisfaction through an evolutionary development model leading to recurrent improvement; these methods respond to customer expectations with reduced cost and increased usability [8]. Usability is identified as the extent to which a software product can be utilized by the targeted users to fulfil particular goals with proficiency and satisfaction in a specific context of use [9]. The life cycle of User-Centered Design (UCD) resembles the Agile methodology. UCD is a collection of techniques, approaches, procedures, and activities that focuses on the involvement of the user in the development process [10]. ISO 13407 describes UCD as an iterative process made up of four primitive and important activities: know the user, specify the user requirements, produce design solutions by incorporating HCI, and evaluate the usability of the design against the user requirements [11]. The past five decades have given more attention to the functionality of software, the cost of development, and security issues than to human values or social responsibilities. With the introduction of artificially intelligent systems, new attention is being paid to values and ethics. Value-based methods [12] are prominent in Human–Computer Interaction, and at present they are being applied in the early stages of Software Engineering. The word value is also used with Agile methods, but as business value or economic value (Fig. 1).

Fig. 1 Software developer's values v/s human values


2 User Engagement and Human–Computer Interaction
Engaging the user has become the main concern of software designers and developers of software products when working with the UCD model [13]. Further, the requirement to understand users' experiences has stimulated an emphasis on user engagement in software development; Human–Computer Interaction (HCI) builds on this emphasis, and HCI and other related fields are based on user engagement. User engagement is a unanimous goal in software development, in which designers do their utmost to engage users. The need to engage the user and to know about their interaction with computers or software has introduced User Experience within HCI. The gaps between Software Engineering and Human–Computer Interaction are the areas with the most scope for research [14]. Some challenges that hinder the software development process or restrain the usability of software products have been identified in the process of transferring HCI recommended practices to the Software Engineering process (Fig. 2).

Fig. 2 Graphical overview of transfer process

As more than half of the concern of software development is the development of the user interface, the role of the HCI field cannot be underestimated. HCI is a field related to the design, evaluation, and implementation of interactive software systems for human use, and to the study of the key facts encircling them. It is also a multidisciplinary field, a fusion of human psychology, social science, computer science, and information technology, used to design and develop interactive computing systems for humans. HCI began gaining elevation in the 1980s, and in the years since, its integration with SE has kept growing to enhance the usability of software. Initially, the complexity was on the higher side; the most common reason for this complexity was that software development companies were stubborn about usability practices and Human-Centered Design together with Agile Software Development techniques, especially in small organizations. A Cross-Discipline User Interface Life Cycle amalgamates HCI and SE within the sphere of Agile development [15]. The major requirement for an HCI professional or interaction designer designing a usable software system is to have distinct skills such as human psychology, requirement modeling, and user interface design. The term user interface designer is used for these HCI experts, who blend knowledge of usability, graphics, and interaction design. Human–Computer Interaction focuses on User Interface Design (UID), with emphasis on ease of use, ease of learning, better user performance, and customer satisfaction, and Software Engineering


focuses on functional requirements and their execution in developing a software system. An interaction layer is introduced as an interface between the software system (computer) and the user (human). This interaction layer is used in the region where HCI and SE are required to work together to ensure that the final software system works as specified in the preliminary Requirements Engineering (RE). SE needs to work with HCI to build a high-quality user interface (UI). However, a clear collaboration is not yet visible, as explicit integration of HCI methods and processes is still lacking.

3 Software Testing and its Need
Software testing is the most important step in the process of software development [16]. It is an activity, or set of activities, carried out to assess and refine software quality and to check whether the software system is defect-free, as quality software that meets the user's requirements is in high demand. The major focus is on WHY TO TEST rather than HOW TO TEST, in order to make the HOW simple and easy. Testing personnel ought to be accustomed to basic testing goals, objectives, standards, and concepts to avoid bad testing reports. Mistakes are an inevitable part of human nature, and in software development these mistakes can sometimes be very expensive and hazardous. In technical terms, software testing is a combination of verification and validation to ensure the correctness, completeness, and accuracy of the software product. The objectives and goals of testing are to find and prevent the maximum number of defects, satisfy the Software Requirement Specification (SRS) and Business Requirement Specification (BRS), satisfy user requirements, develop high-quality and accurate test cases, estimate software reliability, minimize cost and effort, and, most importantly, gain customer satisfaction by delivering a quality software product. The qualities and characteristics of highly effective testers are that they are keen observers, good at communication as well as analysis, good time managers, quality-oriented, passionate and quick learners with a positive attitude, and, moreover, user-oriented, since they always think about user requirements. Software testing detects faults, and also the losses caused by these faults [17]; it is a process used for checking the quality and accuracy of the software against user requirements. Various incidents, the reasons behind them, and the losses they caused due to the lack of appropriate and proper testing are listed (Fig. 3). The figure shows the three main stages that play an important role in software development. A single error or fault can lead to major financial loss, or even cost a human life, and this can be avoided by proper software testing. The most common examples of such testing failures are Ariane 5 Flight 501, which disintegrated just 40 s after launch in 1995; Knight Capital Group, which lost $440 million through its trading system in 2012; Yahoo!'s massive data breach of about 500 million credentials in 2016; and many more.


Fig. 3 Main stages which play an important role in software development

Around 20 major fault occurrences due to improper software testing are listed, along with the reasons for the faults, the losses caused by them, and possible remedies. The major reason behind these faults was the failure to identify the error in time, which is how the chaos came to pass.

4 Usability Test Methods
A new view of usability test methods for the Human–Computer Interaction interface is described in [18]. Usability is the most important factor in HCI, as it assures the fulfilment of the interaction. The usability test is an essential process for the user as well as the designer. It is used in HCI design to enhance and evaluate usability, improve the interface, and reduce users' learning time while achieving high user efficiency and high user satisfaction. It also helps designers in emphasizing the features of the interface, lessening the cost of development and support, and enhancing the software's market competitiveness and acceptability. The focus here is on awareness of the usability test methods in HCI design, accompanied by an analysis of these methods. HCI is a mode of communication, a platform for the flow of information and feedback, and a technique for interaction between humans and computers. The term user interface is used as a synonym for the Human–Computer Interface. Effective interaction in HCI is largely influenced by a well-designed user interface, which has made usability testing of interface design increasingly important over the years. Moreover, usability itself is a complete rating of the degree of use in HCI. ISO 9241-11:2018, "Definitions and concepts of Usability," provides a framework for interpreting the concept of usability and applying it in environments where humans use interactive systems, other types of systems (such as built environments), as well as products (such as industrial and consumer products) and services (such as technical and personal services) [19]. The concept of usability is explained, key terms and concepts are defined, the fundamentals of usability are identified, and the application of usability is explained. Learnability, efficiency, memorability, errors, and satisfaction are the listed quality components of usability. Usability testing is a process used to improve design, wherein real users and customers perform tasks with a software product under controlled and monitored conditions. Its purpose is to assist the software development team in meeting the needs of future users and making the software more user-friendly and comprehensible. Usability testing also identifies the areas in which users are going to struggle while using the software and recommends improvements. Usability Evaluation (UE) is an important part of the User Interface (UI) Design process, as its iterative cycles consist of designing, prototyping, and evaluation (Fig. 4).

Fig. 4 Core activities comprised in an evaluation process

Heuristic evaluation method. Cognitive walkthrough method. Scenario-based method. Remote Testing Usability Method. User-based Testing Method. Focus group method. Contextual inquiry method. Model-based evaluation method.

The evaluation criteria for usability testing methods, considering all usability test aspects, are as follows: high velocity, or the time to complete a task; low overall cost; flexibility in using tools and frameworks; resource requirements; how many tests should be performed; the test type, either experimental or analytical; the effect of the evaluator's experience on test results; the degree of problems found, either major or minor; and the method purpose parameter.

The usability design of HCI influences the business prospects of the software. Designers ought to be guided by natural and human ideas; in parallel, they must


enhance the usage and working of interfaces by drawing on numerous areas, such as design, ergonomics, cognitive psychology, linguistics, and semiotics, to finally accomplish the ultimate objective of enhancing the usability of the software.

A demerit of traditional interface testing is that it does not focus on the rules of the human decision-making process, which motivates HCI interface testing based on a decision-making model [20]. The decision-making model analyzes the visual search for information and the logical decision-making process, and considers the influence of software interface design elements on decision-making. Interface color matching, logical decision-making, visual search efficiency, and time pressure tests are the test elements that offer improved assistance for software interface design.

HCI testing is an essential block of the HCI design and development process [21]. The designed and developed software product is tested, bugs are removed, and further design improvements are identified. HCI testers have two major options:

1. Closed-door laboratory testing—testers have complete control over the progression of testing.
2. Field testing—testing is performed with target users to test actual usage of the software product.

The choice of testing method often depends on the nature of the software artifact. In general, laboratory testing is performed to test a user interface (UI) and software usability, whereas field testing is used for testing user acceptance of the software and its functionality. A different point of view favors field testing, because field testing can perform more realistic and reliable HCI design tests, with a focus on the operational considerations for field testing. HCI testing is a lateral phase of HCI design and development, but Agile development and design-thinking principles promote and encourage testing in the early phases of the SDLC. The factors that arise while conducting HCI testing in the field are as follows:

1. Availability of the venue and facilitating conditions: the target audience may be emotionally unhappy at too early a stage, when a good UI is not yet visible. Laboratory testing is a solution at this stage.
2. How orderly the data are provided to mark up the basic relationship between the HCI product and user behavior. Laboratory testing can shield users from unnecessary influence to avoid adverse effects on the design, whereas field testing may influence the users and HCI testers due to continuous changes in environmental and specific conditions during testing.
3. The HCI tester must be able to reconduct the testing with the parameters added during the review cycle, which may be long in duration. So, if the field testing involves business associates, performing additional cycles of testing might not be feasible.

In order to conduct field testing, some operational considerations must be taken care of, such as: is your software product prepared for field testing; how to accomplish the testing; what realistic tasks are you inclined to perform to obtain the data


you desire; how to plan and design the testing; will you be able to manage the data collected; will you be able to deal with your team; and so on. The planning and conduct of laboratory and field testing depend on the software product; in general, laboratory testing is more beneficial in the earlier phases of the HCI design process, whereas field testing suits the later phases, with numerous iterations to ensure realistic usage of the software product. The proper use of both kinds of testing will lead to a successful software design with maximum user satisfaction.

A comparison was performed between synchronous and asynchronous usability test methods [22] through an evaluation of hospital websites, encompassing three points of comparison: the number and nature of usability problems unearthed, the experience of the participants involved in the test, and overall task performance. The asynchronous usability test method is considered better due to its comfort, convenience, and friendliness for participants; it also detected a large number of content and navigation usability issues. In terms of task performance metrics, there is no difference between the two methods. The asynchronous method uncovered more high-risk problems at a low cost in a more natural and user-friendly manner. Additionally, the asynchronous method can be performed on a huge sample even if no evaluator is present during testing sessions, which reduces the cost in terms of both money and time.

The traditional Concurrent Think-Aloud usability testing method was compared with the Co-Discovery usability testing method [23] in Saudi Arabia, again across three points of comparison: the number and nature of usability problems unearthed, the experience of the participants involved in the test, and overall task performance. The conclusion is that the Co-Discovery test method is better than the Think-Aloud test method, mainly because the Co-Discovery method detected a large number of minor usability problems concerning layout and functionality. This method is also easier to perform and seems much more natural and user-friendly for participants; it gained a higher positive rating from them. In terms of task performance, there is no difference between the methods. The Co-Discovery method is cost-effective and economical, with less difficulty, a high quantity of problems found, and a user-friendly and pleasant usability test experience.

5 Research Gap and Problem Statement

HCI and usability testing are the major areas of focus in this paper. The integration of HCI with SE has kept growing to increase the usability of software products. Usability testing is a process that improves the test design, wherein real users and customers perform actual tasks in a real-time environment to test the proper functioning of a software product. This testing is performed under controlled and monitored conditions. Its purpose is to help the software development team take care of the needs of future users and thus make the software user-friendly, more comprehensible, and well-defined. Various usability test methods were


discussed, with comparisons across various user interface environments in different regions or work areas. Some of the major problems identified during the literature review are:

1. Which kind of testing method is best for software testing?
2. What type of questions must be asked in questionnaires or given to users for testing purposes?
3. Which will be more beneficial: the closed-door or the field testing method?
4. Can a combination of test methods be used or not?
5. Which testing method will find the maximum number of faults at minimal cost?

There have been several problems in deciding the most suitable test method for developing a faultless software product. The ultimate goal is to fulfil the Software Requirement Specification (SRS) and the Business Requirement Specification (BRS) with user satisfaction. The paper aims to describe the criteria for testing as well as the results obtained by testing. In future work, the usability testing methods will be expanded in more detail or into more diverse areas.

6 Contribution of Paper

The primary aims of the paper are:

1. To identify user involvement in software development and HCI, along with the purpose of testing.
2. To classify the types of HCI usability testing.
3. To identify the evaluation criteria for usability testing.
4. To differentiate the various usability testing methods.

7 Research Methodology

Research methodology can be qualitative, quantitative, or mixed-method. The mixed-method methodology combines qualitative and quantitative methodologies to integrate perspectives and create a clear picture of the process to be followed during research. Data collection is also an important task, and there are many options for collecting research data. These options can be categorized into different types, including interviews (both unstructured and semi-structured), focus groups, group interviews, surveys (both online and in person), observations, documents and records, as well as case studies.

Certain testing criteria involving the users and testers will be listed before performing the research, for a particular group of users and a particular type of system, so that a combination of testing methods can be performed. A comparative study with these listed criteria needs to be done between different types of software


products with different user groups to identify the best test method, a combination of test methods, or a new test method. The following test methods are being planned: heuristic evaluation, cognitive walkthrough, scenario-based, remote usability testing, user-based testing, focus group, contextual inquiry, and model-based evaluation. We will follow a research methodology that includes questionnaires for user groups, along with some testing platforms. These questionnaires will be determined according to the user groups.

8 Proposed Outcomes/Conclusion

This paper is focused on a review of Software Engineering and the Human–Computer Interface. We also reviewed how various researchers have discussed Software Engineering together with HCI, as the user is now considered the most important part of any development. User satisfaction is a major concern; therefore, Agile development and Agile HCI are given great importance. In addition to the study of the HCI development process, we have discussed many types of testing techniques and their comparisons. Project managers and other team members, including the testing team, always find it challenging to select the most affordable and acceptable HCI usability testing method for a software project. When choosing a testing method, a variety of desired components have to be considered. A systematic and efficient literature review is required to assist software project managers in making this difficult and significant choice, so that the most suitable HCI usability testing method, or combination of testing methods, can be chosen. Such a comparative study can be done between different types of software products with different participants to identify the best test method or a new test method. In future work, we plan to recommend an enhanced testing framework based on these comparative and analytical studies, so that it is better accepted by HCI usability testing team members.

References

1. Despa ML (2014) Comparative study on software development methodologies. Database Syst J 5(3):37–56
2. What is Software Development? IBM. https://www.ibm.com/topics/software-development#anchor--82427513
3. Adenowo AAA, Adenowo BA (2013) Software engineering methodologies: a review of the waterfall model and object-oriented approach. Int J Sci Eng Res 4(7):427–434


4. Majid RA et al (2010) A survey on user involvement in software development life cycle from practitioner's perspectives. In: 5th international conference on computer sciences and convergence information technology. IEEE
5. Pressman RS (2005) Software engineering: a practitioner's approach, 6th ed
6. Fowler M (2004) UML distilled: a brief guide to the standard object modeling language. Addison-Wesley Professional
7. Highsmith J, Cockburn A (2001) Agile software development: the business of innovation. Computer 34(9):120–127
8. Agile Software Development. Wikipedia, Wikimedia Foundation, 12 Mar 2022. https://en.wikipedia.org/wiki/Agile_software_development
9. Zapata C (2015) Integration of usability and agile methodologies: a systematic review. In: Design, user experience, and usability: design discourse, pp 368–378
10. Jokela T et al (2003) The standard of user-centered design and the standard definition of usability: analyzing ISO 13407 against ISO 9241-11. In: Proceedings of the Latin American conference on human–computer interaction
11. Whittle J (2019) Is your software valueless? IEEE Softw 36(3):112–115
12. Biffl S et al (eds) (2006) Value-based software engineering, vol 1. Springer, Berlin Heidelberg
13. Doherty K, Doherty G (2018) Engagement in HCI: conception, theory and measurement. ACM Comput Surv (CSUR) 51(5):1–39
14. Ogunyemi A, Lamas D (2014) Interplay between human–computer interaction and software engineering. In: 2014 9th Iberian conference on information systems and technologies (CISTI). IEEE
15. Memmel T, Gundelsweiler F, Reiterer H (2007) Agile human-centered software engineering. In: BCS-HCI'07: 21st British HCI group annual conference on people and computers
16. Uddin A, Anand A (2019) Importance of software testing in the process of software development. IJSRD—Int J Sci Res Dev 6. ISSN: 2321-0613
17. Mahmood F, Dhirendra P (2019) Software testing, fault, loss and remedies. Int J Emerg Technol Innov Res 553–568. www.jetir.org
18. Ghasemifard N et al (2015) A new view at usability test methods of interfaces for human computer interaction. Global J Comput Sci Technol
19. ISO 9241-11:2018. ISO, 4 Apr 2018. https://www.iso.org/standard/63500.html
20. Huang B, Zhang P, Wang C (2018) Human–computer interaction design testing based on decision-making process model. In: International conference on man-machine-environment system engineering. Springer, Singapore
21. Tan C-H et al (2016) HCI testing in laboratory or field settings. In: International conference on HCI in business, government, and organizations. Springer, Cham
22. Alhadreti O (2022) A comparison of synchronous and asynchronous remote usability testing methods. Int J Hum Comput Interact 38(3):289–297
23. Alhadreti O (2021) Comparing two methods of usability testing in Saudi Arabia: concurrent think-aloud vs. co-discovery. Int J Hum Comput Interact 37(2):118–130

The Effects of Lipid Concentration on Blood Flow Through Constricted Artery Using Homotopy Perturbation Method Jyoti, Sumeet Gill, Rajbala Rathee, and Neha Phogat

Abstract The current research aims to simulate blood flow with nanoparticles through a stenosed artery with porous walls. The insertion of nanoparticles has negative consequences in stenosed tubes. Blood is treated as a couple stress fluid in this framework. Closed-form expressions for the velocity and temperature distributions are provided. We created a model of couple stress blood flow through a constricted tube. The results are presented as graphs for each of the major parameters, produced with MATLAB.

Keywords Couple stress fluid · Slip velocity · Wall permeability · Brownian diffusion coefficient · Thermophoretic diffusion coefficient · Nanoparticles

Jyoti (B) · S. Gill · N. Phogat
Department of Mathematics, M.D. University, Rohtak, Haryana 124001, India
e-mail: [email protected]
S. Gill e-mail: [email protected]
R. Rathee
A.I.J.H. Memorial College, Rohtak, Haryana 124001, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_7

1 Introduction

The study of fluid dynamics is important in understanding blood flow inside the human body. Due to the pumping of the heart, blood flows throughout the body via veins and arteries. Blood flow helps to prevent microbiological and mechanical harm. Irregular blood flow conditions can lead to a number of cardiovascular diseases, including atherosclerosis (commonly called stenosis). Stenosis is the deposition of fatty substances caused by plaque development in arteries. The coupled nonlinear equations are solved using the Homotopy Perturbation Method (HPM) for the case of mild stenosis and the accompanying boundary conditions. Blood-mediated nanoparticle distribution is a relatively recent and rapidly expanding topic in


the development of medicines and diagnostics. This allows for the modification of immune system interactions, blood clearance profiles, and interactions with target cells. The references describe some of the fundamental publications dealing with Newtonian and non-Newtonian fluid models. Shukla et al. discussed the effect of stenosis on blood flow through an artery using blood as a non-Newtonian fluid; the effect of the radial distribution of cells was observed in the presence of the peripheral plasma layer close to the wall, and blood flow through an artery with mild stenosis was deliberated using blood as a power-law fluid [1]. Sinha and Singh evaluated blood flow through an artery with moderate stenosis, as well as the effects of couple stress on non-Newtonian blood flow [2]. It was demonstrated that if a modest stenosis develops, the resultant flow disorder affects the progression of the illness and arterial deformability, as well as changing regional blood rheology [3]. Misra et al. developed a non-Newtonian model that analyzed blood flow through arteries under stenotic situations [4]. Pralhad and Schultz investigated arterial stenosis modeling, in which blood was considered a non-Newtonian couple stress fluid, along with its applicability to blood disorders [5]. In general, stenosis surface imperfections complicate experimental and numerical models of flow processes. With such complications in mind, a considerable amount of scientific work has already been expended in studying the flow properties of blood through blocked arteries [6, 7]. Rathee and Singh developed a two-layered model of couple stress blood flow through a stenotic tube under the action of an externally applied magnetic field in a porous material and discovered that a precisely tuned magnetic field strength can aid in controlling blood flow in hypertension patients; they considered viscosity variation in connection with the Einstein relation as proposed by Haldar and Ghosh (1994) [8]. Akbar et al. investigated the impact of permeable walls and slip on nanofluid flow through stenosed tapering arteries [9]. A brief summary was also given of a mathematical model of Jeffrey fluid with suspended nanoparticles flowing through a tapered stenosed artery, together with its hallmarks and a discussion of alternative constitutive models that account for one or more of these characteristics [10]. Oka and Murata explained, using the principles of hydrodynamics, how blood moves in steady, slow flow through a capillary with a porous wall; a further presumption is a very low flow rate across the capillary wall [11]. By using boundary-layer techniques, the limiting case of a step-function distribution of permeability and porosity was then explored and proven to provide the necessary boundary condition [12]. He's homotopy perturbation approach, a potent analytical technique, has been used to approximate periodic solutions for various nonlinear differential equations in mathematical physics, such as heat transfer and Van der Pol damped nonlinear oscillators [13]. Nadeem examined blood flow in tapering stenosed arteries using a nano-Prandtl fluid flow analysis, in which heat transmission and the presence of a nanoparticle fraction were considered; the analytical solution of the coupled nonlinear differential equations was found using the homotopy perturbation approach [14]. A theoretical investigation of blood flow through a tapered and overlapping stenosed artery under the influence of an external magnetic field has also been presented, where it was shown that the primary stenosis has a significant impact on the secondary stenosis [15]. The intent of this research is to solve the couple stress blood flow problem in the presence of a stenosed tube with porous walls, taking into consideration the effects of lipid


concentration and slip velocity. The values for various parameters are computed, and graphs have been created.

2 Formulation of the Problem

It is postulated that the blood is a non-Newtonian couple stress fluid of constant density and viscosity flowing through an artery modeled as a tube of radius $r$ and length $l_0$. The stenosis develops symmetrically about the tube axis but is non-symmetric with respect to the radial coordinate. The geometry of the arterial wall with the interstitial stenosis (Fig. 1), representing an arterial segment with a symmetric shape of stenosis, is described by

$$\frac{R(z)}{R_0} = 1 - A\left[l_0^{\,s-1}(z-d) - (z-d)^s\right], \qquad d \le z \le d + l_0, \tag{1}$$

where

$$A = \frac{\epsilon}{R_0\, l_0^{\,s}}\,\frac{s^{\,s/(s-1)}}{s-1},$$

and $s \ge 2$ is the stenosis shape parameter, $d$ represents the position of the stenosis, $l_0$ is the length of the stenosis, the maximum height of the stenosis is located at $z = d + l_0/2$, $R(z)$ represents the radius of the stenosed vessel, $R_0$ is the radius of the normal (unconstricted) artery, and $\epsilon/R_0 \ll 1$.
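As a quick numeric illustration of Eq. (1), the short Python sketch below evaluates the profile for the symmetric case $s = 2$; the parameter values are illustrative assumptions, not data from this study.

```python
# A small numeric check of the stenosis profile in Eq. (1); parameter
# values (eps, s, d, l0, R0) are illustrative assumptions.
import numpy as np

def radius(z, R0=1.0, eps=0.2, s=2, d=1.0, l0=1.0):
    """R(z) for d <= z <= d + l0, per Eq. (1)."""
    A = (eps / (R0 * l0**s)) * s**(s / (s - 1)) / (s - 1)
    return R0 * (1 - A * (l0**(s - 1) * (z - d) - (z - d)**s))

z = np.linspace(1.0, 2.0, 5)
print(radius(z))  # minimum radius R0 - eps occurs at z = d + l0/2 when s = 2
```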

Fig. 1 Geometry of the arterial wall with interstitial stenosis


The governing equations for momentum, energy, and nanoparticle concentration are

$$\mu \nabla^2 u - \eta \nabla^2\!\left(\nabla^2 u\right) - \frac{\mu}{K}\,u + g(\rho\gamma)_{nf}\,(T - T_0) + \rho g \alpha\,(C - C_0) - \frac{\partial p}{\partial z} = \rho\,\frac{\partial u}{\partial t}, \tag{2}$$

$$\alpha_{nf}\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial T}{\partial r}\right) + \tau\!\left[D_B\,\frac{\partial C}{\partial r}\frac{\partial T}{\partial r} + \frac{D_T}{T_0}\left(\frac{\partial T}{\partial r}\right)^{\!2}\right] + \frac{Q_0}{(\rho C_p)_{nf}} = \frac{\partial T}{\partial t}, \tag{3}$$

$$D_B\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial C}{\partial r}\right) + \frac{D_T}{T_0}\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial T}{\partial r}\right) = \frac{\partial C}{\partial t}, \tag{4}$$

where $\nabla^2 = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial}{\partial r}\right)$ is the linear operator, $-\frac{\partial p}{\partial z} = A_0 + A_1\cos\omega t$ is the pressure gradient with $A_0$ the constant amplitude and $A_1$ the amplitude of the pulsatile component, $C$ is the concentration, $u$ is the axial velocity component, $t$ is the time variable, $T$ is the temperature, $g$ is the acceleration due to gravity, $\tau = (\rho C)_{nf}/(\rho C)_f$ is the ratio of the effective heat capacity of the nanoparticle material to the heat capacity of the fluid, $K$ is the permeability constant, $D_B$ is the Brownian diffusion coefficient, $D_T$ is the thermophoretic diffusion coefficient, and $r$ is the radial coordinate.

Boundary conditions for temperature ($\theta$) and concentration ($\sigma$):

$$\frac{\partial T}{\partial r} = 0, \quad \frac{\partial C}{\partial r} = 0 \quad \text{at } r = 0, \tag{5}$$

$$C = C_0, \quad T = T_0 \quad \text{at } r = R(z). \tag{6}$$

The non-dimensional form:

$$\nabla^2 u - \frac{1}{\alpha^2}\nabla^2\!\left(\nabla^2 u\right) - N\frac{\partial p}{\partial z} - \frac{R_0^2}{K}\,u + G_r\,\theta + G_f\,\sigma = \alpha^2\,\frac{\partial u}{\partial t}, \tag{7}$$

$$\nabla^2\theta + \beta + N_B\,\frac{\partial\sigma}{\partial y}\frac{\partial\theta}{\partial y} + N_T\left(\frac{\partial\theta}{\partial y}\right)^{\!2} = \frac{1}{\alpha_1^2}\,\frac{\partial\theta}{\partial t}, \tag{8}$$

$$\nabla^2\sigma + \frac{N_T}{N_B}\,\nabla^2\theta = \frac{1}{\alpha_2^2}\,\frac{\partial\sigma}{\partial t}. \tag{9}$$

The non-dimensional variables:

$$\nabla^2 = \frac{1}{y}\frac{\partial}{\partial y}\!\left(y\frac{\partial}{\partial y}\right),\quad T = [\theta + 1]T_0,\quad C = [\sigma + 1]C_0,\quad t' = \frac{t}{t_0},\quad y = \frac{r}{R_0},$$
$$\beta = \frac{Q_0 R_0^2}{T_0\,(\rho C_p)_{nf}\,\alpha_{nf}},\quad N_B = \frac{\tau D_B C_0}{\alpha_{nf}},\quad N_T = \frac{\tau D_T}{\alpha_{nf}},\quad \alpha = R_0\sqrt{\frac{\rho}{\mu t_0}},\quad \alpha_1^2 = \frac{\alpha_{nf}\,t_0}{R_0^2},$$
$$\alpha_2^2 = \frac{D_B\,t_0}{R_0^2},\quad N = \frac{R_0^2}{\mu},\quad G_r = \frac{g(\rho\gamma)_{nf}\,T_0 R_0^2}{\mu},\quad G_f = \frac{g\rho\alpha\,C_0 R_0^2}{\mu}, \tag{10}$$

where $N_B$ is the Brownian motion parameter, $N_T$ is the thermophoresis parameter, $\alpha$ is the Womersley parameter, $G_r$ is the local temperature Grashof number, $G_f$ is the local Grashof number, and $K$ is the slip parameter.

Non-dimensional boundary conditions:

$$\frac{\partial\sigma}{\partial y} = 0, \quad \frac{\partial\theta}{\partial y} = 0 \quad \text{at } y = 0, \tag{11}$$

$$\sigma = 0, \quad \theta = 0 \quad \text{at } y = \frac{R(z)}{R_0}. \tag{12}$$

Solution of the Problem

Using the Homotopy Perturbation Method (HPM), the homotopies for the coupled Eqs. (8) and (9) are constructed as

$$H(k,\theta) = (1-k)\left[L(\theta) - L(\theta_{10})\right] + k\left[L(\theta) + \beta + N_B\,\frac{\partial\sigma}{\partial y}\frac{\partial\theta}{\partial y} + N_T\left(\frac{\partial\theta}{\partial y}\right)^{\!2} - \frac{1}{\alpha_1^2}\frac{\partial\theta}{\partial t}\right] = 0, \tag{13}$$

$$H(k,\sigma) = (1-k)\left[L(\sigma) - L(\sigma_{10})\right] + k\left[L(\sigma) + \frac{N_T}{N_B}\,\frac{1}{y}\frac{\partial}{\partial y}\!\left(y\frac{\partial\theta}{\partial y}\right) - \frac{1}{\alpha_2^2}\frac{\partial\sigma}{\partial t}\right] = 0, \tag{14}$$

where we take $L = \frac{1}{y}\frac{\partial}{\partial y}\!\left(y\frac{\partial}{\partial y}\right)$ and $k$ is the embedding parameter with range $0 \le k \le 1$.

The initial guesses taken are

$$\theta_{10}(y,z) = -\left(\frac{y^2 - R^2}{4}\right)e^{\omega t}, \qquad \sigma_{10}(y,z) = -\left(\frac{y^2 - R^2}{4}\right)e^{\omega t}, \qquad \omega \text{ being constant}. \tag{15}$$

Define

$$\theta(y,z) = \theta_0 + k\theta_1 + k^2\theta_2 + \cdots, \tag{16}$$

$$\sigma(y,z) = \sigma_0 + k\sigma_1 + k^2\sigma_2 + \cdots \tag{17}$$
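To make the expansion in Eqs. (13)–(17) concrete, the following SymPy sketch applies the same homotopy perturbation machinery to a toy nonlinear ODE, $y' = y^2$, $y(0) = 1$ (exact solution $1/(1-t)$); the toy problem and all names in the script are illustrative assumptions, not part of this study.

```python
# A minimal SymPy sketch of the homotopy perturbation idea: embed the
# nonlinearity with the parameter k, expand y in powers of k, and solve
# order by order, exactly as done for the coupled system (13)-(14).
import sympy as sp

t, k = sp.symbols("t k")
y0 = sp.Integer(1)                       # initial guess satisfying y(0) = 1
y1, y2 = sp.Function("y1"), sp.Function("y2")

# Homotopy H(k, y) = y' - k*y^2 with y = y0 + k*y1 + k^2*y2 + ...
y = y0 + k * y1(t) + k**2 * y2(t)
H = sp.expand(sp.diff(y, t) - k * y**2)

# Collect equal powers of k and solve order by order.
eq1 = sp.Eq(H.coeff(k, 1), 0)                        # O(k): y1' - 1 = 0
s1 = sp.dsolve(eq1, y1(t), ics={y1(0): 0})           # y1 = t
eq2 = sp.Eq(H.coeff(k, 2).subs(y1(t), s1.rhs), 0)    # O(k^2): y2' - 2t = 0
s2 = sp.dsolve(eq2, y2(t), ics={y2(0): 0})           # y2 = t^2

print(s1, s2)  # truncation at k = 1 gives 1 + t + t^2 + ... = 1/(1 - t)
```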


The values for temperature ($\theta$) and concentration ($\sigma$) are calculated by substituting Eqs. (16) and (17) into Eqs. (8) and (9), respectively, and letting $k \to 1$; thus

$$\theta = -\left(\frac{y^2 - R^2}{4}\right)e^{\omega t} + \beta\,(N_T + N_B)\,e^{2\omega t}\left(\frac{y^4 - R^4}{64}\right) - \frac{\omega e^{\omega t}}{\alpha_1^2}\left(\frac{y^4 - R^4}{64}\right) + \frac{R^2\,\omega e^{\omega t}}{16\,\alpha_1^2}\left(y^2 - R^2\right), \tag{18}$$

$$\sigma = -\left(\frac{y^2 - R^2}{4}\right)e^{\omega t} - \frac{N_T}{N_B}\,\frac{\omega e^{\omega t}}{\alpha_2^2}\left(\frac{y^4 - R^4}{64}\right) + \frac{R^2\,\omega e^{\omega t}}{16\,\alpha_2^2}\left(y^2 - R^2\right). \tag{19}$$

The exact solution for the axial velocity, obtained by applying the Laplace and finite Hankel transforms after substituting Eqs. (18) and (19) into Eq. (7), is a Bessel–Fourier series of the form

$$u(y,t) = \frac{2}{a^2}\sum_{n=1}^{\infty}\frac{h\,J_0(y\lambda_n)}{\left(h^2 + \lambda_n^2\right)J_1^2(a\lambda_n)}\;\Phi_n(t), \tag{20}$$

where the $\lambda_n$ are the positive roots of the eigenvalue equation arising from the slip condition at the porous wall and, for brevity, $\Phi_n(t)$ denotes the time-dependent coefficient of the $n$th mode. This coefficient combines the pressure-gradient amplitudes $A_0$ and $A_1$, the Grashof numbers $G_r$ and $G_f$, the nanoparticle parameters $N_T$ and $N_B$, the constants $h$, $h_1$, $m$, $N$, and $\beta$, and the Bessel functions $J_1(a\lambda_n)$, $J_2(a\lambda_n)$, and $J_3(a\lambda_n)$, through exponential factors such as $e^{-(h_1/m)t}$ together with trigonometric and hyperbolic time factors.

The volumetric flow rate is calculated with the formula

$$Q = 2\pi\int_0^a y\,u(y,t)\,\mathrm{d}y,$$

which, on term-by-term integration of the series using $\int_0^a y\,J_0(y\lambda_n)\,\mathrm{d}y = a\,J_1(a\lambda_n)/\lambda_n$, gives

$$Q = \frac{4\pi}{a}\sum_{n=1}^{\infty}\frac{h\,J_1(a\lambda_n)}{\lambda_n\left(h^2 + \lambda_n^2\right)J_1^2(a\lambda_n)}\;\Phi_n(t). \tag{21}$$

3 Discussion

In this paper, numerical computations were carried out, and the resulting formulas for the axial velocity u are plotted against the axial distance z for different values of the Grashof number Gr, slip parameter h, and concentration of nanoparticles ∅. The program is executed with different parameter values, which are presented alongside each graph.


Fig. 2 Variation in axial velocity for different values of Grashof number (Gr )

Figures 2, 3 and 4 depict velocity profiles for various values of the Grashof number Gr, slip parameter h, and concentration of nanoparticles ∅. Figure 2 illustrates the axial profiles, which show that the axial velocity along the tube axis decreases as the Grashof number increases; for greater values, the rate of decrease is comparatively high. In Fig. 3, as the concentration of nanoparticles reaches its maximum stagnation point, the axial velocity decreases. In Fig. 4, the slip parameter raises blood viscosity in comparison with the no-slip condition at the arterial wall, resulting in a decrease in axial velocity (shown below).

4 Conclusion

• The flow of blood through stenosed arteries is investigated in a porous medium with the effects of slip velocity.
• The axial velocity and volumetric flow rate are derived from the governing equations using an appropriate transformation approach.
• There will be more deposition of LDL along the walls of arteries with an increase in porosity and in the concentration of nanoparticles.
• Deposition of LDL increases with an increase in the slip parameter, as LDL is strongly affected; thus, there are more chances of stenosis.
• There is variation in the pressure gradient (blood pressure) with an increase in the concentration of nanoparticles.


Fig. 3 Variation of axial velocity for different values of concentration of nanoparticles (∅)

Fig. 4 Variation of axial velocity for different values of slip parameter (h)


• The values of shear stress, as well as further graphs, can be calculated in future research work.

References

1. Shukla JB, Parihar RS, Rao BRP (1980) Effect of stenosis on non-Newtonian flow of blood in an artery. Bull Math Biol 42:283–294
2. Sinha P, Singh C (1984) Effects of couple stresses on the blood flow through an artery with mild stenosis. Biorheology 21:303–315
3. Haldar K (1985) Effects of the shape of stenosis on the resistance of blood flow through an artery. Bull Math Biol 47(4):545–550
4. Misra JC, Patra MK, Misra SC (1993) A non-Newtonian model for blood flow through arteries under stenotic conditions. J Biomech 26:1129–1140
5. Pralhad RN, Schultz DH (2004) Modeling of arterial stenosis and its applications to blood diseases. J Math Biosci 190:203–220
6. Jain N, Singh S, Gupta M (2012) Steady flow of blood through an atherosclerotic artery: a non-Newtonian model. Int J Appl Math Mech 8:52–63
7. Abd Elmaboud Y, Mekheimer KS (2012) Unsteady pulsatile flow through a vertical constricted annulus with heat transfer. Z Naturforsch 67a:185–194
8. Rathee R, Singh J (2013) Analysis of two-layered model of couple stress blood flow in the central layer through stenotic tube in porous medium under the effect of magnetic field. Int J Appl Math 28(2):1210–1218
9. Akbar NS, Rahman SU, Ellahi R, Nadeem S (2014) Nanofluid flow in tapering stenosed arteries with permeable walls. Int J Therm Sci 85:54–61
10. Ratan Shah R, Kumar R (2017) Study of blood flow with suspension of nanoparticles through tapered stenosed artery. Global J Pure Appl Math 13:7387–7399
11. Oka S, Murata T (1970) A theoretical study of flow of a capillary with permeable wall. Jpn J Appl Sci 9:345–352
12. Saffman PG (1971) On the boundary conditions at the surface of a porous medium. Stud Appl Math 50:93–101
13. He JH (1999) Homotopy perturbation technique. Comput Methods Appl Mech Eng 178:257–262
14. Nadeem S, Ijaz S, Akbar NS (2013) Nanoparticle analysis for blood flow of Prandtl fluid model with stenosis. Int Nano Lett
15. Shit GC, Roy M, Sinha A (2014) Mathematical modelling of blood flow through a tapered overlapping stenosed artery with variable viscosity. Appl Bionics Biomech 11:185–195

Actual Facial Mask Recognition Utilizing YOLOv3 and Regions with Convolutional Neural Networks A. Thilagavathy, D. Naveen Raju, S. Priyanka, G. RamBalaji, P. V. Gopirajan, and K. Sureshkumar

A. Thilagavathy · D. Naveen Raju · S. Priyanka · G. RamBalaji
Department of Computer Science and Engineering, R.M.K. Engineering College, Kavaraipettai, India
e-mail: [email protected]
D. Naveen Raju e-mail: [email protected]
S. Priyanka e-mail: [email protected]
P. V. Gopirajan (B)
Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology, Kattankulathur Campus, Chennai 603203, India
e-mail: [email protected]
K. Sureshkumar
Department of Information Technology, Saveetha Engineering College, Chennai, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_8

Abstract With the Coronavirus Disease (COVID-19) outbreak in 2019, it appears as though time has stopped. To combat pathogen transmission, the World Health Organization (WHO) determined that wearing a face mask is essential for limiting virus spread. Nevertheless, manually checking whether individuals wear face masks in public is an impractical operation, and the need to monitor people wearing masks called for an automated process. Nowadays, different deep learning and machine learning techniques can be used productively. All of the requirements for this type of model have been addressed in this work. The necessity and architectural layout of the suggested method have been thoroughly examined, followed by a rigorous inspection of several methodologies and their related comparative analyses. We investigate optimal settings for the sequential convolutional neural network model to accurately identify the presence of masks while avoiding overfitting. Face detection categorically deals with distinguishing a specific group of entities, i.e., faces. It has numerous applications, such as autonomous driving, education, and surveillance. The prevalent practice resizes the images before fitting them into the network to overcome this limitation. Some face collections include head turns, tilts, and slants, with multiple faces in the frame and different types of masks in different colors. The


proposed method consists of a cascade classifier and a pre-trained Convolutional Neural Network (CNN), which contains two 2D convolution layers connected to layers of dense neurons. As grayscale simplifies the algorithm and reduces the computational requirements, it is utilized for extracting descriptors instead of working directly on color images. The data has a categorical representation and is converted to categorical labels.

Keywords COVID-19 · YOLOv3 · R-CNN · Face mask

1 Introduction

Masks are items that, for several reasons, cover the face. Depending on their intended use, they are made from various materials and used for rituals, enjoyment, security, or disguise. Anyone can wear one to defend themselves from the disease. Masks were first used for ceremonies and rites, and the earliest discovered mask dates back to 7000 BC. According to health experts, the virus can be spread via both direct and indirect contact with a person [1]. As a consequence, healthcare organizations have strictly enforced measures such as the mandatory use of masks, as highlighted in [2]. The unrestrained coronavirus from 2019 has brought many changes globally, with deadly spread to more than 200 countries and two international conveyances as of 2021. The WHO declared this a pandemic situation [3]. Furthermore, the coronavirus outbreak has compelled scientific communities worldwide to aid in the fight against the pandemic. A method has been proposed for confirming the correct placement of an individual's face mask, while [4] contains discussions of the technological processes used to deal with the disease. A further study was conducted to develop an application that monitors people wearing face masks in open areas [5]. As noted in [6], this is not the first time that wearing face masks to battle transmission has been stressed, as it was throughout COVID-19; it is a practice that can be traced back to China's 1910–11 Manchurian epidemic. PyTorch was utilized in [7] to locate face masks, and the outcomes were 97% precise. Furthermore, [8] proposed the detection of various types of masks, with the result procured after the model was run in real-time mode. Another study was carried out to improve an implementation that inspects people wearing face masks in crowded locations [5]. The suggested scheme performed the task using the Faster R-CNN model and achieved a precision of 99.8%. With the technological advances the globe has now seen, a variety of techniques are available [9–15] that may benefit society if used.

CNNs are widely used to analyze visual image representations and are utilized in various situations, together with models such as the artificial neural network and the backpropagation (B.P.) neural network [16]. In [17], CNN models, also known as "artificial neural


networks,” are used to classify unusual abnormalities by getting data from sonography. Echocardiography systems are used to classify the unofficial mask object recognition. [18] presented a document on facial recognition fragmentation using an evolutionary algorithm and the ANFIS classification algorithm for trying to locate face image features using the imutils. Adaptive Neuro-Fuzzy Inference System (ANFIS) combines the benefits of Artificial Neural Networks (ANNs) and Fuzzy Logic (F.L.) in a single framework. This model should be used at airport entrance and exit gates to prevent and reduce the spread of viruses. The scheme can tell if someone is wearing a mask [19]. If hospital workers are discovered without masks, an alarm will sound, and the higher authorities will take appropriate action against the worker [20]. Whenever the computation recognizes individuals who are not wearing masks, a caution ought to be sent to inform those near the area or the adjacent worried professionals just, so fundamental issues can be resolved for these infringers [21]. When someone is seen without a mask, the involved parties are informed and will take the proper action [22]. The World Health Organization (WHO) declared this a global epidemic. When there are so many images and videos, they will be monitored and classified when humans search for visual recognition systems. They include YOLO and Faster RCNN, which have been discovered to be used in military equipment. In many places, they will have enhanced security to watch for illegal activities, which will be captured using surveillance cameras to report the issue. Some self-driving objects, like selfdriving cars and manufactured automata, will use this object detection algorithm for every concept. The above two methods will be made clear by the writers: the Convolutional Neural Network methodology and the YOLO v3 search engine. Applications like facial tracking and the central role will be utilized to detect masks. It would deal with two session issues in which it will see whether their face was in the captured picture. The AdaBoost algorithm is used to construct robust classifiers compared to linear combinations, or the approach is a learning algorithm. The strength of this strategy is that it would be simple to implement, and it will detect the facial parts like the nose, mouth, eyes, etc., based on comparability value (Fig. 1).

2 Methodology CNN’s strong spatial feature abstraction capacity and cheap summing cost make it practical for computer vision-based design identification tasks [23]. This application typically uses a hard neural network ace [24]. The network is qualified to learn about the ideal kernel combination because of the situation. The Residual Network (ResNet) [25] was expected to pull far deeper neural networks by learning about uniqueness mapping from the preceding structure. Because entity indicators are typically organized on mobile devices using systems with limited capabilities, Mobile Network (MobileNet) [26] was anticipated.


Fig. 1 Preventive measures to counter the infection

2.1 Object Discovery Using CNN

Convolutional neural networks now also address substantially more complex needs. Researchers have proposed multi-image classifiers to solve problems such as RGBD-based waste object recognition and classification for nuclear decommissioning [27]. A simple detection procedure works as follows. First, take a window considerably smaller than the size of the image. Send the cropped region to the CNN so it can make a prediction. Slide the window further and send the new cropped regions to the CNN. After scanning the entire image with this window size, repeat each step with a bigger window. The CNN can then be used to predict outcomes for the clipped images at each scale. When everything is finished, you obtain a collection of cropped regions containing an object, along with the object's pixel location and subclass. The term "object recognition via sliding windows" refers to this method; a minimal sketch is given below.
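The following Python sketch illustrates the sliding-window procedure just described; the window sizes, stride, and the `classify` callback standing in for a trained CNN are assumptions for illustration.

```python
# A minimal sketch of sliding-window detection: crop windows at several
# scales and keep the ones the (assumed) CNN classifier accepts.
import numpy as np

def sliding_window_detect(image, classify, sizes=(64, 96, 128), stride=16):
    """Return (x, y, size, label, score) tuples for accepted windows."""
    detections = []
    h, w = image.shape[:2]
    for size in sizes:                         # repeat with a bigger window
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                crop = image[y:y + size, x:x + size]
                label, score = classify(crop)  # CNN prediction on the crop
                if label is not None:          # keep windows with an object
                    detections.append((x, y, size, label, score))
    return detections
```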


2.2 Faster R-CNN Algorithm

Faster R-CNN is the most famous contemporary iteration of the R-CNN family. These networks generally include (a) a region proposal technique to create bounding boxes for locations of potential objects in images; (b) a feature generation stage in which features of these objects are extracted, typically through a CNN; (c) a classification layer to decide which subclass each item belongs to; and (d) a regression layer to increase the precision of the bounding-box dimensions for the item. For region proposals, the earlier Fast R-CNN employs a compute-heavy selective search technique that takes around 2 s per image. To overcome this problem, the Region Proposal Network (R.P.N.) is utilized to create region proposals in Faster R-CNN [17]. This not only cuts the time for region proposals per image from about 2 s to 10 ms, but also allows the region proposal stage to share layers with the subsequent detection stages, improving the overall feature representation.
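For orientation, the snippet below runs an off-the-shelf Faster R-CNN detector from torchvision; this pretrained library model is used purely as an illustration of the pipeline and is not the network trained in this work.

```python
# A hedged sketch: pretrained Faster R-CNN inference with torchvision.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()                                # inference mode

image = torch.rand(3, 600, 800)             # a dummy RGB image in [0, 1]
with torch.no_grad():
    out = model([image])[0]                 # dict with boxes, labels, scores

print(out["boxes"].shape, out["scores"][:5])
```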

2.3 Region Proposal Network (R.P.N.)

The backbone fully convolutional network takes in the picture first and feeds its output into the R.P.N. The supplied image is initially resized to have a 600 px shortest side and at most a 1000 px longest side. The size of the output feature map differs significantly from that of the input image, depending on the backbone network's stride. At every position on the output feature map, the network can determine whether there is an object in the corresponding region of the input image and estimate its size. For each position on the output feature map, a set of "anchors" is established on the input image. These anchors serve as reference boxes of various sizes and aspect ratios. As the network advances, it evaluates whether the k corresponding anchors spanning the input image contain objects, and then refines their coordinates to offer bounding boxes as "object proposals" or regions of interest. A 512-d feature vector is produced for each site after the backbone feature map is first put through a 3 × 3 convolution with 512 units. Two sibling layers come after: an 18-unit 1 × 1 convolution for object classification and a 36-unit 1 × 1 convolution for bounding-box regression. The classification branch output has 18 elements per position, i.e., size (H, W, 18); this output is used to determine whether an object lies within each of the nine anchors present at each position of the backbone feature map (size: H, W). The regression branch output is of size (H, W, 36); the four regression values for each of the nine anchors in the backbone feature map are determined using this output. The coordinates of anchors containing objects are improved using these regression coefficients.
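The anchor layout described above can be sketched in a few lines; the stride, scales, and aspect ratios below are commonly used illustrative values, not necessarily those of this implementation.

```python
# A minimal sketch of placing k = 9 anchors (3 scales x 3 aspect ratios)
# at each feature-map position, mapped back onto the input image.
import itertools
import numpy as np

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for i, j in itertools.product(range(feat_h), range(feat_w)):
        cx, cy = (j + 0.5) * stride, (i + 0.5) * stride   # center on image
        for s, r in itertools.product(scales, ratios):
            w, h = s * np.sqrt(r), s / np.sqrt(r)         # area stays ~ s^2
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)     # shape: (feat_h * feat_w * 9, 4)

print(make_anchors(40, 60).shape)   # a 60x40 feature map -> 21600 anchors
```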


2.4 Object Exposure Using R.P.N. and Faster R-CNN

Faster R-CNN combines the R.P.N. with the Fast R-CNN detector. Earlier pipelines detect possible object locations by generating numerous rectangular boxes from regions of similar pixels and textures via selective search; the straightforward R-CNN approach is expensive because it runs the CNN roughly 2000 times per photo on the proposed locations, whereas Fast R-CNN creates a significant improvement since it simply transmits the actual photo through the CNN once before classification. In Faster R-CNN, the selective search method has been replaced with the R.P.N. A backbone CNN processes the input picture first to produce the output feature map (size: 60, 40, 512). The benefit of sharing layers between the R.P.N. and the Fast R-CNN detector backbone, in addition to test-time effectiveness, is a significant reason why it stands to reason to employ an R.P.N. as a proposal engine.

The backbone classifier features are then pooled using the bounding boxes coming from the R.P.N. This is handled by the layer that pools R.O.I.s (Regions of Interest): the R.O.I. pooling method selects the location on the backbone feature map corresponding to a proposal, splits it into a fixed set of sub-windows, and applies max pooling to produce a fixed-size outcome. With N signifying the total number of object proposals, the R.O.I. output has size (N, 7, 7, 512). The features are passed to the sibling regression and classification branches after going through two fully connected layers. These classification and regression branches differ from the R.P.N.'s: here, the classification branch has C outputs, one for each of the recognition task's classes (including a catch-all background class). The features are passed through a softmax layer to generate the probability that a proposal belongs to each category.

As the interpretations of both methods earlier in this section show, Faster R-CNN performs a significant amount of processing before passing the photograph to the detection head, whereas in YOLOv3 a single network handles both localization and classification. While YOLOv3 is, as anticipated, significantly faster than Faster R-CNN, the accuracy of Faster R-CNN is greater, as it can discern many fine characteristics. In this scenario, speed and accuracy must be traded off; however, in deployed models where disease surveillance is essential, accurate detection takes precedence, and the throughput difference is minor.
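As a hedged sketch of single-shot YOLOv3 inference, the snippet below uses OpenCV's dnn module with the publicly released Darknet configuration and weights; the local file paths and thresholds are assumptions.

```python
# A minimal sketch of YOLOv3 inference via OpenCV's dnn module.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()     # YOLO output layers

image = cv2.imread("input.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)   # one array per YOLO output scale

h, w = image.shape[:2]
for out in outputs:
    for det in out:                  # det = [cx, cy, bw, bh, obj, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(det[4] * scores[class_id])
        if conf > 0.5:               # assumed confidence threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            print(class_id, conf, (cx - bw / 2, cy - bh / 2, bw, bh))
```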

2.5 Image Preprocessing

In this milestone, we improve the image data by suppressing unwanted distortions and enhancing the image features essential for further processing, while also performing some geometric transformations of images, such as rotation, scaling, and translation.

2.5.1 Loading and Preprocessing the Data

We used the Keras ImageDataGenerator module to augment the data. The term "augment" refers to making something greater or increasing something (in this case, data). The Keras ImageDataGenerator class works by accepting a batch of photos for training, taking this sample, randomly applying a variety of adjustments to each photograph in the set (including random rotation, resizing, shearing, etc.), replacing the original collection with the new, randomly modified batch, and training the CNN on it.

2.5.2 Import the ImageDataGenerator Library

The ImageDataGenerator class in the Keras deep learning library allows you to fit models using image augmentation. Let us import the Keras ImageDataGenerator class.
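The import itself is a one-liner; the tf.keras path shown below is the standard location of the class.

```python
# Standard import path for the class in tf.keras:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
```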

2.5.3 Configure ImageDataGenerator Class

The ImageDataGenerator object is created, and the data preprocessing kinds are specified. For picture data, there are five basic types of data augmentation techniques:

Step 1: The width_shift_range and height_shift_range variables are used to shift the image.
Step 2: The horizontal_flip and vertical_flip variables are used to flip the image.
Step 3: The rotation_range option is used to rotate images.
Step 4: The brightness_range option specifies the range of image brightness.
Step 5: The zoom_range option is used to zoom in on an image.

An instance of the ImageDataGenerator class can be created for training and testing, as sketched below.
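A minimal sketch of such a configuration, covering the five augmentation types in Steps 1–5, follows; the specific numeric ranges are illustrative assumptions, not values from the paper.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,             # normalize pixel values to [0, 1]
    width_shift_range=0.1,         # Step 1: horizontal shift
    height_shift_range=0.1,        # Step 1: vertical shift
    horizontal_flip=True,          # Step 2: random horizontal flip
    rotation_range=15,             # Step 3: random rotation (degrees)
    brightness_range=(0.8, 1.2),   # Step 4: brightness variation
    zoom_range=0.2,                # Step 5: random zoom
)
test_datagen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation for testing
```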

2.5.4 Apply ImageDataGenerator Functionality to Trainset and Testset

Let us use code to apply the ImageDataGenerator capability to the training set and test set. The flow_from_directory function is used for the training set. This method will return batches of photos from the subdirectories 'Mask' and 'WithoutMask', with labels 'Mask': 0 and 'WithoutMask': 1.


Arguments:
- directory: The directory in which the data is stored. If labels are "inferred," it should contain subfolders with photos for every category; otherwise, the directory structure is disregarded.
- batch_size: The size of the data batches. The default is 32.
- target_size: The size to which images are resized after being read from disk.
- class_mode: 'int' indicates that the labels are encoded as integers (for example, for sparse categorical cross-entropy loss); 'categorical' denotes that the labels are encoded as a categorical vector (for example, for categorical cross-entropy).
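A hedged sketch of wiring the generators to the 'Mask'/'WithoutMask' folder layout described above follows; the folder paths and image size are assumptions.

```python
# Continue from the train_datagen/test_datagen instances created earlier.
train_generator = train_datagen.flow_from_directory(
    "dataset/train",           # assumed location of the training folder
    target_size=(150, 150),    # resize images read from disk
    batch_size=32,
    class_mode="binary",       # two classes: Mask -> 0, WithoutMask -> 1
)
test_generator = test_datagen.flow_from_directory(
    "dataset/test",            # assumed location of the test folder
    target_size=(150, 150),
    batch_size=32,
    class_mode="binary",
)
```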

2.6 Architecture Diagram

In this flowchart, we can see that the webcam is used to detect the face using OpenCV, producing a blob that is used for blob detection. Blob detection modules are designed to detect points in a captured image that differ from their surroundings in properties such as brightness or color. Considerable time has been spent studying and developing blob detectors. Blob detection provides complementary information about regions that cannot be acquired from edge or corner detectors, and this data is used for further processing. For mask-detection classification, the imported dataset was used. The collection is nearly 328.92 MB and comprises almost 12 thousand pictures, organized into training, testing, and validation folders (Fig. 2).

Fig. 2 Architecture diagram


Fig. 3 Facial mask recognition architecture diagram

production executes correctly against the sample output. It sends the data to the updated model. It is also known as the “controller helper method,” which helps bind different input data sources to the direct model object you indicate as a parameter (Fig. 3). In training mode, we can see that there will be a live video session. Then, it captures the images separately, and finally, it verifies with the training model to know whether a person wears a mask.

2.7 Set of Data

The dataset is a collection of instances used to build classifiers. It can be made by scraping information from the internet or by visiting multiple web pages [28].

3 Results and Discussion

In this paper, we aimed to review the details chosen for execution with advanced technology, such as using YOLOv3 and R-CNN. The labels use a flat vector whose elements are taken from a limited set of values or levels, where "binary" means that the labels are encoded as float32 scalars with values of 0 or 1. Data augmentation is also used: images are shifted via the width_shift_range and height_shift_range arguments; flipped via the horizontal_flip and vertical_flip arguments; rotated via the rotation_range argument; image brightness is varied via the brightness_range argument; and image zoom via the


zoom_range argument. Furthermore, many datasets are available, and the RMFD dataset is widely used among them. The bounding-box detection of masked faces can be handled with a sound deep learning strategy; Faster R-CNN and YOLOv3 were the two well-known methods used in the thorough trials. To monitor public places, surveillance cameras are used. YOLOv3 detects in a single shot at a higher frame rate than the Faster R-CNN algorithm; Faster R-CNN should be preferred if high-end graphics hardware is provided on the deployment device, while YOLOv3 can still be used on mobile devices as well (Figs. 4 and 5).

The visible outcomes of actual face mask detection demonstrate facial images and determine whether an individual is wearing a mask. In the case of a red bounding box, the person is not wearing a mask (Figs. 6, 7 and 8). In the case of a green-colored bounding box, the person is wearing a mask (Fig. 8).

Fig. 4 Graph for the production of face mask detection

Fig. 5 Data model


Fig. 6 No mask

Fig. 7 Not wearing mask properly

Fig. 8 Perfect results

4 Conclusion

This work aims to provide a full assessment of the numerous approaches that could be used to implement such an intricate technology. After reviewing all of the implementation strategies, it is safe to say that deep learning has recently gained popularity among scientists, and the speed of these approaches makes them suitable for such tasks. Masks are now a familiar sight among individuals around the globe as a result of the coronavirus. Furthermore, despite the availability of numerous resources, the RMFD dataset is commonly used. The model's adoption in public spaces could be helpful if handled constructively. The present scheme could be improved in future projects by incorporating automatic temperature-detecting technologies. The system might also include a check on whether social distancing is being observed in congested locations. For biometric purposes, a face landmark detection capability could be incorporated. Furthermore, machine learning techniques could be used to explore innovative feature extraction techniques, modified to obtain better results more quickly.

References

1. Zhang R, Li Y, Zhang AL, Wang Y, Molina MJ (2020) Identifying airborne transmission as the dominant route for the spread of COVID-19. Proc Natl Acad Sci 117(26):14857–14863
2. Interim infection prevention and control recommendations for patients with suspected or confirmed Coronavirus Disease 2019 (COVID-19) in healthcare settings (2021). https://www.cdc.gov/coronavirus/2019-ncov/hcp/infection-control-recommendations.html


3. Jagadeeswari C, Uday Theja M (2020) Performance evaluation of intelligent face mask detection system with various deep learning classifiers
4. Pandiyan P (2020) Social distance monitoring and face mask detection using deep neural network
5. Henderi, Rafika AS, Warnars HLH, Saputra MA (2020) An application of mask detector for prevent Covid-19 in public services area. J Phys Conf Ser
6. Lynteris C (2018) Plague masks: the visual emergence of anti-epidemic personal protection equipment. Med Anthropol 37(6):442–457
7. Basha Z, Pravallika BNL, Shankar EB (2021) An efficient face mask detector with PyTorch and deep learning. E.A.I. Endorsed Trans Pervasive Health Technol
8. Susanto S, Putra FA, Analia R, Suciningtyas IKLN (2020) The face mask detection for preventing the spread of COVID-19. In: International conference on applied engineering (ICAE), pp 1–5
9. Bhadani AK, Sinha A (2020) A facemask detector using machine learning and image processing techniques. Eng Sci Technol Int J
10. Kong X, Wang K, Wang S, Wang X, Jiang X, Guo Y, Shen G, Chen X, Ni Q. Real-time mask identification for COVID-19: an edge computing-based deep learning framework. IEEE Internet Things J
11. Pooja S, Preeti S. Face mask detection using A.I. In: Khosla PK, Mittal M, Sharma D, Goyal LM (eds) Predictive and preventive measures for Covid-19 pandemic. Algorithms for intelligent systems. Springer, pp 293–305
12. Qin B, Li D (2020) Identifying facemask-wearing condition using image super-resolution with classification network to prevent COVID-19. Sensors 20(18):5236
13. Yadav S (2020) Deep learning-based safe social distancing and face mask detection in public areas for COVID-19 safety guidelines adherence. Int J Res Appl Sci Eng Technol
14. Gopirajan PV, Sivaranjani M, Parkavi K, Kumar VMN (2022) Machine learning based prediction of COVID-19: a study on Italy's pandemic problems. In: 2022 8th international conference on smart structures and systems (ICSSS), pp 1–5. https://doi.org/10.1109/ICSSS54381.2022.9782221
15. Sivaranjani M, Gopirajan PV, Gowdham C, Abitha A, Ravindhar NV (2022) Computational data analysis, prediction and forecast of health disaster: a machine learning approach. In: 2022 8th international conference on smart structures and systems (ICSSS), pp 1–5. https://doi.org/10.1109/ICSSS54381.2022.9782289
16. Anandakumar H, Umamaheswari K (2018) A bio-inspired swarm intelligence technique for social aware cognitive radio handovers. Comput Electr Eng 71:925–937
17. Arulmurugan R, Anandakumar H (2018) Early detection of lung cancer using wavelet feature descriptor and feed forward back propagation neural networks classifier. In: Lecture notes in computational vision and biomechanics, pp 103–110
18. Savvides M, Heo J, Abiantun R, Xie C, Kumar BV. Class dependent kernel discrete cosine transform features for enhanced holistic face recognition in FRGC-II, vol 2
19. Loey M, Manogaran G, Taha MHN, Khalifa NEM (2020) Fighting against COVID-19: a novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection
20. Bu W, Xiao J, Zhou C, Yang M, Peng C (2017) A cascade framework for masked face detection
21. Chavda A, Dsouza J, Badgujar S, Damani A (2020) Multi-stage CNN architecture for face mask detection
22. Wang Z, Wang G, Huang B, Xiong Z, Hong Q, Wu H, Yi P, Jiang K, Wang N, Pei Y, Chen H, Miao Y, Huang Z, Liang J (2020) Masked face recognition dataset and application
23. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) ImageNet: a large-scale hierarchical image database. In: Proceedings of IEEE conference on computer vision and pattern recognition, pp 248–255
24. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: Proceedings of IEEE conference on computer vision and pattern recognition, pp 1–9

Actual Facial Mask Recognition Utilizing YOLOv3 and Regions …

93

25. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of IEEE conference on computer vision and pattern recognition, pp 770–778 26. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv 27. Sun L, Zhao C, Yan Z, Liu P, Duckett T, Stolkin R (2019) A novel weakly-supervised approach for RGBD-based nuclear waste object detection. IEEE Sens J 19:3487–3500 28. Wang Z, Wang G, Huang B, Xiong Z, Hong Q, Wu H, Yi P, Jiang K, Wang N, Pei Y, Chen H, Yu M, Huang Z, Liang J (2020) Masked face recognition dataset and application

Real-Time Smart System for Marking Attendance that Uses Image Processing in a SaaS Cloud Environment G. Nalinipriya, Akilla Venkata Seshasai, G. V. Srikanth, E. Saran, and Mohammed Zaid

Abstract A facial recognition system is a piece of technology that can match a human face against a database of images; it is often used for robust authentication. An Attendance Management System is a tool used by organizations to track employee time and participation information, and Software as a Service (SaaS) allows users to access software over the Internet as a service. The manual technique of taking attendance currently in use is cumbersome and difficult to maintain, so advanced and automated attendance management technologies are being adopted to address this issue. The proposed architecture easily resolves the problem of proxies, where students are counted as present even when they are not, by recording attendance through a live video broadcast. The objective of this system is to develop a facial recognition-based class attendance system. The system consists of four stages: creation of the database, face detection, face recognition, and attendance updating. Faces are recognized in the live-streamed video from the classroom, and the attendance is recorded in a spreadsheet and saved to Google Drive at the end of the session.

Keywords Face recognition · Attendance management system · Haar cascade · Histogram-oriented gradient · Software as a service (SaaS) · Cloud environment

1 Introduction

Face recognition is one of the few biometric methods that offers the benefits of accuracy and low intrusiveness. Since the 1970s, face recognition has piqued the interest of academics working in a range of fields, such as security, image processing, and computer vision, and it is seen as useful in multimodal information processing sectors. Faculty personnel have traditionally been given attendance registers to manually record students' attendance in the classroom. However, it takes

G. Nalinipriya (B) · A. Venkata Seshasai · G. V. Srikanth · E. Saran · M. Zaid Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_9


a long time, and in a big classroom setting it can be quite difficult to check each student's attendance individually. The traditional method has concerns with consistency and time constraints, whereas the proposed system is trustworthy and efficient for tracking student attendance in class. The suggested system demonstrates the use of facial recognition for automatically recording student attendance, saving faces in databases, and retrieving the list of students who are absent. It determines whether any face image in a database corresponds to the face image of a specific individual. This problem is challenging to solve automatically because of the alterations that various factors, such as facial expression, aging, and even lighting, can make to the image. Facial recognition provides a variety of benefits even though it is less reliable than some other biometric techniques, and the approach has proven useful in a number of fields, including forensic medicine, police controls, security and access control, and Attendance Management Systems. The various techniques used to track attendance include face recognition, iris recognition, RFID-based systems, and fingerprint-based systems. Among these, face recognition is simple to apply and does not need the test subject's cooperation.

2 Related Works

We reviewed several papers on attendance marking systems and observed the following.

1. Aadhaar-based biometric attendance system using wireless fingerprint terminals, published by Dhanalakshmi et al. Two distinct methods are suggested to validate the collected fingerprint during the verification phase: the first uses the organization's own database, whereas the second uses the Aadhaar Central Identification Repository (CIDR). Student attendance records are recorded and updated in the device database using wireless fingerprint terminals and then transferred to the server database. In the event of a student's irregularity, absence, or insufficient attendance, SMS alerts are issued to both the student and their parents. Limitation: Aadhaar data may not be accessible, and fingerprint-based systems have their own limitations.

2. A biometric and radio-frequency identification (RFID)-based web-enabled, secure system for tracking real-time location and attendance. A method to validate attendance using RFID has been suggested by Srinidhi et al. Attendance records of students as well as teachers and staff members may be kept up to date using the system, which is also capable of locating students, faculty, and other staff members [1, 2] wherever they may be on the university campus. With an Android application, one may keep up with the most recent course subjects, receive live feeds of various campus activities, and monitor one's friends in real time. The system offers an automated SMS service that lets parents know that


their child has successfully entered college by sending an SMS on their behalf. If a student falls behind in attendance, both the parents and the student receive an email notification. The system has an automated attendance performance graph feature that provides insight into a student's consistency in attendance over the course of the semester.

3. Real-time online attendance system using the smartphone's GPS and fingerprint. According to Kamelia et al., the objective of the study is to develop a GPS- and fingerprint-based system for tracking online presence. The ZFM-20 fingerprint module provides the primary input to the system, a security tool, and a way to gain access to the whole system. The GPS module is used to locate the user and relay that information to the smartphone. The system's Arduino module automatically texts the parties concerned with the user's location information. Limitation: the fingerprint-based system has drawbacks of its own.

4. Student attendance system design and implementation using iris biometric recognition. The iris of the human eye is employed as a biometric, as Kennedy et al. demonstrate. The proposed system automatically [2–4] took class attendance by collecting each attendee's eye image, detecting their individual iris, and looking for a match in the built-in database, after enrolling all attendees by saving their personal information along with their distinctive iris template. This system's drawback is that it is not economical.

5. Creation of an online class attendance register system with biometric capabilities. According to research by Nomnga Phumzile et al., the system was created to use fingerprint technology to track students' attendance at both lectures and examinations. The system allows administrators and lecturers to access and edit a student's attendance data via a web browser, allowing for online management of the student's attendance records. Limitation: the system is fingerprint-based, which has drawbacks of its own.

3 System Architecture

The proposed system works in phases. In Phase 1, a photo is taken and every face in it is identified. In Phase 2, the system concentrates on one face at a time and must recognize that, even if it is poorly lit or turned oddly, the face still belongs to the same person. In Phase 3, it identifies the distinctive characteristics of the face that set it apart from other people's faces; these traits include things like eye and nose size, face length, skin tone, and more.


Fig. 1 Proposed system architecture

In Phase 4, the person's name is determined by comparing the distinctive features of the face with those of all the people already known to the system. The human brain performs each of these activities instantaneously and automatically; since computers cannot make such high-level generalizations, each step must be explicitly taught or programmed. In Phase 5, the system handles each step of face recognition separately. Face recognition systems fall into two categories: verification and identification. Facial verification is a 1:1 match process in which a claimed person's face image is compared to a template face image; in contrast, the face identification problem compares a query face image against many enrolled faces (Fig. 1).

3.1 Proposed Algorithm

Facial recognition actually involves a number of interrelated problems. First, the system looks at a picture and finds each and every face in it. Next, it pays attention to each face in turn: it must be able to spot a face even when it is turned oddly, so that the same person is recognized regardless of illumination or angle. Third, it must recognize the distinguishing facial features that set one individual apart from other persons. Then, by comparing the distinguishing characteristics of that face with all the people already known, the person's name can be discovered. Each of these steps must be programmed explicitly, since computers are unable to generalize at a high level.


3.2 Facial Recognition

Facial recognition software compares and analyzes patterns based on an individual's facial characteristics in order to authenticate or uniquely identify that person. Facial recognition is used most frequently in the field of security; nevertheless, interest in its application in other fields is growing. Identifying faces can be accomplished using a variety of methods, such as the generalized matching face detection approach and the adaptive regional blend matching method. The vast majority of facial recognition [5–7] systems function by analyzing the many nodal points present on a human face; the unique identification or validation of a person is accomplished by comparing the measured values to those stored for that individual's facial points. New approaches, such as 3D modeling, are helping to accelerate the development of facial recognition technology while also addressing flaws in more conventional approaches.

3.3 Deep Learning

Deep learning gets its name from the fact that it involves numerous layers of neural networks, each of which helps to facilitate learning. A deep learning algorithm acquires knowledge through experience, much as humans do. It can execute calculations and learn from substantial amounts of data, making it suited to difficult artificial intelligence tasks. Deep learning methods are used in demanding fields such as computer vision and audio processing, where large amounts of data are available.

3.4 Comparison Between Proposed and Existing Systems

The face recognition system uses a CNN. The normalized face picture [7–9] is fed into the CNN model through a single feature map of the input layer, and this layer also produces six feature maps. During our research, we observed that the HOG-based face detector found in dlib is unable to recognize faces when they are positioned at an unexpected angle; as a frontal face detector it is fine, but one should not raise expectations too high or risk disappointment. The CNN-based detector, in contrast, can almost always find faces, detecting them to a significant degree even when they are not presented frontally. Unfortunately, it is inappropriate for real-time video: it is designed to run on a graphics processing unit (GPU), and a powerful Nvidia GPU may be needed just to match the speed of the HOG-based detector. To give a notion of the execution time, HOG takes approximately 0.2 s to process a 620


× 420 image, whereas the CNN [7] takes approximately 3.3 s for an image of the same dimensions (Intel i5 dual-core, 1.8 GHz). At the moment, the CNN-based detector presents a computational cost that prevents it from being used for real-time video.
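To make the comparison concrete, here is a minimal sketch using dlib's two detectors; it assumes dlib is installed and that the publicly released mmod_human_face_detector.dat weights file is available locally, and the image path is a placeholder:

```python
import time
import dlib

# HOG-based frontal face detector: fast on a CPU, mostly frontal faces
hog_detector = dlib.get_frontal_face_detector()

# CNN-based detector: handles odd poses, but slow without a GPU; needs the
# pretrained mmod_human_face_detector.dat weights file
cnn_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

image = dlib.load_rgb_image("classroom_frame.jpg")  # placeholder path

start = time.time()
hog_faces = hog_detector(image, 1)  # upsample once to catch smaller faces
print("HOG: %d faces in %.2f s" % (len(hog_faces), time.time() - start))

start = time.time()
cnn_faces = cnn_detector(image, 1)  # each hit carries a .rect and a .confidence
print("CNN: %d faces in %.2f s" % (len(cnn_faces), time.time() - start))
```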

4 Implementation

We used Python and OpenCV as implementation tools, along with projecting and encoding of faces. We also created a dataset, trained it, and tested it accordingly.

4.1 Python

Programming on both a large and a small scale is possible with Python. Its features include dynamic typing and automatic memory management. Python allows users to write code in a variety of different styles, including procedural, imperative, functional, and object-oriented, and it ships with an extensive standard library.

4.2 OpenCV

OpenCV has the advantage of being a cross-platform framework. Because of its many capabilities, it may initially appear intimidating, and a thorough understanding of how its methods work is essential to using it effectively; this is the secret ingredient. Thankfully, only a minimal amount of background is required to get started. The facial recognition capability used here is spread across several OpenCV modules.
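As a small illustration of the OpenCV side (using the Haar cascade named in the keywords), the following sketch detects faces in a single image; the image paths are placeholders:

```python
import cv2

# Haar cascade bundled with OpenCV; cv2.data.haarcascades is its install dir
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("student.jpg")               # placeholder input path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

# scaleFactor and minNeighbors trade recall against false positives
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", frame)              # placeholder output path
```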

4.3 Finding Faces—HOG

Face detection in photos is done using the histogram of oriented gradients (HOG) (Fig. 3). As the first step in the process, the image is converted from color to grayscale. For every pixel, an arrow (a gradient) is drawn indicating the direction in which the image gets darker, determined by comparing that pixel with the pixels surrounding it; repeated for every pixel, this reveals the gradient field of the image. Because keeping the gradient for every single pixel would give an excessive amount of detail, the image is first divided into squares with a resolution of 16 by 16 pixels, and the number of gradients pointing in each major direction is then counted within each square. If the facial characteristics and hairstyle of an unregistered individual are extremely similar to those of someone whose photo is already enrolled, there is a chance that the unregistered person will be recognized with up to 86% accuracy, albeit inconsistently.
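The per-cell gradient histograms described above can be reproduced with scikit-image's hog helper; this is a sketch under that assumption (the image path is a placeholder), not the exact routine used by dlib:

```python
import cv2
from skimage.feature import hog

image = cv2.imread("face.jpg")                   # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # step 1: color to grayscale

# Summarize pixel-level gradients into per-cell orientation histograms,
# using the 16 x 16 pixel squares described in the text
features, hog_image = hog(
    gray,
    orientations=9,            # number of major gradient directions counted
    pixels_per_cell=(16, 16),
    cells_per_block=(1, 1),
    visualize=True)            # also return an image of the dominant gradients

print("HOG feature vector length:", features.shape[0])
```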

4.4 Projecting and Posing Faces

After isolating the faces from the image's backdrop, we must correct the orientation of each face. The key concept is that 68 specific points can be detected on every face; examples of these points include the tip of the chin, the outer edge of each eye, and the inner edge of each eyebrow. These 68 points are determined using a face landmark analysis [10]. We then train a machine learning system; once it has learned to locate these 68 specific points on any face, the identification of individual faces in the attendance system can begin.
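A sketch of this landmark step, assuming dlib and its published 68-point model file shape_predictor_68_face_landmarks.dat (the image path is a placeholder):

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point landmark model distributed alongside dlib's examples
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = dlib.load_rgb_image("student.jpg")  # placeholder path
for face in detector(image, 1):
    shape = predictor(image, face)          # 68 (x, y) landmark points
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    chin_tip = points[8]                    # index 8 is the tip of the chin
    print("Found %d landmarks, chin at %s" % (len(points), chin_tip))
```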

4.5 Encoding Faces

We now approach the issue of distinguishing between different faces. If we directly compared the face obtained in the previous step with every image in our database of people, the problem could be solved with relative ease, but a few key measurements must first be extracted from each face. Face encoding builds a representation of a person's face from a collection of 128 computer-generated measurements. Two images of the same person yield similar encodings, while images of two different persons yield completely different encodings. The 68 landmark points determined earlier make up the fundamental measurements (Fig. 2); from this point forward, these 68 points are converted into 128 measurements, which will


then be compared with the 128 measurements already stored for each database image (Fig. 2).

Fig. 2 Block diagram
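To make the encode-and-compare step concrete, here is a minimal sketch using the open-source face_recognition library; the file names are placeholders and each image is assumed to contain one face:

```python
import face_recognition

# 128-measurement encodings for a known student and for a live frame
known_image = face_recognition.load_image_file("database/alice.jpg")  # placeholder
query_image = face_recognition.load_image_file("camera_frame.jpg")    # placeholder

known_encoding = face_recognition.face_encodings(known_image)[0]
query_encodings = face_recognition.face_encodings(query_image)

for encoding in query_encodings:
    # True when the two 128-d vectors are closer than the default tolerance (0.6)
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print("match:", match, "distance: %.3f" % distance)
```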

5 Stepwise Procedure

This section demonstrates how the overall process takes place: how the dataset is created, how it is trained, and what testing is performed.

5.1 Create Dataset

The photographs of the students' faces, taken with either mobile devices or webcams, are stored in a directory. Because picture quality can cause differences in the accuracy percentage, it is recommended that the images be taken at high resolution and under stable lighting.
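One way to build such a directory from a webcam, sketched with OpenCV; the directory layout, roll number, and frame count are illustrative choices rather than the authors' exact procedure:

```python
import os
import cv2

student_id = "21IT042"                    # hypothetical roll number
out_dir = os.path.join("dataset", student_id)
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(0)                 # default webcam
saved = 0
while saved < 20:                         # several shots per student help accuracy
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, "%02d.jpg" % saved), frame)
    saved += 1
cap.release()
print("Saved %d images to %s" % (saved, out_dir))
```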


Fig. 3 Attendance list with image

5.2 Train Dataset

Training datasets are fed into machine learning algorithms so that the algorithms learn to execute a specific task or generate predictions. The faces captured in the previous step are trained to generate a .npz file, which is saved in the same location as the other files. Multiple images of each student should be taken in order to increase the proportion of correct identifications. Most methods that look for empirical relationships in training data tend to overfit: they find and exploit apparent relationships in the training data that do not hold in general.
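A sketch of how the captured faces could be reduced to encodings and written out as the .npz file mentioned above; it assumes the hypothetical one-folder-per-student layout from Sect. 5.1 and the face_recognition library:

```python
import os
import numpy as np
import face_recognition

encodings, labels = [], []
for student in os.listdir("dataset"):            # one sub-folder per student
    folder = os.path.join("dataset", student)
    for name in os.listdir(folder):
        image = face_recognition.load_image_file(os.path.join(folder, name))
        found = face_recognition.face_encodings(image)
        if found:                                # skip frames with no face
            encodings.append(found[0])
            labels.append(student)

# Persist the trained encodings next to the other files, as in the text
np.savez("trained_faces.npz",
         encodings=np.array(encodings),
         labels=np.array(labels))
print("Stored %d encodings for %d students" % (len(labels), len(set(labels))))
```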

5.3 Testing

See Figs. 3, 4 and 5.


Fig. 4 Face detected

5.4 Comparison Representation

The bar chart above shows the execution times: for a 620 × 420 image, the CNN-based detector executes in about 3.3 s compared to HOG's roughly 0.2 s. For real-time video, a CNN-based detector is therefore currently computationally expensive and ineffective.


Fig. 5 Code sample

6 Results and Discussion

Comparing a person's measured values to those associated with their facial points helps to uniquely identify or validate the person, and the attendance management system was created to make it easier for administrators to keep track of their students. Management can save both time and money with automated attendance management software; such a system also reduces staff workload and increases efficiency.

7 Advantages

Maintaining steady attendance records becomes straightforward. This system reduces the amount of paperwork required while maintaining accuracy and being entirely automated. It is dependable and easy to operate, in addition to being a user-friendly technology


requiring only a small amount of human intervention. This strategy keeps the time needed to a minimum and provides a paperless attendance method that lowers individual error, prevents proxy marking, and reduces paper waste. Attendance records may be retrieved instantly and with minimal effort from a cloud-based database, which eliminates the requirement for any externally powered hardware components. It is extremely safe, with access granted only to staff members and vetted users.

8 Conclusions and Future Work

The proposed solution represents a groundbreaking advancement for Attendance Management Systems in numerous commercial as well as educational organizations. Face recognition systems are increasingly reliable, accurate, and cost-effective. Our solution will assist organizational personnel in many computational duties and procedures linked to student attendance, as well as in the simple and automatic marking of student attendance. It will eliminate intermediaries and avoid wasted time. With the "Face Recognition based Attendance Management System", which also lessens human error and other errors in computation and other operations, a person will save time, energy, and effort.

References

1. Dhanalakshmi N, Kumar SG, Sai YP (2017) Aadhaar based biometric attendance system using wireless fingerprint terminals. In: 2017 IEEE 7th international advance computing conference (IACC), pp 651–655. https://doi.org/10.1109/IACC.2017.0137
2. Srinidhi MB, Roy R (2015) A web enabled secured system for attendance monitoring and real time location tracking using biometric and radio frequency identification (RFID) technology. In: 2015 international conference on computer communication and informatics (ICCCI), pp 1–5. https://doi.org/10.1109/ICCCI.2015.7218103
3. Kamelia L, Hamidi EAD, Darmalaksana W, Nugraha A (2018) Real-time online attendance system based on fingerprint and GPS in the smartphone. In: 2018 4th international conference on wireless and telematics (ICWT), pp 1–4. https://doi.org/10.1109/ICWT.2018.8527837
4. Adeniji VO, Scott MS, Phumzile N (2016) Development of an online biometric-enabled class attendance register system. In: Cunningham P, Cunningham M (eds) IST-Africa 2016 conference proceedings, IIMC International Information Management Corporation. ISBN: 978-1-90582455-7
5. Matilda S, Shahin K (2019) Student attendance monitoring system using image processing. In: 2019 IEEE international conference on system, computation, automation and networking (ICSCAN), pp 1–4. https://doi.org/10.1109/ICSCAN.2019.8878806
6. Shanmugamani R, Abdul Rahman AG, Moore SM, Koganti N (2018) Deep learning for computer vision: expert techniques to train advanced neural networks using TensorFlow and Keras. Packt Publishing, Birmingham


7. Yang H, Han X (2020) Face recognition attendance system based on real-time video processing. IEEE Access 8:159143–159150. https://doi.org/10.1109/ACCESS.2020.3007205
8. Lin Z-H, Li Y-Z (2019) Design and implementation of classroom attendance system based on video face recognition. In: 2019 international conference on intelligent transportation, big data & smart city (ICITBS), pp 385–388. https://doi.org/10.1109/ICITBS.2019.00101
9. Chollet F (2017) Deep learning with Python. Manning Publications, Greenwich, CT, USA; Rosebrock A (2017) Deep learning for computer vision with Python: starter bundle. PyImageSearch
10. Bhattacharya S, Nainala GS, Das P, Routray A (2018) Smart attendance monitoring system (SAMS): a face recognition based attendance system for classroom environment. In: 2018 IEEE 18th international conference on advanced learning technologies (ICALT), pp 358–360. https://doi.org/10.1109/ICALT.2018.00090

Design of Automatic Clearance System for Emergency Transport and Abused Vehicle Recognition M. Malathi, P. Sinthia, M. R. Karpaga Priya, S. Kavitha, and K. Suresh Kumar

Abstract Nowadays, many people suffer due to heavy traffic. With heavy crowds, it is difficult to regulate traffic even in the presence of a traffic signal. Hence, this article aims to overcome traffic congestion, provide emergency vehicle clearance, and help detect stolen vehicles. We designed a smart Traffic Control System (STCS); this system of control will reduce time taken, stress, and related problems, as traffic overcrowding has become one of the foremost challenges in modern India. The intelligent system uses infrared sensors to calculate the amount of traffic present during a given time period and accordingly programs the traffic lights to switch between green and red signals; this allows traffic control based on density. Every emergency vehicle, such as an ambulance or fire service vehicle, is equipped with a distinctive RFID tag, and the reader is situated at the signals. When the tag comes into the range of the reader, the signal changes to green automatically. The subsequent measure is the recognition of stolen vehicles, where the operation is the reverse: the signal goes to red, and information about the vehicle is sent to the police by a GSM SIM800A modem, which facilitates seizing the vehicle. A buzzer rings aloud to indicate the presence of a stolen vehicle. This reduces the need for traffic policemen at junctions.

M. Malathi (B) Department of Electronics and Communication Engineering, Rajalakshmi Institute of Technology, Chennai, Tamil Nadu, India e-mail: [email protected] P. Sinthia Department of Biomedical Engineering, Saveetha Engineering College (Autonomous), Chennai, Tamil Nadu, India M. R. K. Priya · S. Kavitha Department of Electrical and Electronics Engineering, Saveetha Engineering College (Autonomous), Chennai, Tamil Nadu, India K. S. Kumar Department of Information Technology, Saveetha Engineering College (Autonomous), Chennai, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_10


Keywords IR sensors · RFID tag · Buzzer · GSM modem · Transportation

1 Introduction

India is one of the developing countries of the world. It is characterized by immense natural wealth, but at the same time its cities suffer from a dreadful crisis: traffic congestion and stoppage problems have become prominent in recent years [1]. Traffic congestion refers to a situation where vehicles experience reduced speeds, longer travel times, and increased queuing on the roads. The number of vehicles on the road keeps accumulating as the population rises. In India we have everything on paper, good plans and good laws, but implementing them is tedious. Snarling traffic jams arise from the chaotic and non-uniform nature of the traffic. Every junction in our cities needs continual monitoring, and our disregard for traffic rules makes our roads dependent on traffic personnel [2]; if there is no officer, the junction is sure to collapse. Being stuck in traffic is one of the most irritating experiences a driver faces. In the era of IoT and smart cities, we propose a Traffic Clearance System that can control traffic flow based on actual circumstances. India has one of the largest road networks in the world, stretching about 5.4 million km, of which national highways cover about 97,991 km. Owing to this sheer magnitude, providing first-class roads has been a challenging issue for our government, and the cost borne by the government for maintaining these roads every year is around Rs. 20,000–30,000. The main reasons are the increasing vehicle ownership among citizens and the overloading of roads in the nation's main cities [3]. People's daily struggle, their efforts to avoid traffic, the resulting pollution, and rash driving have become major causes of chronic stress and many psychological problems. About 30 min to two hours of each day is spent in traffic, which totals almost 360 h a year. In the coming 10–12 years, India will need about 15,000 km of new expressways. To achieve this target, the National Highways Authority of India (NHAI) and the local establishments have to work really hard, and this may not be possible if the inhabitants of the nation continuously violate highway and traffic rules. The citizens of the country and the government machinery must work in tandem to improve both the traffic and the lives of the population.

2 Related Work

Sundar et al. [4] proposed an intelligent control system to pass emergency vehicles safely. Every vehicle is equipped with RFID; the proposed work uses an RFID reader (NSK-EDK-125 TTL) and a PIC 16F877A system. The structure helps to identify the number of vehicles on a particular path over a certain duration. The


implemented method helps to identify network congestion and also communicates with policemen; when a stolen vehicle is found, a GSM SIM 300 modem is used to inform the police control room, and Zigbee CC 2500 with PIC 16F877A is used to communicate the traffic congestion between the ambulance and the traffic controller. Developing control from MPC [5] for real-time traffic was time consuming; the number of vehicles at a signal was measured by detectors installed at the intersections. Cross-road intersections and roundabout intersections were handled by V2V and V2I communication [6]. Vehicles at a signal differ in their dimensions, which makes the agent settings difficult. A model-free adaptive control based on traffic prediction and a phase-split strategy was used for urban traffic [7]; it is a multi-input, multi-output system with time-varying input. The growth of the middle class has resulted in an accumulation of vehicles on the road, and the problems the public faces due to traffic have been studied in real time [8]. GPS and RFID tags are used for automatic lane clearance for emergency vehicles [9]; this helps vehicles such as ambulances and fire services reach their destinations on time, but the method fails if the data about the starting point and end point are not properly known. Many cities in India now use video surveillance, which involves manual analysis of data from a control room: the real-time traffic is measured by cameras, and the information is passed on to the traffic police at the junction [10]. Another proposed method uses leading-edge technology to switch the traffic signals and catch law violators. A device furnished with automatic number plate recognition (ANPR) technology helps to identify the license plates of vehicles. The implemented method uses Hindi license-plate pattern recognition; it can extract histogram-of-oriented-gradient features, and an integrated classification model helps classify the Hindi characters.

3 Materials and Methods

3.1 Proposed Model

The above-mentioned problems can be fixed by an ingenious way of controlling traffic flow. The system comprises three sections: density-based traffic organization, emergency transportation clearance, and stolen-vehicle recognition. First, density refers to the number of vehicles at a specific signal at a given instant. Infrared sensors are used to measure the density on a particular side of the junction; the IR sensors inform the controller of the count, and the controller opens the signal with the higher density. In the second section, RFID tags [1] are attached to emergency vehicles. As soon as such a tag comes into contact with the reader, the signal at which the tag was read turns green automatically, minimizing human effort. The third section is the detection of theft vehicles. When the reader reads an RFID tag, it compares it with the


collection of stored RFIDs. If the RFID read matches that of a stolen automobile, the signal turns red and the buzzer rings loudly. In addition, a message is sent to the local policemen to identify the vehicle, which helps them catch it.

3.2 Arduino UNO

The Arduino UNO board is based on the ATmega328 microcontroller. It has 14 digital pins, six of which support pulse-width modulation, and six analog pins. It can be powered from a host computer through USB. An advantage of Arduino is that it is easily compatible with a PC, and the program can be downloaded easily to the board over the serial interface.

3.3 RFID Tag

RFID is a wireless technology that involves the generation, handling, broadcast, and reception of radio waves, transmitting information by means of electromagnetic fields. The tags are used here for the detection of emergency vehicles and stolen vehicles.

3.4 GSM Modem SIM800A

This modem works like a mobile phone for sending and receiving messages and offers plug-and-play operation over a serial interface. Using AT commands, the modem's signal strength can be monitored and SMS and MMS messages can be sent and received. The message about the location of the stolen vehicle is sent by the SIM800A.
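Sending such an alert through the SIM800A can be sketched with standard AT commands over pyserial; the serial port and phone number below are placeholders:

```python
import time
import serial

modem = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # placeholder port

def send_sms(number, text):
    modem.write(b"AT+CMGF=1\r")                # switch the modem to text mode
    time.sleep(0.5)
    modem.write(('AT+CMGS="%s"\r' % number).encode())
    time.sleep(0.5)
    modem.write(text.encode() + b"\x1a")       # Ctrl+Z terminates the message
    time.sleep(3)
    return modem.read_all()                    # modem's raw response

print(send_sms("+911234567890", "Stolen vehicle detected at junction 4"))
```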

3.5 Infrared Sensors

These electronic instruments are used for obstacle detection. They transmit and receive infrared radiation with wavelengths from about 680 nm to 1 mm and detect objects passing through their beam. The number of vehicles is counted with the help of the IR sensors.


4 Working

The prototype provides:
1. Density-based traffic control.
2. Ambulance go-ahead system.
3. Embezzled (stolen) vehicle detection.

4.1 Density-Based Traffic Control

In this method, the number of vehicles is counted by IR sensors placed along the road at successive distances, as shown in Fig. 1. The count is incremented by one as any vehicle passes a sensor. The IR sensor works on the transmission and reception of IR waves, so no video surveillance or manual effort is required to count the vehicles on a particular side. The duration of the green and red lights is determined by comparing the counts on all sides: the side with the largest count is given the green signal first, and once that side's vehicle count has decreased over time, the green signal switches to the side with the next largest count. The system works on a priority basis, and the yellow signal is turned on before each switch for safety. Figures 1, 2, and 3 illustrate the density-based control for the first, second, and third roads, respectively (Fig. 4).
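The priority rule reduces to a small loop; here is a simulation sketch with illustrative sensor counts (not measured data):

```python
def choose_green(counts):
    # Open the side with the largest vehicle count first (priority basis)
    return max(counts, key=counts.get)

counts = {"road1": 14, "road2": 6, "road3": 9}  # illustrative IR counts

while any(counts.values()):
    side = choose_green(counts)
    print("YELLOW for safety, then GREEN on", side)
    # Hold green while that side drains, then re-evaluate the densities
    counts[side] = max(0, counts[side] - 10)    # stands in for one green period
```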

Fig. 1 When the number of vehicles on the first side is greater than on the other two, it receives the green signal


Fig. 2 When the second side has more vehicles than the others, it turns green

Fig. 3 When the third road has more vehicles than the others, it receives the green signal before the others

4.2 Ambulance Go-Ahead System

This part of the working model deals with automatic clearance of the road for emergency vehicles. Each vehicle is equipped with an RFID tag, and the reader module is placed on the signal poles. Once the card is in range, it is read by the reader; the signal is turned green and remains green as long as the reader receives the tag. This action is independent of the traffic density, since the need for an ambulance to reach the hospital or the victim is given more significance than controlling traffic (Fig. 5).


Fig. 4 Illustration of traffic with the largest number of vehicles on the first side: green signal on the first side and red on the others

Fig. 5 Illustrating the arrival of the ambulance at the junction

4.3 Embezzled Vehicle

The detection of a stolen vehicle is still a big task for policemen due to heavy traffic. In this paper, we have used RFID tags: the tag number of the stolen vehicle is fed into the reader database. Once the vehicle comes to the signal, its tag number is compared with the list of stored numbers. If it matches, the signal is changed to red automatically. The buzzer is also turned on loudly so that it alerts the policemen about the


vehicle. In addition, a message is sent to the policemen by GSM, which helps them prepare to catch the vehicle at the next junction (Fig. 6).

Fig. 6 Message sent to the owner about the theft once the tag of the stolen vehicle is read
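The tag-matching logic itself reduces to a set lookup; a sketch with hypothetical tag numbers:

```python
STOLEN_TAGS = {"3F00A1B2C3", "3F00D4E5F6"}   # hypothetical stored tag numbers

def alert_police(tag_id):
    # In the prototype this would go out as an SMS through the SIM800A modem
    print("GSM alert: stolen vehicle %s detected at this junction" % tag_id)

def on_tag_read(tag_id):
    if tag_id in STOLEN_TAGS:                 # compare with the stored list
        print("Signal -> RED, buzzer ON")
        alert_police(tag_id)
    else:
        print("Tag %s is not flagged" % tag_id)

on_tag_read("3F00A1B2C3")
```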

5 Conclusion

Thus, the traffic signal is controlled on an adaptive basis. The system minimizes the work of policemen to a great extent and provides shorter waiting times at junctions. Emergency vehicles such as fire services and ambulances can be equipped with tags for automatic clearance, which helps in saving lives. Stolen vehicles can also be found in a shorter span of time. If a stolen vehicle and an ambulance are on the same road, the ambulance is given priority and the vehicle can be caught at other junctions; similarly, if there are two ambulances at a junction, the ambulance nearer to the pole is given priority. The work can be extended by using long-range detectors in place of RFID tags, and the stolen vehicle could be located using GPS, which would provide the precise location of the particular automobile. The approach can also be extended to multiple road junctions.


References

1. Kamal MAS, Imura J, Hayakawa T, Ohata A, Aihara K (2014) Smart driving of a vehicle using model predictive control for improving traffic flow. IEEE Trans Intell Transp Syst 15(2)
2. Venkatesh V, Syed N (2015) Smart traffic control system for emergency vehicle clearance. Int J Innov Res Comput Commun Eng 3(8)
3. Arunmozhi P, Joseph William P (2012) Automatic ambulance rescue system using shortest path finding algorithm. Int J Sci Res 3(5)
4. Sundar R, Hebbar S, Golla V (2015) Implementing intelligent traffic control system for congestion control, ambulance clearance, and stolen vehicle detection. IEEE Sens J 15(2):1109–1113
5. Bento LC, Parafita R, Nunes U (2012) Intelligent traffic management at intersections supported by V2V and V2I communications. In: IEEE conference on intelligent transportation systems, Anchorage, Alaska, USA, 16–19 Sept
6. Nakanishi H, Namerikawa T (2016) Optimal traffic signal control for alleviation of congestion based on traffic density prediction by model predictive control. In: SICE annual conference 2016, Tsukuba, Japan, 20–23 Sept
7. Singh N, Kumar T (2018) An improved intelligent transportation system: an approach for bilingual license plate recognition. In: Information and communication technology for intelligent systems, vol 107, pp 29–38
8. Jin S, Hou Z, Chi R, Bu X (2016) Model free adaptive predictive control approach for phase splits of urban traffic network. In: IEEE Chinese control and decision conference, Yinchuan, China
9. Traffic congestion in Bangalore - a rising concern. http://www.commonfloor.com/guide/traffic-congestion-in-bangalore-arising-concern-27238.html. Accessed 2013
10. Hegde R, Sali RR, Indira MS (2013) RFID and GPS based automatic lane clearance system for ambulance. Int J Adv Electric Electron Eng 2(3):102–107

Securing Account Hijacking Security Threats in Cloud Environment Using Artificial Neural Networks Renu Devi, Sumeet Gill, and Ekta Narwal

Abstract Cloud computing has become increasingly popular and successful due to recent technological advances. It offers storage and computing resources as needed and provides financial advantages for users, but a large share of cloud service providers' funds is spent on cloud data security. In the present work, the threats related to cloud account hijacking are illustrated, and a model to secure against this threat is presented. We have used the feedforward backpropagation algorithm for secure storage in a cloud environment.

Keywords Cloud computing · Artificial neural network · Feedforward neural network · Data security · Backpropagation algorithm

1 Introduction The use of cloud computing by individuals and businesses has risen significantly over the last decade for many reasons, including increased efficiency and cost savings [1]. A lot of technical resources are available in cloud computing. In particular, it has enabled large amounts of data to be stored, large amounts of computations on data to be performed, and many other services related to data storage [2]. But, data security is the major concern in the cloud. Account hijacking is one of the vital security threats in the cloud environment [3]. Account hijacking means that your account is hijacked by hackers. Passwords and credentials can be stolen or hacked through software vulnerabilities such as buffer overflow attacks. The person who controls a R. Devi · S. Gill (B) · E. Narwal Department of Mathematics, M.D. University Rohtak, Rohtak, India e-mail: [email protected] R. Devi e-mail: [email protected] E. Narwal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_11


user account can eavesdrop on transactions, change data, give false and damaging responses, and redirect users to other websites [4].

1.1 Feedforward Backpropagation Algorithm of Artificial Neural Networks

Artificial neural networks are inspired by biological neural systems and are used for machine learning; they work in a manner loosely analogous to the human brain [5]. Feedforward backpropagation is an artificial neural network algorithm that usually uses three layers; in general, the number of layers needed for a problem's solution grows with the complexity of the problem. Neural networks consist of neurons connected by links, where each connection carries an adjustable weight. A link is established from an input neuron to another neuron, and all signals move in the forward direction: inputs propagate through the network to the outputs, each link contributing through its associated weight, after which an activation function is applied to produce the output values. Error signals are generated by subtracting the actual outputs from the desired outputs; this error signal is the basis of the backpropagation step, in which the error is propagated back through the neural network and the weights are adjusted [6]. The trained neural network then gives the desired results. Figure 1 represents the feedforward backpropagation algorithm of artificial neural networks. The paper is organized as follows: Sect. 2 describes the experimental setup, which consists of training the passwords using the feedforward backpropagation algorithm of artificial neural networks in MATLAB; Sect. 3 explains the experimental results; and Sect. 4 comprises the paper's conclusion.
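A minimal NumPy sketch of one forward pass and the backpropagation weight update for a single hidden layer; the sigmoid activations, layer sizes, and learning rate are illustrative and do not reproduce the paper's MATLAB configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.random((64, 1))             # 64-bit input pattern
t = rng.integers(0, 2, (64, 1))     # 64-bit desired output pattern
W1 = rng.normal(0, 0.1, (32, 64))   # input layer -> 32 hidden neurons
W2 = rng.normal(0, 0.1, (64, 32))   # hidden layer -> output layer
lr = 0.5

for epoch in range(1000):
    h = sigmoid(W1 @ x)             # forward: inputs propagate to the hidden layer
    y = sigmoid(W2 @ h)             # ... and on to the outputs
    e = t - y                       # error signal: desired minus actual output
    delta2 = e * y * (1 - y)        # backward: propagate the error ...
    delta1 = (W2.T @ delta2) * h * (1 - h)
    W2 += lr * delta2 @ h.T         # ... and adjust the weights
    W1 += lr * delta1 @ x.T

print("final mean absolute error: %.4f" % np.abs(t - y).mean())
```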

Fig. 1 Feedforward backpropagation neural networks


2 Experimental Setup

The machine was trained with a single password, with two passwords, and with up to 15 passwords in one go. First, we trained a single password, using 64 bits of input data, one hidden layer, thirty-two neurons, and the parameters shown below. Table 1 shows the 64-bit binary-converted input data, and Fig. 2 shows the network architecture and parameters used for the training runs from a single password up to 15 passwords. The single-password training took zero iterations and negligible time, and the output value obtained was identical to the target value. Table 2 shows the output of the single-password training, Fig. 3 its performance curve, and Fig. 4 its regression plot. From the single-password training we conclude that the machine can quickly learn 64 bits of data. Using the same parameters, we then trained two, three, and eventually up to 15 passwords at a time.

Table 1 Binary-converted input data for the training of a single password

01010010 01100101 01001110 01110101 01000000 00100011 00110001 00110010
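The 64-bit pattern in Table 1 is simply the bitwise ASCII encoding of an eight-character password (the row above decodes to the string "ReNu@#12"); a short sketch of that conversion:

```python
def password_to_bits(password):
    # Each ASCII character contributes 8 bits; 8 characters give 64 bits
    return [int(b) for ch in password for b in format(ord(ch), "08b")]

bits = password_to_bits("ReNu@#12")  # reproduces the 64-bit row of Table 1
print(len(bits), "bits:", "".join(map(str, bits)))
```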

Fig. 2 Neural networks’ architecture of single password training

Table 2 Output value of training of single password

01010010 01100101 01001110 01110101 01000000 00100011 00110001 00110010


Fig. 3 Performance curve of training of single password

Fig. 4 Regression plot of training of single password

Table 3 lists the input data used for the training of 15 passwords. Figures 5 and 6 show the performance curve and the regression plot of the training of 15 passwords; the best training performance occurs at epoch two out of three. Figure 5 shows the training, test, and best lines meeting at two


Table 3 Binary-converted input data for the training of 15 passwords

01011001 01101111 01000111 01101001 01000000 00100011 00110011 00110000
01000001 01110010 01010100 01101001 01000000 00100011 00110011 00110001
01001101 01101111 01001110 01110101 01000000 00100011 00110011 00110010
01010011 01101001 01010000 01110101 01000000 00100011 00110011 00110011
01001110 01100101 01001000 01100001 01000000 00100011 00110011 00110100
01010000 01101001 01001011 01100001 01000000 00100011 00110011 00110101
01010010 01100101 01001110 01110101 01000000 00100011 00110011 00110110
01001101 01101001 01010100 01101111 01000000 00100011 00110011 00110111
01010000 01110101 01001110 01110101 01000000 00100011 00110011 00111000
01001001 01110011 01001000 01110101 01000000 00100011 00110011 00111001
01010011 01101001 01010100 01101111 01000000 00100011 00110100 00110000
01001101 01100001 01000011 01101111 01000000 00100011 00110100 00110001
01010010 01101001 01010100 01100001 01000000 00100011 00110100 00110010
01000001 01110010 01001010 01110101 01000000 00100011 00110100 00110011
01000001 01101101 01001001 01110100 01000000 00100011 00110100 00110100

epochs. All lines converge to a value of about 0.1 but do not converge to exactly zero. From this performance curve, we can say that the machine cannot learn 15 or more different passwords at a time.

Fig. 5 Performance curve of training of 15 number of passwords


Fig. 6 Regression plot of training of 15 number of passwords

Table 4 below shows the decimal equivalent output of the training of 15 passwords. Table 5 lists the minimum errors and the numbers of epochs recorded while training one, two, three, and up to 15 passwords on the machine.

3 Results

This part of the paper describes the results of training the passwords on a machine. We trained 64 × 1, 64 × 2, 64 × 3, and so on up to 64 × 15 bits of data using the stated parameters, with the trainbr training function and the purelin and tansig transfer functions. From the minimum errors and the numbers of epochs observed during training, we conclude that the machine can learn 14 passwords at a time; during the training of 15 passwords, we obtained the maximum difference between the target value and the output value. These training results show that any person or organization can save multiple passwords in the cloud with security.


Table 4 Decimal equivalent output of the training of 15 passwords (representative rows; instead of exact binary bits the network outputs fractional values, e.g., 0 1 0 0.457 0.414 0.542 0.414 0.585 and 0 1 1 0.543 0.543 0.5 0.5 0.499)

Table 5 Comparison of minimum error and number of epochs

Number of passwords    Minimum error    Number of epochs
1                      0                0
2                      0.0391           2
3                      0.0622           2
4                      0.0619           2
5                      0.0752           2
6                      0.0898           2
7                      0.0846           0
8                      0.1094           1
9                      0.0941           5
10                     0.888            2
11                     0.1055           3
12                     0.1012           2
13                     0.0991           2
14                     0.0966           2
15                     0.1031           2

4 Conclusions

Cloud security is one of the most actively researched fields across the globe. Cloud computing has grown significantly in recent years, and information about individuals and companies is increasingly stored in the cloud, raising security concerns that remain a significant barrier to its growth. Cloud account hijacking is a primary security issue, and data privacy complications continue to affect the market. The present work simulated the cloud data security problem using artificial neural network techniques. To address account hijacking threats, we trained a set of passwords with the feedforward backpropagation algorithm; the password data are then stored in the cloud in the form of a weight matrix. Since no one can easily determine the dimensions of the weight matrix, the data are stored with utmost security: if a person or company saves their data as a weight matrix, it becomes almost impossible for a hijacker to crack the data.


References

1. Hassan J, Shehzad D, Habib U, Aftab MU, Ahmad M, Kuleev R, Mazzara M (2022) The rise of cloud computing: data protection, privacy, and open research challenges - a systematic literature review (SLR). Comput Intell Neurosci 2022(5). https://doi.org/10.1155/2022/8303504
2. Jathanna R, Jagli D (2017) Cloud computing and security issues. Int J Eng Res Appl 07(06):31–38. https://doi.org/10.9790/9622-0706053138
3. Tirumala SS, Sathu H, Naidu V (2016) Analysis and prevention of account hijacking based incidents in cloud environment. In: Proceedings of the 2015 14th international conference on information technology (ICIT 2015), Jan 2016, pp 124–129. https://doi.org/10.1109/ICIT.2015.29
4. Christina AA (2015) Proactive measures on account hijacking in cloud computing network. Asian J Comput Sci Technol 4(2):31–34. www.trp.org.in
5. Farizawani AG, Puteh M, Marina Y, Rivaie A (2020) A review of artificial neural network learning rule based on multiple variant of conjugate gradient approaches. J Phys Conf Ser 1529(2). https://doi.org/10.1088/1742-6596/1529/2/022040
6. Jethva HB, Solanki S (2013) A review on back propagation algorithms for feedforward networks, vol 2277, pp 73–75

IoT-Based Smart Pill Box and Voice Alert System P. Sinthia , R. Karpaga Priya, S. Kavitha, M. Malathi, and K. Suresh Kumar

Abstract This research work proposes an advanced medicine box monitoring system and its analysis. Patients with prolonged illness, and elderly patients who must undergo regular medication, face many medical errors when sorting large numbers of pills every day. This paper describes the design and assembly of a medicine box whose purpose is to overcome this gap in the medical field. The pill box is capable of categorizing the pills automatically. The primary application of this medication pill box is for people who are under regular medical treatment or take vitamin supplements, and it can also be used by medical attendants who deal with a large number of patients. Our smart medicine container is designed so that it can be programmed at any time, which enables patient caretakers or clients to set the quantity of pills and the times at which the medicines should be taken each day. The time at which a pill has to be taken is preset, after which the pill box sends reminders to the users or patients using an alarm or buzzer.

Keywords Health care · Smart pill box · Voice alert system

P. Sinthia (B) · R. Karpaga Priya · S. Kavitha · K. Suresh Kumar Saveetha Engineering College, Chennai, India e-mail: [email protected] R. Karpaga Priya e-mail: [email protected] S. Kavitha e-mail: [email protected] K. Suresh Kumar e-mail: [email protected] M. Malathi Rajalakshmi Institute of Technology, Chennai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_12


1 Introduction

The principal goal of the smart medicine box system is to serve patients who suffer from bipolar disorder or other chronic health problems by motivating them to take their medicines regularly. This provides much more convenience to patient caretakers. The medication time can be preset through the application setup, and the same can be notified using a voice alert system. The preset value is then communicated to the pill box system. After receiving the communication, the system gives a wake-up reminder to the patient when the time to take the medicine has arrived. As an add-on, there is an extra feature by which patients receive points whenever they take their tablets from the smart device; medicine intake is indicated by the development of a growing tree. If the patient fails to take the drug within the stipulated time, an intimation is conveyed to the caretaker so that necessary actions can be taken [1]. The aim of this research work is to design a cost-efficient automated pill box with a timely alert system. Patients who suffer from depression and mania phases will benefit from this device. During mania phases, there is a large possibility of unsafe conditions, as patients may be inclined to do impulsive and risky things that could endanger their own lives. Although various treatment techniques are given to patients with bipolar disorder, the most predominant method is taking the proper drugs in order to maintain the balance of chemicals present in the brain [2].

2 Proposed System

People who are not able to take care of themselves will not have the attentiveness to take their pills daily. In those cases, the caretaker has to attentively remind the patients to take their medicines. The responsibility of the caretaker can be reduced by sorting the drugs into the medicine box. The medicine that the patient needs to take at the respective times is placed into the pill box, and the pill time for each required medicine is set in the system [3]. When the time to take the pills arrives, the voice alert system is triggered, and the patient is consequently reminded by the voice alert to take the pills.

3 Literature Survey

The fall detection system based on an Arduino development board, authored by Arote and Bhosale [4], is taken for review. Fall-related accidents and injuries are among the most common causes of fatalities and hospitalization of elderly people. Hospitals and nursing homes face a significant threat among


elderly people, namely accidents due to falls. An on-body sensor, found to be an enhanced fall detection system, is proposed; its important components are an accelerometer, a GPS module, and a GSM module. A real-time accelerometer attached to GPS and GSM instant-messaging modules is demonstrated that is portable, wearable, cost-efficient, and highly accurate. The device monitors movements and communicates them by sending alerts in case of emergency.

The prototype for controlling electrical appliances using a GSM module and Arduino, authored by Nasution et al. in 2017 [5], is taken for review. GSM provides an uncomplicated communication technology. The authors propose a prototype consisting of an electric-appliance control tool operated through SMS over GSM; GSM was chosen because it does not depend on the mobile device's platform. The system consists of a GSM SIM900 module and an Arduino that together control a relay module. Commands are given through SMS in coordination with the relay module, and feedback for each command is then received. To test the working of the device, ten different kinds of input command strings were executed; the relay operations corresponding to each input string were carried out, and feedback messages for the preceding commands were returned.

The technique for regular monitoring of beat-to-beat heart rate using a camera as a contactless measurement sensor, authored by Iozzia, Cerina, and Mainardi in 2016 [6], is taken as the third paper for review. The technology explained in this paper is video photoplethysmography (videoPPG), which has emerged as a means of remote assessment of various cardiovascular factors, namely heart rate (HR), respiration rate (RR), and heart rate variability (HRV). An entirely automated technique based on the principle of chrominance is proposed in this research work. The optimal region of interest (ROI) is selected for every subject in order to measure the accuracy of beat detection and interbeat-interval (IBI) monitoring. Twenty-six subjects who underwent a stand-up maneuver were recorded experimentally. The outcome of the experiments shows that the precision of beat detection is slightly higher in the resting position (95%) than in the standing position (92%); this is because maintaining balance gives rise to magnified motion artifacts in the signal. The results show instantaneous heart-rate errors (expressed as mean ± sd) of +0.04 ± 3.29 beats per minute at rest and +0.01 ± 4.26 bpm during movement.

Human fall detection using accelerometer and RFID technology with IoT was implemented by Kaudki and Surve [7]. Intelligent and attentive recognition of human actions is a key research point for a wide field of applications, such as remote observation of elderly people's health, fall detection, and ambulatory tracking. The need for human activity recognition has increased due to its use in automatic video retrieval based on various optical parameters.
Despite the various research works carried out previously, there are


confidentiality issues in computer vision techniques. This paper emphasizes a fall detection system fabricated especially to monitor elderly people. The system is designed using an accelerometer sensor and an RFID application, and it operates through indoor and outdoor tracking using embedded technology with set points. Elderly people leading a lonely life at home, and disabled persons who are unable to move, may face accidents such as falls. The article proposes the accelerometer sensor and the RFID application to monitor the daily actions of senior citizens on a regular basis.

The development of a wireless blood pressure monitoring system using a smartphone, authored by Ilhan and Kayrak [8], is taken for review. Patient monitoring and diagnosis of diseases for patients in remote areas can be carried out using a telemedicine system. Telemonitoring may be defined as a method of medical practice that involves distant observation of patient parameters when the patient and the healthcare professional are not at the same location. Around-the-clock monitoring of vital parameters is indispensable in the case of critical patients. The principal goal of the proposed system is to create an efficient device to monitor blood pressure, controlled by an Arduino microcontroller; an Android app is utilized for the software execution. The design and implementation of a real-time patient monitor, built with wireless transmission via Wi-Fi together with a matrix bar-code technique, are proposed in this paper. The matrix bar code contains information about the patient, including the patient name, unique patient identification (ID), unique hospital identification (ID), names of the drugs used, medicine ID, the timing at which the medicine is to be taken, and other related details about the bag containing the tablets. The medicine box incorporates a camera that scans the code to verify whether the patient has taken the medicine. An alarm system reminds the patient to take the medicine with the help of a buzzer: the buzzer goes off at the preset time, and the person then removes the medicine bag from the container and scans the matrix bar code. By doing so, the caretaker can confirm whether the patient has taken the regular medicine; if not, the medicine box reminds the patient again. Through the user interface, the device guides the patient to take the appropriate drugs at the proper time. The medicine schedule the patient must follow can be monitored from any distant place via a web page, and any variations in the time at which the patient consumes the medicine, the amount, or the threshold timing of the drug are uploaded to the web page. An inevitable advantage of the widely used RFID technology is encryption and decryption, which reduce theft of useful information and data abuse over the internet. This smart pill box also offers an add-on provision called consultation from home: if the patient wants a consultation, the doctor can schedule an appointment using the RFID to recognize the respective patient.

The research work carried out by Najeeb et al. [9] is taken for review. This work observes that, as a central component of medicine from initial supervision to advanced treatment, ethical drugs have become a leading constituent of health schemes all over the world. Owing to their psychoactive


consequences, these drugs are often consumed in ways not intended by the medical practitioner, or by persons for whom they were not prescribed. There are also instances where teenagers steal drugs such as opiates, CNS depressants, and stimulants from their friends and family. The objective of the research work is to design a system around the proper prescription of drugs, which makes a patient's access to such treatment depend on their identification and prescribed schedule; this also assists the pharmacist or doctor in supervising the intake.

Hiba et al. [10]: Two chief functionalities characterize the proposed system, namely safety, which guarantees the health of the patient, and the proper working of the system. The system is linked to a phone application that enables the patient's parents to monitor the condition. The medical box is configured to calculate the weight of each pill, preset the timing of medicine consumption, alert the user about the number of remaining pills, and generate buzzers whenever the patient forgets to consume the needed number of pills or does not take them at all. The outcome was very reasonable, with false alarms generated at a low rate of 3%.

4 Methodology

In this proposed method, an Arduino MEGA microcontroller is used to interface with the sensors and the communication devices. The RTC (Real-Time Clock) is an integrated circuit that keeps track of the time. The touch sensor is used for the patient to acknowledge that the tablet has been taken. The LCD is utilized to display the latest information about the patient. The IoT module is used to update the information to the cloud, where it can be accessed. The APR voice module is used for intimation, and the GSM module is used to send SMS messages (Fig. 1).

4.1 Take Pills

In this work, our main purpose is to ensure that the medicines are taken at the correct time. The smart pill box system uses a touch sensor, which the patient touches when taking the medicine. The RTC is used to trigger the reminder at pill time. When the touch sensor is activated, the APR module announces the pill name through the speaker so that the patient can easily understand [11].
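The control flow of this step can be sketched as follows; the schedule, hardware helper functions, and timeout are hypothetical stand-ins, and the logic is written in Python for readability even though the prototype implements it on the Arduino MEGA:

```python
import time
from datetime import datetime

PILL_SCHEDULE = {"morning": "08:00", "afternoon": "13:00", "night": "20:00"}

def read_rtc():                       # stand-in for reading the RTC chip
    return datetime.now().strftime("%H:%M")

def touch_pressed():                  # stand-in for the touch-sensor input
    return False

def play_voice(slot):                 # stand-in for the APR voice module
    print(f"Voice alert: time for the {slot} pills")

def show_lcd(text):                   # stand-in for the LCD update
    print(f"LCD: {text}")

while True:
    now = read_rtc()
    for slot, at in PILL_SCHEDULE.items():
        if now == at:                 # a real loop would debounce repeats
            play_voice(slot)
            show_lcd(f"Take {slot} pills")
            deadline = time.time() + 15 * 60   # wait up to 15 min for a touch
            taken = False
            while time.time() < deadline:
                if touch_pressed():   # patient acknowledges the intake
                    taken = True
                    break
                time.sleep(1)
            show_lcd("Pills taken" if taken else "Pills NOT taken")
    time.sleep(30)
```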


Fig. 1 Block diagram of the proposed work

4.2 Alert System

If the pills run out, an SMS is sent to the doctor, relatives, and others. All the details are updated to the IoT platform, which in our project acts as a web page. The RTC triggers an update of all the information to the IoT platform every 1 or 2 hours, based on the patient's requirement.
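A hedged sketch of this alert path follows: the SMS is issued through standard GSM AT commands, and the status push stands in for the IoT web-page update. The serial port, phone number, and feed URL are placeholders, not values from the paper.

```python
import time
import serial      # pyserial
import requests

gsm = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # assumed port

def send_sms(number, text):
    gsm.write(b"AT+CMGF=1\r")             # select SMS text mode
    time.sleep(0.5)
    gsm.write(f'AT+CMGS="{number}"\r'.encode())
    time.sleep(0.5)
    gsm.write(text.encode() + b"\x1a")    # Ctrl+Z terminates the message

def push_status(status):
    # hypothetical Adafruit-IO-style feed endpoint for the periodic update
    requests.post("https://io.example.com/feeds/pillbox",
                  json={"value": status}, timeout=5)

send_sms("+911234567890", "Patient has not taken the morning pills")
push_status("morning: pills not taken")   # repeated every 1-2 h by the RTC
```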

4.3 Internet of Things

The Internet of Things (IoT) is the collective arrangement of physical devices, vehicles, buildings, and other supplementary modules incorporating electronics, software, sensors, actuators, and network connectivity, which enables these objects to accumulate and exchange data. IoT was described as "the infrastructure of the information society" by the Global Standards Initiative on Internet of Things (IoT-GSI) in 2013. The IoT allows objects to be sensed and controlled remotely across the existing network infrastructure. As a consequence, it opens up opportunities for more direct integration of the physical world into computer-based systems, so that a network with high efficiency, accuracy, and economic benefit can be developed. IoT built with sensors and actuators is a high-technology instance of the more general category of cyber-physical systems.


Fig. 2 Prototype of smart pill box

This helps in realizing updated technologies such as smart grids, smart homes, intelligent transportation, and smart cities [12].

4.4 ESP-12E-Based NodeMCU

Espressif Systems designed the microcontroller called the ESP8266. The ESP8266 is a self-contained Wi-Fi networking solution that can bridge an existing microcontroller to Wi-Fi, and it is also capable of running applications on its own [13]. The NodeMCU module is usually fabricated with a built-in USB connector and a rich selection of pin-outs. It can be connected to a laptop by means of a USB cable attached to the NodeMCU devkit and flashed without hindrance, similar to an Arduino; it can also be interfaced with a breadboard [14] (Fig. 2).
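The paper does not state which firmware runs on the NodeMCU; as one illustration, the ESP8266 can also be flashed with MicroPython, in which joining a Wi-Fi network takes only a few lines (the SSID and password below are placeholders):

```python
import network
import time

wlan = network.WLAN(network.STA_IF)      # station (client) interface
wlan.active(True)
wlan.connect("HOME_SSID", "password")
while not wlan.isconnected():            # wait for the access point
    time.sleep(0.5)
print("connected, IP config:", wlan.ifconfig())
```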

5 Results

The prototype shown above is the final demonstration of the smart pill box and voice alert system; the speaker gives the auditory output, and the 16 × 2 LCD display shows the visual output. In case the patient does not take the pill, the caretaker receives a message on their mobile via IoT, and the information is saved in the data log of the Adafruit open-source platform (Fig. 3). The screenshots show the real-time message received on the registered mobiles of the caretaker/doctor via the GSM module connected to the Arduino MEGA; the message indicates that the patient has not taken the morning and afternoon pills (Figs. 4, 5 and Table 1). The table summarizes the prototype output for six different patients' daily pill intake records.

Fig. 3 Message output in mobile

Fig. 4 Prototype output in LCD display

Fig. 5 IoT web output

Table 1 Patient data output

Patient name   Morning           Afternoon         Night
Subject 1      Pills taken       Pills not taken   Pills taken
Subject 2      Pills not taken   Pills taken       Pills taken
Subject 3      Pills taken       Pills taken       Pills not taken
Subject 4      Pills taken       Pills taken       Pills not taken
Subject 5      Pills taken       Pills not taken   Pills taken
Subject 6      Pills taken       Pills not taken   Pills taken

6 Conclusion

The primary focus of this paper is to outline a system that can help patients with any kind of disorder stay motivated to consume the necessary drugs. It also enables caretakers to track the medicine intake of the patients. The system is built by interfacing sophisticated equipment and sensors; for this reason, it is cost-efficient compared with other commercialized systems. In addition, the growing-tree animation is utilized to provide an incentive and to help modify the patient's behaviour. As an add-on feature, the proposed system can transmit notifications by means of the user's LINE application. Therefore, the designed arrangement can easily be integrated with the user's technology and provide a concurrent alarm, serving as a wake-up call for medicine intake.


References

1. Medisafe iConnect, an affordable smart pill management system, Apr 2016. https://www.medgadget.com/2016/04/medisafeiconnect-anaffordablesmart-pill-management-system.html
2. Bhati S, Soni H, Zala V, Vyas P, Sharma MY (2017) Smart medicine reminder box. IJSTE Int J Sci Technol Eng 3(10)
3. da Silva JPS, Schneider D, de Souza J, da Silva MA (2013) A roleplaying-game approach to accomplishing daily tasks to improve health. In: Proceedings of the 2013 IEEE 17th international conference on computer supported cooperative work in design (CSCWD). IEEE, pp 350–356
4. Arote S, Bhosale RS (2015) Fall detection system using accelerometer principles with Arduino development board. Int J Adv Res Comput Sci Manag Stud 3(9)
5. Nasution TH et al (2017) Electrical appliances control prototype by using GSM module and Arduino. CIEA. https://doi.org/10.1109/IEA.2017.7939237
6. Iozzia L, Cerina L, Mainardi LT (2016) Assessment of beat-to-beat heart rate detection method using a camera as contactless sensor. IEEE
7. Kaudki B, Surve A (2018) Accelerometer and RFID technology implemented human fall detection using IoT. ICICCS
8. Ilhan I, Kayrak M (2016) Development of a wireless blood pressure monitoring system by using smartphone. Comput Methods Programs Biomed 125:94–102
9. Najeeb NJ, Rimna A, Safa KP, Silvana M, Adarsh TK (2020) Review paper on IoT driven smart pill box. Int Res J Eng Technol 7(1)
10. Zeiden H, Karam K, Abizeid Daou R (2018) Smart medicine box. IMCET
11. Ransing R, Rajput M (2015) Smart home for elderly care, based on wireless sensor network. In: International conference on nascent technologies in the engineering field (ICNTE), Navi Mumbai, India
12. Ming W, Xiaoqing Y, Dina F (2013) Design and implementation of home care system on wireless sensor network. In: 8th international conference on computer science & education (ICCSE), Colombo, Sri Lanka
13. Suryadevara N, Mukhopadhyay S (2011) Wireless sensors network based safe home to care elderly people: a realistic approach. In: IEEE recent advances in intelligent computational systems (RAICS), Trivandrum, India
14. Li Z, Chen W, Wang J, Liu J (2014) An automatic recognition system for patients with movement disorders based on wearable sensors. In: IEEE 9th conference on industrial electronics and applications (ICIEA), Hangzhou, China

Early Identification of Plant Diseases by Image Processing Using Integrated Development Environment

S. Kavitha, M. Malathi, P. Sinthia, R. Karpaga Priya, and K. Suresh Kumar

Abstract In this paper, a more efficient, economical, and simpler mechanism for determining the health of a plant, or the disease affecting it, is proposed. The working concept is demonstrated by identifying some existing diseases on a test plant using a system that compares two instances (or states) of the plant to find the difference in color pixel levels (either an increase or a decrease); the system mainly consists of a Raspberry Pi and an 8051 microcontroller circuit. The main scope of the concept lies in the field of automation in agricultural care and treatment. The theoretical working of the device is first demonstrated in simulations, using Proteus for the physical components, while the software-related tests are done in a virtual Python Integrated Development Environment (IDE) called PyCharm. After confirming the working of the theoretical simulations, the hardware and software components are combined and finally verified using a physical model.

Keywords Raspberry Pi · Python IDE · PyCharm

S. Kavitha (B) · P. Sinthia · R. K. Priya · K. Suresh Kumar Saveetha Engineering College, Chennai 602105, Tamil Nadu, India e-mail: [email protected] P. Sinthia e-mail: [email protected] R. K. Priya e-mail: [email protected] K. Suresh Kumar e-mail: [email protected] M. Malathi Rajalakshmi Institute of Technology, Chennai 602124, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_13


1 Introduction

The main objective of the proposed device is to test a new, simpler, and more economical way of performing plant disease detection on real-world plants, identifying the disease and providing the treatment or remedy immediately where that is an option. If our theory is successfully tested, this technology can be implemented in every agricultural setting, including home gardens, small-scale plantations, and crop fields. It could provide a form of automation in agricultural science that has not been used to a large degree until now. Our theory is that by applying image processing to two instances of a plant from different time periods, we can compare the number of pixels of each color, sorted based on their RGB values, and identify the disease from the difference in color pixels between those two instances. Normally, when a plant is infected with a pathogen or disease, its colors do not remain the same; based on the type of bacterial or viral disease, different colors and patterns may occur [20]. For example, Early Blight is a fungal disease that causes dark brown and black lesions on the entire plant, and Tobacco Mosaic Virus covers the leaves of the plant with spots of an almost radiant yellow hue. If the system constantly monitors these color differences by comparing them with an optimum or less-infected version of the plant, it may be able to identify the disease based on just these values. Most existing or tested disease detection systems implement machine learning or deep learning, which increases the amount of data to be processed and cannot be implemented for large-scale fields with hundreds or thousands of plants. For this model, only two images of a plant are required to provide an accurate analysis of the plant's status. However, image processing is not as simple as other digital data processing; a separate software package is required to translate the image into usable data. One of the most prominent packages in the field of image processing is the Open-Source Computer Vision (OpenCV) module. It was originally built for the C++ language, but has been extended to programming languages such as Python and Java, and it has also been implemented in simulation software such as MATLAB. It currently contains over 2500 algorithms and functions [6]. The components for the proposed work have been selected so that they can work with the OpenCV software. The processing unit has to be capable of quickly processing fairly large amounts of data and should be compatible with camera modules or have a specially allocated camera device. The controller unit is included to decrease the load on the processing unit and increase the overall time efficiency of the entire system.


2 Modeling of the Working System

The main function of detecting the plant's disease can be achieved using image processing on the Raspberry Pi alone, but the overall process of moving the device and operating the sprayers increases the processing load placed on a single component. Therefore, the operations are split into segments: the physical components are controlled by a separate controller, and the main software-related operations are done through the Raspberry Pi. The device chosen for the detection mechanism is the Raspberry Pi Zero W model (Fig. 1), since it supports an external camera setup, the PiCamera, to capture the image, and it offers multiple coding languages for operating the processes within the Raspberry [5]. It has a lower processing speed than the other Raspberry Pi units, since it is quite small in size, but it is sufficient to prove the execution of the program, and it also supports wireless LAN, which provides remote control of the Raspberry from any PC or smart device sharing the same network. An SD card is used as the ROM of the Zero W, and it also houses the operating system needed for the Raspberry, the Raspbian OS (64-bit version), which is loaded when the device is booted for the first time. The Zero W has 40 pins, of which 27 work as General-Purpose Input Output (GPIO) pins that can send or receive data based on the GPIO setup. The complete pin configuration of the Zero W can be seen by simply typing "pinout" in the command terminal of the Raspbian desktop. The Raspberry PiCamera (Fig. 2) is a small, lightweight camera module made specifically for the Raspberry Pi units. It can capture videos at 1080p

Fig. 1 Raspberry Pi zero W board


Fig. 2 5MP PiCamera v1.3 module

quality and has a still resolution of 2592 × 1944 pixels (5 MP). A Camera Serial Interface (CSI) cable connects the PiCamera to the Raspberry Pi. The camera is not enabled by default in the Raspberry, so it has to be enabled manually. For controlling the hardware, an 8051 microcontroller board with an AT89S52 chip mounted on it is used. The chip was selected because it provides a reliable means of controlling all the additional components and can be programmed using assembly language or the C language [11]. It has 32 digital input/output pins that can be configured as required. The Raspberry could simply give inputs to the 8051 microcontroller; however, the processes would then proceed side by side, and two processes might overlap and produce an undesirable working condition. Thus, the system was designed so that the Raspberry gives signals to the microcontroller, the controller executes the operations based on each signal, and the operations in the Raspberry are paused until the controller sends a process-completion signal. In this way, the Raspberry and the microcontroller have a half-duplex (one receiver and one transmitter at a time) communication between them.
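From the Raspberry side, this handshake can be sketched as follows, using the GPIO assignments listed later in Sect. 2.3 (GPIO 21 as one command line, GPIO 12 as the completion line); the pulse width and polling interval are assumptions:

```python
import time
import RPi.GPIO as GPIO

CMD_FORWARD = 21   # output: request a forward move from the 8051
DONE = 12          # input: completion signal back from the 8051

GPIO.setmode(GPIO.BCM)
GPIO.setup(CMD_FORWARD, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(DONE, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def move_forward():
    GPIO.output(CMD_FORWARD, GPIO.HIGH)   # pulse the command line
    time.sleep(0.1)
    GPIO.output(CMD_FORWARD, GPIO.LOW)
    while GPIO.input(DONE) == GPIO.LOW:   # Raspberry-side work pauses here
        time.sleep(0.05)                  # until the 8051 reports completion

move_forward()
GPIO.cleanup()
```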

2.1 Simulation Using Microcontroller Circuit

For the simulation of the circuit using the 8051 microcontroller, the following software packages are required:

- Keil5 compiler software
- Proteus Professional simulator

The Keil5 compiler is used for programming Atmel chips using either assembly language or the C programming language [11]. The hex code is generated after successful compilation of the program and can be uploaded to the simulation model in Proteus. The program for the 8051 microcontroller is explained through the flowchart in Fig. 3, and the simulation circuit diagram is given in Fig. 4.


Fig. 3 Flowchart of 8051 operation code

Now in the Proteus simulation software, the following components are required from the library in order to test the hardware circuit:

- AT89S52 microcontroller circuit
- L293D driver circuits (analog)
- Motors (simple DC model)
- Buttons (active)
- LEDs (active)

The following circuit connections are made, where the microcontroller ports are referred to as M and the driver module ports as D:

- M pins 2.0–2.3 to D1 pins IN1–4
- M pins 2.4–2.7 to D2 pins IN1–4
- M pins 3.0–3.3 to D3 pins IN1–4
- M pin 3.4 to LED
- M pins 3.5–3.7 and 0.6 to buttons

The buttons and LED are provided to imitate the data transfer from and to the Raspberry (since no version of the Raspberry Pi models is currently available in the Proteus simulator's component library). DC motors 5 and 6 stand in for the water pumps; since the working of a water pump is very similar to that of a DC motor, they are used in its place for the simulation. All the driver circuits are connected to a power source in place of the batteries.


Fig. 4 Simulation in Proteus software

so far they are used in their place for the simulation process. All the driver circuits are connected to a power source in the place of the batteries. The generated hex code from Keil is uploaded into the AT89S52 chip and start the Proteus simulation. If no error arises, move on to testing all the operations. Pressing the button connected to pin 3.5 starts rotating four DC motors in clockwise for forward movement. The distance of the movement is determined by the duration that the motors are operated. Pressing the buttons connected to pins 3.6 and 3.7 will operate the sudo water pump motors 1 and 2, respectively. Pressing the button connected to pin 0.6 will start moving the four motors in counterclockwise direction for reverse movement. Whenever the operation from an input to the 8051 microcontroller is completed, a completion signal is to be transmitted back to the Raspberry, and this is shown with the help of the LED connected to pin 3.4. If all these operations are successfully done, then the simulation has verified the working of the 8051 programming and is ready to be used in the real component.

2.2 Simulation of Image Processing System

For testing the plant disease detection program by means of simulation, the PyCharm software was selected as the development environment, since Python is a


built-in programming language on the Raspberry Pi models and it also provides the ability to use the OpenCV image and pixel processing package [9]. In the PyCharm IDE, a new program is created for plant detection, and the OpenCV package, which contains the image processing functions and operations, is imported [9]. A few instances of a test plant, such as a healthy state, a state infected with Early Blight, and a state infected with Powdery Mildew, are placed in the program folder. These images are created with the 2592 × 1944 resolution used by the Raspberry PiCamera. Now, suppose the plant has become infected: the previous instance before the infection is the image of the healthy plant, and the current state is the image of the plant with brown lesions on it (infected with Early Blight). The previous-instance image of the plant is loaded into a variable, and the numbers of color pixels of green, black, white, purple, yellow, and brown are counted based on the upper and lower values of their RGB. To count all the pixels, the compiler has to proceed from one pixel to the next, and with the default resolution about 5,038,848 pixels would have to be traversed one by one. Under this condition, the process may take a few minutes in PyCharm and up to a few hours on the Raspberry Pi. So, the image is interpolated as shown in Fig. 5 (the resolution of the image is compressed or expanded using the collective pixels [6]) before the pixels are counted. The same process is applied to the current-instance image of the plant, after which the pixel counts of the different colors from the previous and current instances are subtracted from one another. In the given case, due to the infection present in the current instance, the differences in the green and brown color counts would be high. The program recognizes these differences and associates the data with an existing disease, which is Early Blight. Similarly, if we had considered

Fig. 5 Interpolation of the image from 2592 × 1944 to 240 × 180 pixels' resolution


Fig. 6 Output from PyCharm simulation

powdery mildew, then based on the high difference in the green and white color counts, the program would have identified it as powdery mildew. If no large differences are noted in the color levels, the plant is identified as having "no issues". Finally, the color counts for the two instances are printed, along with the healthy (or infected) state of the plant, for the user. If the program is able to correctly identify the plant's state (Fig. 6), the simulation is successful and can be implemented on the Raspberry Pi.
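A condensed sketch of this detection logic is shown below; the threshold values (in OpenCV's BGR channel order) and the decision rule are illustrative assumptions, not the thresholds actually used in the paper:

```python
import cv2
import numpy as np

COLORS = {  # lower/upper bounds per color, in BGR order
    "green": ((0, 80, 0), (80, 255, 80)),
    "brown": ((0, 30, 90), (60, 100, 180)),
    "white": ((200, 200, 200), (255, 255, 255)),
}

def color_counts(path):
    img = cv2.imread(path)                          # 2592 x 1944 capture
    img = cv2.resize(img, (240, 180),
                     interpolation=cv2.INTER_AREA)  # interpolate first
    return {name: cv2.countNonZero(
                cv2.inRange(img, np.array(lo), np.array(hi)))
            for name, (lo, hi) in COLORS.items()}

prev = color_counts("Test1b.jpg")                   # previous instance
curr = color_counts("Test1a.jpg")                   # current instance
diff = {c: curr[c] - prev[c] for c in COLORS}

if diff["brown"] > 500 and diff["green"] < -500:    # hypothetical rule
    print("Early Blight suspected")
elif diff["white"] > 500:
    print("Powdery Mildew suspected")
else:
    print("No issues")
```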

2.3 Model Demonstration

The real-time connections are made as shown in Fig. 7, similar to the Proteus simulation circuit diagram, except that the buttons and LED are replaced by inputs from and outputs to the Raspberry Pi, respectively. The connections are made in the following configuration, where the Raspberry is referred to as R and the microcontroller as M (Fig. 8):

- M pin 3.5 to R pin 40 (GPIO 21)
- M pin 3.6 to R pin 38 (GPIO 20)
- M pin 3.7 to R pin 36 (GPIO 16)
- M pin 0.6 to R pin 31 (GPIO 6)
- M pin 3.3 to R pin 32 (GPIO 12)

The hex code developed through the Keil software is uploaded into the AT89S52 chip using an AVR programmer, which transmits the instruction hex code to the 8051 microcontroller board [12]. No changes are required to the controller's code (Fig. 9). Some changes need to be made to the Python code, however, since more than one plant is to be scanned and the images have to be taken by the Raspberry instead of being created and placed manually. As discussed in the simulation topic, the process requires two states of the plant, the previous instance and the current instance. But we cannot take both the previous and the current instances using the same program. So, a separate code is written that only moves along the line, captures the previous (comparable) instance of each plant, and stores it in the Raspberry's memory (Fig. 10). After capturing and storing


Fig. 7 Complete circuit connections

Fig. 8 Actual circuit connections


Fig. 9 Working demonstration

Fig. 10 Images of the previous instances (panels Test1b and Test2b)

all the plants, the device must return to its original place. No other operations of detecting or applying treatments take place with this program, and it only needs to be run once. After successfully creating the previous instances, we move on to the main plant disease detection program. First, the required libraries and setup conditions are given, after which the number of plants in the demonstration is specified. The device moves forward and captures the image of the current instance of the plant, which is shown in Fig. 11. The image of the current instance is loaded and interpolated, and the pixels of the different colors are counted based on RGB values as in the simulation, after which the previous instance is loaded, interpolated, and its pixels counted similarly. The difference between the pixel counts of the different colors is found, after which the data of those differences are used to identify the disease. If a disease condition is identified, then in addition to printing the infected state of the plant, the appropriate signal is sent to the microcontroller to spray the remedy using the water pump. If no irregular differences are found, the plant is considered healthy

Fig. 11 Images of the current instances (panels Test1a and Test2a)

and no steps are taken. In the case of identified diseases that have no remedy, the user must be informed that such a disease is present. The current instance is then saved as the comparable entity so that it can be used as the previous instance in the next iteration. Once the processing is completed for one plant, the device moves on to the next one and starts the detection process; this continues for as many plants as there are in the demonstration, so that all the plants are scanned for diseases. Once all the plants are done, the device returns to its original position and prints the status of all the plants, as shown in Fig. 13. All the operations taking place in this program are explained through the flowchart in Fig. 12. The plant disease detection program can be repeated on a daily or weekly basis to constantly monitor the health of the plants and provide immediate treatment. With that, the working of the device was successfully demonstrated.
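The per-plant loop summarized above can be sketched as follows; movement, capture, spraying, and diagnosis are stubbed out, and the plant count and file names are placeholders in the spirit of the Test1/Test2 images:

```python
NUM_PLANTS = 2

def move_forward():
    pass  # would signal the 8051 to advance the device to the next plant

def capture(path):
    pass  # would save a PiCamera frame to `path`

def spray(pump):
    print(f"spraying remedy from pump {pump}")  # 8051 runs the chosen pump

def diagnose(prev_path, curr_path):
    return "No issues"  # would run the color-difference comparison above

statuses = []
for i in range(1, NUM_PLANTS + 1):
    move_forward()
    capture(f"Test{i}a.jpg")                        # current instance
    state = diagnose(f"Test{i}b.jpg", f"Test{i}a.jpg")
    if state == "Early Blight":
        spray(1)                                    # remedy held in pump 1
    elif state == "Powdery Mildew":
        spray(2)                                    # remedy held in pump 2
    statuses.append((i, state))
    # in the real run, the current image replaces the previous instance
    # so that it becomes the comparable state for the next iteration

print(statuses)                                     # status of all plants
```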

3 Conclusion

The demonstration was able to confirm the working of the entire circuit and proved that our hypothesized theory for detecting plant diseases was accurate in most cases. After starting the development of the proposed system, we discovered that this mechanism of disease detection could be applied to identify infection present on all parts of a plant above ground level, because the process scans the color levels of the entire plant instead of just the leaves. The simulations using artificially created images were simple; however, when working with real-time images of the plant, the environmental lighting caused variations that could end in complete failure. Since the processing depends on particular RGB values, these changes could produce wrong results. This problem was overcome by using an LED light or another constant light source that stays on while the images are captured. The operations on the Raspberry Pi take longer than on a PC due to its limited processing speed; however, it could still produce results with the same accuracy without the help of additional processing units. Thanks to the Raspberry Pi's VNC server application, the operations and programs to be executed can be controlled from other devices such as PCs, laptops, and smartphones. The disadvantage found was that, since the detection mechanism uses two instances of the plant to identify its health, it cannot detect an already existing disease, for which there is no previous


Fig. 12 Flowchart of plant detection code

comparable optimum state; it can only identify diseases that newly develop after the system is integrated, or an existing infection that spreads.

Possible Improvements of This Technology
In our demonstration, we focused only on plants placed in the same line or row. Realistically, crops are planted in rows and columns, so a turning mechanism could be included to improve the mobility of the device. The distance traveled by the device is determined by the duration for which the motors are operated; instead, a system of following lines and stopping when a certain color is detected could be implemented. A regular water-spraying mechanism could be added alongside the monitoring if there is a

Fig. 13 Final output from the device

need to automatically water the plants along with the detection process. Some plant diseases cannot be treated with remedies; the affected plants need to be removed, since the infection could spread. In the demonstration we could identify these diseases, but an additional removal and disposal mechanism could be added to safely remove the infected plants.

References

1. Huixian J (2020) The analysis of plants image recognition based on deep learning and artificial neural network. IEEE
2. Mohanty SP, Hughes DP, Salathé M (2016) Using deep learning for image-based plant disease detection
3. Li L, Zhang S, Wang B (2021) Plant disease detection and classification by deep learning—a review. IEEE
4. Khirade SD, Patil AB (2015) Plant disease detection using image processing. IEEE
5. Nayyar A, Puri V (2015) Raspberry Pi-A small, powerful, cost effective and efficient form factor computer: a review. Int J Adv Res Comput Sci Softw Eng
6. Culjak I, Abram D, Pribanic T, Dzapo H, Cifrek M (2012) A brief introduction to OpenCV. IEEE
7. Xie G, Lu W (2003) Image edge detection based on OpenCV. Int J Electron Electr Eng 1(2)
8. Marengoni M, Stringhini D (2011) High level computer vision using OpenCV. IEEE
9. Dhawle T, Ukey U, Choudante R (2020) Face detection and recognition using OpenCV and Python. IRJET
10. Mustafa M, Hamarash I (2020) Microcontroller-based motion control for DC motor driven robot link. IEEE
11. Kamaluddin MU, Shahbudin S, Isa NM, Abidin HZ (2019) Teaching the Intel 8051 microcontroller with hands-on hardware experiments. IEEE
12. Gehlot A, Singh R, Malik PR, Gupta LR (2020) Meet 8051 and Keil compiler—a software development environment. In: Internet of things with 8051 and ESP8266
13. Szeliski R (2011) Computer vision: algorithms and applications. Springer, London
14. Kulkarni SA (2017) Problem solving and Python programming, 2nd edn. Yes Dee Publications
15. Ren Z, Ye C, Liu G (2010) Application and research of C language programming examination system based. IEEE


16. Ivanov S, Hinov N (2021) Smart system for control and monitoring a DC motor. IEEE
17. Marot J, Bourennane S (2017) Raspberry Pi for image processing education. IEEE
18. Wang W, Swamy MNS, Ahmad MO (2004) RNS application for digital image processing. IEEE
19. Gonzalez RC, Woods RE (2006) Digital image processing. Prentice-Hall
20. Timmerman AD, Korus KA (2014) Introduction to plant diseases. University of Nebraska

Usability Attributes and Their Mapping in Various Phases of Software Development Life Cycle

Ruchira Muchhal, Meena Sharma, and Kshama Paithankar

Abstract It has been observed that the development of interactive software is increasing day by day. The success of this software depends on the quality it provides to its end users. Therefore, it is a matter of concern for software development companies to produce high-quality software and provide a great user experience. Usability, being a quality attribute, has to be included and measured at every step of the software development process in order to produce usable software. This research paper identifies a set of usability attributes that are mapped across the various phases of the software development life cycle.

Keywords Usability attribute · Mapping · Software development life cycle

1 Introduction With tremendous rise in computer applications and the users using this application, computer user interfaces have become very vital. If the user interface is difficult and frustrating to use, then it is more likely that the software will not be used by the user. This has awakened a need to study a very important attribute of software concerned with its quality, i.e., usability. Usability is a crucial component of software quality and has an impact on the acceptance of the system, according to ISO/IEC 9126 [1]. Formally, usability is described as the degree to which a product may be used by certain users to achieve specific goals with effectiveness, efficiency, and satisfaction in a particular context of use [2]. Usability is related to how easy a system is to learn, R. Muchhal (B) · K. Paithankar Shri Vaishnav Institute of Management, Indore, India e-mail: [email protected] K. Paithankar e-mail: [email protected] M. Sharma Institute of Engineering & Technology, DAVV, Indore, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_14


how effective it is to use, how easy it is to recall, how well it can avoid and correct mistakes, and how satisfied users are with it [3]. Nowadays, user-centered design has gained immense importance to ensure the quality and success of any system in terms of user satisfaction. Therefore, the involvement of the user in the development of the system is necessary. Usability is no longer only the quality attribute but has become the functional feature of any system. This means that, in order to gain user satisfaction, high usability has emerged as the desirable attribute of usable software. But, it does not appear magically as a desire of the user. Usability features cannot be added as an afterthought at the end of the development process; rather, they must be taken into account at every stage of the software development life cycle (SDLC) [4]. Usability has to be incorporated and measured at every step of software development life cycle. By applying usability measuring process in the early stage of development process will give a continuous feedback to the developer and help to achieve desirable usability. During the early stage of software development, usability has been considered as one of the important quality attributes. In the recent years, usability of software applications and computer systems has been observed to become a functional feature of the software with increasing users recognizing the importance of Human–Computer Interface (HCI). In this research paper, we have compiled the usability attributes followed by various practitioners and researchers which can be mapped at different stages of software development life cycle. A study of various usability measurement models and usability attributes is done in Sect. 2 of this paper. Section 3 deals with mapping of these usability attributes at various stages of software development life cycle. In Sect. 4, a discussion on the mapping of these attributes is presented. Finally, we conclude with the usefulness of this mapping in Sect. 5.

2 Review of Literature

A literature review suggests that the best way to specify and measure usability is to list the features and attributes necessary for a product to be usable and to measure whether they are present in the final product [5]. Various researchers have contributed different attributes for measuring usability. According to one study, usability depends on four attributes: effectiveness, learnability, flexibility, and attitude [6]. In another model, a usable system should be efficient to use in order to maximize productivity, as well as satisfying and pleasing to users during and after use [7]. It has been suggested that an enhanced usability model would decompose usability into effectiveness, efficiency, satisfaction, and learnability attributes, and that security should be considered in conjunction with those four attributes [8].


A consolidated Quality in Use Integrated Measurement (QUIM) approach was also proposed, which characterized usability by ten attributes: efficiency, effectiveness, learnability, productivity, safety, trustworthiness, accessibility, universality, usefulness, and satisfaction [9].

3 Mapping of Usability Attributes in the Different Stages of Software Development Life Cycle

Section 2 discusses the fact that almost all previously defined usability models have broken the concept of usability down into multiple attributes in a heterogeneous manner. Because of this, their use becomes increasingly challenging during the system development stages. Universality, learnability, appropriateness, recognizability, accessibility, operability, user interface esthetics, user error protection, access control, adaptability, effect, customizability, efficiency, helpfulness, practicability, resilience, unambiguity, and validity are the usability attributes considered in this paper across the software development life cycle [10, 11]. The following activities make up the Classical Systems Development Life Cycle [12] (Fig. 1):

1. Initial investigation
2. Determination of system needs
3. System design
4. Software development
5. Systems testing
6. Implementation and evaluation

The mapping of usability attributes to the various phases of the SDLC is presented here pictorially. Figure 2 depicts the mapping of the usability attributes resilience, adaptability, universality, practicability, efficiency, and accessibility to the initial investigation phase. The mapping of helpfulness, validity, unambiguity, learnability, appropriateness, and recognizability to the determination of system needs phase is illustrated in Fig. 3. In Fig. 4, the mapping of the system design phase to effect, customizability, adaptability, universality, and user interface esthetics is shown. Figure 5 depicts the mapping of the attributes learnability, customizability, adaptability, universality, user interface esthetics, operability, appropriateness, recognizability, and effect to the software development phase. The mapping of the attributes learnability, universality, and user error protection to the systems testing phase is shown in Fig. 6. Similarly, in Fig. 7, the mapping of the implementation and evaluation phase to the usability attributes learnability, efficiency, helpfulness, universality, validity, and resilience is shown.
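For readers who wish to apply the mapping programmatically, Figs. 2–7 reduce to a simple lookup table; the dictionary structure below is our illustrative addition, while the attribute sets come directly from the figures:

```python
USABILITY_MAP = {
    "Initial investigation": ["resilience", "adaptability", "universality",
                              "practicability", "efficiency", "accessibility"],
    "Determination of system needs": ["helpfulness", "validity", "unambiguity",
                                      "learnability", "appropriateness",
                                      "recognizability"],
    "System design": ["effect", "customizability", "adaptability",
                      "universality", "user interface esthetics"],
    "Software development": ["learnability", "customizability", "adaptability",
                             "universality", "user interface esthetics",
                             "operability", "appropriateness",
                             "recognizability", "effect"],
    "Systems testing": ["learnability", "universality",
                        "user error protection"],
    "Implementation and evaluation": ["learnability", "efficiency",
                                      "helpfulness", "universality",
                                      "validity", "resilience"],
}

print(USABILITY_MAP["Systems testing"])
```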


Fig. 1 Activities in classical systems development life cycle

4 Discussion

The mapping of the various usability attributes to the different activities of the Classical Systems Development Life Cycle is achieved on the basis of an understanding of the definitions of the attributes and the purpose of each activity.

4.1 Initial Investigation

The preliminary investigation activity begins as soon as a request for a system is made. The primary objective of this phase is to determine whether the current system contains a problem or deficiency; it is here that the feasibility of the software is examined. In the end, the decision is made whether to proceed with the software or to abandon it. The three sub-phases of this phase are: request for clarification, feasibility study, and request for approval [12]. The purpose of this phase, which mainly consists of the above-mentioned sub-phases, matches the definitions of the

Fig. 2 Mapping of usability attributes to the initial investigation phase

attributes universality, adaptability, efficiency, resilience, accessibility, and practicability. Therefore, these attributes are mapped to the initial investigation phase, as shown in Fig. 2.

4.2 Determination of System Needs

During this phase, system analysts examine every aspect of the system under investigation to answer key questions about its requirements. Answering these questions requires collecting information through questionnaires, individual interviews, reviews of manuals and reports, and actual observation of work activities. In order to identify the features that a new system should possess, the analysts analyze the requirements data, including both the information the system must produce and operational features such as processing controls, response times, and inputs and outputs. From the objective of the determination of system needs phase, the attributes learnability, appropriateness, recognizability, helpfulness, unambiguity, and validity are mapped to this phase, as illustrated in Fig. 3.

Fig. 3 Mapping of usability attributes to the determination of system needs phase

4.3 System Design

During this activity, the details of how the system will meet the requirements identified during systems analysis are produced; the system is logically designed in this phase. Designers sketch each form or display as they envision it in the completed system, and data inputs, calculations, and storage are described here. There are many ways to present a document containing detailed design specifications; a complete and clearly outlined software specification is also provided by the designers [12]. From the tasks performed by the designers and the details they generate, the attributes universality, adaptability, effect, customizability, and user interface esthetics are mapped to the system design phase, as shown in Fig. 4.

Fig. 4 Mapping of usability attributes to the system design phase

4.4 Software Development

A software developer may install or modify existing software, install purchased software, or write a new customized program [12]. To determine which option is best, one has to consider the cost of each option, the time available to write the software, and the availability of programmers. A programmer is responsible for documenting the program and explaining why certain procedures are coded in a specific manner; once an application has been installed, documentation is crucial for testing and maintaining it. From the responsibilities of the developer or programmer, the attributes universality, learnability, appropriateness, recognizability, operability, adaptability, effect, customizability, and user interface esthetics are mapped to the software development phase, as shown in Fig. 5.

4.5 Systems Testing

As part of systems testing, the software is tested experimentally to ensure that it does not fail and that it operates as intended, i.e., according to its specifications and to user expectations [12]. Special test data are processed, and the results are

Fig. 5 Mapping of usability attributes to the software development phase

Fig. 6 Mapping of usability attributes to the systems testing phase


Fig. 7 Mapping of usability attributes to the implementation and evaluation phase

analyzed. By allowing only a limited number of users to use the system, analysts can see whether any unforeseen uses are made of it. The activity performed in this phase maps to the attributes learnability, universality, and user error protection, as shown in Fig. 6.

4.6 Implementation and Evaluation

During implementation, the systems engineer checks out and uses the new equipment, trains users, installs the new application, and constructs the data files needed to use the software. Strengths and weaknesses are identified by evaluating the system. From the need for and importance of this phase, the attributes learnability, efficiency, helpfulness, universality, validity, and resilience can be mapped to the implementation and evaluation phase, as shown in Fig. 7.


5 Conclusion

This study provides an understanding of the mapping of usability attributes to the various phases of the software development life cycle. Further, measures can be defined for each usability attribute. This will enable usability attributes to be quantified and ensure the usability of the work product in the early stages of software development, so as to produce a usable product at the end of the development process.
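As one hedged illustration of how such measures might be defined, the snippet below computes effectiveness and efficiency in the style of common ISO 9241-11 practice; these particular formulas are illustrative and are not prescribed by this paper:

```python
def effectiveness(tasks_completed, tasks_attempted):
    return tasks_completed / tasks_attempted        # task completion rate

def efficiency(tasks_completed, total_time_minutes):
    return tasks_completed / total_time_minutes     # goals achieved per minute

print(f"effectiveness = {effectiveness(18, 20):.0%}")
print(f"efficiency    = {efficiency(18, 45):.2f} tasks/min")
```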

References

1. ISO/IEC (1991) ISO 9126. Information technology—software quality characteristics and metrics
2. ISO 9241-11 (1998) Guidelines for specifying and measuring usability
3. Nielsen J (1992) Usability engineering life cycle. IEEE Comput Soc 25(3):12–22
4. Velmourougan S, Dhavachelvan P, Baskaran R, Ravikumar B (2014) Software development life cycle model to build software applications with usability. In: International conference on advances in computing, communications and informatics (ICACCI). IEEE, pp 271–2769
5. Bevan N, Macleod M (2010) Usability measurement in context. Behav Inf Technol 13(1–2):132–145
6. Shackel B (2016) Usability—context, framework, definition, design and evaluation. Hum Factors Inform Usability 21(5–6):21–37
7. Nielsen J (1994) Usability engineering. Elsevier
8. Abran A, Khelifi W, Suryn W, Seffah A (2003) Usability meanings and interpretations in ISO standards. Softw Qual J 11:325–338
9. Seffah A, Donyaee M, Kline RB, Padda HK (2006) Usability measurement and metrics: a consolidated model. Softw Qual J 14:159–178
10. Hasan LA, Al-Sarayreh KT (2015) An integrated measurement model for evaluating usability attributes. In: Proceedings of the international conference on intelligent information processing, security and advanced communication, vol 94, pp 1–6
11. Paithankar K, Ingle M (2009) Classification of software quality attributes—a comparative study in perspective of usability. J Technol Eng Sci 1(2)
12. Senn J (1989) Analysis & design of information systems, 2nd edn. McGraw-Hill International Editions

Controlling Devices from Anywhere Using IoT Including Voice Commands S. V. Ravikkumaar, N. Velmurugan, N. Hemanth, and Valleri Gnana Theja

Abstract Home automation has been discussed since the late 1970s. With the growth in population and energy usage over time, there is an urgent need to save energy in every way possible. The inability to monitor and control home appliances remotely is a significant contributor to wasted energy. Since the advent of advanced technologies and readily available services, people have substantially shifted their views on what a home should be able to do and how services should be provided and accessed within the home. IoT is the newest and most advanced internet technology. In this research, we showcase both a Bluetooth-based home automation system and an internet-based HAS. An automated house is frequently referred to as a smart home, and the fundamental concept discussed here is the ability to manage basic home features and operations automatically over Bluetooth; this may be done from anywhere in the world using the internet. Bluetooth is the communication technology used to build the smart house: it is fast, simple, and inexpensive to set up, and the technology is already well known to most people. When there is no internet connection, we can use Bluetooth voice commands to control the appliances, so data charges are reduced.

Keywords Bluetooth · Home Automated System (HAS) · Internet of Things (IoTs)

S. V. Ravikkumaar · N. Velmurugan · N. Hemanth (B) · V. G. Theja
Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India
e-mail: [email protected]

S. V. Ravikkumaar
e-mail: [email protected]

N. Velmurugan
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_15




1 Introduction

The idea of the Internet of Things has surfaced, providing an incredible vision for the future of technology. By exploiting the interaction and cooperation of different technical devices, it seeks to make it feasible to gather and comprehend information from the environment around us. In particular, the idea of "smart houses" aims to merge these gadgets into homes, enabling the automation of chores traditionally done by people, in order to simplify everyday life and provide a more pleasant atmosphere. However, many of these gadgets fall short of their claims, since they were not designed to account for the user's frequent changes in preferences and habits, necessitating reprogramming of the gadget to conform to the new behaviour. This article demonstrates the planning, development, and full deployment of a voice-activated smart home controller for intelligent devices, providing an effective means of addressing the aforementioned issue. It is then used in a practical situation.

2 Related Works

Zielonka [1]: Increasing our houses' energy efficiency is feasible thanks to the convergence of the Internet of Things (IoT) with computational intelligence, and connected gadgets may be configured to best suit a family's requirements. The article presents a remote platform control system for an IoT convection installation developed for a small house. Franco [2]: Due to its enormous benefits in creating an energy-efficient smart grid, appliance load monitoring has become more and more important in smart homes; its management techniques may be divided into intrusive load monitoring (ILM) and non-intrusive load monitoring (NILM). Williams [3]: Internet of Things (IoT) gadgets, which frequently gather sensitive information, are widely used because they are so prevalent in our everyday lives; although such gadgets automate and streamline routine chores, they also present serious security concerns. Zhuansun [4]: The rapid development of the IoT has increased the need for solutions that ensure the security and privacy of IoT-based smart home systems; wireless communication and sensor technologies are necessary for the security and secrecy of smart home systems, since they are a crucial part of the IoT. According to Su [5], hand gesture-based control has a lot of promise, both conceptually and in terms of actual implementations, because it is practical and simple; a real-time interactive control system for home appliances is presented in that research. Vijaykumar [6]: The potential of regulating and managing domestic energy usage, which helps to reduce energy losses, has led to an increase in the use of Smart Energy Control Systems (SECS) in the context of smart homes. Liu [7]: The safety of the house is assured; hardware and software are created in accordance with the system architecture defined in that study's IoT architecture for smart homes.



Sikora [8]: The control module collects readings from sensors and information from users about conditions within the home and, using computational intelligence, optimises settings to adapt the developed IoT convection system to a family's comfort. Bai [9]: After modification, the system improves significantly in terms of reduced temperature variations and decreased usage. Ahmed [10]: In contrast to NILM methods, where just one point of sensing is required, ILM is based on inexpensive metering devices linked to household appliances. Despite having a higher price tag than NILM, ILM systems are more dependable and efficient. Hybrid systems that combine NILM with the individual power metering offered by smart plugs and smart appliances are likewise expected to become more common soon. That research provides a unique ILM technique for load monitoring, building an activity identification framework on top of an IoT architecture.

3 System Architecture

The Arduino UNO controller serves as the system's brain in the proposed system and manages all hardware operations that communicate with it. Voice commands and an IoT web page are used as the input to operate the appliances. Relays attached to the appliances are driven by the controller based on user commands, and all operations are displayed on the LCD and monitored through IoT. With this technique, several appliances can be connected and controlled with commands, and IoT is used to monitor and control the switching of the appliances. This amounts to a voice-activated smart house. All the devices are connected and operated through the web page over an internet connection, or through the user's voice commands over a Bluetooth connection, which also saves energy consumption in the home. Appliances can therefore be switched on and off through the IoT web page or by voice command. Voice commands are helpful for patients in hospitals, as well as for people with disabilities and the elderly at home: they can easily control all home appliances, such as lights and fans, through the IoT web page and voice commands, whether they are at home or outside (Fig. 1). As with IoT in smart homes generally, people have always wanted things to be more comfortable and easy.
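To make this control flow concrete, the following is a minimal sketch of how such a controller might parse incoming voice commands and drive relays. It is an illustrative example, not the paper's actual firmware: the pin numbers, baud rate, and command strings are all assumptions.

```cpp
// Minimal sketch: appliance control over Bluetooth serial commands.
// Assumptions (not from the paper): HC-05-style module on hardware
// Serial, active-HIGH relay inputs on pins 7 (light) and 8 (fan).
const int LIGHT_RELAY = 7;
const int FAN_RELAY   = 8;

void setup() {
  Serial.begin(9600);              // typical default baud for HC-05
  pinMode(LIGHT_RELAY, OUTPUT);
  pinMode(FAN_RELAY, OUTPUT);
  digitalWrite(LIGHT_RELAY, LOW);  // start with appliances off
  digitalWrite(FAN_RELAY, LOW);
}

void loop() {
  if (Serial.available()) {
    // Voice apps typically forward the recognized phrase as a text line.
    String cmd = Serial.readStringUntil('\n');
    cmd.trim();
    if      (cmd == "light on")  digitalWrite(LIGHT_RELAY, HIGH);
    else if (cmd == "light off") digitalWrite(LIGHT_RELAY, LOW);
    else if (cmd == "fan on")    digitalWrite(FAN_RELAY, HIGH);
    else if (cmd == "fan off")   digitalWrite(FAN_RELAY, LOW);
  }
}
```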



Fig. 1 System architecture

4 Modules

4.1 Web Page

In this module, we control home appliances online using a web page. The web page is connected to the IoT platform, which helps to control the home appliances (Fig. 2).

Fig. 2 Web page architecture



Fig. 3 IoT web page control architecture

4.2 IoT Web Page Control

In this circuit, the switching of the appliances is monitored and controlled through IoT. Embedded systems, including central processing units (CPUs), sensors, and communication hardware, are all essential parts of an IoT ecosystem, which consists of interconnected smart devices that collect, share, and act on data from their immediate environments. By establishing a connection with an IoT gateway, IoT devices exchange the sensor data they acquire. In this module, we use IoT to control the fan, bulb, and plug point from the web page. The web page has buttons which are used to control the appliances in real time (Fig. 3).
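As an illustration of how web-page buttons can switch a relay in real time, the sketch below is a hypothetical minimal example for an ESP8266-class board using the Arduino ESP8266 core. The Wi-Fi credentials, GPIO number, and URL endpoints are placeholders, not details taken from the paper.

```cpp
// Minimal sketch of web-page relay control on an ESP8266 (ESP-12E).
// Assumptions (not from the paper): Arduino ESP8266 core, relay on
// GPIO5, placeholder Wi-Fi credentials.
#include <ESP8266WiFi.h>
#include <ESP8266WebServer.h>

const int RELAY_PIN = 5;
ESP8266WebServer server(80);

void sendPage() {
  // Serve two buttons that hit the /on and /off endpoints.
  server.send(200, "text/html",
    "<h3>Appliance</h3>"
    "<a href='/on'><button>ON</button></a> "
    "<a href='/off'><button>OFF</button></a>");
}

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  WiFi.begin("your-ssid", "your-password");   // placeholders
  while (WiFi.status() != WL_CONNECTED) delay(250);

  server.on("/",    sendPage);
  server.on("/on",  []() { digitalWrite(RELAY_PIN, HIGH); sendPage(); });
  server.on("/off", []() { digitalWrite(RELAY_PIN, LOW);  sendPage(); });
  server.begin();
}

void loop() {
  server.handleClient();   // process incoming HTTP requests
}
```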

4.3 Application Control

In this module, we use a mobile application to control appliances. The appliances are controlled by the application with the help of Bluetooth (Fig. 4).

Fig. 4 Application control architecture



Fig. 5 Bluetooth voice architecture

4.4 Bluetooth

Home appliances can be controlled by the user's voice commands with the help of Bluetooth mobile apps. All operations are displayed on the LCD. Several appliances can be connected in this method and operated through commands (Fig. 5).

5 System Implementation

5.1 Arduino

To put it simply, Arduino is an electronics platform built on open-source hardware and software. Arduino boards can take in data such as light from a sensor, a user pressing a button, or a tweet, and then use that data to trigger actions such as turning on an LED, activating a motor, or publishing to a website. You can make the board do what you need by sending instructions to its microcontroller (Fig. 6).

5.2 Liquid Crystal Display

The liquid crystal display (LCD) screen is a versatile electronic display. Because of its versatility and low cost, the 16 × 2 LCD module is widely used in electronic products. These modules are superior to seven-segment and other multi-segment LEDs for several reasons: a 16 × 2 LCD has room for 16 characters



Fig. 6 Arduino

Fig. 7 Liquid crystal display

across each of its two lines; LCDs are low-priced, simple to program, and can show any kind of character, including custom characters designed for a specific use (unlike seven-segment displays). The LCD displays each character as a 5 × 7 pixel matrix. This LCD has two registers, called the Command and Data registers; all the commands sent to the LCD are recorded in the command register (Fig. 7).
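A minimal example of driving such a 16 × 2 module with the standard Arduino LiquidCrystal library is sketched below; the wiring pins are illustrative and would depend on the actual circuit.

```cpp
// Minimal 16x2 LCD example using the standard Arduino LiquidCrystal
// library (parallel interface). The pin wiring below is illustrative.
#include <LiquidCrystal.h>

// LiquidCrystal(rs, en, d4, d5, d6, d7)
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);            // 16 characters x 2 lines
  lcd.print("System ready");
}

void loop() {
  lcd.setCursor(0, 1);         // column 0, second line
  lcd.print(millis() / 1000);  // seconds since start, as a status value
  lcd.print(" s   ");
}
```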

5.3 Relay

In the majority of control systems and equipment, relays serve as both the principal switching and protection mechanism. Every relay opens or closes contacts or circuits in response to an electrical variable, such as voltage or current. A relay is a type of switching device, since its function is to change the on/off state of an electrical circuit (Fig. 8). Relays may be categorised into different types based on the tasks they perform, including protection, reclosing, regulation, auxiliary, and monitoring. Protective relays are



Fig. 8 Relay

always keeping tabs on voltage, current, and power; if these values deviate from predetermined limits, an alert is generated or the circuit is isolated. Relays of this kind are used to safeguard machinery such as transformers, motors, and generators. Monitoring relays keep an eye on quantities such as the direction of power flow and produce an alert as necessary; they are also known as directional relays.

5.4 Bluetooth

Bluetooth is a wireless technology standard for creating personal area networks (PANs) and exchanging data over short distances between fixed and mobile devices, using ultra high frequency (UHF) radio waves with short wavelengths in the industrial, scientific, and medical (ISM) radio bands, from 2.400 to 2.485 GHz. Originally, it was conceived as a cordless alternative to RS-232 data cables.

5.5 Internet of Things

A network of everyday objects, including furniture, equipment, and buildings, that incorporate electronics, software, sensors, actuators, and a network connection makes up what is known as the Internet of Things (IoT). As per the 2013 Global Standards Initiative on Internet of Things (IoT-GSI), the IoT is "the infrastructure of the information society". Through the use of existing network infrastructure, objects can be sensed and remotely controlled by the IoT, opening up opportunities for a closer integration of the physical world with computer-based systems and achieving improvements in accuracy, efficiency, and economic benefit. When augmented with sensors and actuators, the IoT becomes a member of the broader class of cyber-physical systems, which



also includes smart grids, smart homes, intelligent transportation, and smart cities. Each object has an embedded computer system that makes it individually recognisable, yet it can still communicate with other things via the current internet infrastructure.

5.6 Infrastructure

The Internet of Things will become ingrained in daily life. It will become part of our overall infrastructure, just like water, electricity, telephone, TV, and, most recently, the internet. The Internet of Things, as a component of the future internet, will link common objects with a strong integration into the physical environment, as opposed to the existing internet, which normally connects large-scale computers.

5.7 ESP-12E Architecture

The ESP8266EX has Wi-Fi built right in, and it also boasts an enhanced L106 Diamond series 32-bit CPU from Tensilica with on-chip SRAM. Example scripts for typical applications are included in the SDK, and the ESP8266EX is often linked to external sensors and other application-specific devices through its GPIOs. The Smart Connectivity Platform (ESCP) from Espressif Systems exhibits complex system-level capabilities, including adaptive radio biasing for low-power operation and quick sleep/wake context switching for low-power VoIP, as well as advanced signal processing, spur cancellation, and radio co-existence features to mitigate interference from cellular, Bluetooth, DDR, LVDS, and LCD sources.

5.8 Embedded Systems

An embedded system is a complete system that includes hardware, application software, and a real-time operating system. The system might be large and integrated or very small and stand-alone. A comprehensive embedded system curriculum covers everything from the basics to advanced topics such as embedded system features, design, processors, microcontrollers, tools, addressing modes, assembly language, interrupts, embedded C programming, LED blinking, serial communication, LCD programming, keyboard programming, and project implementation. Programs are collections of one or more functions, where a function is a collection of statements used to carry out a particular activity. Every language has basic building blocks and grammatical rules. The C programming language provides variables, character sets, data types, keywords, expressions, and other constructs that are used while building C programs.



Embedded C is the name of the extension of the C language used for such systems. Compared with standard C, embedded C also has several additional capabilities, such as extra data types, keywords, and header files, denoted by tags of the form #include <microcontroller_name.h>.

6 Output

7 Conclusion

The purpose of the project is to use voice commands and an IoT website to control home appliances. The designed system allows its users to control the appliances remotely, in real time. The use of the IoT web page and Bluetooth voice commands, through a smartphone Android application, made connecting to the home appliances easier.

8 Future Enhancements

In the future, the project can be enhanced by making the system AI-controlled, by adding a camera for home security, and by controlling devices automatically with the help of sensors.



References

1. Naitam MD, Pofley CP, Darunde NH, Tikhat M, Belekar AS, Rajurkar RC, Rekkawar AV (2022) IoT based smart home automation. Int J Innov Res Electr, Electron, Instrum Control Eng 10(7), ISO 3297:2007
2. Kavitha M, Basith NA, Prakash GG, Kishore RJ (2022) Smart home automation for differently abled person using controller and IoT. Int J Eng Res Technol (IJERT), ISSN: 2278-0181, ICONNECT-2022 Conference Proceedings, www.ijert.org
3. Gupta K, Kumar S, Rai RK, Dubey P (2021) Voice command system. JETIR 8(5)
4. Kumari JV, Pavithra N (2021) IoT based smart home automation system. JETIR 8(1)
5. MohamedIsmail K, Radhakrishnan R, Ramachandran M, Vijay Raj VG (2021) Voice controlled smart home automation. Int J Innov Res Electr, Electron, Instrum Control Eng 9(3)
6. Kumar P, Lin Y, Bai G, Paverd A, Dong JS, Martin A (2019) Smart grid metering networks: a survey on security, privacy and open research issues. IEEE Commun Surveys Tuts 21(3):2886–2927, 3rd Quart. https://doi.org/10.1109/COMST.2019.2899354
7. Cintuglu MH, Mohammed OA, Akkaya K, Uluagac AS (2017) A survey on smart grid cyber-physical system testbeds. IEEE Commun Surveys Tuts 19(1):446–464, 1st Quart. https://doi.org/10.1109/COMST.2016.2627399
8. Annaswamy A (2013) IEEE vision for smart grid control: 2030 and beyond roadmap. IEEE Stand 1(1):1–12. https://doi.org/10.1109/IEEESTD.2013.6648362
9. Al-Kuwari M, Ramadan A, Ismael Y, Al-Sughair L, Gastli A, Benammar M (2018) Smart-home automation using IoT-based sensing and monitoring platform. In: Proceedings of the international conference on compatibility power electronics and power engineering (CPE-POWERENG), pp 1–6. https://doi.org/10.1109/CPE.2018.8372548
10. Mocrii D, Chen Y, Musilek P (2018) IoT-based smart homes: a review of system architecture, software, communications, privacy and security. Internet Things 1–2:81–98. https://doi.org/10.1016/j.iot.2018.08.009

IoT-Based Air Quality Monitoring System Amileneni Dhanush, S. P. Panimalar, Kancharla Likith Chowdary, and Sai Supreeth

Abstract A number of factors have contributed to rising pollution levels, including increased population, increased vehicle use, industrialization, and urbanisation, all of which have negative effects on human welfare by directly affecting the health of the exposed population. The system shows the air quality in PPM on the LCD and also on a web page, so it can be monitored very easily. In this IoT-based pollution monitoring system, the air quality is measured over a web server using the internet, and an alarm is triggered once the air quality goes beyond a certain level, i.e., when a sufficient quantity of harmful gases, such as CO2, smoke, alcohol, aromatic hydrocarbons, and NH3, is present in the air. The MQ135 sensor is the most suitable option for monitoring air quality because it can detect most harmful gases and can measure their quantity accurately. The pollution level can be monitored from anywhere on a PC or mobile device. The system can be installed anywhere and can also trigger a device once pollution goes beyond a set level; for example, it can start a fan or send an alert.

Keywords Air pollution · MQ135 sensor · IoT · PPM

1 Introduction Air pollutants are the most important problem of each nation, whether or not they are miles evolved or growing. Health problems are developing at quicker charge A. Dhanush · S. P. Panimalar · K. L. Chowdary (B) · S. Supreeth Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India e-mail: [email protected] A. Dhanush e-mail: [email protected] S. P. Panimalar e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_16




especially in city regions of growing nations in which industrialization and developing range of motors results in launch of lot of gaseous pollution. Harmful results of pollutants encompass moderate hypersensitive reactions inclusive of infection of the throat, eyes, and nostril in addition to a few extreme problems like bronchitis, coronary heart diseases, pneumonia, lung, and irritated asthma. According to a survey, due to air pollutants, 50,000–100,000 untimely deaths every every year arise within the USA on my own, whilst in the EU, the range reaches to 300,00 and over 3,000,000 worldwide. Various types of anthropogenic emissions named as the number one pollution are pumped into the ecosystem that undergoes chemical response and similarly results in the formation of new pollution typically known as secondary pollution.

2 Related Works

Reference [1] develops AirVis, a novel visual analytics system that supports domain experts in quickly capturing and presenting the complex propagation patterns of air pollution based on graph representations; a greedy problem-solving approach, which takes the locally optimal decision at each level, is used to build and develop this notion. The authors came up with a new visual approach for incorporating circular information in the assessment of air contamination's possible spread; with the experts' help, they depicted the more difficult situations in the geography-driven analysis of spread processes and produced a set of broad design guidelines for visual exploration. Reference [2] presents a new, observation-based air quality model. Genetic algorithms and nearest neighbour algorithms are used to support this development: the genetic algorithm is a framework for solving optimisation problems based on natural selection, whilst the nearest neighbour algorithm classifies a sample based on the class of its nearest neighbour. Complex networks may exhibit a wide range of spatially dispersed air quality types. Reference [3] presents a convolutional, bidirectional gated recurrent unit architecture as a short-term forecasting model for PM2.5 concentration. Both a genetic algorithm and a support vector machine were applied to carry out the task; support vector machines can perform classification and regression analysis by separating data into its component classes. The results are compared with those of conventional AI models and standard deep learning models. Reference [4] shows a link between indoor air pollution (IAP) exposure and the associated risks; a support vector machine and backpropagation computations were used to accomplish this task. For better life and health, the recommended solutions for IAQ should involve artificial intelligence. Reference [5] constructs an air quality monitoring framework based on the Internet of Things and uses a classification algorithm to realise real-time air quality prediction. That study makes use of a Bayesian approach: the Bayesian calculation is



based on Bayes' theorem, which is used to determine the probability of a hypothesis based on evidence or observations. Reference [6] establishes an air quality monitoring framework using the Internet of Things and cloud computing, introducing cloud computing to speed up information processing. The BPTT (backpropagation through time) formula is used to obtain the best results in this concept; it is a gradient-based approach used to train certain types of recurrent neural networks. Reference [7] proposes using IoT technology to filter, analyse, and predict future information based on a neural network's organisation of the data, using a two-layer model forecast computation. In that research, a smog model for Internet of Things monitoring was analysed and used for prediction; observation data from IoT devices are fully used in the model. Forecasts of smog pollution, early warnings, and decision analyses are all part of its knowledge base, and dynamic smog monitoring is developed to analyse and anticipate information from the Internet of Things. Reference [8]: In contrast to NILM methods, where just one point of sensing is required, ILM is based on inexpensive metering devices linked to household appliances. Despite having a higher price tag than NILM, ILM systems are more dependable and efficient. Hybrid systems that combine NILM with the individual power metering offered by smart plugs and smart appliances are likewise expected to become more common soon. That research provides a unique ILM technique for load monitoring, building an activity identification framework on top of an IoT architecture.

3 System Architecture

The Arduino UNO controller serves as the system's brain in the proposed system and manages all hardware operations that communicate with it. All sensor readings and operations are displayed on the LCD and are also monitored through IoT. Several components may be linked in this approach and operated through instructions, and the gas concentrations absorbed from the air are tracked through the IoT (Fig. 1).
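The following is a minimal, hypothetical sketch of the sensing loop implied by this architecture: an MQ135 analog reading is printed (it would also be shown on the LCD and pushed to the web page) and an alarm output is raised above a threshold. The pin assignments and threshold are assumptions, and the raw ADC value is only a proxy for a calibrated PPM figure.

```cpp
// Minimal sketch for an MQ135-based monitoring loop.
// Assumptions (not from the paper): sensor analog output on A0,
// buzzer on pin 9, illustrative raw-reading alarm threshold.
const int MQ135_PIN    = A0;
const int BUZZER_PIN   = 9;
const int AIR_THRESHOLD = 400;   // illustrative alarm level (0-1023)

void setup() {
  Serial.begin(9600);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  int raw = analogRead(MQ135_PIN);      // 10-bit ADC reading, 0-1023
  Serial.print("Air quality (raw): ");
  Serial.println(raw);                  // would also go to LCD/web page
  // Sound the alarm whenever the reading exceeds the set level.
  digitalWrite(BUZZER_PIN, raw > AIR_THRESHOLD ? HIGH : LOW);
  delay(1000);
}
```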



Fig. 1 System architecture

4 Modules

4.1 Arduino

In the realm of consumer electronics, Arduino is a popular open-source platform that emphasises simplicity of both hardware and software. Light on a sensor, a finger on a button, or a tweet may all be read by an Arduino board and converted into actions such as starting a motor, lighting an LED, or posting a message to Twitter. By transmitting a predetermined set of instructions to the board's microcontroller, you may instruct the board to carry out a certain set of actions; you do this with the help of the processing-based Arduino software (IDE) and the wiring-based Arduino programming language. Arduino has been the brains behind many projects over the years, from simple toys to highly advanced medical equipment. Students, enthusiasts, artists, programmers, and experts from all around the world have banded together around this open-source platform to share their knowledge and expertise with one another and with others just getting started in the field (Fig. 2).



Fig. 2 Arduino

Fig. 3 Bluetooth

4.2 Bluetooth

Bluetooth is used for many applications, such as wireless headsets, game controllers, wireless mice, wireless keyboards, and many more consumer applications (Fig. 3). By adhering to the IEEE 802.15.1 specification, wireless personal area networks (PANs) may be constructed. To transmit data wirelessly, it employs a technique called frequency-hopping spread spectrum (FHSS) radio transmission.

5 Implementation

5.1 Hardware Requirements

To run this project, both hardware and software requirements must be met. The specifications for a piece of hardware are called hardware requirements. Most equipment simply has compatibility or operating system requirements; a printer, for example, should work with Windows XP or later versions.



Fig. 4 Liquid crystal display

5.2 Liquid Crystal Display

The liquid crystal display (LCD) screen is a versatile electronic display. Because of its versatility and low cost, the 16 × 2 LCD module is widely used in electronic products. These modules are superior to seven-segment and other multi-segment LEDs for several reasons: a 16 × 2 LCD has room for 16 characters across each of its two lines; LCDs are low-priced, simple to program, and can show any kind of character, including custom characters designed for a specific use (unlike seven-segment displays). The LCD displays each character as a 5 × 7 pixel matrix. This LCD has two registers, called the Command and Data registers; all the commands sent to the LCD are recorded in the command register (Fig. 4).

5.3 Ethernet Connection (LAN) or a Wireless Adapter (Wi-Fi)

People use hard drives to store data that must not be lost; if one or more platters sit inside a hermetic case, the data are safe. With every spin of the platter, a magnetic head moves over the platters and writes data onto them as they pass. When a person wants to use a computer, they need an operating system to help them do so. Software on a computer can run a web browser and a word processor because it can read keystrokes and mouse movements. In order to install an operating system, you need a hard drive to put it on.



5.4 Internet of Things

A network of everyday objects, including furniture, equipment, and buildings, that incorporate electronics, software, sensors, actuators, and a network connection makes up what is known as the Internet of Things (IoT). As per the 2013 Global Standards Initiative on Internet of Things (IoT-GSI), the IoT is "the infrastructure of the information society". Through the use of existing network infrastructure, objects can be sensed and remotely controlled by the IoT, opening up opportunities for a closer integration of the physical world with computer-based systems and achieving improvements in accuracy, efficiency, and economic benefit. When augmented with sensors and actuators, the IoT becomes a member of the broader class of cyber-physical systems, which also includes smart grids, smart homes, intelligent transportation, and smart cities. Each object has an embedded computer system that makes it individually recognisable, yet it can still communicate with other things via the current internet infrastructure.

5.5 Sensors

Sensors are also of interest because, purchased at a very low price, they can deliver data in near real time whilst using very little power. Sensors have made it possible for us to analyse air quality with new temporal and spatial resolution, and thinking about our surroundings in this way changes how we reason about them. In order to better understand air quality and how it influences the environment and our health, researchers are advocating the use of sensor-based devices to measure air quality. Through citizen science, people also help their communities cut down on air pollution risks by finding ways to reduce waste. Progress in small electronics and microfabrication has made it easy and cheap to manufacture many units at once for a very low price. It is possible to buy sensors that can tell you how much pollution and how much particulate matter (PM) is in the air; these sensors can also characterise substances such as black carbon present in the air. However, it is not yet clear how well some of the commercial sensors work: many do not perform well when compared with other ways of obtaining data (Figs. 5, 6, 7 and 8).



Fig. 5 MQ-2

Fig. 6 MQ-4

Fig. 7 MQ-7

Fig. 8 MQ-135

6 Clustering Methods

Unsupervised learning is a way to learn without being told what to do: we search for patterns in datasets that do not have labelled answers. It is used most of the time to look for important structure, explanatory processes, generative features, and groups in a set of examples. Using



clustering, data points can be divided into smaller groups whose members are closely related to each other and more distinct from the members of other groups. A cluster is simply a set of points that are similar to one another and distinct from the rest; a concrete instance of this idea is sketched below.
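The snippet below clusters a handful of illustrative sensor readings with a simple one-dimensional k-means. The paper does not name a specific clustering algorithm, so this is only a representative sketch; the data values and the choice of k = 3 are assumptions.

```cpp
// A minimal 1-D k-means sketch, clustering air-quality readings into
// k groups (e.g., "good" / "moderate" / "poor"). Data are illustrative.
#include <cstdio>
#include <cmath>
#include <vector>

int main() {
  std::vector<double> x = {120, 135, 140, 410, 425, 460, 780, 800, 815};
  const int k = 3;
  // Seed the centroids with the first, middle, and last readings.
  std::vector<double> c = {x.front(), x[x.size() / 2], x.back()};
  std::vector<int> label(x.size(), 0);

  for (int iter = 0; iter < 20; ++iter) {
    // Assignment step: attach each reading to its nearest centroid.
    for (std::size_t i = 0; i < x.size(); ++i)
      for (int j = 0; j < k; ++j)
        if (std::fabs(x[i] - c[j]) < std::fabs(x[i] - c[label[i]]))
          label[i] = j;
    // Update step: move each centroid to the mean of its members.
    for (int j = 0; j < k; ++j) {
      double sum = 0; int n = 0;
      for (std::size_t i = 0; i < x.size(); ++i)
        if (label[i] == j) { sum += x[i]; ++n; }
      if (n > 0) c[j] = sum / n;
    }
  }
  for (int j = 0; j < k; ++j)
    std::printf("cluster %d centroid: %.1f\n", j, c[j]);
  return 0;
}
```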

7 Result and Findings

The mobile app is used to display the readings for all the gases absorbed by the instrument, and it even sends a notification message to the mobile device as a warning alarm. It helps users understand how dangerous the air quality is becoming.

8 Conclusion

A model for assessing air quality is created using the suggested framework. BP neural network technology and data mining methods are combined in this model. In order to examine the impact components, we employ data mining methodologies and can filter out the indicator elements whose association is not strong enough. Prediction accuracy is raised whilst the calculation time of the model estimates is reduced by a significant margin. To show environmental policy agencies that the suggested air quality prediction model is more accurate, the framework demonstrates the validity and applicability of the prediction.

9 Future Enhancements

In the future, the project could be enhanced by making the system AI-controlled. We could also install sensors at traffic signals and analyse the pollution there.



References

1. Nayak R, Panigrahy MR, Rai VK, Rao TA (2022) IoT based air pollution monitoring system 3(4)
2. Kaur N, Mahajan R, Bagai D (2021) Air quality monitoring system based on Arduino microcontroller 5(6)
3. Sai PY (2020) An IoT based automated noise and air pollution monitoring system 6(3)
4. Ezhilarasi L, Sripriya K, Suganya A, Vinodhini K (2019) A system for monitoring air and sound pollution using Arduino controller with IoT technology 3(2)
5. Blum J, Exploring Arduino: tools and techniques for engineering wizardry, 1st edn
6. Deshmukh S, Surendran S, Sardey MP (2018) Air and sound pollution monitoring system using IoT 5(2018)
7. Cohen AJ et al (2017) Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: an analysis of data from the Global Burden of Diseases Study 2015. Lancet 389(10082):1907–1918
8. Mocrii D, Chen Y, Musilek P (2018) IoT-based smart homes: a review of system architecture, software, communications, privacy and security. Internet Things 1–2:81–98. https://doi.org/10.1016/j.iot.2018.08.009

Modeling of Order Quantity Prediction using Soft Computing Technique: A Fuzzy Logic Approach Anshu Sharma, Sumeet Gill, and Anil Kumar Taneja

Abstract In the present work, we intend to predict the ideal order size for electronic goods retailers. We categorize the three market-related variables that significantly impact inventory and use a fuzzy logic controller approach to determine the ideal order size and to develop useful solutions. We have worked on the optimum ordering quantity for retailers of the iPhone, with the iPhone 13 taken as the product for this modeling. We classify the three parameters that influence the inventory, observed from the market situation, price data, and average monthly sales of the iPhone 13.

Keywords Fuzzy inference system · Fuzzy sets · Decision-making · Fuzzy rules · Optimum order quantity

A. Sharma (B) · S. Gill
Department of Mathematics, Maharshi Dayanand University, Rohtak, Haryana 124001, India
e-mail: [email protected]; [email protected]

S. Gill
e-mail: [email protected]

A. K. Taneja
Galgotias University, Greater Noida, UP 203201, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_17

1 Introduction

In an ideal world, everything would go with perfect timing. Unfortunately, in business this is not always the case, and one must cope with unexpected eventualities. Unless the organization makes smart choices at the right moment, operations in any company run the risk of becoming chaotic. Organizational plans generally focus on raising customer service standards and lowering operating costs to preserve profit margins. The continual fluctuation of consumer demand makes inventory keeping full of challenges. Every organization faces the question of how much stock should be kept in warehouses. If the store has too much stock, then




it runs the risk of developing dead stock. On the other hand, if it has too little, it will not be able to meet client requests, and clients will migrate to other sellers. For this reason, a solid analytical technique is necessary to keep a structured inventory up to date. Economic Order Quantity (EOQ)-based inventory models were the first; however, they are not appropriate for real-world situations. In our study, we work on a fuzzy inference system, where fuzzy antecedents are considered based on the market situation. Zadeh [8] proposed the fuzzy set theory, which has been applied in models of inventory management. Fuzzy logic control is used in many fields, such as engineering, robotics, and medical sciences. A fuzzy logic technique was therefore suggested, with a study of the iPhone 13, to solve the inventory control problem under uncertainty. Sona et al. [5] analyzed the problem of optimum procurement in the automobile industry using the fuzzy controller approach. Park [2] constructs a multi-item inventory model in both crisp and fuzzy environments with constraints on storage area, the number of orders, and production cost. Taghavifar and Mardani [6] proposed a fuzzy logic model using wheel load and tire inflation pressure as input parameters, each with five membership functions. Using a hybrid ANN and fuzzy logic neural model, Kumaran and Kailas [1] forecast the future closing price of stocks. This paper has five sections: Sect. 1 is the introduction; Sect. 2 describes the parameters used in the fuzzy system; Sect. 3 presents the methodology; Sect. 4 contains the case study, followed by the results, discussion, and conclusion.

2 Parameters Used

We have proposed a fuzzy expert system for predicting order quantity based on fuzzy logic techniques. Below is the fuzzy inference system, which we form from three inputs, Time, Discount%age, and Stock level, and one output, the ordering quantity. The three factors considered for the fuzzy inference system are as follows:

2.1 Time Elapsed After a New Launch

The price of the existing iPhone model continues to decrease as the time of the new launch draws near. This factor affects both the ordering decision of retailers and consumer purchasing behaviour: as the time elapsed after a recent launch increases, the price of the existing models decreases, and the resulting discount percentage affects customers' purchasing behaviour.



2.2 Stock Level Present in the Storage

One of the critical determinants of inventory activity is stock availability, i.e., knowing how much inventory is available to meet demand. Careful consideration of this element may help to decrease excess stock and stock-out situations.

2.3 Discount Percentage

A price cut is a fairly typical marketing strategy: it attracts clients by providing a significant incentive and encourages consumers to purchase the advertised goods right away. When a significant price discount on a product is offered, customers perceive a larger amount of benefit. This discount percentage, in turn, affects the ordering decision of retailers and works as one of the determinants of order quantity.

3 Methodology

In this study, we employ a methodology called a "fuzzy logic controller" to handle uncertain data. The proposed controller uses a fuzzy inference system, the Mamdani inference system, to simulate the qualitative elements of quantitative studies and of human knowledge and reasoning processes. Fuzzy logic deals with approximate rather than precise reasoning; it is a type of multi-valued logic developed from fuzzy set theory. Membership functions, often referred to as characteristic functions, which assign each object a degree of membership ranging from zero to one, are the defining characteristic of fuzzy sets [3]. We initially turn our parameters into fuzzy sets. Then we create a fuzzy logic controller to analyze the inventory choice and define fuzzy inference rules for the three input parameters. The FIS rules analyze the inventory ordering decision (output) using the three parameters as inputs. The FLC makes it easier to display the results and evaluate output performance. In the FIS, there are four steps (a minimal sketch of these steps follows the list):

1. Fuzzification of the input parameters.
2. IF–THEN rules.
3. System of inference.
4. Defuzzification of the output (Fig. 1).
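To make the four steps concrete, here is a minimal, single-input Mamdani-style sketch with triangular membership functions and centroid defuzzification. It is only illustrative: the membership peaks and the two rules are simplified assumptions loosely based on Tables 1 and 2, whereas the actual system uses three inputs and 14 rules built in MATLAB's fuzzy logic toolbox.

```cpp
// Minimal Mamdani-style sketch: one input (months since launch) and
// one output (order quantity, units). Ranges simplified from the paper.
#include <cstdio>
#include <algorithm>

// Triangular membership function with feet a, c and peak b.
double tri(double x, double a, double b, double c) {
  if (x <= a || x >= c) return 0.0;
  return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
}

int main() {
  double months = 6.0;  // crisp input

  // Step 1: fuzzification of the input.
  double muLess = tri(months, 1.25, 3.7, 6.2);
  double muHigh = tri(months, 4.0, 6.9, 9.8);

  // Steps 2-3: IF-THEN rules with Mamdani (min) inference, e.g.
  //   IF time is Less THEN order is Less
  //   IF time is High THEN order is Moderate
  // Step 4: centroid defuzzification over the output universe 0-1000.
  double num = 0, den = 0;
  for (double q = 0; q <= 1000; q += 1.0) {
    double lessOut = std::min(muLess, tri(q, 100, 300, 500));
    double modOut  = std::min(muHigh, tri(q, 370, 620, 870));
    double mu = std::max(lessOut, modOut);   // aggregate rule outputs
    num += q * mu;
    den += mu;
  }
  std::printf("order quantity: %.0f units\n", den > 0 ? num / den : 0.0);
  return 0;
}
```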

For this, we classify the collected dataset into ranges and define the fuzzy inference rules for the three input parameters accordingly. The success of this methodology is determined by the rules; here, we base the rules on data and



Fig. 1 Fuzzy logic controller

knowledge from retailers of the product. The IF–THEN format is typically used to represent rules: situations are described by linguistic variables and handled by fuzzy IF–THEN rules. The role of fuzzy mapping rules is to describe the connection between fuzzy input and output variables; the narrow sense of fuzzy logic serves as the foundation of a fuzzy implication rule [7]. An IF–THEN rule functions similarly to the way people think. During defuzzification, we must determine how to handle the fuzzified data obtained from the inference system; here, we use the centroid method for defuzzification, which gives the required result.

4 Case Study (iPhone 13)

We consider the iPhone 13 for constructing the fuzzy inference system. For this, we have considered selling price data of the iPhone 13 from its launch date to date, along with average monthly sales data from a website and an application [5]. Based on this, we estimate the ordering amount, which serves as the output of the proposed system. From these data, we set the ranges and, accordingly, the membership values of the input and output units, with the order amount as the output. During this process, we set the IF–THEN rules according to the available data, which help us build the inference system; a total of 14 rules were implemented. Here, we calculate the order quantity by considering three linguistic factors: time, discount percentage, and stock availability. The variable discount percentage is the average monthly discount. The table below contains information about the input variables (Table 1). For example, if there is stock of 400–550 units available, the material stock available is medium; the other classifications of the linguistic variables can be understood likewise (Table 2). The following figures (Figs. 2 and 3) provide the fuzzy sets for the input variable Time and the fuzzy sets for the output variable, i.e., OrderDecision.

Table 1 Classification of input variables with corresponding range values

Linguistic variables (input)   Linguistic terms   Range of values
Time (in months)               Very_less          0.57–2.88
                               Less               1.25–6.2
                               High               4.0–9.8
                               Very_high          7.27–12
Discount%age (in %age)         Low                0–6.96
                               Medium             2.5–17.8
                               High               11.8–20
Stock (in units)               Less               0–155
                               Medium             212–802
                               Sufficient         578–1000

Table 2 Classification and range values of the output variable

Linguistic variables        Linguistic terms   Range of values
OrderDecision (in units)    Very_less          0–400
                            Less               100–500
                            Moderate           370–870
                            Max                700–1000

Fig. 2 Fuzzy sets of input variable time

We can obtain the fuzzy sets for the other input variables of the fuzzy inference system in a similar way. The rule base of our fuzzy inference system, which links the three input variables to the output variable, is defined in the following figures; these rules provide the knowledge for our system. The fuzzy inference mechanism and its associated rules are shown in Figs. 4 and 5: Fig. 4 gives the schematic diagram for the fuzzy set "Less" of the input variable time, and Fig. 5 shows the knowledge base rules.



Fig. 3 Fuzzy sets for output variable order decision

Fig. 4 Schematic diagram for fuzzy set “Less” of input variable time

Fig. 5 Knowledge base rules (1–14)

According to these rules, when little time has elapsed since a new product launch, the available discount is low, and sufficient stock is available at the retailer's end, the retailer will order a very small quantity. Additionally, the amount to be ordered is at its maximum if the Time since New Launch is High, the Discount%age is High, and



the Stock availability is Low. Similarly, we can comprehend the other rules easily. These rules are assessed based on the market situation.

5 Results and Discussion

The rule view and the surface view of the proposed system are shown in Figs. 6 and 7, respectively. Here, we give an input of six months to Time and 6.7 to Discount%age, with 476 units of Stock availability, which gives us the decision to order 610 units, belonging to the moderate category of the output fuzzy set. In the same way, we give different inputs to our system, and the results are shown in Table 3.

Fig. 6 Rule view

Fig. 7 Surface view



Table 3 Output for different values of input parameters

S. No   Time (months)   Discount%age (%age)   Stock (units)   Value (units)   OrderDecision
1       1               0                     100             900             Maximum order
2       2               0                     400             690             Order moderately
3       3               3                     374             610             Order moderately
4       5               3.1                   650             580             Order moderately
5       6               6.7                   476             610             Order moderately
6       7               5.5                   752             500             Order moderately
7       8               10.63                 451             600             Order moderately
8       9.7             8.5                   830             50              Order less
9       10.1            9.03                  782             110             Order moderately
10      11              16.6                  180             500             Order moderately

6 Conclusion

In the above-mentioned work, we took into account three major factors: Time, Discount%age, and Stock availability. These characteristics are not constant and can be altered. The cost of the goods, the presence of alternatives in the market, the product's configuration, and other aspects should all be taken into consideration by a retailer when determining the optimal order size. The fuzzy logic toolbox in MATLAB is used to evaluate the fuzzy system in accordance with the input–output variables. The work can be extended further with the other elements influencing the market-based ordering decision.

References

1. Kumaran K, Kailas A (2012) Prediction of future stock close price using proposed hybrid ANN model of functional link fuzzy logic neural network. IAES Int J Artif Intell (IJ-AI) 1(1):25–30. https://doi.org/10.11591/ij-ai.v1i1.362
2. Park KS (1987) Fuzzy set theoretic interpretation of economic order quantity. IEEE Trans Syst, Man Cybern 6:1082–1084. https://doi.org/10.1109/TSMC.1987.6499320
3. Play.google.com (2022). https://play.google.com/store/apps/details?id=com.keepa.mobile&hl=en_IN&gl=US



4. Setyono A, Aeni SN (2018) Development of decision support system for ordering goods using fuzzy Tsukamoto. Int J Electr Comput Eng (IJECE) 8(2):1182–1193. https://doi.org/10.11591/ijece.v8i2.pp1182-1193
5. Sona P, Johnson T, Vijayalakshmi C (2018) Design of an inventory model—fuzzy logic controller approach. Int J Pure Appl Math 119(9):41–51. https://www.researchgate.net/publication/325429142_Design_of_an_inventory_model_-_fuzzy_logic_controller_approach
6. Taghavifar H, Mardani A (2014) Fuzzy logic system based prediction effort: a case study on the effects of tire parameters on contact area and contact pressure. Appl Soft Comput 14:390–396. https://doi.org/10.1016/j.asoc.2013.10.005
7. Yen J, Langari R (2003) Fuzzy logic: intelligence, control, and information. Pearson Education (Singapore) Pte. Ltd., Indian Branch, 482 F.I.E. Patparganj, Delhi 110092, India
8. Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353

Development of Classification Framework Using Machine Learning and Pattern Recognition System Kapil Joshi, Ajay Poddar, Vivek Kumar, Jitendra Kumar, S. Umang, and Parul Saxena

Abstract The technique of identifying patterns with the aid of a machine learning system is called pattern recognition: the classification of data based on previously acquired knowledge or on statistical information extracted from patterns and/or their representation. Pattern recognition, in which some object is measured and classified, is critical in many areas, including surveillance cameras, access control systems, biometric data, interactive game apps, and human–computer interaction. In this article, we explain the application of a multi-pattern recognition framework in various steps and use the classification framework to identify object intensity using machine learning. Our study also compares, over several parameters, the results of the recognition system against the ML technique. We conclude with a proposed system for the implementation of a pattern recognition system; this work is also useful for 3D image preprocessing as well as for artificial neural networks, to improve the system's recognition rate.

Keywords Image preprocessing · Machine learning · Pattern recognition

K. Joshi (B) · V. Kumar
Department of Computer Science, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, Uttarakhand, India
e-mail: [email protected]

A. Poddar
Om Sterling Global University, Hisar-Chandigarh Road, Hisar, Haryana, India

J. Kumar
Department of Computer Science, NIT Jalandhar, Punjab, India
e-mail: [email protected]
Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, India

S. Umang
Department of Computer Applications, DSB Campus, Kumaun University, Nainital, India
e-mail: [email protected]

P. Saxena
Department of Computer Science, Soban Singh Jeena University, Campus Almora, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_18




1 Introduction

Pattern recognition can be thought of as a categorization method: its ultimate objective is to optimally separate one class from the others and extract patterns depending on specific conditions. Everywhere you look, pattern recognition is being used. Facial recognition algorithms, in particular, are already actively employed in a multitude of activities and play a significant role in the biometric identification problem. Regrettably, today's pattern recognition algorithms require extremely high computational effort, which limits the performance of their conventional implementations on general-purpose processing units [1] and restricts the areas in which such algorithms can be applied. The need to apply biometric technology to large volumes of video, in order to discover terrorists in the video streams derived from security cameras, further increases the demand for recognition performance. The use of neural network models is among the most common approaches to pattern recognition; a number of studies have looked into the modeling of convolutional neural networks and their application to the facial recognition problem [2]. Furthermore, backpropagation is one of the most frequently used algorithms for ANN learning. In our instance, the backpropagation method operates on a network of interconnected neurons, each of which takes real-valued inputs and produces a single real-valued output. Such algorithms parallelize extremely well, enabling practical deployment on graphics processing units (GPUs). Today's GPUs have thousands of threads and hundreds of cores, making them suitable for computationally intensive problems involving huge amounts of data, including this type of neural network simulation. This necessitates exploiting the GPU's hardware and software capabilities [3]. In this paper, we look at the procedure for creating a framework for neural network-based image identification and show experimental outcomes for the face detection and recognition problem. Pattern classification methodologies are commonly categorized into different segments, as shown in Fig. 1.

2 System Architecture

Image preprocessing is a desirable step that every pattern recognition system should include to enhance performance. Its job is to separate an image's interesting pattern from its background while also applying noise filtering, smoothing, and normalization to fix problems such as sharp changes in lighting direction and intensity. There are numerous techniques for preprocessing images, such as the Fermi energy-based segmentation technique. The methodologies for the preprocessing and classification stages are discussed in this study.



Fig. 1 Classification of recognition system

2.1 Methodology

The system is divided into several sections, including data loading, neural network training, object recognition or categorization, and the return of the recognized image. The system begins by loading the trained model [4] and a training dataset. The image preprocessing step comes after the data loading step: the loaded images are preprocessed through a variety of operations, such as thresholding, smoothing, filtering, resampling, and normalizing, so that the subsequent steps up to the final prediction can be made simpler and more exact [5]. Figure 2 depicts the offset-based dependence of output on input, and the overall image processing with color code conversion is shown in Fig. 3. Document image binarization (thresholding) is the procedure of converting a grayscale image to a binary representation. Binarization is the first stage of preprocessing, which converts the picture


Fig. 2 Output dependence to input using offset

Fig. 3 Representation of preprocessing technique

Document image binarization (thresholding) is the procedure of converting a grayscale image to a binary representation, and it is the first stage of preprocessing: the picture is converted into a binary image (black and white), with pixel values of 0 and 1. In most cases, a true-color scanned image (RGB) must first be converted to grayscale and then binarized against a chosen threshold. A grayscale image contains 256 different combinations of black and white, with 0 representing pure black and 255 representing pure white. The image is binarized by determining whether or not each pixel's intensity is larger than the chosen threshold: pixels above it become white, and the rest become black. Noise reduction is essential before further processing in order to remove any unwanted bit patterns that have distorted the character of the input; a variety of filtering operations, such as the median filter and the Wiener filter, can be used. Smoothing's goal is to simplify the contours of damaged and noisy input characters; some pixels are added to the picture to achieve a smooth form. Normalization is a step-by-step process. Consider the following example: suppose the image's intensity range is 100–230 and the necessary range is 0–150. Each pixel intensity is first reduced by 100, giving values from 0 to 130; every intensity value is then multiplied by 150/130 to obtain a number from 0 to 150.
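As an illustration of the thresholding and normalization steps just described, the following is a minimal NumPy sketch (not the authors' implementation); the threshold value and intensity ranges are illustrative.

```python
import numpy as np

def to_binary(gray, threshold=128):
    """Binarize a grayscale image: pixels above the threshold become 1 (white),
    the rest 0 (black)."""
    return (gray > threshold).astype(np.uint8)

def normalize_intensity(gray, new_min=0, new_max=150):
    """Linearly rescale pixel intensities to [new_min, new_max], as in the
    100-230 -> 0-150 worked example above."""
    old_min, old_max = gray.min(), gray.max()
    shifted = gray.astype(np.float64) - old_min                   # 100-230 -> 0-130
    scaled = shifted * (new_max - new_min) / (old_max - old_min)  # 0-130 -> 0-150
    return scaled + new_min

# Reproduce the worked example from the text
img = np.array([[100, 165, 230]], dtype=np.uint8)
print(normalize_intensity(img))   # [[  0.  75. 150.]]
```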


The auto-normalization process converts a picture into the frequency response of the numerical technique, defined in the form of an image file, and generates regions with the same fixed dimensions. Normalization methods seek to eliminate the different kinds of variation introduced during the writing process, resulting in standardized data. Size normalization [6], e.g., is employed to adjust the character size to a specific standard. Character recognition methods may use both lateral and vertical shape normalization.

2.2 Feature Extraction Feature extraction works on extracted facial characteristics such as the eyes, mouth, ears, and nose. In general, there are two ways to represent facial characteristics: the first uses local facial features such as the eyes, mouth, and nose; the other uses the overall facial structure, described by a rectangular region containing the eyes, mouth, and nose. Of these characteristics, the eyes and mouth are taken into account in this study. The suggested feature extraction algorithm is as follows:

i. The face column is divided into a couple of sections.
ii. For each row "r", follow steps iii and iv.
iii. The first pixel encountered on either side is (x1, y1) or (x2, y2).
iv. Measure the distance between the two pixels.
v. Using step iv, two sets of nonzero values are obtained.
vi. Calculate the maximum distance and record the actual distance.
vii. Using the corresponding pixels, calculate the following points (a sketch follows the list):

a. Distance from left eyeball to right eyeball.
b. Distance from left mouth corner to right mouth corner.
c. Distance from left eyeball to right mouth corner.
d. Distance from right eyeball to left mouth corner.
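For concreteness, here is a small sketch of step vii; the landmark coordinates are hypothetical inputs standing in for the pixel pairs located by the scanning procedure, and the function names are our own.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def facial_distance_features(left_eye, right_eye, left_mouth, right_mouth):
    """Compute the four distance features (a)-(d) from the landmark positions
    assumed to come from the row-scanning procedure above."""
    return {
        "eye_to_eye": euclidean(left_eye, right_eye),
        "mouth_to_mouth": euclidean(left_mouth, right_mouth),
        "left_eye_to_right_mouth": euclidean(left_eye, right_mouth),
        "right_eye_to_left_mouth": euclidean(right_eye, left_mouth),
    }

# Hypothetical landmark positions (x, y) in pixels
print(facial_distance_features((60, 80), (120, 80), (75, 150), (105, 150)))
```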

3 Neural Network Training After feature extraction from a given face image, a recognizer is required in order to identify the facial images retrieved from the database server. In our particular instance, the network is used to solve problems such as face recognition, face detection, and gender categorization. The first phase is the training phase, in which we manually select the classifier outputs that the programming complex must produce [7]. The features extracted in the second step are used for training.


A customized Levenberg–Marquardt algorithm and a CNN method were used to train the network. Our purpose is to minimize the number of hidden layers while achieving high recognition and classification accuracy and spending as little time as possible training the neural network; training time increases as the number of hidden neurons grows. In this research, we modify existing algorithms to eliminate their flaws and to exploit GPU parallel processing. After training, the neural network can be used for recognition or categorization. Figure 4 depicts the system's composition for data loading. The program can work with both grayscale and color images. We used a modified tone correction algorithm to improve the convenience of image operations. Its essence is the following: to begin, the lowest pixel brightness across the whole image is determined and subtracted from the brightness of every pixel, so that the lowest possible brightness equals 0. The highest radiance value M is then found, and every pixel brightness is multiplied by 255/M. The filter module is used to filter the image in the internal representation. Before applying the direct Fourier transform, each pixel goes through the following transformation:

Fig. 4 Basic diagram of data loading with informative image

$$A'(x, y) = (-1)^{x+y} A(x, y) \qquad (1)$$

where A is the image and x and y are coordinate positions. This transformation moves the origin of the Fourier transform of an M×N image to the center coordinates in the frequency domain [8]. The transform's purpose is to present the image's Fourier transform in a form suitable for visualization. Figure 5 depicts the classical machine learning process: a user chooses data files from the training dataset, a set of requirements, and a set of tests before applying the algorithm. The user additionally adjusts the parameters of the neural network representation depending on the situation. An object is considered recognized when the mean square error produced by the neural network is less than a predefined value; this value is set for the training set by the "Acceptable error" parameter and for the test set by the "Acceptable error in testing" parameter. The study's evaluation scheme with the neural network is shown in Fig. 5. Training of the learning algorithm uses the Levenberg–Marquardt technique. At first, a neural network is built based on the chosen parameters, and input data from the referenced files is loaded for learning.

Fig. 5 Evaluation scheme with neural network
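A minimal sketch of the tone correction and the centering transform of Eq. (1) is given below, assuming 2-D grayscale arrays; this is an illustration, not the paper's GPU implementation.

```python
import numpy as np

def tone_correct(img):
    """Contrast stretch described above: subtract the global minimum brightness,
    then scale so the maximum radiance M maps to 255."""
    img = img.astype(np.float64)
    img -= img.min()
    m = img.max()
    return img * (255.0 / m) if m > 0 else img

def center_spectrum(img):
    """Multiply each pixel by (-1)^(x+y), Eq. (1), so the subsequent 2-D
    Fourier transform is centered for visualization."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    return img * ((-1.0) ** (x + y))

img = np.random.randint(0, 256, (8, 8)).astype(np.float64)
spectrum = np.fft.fft2(center_spectrum(tone_correct(img)))
```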


The weights and regularization parameter are then initialized. Each training epoch begins with a pass over all of the training components and the calculation of the corresponding Jacobian matrix and neural network errors. The gradient and the approximated Hessian matrices are then computed. When the graphics accelerator is used, all matrix algebra operations on the Hessian and Jacobian matrices are performed by invoking the CUBLAS library [9]. NN training begins after determining the initial objective function value, which is calculated using formula (2) below:

$$F(\theta) = \alpha E_\theta + \beta E_D \qquad (2)$$

To determine the weight shift at each iteration of the algorithm, it is essential to solve Eq. (3):

$$\left( J^T J + \lambda \, \mathrm{diag}(J^T J) \right) \Delta\theta = J^T E \qquad (3)$$
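The following NumPy sketch shows one damped update of Eq. (3); the Jacobian J and error vector e are synthetic stand-ins for the quantities computed during an epoch.

```python
import numpy as np

def lm_step(J, e, lam):
    """Solve (J^T J + lam * diag(J^T J)) dtheta = J^T e  -- Eq. (3).
    J: Jacobian of network errors w.r.t. the weights, e: error vector."""
    JtJ = J.T @ J
    A = JtJ + lam * np.diag(np.diag(JtJ))   # damped approximate Hessian
    g = J.T @ e                             # gradient term
    return np.linalg.solve(A, g)

# Toy example with a random Jacobian and error vector
rng = np.random.default_rng(0)
J = rng.normal(size=(10, 4))
e = rng.normal(size=10)
print(lm_step(J, e, lam=0.01))
```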

To compute the Bayesian regularization hyperparameters, the sum of the eigenvalues of the Hessian matrix and an approximation of its inverse must be found; matrix transformations are used for this. The method is based on the block LU-decomposition of the previous iteration, which is used to locate the inverse matrix. The LU-decomposition of a matrix A of size M×N is a factorization A = LU in which L is a lower triangular matrix and U is an upper triangular matrix. Gaussian elimination can be used to find the exact specification of the LU-decomposition. Distinguishing a block A00 in A, e.g., we write:

$$A = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix} = \begin{pmatrix} L_{00} & 0 \\ L_{10} & L_{11} \end{pmatrix} \cdot \begin{pmatrix} U_{00} & U_{01} \\ 0 & U_{11} \end{pmatrix} \qquad (4)$$

Then,

$$\begin{cases} A_{00} = L_{00} U_{00} \\ A_{10} = L_{10} U_{00} \\ A_{01} = L_{00} U_{01} \\ A_{11} = L_{10} U_{01} + L_{11} U_{11} \end{cases} \qquad (5)$$

As a result, the LU-decomposition of an M×N matrix is reduced to the decomposition of an (M−r) × (N−r) matrix. Continuing the procedure, the decomposition of the matrix A can be cut down to decompositions of matrices whose size does not exceed r×r. For the leading component of the matrix in the LU-decomposition to be distinct from 0, a permutation of rows (columns) must be produced during the decomposition process. As a result, in addition to the factors L and U, a permutation matrix P is obtained, such that PA = LU. In order to use the resulting matrices L and U to find the inverse matrix, their rows (columns) must be rearranged in conformance with this permutation matrix.


To find the inverse matrix using the LU-decomposition, we solve the equation in which the unknown is A^{-1}:

$$LU A^{-1} = I \qquad (6)$$

Here I is the identity matrix. Because L and U are triangular matrices, the problem can be reduced, by a change of variable, to two problems that are solved with a much simpler method at low computational cost:

$$\begin{cases} U A^{-1} = X \\ L X = I \end{cases} \qquad (7)$$
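As a CPU-side illustration of Eqs. (6)–(7), the sketch below inverts a small matrix with SciPy, once with the library's LU solver and once with the two explicit triangular systems; the block decomposition and CUDA details of the paper are omitted.

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve, solve_triangular

A = np.array([[4.0, 3.0], [6.0, 3.0]])
I = np.eye(2)

# Route 1: library LU solver, equivalent to Eq. (6)
lu_piv = lu_factor(A)
A_inv = lu_solve(lu_piv, I)

# Route 2: the two triangular systems of Eq. (7).
# scipy returns A = P @ L @ U, so L U A^{-1} = P^T I.
P, L, U = lu(A)
X = solve_triangular(L, P.T @ I, lower=True)    # forward substitution: L X = P^T I
A_inv2 = solve_triangular(U, X, lower=False)    # back substitution:   U A^{-1} = X
print(np.allclose(A_inv, A_inv2), np.allclose(A @ A_inv, I))
```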

Aside from the implementation of the block LU-decomposition with CUDA, the cublasStrsm function can be used to solve the systems in (7) on the graphics accelerator. Following the solution of Eq. (7), the neural network's weight increments must be tested. If increasing the weights reduces the error, the increments are accepted in the training algorithm and the regularization parameter is reduced tenfold. If the objective function does not decrease, the weight increments are rejected, the regularization parameter is raised tenfold, and the equation is solved again:

$$\left( J^T J + \lambda \, \mathrm{diag}(J^T J) \right) \Delta\theta = J^T E \qquad (8)$$

After each iteration of the Levenberg–Marquardt algorithm, the hyperparameters for Bayesian regularization are recalculated using the new neural network weights. As a training stop requirement, we propose a threshold criterion on the proportion of correctly identified test set items: if the percentage of recognized test components exceeds the set threshold at any epoch, training is terminated. If the stop condition is not met during learning, training continues until the number of epochs reaches the "number of epochs" parameter.

4 Experiment Results and Discussion This section describes the data used to train the system for the face recognition problem in the experiments. The face identification, recognition, and gender classification trials were carried out on a database of face images that includes 395 different people's faces, each with 20 images. The image sizes and shooting circumstances were the same when the database was created, using the JPEG format with a resolution of 24 bits. The base [10] contains images of men and women of various ethnicities and ages.


Fig. 6 Preprocessing on camera man image

Prior to recognition, the preprocessing algorithms described above are applied to improve the quality of the input picture, as shown in Fig. 6. Face detection and identification are the two main stages of a face recognition system, illustrated in Fig. 7. During the face detection stage, the system searches for any faces; the face is then captured, and image processing converts the facial image to black and white. A common face detection feature in the detection phase is a group of adjacent rectangles located above the eye and cheek regions. The positions of these rectangles are described relative to a target object that functions as a reference frame; in our case, a person's face can be discerned at various croppings. Figure 8 depicts the implemented results. Following the detection of a face, recognition and verification are carried out as a subsequent step: the detected and processed facial image is compared with a dataset of images to determine who that individual is. Figure 7 depicts the process of recognizing faces, which works by establishing the identity of the person being recognized.

Fig. 7 Face detection using ML and ANN


Fig. 8 Testing dataset as face recognition in multiple segments

5 Conclusion In this paper, we discussed feature extraction and described a framework and implementation process for pattern recognition. The framework includes several steps, such as image processing techniques, image normalization, and neural network training, to improve recognition quality. During the research, a pattern recognition framework was created and tested on the face recognition problem across multiple datasets. The proposed model will be extended in future work.

References
1. Xue G, Liu S, Ma Y (2020) A hybrid deep learning-based fruit classification using attention model and convolution autoencoder. Complex Intell Syst 1–11
2. Khan S, Islam N, Jan Z, Din IU, Rodrigues JJC (2019) A novel deep learning based framework for the detection and classification of breast cancer using transfer learning. Pattern Recogn Lett 125:1–6
3. Masud M, Sikder N, Nahid AA, Bairagi AK, AlZain MA (2021) A machine learning approach to diagnosing lung and colon cancer using a deep learning-based classification framework. Sensors 21(3):748
4. Kastrati Z, Imran AS, Kurti A (2019) Integrating word embeddings and document topics with deep learning in a video classification framework. Pattern Recogn Lett 128:85–92
5. Taek Lee J, Chung Y (2017) Deep learning-based vehicle classification using an ensemble of local expert and global networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 47–52
6. Bishop CM, Nasrabadi NM (2006) Pattern recognition and machine learning. Springer, New York, vol 4, no 4, p 738
7. Alfarisy AA, Chen Q, Guo M (2018) Deep learning based classification for paddy pests & diseases recognition. In: International conference on mathematics and artificial intelligence, pp 21–25
8. Ambore B, Gupta AD, Rafi SM, Yadav S, Joshi K, Sivakumar RD (2022) A conceptual investigation on the image processing using artificial intelligence and tensor flow models through correlation analysis. In: 2022 2nd international conference on advance computing and innovative technologies in engineering (ICACITE). IEEE, pp 278–282
9. Joshi K, Diwakar M, Joshi NK, Lamba S (2021) A concise review on latest methods of image fusion. Recent Adv Comput Sci Commun (Formerly: Recent Patents on Computer Science) 14(7):2046–2056
10. Diwakar M, Tripathi A, Joshi K, Sharma A, Singh P, Memoria M (2021) A comparative review: medical image fusion using SWT and DWT. Mater Today: Proc 37:3411–3416
11. Verma SS, Prasad A, Kumar A (2022) CovXmlc: high performance COVID-19 detection on X-ray images using multi-model classification. Biomed Signal Process Control 71:103272

Human Part Semantic Segmentation Using Custom-CDGNet Network Aditi Verma, Vivek Tiwari, Mayank Lovanshi, Rahul Shrivastava, and Basant Tiwari

Abstract Human body part segmentation is a semantic segmentation task on human images that entails labelling the pixels of an image with their respective classes. The human body is composed of hierarchical structures in which each body part in the image has a particular individual location. Using this knowledge, a class distribution technique was developed by collecting the primary human parsing labels in the vertical and horizontal dimensions and applying them as supervision. The proposed network exploits the underlying position distribution of the classes to make precise predictions with the help of these classes. We produce a distinct spatial guidance map by combining these guided features; this guidance map is then superimposed on our backbone network. Extensive experiments were executed on a large data set, i.e. LIP, and evaluation was done using the mean IoU and MSE-loss metrics. The proposed deep learning-based model surpasses the baseline model and adjacent state-of-the-art techniques with a 2.3% hike in pixel accuracy and a 1.4% increase in mean accuracy. Keywords Human body part semantic segmentation · CDGNet · LIP · Human parsing

A. Verma (B) · V. Tiwari · M. Lovanshi International Institute of Information Technology (IIIT), Naya Raipur, India e-mail: [email protected] V. Tiwari e-mail: [email protected] M. Lovanshi e-mail: [email protected] R. Shrivastava Sagar Institute of Science Technology & Research, Bhopal, India e-mail: [email protected] B. Tiwari MIT World Peace University, Pune, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_19


1 Introduction Human body part semantic segmentation is also known as human parsing. Its aim is to segment the human body into fine-grained components like the right leg, left leg, hat, face, hair, scarf, upper clothes, and many more, as depicted in Fig. 1. Due to factors like the intricate patterns and textures of clothing, the changeable positions of people, and the scale variation of various semantic pieces, human part semantic segmentation falls under scene parsing, where pixel categorization is carried out for particular images. Understanding the complex semantic structure of humans is necessary for potential applications such as image editing, activity recognition, emotion recognition, person reidentification, human behaviour analysis, and advanced computer vision applications [1]. Human parsing is close to semantic segmentation in predicting the labels of pixels in the image. Earlier image parsing studies were primarily concerned with the spatial scale of settings; however, due to the structural limitations of convolutional layers, the spatial surroundings only provide a limited amount of contextual information. Some approaches, such as ACFNet [2], organize pixels into regions, add representations of the region to the pixel representations, and accomplish competitive performance on a variety of demanding benchmarks associated with semantic segmentation. These strategies, however, did not intentionally examine each category's spatial distribution, limiting their capacity to apprehend the distribution of separate classes and, therefore, leaving the distribution rules unable to help parsing.

Fig. 1 Sample images with human part semantic segmentation


Nowadays, advances in FCNNs have produced various successful frameworks for human parsing tasks. Regardless, many tasks like semantic segmentation and object detection with a CNN-based approach require the availability of thoroughly annotated images for training purposes, and a large data set is necessary to train a framework for human parsing properly. When creating an image data set, challenges like occlusion, low quality, varying image sizes, and overlapping can lead to poor prediction. Human-centric vision [3–5], human–robot interaction [6], and fashion analysis rely on pixel-level semantic segmentation of the human body, which is an essential task in human comprehension. In contrast to previous studies, only a few human parsing methods have been produced for an instance-aware environment. For instance-aware human parsing, the two paradigms currently employed are the bottom-up and top-down approaches. Top-down approaches frequently begin by locating human instances [7], which are subsequently processed at a fine-grained level. Bottom-up human parsers, on the other hand, are driven by instance segmentation techniques: they perform pixel-wise grouping and instance-agnostic parsing simultaneously. Their grouping strategies span from proposal-free instance localization to graph-based superpixel association to instance edge-aware clustering. The proposed work introduces a class distribution method called Custom-CDGNet by simplifying the complex human structure in 2D space and converting it into vertical and horizontal 1D positional distributions according to their corresponding classes. These classes then teach the network positional knowledge of human parts by category. We build a new supervision signal by collecting the vertical- and horizontal-wise binarized maps from the vertical and horizontal class distributions. By gathering the actual human part semantic segmentation labels in the vertical and horizontal orientations and using them for supervision, we create per-class distribution methods. The network takes advantage of these classes and their underlying position distribution. After that, a spatial guidance map created by combining these guided features is superimposed on the backbone network. Our major contributions include: (1) introduction of the new method named Custom-CDGNet, which simplifies the complexity of human parsing methods; (2) generation of class distribution labels that utilize the inherent position distribution; (3) quantitative analysis of the proposed work against different benchmark methods. In this paper, Sect. 2 discusses the benchmarks existing in the field of semantic segmentation and human part semantic segmentation. Section 3 contains a detailed explanation of the methodology with a detailed description of the architecture, diagram, and working. Section 4 contains the experimental analysis, followed by the conclusion in Sect. 5.


2 Related Work Human part semantic segmentation has recently attracted tremendous interest, with remarkable progress from advanced deep convolutional neural networks [8] and large-scale data sets. This section overviews the related works that have shown promising results. Semantic segmentation [9] deals with clustering the parts of an image that belong to the same object class. It is a pixel-level prediction where each pixel in an image is classified according to its category. Cityscapes [10], PASCAL VOC [10], ADE20K [11], and U-Net [12] are some of the benchmarks for this task. In human part semantic segmentation, all the pixels in the human image are labelled accordingly; the technique used is similar to that of scene parsing. Many deep learning-based networks have established remarkable benchmarks for this task. Ruan et al. [13] constructed the CE2P network [13] with Resnet101 as the building block, extensively used edge detail, global spatial context information, and feature resolution, and obtained the best result in the LIP challenge, 2019. Yuan et al. [14] devised an object-contextual representation strategy for semantic segmentation and attained better performance on the LIP data set, stating that the class/label of a distinct pixel is the category of the object to which the pixel belongs. Some studies have synthesized prior human knowledge for human part semantic segmentation. Wang et al. [15] worked on the hierarchy of the human structure, assembling the hierarchy for effective human part semantic segmentation. Also, Ji et al. [16] used the inherent physiological constitution of the human body by constructing a new semantic neural tree for semantic segmentation of human body parts. Zhang et al. [17] achieved excellent results by applying grammar rules in parallel and in a cascaded way, using the human body's natural hierarchical structure and the relationship between various organs. Zhang et al. [18] merged keypoint positions with human semantic boundaries to enhance part semantic segmentation. These techniques rely on a specific human pose or a prior human hierarchical structure, making it challenging to guarantee comprehensiveness when several people are present or when unanticipated occlusions cover certain human body parts. Different approaches, Parsing R-CNN [19] introduced by Yang et al. and RP R-CNN [20], represented multi-scale features along with semantic information and an attention mechanism to improve the human visual understanding of CNNs and outperform in multi-human parsing and dense pose estimation [21].


3 Methodology This section proposes a method that converts the intricately depicted human structural information in 2D space to vertical and horizontal one-dimensional (1D) positional information with the specific matching classes. The proposed approach is motivated by CDGNet [22] and attention-based models like HANet [23], CBAM [24], and SENet [25]. CBAM and SENet apprehend the extensive context of the entire image, whereas HANet considers a height-driven attention map whose main focus is parsing urban scene images. The proposed method extends the actual human parsing labels to other guiding signals linked to classes and one-directional locations. These generated signals are crucial in leading the network to efficiently locate human body parts. It should be understood that the guiding signals do not use the attention mechanism to improve the feature presentation. Because human bodies are hierarchically built, the individual components of the human body have varied apparent distributions in both the horizontal and vertical directions. This paper proposes a class-wise distribution-guided network that predicts the distribution of classes in the vertical and horizontal dimensions while also being guided by a distribution loss. The predicted distribution characteristic of the human image is then fully utilized to improve the feature representation for human modelling.

3.1 Class Distributions In human body part segmentation, images in which each pixel carries a human part label are used as training data. From these labels, we compute the per-class position distributions of each image in the horizontal and vertical directions, referred to as the horizontal and vertical class distributions. The class distributions guide the network in understanding the context distribution of each category, allowing the network to evaluate the spatial distributions of the various categories under the constraint of the proposed distribution loss.
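A possible NumPy sketch of how such per-class horizontal and vertical distributions could be computed from a label map is shown below; the normalization choice is ours, not necessarily the paper's.

```python
import numpy as np

def class_distributions(label_map, num_classes):
    """Per-class horizontal and vertical position distributions of a parsing
    label map (H x W integer array), usable as supervision signals."""
    one_hot = (label_map[None, :, :] == np.arange(num_classes)[:, None, None])
    horiz = one_hot.sum(axis=1).astype(np.float64)   # (num_classes, W): counts per column
    vert = one_hot.sum(axis=2).astype(np.float64)    # (num_classes, H): counts per row
    horiz /= np.maximum(horiz.sum(axis=1, keepdims=True), 1)   # normalize per class
    vert /= np.maximum(vert.sum(axis=1, keepdims=True), 1)
    return horiz, vert

labels = np.zeros((4, 6), dtype=int)
labels[1:3, 2:5] = 1                                 # toy "part" region
print(class_distributions(labels, num_classes=2)[0])
```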

3.2 Distribution Network The proposed distribution method produces, for each class, a distribution that exhibits the locations of the class instances. This distribution further guides the feature representation, ultimately leading to human parsing. Taking an image frame and its features as input, we get a feature Xi of size W × H × C, where C refers to the channel size. The input feature Xi is individually squeezed in the vertical and horizontal directions to extract the directional positional properties. After extracting the directional characteristics, average pooling is


applied in the orthogonal directions to generate labels. The proposed model uses two 1D convolution networks. The first convolution network uses a kernel of size 3 with its channel number to build the horizontal and vertical class distribution features; these features are then guided by the new labels for the class distributions in both directions, along with the appropriate losses. The second one-dimensional convolution network uses a kernel of size 7 on these features to generate the horizontal and vertical channel distribution features individually. Salient distribution maps are created by activating the two convolutions with sigmoid functions rather than the softmax function. These operations, applied in sequence, generate guided features on either the horizontal or the vertical axis. They are denoted by:

$$A'_h = I'_{up}(A_h) = I'_{up}(\sigma(\mathrm{conv7}(\delta(\mathrm{conv3}(Z_h))))) \qquad (1)$$

$$A'_v = I''_{up}(A_v) = I''_{up}(\sigma(\mathrm{conv7}(\delta(\mathrm{conv3}(Z_v))))) \qquad (2)$$

where δ is the ReLU function, σ is the sigmoid function, Zh is the horizontal class feature, Zv is the vertical class feature, A'h is the horizontal guided feature, A'v is the vertical guided feature, and I'up, I''up are bilinear interpolation operations. Figure 2 presents the comprehensive architecture of the proposed work, which contains the class distribution guidance (CDG) and spatial pooling (SP) modules. The CDG module helps extract the horizontal and vertical classes, while the SP module helps remove the fixed-size constraint of the network. The proposed model has three modules, i.e. the edge module, the backbone, and the high-resolution module. The edge module helps extract the edges between human parts by using feature fusion techniques. The backbone module focuses on the extraction of features, while the high-resolution module is used to obtain the horizontal and vertical labels that are fed into the CDG and SP modules.
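To make the directional branch concrete, here is a hedged PyTorch sketch of one (vertical) guidance path following the conv3–ReLU–conv7–sigmoid chain of Eqs. (1)–(2); the channel sizes, the pooling resolution, and the omission of the interpolation step are simplifications of ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionalGuidance(nn.Module):
    """One vertical guidance branch: squeeze over the width, conv3 -> ReLU ->
    conv7 -> sigmoid, then reweight the feature map along the height axis."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv7 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)

    def forward(self, x):               # x: (B, C, H, W)
        z_v = x.mean(dim=3)             # average pooling over width -> (B, C, H)
        a_v = torch.sigmoid(self.conv7(F.relu(self.conv3(z_v))))
        return x * a_v.unsqueeze(3)     # broadcast the guidance over the width

feat = torch.randn(2, 64, 32, 16)
print(DirectionalGuidance(64)(feat).shape)   # torch.Size([2, 64, 32, 16])
```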

Fig. 2 Overview architecture: The figure represents the overview architecture of the proposed network. CDG stands for class distribution guidance module; SP represents pyramid spatial pooling


3.3 CDGNet Objective The proposed methodology is inspired by the CDGNet and CE2P models, and it outperforms its baseline model. The objective of the proposed methodology can be considered in two parts: the parsing result and the edge prediction. The edge prediction is interpreted as the weighted cross-entropy loss between the projected edge maps and their labels, supplied by the edge module, whereas the parsing result is the cross-entropy loss computed from the high-resolution module's parsing map and the parsing labels.
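A minimal PyTorch sketch of this two-term objective is given below; the edge weighting scheme and the loss combination are assumptions of ours, not the exact CE2P/CDGNet formulation.

```python
import torch
import torch.nn.functional as F

def parsing_objective(parsing_logits, parsing_labels, edge_logits, edge_labels,
                      edge_weight=10.0):
    """Cross-entropy on the parsing map plus weighted binary cross-entropy on
    the predicted edge map; edge pixels get a larger weight (assumed scheme)."""
    parsing_loss = F.cross_entropy(parsing_logits, parsing_labels)
    w = 1.0 + (edge_weight - 1.0) * edge_labels.float()   # up-weight rare edge pixels
    edge_loss = F.binary_cross_entropy_with_logits(
        edge_logits, edge_labels.float(), weight=w)
    return parsing_loss + edge_loss

logits = torch.randn(2, 20, 64, 64)          # 20 LIP part classes incl. background
labels = torch.randint(0, 20, (2, 64, 64))
edge_logits = torch.randn(2, 64, 64)
edge_labels = torch.randint(0, 2, (2, 64, 64))
print(parsing_objective(logits, labels, edge_logits, edge_labels))
```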

4 Experimental Analysis To demonstrate the performance of the suggested model, the evaluation was carried out on metrics like pixel accuracy, mean IoU, and overall mean accuracy. Also, the class-wise mean IoU was evaluated for each of the 19 semantically labelled classes of the LIP [26] parsing data set.

4.1 Data Set Used LIP [26], Look Into Person, is an extensive data set focused on the task of human parsing, also known as human part semantic segmentation. This data set contains a total of about 50,462 images, bifurcated into 30,462 training, 10,000 testing, and 10,000 validation images. The data set comprises detailed pixel-wise annotations with 19 labels describing human parts. Sixteen key points of 2D human poses are also included in the data set for pose estimation.

4.2 Evaluation Parameters The proposed model is evaluated on the metrics discussed below. Mean IoU: Intersection over union is the area of overlap between the predicted segmentation result and the ground truth, divided by the area of their union. Mean IoU is calculated both for binary (two-class) segmentation and for multi-class segmentation. Pixel accuracy (PA) is an evaluation metric representing the percentage of correctly categorized pixels in an image. This metric is used in semantic segmentation to calculate the ratio of correctly identified pixels to the total number of pixels in the respective image.


Mean accuracy refers to the number of correct predictions divided by the total number of input samples, averaged over the multiple classes.
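The three metrics can be computed from integer label maps as in the following NumPy sketch; classes absent from both the prediction and the ground truth are skipped, which is one common convention.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Pixel accuracy, per-class mean accuracy, and mean IoU from two integer
    label maps, matching the metric definitions above."""
    pixel_acc = (pred == gt).mean()
    ious, accs = [], []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        gt_c = (gt == c).sum()
        if union > 0:
            ious.append(inter / union)
        if gt_c > 0:
            accs.append(inter / gt_c)
    return pixel_acc, np.mean(accs), np.mean(ious)

pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(segmentation_metrics(pred, gt, num_classes=2))  # (0.75, 0.75, ~0.583)
```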

4.3 Quantitative Analysis In order to attain the highest performance in human parsing, we performed quantitative experiments on the LIP data set in comparison with well-known human parsing algorithms. Table 1 presents the class-wise mean IoU results on the LIP data set. In the experiment, the proposed model achieves a mean IoU of 60.52% and outperforms existing state-of-the-art frameworks. Table 2 reports the mean accuracy, pixel accuracy, and mean IoU in comparison with other existing methods. The proposed model attains a pixel accuracy of 89.22%, a mean accuracy of 72.07%, and a mean IoU of 60.52% compared to other state-of-the-art work.

5 Conclusion This paper proposes a human part semantic segmentation method called Custom-CDGNet, which attains effective and efficient semantic segmentation of human parts. This method exploits the pixel labelling of each class to generate vertical and horizontal class distributions [26] of all human parts. Knowledge of the class distribution of each class in both the horizontal and vertical directions significantly benefited the learning of each pixel, from images with a single person to images with multiple persons. Comprehensive qualitative and quantitative analysis of Custom-CDGNet shows that C-CDGNet surpasses the existing state-of-the-art human parsing approaches. On the large LIP data set, C-CDGNet gives 89.22% pixel accuracy, 72.07% mean accuracy, and 60.52% mean IoU.

Table 1 Class-wise quantitative comparison of mean IoU with benchmark methods evaluated on the validation set of the LIP data set. Each column reports the mIoU of one human-part class (hat, hair, glove, glass, u-cloth, dress, coat, sock, pants, j-suits, scarf, skirt, face, l-arm, r-arm, l-leg, r-leg, l-shoe, r-shoe, bkg, Avg) for DeepLab [28], MMAN [27], Attention [29], JPPNet [26], CE2P [13], SNT [16], CorrPM [18], SCHP [30], CDGNet [22], and the proposed C-CDGNet (Our), which attains the highest average of 60.52%. (Per-cell values omitted.)


Table 2 Comparative result of our proposed model on the LIP data set

Method | Backbone | Pixel Acc | Mean Acc | Mean IoU
CE2P [13] | Resnet 101 | 87.37 | 63.20 | 53.10
SNT [16] | Resnet 101 | 88.05 | 66.42 | 54.73
CorrPM [18] | Resnet 101 | 87.68 | 67.21 | 55.33
PCNet [31] | Resnet 101 | – | – | 57.03
HHP [15] | DeeplabV3 | 89.05 | 70.58 | 59.25
SCHP [30] | Resnet 101 | – | – | 59.36
CDGNet [22] | Resnet 101 | 88.86 | 71.49 | 60.30
Our | Resnet 101 | 89.22 | 72.07 | 60.52

References
1. Rochan M (2018) Future semantic segmentation with convolutional lstm. arXiv:1807.07946
2. Zhang F et al (2019) Acfnet: attentional class feature network for semantic segmentation. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 6798–6807
3. Ebadi SE et al (2021) PeopleSansPeople: a synthetic data generator for human-centric computer vision. arXiv:2112.09290
4. Arshad A, Tiwari V, Lovanshi M, Shrivastava R (2023) Role identification from human activity videos using recurrent neural networks. In: Proceedings of 8th IEEE international women in engineering (WIE) conference on electrical and computer engineering (WIECON-ECE)
5. Lovanshi M, Tiwari V (2023) Human pose estimation: benchmarking deep learning-based methods. In: Proceedings of the IEEE conference on interdisciplinary approaches in technology and management for social innovation
6. Hu H, Jaime FF (2022) Active uncertainty learning for human-robot interaction: an implicit dual control approach. arXiv:2202.07720
7. Shrivastava R, Tiwari V, Jain S, Tiwari B, Kushwaha AKS, Singh VP (2022) A role-entity based human activity recognition using inter-body features and temporal sequence memory. IET Image Process
8. Choudhary M, Tiwari V, Venkanna U (2020) Enhancing human iris recognition performance in unconstrained environment using ensemble of convolutional and residual deep neural network models. Soft Comput 24(15):11477–11491
9. Bose K, Shubham K, Tiwari V, Patel KS (2023) Insect image semantic segmentation and identification using UNET and DeepLab V3+. In: ICT infrastructure and computing. Springer, Singapore, pp 703–711
10. Wang P, Chen P, Yuan Y, Liu D, Huang Z, Hou X, Cottrell G (2018) Understanding convolution for semantic segmentation. In: 2018 IEEE winter conference on applications of computer vision (WACV), pp 1451–1460
11. Zhou B, Zhao H, Puig X, Xiao T, Fidler S, Barriuso A, Torralba A (2019) Semantic understanding of scenes through the ade20k dataset. Int J Comput Vision 127(3):302–321
12. Ronneberger O, Fischer P, Brox T (2015) U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, Berlin, pp 234–241
13. Ruan T, Liu T, Huang Z, Wei Y, Wei S, Zhao Y (2019) Devil in the details: towards accurate single and multiple human parsing. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 4814–4821
14. Yuan Y, Huang L, Guo J, Zhang C, Chen X, Wang J (2018) Ocnet: object context network for scene parsing. arXiv:1809.00916


15. Wang W, Zhu H, Dai J, Pang Y, Shen J, Shao L (2020) Hierarchical human parsing with typed part-relation reasoning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 8929–8939
16. Ji R, Du D, Zhang L, Wen L, Wu Y, Zhao C, Huang F, Lyu S (2020) Learning semantic neural tree for human parsing. In: European conference on computer vision. Springer, Berlin, pp 205–221
17. Zhang X, Chen Y, Zhu B, Wang J, Tang M (2020) Blended grammar network for human parsing. In: European conference on computer vision. Springer, Berlin, pp 189–205
18. Zhang Z, Su C, Zheng L, Xie X (2020) Correlating edge, pose with parsing. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 8900–8909
19. Yang L et al (2019) Parsing R-CNN for instance-level human analysis. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 364–373
20. Yang L et al (2020) Renovating parsing R-CNN for accurate multiple human parsing. In: European conference on computer vision. Springer, Cham, pp 421–437
21. Güler RA, Neverova N, Kokkinos I (2018) Densepose: dense human pose estimation in the wild. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7297–7306
22. Liu K, Choi O, Wang J, Hwang W (2022) Cdgnet: class distribution guided network for human parsing. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 4473–4482
23. Choi S, Kim JT, Choo J (2020) Cars can't fly up in the sky: improving urban-scene segmentation via height-driven attention networks. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 9373–9383
24. Woo S, Park J, Lee J-Y, Kweon IS (2018) Cbam: convolutional block attention module. In: Proceedings of the European conference on computer vision (ECCV), pp 3–19
25. Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7132–7141
26. Liang X, Gong K, Shen X, Lin L (2018) Look into person: joint body parsing & pose estimation network and a new benchmark. IEEE Trans Pattern Anal Mach Intell 41(4):871–885
27. Luo Y, Zheng Z, Zheng L, Guan T, Yu J, Yang Y (2018) Macro-micro adversarial network for human parsing. In: Proceedings of the European conference on computer vision (ECCV), pp 418–434
28. Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2017) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans Pattern Anal Mach Intell 40(4):834–848
29. Chen L-C, Yang Y, Wang J, Xu W, Yuille AL (2016) Attention to scale: scale-aware semantic image segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3640–3649
30. Li P, Xu Y, Wei Y, Yang Y (2020) Self-correction for human parsing. IEEE Trans Pattern Anal Mach Intell
31. Zhang X, Chen Y, Zhu B, Wang J, Tang M (2020) Part-aware context network for human parsing. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 8971–8980

Effect on Compressive Strength of Portland Pozzolana Cement on Adding Admixtures Using Machine Learning Technique Prafful Negi, Vinod Balmiki, Awadhesh Chandramauli, Kapil Joshi, and Anchit Bijalwan Abstract Although concrete is one of the most common building materials used today, determining the precise compressive strength of concrete is still difficult due to the extremely complicated interplay between its constituents. This paper presents the design of M40 concrete and then examines the change in the compressive strength of concrete on adding admixtures. The first admixture used is ground granulated blast furnace slag (GGBS): we replace 5, 20, and 35% of the cement with GGBS. We cast 6 cube samples for each case and then test 3 cube samples of each case at 7 days and the other 3 at 28 days. The second admixture used is sugar. Here we do not replace cement with sugar but add a certain percentage of the total weight of cement: 0.01, 0.05, 0.08, and 0.1% of the weight of cement. In this case as well, 6 cube samples were made for each case, of which 3 were tested at 7 days and the remaining at 28 days. Finally, we compared the compressive strength of normal concrete, concrete formed using GGBS, and concrete made using the sugar admixture. In order to determine the workability of the concrete and the impact of each admixture on it, we also undertook slump tests in both circumstances. In between, we also discuss the problem of the input-accuracy and output relationship when processing the data using machine learning techniques.

P. Negi · V. Balmiki · A. Chandramauli Department of Civil Engineering, UIT, Uttaranchal University, Dehradun, Uttarakhand, India e-mail: [email protected] A. Bijalwan Department of Cyber security, School of Computing and Innovative Technologies, British University Vietnam, Hanoi, Vietnam e-mail: [email protected] Adjunct Prof, Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia K. Joshi (B) Department of CSE, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, Uttarakhand, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_20



Keywords Ground granulated blast furnace slag (GGBS) · Sugar · Concrete · Admixtures

1 Introduction The construction sector is one of the most essential components in the economic and social growth of a country [1]. The majority of infrastructure, including residential homes, big buildings, retaining walls, dams, and bridges, has been built with concrete, credit to the construction sector [2]. In concrete, blended cement is the binder that is most frequently utilised. The production of cement is a significant source of CO2 emissions, and this global issue has recently received a lot of attention [3]. More cement-based concrete is being used in construction projects, so more dangerous gases are being released into the atmosphere, causing the earth's temperature to rise quickly [4]. In order to somewhat reduce cement consumption, admixtures with properties similar to cement are utilised. An admixture is a chemical substance that, except in exceptional circumstances, is added to the concrete mix during mixing, or during a separate mixing operation prior to the placement of the concrete, in quantities no greater than 5% by mass of cement, in order to achieve a specific modification, or revisions, to the typical properties of the concrete. Admixtures can be employed in either a solid or liquid condition [5]. Concrete admixtures are essential to the construction sector because different admixtures are in high demand depending on their characteristics and regions of application. For some building projects, admixtures are employed to enhance the characteristics of concrete; however, occasionally, admixtures can negatively affect both the desired and undesired features of concrete [6]. The effectiveness of an admixture is influenced by the kind and quantity of cement, the amount of water, the mixing time, the slump, and the temperatures of the concrete and the surrounding air [7]. There are many different kinds of admixtures, including fly ash, silica fume, sugar, accelerators, and rice husk ash, among others [8]. In our research, we have used GGBS and sugar to see the change in the compressive strength of concrete. First, we design M40 concrete; then, in the first case, cement is replaced by GGBS in various percentages, while in the case of sugar, we add a certain percentage of the total weight of cement.

2 Literature Review Ibrahim M Nasser, Siti Radziah Abdullah, K H Boon, and Mohd Haziman Wan Ibrahim (2020): This study uses experimental methods to investigate how sugar affects the setting time, compressive strength, drying shrinkage, and carbonation of composite cement mortar. As retarders, a number of sugar concentrations between 0 and 0.08% by weight of cement were used. As the sugar dose was raised, it was discovered that the composite cement paste's initial and ultimate setting times rose. When the sugar concentration reached 0.08%, the initial and final setting times


changed from 100 to 550 min and 135 to 610 min, respectively. The compressive strength, drying shrinkage, and carbonation of the specimens were evaluated at different ages. At 150 days, the mortar containing 0.06% sugar outperformed the control specimens in terms of compressive strength, although sugar had less of an impact on the mortar's drying shrinkage. Azmat Ali Phul, Muhammad Jaffar Memon, Syed Naveed Raza Shah, and Abdul Razzaque Sandhu (2019): The study examines the compressive strength characteristics of concrete incorporating fly ash and ground granulated blast furnace slag (GGBS) when cement is partially replaced. The right proportion of GGBS and fly ash was established using several percentages ranging from 0 to 30% for different curing days. The resulting concrete was put through tests to find out its slump, compaction factor, Vee-Bee time, and compressive strength. The water-cement ratio was held constant at 0.47 for all mixtures. Tests on the compressive strength of M25 grade concrete were carried out 3, 7, and 14 days after curing. Longer curing durations were found to enhance the slump, compaction factor, Vee-Bee time, and compressive strength of concrete produced with GGBS and fly ash. The findings indicated that GGBS and fly ash enhance the mechanical properties of concrete by boosting its workability and compressive strength. Usman N D, Chom H A, Salisu C, Abubakar H O, and Gyang J B (2018): This research investigates how sugar impacts the compressive strength of concrete and the setting time of cement when using OPC. The sugar used in the experiment, sucrose crystals (C12H22O11), was dissolved in the required amount of water. Compressive tests were performed at different ages, and different quantities of sugar by cement weight were employed in the concrete. The weighed sugar crystals were first dissolved in the necessary 0.6 litres of water. Batches were made by weight after the components were physically combined. At a sugar level of 0.06%, cement sets up more slowly. At sugar concentrations of 0.08%, setting times start to shorten, and flash setting starts at concentrations of 0.2 to 1%. The cement paste did not cure as quickly as expected as the sugar concentration increased from 0.08 to 1%. The optimum sugar concentration for strength was 0.05% at 3 days and 0.06% at 7 and 28 days. Adding sugar to concrete increases the cement's setting time by about 1.33 h when the proportion of sugar is 0.06% by weight of cement. Sugar addition to concrete has no impact on its compaction or workability, and sugar can be added to concrete to increase its compressive strength over time. Arpana Beeranna Devakate, Acharya V T, and Keerthi Gowda B S (2017): Cement and concrete are the two most significant engineering materials utilised in the construction industry. Concrete's characteristics are significantly influenced by the surroundings; as a result, admixtures are used to maintain uniformity. Sugar can be used on construction sites to delay the setting of cement because it is affordable and widely available. The experiment's mix had a w/c ratio of 0.45 and proportions of 1:1.22:2.78. All of the specimens underwent a 28-day water cure. The specimens for compressive strength have dimensions of 150 × 150 × 150 mm. Santosh Kumar Karri, G V Rama Rao, and P Markandeya Raju (2015): Compounds that exhibit cementitious characteristics when combined with calcium hydroxide are referred to as pozzolanas. The most often used pozzolanas are fly ash, silica fume, metakaolin, and ground granulated blast furnace slag (GGBS). Analysing how well the admixtures work with concrete is essential for achieving a lower life cycle cost. The study focuses on analysing the variables of M20 and M40 grade concrete that contains 30, 40, or 50% GGBS in place of cement. Tests for compressive strength, split tensile strength, and flexural strength are conducted on cubes, cylinders, and prisms.


cementitious characteristics. The most often used pozzalonas are fly ash, silica fume, metakaolin, and crushed granulated blast furnace slag (GGBS). Analysing how well the admixtures work with concrete is essential for achieving a lower life cycle cost. The current study focuses on analysing the variables of M20 and M40 grade concrete that contains 30, 40, or 50% ground granulated blast furnace slag (GGBS) in place of cement. On the cubes, cylinders, and prisms, tests for compressive strength, split tensile strength, and flexural strength are conducted. Oner and S Akyuz (2007); The impacts of compressive strength and the right quantity of ground granulated blast furnace slag (GGBS) in concrete are investigated in this study using a laboratory experiment. The GGBS was added utilising the partial replacement method to all the mixes. There were four groups of 32 permutations each containing a different amount of binder. Eight combinations with cement levels of 175, 210, 245, and 280 kg/m3 were made to determine the Bolomey and Féret coefficients (KB, KF). Each group’s initial dosages was set at 175, 210, and 280 kg/m3 by taking 30% less cement out of control concretes that had dosages of 250, 300, 350, and 400 kg/m3 . Test concretes were created by mixing GGBS in amounts that were roughly similar to 0, 15, 30, 50, 70, 90, and 110% of the cement content of control concretes, with dosages of 250, 300, 350, and 400 kg/m3 . All specimens were moist-cured for 7, 14, 28, 63, 119, and 365 days before being put through compressive strength testing. The test results showed that the compressive strength of GGBS-containing concrete mixtures increased when the amount is increasing. After an ideal point, or at around 55% of the total binder concentration, GGBS addition has little effect on compressive strength. This can be explained by the unreacted GGBS that was used as paste filler.

3 Experimental Set-up with Proposed Methodology An accurate prediction model is created using the AI/ML algorithms of the Clementine 12.0 software. Even though this platform is rather simple to utilise for AI methods, an exact flow chart is still required to create a reliable prediction model. The five steps involved in creating the AI model are described below.

1. Data entry: The first step in the data collection process is to enter the data.
2. Training and testing: The input data is split into training and testing data groups. Training is used to develop prediction models that fit the data, while testing is used to evaluate the created prediction models. In this study, 60% of the data is used for training and 40% for testing (see the sketch below).
3. The learning procedure for the AI prediction models, based on the training data.
4. The process of putting the data-driven AI prediction models to the test.
5. Prediction outcomes: The performance of the predicted results obtained for each model is evaluated using four accuracy indicators (Fig. 1).
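Clementine 12.0 is a GUI workbench, so as an open-source analogue, the sketch below reproduces step 2's 60/40 split and a generic regressor on synthetic data; the feature columns are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical feature matrix: [cement, water, GGBS %, sugar %, age in days]
rng = np.random.default_rng(42)
X = rng.uniform(size=(60, 5))
y = rng.uniform(10, 45, size=60)          # compressive strength (MPa), synthetic

# 60% of the data for training, 40% for testing, as in step 2 above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("MAE on test split:", mean_absolute_error(y_te, model.predict(X_te)))
```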


Fig. 1 Prediction model using Clementine 12.0 AI/ML algorithm

3.1 Designing of M40 Grade Concrete (IS 10262:2009)

- Grade designation: M40.
- Type of cement: Portland Pozzolana Cement (33 grade, conforming to Indian Standard 8112).
- Maximum nominal size of aggregates: 20 mm.
- Minimum cement content: 320 kg/m3.
- Maximum water-cement ratio: 0.45; workability: 100 mm.
- Severe exposure condition; pumping used for placing concrete.
- Good level of supervision.
- Type of aggregates: crushed angular aggregates; maximum cement content: 450 kg/m3.

Test Data for Materials

- Cement used: PPC 43.
- Cement's specific gravity is 3.08; no chemical additives were used.
- Coarse aggregate specific gravity is 2.74, while fine aggregate specific gravity is 2.74.
- Water absorption of coarse aggregates: 0.5%.
- Water absorption of fine aggregates: 1%.
- Sieve analysis: conforms to grading zone II.

Calculation

- Target mean strength: Fm = fck + 1.65σ = 40 + 1.65 × 5 = 48.25 MPa.
- Water-cement ratio = 0.40 (severe exposure); cement content of 320 kg/m3: OK.
- Volume split between coarse and fine aggregates: for 20 mm aggregate and Zone II sand, the coarse aggregate volume fraction is 0.60 at a water-cement ratio of 0.5; corrected for a water-cement ratio of 0.4, the ratio of coarse aggregate volume to total aggregate volume becomes 0.63.
- Pumping correction: 0.9 × 0.63 = 0.567. The volume fraction of coarse aggregate is therefore 0.567, while the volume fraction of fine aggregate is 1 − 0.567 = 0.433.

Mix Calculation

- Concrete volume = 1 m3.
- Cement volume = cement mass/(specific gravity × 1000) = 0.139 m3.
- Volume of water = 0.197 m3.
- Total aggregate volume = 1 − (0.139 + 0.197) = 0.664 m3.
- Quantity of coarse aggregates = 0.664 × 0.567 × 2.5 × 1000 = 941.02 kg.
- Quantity of fine aggregates = 0.664 × 0.433 × 2.7 × 1000 = 776.11 kg.
- Mix proportion: cement = 438 kg, water = 197.14 kg, coarse aggregate = 941.02 kg, fine aggregate = 776.11 kg.
- Cement : sand : aggregates ratio with water: 1 : 1.77 : 2.15 : 0.45.

A short re-derivation of these figures follows.
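The headline numbers of the mix design can be re-derived with a few lines of Python; the cement specific gravity of 3.15 in the sketch is an assumption chosen to match the 0.139 m3 figure above (the materials list quotes 3.08).

```python
# Re-derivation of the headline numbers from the M40 design above (IS 10262)
f_ck, sigma = 40.0, 5.0
f_target = f_ck + 1.65 * sigma                  # 48.25 MPa

cement, water = 438.0, 197.14                   # kg per m^3 of concrete
vol_cement = cement / (3.15 * 1000)             # ~0.139 m^3 (assumed SG 3.15)
vol_water = water / 1000                        # ~0.197 m^3
vol_aggregates = 1 - (vol_cement + vol_water)   # ~0.664 m^3

frac_coarse = 0.9 * 0.63                        # pumping correction -> 0.567
coarse = vol_aggregates * frac_coarse * 2.5 * 1000        # ~941 kg
fine = vol_aggregates * (1 - frac_coarse) * 2.7 * 1000    # ~776 kg
print(round(f_target, 2), round(coarse, 1), round(fine, 1))
print("ratio:", 1, round(fine / cement, 2), round(coarse / cement, 2),
      round(water / cement, 2))                 # 1 : 1.77 : 2.15 : 0.45
```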

Concrete volume = 1 m3 . Cement volume = Cement Mass/(Specific Gravity*1000) = 0.139 m3 . Vol of water = 0.197 m3 . Total aggregates volume = 1–0.139 + 0.197 = 0.66386 m3 . Quantity of coarse aggregates = 0.66386 × 0.567 × 2.5 × 1000 = 941.02 kg. Quantity of fine aggregates = 0.6638 × 0.433 × 2.7 × 1000 = 776.11 kg. Mix proportion—Cement = 438 kg. Water = 197.14 kg. Coarse aggregate = 941.02 kg. Fine aggregate = 776.11 kg. Cement, sand, aggregates ratio with water as follows- 01:1.77:2.15:0.45.

3.2 Ground Granulated Blast Furnace Slag Slag from a blast furnace that has been ground into granules A by-product of the manufacturing of iron in blast furnaces is ground granulated blast-furnace slag, or GGBS for short. They are fed simultaneously with iron ore, coke, and limestone and run at a temperature of roughly 1500 °C. Slag is a by-product produced when iron ore is converted into iron that floats on top of iron. The tapped off molten liquid of this slag must be immediately quenched in a quantity of water if it is to be utilised to make GGBS [9]. In the presence of water, the hydraulic substance GGBS can hydrate and form a hardened composite. It is waste material from iron industry having cementitious property which can help in replacing cement to some extent [10]. According to our research, the replacement amount of cement by GGBS in concrete is 0, 5, 20, or 35% of the total weight of cement. Test Performed . Slump test . Compressive strength test (Table 1).

Effect on Compressive Strength of Portland Pozzolana Cement … Table 1 GGBS’s physical characteristics

225

S. No

Characteristics

Experimentally obtained

1

Colour

Off-white

2

S. G.

2.9

3

Bulk density

1100 kg/m3

4

Fineness

400 m2 /kg

3.3 Sugar

The environmental conditions in which construction is carried out have an impact on the concrete's qualities. In low-temperature situations, the water in plastic concrete freezes and expands, increasing the volume of the concrete, and since no liquid water is available for the chemical reaction, setting and hardening are delayed; large quantities of pores eventually remain, and the concrete is weakened. Similarly, in high-temperature conditions the hydration of cement is accelerated, so the concrete sets early and gains strength in the early stage, but with time its strength is reduced considerably [11]. Hence, to maintain the standard setting time of concrete under varying weather conditions, admixtures such as accelerators or retarders are used [12]. In general, soluble carbohydrates with a sweet flavour, many of which are found in food, are referred to as "sugar"; chemically, sugars are carbohydrates made of carbon, hydrogen, and oxygen. In hot climates, where the ambient temperature shortens the normal curing time of concrete, sugar is a good retarding admixture. When sugar is added to Portland cement paste at the beginning of mixing, hardening may be postponed indefinitely, which is why sugars have been labelled "cement destroyers" [13]. Concrete hardens due to the formation of calcium silicate hydrate, and sugar likely has a retarding effect by preventing the production of calcium silicate hydrate [14]. Sugar exceeding 0.2% by weight of concrete slows down the reaction drastically; at 0.06% by weight of cement, sugar increases the setting time of cement by around 1.33 h. According to studies, sugar at lower dosages (between 0.05 and 0.2% by weight of cement) retards the process, while at larger dosages (between 0.4 and 2% by weight of cement) it accelerates it [15]. We prepared sucrose crystals (C12H22O11), evenly dissolved in the mixing water at concentrations of 0, 0.05, 0.08, and 0.1% by weight of cement.

Tests Performed

. Slump test
. Initial and final setting time test (Vicat apparatus)
. Compressive strength test (Table 2)


Table 2 Physical properties of aggregates (experimentally obtained values)

S. No   Characteristics      Coarse aggregates   Fine aggregates
1       Type                 Crushed             Uncrushed
2       Specific gravity     2.5                 2.4
3       Bulk density         1765 kg/m³          1668 kg/m³
4       Fineness modulus     6.45                2.76
5       Maximum size         20 mm               –

4 Observation and Results

4.1 Ground Granulated Blast Furnace Slag (GGBS)

From the observations, we found that (Tables 3, 4 and Fig. 2):

i. In the slump test, normal concrete gives a true slump.
ii. In the slump test, the 5, 20, and 35% GGBS mixes give zero slump, which indicates poor workability.
iii. On replacing cement with GGBS, the mean 7-day and 28-day compressive strengths decrease slowly up to 20% replacement and then drop drastically at 35% replacement.

Table 3 Experiment readings

Types of design             7-day strength (MPa)   28-day strength (MPa)
Normal concrete (0% GGBS)   18.693                 29.467
                            19.578                 30.467
                            20.43                  29.778
5% GGBS                     19.356                 32.178
                            19.684                 25.002
                            17.79                  27.74
20% GGBS                    18.49                  26.498
                            19.96                  31.258
                            19.18                  29.062
35% GGBS                    11.38                  21.835
                            12.14                  19.23
                            11.56                  20.02


Table 4 Average strength values

Type of design              Mean 7-day strength (MPa)   Mean 28-day strength (MPa)
Normal concrete (0% GGBS)   19.567                      29.904
5% GGBS                     18.94                       28.307
20% GGBS                    18.251                      27.947
35% GGBS                    11.69                       20.36

Fig. 2 Average strength values (bar chart of mean 7-day and mean 28-day strength, in MPa, for 0, 5, 20, and 35% GGBS)

4.2 Sugar

The study draws the following conclusions from the data (Tables 5, 6, 7 and Fig. 3):

. At a dose of 0.08% by weight of cement, sugar delays the setting of cement by up to 1.07 h.
. The workability of the concrete was strongly affected by the addition of sugar [16].
. When 0.05% sugar was added, we got zero slump with almost no workability.
. On adding 0.08% sugar, we got a shear slump.
. On adding 0.1% sugar, we got a collapse slump with very high workability.
. The compressive strength of the concrete diminishes as the sugar percentage rises.

Table 5 Results of the setting time experiment

Percentage of sugar in cement by weight   Initial setting time, h (min)   Final setting time, h (min)
0.00                                      1.52 (91)                       5.21 (307)
0.05                                      2.43 (145)                      6.48 (388)
0.08                                      2.59 (155)                      6.65 (399)
0.1                                       2.17 (130)                      5.35 (321)


Table 6 Results of the compressive test experiment

Types of design    7-day strength (MPa)   28-day strength (MPa)
Normal concrete    18.693                 29.467
                   19.578                 30.467
                   20.43                  29.778
0.05% sugar        20.01                  25.786
                   17.44                  27.142
                   18.23                  26.79
0.08% sugar        16.289                 31.96
                   15.198                 28.6
                   15.577                 31.12
0.1% sugar         8.22                   13.56
                   6.569                  15.67
                   6.68                   14.91

Table 7 Average strength values

Type of design                          Mean 7-day strength (MPa)   Mean 28-day strength (MPa)
Normal concrete (0% weight of cement)   19.567                      29.904
0.05% sugar                             18.56                       26.47
0.08% sugar                             15.55                       25.157
0.1% sugar                              7.156                       14.71

Fig. 3 Average strength values (bar chart of mean 7-day and mean 28-day strength, in MPa, for 0, 0.05, 0.08, and 0.10% sugar)


5 Conclusion

Our article's goal is to explore how various admixtures affect concrete's compressive strength when used in place of, or in addition to, cement. We therefore replaced cement with GGBS, added sugar as an admixture, and observed what changes emerged [17]. This study also demonstrated that ML techniques can forecast concrete compressive strength without the need for laboratory testing [18]. The prepared samples were used to compile a database [19], which was then used to generate a prediction model and assess its efficacy.

Conclusions for GGBS

. Three different amounts of GGBS (5, 20, and 35%) were used in place of cement. As the amount of GGBS increases, the 7-day and 28-day compressive strengths continue to decrease: the strength declined progressively between 5 and 20% replacement, but at 35% it dropped abruptly.
. We also performed the slump test to check the workability of normal concrete and of the mixes with various percentages of GGBS. Normal concrete gives a true slump, but when cement is replaced with GGBS at any of the tested percentages, the slump is zero, meaning that with GGBS the workability of the concrete becomes very poor.
. Numerous studies show that the compressive strength of concrete rises as the amount of GGBS increases when GGBS is used with OPC. However, our research used PPC, which has the opposite effect on the concrete: the strength decreased as the percentage of GGBS increased.

Conclusions for Sugar

. Sugar concentrations of 0.05, 0.08, and 0.1% by weight of cement were used. At 0.05% and 0.08% sugar, the setting time was lengthened, whereas at 0.1% it was shortened. The study concludes that sugar retards setting when used in the proper amounts.
. Both the 7-day and 28-day compressive strengths decline as the sugar content rises, and the compressive strength varied greatly with the sugar content.
. The workability of the concrete was strongly affected by the addition of sugar: at 0.05% sugar we obtained zero slump, at 0.08% a shear slump, and at 0.1% a collapse slump. With increasing percentage of sugar, the workability therefore changes from very poor to very high.
. Various research works show that adding sugar to OPC increases compressive strength, but our study shows that adding it to PPC decreases compressive strength. The other two results, setting time and workability, show the same effect on PPC as reported for OPC.


References

1. Jhatial AA, Sohu S, Bhatti NK, Lakhiar MT, Oad R et al (2018) Effect of steel fibres on the compressive and flexural strength of concrete. Int J Adv Appl Sci 5(10):16–21
2. Sandhu AR, Lakhiar MT, Jhatial AA, Karira H, Jamali QB (2019) Effect of river Indus sand and recycled concrete aggregates as fine and coarse replacement on properties of concrete. Eng Technol Appl Sci Res 9(1):3832–3835
3. Mohamed OA, Al Khattab R (2022) Fresh properties and sulfuric acid resistance of sustainable mortar using alkali-activated GGBS/fly ash binder. Polymers 14(3):591
4. Glaus MA, Laube A, Van Loon LR (2006) Solid–liquid distribution of selected concrete admixtures in hardened cement pastes. Waste Manag 26(7):741–751
5. O'Rourke B, McNally C, Richardson MG (2009) Development of calcium sulfate–GGBS–Portland cement binders. Constr Build Mater 23(1):340–346
6. Khan SU et al (2014) Effects of different mineral admixtures on the properties of fresh concrete. Sci World J 2014
7. Babu KG, Kumar VSR (2000) Efficiency of GGBS in concrete. Cem Concr Res 30(7):1031–1036
8. Kanamarlapudi L et al (2020) Different mineral admixtures in concrete: a review. SN Appl Sci 2(4):1–10
9. Suresh D, Nagaraju K (2015) Ground granulated blast slag (GGBS) in concrete: a review. IOSR J Mech Civ Eng 12(4):76–82
10. Grubeša IN et al (2016) Characteristics and uses of steel slag in building construction. Woodhead Publishing
11. Devakate AB, Keerthi Gowda BS (2017) Effect of sugar on setting-time and compressive strength of concrete. In: International conference ICGCSC-2017, MITE
12. Ahmad S, Lawan A, Al-Osta M (2020) Effect of sugar dosage on setting time, microstructure and strength of Type I and Type V Portland cements. Case Stud Constr Mater 13:e00364
13. Ashworth R (1965) Some investigations into the use of sugar as an admixture to concrete. Proc Inst Civ Eng 31(2):129–145
14. Juenger MCG, Jennings HM (2002) New insights into the effects of sugar on the hydration and microstructure of cement pastes. Cem Concr Res 32(3):393–399
15. Usman ND et al (2016) The impact of sugar on setting-time of ordinary Portland cement (OPC) paste and compressive strength of concrete. FUTY J Environ 10(1):107–114
16. Negi SS, Memoria M, Kumar R, Joshi K, Pandey SD, Gupta A (2022) Machine learning based hybrid technique for heart disease prediction. In: 2022 international conference on advances in computing, communication and materials (ICACCM). IEEE, pp 1–6
17. Diwakar M, Tripathi A, Joshi K, Memoria M, Singh P (2021) Latest trends on heart disease prediction using machine learning and image fusion. Mater Today Proc 37:3213–3218
18. Praveen S, Tyagi N, Singh B, Karetla GR, Thalor MA, Joshi K, Tsegaye M (2022) PSO-based evolutionary approach to optimize head and neck biomedical image to detect mesothelioma cancer. BioMed Res Int
19. Sisodia PS, Tiwari V, Kumar A (2014) A comparative analysis of remote sensing image classification techniques. In: 2014 international conference on advances in computing, communications and informatics (ICACCI). IEEE, pp 1418–1421

Tech Track for Visually Impaired People

G. Abinaya, K. Kanishkar, S. Harish, and D. J. Subbash

Abstract As a society, we have come to rely more and more on technology in our day-to-day activities. The primary motivation for this study is to facilitate independent exploration by the blind. This paper describes a cane created to aid the visually impaired in navigating their environment, whether locating obstacles or reaching a destination. Ultrasonic sensors, a global positioning system (GPS) receiver, a GSM module, a buzzer, and a vibration motor are all part of it. The ultrasonic sensors locate and identify potential obstructions. While the GPS receiver aids navigation, the GSM module acts as a mobile phone to alert the caregivers of a visually impaired individual to potential danger. The cane features a vibrating motor adjacent to the grip, which activates when there is an obstruction in the user's path, making it beneficial for users who are both visually and hearing impaired. A buzzer keeps watch for obstacles at all times, and a speaker beeps an alert whenever an obstacle is encountered. Previously, blind people had to depend on others to guide them on journeys in unknown areas; the system described in this article makes that unnecessary. Keywords Visually impaired · Navigator · Connecting technologies

G. Abinaya · K. Kanishkar · S. Harish (B) · D. J. Subbash Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India e-mail: [email protected] G. Abinaya e-mail: [email protected] K. Kanishkar e-mail: [email protected] D. J. Subbash e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_21



1 Introduction

Sight is a very special gift. People who are blind or have other forms of vision loss may nonetheless function normally if they have access to information through aural means alone, while people who depend on Braille devices are impeded in their ability to read printed materials. Despite the abundance of media tools accessible online, there are limitations: many are not user-friendly or consumer-friendly, are not portable, and cannot be used anywhere to read all materials in their original form (books, computers, and mobile phones, to name a few examples). For these reasons, we decided to design assistive eyewear that is easy to use even for the blind and visually impaired. A small, smartly designed camera reads the printed version of any book, file, or portable text content and converts it into sound that can be played through earphones or speakers. This portable, intelligent device is controlled by software running on a Raspberry Pi module. The extracted text is then turned into voice so that persons who are blind or have low vision may access it. For verification of the final hardware model, we used a web page and a cell-phone document as test examples; each sample goes through a series of transformations before being processed and converted with the necessary audio codecs.

2 Objective

The primary objective of this work is to develop a product that benefits those who are visually impaired and must otherwise depend on others for daily activities. Second, the tracking system for the blind employs a band that generates ultrasonic waves to warn the visually impaired of potential dangers in their path; a buzz or vibration alerts the user, and an integrated GPS and GSM module allows the user to text their whereabouts to friends and family. Third, it facilitates independent mobility for those with visual impairments by highlighting potential trip hazards. As long as they wear this band around their waists, they are ready to go.

3 Literature Survey

In recent years there have been several developments in methods, technologies, and devices that help the visually impaired navigate their environments independently, without the aid of others. These meet certain requirements but are limited in scope.


To better inform the research community and consumers about adaptive technology innovations for the blind, Bourbakis [1] presented "Wearable Obstacle Avoidance and Electronic Travel Aids for the Blind: A Survey", a comparative survey of mobile obstacle detection systems. The study is grounded on a categorization and quantitative/qualitative analysis of the systems' numerous features and performance standards. Ungar [2] provided guidance for the blind who find themselves in urban areas but did not consider those who cannot afford such tools; for the visually challenged, a "third eye" is the solution to this handicap. Pooja Sharma [3] concluded that an object may potentially be noticed, but that there were obstacles associated with distance and viewing angles; a third eye for the visually impaired, by contrast, has a wide detection angle that can be considerably wider depending on the sensor's range. Fernandes and Barroso [4] presented Blind Guide, a body area network using ultrasonic sensors to help the visually impaired. Obstacle detection with this device offers a strong solution, since the study provides a formula to help those who are blind sense obstacles without the use of a white cane or a guide dog. The method relies on an array of ultrasonic sensors, known as a body area network, to produce a sonic response; rather than relying on a guide dog or a white cane, the visually impaired may soon be able to go without either if the body area network is embedded into clothing. White canes with tips designed to help the blind walk more easily are only one example of the modern technological aids available; today the cane comes in many forms, including the white cane, the laser cane, and the smart cane. Guide dogs are expensive to train, so some people cannot afford one, and from what we have learned, remote guidance systems are not easily relocated, which makes the device proposed here the better option. Madulika [5] (2013) described an electronic travelling aid for the blind that helps with navigation and monitoring: a GPS- and GSM-based ARM controller that uses ultrasonic technology to identify obstacles and tell the visually handicapped their distance from them. Mohammed H. Rana and Sayemil [6] (2013) described a smart walking stick based on a buzzer, a vibration motor, a wet electrode, and an obstacle-detecting ping sensor; the ping sensor detects the obstruction, and the motor's vibration conveys the obstacle's distance to the visually impaired user. We have already covered the virtual white cane detection device, which employs active triangulation to estimate distances 15 times per second; a visually impaired individual can use this device to sense the surroundings by pointing it like a flashlight. Along with distance estimation, the device can also detect surface discontinuities such as the foot of a wall, a step, or a drop-off; to do so, it examines the range data gathered while the user moves the device around, in order to detect planar patches and discontinuities. Another work produced a wearable minicomputer, worn as a navigational belt and optimised for navigating enclosed spaces. The belt could be switched between two modes, the first of which encoded the framework information as audible tones. Users had trouble distinguishing between


the tones for an open path and those for a blocked one. Additionally, the framework would not be aware of the user's ever-changing circumstances. Another work described the creation of a route assistant to aid exploration, safety, and the identification of barriers for those who may be confused or hindered by external factors. The design requires a microcontroller capable of producing custom speech output; the device also has two vibrators, two ultrasonic sensors that may be attached to the user's shoulders or other body parts, and one ultrasonic sensor embedded in the cane itself. Methods have also been recommended for the visually impaired to use in metropolitan areas, but these did not take into account those who cannot afford costly technological tools; the assistive device proposed here overcomes this restriction. There are a number of limitations on the locations and distances involved, despite claims that impediments may be located; in contrast, the detection sector, where the sensors' range is extensive, stands to benefit much from this endeavour.

4 Existing System

A novel optical character recognition (OCR)-based smart reader has been proposed to help the visually challenged. A portable text reader that is low-cost, lightweight, and easy for the general public to obtain would be helpful. The system is a Raspberry Pi-based camera platform that includes image processing techniques, optical character recognition, and a text-to-speech (TTS) generator. Photographing the printed textual material with the camera module is the first step in the OCR process; binarization, dilation, extraction, segmentation, and feature extraction are all parts of the pre-processing stage. This work discusses how to create a system that can analyse text for the visually handicapped while also incorporating other subsystems. Google Tesseract was used for OCR, and Peak was used as the TTS engine.
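A minimal sketch of this camera-to-speech pipeline is given below, assuming OpenCV, pytesseract, and pyttsx3 are installed; the image file name is hypothetical, and the real system captures frames with the Raspberry Pi camera module.

```python
# Sketch: capture -> pre-process (binarize) -> OCR -> text-to-speech.
import cv2
import pytesseract
import pyttsx3

image = cv2.imread("page.jpg")            # hypothetical captured frame
if image is None:
    raise SystemExit("capture an image as page.jpg first")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarization step of the pre-processing stage (Otsu threshold)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OCR: extract the text from the pre-processed image
text = pytesseract.image_to_string(binary)

# TTS: speak the recognised text aloud
engine = pyttsx3.init()
engine.say(text)
engine.runAndWait()
```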

5 Proposed System

The modules of the proposed cane are all wired directly to one another, allowing a compact construction. The blind user is given the ability to walk with the aid of the cane, whose fully controlled mechanical spring suspension can sense even the smallest obstacles. Although many technologies have improved, the conventional cane is still assumed to be the norm; thanks to its sleek design, the proposed unit may also be worn as a band on the wrist. In the event of an emergency, the system sends SMS messages to subscribed mobile phones. The latitude and longitude of the user are calculated from GPS data, and a map reference of the actual location is included.


When an ultrasonic sensor detects an obstacle in the path of a person with limited vision, the device sends out an auditory signal to alert them. Further, the device has a switch that, when activated, notifies caregivers of the visually impaired person's current position so that they may provide assistance. These details are sent to the caregivers as an SMS message through the GPS and GSM modules.

6 System Architecture

The Arduino UNO is the brains of the operation, processing data from a variety of sensors and switches and deciding how to use the GPS module and the motor that creates vibrations. It determines the kind of barrier it has met by analysing the sensor data and then sends that information as an actuation signal to the vibration motor. Mounted on the stick are ultrasonic sensors that can pick up obstacles such as small stones, cars, and even walls. The warning signal is produced by a portable vibrator, which emits a distinctive shaking sensation for each kind of impediment it faces. When danger is detected, the microcontroller sounds an alarm, and the sensors locate potential hazards from a safe distance. The person's position is tracked via a global positioning system (GPS) chip.

7 Hardware Requirements

. Arduino UNO
. Ultrasonic sensor
. Vibration sensor
. Buzzer
. GSM
. GPS
. Power supply


Fig. 1 System architecture

8 Software Requirements

. Arduino IDE
. Embedded C

9 Materials

9.1 Arduino UNO

Arduino boards can sense and manipulate the physical world more directly than a traditional desktop computer. When an obstruction is detected, the ultrasonic sensor sends that information to the Arduino controller board, which then sends an audible message to the appropriate output. The brain of the control system is an ATmega328P microcontroller embedded in the Arduino board. The Arduino


Fig. 2 Arduino UNO

platform is a single-board microcontroller design that is freely available to the public and was developed as a successor to the Wiring platform. The hardware comprises an Atmel AVR CPU with on-board input/output capability, integrated into a straightforward open-hardware board design. The software consists of a compiler for a common programming language and the board's own boot loader. To determine distance, the Arduino receives sensor data and processes it in accordance with the code; a pulsating pattern of varying intensity is produced by comparing the resulting value to a fixed threshold (Fig. 2).

9.2 Ultrasonic Sensor

The sensor sends out an ultrasonic signal at 40,000 Hz, which travels through the air and, if there is an obstruction, hits it and bounces back to the module. The distance to the object can be worked out from the travel time of the pulse and the speed of sound. These ultrasonic sensors are constructed to be robust against environmental hazards: they can withstand vibration, infrared radiation, ambient noise, and electromagnetic interference (EMI). An SRF-04 is employed as the sensor; it requires a short trigger pulse and provides an echo pulse in return. When the module is activated, ultrasonic waves are sent out and reflect off any obstacles in the way. The sensor's output is a voltage pulse whose duration is proportional to the object's distance. The apparatus may also be used to identify potholes (Fig. 3).
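The distance computation the module performs can be written down directly; the sketch below assumes the echo pulse width has already been measured and uses a nominal speed of sound.

```python
# Distance from an SRF-04-style echo time: the pulse travels to the
# obstacle and back, so the round trip is halved.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 deg C

def distance_cm(echo_time_s: float) -> float:
    """Convert the measured echo pulse width (seconds) to distance in cm."""
    return (echo_time_s * SPEED_OF_SOUND / 2) * 100

# Example: a 2.9 ms echo corresponds to roughly 50 cm
print(f"{distance_cm(0.0029):.1f} cm")
```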


Fig. 3 Ultrasonic sensors (ultrasound sensors)

9.3 Vibration Sensor

The design is based on a vibration motor that emits vibrations of varying strength depending on the system's distance from the barrier: the vibration intensity is greatest when the barrier is close, and as the distance to the barrier grows, the motor intensity decreases. A piezoelectric sensor is used to detect bending angles, touch, vibration, and even shock. At the risk of oversimplification, its core idea is that whenever a structure is subjected to motion, it undergoes acceleration, which the piezoelectric element converts into an electrical signal.

9.4 Buzzer

To warn the user that an obstruction is dangerously near and might cause a collision, a low-frequency piezo buzzer is used. In noisy public places, the vibration motor is accompanied by the buzzer to inform the user. Figure 4 shows how the buzzer's pins are arranged: positive and negative pins are included, with a plus sign or the longer lead indicating the positive terminal. The positive terminal is powered from 6 V, while the negative terminal (marked with a "–", the shorter lead) is connected to the GND terminal (Figs. 4, 5 and 6).

Fig. 4 Buzzer pin configuration


Fig. 5 GPS module

Fig. 6 GSM module

9.5 Global Positioning System (GPS)

GPS is the acronym for the satellite-based navigational aid used here. There is a noticeable decrease in GPS signal strength when passing through or around obstacles such as mountains or large buildings. GPS is used in this system to pinpoint the precise location of the person in question; other uses of the Global Positioning System include navigation, security, and localised search. The


receiver uses the data from four satellites to determine the user's precise position in both latitude and longitude.

9.6 GSM Module

GSM refers to the Global System for Mobile Communications, a type of mobile phone network. Different GSM frequencies are used in different regions: GSM networks mainly use the 900 MHz or 1800 MHz bands, while nations such as the United States, where those bands were already occupied, use the 850 MHz and 1900 MHz ranges instead. GSM is the means by which mobile data is sent and a form of mobile user communication. A SIM900 module is used in this case; it is among the smallest and least expensive modules available and is often used with Arduino and microcontrollers in embedded system projects.

10 Working Operation

When the switch is activated, data is sent to the ultrasonic sensor. If the reading is positive, the sensor computes the distance between the object and the sensor; if the object is within range, the LED lights up; finally, the output is produced through the vibrating motor or the sound from the buzzer. Pressing the button sends an SMS with the location to the guardian through the GSM module. A message containing the keyword "Saved" is processed by the microcontroller upon receipt by the GSM modem; the microcontroller then obtains the location of the stick from the GPS modem and transmits this information to the GSM modem so that it can reply to the sender. In an emergency, the user can press the stick's emergency button, causing the microcontroller to acquire the user's location data from the GPS modem and transmit it to the GSM modem, which sends an SMS to the user's emergency contact number.
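The decision logic above can be summarised in a short sketch; the hardware calls are stubbed out as print statements, and the range threshold and contact number are illustrative assumptions rather than values given in the paper.

```python
# Stubbed sketch of the working-operation logic (thresholds are assumed).
from dataclasses import dataclass

OBSTACLE_RANGE_CM = 100               # assumed detection range
EMERGENCY_CONTACT = "+91XXXXXXXXXX"   # guardian's number (placeholder)

@dataclass
class GpsFix:
    lat: float
    lon: float

def alert(kind: str) -> None:
    """Stand-in for driving the LED, vibration motor, or buzzer."""
    print(f"[actuator] {kind} ON")

def send_sms(number: str, message: str) -> None:
    """Stand-in for the GSM modem's SMS command."""
    print(f"[GSM] to {number}: {message}")

def on_sensor_reading(distance_cm: float) -> None:
    # If the object is within range, light the LED and warn the user
    if distance_cm <= OBSTACLE_RANGE_CM:
        alert("LED")
        alert("vibration motor")
        alert("buzzer")

def on_emergency_button(fix: GpsFix) -> None:
    # Emergency button: fetch the position from GPS and text it via GSM
    send_sms(EMERGENCY_CONTACT, f"Saved: lat={fix.lat}, lon={fix.lon}")

on_sensor_reading(42.0)
on_emergency_button(GpsFix(13.0827, 80.2707))
```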

11 Conclusion

This project presented the design and engineering of a new Arduino-based virtual eye for the visually impaired. To aid the challenged in a positive way, an electronic guidance system is recommended that is simple, affordable, effective, portable, adaptable, and user-friendly, among many other outstanding qualities and benefits. Through the use of GPS and GSM technologies, it can scan and identify the height and depth of barriers in the path of a visually impaired person, as well as the position of recognised obstacles (Fig. 7).


Fig. 7 Working operation flow chart

References

1. Bourbakis NG, Dakopoulos D (2010) Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Trans Syst Man Cybern Part C Appl Rev 40(1):25–35
2. Ungar S (2021) Third eye for the blind. Asian J Converg Technol, ISSN 2350-1146
3. Pooja S, Shimi SL, Chatterji S. A review on obstacle detection and vision. Int J Sci Res Technol
4. Fernandes H, Barroso J (2015) Blind Guide: an ultrasound sensor based body area network for guiding blind people. In: 6th international conference on software development and technologies for enhancing accessibility and fighting info-exclusion
5. Madulika MS (2013) The electronic travelling aid for blind navigation and monitoring. Int J Adv Res Sci
6. Mohammed HR, Sayemil (2013) Smart walking stick. Int J Sci Eng Res
7. Earshia VD, Kalaivanan SM, Subramanian KB (2020) A wearable ultrasonic obstacle sensor for aiding visually impaired and blind individuals. Int J Comput Appl, national conference on growth of technologies in electronics, January 2020
8. Pooja S, Shimi SL, Chatterji S (2018) A review on obstacle detection and vision. Int J Sci Res Technol
9. Chen CJ, Chen JA, Huang YM (2017) Intelligent environmental sensing with an unmanned aerial system in a wireless sensor network. Int J Smart Sens Intell Syst 10(3)


10. Dhiraj G, Pankhuri S et al (2015) Design and development of a low cost electronic hand glove for deaf and blind. In: International conference on computing for sustainable global development
11. Lazuardi U, Alexander FA, Wiest J (2015) Application of algae-biosensor for environmental monitoring. In: Proceedings of the 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC), Milan, pp 7099–7102
12. Kale R, Phatale AP (2014) Design of a GPS based virtual eye for the blind people. Int J Curr Eng Technol 4(3):2162–2164

Intelligent Compression of Data on Cloud Storage

E. Mothish, S. Vidhya, G. Jeeva, and K. Gopinathan

Abstract Cloud computing is one of the most widely used technologies today. Its main advantage is that a project developer need not buy the physical devices used for development; they can simply rent the devices the project requires, and the rented resources can easily be scaled up if the project's requirements grow. A major problem in cloud computing is the duplication of files stored on the cloud server, which increases the memory usage of cloud storage. To reduce this, we need an application that checks for duplicate files on the cloud server before uploading files to main memory. If a duplicate of a file is already available on the cloud server, the application maps the original file to the newly uploaded one: the user is not affected by the mapping, but on the back end no duplicate file is created. In this way, duplicate files on the cloud server are avoided, thereby reducing the storage space required. Keywords Deduplication · Cloud computing · Cloud storage · Privacy

E. Mothish · S. Vidhya · G. Jeeva (B) · K. Gopinathan
Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India
e-mail: [email protected]
S. Vidhya
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_22

1 Introduction

Data deduplication is a technique for removing duplicate data from datasets, which enhances cloud storage service providers' storage capacity and results in effective use of disc space. This technology enables cloud customers to manage their cloud storage space efficiently by preventing the storage of duplicate data and saving bandwidth; it is employed to address data repetition. These methods are typically employed in cloud servers to reduce server space. As a result, data storage is secure, effective, and efficient.

2 Literature Survey

Deng et al. (2012) proposed several schemes in which attribute-based encryption (ABE) is employed for access control of outsourced data in cloud computing. Most of these schemes struggle with rigidity when putting intricate access control restrictions on cloud storage into practice; the authors therefore presented hierarchical attribute-set-based encryption to achieve scalable, flexible, and fine-grained access control of outsourced data in cloud computing. Bellare et al. (2013) proposed DupLESS, an architecture that provides secure deduplicated storage resistant to brute-force attacks. In DupLESS, encryption uses message-based keys received from a key server via a PRF protocol; it allows users to save encrypted data with an established cloud service and have the provider handle data deduplication on their behalf. Chiu et al. (2016) added a distribution layer to increase the efficiency of cloud computing. To further lower the cloud service's querying costs, they offered a scheme called efficient information retrieval for ranked queries, in which queries are assigned multiple levels and a higher rank retrieves a higher percentage of matched files. According to Shubham et al. (2017), data integrity and storage efficiency are two requirements for cloud storage. Data integrity is ensured by proof of retrievability (POR) and provable data possession (PDP) techniques, while proof of ownership strengthens secure ownership of the file. They put forth a solution that outperforms the existing POR and PDP schemes while adding the benefit of file deduplication.

3 System Architecture

The server asks the user to register and log in to the website. After login, the user can upload a file; the cloud server then checks whether the file already exists, and if it does, the user is not allowed to upload it. If the user wants to download a file, the cloud server sends an email authentication to verify the user's identity. If any manipulation or attack takes place on the server, the cloud server performs digital signature verification with the user to protect the server from hackers or third parties. When the original user uploads a file for the first time, the data is pre-processed to check whether it already exists; if not, the cloud server allows the user to upload the file and records it through the proof of storage. A subsequent user may then try to upload the same data or file to the cloud server.


In that case, the cloud server does not store a second copy; the upload is deduplicated by mapping it to the existing file, without the data being uploaded to the server again (Figs. 1 and 2).

Fig. 1 System architecture

Fig. 2 Storage architecture


4 Modules

4.1 User Creation

In order to create a cloud account, the user must first register their information online. The user's data is kept in a database; we employ a MySQL database in our project. After the registration process, the user becomes a cloud user (Fig. 3).

4.2 Upload Process

In this stage, the user can upload local files to the cloud. The files uploaded by the user are evaluated by the cloud server based on their hash and tag values. When the user uploads a file, the server checks whether the file is already present in the cloud; if it is, the cloud does not allow the user to upload it.

4.3 Deduplication

In the deduplication process, the files that are about to be uploaded already exist on the cloud server. Subsequent users have local access to the files, and the cloud server holds the files' authenticated structures. Without actually uploading the data to the cloud server, subsequent users must convince the cloud server that they are rightful owners of the files (Figs. 4, 5 and 6).

Fig. 3 User creation


Fig. 4 Upload preprocess

Fig. 5 Deduplication

Fig. 6 Proof of Storage

4.4 Proof of Storage

During the Proof of Storage phase, users hold only a tiny amount of metadata locally, and they want to determine whether their files are accurately saved on the cloud server without downloading them. Users who claim ownership without submitting the original files are detected through digital signatures in this module.
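A toy sketch of the idea follows: the user keeps only a digest as local metadata and compares it with a digest recomputed over the stored file. A deployed scheme, as the digital-signature step above implies, would use signed, challenge-based proofs rather than a bare hash.

```python
# Toy Proof of Storage check: local metadata is just a digest of the file.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# At upload time the user records the digest as local metadata
original = b"contents of the user's file"
local_metadata = digest(original)

def server_proof(stored: bytes) -> str:
    """The server 'proves' possession by returning a digest of what it stores."""
    return digest(stored)

assert server_proof(original) == local_metadata       # file stored intact
assert server_proof(b"tampered") != local_metadata    # corruption detected
```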


5 System Requirements

Hardware requirements
. System: Pentium Dual Core
. Hard disk: 320 GB
. Monitor: 15" LED
. Input devices: keyboard, mouse
. RAM: 2 GB

Software requirements
. Operating system: Windows family
. Coding language: Java/J2EE
. Tool: Eclipse
. Database: MySQL

6 Algorithm

The MD5 algorithm takes a message as input and generates a 128-bit "message digest" of the input as output. It was designed so that it is computationally infeasible to construct a message with a given target message digest, or to produce two messages with the same message digest (although practical collision attacks on MD5 have since been demonstrated). The technique is intended for digital signature applications, where a large file has to be securely "compressed": the file is divided into blocks, which are further divided into chunks of data, before being signed with a private key in a public-key cryptosystem. The MD5 hashing algorithm is a one-way cryptographic function; in the mapper function, it computes the digest of every file present in the input directory. These MD5 hashes serve as the keys sent to the reducer, so files with the same hash go to the same reducer. (The Hadoop default for the mapper is that the key is the line number and the value is the content of the file.)
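A minimal single-machine sketch of this hash-then-group idea is shown below; the directory name is hypothetical, and the paper's actual implementation distributes the same logic across Hadoop's map and reduce phases.

```python
# Sketch of hash-based deduplication with MD5: files whose digests collide
# in the index table are duplicates ("uploads" is a hypothetical directory).
import hashlib
from pathlib import Path

def md5_of_file(path: Path, chunk_size: int = 8192) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_index(root: Path) -> dict:
    """Index table mapping digest -> list of files with that digest."""
    index: dict = {}
    for path in root.glob("*"):
        if path.is_file():
            index.setdefault(md5_of_file(path), []).append(path)
    return index

if __name__ == "__main__":
    root = Path("uploads")
    if root.is_dir():
        for file_digest, files in build_index(root).items():
            if len(files) > 1:
                # Keep the first copy; map later uploads to it instead
                print(f"{file_digest}: {files[1:]} duplicate {files[0]}")
```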

7 Usecase Diagram See Fig. 7.

8 Activity Diagram See Fig. 8.


Fig. 7 Usecase diagram

Fig. 8 Activity diagram


9 Software and Technologies Description

9.1 Java

Java technology is both a programming language and a platform. The Java programming language is a high-level language. Depending on the programming language, you normally either compile or interpret a program before running it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted: the compiler first translates a program into platform-independent Java byte codes, which are then interpreted by the Java platform's interpreter. The interpreter parses and executes each Java byte code instruction on the computer. Compilation happens only once, whereas interpretation occurs each time the program is run.

9.2 Java Platform

The Java platform refers to the hardware or software environment on which a program operates. Windows, Linux, Solaris, and Mac OS are a few of the most well-known platforms. Most platforms can be described as a combination of the operating system and the hardware. The Java platform differs from most other platforms in that it is entirely software-based and runs on top of other hardware-based platforms. There are two parts to the Java platform:

. The Java Virtual Machine (Java VM)
. The Java Application Programming Interface (Java API)

The Java VM is the base of the Java platform and is ported onto various hardware-based platforms. The Java API is a sizable collection of ready-made software components that offer many useful features, such as GUI widgets.

9.3 Java Technology

Applets and applications are the two types of programs most frequently written in the Java programming language. An applet is a program that adheres to particular conventions and can execute in a Java-enabled browser. There are other things you can accomplish with the Java programming language besides making fun and interesting online applets: the general-purpose, high-level Java programming language is also a strong development platform, and a wide range of programs may be developed using its comprehensive API.


9.4 Eclipse

Eclipse is an integrated development environment (IDE) for computer programming. It offers an extensible plug-in system and a customizable workspace. Eclipse, which is written mostly in Java, may be used to create applications. Among the many programming languages that may be used with Eclipse are Ada, ABAP, C, C++, COBOL, Fortran, Haskell, JavaScript, Lasso, Lua, Natural, Perl, PHP, Prolog, Python, R, Ruby (including the Ruby on Rails framework), Scala, Clojure, Groovy, Scheme, and Erlang. It may also be used to build packages for mathematical software. A few examples of development environments include the Eclipse Java development tools (JDT) for Java and Scala, Eclipse CDT for C/C++, and Eclipse PDT for PHP. The Eclipse SDK is free software released under the terms of the Eclipse Public License, although that license is incompatible with the GNU General Public License. It was one of the first IDEs to run under GNU Classpath, and it runs without problems under IcedTea.

10 Conclusion

In this paper, a hash-based method using MD5 for data deduplication has been presented and implemented in a distributed environment using the Hadoop framework. The method removes duplication by deleting redundant files and storing unique files in an index table, using a bucket approach to index unique hash values. As a result, higher storage efficiency is attained, hash values can be computed quickly, the deduplication ratio is boosted, and duplicate chunks are effectively identified. Thus, we achieve data deduplication and access control under different security requirements, and the security and efficiency of cloud storage are improved.

11 Future Enhancement

A future enhancement of our project is to detect duplicate data across all file formats: the system would show the existing data and delete the repeated data automatically.


Diagnosis of Diabetic Retinopathy Using Deep Neural Network

S. Dhivya Lakshmi, C. Lalitha Parameswari, and N. Velmurugan

Abstract Diabetes-related eye damage is known as diabetic retinopathy. It significantly increases the risk of blindness, and many new occurrences of diabetic retinopathy could be prevented with proper eye care. The approach suggested in this study employs U-net segmentation with area fusion and a convolutional neural network (CNN) to automatically diagnose high-resolution retinal fundus pictures and categorize them into five disease stages depending on severity (Carrera EV, Gonzalez A, Carrera R, Automated detection of diabetic retinopathy using SVM, IEEE XXIV international conference on electronics, electrical engineering and computing (INTERCON), 2017, [1]). When it comes to proliferative diabetic retinopathy, which is characterized by retinal neovascularization and retinal detachment, high variability in the categorization of fundus pictures is a significant difficulty (Kumar S, Kumar B, Diabetic retinopathy detection by extracting area and number of microaneurysm from colour fundus image, in 5th international conference on signal processing and integrated networks (SPIN), 2018, [2]). Improper inspection of the retinal vessels, which is required to obtain an accurate result, can lead to fragmentation of the retina. Retinal segmentation is a method for autonomously defining blood vessel boundaries; by using region merging, features are not lost during segmentation and are passed on to the image classifier, which has a 93.33% accuracy rate. Fundus pictures were categorized into no DR, mild, moderate, severe, and proliferative categories based on severity levels. Two datasets were considered: Diabetic Retinopathy Detection 2015 and APTOS 2019 Blindness Detection, both from Kaggle. The suggested technique includes data gathering, pre-processing, augmentation, and modeling as its phases. Our suggested model achieves 90% accuracy; regression analysis was also done, with an accuracy rate of 78%. The main goal of this effort is to provide a trustworthy system for the automated detection of DR.

Keywords Diabetic · Retinopathy · Segmentation · Convolutional neural network · CNN

S. Dhivya Lakshmi · C. L. Parameswari (B) · N. Velmurugan
Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India
e-mail: [email protected]
N. Velmurugan
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_23

1 Introduction

Diabetes mellitus is a metabolic illness in which the body produces inadequate quantities of the hormone insulin, causing blood sugar levels to rise. Diabetes affects more than 62 million individuals in India, and 80% of those who have had the disease for more than 20 years eventually develop diabetic retinopathy. Diabetic retinopathy is a disorder that may occur in people with diabetes: the retina, the light-sensitive lining in the rear of the eye, progressively suffers as a consequence. It is a significant, sight-threatening complication of diabetes. Diabetes hampers the body's capacity to ingest and store sugar (glucose) [3]. A rise in blood sugar is a defining feature of this disorder, and it may harm not only the body's organs but also the eyes. Over time, diabetes can damage the retina's tiny blood vessels; diabetic retinopathy arises when these small blood vessels begin to leak blood and other fluids. As a consequence, the retinal tissue swells and vision becomes hazy. Both eyes are often affected, and diabetic retinopathy will probably progress for as long as a person has diabetes; if unchecked, it may cause blindness. According to the International Diabetes Federation, there were 366 million cases of diabetes globally in 2011, and 552 million cases are anticipated by 2030. Every nation is seeing a rise in type 2 diabetes cases, and 80% of people with diabetes live in low- and middle-income nations. India ranks first, with a projected increase of 195%: from 18 million cases in 1995 to 54 million in 2025. Previously, it was believed that diabetes mellitus (DM) was largely confined to India's urban population, but recent studies show that its incidence is also rising sharply in rural regions; studies done in India indicate that the incidence of diabetes in rural areas has tripled in recent years (from 2.2% in 1989 to 6.3% in 2003). One study conducted in India found that one in ten people over the age of 40 in rural southern India has type 2 diabetes and diabetic retinopathy. Diabetic retinopathy is one of the most common causes of blindness; it develops when high blood sugar damages the retina's tiny blood vessels. It is possible to reduce the number of new cases by at least 90% with careful and consistent eye screening. It frequently affects the retinas of both eyes and can result in blindness if left untreated [4].


2 Literature Survey

Carrera and Gonzalez [1] presented "Automated detection of diabetic retinopathy using SVM" at the 2017 IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON). Computerized methods now allow the automatic diagnosis of diabetic retinopathy; in that study, samples of retinal images were analyzed using digital image processing. The creation of a robust grading criterion for non-proliferative diabetic retinopathy (NPDR) in retinal imaging was the primary objective. The outcomes demonstrated that the proposed method was successful in locating DR, with a predictive power of 94% and a sensitivity of 95%; the method's viability was evaluated using a variety of metrics, and future research will focus on applying the algorithms for DR detection in clinical assessment. Kumar and Kumar [2] suggested an approach in "Detection of diabetic retinopathy by area and number of micro-aneurysm extraction from colour fundus image" at the 5th International Conference on Signal Processing and Integrated Networks (SPIN 2018). Using colour fundus images, microaneurysms in diabetic retinopathy could be identified and their sizes measured with great precision. Principal component analysis (PCA), contrast-limited adaptive histogram equalization (CLAHE), morphological methods, and averaging filters were used to extract the microaneurysms, and a linear support vector machine (SVM) performed the DR classification. Mahendran Gandhi's work "Diagnosis of Diabetic Retinopathy Using Morphological Process and SVM Classification" reports a DR detection method with a sensitivity of 96% and a specificity of 92%; the severity of the patient's condition was evaluated using an SVM classifier. In the paper by Meinal, Gaur et al., diabetic retinopathy is detected automatically using support vector machines. Diabetic retinopathy (DR), which can result in double or blurry vision, can develop when blood sugar levels remain consistently high, and careful screening for diabetic retinopathy has saved the sight of many diabetes patients. Changes to the retina are the first sign of diabetic retinopathy, and such distortions are commonly handled with classical image processing methods: fundus image acquisition, pre-processing, feature extraction, and classification are used to distinguish healthy retinal images from diseased ones. To diagnose diabetic retinopathy, Karan Bhatia and colleagues used ensemble machine learning methods on characteristics extracted from segmented retinal images [5]. In that study, the progression of diabetic retinopathy was evaluated using a variety of classification schemes, and the categorization methods worked well; future research will focus on developing DR detection systems that help healthcare professionals detect the disease earlier. "Computer-aided diagnosis of diabetic retinopathy by machine learning approaches" by Raman et al. was presented at the 2016 8th IEEE International


Conference on Communication Software and Networks (ICCSN). It concerns the visual diagnosis of diabetic retinopathy from retinal images: the system creates dynamic DR models through machine learning techniques, and the proposed method can accurately identify the various clinical stages of DR. It outperformed existing approaches in feature extraction and in the categorization of non-proliferative diabetic retinopathy (NPDR) lesions; future research will focus on improving the sensitivity, specificity, and accuracy of the method. In "Diabetes Diagnosis by Vector Vortex Optimization Algorithm" (2018), Deperliolu et al. [6] used deep learning techniques to analyze retinal fundus images and identify diabetic retinal disease. Images of the retinal fundus were classified using a convolutional neural network (ConvNet). The tests showed that the proposed method achieves high levels of accuracy (97%), sensitivity (96.67%), specificity (93.33%), and recall (93.33%), as well as low levels of false positives and negatives. Future research will focus on comparing the proposed approach with available public datasets, and the training set may be expanded with additional unseen images. Xu et al. [5] (2019) established a transfer-learning-based method for diabetic retinopathy (DR) identification. The data originally came from Kaggle's official website and were then enhanced with additional augmentation approaches. Several trained models were used in the investigation, each neural network being pretrained on the ImageNet dataset. Finally, according to the severity visible in the images, five categories of DR pathology were established. The experimental results show that the proposed method's classifications are 60% accurate; compared with previous methods, the new one is simpler and more dependable. Future research will focus on developing DR detection systems that help healthcare professionals detect the disease earlier. Bui et al. [7] presented diabetic retinopathy diagnosis using a neural network at the 10th IEEE 2017 International Workshop on Computational Intelligence and Applications (IWCIA). The publicly accessible DIARETDB1 dataset was used to test the proposed method, and the results demonstrated that it could be used to segment cotton-wool spots, with a sensitivity of 85.9%, a specificity of 84.4%, and an accuracy of 85.54%. Future research will focus on the accuracy of DR identification using a variety of machine learning techniques and finer-grained features. "Diabetic retinopathy detection using deep learning" was the title of a paper that Saquib et al. [8] submitted to the 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE). Diabetes may damage the retina over time, eventually leading to blindness, and DR screening is currently performed manually by ophthalmologists, which takes time. Future work in that project will use deep learning (DL), an AI subfield, to analyze the various stages of DR.


Shelar et al. [9], "Detection and Classification of Diabetic Retinopathy from Fundus Images," International Conference on Computer Communication and Informatics, 2021: diabetes mellitus can develop when the body is unable to produce enough insulin. Insulin controls sugar metabolism, allowing glucose to be stored in muscle and other tissues; without it, blood sugar levels rise, and the resulting damage to the retinal blood vessels leads to diabetic retinopathy and eventually blindness. Thorat, Chavan et al. [11] describe diabetic retinopathy (DR) as being brought on by blood vessel breaks or leaks in the macula, the light-detecting region at the back of the retina. Diabetes is the leading cause of blindness among adults of working age who do not receive proper medical care. Suganyadevi, Renukadevi et al. [12] likewise identify diabetic retinopathy (DR) as a leading cause of blindness worldwide; diagnosis and timely treatment are essential for delaying or preventing vision loss and degeneration. The scientific community has published a number of AI-aided methods for detecting and labeling diabetic retinopathy in fundus retinal images. The primary objective of this study is to weigh the benefits and drawbacks of this technology in the context of early disease diagnosis.

3 Existing Methodology

Edge detection techniques, discussed earlier, are used for data mining and image segmentation in the domains of image processing, computer vision, and machine vision. Algorithms such as Sobel, Canny, Prewitt, Roberts, and fuzzy-logic methods are commonly used to find edges, with the Sobel method often applied to segment images. The local thresholding method employs distinct threshold values for segmented sub-images created from the complete picture, as opposed to a global thresholding strategy that uses a single threshold value for the entire image [10]. Automatic categorization of the retinal vasculature using fundus imaging may help with RGA methodology, eye diagnosis, planning, and therapy. Automatic diagnosis of retinal images is being developed in intelligent healthcare systems for the early identification of glaucoma, stroke, and blindness. A disadvantage of the existing approach is the difficulty of determining the precise color gamut; with this method, it is difficult to identify the vasculature because of the basic properties of retinal images, unsatisfactory edges are produced [6], and fundus exudates are difficult to capture.
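To make the edge-detection and local-thresholding steps above concrete, here is a minimal sketch applying the Sobel operator and an adaptive (local) threshold to a fundus image with OpenCV. The file name and the neighbourhood parameters are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Load a fundus image in grayscale (file name is a placeholder).
gray = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)

# Sobel edge detection: horizontal and vertical gradients,
# combined into a single edge-magnitude image.
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

# Local (adaptive) thresholding: each pixel is compared against the
# mean of its 11x11 neighbourhood, unlike a single global threshold.
local_mask = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
    cv2.THRESH_BINARY, blockSize=11, C=2)

cv2.imwrite("edges.png", edges)
cv2.imwrite("local_mask.png", local_mask)
```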

4 Proposed Methodology

In the proposed method, retinal arteries are identified and diabetic retinopathy is detected using a segmentation method that has been successfully applied to a neural network [5]. The suggested system's technique has the benefit of a high detection rate and is sufficiently robust to variations in lighting. A convolutional neural network (CNN) is an artificial neural network created by researchers to analyze input pixels for image recognition and processing. A CNN comprises an input layer, an output layer, one or more hidden layers, and one or more activation or normalization layers. Regions with convolutional neural networks (R-CNN), an integrated deep learning technique, use rectangular region cues in conjunction with convolutional neural network features. The R-CNN uses a two-stage detection process: in the first stage, a set of regions that could contain the element of interest is selected. The researchers proposed employing active retinal image processing of angiography and exudates to investigate retinal vascular disorders [8]; by contrasting the condition of the retinal blood vessels, these play a critical role in the early diagnosis of several diseases, including diabetes. Put simply, training a CNN involves finding the precise values for each filter such that the input picture, after being processed at various stages, activates the neurons in the final layer that predict the proper class. Although training CNNs from scratch on small projects is doable, most applications call for training extremely large CNNs, which requires a significant amount of computing power and data [9], both of which are hard to come by these days. In transfer learning, we reuse the trained weights of an existing model (trained on millions of images from thousands of categories, using multiple high-power GPUs over several days) to predict new categories from previously learned features. The methodology of this work is illustrated in Fig. 1. The program pulls images from the database (eye images are manually uploaded to the database). This starts the preprocessing phase, which puts the raw data into a form the network can use, followed by feature extraction: the data is converted into numerical features for processing while the information in the original dataset is preserved, and the class label is obtained from the data-entry workbook. The input image undergoes the same process. The image preprocessing and segmentation steps group pixels based on their similarity and serve multiple purposes, followed by the next level of feature extraction. After all operations are completed, classification indicates whether the eye is normal or affected, and if affected, the stage of the affected eye. In Fig. 2, the system takes the input and makes it ready for processing. In Figs. 3 and 4, the image goes through all the processes and produces the output. In Fig. 5, the system produces the final report of whether the eye is affected or normal.
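As a concrete illustration of the transfer-learning idea described above, the following sketch fine-tunes an ImageNet-pretrained backbone for five DR severity classes in Keras. The backbone choice (VGG16), image size, and directory layout are assumptions for illustration; the paper does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained backbone with its classifier head removed.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse learned features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(5, activation="softmax"),  # five DR severity grades
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory of labelled fundus images, one subfolder per grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus_images/", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=10)
```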


Fig. 1 Architecture diagram

Fig. 2 First level


Fig. 3 Second level

5 Modules

This paper consists of five modules:

a. Preprocessing
b. Ground truth segmentation
c. Discrete wavelet transform
d. Feature extraction
e. Neural network classifier

a. Preprocessing

Preprocessing is needed to enhance image characteristics, eliminate image noise, and ensure image stability. The next paragraphs give an overview of the preprocessing techniques most recently employed in research. Many researchers downscale the images to a certain resolution to make them compatible with the network employed. In other research, only the green channel of the images was extracted and, because of its stark contrast, the images were converted to grayscale. Data normalization was performed to bring the images to a similar distribution, and cropped images were used to eliminate superfluous portions of the image.

Fig. 4 Third level

The term "image preprocessing" describes actions performed on pictures at a fundamental level of abstraction. Entropy is a measure of information, yet these operations reduce rather than increase the information content of a picture. Preprocessing aims to improve the quality of the picture data by removing undesirable distortions or enhancing specific visual characteristics that are essential for subsequent processing and analysis. Fortunately, with modern digital technology, anything from simple digital circuits to advanced parallel computers can be used to handle multidimensional signals. Three types of targets can be defined for this manipulation:

• Image analysis: measurements from an image
• Image processing: an image produced from an image
• Image perception: high-level interpretation of an image

Preprocessing of images is necessary because of issues including poor illumination, insufficient contrast between exudates and the background pixels, and noise in the input fundus image. As a fundamental preprocessing step in retinal image analysis, normalization against a reference model is used to reduce visual contrast in the original image.


Fig. 5 Fourth level

Cleansing, instance selection, normalization, one-hot encoding, transformation, feature extraction, and feature selection are all examples of data preparation. The training set is produced once the data has been cleaned and prepared.

b. Ground truth segmentation

Ground truths are "valid and accurate" segmentations, often created by one or more human experts. For example, a dermatologist can highlight (identify) an area of a skin lesion on an image. To determine how accurate a machine-generated segmentation is relative to the ground truth, we often use a performance metric called Intersection over Union (IoU), which measures the degree of overlap between the mask produced by the algorithm in question and the ground truth mask. The effectiveness of the segmentation algorithm is assessed by comparing the resulting images with the ground truth images. When evaluating or measuring the accuracy and validity of aggregate results, image data captured on site or accurately annotated by a third party may be used as ground truth. Ground truth can also refer to training or validating a model with a properly labeled dataset. The Deep Scans library's external inspection feature accomplishes this for data professionals who want to quickly evaluate segmentation data.

c. Discrete Wavelet Transform

The discrete wavelet transform is used to turn image data into wavelet-modulus data. The DWT makes use of a 7-tap filter for its high-pass wavelet parameters and a 9-tap filter for its low-pass ones. The 9/7 float DWT is recommended for lossy compression, whereas the 9/7 integer DWT is recommended for lossless compression.
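Since IoU is central to evaluating a segmentation against the ground truth, here is a minimal sketch of the metric for binary masks; the variable names are illustrative, and the masks would come from the segmentation step described above.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 1.0

# Toy example: two overlapping 4x4 masks.
a = np.zeros((4, 4)); a[:2, :2] = 1
b = np.zeros((4, 4)); b[:2, 1:3] = 1
print(iou(a, b))  # intersection 2, union 6 -> about 0.33
```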


Fig. 6 DWT structure

The DWT localizes the signal in both the time and frequency domains. The decomposition yields several smaller versions of the main picture: the HH, LL, HL, and LH subbands. Multi-resolution analysis is intended to give adequate temporal and frequency resolution at high frequencies, while both frequency resolution and temporal resolution are improved at lower frequencies, making it suitable for short-lived signals with strong frequency components. The high-frequency subbands contain the input image's fringe (edge) information, while the LL subband contains the information of the clear, coarse image. The data from these subbands can be used to enhance the image's appearance after retrieval. The wavelet transform is computed separately for distinct time-domain signal segments at different frequencies (Fig. 6).

In numerical analysis and functional analysis, any wavelet transform that samples the wavelets discretely is referred to as a discrete wavelet transform (DWT). In contrast to Fourier transforms, it captures both frequency and location (position in time) information, which is a significant benefit. Numerous approaches have been used to implement the DWT; the earliest and most well-known algorithm is the (hierarchical) pyramid algorithm. In this approach, two filters, a smoothing filter and a non-smoothing (detail) filter, are built from the wavelet coefficients and applied alternately to obtain data at all scales. If the signal length is L and the total number of data points is D = 2^N, then D/2 coefficients are first computed at scale L/2^(N-1), then (D/2)/2 at the next scale, and so on, until two data points remain at scale L/2^(N-2). Typically, the results are ordered from the largest scale to the smallest, yielding an output of the same length as the input array.
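As a brief illustration of the 2-D DWT decomposition into approximation and detail subbands, the following sketch uses the PyWavelets library with its biorthogonal 9/7 wavelet (named "bior4.4" there, commonly identified with the CDF 9/7 pair); the input file name is a placeholder.

```python
import cv2
import pywt

# Load an image in grayscale (placeholder file name).
img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)

# One level of 2-D DWT with the biorthogonal 9/7 wavelet.
# cA is the coarse approximation (LL); cH, cV, cD hold the
# horizontal, vertical, and diagonal detail (edge) information.
cA, (cH, cV, cD) = pywt.dwt2(img, "bior4.4")

print("Approximation (LL) subband:", cA.shape)
print("Diagonal detail subband:", cD.shape)
```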


d. Feature Extraction

Feature extraction is a method for reducing the dimensions of a dataset by breaking it down into smaller, more manageable chunks, which simplifies processing. These massive datasets consist of many distinct variables, and manipulating them effectively requires a great deal of computing power. By selecting variables and integrating them into features, feature extraction helps derive the optimal features from these vast datasets. These features are intuitive to use and provide a clear representation of the underlying data.

The Graycomatrix method is invoked to generate the GLCM. By counting the occurrences of pixel pairs with given intensity values (gray levels) in a specified spatial relationship, the function produces a gray-level co-occurrence matrix (GLCM). By default, the spatial relationship is defined as being between the pixel of interest and the pixel to its immediate right (horizontally adjacent). Each element (i, j) of the resulting GLCM is the count of input image pixel pairs with values i and j in the required spatial relation. The number of gray levels in the picture determines the size of the GLCM; by default, Graycomatrix scales images to a maximum of eight intensity levels, but this behavior can be changed by specifying different values for the NumLevels and GrayLimits parameters.

Feature extraction applications: in natural language processing, the most popular approach is "bag of words", which extracts words or characteristics from a phrase, document, or website and classifies them by frequency of use. Feature extraction is therefore one of the most crucial steps in the whole process. In image processing, one of the largest and most fascinating fields, a range of approaches such as feature extraction are used to analyze elements like shapes, edges, or motion in a digital picture or video. Autoencoders focus primarily on the unsupervised, efficient coding of data; here, the feature extraction process is critical for selecting the most important qualities from the data to encode, and learning from the original dataset's coding can lead to the acquisition of additional features.
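The GLCM texture features described above can be reproduced in Python with scikit-image, whose graycomatrix/graycoprops functions mirror MATLAB's Graycomatrix (older scikit-image versions spell them greycomatrix/greycoprops); the distance and angle choices below are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 4-level grayscale image (values 0-3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=np.uint8)

# GLCM for horizontally adjacent pixels (distance 1, angle 0),
# normalized so entries are co-occurrence probabilities.
glcm = graycomatrix(img, distances=[1], angles=[0],
                    levels=4, symmetric=True, normed=True)

# Scalar texture descriptors derived from the GLCM.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])
```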


Fig. 7 NN classifier

e. Neural Network Classifier

Deep learning, in this context, means using a neural network with several layers. Layers are made up of nodes, and a node is simply a place where computation occurs; it is meant to mimic a neuron in the human brain that fires only in response to sufficiently strong stimuli. A node combines the input data with a set of coefficients, or weights, that amplify or dampen those inputs, thereby assigning relevance to each input relative to the function the algorithm is trying to learn: which feature inputs best support accurate data classification? The node's so-called activation function sums the input-weight products to decide whether, and how far, the signal should progress through the network to affect the output, which may be a classification. "Activation" describes the process by which a neuron responds to a stimulus (Fig. 7).

Neural networks use algorithms inspired by the structure and function of the human brain to find hidden patterns in data. Here, "neural networks" refers to interconnected structures of synapses and dendrites, which may be of either biological or synthetic origin. If the inputs are sufficiently diverse, a neural network may be able to produce the required results without requiring a change to the output norm. Neural network theory based on artificial intelligence has grown significantly in relevance in the design of commercial systems. In this case, a non-knowledge-based picture classifier is trained by supervised learning. A BPN (back-propagation network) model with a radial basis function as the network activation function is used here to perform classification. The features of training samples with specified target vectors are fed into the newly built BPN model for supervised training to obtain network attributes such as node biases and weighting factors. Finally, the trained network infers the test picture attributes to distinguish between normal and pathological conditions and to identify the disease stage.

6 Conclusion

To achieve this goal, we developed a graphical user interface to collect and evaluate input fundus images and categorize the results [11]. We thoroughly present the techniques of preprocessing, wavelet transform, and GLCM feature extraction, as well as neural network classification of the output, because these techniques improve accuracy, specificity, and performance. Owing to the batch gradient descent training approach with a high learning rate and the quadratic weighted kappa loss function [12], our model achieves high performance. Deep learning methods should be used for DR classification, as they can also be applied to other medical image classification problems that face the hurdle of insufficient training data. Experiments are needed to compare the performance of other pre-trained deep convolutional networks.


7 Future Enhancement

Future improvements to the method could allow greater accuracy [12], for example by applying the same model to a dataset larger than the current one. To increase the confidence of healthcare organizations in using real-time models, the feature extraction component of pre-trained models can be fed into algorithms such as support vector machines, with performance measured by metrics such as specificity and sensitivity [13]. We also plan to compare the performance of different image preprocessing methods, apply pre-trained models to difficult real-world image classification problems, and apply different transfer learning methods.

References

1. Carrera EV, Gonzalez A, Carrera R (2017) Automated detection of diabetic retinopathy using SVM. In: IEEE XXIV international conference on electronics, electrical engineering and computing (INTERCON)
2. Kumar S, Kumar B (2018) Diabetic retinopathy detection by extracting area and number of microaneurysm from colour fundus image. In: 5th international conference on signal processing and integrated networks (SPIN)
3. Gandhi M, Dhanasekaran R (2013) Diagnosis of diabetic retinopathy using morphological process and SVM classifier. In: International conference on communication and signal processing
4. Bhatia K, Arora S, Tomar R (2016) Diagnosis of diabetic retinopathy using machine learning classification algorithm. In: 2nd international conference on next generation computing technologies (NGCT)
5. Xu R, Yang R, Hu H, Xi Q, Wan H, Wu Y (2019) Diabetes alters the expression of partial vasoactivators in cerebral vascular disease susceptible regions of the diabetic rat
6. Deperlioğlu O, Köse U, Güraksın GE, Deperlioğlu Ö et al (2018) Diabetes determination via vortex optimization algorithm-based support vector machines
7. Bui T, Maneerat N (2017) Detection of cotton wool for diabetic retinopathy analysis using neural network. In: IEEE 10th international workshop on computational intelligence and applications (IWCIA)
8. Mishra S, Hanchate S, Saquib Z (2020) Diabetic retinopathy detection using deep learning. In: International conference on smart technologies in computing, electrical and electronics (ICSTCEE)
9. Shelar M, Gaitonde S, Senthilkumar A, Mundra M, Sarang A (2021) Detection of diabetic retinopathy and its classification from the fundus images. In: International conference on computer communication and informatics (ICCCI)
10. Raman V, Then P, Sumari P (2016) Proposed retinal abnormality detection and classification approach: computer aided detection for diabetic retinopathy by machine learning approaches. In: 8th IEEE international conference on communication software and networks (ICCSN)
11. Thorat S, Chavan A, Sawant P, Kulkarni S, Sisodiya N, Kolapkar A (2021) Diabetic retinopathy detection by means of deep learning. In: 5th international conference on intelligent computing and control systems (ICICCS)
12. Suganyadevi S, Renukadevi K, Balasamy K, Jeevitha P (2022) Diabetic retinopathy detection using deep learning methods. In: First international conference on electrical, electronics, information and communication technologies (ICEEICT)
13. Gunawardhana PL, Jayathilake R, Withanage Y, Ganegoda GU (2020) Automatic diagnosis of diabetic retinopathy using machine learning. In: 5th international conference on information technology research (ICITR)

Multi-parameter Sensor-Based Automation Farming

K. Suresh Kumar, S. Pavithra, K. P. Subiksha, and V. G. Kavya

Abstract The agriculture sector contributes vastly to the Indian economy: about 17% of the total Gross Domestic Product (GDP) comes from this sector. Crop production has increased with the discovery of newer seed types, better methods of agriculture, and the use of fertilizers at maximum efficiency. With growing technology, the agriculture sector also needs smarter methods to enhance the quality of farming and increase productivity. The conventional old methods need a lot of human effort and instinct, and there is a fair probability that these instincts might fail and human error might occur. Smart farming methods can be used to avoid continuous manual monitoring, which reduces time and is cost-effective. The proposed method identifies and recognizes plant disease symptoms accurately and effectively using deep neural networks and convolutional neural networks. This IoT-based method starts the water pumps automatically when a low water level is detected. A web application that gives warnings on water flow level, dampness of the soil, and turning the motor on or off is proposed in this method. The method's main objective is to empower users to prevent plant diseases and to monitor and control their fields from anywhere, without the need to be physically present all the time.

Keywords Deep learning · Detection of plant diseases · Internet of Things (IoT) · Automated and smart farming · Arduino

1 Introduction

With increasing population, the need for food production will increase by about 70% by 2050, according to an estimate by the UN Food and Agriculture Organisation. With limited availability of natural resources, enhancing farm yield whilst maintaining environmental balance has become a critical need alongside growing technology.

K. S. Kumar (B) · S. Pavithra · K. P. Subiksha · V. G. Kavya
Department of IT, Saveetha Engineering College, Chennai, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_24


Adopting Internet connectivity solutions in farming is proposed as a feasible way to reduce manual labour. Agricultural productivity is a main factor on which a country's economy depends. Identifying plant diseases and preventing them is a key solution for maintaining yield quality and preventing productivity losses. Conventional models of farming are feasible and reliable but require a constant human presence for manually monitoring, observing, and analysing the patterns of the plants and recognizing diseases, which makes the work more time-consuming and tedious for farmers. IoT-based solutions for smart farming include the use of sensors for temperature, light, soil moisture, humidity, crop health, etc. Systems based on these sensors are built and used for monitoring the field and automating the irrigation system. The main advantage of IoT-based systems is remote monitoring, which means the user can access and monitor their fields from any place. The choice between automated and manual operation is also given to the user, who can take necessary actions on the field using the data given by the IoT system. Smart farming is very efficient in comparison with traditional farming methods and is feasible without being time-consuming. This paper proposes a vision-based automated detection model built on image processing techniques for detecting diseases in plants. Plant diseases or infections are detected by recognizing the colour patterns and features of the leaf using image processing algorithms; colour segmentation using the K-means algorithm is used in this system. Disease classification, along with other sensor parameters for field monitoring and automated pesticide and irrigation systems, is done by connecting IoT with software [1].
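A minimal sketch of the K-means colour segmentation step mentioned above, using OpenCV; the cluster count and file name are illustrative assumptions rather than the paper's configuration.

```python
import cv2
import numpy as np

# Load a leaf image and flatten it to a list of BGR pixels.
img = cv2.imread("leaf.jpg")  # placeholder file name
pixels = img.reshape(-1, 3).astype(np.float32)

# K-means clustering of pixel colours into K groups
# (e.g. healthy tissue, diseased tissue, background).
K = 3
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Replace each pixel with its cluster centre to visualize the segments.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("leaf_segmented.jpg", segmented)
```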

2 Related Work

This work draws on a restructured residual dense network proposed for tomato leaf disease detection. Combining the advantages of residual and dense networks, this hybrid learning model under deep learning can decrease the number of parameters in the training process and also improve accuracy through enhanced gradient and information flow. The original RDN model was initially used for image super-resolution and later needed to be restructured, in terms of network architecture, for classification tasks through adjustments to the hyperparameters and the image features given as input. An accuracy of 95% on the tomato test dataset, verifying satisfactory performance, is observed in the experimental results. Less computation with high performance efficiency can be obtained by improving crop leaf identification models with a reconstructed residual dense network [2].

Detection of citrus diseases at an early stage is essential for preventing crop losses and implementing timely remedies to control the spread of disease in farms. Due to the limited availability of labelled disease samples, it is difficult to deploy machine learning approaches such as deep learning for accurate detection of various citrus diseases. An architecture based on deep metric learning is proposed for disease detection on sparse data. This lightweight, accurate, and fast approach employs a patch-based classification network comprising a neural network classifier, a prototype cluster module, and an embedding module for accurate detection of diseases in citrus fruits. Public evaluation of this approach on various fruits and leaves displayed its efficiency in accurately detecting diseases from images of diseased leaves. The model showed an accuracy of 95.04% in terms of detection, number of tuning parameters, and time efficiency of the detection process. This scheme enables practical deployment on resource-constrained devices such as mobile phones without compromising efficiency or accuracy [3].

An in-depth walkthrough of the available advances in smart agriculture employing AI techniques and IoT technologies is also considered, with a critical evaluation of these technologies in demanding situations and their wide deployment. It covers the modern hardware building blocks used in smart agriculture models, based on a large and varied number of IoT nodes deployed in the field with appropriate sensors to monitor the present condition of the plants. Standardization of commercially available smart farming systems remains a technical challenge [4].

Another paper classifies the applications of IoT in agriculture into seven categories, including smart monitoring, agrochemical applications, managing plant diseases, efficient irrigation systems, smart harvesting, smart water management, managing and tracking the supply chain, and smart agricultural practices. Challenges in hardware boards, hardware and software costs, networking and energy management, interoperability of systems, and security and privacy threats still need to be addressed and reduced. This is an extensive evaluation of using IoT for better and more profitable farming techniques [5].

3 System Design

A. Problem Description

With the conventional manual methods for monitoring and detecting plant diseases, it can be difficult for farmers to constantly keep monitoring their fields. A huge quantity of labour, expertise in plant diseases, and long time intervals are needed.

B. Proposed System

The proposed system offers an automated smart farming model. In this model, multiple sensors for soil moisture level, temperature, humidity level, and plant disease detection are used. An automated irrigation system, triggered in case of low moisture levels in the soil, can be activated by the user at any time from anywhere. The user will also be able to detect any plant diseases at an early stage and automate the sprinkling of pesticide, which is advantageous in preventing yield losses. A smart field monitoring system with multiple sensors using IoT is a solution to the limitations of conventional farming methods (Fig. 1).

Fig. 1 Architecture diagram

4 Methodology

An Arduino UNO microcontroller is used for interfacing the field sensors and communication devices. The LCD display, DHT11, soil moisture sensor, NodeMCU, and pump motor are interfaced with the Arduino UNO. The DHT11 sensor detects the temperature and humidity around the plants using a dedicated digital-signal-acquisition technique and humidity- and temperature-sensing technology, ensuring high reliability and excellent long-term stability. Water levels in the soil are measured with the soil moisture sensors, from which the amount of water stored in the soil horizon can also be estimated. The sensors do not measure water in the soil directly; instead, changes in soil properties related to water content are measured and compared in predictable ways. When a plant disease is detected, a pump motor is activated for sprinkling the medicine. Another pump motor is activated for irrigation when the soil is too dry and water levels are low. A Python dataset is provided for monitoring plant disease. An LCD display is updated with all the current information, and a cloud web server stores all the information on diseases and other field data.


User Login

The first step in the application is for the user to log in to the web page. Each user has their own login page and access. After a successful login, the user can view the details of their farm, such as plant disease detection, soil moisture, temperature, and humidity.

Data collection

The data from the hardware sensors is collected here and then sent to the data processor for further processing. Data such as soil moisture, temperature, and humidity levels is obtained from the field with the help of the sensors set up in the field.

Data processing

The collected data from the hardware sensors is then processed, and the details are stored on a cloud server. The cloud at this phase consists of a web server and a database where the sensed data is stored, as well as decision logic that makes decisions depending on the data. In this information-distribution step, the result of the decision logic is transferred to the web page and subsequently to the IoT gateway.

Field monitoring data detection

The processed data is then displayed on the web page for easy access by the user. From the processed data, we can detect whether a plant is diseased or not, and alerts or notifications can be sent to the hardware, which will enable the medicine sprinkler. A diseased plant can be removed before it affects other plants. Other data, such as temperature, moisture, and humidity, is also displayed to the user.

Execution of the application

On launching the application, the user is able to view and monitor their field through the sensors placed at their farm. Users can control the water motor and medicine sprinklers, and they receive alerts in case of low moisture level, low temperature, or any diseases detected in the plants.

Technologies used:

1. Arduino

Arduino is an open-source electronics platform for hardware development that builds devices which connect with everyday life. It is user-friendly and comprises a programmable microcontroller circuit board and a software IDE that runs on a computer, where code for the physical board is written and uploaded. Arduino was invented at the Ivrea Interaction Design Institute so that users would be able to develop devices independently and adjust them to their particular needs. The Arduino UNO is one of the many varieties of Arduino; it contains all the features needed to support the microcontroller and can be connected to a computer with a USB cable or powered with an AC-to-DC adapter or battery.


2. Deep Learning

Deep learning is a type of machine learning that imitates the human ability to acquire knowledge. Deep learning is much faster and easier than traditional machine learning models at collecting and analysing large amounts of data. The input data is passed through many artificial neural network layers, and this compressed, abstract representation of the data produces the result as a classification of the input into different classes. This means that a deep learning model requires less human effort in the feature extraction process and optimizes the process by saving time and increasing accuracy.

3. IoT

IoT, or the Internet of Things, refers to a network of interconnected systems and the communication between devices and the cloud, as well as among the devices themselves. It serves as a bridge between hardware and software applications. Sensors, gadgets, electronic appliances, and other machines are IoT hardware devices that collect data and send it over the Internet, using IP addresses, to applications embedded in other IoT devices. These devices are managed through a software application or integrated directly through web servers, which does not require external software applications.

5 System Requirements

Hardware Requirements

• Power supply
• DHT11
• Arduino UNO
• Soil moisture sensor
• LCD display
• Pump motor

Software Requirements

• Python language
• Embedded C language
• Arduino IDE

6 Algorithm

Step 1: Place the different sensors in the field.


Step 2: Various data from the field is collected using the hardware sensors placed in the field.
Step 3: The collected data is then stored in the cloud web server.
Step 4: If the moisture level is low, then the water pump is turned on.
Step 5: Deep learning algorithms for disease detection are integrated with the Arduino UNO, which will alert the user in case of any diseased plants, and the medicine sprinkler is activated.
Step 6: The data on field parameters is displayed to the user on the webpage and on the LCD display placed on the hardware as well.

Pseudocode:

Start
  Input temperature, humidity, moisture
  if (450 ≤ Moisture ≤ 800)
    Water pump is activated
  if (minimum humidity level < humidity < maximum humidity level)
    Display humidity level
  if (minimum temperature level < temperature < maximum temperature level)
    Display temperature level
  if (450 ≤ Moisture ≤ 800)
    Display moisture level
  Input plant disease dataset
  Recognize leaf pattern
  if disease recognized
    Medicine sprinkler is activated
  else
    Display "Plant is Normal"
Stop
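The pseudocode above could be realized on the Python side roughly as follows; the sensor-reading and actuator functions are hypothetical placeholders for the real Arduino/NodeMCU interface, and the humidity/temperature bounds are assumed values, since only the moisture thresholds are given in the pseudocode.

```python
# Hypothetical server-side sketch of the decision logic above.
# read_sensors(), activate_pump(), activate_sprinkler() and
# classify_leaf() stand in for the real hardware/model interfaces.

HUMIDITY_RANGE = (20.0, 90.0)      # assumed min/max humidity levels
TEMPERATURE_RANGE = (10.0, 45.0)   # assumed min/max temperature levels

def control_cycle(read_sensors, activate_pump, activate_sprinkler,
                  classify_leaf, leaf_image):
    temperature, humidity, moisture = read_sensors()

    if 450 <= moisture <= 800:      # dry-soil band per the pseudocode
        activate_pump()
        print("Moisture level:", moisture)

    if HUMIDITY_RANGE[0] < humidity < HUMIDITY_RANGE[1]:
        print("Humidity level:", humidity)
    if TEMPERATURE_RANGE[0] < temperature < TEMPERATURE_RANGE[1]:
        print("Temperature level:", temperature)

    if classify_leaf(leaf_image) == "diseased":
        activate_sprinkler()
    else:
        print("Plant is Normal")
```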

7 Conclusion and Future Enhancements

This smart field monitoring system, integrating IoT with Python, is very efficient and cost-effective. This remote sensing model helps farmers monitor and check on their fields from any place and eliminates the need for the farmers' manual presence in the field. With the integration of different field parameters, farmers will be able to take smart decisions according to necessity and situation. Detecting diseases and automating the medicine sprinklers will be very advantageous for producing a good-quality yield. With an automated irrigation system, precision agriculture can be performed very well, and resources can be used in an efficient and conservative manner. IoT-based agriculture will find major uses in the future: field monitoring can be extended to real time with end-to-end privacy, and automated harvesting and picking can be done. Accurate predictive analysis will help to make better decisions about the crop and its growth, and farmers can use this application to track their products in the supply chain to avoid fraudulent activities.

References

1. Ahmed N, De D, Hussain I (2018) Internet of Things (IoT) for smart precision agriculture and farming in rural areas. IEEE IoT J 5(6):4890–4899. https://doi.org/10.1109/JIOT.2018.2879579
2. Zhou C, Zhou S, Xing J, Song J (2021) Tomato leaf disease identification by restructured deep residual dense network. IEEE Access 9:28822–28831. https://doi.org/10.1109/ACCESS.2021.3058947
3. Janarthan S, Thuseethan S, Rajasegarar S, Lyu Q, Zheng Y, Yearwood J (2020) Deep metric learning based citrus disease classification with sparse data. IEEE Access 8:162588–162600. https://doi.org/10.1109/ACCESS.2020.3021487
4. Naseer T, Sugar B, Ruhnke M, Burgard W (2015) Vision-based Markov localization across large perceptual changes. In: Proceedings of European conference on mobile robots (ECMR), Lincoln, UK, September 2015, pp 1–6
5. Friha O, Ferrag MA, Shu L, Maglaras L, Wang X (2021) Internet of things for the future of smart agriculture: a comprehensive survey of emerging technologies. IEEE/CAA J Autom Sin 8(4):718–752. https://doi.org/10.1109/JAS.2021.1003925

Comparing Ensemble Techniques for Bilingual Multiclass Classification of Online Reviews

Priyanka Sharma and Pritee Parwekar

Abstract Understanding customers and their needs is how a business runs today. Gone are the days when customers went to shops to buy goods. With emerging technology, further revolutions provide goods to customers at their convenience, leading to things like online shopping, home delivery, and collecting feedback on services. Feedback or comments provided by customers play a critical role in understanding customer needs and help build future sales based on recommendations and sentiment. When dealing with a global market and customers, the language of these comments can be bilingual or multilingual and can be English or non-English. Understanding the languages on a single platform for different categories is a time-consuming manual process, which may delay getting any advantage from the reviews. In this paper, two non-English languages are considered with multiple product categories, and their categorization using Natural Language Processing and machine learning, through a comparison of ensemble techniques, brings out hidden insights on different platforms for a given language. Various retail and CPG companies have a great customer base, and providing the best services is the new goal. The languages considered are French and Spanish, with 20 product categories written in the English language. Comparing various ensemble techniques, the model with the highest performance was selected, with an accuracy of 99.84% on the test sample. The methodology used is highly effective, easy to use, and fast, with great accuracy.

Keywords Machine learning · Decision tree · Collocations · Extra trees · NLP · Boosting

P. Sharma (B) · P. Parwekar
SRM Institute of Science and Technology, Modinagar, Ghaziabad, UP 201204, India
e-mail: [email protected]
P. Parwekar
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_25


1 Introduction

1.1 Background

With the evolution of AI and machine learning, finding market trends, understanding customers, and taking business decisions after reviewing customer reviews, interests, and demands is the new social reform. Customer reviews [1] from various platforms for various products produce a large amount of data which, if classified in a timely manner, can generate profit by increasing sales and by recommending new items based on the current purchase. The use of ML and NLP in CRM and in obtaining business insights is gaining popularity recently, helping customers as well as retailers to do business effectively and comfortably. When reviews for a product are written together in two different languages, identifying the product category needs a lot of time and knowledge of those languages, with a further chance of wrong classification due to manual error. Taking review data in two different languages, i.e., French and Spanish, for multiclass classification, performing feature extraction and label encoding, and then applying ensemble techniques to predict new reviews by identifying the language are the parts of this paper discussed in detail here. The rest of the paper is organized as follows: Sect. 2 gives the approach to model building; Sects. 3–5 use NLP and ensemble techniques to build models, compare training and testing accuracies, and present future work; Sect. 6 gives the conclusion.

1.2 Related Work

Several studies have been considered for this work. In "Multi-Class Classification of Turkish Texts with Machine Learning Algorithms" (IEEE, 2018) [2], Fatih Gürcan worked on text in the Turkish language and presented the problem of classifying Turkish text into five predefined categories, namely "Sport, Economy, Technology, Politics and Health", also using machine learning models. The next study considered is the paper "English languages enrich scientific knowledge: The example of economic costs of biological invasions" by Elena Angulo et al., in Elsevier, June 2021. The next paper [3] provides a precise summary of the necessity of, and challenges involved in, automatic language identification for machine translation. It also notes that language identification and machine translation are quite important for the availability of cross-lingual information. As the Hindi, Marathi, and Sanskrit languages are quite close to each other, finding distinctive features that can classify them is a problematic task. The paper mentions that "segmentation and translation of individual languages in a multilingual document really improved the quality of machine translation". Hindi and Sanskrit were used in the paper, leaving room to recognize other languages as well.


Another related research work is "Entity Linking: A Problem to Extract Corresponding Entity with Knowledge Base" [15], by Gongqing Wu et al., Member, IEEE. In this work, the researchers introduce the complications and applications of the entity linking task, with an emphasis on the important approaches for addressing this issue. Finally, the researchers list the knowledge bases, datasets, evaluation criteria, and certain challenges of entity linking. The current approaches are relevant for linking similar languages.

2 Proposed Methodology

The methodology proposed here is to identify the language and use collocations according to that language, further encoding the text reviews, and finally building a model by applying ensemble techniques.

2.1 Dataset Collection

The comments/reviews were collected for the French and Spanish languages from the respective marketplaces [4, 5]. Initially, an equal number of reviews, i.e., 5632 for each language (French and Spanish), was collected in raw form, covering 29 different product categories. In Fig. 1, the raw sample dataset of Spanish reviews is shown; as seen, there were 5632 data samples. Similarly, the reviews in the French language were collected for Amazon products, with 5632 raw data samples collected initially; the sample can be seen in Fig. 2. The data samples for both languages were combined into a single dataset, giving a total of 11,264 raw data samples across 29 unique product categories. As seen in Fig. 3, the top category is wireless.

2.2 Data Preprocessing

The dataset mentioned in Figs. 2 and 3 has two columns, named "Category" and "Reviews". The category column holds the target variable, and the reviews column holds the data to be trained on.

Fig. 1 Sample dataset of Amazon reviews in Spanish language

Fig. 2 Sample dataset of Amazon reviews in French language

As seen in Fig. 4, the language of each raw review can first be identified, which comes out as 'fr' or 'es': 'fr' is for the French language and 'es' is for the Spanish language. All the reviews were changed to lowercase characters, and after checking for duplicate values, duplicates were dropped from the text. All categories having fewer reviews than a specific threshold were dropped, and only categories fulfilling the threshold criteria were selected [6]. As shown in Fig. 5, the final result has 10,074 reviews and 20 unique categories.

Fig. 3 Summary of raw data sample of Amazon reviews

Next, we check the relationship between the detected language [7] and the category to better understand the reviews. Figure 6 depicts this relationship: the vertical axis shows the detected language, and the horizontal axis shows the product categories.
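A minimal sketch of this preprocessing, assuming the langdetect package for language identification and a pandas DataFrame with the two columns described above; the file name, column names, and threshold value are illustrative.

```python
import pandas as pd
from langdetect import detect  # pip install langdetect

df = pd.read_csv("amazon_reviews.csv")  # placeholder file name

# Identify the language of each review ('fr' or 'es' expected here).
df["language_detected"] = df["Reviews"].apply(detect)

# Lowercase the text and drop duplicate reviews.
df["Reviews"] = df["Reviews"].str.lower()
df = df.drop_duplicates(subset="Reviews")

# Keep only categories with at least a threshold number of reviews.
THRESHOLD = 100  # illustrative value, not taken from the paper
counts = df["Category"].value_counts()
df = df[df["Category"].isin(counts[counts >= THRESHOLD].index)]

print(df["Category"].nunique(), "categories,", len(df), "reviews")
```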

Fig. 4 Dataset after preprocessing

Fig. 5 Summary of dataset after preprocessing


Fig. 6 Relationship between language_detected and category

Figure 7 shows the distribution of the reviews by count, and it can be seen that the data is quite imbalanced, which can be the case with test data as well. The data is therefore kept as it is, without performing any changes.

Fig. 7 Distribution graph for 20 product categories and their count


3 Implementation Details

The proposed method is implemented in a series of steps. The first step is to factorize the category text into corresponding unique numeric values. As shown in Fig. 8, a new column named "category_id", with values from 0 to 19 for each category, has been added and tagged to each comment. The next step is to extract features from the reviews, for which we need a word embedding technique. The technique proposed here is to remove stopwords for the French and Spanish languages and then apply the NLP steps of tokenization and lemmatization, followed by collocations, to obtain only meaningful words [8] related to that product in that particular language. The result of the collocation step is a set of unigrams and bigrams, which are then converted to vectors to form the feature vector set [9]. This method of feature generation is language independent and can be used for any language, making the model scalable. Figure 9 shows the set of words resulting from applying collocations to the French and Spanish languages [10]; these have been tagged to each category, which helps in classifying new reviews/comments [11]. In the next step, this feature vector is fed to different ensemble techniques, such as boosting algorithms, to compare the results and find the best algorithm for training the model, which can then classify the product categories for reviews in these two languages [12].
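A condensed sketch of this pipeline, using NLTK for stopwords and bigram collocations and scikit-learn for vectorization and the ensemble comparison; the paper does not spell out the exact preprocessing details, so the choices below (PMI-ranked bigrams, count vectors) are illustrative assumptions. It requires nltk.download("punkt") and nltk.download("stopwords") beforehand.

```python
import nltk
from nltk.corpus import stopwords
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              ExtraTreesClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assume `df` from the preprocessing sketch; factorize the category:
df["category_id"] = df["Category"].factorize()[0]
texts = df["Reviews"].tolist()
labels = df["category_id"].tolist()

stops = set(stopwords.words("french")) | set(stopwords.words("spanish"))

def collocation_tokens(text):
    # Unigrams without stopwords, plus the top PMI-ranked bigrams.
    tokens = [t for t in nltk.word_tokenize(text)
              if t.isalpha() and t not in stops]
    finder = BigramCollocationFinder.from_words(tokens)
    bigrams = ["_".join(b) for b in finder.nbest(BigramAssocMeasures.pmi, 5)]
    return tokens + bigrams

vec = CountVectorizer(analyzer=collocation_tokens)
X = vec.fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)

for model in (AdaBoostClassifier(), GradientBoostingClassifier(),
              DecisionTreeClassifier(), ExtraTreesClassifier()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```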

Fig. 8 Factorize the category to get category_id


Fig. 9 Sample collocated unigrams and bigrams for each category

4 Results

The parameters for measuring the results [13] of the ensemble models were the accuracy for both training and testing, together with precision, recall, and F1-score, calculated via the confusion matrix. The reviews were separated in a ratio of 80:20, where 80% is considered for training and the remaining 20% for testing. As seen in Table 1, the training and test accuracies of the decision tree and extra trees classifiers come out best, at 99.72% on the training data and 99.84% on the testing data. The AdaBoost classifier does not perform well on these data, giving an accuracy of approximately 20%. The gradient boosting classifier performs comparatively well on the testing data. Further, the extra trees classifier having the highest performance, its confusion matrix for all categories, shown in Fig. 10, has all correct values on the diagonal.

Table 1 Performance table with accuracy

S. no  Algorithm used                 Train accuracy (%)  Test accuracy (%)
1      AdaBoost classifier            21.00               20.60
2      Gradient boosting classifier   81.07               96.62
3      XGBoost                        85.87               98.61
4      Decision tree classifier       99.72               99.84
5      Extra trees classifier         99.72               99.84
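The confusion-matrix heat map of Fig. 10 can be reproduced for the best model along the following lines, assuming the fitted extra trees model and test split from the previous sketch:

```python
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

# `model` is the fitted ExtraTreesClassifier from the previous sketch.
cm = confusion_matrix(y_te, model.predict(X_te))

# Heat map: correct predictions accumulate on the diagonal.
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues")
plt.xlabel("Predicted category_id")
plt.ylabel("Actual category_id")
plt.show()
```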


Fig. 10 Confusion matrix in form of heat map for all 20 categories

5 Future Work

In this paper, two non-English languages were considered, and this number can be increased further. The reviews were compared across ensemble techniques, but as future work, deep learning algorithms such as state-of-the-art models can also be used [14]. Different word embedding techniques can also be applied to build the feature vectors [6].


6 Conclusion

The comparative study shows that, on the bilingual data, the extra trees and decision tree classifiers outperformed the others with an accuracy of almost 100%, and the generated model does not require any GPUs or TPUs, is easily scalable, and is highly effective. It can save hours per day as well as cost by automatically classifying customer reviews from various platforms in different languages, and it is easy to use.

References

1. Singh RP, Haque R, Hasanuzzaman M, Way A (2020) Identifying complaints from product reviews: a case study on Hindi. CEUR-WS.org, vol 2771, Ireland, p 28
2. Gürcan F (2018) Multi-class classification of Turkish texts with machine learning algorithms. IEEE
3. Babhulgaonkar A, Sonavane S (2020) Language identification for multilingual machine translation. In: IEEE international conference on communication and signal processing, July, pp 0401–0405
4. Keung P, Lu Y, Szarvas G, Smith NA (2020) The multilingual Amazon reviews corpus. arXiv:2010.02573v1
5. Amazon Inc. (2015) Amazon customer reviews dataset. https://registry.opendata.aws/amazon-reviews/
6. De Melo G, Siersdorfer S (2007) Multilingual text classification using ontologies. In: European conference on information retrieval. Springer, pp 541–548
7. Artetxe M, Schwenk H (2019) Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Trans Assoc Comput Linguist 7:597–610
8. Bojanowski P, Grave E, Joulin A, Mikolov T (2017) Enriching word vectors with subword information. Trans Assoc Comput Linguist 5:135–146
9. Bowman SR, Angeli G, Potts C, Manning CD (2015) A large annotated corpus for learning natural language inference. In: Proceedings of the conference on empirical methods in natural language processing (EMNLP). Association for Computational Linguistics
10. Conneau A, Khandelwal K, Goyal N, Chaudhary V, Wenzek G, Guzmán F, Grave E, Ott M, Zettlemoyer L, Stoyanov V (2019) Unsupervised cross-lingual representation learning at scale. arXiv:1911.02116
11. Joulin A, Grave E, Bojanowski P, Mikolov T (2017) Bag of tricks for efficient text classification. In: Proceedings of the 15th conference of the European chapter of the association for computational linguistics, vol 2. Valencia, Spain, pp 427–431
12. Bel N, Koster CHA, Villegas M (2003) Cross-lingual text categorization. In: International conference on theory and practice of digital libraries. Springer, pp 126–139
13. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv:1412.6980
14. Yu S, Su J, Luo D (2019) Improving BERT-based text classification with auxiliary sentence and domain knowledge. IEEE Access 7:176600–176612
15. Wu G, He Y, Hu X (2016) Entity linking: a problem to extract corresponding entity with knowledge base. IEEE Access 6220–6231

Detection of Disease in Liver Image Using Deep Learning Technique

T. K. R. Agita, M. Arun, K. Immanuvel Arokia James, S. Arthi, P. Somasundari, M. Moorthi, and K. Sureshkumar

Abstract One of the biggest causes of death among humans worldwide is liver disease. Currently, manually identifying cancer tissue is a difficult and laborious task. By segmenting liver lesions in CT scans, it is possible to evaluate the severity of the disease, make treatment plans, predict outcomes, and keep a record of the clinical response. To address the problem of liver disease, this research proposes the use of a fully convolutional neural network (FCNN), mathematically modeled, for the detection of liver cancer. For the analysis of liver cancer, FCNN has been a useful method for semantic segmentation. It is crucial to distinguish between cancerous and non-cancerous lesions, since the diagnosis and course of treatment are determined by the CT-based lesion-type definition, which requires a high level of knowledge and resources. Distinguishing colorectal cancer liver metastases from benign cysts in abdominal CT scans of the liver has been explored using a deep learning technique. The goal of this research project is to

T. K. R. Agita
ECE, Saveetha Engineering College, Chennai, India

M. Arun
ECE, Panimalar Institute of Technology, Chennai, India
e-mail: [email protected]

K. I. A. James · S. Arthi
Vel Tech Multi Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Chennai, India

P. Somasundari
Panimalar Engineering College, Chennai, India

M. Moorthi (B)
HOD, Department of Medical Electronics & BME, Saveetha Engineering College (Autonomous), Chennai, India
e-mail: [email protected]

K. Sureshkumar
IT, Saveetha Engineering College (Autonomous), Chennai, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_26


create a binary classifier that can accurately distinguish between a healthy liver and a liver affected by hemochromatosis by implementing CNN and AlexNet models. In training on the images, AlexNet achieves 95% accuracy and the CNN achieves 90%; here, AlexNet achieves higher accuracy than the CNN.

Keywords Liver disease · Computed tomography (CT) · Fully convolutional neural network (FCNN) · AlexNet

1 Introduction

The liver performs a variety of important functions, including processing and transporting nutrients as well as helping to digest meals. There are numerous liver disorders and diseases: some are brought on by viruses, such as hepatitis; others may be brought on by drug use or excessive alcohol consumption. Cirrhosis and jaundice (yellowing of the skin) can be symptoms of liver illness and are brought on by chronic liver damage or scar tissue. Deep learning is gradually emerging as a significant method in imaging applications for evaluating and treating liver disease. This study employs deep learning with a new neural network (NN) architecture that has several output channels reflecting the various components of the goal, to enhance the effectiveness of image segmentation. For the primary task of pixel-wise classification, effective consistency regularization is guaranteed by using at least one interlinked auxiliary output channel. Residual learning is specifically applied to multi-output-channel consistency regularization by linking the principal output channel and the auxiliary output channels of the network via additive paths. The approach is assessed on the identification and localization of lung and liver tumors using publicly accessible data. The results clearly demonstrate that residual learning with multi-output-channel consistency enhances the baseline deep neural network. The suggested framework is highly inclusive and ought to have a wide range of uses in numerous deep learning problems. The main drawbacks of a neural network are that it is a black box and that training on datasets is time-consuming. Here, we are going to use a CNN technique to find liver disease.

2 Related Work In [1], we suggested utilizing classification algorithms to predict liver disease. We are aware that machine learning algorithms can also be utilized to uncover concealed data for accurate diagnosis and decision-making. Liver diseases have become more prevalent recently, and many countries consider these to be very fatal diseases. Using multiple classification techniques including logistic regression, K-nearest neighbor, and support vector machines, the primary goal of this research article is to predict


liver illness. The classification systems are compared using an accuracy score and confusion matrix. The application of machine learning to the diagnosis of liver disorders is described in [2]. Every year, more than 2.4% of deaths in India are caused by liver illnesses. Due to its modest symptoms, liver disease is often challenging to diagnose in its early stages; frequently, the symptoms show up when it is too late. In order to better diagnose liver illnesses, that research examines two identification techniques: patient parameters and genome expression. The application of machine learning (ML) algorithms to the diagnosis of liver disease is described in [3]. Liver disease now poses a greater risk to people, making it more crucial than ever to identify its underlying causes; therefore, automated software is required to increase accuracy and reliability in the early diagnosis of liver disease. Classical ML systems are developed for this aim to predict the disorder. In that study, Support Vector Machines (SVM), Decision Trees (DT), and Random Forests (RF) are suggested for predicting liver disease more accurately, precisely, and consistently. According to [4], the chronic hepatitis B virus infects the livers of about 257 million people worldwide, and a million people with persistent infections such as HBV pass away from chronic liver disease every year. Machine learning calculations are incredibly helpful in providing specialists with critical measurements, ongoing information, and progress reviews regarding a patient's condition, lab test results, blood pressure, family history, clinically relevant information, and more. In [5], the process of discovering patterns in enormous data sets using technologies such as machine learning, statistics, and database systems is dubbed data mining. Medical systems have an abundance of data, which calls for a powerful analysis tool to uncover hidden correlations and data drift. The medical condition that includes liver disorders is referred to as "liver disease." DT, Linear Discriminants, Fine Gaussian SVMs, and Logistic Regression are just a few of the classification methods used in data mining. In this work, we use deep learning and transfer learning methods to classify liver disease.

3 Proposed Method

The proposed method uses a fully convolutional neural network to identify liver disease. Every learning algorithm in the process has a training and a testing phase. During the training phase, various "data augmentation" techniques were used to enhance the obtained CT data. The augmented information, also known as input data, is then fed into the NN system to produce a suitable framework. To get around the restriction that current approaches do not fully exploit spatial 3D knowledge in neural network identification, the evaluation of different CNN layers has been conducted in our feature extraction process [6].


Fig. 1 Block diagram for the proposed methodology: Input Image → Preprocessing → CNN and AlexNet → Classification → Healthy Liver / Hemochromatosis Liver

A texture classifier has been developed for the proposal process to separate ROIs into normal and abnormal hepatic lesions [7, 8]. At the categorization detection stage, the abstract features have been used to distinguish hepatocellular carcinoma (HCC), liver cysts, and hemangiomas as irregular hepatic lesions [9]. For this project's training phase, we went through multiple revisions to arrive at a better model structure. During the testing phase, additional CT scans are used to test the system against these findings. Here, we contrast the AlexNet and CNN algorithms to see which has greater accuracy [10]. Both the deep learning and transfer learning components fall under the CNN approach. A convolutional neural network is a more sophisticated machine learning technique for improved classification; here, both categorization and detection are performed. Figure 1 displays the block diagram for the suggested methodology. A convolutional neural network (CNN) is a supervised deep learning method applied to computer vision. It can be divided into the following five steps: Convolution, ReLU, Max Pooling, Flattening, and Full Connection.

3.1 Input Layer

The input layer of a CNN architecture supplies the initial set of data (the raw image) to the first layer of artificial neurons for processing.

Fig. 2 Convolutional layer: a 7 × 7 binary input image is convolved with a 3 × 3 feature detector (kernel) to produce a 5 × 5 feature map

3.2 Convolutional Layer

A filter known as a feature detector or kernel lies at the center of convolution. In essence, we multiply each image section by the filter and count the number of matching 1s. The resulting image is called the Feature Map (Fig. 2); each of its cells essentially records how strongly the corresponding features were convolved at that position (e.g., a 4 indicates a stronger match than a 2). The input image information is compressed as a result, making it easier to process. The issue is whether information is lost when using the feature detector filter: the higher the numbers in the feature map, the better the filtering process and the fewer features we are losing. In essence, we produce as many feature maps as we require feature detection filters (e.g., edge detect, blur detect, emboss detect).
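As an illustration, the following minimal NumPy sketch (our own, not code from the original study) applies the 3 × 3 feature detector of Fig. 2 to the 7 × 7 binary input image with stride 1 and no padding, producing the 5 × 5 feature map; note that, as in most CNN libraries, the "convolution" is implemented as a cross-correlation:

import numpy as np

def convolve2d_valid(image, kernel):
    """Slide the kernel over the image (no padding, stride 1) and
    sum the element-wise products at each position: the feature map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1), dtype=int)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# 7 x 7 binary input image and 3 x 3 feature detector from Fig. 2
image = np.array([[0, 0, 0, 0, 0, 0, 0],
                  [0, 1, 0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0, 1, 0],
                  [0, 0, 0, 1, 0, 1, 0],
                  [0, 0, 0, 0, 0, 1, 0],
                  [0, 1, 0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0, 0, 0]])
kernel = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 1]])
print(convolve2d_valid(image, kernel))   # the 5 x 5 feature map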

3.3 ReLU Layer

The rectifier or Rectified Linear Unit (ReLU) activation function is an activation function used in artificial neural networks, defined as the positive part of its argument:

f(x) = max(0, x)

(1)

where x is the input to a neuron, as shown in Fig. 3. This is also referred to as a "ramp" function and is comparable to half-wave rectification in electrical engineering. The rectifier is the most frequently used activation function in deep neural networks. The ReLU function is an additional step after convolution, used to increase non-linearity. Images are highly nonlinear, so we want to break the linearity that filtering can introduce. For instance, in a black-and-white image, the ReLU removes the linear component brought on by shadows: shadows appear in an image as a linear evolution of gray scale, which the ReLU can eliminate.


Fig. 3 ReLU layer

3.4 Max Pooling Layer

A feature of a picture, like a dog, can typically appear in a variety of locations or orientations. Our neural network must therefore possess a characteristic known as spatial invariance: it needs to be flexible enough to find features regardless of where they are located in a given environment, or how close or far away they are. Pooling is what we use to achieve this. We apply max pooling to the previously constructed feature map: the highest value in each 2 × 2 pixel box is reported in the pooled feature map, and this procedure is repeated as the 2 × 2 box moves across the feature map. As a result, the features are still preserved, and since the greatest pixel value is kept, distortions, whether spatial, textural, or of another kind, are also accounted for. Additionally, by lowering the number of parameters, pooling has the added benefit of limiting overfitting. That is similar to how humans operate: they don't need to view every detail, which can be distracting for their eyesight, and instead choose to ignore some of it (Fig. 4).

Fig. 4 Max pooling layer: 2 × 2 max pooling applied to the 5 × 5 feature map yields a 3 × 3 pooled feature map
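A minimal NumPy sketch (our own, for illustration) reproduces the numbers of Fig. 4: 2 × 2 max pooling with stride 2, allowing incomplete edge windows (ceil mode), so that the 5 × 5 feature map yields the 3 × 3 pooled feature map shown in the figure:

import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Max pooling with incomplete edge windows allowed (ceil mode),
    so a 5 x 5 map yields a 3 x 3 pooled map."""
    h, w = fmap.shape
    out_h = -(-h // stride)            # ceil division
    out_w = -(-w // stride)
    out = np.zeros((out_h, out_w), dtype=fmap.dtype)
    for r in range(out_h):
        for c in range(out_w):
            window = fmap[r * stride:r * stride + size,
                          c * stride:c * stride + size]
            out[r, c] = window.max()   # keep the highest value in the box
    return out

fmap = np.array([[0, 0, 1, 1, 0],
                 [1, 1, 0, 4, 0],
                 [0, 1, 1, 2, 1],
                 [0, 1, 2, 1, 2],
                 [0, 0, 1, 0, 1]])
print(max_pool(fmap))                  # [[1 4 0] [1 2 2] [0 1 1]], as in Fig. 4
print(max_pool(fmap).flatten())        # flattening (Sect. 3.5) readies the vector for the ANN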


Fig. 5 Flattening

3.5 Flattening

Rearranging the pooled feature map into a single column is known as flattening. We do this because the resulting vector is now ready to be fed into an ANN for additional processing. Figure 5 summarizes the complete procedure explained so far.

3.6 Fully Connected Layer

The flattened vector described above now becomes the input to a fully connected ANN; by "fully connected," we refer to the complete connectivity of the hidden layer, which by definition makes this a CNN. This allows us to aggregate our characteristics into multiple attributes in order to more accurately forecast the classes. In practice, increasing the number of characteristics (such as edge, blur, and emboss detection) raises the success rate of image prediction. We can determine which neurons are crucial for dogs and cats, respectively, by observing the violet (for dogs) and green (for cats) neural connections between output and behavior. The steps taken so far are summarized in Fig. 6.

3.7 Softmax Layer

In the figure concerning dog detection, we obtained 0.95 for dogs and 0.05 for cats. The key question is how these two numbers come to sum to one. Only the Softmax function, whose formula is as follows, can make that happen:

f_j(z) = e^{z_j} / Σ_k e^{z_k}

(2)

292

T. K. R. Agita et al.

Fig. 6 Fully convolutional layer

Fig. 7 Softmax layer

The Softmax function, a generalization of the logistic function as shown by the formula in Fig. 7 above, ensures that our predictions add up to one. The cross-entropy function (CEF) is frequently paired with the softmax function: to maximize the performance of our CNN, having used the softmax function, we assess the model's validity using the CEF as a loss function. Using the cross-entropy function has various benefits. Chief among them is that gradient descent can otherwise proceed slowly when, for instance, the output value is considerably below the actual value at the start of backpropagation; because cross-entropy uses the logarithm, it helps the network learn quickly even from significant errors.
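The following short NumPy sketch (our illustration, with assumed raw score values) implements Eq. (2) and the cross-entropy loss for the dog/cat example, reproducing probabilities close to 0.95 and 0.05:

import numpy as np

def softmax(z):
    """Eq. (2): exponentiate and normalise so the outputs sum to 1.
    Subtracting max(z) improves numerical stability without changing the result."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(p, target_index):
    """CEF loss for a one-hot target: -log of the predicted probability
    of the true class."""
    return -np.log(p[target_index])

scores = np.array([2.0, -0.9])   # assumed raw network outputs (dog, cat)
probs = softmax(scores)           # ~[0.95, 0.05], summing to 1
print(probs, cross_entropy(probs, 0))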

4 AlexNet

Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed the convolutional neural network (CNN) architecture termed AlexNet. (The layer calculations work out correctly if AlexNet's input size is taken as 227 × 227 × 3 rather than the often-quoted 224 × 224 × 3.) AlexNet competed in the ImageNet Large-Scale Visual Recognition Challenge: with a top-5 error rate of 15.3%, the network did better than the close second by over 10.8


percentage points. The primary discovery was that the depth of the model, which required expensive computational resources but was made achievable by the use of graphics processing units (GPUs) during training, was essential for its excellent performance. AlexNet was not the first fast GPU implementation of a CNN to win an image recognition competition. According to Chellapilla et al., a CNN on a GPU executes four times quicker than a comparable implementation on a CPU, and a CNN at IDSIA was already 60 times quicker and performed at superhuman levels. Between May 15, 2011, and September 10, 2012, CNNs won four image competitions, and they also significantly improved on the best performance for multiple image databases in the literature. AlexNet has eight layers: the first five are convolutional layers, some of them followed by max pooling layers, and the final three are fully connected layers, as illustrated in Fig. 8. It made use of the non-saturating ReLU activation function, which had superior training efficiency compared to tanh and sigmoid [11]. Here, it is trained to classify the type of liver disease from the input image.

Fig. 8 AlexNet
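For illustration, a minimal PyTorch-style sketch of this eight-layer architecture follows. The layer sizes match the standard AlexNet configuration and the two output classes correspond to the healthy and hemochromatosis liver; this is an assumption of ours and does not imply the authors' exact training setup:

import torch
import torch.nn as nn

class AlexNet(nn.Module):
    def __init__(self, num_classes=2):       # healthy vs. hemochromatosis
        super().__init__()
        self.features = nn.Sequential(        # five convolutional layers
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
        )
        self.classifier = nn.Sequential(      # three fully connected layers
            nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):                     # x: (N, 3, 227, 227) image batch
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = AlexNet()
logits = model(torch.randn(1, 3, 227, 227))   # one 227 x 227 x 3 image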


5 Result

Utilizing CNN, the liver has been categorized. Here, we provide the system with the gathered dataset as input. By minimizing undesirable distortions or boosting particular visual features that are crucial for later processing and analysis activities, pre-processing seeks to enhance the picture data. The four different types of image pre-processing techniques are listed below (a brief sketch of such operations follows the list):

• Brightness adjustments or changes of pixels
• Transformations in geometry
• Segmenting and filtering of images
• Image restoration and the Fourier transform
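A brief OpenCV sketch of such pre-processing operations follows; the file name and parameter values are placeholders of ours, not values from the study:

import cv2

img = cv2.imread("liver_ct.png")                       # hypothetical input CT image
bright = cv2.convertScaleAbs(img, alpha=1.2, beta=10)  # brightness/contrast adjustment
resized = cv2.resize(img, (227, 227))                  # geometric transformation (cf. Fig. 14)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)            # filtering
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # simple segmentation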

The reader is presumed to be familiar with the idea of a NN. ANNs exhibit extraordinary performance and are employed in a number of classification tasks involving words, audio, and visual input. A convolutional neural network is used to classify images, whereas RNNs (more specifically, LSTMs) are used to predict sequences of words [12]. This section outlines a CNN's fundamental building elements.

Input Layer: The input layer is the one where our model receives its input. There are exactly as many neurons in this layer as there are features in our data.

Hidden Layer: After the input layer, the hidden layer receives the input. There can be a large number of hidden layers depending on our model and the size of the data, and each hidden layer may have a different number of neurons, frequently comparable to the number of features. Each layer's output is derived by matrix-multiplying the previous layer's output by the layer's learnable weights, adding its learnable biases, and applying an activation function; the activation is what makes the network nonlinear.

Output Layer: The output layer then applies a logistic function, such as a sigmoid or softmax, to convert each class's output into a probability score for that class.

The output of each layer is determined after the model has been given the data; this action is known as "feeding forward." An error function, such as square loss or cross-entropy, is then used to calculate the error. We next compute the derivatives and feed them back into the model; backpropagation is used to decrease the loss overall. A minimal sketch of a neural network with two hidden layers and random inputs is given below.
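This sketch is our own (the original code is not reproduced in the source); the inputs, labels, layer sizes, and learning rate are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))                 # random inputs
y = rng.integers(0, 2, (100, 1)).astype(float)    # random binary labels

sizes = [8, 16, 16, 1]                            # input, two hidden layers, output
W = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # feed forward: store each layer's activations
    a = [X]
    for Wi, bi in zip(W, b):
        a.append(sigmoid(a[-1] @ Wi + bi))
    # backpropagate the square-loss error and update weights and biases
    delta = (a[-1] - y) * a[-1] * (1 - a[-1])
    for i in reversed(range(len(W))):
        grad_W = a[i].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if i:                                     # propagate before updating W[i]
            delta = (delta @ W[i].T) * a[i] * (1 - a[i])
        W[i] -= 0.1 * grad_W
        b[i] -= 0.1 * grad_b

print("final loss:", float(np.mean((a[-1] - y) ** 2)))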


Classification algorithms attempt to select the category of a new observation out of a set of categories based on the criteria of a labeled training set. Classification accuracy changes according to the task, physical anatomy, tissue preparation, and features. Here, CNN and AlexNet are used to distinguish between a healthy liver and a liver affected by hemochromatosis: AlexNet achieves 95% accuracy and CNN 90%. The training of images for AlexNet and CNN is shown in Figs. 9 and 10; from this training, AlexNet achieves more accuracy than CNN. The AlexNet input and output images of the hemochromatosis disease and healthy liver are shown in Figs. 11 and 12, respectively. The CNN original and resized liver images are shown in Figs. 13 and 14, respectively. In Fig. 15, the CNN classified image is displayed.

Fig. 9 Training of images for AlexNet

Fig. 10 Training of images for CNN

Fig. 11 AlexNet input image

Fig. 12 AlexNet output image

Fig. 13 CNN input image

Fig. 14 Resized image



Fig. 15 CNN classified image

6 Conclusion

A new approach is suggested for categorizing liver diseases. Deep learning's convolutional neural network (CNN) is the foundation of the entire methodology, and the diseased area is automatically delineated by the algorithm. The CNN algorithm's goal is to achieve better accuracy and produce better results; here, AlexNet achieves more accuracy than CNN. Our detected contours closely resemble the manually traced ones. The proposed system can be improved in the future with additional effort and thorough research. Additionally, the CNN-based diagnosis of liver illness can be extended to enhance dataset identification, and performance analysis can be produced by contrasting it with other classification techniques.

References 1. Thirunavukkarasu K, Singh AS, Irfan M, Chowdhury A (2018) Prediction of liver disease using classification algorithms. In: 4th international conference on computing communication and automation, pp 1–3 2. Sontakke S, Lohokare J, Dani R (2017) Diagnosis of liver diseases using machine learning. In: International conference on emerging trends & innovation in ICT, pp 129–133 3. Sivasangari A, Reddy BJK, Kiran A, Ajitha P (2020) Diagnosis of liver disease using machine learning models. In: Proceedings of the fourth international conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), pp 627–630 4. Ramalingam V, Pandian A, Ragavendran R (2018) Machine learning techniques on liver disease—a survey. Int J Eng Technol 493–495 5. Liu I, Setio AAA, Ghesu FC, Gibson E, Grbic S, Georgescu B, Comaniciu D (2018) Prognosis of liver disease: using machine learning algorithms. In: International conference on recent innovations in electrical, electronics & communication engineering, pp 875–879 6. Azer SA (2019) Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: a systematic review. World J Gastrointest Oncol 11:1218–1230 7. Wei W, Yang X (2021) Utility of convolutional neural network-based algorithm in medical images for liver fibrosis assessment. Chinese Med J 2255–2257 8. Manikandan T, Bharathi N (2016) Lung cancer detection by automatic region growing with morphological masking and neural network classifier. Asian J Inf Technol 15:4189–4194 9. Anand L, Neelanarayanan V (2019) Liver disease classification using deep learning algorithm. Int J Innov Technol Exp Eng 8:5105–5111 10. Othman E, Mahmoud M, Dhahri H, Abdulkader H, Mahmood A, Ibrahim M (2022) Automatic detection of liver cancer using hybrid pre-trained models. Sensor 1–20


11. Swapna M, Sharma YK, Prasadh BMG (2020) CNN architectures: Alex Net, Le Net, VGG, Google Net, Res Net. Int J Recent Technol Eng 8:953–959 12. Almotairi S, Kareem G, Aouf M, Almutairi B, Salem MA-M (2020) Liver tumor segmentation in CT scans using modified SegNet. Sensor 1–13

How to Quantify Software Quality Factors for Mobile Applications?: Proposed Criteria
Manish Mishra and Reena Dadhich

Abstract The quality and reliability of any software product are the most important issues, because branding depends directly upon the quality of the product. When one talks about software as a product, it is a big question how to ensure every dimension of the quality aspect. This issue becomes much bigger when moving from a desktop program to a mobile application, due to the inherent limitations that mobility imposes on mobile devices. Thus there is a requirement to help quality managers by customizing a suitable software quality model standard (e.g., ISO-9126, ISO-25010) to ensure software quality in mobile environments. This paper proposes criteria to quantify the quality of a mobile application.

Keywords Quantification · Software quality · Mobile application · Fuzzy logic · Questionnaire · Survey

1 Introduction

One cannot deploy an existing software quality model directly as a whole; some work in this direction is necessary, namely: first, to explore the new or modified sub-characteristics which arise from the inherent limitations of mobile devices (power issues, limited resources, connectivity, etc.), and then to modify or extend the existing software quality model with the help of fuzzy logic. Here, the aim is to design a mathematical model and implement it as a module that quantifies the overall quality of mobile software and shows better performance. It will also fulfill the very important demand of summing up the whole quality in quantitative terms so that the mobile application can be easily rated.

M. Mishra (B), Research Scholar, UOK, Kota, India, e-mail: [email protected]
R. Dadhich, Prof., Department of Computer Science and Informatics, UOK, Kota, India, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_27


There are the following problems with mobile application development:

1. Mobile has limitations due to mobility and limited resource utilization.
2. Mobile applications are very popular and they need high attention to quality.
3. Mobile users want everything just a touch away (i.e., usability).
4. Response time is very important, because a mobile user tends to switch to another application.
5. Which quality factors should be accepted or rejected to justify the overall quality perception for a mobile application?

Therefore there is a high need to formulate a suitable model for mobile applications by taking optimized quality factors reasonably, quantifying the overall quality of the mobile application with a fuzzy logic-based mathematical model, and analyzing the performance; this is the main objective of the proposed research work. There are several areas of mobile applications where quality is demanded in the form of a numeric value for justification purposes (e.g., to map between what the mobile application currently provides and what the user exactly needs), both during production and for the final product as a quality aspect or branding. For example, for a mobile game, different personnel have different needs: game testing is a process from the developer's view, while the game as a brand reflects the end-user's view. The same is true for e-commerce applications and mobile learning applications. All mobile applications demand not only the exploration of optimized quality factors but also quantification using a suitable mathematical model.

2 Background Work

Quality of software is defined per the IEEE 1061 standard as "the degree to which software possesses a desired combination of quality attributes". A quality factor is "a management-oriented attribute of software that contributes to its quality". Nowadays the quality aspect of mobile applications is a very important issue, and most researchers have given their opinion on fixing characteristics and sub-characteristics to achieve better quality. Exploration of suitable sub-characteristics and characteristics depends upon surveys and questionnaires. Nitze et al. [1] conducted a survey on the quality aspects of mobile applications with 144 participants; the paper discussed an online survey distributed to students and employees of two German universities and examined the different attributes of a high-quality mobile application. Wich et al. [2] proposed a unique approach to evaluating the usability of mobile business apps with the help of a questionnaire, following a design-oriented research approach and evaluating the design tool with the help of expert interviews. Some researchers have given their opinion on improving quality factors for mobile application development. Santos et al. [3] discussed the challenges and opportunities of agile testing for mobile application development; the main objective was


to identify and share the biggest challenges during the testing of a mobile application. Parsons et al. [4] extend ISO/IEC-9126 to design a conceptual framework for mobile learning and propose some metrics which help assess the quality of a mobile learning application. Panichella et al. [5] proposed a taxonomy that automatically assigns app reviews to categories with the help of NLP, text analysis, and sentiment analysis. Rajan et al. [6] proposed performance evaluation of an online mobile application through a framework, "Test My APP", which executes performance testing with response time as the metric, where response time depends upon application delay, hardware delay, and network delay. Marinho et al. [7] discuss the importance of software quality factors and compare them with respect to different existing quality models. Moumane et al. [8] explain the implementation of ISO/IEC-9126 in a mobile environment, describing the use of the external quality model of ISO-9126 in a mobile environment to help quality managers. Agarwal et al. [9] focus on how an organization puts effort into producing a quality software product within time; they study several factors and conclude that only one factor, software size, was significant in determining effort, cycle time, and quality. Maryoly et al. [10] describe the design of a quality model with a systemic approach to a software product. Slaughter et al. [11] evaluate the cost of software quality for producing and delivering a high-quality product on time and on budget. Dromey [12] explores all aspects of ISO/IEC-9126. Boehm et al. [13] established a well-defined framework that analyzes the characteristics of software quality. Now the big question is how the quality of software products can be quantified. Some researchers apply fuzzy logic, which helps in the quantification for desktop applications. Dubey et al. [14] propose software quality appraisal with the help of fuzzy logic, attempting to evaluate performance by introducing new quality attributes into ISO-9126. Dadhich et al. [15] propose a way to measure the reliability of aspect-oriented software using a fuzzy logic approach. Pasrija et al. [16] propose a method implementing fuzzy logic and the Choquet integral for the quantification of software quality. Srivastava et al. [17] provide a tool for quantification of software quality factors with the help of a fuzzy multi-criteria approach. Srivastava et al. [18] identify the metrics that help to provide a framework and measure software quality statistically. Jung et al. [19] published an article on the quantification of software product quality taking ISO/IEC-9126 as a base model.

3 Software Quality

Software quality is the degree to which a software application or system meets its intended purpose and user requirements. It is generally measured in terms of correctness, usability, maintainability, reliability, efficiency, and flexibility. Quality assurance and quality control are two important aspects of software quality.

1. Quality assurance refers to the process of verifying or determining whether a product or service meets certain quality standards. It is the process of ensuring


that all the components of a product or service are of a certain quality level. Quality assurance can be used to monitor any process, from product manufacturing to customer service. It is an ongoing process that begins at the design stage and continues throughout the entire production process, and it is an essential part of any organization's success, helping to ensure that customers receive a quality product or service.

2. Quality control (QC) can involve inspection, testing, and other methods of assessing a product's quality. Quality control is an important part of the production process, as it helps to ensure that products meet customer expectations and comply with applicable regulations; it also helps to reduce costs by identifying and addressing potential issues before they become problems. Quality control processes typically involve a combination of manual and automated methods, and may be performed by a dedicated team or by other members of the production team.

Poor software quality can lead to user dissatisfaction, reduced efficiency, and increased costs, and can directly impact customer satisfaction and revenue, so software quality is a critical factor in the success of any software product. Quality assurance and quality control processes are essential for ensuring that software meets its requirements and is of a high standard, and software developers and engineers can employ a variety of techniques to improve the quality of their software.

Three views of the mobile application which affect software quality will be considered:

1. Developer view: The developer view of a mobile application is focused on the process of creating and maintaining the application. Developers will design and code the features and architecture of the application, test for errors and compatibility, and provide updates and bug fixes.

2. Tester view: The tester view of a mobile application focuses on testing the application for both functional and non-functional requirements. Testers will execute manual and automated tests to validate the application's performance, scalability, usability, and security.

3. End-user view: The end-user view of a mobile application is focused on using the application for its intended purpose. End-users will interact with the application to complete tasks and access features, and should be able to provide feedback if the application does not meet their expectations.


4 Software Quality Factors

Software quality factors are based on the customer's requirements and expectations, and include such things as usability, reliability, maintainability, performance, scalability, compatibility, security, and portability. The following software quality factors reflect the developer, tester, and end-user views:

1. Functionality: The software should provide the functionality that the user expects.
2. Reliability: The software should be reliable and perform consistently without errors or crashes.
3. Performance: The software should be fast and responsive.
4. Security: The software should be secure and protect user data.
5. Compatibility: The software should be compatible with different operating systems, browsers, and devices.
6. Scalability: The software should scale with an increasing number of users.
7. Documentation: The software should have comprehensive documentation for users and developers.
8. Testability: The software should be testable and able to be tested efficiently.
9. Accessibility: The software should be accessible to users with disabilities.
10. Localization: The software should support multiple languages and locales.
11. Support: The software should have good customer and technical support.
12. Extensibility: The software should be able to be extended with new features and functionality.
13. Interoperability: The software should interact with other software and services.
14. Efficiency: The software should use resources efficiently and minimize the impact on the environment.
15. Quality Assurance: The software should go through a thorough quality assurance process.
16. Compliance: The software should comply with applicable regulations and standards.
17. Cost-Effectiveness: The software should be cost-effective to develop, maintain, and use.
18. Flexibility: The software should be flexible and able to adapt to changing requirements.
19. Quality of Service: The software should provide a high level of service with minimal downtime.
20. Availability: The software should be available to users when needed.
21. User Experience: The software should provide a good user experience.
22. Customizability: The software should be customizable to meet user needs.


23. Automation: The software should be able to be automated to reduce manual errors.
24. Traceability: The software should have traceability so that changes can be tracked and monitored.
25. Version Control: The software should have version control so that changes can be tracked and reverted if necessary.
26. Monitoring: The software should have monitoring to detect and alert on any issues.
27. Backup and Recovery: The software should have backup and recovery capabilities in case of an unforeseen incident.
28. Disaster Recovery: The software should have disaster recovery capabilities in case of a major disaster.
29. Compliance: The software should be compliant with applicable laws and regulations.
30. Privacy and Security: The software should have appropriate privacy and security measures in place.
31. DevOps: The software should be designed with DevOps principles in mind.
32. Code Quality: The software should have high-quality code that is easy to maintain.
33. Architecture: The software should have a well-designed and scalable architecture.
34. Automated Testing: The software should have automated tests to ensure that changes don't break existing features.
35. Continuous Integration: The software should use continuous integration and delivery to ensure that changes are tested and deployed quickly.
36. Configuration Management: The software should have a configuration management system to ensure that all components are in sync.
37. Auditability: The software should have auditability so that changes can be tracked and traced.
38. User Management: The software should have appropriate user management.
39. Monitoring: The software should have monitoring to detect and alert on any issues.
40. Logging: The software should have appropriate logging so that issues can be identified and investigated.

Overall, the quality of a mobile application is dependent on the views of all of these stakeholders.


5 ISO/IEC-9126

ISO-9126 is an international standard for software product quality that is used to assess software products against a set of quality characteristics. It provides a framework for assessing the quality of software products and services in terms of six characteristics, namely functionality, reliability, usability, efficiency, maintainability, and portability (Table 1). The standard is intended to help software producers measure and improve the quality of their products and services. ISO-9126 can be used by software producers, software buyers, software users, and others to assess the quality of a software product. It is a useful tool for vendors to demonstrate their software quality and for customers to make informed purchasing decisions, and it can also serve as a reference for software quality assurance and testing activities. ISO-9126 provides a systematic model for software quality assessment that helps organizations identify and measure the key aspects of software quality, evaluate their software products against customer requirements, measure the effectiveness of their software development process, and identify areas where improvement is needed.

6 ISO/IEC-25010

ISO/IEC-25010 is an international standard for software product quality. It provides a common set of definitions for describing the quality of a software product, a framework for quantifying that quality, and measures for the software product development process. The standard provides guidance on how to assess the quality of software products and processes, as well as how to identify areas for improvement; it is intended to help organizations improve the quality of their software products and identify gaps in their processes. ISO/IEC-25010 is an important standard because it provides a common set of definitions and measurements that can be used to compare different software products and processes. With such a standard set of definitions and measurements, software developers, testers, and customers can better understand each other's expectations and goals, and organizations can measure their progress and identify areas for improvement. ISO-25010 is a valuable tool for the software industry, as it helps to ensure that high-quality software is created, which leads to satisfied customers. It also serves as a reference for quality assurance professionals to ensure that software products meet customer expectations.

Table 1 Characteristics and sub-characteristics of ISO/IEC-9126

Characteristic        Sub-characteristics
1. Functionality      Suitability; Accuracy; Interoperability; Functionality Compliance; Security
2. Efficiency         Time utilization; Resource utilization; Efficiency Compliance
3. Portability        Replaceability; Adaptability; Installability; Co-existence; Portability Compliance
4. Maintainability    Analyzability; Changeability; Testability; Stability; Maintainability Compliance
5. Usability          Understandability; Learnability; Operability; Attractiveness; Usability Compliance
6. Reliability        Maturity; Fault Tolerance; Recoverability; Reliability Compliance

ISO-25010 is an important part of the broader ISO/IEC-25000 family of standards, which provides a comprehensive system for the management of software quality. ISO-25010 is applicable to software in any form, for any organization, and for any application, and to all phases of software development, from initial requirements definition to final product delivery. The standard also helps to reduce the cost and time associated with software development, as it ensures that software products are built to the highest standards (Table 2). Overall, ISO/IEC-25010 is an important standard for improving the quality of software products and processes: it provides a common set of definitions, measurements, and guidance for assessing the quality of software products and processes, and for improving software development processes.

Table 2 Characteristics and sub-characteristics of ISO/IEC-25010

Characteristic        Sub-characteristics
1. Functionality      Interoperability; Capacity; Compliance; Accessibility; Accuracy; Audit; Recoverability
2. Reliability        Maturity; Fault Tolerance; Availability
3. Usability          Learnability; Operability; User Error Protection; Attractiveness
4. Efficiency         Time Behavior; Resource Utilization
5. Maintainability    Analyzability; Changeability; Stability; Testability
6. Portability        Adaptability; Installability; Replaceability
7. Security           Confidentiality; Integrity; Non-repudiation; Authentication
8. Compatibility      Co-existence; Interchangeability; Usability Compliance; Co-operation

In summary, ISO-9126 is more focused on software quality evaluation and provides a framework of metrics and evaluation methods, while ISO-25010 is more focused on software product quality, providing a comprehensive framework and more detailed metrics and evaluation methods.


7 Quantification and Fuzzy Logic

Quantification is the process of assigning numerical values to a given set of objects in order to measure, compare, and analyze them. The numerical values can be used to determine the size, frequency, or intensity of a given set of objects; quantification can also be used to assign weights or scores to objects, or to measure the relative importance of objects in an overall system. Quantification and fuzzy logic are related in the sense that fuzzy logic is used to assign numerical values to objects that may not have a fixed and exact value. This allows for more precise analysis of a given set of objects, as well as the ability to assign weights or scores to objects that may not have a fixed value.

7.1 Multi-Criteria Decision Approach

A multi-criteria decision approach is a decision-making process that evaluates multiple criteria to make a decision. It uses a systematic process to identify and prioritize criteria, evaluate options, and select the solution that best meets the criteria; the criteria are analyzed and assessed using a set of decision rules to determine the best option. This approach is used to help make complex decisions where there are multiple stakeholders with different perspectives and interests. Multi-criteria decision approaches are used in many different industries, such as engineering, economics, finance, health care, and environmental science, as well as in government, where they are used to evaluate policies and programs. They are an effective way to compare and evaluate different options and can be used to make decisions that are both rational and equitable. For example, a company may need to choose between two software packages for its internal systems. The decision-making process could involve multiple criteria such as cost, features, ease of use, scalability, and customer support; each criterion could be assigned a weighting, or importance, and the decision could then be made based on which package best meets the weighted criteria. In this example, the multi-criteria decision approach provides a comprehensive, objective way for the company to make the best choice for its needs, and the same approach can be used in other scenarios, such as selecting candidates for a job or choosing a location for a new business.
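A minimal Python sketch of this weighted-criteria evaluation for the two-package example follows; the weights and ratings are illustrative assumptions of ours:

criteria = ["cost", "features", "ease of use", "scalability", "support"]
weights = [0.30, 0.25, 0.20, 0.15, 0.10]   # assumed relative importance (sums to 1)

scores = {                                  # assumed 1-5 ratings per package
    "Package A": [4, 3, 5, 3, 4],
    "Package B": [3, 5, 3, 4, 3],
}

# The package with the highest weighted sum best meets the criteria.
for name, s in scores.items():
    total = sum(w * x for w, x in zip(weights, s))
    print(f"{name}: weighted score = {total:.2f}")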


7.2 Fuzzy Multi-Criteria Approach

The fuzzy multi-criteria approach is a decision-making tool that combines fuzzy set theory with multiple-criteria decision-making techniques. It allows decision makers to consider multiple criteria simultaneously, to assign weights to each criterion according to its relative importance, and to account for uncertainty in the data used to make decisions. This makes it an effective tool for decision-making in complex situations where multiple criteria and uncertain information must be considered, in a variety of settings including business, engineering, and healthcare.
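A minimal Python sketch (our own, with assumed values) of a fuzzy multi-criteria evaluation using triangular fuzzy numbers and centroid defuzzification:

# Each rating is a triangular fuzzy number (low, mode, high) expressing the
# decision maker's uncertainty about a criterion score on a 1-5 scale.
ratings = [(3, 4, 5), (2, 3, 4), (4, 5, 5)]   # assumed fuzzy scores per criterion
weights = [0.5, 0.3, 0.2]                      # assumed criterion weights

# Weighted sum of triangular fuzzy numbers: scale each component by its
# (positive) weight and add element-wise.
agg = [sum(w * t[i] for w, t in zip(weights, ratings)) for i in range(3)]

# Centroid defuzzification of a triangular number (a, b, c) is (a + b + c) / 3.
crisp = sum(agg) / 3
print("aggregated fuzzy score:", agg, "-> crisp quality score:", round(crisp, 2))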

8 Proposed Methodology

This paper proposes criteria, as the following steps, to quantify the quality of mobile applications based on three different views (developer, tester, and end-user), taking ISO/IEC-9126 as the baseline model, extending the existing quality model by adding some additional sub-characteristics, and evaluating the software quality. Three views of the mobile application will be considered in this paper:

1. Developer view
2. Tester view
3. End-user view.

The software quality of a mobile application depends upon these three views, and we logically associate optimized characteristics (with respect to ISO/IEC-9126) with these views.


8.1 Flow Graph of Methodology

Customize the existing quality model → Extend the quality model with new sub-characteristics of the mobile application → Design the questionnaire → Conduct the survey → Find fuzzy rate and fuzzy weight → Convert fuzzy parameters to crisp parameters → Find the resultant crisp numeric value

8.2 Procedure

Step 1: Customize the existing quality model. This is a very important step, where one maps an appropriate quality model onto the mobile application. In this step, review the sub-characteristics and characteristics and justify their presence.


Step 2: Extend the quality model with new sub-characteristics of the mobile application. Once the suitable quality model is frozen, we can extend it according to the inherent features of mobile applications; for example, we can add sub-characteristics of fun and/or challenge for a mobile game application.

Step 3: Design the questionnaire. Having prepared the theoretical model, design a questionnaire according to the developer view, tester view, and end-user view, based on the model, with a 1-to-5 rating scale.

Step 4: Conduct the survey. Survey the developers and testers, including feedback from the testers that will help the developers. Normally the end-user population will be larger than these two groups, and some end-users may not take the survey seriously; we therefore include a "don't know" option in addition to the 1-to-5 rating, remain aware of false inputs, and apply a proper imputation method (e.g., Expectation Maximization) to handle incomplete data.

Step 5: Find fuzzy rate and fuzzy weight. Convert the crisp ratings into fuzzy ratings and fuzzy weights, calculate the net quality fuzzy parameter for each sub-characteristic, find the coefficient of correlation between sub-characteristics that belong to a particular characteristic, and group all the sub-characteristics with respect to suitable characteristics.

Step 6: Convert fuzzy parameters to crisp parameters. Find the resultant crisp quality parameter from the fuzzy parameter. We now have three crisp quality values according to the different views.

Step 7: Find the resultant crisp numeric value. Aggregate all three crisp quality values into the resultant crisp quality value with a suitable aggregation function (e.g., the Choquet integral), as sketched below.
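As an illustration of Step 7, the following Python sketch aggregates three assumed crisp view scores with a discrete Choquet integral; the fuzzy measure values are placeholders of ours and would in practice be elicited from experts:

def choquet(scores, mu):
    """Discrete Choquet integral: sort criteria by score ascending and
    weight each score increment by the measure of the remaining coalition."""
    items = sorted(scores.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for name, value in items:
        total += (value - prev) * mu[frozenset(remaining)]
        prev = value
        remaining.remove(name)
    return total

# Crisp quality values per view (assumed, on a 0-1 scale).
scores = {"developer": 0.70, "tester": 0.80, "end-user": 0.60}

# Fuzzy measure on coalitions of views (assumed; must be monotone,
# with mu(all views) = 1).
mu = {
    frozenset(["developer", "tester", "end-user"]): 1.00,
    frozenset(["developer", "tester"]): 0.70,
    frozenset(["developer", "end-user"]): 0.80,
    frozenset(["tester", "end-user"]): 0.75,
    frozenset(["developer"]): 0.40,
    frozenset(["tester"]): 0.45,
    frozenset(["end-user"]): 0.50,
}

print("overall quality:", round(choquet(scores, mu), 3))   # 0.715 for these values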

9 Conclusions and Future Scope

Quantification of quality aspects for mobile applications makes a big contribution to the mobile software industry, and to a society where almost everyone uses a smartphone. While a number of techniques are available for quantifying the quality aspects of desktop applications, the proposed research, as per the current study, provides a novel idea for quantifying software quality factors for mobile applications as well. With the help of the proposed research results, mobile users as well as developers will benefit by being able to:


1. Demand quality in the form of a numeric value for justification purposes (e.g., to map between what the mobile application currently provides and what the user exactly needs), both during production and for the final product as a quality aspect or branding.
2. Explore optimized quality factors and quantify them by taking a suitable mathematical model.

References 1. Nitze A, Schmietendorf A (2015) A survey on mobile users’ software quality perceptions and expectations. In: Software testing, verification and validation workshops (ICSTW), IEEE eighth international conference, 13–17 April 2015, pp 1–2 2. Wich M, Kramer T (2015) Enhanced human-computer interaction for business applications on mobile devices: a design-oriented development of a usability evaluation questionnaire. In: System sciences (HICSS), 48th Hawaii international conference, 5–8 Jan 2015, pp 472–481 3. Santos A, Correia I (2015) Mobile testing in software industry using agile: challenges and opportunities. In: Software testing, verification and validation (ICST), IEEE 8th international conference, 13–17 Apr 2015, pp 1–2. 4. David P, Hokyoung R. A framework for assessing the quality of mobile learning 5. Panichella S, Di Sorbo A, Guzman E, Visaggio CA, Canfora G, Gall HC (2015) How can i improve my app? Classifying user reviews for software maintenance and evolution. In: Software maintenance and evolution (ICSME), 2015 IEEE international conference, 1 Oct 2015, pp 281–290 6. Sundara Rajan VS, Malini A, Sundarakantham K (2014) Performance evaluation of online mobile application using Test My App. In: Advanced communication control and computing technologies (ICACCCT), international conference, 8–10 May 2014, pp 1148–1152 7. Euler H, Rodolfo F (2012) Quality factors in development best practices for mobile applications. In: Proceedings of 12th international conference. Springer2, part IV, pp 632–645 8. Ali I, Karima M, Alain A (2013) On the use of software quality standard ISO/IEC 9126 in mobile environments. In: IEEE 20th Asia-pacific software engineering conference, Sep 2013 9. Agarwal M, Chari K (2007) Software effort, quality, and cycle time: a study of CMM Level 5 projects. IEEE Trans Software Eng 33(3):145–156 10. Maryoly O, Perez MA, Rojas T (2003) Construction of a systemic quality model for evaluating software product. Software Qual J 11(3):219–242 11. Slaughter SA, Harter DE, Krishnan MS (1998) Evaluating the cost of software quality. Commun ACM 41(8):67–73 12. Dromey RG (1995) A model for software product quality. IEEE Trans Software Eng 21(2):146– 162 13. Boehm BW, Brown JR, Lipow ML (1976) Quantitative evaluation of software quality. In: Proceedings of the 2nd international conference on software engineering, San Francisco, CA, USA, Oct 1976, pp 592–605 14. Sanjay D, Disha S (2015) Software quality appraisal using multi-criteria decision approach. IJIEEB 2 15. Reena D, Bhavesh M (2012), Measuring reliability of an aspect oriented software using fuzzy logic approach, IJEAT 1:233–237 16. Vatesh P, Sanjay K, Praveen R (2012) Assessment of software quality: choquet integral approach, Oct 2012, vol 6. ICCCS. ELSEVIER, pp 153–16


17. Srivastava PR, Singh AP, Vageesh KV (2010) Assessment of software quality: a fuzzy multi-criteria approach. Evolution of computation and optimization algorithms in software engineering: applications and techniques. IGI Global USA, Chapter-11, pp 200–219 18. Srivastava PR, Kumar K (2009) An approach towards software quality assessment. Commun Comput Inf Syst Ser (CCIS Springer Verlag) 31(6):345–346 19. Ho-Won J, Seung-Gweon K, Chang-Shin C (2004) Measuring software product quality: a survey of ISO/IEC 9126. An Article Published by IEEE Computer Society

Dysgraphia Detection Using Machine Learning-Based Techniques: A Survey
Basant Agarwal, Sonal Jain, Priyal Bansal, Sanatan Shrivastava, and Navyug Mohan

Abstract Dysgraphia is a handwriting disorder which affects the writing abilities of an individual. Currently there is no cure for this disorder, and even its diagnosis is difficult. This paper summarises the current research on the detection of dysgraphia through machine learning. Researchers around the globe have studied this topic and proposed different solutions. The types of dysgraphia and their symptoms are also discussed in this paper. Different accuracies were achieved using a variety of machine learning algorithms such as Support Vector Machine, AdaBoost, K-Means Clustering, Random Forest, Convolutional Neural Networks, etc. Another aim of this paper is to spread awareness about the problem of dysgraphia and its implications for society.

Keywords Dysgraphia · Image processing · Machine learning · Writing difficulty

B. Agarwal (B), Department of Computer Science and Engineering, Central University of Rajasthan, Ajmer, India, e-mail: [email protected]
S. Jain, PG Department of Computer Science and Technology, Sardar Patel University, Vallabh Vidhyanagar, Anand, India, e-mail: [email protected]
P. Bansal · S. Shrivastava · N. Mohan, Department of Computer Science and Engineering, Indian Institute of Information Technology Kota, Kota, India, e-mail: [email protected]
S. Shrivastava, e-mail: [email protected]
N. Mohan, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_28



1 Introduction

Dysgraphia is a combination of two Greek terms: 'dys', meaning disability, and 'graphia', meaning handwritten letter [17]. Dysgraphia is a disorder which causes writing disability, mainly impacting handwriting, typing, and spelling skills [16]. The term 'dysgraphia' has been used to refer both to motor coordination problems and to the persistent difficulty people experience in expressing views using handwritten text. It reflects weaknesses in literacy and language conventions, including grammar, punctuation, and spelling. Figure 1 [20] shows the percentage of children suffering from dysgraphia in different age groups. The primary objective of this paper is to discuss the ongoing research and development towards detecting dysgraphia using machine learning techniques, with a person's handwriting given as input. The paper also discusses the importance of detecting this disorder at an early stage or an early age of the child, and its effects on a child's complete growth and development from the physical, physiological, and psychological standpoint. Some qualitative methods of identifying the disorder are also discussed briefly. Students suffering from dysgraphia are often accused of a poor attitude towards study because they do not have neat handwriting. The school environment for children suffering from dysgraphia can have an enormous impact on the child's normal development and academic achievements. By diagnosing early, we can help the child improve their writing over time to a certain extent; early diagnosis also helps teachers adapt their teaching style after understanding the condition and the special needs of their children. Dysgraphia is not tied to Intelligence Quotient (IQ), so students suffering from dysgraphia can perform well if they receive the necessary accommodations and assistance in time. However, most people are unaware of this issue,

Fig. 1 Percentage of children with learning disabilities based on age [20]


and even if it comes to light, the process of detection can be very time-consuming and hectic. This inefficiency of detection often wastes critical years of learning. The traditional methods for the detection of dysgraphia are discussed in further sections of this paper.

2 Types and Symptoms of Dysgraphia

Dysgraphia is an issue that manifests itself in poor handwriting skills among youngsters of at least average intellect who have not been diagnosed with any specific neurological issues [22]. Dysgraphia is a common learning difficulty, often recognised by poor writing abilities, which may or may not be determined through an academic progress report. Neurobiological and psychological disabilities contribute indirectly to the dysgraphia symptoms prevailing in a learner's academic progress. Dysgraphia often culminates in an issue with a set of skills known as transcription, and it is characterised by poor handwriting. Typically, an individual suffering from dysgraphia exhibits a wide range of symptoms, some of which are listed below [16]:

• Difficulty writing letters down and properly spacing the formed characters on the paper.
• Problems writing words/phrases/sentences in a straight line.
• Trouble producing letters of the right size and working with both hands, i.e. holding the paper with one hand while writing with the other.
• Difficulty controlling and holding a pencil/pen or any other writing equipment.
• Difficulty maintaining the right arm position and posture for writing.
• As a longer-term effect of dysgraphia, trouble forming letters can make learning spelling difficult.

Dysgraphia is mainly categorised into the following subcategories [2]: Motor Dysgraphia, Dyslexic Dysgraphia, and Spatial Dysgraphia (Figs. 2 and 3). The types and their respective symptoms are summarised in Table 1.

Fig. 2 Handwriting sample with no disability [21]


Fig. 3 Handwriting of a person with dysgraphia [21]

Table 1 Types of Dysgraphia and their symptoms

Type | Symptoms
Dyslexic Dysgraphia | Moderate handwriting, poor spelling, normal writing speed, i.e. no problem with motor skills
Motor Dysgraphia | Bad handwriting, correct spelling, slow writing due to muscle weakness and neurobiological deficiency
Spatial Dysgraphia | Bad handwriting due to inconsistent spacing, correct spelling, normal writing speed

3 Detection of Dysgraphia

The current techniques of detecting Dysgraphia are summarised in Fig. 4. As shown, there are two approaches: diagnosis by doctors/experts and detection by an automated system.

3.1 By Doctors/Experts

A family doctor or paediatrician who knows the medical history, a professional therapist, and a psychologist are all involved in the traditional technique of diagnosing Dysgraphia. Other conditions that could cause writing difficulties need to be ruled out by the panel. Dysgraphia can then be diagnosed by a psychologist who specialises in learning problems, using academic examinations, fine motor skill tests, IQ testing, and writing tests. During the testing, the doctors observe aspects such as the person's hand and body position, pencil grip, and other writing processes. They then look for symptoms of Dysgraphia in the final sample they have obtained. One of the key requirements for taking this situation into consideration


Fig. 4 Generalised machine learning approach for handwriting analysis

is that the symptoms have been present for at least 6 months and that suitable therapies have been implemented. Traditional diagnosis procedures have a number of drawbacks. The 'BHK test' (The Concise Assessment Scale for Children's Handwriting) is extensively used to diagnose Dysgraphia in France [9]. Health insurance providers recognise this test and cover the costs of both diagnosis and therapy. The authors in [18] note that in the BHK test many features, such as letter form and size, are scored and a report is generated, which is a time-consuming process. These issues relate to the time it takes to score the tests, evaluator variability, and, most importantly, the window of often 6 months or more between early concerns about a child's handwriting and the opportunity to speak with an expert. This is crucial since it can eventually lead to wrong evaluation. Asselborn et al. [14] proposed a data-driven approach to address this issue.

3.2 Automated System

This method is based on machine learning and image processing. Different handwriting features are extracted, and various ML algorithms are used for the prediction.

3.2.1 Feature Extraction

Researchers have used both digital and hardware-captured handwriting features in the development of their prediction models to detect Dysgraphia. Baseline angle [5, 9], slant angle [1, 5], spelling errors [2], average letter size [1, 5], average line spacing [1], writing speed [2, 3, 7, 8, 10], average pen pressure, time, pen grip, and acceleration are the most common features utilised so far.
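To make the extraction step concrete, the following Python sketch estimates three of these features (average letter size, page skew/baseline angle, and average line spacing) from a scanned sample using OpenCV and NumPy. The file name, the thresholds and the use of minAreaRect for the angle are illustrative assumptions, not the pipeline of any cited study.

import cv2
import numpy as np

# Hypothetical scanned handwriting sample, loaded in grayscale.
img = cv2.imread("handwriting_sample.png", cv2.IMREAD_GRAYSCALE)

# Otsu binarisation: ink becomes white (255) on a black background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Average letter size from connected-component bounding-box heights.
_, _, stats, _ = cv2.connectedComponentsWithStats(binary)
heights = stats[1:, cv2.CC_STAT_HEIGHT]                 # skip the background
letters = heights[(heights > 5) & (heights < 200)]      # drop specks and blots
avg_letter_size = float(letters.mean()) if letters.size else 0.0

# Approximate page skew/baseline angle from the dominant ink orientation.
pts = np.column_stack(np.nonzero(binary)[::-1]).astype(np.float32)  # (x, y)
(_, _), (_, _), angle = cv2.minAreaRect(pts)

# Average line spacing from gaps in the horizontal projection profile.
rows_with_ink = np.where(binary.sum(axis=1) > 0)[0]
gaps = np.diff(rows_with_ink)
line_gaps = gaps[gaps > 1]                              # gaps > 1 px separate text lines
avg_line_spacing = float(line_gaps.mean()) if line_gaps.size else 0.0

print(f"letter size: {avg_letter_size:.1f}px, skew: {angle:.1f} deg, "
      f"line spacing: {avg_line_spacing:.1f}px")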

4 Data Collections

Data is one of the most important components in teaching a machine learning model to predict. The authors in [1] collected data from a group of children having Dysgraphia: they made them write a paragraph on paper and took photos of each paper with a mobile phone camera. Further, the authors in [2] analysed the handwritten content of 20 learners with Dysgraphia aged 4 to 14, of which 12 were male and 8 female; 6 learners were from rural areas and 14 from urban areas, and 4 learners had no prior experience of working on computers. Children with emotional disorders and other disabilities were not considered for the evaluation. In 2018 and 2019, qualified experts at the Centre for Special-Needs Education collected data from children with dysgraphia as part of a regular assessment, while qualified professionals collected data from children without dysgraphia at their elementary school. The authors in [5] used a dataset of 1481 free handwritten text images, each containing at least five lines of text. Their collection contains 198 images from patients with dysgraphia, accounting for around 13% of the total. A human personally recorded the 9 features for each image in the dataset; this person was unaware of the condition connected with each image in order to eliminate bias. The authors in [6] employed two groups of hand writers (proficient and dysgraphic). The Teachers' Questionnaire for Handwriting Proficiency [24] and the Hebrew Handwriting Evaluation (HHE) [23] were used to identify poor hand writers. With the advancements in digital technology, graphologists are now using digital tools like tablet devices to capture 'online' handwriting data. Various classification and clustering techniques have been used to find the features best suited to the analysis of the targeted disorder [8].

5 Related Study

This section reviews the use of machine learning, image processing and other technological tools to detect Dysgraphia. The authors in [1] discussed a model which can detect the presence of dysgraphia in a child and help parents take necessary action on time.


They used an SVM machine learning model trained on features such as letter size, slant of letters, spacing, and pressure extracted from an image of the child's writing. Given a text, characteristics and attributes such as congestion, fragmentation, slant, and shakiness are extracted for the further processing that leads to the detection of Dysgraphia. Further, the authors in [2] found that a handwriting evaluation framework based on machine learning and image processing (for feature extraction) is a better approach for handwriting analysis for Dysgraphia. They used a handwriting evaluation framework based on a pre-trained machine learning model (OCR) and image processing; it analyses the handwritten documents and classifies the learner profile as a result. The study concluded that such a framework is a better approach for Dysgraphia profiling. The authors in [5] applied a machine learning approach based on features such as slant, pressure, amplitude, letter spacing, word spacing, slant regularity, size regularity, and horizontal regularity. These features were extracted using image processing or manually, gave fairly high accuracy, and achieved 90% using Random Forest. According to the findings of the study, Dysgraphia can be predicted with satisfactory accuracy from a simple analysis of a handwritten text. Some researchers used hardware-based tools to collect real-time handwriting features. Some have criticised this approach of collecting handwriting data, as students do not feel comfortable and might not write as naturally on a screen as they do on paper. The authors in [6] examined the contribution of each approach to the detection and characterisation of bad handwriting using objective, digitiser-based data as a supplement to traditional, subjective handwriting evaluation. The subjective criteria used in that study (global legibility, letters erased or overwritten, unrecognisable letters, and spatial arrangement) were found effective in distinguishing between dysgraphic and proficient handwriting, with in-air time being the best discriminator among the objective measures. The authors in [8] recorded data using a Wacom digital tablet device. The authors in [9] showed that including movement dynamics increases accuracy and drastically reduces the quantity of information required to identify children with Dysgraphia; they demonstrated that using the dynamic information provided by tablets in their digital exam to differentiate between normal and dysgraphic youngsters is quite effective. The authors in [13] developed a tablet and used a Play Draw Write method to detect symptoms of Dysgraphia. Further, the authors in [19] used Wacom Intuos graphic tablets for handwriting acquisition from 509 children. The authors in [15] used demographic features for dysgraphia detection, such as age, gender, and laterality (left- or right-handedness); features like these are therefore also included. Table 2 summarises the available literature on attempts to address dysgraphia using machine learning techniques.
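To make the classification step in these studies concrete, the sketch below trains an SVM of the kind used in [1] on a table of pre-extracted handwriting features. The feature values are randomly generated stand-ins (sized like the 1481-image collection of [5], roughly 13% positive), so the reported metrics are meaningless; the point is the stratified split and class weighting used to handle the imbalance.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in feature matrix: slant, pressure, letter size, word spacing, ...
n_samples, n_features = 1481, 9
X = rng.normal(size=(n_samples, n_features))
y = (rng.random(n_samples) < 0.13).astype(int)   # ~13% dysgraphic, as in [5]

# A stratified split keeps the 13% positive rate in both folds.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Scaling matters for SVMs; class_weight offsets the imbalance.
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", class_weight="balanced"))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))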


Table 2 Summary of literature related to detection of dysgraphia using machine learning

S. no | Author | Features used | Algorithm used | Accuracy | Dataset
1 | Rahul Budha et al. [1] | Slant of letter, pressure, average letter size, spacing, thinking time, pen grip | SVM | – | Children having dysgraphia made to write a paragraph on paper
2 | Sarthika Dutt et al. [2] | Handwriting, spelling mistakes, writing speed | SSIM, tesseract, dictionary lookup | 70% | –
3 | Peter Drotár et al. [3] | Velocity, acceleration, pressure, length of segment, altitude, pen lifts, width/height of segment, duration | SVM, AdaBoost classifier, random forest | 72.50%, 79.50%, 72.3% | Handwritten content of 120 learners with age groups 8 to 15
4 | Katie Spoon et al. [4] | Demographics data along with handwriting | Convolutional neural network (CNN) | 55.7 ± 1.4% | Handwritten content of approx. 90 learners
5 | Gilles Richard et al. [5] | Slant, pressure, amplitude, letter spacing, word spacing, slant regularity, size regularity, horizontal regularity | Random forest, naive Bayes, logistic regression | 96.2%, 90.8%, 95.60% | 1481 pictures of free handwritten text, each of 5 lines, of which 13% were diagnosed with dysgraphia
6 | Sara Rosenblum et al. [6] | Erased and overwritten letters, spatial measures, in-air time and on-paper time | MANOVA | – | 120 learners with age groups 8 to 10
7 | Maitrei Kohli et al. [7] | Slower writing speed, irregularly formed letters, use of inappropriate words when writing | Artificial neural networks (ANNs) | 75% | Handwritten content of 20 learners with age groups of 4 to 14

(continued)


Table 2 (continued)

S. no | Author | Features used | Algorithm used | Accuracy | Dataset
8 | Seema Kedar et al. [8] | Temporal and spatial measures, time, width, height, orientation and length of segment, spacing, pressure, speed, acceleration, jerk, spectral density, deletions | ANOVA, SVM, decision tree, KNN, K-Means | 96% | Handwritten content of 27 learners with dysgraphia
9 | Konrad Zolna et al. [9] | Letter formation, size of letters, spacing between words, slant features | Recurrent neural network model (RNN) | 90% | Handwritten samples of children taken on a Wacom tablet
10 | Thibault Asselborn et al. [10] | Speed, pressure, tilt, space, density, size, tremor | BHK test, random forest | 96% | 298 youngsters were studied, 56 of them had dysgraphia
11 | Dimas Adi et al. [11] | Thin writing, slow writing, uphill or downhill writing, etc. | Expert system (decision tree) | 94.71% | –
12 | Ruchira Kariyawasam et al. [12] | Correct letter words, no. of attempts, time taken, erase count | SVM, neural network | 90%, 85% | –
13 | Mekyska et al. [24] | Ten kinematic features, 34 non-linear dynamic features, and 7 other features; digitised tablet | Random forest, linear discriminant analysis | – | 54 students (27 normal and 27 dysgraphic); digitised tablet COMPET was used to collect the data
14 | Deschamps et al. [25] | Spatial, kinematic, and dynamic features | Support vector machines | 91% sensitivity | 580 students with 128 students having dysgraphia; used tablets to collect data

(continued)


Table 2 (continued)

S. no | Author | Features used | Algorithm used | Accuracy | Dataset
15 | Dankovicova [26] | Spatiotemporal, dynamic, kinematic features | Support vector machines | 75.5% sensitivity | 72 students, 36% dysgraphic
16 | Rosenblum [27] | Spatiotemporal, dynamic, kinematic features | Support vector machines | 90% accuracy | 90 students with 49 dysgraphic students; tablet was used to collect data
17 | Mekyska [28] | Spatial, temporal, kinematic, dynamic features; included drawing activities | XG-Boost | 90% specificity | 76 students with 15 dysgraphic
18 | Kedar et al. [29] | Spatiotemporal, dynamic and kinematic features | Random forest | 92.85% accuracy | 60 students
19 | Dui [30] | – | BHK test | – | 509 children
20 | Richard [32] | Slant, pressure, amplitude, letter spacing, word spacing, slant regularity, size regularity, horizontal regularity (manual extraction) | Random forest | 96.2% accuracy | 1400 handwritten images written on four lines
21 | Dutt et al. [33] | Sentence structure, word formation, visual-spatial response | Random forest | 99% accuracy | 240 students with 140 having some sort of disability and 45 having dysgraphia

(continued)


Table 2 (continued)

S. no | Author | Features used | Algorithm used | Accuracy | Dataset
22 | Hewapathirana [34] | Correctness of written number, erase count, time taken | CNN for generating a confidence score of a number or word as an image; SVM and random forest for classification of dysgraphia | 99% accuracy | –
23 | Zvoncak [35] | Common spatial, kinematic, temporal and dynamic features | Gradient boosted tree-based regression model | L2 normalisation with reduced error rate of 5% | 97 students
24 | Kariyawasam [36] | 3 Sinhala letters, success probability of written letters, number of correct and incorrect letters, number of attempts, total time taken, erase counts | CNN, SVM | 88% accuracy for letter dysgraphia and 90% for numerical dysgraphia | 5000 handwritten image (MNIST) dataset
25 | Yogarajah [37] | Images as input to CNN | CNN | 86.14% accuracy | 267 handwritten images with 54 images of dysgraphic students

6 Conclusion

In this paper, we have analysed the research work of various researchers around the world related to the detection of Dysgraphia. The scope for development and research is large owing to the wide range of applications needed in industry, and faster, more accurate detection of Dysgraphia is therefore required. This paper covers both hardware- and software-based detection methods. On the hardware side, tablets with wireless pens can collect data such as pressure, duration, and speed that can be used for detection purposes. Many researchers around the world are trying to gather labelled data which can then be used to train ML algorithms and predict the outcome (dysgraphic/non-dysgraphic) with satisfactory performance.


Acknowledgements The work reported in this paper is supported by the Science for Equity Empowerment and Development Division of the Department of Science and Technology, GoI.

References

1. Solanki RB, Waghela SP, Shankarmani R (2020) Dysgraphia disease detection using handwriting analysis. Int J Adv Technol Eng Sci 08(04)
2. Dutt S, Ahuja NJ (2020) A novel approach of handwriting analysis for dysgraphia type diagnosis. Int J Adv Sci Technol 29(3):11812. http://sersc.org/journals/index.php/IJAST/article/view/29852
3. Drotar P, Dobeš M (2020) Dysgraphia detection through machine learning. Sci Rep 10(1):1–11. https://doi.org/10.1038/s41598-020-78611-9
4. Spoon K, Crandall D, Siek K (2019) Towards detecting dyslexia in children's handwriting using neural networks. https://aiforsocialgood.github.io/icml2019/accepted/track1/pdfs/43_aisg_icml2019.pdf
5. Richard G, Serrurier M, Dyslexia and dysgraphia prediction: a new machine learning approach. https://arxiv.org/abs/2005.06401
6. Rosenblum S, Parush S, Epstain L, Weiss PL, Process versus product evaluation of poor handwriting among children with developmental dysgraphia and ADHD. https://www.academia.edu/24629677/Process_Versus_Product_Evaluation_of_Poor_Handwriting_among_Children_with_Developmental_Dysgraphia_and_ADHD
7. Maitrei K, Prasad T (2010) Identifying dyslexic students by using artificial neural networks. Lect Notes Eng Comput Sci 2183:118
8. Kedar S, Bormane SJ (2018) Online analysis of handwriting for disease diagnosis: a review. Int J Eng Technol 7(3.24):505–511. https://doi.org/10.14419/ijet.v7i3.24.22802
9. Zolna K, Asselborn T, Jolly C, Casteran L, Nguyen-Morel M-A, Johal W, Dillenbourg P (2019) The dynamics of handwriting improves the automated diagnosis of dysgraphia
10. Asselborn T, Gargot T, Kidziński Ł, Johal W, Cohen D, Jolly C, Dillenbourg P, Automated human-level diagnosis of dysgraphia using a consumer tablet. https://core.ac.uk/download/pdf/211985353.pdf
11. Kurniawan D, Sihwi SW, Gunarhadi G (2017) An expert system for diagnosing dysgraphia, pp 468–472. https://doi.org/10.1109/ICITISEE.2017.8285552
12. Kariyawasam R, Nadeeshani M, Hamid T, Subasinghe I, Ratnayake P (2019) A gamified approach for screening and intervention of dyslexia, dysgraphia and dyscalculia, pp 156–161. https://doi.org/10.1109/ICAC49085.2019.9103336
13. Lunardini GD, Termine C, Matteucci M, Stucchi NA, Borghese NA, Ferrante S (2020) A tablet app for handwriting skill screening at the preliteracy stage: instrument validation study. JMIR Ser Games 8(4), article e20126. https://doi.org/10.2196/20126. PMID: 33090110; PMCID: PMC7644384
14. Asselborn T, Chapatte M, Dillenbourg P (2020) Extending the spectrum of dysgraphia: a data driven strategy to estimate handwriting quality. Sci Rep 10(1):3140. https://doi.org/10.1038/s41598-020-60011-8
15. Asselborn T, Gargot T, Kidziński Ł, Johal W, Cohen D, Jolly C, Dillenbourg P (2019) Reply: limitations in the creation of an automatic diagnosis tool for dysgraphia. NPJ Digit Med 2:37. https://doi.org/10.1038/s41746-019-0115-z
16. What is Dysgraphia? Team Understood. https://www.understood.org/articles/en/understanding-dysgraphia
17. Understanding Dysgraphia; International Dyslexia Association. https://dyslexiaida.org/understanding-dysgraphia-2/


18. Gargot T, Asselborn T, Pellerin H, Zammouri I, Anzalone SM, Casteran L, et al (2020) Acquisition of handwriting in children with and without dysgraphia: a computational approach. PLoS One 15(9):e0237575. https://doi.org/10.1371/journal.pone.0237575
19. Deschamps L, Gaffet C, Aloui S, Boutet J, Brault V, Labyt E (2019) Methodological issues in the creation of a diagnosis tool for dysgraphia. NPJ Digit Med 2:36. https://doi.org/10.1038/s41746-019-0114-0
20. Learning Disabilities Statistics; Learning Disabilities Association of Ontario. https://www.ldao.ca/introduction-to-ldsadhd/articles/about-lds/learning-disabilities-statistics/
21. Handwriting Samples; OC Handwriting. https://ochandwriting.com/handwriting_samples.html
22. Dimauro G, Bevilacqua V, Colizzi L, Di Pierro D (2020) TestGraphia, a software system for the early diagnosis of dysgraphia. IEEE Access 8:19564–19575. https://doi.org/10.1109/ACCESS.2020.2968367
23. Erez N, Parush S (1999) The Hebrew handwriting evaluation, 2nd edn. School of Occupational Therapy, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
24. Mekyska J, Faundez-Zanuy M, Mzourek Z, Galaz Z, Smekal Z, Rosenblum S (2017) Identification and rating of developmental dysgraphia by handwriting analysis. IEEE Trans Hum-Mach Syst 47(2):235–248. https://doi.org/10.1109/THMS.2016.2586605
25. Deschamps L, Devillaine L, Gaffet C, Lambert R, Aloui S, Boutet J, Brault V, Labyt E, Jolly C, De A (2021) Development of a pre-diagnosis tool based on machine learning algorithms on the BHK test to improve the diagnosis of dysgraphia. Adv Artif Intell Mach Learn 222–13194
26. Dankovicova Z, Hurtuk J, Fecilak P (2019) Evaluation of digitalized handwriting for dysgraphia detection using random forest classification method. In: SISY 2019—IEEE 17th international symposium on intelligent systems and informatics, proceedings. https://doi.org/10.1109/SISY47553.2019.9111567
27. Rosenblum S, Dror G (2016) Identifying developmental dysgraphia characteristics utilizing handwriting classification methods. IEEE Trans Hum-Mach Syst 47(2):293–298
28. Mekyska J, Bednarova J, Faundez-Zanuy M, Galaz Z, Safarova K, Zvoncak V, Mucha J, Smekal Z, Ondrackova A, Urbanek T, Havigerova JM (2019) Computerised assessment of graphomotor difficulties in a cohort of school-aged children. In: International congress on ultra-modern telecommunications and control systems and workshops, Oct 2019. https://doi.org/10.1109/ICUMT48472.2019.8970767
29. Kedar S et al (2021) Identifying learning disability through digital handwriting analysis. Turkish J Comput Math Educ (TURCOMAT) 12(1S):46–56
30. Dui LG, Lunardini F, Termine C, Matteucci M, Ferrante S (2020) A tablet-based app to discriminate children at potential risk of handwriting alterations in a preliteracy stage. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS, July 2020, pp 5856–5859. https://doi.org/10.1109/EMBC44109.2020.9176041
31. Sihwi SW, Fikri K, Aziz A (2019) Dysgraphia identification from handwriting with support vector machine method. J Phys: Conf Ser 1201. https://doi.org/10.1088/1742-6596/1201/1/012050
32. Richard G, Serrurier M (2020) Dyslexia and dysgraphia prediction: a new machine learning approach
33. Dutt S et al (2021) Comparison of classification methods used in machine learning for dysgraphia identification. Turkish J Comput Math Educ (TURCOMAT) 12(11). https://doi.org/10.17762/turcomat.v12i11.6142
34. Hewapathirana C, Abeysinghe K, Maheshani P, Liyanage P, Krishara J, Thelijjagoda S (2021) A mobile-based screening and refinement system to identify the risk of dyscalculia and dysgraphia learning disabilities in primary school students. In: 2021 10th international conference on information and automation for sustainability, ICIAfS 2021. https://doi.org/10.1109/ICIAfS52090.2021.9605998
35. Zvoncak V, Mekyska J, Safarova K, Galaz Z, Mucha J, Kiska T, Smekal Z, Losenicka B, Cechova B, Francova P et al (2018) Effect of stroke-level intra-writer normalization on computerized assessment of developmental dysgraphia. In: 2018 10th international congress on ultra modern telecommunications and control systems and workshops (ICUMT). IEEE, pp 1–5


36. Kariyawasam R, Nadeeshani M, Hamid T, Subasinghe I, Samarasinghe P, Ratnayake P (2019) Pubudu: deep learning based screening and intervention of dyslexia, dysgraphia and dyscalculia. In: 2019 14th conference on industrial and information systems (ICIIS). IEEE, pp 476–481
37. Yogarajah P, Bhushan B (2020) Deep learning approach to automated detection of dyslexia-dysgraphia. In: The 25th IEEE international conference on pattern recognition
38. Bublin M, Werner F, Kerschbaumer A, Korak G, Geyer S, Rettinger L, Schoenthaler E (2022) Automated dysgraphia detection by deep learning with SensoGrip. arXiv:2210.07659

Designing AI for Investment Banking Risk Management a Review, Evaluation and Strategy

Simarjit Singh Lamba and Navroop Kaur

Abstract The technological advancements in artificial intelligence have brought new opportunities for the investment banking sector, primarily for the risk management function, which deals with identifying, gauging, reporting and managing the risks across the counterparty credit risk, market risk, operational, supervisory and liquidity risk functions. Given that the risk management function is data centric and includes building efficient risk models, understanding and predicting market changes using quantitative financial models and statistical tools, and finally predicting future behaviours, it is an ideal candidate for evolution and transformation via artificial intelligence. Among other goals, this research aims at reviewing the verticals and use cases within the risk management and post-trade framework of an investment bank that can be transformed by applications of artificial intelligence. Furthermore, it reviews the academic research done in this area and brings out the functions that have been inadequately explored, along with probable areas for additional research. Given that this is a strongly regulated industry, we also review the appetite of financial regulators for accepting optimisations using ML and AI.

Keywords Risk management · Market risk · Montecarlo simulation · Machine learning · Artificial intelligence · Credit ratings · Liquidity risks · Algorithms and regulatory risks

S. S. Lamba (B) · N. Kaur Department of Computer Science and Engineering, Akal University, Talwandi Sabo, India e-mail: [email protected] N. Kaur e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_29


1 Introduction

Since the financial turmoil of 2008, investment banks have been bombarded by frequent regulatory changes in the risk management and post-trade functions. These major changes in the business workflow have led to various tactical solutions that the banks have built to meet market timelines. These tactical solutions have introduced many inefficiencies into the system, human-based decision making and manual workflows, a breeding ground for errors. To understand the size of this problem, consider some studies done on the current state of post-trade services [1]:

. $25 Bn: what the investment banking industry currently spends on post-trade processing services yearly.
. $3 Bn: assuming a minimum two percent trade failure rate, the loss the industry incurs every year due to inefficiencies in the system, which can be prevented by applying AI/ML-based solutions that implement a stronger internal control environment.
. $345 Bn: consolidated sanctions and fines applied on investment banks by regulators over the past 10 years due to inefficiencies and misreporting.

Artificial intelligence-based solutions can help in reducing these losses by reducing the error rate in trade failures, removing manual human decisions to make the underlying processes more stable, processing semi-structured and even unstructured data to augment the decision support system, detecting patterns in historic data to predict future failure scenarios, etc. Below are a few further motivations to explore AI solutions for risk management:

. What if we were able to envisage which trades have a higher affinity for failure and which payments may be defaulted on by counterparties?
. What if we were able to highlight in advance which client behaviours seem hypothetically duplicitous and can lead to future sanctions and fines from the regulators?
. What if we were able to replace subjective human decision making with a perceptron that studies historic knowledge, self-learns and delivers a more efficient outcome driving higher business value?
. What if we were able to review real-time market news and movements and take timely decisions for risk and post-trade processing, saving future losses and fines?

In previous years, with frequent changes in the investment banking ecosystem, all banks have been trying to remain viable businesses by constantly changing business strategies. However, from a technology estate perspective, most of the middle and back-office software solutions are legacy systems that have been retrofitted to solve the regulatory challenges.


In addition to the stringent regulatory noose, market volumes have been increasing drastically. Just within the last two years, markets witnessed a sharp increase from 8 million to 12 million contracts being traded on an average day. This added further stress on the risk management operations of counterparty credit risk (CCR), market risk (MR), operations and compliance risk systems. Concurrently, the risk management ecosystem is ever evolving, leading to tactical solutions being implemented which are not only less efficient but also lead to increased cost of maintenance, losses due to errors in the decision support system, regulatory fines due to misreporting of numbers, and in turn increased regulatory probing. Artificial intelligence-based solutions can help massively in this environment to attain the business aspirations.

2 Theoretical Framework for Artificial Intelligence Application in This Research

In anticipation of substantial improvements, many organizations jump to applying artificial intelligence to almost every use case to transform their technology landscape, without properly analysing whether the use case is a right fit for technology transformation. Applying these solutions to the wrong use case can lead to further fuzziness in trade and risk processing, greater regulatory fines and loss of stakeholder trust in the transformation process. It is essential to build a theoretical framework that assesses the use case, understands the root cause, runs a quantitative and qualitative study of the benefits, reviews the current workflow, the maturity of the technology estate and the availability of required data, and feeds these factors into a feasibility analysis of each use case, so that the right problem is picked up first. As investment banks aim at increasing returns for their stakeholders, the products that they build and invest in incur various inherent risks, which include counterparty credit risk, market risk, operational, supervisory and liquidity risks.

To be successful and viable in today's highly regulated environment, investment banks must be efficient at managing these risks and other post-trade processes.


The following theoretical framework can be used by any industry to assess use cases and build a pipeline of problems to be targeted for transformation using artificial intelligence.

I. Risk Taxonomy

To gather evidence on the top risks that investment banks track and report back to regulators, an alternative method of studying the banks' annual reports was followed for ten leading banks globally (Fig. 1). Below is the list of banks that were studied, from which a comprehensive risk taxonomy was drawn (Fig. 2) detailing the risks, methodologies, factors and frameworks that the banks use to manage them and report back to regulators.

Fig. 1 Theoretical framework for artificial intelligence application in this research

Fig. 2 Risk taxonomy published by leading banks


S. no | Investment bank | S. no | Investment bank
1 | Morgan Stanley | 6 | NatWest
2 | Goldman Sachs | 7 | Citi Bank
3 | JP Morgan | 8 | Deutsche Bank
4 | HSBC | 9 | UBS
5 | Nomura | 10 | Credit Suisse

In the current state, risk managers within an investment bank have an end-of-day view of the risk management components. This means they lack tools for more active, real-time risk management; current systems only allow a historical review of activities and analysis of data and events that have passed. With the increasing demand and stress on risk management from regulators, there is an active demand for better real-time risk management capabilities, a pre-trade risk view, predictive analytics, modelling of various stress scenarios and improved back-testing models that can augment the existing decision support [2]. The key components of risk governance have been reviewed along with a matrix of their usage across the various risk functions under study; these components are avenues of opportunity for embedding ML/AI solutions (Table 1). Drilling further down from the governance framework to actual use cases within the risk workflow, Table 2 gives a detailed view of granular components that are both data centric and computationally heavy, which makes them viable use cases for our study.

3 Artificial Intelligence in Risk Management and Post-Trade Processing

Risk management as a function requires data sourcing from multiple internal and external pipelines. This data then needs to be interpreted and used for running calculations to derive risk factors and run predictive analytics. Machine learning is a perfect tool that can work in the high-performance computing environment required by risk management processes, run statistical analysis, detect complex future patterns by analysing historic data, and self-learn and evolve [3]. To review existing literature on risk and post-trade processes benefitting from artificial intelligence, a thorough search was done for each of the identified risk methodologies and tools and the taxonomy derived from the leading banks in Fig. 2. Most of the existing literature treats artificial intelligence as a generic recommendation for resolving data classification needs and does not focus on the specific real-world problems that the investment banking industry faces. A few do talk about specific risk measures, but do not provide clarity on the procedures involved.


Table 1 Risk monitoring framework components

Risk management tools (mapped across the market, credit and operational risk functions): risk limits, credit risk limits, value at risk, earning at risk, expected shortfall, economic value stress testing, economic capital, risk sensitivities, risk assessment (RCSA), operational risk losses, loss distribution approach, scenario analysis, tail risk capture, stress testing, scoring models, rating models, exposure (probability of default, loss given default, exposure at default) and back testing.
Table 2 Risk monitoring framework components

Risk management framework components (applicable across the market, credit and operational risk functions): risk appetite, risk identification, risk assessment, risk measurement, risk testing, risk monitoring, risk reporting, risk oversight, and capital management (calculation and allocation: CCAR, ICAAP).

Many also fail to provide examples of how artificial intelligence has been applied. The literature review focused on mapping the various risk verticals, the risk factors they target, the proposed machine learning solutions and any further implementation details that were captured. A summary of this review is given in the following sections, which dive into the key risk verticals that would most benefit from the application of artificial intelligence.

3.1 Counterparty Credit Risk (CCR)

The term counterparty refers to the second party with which the bank is entering into a relationship. This could be giving a loan to a counterparty or trading the various products that the bank deals in. The various risks associated with a counterparty are termed counterparty risk: these could come from the potential of a counterparty not being able to stand by its commitments and defaulting on payments, from country-of-incorporation risk, etc. In principle, the AI applications that have been evaluated for the CCR function focus on understanding the various risk factors linked to a counterparty that provide leading indicators of future defaults. These can be used to build classifiers that bucket potentially defaulting counterparties and the scenarios that may lead to defaults [3]. The strategy of identifying potential counterparty defaults is a key control mechanism for risk managers within investment banks.

. Machine learning techniques such as logistic regression have historically been used for credit scoring.
. Other techniques like discriminant analysis have also been researched and shown some success.
. More prominently, support vector machines (SVMs) have shown greater success in identifying counterparties with higher credit risk than others. When tested against historic methods, SVMs were also able to surface the risk factors that are key contributors to identifying which counterparties may fail their commitments [4].

Model management for the credit function was redefined under the Basel II and III regulations. There are four key risk measures that are generated for risk managers to take effective decisions:

Under the Basel II and III accords: Probability of Default (PD), Expected Potential Exposure (EPE), Exposure at Default (EAD) and Loss Given Default (LGD).

PD is a key risk measure that requires predictive analytics of the future possibility of default by a counterparty. Classifier models suit this type of analysis best and can outperform traditional logistic regression. For deriving a credit ranking for counterparties and identifying risks, SVM models have shown promising results. Previous research shows a satisfactory level of analysis on credit ranking modelling using nearest neighbour, binary classifiers (that categorise sets of defaulters and non-defaulters), discriminant classification and more generic perceptron ANNs, which have shown more precise results when back-tested against actual outcomes [4]. Banks have focused on building accurate credit score classifiers to predict the credit risk taken when making a deal with a counterparty, so that creditworthiness can be precisely assessed and the potential exposure reduced if the counterparty defaults. Neural network-based models have shown greater success than traditional classifiers in assessing the health of a counterparty and identifying leading indicators for predicting credit default events. While it is clear that most research on applications of AI has been done on effectively predicting counterparties' creditworthiness, this is not a new topic: the earliest research dates back to 1994, when Altman compared statistical tools and neural network-based solutions and concluded that a hybrid of the two approaches best enhances the results. Hybrid SVMs are also seen to provide better classification, but are unable to simulate risk measures [5].
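A minimal sketch of the comparison described above, fitting logistic regression and a non-linear classifier to estimate PD on synthetic counterparty data. The feature set, the non-linear default mechanism and the AUC comparison are illustrative assumptions, not results from any cited study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic counterparty features: leverage, rating drift, exposure, country risk.
n = 5_000
X = rng.normal(size=(n, 4))
# Toy assumption: default is driven non-linearly by leverage and rating drift.
logit = 1.5 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=200))]:
    clf.fit(X_tr, y_tr)
    pd_hat = clf.predict_proba(X_te)[:, 1]   # estimated PD per counterparty
    print(f"{name}: AUC = {roc_auc_score(y_te, pd_hat):.3f}")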

3.2 Montecarlo Simulations in Counterparty Credit Risk

Beyond the existing research done on credit scoring, the primary challenge for investment banks is to predict credit risk for more exotic OTC derivative trades with longer tails. The calculation of mark-to-market (MtM) for these trades is


very costly, as it requires running large simulations to effectively calculate potential future exposure (PFE). Montecarlo simulations are widely used in such situations to compute the potential exposure that the bank may have in case a counterparty defaults. The process requires MtM to be calculated for every grid point in the trade tenor. Usually, between 10,000 and 30,000 simulated paths are generated, and MtMs are calculated for daily and monthly valuations. This requires massive amounts of compute power; these simulations usually run on private cloud solutions and cost multiple millions of dollars' worth of computing infrastructure every year. Figure 3 shows a sample MtM simulation via Montecarlo for a 50-year tenor trade with 10,000 random paths being generated. Zooming into Fig. 4, we need to run the pricing function for calculating PFE and EPE at each of the grid points.

Fig. 3 High level simulation workflow


Fig. 4 MtM, PFE and EPE simulation using Montecarlo

Considering a monthly grid point scheme, we need 10,000 * 12 * 50 = 6,000,000 computations for each of the MtM, EPE and PFE measures. This requires massive computational power on the cloud and is a financially straining technology function. Given that each bank ends up spending millions of dollars' worth of infrastructure to be compliant with the regulatory requirements of CCR, there is a massive opportunity to improve the CCR calculations using AI. This field has not seen much research in the past (Fig. 5).

Fig. 5 Pricing function for calculating PFE and EPE
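The following NumPy sketch reproduces the shape of this workflow: 10,000 MtM paths on a monthly grid over a 50-year tenor (the 6,000,000 grid values above), with EPE taken as the mean positive exposure and PFE as the 97.5th percentile of exposure at each grid point. The driftless lognormal MtM dynamics, the volatility and the notional are simplifying assumptions, not a production pricing model.

import numpy as np

rng = np.random.default_rng(42)

n_paths, years, steps_per_year = 10_000, 50, 12
n_steps = years * steps_per_year              # 600 monthly grid points
dt = 1.0 / steps_per_year
sigma = 0.20                                  # assumed MtM volatility
notional = 1_000_000.0                        # assumed notional (USD)

# 10,000 x 600 = 6,000,000 simulated MtM values (driftless lognormal factor).
z = rng.standard_normal((n_paths, n_steps))
log_factor = np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1)
mtm = notional * (np.exp(log_factor) - 1.0)   # starts near 0, can go negative

# Exposure is the positive part of MtM: what is at risk if the counterparty defaults.
exposure = np.maximum(mtm, 0.0)
epe = exposure.mean(axis=0)                   # expected positive exposure per grid point
pfe = np.percentile(exposure, 97.5, axis=0)   # potential future exposure per grid point

print(f"peak EPE: {epe.max():,.0f} USD   peak PFE(97.5%): {pfe.max():,.0f} USD")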


Fig. 6 Surrogate path mapping of a Montecarlo function

3.2.1 Surrogate-Based Optimization

Montecarlo, being a brute-force simulation technique, attracts a heavy computational charge, especially when the set of correlated random parameters is large and the function being executed is complex. There has been some study in the field of determining a heuristic surrogate model that can mimic the geometric properties of the actual model (Fig. 6). This allows us to execute the geometric model rather than the mathematical model, which requires far fewer computation cycles than its primary mathematical counterpart. This lowers the computational cost and speeds up execution of the model. Figure 7 details the workflow for building a surrogate-based machine learning model that maps the geometric properties of the Montecarlo-based simulation model. The workflow begins by creating a historic data store of Montecarlo paths.
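As a toy illustration of the surrogate idea, the sketch below fits a gradient-boosted regressor on a small sample of points priced with an 'expensive' function and then evaluates the cheap surrogate on a much larger grid. The toy pricing function is a stand-in assumption for a real MtM model.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)

def expensive_price(s, t):
    # Stand-in for a costly pricing routine (e.g. a nested simulation).
    return np.maximum(s - 100.0, 0.0) * np.exp(-0.03 * t) + 0.01 * s * t

# Price only a small training sample with the expensive model...
s_train = rng.uniform(50, 150, 2_000)
t_train = rng.uniform(0, 50, 2_000)
y_train = expensive_price(s_train, t_train)

surrogate = GradientBoostingRegressor(n_estimators=300, max_depth=3)
surrogate.fit(np.column_stack([s_train, t_train]), y_train)

# ...then evaluate the cheap surrogate on the full simulation grid.
s_grid = rng.uniform(50, 150, 100_000)
t_grid = rng.uniform(0, 50, 100_000)
approx = surrogate.predict(np.column_stack([s_grid, t_grid]))
rmse = np.sqrt(np.mean((approx - expensive_price(s_grid, t_grid)) ** 2))
print(f"surrogate RMSE on the full grid: {rmse:.2f}")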

3.3 Market Risk

The risk incurred through risk factors that are determined by market movements is called market risk (MR). The MR function deals with sourcing data from multiple external sources, massaging the data to explore features, preparing risk models that can effectively compute risk measures like value at risk (VAR), and executing stress and back testing to validate the effectiveness of the models. Given that each of these steps is data driven and works on historic data analysis, machine learning techniques can be of immense value to the MR function and model management. The two primary areas where machine learning has been researched for application within market risk are building volatility curves and model validation.


Fig. 7 Surrogate geometric model framework

. Volatility is defined as the scatter of unexpected risks and measures how much each of the grid points differs from the mean. Forecasting volatility is an essential function of risk supervision and has historically been researched by applying neural network models to the estimation function; such models performed better under stress testing [6].
. For model validation, a primary risk measure is value at risk (VAR), which defines the unfavourable loss over the tenor of the trade that will not be exceeded with 97.5% assurance. Reference [6] brings out interesting industry facts and reviews various current applications of AI in the field of model validation. For instance, the investment bank Netaxis executes more than three million simulations to run analytics identifying relations between risk-weighted assets and portfolio diversification. Within the same research, another global investment services group, Nomura, is studied as evidence of how AI can help monitor risk models and identify patterns of portfolio selection that increase the overall risk the firm has to manage.
. Another pivotal tool in the market risk world is yields.io. This organization is a market leader in the application of artificial intelligence for model governance, studying


the aberrations that a model outputs under stress scenarios, validation of the risk methodology and pre-trade risk management.
. Another interesting line of research explores the end state of artificial intelligence-based trading systems: the trade booking system is designed with the capability to learn from market volatility post-trade and feed back into the trading methodology, so that it evolves and makes better future investment decisions on products that reduce the overall risk-weighted assets (RWA).
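A compact example of the VAR measure just described: historical-simulation value at risk at 97.5% confidence, plus the expected-shortfall companion measure that Basel III emphasises, computed on a synthetic P&L history. Real desks use full revaluation and regulator-approved methodology; this only illustrates the arithmetic.

import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily P&L history for a portfolio (stand-in for real market data).
pnl = rng.standard_t(df=4, size=2_000) * 250_000.0

# 97.5% one-day VaR: the loss level exceeded on only 2.5% of historical days.
var_97_5 = -np.percentile(pnl, 2.5)
# Expected shortfall: average loss beyond the VaR threshold.
es_97_5 = -pnl[pnl <= -var_97_5].mean()

print(f"1-day VaR(97.5%): {var_97_5:,.0f}   ES(97.5%): {es_97_5:,.0f}")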

3.4 Liquidity Risk

Liquidity risk and the factors affecting a bank's liquidity are of utmost importance to any organization. While qualitative solutions exist to measure the current liquidity parameters of a bank, AI-based solutions can help an organization understand the risk factors that affect the liquidity index, how those risk factors relate to the measures, and which risk factors are most significant. Artificial neural networks (ANNs) can be beneficial for identifying patterns of risk factor deviation and categorising the ones that have a larger affinity to impact the liquidity measures, as sketched below. Another key area that can benefit from AI is predictive analytics over the liquidity event itself. Bayesian networks (BNs) are usually the tool of choice for building an estimation function that provides early indicators of a liquidity event [2].
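A sketch of the first use case: ranking which risk factors most influence a liquidity measure. The text proposes ANNs; a random forest is used here purely because its feature importances are easy to read off, and the factor names and the data-generating assumption (deposit outflows dominating) are illustrative only.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

factors = ["deposit_outflows", "funding_spread", "asset_haircuts", "fx_volatility"]
X = rng.normal(size=(3_000, len(factors)))
# Toy assumption: the liquidity index is driven mainly by deposit outflows.
liquidity_index = -2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=3_000)

model = RandomForestRegressor(n_estimators=200, random_state=5)
model.fit(X, liquidity_index)
for name, imp in sorted(zip(factors, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.2f}")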

3.5 Operational Risk

Operational risk (OR) is risk arising from an organization's internal control gaps. This could be in terms of issues with cybersecurity implementation, conduct issues by staff, infrastructure reliability issues, etc.; non-financial risks fall into the category of OR. AI solutions find their roots in operational risk in features like fraud indicators, compliance monitoring, reporting, and supervising conduct rules on internal communications via voice and text analysis. In [2], the authors explored solutions for building a real-time view that supports monitoring of compliance issues within an organization. Their study reviewed various automation tools available in the market to support AI-based identification, aggregation and reporting of compliance issues. Another heavily researched area is evolving spammer detection and blocking techniques as the spammers themselves evolve. One of the market leaders in providing AI-based spam detection solutions is Proofpoint, which uses self-learning-based algorithms to continuously evolve its techniques.


In another study on money laundering, AI-based solutions have been explored to cluster and categorise transactions that show similar patterns indicating a higher affinity to money laundering. This is a serious concern for the banking industry, as money laundering, if not prevented by the banks, can lead to heavy regulatory fines, and detecting it is not a trivial task given the variety and velocity of transactions. AI-based solutions have shown good results in improving the accuracy of detection and reducing the false positives being highlighted. Banks also have to continuously monitor internal transactions to detect suspicious activities like insider trading. To do this effectively, organizations have invested in surveillance tools that record audio and textual internal communications and use natural language processing to identify conversations that suggest spurious activities, highlighting these events to supervisory officers [2].
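A minimal unsupervised sketch of the transaction-screening idea: an isolation forest flags transactions whose pattern deviates from the bulk. The feature names, the synthetic data and the 1% contamination rate are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(9)

# Synthetic transactions: amount, hour of day, transfers per week, new-payee flag.
normal = np.column_stack([rng.lognormal(3, 1, 5_000),
                          rng.normal(13, 3, 5_000),
                          rng.poisson(4, 5_000),
                          rng.random(5_000) < 0.05])
suspicious = np.column_stack([rng.lognormal(7, 0.5, 50),   # unusually large
                              rng.normal(3, 1, 50),        # odd hours
                              rng.poisson(40, 50),         # rapid movement
                              np.ones(50)])                # always new payees
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=9).fit(X)
flags = detector.predict(X)                # -1 marks outlying transactions
print(f"flagged for review: {(flags == -1).sum()} of {len(X)}")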

4 Research Gaps

It is evident that artificial intelligence and machine learning techniques have been pursued within the banking industry and have shown great success in various use cases. Risk management has also seen some application of these technologies to help reduce inefficiencies, build stronger internal controls, analyse patterns of counterparty behaviour and reduce the cost of computation. Artificial intelligence-based solutions are key to transforming the risk and post-trade ecosystem within investment banks by redefining the way we run predictive analysis on counterparty risk, re-architecting the existing simulation methodologies to be more cost efficient and accurate, improving the timeliness of regulatory reporting and providing a better decision support system for risk managers and trading desks. This literature review has covered all the verticals within the scope of risk management at top investment banks where AI-based solutions can be of massive benefit. After close evaluation of the strategies researched in the past, and studying the industry research performed by market-leading service organizations, we found large gaps in the application and future potential of AI transformation.

(1) AI application gaps within the counterparty credit risk (CCR) function:

(a) The majority of research has been around credit ranking of counterparties and classifying them into probability-of-default buckets.
(b) However, a large share of risk factors within the CCR function remains unexplored and attracts huge costs for high-performance computation on cloud infrastructure. Very minimal research has been done on using AI techniques within complex simulation-based calculations like potential future exposure (PFE), which is both a control requirement and a regulatory demand under Basel II and III.

(2) AI application gaps within the market risk (MR) function:


(a) Market risk has not been explored for AI applications as much as the CCR function, which has seen the most research. This is possibly because CCR use cases naturally mould themselves into more obvious machine learning use cases for classification and regression.
(b) There has been some research and industry implementation, fielded within yields.io and other investment banks, around predicting volatility shifts. There are considerable gaps in optimising VaR calculations and other risk factors within market risk in light of the Basel III requirements for expected-shortfall calculations.

(3) AI application gaps within the liquidity risk (LR) function: Liquidity risk is a prominent function which every bank needs to monitor closely and continuously hedge against. The literature survey pointed out that minimal focus has been given to the LR function for AI application. Use cases like classifying the risk factors and events that effectively predict liquidity risk using ANN solutions could be a major research stream.

(4) AI application gaps within the operational risk (OR) function: The OR function has seen limited research and industry implementation of AI solutions, predominantly in the area of classifying events into those that have a higher affinity to causing operational failures. However, given that this function is data centric and includes compliance and supervisory functions, AI solutions can help transform the existing tools that rely only on qualitative risk factors to identify operational risks. The evolution of AI implementation can start by augmenting the existing technology solutions in a prescriptive manner, reducing the errors in classification of supervisory alerts, and in later stages move into more self-learning-based solutions that can highlight compliance failures and take supervisory actions.

(5) The literature survey also shows gaps in AI-based technology solutions for executing the stress and back testing that checks the effectiveness and accuracy of risk models. Both these functions investigate large volumes of risk factors and could be transformed by AI-based solutions.

(6) Simulation-based tools such as Montecarlo are used for complex risk measure calculations in both the CCR and MR functions. These tools usually evaluate complex mathematical algorithms as pricing and risk functions that take both time and infrastructure to execute.

(7) There has been no research in the field of optimising simulation through the use of surrogate functions and predicting a better set of variables that can reduce the number of simulations. This field is completely unexplored and can benefit from AI transformation. Regulatory reporting for risk managers is another function which is unexplored from an AI implementation perspective.


Banks still use quantitative tools for risk aggregation and reporting. Similarly, conduct risk (CR) has seen minimal research and only works on events that have already taken place, rather than predicting high-affinity areas where conduct training should be provided and monitored to prevent future conduct events [7].

5 Conclusion

In conclusion, post-trade management and risk handling have seen some research and implementation of artificial intelligence techniques in the past decade as the complexity of regulations has increased. With the advancements in artificial intelligence solutions, their applicability and acceptability in the banking industry have grown in the past few years, and regulators globally are now more receptive to using machine learning as an alternative for solving complex business problems. However, after studying the gamut of risk management functions and methodologies, it is clear that the current state of artificial intelligence penetration is not enough, and there are immense opportunities where artificial intelligence can transform the risk management and post-trade functions of a bank, as explored in the identified research gaps.

6 Future Opportunities for Research

Given that risk management relies largely on data, there is immense opportunity for transforming the risk and post-trade ecosystem through the application of ML and AI techniques. The industry has faced consolidated sanctions and fines of $345 Bn from regulators over the past 10 years due to inefficiencies and misreporting. With the regulations getting more stringent under the Basel III accords, it is now essential to transform the risk management and post-trade technology estate and utilise advancements in artificial intelligence-based solutions to solve complex investment banking problems. After reviewing the gaps in the existing research and understanding the key risk management verticals that investment banks are focused on, the research undertaken will focus on the following:


. Building a consolidated map of all risk management functions and tools, and researching which artificial intelligence techniques can be beneficial for the transformation. This can then be used by any investment bank as a key to transforming its estate.
. Counterparty credit risk: existing research has focused only on credit scoring. A large area of research remains around predictive analysis of risk factors like expected potential exposure (EPE), probability of default (PD), exposure at default (EAD) and loss given default (LGD), which are largely calculated using Montecarlo simulation techniques. Future research should also focus on building target-state strategies for transforming the counterparty credit risk function, and on Montecarlo optimisation by exploring surrogate function building using artificial intelligence. Surrogate functions can mimic the geometric properties of target simulations and help reduce the cost and runtime of executing computationally heavy simulations. This will help banks not just run effective pre-trade risk scenarios, but also save millions of dollars' worth of hardware cost that they spend on simulating risk factors like EPE, PD and LGD on cloud grid computing.
. Market risk is largely unexplored for machine learning transformation, even though a large part of risk-weighted assets (RWA) falls under market risk. The research will aim at building a mapping of risk tools and suggesting artificial intelligence strategies for optimising the business value within these functions. The stress testing function and effective model risk management using machine learning should also be examined.
. In the post-trade function, trade processing failures cause significant losses to investment banks. The numbers published by DTCC suggest that the industry incurs a loss of three billion USD every year due to inefficiencies in the system, which can be prevented by applying AI/ML-based solutions that implement a stronger internal control environment. Future research should also focus on identifying risk factors that can help categorise the subgroup of trades with a higher affinity for failure, so that remedial actions can be taken in advance.

References

1. Mariani J (2021) Artificial intelligence in post-trade processing. https://www2.deloitte.com/us/en/pages/consulting/articles/artificial-intelligence-post-trade-processing.html
2. MetricStream (2018) The CRO's role and beyond. https://assets.metricstream.com/pdf/insights/the-chief-risk-officers-role-in-2018-beyond.pdf
3. Aziz S, Michael MD (2018) AI and ML for risk management. SSRN Electron J. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3201337
4. Bacham D, Zhao J (2017) ML: issues and resolutions. Perform better counterparty risk analysis. https://www.moodysanalytics.com/risk-perspectives-magazine/managing-disruption/spotlight/machine-learning-challenges-lessons-and-opportunities-in-credit-risk-modeling

Designing AI for Investment Banking Risk Management a Review …

347

5. Guegan D, Addo P, Hassani B (2018) CRA analysis using ML and AI. https://www.mdpi.com/ 2227-9091/6/2/38 6. Awad M, Khanna R (2019) Efficient learning machines. Springer. https://link.springer.com/con tent/pdf/https://doi.org/10.1007/978-1-4302-5990-9.pdf 7. Hamori S, Kawai M, Kume T, Murakami Y, Watanabe C (2018) AI or deep learning? Implementation in DRA. J Risk Financ Manag. https://www.mdpi.com/1911-8074/11/1/12

A Neutrosophic Cognitive Maps Approach for Pestle Analysis in Food Industry
Kanika Bhutani, Sanjay Gaur, Punita Panwar, and Sneha Garg

Abstract The food industry is the most significant industry, as everyone has to eat food to live. In the food industry, pestle analysis can be used to assess the environment of the business by considering political, economic, social, technological, legal and environmental factors. This paper applies a new technique to pestle analysis based on fuzzy logic and neutrosophic logic. Fuzzy logic is a subset of neutrosophic logic (NL), which can be regarded as a dominant tool for handling the inconsistent information of the real world. NL provides t (degree of truth), i (degree of indeterminacy) and f (degree of falsity) for every instance; since the world is full of indeterminacy, NL characterizes each logical statement in 3D neutrosophic space. This paper applies the techniques of fuzzy cognitive maps (FCM) and neutrosophic cognitive maps (NCM) to build cause-effect relationships between various factors. The components of neutrosophic logic (neutrosophic cognitive maps, neutrosophic sets, neutrosophic rough soft sets, neutrosophic probability, neutrosophic ontology, etc.) share a collaborative relationship rather than a competitive one. These techniques have been applied in various domains such as medicine, engineering applications, the military, education, banking, business, decision making and image processing. This paper is an attempt to demonstrate the utility of NL, and a comparison is made between fuzzy cognitive maps (FCM) and neutrosophic cognitive maps (NCM).
Keywords Fuzzy cognitive maps · Food industry · Pestle · Neutrosophic cognitive maps · Cause
K. Bhutani (B) · S. Gaur · P. Panwar
JECRC, Jaipur, RJ, India
e-mail: [email protected]
S. Gaur
e-mail: [email protected]
P. Panwar
e-mail: [email protected]
S. Garg
MNIT, Jaipur, RJ, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_30


1 Introduction
Soft computing provides a platform for solving real-world problems by producing approximate results. The main building blocks of soft computing are fuzzy logic, genetic algorithms and neural networks, which deal with approximate reasoning, functional optimization and random search. In classical logic, three-valued logic and fuzzy logic [1], the degrees of membership and non-membership must belong to the interval [0, 1] [2]. Fuzzy logic is of great importance because (Table 1):
• It has the ability to deal with non-analytical ambiguity.
• It is a system that can describe events before they happen.
• It was developed to solve problems of knowledge representation.
Neutrosophy is a new branch of philosophy which deals mainly with ideas and notions that are neither true nor false but between true and false, i.e., indeterminate and inconsistent [2]. For example, when a coin is tossed the possible outcomes are head and tail, but when a coin stands still on a curved surface, indeterminacy comes into play. Neutrosophic logic is an extension of classical logic, three-valued logic and fuzzy logic that incorporates indeterminacy. As the real world is full of indeterminacy, neutrosophic logic is of great advantage and surpasses all the earlier, fully determined logics (a tiny classification sketch in code follows after Table 1):
• If (t + i + f) = 1, the case is the same as classical and fuzzy logic.
• If (t + i + f) < 1, there is some incomplete information, corresponding to intuitionistic logic.
• If (t + i + f) > 1, there is some contradictory information, corresponding to paraconsistent logic.

Table 1 Types of logic
Logic | Introduced by | Definition
Classical logic | Aristotle | Every logical statement is restricted to the values true (T) and false (F)
Three-valued logic | Lukasiewicz | Every logical statement is restricted to three values (1, 1/2, 0)
Three-valued logic | S. Kleene | Every logical statement is restricted to three values (1, 0, U); U = undefined or unknown
Three-valued logic | Bochvar | Every logical statement is restricted to three values (1, 0, M); M = meaningless
Fuzzy logic | L.A. Zadeh | Every logical statement is restricted to values in the interval [0, 1]
Neutrosophic logic | Florentine Smarandache | Every logical statement is restricted to values (t, i, f), where t = degree of truth, i = indeterminacy, f = falsity
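A minimal sketch in code of the three cases listed above; the triples used here are illustrative values, not data from this paper.

def classify(t, i, f):
    # Classify a neutrosophic triple (t, i, f) by the sum of its components.
    s = t + i + f
    if s == 1:
        return "classical/fuzzy case"
    if s < 1:
        return "incomplete information (intuitionistic logic)"
    return "contradictory information (paraconsistent logic)"

print(classify(0.6, 0.0, 0.4))   # classical/fuzzy case
print(classify(0.5, 0.2, 0.1))   # incomplete information (intuitionistic logic)
print(classify(0.7, 0.4, 0.3))   # contradictory information (paraconsistent logic)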

Fig. 1 Sample FCM (a directed graph on the concepts D1, D2, D3, D4)

A. Fuzzy Cognitive Maps (FCM)
The cognitive map was first presented by Axelrod in 1976 to clarify states which are otherwise unclear. It consists of concepts and causal beliefs that describe the behavior of a system and the relationships between its concepts. The concept of the FCM was presented by Kosko as an extension of existing cognitive maps [3]. It works on the principle of fuzzy logic introduced by Zadeh. FCMs work on expert opinion and unsupervised data [4], and their applications can be seen in various sectors. Features of FCM:
• An FCM is a graph showing the cause-effect relationships between various concepts.
• Concepts are nodes that describe information about the system, and the relationships between concepts can be direct, indirect or absent.
Let Di and Dj be two of the FCM's nodes. The edge between Di and Dj represents the connection between the two nodes, and every edge carries a weight from {−1, 0, 1}. Let w be the weight of an edge (Fig. 1).
• If w = 0, then Dj is not affected by Di.
• If w = 1, then an increase in Di causes an increase in Dj, and a decrease in Di causes a decrease in Dj.
• If w = −1, then an increase in Di causes a decrease in Dj, and a decrease in Di causes an increase in Dj.
Let D1, D2, D3, D4, ..., Dn be the nodes of an FCM. If the causal influence passes over the edges of a cycle and comes back to its starting node, the system is circular; for such a system, the equilibrium state is termed the hidden pattern. If the iteration reaches a fixed point, that is a unique vector in the equilibrium state: for example, if the FCM settles down with D1 and Dn in the ON state, i.e., (1, 0, 0, ..., 1), this vector is termed a fixed point. If it settles down to a repeating sequence of vectors, the equilibrium is called a limit cycle [5].
B. Neutrosophic Cognitive Maps
The neutrosophic cognitive map (NCM) is a superset of the FCM. An NCM deals with truth relations, falsity relations and indeterminate relations; neutrosophic logic supports indeterminacy [5]. NCM provides more realistic results in the case of unsupervised data and indeterminate relations, and can be applied in various domains such as medicine, social issues, etc.


Let Di and Dj be two of the NCM's nodes. The causality of Di to Dj is denoted by the directed edge between Di and Dj, and every edge carries a weight from {−1, 0, 1, I}. Let w be the weight of an edge.
• If w = 0, then Dj is not affected by Di.
• If w = 1, then an increase in Di causes an increase in Dj, and a decrease in Di causes a decrease in Dj.
• If w = −1, then an increase in Di causes a decrease in Dj, and a decrease in Di causes an increase in Dj.
• If w = I, then the relation between Di and Dj is indeterminate.
Let D1, D2, D3, D4, ..., Dn be the nodes of an NCM. If the causal influence passes over the edges of a cycle and comes back to its starting node, the system is circular, and its equilibrium state is termed the hidden pattern. If the iteration reaches a fixed point, that is a unique vector in the equilibrium state: for example, if the NCM settles down with D1 and Dn in the ON state and D2 in the indeterminate state, i.e., (1, I, 0, ..., 1), this vector is termed a fixed point. If it settles down to a repeating sequence of vectors, the equilibrium is called a limit cycle (both update rules are sketched in code below).
C. Pestle analysis of food industry
The food industry is one of the most powerful business industries, as everyone needs food to survive. Pestle analysis is a planned framework used to evaluate the various factors affecting the food industry; it covers political, economic, social, technological, legal and environmental factors [6] (Table 2 and Fig. 2).
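To make the FCM and NCM update rules of Sects. A and B concrete, the following minimal sketch iterates a toy three-concept map to its hidden pattern. The weight matrix is purely illustrative, not one of the matrices derived later in this paper, and the treatment of indeterminacy follows one simple, assumed convention: a definite positive influence switches a concept ON, while a purely indeterminate influence sets it to I.

I = 'I'

W = [
    [0, 0.7, I],     # D1 -> D2 definite, D1 -> D3 indeterminate
    [0, 0,   0],
    [0, I,   0],     # D3 -> D2 indeterminate
]

def step(state, W):
    out = list(state)                       # ON / I concepts stay switched on
    for j in range(len(W)):
        definite, indet = 0.0, False
        for i in range(len(W)):
            w = W[i][j]
            if state[i] == 0 or w == 0:
                continue
            if w == I or state[i] == I:
                indet = True                # only indeterminate influence seen
            else:
                definite += w               # definite causal influence
        if definite > 0:
            out[j] = 1                      # thresholding: positive -> ON
        elif indet and out[j] == 0:
            out[j] = I
    return out

def hidden_pattern(state, W, max_iter=20):
    seen = [tuple(state)]
    for _ in range(max_iter):
        state = step(state, W)
        if tuple(state) in seen:            # fixed point or limit cycle reached
            return state
        seen.append(tuple(state))
    return state

print(hidden_pattern([1, 0, 0], W))         # -> [1, 1, 'I'], a fixed point

With no I entries in W, the same loop reduces to the ordinary FCM update used in the worked example of Sect. 2.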

2 Implementation
The food industry is a very well-known business industry. The main motive of this paper is to identify the relationships between the various factors and to understand the concept of indeterminacy in the food industry. A survey was conducted among 20 experts of the food industry, who gave their views on the various factors on a scale of 0 to 100.
Initial Matrix of Pestle (IMP)
The IMP matrix, which records the factors affecting the food industry, is shown in Table 3. The dimension of the matrix is 12 × 20, where 12 is the number of factors of the pestle analysis and 20 is the number of experts interviewed [7].
Fuzzified Matrix of Pestle (FZMP)
The FZMP converts the numerical values into fuzzy values that lie within [0, 1]. Threshold values αu = 80 and αi = 20 are used, corresponding to a deviation of ±20%: the experts mark a factor scoring between 80 and 100 as the most relevant and between 0 and 20 as the least relevant. This matrix is shown in Table 4 [7].
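The paper does not spell out the exact fuzzification rule, but the published matrices are consistent with a linear rescaling between the two thresholds, truncated to two decimals; a minimal sketch under that assumption:

def fuzzify(score, alpha_l=20, alpha_u=80):
    # Linear rescaling between the thresholds, clipped to [0, 1];
    # e.g. 50 -> 0.5, 70 -> 0.83, 90 -> 1.0, 10 -> 0.0.
    x = (score - alpha_l) / (alpha_u - alpha_l)
    x = min(max(x, 0.0), 1.0)
    return int(x * 100) / 100    # truncate like the published matrix (0.8333 -> 0.83)

row_p1 = [50, 60, 70, 90, 60, 70, 40, 60, 50, 20]
print([fuzzify(s) for s in row_p1])
# [0.5, 0.66, 0.83, 1.0, 0.66, 0.83, 0.33, 0.66, 0.5, 0.0]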


Table 2 Pestle factors
Factor | Sub-factor | Description
Political | Government regulations | Evaluate the degree to which the government may have an impact on the economy or a particular business
Economic | Growth rate | Used to calculate the percentage change in a variable
Economic | High labor cost | Because of higher and higher government expectations for minimum wages, the cost of hiring workers is increasing across all industries
Economic | Economic recession | Increases the risk of an economic depression
Economic | Taxation | The obligation of essential charges on individuals or entities by governments
Economic | Consumer spending | A rise in consumer spending helps the economy sustain its development
Social | Diet cautious | Determine which type of food is healthy and which type is not
Social | Healthy trends | Determine the association between food and its effect on our health
Technological | E-commerce | The exchange of different products and available services through the internet
Technological | Automation | By using technology in the food industry, the need for laborers is decreased, profit in businesses is improved, and the chance of human error is reduced
Legal | Safety regulations | Include safety standards for food businesses so that no regulations are ever breached and businesses remain within these protocols to avoid expensive lawsuits
Environmental | Impact of meat | The production of meat uses immense amounts of water, and meat manufacturing clears huge quantities of forest to generate space for farms; it is necessary to identify the effect of this condition in the long term

Strength of Relationships Matrix of Pestle (SRMP)
The strength of relationships matrix of pestle has dimension 12 × 12. Every column and row corresponds to a factor of the food industry, and every element X_pq of the matrix shows the relation between factor p and factor q, with values lying in the range between −1 and 1. This matrix is shown in Table 5 [7].
FCM Matrix
The FCM matrix is made with the help of expert opinion, as every factor may not be causally related to every other factor. The experts examine the SRMP matrix and convert it into the FCM matrix by considering only the valid relationships in the food industry [8]. The FCM matrix is shown in Table 6 (Fig. 3).


Fig. 2 Pestle analysis

Table 3 IMP matrix
    |  1  2  3  4  5  6  7  8  9 10 11 12 13 14  15 16 17 18 19  20
P1  | 50 60 70 90 60 70 40 60 50 20 60 60 70 40  60 70 90 60 60  80
E1  | 80 50 90 70 70 80 50 70 70 80 80 60 70 60  70 80 70 70 20  80
E2  | 90 30 90 60 70 80 50 40 60 90 90 50 80 50  80 80 80 10 90  70
E3  | 20 60 60 80 10 90 70 50 80 70 70 70 60 70  90 90 80 80 80  70
E4  | 40 40 70 90 60 70 60 40 90 70 60 40 50 50 100 20 30 90 70  50
E5  | 50 60 70 50 60 60 50 70 50 60 60 60 60 50  80 70 50 70 70  50
S1  | 50 60 80 70 50 50 80 80 50 70 70 70 70 60  20 80 40 60 60  70
S2  | 70 50 70 40 60 60 60 60 40 70 80 70 60 70  70 80 40 50 60  60
T1  | 50 60 70 90 40 70 20 50 60 60 90 80 80 80  70 20 60 40 70  70
T2  | 80 70 50 30 30 40 40 70 80 90 40 90 80 70  50 30 60 70 80  80
L1  | 10 80 40 60 70 30 30 50 70 20 30 70 70 90  60 40 70 60 90  90
M1  | 70 60 30 90 60 70 40 60 50 80 70 60 60 50  60 70 80 90 50 100

Working of FCM
Consider the initial vector A1 = (0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0), where the factor E2 is in the ON state. The response of this vector A1 on the system P represented in Table 6 is given below.

Table 4 FZMP matrix
    |   1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18   19   20
P1  | 0.5  0.66 0.83 1    0.66 0.83 0.33 0.66 0.5  0    0.66 0.66 0.83 0.33 0.66 0.83 1    0.66 0.66 1
E1  | 1    0.5  1    0.83 0.83 1    0.5  0.83 0.83 1    1    0.66 0.83 0.66 0.83 1    0.83 0.83 0    1
E2  | 1    0.16 1    0.66 0.83 1    0.5  0.33 0.66 1    1    0.5  1    0.5  1    1    1    0    1    0.83
E3  | 0    0.66 0.66 1    0    1    0.83 0.5  1    0.83 0.83 0.83 0.66 0.83 1    1    1    1    1    0.83
E4  | 0.33 0.33 0.83 1    0.66 0.83 0.66 0.33 1    0.83 0.66 0.33 0.5  0.5  1    0    0.16 1    0.83 0.5
E5  | 0.5  0.66 0.83 0.5  0.66 0.66 0.5  0.83 0.5  0.66 0.66 0.66 0.66 0.5  1    0.83 0.5  0.83 0.83 0.5
S1  | 0.5  0.66 1    0.83 0.5  0.5  1    1    0.5  0.83 0.83 0.83 0.83 0.66 0    1    0.33 0.66 0.66 0.83
S2  | 0.83 0.5  0.83 0.33 0.66 0.66 0.66 0.66 0.33 0.83 1    0.83 0.66 0.83 0.83 1    0.33 0.5  0.66 0.66
T1  | 0.5  0.66 0.83 1    0.33 0.83 0    0.5  0.66 0.66 1    1    1    1    0.83 0    0.66 0.33 0.83 0.83
T2  | 1    0.83 0.5  0.16 0.16 0.33 0.33 0.83 1    1    0.33 1    1    0.83 0.5  0.16 0.66 0.83 1    1
L1  | 0    1    0.33 0.66 0.83 0.16 0.16 0.5  0.83 0    0.16 0.83 0.83 1    0.66 0.33 0.83 0.66 1    1
M1  | 0.83 0.66 0.16 1    0.66 0.83 0.33 0.66 0.5  1    0.83 0.66 0.66 0.5  0.66 0.83 1    1    0.5  1


Table 5 SRMP matrix
    | P1   E1   E2   E3   E4   E5   S1   S2   T1   T2   L1   M1
P1  | 0    0.75 0.7  0.6  0.7  0.81 0.75 0.73 0.74 0.63 0.72 0.85
E1  | 0.75 0    0.8  0.7  0.65 0.75 0.74 0.76 0.67 0.65 0.59 0.8
E2  | 0.7  0.8  0    0.7  0.66 0.71 0.65 0.73 0.69 0.59 0.56 0.71
E3  | 0.6  0.7  0.7  0    0.73 0.73 0.74 0.71 0.71 0.65 0.65 0.74
E4  | 0.68 0.65 0.67 0.73 0    0.78 0.65 0.7  0.7  0.6  0.57 0.69
E5  | 0.81 0.75 0.71 0.73 0.78 0    0.78 0.83 0.73 0.67 0.64 0.78
S1  | 0.75 0.74 0.65 0.74 0.65 0.78 0    0.8  0.7  0.72 0.59 0.72
S2  | 0.73 0.76 0.73 0.71 0.7  0.83 0.8  0    0.73 0.66 0.59 0.75
T1  | 0.74 0.67 0.69 0.71 0.7  0.73 0.7  0.73 0    0.7  0.68 0.7
T2  | 0.63 0.65 0.59 0.65 0.6  0.67 0.72 0.66 0.7  0    0.71 0.67
L1  | 0.72 0.59 0.56 0.65 0.57 0.64 0.59 0.59 0.68 0.71 0    0.64
M1  | 0.85 0.8  0.71 0.74 0.69 0.78 0.72 0.75 0.7  0.67 0.64 0

Table 6 FCM matrix (a sparse 12 × 12 matrix in which each relationship validated by the experts keeps its SRMP weight and all other cells are zero). The nonzero entries recoverable from the state-vector working below include: P1→E2 = 0.7, P1→E5 = 0.81; E2→E4 = 0.66, E2→E5 = 0.71, E2→L1 = 0.56; E4→T1 = 0.7; E5→S1 = 0.78, E5→S2 = 0.83, E5→T1 = 0.73; S1→E5 = 0.78; S2→E5 = 0.83; T1→P1 = 0.74, T1→E2 = 0.69, T1→E5 = 0.73; L1→P1 = 0.72, L1→E5 = 0.64, L1→S1 = 0.59, L1→S2 = 0.59; M1→P1 = 0.85, M1→E5 = 0.78, M1→S1 = 0.72

A1 ∗ P = (0, 0, 0, 0, 0.66, 0.71, 0, 0, 0, 0, 0.56, 0) → (0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0) = A2
A2 ∗ P = (0.72, 0, 0, 0, 0, 0.64, 1.37, 1.42, 1.43, 0, 0, 0) → (1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0) = A3
A3 ∗ P = (0.74, 0, 1.39, 0, 0.7, 3.15, 0.78, 0.83, 0.73, 0, 0, 0) → (1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0) = A4
A4 ∗ P = (0.74, 0, 1.39, 0, 1.36, 3.86, 0.78, 0.83, 1.43, 0, 0.56, 0) → (1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0) = A5
A5 ∗ P = (1.46, 0, 1.39, 0, 1.36, 4.5, 1.37, 1.42, 1.43, 0, 0.56, 0) → (1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0) = A5 = A6


Fig. 3 FCM of pestle analysis


As A5 = A6, a fixed point is reached. It shows that high labor cost has no effect on growth rate, consumer spending, automation and impact of meat. A saturation point has been reached, which shows the relationship of one factor with all the other pestle factors.
Neutrosophic Cognitive Maps: FCM does not consider the indeterminacy existing in the real world, yet there are various relations in the pestle analysis where indeterminacy can be seen:
P1 (government regulations) → E5 (consumer spending): government regulations affect consumer spending, but they have no effect on the upper class; for example, someone who wants to eat outside food will order it irrespective of the regulation.
E2 (high labor cost) → E4 (taxation): high labor cost has an indeterminate relation with taxation, as the lower class will work below the average daily wage for their survival.
E2 (high labor cost) → E5 (consumer spending): high labor cost may not affect consumer spending directly, as the upper class does not care about it while living a luxurious life.
E5 (consumer spending) → S2 (healthy trends):


Consumer spending shares an indeterminate relation with healthy trends, as it varies from person to person with taste and lifestyle.
T2 (E-commerce) → P1 (government regulations): E-commerce may or may not directly affect government regulations, as there are cyber laws associated with it.
T1 (automation) → E2 (high labor cost): automation may or may not affect the labor cost, as robot automation is very costly and less efficient.
L1 (safety regulations) → E5 (consumer spending): safety regulations may or may not affect consumer spending, as online food delivery options are available.
M1 (impact of meat) → E5 (consumer spending): this factor may or may not affect consumer spending, as the upper class does not care about these things (Table 7 and Fig. 4).
Working of NCM
Consider the initial vector A1 = (0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0), where the factor E2 is in the ON state. The response of this vector A1 on the system R represented in Table 7 is given by
A1 ∗ R = (0, 0, 0, 0, I, I, 0, 0, 0, 0, 0.56, 0) → (0, 0, 0, 0, I, I, 0, 0, 0, 0, 1, 0) = A2

Table 7 NCM matrix (the FCM matrix of Table 6 with each of the eight indeterminate relationships identified above replaced by I: P1→E5 = I, E2→E4 = I, E2→E5 = I, E5→S2 = I, T2→P1 = I, T1→E2 = I, L1→E5 = I, M1→E5 = I; the remaining entries keep their Table 6 weights, e.g., E2→L1 = 0.56, E4→T1 = 0.7, L1→P1 = 0.72, M1→P1 = 0.85)


Fig. 4 NCM of pestle analysis

A2 ∗ R = (0.72, 0, 0, 0, I, I, 0.59, 0.59, 0.7, 0, 0, 0) → (1, 0, 0, 0, I, I, 1, 1, 1, 0, 0, 0) = A3
A3 ∗ R = (0, 0, 0.7, 0, I, I, 1, 1, 0, 0, 0, 0) → (0, 0, 1, 0, I, I, 1, 1, 0, 0, 0, 0) = A4
A4 ∗ R = (I, 0, 0.7, 0, I, I, 0.78, 0.83, 0.7, 0, 0.56, 0) → (I, 0, 1, 0, I, I, 1, 1, 1, 0, 1, 0) = A5
A5 ∗ R = (I, 0, 0.7, 0, I, I, 0.78, 0.83, 0.7, 0, 0.56, 0) → (I, 0, 1, 0, I, I, 1, 1, 1, 0, 1, 0) = A5 = A6
As A5 = A6, a fixed point is reached. It shows that high labor cost has an indeterminate effect on government regulations, taxation and consumer spending. A saturation point has been reached, which shows the relationship of one factor with all the other pestle factors.
Conclusion and Future Scope
Comparing the results of FCM and NCM, high labor cost has an indeterminate effect on government regulations, taxation and consumer spending: it may or may not affect these factors, and the same can be seen for other factors. NCM therefore gives more realistic results than FCM, since indeterminacy has a great impact in the real world. As an extension of this work, the number of parameters can be increased to obtain more accurate results, and the opinions of different experts can be combined to implement other neutrosophic techniques.


References
1. Libkin L (2016) SQL's three-valued logic and certain answers. ACM Trans Database Syst (TODS) 41(1):1–28
2. Rivieccio U (2008) Neutrosophic logics: prospects and problems. Fuzzy Sets Syst 159(14):1860–1868
3. Bakhtavar E, Valipour M, Yousefi S, Sadiq R, Hewage K (2021) Fuzzy cognitive maps in systems risk analysis: a comprehensive review. Complex Intell Syst 7(2):621–637
4. Ameli M, Esfandabadi ZS, Sadeghi S, Ranjbari M, Zanetti MC (2022) COVID-19 and sustainable development goals (SDGs): scenario analysis through fuzzy cognitive map modeling. Gondwana Res
5. Kandasamy WV, Smarandache F (2003) Fuzzy cognitive maps and neutrosophic cognitive maps. Inf Stud
6. Gul S, Gani KM, Govender I, Bux F (2021) Reclaimed wastewater as an ally to global freshwater sources: a PESTEL evaluation of the barriers. J Water Suppl Res Technol AQUA 70(2):123–137
7. Rodriguez-Repiso L, Setchi R, Salmeron JL (2007) Modelling IT projects success with fuzzy cognitive maps. Expert Syst Appl 32(2):543–559
8. Bhutani K, Kumar M, Garg G, Aggarwal S (2016) Assessing IT projects success with extended fuzzy cognitive maps & neutrosophic cognitive maps in comparison to fuzzy cognitive maps. Neutrosophic Sets Syst 12(1):9–19

Assistive Agricultural Technology—Soil Health and Suitable Crop Prediction
K. Naveen, Saksham Singh, Arihant Jain, Sushant Arora, and Madhulika Bhatia

Abstract Over 50% of the total workforce of the Indian population is engaged in agriculture and allied sectors. In a country like India, where crop yield is affected by global changes in weather conditions, crop yield prediction is crucial. Machine learning techniques are employed in different areas to solve problems, one of which is assistive agricultural practice, such as crop yield prediction and disease detection. Crop yield prediction is crucial as it gives farmers an estimate of their crops' future yield and earnings, and of how to boost their production. The different algorithms used to predict crop yield include the random forest algorithm, neural networks, and others. In this paper, we critically review a prediction model based on the generative adversarial networks (GANs) algorithm that predicts crop yield using data such as the previous year's yield, soil chemical composition, weather, and irrigation conditions.
Keywords Assistive agriculture · Soil health · Crop prediction · Machine learning · Generative adversarial networks

K. Naveen (B) · S. Singh · A. Jain · S. Arora · M. Bhatia
Amity School of Engineering and Technology, Amity University, Noida, UP, India
e-mail: [email protected]
S. Singh
e-mail: [email protected]
A. Jain
e-mail: [email protected]
S. Arora
e-mail: [email protected]
M. Bhatia
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_31


1 Introduction
According to the Annual Report of the Department of Agriculture, India, for the year 2020–21, agriculture accounts for over 50% of the Indian workforce, working either directly in agriculture or indirectly in its allied sectors, and contributes more than 17% of the gross value added. These numbers show the crucial role the agricultural sector plays in the lives of Indians [1, 2]. With the never-ending demand for more food, intelligent methods are being employed to maximize healthy production. The global change in climatic conditions has affected crop yield, resulting in crop failures across different regions and a diversity of crops. In 2021 alone, India lost over 5 million hectares of crop area to abnormal climatic conditions and natural calamities; to compensate for this loss, the yield of other areas needs to be maximized [3, 4] (Fig. 1). Machine learning techniques are being employed to create models that help with these tedious activities in agriculture, in applications such as plant disease detection, crop yield prediction, soil and seed quality enhancement, precision spraying, price forecasting, and many more [5, 6]. All these assistive agriculture technologies are used to improve a farmer's life and the chances of cultivating a healthy crop and maximizing yield [7, 8].

“Crop yield prediction helps a farmer know beforehand how much estimated crop output there will be and how it can be maximized [9], increasing earnings and profit.” It is a very tedious and complex process involving various parameters. Data mining techniques are used to extract information from these parameters and from previous years' data, and to create an accurate model that helps predict the crop yield [10, 11] (Fig. 2). A number of different parameters affect the crop yield, such as the soil composition, the fertilizers used, irrigation, weather conditions, seed quality, pesticides used, and the agricultural practices employed by the farmer; estimating the crop yield while considering all these characteristics is quite complex [12, 13]. The Government of India, under its Digital India scheme and National Mission for Sustainable Agriculture, has set objectives to make agriculture more productive, sustainable, and climate-resilient, and to employ artificial intelligence and machine learning to address issues related to agriculture [14]. “Soil health management is one of the most important features of crop yield prediction.” Under the Digital India scheme, farmers now have access to affordable smartphones and the Internet. Farmers submit their soil samples to one of over 3,500 testing laboratories set up by the GOI; the soil health card is uploaded to a portal built by the GOI, which the farmers can access. The soil health card contains details of the nutrient composition of the soil (Fig. 3). Using this soil health report, along with the previous year's yield data, irrigated crop area data, and weather data, machine learning algorithms can predict the crop yield and determine which fertilizers and nutrients need to be added to maximize the yield of the chosen crop. Researchers have employed a variety of machine learning algorithms, tweaking the feature selection attributes, to devise accurate models for a specific crop or specific conditions.


Fig. 1 Major crop sown in each state in India

Fig. 2 Yield of major crops in India


Fig. 3 Indian soil chemical distribution

The accuracies of all these models vary [15, 16], and none of them is 100% accurate; having historical crop data is crucial for every single one of these prediction models [17]. In this paper, we have reviewed the use of the generative adversarial networks algorithm to create a prediction model. “GANs have a generator and a discriminator [18]. The generator corrects its algorithm to create output as close as possible to the real data in our dataset so as to fool the discriminator, resulting in a model able to produce real-life-like results of crop yield [19].”

2 Literature Review
See Tables 1 and 2.

3 Methodology
(1) Collecting data from various sources: acquiring data from different Government of India sources, as illustrated in Fig. 4.
(2) Visualizing the dataset: distribution plots of the different parameters used in the model (Figs. 5 and 6).
(3) Data preprocessing: the input data must be made as clean and accurate as possible for the model to function properly. The techniques used include removing duplicate data and empty data nodes, and data cleaning has been performed to remove noisy data (Fig. 7). A minimal preprocessing sketch follows below.
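An illustrative preprocessing sketch in pandas; the file name and the column names such as "soil_ph" and "yield" are placeholders, not the actual dataset schema used in the reviewed papers.

import pandas as pd

df = pd.read_csv("soil_yield.csv")

df = df.drop_duplicates()                      # remove duplicate records
df = df.dropna(subset=["yield"])               # drop rows with no target value
df = df.fillna(df.median(numeric_only=True))   # impute remaining numeric gaps

# Simple noise filter: discard physically impossible soil pH readings.
df = df[(df["soil_ph"] > 0) & (df["soil_ph"] < 14)]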


Table 1 Comparison analysis
Authors | Year | A | B | C | D | Description
Kalimuthu et al. [1] | 2020 | ✖ | ✓ | ✖ | ✖ | Naïve Bayes Gaussian classifier and boosting algorithm were used in this paper to predict crops
Ranjani et al. [2] | 2021 | ✖ | ✖ | ✓ | ✖ | Random forest algorithm was proposed in this paper to predict crops
Suresh et al. [3] | 2021 | ✖ | ✖ | ✓ | ✖ | Over 70% accuracy was shown in this paper by using the random forest algorithm for crop prediction
Saini and Nagpal [4] | 2022 | ✓ | ✖ | ✖ | ✖ | The neural network approach was used in this paper to predict the Bajra crop in Haryana; a comparison between linear and nonlinear models was done
Chandraprabha and Dhanaraj [5] | 2021 | ✖ | ✓ | ✓ | ✖ | Random forest along with the Naïve Bayes algorithm was used to predict crops, and the drawbacks of the random forest algorithm were highlighted
Kang et al. [6] | 2020 | ✓ | ✖ | ✖ | ✓ | A generative adversarial network-based neural network algorithm was used in this paper for the prediction of vehicle trajectory
Li et al. [7] | 2018 | ✓ | ✖ | ✖ | ✓ | Historical cloud data was used to predict typhoon clouds using generative adversarial networks
Li et al. [8] | 2022 | ✓ | ✖ | ✖ | ✓ | A stock price prediction model was constructed based on generative adversarial networks from the perspective of stock text mining
Hsieh and Lin [9] | 2021 | ✓ | ✖ | ✖ | ✓ | This paper implements the generative adversarial networks algorithm to construct a model of housing price prediction
Förster et al. [10] | 2019 | ✓ | ✖ | ✖ | ✓ | This paper proposed an approach to applying generative adversarial networks to forecasting with hyperspectral images; the proposed model can improve itself to learn the spread of disease

Table 2 Keys description
Key | Description
A | Neural networks
B | Naïve Bayes algorithm
C | Random forest algorithm
D | Generative adversarial network

Fig. 4 Data sources used

Fig. 5 Dataset

(4) Feature engineering: the different filter-based feature selection attributes used across the reviewed papers are: area, yield, soil pH, climate, and area irrigated.
(5) Split data, training and testing: a 70–30% split was performed on the filtered dataset, for training and testing, respectively. Data is split to avoid overfitting: if the model were trained on 100% of the preprocessed data, it would fail when new data is fed into it.
(6) Training: the generative adversarial network (GAN) algorithm has been analyzed as an algorithm for crop prediction. GANs have two components, a generator and a discriminator.


Fig. 6 Distribution plots for various nutrients present in soil

Random data is fed into the generator, which attempts to generate real-life-like data; the discriminator then compares it and passes it as either real or fake. The generator continuously takes feedback from the discriminator and improves its algorithm until the discriminator passes the result as real data (Fig. 8).
(7) Scoring and evaluation: RMSE is the standard deviation of the residuals (prediction errors). Equation (1) defines the formula for RMSE used for model evaluation.

RMSE = sqrt( (1/n) Σ_{j=1}^{n} (y_j − ŷ_j)² )    (1)
where n is the number of non-missing data points, y_j is the actual time series, and ŷ_j is the estimated time series.

Fig. 7 Soil nutrients correlation heatmap
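Steps (6) and (7) can be sketched as follows. This is a minimal, self-contained illustration in PyTorch: the feature count, network sizes, learning rates and the stand-in "real" rows are all assumptions of this sketch, not the configuration of any model in the reviewed papers.

import torch
import torch.nn as nn

N_FEATURES, LATENT = 8, 16    # e.g. N, P, K, pH, rainfall, ..., yield

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(),
    nn.Linear(32, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_rows = torch.randn(256, N_FEATURES)   # stand-in for scaled real records

for step in range(200):
    # Discriminator: label real rows 1 and generated rows 0.
    z = torch.randn(64, LATENT)
    fake = generator(z).detach()
    real = real_rows[torch.randint(0, 256, (64,))]
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    z = torch.randn(64, LATENT)
    loss_g = bce(discriminator(generator(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

def rmse(y_true, y_pred):
    # Equation (1): root of the mean squared residual.
    return torch.sqrt(torch.mean((y_true - y_pred) ** 2))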

4 Conclusions and Results
A review of these papers helped in understanding the application of various ML algorithms for crop yield prediction, along with the drawbacks of these algorithms, and in understanding where these models are lacking. This also creates the opportunity for algorithms that have not been applied to agricultural production prediction before, such as the generative adversarial networks algorithm, which we have critically assessed in this paper. The use of the Naïve Bayes algorithm showed promising results; the same could not be concluded for the random forest algorithm, which gave inaccurate results for large datasets. The random forest algorithm has given consistent results for smaller datasets only, whereas Naïve Bayes has overall proven to be accurate and consistent. GANs can be used for generating tabular data and can take labeled datasets as input, instead of their traditional use in image and video prediction and generation applications. In future work, GAN accuracy can be compared with and incorporated into existing models to increase their efficiency and accuracy.


Fig. 8 Process flowchart

References
1. Tolani M, Bajpai A, Balodi A, Sunny, LW, Kovintavewat P (2022) Analysis & estimation of soil for crop prediction using decision tree and random forest regression methods. In: 2022 37th international technical conference on circuits/systems, computers and communications (ITC-CSCC), pp 752–755. https://doi.org/10.1109/ITC-CSCC55581.2022.9895017
2. Aggarwal S, Bhatia M, Madaan R, Pandey HM (2021) SVM prediction model interface for plant contaminates. Traitement du Signal 38(4):1023–1032. https://doi.org/10.18280/ts.380412
3. Paul M, Vishwakarma SK, Verma A (2015) Analysis of soil behaviour and prediction of crop yield using data mining approach. Int Conf Comput Intell Commun Netw (CICN) 2015:766–771. https://doi.org/10.1109/CICN.2015.156
4. Ayyasamy S, Eswaran S, Manikandan B, Mithun Solomon SP, Nirmal Kumar S (2020) IoT based agri soil maintenance through micro-nutrients and protection of crops from excess water. In: 2020 fourth international conference on computing methodologies and communication (ICCMC), pp 404–409. https://doi.org/10.1109/ICCMC48092.2020.ICCMC-00076
5. Kalimuthu M, Vaishnavi P, Kishore M (2020) Crop prediction using machine learning. Third Int Conf Smart Syst Invent Technol (ICSSIT) 2020:926–932. https://doi.org/10.1109/ICSSIT48917.2020.9214190
6. Suresh N et al (2021) Crop yield prediction using random forest algorithm. In: 2021 7th international conference on advanced computing and communication systems (ICACCS), pp 279–282. https://doi.org/10.1109/ICACCS51430.2021.9441871


7. Aggarwal S et al (2021) IOP Conf Ser: Mater Sci Eng 1022:012118. https://doi.org/10.1088/1757-899X/1022/1/012118
8. Sunil GL, Nagaveni V, Shruthi U (2022) A review on prediction of crop yield using machine learning techniques. In: 2022 IEEE region 10 symposium (TENSYMP), pp 1–5. https://doi.org/10.1109/TENSYMP54529.2022.9864482
9. Ranjani J, Kalaiselvi VKG, Sheela A, DSD, Janaki G (2021) Crop yield prediction using machine learning algorithm. In: 2021 4th international conference on computing and communications technologies (ICCCT), pp 611–616. https://doi.org/10.1109/ICCCT53315.2021.9711853
10. Chandraprabha M, Dhanaraj RK (2021) Soil based prediction for crop yield using predictive analytics. In: 2021 3rd international conference on advances in computing, communication control and networking (ICAC3N), pp 265–270. https://doi.org/10.1109/ICAC3N53548.2021.9725758
11. Goyal S, Bhatia M, Urvashi KP (2022) Mining plants features for disease detection tensor flow: a boon to agriculture. In: Rathore VS, Sharma SC, Tavares JMR, Moreira C, Surendiran B (eds) Rising threats in expert applications and solutions. Lecture notes in networks and systems, vol 434. Springer, Singapore. https://doi.org/10.1007/978-981-19-1122-4_39
12. Saini P, Nagpal B (2022) Efficient crop yield prediction of kharif crop using deep neural network. Int Conf Comput Intell Sust Eng Solut (CISES) 2022:376–380. https://doi.org/10.1109/CISES54857.2022.9844369
13. Vijayabaskar PS, Sreemathi R, Keertanaa E (2017) Crop prediction using predictive analytics. In: 2017 international conference on computation of power, energy information and communication (ICCPEIC), pp 370–373. https://doi.org/10.1109/ICCPEIC.2017.8290395
14. Medar R, Rajpurohit VS, Shweta S (2019) Crop yield prediction using machine learning techniques. In: 2019 IEEE 5th international conference for convergence in technology (I2CT), pp 1–5. https://doi.org/10.1109/I2CT45611.2019.9033611
15. Kang L-W, Hsu C-C, Wang I-S, Liu T-L, Chen S-Y, Chang C-Y (2020) Vehicle trajectory prediction based on social generative adversarial network for self-driving car applications. In: 2020 international symposium on computer, consumer and control (IS3C), pp 489–492. https://doi.org/10.1109/IS3C50286.2020.00133
16. Li H, Yu X, Ren P (2018) Typhoon cloud prediction via generative adversarial networks. In: IGARSS 2018–2018 IEEE international geoscience and remote sensing symposium, pp 3023–3026. https://doi.org/10.1109/IGARSS.2018.8518069
17. Li Y, Cheng D, Huang X, Li C (2022) Stock price prediction based on generative adversarial network. In: 2022 international conference on big data, information and computer network (BDICN), pp 637–641. https://doi.org/10.1109/BDICN55575.2022.00122
18. Hsieh C-F, Lin T-C (2021) Housing price prediction by using generative adversarial networks. Int Conf Technol Appl Artif Intell (TAAI) 2021:49–53. https://doi.org/10.1109/TAAI54685.2021.00018
19. Förster A, Behley J, Behmann J, Roscher R (2019) Hyperspectral plant disease forecasting using generative adversarial networks. In: IGARSS 2019–2019 IEEE international geoscience and remote sensing symposium, pp 1793–1796. https://doi.org/10.1109/IGARSS.2019.8898749

Quantum Key Distribution for Underwater Wireless Sensor Network: A Preliminary Survey of the State-of-the-Art
Pooja Ashok Shelar, Parikshit Narendra Mahalle, and Gitanjali Rahul Shinde

Abstract Oceans, optics, and quanta have a deep relationship. The wide range of underwater applications in the oceans has led to the development of underwater wireless optical communication. "How does one secure an underwater wireless optical channel?" then became the question, and the best suited answer is underwater quantum cryptography. Underwater applications like enemy ship detection, submarine communication, surveillance of sea territory, and many others demand secured transmission. The research community has therefore thought of securing sensed data using quantum keys generated according to the laws of quantum mechanics. Underwater quantum key distribution (QKD) technology is used to enhance the security of data transmitted through the underwater wireless sensor network (UWSN). The complex composition and special optical properties of seawater have made quantum key distribution a feasible option for providing hard data security below the water. It provides security guaranteed by the laws of physics rather than by hard mathematical problems (like integer factorization) and assumptions about resource requirements. Quantum key distribution generates and distributes symmetric cryptographic keys between two geographically separated users using the principles of quantum mechanics, providing unconditional security, that is, security which is not only hard but impossible to break. The paper surveys the state of the art in underwater quantum cryptography, starting with quantum mechanics for UWSN and ending with the design issues and challenges observed in the state of the art. This work aims to accomplish two motives: firstly, to introduce the challenging field of quantum cryptography for underwater environments; and secondly, to motivate cryptanalysts to work in this somewhat ignored domain of underwater quantum cryptography.
Keywords Underwater wireless sensor network · Underwater wireless optical communication · Underwater quantum cryptography · Quantum key distribution

P. A. Shelar (B)
Smt. Kashibai Navale College of Engineering, Wadgaon, Pune 411046, India
e-mail: [email protected]
P. N. Mahalle · G. R. Shinde
Vishwakarma Institute of Information Technology, Kondhwa, Pune 411048, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_32



1 Introduction
The sea is an attractive world of enigma and curiosity to mankind. Wireless sensor networks in underwater environments [1] are the basic infrastructure for various underwater applications such as tsunami detection, underwater mine detection, underwater territory surveillance, enemy submarine detection, oil spill detection, and many more. This long list of underwater applications has attracted the interest of many scientists and industrialists. Underwater wireless communication is important in the modern era of communication, as it is the way of passing information among the spatially distributed underwater sensors and randomly moving autonomous underwater vehicles. The most ancient and commonly used means of underwater communication is acoustic technology, and its drawbacks are obvious: low bandwidth, multipath effects, Doppler spread, a high bit error rate, and a high probability of packet loss. Taken together, these make acoustic communication the most vulnerable underwater communication medium. To overcome these hurdles, the research community came up with the fascinating option of underwater wireless optical communication (UWOC) [2]. In the modern era, optical communication has become a favorite option for underwater communication, as it provides higher bandwidth and low propagation delays. Most importantly, the optical channel is naturally more secure than the acoustic channel, because its security rests on the strong pillars of quantum mechanics. A bit of data in optical communication is represented by a qubit (quantum bit), or single-bit photon, taking one of two states (0 or 1). A qubit can also occupy a mixed state in which it is both 1 and 0 at the same time. Two quantum bits together can take the four values (0,0), (0,1), (1,0), (1,1); such bits are also known as two-bit photons. This is what makes quantum computing so powerful: n qubits can represent 2^n values, because one qubit can take the value of one bit, two qubits the values of two bits, and so on. Photons are the information carriers, and each photon is governed by the physics of quantum mechanics, whose pillars are superposition, no-cloning, interference, entanglement, and the uncertainty principle. A lot of research work has been undertaken on the transmission distance and data rates of UWOC. The light sources in UWOC are blindly considered secure due to their directivity and impermeability. But scattering becomes severe as the propagation distance of the light beam increases and the water quality deteriorates, which gradually diffuses the optical beam. Therefore, UWOC applications compromise their data security by giving an opportunity for eavesdropping. There is a tremendous number of security solutions available for acoustic communication, but the security aspect of the underwater optical communication channel is underestimated. In the state of the art, the first study of security weaknesses in UWOC was conducted by Monte Carlo simulation [3].


Cryptography is a game of hide and seek: the data travels to its destination by hiding itself from unauthorized parties. To achieve this goal, an algorithm is developed which combines the original message with a key. The resulting message is known as ciphertext, and this process is known as encryption. Accordingly, the security algorithms developed for UWOC are known as underwater quantum cryptography (UQC), and the process of generating and distributing the keys used to encrypt quantum data throughout the underwater wireless optical sensor network is known as quantum key distribution (QKD).

1.1 Motivation
The majority of underwater applications use acoustics as their primary communication medium. The reason for its wide usage is the distance of up to 100 km over which an acoustic wave travels in an underwater environment, compared with electromagnetic and optical waves. But the long-distance link suffers from packet loss and compromises data security. In the literature review, we observed that the acoustic communication channel has been attacked using jamming, wormhole, blackhole, Sybil, sinkhole, and many other attacks. The strategies and cryptographic schemes developed to overcome this list of attacks have various drawbacks: all of them rely on assumptions about the attacker's computational power and are breakable by quantum machines. The cryptographic schemes based on symmetric and asymmetric keys impose the following conditions for generating a perfect secret key (these are exactly the requirements of the classical one-time pad, sketched below):
(I) The key must be as long as the original message.
(II) The key should never be reused, in whole or in part.
(III) The key must be kept secure.
(IV) The key must be a purely random sequence.
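A minimal sketch of a one-time pad satisfying conditions (I)-(IV); the message and key here are illustrative only.

import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Condition (I): the key must be exactly as long as the message.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"submarine at grid 7"
key = secrets.token_bytes(len(msg))      # condition (IV): truly random key
cipher = otp_encrypt(msg, key)
assert otp_encrypt(cipher, key) == msg   # XOR with the same key decrypts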

The study raised the questions, "How can all underwater security requirements be satisfied?" and "How can the attacker be caught before the secret data is shared?". These questions motivated us to think about underwater quantum cryptography and underwater key distribution. Even though the optical communication channel travels a comparatively short distance of up to 10–50 km in an underwater environment, together with QKD it would make the safest UWSN.

1.2 Quantum Mechanics for Underwater Wireless Optical Sensor Network
The roots of cryptography date from 400 B.C.; it later became part of information theory and complex mathematical calculation. Classically, cryptography can be defined as "a mathematical system transforming data into an unreadable format which is difficult, but not impossible, to break". Cryptography is divided into two parts: (i) symmetric key cryptography, where the same key is used for encryption and decryption, and (ii) asymmetric key cryptography, where different keys are used for encryption and decryption.


There are various classical cryptographic algorithms, such as RSA [4], the one-time pad, DES [5], AES [5], and many more. DES has already been broken by classical computers, while AES, which is in use by NASA, has not been cracked to date. But if we read the above definition carefully, it suggests that traditional cryptographic algorithms can, in principle, be broken. There is something very scary in that! The reason for this insecurity is the emergence of super-powerful quantum computers, which are on their way into the real world. Quantum computers are 10,000 times faster than classical computers. Therefore, even though AES generates a 256-bit key, the cyber-attack named "brute force attack" could try 2^256 possible keys to break even highly secure AES. A classical computer would take many years to enumerate 2^256 possible keys, but a quantum computer could do so in a fraction of that time. This threat forced the research community to think of a new cryptographic strategy that would remain protected against quantum computers, and it gave birth to 'quantum cryptography', which is based on quantum mechanics. The very first quantum cryptography algorithm was developed by Bennett and Brassard in 1984 and is popularly known as the BB84 QKD protocol; from there the show began, and it is still going on. Before starting the journey into quantum cryptography, it is essential to understand the pillars of quantum key distribution, i.e., of quantum cryptography.
Photons: A photon represents a bit of data that has one of two states, designated 0 and 1; a single bit of this form is known as a qubit or single-bit photon. Light particles (photons) can take four directions at the same time (0, 1, +45, −45), owing to the spinning property of photons.
Qubits: Quantum computing is done with qubits, which here are photons, the carriers of quantum information. As shown in Fig. 1, a single qubit is represented as a two-dimensional vector in a complex vector space.
Fig. 1 Diagrammatic representation of two-dimensional complex vector space


Denote the photon as a vector in an orthogonal basis, say |Ψ>. We can also express |Ψ> in terms of any other orthogonal basis in that plane; rotating |0> and |1> by 45° defines the vectors '+' and '−'. The quantum key distribution protocols make use of four non-orthogonal polarization states, classified into rectilinear and diagonal states. These states are considered photon vector bases, and their corresponding binary values are called qubits. The conventions used are listed in Table 1.
Measurement: Measuring is projecting the vector (photon) onto your basis. Measurement can be used to generate truly random numbers, since a measurement in a mismatched basis gives an equal probability of obtaining 0 and 1: if |Ψ> is actually encoded in the |+>, |−> basis and you measure it in the |0>, |1> basis, then you have a 0.5 probability of getting 0 and a 0.5 probability of getting 1.
Superposition: A qubit can also occupy a mixed state, where it is both 1 and 0 at the same time. If we observe two qubits, they can take four values at the same time, e.g., (0,0), (0,1), (1,0), (1,1). The simple act of observing a qubit causes it to collapse out of its superposition and fall back to a 0 or a 1.
Uncertainty: To understand this principle, imagine a typical symmetric-key cryptographic scenario with Alice, Bob, and Eve. The first step is the generation of the same key by Alice and Bob; in quantum cryptography, symmetric keys are generated using quantum mechanics. Second, Alice encrypts the message using the key and forwards it to Bob. Third, Bob decrypts the message using the same key. In this scenario, the eavesdropper Eve continuously tries to attack the keys and the encrypted data shared between Alice and Bob. In traditional cryptographic algorithms there is no way to detect the presence of Eve before transmitting data, but in quantum cryptography it is possible, thanks to the uncertainty principle. Say Alice prepares a state |Ψ> = α|0> + β|1> and gives it to Bob. To make a measurement, Bob randomly chooses one of the bases (0/1 or +/−). Suppose Bob measures |Ψ> in the |0>, |1> basis: at the moment of measurement, |Ψ> collapses either onto the |0> vector or onto the |1> vector, so the value of |Ψ> becomes 0 or 1 as soon as the measurement is made. It collapses to |0> with probability |α|² and to |1> with probability |β|²; this is the uncertainty principle. A small simulation of this behavior follows below.
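A numerical illustration of basis-dependent measurement; this is a sketch using NumPy, where the state and basis vectors are the standard ones and the sampling convention is an assumption of this illustration.

import numpy as np

rng = np.random.default_rng()

# Diagonal states |+>, |-> expressed in the rectilinear basis |0>, |1>.
plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

def measure(state, basis):
    # Born rule: the squared overlap with a basis vector gives the
    # probability of that outcome; the state collapses to the result.
    p0 = abs(basis[0] @ state) ** 2
    return 0 if rng.random() < p0 else 1

rect = (np.array([1, 0]), np.array([0, 1]))   # |0>, |1>
diag = (plus, minus)                          # |+>, |->

# Measuring |+> in the wrong (rectilinear) basis: 0 and 1 each with p = 0.5.
outcomes = [measure(plus, rect) for _ in range(1000)]
print(sum(outcomes) / 1000)   # ~0.5

# Measuring |+> in the matching diagonal basis always yields outcome 0.
print(measure(plus, diag))    # 0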

Table 1 Conventions used for photon states
Polarization state | Vector bases | Symbol | Degrees | Qubit
Rectilinear state | + | α | 0° | 0
Rectilinear state | + | β | 90° | 1
Diagonal state | X | γ | 45° | 0
Diagonal state | X | δ | 135° | 1


Entanglement: Essentially, two qubits that are entangled become correlated and remain so even after they have been separated: when you observe one, you learn something about the other, no matter how far away it is. For our understanding, it is enough to know that the qubits in quantum computers can become entangled, which allows researchers to set up problems in a very different way. Whenever you adjust the polarization of the first photon, the second reflects it, the second is reflected by the third, and so on, in a cascading effect. The entanglement property allows long-distance quantum communication. Entanglement-based QKD is the only route toward avoiding vulnerabilities that may be introduced by devices obtained from manufacturers; the concept of device-independent QKD is therefore only possible thanks to entanglement.
No-Cloning Theorem: Eve cannot copy the information about the key sent by Alice; to copy it, she would need to perform an operation that disturbs the system, and this disturbance can be detected by Bob. The name itself suggests the meaning of the theorem: identical copies of an unknown quantum state cannot be created. The theorem rests on the fact that all quantum operations must be linear and unitary transformations on the state. The uncertainty principle is in fact protected by the no-cloning theorem: if Eve were successful in cloning an unknown state, she could make multiple copies of her choice and measure each copy with arbitrary precision, thereby collapsing the uncertainty principle. This is prevented by no-cloning.
Quantum and Classical Channel: In underwater communication, the classical channel is generally an acoustic communication channel. Only classical data is transmitted through the acoustic channel, as it is unable to carry quantum data bits. In UWSN, a combination of an acoustic and a quantum channel is most useful; in underwater quantum cryptography protocols, a classical channel is used alongside the quantum channel for QKD.

2 Related Work
Quantum cryptography originated in the US and Europe. Quantum conjugate coding was introduced first, in the 1970s, by Stephen Wiesner in New York. Then, in 1984, building on Wiesner's work, Bennett and Brassard proposed a secured communication protocol known as BB84 [6]. In 1990, a Ph.D. student at Oxford University, Artur Ekert, developed a security protocol based on quantum entanglement. Since then, quantum cryptography has evolved and is now commercially used in various fields such as satellite, terrestrial, and underwater communications. Using quantum cryptography to secure underwater wireless communication channels is still something that is less explored. Initial experimental results showed that entangled and polarization states can be well preserved in a 3.3 m seawater channel [7]. Later, experiments in a 55 m underwater channel on the transmission of high-dimensional twisted photons and of photonic polarization states proved the feasibility of quantum cryptography in the underwater environment [8].


Fig. 2 Taxonomy of underwater QKD protocols

As shown in Fig. 2, the quantum cryptography protocols are divided into two types, based on two features of quantum mechanics: uncertainty and entanglement. In uncertainty-based QC the bits are encoded using photon polarization, while in the second type the bits are encoded using entangled photon states (Fig. 2).
Bennett-Brassard-84 (BB84) Protocol [6]: The working of BB84 is based on the conventions shown in Table 1. To understand the working, consider a single-photon source and our characters Alice and Bob. First, Alice encodes the photons by randomly selecting one of the four vector bases (α, β, γ, δ). Second, Bob decodes those photons by making his own random selection of bases and then measuring. These random selections may lead to incorrect outcomes. To resolve this issue, Alice and Bob use the classical channel to share their randomly selected bases, so that they now know where their choices matched and where they did not; this is called sharing of basis choice. Finally, Alice and Bob hold a symmetric key. For a clear understanding, Figs. 3 and 4 show the photonic representation of the BB84 protocol. The algorithmic steps of the BB84 protocol are given below.


Fig. 3 Photonic representation of BB84 protocol without eavesdropping
Fig. 4 Photonic representation of BB84 protocol with eavesdropping

BB84 Algorithm
1: Alice chooses two random bit strings k and b, each of n bits, where k holds the bits to be encoded and b the choice of bases used to encode them.
2: Alice sends the encoded n qubits to Bob.
3: Bob chooses one random bit string b1, his choice of bases used to perform the measurements.
4: A table is created with two fields: ki (the outcome of Bob's measurement) and bi (the corresponding basis of Alice).
5: If bi is not equal to b1i, the key bits corresponding to those basis bits are discarded; if bi is equal to b1i, ki is included in the final key.
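A minimal, idealized simulation of the sifting logic above; it assumes no channel noise, no eavesdropper, and a perfect single-photon source, so it is a sketch of the protocol's bookkeeping rather than of its physics.

import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def bb84_sift(n=32):
    # Alice's raw key bits k and basis choices b (0 = rectilinear, 1 = diagonal).
    k, b = random_bits(n), random_bits(n)
    # Bob measures each incoming photon in his own random basis b1.
    b1 = random_bits(n)
    # When the bases match, Bob's outcome equals Alice's bit; otherwise the
    # outcome is random and that position is discarded during sifting.
    bob = [k[i] if b[i] == b1[i] else secrets.randbelow(2) for i in range(n)]
    sifted = [bob[i] for i in range(n) if b[i] == b1[i]]
    return sifted

key = bb84_sift()
print(len(key), key)   # on average, half of the positions survive sifting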

Six-State Protocol [9]: This protocol is defined using three bases instead of the two commonly used ones; the third basis, denoted |−0>, |−1>, is added to the previous two. The six-state protocol is comparatively more secure than BB84, because an eavesdropper who tries to collect data by performing a unitary transformation on a qubit in transit obtains less information for a fixed disturbance of Bob's qubit. If the sender increases the set of inputs, it becomes harder for the eavesdropper to learn about the data in transit.


Bennett 92 (B92) [10]: In 1992, Charles Bennett proposed a simpler version of the QKD protocol, named B92. This protocol is also based on the uncertainty principle and is popularly known as a prepare-and-measure protocol. It is a modified version of BB84 that uses two basis states instead of four. The secret key is distributed between sender and receiver using only those two states: the sender prepares a qubit in one of the quantum states |ψ0> and |ψ1> and assigns the value 0 or 1 to the respective state. At the receiving end, after receiving the quantum states, the receiver measures in randomly chosen bases so as to retrieve the sender's qubit.
Decoy-State BB84 [11]: In practice, the photon sources of most QKD systems emit multiple photons, making the protocols susceptible to photon-number-splitting (PNS) attacks. To overcome such attacks, the decoy-state protocol transmits qubits at randomly chosen intensity levels (one signal state and one or more decoy states). The sender is then responsible for announcing the intensity level chosen for the transmission of each bit.
Ekert91 [12]: The E91 QKD scheme uses entangled photons, which can be created by Alice, by Bob, by a separate source, or even by Eve. There are three rules of entanglement. First, an entangled photon pair can be generated in such a way that when Alice and Bob measure the polarization of their particles, they get exactly opposite answers. The second rule is quantum non-locality: the measurement outcomes are correlated, so that with fifty percent probability Alice can, from her own measurement, correctly deduce Bob's measurement. Third, any attack by Eve weakens the correlation in such a way that it is easily detected by Alice and Bob.
In practice, QKD protocols are a combination of quantum mechanics and classical cryptographic techniques, because concepts from classical information theory for correcting transmission errors, commonly known as key distillation, are used in quantum cryptography. It is important to correct transmission errors whether they arise from an eavesdropper's activity or from system imperfections. The distributed quantum key must satisfy two conditions, (i) commonality and (ii) secrecy, and secret key distillation plays an important role in achieving them [13]. In underwater wireless optical communication, quantum key distribution is achieved using commercially available optical sensors [14]. Furthermore, there is no longer a restriction on the quantum key size: a long secret key can be generated in quantum key distribution, so Shannon's condition on the key size can now be met, and the generated long quantum key can then be used with classical cryptographic techniques to encrypt a secret message of the same length. The big keys are transmitted confidentially using the principle of uncertainty. To our knowledge, the state-of-the-art ciphers are not breakable, but their short key lengths, unfortunately, cannot be guaranteed in the long run. It is difficult to predict the future, so looking at the advancements in quantum and underwater optical technology, it is a clever idea to accept quantum key distribution as the best solution currently available in the world of cryptography.


Fig. 5 Representation of quantum (Q) and classical (C) channels in quantum key distribution

accept quantum key distribution as the best possible solution currently available in the world of cryptography. The most remarkable point of quantum key distribution is that its security is guaranteed by the laws of quantum mechanics, and at the same time it ensures confidentiality for a long duration. Secret messages encrypted today with classical cryptographic methods may become easily decryptable in the coming years, whereas quantum cryptography prevents the attacker from intercepting an encrypted message, since a key generated with quantum cryptographic techniques has two properties: it is (i) non-alterable and (ii) non-copiable.

Note: Quantum cryptography and quantum key distribution are often treated as synonyms but are actually different terms. Quantum cryptography is a broad term that covers different applications such as quantum secret sharing, quantum key distribution, and many more. Within the scope of this paper, we therefore use the term quantum key distribution.

All the key distribution protocols discussed above are similar in that both quantum and classical information is transmitted between Alice and Bob to generate the quantum secret key. Hence, quantum key distribution requires two channels: one for photon transmission, i.e., the optical/quantum channel, and the other for the transmission of classical information, i.e., the public channel. As shown in Fig. 5, Alice and Bob use the public channel to transmit data that is not sensitive, whereas they use the quantum channel to achieve message integrity. The term "guaranteed security" is used for quantum key distribution protocols for the following reasons: (I) quantum key distribution can be implemented as device-independent QKD [13], so that an attacker cannot access the network devices; (II) the combination of quantum and optical channels guarantees security; (III) the QKD protocols use random numbers for key generation; and (IV) the authenticated classical channel is built using unconditionally secure cryptographic schemes. It can therefore be said that quantum key distribution protocols offer a higher degree of security than classical protocols. Quantum key distribution over optical fibers [15] has proven more mature and has even been commercialized. In 2007, free-space QKD over a distance of 144 km was demonstrated in real time in the Canary Islands [4], with a key generation speed of 12.8 bits/s. This proved that quantum key distribution is possible between ground stations and satellites. In spite of so much proven practical and theoretical research in quantum


key distribution, little attention has been paid to QKD in underwater wireless optical communication.

3 Design Issues and Challenges

Energy: QKD algorithms and underwater quantum cryptography protocols should be designed to consume minimal node energy, since natural recharging of underwater nodes using sunlight is impossible, and the second option of recharging via ROVs is a very costly affair.

Memory: Similarly, underwater optical sensors have a limited amount of memory, and in the underwater environment, reformatting memory is next to impossible, which causes failures in data transmission. The design of underwater QKD algorithms should therefore be concise and memory friendly.

UWSN Architectural Problems: Due to the open nature of UWSN, there is a high chance that an eavesdropper can easily inject a corrupted node and gain complete knowledge of the original data, or corrupt the communication channel. The second major architectural problem is that underwater mobile nodes drift with ocean currents; this mobility can break connectivity among nodes and cause errors in data transmission. Data transmission via a quantum communication channel is therefore itself the first challenge.

Scattering: Scattering is the biggest challenge for UWOC, and hence also for underwater quantum cryptography. With increasing distance, a light beam spreads much faster than an acoustic wave, causing data loss or propagation delay over long-distance links. In this scenario, it becomes hard to distinguish whether data loss is due to attackers or to the natural scattering phenomenon.

Unpredictable and Unsafe Underwater Environment: The main reason for deploying underwater optical sensors with embedded quantum cryptographic algorithms is the security of underwater applications, mainly naval ones. But due to dynamic environmental conditions, it is difficult to monitor the underwater networks and devices continuously, which gives Eve an opportunity to hack the sensors.

Cost: Optical sensors are far more costly than acoustic sensors, and the maintenance cost of UWOC is also on the higher side.

Therefore, it is very important to develop underwater quantum cryptographic algorithms with all the above-stated design issues and challenges in mind.


4 Innovative Strategy

After an extensive literature survey, we came up with an innovative strategy: an improvised version of the BB84 protocol. A prototype of the enhanced algorithm is given below for reference. The basic idea is to convert an asymmetric-key cryptographic exchange into a symmetric-key one. The algorithm claims to be more computationally efficient and suitable for the underwater environment in terms of energy and memory.

Enhanced BB84
1: Alice chooses two random bit strings 'k' and 'b', each of n bits.
   k = the bits to be encoded.
   b = Alice's public key (the choice of bases used to encode the bits).
2: Alice sends the encoded n qubits to Bob.
3: Bob chooses one random bit string 'b1',
   b1 = the choice of bases used to perform the measurements.
4: Create a table with two fields: ki (the outcome of Bob's measurement) and bi (the corresponding basis of Alice).
5: if (bi != b1) {
       discard the key bits corresponding to the b1 basis bits
   } else {
       ki = k, where ki is the private key shared by Alice and Bob
   }
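To make the sifting logic concrete, below is a minimal classical simulation of steps 1-5. It models only the basis-agreement logic (a measurement in a non-matching basis yields a random outcome) rather than real qubit transmission, and every identifier (random_bits, measure, enhanced_bb84, n_bits) is illustrative rather than taken from the paper.

    import secrets

    def random_bits(n):
        # n random bits from a cryptographically secure source
        return [secrets.randbelow(2) for _ in range(n)]

    def measure(bit, prep_basis, meas_basis):
        # The outcome is reliable only when preparation and measurement
        # bases coincide; otherwise it is uniformly random.
        return bit if prep_basis == meas_basis else secrets.randbelow(2)

    def enhanced_bb84(n_bits=32):
        k = random_bits(n_bits)    # step 1: Alice's raw key bits
        b = random_bits(n_bits)    # step 1: Alice's basis choices
        b1 = random_bits(n_bits)   # step 3: Bob's measurement bases
        outcomes = [measure(k[i], b[i], b1[i]) for i in range(n_bits)]
        # step 5: keep only the positions where the bases agree
        return [outcomes[i] for i in range(n_bits) if b[i] == b1[i]]

    print("sifted key:", enhanced_bb84())

In a real run, Alice and Bob would compare b and b1 over the authenticated public channel and then apply key distillation to the sifted bits, as discussed above.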

5 Conclusion

The state of the art shows that in underwater wireless optical communication, if the optical link distance increases or the water is turbid, there is a higher possibility of information leakage: a more severe scattering effect arises, which in turn gives an attacker the opportunity to intercept the information passing through the light path. This scenario poses a very big threat to the data security of underwater wireless optical communication channels. Various official government underwater projects require secure data transmission, and for them an underwater quantum communication channel combined with quantum key distribution is a boon. In this work, our main motive was to show the importance of underwater quantum key distribution and how to accomplish it, and also to encourage cryptographers to develop innovative solutions in the field of underwater quantum key distribution protocols.


References

1. Awan K et al (2019) Underwater wireless sensor networks: a review of recent issues and challenges
2. Kaushal H, Kaddoum G (2016) Underwater optical wireless communication. IEEE
3. Kong M et al (2017) Security weaknesses of underwater wireless optical communication. Opt Express
4. Wardlaw WP (2000) The RSA public key cryptosystem. In: Coding theory and cryptography. Springer, Berlin, Heidelberg
5. de Oliveira AP, Moreno ED et al (2007) Impact of the DES and AES algorithms on PERS (a specific processor for sensor networks). IEEE
6. Bennett CH, Brassard G (1984) Quantum cryptography: public key distribution and coin tossing. In: Proceedings of the IEEE international conference on computers, systems and signal processing, Bangalore
7. Feng Z et al (2021) Experimental underwater quantum key distribution. Opt Express
8. Chen Y et al (2020) Underwater transmission of high-dimensional twisted photons over 55 meters. PhotoniX, Springer
9. Lo H-K (2001) Proof of unconditional security of six-state quantum key distribution scheme. Quantum Inf Comput
10. Bennett CH (1992) Quantum cryptography using any two nonorthogonal states. Phys Rev Lett
11. Dong S (2022) Practical underwater quantum key distribution based on decoy-state BB84 protocol. Optica Publishing Group
12. Ekert AK (1991) Quantum cryptography based on Bell's theorem. Am Phys Soc
13. Van Assche G (2006) Quantum cryptography and secret-key distillation. Cambridge University Press, New York
14. Feng Z, Li S, Xu Z (2021) Experimental underwater quantum key distribution. Opt Express
15. Franson JD, Ilves H (1994) Quantum cryptography using optical fibers. Optica

Innovations and Insights of Sequence-Based Emotion Detection in Human Face Through Deep Learning

Krishna Kant and D. B. Shah

Abstract The emotion detection domain makes a significant contribution to society by addressing the mental states of humans. Owing to developments in technology, researchers from different domains have expanded their research on emotion and human behavior from facial expressions. Emotion is the basic source from which we infer the mental state of humans, and mental well-being plays a significant role in determining a person's efficiency in terms of outcomes. This research work experiments with determining human behavior from detected faces using artificial intelligence. Video is a prime source for capturing the most intense emotion of an individual, since emotions are dispersed across the frames. This work applies the artificial intelligence technique of a deep CNN with the Keras and TensorFlow libraries in the Google Colab environment, with GPU run-time support to speed up the execution and processing of the project. The OpenCV library is used for detecting the face and recognizing the emotion of an individual. We have applied a deep convolutional neural network to detect emotion from video sequences on two data sets, FER2013 and CK+, where the proposed algorithm outperforms FER2013 with a 2.4% improvement in the result.

Keywords Emotion recognition · Human face detection · Video surveillance · Artificial intelligence · Deep learning

K. Kant Smt. Chandaben Mohanbhai Patel Institute of Computer Applications (CMPICA), Changa, GJ, India D. B. Shah (B) Department of Computer Science and Technology, G H Patel P G, Vallabh Vidyanagar, Anand, GJ, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_33


1 Introduction

Facial expressions are the most important lexical and non-lexical communication channel through which we understand the human frame of mind. Automatic emotion detection addresses two aspects: the emotional state of the human and artificial intelligence (AI). Facial emotion recognition has interesting applications in computer vision, e.g., identification of mental state, security, mentoring systems, and lie detection. The face components are obtained from the region of interest (ROI) covering all parts of the face, including the forehead; the facial characteristics are then derived by extracting features from the region of interest [1]. Other prominent areas are drowsy-driver detection, crime prediction, interview processes, measuring happiness at work, patient monitoring, and surveillance. With the rapid development of technology, there is demand for high-level intelligence in human-computer interfaces in the HCI domain. Sentiment detection plays a crucial role in all aspects of life and is becoming a new path to understanding human behavior. Mehrabian [2] stated that 55% of emotional clues come from visual cues, 38% from vocal cues, and 7% from verbal communication. The six emotions defined by Ekman are happy, sad, disgust, angry, surprise, and fear. Facial expression variations throughout communication are the first signals that convey emotional state, which is why researchers are fascinated with this modality. Ekman and Friesen [3] established the FACS coding system based on action units (AUs), divided into 46 action units, each encoded along numerous facial tissue gestures. Because of this modality, the domain of expression has become a fascinating research community. Various challenges and opportunities remain to be explored in this domain, such as head movement, age, gender, background, and occlusion. Several traditional methods are used to detect facial emotion, for example, LBP [4], FACS [3], LDA [5], and Gabor wavelets [6]. Recently, CNNs and RNNs have become the most popular and efficient methods, helping to obtain reliable and economical results, and numerous attempts have been made by scholars to improve DNNs with prominent results in the domain. In this paper, we present current advancements in emotion detection from video sequences using deep learning architectures, covering results from 2016 to 2022 with an interpretation of challenges and contributions. The rest of the work is arranged as follows: Sect. 2 discusses the video data sets, Sect. 3 presents the survey of literature, Sect. 4 describes the experiment and setup, Sects. 5-7 cover preprocessing and the model, Sect. 8 presents the results and discussion, and Sect. 9 concludes the work.

2 Handy Video Data Sets

Training the neural network is one of the most important factors: the way we train the model determines the results it gives us. Various data sets are available in this domain for exploring and solving real-world problem statements. Each data set is


different from the others in the number and size of videos, changes in illumination, pose, and population. The data sets used in different contexts are summarized, with citations, in Table 1.

Table 1 Summary of video data sets (data set: description; emotions)

FER [7]: 30,000 facial RGB images, with a training set of 28,709 and a test set of 3,589 images; seven basic emotions
CK+ [7]: 593 videos of facial expressions; anger, disgust, fear, happiness, sadness, surprise, contempt, and neutral
AffectNet [7]: 0.4 million images; neutral, happy, angry, sad, fear, surprise, disgust, and contempt
MHED2.28 [8]: 1066 videos, 638 for training and 428 for testing; happy, angry, sad, fear, surprise, and disgust
HEIV [9]: 1012 videos, 607 for training and 405 for testing, with 64% male and 36% female subjects; Ekman's six emotions
Video emotion 8 [10]: 1101 videos, 734 for training and 367 for testing; Ekman's six basic emotions
RML [11]: 720 audiovisual emotional expression samples; anger, disgust, fear, happiness, sadness, and surprise
SAVEE [11]: recordings from 4 male actors, 480 British English utterances in total; 7 different sentiments
SFEW [12]: 2,000 min of video input; anger, disgust, fear, neutral, happiness, sadness, and surprise
AVEC-13 [13]: depression corpus of 150 video clips from 82 subjects, 224 × 224 RGB color images; video-based depression clips
AVEC-14 [13]: depression corpus of 150 video clips from 82 subjects, 224 × 224 RGB color images; video-based depression clips
AM-FED+ [14]: 1,044 videos, of which 545 have been comprehensively manually coded for facial action units; Ekman's emotions plus neutral
MMI [15]: 2900 videos and images of 75 subjects; anger, disgust, fear, happiness, sadness, and surprise
Ekman-6 [16]: 1,637 videos; anger, disgust, fear, happiness, sadness, and surprise

3 Survey of Literature

Human emotions are expressed in multiple styles: the human face, video, audio, body gestures, actions, and environments all carry certain clues about emotion. By incorporating multiple clues, we can boost the efficiency of emotion recognition, and fusing context with body information gives better results. Fulan [7] experimented with a CNN along with an attention network mechanism to obtain discriminating features from different feature vectors, working on the FER2013 and CK+ data sets and obtaining an accuracy of 98%. Liu [17] applied a region dual attention-based method to optimize results on the MHED, HEIV, Ekman-6, VideoEmotion-8, and SFEW databases. Lang [13] used a deep local-global attention convolutional neural network and weighted spatial pyramid pooling to learn deep and global representations on the AVEC2013-150 and AVEC2014-150 videos, exhibiting low computational complexity. Akhand [18] designed a DCNN using transfer learning and verified it with a pretrained network on the KDEF and JAFFE data sets, achieving accuracies of 96.5% on KDEF and 99.52% on JAFFE. Ketan [19] worked on the FER2013 data set using a deep CNN with an accuracy of 54%. Riyantoko [20] experimented with a deep CNN along with a Haar cascade classifier, finding that an increase in the epoch count results in a lower MSE value. Hajarolasvadi [11] worked on the SAVEE and RML data sets, using k-means to select the most discriminating frames from each video, linear and Gaussian support vector machines for the prime k representatives, and a pretrained CNN to assess person-dependent and person-independent scenarios, achieving accuracies of 98% (dependent) and 41% (independent) on SAVEE, and 95% (dependent) and 39% (independent) on RML. Samadiani [14] used a CNN-with-LSTM architecture for detecting smiles from video data sets, with accuracies of 95.97% on AM-FED+, 94.89% on AFEW, and 91.14% on MELD. Li [21] deployed a ResNet-50 CNN architecture on their own data set of 700 images and obtained an accuracy of 95.39 ± 1.41. Shervin [22] employed a deep convolutional neural network (DCNN) approach on the FERG, JAFFE, and CK+ data sets and achieved accuracies of 70.02% on FER2013, 99.3% on FERG, 92% on JAFFE, and 98% on CK+. Ninad [23] applied a DCNN on Caltech Faces with 85%, CMU with 78%, and NIST with 96% accuracy. Shruti [24] applied a CNN on FER2013, CK and CK+ [25], the Chicago Face Database, JAFFE, FEI, IMFDB, and TFEID with 65% training and 74% validation accuracy. Jie [26] experimented with CNN, SVM, RF, score fusion, and Top-K decision


fusion on Ekman-6 with an accuracy of 59.61% and on VideoEmotion-8 with an accuracy of 52.85%.

4 Experiment and Setup

This section provides information about the preprocessing, training, and testing of the data and the evaluation of the models, giving an overview of the steps performed on the data sets to produce the expected emotions as output from human faces. We experimented on the FER2013 and CK+ data sets for extracting emotion from human faces; a deep CNN delivers better results the better the training we give the model. FER2013 contains 48 × 48-pixel grayscale images of faces, 35,887 images in total, covering the six basic Ekman emotions plus neutral; the training set has 28,709 images and the test set 3,589 images. This is a good amount of data for developing an emotion detection system. The CK+ data set consists of 593 video sequences of 123 distinct subjects, male and female, between the ages of 18 and 50 years. Each sequence shows a facial shift from neutral to peak expression, recorded at 30 frames per second at 640 × 480 or 640 × 490 pixels. Of all the video sequences, 327 are categorized into Ekman's six basic emotions plus contempt. Researchers have made significant contributions to human facial emotion recognition; the many real-world applications of emotion detection have attracted researchers to emotion analysis and to understanding the domain of computer vision, where artificial intelligence is the best approach for developing emotion detection algorithms (Fig. 1).

5 Preprocessing

The FER2013 data set is a large collection of 48 × 48 images, each cell of the matrix represented by a pixel value. The data set was divided into a training set of 28,827 images across 7 classes and a test set of 7,066 images across 7 classes, i.e., an 80:20 split with 80% for training and 20% for testing. Data is the most important aspect of deep learning, since all prediction is based on the quality of the data and the training given to the model. The data may instead be split 70%/30%, or into 60% training, 20% test, and 20% validation; each combination produces significant differences in the output. Image generators are used for preprocessing the data set (train and test); once both generators are ready, all the training and testing data has been preprocessed (Figs. 2 and 3).
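As one concrete illustration of the generator setup just described, the following Keras sketch prepares the train and test generators. It assumes the FER2013 images have been exported into class-labelled folders; the paths data/train and data/test, the batch size, and the rescaling are assumptions, not values stated in the paper.

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    train_datagen = ImageDataGenerator(rescale=1.0 / 255)  # normalise pixels to [0, 1]
    test_datagen = ImageDataGenerator(rescale=1.0 / 255)

    train_generator = train_datagen.flow_from_directory(
        "data/train",              # placeholder path, one subfolder per class
        target_size=(48, 48),      # FER2013 images are 48 x 48
        color_mode="grayscale",
        batch_size=64,
        class_mode="categorical",  # seven emotion classes
    )
    test_generator = test_datagen.flow_from_directory(
        "data/test",
        target_size=(48, 48),
        color_mode="grayscale",
        batch_size=64,
        class_mode="categorical",
    )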


6 Deep Learning and Convolutional Neural Network

Selecting a model is very important, as it drives the prediction and makes the decisions efficient. The CNN model has two convolutional layers, one max-pooling layer with a pool size of two, a dropout ratio of 0.25, and two dense layers; before integrating the final layers, the output is flattened. Dropout is used to reduce the effect of unhelpful inputs. To enable the model to learn, we deepen it while guarding the added layers against over-fitting. We create a sequential model to which the different layers (Conv2D) are added.
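The description above is ambiguous in places, so the following Keras sketch is only one plausible reading of the layer stack (the filter counts and the dense width are assumptions):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

    model = Sequential([
        Conv2D(64, (3, 3), activation="relu", input_shape=(48, 48, 1)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Dropout(0.25),                    # drop 25% of activations
        Flatten(),                        # flatten before the dense layers
        Dense(128, activation="relu"),
        Dropout(0.25),
        Dense(7, activation="softmax"),   # seven emotion classes
    ])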

7 Final Model

The final model consists of a convolutional network with four hidden layers; each hidden layer contains 30 nodes, and the network has 36 input nodes for the matrix input and six output nodes for the six classes, with kernel size (3, 3), ReLU activation, and a dropout of 0.25 to avoid over-fitting. After generating the model, we fit the data using the fit generator, which consumes all the train-generator data; at this stage the model is trained on the preprocessed training data. The model is then compiled, its structure is stored in a model.json file, and the weights it learns from the training process are stored in a .h5 file. We achieved an accuracy of 87.6% and a loss of 35.95% at epoch = 50. The number of layers was chosen to achieve good accuracy for real-time applications, with two max-pooling layers and two fully connected layers in the network. The network takes 48 × 48 input images of faces, and the model was created with a sequential model structure in Keras. To avoid memorization, we include additional layers in the network; the number of layers defines the accuracy of the model while keeping processing fast for real-time applications. The recommended CNN is better than the traditional CNN with six filter sizes. To reduce over-fitting, we used max pooling and the dropout parameter as research components. The network comprises two convolutional layers, each with a filter size of 64, two max-pooling layers, and a dropout of 0.25 to reduce over-fitting. A combination of six convolutional layers is then applied: the first three have a filter size of 128 and the other three a filter size of 256, with a dropout of 0.25 at the end of this stack. For compiling the model, we used categorical cross-entropy as the loss and the Adam optimizer to update the weights iteratively based on the training data. The output layer produces a single-dimensional vector, after which a fully connected penalty-controller layer is used, resulting in fast processing. The Softmax activation function is used for better classification with good accuracy. The final model applied to the CK+ data set outperforms FER2013 with an efficiency of 90%, a good starting point for improving the accuracy toward the next milestone.
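Continuing the sketches above, the compile, train, and save steps named in this section (categorical cross-entropy, the Adam optimizer, a fit generator, model.json, and a .h5 weights file) could look as follows; the epoch count is the one reported, everything else is assumed:

    model.compile(loss="categorical_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])

    # Train from the preprocessed generators built earlier.
    history = model.fit(train_generator,
                        validation_data=test_generator,
                        epochs=50)

    # Persist the structure and the learned weights separately.
    with open("model.json", "w") as f:
        f.write(model.to_json())
    model.save_weights("model.h5")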


8 Result and Discussion

In this section, we evaluate the proposed technique using two data sets, FER2013 and CK+. We experimented with the proposed method on both data sets and found that the model trained on CK+ outperforms the one trained on FER2013 in accuracy; the results are depicted in Fig. 4. As we increase the epoch count, the accuracy of the model varies. The result obtained on FER2013 is 87.6% and on CK+ is 90%. By implementing further research components, we may increase the performance of the model to some extent, and we still have to work on the time complexity of the model to boost its performance: the computational time the model takes to detect emotion determines its effectiveness in real-time applications. The data set plays an important role in testing the performance and accuracy of the model. We first experimented on the image data set and then on the video sequence data set, comparing the two with respect to accuracy and performance. The most frequent source of error is uncertainty in the face: fear and surprise are the most easily confused emotions and are likely to be misclassified because of the similarity of the facial features extracted from the video sequence (Figs. 5 and 6).
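Per-epoch curves of the kind shown in Figs. 4 and 6 can be reproduced from the history object returned by the training sketch above; this is a minimal plotting sketch, not the authors' exact script:

    import matplotlib.pyplot as plt

    plt.plot(history.history["accuracy"], label="train accuracy")
    plt.plot(history.history["val_accuracy"], label="test accuracy")
    plt.plot(history.history["loss"], label="train loss")
    plt.xlabel("epoch")
    plt.legend()
    plt.show()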

Fig. 1 Sequence of happy expression

Fig. 2 Training and testing of FER2013


Fig. 3 Training and testing of CK+

Fig. 4 Model epochs

Fig. 5 Classification of FER-2013 expressions

9 Conclusion

Emotions can be traced or detected by human expertise; at the same time, technology is growing fast and becoming smarter in the current age of science. Machines now perform the same task with good accuracy, resembling human actions, and artificial intelligence plays a tremendous role in the domains of computer vision and emotion detection. This research was experimented on two data sets, FER2013 and CK+, and the result on CK+ is better than on FER2013. Here, we have


Fig. 6 Loss and accuracy of CNN model by epochs of CK+

applied a deep learning CNN algorithm with Keras and TensorFlow to detect emotion from videos and images. The framework was planned and executed step by step to obtain the expected outcome; thanks to these findings, correct emotions can be detected in this domain. The deep learning CNN model used in this research formulates emotion classification around the six basic emotions of Ekman's model, and we worked with those six basic emotion types. Video surveillance plays a vital role in obtaining the most prominent results, as emotions are evaluated from sequences of frames. The proposed model obtained better accuracy on the CK+ data set than on the FER2013 data set.

References

1. Kumari J, Rajesh R, Pooja K (2015) Facial expression recognition: a survey. Procedia Comput Sci 58:486–491
2. Marechal C, Mikolajewski D, Tyburek K, Prokopowicz P, Bougueroua L, Ancourt C, Wegrzyn-Wolska K (2019) Survey on AI-based multimodal methods for emotion detection. High Perform Modell Simul Big Data Appl 11400:307–324
3. Alkawaz MH, Mohamad D, Basori AH, Saba T (2015) Blend shape interpolation and FACS for realistic avatar. 3D Res 6(1):1–10
4. Niu B, Gao Z, Guo B (2021) Facial expression recognition with LBP and ORB features. Comput Intell Neurosci 2021
5. Jabid T, Kabir MH, Chae O (2010) Robust facial expression recognition based on local directional pattern. ETRI J 32(5):784–794
6. Boughida A, Kouahla MN, Lafifi Y (2022) A novel approach for facial expression recognition based on Gabor filters and genetic algorithm. Evol Syst 13(2):331–345


7. Ye F (2022) Emotion recognition of online education learners by convolutional neural networks. Comput Intell Neurosci 2022
8. Liu X, Wang M (2020) Context-aware attention network for human emotion recognition in video. Adv Multimed 2020
9. Liu X, Li S, Wang M (2021) Hierarchical attention-based multimodal fusion network for video emotion recognition. Comput Intell Neurosci 2021
10. Jiang Y-G, Xu B, Xue X (2014) Predicting emotions in user-generated videos. In: Proceedings of the AAAI conference on artificial intelligence, vol 28
11. Hajarolasvadi N, Bashirov E, Demirel H (2021) Video-based person-dependent and person-independent facial emotion recognition. SIViP 15(5):1049–1056
12. Dhall A, Goecke R, Lucey S, Gedeon T (2011) Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark. In: 2011 IEEE international conference on computer vision workshops (ICCV workshops), pp 2106–2112. IEEE
13. He L, Chan JC-W, Wang Z (2021) Automatic depression recognition using CNN with attention mechanism from videos. Neurocomputing 422:165–175
14. Samadiani N, Huang G, Hu Y, Li X (2021) Happy emotion recognition from unconstrained videos using 3D hybrid deep features. IEEE Access 9:35524–35538
15. Wu H, Lu Z, Zhang J, Li X, Zhao M, Ding X (2021) Facial expression recognition based on multi-features cooperative deep convolutional network. Appl Sci 11(4):1428
16. Xu B, Fu Y, Jiang Y-G, Li B, Sigal L (2016) Video emotion recognition with transferred deep feature encodings. In: Proceedings of the 2016 ACM international conference on multimedia retrieval, pp 15–22
17. Liu X, Xu H et al (2022) Region dual attention-based video emotion recognition. Comput Intell Neurosci 2022
18. Akhand M, Roy S, Siddique N, Kamal MAS, Shimamura T (2021) Facial emotion recognition using transfer learning in the deep CNN. Electronics 10(9):1036
19. Sarvakar K, Senkamalavalli R, Raghavendra S, Jankatti S, Manjunath R, Jaiswal S (2021) Facial emotion recognition using convolutional neural networks. Mater Today: Proc
20. Riyantoko P, Hindrayani K et al (2021) Facial emotion detection using Haar-cascade classifier and convolutional neural networks, vol 1844, no 1, p 012004
21. Li B, Lima D (2021) Facial expression recognition via ResNet-50. Int J Cogn Comput Eng 2:57–64
22. Minaee S, Minaei M, Abdolrashidi A (2021) Deep-emotion: facial expression recognition using attentional convolutional network. Sensors 21(9):3046
23. Mehendale N (2020) Facial emotion recognition using convolutional neural networks (FERC). SN Appl Sci 2(3):1–8
24. Jaiswal S, Nandi GC (2020) Robust real-time emotion detection system using CNN architecture. Neural Comput Appl 32(15):11253–11262
25. Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I (2010) The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE computer society conference on computer vision and pattern recognition workshops, pp 94–101. IEEE
26. Wei J, Yang X, Dong Y (2021) User-generated video emotion recognition based on key frames. Multimed Tools Appl 80(9):14343–14361

Mr. Krishna Kant pursued his Masters (MCA) from Savitribai Phule Pune University, Pune. He is currently pursuing a Ph.D. from Sardar Patel University, VV Nagar, Anand, Gujarat, and has been working as an Assistant Professor at the Smt. Chandaben Mohanbhai Patel Institute of Computer Applications, Charusat, Changa since 2018. He is an Oracle Certified Professional SE 6 Programmer and has 10 years of teaching experience in academics.


Dr. D. B. Shah is Professor and Director, Head of the Department of Computer Science, Sardar Patel University, Vallabh Vidyanagar 388 120, Gujarat. She is a member of the Board of Studies in Computer Science at S. P. University. She was awarded the Hari Ohm Ashram prize for the best research paper in Computer Science and Computer Engineering (2008–09) and received the Sardar Patel Research Award in Basic and Computer Science subjects under the Teachers' category (January 2018). She is a member of the editorial board/review committee of many international journals, has served as a program committee member for international and various national conferences, and has worked as Chairman/Vice-Chairman/Secretary of the Computer Society of India, Vallabh Vidyanagar chapter.

Speed Analysis on Client Server Architecture Using HTTP/2 Over HTTP/1: A Generic Review

Anuj Kumar, Raja Kumar Murugesan, Harshita Chaudhary, Neha Singh, Kapil Joshi, and Umang

Abstract In the innovation and advancement of this digital era, every physical task can be performed remotely and on online platforms. Today our generation is surrounded by a plethora of platforms where users can post their data, and that data remains in cyberspace for any number of years. The creation of data is increasing, storage volumes are increasing, and algorithms to recognize patterns in that data are also maturing. Not only the data but also the ways of presenting it have kept pace with advancing technologies. Looking at websites and web development about 10 years ago, pages were plain HTML; since then, the back-end frameworks have improved with time, and the size of the data on those websites has increased considerably. Many components, such as the assets, thumbnails, and third-party scripts running on a web page, make a single web page take a long time to load, and this increasing data on the server side affects the rendering time at the client's end. This paper focuses specifically on the advancement of Internet protocols originating from the WWW, i.e., the W3 consortium. We concentrate on HTTP/2 and its use of cookies to gain performance in terms of page load time and increased speed.

Keywords Cookies · HTTP/2 · Speed · Protocols · Comparative study

A. Kumar · K. Joshi (B) Department of CSE, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, India e-mail: [email protected] R. K. Murugesan Department of CSE, Taylor's University, Subang Jaya, Malaysia e-mail: [email protected] H. Chaudhary Tula's Institute, Dehradun, UK, India N. Singh Department of CSE, Indrashil University Rajpur Kadi, Rajpur, India Umang Department of Computer Applications, Kumaun University Nainital, DSB Campus, Nainital, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_34


1 Introduction

Websites in today's era have become so rich and user friendly that large sets of data modifications are done at the client's end. In contrast to the plain and obsolete HTML UI/UX of the past, websites now provide magnificent UI/UX including photos, thumbnails, assets, music (mp3, .wav), and even videos. The size of these web pages resting on servers is considerably larger than that of previously used pages, and the target is to load a page with maximum data within minimum Internet bandwidth [1]. To reduce this time and space overhead, the contemporary HTTP/2 protocol was introduced as a successor to HTTP/1 and Google's web protocol SPDY. SPDY was implemented on top of the application layer and offered several improvements over HTTP/1.1; HTTP/2 proved to be its successor, taking its basic model, scheme, and technicalities from SPDY. HTTP/2 is designed to address several flaws and inflexibilities of HTTP/1.1, improving online performance in terms of the time it takes for a page to load. The concept of HTTP/2 evolved from discovering patterns in what a user is watching, opening, surfing, or playing over the Internet through the web browser [2]. The idea of HTTP/2 is to store information and prerequisites (assets, thumbnails, scripts, extensions, etc.) in the client's web browser the first time the client visits the website. The cookie law, known as the ePrivacy Directive, is not a compulsory law unless your target audience is based in Europe. The Hypertext Transfer Protocol (HTTP) was created in the middle of the 1990s to facilitate communication between clients/users (web browsers) and web servers, and many sub-versions of the protocol were subsequently developed. Furthermore, Google took this technology a step further in 2009, beginning development of SPDY as an experimental protocol. SPDY not only preserved HTTP's semantics but also resolved the head-of-line (HOL) blocking issue, which had proven to be a major limitation of HTTP/1.1 [3]. The Internet Engineering Task Force (IETF) took up the definition of a new HTTP version, HTTP/2, in 2012. The initial HTTP/2 proposal of 2012 was based on the SPDY protocol; since then, it has gone through many changes, all with the goal of making HTTP/2 load web pages faster [4]. In this paper, we analyze HTTP/2 against the existing protocols (HTTP/1 and SPDY): the problems of the existing protocols are discussed in detail, and the advancements made are analyzed so that the protocols can be compared properly. The setup used for this analysis is a sample website made by us, comprising JavaScript, HTML, CSS, jQuery, and Bootstrap code, with scripts as well as HD images and videos included among the web page's assets. In addition, the use of cookies in HTTP/2 is scrutinized: although it decreases page load time, it increases the security risk to one's personal information on the web. It is also to be expected that a shift in the protocol gives eavesdroppers and attackers a chance to find vulnerabilities in the updated version of HTTP [5].


2 Limitations of HTTP/1

HTTP/1.1 had a major drawback in handling multiple requests at the same time: in an elementary request scenario, HTTP/1 works perfectly fine, but with pipelined requests, the protocol fails to process the second request if the response to the first request is hindered. Another problem with HTTP/1 is head-of-line (HOL) blocking [6]. HTTP/1.1 uses a TCP connection to communicate with a server. Once a connection has been established over TCP with the server, the client can make requests, i.e., GET requests, to fetch content. If the client desires numerous pieces of information from the server, it must make multiple GET queries on the established TCP connection, one after the other, and before submitting another request the client must wait for the server to respond to the previous one. Pipelining was implemented as a first option to avoid this stumbling block: the client sends two queries (GET 1 and GET 2) without waiting for a response in between [7]. This means that at the same instant the server can receive two or more GET requests from the client, independent of each other. If requests are made in pipelined form, the responses also occur in order: first the response to the first request is returned, followed by the response to the second. This is a much better scheme, but the head-of-line (HOL) blocking problem still persists [7]. For example, if the first response is a bulky file containing lots of media and assets, it will take a long time to transfer to the client and the subsequent responses will be delayed; the first request in line automatically hinders all the others, which is exactly what "head-of-line" blocking refers to. Some attempts were made to avoid HOL blocking; allowing a client to establish multiple TCP connections to the same server is the most common and simplest option. However, because the server must keep a large amount of state for the numerous TCP connections, this technique is costly. As a result, browser manufacturers agreed to cap the number of connections to a single server: six for Google Chrome and fifteen for Mozilla Firefox. Domain sharding was another experiment to alleviate the detrimental effect of this upper bound on the number of TCP connections [8]. It entails dividing the content across numerous servers, each with a unique IP address; the client can then create multiple TCP connections in parallel to the various servers and retrieve multiple components of the same web page at the same time. However, a browser's total number of TCP connections (again, across all tabs) remains capped (Figs. 1 and 2).


Fig. 1 Elementary requests on a TCP connection over HTTP/1

Fig. 2 Pipelined requests on a TCP connection over HTTP/1
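To make the serial pattern of Figs. 1 and 2 concrete, here is a minimal Python sketch using the standard library's http.client: over one persistent HTTP/1.1 connection, each response must be fully read before the next request can be issued, which is exactly where a bulky first response delays everything behind it. The host and paths are placeholders.

    import http.client

    conn = http.client.HTTPSConnection("example.com")
    for path in ["/", "/style.css", "/app.js"]:
        conn.request("GET", path)            # send one request ...
        resp = conn.getresponse()            # ... then block on its response
        body = resp.read()                   # the body must be drained before
        print(path, resp.status, len(body))  # the next request can go out
    conn.close()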

3 Enhanced Features of HTTP/2

HTTP/2 has the advantage of being a binary protocol. HTTP/1 was text based, but HTTP/2 introduced a binary framing structure that is more efficient to parse; services like telnet and plain-text parsers can no longer be used on HTTP/2. The syntax for headers remains the same in HTTP/2 because HTTP/1.1's semantics are preserved. Likewise, these HTTP headers frequently carry the same values across requests, resulting in redundancy [9]. This is why the HTTP/2 standard includes the HPACK algorithm, which specifies how to compress HTTP headers. HPACK uses Huffman encoding along with two tables: a static table that stores compressed variants of frequently used headers, and a dynamic


table that holds additional compressed headers that are session-dependent. To address the head-of-line blocking problem, HTTP/2 introduces the concept of streams: each client request is assigned to a dedicated stream, the individual streams are multiplexed, and all streams share a single TCP connection. As a result, requests do not interfere with one another and the server can respond to them all at the same time; HTTP/2 thereby eliminates head-of-line blocking while also lowering the number of TCP connections that must be managed on the server's end. Using the priority mechanism, a user can assign a specific ranking to the many streams within a single HTTP/2 connection. It is then up to the server to honor this ranking or disregard it entirely; for instance, a server may prioritize code files (JS, CSS, and HTML) over images, resulting in a faster page load time [10]. This makes sense in the structure of a web page, where pieces of code refer to other dependencies [11] such as graphics, images, and media: the code should be retrieved first so the browser can scan it as soon as feasible, check which images are referenced by the web page, and issue the appropriate requests as quickly as possible. HTTP/2 also enables the server to push useful data to the client [12]. Table 1 displays the load time (in seconds) of different constituents of a website for the top 12 websites according to the 2022 global ranking. The client can then refuse or accept [13] the pushed information, which will subsequently be saved in the browser's cache for future use. Cookies are used in HTTP/2 for the same purpose: to store pre-fetched data [14] at the client's end and so enhance page load time [15–18] (Table 2).
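As a sketch of how a client can exploit this multiplexing, the following uses the third-party httpx library (installed with pip install "httpx[http2]"), not anything from the paper itself; the origin and paths are placeholders. The concurrent GETs share one TCP connection as separate streams, so a slow response no longer blocks the others.

    import asyncio
    import httpx

    async def fetch_all(paths):
        async with httpx.AsyncClient(http2=True,
                                     base_url="https://example.com") as client:
            # One connection, many multiplexed streams: no HOL blocking.
            responses = await asyncio.gather(*(client.get(p) for p in paths))
            for r in responses:
                print(r.url, r.status_code, r.http_version)

    asyncio.run(fetch_all(["/", "/style.css", "/app.js"]))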

Table 1 Different assets' load time (in seconds) for widely used websites

S. No | Website   | JS | CSS | Images | Media | HTML
1     | Google    | 1  | 2   | 3      | 1     | 2
2     | YouTube   | 2  | 3   | 4      | 4     | 2
3     | Facebook  | 2  | 1   | 1      | 2     | 1
4     | Twitter   | 3  | 2   | 4      | 2     | 1
5     | Wikipedia | 2  | 2   | 2      | 2     | 1
6     | Yahoo     | 4  | 2   | 2      | 1     | 1
7     | Amazon    | 6  | 2   | 8      | 4     | 2
8     | Instagram | 6  | 7   | 5      | 3     | 2
9     | Yandex    | 4  | 3   | 3      | 3     | 3
10    | Reddit    | 3  | 2   | 1      | 2     | 2
11    | Bing      | 4  | 2   | 2      | 1     | 1
12    | Naver     | 2  | 2   | 2      | 1     | 1


Table 2 Page load time (in seconds) on two versions of HTTP

4 Conclusion

The goal of this work was to analyze the efficiency ratio between HTTP/1.1 and HTTP/2, and we assessed the superiority, efficiency, and sustainability of HTTP/2. There is certainly a need to test the new and advanced protocols more rigorously. The prime goal of this research was to assess the impact of HTTP/2's adoption on the existing Internet world. Since the data and graphics on today's websites are in huge demand, many organizations focus on A/B testing of their websites for a better UI/UX, which increases the size of the website; even so, we found that HTTP/2 generally has a faster page load time, thanks to the compression and multiplexing features offered by the protocol. On the fully equipped, realistic web pages designed for testing, and under both simulated and real Internet settings, HTTP/2 consistently kept its performance higher than the existing protocols. The use of HTTP/2 thus has clear advantages in terms of speed and better page rendering. Keeping the efficiency of the new protocol in mind, web pages can be made more attractive without worrying much about their heaviness in terms of data; graphics and background scripts can be added for a better UI/UX as long as HTTP/2 is used.

References

1. de Saxcé H, Oprescu I, Chen Y (2015) Is HTTP/2 really faster than HTTP/1.1? In: 2015 IEEE conference on computer communications workshops (INFOCOM WKSHPS). IEEE
2. The HTTP archive. http://httparchive.org
3. SPDY: an experimental protocol for a faster web. http://dev.chromium.org/spdy
4. Browserscope. http://www.browserscope.org/?category=network
5. Peon R, Ruellan H. HPACK: header compression for HTTP/2. https://tools.ietf.org/html/draft-ietf-httpbis-header-compression-10
6. Rescorla E. RFC 2818: HTTP over TLS. http://tools.ietf.org/html/rfc2818


7. Naylor D, Finamore A, Leontiadis I, Grunenberger Y, Mellia M, Papagiannaki K, Steenkiste P (2014) The cost of the "s" in HTTPS. In: ACM conference on emerging networking experiments and technologies (CoNEXT), 2014
8. Phan H, Nguyen D, Tran HTT, Thu Huong T, Thang TC (2021) Application layer throughput control for video streaming over HTTP/2. In: 2020 IEEE eighth international conference on communications and electronics (ICCE), 2021, pp 123–128. https://doi.org/10.1109/ICCE48956.2021.9352137
9. RFC 1945 Hypertext Transfer Protocol—HTTP/1.0. https://tools.ietf.org/html/rfc1945
10. RFC 2616 Hypertext Transfer Protocol—HTTP/1.1. https://tools.ietf.org/html/rfc2616
11. RFC 7540 Hypertext Transfer Protocol Version 2 (HTTP/2). https://tools.ietf.org/html/rfc7540
12. de Saxce H, Oprescu I, Chen Y (2015) Is HTTP/2 really faster than HTTP/1.1? In: 2015 IEEE conference on computer communications workshops (INFOCOM WKSHPS), Hong Kong, pp 293–299
13. Iyappan P, Loganathan J, Verma MK, Dumka A, Singh R, Gehlot A, ... Joshi K (2022) A generic and smart automation system for home using internet of things. Bull Electr Eng Inf 11(5):2727–2736
14. Sharma T, Diwakar M, Singh P, Lamba S, Kumar P, Joshi K (2021) Emotion analysis for predicting the emotion labels using machine learning approaches. In: 2021 IEEE 8th Uttar Pradesh section international conference on electrical, electronics and computer engineering (UPCON). IEEE, pp 1–6
15. Jatothu R, Sireesha A, Kumar RG, Joshi K (2022) Deep convolution neural network for RBC images. In: 2022 international conference on innovative computing, intelligent communication and smart electrical systems (ICSES). IEEE, pp 1–5
16. Diwakar M, Sharma K, Dhaundiyal R, Bawane S, Joshi K, Singh P (2021) A review on autonomous remote security and mobile surveillance using internet of things. J Phys: Conf Ser 1854(1):012034. IOP Publishing
17. Avgetidis A, Alrawi O, Valakuzhy K, Lever C, Burbage P, Keromytis A, ... Antonakakis M (2023) Beyond the gates: an empirical analysis of HTTP-managed password stealers and operators. In: 32nd USENIX security symposium (USENIX Security 23)
18. Kumar A, Webber JL, Haq MA, Gola KK, Singh P, Karupusamy S, Alazzam MB (2022) Optimal cluster head selection for energy efficient wireless sensor network using hybrid competitive swarm optimization and harmony search algorithm. Sust Energy Technol Assess 52:102243

Smart Chatbot for Guidance About Children's Legal Rights

Akshay Kumar, Pooja Joshi, Ashish Saini, Amrita Kumari, Chetna Chaudhary, and Kapil Joshi

Abstract Generally, a smart chatbot is built around the needs of its users and accesses current data to serve users at scale. Legal education, however, is especially important where children are concerned, so this article presents the idea of legal chatbots and their application in providing guidance on one's legal rights. This study refers to chatbots as computer programmes that automatically converse with users and determine whether they need legal advice. The study identifies three advantages that legal chatbots may offer: granting access to legal rights, bridging the client-attorney communication gap, and automatically producing documentation for each activity.

Keywords Chatbot · Legal disputes · Artificial intelligence · Natural language processing

A. Kumar · K. Joshi (B) Department of CSE, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, India e-mail: [email protected] P. Joshi CSE, HSST, Swami Rama Himalayan University Jollygrant, Dehradun, India A. Saini · A. Kumari Department of CSE, Quantum University, Roorkee, India e-mail: [email protected] A. Kumari e-mail: [email protected] C. Chaudhary Uttaranchal University, Dehradun, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_35


1 Introduction

Legal bots powered by artificial intelligence (AI) have drawn considerable attention and are among the most promising technological developments. As AI effectively acts as an intelligent agent using natural language, robotics and the displacement of human labour are becoming crucial concerns [1]. It is becoming more and more important to develop AI as a foundation for answers to a variety of life's problems, including legal ones; such systems can reduce the amount of legal knowledge required and thereby make the legal system more approachable. In the work presented here, we propose a novel chatbot application to enhance children's access to their legal rights, i.e., to make it easier for them to contact and receive advice from a consultant. The goals of this chatbot are:

(1) Determine the legal setting.
(2) List the parties concerned.
(3) Using the details from steps 1 and 2, create a new advisor case.

1.1 The Historical Background of AI

In a 1950 paper he authored about computers and intelligence, British mathematician Alan Turing raised the question of whether or not machines can think. The term "AI", however, was first used in August 1955 in the Dartmouth Summer Research Project proposal by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude Elwood Shannon [2]. The term "artificial intelligence" is typically credited to John McCarthy, an American computer and cognitive scientist and one of the pioneers of the AI field; American cognitive scientist Marvin Lee Minsky was one of the most influential AI theorists [3]. Since the 1960s, technology-based methods in the legal profession have used artificial intelligence to some extent [4]. Computer-assisted legal research (CALR) made its debut in the middle of the 1960s, although the initial CALR systems were rudimentary and not widely available [5]. The CALR revolution began to take off when Lexis, the first for-profit full-text electronic case-law database, was released in 1973 and actively marketed to lawyers and judges [6]; that same year, four New York law firms joined the Lexis legal data service [7]. This event signalled the beginning of a whole new era for legal technology. As a result of the Lexis service's rapid growth, attorneys suddenly had unmatched access to comprehensive, searchable electronic case law, greatly accelerating the legal process [8]. According to AI research, almost 1.6 million scientific publications and 340,000 patent families related to artificial intelligence were produced between 1960 and early 2018, and between 2011 and 2017 the number of annual patent applications filed in the field of artificial intelligence increased by a factor of 6.5. An important topic in the scientific literature, AI had 1,636,649 published papers as of mid-2018 [9]. The conclusion that every aspect of legal practice has changed over the past 25 years as a result of the development of technology is therefore not an


exaggeration. These facets include hiring, client acquisition, communication and upkeep, court docketing, judicial workflow, and discovery production [10]. Meanwhile, improvements in processing power, connectivity, and information accessibility from 2012 to 2018 allowed advancements in ML, notably in the areas of deep learning and neural networks. This signalled the beginning of a period marked by greater investment and hope for innovation throughout the entire field of AI generally and in legal tech startups specifically [11]. Finally, significant turning points in the recent development of AI included the launch of Apple's "Siri" in 2011, the defeat of two human champions by IBM's "Watson" on the TV game show "Jeopardy" in 2011, the autonomous navigation of Google's driverless cars in 2012, and the defeat of the world champion (Mr. Lee Sedol) at the difficult board game "Go" in 2016 [12].

2 Necessity

A variety of chatbots provide legal services. The most popular of these is DONOTPAY [13], which asks questions, gathers information, and drafts paperwork so that the user can handle the issue themselves. Because these are proprietary tools, further development on top of them is not possible. We are not aware of any chatbot created for children's rights that can match children's language to legal principles, even though a child is likely to express a problem in everyday phrases rather than legal terms. To handle this, we have developed our own corpora that can be used to train our system so that it can comprehend the issue and react accordingly.

3 Dataset

Machine learning techniques could be trained on legal corpora such as the British Law Report Corpus (BLaRC) [14] and Sigma Law [15], but their language is not representative of how young people write. There are no corpora dealing with legal issues that model children's language or are intended for use with young people, and compiling a corpus of children's words on delicate legal issues is challenging in and of itself. As a result, we developed a special corpus of messages. The corpus (Table 1) consists of "A" statements that roughly represent child language and "B" statements that represent adult-modelled child language.

4 Methodology

When a user accesses the application, a discussion graph (Fig. 1) is created to track the turns of the conversation with them. To engage with the user, the system carries out two actions:


Table 1 Number of child-language and adult-modelled messages in the corpus

Speech tone         | A (child language) | B (adult modelled) | Example
Greetings           | 53                 | 6                  | Hi, Whats up?
Statement           | 357                | 105                | I want to register a complaint
Response (positive) | 141                | 24                 | Yes
Response (negative) | 133                | 17                 | No

Legal type          | A (child language) | B (adult modelled) | Example
Abuse               | 167                | 42                 | My friend hits me what can I do?
Cyber crime         | 95                 | 25                 | Someone online stole my bank details
Hate crime          | 66                 | 13                 | There Religion is bad
Sexual assault      | 53                 | 27                 | My bf is forcing me for physical relation

1. Classify each user message (including the tone of the communication) by speech act and legal type. A neural network handles this classification task, and a predetermined answer appropriate to the circumstance is returned. Any input statement that is not understood produces a default response so that the dialogue can continue.
2. Detect the named entities (name, location, and time) during the discussion. At the end of the conversation, a report is produced for an advisor to take over (Table 2).

Classification of message function and content: to let the neural network (Fig. 2) categorise an input message by speech act and legal type, words are tokenised and turned into phrase vectors of 200 dimensions, intended to capture the semantic commonalities among phrases [16]. These phrase vectors pass through LSTM layers to encode the effect of word order on the meaning of the text. The output of these recurrent layers is then transformed by a dense layer with a ReLU activation function and a 20% dropout rate to reduce overfitting, and the statement is finally evaluated through parallel dense layers with softmax activations.

Named entity recognition: the system recognises and extracts named entities from the user's statements for later use, without the user having to be asked for the information explicitly. Regular expressions are used for well-formatted inputs such as email addresses; a neural network is used in all other cases [17] (Table 3).
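A minimal Keras sketch of the classifier as described (200-dimensional embeddings, stacked LSTM layers, a ReLU dense layer with 20% dropout, and two parallel softmax heads) is given below; the vocabulary size, sequence length, layer widths, and class counts are illustrative assumptions:

    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout

    VOCAB, SEQ_LEN = 5000, 30                # assumed tokeniser settings
    N_SPEECH_ACTS, N_LEGAL_TYPES = 4, 4      # per the categories of Table 1

    tokens = Input(shape=(SEQ_LEN,))
    x = Embedding(VOCAB, 200)(tokens)        # 200-dimensional phrase vectors
    x = LSTM(128, return_sequences=True)(x)  # recurrent layers encode word order
    x = LSTM(64)(x)
    x = Dense(64, activation="relu")(x)
    x = Dropout(0.2)(x)                      # 20% dropout against overfitting
    speech_act = Dense(N_SPEECH_ACTS, activation="softmax", name="speech_act")(x)
    legal_type = Dense(N_LEGAL_TYPES, activation="softmax", name="legal_type")(x)

    model = Model(tokens, [speech_act, legal_type])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")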


Fig. 1 Flowchart of dialogue used by the chatbot (Start → Greetings → Greeting response → Statement → "Does the user know his/her legal rights?"; if No, general questions are asked to generate the case; if Yes, a link to a website with appropriate assistance/information is given; then Generate case → Conclude → End)


Table 2 Information that the chatbot has to gather in order to help the advisor

Legal type           Required information
Basic information    Name of contact person, incident time, and contact info
Assault              Place of the event, abuser, and victim
Internet crime       Platform on which the crime was committed; the cause of the case
Racist attack        What act was committed and who committed it?
Physical harassment  Age of everyone involved; reason and situation of the crime

Fig. 2 Estimation layers in LSTM with speech and legal act classification

Table 3 Comparison between dense and recurrent neural network classification scores

Network type           Speech act F1-score (%)  Legal type F1-score (%)  Avg. F1-score (%)
Dense NN               96.26                    93.69                    95.65
Double-layer RNN       91.13                    93.26                    95.20
Pre-trained embedding  98.48                    98.16                    98.54

5 Results and Evaluation

5.1 Evaluation of the Classification

Feeding the subsequent layers descriptive vectors from a pre-trained embedding [18] yields the best score of 98.24%.

5.2 User Studies

We asked a select group [19] of participants to choose from various scenarios [20], then engage in conversation with the chatbot and answer seven [21] questions. Responses are scored from 1 (strongly disagree/no) to 5 (strongly agree/yes). Table 4 shows the overall participant average scores [22–24].

1. How user-friendly was the chatbot?


Table 4 Questionnaire responses

User study measure  Minimum  Maximum  Average
Ease of use         5        10       7.46
Politeness          6        9        6.83
Understanding       1        10       5.2
Future use          7        10       8.73

2. To what extent do you think the chatbot understood you?
3. Was it simple to rephrase the response if the chatbot did not understand?
4. Were the inquiries you received from the chatbot clear?
5. Did the exchange feel natural?
6. Are you interested in reusing the system in future?
7. Were you pleased with the encounter?

6 Conclusion and Future Work

This study presents a chatbot platform to improve children's access to legal guidance and awareness of their legal rights. The approach uses machine learning to predict the legal type that the user is describing. Further study will address the scope of legal action, and future work might broaden this strategy to cover additional population categories and legal case types.

References

1. Kerly A, Hall P, Bull S (2007) Bringing chatbots into education: towards natural language negotiation of open learner models. Knowl-Based Syst 20(2):177–185
2. Krausova A (2017) Intersections between law and artificial intelligence. Int J Comput (IJC) 27(1):55–68. https://core.ac.uk/download/pdf/229656008.pdf
3. Ben-Ari D, Frish Y, Lazovski A, Eldan U et al (2017) Artificial intelligence in the practice of law: an analysis and proof of concept experiment. Rich J L Tech 23(2):3–53. http://jolt.richmond.edu/index.php/volume23_issue2_greenbaum
4. Susskind RE (1990) Artificial intelligence, expert systems and law. Denning Law J 5(1):105–116. http://bjll.org/index.php/dlj/article/view/
5. Goanta C, van Dijck G, Spanakis G (2019) Back to the future: waves of legal scholarship on artificial intelligence. In: Ranchordás S, Roznai Y (eds) Time, law and change. Hart Publishing, Oxford
6. Hellyer P (2005) Assessing the influence of computer-assisted legal research: a study of California Supreme Court opinions. https://scholarship.law.wm.edu/libpubs/5
7. Medianik K (2018) Artificially intelligent lawyers: updating the model rules of professional conduct in accordance with the technological era. Cardozo Law Rev 39:1498–1530. http://cardozolawreview.com/wp-content/uploads/2018/07/MEDIANIK.39.4.pdf
8. Miller S (2012) For future reference, a pioneer in online reading. Wall St J. http://www.wsj.com/articles/SB10001424052970203721704577157211501855648


9. WIPO (2019) Technology trends 2019: artificial intelligence, p 39
10. Carrel A (2019) Legal intelligence through artificial intelligence requires emotional intelligence: a new competency model for the 21st century legal professional. Georgia State UL Rev 35(4):1153–1183. https://readingroom.law.gsu.edu/gsulr/vol35/iss4/4
11. Ben-Ari D et al, supra note 15:3–53
12. WIPO Technology Trends, 'Artificial Intelligence' (2019) 19; OECD (2019) 20; see 'Go the Movie' on YouTube, https://www.youtube.com/watch?v=WXuK6gekU1Y; Livingston S, Risse M (2019) The future impact of artificial intelligence on humans and human rights. Ethics & International Affairs 33(2):141–158. https://www.hks.harvard.edu/publications/future-impact-artificial-intelligence
13. Boynton S (2017) DoNotPay, world's first robot lawyer, coming to Vancouver to help fight parking tickets. Global News, 1 Nov 2017. https://globalnews.ca/news/3838307/donotpay-robotlawyer-vancouverparking-tickets
14. Pérez MJM, Rizzo CR (1996) Design and compilation of a legal English corpus based on UK law reports: the process of making decisions. In: Las TIC: Presente y Futuro en el Análisis de Corpus, pp 101–110
15. Sigma Law. https://osf.io/qvg8s/wiki/home/
16. Faruqui M, Dyer C (2014) Improving vector space word representations using multilingual correlation. In: Proceedings of the 14th conference of the European chapter of the Association for Computational Linguistics, pp 462–471
17. Honnibal M, Johnson M (2015) An improved non-monotonic transition system for dependency parsing. In: Proceedings of the 2015 conference on empirical methods in natural language processing, pp 1373–1378
18. Pennington J, Socher R, Manning C (2014) GloVe: global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing, pp 1532–1543
19. Joshi K, Kumar M, Memoria M, Bhardwaj P, Chhabra G, Baloni D (2022) Big data F5 load balancer with chatbots framework. In: Rising threats in expert applications and solutions: proceedings of FICR-TEAS 2022. Springer Nature Singapore, Singapore, pp 709–717
20. Diwakar M, Tripathi A, Joshi K, Memoria M, Singh P (2021) Latest trends on heart disease prediction using machine learning and image fusion. Mater Today: Proc 37:3213–3218
21. Diwakar M, Tripathi A, Joshi K, Sharma A, Singh P, Memoria M (2021) A comparative review: medical image fusion using SWT and DWT. Mater Today: Proc 37:3411–3416
22. OECD (2019) Artificial intelligence in society. OECD Publishing, Paris, p 20. https://doi.org/10.1787/eedfee77-en
23. Dhaundiyal R, Tripathi A, Joshi K, Diwakar M, Singh P (2020) Clustering based multimodality medical image fusion. J Phys: Conf Ser 1478(1):012024. IOP Publishing
24. Gupta S, Bharti V, Kumar A (2019) A survey on various machine learning algorithms for disease prediction. Int J Recent Technol Eng 7(6c):84–87

Automatic Speed Control of Vehicles in Speed Limit Zones Using IR Sensor Riya Kukreti, Ritu Pal, Pratibha Dimri, Sakshi Koli, and Kapil Joshi

Abstract Because people drive so fast today, accidents happen regularly, and precious lives are lost to careless driving near school zones, agrarian highways, hospitals, and similar areas. Highway authorities have therefore installed signboards in such spots to prevent unwelcome accidents, notify drivers, and help them control their throttle. The purpose of the automatic throttle system is to regulate vehicle speed in designated locations in order to prevent and reduce accidents in low-speed areas. In this method, the low-speed zone begins 100 m before the traffic signal. The study is based on light-vehicle speed control. When a vehicle enters a low-speed zone while travelling at full speed, its speed is automatically decreased to that zone's permitted limit. The microcontroller uses the sensors to determine the vehicle's speed; based on this information, a control signal is produced, which activates the speed control mechanism in the vehicle and reduces the speed to the limit permitted in that zone. We also include additional features such as metal detection for bombs, an automatic light sensor, object detection, and fire detection. Keywords Arduino Uno · IR transmitter and receiver module · Flame sensor · LDR · Relay driver · L293D motor driver · BO motor · Chassis 7805 voltage regulator

R. Kukreti · R. Pal · P. Dimri
Department of Computer Science Engineering, Tula's Institute, Dehradun, India
S. Koli · K. Joshi (B)
Department of Computer Science Engineering, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_36

1 Introduction


Road infrastructure is a significant issue in today's scenario. According to recent studies, changes in the roadway, such as the presence of construction zones or unforeseen impediments, as well as excessive or unsuitable speed, are linked to one-third of fatal or serious accidents [1]. Traffic authorities, the auto industry, and transport research organisations are very concerned with reducing the frequency of accidents and mitigating their effects [2]. One key step is to use advanced driver assistance systems (ADAS): aural, haptic, or visual indications produced by the vehicle that alert the driver to a possible accident [3]. Commercial vehicles are currently equipped with such technologies, and according to emerging trends, safety criteria will be met through automatic driving features and a rise in the number of sensors on both the road and the vehicle. Adaptive cruise control (ACC), which keeps a safe distance from the car in front, and cruise control (CC), which maintains a consistent pace, are two well-known instances of driver assistance systems [4, 5].

Many accidents occur at traffic signals these days as a result of increased traffic and reckless driving [6]. When a car is accelerated, the engine fires faster: as the throttle opens further, the engine draws in more of the fuel–air mixture, which burns and releases more energy [7]. We have therefore included a speed-limiting device in this system that effectively cuts down the amount of fuel going to the engine. The system's goal is to propose a conceptual model of automatic, microcontroller-based variable electric speed control: a system that regulates a vehicle's speed according to the posted speed limit. The speed-limiting mechanism is the major component of this system. A limiting mechanism is a device employed to regulate an engine's speed in accordance with the demands of the load: the fundamental limiter adjusts the energy supply to maintain the specified level by detecting the speed and, occasionally, the load of a prime mover. It is therefore described as a mechanism that provides automated speed control or limiting. The limiting mechanism operates on the feedback control principle; its primary duty is to regulate the speed of the prime mover when the load changes, though it has little influence over how quickly the cycle changes (Fig. 1).
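As a minimal sketch of the feedback principle just described (the gain and speeds below are assumed values, not the project's calibration), a proportional limiter measures the speed, compares it with the zone limit, and trims the throttle command while an error remains:

def limit_speed(current_kmh, limit_kmh, throttle, kp=0.02):
    """Proportional limiter: cut throttle while the vehicle exceeds the limit."""
    error = current_kmh - limit_kmh      # positive when over the permitted speed
    if error > 0:
        throttle = max(0.0, throttle - kp * error)
    return throttle

# Example: a vehicle at 80 km/h entering a 40 km/h zone at full throttle.
print(limit_speed(80.0, 40.0, 1.0))      # -> 0.2, i.e. the throttle is cut sharply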

Fig. 1 Diagrammatic representation of proposed project


Fig. 2 Infrared receiver

2 Circuit Parts

2.1 Infrared Receiver

Infrared receivers, which pick up radiation from an IR transmitter, are also known as infrared sensors [8]. Photodiodes and phototransistors are used as IR receivers. Infrared photodiodes are distinct from ordinary photodiodes because they exclusively pick up infrared light. A typical photodiode or IR receiver is shown in Fig. 2.

2.2 Infrared Transmitter

Infrared radiation is produced by an infrared transmitter, a type of light-emitting diode (LED); such devices are therefore referred to as IR LEDs. Although an IR LED looks like a standard LED, the radiation it generates is invisible to the human eye [9] (Fig. 3).

Fig. 3 Infrared transmitter


Fig. 4 Motor

2.3 Motor

A motor transforms electrical energy into mechanical energy. Electric motors are available in a range of sizes and ratings. Elevators, rolling mills, and electric trains are examples of large electric motor usage, while power tools, vehicles, and robots use small electric motors. Electric motors fall into two categories, direct current (DC) motors and alternating current (AC) motors, and both perform the same tasks (Fig. 4).

2.4 Relay Driver

The output from a microprocessor or low-power circuit is extremely small. It can light an LED, but to drive a heavy load you need a relay (an electromagnetic switch) [10], and a relay driver to supply the right voltage or current to the relay. A single transistor with a resistor is frequently sufficient to create a relay driver: the transistor serves as a current amplifier, while the relay performs two functions. (a) Relays protect delicate electronic components by isolating the current (the flow of electrons), which is crucial because high-load appliances operate at various voltages (potential differences). (b) Relays are electromagnetic switches: a mechanical switch pulled by an electromagnet, giving an extremely low contact resistance and enabling it to control powerful appliances. With a strong current we can manage numerous appliances and other pieces of machinery, and a microcontroller has direct control over it (Fig. 5).

Fig. 5 Relay driver


Fig. 6 LDR

2.5 LDR

Photoresistors, also known as light-dependent resistors (LDRs), are frequently utilized to measure light intensity or to signal the presence or absence of light [11]. The LDR's resistance drops rapidly when exposed to light, possibly reaching just a few ohms depending on the light level; in the dark, its resistance is very high, typically approaching 1 MΩ. LDRs are nonlinear devices whose sensitivity depends on the light's wavelength. Although they are used in many different applications, other devices, such as photodiodes and phototransistors, frequently carry out this light-sensing function as well (Fig. 6).

2.6 Flame Sensor

A flame sensor is a detector designed primarily to notice and react to the appearance of a fire or flame [12]; its placement can influence its responsiveness to flame detection. It can be fitted to a natural gas line, a propane line, an alarm system, or a fire suppression system, and it is utilized in industrial boilers, mostly to ensure that the boiler is functioning properly. Because of how they detect the flame, these sensors react more quickly and precisely than heat or smoke detectors [13] (Fig. 7).

2.7 Chassis 7805 Voltage Regulator

A voltage regulator is the electrical component used to keep a steady voltage across an electronic device; voltage swings can lead to unfavourable outcomes in an electronic system. To meet the system's voltage need, keeping a stable


Fig. 7 Flame sensor

Fig. 8 Chassis 7805 voltage regulator

voltage is necessary. The IC 7805 [14] is a linear voltage regulator with three terminals, providing a fixed output voltage of 5 V that is applied in many different situations. Current manufacturers include STMicroelectronics, ON Semiconductor, Texas Instruments, Infineon Technologies, Diodes Incorporated, etc. (Fig. 8).

3 Circuit Diagram

See Fig. 9.

4 Working

The project uses a two-part transmitter and receiver. When the micro-switches are pressed, the microcontroller examines the key input to ascertain which key was pressed and what data or information was transferred [15]. Following this procedure, the receiver microcontroller collects the data that the IR module has received and decodes the encoded information signal. Using a microcontroller and the seven-segment display, data can be sent in DC. The transmitter's IR module takes the feedback data and transmits it once more


Fig. 9 Circuit diagram of ADAS

to the receiver, which sends feedback that the microcontroller decodes and displays on the liquid crystal display (LCD). Frequency modulation is essential to the entire process. In the beginning, power is given to the transmitter side and its input signal, and the keypad supplies the alphanumeric data that the encoder receives. Electrical signals are transformed into coded values by the encoder and sent via an IR transmitter before being picked up by a receiver. When the transmitter frequency is interfaced, the vehicle receives the frequency route through the IR signal. After being passed to the decoder, the information in the receiver is converted back to decimal form. Once the microcontroller and decoder information are interfaced, the movement is controlled by the electromechanical valve; the controller is thus programmed to drive the system and bring the signal to an acceptable level. The valve controls the fuel flow rate.

In this project, we additionally introduce new functionalities such as object detection, fire detection, and light detection. For security reasons, metallic objects like bombs and firearms are recognized by the metal detector [16], preventing individuals from carrying metallic objects, bombs, knives, or guns into public spaces such as theatres, shopping centres, parks, airports, hotels, and train stations. A metal detector is a proximity sensor used to create a security system [17]; it is utilized to find any surrounding metals or the presence of hidden objects. The metal detector operates by transmitting the electromagnetic field from its search coil into the ground: metals exposed to this field become energized [18] and emit their own electromagnetic field, and when the search coil picks up the retransmitted field, the user is alerted by a metal response [19]. The object-detection circuit is made up of an LED, a potentiometer, a diode, a MOSFET, and an IR phototransistor. Any infrared radiation that strikes the


Fig. 10 Working model

phototransistor causes current to flow through it, turning on the MOSFET [20–24]. This in turn causes the LED, which serves as the load, to light up. The phototransistor's sensitivity is managed via a potentiometer (Fig. 10).
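The decision logic described in this section can be summarised in a short host-side sketch; the real system runs equivalent logic as firmware on the Arduino Uno, and the sensor flags and zone limit here are assumptions for illustration.

ZONE_LIMIT_KMH = 30    # assumed permitted speed inside an IR-marked zone

def control_step(ir_zone_detected, metal_detected, flame_detected,
                 obstacle_detected, speed_kmh):
    """Decide the motor command for one control-loop iteration."""
    if metal_detected or flame_detected:
        return "STOP_AND_ALERT"       # bomb or fire hazard: halt and raise the alarm
    if obstacle_detected:
        return "SLOW"                 # obstacle ahead: proceed more slowly
    if ir_zone_detected and speed_kmh > ZONE_LIMIT_KMH:
        return "REDUCE_TO_LIMIT"      # cap the speed to the zone's permitted limit
    return "MAINTAIN"

print(control_step(True, False, False, False, 55))   # -> REDUCE_TO_LIMIT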

5 Applications and Advantages

The applications and advantages of this project are as follows:

1. Straightforward project execution.
2. Limiting speed in vulnerable places like schools and hospitals.
3. Proceeding more slowly if an obstacle is found.
4. Bombs and other dangerous items can be found easily.
5. The transmitter only needs minimal power to perform this task.
6. Suitable for all auto systems.
7. Sharp edges are detected.
8. Many accidents can be avoided.
9. A big help to the traffic department.
10. Speed limits for automobiles are enforced in rush-hour regions and near hospitals, colleges, schools, and city centres.
11. Object detection is possible.

6 Scope for Future

Nothing is more essential than human life; hence, the scope of this project idea is quite promising. Every vehicle will have a system like this in the future, which will lower the number of deadly collisions. The use of ultrasonic sensors can boost the system's capability: any obstructions on the road, such as speed breakers or buildings, can be found using ultrasonic sensors. Ultrasonic sensor technology will be highly useful when driving in steep locations with tight curves, especially at night. Due to its ability to automatically recognize the car in front and maintain a safe


distance and speed, this technology would help lessen vehicle collisions on the road when overtaking or travelling at a fast speed.

7 Conclusion

People frequently rush when driving in places with heavy traffic, and this hurriedness frequently results in someone losing their own life or another person's life on the road. Our technology, based on an "Automatic Vehicle Speed Control System," is very important for preventing and reducing accidents and fatalities in places with heavy traffic. In this project, a system monitors the vehicle's speed using IR sensors and a microcontroller and alerts the driver to slow down if the speed is too high. If the driver does not slow down, the system takes over and reduces the vehicle's speed automatically within a few seconds. This project is thus an excellent strategy for saving lives in high-traffic areas, and with it, all vehicles, regardless of price or brand, can be managed.

References

1. World Health Organization (WHO) (2022) Road traffic injuries. https://www.who.int/en/news-room/fact-sheets/detail/road-traffic-injuries. Accessed 20 Nov 2022
2. Martinez FJ, Toh CK, Cano JC, Calafate CT, Manzoni P (2010) Emergency services in future intelligent transportation systems based on vehicular communication networks. IEEE Intell Transp Syst Mag 2(2):6–20
3. Wu C, Cao J, Du Y (2022) Impacts of advanced driver assistance systems on commercial truck driver behaviour performance using naturalistic data. IET Intell Transp Syst
4. Pérez J, Seco F, Milanés V, Jiménez A, Díaz JC, De Pedro T (2010) An RFID-based intelligent vehicle speed controller using active traffic signals. Sensors 10(6):5872–5887
5. Eom H, Lee SH (2015) Human-automation interaction design for adaptive cruise control systems of ground vehicles. Sensors 15(6):13916–13944
6. Yang X et al (2022) Physical security and safety of IoT equipment: a survey of recent advances and opportunities. IEEE Trans Ind Inform 18(7):4319–4330
7. Agarwal AK (2007) Biofuels (alcohols and biodiesel) applications as fuels for internal combustion engines. Prog Energy Combust Sci 33(3):233–271
8. Le GD (2015) Localization with symbolic precision using diffuse infrared radiation. In: SCC 2015; 10th international ITG conference on systems, communications and coding. VDE, pp 1–6
9. Dawson W, Nakanishi-Ueda T, Armstrong D, Reitze D, Samuelson D, Hope M, Fukuda S, Ozawa T, Ueda T, Koide R (2001) Local fundus response to blue (LED and laser) and infrared (LED and laser) sources. Exp Eye Res 73(1):137–147
10. Alsafy OMAH, Hammad SAMAH, Hamid YAM (2017) Auto power control of four different sources to ensure no power break. Doctoral dissertation, Sudan University of Science and Technology
11. Al-Subhi B, Hasoon FN, Fadhil HA, Manic S, Biju R (2019) Smart vehicle headlights control system. AIP Conf Proc 2137(1):030001. AIP Publishing LLC


12. Chen W, Liu P, Liu Y, Wang Q, Duan W (2018) A temperature-induced conductive coating via layer-by-layer assembly of functionalized graphene oxide and carbon nanotubes for a flexible, adjustable response time flame sensor. Chem Eng J 353:115–125
13. Chowdhury N, Mushfiq DR, Chowdhury AE (2019) Computer vision and smoke sensor based fire detection system. In: 2019 1st international conference on advances in science, engineering and robotics technology (ICASERT). IEEE, pp 1–5
14. Jagtap A, Patil J, Patil B, Patil D, Ansari AAH, Barhate A (2017) Arduino based underground cable fault detector. Int J Res Eng Appl Manag (IJREAM) 3(4):88–92
15. Bansal S, Kumar D (2020) IoT ecosystem: a survey on devices, gateways, operating systems, middleware and communication. Int J Wireless Inf Netw 27(3):340–364
16. Ahmed SS (2021) Microwave imaging in security—two decades of innovation. IEEE J Microw 1(1):191–201
17. Reddy GPR, Sree VS, Priyadarshini YSI, Kumar AS (2022) Arduino based metal detector for military security. J Control Instrum Eng 33–44 (e-ISSN: 2582-3000)
18. Ramos E (2012) Arduino basics. In: Arduino and Kinect projects. Apress, Berkeley, CA, pp 1–22
19. Alves CM, Rezende AR, Marques IA, Naves ELM (2021) SpES: a new portable device for objective assessment of hypertonia in clinical practice. Comput Biol Med 134:104486
20. Oroceo PP, Kim JI, Caliwag EMF, Kim SH, Lim W (2022) Optimizing face recognition inference with a collaborative edge-cloud network. Sensors 22(21):8371
21. D'Ausilio A (2012) Arduino: a low-cost multipurpose lab equipment. Behav Res Methods 44(2):305–313
22. Humaid W (2019) Automatic medicine reminder using Arduino
23. Sharma M, Rastogi R, Arya N, Akram SV, Singh R, Gehlot A, Joshi K (2022) LoED: LoRa and edge computing based system architecture for sustainable forest monitoring. Int J Eng Trends Technol 70(5):88–93
24. Hilal AM, Alsolai H, Al-Wesabi FN, Nour MK, Motwakel A, Kumar A, Zamani AS (2022) Fuzzy cognitive maps with bird swarm intelligence optimization-based remote sensing image classification. Comput Intell Neurosci

Ordering Services Modelling in Blockchain Platform for Food Supply Chain Management Pratibha Deshmukh, Sharma Avinash, Atul M. Gonsai, Sayas S. Sonawane, and Taukir Khan

Abstract Blockchain technology (BCT) is widely used in applications where different stakeholders, each with their own interests, come together to cooperate and coordinate in a trustless, decentralized environment, and it is well suited to supply chain management (SCM) applications. The food supply chain is a sensitive area, and it is difficult to handle problems of food traceability, safety, stakeholder authentication, security and consensus, transparency in transactions, food shortfalls, matching demand and supply, wastage of material, etc. This research encourages different stakeholders to come together on the same platform and cooperate and coordinate with each other to achieve a common goal. A Hyperledger Fabric transaction flow is formed to record the world state by passing communication over peer-to-peer channels from farmers to distributors, distributors to the food processing industry, the food processing industry to distributors, distributors to retailers, and finally retailers to customers. The proposed research uses the core components of blockchain technology by designing a specific architecture and writing smart contracts on the Ethereum platform, which facilitates a certain kind of food supply chain ordering service as a public permissioned platform. It uses a blockchain platform with the properties of a distributed, peer-to-peer supply chain architecture, and on this basis the analysis of SCM is shown through experimental results.

P. Deshmukh (B) · S. S. Sonawane · T. Khan
Bharati Vidyapeeth's Institute of Management and Information Technology, Navi Mumbai, Maharashtra, India
e-mail: [email protected]
S. Avinash
Department of Computer Science and Eng., Maharishi Markandeshwar (Deemed to Be University), Mullana-Ambala, HR, India
e-mail: [email protected]
A. M. Gonsai
Department of Computer Science, Saurashtra University, Rajkot, GJ, India
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_37


Keywords Blockchain technology · Stakeholders · Supply chain management (SCM) · Hyperledger fabric · Smart contracts · Ethereum

1 Introduction

India is one of the agro-based countries of the world, with a history of agriculture starting from the Indus Valley Civilization. In many places farmers are starving for income, while others are earning lakhs from the same produce. The reason is very poor supply chain management: the chain is not maintained properly, and farmers are unable to track the status of their goods through the various phases of transportation. This paper proposes BCT to manage the transparency of goods' status, which leads to a good association between producer and consumer; moreover, the immutable records remain visible forever [1]. SCM faces many challenges, such as product information, product lifecycle, and transport issues, and these issues can be resolved using BCT. The analysis is performed using its core characteristics to define a conceptual framework, and further development is possible once blockchain is integrated with the SCM process [2]. Blockchain technology has seen large growth in various industries, including medical functions and disaster relief operations; combining blockchain and supply chains assists members in tracking and recording cost, schedule, current location, quality, etc., with a positive effect on any business organization [3]. Blockchain technology is also expanding in the retail industry, transforming retailers in order to improve customer confidence and loyalty, and it is helpful for analysing the impact of BCT integration on business processes [4]. BCT has become a critical priority for enterprises seeking to reform and reconstruct their businesses; one study examined the strengths and weaknesses of BCT in supply chain management and discussed the example of a decentralized application named "frozen food factory" with its development process steps [5]. SCM mechanisms usually suffer from lack of information sharing, long delays in data retrieval, unreliability, poor product tracing, etc. BCT shows great potential to tackle these issues thanks to features like immutability, transparency, decentralization, proof-of-concept, etc.; one paper provides a comprehensive analysis of the potential opportunities, new requirements, and design principles of BCT-based SCM systems, and reports a case study of designing a blockchain-based food traceability system to tackle technical challenges in practice [6]. Research first examined logistics and supply chain management (LSCM) in 2016 through a bibliometric analysis of blockchain technology, establishing the status of research with respect to BCT and LSCM; it organizes the existing literature into clusters consisting of theoretical sense-making, conceptualizing, testing, and the technical design of BCT for such real-world applications [7]. Dairy products have been maintained, linked, and tracked from producer to consumer with variables like location, temperature, humidity, and motion using IoT sensors in a real-time environment [8]. Information transparency and security can be well maintained using BCT, with massive benefits to the user, and various algorithmic patterns are combined to ensure


the security level. BCT is most commonly used in e-voting, supply chains, etc.; the study presented various types of techniques and methods used in supply chains under BCT [9, 19, 20]. We selected the BCT model for the ordering services of a food SCM as an example for reaching our conclusions. SCM is a process wherein assets are transferred via transactions or smart contracts to solve the issues of (i) verification of the authentication process, (ii) design of a decentralized system to capture fruits, vegetables, and processed products, (iii) ownership of data with respect to farmers, distributors, the food processing industry, and retailers, and (iv) bill payment encrypted with a public key only. Modelling the BCT platform in the domain of SCM will provide services to the stakeholders for transferring and tracking assets in terms of specific food items and processed products. The distributed ledger keeps the real state of transactions immutably using Ethereum. The transferred assets, stakeholders, stages in the implementation of the ordering-services food supply chain, and the blockchain technologies used for implementation are shown below.

1.1 Transferring of Assets

1.2 List of Stakeholders

The stakeholders involved in the ordering services, as individuals or organizations, are listed below:

● Farmer – producer
● Food Processing Industry – producer, buyer, and seller
● Distributor – logistic support
● Retailer – buyer, seller
● Customer – buyer

1.3 Blockchain Technologies Used for Implementation

● Technology: HTML, CSS, ReactJS
● Blockchain network: Web3 API, Smart Contracts, Ethereum, Remix Console, Solidity, Ganache, Metamask
● Browser: Chrome, Microsoft Edge, Firefox

2 Blockchain Architecture for Ordering Services Food Supply Chain

This architecture represents the various stages in the implementation of the ordering-services food supply chain. The dependencies of the transactions for achieving the ordering services are implemented in the following steps (a minimal off-chain sketch of this flow is given after the list):

Step 1: Farmer → adds fruits and vegetables, takes orders, and sells to the Food Processing Industry and the Retailer via the Distributor at large scale.
Step 2: Food Processing Industry/Manufacturer of processed and packaged products → (raw material → food processing → packed product → selling to Retailers).
Step 3: Distributor → collects fruits and vegetables and transfers them to the Food Processing Industry and the Retailer; transfers packed products from the Food Processing Industry → Retailer.
Step 4: Retailer → buys fruits and vegetables from the Farmer and packed products from the Food Processing Industry; sells the products (fruits, vegetables, and packed products) → Customer.
Step 5: Customer → buys from the Retailer.
Step 6: Payment → payment is done using Ethereum (Fig. 1).
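The following is a minimal off-chain sketch of the Step 1–6 asset flow, not the deployed contract itself: each product lot moves through the stakeholder chain, and every hand-off is appended to a history list that stands in for the ledger's world state. The stage names follow the architecture above; the lot identifier is a hypothetical example.

from dataclasses import dataclass, field

STAGES = ["Farmer", "Distributor", "FoodProcessingIndustry",
          "Distributor", "Retailer", "Customer"]

@dataclass
class ProductLot:
    lot_id: str
    stage_index: int = 0
    history: list = field(default_factory=list)

    def transfer(self):
        """Advance the lot to the next stakeholder and record the hand-off."""
        if self.stage_index >= len(STAGES) - 1:
            raise ValueError("lot already delivered to the customer")
        src, dst = STAGES[self.stage_index], STAGES[self.stage_index + 1]
        self.stage_index += 1
        self.history.append((src, dst))

lot = ProductLot("MANGO-001")
for _ in range(len(STAGES) - 1):
    lot.transfer()
print(lot.history)    # the full farm-to-customer trace of the lot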

3 Smart Contracts Implemented in Ordering Services Food Supply Chain

In smart contracts, code is deployed to a blockchain platform where it gets executed. The smart contracts are written to maintain the transactions and transfer assets from one location to another. Hyperledger Fabric is a more generic


Fig. 1 Blockchain architecture for ordering services food supply chain

process for architecting a hash-chain system; its design is based on a proper architecture and consensus mechanism for the various transactions and ordering services of a system. The smart contracts, written in Solidity and encoding the business logic, are migrated onto the Ethereum platform as shown below (Figs. 2, 3 and 4). Solidity is a smart-contract programming language created around the unique technologies of the Ethereum framework and applied in BCT, where smart contracts are designed and constructed on blockchain platforms. It helps implement the business logic and generates a hash chain recording the ordering services as a transaction flow.
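As a hedged illustration of how a client could invoke such a contract with web3.py against a local Ganache node (both listed in Sect. 1.3): the contract address, ABI, the transferAsset function, and the account strings below are hypothetical placeholders, not the authors' actual interface.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))     # Ganache's default RPC endpoint
# Placeholders: a real client would paste the deployed address and compiled ABI here.
contract = w3.eth.contract(address="0x...", abi=[...])

# Hypothetical ordering-service call: transfer a lot to the next stakeholder.
tx_hash = contract.functions.transferAsset(
    "MANGO-001",            # hypothetical lot identifier
    "0xRetailerAddress"     # hypothetical recipient account
).transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)    # mined = recorded in world state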

4 Analysis of SCM Implementation Using BCT

Walmart implemented BCT for supplying food items, enabling end users to trace original food items and raw materials. In association with


Fig. 2 List of smart contracts migrated using Solidity

Fig. 3 Example of a smart contract written in Solidity


Fig. 4 User interface

IBM, Walmart implemented the food supply chain. One project covered mangoes sold in the US and another pork sold in China. Before digitizing the food supply chain process, tracing a food product took a minimum of seven days; the same process now executes in 2.2 s. Walmart has developed a BCT food supply chain for twenty-five food products, facilitating the tracing of food items' origin across its suppliers [10]. Several food supply chains using BCT core functions already exist; details are shown below (Table 1).

Table 1 List of food items and the companies that implemented BCT

Food items          Company name            Advantages
Olive oil           OlivaCoin               Supports traceability, security against price fluctuation, and ensuring quality
Soya                Archer Daniels Midland  Identification, verification and validation; sourcing of raw materials with lower environmental impact
Mangoes, sugarcane  Coca-Cola               Record keeping and verification
Mangoes, pork       Walmart                 Traceability
Potatoes            Frito-Lay               Reduced food spoilage and wastage
Rice                Oxfam                   Transparency and tracking
Wheat               Britannia               Transparency, efficiency


5 Conclusion

This research demonstrates the relevance of decentralized food supply chains for ordering services, helping to resolve problems of transparency, communication, and coordination gaps between stakeholders pursuing a common goal, along with verification and validation. It follows a systematic and disciplined methodological approach to BCT, using smart contracts for ordering services without any monopoly or mediator, with a proper consensus mechanism and a public permissioned blockchain architecture best suited to such applications. It also helps match demand and supply, transfer assets, and seize market opportunities quickly. This is a good example of a public permissioned BCT platform. In addition, AI and/or IoT can be integrated with such a hash chain to achieve further safety properties.

References

1. Sudha V, Kalaiselvi R, Shanmughasundaram P (2021) Blockchain based solution to improve the supply chain management in Indian agriculture. In: 2021 international conference on artificial intelligence and smart systems (ICAIS)
2. Yousuf S, Svetinovic D (2019) Blockchain technology in supply chain management: preliminary study. In: 2019 sixth international conference on internet of things: systems, management and security (IOTSMS)
3. Nirantar K, Karmakar R, Hiremath P, Chaudhari D (2022) Blockchain based supply chain management. In: 2022 3rd international conference for emerging technology (INCET)
4. Hader M, Elmhamedi A, Abouabdellah A (2020) Blockchain technology in supply chain management and loyalty programs: toward blockchain implementation in retail market. In: 2020 IEEE 13th international colloquium of logistics and supply chain management (LOGISTIQUA)
5. Abdelgalil T, Manolas V, Maglaras L, Kantzavelou I, Ferrag MA (2021) Blockchain technology: a case study in supply chain management. In: 2021 third IEEE international conference on trust, privacy and security in intelligent systems and applications (TPS-ISA)
6. Wu H et al (2019) Data management in supply chain using blockchain: challenges and a case study. In: 2019 28th international conference on computer communication and networks (ICCCN)
7. Müßigmann B, von der Gracht H, Hartmann E (2020) Blockchain technology in logistics and supply chain management—a bibliometric literature review from 2016 to Jan 2020. IEEE Trans Eng Manag, Nov 2020
8. Bhalerao S, Agarwal S, Borkar S, Anekar S, Kulkarni N, Bhagwat S (2019) Supply chain management using blockchain. In: 2019 international conference on intelligent sustainable systems (ICISS)
9. Raj Y, B S (2021) Study on supply chain management using blockchain technology. In: 2021 6th international conference on inventive computation technologies (ICICT)
10. Sathya D, Nithyaroopa S, Jagadeesan D, Jacob IJ (2021) Block-chain technology for food supply chains. In: 2021 third international conference on intelligent communication technologies and virtual mobile networks (ICICV)
11. Surjandy M, Spits Warnars HLH, Abdurachman E (2020) Blockchain technology open problems and impact to supply chain management in automotive component industry. In: 2020 6th international conference on computing engineering and design (ICCED)


12. Vo KT, Nguyen-Thi A-T, Nguyen-Hoang T-A (2021) Building sustainable food supply chain management system based on hyperledger fabric blockchain. In: 2021 15th international conference on advanced computing and applications (ACOMP)
13. Wang J, Yang X, Qu C (2020) Sustainable food supply chain management and firm performance: the mediating effect of food safety level. In: 2020 IEEE 20th international conference on software quality, reliability and security companion (QRS-C)
14. Baralla G, Pinna A, Corrias G (2019) Ensure traceability in European food supply chain by using a blockchain system. In: 2019 IEEE/ACM 2nd international workshop on emerging trends in software engineering for blockchain (WETSEB)
15. Salah K, Nizamuddin N, Jayaraman R, Omar M (2019) Blockchain-based soybean traceability in agricultural supply chain. IEEE Access
16. Sugandh U, Nigam S, Khari M (2022) Blockchain technology in agriculture for Indian farmers: a systematic literature review, challenges, and solutions. IEEE Syst Man Cybern Mag
17. Deshmukh P, Patil H, Jamsandekar P. Blockchain model for cane development with respect to farmers. Int Res J Eng Manag Stud (IRJEMS) 03(04)
18. Deshmukh PM, Kubal PPJ. Designing blockchain model for cane billing system with respect to farmers. Int J Res Anal Rev (IJRAR)

Impact of Cryptocurrency on Global Economy and Its Influence on Indian Economy B. Umamaheswari, Priyanka Mitra, Somya Agrawal, and Vijeta Kumawat

Abstract Cryptocurrency is a social, cultural, and technological advancement that goes far beyond financial innovation. Cryptocurrencies have the ability to significantly boost the economy due to their openness. Digital assets governed by cryptographic methods are called cryptocurrencies, and different subtypes exist; the most well-known is arguably Bitcoin (BTC). Bitcoin has developed into a legitimate investment option and is on its way to having a significant impact on the globe. With an increase of almost 87% in 2019, Bitcoin had been doing well until mid-February 2020, when it fell by over 15% at the month's end as world stocks collapsed as a result of COVID-19. Yet the use of cryptocurrencies globally surged by 880% in 2021, according to the Global Crypto Adoption Index provided by Chainalysis; with an index score of 0.37, India came in second place after Vietnam, and in just one year the Indian cryptocurrency market grew by 641%. The international cryptocurrency market is very promising and growing swiftly, and it could be a major market for India as well. It is hard to forecast what will happen to the Bitcoin or cryptocurrency market in 2023 and beyond, but by keeping watch on a few significant crypto patterns, we will be able to make wiser investing decisions as the market matures. The future of cryptocurrency appears bright, and India is poised to take a large share of the global market. Keywords Cryptocurrency · Bitcoin · Economy · Digital currency · Stock market

B. Umamaheswari · P. Mitra · S. Agrawal · V. Kumawat (B)
JECRC, Sitapura Jaipur, Rajasthan, India
e-mail: [email protected]
B. Umamaheswari
e-mail: [email protected]
P. Mitra
e-mail: [email protected]
S. Agrawal
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_38


1 Introduction

Bitcoin was first introduced publicly in 2009 by an unidentified developer or group of developers under the name Satoshi Nakamoto [1]. It was presented to the world as a digital currency in the paper titled "Bitcoin: A Peer-to-Peer Electronic Cash System". Since then it has become the best-known cryptocurrency in the world, and its popularity has inspired the development of many other cryptocurrencies. The structure created by the Bitcoin network allows payments to be made easily without any third party as an intermediary. Bitcoin is a protocol that maintains a public, decentralized ledger: in order to update the ledger, a user must prove control of a ledger entry, and an update allocates part of that user's bitcoin to another ledger entry. Because the token holds the characteristics of money [2, 3], it can be considered a digital currency.
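A toy sketch of the ledger-update rule just described follows, under loudly simplified assumptions: real Bitcoin proves control with ECDSA signatures over secp256k1, whereas here an HMAC over a private seed stands in for the signature purely for brevity.

import hashlib
import hmac

ledger = {"alice": 10.0, "bob": 0.0}          # toy ledger entries (balances)
keys = {"alice": b"alice-private-seed"}       # assumption: toy private-key store

def sign(owner, message):
    """Stand-in 'signature' proving control of a ledger entry."""
    return hmac.new(keys[owner], message, hashlib.sha256).hexdigest()

def transfer(src, dst, amount, signature):
    """Update the ledger only if the sender proves control of the entry."""
    msg = f"{src}->{dst}:{amount}".encode()
    if not hmac.compare_digest(signature, sign(src, msg)):
        raise ValueError("signature does not prove control of the entry")
    if ledger.get(src, 0.0) < amount:
        raise ValueError("insufficient balance")
    ledger[src] -= amount
    ledger[dst] = ledger.get(dst, 0.0) + amount

transfer("alice", "bob", 2.5, sign("alice", b"alice->bob:2.5"))
print(ledger)                                  # {'alice': 7.5, 'bob': 2.5}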

1.1 History

The domain bitcoin.org was registered on August 18, 2008 [4]. On October 31, 2008, a link to an article by Satoshi Nakamoto was posted on a cryptography mailing list [5], where it gained attention. The Bitcoin software was implemented by Satoshi Nakamoto as open-source code and released in January 2009 [6, 7]. There is no uniform convention for the capitalization of bitcoin: some sources use Bitcoin, capitalized, to indicate the technology and network, and bitcoin, in lower case, for the unit of account of the digitized currency. Nakamoto created the Bitcoin network on January 3, 2009, by mining the initial block of the chain, named the genesis block [8]. The recipient of the very first Bitcoin transaction was Hal Finney, who had developed the reusable proof-of-work system in 2004 [9]. Finney downloaded the Bitcoin software on its release date and received ten bitcoins from Nakamoto on January 12, 2009 [10, 11]. In 2010, the first commercial transaction using bitcoin was performed by techie Laszlo Hanyecz, who purchased two Papa John's pizzas from Jeremy Sturdivant for B10,000 [12–15]. Before disappearing in 2010, Nakamoto had mined about one million bitcoins, as estimated by blockchain analysts [16]. Nakamoto handed the network alert key and the code repository over to Gavin Andresen, who later became the lead developer at the Bitcoin Foundation [17]. Andresen also sought decentralized control, which provided a new platform for investors and contributors to the future development of Bitcoin, in contrast to the contributions made by Nakamoto [18].


1.2 Literature Review

This paper reviews the literature on key topics related to the popular cryptocurrency Bitcoin; understanding the economic and financial impact of this digital currency is another key motivation. In recent years, Bitcoin has become a hot topic in the financial industry and from the economic point of view. The implementation of cryptocurrency became very visible to the public after the inception of Bitcoin in 2008. The goal is to review the existing literature on how Bitcoin is influencing economic growth worldwide. The concept is somewhat hard to accept but easy to use: it is considered difficult because it is completely different from the conventional currencies we have used for a long time. The white paper that defined the word "bitcoin" was published on October 31, 2008 [19]. Economically, Bitcoin is legal in seven of the world's top ten economies by GDP as of 2022 [20, 21]. In 2022, Ukraine accepted donations in cryptocurrency during the Russian invasion, and Iran has used Bitcoin to bypass sanctions. Bitcoin has also been described as an economic bubble by eight recipients of the Nobel Memorial Prize in Economic Sciences [22]. In 2011–12, bitcoins were used exclusively for payments and transactions. The price of a bitcoin started that period at $0.30 and grew to $5.27 for the year; it rose to $31.50 by June 8, but within a month dropped to $11.00, fell to $7.80 the following month, and to $4.77 the month after. In 2013, prices started rising again, from $13.30 to $770 by January 2014 [23]. A University of Cambridge study estimated that in 2017 there were approximately 2.9 to 5.8 million cryptocurrency users, most of them using Bitcoin [24]. The price of Bitcoin rose 50% by July 2017; it started 2017 at $998 and had risen to $13,412.44 by January 2018, with an all-time high of $19,783.06 on December 17, 2017 [25]. During the broad market sell-off of 2020, Bitcoin again lost value and fell to $4,000 [26]. In the week of March 11, 2020, Ukraine experienced an 83% rise in account sign-ups, even as the COVID-19 pandemic caused the Bitcoin price to collapse. After Elon Musk added the #Bitcoin handle to his Twitter profile, the price rose suddenly by $5,000 within an hour [27], and Tesla's decision to accept bitcoins as payment for vehicles pushed the price to $44,141 on February 8, 2021 [28].

Through this study, we examine the relationship between Bitcoin and the economies of different countries. Bitcoin is growing rapidly in terms of its contribution to economic activity in several areas. Some countries accept Bitcoin for educational fees on university websites, which accelerates its usage across the world; real estate is another sector where cryptocurrency is used on a large scale. Wider acceptance of Bitcoin for payments in different areas contributes to a country's economic growth. The retail sector is also showing interest in digital currency technology: Overstock.com is a good example, accepting Bitcoin even for furniture purchases.


Despite its flaws, the impact of Bitcoin on the economy cannot be ignored. This paper sheds light on how Bitcoin can influence economic paradigms, drawing on recent events and a SWOT analysis, and also tries to analyze the effect of Bitcoin on the economic growth of various countries. The following sections discuss Bitcoin versus other cryptocurrencies, its challenges, its impact on the global economy, the advantages of cryptocurrency, the effect of cryptocurrency on the Indian economy, and the future of cryptocurrency in India.

2 Bitcoin Versus Other Crypto Currencies

Bitcoin has made its way into the global economy and become a real investment opportunity. Cryptocurrency gained broad attention in August 2017 after its price jumped from 572.3 USD in 2016 to 4764.8 USD in 2017, and Bitcoin made up 64.01% of total cryptocurrency market value in March 2019. This tremendous increase in total value is transforming the traditional financial system, so that the global economy is affected by the strong potential of Bitcoin. Investors, banks, companies, and governments have therefore taken an interest in this cryptocurrency as an alternative way of investing in the global financial system. Bitcoin has various unique properties that have led to a financial breakthrough at the global level and promoted economic growth. Some of the properties of this "digital currency" are as follows (Fig. 1):

Fig. 1 Comparison between bitcoin and other currencies for different money traits (scale: 3 = High, 2 = Moderate, 1 = Low; currencies: US Dollar (Fiat), Gold, Bitcoin)

2.1 Security

Bitcoin has eliminated the possibility of fraud through its decentralized digital currency system, which makes the transaction process a real-time system. The assets are in full control of the owner, as no third parties are involved in their management.

2.2 Storage

Bitcoin does not have any physical form like cash; it exists in digital form and hence requires a digital wallet for its storage. The digital wallet can be accessed and restored through different devices using a seed phrase.

2.3 Portability

Bitcoin can easily be carried on a device that can directly transact through its digital wallet. This digital currency is highly portable, fungible, divisible, and irreversible.

2.4 Payment Methods

Bitcoin can be used as a payment method: it is a new kind of digital currency, and various companies accept it as funds.

2.5 Anonymity

Bitcoin does not require any distinguishing data to be connected explicitly to the digital currency stored in a digital wallet, unlike traditional banks, where the purposes and intents of customers must be known. Table 1 shows the comparison between other currencies and this digital currency.

The journey of Bitcoin has just started, but it is shaping the global market and the economies of nations. The impacts of Bitcoin visible in the worldwide economy are as follows:

Global investments shifting: Investors have now started adding Bitcoin to their portfolios. Due to its increasing allocation in the market, investors have higher chances of improving their portfolios. As per VanEck reports, a small allocation of Bitcoin can enhance the cumulative return of a 60/40 portfolio allocation mix (60% of assets held in equity and 40% in bonds) with minimal impact on volatility. Some experts fear a Bitcoin collapse that could lead to crises in the global financial market, but investors are still buying Bitcoin as a hedge against inflation.

Transactions different from the dollar: Bitcoin has no connection to the U.S. dollar, and hence its transactions are independent of it. Bitcoin has provided investors another avenue for participating in the global economy through financial transactions that circumvent U.S. economic policy. Some experts are concerned that Bitcoin will pose a threat to the U.S. dollar and will impact the global economy in terms of the reserve currency.

No requirement of middlemen: Bitcoin does not require any third-party intervention for electronic transactions. It allows peer-to-peer transactions that are validated in a decentralized system. Some experts note that it eliminates the requirement for banking services and is quicker, as it does not pass through multiple hands.

Enhanced overseas transactions: Underdeveloped countries with weak economies cannot support a bank account for every citizen, yet such nations do not require a bank service to access the global Internet market. Bitcoin helps the people of such nations connect with the Internet economy through a digital wallet, making transactions anytime and anywhere in the global market. During the last two months of 2022, an average of 281,221 confirmed Bitcoin transactions were performed each day across the world.

Bitcoin regulation: There is a need to grapple with financial regulation for Bitcoin, as it is very ubiquitous. Banking systems are working on structures to put the new emerging financial system under legal control. Some nations, such as Bolivia, Nepal, Pakistan, Vietnam, and Algeria, have banned activities involving Bitcoin, while other nations, such as Canada, the USA, Australia, and the European Union, use it as a mode of payment.

3 Bitcoin Challenges

Bitcoin has become the most valued cryptocurrency since 2008. As per the data of Coinmarketcap.com, Bitcoin has a US$825.61 billion market capitalization, but it faces challenges in the global market that have social, technological, ethical, and political impacts on the economy. Some of these challenges are as follows (Table 1).

3.1 Volatility

Bitcoin has shown volatility ever since its creation. Some experts predict that the price of Bitcoin might reach a million in the coming years, while some believe it might go down to zero, so investors face a dilemma when investing large amounts.

Table 1 Comparison between bitcoin and other currencies

Money traits     US dollar (Fiat)  Gold      Bitcoin
Portability      High              Moderate  High
Divisibility     Moderate          Moderate  High
Scarce           Low               Moderate  High
Decentralized    Low               Low       High
Smart            Low               Low       High
Non-consumable   High              High      High
Durability       Moderate          High      High
Sovereign        High              Low       Low
Fungible         High              High      High
Transactability  High              Low       High
Secure           Moderate          Moderate  High

3.2 Inefficiency in Self-regulation

Regulating the market behavior of bitcoin is difficult due to its lack of accountability. This can cause problems: hackers may plague the market, and scammers may set up fake crowdfunding schemes and run off with investors' money. If bitcoin cannot be properly regulated, investors may lose interest in investing in digital currency.

3.3 Cyber Security

Several experts have provided guidelines to protect bitcoin from cybertheft, but limited understanding of how the system works at the user end has caused buyers to lose investments, since data and exchanges can be hacked even with a smart wallet.

3.4 Scalability

Bitcoin uses blockchain technology, which limits the amount of information stored in each block to 1 megabyte of data. This caps network capacity at roughly three transactions per second. If transactions are submitted beyond this limit, the network faces serious issues in keeping up with the records and suffers processing delays.
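To make the arithmetic behind the "three transactions per second" figure explicit, the short sketch below estimates throughput from the 1-megabyte block limit. The average transaction size and the ten-minute block interval used here are illustrative assumptions, not values taken from this paper.

```python
# Rough, back-of-the-envelope estimate of Bitcoin-style throughput.
BLOCK_SIZE_BYTES = 1_000_000      # 1 MB block limit discussed above
AVG_TX_SIZE_BYTES = 500           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # assumed ~10-minute block interval

tx_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES        # ~2000 transactions
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS      # ~3.3 tx/s
print(f"Approximate capacity: {tx_per_second:.1f} transactions per second")
```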


3.5 New Era of Technology

Bitcoin represents a new era of technology, even though it came into existence about 11 years ago. The technology is influencing the global market day by day, but its future remains unclear, and investors thus hesitate to take the risk of investing in bitcoin.

3.6 Bitcoin Usage

Although bitcoin serves as a new means of payment, very few nations across the world have adopted it as a viable currency and an authorized means of transaction.

3.7 Tax Clarity

Bitcoin is treated as intangible property under the law and is therefore subject to capital gains taxes. Investors must report the gain if they purchase bitcoin from the market and sell it at a higher price, and purchasing something with bitcoin is also treated as a taxable event.

4 Economic Impact of Cryptocurrencies in Different Nations

Although cryptocurrency use is still relatively low even as market values rise, it cannot yet be said to have a significant impact on monetary policies. Table 2 shows the relationship between cryptocurrency and the economy across sectors. Nevertheless, most countries across the globe have started showing interest in cryptocurrency, which aids their economies either directly or indirectly. Table 3 lists the countries by their interest in crypto investment and awareness.

5 Advantages that Cryptocurrencies Provide for the Global Economy

Cryptocurrency trading does not need a middleman, which increases the speed of transactions. Because there are no middlemen, transaction expenses are also reduced.


Table 2 Relationship between cryptocurrency and economics in many sectors

Sector | Country | Domain and mode of crypto acceptance
Education | Cyprus, Switzerland, the USA, and Germany | Certain universities in these nations accept cryptocurrency as college fees on their websites; some online education providers accept Bitcoin as payment
Travel businesses | Cyprus, Switzerland, the USA, and Germany | Purchase of flight bookings, hotel accommodations, rental vehicles, and cruises
Housing and real estate | Cyprus, Switzerland, the USA, and Germany | Land and house purchases, rental agreements, taxation on the purchase
Retail | Cyprus, Switzerland, the USA, and Germany | Overstock.com accepts Bitcoin for furniture; Crate and Barrel, Nordstrom, and Whole Foods also offer the option to shop with cryptocurrency
Online games | Cyprus, Switzerland, the USA, and Germany | In the computer game Project Big ORB, virtual currency can be exchanged for real money by first converting it into other assets, such as cryptocurrency
Singapore trading platform | Singapore | Accepts crypto assets of accredited investors
Israel's financial center | Israel | Seven Bitcoin ATMs
Government tender | El Salvador and Central African Republic | Legal tender and tax contributions

Reduced transaction costs imply improved exchange efficiency and a rise in transaction volume. There is less need for a physical location where people gather to conduct business, and fixed costs fall in the absence of salaries, rent, and utility fees. Some traders also face no minimum deposit requirements. Furthermore, cryptocurrencies are not restricted by geographic boundaries, and no central organization oversees the transactions, which makes commerce simple and rapid for businesses. As of November 2021, one bitcoin was worth about US$59,150, which puts a whole coin beyond the reach of most people; however, fractions of cryptocurrencies can be bought, increasing the volume and viability of transactions. In India, one can begin with as little as Rs 100. As common currencies between economies, cryptocurrencies can facilitate more trade. The cryptocurrency's blockchain architecture is supported by a peer-to-peer network.


Table 3 The countries with the most interest in cryptocurrency

Rank | Country | Total crypto searches | Number of crypto owners | Crypto awareness score (out of 10)
1 | Ukraine | 51,470 | 5,565,881 | 7.97
2 | Russia | 42,390 | 17,379,175 | 7.46
3 | US | 6,921,310 | 27,491,810 | 6.03
4 | Kenya | 99,810 | 4,580,760 | 5.50
5 | South Africa | 259,800 | 4,215,944 | 4.82
6 | United Kingdom | 2,081,390 | 3,360,591 | 4.54
7 | India | 3,598,960 | 100,740,320 | 4.39
8 | Nigeria | 649,750 | 13,016,341 | 4.24
9 | Australia | 941,640 | 857,553 | 3.77
10 | Singapore | 253,160 | 121,854 | 3.72

As a result, unlike the conventional financial system, transactions are decentralized. Cryptocurrency users believe they should have total control over their money rather than a bank. Multinational companies frequently borrow money in both home and international currencies, and cryptocurrencies are an option that can diversify exposure; access to a diverse lending portfolio may therefore be made possible by cryptocurrency. Additionally, the blockchain maintains the confidentiality of sender and recipient information. The information is protected by several security layers, which strengthens mining activity. Entrepreneurs gain more currencies for payments, and as a result they benefit from stronger financial protection and a more open financial connection. The cryptocurrency network is supported by distributed ledger technology and is mechanized and digital. As a result, the primary threats to the conventional financial system, fraud and corruption, are removed: the ledger cannot be manipulated by either individuals or businesses.

6 Effect of Cryptocurrencies on the Indian Economy

The five main effects cryptocurrency will have on the Indian economy are as follows.

6.1 Increasing Openness

The ability to trace every transaction back to its origin increases transparency, thanks to cryptocurrencies. Blockchain, the technology on which cryptocurrencies are founded, is also immutable. As a result, transaction histories are unchangeable and irreversible.


As the data cannot be changed in any way, this can considerably reduce corruption.

6.2 Employment on the Rise

About 50,000 people are currently employed in the cryptocurrency sector. According to a survey, the sector will offer a great many job openings by 2030, with estimates of over 800,000. India already has a sizable talent pool of FinTech and IT specialists, and that talent is available at reasonable cost. With the growth of the cryptocurrency sector, India has the potential to develop into a significant worldwide hub for the industry. This will help generate a large number of job opportunities in BFSI, IT, customer support and service, and many other areas. The bitcoin industry already contributes to raising the country's employment rate.

6.3 Boost for the FinTech Industry

As already established, India has a sizable pool of IT specialists. Collaboration between the financial and IT sectors has the potential to create countless commercial opportunities and foreign currency inflows. Furthermore, the government's implementation of strict regulatory measures and the creation of regulations for an official digital currency will draw substantial international investment. This will significantly strengthen the FinTech industry and advance the Indian economy.

6.4 Improve Online Payments

Cryptocurrency transactions save time and money. They are instantaneous, since they are carried out directly between sender and receiver without the involvement of a third party. There are also no transaction fees assessed by middlemen such as banks and payment gateways, which lets consumers save money on each transaction. Cryptocurrency transactions can therefore dramatically improve digital payments by reducing transaction time and cost.


6.5 Realize Atmanirbhar Bharat's Objective

Reliance on third-party, private, and foreign-based cryptocurrencies will end with the government's proposal to create a single, recognized cryptocurrency. Many well-known cryptocurrencies, including bitcoin, ethereum, and dogecoin, are currently based overseas. The nation's official cryptocurrency will not need to rely on other cryptocurrencies because it will be produced entirely within the country. With investors, traders, and others having access to a single cryptocurrency for their requirements, the government will be able to achieve its aim of "Atmanirbhar Bharat" in the cryptocurrency sector.

7 Future of Cryptocurrency in India

Many private cryptocurrencies are available in India today, with bitcoin remaining the most widely used even though it is not issued or controlled by any governmental organization. The private cryptocurrencies available in India include: 1. Bitcoin (BTC), 2. Tether (USDT), 3. Ripple (XRP), 4. Shiba Inu (SHIB), 5. Litecoin (LTC), 6. Elrond (EGLD), 7. USD Coin (USDC), 8. Ethereum (ETH), 9. Ripple (XRP), and 10. Dogecoin (DOGE). Over the past few years, India's retail payments system has grown dramatically, becoming more advanced and dynamic. The large payments space includes numerous payment operators, wallet companies, and suppliers to operators and card companies. The Reserve Bank of India (RBI) recently introduced India's first digital rupee project for the wholesale sector. The digital rupee, also known as the Central Bank Digital Currency (CBDC), is issued by the RBI. With the movement of digital money, cryptocurrencies will have a significant impact on the Indian economy in 2023 and the years that follow. For India to achieve its goal of building a substantial GDP and becoming a powerhouse, the faster money transfers, the better. Newer digital forms that are secure, practical, safe, and easy to use will power the impact of cryptocurrencies on the economy and India's future. The impending digital currency from the Indian central bank will be a big step in that direction.

8 Conclusion

Numerous investors, including individuals, companies, and governments, have already benefited from bitcoin, while many others rely extensively on trading it as their main source of income. The world economy is slowly and steadily transforming to meet these demands, and bitcoin undoubtedly has a strong chance of driving that transformation.


It is therefore safe to claim that bitcoin drives economic growth globally by making capital and financial services significantly more accessible, particularly in poorer nations. Despite the enormous potential that bitcoin holds for the world economy, however, it remains difficult to adopt bitcoin as the primary currency of an entire country.


Secure Hotel Key Card System Using FIDO Technology Aditi Gupta, Rhytthm Mahajan, and Vijeta Kumawat

Abstract Booking a hotel room comes with a sense of security and reliability, among other amenities like air conditioning, vehicle parking, and television. The better the hotel's rating, the better the security seems. Hotel key cards play a significant role in how both guests and hotel staff access hotel services, but they are steadily becoming obsolete and are increasingly proving to be a nuisance. Guests often lose the key card, locks malfunction, and the problems with the current security mechanisms cannot be ignored either. In fact, the RFID technology currently used contains loopholes and can be hacked to compromise guests' security. This paper brings FIDO technology to the spotlight, giving guests the luxury of unlocking hotel doors with their phones in a secure fashion. FIDO can provide one-tap secure access not only to guest rooms but also to other utilities like the pool, parking, and spa, by enabling multiple access where a room has more than one guest. This will reduce guests' dependence on carrying and maintaining yet another key card and also help hotel staff regulate the systems efficiently.

Keywords FIDO · Hotel key card · Security

A. Gupta (B) · V. Kumawat Jaipur Engineering College and Research Centre, Jaipur, Rajasthan 302022, India e-mail: [email protected] V. Kumawat e-mail: [email protected] R. Mahajan Birla Institute of Technology, Jaipur Campus, Mesra, Rajasthan 302017, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_39


1 Key Card–A Decades Old Security Saga

1.1 Chronology of a Locking Mechanism

The history of locking mechanisms dates back to 2500 B.C., when Assyrians used wooden locking pins in the palace to secure royal treasure [1]. Since then, security measures and systems have evolved drastically and become technologically advanced. Egyptians, Greeks, and Romans contributed a significant plethora of techniques to improve lock systems; in fact, some of them were simple metallic lock-and-key designs still in use today. In 1861, Linus Yale Jr. patented the famous Yale Pin Tumbler Cylinder Lock [2], and after that began the drive for hotel security systems to become more technologically advanced. As metal keys were a huge trouble to maintain, guests strived for something that could make their hotel-security experience hassle-free. It wasn't until the late twentieth century that refined and fully automated metal/plastic punch hole cards were introduced [3]. These hole cards could be uniquely coded: they carried 32 holes that could be configured in about 4.2 billion ways, giving a unique identity to every guest checking in. Each card had a specific hole pattern cut into it, which the reader deciphered once the card was punched in. But they were exhausted quickly, being disposable after every guest checked out, so another method had to be found.

1.2 How Did the Hotel Key Card Survive Rapid Cyber Security Changes?

At some point, new software technology had to be introduced into the hotel key card system, since a higher level of security had to be ensured that could also be maintained remotely. Around the 1980s, magstripe cards, based on technology launched by IBM [4], started to appear in the hotel security marketplace, offering a higher level of security than the metal and plastic 32-hole punch key cards. Magnetic key cards contained a black magnetic stripe that had to be swiped on the key card reader, and they could be reconfigured and reused. The user's details were stored against the room number, and when the card was swiped, the pairing of user details and hotel room number was checked in order to verify the user and grant access to the room. But these magstripe cards could only last until the 2000s, as hackers worked out how to decode the technology, which resulted in thefts, hotel points transfers, and the like. These cards also had a lifetime of about 20 years, after which they started getting demagnetized, adding to the frustration of guests and hotel staff. Hence, in the late 2000s, Radio Frequency Identification (RFID) technology came into the picture, providing enhanced security to the user and an easy contactless way to access rooms and other hotel facilities. RFID has a short-range reading capability; hence a hacker needs to get very close to the guest or the key card to access guest information and data.


RFID is the predominant technology used in the hotel industry as of now, but it, too, has certain loopholes that will have to be overcome by a more advanced security mechanism, as has happened throughout the history of security evolution.

2 The Need to Replace the Hotel Key Card

2.1 The 'What' of RFID Technology Used in Key Cards

Radio frequency identification technology has been used for many years now in place of traditional systems, as it promised enhanced security and a smooth user experience. RFID has two major components: the tag and the reader. A tag comprises a microchip that stores guest information and data about their utility access points according to their hotel room package. The reader uses radio frequency identification to scan the RFID tag and, on the basis of the information accessed, lets the guest easily enjoy different hotel utilities like the swimming pool, spa, dining area, and gaming zone. Being a contactless technology, the user just needs to keep the card in the proximity of the reader–within a radius of up to 25 feet–and access is granted (Fig. 1). Though RFID-enabled hotel key cards offer a high level of security, this system of handing out a physical key card to guests is still a huge problem and can lead to many security threats. Let's see what loopholes exist in the current system.

Fig. 1 Working of RFID [5]


2.2 Loopholes in the Current System

Physical key cards can present a lot of issues, and Teller Report conducted a survey on the problems people face while handling their hotel key cards [6]. The first major issue arose when people forgot their key cards inside the room and had to ask the hotel staff for help. This is a nuisance in both emergencies and normal conditions, and nearly everybody comes across such a situation once in a while. The second biggest issue guests experienced was that when they forgot their key card and asked for a re-issue, hotel staff did not even ask them to show ID and re-issued the key card on the basis of their name and phone number alone. That is a huge security risk: anyone posing as the guest can barge into their room, which can lead to burglaries, petty crimes, and even murders in some cases. Teller Report asked a guest, who recalled, "Once I left my room card in the room and couldn't remember the room number for a while. After telling the front desk clerk, she only confirmed my name and phone number, and quickly issued another room card for me." Ms. Liu, a citizen of Beijing, described to reporters her experience and worries while staying at a chain hotel in Beijing [6], expressing her concern at not even having to show an ID card to get the hotel key card. In another example, in 2019, Ms. Huang checked into a hotel in Nanning, Guangxi, and the hotel issued a room card to her husband without her consent. The hotel said that at the time, the other party stated that he was Ms. Huang's husband and talked the hotel into giving him the room card. The risks do not end there: even RFID key cards have had major security loopholes, of which hackers found a breakthrough in 2018. According to media reports from The Hacker News [7], F-Secure researchers Tomi Tuominen and Timo Hirvonen built a master key card that gave access to over a million hotel rooms secured by one of the largest lock manufacturers–Assa Abloy's 'Vision by VingCard' locking system [8]. Also, in 2017, ransomware locked all the hotel rooms of the Romantik Seehotel Jägerwirt, a luxurious 4-star superior hotel in Austria; the attackers demanded a ransom of $1,600 in bitcoin, which the hotel had to pay as it was left with no other option. RFID cards can also be easily cloned by a hacker [9] who gets into appropriate proximity of the guest to read the RFID tag from their room key card. Cloning can be done in minutes and can prove very dangerous in terms of break-ins, thefts, and the like. That is why another technology is needed in the hotel key card scenario; let's see how we based our assumptions and came up with an alternative to RFID hotel key cards.


3 FIDO-Enabled Secure Hotel Key Card

3.1 The Recent Plunge of FIDO Protocols into Security

Fast Identity Online (FIDO) technology is a recent addition to the security evolution and has proven highly resistant to hacking because of its advanced encryption standards. The FIDO Alliance has issued three protocols: Universal 2nd Factor (U2F), Universal Authentication Framework (UAF), and FIDO2/WebAuthn [10, 11]. FIDO U2F and UAF were introduced by the FIDO Alliance around six years ago, and recently FIDO2 [12] has been introduced, combining features of both U2F and UAF. FIDO U2F adds a second factor of authentication on top of the first factor, i.e., user passwords, while FIDO UAF is a password-less authentication system designed for phones. FIDO projects a password-less future and has recently taken drastic measures to become customer-centric. This technology will change the way people deal with passwords today and will revolutionize the market in the coming years by integrating password-less authentication into the operating systems of the top tech giants [13]. Here, we use FIDO UAF to enable a contactless and hassle-free security system for hotel rooms and other utilities, one that does not involve physical key card maintenance by guests or hotel staff.

3.2 FIDO UAF, NFC, and Mobile Key Cards

Near-field communication (NFC) [14] is a recent technology that has boomed in the card-reading sector across the hotel, transport, and even banking industries. It is actually an extension of RFID technology, but it operates in real time, as physical key cards are not required, and it has more advanced and dynamic features than RFID. It is a short-range wireless connection that uses magnetic field induction to enable communication between devices brought into close proximity. Hotels can directly send access details and PINs to guests' smartphones. With NFC, it is very difficult to hack guest details: firstly, the hacker would have to be within about 4 inches [15] of the guest's device, and secondly, the guest reserves the authority to initiate the connection between devices and gain access to facilities, so it is practically impossible to obtain a connection initiation request without the guest's direct involvement. Mobile key cards are the next big evolution in the journey of hotel security systems and, fueled by FIDO UAF and NFC, they will be the most secure option available to guests to date. Let's first understand how the FIDO Universal Authentication Framework operates. The server will be the hotel system database, and the client will be the guest device.


When a login attempt is made from a new device, FIDO UAF generates two keys–a public key and a private key [16]. The public key remains with the hotel system database, whereas the private key is stored on the guest's device. Whenever the guest then tries to gain access with their device, a response signed with the private key on their device is matched against the public key on the server, and they are allowed access to their room or other utilities. Let's see the working of the FIDO-enabled mobile key card, which uses FIDO UAF and NFC as its main pillars; refer to Fig. 2 for a better understanding of the flow. To begin, customer validation is done by verifying government IDs. After guests are verified, their device is connected to the hotel Wi-Fi, from where they register on the hotel website, and FIDO UAF generates a unique public/private key pair for each guest. The public key is stored in the hotel database, and the private key is stored in the corresponding NFC application of the guest(s). From then on, whenever the guest tries to enter some area, they first verify themselves on their device through biometric authentication such as Face ID, Touch ID, or fingerprint, after which they use NFC to present a response signed with their private key to the NFC reader. The reader checks that signed response against the public key counterpart stored in the hotel's database, and if the check succeeds, the guest is authorized to access that particular area.
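The following minimal sketch illustrates the registration and challenge-response flow just described, using the Python "cryptography" package with an Ed25519 key pair. The variable names and the in-memory "database" are invented for illustration; they are not part of the FIDO UAF specification or any real hotel system.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the key pair is generated on the guest's device.
private_key = Ed25519PrivateKey.generate()   # never leaves the device
public_key = private_key.public_key()        # stored in the hotel database

# Access attempt: the reader issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it after local biometric verification succeeds...
signature = private_key.sign(challenge)

# ...and the hotel system verifies the signature with the stored public key.
public_key.verify(signature, challenge)      # raises InvalidSignature on failure
print("Signature valid: access granted")
```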

Fig. 2 Working of NFC specific to hotel security system


Fig. 3 Working of FIDO-enabled mobile hotel key card using NFC

It is important to note that any number of services can be mapped to the public key; this mapping is done at the first stage, while entering the guest's information and tailoring their hotel package to their budget. Facilities such as breakfast section access, pool and lounge access, bar section access, spa access, and most importantly, the guest's hotel room access are all fed into the database and mapped to the guest's public key (Fig. 3).
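As a purely hypothetical illustration of that mapping, the sketch below keys a guest's package facilities by a fingerprint of their registered public key; the record layout and facility names are invented for the example.

```python
# Hypothetical hotel-database record: facilities of a guest's package,
# keyed by a fingerprint of the guest's registered public key.
HOTEL_DB = {
    "pubkey-fp-001": {
        "room": "304",
        "facilities": {"breakfast", "pool", "lounge", "spa", "bar"},
    },
}

def is_authorized(pubkey_fp: str, area: str) -> bool:
    """After signature verification, consult the package mapping."""
    record = HOTEL_DB.get(pubkey_fp)
    if record is None:
        return False
    return area == record["room"] or area in record["facilities"]

print(is_authorized("pubkey-fp-001", "spa"))   # True
print(is_authorized("pubkey-fp-001", "gym"))   # False
```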

3.3 Benefits of Mobile Key Cards

An attacker would need to be close to a gadget in order to access it, because NFC functions over very short distances, usually less than four inches. Moreover, the majority of NFC-enabled devices are set up so that a connection can only be made with the user's express permission; an attacker would therefore need to deceive the user into starting the connection. Even if an attacker succeeded in establishing an NFC connection with a target device, they would still need to find and exploit a weakness in the target device's software in order to gain access. The difficulty of this varies with the particular device and its security features, and it is rarely an easy task. This feature of NFC improves the hotel's security and privacy measures. Key cards are made of plastic, and although they are reusable, once a key card has served its purpose it is thrown away and adds to the already concerning pile of plastic waste. Dumping these cards on a large scale takes an adverse toll on nature; mobile key cards, being digital, will therefore contribute a significant reduction in plastic waste disposal. Hotel chains can also reduce digital carbon footprints by taking stringent environmentally friendly measures on their end and promoting awareness of global warming.


Lost or stolen key cards present hardships for both guests and hotel staff. The guest lives in constant fear of being attacked or robbed, and in an emergency may not be able to get into the room quickly. Likewise, hotel staff face added hassle whenever a card is lost or stolen, as they have to destroy or disable the old key card and issue a new one; in emergencies, staff sometimes even need to hand the master key to the guest. In the event of a virus outbreak or medical infection, minimal contact helps guests have a safe stay at the hotel. With a mobile key card, no contact with the reception or hotel staff is required during check-in, which not only speeds up the entire check-in process but also keeps guests medically safe. As guests no longer need to wait at the reception to book a room and have a key card assigned, chaotic queues are avoided at check-in time, giving hotel staff more time to enrich their guests' stay. Guests also gain a stronger hold on their hotel experience, since they can access every utility via their mobile phones. This gives them more power to shape a rich experience suited to their own schedule, a dynamic that will be enjoyed most by guests who want no interference from others and prefer to navigate their hotel stay on their own. Hotels can integrate this FIDO system into their mobile application, which can serve multiple profit-making purposes. The application becomes mandatory to download, and its installation can be promoted while the guest checks in and checks out. For example, when a guest registers on the application, the hotel's email database grows, which can later be used to run festive email campaigns from time to time. Loyalty points can also be given to regular guests, which will, in turn, help retain them. There are countless benefits when a hotel switches to a FIDO-enabled mobile key card: it reduces guests' hassles, saves a lot of time on the hotel staff's end, increases the efficiency of hotel security mechanisms, and eases hotel navigation for guests.

4 Future Scope

4.1 Changing Scenarios in Hotel Security

Never in the history of security has one technology lasted for long. As soon as a fresh security technology is introduced to the market, hackers set to work to defeat it with a strategically planned series of cyber-attacks. Hence, in a market prone to cyber-attacks, new technologies need to be introduced at short, regular intervals in order to defeat attackers and ensure the security of hotel guests.


Punch cards went out of the picture fairly quickly due to the lack of software technology, but magstripe cards are still used [17] by some hotels in different parts of the world, even after the system was declared vulnerable to hackers and incompetent. RFID hotel key cards are the primary technology used across the globe today, so it will take a long time to replace them with the newer NFC technology. RFID key cards may still be used years after other tech-enabled key cards come into existence, but in very small numbers. When it comes to the guest end of the system, hotels need to keep ease of use and security at the forefront of their services. Hospitality can only be served at its best if guests feel safe and secure while booking a hotel room. With this FIDO-enabled contactless mobile-device access facility, guests will feel in control of their rooms and access facilities, empowering them and serving them with a seamless, smooth hotel onboarding system that offers not only a rich experience but also a sense of comfort and a hassle-free stay.

4.2 Sustainability of FIDO Protocols

In May 2022, Google, Microsoft, and Apple [18] backed the FIDO protocols and announced that they would convert their authentication mechanisms to a password-less method using the protocols provided by the FIDO Alliance and the World Wide Web Consortium. Being huge industry players with the largest customer bases, these tech giants can definitely be trusted with their dream of realizing a password-less era. FIDO has also announced that it will move from its mainstream enterprise-centric mode into a customer-centric mode [19]. While providing unparalleled, nearly impenetrable security through one-time, non-reusable components like OTPs, ease of use also needs to be top-notch. If FIDO achieves usability and security in synchronization and quality, nothing will be more lasting than the FIDO security protocols. In fact, FIDO has announced that customers will be able to synchronize their cryptographic credentials [20] to other devices for backup purposes; if their smartphone is lost, they can use those other synced devices and not get stuck unnecessarily. Being able to address some of the most critical points in the customer journey, FIDO provides almost impenetrable security protection, and hope for the sustainability of the FIDO protocols is very much alive!

References

1. Solid Smack, https://www.solidsmack.com/cad/model-of-the-week-assyrian-pin-lock-oldestdesign-ever/


2. Yale Home Global, https://www.yalehome.com/in/en/about-us
3. Vodien Blog, https://www.vodien.com/learn/history-punch-card/
4. The Balance article, https://www.thebalancemoney.com/what-is-a-magnetic-stripe-card-5205211, last accessed 2022/04/30
5. How RFID Works, https://www.shopify.com/retail/rfid-technology
6. Teller Report article, https://www.tellerreport.com/business/2021-08-22-how-to-plug-the-loopholes-in-the-hotel-room-card-without-showing-the-id-card-.ByGWg_rlWF.html
7. Hacker News article, https://thehackernews.com/2018/04/hacking-hotel-master-key.html
8. Fortra Blog, https://www.tripwire.com/state-of-security/researchers-hotel-key-cards-can-be-hacked-what-you-need-to-know
9. RFID future article, https://www.rfidfuture.com/clone-rfid-cards.html
10. FIDO Alliance, https://fidoalliance.org/what-is-fido/
11. StrongKey Blog, https://blog.strongkey.com/blog/guide-to-fido-protocols-u2f-uaf-webauthn-fido2
12. FIDO2 by FIDO Alliance, https://fidodev.wpengine.com/fido2/
13. FIDO Alliance and Amazon, https://fidodev.wpengine.com/amazon-joins-fido-alliance-board-directors/
14. Near Field Communication Organisation, http://nearfieldcommunication.org/
15. How-To Geek Blog, https://www.howtogeek.com/137979/htg-explains-what-is-nfc-and-what-can-i-use-it-for/
16. FIDO UAF by FIDO Alliance, https://fidodev.wpengine.com/creating-a-world-without-passwords-a-fido-uaf-case-study/
17. Designery Sign Company, https://www.designerysigns.com/mag-stripe-cards.html
18. Apple Newsroom, https://www.apple.com/newsroom/2022/05/apple-google-and-microsoft-commit-to-expanded-support-for-fido-standard/
19. Cyber News Report, https://cybernews.com/security/is-fido-secure-enough-to-give-us-a-passwordless-internet/
20. FIDO Alliance Device Sync, https://fidodev.wpengine.com/charting-an-accelerated-path-forward-for-passwordless-authentication-adoption/

Security Issues in Website Development: An Analysis and Legal Provision Darashiny Nivasan, Gagandeep Kaur, and Sonali Vyas

Abstract A "website" is a domain on the Internet that represents an organisation 24/7, something even the organisation's employees cannot do. It presents information about the organisation in the best possible way to attract consumers without any intermediary; it is like visiting an organisation and understanding its field of expertise, but from the comfort of the user's home. This makes website development a crucial skill for developers to learn. Developing a website requires knowledge of both the frontend and the backend. The frontend includes UI/UX development: UI, or user interface, generally means how the website looks on the screen, while UX, or user experience, means what experience users have after visiting the website; user feedback is taken, and updates are made accordingly. But a website is not only about the frontend: every website needs to store data in some form, and for that there is the backend, which includes the databases. "Data" is crucial to every organisation, and a website carries much of the organisation's and its consumers' sensitive data. This makes website "security" essential for developers to learn. This article covers the strategies required in developing a website. It also discusses the security aspects considered while building a website, why such security is required, and the consequences if it is not provided. Even behind strong security walls, attacks remain possible, so alongside security the article also discusses the possible types of attacks and their resolutions, not only technically but also legally, by providing the associated legal provisions.

Keywords UI/UX development · Website development · Security · Attacks · Data · Legal · Frontend · Backend

D. Nivasan · G. Kaur School of Law, UPES University, Dehradun, India S. Vyas (B) School of Computer Science, University UPES, Dehradun, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_40


1 Introduction

Websites nowadays are dynamic and cannot be left stagnant. To grow audience traffic, a website should be interactive, not just a bunch of words assembled on a single page. This is where the various technologies developed over time come into play, and developers who are professionals in those technologies are hired. Developing a website can be compared to building a house: the whole game is the "layout". The layout is like the blueprint given to the developers before the final creation is published on the Internet, and it ultimately decides how responsive and attractive the website will be. Developing this layout is called user interface (UI) design. Following the layout, and before the final website is built, user "feedback" needs to be taken. This is a necessary step that guides the project along the right path and prevents loss of the developer's money, time, and effort. Feedback at the initial stage helps the developer understand the audience better and points them towards what users actually need; this is called user experience (UX). After this comes the main part: developing the website as per the provided layout, which can be done in various ways, such as with languages like HTML, CSS, and JavaScript, with Python and the Flask framework, with Bootstrap, or with pre-built platforms like WordPress and Wix. Once development is done and the website is functional and not crashing, it is hosted on the Internet using hosting platforms like GoDaddy and Hostinger. These platforms provide the website's domain, which acts like the name and address of the website, much as a person's name and address uniquely identify that person. In that way, the domain helps the website get recognised among the public [1].
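As a minimal sketch of one of the approaches mentioned above (Python with the Flask framework), the snippet below serves a single page; the route and page text are examples only, and a real site would render full HTML/CSS/JavaScript templates.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # A real site would use render_template() with HTML/CSS/JS assets.
    return "<h1>Welcome to our organisation</h1>"

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```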

2 Vulnerabilities in Website Security

Continuing the house metaphor: just as a house may get cracks, leakage, or electrical issues, websites have issues of their own, such as crashing or data not being displayed correctly. These issues generally arise from various types of vulnerabilities, such as the following.


2.1 Authentication and Authorisation

These two terms are often disputed as being the same, but they are different, as explaining each term shows. Authentication is the way of identifying a person and verifying that the person using the system is who they claim to be; for example, entering Gmail credentials while accessing a website is authentication. Authorisation is the way of granting someone else access to the resources one owns; for example, while using Google Meet we give the application permission to access the camera and microphone, and that is authorisation. Although they have different meanings, both are simply ways of interaction that allow trust to grow between two entities so that the correct user receives the best services.
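To make the distinction concrete, here is a minimal Flask sketch (the user and permission tables are hypothetical) in which a request can fail either authentication (identity not verified, HTTP 401) or authorisation (identity verified but access not granted, HTTP 403).

```python
from flask import Flask, request, abort

app = Flask(__name__)
USERS = {"alice": "s3cret"}           # authentication data: who you are
PERMISSIONS = {"alice": {"camera"}}   # authorisation data: what you may use

@app.route("/use/<resource>")
def use_resource(resource):
    auth = request.authorization      # HTTP Basic credentials, if any
    if auth is None or USERS.get(auth.username) != auth.password:
        abort(401)                    # authentication failed
    if resource not in PERMISSIONS.get(auth.username, set()):
        abort(403)                    # authorisation failed
    return f"{auth.username} may use {resource}"
```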

2.2 Cross-Site Scripting (XSS)

Cross-site scripting means introducing unwanted material that looks like a normal part of the website but is actually an illegitimate addition to it. This is done by changing the website's source code or adding new lines of code to it. The main functioning of the website is thereby affected; the most vulnerable code, written in JavaScript, is typically what gets changed, because that code drives the website's main functionality and decides which element performs which function. For instance, an attacker may alter prices shown on the website or tamper with bank account details to illegally route money into the offender's account.
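The classic illustration of this flaw is a page that reflects user input into its HTML without escaping. The Flask sketch below (the route names are examples only) shows the vulnerable pattern and its standard fix.

```python
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/vulnerable")
def vulnerable():
    name = request.args.get("name", "")
    # BAD: ?name=<script>...</script> executes in the visitor's browser.
    return f"<p>Hello {name}</p>"

@app.route("/safe")
def safe():
    name = request.args.get("name", "")
    # GOOD: escaping renders any injected markup as inert text.
    return f"<p>Hello {escape(name)}</p>"
```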

2.3 Cross-Site Request Forgery

In this vulnerability, the main issue arises on the user's part. Rather than causing direct damage, it takes advantage of features in which the user's intent is required: cross-site request forgery makes actions appear to be performed willingly by the user, so it majorly concerns the user's privacy. For example, GitHub is a huge hub for programmers where open-source contributions can be made, but an offender could, without authorisation, submit changes in the user's name that alter the functionality of the code [3].
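A common defence against this attack is the synchroniser-token pattern: the server embeds a random token in its own forms and rejects state-changing requests that lack it. The sketch below is a bare-bones illustration; production Flask apps would normally use a library such as Flask-WTF instead.

```python
import secrets
from flask import Flask, session, request, abort

app = Flask(__name__)
app.secret_key = "change-me"  # required for session support

@app.route("/form")
def form():
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return (f'<form method="post" action="/transfer">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            f'<button>Send</button></form>')

@app.route("/transfer", methods=["POST"])
def transfer():
    # A forged cross-site request cannot know the token in our session.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "Transfer accepted"
```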


2.4 Data Leakage

Data is the most valuable part of a website, as it concerns the privacy of individuals. This essential element may belong to the user or to the organisation that developed the website, and the necessity to protect it arises because data protection builds trust between the website owner and the user and also helps increase traffic and engage a larger audience. Data remains the most sensitive part of the website: when leaked, it may cause a huge monetary loss to the organisation, as in the WhatsApp data leak and the AIIMS data leak incidents.

2.5 Invalid Clicks

This is an attack in which the user's clicks are exploited: the offender tricks the user into clicking a button or link without the user realising they are being manipulated. The user is shown that the click serves a particular purpose, but in reality it leads to a different website, and such clicking may cause monetary loss, as the user may mistake the fake page for part of the original website and end up transferring money. For example, many fake product purchases happen through such websites, injecting malware into the system [4].
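One standard server-side mitigation for this kind of click trickery (clickjacking) is to forbid other sites from framing your pages. Below is a hedged Flask sketch, assuming the headers shown are appropriate for the site in question.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def forbid_framing(response):
    # Prevent the site from being embedded in an attacker's <iframe>.
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    return response
```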

2.6 Importing Modules

Websites are not written entirely from scratch. To make the developers' work easier, there are pre-built modules containing code for common website functionality. But sometimes these modules are not reliable, and they can themselves be the source of vulnerabilities in the website; examples include third-party modules in Python, Java, .NET, etc.

2.7 DOS Attack (Denial of Service Attack)

In this attack, the offender tries to prevent services from reaching the user by continuously denying them, flooding the website's backend with requests so that legitimate service requests cannot be handled. The user is then unable to access anything, particularly the services they need, causing them difficulty in one way or another [2].
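A full DoS defence sits at the network layer, but as a minimal application-level illustration, the sketch below rejects clients that exceed a per-IP request budget. The window and limit values are arbitrary, and the in-memory store is for illustration only; a real deployment would use a shared store and proper middleware.

```python
import time
from collections import defaultdict
from flask import Flask, request, abort

app = Flask(__name__)
WINDOW_SECONDS, LIMIT = 60.0, 100   # at most 100 requests/minute per IP
hits = defaultdict(list)

@app.before_request
def rate_limit():
    now = time.time()
    recent = [t for t in hits[request.remote_addr] if now - t < WINDOW_SECONDS]
    if len(recent) >= LIMIT:
        abort(429)                  # 429 Too Many Requests
    recent.append(now)
    hits[request.remote_addr] = recent

@app.route("/")
def index():
    return "ok"
```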


2.8 Injection Flaws

In this attack, malicious code is injected into a legitimate website, making it behave inappropriately for the offender's benefit. It is dangerous because it may give access to the source code as well, and modification of the original source code can end up changing the layout too [5].
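The best-known injection flaw is SQL injection, and its standard fix is the parameterised query. The self-contained sketch below, using Python's built-in sqlite3 module (the table and column names are invented), shows both the vulnerable pattern and the fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "' OR '1'='1"  # attacker-controlled value

# BAD: string concatenation lets the input rewrite the query...
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)   # [('hunter2',)] -- every secret leaks

# GOOD: a parameterised query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)   # [] -- no match, nothing leaks
```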

3 Website Security on the Internet

Security for a website is like a lock for a house: it protects the website from unwanted crashes while allowing it to function smoothly. To protect the website from unauthorised access to data, its layers must be protected with security services. The Internet is where a website is hosted, since only through the Internet can viewers access it. Although the Internet is one of the most essential parts of human life, it is still very dangerous equipment if not handled properly. Various hacks and attacks are carried out through websites, violating the online privacy of users. Users visit many websites out of need (banking transactions) or curiosity (ChatGPT), unknowingly giving their personal details to those websites, and in this give and take, passwords, IDs, and credit card details change hands; if leaked, they can cause huge financial loss and personal shame. Website security, therefore, is the practice of protecting a website against unauthorised access, modification, and destruction. It safeguards against hackers and helps define the necessary measures and security codes; this is also referred to as cyber security, protecting websites and preventing and identifying the hackers behind attacks. It is done by securing sensitive information like passwords and bank account details, as well as the software, which includes the website's code and other necessary modules.

4 Necessity of Providing Web Security

Implementation of web security concerns not only developers and hosts but users as well. Take the example of Amazon, one of the top e-commerce platforms, which is also available as a website. From the developer's perspective, it is a form of intellectual property responsible for their everyday earnings, so any unauthorised access reflects on the ability and skills of the developer who created the website. From the host's perspective, a data leak costs the money and time invested, which amounts to a huge loss.


One thing to note is that a huge number of contracts are associated with launching these websites; those contracts, too, would suffer a huge loss. From the user's perspective, there are always issues related to their data and the money they have invested to purchase products. This is the simplest reason why website security is necessary. There are various other points to consider, which go as follows:

● The website is not the only thing that earns the organisation its profit; the database is the most essential part, as it contains the data and drives the website's functionality. Hence, if the website is compromised, the database is also vulnerable to the attack.
● Security helps in developing trust with users and other organisations. An organisation does not target only the general public but also other organisations (sometimes as investment partners); the better known an organisation is, the greater its returns.
● Cost is also a major factor: security measures taken before an attack cost much less than safeguards adopted after an attack.
● Attacks are often crafted to look like part of a legitimate transaction in the website's functionality, so malware and antivirus detectors can be installed to detect such unwanted attacks at an early stage.

5 Resolutions to the Security Issues

5.1 Resolutions for the Individual User

The user is the one who utilises the services provided by the organisation and is unaware of the layout developed by the organisation; it is not the user's concern to check whether a particular page is part of the website or not, and they often discover a fake website page only after some loss has been incurred. To prevent such loss, the following points can be considered by users to protect their privacy and finances:

● Authorisation: Various websites automatically gain access to the location, camera, microphone, and the ability to send messages via the logged-in Gmail ID without even asking the user's permission. This is visible to the user in the browser's address bar, so revoking such access is a prominent way of restricting these websites.
● Firewall: Firewalls come built into electronic devices that have Internet access in one way or another. They ensure that no automated attacks succeed and that no unauthorised entry is made by website malware that tries to inject itself into the system when we visit a website.


● Using trusted browsers: Websites are always accessed through browsers; hence, browsers act as intermediaries between the website and the user and help in interacting with websites. Browsers not only enable this interaction but also help track and block hackers. For example, Brave and Safari keep track of and prevent unauthorised access attempts by hackers.
● OS updates: This is the most common resolution available to the user. Whatever operating system the device runs–Windows, macOS, Android, or iOS–the vendor frequently provides updates containing stronger, more secure versions of the OS that protect against such attacks and warn the user before they use websites that may harm their device.
● Popup and website blockers: These address websites and popups that open on their own without the user's consent, even when the user has not initiated them; this generally happens when opening one website triggers unwanted pages or popups associated with the main page. To prevent such openings, it is recommended to change the browser's site settings so that nothing opens unless the user opens it themselves.
● Anti-virus and malware detectors: Although firewalls are generally available on devices, they are not sufficient to detect sophisticated malware. Anti-virus tools and malware detectors are built for such detection, and using them greatly helps in catching unwanted viruses and malware.

5.2 Resolutions for an Organisation

The organisation is the one responsible for establishing the website on the Internet and for continuously developing and updating it with new features. Protecting data on its end is necessary, since users rely on the organisation when it comes to their data; it is therefore the organisation's responsibility to maintain user trust and protect user data. The following resolutions concern the organisation:

● Secured domains: For a website to be trusted, it must be used by many users and provide genuinely useful services. Domains play a very important role in the development of a website, as they help it become known to users nationally and internationally. For example, the .com domain serves the whole world, .in represents India, .edu represents educational websites, and so on. These domains are well recognised and established, and hence bound to provide security; using them will, in turn, lend high security to the organisation.


● Using a secure prefix in the domain name: Every website's address carries a prefix, either http or https. Although the two look similar, they differ greatly in practice because of the single character 's' attached to http, which stands for secure. An https website ensures security for the website and makes it less vulnerable to attacks. Therefore, using https will help the organisation grow more rapidly over the Internet [7].
● User ID security: Websites are made so that users can access them, and almost every website has a sign-in or log-in feature; hence, it is very necessary to safeguard authentic user credentials against inauthentic ones by changing credentials frequently, since it is difficult for an organisation to know which user accessing the website is legitimate and which is not.
● Analysis and prediction: Websites hosted over the internet are continuously in a vulnerable position, so the organisation must closely monitor the activities of its developers and users, which will help it understand the pattern distinguishing normal access from abnormal access.
● Frequent backups: Backups are the easiest way of ensuring that data is not lost even if, for any reason, the website is hacked or attacked. Frequent backups are necessary because an organisation does not hold just one or two pages of data; it holds a huge amount of data, for which dedicated data management departments are hired to secure it.

6 Legal Provisions Associated with Security Aspects of Websites

Indian legislation does not provide any law that specifically relates to website security. There are no website-specific laws, but since these security aspects fall within the domain of cyberspace and the Internet, and the vulnerabilities concern privacy and data, the Information Technology Act, 2000, comes into play when dealing with such issues. This Act is the principal remedy, as it deals with the rights of people in respect of computer systems and the internet. The following are the provisions dealing with security aspects [7]:
A. The first important provision dealing with this aspect is Sec 43 of the IT Act, 2000: Penalty and compensation for damage to computer, computer system, etc. If any person, without permission of the owner or any other person who is in charge of a computer, computer system, or computer network:
a. accesses or secures access to such computer, computer system, computer network, or computer resource;
b. downloads, copies, or extracts any data, computer database, or information from such computer, computer system, or computer network, including information or data held or stored in any removable storage medium;


c. introduces or causes to be introduced any computer contaminant or computer virus into any computer, computer system, or computer network;
d. damages or causes to be damaged any computer, computer system or computer network, data, computer database, or any other programmes residing in such computer, computer system, or computer network;
e. disrupts or causes disruption of any computer, computer system, or computer network;
f. denies or causes the denial of access to any person authorised to access any computer, computer system, or computer network by any means;
g. provides any assistance to any person to facilitate access to a computer, computer system, or computer network in contravention of the provisions of this Act, Rules, or Regulations made thereunder;
h. charges the services availed of by a person to the account of another person by tampering with or manipulating any computer, computer system, or computer network;
i. destroys, deletes, or alters any information residing in a computer resource or diminishes its value or utility, or affects it injuriously by any means;
j. steals, conceals, destroys, or alters or causes any person to steal, conceal, destroy, or alter any computer source code used for a computer resource with an intention to cause damage;
he shall be liable to pay damages by way of compensation to the person so affected.
B. Sec 65 of Information Technology Act, 2000: Tampering with computer source documents. Whoever knowingly or intentionally conceals, destroys, or alters, or intentionally or knowingly causes another to conceal, destroy, or alter any computer source code used for a computer, computer programme, computer system, or computer network, when the computer source code is required to be kept or maintained by law for the time being in force, shall be punishable with imprisonment up to three years, or with fine which may extend up to two lakh rupees, or with both.
C. Sec 66 of Information Technology Act, 2000: Computer-related offences. If any person, dishonestly or fraudulently, does any act referred to in Sec 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees, or with both.
D. Sec 66B of Information Technology Act, 2000: Punishment for dishonestly receiving stolen computer resource or communication device. Whoever dishonestly receives or retains any stolen computer resource or communication device, knowing or having reason to believe the same to be a stolen computer resource or communication device, shall be punished with imprisonment of either description for a term which may extend to three years or with fine which may extend to rupees one lakh, or with both.


E. Sec 66E of Information Technology Act, 2000: Punishment for violation of privacy. Whoever, intentionally or knowingly captures, publishes, or transmits the image of a private area of any person without his or her consent, under circumstances violating the privacy of that person, shall be punished with imprisonment which may extend to three years or with fine not exceeding two lakh rupees, or with both.
F. Sec 66F of Information Technology Act, 2000: (1) Whoever,
A. with intent to threaten the unity, integrity, security, or sovereignty of India or to strike terror in the people or any section of the people by
i. denying or causing the denial of access to any person authorised to access a computer resource; or
ii. attempting to penetrate or access a computer resource without authorisation or exceeding authorised access; or
iii. introducing or causing to introduce any computer contaminant, and by means of such conduct causes or is likely to cause death or injuries to persons or damage to or destruction of property, or disrupts, or knowing that it is likely to cause damage or disruption of supplies or services essential to the life of the community, or adversely affects the critical information infrastructure specified under Sec 70; or
B. knowingly or intentionally penetrates or accesses a computer resource without authorisation or exceeding authorised access, and by means of such conduct obtains access to information, data, or computer database that is restricted for reasons of the security of the State or foreign relations; or any restricted information, data, or computer database, with reasons to believe that such information, data, or computer database so obtained may be used to cause or is likely to cause injury to the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency, or morality, or in relation to contempt of court, defamation, or incitement to an offence, or to the advantage of any foreign nation, group of individuals, or otherwise, commits the offence of cyber terrorism.
(2) Whoever commits or conspires to commit cyber terrorism shall be punishable with imprisonment which may extend to imprisonment for life.

7 Conclusion

A website is a domain over the Internet that represents an organisation 24/7, something even the employees of the organisation are unable to do. It contains data that brings great profit to the developers and the organisations. In the twenty-first century, when everything has moved online, it is through these websites that it has become easy to gain access to the whole world and the


services being provided: be it groceries, booking shows, or transportation, anything and everything is available in cyberspace, and websites are the interface through which this has become possible. Hence, it becomes necessary to protect a platform that provides such a great opportunity to users worldwide. In this context, this article has tried to explain the various aspects of websites and their security by discussing what a website is and how a website is actually developed in the real world. The article has also dwelt upon the main attacks carried out to gain unauthorised access to a computer system or its data. Where a wrong is committed, there is always the possibility of protection, and hence the various measures that can be taken to protect websites from such attacks have also been covered. Finally, where a violation of the law occurs, the remedies the law provides are also part of this article.

References
1. Zhao M, Grossklags J, Liu P (2015) An empirical study of web vulnerability discovery ecosystems. In: Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pp 1105–1117
2. Dua M, Singh H (2017) Detection and prevention of website vulnerabilities: current scenario and future trends. In: 2017 2nd international conference on communication and electronics systems (ICCES), Coimbatore, India, pp 429–435. https://doi.org/10.1109/CESYS.2017.8321315
3. Kombade RD, Meshram BB (2012) CSRF vulnerabilities and defensive techniques. Int J Comput Netw Inf Secur 4(1):31
4. Chiarelli A. Security for web developers: a practical tour in five examples
5. Sureda Riera T, Bermejo Higuera J-R, Bermejo Higuera J, Martínez Herraiz J-J, Sicilia Montalvo J-A (2020) Prevention and fighting against web attacks through anomaly detection technology. A systematic review. Sustainability 12(12):4945. https://doi.org/10.3390/su12124945
6. Website Security | CISA, 1 Nov 2018. www.cisa.gov/uscert/ncas/tips/ST18-006
7. Sec 43 of IT Act, 2000; Sec 65 of IT Act, 2000; Sec 66 of IT Act, 2000; Sec 66B of IT Act, 2000; Sec 66E of IT Act, 2000

Fashion Image Classification Using Machine Learning V. Monisree, M. Sneha, and S. V. Sneha

Abstract We discuss potentially difficult problems facing the e-commerce industry. One of them relates to the trouble sellers face when they upload images of products to the sales platform and then label them manually. This results in misclassification, so products fail to appear in query results. Another problem relates to a potential bottleneck when placing orders, where the shopper may not know the right keywords but has an impression of the visual image. An image-based search algorithm can unlock the true potential of e-commerce by letting shoppers click on an object's image and search for related products without having to type text. In this article, we explore machine learning algorithms that can help solve both of these problems.
Keywords Machine learning · Neural network · Image processing

1 Introduction

This article demonstrates the use of convolutional neural networks on the Fashion-MNIST dataset. Fashion-MNIST is an image dataset of Zalando's article images comprising a training set of 60,000 examples and a test set of 10,000. Although it is trivial for a person to recognize a visual object in a picture, it is very challenging for a computer system to do the same with human precision. An effective method of recognizing and categorizing images requires an algorithm that is invariant under mild perturbations; for example, special lighting conditions, different scales, and variations in pose can all affect the algorithm's ability to predict the image class correctly.
V. Monisree (B) · M. Sneha · S. V. Sneha
Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India
e-mail: [email protected]



Recently, deep neural networks have been used to solve many problems with excellent results. Specifically, convolutional neural networks have shown excellent results in image classification, image segmentation, computer vision tasks, and natural language processing problems. Image classification problems with features based on gray level, color, motion, depth, and texture have also been addressed with probabilistic models such as Bayesian belief networks and hidden Markov models. Using many different kinds of convolutional neural networks, this article examines classification of the Fashion-MNIST images.

2 Objective

The primary purpose is to develop a hybrid recommender system that incorporates and enhances the properties of current recommender systems, together with a new technique to reduce system time and reveal hidden consumer relationships with great care, and to develop a popularity score that will help users better judge form and shape recommendations [3].

3 Literature Review

Kolekar MH (2011) Bayesian belief network based broadcast sports video indexing. Multimedia Tools and Applications 54(1):27–54.
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324.

4 Existing System

Gharbi Alshammari and Stelios Kapetanakis (July 2019) [7] proposed a technique that combines attribute-based methods with collaborative filtering (CF) to address the data sparsity problem. The method also computes similarity in large-dataset operations without assumptions; for this, the MovieLens 1M dataset was used. They additionally applied techniques such as random jump and KNN. In the end, performance was best with the hybrid approach combined with the neighbourhood technique.


5 Proposed System

The proposed system can proceed from images of the user's clothing, determine the type and color of the clothing, and ultimately suggest the most appropriate clothing for the occasion from the user's current wardrobe. The clothing system provides a place for users to keep photos of the garments they wear, and each user is associated with a wardrobe. We study machine learning and deep learning strategies to match the type of clothing from photographs and determine its color. Finally, we propose an algorithm used to recommend clothing choices [8].

6 System Architecture

Hardware/Software Requirements
Hardware Requirements
● System: Pentium IV
● Speed: 2.4 GHz


● Hard disk: 40 GB
● Monitor: 15" VGA colour
● RAM
Software Requirements
● Operating system: Windows XP
● Coding language: Python

7 Image Processing in Python: Algorithms and Tools

Images describe the world; each image has its own story and contains a lot of important information that can be useful in many ways. This information can be obtained through a process known as image processing. It is a fundamental part of computer vision and plays a significant role in many real-world applications, such as robotics, self-driving cars, and object detection. With the use of image processing, we may analyze and modify many images at once and draw useful insights from them. There are numerous packages available in almost every language [9]. Python is a popular language for this kind of programming; its exceptional libraries and tools greatly simplify image processing. In this section, you will learn about traditional algorithms, strategies, and tools for processing an image and obtaining the desired result. The final result can be either another image or a suitable operation on the input image, which can be used for further analysis and decision making. A convenient representation of an image is the two-dimensional function F(x, y), where x and y are spatial coordinates. The intensity of the image at a particular point (x, y) is defined as the value of F at that point. We call it a digital image if x, y, and the intensity values of F are finite. The pixels are laid out in rows and columns; in a digital image, intensity and color are represented by these so-called image elements, or pixels. An image can also carry a third dimension, where each (x, y) location holds several channel values arranged in a matrix; such an image is referred to as RGB.
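As a brief illustration of this representation, the sketch below, assuming the Pillow and NumPy packages and a hypothetical file shirt.png, loads an RGB image as a (height, width, 3) array and a grayscale image as a single channel:

```python
from PIL import Image
import numpy as np

# Load an image as an RGB array; "shirt.png" is a hypothetical example file.
img = np.asarray(Image.open("shirt.png").convert("RGB"))
print(img.shape)    # (height, width, 3): rows, columns, and the RGB channels
print(img[0, 0])    # F(x, y) at the top-left pixel, e.g. [r g b]

# The same image as grayscale has a single intensity channel.
gray = np.asarray(Image.open("shirt.png").convert("L"))
print(gray.shape)   # (height, width)
```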



Digital images come in several forms:
● An RGB image is a 2D picture made up of red, green, and blue channels.
● Grayscale images have just one channel and contain a range of shades between black and white.
The traditional techniques for processing images are described below.

8 Morphological Image Processing

Morphological image processing attempts to remove imperfections from binary images, because the binary regions produced by simple thresholding can be distorted by noise. It also helps smooth the image through opening and closing operations. Grayscale images may also benefit from morphological processing. These operations are non-linear: they depend not only on the order of the pixels but also on their numerical values. The method probes the image with a small template, known as a structuring element, which is placed at all feasible locations in the image and compared with the corresponding neighbourhood of pixels. The structuring element is a small matrix with values of 0 and 1. Let us examine the two most important morphological operations in image processing, dilation and erosion:


● Dilation adds pixels to the boundaries of the objects in the image.
● Erosion strips pixels away from the objects' edges.
The number of pixels removed from or added to the original image depends on the size of the structuring element. At this point, you might be wondering, "What is a structuring element?" It is a matrix of ones and zeros which may take any shape and size, and it is positioned at locations that make sense with respect to the image and its pixel neighbourhood [1].

In the figure, the rectangular structuring element "A" fits inside the object we want to highlight, "B" intersects the object, and "C" lies outside the object. The pattern of ones and zeros defines the shape of the structuring element, which should correspond to the object we want to highlight; the element's origin identifies the pixel being processed. A minimal sketch of dilation and erosion follows.
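The sketch below, assuming NumPy and SciPy, applies both operations to a small binary square; the size of the 3 × 3 structuring element of ones controls how much the object grows or shrinks:

```python
import numpy as np
from scipy import ndimage

binary = np.zeros((7, 7), dtype=bool)
binary[2:5, 2:5] = True                  # a 3x3 square object in a binary image

selem = np.ones((3, 3), dtype=bool)      # structuring element: a matrix of ones

dilated = ndimage.binary_dilation(binary, structure=selem)  # adds boundary pixels
eroded = ndimage.binary_erosion(binary, structure=selem)    # strips boundary pixels
print(dilated.sum(), eroded.sum())       # 25 vs. 1: the square grew to 5x5 and shrank to a point
```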


9 Gaussian Image Processing

Gaussian blur, also known as Gaussian smoothing, is the outcome of convolving an image with a Gaussian function. Its purpose is to reduce the amount of detail in the image and hence minimize the amount of noise. The visual effect resembles viewing the image through a translucent screen. It is sometimes used as a data augmentation technique in deep learning, or in computer vision to enhance images at various scales. The two-dimensional Gaussian has the standard form

G(x, y) = (1 / (2πσ^2)) exp(−(x^2 + y^2) / (2σ^2))


In practice, it is best to exploit the Gaussian's separability by splitting the process into two passes. In the first pass, the image is blurred in only the horizontal or vertical direction using a 1D kernel. In the second pass, the same 1D blur is applied in the remaining direction. The result is the same as convolving with a single two-dimensional kernel. Let us examine an example to understand what Gaussian filters do to images.
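The following sketch, assuming SciPy, shows both the direct 2D Gaussian smoothing and the equivalent separable two-pass form described above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

image = np.random.rand(64, 64)           # stand-in for a real grayscale image

blur_2d = gaussian_filter(image, sigma=2)    # direct 2D smoothing

# Separability: one 1D pass along the rows, then one along the columns
blur_sep = gaussian_filter1d(gaussian_filter1d(image, sigma=2, axis=0),
                             sigma=2, axis=1)
print(np.allclose(blur_2d, blur_sep))        # True: the two results coincide
```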

10 Fourier Transform in Image Processing

The Fourier transform splits the image into its sine and cosine components. It has numerous applications, including image reconstruction, image compression, and image filtering. When we are talking about images, we are dealing with the discrete Fourier transform. Consider a sinusoidal component of the image; it consists of [6]:
● magnitude, which relates to contrast;
● spatial frequency, which relates to brightness variation;
● phase, which relates to color information.
The frequency-domain image shows this spectrum.

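A minimal sketch of the discrete Fourier transform of an image with NumPy; the random array stands in for a real photograph:

```python
import numpy as np

image = np.random.rand(128, 128)

F = np.fft.fftshift(np.fft.fft2(image))   # 2D DFT with low frequencies centered
magnitude = np.log1p(np.abs(F))           # log-scaled magnitude spectrum for display
phase = np.angle(F)                       # phase spectrum

recon = np.fft.ifft2(np.fft.ifftshift(F)).real   # inverse transform
print(np.allclose(recon, image))          # True: the transform is invertible
```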

11 Edge Detection in Image Processing

Edge detection is the process of finding the boundaries of objects in images. It works by detecting discontinuities in brightness. This can be useful for extracting information from an image, since most of the shape information is contained in its edges. Because classical edge detection techniques detect discontinuities in brightness [8], they can respond to noise in the image just as readily as to genuine changes in gray level. Edges are defined as local maxima of the gradient.
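As a small gradient-based illustration, the sketch below (assuming SciPy; the threshold rule is an arbitrary choice) computes Sobel derivatives and marks pixels with large gradient magnitude as edges:

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(64, 64)           # stand-in for a real grayscale image

gx = ndimage.sobel(image, axis=1)        # horizontal brightness gradient
gy = ndimage.sobel(image, axis=0)        # vertical brightness gradient
magnitude = np.hypot(gx, gy)             # gradient magnitude; edges are its local maxima

# A simple illustrative threshold keeping only the strongest gradients
edges = magnitude > magnitude.mean() + 2 * magnitude.std()
print(edges.sum(), "edge pixels detected")
```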


We have seen the Fourier transform, but it is localized in frequency only; wavelets are localized in both time and frequency. The wavelet transform is therefore suitable for non-stationary signals [4]. We know that edges are among the most important components of an image; applying conventional filters removes the noise, but the resulting image is blurry. The wavelet transform is constructed in such a way that we obtain good frequency resolution for low-frequency components. Here is an example of a simple wavelet transform in two dimensions (see the sketch below):
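A sketch of that two-dimensional transform, assuming the PyWavelets package and using the Haar wavelet as an illustrative choice:

```python
import numpy as np
import pywt   # PyWavelets

image = np.random.rand(128, 128)

# One-level 2D discrete wavelet transform
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
# cA: low-frequency approximation; cH/cV/cD: horizontal/vertical/diagonal detail
print(cA.shape)                               # (64, 64)

recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(recon, image))              # True: perfect reconstruction
```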

Image Processing Using Neural Networks
The neurons or nodes that make up a multilayer neural network are the network's building blocks. These neurons form the network's central machinery and are arranged somewhat like the human brain: they receive data, build patterns on the data to understand it, and predict the outcome. A basic neural network consists of three layers:
1. Input layer
2. Hidden layer
3. Output layer


Basic neural network. The input is received by the input layer, the prediction of the output is made by the output layer, and the bulk of the computation is performed by the hidden layers. Depending on the problem, any number of hidden layers may be used; a neural network must have at least one hidden layer. The principle of operation of a neural network is as follows:
1. Consider a picture in which each pixel feeds a neuron in the first layer; neurons in adjacent layers are connected through channels.
2. Each of these channels is assigned a numerical value known as a weight.
3. On the way into the hidden layers, the inputs are multiplied by the weights and their sum is passed on.
4. The hidden layers apply an activation function that controls whether each neuron fires or stays inactive.
5. When neurons are activated, they pass data on to the layers after them. This method of propagating data through the network is called forward propagation.
6. In the output layer, the neuron with the largest value determines the predicted output; these values are probability estimates.
7. The difference between the actual and predicted results is known as the "error." These errors are transmitted back through the network by a process called "back propagation."
8. Weights are adjusted primarily based on this information. This cycle of forward and backward propagation is repeated multiple times for multiple inputs until the network predicts the correct output in most cases.

9. This learning process completes the neural network. In some instances, the time required to train a neural network can be significant.
In the figure, a_i are the inputs, w_i the weights, z the output, and g an activation function, so a single neuron computes z = g(w_1 a_1 + w_2 a_2 + ... + w_n a_n). A minimal training sketch follows.
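A minimal sketch of such a three-layer network trained on Fashion-MNIST, assuming TensorFlow/Keras; the layer sizes and epoch count are illustrative, not a configuration taken from this paper:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0     # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # input layer: one value per pixel
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # output layer: 10 clothing classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # the "error" driving backpropagation
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)                  # repeated forward/backward passes
print(model.evaluate(x_test, y_test))
```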


Here are a few recommendations for preparing data for image processing:
● If we want better results, we need to feed more data into the model.
● The image dataset needs to be of high quality to provide clearer information, but it may then require more advanced neural networks to process.
● In many instances, RGB images are converted to grayscale before being fed to a neural network.
Types of Neural Networks
Convolutional Neural Network
In summary, a convolutional neural network (ConvNet/CNN) has three layers:
● Convolution layer (CONV): these are the main building blocks of a CNN, where the convolution operation is carried out. The kernel/filter (a matrix) is the component that performs the convolution. Using the stride as a guide, the kernel moves horizontally and vertically over the image until the whole image has been covered [2].

● Pooling layer (POOL): this layer is responsible for dimensionality reduction, which helps decrease the processing power required. There are two kinds of pooling: max pooling and average pooling. Max pooling returns the largest value from the portion of the image covered by the kernel; average pooling returns the average of all the values in the portion of the image covered by the kernel.


● Fully connected layer (FC): the fully connected layer works on flattened inputs, where each input is connected to all the neurons.

CNNs are mainly used to extract features from an image using their layers. They are widely used in image classification, where each input image is passed through the sequence of layers to obtain a probability value between 0 and 1.
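A minimal Keras sketch of this CONV, POOL, FC pipeline for 28 × 28 grayscale inputs such as Fashion-MNIST; the filter counts are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(28, 28, 1)),    # CONV: kernels slide over the image
    tf.keras.layers.MaxPooling2D((2, 2)),               # POOL: max pooling halves each dimension
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),                          # flatten feature maps for the FC layers
    tf.keras.layers.Dense(64, activation="relu"),       # FC: every input connected to every neuron
    tf.keras.layers.Dense(10, activation="softmax"),    # class probabilities between 0 and 1
])
model.summary()
```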

Generative Adversarial Networks
Generative models use an unsupervised learning approach (there are images but no labels). A GAN has two components: a generator and a discriminator. The discriminator is trained to recognize fake images while the generator learns to make convincing ones; training continues until the discriminator can no longer be fooled reliably. The generator is never allowed to see real images, so it produces poor results at first; the discriminator, however, does see real images, but they are


interspersed with fake images produced by the generator, which it must classify as real or fake [5]. Noise is fed to the generator so that it produces different samples each time rather than replicating the same image. The scores predicted by the discriminator are used to refine the generator's output; by the time the generator produces images that are hard to distinguish from real ones, the user may be satisfied with it. The discriminator also keeps improving as it receives better and better images from the generator. Popular variants of GAN are DCGAN, conditional GAN (cGAN), StyleGAN, CycleGAN, DiscoGAN, GauGAN, and so forth. GANs are excellent for generating and processing images. Some applications of GANs include face aging, photo blending, super-resolution, photo inpainting, and clothing transfer.
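A minimal structural sketch of the two GAN components in Keras, under stated assumptions: the 100-dimensional noise vector and the layer sizes are illustrative, and the adversarial training loop is omitted:

```python
import tensorflow as tf

# Generator: maps random noise to a 28x28 image
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(7 * 7 * 64, activation="relu", input_shape=(100,)),
    tf.keras.layers.Reshape((7, 7, 64)),
    tf.keras.layers.Conv2DTranspose(32, (4, 4), strides=2, padding="same",
                                    activation="relu"),      # upsample to 14x14
    tf.keras.layers.Conv2DTranspose(1, (4, 4), strides=2, padding="same",
                                    activation="sigmoid"),   # upsample to 28x28
])

# Discriminator: classifies an image as real (1) or fake (0)
discriminator = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), strides=2, padding="same",
                           input_shape=(28, 28, 1), activation="relu"),
    tf.keras.layers.Conv2D(64, (3, 3), strides=2, padding="same",
                           activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

noise = tf.random.normal([16, 100])
fake_images = generator(noise)           # samples vary with the input noise
scores = discriminator(fake_images)      # scores used to refine the generator
print(fake_images.shape, scores.shape)   # (16, 28, 28, 1) (16, 1)
```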

References
1. Andreeva E, Ignatov DI, Grachev A, Savchenko AV (2018) Extraction of visual features for recommendation of products via deep learning. In: International conference on analysis of images, social networks and texts. Springer, Cham, pp 201–210
2. Shankar D, Narumanchi S, Ananya HA, Kompalli P, Chaudhury K (2017) Deep learning based large scale visual recommendation and search for e-commerce. arXiv preprint arXiv:1703.02344
3. de Barros Costa E, Rocha HJB, Silva ET, Lima NC, Cavalcanti J (2017) Understanding and personalizing clothing recommendation for women. In: World conference on information systems and technologies. Springer, Cham, pp 841–850
4. Tuinhof H, Pirker C, Haltmeier M (2018) Image-based fashion product recommendation with deep learning. In: International conference on machine learning, optimization, and data science. Springer, Cham, pp 472–481
5. Yang Z, Su Z, Yang Y, Lin G (2018) From recommendation to generation: a novel fashion clothing advising framework. In: 2018 7th international conference on digital home (ICDH). IEEE, pp 180–186
6. Liu J, Song X, Chen Z, Ma J (2019) Neural fashion experts: I know how to make the complementary clothing matching. Neurocomputing 359:249–263
7. Zhou W, Mok PY, Zhou Y, Zhou Y, Shen J, Qu Q, Chau KP (2019) Fashion recommendations through cross-media information retrieval. J Vis Commun Image Represent 61:112–120


8. Su X, Gao M, Ren J, Li Y, Rätsch M (2020) Personalized clothing recommendation based on user emotional analysis. Discret Dyn Nat Soc
9. Borji A (2019) Pros and cons of GAN evaluation measures. Comput Vis Image Underst 179:41–65
10. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. Adv Neural Inf Process Syst 27

An Expert Sequential Shift-XOR-Based Hash Function Algorithm Sunita Bhati, Anita Bhati, and Sanjay Gaur

Abstract In this technological world, all sorts of information transactions of governmental and non-governmental organizations are done over the internet. Everybody uses WhatsApp, Facebook, Instagram, or other social sites. All the information floating on the internet is at high risk because of unethical practices by internet users, so cybercrime is a crucial matter of concern, and information security is now a challenging issue for the global community. Many techniques fall under various security services, such as data confidentiality, authentication, access control, data integrity, and more. Authentication is one of the most prominent security services. MD5 and SHA are the most popular cryptographic hash functions, widely used in internet applications and protocols to provide authenticity; their concept is easy to implement and provides a high level of security. In this paper, we propose a new hash function construction named SS-XORHF (Sequential Shift–XOR based Hash Function) for message authentication. Our scheme is multifaceted and therefore produces a very complex hash code for authentication, which gives strong security.
Keywords Authentication · Hash function · XOR operation · Bit-shifting function

S. Bhati (B)
JECRC University, Jaipur, India
e-mail: [email protected]
A. Bhati
Aishwarya College, Udaipur, India
S. Gaur
Jaipur Engineering College & Research Center, Jaipur, India
e-mail: [email protected]


1 Introduction

Cryptography is the scientific art of securing information. Many cryptographic techniques have been developed to protect information from cyberattacks. To achieve data confidentiality, different kinds of encryption algorithms are used, such as AES, DES, 3DES, Blowfish, and IDEA. At present, conventional encryption alone is not considered a sufficient security measure because of cryptanalysis: the information could still be accessed by unauthorized users for malicious purposes, and encryption by itself makes no provision for authenticating messages. Message authentication is likewise a major issue over the network. Authenticating a message means confirming that it has really been sent by a certain party, usually achieved by a combination of inspecting the message and/or the messenger. Effective and robust authentication techniques are required to enhance information security over the internet. Message authentication can be achieved using various algorithms such as MD4, MD5, SHA-1, RIPE-MD, HAVAL, and more. In this paper we propose a new authentication technique, the Sequential Shift XOR based Hash Function algorithm, which provides a high level of information security through authentication and data integrity. In this hash algorithm, logical operations with complex mechanisms are used to generate a strong hash code, hence offering a sound security technique through message authentication.

2 Related Work

Information secrecy is a challenging issue in this modern era. Many techniques have been developed to provide different security services such as data confidentiality, data integrity, message authentication, and non-repudiation. The following reviews describe a few popular techniques that provide information security through message authentication.
In 2022, Zhuoyu and Yongzhen [1] showed that information security is a crucial matter of concern to society, and the study of hash functions has drawn increasing attention because their authentication principle makes them a core part of modern cryptography. The popular hash algorithms are MD5 and SHA and their different versions. Hash-function-based technology is used for password management, electronic signatures, and spam screening. Their study focused on an improved MD5 algorithm whose changes to the internal structure of MD5 make it more efficient in retrieval.
In 2021, Ambedkar et al. [2] focused on the many existing algorithms that depend on an initial value and key constants to increase the security strength of the hash function. They developed the autonomous initial value proposed secure hash algorithm (AIVPSHA64), which produces a sixty-four-bit secure hash code without the need for an initial value and key constant. It is particularly useful for smart cards, which have limited memory space, to verify their identity. The performance of


the autonomous initial value proposed secure hash algorithm (AIVPSHA64) was also measured and found to be significantly better than existing algorithms.
In 2019, De Guzman et al. [3] implemented an enhanced secure hash algorithm-512 in web applications, specifically for password hashing. In the proposed algorithm, the Hill cipher is included for salt generation, which increases the complexity of generating hash tables that might be used to attack the algorithm. Identical passwords were saved in the database for testing purposes to try to create hash collisions, and each results in a fresh salt that produces a new hash code. The matrix encryption key provides five matrices, selected based on the length of the concatenated username and password and concatenated characters from the username. Hence, the same password will result in a different hash code, making it highly secure against future attacks.
In 2007, Mirvaziri et al. [4] noted that researchers have found weaknesses in a number of hash functions, including MD5, SHA, and RIPEMD, so the purpose of their study was to combine functions to reinforce these hash functions. Some studies have shown collision attacks on the SHA-1 and MD5 hash functions, so the natural response to this threat was to assess the weak points of the protocols that actually depend on collision resistance for their security, and potentially schedule an upgrade to a stronger hash function. They presented several simple message pre-processing techniques that can be combined with MD5 or SHA-1 so that applications are no longer vulnerable to the known collision attacks.
In 2004, MD2, a cryptographic hash algorithm, was shown to be vulnerable to a pre-image attack with time complexity equivalent to 2^104 applications of the compression function [5]. The author concluded that "MD2 can no longer be considered a secure one-way hash function".
The MD2 Message-Digest Algorithm, a cryptographic hash function developed by Ronald Rivest in 1989, was designed for 8-bit computers [6]. The algorithm is no longer secure, but it is still used with RSA in generating digital certificates in public key infrastructures [7].
In 1990, Ronald Rivest developed a cryptographic hash function named the MD4 Message-Digest Algorithm [6]. In this algorithm, the input message length is not fixed, but it generates a 128-bit output known as a message digest or hash code. MD4 had some weaknesses, which were shown by Den Boer and Bosselaers in 1991 [8].
In 1991, Professor Ronald Rivest of MIT developed a new message digest algorithm named MD5 to replace the MD4 hash function. MD5 is a popular cryptographic hash function that produces a 128-bit hash value and is used to check data integrity in different security applications.
The National Institute of Standards and Technology (NIST), USA, introduced the Secure Hash Algorithm as a series of SHA functions, as given below:


● SHA-0: a 160-bit hash function introduced by NIST. Because of an undisclosed "significant flaw" it was withdrawn and replaced by the revised version SHA-1.
● SHA-1: introduced by the United States National Security Agency in 1995. It is one of the most popular functions in the SHA series and is applied in many applications and protocols [10, 11].
● SHA-2: has two main versions, SHA-256 and SHA-512, with different block sizes; SHA-256 uses 32-bit words and SHA-512 uses 64-bit words. It is considered secure against known attacks [12, 13].
A Message Authentication Code (MAC) is also a cryptographic hash construction used for the authentication of a message: it ensures the data integrity and authenticity of a message. The algorithm takes two inputs, a secret key and an arbitrary-length message, and produces a fixed-size MAC code to authenticate the given message.
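A brief illustration with Python's standard library of the digest sizes mentioned above and of a keyed MAC (HMAC-SHA256); the key and message are made-up examples:

```python
import hashlib
import hmac

msg = b"transfer 100 INR to account 42"

print(hashlib.sha256(msg).hexdigest())   # 256-bit digest (64 hex characters)
print(hashlib.sha512(msg).hexdigest())   # 512-bit digest
print(hashlib.md5(msg).hexdigest())      # 128-bit digest (collision-broken; avoid)

key = b"shared-secret-key"
mac = hmac.new(key, msg, hashlib.sha256).hexdigest()   # keyed MAC over the message
# The receiver recomputes the MAC with the shared key and compares in constant time
print(hmac.compare_digest(mac, hmac.new(key, msg, hashlib.sha256).hexdigest()))
```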

3 Problem Definition

By reviewing a large body of research work, two major problems emerge:
● Most hash function algorithms become vulnerable to attacks after some period of time, so there is a requirement for a stronger hash function algorithm that provides a very high level of security.
● Existing hash function algorithms mainly rely on complicated mathematical procedures. As the complexity of an algorithm increases, its execution speed decreases, i.e., performance becomes low. There is thus considerable scope for introducing new ideas and concepts into hash functions so that the performance of the system is enhanced.
This research work is mainly focused on these two aspects of protecting information.

4 Methodology

Research methodology is a way to solve a research problem step by step. A research study is a complete, in-depth analysis of a particular problem. Conducting research is a demanding task that requires a scientific approach to analysing information about the subject under investigation. Performing the analysis with a correct methodology yields accurate predictions, which helps in taking the right direction. Methodology is the key part of any research; it describes the complete procedure and the tools required to conduct the study. The research process generally consists of five


important steps: problem definition, research design, data collection, data analysis, and interpretation of results. Here, the methodology focuses on designing a new hash function using bit-shifting concepts and logical operations, one that is very quick in execution and provides a high level of authentication security. The main outcome of this research is a validated architecture and algorithm for use in secure information transmission systems on networks. The algorithm has been tested and proved.

5 Description

The main concept of the SS-XORHF algorithm is shown in Fig. 1. In this algorithm, we calculate a hash code for providing authentication. First of all, the original message passed into the SS-XORHF algorithm is converted into binary form (i.e., 0s and 1s). All message bits are then divided into equal-size blocks of 512 bits each. If the last block does not reach 512 bits, padding bits are added to complete the 512-bit block. Now, one block of 512 bits at a time is processed in SS-XORHF along with the initial vector (IV) bits of size 128 bits. In SS-XORHF, the bit-shifting concept and arithmetic and logical operations are used to produce a 128-bit output. The whole working process of SS-XORHF is described further in the paper. These output bits then act as a CV (chaining variable) for

Fig. 1 Sequential shift XOR-based hash function algorithm


Fig. 2 Internal processing of SS-XORHF (HSS-XORHF )


the next 512-bit block's processing. This process continues until all message blocks have been processed. At the end, we get a 128-bit output as the hash code (H1) for authentication. This hash code is attached to the original message and sent to the receiver. The receiver separates the message and the code and again calculates the hash code (H2) of the message using the SS-XORHF algorithm. The receiver then compares the two hash codes (H1 and H2); if both are equal, it is confirmed that the received message is authentic, that the source of the message is authentic, and that the message was not altered during transmission.
Working of SS-XORHF (H_SS-XORHF)
One block of 512 bits and the initial vector (IV) of 128 bits are input at a time into SS-XORHF. The message block (512 bits) is subdivided into four sub-blocks of 128 bits each. Similarly, the IV is subdivided into two sub-blocks of 64 bits each (Fig. 2).
In the first round, the first 128-bit message sub-block is divided into two 64-bit halves. The first 64-bit half is XORed with the first 64-bit sub-block of the IV and the calculated bits are stored in array A1; the second 64-bit half is XORed with the second 64-bit sub-block of the IV and the calculated bits are stored in array A2.
In the second round, the second 128-bit message sub-block is divided into two 64-bit halves. The first half is XORed with the output bits of array A1 from the first round and stored in array B1; the second half is XORed with the output bits of array A2 and stored in array B2.
In the third round, the third 128-bit message sub-block is divided into two 64-bit halves. The first half is XORed with the output bits of array B1 and stored in array C1; the second half is XORed with the output bits of array B2 and stored in array C2.
In the fourth round, the fourth 128-bit message sub-block is divided into two 64-bit halves. The first half is XORed with the output bits of array C1 and stored in array D1; the second half is XORed with the output bits of array C2 and stored in array D2.
Now, the output bits of array D1 are passed into the Left-Shift Function and the output bits of array D2 are passed into the Right-Shift Function. After the shift operations, the 64 output bits of the Left-Shift Function and the 64 output bits of the Right-Shift Function are combined to make a 128-bit code named CV_output. This bit sequence is unpredictable. In the end, we get 128 output bits collectively, which are then passed into the same hash function H_SS-XORHF as the chaining variable for the next 512-bit block. This process continues until all the message


blocks are processed. At last, we get a 128-bit message digest, or hash code, for message authentication. A minimal sketch of the scheme follows.
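The sketch below renders the description above in Python. The paper does not fix the IV value, the exact padding rule, or the shift amounts, so the zero IV, zero padding, and 1-bit circular shifts used here are illustrative assumptions only:

```python
MASK64 = (1 << 64) - 1

def rotl64(x: int, n: int = 1) -> int:
    return ((x << n) | (x >> (64 - n))) & MASK64   # circular left shift

def rotr64(x: int, n: int = 1) -> int:
    return ((x >> n) | (x << (64 - n))) & MASK64   # circular right shift

def ss_xorhf(message: bytes, iv: int = 0) -> int:
    data = message + b"\x00" * (-len(message) % 64)        # pad to 512-bit blocks
    cv_left, cv_right = (iv >> 64) & MASK64, iv & MASK64   # two 64-bit IV halves
    for off in range(0, len(data), 64):                    # one 512-bit block at a time
        block = data[off:off + 64]
        for r in range(4):                                 # four rounds: A, B, C, D arrays
            sub = int.from_bytes(block[r * 16:(r + 1) * 16], "big")  # 128-bit sub-block
            cv_left ^= (sub >> 64) & MASK64                # XOR first 64-bit half
            cv_right ^= sub & MASK64                       # XOR second 64-bit half
        # D1 -> Left-Shift Function, D2 -> Right-Shift Function, then combine as CV
        cv_left, cv_right = rotl64(cv_left), rotr64(cv_right)
    return (cv_left << 64) | cv_right                      # 128-bit hash code

digest = ss_xorhf(b"hello authentication")
print(f"{digest:032x}")                                    # 128-bit hex digest
```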

6 Analysis and Interpretation

The two main issues defined earlier were taken into consideration while designing the new hash function algorithm, the Sequential Shift XOR-based Hash Function Algorithm (SS-XORHF). In this algorithm, the input message is processed in binary format by subdividing it into equal-size blocks of 512 bits each. Each 512-bit block is further subdivided into four equal sub-blocks of 128 bits, which are then processed through four rounds using the initial vector (IV) bits with XOR operations and a complex computing architecture, followed by Left-Shift and Right-Shift operations. With these characteristics, we finally obtain a complex, unpredictable hash code. The proposed algorithm is therefore robust and offers strong information security through authentication and data integrity.

7 Conclusion

In this paper, we have proposed a new authentication technique, the Sequential Shift XOR-based Hash Function Algorithm (SS-XORHF), which is extremely strong and provides a high level of information security through authentication and data integrity. Because it uses simple logical and mathematical operations and functions, the execution speed of the algorithm is fast compared with existing algorithms.

References
1. Zhuoyu H, Yongzhen L (2022) Design and implementation of efficient hash functions. In: IEEE 2nd international conference on power, electronics and computer applications (ICPECA)
2. Ambedkar BR, Bharti PK, Husain A (2021) Design and analysis of hash algorithm using autonomous initial value proposed secure hash algorithm 64. In: IEEE 18th India Council international conference (INDICON)
3. De Guzman FE, Gerardo BD, Medina RP (2019) Implementation of enhanced secure hash algorithm towards a secured web portal. In: IEEE 4th international conference on computer and communication systems (ICCCS)
4. Mirvaziri H, Jumari K, Ismail M, Hanapi ZM (2007) A new hash function based on combination of existing digest algorithms. In: IEEE 5th student conference on research and development
5. Muller F (2004) The MD2 hash function is not one-way. ASIACRYPT, pp 214–229
6. http://en.wikipedia.org/wiki/MD2
7. Den Boer B, Bosselaers A (1991) An attack on the last two rounds of MD4


8. Dobbertin H (1995) Cryptanalysis of MD4
9. Schneier on Security: Cryptanalysis of SHA-1. http://www.schneier.com/blog/archives/2005/02/cryptanalysis_o.html
10. http://csrc.nist.gov/groups/ST/toolkit/secure_hashing.html
11. Schneier on Security: NIST Hash Workshop Liveblogging (5). http://www.schneier.com/blog/archives/2005/11/nist_hash_works_4.html
12. Hash cracked, heise Security. http://www.heise-online.co.uk/security/Hash-cracked--/features/75686/2
13. http://en.wikipedia.org/wiki/Secure_Hash_Algorithm
14. Yavuz AA et al (2019) Ultra lightweight multiple-time digital signature for the internet of things devices. IEEE Trans Serv Comput
15. Santini P et al (2019) Cryptanalysis of a one-time code-based digital signature scheme. IEEE Int Symp Inf Theory (ISIT)
16. Ali AM et al (2020) A novel improvement with an effective expansion to enhance the MD5 hash function for verification of a secure e-document. IEEE Access 8:80290–80304
17. Samiullah M et al (2020) An image encryption scheme based on DNA computing and multiple chaotic systems. IEEE Access 8:25650–25663
18. De Guzman et al (2019) Implementation of enhanced secure hash algorithm towards a secured web portal. In: 2019 IEEE 4th international conference on computer and communication systems
19. Singh L et al (2020) Secure data hiding techniques. Multimedia Tools and Applications, pp 15901–15921
20. Ahmed AA, Ahmed WA (2019) An effective multifactor authentication mechanism based on combiners of hash function over internet of things. Sensors (MDPI). https://doi.org/10.3390/s19173663
21. Mitra P (2018) Recent advances in cryptography and network security. IntechOpen
22. Wang JY, Li YZ (2013) The design and realization of the single-block hash function for the short message. Appl Mech Mater 411–414:53–59


Medical Record Secured Storage Using Blockchain with Fingerprint Authentication R. Jayamahalakshmi, G. Abinaya, S. Oviashree, and S. Yuthika

Abstract The significance of medical records in any hospital cannot be overemphasized: they are the main tool available for achieving the highest standards of care and are helpful to both patients and medical professionals. Many patients view electronic health records as intangible because, being electronic, they have no physical form. To address this, with a doctor's approval and the patient's fingerprint impression, the administrator of this application can access the patient's prior clinical health information. Developing a system that is useful to both patients and professionals is one of the most pressing issues when dealing with this information. In this application, the network is updated while reports are being hashed, and the blockchain is invoked when taking biometric access from the patient. The blockchain keeps a decentralized and secure record of cryptographic transactions; hence, it can ensure the integrity and security of information records.
Keywords Medical record · Fingerprint impression · Blockchain · Decentralized

1 Introduction

A change in health care has been brought about by evolving legislation and technology. Documenting the care given to a patient by the treating physician is essential, and there is now an established body of knowledge dedicated to the art and science of maintaining patient records. Only in this manner will the doctor be able to prove that therapy was given properly; records are also of great use in analyzing and scientifically examining issues with patient care. The medical history of a patient is an important part of the care they receive.
R. Jayamahalakshmi · G. Abinaya · S. Oviashree · S. Yuthika (B)
Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India
e-mail: [email protected]
G. Abinaya
e-mail: [email protected]


Therefore, the first practical universal electronic health records might be created by using blockchain to secure medical information. Mathematically speaking, a hash is a function that maps input of any length to a fixed-length output satisfying certain requirements; the hash therefore remains the same size regardless of the underlying data or report size. A blockchain is a distributed public ledger used to track transactions across multiple computers, making it impossible to change a record retroactively without also altering every succeeding block and the consensus of the network. In this application, the patient must first register and log in before proceeding to the patient's main page. The patient who needs a prescription then completes an application form to meet the doctor, which notifies the doctor concerned. The goal of this project is to provide patients direct access to their medical records. People can use electronic medical records to retain a digital record of their illnesses and track the success rates of treatment. The system keeps the encrypted electronic medical records in a decentralized location to avoid single points of failure and to ensure the safety of the storage platform.

2 Literature Survey

The distributed, peer-to-peer blockchain technology has (slowly) become one of the most notable solutions for protecting data storage and transmission. One study provides a comprehensive evaluation of the most widely used blockchain security applications and highlights the scholarly literature that explores the potential of blockchain technology for enhancing cybersecurity. Based on that research, the Internet of Things (IoT), along with network and machine visualization, public key cryptography, web applications, certification schemes, and the safekeeping of personally identifiable information (PII), are all prime candidates for innovative blockchain applications [1].
Blockchains are not a practical solution for storing large data. In order to store and distribute massive data more quickly and easily, IPFS may be used as a file-sharing system; it relies on cryptographic hashes that can be conveniently recorded on a blockchain. The inability to share files selectively is a major limitation of IPFS, and this capability is essential when confidential or private information must be communicated. In light of the need for access-controlled file sharing, one article introduces an updated version of the Interplanetary File System (IPFS) that makes use of Ethereum smart contracts. The upgraded IPFS software enforces the access control list maintained by the smart contract by communicating with it on every file operation (upload, download, and transfer). The effects of IPFS with restricted access are examined and addressed in an experimental setting [2].
Personal health records (PHRs) are very helpful since they empower individuals to manage their own health data. One paper proposes utilizing Ethereum-based smart contracts to give patients unrestricted access to their data


in a form that is immutable, transparent, traceable, reliable, and secure. Medical records may be collected, stored, and exchanged without worry when using the proposed solution of trustworthy reputation-based re-encryption oracles and the decentralized storage of the Interplanetary File System (IPFS). Extensive explanations of the inner workings of these algorithms are provided, and the proposed smart contracts are assessed against key performance metrics such as cost and accuracy. The generalizability and security of the solution are discussed as well, and the paper dissects the remaining issues with the proposed solution [3].
Many existing IoT devices (for instance, commercial banking smart cards) are fitted with a fingerprint authentication mechanism to counter the weaknesses of password-based authentication. Furthermore, fingerprint templates are not secured in current systems. Due to these concerns, one work offers a fingerprint authentication system that protects user privacy while still working with the Internet of Things. Both preimage and hill-climbing attacks are neutralized by this technology. Using a well-known open-source framework (i.e., Open Virtual Platforms), a prototype of the suggested system is created. The usefulness of the proposed IoT-oriented fingerprint identification method is confirmed by extensive experimental findings on eight benchmark data sets. The authentication accuracy of the method is on par with that of unencrypted fingerprint authentication systems used in a resource-rich, non-IoT setting. Furthermore, the system prototype may be deployed on commercially available low-cost smart cards such as the Atmel AT24C256C Memory Smart Card 256 K bits [4].
To mitigate the risks associated with key leakage and give fine-grained control over who may access encrypted PHRs, a blockchain-based hierarchical data sharing framework (BHDSF) has been introduced. Compared with current methods, the BHDSF concurrently accounts for untrusted clouds and malevolent auditors to accomplish trustworthy PHR integrity auditing and metadata verification using blockchain technology [5].
A method based on ciphertext-policy attribute encryption efficiently restricts access to electronic medical records without impairing retrieval speed. Additionally, the possibility of a single point of failure is eliminated by storing the encrypted electronic medical records in the distributed Interplanetary File System (IPFS). Furthermore, the immutable and trackable properties of blockchain technology ensure the safety of medical records throughout storage and retrieval [6].

3 Existing System

A reliable and effective blockchain-based system for sharing remote EMRs can provide conditional identity privacy preservation and tracking, in addition to significantly increasing the efficiency of remote EMR sharing. The effective exchange of EMRs can advance remote medical services and foster the growth of the associated healthcare sector.


Drawbacks
● It is neither efficient nor adaptable enough to meet the access-control needs of a large organization.
● Access control is not consistent: several attributes associated with the encrypted data have no bearing on the encrypted data itself.

4 Proposed System

The patient must submit an application for the insurance scheme in order to claim and receive support from the insurance company. The company then verifies the patient's condition after checking the doctor's assertion. When a patient requests professional assistance, the doctor can examine the details of the application before granting access.

Technique: SHA-256 algorithm, SQL.

Merits
● Blockchain creates an irreversible audit trail, permitting simple tracking of changes on the network.
● It is decentralized, meaning any participant can verify information recorded on the blockchain.

Figure 1 shows the system architecture of the proposed system. During the proof-of-storage phase, users hold only a small amount of metadata locally; they therefore need to determine whether their files are accurately stored on the cloud server without downloading them. Users who do not submit the original files are detected through digital signatures in this module.

Fig. 1 System architecture diagram
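As a minimal illustration of the hashing and verification step described above, the following Python sketch shows how a client that keeps only a SHA-256 digest locally can later check that the cloud server returned the record unmodified. The record contents and function names are illustrative, not the proposed system's actual code.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the fixed-length SHA-256 hex digest of a record."""
    return hashlib.sha256(data).hexdigest()

# At upload time: the client keeps only the small digest locally
# (the "tiny amount of metadata" of the proof-of-storage phase).
record = b"patient-id:42;diagnosis:...;date:2023-01-15"
stored_digest = sha256_digest(record)

# At audit time: the server returns the file, and the client
# recomputes the digest to detect any tampering or substitution.
def verify(returned: bytes, expected_digest: str) -> bool:
    return sha256_digest(returned) == expected_digest

assert verify(record, stored_digest)             # unmodified record passes
assert not verify(record + b"x", stored_digest)  # any change is detected
```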


5 Modules

List of Modules
● User
● Doctor
● Admin

User
The user must register for this application before logging in to reach the main page. The user can then request clinical assistance from a doctor through the application. The doctor is notified and, once access is granted, the user can obtain the service securely. If a doctor wants access to previous clinical data, the user must supply a unique fingerprint (Fig. 2).

Doctor
The doctor must register their details in this application before the user can make use of it. After approval by the administrator, the doctor can log in and is taken to the main page, where the doctor submits the document report to the administrator in order to update the record. If a doctor requires previous patient information, the administrator must release that information using a specific access code (Fig. 3).

Administrator
The administrator is informed of all patient details and doctor approvals. A newly applying doctor needs approval from both the administrator and the insurance company, which monitors the operational capacity of users and doctors (Fig. 4). A minimal sketch of the access-code flow appears below.

Fig. 2 User module diagram
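The admin-mediated release of past records via an access code could look roughly like the following Python sketch; the identifiers and the single-use code policy are assumptions made for illustration.

```python
import secrets

# Hypothetical sketch of the admin-mediated release of past records:
# the admin issues a one-time access code that a doctor must present.
access_codes = {}  # code -> (doctor_id, patient_id)

def admin_grant(doctor_id: str, patient_id: str) -> str:
    code = secrets.token_hex(8)  # one-time access code
    access_codes[code] = (doctor_id, patient_id)
    return code

def doctor_fetch(doctor_id: str, patient_id: str, code: str) -> bool:
    # The record is released only if the code matches this doctor/patient pair.
    return access_codes.pop(code, None) == (doctor_id, patient_id)

code = admin_grant("dr-007", "patient-42")
assert doctor_fetch("dr-007", "patient-42", code)      # first use succeeds
assert not doctor_fetch("dr-007", "patient-42", code)  # code is single-use
```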


Fig. 3 Doctor module diagram

Fig. 4 Admin module diagram

6 Algorithm

Hash functions are pervasive in data security applications because of their usefulness. A hash function is a mathematical procedure that compresses an input into a number: whereas the output of a hash function is always of a fixed length, the input may be of any length. Values returned by a hash function are often referred to as message digests or simply hash values. Figure 5 shows the hash function in action.

Features of Hash Functions
Hash functions often have the following characteristics:

(1) Hash Value Output of a Constant Length
● The hash function makes it possible to compress data of varying lengths into a single, uniform format. In common parlance, this operation is called "data hashing".


Fig. 5 Hash algorithm

● Since the output of a hash function is often much smaller than the input data, it is also referred to as a compression function.
● Because it is a condensed form of a larger data collection, a hash is sometimes also called a digest.
● A hash function with an n-bit output is referred to as an n-bit hash function. Most hashing algorithms generate hashes with a bit length of 160 up to 512 bits.

(2) Operational Effectiveness
● For any hash function h and input x, it is usually quite fast to compute h(x).
● The computational speed of hash functions is far higher than that of symmetric encryption.
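A short Python fragment makes the constant-length property concrete: inputs of very different sizes all map to a 256-bit (64 hex character) SHA-256 digest.

```python
import hashlib

for message in (b"a", b"hello world", b"x" * 1_000_000):
    digest = hashlib.sha256(message).hexdigest()
    # Every digest is 64 hex characters (256 bits), regardless of input size.
    print(len(message), len(digest), digest[:16])
```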

7 Screenshots See Figs. 6, 7, 8, 9, 10 and 11.

8 Conclusion

A public distributed ledger is used to record transactions across many computers, making it impossible to change the record retroactively without altering every subsequent block and obtaining the consensus of the network. After logging in, the patient is taken to the patient's main page.


Fig. 6 Main page

Fig. 7 Login page

Fig. 8 Patient register


Fig. 9 Application page

Fig. 10 Doctor’s main page

Fig. 11 Admin’s main page


The doctor who needs to provide treatment is then informed by the patient, who must complete the application process to consult the doctor.

Future Enhancements
1. Setting up a genuinely anonymous database system.
2. Increasing the quantity and size of the messages sent while also improving the efficiency of the protocols.
3. Putting two or more algorithms into practice.

References

1. Taylor PJ, Dargahi T, Dehghantanha A, Parizi RM, Choo KKR (2020) A systematic literature review of blockchain cyber security. Digit Commun Netw 6(2):147–156
2. Liu D, Lee J (2020) CNN based malicious website detection by invalidating multiple web spams. IEEE Access 8(1):97258–97266
3. Martin W, Friedhelm V, Axel K (2019) Tracing manufacturing processes using blockchain-based token compositions. Digit Commun Netw 6(2):167–176
4. Puthal D, Malik N, Mohanty SP, Kougianos E, Das G (2018) Everything you wanted to know about the blockchain: its promise, components, processes, and problems. IEEE Consum Electron Mag 7(4):6–14
5. Peng L, Feng W, Yan Z (2020) Privacy preservation in permissionless blockchain: a survey. Digit Commun Netw. https://doi.org/10.1016/j.dcan.2020.05.008
6. Kakade N, Patel U (2020) Secure secret sharing using homomorphic encryption. In: Proc 2020 11th International conference on computing, communication and networking technologies, pp 1–7
7. Sundari S, Ananthi M (2015) Secure multi-party computation in differential private data with data integrity protection. In: Proc 2015 International conference on computing and communications technologies, pp 180–184
8. Jiao S, Lei T, Gao Y, Xie Z, Yuan X (2019) Known-plaintext attack and ciphertext-only attack for encrypted single-pixel imaging. IEEE Access 7(2):119557–119565
9. Kaushik S, Puri S (2012) Online transaction processing using enhanced sensitive data transfer security model. In: Proc 2012 Students conference on engineering and systems, pp 1–4
10. Zheng W, Zheng Z, Chen X, Dai K, Li P, Chen R (2019) NutBaaS: a blockchain-as-a-service platform. IEEE Access 7:134422–134433
11. Casino F, Patsakis C (2020) An efficient blockchain-based privacy-preserving collaborative filtering architecture. IEEE Trans Eng Manage 67(4):1501–1513
12. Chkliaev D, Hooman J, van der Stok P (2000) Mechanical verification of transaction processing systems. In: Proc ICFEM 2000 Third IEEE International conference on formal engineering methods, pp 89–97
13. Zhang S, Lee JH (2020) Mitigations on sybil-based double-spend attacks in Bitcoin. IEEE Consum Electron Mag 7(2):1–1
14. Wang X, Feng Q, Chai J (2018) The research of consortium blockchain dynamic consensus based on data transaction evaluation. In: Proc 2018 11th International symposium on computational intelligence and design, pp 214–217


15. Zhang S, Lee JH (2019) A group signature and authentication scheme for blockchain-based mobile-edge computing. IEEE Internet Things J 7(5):4557–4565
16. Dhinakaran D, Khanna MR, Panimalar SP, ATP, Kumar SP, Sudharson K (2022) Secure android location tracking application with privacy enhanced technique. In: 2022 Fifth International conference on computational intelligence and communication technologies (CCICT), Sonepat, India, pp 223–229. https://doi.org/10.1109/CCiCT56684.2022.00050

A Dynamic Tourism Recommendation System G. NaliniPriya, Akilla Venkata Sesha Sai, Mohammed Zaid, and E. Sureshram

Abstract In recent decades, the proliferation of the Internet, new technologies, and other forms of communication, especially online travel agencies (OTAs), has led to an increase in the quantity and quality of information available to tourists (lodgings, restaurants, transportation, heritage sites, tourist events, activities, and so on). Web crawlers (or even destination-specific sources) may provide visitors with a plethora of options, but this "commotion" of useful information can make it difficult to narrow down the best course of action. A number of recommender systems are available to help vacationers find relevant information and organize their itineraries. This article provides an overview of the many travel recommendation strategies currently in use. On the basis of this review's findings, an engineering and theoretical framework for a recommender system for the tourism sector is presented. This framework was designed using a hybrid method of recommendation. The suggested framework goes further than merely recommending vacation places that cater to certain interests: it is analogous to hiring a tour guide who, utilizing the wide range of resources available to the tourism sector, creates a customized schedule for a certain length of time during a trip. The research team's ultimate goal is to construct a recommendation system that takes advantage of recent breakthroughs in areas such as big data, artificial intelligence, and operational research. Keywords Online travel agencies · Machine learning · Tourism recommendation system · Travel industry · Artificial intelligence

G. NaliniPriya · A. V. S. Sai · M. Zaid (B) · E. Sureshram Department of Information Technology, Saveetha Engineering College, Chennai, Tamil Nadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_44


1 Introduction

In the tourism sector, recommendation systems can be quite helpful when planning a trip or searching for a service among different attractions, activities, and points of interest. These systems are data-filtering frameworks that suggest to users the offers that are most relevant to them (i.e., products, services, etc.); examples include items similar to ones they have previously purchased and liked, or items that have been favoured in the past by consumers with comparable tastes. Users may, for instance, receive recommendations for products that are comparable to other goods they have actively purchased and valued. The principle is that, in order to predict the level of interest a client will have in a certain item, one should use the client's interests, acquired during navigation, as inputs. One can select from a variety of methods to calculate these levels of appreciation; typically, the approaches are divided into categories depending on the data source. One of these methods is based on the synthesis of comments made by former clients on a variety of topics relating to their experiences. In this context, we are talking about collaborative filtering, which entails recommending to a particular customer the products that have, in the past, received very positive reviews from other customers who share their preferences in general. Academics have also turned their attention from rating systems toward social data in order to construct social recommendation systems, a direct outcome of the expansion of social networks. These frameworks estimate, based on a set of parameters, the degree to which a potential customer and the members of his social circle are comparable to one another.
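To make the collaborative filtering idea concrete, the following Python sketch scores an unseen item for a user by weighting other users' ratings with cosine similarity; the ratings matrix is invented for illustration and is not data from any deployed system.

```python
import numpy as np

# Rows: users, columns: items (e.g., hotels/attractions); 0 = not rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict(ratings, user, item):
    """Predict a rating as a similarity-weighted mean of other users' ratings."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    return float(np.average(vals, weights=sims)) if sims else 0.0

# User 0 resembles user 1 (who rated item 2 low), so the prediction is low.
print(predict(ratings, user=0, item=2))
```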

2 Literature Survey

The idea, application, and development status of trip recommender systems were presented by Chen et al. [1], who collected and organized the relevant material published in recent years. Additionally, the system's major technologies are analyzed in detail, with the complexity and novelty of the system's application being highlighted. A survey of the numerous methods for making recommendations in the tourist industry was given by Khalid Al Fararni et al. [2]. In this paper, the authors discuss the engineering and conceptual starting point for a tourist recommender framework based on a hybrid recommendation technique. The proposed methodology goes beyond simply suggesting a list of vacation spots ranked according to the tastes of tourists.


Aarab et al. [3] proposed and investigated the various ways in which context information can be used to build recommender systems that are intelligent and adaptive. Their work gives an overview of the multifaceted concept of context, discusses many context-oriented approaches and systems, and explains how such approaches might be used in a variety of application domains. Logesh and Subramaniyaswamy [4] proposed a hybrid recommendation strategy in the e-Tourism field to fill the gap between the issues that customers confront in the real world and the difficulties that academics face in the digital world. The difficulties encountered when researching e-tourism apps are covered in their article, along with a prospective fix for enhancing the calibre of personalized recommendations. Yochum et al. [5] offered an outline of the present state of research in the field by presenting an exhaustive assessment and mapping of the associated open data in location-based recommendation systems for the tourist sector. They begin by sorting all the relevant journal articles published between 2001 and 2018 by year of publication, and then analyze and classify the articles further according to the various recommendation applications covered in them, such as problem formulations, data collections, proposed algorithms/frameworks, and experimental results. Fátima Leal et al. [6] proposed a tourism recommendation system whose components are user profiling based on multi-criteria ratings, k-Nearest Neighbours (k-NN) prediction of the user ratings, trust and reputation modelling, and incremental model updating, i.e., delivering near-real-time suggestions. Sowmya et al. [7] presented a project aimed at creating an Android-based travel and tourist application. Android is a mobile and tablet platform created by Google; in addition, the Firebase database is used for managing and storing information, and the application is constructed using the functionalities offered. Parameswaran et al. [8] noted that the development of tourism in a smart city environment may be a way to boost economic growth and, in turn, the quality of life; their study analyzes the potential for tourism development in a smart city environment and the use of technology to that end. Yan et al. [9] analyze instances of cloud computing technology used in the domestic and international tourism industries, along with its benefits and drawbacks, and then propose an architecture of cloud services for the tourism industry, using the SOA concept to model services and data. Santos et al. [10] suggested a method for creating a tourist recommendation system that takes into account individual user and destination details. The key goal of this research is to see whether a client's physical and mental capacity levels can prompt more exact recommendation results. This work also endeavours to give a new classification system for POIs that takes into account their ability to accommodate visitors with specific degrees of physical and psychological limitations.


Gong et al. [11] analyze the aspects that affect self-service tourists' level of happiness with the genuineness of holiday sites, as well as the information provided by smart tourism platforms, from the vantage point of visitor perception against the background of big-data modernization. Lack of proper information on smart tourism platforms has been shown to dampen the originality of visitors' self-service experiences, which in turn hurts the attractiveness of places as tourist draws. Finally, there is a tension between the apparent abundance of cultural resources and the perceived authenticity of such resources.

3 System Architecture

Based on the above literature survey, we designed our proposed system as an Android application that integrates multiple features into a single easy-to-use app. This proves extremely convenient for users, as it improves on the existing system. As a result of this software, exploring new areas and finding what you need is a breeze. It is available online and may be used remotely. A system-maintained database is a convenient place to store all the city's data. Figure 1 depicts the architecture of the proposed method. It can perform a wide range of functions because of its adaptable modular design, which also makes it clear how recommendations are made. At the outset, the tourist notifies the system of his destination city, his expected arrival time, and the initial starting position. The user may input the start point manually or have it supplied by the GPS module built into his smartphone. The system gathers POI data after getting these initial criteria. Metadata for cities' scenic locations has been gathered offline, because there are not many such places and the information about them does not change regularly. Activities, however, vary daily, with distinct activities taking place on distinct days; therefore, the system pulls activity data from events online. The system then determines each POI's popularity, alongside the other two variables that make up the POI database (name and geocoordinates). These preparations enable the system to give the user travel options.
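A POI database holding the three variables mentioned above (name, geocoordinates, popularity) could be modelled as in the following Python sketch; the sample places and popularity values are invented.

```python
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    lat: float
    lon: float
    popularity: float  # e.g., derived from review counts or check-ins

# Static scenic places are collected offline; daily activities are fetched
# online, then both are merged into one POI database.
scenic = [POI("City Fort", 26.91, 75.79, 0.9), POI("Old Bazaar", 26.92, 75.82, 0.6)]
events = [POI("Food Festival", 26.90, 75.80, 0.8)]
poi_db = scenic + events

# Rank candidates for the recommendation step by popularity.
for poi in sorted(poi_db, key=lambda p: p.popularity, reverse=True):
    print(poi.name, poi.popularity)
```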

4 Modules

4.1 Administrator Module

The administrator module is the main module, since it handles the crucial tasks related to site changes, business updates, employment notifications, and so on. It keeps track of data for the other four modules.


Fig. 1 System architecture

Update notifications, update industries, update hotels, view resumes, and update site information are some of the software elements in the administrator module. The administrator adds the complete municipal history, including political and social data and data on political leaders; business information, such as the city's top enterprises; information about jobs, including openings and company profiles; emergency contact information, such as phone numbers; typical places, with a description, address, location, and picture of each site; and information on news, newspapers, and local channels, such as which papers are available in the city.

4.2 User Module

Figure 3 shows the various categories that a user, after registering as a tourist, is permitted to select and view. The user interface keeps track of data on the city's lodgings, entertainment venues, and other tourist hotspots. The tourist module includes several tools, such as the ability to see theatres, hotels, city maps, ATM locations, hospitals, the history of the area, travel agencies, and bus routes.

4.3 Tourism and City Guide

Figure 4 shows the various icons of the application, which act as hotspots of the nearby location and show the various interesting locations that exist nearby. This portion of the application offers comprehensive facts about the neighbourhood, including information on the area's prominent landmarks, eateries, hotels, and retail centres, among other things. The user now has a simple way to get anywhere, and may also use it to compare any two locations before going there. This facilitates planning.

4.4 POI Categories

The application contains extensive information about the various categories of nearby facilities and other amenities that a user might need. This information is updated based on the geographic location obtained via GPS on the smartphone. When a user is in any location, they are shown the best possible landmarks, gyms, points of interest, famous hotspots, and so on, live and in real time.
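Assuming POIs are stored with their coordinates as above, the GPS-based proximity filtering could be sketched as follows; the radius and coordinates are illustrative.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(pois, user_lat, user_lon, radius_km=2.0):
    """Return POIs within radius_km of the user's current GPS position."""
    return [p for p in pois
            if haversine_km(user_lat, user_lon, p[1], p[2]) <= radius_km]

pois = [("Gym", 26.912, 75.790), ("Museum", 26.930, 75.820), ("ATM", 26.911, 75.792)]
print(nearby(pois, 26.9124, 75.7873))  # the gym and the ATM are within 2 km
```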

5 System Requirements

Hardware Requirements
System: Pentium Dual Core
Hard Disk: 320 GB
Monitor: 15'' LED
Input Device: Keyboard, Mouse
RAM: 2 GB

Software Requirements
Operating System: Windows 7
Coding Language: Java
Tools: Android Studio, NetBeans
Database: MySQL

6 Algorithm

The proposed framework gains an edge over the current framework by recognizing and analysing numerous facets of a user's character and showing places and points of interest that are in alignment with the user. The algorithm also improves over time based on the user's usage of the application, and the user can select certain places to be prioritized over others. Figure 2 explains the proposed Decision Tree Algorithm.


Fig. 2 Proposed algorithm for tourism recommendation system

In our study, we make use of a tourist data set, which contains over 900 records and 11 distinct features that may be utilized to classify the information using feature selection techniques and the Decision Tree Algorithm. The opinions of first-time tourists are solicited in order to determine the ideal destination for a vacation. The rating points for each of the 11 criteria are provided to the system as input and are based on user preferences; these rating points are used to assign test data to the appropriate class. After prediction, the final output, a place name, is returned. Users have the option to rate reviews in points. Typically, for filter-based feature selection with the C4.5 decision tree, each input variable is evaluated against the target variable one at a time.
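Under the stated assumptions (11 rating features, around 900 records, a destination label), a filter-based feature selection step followed by an entropy-based decision tree can be sketched as below. C4.5 itself is not available in scikit-learn, so the entropy criterion stands in for it here, and the data is randomly generated rather than the paper's tourist data set.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(900, 11)).astype(float)  # 900 tourists x 11 rating criteria
y = rng.integers(0, 4, size=900)                      # 4 hypothetical destinations

# Filter-based feature selection: score each criterion against the target.
selector = SelectKBest(mutual_info_classif, k=6).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=6).fit(X_tr, y_tr)
print("accuracy:", tree.score(X_te, y_te))

# Predict a destination class for one new visitor's 11 ratings.
new_ratings = rng.integers(1, 6, size=(1, 11)).astype(float)
print("recommended class:", tree.predict(selector.transform(new_ratings))[0])
```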

7 Use Case Diagram See Fig. 3.

8 Flow Diagram See Fig. 4.


Fig. 3 Use case diagram

9 Software and Technologies Description

9.1 Java

Java is a high-level, class-based, object-oriented language designed to keep implementation dependencies to a minimum. Simply said, there is no need to recompile Java code to make it work on other platforms, provided they support Java. It is a general-purpose language that lets coders "write once, run anywhere" (WORA). Java programs are usually compiled to bytecode that can run on any Java virtual machine (JVM), regardless of the underlying computer architecture. Java's syntax is similar to that of C and C++, although the language has fewer low-level facilities. The Java runtime offers dynamic capabilities (such as reflection and runtime code modification) that are not available in most statically typed languages.


Fig. 4 Flow diagram

9.2 Android Studio

Android Studio is the official integrated development environment (IDE) for Android application development. It is based on IntelliJ IDEA, a Java integrated development environment, and incorporates its code-editing and developer tools. To support application development for the Android operating system, Android Studio uses a Gradle-based build system, an emulator, code templates, and GitHub integration. Every project in Android Studio has one or more modules with source code and resource files; these include Android app modules, library modules, and Google App Engine modules.


10 Conclusion

Early in the twenty-first century, recommender systems were created to aid vacationers in the decision-making process and to combat the overload of information available at the time. Within the scope of this research, we provided an in-depth analysis of the currently active recommender frameworks in the travel industry. We then presented a conceptual framework for implementing recommender systems in the tourism sector. Our hybrid design aims to enhance the user's journey by emphasizing the elements of importance to the user and facilitating the individualization of the experience. Once the configurations of components deemed important to the traveller have been selected, our framework generates an appropriate trip by integrating them using operational research methods; this takes place once a viable path has been established. For this technical task, we will use state-of-the-art tools including big-data analysis software, AI algorithms, and IoT infrastructure.

11 Future Enhancement

We identify several interesting directions for future work. The coverage of the access range can be increased, and a rating system can be embedded according to user satisfaction. Apart from Android, the app can also be built for Windows and iOS users, and the application can be translated into different languages. Famous landmarks or POIs can be shown based on the geographic location of the user. A to-do list connecting multiple locations would make it easier to plan out any trip.

References

1. Chen X, Liu Q, Qiao X (2020) Approaching another tourism recommender. In: 2020 IEEE 20th International conference on software quality, reliability and security companion (QRS-C), Macau, China, pp 556–562. https://doi.org/10.1109/QRS-C51114.2020.00097
2. Nafis F, Fararni KA, Yahyaouy A, Aghoutane B (2020) Towards a semantic recommender system for cultural objects: case study Draa-Tafilalet region. In: 2020 International conference on intelligent systems and computer vision (ISCV), Fez, Morocco, pp 1–4. https://doi.org/10.1109/ISCV49265.2020.9204187
3. Aarab Z, Elghazi A, Saidi R, Rahmani MD (2018) Toward a smart tourism recommender system: applied to Tangier City. https://doi.org/10.1007/978-3-319-74500-8_59
4. Logesh R, Subramaniyaswamy V (2019) Exploring hybrid recommender systems for personalized travel applications. In: Mallick P, Balas V, Bhoi A, Zobaa A (eds) Cognitive informatics and soft computing. Advances in intelligent systems and computing, vol 768. Springer, Singapore. https://doi.org/10.1007/978-981-13-0617-4_52


5. Yochum P, Chang L, Gu T, Zhu M (2020) Linked open data in location-based recommendation system on tourism domain: a survey. IEEE Access 8:16409–16439. https://doi.org/10.1109/ACCESS.2020.2967120
6. Leal F, Malheiro B, Burguillo JC (2018) Trust and reputation modelling for tourism recommendations supported by crowdsourcing. Adv Intell Syst Comput:829–838. https://doi.org/10.1007/978-3-319-77703-0_81
7. Sowmya MR et al (2019) Smart tourist guide (Touristo). Emerg Res Comput Inf Commun Appl:299–312. https://doi.org/10.1007/978-981-13-6001-5_23
8. Parameswaran AN, Shivaprakasha KS, Bhandarkar R (2021) Smart tourism development in a smart city: Mangaluru. Intell Manuf Energy Sustain:325–332. https://doi.org/10.1007/978-981-33-4443-3_31
9. Yan X, Juan Z (2020) Research and implementation of intelligent tourism guide system based on cloud computing platform. Adv Intell Syst Comput:187–192. https://doi.org/10.1007/978-3-030-62746-1_27
10. Santos F, Almeida A, Martins C, Oliveira P, Gonçalves R (2017) Tourism recommendation system based in user functionality and points-of-interest accessibility levels. In: Mejia J, Muñoz M, Rocha Á, San Feliu T, Peña A (eds) Trends and applications in software engineering. CIMPS 2016. Advances in intelligent systems and computing, vol 537. Springer, Cham. https://doi.org/10.1007
11. Gong N, Fu H, Gong C (2021) A research on the perception of authenticity of self-service tourists based on the background of smart tourism. In: 2021 7th Annual international conference on network and information systems for computers (ICNISC), pp 826–830. https://doi.org/10.1109/ICNISC54316.2021.00153

ATM Theft Detection Using Artificial Intelligence S. P. Panimalar, M. Ashwin Kumar, and N. Rohit

Abstract Most often, users use ATMs to withdraw cash from automated electronic machines. With an ATM, a client does not need assistance from a bank employee to handle monetary transactions. In the current set-up, many people who take cash from an ATM are vulnerable to robberies and theft, since there are not enough protections in place. Banks are increasingly installing surveillance cameras at ATMs, but police forces' ability to keep tabs on the footage has lagged behind. Attempts may be made to steal the money from an ATM by breaking into the machine or by damaging it deliberately. As a preventative measure, we equip the ATM with a camera module permanently installed in the room to conduct continual video monitoring. Humans, and the actions they take inside the ATM while attempting to break in, are identifiable to the camera. Consequently, this work concerns a programme created to automate video monitoring and identify any fraudulent actions occurring at ATMs. Such algorithms may be used to identify people in a frame, locate and follow them as they move, and ultimately determine what steps should be taken to avoid illegal behaviour. As a result, we determine whether or not an assault is indeed taking place and notify the police. The ability to foresee an impending assault is a significant advantage. Keywords Artificial intelligence · Machine learning · CNN model · Convolution layer

S. P. Panimalar (B) · M. A. Kumar · N. Rohit Department of Information Technology, Saveetha Engineering College, Chennai, TamilNadu, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_45

1 Introduction

The convenience of an ATM allows customers to do a variety of banking transactions independently of a human teller, including making cash withdrawals, deposits, and transfers, as well as inquiring about account balances and transaction history.


An automated teller machine (ATM) is a computing device for processing financial transactions and obtaining related data, such as balance statements and transaction histories. The proliferation of ATMs is a direct result of the widespread need for quick and easy financial transactions. Unfortunately, the proliferation of automation and sophisticated devices has led to an increase in crimes against financial institutions. From 1998 to 2003, the number of these offences rose progressively; in 2004, it fell somewhat; and beginning in 2005, it rose sharply. This means that robberies have increased significantly during the last decade and a half. Although the convenience of ATMs comes with these drawbacks, it has become very important for the well-being of people in our society. Since automated teller machines (ATMs) may also be the target of robbery and looting, the problems plaguing India's banking system extend beyond traditional financial institutions. To put a brake on such incidents, the RBI issued a circular in June directed towards security measures for ATMs. Digital one-time combination (OTC) locks are used only to restock currency at ATMs, and ATMs are often placed in well-guarded, CCTV-equipped, and centrally monitored locations. The companies that provide ATMs make a lot of money and promote the usage of ATM cards heavily. Banks often benefit more from ATMs located in non-banking locations due to the higher volume of non-bank clients who are required to pay service fees. Customers using ATMs located off-premises are, nevertheless, more likely to become victims of robbery. Robberies of automated teller machines (ATMs) are common in areas where cocaine is sold: addicts who engage in street robbery need to acquire crack, and street robbery can be carried out without much preparation or expertise, making it a part of their lifestyle. Because of the widespread belief that anybody might be a victim of an ATM heist, these crimes receive a great deal of media and public attention. In response to the robbery of a senior lawmaker, or a close associate of a lawmaker, at an ATM, the legislature has taken swift action to ensure the protection of ATM users. Recent years have seen a rise in the incidence of ATM-related offences, including burglary, robbery, and other transgressions. Since these occurrences have been brought to light, we need a well-built mechanism to identify unusual activity at ATMs and stop it. Therefore, we have applied deep learning algorithms to the issue and come up with a solution that alerts the proper authorities before any theft or criminal activity can take place inside the ATM. This approach aids in responding to a heist early and taking appropriate action instead of reacting late.

2 Literature Survey

Artificial vision for motion analysis: the range of mobility of a patient's joints is often measured as part of a functional assessment. Mechanical goniometry is used to conduct this clinical measurement, which presents a number of challenges, most of which are of a largely human character. In this work, the authors present ROM Cam, a new method for gauging joint ROM that relies on 2D posture estimation.


This is accomplished with the use of artificial vision libraries and a camera similar to an RGB webcam. The findings support the feasibility of using ROM Cam as a resource in telerehabilitation settings due to its low cost and ease of access [2]. Face presentation attack detection using a light field camera: academics and security experts are becoming more interested in the vulnerability of facial recognition systems. As the sophistication of presentation attacks has increased, there is currently no better face presentation attack detection (PAD) (or countermeasure or anti-spoofing) tool against spoof attacks. In this study, the authors show how a light field camera (LFC) may be used to identify face presentation attacks from a novel angle. Multiple depth (or focus) pictures may be rendered from a single light field capture, because the camera captures both the intensity and the direction of each incoming ray. Because of this, the authors provide a unique method that can uncover presentation threats by inspecting the difference in depth (or focus) between several pictures generated by the LFC [3]. Protected authenticated communication in the smart home environment: lighting, heating, air conditioning, temperature sensors, and even security cameras may all be controlled remotely in a smart house, allowing people to enjoy a safer and more hassle-free way of life. However, security and privacy are major concerns, since the information gathered by these devices is often sent to the user over an unsecured network or a system supplied by the service provider; as a result, the service provider may have access to, and storage of, this data. Similar data collection and storage capabilities are available in emerging smart home hubs like Samsung SmartThings and Google Home. The on/off schedule of an HVAC system, for instance, may indicate whether or not a person is home. Just as the loss or alteration of vital medical data acquired by wearable body sensors may have catastrophic effects, so too can its leaking. Encrypting this data solves these problems, but it also diminishes its usefulness, since searches become more complicated. The scheme's efficacy and efficiency are shown via experiments, simulations, and comparisons to other smart home security systems. Using FMCW radar and a dynamic range-Doppler trajectory approach, human motion can be recognised continuously. Human motion identification using radar is essential for several uses, including security, SAR, IoT, and caregiving. Recognising human motion continuously in a real-world setting, i.e., classifying a succession of activities transitioning from one to another, is essential for widespread use. Recognizing continuous human movements in a wide variety of settings simulating the real world is the goal of the dynamic range-Doppler trajectory (DRDT) approach, which is based on the frequency-modulated continuous-wave (FMCW) radar system. Humans tend to go about their work in a continuous way, with only subtle breaks at the points where two tasks are immediately adjacent in time. Unlike other algorithms, the fuzzy segmentation and recognition (FuzzySR) algorithm explicitly considers the possibility of gradual changes in the course of otherwise continuous human activities.

The goal is to identify the nature of each occurrence in a video and, at the same time, separate it into discrete segments. The video is cut into a series of discrete, non-overlapping pieces of about the same duration by the algorithm. With these datasets, the system achieves event-level activity recognition precisions of 42.6% on the Hollywood-2 dataset, 60.4% on the CAD-60 dataset, 65.2% on the ACT42 dataset, and 78.9% on the UTK-CAP dataset. The continuous visual input may be partitioned into a series of fuzzy events, with each event encoded as a fuzzy set with fuzzy borders to reflect gradual transitions. The system assigns an activity label to each event by taking into account all of the block summaries within that event. Effectiveness is measured with experiments on six data sets.

3 Existing System

The existing system aims at measuring the joint range of motion of a patient. ROM Cam is a new approach to measuring joint range of motion that makes use of 2D pose estimation; the OpenPose library, which allows the estimation of human pose in 2D, has been used to determine the joints and their motion. ROM Cam is a new ROM measurement system that can be used by patients and therapists, in medical settings and even from home. Proposed IoT-based methods have many levels of protection for an ATM against physical and electronic attacks. The built system relies on Arduino for input security and makes use of a wide variety of sensors, including vibration, temperature, and sound sensors, together with a GSM modem. If a sensor reads a value above or below its threshold, the process is activated, and when anything goes wrong within an automated teller machine, an alarm goes off to notify higher authorities. However, this approach is inaccurate, since it does not generalize well to other sensor values and may produce false positives.

4 Proposed System

In this model, we suggest employing deep learning methods to safeguard ATMs. The accuracy of the proposed system is achieved by incorporating deep learning algorithms such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), as well as the lightweight MobileNet model, which is used for image detection and filtering and provides accurate results without requiring extensive training time. An intelligent alarm notification design for in-ATM theft is the focus of this study; existing systems only take action after an ATM has been robbed. In addition, the system detects the user's activity and alerts the cash machine operator if the user is attempting to physically damage the machine; the warning signal is triggered in the event of an assault. The ATM is equipped with a camera module permanently installed in the room to conduct round-the-clock video monitoring of the area.

The cameras try to detect a person trying to enter the ATM or withdraw cash from the ATM. After successfully training the model and testing the output, when a person attempts to break into an ATM, the camera records their suspicious behaviour, sends an alarm to the authorities, and goes live. A smartphone application, which the police will hold, is developed for the testing model using React Native. The gates are automatically closed once an abnormality is determined. In this way, ATM theft can be prevented before it takes place.
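A hedged sketch of the kind of MobileNet-based classifier the proposed system describes is given below, assuming a binary normal/abnormal decision per frame; the layer sizes and class count are illustrative choices, not the authors' exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained MobileNetV2 backbone used as a frozen feature extractor;
# a small head decides normal vs. abnormal for each frame.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="elu"),    # ELU activation, as mentioned later
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # P(abnormal activity)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```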

5 System Architecture

Figure 1 shows the complete working system. We determine human activity recognition, by which we can effectively identify the user's activity in an ATM and thereby detect an abnormal action. The first step consists of amassing the data set from multiple sources. Next, we separate these data into training and testing sets; the latter is used to validate the model's predictions, while the former is utilised for model training. After that, data set augmentation, which multiplies the original dataset, is performed. A variety of preprocessing procedures is applied to bring the data sets into a single, consistent form. Once our data set has been preprocessed, it is ready to be used for training the architecture; we train the model using a sequential framework. A sketch of the augmentation and preprocessing step follows Fig. 1.

Fig. 1 System architecture
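The augmentation and preprocessing step could be sketched as below with Keras utilities; the directory name and augmentation parameters are assumptions for illustration.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation multiplies the original data set; rescaling brings every
# frame into a common numeric range before training.
gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    validation_split=0.2,  # hold back 20% to validate the model's predictions
)

train_data = gen.flow_from_directory(
    "atm_frames/",  # hypothetical folder with one subfolder per class
    target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="training")
val_data = gen.flow_from_directory(
    "atm_frames/", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="validation")
```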

Table 1 Software requirements

Operating system: Windows 10
Languages used: Python, JavaScript
Software package: Python, Visual Studio Code

Then we perform optimization; the ELU activation function is used throughout the model, which helps suppress noise. A loss-reduction technique such as cross-entropy is used, and the model is tested with real-time inputs for verification. A React-Native-based mobile application is also developed to view the live stream of the ATM when an abnormal condition occurs, together with a notification alert message.

6 Software Requirements

The software requirements document defines the system to be built; both a definition of the system and a list of prerequisites are necessary parts of it. It is more a list of what the system should accomplish than a description of how it should do so. A software requirements specification cannot be written without the solid foundation laid by the software requirements. Throughout the development process, it may be used to estimate costs, schedule activities, carry out tasks, and monitor team progress. The purpose of the software requirements specification is to clearly and concisely outline all of the technical specifications for the software product (Table 1).

7 Hardware Requirements

Considering that a contract for the system's implementation may be based on the hardware requirements, it is important that they be a comprehensive and consistent specification of the system as a whole. They serve as a baseline for software developers to work from while creating the system. Only the features of the system are listed, without any recommendations on how they should be put into practice. Refer to Table 2 for a breakdown of the required hardware.

8 Modules

i. Detection of human activity
ii. Abnormal activity prediction
iii. Alert notification and streaming

Table 2 Hardware requirements

Processor: Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (1.80 GHz)
RAM: 8.00 GB
GPU: NVIDIA GeForce 960
Monitor: 14'' Colour
Hard disk: 1 TB
Processor speed: Minimum 500 MHz

Detection of Human Activity

Many difficult, real-world problems involving human actions can now be solved with deep learning; compared with other approaches, it excels at computer vision tasks. A well-trained deep network can extract the "key points" that characterise each individual in a picture. The success of these deep learning systems depends on a steady supply of data: our model improves with the availability of additional labelled data. Google has experimented extensively with a collection of 300 million photos to test the hypothesis that more data leads to greater performance. If you want your deep learning model to keep getting better after you put it to use in a production setting, you need to keep feeding it new data; the value of data has skyrocketed in the age of deep learning. For each piece of information you need, you must complete three stages.

Flow Diagram of Detection of Human Activity

Developing a Model File
If you have a simple stack of layers, where each layer takes in one tensor and outputs one, then you should use a Sequential model. Once a Sequential model has been constructed, it may be used just like any other model with an API: there is a connection between each layer's input and its output. Using these features, you can rapidly do useful things such as retrieve the outputs of all intermediate layers in a Sequential model. If you want to use transfer learning, you freeze the model's lower layers and train only its upper ones. The functional API, on the other hand, enables models with a great deal of adaptability, since layers can link to more than just the preceding and subsequent levels; in fact, a layer can be linked to any other layer. This makes it feasible to construct more intricate networks such as Siamese and residual networks. Figure 2c depicts the sequential algorithm process; a Keras sketch of such a Sequential stack is given below.
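A minimal Keras version of such a Sequential stack, with placeholder layer sizes and an assumed five activity classes, might look like this:

```python
from tensorflow.keras import layers, models

# A plain stack of layers: each layer takes one tensor in and puts one
# tensor out, which is exactly the case the Sequential model targets.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),  # assumed: 5 activity classes
])
model.summary()
```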


Fig. 2 a Workflow of detection of human activity

Abnormal Activity Prediction

The Internet of Things (IoT) is a network that connects autonomous computing devices, mechanical and digital machinery, objects, animals, and even humans, all of which are assigned a unique identifier and may exchange data with one another; here, IoT is utilised to broadcast a live event. Multimedia content is "streamed" when it is continuously received and shown to an end-user from a source. The term refers to the distribution mechanism rather than the medium itself, in contrast to file downloading, where the end-user downloads the complete file before viewing or listening to it. The content of this module may be broken down into two sections:
i. In the training phase, facial detection is used to recognise people.
ii. During the testing phase, 2D feature vectors are extracted from the picture, a process called embedding, which is used to quantitatively characterise each face.
For training, we must provide input and output to the model; the generator produces the input and output sequences. We utilised a Google Colaboratory notebook for training. To train the decoder model, we employed the Adam optimiser with categorical cross-entropy as the loss function and a batch size of 32. After each epoch, we evaluated the model using training and validation loss as the metrics, and we saved the model to a file whenever the validation loss improved after an epoch; a sketch of this checkpointing step is given below. We use binary cross-entropy for the two-class case, and the Adam optimiser is used to update the network's weights. A video stream is used as input for the project's testing, and predictions are made using the information contained in the model file: the system outputs whichever action best fits the characteristics of the model file, allowing us to categorise erratic behaviour and make predictions about it.
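A sketch of the training and checkpointing step, reusing the `model`, `train_data`, and `val_data` objects assumed in the earlier sketches, could look like this:

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Save the model file only when validation loss improves after an epoch,
# mirroring the checkpointing described above.
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss",
                             save_best_only=True, verbose=1)

model.compile(optimizer="adam",             # Adam updates the network weights
              loss="binary_crossentropy",   # categorical_crossentropy for >2 classes
              metrics=["accuracy"])

# The generators already yield batches of 32, matching the described set-up.
history = model.fit(train_data, validation_data=val_data,
                    epochs=20, callbacks=[checkpoint])
```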

A live feed from the camera may be broadcast wirelessly to the mobile app, which is brought in when abnormal behaviour is detected. Figure 2d shows the 2D feature extraction.

Alert Notification and Streaming
React Native is used in this project to create the mobile application. The React Native framework builds a hierarchy of UI components from which it constructs the necessary JavaScript code, and it includes tools for creating a mobile app for both iOS and Android with a native feel and performance. The mobile industry has seen explosive expansion in recent years: by 2020, it was predicted that the combined revenues from app stores, advertising, and in-app transactions made via mobile apps would total an astounding $188 billion. Standard-setting applications for both individual and enterprise usage need seamless performance, support for a variety of displays, intuitive navigation, and attractive design. Cross-platform applications provide speedier development but sacrifice performance and support, whereas high-performing, high-quality native apps are very time-consuming to produce. React Native may be used to rapidly develop high-quality applications that meet or exceed the performance and user-experience expectations of native apps in a short amount of time.

9 Conclusion

We examined the several potential dangers lurking inside and around the machine. If the system determines that a person's activities are out of the ordinary and arouse suspicion, it issues an alert or warning message. To this end, we created and trained a model using deep learning algorithms to forewarn and inform the police before any theft or crime takes place. Using the prevailing deep learning method, this study has effectively constructed a human action recognition and human gesture identification system that can automatically identify human activities. The suggested method therefore benefits ATMs that are at high risk of damage. The training accuracy of this model reaches 90%.

10 Future Enhancement

We can improve the accuracy of all forms of identification by examining the use of ATM technology in the future. Many opportunities exist in this area to improve our theft detection methods or to repurpose this project. Using a variety of structured methodologies and algorithms, we can improve the reliability of our activity predictions. This idea therefore has a promising future in which manual monitoring may be easily and inexpensively adapted to automated operation. This might be the beginning of a major shift, ushering in a new system that makes it possible to conduct monetary transactions such as deposits and withdrawals with greater safety and security and without the risk of exploits.

References

1. Soma S, Kiran P (2022) To detect abnormal event ATM system using image processing and IoT. Int J Eng Res Technol (IJERT) 11(09)
2. S R, L S, M V S, M S, P KA (2022) Face biometric authentication system for ATM using deep learning. In: 2022 3rd International conference on electronics and sustainable communication systems (ICESC), Coimbatore, India, pp 647–655. https://doi.org/10.1109/ICESC54411.2022.9885334
3. Bajaj S, Dwada S, Jadhav P, Shirude R (2022) Card less ATM using deep learning and facial recognition features. J Inform Tech Softw Eng 12:302
4. Philip JM, Parvish Musaraf E, Shyamala S, Kumar S (2022) ATM fraud identification using machine learning. IJIRE-V3I03-74-77 3(3)
5. Bhattacharjee S, Sharma K, Pragati K, Shaw A, Giri S, Chatterjee B (2022) Covered face detection for enhanced surveillance using deep learning. IJIRT 8(11). ISSN: 2349-6002
6. Viji S, Kannan R, Yogambal Jayalashmi N (2021) Intelligent anomaly detection model for ATM booth surveillance using machine learning algorithm: intelligent ATM surveillance model. IEEE
7. Baranitharan M, Nagarajan R, Chandrapraba G (2021) Automatic human detection in surveillance camera to avoid theft activities in ATM centre using artificial intelligence. Int J Eng Res Technol (IJERT) NCICCT-2021 6(03)
8. Du S, Zhang Q, Yu Y (2021) Tianjin Daxue Xuebao (Ziran Kexue yu Gongcheng Jishu Ban). J Tianjin Univ Sci Technol 54(11). ISSN (Online): 0493-2137
9. Krishna P, Ahamed S, Roshan K (2021) An AI based ATM intelligent security system using OpenCV and YOLO. Int J Trend Sci Res Dev (IJTSRD) 5(4):336–338. ISSN: 2456-6470
10. Bajaj S, Dwada S, Jadhav P, Shirude R (2022) Card less ATM using deep learning and facial recognition features. J Inform Tech Softw Eng 12:302
11. Moreno-Muñoz P, Ramírez D, Artés-Rodríguez A, Ghahraman Z. Human activity recognition by combining a small number of classifiers. IEEE J Biomed Health; Ding C, Hong H, Zou Y, Chu H, Zhu X, Fioranelli F, Le Kernec J, Li C (2019) Continuous human motion recognition with a dynamic range-Doppler trajectory method based on FMCW radar. IEEE Trans Geosci Remote Sens 57
12. Tao D, Jin L, Yuan Y, Xue Y (2016) Ensemble manifold rank preserving for acceleration-based human activity recognition. IEEE Trans Neural Netw Learn Syst 27
13. Rosique F, Losilla F, Navarro PJ (2011) Using artificial vision for measuring the range of motion. IEEE Lat Am Trans 19
14. Lu J, Tong KY (2019) Robust single accelerometer-based activity recognition using modified recurrence plot. IEEE Sens J 19
15. (2016) Informatics, vol no 20

Proposed Model for Design of Decision Support System for Crop Yield Prediction in Rajasthan Kamini Pareek, Vaibhav Bhatnagar, and Pradeep Tiwari

Abstract Agriculture is a key element of the Indian economic system. Many farmers are unaware of the outside world and of agricultural technology advancements. Rajasthan is the largest state of India and also has the highest percentage of its land covered by desert. The state is divided into ten agro-climatic zones. It has a variety of soil types and weather patterns, ranging from warm and humid in the south-east to dry and chilly in its western regions. Wheat is an incredibly versatile crop that can be produced in a variety of soil types and climates. For the estimation and forecasting of wheat crop yield in Rajasthan, satellite images, soil parameters, and weather information are important sources of information, and all of these elements are interrelated. As a result, a prototype model is suggested that combines remotely sensed data with soil and meteorological information. This crop model will help forecast crop yields at a specific point in Rajasthan during the growing season. Remote sensing serves as a significant data source for crop development monitoring and yield forecasting, and weather forecasting is necessary to enhance the productivity of agricultural activities. The growth and development of the crop are impacted by several climatic conditions. Soil is an important resource in agriculture; for accurate crop yield forecasting, soil quality is crucial, since it provides additional information. All types of soils are suitable for growing wheat except those with high alkaline content or waterlogged conditions. Keywords Crop yield prediction · Soil sensors · Web API

K. Pareek · V. Bhatnagar (B) Manipal University Jaipur, Jaipur, India e-mail: [email protected] P. Tiwari Birla Global University, Gothapatna, Odisha, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_46


1 Introduction
Agriculture is a key part of the Indian economy as well as of human survival. It also generates a significant amount of employment [1]. The world's population is rapidly increasing, necessitating increased crop yields. With the rise in population, frequent fluctuations in weather conditions, and a lack of resources, satisfying the food demand of the current population has become a complex task. Precision agriculture has emerged as a leading method for managing current agricultural and environmental challenges. Precision agriculture is sometimes referred to as smart farming or digital agriculture. It is a data-driven, technology-enabled approach to sustainable farming [2]. As indicated in Fig. 1 [3], it is essentially the adoption of information technology, application programmes, and smart objects for agricultural decision support. Precision agriculture allows a farmer to know exactly what inputs a healthy crop requires, where they are required, and in what quantities at any given time. This necessitates gathering a large amount of data from many sources and different parts of the field. To monitor and estimate agricultural yields, all acquired data must be analysed [4]. Monitoring and estimating yield growth are important for a country's economic growth and play a key role in food management [5]. Crop production is influenced by several factors, some of which are under human control and others of which are driven by environmental change as well as sociopolitical influences [6]. It is critical to offer farmers reliable, timely information based on meteorological factors and soil quality, allowing them to make the best possible decisions for their crop, resulting in increased profit and growth [7]. Prediction of crop yield is a complicated process owing to the many parameters that affect it. The goal of this study is to:
● Develop a prototype model that will assimilate remotely sensed data with weather and soil parameters to predict wheat yield in Rajasthan.

Fig. 1 Technologies in precision agriculture


The paper is organized as follows: Introduction is given in Sect. 1. The literature review is provided in Sect. 2. The design of the proposed prototype model, factors affecting the model, and data sources are explained in Sect. 3. In Sect. 4, the paper is concluded.

2 Research Methodology
In 2015, Lobell et al. [8] presented a comprehensive method for tracking crop yields using remote sensing techniques, referred to as a Crop Yield Mapper. This method uses field models to capture how crops react to climate and other data, and Google Earth Engine to quickly process large volumes of historical imagery. The R2 for this strategy ranges from 0.14 to 0.58.
In 2016, Zhang et al. [9] developed a useful technique that uses satellite data to track vegetation greenness, which serves as an indicator of agricultural yield. Satellite data have been used widely for yield forecasting because of their daily temporal resolution and wide coverage. The model achieved an accuracy higher than 90%.
In 2017, Jin et al. [10] proposed an approach to calculate the yield of wheat. The model combines the AquaCrop model with imaging data, which serves as its input, and uses the PSO algorithm. In this model, the expected yield has an R2 value of 0.42.
In 2017, Saeed et al. [11] built a model to estimate yield before harvest. The model uses a vegetation index and weather data to forecast wheat yield. It uses a Random Forest, and the predicted R2 value is 0.95.
In 2018, Guo et al. [12] built a wheat growth estimation model by fusing the WheatGrow model with the PROSAIL model. Simulated zone divisions are the foundation of this approach. The simulated zone partitioning technique was developed by combining soil nutrient indices with the spatial properties of wheat development, as revealed by remote sensing data. The RMSE value of the proposed method is 0.92.
In 2019, Yang et al. [13] proposed a CNN-based architecture for estimating rice grain yield from UAV images. The proposed CNN structure has two distinct branches that process RGB and multi-spectral pictures. The suggested network is simple to train and can manage small training sample sizes. This model improved yield estimation, with R2 ranging from 0.464 to 0.511.
In 2019, Kaneko et al. [14] used publicly accessible remote sensing data to estimate maize yields in different African countries. The authors utilized a dimensionality reduction technique to turn raw images into histograms of pixel counts. These histograms were then analysed using an LSTM-based deep learning model. The average R2 value is 0.56.
In 2019, Nevavuori et al. [15] proposed a CNN-based framework for multi-spectral data collected by UAVs. A CNN is trained to adjust the network parameters using input data from RGB and NDVI images.


RGB images performed better for the CNN architecture than NDVI images. MAPEs of 8.8% and 12.6% were obtained when data from the early and late stages of the growing seasons were combined.
In 2019, Sun et al. [16] built an LSTM model to estimate wheat production, combining climatic data and two remotely sensed indicators at the major growth phases. The outcomes demonstrated that the LSTM model with two time phases can achieve the best estimation accuracy of 83% when merging remote sensing and meteorological data.
In 2020, Zhang et al. [17] introduced a crop model that assimilates data from several sources in multiple steps. The model includes four assimilation phases: calibration of crop model factors, assimilation of crop phenology, assimilation of soil moisture, and assimilation of crop LAI. The yield estimation of the proposed model improved, with an R2 increase of 43%.
In 2020, Zhu et al. [18] effectively chose the parameters of an extreme learning machine (ELM) model using the particle swarm optimization approach. The hybrid PSO-ELM model calculates daily ETo from climatic input.
In 2020, Chu et al. [19] built a deep learning model that uses a combination of region data and time-series meteorological data to precisely estimate rice yield. The prediction errors for winter and summer rice are 0.0192 and 0.0057, respectively.
In 2020, Haque et al. [20] proposed an artificial neural network model. The objective of the suggested concept is to determine the yield per hectare of land under various parameter concentrations, depending on various meteorological conditions. The MSE between the predicted and actual yields was 0.0045.

3 Design of Proposed Model
Wheat has a wide range of adaptability and is primarily grown in the Rabi season. It is cultivated on many different types of soils and under many weather conditions. The Northern Hill Zone (NHZ), North Western Plains Zone (NWPZ), North Eastern Plains Zone (NEPZ), Central Zone (CZ), Peninsular Zone (PZ), and Southern Hill Zone (SHZ) are the six major wheat-growing zones that make up the country. These zones have been categorized on the basis of wheat's growing season, soil types, and climatic conditions. Rajasthan comes under the North Western Plains Zone (NWPZ) [21]. Rajasthan was chosen as the study area and wheat was selected for prediction; wheat is sown in October–November and harvested in March–April. The State of Rajasthan, which covers 342,239 km2 and accounts for 10.41% of the country's total area, is located between 23°04′ and 30°11′ N and 69°29′ and 78°17′ E. It is India's largest state and the one with the greatest percentage of its land covered by desert [22]. Ten agro-climatic zones divide the state, as shown in Fig. 2 [23].


Fig. 2 Agro-climatic division of Rajasthan

3.1 Factors Affecting Crop Yield
Forecasting crop yield in advance is crucial for planning and decision-making. Traditional data-collection methods for crop monitoring and yield forecasting are expensive and time-consuming. For wheat yield prediction in Rajasthan, the proposed model combines weather data, satellite remote sensing data, and soil property data. The soil property data are constant and unlikely to change much in the near future; the satellite and weather data are time series [24]. Figure 3 illustrates how the proposed prototype model takes three types of data as input: sensor data, Web API data, and satellite data [25].

Fig. 3 Design of prototype model (inputs: sensors, Web API, and GIS data; techniques: RNN, CNN, and GA; output: crop yield prediction)
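The prototype in Fig. 3 is specified only at block-diagram level. As a loose illustration of how such a multi-input network could be wired, the following sketch is offered; it is not the authors' implementation, and Keras, the input shapes, and the layer sizes are all assumptions. It fuses an image branch, a weather-sequence branch, and static soil features:

import tensorflow as tf
from tensorflow.keras import layers, Model

# CNN branch for a satellite image patch (e.g., 64 x 64 pixels with 4 spectral bands)
img_in = layers.Input(shape=(64, 64, 4), name="satellite_patch")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# RNN branch for daily weather sequences (e.g., 120 days x 5 variables)
wx_in = layers.Input(shape=(120, 5), name="weather_series")
y = layers.LSTM(32)(wx_in)

# Static soil properties from sensors (moisture, temperature, pH, N, P, K)
soil_in = layers.Input(shape=(6,), name="soil_features")

# Fuse all three sources and regress a single yield value
merged = layers.concatenate([x, y, soil_in])
z = layers.Dense(64, activation="relu")(merged)
yield_out = layers.Dense(1, name="predicted_yield")(z)

model = Model(inputs=[img_in, wx_in, soil_in], outputs=yield_out)
model.compile(optimizer="adam", loss="mse")

The GA indicated in the figure would most naturally sit outside such a network, searching over hyperparameters like layer sizes or learning rates rather than appearing inside it.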


1. Satellite Remote Sensing Data: Remote sensing images are being utilized for yield prediction and crop monitoring thanks to the emergence of satellites [26]. In a broad spatial context, remote sensing serves as a significant data source for crop development monitoring and yield forecasting [27]. Remote sensing time series can be used to examine variation in crop growth and agricultural productivity [28]. The suggested model incorporates spectral, spatial, and temporal data, which are collected and fed into the deep learning model and are ideal for crop growth monitoring [29]. Multi-temporal and multi-spectral raw satellite pictures are useful for observing crops and their growth phases and for anticipating agricultural yields prior to harvest [30]; a common first step is to distil them into vegetation indices, as sketched after this subsection.
2. Meteorological Data: In the context of smart farming, weather data are gathered for the previous several years using a weather API and contrasted with the current weather situation. Future weather forecasting is necessary to enhance the productivity of agricultural activities [31]. The growth and development of the crop are affected by several climatic conditions [32]. The climate of any area is made up of a variety of factors; the main components are wind speed, air temperature, atmospheric pressure, humidity, precipitation, and solar radiation.
● Rainfall: the rainfall required for wheat cultivation in Rajasthan ranges from 50 to 100 cm.
● Temperature: the minimum, optimum, and maximum temperatures for the wheat crop are 3.0–4.5 °C, 20–25 °C, and 30–32 °C, respectively. The ideal range for wheat crop growth is a mean daily temperature of 15–20 °C.
● Humidity: 50–60% humidity is optimum for the wheat crop.
3. Soil Data: For accurate crop yield forecasting, soil quality is crucial since it provides additional information. Sandy, saline, alkaline, and chalky soils are the most common types in Rajasthan; there are also nitrogenous soils, clay, loam, and black lava soil. All types of soils are suitable for growing wheat except those with high alkalinity or waterlogged conditions. Wheat cultivation benefits from fertile alluvial soil as well as mixed soil. Crop growth is also influenced by soil characteristics such as soil moisture and nutrients.
● Soil moisture is frequently expressed as the amount of water found in a one-metre depth of soil. For a wheat crop, a moisture content of between 20 and 60% is ideal.
● Seed germination is influenced by the temperature of the soil. Wheat grows best at an average soil temperature between 14 and 18 °C.
● A pH range of 6–7 is typically the most favourable for plant growth because the majority of plant nutrients are readily available in this range.
● Nitrogen is required for large leaves, quick growth, and development. Phosphate provides the energy needed for growth and development. Potassium helps plants maintain structural integrity and regulate water flow. In Rajasthan, 150 kg of nitrogen, 60 kg of phosphorus, and 40 kg of potash per hectare are needed for maximum productivity.
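As a concrete example of distilling the satellite bands described above into a crop indicator, the following minimal sketch computes the widely used NDVI from red and near-infrared reflectance (NumPy is assumed; the pixel values are invented):

import numpy as np

def ndvi(red, nir):
    # Normalized Difference Vegetation Index from red and near-infrared reflectance
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Toy reflectance values for a 2 x 2 pixel window
print(ndvi([[0.08, 0.10], [0.12, 0.09]], [[0.45, 0.40], [0.38, 0.50]]))

Values near 1 indicate dense green vegetation; values near 0 indicate bare soil.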


3.2 Data Collection
1. Using GIS
Remote sensing is considered a useful technology for yield prediction because it offers quick and affordable surveillance of the earth's surface over a vast region. The model focuses on wheat yield prediction in Rajasthan. Remote sensing data can be obtained from the MODIS, Landsat, or Sentinel satellites [23]. These satellites capture multi-temporal images with a temporal resolution high enough to monitor crop yields [28].
● Earth and climatic observations are collected using the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) [33]. The pictures are taken by two MODIS sensors in Earth orbit, aboard the Terra and Aqua satellites. Every one to two days, MODIS observes the whole surface of the Earth within a 2,330 km viewing swath. The images comprise 36 spectral bands with wavelengths ranging from 0.405 to 14.385 µm. Data are collected at three distinct spatial resolutions: 250, 500, and 1,000 m [34].
● The Landsat programme has the longest record of collecting satellite images of the Earth [3]. It offers 16-day temporal resolution and spatial resolutions of 30 m, 100 m, and 15 m for seasonal coverage of the whole planet's landmass [35]. Landsat 8 measures different frequency bands along the electromagnetic spectrum and contains 11 bands.
● The increased spatial, spectral, and temporal resolution of Sentinel-2 (S2) images has opened up a wide range of possibilities for agricultural applications, especially agricultural planning and monitoring. S2 consists of two identical satellites, S2A and S2B, each carrying the Multi-Spectral Instrument (MSI). The S2 mission provides an unprecedented 290-km-wide swath, spatial resolutions of 10 to 60 m, 13 spectral bands in the visible, infrared, and shortwave infrared, and a 5-day revisit frequency [30].
2. Using Web API
This provides information about the atmosphere. Air temperature, atmospheric pressure, humidity, precipitation, solar radiation, and wind are just a few of the variables that make up the weather. Each of these variables can be measured to identify regular weather patterns and evaluate the quality of regional atmospheric conditions [36]. AccuWeather, a media company that offers weather forecasting services on a global scale, is used to gather the daily observed weather data needed for the proposed model. Table 1 lists the weather variables that the model uses; a small sketch of such an API call follows the table.
3. Using Sensor
Soil is an important resource in agriculture. The physical and chemical properties of the soil have a strong impact on the production process. Information about the state of the soil environment is included in this part [25].

Table 1 List of weather variables

Variable name | Description
Air temperature | It is a measure of how hot or cold the air is
Precipitation | It is the water released from the clouds in any form
Relative humidity | It represents the amount of water vapour in the air
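For illustration, a sketch of pulling one daily observation through such a weather Web API is given below. The endpoint and JSON field names follow AccuWeather's publicly documented current-conditions interface, but the key and location code are placeholders and every detail should be checked against the provider's documentation:

import requests

API_KEY = "YOUR_KEY"        # hypothetical credential
LOCATION_KEY = "204570"     # hypothetical location code for a district

# Current-conditions endpoint; "details" adds humidity and precipitation fields
url = f"https://dataservice.accuweather.com/currentconditions/v1/{LOCATION_KEY}"
resp = requests.get(url, params={"apikey": API_KEY, "details": "true"}, timeout=10)
resp.raise_for_status()
obs = resp.json()[0]

record = {
    "air_temperature_c": obs["Temperature"]["Metric"]["Value"],
    "relative_humidity": obs.get("RelativeHumidity"),
    "precipitation_mm": obs.get("Precip1hr", {}).get("Metric", {}).get("Value"),
}
print(record)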

The suggested prototype system uses a variety of hardware and software elements to monitor soil properties such as moisture, nutrients, pH, and temperature [37].
● Soil Moisture Sensor: The water status of the soil, determined by the quantity and potential of the water in the soil, has a direct impact on crop behaviour. A readily available capacitive soil moisture sensor will be used to measure soil moisture in real time, as shown in Fig. 4 [38].

Fig. 4 Soil moisture sensor

● Soil Temperature Sensor: A soil temperature sensor can measure the temperature of soil, air, and water. The DS18B20 waterproof temperature sensor, with a measurement range of −55 to 125 °C, can be used to measure the soil's temperature, as shown in Fig. 5 [38].

Fig. 5 Soil temperature sensor

● Soil pH Sensor: The term "soil pH" describes the soil's acidity or alkalinity, measured as the concentration of free hydrogen ions in the soil. The pH scale runs from 0 to 14, with 7 being neutral. pH is one of the most crucial aspects of soil because it influences nutrient availability to living plants and microbial function [39].
● NPK Sensor: A soil NPK sensor determines the soil's NPK levels. As shown in Fig. 6, it can be used to determine the pH and the concentrations of nitrogen, phosphorus, and potassium in the soil, which helps in establishing the soil's fertility. All sensors and other components are listed in Table 4 [38]; a reading sketch follows the table.

Fig. 6 NPK sensor

Table 4 Sensors and other components used to sense soil properties

Components used | Description
Arduino board | A prototyping platform based on easy-to-use hardware and software
ESP32 board | Enables dual-mode connectivity for embedded devices
NRF24L01 module | Used for data transmission
DS18B20 waterproof temperature sensor | Measures the heat of the soil
NPK sensor | Used to determine the pH and amounts of potassium, phosphorus, and nitrogen in the soil
Capacitive soil moisture sensor | Used to estimate the volumetric water content of soil
Modbus module | A data communication protocol used to connect software and electronic devices
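As an illustration of reading one of these components, the following MicroPython sketch polls the capacitive soil moisture sensor from an ESP32 board. The GPIO pin and the dry/wet calibration constants are assumptions and must be calibrated per probe:

from machine import ADC, Pin
import time

adc = ADC(Pin(34))            # capacitive soil moisture sensor on GPIO34 (assumed wiring)
adc.atten(ADC.ATTN_11DB)      # allow the full 0-3.3 V input range

DRY, WET = 3200, 1200         # raw readings in dry air / in water; calibrate per probe

def moisture_percent():
    # Map the raw ADC reading linearly onto a 0-100 % moisture scale
    raw = adc.read()
    pct = (DRY - raw) * 100 / (DRY - WET)
    return max(0, min(100, pct))

while True:
    print("soil moisture: %.1f %%" % moisture_percent())
    time.sleep(60)            # one reading per minute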


4 Conclusion
Rajasthan is the largest state of India and the one with the greatest percentage of its land covered by desert. Wheat is cultivated during the Rabi season and is extremely adaptable; it is grown on a variety of soil types and under varied weather conditions. The climate of Rajasthan ranges from extremely dry to humid. All types of soils are suitable for growing wheat except those with high alkalinity or waterlogged conditions. In this work, a prototype model is proposed for wheat yield prediction in Rajasthan that extracts information from multiple sources. The model uses a combination of weather data, satellite data, and soil property data to estimate wheat yield and addresses the majority of the factors controlling wheat yield. Together, these elements efficiently monitor crop growth phases and forecast wheat yields before harvest. The ultimate goal is to predict wheat yield before harvest. The results of the study show that variations in wheat productivity are better explained by a combination of factors affecting wheat yield than by any single factor alone.

References 1. Kumar YJ et al (2020) Supervised machine learning approach for crop yield prediction in agriculture sector. In: 2020 5th international conference on communication and electronics systems (ICCES). IEEE 2. Sharma A et al (2020) Machine learning applications for precision agriculture: a comprehensive review. IEEE Access 9:4843–4873 3. O’Grady MJ, Langton D, O’Hare GMP (2019) Edge computing: a tractable model for smart agriculture? Artif Intell Agricult 3:42–51 4. Shafi U et al (2019) Precision agriculture techniques and practices: From considerations to applications. Sensors 19(17):3796 5. Terliksiz AS, Turgay Altýlar D (2019) Use of deep neural networks for crop yield prediction: a case study of soybean yield in Lauderdale county, Alabama, USA. In: 2019 8th international conference on agro-geoinformatics (Agro-Geoinformatics). IEEE 6. Elavarasan D et al (2020) A hybrid CFS filter and RF-RFE wrapper-based feature extraction for enhanced agricultural crop yield prediction modeling. Agriculture 10(9):400 7. Gopal PSM, Bhargavi R (2019) A novel approach for efficient crop yield prediction. Comput Electron Agricult 165:104968 8. Lobell DB et al (2015) A scalable satellite-based crop yield mapper. Remote Sens Environ 164:324–333 9. Zhang X, Zhang Q (2016) Monitoring interannual variation in global crop yield using long-term AVHRR and MODIS observations. ISPRS J Photogramm Remote Sens 114:191–205 10. Jin X et al (2017) Winter wheat yield estimation based on multi-source medium resolution optical and radar imaging data and the AquaCrop model using the particle swarm optimization algorithm. ISPRS J Photogram Remote Sens 126:24–37 11. Saeed U et al (2017) Forecasting wheat yield from weather data and MODIS NDVI using Random Forests for Punjab province, Pakistan. Int J Remote Sens 38(17):4831–4854 12. Guo C et al (2018) Integrating remote sensing information with crop model to monitor wheat growth and yield based on simulation zone partitioning. Precis Agricult 19(1):55–78 13. Yang Q et al (2019) Deep convolutional neural networks for rice grain yield estimation at the ripening stage using UAV-based remotely sensed images. Field Crops Res 235:142–153


14. Kaneko A et al (2019) Deep learning for crop yield prediction in Africa. In: ICML workshop on artificial intelligence for social good 15. Nevavuori P, Narra N, Lipping T (2019) Crop yield prediction with deep convolutional neural networks. Comput Electron Agric 163:104859 16. Sun J et al (2019) County-level soybean yield prediction using deep CNN-LSTM model. Sensors 19(20):4363 17. Zhang Z et al (2020) Improving regional wheat yields estimations by multi-step-assimilating of a crop model with multi-source data. Agricult Forest Meteorol 290:107993 18. Zhu B et al (2020) Hybrid particle swarm optimization with extreme learning machine for daily reference evapotranspiration prediction from limited climatic data. Comput Electron Agricult 173:105430 19. Chu Z, Yu J (2020) An end-to-end model for rice yield prediction using deep learning fusion. Comput Electron Agricult 174:105471 20. Haque FF et al (2020) Crop yield prediction using deep neural network. In: 2020 IEEE 6th world forum on internet of things (WF-IoT). IEEE 21. Guhathakurta P et al. Observed rainfall variability and changes over Rajasthan state 22. Singh RB, Kumar A (2016) Agriculture dynamics in response to climate change in Rajasthan. Delhi Univ J Hum Soc Sci 3:115–138 23. Topic: Agro-climatic division of Rajasthan. https://www.Krishi.Rajasthan.gov.in. Last accessed 20 July 2022 24. Sun J et al (2020) Multilevel deep learning network for county-level corn yield estimation in the US corn belt. IEEE J Select Top Appl Earth Observat Remote Sens 13:5048–5060 25. Henry et al. Agricultural meteorological data, their presentation and statistical analysis (book chapter) 26. Sawasawa H (2003) Crop yield estimation: integrating RS, GIS, and management factors. International Institute for Geo-Information Science and Earth Observation, Enschede, The Netherlands 27. Kern A et al (2018) Statistical modelling of crop yield in Central Europe using climate data and remote sensing vegetation indices. Agricult Forest Meteorol 260:300–320 28. Mu H et al (2019) Winter wheat yield estimation from multitemporal remote sensing images based on convolutional neural networks. In: 2019 10th international workshop on the analysis of multitemporal remote sensing images (MultiTemp). IEEE 29. Sagan V et al (2021) Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning. ISPRS J Photogram Remote Sens 174:265–281 30. Fernandez-Beltran R et al (2021) Rice-yield prediction with multi-temporal Sentinel-2 data and 3D CNN: a case study in Nepal. Remote Sens 13(7):1391 31. Gumaste SS, Kadam AJ (2016) Future weather prediction using genetic algorithm and FFT for smart farming. In: 2016 international conference on computing communication control and automation (ICCUBEA). IEEE 32. Bahrami M et al (2020) Determination of effective weather parameters on rainfed wheat yield using backward multiple linear regressions based on relative importance metrics. Complexity 2020 33. Topic: MODIS data. https://gisgeography.com/modis-satellite/. Last accessed 20 July 2022 34. Topic: MODIS data. https://modis.gsfc.nasa.gov/data/. Last accessed 20 July 2022 35. Topic: Landsat data. https://landsat.gsfc.nasa.gov/satellites/landsat-8/landsat-8-bands/. Last accessed 20 July 2022 36. Khaki S, Wang L, Archontoulis SV (2020) A CNN-RNN framework for crop yield prediction. Front Plant Sci 10:1750 37. Madhumathi R, Arumuganathan T, Shruthi R (2020) Soil NPK and moisture analysis using wireless sensor networks. In: 2020 11th international conference on computing, communication and networking technologies (ICCCNT). IEEE


38. Topic: Soil sensors for monitoring soil parameters. https://how2electronics.com/iot-based-soil-nutrient-monitoring-with-arduino-esp32/. Last accessed 20 July 2022 39. Grimblatt V et al (2019) Precision agriculture for small to medium size farmers—an IoT approach. In: 2019 IEEE international symposium on circuits and systems (ISCAS). IEEE, p 41

Interactive Business Intelligence System Using Data Analytics and Data Reporting E. Sujatha , V. Loganathan , D. Naveen Raju , and N. Suganthi

Abstract Nowadays, data has grown tremendously. Business intelligence (BI) is an emerging technology for identifying the progress of an industry. For an organization to be successful, this strategy must be paired with analytics. Many organizations struggle to analyze user demands. To resolve this, a data analytics process is used within business intelligence. The system helps to organize and share data across the organization, providing the potential to identify strategic opportunities and support decision-making. It enables data access and analytics as a service, enriching all categories of users. Connectors combine real-time data from anywhere and automate reporting within minutes; simple and attractive user interactions then help new technologies and promising initiatives gain high user adoption.

Keywords Business intelligence · Data analytics · Data reporting

E. Sujatha (B) · V. Loganathan Saveetha Engineering College (Autonomous), Thandalam, Chennai, India e-mail: [email protected] V. Loganathan e-mail: [email protected] D. N. Raju R.M.K. Engineering College (Autonomous), Chennai, India e-mail: [email protected] N. Suganthi SRM Institute of Science and Technology, Ramapuram Campus, Chennai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_47


Fig. 1 Concept of business intelligence

1 Introduction
Business intelligence (BI) has set the trend in the policies and techniques used by organizations for the analysis of business information. BI technologies make business activities much easier to handle in their past, current, and future perspectives. The fundamental processes of BI techniques are report preparation, real-time analytical processing, text and data mining, event processing, business performance management, benchmarking, and predictive and prescriptive analytics. Large-scale data on markets, processes, clients, vendors, and the business environment can be collected and analysed, which makes managing, analysing, and sharing data within the organization easy. It enables decision-making and helps determine opportunities in technologies as well (Fig. 1).

2 Related Works
2.1 Business Intelligence
Definitions of business intelligence have changed over time owing to various factors and contexts. It is viewed as private insight rather than state or open knowledge (Fig. 2). Since its inception, BI has been useful across disciplines, especially for engineers and programmers. It can be classified into a framework architecture that organizes different types of information to capture business data for decision-making, enables business functions and features with flexible administration options, and addresses multi-criteria decision-making in business analytics [1].


Fig. 2 Internal functions of business intelligence

It is stated that BI focuses on receiving and processing information about the current data of users, industries, technologies, and, finally, products. Predictive analytics in business analytics follows a set of processes comprising the jobs activated for decision-making in order to accomplish the objectives. An architectural view shows data changing first into information and later into knowledge during decision-making analysis [2]. BI concentrates on organized information gathered from different sources. It is a decision support system that provides answers for deciding upon the financial situation of the organization. In [3], a cloud-based optimization approach is discussed that is implemented in business analytics in order to improve business processes. It acts through numerical and methodological models to accumulate information and data [4]. In [5], BI is described as an open domain that covers a wide range of applications and uses various technologies and processes to collect, access, and store data for performing analysis in order to make decisions. It enables information to be updated by its applications and existing information to be exploited by simulating the environment to model real-world scenarios. It increases prediction ability and improves the performance of the organization [6]. It is useful for leaders, as it segments information into useful unit sources for business analysis and software engineering tools.

2.2 Data, Information, and Knowledge
Data. BI is most popular for its structured codification, its ease of implementation, and its support of business transactions with many inputs [5].


It is commercialized in organizations owing to its ease of analysing data in different forms, from which a corresponding strategy is framed. Data are classified as unstructured, semi-structured, and structured. Structured data follow a standard format, are collected from various online sources together with their meta-data, and so can be easily accessed and understood by a system. Unstructured data are quite different: they follow varied formats and templates and are not organized or sorted into rows and columns, so they are not easily understood by the system. Information stored in the industry's repository possesses reusability characteristics and is distributed through software such as Customer Relationship Management (CRM), Marketing Automation Systems (MAS), and Social Media Platforms (SMP).
Information. Data are transformed into information through the process of gathering data and processing it so that the system can understand its structure and format.
Knowledge. Knowledge is created to derive decisions based on the related sub-processes. It is extended with many real-time scenarios and domains to handle and manage industry-demand solutions.

2.3 Business Intelligence Architectures
Data sources. These hold the unstructured data used to operationalize the systems; data are captured from various external sources [7].
Data warehouse/Data mart. A data warehouse is data stored in a centralized location in various formats; it enables data extraction, transformation, and loading, and the data must be standardized before being accessed from different locations. Data marts are small warehouses that concentrate on a single subject rather than gathering data from many locations; they provide a limited database and are cheap to implement [7, 10].
Data exploration. This is passive business intelligence analysis, comprising query and reporting systems that use statistical methods.
Data mining. This is an active business intelligence strategy for extracting knowledge and information from unstructured data.
Optimization. Optimization is built to pick the best from the possible solutions; the space of options is wide and may be endless.
Decisions. If business intelligence theory is implemented efficiently and deployed in the appropriate context, decision makers can avail themselves of the benefits of unstructured and informal data and can refine decisions by using mathematical models [8].
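The extraction-transformation-loading step named above is easy to picture in code. A minimal, hypothetical sketch using pandas and SQLite as a toy warehouse (the file names and table layout are invented):

import pandas as pd
import sqlite3

# Extract: operational data from two sources (placeholder file paths)
sales = pd.read_csv("sales.csv")
customers = pd.read_csv("crm_export.csv")

# Transform: standardize formats before the warehouse accepts them
sales["order_date"] = pd.to_datetime(sales["order_date"])
sales["amount"] = sales["amount"].astype(float)

# Load: write the conformed tables into the (toy) warehouse
with sqlite3.connect("warehouse.db") as conn:
    sales.to_sql("fact_sales", conn, if_exists="replace", index=False)
    customers.to_sql("dim_customer", conn, if_exists="replace", index=False)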


2.4 Business Intelligence Capabilities
Business intelligence is marching toward successful decision-making systems in industry [9]. Since it is still in its infancy, it has not yet reached many industries, but it has the potential to extend an organization's growth [11]. The methodology may fail in an inappropriate context or against industry demand; still, it remains state of the art and famous among industries for guaranteed success in reaching business targets and achievements. Many business intelligence models motivate the importance of features in the decision-making process. In [12], organizational and technological perspectives are discussed, and BI's potential for standardizing data quality is noted. Business intelligence runs applications through its flexibility, risks, and responsibilities.
Data Quality. Business intelligence depends on structured and/or numerical data that can be analysed with statistical models and computing capabilities. In [13], data quality is identified as the most important feature for the success of business intelligence: it is a significant element of business analysis, it matters when adding mass data from various sources, and it allows giant industries to be tied together as a single organization so that decisions and results arrive on time, with semantically accurate information furnished across the organization. Data quality should be unique, consistent in nature, and preferably comprehensive. Handling improper data leads to a lower degree of data reliability and maintainability, and to mistakes during system migration. Customer satisfaction and expectations rely on information being analysed accurately within the organization. In [14], it is stated that business agility can improve only when the organization is technically sound. Business intelligence success rests on the characteristics of clean and relevant data.
Integration with other systems. The emerging nature of BI makes it challenging to integrate BI systems with other systems in the organization. Integration demands capturing data from various locations and sources with functionally different applications, since each individual system has its own significance in the organization. Finally, the quality of the data shared between these systems affects the performance of the integration, because integration technology demands quality [15].
User access. BI tools vary widely in scope and capability. According to organizational demand, BI is used as a single application or as many applications combined with the appropriate BI tool and type. In a few organizations, access control is restricted according to application usage via an Internet-centric methodology; full access is also given to users if required. It is hard to balance BI users' access rights against the information they need for the types of decisions they make [16].


Flexibility. The organization's role is to select the appropriate emerging technology to support BI in order to achieve the state-of-the-art advantages BI provides, and the technology must be flexible with the organization's business process rules and regulations; hence, flexibility is a most important factor for running BI successfully in an organization [8, 16].
Risk Management Support. The decision-making process focuses heavily on risk management, which involves a low degree of certainty. Managing high risk is one of the key success factors in an organization. Among other things, business decisions also concentrate on any instability, vulnerability, or hazard found; these can be addressed by using BI. [17] suggested that BI solutions may succeed or fail in organizations, and this is indicated by signs, so that challenges can be tackled or risks adopted appropriately at the right time.

2.5 Enabling Factors in Business Intelligence Projects
The most critical factors mentioned in [17] for the success of a business intelligence methodology are technology, analytics, and human resources.
Technologies. The hardware and software technologies deployed in the hierarchical structure of the organization are the most significant factors for the improvement of business intelligence. They help enhance high-level processes, increasing the efficiency of inductive learning strategies and models while keeping computing time within threshold, and they permit state-of-the-art graphical models to explore real-time animations. The next focus is on cheap storage capacity, which enables maintaining the large databases an organization needs for business analysis [8, 17]. Every organization demands a robust network for its hybrid structure to diffuse data analysis; hence, the appropriate combination of hardware and software influences the diffusion of data analysis tools.
Analytics. In an organization, high-level knowledge and information are carried forward by mathematical models and analytical methodologies. The decision-making strategy involves data visualization with respect to time and a logical view as a passive factor. To make decision-making an active process, it is suggested to deploy advanced inductive learning methodologies and optimization techniques [15, 17].
Human resources. Individuals or teams of human resources set up the competencies of the organization. Highly capable human resources, whether individuals or teams, will make quality decisions among alternatives. Advanced business intelligence systems anticipate a high degree of employee skill, mental ability, and creative decision-making ability; these can be increased by deploying appropriate analysis tools [17].


3 Experimental Workflow Analysis
The business intelligence model is used to implement analytics and to manage and predict organizational performance. Six processes are identified as required to perform the analysis, and primarily five levels have to be processed: requirement elicitation, data source identification, security, storyboard design, and software and hardware configuration. This helps in preparing documentation and planning software and hardware resources. The main issue faced in implementing business intelligence is that it is an emerging domain with limited exposure across perspectives. The proposed model is described together with experimental findings from existing work, and a case study has validated the model (Figs. 3 and 4).

3.1 Workflow
Step 1: The request is received.
Step 2: The mapping controller is dispatched.
Step 3: The business logic is executed.
Step 4: The result is set and a logical view is returned.
Step 5: The view is resolved.
Step 6: The rendered view is returned.
Step 7: The response is returned.
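A hypothetical sketch of this request-to-response flow, using Flask as a stand-in web framework (the route, function names, and placeholder logic are invented, not the system's actual code):

from flask import Flask

app = Flask(__name__)

def run_business_logic(dataset_id):
    # Step 3: placeholder business logic
    return {"dataset": dataset_id, "rows": 42}

@app.route("/report/<dataset_id>")   # Steps 1-2: request received and dispatched
def report(dataset_id):
    result = run_business_logic(dataset_id)
    # Steps 4-7: result bound to a view, resolved, rendered, and returned
    return f"<h1>Report for {dataset_id}</h1><p>rows: {result['rows']}</p>"

if __name__ == "__main__":
    app.run(debug=True)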

4 Methodology
The system validates ITS standards testing, which relates to design requirements, document merging, system testing, and user-interface testing. It aims to test requirements for conformance to standards and sets a high level of compliance; conformance testing helps to concentrate on the functionality rather than on particular hardware. It enriches interoperability and satisfies the requirements of the standards.
List of Modules:
● Connection
● Integrate
● Publish


Fig. 3 Architecture diagram

4.1 Connection
Connection is a module that holds data-source connection information for databases, cloud services, third-party APIs, and files (csv and excel); it is used to obtain the dataset details during integration.


Fig. 4 Workflow of the proposed system

In this module, we connect to a particular database (e.g., MySQL, Oracle, MongoDB) through connectors; the connection information is validated against the given credentials, and a list page shows the stored connections with delete functionality (Fig. 5). A minimal validation sketch follows.
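The following is a hypothetical sketch of the credential check, assuming the mysql-connector-python package and invented credentials (the module's real code is not given in the paper):

import mysql.connector                      # assumes mysql-connector-python installed
from mysql.connector import Error

def validate_connection(host, user, password, database):
    # Return True if the stored credentials can open the database
    try:
        conn = mysql.connector.connect(host=host, user=user,
                                       password=password, database=database,
                                       connection_timeout=5)
        conn.close()
        return True
    except Error:
        return False

# A validated connector entry would then appear on the list page
print(validate_connection("localhost", "report_user", "secret", "sales"))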

4.2 Integrate
Integrate is a module used to merge data from different data sources, which can be databases, cloud services, or files. Merging can happen across more than one data source, and the sources must share some relationship; without a relationship, the data cannot be merged. The merged data is called a dataset, and the output data can be previewed before the dataset is stored. Connectors combine real-time data from anywhere and automate reporting within minutes (Fig. 6). A small merge sketch follows.
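A small sketch of such a relationship-based merge, using pandas with invented toy tables (the shared key stands in for the required relationship):

import pandas as pd

orders = pd.DataFrame({"cust_id": [1, 2, 3], "amount": [120.0, 75.5, 240.0]})
customers = pd.DataFrame({"cust_id": [1, 2, 3], "region": ["North", "South", "West"]})

# The shared key cust_id is the "relationship"; without it the merge is impossible
dataset = orders.merge(customers, on="cust_id", how="inner")
print(dataset.head())   # preview the output data before storing the dataset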


Fig. 5 Home page of connection

Fig. 6 Dataset preview


4.3 Publish
Publish is a module used to download the merged output data as a csv file and to publish the output file to third-party visualizers such as the IBM Watson analytics tool and Tableau. Publishing can happen manually or can be scheduled with a specific time format. Alerting is part of the publish functionality: alert conditions can be defined against the dataset, and during the publishing process each condition is validated; if a condition matches the dataset, an alert notification is sent by e-mail or SMS, or even to a mobile device via push notification (Fig. 7). A sketch of the export-and-alert step follows.
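A sketch of the export-and-alert step, with invented file and column names and a stub in place of the e-mail/SMS/push gateway:

import pandas as pd

def send_alert(message):
    # stub dispatcher: wire this to an e-mail, SMS, or push-notification gateway
    print("ALERT:", message)

dataset = pd.read_csv("merged_dataset.csv")          # output of the Integrate module
dataset.to_csv("published_output.csv", index=False)  # manual or scheduled export

threshold = 100000                                   # alert condition against the dataset
total = dataset["amount"].sum()
if total > threshold:
    send_alert("Total %s exceeded %s" % (total, threshold))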

5 Results See Figs. 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 and 18.

Fig. 7 Publish list


Fig. 8 Home page of data management–data analytics–data reporting

Fig. 9 Login page


Fig. 10 My dashboard

Fig. 11 Home page of integrate


Fig. 12 MySQL connection

6 Conclusion and Future Enhancement
6.1 Conclusion
In conclusion, the emerging domains for business intelligence are Business-to-Customer (B2C) applications, and enterprise management ensures scalability. The factors that have improved through the implementation of business intelligence are:
● Cost
● Profit
● Clients


Fig. 13 Connection list

The system aims to help decision makers provide responses when handling structured and semi-structured business data. Business intelligence plays a vital role in making profits.


Fig. 14 Dataset preview details

Fig. 15 Alert notification


Fig. 16 Final status of Publish List1

6.2 Future Enhancement
The research path of business intelligence will lead organizations into a new era. Data science and big data analytics algorithms can be applied to improve other external factors of organizations. Business intelligence will rule future technologies.


Fig. 17 Final status of Publish List2

Fig. 18 Schedule


References 1. Yalcin AS, Kilic HS, Delen D (2022) The use of multi-criteria decision-making methods in business analytics: a comprehensive literature review. Technol Forecast Soc Change 174:121193 2. Lee CS, Cheang PY, Moslehpour M (2022) Predictive analytics in business analytics: decision tree. Adv Decis Sci 26(1):1–29 3. Sujatha E et al (2018) Cloud based optimization approach to joint cyber security and insurance management system. Int Res J Eng Technol 5(4):520–522 4. Popescu GH, Valaskova K, Horak J (2022) Augmented reality shopping experiences, retail business analytics, and machine vision algorithms in the virtual economy of the metaverse. J Self-Govern Manage Econ 10(2):67–81 5. Shao C et al (2022) IoT data visualization for business intelligence in corporate finance. Inf Process Manage 59(1):102736 6. Sujatha E et al (2019) Sharing and monitoring of medical health records. Int J Innovat Res Sci Eng Technol 8(2):85–88 7. Huang ZX, Savita KS, Zhong-jie J (2022) The business intelligence impact on the financial performance of start-ups. Inf Process Manage 59(1):102761 8. Rana NP et al (2022) Understanding dark side of artificial intelligence (AI) integrated business analytics: assessing firm's operational inefficiency and competitiveness. Eur J Inf Syst 31(3):364–387 9. Sujatha E et al (2019) Assured way to manage various control in cloud. Int J Innovat Res Comput Commun Eng 7(2):1336–1342 10. Al-Okaily A et al (2022) An empirical study on data warehouse systems effectiveness: the case of Jordanian banks in the business intelligence era. EuroMed J Business (ahead-of-print) 11. Kaewnaknaew C et al (2022) Modelling of talent management on construction companies' performance: a model of business analytics in Bangkok. Int J Behav Anal 2(1) 12. Huber M, Meier J, Wallimann H (2022) Business analytics meets artificial intelligence: assessing the demand effects of discounts on Swiss train tickets. Transp Res Part B: Methodol 163:22–39 13. Alyan MA (2022) The impact of business intelligence on employee empowerment, the mediating role of information and communication technology (ICT), a field study on Jordanian Universities-Zarqa Governorate. Dissertation, Zarqa University 14. Demirdöğen G, Işık Z, Arayici Y (2022) Determination of business intelligence and analytics-based healthcare facility management key performance indicators. Appl Sci 12(2):651 15. Sujatha E et al (2022) E-connect for agro products using supply chain with micro-finance: a blockchain approach. In: 2022 8th international conference on smart structures and systems (ICSSS), pp 1–7. https://doi.org/10.1109/ICSSS54381.2022.9782250 16. Sujatha E et al (2019) Sensor based automatic neonate respiration monitoring system. Int J Innovat Technol Explor Eng 9(2):1296–1299 17. Dedić N, Stanier C (2016) Measuring the success of changes to existing business intelligence solutions to improve business intelligence reporting. Lecture notes in business information processing, vol 268. Springer International Publishing, pp 225–236

Metaverse in Robotics—A Hypothetical Walkthrough to the Future Salini Suresh, R. Sujay, and N. Ramachandran

Abstract The Metaverse is a parallel, virtual realm that people may freely explore through avatar representation. Gaming companies such as Rockstar, Second Life, and Unity were early adopters of the Metaverse in order to attract customers. Since this endeavour has proven successful, industries as diverse as education and entertainment are eager to follow suit. In order to maintain their top position among rapidly growing user applications and software companies, even well-known social media developers have begun to invest significantly in the growth of the Metaverse era. The Metaverse has recently emerged as a research subject and expectation; simultaneously, robotics has captured people's imagination throughout the ages. This research aims to shed light on the potential power and effect of integrating two developing technologies, the Metaverse and Robotics.

Keywords Metaverse · Avatar · Robotics · Gaming · Virtual realm

S. Suresh (B) Dayananda Sagar Institutions, Bengaluru, Karnataka, India e-mail: [email protected] R. Sujay Dayananda Sagar University (B.Tech. CSE (AI&ML), Bengaluru, Karnataka, India N. Ramachandran Indian Institute of Management, Kozhikode, Kerala, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 V. S. Rathore et al. (eds.), Emerging Trends in Expert Applications and Security, Lecture Notes in Networks and Systems 681, https://doi.org/10.1007/978-981-99-1909-3_48


1 Introduction
Just picture a complete first-grade class sitting down with VR goggles while their geography teacher takes them on a virtual field trip to the Alps so they can see, hear, feel, and learn everything about the region. The name "Metaverse" has echoed throughout the technology industry for over 30 years, and it is finally putting its name into action to make good on the promises it has made in various fields that have relied on technology for quite some time. In its early years, the Metaverse was seen as a pipe dream, but as entertainment and tech giants rushed in, the moniker caught on and the dream began to gather traction. Experts in disciplines as diverse as entertainment and education are pooling their resources to accelerate the development of the Metaverse in their respective domains. However, despite the advent and advancement of the Metaverse, some of the best technologies that have been a dream for decades, and that have captivated even the youngest children, have not yet gained real access to the Metaverse. Robotics is the grand term that holds a soft spot in the heart of every individual who admires technological advancement. Yet there has not been a successful integration of robotics into the Metaverse, which would mark a significant step forward for the development of technology on a global scale. Through a variety of web analytics and literature reviews, this paper proposes two novel hypotheses about the application of the Metaverse to robotics and investigates potential avenues for its successful implementation.

2 Evolution of Metaverse and Robotics
2.1 Metaverse
The term "Metaverse" was coined by Neal Stephenson in his science fiction novel Snow Crash, released in June 1992. People's interest in the Metaverse was piqued by the release of the film "Ready Player One", adapted from the science fiction novel of the same name. The concept of the Metaverse got the greatest traction once gaming platforms like Second Life and Grand Theft Auto began developing it to present the future of gaming to the public. The Metaverse started capturing the eyes of the youth once social media and software technology giants like Mark Zuckerberg and Bill Gates made their rush entry into its development. At present, the Metaverse carries the momentum of technological advancement and remains a hot topic; owing to its growing popularity and traits, various fields such as education and physical training have started adopting it.


2.2 Robotics
The term "Robot" was first introduced in the science-fiction play R.U.R. by Karel Čapek in 1920. At first, the concept of robotics was adopted and developed by the cartoon industry, focusing mainly on humanoid robots. The manufacturing industry was the first to adopt robotics, in the early 1960s. The field of robotics has seen remarkable growth in recent years. Perhaps more exciting is the development of humanoid robots designed to aid humans in various tasks. Robotics has already shown its worth in a wide variety of practical contexts.

3 Integrating the Metaverse with Robotics
3.1 Hypothesis 1
Robotics will take the Metaverse to a whole new level. Humanoid and semi-humanoid robots are going to act as non-player characters (NPCs) in the real world. According to the current Metaverse hypothesis, users will be represented in the virtual world by avatars. The proposed merging of the Metaverse with robotics would allow robots to take on the role of avatars in the actual world, transporting the user to a preferred destination and performing desired tasks with no physical effort from the user [10]. The non-intelligent humanoid robotic avatar will be controlled by the Metaverse user via virtual reality (VR) goggles (see Fig. 1), through which the user sees the world as the robot does and the robot replicates the user's facial expressions; a gesture controller attached to the user's wrist and fingers lets the robot mimic the user's hand gestures, and a motion controller is implemented via a stationary walking base or motion controls attached to the hand-held controller. The unique feature of using the Metaverse in robotics is that the user may experience the environment the avatar is in. This is implemented through a thin, replaceable skin, 3 mm thick, studded with magnetic particles, with artificial intelligence calibrating its sense of touch so the user can feel the habitat [11]. With these characteristics, the Metaverse can be effectively applied in robotics. For example, if a minor surgical emergency is admitted to hospital and the chief surgeon is not present, a humanoid robot controlled by the surgeon from home can perform the procedure at the appropriate time, thereby reducing the risk of death and saving valuable time. A prototype test with the same features has been done by the Italian Institute of Technology, Genoa, using a customized humanoid robot named iCub, controlled by a human being from 300 km away over an optical-fibre connection. The operator was successfully able to see, hear, and feel the habitat of the "International Architecture Exhibition—La Biennale di Venezia" and to interact with the staff of the exhibition.


Fig. 1 Connectivity of the real-life avatar bots across the globe, illustrated with respect to Hypothesis 1

3.2 Hypothesis 2
Train your robots in the Metaverse [12, 13]. The primary difficulty for a robotics engineer is not the creation of humanoid robots but training the robot to do the wide range of tasks required in the actual world. Training robots in the real world through machine learning can be risky and ineffective [13]. This hypothesis therefore proposes that developers can safely, effectively, and cheaply train their robots in the Metaverse. Training robots in the Metaverse on how to interact with different people in society, how to recognize facial expressions and voice tones to determine the emotional state of the person they are conversing with, and how to behave in different situations, before introducing them to the real world, will make a huge change in the application of artificial intelligence robots in society.
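The safe-simulation training loop this hypothesis describes can be pictured with today's tooling. A minimal sketch, assuming the Gymnasium toolkit and using a stock environment as a stand-in for a Metaverse scene (a learned policy would replace the random action):

import gymnasium as gym    # assumes the Gymnasium simulation toolkit is installed

# Stand-in for a Metaverse scene: any simulated environment exposing the same API
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for step in range(1000):
    action = env.action_space.sample()        # replace with the robot's learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:               # a failed episode costs nothing in the real world
        obs, info = env.reset()
env.close()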

4 Strategies for Effective Implementation of Metaverse in Robotics
Successful implementation of the Metaverse in robotics relies completely on definite planning and sound implementation strategies. Based on the available literature, concepts, and hypotheses about integrating the Metaverse with robotics, some strategies for effective implementation are listed below:


● Collaboration: Top-tier robotics development organizations and software corporations putting significant time and resources into the development of the Metaverse would make a formidable duo in merging the virtual and physical worlds.
● Investment: Better implementation will only be possible with better investment by companies focused on developing both the Metaverse and robotics. Implementing humanoid robots controlled by users in real time is costly, so investment by robotics companies in manufacturing robots plays a major role in effective implementation.
● Application: Creative application of the Metaverse in robotics helps in better implementation. For example, countries focused on technological advancement can invest in the Metaverse in robotics and deploy robots for tourism, so that foreigners can visit different parts of the country for a fee, which even helps the country's economy.
● Training: The best professionals in this field should be recruited and formed into a team under a company that is ready to invest whatever sum is needed for the development of this technology. That will help provide better training for the robots in the Metaverse.
● Accessibility: The technology should be easily accessible to common people; this helps gather traction among the population and aids mass implementation in diverse fields.
● Security: For the notion to be successfully implemented in actual society, secure communication between the user and the avatar is a fundamental and crucial need. Conglomerate encryption is a hypothetical method that has not yet been implemented in the critical security sectors of technology. The security mechanism for connecting a user to an avatar bot will employ an iris biometric validation approach. The biometric image will be encrypted using the conglomerate encryption method. In conglomerate encryption, a positive AI algorithm creates a large number of trash strings, which are encrypted together with the original biometric image using the same technique. This aggregate packet is sent to the bot's reception system. The recipient decodes the packet using a reverse AI algorithm that instantaneously recognizes the original biometric image encryption, stores the original encryption in static memory, and places the trash-string encryptions in a dynamic memory that erases data gradually. The receiver system then checks the decrypted biometric image to confirm a successful connection. Each system (one system comprises a user end and a receiver) will have a certain collection of AI algorithms; comparable sets of AI algorithms in the user and receiver systems will concurrently and in parallel modify the encryption and generation algorithms at both ends. Since the receiver system does not include a reusability feature for the biometric image, capture and reuse of the image by a third-party system will not result in a successful validation. Conglomerate encryption may thereby provide safe connection validation within a system; a toy sketch of the idea follows this list.
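Conglomerate encryption as described above is the authors' hypothetical. The toy sketch below only mimics its decoy idea, using off-the-shelf Fernet symmetric encryption in place of the proposed AI-generated ciphers; every name in it is invented:

import os, random
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared by the paired user/receiver systems
cipher = Fernet(key)
MAGIC = b"IRIS:"                     # marker identifying the genuine biometric payload

def send(biometric_bytes, n_junk=8):
    # Encrypt the real payload plus many trash strings and shuffle them together
    packets = [cipher.encrypt(MAGIC + biometric_bytes)]
    packets += [cipher.encrypt(os.urandom(32)) for _ in range(n_junk)]
    random.shuffle(packets)
    return packets

def receive(packets):
    # Decrypt everything; keep only the genuine payload, discard the trash strings
    for token in packets:
        plain = cipher.decrypt(token)
        if plain.startswith(MAGIC):
            return plain[len(MAGIC):]
    return None

assert receive(send(b"iris-template-bytes")) == b"iris-template-bytes"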


5 Conclusion

As time has progressed, Robotics and the Metaverse have both come to be recognized as two crown jewels of technology. Unfortunately, however often the notion of merging the Metaverse and Robotics is discussed in the technology sector, its actual implementation is seldom addressed, and how well an idea is implemented is one of the most important factors in its practical success. This study provides worthwhile strategies for the effective implementation of the Metaverse in Robotics, drawing on the recent advancements and hypotheses that companies such as NVIDIA, Meta, Unity, and Hyundai, and universities such as Carnegie Mellon University and the Italian Institute of Technology, have made and proposed. The Metaverse in Robotics is poised to lead technological advancement in the coming century through its vast range of applications across the most diverse fields.


