Innovations in Bio-Inspired Computing and Applications: Proceedings of the 13th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2022), Held During December 15–17, 2022
ISBN 978-3-031-27498-5, 978-3-031-27499-2 (eBook)



English, 951 pages, 2023


Table of contents:
Preface
IBICA–WICT 2022 Conference Organization
Contents
Bioinspired Computing and Applications
Bayesian Consideration for Intention to Purchase Safe Vegetables: Evidence from Vietnam
1 Introduction
2 Literature Review
2.1 Safe Vegetable Purchasing Intention (SVPI)
2.2 Health Care (HC)
2.3 Brand Trust (BT)
2.4 Product Price (PP)
2.5 Product Quality (PQ)
2.6 Supermarket Trust (ST)
3 Methodology
3.1 Sample
3.2 Reliability Test and Bayesian Consideration
4 Results
4.1 Reliability Test
4.2 Bayesian Consideration
4.3 Model Evaluation
5 Conclusions
References
Evolution of Configuration Data in CGP Format Using Parallel GA on Embryonic Fabric
1 Introduction
2 Design of Parallel Genetic Algorithm
3 Parallel GA Versus Single GA
4 Conclusion
References
Cross Synergetic Mobilenet-VGG16 for UML Multiclass Diagrams Classification
1 Introduction
2 Proposed Cross Synergetic Mobilenet-VGG16 for UML Diagram Classification
2.1 Problem Statement
2.2 MobileNet Overview
2.3 VGG16 Overview
2.4 Transfer Learning Overview
3 Results and Discussion
3.1 Evaluation Data and Comparison Methods
3.2 Performance of Proposed Method
3.3 Sensitivity of Parameters
4 Conclusion
References
Solar Irradiation and Wind Speed Forecasting Based on Regression Machine Learning Models
1 Introduction
2 Data Characterization
2.1 Statistical Description and Data Processing
2.2 Data Correlation
3 Forecasting Model
4 Results and Discussions
4.1 Solar Irradiation Forecasting
4.2 Wind Speed Forecasting
5 Conclusion and Future Works
References
Transfer Learning Based Pediatric Pneumonia Diagnosis Using Residual Attention Learning
1 Introduction
2 Related Works
3 Background
3.1 Attention Networks
3.2 Residual Attention Network
3.3 Cost Sensitive Learning
4 Dataset and Experimental Design
4.1 Dataset Description
4.2 Proposed Methodology
5 Results and Discussion
6 Conclusion and Future Work
References
Fuzzy Systems in Bio-inspired Computing: State-of-the-Art Literature Review
1 Introduction
2 Bio-inspired Computing
3 Fuzzy Systems and Extensions
4 State-of-the-Art Literature Review on Fuzzy Bio-inspired Computing
4.1 Fuzzy Ant Colony Systems
4.2 Fuzzy Artificial Immune Systems
4.3 Fuzzy Artificial Neural Networks
4.4 Fuzzy Cellular Automaton
4.5 Fuzzy Cognitive Modeling
4.6 Fuzzy Differential Evolution
4.7 Fuzzy Evolutionary Computation
4.8 Fuzzy Evolutionary Strategies/Programming
4.9 Fuzzy Genetic Algorithms/Programming
4.10 Fuzzy Particle Swarm Optimization
5 Conclusion
References
Anomaly Detection Framework
1 Introduction
2 Methodology
2.1 Test Case of Issuing Virtual Cards
2.2 Experiment
3 Discussion
4 Conclusion
References
Speech Emotion Recognition Using CNN-LSTM and Vision Transformer
1 Introduction
2 Related Works
3 Methodology
3.1 Dataset Description
3.2 Convolutional Neural Network
3.3 Long Short Term Memory
3.4 CNN-LSTM Model
3.5 Mel Spectrogram
3.6 Vision Transformer Model
4 Results and Discussions
5 Conclusion and Future Directions
References
Investigating Digital Addiction in the Context of Machine Learning Based System Design
1 Introduction
2 Literature Survey
3 Machine Learning Based Recently Developed Models
4 Proposed Work
5 Conclusion and Future Work
References
Multiobjective Optimization of Airline Crew Management with a Genetic Algorithm
1 Introduction
2 Description of the Optimization Problem
2.1 Problem Model
2.2 Optimization Methods: Genetic Algorithm
2.3 Implementation and Comparison of Different Optimization Methods
3 Results
4 Conclusions
References
A Novel Approach to the Two-Dimensional Cargo Load Problem
1 Introduction
2 Container Loading Problems
2.1 Problem Definition
2.2 Previous Models
2.3 MILP Model
3 Prototype Development
3.1 Mathematical Formulation
3.2 Problem Solving Method
4 Case Study
5 Conclusion
References
Vehicle Detection from Aerial Imagery Using Principal Component Analysis and Deep Learning
1 Introduction
2 Related Works
3 Methodology
3.1 Principal Component Analysis
3.2 Proposed Methodology
3.3 Residual Networks
4 Experimental Results and Discussions
4.1 Dataset Description
4.2 Test Train Split
4.3 Observations and Findings
4.4 Model Evaluation and Benchmark Comparison
5 Conclusion and Future Works
References
Bio-inspired Heterogeneity in Swarm Robots
1 Functional Homogeneity
2 Functional Heterogeneity
3 From Homogeneity to Heterogeneity
4 Requirements on Functional Heterogeneity
5 Evaluation of Functional Heterogeneity
6 Conclusions
References
Software Defect Prediction Using Cellular Automata as an Ensemble Strategy to Combine Classification Techniques
1 Introduction
2 Background
2.1 Classification Techniques
2.2 Cellular Automaton
3 Proposed Method
3.1 Classifiers Deploy
3.2 Datasets
3.3 Learning Algorithm
3.4 Inference Algorithm
3.5 Transaction Rule
4 Results and Discussion
5 Conclusion
References
A Systematic Literature Review on Home Health Care Management
1 Introduction
2 Methodology
3 Bibliometric Analysis
4 Discussion and Future Directions
5 Conclusion
References
The Impact of the Size of the Partition in the Performance of Bat Algorithm
1 Introduction
2 Literature Review
2.1 Heuristic and Metaheuristics
2.2 Scheduling Problems
2.3 Parameterization
2.4 Discrete Bat Algorithm
3 Case Study
3.1 Problem
3.2 Methodology
3.3 Parameterization
3.4 Results
4 Conclusion
References
Automatic Diagnosis Framework for Catheters and Tubes Semantic Segmentation and Placement Errors Detection
1 Introduction
2 Background
2.1 Deep Learning Medical Image Segmentation Methods
2.2 U-Net Architecture
3 Related Works
3.1 Tubes Segmentation Based CXR Images
3.2 Methods Used to Reduce U-Net Parameter and Increase Model Performance
4 Dataset
4.1 Classification and Segmentation Data
5 Methodology
5.1 Customized U-Net Architecture
5.2 Segmentation Training Process
5.3 Classification Models Training Process
5.4 Framework of Diagnosis Validation
6 Experiments and Results
6.1 Segmentation with Customized U-Net Architectures
6.2 Classification Models Results
6.3 Validation Framework Results
6.4 Discussion
7 Conclusion
References
How Artificial Intelligence Can Revolutionize Software Testing Techniques
1 Introduction
2 Preliminaries About Software Testing
3 Preliminaries About Artificial Intelligence
4 Advantages of the Use of AI in Software Testing
4.1 Automatic Writing of Test Cases
4.2 Fast Time to Market
4.3 Earliest Response/Feedback
4.4 Prognostic Analysis
4.5 Integrated Platform
4.6 Reduction of UI-Based Testing
4.7 Better Code Coverage
4.8 Improved Reliability
4.9 Improved Quality
4.10 Automated Visual Validation Testing
5 Examples of Tools
5.1 INFER
5.2 Appvance IQ
5.3 Eggplant AI
5.4 EvoSuite
6 Conclusion and Future Work
References
E-Assessment in Medical Education: From Paper to Platform
1 Introduction
2 Literature Review
2.1 E-assessment: Progress to Date
2.2 Paper Based Assessment Versus ICT Enabled E-assessments: A Comparative Analysis
2.3 Types of E-assessment Strategies: Exploring Options for Medical Education
2.4 The Interrelatedness of Assessment of Learning and Assessment as Learning Strategies
3 Aim of the Study
4 Methodology
5 Results
6 Summary of the Findings
7 Discussion
7.1 Perceived Value of E-assessments
7.2 Self-directed Learning
7.3 Knowledge Recall and Retention
7.4 Self-reflection: Identifying Knowledge Gap and Reflecting on Study Techniques
7.5 Student Engagement
7.6 Recommendations for Practice
8 Conclusion and Future Works
References
DeepPRS: A Deep Learning Integrated Pattern Recognition Methodology for Secure Data in Cloud Environment
1 Introduction
2 Related Works
3 Proposed Methodology
4 Results and Discussions
5 Conclusion
References
Automated Depression Diagnosis in MDD (Major Depressive Disorder) Patients Using EEG Signal
1 Introduction
2 Proposed Algorithm
2.1 Data Sets
2.2 Preprocessing
2.3 Feature Extraction
2.4 Classification Methods/Approaches
3 Experimental Setup and Results
3.1 Feature Extraction Outcomes
3.2 Classification Outcomes
4 Results and Discussion
5 Conclusion
References
An Effective Deep Learning Classification of Diabetes Based Eye Disease Grades: A Retinal Analysis Approach
1 Introduction
2 Methodology
2.1 Data Collection
2.2 Preprocessing
2.3 Feature Extraction
2.4 Feature Selection
2.5 Classification
3 Performance Analysis
4 Conclusion
References
Extracting and Analyzing Terms with the Component ‘Green’ in the Bulgarian Language: A Big Data Approach
1 Introduction
2 Recent Approaches to Terminology
3 The Colours from a Language Perspective
3.1 Some Preliminaries
3.2 The Distributional Semantics Model
4 The Sketch Engine
5 The bgTenTen12 Corpus
6 bgTenTen12 Corpus Search Results
7 Discussion
References
Apartments Waste Disposal Location Evaluation Using TOPSIS and Fuzzy TOPSIS Methods
1 Introduction
2 Literature Background
3 Research Study
4 Case Problem – Chennai, Tamil Nadu, Southern India
4.1 TOPSIS – Application Methodology
5 TOPSIS and FTOPSIS Results
6 Conclusion
References
Detection of Cracks in Building Facades Using Infrared Thermography
1 Introduction
2 Literature Review
3 Methods and Materials
4 Experimental Setup
5 Results
6 Conclusion
References
Optimizing Pre-processing for Foetal Cardiac Ultra Sound Image Classification
1 Introduction
2 Literature Review
3 Methodology
4 Experiment and Results
5 Conclusion
References
A Review on Dimensionality Reduction for Machine Learning
1 Introduction
2 Dimensionality Reduction
2.1 Definition
2.2 Feature Selection
2.3 Feature Extraction
3 Discussion
4 Conclusion
References
Detecting Depression on Social Platforms Using Machine Learning
1 Introduction
2 Problem Statement
3 Related Work
4 Research Methodology
4.1 Acquire the Datasets
4.2 Data Materials and Methods
4.3 Features Extraction
4.4 Text Classification Techniques
5 Conclusion
References
Impact of Green Hydrogen Production on Energy Pricing
1 Introduction
2 Theory Background
2.1 Optimal Power Flow – Economic Dispatch
2.2 Locational Marginal Energy Prices
2.3 Hydrogen Transport Cost
3 Case Study
3.1 Case Study 1 – All Generation Produced by Thermal Power Plants
3.2 Case Study 2 – Generators 4 and 8 Replaced by Onshore Wind Utility Class 1 Producers
3.3 Case Study 3 – Onshore Wind Generators 4 and 8 Transfer 100 MW in Hydrogen Equivalent to Load 5
3.4 Discussion of Results
4 Conclusion
References
The Future in Fishfarms: An Ocean of Technologies to Explore
1 Introduction
2 State of Art
2.1 Methodology
2.2 Literature Review
3 CRISP-DM Process
3.1 Business Understanding
3.2 Data Understanding
3.3 Data Preparation
3.4 Modelling
3.5 Evaluation
3.6 Deployment
4 Conclusions
References
Tuned Long Short-Term Memory Model for Ethereum Price Forecasting Through an Arithmetic Optimization Algorithm
1 Introduction
2 Background and Related Works
2.1 LSTM
2.2 Swarm Intelligence and Literature Review
3 Proposed Method
3.1 Arithmetic Optimization Algorithm
3.2 Improved AOA with Firefly Search (AOA-FA)
4 Experimental Findings and Results
5 Discussion
6 Conclusion
References
Breast Cancer Identification Using Improved DarkNet53 Model
1 Introduction
1.1 Contributions
2 Literature Review
3 Proposed Methodology
3.1 Image Acquisition
3.2 Pre-processing
3.3 Segmentation
3.4 Feature Extraction
3.5 DarkNet53
3.6 SqueezeNet
3.7 ResNet50
4 Experimental Results
4.1 Datasets
4.2 Experiment Analysis
5 Conclusion
References
Comprehensive and Systematic Review of Various Feature Extraction Techniques for Vernacular Languages
1 Introduction
2 Related Work
3 Design and Implementation
4 Methodology for the Systematic Review
4.1 Database Curation
4.2 Identifying Keywords for Database Search
4.3 Selecting Articles for the Database
4.4 Initial Data Statistics
5 Data Analysis and Findings
6 Descriptive Analysis
7 Conclusion
References
Parallel Ant Colony Optimization for Scheduling Independent Tasks
1 Introduction
2 Related Work
3 Scheduling Problem Definition
4 Parallel Ant Colony Optimization (ParACO)
4.1 Ant Colony Optimization (ACO) for Task Scheduling
4.2 Parallelization Strategies
5 Experimental Results
5.1 Experimental Setup
5.2 Performance Comparison
6 Conclusion
References
A Review on Artificial Intelligence Applications for Multiple Sclerosis Evaluation and Diagnosis
1 Introduction
2 Multiple Sclerosis
3 Artificial Intelligence and the Biological Brain
4 AI Approaches for MS Treatment
5 Conclusion
References
KIASOntoRec: A Knowledge Infused Approach for Socially Aware Ontology Recommendation
1 Introduction
2 Related Work
3 Proposed Methodology
4 Implementation
5 Results and Performance Evaluation
6 Conclusions
References
Policy-Based Code Slicing of Database Application Using Semantic Rule-Based Approach
1 Introduction
2 Background
3 Related Works
4 Running Example
5 Proposed Approach
5.1 Syntax Based DDG
5.2 Semantics Refinement: Condition-Action Rules
5.3 Slicing Computation
5.4 Experimental Results
6 Discussion
7 Conclusion and Future Research
References
Analysing and Modeling Customer Success in Digital Marketing
1 Introduction
2 Related Work
3 Dataset Analysis
4 Methodology
5 Results and Discussion
6 Conclusions
References
DRHTG: A Knowledge-Centric Approach for Document Retrieval Based on Heterogeneous Entity Tree Generation and RDF Mapping
1 Introduction
2 Related Work
3 Proposed Work
4 Implementation and Performance Evaluation
4.1 Dataset Preparation
5 Results
6 Conclusion
References
Bi-CSem: A Semantically Inclined Bi-Classification Framework for Web Service Recommendation
1 Introduction
2 Related Work
3 Proposed Work
4 Implementation and Performance
5 Results and Performance Evaluation
6 Conclusion
References
HybRDFSciRec: Hybridized Scientific Document Recommendation Framework
1 Introduction
2 Related Works
3 Proposed System Architecture
4 Results and Performance Evaluation
5 Conclusion
References
A Collision Avoidance Method for Autonomous Underwater Vehicles Based on Long Short-Term Memories
1 Introduction
2 Related Work
3 Rule-based Control
4 AI-based Control
4.1 Automatic Labeling of the Data
5 Experimental Results and Their Evaluation
6 Conclusion and Future Work
References
Precision Mango Farming: Using Compact Convolutional Transformer for Disease Detection
1 Introduction
2 Vision Transformers for Image Classification
3 Compact Convolutional Transformer for Disease Detection Model for Mango Leaf Disease Classification
4 Experimentation and Discussion
5 Conclusions and Future Work
References
An Efficient Machine Learning Model for Bitcoin Price Prediction
1 Introduction
2 Relevant Study
3 Proposed Method
3.1 Linear Regression
3.2 Polynomial Regression
3.3 Bayesian Regression
3.4 Random Forest and Boosting Ensemble
3.5 SVM
3.6 Autoregressive Integrated Moving Average (ARIMA)
3.7 LSTM
4 Results
4.1 Graphical User Interface
4.2 Linear Regression
4.3 Polynomial Regression
4.4 Bayesian Regression
4.5 Random Forest and Boosting Ensemble
4.6 SVM
4.7 ARIMA
4.8 LSTM (Multilayer)
4.9 LSTM (GRU)
5 Conclusion
References
Ensemble Based Cyber Threat Analysis for Supply Chain Management
1 Introduction
2 Relevant Study
3 Proposed Method
3.1 Boosting Ensemble Learning
3.2 Gradient Boosting
4 Experimental Results
4.1 Count of Legal and Threat Data
4.2 Preprocess Data
4.3 Correlation Between Variables
4.4 Confusion Matrix of RF, Gradient Boosting and XGBoost
4.5 Comparison of Different Learning Models
5 Conclusion and Future Scope
References
Classification Model for Identification of Internet Loan Frauds Using PCA with Ensemble Method
1 Introduction
2 Relevant Study
3 Proposed Method
3.1 Algorithm
3.2 XGBoost Approach
3.3 Logistic Regression
3.4 Decision Tree
4 Experimental Results
5 Conclusion
References
Comparative Analysis of Learning Models in Depression Detection Using MRI Image Data
1 Introduction
1.1 Types of Brain Diseases
1.2 Diagnosis and Treatment of Brain Disease Using Image Processing
1.3 Image Extraction Techniques
2 Relevant Study
2.1 DenseNet
3 Proposed Method
4 Experimental Results
4.1 Performance Evaluation
5 Conclusion
References
Product Safety and Privacy Using Internet of Things Design and Moji
1 Introduction
2 Related Work
3 Proposed Methodology
4 Experimentation and Results
5 Conclusion
References
Optimization of the Performance and Emissions of a Dual-Fuel Diesel Engine Using LPG as the Fuel
1 Introduction
2 Experimental Investigation
3 Experimental Results
4 Results and Discussions
4.1 Optimization Using Taguchi Grey Relational Analysis (TGRA)
4.2 Optimization Using Regression
4.3 Optimization Using Anfis Model
4.4 JAYA – ANFIS Optimization Model
5 Conclusions
References
Hostel Out-Pass Implementation Using Multi Factor Authentication
1 Introduction
2 Proposed Method
3 Implementation
4 Conclusion
References
Federated Learning and Adaptive Privacy Preserving in Healthcare
1 Introduction
1.1 Data Sharing Scenarios
1.2 Federated Learning and Open Health
1.3 Types of Federated Learning
2 Related Work
2.1 Machine Learning Models
2.2 Privacy
2.3 Autonomy
2.4 Design
3 Methods
3.1 Federated Learning
3.2 Adaptive Privacy
4 Evaluation
4.1 Results
5 Conclusion
References
ASocTweetPred: Mining and Prediction of Anti-social and Abusive Tweets for Anti-social Behavior Detection Using Selective Preferential Learning
1 Introduction
2 Related Works
3 Proposed Architecture
4 Performance Evaluation and Result
5 Conclusion
References
WCMIVR: A Web 3.0 Compliant Machine Intelligence Driven Scheme for Video Recommendation
1 Introduction
2 Related Works
3 Proposed System Architecture
4 Implementation and Performance Evaluation
5 Conclusion
References
RMSLRS: Real-Time Multi-terminal Sign Language Recognition System
1 Introduction
1.1 Background
1.2 Challenges
1.3 Contributions
1.4 Organization
2 Architecture
2.1 Human-computer Interaction Layer
2.2 Video Denoising Layer
2.3 Recognition Inference Layer
3 Demonstration
References
Advances in Information and Communication Technologies
Modelling and Simulation of the Dump-Truck Problem Using MATLAB Simulink
1 Introduction
2 Problem Formulation
3 Literature Review
4 Assumptions of the Study
5 Methodology
6 Experiments, Results and Discussion
6.1 Simulink Model
6.2 Model Conceptualization
6.3 Data Collection
6.4 Model Translation
6.5 Verification
6.6 Validation
7 Conclusion
References
Modeling and Simulation of a Robot Arm with Conveyor Belt Using Matlab Simulink Model
1 Introduction
2 Literature Review
3 Methodology
3.1 Problem Formulation
3.2 Model Conceptualization
3.3 Data Collection
3.4 Model Translation
3.5 Verification
3.6 Validation
3.7 Experimental Design
3.8 Production Runs and Analysis (Simulation Results for the Model)
4 Conclusion
References
Bayesian Model Selection for Trust in eWOM
1 Introduction
2 Literature Review
2.1 Trust in eWOM (Electronic Word-of-Mouth: Te)
2.2 Information Quality (IQ)
2.3 Social Influence (SI)
2.4 Perceived Risk (PR)
3 Methodology
3.1 Sample Size
3.2 Bayesian Information Criteria
4 Results
4.1 Reliability Test
4.2 BIC Algorithm
4.3 Model Evaluation
5 Conclusion
References
Review of Challenges and Best Practices for Outcome Based Education: An Exploratory Outlook on Main Contributions and Research Topics
1 Introduction
2 Challenges of OBE Implementation in Different Countries
2.1 Engineering Design and Education in Bangladesh
2.2 Outcome-Based Education for Linguistic Courses in Hong Kong
2.3 Challenges of OBE in India
2.4 Outcome Based Education in the Israel Defence Force
2.5 Implementation of OBE in Malaysia for Higher Education
2.6 Outcome Based Education in Pharmacy Education in Canada
3 Domain Based Challenges of OBE
3.1 Outcome Based Education in Civil Engineering
3.2 Challenges of Marriage and Family Therapy Education
3.3 Challenges of Outcome Based Education in Medical Education
3.4 Challenges of Outcome Based Education for Supply Chain Management
3.5 Challenges of OBE in Chemical Engineering
3.6 Outcome Based Education for Nursing Education
3.7 OBE for Software Engineering Course
4 Best Practices for OBE
4.1 Best Practices of OBE with Respect to the Different Domains
4.2 Online OBE System Followed by Most of Higher Learning Institutions
4.3 Project Based Outcome Based Education
5 Discussion and Conclusions
References
Blockchain Enabled Internet of Things: Current Scenario and Open Challenges for Future
1 Introduction
2 Internet of Things - Background
3 Blockchain
4 Integration of IoT and Blockchain (BIoT)
5 Blockchain Enabled Internet of Things (BIoT) Architecture and Interactions
6 Challenges Faced in BIoT
7 Applications of BIoT
7.1 Future Scope and Directions
8 Conclusion
References
Fuzzy Investment Assessment Techniques: A State-of-the-Art Literature Review
1 Introduction
2 Fuzzy Sets and Their Extensions
3 Fuzzy Investment Assessment Techniques
4 Literature Review
4.1 Techniques Using Ordinary Fuzzy Sets
4.2 Techniques Using Extensions of Ordinary Fuzzy Sets
5 Conclusion
References
A Comparative Analysis of Classification Algorithms for Dementia Prediction
1 Introduction
2 Review of Related Literature
3 Machine Learning Based Approaches
3.1 Leading Approaches
3.2 Weka Classifiers [20]
4 Methodology
4.1 Source of Data
4.2 Algorithms
5 Results
6 Conclusion
References
Virtual Reality, Augmented Reality and Mixed Reality for Teaching and Learning in Higher Education
1 Introduction
2 Research Method
2.1 Data Sources and Search Strategy
2.2 Inclusion and Exclusion Criteria
3 Results
4 Discussion
4.1 Advantages of Virtual Technology to Higher Education Learning
4.2 Disadvantages of Virtual Technology to Higher Education Learning
4.3 Traditional Learning Approaches and Virtual Learning
4.4 Implementing Virtual Technology in Higher Education
5 Conclusion
5.1 Concluding Comments
5.2 Limitations and Future Research
References
Comparative Analysis of Filter Impact on Brain Volume Computation
1 Introduction
2 Literature Review
2.1 Representation of Magnetic Resonance Images
2.2 Analysis of MR Images
2.3 Volume Estimation and Analysis
3 Materials and Methods
3.1 Image Filters
3.2 Skull Stripping
3.3 Source of Data
3.4 Proposed Approach
4 Results
5 Conclusion
References
The Role of AI in Combating Fake News and Misinformation
1 Introduction
2 Research Hypothesis and Objective
2.1 Research Hypothesis
2.2 Objective
3 Methodology
3.1 Research Design
3.2 Measure Design
4 Algorithms
4.1 Naive Bayes Classifier
4.2 Support Vector Machine (SVM)
4.3 Neural Networks
4.4 Long Short Term Memory Networks (LSTMs)
4.5 Bi-directional Long Short Term Memory Networks(Bi-LSTMs)
4.6 Ensemble Learning Methods
4.7 Random Forest Algorithm
5 Results and Discussion
5.1 Comparing Performances of Models
5.2 Current Methods
5.3 Future Directions of Study
6 Conclusion
References
Rationalizing the TPACK Framework in Online Education: Perception of College Faculties Towards Aakash BYJU’S App in the ‘New Normal’
1 Introduction
2 Literature Review
3 Research Objective
4 Research Model and Hypothesis Formulation
5 Research Methodology
5.1 Data Analysis and Presentation
6 Results of the Study
7 Conclusion and Implications
8 Limitations and Future Scope
References
Teacher’s Attitudes Towards Improving Inter-professional Education and Innovative Technology at a Higher Institution: A Cross-Sectional Analysis
1 Introduction
2 Experimental Methods
2.1 Study Design
2.2 Study Setting and Population
2.3 Variable
2.4 Data Source and Management
2.5 Ethical Consideration
3 Result
3.1 Participants and Bio-demographic Features
3.2 Teachers Attitude About IPE
4 Discussion
5 Limitation
6 Conclusion
References
Augmented Analytics an Innovative Paradigm
1 Introduction
2 Augmented Analytics
3 Trends of Data Analytics
4 Augmented Analytics Benefits
5 Conclusions
References
A Deep Learning Approach to Monitoring Workers' Stress at Office
1 Introduction
2 Related Work
3 Methodology
4 Materials and Methods
4.1 Dataset
4.2 Pre-processing
4.3 Experiments
5 Evaluation
5.1 Comparison with Related Work
6 Conclusions
References
Implementing a Data Integration Infrastructure for Healthcare Data – A Case Study
1 Introduction
2 Related Work
3 Clinical Data Interoperability
4 The Data Pipeline
5 Conclusions
References
Automatic Calcium Detection in Echocardiography Based on Deep Learning: A Systematic Review
1 Introduction
2 State of the Art
2.1 Methods
2.2 Data Extraction
2.3 Results
2.4 Goals and Outcomes Analysis
3 Conclusion
References
AI-Based mHealth App for Covid-19 or Cardiac Diseases Diagnosis and Prognosis
1 Introduction
2 State of the Art
2.1 Search Strategy and Inclusion Criteria
2.2 Study Selection
2.3 Data Extraction and Synthesis
2.4 Results
3 AIMHealth App
3.1 Smartphone Application
3.2 Cloud Database and Features
4 Conclusions and Future Work
References
Mapping the Research in Orange Economy: A Bibliometric Analysis
1 Introduction
2 Background
3 Methodology
4 Results
4.1 Performance Analysis
4.2 Intellectual Structure Analysis
5 Conclusions
References
Wearable Temperature Sensor and Artificial Intelligence to Reduce Hospital Workload
1 Introduction
2 Literature Review
3 Methods and Materials
3.1 Data Collection
3.2 Data Analytics and Machine Learning
4 Conclusions and Future Scope
References
A Recommender System to Close Skill Gaps and Drive Organisations' Success
1 Introduction
2 State of the Art
2.1 Hybrid Recommender Systems
2.2 Related Work on Skill Matching
3 Proposed Conceptual Framework for the TRS
3.1 TRS in the Context of a Talent Marketplace
3.2 Research Methodology
4 Design and Implementation of TRS
5 Discussion
6 Conclusion
References
LEEC: An Improved Linear Energy Efficient Clustering Method for Sensor Network
1 Introduction
1.1 Sensor Network
1.2 Clustering
2 Proposed System
2.1 Proposed System Flow
2.2 Solution Strategy
2.3 Suggested Algorithm
3 Simulation
3.1 Simulation Scenario
4 Result Analysis
4.1 E to E Delay
4.2 Energy Remained
4.3 PDR
4.4 Routing Overhead
5 Conclusion
References
Hydrogen Production: Past, Present and What Will Be the Future?
1 Introduction
2 The European Climate Law
3 The Importance of Hydrogen in Meeting the European Union’s Climate Objectives
4 Hydrogen
4.1 Hydrogen Production
4.2 Fossil-Based Hydrogen
4.3 Low-Carbon Hydrogen
4.4 Hydrogen-Derived Synthetic Fuels
5 Conclusion
References
Implementation of the General Regulation on Data Protection – In the Intermunicipal Community of Alto Tâmega and Barroso, Portugal
1 Introduction
2 General Regulation on Data Protection
3 Research Methodology
4 Results
5 Conclusions
References
Implementing ML Techniques to Predict Mental Wellness Amongst Adolescents Considering EI Levels
1 Introduction
2 Background of the Study
3 Methodology
3.1 Dataset
3.2 Data Preprocessing
3.3 Hypothesis Testing for EI
3.4 EI and MW Prediction Models
3.5 Performance Measure of the ML Classifiers
3.6 Implementation
4 Results and Discussions
5 Conclusion and Future Work
References
Healthcare-Oriented Portuguese Sign Language Translator
1 Introduction
2 Context
2.1 Related Work
2.2 Technologies
3 Design
3.1 Translator Front End
3.2 Translator Back End
3.3 Practice Module
3.4 Analytics
4 Prototype
4.1 System Training
4.2 Translation
5 Evaluation
6 Discussion
7 Conclusion
7.1 Limitations and Future Work
References
Adding Blockchain and Smart Contracts to a Low-Code Development Platform
1 Introduction
2 Related Work
2.1 Blockchain
2.2 OutSystems
2.3 Electronic Voting
3 Discussion
4 Evaluation
5 Conclusion
5.1 Limitations
5.2 Final Remarks
References
Attempt to Model the Impact of Digitalization on the Economic Growth of Morocco
1 Introduction
2 Related Works
3 Methodology
4 Results and Discussion
4.1 Results
5 Conclusion and Future Works
References
The Drivers and Inhibitors of COVID-19 Vaccinations: A Descriptive Approach
1 Introduction
2 Review of Existing Literature
3 Methodology
3.1 Study Sampling
4 Result
5 Discussion
6 Conclusion
6.1 Managerial Implication
6.2 Study Limitation and Future Research
References
Denoising Fundus Images of Diabetic Retinopathy Using Natural Neighborhood Kriging
1 Introduction
2 Related Work
2.1 Voronoi Properties
2.2 Delaunay Triangulation
2.3 Natural Neighbors
2.4 Natural Neighbour Interpolation
3 Proposed Model
3.1 Natural Neighborhood Search
3.2 Application
4 Conclusion
References
Design of a Compact Low-Profile Ultra Wideband Antenna (UWB) for Biomedical Applications
1 Introduction
2 Related Work
2.1 Design and Geometry of Low Profile UWB Antenna
2.2 Simulation Fabricated Design Structures for Low-Profile UWB Antenna
3 Illustrations of Fabricated Antennas used in Experiments
4 Results
5 Conclusion
References
Thyroid Nodule Classification of Ultrasound Image by Convolutional Neural Network
1 Introduction
2 Related Work
3 RREMI-RF Method
3.1 Feature Extraction Using AlexNet
3.2 Feature Selection
3.3 Classification Using RF, KNN and DE
4 Results and Discussion
4.1 Performance Analysis of RREMI-RF Method with KNN and DE Methods
5 Conclusion
References
Influence of Cross Histology Transfer Learning on the Accuracy of Medical Diagnostics Systems
1 Introduction
2 Data Collection
3 Methodology
4 Results and Discussion
5 Conclusions
References
Author Index

Lecture Notes in Networks and Systems 649

Ajith Abraham · Anu Bajaj · Niketa Gandhi · Ana Maria Madureira · Cengiz Kahraman   Editors

Innovations in Bio-Inspired Computing and Applications Proceedings of the 13th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2022) Held During December 15–17, 2022

Lecture Notes in Networks and Systems, Volume 649

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Ajith Abraham · Anu Bajaj · Niketa Gandhi · Ana Maria Madureira · Cengiz Kahraman Editors

Innovations in Bio-Inspired Computing and Applications Proceedings of the 13th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2022) Held During December 15–17, 2022

Editors
Ajith Abraham, Faculty of Computing and Data Science, Flame University, Pune, Maharashtra, India; Scientific Network for Innovation and Research Excellence, Machine Intelligence Research Labs, Auburn, WA, USA
Anu Bajaj, Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
Niketa Gandhi, Scientific Network for Innovation and Research Excellence, Machine Intelligence Research Labs, Auburn, WA, USA
Ana Maria Madureira, Interdisciplinary Studies Research Center (ISRC), Institute of Engineering, Polytechnic of Porto (ISEP/P.PORTO), INOV (Institute for Systems and Computer Engineering, Technology and Science), Porto, Portugal
Cengiz Kahraman, Department of Industrial Engineering, Istanbul Technical University, Istanbul, Türkiye

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-27498-5 ISBN 978-3-031-27499-2 (eBook) https://doi.org/10.1007/978-3-031-27499-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Welcome to the 13th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2022) and the 12th World Congress on Information and Communication Technologies (WICT 2022), held online during December 15–17, 2022. The aim of IBICA is to provide a platform for world research leaders and practitioners to discuss the full spectrum of current theoretical developments, emerging technologies, and innovative applications of bio-inspired computing. Bio-inspired computing is currently one of the most exciting research areas, and it continuously demonstrates exceptional strength in solving complex real-life problems. WICT provides an opportunity for researchers from academia and industry to meet and discuss the latest solutions, scientific results, and methods in the usage and applications of ICT in the real world. Innovations in ICT allow us to transmit information quickly and widely, propelling the growth of new urban communities and linking distant places and diverse areas of endeavor in productive new ways that a decade ago were unimaginable. Thus, the theme of this World Congress is “Innovating ICT for Social Revolutions.”

IBICA–WICT 2022 brings together researchers, engineers, developers, and practitioners from academia and industry working in all interdisciplinary areas of intelligent systems, nature-inspired computing, big data analytics, and real-world applications to exchange and cross-fertilize their ideas. The themes of the contributions and scientific sessions range from theories to applications, reflecting a wide spectrum of the coverage of intelligent systems and computational intelligence areas. IBICA 2022 received submissions from 15 countries, and each paper was reviewed by at least five reviewers in a standard peer-review process; based on the recommendations of five independent referees, 54 papers were finally presented during the conference (an acceptance rate of 31%). WICT 2022 received submissions from 20 countries, with each paper likewise reviewed by at least five reviewers; 32 papers were finally presented (an acceptance rate of 34%).

Many people collaborated and worked hard to produce a successful IBICA–WICT 2022 conference. First, we would like to thank all the authors for submitting their papers to the conference and for their presentations and discussions during the conference. Our thanks go to the Program Committee members and reviewers, who carried out the most difficult work by carefully evaluating the submitted papers. Our special thanks to the following plenary speakers for their exciting talks:

• Kaisa Miettinen, University of Jyvaskyla, Finland
• Joanna Kolodziej, NASK-National Research Institute, Poland
• Katherine Malan, University of South Africa, South Africa
• Maki Sakamoto, The University of Electro-Communications, Japan
• Catarina Silva, University of Coimbra, Portugal
• Kaspar Riesen, University of Bern, Switzerland
• Mário Antunes, Polytechnic Institute of Leiria, Portugal
• Yifei Pu, College of Computer Science, Sichuan University, China
• Patrik Christen, FHNW, Institute for Information Systems, Olten, Switzerland
• Patricia Melin, Tijuana Institute of Technology, Mexico

Our special thanks also go to the Springer publication team for their wonderful support in publishing these proceedings. We express our sincere thanks to the session chairs and organizing committee chairs for helping us formulate a rich technical program. Enjoy reading the articles!

Ajith Abraham
Ana Maria Madureira
Cengiz Kahraman
Oscar Castillo
General Chairs

Nuno Bettencourt
Selcuk Cebi
Agostino Forestiero
Program Chairs

IBICA–WICT 2022 Conference Organization

General Chairs
Ajith Abraham, Machine Intelligence Research Labs, USA
Ana Maria Madureira, Institute of Engineering, Polytechnic of Porto, Portugal
Cengiz Kahraman, Istanbul Technical University, Turkey
Oscar Castillo, Tijuana Institute of Technology, Mexico

Program Chairs
Nuno Bettencourt, Institute of Engineering, Polytechnic of Porto, Portugal
Selcuk Cebi, Yildiz Technical University, Turkey
Agostino Forestiero, National Research Council of Italy, Italy

Publication Chairs
Niketa Gandhi, Machine Intelligence Research Labs, USA
Kun Ma, University of Jinan, China

Special Session Chairs
Gabriella Casalino, University of Bari, Italy
André Santos, Institute of Engineering, Polytechnic of Porto, Portugal

Publicity Chairs
Pooja Manghirmalani Mishra, University of Mumbai, India
Anu Bajaj, Machine Intelligence Research Labs, USA
Ivo Pereira, University Fernando Pessoa, Portugal


Publicity Team
Peeyush Singhal, SIT Pune, India
Aswathy Su, Jyothi Engineering College, India
Shreya Biswas, Jadavpur University, India

International Program Committee
Agostino Forestiero, CNR-ICAR, Italy
Ana Madureira, Institute of Engineering, Polytechnic of Porto, Portugal
Andre Santos, Institute of Engineering, Polytechnic Institute of Porto, Portugal
Antonello Florio, Politecnico di Bari, Italy
Anu Bajaj, Thapar Institute of Engineering and Technology, India
Anurag Rana, Shoolini University, Himachal Pradesh, India
Deepti Chaudhary, Kurukshetra University, India
Elizabeth Goldbarg, Federal University of Rio Grande do Norte, Brazil
Gabriella Casalino, University of Bari, Italy
Gautami Tripathi, Jamia Hamdard, India
Gianluca Zaza, University of Bari “Aldo Moro,” Italy
Isabel S. Jesus, Institute of Engineering of Porto, Portugal
Ivo Pereira, University Fernando Pessoa, Portugal
János Botzheim, Eötvös Loránd University, Hungary
Joao Bone, Instituto Universitário de Lisboa, Portugal
João Ferreira, Instituto Universitário de Lisboa, Portugal
João Teixeira Pinto, ISCTE-IUL, Portugal
José Everardo Bessa Maia, State University of Ceará, Brazil
Kingsley Okoye, Tecnologico de Monterrey, Mexico
Laura Verde, Universitá Della Campania “Luigi Vanvitelli” Caserta, Italy
Lee Chang-Yong, Kongju National University, South Korea
Maria Micaela Gonçalves Pinto Dinis Esteves, Polytechnic Institute of Leiria, Portugal
Mariella Farella, Institute for Educational Technology-National Research Council, Italy
Mário Antunes, University of Porto, Portugal
Michele Mastroianni, University of Salerno, Italy
Miguel Pratas Ferreira Mira, Instituto Universitário de Lisboa (ISCTE-IUL), Portugal
Murilo Oliveira Machado, Federal University of Mato Grosso do Sul (UFMS), Brazil
Najib Ben Aoun, AL-BAHA University, Saudi Arabia
Oscar Castillo, Tijuana Institute of Technology, Mexico
Pasquale Cantiello, Istituto Nazionale di Geofisica e Vulcanologia, Italy
Pedro Miguel Cardoso Gago, Portugal
Pooja Manghirmalani Mishra, Machine Intelligence Research Labs, India
Radu-Emil Precup, Politehnica University of Timisoara, Romania
Rajashree Shettar, RV College of Engineering, India
Reeta Devi, Kurukshetra University, India
Sandeep Verma, IIT Kharagpur, India
Shankru Guggari, B M S. College of Engineering, India
Sidemar Fideles Cezario, Federal University of Rio Grande do Norte, Brazil
Sindhu P. M., Nagindas Khandwala College, India
Vijayakumar Kadappa, B M S. College of Engineering, India

Contents

Bioinspired Computing and Applications

Bayesian Consideration for Intention to Purchase Safe Vegetables: Evidence from Vietnam (p. 3)
Bui Huy Khoi

Evolution of Configuration Data in CGP Format Using Parallel GA on Embryonic Fabric (p. 16)
Gayatri Malhotra, Punithavathi Duraiswamy, and J. K. Kishore

Cross Synergetic Mobilenet-VGG16 for UML Multiclass Diagrams Classification (p. 24)
Nesrine Bnouni Rhim, Salim Cheballah, and Mouna Ben Mabrouk

Solar Irradiation and Wind Speed Forecasting Based on Regression Machine Learning Models (p. 31)
Yahia Amoura, Santiago Torres, José Lima, and Ana I. Pereira

Transfer Learning Based Pediatric Pneumonia Diagnosis Using Residual Attention Learning (p. 52)
Arun Prakash Jayakanthan, S. Shiva Rupan, V. Sowmya, Moez Krichen, and Vinayakumar Ravi

Fuzzy Systems in Bio-inspired Computing: State-of-the-Art Literature Review (p. 62)
Cengiz Kahraman, Basar Oztaysi, Sezi Cevik Onar, and Selcuk Cebi

Anomaly Detection Framework (p. 75)
Nazgul Seralina and Assel Akzhalova

Speech Emotion Recognition Using CNN-LSTM and Vision Transformer (p. 86)
C S Ayush Kumar, Advaith Das Maharana, Srinath Murali Krishnan, Sannidhi Sri Sai Hanuma, G. Jyothish Lal, and Vinayakumar Ravi

Investigating Digital Addiction in the Context of Machine Learning Based System Design (p. 98)
Geetika Johar and Ravindra Patel

Multiobjective Optimization of Airline Crew Management with a Genetic Algorithm (p. 109)
Alfredo Crego, Thomas Hanne, and Rolf Dornberger

A Novel Approach to the Two-Dimensional Cargo Load Problem (p. 120)
Francisco Mateus, André S. Santos, Marlene F. Brito, and Ana M. Madureira

Vehicle Detection from Aerial Imagery Using Principal Component Analysis and Deep Learning (p. 129)
C. S. Ayush Kumar, Advaith Das Maharana, Srinath Murali Krishnan, Sannidhi Sri Sai Hanuma, V. Sowmya, and Vinayakumar Ravi

Bio-inspired Heterogeneity in Swarm Robots (p. 141)
Hideyasu Sasaki

Software Defect Prediction Using Cellular Automata as an Ensemble Strategy to Combine Classification Techniques (p. 146)
Flávio M. Tavares and Eduardo F. Franco

A Systematic Literature Review on Home Health Care Management (p. 155)
Filipe Alves, Ana Maria A. C. Rocha, Ana I. Pereira, and Paulo Leitão

The Impact of the Size of the Partition in the Performance of Bat Algorithm (p. 165)
Bruno Sousa, André S. Santos, and Ana M. Madureira

Automatic Diagnosis Framework for Catheters and Tubes Semantic Segmentation and Placement Errors Detection (p. 176)
Abdelfettah Elaanba, Mohammed Ridouani, and Larbi Hassouni

How Artificial Intelligence Can Revolutionize Software Testing Techniques (p. 189)
Moez Krichen

E-Assessment in Medical Education: From Paper to Platform (p. 199)
Nokukhanya Thembane

DeepPRS: A Deep Learning Integrated Pattern Recognition Methodology for Secure Data in Cloud Environment (p. 210)
K. R. Remesh Babu, S. Saritha, K. G. Preetha, Sangeetha Unnikrishnan, and Sminu Izudheen

Automated Depression Diagnosis in MDD (Major Depressive Disorder) Patients Using EEG Signal (p. 220)
Sweety Singh, Poonam Sheoran, and Manoj Duhan


An Effective Deep Learning Classification of Diabetes Based Eye Disease Grades: A Retinal Analysis Approach (p. 234)
F. Ajesh, Anupama Jims, Bosco Paul Alapatt, and Felix M. Philip

Extracting and Analyzing Terms with the Component ‘Green’ in the Bulgarian Language: A Big Data Approach (p. 245)
Velislava Stoykova

Apartments Waste Disposal Location Evaluation Using TOPSIS and Fuzzy TOPSIS Methods (p. 255)
S. M. Vadivel, V Sakthivel, L Praveena, and V Chandana

Detection of Cracks in Building Facades Using Infrared Thermography (p. 264)
Tiago Fonseca and Joao C. Ferreira

Optimizing Pre-processing for Foetal Cardiac Ultra Sound Image Classification (p. 273)
M. O. Divya and M. S. Vijaya

A Review on Dimensionality Reduction for Machine Learning (p. 287)
Duarte Coelho, Ana Madureira, Ivo Pereira, and Ramiro Gonçalves

Detecting Depression on Social Platforms Using Machine Learning (p. 297)
Muhammad Ishtiaq, Kainat Bibi, Mehmoon Anwar, Rashid Amin, and Rahul Nijhawan

Impact of Green Hydrogen Production on Energy Pricing (p. 307)
Judite Ferreira and José Boaventura

The Future in Fishfarms: An Ocean of Technologies to Explore (p. 318)
Ana Rita Pires, Joao C. Ferreira, and Øystein Klakegg

Tuned Long Short-Term Memory Model for Ethereum Price Forecasting Through an Arithmetic Optimization Algorithm (p. 327)
Marko Stankovic, Luka Jovanovic, Nebojsa Bacanin, Miodrag Zivkovic, Milos Antonijevic, and Petar Bisevac

Breast Cancer Identification Using Improved DarkNet53 Model (p. 338)
Noor Ul Huda Shah, Rabbia Mahum, Dur e Maknoon Nisar, Noor Ul Aman, and Tabinda Azim

Comprehensive and Systematic Review of Various Feature Extraction Techniques for Vernacular Languages (p. 350)
Payal Goel and Shweta Bansal


Parallel Ant Colony Optimization for Scheduling Independent Tasks (p. 363)
Robert Dietze and Maximilian Kränert

A Review on Artificial Intelligence Applications for Multiple Sclerosis Evaluation and Diagnosis (p. 373)
Bruno Cunha, Ana Madureira, and Lucas Gonçalves

KIASOntoRec: A Knowledge Infused Approach for Socially Aware Ontology Recommendation (p. 382)
Aastha Valecha, Gerard Deepak, and Deepak Surya

Policy-Based Code Slicing of Database Application Using Semantic Rule-Based Approach (p. 392)
Anwesha Kashyap and Angshuman Jana

Analysing and Modeling Customer Success in Digital Marketing (p. 404)
Inês César, Ivo Pereira, Ana Madureira, Duarte Coelho, Miguel Ângelo Rebelo, and Daniel Alves de Oliveira

DRHTG: A Knowledge-Centric Approach for Document Retrieval Based on Heterogeneous Entity Tree Generation and RDF Mapping (p. 414)
M. Arulmozhi Varman and Gerard Deepak

Bi-CSem: A Semantically Inclined Bi-Classification Framework for Web Service Recommendation (p. 425)
Deepak Surya, S. Palvannan, and Gerard Deepak

HybRDFSciRec: Hybridized Scientific Document Recommendation Framework (p. 439)
Divyanshu Singh and Gerard Deepak

A Collision Avoidance Method for Autonomous Underwater Vehicles Based on Long Short-Term Memories (p. 448)
László Antal, Martin Aubard, Erika Ábrahám, Ana Madureira, Luís Madureira, Maria Costa, José Pinto, and Renato Campos

Precision Mango Farming: Using Compact Convolutional Transformer for Disease Detection (p. 458)
M. Shereesha, C. Hemavathy, Hasthi Teja, G. Madhusudhan Reddy, Bura Vijay Kumar, and Gurram Sunitha

An Efficient Machine Learning Model for Bitcoin Price Prediction (p. 466)
Habeeba Tabassum Shaik, B. Sunil Kumar, and Bhasha Pydala


Ensemble Based Cyber Threat Analysis for Supply Chain Management (p. 476)
P. Penchalaiah, P. Harini Sri Teja, and Bhasha Pydala

Classification Model for Identification of Internet Loan Frauds Using PCA with Ensemble Method (p. 486)
A. Madhaveelatha, K. M. Varaprasad, and Bhasha Pydala

Comparative Analysis of Learning Models in Depression Detection Using MRI Image Data (p. 496)
S. Mano Venkat, C. Rajendra, and K. Venu Madhav

Product Safety and Privacy Using Internet of Things Design and Moji (p. 504)
Suresh Kallam, Ch. Madhu Babu, B. Prathima, C. Lakshmi Charitha, and K. Reddy Madhavi

Optimization of the Performance and Emissions of a Dual-Fuel Diesel Engine Using LPG as the Fuel (p. 510)
Hariprasad Tarigonda, R. Meenakshi Reddy, B. Anjaneyulu, G. Dharmalingam, D. Raghu rami Reddy, and K. L. Narasimhamu

Hostel Out-Pass Implementation Using Multi Factor Authentication (p. 532)
Naresh Tangudu, Nagaraju Rayapati, Y. Ramesh, Panduranga Vital, K. Kavitha, and G. V. L. Narayana

Federated Learning and Adaptive Privacy Preserving in Healthcare (p. 543)
K. Reddy Madhavi, Vineela Krishna Suri, V. Mahalakshmi, R. Obulakonda Reddy, and C. Sateesh kumar Reddy

ASocTweetPred: Mining and Prediction of Anti-social and Abusive Tweets for Anti-social Behavior Detection Using Selective Preferential Learning (p. 552)
E. Bhaveeasheshwar, Gerard Deepak, and C. Mala

WCMIVR: A Web 3.0 Compliant Machine Intelligence Driven Scheme for Video Recommendation (p. 563)
Beulah Divya Kannan and Gerard Deepak

RMSLRS: Real-Time Multi-terminal Sign Language Recognition System (p. 575)
Yilin Zhao, Biao Zhang, and Kun Ma

Advances in Information and Communication Technologies

Modelling and Simulation of the Dump-Truck Problem Using MATLAB Simulink (p. 589)
Ibidun C. Obagbuwa, Bam Stefany, and Moroka Dineo Tiffany


Modeling and Simulation of a Robot Arm with Conveyor Belt Using Matlab Simulink Model (p. 600)
Ibidun Christiana Obagbuwa and Kutlo Baldwin Mogorosi

Bayesian Model Selection for Trust in eWOM (p. 610)
Bui Huy Khoi

Review of Challenges and Best Practices for Outcome Based Education: An Exploratory Outlook on Main Contributions and Research Topics (p. 621)
Shankru Guggari, Kingsley Okoye, and Ajith Abraham

Blockchain Enabled Internet of Things: Current Scenario and Open Challenges for Future (p. 640)
Sanskar Srivastava, Anshu, Rohit Bansal, Gulshan Soni, and Amit Kumar Tyagi

Fuzzy Investment Assessment Techniques: A State-of-the-Art Literature Review (p. 649)
Cengiz Kahraman, Basar Oztaysi, Sezi Çevik Onar, and Selcuk Cebi

A Comparative Analysis of Classification Algorithms for Dementia Prediction (p. 658)
Prashasti Kanikar, Manoj Sankhe, and Deepak Patkar

Virtual Reality, Augmented Reality and Mixed Reality for Teaching and Learning in Higher Education (p. 669)
Anne Lin and Tendani Mawela

Comparative Analysis of Filter Impact on Brain Volume Computation (p. 680)
Prashasti Kanikar, Manoj Sankhe, and Deepak Patkar

The Role of AI in Combating Fake News and Misinformation (p. 690)
Virendra Singh Nirban, Tanu Shukla, Partha Sarathi Purkayastha, Nachiket Kotalwar, and Labeeb Ahsan

Rationalizing the TPACK Framework in Online Education: Perception of College Faculties Towards Aakash BYJU’S App in the ‘New Normal’ (p. 702)
Samuel S. Mitra, Peter Arockiam A. SJ, Milton Costa SJ, Aparajita Hembrom, and Payal Sharma


Teacher’s Attitudes Towards Improving Inter-professional Education and Innovative Technology at a Higher Institution: A Cross-Sectional Analysis (p. 713)
Samuel-Soma M. Ajibade, Cresencio Mejarito, Dindo M. Chin, Johnry P. Dayupay, Nathaniel G. Gido, Almighty C. Tabuena, Sushovan Chaudhury, and Mbiatke Anthony Bassey

Augmented Analytics an Innovative Paradigm (p. 725)
Teresa Guarda and Isabel Lopes

A Deep Learning Approach to Monitoring Workers’ Stress at Office (p. 734)
Fátima Rodrigues and Jacqueline Marchetti

Implementing a Data Integration Infrastructure for Healthcare Data – A Case Study (p. 744)
Bruno Oliveira, Miguel Mira, Stephanie Monteiro, Luís B. Elvas, Luís Brás Rosário, and João C. Ferreira

Automatic Calcium Detection in Echocardiography Based on Deep Learning: A Systematic Review (p. 754)
Sara Gomes, Luís B. Elvas, João C. Ferreira, and Tomás Brandão

AI-Based mHealth App for Covid-19 or Cardiac Diseases Diagnosis and Prognosis (p. 765)
Ana Vieira, Luís B. Elvas, João C. Ferreira, Matilde Cascalho, Afonso Raposo, Miguel Sales Dias, Luís Brás Rosário, and Hugo Plácido da Silva

Mapping the Research in Orange Economy: A Bibliometric Analysis (p. 778)
Homero Rodriguez-Insuasti, Marcelo Leon, Néstor Montalván-Burbano, and Katherine Parrales-Guerrero

Wearable Temperature Sensor and Artificial Intelligence to Reduce Hospital Workload (p. 796)
Luís B. Elvas, Filipe Martins, Maria Brites, Ana Matias, Hugo Plácido Silva, Nuno Gonçalves, João C. Ferreira, and Luís Brás Rosário

A Recommender System to Close Skill Gaps and Drive Organisations’ Success (p. 806)
E. Luciano Zickler, Susana Nicola, and Nuno Bettencourt

LEEC: An Improved Linear Energy Efficient Clustering Method for Sensor Network (p. 816)
Virendra Dani, Radha Bhonde, and Ayesha Mandloi


Hydrogen Production: Past, Present and What Will Be the Future? . . . . 826
Judite Ferreira, Pedro Pereira, and José Boaventura

Implementation of the General Regulation on Data Protection – In the Intermunicipal Community of Alto Tâmega and Barroso, Portugal . . . . 836
Pascoal Padrão and Isabel Lopes

Implementing ML Techniques to Predict Mental Wellness Amongst Adolescents Considering EI Levels . . . . 845
Pooja Manghirmalani Mishra and Rabiya Saboowala

Healthcare-Oriented Portuguese Sign Language Translator . . . . 858
Gustavo Ferreira and Nuno Bettencourt

Adding Blockchain and Smart Contracts to a Low-Code Development Platform . . . . 868
Ana Barbosa, Nuno Dâmaso, Diogo Pacheco, Susana Nicola, and Nuno Bettencourt

Attempt to Model the Impact of Digitalization on the Economic Growth of Morocco . . . . 878
Rhaya Fikry and El Ouazzani Yahia

The Drivers and Inhibitors of COVID-19 Vaccinations: A Descriptive Approach . . . . 885
Sunday Adewale Olaleye, Oluwafemi Samson Balogun, Frank Adusei-Mensah, Richard Osei Agjei, and Toluwalase Janet Akingbagde

Denoising Fundus Images of Diabetic Retinopathy Using Natural Neighborhood Kriging . . . . 893
Kuraku Nirmala and K. Saruladha

Design of a Compact Low-Profile Ultra Wideband Antenna (UWB) for Biomedical Applications . . . . 904
B. Rama Rao, K. S. Chakradhar, M. Satyanarayana, A. V. Sriharsha, and D. Nataraj

Thyroid Nodule Classification of Ultrasound Image by Convolutional Neural Network . . . . 915
Arunkumar Beyyala, R. Priya, Subramani Roy Choudary, and R. Bhavani


Influence of Cross Histology Transfer Learning on the Accuracy of Medical Diagnostics Systems . . . . 926
Alexander Mongolin, Sergey Khomeriki, Nikolay Karnaukhov, Konstantin Abramov, Roman Vorobev, Yuri Gorbachev, Anastasia Zabruntseva, and Alexey Kornaev

Author Index . . . . 933

Bioinspired Computing and Applications

Bayesian Consideration for Intention to Purchase Safe Vegetables: Evidence from Vietnam

Bui Huy Khoi

Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam
[email protected]

Abstract. The paper uses the Bayesian Model Average for the intention to buy safe vegetables in Vietnam. It is used to examine a research model comprising five factors: health care, product quality, product price, supermarket trust, and brand credibility. A sample of 277 responses was collected for the analysis. The results show that product quality influences the intention to buy safe vegetables. From the research results, the author offers some implications to help managers improve consumers’ intention to purchase safe vegetables in Ho Chi Minh City, Vietnam. Previous studies relied on linear regression; this study instead uses Bayesian consideration to find the optimal model.

Keywords: Bayesian consideration · Health care · Product quality · Product price · Brand trust · Brand credibility

1 Introduction

Many changes in consumer habits and behaviors, as well as the emergence of new trends, have been caused by the Covid-19 epidemic. Customers have consciously purchased goods that are safe for their families’ health and well-being. Notably, for four consecutive quarters, health has overtaken all other concerns in Vietnam. Nearly 50% of Vietnamese consumers in the first quarter of 2020 ranked health as their top concern, among the most developed nations in the world. As a result, customers are searching for goods that contain supplementary nutrients such as vitamin C, vitamin D, omega-3 fatty acids, or probiotics and are produced to the highest safety and quality standards. For that reason, the phrase “safe vegetables” appears more and more among agricultural products for consumers. This work builds on previous studies on safe vegetables and related organic foods, such as research on the factors affecting the intention to consume organic food in the UK [1–4]. Although there have been many studies on safe vegetables, with the current outbreak of the Covid-19 epidemic this issue is of more concern than ever. Unlike previous studies, which did not consider a specific purchase channel for safe vegetables, this study focuses on the MM Mega Market shopping channel. Vegetables are an indispensable food in a human’s daily diet. Food hygiene and safety to protect people’s health is an increasingly pressing issue, and the


demand for green vegetables meeting safety standards is increasing, especially in big cities like Ho Chi Minh City. According to statistics from the Department of Agriculture and Rural Development of Ho Chi Minh City in 2020, the demand for safe vegetables is about 500 tons per day, of which leafy vegetables account for 55%, fruit vegetables for 35%, and spice vegetables for 10%. According to the competent authorities, up to 80% of vegetables on the market do not meet food safety standards. In Vietnam, every year about 150,000 people are newly diagnosed with cancer linked to consuming unsafe food, including unsafe vegetables; some 250,000 people suffer from cancer; and up to 75,000 people die from exposure to contaminated food. Faced with that situation, consumers increasingly consider where to buy safe vegetables so as to be sure of their origin. If in the past the traditional market was the first choice of consumers, now the supermarket channel has partly attracted customers because of the many benefits consumers receive from this distribution channel. According to the General Statistics Office of Vietnam, the number of supermarkets in Ho Chi Minh City in 2019 was about 260, an increase of nearly 50% compared to 2014. Consumers look to supermarkets as a safe and convenient source of goods (General Statistics Office, 2019). MM Mega Market Vietnam Company, a member of Thailand’s BJC/TCC group, inaugurated its first modern retail and wholesale center in 2002 in Ho Chi Minh City. After more than 20 years of operation and development, in addition to providing essential items for consumers, MM Mega Market is also a reputable supplier of safe vegetables in large quantities, meeting the needs of safe vegetable consumers. MM Mega Market should therefore maintain the trust of current customers while seeking to attract more customers in the future. At the same time, the business is well aware of the risks from competitors such as Lotte Mart, Aeon Mall, Vin Mart, and Big C… Therefore, the paper uses the Bayesian Model Average for the intention to buy safe vegetables: a case study of MM Mega Market, Ho Chi Minh City (HCMC), Vietnam.

2 Literature Review

2.1 Safe Vegetable Purchasing Intention (SVPI)

According to Acheampong [9], safe vegetables, also known as clean vegetables, is a common term in Vietnam that refers to fresh vegetable products that ensure food safety and hygiene when used as food, with qualities such as the breed characteristics, the content of toxic chemicals, and the level of contamination with harmful organisms below the allowable standards, ensuring safety for users and the environment. Regarding purchase intention, Vidhyakala and Santhi [10] argued that, during the evaluation phase of the purchase option, consumers rate different brands and form purchase intentions. In general, consumers decide to buy products from their favorite brands. However, two factors can prevent buying intention from turning into buying behavior: the attitudes of people around and unexpected situations. Consumers can form buying intentions based on factors such as expected income, expected selling price, and expected product features [11].


Purchase intention is described as a customer’s willingness to purchase a product [12], and this is the concept adopted in this study. The sales of a business can be surveyed based on customers’ purchase intention. Predicting purchase intention is the first step toward predicting actual customer buying behavior [13]. In addition, based on many theories, purchase intention is seen as the basis for predicting future demand [14]. The ability and willingness of the individual to choose safer food over ordinary food in buying considerations is the aim of selecting safe food [15, 16]. The willingness or attempt of a consumer to buy safer vegetables on their next shopping trip is referred to as “safe vegetable consumption intentions”. This desire is likely to be influenced by many variables, including perceived variables, subjective norms, the ability to manage habits, predicted income, product pricing, and the level of communication and exchange of consumer information. The following criteria were used to evaluate the factor “intentions of safe vegetable purchases”: encouraging others to purchase safe vegetables, continuing to purchase safe vegetables, and spending more money on safe vegetables [15].

2.2 Health Care (HC)

Health is defined as a state of mental and physical well-being, not merely the absence of disease [17]. This interest is concerned with the human psychological system. Health-conscious consumers are consumers who know their health status and care about their health benefits. They are willing to act to maintain good health and to improve their health and quality of life. In this day and age, when living standards have improved, health issues also receive more attention. Many factors affect human health, such as diseases and internal and external factors like air and food. Many research studies investigating the relationship between health and safe food have been carried out. They showed that health is the main factor motivating customers to buy safe food [18]. Many consumers see health protection as an incentive to buy safe food. Personal experience with illness and interest in healthy eating contribute to product consumption trends [19]. Research by Nandi et al. [4] also suggests that health concerns have a positive impact on the intention to buy safe vegetables.

Hypothesis H1: Health care has a positive effect on consumers’ intention to buy safe vegetables.

2.3 Brand Trust (BT)

According to Kotler [20], a brand is a name, term, sign, arrangement, or all of those features combined, used to distinguish the products/services of a business from those of competitors. Brand trust is derived from experience and interactions [21], because its development is often formed through the accumulation of personal experience over time. Lassar et al. [22] argue that “a brand that is highly valued by customers will create a competitive advantage because customers’ trust in that brand will be higher than that of competitors’ brands”. The brand plays an important role in the decision to buy certain brands of goods and services instead of those of other brands [23]. The brand plays a very important role in a business: first, a strong brand gives the business not only the image of its products and of the business itself but also has an important meaning in


creating prestige for products and promoting the consumption of goods, and is a sharp weapon in business. Second, with a strong brand, consumers will have confidence in the product of the business, will feel secure and proud to use the product, and will be loyal to it, which stabilizes the current customer base. Consumers’ trust in a brand is a matter of experience, which is influenced by their judgment when they come into direct or indirect contact with the brand [24]. During those encounters, the consumer’s experience plays an important role in their trust in the brand of safe vegetables, because it creates their association, consideration, and inference about the brand on a more certain, more realistic basis [25]. Previous studies have also shown that brand trust has a significant influence on consumers’ purchase intention.

Hypothesis H2: Brand trust has a positive effect on the intention to buy safe vegetables.

2.4 Product Price (PP)

Amron’s research [26] suggests that consumers expect the price to be commensurate with the quality of the product when they make a purchase. In a study by Saleki et al. [27] on the main factors influencing the buying behavior of organic products in Malaysia, they pointed out price as one of the factors of perceived behavioral control, for its ability to limit consumer purchases; they also stated that many consumers place orders mainly based on price [27]. Another study [28] also shows that price is the most important factor driving demand, so it has a significant influence on consumer behavior. The study by Al-Gahaifi and Světlík [28] showed that price is an obstacle for consumers buying organic products: high prices reduce purchasing ability for these products, especially for low-income consumers, and make consumers feel that they will not be able to buy organic food products, which makes them feel uncomfortable about, or find it difficult, to decide to purchase the product [28]. Therefore, we can assume that price influences a consumer’s decision to buy a product. There is also a study by Zhang et al. [2] showing that price has a strong impact on consumers’ purchase intention. Therefore, price plays a decisive role in consumer purchasing. From the above arguments, we have hypothesis H3:

Hypothesis H3: Product price has a positive impact on consumers’ intention to buy safe vegetables.

2.5 Product Quality (PQ)

The quality of a product is the degree to which it meets the expectations of current or future customers [29]. Therefore, product quality is defined in terms of product attributes and buyers’ responses to those attributes. Managers need to know how customers perceive the quality of the products offered by their company [30]. Based on this knowledge, marketers provide their customers with what they anticipate customers need [31]. Product quality is the most important factor in choosing each brand, especially in a market environment with intense competition and price competition [32]. However, it is difficult to meet customers’ expectations for quality because their understandings are varied and inconsistent. Differences in views on quality are related to economic, technological, social, and cultural conditions [33].


According to Sibly [34], consumers use quality criteria to learn about product features and form beliefs by inference; in fact, product choices can also be directly affected by other methods. Quality is the factor consumers care about most in a product [35]. Product quality is considered the key first factor in consumers’ purchasing decisions. Even if a company invests heavily in advertising and marketing campaigns, without attaching importance to product quality a product that does not meet the needs of consumers, or that is of poor quality, will not achieve sales. Therefore, to survive in the long term and stand firm in today’s increasingly fierce competitive market, businesses must first invest in their products. Only quality products that meet the requirements of consumers and are in line with trends can help customers remember and mention them when they need to buy. Consumers choosing any type of product will put their first concern on the origin of that product. The quality of products, as well as reputable brands, is also prioritized when choosing. Products sold in a place where consumers feel secure when shopping will be prioritized for selection. The study of Shaharudin et al. [36] has shown a positive relationship between product quality and purchase intention. From the above arguments, we have hypothesis H4:

Hypothesis H4: Product quality has a positive impact on consumers’ intention to buy safe vegetables.

2.6 Supermarket Trust (ST)

Moorman et al. [37] define trust as a “willingness to depend on partners they trust”, that is, a belief, confidence, or expectation about the work results of a reliable partner. Faith is a feeling of certainty about something. Trust in the supermarket is a feeling of certainty about what the supermarket promises, which is what the supermarket displays [38]. For vegetables, it can be the belief that supermarkets sell clean, safe vegetables and properly list the origin of the product [39]. One of the important reasons that motivate consumers to choose a supermarket for their shopping is that they believe in the quality of the goods sold there [40]. Therefore, we have hypothesis H5:

Hypothesis H5: Supermarket trust has a positive effect on consumers’ intention to buy safe vegetables.

3 Methodology

3.1 Sample

Because the research topic is in the field of vegetables and, according to some previous studies, most of the people who intend to buy safe vegetables are female, the survey sample was taken with a higher proportion of women. The sample statistics show that, of the 277 subjects surveyed, 188 were female, accounting for 67.9% of the sample, and 89 were male, accounting for 32.1%. In terms of age structure, among the 277 survey subjects the number of people under 18 years old is 37 (13.4%), and the number of people


aged 18 to 30 years old is the largest, with 123 people (44.4%). The age group from 31 to 60 years old has the second largest number (98 people, 35.4%). It can be said that these two age groups are mainly responsible for buying vegetables for themselves and their families. The age group above 60 years accounts for 6.9% (19 people). Table 1 describes the statistics of the sample characteristics.

Table 1. Statistics of Sample Characteristics

Characteristic   Group                  Amount   Percent (%)
Sex              Male                   89       32.1
                 Female                 188      67.9
Age              Below 18               37       13.4
                 18–30                  123      44.4
                 31–60                  98       35.4
                 Above 60               19       6.9
Career           Student                36       13.0
                 Office Staff           39       14.1
                 Worker                 49       17.7
                 Housewife              71       25.6
                 Lecturer/Teacher       34       12.3
                 Officials              24       8.7
                 Freelance              14       5.1
                 Other                  10       3.6
Income/Month     Below 5 million VND    71       25.6
                 5–10 million VND       86       31.0
                 11–15 million VND      58       20.9
                 16–20 million VND      36       13.0
                 Over 20 million VND    26       9.4

Regarding income, the survey of the 277 subjects obtained the following results: 71 people (25.6%) have an income below 5 million VND; 86 people (31.0%) from 5 to 10 million VND; 58 people (20.9%) from 11 to 15 million VND; 36 people (13.0%) from 16 to 20 million VND; and 26 people (9.4%) over 20 million VND. The occupational breakdown is as follows: students, 36 people (13.0%); office staff, 39 people (14.1%); workers, 49 people (17.7%); housewives, 71 people (25.6%); lecturers/teachers, 34 people (12.3%); officials, 24 people (8.7%); freelancers, 14 people (5.1%); and others, 10 people (3.6%).


3.2 Reliability Test and Bayesian Consideration

A Cronbach’s Alpha coefficient of 0.6 or more is acceptable [41–43] in case the concept being studied is new or unfamiliar to respondents in the research context. However, according to [44], Cronbach’s Alpha (α) does not show which variables should be discarded and which should be kept. Therefore, besides Cronbach’s Alpha, the Corrected Item-Total Correlation (CITC) is also used, and variables with a CITC greater than 0.3 are kept. The BIC (Bayesian Information Criterion) was used to choose the best model, using R software. BIC has been used in theoretical contexts for model selection. BIC can be applied to a regression model estimating one or more dependent variables from one or more independent variables [45]. BIC is an essential and useful measure for deciding on a complete and straightforward model. Based on the BIC information criterion, the model with the lower BIC is selected; the search stops when the BIC value reaches its minimum [8, 45, 46].
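To make the two reliability checks concrete, the following Python sketch computes Cronbach’s Alpha and the corrected item-total correlations for a respondents-by-items score matrix; the 0.6 and 0.3 thresholds follow the text, while the data and variable names are only illustrative.

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert scores
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    # CITC: correlation of each item with the sum of the remaining items
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

scores = np.random.randint(1, 6, size=(277, 5)).astype(float)  # illustrative data
alpha = cronbach_alpha(scores)       # keep the scale if alpha >= 0.6
citc = corrected_item_total(scores)  # drop items with CITC < 0.3 (e.g., HC5, PP4)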

4 Results

4.1 Reliability Test

Factors and items are shown in Table 2. A Cronbach’s Alpha greater than 0.6 and a Corrected Item-Total Correlation (CITC) higher than 0.3 indicate sufficient reliability for further analysis; items with a CITC lower than 0.3, namely HC5 and PP4, are not reliable. There are some new items in Table 2. The item means range from 2.7473 to 3.5343, which is suitable for the research data.

Table 2. Reliability (each factor is shown with its Cronbach’s α; items are listed with code and mean, followed by the CITC values reported for that factor)

HC (α = 0.706)
  HC1  I am a person who is very health conscious (mean 3.2960)
  HC2  I eat a lot of safe vegetables to help improve my health (mean 2.7473)
  HC3  I choose my food carefully to ensure good health (mean 3.3971)
  HC4  With the outbreak of the Covid-19 epidemic, essential products to protect health will be priorities in today’s life (mean 2.7906)
  HC5  I may sacrifice some hobbies to protect my health because I think health is precious (mean 3.0397)
  CITC: 0.438, 0.597, 0.508, 0.299

BT (α = 0.746)
  BT1  I am a person who is very health conscious (mean 3.5343)
  BT2  I easily recognize the brand MM Mega Market (mean 3.4079)
  BT3  I always prefer to buy safe vegetables at MM Mega Market instead of elsewhere (mean 3.0866)
  BT4  I believe that MM Mega Market provides safe and quality vegetables (mean 3.2996)
  CITC: 0.511, 0.527, 0.504, 0.593, 0.542, 0.641, 0.504

PP (α = 0.863)
  PP1  Safe vegetables at MM Mega Market are cheaper than safe vegetables elsewhere (mean 3.0144)
  PP2  Safe vegetables at MM Mega Market have clear prices that make it easy for me to compare prices (mean 3.2166)
  PP3  I will spend more money on safe vegetables (mean 3.0903)
  PP4  I usually choose to buy safe vegetables at the best price (mean 2.3357)
  CITC: 0.501, 0.460, 0.297, 0.588

PQ (α = 0.809)
  PQ1  Safe vegetables at MM Mega Market are cheaper than safe vegetables elsewhere (mean 2.9531)
  PQ2  Safe vegetables at MM Mega Market have clear prices that make it easy for me to compare prices (mean 3.1769)
  PQ3  I will spend more money on safe vegetables (mean 3.2455)
  PQ4  I usually choose to buy safe vegetables at the best price (mean 3.3718)
  PQ5  Safe vegetables at MM Mega Market are cheaper than safe vegetables elsewhere (mean 3.2419)
  CITC: 0.799, 0.712, 0.598, 0.744, 0.647

ST (α = 0.821)
  ST1  Vegetables bought at the supermarket are safer than buying at the market (mean 3.1336)
  ST2  Supermarkets sell foods of clear origin (mean 3.3249)
  ST3  I believe supermarkets will sell quality products (mean 3.1408)
  ST4  Supermarkets always have a customer support program (mean 3.2635)
  CITC: 0.627, 0.649, 0.581, 0.595

SVPI (α = 0.795)
  SVPI1  I will continue to prioritize buying safe vegetables at MM Mega Market shortly (mean 2.9531)
  SVPI2  I will recommend safe vegetables at MM Mega Market to many others (mean 3.1769)
  SVPI3  I am always interested in buying more safe vegetables at MM Mega Market for my family’s needs (mean 3.2455)
  SVPI4  I will buy safe vegetables at MM Mega Market whenever I have a need instead of elsewhere (mean 3.3718)
  CITC: 0.661, 0.543

4.2 Bayesian Consideration

The R report shows every step of the search for the optimal model. The Bayesian consideration selects the best five models, as shown in Table 3.


Table 3. BIC model selection for SVPI

Variable    Probability (%)   SD         Model 1    Model 2     Model 3     Model 4     Model 5
Intercept   100.0             0.033955   – 0.4315   – 0.006956  – 0.006228  – 0.005847  – 0.004579
HC          7.4               0.003522                          0.00093
BT          4.4               0.002210                                                  0.00011
PP          10.0              0.005166              0.01335
PQ          100.0             0.009799   1.010      1.005       1.007       1.006       1.010
ST          5.8               0.003368                                      0.008369

There are five independent variables and one dependent variable. Product Quality (PQ) influences safe vegetable purchasing intention (SVPI) with a probability of 100%, while Health Care (HC), Brand Trust (BT), Product Price (PP), and Supermarket Trust (ST) influence SVPI with low probabilities of 7.4%, 4.4%, 10%, and 5.8%, respectively.

4.3 Model Evaluation

Table 4. Model test

Model     nVar   R2      BIC       Post prob
Model 1   1      0.977   – 1.033   0.724
Model 2   2      0.977   – 1.029   0.100
Model 3   2      0.977   – 1.029   0.074
Model 4   2      0.977   – 1.028   0.058
Model 5   2      0.977   – 1.028   0.044

According to the results in Table 4, BIC shows that model 1 is the optimal selection because its BIC (–1.033) is the minimum. As Table 4 shows, the model in which Product Quality (PQ) influences the intention to purchase safe vegetables (SVPI) has an R2 of 97.7%. According to BIC, model 1 is the best option, and its posterior probability among the five candidate models is 72.4%. The analysis mentioned above demonstrates that the regression equation below is statistically significant.

SVPI = –0.004315 + 1.010 PQ
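As a rough illustration of this kind of BIC-driven model search (the paper itself runs the analysis in R), the Python sketch below enumerates all subsets of the five predictors, scores each ordinary-least-squares fit by BIC, and converts BIC differences into approximate posterior model probabilities; the synthetic data and all names are assumptions for the example.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 277
names = ["HC", "BT", "PP", "PQ", "ST"]
X = rng.normal(size=(n, 5))
y = 1.0 * X[:, 3] + rng.normal(scale=0.15, size=n)  # SVPI driven mainly by PQ here

def bic_of(subset):
    # BIC = n*log(RSS/n) + k*log(n), up to an additive constant
    Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(((y - Xd @ beta) ** 2).sum())
    return n * np.log(rss / n) + Xd.shape[1] * np.log(n)

models = [s for r in range(6) for s in itertools.combinations(range(5), r)]
bics = np.array([bic_of(s) for s in models])
weights = np.exp(-0.5 * (bics - bics.min()))
post = weights / weights.sum()           # BIC-approximated posterior probabilities
best = models[int(np.argmin(bics))]      # the model with the minimum BIC wins
print([names[j] for j in best], post.max())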

5 Conclusions

The study’s objective is to demonstrate the BIC algorithm’s best choice for consumers’ safe vegetable purchasing intentions (SVPI). Based on the results of the BIC algorithm, only Product Quality (PQ) impacts safe vegetable purchasing intention (SVPI), with Beta = 1.010. So, we give implications as follows.


First, the variable PQ1 is “Safe vegetables at MM Mega Market are cheaper than safe vegetables elsewhere” (2.9531). Average consumer agreement with this variable is at the highest threshold compared to the remaining variables. According to the author, managers should continue to maintain links with the businesses that supply safe vegetables to the supermarket and should build quality processes following VietGAP or GlobalGAP standards to increase the intention to buy safe vegetables at MM Mega Market HCMC.

Second, the variable PQ2 is “Safe vegetables at MM Mega Market have clear prices that make it easy for me to compare prices” (3.1769). Average consumer agreement with this variable is the lowest compared to the remaining variables. As we can see, there are still many limitations in the market in verifying the nutrients in vegetables, and consumers do not have enough understanding to judge whether safe vegetables have a higher nutritional value. Therefore, according to the author, managers should adopt product promotion strategies such as highlighting the difference in nutritional content between safe vegetables and other vegetables, to help consumers compare easily and to increase the intention to buy safe vegetables.

Third, the variable PQ3 is “I will spend more money on safe vegetables” (3.2455). Average consumer agreement with this variable is positive. To increase consumers’ perception that safe vegetables at MM Mega Market are naturally preserved without being soaked in preservative chemicals, managers should take reasonable preservation measures, such as keeping vegetables in areas with cold temperatures so that they keep longer. This should be accompanied by publishing the test results confirming the absence of harmful preservatives in the vegetables. With information on standards and quality certification on the packaging, consumers can partly set aside their concerns about unhygienic products or the use of harmful chemicals to maintain the freshness of vegetables.

Fourth, the variable PQ4 is “I usually choose to buy safe vegetables at the best price” (3.3718). Average consumer agreement with this variable is positive. To keep safe vegetables fresh, managers should take measures such as displaying vegetables in a place with cold temperatures together with a misting system. When vegetables show signs of deterioration, they must be replaced and destroyed, so that buying spoiled vegetables does not affect consumers’ intention to buy safe vegetables at MM Mega Market.

Finally, the variable PQ5 is “Safe vegetables at MM Mega Market are cheaper than safe vegetables elsewhere” (3.2419). Average consumer agreement with this variable is positive. To improve consumers’ perception that safe vegetables at MM Mega Market have a clear origin, managers should strictly control safe vegetables from the input stage to ensure their freshness and safety. The vegetable growing stages, from seedling to harvest, should be strictly inspected, and the percentages of added chemicals and nutrients should always be re-checked, with neither excess nor deficiency, following the regulations of the Ministry of Health, to ensure the safety of users.
The managers of MM Mega Market also need to provide a flexible customer service department so that, when a customer complaint occurs, it can be resolved without harming or influencing the surrounding customers, and consumer opinions should always be considered to improve product quality. In summary, product quality affects consumers’ intention to buy safe vegetables at MM Mega Market HCMC. Products whose quality meets the wishes of customers


will naturally be consumed. Therefore, first of all, producers and traders of safe vegetables need to offer products of good quality that meet safety standards according to state regulations and follow the needs of consumers. At the same time, for product quality to reach consumers’ awareness, businesses need to launch communication activities so that information about product quality is known to consumers, increasing quality awareness of the product in their minds and thereby the intention to buy safe vegetables.

Limitations

Although the study has yielded certain results in determining the factors that affect consumers’ intention to buy safe vegetables at MM Mega Market HCMC, there are still certain limitations. First, the survey covered only the Ho Chi Minh City area and therefore may not accurately reflect the whole of Vietnam; further studies should expand the scope of the research to a wider geographical area. Second, owing to limitations in qualifications and research time, during the survey and interviews some consumers may not have been willing to respond, or were not knowledgeable about the methods and objectives of the research, which partly influences the research results. Owing to limited resources and a small sample size of only 277 questionnaires, the collected samples are unevenly distributed across the survey groups in each ward and district, which may also affect the research results; the sample size is quite small compared to the population of Ho Chi Minh City. Further studies should therefore take larger samples and distribute them uniformly throughout the city. Third, the study only examines the influence of some factors on consumers’ intention to buy safe vegetables at MM Mega Market HCMC; in fact, many other factors may also have an impact on this dependent variable, so further studies should add other factors. Finally, language is a fairly big barrier in translating from Vietnamese to English: some words, when translated into English, are equivalent in meaning to the original word but make it very difficult for readers to recognize the difference in semantic level between them, so their evaluation may not be differentiated. After receiving comments from the review board, the author found that a location factor would be entirely appropriate to include in the model for further analysis; further studies should therefore add the location factor to the model of consumers’ intention to buy safe vegetables at MM Mega Market HCMC.

References

1. Dickieson, J., Arkus, V., Wiertz, C.: Factors that Influence the Purchase of Organic Food: A Study of Consumer Behaviour in the UK. Cass Business School, London (2009)
2. Zhang, B., Fu, Z., Huang, J., Wang, J., Xu, S., Zhang, L.: Consumers’ perceptions, purchase intention, and willingness to pay a premium price for safe vegetables: a case study of Beijing, China. J. Clean. Prod. 197, 1498–1507 (2018)
3. Wang, X., Pacho, F., Liu, J., Kajungiro, R.: Factors influencing organic food purchase intention in developing countries and the moderating role of knowledge. Sustainability 11(1), 209 (2019)


4. Nandi, R., Bokelmann, W., Gowdru, N.V., Dias, G.: Factors influencing consumers’ willingness to pay for organic fruits and vegetables: empirical evidence from a consumer survey in India. J. Food Prod. Mark. 23(4), 430–451 (2017)
5. Bayes, T.: LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, FRS, communicated by Mr. Price, in a letter to John Canton, AMFRS. Philosophical Transactions of the Royal Society of London (53), 370–418 (1763)
6. Thang, L.D.: The Bayesian statistical application research analyzes the willingness to join in area yield index coffee insurance of farmers in Dak Lak province. University of Economics Ho Chi Minh City (2021)
7. Gelman, A., Shalizi, C.R.: Philosophy and the practice of Bayesian statistics. Br. J. Math. Stat. Psychol. 66(1), 8–38 (2013)
8. Raftery, A.E.: Bayesian model selection in social research. Sociol. Methodol. 111–163 (1995)
9. Acheampong, P., Braimah, H., Ankomah-Danso, A., Mochiah, M.: Consumers behaviours and attitudes towards safe vegetables production in Ghana (2012)
10. Vidhyakala, D.K., Santhi, D.P.: Purchase gap between intention and behaviour for green products among consumers. Int. J. Psychos. Rehab. 23(1) (2019)
11. Danes, J.E., Lindsey-Mullikin, J.: Expected product price as a function of factors of price sensitivity. J. Product Brand Management (2012)
12. Kytö, E., Virtanen, M., Mustonen, S.: From intention to action: predicting purchase behavior with consumers’ product expectations and perceptions, and their individual properties. Food Qual. Prefer. 75, 1–9 (2019)
13. Howard, J., Sheth, J.: A theory of buyer behavior. In: Moyer, R. (ed.) Changing Marketing Systems, pp. 253–262 (1967)
14. Ajzen, I., Fishbein, M.: Belief, Attitude, Intention, and Behaviour: An Introduction to Theory and Research. Addison-Wesley, Reading (1975)
15. Tinh, L., Doan, G.D., Bui, Q.B., Truc, B.T.M.: Personal factors affecting intentions of safe vegetable purchases in Danang City, Vietnam. Asian J. Agric. Rural Dev. 10(1), 9 (2020)
16. Rashid, N.: Awareness of eco-label in Malaysia’s green marketing initiative. Int. J. Bus. Manag. 4(8), 132–141 (2009)
17. MacLeod, A.: Prospection, Well-being, and Mental Health. Oxford University Press (2017)
18. Naspetti, S., Zanoli, R.: Organic food quality and safety perception throughout Europe. J. Food Prod. Mark. 15(3), 249–266 (2009)
19. Padel, S., Foster, C.: Exploring the gap between attitudes and behaviour: understanding why consumers buy or do not buy organic food. British Food J. (2005)
20. Kotler, P.: Principles of Marketing: A South Asian Perspective, 13/E. Pearson Education India (2010)
21. Garbarino, E., Johnson, M.S.: The different roles of satisfaction, trust, and commitment in customer relationships. J. Mark. 63(2), 70–87 (1999)
22. Lassar, W., Mittal, B., Sharma, A.: Measuring customer-based brand equity. J. Cons. Mark. (1995)
23. Ramanathan, J., Velayudhan, S.K.: Consumer evaluation of brand extensions: comparing goods to goods brand extensions with goods to services. J. Brand Manag. 22(9), 778–801 (2015)
24. Keller, K.L., Apéria, T., Georgson, M.: Strategic Brand Management: A European Perspective. Pearson Education (2008)
25. Ngo, H.M., Liu, R., Moritaka, M., Fukuda, S.: Urban consumer trust in safe vegetables in Vietnam: the role of brand trust and the impact of consumer worry about vegetable safety. Food Control 108, 106856 (2020)
26. Amron, A.: Buying decision in the consumers of automatic motorcycle in Yogyakarta, Indonesia. J. Mark. Manag. 6(1), 90–96 (2018)


27. Saleki, Z.S., Seyedeh, M.S., Rahimi, M.R.: Organic food purchasing behaviour in Iran. Int. J. Bus. Soc. Sci. 3(13) (2012)
28. Al-Gahaifi, T.H., Světlík, J.: Production and consumption of vegetables in Republic of Yemen. Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 59(4), 9–18 (2011)
29. Siwiec, D., Pacana, A.: A pro-environmental method of sample size determination to predict the quality level of products considering current customers’ expectations. Sustainability 13(10), 5542 (2021)
30. Grunert, K.G.: How consumers perceive food quality. In: Understanding Consumers of Food Products, pp. 181–199 (2006)
31. Weber, M.E.: Developing what customers really need: involving customers in innovation. IEEE Eng. Manag. Rev. 2(43), 34–44 (2015)
32. Shaharudin, M.R., Pani, J.J., Mansor, S.W., Elias, S.J.: Factors affecting purchase intention of organic food in Malaysia’s Kedah state. Cross-Cultural Commun. 6(2), 105–116 (2010)
33. Wankhade, L., Dabade, B.: TQM with quality perception: a system dynamics approach. The TQM Magazine (2006)
34. Sibly, H.: Product quality with heterogeneous consumers and linear pricing. Aust. Econ. Pap. 56(4), 328–351 (2017)
35. Harrison, L., Smith, R.: Developing Food Products for Consumers with Specific Dietary Needs. Woodhead Publishing Series in Food Science, Technology and Nutrition (2016)
36. Shaharudin, M.R., Mansor, S.W., Hassan, A.A., Omar, M.W., Harun, E.H.: The relationship between product quality and purchase intention: the case of Malaysia’s national motorcycle/scooter manufacturer. African J. Bus. Manag. 5(20), 8163–8176 (2013)
37. Moorman, C., Deshpande, R., Zaltman, G.: Factors affecting trust in market research relationships. J. Mark. 57(1), 81–101 (1993)
38. Nandonde, F.A.: In the desire of conquering East African supermarket business: what went wrong in Nakumatt supermarket. Emer. Econ. Cases J. 2(2), 126–133 (2020)
39. van Leeuwen, J.: Regulations on pesticides and requirements of supermarkets direct us to a food safe production of greenhouse vegetables, pp. 49–53 (2001)
40. Bui, T.T., Nguyen, H.T., Khuc, L.D.: Factors affecting consumer’s choice of retail store chain: empirical evidence from Vietnam. J. Asian Finance Econ. Bus. 8(4), 571–580 (2021)
41. Nunnally, J.C.: Psychometric Theory, 2nd edn. McGraw-Hill (1978)
42. Peterson, R.A.: A meta-analysis of Cronbach’s coefficient alpha. J. Cons. Res. 21(2), 381–391 (1994)
43. Slater, S.F.: Issues in conducting marketing strategy research. J. Strateg. Mark. 3(4), 257–270 (1995)
44. Nunnally, J.C.: Psychometric Theory, 3rd edn. Tata McGraw-Hill Education (1994)
45. Raftery, A.E., Madigan, D., Hoeting, J.A.: Bayesian model averaging for linear regression models. J. Am. Stat. Assoc. 92(437), 179–191 (1997)
46. Kaplan, D.: On the quantification of model uncertainty: a Bayesian perspective. Psychometrika 86(1), 215–238 (2021). https://doi.org/10.1007/s11336-021-09754-5

Evolution of Configuration Data in CGP Format Using Parallel GA on Embryonic Fabric

Gayatri Malhotra, Punithavathi Duraiswamy, and J. K. Kishore

U R Rao Satellite Centre, Bangalore, India
M S Ramaiah University of Applied Sciences, Bangalore, India
[email protected]

Abstract. Digital circuit configuration data can be optimized in the design space using genetic algorithms (GA). Cartesian Genetic Programming (CGP) has been further developed to better express circuit configuration data. The embryonic fabric architecture has recently arisen in order to realize digital circuits with the potential for self-repair using the least amount of resources. In this study, a PHsClone (Parallel Half-Sibling and Clone) GA is designed and implemented for evolving configuration data in CGP format for digital circuits built on the embryonic fabric architecture. Configuration data, or prospective circuit solutions, can be generated more quickly using parallel processing on an FPGA. The Xilinx Virtex-7 is used to implement the planned PHsClone GA architecture. The PHsClone technique was validated using well-known benchmark circuits, including a safe mode detection circuit used in flight avionics, a 1-bit adder, and a 2-bit comparator for creating 4-bit adder and 8-bit comparator circuits. According to the simulation results, four concurrent PHsClone GA executions (four parallel threads) achieve convergence for the safe mode detection circuit 37 times quicker than a single HsClone GA, 10 times faster for a 1-bit adder, and 3 times faster for a 2-bit comparator.

Keywords: Genetic algorithm · HsClone · Embryonics · Cartesian genetic programming · Safe mode detection

1 Introduction

The class of evolutionary algorithms known as genetic algorithms (GA) is helpful in solving search and optimization problems. In comparison to human-driven manual methods, applying GA to digital circuit design explores a broader search space. GA outperforms existing techniques in terms of design optimization [1]. However, because of the high computational complexity of GA, the time needed to find a solution on sequential machines is long. GAs commonly suffer from slow convergence because of their extensive search space. Utilizing parallel GAs (PGAs) [2] is a preferred technique for enhancing solution quality


while reducing execution time. By using isolated multiple populations, PGAs preserve greater genetic diversity. For high-speed applications like deep space machine systems, pattern recognition, and fault-tolerant systems, implementing GAs in hardware is a superior option [3, 4]. FPGAs are employed to speed up the execution of GAs because of their high performance and adaptability. When compared to the software version of the identical algorithm [5, 6], the FPGA-based GA implementation of Big Bang-Big Crunch (BB-BC) obtains a sizable speedup. In contrast to software, where the GA processes are executed sequentially, the hardware implementation is faster since it processes the GA operations in parallel [7]. Since hardware units do not compete for the same resources, true parallelization is possible. Pipelining [8] is another strategy applied in hardware to increase speed. With the capacity for fault detection and self-repair, digital circuits and systems can be realized with embryonics. The evolutionary algorithm creates unique designs for digital circuits that are superior to traditional designs. The circuit is made efficient by the design, which uses innovative operators, problem representations, and fitness criteria. Self-repair is a desirable feature for space systems and requires a very resilient digital circuit design for flight avionics. The evolution of the configuration data for digital circuits through GA may be sped up if parallel GA is used. This study provides parallel GA-based configuration data for the embryonic fabric architecture in the design of space systems. For a typical satellite system application, sensor-based safe mode detection logic is implemented using parallel GA. Since the Four Pi sensor based safe mode detection (SMD) in satellites is based on an 8-bit threshold comparator, the configuration data for a 2-bit comparator is evolved to build an 8-bit comparator. The Micro Digital Sun Sensor (mDSS) based SMD circuit makes use of the sun presence signal (SPS) from the mDSS. The configuration data is generated using parallel GA, and the SMD logic based on the mDSS is encoded in CGP format. Another benchmark circuit, a 1-bit adder, is also used for the construction of a 4-bit adder. The embryonic circuit configuration data, or genomic data, is built using Cartesian Genetic Programming (CGP) [9]. In the event of a circuit fault, the CGP data design offers superior control at the node or gate level. In contrast to the standard LUT form, the configuration data size in CGP form does not rise exponentially with the number of inputs and outputs. A 4-bit adder in LUT form requires 2 × 2^8 bits (sum and carry outputs), compared to a 1-bit adder, which only needs 2 × 2^2 bits. In CGP format, the 1-bit adder and the 4-bit adder both require 45 bits, as the configuration bits are copied to four circuits using cloning. In this work, we implement both the PGA and the embryonic fabric on an FPGA. The parallel execution of GA in the FPGA provides a speed advantage. The hardware resources used for the circuits are also compared. Section 2 describes the design of the parallel GA for the application digital circuits to generate configuration data in CGP format. Section 3 compares the parallel HsClone with the single HsClone. Section 4 concludes the paper.
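The LUT-versus-CGP size contrast quoted above can be checked with a few lines of Python; the helper name is ours, and the 45-bit CGP figure is taken directly from the text.

def lut_config_bits(n_inputs, n_outputs):
    # Truth-table (LUT) storage: one bit per output per input combination
    return n_outputs * 2 ** n_inputs

print(lut_config_bits(2, 2))  # 1-bit adder (sum, carry): 2 * 2**2 = 8 bits
print(lut_config_bits(8, 2))  # 4-bit adder over two 4-bit operands: 2 * 2**8 = 512 bits

CGP_ADDER_BITS = 45  # CGP genome size for the 1-bit adder, cloned to form the 4-bit adder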


[Fig. 1. CGP phenotype of micro DSS based safe mode detection. Inputs 0: mDSS1M, 1: mDSS1R, 2: mDSS3M, 3: mDSS3R, 4: Safe Mode Enable (SMen), and 5: Safe Mode Override Disable (SMovrDis) feed the gates NOR 6, NOR 7, MUX 8, NOT 9, AND 10, and AND 11, producing the output SPSop. CGP DATA: 0;1;0, 2;3;1, 6;7;2, 8;8;3, 4;5;4, 9;10;5.]

Table 1. GA parameters

Parameter   Description

Currindv    Current individual under evaluation
Bestindv    The K-map generated best CGP pattern
RandIndv    Random individual of 72 bits (mDSS SMD)
mutpatn     Mutation pattern with many bits flipping
mutpatn1    Mutation pattern with one bit flipping
mutpatn6    Mutation pattern with six bits flipping

2 Design of Parallel Genetic Algorithm

Table 2. Logic function table for mDSS safe mode detection

Bit identification   Logic function      Bit identification   Logic function
0                    OR                  8                    OR
1                    OR                  9                    NAND
2                    MUX                 10                   AND
3                    NAND                11                   NOR
4                    AND                 12                   X AND NOT Y
5                    AND                 13                   X AND NOT Y
6                    OR                  14                   AND
7                    OR                  15                   AND

Based on Charles Darwin’s principle of natural selection, the GA simulates genetic evolution. The likelihood of survival is highest for those ‘individuals’


with the better attributes. The survival of the fittest is a key component of the GA selection criteria. Random modifications that make an organism better suited to its environment give it a better chance of surviving into the next generation. Random search is used to generate the CGP data, i.e., the chromosome or configuration data. The HsClone algorithm incorporates the GA components of encoding, fitness evaluation, population initialization, selection operators, and reproduction operators [10]. The configuration data in CGP format represents node-level data, while the output nodes are not included in the data. The population is initialized with random data, and the best configuration data according to the Karnaugh map is also kept. Crossover is performed between the stored best configuration data and random input data. The configuration data is subjected to random changes through the mutation operator. Different mutation rates should be included in the GA, since they affect the success rate of the evolutionary search. The fitness criteria for digital circuits created using the HsClone GA are based on a comparison between the desired output of the digital function and the output actually obtained. We propose a PGA in which each parallel GA uses the same starting seed value for random population generation. Different crossover points, mutation rates, and fitness criteria can be used in each parallel GA. The crossover, mutation, selection under CGP constraints, and fitness modules form parallel pipelines. The four concurrent GAs share a random population module, and the algorithm allows parallel fitness checks on four sets of data. In a single HsClone GA, if the fitness is less than the worst-fit percentage, crossover with a random pattern is applied; in the PHsClone method, however, crossover is applied as the primary operator and mutation as the secondary operator. The worst fit is defined by a number m%, which is tuned for every circuit. The comparator circuit (Four Pi SMD) has m = 70%, the adder circuit has m = 40%, and the mDSS SMD has m = 78%. While a reduced mutation rate enables the algorithm to converge on local maxima more quickly, a greater mutation rate expands the search space and makes room for new solutions. Before being directed to the fitness module, all the evolved chromosomes are checked against the CGP-related constraints. The CGP constraint check validates the levels-back parameter, node sequencing, and feedback from a node to a prior node. The algorithm parameters are shown in Table 1. Some satellites use Micro Digital Sun Sensor logic based safe mode detection during sun acquisition. The two sensors mounted in the negative pitch direction are mDSS1M and mDSS1R, while the positive pitch direction is handled by mDSS3M and mDSS3R. A logic ‘1’ sensor output indicates the presence of the sun. Sun presence loss, or safe mode detection, occurs when the outputs of the mDSS1 or mDSS3 main and redundant sensors are both logic ‘0’. This logic is accomplished with combinational gates in the CGP format. The mDSS SMD CGP data comprises 6 nodes and 6 inputs (mDSS1M, mDSS1R, mDSS3M, mDSS3R, SMen and SMovrDis). The mDSS SMD circuit encoded in CGP format is shown in Fig. 1. The function table for the mDSS SMD circuit is shown in Table 2. The CGP data evolved for the mDSS SMD with


PHsClone is shown in Fig. 1. It is 72-bit data and contains six nodes, decoded in three nibbles each. The customized parallel GA generates the configuration data for the 1-bit adder, 2-bit comparator, Four Pi SMD, and mDSS SMD circuits using PHsClone. The GA pseudocode for the mDSS SMD is shown below.

Four Parallel Processes; fitness check for Sun Presence Signal output
Four constrained random patterns of 72 bits
Bestfit = fitness(any node output of Bestindv)
Worstfit = 78% of Bestfit
fitval = fitness(population(any node output of Currindv))
If fitval greater than Bestfit
    Bestindv = population(Currindv)
    Bestfit = fitval
Else If fitval less than Worstfit
    population(Currindv) = crossover(Bestindv, Randindv())
        OR mutation(Randindv(), mutpatn6)
        OR mutation(Randindv(), mutpatn)
Else
    population(Currindv) = mutation(Currindv, mutpatn)
        OR mutation(Currindv, mutpatn6)
        OR mutation(Currindv, mutpatn1)
EndIf
EndIf
Repeat till algorithm converges

The customized GA executes four processes in parallel. The selection criterion is based on the condition Worstfit = 78% of Bestfit. If the fitness value is below Worstfit, either crossover is performed between the best individual and a random individual, or mutation is performed on the random individual. If the fitness value is above Worstfit, mutation at different rates is performed on the current individual. The algorithm executes until the best fitness is achieved.
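As a software illustration of the selection logic above (the actual design runs four such threads as hardware pipelines with CGP-constraint checks, which are omitted here), the following Python sketch performs one PHsClone iteration for a single thread; the stand-in fitness function, the 12-bit "many bits" mutation pattern, and all names are assumptions for the example.

import random

GENOME_BITS = 72  # mDSS SMD chromosome length in CGP format
TARGET = [random.randint(0, 1) for _ in range(GENOME_BITS)]  # stand-in target

def fitness(g):
    # Stand-in: in hardware, the decoded circuit outputs are compared
    # against the desired Sun Presence Signal truth table
    return sum(a == b for a, b in zip(g, TARGET))

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

def mutate(g, nbits):
    g = g[:]
    for i in random.sample(range(GENOME_BITS), nbits):
        g[i] ^= 1  # flip nbits randomly chosen bits
    return g

def phsclone_step(current, best, m=0.78):
    # One iteration for a single parallel thread (m = 78% for the mDSS SMD)
    best_fit, cur_fit = fitness(best), fitness(current)
    if cur_fit > best_fit:
        return current, current  # current becomes the new best individual
    rand_ind = [random.randint(0, 1) for _ in range(GENOME_BITS)]
    if cur_fit < m * best_fit:   # worst fit: crossover is the primary operator
        nxt = random.choice([crossover(best, rand_ind),
                             mutate(rand_ind, 6), mutate(rand_ind, 12)])
    else:                        # otherwise mutate the current individual
        nxt = random.choice([mutate(current, 12), mutate(current, 6),
                             mutate(current, 1)])
    return nxt, best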

3 Parallel GA Versus Single GA

For the purpose of creating CGP data, the proposed PHsClone algorithm is simulated in Verilog HDL. At every clock, a new random population is generated using the random function and a seed value. For each circuit, various seed values are examined, and the seed with the fastest convergence is chosen. The PHsClone GA convergence time and the HsClone convergence time for the adder, comparator, and mDSS SMD circuits are displayed in


Figs. 2 and 3 for four, three, and two parallel processes and one single process. The convergence time is reduced by the parallel execution of HsClone. The single PHsClone process is regarded as equivalent to the original HsClone, except that in the PHsClone the mutation operator is added, whereas the original HsClone considered only crossover. For the mDSS SMD, the convergence of four parallel processes is 37 times faster; for the 1-bit adder, 10 times faster; and for the 2-bit comparator, 3 times faster. HsClone’s convergence time is longer than PHsClone’s if the mutation operator is not added. The hardware resources used are determined by synthesizing the PHsClone GA for the mDSS SMD on a Virtex 7v2000tflg1925-1 FPGA and are shown in Table 3. Compared to a single HsClone, the consumption of slice LUTs and slice registers is higher for PHsClone. This resource usage is based on an assumed case in which resource sharing is enabled. While 4-PHsClone utilizes 24232 slice LUTs for the mDSS SMD, a single HsClone uses 24040 slice LUTs. The slice LUTs for the 4-PHsClone GA have increased because of the increased need for SRAM to maintain four sets of configuration data during the GA operations. The usage of slice registers is comparable between 4-PHsClone and HsClone, since there is no increase in the flip-flops required.

[Fig. 2. PHsClone GA convergence for adder and comparator; four/three/two parallel processes and one process. The figure plots convergence time (ms) against the number of parallel HsClone processes (1–4) for the 1-bit adder and the 2-bit comparator.]


[Fig. 3. PHsClone GA convergence for mDSS SMD; four/three/two parallel processes and one process. The figure plots convergence time (ms) against the number of parallel HsClone processes (1–4) for mDSS safe mode detection.]

Table 3. 4-PHsClone GA and HsClone GA resource utilization for mDSS SMD

Resources         Available   4-PHsClone GA   HsClone + mutation   Utilization%
Slice LUTs        1221600     24232           24040                1.9
Slice Registers   2443200     515             503                  0.02
Bonded IOB        1200        300             295                  25

4 Conclusion

It is suggested to execute the HsClone GA in parallel to speed up the hardware GA convergence time. The CGP data format is preferred over the LUT data format typically used in FPGA configuration because it allows fault identification at the node or gate level. PHsClone generates the CGP data for the construction of flight avionics circuits such as 4-bit adders, 8-bit comparators (Four Pi based safe mode detection), and mDSS safe mode detection logic on the embryonic fabric, employing embryonic cell cascading. With PHsClone, the convergence time for the mDSS SMD, 1-bit adder, and 2-bit comparator is greatly shortened. The FPGA implementation of PHsClone shows that parallel GAs utilize significantly more resources than single GAs across all circuits. The register utilization of the PHsClone GA is comparable to that of a single HsClone because all parallel GA operations run in a pipeline. Parallelizing the GA is not necessary if any single process converges quickly enough; it is feasible that in some circumstances a single process can generate configuration data at a quicker rate, just as the random number generation influences the convergence


time. PHsClone is tested on combinational circuits in the current work; sequential circuits will need to be tested in a follow-up study.

References

1. Vasicek, Z., Sekanina, L.: Evolutionary approach to approximate digital circuits design. IEEE Trans. Evol. Comput. 19(3), 432–444 (2015)
2. AL-Marakeby, A.: FPGA on FPGA: implementation of fine-grained parallel genetic algorithm on field programmable gate array. Int. J. Comput. Appl. 80(6), 29–32 (2013). https://doi.org/10.5120/13867-1725
3. Hounsell, B.I., Arslan, T., Thomson, R.: Evolutionary design and adaptation of high performance digital filters within an embedded reconfigurable fault tolerant hardware platform. Soft Comput. 8(5), 307–317 (2004). https://doi.org/10.1007/s00500-003-0287-x
4. Guo, L., Thomas, D.B., Guo, C., Luk, W.: Automated framework for FPGA-based parallel genetic algorithms. In: Conference Digest - 24th International Conference on Field Programmable Logic and Applications, FPL 2014 (2014). https://doi.org/10.1109/FPL.2014.6927501
5. Psarakis, M., Dounis, A., Almabrok, A., Stavrinidis, S., Gkekas, G.: An FPGA-based accelerated optimization algorithm for real-time applications. J. Sig. Process. Syst. 92(10), 1155–1176 (2020). https://doi.org/10.1007/s11265-020-01522-5
6. Hoseini Alinodehi, S.P., Moshfe, S., Saber Zaeimian, M., Khoei, A., Hadidi, K.: High-speed general purpose genetic algorithm processor. IEEE Trans. Cybern. 46(7), 1551–1565 (2016). https://doi.org/10.1109/TCYB.2015.2451595
7. Scott, S.D., Samal, A., Seth, S.: HGA: a hardware-based genetic algorithm. In: Proceedings of the Third International ACM Symposium on Field-Programmable Gate Arrays (FPGA 1995) (1995)
8. Guo, L., Guo, C., Thomas, D.B., Luk, W.: Pipelined genetic propagation. In: Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015, pp. 103–110 (2015). https://doi.org/10.1109/FCCM.2015.64
9. Malhotra, G.: Cartesian genetic programming approach for embryonic fabric architecture. In: Proceedings of the 6th International Conference on Information Communication and Management, ICICM 2016, pp. 285–290 (2016). https://doi.org/10.1109/INFOCOMAN.2016.7784259
10. Engelbrecht, A.P.: Computational Intelligence: An Introduction. Wiley, New York (2007). https://books.google.co.in/books?id=IZosIcgJMjUC

Cross Synergetic Mobilenet-VGG16 for UML Multiclass Diagrams Classification

Nesrine Bnouni Rhim(B), Salim Cheballah, and Mouna Ben Mabrouk

Sogeti Part of Capgemini, 147 quai du Président Roosevelt, Issy-les-Moulineaux, 92130 Paris, France
[email protected]

Abstract. Unified Modeling Language (UML) diagrams are a standard modeling language for representing the design of software systems. Specifically, UML includes several types of diagrams that assist in developing and designing any software efficiently. Class diagrams, activity diagrams, sequence diagrams, and use case diagrams are the most widely used UML diagrams for object-oriented software. However, manually classifying UML diagrams is time-consuming and requires considerable effort, so there is a need to automate UML class-diagram classification in order to help researchers, software developers, and academicians study and analyze software efficiently. One solution is to use DL-based classification methods, as these have gained special popularity in various computer vision tools. However, while most innovative deep-learning efforts using Convolutional Neural Networks (CNNs) focus on building more refined and robust architectures (e.g., Mobilenet, VGG16, ResNet, U-Net, GANs), there is very little work on how to combine dissimilar CNN architectures to enhance relational learning of CNN-to-CNN interactions. In this paper, our purpose is to address this limitation in order to automate UML diagram classification. For this reason, we propose a Cross Synergetic Mobilenet-VGG16 (CS-Mobilenet-VGG16), which tackles two crucial problems in computer vision classification and ensemble CNN learning: (1) a bi-directional flow of information between two CNNs (Mobilenet-VGG16), where information proceeds in a directional manner from Mobilenet to VGG16, and (2) synergetic variability across UML diagrams. Our Cross Synergetic Mobilenet-VGG16 significantly (p

    If rand > [1 - D(t)]
        Jellyfish displays passive shifting
    Else
        Jellyfish displays active shifting
    End if
End if
Examine border state at a different position
Evaluate the volume of food at respective position f( )
Improve the best answer
End for
End for
Step 9: Based on the rank the most relevant document is fetched to the user.
End
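The pseudocode fragment above is the tail of the jellyfish optimization listing employed by the DRHTG document-retrieval framework described later in this section. Below is a minimal Python sketch of the passive/active swarm-motion branch that it shows; the update formulas follow the standard jellyfish-search rules of Chou and Truong, which is an assumption about the full listing, and all names are illustrative.

```python
import numpy as np

def swarm_motion(x_i, x_j, f, d_t, lb, ub, rng):
    """Swarm-motion branch from the fragment above: if a random draw
    exceeds 1 - D(t), the jellyfish shifts passively around its own
    position; otherwise it shifts actively toward (or away from) a
    randomly chosen neighbour x_j, judged by the food volume f(.)."""
    if rng.random() > 1.0 - d_t:                        # passive shifting
        return x_i + 0.1 * rng.random() * (ub - lb)
    step = rng.random() * ((x_j - x_i) if f(x_j) < f(x_i) else (x_i - x_j))
    return x_i + step                                    # active shifting

# Tiny usage example on a 2-D sphere function (illustrative only).
rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))
x_new = swarm_motion(np.array([0.5, -0.2]), np.array([0.1, 0.3]),
                     f, d_t=0.4, lb=-1.0, ub=1.0, rng=rng)
print(x_new)
```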

The Normalized Google Distance (NGD) between two search terms u and v is given in Eq. (1), where N is the total number of web pages indexed by Google multiplied by the average number of singleton search terms occurring on those pages; f(u) and f(v) are the numbers of hits for the search terms u and v, respectively; and f(u, v) is the number of web pages that contain both u and v. The conditional entropy of two variables X and Y, taking values x_i and y_j respectively, is given in Eq. (2), where p(x_i, y_j) is the probability that X = x_i and Y = y_j. This value should be interpreted as the amount of randomness remaining in the random variable X given the random variable Y. The Kullback-Leibler divergence score, often known as the KL divergence score, measures how one probability distribution differs from another, as shown in Eq. (3): it is the sum over each event of its probability under P multiplied by the log of that probability over the probability of the event under Q. Lin similarity is employed to estimate the degree of semantic relationship between units of concepts, language, and instances; it is calculated as the ratio of the commonality between two terms to their description, i.e., the commonality-to-difference ratio, as formulated in Eq. (4). The similarity of concepts and terms with respect to text and query reflects the degree of semantic matching between them; the similarity between concepts Ci and Cj, denoted Sim(Ci, Cj), is given in Eq. (5).

\mathrm{NGD}(u, v) = \frac{\max\{\log f(u), \log f(v)\} - \log f(u, v)}{\log N - \min\{\log f(u), \log f(v)\}}    (1)

H(X \mid Y) = -\sum_{i,j} p(x_i, y_j) \log \frac{p(x_i, y_j)}{p(y_j)}    (2)

D_{KL}(P \parallel Q) = \sum_{x \in \chi} P(x) \log\left(\frac{P(x)}{Q(x)}\right)    (3)

\mathrm{sim}_{Lin}(X, Y) = \frac{\log P(\mathrm{common}(X, Y))}{\log P(\mathrm{description}(X, Y))}    (4)

\mathrm{sim}(C_i, C_j) = \alpha \cdot \mathrm{sim}(C_i, C_j)_{dist} + \beta \cdot \mathrm{sim}(C_i, C_j)_{const} + \gamma \cdot \mathrm{sim}(C_i, C_j)_{cdepth}    (5)
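For concreteness, here is a minimal Python sketch of Eqs. (1), (3), and (5); the hit counts, distributions, and weighting coefficients are illustrative placeholders (the paper does not fix values for α, β, γ).

```python
import math

def ngd(f_u, f_v, f_uv, n):
    """Normalized Google Distance, Eq. (1), from raw page-hit counts."""
    lu, lv, luv = math.log(f_u), math.log(f_v), math.log(f_uv)
    return (max(lu, lv) - luv) / (math.log(n) - min(lu, lv))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(P || Q), Eq. (3), for discrete
    distributions given as equal-length sequences of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def concept_similarity(sim_dist, sim_const, sim_cdepth,
                       alpha=0.4, beta=0.3, gamma=0.3):
    """Weighted concept similarity, Eq. (5); the weights are assumptions."""
    return alpha * sim_dist + beta * sim_const + gamma * sim_cdepth

# Example: two terms with 1500 and 2200 hits, 800 joint hits, on an
# index of 10**10 pages (all numbers invented for illustration).
print(ngd(1500, 2200, 800, 10**10))
print(kl_divergence([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
print(concept_similarity(0.8, 0.7, 0.9))
```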

5 Results

Data is collected from Freebase, Wikidata, a crowd-sourced community ontology, blog and social network APIs, and the LODC cloud, and is then formulated into a knowledge graph. The formulated graph is converted from OWL format to RDF format; the jellyfish algorithm is used for the RDF conversion. The RCVI dataset is preprocessed into a custom XML format and then converted to RDF format. Using KL divergence, both converted RDF mappings are re-ranked in increasing order of semantic similarity (Fig. 2). Table 1 compares the performance of the proposed model with the base model approach and other approaches, and Fig. 3 compares the precision percentage of the baseline models with DRHTG. It is evident from Table 1 that the proposed DRHTG framework performs better than LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering. Precision, F-measure, FDR, nDCG, and accuracy have all improved in the proposed DRHTG.


Fig. 2. Precision percentage vs No. of recommendations

Table 1. Comparison of performance of the proposed DRHTG with other approaches

| Search Technique             | Average Precision % | Average Recall % | Accuracy % | F-Measure % | FDR  | nDCG |
|------------------------------|---------------------|------------------|------------|-------------|------|------|
| LSI+Cosine+Jaccard           | 77.32               | 79.81            | 78.56      | 78.54       | 0.23 | 0.79 |
| MLAP [2]                     | 78.71               | 82.15            | 80.43      | 80.39       | 0.22 | 0.69 |
| STM+Cosine                   | 79.11               | 83.15            | 81.13      | 81.07       | 0.21 | 0.86 |
| DRDLC [1]                    | 74.93               | 78.18            | 76.55      | 76.52       | 0.26 | 0.86 |
| LSI+Fuzzy C-Means Clustering | 81.18               | 84.17            | 82.675     | 82.64       | 0.19 | 0.87 |
| Proposed DRHTG               | 91.78               | 93.78            | 92.78      | 92.76       | 0.09 | 0.95 |

The F-measure of DRHTG is greater than that of LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering by 14.22%, 12.37%, 11.69%, 16.24%, and 10.12%, respectively. The proposed model's precision is 91.78%, while LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering achieve 77.32%, 78.71%, 79.11%, 74.93%, and 81.18%, respectively. The accuracy of DRHTG exceeds that of LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering by 14.22%, 12.35%, 11.65%, 16.23%, and 10.12%. The recall of the proposed approach is better than that of LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering by 13.97%, 11.63%, 10.63%, 15.6%, and 9.61%, respectively. The nDCG of the proposed approach is higher than that of LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering by 0.16, 0.26, 0.09, 0.09, and 0.08, respectively. The FDR values of LSI+Cosine+Jaccard, MLAP, STM+Cosine, DRDLC, and LSI+Fuzzy C-Means Clustering are greater than that of DRHTG by 0.14, 0.13, 0.11, 0.17, and 0.1.


Fig. 3. Pictorial depiction of the proposed DRHTG and other baseline models

6 Conclusion

The efficacy of the results validates the proposed framework's suitability for document retrieval. The RCVI dataset is preprocessed to obtain a custom RDF mapping. Blog and social network APIs, a crowd-sourced community ontology, Wikidata, Freebase, and the LODC cloud are incorporated into the RDF conversion to improve classification accuracy. Upon classification, Jellyfish Optimization is employed on the classified results to re-rank the documents in increasing order of semantic similarity and yield more accurate results. The proposed DRHTG framework achieves an average accuracy of 92.78% with a very low FDR of 0.09. The results validate that DRHTG is a best-in-class approach for re-ranking documents by semantic similarity and presenting them to the user.

References

1. Ramya, R.S., Sejal, D., Venugopal, K.R., Iyengar, S.S., Patnaik, L.M.: DRDLC: discovering relevant documents using latent dirichlet allocation and cosine similarity. In: Proceedings of the 2018 VII International Conference on Network, Communication and Computing, pp. 87–91 (2018)
2. Deka, H., Sarma, P.: Machine learning approach for text and document mining. Int. J. Comput. Sci. Eng. (IJCSE) 6(5) (2017)
3. Hershey, J.R., Olsen, P.A.: Approximating the Kullback Leibler divergence between Gaussian mixture models. In: 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), vol. 4, pp. IV-317. IEEE (2007)
4. Chou, J.S., Truong, D.N.: A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 389, 125535 (2021)
5. Li, W., Xia, Q.: A method of concept similarity computation based on semantic distance. Procedia Eng. 15, 3854–3859 (2011)
6. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(3), 379–423 (1948)
7. Cilibrasi, R.L., Vitanyi, P.M.: The Google similarity distance. IEEE Trans. Knowl. Data Eng. 19(3), 370–383 (2007)
8. Kuzi, S., Zhang, M., Li, C., Bendersky, M., Najork, M.: Leveraging semantic and lexical matching to improve the recall of document retrieval systems: a hybrid approach. arXiv preprint arXiv:2010.01195 (2020)
9. Karami, A., Lundy, M., Webb, F., Dwivedi, Y.K.: Twitter and research: a systematic literature review through text mining. IEEE Access 8, 67698–67717 (2020)
10. Antons, D., Grünwald, E., Cichy, P., Salge, T.O.: The application of text mining methods in innovation research: current state, evolution patterns, and development priorities. R&D Manage. 50(3), 329–351 (2020)
11. Liu, Y., Hong, Z.: Mapping XML to RDF: an algorithm based on element classification and aggregation. J. Phys. Conf. Ser. 1848(1), 012012 (2021)
12. Arulmozhivarman, M., Deepak, G.: OWLW: ontology focused user centric architecture for web service recommendation based on LSTM and whale optimization. In: Musleh Al-Sartawi, A.M.A., Razzaque, A., Kamal, M.M. (eds.) EAMMIS 2021. LNNS, vol. 239, pp. 334–344. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77246-8_32
13. Surya, D., Deepak, G., Santhanavijayan, A.: KSTAR: a knowledge based approach for socially relevant term aggregation for web page recommendation. In: Motahhir, S., Bossoufi, B. (eds.) ICDTA 2021. LNNS, vol. 211, pp. 555–564. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73882-2_50
14. Surya, D., Deepak, G., Santhanavijayan, A.: QFRDBF: query facet recommendation using knowledge centric DBSCAN and firefly optimization. In: Motahhir, S., Bossoufi, B. (eds.) ICDTA 2021. LNNS, vol. 211, pp. 801–811. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73882-2_73
15. Surya, D., Deepak, G., Santhanavijayan, A.: Ontology-based knowledge description model for climate change. In: Abraham, A., Piuri, V., Gandhi, N., Siarry, P., Kaklauskas, A., Madureira, A. (eds.) ISDA 2020. AISC, vol. 1351, pp. 1124–1133. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-71187-0_104
16. Deepak, G., Santhanavijayan, A.: QGMS: a query growth model for personalization and diversification of semantic search based on differential ontology semantics using artificial intelligence. Comput. Intell. 1–30 (2022)
17. Deepak, G., Santhanavijayan, A.: OntoDynS: expediting personalization and diversification in semantic search by facilitating cognitive human interaction through ontology bagging and dynamic ontology alignment. J. Ambient Intell. Humanized Comput. 1–25 (2022)

Bi-CSem: A Semantically Inclined Bi-Classification Framework for Web Service Recommendation

Deepak Surya1, S. Palvannan1, and Gerard Deepak2(B)

1 Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India
2 Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India
[email protected]

Abstract. Web services are products in the era of service-oriented computing and cloud computing. As the number of web services on the Internet grows, selecting and recommending them becomes more important. Consequently, in the realm of service computing, how to propose the finest web services to researchers is now a popular research topic. To determine the proper recommendation, the Bi-CSem model is proposed in this paper and tested against multiple baseline models on real-world web service datasets. A thesaurus is built using web service keywords gathered from web service repositories such as UDDI and WSDL, and from the World Wide Web cloud. The extracted terms are subjected to semantic similarity computation using SemantoSim, concept similarity, and the KL divergence measure, while the terms from the user, such as the query, user clicks, and previous historical data, are pre-processed. The terms from the semantic alignment are then classified using XGBoost, while the web service dataset is classified using both XGBoost and GRU. Semantic similarity is then computed using SemantoSim alone on the intersection of the top 75 percent of terms from the two classifiers together with the features from the intermediate term tree generated using STM. Finally, the terms are re-ranked and recommended to the user, and the precision, accuracy, recall, F-measure, and FDR of the web service recommendation system are calculated; the Bi-CSem model achieves an excellent precision of 94.37% and the lowest FDR of 0.06.

Keywords: Web service recommendation · eXtreme Gradient Boost · Gated Recurrent Unit

1 Introduction

A Web service is a standards-based, language-independent software entity that receives specifically structured requests from other software entities on various servers via vendor- and transport-neutral communication protocols, generating application-specific responses. Web services have exploded in popularity over the previous decade as a means of exchanging data, computing resources, and applications via the Internet. It is not an easy process to choose a high-quality web service from a vast range of options. Service engineers (also known as service users) typically obtain a list of web services that fulfil certain functional criteria from service brokers or search engines when building service-oriented applications. From among the functionally comparable candidates, they must choose the best one. However, choosing the best-performing candidate is challenging because most service users are unaware of the services' actual performance. As a result, efficient service selection and recommendation strategies are required, which may assist service users in reducing risk and delivering high-quality service operations.

Motivation: There is a need for a web service recommendation strategy based on query-level semantics that ensures classification of the query. A semantically inclined, artificial-intelligence-driven approach is needed mainly because query terms are highly diverse in nature, and web service annotations and the names of web services are also highly diverse, so there is an intermittent requirement for understanding the natural-language text in queries. As a result, a web service recommendation strategy that brings term-level semantics to the query, as well as query classification, is required.

Contribution: The proposed model formulates a thesaurus using web repositories like UDDI and WSDL. Apart from that, the user query, user clicks, and prior historical data (if necessary) are pre-processed before being subjected to topic modelling using STM, and the similarity measure is computed for the terms gathered from both using concept similarity, KL divergence, and the SemantoSim measure. The terms from the semantic network are classified using XGBoost, and the terms from the web service dataset are classified using XGBoost and GRU as classifiers. Experiments conducted on the terms gathered from the classifiers show that combining several algorithms and concepts into one framework achieves greater accuracy, recall, precision, and F-measure, with a very low False Discovery Rate (FDR) and a high nDCG.

Organization: The rest of the article is organized as follows. A brief summary of related work is provided in Sect. 2. The proposed architecture is depicted in Sect. 3. The implementation is extensively discussed in Sect. 4. Section 5 discusses the performance analysis and results. Section 6 concludes the paper.

2 Related Work

In the field of service computing, web service recommendation is currently attracting a lot of interest. Balaji et al. [1] suggested a method for automatic query categorization based on web service similarity using a machine learning methodology, K-Nearest Neighbors (KNN). Peerzade et al. [2] presented a collaborative filtering strategy for online service recommendation. For discovering ideal web services, they employ two methodologies: Pearson Correlation Coefficient driven Collaborative Filtering (PCC) and Normal Recovery Collaborative Filtering (NRCF), which uses a similarity technique to compute web service similarity. Compared to existing methods, the proposed system has two new features, namely novel clustering approaches, and the NRCF prediction employs the PCC prediction algorithm within the NRCF approach. Dang et al. [3] introduced a model that incorporates knowledge graphs and knowledge representations into web service recommendation, as well as a novel attention module capturing the influence of tags for candidate services on different terms of queries; a deep neural network is utilized to model the high-level aspects of user-service invocation behavior. By merging collaborative filtering and an attention CNN, Ke et al. [4] presented a hybrid collaborative filtering with attention CNN model for online service recommendation: deep neural nets with the mashup-service invocation matrix and an attention-based CNN are seamlessly integrated to capture the intricate mashup-service interactions. Yao et al. [5] proposed a novel method that combines collaborative filtering and text suggestions by using a generative probabilistic model to assess both rating data (e.g., QoS) and semantic content information (e.g., capabilities) of online services. By merging collaborative filtering and text-based data, Xiong et al. [6] proposed a deep learning-based modified technique for web service recommendation: the request interactions with both mashups and services, as well as their functions, are seamlessly incorporated into a deep neural network, which may be utilized to characterize the intricate connections between mashups and services. Li et al. [7] designed a new QoS-aware web service recommendation system that considers the context-specific feature similarities of distinct services. The system first extracts contextual information from WSDL files in order to cluster web services based on feature similarities, and then uses an upgraded matrix factorization approach to suggest services to users. Peerzade et al. [8] created a list of optimum web services based on the user's history using two methodologies: Pearson Correlation Coefficient based Collaborative Filtering (PCC) and Normal Recovery Collaborative Filtering (NRCF); in comparison to previous approaches, the presented method utilizes unique hybrid clustering algorithms that increase PCC predictive performance and rely only on PCC similarities. Subbulakshmi et al. [9] developed a web service recommendation system based on WS semantic analysis and improved collaborative filtering. Ontology-based semantic analysis utilizing the Tversky content similarity measure aids in identifying the most functionally relevant WS, while the collaborative filtering method uses DBSCAN clustering and PCC similarity to find highly collaborative WS based on the ratings provided by experienced users. They combined the collaborative and semantic similarity values of WS using the relative frequency technique; as a result, the suggested technique has been shown to generate WS recommendations that are more realistic, accurate, and efficient. Liu et al. [10] developed a technique that utilizes machine learning approaches to recommend online services to users based on both historical consumption data and service descriptions, which is applicable to both structured and unstructured (free-text) service descriptions. They leverage the notion of collaborative topic regression, which includes both probabilistic matrix factorization and probabilistic topic modelling, to generate user-related, service-related, and topic-related matrices. To highlight the retrieval effectiveness of search engines, Ali et al. [11] selected search engines based on Alexa (Actionable Analytics for the Web) rank, taking into account both precision and relative recall; they finally selected two websites for the report, Google.com and Yahoo.com. A total of 15 queries were chosen at random and spread among the two search engines in order to test their retrieval efficacy as a training data set and to identify certain features of each kind; navigational, informational, and transactional queries were evaluated and classified using concepts. Zhou et al. [12] describe recent investigations on query classification in detail, focusing on the source of the query log, category systems, feature extraction techniques, classification methods, and evaluation methodology, and explore the concerns of large-scale query classification and solution approaches coupled with big data analysis systems, noting that a number of issues and challenges remain, such as the lack of an authoritative categorization and evaluation methodology, the efficacy of feature extraction methods, and the ambiguity of effectiveness on a large-scale query log and subsequent query classification on a big data platform. Zhang et al. [13] utilize convolutional neural networks for query classification, which is a supervised learning technique. They input the query and associated category to train the model, and the model can anticipate the query's type in the testing procedure. They utilize two distinct convolutional neural networks to automatically learn the query format depending on semantics, and the results demonstrate that one model's precision improves by 3% relative to logistic regression. Shi et al. [14] developed a query classification technique that learned feature representations for a CNN utilizing a deep long short-term memory (DLSTM) based feature mapping. The result reveals new state-of-the-art query classification performance over baseline methods utilizing their architecture, which blends a stack of DLSTM layers with a traditional CNN layer. Gupta et al. [15] developed a technique that performs Bayesian analysis on time periods of interest derived from pseudo-relevant articles, and evaluated it on a huge temporal query workload to show that the temporal category of a query can be accurately determined. In [16–24], several models in support of the proposed literature have been depicted.

3 Proposed Work

The proposed architecture for web service recommendation utilizing a knowledge-centric semantic approach is shown in Fig. 1. User clicks, as well as prior historical data such as web usage data and the user's previous web service access directory, are used as prospective input, together with the user query. The user query is not necessarily required as part of the framework's input; however, the user's clicks and past historical web usage data are required for web service suggestion. The query statement is also taken as input if the user chooses to search for online services in terms of a query. The user's input items, as well as past historical data and, if available, a query, are pre-processed. Pre-processing involves tokenization, stop word removal, lemmatization, and NER (Named Entity Recognition).

Fig. 1. Proposed architecture for Bi-CSem model

The terms extracted from the user input after pre-processing are subjected to topic modelling, which is accomplished using Structural Topic Modelling (STM). STM is a broad paradigm for topic modelling with document-level covariate information; it applies a new variation of LDA. It is an unsupervised modelling approach, and one of its main steps is fixing the number of topics to model before estimation begins. The covariates can alter topical content, topical prevalence, or both, and can enhance inference and qualitative interpretability. Topical prevalence refers to the proportion of distinct themes that appear within documents, whereas topical content refers to the probabilities associated with the terms in each topic. External variables can include any metadata that differentiates one text from another, such as author identity (age, gender, political affiliation, etc.), textual genre (for example, media stories vs. academic publications), and time of production. Currently, the stm package provides functions for ingesting and manipulating text data, estimating an STM, calculating covariate effects on latent topics with uncertainty, estimating a graph of topic correlations, and computing model diagnostics and summary metrics. As the terms are separated by STM, an intermediate term tree is generated, which is further used for recommendation.

The web service index is crawled from the World Wide Web: from its index catalogue the web service indexes are crawled, and a thesaurus is formulated comprising the web service index terms together with terms extracted from the UDDI and WSDL repositories. From the UDDI and WSDL repositories, the web service keywords are crawled along with the web service index from the World Wide Web using a customized crawler, and these terms are further used for semantic alignment. The web service dataset is subjected to dual classification using XGBoost and GRU. Semantic alignment is performed between the terms yielded by structural topic modelling and the terms in the thesaurus formulated from the web service indexes and the UDDI and WSDL terms. Semantic alignment uses concept similarity, KL divergence, and the SemantoSim measure; the three measures are used together in order to form a semantic network. The threshold for concept similarity and SemantoSim is 0.75, whereas for KL divergence a step deviation of 0.25 is considered. The terms from the formulated semantic network are sent as features for classification of the web service dataset through the XGBoost classifier. The web service dataset is also automatically classified using GRU (Gated Recurrent Units); since GRU is a deep learning classifier, hand-crafted features are not required, automatic feature selection takes place from the dataset, and the web service dataset is classified.

XGBoost is the abbreviation for eXtreme Gradient Boosting. It is a distributed gradient boosting library developed to be fast, versatile, and portable. Regularized learning is one of the features in XGBoost that helps smooth the final learned weights and reduce over-fitting; the regularized objective will prefer models that use essential, predictive operations. Then there is gradient tree boosting, which means the tree ensemble model cannot be improved using typical optimization methods in Euclidean space; instead, the model is developed in an additive manner. Finally, in addition to the regularized objective, two additional strategies are utilized to minimize overfitting: shrinkage and column subsampling. The first strategy, shrinkage, was presented by Friedman. Shrinkage scales newly added weights by a factor after each stage of tree boosting; similar to a learning rate in stochastic optimization, shrinkage minimizes the influence of each tree and leaves room for the following trees to improve the model.

The Gated Recurrent Unit (GRU) is a form of RNN that, in some situations, offers benefits over long short-term memory (LSTM). GRU is faster and consumes less memory than LSTM, whereas LSTM is more accurate when working with datasets with longer sequences. GRUs also address the vanishing gradient problem (of the values utilized to update network weights), which is troublesome for traditional RNNs: if the gradient shrinks as it propagates over time, it may become too small to influence learning, rendering the neural net untrainable, and the RNN effectively "forgets" longer sequences. To solve this problem, GRUs use update and reset gates. These gates regulate which data can flow through to the output and may be trained to retain information over extended periods, allowing critical information to be passed along a chain of events to make more accurate forecasts.

Based on the classification output of both the terms from the web usage data via the XGBoost classifier and the terms from the GRU, the top 75% intersection of the web services is considered. Because the same dataset is classified by both classifiers, the common items from the dataset are considered, and the terms from the common intersection of the top 75% of web services are taken. The intermediate term tree generated from STM, the features from it, and the terms from the top 75% intersection of web services are used to compute the similarity measure, which is again calculated using the SemantoSim measure alone. The results are recommended to the user in increasing order of the similarity measure.
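To make the alignment step concrete, here is a minimal Python sketch under stated assumptions: concept_sim, semantosim, and kl_div stand in for the three measures (not implemented here), and the "0.25 step deviation" for KL divergence is interpreted as accepting pairs within 0.25 of the best (lowest) divergence, which is our reading of the text rather than the authors' exact rule.

```python
from typing import Callable, List, Tuple

def semantic_alignment(stm_terms: List[str], thesaurus_terms: List[str],
                       concept_sim: Callable[[str, str], float],
                       semantosim: Callable[[str, str], float],
                       kl_div: Callable[[str, str], float]) -> List[Tuple[str, str]]:
    """Align STM terms with thesaurus terms: keep pairs whose concept
    similarity and SemantoSim are both >= 0.75, then apply the 0.25 KL
    step deviation (pairs within 0.25 of the lowest divergence survive)."""
    scored = [(q, t, kl_div(q, t))
              for q in stm_terms for t in thesaurus_terms
              if concept_sim(q, t) >= 0.75 and semantosim(q, t) >= 0.75]
    if not scored:
        return []
    best = min(kl for _, _, kl in scored)
    return [(q, t) for q, t, kl in scored if kl <= best + 0.25]
```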


4 Implementation and Performance

The presented Bi-CSem model for web service recommendation was implemented in Python using Google Colaboratory as the IDE. Web service index words are crawled from the Internet, and a thesaurus is created using the web service index terms as well as terms taken from the UDDI and WSDL repositories. The data for this model was obtained from a GitHub repository that maintains a set of QoS datasets collected from real-world web services. The repository comprises two types of datasets taken from two sets of users with different features. The first dataset describes real-world QoS measurements, including both response time and throughput values, obtained from 339 users on 5,825 web services. It includes two types of tables: one with user data such as User ID, IP Address, Country, Continent, AS, Latitude, Longitude, Region, and City, and one with web service data consisting of features such as Service ID, WSDL Address, Service Provider, IP Address, Country, Continent, AS, Latitude, Longitude, Region, and City. The second dataset describes real-world QoS measurements from 142 users on 4,500 web services over 64 consecutive time slices (at 15-min intervals). Similarly, this dataset also includes two tables, holding response time values and throughput values. The response time table consists of features such as User ID, Service ID, Time Slice ID, and Response Time (sec); the throughput table consists of features such as User ID, Service ID, Time Slice ID, and Throughput (kbps). Finally, after the data analysis process, the datasets are provided for the further classification process. Algorithm 1 depicts the proposed model algorithm.
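As an illustration of working with the second dataset's layout, the hedged sketch below loads a response-time table with the four columns described above; the file name, whitespace delimiter, and column order are assumptions about the repository's release format, not documented facts.

```python
import pandas as pd

# Columns per the description above: User ID, Service ID, Time Slice ID,
# Response Time (sec). "rtdata.txt" and the whitespace separator are assumed.
rt = pd.read_csv("rtdata.txt", sep=r"\s+", header=None,
                 names=["user_id", "service_id", "time_slice", "rt_sec"])

# Example use: mean response time per service, five fastest services first.
print(rt.groupby("service_id")["rt_sec"].mean().nsmallest(5))
```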


Algorithm 1: Algorithm of the proposed Bi-CSem model


The inputs to the algorithm are the user queries, user clicks, the past historical data of the user, the web service dataset, and the web service keywords; the algorithm yields as output the most relevant web services that capture the requirement of the user based on the user queries. The input user queries, user clicks, and past historical data of the user undergo pre-processing steps such as tokenization, lemmatization, stop word removal, and NER (Named Entity Recognition). Upon pre-processing, terms are extracted and subjected to topic modelling, which is achieved by STM (Structural Topic Model). From the index catalogue of the World Wide Web, the web service indexes are crawled, and a thesaurus is formulated comprising the web service index terms together with terms extracted from the UDDI and WSDL repositories. Semantic alignment is performed between the terms yielded by STM and the terms in the thesaurus. The terms from the formulated semantic network are sent as features for classification of the web service dataset through the XGBoost classifier; meanwhile, dual classification is performed on the web service dataset using XGBoost and GRU (Gated Recurrent Unit). The terms from the common intersection of the top 75% of web services are taken and, together with the intermediate term tree generated from STM, undergo semantic similarity computation, after which the terms are re-ranked. Finally, terms are recommended to the user based on the semantic similarity measure computed upon classification.
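Since the original Algorithm 1 listing is available only as a figure, the following is a hedged, runnable sketch of the dual-classification and top-75% intersection steps just described; the synthetic data shapes, hyperparameters, and variable names are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from xgboost import XGBClassifier
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense

rng = np.random.default_rng(0)
X_sem = rng.random((200, 50))          # semantic-network features (synthetic)
y = rng.integers(0, 2, 200)            # relevance labels (synthetic)
X_ds = rng.random((120, 50))           # flat features of candidate services
T, F = 10, 16                          # sequence length / feature dimension
X_seq = rng.random((200, T, F))        # sequence view consumed by the GRU
X_seq_ds = rng.random((120, T, F))

# Classifier 1: XGBoost, fed the semantic-network-derived features.
xgb = XGBClassifier(n_estimators=200, learning_rate=0.1)
xgb.fit(X_sem, y)
p_xgb = xgb.predict_proba(X_ds)[:, 1]

# Classifier 2: GRU, learning its own features from the raw sequences.
gru = Sequential([GRU(64, input_shape=(T, F)), Dense(1, activation="sigmoid")])
gru.compile(optimizer="adam", loss="binary_crossentropy")
gru.fit(X_seq, y, epochs=3, batch_size=32, verbose=0)
p_gru = gru.predict(X_seq_ds, verbose=0).ravel()

# Intersection of the top 75% of services under each classifier's ranking.
k = int(0.75 * len(p_xgb))
candidates = set(np.argsort(-p_xgb)[:k]) & set(np.argsort(-p_gru)[:k])
print(len(candidates), "candidate services proceed to SemantoSim re-ranking")
```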

5 Results and Performance Evaluation

Every recommendation system strives to provide users with the best and most efficient suggestions, and the effectiveness of this content-based recommendation system is expected to meet that aim. Recall, precision, accuracy, F-measure, False Discovery Rate (FDR), and normalized Discounted Cumulative Gain (nDCG) are the chosen measures. Precision is defined as the ratio of correctly predicted items to all items the model predicted, while accuracy refers to how well the model predicts the correct category. Recall is the ratio of what the model correctly predicted to the actual labels (classes), and the F-measure captures the relevance of the results as the harmonic mean of recall and precision. The false discovery rate (FDR) is the rate of false positives in null hypothesis testing when performing multiple comparisons, and nDCG is a ranking-quality metric frequently used to assess the performance of web search engine algorithms and similar applications. Hence, the lower the FDR and the higher the nDCG, the better the approach. Recall, precision, accuracy, F-measure, and FDR are used in their standard forms.

The proposed Bi-CSem approach for web service recommendation is evaluated using precision, recall, accuracy, and F-measure percentages, the FDR, and the nDCG as the preferred metrics. Precision, recall, and F-measure indicate the relevance of the results, while the FDR and nDCG indicate the number of false positives furnished and the diversity of the recommendations, respectively. To compare its performance, the proposed Bi-CSem model is baselined against the WSRCF [2], DKWF [3], and HCFA [4] models, and a variation combining SVM with cosine similarity and K-means clustering is used as a further benchmark, as shown in Table 1. Table 1 indicates that the Bi-CSem model has the highest precision, recall, accuracy, and F-measure, the lowest FDR, and a very high nDCG value compared to the baseline models. The proposed Bi-CSem furnishes an average precision of 94.37%, an average recall of 96.18%, an average accuracy of 95.27%, an average F-measure of 95.26%, an FDR of 0.06, and an nDCG of 0.98.

Table 1. Comparison of performance of the proposed Bi-CSem with other approaches

| Search Technique                  | Average Precision % | Average Recall % | Average Accuracy % | F-Measure % | FDR  | nDCG |
|-----------------------------------|---------------------|------------------|--------------------|-------------|------|------|
| WSRCF [2]                         | 76.15               | 78.22            | 77.18              | 77.17       | 0.24 | 0.84 |
| DKWF [3]                          | 88.15               | 91.15            | 89.65              | 89.62       | 0.12 | 0.95 |
| HCFA [4]                          | 86.17               | 90.12            | 88.14              | 88.1        | 0.14 | 0.85 |
| SVM + Cosine Similarity + K-Means | 86.32               | 88.12            | 87.22              | 87.21       | 0.14 | 0.84 |
| Proposed Bi-CSem                  | 94.37               | 96.18            | 95.27              | 95.26       | 0.06 | 0.98 |

The reason the proposed Bi-CSem model outperforms all the baseline models is mainly that it hybridizes two classification models: a deep learning model, the Gated Recurrent Unit, and the XGBoost machine learning model. Moreover, structural topic modelling ensures that novel topics are added into the localized framework. Apart from this, the features extracted from UDDI and WSDL to formulate the thesaurus, together with the feature extraction from the web service index, ensure that a very high density of auxiliary or supplementary knowledge is introduced into the framework. Most importantly, the use of concept similarity, the SemantoSim measure, and KL divergence for computing semantic similarity ensures that the regulatory mechanism for relevance computation is quite strong. The combination of a machine learning boosting algorithm like XGBoost and the deep-learning-driven Gated Recurrent Units ensures that the classification is quite concrete, and only the common instances yielded by both classifiers are used for further recommendation. The nDCG value is the highest, at 0.98, mainly because Bi-CSem incorporates features from the web service index, WSDL, and UDDI, and because the introduction of STM (structural topic modelling) ensures the highest nDCG compared to the baseline models.

Figure 2 depicts the precision versus the number of instances, as recorded in Table 1. It is very clear that, regardless of the number of instances, the proposed Bi-CSem approach has the highest precision percentage, while the WSRCF [2], DKWF [3], and HCFA [4] models, as well as the variation combining SVM with cosine similarity and K-means clustering, exhibit much lower precision versus the number of instances.

Fig. 2. Precision percentage vs no. of instances

The WSRCF model yields an average precision of 76.15%, an average recall of 78.22%, an average accuracy of 77.18%, an average F-measure of 77.17%, an average FDR of 0.24, and an average nDCG of 0.84. The reason WSRCF yields the lowest precision, recall, accuracy, and F-measure percentages, with a very high FDR of 0.24, is mainly that it is based on hybridizing a collaborative filtering model with the Pearson coefficient model: WSRCF hybridizes the Pearson coefficient with collaborative filtering, and a normal recovery collaborative filtering similarity measure is deduced along with prediction of the correlation coefficient. However, the Pearson coefficient is quite naïve, and collaborative filtering always requires the rating matrix: the rating has to be known for individual web services, and it is quite difficult to obtain ratings for all the web services available on the World Wide Web at present. As a result, the collaborative filtering framework is not always favorable, and, most importantly, relevance does not increase with ratings. Ratings indicate popularity within the community, but using ratings to deduce the most query-relevant items is not the right way to approach the problem. Moreover, the Pearson coefficient along with the normal recovery correlation coefficient measure is quite weak in WSRCF, and hence it lags behind.

The DKWF model yields an average precision of 88.15%, an average recall of 91.15%, an average accuracy of 89.65%, an average F-measure of 89.62%, an average FDR of 0.12, and an average nDCG of 0.95. This is because it is a deep neural network model with a mashup computation matrix that amalgamates both knowledge graphs and knowledge representation techniques, with features extracted into a knowledge graph. Due to the presence of the knowledge graph, the nDCG value is very high because sufficient auxiliary knowledge is supplied, and the deep learning model also performs quite well; however, there is still scope for improving the final relevance of the recommendations.

The HCFA model yields an average precision of 86.17%, an average recall of 90.12%, an average accuracy of 88.14%, an average F-measure of 88.1%, an average FDR of 0.14, and an average nDCG of 0.85.


The HCFA method hybridizes collaborative filtering with an attention CNN model. Collaborative filtering is based on matrix factorization; however, it requires ratings, and web services on the real World Wide Web will not always be rated, so depending on this method is not quite wise. Nevertheless, the computation of the mashup-service invocation matrix along with the attention CNN model ensures that this method performs well in terms of relevance computation. There is still scope for improvement by integrating better semantic similarity models, and, owing to the lack of auxiliary knowledge, the model does not yield a very high nDCG value. The SVM with cosine similarity and K-means clustering model yields an average precision of 86.32%, an average recall of 88.12%, an average accuracy of 87.22%, an average F-measure of 87.21%, an average FDR of 0.14, and an average nDCG of 0.84. This model also does not yield very high precision, recall, accuracy, and F-measure percentages. The hybridization of a classification model with a semantic similarity model and a clustering model ensures that the precision, recall, accuracy, and F-measure percentages reach intermediate values, with an intermediate nDCG; the nDCG falls short due to the lack of auxiliary knowledge fed into the system, although the semantic similarity models ensure that a decently good number of features is incorporated into the approach. Bi-CSem performs better than these semantic-similarity-driven models because it incorporates two classification models, which is why it is picked as the ultimate model for web service recommendation.

6 Conclusion

In this paper, the results obtained with the proposed Bi-CSem model prove its potential for web service recommendation. The collected queries are pre-processed to obtain the query terms, and a thesaurus is formulated using web service repositories like UDDI and WSDL. The terms collected from both undergo semantic alignment using concept similarity, KL divergence, and the SemantoSim measure. The terms collected from the semantic network are classified using XGBoost, while the web service dataset undergoes dual classification using XGBoost and GRU; finally, the intersection of both sets of classified terms is computed using SemantoSim alone and re-ranked for recommendation. The suggestions are based on the user's input queries, the web service dataset, and web service repositories, which are precisely linked to give greater accuracy and F-measure than the other baseline models. Experiments on a real-world dataset show that the proposed method achieves better performance than several state-of-the-art web service recommendation methods. For the best performance, the XGBoost and GRU algorithms were implemented for the classification of the terms from the semantic network and of the web service dataset, respectively. The improved results were shown using accuracy, precision, recall, F-measure, and support. As a result, the Bi-CSem model has a 95.27% overall accuracy and a low FDR of 0.06, making it a best-in-class system for web service recommendation.


References

1. Balaji, B.S., Balakrishnan, S., Venkatachalam, K., Jeyakrishnan, V.: Automated query classification based web service similarity technique using machine learning. J. Ambient. Intell. Humaniz. Comput. 12(6), 6169–6180 (2020). https://doi.org/10.1007/s12652-020-02186-6
2. Peerzade, S.S.: Web service recommendation using collaborative filtering. Int. Res. J. Eng. Technol. (IRJET) 4(06), 2567 (2017)
3. Dang, D., Chen, C., Li, H., Yan, R., Guo, Z., Wang, X.: Deep knowledge-aware framework for web service recommendation. J. Supercomput. 77(12), 14280–14304 (2021). https://doi.org/10.1007/s11227-021-03832-2
4. Ke, J., Xu, J., Meng, X., Huang, Q.: Hybrid collaborative filtering with attention CNN for web service recommendation. In: 2019 3rd International Conference on Data Science and Business Analytics (ICDSBA), pp. 44–52. IEEE (2019)
5. Yao, L., Sheng, Q.Z., Ngu, A.H., Yu, J., Segev, A.: Unified collaborative and content-based web service recommendation. IEEE Trans. Serv. Comput. 8(3), 453–466 (2014)
6. Xiong, R., Wang, J., Zhang, N., Ma, Y.: Deep hybrid collaborative filtering for web service recommendation. Expert Syst. Appl. 110, 191–205 (2018)
7. Li, S., Wen, J., Luo, F., Gao, M., Zeng, J., Dong, Z.Y.: A new QoS-aware web service recommendation system based on contextual feature recognition at server-side. IEEE Trans. Netw. Serv. Manage. 14(2), 332–342 (2017)
8. Peerzade, S.S.: Web service recommendation using PCC based collaborative filtering. In: 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 2920–2924. IEEE (2017)
9. Subbulakshmi, S., Ramar, K., Shaji, A., Prakash, P.: Web service recommendation based on semantic analysis of web service specification and enhanced collaborative filtering. In: Thampi, S., Mitra, S., Mukhopadhyay, J., Li, K.C., James, A., Berretti, S. (eds.) Intelligent Systems Technologies and Applications. ISTA 2017. AISC, vol. 683, pp. 54–65. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68385-0_5
10. Liu, X., Fulia, I.: Incorporating user, topic, and service related latent factors into web service recommendation. In: 2015 IEEE International Conference on Web Services, pp. 185–192. IEEE (2015)
11. Ali, S., Gul, S.: Search engine effectiveness using query classification: a study. Online Information Review (2016)
12. Zhou, S., Cheng, K., Men, L.: The survey of large-scale query classification. In: AIP Conference Proceedings, vol. 1834, no. 1, p. 040045. AIP Publishing LLC (2017)
13. Zhang, H., Song, W., Liu, L., Du, C., Zhao, X.: Query classification using convolutional neural networks. In: 2017 10th International Symposium on Computational Intelligence and Design (ISCID), vol. 2, pp. 441–444. IEEE (2017)
14. Shi, Y., Yao, K., Tian, L., Jiang, D.: Deep LSTM based feature mapping for query classification. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1501–1511 (2016)
15. Gupta, D., Berberich, K.: Temporal query classification at different granularities. In: International Symposium on String Processing and Information Retrieval, pp. 156–164 (2015)
16. Deepak, G., Priyadarshini, J.S.: Personalized and enhanced hybridized semantic algorithm for web image retrieval incorporating ontology classification, strategic query expansion, and content-based analysis. Comput. Electr. Eng. 72, 14–25 (2018)
17. Ojha, R., Deepak, G.: Metadata driven semantically aware medical query expansion. In: Villazón-Terrazas, B., Ortiz-Rodríguez, F., Tiwari, S., Goyal, A., Jabbar, M. (eds.) Knowledge Graphs and Semantic Web. KGSWC 2021. CCIS, vol. 1459, pp. 223–233. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-91305-2_17
18. Deepak, G., Teja, V., Santhanavijayan, A.: A novel firefly driven scheme for resume parsing and matching based on entity linking paradigm. J. Disc. Math. Sci. Crypto. 23(1), 157–165 (2020)
19. Varghese, L., Deepak, G., Santhanavijayan, A.: An IoT analytics approach for weather forecasting using Raspberry Pi 3 Model B+. In: 2019 Fifteenth International Conference on Information Processing (ICINPRO), pp. 1–5 (2019)
20. Rithish, H., Deepak, G., Santhanavijayan, A.: Automated assessment of question quality on online community forums. In: Motahhir, S., Bossoufi, B. (eds.) Digital Technologies and Applications. ICDTA 2021. LNNS, vol. 211, pp. 791–800. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73882-2_72
21. Deepak, G., Santhanavijayan, A.: OntoBestFit: a best-fit occurrence estimation strategy for RDF driven faceted semantic search. Comput. Commun. 160, 284–298 (2020)
22. Surya, D., Deepak, G.: USWSBS: user-centric sensor and web service search for IoT application using bagging and sunflower optimization. In: Noor, A., Sen, A., Trivedi, G. (eds.) Proceedings of Emerging Trends and Technologies on Intelligent Systems. ETTIS 2021. AISC, vol. 1371. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-3097-2_29
23. Pushpa, C.N., Deepak, G., Thriveni, J., Venugopal, K.R.: A hybridized framework for ontology modeling incorporating latent semantic analysis and content based filtering. Int. J. Comput. Appl. 150(11), 33–41 (2016)
24. Mageswari, S.U., Mala, C., Santhanavijayan, A., Deepak, G.: A non-collaborative approach for modeling ontologies for a generic IoT lab architecture. J. Inf. Optimiz. Sci. 41(2), 395–402 (2020)

HybRDFSciRec: Hybridized Scientific Document Recommendation Framework

Divyanshu Singh1 and Gerard Deepak2(B)

1 Birla Institute of Technology and Science, Pilani, India
2 Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India
[email protected]

Abstract. The Internet has a vast collection of information from different domains, including scientific knowledge in books, journals, and conference proceedings. When a user issues a query, retrieval systems typically return documents based on keywords that are, in most cases, irrelevant to the user, so there is a need to retrieve scientific knowledge in a more knowledge-driven manner. This paper proposes a knowledge-centric framework for the recommendation of scientific documents. The recommendation is user-query-centric and uses Lin similarity for term enrichment. Preprocessing is done by tokenization, lemmatization, stop word removal, and Named Entity Recognition (NER). Normalized Google Distance and Normalized Pointwise Mutual Information are used to compute semantic similarities to achieve ontology alignment. The final solution set is obtained using the Flying Fox Algorithm, and HybRDFSciRec achieves best-in-class accuracy and a high precision percentage over a wide range of recommendations compared with the other baseline models, making it an efficient system for recommending scientific documents.

Keywords: OntoCollab · StarDog and Protégé · NER · Lin similarity · SVD · RDF · NPMI · NGD · Flying fox algorithm

1 Introduction

A scientific paper is a written report describing original research results and presenting research findings, written by researchers and scientists. Such papers are generally considered primary sources and are written for other researchers. These reports are critical to the evolution of modern science, in which the work of one scientist builds upon that of others. As the Internet has revolutionized knowledge acquisition, scientific knowledge in books, journals, and conference proceedings is available in digital libraries online, and obtaining the desired scientific documents has become difficult. Document retrieval matches a stated user query against a set of free-text records. Since scientific documents are scattered and the amount of content available on the Internet has increased dramatically, standard retrieval systems and search engines struggle with the retrieval of such documents. Although they can do it to an extent, it is challenging because scientific indexing terminologies are complicated. The overabundance has resulted in a spot of bother for the end user, who generally ends up with a handful of results that may or may not suit his needs. So, there is a need to retrieve documents relevant to the user's needs. The proposed system enables a semantic approach to scientific document retrieval for knowledge-based retrieval.

Motivation: The World Wide Web is moving towards Web 3.0, the semantic web, where knowledge is represented in the form of domains. The semantic web is a highly cohesive structure of the World Wide Web, and domain-based organization of knowledge is its basis; scientific document retrieval can therefore become much easier on the semantic web. Since not many strategies exist for scientific document retrieval, there is a need for knowledge-based retrieval of scientific documents.

Contribution: A knowledge-centric scientific document recommendation framework, HybRDFSciRec, is proposed for the recommendation of scientific documents. The framework incorporates a generated Resource Description Framework (RDF) with the help of an RDF distiller and uses term enrichment, which is done by computing Lin similarity. The tools OntoCollab, StarDog, and Protégé are used to synthesize the domain ontologies, which are composed of terms from many different scientific domains. Semantic similarity is computed using the Normalized Google Distance (NGD) and Normalized Pointwise Mutual Information (NPMI). SVD (Singular Value Decomposition) is used for the computation of informative terms, and the flying fox algorithm is used to find the solution sets.

Organization: The remaining paper is organized as follows: Sect. 2 depicts related works, Sect. 3 depicts the proposed system architecture, Sect. 4 depicts results, and the paper is concluded in Sect. 5.
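For reference, NPMI normalizes pointwise mutual information to the range [-1, 1]; a minimal sketch of the standard definition follows (the probability estimates themselves are left to the caller, and the example numbers are invented).

```python
import math

def npmi(p_xy, p_x, p_y):
    """Normalized Pointwise Mutual Information of two terms, in [-1, 1]:
    PMI(x, y) = log(p(x,y) / (p(x) p(y))), normalized by -log p(x,y)."""
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

# Example with invented co-occurrence probabilities.
print(npmi(0.02, 0.1, 0.1))   # positive: terms co-occur more than chance
```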

2 Related Works

Tian et al. [1] present a retrieval method for scientific documents based on HFS (Hesitant Fuzzy Sets) and BERT (Bidirectional Encoder Representations from Transformers), using multi-attribute decision making and context-dependent similarity calculation. They use the similarity of mathematical expressions, analyzing the expressions and calculating the membership degree of symbolic multi-attributes, and calculate context similarity using BERT. Pakray et al. [2] present an architecture for scientific document retrieval that aims to increase retrieval quality by handling natural-language variations that express semantically the same content in texts and formulae. They showed that the sole use of distributional semantics for semantic textual entailment decisions at the sentence level is surprisingly good. Pathak et al. [3] proposed a Math Information Retrieval system (MathIRs) using various similarity modules and a substitution-tree-based mechanism for indexing mathematical expressions. They also presented experimental results for similarity search and suggested that the system will ease the task of scientific document retrieval. Lucarella et al. [4] presented a document retrieval system based upon the vector processing model. The system employs an automatic indexing procedure with a weighting scheme to reflect term importance. Documents are stored using an inverted file organization. Natural-language queries are supported with a retrieval strategy based on best-match techniques and relevance feedback. Sugathadasa et al. [5] discussed three alternative models with vector space representations of the legal domain and two different processes for document vector production; incorporating semantic word metrics and natural language processing techniques, the project works to represent legal case documents in various vector spaces. Amami et al. [6] developed a hybrid method for recommending scientific papers that combines probabilistic topic-modeling-based content analysis with the principles of collaborative filtering and a relevance-based language model. Lai et al. [7] suggest a brand-new document recommendation technique based on a model of group trust. It analyses the levels of user trust in a group and then pinpoints the reliable users. The hybrid personal trust (HPT) model and users' standing in the group make up the suggested group trust; user-based collaborative filtering is then combined with group-based trust to propose documents to users. In [8–22], several frameworks in support of the proposed model have been depicted and discussed.

3 Proposed System Architecture

Figure 1 depicts the proposed system architecture for an RDF-driven, ontology-synthesized, knowledge-centric scientific document recommendation framework. The framework takes the user query as the primary input, which is subjected to preprocessing. Preprocessing involves tokenization, lemmatization, stop word removal and Named Entity Recognition (NER). Tokenization is done using a blank-space special-character tokenizer; for lemmatization, the WordNet lemmatizer has been used; for stop word removal, a customized regular-expression-based stop word removal algorithm has been incorporated; and for Named Entity Recognition (NER), GATE (General Architecture for Text Engineering) has been integrated. Query preprocessing yields the query words, which are then subjected to term enrichment; term enrichment is done by computing the Lin similarity with the domain ontologies that have been modelled and generated. The domain ontologies are synthesized using three tools, namely OntoCollab, StarDog and Protégé. OntoCollab is used to automatically generate the domain ontologies for the scientific domains participating in the framework. StarDog is also used to automatically synthesize ontologies from existing data, and Protégé is used for manually modelling ontologies; however, only 7.3% of the ontologies are manually modelled, and the rest of the ontologies used in the framework were generated using OntoCollab and StarDog. OntoCollab generated 68.44% of the ontologies, and the remaining ontologies were synthesized in the StarDog framework. The ontologies are composed of domain terms from thermodynamics, heat and mass transfer, power and energy, antennas, food and nutrition, geology, zoology, botany, organic chemistry, proteomics and genomics, and instrumentation technology. Several blogs, eBooks, index terms available on the world wide web, and scientific articles were all used, and their keywords were extracted to formulate the ontologies using the tools OntoCollab, StarDog and Protégé. Term enrichment is done in order to reduce the cognitive gap between the existential domain knowledge and the query: by enriching terms, the cognitive and semantic gap between the domain knowledge and the query is reduced. However, term enrichment alone is insufficient, since much more


Fig. 1. Proposed system architecture design for the HybRDFSciRec framework

constituent community-verified knowledge is available on the web. This knowledge aggregation is done by writing SPARQL endpoints to the Linked Open Data (LOD) cloud and by including entities from the WikiData API. Knowledge aggregation ensures that the term set is enriched with many more relevant entities, and entity population is facilitated into the framework. The dataset comprises 4800 documents belonging to the scientific domain; most of these are eBooks, journals and conference papers crawled from Google Scholar, the RARD2 dataset and other document datasets. The eBooks were randomly spliced for domains like thermodynamics, heat and mass transfer, power and energy, antennas, food and nutrition, geology, zoology, botany, organic chemistry, proteomics and genomics, and instrumentation technology, and range from high school level to highly specialized graduate and postgraduate levels. Irrespective of document size, the document dataset was formalized with almost 27200 documents, and all the documents are indexed using the scientific index words annotated and formulated. Since the indexed document dataset is extensively large, to ensure the most relevant terms, the RDF subject-object co-occurrence as well as the RDF itself is taken into consideration. Using the documents, the RDF is generated with the help of the RDF distiller; the RDF is in triadic format, consisting of <subject, predicate, object> triples. The subject and object alone are considered, because the predicate can be of heterogeneous type. The subject-object co-occurrence matrix is formulated; similarly, the RDF matrix, which is a term frequency matrix analogous to the TF-IDF model, is formulated. Singular Value Decomposition is applied such that the matrices are reduced to vectors, the informative terms are identified in the vectors, and the documents containing these informative terms are gathered along with the informative terms for further processing. These informative terms are computed using SVD (Singular Value Decomposition), and semantic similarity is computed using the Normalized Google Distance (NGD) and Normalized Pointwise Mutual Information (NPMI), with a threshold of 0.5, between the informative terms yielded and the aggregated knowledge driven by the user query, in order to populate the term set. To optimize this, the semantic


similarity computation is run under the flying fox algorithm, such that the most suitable solutions are yielded from the feasible solution set and are ranked and recommended to the user in increasing order of the NGD measure. The documents containing these terms are yielded; if the user is satisfied, the search halts, and if the user is not satisfied, the user's clicks are recorded, the terms in the clicked document (frequent terms and informative terms) are sent as the query, and the entire process continues until no further user clicks are recorded.

The Singular Value Decomposition (SVD) of a matrix is a factorization of the matrix into three matrices as $A = UWV^T$. It has algebraic properties which also convey important geometrical and theoretical insights about linear transformations. Here, U and V are orthogonal matrices with orthonormal eigenvectors chosen from $AA^T$ and $A^T A$, respectively. W is a diagonal matrix with r elements equal to the square roots of the positive eigenvalues of $AA^T$ or $A^T A$; the diagonal elements are the singular values.

The Normalized Google Distance (NGD) is a semantic similarity measure, calculated based on the number of hits returned by Google for a set of keywords. If keywords have many pages in common relative to their respective, independent frequencies, then these keywords are thought to be semantically similar. If two search terms x and y never occur together on the same web page, but do occur separately, the NGD between them is infinite. Conversely, if both terms always occur together, and only occur together, their NGD is zero. Specifically, the Normalized Google Distance (NGD) between two search terms x and y is defined in Eq. (1):

NGD(x, y) = \frac{\max\{\log f(x), \log f(y)\} - \log f(x, y)}{\log N - \min\{\log f(x), \log f(y)\}}    (1)
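As a concrete illustration, the following minimal Python sketch evaluates Eq. (1). It assumes the hit counts f(x), f(y), f(x, y) and the index size N are supplied by the caller (e.g., obtained from a search engine API); the paper does not specify how these counts are retrieved.

```python
import math

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google Distance between two terms (Eq. 1).

    fx, fy -- number of pages containing term x and term y, respectively
    fxy    -- number of pages containing both terms
    n      -- total number of pages indexed by the search engine
    """
    if fxy == 0:
        return float("inf")  # terms never co-occur: NGD is infinite
    numerator = max(math.log(fx), math.log(fy)) - math.log(fxy)
    denominator = math.log(n) - min(math.log(fx), math.log(fy))
    return numerator / denominator
```

Terms whose NGD-based similarity (together with NPMI, discussed next) crosses the 0.5 threshold are admitted into the enriched term set.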

To characterize the co-occurrence structure of semantic tags, we compute pointwise mutual information (PMI), a measure of association between two features. PMI is a rank measure commonly used in text mining for collocation extraction, i.e., identifying words that co-occur together more often than random, indicating a shared meaning, like "hot tea" and "crystal clear". Because PMI is a rank measure, there is no level of significance or accepted cutoff to use for co-occurring terms; however, the normalized variant, NPMI, which is more easily interpretable and less sensitive to tag frequency, yields a continuous value between −1 and 1. An NPMI greater than zero indicates a co-occurrence with greater probability than chance, with increasing significance of the probability as the NPMI value approaches 1. Here, the NPMI threshold is set to 0.5, and only positive values between 0 and 1 are considered for semantic similarity.

The flying fox algorithm is an effective hybrid algorithmic structure, combining operators from existing algorithms. To start applying the algorithm, we first define the problem bounds, its dimensions, and the termination criterion. Then the population size and survival list are calculated based on the problem dimensions; the survival list is the set of new solutions generated in a suitable search space region. We initialize the randomly generated population, evaluate each solution, and identify the best and worst solutions. If the termination criterion is met at this point, we have the required value. If not, we set i = 1 for the solution set, calculate the parameters for this i value, and update the position of the solution set using Eqs. (2), (3), and (4):

x_{i,j}^{t+1} = x_{i,j}^{t} + a \cdot rand \cdot (cool_j - x_{i,j}^{t})    (2)

nx_{i,j}^{t+1} = x_{i,j}^{t} + rand_{1,j}\,(cool_j - x_{i,j}^{t}) + rand_{2,j}\,(x_{R_1,j}^{t} - x_{R_2,j}^{t})    (3)

x_{i,j}^{t+1} = \begin{cases} nx_{i,j}^{t+1}, & \text{if } j = k \text{ or } rnd_j \geq p_a \\ x_{i,j}^{t}, & \text{otherwise} \end{cases}    (4)

Evaluate the new solution set, and if this new solution is better than the previous one, accept it, update the best and worst-case situations, and set i = i + 1. If the new solution is not better, check whether it is a worst-case situation. If it is, replace the solution set through the survival list using Eq. (5), further evaluate the new solution set, update the new best and worst-case situations, and set i = i + 1; if it is not the worst-case situation, simply set i = i + 1.

x_{i,j}^{t+1} = \frac{\sum_{k=1}^{n} SL_{k,j}}{n}    (5)

pD = \frac{nc - 1}{\text{population size}}    (6)

Here, nc is the number of solutions having the same objective function value as the best solution found so far. Now, if the value of i is less than the population size, start the process again from the calculation of parameters for the solution set; if it is not, update pD and replace the solution set according to pD using Eq. (5) with a probability of 0.5. In the next step, evaluate the new solution set and update the best and worst-case solutions; at this stage, if the termination criterion is met, then this is the best value, and if it is not, start the process again with i = 1.
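To make the control flow above concrete, the following Python sketch loosely illustrates one iteration of the position updates in Eqs. (2)–(5). It is a simplified illustration under our own assumptions (a minimization objective, uniform random numbers, a fixed step parameter a and crossover probability pa), not the authors' implementation; the fuzzy parameter tuning and the pD-driven replacement of Eq. (6) are omitted.

```python
import random

def flying_fox_step(pop, fitness, cool, a, pa, survival_list):
    """One loosely sketched flying fox iteration (Eqs. 2-5).

    pop           -- list of candidate solutions (lists of floats)
    fitness       -- objective function to minimize
    cool          -- best ("coolest") solution found so far
    a             -- step-size parameter of Eq. (2)
    pa            -- crossover probability of Eq. (4)
    survival_list -- pool of good past solutions used for replacement (Eq. 5)
    """
    dim = len(cool)
    for i, x in enumerate(pop):
        # Eq. (2): move towards the best position found so far
        moved = [x[j] + a * random.random() * (cool[j] - x[j]) for j in range(dim)]
        # Eq. (3): candidate built from the best solution and two random foxes
        r1, r2 = random.sample(pop, 2)
        nx = [x[j] + random.random() * (cool[j] - x[j])
                   + random.random() * (r1[j] - r2[j]) for j in range(dim)]
        # Eq. (4): dimension-wise crossover between the candidate and x
        k = random.randrange(dim)
        cand = [nx[j] if (j == k or random.random() >= pa) else x[j]
                for j in range(dim)]
        new = min(moved, cand, key=fitness)
        if fitness(new) < fitness(x):       # accept only improving moves
            pop[i] = new
        else:                               # Eq. (5): replace from the survival list
            pop[i] = [sum(s[j] for s in survival_list) / len(survival_list)
                      for j in range(dim)]
    return pop
```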

4 Results and Performance Evaluation

The performance of the proposed Hybridized Scientific Document Recommendation Framework is evaluated using Precision, Recall, Accuracy and F-measure percentages and the false discovery rate (FDR) as potential metrics. Table 1 indicates that HybRDFSciRec yields the highest precision percentage of 94.89, the highest average recall percentage of 97.08, the highest accuracy of 95.98%, and the highest F-measure of 95.97%, with the lowest FDR of 0.05. The proposed HybRDFSciRec is baselined against the VEDL [5], GBSPR [6] and DRGTUW [7] frameworks in order to compare its qualitative performance with that of the other frameworks. VEDL yields an overall average precision of 87.36%, an overall average recall percentage of 90.22, an overall accuracy of 88.79% and an overall F-measure percentage of 88.77, with an FDR of 0.12, whereas GBSPR yields an overall average precision of 89.44%, an overall average recall percentage of 91.78, an overall accuracy of 90.61% and an overall F-measure percentage of 90.59, with an FDR of 0.1, and DRGTUW yields an overall average precision of 90.23%, an overall average recall percentage of 92.69, an overall accuracy of 91.46% and an overall F-measure percentage of 91.44, with an FDR of 0.1 (Fig. 2). The reason why HybRDFSciRec yields the highest Precision, Recall, Accuracy and F-measure percentages with the lowest FDR is mainly the fact that it is driven by RDF.


Table 1. Comparison of performance of the proposed HybRDFSciRec with other approaches

Model                  | Average Precision % | Average Recall % | Accuracy % | F-Measure | FDR
VEDL [5]               | 87.36               | 90.22            | 88.79      | 88.77     | 0.12
GBSPR [6]              | 89.44               | 91.78            | 90.61      | 90.59     | 0.1
DRGTUW [7]             | 90.23               | 92.69            | 91.46      | 91.44     | 0.1
Proposed HybRDFSciRec  | 94.89               | 97.08            | 95.98      | 95.97     | 0.05

Fig. 2. Recall % vs Number of recommendations of the proposed HybRDFSciRec and other baseline models

RDF is used to enrich the auxiliary knowledge for the document dataset. Apart from the RDF co-occurrence model, singular value decomposition is used to yield the informative terms; most importantly, relevance computation takes place by computing the Normalized Google Distance (NGD) and NPMI as semantic similarity measures, and a large density of auxiliary knowledge, apart from the RDF, is fed in as domain ontologies for term enrichment and as knowledge aggregated from the LOD cloud and WikiData. The large variety of auxiliary knowledge fed into the framework, the strong semantic similarity computation mechanisms like NGD and NPMI, and the yielding of the optimal set using flying fox optimization ensure that the proposed HybRDFSciRec framework performs much better than the baseline models. The reason why VEDL does not perform at its best is that, although it uses document vector embeddings in deep learning, it depends on the dataset alone; learning happens only from the dataset. Auxiliary knowledge is neglected and strong relevance computation mechanisms are absent, so the vector space model becomes insufficient although deep learning has been incorporated. The reason why GBSPR, which is a graph-based approach for scientific document recommendation, lags is that latent topic leveraging takes place using the social structure of researchers. Some amount of probabilistic topic modeling with collaborative filtering is used, and no language


model is used. Collaborative filtering requires ratings, and not every scientific document can be rated; even when rated, it is only partially rated, and probabilistic modeling alone cannot be relied upon, so strong relevance computation mechanisms are absent from this method and hence the GBSPR model lags. The reason why the DRGTUW framework also does not perform as expected is that it is again based on a group trust model where collaborative filtering is used, so hybrid personal trust, i.e., user activity, similarity and reputation, matters. Trust among users and an overall rating based on intuition cannot normally be considered for relevance computation. Its semantic similarity computation methods work, but auxiliary knowledge is absent. Hence this model does not perform up to the mark.

5 Conclusion

A knowledge-centric framework model infused with artificial-intelligence-driven skills, HybRDFSciRec, is proposed to recommend scientific documents. The model is based on RDF and tested on 5124 queries whose ground truth has been collected. The results show that synthesized ontologies and knowledge-centric retrieval can significantly improve retrieval effectiveness, and they highlight these methods' importance in building retrieval frameworks. The proposed framework results in an average accuracy of 95.98% and yields much better results than the other baseline models, making it an efficient, ontologically driven framework for scientific document recommendation. The proposed framework is driven by knowledge, and the constituent knowledge forms its core, not only assimilating scientific knowledge but ensuring recommendations that are not just accurate and precise but also relevant and diversified.

References

1. Tian, X., Wang, J.: Retrieval of scientific documents based on HFS and BERT. IEEE Access 9, 8708–8717 (2021)
2. Toninelli, A., Bradshaw, J., Kagal, L., Montanari, R.: Rule-based and ontology-based policies: toward a hybrid approach to control agents in pervasive environments. In: Proceedings of the Semantic Web and Policy Workshop, November 2005
3. Santos, O.C., Boticario, J.G.: Requirements for semantic educational recommender systems in formal e-learning scenarios. Algorithms 4(2), 131–154 (2011)
4. Chung, H., Kim, J.: An ontological approach for semantic modeling of curriculum and syllabus in higher education. Int. J. Inform. Educ. Technol. 6(5), 365 (2016)
5. Sugathadasa, K., et al.: Legal document retrieval using document vector embeddings and deep learning. In: Science and Information Conference, pp. 160–175. Springer, Cham, July 2018. https://doi.org/10.1007/978-3-030-01177-2_12
6. Amami, M., Faiz, R., Stella, F., Pasi, G.: A graph based approach to scientific paper recommendation. In: Proceedings of the International Conference on Web Intelligence, pp. 777–782, August 2017
7. Lai, C.H., Chang, Y.C.: Document recommendation based on the analysis of group trust and user weightings. J. Inf. Sci. 45(6), 845–862 (2019)


8. Aditya, S., Muhil Aditya, P., Deepak, G., Santhanavijayan, A.: IIMDR: intelligence integration model for document retrieval. In: International Conference on Digital Technologies and Applications, pp. 707–717. Springer, Cham, January 2021. https://doi.org/10.1007/978-3-030-73882-2_64
9. Surya, D., Deepak, G.: USWSBS: user-centric sensor and web service search for IoT application using bagging and sunflower optimization. In: International Conference on Emerging Trends and Technologies on Intelligent Systems, pp. 349–359. Springer, Singapore, March 2021. https://doi.org/10.1007/978-981-16-3097-2_29
10. Deepak, G., Surya, D., Trivedi, I., Kumar, A., Lingampalli, A.: An artificially intelligent approach for automatic speech processing based on triune ontology and adaptive tribonacci deep neural networks. Comput. Electr. Eng. 98, 107736 (2022)
11. Chhatwal, G.S., Deepak, G.: IEESWPR: an integrative entity enrichment scheme for socially aware web page recommendation. In: Data Science and Security, pp. 239–249. Springer, Singapore (2022)
12. Singh, S., Deepak, G.: Towards a knowledge centric semantic approach for text summarization. In: Data Science and Security, pp. 1–9. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-4486-3_1
13. Deepak, G., Santhanavijayan, A.: QGMS: a query growth model for personalization and diversification of semantic search based on differential ontology semantics using artificial intelligence. Comput. Intell. (2022)
14. Manoj, N., Deepak, G.: ODFWR: an ontology driven framework for web service recommendation. In: Data Science and Security, pp. 150–158. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-4486-3_16
15. Palvannan, S., Deepak, G.: TriboOnto: a strategic domain ontology model for conceptualization of tribology as a principal domain. In: International Conference on Electrical and Electronics Engineering, pp. 215–223. Springer, Singapore (2022). https://doi.org/10.1007/978-981-19-1742-4_18
16. Ojha, R., Deepak, G.: SAODFT: socially aware ontology driven approach for query facet generation in text classification. In: International Conference on Electrical and Electronics Engineering, pp. 154–163. Springer, Singapore (2022)
17. Agrawal, D., Deepak, G.: OntoSpammer: a two-source ontology-based spam detection using bagging. In: Mekhilef, S., Shaw, R.N., Siano, P. (eds.) ICEEE 2022. LNEE, vol. 894, pp. 145–153. Springer, Singapore (2022). https://doi.org/10.1007/978-981-19-1677-9_13
18. Kynshi, L.D.L., Deepak, G., Santhanavijayan, A.: MagnetOnto: modelling and evaluation of standardised domain ontologies for magnetic materials as a prospective domain. Int. J. Intell. Enterp. 8(4), 459–475 (2021)
19. Mohnish, S., Deepak, G., Praveen, S.V., Sheeba Priyadarshini, J.: DKMI: diversification of web image search using knowledge centric machine intelligence. In: Iberoamerican Knowledge Graphs and Semantic Web Conference, pp. 163–177. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-21422-6_12
20. Vishal, K., Deepak, G., Santhanavijayan, A.: An approach for retrieval of text documents by hybridizing structural topic modeling and pointwise mutual information. In: Innovations in Electrical and Electronic Engineering, pp. 969–977. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-0749-3_74
21. Umaa Mageswari, S., Mala, C., Santhanavijayan, A., Deepak, G.: A non-collaborative approach for modeling ontologies for a generic IoT lab architecture. J. Inf. Optim. Sci. 41(2), 395–402 (2020)
22. Kumar, N., Deepak, G., Santhanavijayan, A.: A novel semantic approach for intelligent response generation using emotion detection incorporating NPMI measure. Procedia Comput. Sci. 167, 571–579 (2020)

A Collision Avoidance Method for Autonomous Underwater Vehicles Based on Long Short-Term Memories

László Antal1(B), Martin Aubard2, Erika Ábrahám1, Ana Madureira3, Luís Madureira2, Maria Costa2, José Pinto2, and Renato Campos2

1 RWTH Aachen University, Aachen, Germany {antal,abraham}@cs.rwth-aachen.de
2 OceanScan - Marine Systems and Technology, Lda., Porto, Portugal {maubard,lmad,mariacosta,zepinto}@oceanscan-mst.com
3 Institute of Engineering, Polytechnic of Porto, Porto, Portugal [email protected]

Abstract. Over the past decades, underwater robotics has enjoyed growing popularity and relevance. While performing a mission, one crucial task for Autonomous Underwater Vehicles (AUVs) is bottom tracking, which should keep a constant distance from the seabed. Since static obstacles like walls, rocks, or shipwrecks can lie on the sea bottom, bottom tracking needs to be extended with obstacle avoidance. As AUVs face a wide range of uncertainties, implementing these essential operations is still challenging. A simple rule-based control method has been proposed in [7] to realize obstacle avoidance. In this work, we propose an alternative AI-based control method using a Long Short-Term Memory network. We compare the performance of both methods using real-world data as well as via a simulator.

Keywords: Autonomous underwater vehicles · Obstacle avoidance · Rule-based control · AI-based control · Long short-term memories

1 Introduction

Autonomous Underwater Vehicles (AUVs) face a wide range of complex tasks in the underwater environment. Two of these tasks are bottom tracking and obstacle avoidance. Bottom tracking means that an underwater vehicle needs to maintain a distance to the seafloor as constant as possible. It helps the AUV to gather different types of sensor data more reliably (e.g., side-scan sonars, multibeam sonars, and camera images). However, since the seafloor's surface is uneven and it may happen that some obstacles (rocks, walls, or other static objects) are lying underneath, bottom tracking needs to be extended with obstacle avoidance. In this paper, we consider Light Autonomous Underwater Vehicles (LAUVs) with restricted sensor information. The starting point for our work is a rule-based


control mechanism [7], which is the method currently running on LAUVs. The problem with this method is that it is sensitive to noise; therefore, the decisions are not always reliable. In order to provide more robust decisions, we propose an AI-based control method as an alternative approach. We comparatively evaluate both approaches using the data gathered from real missions as well as using a simulator. The rest of this paper is structured as follows. We discuss related work in Sect. 2 before we present the rule-based and the AI-based controllers in Sects. 3 and 4, respectively. In Sect. 5, we show and analyze the results of our experiments. Finally, Sect. 6 summarizes this work and lists some of the aspects concerned by future work.

2 Related Work

While submerged, an Autonomous Underwater Vehicle (AUV) cannot communicate with or be controlled by an operator. Thus, the AUV needs to understand its environment to act accordingly. To do so, the AUV uses different types of sensors that provide information about the distance from the bottom (altitude), the surface (depth), and the front of the AUV. Most AUVs use a multibeam echo sounder to detect objects in front of the AUV [15]; a multibeam sounder provides 2D horizontal information about a potentially detected object. By processing this data, the AUV can find a safe horizontal path to avoid the possible object [4]. In this work, we consider Light Autonomous Underwater Vehicles (LAUVs) [1], which use a single-beam echo sounder. This sonar gives a single distance value from the AUV to the object and does not provide any information about a potentially safe horizontal path to avoid it. In [3], the authors propose an intelligent single-beam echo sounder to avoid collisions using hybrid automata modeling. As explained later in Sect. 4, our novel AI-based controller should improve the obstacle avoidance and bottom tracking from [7], where the control depends only on the system's current state; in contrast, our AI-based controller also exploits information from the past (time series). Machine Learning (ML) enjoys growing attraction in many fields. Computer vision based on ML has achieved great success in several tasks, such as classification [8,12], object detection [9], and segmentation [9]. These techniques have also been implemented in robots and AUVs [2] to improve the robot's knowledge about its environment. Techniques such as Long Short-Term Memory (LSTM) [5] and Recurrent Neural Networks (RNN) [6] provide accurate predictions based on time series and sequential learning, which is widely used in speech recognition and machine translation. Several works have been conducted to predict a robot's position or behavior based on these ML time-series methods. In [11], the authors propose an RNN method to predict the relative horizontal velocities of an AUV using data from an Inertial Measurement Unit (IMU), a pressure sensor, and control inputs for dead-reckoning navigation. Thanks to the promising results of the RNN implementation for AUV navigation,


the work [16] proposed a deep framework called NavNet, treating AUV navigation as a deep sequential learning problem. However, a typical RNN can face many challenges due to its limitations in memorizing long data sequences, which can affect the past time window used to predict the AUV behavior. The LSTM method, based on long- and short-term memory, outperforms the RNN in this respect [13]. In [14], the authors implement an LSTM-based dead reckoning approach to estimate the surge and sway body-fixed frame velocities when the AUV is submerged.

3 Rule-based Control

Fig. 1. Visualization of the sensor measurements considered in [7].

Fig. 2. The corresponding finite state machine of the method.

In order to tackle the obstacle-avoidance problem, the authors of [7] implement a very simple, yet very effective, rule-based approach. This method works in a reactive way, i.e., at every control cycle (timestamp), it considers the current measurements from specific sensors, and using simple trigonometry, it calculates the steepness of the sea bottom. The sensor values taken into consideration are as follows (see Fig. 1 for further details):

1. Depth measurement d: the vertical distance of the AUV to the water surface, obtained using the depth sensor.
2. Altitude measurement l: the vertical distance of the AUV to the sea bottom, obtained using the multibeam DVL sensor.
3. Forward distance measurement f: the distance to a detected object in the facing direction of the AUV. These values are provided by a single-beam echo sounder sensor, considering a non-zero aperture β of the beam (see Fig. 1).

Based on these three measurements coming from the sensors, the method calculates the steepness α of the sea bottom, and it tries to adapt the pitch


Fig. 3. Illustration of the noise in sensor measurements during a real-world mission. The altitude and depth values coming from the DVL and depth sensors are relatively robust, but the forward distance measurements of the echo sounder are very noisy and, therefore, unsuitable for reactive obstacle avoidance.

using a finite state machine (see Fig. 2) with three states: tracking (when the AUV tries to maintain constant altitude), climbing (when the seafloor is too steep and the vehicle is pitching up), avoiding (when the distance to the object or to the seafloor is too short and the AUV stops the thruster). If the vehicle is in the avoiding state, since the thruster is stopped, the buoyancy pulls the AUV upward until the obstacle in front “disappears” from the echo sounder’s field of view. When that happens, the vehicle goes back to the tracking state. This simple rule-based control can already fulfill the bottom-tracking task, but it has some limitations. (i) Due to the imprecise, noisy aspect of the measured sensor values, the method lacks robustness in some cases. In order to get an impression, Fig. 3 shows the sensor measurements during a real mission made by the OceanScan MST company with a LAUV at Matosinhos harbor, Portugal. (ii) This controller considers only the current sensor values and ignores the past time frame. (iii) Finally, the rules operate with hard pre-defined threshold values (αsafe , lsafe , fsafe ), though it is possible that for different AUVs or different environments, fine-tuning of the threshold values would be necessary.
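For illustration, a minimal Python sketch of such a reactive controller is given below. The threshold defaults are placeholders, not the values used in [7], and the steepness α is assumed to have been computed beforehand from d, l and f.

```python
from enum import Enum

class State(Enum):
    TRACKING = 0   # maintain constant altitude
    CLIMBING = 1   # seafloor too steep: pitch up
    AVOIDING = 2   # stop the thruster; buoyancy pulls the AUV up

def rule_based_step(alpha: float, l: float, f: float,
                    alpha_safe: float = 15.0,
                    l_safe: float = 1.2,
                    f_safe: float = 8.0) -> State:
    """One control cycle of a reactive bottom-tracking controller.

    alpha -- estimated seafloor steepness (degrees)
    l     -- altitude above the seafloor (m)
    f     -- forward distance to a detected object (m)
    """
    if l < l_safe or f < f_safe:
        return State.AVOIDING
    if alpha > alpha_safe:
        return State.CLIMBING
    return State.TRACKING
```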

4 AI-based Control

To solve the problems mentioned in Sect. 3, we propose a machine-learning-based approach. As a subclass of recurrent neural networks, Long Short-Term Memories (LSTMs) can handle time-series data and long-term time dependencies well [5]. The idea is to take fixed-length time windows containing the consecutive sensor measurements from the near past and use time-series classification. Our aim is to learn, for a given AUV time series, the correct maneuver, which is one of the AUV states tracking, climbing, and avoiding, extended with two


Fig. 4. The proposed pipeline to gather, preprocess and label training data from the log files of real missions and to train the neural network controller.

auxiliary states unsteady and surfaced. The unsteady state is triggered when the available data is too noisy and, therefore, we cannot make a reliable decision; in this case, continuing the previous maneuver or trying to stabilize the AUV would be a proper action. The surfaced state is triggered when the vehicle is on the surface. The reason why it is necessary to distinguish this state is that the echo sounder sensor does not work on the surface. Thus, we do not have any information about a potential obstacle in front of the AUV, so a different type of control is needed. The LSTM network should output the state that the AUV should enter in order to circumvent collisions. We expect that the model learns to make reliable predictions even when the data is noisy. Furthermore, we expect it also to generalize better to different settings (i.e., different AUVs or environments) than the original rule-based approach [7]. To train the LSTM, first, we need to acquire the necessary training, validation and test data. This process happens in three steps, which we visualize in Fig. 4. The steps involved in the pipeline bring up the following essential questions that we answer partly in this section and partly in Sect. 5. The first question is how we can produce the time windows containing the sensor measurements coupled with the correct classification label. Since generating data from simulation would not result in realistic scenarios, we considered log files of real missions in this paper. These missions were executed by the OceanScan MST company using a LAUV. Using the log files, we extracted the necessary sensor data in CSV format from Neptus [10], which is the command and monitor software for the LAUV. Secondly, each time series data gathered from the log files needs to be labeled with a suitable output that defines one of the five states that need to be activated in order to avoid a collision. Manual labeling would not be feasible since it takes a lot of time and effort, and the result may not be as precise as we want. Consequently, we developed an automatic method for labeling the time-series data. We describe this method in Sect. 4.1. Finally, we train an LSTM network using the automatically labeled training data. We describe the parameter settings for the training process in Sect. 5.
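As a rough illustration of the data-gathering step, the sketch below cuts an exported sensor log into fixed-length time windows, each paired with the maneuver label at its last timestamp. The CSV layout and column names are assumptions for illustration, not the actual Neptus export format.

```python
import numpy as np
import pandas as pd

def make_windows(csv_path, sensor_cols, label_col, window=300):
    """Build (window, label) pairs for time-series classification."""
    df = pd.read_csv(csv_path)
    x = df[sensor_cols].to_numpy(dtype=np.float32)
    y = df[label_col].to_numpy()
    windows, labels = [], []
    for t in range(window, len(df)):
        windows.append(x[t - window:t])  # sensor readings from the near past
        labels.append(y[t])              # maneuver label at the current timestamp
    return np.stack(windows), np.array(labels)
```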


Fig. 5. Visualization of the automatic labeling method applied for the same mission as the one shown in Fig. 3. The subplots Echo sounder value, Rotor speed, and DVL-filtered and depth value are the corresponding plots for the relevant sensor values. The CLIMBING/AVOIDING state triggers are the noise gates applied on the sensor values with a different attack (red line) or release time (green line). Lastly, the noise level detectors are plotted in the last row, and the green line shows the noise indicator threshold.

4.1 Automatic Labeling of the Data

After gathering all the raw, unlabeled data from the mission log files, we consider the data as a set of multi-dimensional (multi-sensor) time series. The task is to assign a label, one of the five possible maneuvers (tracking, climbing, avoiding, unsteady, and surfaced ), to each timestamp, taking into account the data at the current timestamp and the data series measured before the current timestamp. The unsteady state should be triggered when the data is too noisy so that we can make no reliable decision. For a given timestamp, we define its noise level as the standard deviation of the first-order discrete difference (i.e., of the absolute values of the differences between the successive sensor values) over the considered time window up to the current timestamp (with the same size as the input for the neural network). We label those timestamps as unsteady, whose noise level exceeds a certain noise indicator threshold value, as illustrated in the bottom two subplots of Fig. 5. The climbing state is triggered using a noise gate with three parameters: an initial value i, a release time r, and an attack time a with r ≤ i ≤ a. We initialize a counter with the initial value i and update it iteratively for each timestamp in chronological order as follows: In case the rule-based controller would choose the climbing state, then (i) if the counter value equals the release time, then we set it to the initial value and (ii) if the counter value is below the attack time then we increase it by one. Otherwise, if the rule-based controller would not choose the climbing state, then (i) if the counter value equals the attack time, then we set it to the initial value, and (ii) if the counter value is above the release time, then we decrease it by one. After these calculations, we label the data at


Table 1. The parameter settings for the state triggers (first four rows) and the noise detectors (last two rows).

Measured sensor type Triggered state Threshold value Attack time Release time Echo sounder

climbing

15 m

20

10

Echo sounder

avoiding

8m

20

5

DVL-filtered

avoiding

1.2 m

5

10

Depth sensor

surfaced

0.5 m

5

5

Echo sounder

unsteady

5.75





DVL-filtered

unsteady

0.80





the current timestamp with the command climbing if the attack time has been reached at least once and the release time has not been reached after the last such occurrence. With the three parameters, we are able to tune the sensitivity of the trigger to enter and exit the climbing state. Labeling with the avoiding and surfaced states is analogous to climbing. The trigger for the surfaced state is not plotted in Fig. 5 because it is not interesting for this mission, in which surfacing does not happen. Nonetheless, it would be computed in a very similar way to the climbing and avoiding triggers. Lastly, tracking is the selected label in case there are no other assigned labels. The analysis of the automatic labeling method for a subset of the possible labels is shown in Fig. 5. In case a timestamp gets multiple labels, we consider the following priority: unsteady > surfaced > avoiding > (tracking | climbing).
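A minimal Python sketch of the noise-level computation and the counter-based noise gate described above is given below; the function and variable names are ours, and the edge-case handling of the real implementation may differ.

```python
import numpy as np

def noise_level(window: np.ndarray) -> float:
    """Noise level of a timestamp: standard deviation of the first-order
    discrete difference over the preceding time window."""
    return float(np.std(np.abs(np.diff(window))))

def noise_gate_labels(raw_triggers, init, release, attack):
    """Counter-based noise gate with release <= init <= attack.

    raw_triggers -- per-timestamp booleans: would the rule-based
                    controller choose this state right now?
    Returns per-timestamp booleans: is the state label active?
    """
    counter, active, labels = init, False, []
    for trig in raw_triggers:
        if trig:
            if counter == release:
                counter = init
            elif counter < attack:
                counter += 1
        else:
            if counter == attack:
                counter = init
            elif counter > release:
                counter -= 1
        if counter == attack:
            active = True    # attack time reached: label switches on
        elif counter == release:
            active = False   # release time reached: label switches off
        labels.append(active)
    return labels
```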

5 Experimental Results and Their Evaluation

In this section, we first present the hyperparameter values/settings used to do the experiments, before we report on the experimental results and analyze them. For the automatic labeling process, we summarize the state trigger’s parameter values and the noise detector’s threshold values in Table 1. Their values were empirically determined. We applied the labeling using the listed parameter values, with a sliding window size of 300 timestamps corresponding to a 1-minute timespan considering the usual 5 Hz sampling rate. The size is the same as the input size of the LSTM, and the reason we chose it is that the window size needs to be the smallest possible, for which a label could be assigned unambiguously. Since accurate data should safely determine the actual label, only the noise could cause mislabeling. The window size of 300 is enough because it is very unlikely that a time window would contain this amount of false or noisy data. The result of the labeling process is illustrated in Fig. 6. For the neural network training, we used 17 log files as training and validation data, achieving a validation accuracy of 98.93%. We tested the model with the remaining 13 mission files, achieving an average accuracy of 97.35%. The high accuracy of the model indicates that it learned well to give the same results as


Fig. 6. The result of the automatic labeling process. Two sensor measurements are plotted, and each data point has the color of the corresponding state.

the automatic labeling method. The advantage of using a neural network is that it can learn the data from multiple differently parametrized labeling processes, so that the neural network can learn to generalize the predictions for different AUVs or environments. We train the model using ten epochs with a batch size of 64. For the structure of the neural network, there is a wide range of choices. We used the following parameter values, but mention that experimenting with different architectures could further improve the performance:

– Convolutional layer (1D): 32 filters, kernel size of 3, ReLU activation function
– Max-pooling layer (1D): pool size of 2
– LSTM layer: 256 LSTM cells
– Fully-connected layer: 5 units, softmax activation function
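A minimal Keras sketch of this architecture might look as follows; the number of input sensor channels, the optimizer and the loss function are assumptions not stated in the text, while the layer parameters follow the list above.

```python
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 300     # 1 minute of data at the usual 5 Hz sampling rate
N_SENSORS = 3    # assumption: altitude, depth and forward-distance channels
N_STATES = 5     # tracking, climbing, avoiding, unsteady, surfaced

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_SENSORS)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(256),
    layers.Dense(N_STATES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, batch_size=64,
#           validation_data=(x_val, y_val))
```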

After training, the model was tested in a simulated environment using Dune and Neptus. In order to run the simulator realistically, it needs bathymetry measurements. Unfortunately, we had only a small region of Porto's harbor's bathymetry mapped in the simulator, so we conducted one test survey using this mapped region. During the mission, the AUV has to climb over a wall twice. We show the comparison of the original rule-based control and the neural network control in Fig. 7. The green rectangles (solid) show when the neural network controller sends control maneuvers to avoid the wall. The red rectangles show the timestamps where the rule-based controller avoids the wall; the green dashed rectangles are shown just for comparison. It is observable that, using the neural network controller, the AUV starts climbing the wall and passes it earlier, and finishes the entire mission sooner than with the rule-based control. It is worth mentioning that the AUV has a limit of 15° for the maximum pitch, which means that if a wall is steeper than 15°, then the AUV will not be able to avoid the wall using only the climbing maneuver. In this particular scenario, the avoiding state will be triggered.


Fig. 7. Comparison of the neural network and the original rule-based control method.

6 Conclusion and Future Work

In this paper we proposed a pipeline to train a neural network that manages obstacle avoidance. The pipeline consists of multiple steps. First, we gathered the sensor data from multiple log files recorded during different missions. In order to use the raw data for training a model, we presented an automatic labeling method, which assigns a state (one of the five possible maneuvers) to each timestamp of the set of time-series data. Finally, using the labeled data, we trained a Long Short-Term Memory network and tested it in a simulator environment. In Sect. 5, we show the parameters and settings used for the experiments. Regarding the future, our plan is first to extend the simulator's bathymetry map such that we can execute more simulated tests. Furthermore, we intend to deploy and test the neural network on a real AUV. If needed, we will use more training data and fine-tune the model's parameters. With the neural network controller, we aim to increase the efficiency of the bottom-tracking algorithm. The efficiency can be defined in multiple ways; however, our desiderata are the following:

– We want to reduce the overall mission time with more efficient wall climbing.
– Also, we would like to keep a constant altitude whenever possible, such that the collected data will have better quality.
– Finally, using the LSTM, we would like to investigate the problem of predicting the presence of a wall that is close to the AUV but not yet observable in the sensor measurements.

Acknowledgements. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 956200. For more info, please visit https://remaro.eu.


References 1. Alexandre, S., et al.: Lauv: The man-portable autonomous underwater vehicle. In: IFAC Proceedings (2012) 2. Aubard, M., Madureira, A., Madureira, L., Pinto, J.: Real-time automatic wall detection and localization based on side scan sonar images. In: IEEE (2022) 3. Calado, P., et al.: Obstacle avoidance using echo sounder sonar. In: OCEANS 2011 IEEE-Spain, pp. 1–6. IEEE (2011) 4. Healey, A.J.: Obstacle avoidance while bottom following for the Remus autonomous underwater vehicle. IFAC Proceedings Volumes 37(8), 251–256 (2004) 5. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997) 6. Jordan, M.I.: Serial order: a parallel distributed processing approach. In: Advances in psychology, vol. 121, pp. 471–495. Elsevier (1997) 7. Madureira, L., et al.: The light autonomous underwater vehicle: evolutions and networking. In: 2013 MTS/IEEE OCEANS-Bergen. pp. 1–6. IEEE (2013) 8. Nayak, N., Nara, M., Gambin, T., Wood, Z., Clark, C.M.: Machine learning techniques for AUV side-scan sonar data feature extraction as applied to intelligent search for underwater archaeological sites. In: Field and Service Robotics (2021) 9. Neves, G., Ruiz, M., Fontinele, J., Oliveira, L.: Rotated object detection with forward-looking sonar in underwater applications. Expert Syst. Appl. 140, 112870 (2020) 10. Pinto, J., Dias, P.S., Martins, R., Fortuna, J., Marques, E., Sousa, J.: The LSTS toolchain for networked vehicle systems. In: 2013 MTS/IEEE OCEANS-Bergen, pp. 1–9. IEEE (2013) 11. Saksvik, I.B., Alcocer, A., Hassani, V.: A deep learning approach to dead-reckoning navigation for autonomous underwater vehicles with limited sensor payloads. In: OCEANS 2021: San Diego–Porto. pp. 1–9. IEEE (2021) 12. Samaras, S., et al.: Deep learning on multi sensor data for counter UAV applications-a systematic review. Sensors 19(22), 4837 (2019) 13. Sherstinsky, A.: Fundamentals of recurrent neural network (RNN) and long shortterm memory (LSTM) network. Physica D: Nonlinear Phenomena, p. 132306 (2020) 14. Topini, E., et al.: LSTM-based dead reckoning navigation for autonomous underwater vehicles. In: Global Oceans 2020: Singapore–US Gulf Coast. pp. 1–7. IEEE (2020) 15. Yan, Z., Li, J., Jiang, A., Wang, L.: An obstacle avoidance algorithm for AUV based on obstacle’s detected outline. In: 2018 37th Chinese Control Conference (CCC), pp. 5257–5262. IEEE (2018) 16. Zhang, X., He, B., Li, G., Mu, X., Zhou, Y., Mang, T.: Navnet: AUV navigation through deep sequential learning. IEEE Access 8, 59845–59861 (2020)

Precision Mango Farming: Using Compact Convolutional Transformer for Disease Detection

M. Shereesha1, C. Hemavathy2, Hasthi Teja2, G. Madhusudhan Reddy3, Bura Vijay Kumar4, and Gurram Sunitha5(B)

1 Department of CSE, B V Raju Institute of Technology, Narsapur, Telangana, India

[email protected]

2 Department of CSE, Annamacharya Institute of Technology and Sciences, Tirupati, A.P., India 3 Department of CSE, K M M Institute of Technology and Science, Tirupati, A.P., India 4 School of Computer Science and Artificial Intelligence, SR University, Warangal, India 5 Department of CSE, Sree Vidyanikethan Engineering College, Tirupati, A.P., India

[email protected]

Abstract. With every forward step that humanity takes, it is our responsibility to contribute to the development of the agricultural ecosystem. Precision agriculture effectually leverages science and technology for practicing agronomic principles towards the goal of sustainable agriculture. Deep learning supports the identification of plant disease and enables early and timely diagnosis for effective disease detection and management. Investigating the application of the Compact Convolutional Transformer (CCT) to the classification of mango leaf diseases is the prime objective of the work undertaken in this paper. Two diseases, anthracnose and powdery mildew, which most commonly affect mango leaves in the Andhra Pradesh region, India, were considered for our research. A CCT-based prediction model is proposed for developing an automated system for the detection of anthracnose and powdery mildew diseases from mango leaf images. Two CCT models, CCT-7/8 × 2 and CCT-7/4 × 2, were experimented with. The proposed methodology has been evaluated against VGG16. The CCT-7/8 × 2 model demonstrated an increased performance over the VGG16 model of 3%–5% in accuracy, 3%–4% in F1-score and 2%–3% in precision. The experimental results demonstrated that the proposed CCT-based model performs automated detection of anthracnose and powdery mildew diseases effectually, eases treating the affected area of the plant properly, and aids in increasing mango output for the world market. Keywords: Vision transformer · Compact convolutional transformer · Precision agriculture · Mango farming · Leaf disease detection

1 Introduction

The mango fruit is known as the king of fruits, produced from a large number of varieties of tropical trees mainly belonging to the flowering plant genus


named Mangifera, cultivated mostly for its edible fruit and largely for export to various countries based on grading. Global production of mangoes was more than 1000 million tons, grown in more than 100 countries, led by India with 39% (22 million tons) of the world total. Mango trees are affected by a variety of diseases such as anthracnose, mildew, sooty mold, leaf curl, phoma, bacterial canker, scab, zinc deficiency, powdery mildew and downy mildew, which degrade mango production. In order to increase the production rate of mangoes, these infectious diseases must be detected before they severely affect the trees and the fruit. Anthracnose and powdery mildew are two of the diseases mainly occurring on mango leaves, twigs, petioles, flower clusters, and fruits. The symptoms of anthracnose appear as small, dark black spots on the flowers; a few spots can cover an entire flower; infection leads to premature flower drop from the tree; dark spots with yellow markings appear on fresh leaves; dark, irregular, sunken lesions surrounding the affected area damage the entire fruit; and infection leads to premature fruit drop before the fruits are ripe. The proposed model is intended to solve such problems by predicting and classifying these types of diseases. The process proposed in this paper is designed to efficiently recognize the anthracnose and powdery mildew disease symptoms on mango leaves by using transformer-based deep learning architectures. A deep learning methodology based on a transformer approach is proposed to classify the diseases of mango leaves based on their symptoms automatically.

2 Vision Transformers for Image Classification

Image processing and computer vision technologies have taken a firm stand in the field of agriculture [1–7]. Various classification techniques have been explored through the years for agricultural image classification with a view to supporting precision agriculture [8, 9]. Deep learning models such as CNNs, RNNs, etc. have been extensively investigated and customized to suit numerous agricultural applications. Inspired by the transformer-based approaches [10], transformer architectures were further introduced into computer vision tasks [11, 12], and it has been convincingly shown that encoder-decoder techniques applied on image patches can perform well in classification tasks in the computer vision domain. Following the debut of vision transformers in the computer vision domain, many variations of vision transformers were designed and developed suiting a variety of applications and performance requirements. The compact convolutional transformer model works well with smaller or scarce datasets with manageable time complexity [13]. The model itself is targeted at small-scale learning and has been a driving initiative in designing vision transformers which can be pretrained with small-scale datasets, with a smaller number of parameters and manageable time complexity. Jinhai Wang et al. designed a vision-transformer-based system for grape bunch detection to support the vision system in grape harvesting robots [14]. They experimented with the swin transformer model, claiming that CNN models fail


in cases where objects in images are of irregular shape, size and density. After conducting experimentation using Baidu images, their model achieved up to 91.5% accuracy. The Agriculture-Vision workshop has been conducting challenges since 2020, and the participants have been creative in developing real-time systems for supporting precision agriculture. SegFormer MIT-B2 and SegFormer MIT-B3 were adopted and customized for segmentation of images in a satellite agricultural dataset [15]. Each of the SegFormer models was trained separately, and the results were ensembled for better performance. An mIoU score of 0.582 was achieved, fetching them 2nd place in the challenge. Xiaopeng Li and Shuqin Li developed a lightweight model which is a multistage learning model of convolutional neural networks and vision transformers [16]. Their four-stage model comprises convolutional neural networks which perform global feature selection and a vision transformer which works on extracting the local features from the apple images. Also, a downsampling layer is embedded between stages to balance redundancies. This model works at its best when the contrast between the healthy part and the diseased part of the plant leaf is very slight. A ConvViT model which is scalable in terms of trainable parameters and computational time has been developed for classifying diseases in kiwi fruits [17]. Through experimentation, the authors proved that vision transformers are efficient in segmenting the diseased regions of kiwi fruits and can essentially act as efficient backbone feature selection structures. With this motivation, we have challenged ourselves to design an efficient model to recognize the anthracnose and powdery mildew disease symptoms on mango leaves by using transformer-based deep learning architectures. A deep learning methodology based on a transformer approach is proposed to classify the diseases of mango leaves based on their symptoms automatically.

3 Compact Convolutional Transformer Model for Mango Leaf Disease Classification

The overwhelming problem with vision transformers is that they are suitable only for applications for which large-scale data is available. In such cases, learning models can be pretrained, and the pretrained learning models can be further used on datasets with similar attributes. However, transformers pretrained on one domain cannot be used for training with data from other domains. In real life, not many problems can be found which are similar: pretraining very much customizes and specializes vision transformers to a particular problem and data, and they cannot then be generalized to another problem and data. Compact transformers, in contrast, allow training the model from scratch using small datasets. Keeping this observation in view, in this paper we have undertaken research aimed at investigating the application of a CCT-based approach to mango leaf disease classification. Figure 1 shows the application of the CCT-based transformer model for mango leaf disease classification. The compact convolutional transformer model is taken as-is from [5] and is fine-tuned and experimented with for the considered dataset.


Fig. 1. Convolutional compact transformer based model for mango leaf disease classification [5]

The compact convolutional transformer model comprises two phases: a convolutional tokenizer and a sequence transformer. The convolutional tokenizer contains a 2-dimensional convolution layer which takes in the preprocessed image dataset as input and convolves it to produce image patches of size N × N while retaining locality for self-attention. This convolution layer provides the benefit of better retaining the encoding relationships and local information between the image patches. The N × N image patches are then reshaped, or flattened, into a 1-dimensional sequence, and the sequence transformer takes over the further processing. The flattened image patches are processed with positional embeddings to retain the spatial information between the image patches. The transformer encoder consists of a sequence of transformer blocks, each including a multi-headed self-attention layer and a multi-layer perceptron network. Sequential transformer blocks are interleaved with layer normalization followed by residual connections. The transformer encoder produces embeddings which contain the sequential information retrieved from the different image patches. The SeqPool technique pools the output sequence of tokens, prioritizes them, and generates the correlation data between the sequential embeddings. Finally, the output of the sequence transformer is fed to the higher task layer as required; here, the higher task is classification, and an MLP head is used to make the predictions.
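A rough Keras sketch of the two phases just described is given below. The embedding dimension, the input resolution and the omission of the position embeddings are simplifications of ours; the SeqPool layer follows the attention-weighted pooling described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def transformer_block(x, heads=7, dim=128, mlp_dim=256):
    # multi-headed self-attention with residual connection
    h = layers.LayerNormalization()(x)
    h = layers.MultiHeadAttention(num_heads=heads, key_dim=dim // heads)(h, h)
    x = layers.Add()([x, h])
    # MLP part of the block, again with a residual connection
    h = layers.LayerNormalization()(x)
    h = layers.Dense(mlp_dim, activation="gelu")(h)
    h = layers.Dense(dim)(h)
    return layers.Add()([x, h])

def build_cct(num_classes=4, dim=128, depth=7, kernel=8):
    inp = tf.keras.Input(shape=(128, 128, 3))
    x = inp
    # convolutional tokenizer: 2 conv layers with max-pooling (CCT-7/8x2)
    for _ in range(2):
        x = layers.Conv2D(dim, kernel, strides=2, padding="same",
                          activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    tokens = layers.Reshape((-1, dim))(x)   # flatten patches to a sequence
    for _ in range(depth):
        tokens = transformer_block(tokens, dim=dim)
    # SeqPool: attention-weighted average over the output token sequence
    w = layers.Softmax(axis=1)(layers.Dense(1)(tokens))
    pooled = layers.Lambda(
        lambda t: tf.squeeze(tf.matmul(t[0], t[1], transpose_a=True), axis=1)
    )([w, tokens])
    out = layers.Dense(num_classes, activation="softmax")(pooled)
    return tf.keras.Model(inp, out)
```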

4 Experimentation and Discussion

The mango leaf dataset has been collected locally from Lebaka village, Nandalur mandal, Cuddapah district of Andhra Pradesh, India. The mango trees blossom


from December to March (sometimes the blossom is early and sometimes it may be delayed). Two or three years after planting, the mango trees begin to blossom. The farmers make a lot of effort to care for and support both their young and older trees in order to produce more mangoes. The Cuddapah region's primary seasonal cash crop, the mango, underpins these areas' economies. A tree typically produces mangoes only once a year (sometimes twice a year, especially the Neelam variety). The diseases considered for experimentation are anthracnose and powdery mildew, which most commonly affect mango leaves in the Andhra Pradesh region, India. The images were collected with a Samsung SM-G975F camera in sRGB format. The images are of dimension 3069 × 836 with 72 dpi. A total of 574 images were collected from over 50 trees during January to February 2021. Sample images are shown in Fig. 2. Efforts have been made to collect images from trees of different ages, leaves of varying shapes and sizes, and leaves with varying severity of disease. Images were also collected of mango leaves which are severely affected by disease and are dead.

Fig. 2. Sample images of healthy, dead and diseased mango leaves

As the first step, the images are processed to enhance their quality. The images have been rescaled using the central square crop method, and the histogram equalization method is used for enhancing the contrast of the mango leaf images. They are cropped and resized to 128 × 128, noise-reduced, normalized, and labeled as part of preprocessing. To scale up the dataset size, the images are augmented with a zoom-in operation with a scale of 0.1, a horizontal flip, and rotate operations. The dropout rate is varied between 0.2 and 0.5. The learning rate is set to 0.03. The weight decay rate is set to 0.001. The batch size considered is 64. The number of epochs is varied between 5 and 150. The number of fully connected MLP head units taken was 1024 × 1024. The number of trainable parameters was 3,988,898. The stride size and


the padding size were set constant at 2 and 1, respectively. One-dimensional relative position embeddings were used to retain spatial information among the image patches. Neural weights were initialized using a random normal scheme. Convolution kernel sizes of 8 × 8 and 4 × 4 were experimented with, giving the compact convolutional transformer specifications CCT-7/8 × 2 and CCT-7/4 × 2. Four mango leaf classes were considered: healthy, dead, anthracnose, and powdery mildew.
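A minimal sketch of the preprocessing steps described above (central square crop, histogram equalization, resize to 128 × 128, light noise reduction, normalization), assuming OpenCV is available; the blur kernel is an illustrative choice, not the paper's stated setting:

import cv2
import numpy as np

def preprocess(path, size=128):
    img = cv2.imread(path)
    # Central square crop
    h, w = img.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    img = img[y0:y0 + s, x0:x0 + s]
    # Histogram equalization on the luminance channel to enhance contrast
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # Resize, lightly denoise, and normalize to [0, 1]
    img = cv2.resize(img, (size, size))
    img = cv2.GaussianBlur(img, (3, 3), 0)
    return img.astype(np.float32) / 255.0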

Fig. 3. Performance of CCT-7/8 × 2 model vs VGG16 model on mango leaf disease dataset

Two variations of compact convolutional transformers are configured for experimentation: CCT-7/8 × 2 has 7 transformer encoder layers with 7 attention heads and a 2-layer convolutional tokenizer with an 8 × 8 kernel, while CCT-7/4 × 2 has 7 transformer encoder layers with 7 attention heads and a 2-layer convolutional tokenizer with a 4 × 4 kernel. The convolutional tokenization layer comprises two convolution layers and a max-pool layer with the rectified linear unit as activation function. The sequence transformer works with softmax activation, and the task-layer head is a multilayer perceptron. Since our dataset is small, compact transformer models were chosen for learning. The CCT models are trained from scratch on the locally collected mango leaf image dataset, and their results are evaluated against the VGG16 deep learning model. Figure 3 graphs the performance of CCT-7/8 × 2 against VGG16 for mango leaf disease detection in terms of accuracy and loss.
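Using the build_cct sketch given earlier, the two experimented variants would differ only in the tokenizer kernel size (an illustration, not the authors' code):

cct_7_8x2 = build_cct(kernel=8)   # CCT-7/8 x 2: 7 encoder layers, 8 x 8 kernel
cct_7_4x2 = build_cct(kernel=4)   # CCT-7/4 x 2: 7 encoder layers, 4 x 4 kernel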


Table 1. Performance of the CCT-7/8 × 2, CCT-7/4 × 2 and VGG16 learning models on the mango leaf disease dataset

Learning model    Evaluation metrics (%)
                  Accuracy   F1-Score   Precision
VGG16             89.02      90.45      90.23
CCT-7/4 × 2       93.12      92.78      91.74
CCT-7/8 × 2       94.17      94.59      93.34

Table 1 presents the performance of the CCT-7/8 × 2, CCT-7/4 × 2, and VGG16 learning models on the mango leaf disease dataset. Image patch sizes of 8 × 8 and 4 × 4 were experimented with, and the 8 × 8 patch size produced the best accuracy. The batch size and learning rate were varied to study the accuracy of the proposed methodology. After 150 iterations there was no improvement in the evaluation metrics of any of the models. In the initial iterations the CCT models performed worse than VGG16, but as the iterations progressed, their performance increased considerably. Of the two CCT models considered, CCT-7/4 × 2 and CCT-7/8 × 2, the latter performed better. The CCT-7/8 × 2 model improved over the VGG16 model by 3%–5% in accuracy, 3%–4% in F1-score, and 2%–3% in precision. Overall, the compact convolutional transformer models demonstrated considerably good performance on our mango leaf dataset, as expected.

5 Conclusions and Future Work The prime objective of the work undertaken in this paper is to investigate the application of the Compact Convolutional Transformer (CCT) for classification of mango leaf diseases. A CCT-based prediction model is proposed for developing an automated system to detect anthracnose and powdery mildew diseases from mango leaf images. Two CCT models, CCT-7/8 × 2 and CCT-7/4 × 2, were experimented with, and the proposed methodology was evaluated against VGG16. The CCT-7/8 × 2 model improved over the VGG16 model by 3%–5% in accuracy, 3%–4% in F1-score, and 2%–3% in precision. Overall, the compact convolutional transformer models demonstrated considerably good performance against the VGG16 model on our mango leaf dataset, as expected. We further intend to collect data on a variety of diseases from a variety of mango trees across various districts of Andhra Pradesh, India, and to extensively investigate various transformer-based deep learning models for the furtherance of precision agriculture.


An Efficient Machine Learning Model for Bitcoin Price Prediction

Habeeba Tabassum Shaik1, B. Sunil Kumar1, and Bhasha Pydala2(B)

1 Department of CSE, Narayana Engineering College, Nellore 524 004, A.P., India
[email protected], [email protected]
2 Department of IT, Mohan Babu University (Erstwhile Sree Vidyanikethan Engineering College), Tirupati 517 102, A.P., India
[email protected]

Abstract. Cryptocurrencies are a kind of digital currency that resembles the stock market and operates on a blockchain database. Bitcoin is the earliest cryptocurrency, and due to its erratic price trends the market is unstable. Like the stock market, bitcoin provides investment opportunities, but due to its volatility investors find it difficult to invest. A user-friendly interface is provided for predicting the price of bitcoin using several algorithms. Hence, to forecast the price of bitcoin, we apply a variety of machine learning (ML) algorithms in this research: SVM, Bayesian regression, random forest with boosting ensemble, ARIMA, multilayer LSTM, and a GRU model. By comparing the resulting RMSE values, we establish the most successful method for predicting the price of bitcoin. Keywords: Digital currencies · Volatile · Regression · ARIMA · GRU

1 Introduction Bitcoin is the first blockchain-based cryptocurrency, introduced in 2008. Even though it was presented that long ago, it is still cherished and valuable. As of now, there are dozens of alternative cryptocurrencies with various features and specifications; some were created from the ground up, while others are copies or forks of bitcoin [6–8]. Bitcoin can lower transaction costs much more than traditional online payment systems, and unlike the currency we normally use, it is controlled by decentralized organizations. The market price of bitcoin fluctuates a lot and is hard to predict. This characteristic may serve as a barrier not just to the growth of digital money but also to investment, as it may prevent its integration with other applications and divert attention. Its utilization will rise if investors become interested. Therefore, bitcoin prediction may address this issue of ambiguity: it motivates an increase in investments, protects investors from losses, and produces better outcomes, all of which improve the consistency of the bitcoin market by removing uncertainties among stakeholders. The primary aim of this project is to develop a user-friendly interface for predicting the bitcoin price with the help of various ML techniques and to compare the efficacy of each model. It makes use of time-series datasets [9–11]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 466–475, 2023. https://doi.org/10.1007/978-3-031-27499-2_44


2 Relevant Study In the current models, the price of bitcoin is predicted using both short- and long-term dependencies with deep learning methods such as the multilayer perceptron (MLP) and long short-term memory (LSTM) networks [1–5]. Traders and investors employ technical or fundamental analysis to forecast future market prices. The multilayer perceptrons employ two-hidden-layer feed-forward networks with a predetermined number of hidden nodes, while LSTM, a special kind of recurrent neural network, enables the network to learn from its prior state, which makes it extremely helpful for time-series data. There are few papers in the literature on utilizing several machine learning algorithms to generate predictions in the context of bitcoin, and these approaches left other machine learning techniques untouched. An essential component, optimization of the dataset and of the gates to obtain a lower RMSE value, was largely absent from previous systems; they had simply employed the conventional single-layered variation of the LSTM neural network [12]. After reaching a new all-time high recently, the tendencies of the bitcoin market have shifted. These works also did not provide a user interface to predict the bitcoin price using several ML algorithms [13–15]. Machine learning techniques for prediction are discussed in [18–20, 35], supply chain management systems for threat analysis and cryptocurrency price prediction using machine learning models in [29–33], neural-network-based classifier models in [24–28], and further machine learning algorithms in [21–23, 34].

3 Proposed Method The suggested study makes use of machine learning methods such as ARIMA, multilayer LSTM and GRU, linear regression, SVM, polynomial regression, random forest with boosting ensemble, and Bayesian regression [16, 17]. In addition, a user interface is developed where the user can predict the bitcoin price using these ML algorithms; the UI is built so that the user can clearly analyze the outputs. The data were optimized by preprocessing. To determine the most effective model for anticipating the bitcoin price, the effectiveness of each algorithm is examined with the help of its root mean square error (RMSE) value. A time-series dataset of bitcoin prices was the source data for this model. 3.1 Linear Regression Based on a specified independent variable, linear regression accomplishes the job of predicting the value of a dependent variable. 3.2 Polynomial Regression Polynomial regression models the nonlinear relationship between a dependent variable and an independent variable with an nth-degree polynomial. 3.3 Bayesian Regression Rather than trying to discover point estimates of the model parameters, Bayesian linear regression primarily focuses on establishing the posterior distribution of the parameters. The distribution is the source of both the output and the model parameters.


3.4 Random Forest and Boosting Ensemble A random forest is a meta-estimator that fits decision trees on numerous subsamples of the dataset and averages their predictions to increase accuracy. 3.5 SVM The acronym SVM stands for Support Vector Machine. It is applicable as a regression technique and keeps the characteristics that make it a maximal-margin algorithm. 3.6 Auto Regression Integrated Moving Averages (ARIMA) ARIMA, which stands for "autoregressive integrated moving averages," is a type of statistical model used in ML for evaluating and forecasting time-series data. In essence, it is a model that uses historical data to describe a time series. 3.7 LSTM LSTM networks, or "LSTMs," are a special kind of recurrent neural network able to learn long-term dependencies [2]. A typical LSTM unit has a cell and three gates: a forget gate, an input gate, and an output gate. The cell remembers values across specified time periods, and the three gates control the information flow through the cell. In this study, LSTM variations are utilized: a two-layer neural network using the timestamp as the chosen feature is used to forecast the price of bitcoin, and the LSTM GRU model uses the timestamp as a feature with a single-layer neural network variation with a modified architecture to forecast the price of bitcoin (Fig. 1).

Fig. 1. Workflow of models
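The regression part of the workflow could be sketched as follows with scikit-learn; this is an assumed illustration (hypothetical file name "bitcoin_prices.csv" with a "price" column, and a simple lag-feature construction), not the authors' exact code:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

df = pd.read_csv("bitcoin_prices.csv")   # hypothetical time-series file

def lag_features(series, n_lags=5):
    # Predict the next price from the previous n_lags prices
    X = np.column_stack([series.shift(i) for i in range(1, n_lags + 1)])
    mask = ~np.isnan(X).any(axis=1)
    return X[mask], series.values[mask]

X, y = lag_features(df["price"])
split = int(0.8 * len(X))                # chronological split for time series
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "Linear regression": LinearRegression(),
    "Polynomial regression": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "Bayesian regression": BayesianRidge(),
    "Random forest": RandomForestRegressor(n_estimators=100),
    "Boosting ensemble": GradientBoostingRegressor(),
    "SVM": SVR(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te), squared=False)
    print(f"{name}: RMSE = {rmse:.2f}")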


4 Results 4.1 Graphical User Interface (Fig. 2).

Fig. 2. Graphical user interface
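The interface in Fig. 2 could be built along the following lines; this is a hedged sketch, not the authors' code, and run_model is a hypothetical hook to be wired to the model-training code from Sect. 3 (the returned values below are illustrative placeholders only):

import tkinter as tk
import matplotlib.pyplot as plt

MODELS = ["Linear regression", "Polynomial regression", "Bayesian regression",
          "Random forest + boosting", "SVM", "ARIMA",
          "LSTM (multilayer)", "LSTM (GRU)"]

def run_model(name):
    # Hypothetical placeholder: return (actual, predicted) series for `name`
    return [100, 101, 102], [100.5, 100.9, 102.3]   # illustrative values only

def show(name):
    actual, predicted = run_model(name)
    plt.plot(actual, "g", label="actual")
    plt.plot(predicted, "r", label="predicted")
    plt.title(name)
    plt.legend()
    plt.show()

root = tk.Tk()
root.title("Bitcoin Price Prediction")
for m in MODELS:
    tk.Button(root, text=m, command=lambda n=m: show(n)).pack(fill="x", padx=8, pady=2)
root.mainloop()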

4.2 Linear Regression After clicking on the linear regression button in the UI, the output (prediction) is shown as a graph. The graph represents both the training set (in green) and the predicted output (in red). Since there is some difference between the training-set regression line and the predicted regression line, we can conclude that linear regression will not be able to predict the bitcoin price very accurately (Fig. 3).

Fig. 3. Linear regression


4.3 Polynomial Regression The output graph of polynomial regression is shown in Fig. 4. The blue regression line represents the actual training set, and the three other lines represent the predicted lines with different degrees. From this graph we can clearly observe the difference between the actual and predicted lines, which shows that the polynomial regression technique is not able to accurately predict the bitcoin price (Fig. 4).

Fig. 4. Polynomial regression

4.4 Bayesian Regression The output graph for Bayesian regression shows both the actual (green) and predicted (red) lines. Since there is very little difference between the two lines, Bayesian regression can predict the bitcoin price accurately compared to linear and polynomial regression (Fig. 5).

Fig. 5. Bayesian regression

4.5 Random Forest and Boosting Ensemble The output graph of the random forest and boosting ensemble algorithm shows both the actual (in green) and predicted output (in red). Since there is a difference between the two lines, the random forest and boosting ensemble algorithm cannot predict the bitcoin price as accurately as the Bayesian regression model (Fig. 6).


Fig. 6. Random forest and boosting ensemble

4.6 SVM This graph shows the training data points (dots) together with the line predicted by the algorithm. The large difference between the predicted line and the actual points shows that SVM cannot predict the price of bitcoin with much accuracy (Fig. 7).

Fig. 7. SVM

4.7 ARIMA The output graph of the ARIMA model shows the observed dataset (dots) and the forecast lines, with only a small gap between the two. We can therefore predict the bitcoin price with the help of the ARIMA model, but not very accurately (Fig. 8). 4.8 LSTM (Multilayer) In the output graph of the multilayer LSTM, the green line represents the testing set and the red line the predicted values. We can observe that the two lines are close to each other at the start but drift apart as the prediction goes on. Hence, the multilayer LSTM is not able to predict the bitcoin price very accurately (Fig. 9).


Fig. 8. ARIMA

Fig. 9. LSTM (Multilayer)

4.9 LSTM (GRU) In the output graph of the LSTM (GRU) model, the testing-set line and the predicted line are close to each other, which indicates that the LSTM (GRU) model can predict the bitcoin price accurately compared to all the other algorithms (Fig. 10 and Table 1).

Fig. 10. LSTM (GRU)
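A hedged Keras sketch of the winning single-layer GRU variant described above; the window length and unit count are assumptions, not the paper's stated values:

import tensorflow as tf
from tensorflow.keras import layers

window = 30                         # past timesteps fed to the network (assumed)
model = tf.keras.Sequential([
    layers.Input(shape=(window, 1)),
    layers.GRU(64),                 # gated recurrent unit layer
    layers.Dense(1),                # next-step price
])
model.compile(optimizer="adam", loss="mse")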

Therefore, based on the aforementioned findings, the LSTM GRU model is the best model among the group of algorithms selected for this project to forecast the price of bitcoin. Because it has the lowest RMSE in this project, it is an effective algorithm; our lowest RMSE value of 221 represents an accuracy improvement over the reference paper's model.


Table 1. The resultant RMSE values from each algorithm

S. No.   Model                                   RMSE value
1        Linear regression                       967.12
2        Polynomial regression                   1281.86
3        Bayesian regression                     382.83
4        Random forest and boosting ensemble     871.49
5        SVM                                     1630.44
6        ARIMA                                   701.78
7        LSTM two layer                          690.48
8        LSTM GRU model                          221.40

5 Conclusion The proposed model presents insight into the trends in bitcoin prices, an inconsistent and volatile cryptocurrency market, by forecasting bitcoin prices with the future of digital currency in mind. This project includes several stages: collecting the dataset, preprocessing the data, predicting the price with the resulting models, and analyzing the resultant RMSE values. For the implementation of the user interface we used the Tkinter package, which yields an efficient, user-friendly interface; other packages used in this project include Keras, pandas, TensorFlow, and NumPy. In light of the paucity of existing literature on bitcoin price prediction using machine learning, this project provided a wide range of machine learning algorithms, such as polynomial regression, Bayesian regression, SVM, autoregressive integrated moving averages (ARIMA), linear regression, multilayer long short-term memory (LSTM), and the gated recurrent unit (GRU), for price prediction, together with their efficiency, in a user interface. With this information, investors in the bitcoin market will be able to make investment decisions. Finally, with a root mean square error (RMSE) value of 221, we conclude that the LSTM GRU model has provided the greatest accuracy, outperforming the earlier models.

References 1. Goutham, M., Sivaraman, N., Roselin, S.: Bitcoin price prediction using deep learning techniques (May 2021) 2. Ferdiansyah, F., Othman, S.H., Stiawan, D.: A LSTM-method for bitcoin price prediction: a case study of the Yahoo Finance stock market. Universitas Bina Darma, Indonesia, and Universiti Teknologi Malaysia 3. Jang, H., Lee, J.: An empirical study on modeling and prediction of bitcoin prices with Bayesian neural networks based on blockchain information. IEEE Access 6, 5427–5437 (2018) 4. Mittal, R., Arora, S., Bhatia, M.P.S.: Automated cryptocurrencies prices prediction using machine learning (2018)


5. Radityo, A., Munajat, Q., Budi, I.: Prediction of bitcoin exchange rate to American dollar using artificial neural network methods. In: Advanced Computer Science and Information Systems (ICACSIS), 2017 International Conference on, pp. 433–438 (2017) 6. Judmayer, A., Stifter, N., Krombholz, K., Weippl, E.: Blocks and chains: introduction to bitcoin, cryptocurrencies, and their consensus mechanisms. Synth. Lect. Inf. Secur. Privacy Trust (2017) 7. Di Persio, L., Honchar, O.: Artificial neural networks approach to the forecast of stock market price movements. Int. J. Econ. Manag. Syst. 1, 158–162 (2016) 8. Khaidem, L., Saha, S., Dey, S.R.: Predicting the direction of stock market prices using random forest. CoRR, vol. abs/1605.00003 (2016) 9. Heaton, J.B., Polson, N.G., Witte, J.H.: Deep learning in finance. CoRR, vol. abs/1602.06561 (2016) 10. Brownlee, J.: Time series prediction with LSTM recurrent neural networks in Python with Keras. machinelearningmastery.com (2016) 11. Chen, K., Zhou, Y., Dai, F.: An LSTM-based method for stock returns prediction: a case study of the China stock market. In: Big Data (Big Data), 2015 IEEE International Conference on, pp. 2823–2824 (Oct 2015) 12. Reid, F., Harrigan, M.: An analysis of anonymity in the bitcoin system. In: Proceedings of IEEE International Conference on Privacy, Security, Risk, and Trust, pp. 1318–1326 (2013) 13. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2008) 14. Bishop, C.M., Tipping, M.E.: Bayesian regression and classification. NATO Science Series, Sub Series III, Computer and Systems Sciences 190, 267–288 (2003) 15. Squark: Root Mean Square Error or RMSE. [Online] 16. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997) 17. Caginalp, G., Laurent, H.: The predictive power of price patterns. Appl. Math. Finance 5, 181–206 (1998) 18. Bhasha, P., Pavan Kumar, T., Khaja Baseer, K., Jyothsna, V.: An IoT-based BLYNK server application for infant monitoring alert system to detect crying and wetness of a baby. In: Bhattacharyya, S., Nayak, J., Prakash, K.B., Naik, B., Abraham, A. (eds.) International Conference on Intelligent and Smart Computing in Data Analytics. AISC, vol. 1312, pp. 55–65. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-6176-8_7 19. Bhasha, P., Babu, J.S., Vadlamudi, M.N., Abraham, K., Sarangi, S.K.: Automated crop yield prediction system using machine learning algorithm. J. Algebr. Stat. 13(3), 2512–2522 (2022). https://publishoa.com. ISSN: 1309-3452 20. Bhasha, P., Kumar, T.P., Baseer, K.K.: A simple and effective electronic stick to detect obstacles for visually impaired people through sensor technology. J. Adv. Res. Dyn. Control Syst. 12(06), 18–27 (2020). https://doi.org/10.5373/JARDCS/V12I6/S20201003 21. Silpa, C., Niranjana, G., Ramani, K.: Securing data from active attacks in IoT: an extensive study. In: Manogaran, G., Shanthini, A., Vadivu, G. (eds.) Proceedings of International Conference on Deep Learning, Computing and Intelligence. Advances in Intelligent Systems and Computing, vol. 1396. Springer, Singapore (2022). https://doi.org/10.1007/978-981-16-5652-1_5 22. Silpa, C., Suneetha, I., Hemantha, G.R., Arava, R.P.R., Bhumika, Y.: Medication alarm: a proficient IoT-enabled medication alarm for age old people to the betterment of their medication practice. J. Pharm. Negat. Results 13(4), 1041–1046 (2022) 23. Silpa, C., Arava, R.P.R., Baseer, K.K.: Agri farm: crop and fertilizer recommendation system for high yield farming using machine learning algorithms. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(02), 6468 (2022). https://doi.org/10.9756/INT-JECSE/V14I2.740. ISSN: 1308-5581


24. Jyothsna, V., Kumar Raja, D.R., Kumar, G.H., Chandra, D.E.: A novel manifold approach for intrusion detection system (MHIDS). Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(02) (2022) 25. Jyothsna, V., Mukesh, D., Sreedhar, A.N.: A flow-based network intrusion detection system for high-speed networks using meta-heuristic scale. In: Peng, S.-L., Dey, N., Bundele, M. (eds.) Computing and Network Sustainability. LNNS, vol. 75, pp. 337–347. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-7150-9_36 26. Jyothsna, V., Prasad, K.M., Rajiv, K., Chandra, G.R.: Flow based anomaly intrusion detection system using ensemble classifier with Feature Impact Scale. Clust. Comput. 24(3), 2461–2478 (2021). https://doi.org/10.1007/s10586-021-03277-5 27. Jyothsna, V., Munivara Prasad, K., GopiChand, G., Durga Bhavani, D.: DLMHS: flow-based intrusion detection system using deep learning neural network and meta-heuristic scale. Int. J. Commun. Syst. 35(10), e5159 (2022). https://doi.org/10.1002/dac.5159 28. Jyothsna, V., Sreedhar, A.N., Mukesh, D., Ragini, A.: A network intrusion detection system with hybrid dimensionality reduction and neural network based classifier. In: Tuba, M., Akashe, S., Joshi, A. (eds.) ICT Systems and Sustainability. AISC, vol. 1077, pp. 187–196. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-0936-0_19 29. Maria Joseph, B., Baseer, K.K.: Reducing the latency using fog computing with IoT in real time. Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(08), 2677–2692 (Oct 2022). Journal ID: AES-15-10-2022-355, ISSN: 2096-3246 30. Baseer, K.K., Jahir Pasha, M., et al.: Smart online examination monitoring system. J. Algebr. Stat. 13(3), 559–570 (2022). ISSN: 1309-3452 31. Baseer, K.K., Jahir Pasha, M., Murali Krishna, T., Mohan Kumar, J., Silpa, C.: COVID-19 patient count prediction using classification algorithm. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(07) (2022). https://doi.org/10.9756/INTJECSE/V14I7.7. ISSN: 1308-5581 32. Jahir Pasha, M., Sujatha, V., Hari Priya, A., Baseer, K.K.: IoT technology enabled multipurpose chair to control the home/office appliance. J. Algebr. Stat. 13(1), 952–959 (May 2022). ISSN: 1309-3452 33. Baseer, K.K., Neerugatti, V., Jahir Pasha, M., Satish Kumar, V.D.: Internet of things: a product development cycle for the entrepreneurs. Helix 10(02), 155–160 (Apr 2020) 34. Silpa, C., Chakravarthi, S.S., Jagadeesh Kumar, G., Baseer, K.K., Sandhya, E.: Health monitoring system using IoT sensors. J. Algebr. Stat. 13(3), 3051–3056 (June 2022). ISSN: 1309-3452 35. Sandhya, E., Arava, R.P.R., Phalguna Krishna, E.S., Baseer, K.K.: Investigating student learning process and predicting student performance using machine learning approaches. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(07), 622–628 (2022). https://doi.org/10.9756/INTJECSE/V14I7.60. ISSN: 1308-5581

Ensemble Based Cyber Threat Analysis for Supply Chain Management

P. Penchalaiah1(B), P. Harini Sri Teja1, and Bhasha Pydala2

1 Department of CSE, Narayana Engineering College, Nellore 524 004, A.P., India
[email protected], [email protected]
2 Department of IT, Mohan Babu University (Erstwhile Sree Vidyanikethan Engineering College), Tirupati 517 102, A.P., India
[email protected]

Abstract. Nowadays, the problems confronting cyber security cost businesses tens of billions across countries and the world. A Cyber Supply Chain (CSC) system is complex, involving different sub-systems performing various tasks. Safeguarding the supply chain is tricky due to the inherent risks and attacks that can originate from just about any system and be exploited at any juncture within the supply chain; this could trigger a massive interruption of business operations. Therefore, it is paramount to understand and predict these attacks so that an organization can undertake proactive initiatives for operational risk management. To demonstrate the applicability of our approach, a malware detection dataset is gathered and a number of ML algorithms, i.e., XGBoost and Gradient Boosting, are used to develop predictive analytics on the Kaggle malware detection dataset. Keywords: Cyber security · Boosting algorithm · Malware detection

1 Introduction The cybercrime market has been gaining ground over the years, particularly because most information (personal or organizational) is accessible through networked systems. Nowadays, the problems confronting cyber security cost businesses tens of billions across countries and the world. In a large study covering fourteen countries and sixteen industries, it was reported that the average global cost of a data breach this year stood at $3.104 million [36–38], approximately a 5% rise from the previous year's estimate [11–14]. Global organizations are investing massively in predictive capability and intelligent automation to minimize these obstacles. The prediction of such obstacles using ML techniques is discussed in [42–44, 59]. As per a think-tank report (2019), 48% of sources indicate that their spending on the deployment of advanced analytics for © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 476–485, 2023. https://doi.org/10.1007/978-3-031-27499-2_45


cyber incidents may well rise by 29% in the 2019–20 fiscal year. 56% of top officials suggest that information security analysts are overloaded, and many of the issues identified cannot be skillfully examined. 64% of organizations claim that advanced analytics reduces the cost of detection and response and cuts the total effort needed to examine issues by as much as 12% [15–18]. Machine learning algorithms for IoT applications are discussed in [45–47, 58]. Related work includes detecting the depression level of a person by training learning models on MRI image data, and classification models for the identification of Internet loan frauds [53–57].

2 Relevant Study In [1], the authors produced an SVM model, used prevalently in information extraction, to construct a classifier that immediately labels a request as malicious or benign; the method demonstrated comparatively impressive predictive accuracy. Further, [2] evaluated a research project on deep learning designs for information protection, checking the designs against vast and varied operational parameters and web instances. Moreover, [3] conducted a survey of data gathering and ML methodology for computer security detection, providing insights to assist penetration testing and network security applications [30–35]. Besides that, [4] summarized cybercrime data sources and machine learning methods used for analyzing and interpreting network activity and anomaly-based intrusion detection [48–52]. [5] investigated the characterization of records using decision-tree approaches to understand dataset types and the similarity or standardization of security data. Another venture [6] examines the effectiveness of machine learning for power-system disturbance and cyber-attack discrimination, step by step, and for detecting attacks in which deception is the core component of the incident. In [7], an ensemble-learning model to detect hacking attacks on SCADA-based systems is postulated [19–23]. The authors in [8] envisioned a deep-learning, feature-extraction-based semi-supervised framework for cyber-attack safety inside the trust boundary of current IoT networks; the suggested technique was flexible enough to discover unknown attacks, but the work did not consider attacks arriving through vendor inbound/outbound channels. Regarding ML prediction over a diverse range of data points, [9] aimed to predict reported information security incidents utilizing algorithms that distinguish between various kinds of patterns. Further, [10] recommended a risk-scoring framework that reviews the file-appearance log data of a device to predict in advance which computers are at risk of experiencing malware attacks. Support vector machine classification of remote sensing images with wavelet-based statistical features was presented in [24–28], analysis of COVID-19-impacted zones using machine learning algorithms in [29], methods for quality improvement of retinal optical coherence tomography in [39], and various deep learning approaches for image classification and detection in healthcare in [40, 41].


3 Proposed Method We integrate CTI with ML algorithms for cyber-threat data analysis. CTI tools are used to collect threats (known attacks), while machine learning learns from the dataset to predict cyber security risks (unknown attacks) in CSC processes. Attack features include the attack method, pattern, active attacks, and the requirements needed to understand the nature of the utilized target. The evaluation consists of new attacks and security vulnerabilities distributed by malware attacks. 3.1 Boosting Ensemble Learning A boosting ensemble works with an incremental approach, adapting the weight of each training instance based on the performance of the earlier classifier model: the weight of an instance is increased if it is categorized wrongly and lowered if the classifier predicts it correctly. 3.2 Gradient Boosting Gradient boosting is a well-liked boosting algorithm in which each predictor tries to correct its predecessor's error. In stark contrast to adaptive boosting, the weights of the training instances are not re-tuned; instead, each regressor is trained on the residual errors of its predecessor (Fig. 1).

Fig. 1. Work flow
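A hedged sketch of the boosting-based threat classification just described; the file name "malware.csv" and the "label" column (0 = legal, 1 = threat) are illustrative assumptions about the Kaggle data, not the authors' exact setup:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from xgboost import XGBClassifier

df = pd.read_csv("malware.csv")                       # hypothetical file name
X, y = df.drop(columns=["label"]), df["label"]        # features and binary labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for model in (GradientBoostingClassifier(),
              XGBClassifier(n_estimators=200, eval_metric="logloss")):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(type(model).__name__, accuracy_score(y_test, pred))
    print(confusion_matrix(y_test, pred))             # TP/TN/FP/FN counts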


4 Experimental Results We collected the malware threat detection dataset from kaggle.com; it contains 14,000 legal and 8,000 threat data records. The Keras 2 toolbox was used for implementing these model types, and the training stage was executed on an Nvidia K80 GPU with 12 GB RAM from Google Colaboratory. Pre-trained models were therefore used in the implementation, and the training of each model was carried out with the associated toolkit. Python programming was used to write the functions for model construction and learning: methods included in 'applications' were used for modelling, and 'fit' and 'compile' for training. 4.1 Count of Legal and Threat Data The figure below shows the legal data and threat data. The seaborn package was used for the visualization of the statistical models; the library is based on Matplotlib and allows the creation of statistical graphics (Fig. 2).

Fig. 2. Count of legal and threat data
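A minimal seaborn example of this class-count plot, reusing the hypothetical file and "label" column assumed in the earlier sketch:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("malware.csv")          # same hypothetical file as above
sns.countplot(x="label", data=df)        # one bar per class: legal vs. threat
plt.title("Count of legal and threat data")
plt.show()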

4.2 Preprocess Data The data are resized and reshaped into an appropriate format to train our model. Data preprocessing is the process of preparing the raw data and making it suitable for a machine learning model; it is the first and crucial step when creating one. When creating a machine learning project, we do not always come across clean and formatted data, and while doing any operation with data it is mandatory to clean it and put it in a formatted way, so we use the data preprocessing task. The MinMax scaler shrinks the data within a given range, usually 0 to 1; it transforms the data by scaling features to that range (Fig. 3). 4.3 Correlation Between Variables A heatmap is a graphical representation of data that uses colors to visualize the values of a matrix. Brighter, typically reddish, colors represent more common values or higher activity, and darker colors represent less common or lower-activity values. A heatmap is also known as a shading matrix (Fig. 4).


Fig. 3. Preprocess data

Fig. 4. Correlation between variables
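A short sketch of the Min-Max scaling and correlation-heatmap steps described above, again under the assumed file and column names:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler

X = pd.read_csv("malware.csv").drop(columns=["label"])               # features only
X[X.columns] = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)   # scale into [0, 1]

sns.heatmap(X.corr(), cmap="coolwarm")   # brighter cells = stronger correlation
plt.show()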

4.4 Confusion Matrix of RF, Gradient Boosting and XGBoost The confusion matrix provides a better understanding of the results by tabulating the predictions and analyzing them to determine the positive and negative classifications. Four outcomes are determined when classifying the instances of the dataset: the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) rates (Figs. 5, 6 and 7).

Fig. 5. Confusion matrix of RF


Fig. 6. Confusion matrix of Gradient Boosting

Fig. 7. Confusion matrix of XGBoost

4.5 Comparison of Different Learning Models The experimental results show the accuracies of the XGBoost and Gradient Boosting algorithms and provide a comparative analysis with state-of-the-art models, i.e., the LG, DT, SVM, Naive Bayes, and RF algorithms in majority voting, identifying a list of predicted threats (Fig. 8).

Fig. 8. Comparison of different learning models
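The majority-voting baseline named above could be assembled as follows; this is a hedged sketch assuming the same train/test split as the earlier boosting sketch, with illustrative default hyperparameters:

from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# X_train, X_test, y_train, y_test: the split from the earlier boosting sketch
voting = VotingClassifier([
    ("lg", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier()),
    ("svm", SVC()),
    ("nb", GaussianNB()),
    ("rf", RandomForestClassifier()),
], voting="hard")                       # hard voting = majority vote of the five models
voting.fit(X_train, y_train)
print("Majority voting accuracy:", voting.score(X_test, y_test))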

5 Conclusion and Future Scope The integration of complex cyber-physical infrastructures and applications in a CSC environment has brought economic, business, and societal impact in both national and global contexts in the areas of Transport, Energy, Healthcare, Manufacturing, and


Communication. However, CPS security remains a challenge, as a vulnerability in any part of the system can pose risk within the overall supply chain context. This paper aims to improve CSC security by integrating CTI and ML for threat analysis and prediction. We considered the necessary concepts from CSC and CTI and a systematic process to analyze and predict the threats. The experimental results show the accuracies of the XGBoost and Gradient Boosting algorithms and provide a comparative analysis with state-of-the-art models, i.e., the LG, DT, NB, SVM, and RF algorithms in majority voting, identifying a list of predicted threats. This article considers a malware detection dataset and classifies the data using ensemble models. In the future, one may consider a real-time dataset and apply deep learning and optimization techniques to improve classification accuracy and show a comparative analysis.

References 1. Yeboah-Ofori, A., Islam, S.: Cyber security threat modelling for supply chain organizational environments. MDPI Future Internet 11(3), 63 (Mar. 2019) 2. Woods, B., Bochman, A.: Supply chain in the software era. In: Scowcroft Center for Strategic and Security. Atlantic Council, Washington, DC, USA (May 2018) 3. Exploring the Opportunities and Limitations of Current Threat Intelligence Platforms, version 1. ENISA (Dec. 2017) 4. Doerr, T.U., Delft CTI Labs: Cyber Threat Intelligence Standards - A High Level Overview (2018) 5. National Cyber Security Centre: Example of Supply Chain Attacks (2018) 6. Research Prediction: Microsoft Malware Prediction (2019) 7. Yeboah-Ofori, A., Katsriku, F.: Cybercrime and risks for cyber physical systems. Int. J. Cyber-Secur. Digit. Forensics 8(1), 43–57 (2019) 8. Open Web Application Security Project: The Ten Most Critical Application Security Risks, Creative Commons Attribution-Share Alike 4.0 International License (2017) 9. US-CERT: Building Security in Software & Supply Chain Assurance (2020) 10. Labati, R.D., Genovese, A., Piuri, V., Scotti, F.: Towards the prediction of renewable energy unbalance in smart grids. In: Proc. IEEE 4th Int. Forum Res. Technol. Soc. Ind. (RTSI), Palermo, Italy, pp. 1–5 (Sep. 2018) 11. Boyens, J., Paulsen, C., Moorthy, R., Bartol, N.: Supply chain risk management practices for federal information systems and organizations. NIST Comput. Secur. 800(161), 32 (2015) 12. Framework for Improving Critical Infrastructure Cyber Security, Version 1.1. NIST, Gaithersburg, MD, USA (2018) 13. Miller, J.F.: Supply chain attack framework and attack pattern. MITRE, Tech. Rep. MTR140021 (2013) 14. Ahlberg, C., Pace, C.: The Threat Intelligence Handbook 15. Freidman, J., Bouchard, M.: Definitive guide to cyber threat intelligence: using knowledge about the adversary to win the war against targeted attacks. iSIGHT Partners, CyberEdge Group LLC, Annapolis, MD, USA, Tech. Rep. (2018) 16. EY: Cyber Threat Intelligence: Designing, Building and Operating an Effective Program (2016) 17. Yeboah-Ofori, A., Boachie, C.: Malware attack predictive analytics in a cyber-supply chain context using machine learning. In: Proc. ICSIoT, pp. 66–73 (2019) 18. Bhamare, D., Salman, T., Samaka, M., Erbad, A., Jain, R.: Feasibility of supervised machine learning for cloud security. In: Proc. Int. Conf. Inf. Sci. Secur. (ICISS), pp. 1–5 (Dec. 2016)


19. Buczak, A.L., Guven, E.: A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surveys Tuts. 18(2), 1153–1176 (2016), 2nd Quart. 20. Yavanoglu, O., Aydos, M.: A review on cyber security datasets for machine learning algorithms. In: Proc. IEEE Int. Conf. Big Data (Big Data), pp. 2186–2193 (Dec. 2017) 21. Villano, G.V.: Classification of logs using machine learning. M.S. thesis, Dept. Inf. Secur. Commun. Technol., Norwegian Univ. Sci. Technol., Trondheim, Norway (2018) 22. Hink, R.C.B., Beaver, J.M., Buckner, M.A., Morris, T., Adhikari, U., Pan, S.: Machine learning for power system disturbance and cyber-attack discrimination. In: Proc. 7th Int. Symp. Resilient Control Syst. (ISRCS), Denver, CO, USA, pp. 1–8 (Aug. 2014) 23. Gumaei, A., Hassan, M.M., Huda, S., Hassan, M.R., Camacho, D., Del Ser, J., Fortino, G.: A robust cyberattack detection approach using optimal features of SCADA power systems in smart grids. Appl. Soft Comput. 96 (Nov. 2020). Art. no. 106658 24. Hassan, M.M., Gumaei, A., Huda, S., Almogren, A.: Increasing the trustworthiness in the industrial IoT networks through a reliable cyberattack detection model. IEEE Trans. Ind. Informat. 16(9), 6154–6162 (Sep. 2020) 25. Abawajy, J., Huda, S., Sharmeen, S., Hassan, M.M., Almogren, A.: Identifying cyber threats to mobile-IoT applications in edge computing paradigm. Future Gener. Comput. Syst. 89, 525–538 (Dec. 2018) 26. Rashid, M.M., Kamruzzaman, J., Hassan, M.M., Imam, T., Gordon, S.: Cyberattacks detection in IoT-based smart city applications using machine learning techniques. Int. J. Environ. Res. Public Health 17(24), 9347 (Dec. 2020) 27. Hassan, M.M., Huda, S., Sharmeen, S., Abawajy, J., Fortino, G.: An adaptive trust boundary protection for IIoT networks using deep-learning feature-extraction based semi-supervised model. IEEE Trans. Ind. Informat. 17(4), 2860–2870 (Apr. 2021) 28. Prabhakar, T., Srujan Raju, K., Reddy Madhavi, K.: Support vector machine classification of remote sensing images with the wavelet-based statistical features. In: Fifth International Conference on Smart Computing and Informatics (SCI 2021), Smart Intelligent Computing and Applications, Volume 2. Smart Innovation, Systems and Technologies, vol. 283. Springer, Singapore (2022) 29. Abbagalla, S., Rupa Devi, B., Anjaiah, P., Reddy Madhavi, K.: Analysis of COVID-19-impacted zone using machine learning algorithms. Lecture Notes on Data Engineering and Communication Technology 63, 621–627. Springer (2021) 30. Hassan, M.M., Hassan, M.R., Huda, S., de Albuquerque, V.H.C.: A robust deep-learning-enabled trust-boundary protection for adversarial industrial IoT environment. IEEE Internet Things J. 8(12), 9611–9621 (Jun. 2021) 31. Mohasseb, A., Aziz, B., Jung, J., Lee, J.: Predicting cyber security incidents using machine learning algorithms: a case study of Korean SMEs. In: Proc. INSTICC, pp. 230–237 (2019) 32. Bilge, L., Han, Y., Dell'Amico, M.: RiskTeller: predicting the risk of cyber incidents. In: Proc. CCS, pp. 1299–1311 (2017) 33. Liu, Y., Sarabi, A., Zhang, J., Naghizadeh, P., Karir, M., Liu, M.: Cloudy with a chance of breach: forecasting cyber security incidents. In: Proc. 24th USENIX Secur. Symp., Washington, DC, USA, pp. 1009–1024 (2015) 34. Guide to Cyber Threat Information Sharing, document NIST 800-150 (2018) 35. Yeboah-Ofori, A., Islam, S., Yeboah-Boateng, E.: Cyber threat intelligence for improving cyber supply chain security. In: Proc. Int. Conf. Cyber Secur. Internet Things (ICSIoT), pp. 28–33 (May 2019) 36. Boschetti, A., Massaron, L.: Python Data Science Essentials, 2nd ed. Springer, Dordrecht, The Netherlands (2016) 37. Yeboah-Ofori, A.: Classification of malware attacks using machine learning in decision tree. IJS 11(2), 10–25 (2020) 38. Wang, W., Lu, Z.: Cyber security in smart grid: survey and challenges. Comput. Netw. 57(5), 1344–1371 (Apr. 2013)


38. Wang, W., Lu, Z.: Cyber security in smart grid: Survey and challenges. Elsevier Comput. Netw. 57(5), 1344–1371 (2013). Apr. 39. Rajani, A., Kora, P., Madhavi, R., Jangaraj, A.: Quality Improvement of Retinal Optical Coherence Tomography, 1–5 (2021). https://doi.org/10.1109/INCET51464.2021.9456151 40. Madhavi, R., Kora, P., Reddy, L., Jangaraj, A., Soujanya, K., Prabhakar, T.: Cardiac arrhythmia detection using dual-tree wavelet transform and convolutional neural network. Soft Computing 26 (2022). https://doi.org/10.1007/s00500-021-06653-w 41. Reddy Madhavi, K., Madhavi, G., Rupa Devi, B., Kora, P.: Detection of Pneumonia Using Deep Transfer Learning architectures. Int. J. Adva. Trends Comp. Sci. Eng. 9(5), 8934–8937 (2020). ISSN 2278-3091 42. Bhasha, P., Pavan Kumar, T., Khaja Baseer, K., Jyothsna, V.: An IoT Based BLYNK Server Application for Infant Monitoring Alert System to Detect Crying and Wetness of a Baby. In: International Conference on Intelligent and Smart Computing in Data Analytics. Advances in Intelligent Systems and Computing, vol 1312. Springer, Singapore (13 March 2021) 43. Bhasha, P, Suresh Babu, J., Vadlamudi, M.N., Abraham, K., Sarangi, S.K.: Automated crop yield prediction system using machine learning algorithm. J., Algebraic Statistics 13(3), 2512–2522 (2022). https://publishoa.com, ISSN: 1309–3452 44. Bhasha, P., Pavan Kumar, T., Khaja Baseer, K.: A simple and effective electronic stick to detect obstacles for visually impaired people through sensor technology. J. Adva. Res. Dynamical & Control Systems 12(06), 18–27 (May 2020). https://doi.org/10.5373/JARDCS/V12I6/S20 201003 45. Silpa, C., Niranjana, G., Ramani, K.: Securing data from active attacks in IoT: an extensive study. In: Manogaran, G., Shanthini, A., Vadivu, G. (eds.) Proceedings of International Conference on Deep Learning, Computing and Intelligence. Advances in Intelligent Systems and Computing, vol 1396. Springer, Singapore (2022) 46. Silpa, C., Suneetha, I., Reddy Hemantha, G., Arava, R.P.R., Bhumika, Y.: Medication alarm: a proficient IoT-Enabled medication alarm for age old people to the betterment of their medication practice. J. Pharmaceutical Negative Results 13(4), 1041–1046 (Nov. 2022) 47. Silpa, C., Arava, R.P.R., Baseer, K.K.: Agri farm: crop and fertilizer recommendation system for high yield farming using machine learning algorithms. In: Int. J. Early Childhood Special Edu. (INT-JECSE), 14(02), 6468 (2022). https://doi.org/10.9756/INT-JECSE/V14I2.740 ISSN: 1308-5581 48. Jyothsna, V., Kumar Raja, D.R., Hemanth Kumar, G., Dileep Chnadra, E.: A novel manifold approach for intrusion detection system (MHIDS). Gongcheng Kexue Yu Jishu/Advanced Engineering Science 54(02) (2022) 49. Jyothsna, V., Mukesh, D., Sreedhar, A.N.: A flow-based network intrusion detection system for high-speed networks using meta-heuristic scale. In: Peng, S.L., Dey, N., Bundele, M. (eds.) Computing and Network Sustainability. Lecture Notes in Networks and Systems, vol 75. Springer, Singapore (2019) 50. Jyothsna, V., Prasad, K.M., Rajiv, K., Chandra, G.R.: Flow based anomaly intrusion detection system using ensemble classifier with Feature Impact Scale. Clust. Comput. 24(3), 2461–2478 (2021). https://doi.org/10.1007/s10586-021-03277-5 51. Jyothsna, V., Prasad, M., GopiChand, G., Bhavani, D.D.: DLMHS: flow-based intrusion detection system using deep learning neural network and meta-heuristic. Int. J. Comm. Sys. 35(10), e5159 (10 July 2022) 52. 
Jyothsna, V., Sreedhar, A.N., Mukesh, D., Ragini, A.: A network intrusion detection system with hybrid dimensionality reduction and neural network based classifier. In: Tuba, M., Akashe, S., Joshi, A. (eds.) ICT Systems and Sustainability. Advances in Intelligent Systems and Computing, vol 1077. Springer, Singapore (2020)


53. Maria Joseph, B., Baseer, K.K.: Reducing the latency using fog computing with IoT in real time. Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(08), 2677–2692 (Oct. 2022). Journal ID: AES-15-10-2022-355, ISSN: 2096-3246 54. Baseer, K.K., Jahir Pasha, M., et al.: Smart online examination monitoring system. J. Algebr. Stat. 13(3), 559–570 (2022). ISSN: 1309-3452 55. Baseer, K.K., Jahir Pasha, M., Krishna, T.M., Kumar, J.M., Silpa, C.: COVID-19 patient count prediction using classification algorithm. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(07) (2022). https://doi.org/10.9756/INTJECSE/V14I7.7. ISSN: 1308-5581 56. Jahir Pasha, M., Sujatha, V., Hari Priya, A., Baseer, K.K.: IoT technology enabled multi-purpose chair to control the home/office appliance. J. Algebr. Stat. 13(1), 952–959 (May 2022). ISSN: 1309-3452 57. Baseer, K.K., Neerugatti, V., Jahir Pasha, M., Satish Kumar, V.D.: Internet of things: a product development cycle for the entrepreneurs. Helix 10(02), 155–160 (Apr. 2020) 58. Silpa, C., Srinivasa Chakravarthi, S., Jagadeesh Kumar, G., Baseer, K.K., Sandhya, E.: Health monitoring system using IoT sensors. J. Algebr. Stat. 13(3), 3051–3056 (June 2022). ISSN: 1309-3452 59. Sandhya, E., Arava, R.P.R., Phalguna Krishna, E.S., Baseer, K.K.: Investigating student learning process and predicting student performance using machine learning approaches. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(07), 622–628 (2022). https://doi.org/10.9756/INTJECSE/V14I7.60. ISSN: 1308-5581

Classification Model for Identification of Internet Loan Frauds Using PCA with Ensemble Method

A. Madhaveelatha1(B), K. M. Varaprasad1, and Bhasha Pydala2

1 Department of CSE, Narayana Engineering College, Nellore 524004, A.P., India
[email protected], [email protected]
2 Department of IT, Mohan Babu University (Erstwhile Sree Vidyanikethan Engineering College), Tirupati 517 102, A.P., India
[email protected]

Abstract. Due to the rapid growth of e-services including e-commerce, e-finance, and mobile payments, personal loans for consumption have seen a significant uptick in recent years. Massive losses arise from credit loan fraud because of insufficient checking and monitoring. Because of the time and effort required to manually analyse and verify such a high volume of credit/debit card transactions, machine learning methods should be widely deployed to automatically identify fraudulent transactions. This paper offers a unique hybrid supervised and unsupervised learning approach, P-XGBoost, that combines Principal Component Analysis (PCA) and the XGBoost algorithm to filter out superfluous data while keeping significant information. We use grid search to prevent over-fitting and investigate the performance of XGBoost, P-XGBoost, and other conventional ML techniques. It turns out that P-XGBoost surpasses XGBoost when it comes to detecting fraud, and it also offers a novel viewpoint on maintaining client anonymity while searching for fraudulent behaviour. Keywords: Fraud detection · XGBoost · Principal component analysis

1 Introduction Fraud refers to the misrepresentation of a business organization's processes for gain, rather than an inherently straightforward legal concern. A scam is a deliberate attempt to deceive another individual or persons for monetary benefit. Fraud committed by people outside the establishment is known as consumer or external fraud, whereas fraud committed by insiders such as top executives is named internal fraud. Credit card fraud is the unpermitted use of an account by a user for whom it is not intended. It is moreover defined as a situation where an individual uses another person's card for personal motives while the owner of the card and the card issuer do not know that the card is being used. The person using the card has not in the least checked with the card issuer © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 486–495, 2023. https://doi.org/10.1007/978-3-031-27499-2_46


or the cardholder, and has no intention of making repayment for the purchases performed. Data mining refers to extracting knowledge from a significant quantity of data. Data mining approaches are either (a) supervised learning, trained on transactions labelled as fraudulent or genuine, or (b) unsupervised learning on transactions that are not labelled as fraudulent or valid. The development of new fraud detection methodologies is made more burdensome by restrictions on the exchange of ideas and data in fraud detection. Effective detection relies on extracting indications of fraud from good data using quantifiable, sound statistics. Consequently, building sustainable fraud prediction techniques from the initial design of a system or protocol, rather than treating safety precautions as an afterthought once some kind of failure arises, is essential [8–11]. Image classification models were discussed in [16–18], IoT-enabled classification models in [19–21, 32], intrusion detection models in [22–26], and classification models for the identification of Internet loan frauds in [27–31].

2 Relevant Study
This literature review covers the topic areas and investigations that help us understand existing systems similar to the one in this manuscript. The goal of the study is to examine research related to this article and the frameworks used in earlier work. K. J. Leonard [1] presented a rule-based expert system to assist banks in flagging fraudulent use of credit cards. The target of the framework is to detect possible fraud during the sign-up process. The publication first identifies the modelling procedures, then describes the test design, and finally presents the results of an assessment on real data from a North American credit union. D. Sánchez, M. A. Vila, L. Cerda, and J. M. Serrano [2] applied association rules to credit card fraud detection: association rules are extracted so that normal user patterns can be distinguished from unauthorized payments in transactional payment-card databases in order to detect malfeasance. The proposed method was meant to be applied to information about fraudulent transactions in some of the most important corporate businesses on the continent. M. E. Edge and P. R. F. Sampaio [3] describe the design of FFML, a rule-based policy modelling language and overarching architecture for expediting the high-level specification and deployment of proactive fraud-control policies across a number of commercial banking portals. They also illustrate how a domain-independent vocabulary can be used to abstract a financial system into a data-stream process, minimizing policy-modelling nuance and provisioning response times through a creative policy-mapping grammar


usable both by domain experts and by multiple users. FFML is a component of a full array of policy-authoring tools and expertise developed to help fraud analysts' daily activities in planning new fraud-prevention policy initiatives, mapping them onto an executable programming interface, and providing continuous monitoring and adherence features within the financial forum. I. Kokkinaki [4] proposes a template for extracting users' profiles and detecting atypical transactions that might indicate fraud occurrences or a change in a user's conduct. The work of G. Kou, Y. Peng, G. Wang, and Y. Shi [5] evaluates classification algorithms using multiple-criteria decision-making (MCDM) procedures, including VIKOR [12–15]. An empirical study over six real credit-rating and management-fraud data sources was designed to compare and contrast different categorization approaches.

3 Proposed Method
Various analytic methodologies have been proposed for detecting fraud, each processing different parameters and with a different calculation time, as shown in graphic representations. In current frameworks, fraud detection is performed using a support vector machine (SVM): the SVM classifier studies the percentage of fraud that occurred, characterizing and comparing variable factors for the protocols involved. The scheme recommended here detects fraud using supervised learning, namely a decision tree, logistic regression, and gradient boosting. A specific challenge addressed in this manuscript is that new card transactions must be compared with the history of transactions made by the same customer. Novelty is the most troubling problem here, and it is termed the concept drift problem: concept drift can be described as a variable that shifts over the long term as the system runs. We can identify the essential fields captured whenever a transaction is made: 1) transaction ID: a unique identifier for each transaction; 2) card ID: a unique identifier assigned to the payment card; 3) amount: the amount transferred or charged in the transaction by the buyer; 4) time: details about the date and time at which the transfer was made; and 5) label: indicating whether the transaction is genuine or fraudulent. These constitute the raw features of credit card payments. The attributes of the processed dataset comprise: 1) time: the interval in seconds between the current transaction and the first transaction; 2) amount: the transaction price; and 3) class: fraud or not fraud. A pandas sketch of this derivation is given below.
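The following sketch derives the three processed attributes from the five raw fields listed above. The column names and sample rows are hypothetical, chosen only to mirror the schema described in this section.

```python
import pandas as pd

# Hypothetical raw schema mirroring Sect. 3: card id, amount, timestamp, class label.
raw = pd.DataFrame({
    "card_id": ["c1", "c1", "c2", "c2"],
    "amount": [25.0, 310.0, 12.5, 12.5],
    "timestamp": pd.to_datetime(["2020-01-01 10:00", "2020-01-01 10:05",
                                 "2020-01-02 09:00", "2020-01-02 09:30"]),
    "Class": [0, 1, 0, 0],  # 1 = fraud, 0 = genuine
})

# 'time': seconds between each card's first transaction and the current one.
first_seen = raw.groupby("card_id")["timestamp"].transform("min")
processed = pd.DataFrame({
    "time": (raw["timestamp"] - first_seen).dt.total_seconds(),
    "amount": raw["amount"],
    "Class": raw["Class"],
})
print(processed)
```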


3.1 Algorithm
Step 1: Start.
Step 2: Pre-process the input dataset.
Step 3: Train the model using logistic regression.
Step 3.1: X = new_df.drop('Class', axis=1)
Step 3.2: y = new_df[['Class']]
Step 3.3: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=44)
Step 3.4: print('X_train shape => ' + str(X_train.shape))
Step 3.5: print('y_train shape => ' + str(y_train.shape))
Step 3.6: print('X_test shape => ' + str(X_test.shape))
Step 3.7: print('y_test shape => ' + str(y_test.shape))
Step 3.8: X_train shape => (659, 30)
Step 3.9: y_train shape => (659, 1)
Step 3.10: X_test shape => (325, 30)
Step 3.11: y_test shape => (325, 1)
Step 4: Calculate accuracy, macro average, and weighted average using RF.
Step 5: Calculate the fraud report.
Step 6: Stop.

3.2 XGBoost Approach
XGBoost has recently been dominating applied machine learning and data-science competitions for structured and tabular data. It is an implementation of gradient-boosted decision trees designed for speed and performance. XGBoost is a decision-tree ensemble learning algorithm that uses a gradient-boosting framework. The algorithm is distinguished by its broad range of uses: it can be employed for regression, classification, ranking, and user-defined prediction problems [6].

3.3 Logistic Regression
Logistic regression is a classification strategy that outputs the probability of a binary dependent variable as predicted from the independent variables of a data frame. The predicted outcome takes one of two values: zero or one, true or false, fraudulent or legitimate. Figure 1 shows the plot of X vs Y for the regression [6, 7]. A runnable sketch of Steps 2–4 follows.
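The logistic-regression steps above translate directly into scikit-learn. The sketch below is a hedged rendering of Steps 2–4; the CSV file name is a placeholder, since the paper does not name its data file.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# new_df: the pre-processed transaction frame with a binary 'Class' column.
new_df = pd.read_csv("transactions.csv")  # hypothetical file name

X = new_df.drop("Class", axis=1)
y = new_df["Class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=44)  # yields the shapes in Steps 3.8-3.11

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# classification_report prints accuracy plus macro and weighted averages (Step 4).
print(classification_report(y_test, clf.predict(X_test)))
```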


Fig. 1. Plot of X vs Y for regression

3.4 Decision Tree
A decision tree is an estimator that uses a tree-like graph of decisions and their possible outcomes to predict a target outcome. The algorithm uses the following supervised-learning measures:

Entropy(S) = \sum_{i=1}^{n} -p_i \log_2 p_i    (1)

Gain(S, A) = Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|} Entropy(S_v)    (2)

The attribute for which the gain is highest and the entropy is lowest is chosen as the decision node of the tree, as Fig. 2 shows. The dataset is then split on that attribute, and the procedure is repeated on the resulting subsets using the remaining features [6].

Fig. 2. Decision tree
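To make Eqs. (1) and (2) concrete, the short sketch below computes entropy and information gain for a toy label/attribute pair; the arrays are illustrative placeholders, not the paper's data.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, Eq. (1)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(labels, attribute):
    """Information gain of splitting `labels` on `attribute`, Eq. (2)."""
    gain = entropy(labels)
    for v in np.unique(attribute):
        subset = labels[attribute == v]
        gain -= (subset.size / labels.size) * entropy(subset)
    return gain

# Toy data: does splitting on a binary attribute separate fraud from genuine?
labels = np.array([1, 1, 0, 0, 0, 1])        # 1 = fraud
attribute = np.array(["a", "a", "b", "b", "a", "b"])
print(information_gain(labels, attribute))   # higher gain => better split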

Figure 3 shows the three phases of the workflow: 1. Acquiring phase, 2. Training phase, 3. Testing phase.
✓ Acquiring phase: the dataset is assembled from data sources, old data is cleaned, and a concept is selected for constructing an efficient model.
✓ Training phase: a gradient boosting design is used for training.


✓ Testing phase: the trained model is assessed while taking various parameters into consideration.

Fig. 3. Work flow

4 Experimental Results
The technique was implemented in Python with the Django web application framework. The implementation uses the NumPy, SciPy, pandas, openpyxl, and xlwt libraries. A banking-transactions data source is used to produce the loan-fraud predictions (Figs. 4, 5, 6 and 7).


Fig. 4. Comparison graph of machine learning models.

Fig. 5. Line graph of model accuracies.

Fig. 6. Prediction of loan approval using different models


Fig. 7. Ratio of loan approval vs. non-approval

5 Conclusion
In this article we present a fraud-detection framework for e-services, including online loans, that addresses the confidentiality of customers' personal information. When boosting is incorporated into the fraud-detection estimator, a dramatic improvement can be expected. A hybrid approach is successfully implemented in which anomaly rankings are used to increase the overall effectiveness of the fraud classifier. The proposed model thus provides classification with high accuracy, achieving a precision of 0.93 and thereby maximizing the achievable efficiency.

References
1. Leonard, K.J.: The development of a rule based expert system model for fraud alert in consumer credit. Eur. J. Oper. Res. 80(2), 350–356 (1995)
2. Vila, M.A., Cerda, L., Serrano, J.M., Sánchez, D.: Association rules applied to credit card fraud detection. Expert Syst. Appl. 36(2), 3630–3640 (2009)
3. Sampaio, P.R.F., Edge, M.E.: The design of FFML: a rule-based policy modelling language for proactive fraud management in financial data streams. Expert Syst. Appl. 39(11), 9966–9985 (2012)
4. Kokkinaki, I.: On atypical database transactions: identification of probable frauds using machine learning for user profiling. In: Proceedings of the IEEE Knowledge and Data Engineering Exchange Workshop, pp. 229–238 (1997)
5. Kou, G., Wang, G., Shi, Y., Peng, Y.: An empirical study of classification algorithm evaluation for financial risk prediction. Appl. Soft Comput. 11(2), 2906–2915 (2011)
6. Aitken, S., Wheeler, R.: Multiple algorithms for fraud detection. Knowl.-Based Syst. 13(2), 93–99 (2000)
7. Jordan, M.I., Ng, A.Y.: On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In: Advances in Neural Information Processing Systems, pp. 841–848 (2002)
8. Vybornova, O.N., Azhmukhamedov, I.M.: Introduction of metrics for risk assessment and management. Casp. J. Manag. High Technol. 4(32), 10–22 (2015)


9. Mallick, B., Chaudhary, K.: Credit card fraud: the study of its impact and detection techniques. Int. J. Comput. Sci. Netw. (IJCSN) 1(4), 31–35 (2012)
10. Huang, F., Wen, H.: Personal loan fraud detection based on hybrid supervised and unsupervised learning. IEEE (2020)
11. Carminati, M., Caron, R., Maggi, F., Epifani, I., Zanero, S.: BankSealer: a decision support system for online banking fraud analysis and investigation (2015)
12. Louzada, F., Ara, A.: Bagging k-dependence probabilistic networks: an alternative powerful fraud detection tool
13. Halvaiee, N.S., Akbari, M.K.: A novel model for credit card fraud detection using artificial immune system
14. Lei, J.Z., Ghorbani, A.A.: An empirical study of classification algorithm evaluation for financial risk prediction
15. Peng, Y., Wang, G., Kou, G., Shi, Y.: On atypical database transactions: identification of probable frauds using machine learning for user profiling
16. Bhasha, P., Pavan Kumar, T., Khaja Baseer, K., Jyothsna, V.: An IoT-based BLYNK server application for infant monitoring alert system to detect crying and wetness of a baby. In: Bhattacharyya, S., Nayak, J., Prakash, K.B., Naik, B., Abraham, A. (eds.) International Conference on Intelligent and Smart Computing in Data Analytics. AISC, vol. 1312, pp. 55–65. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-6176-8_7
17. Bhasha, P., Babu, J.S., Vadlamudi, M.N., Abraham, K., Sarangi, S.K.: Automated crop yield prediction system using machine learning algorithm. J. Algebr. Stat. 13(3), 2512–2522 (2022)
18. Bhasha, P., Kumar, T.P., Baseer, K.K.: A simple and effective electronic stick to detect obstacles for visually impaired people through sensor technology. J. Adv. Res. Dyn. Control Syst. 12(6), 18–27 (2020). https://doi.org/10.5373/JARDCS/V12I6/S20201003
19. Silpa, C., Niranjana, G., Ramani, K.: Securing data from active attacks in IoT: an extensive study. In: Manogaran, G., Shanthini, A., Vadivu, G. (eds.) Proceedings of International Conference on Deep Learning, Computing and Intelligence. AISC, vol. 1396. Springer, Singapore (2022). https://doi.org/10.1007/978-981-16-5652-1_5
20. Silpa, C., Suneetha, I., Hemantha, G.R., Arava, R.P.R., Bhumika, Y.: Medication alarm: a proficient IoT-enabled medication alarm for age old people to the betterment of their medication practice. J. Pharm. Negat. Results 13(4), 1041–1046 (2022)
21. Silpa, C., Arava, R.P.R., Baseer, K.K.: Agri farm: crop and fertilizer recommendation system for high yield farming using machine learning algorithms. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(2), 6468 (2022). https://doi.org/10.9756/INT-JECSE/V14I2.740
22. Jyothsna, V., Raja, D.R.K., Kumar, G.H., Dileep, C.E.: A novel manifold approach for intrusion detection system (MHIDS). Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(2) (2022)
23. Jyothsna, V., Mukesh, D., Sreedhar, A.N.: A flow-based network intrusion detection system for high-speed networks using meta-heuristic scale. In: Peng, S.-L., Dey, N., Bundele, M. (eds.) Computing and Network Sustainability. LNNS, vol. 75, pp. 337–347. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-7150-9_36
24. Jyothsna, V., Prasad, K.M., Rajiv, K., Chandra, G.R.: Flow based anomaly intrusion detection system using ensemble classifier with Feature Impact Scale. Clust. Comput. 24(3), 2461–2478 (2021). https://doi.org/10.1007/s10586-021-03277-5
25. Jyothsna, V., Prasad, K.M., GopiChand, G., Bhavani, D.D.: DLMHS: flow-based intrusion detection system using deep learning neural network and meta-heuristic scale. Int. J. Commun. Syst. 35(10), e5159 (2022). https://doi.org/10.1002/dac.5159


26. Jyothsna, V., Sreedhar, A.N., Mukesh, D., Ragini, A.: A network intrusion detection system with hybrid dimensionality reduction and neural network based classifier. In: Tuba, M., Akashe, S., Joshi, A. (eds.) ICT Systems and Sustainability. AISC, vol. 1077, pp. 187–196. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-0936-0_19
27. Joseph, B.M., Baseer, K.K.: Reducing the latency using fog computing with IoT in real time. Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(8), 2677–2692 (2022)
28. Baseer, K.K., Pasha, M.J., et al.: Smart online examination monitoring system. J. Algebr. Stat. 13(3), 559–570 (2022)
29. Baseer, K.K., Pasha, M.J., Krishna, T.M., Kumar, J.M., Silpa, C.: COVID-19 patient count prediction using classification algorithm. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(7) (2022). https://doi.org/10.9756/INTJECSE/V14I7.7
30. Pasha, M.J., Sujatha, V., Priya, A.H., Baseer, K.K.: IoT technology enabled multi-purpose chair to control the home/office appliance. J. Algebr. Stat. 13(1), 952–959 (2022)
31. Baseer, K.K., Neerugatti, V., Pasha, M.J., Kumar, V.D.S.: Internet of things: a product development cycle for the entrepreneurs. Helix 10(2), 155–160 (2020). https://doi.org/10.29042/2020-10-2-155-160
32. Silpa, C., Chakravarthi, S.S., Jagadeesh, K.G., Baseer, K.K., Sandhya, E.: Health monitoring system using IoT sensors. J. Algebr. Stat. 13(3), 3051–3056 (2022)
33. Sandhya, E., Arava, R.P.R., Krishna, E.S.P., Baseer, K.K.: Investigating student learning process and predicting student performance using machine learning approaches. Int. J. Early Child. Spec. Educ. (INT-JECSE) 14(7), 622–628 (2022). https://doi.org/10.9756/INTJECSE/V14I7.60

Comparative Analysis of Learning Models in Depression Detection Using MRI Image Data S. Mano Venkat1(B) , C. Rajendra1 , and K. Venu Madhav2 1 Department of CSE, Narayana Engineering College, Nellore 524 004, A.P., India

[email protected], [email protected] 2 Department of CS and IS, BIUST, Palapye, Botswana [email protected]

Abstract. Over the last several years, research into the causes and treatments of major depression has been a hot topic. Machine learning models are applied to fresh brain imaging data comprising several sets of information in order to detect depression. However, the prediction accuracy of machine learning algorithms such as SVM and KNN has ranged from 85% to 98%, and the accuracy of detecting the depressed state is still in need of improvement. Here, we use a brain image dataset for our analysis. This data may be used to provide an accurate assessment of the patient's state of depression. Our strategy is to use the limited dataset to fine-tune network weights such that they are optimally suited to the specific job at hand. According to the findings, DenseNet121 achieved the highest accuracy (up to 95.74%) of all the models tested. A multi-component model is designed to outperform the accuracy of its component parts. Keywords: Deep learning · Machine learning · Ensemble methods

1 Introduction
The brain is an essential organ, and the human brain is especially significant: it stands in for the whole central nervous system of the body. The brain is a complex organ comprising billions of nerve cells linked together by synapses. The forebrain, midbrain, and hindbrain are the three divisions of the brain responsible for different aspects of mental functioning. Three kinds of tissue make up the human brain: grey matter, white matter, and cerebrospinal fluid [1–10]. It handles a broad variety of tasks. It is in charge of the body's motility, which includes things like leg and arm movement. It is responsible for taking in data from the body's five senses and making sense of it. The quick movement of the hand in response to intense warmth or cold is actually controlled by the brain [11–14]. Image classification has been discussed in [18–20, 35]. The brain manages and controls all of the body's autonomic activities, including hormone release, respiration, and heart rate. IoT-based applications were discussed in [21–23, 34]. Approaches for intrusion detection systems were discussed in [24–28]. Detecting the depression levels of a person using MRI image data and training learning models for analysis is discussed in [29–33].


1.1 Types of Brain Diseases
Stroke is a sickness that affects the brain [2]. It happens when a blood vessel in the brain bursts or is blocked, cutting off the blood supply to a certain area of the brain. Paralysis of the leg, arm, or voice box may result from this disease. Depressive, eating, personality, anxiety, and psychotic disorders [3–5] are all examples of brain diseases that affect mental health. Parkinson's disease (PD) is an additional classification of brain condition: its trembling makes it hard to move, coordinate, and walk. The neurons responsible for transmitting signals to coordinate muscle action are lost in people with Parkinson's disease [6].

1.2 Diagnosis and Treatment of Brain Disease Using Image Processing
When it comes to the reliability and consistency of the data, algorithms provide reliable outcomes. The first step in dealing with picture data is ensuring that the images are legible. After that, several image processing methods, including segmentation, feature extraction, the region growing technique, the wavelet transform, watershed algorithms, k-means clustering, and other combinations, are used to extract important characteristics for improved performance [7, 15]. Numerous layers are utilized for feature extraction in transfer learning [8]. The accuracy of the numerous layers relies on a combination of appropriate factors. [9] suggests a number of other techniques for the delineation and error checking of pictures of brain tumors.

1.3 Image Extraction Techniques
Images of the human brain or other material pertinent to studying the brain and its illnesses are essential for any researcher. Common methods of data collection from the human brain include electroencephalography (EEG), magnetic resonance imaging (MRI), and functional magnetic resonance imaging (fMRI). EEG involves attaching electrodes to a person's scalp in order to capture the brain's electrical activity. MRI refers to magnetic resonance imaging, used for taking a snapshot of a living brain in action; skull tumors and bone dislocations may both be detected using MRI. Functional magnetic resonance imaging is abbreviated as fMRI [15–17]. In contrast to MRI, it records variations in blood flow to various organs.

2 Relevant Study
Transfer learning is an enhanced form of machine learning, so for any imaging concern it is very important to understand the outcomes of machine learning algorithms on such problems. When classical machine learning approaches are applied to a dataset, feature extraction needs to be done with the support of feature selection methods. According to the literature surveyed, feature extraction can be performed with the aid of methods such as the grey-level co-occurrence matrix, the discrete wavelet transform, and the like. Deep convolutional neural network techniques are now the leading approach for every vision-based issue. Cireşan et al. used a deep neural network for mitosis detection and classification in breast cancer histology and attained an F-score of 0.782 [13]. Saleh et al. conducted image classification employing dimensionality reduction with principal component analysis, and categorization with a deep neural network reached an accuracy of 96.97% [14].


had to use a neural network just that mitotically recognition and classification such as ovarian cancer or attained a kind homozygous recessive rating sure 0.782 [13]. Saleh notamment cetera conducted image retrieval employing chance of developing but rather reduced and use principal component analysis but rather categorization utilizing deep neural accuracy seems to be 96.97% [14]. 2.1 DenseNet DenseNet were using cross-layer interconnect between every leading up thin coating for everyone resultant two-layer throughout nutrient haute couture. Pacts with both strands helps to improve flow of data as in system thru 3des discussed within and between layer upon layer through back propagation, therefore, this architect could indeed reduce disappearing convolution layers. This architecture is shown in Fig. 1.

Fig. 1. DenseNet architecture

3 Proposed Method
DenseNet (Dense Convolutional Network) is a design that aims to make deep CNNs deeper while at the same time making them more efficient to train, by using shorter connections between layers. DenseNet is a convolutional network in which every layer is connected to all the layers deeper in the network: the first layer is connected to the 2nd, 3rd, 4th and so on, and the second layer is connected to the 3rd, 4th, 5th and so on. This is done to aid the flow of information between the layers of the network. To preserve the feed-forward nature, each layer obtains inputs from all preceding layers and passes its own feature maps on to all layers that come after it. Unlike ResNets, it does not merge features via summation; it merges them by concatenating them. Hence the ith layer has i inputs, consisting of the feature maps of all preceding convolutional blocks, and its own feature maps are passed on to all subsequent layers. This introduces L(L + 1)/2 connections in an L-layer network, rather than the L connections found in a classical architecture. It thus needs fewer parameters than a traditional convolutional neural network


(CNN), since there is no need to learn redundant feature maps. DenseNet is made up of two vital blocks apart from the basic convolution and pooling layers: the dense blocks and the transition layers. Figure 2 shows the DenseNet121 framework: the first part of DenseNet is composed of a convolution and pooling operation; after that come a dense block, a transition layer, another dense block, another transition layer, and a final dense block followed by a classification layer. There are 64 7 × 7 filters in the first convolution block, with the stride set to 2, followed by a 3 × 3 max-pooling layer, which in this case also has a stride of 2. A transfer-learning sketch of this setup follows.
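A minimal Keras sketch of the DenseNet121 transfer-learning setup described here is shown below. The frozen backbone, head size, and optimizer are illustrative assumptions, not the authors' exact training configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# Pre-trained DenseNet121 backbone on 128 x 128 x 3 inputs (see Sect. 4);
# ImageNet weights are transferred and only a small new head is trained.
# Inputs are assumed pre-scaled (densenet preprocess_input would be used in practice).
base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=(128, 128, 3), pooling="avg")
base.trainable = False  # freeze the dense blocks; fine-tune later if desired

model = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # depressive vs. non-depressive
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```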

4 Experimental Results
The potential of pre-treatment resting-state MRI imaging for predicting antidepressant efficacy is explored. A pre-trained CNN architecture, DenseNet121, which had previously performed well on electroencephalogram (EEG) data sources, is used here with the intent of transferring that understanding to the task at hand using minimal training statistics. So that the deep CNN receives its expected input shape, images created with volume concentration (128 × 128 × 3) are resampled before being fed into the CNN. 783 images are separated from the rest, and 578 images are utilised for model verification despite not being included in the final challenge material. Information from ten separate validation folds is used to help shape the required model layout. Training a convolutional model is highly variable because of the non-linear nature of MRI data and the variance between CNN models; such a problem is less noticeable when accuracy and performance are averaged before drawing conclusions. In this approach, the fundamental design may learn new forms even while missing a large chunk of information during the training-dataset creation phase.

4.1 Performance Evaluation
The effectiveness of each individual model is evaluated using precision, sensitivity, and specificity, together with the receiver operating characteristic (ROC) curve and the precision-recall curve (PRC). ROC curves are used for quantifying results, and a high AUC (area under the ROC curve) indicates that the scheme works well. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) as the classification cutoff varies between 0 and 1; a good model bends toward the upper-left corner of the plot. Ten responders and eight non-responders are present in the dataset, giving a roughly 50/50 class distribution in this survey. This is not a severe case of class imbalance, but it must be recognized in model design and appraisal. Hence the PRC curve is used here to analyze every model's efficiency on the majority and minority classes. The PRC curve plots precision against recall as the threshold varies between 1


and 0. Recall and precision do not use the negative class in their computation; therefore the PRC is not affected by majority-class predictions. Alongside these indicators, the confusion matrix informs the model evaluation through: the proportion of responders that were classified correctly (True Positives [TP]), the proportion of non-responders that were correctly identified (True Negatives [TN]), the proportion of non-responders that were wrongly classified as responders (False Positives [FP]), and the proportion of responders that were misidentified as non-responders (False Negatives [FN]) [8] (Figs. 3, 4, and 5). A sketch of computing these metrics follows.
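The metrics above map onto standard scikit-learn calls. The sketch below uses placeholder labels and scores in place of the paper's cross-validated predictions.

```python
import numpy as np
from sklearn.metrics import (auc, confusion_matrix,
                             precision_recall_curve, roc_auc_score)

# Placeholder labels/scores (1 = responder, 0 = non-responder).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.8, 0.3, 0.1, 0.65, 0.55])

tn, fp, fn, tp = confusion_matrix(y_true, y_score > 0.5).ravel()
sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate

roc_auc = roc_auc_score(y_true, y_score)                 # area under the ROC curve
precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)                          # area under the PR curve
print(sensitivity, specificity, roc_auc, pr_auc)
```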

Fig. 2. Count plot for depressive and non-depressive

Fig. 3. Random MRI images of depressive and non-depressive

5 Conclusion
The researchers performed a detailed study to analyze the results of an ensemble of effective pre-trained CNNs, based on transfer learning, for categorizing responders and non-responders to treatment with the antidepressant sertraline from MRI scans, and presented the research results. This work demonstrated that ensemble models built from pre-trained deep CNNs through cross-validation can address the shortcomings of small training samples and provide generalization for deep neural networks.


Fig. 4. Comparative analysis of proposed model with the other machine learning models

Fig. 5. Accuracy curves of training and validation accuracy

The material acquired from post resting-state recordings, which comprises response patterns at various bandwidths, can capture the markers present in the MRI dataset and has high capability in predicting treatment response. The results suggest that the performance achieved by DenseNet exceeds that of typical methods in terms of accuracy, precision, and recall. An ensemble of such elementary sub-models can perform significantly better than any individual model by combining functionality from every concept during classification. The improved outcomes over the best previous reports show the importance of the acquired material and of an ensemble of effective CNNs in designing a scheme for designating responders and non-responders to commonly prescribed psychiatric drugs.


References 1. Davy, A., et al.: Brain tumor segmentation with deep neural networks. Med. Image Anal. 35, 18–31 (2017) 2. Wilton, S.B., Musuka, T.D., Hill, M.D., Traboulsi, M.: Diagnosis and management of acute ischemic stroke: speed is critical. CMAJ 187(12), 887–893 (2015) 3. McKeith, I.: Dementia with Lewy bodies. In: Handbook of Clinical Neurology, vol. 84, pp. 531–548. Elsevier (2007) 4. Edmonds, E.C., Bondi, M.W., Salmon, D.P.: Alzheimer's disease: past, present, and future. J. Int. Neuropsychol. Soc. 23(9–10), 818–831 (2017) 5. Joober, R., Garcia, A., Malla, A.: Mental illness is like any other medical illness: a critical examination of the statement and its impact on patient care and society. J. Psychiatry Neurosci. 40(3), 147 (2015) 6. Esmail, S.: The diagnosis and management of Parkinson's disease. Sch. J. Appl. Sci. Res. 1(9), 13–19 (2018) 7. Pauls, J., Trinath, T., Logothetis, N.K., Oeltermann, A., Augath, M.: Neurophysiological investigation of the basis of the fMRI signal. Nature 412(6843), 150–157 (2001) 8. El-Horbaty, E.-S.M., Salem, A.-B.M., Mohsen, H., El-Dahshan, E.-S.A.: Classification using deep learning neural networks for brain tumors. Future Comput. Inform. J. 3(1), 68–71 (2018) 9. Madhumitha, S., et al.: Using image processing on MRI scans. In: 2015 IEEE International Conference on Signal Processing, Informatics, Communication, and Energy Systems (SPICES), pp. 1–5. IEEE (2015) 10. Mendre, V., Gawande, S.S.: Brain tumor diagnosis using image processing: a survey. In: 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information and Communication Technology (RTEICT), pp. 466–470. IEEE (2017) 11. Gujral, S., Kaur, A., Sharma, K.: Brain tumor detection based on machine learning algorithms. Int. J. Comput. Appl. 103(1), 7–11 (2014) 12. Subramaniam, V., Gurusamy, R.: A machine learning approach for MRI brain tumor classification. Comput. Mater. Contin. 53(2), 91–108 (2017) 13. Schmidhuber, J., Giusti, A., Ciresan, D.C., Gambardella, L.M.: Mitosis detection in breast cancer histology images with deep neural networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 411–418. Springer, New York (2013) 14. Adeli, H., Ahmadlou, M., Adeli, A.: Fractality analysis of frontal brain in major depressive disorder. Int. J. Psychophysiol. 85(2), 206–211 (2012) 15. Ali, S.S.A., Xia, L., Malik, A.S., Yasin, M.A.M., Mumtaz, W.: A wavelet based technique to predict treatment outcome for major depressive disorder. PLoS One (2017) 16. Khodayari-Rostamabad, A., Reilly, J.P., Hasey, G.M., de Bruin, H., MacCrimmon, D.J.: A machine learning approach using EEG data to predict response to SSRI treatment for major depressive disorder. Clin. Neurophysiol. 124(10), 1975–1985 (2013) 17. Jaworska, N., de la Salle, S., Ibrahim, M.-H., Blier, P., Knott, V.: Leveraging machine learning approaches for predicting antidepressant treatment response using electroencephalography (EEG) and clinical data. Front. Psychiatry (2019) 18. Bhasha, P., Pavan Kumar, T., Khaja Baseer, K., Jyothsna, V.: An IoT-based BLYNK server application for infant monitoring alert system to detect crying and wetness of a baby. In: Bhattacharyya, S., Nayak, J., Prakash, K.B., Naik, B., Abraham, A. (eds.) International Conference on Intelligent and Smart Computing in Data Analytics. AISC, vol. 1312, pp. 55–65. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-6176-8_7 19. Bhasha, P., Suresh Babu, J., Vadlamudi, M.N., Abraham, K., Sarangi, S.K.: Automated crop yield prediction system using machine learning algorithm. J. Algebraic Stat. 13(3), 2512–2522 (2022)


20. Bhasha, P., Pavan Kumar, T., Khaja Baseer, K.: A simple and effective electronic stick to detect obstacles for visually impaired people through sensor technology. J. Adv. Res. Dyn. Control Syst. 12(6), 18–27 (2020). https://doi.org/10.5373/JARDCS/V12I6/S20201003 21. Silpa, C., Niranjana, G., Ramani, K.: Securing data from active attacks in IoT: an extensive study. In: Manogaran, G., Shanthini, A., Vadivu, G. (eds.) Proceedings of International Conference on Deep Learning, Computing and Intelligence. AISC, vol. 1396. Springer, Singapore (2022) 22. Silpa, C., Suneetha, I., Reddy Hemantha, G., Arava, R.P.R., Bhumika, Y.: Medication alarm: a proficient IoT-enabled medication alarm for age old people to the betterment of their medication practice. J. Pharm. Neg. Results 13(4), 1041–1046 (2022) 23. Silpa, C., Arava, R.P.R., Baseer, K.K.: Agri farm: crop and fertilizer recommendation system for high yield farming using machine learning algorithms. Int. J. Early Childh. Spec. Educ. 14(2), 6468 (2022). https://doi.org/10.9756/INT-JECSE/V14I2.740 24. Jyothsna, V., Kumar Raja, D.R., Hemanth Kumar, G., Dileep Chandra, E.: A novel manifold approach for intrusion detection system (MHIDS). Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(2) (2022) 25. Jyothsna, V., Mukesh, D., Sreedhar, A.N.: A flow-based network intrusion detection system for high-speed networks using meta-heuristic scale. In: Peng, S.-L., Dey, N., Bundele, M. (eds.) Computing and Network Sustainability. LNNS, vol. 75, pp. 337–347. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-7150-9_36 26. Jyothsna, V., Prasad, K.M., Rajiv, K., Chandra, G.R.: Flow based anomaly intrusion detection system using ensemble classifier with Feature Impact Scale. Clust. Comput. 24(3), 2461–2478 (2021). https://doi.org/10.1007/s10586-021-03277-5 27. Jyothsna, V., Munivara Prasad, K., GopiChand, G., Durga Bhavani, D.: DLMHS: flow-based intrusion detection system using deep learning neural network and meta-heuristic scale. Int. J. Commun. Syst. 35(10), e5159 (2022). https://doi.org/10.1002/dac.5159 28. Jyothsna, V., Sreedhar, A.N., Mukesh, D., Ragini, A.: A network intrusion detection system with hybrid dimensionality reduction and neural network based classifier. In: Tuba, M., Akashe, S., Joshi, A. (eds.) ICT Systems and Sustainability. AISC, vol. 1077, pp. 187–196. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-0936-0_19 29. Maria Joseph, B., Baseer, K.K.: Reducing the latency using fog computing with IoT in real time. Gongcheng Kexue Yu Jishu/Adv. Eng. Sci. 54(8), 2677–2692 (2022) 30. Baseer, K.K., Jahir Pasha, M., et al.: Smart online examination monitoring system. J. Algebr. Stat. 13(3), 559–570 (2022) 31. Baseer, K.K., Jahir Pasha, M., Krishna, T.M., Mohan Kumar, J., Silpa, C.: COVID-19 patient count prediction using classification algorithm. Int. J. Early Childh. Spec. Educ. 14(7) (2022). https://doi.org/10.9756/INTJECSE/V14I7.7 32. Jahir Pasha, M., Sujatha, V., Hari Priya, A., Baseer, K.K.: IoT technology enabled multipurpose chair to control the home/office appliance. J. Algebr. Stat. 13(1), 952–959 (2022) 33. Baseer, K.K., Neerugatti, V., Jahir Pasha, M., Satish Kumar, V.D.: Internet of things: a product development cycle for the entrepreneurs. Helix 10(2), 155–160 (2020) 34. Silpa, C., Srinivasa Chakravarthi, S., Jagadeesh Kumar, G., Baseer, K.K., Sandhya, E.: Health monitoring system using IoT sensors. J. Algebr. Stat. 13(3), 3051–3056 (2022) 35. Sandhya, E., Arava, R.P.R., Phalguna Krishna, E.S., Baseer, K.K.: Investigating student learning process and predicting student performance using machine learning approaches. Int. J. Early Childh. Spec. Educ. 14(7), 622–628 (2022). https://doi.org/10.9756/INTJECSE/V14I7.60

Product Safety and Privacy Using Internet of Things Design and Mojio
Suresh Kallam1(B), Ch. Madhu Babu2, B. Prathima1, C. Lakshmi Charitha1, and K. Reddy Madhavi1 1 CSE, Sree Vidyanikethan Engineering College, Tirupati, AP, India

[email protected] 2 CSE, B V Raju Institute of Technology, Hyderabad, India

Abstract. This work will also have great value in helping designers evaluate a product's performance; a few metrics were created for the main issues in the Internet of Things. With the metrics, manufacturers can analyze the usage of sensors in a more critical manner, find alternatives that deliver the same function at less cost, and decide on the best way to handle the implications of changing it. It will also help them evaluate the points of vulnerability and where the product stands regarding its safety and privacy. Beyond all that, there is also the need to meet the needs of the user: designed products will have to provide seamless integration with other enterprise data, applications, and environments, and will have to overcome battery challenges that limit computation, display resolution, and connectivity. The term Internet of Things may have been around for a long time now, but there is still plenty of work ahead to transform this "term" into the success that everybody has been expecting. Keywords: Design · Internet of Things · Cost · Mojio tool · Performance

1 Introduction
The idea of the Internet of things is that, rather than having a few very powerful devices such as laptops and smartphones, the user will possess a large number of devices, sometimes less powerful, that enable them to always stay connected and informed. Porter and Heppelmann [1] discussed that products are now composed of physical, smart, and connectivity components. They have become complex systems composed of hardware, software, sensors, data storage, microprocessors, and connectivity. Design is changing focus from purely physical products to data-centric physical products. The role of designers is also changing: now they have to advance the human experience and design for users' experiences. For example, connected products will communicate to companies when something is not working properly, so they can start acting instantly when they receive the information and address it before it becomes a bigger problem. In the process, by knowing how the product is being used, companies can also start working proactively and prevent failures before they happen, creating a more loyal customer base. "Listening" to the product will change the perception of


what constitutes a product and how to improve it. Information can be used to modify the company's own products to meet customer needs, or combined with other shared data, enabling a new level for the next generation of products and services [2]. "Design for Internet of Things" has as its main goals to define ground rules for the design team; simplify the structure of the product without losing desired functionality; manufacture the product economically, with better quality and reliability; deliver the product better; be more responsive to customer needs; and get to market faster [3]. However, at this stage of development one can only examine design for IOT from a theoretical perspective. Gandhi and Gervet [4] reported that this new era is moving designers towards software-driven products, and companies that manufacture smart connected products will have an advantage in the market, but getting there won't be simple or easy. As Deichmann et al. (2015) explained, elements of companies' technology stacks may need to be redesigned so they can support billions of interdependent processing events per minute from a myriad of products, devices, and applications.

2 Related Work
The approach to designing IOT products is an overview of the main issues that should cross a smart-product designer's mind. In summary, Fields [5] explained on the IOT Inc. show that first the designer needs to consider what problems they are trying to address with IOT: a problem that the customer is willing to pay to have solved, or sees enough benefit in to pay for. Next, the company needs to decide which is the best vehicle, the "thing", that can deliver this benefit or service, always keeping in mind that with IOT the interface can be everywhere and one can be talking about a non-screen-based product. Then, focusing on the user and the overall system architecture that is going to be used to implement the solution, they need to define what the value proposition of each sensor is. Finding alternatives to the parts that are going to implement the solution is important for flexibility and cost reduction. Then, finally, comes knowing how much they can monetize it, and building a prototype. Companies should always remember that products should be tested in the field as many times as needed before launch.

3 Proposed Methodology
Rules for Designing for Internet of Things. Undoubtedly, redesigning objects or machines and changing entire systems is complicated and time consuming. The following pages give some guidelines to help direct designers through developing a smart product. Simplify design and reduce the total number of parts. Minimize the number of sensors: an array of sensors allows us to sense virtually any type of phenomenon in the environment. People need information, and that is why they are putting sensors in everything. However, do we really need that many sensors? What information is the product giving to the user, and what is the value proposition of each sensor? The best way to answer this is by analyzing alternatives for every major part of the system architecture that is going to be used to implement the solution. This analysis should be done from the inside out and from the outside in, for example, using the same sensor to execute different


functionality. The user may not need an accelerometer to have an orientation position; the device might be able to perform the same function with video image processing [2].

Fig. 1. Smart product

Minimize the complexity of choices: remove choices for end-users. Massive amounts of data can be generated with sensors, but most of it is thrown away or the user does not know what to do with it. Instead of showing unnecessary data or information, cleaning and filtering information and knowing what will create value for the user are essential to optimize all aspects of the unit under study. This will also help to reduce the number of choices for users, hence reducing the chances of error. A sensor can be used in its primary function, but when combined with complex algorithms or fused with other sensors, it can output different context information and functionalities. Adding more and more sensors to a product in order to create more value for the user will not work, especially when transforming an already existing product into a smart one. As the number of sensors increases, so does the difficulty of installing a system, maintaining it, and managing all of the data generated (Fig. 1). Sensor fusion is the combination of multiple sensors and software that mixes data to bring more elaborate applications and better performance to the product. Plenty of companies are working with this technology, and it has advanced significantly, but it is still in its early stages. According to Semico Research, the number of systems incorporating sensor fusion was predicted to grow to over 2.5B units in 2016. A classic example of sensor fusion is the Inertial Measurement Unit, or IMU. IMUs combine a gyroscope, accelerometer, and magnetometer with sensor-fusion algorithms and microcontrollers, creating an


accurate motion sensor. This is just one example of sensor fusion for motion sensing, but with rich software that can manage anything that can be sensed, the technology covers a far wider range. A small sketch of this kind of fusion follows.
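As a concrete illustration of the IMU fusion just described, the following sketch implements a basic complementary filter combining a gyroscope rate with an accelerometer tilt estimate. It is a textbook example under assumed sample values, not part of any product discussed here.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion step: trust the gyro short-term, the accelerometer long-term."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Hypothetical IMU samples: gyro rate in deg/s, accelerometer components in g.
angle = 0.0
samples = [(2.0, 0.10, 0.99), (1.5, 0.12, 0.99), (1.0, 0.15, 0.98)]
for gyro_rate, ax, az in samples:
    accel_angle = math.degrees(math.atan2(ax, az))  # tilt implied by gravity
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
print(round(angle, 3))
```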

4 Experimentation and Results
Sensor fusion is the best way to improve the functionality of individual sensors, and companies have been incorporating it into many different applications such as the automotive and digital-home markets. The increasing complexity of sensor fusion requires additional processing capabilities, always-on technology to support context awareness, software overhead, low-power processors, and smaller systems, and to top it off, everything has to come with cuts in power consumption. However, this new trend can bring many benefits besides cost: it can bring energy efficiency, reduce error rates in applications, deliver better performance, and reduce area. Data is a powerful and decisive asset. Connected devices can provide important information such as where the product is being used, which customers are using it at a given time, and how it is being used. These data can generate real-time readings to improve sales, forecasting, productivity, and safety, to learn about people's preferences, and much more. For that reason, more and more manufacturers are deploying devices such as controllers and sensors to measure operational processes or to learn more about their employees and customers with a relatively low-cost system [6].
Mojio. Mojio has been in the market since November 2014. It is an open-source platform developed for the automotive segment and it is the leading platform for connecting cars. It uses machine-to-machine cellular data exchange to send and receive data from and to the car in real time. It connects the car and digitizes all information from the already existing sensors onto the Internet through 3G connectivity. It can be plugged into any car manufactured after 1996, at the On-Board Diagnostics (OBD-II) port, enabling a myriad of apps to access the car's performance, location, and health. It has its own data connection, meaning that it does not rely on Bluetooth technology, and the user can access data even when separated from the vehicle. Applications have been developed to empower the driver in knowing driving costs, provide detailed diagnostics to help decision making at the shop, monitor the car remotely, and so on. Data is customized based on the Vehicle Identification Number (VIN) and transmitted to a connected device through the data cloud.
Sensor cost efficiency: Mojio has three sensors, but the device connects with hundreds of sensors that already exist in all cars. It has more than twenty apps developed and an expanding marketplace for apps. Applications vary between paid and unpaid, and are developed for web, Android, and iOS. Since a lot of useful information can be extracted with only three sensors, we can assume that plenty of smart algorithms are at work transforming all the analytic data retrieved from vehicles into useful information. Therefore, we can say that Mojio is extremely cost-efficient. Below is a list of some applications developed for Mojio and the sensors it contains.


Applications:
– Gauge: you are in control of your car's health and maintenance.
– Cloak: monitor the car's location.
– Trek: makes your time on the road smoother by finding parking spaces near the destination, real-time traffic, and the closest gas station.
– Carla: trip analytics and car monitoring.
– Onsurance: shop for better insurance rates based on your driving habits.
– Also: RepairLync, Dooing, Spot Angels, IF, Kiip, My Mojio, Easy Auto Log, Spot Wizard, FleetLeed, Urgent.ly, Smaritan, Goodcoins, Hustlebox and Ubeio.
Sensors:
– Integrated GPS
– Built-in accelerometer
– 3G connectivity
Critical sensors: According to Giraud [7], the modern car produces on average more than 20 gigabytes of data per hour from hundreds of sensors and millions of lines of code. Mojio's cellular hardware collects from hundreds of sensors through a car's on-board diagnostic port, but the device itself has only two critical sensors. So we can say that the Critical Ratio (CR) is very low, close to zero, considering that the number of critical sensors is two and the data being generated by those two is very small; in fact, according to Flitchard (2014), Mojio transmits only about 1 MB of data per month through its 3G connection.

5 Conclusion
The Internet of things can bring a lot of economic, social, and technical benefits, but it also raises essential challenges that could stand in the way. Issues like lack of privacy and security need to be controlled to ensure societal acceptance of IOT services; otherwise, they can undermine the user's confidence to fully enjoy the technology and result in smaller-than-expected adoption. With design-for-manufacturability and assembly principles, designers can improve the manufacturing process, reducing time and cost for the physical product. With an IOT product, manufacturers need to think not just about the technical details, but also about how it will fit into the broader context of the user's life. The main components of an IOT product are its smart components, such as connectivity and sensors, and these are the ones that need attention since they are usually the most expensive and complex.

References 1. Sunitha, G., et al.: Modeling of chaotic political optimizer for crop yield prediction. Intell. Autom. Soft Comput. 34(1), 423–437 (2022) 2. Nack, L., et al.: Acquisition and use of mobility habits for personal assistants. In: 2015 IEEE 18th International Conference on Intelligent Transportation Systems. IEEE (2015)


3. Butzin, B., Golatowski, F., Timmermann, D.: Microservices approach for the internet of things. In: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA). IEEE (2016) 4. Lee, G.M., et al.: Internet of things. In: Evolution of Telecommunication Services, pp. 257–282. Springer, Berlin, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41569-2_13 5. de Matos, E., Amaral, L.A., Hessel, F.: Context-aware systems: technologies and challenges in internet of everything environments. In: Beyond the Internet of Things, pp. 1–25. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-50758-3_1 6. Weyrich, M., Ebert, C.: Reference architectures for the internet of things. IEEE Softw. 33(1), 112–116 (2015) 7. Aazam, M., et al.: Cloud of things: integrating internet of things and cloud computing and the issues involved. In: Proceedings of the 2014 11th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan, 14–18 January 2014. IEEE (2014)

Optimization of the Performance and Emissions of a Dual-Fuel Diesel Engine Using LPG as the Fuel Hariprasad Tarigonda1(B) , R. Meenakshi Reddy2 , B. Anjaneyulu3 , G. Dharmalingam4 , D. Raghu rami Reddy1 , and K. L. Narasimhamu1 1 Mohan Babu University (Erstwhile: Sree Vidyanikethan Engineering College),

Tirupati 517102, India [email protected] 2 Department of Mechanical Engineering, G. Pulla Reddy Engineering College, Kurnool, India 3 Department of ME, Srinivasa Ramanujan Institute of Technology, Anantapur, India 4 Department of ME, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India

Abstract. In this work, a diesel engine was converted into a dual-fuel engine. Experiments were carried out on this engine using diesel as fuel along with various flow rates of LPG at various loads. The engine parameters were optimized using Taguchi Grey Relational Analysis (TGRA), Regression Analysis (RA) with the JAYA algorithm, and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) with the JAYA algorithm. In order to optimize the Brake Thermal Efficiency (BTE) and emissions (hydrocarbons (HC), carbon monoxide (CO), nitrogen oxides (NOx), and smoke) of the engine, the Taguchi grey relational analysis was employed over Injection Pressure (IP), LPG flow rate, and brake power of the engine. Regression analysis was used to develop the functional relationship between the various operational parameters of the engine; the model's R2 was 0.9818. This regression equation is considered as the fitness function in the JAYA algorithm, with maximization of the GRG as the objective function. The optimum solution was obtained for an injection pressure of 190 bar, an LPG flow rate of 1 LPM, and a BP of 1.13 kW using the JAYA algorithm with the regression model as the fitness function. An ANFIS-based model predicts diesel engine performance and emissions while using LPG as a supplementary fuel. The developed model forecasts the effect of brake power (BP), fuel injection pressure, and LPG flow rate on the anticipated outputs (BTE, HC, CO, NOx, and smoke). According to the results of the performance evaluation, the ANFIS-predicted data was consistent with the experimental data, with an overall correlation coefficient of 0.99415. The mean absolute percentage error (MAPE) was determined to be 1.4498 percent, while the root mean square error (RMSE) was judged to be within acceptable margins of accuracy. The developed model was able to consistently simulate actual engine parameters, even when operating in completely different experimental modes. Thus, it offers a comprehensive and robust predictive platform, enabling a given dual fuel to be employed in real-time optimization techniques. The optimum operating conditions are found to be 194.32 bar injection pressure, 1 LPM LPG flow rate, and 1.13 kW BP, with an optimum GRG of 0.835084.


Keywords: Diesel engine · Dual fuel engine · Taguchi grey relational analysis · Regression analysis · Adaptive neuro fuzzy inference systems · JAYA algorithm

1 Introduction

Dual-fuel engines are able to run on both diesel and LPG simultaneously. The gaseous fuel supplies part of the charge, while a pilot injection of diesel provides the ignition source. Dual-fuel engines can switch between running on cheaper LPG when it is available and on diesel when required [11]. The Taguchi method has been used to optimize diesel engine parameters with respect to fuel blends, injection pressure, and injection timing [1]. A hybrid ANFIS-Rao algorithm was used for surface roughness modelling and optimization in electrical discharge machining [2]. RSM-based optimization of butanol, diesel, and cotton oil blends was carried out to achieve optimal engine performance; by examining how the engine was operated, it was possible to determine the most productive fuel mixtures. The optimal fuels led to a decrease in brake torque, brake power, brake energy, and brake momentum, with an increase in BSFC, while demonstrating considerable potential for lowering NOx, HC, and CO emissions [3]. An artificial neural network (ANN) was used to predict the performance and emission characteristics of a single-cylinder diesel engine fueled with biodiesel-diesel blends, after which response surface methodology (RSM) was applied to obtain the best possible results [4]. Experiments were carried out on a diesel engine fueled with a blend of methyl esters of palm stearin (PS) oil and diesel, varying the injection pressures and compression ratios; the brake thermal efficiency was found to improve significantly with PSME40 at an injection pressure of 210 bar and a CR of 16.5 [5]. The effect of BP on the performance of diesel engines using biodiesel blends was investigated in [18]. Researchers used thermal barrier coatings to investigate the effect of BP on the performance and emissions of a low heat rejection engine [19]. Response surface methodology, artificial neural networks, Taguchi methods, ANOVA, and grey relational analysis have been used to optimize the process parameters of a variety of systems [6-10]. The response surface method (RSM) was used to improve engine performance and reduce exhaust pollutants; the ideal operating point was estimated as a D90F5B5 blend (5% fuel oil, 5% biodiesel, and 90% diesel) at a speed of 2026 rpm and a load of 46% [12]. An experimental study of engine performance and emissions using the Taguchi method and grey relational analysis optimized six input parameters with five levels each, determining the combined influence of the compression ratio, injection pressure, injection-nozzle geometry, additive, fuel fraction, and EGR on the BSFC and NOx of a CI engine fueled with blends of Mangifera Indica


biodiesel [13]. The Grey-Taguchi method has been used to optimize diesel engine performance together with the emission characteristics [14-17]. RSM, Taguchi-based grey relational analysis, artificial neural networks, and regression analysis are among the tools that have been used to optimize the performance and emission characteristics of diesel engines fueled by biodiesels [23-31]. From the published research, only a small number of works have been reported on optimizing the process parameters of a dual-fuel engine that uses LPG as a supplementary fuel along with diesel injection. The primary objective of this work is to investigate the effect of the LPG flow rate on diesel engine performance in dual-fuel operation, and to optimize the engine emission and performance characteristics using Taguchi Grey Relational Analysis (TGRA), Regression Analysis (RA) with the JAYA algorithm, and Adaptive Neuro Fuzzy Inference Systems (ANFIS) with the JAYA algorithm.

2 Experimental Investigation

Experiments were performed on a governor-controlled engine with a 16.5:1 compression ratio running at 1500 rpm. A Kirloskar 4-stroke single-cylinder diesel engine was upgraded with additional instrumentation to operate on LPG and diesel. Figure 1 illustrates the LPG-diesel dual-fuel engine experimental configuration. The engine was operated at varied LPG flow rates; at each flow rate, the steady-state engine speed, air and diesel consumption, cooling-water temperature, brake power, brake thermal efficiency, exhaust gas temperature, and emission levels were measured. Various loads and injection pressures were used, and the performance metrics were recorded at each load. The experiments were repeated with LPG-diesel and the various observations were noted down.

Fig. 1. Pictorial and schematic views of the engine setup used in the experimentation


3 Experimental Results

Table 1. Engine parameters and their levels used in the investigation

| Symbol | Process parameter | Units | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
|---|---|---|---|---|---|---|---|
| A | Injection Pressure | bar | 190 | 210 | 230 | - | - |
| B | LPG Flow Rate | LPM | 1 | 1.5 | 2 | - | - |
| C | Brake Power | kW | 1.13 | 1.68 | 2.23 | 2.78 | 3.33 |

The engine parameters and their levels used for optimization are shown in Table 1.

Table 2. Experimental results

| EXP | IPressure (bar) | LPG flow rate (LPM) | BP (kW) | BTE | HC | CO | NOx | Smoke |
|---|---|---|---|---|---|---|---|---|
| 1 | 190 | 1 | 1.13 | 6.95 | 19 | 0.06 | 221 | 12.7 |
| 2 | 190 | 1 | 1.68 | 8.82 | 28 | 0.18 | 305 | 22.5 |
| 3 | 190 | 1 | 2.23 | 10.68 | 32 | 0.23 | 395 | 27.8 |
| 4 | 190 | 1 | 2.78 | 12.67 | 36 | 0.33 | 481 | 32.9 |
| 5 | 190 | 1 | 3.33 | 13.6 | 41 | 0.47 | 624 | 60.8 |
| 6 | 190 | 1.5 | 1.13 | 6.85 | 22 | 0.18 | 232 | 8.7 |
| 7 | 190 | 1.5 | 1.68 | 9.85 | 31 | 0.35 | 335 | 12.2 |
| 8 | 190 | 1.5 | 2.23 | 11.71 | 35 | 0.4 | 402 | 21.2 |
| 9 | 190 | 1.5 | 2.78 | 13.42 | 36 | 0.57 | 511 | 30.7 |
| 10 | 190 | 1.5 | 3.33 | 15.36 | 39 | 0.81 | 639 | 56.9 |
| 11 | 190 | 2 | 1.13 | 6.53 | 20 | 0.16 | 242 | 15.2 |
| 12 | 190 | 2 | 1.68 | 9.41 | 25 | 0.24 | 342 | 18.1 |
| 13 | 190 | 2 | 2.23 | 11.66 | 28 | 0.28 | 423 | 20.1 |
| 14 | 190 | 2 | 2.78 | 12.92 | 32 | 0.32 | 529 | 22.3 |
| 15 | 190 | 2 | 3.33 | 14.14 | 37 | 0.37 | 642 | 35.3 |
| 16 | 210 | 1 | 1.13 | 8.89 | 18 | 0.16 | 258 | 22.8 |
| 17 | 210 | 1 | 1.68 | 12.16 | 33 | 0.34 | 383 | 35.1 |
| 18 | 210 | 1 | 2.23 | 14.7 | 37 | 0.42 | 450 | 41.5 |
| 19 | 210 | 1 | 2.78 | 17.11 | 41 | 0.49 | 550 | 43.3 |
| 20 | 210 | 1 | 3.33 | 19 | 57 | 0.51 | 661 | 49.6 |
| 21 | 210 | 1.5 | 1.13 | 10.72 | 17 | 0.16 | 268 | 10.9 |
| 22 | 210 | 1.5 | 1.68 | 13.36 | 31 | 0.29 | 395 | 21.8 |
| 23 | 210 | 1.5 | 2.23 | 15.69 | 38 | 0.31 | 466 | 25 |
| 24 | 210 | 1.5 | 2.78 | 18.5 | 45 | 0.34 | 571 | 28.1 |
| 25 | 210 | 1.5 | 3.33 | 20.86 | 59 | 0.48 | 683 | 36.2 |
| 26 | 210 | 2 | 1.13 | 9.96 | 21 | 0.16 | 255 | 9.7 |
| 27 | 210 | 2 | 1.68 | 13.34 | 29 | 0.28 | 380 | 12.9 |
| 28 | 210 | 2 | 2.23 | 16.32 | 32 | 0.35 | 459 | 15.1 |
| 29 | 210 | 2 | 2.78 | 18.99 | 35 | 0.41 | 561 | 17.8 |
| 30 | 210 | 2 | 3.33 | 21.62 | 51 | 0.51 | 659 | 28.9 |
| 31 | 230 | 1 | 1.13 | 8.66 | 26 | 0.32 | 261 | 12.2 |
| 32 | 230 | 1 | 1.68 | 11.44 | 35 | 0.46 | 381 | 28.1 |
| 33 | 230 | 1 | 2.23 | 13.22 | 40 | 0.54 | 457 | 38.8 |
| 34 | 230 | 1 | 2.78 | 15.23 | 48 | 0.61 | 558 | 58.5 |
| 35 | 230 | 1 | 3.33 | 16.72 | 56 | 0.8 | 657 | 70.3 |
| 36 | 230 | 1.5 | 1.13 | 8.58 | 22 | 0.49 | 266 | 22.9 |
| 37 | 230 | 1.5 | 1.68 | 11.18 | 33 | 0.62 | 390 | 29.8 |
| 38 | 230 | 1.5 | 2.23 | 13.97 | 38 | 0.69 | 461 | 40.1 |
| 39 | 230 | 1.5 | 2.78 | 16.86 | 49 | 0.76 | 569 | 47.3 |
| 40 | 230 | 1.5 | 3.33 | 18.86 | 60 | 0.91 | 711 | 57.7 |
| 41 | 230 | 2 | 1.13 | 9.31 | 23 | 0.43 | 272 | 12.6 |
| 42 | 230 | 2 | 1.68 | 13.34 | 30 | 0.67 | 398 | 25.8 |
| 43 | 230 | 2 | 2.23 | 15.96 | 36 | 0.75 | 470 | 43.3 |
| 44 | 230 | 2 | 2.78 | 18.52 | 45 | 0.82 | 575 | 52 |
| 45 | 230 | 2 | 3.33 | 20.46 | 58 | 0.94 | 725 | 69.8 |

The experimental results at the various levels of the process parameters are shown in Table 2.

4 Results and Discussions

4.1 Optimization Using Taguchi Grey Relational Analysis (TGRA)

Taguchi grey relational analysis has been used to obtain the mean effect plots of the dual-fuel engine in order to optimize the Injection Pressure (IP), LPG flow rate, and brake power for maximum BTE and minimum emissions.


4.1.1 Effect of Parameters on BTE


Fig. 2. Effect of input parameters on BTE

Figure 2 depicts the effect of the parameters on BTE. From Fig. 2, it is noticed that BTE increases with rising injection pressure (up to 210 bar), LPG flow rate, and BP. This trend is attributed to the fact that, at greater LPG flow rates, the faster flame speed of LPG leads to improved burning of the fuel. At higher injection pressures the BTE decreases again, due to excessive accumulation of fuel droplets. From the graph, the maximum BTE occurs at 210 bar, 2.0 LPM LPG flow rate, and 3.33 kW BP.

4.1.2 Effect of Parameters on HC Emissions

Figure 3 shows the influence of the input parameters on HC emissions. HC emission levels increase with increasing BP and IP, whereas they decrease slightly at high LPG flow rates. At high injection pressures and BPs, the fuel spray strikes the cylinder walls at higher speeds and gets cooled by the engine coolant; this results in incomplete combustion, which in turn results in high HC emissions. At high LPG flow rates, the HC emissions were found to be lower due to better combustion. From the graph, the minimum HC occurs at 190 bar, 2.0 LPM LPG flow rate, and 1.13 kW BP.


Fig. 3. Influence of input parameters on HC emissions

4.1.3 Effect of Parameters on CO Emissions


Fig. 4. Influence of input parameters on CO emissions



Figure 4 represents the effect of the input parameters on CO emissions. CO emission levels increase with increasing BP and IP, whereas they decrease slightly at high LPG flow rates. At high injection pressures and BPs, the fuel spray strikes the cylinder walls at higher speeds and gets cooled by the engine coolant; this results in incomplete combustion, which in turn results in high CO emissions. At high LPG flow rates, the CO emissions were found to be lower due to better combustion. From the graph, the minimum CO occurs at 190 bar, 1.0 LPM LPG flow rate, and 1.13 kW BP.

4.1.4 Effect of Parameters on NOx Emissions


Fig. 5. Influence of input parameters on NOx emissions

Figure 5 illustrates the effect of the input parameters on NOx emissions. The higher combustion temperatures that occur at higher loads are responsible for the increase in NOx emissions with load. On the other hand, the NOx concentration drops as the LPG fraction of the blended fuel increases. The primary reason is that the rise in the heat of evaporation of LPG-diesel blended fuels with increasing LPG mass fraction reduces the temperature of the cylinder gases through fuel evaporation, which leads to a reduction in NOx emissions. According to the graph, the lowest NOx is produced at 190 bar, 1.0 LPM LPG flow rate, and 1.13 kW BP.


4.1.5 Effect of Parameters on Smoke


Fig. 6. Influence of input parameters on smoke emissions

Figure 6 shows the influence of the input parameters on smoke emissions. Smoke levels increase with increasing BP and IP, whereas they decrease at high LPG flow rates. At high injection pressures and BPs, the fuel spray strikes the cylinder walls at higher speeds and gets cooled by the engine coolant; this results in incomplete combustion, which in turn results in high smoke. At high LPG flow rates, the smoke emissions were found to be lower due to better combustion. An increased LPG mass fraction reduces smoke: LPG has a lower boiling point, evaporates quickly, and vaporizes rapidly as a free jet due to the pressure drop. This flash-boiling injection may increase gas perturbation with fluctuating pressure in the spray field, promoting the spray process; the spray characteristics improve and the blended-fuel droplets are small. Longer droplet retention reduces particle pollution from fuel splitting, and the increased combustion speed and shorter burn duration reduce smoke output. From the graph, the minimum smoke occurs at 190 bar, 2.0 LPM LPG flow rate, and 1.13 kW BP.

4.1.6 Single Objective Optimization Model Using Grey Relational Analysis

Normalized response values, the grey relational coefficient (GRC), and the grey relational grade (GRG) are calculated for BTE, HC, CO, NOx, and smoke. The multi-objective optimization model is transformed into a single-objective optimization model by using grey relational analysis [20-23] (Table 3).
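As a minimal illustration of this transformation, the sketch below recomputes the normalization, GRC, and GRG of Table 3 in Python. The distinguishing coefficient ζ = 0.5 is an assumption, although the coefficients tabulated below are consistent with it (GRC = 0.5/(Δ + 0.5), where Δ = 1 minus the normalized value).

```python
# Minimal sketch of the grey relational computation behind Table 3.
# Assumes min-max normalization and distinguishing coefficient zeta = 0.5;
# BTE uses the larger-the-better form, the emissions the smaller-the-better form.
import numpy as np

def normalize(col, larger_is_better):
    col = np.asarray(col, dtype=float)
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if larger_is_better else (hi - col) / (hi - lo)

def grey_relational(responses, zeta=0.5):
    """responses: list of (values, larger_is_better), e.g. BTE, HC, CO, NOx, Smoke."""
    norm = np.column_stack([normalize(v, lib) for v, lib in responses])
    delta = 1.0 - norm                                   # deviation sequences
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    grg = grc.mean(axis=1)                               # equal response weights
    rank = (-grg).argsort().argsort() + 1                # rank 1 = best experiment
    return norm, grc, grg, rank
```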


Table 3. Normalized response values, grey relational coefficients, GRG, and rank (columns 2-6: normalized values; columns 7-11: grey relational coefficients)

| EXP | BTE | HC | CO | NOx | Smoke | BTE | HC | CO | NOx | Smoke | GRG | Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.0278 | 0.9535 | 1.0000 | 1.0000 | 0.9351 | 0.3396 | 0.9149 | 1.0000 | 1.0000 | 0.8851 | 0.827917 | 1 |
| 2 | 0.1518 | 0.7442 | 0.8636 | 0.8333 | 0.7760 | 0.3709 | 0.6615 | 0.7857 | 0.7500 | 0.6906 | 0.651738 | 11 |
| 3 | 0.2750 | 0.6512 | 0.8068 | 0.6548 | 0.6899 | 0.4082 | 0.5890 | 0.7213 | 0.5915 | 0.6172 | 0.585461 | 18 |
| 4 | 0.4069 | 0.5581 | 0.6932 | 0.4841 | 0.6071 | 0.4574 | 0.5309 | 0.6197 | 0.4922 | 0.5600 | 0.532036 | 27 |
| 5 | 0.4685 | 0.4419 | 0.5341 | 0.2004 | 0.1542 | 0.4847 | 0.4725 | 0.5176 | 0.3847 | 0.3715 | 0.446236 | 40 |
| 6 | 0.0212 | 0.8837 | 0.8636 | 0.9782 | 1.0000 | 0.3381 | 0.8113 | 0.7857 | 0.9582 | 1.0000 | 0.778665 | 4 |
| 7 | 0.2200 | 0.6744 | 0.6705 | 0.7738 | 0.9432 | 0.3906 | 0.6056 | 0.6027 | 0.6885 | 0.8980 | 0.637097 | 13 |
| 8 | 0.3433 | 0.5814 | 0.6136 | 0.6409 | 0.7971 | 0.4323 | 0.5443 | 0.5641 | 0.5820 | 0.7113 | 0.566793 | 21 |
| 9 | 0.4566 | 0.5581 | 0.4205 | 0.4246 | 0.6429 | 0.4792 | 0.5309 | 0.4632 | 0.4649 | 0.5833 | 0.5043 | 31 |
| 10 | 0.5852 | 0.4884 | 0.1477 | 0.1706 | 0.2175 | 0.5465 | 0.4943 | 0.3697 | 0.3761 | 0.3899 | 0.435307 | 43 |
| 11 | 0.0000 | 0.9302 | 0.8864 | 0.9583 | 0.8945 | 0.3333 | 0.8776 | 0.8148 | 0.9231 | 0.8257 | 0.754903 | 5 |
| 12 | 0.1909 | 0.8140 | 0.7955 | 0.7599 | 0.8474 | 0.3819 | 0.7288 | 0.7097 | 0.6756 | 0.7662 | 0.652438 | 10 |
| 13 | 0.3400 | 0.7442 | 0.7500 | 0.5992 | 0.8149 | 0.4310 | 0.6615 | 0.6667 | 0.5551 | 0.7299 | 0.60883 | 15 |
| 14 | 0.4235 | 0.6512 | 0.7045 | 0.3889 | 0.7792 | 0.4645 | 0.5890 | 0.6286 | 0.4500 | 0.6937 | 0.565151 | 22 |
| 15 | 0.5043 | 0.5349 | 0.6477 | 0.1647 | 0.5682 | 0.5022 | 0.5181 | 0.5867 | 0.3744 | 0.5366 | 0.503586 | 32 |
| 16 | 0.1564 | 0.9767 | 0.8864 | 0.9266 | 0.7711 | 0.3721 | 0.9556 | 0.8148 | 0.8720 | 0.6860 | 0.740089 | 6 |
| 17 | 0.3731 | 0.6279 | 0.6818 | 0.6786 | 0.5714 | 0.4437 | 0.5733 | 0.6111 | 0.6087 | 0.5385 | 0.555059 | 24 |
| 18 | 0.5414 | 0.5349 | 0.5909 | 0.5456 | 0.4675 | 0.5216 | 0.5181 | 0.5500 | 0.5239 | 0.4843 | 0.519572 | 30 |
| 19 | 0.7011 | 0.4419 | 0.5114 | 0.3472 | 0.4383 | 0.6259 | 0.4725 | 0.5057 | 0.4337 | 0.4709 | 0.501768 | 33 |
| 20 | 0.8264 | 0.0698 | 0.4886 | 0.1270 | 0.3360 | 0.7423 | 0.3496 | 0.4944 | 0.3642 | 0.4296 | 0.475992 | 37 |
| 21 | 0.2777 | 1.0000 | 0.8864 | 0.9067 | 0.9643 | 0.4091 | 1.0000 | 0.8148 | 0.8428 | 0.9333 | 0.800002 | 2 |
| 22 | 0.4526 | 0.6744 | 0.7386 | 0.6548 | 0.7873 | 0.4774 | 0.6056 | 0.6567 | 0.5915 | 0.7016 | 0.606575 | 17 |
| 23 | 0.6070 | 0.5116 | 0.7159 | 0.5139 | 0.7354 | 0.5599 | 0.5059 | 0.6377 | 0.5070 | 0.6539 | 0.572892 | 19 |
| 24 | 0.7932 | 0.3488 | 0.6818 | 0.3056 | 0.6851 | 0.7075 | 0.4343 | 0.6111 | 0.4186 | 0.6135 | 0.557012 | 23 |
| 25 | 0.9496 | 0.0233 | 0.5227 | 0.0833 | 0.5536 | 0.9085 | 0.3386 | 0.5116 | 0.3529 | 0.5283 | 0.527989 | 28 |
| 26 | 0.2273 | 0.9070 | 0.8864 | 0.9325 | 0.9838 | 0.3929 | 0.8431 | 0.8148 | 0.8811 | 0.9686 | 0.780098 | 3 |
| 27 | 0.4513 | 0.7209 | 0.7500 | 0.6845 | 0.9318 | 0.4768 | 0.6418 | 0.6667 | 0.6131 | 0.8800 | 0.655675 | 9 |
| 28 | 0.6488 | 0.6512 | 0.6705 | 0.5278 | 0.8961 | 0.5874 | 0.5890 | 0.6027 | 0.5143 | 0.8280 | 0.624282 | 14 |
| 29 | 0.8257 | 0.5814 | 0.6023 | 0.3254 | 0.8523 | 0.7415 | 0.5443 | 0.5570 | 0.4257 | 0.7719 | 0.608079 | 16 |
| 30 | 1.0000 | 0.2093 | 0.4886 | 0.1310 | 0.6721 | 1.0000 | 0.3874 | 0.4944 | 0.3652 | 0.6039 | 0.570182 | 20 |
| 31 | 0.1412 | 0.7907 | 0.7045 | 0.9206 | 0.9432 | 0.3680 | 0.7049 | 0.6286 | 0.8630 | 0.8980 | 0.692484 | 7 |
| 32 | 0.3254 | 0.5814 | 0.5455 | 0.6825 | 0.6851 | 0.4257 | 0.5443 | 0.5238 | 0.6117 | 0.6135 | 0.543796 | 26 |
| 33 | 0.4433 | 0.4651 | 0.4545 | 0.5317 | 0.5114 | 0.4732 | 0.4831 | 0.4783 | 0.5164 | 0.5057 | 0.491347 | 35 |
| 34 | 0.5765 | 0.2791 | 0.3750 | 0.3313 | 0.1916 | 0.5414 | 0.4095 | 0.4444 | 0.4278 | 0.3821 | 0.441078 | 42 |
| 35 | 0.6753 | 0.0930 | 0.1591 | 0.1349 | 0.0000 | 0.6063 | 0.3554 | 0.3729 | 0.3663 | 0.3333 | 0.406827 | 45 |
| 36 | 0.1359 | 0.8837 | 0.5114 | 0.9107 | 0.7695 | 0.3665 | 0.8113 | 0.5057 | 0.8485 | 0.6844 | 0.643305 | 12 |
| 37 | 0.3082 | 0.6279 | 0.3636 | 0.6647 | 0.6575 | 0.4195 | 0.5733 | 0.4400 | 0.5986 | 0.5934 | 0.524975 | 29 |
| 38 | 0.4930 | 0.5116 | 0.2841 | 0.5238 | 0.4903 | 0.4965 | 0.5059 | 0.4112 | 0.5122 | 0.4952 | 0.484203 | 36 |
| 39 | 0.6846 | 0.2558 | 0.2045 | 0.3095 | 0.3734 | 0.6132 | 0.4019 | 0.3860 | 0.4200 | 0.4438 | 0.452961 | 39 |
| 40 | 0.8171 | 0.0000 | 0.0341 | 0.0278 | 0.2045 | 0.7322 | 0.3333 | 0.3411 | 0.3396 | 0.3860 | 0.426435 | 44 |
| 41 | 0.1842 | 0.8605 | 0.5795 | 0.8988 | 0.9367 | 0.3800 | 0.7818 | 0.5432 | 0.8317 | 0.8876 | 0.684865 | 8 |
| 42 | 0.4513 | 0.6977 | 0.3068 | 0.6488 | 0.7224 | 0.4768 | 0.6232 | 0.4190 | 0.5874 | 0.6430 | 0.549886 | 25 |
| 43 | 0.6249 | 0.5581 | 0.2159 | 0.5060 | 0.4383 | 0.5714 | 0.5309 | 0.3894 | 0.5030 | 0.4709 | 0.493112 | 34 |
| 44 | 0.7946 | 0.3488 | 0.1364 | 0.2976 | 0.2971 | 0.7088 | 0.4343 | 0.3667 | 0.4158 | 0.4157 | 0.468258 | 38 |
| 45 | 0.9231 | 0.0465 | 0.0000 | 0.0000 | 0.0081 | 0.8667 | 0.3440 | 0.3333 | 0.3333 | 0.3351 | 0.442511 | 41 |

The optimum parameters were found to be (190, 1, 1.13).

4.2 Optimization Using Regression

A mathematical model was created with R2 equal to 0.9818:

GRG = 16.2024 − 0.13349*IP − 9.67779*LPG − 3.47561*BP + 0.00030629*IP^2 + 0.0867553*IP*LPG + 0.022645*IP*BP + 0.304926*LPG^2 + 0.0881965*LPG*BP + 0.34108*BP^2 − 0.000196763*IP^2*LPG − 5.70916e−05*IP^2*BP − 0.00133566*LPG^2*IP + 0.00914521*LPG^2*BP + 0.000484606*BP^2*IP − 0.0189296*BP^2*LPG − 0.05421*BP^3

where IP is the injection pressure, LPG the LPG flow rate, and BP the brake power. This regression equation is taken as the fitness function in the JAYA algorithm, with maximization of the GRG as the objective function. Figure 7 shows the convergence curve from the JAYA algorithm for the GRG. Using the regression model as the fitness function in the JAYA algorithm, the optimum solution is obtained at an injection pressure of 190 bar, an LPG flow rate of 1 LPM, and a brake power of 1.13 kW.
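A minimal Python sketch of this regression-JAYA search is given below; the regression coefficients and the parameter bounds come from Sect. 4.2 and Table 1, while the population size, iteration count, and random seed are illustrative assumptions.

```python
# Minimal sketch of the JAYA search over the regression GRG model above.
import numpy as np

def grg(ip, lpg, bp):
    # Regression model for GRG reported in Sect. 4.2 (R^2 = 0.9818).
    return (16.2024 - 0.13349*ip - 9.67779*lpg - 3.47561*bp
            + 0.00030629*ip*ip + 0.0867553*ip*lpg + 0.022645*ip*bp
            + 0.304926*lpg*lpg + 0.0881965*lpg*bp + 0.34108*bp*bp
            - 0.000196763*ip*ip*lpg - 5.70916e-05*ip*ip*bp
            - 0.00133566*lpg*lpg*ip + 0.00914521*lpg*lpg*bp
            + 0.000484606*bp*bp*ip - 0.0189296*bp*bp*lpg
            - 0.05421*bp*bp*bp)

def jaya(fitness, lower, upper, pop=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = lower + rng.random((pop, lower.size)) * (upper - lower)
    f = np.array([fitness(*c) for c in x])
    for _ in range(iters):
        best, worst = x[f.argmax()], x[f.argmin()]      # maximizing GRG
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        cand = x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x))
        cand = np.clip(cand, lower, upper)              # keep within Table 1 bounds
        fc = np.array([fitness(*c) for c in cand])
        improved = fc > f                               # greedy acceptance
        x[improved], f[improved] = cand[improved], fc[improved]
    return x[f.argmax()], f.max()

# Bounds from Table 1: IP 190-230 bar, LPG 1-2 LPM, BP 1.13-3.33 kW.
sol, val = jaya(grg, [190, 1, 1.13], [230, 2, 3.33])
print(sol, val)  # should converge near the reported optimum (190 bar, 1 LPM, 1.13 kW)
```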

This regression equation is considered as fitness function in JAYA algorithm with maximization of GRG as objective function. Figure 7 shows Convergence curve from JAYA algorithm for the GRG. Regression model using JAYA algorithm as fitness function, optimum solution is obtained at Injection Pressure of 190.000000 bar, LPG Flow Rate of 1.000000 and Brake Power of 1.130000. 4.3 Optimization Using Anfis Model The ANFIS, combines the self-learning capabilities of ANN with the reasoning capacities of FIS. ANFIS is an artificial intelligence model of the Takagi – Sugeno type. It employs the linear functions that follow as rule consequences in order to solve complex and non-linear problems. Rule 1: If x is K1 , and y is L1 then z1 = p1 x + q1 y + r 1 . Rule 2: If x is K2 , and y is L2 then z2 = p2 x + q2 zy + r 1 .



Fig. 7. Convergence curve from the JAYA algorithm for the GRG

4.3 Optimization Using the ANFIS Model

ANFIS combines the self-learning capability of an ANN with the reasoning capacity of a FIS. It is an artificial intelligence model of the Takagi-Sugeno type and employs linear functions as rule consequents in order to solve complex, non-linear problems:

Rule 1: If x is K1 and y is L1, then z1 = p1 x + q1 y + r1.
Rule 2: If x is K2 and y is L2, then z2 = p2 x + q2 y + r2.

Here x and y are the input parameters, z1 and z2 are the outputs defined by Rule 1 and Rule 2, and p1, p2, q1, q2, r1, and r2 are consequent parameters determined through the learning process. The ANFIS architecture consists of five layers: the fuzzification layer, product layer, normalized layer, defuzzification layer, and output layer.

Fuzzification Layer (Layer 1): The adaptive nodes of this layer convert the input parameters into fuzzy values using

Oi = μMi(x), i = 1, 2;  Oj = μMj(y), j = 1, 2

where Oi and Oj are the node outputs and μMi(x), μMj(y) are the membership functions. Any suitable parameterized membership function can be used for M. For the generalized bell function,

μMi(x) = 1 / (1 + |(x − ci)/ai|^(2bi))

The premise parameters ai, bi, and ci must be tuned by the learning algorithm.

Product Layer (Layer 2): The nodes of this layer calculate the firing strength of each rule as

ωi = μMi(x) · μMi(y), i = 1, 2

where ωi is the firing strength of the i-th rule.


Normalized Layer (Layer 3): The normalized firing strength of the i-th node is determined as

ω̄i = ωi / (ω1 + ω2), i = 1, 2

Defuzzification Layer (Layer 4): Defuzzification converts a fuzzy value into a crisp prediction of the parameter value. Every node in this layer is an adaptive node with the node function

ω̄i fi = ω̄i (pi x + qi y + ri)

where pi, qi, and ri are the consequent parameters, which are also fine-tuned by the learning algorithms.

Output Layer (Layer 5): A single node in this layer computes the output value by summing all the input signals:

Output = Σi ω̄i fi = (Σi ωi fi) / (Σi ωi)

As a result, this layer transforms the fuzzy result of each rule into a crisp output. For training the parameters of the ANFIS prediction model, a combination of the gradient descent method and the least squares method is used. In the forward pass, functional signals propagate up to the defuzzification layer, and the least squares method then adjusts the consequent parameters to minimize the error. In the backward pass, gradient descent is used to update the premise parameters (Fig. 8).
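The five layers above can be summarized in a short forward-pass sketch; the two-rule structure and bell membership function follow the equations in this section, while all numeric parameter values are illustrative assumptions rather than the trained ones.

```python
# Minimal sketch of the five-layer Sugeno ANFIS forward pass described above.
import numpy as np

def bell(x, a, b, c):
    # Layer 1: generalized bell membership, mu(x) = 1 / (1 + |(x-c)/a|^(2b))
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    mu_x = [bell(x, *p) for p in premise["x"]]     # memberships K1, K2
    mu_y = [bell(y, *p) for p in premise["y"]]     # memberships L1, L2
    w = np.array([mu_x[0] * mu_y[0],               # Layer 2: rule firing strengths
                  mu_x[1] * mu_y[1]])
    w_bar = w / w.sum()                            # Layer 3: normalization
    f = np.array([p * x + q * y + r                # Layer 4: rule consequents
                  for p, q, r in consequent])
    return float((w_bar * f).sum())                # Layer 5: weighted sum output

# Illustrative (untrained) premise (a, b, c) and consequent (p, q, r) parameters.
premise = {"x": [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)],
           "y": [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]}
consequent = [(0.5, 0.3, 0.1), (0.2, 0.7, 0.4)]
print(anfis_forward(1.0, 1.5, premise, consequent))
```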

Fig. 8. Adaptive network-based fuzzy inference system architecture


In this research, the ANFIS modelling is done in MATLAB. The ANFIS training parameters are presented in Table 4. The input membership function is a triangular membership function (trimf), with two input membership functions for each parameter. Hybrid training functions with constant output membership functions were found to be the most effective at reducing prediction errors (Table 4).

Table 4. ANFIS training parameters

An ANFIS prediction model is developed for predicting the GRG using the injection pressure, LPG flow rate, and brake power as the input parameters (Fig. 9).

Fig. 9. Network of the ANFIS model showing the input parameters and GRG

Figure 10 shows the fuzzy rule set of the ANFIS model used in the optimization process.


Fig. 10. Fuzzy rule set of the ANFIS model

Figure 11 shows the variation of GRG with injection pressure, LPG flow rate, and BP. From Fig. 11, the maximum and minimum GRG values can be found at various combinations of the input process parameters. Figure 12 shows the comparison of the ANFIS GRG and the actual GRG values, and Fig. 13 shows the regression graph for the actual GRG and the ANFIS GRG. The graphs show that the ANFIS model GRG values and the actual GRG values are similar.


a) Surface plot for GRG with LPG flow rate and Injection pressure as inputs. b) Surface plot for GRG with Injection Pressure and LPG flow rate as inputs.
c) Surface plot for GRG with Brake Power and Injection Pressure as inputs. d) Surface plot for GRG with Injection Pressure and Brake Power as inputs.
e) Surface plot for GRG with Brake Power and LPG flow rate as inputs. f) Surface plot for GRG with LPG flow rate and Brake Power as inputs.

Fig. 11. Effect of input parameters on the GRG value

4.4 JAYA-ANFIS Optimization Model

The proposed methodology, with the ANFIS prediction model as the fitness function in the JAYA optimization, is shown in Fig. 14.



Fig. 12. Comparison of actual GRG and ANFIS GRG values


Fig. 13. Regression graph for actual GRG and ANFIS GRG

Convergence graph from the JAYA-ANFIS model: Fig. 15 shows the convergence graph from the JAYA-ANFIS model. Using the ANFIS model as the fitness function in the JAYA algorithm, the optimum solution is obtained at an injection pressure of 194.324643 bar, an LPG flow rate of 1 LPM, and a brake power of 1.13 kW, with the best fitness value of 0.835084 (Table 5).


Fig. 14. Methodology followed in the present work


Fig. 15. Convergence graph from JAYA – ANFIS model


Table 5. Comparison of model error (regression and ANFIS prediction models)

Here Pi is the forecast value from the model, Ei is the experimental value, Ē is the average of the observed values, and n is the number of experiments.

Table 6. Comparison of model efficiency (regression and ANFIS prediction models)

| Efficiency | Mathematical model of efficiency | Regression model | ANFIS model |
|---|---|---|---|
| Correlation coefficient | - | 0.9909 | 0.99415 |
| Nash-Sutcliffe efficiency coefficient (NSE) | NSE = 1 − Σᵢ (Ei − Pi)² / Σᵢ (Ei − Ē)² | 0.981835 | 0.988325 |

Table 6 shows the comparison of the regression model efficiency and the ANFIS model efficiency. It shows that the correlation coefficient and NSE of the two models are almost identical.
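For reference, the sketch below computes the four measures compared in Tables 5 and 6 (RMSE, MAPE, correlation coefficient, and NSE) from experimental values Ei and predictions Pi; the function name is a placeholder.

```python
# Minimal sketch of the model error and efficiency measures of Tables 5 and 6.
import numpy as np

def model_scores(E, P):
    E, P = np.asarray(E, float), np.asarray(P, float)
    rmse = np.sqrt(np.mean((E - P) ** 2))                    # root mean square error
    mape = 100.0 * np.mean(np.abs((E - P) / E))              # mean abs. % error
    r = np.corrcoef(E, P)[0, 1]                              # correlation coefficient
    nse = 1.0 - np.sum((E - P) ** 2) / np.sum((E - E.mean()) ** 2)  # Nash-Sutcliffe
    return {"RMSE": rmse, "MAPE%": mape, "R": r, "NSE": nse}
```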


Table 7. Comparison of the various models' results

| Parameter | Taguchi grey relational analysis | Regression-JAYA | ANFIS-JAYA |
|---|---|---|---|
| Injection pressure (bar) | 190 | 190 | 194.324643 |
| LPG flow rate (LPM) | 1 | 1 | 1 |
| Brake power (kW) | 1.13 | 1.13 | 1.13 |
| Optimum GRG | 0.827917 | 0.827917 | 0.835084 |

Table 7 shows the comparison of the results of the Taguchi grey relational analysis, the regression model with the JAYA algorithm, and the ANFIS-JAYA algorithm. All the models' results are similar.

5 Conclusions

The diesel engine was converted into a dual-fuel engine, with LPG supplemented to the intake air at a pressure above atmospheric. Experiments were conducted on the engine to investigate its emission and performance characteristics. The performance and emission characteristics of the dual-fuel engine were optimized using Taguchi Grey Relational Analysis (TGRA), Regression Analysis (RA) with the JAYA algorithm, and Adaptive Neuro Fuzzy Inference Systems (ANFIS) with the JAYA algorithm. A prediction model using ANFIS has been developed and gives accurate results, with a regression coefficient of 0.99415. The optimum performance and emission parameters were obtained at 190 bar IP, 1 LPM LPG flow rate, and 1.13 kW BP using Taguchi grey relational analysis, with a best fitness value of 0.827917. Using the JAYA algorithm with the regression model as the fitness function, the optimum solution is obtained at 190 bar IP, 1 LPM LPG flow rate, and 1.13 kW BP, with a best fitness value of 0.827917. Finally, using the JAYA algorithm with the ANFIS model as the fitness function, the optimum solution is obtained at 194.324643 bar IP, 1 LPM LPG flow rate, and 1.13 kW BP, with a best fitness value of 0.835084. The results show that all the models predict similar optima; the ANFIS model with the JAYA algorithm predicts the optimum performance at an injection pressure of 194.324643 bar.

References 1. Ansari, N.A., Sharma, A., Singh, Y.: Performance and emission analysis of a diesel engine implementing polanga biodiesel and optimization using Taguchi method. Process Saf. Environ. Prot. 120, 146–154 (2018) 2. Agarwal, N., Shrivastava, N., Pradhan, M.K.: Hybrid ANFIS-rao algorithm for surface roughness modelling and optimization in electrical discharge machining. Adv. Produc. Engineer. Manag. 16(2), 145–160 (2021) 3. Atmanli, A., Ileri, E., Yilmaz, N.: Optimization of diesel–butanol–vegetable oil blend ratios based on engine operating parameters. Energy 96, 569–580 (2016)


4. Aydın, M., Uslu, S., Bahattin Çelik, M.:Performance and emission prediction of a compression ignition engine fueled with biodiesel-diesel blends: a combined application of ANN and RSM based optimization. Fuel 269, 117472 (2020) 5. Babu, A.R., Prasad Rao, G.A., Hari Prasad, T.: Experimental investigations on a variable compression ratio (VCR) CIDI engine with a blend of methyl esters palm stearin-diesel for performance and emissions. Int. J. Ambient Energy 38(4), 420–427 (2017) 6. Bhowmik, S., Panua, R., Debroy, D., Paul, A.: Artificial neural network prediction of diesel engine performance and emission fueled with diesel–kerosene–ethanol blends: a fuzzy-based optimization. J. Energy Resour. Technol 139(4), 042201 (2017). https://doi.org/10.1115/1. 4035886 7. Canbolat, A.S., Bademlioglu, A.H., Arslanoglu, N., Kaynakli, O.: Performance optimization of absorption refrigeration systems using Taguchi, ANOVA and Grey relational Analysis methods. J. Cleaner Prod. 229, 874–885 (2019) 8. Dey, S., Deb, M., Das, P.K.: Application of fuzzy-assisted grey Taguchi approach for engine parameters optimization on performance-emission of a CI engine. Energy Sources, Part A: Recovery, Utilization, Environ. Eff. 1–17 (2019) 9. Elkelawy, M., et al.: Maximization of biodiesel production from sunflower and soybean oils and prediction of diesel engine performance and emission characteristics through response surface methodology. Fuel 266, 117072 (2020) 10. Gul, M., et al.: Grey-Taguchi and ANN based optimization of a better performing lowemission diesel engine fueled with biodiesel. Energy Sources, Part A: Recovery, Utilization, and Environmental Effects 44(1), 1019–1032 (2022) 11. Hariprasad, T.: Effect of injection pressure on performance of dual fuel diesel engine. No. 2013-01-2887. SAE Technical Paper (2013) 12. Hassan Pour, A., Ardebili, S.M.S., Sheikhdavoodi, M.J.: Multi-objective optimization of diesel engine performance and emissions fueled with diesel-biodiesel-fusel oil blends using response surface method. Env. Sci. Pollut. Res. 25(35), 35429–35439 (2018) 13. Jadhav, S.: Multi-objective optimization of performance (BSFC) and emission (NOx) characteristics for CI engine operated on Mangifera Indica methyl ester using Taguchi grey relational analysis. No. 2016-01-0298. SAE Technical Paper (2016) 14. Jena, S.P., Mahapatra, S., Acharya, S.K.: Optimization of performance and emission characteristics of a diesel engine fueled with Karanja biodiesel using Grey-Taguchi method. Mater. Today: Proc. 41, 180–185 (2021) 15. Karnwal, A., Hasan, M.M., Kumar, N., Siddiquee, A.N., Khan, Z.A.: Multi-response optimization of diesel engine performance parameters using thumba biodiesel-diesel blends by applying the Taguchi method and grey relational analysis. Int. J. Automot. Technol. 12(4), 599–610 (2011) 16. Moulali, P., Tarigonda, H., Prasad, B.D.: Optimization of performance characteristics of homogeneous charge compression ignition engine with biodiesel using artificial neural network (ANN) and response surface methodology (RSM). J. Institut. Eng. (India): Series C 103(4), 875–888 (2022) 17. Muqeem, M., Sherwani, A.F., Ahmad, M., Khan, Z.A.: Taguchi based grey relational analysis for multi response optimisation of diesel engine performance and emission parameters. Int. J. Heavy Veh. Syst. 27(4), 441–460 (2020) 18. Prasad, T.H., Reddy, K.H.C., Rao, M.M.: Combustion, performance and emission analysis of diesel engine fuelled with methyl esters of Pongamia oil. Int. J. Oil, Gas Coal Technol. 3(4), 374–384 (2010) 19. 
Reddy, G.V., Govindha Rasu, N., Hari Prasad, T.: Analysis of performance and emission characteristics of TBC coated low heat rejection engine. Int. J. Ambient Energy 42(7), 808–815 (2021)


20. Sivaiah, P., Ajay kumar G.V., Lakshmi Narasimhamu, K., Siva Balaji, N.: Performance improvement of turning operation during processing of AISI 304 with novel textured tools under minimum quantity lubrication using hybrid optimization technique. Mater. Manuf. Process. 37(6), 693–700 (2022) 21. Sivaiah, P., Chakradhar, D.: Multi performance characteristics optimization in cryogenic turning of 17–4 PH stainless steel using Taguchi coupled grey relational analysis. Adv. Mater. Process. Technol. 4(3), 431–447 (2018) 22. Sivaramakrishnan, K., Ravikumar, P.: Optimization of operational parameters on performance and emissions of a diesel engine using biodiesel. Int. J. Environ. Sci. Technol. 11(4), 949–958 (2013) 23. Senthilkumar, S., et al.: Optimization of transformer oil blended with natural ester oils using Taguchi-based grey relational analysis. Fuel 288, 119629 (2021) 24. Singh, Y., Sharma, A., Tiwari, S., Singla, A.: Optimization of diesel engine performance and emission parameters employing cassia Tora methyl esters-response surface methodology approach. Energy 168, 909–918 (2019) 25. Shrivastava, K., Thipse, S.S., Patil, I.D.: Optimization of diesel engine performance and emission parameters of Karanja biodiesel-ethanol-diesel blends at optimized operating conditions. Fuel 293, 120451 (2021) 26. Uslu, S.: Optimization of diesel engine operating parameters fueled with palm oil-diesel blend: Comparative evaluation between response surface methodology (RSM) and artificial neural network (ANN). Fuel 276, 117990 (2020) 27. Vellaiyan, S., Subbiah, A., Chockalingam, P.: Multi-response optimization to obtain better performance and emission level in a diesel engine fueled with water-biodiesel emulsion fuel and nanoadditive. Environ. Sci. Pollut. Res. 26(5), 4833–4841 (2018). https://doi.org/10.1007/ s11356-018-3979-6 28. Vellaiyan, S., Amirthagadeswaran, K.S.: Taguchi-Grey relational-based multi-response optimization of the water-in-diesel emulsification process. J. Mech. Sci. Technol. 30(3), 1399–1404 (2016) 29. Vellaiyan, S., Amirthagadeswaran, K.S.: Emission characteristics of water-emulsified diesel fuel at optimized engine operation condition. Petrol. Sci. Technol. 35(13), 1355–1363 (2017) 30. Wakode, V.R., Kanase-Patil, A.B.: Regression analysis and optimization of diesel engine performance for change in fuel injection pressure and compression ratio. Appl. Therm. Eng. 113, 322–333 (2017) 31. Yuvarajan, D., Ravikumar, J., Babu, M.D.: Simultaneous optimization of smoke and NOx emissions in a stationary diesel engine fuelled with diesel–oxygenate blends using the grey relational analysis in the Taguchi method. Anal. Methods 8(32), 6222–6230 (2016)

Hostel Out-Pass Implementation Using Multi Factor Authentication Naresh Tangudu1(B) , Nagaraju Rayapati2 , Y. Ramesh1 , Panduranga Vital1 , K. Kavitha1 , and G. V. L. Narayana1 1 IT, Aditya Institute of Technology and Management, Tekkali, AP, India

[email protected] 2 CSE, Mohan Babu University, Tirupathi, AP, India

Abstract. The main aim of this paper is to enable a robust biometric-based hostel out-pass system that guarantees high security and assurance by using multi-factor authentication. Since biometrics are not easy to fake or steal, any university, college, or boarding school can implement and run this hostel out-pass mechanism without any hassles. By implementing this biometric-enabled hostel out-pass mechanism, management can give parents and guardians assurance about their ward's security on campus. Keywords: Pass · Hostel · Boarder · Biometric · Authentication · Interface · Finger print

1 Introduction

Generally, an out-pass system in a hostel is designed to keep track of outgoing boarders, i.e., students or employees of an institution/school/university. Nowadays, many schools, colleges, and universities run the out-pass mechanism manually, which does not guarantee security and assurance to parents and management. Typically, boarders send an online leave request [1-14] to the hostel warden or proctor, who then has to approve or reject it. This process has a bottleneck, as there are always high chances of breaching security. Sometimes boarders submit an out-pass on a physical form by filling in details such as the boarder's name, date and time of leaving, date and time of return, and purpose. After the physical out-pass form is submitted to the proctor, house master, or hostel in-charge, they may in turn consult the parent or guardian over phone or email to approve or reject the hostel out-pass. This method involves a lot of paperwork, and there is always a chance of data alteration.


2 Proposed Method

We have suggested two different methods to guarantee high security: one for higher-education students/employees and another for school kids. This hostel out-pass mechanism was implemented using .NET technologies with SQL Server as the back end.

1. Collect parent and ward information along with biometrics in the case of school children
2. Collect student/employee information in the case of higher-education students/employees

The master data collection above is mandatory to implement the multi-factor-authentication-based hostel out-pass. Figure 1 below shows the different fields captured during the master data collection process for the biometric-based hostel out-pass management system.

Fig. 1. Master data of student/parent fields

Out-pass Mechanism for School Children: In Fig. 2 we show the complete flow of activities of the hostel out-pass mechanism for school children, where a parent can log into the application and initiate a hostel out-pass for his/her ward, and is notified via SMS and email about the out-pass status. After the out-pass is approved, the student's out-pass is validated by the security person at the exit gate against the hostel out-pass request using biometrics. When the student returns to the school or hostel, he/she again needs to submit the out-pass slip and undergo biometric validation.


Fig. 2. Hostel out-pass mechanism for school kids

Once validation is complete, the parent receives an SMS/email about his/her ward's arrival at the hostel/school. The flow of activities in out-pass generation and return is summarized in Fig. 3.

Fig. 3. Out-pass generation and return to hostel


Out-pass Mechanism for Higher Education Students/Employees:

1. The student or employee opens the application
2. Enters credentials
3. Opens the out-pass application
4. Submits the details of the out-pass
5. The parent/guardian/family member is notified via SMS/email about the out-pass request
6. The warden logs into the application
7. The warden either approves or rejects the hostel out-pass
8. If approved, the student's out-pass will be validated against biometrics
9. The parent is notified about the out-pass approval via SMS/email
10. The student/employee generates the out-pass at the main gate by submitting biometrics at the biometric readers at the main gate
11. After successful validation, a duplicate out-pass is generated
12. The parent/guardian/family member is notified about the out-pass generation
13. One copy is submitted at the main gate
14. The second copy is submitted with biometric verification when the student/employee returns to campus
15. The parent/guardian/family member is notified about the return of the ward

If the student has not reached the campus by the hostel out-pass return date and time, an automatic scheduler runs and alerts the parent through SMS/email that his/her ward has not reached the hostel/campus.

Out-pass Applying Web/Android Interface: The fields in Fig. 4 are captured when the boarder/parent applies for an out-pass. Based on the leaving date and time, biometric authentication is carried out at the security-gate level and the out-pass can be generated. When the boarder, parent, or guardian applies for an out-pass, the parent is intimated about the out-pass request through SMS/email. If the ward has not reported as per the reporting date and time, the parent or guardian is alerted about the non-reporting through SMS/email.

Out-pass Approval: The hostel in-charge/warden logs into his account and can either approve or reject the out-pass. Notification about the out-pass approval process is intimated to the parent through SMS/email. In the warden approval user interface, the fields captured during the out-pass application are shown along with a remarks text area and approve and reject buttons.


Fig. 4. Web/mobile interface to apply out-pass

Out-pass Format: Figures 5 and 6 show sample out-pass formats, which are customizable depending on how the data is to be presented. An additional security feature implemented in the out-pass is barcode generation, which is based on two factors: the student roll number and the leaving date-time, as shown in the sketch below. A duplicate copy of the out-pass is generated: one copy is with the student and another with the security people. Printing of this out-pass is customizable; depending on the printer configuration, the user can print on a pause printer or a normal printer. In the case of a pause printer, the duplicate out-pass appears vertically, whereas on a normal printer it comes out in horizontal mode. When the fingerprint scanners are not working, the security person can make use of a barcode scanner to process the boarder's out-pass when they report to the hostel premises.
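A minimal sketch of composing the two-factor barcode payload is given below; the payload layout, the separator, and the sample roll number are hypothetical, since the paper does not fix an encoding format.

```python
# Minimal sketch of the two-factor barcode payload: roll number + leaving date-time.
from datetime import datetime

def outpass_barcode_payload(roll_number: str, leaving: datetime) -> str:
    # Hypothetical layout, e.g. "20A51A0501|202212151730"; any 1-D symbology
    # (e.g. Code128) could render this string on the printed out-pass.
    return f"{roll_number}|{leaving.strftime('%Y%m%d%H%M')}"

print(outpass_barcode_payload("20A51A0501", datetime(2022, 12, 15, 17, 30)))
```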


Fig. 5. Sample out pass form suitable for pause printer

Biometric Capturing Devices: Many fingerprint readers are available on the market with custom features such as FBI certification, STQC certification for AADHAAR/UIDAI applications, fake-finger rejection technologies to reject spoofed fingerprints, USB support, etc.


Fig. 6. Out pass format for normal printer

For this project implementation we purchased the Hamster Pro 20, which provides free SDK support for developers for Windows, Linux, and Android applications. The technical specifications of the product are shown in Fig. 7.

Fig. 7. SecuGen Hamster Pro 20 product features

Many other SecuGen biometric validators are available on the market at lower cost to implement this project; details are furnished in Table 1.


Table 1. Biometric Readers supported by Secugen

3 Implementation

About the Hamster Pro 20 SDK: The namespace SecuGen.FDxSDKPro.Windows under the Hamster Pro 20 SDK is mainly used for the biometric implementation in this VB.NET project. The main classes used in this implementation are:

• SGFingerPrintManager
• SGFPMDeviceList
• SGFPMError
• SGFPMDeviceInfoParam
• SGFPMSecurityLevel


For better error handling during usage of the Hamster Pro 20 SDK methods Init(), OpenDevice(), GetDeviceInfo(), GetImage(), GetImageQuality(), CreateTemplate(), and MatchTemplate(), the Hamster Pro 20 SDK supports the error codes listed in Table 2.

Table 2. Error code definitions

As part of this implementation:

1. First, install the SecuGen SDK ("IGRAllBiometricComSetup.exe")
2. Next, connect the SecuGen device to the computer or laptop
3. Install the application (developed in .NET)
4. Collect biometrics using the SecuGen API methods
5. Fingerprint patterns can be collected using the CreateTemplate() method
6. Fingerprint validation can be done using the MatchTemplate() method

Figure 8 below shows the different functionalities implemented in the VB.NET application of the hostel out-pass management system, and Fig. 9 shows the list of students/employees whose out-pass is approved.

4 Conclusion

Compared to the manual hostel out-pass mechanism and existing online solutions, this biometric-enabled multi-factor authentication system offers more security and assurance to parents and guardians.


Fig. 8. Menus of the hostel out-pass generation system

Fig. 9. List of students/employees whose out-pass is approved

Management and hostel wardens can issue hostel out-passes without worrying about risks. We may also extend this work by using face recognizers.

References

1. Jha, L., Patil, H.: A comprehensive study of and possible solutions for a hostel management system. In: Samarjeet Borah, B.K., Mishra, H.K. (eds.) Computing and Communications Engineering in Real-Time Application Development, pp. 47–54. Apple Academic Press, Boca Raton (2022)
2. Kammar, M.Y., Ashwini, V.K., Sajji, C.R., Patil, S.V.: Digital out pass. Int. J. Eng. Res. Technol. (IJERT) NCAIT 8(15), 81–84 (2020)
3. Batra, P., Goel, N., Sangwan, S., Dixit, H.: Design and implementation of hostel management system using Java and MySQL. LC Int. J. STEM 1(4), 63–74 (2020)
4. Anirudh, A., Pandey, V.K., Sodhi, J.S., Bagga, T.: Next generation Indian campuses going SMART. Int. J. Appl. Bus. Econ. Res. 15(21), 385–398 (2017)
5. Farooq, U., ul Hasan, M., Amar, M., Hanif, A., Asad, M.U.: RFID based security and access control system. Int. J. Eng. Technol. 6(4), 309 (2014)
6. Rivera, J.G., Banayos, S., Bautista, J.: Door security system based on Arduino with multifactor authentication. Southeast Asian J. Sci. Technol. 4(1), 25–28 (2019)
7. Kumar, V., Vasudevan, S., Posonia, M.: Urban mode of dispatching students from hostel. ARPN J. Eng. Appl. Sci. 12(13), 4089–4092 (2006)
8. Singhal, D.T.K., Saurabh, Vashishth, I., Chugh, P.: IOT enabled smart hostel: a futuristic perspective. Int. J. Res. Appl. Sci. Eng. Technol. 5, 1451–1466 (2017)
9. Mittal, Y., Varshney, A., Aggarwal, P., Matani, K., Mittal, V.K.: Fingerprint biometric based access control and classroom attendance management system. In: 2015 Annual IEEE India Conference (INDICON), pp. 1–6. IEEE (2015)
10. Sajić, M., Bundalo, D., Bundalo, Z., Sajić, L., Lalić, D., Kuzmić, G.: Smart universal multifunctional digital terminal/portal devices. In: 2019 8th Mediterranean Conference on Embedded Computing (MECO), pp. 1–4. IEEE (2019)
11. Edwards, E.O., Orukpe, P.E.: Development of a RFID based library management system and user access control. Niger. J. Technol. 33(4), 574–584 (2014)
12. Rajani, A., Kora, P., Madhavi, R., Jangaraj, A.: Quality improvement of retinal optical coherence tomography, pp. 1–5 (2021). https://doi.org/10.1109/INCET51464.2021.9456151
13. Sunitha, G., Reddy Madhavi, K., Avanija, J., Tharun Kumar Reddy, S., Hitesh Sai Vittal, R.: Modeling convolutional neural network for detection of plant leaf spot diseases. In: 3rd International Conference on Electronics and Sustainable Communication Systems, pp. 1187–1192. IEEE (2022)
14. Madhavi, K.R., Kora, P., Reddy, L.V., Avanija, J., Soujanya, K.L.S., Prabhakar, T.: Cardiac arrhythmia detection using dual-tree wavelet transform and convolutional neural network. Soft Comput. 3, 64 (2021)

Federated Learning and Adaptive Privacy Preserving in Healthcare K. Reddy Madhavi1(B) , Vineela Krishna Suri2 , V. Mahalakshmi3 , R. Obulakonda Reddy4 , and C. Sateesh kumar Reddy5 1 CSE, Mohan Babu University, Tirupati, A.P, India

[email protected]

2 CSIT, CVR College of Engineering, Ibrahimpatnam, Telangana, India 3 Department of Computer Science, College of Computer Science and Information Technology,

Jazan University, Jazan, Saudi Arabia [email protected] 4 Department of Computer Science and Engineering (Cyber Security), Institute of Aeronautical Engineering, Hyderabad-97, Dundigal, India 5 Department of ECE, Guru Nanak Institute of Technology, Hyderabad, India

Abstract. Using real-world health data for machine learning tasks raises many issues related to storage, access, and privacy on centralized databases. Distributed data silos need to be secured, with privacy assured for access across multiple sites, to reduce the risk of a breach. A federated learning framework learns a global model from the distributed health data at different sites and provides secure access without the raw data having to leave those sites. An adaptive privacy mechanism at the sites further acts as a protective model to prevent potential attacks. In this paper, a federated model with adaptive privacy is discussed. A simulation of the federated learning mechanism, with federated sites holding data silos and differential privacy as the adaptive privacy measure, has been carried out. The results are satisfactory and show the variability of the results before and after the application of privacy preservation. Keywords: Privacy preservations · Health care · Federated learning · Adaptive privacy measures

1 Introduction

Artificial intelligence techniques enhance and assist medical and healthcare applications in terms of efficiency and outcomes, and render benchmark services for the benefit of human wellbeing. Machine learning applications in medical and healthcare systems are data driven; deep learning has already demonstrated its success in many industry domains, including medical and healthcare systems. Deep learning requires large datasets, and in some instances the data collected from real-time patient databases is not sufficient in size to test the algorithms. Personal information prevails in medical and healthcare data, which is considered highly sensitive, as it contains not only diagnostic information but also personal information. When such data is

1 Introduction Artificial Intelligence Techniques enhance and assist the medical and healthcare applications, in terms of efficiency, outcome and render benchmarks services to the benefit of human wellbeing. Machine learning applications of medical and healthcare systems are data driven, deep learning has already demonstrated its success in many industry domains, which also includes medical and healthcare systems. Deep learning requires large datasets, where in some instances to test the algorithms data would not be sufficient in its size when collected from real-time patient databases. Personal information prevails in the medical and healthcare data which is considered to be as a highly sensitive data, as it contains not only diagnostic information but also personal information. Being, © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 543–551, 2023. https://doi.org/10.1007/978-3-031-27499-2_51

544

K. R. Madhavi et al.

processed by deep learning algorithms, from the perspective of a consumer/patient, the personal data shall not be disclosed to the experts of the artificial intelligence field. Data privacy is the most important concern for critical datasets that associate the personal information of the patient with diagnostic information. A global machine learning model that is a federation of several learning paradigms is a versatile contribution to the technological world, called Federated Learning (FL). As the learning mechanism maps the results of the diagnostic data to personal data, institutional data, and other critical data, the objective of learning shall be projected meaningfully by combining several learning models on different segments of the data within collaborative training frameworks, where results can be mapped approximately to a category of subjects.

1.1 Data Sharing Scenarios

Centralizing and decentralizing the datasets is a plan that shall be incorporated as the data size grows and the data has to be shared among the various learning paradigms; an optimization framework may be employed to guide the learning procedures in handling the private data, which shall not be breached or illegally shared. Although an optimization framework is implemented for protecting data, the information shared will include model parameters and gradients, where the machine learning algorithms can access a range of datasets that exist at different locations by decentralizing the learning processes. An important feature of Federated Learning is that the distributed datasets are shared with the several segments of the learning processes through an optimization framework. The merger of an optimization framework and learning processes for distributed datasets is federated learning, which is an open system. Open health proposes an open innovation framework in the medical and healthcare industry, which focuses on driving innovation and capability in health through collaboration. Policies make an organization strong; the ideas for the policies come from within or outside the organization and essentially pave the way for success in advanced processes and outcomes. Open health is an open innovation, also rendered by crowd-sourcing, which needs the support of organizational partnerships to frame strategic projects. Medical and healthcare systems are hindered by pressures to improve the security and privacy of patients' health data, which affects crucial outcomes delivered under meticulously planned healthcare budgets. Numerous factors threaten the sustainability of medical and healthcare systems, such as chronic diseases, new morbid states, the incidence of pandemics, and also treatments and technologies for aging populations, where there is limited use of data.

1.2 Federated Learning and Open Health

Federated Learning proposes avenues to overcome these difficulties and explore the potential of data analytics to benefit the patient outcomes of the medical and healthcare system. One of the most important barriers in the learning process is the sharing of data, particularly in the medical and healthcare domains, which hedges the potential progress on patient care delivery, patient outcomes, and health data research. Various institutions


provide data sources, and the data sites are heterogeneous in nature; machine learning and deep learning algorithms are not largely used by them. The heterogeneity leads to biases in the results and to models that cannot easily be generalized. The best option for obtaining sufficiently large and diverse datasets is to resort to collaborative technologies in learning and model development [1]. Federated Learning provides data security and privacy while also facilitating the services of open health.

1.3 Types of Federated Learning

The systematic implementation of Federated Learning is categorized functionally into two groups: cross-silo and cross-device [1, 2]. The conceptual Federated Learning mechanism was first proposed by Google to solve large-scale artificial intelligence and machine learning problems for mobile networks, where the different kinds of connected mobile devices constitute the cross-device setting. A model is learnt across several categories of devices while integrating the data and the computational resources. Cross-silo Federated Learning is proposed for sharing knowledge across heterogeneous organizations. Most devices are mobile devices which, though considered smart devices, work with limited computing power and limited training datasets; the communication is limited by latencies and in some instances unstable. Cross-silo Federated Learning has exponentially powered servers, each user has access to centralized datasets, and it is encompassed by higher data protection criteria for secure sharing, considering many external factors and cybersecurity concerns. Therefore, sophistication is in demand and is a challenge for the conceptual development of Federated Learning in the cross-silo setting.
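As a minimal illustration of a cross-silo round, the sketch below performs federated-averaging steps over site-local updates with a Gaussian-noise differential-privacy step; the linear model, synthetic silos, clipping norm, and noise scale are all illustrative assumptions rather than the paper's exact simulation.

```python
# Minimal sketch: one cross-silo federated-averaging round with DP noise.
import numpy as np

def local_update(weights, X, y, lr=0.1, clip=1.0, noise_std=0.05, rng=None):
    # One gradient step of linear regression on a site's private silo.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)   # clip sensitivity
    rng = rng or np.random.default_rng()
    grad += rng.normal(0, noise_std, grad.shape)          # Gaussian DP noise
    return weights - lr * grad                            # noisy local model

def federated_round(global_w, silos, rng):
    # Each site trains on its own data; only model weights leave the site.
    locals_ = [local_update(global_w.copy(), X, y, rng=rng) for X, y in silos]
    return np.mean(locals_, axis=0)                       # FedAvg aggregation

rng = np.random.default_rng(0)
silos = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, silos, rng)
print(w)
```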

2 Related Work

Providing privacy and security in Federated Learning remains challenging in the face of external threats to medical and healthcare systems, particularly for data held on local devices, where there are risks of security breaches and data leakage. Federated Learning is susceptible to privacy attacks, inference attacks, and poisoning attacks, and therefore avoids sharing data in raw form [2, 3]. Inference attacks combine membership attacks and reconstruction attacks and appear as both black-box and white-box attacks. A white-box inference attack exploits the privacy vulnerability of a stochastic gradient descent algorithm. A malicious aggregation server creates a scenario of user-level privacy leakage, for which a reconstruction attack based on generative adversarial networks has been proposed. A poisoning attack is a more recent variant that manipulates parameters of the Federated Learning model and poisons the policies of the Federated Learning process. The consensus of the literature on privacy-preserving federated learning focuses on differential privacy and secure multi-party computation. Secure multi-party computation has been proposed for securely computing sums of model-parameter updates from mobile devices; this approach is useful for applications built on sum-based aggregation and can protect against a non-trusted aggregation server. Most of the
privacy-preserving approaches in federated learning focus on mobile applications, including image recognition and word prediction for mobile keyboards.

2.1 Machine Learning Models

Federated Learning solves many critical problems in Machine Learning. The most popular models today are neural networks, which achieve state-of-the-art results in learning tasks such as image classification, prediction, annotation, and captioning. Many federated learning studies have shown that methods based on stochastic gradient descent, the workhorse of neural network training, transfer well to the federated setting. Decision trees are another family of models for building learning mechanisms in Federated Learning: a tree-based federated learning system can train single or multiple decision trees distributed across many sites. Gradient-boosted decision trees and random forests are especially popular, achieving good classification and regression performance on federated learning platforms. Various linear and non-linear models also offer easy-to-use methodologies, especially linear and logistic regression, and combine readily with other, more complex models. For federated learning and training, a single-machine model tends to be weak, so ensemble-based methods are used to stack and vote over a number of machine learning models in a typical federated setting. Parties in the network can train individual machine learning models and communicate them to the servers, which aggregate or ensemble them into a global learner under federated training; each party can be trained individually, and heterogeneous models with average performance parameters can be confederated into a stacked meta-model.

Various deep learning approaches for image classification and detection in healthcare have been presented in [4–7]. Detection of COVID-19 using deep learning methods is discussed in [8]; EEG-based brain-electric activity detection during meditation using spectral estimation techniques in [9]; methods for quality improvement of retinal optical coherence tomography in [10]; detection of pneumonia using deep transfer learning architectures in [11]; and analysis of COVID-19-impacted zones using machine learning algorithms in [12]. Support vector machine classification of remote sensing images with wavelet-based statistical features is presented in [13]. Convolutional neural networks are used for detection and classification [14, 15], detection of intracranial haemorrhage [16], COVID-19 isolation monitoring [17], detection of diabetic retinopathy in retinal images [18], predictive analytics for control of industrial automation [19], robotic applications [20], dengue outbreak prediction [21], and protection and security in fog computing [22].

2.2 Privacy

Privacy protection is central to the success of federated learning systems. The reliability of the manager of the federated learning system must be analyzed: whether the manager is curious or not. If the manager is honest and not curious, no additional measures need be adopted for securing
privacy, since federated learning already ensures that raw data is never exchanged in the active framework. If the manager is honest but curious, normal security measures must be enforced. Model parameters derived during training can still expose sensitive properties of the data; in such scenarios, differential privacy is recommended for injecting calibrated noise into the parameters, while secure multi-party computation relies on the exchange of encrypted parameters. When the personnel cannot be trusted at all, a trusted execution environment should be administered for execution. Blockchain is one of the most feasible solutions in such a scenario, letting the manager monitor and coordinate events at the servers.

2.3 Autonomy

A federated learning system in practice must respect the autonomy of the parties, who have the right to drop out during the federated learning process. The cross-device setting is a special case to consider for dropout: it is large, and the parties' connectivity is unreliable, so a federated learning system can fall prey to unreliable connections and to parties with adversarial motives. A federated learning system must therefore be stable and robust, able to tolerate party failures and to reduce the incidence of failures. A typical suggestion for overcoming failures is a blockchain-based approach that secures robust aggregation and detects device disconnection; secure robust aggregation protects the communicated messages during party dropout. Parties are by nature selfish and may not commit to a high-quality shared model, and frequent disconnections raise latency and degrade performance.

2.4 Design

The key elements of a federated learning system design are the entities and the tasks. The entities are the participating entities that subscribe to the communication architecture; they determine the partitioning of data and the scale of the federation. Tasks are the learning mechanisms that follow a learning model to be trained. A suitable federated learning algorithm is developed and applied according to the entities and tasks: it is fixed on the site or server and satisfies the privacy requirements without compromising the learning requirements of the related tasks. Communication messages are protected in parallel with the learning tasks in federated learning. Differential privacy is a strong model for combining privacy with learning ability, and the learning algorithms are tuned to the level of the privacy-enforcing algorithms.
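The sum-based secure aggregation discussed in this section can be illustrated with pairwise additive masking, where each pair of parties agrees on a random mask that cancels when the server sums the uploads, so the server never sees any individual update. The following is a minimal Python sketch under simplifying assumptions: a common seed stands in for a pairwise key agreement, and dropout handling is omitted. It is not the protocol of the cited works.

```python
import numpy as np

def pairwise_masks(num_clients: int, dim: int, seed: int = 0):
    """Generate antisymmetric pairwise masks: masks[i, j] = -masks[j, i].

    In a real protocol each pair (i, j) would derive this mask from a
    shared secret (e.g. via key agreement); here a common seed stands in.
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_clients, num_clients, dim))
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.normal(size=dim)
            masks[i, j] = m    # client i adds +m
            masks[j, i] = -m   # client j adds -m, so it cancels in the sum
    return masks

def masked_update(update, client_id, masks):
    """Each client uploads its update plus all masks shared with its peers."""
    return update + masks[client_id].sum(axis=0)

if __name__ == "__main__":
    n, d = 4, 6
    true_updates = [np.random.randn(d) for _ in range(n)]
    masks = pairwise_masks(n, d)
    uploads = [masked_update(u, i, masks) for i, u in enumerate(true_updates)]
    # The server sees only masked vectors, yet their sum equals the true sum.
    assert np.allclose(sum(uploads), sum(true_updates))
    print("aggregate recovered:", np.round(sum(uploads), 3))
```

Because the masks cancel only in the full sum, this construction is exactly a sum-based aggregation scheme: individual uploads look random to the server, which matches the threat model of a non-trusted aggregator described above.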

3 Methods

3.1 Federated Learning

To overcome the aforesaid vulnerabilities related to stochastic gradient descent optimization, a classification problem is chosen as an amenable setting for describing the potential of Federated Learning. A general binary classification problem is used to frame the Federated Learning solution. Let $x_k$ be
the $k$th feature vector drawn from the feature space $X$, and let the label space $Y = \{-1, +1\}$ supply the labels $y_k$. Denote the positively and negatively labelled parts of the feature space by

$$X^{(+)} = \{x_k \in X : y_k = +1\}, \qquad X^{(-)} = \{x_k \in X : y_k = -1\}.$$

Classification aims to construct a function $f : X \rightarrow Y$ such that $f(x_k^{(+)}) = +1$ and $f(x_k^{(-)}) = -1$.

To set up Federated Learning over $N$ sites, each site $i \in \{1, \ldots, N\}$ is assumed to possess a local dataset $D_i$ with feature set $\{X_{train}^{i}\}_{i=1}^{N}$ and corresponding label set $\{Y_{train}^{i}\}_{i=1}^{N}$ for local training. Based on the use cases in the medical and healthcare domain, a global model is shared with the sites, each of which trains it on its $D_i$. The learning rate, number of epochs, and batch size govern the computation of the average gradient $\nabla F_i(w)$ with respect to the model parameter $w$; subsequently, the aggregate is computed as the weighted average of the local parameter updates. The process iterates until convergence, minimizing the loss.

3.2 Adaptive Privacy

Differential privacy [23, 24] is considered here as the adaptive methodology for ascertaining the privacy of personalized datasets; it is the assured approach for privacy preservation on aggregated datasets [23–25]. A randomized algorithm $A$ with range $R$ satisfies $\varepsilon$-differential privacy if, for all datasets $D$ and $D'$ differing in a single record and for all sets $S \subseteq R$,

$$\Pr[A(D) \in S] \le e^{\varepsilon} \Pr[A(D') \in S],$$

where $\varepsilon$ is the privacy parameter. This implies that a single record in the dataset does not substantially influence the distribution of the algorithm's output. For the Federated Learning task, noise may be added to the objective function of the optimization to obtain a differentially private approximation; at each site, noise is added to the local objective so that the model attains the minimizer of the perturbed objective.
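A minimal sketch of this training loop is given below: each site computes the average gradient $\nabla F_i(w)$ on its local data with labels in $\{-1, +1\}$, perturbs it with Gaussian noise as a stand-in for the differentially private objective perturbation of Sect. 3.2, and the server aggregates by a weighted average over site sizes. The learning rate, noise scale, and synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def local_gradient(w, X, y):
    """Average logistic-loss gradient for labels y in {-1, +1}."""
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))   # d/dm of log(1 + e^{-m})
    return (X * coeff[:, None]).mean(axis=0)

def fedavg_dp(sites, dim, rounds=50, lr=0.5, noise_scale=0.05, seed=0):
    """FedAvg sketch: each site adds Gaussian noise to its gradient
    (a stand-in for DP perturbation), and the server takes a weighted
    average by local sample count. Hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    weights = sizes / sizes.sum()
    for _ in range(rounds):
        noisy_grads = [local_gradient(w, X, y)
                       + rng.normal(scale=noise_scale, size=dim)
                       for X, y in sites]
        w -= lr * np.average(noisy_grads, axis=0, weights=weights)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_w = np.array([1.5, -2.0, 0.5])
    sites = []
    for _ in range(10):   # ten sites, as in the evaluation below
        X = rng.normal(size=(200, 3))
        y = np.sign(X @ true_w + 0.1 * rng.normal(size=200))
        sites.append((X, y))
    w = fedavg_dp(sites, dim=3)
    acc = np.mean([np.mean(np.sign(X @ w) == y) for X, y in sites])
    print(f"mean site accuracy: {acc:.3f}")
```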

4 Evaluation

Building a Federated Learning model for privacy preservation in the medical and healthcare domain is an urgent need, since patient data is extremely sensitive. In the federated model, medical and healthcare data is distributed across various data silos, and extracting large datasets to detect peculiar events is a challenge for a centralized data model. The learning model here is employed to identify adverse drug effects [5, 6]. To predict an adverse drug effect (ADE), the framework must access voluminous electronic health records (EHRs) of patients, containing patient-level sensitive features such as habits, diagnostic codes, prescription slips, wet-laboratory results, and other administrative records. Synthetic data representing patients affected by non-steroidal anti-inflammatory drugs (NSAIDs) was selected, and prediction of adverse drug effects was exercised on a cohort of 9000 samples.
The most amenable deep learning methodologies for learning, segmentation, and classification are the convolutional neural network and transfer learning. To identify patient reactions and classify the affected patients into ADE categories while assuring privacy preservation, a sequential convolution model is set up and evaluated before and after applying the privacy-preserving algorithms. The quality of the outcome is measured using the F1-score. The framework configuration consists of 10 Federated Learning sites, each an Intel(R) Xeon(R) E5-2683 v4 2.10 GHz CPU with 16 cores and 64 GB of RAM.

4.1 Results

The experiment established benchmark results on the synthetic patient datasets brought from the various federated sites. The performance of the centralized learning model on the task of predicting patients with adverse drug effects is charted in the following graphs, and the performance of the FL model trained on the distributed data is assessed against it. With differential privacy chosen as the adaptive privacy mechanism, the privacy parameter $\varepsilon$ is varied, and the F1-score at each $\varepsilon$ quantifies the privacy-utility trade-off (Fig. 1):

Fig. 1. Varying $\varepsilon$ before and after applying privacy preservation while predicting the patients affected by ADE.
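The privacy-utility trade-off of Fig. 1 can be reproduced in spirit on synthetic data: as $\varepsilon$ shrinks, the Laplace noise added to the trained model grows and the F1-score drops. The sketch below uses simple output perturbation on a logistic model with unit sensitivity assumed; the generated dataset is a stand-in for the paper's 9000-sample ADE cohort, not the actual data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ADE cohort (9000 samples).
X, y = make_classification(n_samples=9000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"no noise : F1={f1_score(y_te, clf.predict(X_te)):.3f}")

rng = np.random.default_rng(0)
for eps in [0.1, 0.5, 1.0, 2.0, 5.0]:
    # Output perturbation: Laplace noise with scale ~ 1/eps on the trained
    # coefficients (sensitivity taken as 1 purely for illustration).
    w = clf.coef_.ravel() + rng.laplace(scale=1.0 / eps, size=clf.coef_.size)
    b = clf.intercept_[0] + rng.laplace(scale=1.0 / eps)
    preds = (X_te @ w + b > 0).astype(int)
    print(f"eps={eps:>4}: F1={f1_score(y_te, preds):.3f}")
```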

5 Conclusion

Several federated models across federated sites become possible when the right volume of electronic health data is available, and machine learning can be leveraged on voluminous EHRs for insightful data analytics. Here, federated learning was implemented on datasets brought from various simulated federated sites, and differential privacy was employed as the adaptive privacy mechanism for preserving privacy. The model performance was evaluated through F1-scores, and the influence of the differential privacy parameter was studied.


References

1. Long, G., Shen, T., Tan, Y., Gerrard, L., Clarke, A., Jiang, J.: Federated learning for privacy-preserving open innovation future on digital health. In: Chen, F., Zhou, J. (eds.) Humanity Driven AI, pp. 113–133. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-72188-6_6
2. Xu, J., Glicksberg, B.S., Chang, S., Walker, P., Bian, J., Wang, F.: Federated learning for healthcare informatics. J. Healthc. Inform. Res. 5(1), 1–19 (2021)
3. Choudhury, O., et al.: Anonymizing data for privacy-preserving federated learning. arXiv preprint arXiv:2002.09096 (2020)
4. Kavitha, T., et al.: Deep learning based capsule neural network model for breast cancer diagnosis using mammogram images. Interdiscip. Sci.: Comput. Life Sci. 14, 113–129 (2021). https://doi.org/10.1007/s12539-021-00467-y
5. Reddy Madhavi, K., Sunitha, G., Avanija, J., Viswanadha Raju, S., Abbagallae, S.: Impact analysis of hydration and sleep intervention using regression techniques. Turkish J. Comput. Math. Educ. (TURCOMAT) 12(2), 2129–2133 (2021)
6. Yadlapalli, P., Madhavi, R., Sunitha, G., Jangaraj, A., Kollati, M., Kora, P.: Breast thermograms asymmetry analysis using Gabor filters. E3S Web Conf. 309, 01109 (2021). https://doi.org/10.1051/e3sconf/202130901109
7. Padmavathi, K., et al.: Automatic segmentation of prostate cancer using cascaded fully convolutional network. E3S Web Conf. 309, 01068 (2021)
8. Madhavi, K.R., Madhavi, G., Krishnaveni, C.V., Kora, P.: COVID-19 detection using deep learning. In: Abraham, A., Hanne, T., Castillo, O., Gandhi, N., Rios, T.N., Hong, T.-P. (eds.) Hybrid Intelligent Systems: 20th International Conference on Hybrid Intelligent Systems (HIS 2020), December 14–16, 2020, pp. 263–269. Springer International Publishing, Cham (2021). https://doi.org/10.1007/978-3-030-73050-5_26
9. Kora, P., Rajani, A., Chinnaiah, M.C., Madhavi, K.R., Swaraja, K., Meenakshi, K.: EEG-based brain-electric activity detection during meditation using spectral estimation techniques. In: Jyothi, S., Mamatha, D.M., Zhang, Y.-D., Raju, K.S. (eds.) Proceedings of the 2nd International Conference on Computational and Bio Engineering. LNNS, vol. 215, pp. 687–693. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-1941-0_68
10. Rajani, A., Kora, P., Madhavi, R., Jangaraj, A.: Quality improvement of retinal optical coherence tomography, pp. 1–5 (2021). https://doi.org/10.1109/INCET51464.2021.9456151
11. Reddy Madhavi, K., Madhavi, G., Rupa Devi, B., Kora, P.: Detection of pneumonia using deep transfer learning architectures. Int. J. Adv. Trends Comput. Sci. Eng. 9(5), 8934–8937 (2020)
12. Abbagalla, S., Rupa Devi, B., Anjaiah, P., Reddy Madhavi, K.: Analysis of COVID-19-impacted zone using machine learning algorithms. In: Lecture Notes on Data Engineering and Communication Technology, vol. 63, pp. 621–627. Springer (2021)
13. Prabhakar, T., Srujan Raju, K., Reddy Madhavi, K.: Support vector machine classification of remote sensing images with the wavelet-based statistical features. In: Chandra Satapathy, S., Bhateja, V., Favorskaya, M.N., Adilakshmi, T. (eds.) Smart Intelligent Computing and Applications, Volume 2: Proceedings of Fifth International Conference on Smart Computing and Informatics (SCI 2021), pp. 603–613. Springer Nature Singapore, Singapore (2022). https://doi.org/10.1007/978-981-16-9705-0_59
14. Madhavi, R., Kora, P., Reddy, L., Jangaraj, A., Soujanya, K., Prabhakar, T.: Cardiac arrhythmia detection using dual-tree wavelet transform and convolutional neural network. Soft Comput. 26, 3561–3571 (2022). https://doi.org/10.1007/s00500-021-06653-w
15. Swaraja, K., et al.: Brain tumor classification of MRI images using deep convolutional neural network. Traitement du Signal 38, 1171–1179 (2021). https://doi.org/10.18280/ts.380428
16. Avanija, J., Sunitha, G., Reddy Madhavi, K., Vittal, H.S.: An automated approach for detection of intracranial haemorrhage using DenseNets. In: Ashoka Reddy, K., Rama Devi, B., George, B., Srujan Raju, K. (eds.) Data Engineering and Communication Technology. LNDECT, vol. 63, pp. 611–619. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-0081-4_61
17. Reddy Madhavi, K., Vijaya Sambhavi, Y., Sudhakara, M., Srujan Raju, K.: COVID-19 isolation monitoring system. In: Lecture Notes on Data Engineering and Communication Technology, pp. 601–609. Springer (2021)
18. Prabhakar, T., Sunitha, G., Madhavi, G., Avanija, J., Madhavi, K.R.: Automatic detection of diabetic retinopathy in retinal images: a study of recent advances. Ann. Rom. Soc. Cell Biol. 25(4), 15277–15289 (2021)
19. Bhogaraju, S.D., Kumar, K.V., Anjaiah, P., Shaik, J.H., Reddy Madhavi, K.: Advanced predictive analytics for control of industrial automation process. In: Goundar, S., Avanija, J., Sunitha, G., Madhavi, K., Bhushan, S. (eds.) Innovations in the Industrial Internet of Things (IIoT) and Smart Factory, pp. 33–49. IGI Global (2021). https://doi.org/10.4018/978-1-7998-3375-8.ch003
20. Seeja, G., Obulakonda Reddy, R., Kumar, K.V., Mounika, S.S., Reddy Madhavi, K.: Internet of things and robotic applications in the industrial automation process. In: Goundar, S., Avanija, J., Sunitha, G., Madhavi, K., Bhushan, S. (eds.) Innovations in the Industrial Internet of Things (IIoT) and Smart Factory, pp. 50–64. IGI Global (2021). https://doi.org/10.4018/978-1-7998-3375-8.ch004
21. Avanija, J., Sunitha, G., Hittesh, R., Vittal, S.: Dengue outbreak prediction using regression model in Chittoor District, Andhra Pradesh, India. Int. J. Recent Technol. Eng. 8(4), 10057–10060 (2019). https://doi.org/10.35940/ijrte.d9519.118419
22. Rama Subba Reddy, G., Rangaswamy, K., Sudhakara, M., Anjaiah, P., Madhavi, K.R.: Towards the protection and security in fog computing for industrial internet of things. In: Goundar, S., Avanija, J., Sunitha, G., Madhavi, K., Bhushan, S. (eds.) Innovations in the Industrial Internet of Things (IIoT) and Smart Factory, pp. 17–32. IGI Global (2021). https://doi.org/10.4018/978-1-7998-3375-8.ch002
23. Choudhury, O., et al.: Differential privacy-enabled federated learning for sensitive health data. arXiv preprint arXiv:1910.02578 (2019)
24. Ali, M., Faisal, N., Muhammad, T., Geroges, K.: Federated learning for privacy preservation in smart healthcare systems: a comprehensive survey. arXiv preprint arXiv:2203.09702 (2022)
25. Kim, K., Harry, C.T.: Privacy-preserving federated learning. In: Privacy-Preserving Deep Learning, pp. 55–63. Springer, Singapore (2021)

ASocTweetPred: Mining and Prediction of Anti-social and Abusive Tweets for Anti-social Behavior Detection Using Selective Preferential Learning

E. Bhaveeasheshwar1, Gerard Deepak2(B), and C. Mala3

1 Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India
2 Department of Computer Science and Engineering, Manipal Institute of Technology, Bengaluru, India
3 Manipal Academy of Higher Education, Udupi, Manipal, India

Abstract. In the present times, there are a large number of social interactions via Online Social Networking platforms. Because content on the World Wide Web, and especially the Social Web, is not moderated, many derogatory, anti-social, and racist comments appear, specifically on Twitter, YouTube, Instagram, Facebook comments, and similar social networking platforms. In this paper, ASocTweetPred, a framework for mining and prediction of anti-social and abusive tweets for anti-social behavior detection, is put forth. It encompasses ontologies over different phases, a Knowledge Graph, and classification of the dataset using an LSTM, and it generates matching ontology snippets by computing semantic similarity using the Bose-Einstein index and the APMI measure. Subsequently, the matched ontology snippets and the formulated Knowledge Graph are used to extract features by preferential selection using Shannon's Entropy and Horn's Index with differential step-deviation measures. A triadic hybrid model of LSTM, Bagging with Random Forest and SVC, and XGBoost classifiers is used for anti-social tweet prediction. The proposed ASocTweetPred framework achieves an overall Precision of 97.93%, Recall of 98.09%, Accuracy of 98.01%, and F-Measure of 98.00%, with the lowest FNR of 0.02. Keywords: Anti-Social Behavior · Bagging · LSTM · Semantics · XGBoost

1 Introduction

The Social Web is the network of social connections that people make online. The development of websites and software focused on promoting and supporting social interaction is known as the "Social Web"; it comprises the various online tools and venues where people share their thoughts, opinions, and experiences. Web 2.0 applications frequently engage the user considerably more directly. The third generation of the internet, Web 3.0, is becoming more prevalent: it offers a data-driven Semantic Web that
employs artificial intelligence to analyze machine-based data and give users a more intelligent and interconnected web experience. The current Web, by contrast, is static and unable to adapt to the particular requirements of each user. The term "online social networks" refers to online social groups such as blogs, social networking sites, and even virtual worlds where individuals interact and share ideas and opinions. The significant and increasing prevalence of anti-social behavior on social media makes it impossible to disregard: even seemingly minor internet abuse can have a detrimental effect on mental health because it frequently endangers people's lives, and anti-social behavior on social media reportedly motivates a growing number of suicides and violent occurrences.

Motivation: There is a need for a framework for anti-social tweet mining and prediction to detect anti-social behavior on Online Social Networking platforms like Twitter, YouTube, Instagram, Facebook comments, and similar social media platforms, because derogatory, racist, and anti-social behavior and comments have increased on the Social Web in the absence of proper moderation. A semantically inclined, machine-intelligence-driven framework with hybridized machine learning models and an ontology focus is therefore the need of the hour. The primary motivating principle is the lack of semantically driven frameworks for anti-social tweet detection that accommodate the structure of Web 3.0.

Contribution: The proposed ASocTweetPred makes the following novel contributions. First, the dataset is classified using the LSTM. A Knowledge Graph is created from the crawled tweets containing the recognized abusive phrases. The Bose-Einstein index, the APMI, differential thresholds, and step-deviation metrics are used to calculate semantic similarity. The model is trained with hybridized triadic classifiers on the matched ontology snippets and the Knowledge Graph produced in Phases 1 and 2, using the LSTM, Bagging with Random Forest and SVC, and XGBoost, with feature selection by Shannon's Entropy and Horn's Index under differential thresholds. As a result, Precision, Recall, Accuracy, and F-Measure are increased, and the overall FNR is decreased.

Organization: This paper is organized as follows. Related Works are included in Sect. 2. An overview of the proposed system architecture is given in Sect. 3, and Sect. 4 presents the results and performance evaluation. Section 5, the final section, concludes the paper.

2 Related Works

Kwok et al. [1] investigated tweets using high-frequency text content and correlations between word tokens; an LDA topic model was built to locate frequently discussed topics on a broader set of tweets. Abu-Salih et al. [2] introduce CredSaT, in which user credibility is rated by a novel metric that considers both recent and historical features together with the temporal component; CredSaT can also be used to spot spammers and other unusual users. Khan et al. [3] introduce BiCHAT: tweets are fed into the proposed model through a BERT layer and a deep convolutional layer, an attention-aware bidirectional LSTM network is then applied to the convolutional encoded representation, and a softmax layer classifies each tweet as hateful or standard. Meng et al. [4] introduce DRAGNET++, which makes the most of the
contextual information leading up to, and the decline of, hate intensity at each successive tweet by utilizing the semantic and propagation structure of the tweet threads. Sharma et al. [5] proposed a novel method to predict personalities, with unique insights on crucial characteristics, using a logistic regression classifier with parameter regularization trained by SGD. The difficulties of automatically detecting offensive tweets in Arabic were examined by Shannaq et al. [6]: the suggested method improved pre-trained word-embedding models by fine-tuning them over multiple iterations on the training dataset, and used a hybrid approach blending XGBoost and SVC with a genetic algorithm to overcome the classifiers' constraints in choosing the best hyperparameter values. To address the constraints of Twitter data streams, Ayo et al. [7] introduced a typical metadata framework in which embedded features are fed into a combination of a CNN and a bidirectional RNN to detect tweets containing abusive content. After pre-analysis and data mining, Wu et al. [8] created a GSTM model to forecast future hate-speech trends: temporal and spatial evolution characteristics are constructed from the tweet stream, and the model's mixing component, with a conversion rate and a rescaling operation contained in the Gaussian function, is essential for the prediction tasks. Using public reading amounts and forwarding quantities, which stand for engagement and interaction, Yin et al. [9] designed the SRFI model to capture the behavior of users freely re-entering another related topic during the receptive or engagement phase. Alshalan et al. [10], after pre-processing the data, introduced a CNN-based detection method; after detecting the hate tweets, they further utilized a Non-negative Matrix Factorization (NMF) model to identify the topics. Ali et al. [11] introduced a method utilizing SVM and Multinomial Naive Bayes to train the classifier, using dynamic stop-word filtering to reduce sparsity and VGFSS to reduce dimensionality, together with the synthetic minority oversampling technique (SMOTE). Taradhita et al. [12] introduced a method employing a CNN to classify tweets, with the TF-IDF model for feature extraction as the weighting technique. In [13–21], several models in support of the proposed literature have been depicted.

3 Proposed Architecture

An approach toward intelligent mining and prediction of anti-social and abusive tweets for anti-social behavior detection using selective preferential learning is depicted in Figs. 1 and 2: Phases 1 and 2 are illustrated in Fig. 1, and Phase 3 in Fig. 2. In Phase 1, a Knowledge Graph of the tweets containing abusive words is formulated from the fragments of the abusive words and the context of each tweet. It is a five-node Knowledge Graph; every node holds five individual words along with the abusive word, and selected abusive phrases are also indicated in the Knowledge Graph. To crawl the tweets directly, the Twitter API is used to yield tweets containing abusive words, and further tweets are crawled using a customized crawler. Phase 2 is the formalization of distinct pools of ontologies: the Ontology of Conspiracy, Ontology of Aggression, Ontology of Criminology and Crime, Curse Words Ontology, and the ontology pool of Thefts. These ontologies are formulated by crawling existing datasets and tweets, and also based on
the manual intervention of several knowledge engineers and domain experts, together with semi-automatic generation of ontologies for these specific domains by inputting resources from the World Wide Web into two distinct frameworks, OntoCollab and Stardog.

Fig. 1. Phases 1 and 2 of the proposed ASocTweetPred
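Phase 1's Knowledge Graph, with one node per abusive-term occurrence annotated with five context words and with edges linking terms that co-occur within a tweet, can be sketched with networkx as below. The abusive-word list and the tweets are illustrative placeholders; the real pipeline crawls tweets through the Twitter API and a customized crawler.

```python
import networkx as nx

ABUSIVE = {"idiot", "moron"}   # illustrative thesaurus entries
WINDOW = 5                     # five context words per node, as in Phase 1

def tweet_knowledge_graph(tweets):
    """Build a small knowledge graph: one node per abusive-term occurrence,
    annotated with up to WINDOW surrounding context words, with edges
    linking terms that co-occur in the same tweet."""
    g = nx.Graph()
    for t_id, text in enumerate(tweets):
        tokens = text.lower().split()
        hits = [i for i, tok in enumerate(tokens) if tok in ABUSIVE]
        nodes = []
        for i in hits:
            ctx = tokens[max(0, i - WINDOW):i] + tokens[i + 1:i + 1 + WINDOW]
            node = (t_id, tokens[i])
            g.add_node(node, context=ctx[:WINDOW])
            nodes.append(node)
        for a, b in zip(nodes, nodes[1:]):   # co-occurrence edges
            g.add_edge(a, b)
    return g

g = tweet_knowledge_graph(["you absolute idiot stop posting",
                           "what a moron and an idiot"])
print(g.nodes(data=True))
print(list(g.edges()))
```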

Once the ontologies are generated, they are indexed and annotated separately, and a centralized annotated thesaurus, also maintained in the ontology, is the link for computing Semantic Similarity (SS) and for identifying the greater number of occurrences based on the tweets. Subsequently, the dataset is classified using the LSTM, a deep learning classifier, chosen to handle the large volume of data in the dataset. Because the LSTM works on the principle of automatic rather than hand-crafted feature selection, it automatically discovers the classes and classifies the dataset. The classified dataset is further used for reasoning and computation in the proposed framework. The classes of the dataset are then used to compute SS against the indexed and annotated pool of ontologies. SS is computed using the Bose-Einstein statistic distribution and the APMI measure, a differential semantic algorithm; the Bose-Einstein statistic distribution is merged with the APMI measure to compute the SS. APMI is set to a threshold of 0.75, a threshold point warranted by the large volume of entities among the individualistic ontologies, which are hybrid, semi-automatically generated with manually modelled entities and entities discovered through OntoCollab and Stardog. Although APMI is a PMI-driven, robust, variational semantic similarity measure, the Bose-Einstein statistic distribution ensures that the best-in-class matched ontological snippets from the individualistic ontologies of conspiracy, aggression, crime, curse words, and the pool of thefts are used. Moreover, several literary
contexts, such as Shakespeare and other literary texts containing conspiracies and threats, are also discovered, manually modelled, and attached to this model. Literary conspiracies are included, and documented crime keywords are incorporated from criminology journals and similar sources.
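A slice of the Phase 2 ontology pools, were it expressed in RDF, could be assembled along the following lines; the namespace and class names are illustrative assumptions, since the actual ontologies are produced through OntoCollab and Stardog.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

ASB = Namespace("http://example.org/asb#")   # illustrative namespace
g = Graph()
g.bind("asb", ASB)

# A tiny slice of the aggression/crime/threat ontology pools.
for cls in ("Aggression", "Crime", "CurseWord", "Threat", "Conspiracy"):
    g.add((ASB[cls], RDF.type, RDFS.Class))
g.add((ASB["VerbalThreat"], RDFS.subClassOf, ASB["Threat"]))
g.add((ASB["VerbalThreat"], RDFS.label, Literal("verbal threat")))

print(g.serialize(format="turtle"))
```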

Fig. 2. Phase 3 of the proposed ASocTweetPred

At the end of Phase 1, a formulated Knowledge Graph with abusive words is yielded; at the end of Phase 2, matched ontology snippets are yielded. Both are used in Phase 3 as primary inputs, and feature selection is computed using Shannon's Entropy and Horn's Index, with a step deviance of 0.25 for Shannon's Entropy and 0.45 for Horn's Index. These variational step-deviance measures are needed because the two measures differ in stringency: Shannon's Entropy is much less stringent than Horn's Index, so its step deviance is regulated to 0.25, whereas Horn's Index, being the more stringent and robust measure, has its step deviance relaxed to 0.45. This allows more entities, but still relevant entities, to pass through the training model during feature selection. The selected features are passed only to the machine learning classifiers, namely the Bagging and XGBoost classifiers. The model used for training is thus a three-fold hybridized model comprising the LSTM, a deep learning classifier that requires no manual feature selection but selects features automatically. For Bagging
and XGBoost, the features selected using the combination of Shannon's Entropy and Horn's Index are supplied. The three-fold model therefore comprises the Bagging and XGBoost classifiers, which are feature-controlled machine learning classifiers; they are vital to ensure that there is no deviation from the context of the domain, since their features are controlled. In the LSTM, by contrast, features are not controlled but selected automatically, so some amount of outlier learning could occur. To control this, the three-fold paradigm combines two machine learning classifiers and one deep learning classifier: Bagging, XGBoost, and LSTM. Bagging is an ensemble-based classifier whose individual classifiers here are SVC and Random Forest. SVC and Random Forest are chosen for ensembling under Bagging because SVC is a lightweight bilinear support vector classifier while Random Forest is a heavyweight classifier, giving a hybridization of heavyweight and lightweight machine learning classifiers as the individual members of the Bagging ensemble. The model is trained using the matched ontology snippets and the entities discovered from the Knowledge Graph; several orders of Twitter datasets are passed into the trained model, and the anti-social tweets in the dataset are predicted.

Long Short-Term Memory (LSTM) is a type of RNN that can detect long-term dependencies, specifically in sequence prediction tasks. LSTM features feedback connections, which means it can interpret an entire data stream in addition to discrete data points such as images. Random Forest is an ensemble algorithm that builds many distinct decision trees on different subsets of a given dataset and averages them to improve predictive accuracy; rather than relying on a single tree, it predicts the outcome from the projections that obtain the most votes. Greater precision is obtained, and overfitting is avoided, as the number of trees in the forest grows. Shannon's entropy, depicted by Eq. (1), arises from a data communication system consisting of three components: a data source, a communication channel, and a receiver.

$$H(X) = H(p_1, \ldots, p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i \qquad (1)$$

where $p_i$ represents the likelihood that $X = x_i$, with $x_i$ signifying the $i$th possible value of $X$ among the $n$ possible symbols. The Adaptive Pointwise Mutual Information measure, depicted by Eq. (2), is an improvement of the PMI measure for computing semantic similarity between two terms $a$ and $b$:

$$APMI(a; b) = \frac{pmi(a; b)}{p(a)\,p(b)} + y \qquad (2)$$

Pointwise mutual information can be normalized to $[-1, +1]$, giving $-1$ for terms never appearing in conjunction, $0$ for independence, and $+1$ for full co-occurrence; this normalized pointwise mutual information is depicted by Eq. (3):

$$npmi(a; b) = \frac{pmi(a; b)}{h(a, b)} \qquad (3)$$

The PMI of two discrete random variables $A$ and $B$ with outcomes $a$ and $b$ estimates the difference between the probability of their coincidence under their joint distribution and under their respective marginal distributions, as given by Eq. (4), where $h(a, b)$ is the joint self-information:

$$pmi(a; b) = \log_2 \frac{p(a|b)}{p(a)} \qquad (4)$$
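Equations (1)-(4) translate directly into code. The sketch below implements them together with the thresholds used in the pipeline (0.75 for the APMI-based similarity cut-off, step deviances of 0.25 and 0.45 for feature selection); the co-occurrence probabilities are toy values, the constant y in Eq. (2) is an assumption set to 0, and, since raw APMI is unbounded, the demo applies the cut-off to the normalized variant, which is a simplifying assumption.

```python
import math

def pmi(p_ab, p_a, p_b):
    """Eq. (4): pmi(a;b) = log2(p(a|b)/p(a)) = log2(p(a,b)/(p(a)p(b)))."""
    return math.log2(p_ab / (p_a * p_b))

def npmi(p_ab, p_a, p_b):
    """Eq. (3): PMI normalized by the joint self-information h(a,b)."""
    return pmi(p_ab, p_a, p_b) / -math.log2(p_ab)

def apmi(p_ab, p_a, p_b, y=0.0):
    """Eq. (2): pmi(a;b)/(p(a)p(b)) + y, with y an assumed constant."""
    return pmi(p_ab, p_a, p_b) / (p_a * p_b) + y

def shannon_entropy(probs):
    """Eq. (1): H = -sum_i p_i log2 p_i."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

SIM_THRESHOLD = 0.75       # APMI-based similarity cut-off from Sect. 3
STEP_DEV_ENTROPY = 0.25    # step deviance for Shannon's Entropy
STEP_DEV_HORN = 0.45       # step deviance for Horn's Index

p_a, p_b, p_ab = 0.05, 0.04, 0.03       # toy co-occurrence statistics
score = npmi(p_ab, p_a, p_b)            # normalized to [-1, +1]
print(f"npmi={score:.3f}, matched={score >= SIM_THRESHOLD}")
print(f"H(fair coin)={shannon_entropy([0.5, 0.5]):.1f} bit")
```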

4 Performance Evaluation and Results

The dataset for anti-social behavior detection is an integrated dataset combining the Antisocial Behaviour Public Twitter Indonesia dataset [22] by Fitri Andri Astuti (2021) and the Twitter Sentiment Analysis hatred speech dataset [23]. Integration involved conversion to English wherever necessary, annotating records with at least five words, and ordering the records so that those with similar annotations are prioritized first while those with entirely different annotations are placed at the bottom. Beyond combining the two primary datasets, their keywords were used as references, together with all the keywords from the incorporated ontologies of conspiracy, aggression, crime, curse words, and the pool of threats, and customized crawlers were used to target public content under YouTube comments, Twitter, Instagram, Facebook comments, and other similar social media sites. Nearly 784,818 tweets were crawled, category words were generated by annotation, and the results were integrated with the two primary datasets; once again, entities with similar categories were grouped and prioritized at the top. This produced a single large integrated dataset for anti-social behavior prediction. The implementation was carried out using Python's NLTK with Python 3.10.5, using Google Colaboratory as the integrated development environment; Matplotlib was used for plotting, and WordNet 3.0 for lemmatization and the lexico-syntactico-grammatical tasks. Ontologies were generated using OntoCollab and Stardog and verified semi-automatically. The performance of the proposed ASocTweetPred, for mining and prediction of anti-social and abusive tweets for anti-social behavior detection using selective preferential learning, is evaluated using Precision, Recall, Accuracy, and F-Measure percentages and the False Negative Rate (FNR) as metrics. As shown in Table 1, the proposed ASocTweetPred yields the highest Precision, Recall, Accuracy, and F-Measure percentages of 97.93, 98.09, 98.01, and 98.00, with the lowest FNR of 0.02. For the evaluation, it is baselined against four distinct models, namely the TTRO, ABADL, DABDT, and DLABSM frameworks. From Table 1 it is indicative
that the TTRO model yields 84.03% average Precision, 86.09% average Recall, 85.06% average Accuracy, and 85.05% average F-Measure with an FNR of 0.14. The ABADL model furnishes an overall average Precision of 87.09%, average Recall of 89.46%, average Accuracy of 88.27%, and average F-Measure of 88.26% with an FNR of 0.11. The DABDT framework yields 90.12% average Precision, 92.09% average Recall, 91.10% average Accuracy, and 91.09% average F-Measure with an FNR of 0.08. Subsequently, the DLABSM framework yields 91.19% average Precision, 93.04% average Recall, 92.11% average Accuracy, and 92.11% average F-Measure with an FNR of 0.07. Precision, Recall, Accuracy, and F-Measure percentages are used as the selection metrics because they quantify the relevance of the results; the FNR is chosen as a metric because it quantifies the proposed model's error rate.

Table 1. Comparison of performance of the proposed ASocTweetPred with other approaches

| Model                   | Average Precision % | Average Recall % | Accuracy % | F-Measure % | FNR  |
|-------------------------|---------------------|------------------|------------|-------------|------|
| TTRO [24] + LSTM        | 84.03               | 86.09            | 85.06      | 85.05       | 0.14 |
| ABADL [25]              | 87.09               | 89.46            | 88.27      | 88.26       | 0.11 |
| DABDT [26]              | 90.12               | 92.09            | 91.10      | 91.09       | 0.08 |
| DLABSM [27]             | 91.19               | 93.04            | 92.11      | 92.11       | 0.07 |
| Proposed ASocTweetPred  | 97.93               | 98.09            | 98.01      | 98.00       | 0.02 |

The proposed ASocTweetPred furnishes the highest Precision, Recall, Accuracy, and F-Measure percentages and the lowest FNR mainly because the framework is knowledge-centric and semantically inclined: tweets crawled from the Twitter API are referenced against an existing abusive-words thesaurus to identify abusive words, from which a Knowledge Graph is formulated. Subsequently, the dataset is classified using the LSTM, a deep learning classifier, chosen because of the complexity of the dataset and the need to discover classes automatically without any manual feature selection. Several ontologies comprising conspiracy, aggression, crime, curse words, and the ontology pool of threats are collected using multi-agent crawlers that look up and crawl several sources; conspiracy content in particular was taken from literary texts, and the history of conspiracy was also crawled from several websites on the subject.

The TTRO + LSTM model does not perform as well as the proposed model mainly because the TTRO + LSTM framework computes a hate map and only collects data involving racism. For our experimentation, TTRO was not used as-is but was combined with a single classifier, namely
the deep learning classifier. The ABADL model, an anti-social behavior analysis framework based on deep learning, did not perform as well as the proposed framework, despite using robust deep learning classifiers such as a CNN with GloVe embeddings, mainly because no seasoned auxiliary knowledge was incorporated: it depended only on the individual datasets of anti-social tweets, so both learning and testing drew on the dataset alone. Had knowledge augmentation been realized, far more knowledge could have been supplied and the learning mechanisms made much more potent, because a CNN with GloVe can learn outliers when feature selection is not seasoned; relying on automatic feature selection, the CNN model can end up learning outliers. Secondly, the DABDT model, which detects aggressive behavior and discussed threats using a text mining model, did not perform as expected. The DLABSM model is a deep learning model for anti-social behavior analysis of social media that uses data collected from Swarm and Twitter. Although knowledge was augmented from Twitter and Swarm via their respective APIs and four different deep learning models were used, even then the accuracy did not emerge well on our anti-social tweet dataset across the tested cases (Fig. 3).

Fig. 3. Precision percentage vs no. of arbitrary classified instances
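The metrics compared in Table 1 and Fig. 3 all follow from the confusion matrix, with FNR = FN / (FN + TP); a short sketch on made-up labels (the counts here are illustrative, not the paper's data):

```python
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score, accuracy_score)

y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]   # illustrative ground-truth labels
y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]   # illustrative predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fnr = fn / (fn + tp)                      # false negative rate, as in Table 1
print(f"P={precision_score(y_true, y_pred):.2f} "
      f"R={recall_score(y_true, y_pred):.2f} "
      f"Acc={accuracy_score(y_true, y_pred):.2f} "
      f"F1={f1_score(y_true, y_pred):.2f} FNR={fnr:.2f}")
```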

The ASocTweetPred framework outperforms the baseline models, yielding the highest average Precision, Recall, and F-Measure and the lowest FNR, because it is knowledge-centric, semantically inclined, aggregative, and ontology-focused, with a strong knowledge-regulation mechanism, robust semantic similarity, and step-deviation computation. The compared baseline models did not perform as well owing to their lack of auxiliary and augmented knowledge, lack of knowledge-regulation mechanisms, and absence of strong learning.


5 Conclusion

The ASocTweetPred framework has been put forth for the prediction and mining of anti-social tweets to detect anti-social behavior on the Twitter platform. Several ontology perspectives, such as the Ontology of Aggression, the Ontology of Crime, the Curse Word Ontology, and the ontology pool of threats, were integrated under a standard index to yield sufficient auxiliary knowledge. Subsequently, a Knowledge Graph was formulated from the crawled tweets with the identified abusive words. SS is computed using the Bose-Einstein index with the APMI under differential thresholds and step-deviation measures, and the dataset is classified using the LSTM classifier. The matched ontology snippets and the Knowledge Graph yielded from Phases 1 and 2 were used for preferentially selecting the features to train the model with the hybridized triadic classifiers, namely the LSTM, Bagging with Random Forest and SVC, and XGBoost, while Shannon's Entropy with Horn's Index under differential thresholds was used for feature selection. The proposed ASocTweetPred framework achieves an overall Precision of 97.93%, Recall of 98.09%, Accuracy of 98.01%, and F-Measure of 98.00%, with the lowest FNR of 0.02.

References

1. Kwok, S.W.H., Vadde, S.K., Wang, G.: Tweet topics and sentiments relating to COVID-19 vaccination among Australian Twitter users: machine learning analysis. J. Med. Internet Res. 23(5), e26953 (2021)
2. Abu-Salih, B., Wongthongtham, P., Chan, K.Y., Zhu, D.: CredSaT: credibility ranking of users in big social data incorporating semantic analysis and temporal factor. J. Inf. Sci. 45(2), 259–280 (2019)
3. Khan, S., et al.: BiCHAT: BiLSTM with deep CNN and hierarchical attention for hate speech detection. J. King Saud Univ.-Comput. Inform. Sci. 34(7), 4335–4344 (2022)
4. Meng, Q., Suresh, T., Lee, R.K.W., Chakraborty, T.: Predicting hate intensity of Twitter conversation threads. arXiv preprint arXiv:2206.08406 (2022)
5. Sharma, K., Kaur, A.: Personality prediction of Twitter users with logistic regression classifier learned using stochastic gradient descent. IOSR J. Comput. Eng. (IOSR-JCE) 17(4), 39–47 (2015)
6. Shannaq, F., Hammo, B., Faris, H., Castillo-Valdivieso, P.A.: Offensive language detection in Arabic social networks using evolutionary-based classifiers learned from fine-tuned embeddings. IEEE Access 10, 75018–75039 (2022)
7. Ayo, F.E., Folorunso, O., Ibharalu, F.T., Osinuga, I.A.: Machine learning techniques for hate speech classification of Twitter data: state-of-the-art, future challenges and research directions. Comput. Sci. Rev. 38, 100311 (2020)
8. Wu, X.K., Zhao, T.F., Lu, L., Chen, W.N.: Predicting the hate: a GSTM model based on COVID-19 hate speech datasets. Inf. Process. Manage. 59(4), 102998 (2022)
9. Yin, F., Pang, H., Xia, X., Shao, X., Wu, J.: COVID-19 information contact and participation analysis and dynamic prediction in the Chinese Sina-microblog. Physica A 570, 125788 (2021)
10. Alshalan, R., Al-Khalifa, H., Alsaeed, D., Al-Baity, H., Alshalan, S.: Detection of hate speech in COVID-19-related tweets in the Arab region: deep learning and topic modeling approach. J. Med. Internet Res. 22(12), e22609 (2020). https://doi.org/10.2196/22609
11. Ali, M.Z., Rauf, S., Javed, K., Hussain, S.: Improving hate speech detection of Urdu tweets using sentiment analysis. IEEE Access 9, 84296–84305 (2021)
12. Taradhita, D.A.N., Putra, I.K.G.D.: Hate speech classification in Indonesian language tweets by using convolutional neural network. J. ICT Res. Appl. 14(3), 225–239 (2021). https://doi.org/10.5614/itbj.ict.res.appl.2021.14.3.2
13. Deepak, G., Priyadarshini, J.S.: Personalized and enhanced hybridized semantic algorithm for web image retrieval incorporating ontology classification, strategic query expansion, and content-based analysis. Comput. Electr. Eng. 72, 14–25 (2018)
14. Deepak, G., Ahmed, A., Skanda, B.: An intelligent inventive system for personalised webpage recommendation based on ontology semantics. Int. J. Intell. Syst. Technol. Appl. 18(1–2), 115–132 (2019)
15. Deepak, G., Kumar, N., Santhanavijayan, A.: A semantic approach for entity linking by diverse knowledge integration incorporating role-based chunking. Procedia Comput. Sci. 167, 737–746 (2020)
16. Ojha, R., Deepak, G.: Metadata driven semantically aware medical query expansion. In: Iberoamerican Knowledge Graphs and Semantic Web Conference, pp. 223–233. Springer, Cham (2021)
17. Rithish, H., Deepak, G., Santhanavijayan, A.: Automated assessment of question quality on online community forums. In: Motahhir, S., Bossoufi, B. (eds.) ICDTA 2021. LNNS, vol. 211, pp. 791–800. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73882-2_72
18. Yethindra, D.N., Deepak, G.: A semantic approach for fashion recommendation using logistic regression and ontologies. In: 2021 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), pp. 1–6 (2021)
19. Arulmozhivarman, M., Deepak, G.: OWLW: ontology focused user centric architecture for web service recommendation based on LSTM and whale optimization. In: Al-Sartawi, A.M.A.M., Razzaque, A., Kamal, M.M. (eds.) EAMMIS 2021. LNNS, vol. 239, pp. 334–344. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77246-8_32
20. Adithya, V., Deepak, G.: OntoReq: an ontology focused collective knowledge approach for requirement traceability modelling. In: European, Asian, Middle Eastern, North African Conference on Management & Information Systems, pp. 358–370. Springer, Cham (2021)
21. Vishal, K., Deepak, G., Santhanavijayan, A.: An approach for retrieval of text documents by hybridizing structural topic modeling and pointwise mutual information. In: Mekhilef, S., Favorskaya, M., Pandey, R.K., Shaw, R.N. (eds.) Innovations in Electrical and Electronic Engineering. LNEE, vol. 756, pp. 969–977. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-0749-3_74
22. Kaggle datasets download -d fitriandri/antisocial-behaviour-public-twitter-indonesia
23. Kaggle datasets download -d arkhoshghalb/twitter-sentiment-analysis-hatred-speech
24. Chaudhry, I.: #Hashtagging hate: using Twitter to track racism online (2015)
25. Singh, R., Zhang, Y., Wang, H., Miao, Y., Ahmed, K.: Antisocial behaviour analyses using deep learning. In: International Conference on Health Information Science, pp. 133–145. Springer, Cham (2020)
26. Ventirozos, F.K., Varlamis, I., Tsatsaronis, G.: Detecting aggressive behavior in discussion threads using text mining. In: Gelbukh, A. (ed.) CICLing 2017. LNCS, vol. 10762, pp. 420–431. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77116-8_31
27. Singh, R., Zhang, Y., Wang, H., Miao, Y., Ahmed, K.: Deep learning for antisocial behaviour analysis on social media. In: 2020 24th International Conference Information Visualisation (IV), pp. 428–434. IEEE (2020)

WCMIVR: A Web 3.0 Compliant Machine Intelligence Driven Scheme for Video Recommendation

Beulah Divya Kannan1 and Gerard Deepak2(B)

1 Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli, India
2 Department of Computer Science and Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India
[email protected]

Abstract. Video recommendation is the need of the hour in the era of Web 3.0, and there is a need for a semantically inclined, Web 3.0 compliant framework for it. In this paper, WCMIVR, a Web 3.0 compliant web video recommendation framework, is proposed; although it is a query-driven approach, it is centred on staged query enrichment through the addition of auxiliary knowledge. The proposed WCMIVR model is a knowledge-centric, semantically inclined framework in which TF-IDF is applied to documents crawled from the web repository in order to aggregate the enriched categories of videos, with informative words contributed by the TF-IDF model. Wikidata and the Google Knowledge Graph API, as standard knowledge stores, are further used to enrich the video categories. Ontology alignment is applied for feature discovery in order to classify the video dataset using a feature-controlled, machine-learning-based logistic regression classifier. Semantic similarity is calculated using Horn's index, Twitter semantic similarity, and Jaccard similarity, which provides a strong relevance-computation mechanism for ranking and recommending the videos to the user. Overall, the proposed WCMIVR model achieves the highest average precision of 97.88%, the highest average recall of 99.04%, the highest average accuracy of 98.46%, and the highest average F-Measure of 98.45%, with the lowest overall FDR of 0.03, making the WCMIVR model the best classifier for Web 3.0 compliant web video recommendation. Keywords: Video recommendation · Meta data generation · Image classification · Semantic similarity computation · WCMIVR framework

1 Introduction

Over the last two decades, the World Wide Web has evolved and its contents have grown drastically; the entire web has been transformed into a much bigger and more sophisticated society in itself. Web 3.0, the next evolution of the internet, uses blockchain technologies and the tools of decentralization, which overcome issues of
security, and it focuses more on data privatization; reasoning and data inferencing take place. Videos have become a major source of entertainment for consumers, and YouTube has also become a primary source of income in the field of advertisement. Apart from YouTube and vlogging, many online educational platforms deliver their content through videos and multimedia. Not just multimedia content in general, videos in particular have increased exceedingly over the last few years. Many people now have access to the internet, and the internet itself has become an integral society with vast digital transformation, a much more digital ecosystem built on the World Wide Web. Video recommendation has become vital as a key driver of user engagement on online platforms, and it also helps grow the businesses of various online streaming platforms. Personalization of video recommendation has proved to be a key success factor across marketing platforms, and it is important that present markets adapt to the growing trend of video recommendation.

Motivation: Videos are large-scale in nature, and existing video recommendation techniques are not compliant with Web 3.0 standards. Web 3.0 is a highly dense collaborative environment, and for these reasons a semantically inclined, knowledge-centric paradigm for video recommendation on the World Wide Web is required. The Web 3.0 framework is much denser, so inferencing must be part of the strategy alongside learning, rather than depending on learning alone.

Contribution: The following novel contributions are provisioned by the WCMIVR framework: the application of TF-IDF as a model to yield, from an existing document repository, informative words relevant to the categories and labels of the dataset (a sketch of this step follows at the end of this section); the extraction of entities from the standard knowledge stores Wikidata and the Google Knowledge Graph API to yield enriched informative words; ontology alignment for feature selection and extraction in order to classify the video dataset using a strong feature-controlled machine learning logistic regression classifier; and a hybrid semantic similarity scheme built on Horn's index, Twitter semantic similarity, and Jaccard similarity with differential thresholds and step-deviance measures, creating a strongly semantically driven inferential framework for ranking and recommending from the dataset. Precision, accuracy, recall, and F-Measure are improved, and FDR is reduced.

Organization: The rest of the paper is organized as follows. Section 2 presents the Related Works. Section 3 presents the Proposed System Architecture. Section 4 presents the Implementation and Performance Evaluation. The paper is concluded in Sect. 5.
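As a sketch of the TF-IDF enrichment step referenced in the contributions above, the snippet below extracts the most informative words per crawled document with scikit-learn; the documents are placeholders standing in for the crawled web repository, not the framework's actual inputs.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [  # placeholder stand-ins for crawled web documents per video category
    "deep learning tutorial video neural networks backpropagation",
    "football highlights goals premier league match",
    "cooking recipe pasta tomato sauce kitchen",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)
terms = np.array(vec.get_feature_names_out())

for i in range(len(docs)):
    row = tfidf[i].toarray().ravel()
    top = terms[row.argsort()[::-1][:3]]   # top informative words per document
    print(f"doc {i}: {', '.join(top)}")
```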

2 Related Works

Zhe et al. [1] put forward a framework for a video recommendation system: when the user watches a video, the framework recommends another video that the user is predicted to enjoy, focusing on the ranking algorithm and solving bias-based problems. Abhishek et al. [2] put forward a video recommendation system in which the framework identifies the user's emotion over the entire video
length in real time and recommends videos based on the retrieved features; owing to this real-time emotion detection, the framework recommends videos more accurately than other frameworks. Sijing et al. [3] proposed a video recommendation framework using the JointRec cloud framework, in which each cloud takes in user information and the user's ratings on video descriptions and recommends videos accordingly. Hongri et al. [4] proposed a framework in which clothes are detected in videos fed to a neural network, for the purpose of online shopping and fashion; this system uses the DPRnet framework. Yuxuan et al. [5] propose a framework that retrieves and identifies humans from surveillance videos: existing methodologies lack precision and cannot identify people accurately in surveillance footage due to poor video quality, so attribute mining and a reasoning framework are used to solve these problems. Qiusha et al. [6] propose a framework that uses a topic model to visualize an input video with little available metadata and generates recommendations that best fit the user's interest. Summra et al. [7] proposed a framework based on 2D convolutional neural networks, trained on six news channels, that categorizes broadcasts and extracts the main headlines for users. Markus et al. [8] proposed a framework that automatically analyzes and retrieves video content to preserve the cultural heritage of television and radio broadcasts of the German Democratic Republic, using person and text recognition, similarity research, boundary detection, and classification techniques. Kaklauskas et al. [9] proposed a personalized video recommendation framework for real-estate properties that takes house attributes together with physiological and emotional parameters of the buyer as input and curates personalized videos for users using neuro decision matrix methodologies. In [11–17], several models in support of the proposed literature are depicted. Jun et al. [18] propose a recommendation system that trains at the lowest annotation cost. Wei et al. [19] propose a framework that targets data and adds creativity for sparse datasets, while also focusing on data accuracy. Yinwei et al. [20] propose a framework that uses graph neural networks to retrieve model-specific user information and capture user interests more effectively. Di et al. [21] propose a collaborative filtering framework that exploits accuracy based on multi-path relationships and adds the recommendation results along with the user.

3 Proposed System Architecture

The architecture of the proposed knowledge-centric, semantically inclined, annotations-driven video recommendation model is depicted in Fig. 1. It is a query-driven model, but although query driven, it is also dataset-centric. The user's queries are subjected to pre-processing comprising tokenization, lemmatization, stop-word removal, and named entity recognition. Tokenization yields the individual tokens; lemmatization derives the base form of the terms; stop-word removal ensures that the query words are devoid of low-content words like 'and', 'or', 'if', and 'the'; and Named Entity Recognition (NER) ensures that the entities are
well embarked and the senses of the entities are recognized. After pre-processing, the individual query words are obtained and subjected to further enrichment by passing them to the Wikidata API and the Google Knowledge Graph API, yielding the enriched query words Qw. Wikidata and the Google Knowledge Graph are two different knowledge stores: Wikidata collates a hierarchical, crowd-contributed, community-controlled, community-verified, moderated knowledge store, while the Google Knowledge Graph is crowd-sourced, with entities drawn from several heterogeneous sources and modelled into a large knowledge graph. Both store auxiliary knowledge, already verified by human cognition, that can be augmented; it is used to enrich the query words and yield the enriched query words. The dataset of videos is annotated and categorized with two different annotation sets and is subjected to term frequency and inverse document frequency over a video repository. The crawled document repository is built by crawling the web, via customized crawlers, using the categories of the videos, in order to enrich the auxiliary knowledge. Term frequency-inverse document frequency (TF-IDF) is then applied over the crawled document repository: words that occur frequently within a document yet rarely across the document corpus are yielded as informative words, which are again sent to the Google Knowledge Graph API to yield the enriched informative terms Iw.
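As a concrete illustration of this pre-processing stage, the sketch below uses spaCy for tokenization, stop-word removal, lemmatization, and NER; the library choice, model name, and sample query are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the query pre-processing stage; spaCy and the
# "en_core_web_sm" model are assumptions, not specified by the paper.
import spacy

nlp = spacy.load("en_core_web_sm")

def preprocess_query(query: str):
    doc = nlp(query)
    # Keep lemmas of non-stop-word, alphabetic tokens as query words
    query_words = [t.lemma_.lower() for t in doc if not t.is_stop and t.is_alpha]
    # Named entities, later enriched via Wikidata / Google Knowledge Graph
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    return query_words, entities

words, ents = preprocess_query("Recommend lecture videos about blockchain in India")
print(words)  # e.g. ['recommend', 'lecture', 'video', 'blockchain', 'india']
print(ents)   # e.g. [('India', 'GPE')]
```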

Fig. 1. Proposed system architecture

The enriched informative words require further augmentation with knowledge features, for which a domain ontology generated with Stardog and OntoCollab as tools is used. This domain ontology is not exhaustive: it is restricted to a maximum of ten levels, with a minimum of three. It is a detailed upper ontology from which all individuals are eliminated, because a large number of instances are already aligned from the Google Knowledge Graph API; to keep the complexity low, between three and ten levels are maintained. This ontology is aligned with the enriched informative words (Iw) using concept similarity with a threshold of 0.5 to yield the features, which are passed directly to a logistic regression classifier to classify the categorized dataset of videos. The dataset is classified using the logistic regression classifier, whose features are fed from the ontologies aligned with the enriched query words. Logistic regression is chosen because it is a very strong machine learning classifier that does not depend on deep learning. Machine learning classifiers like logistic regression are feature controlled, which ensures that only the relevant features are learned and minimizes the learning of deviational outliers, so there are few deviations in the classifier. Once the dataset is classified, the classified instances alone are used to compute semantic similarity with the enriched query words (Qw). Semantic similarity is computed using two measures, Twitter semantic similarity and Jaccard similarity, together with one index, Horn's index: Jaccard similarity uses a threshold of 0.75, Twitter semantic similarity a threshold of 0.55, and Horn's index a step deviance of 0.35. The reason for the differential thresholds and differential measures is that when three distinct measures are used with differential thresholds, a high degree of variation in the heterogeneity of the recommended items is ensured, increasing diversity while relevance is not compromised. Owing to the large number of instances and the large amount of auxiliary knowledge, three distinct measures are needed; the computational complexity is thereby optimized, and the similar instances are ranked and recommended in ascending order of Twitter semantic similarity alone and given to the user as facets. For these facets, all videos from the dataset with matching categories for which the Jaccard similarity is greater than 0.50 are recommended along with the facets (faceted video recommendation). If there is any discrepancy, i.e., if the user is not satisfied, the current user clicks are passed back as pre-processed query words, and this process continues until no more user clicks are recorded.
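The ranking and faceted recommendation step can be sketched roughly as follows; tss, jacc, and horn stand in for the three measures, and the reading of the 0.35 step deviance as a coarse quantized tie-breaker is our assumption, since the text does not spell out how it enters the ranking.

```python
# Rough sketch of the facet ranking described above; only the thresholds
# (TSS >= 0.55, Jaccard >= 0.75, category Jaccard > 0.50) come from the text.
def recommend(qw, instances, videos, tss, jacc, horn):
    facets = []
    for inst in instances:                      # classified instances only
        t, j = tss(qw, inst["words"]), jacc(set(qw), set(inst["words"]))
        if t >= 0.55 and j >= 0.75:             # differential thresholds
            step = int(horn(qw, inst["words"]) / 0.35)  # assumed reading
            facets.append((t, step, inst))
    facets.sort(key=lambda f: (f[0], f[1]))     # ascending TSS, as stated
    out = []
    for _, _, inst in facets:                   # faceted video recommendation
        out.append((inst, [v for v in videos
                           if jacc(set(v["categories"]),
                                   set(inst["categories"])) > 0.50]))
    return out
```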
Logistic regression is used in conventional statistics and machine learning. It is similar to linear regression, except that it predicts a categorical outcome, such as true or false, instead of a continuous quantity like size. Logistic regression is a machine learning model used for binary classification, a task in which the output takes one of only two values, such as 0 or 1, or true or false.
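A minimal, runnable sketch of such binary classification with logistic regression is shown below; the toy data and the use of scikit-learn are illustrative assumptions, not the authors' code.

```python
# Minimal binary classification with logistic regression; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature matrix (e.g., ontology-aligned features) and 0/1 labels
X = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
# predict_proba applies the sigmoid to the linear score w.x + b
print(clf.predict_proba([[0.15, 0.85]]))  # probabilities for classes 0 and 1
print(clf.predict([[0.15, 0.85]]))        # hard 0/1 decision
```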

Logistic regression models are used in many real-world applications: for example, to determine whether a given email is spam, whether a bank transaction is fraudulent, or, in the medical field, whether a tumour is malignant. They are also used in various image classification models that identify and classify the components present in an image. The logistic regression model uses the sigmoid function and is applied to classification problems, whereas the linear regression model solves regression problems; the response variable in logistic regression is categorical, whereas in linear regression it is continuous.

Horn's index is a measure of how similar two different items are. The index ranges from 0 to 1: it is 1 when there is complete similarity between the items and 0 when there is none. Horn's modification of the index is given by Eq. (1):

C_H = \frac{2 \sum_{i=1}^{S} x_i y_i}{\left( \frac{\sum_{i=1}^{S} x_i^2}{X^2} + \frac{\sum_{i=1}^{S} y_i^2}{Y^2} \right) X Y}    (1)

where x_i is the number of times species i is represented in the total X from one sample, y_i is the number of times species i is represented in the total Y from another sample, and S is the number of unique species.

The Twitter Semantic Similarity algorithm [10] estimates the similarity between words with high precision. Twitter is an online platform carrying around fifty million tweets per day, and it allows automated crawling of data at approximately 180 queries every fifteen minutes. The frequency of two words on Twitter can be estimated using their velocity of occurrence, as given by Eq. (2):

TSS(w_1, w_2) = \left( \frac{\varphi(w_1 \wedge w_2)}{\max(\varphi(w_1), \varphi(w_2))} \right)^{\alpha}    (2)

where \alpha is a scaling factor, \varphi(w) is the average time between tweets, and \varphi(w_1 \wedge w_2) is the co-occurrence of the two words w_1 and w_2.

Jaccard similarity is calculated from the overlap between two given sets. The Jaccard similarity index is given by Eq. (3) and Eq. (4):

\text{Jaccard Index} = \frac{\text{the number in both sets}}{\text{the number in either set}} \times 100    (3)

J(X, Y) = \frac{|X \cap Y|}{|X \cup Y|}    (4)
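The three relevance measures of Eqs. (1)-(4) can be implemented directly; in the sketch below, the inter-tweet times fed to tss are assumed to be pre-computed from crawled tweets, which is not shown.

```python
# Runnable sketches of the relevance measures of Eqs. (1)-(4).
def horn_index(x, y):
    # x, y: lists of counts per species/term; X, Y: totals per sample (Eq. 1)
    X, Y = sum(x), sum(y)
    num = 2 * sum(xi * yi for xi, yi in zip(x, y))
    den = (sum(xi**2 for xi in x) / X**2 + sum(yi**2 for yi in y) / Y**2) * X * Y
    return num / den

def jaccard(a, b):
    # Eq. (4): |A intersect B| / |A union B| for two sets
    return len(a & b) / len(a | b)

def tss(phi_w1, phi_w2, phi_both, alpha=1.0):
    # Eq. (2), under the reconstruction above; phi values are assumed given
    return (phi_both / max(phi_w1, phi_w2)) ** alpha

print(horn_index([3, 1, 0], [2, 2, 1]))               # about 0.81
print(jaccard({"web", "video"}, {"video", "graph"}))  # 1/3
```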

The TF-IDF model, whose name stands for term frequency and inverse document frequency, is used in information retrieval and text mining. It measures how significant a word is by weighing the number of times the word appears in a given file against the number of files in which the word appears, as depicted in Eq. (5) and Eq. (6):

\mathrm{TFIDF}(t, d) = TF(t, d) \times IDF(t)    (5)

where TF (term frequency) is the number of times term t appears in a document d, and

IDF(t) = \log\left( \frac{1 + n}{1 + df(d, t)} \right) + 1    (6)

where IDF (inverse document frequency) discounts the term t by its document frequency df(d, t), with n the number of documents.
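Under Eqs. (5) and (6), the weighting can be computed by hand as in the sketch below; note that the smoothed IDF of Eq. (6) matches scikit-learn's default, so a library vectorizer could equally be used. The toy corpus is illustrative.

```python
# TF-IDF of Eqs. (5)-(6) computed directly over a tiny illustrative corpus.
import math

corpus = [
    "web video recommendation web",
    "video streaming platform",
    "knowledge graph video",
]
docs = [d.split() for d in corpus]
n = len(docs)

def tfidf(term, doc):
    tf = doc.count(term)                      # raw term frequency in the doc
    df = sum(term in d for d in docs)         # documents containing the term
    idf = math.log((1 + n) / (1 + df)) + 1    # Eq. (6), smoothed IDF
    return tf * idf                           # Eq. (5)

print(tfidf("web", docs[0]))    # frequent in doc, rare in corpus -> high
print(tfidf("video", docs[0]))  # appears in every doc -> low weight
```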

4 Implementation and Performance Evaluation

The experimentation for the proposed WCMIVR framework for machine-intelligence-driven video recommendation was conducted on an integrative dataset formed from three distinct standard datasets: the 2021 DIGIX Video Recommendation dataset by voler [22], the content-based video recommendation assistant dataset by Sidhartha Reddy [23], and the dataset from state-of-the-art machine learning techniques for social, biomedical and video data analytics by Sanjay Purushotham [24]. A single integrated dataset was formulated from the three individual datasets by a method of customized annotation, in which annotations are crawled using a specific agent. If a dataset was categorical, adjacent categories were further annotated; if not, its labels were extracted and used for further annotation. A large number of annotations were generated for every individual video in each dataset. Based on these categories and labels, instances were prioritized: highly similar categories were ranked together, and dissimilar categories were placed at the lower end of the dataset. This single integrated dataset, comprising three individual datasets, was used for experimentation. The performance of the proposed WCMIVR framework, a Web 3.0 compliant machine-intelligence-driven scheme for video recommendation, is evaluated using F-measure, accuracy, recall, precision, and False Discovery Rate (FDR) as potential metrics, in order to compute the relevance of the results; the FDR quantifies the false positives yielded by the framework. Accuracy, recall, precision, and F-measure percentages and the FDR were used in their standard form to compute the performance of the WCMIVR framework. Table 1 indicates that the proposed WCMIVR framework yielded the highest precision percentage of 97.88, the highest recall percentage of 99.04, the highest accuracy percentage of 98.46, and the highest F-measure of 98.45, with an FDR of 0.03. The WCMIVR model is baselined against the MALVR, DBTVR, MMGCN, and VRKGCF models to compare performance. The baseline models were evaluated in exactly the same domain as the proposed WCMIVR model, for exactly the same number of queries and the same dataset, and the performances were compared. The experiments were conducted on 4463 queries whose ground truth was collected over a period of 144 days by submitting the queries to 2416 users of several age groups ranging from 16 to 49: the users were asked to search their favourite search engine for videos with the same title, and the top ten recommendation keywords were considered. The intersection of all the keywords was taken, the most popular keyword was prioritized, and this was used as the ground-truth set. From Table 1 it is clearly inferable that the MALVR model has a precision percentage of 88.47, a recall percentage of 90.39,
an accuracy percentage of 89.43, and an F-measure percentage of 89.41. The DBTVR model furnishes an average precision of 89.36%, average recall of 91.43%, average accuracy of 90.39%, average F-measure of 90.38, and an FDR of 0.11. The MMGCN model yields 90.11% average precision, 91.13% average recall, 90.62% average accuracy, and 90.61% average F-measure, with an FDR of 0.10. The VRKGCF model yields 94.43% average precision, 95.63% average recall, 95.03% average accuracy, and 95.02% average F-measure, with an FDR of 0.06. The WCMIVR framework yields the highest average precision of 97.88% and the lowest FDR of 0.03, with an average recall of 99.04% and average accuracy of 98.46%. Thus, the WCMIVR framework outperforms all its baseline models. The reason is that it is tailored specifically for the more cohesive, crowded, and organized form of the web that is Web 3.0: it is a query-driven framework that functions on query enrichment, wherein the dataset of videos is subjected to TF-IDF over the documents in the crawled data repository to yield the informative words.

Table 1. Comparison of performance of the proposed WCMIVR with other approaches

Model           | Avg. precision % | Avg. recall % | Avg. accuracy % | Avg. F-measure % | FDR
----------------|------------------|---------------|-----------------|------------------|-----
MALVR [18]      | 88.47            | 90.39         | 89.43           | 89.41            | 0.12
DBTVR [19]      | 89.36            | 91.43         | 90.39           | 90.38            | 0.11
MMGCN [20]      | 90.11            | 91.13         | 90.62           | 90.61            | 0.10
VRKGCF [21]     | 94.43            | 95.63         | 95.03           | 95.02            | 0.06
Proposed WCMIVR | 97.88            | 99.04         | 98.46           | 98.45            | 0.03
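For reference, the metrics of Table 1 in their standard form can be computed as in the following sketch; the confusion-matrix counts are placeholders, not values from the experiments.

```python
# Standard definitions of the evaluation metrics used in Table 1.
def metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    fdr = fp / (fp + tp)   # false discovery rate
    return precision, recall, accuracy, f_measure, fdr

print(metrics(tp=97, fp=3, tn=95, fn=1))  # illustrative counts only
```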

The TF-IDF model yields the largest number of informative words by counting the words that are frequent within documents and rare across the document corpus. These words are subjected to enrichment by harvesting entities from Wikidata and the Google Knowledge Graph API, standard knowledge repositories that mainly facilitate entity enrichment. The enriched entities are further subjected to ontology alignment to yield features, which are used to classify the dataset of videos passed as input to the logistic regression classifier, a strong, feature-controlled machine learning classifier. Semantic similarity is computed using Horn's index, Twitter semantic similarity, and Jaccard similarity with differential thresholds and heterogeneous technical measures. A strong machine learning classifier is used instead of a deep learning classifier because logistic regression is feature controlled and facilitates feature-bound learning, whereas deviance-based and outlier-based learning are not facilitated. Owing to all these reasons, the proposed WCMIVR framework outperforms all the baseline models; it is semantically inclined,
entity driven, with entities harvested from standard knowledge sources; ontology alignment is followed for feature selection with the enriched query words; semantic similarity computation takes place through Horn's index, Twitter-dependent similarity, and Jaccard similarity with differential thresholds; and classification is achieved using the logistic regression classifier.

Fig. 2. Precision % vs number of recommendations distribution curve

Owing to the proposed framework being knowledge centric and semantically inclined, the strong semantic similarity and step deviance measures for relevance computation, together with entity enrichment, ensure that the proposed framework outperforms the baseline models and yields the highest precision, recall, accuracy, and F-measure. The MALVR model does not perform as expected because, although it is a multi-view active learning framework for video recommendation, the recommendation is made directly from video-user pairs in which videos are represented as text using metadata. The metadata does not yield knowledge; it remains mere data, because there is no strong knowledge regulation or relevance computation scheme in the model, and when metadata is not properly aggregated it becomes equivalent to noisy associated text, which is what happens in this framework. For these reasons, although it is a multi-view active learning model in which active learning supplies the features, MALVR does not perform as well as the proposed model; the knowledge derivation mechanisms are simply absent. The DBTVR model does not perform as well as the proposed model because, although deep canonical PARAFAC factorization is performed in this deep Bayesian tensor-based video recommendation system, a deep learning paradigm with a strategic Bayesian model, this model does not perform as
expected when compared with the proposed framework because there is absolutely no auxiliary knowledge appended to the model and no strong relevance computation mechanism. A strong classification model alone leads to over-classification, which results in underfitting of knowledge; as a result, this model does not perform as well as the proposed one. The MMGCN framework, a multimodal graph convolutional network for personalized recommendation of micro videos, is mainly suited to micro videos that are multimodal. However, the bipartite graph it constructs does not perform well because the convolutional neural network learns only the visual features of the video; textual features enter the model only in a limited way, even though visual, acoustic, and textual features are all appended. Shallow appending happens in this model, and there is no deep-level inferencing mechanism, so MMGCN does not perform as expected. The VRKGCF model is a video recommendation model that uses a knowledge graph and collaborative filtering. The knowledge graph provides ample auxiliary knowledge, dense and surplus in the model, but the problem lies with the collaborative filtering approach: it requires every video to be rated and an item-rating matrix to be built. Having every video rated is definably impractical, and because videos are rated by different communities of users from different walks of life, there is no uniformity in the ratings; formulating a rating matrix from rated videos is therefore not a wise choice. So the collaborative filtering model that computes an item-rating matrix is not appropriate, and a great amount of knowledge with no strong relevance computation mechanism and no active learning makes the framework underperform. The encompassment of auxiliary knowledge does, however, ensure that VRKGCF's F-measure, precision, recall, and accuracy are better than those of the other baseline models, though still below the proposed model. Owing to all these reasons, the proposed framework outperforms the baseline models: it has a strong relevance computation mechanism in terms of Horn's index, Twitter semantic similarity, and Jaccard similarity with differential thresholds and step deviance measures, and its entities come from Wikidata and the Google Knowledge Graph API. Ontology alignment is followed for feature selection, and the encompassment of logistic regression, a feature-controlled, active machine learning scheme, makes the precision, recall, accuracy, and F-measure much higher than those of the baseline models. Figure 2 depicts the line graph of precision percentage versus number of recommendations for all the approaches; it is clear that the proposed WCMIVR model occupies the highest position in the curve.

5 Conclusion

Video recommendation in a highly coordinated environment like Web 3.0 is a challenge, and there is a need for a semantically inclined model for recommending videos from the exponentially growing Web 3.0. The proposed WCMIVR is a Web 3.0 compliant web video recommendation framework that integrates the TF-IDF model with
semantic similarity models, namely Horn's index, Twitter semantic similarity, and Jaccard similarity. Informative words are derived using the TF-IDF model and further enriched using entities anchored from Wikidata and the Google Knowledge Graph APIs. Ontology alignment is applied to select features for classifying the video dataset using the logistic regression classifier, a feature-controlled scheme and a strong machine learning classifier, and semantic similarity is computed to increase the strength of relevance computation using Horn's index, Twitter semantic similarity, and Jaccard similarity with differential thresholds and step deviance measures. Overall, an accuracy of 98.46% with the lowest FDR of 0.03 is achieved by this model.

References

1. Zhao, Z., et al.: Recommending what video to watch next: a multitask ranking system. In: Proceedings of the 13th ACM Conference on Recommender Systems, pp. 43–51 (2019)
2. Tripathi, A., Ashwin, T.S., Guddeti, R.M.R.: EmoWare: a context-aware framework for personalized video recommendation using affective video sequences. IEEE Access 7, 51185–51200 (2019)
3. Duan, S., Zhang, D., Wang, Y., Li, L., Zhang, Y.: JointRec: a deep-learning-based joint cloud video recommendation framework for mobile IoT. IEEE Internet Things J. 7(3), 1655–1666 (2020)
4. Zhang, Z., Lin, Z., Zhao, Z., Zhu, J., He, X.: Regularized two-branch proposal networks for weakly-supervised moment retrieval in videos. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 4098–4106 (2020)
5. Shi, Y., Wei, Z., Ling, H., Wang, Z., Shen, J., Li, P.: Person retrieval in surveillance videos via deep attribute mining and reasoning. IEEE Trans. Multimedia 23, 4376–4387 (2021). https://doi.org/10.1109/TMM.2020.3042068
6. Zhu, Q., Shyu, M.-L., Wang, H.: VideoTopic: content-based video recommendation using a topic model. In: IEEE International Symposium on Multimedia 2013, pp. 219–222 (2013). https://doi.org/10.1109/ISM.2013.41
7. Hassan, M.A., Saleem, S., Khan, M.Z., Khan, M.U.G.: Story based video retrieval using deep visual and textual information. In: 2nd International Conference on Communication, Computing and Digital Systems, pp. 166–171 (2019)
8. Mühling, M., et al.: Content-based video retrieval in historical collections of the German Broadcasting Archive. Int. J. Digit. Libr. 20(2), 167–183 (2018). https://doi.org/10.1007/s00799-018-0236-z
9. Kaklauskas, A., et al.: A neuro-advertising property video recommendation system. Technol. Forecast. Soc. Chang. 131, 78–93 (2018). https://doi.org/10.1016/j.techfore.2017.07
10. Carrillo, F., Cecchi, G.A., Sigman, M., Slezak, D.F.: Fast distributed dynamics of semantic networks via social media. Comput. Intell. Neurosci. 2015, 712835 (2015)
11. Surya, D., Deepak, G., Santhanavijayan, A.: KSTAR: a knowledge based approach for socially relevant term aggregation for web page recommendation. In: International Conference on Digital Technologies and Applications, pp. 555–564. Springer, Cham (2021)
12. Deepak, G., Priyadarshini, J.S., Babu, M.H.: A differential semantic algorithm for query relevant web page recommendation. In: 2016 IEEE International Conference on Advances in Computer Applications (ICACA), pp. 44–49. IEEE (2016)
13. Roopak, N., Deepak, G.: OntoKnowNHS: ontology driven knowledge centric novel hybridised semantic scheme for image recommendation using knowledge graph. In: Iberoamerican Knowledge Graphs and Semantic Web Conference, pp. 138–152. Springer, Cham (2021)
14. Ojha, R., Deepak, G.: Metadata driven semantically aware medical query expansion. In: Iberoamerican Knowledge Graphs and Semantic Web Conference, pp. 223–233. Springer, Cham (2021)
15. Rithish, H., Deepak, G., Santhanavijayan, A.: Automated assessment of question quality on online community forums. In: International Conference on Digital Technologies and Applications, pp. 791–800. Springer, Cham (2021)
16. Yethindra, D.N., Deepak, G.: A semantic approach for fashion recommendation using logistic regression and ontologies. In: International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), pp. 1–6. IEEE (2021)
17. Deepak, G., Gulzar, Z., Leema, A.A.: An intelligent system for modeling and evaluation of domain ontologies for Crystallography as a prospective domain with a focus on their retrieval. Comput. Electr. Eng. 96, 107604 (2021)
18. Cai, J.J., Tang, J., Chen, Q.G., Hu, Y., Wang, X., Huang, S.J.: Multi-view active learning for video recommendation. In: IJCAI 2019, pp. 2053–2059 (2019)
19. Lu, W., Chung, F.-L., Jiang, W., Ester, M., Liu, W.: A deep Bayesian tensor-based system for video recommendation. ACM Trans. Inf. Syst. 37(1), Article 7, 22 pp. (2019)
20. Wei, Y., et al.: MMGCN: multi-modal graph convolution network for personalized recommendation of micro-video. In: Proceedings of the 27th ACM International Conference on Multimedia (MM'19), pp. 1437–1445. Association for Computing Machinery, New York, NY, USA (2019)
21. Yu, D., Chen, R., Chen, J.: Video recommendation algorithm based on knowledge graph and collaborative filtering. Int. J. Performability Eng. 16(12), 1933–1940 (2020)
22. Voler: 2021 DIGIX Video Recommendation (2021). https://www.kaggle.com/datasets/voler2333/2021-digix-video-recommendation
23. Reddy, S.: A content-based video recommendation system (2021)
24. Purushotham, S.: Advanced machine learning techniques for video, social and biomedical data analytics (2015). https://doi.org/10.25549/usctheses-c40-179003

RMSLRS: Real-Time Multi-terminal Sign Language Recognition System

Yilin Zhao¹, Biao Zhang², and Kun Ma³

¹ Business School of Jinan University, University of Jinan, Jinan 250022, China
² School of Computer and Information, Hefei University of Technology, Hefei 230009, China
³ Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China
[email protected]

Abstract. Sign language is commonly used by deaf or speech-impaired people to convey meaning. Sign language recognition (SLR) aims to help users learn and use sign language by recognizing the signs in given videos. Although sign language has gained social acceptance, few sign language recognition systems have been developed for educational purposes. Current SLR systems have a single form of expression, lack real-time interaction and feedback, and come at an expensive cost. Moreover, videos captured in real scenes are complex, which degrades the performance of models. To this end, this paper proposes a novel real-time multi-terminal sign language recognition system (RMSLRS). Specifically, a lightweight sign language recognition model based on MediaPipe Holistic is proposed to perform sign language inference by sensing multidimensional information such as pose, facial expression, and hand tracking, achieving near real-time performance on mobile and desktop devices. Then, a novel pre-processing module is proposed to reduce the adverse effects of background noise in videos through the YOLO model and OpenCV. Furthermore, a novel technical architecture with front-end and back-end separation and multi-terminal deployment is designed, including a WeChat applet, a desktop application, and a website. Finally, this system has been deployed with gratifying success in practice.

Keywords: Sign language recognition · Multi-terminal · Deep learning · OpenCV · Video denoising

1 Introduction

1.1 Background

Sign language is a way to convey meaning with hand gestures involving visual motions and signs. Understanding and utilizing sign language requires significant study and training time [15]. Recently, developing systems that classify the signs of different sign languages into given classes has become a hot topic. However,
sign language recognition systems are still in a developing stage, and there are challenges in recognizing signs in real time. To improve recognition accuracy, different features can be fused, which we organize into three categories: using only hand pose features [4,6], using hand and face pose features [18], and using hand, face, and body pose features [16,21]. A Kinect-based sign language recognition system using three-dimensional (3D) convolutional neural networks (CNN) has been proposed to capture spatial-temporal features from raw data, which helps in extracting authentic features that adapt to the large differences among hand gestures [11]. Although the recognition rate of this model reaches 94.2%, it is relatively slow and cannot achieve real-time performance. A Microsoft Kinect and CNN-based recognition system uses thresholding, background removal, and median filtering for preprocessing [22]; the background preprocessing it adopts does not work well in real scenes with more complex backgrounds. A multi-sensor system for recognizing the driver's hand gestures calibrates data received from depth, radar, and optical sensors and uses a CNN to classify ten different gestures [19]. Although a large performance improvement is achieved, it requires multiple sensors and can only be deployed on a single client. In order to help deaf-mute people and sign language enthusiasts learn sign language efficiently, it is necessary to develop a real-time multi-terminal sign language recognition system. It will help break down barriers for sign language users in society and build a bridge of communication for vulnerable groups.

1.2 Challenges

The first challenge in developing such a lightweight sign language recognition (SLR) system is instantaneity. With current SLR systems, users learn sign language only by watching videos, without real-time interaction and feedback, and incorrect movement demonstrations cause learners to waste a lot of time self-correcting [27]. Few models can be deployed and demonstrated in real time on mobile devices while maintaining accuracy, which undoubtedly limits the promotion of SLR systems on mobile devices [1,9]. Another challenge is separating the background from the signer. The accuracy of SLR models suffers greatly in the complex backgrounds of real scenes, because most models are trained on datasets with characteristic scenes [23]. A popular approach is to incorporate this task into the model, which greatly increases its computational load; another is to use traditional algorithms, but these do not work well in complex scenes [5,20,22] (Fig. 1).

Fig. 1. Multi-terminal Sign Language Recognition System with separated front end and back end

1.3 Contributions

In order to overcome the shortcomings of existing sign language recognition systems, this paper proposes a Real-time Multi-terminal Sign Language Recognition System (RMSLRS). The main innovations are as follows:

– Lightweight sign language recognition (SLR) model. The proposed model, based on MediaPipe Holistic, provides a novel, state-of-the-art topology of human poses that captures human joint-point information in real time. Compared with feeding the entire image into a huge feature extraction network, as in the past, this method reduces the computational complexity of the model. Real-time performance is achieved in both mobile and desktop applications.
– Video background noise reduction. The proposed data pre-processing module can effectively reduce the adverse effects of background noise in videos. We blur the useless background information in videos: people are first recognized and located through the YOLO model [24], and a Gaussian blur is then applied to the background through OpenCV [3]. Compared with existing techniques, we combine deep learning with traditional methods, balancing speed and accuracy.
– Multi-terminal deployment architecture. We design a novel technical architecture of front-end and back-end separation and multi-terminal
deployment. The back-end is based on cloud servers and development servers, providing technical services to the front-end. Compared with previous single-deployment methods, the system can be deployed not only on portable devices, such as the WeChat applet and an APP, but also on computer desktop applications and websites. This allows compatibility with different devices.

1.4 Organization

The rest of this article is organized as follows. Section 2 introduces the design scheme and related technologies of the system. Section 3 describes how the system is demonstrated and its application scenarios. The last section outlines brief conclusions.

2 Architecture

The real-time multi-terminal sign language recognition system (RMSLRS) is divided into three layers: the human-computer interaction layer, the video denoising layer, and the recognition inference layer. The architecture is shown in Fig. 2. First, in the human-computer interaction layer, a novel technical architecture with front-end and back-end separation and multi-terminal deployment is designed; modular development of the front and back ends is more conducive to later maintenance and expansion. The back-end is based on the Spring Boot framework [28], and the system is deployed to a cloud server and an R&D server in the background. The front-end can be presented on multiple terminals, including a WeChat applet, desktop applications, and websites. Furthermore, in the video denoising layer, a pre-processing module is proposed to effectively reduce the adverse effects of background noise in videos. In order to better obtain effective sign language video stream information, this paper proposes a method that combines deep learning with traditional techniques to process video streams in real time through YOLO [24] and OpenCV [3]. First, the person and hand joints are located through YOLO, and OpenCV is then used to select the region of interest (ROI); videos are cropped and their backgrounds blurred, which reduces the effect of complex background noise. Finally, the processed videos are annotated and added to the training set to further improve the accuracy of the model. Lastly, in the recognition inference layer, as shown in Fig. 4, a novel lightweight sign language recognition model based on MediaPipe Holistic is proposed. The pre-processed video stream is recognized by the sign language recognition model: joint-point information is extracted by MediaPipe Holistic, encoded by a bidirectional long short-term memory (Bi-LSTM) network [12], decoded by connectionist temporal classification (CTC) [25], and finally output as a prediction sequence. The results are transmitted to the cloud server and the data test module, which performs further data analysis on the recognition results. The system uses open-source software, such as TensorFlow, PyTorch, OpenCV, YOLO, and MediaPipe Holistic, to implement its core functions.

Fig. 2. Architecture of the Real-time Multi-terminal Sign Language Recognition System

2.1 Human-computer Interaction Layer

In this layer, a novel technical architecture with front-end and back-end separation and multi-terminal deployment is designed. The development mode of front-end and back-end separation adopted by this system enables the presentation layer to take the form of multiple terminals, including a WeChat applet, desktop applications, and websites. The WeChat applet reduces redundant software installations and can be combined with the push mechanism provided by WeChat [10]; developed with Vue.js [7] on the uni-app framework, it is compatible with multiple platforms. Users can also enjoy sign language recognition services on a computer through the desktop application, where the high performance of computers and external devices, such as external cameras and high-definition monitors, yields a more satisfying experience with a faster response time. This application is developed using the Qt framework [2], which has cross-platform advantages and can run on almost all platforms. The website makes the software perform better and the picture smoother, while the APP can provide users with better services and increase user stickiness. In addition, this layer implements multi-modal retrieval and display functions: users can query sign language recognition results, including text, pictures, and videos, by initial letters, keywords, tags, and other means. Combining the user's personal information and search records, the information query is carried out through a recommendation algorithm [13].

2.2 Video Denoising Layer

In this layer, a pre-processing module is proposed to effectively reduce the adverse effects of background noise in videos. The module combines deep learning and traditional methods to process video streams in real time using both YOLO and OpenCV. This system adopts the YOLOv3s model and the Gaussian blur algorithm.

Fig. 3. Flowchart of Pre-processing Module

As shown in Fig. 3, the person and hands are first localized through the YOLO model. Features are extracted through the Darknet-53 network, and a spatial pyramid pooling layer is then used for feature fusion. The output layer produces three-dimensional feature vectors of sizes 13*13*255, 26*26*255, and 52*52*255 respectively, which undergo a series of post-processing steps to obtain the location and classification of the detection target. The feature vectors of these three scales are responsible for detecting targets of different sizes, so both hands and bodies can be detected well. According to the position of the person, the video is cropped in real time through OpenCV to obtain only the part containing the person, which reduces the useless background noise in the video. For the retained useful part of the video, the background is blurred by a Gaussian blur, which further highlights the pose and actions of the person and reduces the interference caused by background noise. The Gaussian blur algorithm uses the density function of the Gaussian normal distribution [8], as shown in Eq. (1), where μ is the mean of x and σ is the standard deviation of x. Since each calculation takes the current point as the origin, the value of μ is zero. Gaussian blurring of an image requires the two-dimensional form, shown in Eq. (2).

f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x - \mu)^2 / 2\sigma^2}    (1)

G(x, y) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2) / 2\sigma^2}    (2)
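A hedged sketch of the crop-and-blur step is given below, using OpenCV's GaussianBlur; the person box is assumed to come from the YOLO detector, which is omitted here for brevity.

```python
# Blur the background of a frame while keeping the detected person sharp.
import cv2

def crop_and_blur(frame, box, ksize=(51, 51)):
    x, y, w, h = box                              # assumed YOLO person box
    blurred = cv2.GaussianBlur(frame, ksize, 0)   # Eq. (2) applied per pixel
    blurred[y:y+h, x:x+w] = frame[y:y+h, x:x+w]   # keep the person sharp
    return blurred[max(0, y-20):y+h+20, max(0, x-20):x+w+20]  # crop ROI

frame = cv2.imread("frame.jpg")
out = crop_and_blur(frame, (120, 40, 200, 380))
cv2.imwrite("frame_denoised.jpg", out)
```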

2.3 Recognition Inference Layer

In this layer, a novel lightweight sign language recognition model based on MediaPipe Holistic is proposed. As shown in Fig. 4, the layer is divided into three steps: feature extraction, encoding, and decoding.

Fig. 4. The proposed Sign Language Recognition Model

The first step is feature extraction. A new feature extraction module called M-Conv is proposed, which uses MediaPipe Holistic, released by Google, for joint-point extraction. MediaPipe Holistic can extract more than 540 key points: 33 pose landmarks, 21 landmarks per hand, and 468 facial landmarks. As shown in Fig. 5, MediaPipe Holistic senses human pose, facial landmarks, and hand tracking simultaneously. After a convolution operation is performed on each key point, the extracted features are concatenated into a feature map that serves as the input to the next layer. MediaPipe Holistic consists of a new pipeline with optimized pose, face, and hand components that each run in real time, with minimal memory transfer between their inference back-ends, and it supports interchangeability of the three components depending on the quality/speed trade-off. As a result, it runs at near real-time performance even on portable devices and in the browser [26].

Next is the encoder. A bidirectional long short-term memory (Bi-LSTM) network [12] is used to learn the feature sequences, and complex feature correspondences are learned by mapping spatiotemporal sequences to ordered label sequences. The Bi-LSTM computes the forward and backward hidden sequences by iterating from k = 1 to K and from k = K to 1, respectively:

h_k^f, c_k^f = f_{LSTM\text{-}frw}(s_k, h_{k-1}^f, c_{k-1}^f)    (3)

h_k^b, c_k^b = f_{LSTM\text{-}bck}(s_k, h_{k+1}^b, c_{k+1}^b)    (4)

where h_k^f, c_k^f are the hidden states and memory cells of the forward LSTM [17] module f_{LSTM-frw} at time step k, and h_k^b, c_k^b are those of the backward LSTM module f_{LSTM-bck}. The output of the Bi-LSTM network is converted into a probability distribution over the M label categories at time k by a softmax fully connected layer [14]:

z_k = \mathrm{softmax}(W[h_k^f; h_k^b] + b)    (5)

where W and b are the weight matrix and bias vector of the softmax classifier. We use [·;·] for the concatenation operation, and let θ = [θ_f; θ_s] denote the vector of all parameters of the end-to-end recognition system, where θ_f and θ_s
denote the parameters of the feature extraction and sequence learning modules, respectively. The last step is the decoder. Since the video segmentation points between the sign language actions corresponding to each word in the video sequence are unknown, this paper uses connectionist temporal classification (CTC) as the objective function. CTC is an objective function originally designed for speech recognition that integrates all possible correspondences between the input and the target sequence [25]. By adding an extra blank class to the annotation vocabulary to explicitly model the separation between two adjacent sign language actions, the network's outputs no longer need the input and output sequences to be pre-aligned.
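A hedged PyTorch sketch of this encoder-decoder pairing, a Bi-LSTM over per-frame landmark features (Eqs. (3)-(5)) trained with a CTC loss that adds the blank class, is given below; all dimensions and the dummy data are illustrative, not the authors' configuration.

```python
# Bi-LSTM encoder + CTC objective over landmark features (illustrative).
import torch
import torch.nn as nn

T, B, F, M = 60, 4, 543 * 3, 20          # frames, batch, feature dim, labels
encoder = nn.LSTM(F, 128, bidirectional=True)
classifier = nn.Linear(2 * 128, M + 1)   # +1 for the CTC blank class
ctc = nn.CTCLoss(blank=M)

x = torch.randn(T, B, F)                 # per-frame landmark features (M-Conv)
h, _ = encoder(x)                        # concatenated fwd/bwd hidden states
log_probs = classifier(h).log_softmax(-1)  # z_k of Eq. (5), per frame

targets = torch.randint(0, M, (B, 5))    # dummy 5-sign label sequences
in_lens = torch.full((B,), T, dtype=torch.long)
tgt_lens = torch.full((B,), 5, dtype=torch.long)
loss = ctc(log_probs, targets, in_lens, tgt_lens)
loss.backward()
```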

Fig. 5. Examples of joint point extraction: (a) Do, (b) Marry, (c) Stand up, (d) Goodbye
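The landmarks visualized in Fig. 5 can be obtained with a few lines of MediaPipe; the capture loop below is an illustrative sketch, not the authors' code.

```python
# Joint-point extraction with MediaPipe Holistic from a webcam stream.
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic(min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # 33 pose, 468 face, and 21-per-hand landmarks, when detected
    if results.pose_landmarks:
        print(len(results.pose_landmarks.landmark))       # 33
    if results.left_hand_landmarks:
        print(len(results.left_hand_landmarks.landmark))  # 21
cap.release()
```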

3 Demonstration

First, we will use a few slides to introduce the motivations of RMSLRS and highlight the operating principles of the system. After that, we will guide viewers into the system through the QR code of the WeChat applet. Finally, we invite viewers to participate in the interaction using a smartphone. The interaction process is as follows: users enter the WeChat applet and query the sign language information they want to know through the sign language
search module, and the system displays it in the form of text, pictures, and videos. The user then enters the sign language interaction module, turns on the camera, and obtains human body joint information in real time. As shown in Fig. 5, users demonstrate sign language movements and get real-time feedback on the screen. The real-time multi-terminal sign language recognition system has the following application scenarios: schools for the deaf-mute and continuing education colleges are the places where hearing-impaired people are most concentrated, and a reasonable sign language recognition system needs to be introduced there to serve them. The deaf community is also a suitable application: sign language is commonly used by deaf or speech-impaired people to communicate and plays an integral role in that community.

4 Conclusions

In this paper, a novel real-time multi-terminal sign language recognition system is proposed. First, a lightweight sign language recognition model based on MediaPipe Holistic is proposed to capture human joint-point information in real time, ensuring accuracy while achieving real-time performance. Then, a novel data pre-processing module that combines deep learning and traditional methods is proposed to effectively reduce the impact of video background noise. Finally, a novel technical architecture with front-end and back-end separation and multi-terminal deployment is designed: it can be deployed not only on portable devices, such as the WeChat applet and APPs, but also on desktop applications and websites.

Acknowledgment. This work was supported by the Shandong Provincial Teaching Research Project of Graduate Education (SDYJG21034), the National Natural Science Foundation of China (61772231), and the Shandong Provincial Key R&D Program of China (2021CXGC010103).

References

1. AlKhuraym, B.Y., Ismail, M.M.B., Bchir, O.: Arabic sign language recognition using lightweight CNN-based architecture. Int. J. Adv. Comput. Sci. Appl. 13(4) (2022)
2. Blanchette, J., Summerfield, M.: C++ GUI Programming with Qt 4. Prentice Hall Professional (2006)
3. Bradski, G.: The OpenCV library. Dr. Dobb's J. Softw. Tools Professional Programmer 25(11), 120–123 (2000)
4. Chen, X., Wang, G., Guo, H., Zhang, C.: Pose guided structured region ensemble network for cascaded hand pose estimation. Neurocomputing 395, 138–149 (2020)
5. Cho, S., et al.: Tackling background distraction in video object segmentation. arXiv preprint arXiv:2207.06953 (2022)
6. Doosti, B.: Hand pose estimation: a survey. arXiv preprint arXiv:1903.01013 (2019)
7. Filipova, O.: Learning Vue.js 2. Packt Publishing Ltd. (2016)
8. Gedraite, E.S., Hadad, M.: Investigation on the effect of a Gaussian blur in image filtering and segmentation. In: Proceedings ELMAR-2011, pp. 393–396. IEEE (2011)
9. Halder, A., Tayade, A.: Real-time vernacular sign language recognition using MediaPipe and machine learning. J. Homepage: www.ijrpr.com, ISSN 2582-7421 (2021)
10. Hao, L., Wan, F., Ma, N., Wang, Y.: Analysis of the development of WeChat mini program. In: Journal of Physics: Conference Series, vol. 1087, p. 062040. IOP Publishing (2018)
11. Huang, J., Zhou, W., Li, H., Li, W.: Sign language recognition using 3D convolutional neural networks. In: 2015 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. IEEE (2015)
12. Huang, Z., Xu, W., Yu, K.: Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 (2015)
13. Isinkaye, F.O., Folajimi, Y.O., Ojokoh, B.A.: Recommendation systems: principles, methods and evaluation. Egyptian Inform. J. 16(3), 261–273 (2015)
14. Jang, E., Gu, S., Poole, B.: Categorical reparametrization with Gumbel-softmax. In: International Conference on Learning Representations (ICLR 2017). OpenReview.net (2017)
15. Jiang, S., Sun, B., Wang, L., Bai, Y., Li, K., Fu, Y.: Skeleton aware multi-modal sign language recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3413–3423 (2021)
16. Kocabas, M., Karagoz, S., Akbas, E.: MultiPoseNet: fast multi-person pose estimation using pose residual network. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 417–433 (2018)
17. Koller, O., Camgoz, N.C., Ney, H., Bowden, R.: Weakly supervised learning with multi-stream CNN-LSTM-HMMs to discover sequential parallelism in sign language videos. IEEE Trans. Pattern Anal. Mach. Intell. 42(9), 2306–2320 (2019)
18. Koller, O., Ney, H., Bowden, R.: Deep learning of mouth shapes for sign language. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 85–91 (2015)
19. Molchanov, P., Gupta, S., Kim, K., Pulli, K.: Multi-sensor system for driver's hand-gesture recognition. In: 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 1, pp. 1–8. IEEE (2015)
20. Neri, A., Colonnese, S., Russo, G., Talone, P.: Automatic moving object and background separation. Signal Process. 66(2), 219–232 (1998)
21. Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9912, pp. 483–499. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46484-8_29
22. Pigou, L., Dieleman, S., Kindermans, P.-J., Schrauwen, B.: Sign language recognition using convolutional neural networks. In: Agapito, L., Bronstein, M.M., Rother, C. (eds.) ECCV 2014. LNCS, vol. 8925, pp. 572–578. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16178-5_40
23. Rastgoo, R., Kiani, K., Escalera, S.: Sign language recognition: a deep survey. Expert Syst. Appl. 164, 113794 (2021)
24. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767 (2018)
25. Safeel, M., Sukumar, T., Shashank, K., Arman, M., Shashidhar, R., Puneeth, S.: Sign language recognition techniques: a review. In: 2020 IEEE International Conference for Innovation in Technology (INOCON), pp. 1–9. IEEE (2020)
26. Singh, A.K., Kumbhare, V.A., Arthi, K.: Real-time human pose detection and recognition using MediaPipe. In: Reddy, V.S., Prasad, V.K., Wang, J., Reddy, K. (eds.) ICSCSP 2021. AISC, pp. 145–154. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-7088-6_12
27. Wadhawan, A., Kumar, P.: Sign language recognition systems: a decade systematic literature review. Archives Comput. Methods Eng. 28(3), 785–813 (2021)
28. Walls, C.: Spring Boot in Action. Simon and Schuster (2015)

Advances in Information and Communication Technologies

Modelling and Simulation of the Dump-Truck Problem Using MATLAB Simulink

Ibidun C. Obagbuwa, Bam Stefany, and Moroka Dineo Tiffany

Department of Computer Science and Information Technology, Sol Plaatje University, Kimberley, South Africa
{Ibidun.obagbuwa,201900419,201902565}@spu.ac.za

Abstract. This work focused on solving the dump-truck problem, a discrete-event simulation, by finding the average loader and scale utilizations. This was accomplished using MATLAB Simulink. The output of the simulation clearly shows how a real-world dump-truck system will perform over time and how it can better utilize its resources; different aspects of the simulation can be changed in the models to obtain better utilization results.

1 Introduction The Dump Truck problem is an example of a discrete-event simulation that uses an event schedule, which means that the state of the problem changes at discrete points in time. A discrete-event simulation has the component: System state, Entities, Event notices, Activities, Lists, and Delay (Fig. 1). The Dump Truck is normally solved manually using event scheduling. The problem first appeared in the Discrete-Event System Simulation 4th edition by Bank et al. (2002) as an example of manual simulation using event scheduling. This work aims to calculate the total busy time of the loaders and scale. To find the average loader and scale utilizations This will be achieved by the distributions of the loading time, weighing time, and scaling. The specific objectives for this work are as follows: 1. To create a model for the problem. 2. To calculate the total busy time of the loaders and the scale. 3. To find the average scale and loader utilization.

2 Problem Formulation In this study, six trucks will be considered that will be hauling material X from the company to the industry. Two loaders will load the material X into each truck and © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 589–599, 2023. https://doi.org/10.1007/978-3-031-27499-2_55

590

I. C. Obagbuwa et al.

one scale that will weigh the truck after it was loaded. The loader and scale have an FCFS/FIFO queue. After a truck has finished being weighed, it will travel to the industry and then return to the loading queue. Figure 2 depicts an example of the dump truck queue and loading system.

Fig. 1. Model Component of the Dump-Truck Problem

Fig. 2. Dump-Truck Queue and Loading (https://www.sinotruckinternational.com/case/howomine-purpose-dump-truck/)

3 Literature Review

Liu et al. (2018) focused on using a fault-identification method to achieve rapid fault identification of dump-truck suspension. Kansake and Frimpong (2018) used mathematical models to estimate tire dynamic forces on haul roads; Simulink and RStudio were used to solve the mathematical models, and practical estimates of the impact forces were obtained. Li et al. (2021) used discrete-event simulation (DES) to study Bitcoin mining; the use of realistic features made the model flexible and more comprehensive, and the results provided valuable insights to Bitcoin miners. Gittins et al. (2020) explored the use of discrete-event simulation for the management of livestock. The results showed that DES can help with simulating possible growth strategies as well as observing the impact of already established farming processes. Ozdemir and Kumral (2019) proposed a dispatching system that maximizes the utilization of truck-shovel systems; based on the results, the proposed approach showed the potential to increase the productivity of shovel and truck systems.

4 Assumptions of the Study

1. We assume that five trucks are at the loaders and that one is at the scale at clock time 0, as depicted in Fig. 3.
2. The time it takes for a truck to travel from the loaders to the scale is negligible.
3. We assume there are no delays.
4. In our Simulink model, we assume that our entities are generated with a uniform distribution.
5. We assume that the loading, weighing, and travel times follow the distribution tables that we provide (Fig. 6).

Fig. 3. Loading queue system

5 Methodology The simulation study steps in Fig. 4 were adopted for this study.


Fig. 4. Simulation Study Steps (Banks et al. 2010)

6 Experiments, Results and Discussion

6.1 Simulink Model

The dump-truck problem is represented in this work using a MATLAB Simulink model. The loading, weighing, and travel time distribution tables (Fig. 6) were used for the Simulink model. The following are essential for building the Simulink model (an illustrative code sketch of the same queueing structure follows this list):

1) Uniformly generated entities, namely dump trucks.
2) Three (3) attributes for the loading, weighing, and travel time distributions.
3) Two (2) FIFO/FCFS queues.
4) Two (2) loaders to load the trucks and one (1) scale.
5) Lastly, determining the loader and scale utilization.
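The authors' implementation is a Simulink block diagram; purely as an illustration of the same queueing structure, the logic can also be expressed as a discrete-event program. The following Python sketch uses the SimPy library; the time values are placeholder example draws, not the distribution tables of Fig. 6.

    import random
    import simpy

    # Placeholder example times; the paper draws these from the tables in Fig. 6.
    LOAD_TIMES = [5, 10, 15]
    WEIGH_TIMES = [12, 16]
    TRAVEL_TIMES = [40, 60, 80]
    END_TIME = 80

    busy = {"loader": 0.0, "scale": 0.0}  # accumulated busy times BL and BS

    def truck(env, loaders, scale):
        while True:
            with loaders.request() as req:       # FIFO loading queue
                yield req
                t = random.choice(LOAD_TIMES)
                busy["loader"] += t              # one loader busy for t time units
                yield env.timeout(t)
            with scale.request() as req:         # FIFO weighing queue
                yield req
                t = random.choice(WEIGH_TIMES)
                busy["scale"] += t
                yield env.timeout(t)
            yield env.timeout(random.choice(TRAVEL_TIMES))  # travel and return

    env = simpy.Environment()
    loaders = simpy.Resource(env, capacity=2)    # two loaders
    scale = simpy.Resource(env, capacity=1)      # one scale
    for _ in range(6):                           # six trucks, as in Sect. 2
        env.process(truck(env, loaders, scale))
    env.run(until=END_TIME)                      # stop at clock time 80

    print("average loader utilization:", busy["loader"] / 2 / END_TIME)
    print("average scale utilization:", busy["scale"] / END_TIME)

Note that crediting the full service time at the start of a service slightly overstates utilization for services cut off at END_TIME; the Simulink model accumulates busy time event by event instead.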

The distribution tables of the loading time, weighing time, and travel time (Fig. 6d) shall be used to achieve the aim of the study. The system specifications for the model are MATLAB R2021a Update (9.10.0.1669831) 64-bit (win64) Simulink. For the data analysis plan, we specify the initial state of the system at clock time = 0:

1. Specifying the number of trucks in the loading and weighing queues.
2. Entering the number of trucks at the loaders and on the scale.
3. Lastly, entering the IDs of the trucks in the correct positions.
4. We enter the loading time values from our distribution table.
5. We enter the weighing time values from our distribution table.
6. We enter the travel time values from our distribution table.
7. Our model will compute the Future Event List and choose the imminent event.
8. After that, it will update the system state by moving the trucks according to the imminent event.
9. It will update the clock time.
10. Calculate the busy time of the loaders and the busy time of the scale.
11. This process will be repeated until clock time = 80.
12. In the end, the average loader utilization and the average scale utilization will be computed.

6.2 Model Conceptualization

Modeling methods are improved by the ability to summarize the vital characteristics of the problem, choose and regulate the basic assumptions that symbolize the system, and extend and enhance the model until a usable approximation is obtained [2]. Figure 5 depicts the conceptual model of the study.

Fig. 5. Conceptual model

The time it takes for a truck to travel from the loaders to the scale is negligible. The event-scheduling update rules are:

t = Clock(t) + distribution time of weighing/loading/travel
BL = previous BL + (current Clock(t) − previous Clock(t)) × L(t)
BS = previous BS + (current Clock(t) − previous Clock(t)) × W(t)
Average loader utilization = (BL / number of loaders) / total time
Average scale utilization = BS / total time
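In code, the BL/BS bookkeeping above amounts to one update applied at every event; a minimal sketch (the variable names are ours, not from the paper's implementation):

    def advance_busy(prev_busy, prev_clock, clock, n_busy):
        # BL = previous BL + (current Clock(t) - previous Clock(t)) * L(t); same for BS
        return prev_busy + (clock - prev_clock) * n_busy

    # Example: both loaders busy from t=0 to t=8, then one loader busy until t=12.
    bl = advance_busy(0.0, 0, 8, 2)          # 16.0
    bl = advance_busy(bl, 8, 12, 1)          # 20.0
    avg_loader_util = (bl / 2) / 12          # (BL / number of loaders) / total time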

6.3 Data Collection

The distribution tables of the loading time, weighing time, and travel time used for modeling are shown in Fig. 6(a-d).

Fig. 6. (a) Distribution of the loading time for the dump trucks, (b) distribution of the weighing time for the dump trucks, (c) distribution of the travel time for the dump trucks, (d) data used for the C implementation.

6.4 Model Translation

Most real systems create models that need to store and calculate large amounts of information. Therefore, the conceptual model must be translated into a computer-recognizable format [2]. The conceptual model shown in Fig. 5 was translated to an operational model using MATLAB Simulink. The Simulink model of the dump-truck problem is shown in Fig. 7. This model simulates how dump trucks get loaded, weighed, and travel at random discrete distribution times that were specified beforehand. Entities (dump trucks) are generated uniformly. The trucks first queue up in a FIFO queue that has a maximum capacity of 5 trucks. After that, the trucks move to the two servers (the loaders), where they are loaded at random discrete distribution times. They then proceed to another first-in-first-out queue that also has a maximum capacity of 5 trucks. The time it takes for one truck to move from the loader to the weighing queue is negligible. There is one server (the scale) where each truck is weighed before it travels to deliver the load. The trucks then go through a server that can serve 20 trucks at a time; this server represents the traveling time of each truck before it goes back to the loaders. It serves at random discrete distribution times, and it includes the time the trucks travel and deliver the load. The model also gives the utilization of the loaders and the scale.

In this work, there is a fixed number of dump trucks (6). The clock time starts at 0. At the starting point, the number of trucks is entered in the loading queue (FIFO, maximum capacity 5), the loaders (maximum capacity 2), the weighing queue (FIFO, maximum capacity 5), and the weighing scale (maximum capacity 1). After that, the IDs of the trucks must be entered at their starting positions. For example, if truck 4 is the first truck present on the weighing scale, you must enter '4' next to the output 'Enter the ID of the truck present on the weighing scale:'. This is important because it lets the system know where each truck is at the start. Lastly, you must enter the loading, weighing, and travel time values that were specified in the individual distribution tables. The program will then log the clock time of each event as well as the system state (how many trucks there are in each state). It will also provide the user with the IDs of the trucks in the loading and weighing queues. The future event list is generated to show the user the events that will happen. The program also logs the busy time of the loaders and the scale. Lastly, it logs the weighing, traveling, and loading times that may become the imminent event. The imminent event is chosen as the event that will happen next, and the clock is updated to that event's time. If a truck is done being weighed, it travels, drops the load, and then comes back to the loading queue. Once the program is done, it calculates the average loader and scale utilization.

Fig. 7. Simulink model

The Simulink model gives the utilization of the loaders and the scale, with a FIFO loading queue, a FIFO weighing queue, two loaders, and one scale. The model gives the user a better understanding of how the current system is performing and can be adjusted to see how the system can be improved.


6.5 Verification

Verification concerns the computer program created for the simulation model. Its purpose is to make sure that the computer program works and that the conceptual model was correctly transformed into an operational model, as illustrated in Fig. 4. If the input parameters and the logical structure of the model are correctly represented in the computerized model, the verification is complete [2]. The total busy time of the loaders and the scale and the average scale and loader utilization were computed manually and compared with the results given by the model to make sure our model performs properly.

6.5.1 Confidence Intervals

For the loader utilization, we are 95% confident that the population mean lies between 0.231 and 0.272. For the scale utilization, we are 95% confident that the population mean lies between 0.430 and 0.556. Since both confidence intervals, shown in Figs. 8 and 9, are narrow, we can conclude that the estimates are precise.
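The paper does not state how these intervals were computed; assuming the standard t-based interval over independent replication means, they can be reproduced as follows (the sample values here are hypothetical, not the authors' replication data):

    import math
    from statistics import mean, stdev
    from scipy import stats

    def ci95(samples):
        """95% t-based confidence interval for the mean of replication results."""
        n = len(samples)
        half = stats.t.ppf(0.975, n - 1) * stdev(samples) / math.sqrt(n)
        return mean(samples) - half, mean(samples) + half

    # Hypothetical per-replication loader utilizations:
    print(ci95([0.24, 0.26, 0.25, 0.23, 0.27]))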

Fig. 8. Loader utilization


Fig. 9. Scale utilization

6.6 Validation

Validation is generally accomplished by calibrating the model. This is an iterative method that compares the model to the behavior of the actual system and uses the insight gained to reduce the discrepancy between the two. The method is repeated until the accuracy of the model is determined to be appropriate [2]. Our model takes a real-world problem and gives the user the total busy time of the loaders and the scale and the average scale and loader utilization, as shown in Fig. 9. This can help the dump-truck company better understand how its system is performing (Fig. 10).


Fig. 10. (a) Output of loader and scale utilization, (b) loader utilization visualization, (c) scale utilization visualization

7 Conclusion

From the results obtained with our model, the average loader utilization is 37.20% and the average scale utilization is 92.68%. This means that if only six dump trucks with the given distribution times are used, the loaders are not properly utilized. Looking at the results obtained from our simulation, we see that as the number of trucks increases to 20, the loader and scale utilizations also increase. The model shows how a real-world system will perform over time; we can use it to change different aspects of the simulation and obtain better utilization results. By better utilizing the scale and the loaders (the servers), the company can make better use of its resources. A larger number of trucks will be considered in an extension of this work.

Acknowledgments. The authors would like to thank Sol Plaatje University for the infrastructure support for this research.


References

Ozdemir, B., Kumral, M.: Simulation-based optimization of truck-shovel material handling systems in multi-pit surface mines. Simulation Modelling Practice and Theory 95, 36–48 (2019). https://doi.org/10.1016/j.simpat.2019.04.006
Banks, J., Carson, J.S., Nelson, B.L., Nicol, D.: Discrete-Event System Simulation, 5th edn. Prentice-Hall, Upper Saddle River, NJ (2010)
Kansake, B., Frimpong, S.: Analytical modelling of dump truck tire dynamic response to haul road surface excitations. Int. J. Min. Reclam. Environ. 34, 1–18 (2018). https://doi.org/10.1080/17480930.2018.1507608
Li, K., Liu, Y., Wan, H., Huang, Y.: A discrete-event simulation model for the Bitcoin blockchain network with strategic miners and mining pool managers. Computers & Operations Research 134, 105365 (2021). https://doi.org/10.1016/j.cor.2021.105365
Liu, B., Ji, Z., Wang, T., Tang, Z., Li, G.: Failure identification of dump truck suspension based on an average correlation stochastic subspace identification algorithm. Appl. Sci. 8, 1795 (2018). https://doi.org/10.3390/app8101795
Gittins, P., McElwee, G., Tipi, N.: Discrete event simulation in livestock management. Journal of Rural Studies 78, 387–398 (2020). https://doi.org/10.1016/j.jrurstud.2020.06.039

Modeling and Simulation of a Robot Arm with Conveyor Belt Using MATLAB Simulink Model

Ibidun Christiana Obagbuwa(B) and Kutlo Baldwin Mogorosi

Department of Computer Science and Information Technology, Sol Plaatje University, Kimberley, South Africa
{ibidun.obagbuwa,201802389}@spu.ac.za

Abstract. This work modeled and simulated a robotic arm with a conveyor belt that picks and places objects from one spot to another. Production is increasing daily, and raising production rates while increasing profit margins has become a priority for every organization. When large-scale production occurs, problems arise in the material handling system as a result of product counting, defective piece removal, and so on. Because of these factors, manufacturing units are increasingly interested in automating their work with robots. Pick-and-place robots are a type of automation technology used in the manufacturing industry. They are planned and designed to eliminate human error and intervention, allowing for more precise work and faster production. The simulation of the robotic arm in this work was carried out using MATLAB Simulink.

Keywords: Simulation · Modelling · Robotic arm · Conveyor belt · MATLAB Simulink model · Production · Manufacturing

1 Introduction

Robotics is now a far broader field of study than it was a few years ago, encompassing research and development in a variety of diverse fields such as kinematics and dynamics, planning systems, control, sensing, programming languages, and artificial intelligence. As we can see, automation is rising around the globe daily to make our lives easier, and the fundamental nature of employment is changing. Because most tasks are now performed by machines, the demand for manpower in industries has fallen dramatically. Human arms are being replaced by robotic arms, to put it simply; the robotic arm performs the repetitious jobs that were previously done by hand. A five-degree-of-freedom robot arm with a gripper, as well as two conveyor belts, is modelled. Parts are transferred from one conveyor belt to the other by the robot, and the two conveyor belts transport the pieces to and from the robot arm. Electric actuation, supervisory logic, and end-effector trajectory optimization are all included in the model. In this work, the modeling and simulation of a robotic arm with a conveyor belt that picks and places objects from one location to another is presented. The work was carried out with MATLAB Simulink. This simulation study can be used to examine how the robot functions and to predict its performance.


2 Literature Review

Many academic and industrial specialists have researched this topic, and the material handling system has always been a fascinating subject to consider. Many experts have described the material handling management system as the backbone of any manufacturing operation. As a result, the following review was done to learn about previous research on this topic. A colour-sensor-based object-sorting robot was the focus of Shubha and Rudresh's work, with an automated material handling system as the major goal. The microcontroller coordinates the robotic arm's movement to pick the object travelling on the conveyor belt [1]. After picking objects from the conveyor, the robot sorts them by placing them in specified positions. When a robot performs labour formerly performed by a person, accuracy and reproducibility are obtained. In addition, Saranya et al. used MATLAB software with an offline surface clustering approach to conduct experiments on a pick-and-place robotic arm [2]. The goal of their research was to find the largest number of objects that could be grabbed and placed by a robotic arm in the shortest amount of time; they did this using a MATLAB-based image processing technique. Reddy investigated the sorting of objects by determining their hue and arranging them in the appropriate position; for this task, he employed an AT89S52 microcontroller and a liquid crystal display (LCD) in his prototype [3]. Furthermore, Marymol et al. studied a colour-sensor-based object-sorting robot using an embedded system; they created a robotic arm that sorts differently coloured cubes into separate cups [4]. Rautu et al. conducted research in which a flipper mechanism identifies items that do not meet the requirements; such items are rejected and pushed away, and differently coloured objects are collected in the partitions of a circular container. DC motors drive the two conveyor belts in their project. A TCS 230 colour sensor, inductive sensors, load cells, and a Siemens 300-series PLC were utilized to identify and separate goods [5]. Several approaches were seen in the literature regarding the simulation of a robotic arm for the picking and placement of objects. In this work, we used MATLAB Simulink to simulate a robotic arm with a conveyor belt that picks and places objects from one location to another.

3 Methodology

In this work, we adopted the simulation steps in Fig. 1. MATLAB Simulink was used to model and simulate the robot arm with a conveyor belt, and Simscape Multibody was used to import the robot arm model developed in Onshape. The necessary data was collected, such as phase AC induction motors, conveyor-belt DC geared motors, various types of sensors, and microcontroller LED displays, among other things. Then, depending on the parameters selected, we determined design criteria such as conveyor belt length, belt width and stress analysis, and the required motor rpm. The actuation and the contact modeling between the gripper and the block are configured using variant subsystems.


• Select the right level of fidelity for your test using hyperlinks within the model. Actuation: Ideal (prescribed motion) or Motor (electric).
• Contact between the gripper and the block: Damper, Penalty, or Payload; Payload is recommended for box transfer and joint testing.

3.1 Problem Formulation

For better results and industry growth, all manufacturing industries are now focusing on creative concepts, methods, and procedures. In the past, when production was modest, these duties could be performed manually by labour, but now, due to increased production, manual handling is no longer feasible and raises labour costs. As a result, the installation of these robots has become a need. Every organization currently places a high premium on boosting both production rates and profit margins, since production grows every day. The material handling system experiences issues when production is done on a big scale for a number of reasons, including product counting, removing defective parts, and so forth. These factors have increased the interest of industrial units in using robots to automate their processes.

3.2 Model Conceptualization

Modeling techniques can be improved by having the capacity to abstract the key aspects of the issue, choose and alter the fundamental assumptions that define the system, and extend and enhance the model until a workable approximation is attained [7]. A five-degree-of-freedom robot arm with a gripper, as well as two conveyor belts, is modelled. Parts are transferred from one conveyor belt to the other by the robot, and the two conveyor belts transport the pieces to and from the robot arm. Electric actuation, supervisory logic, and end-effector trajectory optimization are all included in the model.

3.3 Data Collection

Simscape Multibody in MATLAB Simulink was used to assemble the model, and the data provided by Simscape was utilized for assembling the youBot. The youBot was assembled and tested several times. The necessary data gathered from Simscape Multibody include phase AC induction motors, conveyor-belt DC geared motors, various types of sensors, and microcontroller LED displays, among other things. Then, depending on the parameters selected, we determined design criteria such as conveyor belt length, belt width and stress analysis, and the required motor rpm.

3.4 Model Translation

The majority of real-world systems produce models that need to be filled with data in a way that computers can understand, since these models typically involve calculating and storing a lot of information. In this work, we used MATLAB Simulink to create a fully automated material-handling robot model. Simscape Multibody was used to import the robot arm model developed in Onshape.


Fig. 1. Simulation study steps [7]

For robotic arms, this sort of robot counts products as they pass. One DC gear motor rotates one conveyor belt. We used two pairs of IR sensors: one pair to count objects, and the second pair to stop the conveyor belt when an object reaches the pickup position. The signals from the sensors are then processed in the microcontroller, which controls the robotic arm. For one robotic arm, we employed three DC gear motors driven by a basic program circuit, as well as a Buhler DC gear-head motor for gripping.

3.5 Verification

Verification involves the computer programs created for simulation models. This is to ensure that the computer program worked properly and that the entire conceptual model was successfully converted to an operational model, as shown in Fig. 2. When the input parameters and the model's logical structure are correctly represented on the computer, the verification is complete [7]. The model user can configure the robot arm for different tests using different subsystems: conveyor belts, load, and logic tests (default); box transfer tests; and joint evaluations. To configure tests, it is advisable to use the hyperlinks at the top level of the model.


Fig. 2. Model verification [7]

3.6 Validation

Validation is usually done by calibrating the model. This is an iterative process that compares the model to the behavior of the actual system and uses the insight gained to reduce the discrepancy between the two. This process is repeated until the accuracy of the model is determined to be acceptable [7]. Figure 3 shows the validation process. Our model runs with no technical problems: there are no error messages, and all files are found. Figure 4a shows the MATLAB Simulink model of our work, while Fig. 4b depicts a real-life image of the robotic arm system with which our model is compared. The robotic arm in Fig. 4a is similar to the real-life robotic arm in Fig. 4b.

Fig. 3. Model validation process [7]

3.7 Experimental Design

In the experimental design, a decision on which alternative to simulate must be made; this decision often depends on runs that have been completed and analyzed. The length of the initialization period, the length of the simulation run, and the number of replications performed per run must all be determined for each simulated system design [7].


Fig. 4. (a) Simulink model of the robotic arm, (b) real-life image of the robotic arm system

The model's top level has hyperlinks that allow customization of the model for the test the user would like to run. The system as a whole is the default test (robot arm and conveyor belts). Box transfer tests can be used to determine the amount of power required for a specific manipulator trajectory. The required motor torque and the forces that will be placed on the bearings can both be determined through joint tests. The model user can configure the robot arm for different tests using the following subsystems:

• Conveyor belts, load, and logic tests (default)
• Box transfer test
• Joint evaluations

To configure tests, it is advised to use the hyperlinks at the top level of the model.
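As a rough illustration of how a box-transfer run yields a power requirement: the instantaneous mechanical power at a joint is torque times angular velocity, so the peak of that product over the logged trajectory bounds the motor's power need. A minimal Python sketch with made-up logged values (not data from this model):

    # Hypothetical logged joint torque (N*m) and angular velocity (rad/s):
    torque = [0.0, 1.2, 2.5, 3.1, 2.0, 0.8]
    omega = [0.0, 0.5, 1.0, 1.2, 0.9, 0.3]

    power = [t * w for t, w in zip(torque, omega)]   # P(t) = tau(t) * omega(t)
    print("peak mechanical power (W):", max(power))
    print("mean mechanical power (W):", sum(power) / len(power))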


3.7.1 Arm Subsystem

This subsystem consists of the joint actuators, the robot environment, and the robot arm that was loaded from CAD software. At this level, hyperlinks either set the robot joints to use prescribed motion or represent them as a network of interconnected electrical devices. The most popular method for determining a motor's torque requirements is motion actuation, after which the motors in the electrical network can be chosen in accordance with those requirements. The arm subsystem is displayed in Figs. 5 and 6.

3.7.2 Input Subsystem

Fig. 5. Input subsystem

The robot arm system’s inputs are set up by this alternative subsystem (Fig. 6). Unit testing of various system components is enabled by the variants. The control version is responsible for implementing the entire system’s supervisory logic. An open-loop test that can actuate any or all of the joints and conveyor belts is the signals variation. Box transfer paths between belts are tested using splines. The control belt version enables both open-loop and closed-loop testing of the robot arm and conveyor belts. The input subsystem is displayed in Fig. 5. 3.7.3 Motion Variant and Actuation Subsystem This variant includes the CAD-imported robot arm joints. They are programmed to use prescribed motion, in which the joint angle is specified by an input signal. The simulation computes the torque needed to complete this motion. For each joint, the radial and axial forces for the motor bearings are calculated. This data can be used to fine-tune the mechanical requirements for the motor, gearboxes, and bearings.


Fig. 6. Variant subsystem

3.7.4 Ideal Variant for Forearm Motor Actuation

This variant is used to calculate the electrical system's requirements. The simulation determines the torque necessary to generate the prescribed motion that the joints must drive. Each joint's radial and axial motor-bearing forces are calculated as well, and the motor's requirements can be refined with this knowledge. Using the actuation torque, this variant also calculates the amount of electrical power needed to produce the motion. Because each joint draws current from the supply, the requirements for the power connections and supply can be determined. The current estimation used here requires choosing a gear ratio and motor torque.

3.8 Production Runs and Analysis (Simulation Results for the Model)

Performance metrics for simulated system designs are estimated using production runs and subsequent analyses [7]. Figure 7 depicts the total current drawn by the robotic arm's motors.
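As an illustration of that current estimate: assuming an ideal gearbox and a permanent-magnet DC motor model, the joint torque divided by the gear ratio gives the motor torque, and dividing by the motor's torque constant gives the current. All values in this sketch are hypothetical:

    def motor_current(joint_torque, gear_ratio, kt, gear_efficiency=1.0):
        """Current (A) for a required joint torque (N*m).

        tau_motor = tau_joint / (gear_ratio * efficiency); i = tau_motor / Kt.
        """
        return joint_torque / (gear_ratio * gear_efficiency * kt)

    # 3 N*m at the joint, 50:1 gearbox, Kt = 0.05 N*m/A -> 1.2 A
    print(motor_current(3.0, 50.0, 0.05))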

Fig. 7. Current drawn by all of the robotic arm's motors

The box’s 3D trajectory as it was carried by the robotic arm is depicted in Fig. 8.


Fig. 8. Box trajectory

Fig. 9. Motor torques and forces; and constraint forces

The robotic arm’s joint-specific restraint forces are shown in Fig. 9. Each motor’s torque and force in the robotic arm are depicted also in Fig. 9.

Fig. 10. Joints in the robotic arm (Optimization results with low friction)


The placements of the joints in the robotic arm are depicted in Fig. 10.

4 Conclusion

A robotic arm and two conveyor belts are modelled in this project. Blocks are delivered to the robot via one conveyor belt; the robot picks up each block, flips it over, and places it on another conveyor belt, which carries it away. This model can be used to evaluate electrical and mechanical design requirements, spot integration issues, develop and test control logic, and enhance path planning. The simulation results show that applying automation to manufacturing can increase production rates and profit margins and enable optimal processes. The extension of this work will focus on testing this model with real-life data from the manufacturing industry.

Acknowledgments. The authors would like to thank Sol Plaatje University for the infrastructure support for this research.

Data Availability. The data generated using the Simscape Multibody and MATLAB for the work is available upon request from the corresponding author.

Conflicts of Interest. The authors declare that they have no conflicts of interest.

References

1. Rudresh, H.G., Shubha, P.: Colour sensor based object sorting robot. Int. Res. J. Eng. Technol. (IRJET) 4(8) (2017)
2. Saranya, L., Srinivasan, R., Priyadharshini, V.: Robotic arm for pick & place operation using MATLAB based on offline surface clustering algorithm. Int. J. Res. Comput. Appl. Robot. 5(5), 32–37 (2017)
3. Reddy, D.V.K.: Sorting of objects based on colour by pick and place robotic arm and with conveyor belt arrangement. Int. J. Mech. Eng. Robot. Res. (IJMERR) 3(1) (2014)
4. Marymol, P., Dhanoj, M., Sheeba, V., Reshma, K.: Colour sensor based object sorting robot using embedded system. Int. J. Adv. Res. Comput. Commun. Eng. (IJARCCE) 4(4) (2015)
5. Rautu, S.V., et al.: Sorting of objects based on colour, weight, and type on a conveyor line using PLC. J. Mech. Civil Eng. (JMCE), 6th National Conference RDME 2017 (2017)
6. Miller, S.: Robot Arm with Conveyor Belts. GitHub (2021). https://github.com/mathworks/Simscape-RobotConveyorBelts/releases/tag/21.1.2.3. Retrieved 27 September 2021
7. Banks, J., Carson, J.S., Nelson, B.L., Nicol, D.: Discrete-Event System Simulation, 5th edn. Prentice-Hall, Upper Saddle River, NJ (2010)

Bayesian Model Selection for Trust in Ewom

Bui Huy Khoi(B)

Industrial University of Ho Chi Minh City, Ho Chi Minh City, Vietnam
[email protected]

Abstract. The chapter explores the elements influencing trust in electronic word of mouth (eWOM) when using the goods and services of shopping malls, through four variables: information quality, care information, social influence, and perceived risk awareness. In addition to primary data obtained through a survey of 180 clients, we also used highly reputable information sources to compile the study's data. The research indicates that information quality and perceived risk are the aspects of trust in eWOM that most concern customers. Based on the analysis results, some solutions and recommendations are given to help shopping malls develop and enhance the power of electronic word of mouth under today's increasingly competitive conditions. Finally, the author presents the research implications for administrators and the next research direction. Previous studies relied on linear regression; this paper instead uses optimal model selection by Bayesian consideration for trust in electronic word of mouth.

Keywords: BIC algorithm · Trust in Ewom · Information quality · Care information · Social influence · Perceived risk awareness

1 Introduction

Vietnam has up to 66% of its population using the internet, and up to 94% of those users can easily access the Internet via mobile phones. With a population of over 97 million people, a rapidly growing number of smartphone users (94%), increasingly high demand for the Internet, and an estimated 44.8 million people taking part in online shopping, Vietnam is considered an attractive market for investors in e-commerce. E-commerce sites such as LAZADA, Shopee, Sendo, and Tiki have developed strongly in Vietnam and are no longer strangers to consumers in our country. Electronic word of mouth (eWOM) is a very important and core topic in consumer research, especially amid the strong development of the industrial revolution 4.0 [1]. It offers consumers a better way to gather product information and learn from those who have had the experience. Besides, Vietnam's e-commerce, a method of buying, selling, and exchanging online in the market, has developed well. This has sped up the growth of eWOM in Vietnam and brought additional aspects to this digital marketing approach. But not all consumer behavior is influenced by eWOM.


Around the world, there are studies on eWOM affecting economic activities in many industries [2–6]. We examine the impact of eWOM on consumers' trust and intentions to use shopping malls, with research in Vietnam; therefore, we want to go into this subject to fill the research gap. In this research, electronic word of mouth is carefully analyzed, from its theoretical basis to its practical effects on the behavior of consumers in shopping malls. The paper then examines factors influencing trust in electronic word of mouth (eWOM) using the BIC algorithm in digital systems: a case study at shopping malls in Ho Chi Minh City, Vietnam.

2 Literature Review

2.1 Trust in eWOM (Electronic Word-of-Mouth: Te)

eWOM is word of mouth over the internet; today's new form of online WOM communication is called electronic word of mouth (eWOM) [7]. Many researchers have shown the power of eWOM in today's market landscape [8, 9]. Personal buying patterns changed significantly after the advent of the Internet: since consumers find the internet a convenient way to learn about a product they are interested in, the practice of asking others for assessments of the products they are using has become widespread [9–11]. Changes in the eWOM platform are very beneficial to customers. This form of communication is of particular importance with the emergence of online platforms, making it one of the most influential sources of information on the Web [12]. These technical developments and new forms of communication have changed consumer behavior [13]; as a result of their influence, they make it possible for customers to communicate with one another by providing information about a business, a product, or a brand. In the past, platforms could only communicate one way, but after the development of social media, people and consumers can now check who is providing their real experience with a product or service [9, 14]. Electronic word of mouth also gives companies an edge over traditional WOM, as it allows them to try to understand what motivates consumers to post their opinions online and to gauge the impact of those opinions on others [13]. Consumers' use of technology to voice their opinions about goods or services (eWOM) can be risky for businesses, though, as it may turn into a factor they neglect to consider by enabling people to learn about or spread information about a company, a product, or a brand [7]. To combat this, businesses are looking to gain greater control over online customer reviews by creating virtual spaces on their websites where consumers can leave comments and share their opinions on the products and services of the business [15]. Many researchers have likewise shown the optimization and power of electronic word of mouth in the marketing context [8, 9]. One benefit of social media over other platforms is the ability to have two-way interactive dialogues. Along with the ability to purchase or hunt for specific items or services from strangers, this technology also makes it possible to connect with friends, family, and coworkers through a dynamic online platform [16].


Because communication is swift, people readily let others know about their preferences, likes, and dislikes [8, 17]. Due to these factors, information on social media is processed quickly and trusted strongly [9, 18]. However, this reliance leads to serious problems related to the handling of fraudulent information on social networks [19]. Today's customers want to hear authentic feedback and reviews, and they have gradually become warier of the hard-sell method. Reviews from users, especially those who share their interests, work more effectively for brands [20]. The key and indispensable factor in electronic word of mouth (eWOM) is trust: all eWOM communication relies on it. Kheng et al. [21] stated that trust in information sources can affect consumers' attitudes toward products or services.

2.2 Information Quality (IQ)

Laurent and Kapferer [22] showed that interest comprises four categories: hedonic value, symbolic value, interest, and perceived risk. Fadde and Zaichkowsky [23] argue that consumers with a high level of interest in products will actively seek care information and evaluate all alternatives, while consumers with low interest will not do so. So the author proposes hypothesis H2:

Hypothesis 2 (H2). Care information has an impact on trust in eWOM.

2.3 Social Influence (SI)

Social influence demonstrates how a reference group's influence affects how confident consumers are in using technology. A person's behavior can be positively impacted by the counsel, recommendations, or encouragement of someone who is significant to them, or by people nearby or in the community, such as coworkers, superiors, friends, and community organizations [24]. In order to determine the effect of this factor on trust in consumers' intentions to use the goods and services of shopping malls, the author suggests the following hypothesis:

Hypothesis 3 (H3). Social influence positively affects trust in eWOM.

2.4 Perceived Risk (PR)

Some research identifies risk as subjective, because its magnitude varies depending on the context [9]. Mahmood et al. [9] argued that consumer behavior sometimes involves taking risks that help businesses thrive. [8] explained that subjective emotions may be shaped through risk perception. The risk associated with the type of product is called essential risk: the significance of the product in the consumer's awareness, its price, its performance, and its characterization constitute the inherent risk [25]. Alternatively, handled risk is the unavoidable risk related to the product or service that, given an abundance of data, can be managed [26]. How risk shapes belief is a thought-provoking aspect that needs to be explored:


Hypothesis 4 (H4). Perceived Risk has an impact on trust in eWOM.

[Research model diagram: Information Quality (H1+), Care Information (H2+), Social Influence (H3+), and Perceived Risk (H4+) each point to Trust in eWOM.]

Fig. 1. Research model

All hypotheses and factors are shown in Fig. 1.

3 Methodology

3.1 Sample Size

Bollen [27] states that the minimal sample size required to estimate one parameter is five samples; that is, five observations are needed per variable [28]. Given that there are 21 variables in this study, a minimal sample size of n = 5 × 21 = 105 may be determined. Although the minimal sample size only calls for 105 surveys, the author distributed 180 survey questionnaires indirectly via the Internet, using tools that support Google Forms, to users in the city, in order to ensure sample diversity and representativeness. The statistics and sample characteristics are shown in Table 1.

Table 1. Sample characteristics

Category                  Amount   Percent (%)
Sex and age
  Male                    87       48.33
  Female                  93       51.67
  18–24                   35       19.44
  25–35                   78       43.33
  36–49                   46       25.55
  Above 50                10       5.55
Monthly income
  Below VND 3 million     35       19.44
  VND 3 to 7 million      50       27.78
  VND 8 to 10 million     58       32.22
  VND 11 to 15 million    28       15.56
  Over VND 15 million     9        5.00


3.2 Bayesian Information Criteria

In Bayesian statistics, prior knowledge serves as the theoretical underpinning, and the conclusions drawn from it are combined with the observed data [19]. According to the Bayesian approach, probability is information about uncertainty: probability measures the information's level of uncertainty [20]. The Bayesian approach is becoming more and more popular, especially in the social sciences; with the rapid advancement of data science, big data, and computation, Bayesian statistics became a well-liked technique [21]. The BIC is a significant and practical metric for selecting a complete yet uncomplicated model. Based on the BIC information criterion, the model with the lower BIC is chosen, and the search ends when the minimum BIC value is reached [22].

First, the posterior probability $P(\beta_j \neq 0 \mid D)$ for variable $X_j$ ($j = 1, 2, \ldots, p$) indicates the possibility that the independent variable affects the occurrence of the event (a non-zero effect):

$$P(\beta_j \neq 0 \mid D) = \sum_{M_k \in A} P(M_k \mid D)\, I_k(\beta_j \neq 0) \qquad (1)$$

where $A$ is the set of models selected in Occam's window and $I_k(\beta_j \neq 0)$ is 1 when $\beta_j$ is in model $M_k$ and 0 otherwise. The term $P(M_k \mid D)\, I_k(\beta_j \neq 0)$ is the posterior probability of a model $M_k$ that includes $X_j$. The rules for interpreting this posterior probability are as follows [18]: less than 50%: evidence against an impact; between 50% and 75%: weak evidence for an impact; between 75% and 95%: positive evidence; between 95% and 99%: strong evidence; from 99%: very strong evidence.

Second, the following formulas provide the Bayesian estimate of each coefficient and its standard error:

$$E(\beta_j \mid D) = \sum_{M_k \in A} \tilde{\beta}_j\, P(M_k \mid D) \qquad (2)$$

$$SE(\beta_j \mid D) = \sqrt{\sum_{M_k \in A} \left[ \operatorname{var}(\beta_j \mid D, M_k) + \tilde{\beta}_j^2 \right] P(M_k \mid D) - E(\beta_j \mid D)^2} \qquad (3)$$

where $\tilde{\beta}_j$ is the posterior mean of $\beta_j$ in model $M_k$. Inference about $\beta_j$ is drawn from Eqs. (1), (2), and (3).
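As an illustration only (the authors worked in R), the following Python sketch enumerates all ordinary-least-squares submodels, scores each with BIC, turns BIC differences into approximate posterior model probabilities P(Mk | D), and reads off the inclusion probabilities of Eq. (1): the quantities reported later in Tables 3 and 4.

    import itertools
    import numpy as np

    def ols_bic(X, y):
        """OLS fit; returns coefficients and BIC = -2*LL + log(n)*k (Gaussian errors)."""
        n = len(y)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(((y - X @ beta) ** 2).sum())
        ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # maximized log-likelihood
        return beta, -2 * ll + np.log(n) * X.shape[1]

    def bic_model_average(X, y, names):
        n, p = X.shape
        ones = np.ones((n, 1))
        models = []
        for mask in itertools.product([0, 1], repeat=p):   # every submodel
            cols = [j for j in range(p) if mask[j]]
            beta, bic = ols_bic(np.hstack([ones, X[:, cols]]), y)
            models.append((mask, beta, bic))
        bics = np.array([m[2] for m in models])
        w = np.exp(-0.5 * (bics - bics.min()))
        w /= w.sum()                                       # approx P(Mk | D)
        inclusion = {names[j]: float(sum(w[i] for i, m in enumerate(models) if m[0][j]))
                     for j in range(p)}                    # P(beta_j != 0 | D), Eq. (1)
        return inclusion, models[int(bics.argmin())]       # plus the minimum-BIC model

Called with the survey's factor scores, bic_model_average would report inclusion probabilities analogous to the 100%/38.3% column of Table 3 and identify the minimum-BIC model of Table 4.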

4 Results

4.1 Reliability Test

The Cronbach's Alpha test is a method the author can use to determine the reliability and quality of the observed variables for each factor. This test determines whether there is a close relationship, in terms of compatibility and concordance, among the variables belonging to the same factor. The reliability of a factor increases with its Cronbach's Alpha coefficient: values from 0.8 to 1 indicate a very good scale, 0.7 to 0.8 a good scale, and 0.6 and above a qualified scale. An item is considered to meet the requirements if its corrected item-total correlation (CITC) is greater than 0.3 [33]. The coefficient is computed as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2(x_i)}{\sigma_x^2}\right)$$

Table 2. Reliability

Trust in eWOM (α = 0.806):
  Te1 (CITC 0.641): Gives your knowledge/perspective on goods in shopping malls
  Te2 (CITC 0.688): Helps you easily decide to accept and use shopping mall products
  Te3 (CITC 0.638): Improves efficiency in making the right goods selection for you
Information quality (α = 0.852):
  IQ1 (CITC 0.627): The information you receive is the aim of the sender
  IQ2 (CITC 0.781): You assume the information is provided from the sender's experience
  IQ3 (CITC 0.294): The information you receive or find is effective
  IQ4 (CITC 0.730): The information you receive is presented in a clear and easy-to-understand manner
  IQ5 (CITC 0.810): You think the information provided has a positive purpose in sharing the experience with other consumers
Care information (α = 0.776):
  CI1 (CITC 0.622): You seek information to use goods in the shopping mall
  CI2 (CITC 0.597): You find product information on many websites
  CI3 (CITC 0.668): You spend a lot of time searching for information
  CI4 (CITC 0.621): You look for information given by the consumer and by the vendor
  CI5 (CITC 0.268): You spend a lot of time refining and aggregating information
Social influence (α = 0.820):
  SI1 (CITC 0.765): Many people around me use shopping mall products
  SI2 (CITC 0.783): The important people thought I should use the products of the shopping mall
  SI3 (CITC 0.498): People around me support the use of shopping mall products
Perceived risk (α = 0.707):
  PR1 (CITC 0.561): The goods are not financially suitable
  PR2 (CITC 0.611): Goods not as expected or desired
  PR3 (CITC 0.558): Food safety risks occur when using goods
  PR4 (CITC 0.271): There are inherent risks when using the product

Table 2 shows that the Cronbach's Alpha coefficients of Information Quality (IQ), Care Information (CI), Social Influence (SI), and Perceived Risk (PR) for Trust in eWOM (Te) are all greater than 0.7, and most corrected item-total correlations are higher than 0.3. This indicates that the items are highly correlated within their factors and contribute to an accurate assessment of each factor's concept and attributes. However, the corrected item-total correlations of IQ3, CI5, and PR4 are smaller than 0.3, which suggests that these items are not reliable, so they were discarded. Since all remaining observed variables met the criteria of the Cronbach's Alpha reliability test (Cronbach's Alpha coefficient larger than 0.6 and corrected item-total correlation greater than 0.3), all remaining items were included in the subsequent test step.
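A direct transcription of the α formula above, for checking a factor's reliability on an items matrix (the responses in this sketch are made up):

    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items matrix of scores for one factor."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of sigma^2(x_i)
        total_var = items.sum(axis=1).var(ddof=1)        # sigma_x^2 of the scale totals
        return k / (k - 1) * (1 - sum_item_var / total_var)

    # Hypothetical 5-point responses for a three-item factor:
    print(cronbach_alpha([[4, 5, 4], [3, 3, 4], [5, 4, 5], [2, 3, 2]]))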


4.2 BIC Algorithm

Many different methods have been developed and thoroughly investigated for finding association rules in transaction databases. The presentation of additional mining algorithms offered more mining capabilities, including incremental updating, generalized and multilevel rule mining, quantitative rule mining, multidimensional rule mining, constraint-based rule mining, mining with multiple minimum supports, mining associations among correlated or infrequent items, and mining of temporal associations [34]. Big data analytics and deep learning are two data science subfields attracting a lot of attention, and Big Data has become more significant as more people and organizations have gathered vast amounts of data [35]. The R program used the BIC (Bayesian Information Criterion) to determine which model was best. BIC has been employed in theoretical settings to select models, and it can be employed for regression models that estimate one or more dependent variables from one or more independent variables [36]. The BIC is a significant and practical metric for selecting a complete and simple model: based on the BIC information criterion, the model with the lower BIC is selected [32, 36, 37]. The R report displays each stage of the search for the ideal model, and Table 3 lists the top two models chosen by BIC.

Table 3. BIC model choice

Te          Probability (%)   SD        Model 1   Model 2
Intercept   100.0             0.27869   0.80475   0.61491
IQ          100.0             0.05282   0.41308   0.37845
CI          38.3              0.04887             0.08531
SI          100.0             0.04869   0.17745   0.17875
PR          100.0             0.04794   0.22077   0.21777

The models in Table 3 have four independent variables and one dependent variable. Information Quality (IQ), Social Influence (SI), and Perceived Risk (PR) have a posterior probability of 100%, while Care Information (CI) has a probability of only 38.3%.

4.3 Model Evaluation

Table 4. Model test

Model     nVar   R2      BIC          Post prob
Model 1   3      0.501   −109.61653   0.617
Model 2   4      0.513   −108.66267   0.383

BIC = −2 · LL + log(N) · k

According to the results in Table 4, BIC shows that model 1 is the optimal selection because its BIC (−109.61653) is the minimum. The combined impact of Information Quality (IQ), Care Information (CI), Social Influence (SI), and Perceived Risk (PR) on Trust in eWOM (Te) is 50.1% (R2 = 0.501) in Table 4. According to BIC, model 1 is the best option, and the posterior probability of this three-variable model is 61.7% (post prob = 0.617). The analysis above demonstrates that the following regression equation is statistically significant:

Te = 0.80475 + 0.41308 IQ + 0.17745 SI + 0.22077 PR   (4)

Code: Information Quality (IQ), Social Influence (SI), Perceived Risk (PR), Trust in eWOM (Te).
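Applying Eq. (4) is a single linear combination; for example, scoring a respondent with hypothetical factor means IQ = 4.0, SI = 3.5, PR = 3.0:

    def predict_te(iq, si, pr):
        """Trust in eWOM predicted by the selected model, Eq. (4)."""
        return 0.80475 + 0.41308 * iq + 0.17745 * si + 0.22077 * pr

    print(predict_te(4.0, 3.5, 3.0))  # approximately 3.74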


5 Conclusion

This paper uses optimal selection by the BIC algorithm for Trust in eWOM (Te). The BIC analysis retains three factors for Trust in eWOM: Information Quality (0.41308), Social Influence (0.17745), and Perceived Risk (0.22077), of which Information Quality (0.41308) has the strongest impact. The chapter's investigation yielded results that are remarkably comparable to those of earlier investigations.

Limitations of the Research and Further Research Directions

Some limitations were encountered when doing this research; hopefully, the authors can research more deeply and solve the problems of previous studies to contribute to shopping malls. In terms of research capacity and time, the sample selection and sample size are not as expected, so it was only possible to survey via Google Forms. The study subjects are fairly complete but still have many shortcomings. The variables in the model are only some of the influencing factors, not all of the factors that affect users' trust. The sample size selected for the study is still small compared to the overall population, which may also adversely affect the reliability of the results. Some customers were not very attentive when completing the survey and answered hastily, so those collected opinions carry little meaning; after compiling the data, the test was run but was not statistically significant in places, so a re-survey was needed to obtain good data. There are many factors affecting users' trust in eWOM that have not been mentioned and that could explain trust in eWOM more clearly. Besides the theoretical and practical contributions drawn from the research results, this topic has many limitations, which suggest the following future research: include more trust-related factors to increase the reliability of the study; increase the number of survey samples relative to the overall population; and expand the survey to many major cities across the country.

References

1. Bianchi, A.: Driving Consumer Engagement in Social Media: Influencing Electronic Word of Mouth. Routledge (2020)
2. Roy, G., Datta, B., Mukherjee, S., Basu, R.: Effect of eWOM stimuli and eWOM response on perceived service quality and online recommendation. Tour. Recreat. Res., 1–16 (2020)
3. Ngarmwongnoi, C., Oliveira, J.S., AbedRabbo, M., Mousavi, S.: The implications of eWOM adoption on the customer journey. J. Consum. Mark. (2020)
4. Babić Rosario, A., de Valck, K., Sotgiu, F.: Conceptualizing the electronic word-of-mouth process: what we know and need to know about eWOM creation, exposure, and evaluation. J. Acad. Mark. Sci. 48(3), 422–448 (2019). https://doi.org/10.1007/s11747-019-00706-1
5. Srivastava, M., Sivaramakrishnan, S.: The impact of eWOM on consumer brand engagement. Mark. Intell. Plan. (2020)
6. Ismagilova, E., Slade, E.L., Rana, N.P., Dwivedi, Y.K.: The effect of electronic word of mouth communications on intention to buy: a meta-analysis. Inf. Syst. Front., 1–24 (2019)
7. Yang, W.S., et al.: Iodide management in formamidinium-lead-halide-based perovskite layers for efficient solar cells. Science 356(6345), 1376–1379 (2017)
8. Mahmood, K., Khalid, A., Ahmad, S.W., Qutab, H.G., Hameed, M., Sharif, R.: Electrospray deposited MoS2 nanosheets as an electron transporting material for high efficiency and stable perovskite solar cells. Sol. Energy 203, 32–36 (2020)
9. Acemoglu, D., Cheema, A., Khwaja, A.I., Robinson, J.A.: Trust in state and nonstate actors: evidence from dispute resolution in Pakistan. J. Polit. Econ. 128(8), 3090–3147 (2020)
10. Babić Rosario, A., Sotgiu, F., De Valck, K., Bijmolt, T.H.: The effect of electronic word of mouth on sales: a meta-analytic review of platform, product, and metric factors. J. Mark. Res. 53(3), 297–318 (2016)
11. Kim, S., et al.: Removal of contaminants of emerging concern by membranes in water and wastewater: a review. Chem. Eng. J. 335, 896–914 (2018)
12. Abubakar, A.M., Ilkan, M.: Impact of online WOM on destination trust and intention to travel: a medical tourism perspective. J. Destin. Mark. Manag. 5(3), 192–201 (2016)
13. Cantallops, A.S., Salvi, F.: New consumer behavior: a review of research on eWOM and hotels. Int. J. Hosp. Manag. 36, 41–51 (2014)
14. Abedi, M.M., Stovas, A.: A new parameterization for generalized moveout approximation, based on three rays. Geophys. Prospect. 67(5), 1243–1255 (2019)
15. García-Gallego, A., Mures-Quintana, M.J., Vallejo-Pascual, M.E.: Forecasting statistical methods in business: a comparative study of discriminant and logit analysis in predicting business failure. Glob. Bus. Econ. Rev. 17(1), 76–92 (2015)
16. Muszyńska, B., Grzywacz-Kisielewska, A., Kała, K., Gdula-Argasińska, J.: Anti-inflammatory properties of edible mushrooms: a review. Food Chem. 243, 373–381 (2018)
17. Bashir, A.M.: Effect of halal awareness, halal logo and attitude on foreign consumers' purchase intention. Br. Food J. (2019)
18. Oussous, A., Benjelloun, F.-Z., Lahcen, A.A., Belfkih, S.: ASA: a framework for Arabic sentiment analysis. J. Inf. Sci. 46(4), 544–559 (2020)
19. Lkhaasuren, M., Nam, K.-D.: The effect of electronic word of mouth (eWOM) on purchase intention on Korean cosmetic products in the Mongolian market. J. Int. Trade Commer. 14(4), 161–175 (2018)
20. Pezzelle, S., Steinert-Threlkeld, S., Bernardi, R., Szymanik, J.: Some of them can be guessed! Exploring the effect of linguistic context in predicting quantifiers. arXiv preprint arXiv:1806.00354 (2018)
21. Kheng, V., Sun, S., Anwar, S.: Foreign direct investment and human capital in developing countries: a panel data approach. Econ. Change Restruct. 50(4), 341–365 (2016). https://doi.org/10.1007/s10644-016-9191-0
22. Laurent, G., Kapferer, J.-N.: Measuring consumer involvement profiles. J. Mark. Res. 22(1), 41–53 (1985)
23. Fadde, P.J., Zaichkowsky, L.: Training perceptual-cognitive skills in sports using technology. J. Sport Psychol. Action 9(4), 239–248 (2018)
24. de Sena Abrahão, R., Moriguchi, S.N., Andrade, D.F.: Intention of adoption of mobile payment: an analysis in the light of the unified theory of acceptance and use of technology (UTAUT). RAI Revista de Administração e Inovação 13(3), 221–230 (2016)
25. Pelaez, A., Chen, C.-W., Chen, Y.X.: Effects of perceived risk on intention to purchase: a meta-analysis. J. Comput. Inf. Syst. 59(1), 73–84 (2019)
26. Dabrynin, H., Zhang, J.: The investigation of the online customer experience and perceived risk on purchase intention in China. J. Mark. Dev. Compet. 13(2), 16–30 (2019)
27. Bollen, K.A.: Overall fit in covariance structure models: two types of sample size effects. Psychol. Bull. 107(2), 256 (1990)
28. Hair, J.F., Black, W.C., Babin, B.J., Anderson, R.E., Tatham, R.L.: Multivariate Data Analysis, vol. 6. Pearson Prentice Hall, Upper Saddle River (2006)
29. Bayes, T.: LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, FRS, communicated by Mr. Price, in a letter to John Canton, AMFRS. Philos. Trans. R. Soc. Lond. (53), 370–418 (1763)
30. Thang, L.D.: The Bayesian statistical application research analyzes the willingness to join in area yield index coffee insurance of farmers in Dak Lak province. University of Economics Ho Chi Minh City (2021)
31. Gelman, A., Shalizi, C.R.: Philosophy and the practice of Bayesian statistics. Br. J. Math. Stat. Psychol. 66(1), 8–38 (2013)
32. Raftery, A.E.: Bayesian model selection in social research. Sociol. Methodol., 111–163 (1995)
33. Nunnally, J.C.: Psychometric Theory, 3rd edn. Tata McGraw-Hill Education (1994)
34. Gharib, T.F., Nassar, H., Taha, M., Abraham, A.: An efficient algorithm for incremental mining of temporal association rules. Data Knowl. Eng. 69(8), 800–815 (2010)
35. Najafabadi, M.M., Villanustre, F., Khoshgoftaar, T.M., Seliya, N., Wald, R., Muharemagic, E.: Deep learning applications and challenges in big data analytics. J. Big Data 2(1), 1–21 (2015). https://doi.org/10.1186/s40537-014-0007-7
36. Raftery, A.E., Madigan, D., Hoeting, J.A.: Bayesian model averaging for linear regression models. J. Am. Stat. Assoc. 92(437), 179–191 (1997)
37. Kaplan, D.: On the quantification of model uncertainty: a Bayesian perspective. Psychometrika 86(1), 215–238 (2021). https://doi.org/10.1007/s11336-021-09754-5

Review of Challenges and Best Practices for Outcome Based Education: An Exploratory Outlook on Main Contributions and Research Topics

Shankru Guggari1(B), Kingsley Okoye2, and Ajith Abraham3

1 School of Computer Science and Engineering, KLE Technological University, Hubli, India
[email protected]
2 Research Professor, Institute for Future of Education, Tecnologico de Monterrey, Monterrey, Mexico
[email protected]
3 Director, Machine Intelligence Research Labs, Auburn, USA
[email protected]

Abstract. Education is widely regarded as the main factor for improving the quality of human life. Outcome based education (OBE), in turn, is seen as a tool to provide quality education based on predefined quality objectives or learning outcomes, and it helps to achieve organizational goals. This study holds that a theoretical and practical understanding of the basic challenges and opportunities of outcome based education is crucial, and an important task in present-day education and society. To this end, the present paper systematically reviews the basic challenges of outcome based education across different regions and application settings. It also brings to light the challenges currently described in the available literature with respect to different scholarly domains such as engineering, medicine, and management. Finally, the best practices for OBE are empirically discussed in detail.

Keywords: Outcome based education · Engineering · Medicine · Management · Regions · Best practices · Challenges

1 Introduction

Outcome Based Education (OBE) is an important recent topic within the education sector. It bridges the gap between education and the changing environment, and serves as a systematic approach to evaluate the teaching and learning performance of both instructors and students [19], [1]. OBE curricula are not standardized, but aim to achieve standardized outcomes. Didactically, OBE focuses on the outcomes (what needs to be learned) of the particular course(s) that teachers and students teach or learn, and foregrounds the significant elements of the learning process rather than trivial learning in itself. The learning outcomes are therefore designed based on what the learners want to learn, and they help learners to learn the courses or skills effectively [19]. In an OBE-designed system or curriculum, all students are active, motivated learners, and teachers are required to provide challenging, rigorous activities (challenge-based learning) [27,28] and context-relevant instruction to all concerned students. Thus, OBE is a future-oriented learning method focused on the bridge between what students learn and what they actually do or achieve. With this method, teachers do not follow any specific teaching style, nor do they follow specific assessment procedures or patterns. In OBE-designed curricula, teachers' decisions are trusted, which enables active learning opportunities. To develop effective OBE, it is therefore necessary to establish professional standards boards that encourage professional standards accreditation. Pedagogically, OBE supports the use of online resources and asynchronous learning [18,29]. OBE has evolved through the years, with different kinds of skills and training strategies, but it fulfils the same objective (i.e., to produce quality students who meet industry expectations) [23]. OBE is an alternative teaching pedagogy to traditional teacher-centric education; performance based education is another name for it. Effective OBE contains three main components:

1. Explicit and measurable outcomes.
2. A strategy-driven process to achieve the predefined outcomes.
3. Measurement or assessment criteria.

In this study, we investigate and systematically present the challenges and opportunities of OBE with respect to various countries, regions, and domains, along with some OBE best practices that can be adopted by educators and concerned stakeholders. The rest of the paper is organized as follows: challenges with OBE faced by different countries are presented in Sect. 2. Domain-based challenges of OBE are described in Sect. 3. Subsequently, best practices of OBE are discussed in Sect. 4. Finally, concluding remarks and future directions for research are given in Sect. 5.

2 Challenges of OBE Implementation in Different Countries

In this section, we explore the challenges faced by different countries in implementing OBE.

2.1 Engineering Design and Education in Bangladesh

Proper Program Educational Objectives (PEOs), which express the broader professional accomplishments of a program or learning process, must be designed to achieve the cognitive (things the learners need to know), behavioural (things the learners are able to do), and affective (what the learners think or care about) objectives. Likewise, the Program Objectives (POs) represent narrower statements of what students are expected to know. It is the educators' or organization's duty to examine and ensure that the POs are aligned with the organization's mission, vision, and goals. OBE-designed programs thus clearly define and describe the learning outcomes. In the Bangladesh context, the main identified challenges with OBE are providing additional in-depth knowledge about subjects like probability and statistics, and adding applications of electrical engineering, differential and integral calculus, complex variables, basic sciences, computer science, etc. [13]. There is a need to understand the OBE system and provide proper recommendations towards implementing it, as follows [13]:

1. Preparing curricula for engineering programs.
2. Addressing the lack of proper assessment and evaluation methodologies for PEOs and POs.
3. Limiting the outcomes based on the vision, mission, and goals of the respective institutions.

2.2 Outcome-Based Education for Linguistic Courses in Hong Kong

In this section, we describe some challenges faced by both learners and teachers in linguistic courses [16].

Challenges for teachers [16]:

1. Teachers must be aware of the content of the course(s); OBE does not give importance to the content itself.
2. Designing an outcome based course structure is a very challenging task.
3. Teachers must be flexible while presenting information to the learners. OBE provides expanded opportunity to frame course structures. There are four basic principles of OBE: (1) syllabus design; (2) structuring of the learning experience; (3) building a positive learning environment; and (4) achieving outcomes using tools like computers and electronic (as well as non-electronic) gadgets.
4. Focus on building individual and organizational capabilities. The instructional system is an essential component in developing technical skills.
5. Always keep track of the curriculum design to meet the significant outcomes.
6. Implementing OBE and enabling strong student learning requires a tremendous amount of time and energy.

Challenges for learners [16]: Similarly, learners in Hong Kong face the following challenges:

1. Assessment procedures vary from teacher to teacher; as a result, learners are overburdened by the assessment methodologies. Multiple assessment tasks, like group presentations, online quizzes, etc., tend to make students lose concentration on the delivered course(s).
2. Risk of learners being involved in too many activities for multiple courses at the same time. For instance, if course A has assessment tasks at the same time as course B, the risk for the learners doubles.
3. OBE requires repeated tasks, i.e., learners need to undergo similar work, understand the assessment criteria carefully, and accept the teachers' feedback at regular intervals.

2.3 Challenges of OBE in India

In this review study, we explored the challenges of TYPE IV colleges, i.e., colleges founded (a) after 1995, and (b) some private institutes 'deemed to be university' by the University Grants Commission (UGC) (an authorized agency for higher education in India), as follows [10]:

For faculty:

1. More than 50
2. A major number of faculty members do not have any research activity.

For students:

1. Not enough prerequisite knowledge.
2. Preference for passive and rote learning.
3. Deficiency of fluency in the English language in terms of reading, speaking, and writing.
4. Students prefer memorizing answers from model papers.
5. In India, 80% of students come from a school based education system, not an OBE-based system.
6. The majority of students and faculty live outside the campus, so they do not do anything beyond or outside the curriculum.

Suggested frameworks to design outcome based education for faculty development initiatives:

1. Organize orientation and training programs for faculty, e.g., organizing training and workshops for faculty at regular intervals.
2. Require mentoring on the writing of learning outcomes and assessments.
3. Design new teaching strategies and lecture plan templates: design lecture plans using existing principles like brainstorming activities, inductive teaching, problem recognition tasks, and a combination of theory and practical experiments.
4. Classroom observation: nominate senior faculty to take feedback from the students twice per semester. Feedback should cover the following:
   – Teaching strategies.
   – Content depth.
   – Ability to deal with students' questions.
   – Microteaching: allow faculty to teach a subject for about 15 min to 2 h, and provide feedback openly.
   – Reflective notes and continuous improvement: faculty are expected to benefit from reflective notes.

Furthermore, Simon Fraser University also conducted a study on OBE for mechatronics systems engineering and reported the following challenges [8]:

1. More research needs to be carried out to improve learning experiences, especially when the class size is large; the study suggests splitting the class to achieve the predefined objectives and make the students better thinkers and innovators. Engagement, lack of sufficient resources, and interacting with the students are the major drawbacks.
2. Identification of learners' characteristics in terms of their expectations.
3. Taking care of teaching practice and its evaluation steps.
4. Since students' motivation and engagement in classroom sessions are the main concerns, steps must be taken to take care of both active and passive learners and to define the instructor's role in the classroom. Designing a new curriculum in which students are treated as legitimate speakers and practitioners is also crucial.
5. Responsibilities of faculty members in implementing OBE with respect to an externally mandated transition, and thus the practical implementation of OBE methods that aim to improve the students' learning experience.
6. Lack of usage of student-centric tools in classroom sessions, as such tools give students more practical exposure.
7. Students must think about what and why they are studying, and believe they will be successful.
8. Gap between 21st-century students and traditional students. Instructors characterize 21st-century students as more adaptive: they adjust to new environments and opportunities for frequent change, tend to take care of their own learning, and can handle huge amounts of information.
9. Discrepancy between the teaching objectives and OBE goals.
10. In terms of students' engagement and motivation, instructors noted that students are more adapted to passive learning than to active learning.

2.4 Outcome Based Education in the Israel Defence Force

Immigrant physicians are valued resources in many countries. Many studies, e.g., [5], clarify that immigrant physicians require training to adapt to the new health care systems in which they work. The study in [5] describes OBE under a continuing medical education program that provides military-specific primary care education and helps to improve the performance of immigrant physicians. The curriculum was designed over three years, from 2003 to 2006, and was delivered through multidisciplinary educational teams. Assessment of the method was performed with the help of pre and post multiple-choice examinations, objective-type exams, and end-of-program exams. The study [5] involved 28 physicians, of whom 23 were from the former Soviet Union, 3 from France, and 2 from Latin America. To create effective teaching and learning, learners were randomly divided into two groups, each comprising seven men and seven women. Cohen's d was used to measure the effectiveness of the sample size and the learners' performance with respect to practice, and Kirkpatrick's model was used to summarize the evaluation. The model helped to determine the learners' satisfaction (i.e., how the participants reacted to the program), knowledge gained in terms of learning outcomes, skills developed, change in attitude, and changes in the participants' health. The results of the experimental analysis suggest that the outcome based continuing medical education was effective and addresses the issues of immigrant physicians [14]. However, despite the promising impact and effectiveness of the method, the study showed some limitations:

1. Information about the learners' quality assurance scores before and after the program was retrieved retrospectively, so the statistical tests may not fully support the results.
2. Only the patients' outcomes were calculated.
3. No significant action was taken to define structured content.
4. Educational needs still varied from country to country.

2.5 Implementation of OBE in Malaysia for Higher Education

In 2004, the Engineering Accreditation Council of Malaysia made it mandatory to use OBE methodologies in engineering education [9]. Later, in 2010, all domains of science and technology, social science, and humanities courses adopted the OBE technique to ensure the quality of higher education across national boundaries. The study was conducted using data from 250 students in a university setting from April 2011 to October 2011 [9].

Methodology used [9]: At the beginning of the semester, the faculty of the department administered an entrance survey (filled in by the students) to understand the students' knowledge before exposing them to OBE. Using technology, faculty members then taught the course with the OBE methodology; in this process, students could download or retrieve the content from the i-Learn system (a computer system) [9]. At the end of the semester, an exit survey was conducted to gauge the level of knowledge the students had acquired using the OBE method. This was measured using descriptive statistics such as the mean and standard deviation, along with inferential statistics such as the Pearson correlation, analysis of variance, and the t-test, to determine significant differences before and after implementation of the OBE method. The study revealed a negative correlation between OBE grade score and class size, and therefore suggested small class sizes so that student activity towards achieving the outcomes can be monitored. The results also suggested that there is no significant difference between the OBE grade score and the average gap between the entrance and exit surveys.
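The comparison of entrance and exit surveys described above maps onto standard statistical routines. The following sketch reproduces the flavour of that analysis with SciPy; all scores and class sizes below are hypothetical, generated only to illustrate a paired t-test and the kind of negative grade/class-size correlation the study reports.

# Minimal sketch of the statistics reported in the Malaysian study:
# a paired comparison of entrance/exit survey scores, and a correlation
# between OBE grade score and class size. All data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
entrance = rng.normal(3.1, 0.6, size=250)          # entrance survey (Likert means)
exit_ = entrance + rng.normal(0.4, 0.5, size=250)  # exit survey after OBE teaching

# Paired t-test: is the mean gain after OBE significant?
t_stat, p_val = stats.ttest_rel(exit_, entrance)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

# Pearson correlation: OBE grade score vs. class size
class_size = rng.integers(20, 120, size=250)
grade = 4.5 - 0.01 * class_size + rng.normal(0, 0.3, size=250)
r, p = stats.pearsonr(class_size, grade)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # negative r mirrors the reported finding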

2.6 Outcome Based Education in Pharmacy Education in Canada

Changing demographics and rising health care costs lend significant importance to OBE in pharmaceutical education in Canada, as practicing pharmacists are not equipped for the challenges associated with their professional roles [25].

Methodology [25]:

1. The study begins with the identification of gaps in pharmacy education, which comprises meetings with stakeholders, local practicing professionals, graduates, and educational curriculum stakeholders.
2. It measures the gaps using questionnaires on business management, shared with practicing pharmacists.
3. It builds assessment tools for industrial case studies.
4. It shares the case studies with partner Canadian pharmacy schools, concentrating on students who have taken business courses.
5. It discusses the strengths and weaknesses of the findings.
6. Finally, it analyzes and compares the participating programs with respect to curricula, outcomes, educational strategies adopted, and case study results.

In the upcoming sections, we describe the domain-specific challenges of OBE.

3 Domain Based Challenges of OBE

The domain is an important component in understanding the effectiveness and implementation of the various strategies used to achieve the objectives of a course. In this section, we discuss, with brief descriptions, the various domains where OBE is adopted.

3.1 Outcome Based Education in Civil Engineering

In the study by [8], students' perceptions were explored to implement OBE. The experiments were based on the perceptions of 79 undergraduate and graduate civil engineering students. Data was collected through a 10-item questionnaire covering different courses such as concrete technology, rehabilitation of structures, etc. The results reveal that OBE yields positive effects in terms of quality enhancement and quality assurance. The mean score across all survey questions corresponded to more than 80% positive responses. The time and effort spent by the instructors in preparing lecture notes and designing course activities is greater for graduate students than for undergraduate students. Graduate and senior-level students also responded positively to value-based learning. Understanding of the course topics by undergraduate and graduate students showed average scores of 4.71 and 4.89, respectively, while the lowest score concerned defining the relationship between the degree program and practicing civil engineers.

Similarly, for the diploma in civil engineering at Universiti Teknologi MARA, program outcomes were designed based on OBE for each civil engineering course [14]. These include: acquiring and applying fundamental knowledge of civil engineering courses; understanding social and cultural responsibilities, along with the ethics of each designation such as assistant engineer or technical assistant; and pursuing lifelong learning, i.e., continuously acquiring knowledge. The program outcomes related to contemporary knowledge showed a decrease in percentage performance under OBE, while the outcomes "ability to communicate effectively" and "to develop effective team leaders or managers" indicated an increase in percentage performance after implementing OBE. Survey questionnaires from more than 1000 undergraduate students were collected at the Faculty of Civil Engineering, Universiti Teknologi MARA. The questionnaire was divided into three sections:

1. Demographic information.
2. Statements on the implementation of OBE.
3. Outlines of statements.

A reliability test was performed to evaluate the acceptance/rejection of the tested hypotheses, keeping the alpha threshold at 0.70. The results indicate that around 56.09% agreed with the OBE implementation in the organization, and more than 50% of the respondents agreed with both lifelong learning and student-dependency learning.
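For readers who want to reproduce the reliability step, Cronbach's alpha has a closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), with 0.70 as the usual acceptance threshold used above. The sketch below computes it on hypothetical Likert responses; none of the numbers come from the cited survey.

# Cronbach's alpha for a k-item questionnaire:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
# Responses below are hypothetical 5-point Likert data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(1000, 1))                      # shared trait
responses = np.clip(latent + rng.normal(0, 0.8, (1000, 10)), 1, 5)
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}, acceptable = {alpha >= 0.70}")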

3.2 Challenges of Marriage and Family Therapy Education [21]

Some of the main challenges identified in this area include:

1. Requirement for modification of the education philosophy.
2. Educators need to invest more money and time.
3. The nature of the outcomes themselves is very difficult.
4. The syllabus is based on the outcomes of the course, and those are largely dictated by outsiders rather than the educators.
5. Difficulty in finding specified outcomes, and in measuring them accurately.
6. Accreditation standards are needed to articulate and understand the basic concepts of marriage and family therapy education.
7. Need for preliminary training from other domains like psychology, social work, counseling, etc.
8. Clearly defining the social causes and teaching methods that are part of marriage and family therapy education.

3.3 Challenges of Outcome Based Education in Medical Education

Medical education is a crucial field in human society, and the practical implementation and understanding of each of its components is very important. The basic challenges encountered by both medical graduates and faculty are as follows [11]:

1. Expenses on assessments.
2. Uncertainties among the students.
3. Lack of understanding of responsibilities by the faculty members.
4. Complexities in handling the education processes.
5. Difficulties in structuring (planning) the education process and its implementation.
6. The individualization (student initiative on a particular task, motivation to carry out the task, and enabling students to pursue their unique interests) and flexibility (students plan their own schedule to execute their outcomes) that OBE provides are very difficult to achieve in medical education.
7. Mutual accountability is very difficult to establish among medical institute stakeholders.
8. Purposefully combining the redesign of medical education with the delivery of clinical care is difficult.
9. Difficulty in measuring outcome based education metrics in the domain.

Similarly, the nephrology training program designed by the Accreditation Council for Graduate Medical Education (ACGME) reported the following challenges [22]:

1. Difficulty in evidence-based education.
2. Addressing the knowledge gap in urinalysis.
3. Inability to maintain standardization and interpretation of diagnostic studies.
4. Limited formal training.
5. Challenges in measuring and validating nephrology training.

Another related study, in anthropology (which can be allied to the study of human behaviour), describes some challenges of outcome based education [17]:

1. Respondents cite concerns about Bloom's taxonomy.
2. Dividing up the ideas of anthropology will affect the novelty of the subject.
3. Quantitatively assessing students' performance is a complex and challenging task.
4. The professional livelihood of anthropologists is a major concern.
5. Difficulty in understanding academic freedom and the course contents.

Competency-based education in ophthalmology was likewise an outcome based training model used to shape curricula; in this model, individual programs are governed and evaluated through specific outcomes. The following are a few challenges faced in ophthalmology training [31]:

1. Logistical concerns are an important and crucial challenge.
2. Difficulty in designing assessment tools that bring transparency to training.
3. It creates a huge amount of administrative work.
4. Significant difference between the financial compensation for performing director duties and the amount of time spent on them.
5. Lack of support for, and understanding of, the basic concepts by other faculty members.
6. Poor appreciation of the work of developing new goals, objectives, evaluation methods, and assessment tools, and of implementing a new curriculum with fewer resources.

Similarly, based on an advanced training skills module close to Bloom's taxonomy, 12 outcomes were developed by the Royal College of Obstetricians and Gynaecologists (RCOG) [20], as follows. For doctors:

1. Expertise in clinical skills: managing both normal and abnormal labours, and ensuring that trainees are monitored throughout the module, thus clearly defining trainee criteria for each clinical encounter.

2. Competence to perform clinical procedures: at this stage the trainee should perform all types of instrumental delivery and be confident in performing difficult caesarean sections for extremely pre-term babies, placenta praevia, and abdominal adhesions. This outcome has a few drawbacks, however: (a) it does not provide clarity about the number of operations, and (b) it demands proficiency in rare surgical cases like obstetric hysterectomy and emergency cervical cerclage.
3. Developing proper investigation techniques to treat a patient: the module should describe the appropriate tests the trainee should perform in particular clinical situations.
4. Capability to manage a patient: this module deals with patient care involving physicians, blood bank professionals, and anaesthetists. It should also take care of the intensive treatment unit for a particular syndrome and recognize appropriate treatment.
5. Developing quality competence in both health promotion and disease prevention: this urges trainees to understand the risks associated with a particular disease, but does not describe the exact referral points.
6. Ability to communicate properly with patients and colleagues: this module also covers courses such as how to break bad news and how to develop communication skills in difficult situations.
7. Handling and retrieving appropriate information, which includes maintaining proper documents and the timing of events.
8. Understanding the basics of clinical operations and their social impact, such as counseling.
9. Maintaining an appropriate attitude and legal responsibilities, along with ethical concepts.
10. Developing appropriate judgment and decision-making skills in evidence-based education: the trainee should review the existing guidelines.
11. Appreciating the role of doctors within the health services in terms of management skills and eagerness to do research for the benefit of society.
12. Developing an appropriate aptitude for the personal development of trainees.

3.4 Challenges of Outcome Based Education for Supply Chain Management

In another study, OBE is applied in management studies to demonstrate its impact; OBE can be applied in various fields where there is no systematic technique for evaluating specific outcomes. In [2], an outcome based education approach based on the beer game is introduced to understand the effectiveness of supply chain management teaching in a classroom setting. The method was used to trace the students' learning process via the total supply chain cost and ordering fluctuations. The study has two main objectives [2]: first, the beer game is used to measure learners' progress in the classroom based on a weekly simulation report (automatically generated by software); second, it focuses on the relationship between the learners' interactions and their overall performance in the course. The method consisted of four classic components:

1. The retailer has to fulfill the consumers' orders.
2. The wholesaler has to fulfill the retailer's orders.
3. The distributor has to fulfill the wholesaler's orders.
4. The factory has to fulfill the distributor's orders.

The study [2] used data from 56 students with engineering management and MBA backgrounds. The participants were randomly divided into 2 main groups, each with 7 subgroups of 4 members. For 24 weeks, each group simulated the supply chain scenario; interaction between the 2 groups was not permitted, but members within the subgroups of a group could interact freely. Finally, performance was measured against the total cost and ordering fluctuations. The results showed evidence that interaction among the components of a supply chain significantly affects the total cost of the chain, and is also reflected in the ordering pattern through the bullwhip effect (a simplified simulation sketch follows the summary below). In summary:

1. Learners' interaction with the dynamic nature of supply chain management transactions is the main challenge.
2. Measuring overall performance is a difficult task.
3. Many critical assessment criteria are ignored due to human error.
4. Fluctuations in the supply chain are unpredictable.
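The four-component ordering dynamic above is easy to simulate, and doing so makes the reported bullwhip effect concrete. The sketch below is a deliberately simplified beer game (one-week shipping delay, a stock-correction ordering rule, unmet demand lost); none of its parameters come from the cited study.

# Toy four-tier beer game with a one-week shipping delay and a simple
# stock-correction ordering rule: order = demand seen + GAIN * (target - stock).
# The correction term lets a step in consumer demand ripple upstream with
# growing amplitude (the bullwhip effect). All parameters are hypothetical.
import numpy as np

WEEKS, TARGET, GAIN = 24, 12.0, 0.5
consumer = np.r_[np.full(4, 4.0), np.full(WEEKS - 4, 8.0)]  # step demand

tiers = ["retailer", "wholesaler", "distributor", "factory"]
inv = {t: 12.0 for t in tiers}           # on-hand inventory per tier
arriving = {t: 0.0 for t in tiers}       # shipment due next week
orders = {t: [] for t in tiers}

for week in range(WEEKS):
    incoming = consumer[week]
    for i, t in enumerate(tiers):
        inv[t] += arriving[t]            # last week's shipment arrives
        arriving[t] = 0.0
        shipped = min(inv[t], incoming)  # unmet demand is lost (simplification)
        inv[t] -= shipped
        if i > 0:                        # downstream tier receives it next week
            arriving[tiers[i - 1]] += shipped
        order = max(0.0, incoming + GAIN * (TARGET - inv[t]))
        orders[t].append(order)
        incoming = order                 # becomes demand for the tier above
    arriving["factory"] += orders["factory"][-1]  # outside supply is unlimited

for t in tiers:                          # order variance typically grows upstream
    print(f"{t:12s} order variance = {np.var(orders[t]):6.1f}")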

3.5 Challenges of OBE in Chemical Engineering

This subsection explains the challenges of OBE with respect to chemical engineering [23]. In [23], the Bologna degree system was used to define the learning outcomes with the support of computer-based education. It demonstrated capabilities ranging from solving individual zero-degrees-of-freedom numerical problems to positive-degrees-of-freedom optimization techniques. It has three cycles: the first cycle focuses on programming knowledge and on analyzing the identified problem using various software tools; the second cycle concentrates on the process design and analysis of the problem; and the final (third) cycle addresses the synthesis of process systems. The main challenges in terms of the cycles are as follows:

Cycle 1:

1. Knowledge: knowledge of the main arithmetic operands, operators, and operator precedence, along with the common logical and relational operators.
2. Comprehension: understanding the basic structures of computer programs, such as "if-else" statements and "loops", and producing the desired output.
3. Application: building flow charts and developing source code; using spreadsheets for data storage and manipulation; using library functions to solve a given problem.
4. Analysis: identifying degrees of freedom and comparing the various solutions with traditional methods (a toy zero-degrees-of-freedom example follows the cycle lists below).

Cycle 2:

1. Application: using stationary and dynamic simulations through various software tools.
2. Analysis: analyzing the different hypotheses and distinguishing between different methods.
3. Synthesis: testing different programs; comparing their characteristics and performance; proposing appropriate methods; generating different programs and combining various methods to achieve specific goals; creating linear and non-linear models for decision-making, process design, and synthesis.
4. Evaluation: comparing the different solutions and evaluating their efficiency and sustainability; recommending optimal solutions within the specific circumstances.

Cycle 3:

1. Synthesis/creating: developing algorithms and innovative computing solutions; designing global optimization methods.
2. Evaluation: evaluating algorithms and models, comparing results, verifying the models, and developing optimal models.
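As an illustration of the Cycle 1 "identify degrees of freedom" skill, a flowsheet unit with as many independent balance equations as unknowns (zero degrees of freedom) reduces to a nonlinear system that a numerical solver handles directly. The mixer below is a hypothetical example, not one taken from [23].

# A zero-degrees-of-freedom example of the kind targeted in Cycle 1:
# a two-feed mixer with overall and component mass balances. Two unknown
# feed rates, two independent equations -> solvable with a nonlinear solver.
# Stream data are hypothetical.
from scipy.optimize import fsolve

M_OUT, X_OUT = 100.0, 0.45   # outlet flow (kg/h) and mass fraction of A
X1, X2 = 0.20, 0.80          # compositions of the two feed streams

def balances(unknowns):
    m1, m2 = unknowns
    overall = m1 + m2 - M_OUT                       # total mass balance
    component = X1 * m1 + X2 * m2 - X_OUT * M_OUT   # balance on species A
    return [overall, component]

m1, m2 = fsolve(balances, x0=[50.0, 50.0])
print(f"feed 1 = {m1:.1f} kg/h, feed 2 = {m2:.1f} kg/h")
# Analytic check: m2 = (0.45-0.20)/(0.80-0.20)*100 = 41.7, m1 = 58.3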

3.6 Outcome Based Education for Nursing Education

Recently, a study [15] was conducted to implement outcome based education for nursing education in South Korea, based on the standards of the Korean Accreditation Board of Nursing Education (KABONE). The methodology of the study was carried out in two stages: development and evaluation. In the development stage, 12 national standard competencies were used to capture the competence of nurses with 1-3 years of clinical experience; in the evaluation stage, the content, construct, and criterion validity of the developed tool were verified. Factor analysis of data from 141 nurses was used to assess the construct validity, and the reliability of the developed tool was assessed using Cronbach's coefficient and item correlations. The experimental analysis indicated improvement in the participants' research activity, awareness of policies, and leadership qualities for nursing graduates [15]. Interestingly, a three-circle outcome based education model has been discussed with the following limitations [12]:

1. Constraints are involved in OBE implementation on student education; education is generally free and open-ended, whereas the curricula of engineering courses have limited technical scope.
2. OBE can emphasize inappropriate attitudes, values, and professionalism.
3. In nursing education, there is no discovery in learning.
4. Time management is a critical criterion in OBE: the steps in OBE are complex, and improper course objectives, outcomes, or assessment tools, or wrong guidance by the instructor, lead to much lost time.

3.7 OBE for Software Engineering Course

The challenges associated with the learning activities of a software engineering course are as follows [6]:

1. Classroom discussion among the students on the topics that the instructors introduce.
2. Question-and-answer discussions on the assignments or projects, and in the tutorials run by instructors or teaching assistants.
3. Programming labs in which the students complete projects or programming exercises with the support of tutors or instructors.

OBE has the following advantages:

1. Clarity: teachers are aware of what they teach in their courses, and students are clear about what they will take from the courses.
2. Flexibility: teachers are free to decide on the method they want to adopt to teach the course.
3. Involvement: OBE provides significant involvement for the students.

In the next section, we empirically discuss and describe some best practices for OBE.

4 Best Practices for OBE

The practical implementation of OBE plays a key role in the development of an educational organization. A few best practices for OBE are as follows:

4.1 Best Practices of OBE with Respect to the Different Domains

Competency-Based Medical Education (CBME) [3]: This is an outcome based form of education with great potential to change medical education so that it meets social expectations. It is a competencies framework that can be used to design, implement, assess, and evaluate a medical course. In a CBME system, curricula and assessment are based on the pedagogical outcomes. It maintains transparency towards patients and public stakeholders, and aims to achieve the educational goals alongside predicting the quality of care. Some challenges of CBME are as follows:

1. Inability to meet the new expectations of patients and policy makers with the existing health care system; there is a need to understand the critical future requirements and serve the target communities.
2. Financial support is an essential component of such systems, and transformation of the system requires quality time and patience.
3. Need for quality innovators for the transition.

Engineering management courses in mining engineering, for China's engineering education system [30], have the following outcomes:

– Understanding of the basic principles of mining engineering, along with its effects and responsibilities.
– Understanding of the principles of both economic decision-making and project management for mining engineering and related projects.
– Design and development of solutions for mining engineering courses in a multidisciplinary environment.
– Employment of project management and economic decision-making principles in the related projects.

Suggested steps for OBE for effective management education are as follows:

1. Outcome based education is an open model that can guide students and enable their creativity. Such results are achieved through collaboration and tend to foster self-development. For example, group discussions in the classroom help students express their ideas and learn from the opinions and strong points of other students.
2. Focus on the integrity and understanding of knowledge to improve the learning process and its efficiency.
3. Teachers must give attention to both the teaching and the learning process. The design is based on a student-centered model to maintain the relationship between students and teacher.
4. Combining theoretical knowledge with practical exposure is an important criterion for building a good and effective outcome based education model. It encourages students to choose their project and teammates and find solutions to the problem.

Likewise, SMART outcome based education has been described for engineering courses [4]. OBE can provide reliable, measurable, specific, and time-bound metrics for the listed outcomes. This model is based on 5 layers, namely: (i) build, (ii) assessment, (iii) feedback, (iv) target, and (v) time management.

(i) Build layer: involves building OBE-based course, program, and organizational outcomes. It describes the task of instructors, alumni, and industry experts (e.g., developing industry-oriented courses) in designing effective and achievable outcomes.
(ii) Assessment layer: involves content delivery and tools for assessment. Content is delivered in various ways, such as classroom teaching, lab experiments, tutorials, self-learning, notes, or seminars, to attain the predefined outcomes. In this process, questionnaires, open-book tests, and viva voce are used as assessment tools.
(iii) Feedback layer: helps to improve teaching capabilities towards achieving the defined course outcomes. It involves both forward and reverse feedback and does not increase human effort.
(iv) Target layer: helps to map the course outcomes to the program outcomes (a sketch of one common mapping scheme follows this list).
(v) Time management layer: ensures that all activities are completed within the given time frame. The course calendar is an important and crucial activity in outcome based engineering education.
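One common way to operationalize the target layer's mapping of course outcomes (COs) to program outcomes (POs) is a weighted mapping matrix: each CO contributes its measured attainment to every PO it maps to, in proportion to the mapping strength. The numbers below are hypothetical illustrations, not values prescribed by [4].

# CO-PO mapping matrix (rows = course outcomes, columns = program outcomes,
# entries = mapping strength on a 0-3 scale) combined with per-CO attainment.
# PO attainment is the strength-weighted average of the mapped COs.
import numpy as np

co_po = np.array([        # 3 COs x 4 POs, all values hypothetical
    [3, 2, 0, 1],
    [1, 3, 2, 0],
    [0, 1, 3, 2],
], dtype=float)
co_attainment = np.array([0.82, 0.74, 0.65])   # measured from quizzes/labs/exams

po_attainment = (co_attainment @ co_po) / co_po.sum(axis=0)
for j, a in enumerate(po_attainment, start=1):
    flag = "met" if a >= 0.70 else "action needed"   # hypothetical target
    print(f"PO{j}: attainment = {a:.2f} ({flag})")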

Education management principles for undergraduate students include:

– Adoption of continuous improvement to achieve the curriculum objectives, and definition of a novel, long-term, and effective mechanism to showcase improvement.
– Individuals must take proper responsibility for the smooth operation of quality management work.
– Total quality management must be adopted to improve the quality of teaching.

In general, engineering is a dynamic field in which changes are made over time, and various attempts are made to alter the traditional engineering system to meet growing market requirements. There has been academic planning for OBE, for example, in Indian engineering institutions and universities, which includes the following [26]:

Domains: Designing the new curriculum with the support of new technology or different domains. OBE-based courses and curricula can help to develop quality engineers and resolve social issues with engineering applications. The curriculum can be designed together with other core sectors: the information technology sector, economics, management, the humanities, etc.

Framework for designing courses: Courses and subjects are defined to achieve the assumed objectives. Indeed, it is very important to give preference to the fundamental and core subjects of the specific departments. The following factors can influence the design of the courses:

1. Dynamic subjects: courses and subjects can be modified and expanded using dynamic subjects so that students gain exposure to different engineering domains without the burden of extra time.
2. Design electives: these are helpful for improving practical approaches and other skills that can make students efficient engineering graduates.
3. Humanities subjects: these help graduates to think insightfully about social issues and enable them to generate technical solutions to existing problems.
4. It is very important to design research and innovation electives to establish a platform for applying the knowledge acquired during the courses.
5. Alongside technical knowledge, graduates also require prominent humane qualities to manage diversified social issues.
6. Management electives play a crucial role in maintaining a stable mindset; they relate to various management domains such as personal, family, emotional, religious belief, and national interest management.
7. The introduction of recent technological electives like machine learning, artificial intelligence, blockchain architecture, or neuromorphic computing helps students find quality careers in this dynamic world.

In fact, developing a creative, choice-based, stable system for the dynamic environment is very important to avoid unemployment caused by limited resources and by lagging in adapting new technologies when designing or defining a new structure.

4.2 Online OBE System Followed by Most Higher Learning Institutions

Higher educational institutions, for example in Malaysia, measure the underlying activities and provide methods for the design of OBE [32]. The system has the following components (a minimal data-model sketch follows the list):

Interface: Provides an interface for the different steps of the curriculum development cycle and web pages for user input.
Planner: Captures planning activities. It has different stages of planning (aimed at mapping the outcomes), including development and implementation focused on program delivery, assessment, and reporting of the learning activities.
Results collector: Stores the assessment results of each student.
Evaluator: Summarizes the students' results based on the user's query.
Semantic engine: Handles the operations of the semantic store and the interactions with the planner and evaluator components. It uses semantic queries to obtain results.
Semantic store: Stores data with the help of semantics and supports the mapping of learning outcomes.
Database: Stores raw OBE data and course assessments, including the mapped or defined course outcomes.
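Read as a data model, the planner, results collector, and evaluator components boil down to a mapping structure, a score store, and an aggregation query. The sketch below illustrates that reading with hypothetical class and field names; it is not the schema of the system described in [32].

# Minimal sketch of the planner / results-collector / evaluator trio as
# plain data structures. All names and fields are hypothetical.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Planner:                     # "planner": maps course outcomes to POs
    course: str
    po_map: dict[str, list[str]] = field(default_factory=dict)

@dataclass
class ResultsCollector:            # "results collector": per-student scores
    scores: dict[str, dict[str, float]] = field(default_factory=dict)
    def record(self, student: str, co: str, score: float) -> None:
        self.scores.setdefault(student, {})[co] = score

class Evaluator:                   # "evaluator": summarizes results on query
    def __init__(self, collector: ResultsCollector):
        self.collector = collector
    def attainment(self, co: str) -> float:
        vals = [s[co] for s in self.collector.scores.values() if co in s]
        return mean(vals) if vals else 0.0

plan = Planner("SE101", {"CO1": ["PO1", "PO2"]})
collector = ResultsCollector()
collector.record("s01", "CO1", 0.8)
collector.record("s02", "CO1", 0.6)
ev = Evaluator(collector)
for co, pos in plan.po_map.items():
    print(f"{co} -> {pos}: attainment = {ev.attainment(co):.2f}")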

4.3 Project Based Outcome Based Education

This type of OBE is introduced to improve the traditional education system. It can enhance both leadership and membership skills, and it is also applied in the computer science field. It mainly has two components [7]: (i) determining the number of projects needed for a broad understanding of the computer engineering domain; computer science can also integrate electronics and computer science engineering, and the project-based design can cover both domains; and (ii) engaging the students in project activity: students tend to learn the basic skills needed to complete the projects through team-based activity.

Comprehensively, we can generalize the best practices and main challenges of OBE as follows:

Formulating the outcome definitions: Framing outcomes or their definitions is crucial and a difficult task with respect to the subjects. The interpretation of outcomes varies across instructors and programs. Specifying outcomes leads to some loss of the holistic approach to learning: learning becomes specific, observable, and measurable, so broader outcomes are not widely recognized, and learners may not be able to understand the exact conceptual meaning of the learning components or process.

Assessment techniques/tools: Once the outcomes are defined, evaluating them also requires proper assessment methods. The ability to use knowledge may not be established through conventional assessment, and instructors/teachers must adjust to this scenario, which is fundamentally different from their traditional assessment procedure. OBE is a time-consuming but reliable and valid assessment technique compared to the traditional procedure, and it tends to help students understand the subject and/or demonstrate their knowledge of it.

Involvement of other members: The involvement of parents and community members is required to improve the quality of education. On the other hand, too much parental involvement, with requests and suggestions for many changes, can lead to violation of the learning objectives. Teachers/instructors are required to be fully involved in understanding the course objectives and then framing the syllabus to support the outcomes, which can cause too much workload for teachers.

Generalization: Creativity, responsibility, self-satisfaction, etc. are general terms associated with OBE, but there are no measurement metrics to gauge them.

The following can be said to be the main features of outcome based education:

1. It is accountable to its stakeholders, such as teachers, students, parents, the public, and employers.
2. It brings change to the whole educational system, including new syllabi, assessment procedures, etc.
3. It allows institutions or organizations to monitor the success of their students and provide them with quality education.
4. It describes achievable, authentic, and assessable learning outcomes.
5. It concentrates heavily on designing the curriculum with respect to the courses and its assessment procedures.

5 Discussion and Conclusions

Outcome based education (OBE) helps educators and policy makers to focus on clearly defined outcomes in order to achieve the organization's and stakeholders' goals, and it encourages the development of student/learner-centric courses. For teachers/educators, it provides an opportunity to design learning or prospective courses that achieve the defined outcomes "efficiently and effectively". To make OBE successful, the concerned actors must first define the institution's outcomes, then frame attractive, attainable, and comprehensive courses or syllabi to attain the intended outcomes. OBE ensures that the teaching content and assessment strategies are closely aligned to meet the intended outcomes, and it can be improved based on feedback from both learners and teachers. Indeed, education is a communal, compulsory process that is systematically worked out in the classroom, and OBE can be a way to effectively design, deliver, and document the courses and outcomes. The method starts by bearing the outcomes and results in mind, which then leads to the planning and design of the courses. It shapes the curriculum and exam practices, and suggests different planning models for the different courses. On the critical side, it can reduce education to quasi-scientific planning procedures; it divides the course content based on the outcomes and can trivialize knowledge; and it does not necessarily improve the quality of the curriculum, while the inclination towards outcomes can lead to centralized state assessment. The procedures can reduce the professionalism of faculty members, along with their interest in research and assessment activity. The method is non-reflexive: it has no capability to inspect itself, and it limits some fields of study. It suggests that there is no better education than the stated outcomes; it assumes that education is about planning students' behavioral performance and fixes the significant outcomes at the initial stage, forcing students to show similar outcomes and behaviors. Arguably, it is suitable only for technical education, where students are equipped based on the requirements of a job or career [24].


References

1. Amirtharaj, S., Chandrasekaran, G., Thirumoorthy, K., Muneeswaran, K.: A systematic approach for assessment of attainment in outcome-based education. Higher Educ. Future 9(1), 8–29 (2022)
2. Bariran, S., Sahari, K., Yunus, B.: A novel interactive OBE approach in SCM pedagogy using beer game simulation theory. arXiv preprint arXiv:1404.4384 (2014)
3. Caverzagie, K.J., et al.: Overarching challenges to the implementation of competency-based medical education. Med. Teach. 39(6), 588–593 (2017)
4. Churi, P., Mistry, K., Dhruv, A., Wagh, S.: Alchemizing education system by developing 5 layered outcome based engineering education (OBEE) model. In: 2016 IEEE 4th International Conference on MOOCs, Innovation and Technology in Education (MITE), pp. 338–345. IEEE (2016)
5. Cohen Castel, O., et al.: Can outcome-based continuing medical education improve performance of immigrant physicians? J. Contin. Educ. Heal. Prof. 31(1), 34–42 (2011)
6. Dai, H.N., Wei, W., Wang, H., Wong, T.L.: Impact of outcome-based education on software engineering teaching: a case study. In: 2017 IEEE 6th International Conference on Teaching, Assessment, and Learning for Engineering (TALE), pp. 261–264. IEEE (2017)
7. Dargham, J.A., Chin, R.K.: A framework for integrating project-based learning into the curriculum for outcome based education. In: 2015 IEEE 7th International Conference on Engineering Education (ICEED), pp. 6–9. IEEE (2015)
8. El-Maaddawy, T., El-Hassan, H., Al Jassmi, H., Kamareddine, L.: Applying outcomes-based learning in civil engineering education. In: 2019 IEEE Global Engineering Education Conference (EDUCON), pp. 986–989. IEEE (2019)
9. Eng, T.H., Akir, O., Malie, S.: Implementation of outcome-based education incorporating technology innovation. Procedia Soc. Behav. Sci. 62, 649–655 (2012)
10. Garimella, U., Nalla, D.: Moving towards outcome-based education: faculty development initiatives. In: 2014 IEEE Frontiers in Education Conference (FIE) Proceedings, pp. 1–8. IEEE (2014)
11. Gruppen, L.D.: Outcome-based medical education: implications, opportunities, and challenges. Korean J. Med. Educ. 24(4), 281 (2012)
12. Harden, R.M.: Outcome-based education: the future is today. Med. Teach. 29(7), 625–629 (2007)
13. Hassan, M.S.: Challenges of implementing outcome based engineering education in universities in Bangladesh. In: 2012 7th International Conference on Electrical and Computer Engineering, pp. 362–364. IEEE (2012)
14. Isa, C.M.M., Saman, H.M., Tahir, W., Jani, J., Mukri, M.: Understanding of outcome-based education (OBE) implementation by civil engineering students in Malaysia. In: 2017 IEEE 9th International Conference on Engineering Education (ICEED), pp. 96–100. IEEE (2017)
15. Ko, Y., Yu, S.: Core nursing competency assessment tool for graduates of outcome-based nursing education in South Korea: a validation study. Jpn. J. Nurs. Sci. 16(2), 155–171 (2019)
16. Lixun, W.: Designing and implementing outcome-based learning in a linguistics course: a case study in Hong Kong. Procedia Soc. Behav. Sci. 12, 9–18 (2011)
17. Lukas, S.A.: "SLOing" anthropology? Reflections on outcome-based education. Anthropol. News 6(51), 14 (2010)


18. Marín, V.I., et al.: Faculty perceptions, awareness and use of open educational resources for teaching and learning in higher education: a cross-comparative analysis. Res. Pract. Technol. Enhanc. Learn. 17(1), 1–23 (2022). https://doi.org/10.1186/s41039-022-00185-z
19. Mukesh, S., et al.: Outcome-based learning: an overview. Available at SSRN 4026986 (2022)
20. Mukhopadhyay, S., Smith, S.: Outcome-based education: principles and practice. J. Obstet. Gynaecol. 30(8), 790–794 (2010)
21. Nelson, T.S., Smock, S.A.: Challenges of an outcome-based perspective for marriage and family therapy education. Fam. Process 44(3), 355–362 (2005)
22. Parker, M.G.: Nephrology training in the 21st century: toward outcomes-based education. Am. J. Kidney Dis. 56(1), 132–142 (2010)
23. Pintarič, Z.N., Kravanja, Z.: Towards outcomes-based education of computer-aided chemical engineering. In: Computer Aided Chemical Engineering, vol. 38, pp. 2367–2372. Elsevier (2016)
24. Sanyal, A., Gupta, R.: Some limitations of outcome-based education. In: Bhattacharyya, S., Sen, S., Dutta, M., Biswas, P., Chattopadhyay, H. (eds.) Industry Interactive Innovations in Science, Engineering and Technology. LNNS, vol. 11, pp. 591–599. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-3953-9_57
25. Slavcev, R.A., Tjendra, J., Cheung, D.: A model of iterative outcome-based curriculum design and assessment for strategic pharmacy education in Canada. Curr. Pharm. Teach. Learn. 5(6), 593–599 (2013)
26. Tiwari, A., Singh, A., Shukla, S., Mishra, S., Goyal, E., Kumar, B.: Outcome-based education (OBE) academic planning: an insight into all round development of an engineer. In: 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), pp. 1–5. IEEE (2018)
27. Torres-Barreto, M.L., Castaño, G.P.C., Melgarejo, M.A.: A learning model proposal focused on challenge-based learning. Adv. Eng. Educ. 8(2), n2 (2020)
28. UNESCO: Competency based education. Learning Portal - Planning education for improved learning outcomes. https://learningportal.iiep.unesco.org/en/library/competency-based-education. Accessed 21 Aug 2022
29. UNESCO: Ljubljana OER action plan 2017 adopted to support quality open educational resources. https://en.unesco.org/news/ljubljana-oer-action-plan-2017-adopted-support-quality-open-educational-resources. Accessed 21 Aug 2022
30. Wang, B., Wu, Y.D.: Exploration on construction of engineering management related courses based on outcome-based education model. IOP Conf. Ser.: Mater. Sci. Eng. 688, 055031 (2019). IOP Publishing
31. Wentzell, D.D., Chung, H., Hanson, C., Gooi, P.: Competency-based medical education in ophthalmology residency training: a review. Can. J. Ophthalmol. 55(1), 12–19 (2020)
32. Zaini, N., Latip, M.F.A., Omar, H.: Semantic-based online outcome-based education measurement system. In: 2011 3rd International Congress on Engineering Education (ICEED), pp. 218–222. IEEE (2011)

Blockchain Enabled Internet of Things: Current Scenario and Open Challenges for Future

Sanskar Srivastava1, Anshu2, Rohit Bansal3, Gulshan Soni4, and Amit Kumar Tyagi5(B)

1 School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamilnadu, India
[email protected]
2 Faculty of Management and Commerce (FOMC), Baba Mastnath University, Asthal Bohar, Rohtak, India
3 Department of Management Studies, Vaish College of Engineering, Rohtak, India
4 Department of Computer Science and Engineering, School of Engineering, O.P. Jindal University, Raigarh, Chhattisgarh, India
5 Department of Fashion Technology, National Institute of Fashion Technology, New Delhi, India
[email protected]

Abstract. The modern world and its rapid progression have cemented the requirement for digitization and started a revolution in automation. The forerunner of this is IoT, or the Internet of Things. At a very basic level, IoT can be explained as the interconnection of smart devices. The Internet of Things (IoT) describes the network of physical objects, "things", that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet. These devices range from ordinary household objects to sophisticated industrial tools. With more than 7 billion connected IoT devices today, experts expect this number to grow to 10 billion by 2020 and 22 billion by 2025. IoT is quickly becoming one of the most important technologies, as it allows us to connect everyday objects to the internet using embedded devices that can then be used to communicate with them. This forms a connection between the physical and digital worlds. But along with this huge advantage comes a lack of intrinsic security measures, which makes IoT more vulnerable and open to privacy as well as security threats. To overcome this weakness, technologies such as big data and cloud computing have been tried in tandem with IoT, with average results. Blockchain is a system of recording data such that it becomes almost impossible to change it: a digital ledger of transactions that is duplicated and distributed across an entire network of computers. In a sense, it allows us to record digitized data and distribute it, but not edit it, so most of the issues related to security and vulnerability can be solved quite easily. Integrating these technologies brings availability, security, and integrity to the applications. This research paper targets readers who have not been particularly invested in these topics and would like an overview of IoT, blockchain, and their applications, uses, and future prospects.

Keywords: Internet of Things (IoT) · Blockchain · Blockchain Enabled Internet of Things (BIoT) · Transactions · Nodes


1 Introduction

Nowadays all major technologies focus on increasing access to electronic devices, with a particular focus on wireless communication and its miniaturization. Over the years this rapid progress has increased the number of such electronic devices, resulting in better and improved services at reduced cost, making it more feasible for people to own them. This influx of devices has changed the way people communicate and interact with each other and with the environment around them. The modern world now deals with the digital world more than the real world, so technologies were developed to better understand this new digital world. Technologies like wireless sensor networks and radio frequency identification have given birth to IoT, which provides a way to interact freely by creating a network of intelligent objects that converts the physical world into a well-connected information system. The term was coined by Kevin Ashton in 1999, and IoT has come a long way since then. It has become one of the most powerful tools for business development. It is the cornerstone on which digital services are built, and it has been integrated with various other technologies such as cloud computing, big data, and machine learning. Devices range from wearables to hardware development platforms [1].

IoT platforms are present in domains such as supply chain, manufacturing, and energy. A platform is a mass of IoT objects controlled by a central node. This is an example of a centralized architecture, but it also increases the chance of a single point of failure. It also requires heavy computing resources for the centralized system to collect and manage all the data gathered by these objects. Issues such as data security and privacy have no standard solutions, which further complicates the management process, and this variety of standards raises several problems such as inflexibility and lack of scalability. This is where blockchain comes in as a way to combat the centralized nature. Blockchain is a decentralized, immutable, distributed ledger of transactions maintained by a peer-to-peer network. Instead of relying on a third party, a decision must be reached by all the network participants to make any transaction acceptable. By providing a duplicate of all transactions that have taken place to all the participants, it maintains data transparency and ensures high availability [2]. Third-party intermediaries usually cause a delay in transactions; by eliminating the need for their involvement, participants can perform transactions and share data without having to trust each other.

On their own, these technologies have brought improvement to the various important sectors where they have been applied. Storing the sensor data of IoT objects as transactions in a blockchain creates an immutable trail of observations: all interactions between devices in the IoT smart network are stored in these immutable transactions. Blockchain relies on cryptographic hash functions, and by using this feature to store transactions in blocks and linking each block to the previous block in the chain, it becomes almost impossible to change any previous block without being noticed. This serves several purposes: once we know a block has been linked, we can easily confirm that the interactions between nodes are securely recorded and have not been changed, and storing data hashes ensures the integrity of the data, which can be verified by comparing a hash with the hash value stored in the blockchain.
We will aim to understand more about how these blockchains work later on. Analysing current cases of IoT and blockchain integration will help us understand the advantages and also bring to our attention the various challenges that come with using the newer systems. We will evaluate the working of these systems and provide suggestions for their improvement, and we will also touch on how BIoT can be improved and the scope it may have in the future.

2 Internet of Things - Background

The Internet of Things has grown in use and popularity rapidly over the last few years, and its influence can be seen in areas such as smart homes, wearable devices, transportation, and healthcare, among others. It is this ability to interconnect physical devices so that they communicate with other devices through the network that is so valuable. To understand the basics of how IoT works, we need to know the components involved: sensors/devices, connectivity, data processing, and the user interface [3–6]. A minimal code sketch of this sense–process–alert loop appears after the challenge list below.

• Sensors/devices: A device which collects data from the environment. For better understanding, let us take an example. Imagine a greenhouse which has to be kept at a certain temperature for the proper growth of the plants. In this case, the device which measures and records the temperature counts as the sensor/device.
• Connectivity: The collected data is sent to the cloud via cellular, satellite, Wi-Fi, or some other mode. Depending on the IoT application, different methods can be chosen to manage power consumption, range, and bandwidth. The recorded temperature is periodically sent to the cloud.
• Data processing: Once the data reaches the cloud, software takes care of the further processing. It compares all the temperature readings and checks whether they are within the suitable range for maximum plant growth.
• User interface: After the comparison is complete, the final result has to be presented to the user, for example via e-mail or text. If the temperature is too high or too low, the manager receives a message informing him of the problem, which he can then fix either manually or through an app which regulates the temperature. Instead of the user reacting to each alert, he could also pre-define rules which automatically adjust the temperature.

This is how the basic model of IoT works. Current IoT applications mostly use a centralized server-client cloud architecture, but peer-to-peer wireless sensor networks are also being used to handle the shortcomings of centralized systems. The main challenges faced are:

• Privacy and security: The main issue arises from the connectivity component of IoT, which gives hackers an entrance and allows other malpractices to take place. Older models are more susceptible to attacks, as most of these systems weren't originally built to handle connectivity between devices. As seen from the example given, the device/sensor plays a major role; we have taken only one simple device to explain how IoT works, but most smart environments contain multiple devices which communicate with each other. Each node here is a point of failure which can be used to hack into the system or launch cyber-attacks, and this can cause the collapse of the entire system.


• Hardware: The devices and sensors which ensure that data is collected and transferred safely have to be chosen carefully.
• Data management: The data collected and transferred by the devices is massive, and managing it requires a lot of computing power and efficient data pipelines for processing. Machine learning and predictive analysis can be used to make this easier.
• Device maintenance: The smart devices have to be checked regularly to make sure they are functioning and providing accurate data.
• Infrastructure: Cloud computing, fog computing, etc. are among the many types of architecture available and have to be chosen accordingly (refer Fig. 1).
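The sketch below is a minimal illustration of the sense–process–alert loop from the greenhouse example above. The function names (read_temperature, send_alert) and the 18–30 °C safe range are assumptions made for this example, not part of any real IoT platform API.

```python
import random
import time
from typing import Optional

SAFE_RANGE = (18.0, 30.0)  # assumed acceptable greenhouse range, degrees Celsius

def read_temperature() -> float:
    """Stand-in for the physical sensor reading (sensors/devices step)."""
    return random.uniform(10.0, 40.0)

def process(reading: float) -> Optional[str]:
    """Cloud-side rule (data-processing step): check the configured range."""
    low, high = SAFE_RANGE
    if reading < low:
        return f"Temperature too low: {reading:.1f} C"
    if reading > high:
        return f"Temperature too high: {reading:.1f} C"
    return None

def send_alert(message: str) -> None:
    """Stand-in for the user-interface step (e-mail/text notification)."""
    print("ALERT:", message)

for _ in range(3):                        # a few polling cycles
    alert = process(read_temperature())   # the connectivity step is elided here
    if alert:
        send_alert(alert)                 # the user, or a pre-defined rule, reacts
    time.sleep(0.1)                       # polling interval, shortened for the demo
```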

Fig. 1. Status of IoT devices in current era

3 Blockchain

Satoshi Nakamoto introduced the concept of a decentralized peer-to-peer electronic cash system when he published a paper on Bitcoin. The new data structure which could be used for transactions and validations was named blockchain [7]. Bitcoin used blockchain technology for online trading which was free and available to everyone around the world without the need for a third-party intermediary. As its features, such as security, privacy, integrity, immutability, and fault tolerance, became known, blockchain grew popular in other sectors such as agriculture, smart grids, healthcare, supply chain management, and many more. In recent years it has become one of the most widely researched topics.

Blockchain provides a distributed, immutable, secure ledger of transactions. The blockchain protocol constructs a chain of blocks, with each block containing a set of transactions at a particular time. Each new block is linked to the previous blocks using a reference, hence the term 'chain'. The four main components involved in a blockchain are:

• Peer-to-peer network: This aims to remove centralization by giving all the nodes the same privileges and enabling easier interaction among the nodes. This is done through the use of private and public keys. The private key is used for decryption and signing transactions, while the public key is used for encryption and to provide an address at which the network can reach the node.
• Ledger: The ledger is used for recording all the transactions performed, in order. This information is duplicated and made available to all the nodes. The ledger itself is open, and everyone on the network can view it. Each node can then decide the validity of a transaction.
• Synchronization: To synchronize the ledgers of the nodes, all the transactions have to be broadcast, validated, and then added to the ledger.
• Miners: Due to delays, all nodes may not receive the blocks of transactions at the same moment; thus, to prevent every node from adding a transaction (and to maintain a valid and ordered branch), unique nodes which can add transactions, called miners, are used. A miner competes with other miners to make a new transaction block and validate it.

Each block is a set of instructions comprising a header and the block content. The header contains the timestamp, the difficulty target, the hash value of the previous header (for the chain creation), the transactions encoded into a single hash code, and a nonce. The block content contains all the information about the data itself (input and output). The input of a current block contains the output of the previous transaction and a field containing the signature made with the private key, which validates ownership. The output contains the data to be sent and the address/public key of the receiver. Since only the private key of the receiver can prove ownership, only that particular receiver can handle the data. This makes sure that no tampering can take place, keeping the system secure and distributed [8]. To avoid double-spending attacks and to maintain integrity, a consensus mechanism is utilized. The end goal is to reach a consensus in the network where third-party involvement is not required and participants need not trust one another. A leader is selected who validates the new block and then propagates it to the network; this validation takes place when a majority of nodes find a block acceptable, so that it can then be added to the network [9].
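A minimal sketch of the hash-linking just described: each block header stores the previous header's hash, so altering any earlier block changes every later recomputed hash. This is only an illustration of the principle, not a real blockchain client; the field names are assumptions for the example.

```python
import hashlib
import json
import time

def block_hash(header: dict) -> str:
    # Hash a canonical (sorted-key) encoding of the header.
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list, nonce: int = 0) -> dict:
    # Header fields mirror the description above: timestamp, previous-header
    # hash, the transactions encoded into a single hash, and a nonce.
    header = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,
        "tx_digest": hashlib.sha256(json.dumps(transactions).encode()).hexdigest(),
        "nonce": nonce,
    }
    return {"header": header, "transactions": transactions}

genesis = make_block("0" * 64, [{"from": "a", "to": "b", "data": "reading-1"}])
second = make_block(block_hash(genesis["header"]),
                    [{"from": "b", "to": "c", "data": "reading-2"}])

# Verify the link: recompute the previous header's hash instead of trusting it.
assert second["header"]["prev_hash"] == block_hash(genesis["header"])

# Any tampering with the genesis block now breaks the recomputed link.
genesis["header"]["nonce"] = 99
print("link still valid?",
      second["header"]["prev_hash"] == block_hash(genesis["header"]))  # False
```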

4 Integration of IoT and Blockchain (BIoT)

Now that we have a basic overview of how IoT and blockchain work, we can see the need for blockchain to fix the various issues that IoT currently has. IoT issues like reliability and privacy are more easily solved if we use blockchain, and the single point of failure is resolved thanks to blockchain's distributed nature. The Trusted IoT Alliance was formed in 2016 to make IoT more fluid and reliable by merging blockchain technology into the IoT framework. Many other projects that aim to do the same have been started, such as the Linux Foundation's Hyperledger project, LO3ENERGY, IoTeX, and Raspnode. The improvements that BIoT brings are as follows [10–12]:


• Decentralization and scalability: The peer-to-peer design removes central points of failure and improves fault tolerance and system scalability.
• Identity: Participants in the BIoT can identify every device which is being used. Since the recorded data is immutable, it can be trusted to be authentic. This improves trust in the IoT field and among the participants.
• Autonomy: BIoT enables devices to interact without any involvement of servers. This encourages the development of smart autonomous hardware.
• Reliability: Participants are capable of verifying authenticity and being certain that no tampering has taken place. It also enables data traceability.
• Security: Communication and interaction between the devices can be stored as transactions on the blockchain. These can also be validated with smart contracts to secure communication.
• Marketing: BIoT reduces the time it takes to create an IoT system and environment. Services can be deployed easily, and payments can be made easily and securely. This improves the overall interconnection.

5 Blockchain Enabled Internet of Things (BIoT) Architecture and Interactions

There are many hybrid designs created for better integration between IoT and blockchain [13].

• IoT–IoT design: Here the transactions take place between IoT peer devices. It is utilized when low latency and fast performance are required. The data of transactions between IoT peer devices is stored in the blockchain, but all the other data is transferred directly among the IoT devices. This method ensures a smooth and efficient flow of data from one device to another; it is preferable that the devices are in the same domain or network, to reduce the complications which would arise during routing.
• IoT peer–to–blockchain design: Unlike in the previous design, the IoT peer devices are not directly connected to each other; interactions and communications are done through the blockchain. Here the blockchain can monitor and validate all the data related to the transactions that take place, which creates better transparency and traceability. Thus, data can be secured even if the devices belong to different domains. The challenge here is that recording all the transactions increases the bandwidth and data requirements. This design also faces more latency and scalability issues, while requiring more computational power to run the nodes it needs. Applications which focus on renting and trading utilize this design.
• Hybrid architecture design: The introduction of edge computing has improved the communication and processing of data. This kind of hybrid design involves the use of artificial intelligence (AI), fog computing, and edge computing to create a more interactive and improved environment for IoT devices. This method also causes an increase in computational power and consumption, but not at the level of the previous design, which requires devices to act as nodes. It also reduces the bandwidth and latency issues. Here the heavy work is done by fog or edge computing, so blockchain interactions are handled by this layer, and not all transactions of IoT peer devices go directly through the blockchain.
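The sketch below illustrates one common pattern behind the hybrid design: bulky sensor data stays off-chain while only its SHA-256 digest is anchored on-chain. Here a plain dictionary stands in for the edge/fog store and an append-only list stands in for the ledger; both are simplifying assumptions, not a real edge-computing stack.

```python
import hashlib
import json
from typing import Dict, List

off_chain_store: Dict[str, bytes] = {}   # stand-in for edge/fog storage
ledger: List[str] = []                   # stand-in for the on-chain digest log

def anchor(payload: dict) -> str:
    """Keep the heavy payload off-chain; record only its digest on-chain."""
    raw = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(raw).hexdigest()
    off_chain_store[digest] = raw
    ledger.append(digest)
    return digest

def verify(digest: str) -> bool:
    """Tampering with the off-chain copy breaks the match with the ledger."""
    raw = off_chain_store.get(digest)
    return (raw is not None
            and digest in ledger
            and hashlib.sha256(raw).hexdigest() == digest)

d = anchor({"device": "sensor-17", "temp_c": 24.6})
print("verified:", verify(d))                     # True

off_chain_store[d] = b"tampered payload"
print("verified after tampering:", verify(d))     # False
```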


6 Challenges Faced in BIoT

We have seen how the technologies of IoT and blockchain complement each other to create a better, improved design, but there are still issues that can arise and need work, as blockchain is designed for more powerful computers than are currently feasible in IoT [14, 15].

• Storage and scalability: Blockchains can currently process only a few transactions per second and are not designed to store large amounts of data. The data produced by IoT devices, often gigabytes in volume, can create issues for the capacity present in the blockchain. Since most of the stored data is not that useful, techniques which filter and compress the data can help reduce this problem. Increasing the bandwidth while reducing latency enables better transactions and can be pursued through the choice of consensus protocol.
• Security: We have discussed extensively how the inclusion of blockchain can help fix the issue of rising attacks on IoT devices, but this holds on the basis that the data generated by the IoT device is correct when it arrives at the blockchain. If the data is already corrupted before being introduced to the blockchain, then it stays corrupt even with the help of the blockchain. Sometimes, for various reasons, the devices themselves fail to work properly and give wrong data, and this would not be recognized until the device is tested and recalibrated. To avoid such issues, the IoT devices should be checked regularly and kept in the right places to avoid physical damage. Run-time upgrading mechanisms along with failure-detection methods should be used as well. Filament is a project which functions fairly well in terms of security.
• Data privacy: A lot of IoT devices work with confidential data that requires data privacy and identity management. Integration of cryptographic security software would be required to ensure that data is stored properly and cannot be accessed without permission.
• Smart contracts: Smart contracts would be an excellent method of making the recording of all interactions and transactions secure and reliable. Smart contracts need oracles to provide real-world data, which has to be accurate and trusted. IoT can cause issues here, since validating this data would make the contracts unstable, and accessing so many diverse data sources would overload them.
• Legal issues: There is a need to implement control over the network. IoT is also affected by each country and its regulations related to data privacy. These laws need to be revised and updated with these new technologies in mind. This will help in standardizing protocols for the certification of security features and thus create a more trusted IoT network, and it will have a major influence on the future of any integrated technology.
• Consensus: Many consensus algorithms are beyond the current capabilities of IoT, as they require more resources than the nodes have. Lightweight nodes would help solve this issue, but most blockchains do not support them yet. More research is required to make sure mining does not continue to be an issue.


7 Applications of BIoT

The various sectors and areas that BIoT can be applied to are [16, 17]:

• Energy sector
• Smart contracts: Slock.it works with smart contracts that enable renting and trade
• Industrial IoT
• Databases
• Healthcare
• Agriculture: A food traceability system was made to identify food products as well as the parties involved in the supply of food.
• Transportation
• Smart homes and cities: Smart home initiatives like Telstra in Australia.

7.1 Future Scope and Directions

• Machine learning for privacy and security in BIoT applications
• Fixing challenges related to decentralization
• Blockchain infrastructure which solves all the issues of BIoT implementations
• Legal issues and regulations
• Scalability is still a major issue, and much research is ongoing.

8 Conclusion

This paper was aimed at readers who are not familiar with the newer upcoming technologies. We explained how IoT works, the issues it faces, and how other technologies can be used alongside it to improve its performance. One of the technologies that can help is blockchain, and we have also seen how it works and how it can help IoT fix the issues it faces. We then combined these two technologies and discussed the improvements as well as the challenges that need to be addressed to fine-tune the system and achieve the full potential of both technologies. Concluding, we can say that the integration of blockchain and IoT is a must for advancement, and to improve this further we need to analyse the main challenges and work towards fixing them. BIoT is still in its early stages and there is a lot of room for growth; in the future it is imperative that we merge more complementary technologies and introduce more applications to encourage development in marketplaces. A more liberal and wider use of this technology will require the cooperation of stakeholders, governments, and other institutions to provide the right structure to harness the power of BIoT applications.

References 1. Panarello, A., Tapas, N., Merlino, G., Longo, F., Puliafito, A.: Blockchain and IoT integration: a systematic survey. Sensors 18, 2575 (2018). https://doi.org/10.3390/s18082575


2. Shammar, E.A., Zahary, A.T., Al-Shargabi, A.A.: A survey of IoT and blockchain integration: security perspective. IEEE Access 9, 156114–156150 (2021). https://doi.org/10.1109/ACCESS.2021.3129697
3. Reyna, A., Martín, C., Chen, J., Soler, E., Díaz, M.: On blockchain and its integration with IoT. Challenges and opportunities. Future Gener. Comput. Syst. 88, 173–190 (2018). ISSN 0167-739X. https://doi.org/10.1016/j.future.2018.05.046
4. Nartey, C., et al.: On blockchain and IoT integration platforms: current implementation challenges and future perspectives. Wirel. Commun. Mob. Comput. 2021, Article no. 6672482, 25 p. (2021). https://doi.org/10.1155/2021/6672482
5. Saxena, S., Bhushan, B., Ahad, M.A.: Blockchain based solutions to secure IoT: background, integration trends and a way forward. J. Netw. Comput. Appl. 181, 103050 (2021). ISSN 1084-8045. https://doi.org/10.1016/j.jnca.2021.103050
6. Hassan, M.U., Rehmani, M.H., Chen, J.: Privacy preservation in blockchain based IoT systems: integration issues, prospects, challenges, and future research directions. Future Gener. Comput. Syst. 97, 512–529 (2019). ISSN 0167-739X. https://doi.org/10.1016/j.future.2019.02.060
7. Makhdoom, I., Abolhasan, M., Abbas, H., Ni, W.: Blockchain's adoption in IoT: the challenges, and a way forward. J. Netw. Comput. Appl. 125, 251–279 (2019). ISSN 1084-8045. https://doi.org/10.1016/j.jnca.2018.10.019
8. Lo, S.K., et al.: Analysis of blockchain solutions for IoT: a systematic literature review. IEEE Access 7, 58822–58835 (2019). https://doi.org/10.1109/ACCESS.2019.2914675
9. Zafar, S., Bhatti, K.M., Shabbir, M., Hashmat, F., Akbar, A.H.: Integration of blockchain and Internet of Things: challenges and solutions. Ann. Telecommun. 77, 13–32 (2022). https://doi.org/10.1007/s12243-021-00858-8
10. Aggarwal, V.K., et al.: Integration of blockchain and IoT (B-IoT): architecture, solutions, & future research direction. IOP Conf. Ser. Mater. Sci. Eng. 1022, 012103 (2021)
11. Chen, T.-H., Lee, W.-B., Chen, H.-B., Wang, C.-L.: Revisited—the subliminal channel in blockchain and its application to IoT security. Symmetry 13(5), 855 (2021). https://doi.org/10.3390/sym13050855
12. Maroufi, M., Abdolee, R., Tazekand, B.M.: On the convergence of blockchain and Internet of Things (IoT) technologies. J. Strateg. Innov. Sustain. (2019). https://doi.org/10.33423/jsis.v14i1.990
13. Atlam, H.F., Azad, M.A., Alzahrani, A.G., Wills, G.: A review of blockchain in Internet of Things and AI. Big Data Cognit. Comput. 4(4), 28 (2020). https://doi.org/10.3390/bdcc4040028
14. Moudoud, H., Cherkaoui, S., Khoukhi, L.: Towards a scalable and trustworthy blockchain: IoT use case. In: ICC 2021 - IEEE International Conference on Communications (2021). https://doi.org/10.1109/ICC42927.2021.9500535
15. Sheth, H.S.K., Ilavarasi, A.K., Tyagi, A.K.: Deep learning, blockchain based multi-layered authentication and security architectures. In: 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, pp. 476–485 (2022). https://doi.org/10.1109/ICAAIC53929.2022.9793179
16. Deshmukh, A., Sreenath, N., Tyagi, A.K., Jathar, S.: Internet of Things based smart environment: threat analysis, open issues, and a way forward to future. In: 2022 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, pp. 1–6 (2022). https://doi.org/10.1109/ICCCI54379.2022.9740741
17. Tyagi, A.K., Chandrasekaran, S., Sreenath, N.: Blockchain technology: a new technology for creating distributed and trusted computing environment. In: 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, pp. 1348–1354 (2022). https://doi.org/10.1109/ICAAIC53929.2022.9792702

Fuzzy Investment Assessment Techniques: A State-of-the-Art Literature Review

Cengiz Kahraman1(B), Basar Oztaysi1, Sezi Çevik Onar1, and Selcuk Cebi2

1 Istanbul Technical University, Department of Industrial Engineering, 34367 Macka, Istanbul, Türkiye [email protected]
2 Yildiz Technical University, Department of Industrial Engineering, 34349 Yildiz, Istanbul, Türkiye

Abstract. Investment analysis is the process of choosing the most appropriate investment alternative that creates the highest profitability with minimal risk. Therefore, many analytic techniques have been proposed in the literature to minimize investment risks and maximize profits. Most of these methods have been extended with fuzzy set theory and its extensions to cope with the uncertainties in the investment environment. The main objective of this study is to provide an overview of fuzzy investment assessment techniques and their applications. For this, investment assessment techniques of engineering economics, including present worth (PW) analysis, annual worth (AW) analysis, rate of return (ROR) analysis, benefit/cost (B/C) ratio analysis, and payback period (PP) analysis, are analyzed under fuzzy sets and their extensions, including type-2 fuzzy sets, hesitant fuzzy sets, intuitionistic fuzzy sets, Pythagorean fuzzy sets, spherical fuzzy sets, picture fuzzy sets, and circular fuzzy sets. The objective of this paper is to present a state-of-the-art literature review on fuzzy investment analysis techniques. This study addresses how fuzzy investment techniques can be an effective tool to utilize the subjective opinions of investors.

Keywords: Type-2 fuzzy investment techniques · Hesitant fuzzy investment techniques · Intuitionistic fuzzy investment techniques · Pythagorean fuzzy investment techniques · Spherical fuzzy investment techniques · Neutrosophic investment techniques · Picture fuzzy investment techniques · Circular fuzzy investment techniques

1 Introduction

Investment analysis is the process of choosing the most appropriate investment alternative that creates the highest profitability with minimal risk. Investment analysis involves risk assessment, preparing cash flows, and assessing salvage value. In an investment analysis, not only the cash flows and salvage values but also the interest rates involve uncertainty and vagueness. Fuzzy sets and probabilistic methods are the approaches that can be used to model this imprecision. In order to model these uncertainties, fuzzy investment analysis methods such as fuzzy present value analysis, future value analysis, annual value analysis, benefit/cost ratio analysis, internal rate of return, and rate of return analysis can be used.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 649–657, 2023. https://doi.org/10.1007/978-3-031-27499-2_60


Present worth (PW) analysis is one of the most used techniques, in which the equivalent present values of the cash flows are calculated with the minimum attractive rate of return. The alternatives are compared based on their present values. In fuzzy present worth analysis, the initial investments, cash flows, and/or the minimum attractive rate of return can be defined by using fuzzy sets. On the other hand, in future value analysis both the benefits and costs are compared based on their future values. In particular, when an evaluation is to be made based on the future value of the investment, the future worth analysis method can be used. Similar to net present value analysis, all the investment parameters can be considered as fuzzy in future worth analysis. Similarly, in the other engineering management techniques, one or multiple parameters are considered as fuzzy, and fuzzy sets are used to model the problem. Not only classical fuzzy sets but also the extensions of fuzzy sets, such as intuitionistic fuzzy sets, type-2 fuzzy sets, and hesitant fuzzy sets, can be utilized for modeling the uncertainty. In this paper we focus on a literature review of fuzzy discounted cash flow techniques such as fuzzy PW analysis, fuzzy AW analysis, and fuzzy ROR analysis. Other investment analysis methods and methodologies, such as fuzzy multi-criteria decision making methods (AHP, TOPSIS, VIKOR, ELECTRE, and others) and fuzzy inference systems (ANFIS and fuzzy neural networks), are not considered. The rest of the paper is organized as follows: in the second section we explain fuzzy sets and their extensions; in the third section we give the fuzzy investment assessment techniques; the fourth section provides the literature review; in the last section we conclude and give further suggestions.

2 Fuzzy Sets and Their Extensions

Discrete multivalued logic found its ultimate destination with L. A. Zadeh's (1965) continuous fuzzy logic [1]. Ordinary fuzzy sets introduced by Zadeh are represented with a degree of membership and a degree of non-membership which is the complement of the membership degree. To deal with the weaknesses of ordinary fuzzy sets, they have been extended to several new extensions by various researchers by adding new parameters to membership functions, such as a hesitancy degree, refusal degree, or indeterminacy degree. The new extensions of ordinary fuzzy sets are shown historically in Fig. 1 [2]. Criticism of type-1 membership functions caused type-2 fuzzy sets and interval-valued fuzzy sets to be developed by some researchers. Type-2 fuzzy sets were introduced by Zadeh [3] to handle the vagueness in membership functions as an extension of ordinary fuzzy sets. Then, intuitionistic fuzzy sets (IFSs) were introduced by Atanassov [4]; these are composed of a degree of membership and a degree of non-membership whose sum is not necessarily equal to 1. Their objective is to take the hesitancy of experts into consideration. Hesitant fuzzy sets (HFSs), introduced by Torra [5], have been used to work with a set of potential membership values of an element in a fuzzy set. After intuitionistic type-2 fuzzy sets (IFS2) were developed by Atanassov [6], Yager [7] called them Pythagorean fuzzy sets (PFSs), represented with a larger area for membership and non-membership degrees. Later, q-rung orthopair fuzzy sets (Q-ROFSs) were proposed as a general class of IFSs and PFSs by Yager [8]. Neutrosophic sets, which have degrees of truthiness, indeterminacy, and falsity for each element


in the universe, have been developed by Smarandache [9]. The sum of these three independent degrees can be at most equal to 3. Picture fuzzy sets and spherical fuzzy sets, characterized by the degrees of membership, non-membership, and hesitancy for each element in a set, have been introduced as direct extensions of IFSs by Cuong [10] and by Kutlu Gündoğdu and Kahraman [2], respectively.

Fig. 1. Extensions of fuzzy sets (Kutlu Gundogdu and Kahraman 2019)

The purpose of all these extensions is to define membership functions in a way that imitates the human thought system. Type-2 fuzzy sets were introduced by Zadeh [3] so that the fuzziness of membership degrees could be incorporated into membership functions as a third dimension. The general purpose of the other fuzzy set extensions is to consider the degree of hesitancy of decision makers rather than defining a complementary non-membership function. However, the purpose of the recently developed fuzzy set extension, circular intuitionistic fuzzy sets (C-IFSs), is to add fuzziness to membership functions, as in type-2 fuzzy sets [11]. In the literature, there is a need for an MCDM method that fuzzifies the membership function as in type-2 fuzzy sets and can take into account the hesitancy of decision makers as in intuitionistic fuzzy sets. Intuitionistic fuzzy sets were the first to incorporate the idea of hesitancy into fuzzy sets. C-IFSs are defined as sets where each element of the universe has a degree of membership and a degree of non-membership, μA(x) and νA(x), with a circle of radius r around them, satisfying that the sum of the membership and non-membership degrees within this circle is at most equal to 1.
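As a compact restatement of the definitions above (notation follows the cited papers; this is a summary, not a quotation), the membership constraints of ordinary, intuitionistic, and circular intuitionistic fuzzy sets can be written as:

```latex
% Ordinary fuzzy set: a single membership degree per element.
A=\{\langle x,\mu_A(x)\rangle : x\in X\},\qquad \mu_A(x)\in[0,1].
% Intuitionistic fuzzy set (Atanassov): membership and non-membership,
% with hesitancy \pi_A(x) as the remainder.
\mu_A(x)+\nu_A(x)\le 1,\qquad \pi_A(x)=1-\mu_A(x)-\nu_A(x).
% Circular intuitionistic fuzzy set: the pair carries a radius r of
% uncertainty, and every point of the circle must remain feasible.
\mu+\nu\le 1\quad\text{for all }(\mu,\nu)\ \text{with}\
\sqrt{(\mu-\mu_A(x))^2+(\nu-\nu_A(x))^2}\le r.
```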

3 Fuzzy Investment Assessment Techniques Investment analysis techniques of engineering economics involve present worth analysis, annual worth analysis, rate of return analysis, benefit/cost ratio analysis, payback period analysis, and others. All these classical techniques require exact data to use in their equations. However, exact data regarding interest rates, cash flows, and project life are hard to find in the market conditions. Probabilistic approaches are alternatives for these


cases, which require past data. Observation over long periods is a must for probabilistic approaches in order to determine the distribution of the data. When sufficient data are not at hand, fuzzy set approaches present efficient solutions including all possible results of the problem with their degrees of membership. Some investment parameters such as interest rate, project life, and cash flows can be represented by fuzzy numbers in case of data insufficiency.

Fuzzy present worth analysis has been handled by several researchers using ordinary fuzzy sets and other extensions of ordinary fuzzy sets. Equation (1) shows the ordinary fuzzy present worth equation including triangular fuzzy cash flows and triangular fuzzy interest rates:

$$
\widetilde{PW}=\left(\sum_{t=0}^{n}\frac{\max\left(f_{t_l},0\right)}{\prod_{t'=0}^{t}\left(1+i_{t'_r}\right)}+\sum_{t=0}^{n}\frac{\min\left(f_{t_l},0\right)}{\prod_{t'=0}^{t}\left(1+i_{t'_l}\right)};\;\sum_{t=0}^{n}\frac{f_{t_m}}{\prod_{t'=0}^{t}\left(1+i_{t'_m}\right)};\;\sum_{t=0}^{n}\frac{\max\left(f_{t_r},0\right)}{\prod_{t'=0}^{t}\left(1+i_{t'_l}\right)}+\sum_{t=0}^{n}\frac{\min\left(f_{t_r},0\right)}{\prod_{t'=0}^{t}\left(1+i_{t'_r}\right)}\right)\tag{1}
$$

where $f_{t_l}$ is the least possible value of the triangular fuzzy future cash flow, $f_{t_m}$ is its center point, and $f_{t_r}$ is its largest possible value; $i_{t'_l}$ is the least possible value of the triangular fuzzy interest rate at period $t'$, $i_{t'_m}$ is its center point, and $i_{t'_r}$ is its largest possible value; and $n$ is the project life.
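As a hedged numerical illustration of Eq. (1), the sketch below computes a triangular fuzzy present worth in Python. The cash-flow and rate values are invented example data, and setting the period-0 rate to zero (so the initial cash flow is undiscounted) is an assumption made for this example only.

```python
from math import prod

def fuzzy_pw(flows, rates):
    """Eq. (1): flows[t] = (f_l, f_m, f_r); rates[t] = (i_l, i_m, i_r)."""
    n = len(flows) - 1
    def disc(t, idx):  # product of (1 + i) over periods 0..t at bound idx
        return prod(1.0 + rates[k][idx] for k in range(t + 1))
    pw_l = sum(max(flows[t][0], 0) / disc(t, 2) +
               min(flows[t][0], 0) / disc(t, 0) for t in range(n + 1))
    pw_m = sum(flows[t][1] / disc(t, 1) for t in range(n + 1))
    pw_r = sum(max(flows[t][2], 0) / disc(t, 0) +
               min(flows[t][2], 0) / disc(t, 2) for t in range(n + 1))
    return pw_l, pw_m, pw_r

flows = [(-110, -100, -90)] + [(45, 50, 55)] * 3      # initial cost + 3 returns
rates = [(0.0, 0.0, 0.0)] + [(0.08, 0.10, 0.12)] * 3  # period-0 rate set to 0
print(fuzzy_pw(flows, rates))  # (least; center; largest) possible present worth
```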

Fuzzy rate of return analysis has been handled by some researchers using ordinary fuzzy sets and other extensions of ordinary fuzzy sets. Equation (2) shows the ordinary fuzzy rate of return equation including fuzzy cash flows and a fuzzy rate of return:

$$
\sum_{t=1}^{n}\tilde{F}_t\left(1+\widetilde{ROR}\right)^{-t}-\tilde{I}=0\tag{2}
$$

where $\tilde{F}_t$ is the fuzzy future cash flow, $\tilde{I}$ is the fuzzy initial cost, $\widetilde{ROR}$ is the fuzzy rate of return, and $n$ is the project life.

Fuzzy annual worth analysis has been handled by several researchers using ordinary fuzzy sets and other extensions of ordinary fuzzy sets. Equation (3) shows the ordinary fuzzy annual worth equation including triangular fuzzy cash flows and triangular fuzzy interest rates:

$$
\widetilde{AW}=\left(NPW_l\,\frac{\tilde{i}_r\left(1+\tilde{i}_r\right)^n}{\left(1+\tilde{i}_r\right)^n-1};\;NPW_m\,\frac{\tilde{i}_m\left(1+\tilde{i}_m\right)^n}{\left(1+\tilde{i}_m\right)^n-1};\;NPW_r\,\frac{\tilde{i}_l\left(1+\tilde{i}_l\right)^n}{\left(1+\tilde{i}_l\right)^n-1}\right)\tag{3}
$$

where $NPW_l$ is the least possible value of the triangular fuzzy net present worth, $NPW_m$ is its center point, and $NPW_r$ is its largest possible value; $\tilde{i}_l$ is the least possible value of the triangular fuzzy interest rate, $\tilde{i}_m$ is its center point, and $\tilde{i}_r$ is its largest possible value; and $n$ is the project life.
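A companion sketch for Eq. (3), under the same illustrative assumptions: it converts a triangular fuzzy net present worth into a fuzzy annual worth, pairing the least NPW with the largest rate factor and vice versa, as the equation prescribes. The input values are invented example data.

```python
def capital_recovery(i, n):
    # A/P factor: i(1+i)^n / ((1+i)^n - 1)
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def fuzzy_aw(npw, rate, n):
    (npw_l, npw_m, npw_r), (i_l, i_m, i_r) = npw, rate
    return (npw_l * capital_recovery(i_r, n),   # least NPW, largest rate
            npw_m * capital_recovery(i_m, n),
            npw_r * capital_recovery(i_l, n))   # largest NPW, least rate

print(fuzzy_aw((14.0, 24.3, 35.2), (0.08, 0.10, 0.12), 3))
```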


After the new extensions were developed, the above equations were modified by using the new fuzzy numbers, such as intuitionistic fuzzy numbers, picture fuzzy numbers, neutrosophic numbers, and others. Equation (4) shows the type-2 fuzzy present worth equation, which evaluates Eq. (1) once with the parameters of the upper membership function and once with those of the lower membership function:

$$
\widetilde{\widetilde{PW}}=\left(\widetilde{PW}^{\,U};\;\widetilde{PW}^{\,L}\right)\tag{4}
$$

where $f_{t_l}^{U}$, $f_{t_m}^{U}$, and $f_{t_r}^{U}$ are the least possible value, the center point, and the largest possible value of the upper triangular fuzzy future cash flow; $f_{t_l}^{L}$, $f_{t_m}^{L}$, and $f_{t_r}^{L}$ are the corresponding values of the lower triangular fuzzy future cash flow; $i_{t'_l}^{U}$, $i_{t'_m}^{U}$, and $i_{t'_r}^{U}$ are the least possible value, the center point, and the largest possible value of the upper triangular fuzzy interest rate at period $t'$; $i_{t'_l}^{L}$, $i_{t'_m}^{L}$, and $i_{t'_r}^{L}$ are the corresponding values of the lower triangular fuzzy interest rate at period $t'$; and $n$ is the project life.

4 Literature Review

Investment analysis is a collection of mathematical techniques in engineering economics such as rate of return analysis, benefit/cost ratio analysis, present worth (PW) analysis, annual cash flow analysis, and payback period analysis. In this paper, we classify the fuzzy investment assessment techniques under two main titles: techniques using ordinary fuzzy sets and techniques using extensions of ordinary fuzzy sets.

4.1 Techniques Using Ordinary Fuzzy Sets

Ordinary fuzzy sets have been employed in numerous publications related to economy and finance. Here we summarize the most cited fuzzy economy and finance papers with their main topics: fuzzy optimal replacement analysis [12], fuzzy investment risk analysis [13], fuzzy portfolio analysis [14], fuzzy decision trees in investment analyses [15], fuzzy cash flow forecasting and investment analysis [16], fuzzy PW, fuzzy annual worth, fuzzy rate of return, and fuzzy cost-benefit ratio [17], fuzzy PW based investment analysis of manufacturing systems [18], fuzzy benefit-cost analysis for manufacturing investment analysis [19], fuzzy benefit-cost analysis [20], cost-benefit analysis with fuzzy rule-based systems [21], fuzzy multi-attribute evaluation of investments [22], fuzzy rate of return analysis [23], benefit-cost analysis with fuzzy sets [24], triangular fuzzy PW analysis [25], fuzzy engineering economic analyses [26], fuzzy financial analysis and economic feasibility [27], fuzzy net present value analysis [28], fuzzy financial instruments [29], and fuzzy cost-benefit analysis [30].

4.2 Techniques Using Extensions of Ordinary Fuzzy Sets

The PW analysis has been extended with the recent extensions of ordinary fuzzy sets: type-2 fuzzy PW analysis [31, 32], hesitant and intuitionistic fuzzy PW analysis [33, 34], Pythagorean fuzzy PW analysis [31], spherical fuzzy PW analysis [35], intuitionistic fuzzy PW analysis and neutrosophic PW analysis [36, 37], picture fuzzy PW analysis [38], and interval-valued and circular intuitionistic fuzzy PW analysis [39]. Other investment analysis techniques have also been handled by various authors: interval-valued intuitionistic fuzzy benefit/cost analysis in wind energy investment [40], spherical fuzzy cost/benefit analysis in wind energy investment [41], intuitionistic fuzzy, hesitant fuzzy, and interval type-2 fuzzy investment analyses [42], Pythagorean fuzzy investment analysis [43], Fermatean fuzzy NPW and annual worth analysis [44], hesitant and interval-valued intuitionistic fuzzy PW and annual worth analyses [33], type-2 fuzzy investment analysis [45], neutrosophic PW [46], type-2 fuzzy, hesitant fuzzy, and intuitionistic fuzzy investment analysis [47], neutrosophic PW [37], type-2 fuzzy real option assessment [48], and interval-valued intuitionistic fuzzy investment analysis [49].

5 Conclusion

Investment decisions are always related to the future, and the future includes uncertainty, ambiguity, and vagueness due to the dynamics of the real world and its complex structure. This makes it difficult to obtain precise data, and therefore investment decisions include some risks. In order to make decisions in an uncertain environment, expert knowledge is generally used. At this point, fuzzy set theory can easily handle the difficulties in estimation by employing membership functions to express experts' judgments. In this study, a brief literature review on fuzzy investment assessment techniques is presented to address the effectiveness of these tools in utilizing the subjective opinions of investors. It is concluded from the literature that PW analysis is the most used method with fuzzy sets and their extensions. Furthermore, the most employed application area of these methodologies is energy investments. For further research, the other investment assessment techniques can be extended with fuzzy set extensions.

References
1. Zadeh, L.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)
2. Kutlu Gündoğdu, F., Kahraman, C.: Spherical fuzzy sets and spherical fuzzy TOPSIS method. J. Intell. Fuzzy Syst. 36(1), 337–352 (2019)


3. Zadeh, L.: The concept of a linguistic variable and its application. Inf. Sci. 8(3), 199–249 (1975) 4. Atanassov, K.: Intuitionistic fuzzy sets. Fuzzy Sets Syst. 20(1), 87–96 (1986) 5. Torra, V.: Hesitant fuzzy sets. Int. J. Intell. Syst. 25(6), 529–539 (2010) 6. Atanassov, K.T.: More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 33(1), 37–45 (1989) 7. Yager, R.: Pythagorean fuzzy subsets. In: Proceedings of the 2013 Joint IFSA World Congress and NAFIPS Annual Meeting, IFSA/NAFIP (2013) 8. Yager, R.: Generalized orthopair fuzzy sets. IEEE Trans. Fuzzy Syst. 25(5), 1222–1230 (2017) 9. Smarandache, F.: Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis & Synthetic Analysis. American Research Press (1998) 10. Cuong, B.: Picture fuzzy sets. J. Comput. Sci. Cybern. 30(4), 409–420 (2014) 11. Atanassov, K.: Circular intuitionistic fuzzy sets. J. Intell. Fuzzy Syst. 39(5), 5981–5986 (2020) 12. Balaganesan, M., Ganesan, K.: Fuzzy approach to determine optimum economic life of equipment with change in money value. In: Mallick, P., Balas, V., Bhoi, A., Chae, G.S., (eds.) Cognitive Informatics and Soft Computing. Advances in Intelligent Systems and Computing, vol. 1040, pp. 645-655. Springer, Singapore (2020). https://doi.org/10.1007/978-981-151451-7_66 13. Lin, S., Zhongming, Z.: Fuzzy risk analysis of harbour engineering investment by hierarchy system approach. China Ocean Eng. 6(1), 87–94 (1992) 14. Terceño, A., Andrés, J., De, B.G., Lorenzana, T.: Using fuzzy set theory to analyse investments and select portfolios of tangible investments in uncertain environments. Int. J. Uncert. Fuzziness Knowl. Based Syst. 11(3), 263–281 (2013) 15. Kahraman, C.: Investment analyses using fuzzy decision trees In: Kahraman, C. (eds.) Fuzzy Engineering Economics with Applications. Studies in Fuzziness and Soft Computing, vol. 233, pp. 231–242. Springer, Berlin, Heidelberg, (2008). https://doi.org/10.1007/978-3-54070810-0_14 16. Kahraman, C., Ulukan, Z.: Investment analysis using forecasted cash flows by grey and fuzzy logics. J. Multiple-Valued Logic Soft Comput. 14(6), 579–598 (2008) 17. Kahraman, C., Tolga, A.Ç.: Fuzzy investment planning and analyses in production systems. In: Kahraman C., Yavuz M., (eds.) Production Engineering and Management under Fuzziness. Studies in Fuzziness and Soft Computing, vol. 252, pp 279–298. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12052-7_12 18. Kahraman, C., Beskese, A., Ruan, D.: Measuring flexibility of computer integrated manufacturing systems using fuzzy cash flow analysis. Inf. Sci. 168(1–4), 77–94 (2004) 19. Kahraman, C., Tolga, E., Ulukan, Z.: Justification of manufacturing technologies using fuzzy benefit/cost ratio analysis. Int. J. Prod. Econ. 66(1), 45–52 (2000) 20. Kahraman, C., Kaya, I.: Fuzzy benefit/cost analysis and applications In: Kahraman C. (eds.) Fuzzy Engineering Economics with Applications. Studies in Fuzziness and Soft Computing, vol. 233, pp 129–143. Springer, Berlin (2008) 21. Verma, N.K., Ghosh, A., Dixit. S., Salour, A.: Cost-benefit and reliability analysis of prognostic health management systems using fuzzy rules. In: Paper presented at the 2015 IEEE Workshop on Computational Intelligence: Theories, Applications and Future Directions, WCI 2015, pp. 1–9 (2016) 22. Dymova, L., Sevastjanov, P.: Fuzzy multiobjective evaluation of investments with applications In: Kahraman, C. (eds.) Fuzzy Engineering Economics with Applications. Studies in Fuzziness and Soft Computing, vol. 233, pp. 243–287. 
Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-70810-0_15
23. Kuchta, D.: Fuzzy rate of return analysis and applications. In: Kahraman, C. (eds.) Fuzzy Engineering Economics with Applications. Studies in Fuzziness and Soft Computing, vol. 233, pp. 97–104. Springer, Berlin, Heidelberg (2008)


24. Ward, T.L.: Artificial intelligence in engineering economy. In: Proceedings of the Industrial Engineering Research Conference, pp. 92–96 (1993)
25. Chiu, C., Park, C.S.: Fuzzy cash flow analysis using present worth criterion. Eng. Econ. 39(2), 113–138 (1994)
26. Dimitrovski, A.D., Matos, M.A.: Fuzzy engineering economic analysis. IEEE Trans. Power Syst. 15(1), 283–289 (2000)
27. Sheen, J.: Economic feasibility of variable frequency driving pump by fuzzy financial model. In: 2009 4th International Conference on Innovative Computing, Information and Control, ICICIC 2009, pp. 934–937 (2009)
28. Kuchta, D.: Optimization with fuzzy present worth analysis and applications. In: Kahraman, C. (eds.) Fuzzy Engineering Economics with Applications. Studies in Fuzziness and Soft Computing, vol. 233, pp. 43–69. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-70810-0_3
29. Buckley, J.J., Eslami, E.: Pricing options, forwards and futures using fuzzy set theory. In: Kahraman, C. (eds.) Fuzzy Engineering Economics with Applications. Studies in Fuzziness and Soft Computing, vol. 233, pp. 339–357. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-70810-0_18
30. Wang, M., Liang, G.: Benefit/cost analysis using fuzzy concept. Eng. Econ. 40(4), 359–376 (1995)
31. Kahraman, C., Onar, S.C., Oztaysi, B.: Present worth analysis using Pythagorean fuzzy sets. In: Kacprzyk, J., Szmidt, E., Zadrożny, S., Atanassov, K.T., Krawczak, M. (eds.) IWIFSGN/EUSFLAT-2017. AISC, vol. 642, pp. 336–342. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-66824-6_30
32. Kahraman, C., Sarı, İ.U. (eds.): Intelligence Systems in Environmental Management: Theory and Applications. ISRL, vol. 113. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-42993-9
33. Kahraman, C., Çevik Onar, S., Öztayşi, B.: Engineering economic analyses using intuitionistic and hesitant fuzzy sets. J. Intell. Fuzzy Syst. 29(3), 1151–1168 (2015)
34. Kahraman, C., Çevik Onar, S., Öztayşi, B., Sarı, İ.U., İlbahar, E.: Wind energy investment analyses based on fuzzy sets. Energy Manag. Collect. Comput. Intell. Theory Appl. 149 (2018)
35. Bolturk, E., Seker, S.: Present worth analysis using spherical fuzzy sets. In: Kahraman, C., Cebi, S., Cevik Onar, S., Oztaysi, B., Tolga, A.C., Sari, I.U. (eds.) Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation. INFUS 2021. Lecture Notes in Networks and Systems, vol. 308, pp. 777–788. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-85577-2_91
36. Aydin, S., Kahraman, C., Kabak, M.: Evaluation of investment alternatives using present value analysis with simplified neutrosophic sets. Eng. Econ. 29(3), 254–263 (2018)
37. Aydin, S., Kabak, M.: Investment analysis using neutrosophic present and future worth techniques. J. Intell. Fuzzy Syst. 38(1), 627–637 (2020)
38. Boltürk, E.: Containership investment analysis using picture fuzzy present worth analysis. J. ETA Maritime Sci. 9(4), 233–242 (2021)
39. Boltürk, E., Kahraman, C.: Interval-valued and circular intuitionistic fuzzy present worth analyses. Informatica 1–19 (2022). https://doi.org/10.15388/22-INFOR478
40. Kahraman, C., Onar, S.C., Oztaysi, B.: A comparison of wind energy investment alternatives using interval-valued intuitionistic fuzzy benefit/cost analysis. Sustainability (Switzerland) 8(2), 118 (2016)
41. Onar, S.C., Oztaysi, B., Kahraman, C.: Spherical fuzzy cost/benefit analysis of wind energy investments.
In: Kahraman, C., Cevik Onar, S., Oztaysi, B., Sari, I., Cebi, S., Tolga, A. (eds.) Intelligent and Fuzzy Techniques: Smart and Innovative Solutions. INFUS 2020. Advances in

Intelligent Systems and Computing, vol. 1197, pp. 134–141. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-51156-2_17
42. Kahraman, C., Sarı, İ.U., Onar, S.C., Oztaysi, B.: Fuzzy economic analysis methods for environmental economics. In: Kahraman, C., Sari, İ. (eds.) Intelligence Systems in Environmental Management: Theory and Applications. Intelligent Systems Reference Library, vol. 113, pp. 315–346. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-42993-9_14
43. Çoban, V., Onar, S.Ç.: Pythagorean fuzzy engineering economic analysis of solar power plants. Soft Comput. 22(15), 5007–5020 (2018). https://doi.org/10.1007/s00500-018-3234-6
44. Sergi, D., Sari, I.U.: Fuzzy capital budgeting using Fermatean fuzzy sets. In: Kahraman, C., Cevik Onar, S., Oztaysi, B., Sari, I., Cebi, S., Tolga, A. (eds.) Intelligent and Fuzzy Techniques: Smart and Innovative Solutions. INFUS 2020. Advances in Intelligent Systems and Computing, vol. 1197, pp. 448–456. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-51156-2_52
45. Sarı, İ.U., Kahraman, C.: Economic analysis of municipal solid waste collection systems using type-2 fuzzy net present worth analysis. In: Intelligence Systems in Environmental Management: Theory and Applications, pp. 347–364. Springer, Cham (2017)
46. Aydin, S., Kahraman, C., Kabak, M.: Evaluation of investment alternatives using present value analysis with simplified neutrosophic sets. Eng. Econ. 29(3), 254–263 (2018)
47. Kahraman, C., Çevik, O.S., Öztayşi, B., Sarı, İ.U., İlbahar, E.: Wind energy investment analyses based on fuzzy sets. Energy Manag. Collect. Comput. Intell. Theory Appl. 149, 141–166 (2018)
48. Tolga, A.C.: New product development process valuation using compound options with type-2 fuzzy numbers. In: Lecture Notes in Engineering and Computer Science, vol. 2228, pp. 889–894 (2017)
49. Jian, J., Zhan, N., Su, J.: A novel superiority and inferiority ranking method for engineering investment selection under interval-valued intuitionistic fuzzy environment. J. Intell. Fuzzy Syst. 37(5), 6645–6653 (2019)

A Comparative Analysis of Classification Algorithms for Dementia Prediction

Prashasti Kanikar(B), Manoj Sankhe, and Deepak Patkar

MPSTME, NMIMS, Vile Parle (W), Mumbai, India [email protected]

Abstract. Dementia is a progressive and chronic condition that affects the ability to perform various cognitive functions. It can affect various aspects of thinking, memory, and orientation. According to the World Health Organization, dementia is regarded as one of the leading causes of death among older people. It has various economic, social, and psychological impacts; hence, efficient models are required for early detection of dementia. In the published literature so far, only a limited range of approaches for dementia classification has been discussed and compared. This paper can serve as a supporting guide for researchers who are planning to analyze MRI data for the prediction of dementia. Detailed results with respect to a variety of performance evaluation measures are shared, which can help researchers select the best-suited algorithm for their research. A publicly available Open Access Series of Imaging Studies (OASIS) longitudinal dataset is used for cross-validation.

Keywords: Brain MRI analysis · Dementia prediction · Dementia classification

1 Introduction

Dementia is a type of cognitive disorder that generally occurs as a chronic or progressive state. It affects various parts of the brain. Due to the increasing number of people with dementia in India, the demand for healthcare services is expected to rise significantly. This disease has a negative effect on the quality of life of patients and their families. Early diagnosis of this disease can help in delaying the decline in a patient's abilities and make patients more involved in their treatments and decisions. Although no cure has been found for Alzheimer's disease, there are various promising research programs that are focused on finding new ways to treat the disease [1].

Magnetic resonance imaging is a non-invasive technique that uses electromagnetic radiation to visualize the internal structure and function of the body. It does not involve exposure to harmful ionizing radiation. This method produces high-quality images of the body in any plane by using radio frequency signals. Several automated methods have been proposed to detect and analyze neurodegenerative diseases using brain-imaging data. These methods use neuroanatomical biomarkers to describe the physiological properties of the affected regions. MRI has the capability to provide detailed resolution and soft-tissue contrast that allows the investigation of anatomical variability. It is also used to quantitate the changes in volumetric conditions. Machine learning comes to the rescue in achieving this objective: machine learning has demonstrated promising applications in neuroimaging data analysis for dementia prediction and care [2]. The contribution of this paper includes a review of the literature on dementia, machine learning methods applied to health informatics for dementia prediction, a dataset description, a comparison of different approaches, results, conclusions, and future scope.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 658–668, 2023. https://doi.org/10.1007/978-3-031-27499-2_61

2 Review of Related Literature

Patil and Yardi have proposed an ANN-based approach for dementia diagnosis. The steps include intensity computation, application of the discrete cosine transform for extracting features, and classification using a supervised neural network [4]. Yo-Ping Huang et al. have proposed a fuzzy-inference-based approach using MRI data. The system is able to monitor the left and right brain hemispheres separately, and the possible outcomes are MCI, healthy, or schizophrenia [5]. Gunawardena et al. have proposed a support vector machine based approach for dementia classification. It includes preprocessing, region of interest (ROI) selection, and edge detection; it is a proposed method, and results are not published [6]. SeongEun Ryu et al. proposed a model for dementia risk prediction based on XGBoost using derived-variable extraction and hyperparameter optimization. This method gave 85.61% accuracy [7]. H. M. Tarek Ullah et al. proposed an approach for dementia detection using deep convolutional neural networks [8]. Esther E. Bron et al. have proposed an approach for feature selection based on the SVM weight vector for classification of dementia. Although the performance improvement due to feature selection was limited, the methods based on the p-map generally had the best performance and were therefore better at estimating the relevance of individual features [8]. Simon Duchesne et al. proposed an automated classification method using computational techniques within the context of cross-sectional analysis of MRI; they achieved an accuracy of 92% using a support vector machine [9]. C. Studholme et al. have proposed a direct template-based approach to separating the inhomogeneity of MRI intensity from the tissue-intensity structure. This method resulted in a consistent improvement in the reduction of global intensity variation and in the agreement with a global bias estimate [10]. Varun Jain et al. have proposed an augmentation and classification model that classifies dementia into various categories based on severity, using MRI scans [11]. Rory Raeper and colleagues developed a novel ensemble classifier-learning framework for early dementia diagnosis using multiple brain structures; their approach significantly outperformed other state-of-the-art methods [12]. Abol Basher et al. proposed a method to diagnose Alzheimer's disease using slice-wise volumetric features from the left and right hemispheres of the brain; their method combines a convolutional neural network model and a deep neural network model [13]. A team led by Jian Cheng developed a novel 3D convolutional network that can estimate brain age from T1-weighted MRI data; it can also be used to distinguish between healthy subjects and those with mild cognitive impairment [14]. Stephanos Leandrou et al. have reviewed various methods used for quantitative structural MRI acquisition for the assessment of Alzheimer's disease [15].


3 Machine Learning Based Approaches
3.1 Leading Approaches
A decision tree is commonly used to develop classification systems based on multiple covariates; it uses a population-based approach to split a group into branch-like segments [16]. The logistic regression model works with the odds of an event, i.e., the probability that the event occurs divided by the probability that it does not; taking the natural logarithm of the odds yields a function that can be used to predict an outcome, which makes the model very popular in medical journals [17, 18]. SVM is a supervised learning model that tries to generate separations across groups of observations by mapping them into a higher-dimensional space [2]. One advantage of techniques that use bagging is the ability to test the accuracy of the ensemble without removing data from the training set, as is done with a validation set [19].

3.2 Weka Classifiers [20]
3.2.1 Bayesian Classifiers
NaiveBayes is a probabilistic implementation of the Naive Bayes classifier; rather than relying only on the normality assumption for numeric attributes, it can use kernel density estimators. NaiveBayesUpdateable, an incremental version, processes one instance at a time and does not use a kernel estimator. NaiveBayesMultinomial implements the multinomial Naive Bayes classifier, while NaiveBayesMultinomialText performs similar calculations on string attributes. BayesNet learns Bayesian networks under the assumptions of nominal attributes and no missing values.

3.2.2 Trees
DecisionStump, designed for use with boosting methods, builds one-level decision trees for datasets with categorical or numeric classes; it treats missing values as a separate value and extends a third branch from the tree. RandomTree builds a tree that considers a random subset of features at each node, and RandomForest aggregates such random trees into a forest. REPTree builds a decision or regression tree using reduced-error pruning; it is designed to reduce error and improve speed. The Logistic Model Tree (LMT) handles missing values and both numeric and nominal attributes at each node; using LogitBoost, it fits a logistic regression function at a node, with cross-validation used just once to determine the number of iterations, which improves run time and accuracy and produces a compact tree. HoeffdingTree splits a decision tree incrementally based on information gain or the Gini index, and it can predict at the leaves using naive Bayes models or majority-class models.


3.2.3 Rules
OneR classifies using a single attribute, using a minimum bucket size for discretization. PART, a partial decision tree builder, compiles rules from C4.5's heuristics; it reports the rules, the number of instances covered, and the number of misclassified instances. DecisionTable evaluates feature subsets using cross-validation; instances not covered by an entry in the decision table can be assigned a class either by the nearest-neighbor method or by the global majority class. JRip implements the RIPPER algorithm with rule-set optimization.

3.2.4 Functions
SimpleLogistic builds logistic regression models by fitting them with LogitBoost, using cross-validation to determine the number of iterations; it effectively builds a logistic model tree pruned to a single node. Logistic is a multinomial logistic regression model built with a ridge estimator, which helps prevent overfitting by penalizing large coefficients. SMO trains a support vector classifier using the sequential minimal optimization scheme; it replaces missing values, transforms nominal attributes into binary ones, and normalizes values automatically. MultilayerPerceptron is a neural network trained with backpropagation; although it is listed as a function, it has its own user interface that differs from the other schemes.

3.2.5 Lazy Classifiers
A lazy learner stores the training instances and does no real work until classification time. IBk is one of the simplest implementations of the k-nearest-neighbor algorithm; the distance used to find the nearest neighbors can be Manhattan, Chebyshev, or Minkowski distance. KStar follows the nearest-neighbor principle but uses a transformation-based generalized distance function. LWL is a general framework for locally weighted learning; it assigns weights to the training instances and builds a classifier from them.

3.2.6 Metalearning Algorithms
A metalearning algorithm takes a classifier and turns it into a more powerful learner. One of its parameters is the number of iterations to perform for iterative schemes such as boosting and bagging. FilteredClassifier classifies filtered data; the filter's own parameters are based on the training data. Bagging reduces the variance of a classifier's results; it can be used for both regression and classification, since predictions are based on averaged probability estimates, and one of its parameters is the size of the bags. RandomCommittee builds an ensemble of base classifiers trained on the same data but with different random number seeds, and the final prediction is an average of the individual predictions. RandomSubSpace constructs an ensemble of classifiers trained on randomly selected subsets of the attributes.


AdaBoostM1 can be accelerated by setting a threshold for weight pruning; if the base classifier cannot handle weighted training instances, the instances can be resampled. LogitBoost can likewise be accelerated by weight pruning; internal cross-validation can be used to determine the number of iterations, and a shrinkage parameter can be set to prevent overfitting. Vote provides a baseline for combining multiple classifiers; the default scheme for both regression and classification is to average the predicted probability estimates, and other combination schemes, such as majority voting for classification, are also available. Stacking combines multiple classifiers and can be applied to both regression and classification problems; the base classifiers, the metalearner, and the number of cross-validation folds can all be specified. Five metalearners implement the wrapper technique to improve the performance of a base classifier. AttributeSelection reduces dimensionality by selecting the attributes most relevant to the classification task before passing the data to the classifier. CVParameterSelection uses cross-validation to optimize parameters; for each parameter, a string containing its upper and lower bounds and the desired number of increments can be given. MultiScheme selects one classifier from several candidates using either cross-validation or the resubstitution error on the training data; performance is measured by mean squared error for regression and by percentage correct for classification. IterativeClassifierOptimizer improves an iterative classifier by optimizing its number of iterations. ThresholdSelector optimizes an evaluation metric such as accuracy, recall, or F-measure by selecting a probability threshold for the classifier's output.

3.2.7 Miscellaneous
InputMappedClassifier wraps a base classifier and builds a mapping between the attributes seen in the test data and those seen during training; attribute values present in the test data but not in the training data are ignored.
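To make the comparison pipeline concrete, the following is a minimal Python sketch that evaluates a few of the classifier families named above with 10-fold cross-validation. It uses scikit-learn as a stand-in for the corresponding Weka implementations, so the file name oasis_longitudinal.csv, the feature columns, and the classifier settings are illustrative assumptions, not the exact configuration used in this study:

```python
# Illustrative sketch: comparing several classifier families with 10-fold
# cross-validation, analogous to the Weka experiment described above.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier    # ~ Weka IBk
from sklearn.linear_model import LogisticRegression   # ~ Weka Logistic
from sklearn.svm import SVC                           # ~ Weka SMO
from sklearn.tree import DecisionTreeClassifier       # ~ Weka REPTree (approx.)
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier

# Hypothetical CSV export of the OASIS longitudinal demographics;
# column names follow the public OASIS-2 release (assumed, not verified here).
df = pd.read_csv("oasis_longitudinal.csv")
X = df[["Age", "EDUC", "SES", "MMSE", "eTIV", "nWBV", "ASF"]]
y = df["Group"]

models = {
    "kNN (IBk-like)": KNeighborsClassifier(n_neighbors=1),
    "Logistic":       LogisticRegression(max_iter=1000),
    "SVM (SMO-like)": SVC(),
    "Decision tree":  DecisionTreeClassifier(),
    "Bagging":        BaggingClassifier(),
    "Random forest":  RandomForestClassifier(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for name, model in models.items():
    # Impute missing values and scale features before each classifier.
    pipe = make_pipeline(SimpleImputer(strategy="median"),
                         StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=cv)
    print(f"{name:16s} mean accuracy = {scores.mean():.4f}")
```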

4 Methodology
4.1 Source of Data
This research is conducted on MRI volumetric data from the OASIS dataset. The set consists of a longitudinal collection of 150 subjects aged 60 to 96, with 373 imaging sessions in total. The dataset provides demographic and clinical attributes such as subject ID, MRI ID, group (demented or nondemented), visit number, MR delay, gender, age, socioeconomic status, and Mini-Mental State Examination (MMSE) score.
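As a quick illustration, the demographic attributes listed above can be inspected with a few lines of pandas; the file name and the exact column labels below are assumptions based on the public OASIS-2 release, not a verified export used in this paper:

```python
# Peek at the OASIS-2 longitudinal demographics described above.
# "oasis_longitudinal.csv" and the column labels are assumed names.
import pandas as pd

df = pd.read_csv("oasis_longitudinal.csv")
cols = ["Subject ID", "MRI ID", "Group", "Visit", "MR Delay",
        "M/F", "Age", "SES", "MMSE"]
print(df[cols].head())             # one row per imaging session
print(df["Group"].value_counts())  # e.g. Nondemented / Demented counts
```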

A Comparative Analysis of Classification Algorithms

663

4.2 Algorithms
A variety of classification algorithms are applied using the Weka tool. Figure 1 shows the types of algorithms used for comparison.

5 Results
To identify the best algorithm for dementia prediction, experiments were conducted on the OASIS longitudinal data using Weka version 3.9.6. A broad spectrum of 28 classification algorithms was compared and analyzed. Table 1 shows detailed results for the ten methods with the best classification accuracy; it includes the time taken for classification in seconds, CCI (correctly classified instances), ICI (incorrectly classified instances), the kappa statistic, MAE (mean absolute error), RMSE (root mean squared error) and RAE (relative absolute error). From Table 1 it can be inferred that the IBk classifier (instance-based learning with parameter k) performs best, with a classification accuracy of 98.93%. Table 2 shows the RRSE (root relative squared error) and the weighted averages over all classes of precision, recall, F-measure, Matthews correlation coefficient (MCC), receiver operating characteristic (ROC) area and precision-recall curve (PRC) area. On the basis of these results, IBk also performs best in terms of precision, recall, F-measure and MCC. Figure 2 shows the percentage of correctly classified instances for the different algorithms, and Fig. 3(a) and 3(b) compare the true positive and false positive rates of the top ten algorithms; IBk gives the highest true positive rate and the lowest false positive rate. In summary, experiments were performed on the OASIS longitudinal dataset of 150 patients with multiple scans, yielding 373 records in total, using the Weka tool; out of the 28 methods applied, IBk performs best with a classification accuracy of 98.93%.
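For reference, the error measures reported in Tables 1 and 2 follow the standard definitions below (stated here in general form; Weka computes them from the predicted class probabilities), where $\hat{y}_i$ and $y_i$ are the predicted and true values over $N$ instances and $\bar{y}$ is the mean of the training targets (the ZeroR baseline):

```latex
\begin{align*}
\kappa &= \frac{p_o - p_e}{1 - p_e},
  \quad \text{where } p_o, p_e \text{ are observed and chance agreement},\\
\mathrm{MAE} &= \frac{1}{N}\sum_{i=1}^{N}\lvert \hat{y}_i - y_i \rvert,
  \qquad
  \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2},\\
\mathrm{RAE} &= \frac{\sum_{i=1}^{N}\lvert \hat{y}_i - y_i \rvert}
                    {\sum_{i=1}^{N}\lvert \bar{y} - y_i \rvert},
  \qquad
  \mathrm{RRSE} = \sqrt{\frac{\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}
                             {\sum_{i=1}^{N}(\bar{y} - y_i)^2}}.
\end{align*}
```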

Fig. 1. Classification algorithms used for comparison [22]. Grouped by Weka category, the algorithms shown are: Bayes (BayesNet, NaiveBayes, NaiveBayesMultinomialText, NaiveBayesUpdateable); Functions (Logistic, MultilayerPerceptron, SimpleLogistic, SMO); Lazy (IBk, KStar, LWL); Meta (AdaBoostM1, AttributeSelectedClassifier, Bagging, Regression, FilteredClassifier, IterativeClassifierOptimizer, LogitBoost, MultiClassClassifier, MultiClassClassifierUpdateable, MultiScheme, RandomCommittee, RandomizedFilteredClassifier, RandomSubSpace, Stacking, Vote, WeightedInstancesHandlerWrapper); Rules (ZeroR, PART, OneR, JRip, DecisionTable); Trees (DecisionStump, HoeffdingTree, J48, LMT, RandomForest, RandomTree, REPTree); Miscellaneous (InputMappedClassifier).


Table 1. Detailed results of top ten classifiers

| Classifier            | Time Taken (Sec.) | % of CCI | % of ICI | Kappa statistic | MAE    | RMSE   | RAE    |
|-----------------------|-------------------|----------|----------|-----------------|--------|--------|--------|
| IBk                   | 0                 | 98.93%   | 1.07%    | 0.9815          | 0.011  | 0.0843 | 2.86%  |
| SMO                   | 0.26              | 98.66%   | 1.34%    | 0.9767          | 0.2252 | 0.2776 | 58.41% |
| Logistic              | 0.01              | 98.39%   | 1.61%    | 0.972           | 0.0107 | 0.1028 | 2.78%  |
| MultiClass Classifier | 0.2               | 98.39%   | 1.61%    | 0.972           | 0.0089 | 0.0792 | 2.32%  |
| Regression            | 0.64              | 97.86%   | 2.14%    | 0.9627          | 0.0231 | 0.1056 | 5.98%  |
| OneR                  | 0.1               | 97.86%   | 2.14%    | 0.9627          | 0.0143 | 0.1196 | 3.71%  |
| REPTree               | 0.01              | 97.86%   | 2.14%    | 0.9627          | 0.0175 | 0.0904 | 4.53%  |
| Bagging               | 0.09              | 97.59%   | 2.41%    | 0.958           | 0.1085 | 0.1613 | 28.14% |
| Filtered Classifier   | 0.09              | 97.59%   | 2.41%    | 0.958           | 0.1085 | 0.1613 | 28.14% |
| Random Committee      | 0.03              | 97.59%   | 2.41%    | 0.958           | 0.1553 | 0.2001 | 40.27% |

Table 2. Detailed results of top ten classifiers with different parameters

| Classifier            | RRSE   | Precision | Recall | F-Measure | MCC   | ROC Area | PRC Area |
|-----------------------|--------|-----------|--------|-----------|-------|----------|----------|
| IBk                   | 19.21% | 0.99      | 0.989  | 0.989     | 0.982 | 0.989    | 0.981    |
| SMO                   | 63.26% | 0.987     | 0.987  | 0.986     | 0.977 | 0.991    | 0.978    |
| Logistic              | 23.44% | 0.984     | 0.984  | 0.984     | 0.973 | 0.995    | 0.988    |
| MultiClass Classifier | 18.04% | 0.984     | 0.984  | 0.984     | 0.973 | 0.999    | 0.998    |
| Regression            | 24.07% | 0.979     | 0.979  | 0.978     | 0.961 | 0.999    | 0.998    |
| OneR                  | 27.25% | 0.979     | 0.979  | 0.978     | 0.961 | 0.978    | 0.966    |
| REPTree               | 20.61% | 0.979     | 0.979  | 0.978     | 0.961 | 0.999    | 0.998    |
| Bagging               | 36.77% | 0.977     | 0.976  | 0.976     | 0.957 | 0.999    | 0.999    |
| Filtered Classifier   | 36.77% | 0.977     | 0.976  | 0.976     | 0.957 | 0.999    | 0.999    |
| Random Committee      | 45.61% | 0.976     | 0.976  | 0.976     | 0.959 | 1        | 1        |

Fig. 2. Graph showing the percentages of correctly classified instances for the 28 algorithms in Weka (y-axis: % of correctly classified instances).

Fig. 3. Graphs showing (a) the true positive rates and (b) the false positive rates of the top ten classifiers; IBk has the highest true positive rate (0.989) and the lowest false positive rate (0.006).

6 Conclusion

In this paper, the performance of leading classification algorithms for dementia prediction was compared. The IBk classifier performs best on the majority of the performance evaluation parameters.


References
1. Trojachanec, K., Kitanovski, I., Dimitrovski, I., Loshkovska, S.: Longitudinal brain MRI retrieval for Alzheimer's disease using different temporal information. IEEE Access 6, 9703–9712 (2018)
2. Tsang, G., Xie, X., Zhou, S.-M.: Harnessing the power of machine learning in dementia informatics research: issues, opportunities, and challenges. IEEE Rev. Biomed. Eng. 13, 113–129 (2020)
3. Ahmed, M.R., Zhang, Y., Feng, Z., Lo, B., Inan, O.T., Liao, H.: Neuroimaging and machine learning for dementia diagnosis: recent advancements and future prospects. IEEE Rev. Biomed. Eng. 12, 19–33 (2019)
4. Patil, M.M., Yardi, A.R.: ANN based dementia diagnosis using DCT for brain MR image compression. In: 2013 International Conference on Communication and Signal Processing, Melmaruvathur, India, April 2013, pp. 451–454 (2013)
5. Huang, Y.-P., Zaza, S.M.M.: A fuzzy approach to evaluating the risk of dementia by analyzing the cortical thickness from MRI. In: 2014 IEEE International Conference on System Science and Engineering (ICSSE), China, July 2014, pp. 82–87 (2014)
6. Gunawardena, K.A.N.N.P., Rajapakse, R.N., Kodikara, N.D., Mudalige, I.U.K.: Moving from detection to pre-detection of Alzheimer's disease from MRI data. In: 2016 Sixteenth International Conference on Advances in ICT for Emerging Regions (ICTer), Negombo, Sri Lanka, September 2016, p. 324 (2016)
7. Ryu, S.-E., Shin, D.-H., Chung, K.: Prediction model of dementia risk based on XGBoost using derived variable extraction and hyper parameter optimization. IEEE Access 8, 177708–177720 (2020)
8. Ullah, H.M.T., Onik, Z., Islam, R., Nandi, D.: Alzheimer's disease and dementia detection from 3D brain MRI data using deep convolutional neural networks. In: 2018 International Conference for Convergence in Technology (I2CT), Pune, April 2018, pp. 1–3 (2018)
9. Duchesne, S., Caroli, A., Geroldi, C., Barillot, C., Frisoni, G.B., Collins, D.L.: MRI-based automated computer classification of probable AD versus normal controls. IEEE Trans. Med. Imaging 27(4), 509–520 (2008)
10. Studholme, C., Cardenas, V., Song, E., Ezekiel, F., Maudsley, A., Weiner, M.: Accurate template-based correction of brain MRI intensity distortion with application to dementia and aging. IEEE Trans. Med. Imaging 23(1), 99–110 (2004)
11. Jain, V., Nankar, O., Jerrish, D.J., Gite, S., Patil, S., Kotecha, K.: A novel AI-based system for detection and severity prediction of dementia using MRI. IEEE Access 9, 154324–154346 (2021)
12. Raeper, R., Lisowska, A., Rekik, I.: Cooperative correlational and discriminative ensemble classifier learning for early dementia diagnosis using morphological brain multiplexes. IEEE Access 6, 43830–43839 (2018)
13. Basher, A., Kim, B.C., Lee, K.H., Jung, H.Y.: Volumetric feature-based Alzheimer's disease diagnosis from sMRI data using a convolutional neural network and a deep neural network. IEEE Access 9, 29870–29882 (2021)
14. Cheng, J., et al.: Brain age estimation from MRI using cascade networks with ranking loss. IEEE Trans. Med. Imaging 40(12), 3400–3412 (2021)
15. Leandrou, S., Petroudi, S., Kyriacou, P.A., Reyes-Aldasoro, C.C., Pattichis, C.S.: Quantitative MRI brain studies in mild cognitive impairment and Alzheimer's disease: a methodological review. IEEE Rev. Biomed. Eng. 11, 97–111 (2018)
16. Song, Y.-Y., Lu, Y.: Decision tree methods: applications for classification and prediction. Shanghai Arch. Psychiatry 27(2), 130–135 (2015)
17. LaValley, M.P.: Logistic regression. Circulation 117(18), 2395–2399 (2008)


18. Nick, T.G., Campbell, K.M.: Logistic regression. In: Ambrosius, W.T. (ed.) Topics in Biostatistics, vol. 404, pp. 273–301. Humana Press, Totowa (2007)
19. Banfield, R.E., Hall, L.O., Bowyer, K.W., Kegelmeyer, W.P.: A comparison of decision tree ensemble creation techniques. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 173–180 (2007)
20. Frank, E., Hall, M.A., Witten, I.H.: The WEKA Workbench. Online appendix for 'Data Mining: Practical Machine Learning Tools and Techniques'. Morgan Kaufmann (2016). Accessed 06 Apr 2022
21. Marcus, D.S., Fotenos, A.F., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J. Cogn. Neurosci. 22(12), 2677–2684 (2010)
22. Monlezun, D.J., et al.: Machine learning-augmented propensity score-adjusted multilevel mixed effects panel analysis of hands-on cooking and nutrition education versus traditional curriculum for medical students as preventive cardiology: multisite cohort study of 3,248 trainees over 5 years. BioMed Res. Int. 2018, 1–10 (2018)

Virtual Reality, Augmented Reality and Mixed Reality for Teaching and Learning in Higher Education

Anne Lin and Tendani Mawela(B)

University of Pretoria, Hatfield, South Africa
[email protected], [email protected]

Abstract. This study aimed to explore the impact of Virtual Reality, Augmented Reality, and Mixed Reality in higher education. It highlights the advantages and disadvantages of virtual technologies and suggests how they can be implemented effectively to enhance higher education. The study adopted the systematic literature review method and included 19 articles in the analysis. The extant literature reports several advantages of and challenges for virtual technologies. The benefits include improved student performance, motivation, and a positive attitude towards learning. However, there are also negative outcomes and risks to keep in mind, such as the possibility of damaging students' relationships or hindering their communication skills, design challenges, and the fact that integrating the technologies into teaching may be time-consuming.

Keywords: Virtual reality · Mixed reality · Augmented reality · Higher education · Systematic literature review

1 Introduction
In recent years, there has been an increase in the uptake of Virtual Reality (VR), Mixed Reality (MR) and Augmented Reality (AR) technologies across various sectors. These technologies can provide a virtual learning environment (VLE) that enhances, motivates, and stimulates learners' understanding of theory where instructional learning has proven difficult, especially for enhancing experiential learning in higher education [1]. VR, AR, and MR are similar concepts but use different technologies and different means of interaction between the real world and the virtual world. VR is a complete interaction within a virtual world containing 3-D representations of objects in the real world. AR integrates digital information into the real world by capturing real-world data and displaying an information overlay on real-world objects. MR, also known as hybrid reality, merges the real and virtual worlds, enabling people to experience objects and scenarios that do not exist in the real world [2]. Education has been continuously evolving alongside the integration of new technology, and both teachers and students are challenged to use technology efficiently in a technology-dependent world. With the virtual reality market being one of the fastest-growing markets, many types of education, and especially higher education, are now slowly integrating this technology into a new learning style by providing students with different environments and learning methods.


VR is in the early stages of adoption in the higher education context [24]. However, some researchers question whether this cutting-edge technology will have a positive or negative impact on the learning environment [3]. When it comes to VR and higher education, the following questions are often raised: (1) What are the benefits of VR in higher education? (2) What are the disadvantages or risks of using VR in higher education? (3) Is VR learning better than traditional learning? (4) How should higher education implement VR into educational learning efficiently? Several studies have examined how VR [4], MR [5] and AR [6] have impacted educational experiences and how they compare in enhancing experiential learning in higher education [1]. This study, based on a systematic literature review (SLR), discusses how VR, AR and MR have influenced teaching and learning according to the extant literature. The study found that the literature reports a number of advantages as well as challenges for the adoption of VR, AR and MR in education.

2 Research Method
The SLR method incorporates a structured and systematic approach to identify, analyse and synthesise secondary data [26]. SLRs are driven by a particular research question. This study sought to address the following research question: How do Virtual Reality, Augmented Reality, and Mixed Reality impact student learning in higher education? The study followed qualitative and interpretive approaches. It was informed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) principles for identifying the relevant research articles included in this paper [25]. The data from the secondary studies were analysed using the thematic analysis approach [27].

2.1 Data Sources and Search Strategy
The following databases and engines were used to search for research articles: Scopus, Google Scholar and Web of Science. The following search string was used to identify relevant articles that may contribute towards addressing the research question: ("Virtual Reality" OR "VR" OR "cyberspace" OR "simulated environment" OR "3 Dimensional environment" OR "3-Dimensional environment" OR "simulated reality" OR "computer simulation" OR "artificial reality" OR "Augmented Reality" OR "AR" OR "Mixed Reality" OR "MR") AND ("Higher Education" OR "University" OR "College" OR "tertiary school" OR "tertiary education").

2.2 Inclusion and Exclusion Criteria
The following inclusion and exclusion criteria were defined (Table 1):


Table 1. Article inclusion and exclusion criteria

| Inclusion criteria | Exclusion criteria |
|---|---|
| Publications that discuss how Virtual Reality, Mixed Reality, or Augmented Reality affect higher education | Duplicate papers and publications that are not accessible in full text |
| Publications showing the advantages and disadvantages of using virtual reality in higher education | Publications on Virtual Reality, Mixed Reality, and Augmented Reality that are not related to education |
| Papers published between January 2005 and October 2020 | Articles not published in English |
| Peer-reviewed papers (journal articles and conference papers) | Articles not aimed at evaluating the effect of Virtual Reality, Mixed Reality, and Augmented Reality on education |

3 Results
The study followed the four main PRISMA stages: identification, screening, eligibility and inclusion. In the identification stage, a total of 1397 articles were found in the selected databases using the search term. Fifty-two (52) duplicate articles were identified and excluded after an initial screening, leaving 1345 articles. After applying the inclusion and exclusion criteria, 1168 articles that met the exclusion criteria were excluded, and 151 articles that did not meet the inclusion criteria were also excluded, leaving 26 eligible articles. Additionally, a quality assessment was conducted, assessing the articles on criteria related to research design, the focus of the paper and the reported limitations of each paper. After the quality assessment, 19 articles remained for inclusion in the research.

4 Discussion
This section presents the results and findings from the chosen articles to answer the questions of how virtual reality, augmented reality and mixed reality impact higher education learning and how they may enhance learning. There are four main discussions in this section: the purpose of applying the different virtual technologies in educational learning; the advantages and disadvantages of virtual technology in education; the comparison between traditional means of learning and the new method of learning; and how the different virtual technologies can be applied effectively in the learning process. The discussion focuses mostly on augmented reality because it is the most used virtual technology for educational learning, followed by virtual reality. Only a small number of articles were found on the usage of mixed reality for educational purposes, as access to the technology was very expensive until the recent release of mobile mixed reality [7]. According to [8], virtual technologies are introducing educational innovations, resulting in emerging educational experiences where students are introduced to new ways of learning and communicating in the new "virtual campus".


This section focuses on the purposes for which different virtual technologies have been introduced in the educational learning environment. Augmented reality (AR) is becoming accessible to the majority as smartphone technology advances, making it possible for any student in possession of a smartphone to experience augmented reality learning [9]. According to [10], AR is interactive in real time, intertwines the real and virtual worlds in a 3D format, and helps the user perceive information much better than through the natural senses alone. Students can learn and understand complicated theories better if they can perceive information easily through AR. Studies have shown that learning through augmented reality decreased the failure rate by 50% in a subject with a high failure rate of 70% [11] and increased the pass rate of a subject from 62% to 84% [10]. AR also affects short-term and long-term learning memory. Results showed that AR enhanced students' short-term memory, as they performed 16% better than students who did not use AR; in the long term, however, students who did not use AR performed 4% better on the test. The technology therefore still requires design improvements so that students can also perform better on long-term memory [20]. The study by [12] surveyed students for feedback on AR learning, asking them to rate the usability and usefulness of learning through AR. The results showed an overall positive response of around 80%, which corresponded with improved test performance; the students not only performed better in tests but also welcomed this method of learning. The authors of [12] also mention that with the increasing number of students enrolled in a subject, practice laboratories are getting overcrowded, which decreases teaching quality as the teacher's dedication to each student is reduced. With AR learning, students can learn on their own without the need for repeated explanations, allowing them to follow their own learning processes. On the other hand, the authors of [7] suggest that mobile mixed reality (MR) offers multimodal learning analytics; in the case of clinical anatomy lessons, students learn through enhanced engagement while teachers can provide accurate feedback. Virtual reality (VR) can also support distance learning in higher education. According to [13], there are five modes of learning for VR distance higher education: simulation experiment learning, self-exploration learning, distance open learning, distance education platforms, and distance group discussions. Self-exploration learning helps students improve their learning efficiency by choosing their own teaching content and applying their own learning ideas. Simulation experiment learning enables multicultural learning across schools or countries and provides students with opportunities to conduct experiments that are impossible, dangerous, or expensive to do in real life. Distance open learning allows open education where students can download their study material through cloud computing, improving the learning experience and its effect. The distance education platform allows diversified teaching and provides students with a fair learning environment through a conferencing style of distance education.
Finally, distance group discussions allow students to exchange opinions on themes under the teacher's remote guidance, promoting efficient group discussion that can enrich students' problem-solving knowledge [13].


Another study surveyed participants, asking them to rank educational approaches that could be supplemented by VR technology; the top three were experiential learning, active learning, and problem-based learning [14]. The main purpose of applying virtual technologies in education is to enhance learning, improve students' understanding and enrich their knowledge through the different types of virtual technology. Not only do they spark students' interest in learning and reduce procrastination, but they also make difficult concepts easier to understand and enable a self-learning environment, increasing pass rates significantly. Teachers are also able to focus more on students and understand them better through recorded data; using this data, teachers may identify struggling students and assist them in the precise areas where they struggle. The next section discusses the advantages and disadvantages of virtual technology for higher education learning.

4.1 Advantages of Virtual Technology to Higher Education Learning
AR technology is highly accessible, since anyone who owns a smartphone can use it [9]. According to [15], AR benefits educational learning in three main areas. Firstly, it improves students' learning outcomes by motivating them to learn and providing learning achievement. Students can learn as they play, which is why they adopt a positive attitude towards AR learning activities. AR not only enhances students' learning motivation but also enhances their positive attitude and satisfaction towards learning by showing information in 3D images, helping them understand difficult content more easily [15]. VR technology creates a completely different learning environment for students, according to [3], and may benefit them greatly: VR inspires students to learn by creating experiences that are impossible to see in real life, making them feel more motivated. It increases classroom engagement, as students feel more comfortable talking about their experiences through VR. The enhanced visualisations produced by VR make students "forget" that they are learning, which inspires them to pay attention with more interest. VR also improves educational quality in complex subjects such as medicine, where doctors can explain concepts better, as students can see and interact with "real life" situations. The language barrier is one of the biggest barriers in education when learning in a different country, but with the help of VR, every language can be implemented in the system, allowing students to have lessons, discussions, or information exchanges with students or teachers from other countries [13]. MR used to be an expensive technology until the recent release of mobile MR [7]. According to [16], the usage of mobile MR in healthcare education improved students' procedural skills and showed a significant decrease in boredom, fatigue, and numbness compared to the traditional control groups in class, which lowered their cognitive load for learning.


4.2 Disadvantages of Virtual Technology to Higher Education Learning
Although AR is advantageous in educational learning, it requires a well-designed interface because it demands extensive user interaction. Without good usability, students may struggle and find AR difficult to use [15], which may cause frustration, confusion and a loss of time instead of enhanced learning. The authors of [15] also note that there are many technical problems associated with location-based AR applications, and that using AR in large groups may be too expensive and impractical to implement in normal classes due to time constraints. VR is an advanced technology that can be very expensive, and it is often accessible only to students who can afford it, which creates inequality in education [3]. Although VR can give students a completely different view of the world, there are several further disadvantages to using it in education [3]. With decreased human contact, the technology threatens to damage relationships between students and hinder their communication skills, and students may also risk becoming addicted to the virtual world. Even though VR creates a virtual interactive world, it lacks flexibility when students have questions that were not planned into the VR session, as the software restricts certain actions. Like other software, it may have functionality issues, which can be inconvenient if students have practical exams using VR. Although mobile MR decreases boredom, fatigue, and numbness compared to traditional control groups, [16] found that many studies reported no proven difference in skill competency between learning through mobile MR and traditional control groups, as with AR and VR. There are also many technical issues, such as phone incompatibility, internet connection problems or slow response times, which may frustrate students. In summary, there are many benefits of using virtual technology in teaching and learning, such as making learning interactive and fun, which motivates students and allows them to learn complicated theories efficiently and effectively. However, the technologies can be complicated to use if they are not designed well, which not only defeats the purpose of using virtual technology to enhance education but may also add stress and anxiety for students who do not understand how to use it. Virtual technology may also be time-consuming to use in classes, as lecturers need to set up the demonstration and ensure that everyone in the classroom is connected and can use the technology without issues.

4.3 Traditional Learning Approaches and Virtual Learning
Existing established practices are often disrupted by newly developed technology, especially in education and training. Education has adopted these changes gradually, and the implementation of new technology has strongly disrupted students' learning and social lifestyles [17]. According to [18], recent studies found that 67% of higher education students use mobile devices to learn in classrooms. In addition, over the past 15 years, the majority of students have not related to traditional teaching practices, which are perceived as a dull and boring routine that demotivates them in their studies.


This section focuses on the comparison between traditional study and virtual learning, highlighting how virtual technology affects students in their studies. Traditional classrooms are gradually being replaced by enhanced learning spaces, also known as active learning spaces. Traditional classrooms make lessons long and boring, with students expected only to listen in class, while active learning encourages activities and discussions through which students learn by interaction [19]. Two types of technology are often used in active classrooms: AR and VR. AR has recently been used in classrooms as an innovative approach to education compared with normal textbooks: AR textbooks allow students to interact with their textbooks, which is an engaging and attractive learning experience. VR is often used for students to conduct virtual experiments that are impossible, dangerous, or expensive to perform in real environments, giving students more opportunities to experiment [19]. There are three main types of human memory: sensory, short-term, and long-term memory. Virtual technology can enhance sensory memory, and students may thereby learn more efficiently and effectively. The study by [20] found that AR students excelled in short-term memory, scoring 16% higher than non-AR students on the short-term test; however, on the long-term test, non-AR students scored 4% higher than students who attended the AR exhibit. The study by [21] was conducted in a computer hardware class, where learners were tested in three areas: application achievement, assembly-skills self-efficacy, and theoretical-knowledge self-efficacy. The study used a Mann-Whitney U-test to compare the study variables between the experimental groups, which used an AR application, and the control groups, which did not. Before the experiment, students in both groups were tested and showed no significant differences on the three tests. For application achievement, there was a significant difference between the experimental group, who used the AR application to learn how to assemble the hardware, and the control group, who learned from an assembly manual. In the next test, both groups had a high level of theoretical-knowledge self-efficacy with no significant difference, so the AR application did not affect theoretical-knowledge self-efficacy. Lastly, both groups also had a high level of assembly-skills self-efficacy with no significant difference, so AR did not affect assembly-skills self-efficacy either. The study highlighted that AR can enhance practical skills significantly. Another finding was that students who used the AR application completed the assembly task in a shorter time, whereas students who did not use it were given extra time to complete their tasks; students who used the AR application also asked far fewer questions while completing the task [21].
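As an aside, the Mann-Whitney U-test used in that study is straightforward to reproduce; the sketch below uses invented placeholder scores purely to illustrate the procedure, not data from [21]:

```python
# Illustrative Mann-Whitney U-test comparing an AR group with a control
# group, as in the study discussed above. The scores are invented
# placeholder data, not values from the cited study.
from scipy.stats import mannwhitneyu

ar_group      = [78, 85, 90, 69, 88, 92, 81, 75]   # hypothetical scores
control_group = [65, 72, 80, 58, 77, 70, 74, 66]

stat, p_value = mannwhitneyu(ar_group, control_group,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```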
One study investigated the effects of VR on students' performance, emotions, and engagement [22]. The participants were assigned to different learning conditions: traditional textbooks, VR, and video as a control. In the performance test, the VR group was better at remembering the content than the traditional textbook group, although there was no difference between the two groups in understanding the theory.


Overall, the performance test showed that the VR group performed 3.6% better than the textbook group. The emotion-response test covered two main categories: positive emotions such as interest, amusement, and surprise, and negative emotions such as anger, sadness, anxiety, fear, and disgust. A 3 × 2 ANOVA was used to compare emotions between the pre-test and post-test. For VR learning, positive emotions increased significantly while negative emotions decreased significantly; for both video and traditional textbook learning, positive emotions decreased while negative emotions stayed the same [22]. The final test concerned engagement: average Web-based Learning Tools (WBLT) ratings were grouped into three categories (learning, design, and engagement), and a one-way ANOVA was used to evaluate the effect of each learning condition. There was no significant difference between VR and the textbook in the design category; however, VR had the highest rating in all three categories compared with video and textbook.

4.4 Implementing Virtual Technology in Higher Education
Classrooms are moving away from long, passive lectures towards active lessons in which students can interact. Many higher education classrooms have already adopted mobile phones, and it may be only a matter of time before virtual technologies join the technology trend in education and become the norm in classrooms. It is evident that virtual technologies not only increase classroom performance but also increase students' positive attitude, motivating them to learn. One study showed that virtual technology enhances short-term memory more than traditional study does, while traditional study retains a smaller advantage for long-term memory: the short-term gain from virtual technology (16%) was four times the long-term deficit (4%). It has been argued that a higher education VR learning system needs to be designed with consideration of the type of course and of pedagogical and psychological issues [23]. The benefits of each system should be taken into account during design to ensure that it is beneficial for students: for example, VR is effective where users must absorb information while multitasking, and AR makes difficult theories much easier to understand through visualisation. Technical issues must also be considered when incorporating the technology into education [23]. Researchers [9] suggest using the Substitution, Augmentation, Modification, Redefinition (SAMR) framework to incorporate virtual technologies into education. They also argue that the framework aligns with three levels of creativity: replication, incrementation, and redirection [9]. Replication substitutes current educational practices and activities with new technology; incrementation adds performance attributes to current practices and activities; and redirection rethinks which activities and assessments the new technology enables, refining current practices so that students can perform activities they could not do before. A framework or model needs to be developed and followed to implement virtual technology in education effectively, enhancing current practices and activities by balancing practical skills and theory.


When designing such models, both traditional learning and the new technology need to be analysed and combined correctly to create synergy. However, not all courses in higher education are suitable for virtual technology, so alternative approaches may need to be considered for some.

5 Conclusion
5.1 Concluding Comments
In this paper, the SLR method was adopted: a total of 26 articles were identified using the inclusion and exclusion criteria and further assessed against quality criteria, leaving 19 articles for the discussion of how virtual technology enhances higher education. The study focused on how Virtual Reality, Augmented Reality, and Mixed Reality enhance student learning in higher education, guided by four questions: the advantages of the technology, its disadvantages or risks, whether traditional classrooms or virtual learning are better for higher education learning, and how virtual technology can be implemented effectively in the current educational system. There are many benefits of implementing virtual technology in education, such as increasing students' performance, motivation, and positive attitude towards learning, and opening new doors for previously impossible experiments. However, there are also negative impacts and risks to keep in mind, such as the possibility of damaging students' relationships or hindering their communication skills; such problems are already seen with social media and other technologies. Furthermore, technical issues in classrooms may waste time rather than enhance student learning. By implementing virtual technology correctly in education, these negative impacts may be avoided or mitigated. It was also noted that while virtual technology may enhance short-term memory significantly, it does not enhance long-term memory as well as traditional methods of study do. However, virtual technology may allow students to be more efficient and effective in their studies; given the slight long-term memory difference between the two methods, students can choose to learn and gain more knowledge efficiently rather than spend more time improving long-term memory by a mere 4%, as one study showed that students completed tasks much more efficiently using AR than with traditional learning. Another finding of this paper is that virtual technologies are not suitable for all courses; courses that require both theoretical knowledge and practical skills will benefit the most, provided the virtual technologies are designed and integrated in a manner appropriate to the subject content of the course.

5.2 Limitations and Future Research
This study was limited to three electronic databases in its search for articles. Future studies may consider extending the search to additional databases and extending the period under consideration. The study also faced a limitation in the availability of papers covering VR, AR and MR. The analysis of the different types of virtual technology indicated that the majority of the articles found were related to AR, as it is the most used virtual technology in education owing to its accessibility, while VR and MR had fewer articles to review, likely due to the limited implementation of these technologies in education.


References
1. Kolb, A.Y., Kolb, D.A.: Learning styles and learning spaces: enhancing experiential learning in higher education. Acad. Manag. Learn. Educ. 4(2), 193–212 (2005)
2. Farshid, M., Paschen, J., Eriksson, T., Kietzmann, J.: Go boldly!: explore augmented reality (AR), virtual reality (VR), and mixed reality (MR) for business. Bus. Horiz. 61(5), 657–663 (2018)
3. Hicks, P.: The pros and cons of using virtual reality in the classroom. eLearning Industry (2016). https://elearningindustry.com/pros-cons-using-virtual-reality-in-the-classroom. Accessed 6 Nov 2020
4. Merchant, Z., Goetz, E.T., Cifuentes, L., Keeney-Kennicutt, W., Davis, T.J.: Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: a meta-analysis. Comput. Educ. 70, 29–40 (2014)
5. Ke, F., Lee, S., Xu, X.: Teaching training in a mixed-reality integrated learning environment. Comput. Hum. Behav. 62, 212–220 (2016)
6. Liarokapis, F., Anderson, E.F.: Using augmented reality as a medium to assist teaching in higher education 1, 1–7 (2010)
7. Birt, J., Clare, D., Cowling, M.: Piloting multimodal learning analytics using mobile mixed reality in health education. In: 2019 IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–6 (2019)
8. Barajas, M., Owen, M.: Implementing virtual learning environments: looking for holistic approach. J. Educ. Technol. Soc. 3(3), 39–53 (2000)
9. Cochrane, T., Narayan, V., Antonczak, L.: A framework for designing collaborative learning environments using mobile AR. J. Interact. Learn. Res. 27(4), 293–316 (2016)
10. Tashko, R., Elena, R.: Augmented reality as a teaching tool in higher education. Int. J. Cogn. Res. Sci. Eng. Educ. 3(1), 7–15 (2015)
11. Del Bosque, L., Martinez, R., Torres, J.L.: Decreasing failure in programming subject with augmented reality tool. Procedia Comput. Sci. 75, 221–225 (2015)
12. Martín-Gutiérrez, J., Fabiani, P., Benesova, W., Meneses, M.D., Mora, C.E.: Augmented reality to promote collaborative and autonomous learning in higher education. Comput. Hum. Behav. 51, 752–761 (2015)
13. Liu, Y., Fan, X., Zhou, X., Liu, M., Wang, J., Liu, T.: Application of virtual reality technology in distance higher education. In: Proceedings of the 2019 4th International Conference on Distance Education and Learning, pp. 1–5 (2019)
14. Baxter, G., Hainey, T.: Student perceptions of virtual reality use in higher education. J. Appl. Res. High. Educ. 12, 413–424 (2019)
15. Akçayır, M., Akçayır, G.: Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11 (2017)
16. Stretton, T., Cochrane, T., Narayan, V.: Exploring mobile mixed reality in healthcare higher education: a systematic review. Res. Learn. Technol. 26, 2131 (2018)
17. Psotka, J.: Educational games and virtual reality as disruptive technologies. J. Educ. Technol. Soc. 16(2), 69–80 (2013)
18. Delello, J.A., McWhorter, R.R., Camp, K.M.: Integrating augmented reality in higher education: a multidisciplinary study of student perceptions. J. Educ. Multimed. Hypermedia 24(3), 209–233 (2015)


19. Smrikarov, A., Ivanova, G., Aliev, E.Y.: Vision for the classroom of the future (Future Education Space), pp. 115–121 (2019)
20. Sommerauer, P., Müller, O.: Augmented reality in informal learning environments: investigating short-term and long-term effects, pp. 1423–1430 (2018)
21. Sirakaya, M., Kilic Cakmak, E.: Effects of augmented reality on student achievement and self-efficacy in vocational education and training. Int. J. Res. Vocat. Educ. Train. 5(1), 1–18 (2018)
22. Allcoat, D., von Mühlenen, A.: Learning in virtual reality: effects on performance, emotion and engagement. Res. Learn. Technol. 26, 1–13 (2018)
23. Joan, D.: Enhancing education through mobile augmented reality. J. Educ. Technol. 11(4), 8–14 (2015)
24. Marks, B., Thomas, J.: Adoption of virtual reality technology in higher education: an evaluation of five teaching semesters in a purpose-designed laboratory. Educ. Inf. Technol. 27, 1287–1305 (2021)
25. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G., PRISMA Group: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann. Intern. Med. 151(4), 264–269 (2009)
26. Kitchenham, B., Brereton, O.P., Budgen, D., Turner, M., Bailey, J., Linkman, S.: Systematic literature reviews in software engineering: a systematic literature review. Inf. Softw. Technol. 51(1), 7–15 (2009)
27. Maguire, M., Delahunt, B.: Doing a thematic analysis: a practical, step-by-step guide for learning and teaching scholars. All Ireland J. High. Educ. 9(3), 3351–33514 (2017)

Comparative Analysis of Filter Impact on Brain Volume Computation

Prashasti Kanikar(B), Manoj Sankhe, and Deepak Patkar

MPSTME, NMIMS, Vile Parle (W), Mumbai, India
[email protected]

Abstract. The computation of brain volume is a challenging area for medical researchers. Brain volume is one of the most significant elements that may be taken into account in the diagnosis and monitoring of several brain-related diseases. This study analyzes the performance of different image filters for enhancing images and, in turn, contributing to more accurate computation of brain volume from MRI (Magnetic Resonance Imaging) sequences. Different binary, grayscale and ternary image filters are applied, and brain volumes are then computed. The experiment was conducted on MRI volumetric data obtained from the OASIS (Open Access Series of Imaging Studies) dataset, which is publicly available for research purposes. The results indicate that the Ternary Magnitude Squared image filter outperforms all other filters in terms of maximum voxel detection and gives a brain volume computation accuracy of 89.74%.

Keywords: Brain MRI analysis · Brain MRI enhancement · Brain MRI preprocessing

1 Introduction
Radiofrequency-based MRI technology examines the motion of hydrogen atoms in the brain without invading it and can be used to detect various brain illnesses. To identify and quantify changes in the brain, a variety of imaging techniques can be used, including MRI, computed tomography (CT), and perfusion imaging. MRI is non-invasive and gives quantitative and qualitative information about in vivo tissues. The Alzheimer Research Forum claims that this technique can be useful in detecting the onset of Alzheimer's disease. Syoji Kobashi [1] suggests an automatic technique supported by fuzzy granulation, which can smooth the noise peaks in the histogram and produce a reliable threshold. Yutaka Hata [2] suggested a technique that makes use of fuzzy information granulation and inference, giving thorough insight into the workings of the human brain. A technique for partial volume segmentation in magnetic resonance images has been put forth by Tao Song [3], who employs a modified probabilistic neural network. Smitha et al. used MRI to determine the size of brain tissue in their proposed technique [4].


For the purpose of choosing a course of treatment for stroke patients using MRI data, Stefan Bauer et al. developed a volumetric approach [5], which was assessed on a small image dataset. Hassan Kastavaneh [6] suggests brain extraction via isodata clustering, with outliers eliminated with the aid of histogram analysis; the procedure was validated using the region-of-interest approach. Bigler and Tate employed a tissue-segmentation-based method to calculate brain volume and concluded that people with combined dementia and neuropsychiatric illnesses had total brain volumes that were 22% less than their total intracranial volumes [7]. Trimmed minimal covariance determinant (TMCD), a reliable and fast technique for parameter estimation, was proposed by Jussi Tohka et al.; although it takes less time to complete, its results were comparable with those of leading algorithms such as expectation-maximization. The TMCD concept was proposed to enable reliable and precise estimation of partial volume model parameters [8]. Research is therefore ongoing, but there is still a substantial demand for automated brain volume estimation. This paper provides a brief assessment of relevant literature, a suggested strategy, findings, and potential future applications.

2 Literature Review

2.1 Representation of Magnetic Resonance Images

MR images are composed of pixels. A pixel is the smallest sample of a two-dimensional element in an image; its dimensions, specified along two axes in millimetres, determine the in-plane spatial resolution. The element defined in three dimensions is the volume element, known as a voxel. The calculation of the voxel volume is influenced by the slice thickness and the pixel spacing. The most widely used format for storing MR images is DICOM (Digital Imaging and Communications in Medicine); these files contain information about the patients and the MR scans. Typically, a brain scan records a sequence of 250–300 images.

2.2 Analysis of MR Images

ROI (region of interest) based and voxel-based methods can be used to analyze magnetic resonance images. Volumetric analysis and the visual rating approach are frequently employed in the ROI-based category. Visual rating techniques can be used to measure regional atrophy and other changes in the brain, whereas volumetric methods are frequently employed to measure the volume of certain temporal lobe structures. Currently, manually defining a region of interest is the gold-standard way to perform volumetric analysis. This procedure, however, takes a lot of effort and may have significant inter-rater variability [9]. Despite the development of numerous approaches for the automatic delineation of ROIs, advancement is still possible: automated methods can be designed for improved delineation and more accurate volume measurement.


2.3 Volume Estimation and Analysis

Monitoring disease development and detecting variations in brain size have both benefited from the volumetric examination of various brain areas. The volume of the hemispheres gradually decreases over time, which can be linked to ageing. Furthermore, regional segmentation of MRI scans is required to obtain the precise volumetric data required for different structures. Region boundaries are often poorly established because of how inaccurately the scans are performed. Extracting a region of interest from MRI data demands specialized knowledge and skill compared to the whole-brain approach; a certain level of expertise is required because no two brains are of the same size and shape. The approach for determining volume changes in distinct medial temporal lobe structures has been improved with the help of quantitative brain segmentation techniques, which enable comparisons of regional volume variations across various disease conditions. The effect of changing brain volume on cognitive levels has been extensively studied and published. Henrike Wolf and others discovered that in patients with MCI (mild cognitive impairment), the severity of cognitive impairment was marginally linked with the intracranial volume [10]. Ferguson et al. employed a technique based on the midsagittal slice to estimate intracranial volume from a single intracranial cross-sectional area; they discovered a strong correlation between the cross-sectional area and intracranial volume [11]. According to research by Eritaia et al., a recurrent problem in earlier investigations has been the lack of methods that can precisely identify non-pathological changes in brain volume; age, sex, and body size are only a few of the variables that contribute to these differences [12]. Mild cognitive impairment has been linked with volume losses in particular brain regions, according to Driscoll et al.; early detection of volume loss can aid in the early diagnosis of disease [13]. Only a small number of researchers have estimated and analyzed brain volume using publicly available software packages. Three well-known software packages, Statistical Parametric Mapping (SPM), FreeSurfer (FS), and the FMRIB Software Library (FSL), have been compared for their ability to automatically estimate brain volume by Saman Sargolzaei et al. Their study determines which technique is best for predicting intracranial volume using a priori information about the patient group; a paediatric template was then used to examine the data, and SPM12 showed less systematic bias within the adult control group [14]. According to research by Klasson et al., the effective total intracranial volume calculated from FS is biased by total brain volume [15]. Total intracranial volume, according to Ridgway et al., is frequently employed to assess inter-subject variability in the pre-morbid phase of brain volume. It can be estimated by both manual and automatic techniques, and recent advancements in SPM allow an unbiased and precise estimate to be obtained [16]. Nordenskjöld et al. compared the biases of commonly used digital tools: while SPM displayed bias in relation to gender and atrophy, FS displayed bias depending on skull size. The data showed that studies relating to brain volume can be impacted by the choice of intracranial volume estimation method [17].
The methods used to calculate the volume of the brain are currently not particularly precise since they lack anatomical specificity, classify tissues incorrectly, and require a lot of computing time. Hence, there is still scope for improvement.


3 Materials and Methods

3.1 Image Filters

A binary filter is used to transform an image into a representation of a given structure or object in a map. It can also enhance the outlines of the structure by making its edges wider or smaller. The concept of a binary filter is similar to that of a 3-by-3 filter: it considers the value of a central pixel as true or false depending on its position and number of neighbours, and a 9-digit number is then used to determine whether a false or a true value should be returned for the pixel. Researchers have also found that the performance of the ternary filter is significantly better than that of the binary filter, and that the filter's region of support can enhance the SNR [18]. Table 1 gives a brief description of the filters used for experimentation.

Table 1. Brief description of filters used [19]

Filter name | Brief description
Binary Magnitude Image Filter | Calculates the square root of the sum of squares of the respective input pixels
Binary Min Max Curvature Flow Image Filter | Removes noise from a binary image by using min/max curvature flow
Grayscale Connected Closing Image Filter | Enhances the pixels associated with a dark object that is surrounded by a brighter object
Grayscale Dilate Image Filter | Performs grayscale dilation of the input image
Grayscale Erode Image Filter | Performs grayscale erosion of the input image
Grayscale Fill Hole Image Filter | Removes the local minima that are not connected to the border of the image
Grayscale Geodesic Dilate Image Filter | Performs geodesic grayscale dilation on the input image
Grayscale Geodesic Erode Image Filter | Performs geodesic grayscale erosion on the input image
Grayscale Grind Peak Image Filter | Removes the local maxima that are not connected to the border of the image
Grayscale Morphological Closing Image Filter | Closes an image using grayscale morphology; the structuring element is composed of zero or one values
Grayscale Morphological Opening Image Filter | Opens an image using grayscale morphology; the structuring element is composed of binary (zero or one) values
Ternary Add Image Filter | Adds three images (pixel-wise)
Ternary Magnitude Image Filter | Calculates the pixel-wise magnitude from three images
Ternary Magnitude Squared Image Filter | Pixel-wise squared-magnitude addition of three images

3.2 Skull Stripping

In the processing of brain image sequences, "skull stripping" refers to the removal of non-brain voxels so that only the brain remains; it is a crucial stage in the analysis of brain images. A module called the Swiss Skull Stripper registers an atlas image to the patient data. The application of the brain mask is the initial stage; after that, precise brain extraction is accomplished using level-set techniques.

3.3 Source of Data

This study uses MRI volumetric data from the first scan of a total of 10 individuals from the OASIS Longitudinal dataset [20].

3.4 Proposed Approach

The experiment was carried out using the 3D Slicer tool [21], a powerful tool for performing calculations on multi-dimensional images quickly. The suggested method takes MRI volumetric data as input. This information is offered in the NIfTI file format and uses about 33 KB of memory for each patient. Figure 1 depicts the suggested technique. Various filters are used for preprocessing; Swiss skull stripping is then used to remove the skull (non-brain voxels). Finally, the image dimensions and the spacing between two successive imaging slices are used to calculate the brain volume.
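The pipeline can be prototyped in Python. The following is a minimal sketch, assuming the SimpleITK package (which wraps the ITK filters underlying 3D Slicer) exposes the ternary and Otsu filters under their usual names; the file name is hypothetical, and a plain Otsu threshold stands in for the Swiss Skull Stripper used in the actual experiment.

```python
import numpy as np
import SimpleITK as sitk

# Load one MRI volume (hypothetical NIfTI file name).
img = sitk.Cast(sitk.ReadImage("subject01_mpr-1.nii"), sitk.sitkFloat32)

# Step 1 - filtering. TernaryMagnitudeSquared computes, pixel-wise,
# x^2 + y^2 + z^2 from three inputs; passing the same image three times
# (an illustrative choice) simply yields 3 * I^2.
filtered = sitk.TernaryMagnitudeSquared(img, img, img)

# Step 2 - skull stripping. The paper uses 3D Slicer's Swiss Skull Stripper;
# an Otsu threshold (foreground = 1) is a crude stand-in here.
mask = sitk.OtsuThreshold(filtered, 0, 1)

# Step 3 - brain volume = brain-voxel count x voxel volume (spacing is in mm).
voxels = int(sitk.GetArrayFromImage(mask).sum())
voxel_volume_mm3 = float(np.prod(img.GetSpacing()))
print(voxels, "voxels ->", voxels * voxel_volume_mm3, "mm^3")
```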

4 Results

Table 2 shows the detailed results of brain volume computation using the different filters. First, filtering is applied to the input image sequence; a skull-stripping step is then performed to remove non-brain voxels, the remaining brain voxels are counted, and, considering the gap between the slices, the volume is calculated. A detailed analysis is presented in terms of the minimum, maximum, mean, and standard deviation of the volume computation. Finally, accuracy is calculated against the reference volume obtained from the OASIS dataset; for the example shown, the reference volume is 1678 mm3.


Fig. 1. Proposed model using different filters: MRI volumetric image sequence -> binary/ternary/grayscale filtering -> skull stripping -> brain volume computation.

Table 2. Detailed results of brain volume computation using different filters

Filter Used | Number of voxels | Volume [mm3] | Minimum Volume [mm3] | Maximum Volume [mm3] | Mean Volume [mm3] | Standard Deviation [mm3] | Accuracy %
Binary Magnitude IF | 912385 | 1140.48 | 12.7279 | 5675.24 | 1655.67 | 602.984 | 67.95488
Binary Min-Max Curvature Flow IF | 902829 | 1128.54 | 39.9124 | 4013 | 1185.05 | 419.137 | 67.24344
Grayscale Connected Closing IF | 908496 | 1135.62 | 114 | 4013 | 1175.12 | 419.907 | 67.6653
Grayscale Dilate IF | 942905 | 1178.63 | 181 | 4013 | 1495.06 | 351.326 | 70.22803
Grayscale Erode IF | 900260 | 1125.33 | 5 | 2045 | 815.726 | 424.382 | 67.05218
Grayscale Fill Hole IF | 910643 | 1138.3 | 114 | 4013 | 1174.57 | 419.983 | 67.82499
Grayscale Geodesic Dilate IF | 905864 | 1132.33 | 9 | 3738 | 1172.46 | 424.113 | 67.46927
Grayscale Geodesic Erode IF | 911985 | 1139.98 | 32 | 4013 | 1170.8 | 425.819 | 67.92509
Grayscale Grind Peak IF | 896129 | 1120.16 | 9 | 1793 | 1162.61 | 402.962 | 66.74413
Grayscale Morphological Closing IF | 930815 | 1163.52 | 182 | 4013 | 1283.13 | 368.124 | 69.32771
Grayscale Morphological Opening IF | 916672 | 1145.84 | 5 | 2905 | 1054.53 | 391.107 | 68.27426
Ternary Add IF | 914366 | 1142.96 | 27 | 12039 | 3506.29 | 1282.81 | 68.10265
Ternary Magnitude IF | 911898 | 1139.87 | 15.5885 | 6950.72 | 2027.56 | 737.704 | 67.91854
Ternary Magnitude Squared IF | 1204833 | 1506.04 | 27 | 4.83E+07 | 3.81E+06 | 3.04E+06 | 89.73658

Figure 2 shows how the use of different filters gives different results in terms of voxel count. All binary and grayscale image filters give voxel counts in the same range, while the Ternary Magnitude Squared Image filter gives the highest voxel count. Figure 3 compares the accuracies of brain volume computation after applying the different filters; it can clearly be seen that the Ternary Magnitude Squared Image filter gives the highest accuracy for brain volume computation.

Fig. 2. Number of voxels detected after different types of filtering.

Fig. 3. Comparison of accuracies of brain volume computation after applying different filters.


5 Conclusion

The results indicate that the Ternary Magnitude Squared Image filter outperforms all other filters, with the detection of 1204833 voxels and a brain volume computation accuracy of 89.74%. In the future, experimentation can be carried out with more filters to achieve higher accuracy.

References
1. Kobashi, S., Kamiura, N., Hata, Y., Ishikawa, M.: Automatic robust threshold finding aided by fuzzy information granulation (October 1997)
2. Hata, Y., Kobashi, S., Hirano, S., Kitagaki, H., Mori, E.: Automated segmentation of human brain MR images aided by fuzzy information granulation and fuzzy inference
3. Song, T., Jamshidi, M.M., Lee, R.R., Huang, M.: A modified probabilistic neural network for partial volume segmentation in brain MR image
4. Nair, S.S.K., Revathy, K.: Quantitative analysis of brain tissues from magnetic resonance images
5. Bauer, S., Gratz, P.P., Gralla, J., Reyes, M., Wiest, R.: Towards automatic MRI volumetry for treatment selection in acute ischemic stroke patients
6. Khastavaneh, H., Ebrahimpour-Komleh, H.: Brain extraction using isodata clustering algorithm aided by histogram analysis. https://doi.org/10.1109/KBEI.2015.7436154
7. Bigler, E.D., Tate, D.F.: Brain volume, intracranial volume, and dementia. Invest. Radiol. 36(9), 539–546 (2001). https://doi.org/10.1097/00004424-200109000-00006
8. Tohka, J., Zijdenbos, A., Evans, A.: Fast and robust parameter estimation for statistical partial volume models in brain MRI. NeuroImage 23(1), 84–97 (2004). https://doi.org/10.1016/j.neuroimage.2004.05.007
9. Ishii, K., et al.: Automatic volumetric measurement of segmented brain structures on magnetic resonance imaging. Radiat. Med. 24(6), 422–430 (2006). https://doi.org/10.1007/s11604-006-0048-8
10. Wolf, H., Julin, P., Gertz, H.-J., Winblad, B., Wahlund, L.-O.: Intracranial volume in mild cognitive impairment, Alzheimer's disease and vascular dementia: evidence for brain reserve? https://doi.org/10.1002/gps.1205
11. Ferguson, K.J., Wardlaw, J.M., Louise Edmond, C., Deary, I.J., MacLullich, A.M.J.: Intracranial area: a validated method for estimating intracranial volume. https://doi.org/10.1111/j.1552-6569.2005.tb00289.x
12. Eritaia, J., et al.: An optimized method for estimating intracranial volume from magnetic resonance images. Magn. Reson. Med. 44(6), 973–977 (2000). https://doi.org/10.1002/1522-2594(200012)44:6<973::AID-MRM21>3.0.CO;2-H
13. Driscoll, I., et al.: Longitudinal pattern of regional brain volume change differentiates normal aging from MCI. Neurology 72(22), 1906–1913 (2009). https://doi.org/10.1212/WNL.0b013e3181a82634
14. Sargolzaei, S., et al.: Estimating intracranial volume in brain research: an evaluation of methods. Neuroinformatics 13(4), 427–441 (2015). https://doi.org/10.1007/s12021-015-9266-5
15. Klasson, N., Olsson, E., Eckerström, C., Malmgren, H., Wallin, A.: Estimated intracranial volume from FreeSurfer is biased by total brain volume. Eur. Radiol. Exp. 2(1), 1–6 (2018). https://doi.org/10.1186/s41747-018-0055-4
16. Ridgway, G.R., Barnes, J., Pepple, T., Fox, N.: Estimation of total intracranial volume - a comparison of methods. https://discovery.ucl.ac.uk/id/eprint/1315736


17. Nordenskjöld, R., et al.: Intracranial volume estimated with commonly used methods could introduce bias in studies including brain volume measurement. https://doi.org/10.1016/j.neuroimage.2013.06.068
18. Downie, J.D.: Case study of binary and ternary synthetic discriminant function filters with similar in-class and out-of-class images. Opt. Eng. 32(3), 560 (1993). https://doi.org/10.1117/12.61203
19. Slicer Wiki contributors: CitingSlicer. Slicer Wiki. https://www.slicer.org/w/index.php?title=CitingSlicer&oldid=63090. Accessed 05 June 2022
20. Marcus, D.S., Fotenos, A.F., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J. Cogn. Neurosci. 22(12), 2677–2684 (2010). https://doi.org/10.1162/jocn.2009.21407
21. Fedorov, A., et al.: 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn. Reson. Imaging 30(9), 1323–1341 (2012). https://doi.org/10.1016/j.mri.2012.05.001

The Role of AI in Combating Fake News and Misinformation

Virendra Singh Nirban, Tanu Shukla, Partha Sarathi Purkayastha(B), Nachiket Kotalwar, and Labeeb Ahsan

Birla Institute of Technology and Science, Pilani, India
{nirban,F20200043}@pilani.bits-pilani.ac.in
https://www.bits-pilani.ac.in/pilani/nirban/profile

Abstract. The recent proliferation of social media has undoubtedly brought about many benefits, but along with it also came a serious impediment to society in the form of "fake news", which has become an eminent barrier to journalism, freedom of expression, and democracy as a whole. The study aims to understand the AI techniques currently used for detecting fake news, identify their shortcomings, and compare them with emerging models. We compared the performance of memory-based methods (LSTM and Bi-LSTM) with traditional methods, and also compared the changes in performance after applying ensemble learning approaches. The study aimed to identify suitable models for fake news detection, in the hope of eventually promoting a safe and healthy environment for sharing information and content online and, in the process, helping develop strategies and techniques to curb the spread of fake news on social media.

Keywords: Fake news · Machine learning · Ensemble learning · Artificial intelligence · Social media

1 Introduction

The term "fake news" has been around for years, but it has only recently gained traction in the mainstream media. There is no precise definition of the term, so we follow the one that has been widely adopted in recent studies: fake news is an article that is intentionally and verifiably false. As a result, we consider an article to be fake news if it contains false information that can be verified and was created with the dishonest intent of misleading consumers. Fake news also played a significant role during the COVID-19 pandemic. According to a study conducted by Rocha et al. (2021), people's health took a toll due to the fake news being shared about COVID-19. Social media was found to be one of the main sources of fake news and conspiracy theories, which caused people to be sceptical of the rules imposed by governments and of the advice of researchers and health professionals. This led to the worsening of the situation, which further resulted in panic, anxiety, depression, and other mental illnesses in people of all age groups. Fake news can even cost people their lives, as was the case in 2018, when at least 20 people were killed as a result of the circulation of a single piece of fake news on WhatsApp (Vij 2018). Hence, solutions are needed to combat the spread of fake news through digital channels. An emerging method to tackle this problem is AI-based classification. AI is a branch of computer science that aims to simulate human intelligence with the help of specialised hardware and software by writing, training, and implementing machine learning algorithms. Large amounts of labelled training data are required for AI models, which are then analysed using algorithms to form correlations between inputs and outputs and to make predictions about future states. In this fashion, we aim to train a model that can learn to identify and classify fake news by reviewing millions of examples.

2 Research Hypothesis and Objective

2.1 Research Hypothesis

Null Hypothesis (H0): Memory-based learning approaches do not improve the accuracy of detecting fake news propagated on social media.
Alternative Hypothesis (H1): Memory-based learning approaches improve the accuracy of detecting fake news propagated on social media.

2.2 Objective

The main objective of the study is to analyze the effectiveness of AI in classifying fake news and to identify the important aspects of the models used.

3 Methodology

3.1 Research Design

Content mining has been used as the research technique; it facilitates knowledge discovery from text by applying computational analysis to uncover patterns in large amounts of data. We used the WELFake dataset, formed by merging four popular news datasets (Kaggle, McIntire, Reuters, and BuzzFeed Political), which contains 72,134 news articles: 35,028 real and 37,106 fake. The larger dataset was designed to prevent overfitting of classifiers on a smaller dataset and to enable better training of the model; it was created by Verma et al. (2021) with the intent of being used for training further models. The training dataset has four features, as shown in Table 1. We focused mainly on the text features and implemented word-embedding techniques to best classify news. Also, in our study we changed the labels to 0 for reliable and 1 for unreliable news.

Table 1. Features of the dataset used

ID | Unique ID for a news article
Title | The title of a news article
Text | The text of the article (can be incomplete)
Label | Indicates reliability; 0 for unreliable, 1 for reliable

We carried out fake news detection using five basic AI models and analyzed their effectiveness in successfully identifying fake news. The models used are Naive Bayes, Neural Networks using Keras, Support Vector Machines (SVMs), and Long Short-Term Memory networks (LSTMs). We then used ensemble approaches, implementing Regression Forest models, a memory-based ensemble model, and a non-memory-based ensemble model, in an attempt to build efficient classifying models.

3.2 Measure Design

For the analysis of the models, we considered Accuracy, Precision, Recall, F1-score, and Specificity as the units of measurement. Measurement is done on a ratio scale, so the values can be compared in terms of their absolute magnitudes. Based on what proportion of the models' outcomes are True Positives, True Negatives, False Positives, and False Negatives, the measures are defined as follows:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

$$F\text{-}\mathrm{score} = \frac{2}{\frac{1}{\mathrm{Precision}} + \frac{1}{\mathrm{Recall}}}$$

$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$

True Positives (TP): the model labels a true outcome as true. True Negatives (TN): the model labels a false outcome as false. False Positives (FP): the model labels a false outcome as true. False Negatives (FN): the model labels a true outcome as false. To measure these parameters, we used the sklearn, keras, svm, and nltk libraries of the Python language on the Google Colab platform.
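As a quick illustration, these measures can be computed with scikit-learn (one of the libraries named above); the labels below are synthetic placeholders, and specificity, which has no dedicated scikit-learn scorer, is derived from the confusion matrix.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # placeholder ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # placeholder model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy   :", accuracy_score(y_true, y_pred))
print("Precision  :", precision_score(y_true, y_pred))
print("Recall     :", recall_score(y_true, y_pred))
print("F1-score   :", f1_score(y_true, y_pred))
print("Specificity:", tn / (tn + fp))  # derived: TN / (TN + FP)
```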

4 Algorithms

The basic workings of the algorithms used are shown below (Dasaradh, 2020):

4.1 Naive Bayes Classifier

The Naive Bayes classifier is a prediction model based on Bayes' theorem. It is called "naive" because the model assumes that all the features involved are mutually independent of one another. According to Bayes' theorem, the conditional probability of an event θ occurring, given that x has already occurred, is

$$P(\theta \mid x) = P(\theta)\,\frac{P(x \mid \theta)}{P(x)}$$

If we have n features, then applying the chain rule gives

$$P(\theta, x_1, \ldots, x_n) = P(x_1, \ldots, x_n, \theta) = P(x_1 \mid x_2, \ldots, x_n, \theta)\,P(x_2 \mid x_3, \ldots, x_n, \theta) \cdots P(x_n \mid \theta)\,P(\theta)$$

Thus the probability can now be measured as

$$P(\theta \mid x_1, \ldots, x_n) = \frac{P(\theta)\,\prod_{i=1}^{n} P(x_i \mid \theta)}{P(x_1)\,P(x_2) \cdots P(x_n)}$$

This formula forms the basis of the Gaussian Naive Bayes classifier, which we used to train the model on the basis of the occurrences of a word in an article.
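A minimal sketch of this setup, assuming a bag-of-words representation feeding scikit-learn's GaussianNB (the texts and labels are invented placeholders, not the WELFake data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import GaussianNB

texts = ["breaking shocking cure found", "officials confirm policy change"]
labels = [1, 0]  # 1 = unreliable, 0 = reliable (the study's label convention)

# Word-occurrence features; GaussianNB requires a dense array.
X = CountVectorizer().fit_transform(texts).toarray()
model = GaussianNB().fit(X, labels)
print(model.predict(X))
```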

4.2 Support Vector Machine (SVM)

Support vector machines (SVMs) are considered among the most powerful classification models available, offering a variety of kernel functions. An SVM model estimates a decision boundary, or hyperplane, based on the attributes used to classify data points; the dimensionality of the hyperplane depends entirely on the number of features involved. The optimal hyperplane is the one that separates the data points of the different classes with the maximum margin. Mathematically, the cost function for SVM models is

$$J(\theta) = \frac{1}{2}\sum_{j=1}^{n} \theta_j^{2}$$

subject to the constraints

$$\theta^{T} x^{(i)} \ge 1 \ \text{if}\ y^{(i)} = 1, \qquad \theta^{T} x^{(i)} \le -1 \ \text{if}\ y^{(i)} = 0.$$

This formulation uses a linear kernel. Kernels are used to build models when data points cannot be easily separated from one another or are multidimensional in nature. We used the Radial Basis Function (RBF) kernel to train our model here.
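A hedged sketch of an RBF-kernel SVM text classifier along these lines (TF-IDF features are our illustrative choice, and the data is invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

texts = ["miracle pill melts fat overnight", "parliament passes the annual budget"]
labels = [1, 0]

# Vectorize text and fit an SVM with the RBF kernel in one pipeline.
clf = make_pipeline(TfidfVectorizer(), SVC(kernel="rbf"))
clf.fit(texts, labels)
print(clf.predict(["scientists stunned by this one trick"]))
```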

4.3 Neural Networks

Neural networks are models built from simple units called neurons, arranged in an interconnected series of layers: an input layer, hidden layers, and an output layer. They involve two major processes: forward propagation, the process of passing data through the network, and the learning process, the way the data is used to train the model. In forward propagation, we assign each input x a weight w (representing the strength of the connection between neurons) and add a bias b. Mathematically, the expression for z is

$$z = x \cdot w + b$$

To introduce non-linearity, we pass z through an activation function, here the sigmoid function, which returns values in the range [0, 1]:

$$\hat{y} = \sigma(z) = \frac{1}{1 + e^{-z}}$$

Here σ denotes the sigmoid activation function, and the output obtained is known as the predicted value (ŷ). We used the gradient descent algorithm for optimization, which modifies the weights and bias in proportion to the negative of the gradient of the cost function with respect to those weights and bias. This is the working of a single neuron, but it extends to entire neural networks with some modifications at key steps. We used Keras for our implementation of neural networks.
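A minimal Keras sketch of such a network (the 5000-dimensional input and layer sizes are illustrative assumptions, not the study's configuration):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(5000,)),            # e.g. bag-of-words vectors
    keras.layers.Dense(64, activation="relu"),    # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # predicted value y-hat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(32, 5000)           # placeholder feature vectors
y = np.random.randint(0, 2, size=32)   # placeholder labels
model.fit(X, y, epochs=2, verbose=0)   # gradient-descent-based training
```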

4.4 Long Short Term Memory Networks (LSTMs)

If we have a string of words in a sentence, every word has some relationship with another, and this is very important when classifying articles. Traditional neural networks, however, cannot store memories of previous events to influence the later ones. Recurrent neural networks (RNNs), which are neural network models with loops, address this issue and retain memories of previous states effectively. Long Short Term Memory Networks (LSTMs) are a special type of RNN specifically designed to work on such long-term relationships between the words in a sentence.

The architecture of LSTMs is shown below in Fig. 1 (Socher, 2015):

$$i^{(t)} = \sigma(W^{(i)} x^{(t)} + U^{(i)} h^{(t-1)}) \quad \text{(Input gate)}$$

$$f^{(t)} = \sigma(W^{(f)} x^{(t)} + U^{(f)} h^{(t-1)}) \quad \text{(Forget gate)}$$

$$o^{(t)} = \sigma(W^{(o)} x^{(t)} + U^{(o)} h^{(t-1)}) \quad \text{(Output/Exposure gate)}$$

$$\tilde{c}^{(t)} = \tanh(W^{(c)} x^{(t)} + U^{(c)} h^{(t-1)}) \quad \text{(New memory cell)}$$

$$c^{(t)} = f^{(t)} \circ c^{(t-1)} + i^{(t)} \circ \tilde{c}^{(t)} \quad \text{(Final memory cell)}$$

$$h^{(t)} = o^{(t)} \circ \tanh(c^{(t)}) \quad \text{(Hidden state)}$$

Fig. 1. Architecture of LSTMs.

As shown in Fig. 1, the five stages involved are memory generation, the input gate, the forget gate, final memory generation, and the output gate. The model generates a new memory c̃(t) using the input word x(t) and the past hidden state h(t−1). The relevance of the new memory is checked against the past hidden state to produce the input and forget signals. The output gate finally retrieves the relevant information.

4.5 Bi-directional Long Short Term Memory Networks (Bi-LSTMs)

Bidirectional long short-term memory (Bi-LSTM) is a neural network capable of storing sequence information in both the forward (past to future) and backward (future to past) directions. It is similar to an LSTM model in most aspects, except that it has additional layers that enable the input to run in two directions, thus distinguishing it from a conventional LSTM (Verma, 2021).


Fig. 2. Architecture of a simple Bi-LSTM Model

The diagram gives an overall idea of the flow of information in the backward and forward directions. Bi-LSTMs are typically used for sequence-to-sequence tasks, and since detecting fake news falls within this category, Bi-LSTMs can be quite useful in this regard.
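A short Keras sketch of a Bi-LSTM classifier of this kind (vocabulary size, sequence length, and unit counts are illustrative assumptions):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(300,)),                    # token-id sequences
    keras.layers.Embedding(input_dim=20000, output_dim=128),
    keras.layers.Bidirectional(keras.layers.LSTM(64)),   # forward + backward LSTM
    keras.layers.Dense(1, activation="sigmoid"),         # reliable vs. unreliable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```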

4.6 Ensemble Learning Methods

It has been observed that sometimes one algorithm can accurately classify a piece of fake news while most other algorithms fail to do the same, because that algorithm focuses on aspects of the text that the others do not. So, if we aggregate these models, guided by some algorithm, we may arrive at a better model. Two major algorithms for this purpose are:

Bagging: a weak-learner method that trains each of the models in parallel and independently, and then averages the results to determine the final model prediction.

Boosting: also a homogeneous weak-learner method, but it differs from bagging in that it learns sequentially, adapting at each step to improve the model's predictions.

Stacking is a hybrid meta-algorithm that trains many models in parallel and combines them by training a meta-model to produce predictions based on the predictions of the individual constituent models. Such a deterministic aggregation is expected to build a model that produces more tangible outputs and satisfactory results with much higher accuracy. Blending is another ensemble technique that can help boost performance and accuracy. It employs the same approach as stacking but makes predictions only on a set separated from the training set, called the validation/holdout set; unlike stacking, predictions are produced directly on the holdout set, which is then used to build the meta-model (Great Learning Team, 2022). In the ensemble models we tested, we used blending as our mode of implementation, as sketched below.
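A rough sketch of blending under these definitions: base models are fit on the training split, and a meta-model is trained on their predictions over a held-out validation set. All data here is synthetic, and the choice of base and meta models is ours, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X = np.random.rand(400, 20)            # placeholder feature matrix
y = np.random.randint(0, 2, 400)       # placeholder labels
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.25)

base_models = [GaussianNB(), SVC(probability=True)]
for m in base_models:
    m.fit(X_train, y_train)            # base models learn on the train split

# Meta-features: each base model's probability of class 1 on the holdout set.
meta_X = np.column_stack([m.predict_proba(X_hold)[:, 1] for m in base_models])
meta_model = LogisticRegression().fit(meta_X, y_hold)
print(meta_model.predict(meta_X[:5]))
```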

Table 2. Comparison among performances of different models

Model | Accuracy | F1-Score | Recall | Precision | Specificity
Naive Bayes | 75.89% | 72.83% | 64.92% | 82.93% | 86.75%
Random Forest Regression | 84.37% | 83.29% | 82.96% | 83.62% | 85.61%
SVM | 90.05% | 89.74% | 87.44% | 92.16% | 92.63%
NN with Keras | 86.74% | 85.79% | 85.22% | 86.37% | 88.09%
LSTM | 90.15% | 89.30% | 86.07% | 92.78% | 93.87%
Bi-LSTM | 92.00% | 91.60% | 91.31% | 91.89% | 92.62%
Non-Memory-Based Ensemble Model | 86.39% | 85.31% | 82.46% | 88.37% | 90.00%
Memory-Based Ensemble Model | 87.30% | 86.45% | 86.72% | 86.17% | 87.80%

4.7 Random Forest Algorithm

Random forest algorithms are ensemble learning methods that combine various classifiers to reduce the weaknesses of the individual models and improve overall performance. They use the results of numerous decision trees built on various subsets of the given dataset, which are then averaged or voted upon to produce accurate results. A higher number of trees in the forest generally increases the accuracy and precision of the solution. The most important asset of random forests is that they maintain their level of accuracy even if the dataset has missing elements. They can be applied to a wide range of regression and prediction problems, since they take fewer parameters to produce outputs and also deal with complex, higher-dimensional datasets. The algorithm is primarily used for classification and for prediction of univariate and multivariate time series (Gaurkar et al., 2021). The steps to implement them are as follows (see the sketch after this list):

– For each sample set, a decision tree is built.
– Every decision tree's prediction result is obtained and stored.
– Every prediction receives a vote, and the most-voted prediction result is selected as the final result.
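An illustrative scikit-learn version of these steps (the data is synthetic and 100 trees is an arbitrary choice):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(200, 10)           # placeholder feature matrix
y = np.random.randint(0, 2, 200)      # placeholder labels

# 100 trees, each fit on a bootstrap sample; prediction is the majority vote.
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))
```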

5 Results and Discussion

5.1 Comparing Performances of Models

We have the following observations upon analyzing the final results shown in Table 2. The Naive Bayes algorithm is consistently the least effective in all aspects, showing that we cannot consider all words to be of equal importance and that weights must be assigned to words. SVMs performed quite well, almost as well as the memory-based models; they even outperformed Bi-LSTMs in terms of Precision and Specificity, but were outperformed by LSTMs in all aspects. This is supported by the results Aphiwongsophon et al. (2018) obtained using SVM and neural network algorithms to analyze data from Twitter, where their work showed a 99.90 percent accuracy rate in detecting fake news. It is also in accordance with the results obtained by Ahmad et al. (2020) and Vijayaraghavan et al. (2020), who found that traditional models are often outperformed by models using other forms of learning such as LSTMs, Bi-LSTMs, and ensemble learning methods. In the study where, along with CNNs and LSTMs, Sastrawan et al. (2021) also implemented bidirectional LSTMs combined with pre-trained word embeddings, it was observed that the bidirectional LSTM-RNN model was significantly more effective than unidirectional models, outperforming them on multiple datasets. In the study by Bahad et al. (2019), which also used bidirectional LSTMs to predict fake news articles, it was observed that the Bi-LSTM model had lower loss and higher accuracy than the unidirectional LSTM model, although the differences in their performance were minimal. All of these substantiate our findings.

Random Forest regressors, which act as ensemble models in themselves, performed almost as well as the neural network with Keras. LSTMs and Bi-LSTMs are the most effective at detecting fake news, since they add the aspect of memory to neural networks through recurrence. While Bi-LSTM outperformed LSTM in the majority of areas, it fell short in Precision and Specificity. The performance of the models did not significantly improve with the addition of the non-memory-based ensemble model, and it remained lower than the memory-based models, LSTM and Bi-LSTM, in all respects. A similar trend was observed for the memory-based ensemble model, where no significant improvement was observed either. However, in terms of Precision and Specificity, the non-memory-based ensemble model did manage to outperform the memory-based ensemble model.

Ahmad et al. (2020) proposed ensemble learning methods for identifying patterns in text that help differentiate fake articles from true news. Their dataset was obtained from the World Wide Web and contained news articles from all domains rather than just politics; they extracted different textual features from the texts and used them to train models. In the many datasets that were analyzed, ensemble learners always remained above the average of the individual learners in terms of accuracy. Similar results were seen in our case, except that the ensemble models were outperformed by the LSTM and Bi-LSTM models. This is always possible, since most loss functions (such as mean squared loss) are more sensitive to a single large deviation than to a group of moderate deviations; if the models being averaged are only slightly different, the variance decreases since the average takes care of the outliers. Thus, although SVMs performed quite well, the observations show that the deeper we go into neural networks and introduce more aspects like memory, the more accurate the models become. The further introduction of ensemble models was expected to improve the performance of both the memory-based and non-memory-based models, but in our case the effect was not prominent, and the models performed better on their own.

5.2 Current Methods

As of October 2021, according to Statista, a German company specializing in market and consumer data, Facebook had the largest number of active users online, which naturally means it is the hub of both the production and the propagation of fake news and misinformation pertaining to all sorts of topics. Facebook maintains an official blog where it describes the various AI methods used on its platforms (Facebook AI, 2021; Statista, 2021). As per the blog, although Facebook has employed multiple policies and products to solve various challenges and contain misinformation as much as possible, the increasing spread of fake news has brought to light a major technical challenge: image manipulation. A growing concern today is deepfakes, which use AI and deep-learning techniques to replace the face of a person with another in a video or other digital media, creating something very convincing that may lead to serious issues later on. However, this problem is not limited to people; an image of just about anything being shared widely online may have been manipulated to convey a message radically different from what was intended. SimSearchNet is a convolutional neural network model built through a multi-year collaboration by Facebook AI researchers, engineers, and many others across the company. Currently, Facebook has deployed SimSearchNet++, an advanced image-matching model that uses unsupervised learning to track picture changes with excellent precision and recall. It runs on images uploaded to Facebook and Instagram, and is resistant to crops, blurs, and screenshots, among other image manipulations. For images containing text, SimSearchNet++ can also group matches with high precision using optical character recognition (OCR) verification, ensuring no aspect of an image goes unchecked. Another challenge in detecting misinformation is that different fake news articles may carry the same information and motive but express them in very different ways: by rephrasing the articles, using different images, or changing the format from graphic to text. Facebook is currently implementing new AI systems that automatically detect new variants of content already discredited by independent fact-checkers; when the AI model detects such new variants, they are flagged and forwarded to its fact-checking partners for review. These advances have enabled Facebook to predict more matches and identify fake news faster, significantly inhibiting the spread of misinformation on the social media giant.

5.3 Future Directions of Study

A disadvantage of our study is that since our current models are largely based on supervised learning, they require large amounts of data from reliable sources, both for training and for testing, and these datasets should not be restricted to a particular domain such as politics. Such datasets are not readily available on the Internet and also cost a significant amount of time to compile and verify, so a switch to an unsupervised, or at least a semi-supervised, learning approach would be beneficial. Concluding based on the majority of votes in a poll (referred to as crowdsourcing) will not always be the best idea, since most people may be under a wrong impression and may also spread the same fake news widely. This can be overcome by validating conditions such as the presence of a verified account, a history of suspensions and bans, the frequency of posting, likes and dislikes on content, comments on posts, and the amount of time for which the user has been active on the platform. It has also been observed that models generally perform worse when analyzing news in local languages. However, in some cases, credibility ratings obtained through crowdsourcing have proven quite effective. Hence, for local scenarios, the training of the models can be aided by updating the data with labels from crowdsourcing to improve their accuracies (Pennycook and Rand, 2019).

6 Conclusion

To curb fake news and misinformation, we need an efficient method integrated within social media platforms to control the propagation of fake news. Flagging fake news has been greatly successful in controlling the spread of misinformation, and given the large scale of the problem, automated solutions such as AI-based classifiers are necessary to accomplish this task. From the results obtained, we observed that the Naive Bayes model performed the worst and SVMs performed the best among traditional methods. Memory-based models (LSTMs and Bi-LSTMs) performed the best overall and showed a significant improvement over traditional models. The application of ensemble methods did not show significant improvement in either case. Overall, the study supports our research hypothesis that memory-based models do in fact improve the accuracy of detecting fake news propagated on social media. Future areas of study include the application of unsupervised learning, which eliminates the need for labelled datasets and will allow models to adapt rapidly to new types of news articles. Additional improvements can also be gained by using multi-modal approaches, which take into account not only the text of a news article but also the graphics/images associated with it.

References
Ahmad, I., Yousaf, M., Yousaf, S., Ahmad, M.O.: Fake news detection using machine learning ensemble methods. Complexity 2020, 1–11 (2020)
Aphiwongsophon, S., Chongstitvatana, P.: Detecting fake news with machine learning method. In: 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON) (2018)
Bahad, P., Saxena, P., Kamal, R.: Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput. Sci. 165, 74–82 (2019)
Dasaradh, S.K.: A gentle introduction to math behind neural networks. https://towardsdatascience.com/introduction-to-math-behind-neural-networks-e8b60dbbdeba. Accessed 1 July 2022
Duffy, A., Tandoc, E., Ling, R.: Too good to be true, too good not to share: the social utility of fake news. Inf. Commun. Soc. 23(13), 1965–1979 (2019)
Facebook AI: Here's how we're using AI to help detect misinformation. https://ai.facebook.com/blog/heres-how-were-using-ai-to-help-detect-misinformation. Accessed 2 Dec 2021
Flostrand, A., Pitt, L., Kietzmann, J.: Fake news and brand management: a Delphi study of impact, vulnerability and mitigation. J. Product Brand Manage. 29(2), 246–254 (2019)
González, S., García, S., Del Ser, J., Rokach, L., Herrera, F.: A practical tutorial on bagging and boosting based ensembles for machine learning: algorithms, software tools, performance study, practical perspectives and opportunities. Inf. Fusion 64, 205–237 (2020)
Great Learning Team: Ensemble learning with stacking and blending. https://www.mygreatlearning.com/blog/ensemble-learning/. Accessed 1 July 2022
Li, D., Guo, H., Wang, Z., Zheng, Z.: Unsupervised fake news detection based on autoencoder. IEEE Access 9, 29356–29365 (2021)
Pennycook, G., Rand, D.: Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc. Natl. Acad. Sci. 116(7), 2521–2526 (2019)
Rocha, Y.M., de Moura, G.A., Desidério, G.A., de Oliveira, C.H., Lourenço, F.D., de Figueiredo Nicolete, L.D.: The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. J. Public Health 2021, 1–10 (2021). https://doi.org/10.1007/s10389-021-01658-z
Sastrawan, I., Bayupati, I., Arsa, D.: Detection of fake news using deep learning CNN-RNN based methods. ICT Express (2021)
Singh, S.P.: Understand stacked generalization (blending) in depth with code demonstration. https://iq.opengenus.org/stacked-generalization-blending/. Accessed 1 July 2022
Statista: Most used social media 2021. https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/. Accessed 13 Feb 2022
Tandoc, E., Lim, D., Ling, R.: Diffusion of disinformation: how social media users respond to fake news and why. Journalism 21(3), 381–398 (2019)
Verma, P., Agrawal, P., Amorim, I., Prodan, R.: WELFake: word embedding over linguistic features for fake news detection. IEEE Trans. Comput. Social Syst. 8(4), 881–893 (2021)
Vij, S.: A single WhatsApp rumour has killed 29 people in India and nobody cares. The Print. https://theprint.in/opinion/a-single-whatsapp-rumour-has-killed-29-people-in-india-and-nobody-cares/77634/. Accessed 12 Nov 2021
Vijayaraghavan, S., et al.: Fake news detection with different models (Version 1). arXiv (2020)

Rationalizing the TPACK Framework in Online Education: Perception of College Faculties Towards Aakash BYJU'S App in the 'New Normal'

Samuel S. Mitra1(B), Peter Arockiam A. SJ2, Milton Costa SJ3, Aparajita Hembrom4, and Payal Sharma5

1 Department of Commerce (Evening), Assistant Professor in Management, St. Xavier's College (Autonomous), Kolkata, India
[email protected]
2 Vice Principal of M.Com, B.Com (Evening) and BMS and Financial Administrator, St. Xavier's College (Autonomous), Kolkata, India
[email protected]
3 Former Assistant Professor in Commerce and Management, St. Xavier's University, Kolkata, India
4 Department of Commerce (Evening), Assistant Professor in Management, St. Xavier's College (Autonomous), Kolkata, India
[email protected]
5 Department of Commerce (Morning), Assistant Professor in Accounting and Finance, St. Xavier's College (Autonomous), Kolkata, India
[email protected]

Abstract. The past couple of years have witnessed an inexorable upsurge in internet activities, especially after the emergence of the COVID-19 pandemic. Keeping pace with the fast-track, mercurial changes in the aura of technology, a handful of electronic gadgets and head-turning mobile applications have also emerged, further propelling the ambit of technological development. Ever since the emergence of the COVID-19 pandemic, education has been largely supported via the online mode, and there has been large-scale acceptance of online learning apps. One of the most recently grown online learning apps is the latest version of BYJU'S, known as "Aakash BYJU'S". Of late, teachers, especially college faculties, have shown a huge penchant for the online classes delivered by Aakash BYJU'S. In this light, it is vital to throw light upon the perception of such college teachers towards Aakash BYJU'S online classes. The present research undertaking aims at probing into the attitudes and behaviour of such college teachers towards Aakash BYJU'S online classes through the application of the Technological Pedagogical and Content Knowledge (TPACK) model. For this purpose, a survey was conducted among 343 college faculties in selected districts of West Bengal and their responses were recorded. Structural Equation Modeling (SEM) has been used to unravel the model fits, and hypothesis testing was done at the ultimate stage for validation. The findings reveal a positive perception among the surveyed respondents towards the online classes of Aakash BYJU'S.


Keywords: Online classes · Aakash BYJU’S · TPACK model · Attitudes and behaviour · College faculties

1 Introduction

The skyrocketing rate of technological innovations and the bewildering proliferation of technology have made the modus operandi of society extremely easy. The spontaneously intensifying digital content, especially the usage of various apps by individuals, bears strong testimony to this fact. The current study is primarily based in West Bengal, during a global pandemic, when college teachers are relying on the online mode of education. One such activity, the usage of the Aakash BYJU'S app and its online classes in particular, has grabbed eyeballs of late and has witnessed a terrific surge in adoption and usage ever since its launch. With the head-turning growth and development in the landscape of technology, the advent of such an intriguing app has been seen as a massive turning point, paving the way for facilities like online preparation for classes. In fact, the essence of our research endeavour itself lies in this rudimentary idea. BYJU'S is one of the most widely used online learning apps among Indian college teachers. Furthermore, the emergence of Aakash BYJU'S and the upsurge in its liking and embracement amongst college faculties is a matter of scintillating deliberation needing immediate research. The present research undertaking studies the attitudinal revelations and behavioural trajectories of college faculties towards the Aakash BYJU'S app through the application of the TPACK framework. The study is relevant from the twin perspectives of probing into the perceptions of college faculties towards the Aakash BYJU'S app and of modifying the traditional TPACK framework by adding certain key variables of TAM, viz. "Attitude towards Usage" (ATU) and "Behavioural Intention" (BI), with an innovative blend added through the integration of "Exigency (COVID-19)", making it a novel research endeavour. Furthermore, taking into consideration the Aakash BYJU'S app and the perception of college faculties in the geographical territory of West Bengal adds novelty to the current research study.

2 Literature Review

The domain of technological applications and individual behaviour is not new, but measuring the perception of college teachers towards a new app like Aakash BYJU'S, especially in the context of the 'new normal', is a novel research endeavour. The present undertaking attempts to measure the attitudes and behaviour of college faculties towards Aakash BYJU'S, and hence it is relevant to discuss the "Technology Acceptance Model" (TAM). TAM is an extension of the "Theory of Reasoned Action" (TRA), which was a brainchild of Ajzen and Fishbein (1975). TAM comprises certain key variables. Davis (1989) was the first to coin the term "Perceived Usefulness", an individual's belief that using a particular system will enhance his/her job performance. Davis, in the same year, also defined "Perceived Ease of Use" as the degree to which a person believes that using the system will be effortless. Davis (1993) made an assertion about a concept called "Behavioural Intention", which he defined as a combination of "perceived usefulness" and "attitude towards usage". Attitude towards Usage (ATU) is a very relevant dimension of TAM, and Ajzen and Fishbein (2000) define ATU as an evaluative impact of emotions, both negative and positive, in people towards using a system. It is quite obvious that any unforeseen emergency would trigger "abnormal behaviour" among individuals. The attitudinal response and behavioural dynamics of college teachers during the current ongoing COVID-19 pandemic have also changed. In this light, it has been witnessed that college teachers have become more conscious of health and safety and hence prefer the online mode of teaching. Moreover, the added benefits of virtual classes, like no physical travelling, ease and convenience, etc., add to the proclivity of college teachers towards online teaching. This is impacting the perception of college faculties (Anderson and Kyzar, 2022). Mishra and Koehler (2006) first proposed the TPACK framework. Angeli and Valanides (2009) describe TPACK as a "situated multifaceted and integrated form of knowledge". Moreover, TPACK is used for describing and fathoming the knowledge needed by college teachers (Nilsson, 2022) for the successful integration of technology in the classroom (Archambault and Crippen, 2009). TPACK is made up of seven constructs (Luo and Zou, 2022): "Content Knowledge" (CK), "Pedagogical Knowledge" (PK), "Technological Knowledge" (TK), "Pedagogical Content Knowledge" (PCK), "Technological Content Knowledge" (TCK), "Technological Pedagogical Knowledge" (TPK), and "Technological and Pedagogical Content Knowledge" (TPACK). TPACK has been used in previous research to examine the perception of college teachers towards technology (Mailizar and Fan, 2020). Technology integration in classrooms is also affected by the acceptance of technology by college teachers (Santos and Castro, 2021).

3 Research Objective

To examine and analyze the attitudes and behaviour of college faculties towards Aakash BYJU'S amidst the COVID-19 pandemic.

4 Research Model and Hypothesis Formulation

The research model below is a novel one, proposed here for the first time. It consists of all key dimensions of TPACK and significant dependent variables of TAM, namely "Attitude towards Usage" (ATU) and "Behavioural Intention" (BI), blended with an additional dimension of Exigency (COVID-19).


Fig. 1. Research model (Source of Image: Author’s Own Conceptualization)

H1: "TK positively and significantly affects TCK."
H2: "TK positively and significantly affects TPK."
H3: "CK positively and significantly affects PCK."
H4: "CK positively and significantly affects TCK."
H5: "PK positively and significantly affects PCK."
H6: "PK positively and significantly affects TPK."
H7: "TCK positively and significantly affects TPACK."
H8: "TPK positively and significantly affects TPACK."
H9: "PCK positively and significantly affects TPACK."
H10: "TPACK positively and significantly affects ATU."
H11: "TPACK positively and significantly affects BI."
H12: "ATU positively and significantly affects BI."
H13: "Exigency (COVID-19) positively and significantly affects BI."

5 Research Methodology

The current research uses primary and secondary data. For secondary data, a host of research articles was acquired from EBSCO, BASE, and Google Scholar. Primary data comprise responses to a questionnaire administered to 400 college teachers. The survey was conducted in different districts, namely Kolkata, Burdwan, Birbhum, and Hooghly. The questions were mostly self-developed, and a few were adapted from existing scales. The questionnaire contained a total of 43 questions under 10 segments, namely "Content Knowledge" (CK), "Pedagogical Knowledge" (PK), "Technological Knowledge" (TK), "Pedagogical Content Knowledge" (PCK), "Technological Content Knowledge" (TCK), "Technological Pedagogical Knowledge" (TPK), "Technological and Pedagogical Content Knowledge" (TPACK), "Attitude towards Usage" (ATU), "Behavioural Intention" (BI), and "Exigency (COVID-19)". A five-point Likert scale (5 = Strongly Agree; 4 = Somewhat Agree; 3 = Neutral; 2 = Somewhat Disagree; 1 = Strongly Disagree) has been used to measure the concepts. A few responses were rejected as a result of incomplete and/or erroneous answers, leaving 343 valid responses. The data have been analyzed with SPSS-AMOS version 23.

5.1 Data Analysis and Presentation

5.1.1 Demographic Profiling

Table 1. Demographic statistics

Demographic Construct | Classification | Population Statistics | Percentage
Gender | Male | 196 | 0.57
Gender | Female | 147 | 0.43
Gender | TOTAL | 343 | 1.00
Age | 25–34 | 126 | 0.37
Age | 35–44 | 168 | 0.49
Age | 45–54 | 49 | 0.14
Age | TOTAL | 343 | 1.00

Source: Author's Own Computation

Table 1 shows the demographic statistics of the participants: male respondents exceed female respondents in the ratio 57:43, and most respondents belong to the younger age groups of 25–34 and 35–44 years.

5.1.2 Reliability Analysis

To test the internal consistency of the variables, Cronbach’s alpha has been computed for the items considered. Table 2 displays robust reliability results, as the Cronbach’s alpha values for all constructs exceed the threshold of 0.7, implying that all items are reliable measures of their constructs.
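A minimal sketch of the Cronbach’s alpha computation (which the authors performed in SPSS) is shown below; the item scores are simulated stand-ins for the real survey responses.

```python
# Hedged sketch: Cronbach's alpha for one construct's Likert items.
# The 343 x 6 matrix of TK item scores is simulated, not the real data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(343, 1))                       # latent tendency
tk_items = np.clip(base + rng.integers(-1, 2, (343, 6)), 1, 5)  # six TK items
print(f"Cronbach's alpha = {cronbach_alpha(tk_items):.3f} (threshold 0.70)")
```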

Table 2. Reliability analysis.

| Construct | Cronbach’s Alpha | Items | Corrected Item-Total Correlation | Cronbach’s Alpha When Item Removed |
|---|---|---|---|---|
| Total | 0.992 | 47 | – | – |
| Technological Knowledge | 0.986 | TK1–TK6 | 0.972, 0.948, 0.966, 0.963, 0.960, 0.971 | 0.975, 0.986, 0.980, 0.981, 0.977, 0.984 |
| Content Knowledge | 0.989 | CK1–CK4 | 0.964, 0.942, 0.975, 0.967 | 0.984, 0.990, 0.987, 0.985 |
| Pedagogical Knowledge | 0.978 | PK1–PK5 | 0.912, 0.974, 0.971, 0.974, 0.978 | 0.977, 0.948, 0.956, 0.952, 0.972 |
| Technological Content Knowledge | 0.978 | TCK1–TCK5 | 0.942, 0.973, 0.972, 0.975, 0.979 | 0.979, 0.968, 0.957, 0.970, 0.972 |
| Technological Pedagogical Knowledge | 0.978 | TPK1–TPK5 | 0.975, 0.973, 0.970, 0.968, 0.965 | 0.971, 0.964, 0.958, 0.957, 0.976 |
| Pedagogical Content Knowledge | 0.978 | PCK1–PCK4 | 0.973, 0.977, 0.970, 0.976 | 0.977, 0.969, 0.976, 0.968 |
| Technological and Pedagogical Content Knowledge | 0.982 | TPACK1–TPACK6 | 0.976, 0.968, 0.953, 0.974, 0.967, 0.949 | 0.972, 0.973, 0.981, 0.974, 0.976, 0.973 |
| Attitude towards Usage | 0.982 | ATU1–ATU6 | 0.956, 0.965, 0.933, 0.972, 0.960, 0.889 | 0.978, 0.973, 0.980, 0.975, 0.977, 0.972 |
| Behavioural Intention | 0.972 | BI1–BI4 | 0.966, 0.876, 0.955, 0.960 | 0.955, 0.972, 0.963, 0.960 |
| Exigency (COVID-19) | 0.989 | EXC1–EXC3 | 0.975, 0.974, 0.978 | 0.986, 0.985, 0.985 |

Source: Author’s own computation.

5.1.3 Convergent Validity Test

Convergent validity, which measures how closely the new scale relates to other measures of the same construct, has been estimated through the CFA factor loadings, the average variance extracted (AVE) and the composite reliability (CR). Table 3 reveals that the factor loadings of all items are above the ideal level of 0.7, while AVE and CR exceed their respective thresholds of 0.5 and 0.7.

5.1.4 Divergent Validity Test

The square root of AVE and the correlation coefficient matrix are used to test the divergent (discriminant) validity of constructs, i.e., whether constructs that theoretically should not be associated with one another are in fact unrelated. As per Fornell and Larcker (1981), “discriminant validity was obtained by comparing the shared variance between factors with the AVE from the individual factors”. Table 4 shows that the maximum shared variance (MSV) and average shared variance (ASV) between factors are lower than the AVE, and the square root of AVE is higher than the inter-construct correlations, hence satisfying the discriminant validity test.

5.1.5 Test for Structural Equation Modelling

SEM has been conducted to examine the associations between the 10 variables: TK, CK, PK, TCK, TPK, PCK, TPACK, ATU, BI and EXC. The comparison of the fit indices with their recommended values indicates a good model fit: ratio of chi-square to degrees of freedom (χ2/df) = 3.127, goodness-of-fit index (GFI) = 0.949, adjusted goodness-of-fit index (AGFI) = 0.934, relative fit index (RFI) = 0.967, comparative fit index (CFI) = 0.981, and root mean squared error of approximation (RMSEA) = 0.043 (Table 5). Table 6 presents the validation of all the hypotheses through the path analysis. All hypotheses have been validated and are substantiated by the positive direction of the relationships, as evidenced in their respective coefficients.
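The AVE and CR computations behind Tables 3 and 4 can be illustrated as follows, assuming standardized CFA factor loadings. The TK loadings below are taken from Table 3; the comparison with inter-construct correlations follows the Fornell-Larcker criterion described above.

```python
# Hedged sketch: convergent (AVE, CR) and discriminant (Fornell-Larcker)
# validity computations from standardized factor loadings.
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared loadings."""
    l = np.asarray(loadings)
    return float(np.mean(l ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    l = np.asarray(loadings)
    num = l.sum() ** 2
    return float(num / (num + np.sum(1 - l ** 2)))

tk = [0.942, 0.912, 0.963, 0.958, 0.969, 0.971]   # TK loadings from Table 3
print(f"AVE(TK) = {ave(tk):.3f}  (threshold 0.50)")
print(f"CR(TK)  = {composite_reliability(tk):.3f}  (threshold 0.70)")

# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed its
# correlations with all other constructs (compare against Table 4).
print(f"sqrt(AVE) of TK = {np.sqrt(ave(tk)):.3f}")
```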

6 Results of the Study

The study attempted to validate the TPACK framework with respect to its impact on the perception of college faculties towards the Aakash BYJU’S app, and analyzed the interplay of variables within the TPACK construct. The proposed hypotheses were supported, indicating that TPACK is a valid model for measuring the attitudes and behaviour of college faculties towards the Aakash BYJU’S app. TCK proved to be the strongest variable of TPACK, whilst TK proved to be the main predictor of TCK. This suggests that college faculties must have adequate knowledge about technology to propel their technological content knowledge. Furthermore, PK was found to be an important predictor of TPK and PCK, suggesting that the pedagogical knowledge of college faculties influences their pedagogical content knowledge. The attitude of college faculties towards technology significantly affects behavioural intentions to use the Aakash BYJU’S

Table 3. Convergent analysis.

| Construct | Items | Factor Loadings | AVE | CR |
|---|---|---|---|---|
| Technological Knowledge | TK1–TK6 | 0.942, 0.912, 0.963, 0.958, 0.969, 0.971 | 0.950 | 0.856 |
| Content Knowledge | CK1–CK4 | 0.976, 0.966, 0.972, 0.970 | 0.926 | 0.955 |
| Pedagogical Knowledge | PK1–PK5 | 0.975, 0.968, 0.966, 0.973, 0.975 | 0.953 | 0.852 |
| Technological Content Knowledge | TCK1–TCK5 | 0.972, 0.974, 0.976, 0.971, 0.977 | 0.956 | 0.854 |
| Technological Pedagogical Knowledge | TPK1–TPK5 | 0.974, 0.978, 0.975, 0.972, 0.973 | 0.962 | 0.932 |
| Pedagogical Content Knowledge | PCK1–PCK4 | 0.978, 0.971, 0.976, 0.977 | 0.972 | 0.968 |
| Technological and Pedagogical Content Knowledge | TPACK1–TPACK6 | 0.970, 0.976, 0.974, 0.972, 0.973, 0.977 | 0.986 | 0.987 |
| Attitude towards Usage | ATU1–ATU6 | 0.956, 0.965, 0.933, 0.972, 0.960, 0.889 | 0.978 | 0.986 |
| Behavioural Intention | BI1–BI4 | 0.966, 0.876, 0.955, 0.960 | 0.955 | 0.992 |
| Exigency (COVID-19) | EXC1–EXC3 | 0.980, 0.942, 0.978 | 0.947 | 0.986 |

Source: Author’s own computation.

Table 4. Divergent validity results. Inter-construct correlations; the diagonal entries correspond to the square root of AVE.

| Construct | TK | CK | PK | TCK | TPK | PCK | TPACK | ATU | BI | EXC |
|---|---|---|---|---|---|---|---|---|---|---|
| TK | 0.972 | | | | | | | | | |
| CK | 0.975 | 0.971 | | | | | | | | |
| PK | 0.976 | 0.973 | 0.972 | | | | | | | |
| TCK | 0.975 | 0.970 | 0.976 | 0.968 | | | | | | |
| TPK | 0.971 | 0.968 | 0.979 | 0.970 | 0.974 | | | | | |
| PCK | 0.977 | 0.972 | 0.980 | 0.981 | 0.975 | 0.979 | | | | |
| TPACK | 0.980 | 0.975 | 0.982 | 0.979 | 0.977 | 0.980 | 0.974 | | | |
| ATU | 0.981 | 0.955 | 0.976 | 0.975 | 0.980 | 0.985 | 0.986 | 0.987 | | |
| BI | 0.990 | 0.969 | 0.978 | 0.980 | 0.982 | 0.992 | 0.990 | 0.985 | 0.992 | |
| EXC | 0.975 | 0.956 | 0.960 | 0.965 | 0.972 | 0.970 | 0.968 | 0.962 | 0.969 | 0.967 |

Source: Author’s own computation.

Table 5. Model fit indices for the goodness-of-fit measures.

| Goodness-of-Fit Measure | Recommended Value | Actual Value of Model Fit Measure | Result |
|---|---|---|---|
| CMIN/DF | ≤ 3.00 | 3.127 | Good |
| GFI | ≥ 0.90 | 0.949 | Good |
| AGFI | ≥ 0.90 | 0.934 | Good |
| RFI | ≥ 0.90 | 0.967 | Good |
| CFI | ≥ 0.90 | 0.981 | Good |
| RMSEA | ≤ 0.05 | 0.043 | Good |

app. This highlights the significance of TPACK for technology integration in the education sector and implies that college teachers with higher TPACK will probably engage more in professional development via the online mode. The study also shows a positive impact of EXC on BI, implying that college teachers find it appropriate to engage in online lectures during a menacing global pandemic. The study validates that the TPACK framework is a robust predictor of college teachers’ adoption of technology.


Table 6. Hypothesis results.

| Hypothesis | Path | Coefficient | Direction | Result |
|---|---|---|---|---|
| H1 | TK → TCK | 0.682 | Positive | Supported |
| H2 | TK → TPK | 0.603 | Positive | Supported |
| H3 | CK → PCK | 0.657 | Positive | Supported |
| H4 | CK → TCK | 0.649 | Positive | Supported |
| H5 | PK → PCK | 0.611 | Positive | Supported |
| H6 | PK → TPK | 0.615 | Positive | Supported |
| H7 | TCK → TPACK | 0.675 | Positive | Supported |
| H8 | TPK → TPACK | 0.672 | Positive | Supported |
| H9 | PCK → TPACK | 0.678 | Positive | Supported |
| H10 | TPACK → ATU | 0.712 | Positive | Supported |
| H11 | TPACK → BI | 0.749 | Positive | Supported |
| H12 | ATU → BI | 0.612 | Positive | Supported |
| H13 | EXC → BI | 0.575 | Positive | Supported |

Source: Author’s own computation.

7 Conclusion and Implications

The present study reveals that the interplay amongst the dimensions of TPACK is significant and positive. Furthermore, TPACK has proved to be valid and positively associated with college teachers’ acceptance of the Aakash BYJU’S app, as manifested in the sudden escalation in the adoption and usage of the app. The pandemic has underlined the relevance of the online mode of education, which can be merged with the offline mode in the post-pandemic era, paving the way for a hybrid mode of education. Finally, while consumer behaviour and e-learning have largely been examined through TAM, the TPACK framework provides a greater avenue for further research investigation.

8 Limitations and Future Scope

The present study surveys only 343 participants, which, if not a weak sample size, is not a large one either, and it focuses on selected districts of West Bengal. A wide-scale study across the entire state, as well as other states of India, would provide more valuable insights; extending the study beyond West Bengal and increasing the sample size are therefore important next steps. Moreover, several other online applications, such as food, payment and shopping apps, can be considered in future studies by surveying participants from both within and outside the state.


References

Ajzen, I., Fishbein, M.: Understanding Attitudes and Predicting Social Behaviour. Prentice Hall, Englewood Cliffs, NJ (1980)
Ajzen, I., Fishbein, M.: Attitudes and the attitude-behavior relation: reasoned and automatic processes. Eur. Rev. Soc. Psychol. 11(1), 1–33 (2000)
Anderson, S.E., Kyzar, K.B.: Between school and home: TPACK-in-practice in elementary special education contexts. Interdisciplinary Journal of Practice, Theory and Applied Research 39(4) (2022)
Angeli, C., Valanides, N.: Epistemological and methodological issues for the conceptualization, development and assessment of ICT-TPCK: advances in technological pedagogical content knowledge (TPCK). Comput. Educ. 52(1), 154–168 (2009)
Archambault, L., Crippen, K.: Examining TPACK among K-12 online distance educators in the United States. Contemp. Issues Technol. Teacher Educ. 9(1), 71–88 (2009)
Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989)
Davis, F.D.: User acceptance of information technology: system characteristics, user perceptions and behavioural impacts. Int. J. Man Mach. Stud. 38(3), 475–487 (1993)
Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a comparison of two theoretical models. Manage. Sci. 35(8), 982–1003 (1989)
Fornell, C., Larcker, D.F.: Evaluating structural equation models with unobservable variables and measurement error: algebra and statistics. J. Mark. Res. 18(3), 382–388 (1981)
Luo, S., Zou, D.: A systematic review of research on technological, pedagogical and content knowledge (TPACK) for online teaching in humanities. Journal of Research on Technology in Education (2022)
Mailizar, M., Fan, L.: Indonesian teachers’ knowledge of ICT and the use of ICT in secondary mathematics teaching. Eurasia J. Math. Sci. Technol. Educ. 16(1), em1799 (2020)
Mishra, P., Koehler, M.J.: Technological pedagogical content knowledge: a new framework for teacher knowledge. Teach. Coll. Rec. 108(6), 1017–1054 (2006)
Nilsson, P.: From PCK to TPACK – supporting student teachers’ reflections and use of digital technologies in science teaching. Research in Science and Technological Education (2022)
Nunnaly, J.: Psychometric Theory. McGraw-Hill, New York, NY (1978)
Santos, J.M., Castro, R.D.R.: Technological pedagogical content knowledge (TPACK) in action: application of learning in the classroom by pre-service teachers (PST). Soc. Sci. Humanit. Open 3(1), 100110 (2021)
Vlachos, P.A., Vrechopoulos, A.P., Doukidis, G.: Exploring consumer attitudes towards mobile music services. Int. J. Media Manage. 5(2), 138–148 (2003)

Teacher’s Attitudes Towards Improving Inter-professional Education and Innovative Technology at a Higher Institution: A Cross-Sectional Analysis

Samuel-Soma M. Ajibade1, Cresencio Mejarito2, Dindo M. Chin3, Johnry P. Dayupay4, Nathaniel G. Gido5, Almighty C. Tabuena6, Sushovan Chaudhury7(B), and Mbiatke Anthony Bassey8

1 Department of Computer Engineering, Istanbul Ticaret Universitesi, Istanbul, Turkey

[email protected]

2 Athletics and Cultural Department, University of the Visayas, Cebu City, Philippines 3 Department of Mechanical Engineering, University of Visayas, Cebu City, Philippines 4 Cebu Technological University, Moalboal, Cebu, Philippines 5 Department of Curriculum, Research and Instructions, Madridejos Community College, Cebu,

Philippines 6 Philippine Normal University, Manila, Philippines 7 Department of CSE, University of Engineering and Management, Kolkata, India

[email protected] 8 Department of Business Administration, UTHM, Batu Pahat, Malaysia

Abstract. Adapting health professional curricula and training to evolving requirements and the exponential expansion in healthcare awareness and knowledge is vital. Interprofessional education is an example of such alignment. Teachers’ willingness to participate in interprofessional education is closely linked to their attitude towards it. The goal of this research is to investigate teacher attitudes toward interprofessional education (IPE) at Ekiti State College of Health and Technology (EKCHT), Ijero Ekiti, Nigeria. Cross-sectional research involving 85 teachers was used. To collect data, a five-point Likert scale with three subscales on IPE was administered to a stratified sample. A positive attitude was defined as a score above seventy-five percent. SPSS version 21 was used to analyze the bio-demographic data at a 96% confidence level, and the association with teacher attitudes was examined using logistic regression. More male teachers than female teachers took part in the survey. The total attitude score was positive (121.45 > 75%), but attitudes towards IPE in academic contexts were negative (30.82 < 75%). Teachers’ attitudes were not influenced by their age, gender, academic rank, or level of competence. Academics with positive opinions toward interprofessional education were more likely to have applied it at the college (P = 0.147). As a result, while teachers have a generally positive view of interprofessional education, they have a negative view of subscale 3, interprofessional education in academic contexts. Training in behavior change and IPE awareness for teachers is suggested to avoid negative attitudes.

Keywords: Inter-professional education · Innovative technology · Teacher’s attitude · Positive attitude · Negative attitude · Higher institution

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 713–724, 2023. https://doi.org/10.1007/978-3-031-27499-2_66


1 Introduction

Healthcare professionals work in an ever-changing environment in terms of demographics, epidemiology, socioeconomics, and technology [1]. In spite of the increasing development of medical information, the time it takes to establish a curriculum has stayed constant over the years [2]. Information technology plays a major role in improving the competitiveness of an organization [3]. It might not be necessary to customize curricula to the ever-changing nature of the health care system, but it would be beneficial to do so in an innovative way that takes into account the intricacies of this environment [3]. Inter-professional education (IPE) has been a major advancement [1]. IPE provides an opportunity for students across various professions to interact with each other in order to improve health results through effective teamwork. Many affluent countries, including the United States, Canada, Sweden, Britain, Norway, Poland, Belgium, and Malaysia, have been found to have adopted IPE, according to a comprehensive examination of the issue [4]. A small number of countries in the developing world are also experiencing this phenomenon, despite the fact that there is a lack of research on the subject [4]. As a result of studies in industrialized countries, there has been a call for training students in IPE [5, 6]. Several studies conducted in the UAE found that students there had a favorable outlook on IPE. IPE is not well-established in Africa [7]. It is important for teachers to demonstrate professional behavior for students while they are still learning, so that the benefits of IPE flow down to the students. Among IPE’s advantages, it encourages students to work together in teams, enhances interpersonal interactions, and helps break down professional boundaries. In spite of these advantages, IPE is not a course requirement, and some students believe that their professional programs have too tight schedules and deadlines to allow it. Learning structures and resources (time, materials, and money) in impoverished nations make adopting IPE more difficult. It is surprising that interprofessional education has not caught on in places like Nigeria despite the efforts of organizations like the World Health Organization and the Institute of Medicine. It has been difficult for professions to break out of their professional cocoons, since they continue to train uniprofessionally, and as a result of these stereotypes it is difficult for teams to function at their full potential. Attitudes must be taken into consideration when attempting to integrate IPE into courses, since they are a good indicator of acceptability [8]. This research aims to evaluate the attitudes of teachers and the sociodemographic factors impacting IPE attitudes in a Nigerian public health institution, through a structured questionnaire administered to the participants. The results will assist policy in establishing IPE content, approaches and training needs before the Ekiti State College of Health and Technology (EKCHT) begins an IPE program.

2 Experimental Methods

2.1 Study Design

Cross-sectional research was carried out to investigate the attitudes of teachers and the sociodemographic factors impacting IPE attitudes in a Nigerian public health institution.


2.2 Study Setting and Population

Nigeria, Africa’s most populous country, is situated in West Africa. It is bordered by Niger in the north, Chad in the northeast, Cameroon in the east, and Benin in the west. It has a population of about 216 million people and a land area of 923,769 square kilometers (356,669 square miles). Nigeria is made up of six geopolitical zones: North Central, North East, North West, South East, South South and South West. This study was carried out at Ekiti State College of Health and Technology (EKCHT), located in the city of Ijero Ekiti in Ekiti State, in the South West region of Nigeria. The EKCHT consists of 6 schools, from all of which data was collected: the School of Environmental Health Studies (SEHS), School of Community Health and Public Health (SCHPH), School of Diagnostics Sciences (SDS), School of Therapeutic and Intervention Sciences (STIS), School of Health Information and Computer Studies (SHICS), and School of Basic Medical Services (SBMS). The study was piloted from April 21 until April 26, 2021, while data was collected from the 6 schools between May 10 and July 8, 2021. The institution trains a variety of health care professionals who share infrastructure and clinical sites; however, the training syllabi lack structured IPE initiatives. All participants in the selected schools were eligible to be included in the study.

2.3 Variables

The independent variables are the bio-demographic characteristics of teachers, and the dependent variable is the attitude score.

2.4 Data Source and Management

Teachers were selected for the survey at random through cluster sampling. The teachers completed a questionnaire that inquired about their attitudes and the sociodemographic factors impacting IPE attitudes. The research used a 30-item, five-point attitude instrument comprising three subscales customized to the local context: the attitudes towards IPE scale, the attitudes towards health care teams (ATHCT) scale, and the attitudes towards IPE in academic settings scale. Data collection included demographic information such as gender, age and academic position. To collect the data, the researchers administered an electronic version of a structured questionnaire; the data collection team carried tablets on which the teachers completed the questionnaire. After downloading and storing the data in a database, the data was cleaned and analyzed. The sample size was computed using the Cochran equation at 101 participants [9]. A response rate of about 86% was obtained from the teachers of the six schools in EKCHT. Stratified and random sampling were employed to choose participants, with schools as the strata; the allocation of samples to schools was based on a weighted formula, and a probability sampling technique was chosen to avoid bias. After removing some teachers’ unreliable responses, the final sample consisted of 87 teachers.
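For reference, Cochran’s sample-size formula, with the finite population correction commonly applied for a small population such as a college’s teaching staff, is shown below. The exact inputs the authors used (confidence level, margin of error, assumed proportion and population size) are not stated, so the symbols here are generic:

\[ n_0 = \frac{Z^2\, p\,(1-p)}{e^2}, \qquad n = \frac{n_0}{1 + \frac{n_0 - 1}{N}} \]

where \(Z\) is the critical value for the chosen confidence level, \(p\) the assumed proportion (0.5 for maximum variability), \(e\) the margin of error, and \(N\) the population size.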


At the start of the research, invalid instruments were excluded from the data set. SPSS version 21.0 was used for descriptive statistics, and statistical significance was considered at a 96% confidence interval. The attitude score had a minimum of 30 points and a maximum of 150 points. Cutoff values were calculated with the shift from centrality bias in mind, and negative assertions were weighted differently during the analysis. Binary logistic regression was used to examine the association between a teacher’s age, gender, years of professional experience, years of teaching experience, academic position, school, and level of knowledge, and their attitude toward IPE. Support for IPE among teachers was examined using the Chi-square test of association with faculty members’ attitudes.

2.5 Ethical Consideration

The involvement of teachers in this survey was entirely voluntary, and due ethical approval was obtained from the National Board for Technical Education (NBTE). Furthermore, the ethical and scientific committee gave its approval to the study, and due permission was obtained from the management of EKCHT before data collection began. The participants completed the questionnaire confidentially, and their anonymity was preserved.
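A minimal sketch of the scoring and association analysis described in Sect. 2.4 is given below (the original work used SPSS 21). The cutoff is assumed to be 75% of the maximum attainable score, and the respondent data frame, predictor names and support variable are entirely hypothetical stand-ins:

```python
# Hedged sketch of the attitude scoring, logistic regression and chi-square
# association tests described above; all data here are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

N_ITEMS, MAX_POINT = 30, 5              # 30 items on a five-point scale
cutoff = 0.75 * N_ITEMS * MAX_POINT     # assumed: 75% of the 150-point maximum
print(f"Positive-attitude cutoff: {cutoff}")   # 112.5; e.g. 121.45 -> positive

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "attitude_score": rng.normal(121.45, 8.24, 85),  # simulated respondents
    "age_years":      rng.integers(25, 66, 85),
    "male":           rng.integers(0, 2, 85),
})
df["positive"] = (df["attitude_score"] > cutoff).astype(int)

# Binary logistic regression of attitude on bio-demographic predictors.
X = sm.add_constant(df[["age_years", "male"]].astype(float))
result = sm.Logit(df["positive"], X).fit(disp=0)
print(result.summary2())

# Chi-square test of association between supporting IPE and attitude.
support_ipe = rng.integers(0, 2, 85)    # hypothetical yes/no variable
chi2, p, dof, _ = chi2_contingency(pd.crosstab(support_ipe, df["positive"]))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```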

3 Results

3.1 Participants and Bio-demographic Features

Of the 101 people sampled, 87 participated in the study, and 85 completed questionnaires were used in the analysis, giving a response rate of 86%. Data was collected just as the country was relaxing some COVID-19 restrictions, which probably prevented a 100% response rate. More participants were male (49; 58.7%) than female (36; 41.3%). About 54.1% (46) of the participants were between the ages of 35 and 44, 57.6% (49) had professional experience of between 10 and 19 years, and 58.8% (50) had teaching experience of 1 to 10 years. Furthermore, the School of Community Health and Public Health contributed the most participants, about 44.7% (38), followed by the School of Environmental Health Studies with about 23.5% (20), the School of Basic Medical Services with 10.6% (9), the School of Health Information and Computer Studies with 8.2% (7), the School of Diagnostics Sciences with 7.1% (6), and finally the School of Therapeutic and Intervention Sciences with 5.9% (5). Regarding academic position, 31.8% (27) were Lecturer II, 24.7% (21) were Lecturer I, 14.1% (12) were Senior Lecturers, 12.9% (11) were Lecturer III, 10.5% (9) were Principal Lecturers, and 5.9% (5) were Chief Lecturers. Regarding the level of expertise with using IPE, 47 (55.3%) of the participants were beginners with little or no knowledge of IPE, 26 (30.6%) had


no knowledge of IPE at all, and then 12 (14.1%) were experienced. Table 1 summarizes this information. As pertaining the application of IPE at EKCHT, about 52 (61.2%) has never applied IPE; while 23 (27.1%) has made use of IPE and then about 10 (11.8%) had no idea what IPE was. In terms of supporting the use of IPE at EKCHT, about 91.8% (78) participants say they are in favor of IPE activities. Since schools share facilities, they reasoned that this would promote team-work activities, enhance teaching quality and also allow for better resource usage. 8.2% (7) participants said they would not support IPE because of the differences in school curricula, the time commitment, and the risk of losing their professional status as seen in Table 1. Table 1. Teachers and learner’s demographic profile Variables (n = 85) Gender Academic position

School

Age

Professional experience

Teaching experience

N (Percent) Male

49 (58.7)

Female

36 (41.3)

Lecturer III

11 (12.9)

Lecturer II

27 (31.8)

Lecturer I

21 (24.7)

Senior lecturer

12 (14.1)

Principal lecturer

9 (10.5)

Chief lecturer

5 (5.9)

SEHS

20 (23.5)

SCHPH

38 (44.7)

SDS

6 (7.1)

STIS

5 (5.9)

SHICS

7 (8.2)

SBMS

9 (10.6)

25–34 years

15 (17.6)

35–44 years

46 (54.1)

45–54 years

16 (18.8)

55–65 years

8(9.4)

0–9 years

17 (20)

10–19 years

49 (57.6)

20–29 years

14 (16.5)

30–39 years

5 (5.9)

1–10 years

50 (58.8) (continued)

718

S.-S. M. Ajibade et al. Table 1. (continued)

Variables (n = 85)

Expertise level

Application of IPE

Support IPE

N (Percent) 11–20 years

24 (28.8)

21–30 years

6 (7.1)

Not familiar

26 (30.6)

Beginners

47 (55.3)

Experienced

12 (14.1)

Yes

23 (27.1)

No

52 (61.2)

No idea

10 (11.8)

Yes

78 (91.8)

No

7 (8.2)

3.2 Teachers’ Attitude About IPE

The attitude score averaged 121.45 (SD 8.24, SE 2.15). The attitudes towards IPE scale scored 45.72 (> 75%), the attitudes towards health care teams (ATHCT) scale scored 44.91 (> 75%), and the attitudes towards IPE in academic settings scale scored 30.82 (< 75%), while the total attitude score exceeded the 75% cutoff. This shows that the teachers in this study had a positive overall view of IPE.

4 Discussion

In situations such as this, attitudes play a big role in the acceptance and adoption of new ideas such as IPE. A number of studies in Iraq, the United States, South Korea, and the United Arab Emirates have found positive views regarding IPE. Attitudes towards IPE in academic settings (subscale 3), however, showed a negative rating (30.82 < 75%). Salama (2018) reported decreased scores on this subscale, but not negative scores as found in the current research. This makes it clear that although teachers will indeed accept IPE in the classroom, they were unsure of its worthiness and adaptability in education contexts, especially when teaching students and working with teachers from other schools, highlighting the persistence of professional stereotypes and cocoons. Before using IPE in the classroom, efforts must be made to dispel these misperceptions. Since EKCHT does not have a formal IPE program, the comments of the teachers were based on what they had learned elsewhere. After implementing an IPE program, further study is suggested in order to compare attitude scores.

5 Limitation

This research was conducted in just one state and one college of health and technology. As a consequence, the conclusions cannot be extrapolated to colleges in other states in Nigeria, although the findings are useful and can be applied in related contexts. In addition, this investigation was carried out during the COVID-19 epidemic, which may have contributed to the researchers’ difficulty in collecting a complete sample. The response rate that was achieved, on the other hand, is adequate for a descriptive study [14].

6 Conclusion

IPE was generally well-received by teachers, although they were less enthusiastic about it in school contexts. Teacher attitudes regarding IPE were unaffected by factors such as gender, age, academic status, school, or years of professional or teaching experience. Teachers were more open to IPE when they had a higher degree of expertise or had already applied it at the college. Teachers should be educated and sensitized to IPE in order to change their behavior.


References

1. World Health Organization: WHO guideline: recommendations on digital interventions for health system strengthening. World Health Organization (2019)
2. Belasen, A.T.: Resilience in Healthcare Leadership: Practical Strategies and Self-Assessment Tools for Identifying Strengths and Weaknesses. Productivity Press, New York (2021)
3. Ajibade, S.-S.M., Mejarito, C., Egere, O.M., Adediran, A.O., Gido, N.G., Bassey, M.A.: An analysis of the impact of social media addiction on academic engagement of students. J. Pharm. Negat. Results 13(4), 1390–1398 (2022). https://doi.org/10.47750/pnr.2022.13.S04.166
4. Sunguya, B.F., Hinthong, W., Jimba, M., Yasuoka, J.: Interprofessional education for whom—challenges and lessons learned from its implementation in developed countries and their application to developing countries: a systematic review. PLoS ONE 9(5), e96724 (2014)
5. Homeyer, S., Hoffmann, W., Hingst, P., Oppermann, R.F., Dreier-Wolfgramm, A.: Effects of interprofessional education for medical and nursing students: enablers, barriers and expectations for optimizing future interprofessional collaboration – a qualitative study. BMC Nurs. 17(1), 1–10 (2018)
6. ESPAD Group: ESPAD report 2019: results from the European school survey project on alcohol and other drugs (2020)
7. Brime, B., et al.: Observatorio Español de las Drogas y las Adicciones. Informe 2021. Alcohol, Tabaco y Drogas Ilegales en España (2021)
8. Rafael, J., Anupol, J., Cajal, B., Gervilla, E.: Data mining techniques for drug use research. Addict. Behav. Rep. 8, 128–135 (2018)
9. Smit, K., Voogt, C., Otten, R., Kleinjan, M., Kuntsche, E.: Why adolescents engage in early alcohol use: a study of drinking motives. Exp. Clin. Psychopharmacol. 30(1), 73–81 (2022). https://doi.org/10.1037/pha0000383
10. Genrich, G., Zeller, C., Znoj, H.J.: Interactions of protective behavioral strategies and cannabis use motives: an online survey among past-month users. PLoS ONE 16(3), e0247387 (2021)
11. Gazibara, T.: What differs former, light and heavy smokers? Evidence from a post-conflict setting. Afr. Health Sci. 21(1), 112–122 (2021)
12. Cody, S.: Data Mining/Data Privacy and the Collection/Misuse of Our Private Data, No. 5823 (2021)
13. Wahab, L., Jiang, H.: A comparative study on machine learning based algorithms for prediction of motorcycle crash severity. PLoS ONE 14(4), e0214966 (2019)
14. Parekh, A., Bates, J., Amdur, R.: Response rate and nonresponse bias in oncology survey studies. Am. J. Clin. Oncol. 43(4), 229–230 (2020)
15. Ajibade, S.S.M., Oyebode, O.J., Mejarito, C.L., Gido, N.G., Dayupay, J., Diaz, R.D.: Feature selection for student prediction accuracy using gravitational search algorithm. J. Optoelectron. Laser 41(8) (2022)
16. Bangotra, D.K., Singh, Y., Kumar, N., Singh, P.K., Ojeniyi, A.: Energy-efficient and secure opportunistic routing protocol for WSN: performance analysis with nature-inspired algorithms and its application in biomedical applications. BioMed Res. Int. 2022, 1976694, 1–13 (2022). https://doi.org/10.1155/2022/1976694
17. Ajibade, S.S.M., Ahmad, N.B., Shamsuddin, S.M.: A data mining approach to predict academic performance of students using ensemble techniques. In: Abraham, A., Cherukuri, A., Melin, P., Gandhi, N. (eds.) ISDA 2018. AISC, vol. 940, pp. 749–760. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-16657-1_70

Augmented Analytics an Innovative Paradigm

Teresa Guarda1,2,3(B) and Isabel Lopes3,4,5

1 Universidad Estatal Península de Santa Elena, La Libertad, Ecuador [email protected]
2 CIST – Centro de Investigación en Sistemas y Telecomunicaciones, Universidad Estatal Península de Santa Elena, La Libertad, Ecuador
3 Algoritmi Centre, Minho University, Guimarães, Portugal
4 Instituto Politécnico de Bragança, Bragança, Portugal [email protected]
5 UNIAG, Instituto Politécnico de Bragança, Bragança, Portugal

Abstract. Business intelligence (BI) and analytics are a set of techniques, methodologies, and tools used in the analysis of business data, which allow users (decision makers) to have a clearer view of the market, leveraging the decision-making process and allowing timely business decisions. Usually BI refers to Extract, Transform and Load (ETL) processes, Data Warehouses (DW), Data Mining (DM), online analytical processing (OLAP), visualization tools, and reports. In turn, analytics generally uses advanced techniques, providing BI users with Artificial Intelligence (AI) and Machine Learning (ML) techniques. In this context, Gartner introduced the term “augmented analytics” in 2017, making the line between BI and advanced analytics clear. The main objective of this work is to explore the area of Augmented Analytics in the context of BI, through the use of ML and Natural Language Processing (NLP) resources and capabilities, as an innovative paradigm for the decision-making process.

Keywords: Business intelligence · Artificial Intelligence · Machine learning · Natural language processing · Augmented analytics

1 Introduction

In companies, the volume of data to be processed has grown exponentially, leading to increasing complexity in decision-making. In BI this is reflected in growing efforts to automate data preparation, analysis and visualization. These processes become ever more complex due to the number of variables required, the dimensionality of the data keeps increasing, and existing analytical approaches do not always yield the information needed for an accurate decision. It is in this context that Augmented Analytics arises, making available easy-to-understand and automated dashboards, as well as descriptive and predictive approaches. It is designed to do research and produce business information automatically, basically without any supervision required, so decision makers can use it directly without needing the support of a business analyst or data scientist.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 725–733, 2023. https://doi.org/10.1007/978-3-031-27499-2_67


New technologies are revolutionizing the analysis of large volumes of data. Due to new developments, data can be exchanged and extracted in a distributed environment, whether in a public or private cloud, and combined with data that the company stores in a local environment. Major advances in cybersecurity are also being made thanks to technologies such as blockchain. Meanwhile, AI and ML are creating hyper-automated environments for data processing and analysis. This disruptive trend, which uses AI, ML and NLP techniques to transform the way analytics content is developed, consumed and shared, is transforming the entire business universe in the field of data analysis, and the time is right for organizations to embrace a key component of the future of data and take a step forward in business intelligence. This work explores the area of Augmented Analytics in the context of BI as an innovative paradigm for the decision-making process.

2 Augmented Analytics

Augmented Analytics is a relatively new concept and one of the biggest technological trends for the next few years. Augmented Analytics can be understood as a segment of Augmented Intelligence that automates data processing using ML and NLP [1]. It is important to point out that Augmented Intelligence, considered the “mother” of Augmented Analytics, is not the same as Artificial Intelligence: the first exclusively supports human decisions, while the second takes the place of human intelligence for autonomous decisions. Gartner defined Augmented Intelligence as “a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making and new experiences” [2], and defined Augmented Analytics as “the use of enabling technologies such as machine learning (ML) and AI to assist with data preparation, insight generation and insight explanation to augment how people explore and analyze data in analytics and BI platforms” [3]. Augmented Analytics uses Artificial Intelligence, ML and NLP to help with data preparation, as well as to generate and explain insights that augment the way people explore and analyze data [4]. Data Analytics refers to the analysis of large volumes of data (Big Data) [5] and its transformation to obtain useful information and draw conclusions that facilitate decision making [6]. By analyzing large volumes of data, companies facilitate the growth of their business, improve their operational efficiency, manage risk better and reduce decision-making time [7]. Augmented Analytics differs from traditional analytics and Business Intelligence (BI) tools by including an Artificial Intelligence component that is constantly running in the background [8]. This AI component allows, as new data arrives, dynamic alerts on the unexpected behavior of a metric, for example. It also allows quick access to knowledge from the processing of large amounts of data, thus helping to discover


hidden insights, such as unknown unknowns, and to remove human bias, such as unknown knowns. According to Gartner, BI professionals who do not use Augmented Analytics today spend approximately 80% of their time collecting, preparing, and cleaning data [9]. In augmented analytics, ML and NLP algorithms are used in conjunction, automating BI and data preparation. As a result, it is possible to perform robust analyses, exploring numerous relevant pieces of information in a small fraction of the time normally used; only in this way can one appreciate how the technology can contribute to the daily lives of companies. Another aspect that sets it apart from traditional analytics is the ability to include natural language processing and conversational interfaces, enabling the human workforce to interact with data and insights and significantly optimizing productivity and decision making. By implementing augmented analytics, organizations can democratize the use of data so that all users and executives can make data-driven decisions without the help of data scientists or IT professionals [1, 10].
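To make the “always-on AI component” concrete, the sketch below flags unexpected behavior of a business metric with a simple rolling z-score monitor. This is purely an illustrative toy under assumed settings (the metric, window length and 3-sigma threshold are all assumptions), not the implementation of any commercial augmented-analytics platform, which would combine far richer ML and NLP techniques.

```python
# Toy "always-on" insight generator: alert when a daily metric deviates
# sharply from its trailing two-week behaviour (all values are simulated).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
sales = pd.Series(rng.normal(100, 5, 90),
                  index=pd.date_range("2022-01-01", periods=90, freq="D"))
sales.iloc[60] = 140  # inject an unexpected spike

rolling = sales.rolling(window=14)
# Shift the baseline by one day so a spike does not mask itself.
z = (sales - rolling.mean().shift(1)) / rolling.std().shift(1)

for day, score in z[z.abs() > 3].items():
    print(f"ALERT {day.date()}: metric deviated {score:+.1f} sigma "
          "from its trailing two-week pattern")
```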

3 Trends of Data Analytics

The specialized consulting company SDG Group presents in its reports the main trends that will mark the direction of the sector this year, the interoperability between this great diversity of technologies being remarkable. According to this consultancy, the top 10 trends in Data Analytics for 2022 are as follows [11]: a new generation of Data Warehouse; unlimited enterprise data management; a customer-centric paradigm; data as a transformational asset; the creation of trust environments based on cybersecurity analytics, blockchain and privacy-enhancing computation; the commitment to self-service 2.0 and the auto ML model; ethical data management; quantum computing and its convergence with advanced data analytics techniques; metaverse and extended reality; and automated creation of new content (Table 1).

Table 1. Trends in data analytics for 2022 (source: [11–20]).

| Trend | Description |
|---|---|
| New generation of data warehouse | Due to cloud computing, companies are adopting new data architectures and structures that are increasingly deeper and scalable. This year we will see a new generation of Data Warehouse due to the Data Mesh, Data Vault 2.0 and Data Fabric architectures, technologies conceived and created natively in the cloud |
| Unlimited enterprise data management | DataOps applies the teachings of DevOps to analyzing and managing data to create predictable results, in addition to managing changes to data, data models and related artifacts. It does so by leveraging technology to automate the delivery of data with the optimal level of security, quality and metadata to improve the usability and value of data in a dynamic environment. DataOps is now evolving to the next level thanks to artificial intelligence and machine learning creating hyper-automation environments |
| Customer-centric paradigm | A paradigm shift is taking place in which the customer is placed at the center of the shopping experience. The goal is for the customer to have a unique and homogeneous shopping experience wherever they are. As a result, barriers between digital and physical channels will disappear, thanks to hyperconnectivity (IoT, Cloud, 5G). Thus, with data analysis and intelligent process automation, companies offer customized products and services. This approach makes it easier to feed artificial intelligence models that are activated in real time |
| Data as a transformational asset | Data only has value if it becomes a monetizable and differentiating asset, understood here as the set of data, algorithms, practices and information available to a company. Companies that take advantage of the information that this data provides and extract value from it are the ones that differentiate themselves from their competitors |
| Creation of trust environments based on cybersecurity analytics, blockchain and privacy-enhancing computation | Companies are adopting cybersecurity strategies that provide protection beyond the traditional perimeter. This is a proactive, identity-based approach to cybersecurity that uses data collection and analysis capabilities (cybersecurity analytics) for faster threat detection, as well as the automation of manual security tasks. This cyber-secure environment is also based on blockchain technology |
| Commitment to self-service 2.0 and the auto ML model | These Data Analytics technologies accelerate the adoption of solutions, giving direct access to end users, democratizing access to data and focusing on generating insights. Self-Service 2.0 is integrating and leveraging the analytical capabilities of AI-driven models. Auto ML is using visuals and reports to present its advanced algorithms |
| Ethical data management | The disruption caused by quantum computing associated with AI forces us to improve the ethical management of data. After the advances in privacy that came hand in hand with the GDPR, it is now time to ensure ethical and responsible data development. In this line, the new concept of Private AI emerged. In the domain of public administrations or entities where data sharing is complex, encryption is being used to expose the data as little as possible |
| Quantum computing and its convergence with advanced data analytics techniques | Quantum artificial intelligence could be the next revolution. We are currently experiencing important parallels in the way quantum computing is developing and its convergence with advanced analysis techniques |
| Metaverse and extended reality | The metaverse is not just a buzzword in the technology sector; it is an ecosystem that will facilitate the exploration of extended reality. Within this reality we find all the immersive technologies that merge the real world with the virtual world: augmented, virtual and mixed reality. Extended reality is a set of technological resources that will offer users the possibility to immerse themselves in interactive experiences based on the combination of virtual and physical dimensions |
| Automated creation of new content | Generative AI is one of the most promising developments in the AI environment for years to come, where artificial intelligence is used to train algorithms to draw conclusions. Generative AI allows computers to automatically recognize underlying patterns related to inputted information in order to generate new original content |

4 Augmented Analytics Benefits

Each in its own way, all the data generated is of enormous importance to organizations. The increase in data means that customers are increasingly online, and this translates into more opportunities for businesses. Augmented Analytics offers advantages related, above all, to data accessibility for companies of all types and sizes. Any user of augmented analytics with a minimum of knowledge about how it works and what it contributes will be able to obtain relationships and valuable insights from the data stored by the company, unlike those who do not use it, who will have to turn to professionals specialized in data science with very technical profiles. It also allows dashboards to be used in an automatic and understandable way, as well as descriptive and predictive approaches with the same simplicity [1]. With augmented analytics, the analyses and predictions that are carried out will be impartial and precise. This type of analytics offers a wide variety of automations, thus streamlining any data collection, extraction and analysis process. According to Dataversity [21], implementing augmented analytics in business has the following advantages:

(1) It allows the data scientist and IT community to focus on special strategies and projects.
(2) Augmented data analysis transforms the work of data scientists and automates their algorithms.
(3) It promotes better solutions, better forecasts and measurable analysis of products and services.
(4) Less time is spent exploring data and more time acting on relevant insights.
(5) Advances in smart data discovery and other sophisticated techniques and solutions can have a positive impact on ROI and TCO.

Regarding the application and benefits of Augmented Analytics, we highlight the following (Fig. 1): analysis automation; exclusion of reports; change of priorities; assertiveness; better use of time; target audience analysis; and process improvement. A company with a BI team is continuously analyzing the data generated by the company. However, much of this data does not provide valuable information that can contribute to the income or return on investment of the company. In addition, the percentage of data that is analyzed is a minimal amount of all the data that companies generate daily. With augmented analytics, experts can encompass much more, offering more and better insights. All these advantages translate into lower-cost decision making; with these techniques, companies can have more reliable, more varied, timelier and more useful information for their business strategies.


Fig. 1. Application and benefits of augmented analytics.

5 Conclusions

Although Augmented Analytics may at first seem a complex concept, in reality it is simple to understand. Augmented Analytics is a segment of augmented intelligence that automates data insight through big data, ML and NLP capabilities to revolutionize the way analytical content is created, consumed and shared. Automation is a core feature of augmented analytics solutions, making analytical decision-making reports automatically available to companies without human intervention. Augmented Analytics analyzes large amounts of data, but separates it in an intelligent way, eliminating analysis steps. Its technological capacity makes it possible to identify hidden patterns in databases. By creating ready-to-share insights, both the data scientist’s life and the state of automation algorithms are made easier. Augmented Analytics facilitates the automation of analyses, redefines priorities, eliminates the need for reports, increases assertiveness and improves processes, enabling decision makers to work more efficiently, accurately and faster, while being accessible to users with less experience and fewer data skills. With the use of this technology, organizations can become more competitive, reduce costs and optimize results. For the marketing and relationship area it is a great differentiator, as it allows a much deeper understanding of potential customers, together with the agility and productivity essential in a market that is in full digital transformation. The global augmented analytics market was estimated at US$9.51 billion for 2022, representing a compound annual growth rate (CAGR) of 14.43% relative to 2021, and is projected to reach US$21.70 billion in 2026 (CAGR of 22.90%) [22].
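As a sanity check on these figures (assuming the projection spans the four years from 2022 to 2026), the stated CAGR is consistent with the two market sizes:

\[ 9.51 \times (1 + 0.2290)^{4} \approx 9.51 \times 2.281 \approx 21.70 \ \text{(US\$ billion)} \]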


It is time for organizations to embrace this key component of the future of data analytics and take a step forward in Business Intelligence.

Acknowledgements. The authors are grateful to UNIAG, an R&D unit funded by the FCT – Portuguese Foundation for the Development of Science and Technology, Ministry of Science, Technology and Higher Education. Project Code Reference: UIDB/04752/2020.

References

1. Ahmad, T., et al.: Artificial intelligence in sustainable energy industry: status quo, challenges and opportunities. J. Clean. Prod. 289, 1–31 (2021). https://doi.org/10.1016/j.jclepro.2021.125834
2. Andriole, S.J.: Artificial intelligence, machine learning, and augmented analytics [life in C-suite]. IT Prof. 21(6), 56–59 (2019). https://doi.org/10.1109/MITP.2019.2941668
3. Burk, S., Miner, G.: It’s All Analytics!: The Foundations of AI, Big Data, and Data Science Landscape for Professionals in Healthcare, Business, and Government, 1st edn. Productivity Press (2020). https://doi.org/10.4324/9780429343988
4. Darvazeh, S.S., Vanani, I.R., Musolu, F.M.: Big data analytics and its applications in supply chain management. In: New Trends in the Use of Artificial Intelligence for the Industry 4.0, pp. 175–200 (2020). https://doi.org/10.5772/intechopen.89426
5. Fjäder, C.: Emerging and disruptive technologies and security: considering trade-offs between new opportunities and emerging risks. In: Adlakha-Hutcheon, G., Masys, A. (eds.) Disruption, Ideation and Innovation for Defence and Security. Advanced Sciences and Technologies for Security Applications, pp. 51–75. Springer, Cham. https://doi.org/10.1007/978-3-031-06636-8_4
6. Gartner: Gartner Glossary: Augmented Analytics (2017). https://www.gartner.com/en/information-technology/glossary/augmented-analytics
7. Gartner: Gartner Glossary: Augmented Intelligence (2017). https://www.gartner.com/en/information-technology/glossary/augmented-intelligence
8. Kumar, S., Singh, M.: Big data analytics for healthcare industry: impact, applications, and tools. Big Data Mining Anal. 2(1), 48–57 (2018). https://doi.org/10.26599/BDMA.2018.9020031
9. Li, Y., Wang, Z., Xie, Y., Ding, B., Zeng, K., Zhang, C.: AutoML: from methodology to application. In: 30th ACM International Conference on Information & Knowledge Management, pp. 4853–4856. ACM (2021). https://doi.org/10.1145/3459637.3482025
10. Mallow, G.M., Hornung, A., Barajas, J.N., Rudisill, S.S., An, H.S., Samartzis, D.: Quantum computing: the future of big data and artificial intelligence in spine. Spine Surg. Relat. Res. 6(2), 93–98 (2022). https://doi.org/10.22603/ssrr.2021-0251
11. Nguyen, D.C., et al.: Federated learning meets blockchain in edge computing: opportunities and challenges. IEEE Internet Things J. 8(16), 12806–12825 (2021)
12. Patel, K.: What is Augmented Analytics and Why Does it Matter? Dataversity (2017). https://www.dataversity.net/augmented-analytics-matter/
13. Prat, N.: Augmented analytics. Bus. Inf. Syst. Eng. 61(3), 375–380 (2019). https://doi.org/10.1007/s12599-019-00589-0


14. Priebe, T.N.: Finding your way through the jungle of big data architectures. In: IEEE International Conference on Big Data (Big Data), pp. 5994–5996 (2021)
15. Rana, N.P., Chatterjee, S., Dwivedi, Y.K., Akter, S.: Understanding dark side of artificial intelligence (AI) integrated business analytics: assessing firm’s operational inefficiency and competitiveness. Eur. J. Inf. Syst. 31(3), 364–387 (2022). https://doi.org/10.1080/0960085X.2021.1955628
16. SDG Group: Data & Analytics Trends 2022 (2022). https://4041825.fs1.hubspotusercontent-na1.net/hubfs/4041825/2022%20Data%20&%20Analytics%20Trends%20-%20SDG%20Group-1.pdf
17. Shave, L.: Modernising the foundations of records and information systems: big data. IQ: RIMPA Q. Mag. 38(2), 28–30 (2022). https://doi.org/10.3316/informit.445303712536040
18. Sheela, S.C., ArulDoss, S.P., Infanta, S.J., Ilakkiya, U.S.: Digitalization in India: trends and challenges. J. Positive School Psychol. 5852–5860 (2022). https://www.journalppw.com/index.php/jpsp/article/download/4361/2907
19. Thomas, T., Vijayaraghavan, A.P., Emmanuel, S.: Machine Learning Approaches in Cyber Security Analytics. Springer, Singapore (2020)
20. Vashisth, S., Linden, A., Hare, J., Krensky, P.: Hype Cycle for Data Science and Machine Learning, 2019. Gartner Research (2019). https://datanomers.com/whitepapers/Gartner%20Hype%20Cycle%20For%20Data%20Science%20And%20Machine%20Learning.pdf
21. Xi, N., Chen, J., Gama, F., Riar, M., Hamari, J.: The challenges of entering the metaverse: an experiment on the effect of extended reality on workload. Inf. Syst. Front. 1–22 (2022). https://doi.org/10.1007/s10796-022-10244-x
22. Research and Markets: Augmented Analytics Global Market Report 2022. Research and Markets (2022)

A Deep Learning Approach to Monitoring Workers’ Stress at Office

Fátima Rodrigues1,2(B) and Jacqueline Marchetti1

1 Institute of Engineering - Polytechnic of Porto (ISEP/IPP), Porto, Portugal {mfc,1180115}@isep.ipp.pt
2 Interdisciplinary Studies Research Center, Porto, Portugal http://www.isep.ipp.pt

Abstract. Identifying stress in people is not a trivial or straightforward task, as several factors are involved in detecting the presence or absence of stress. The problem of detecting stress has attracted much attention in the last decade and is mainly addressed with physiological signals, in a controlled ambience, with specific tasks. However, the widespread use of video cameras has permitted the creation of new, non-invasive data collection techniques. The goal of this work is to provide an alternative way to detect stress in the workplace without the need for specific laboratory conditions. For that, a stress detection model based on images analysed with deep learning neural networks was developed. The trained model achieved an F1 of 79.9% on a binary stress/non-stress dataset with an imbalance ratio of 0.49. This model can be used in a non-invasive application to detect stress and provide recommendations to collaborators in the workplace in order to help them control their stress condition.

Keywords: Stress detection · Deep learning · Convolutional neural networks · VGG · ResNet

1 Introduction

Stress is often described as a complex psychological, physiological, and behavioral state triggered by the perception of a significant imbalance between the demands placed on a person and their perceived ability to meet those demands [1]. Stress at work has become a serious problem affecting many people of different professions, life situations, and age groups, and it continually contributes to illness, either directly, through physiological effects such as cardiovascular disease and anxiety, or indirectly, through bad health behaviors such as lack of sleep. Studies also suggest that stress is a factor in between 50% and 60% of all lost working days [2]. However, people sometimes lack the self-awareness to realize that they are under an episode of stress. There is evidence that stressful conditions are preventable and treatable in the workplace, and workers who receive treatment are more likely to be productive [3]. Hence, non-intrusive stress sensing tools that continuously monitor stress levels, with minimal impact on workers’ daily lives, could be used to initiate stress-reduction interventions automatically.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 734–743, 2023. https://doi.org/10.1007/978-3-031-27499-2_68


Most of the previous studies on stress detection are based on data collected through laboratory tests, where conditions are created to artificially stimulate stress/non-stress in the study subjects. This work differs from the previous ones because its objective is to detect stress in people at the workplace, in a non-invasive way, using images captured during their daily tasks. Since the advocated solution for stress detection largely depends on image analysis, deep learning, specifically convolutional neural networks, is the most suitable technique and will be explored. The remaining parts of this paper are organized as follows. Section 2 briefly reviews existing stress detection works. Section 3 presents the deep learning methodology adopted to develop the stress detection models. Section 4 describes the dataset of images used, the preprocessing tasks performed, and the tuning of the deep learning networks by adjusting their hyperparameters. Section 5 presents the evaluation of the models obtained with the different networks and compares them with traditional machine learning models. The last section provides the main conclusions.

2 Related Work

The studies in the literature on stress detection can be divided into machine learning approaches based on feature sets designed to extract discriminatory stress characteristics from physiological signals and, more recently, approaches in which researchers use facial information instead of physiological signals, or a combination of the two; both have shown the feasibility of detecting stress from various sources by using machine learning and deep learning techniques. In the study [4] the authors propose a stress detection algorithm based on a deep learning approach that fuses physiological signals and the sequence of facial features extracted from facial images. For the physiological signals they use the electrocardiogram (ECG), to monitor changes in heart activity in the autonomic nervous system (ANS) due to stress, and the RESP, a respiration waveform used to detect breathing pattern changes due to stress. The facial features were used for monitoring the emotional responses due to stress. They designed a protocol that simulates stressful situations and recruited 24 subjects for the experiments. They obtained an average accuracy of 73.3%, an AUC of 0.822, and an F1 score of 0.700 in two-level stress classification. Facial cues are a significant and widely studied factor that can define a person's stress. Giannakakis et al. [1] developed a framework to detect and analyze emotional states of stress/anxiety through video-recorded facial cues. The characteristics investigated were mouth activity, eye-related events, camera-based photoplethysmography for heart rate estimation, and head action parameters. Methods such as the Generalized Likelihood Ratio, the Naive Bayes classifier, Support Vector Machines, K-nearest neighbors, and the AdaBoost classifier were tested. The best accuracy achieved was 97.4% in discriminating among three stress states.


Zhang et al. [5] developed a Two-Level Stress Detection Network (TSDNet), which first learns face-level and action-level representations separately and then merges the results via a flow-weighted integrator for the identification of stress. Compared to the facial-cues-based or the action-units-based approach, the results demonstrated the feasibility and advantage of the TSDNet classifier, which reached an accuracy of 85.42%. Sabour et al. [6] focus on a multimodal analysis of stress and its physiological responses. In this study, the authors propose a new dataset, UBFC-Phys, collected with and without contact from participants living through social stress situations. A wristband was used to measure contact blood volume pulse (BVP) and electrodermal activity (EDA) signals. Video recordings allowed the computation of remote pulse signals, using remote photoplethysmography (RPPG), and of facial expression features. Pulse rate variability (PRV) was extracted from the BVP and RPPG signals. The dataset permits evaluating the possibility of using video-based physiological measures compared to more conventional contact-based modalities. The goal of the article was to present both the dataset, which the authors make publicly available, and experimental results of contact and non-contact data comparison, as well as stress recognition. They obtained a stress state recognition accuracy of 85.48%, achieved by remote PRV features. Bobade et al. [7] propose different machine learning and deep learning techniques for stress detection in individuals using multimodal datasets. The accuracies for three-class (fun versus baseline versus stress) and binary (stress versus non-stress) classifications were evaluated and compared. The techniques used were K-Nearest Neighbor, Linear Discriminant Analysis, Random Forest, Decision Tree, AdaBoost, and Kernel Support Vector Machine. During the study, using machine learning techniques, accuracies of up to 81.65% and 93.20% were achieved for the three-class and binary classification problems, respectively. When using deep learning, the accuracy was up to 84.32% and 95.21%, respectively. In this study, we propose a stress detection model based on a deep learning approach that only uses face images collected from video. Our main contribution is the validation of stress detection in a non-invasive way using only images of faces. The developed model can be used at any office workplace where a camera is available. Consequently, the deep learning approach is expected to aid in reducing work-related stress in employees, improve their mental health, and reduce socioeconomic costs.

3 Methodology

Deep Learning has enhanced the state-of-the-art results in several domains, including computer vision [8], speech recognition [9], and reinforcement learning [10]. A deep learning model can be defined as a mapping function that maps raw input data (e.g., images) to the desired output (e.g., classification labels) by optimizing a specific metric (e.g., maximizing classification accuracy), typically using an iterative approach such as stochastic gradient descent (SGD) [11].
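In standard notation (a generic formulation, not taken verbatim from this paper), the model is a parametric mapping f_θ whose parameters θ minimize the average loss L over the N training pairs, with SGD updating them from a mini-batch B at learning rate η:

```latex
\theta^{*} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N}
  \mathcal{L}\bigl(f_{\theta}(x_i),\, y_i\bigr),
\qquad
\theta_{t+1} = \theta_{t} - \eta\, \nabla_{\theta}
  \frac{1}{|B|} \sum_{i \in B} \mathcal{L}\bigl(f_{\theta_t}(x_i),\, y_i\bigr)
```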


The mapping process takes place in particular transformation layers, called hidden layers, which receive weighted input, transform it using simple but nonlinear activation functions (e.g., ReLU, sigmoid), and then pass these values to the next hidden layer, and so on, deeper and deeper into the neural network (hence the name deep neural network), until the last layer of the model (the output layer) produces the results. Convolutional Neural Networks (CNNs), mostly used in computer vision applications, have special types of layers such as convolutional, pooling, and fully connected layers. These layers are organized in different structures that define the network architecture and, on the other hand, each of the layers can have a distinct set of parameters (e.g., activation function, size). The number of layers, their configuration, the learning rate and the other parameters that drive the overall training process are known as hyperparameters. Optimizing the hyperparameters is a critical success factor in training the model. In contrast with the traditional machine learning approach of feature engineering followed by model learning, deep learning is an end-to-end learning approach that does not require manual feature extraction [11]. Increasingly abstract representations of the raw data are produced through the several layers of the network, aiming at the best possible representation of the input data. This requires fine-tuning of the CNNs, and thus a considerable number of labelled images and a considerable amount of time before the networks can be tuned and ultimately applied to new scenarios. It requires less domain expertise and experience from the modeler, but adjusting and tuning the network structure and hyperparameters are crucial when developing new models, as can be seen in Fig. 1, which shows a typical deep learning modelling life-cycle.

Fig. 1. Deep learning modeling life-cycle. Adapted from [12]

Deep learning models aim at optimizing a specific metric (e.g., minimizing a loss function). Therefore, a typical deep learning modelling life-cycle goes through a large number of cycles before reaching a satisfactory result. It includes experimenting with different architectures, fine-tuning the hyperparameters and trying different software libraries. Each iteration of the life-cycle (an experiment) results in a large set of artifacts: the values of the hyperparameters, the model's architecture and the respective weights. These artifacts represent knowledge about the training experiments that can be used to analyse, explore, and derive insights (e.g., which hyperparameters work best with a dataset).

4 Materials and Methods

4.1 Dataset

This study uses the UBFC-Phys dataset [6], a public multimodal dataset with 56 participants in an experiment carried out in three tasks: a rest task, a speech task and an arithmetic task with different difficulty levels. During the experiment, participants were filmed with an RGB digital camera, with Motion JPEG compression and a rate of 35 frames per second. The dataset contains the video recordings of the participants and the BVP and EDA signals measured during the three tasks. In addition, their anxiety score was calculated before and after the experimental sessions. Since the goal of our work is to detect stress in a non-invasive way, only the face images of the participants will be used to train the deep neural networks. The video images are organized into 56 folders, one per participant. Each folder holds three videos, one for each task. Only three minutes of each video were stored, both to equalize the duration of the three tasks and to reduce the size of the dataset. In this study, each video frame was converted into an image, creating a dataset with 1,038,769 images: 340,374 (32.8%) images classified as stress and 698,395 (67.2%) images classified as non-stress.
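As an illustration of the frame-conversion step, the following sketch walks a folder tree like the one described above and turns every video frame into a JPEG. It assumes OpenCV, and the folder and file names are hypothetical, since the paper does not list them:

```python
import cv2
from pathlib import Path

def video_to_frames(video_path: Path, out_dir: Path) -> int:
    """Write every frame of one task video to disk as a JPEG image."""
    out_dir.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the video reached
            break
        cv2.imwrite(str(out_dir / f"{video_path.stem}_{count:06d}.jpg"), frame)
        count += 1
    cap.release()
    return count

# Hypothetical layout: one folder per participant, one video per task.
total = sum(
    video_to_frames(video, Path("frames") / video.parent.name)
    for video in Path("ubfc_phys").glob("s*/vid_*.avi")
)
print(f"{total} images extracted")
```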

4.2 Pre-processing

The video-based stress detection task comprises detecting a user's affective state (stressed or non-stressed) based on face images. To locate the face region in each frame of the video, the Haar Cascade technique [13] was adopted, and the obtained facial images were then checked manually. In order to manage the imbalance ratio of the original dataset, the images were randomly subsampled into a new set of 20,000 images, which represents only 1.93% of the total images available. The new set of 20,000 images was divided into 55% non-stress and 45% stress images, and a specific data augmentation layer was configured for each deep learning architecture. Normalization is also crucial because it helps the algorithms converge more quickly. The pixels of the images were therefore normalized to values between -1 and 1 or between 0 and 1, according to the network used. In the experimental phase, the holdout method was used to split the data into three disjoint sets: training (70%), validation (10%) and testing (20%). The validation set is used to obtain an estimate of the model's performance and to fine-tune the values of the network's hyperparameters. The test data is used at a later stage to assess the model's final predictive ability.
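A sketch of the face-cropping and normalization steps described above, assuming OpenCV's bundled frontal-face Haar cascade; the detection thresholds and target size are illustrative, and the manual verification step is omitted:

```python
import cv2
import numpy as np

# OpenCV ships a pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(image: np.ndarray, size: int = 224):
    """Return the largest detected face, resized, or None if no face is found."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest box
    return cv2.resize(image[y:y + h, x:x + w], (size, size))

def normalize(face: np.ndarray) -> np.ndarray:
    """Scale pixels to [-1, 1]; networks expecting [0, 1] would divide by 255."""
    return face.astype(np.float32) / 127.5 - 1.0
```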

4.3 Experiments

Pre-trained networks were used to abstract away the complexity of designing a convolutional neural network from scratch and to speed up the development of the models. The pre-trained networks used were VGG-16, VGG-19, Resnet101, and Resnet50, with binary cross-entropy as the loss function and Adam as the optimizer. The final layers of the networks were customized to try to obtain better performance. The network parameterization process was iterative: several tests were carried out and, if the results improved, the changes were kept, otherwise they were discarded. Several characteristics of the networks were tested, such as modifying or removing layers from the network, adding layers at the end of the network (dropout, max-pooling, flatten, output), and changing the number of layers to train. After obtaining the best architecture for each of the networks, the values of the learning rate, batch size and number of epochs were determined with the training dataset. The best parametrization configurations attained for the VGG-16, VGG-19, Resnet101 and Resnet50 networks are presented in Table 1.

Table 1. Summary of network architecture configurations.

                VGG-16   VGG-19   ResNet101   ResNet50
Learning Rate   0.001    0.0001   0.01        0.001
Batch Size      16       16       16          128
Epochs          20       20       60          20
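A minimal sketch of the transfer-learning setup with the best configuration from Table 1 (ResNet50, Adam, binary cross-entropy), assuming TensorFlow/Keras; the custom head shown is illustrative, since the paper does not list its exact layers:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained backbone without its original classifier layers.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional layers

# Illustrative custom head for the binary stress/non-stress task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Table 1 value
    loss="binary_crossentropy",
    metrics=["accuracy"])

# history = model.fit(train_ds, validation_data=val_ds, epochs=20)
# (the batch size of 128 would be set when building train_ds)
```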

The learning rate hyperparameter controls the rate or speed at which the model learns. It determines the amount of apportioned error with which the model weights are updated each time, such as at the end of each batch of training examples. To evaluate the impact of the learning rate on the model results, the experiments considered learning rates of 0.1, 0.01, 0.001, and 0.0001. Resnet101 achieved its best result with a learning rate of 0.01, while VGG-16, VGG-19, and ResNet50 achieved their best results with lower learning rates of 0.001, 0.0001, and 0.001, respectively. The batch size is one of the most critical hyperparameters to tune. To evaluate its impact on the model results, the experiments considered batch sizes of 16, 32, 64, 128, and 256. On one side, using a batch equal to the entire dataset guarantees convergence to the global optimum of the objective function, but empirical convergence to that optimum gets slower. On the other hand, smaller batch sizes have been empirically shown to converge faster to "good" solutions. Most of the networks (VGG-16, VGG-19, and ResNet101) obtained better results with the smallest batch size of 16, but the downside of a smaller batch size is that the model is not guaranteed to converge to the global optimum; it will bounce around it, and how close it stays will depend on the ratio of the batch size to the dataset size. The Resnet50, in turn, performed best with an intermediate batch size of 128, which better regulates the bouncing of the loss curve around the global optimum.


The number of epochs is a hyperparameter that defines the number of times the learning algorithm will work through the entire training dataset. One epoch means that each sample in the training dataset has had an opportunity to update the internal model parameters. To evaluate the impact of the number of epochs on the model results, 20, 40, 60, 80, and 100 epochs were considered. When the number of epochs used to train a neural network is higher than necessary, the model learns "noise", i.e. irrelevant information within the dataset; it becomes "overfitted" and cannot generalize well to new data. The configuration of epochs is therefore a trade-off. On the one hand, a low number of epochs restricts the adjustment of the weights, which makes training faster but may increase the loss and decrease the accuracy. On the other hand, a higher number of epochs increases the total training time of the model, as it increases the number of iterations and weight adjustments, which improve the loss and the accuracy. To mitigate overfitting and increase the networks' generalization capacity, the models were trained for an optimal number of epochs: 20, 20, 60, and 20 for VGG-16, VGG-19, Resnet101, and Resnet50, respectively. To summarize, the best model was Resnet50, which presented reliable learning curves showing neither overfitting nor underfitting, as can be seen in Fig. 2 d).

Fig. 2. Final training/validation accuracy curves
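Curves such as those in Fig. 2 can be produced directly from the history object that Keras returns from training; a minimal sketch, assuming matplotlib:

```python
import matplotlib.pyplot as plt

def plot_learning_curves(history, title: str) -> None:
    """Plot training vs. validation accuracy per epoch, as in Fig. 2."""
    plt.plot(history.history["accuracy"], label="training")
    plt.plot(history.history["val_accuracy"], label="validation")
    plt.xlabel("epoch")
    plt.ylabel("accuracy")
    plt.title(title)
    plt.legend()
    plt.show()
```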

5 Evaluation

Although accuracy is a good baseline, it does not tell the whole story, especially with class-imbalanced datasets such as the one used in this work. Precision and recall, despite exploring the data more deeply, are two sides of the same coin: precision focuses on how many of the model's stress predictions are correct, while recall focuses on how many of the actual stress entries were identified. Consequently, the F1 score was selected as the metric of choice, given its good balance between precision and recall while accounting for the class imbalance present in the data.

Table 2. Result metrics of all networks.

            VGG-16   VGG-19   ResNet101   ResNet50
Accuracy    0.562    0.580    0.644       0.668
Precision   0.663    0.671    0.679       0.674
Recall      0.709    0.737    0.892       0.981
F1          0.685    0.703    0.771       0.799
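The four scores in Table 2 are standard binary-classification metrics; a sketch of computing them with scikit-learn, under the assumption that stress is encoded as the positive class (label 1):

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

def evaluate(y_true, y_pred) -> dict:
    """Compute the metrics reported in Table 2 (stress = positive class)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }

# Toy example with dummy labels and predictions:
print(evaluate([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))
```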

The results obtained for the F1 score in Table 2 show a good balance between precision and recall, considering the imbalance of the data classes. Finally, Table 2 shows that the Resnet50 model has a very high recall, which means that the classifier predicted almost all positive (stress) cases as stress.

5.1 Comparison with Related Work

Although the reference study involves three different types of signals (face videos, BVP and EDA), its authors developed separate and combined models for each type of signal, using traditional machine learning algorithms. In this work only the images were used, since the objective is to detect stress in a non-invasive way, so only the performance of the models developed with the images will be compared. In the reference article the machine learning classifiers used were: Support Vector Machine (SVM) with a linear kernel, SVM with a Radial Basis Function (RBF) kernel, Logistic Regression (Log Reg), and K-Nearest Neighbors (KNN). Table 3 presents the algorithms used in the reference article, their results in detecting stress, and the results obtained in this work. The reference study only reports accuracy; however, due to the imbalance in the original dataset, our study considers the F1 score a more appropriate metric than accuracy to assess the results. As shown in Table 3, our best model was the Resnet50, with an accuracy of 66.83% and an F1 score of 79.90%.

Table 3. Accuracy of machine learning models and deep learning models.

Type               Classifier            Accuracy (%)   F1 score (%)
Machine Learning   SVM - linear kernel   54.42          –
                   SVM - RBF kernel      53.29          –
                   Log Reg               54.68          –
                   KNN                   55.07          –
Deep Learning      VGG-16                56.20          68.51
                   VGG-19                58.05          70.26
                   Resnet101             64.43          77.13
                   Resnet50              66.83          79.90

As a result, our Resnet50 model outperformed the best machine learning classifier (KNN) of the reference article, with an accuracy of 66.83% versus 55.07%. Moreover, we can affirm that, although the "video-based PRV modality has proven to be reliable in recognizing the stress state", deep learning models are also reliable in recognizing stress using only face-based features.

6 Conclusions

Work-related stress causes serious negative physiological and socioeconomic effects on employees. Detecting stress levels in a timely manner is important for appropriate stress management. This article presents a new video-based stress detection model using only part of the images of the UBFC-Phys dataset. Several pre-trained convolutional neural networks were reviewed to identify the best model for learning and generalization. The main challenge in creating a stress detector was to formulate a model capable of learning how to detect stress reliably, and many challenges were faced when deciding on the architecture and its parametrization to achieve valuable results. Experimental results show that Resnet50 outperformed the other solutions, with the best accuracy and F1 score. As future work, we plan to use a more extensive subset, or the full original dataset. Adding new physiological signals that can be captured in a non-invasive way will also be a goal, in order to improve stress detection. Once a reliable and stable stress detection model is available, the next step will be to integrate it into a recommender system. The recommender system, with the help of the stress detection model and according to the employee's profile, will propose appropriate exercises to relieve stress.


References

1. Giannakakis, G., et al.: Stress and anxiety detection using facial cues from videos. Biomed. Signal Process. Control 31, 89–101 (2017)
2. Mental health at work - OSH WIKI networking knowledge. https://oshwiki.eu/wiki/Mental_health_at_work. Accessed 18 Oct 2022
3. Almeida, J., Rodrigues, F.: Facial expression recognition system for stress detection with deep learning. In: ICEIS, vol. 1, pp. 256–263 (2021)
4. Seo, W., Kim, N., Park, C., Park, S.M.: Deep learning approach for detecting work-related stress using multimodal signals. IEEE Sens. J. 22, 11892–11902 (2022)
5. Zhang, H., Feng, L., Li, N., Jin, Z., Cao, L.: Video-based stress detection through deep learning. Sensors 20(19), 5552 (2020)
6. Sabour, R.M., Benezeth, Y., De Oliveira, P., Chappe, J., Yang, F.: UBFC-Phys: a multimodal database for psychophysiological studies of social stress. IEEE Trans. Affect. Comput. (2021)
7. Bobade, P., Vani, M.: Stress detection with machine learning and deep learning using multimodal physiological data. In: Second International Conference on Inventive Research in Computing Applications, pp. 51–57. IEEE Press (2020)
8. Hassaballah, M., Awad, A.I. (eds.): Deep Learning in Computer Vision: Principles and Applications. CRC Press (2020)
9. Kamath, U., Liu, J., Whitaker, J.: Deep Learning for NLP and Speech Recognition. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-14596-5
10. Chen, L.: Deep Learning and Practice with MindSpore. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-2233-5
11. Gharibi, G., Walunj, V., Rella, S., Lee, Y.: ModelKB: towards automated management of the modeling lifecycle in deep learning. In: 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE), pp. 28–34. IEEE (2019)
12. Miao, H., Li, A., Davis, L.S., Deshpande, A.: ModelHub: deep learning lifecycle management. In: IEEE 33rd International Conference on Data Engineering (ICDE), pp. 1393–1394. IEEE (2017)
13. Zhang, K., Zhang, Z., Li, Z., Qiao, Y.: Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. 23, 1499–1503 (2016)

Implementing a Data Integration Infrastructure for Healthcare Data – A Case Study

Bruno Oliveira1, Miguel Mira2, Stephanie Monteiro2, Luís B. Elvas2(B), Luís Brás Rosário3, and João C. Ferreira2

1 CIICESI/ESTG, Polytechnic Institute of Porto, Porto, Portugal
2 Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, 1649-026 Lisbon, Portugal
[email protected]
3 Faculty of Medicine, Lisbon University, Hospital Santa Maria/CHULN, CCUL, 1649-028 Lisbon, Portugal

Abstract. Conducting epidemiologic research usually requires a large amount of data to establish the natural history of a disease and achieve a meaningful study design and interpretation of findings. This is, however, a huge task, because the healthcare domain is composed of a complex corpus and concepts that make data difficult to use and store. Additionally, data accessibility must be considered, because sensitive patient data should be carefully protected and shared responsibly. With the COVID-19 pandemic, the need for sharing data and having an integrated view of the data was reaffirmed, in order to identify the best approaches and signals to improve not only treatments and diagnoses but also social answers to the epidemiological scenario. This paper addresses a data integration scenario for dealing with COVID-19 and cardiovascular diseases, covering the main challenges related to integrating data into a common repository storing data from several hospitals. A conceptual architecture is presented to deal with such approaches and to integrate data from a Portuguese hospital into the common repository used to explore data in a standardized way.

Keywords: Healthcare data · Data integration · ETL · COVID-19

1 Introduction

Around 30% of the data that is kept worldwide is related to health [1]. Such important information is frequently hidden and hard to access due to the sensitivity of the involved entities. In addition to security reasons, the complexity of the domain and the several variants, metrics, and business rules that exist result in complex data structures used to store all data consistently. Moreover, hospitals and medical centers can have several departments with their own systems, which results in data fragmentation across several data sources and in data inconsistencies and conflicts. Governmental restrictions and technical characteristics like these have held the adoption of electronic health records (EHR) in Europe to 3%, compared to 35% in the United States [2]. Although it makes sense that health information is private, gaining access to this wealth of knowledge based on


legal certainty and trust can help save and enhance lives [3]. All over the world, different countries implement different health information systems (EHR) with the main goal of supporting the management of patients' records and keeping track of their health [4]. With the massive expansion in health technology, thousands of bytes of information are stored daily in EHR systems, subjecting the healthcare area to new challenges and developments [5]. Although the main goal of EHR systems is the management of the patient's health information, the amount of stored data boosted a paradigm change, calling attention to the secondary usage of this data in the healthcare research field [6]. Regarding the research field in healthcare, the COVID-19 pandemic reaffirmed how much value inter-organizational collaboration can add and the importance of data sharing [7]. The pandemic has made it increasingly obvious that data-driven improvements in healthcare are essential to improving service quality and the answers from healthcare systems that help save more lives. Relevant data for supporting better diagnostics can provide useful mechanisms to reduce delays in several treatments. During the pandemic, Europe had 2.7 million new cases of cancer while 1.3 million people died from the disease [8]. Patients with non-communicable diseases did not have access to the care they required. In such cases, data sharing and data interoperability are crucial for health research as well as for citizens and patients: from uncovering complicated pathways to understanding and preventing diseases, to comparing causes of disease outcomes across populations and improving health care. To investigate and compare genetic and epidemiological risk factors for improved prevention or treatment, international data integration is required. The data integration process (also known as ETL – Extract, Transform and Load [9]) comprises several methods, technologies, and models to integrate data (typically from many data sources) into a single repository. The target repository usually represents a unified view of the data, providing a single version of the truth for the domain and the underlying business it models. For that reason, this process is extremely expensive and consumes a significant part of the project resources [9]. This means that data integration and data quality problems, such as syntactic/semantic accuracy and data completeness [10], need to be properly handled. In healthcare, local EHR systems frequently have various data models, vocabularies, names for data items, and levels of data granularity, which creates compatibility difficulties. Due to the target data model's inability to effectively translate and store the syntax and semantics of the source data, compatibility difficulties may result in information loss [11]. In this paper, the challenge of integrating data of patients with COVID-19, including cardiovascular risk and complications, is addressed. Data from a Portuguese hospital need to be integrated into a common database storing data from several European hospitals using a common schema representation. Using these healthcare data related to cardiovascular problems impacted by the COVID-19 pandemic allows for several research strategies and techniques that can be used to identify patterns and promote new answers in treatment and diagnosis. After this introduction, Sect. 2 presents the main related work regarding ETL implementation for healthcare, revealing some of the more common problems researchers have faced when dealing with healthcare data and possible solutions. Next, Sect. 3 presents the challenge described in this paper, as well as the main data quality problems that need to be handled to successfully align the source data with the target repository requirements.


Considering such problems, Sect. 4 proposes an ETL architecture to deal with the several problems already identified in the source data. Finally, in Sect. 5, the conclusions and future work guidelines are presented. It is also important to mention that this study was approved by the Ethics Committee and the Data Protection Officer of the hospital under study.

2 Related Work

Data migration processes have been used for a long time to move data from one location to another. Generally, these processes occur when data need to be moved from a legacy system to a new one or when new applications that share the same data are incorporated into the current system's technological stack. These data migrations can be named differently depending on their context and the domain's specificities. For example, ETL processes are typical data migration processes used in the Data Warehouse (DW) context. They involve the typical tasks of extracting data from one or several data sources (normally considering a specific time window and technology limitations), transforming those data according to specific needs, and loading the pre-processed data to a specific repository embodying specific rules. Additionally, several data quality enforcement procedures take place, since incoming data is often of dubious quality and generally involves complex standardization procedures. Regardless of the nature of the data migration process, the magnitude of the tasks and the amount of data typically involved impose careful planning of the human and technological resources [12]. In traditional use cases, source data is structured (typically in relational databases) or semi-structured (for example, in spreadsheets), which makes the identification of relevant data relatively straightforward. However, with the emergence of Big Data scenarios and artificial intelligence algorithms for processing data, the occurrence of non-structured data is common. This is especially true in technical domains involving a complex corpus. For example, the healthcare domain is characterized by highly specialized terminology and textual codification of the underlying themes, which implies a certain level of literacy for understanding and analyzing the context and the data itself. Ambiguity is common in domains like healthcare. Several ways of describing symptoms, diseases, and diagnoses coexist, which compromises text interpretation by computer procedures and languages. Normally, this is not an issue when data is produced by medical personnel because, based on their experience, a specific term framed within a specific context can only have one meaning. The National Cancer Institute (NCI) conducted a project to establish a Data Coordination Center (DCC) to develop a centralized shared data repository for multiple research purposes [13]. This study highlighted data conciliation as one of the critical steps for the success of the project and one of the main challenges, taking into consideration that data are collected from various sources; even for similar data, the way they are coded and stored can vary, including different formats [14]. This factor leads to the challenge of understanding how data are coded and whether there is any potential for missing or unavailable data. Another issue of integrating healthcare data is the presence of a high volume of unstructured data, including medical images and free-text reports, which can bring significant challenges to the extraction of consistent and meaningful information [13, 15] and can represent a


substantial amount of underutilized data [16]. Creating structured representations from unstructured data to make it more understandable and usable for knowledge acquisition can require substantial effort [17], and the usage of Natural Language Processing (NLP) is frequently adopted as a solution to this issue [16–18]. The patient's medical history is typically stored using EHR. These records result from several clinician workflows related to the patient and their pathologies, medication, immunizations, or exam reports. Like any organization, hospitals or medical centers can have several departments with their own operational systems and specific workflows. For that reason, EHR data can be spread across several data sources, each one designed considering specific departmental or process needs. In [19], the authors propose an ETL methodology for populating a DW considering the existence of multiple clinical documents produced by heterogeneous systems. Despite this, all documents produced and used in the ETL context are structured according to the HL7 standard format, which simplifies ETL design and implementation due to the independence from source system characteristics. Ambiguity (disambiguating the meaning of concepts) and data redundancy (the same information in different types of documents) are two issues the authors try to minimize through a conceptual framework mapping a dimensional model to HL7 CDA (http://www.hl7.org/implement/standards/product_brief.cfm?product_id=7) concepts. In [20], specific domain concepts from the healthcare domain are addressed through a vocabulary enriched with derived concepts. This work describes a DW implementation based on EHR records from several hospitals, to understand the course of the COVID-19 disease and investigate potential treatment strategies. The authors report several challenges across ETL development, including the data mapping between the medical parameters used by the hospitals (they differ in representation and use), data enrichment through the addition of derived concepts, and data validation performed manually by the main hospital stakeholders throughout the project. There are also interesting research works, namely [21], focusing on ETL operations that load data from various sources with different database schema structures into a common data model based on a Dynamic-ETL implementation. The main idea resides in the concept of a rule: through a rules engine, SQL statements are generated to transform and conform data. A semantic approach is followed in [22]. The authors propose the use of annotations to support the semantic mappings among concepts supporting a health analytics engine in the context of human obesity analytics. The referred works reveal interesting aspects of dealing with ETL and, more generally, with data migration processes in the healthcare context. This is, in fact, an interesting research area, with several research works focusing on aspects related to ETL design and implementation, such as data interoperability [23], data integration [24], and analytics [25].



Fig. 1. Summary graphical representation of the 3 databases

3 Clinical Data Interoperability

This project aims to collect data regarding cardiovascular history into a single repository. This repository represents a unified view of the data, collecting clinical data related, for example, to diagnostic information and the occurrence of cardiovascular complications in COVID-19 patients. With this standardized approach to data coming from several hospitals, the incidence of cardiovascular complications in patients with COVID-19, and the vulnerability and clinical course of COVID-19 in patients with an underlying cardiovascular disease, can be studied. As data sources, we have 3 different databases from a Portuguese hospital. The first, which in this article we will call Database A, contains 64 tables that store COVID-19 symptomatology data from patients. The second database, Database B, is where the patients' cardiology data are stored. This database comprises only 1 table but contains a total of 257 columns. The last database we have available, Database C, contains a total of 12,902 reports of the patients' medical examinations. A summary graphical representation of the 3 databases can be found in Fig. 1. As we analyzed the data in more detail, we observed several types of problems that can hinder or even compromise the loading of data into the destination database. One problem that appears frequently in data from the Database A tables is data ambiguity. For example, for the variable "Temperature °C" in the destination database, there are 5 variables in source Database A that refer to the patient's body temperature (for example, "TempM-Temperature (manual)", "temp. Dt-Patient temperature" or "Temp1-Temperature"); a mapping sketch for this problem is given after the entity list below. Another problem we detected is that the measurement units in the source system do not always coincide with the measurement units in the target system. Besides, there are mandatory fields in the target database that are not filled in the source systems (missing values). Regarding Database C, which contains several medical reports, and since these are documents written mostly in natural language, there is the unstructured data problem. To overcome this problem, NLP and Text-Mining techniques will be needed to extract value from this information. Having presented the source databases and the problems arising from them, we will now address the target database. The target database aims to maintain the records of patients with cardiovascular history, diagnostic information, and the occurrence of cardiovascular complications in COVID-19 patients [26]. It is composed of several entities, such as:

- Participant Identification Number (Required), which allows the identification of each participant/candidate;
- Inclusion Criteria (Required), which contains the data that determine whether a candidate meets the requirements to be considered a participant;
- Demographics (Required), which stores the demographic information of a given participant;
- Cardiac Baseline Assessment (Required), which contains data about the reason for the cardiovascular consultation, the cardiovascular history and the cardiac medication used prior to admission;
- Cardiac Biomarkers (Required – If Measured), which stores all the cardiac biomarkers, namely "Cardiac Troponin", "Creatine kinase (CK)", "CK-MB" and "(NT-proBNP)BNP", when measured;
- ECG (Electrocardiogram), which contains the results of ECG exams;
- Echocardiography (Required – If Performed), which holds the echocardiography exam results;
- Cardiac MRI (Magnetic Resonance Imaging), which contains data related to cardiac magnetic resonance imaging;
- CT (Computed Tomography), which stores the results of "thorax", "coronary", "pulmonary angiography" and/or "PET-CT" exams;
- Invasive Cardiac Procedures, which stores the results of the "coronary angiography" and/or "myocardial biopsy";
- Cardiac and thromboembolic COVID-19 complications (Required), which stores cardiac or thromboembolic complications during hospitalization after diagnosis with COVID-19;
- Cardiac Outcome: 7-day follow-up (Required), which contains data about the state of the patients 7 days after admission, to verify whether the cardiology symptoms persist;
- Cardiac Outcome: 30-day follow-up (Required), which stores data about the state of the patients 30 days after admission, to verify whether the cardiology symptoms persist;
- Discharge, which contains data regarding the discharge of the patients, whether for death, palliative care, transfer to another facility, or recovery.
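Returning to the ambiguity problem in Database A, the sketch below shows one way the many-to-one mapping for the temperature variable could be expressed; the field names follow the examples above, but the conversion rules are invented for illustration and are not the project's actual configuration:

```python
# Several Database A fields feed the single target variable "Temperature °C".
# The conversions here are assumptions, not the hospital's real rules.
TEMPERATURE_SOURCES = {
    "TempM": lambda v: v,               # manual reading, already in Celsius
    "temp.Dt": lambda v: v,             # patient temperature, Celsius
    "Temp1": lambda v: (v - 32) / 1.8,  # hypothetical Fahrenheit source
}

def map_temperature(record: dict):
    """Resolve the first available source field into 'Temperature °C'."""
    for field, convert in TEMPERATURE_SOURCES.items():
        value = record.get(field)
        if value is not None:
            return round(convert(value), 1)
    return None  # missing value: left for the quarantine/validation step
```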

4 The Data Pipeline

Developing a data pipeline is a complex procedure that needs to consider all the specificities of the most common phases, namely: extracting data from the source databases to support regular data refreshes, considering the availability of and operational changes in the source data systems; transforming data according to specific business rules and dealing with data quality problems that compromise data integration and consistency; and loading data considering the requirements of the target database. The transformation phase is critical, since it must guarantee data quality while source EHR data varies depending on how the source organizations construct and use their EHRs [27]. To load all the necessary data into the target database, data should be extracted from the 3 source databases previously described (Fig. 1). Data should be properly identified to ensure that only data from patients who had COVID-19 will be extracted and transformed before being loaded into the target repository. The extracted data include demographic data; clinical observations regarding the first symptoms when the patient checks in to the hospital, such as temperature, heart rate, blood pressure, respiratory rate, and hemoglobin level; results from ECG, MRI, CT, echocardiography and invasive cardiac procedure exams; cardiac biomarkers; and medical reports regarding the discharge of the patient. Once all the data are extracted and loaded into the staging databases, the transformation phase takes place, considering the pre-defined data mappings and the data cleaning and data enrichment procedures. One of the first tasks performed in the transformation process is a mapping between the parameters of the source databases and the target one. The data mapping process can be very challenging for healthcare data because there are no standardized parameters across all EHR systems. The same parameters may be named differently in each hospital, and some can be recorded in one hospital but not in another [20]. Figure 2 presents the conceptual architecture idealized to deal with the identified requirements.

Fig. 2. ETL architecture

A persistent staging area supports the ETL workflow. Data are extracted from the operational sources and a copy is stored in the staging area, which enables recovery from failures without restarting the process from the beginning and avoids affecting the operational data sources. The Monitoring Layer provides two components: one to track data lineage (Lineage Control) and one to track all main ETL events, providing insights to identify process errors or bottlenecks (Log Handler). Data quality is supported by the Metadata Layer, using a Data Dictionary component for parsing disease and diagnosis terms, as well as metrics that differ between the Portuguese hospital and those used in the other European hospitals. The Metadata Layer captures and maintains all ETL metadata, including all transformation logic and specific domain rules. The Semantic Layer represents the main domain concepts and their relationships, for data conforming. This allows the association of specific metadata within the scope of the term and document (for example, MRI or ECG) used for data integration. Additionally, part of the consumed data consists of unstructured information. In this context, the NLP Layer processes the technical corpus using specific algorithms to identify keywords (Keyword Extraction component). The NLP Layer and the Metadata Layer communicate with each other to provide context (using the Semantic Layer) to the keyword extraction. This approach aims to reduce ambiguity and contextualize terms and phrases in each document's context, allowing for the identification of the common terms used in each context. A semantic annotation process is also planned, providing the annotation of health data by medical personnel considering basic terms used in healthcare. The semantic annotation will enrich data with meta-information provided by experts, offering a way to apply semi-automatic mechanisms for processing incoming unstructured data. For example, it can be used by doctors to identify specific symptoms based on specific terms or values, or to identify conditions relevant to some disease or diagnosis. Finally, the Quarantine Layer provides mechanisms to handle unexpected records (for example, values with unknown measurement units or unexpected parameterized values) and to define specific rules to handle such cases automatically or to reintroduce them into the ETL flow after human evaluation.
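The following sketch illustrates how the Quarantine Layer's checks could look; the rule set and plausibility bounds are invented for illustration, since in the architecture the real rules come from the Metadata Layer:

```python
from dataclasses import dataclass, field

@dataclass
class QuarantineLayer:
    """Hold records that violate domain rules until a rule or a human resolves them."""
    held: list = field(default_factory=list)

    # Illustrative plausibility bounds (units assumed checked upstream).
    RULES = {
        "temperature_c": (30.0, 45.0),
        "heart_rate_bpm": (20, 250),
    }

    def admit(self, record: dict) -> bool:
        """Return True if the record passes all rules; otherwise quarantine it."""
        for key, (low, high) in self.RULES.items():
            value = record.get(key)
            if value is not None and not (low <= value <= high):
                self.held.append((key, record))
                return False
        return True
```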

5 Conclusions

Studies of rare and worldwide diseases require much statistical power and a unified vision of all nations' data. It is crucial to exchange samples and data from European individuals to guarantee a common view of the results from different European populations, with their unique genetic and lifestyle characteristics. In this work, we addressed a COVID-19 initiative that aims to collect data about the cardiovascular history of patients affected by infection with COVID-19. Data is collected from several hospitals and integrated into a standardized data repository, aiming to provide deeper insights about that data considering a common view of it. We addressed the data from one of these hospitals and identified a set of problems that need to be handled during the populating process. In future work, the conceptual architecture presented in this paper will be implemented to test and measure its applicability to the described data integration scenario.

Acknowledgement. This work is partially funded by national funds through FCT - Fundação para a Ciência e Tecnologia, I.P., under the projects FCT UIDB/04466/2020 and UIDP/04466/2020. Luís Elvas holds a Ph.D. grant, funded by FCT with UI/BD/151494/2021.

References

1. Faggella, D.: Where Healthcare's Big Data Actually Comes From. https://emerj.com, 22 Nov 2019. https://emerj.com/ai-sector-overviews/where-healthcares-big-data-actually-comes-from/. Accessed 14 Oct 2022
2. Jones, G.L., Peter, Z., Rutter, K.A., Somauroo, A.: Promoting an overdue digital transformation in healthcare. https://www.mckinsey.com, 20 June 2019. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/promoting-an-overdue-digital-transformation-in-healthcare. Accessed 14 Oct 2022
3. Bughin, J., et al.: Artificial Intelligence: The Next Digital Frontier?, June 2017. https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/how%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/mgi-artificial-intelligence-discussion-paper.ashx. Accessed 14 Oct 2022


4. Chen, M.-T., Lin, T.H.: A provable and secure patient electronic health record fair exchange scheme for health information systems. Appl. Sci. (Switzerland) 11(5), 2401 (2021). https://doi.org/10.3390/app11052401
5. Khennou, F., Houda Chaoui, N., Khamlichi, Y.I.: A migration methodology from legacy to new electronic health record based OpenEHR. Int. J. E-Health Med. Commun. 10(1), 55–75 (2019). https://doi.org/10.4018/IJEHMC.2019010104
6. Sarwar, T., et al.: The secondary use of electronic health records for data mining: data characteristics and challenges. ACM Comput. Surv. 55(2), 1–40 (2023). https://doi.org/10.1145/3490234
7. Aunger, J.A., Millar, R., Rafferty, A.M., Mannion, R.: Collaboration over competition? Regulatory reform and inter-organizational relations in the NHS amidst the COVID-19 pandemic: a qualitative study. BMC Health Serv. Res. 22(1) (2022). https://doi.org/10.1186/s12913-022-08059-2
8. Joint Research Centre (JRC): Ireland is the country with the highest cancer incidence in the EU (2020). https://ec.europa.eu/newsroom/eusciencehubnews/items/684847. Accessed 15 Oct 2022
9. Kimball, R., Caserta, J.: The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data. John Wiley & Sons, Inc. (2004)
10. Batini, C., Scannapieco, M.: Data and Information Quality. Springer International Publishing (2016)
11. Fagin, R.: Inverting schema mappings. ACM Trans. Database Syst. 32(4), 25–es (2007). https://doi.org/10.1145/1292609.1292615
12. Kimball, R., Ross, M.: The Data Warehouse Toolkit, 3rd edn. John Wiley & Sons, Inc. (2013)
13. Moorthie, S., et al.: Rapid systematic review to identify key barriers to access, linkage, and use of local authority administrative data for population health research, practice, and policy in the United Kingdom. BMC Public Health 22(1) (2022). https://doi.org/10.1186/s12889-022-13187-9
14. Mai, P.L., et al.: Li-Fraumeni Exploration Consortium data coordinating center: building an interactive web-based resource for collaborative international cancer epidemiology research for a rare condition. Cancer Epidemiol. Biomark. Prev. 29(5), 927–935 (2021). https://doi.org/10.1158/1055-9965.EPI-19-1113
15. Sanchez, P., Voisey, J.P., Xia, T., Watson, H.I., O'Neil, A.Q., Tsaftaris, S.A.: Causal machine learning for healthcare and precision medicine. R. Soc. Open Sci. 9(8), 220638 (2022). https://doi.org/10.1098/rsos.220638
16. Chen, J.S., Baxter, S.L.: Applications of natural language processing in ophthalmology: present and future. Front. Med. (Lausanne) 9 (2022). https://doi.org/10.3389/fmed.2022.906554
17. Oubenali, N., Messaoud, S., Filiot, A., Lamer, A., Andrey, P.: Visualization of medical concepts represented using word embeddings: a scoping review. BMC Med. Inform. Decis. Mak. 22(1), 83 (2022). https://doi.org/10.1186/s12911-022-01822-9
18. Zhang, T., Schoene, A.M., Ji, S., Ananiadou, S.: Natural language processing applied to mental illness detection: a narrative review. NPJ Digit. Med. 5(1), 46 (2022). https://doi.org/10.1038/s41746-022-00589-7
19. Pecoraro, F., Luzi, D., Ricci, F.L.: Designing ETL tools to feed a data warehouse based on electronic healthcare record infrastructure. In: Digital Healthcare Empowering Europeans, pp. 929–933. IOS Press (2015)
20. Fleuren, L.M., et al.: The Dutch Data Warehouse, a multicenter and full-admission electronic health records database for critically ill COVID-19 patients. Crit. Care 25(1), 1–12 (2021). https://doi.org/10.1186/s13054-021-03733-z


21. Ong, T.C., et al.: Dynamic-ETL: a hybrid approach for health data extraction, transformation and loading. BMC Med. Inform. Decis. Mak. 17(1), 1–2 (2017). https://doi.org/10.1186/s12911-017-0532-3
22. Poulymenopoulou, M., Papakonstantinou, D., Malamateniou, F., Vassilacopoulos, G.: A health analytics semantic ETL service for obesity surveillance. Stud. Health Technol. Inform. 210, 840–844 (2015). https://doi.org/10.3233/978-1-61499-512-8-840
23. Gavrilov, G., Vlahu-Gjorgievska, E., Trajkovik, V.: Healthcare data warehouse system supporting cross-border interoperability. Health Inform. J. 26(2), 1321–1332 (2020)
24. Khan, U., Kothari, H., Kuchekar, A., Koshy, R.: Common data model for healthcare data. In: 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), pp. 1450–1457 (2018). https://doi.org/10.1109/RTEICT42901.2018.9012520
25. Khedr, A., Kholeif, S., Saad, F.: An integrated business intelligence framework for healthcare analytics. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 7(5), 263–270 (2017). https://doi.org/10.23956/ijarcsse/SV7I5/0163
26. Registry of patients with COVID-19 including cardiovascular risk and complications. https://capacity-covid.eu. Accessed 09 Oct 2022
27. Hayrinen, K., Saranto, K., Nykanen, P.: Definition, structure, content, use and impacts of electronic health records: a review of the research literature. Int. J. Med. Inform. 77(5), 291–304 (2008). https://doi.org/10.1016/j.ijmedinf.2007.09.001

Automatic Calcium Detection in Echocardiography Based on Deep Learning: A Systematic Review

Sara Gomes1, Luís B. Elvas1,2(B), João C. Ferreira1, and Tomás Brandão1

1 Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, 1649-026 Lisboa, Portugal
[email protected]
2 INOV INESC Inovação - Instituto de Novas Tecnologias, 1000-029 Lisboa, Portugal

Abstract. The diagnosis of many heart diseases involves the analysis of images from Computed Tomography (CT) or echocardiography, which is mainly done by a medical professional. By using Deep Learning (DL) algorithms, it is possible to create a data-driven tool capable of processing and classifying this type of image, supporting physicians in their tasks and improving healthcare efficiency by offering faster and more accurate diagnoses. The aim of this paper is to perform a systematic review of DL-based automated methods for calcium detection, identifying the state of this art. The systematic review was based on the PRISMA methodology to identify relevant articles about image processing using Convolutional Neural Networks (CNN) in the cardiac health context. The search was conducted in Scopus and the Web of Science Core Collection, and the keywords considered included (1) Deep Learning, (2) Calcium Score, (3) CT-Scan, and (4) Echocardiography. The review yielded 82 research articles, 38 of which were in accordance with the initial requirements by referring to image processing and calcium score quantification using DL models. DL is reliable for implementing classification methods for automatic calcium scoring. There are several developments using CT scans, and a need to replicate such methods in echocardiography.

Keywords: Neural network · Deep Learning · Computer Vision · Classification · Artery Calcification · Echocardiography

1 Introduction

Cardiovascular diseases are the main cause of mortality and a major contributor to increasing disability, representing one-third of the world's deaths in 2019 [1]. The prevalence of all cardiovascular disorders doubled from 271 million in 1990 to 523 million in 2019. Also, the number of cardiovascular fatalities progressively increased from 12.1 million in 1990 to 18.6 million in 2019 [2]. Vascular calcification is of great interest in terms of risk factors and subsequent effects. As referred to in [3], the presence of Coronary Artery Calcium (CAC) is strongly correlated with coronary artery diseases and may be a powerful indicator for the prediction of cardiovascular occurrences and death. Aortic Valve Sclerosis (AVS) or Aortic


Valve Calcification (AVC) is generally understood to refer to the development of calcific structures that are contained inside the aortic valve leaflets and do not affect the aortic annulus or coronary artery ostia [4]. Aortic Stenosis (AS) is nowadays the most prevalent primary heart valve disease and a significant contributor to cardiovascular mortality [5]. Digital images are essential for the early detection of anomalies or diseases in any system of the human body, thanks to advancements in biomedical imaging. The heart system is regarded as one of the most vulnerable systems. Due to the lack of exposure to the complexities of pertinent technology, cardiology is perceived as a complex field of practice. Through imaging modalities such as CT scan, Magnetic Resonance Imaging (MRI), angiogram, electrocardiogram (ECG), and others, medical imaging is a powerful diagnostic tool that offers information about anatomical structures [6]. Cardiovascular calcifications are often seen during regular CT scans or echocardiograms. One of the major concerns about the use of CT for CAC detection is the ionizing radiation exposure to the patient [7]. It is fundamental to weigh whether the information obtained from this method offers an improvement in predictive ability over the radiation risk factor and the higher costs associated with it. According to the European Society of Cardiology, echocardiography is the first-line approach for diagnosing and following the treatment of aortic stenosis, due to its portability, high temporal resolution, absence of radiation and low cost [8]. The echocardiogram (also known as an echo) is arguably the tool used the most frequently in the field of the cardiovascular system. It is utilized primarily because it may diagnose and treat cardiac illnesses early on. It is a quick, painless, and affordable method that can accurately display the pressure gradient of heart lesions. The echo is regarded as being safe because it employs sound waves rather than radiation [6]. Every aspect of cardiovascular imaging, from acquisition to reporting, has been impacted by Artificial Intelligence (AI). Examples of AI-based applications include automated acquisition, segmentation, and report production; coronary calcium quantification; computed tomography and magnetic resonance imaging measurements; and diagnosis of coronary diseases. Related to echocardiography, AI can reduce observer variation and offer an accurate diagnosis [9]. DL models have been employed in many computer vision applications, such as image classification, object detection and image segmentation. Of the different types of DL models, CNNs are those typically used in AI-based image analysis tasks. This type of neural network is suitable for several machine-learning-based computer vision tasks, due to its ability to automatically find the best features for the application context. Studies such as [3, 10–14] show that CNNs have been successfully used for medical image segmentation and classification applications. As referred to in [15], for CT scans there are several studies on (semi-)automated techniques for CAC quantification; however, only a few applications are fully automated. Furthermore, in the field of echocardiography, the best approach found is a semi-automated model, so there is a need to fill this gap by constructing a fully automated model for calcium detection in this type of image, leading to faster and more accurate diagnoses while minimizing negative impacts on patients' health.


2 State of the Art

2.1 Methods

This systematic review is based on the PRISMA methodology [16], which stands for Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PRISMA is focused on assisting authors in better evaluating and reporting systematic reviews, allowing readers to understand what the authors researched and concluded, which improves reporting standards and facilitates peer review. In this context, the question this review aims to answer is "What is the state of the art on DL-based models for automatic calcium detection on cardiac images?".

2.2 Data Extraction

The search was conducted on the Scopus and Web of Science Core Collection (WoSCC) databases in October 2022. The search keywords were split into three categories: Concept, Population and Context. As shown in Table 1, the search query was built by intersecting all the columns, that is, Concept AND Population AND Context (a minimal sketch of assembling such a query appears after the table). Publication years were also restricted to the last 4 years (2019–2022) and documents had to be reviews or articles.

Table 1. Keywords selection

Concept              Population               Context                  Limitations
"deep learning"      "computed tomography"    "aortic stenosis"        year: 2019–2022
"neural network"     "ct-scan"                "aortic calcification"   document type: article or review
"computer vision"    "echocardiography"       "calcium score"
                     "ultrasound image"       "aortic sclerosis"

Documents retrieved at each step (Scopus + WoSCC): Concept: 1,004,076 + 414,288; + Population: 11,268 + 4,078; + Context: 109 + 20; + Limitations: 80 + 12
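To make the query construction concrete, the following minimal sketch (not the authors' actual code; real Scopus/WoSCC queries also use database-specific field tags such as TITLE-ABS-KEY) assembles the boolean string by OR-joining the synonyms within each category and AND-joining the categories:

```python
# Illustrative sketch: assembling the boolean query from Table 1's categories.
concept = ['"deep learning"', '"neural network"', '"computer vision"']
population = ['"computed tomography"', '"ct-scan"', '"echocardiography"', '"ultrasound image"']
context = ['"aortic stenosis"', '"aortic calcification"', '"calcium score"', '"aortic sclerosis"']

def or_group(terms):
    # OR-join the synonyms of one category and parenthesize the group.
    return "(" + " OR ".join(terms) + ")"

# Categories are intersected: Concept AND Population AND Context.
query = " AND ".join(or_group(group) for group in (concept, population, context))
print(query)
```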

2.3 Results

The search described in the previous section yielded 92 results, 80 from Scopus and 12 from WoSCC, which were subjected to several filtering steps, in line with the PRISMA methodology. These successive steps are represented in Fig. 1. Firstly, the duplicates were removed from those 92 results, leaving 82 distinct documents. These documents were analyzed and categorized, and the less significant ones were excluded, as they were out of the review scope. At the end of this cleaning process, the initial selection was reduced to 50 documents.
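As an illustration of the duplicate-removal step, the sketch below merges the two database exports and deduplicates on a normalized title; the file and column names are assumptions, not the authors' pipeline:

```python
import pandas as pd

# Hypothetical export files and column names (typical of Scopus/WoSCC CSV exports).
merged = pd.concat(
    [pd.read_csv("scopus_export.csv"), pd.read_csv("woscc_export.csv")],
    ignore_index=True,
)

# The same document appears in both databases with small formatting
# differences, so deduplicate on a normalized title.
merged["title_norm"] = (
    merged["Title"].str.lower().str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip()
)
distinct = merged.drop_duplicates(subset="title_norm")
print(len(distinct))  # 92 raw records -> 82 distinct documents in this review
```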

Fig. 1. PRISMA Workflow diagram

The document set was then refined by analyzing the research purpose of each document. The main purpose expressed in 22 of the 50 documents is "calcium quantification". Other significant purposes are "Image Analysis/Detection" and "Review". These three categories were selected for the current review, and documents stating other objectives were therefore discarded. After this purpose-oriented selection, the final document set included 38 references.

Fig. 2. Documents distribution by ML techniques (a), imaging data types (b) and application goals (c).

By analyzing the topics addressed in these documents, as represented in the charts in Fig. 2, it can be observed that the most frequently addressed topics are Convolutional Neural Networks among ML techniques, CT-scan among imaging data types, and calcium quantification/risk analysis among application goals. In the majority of documents, these three topics appear together.


Neural networks were employed in 10 documents, 26% of the total selected references, with 87% of them used for CT-scan processing and only 13% for echocardiography, as shown in Fig. 3. For easier identification of the main keywords addressed by the included documents, and of the existing relationships between them, the diagram shown in Fig. 4 was created using VOSviewer. This diagram represents the keywords addressed, sized by number of occurrences. It reveals two main keywords, "Deep Learning" and "Artificial Intelligence", followed by "cardiovascular disease" and "Machine Learning". It also divides the topics into three clusters, differentiated by colors. There is evidence of a relationship between "computed tomography" and "Deep Learning". In contrast, there are no strong relations involving echocardiography, meaning that DL-based studies applied to it are still scarce.

Fig. 3. Neural Network studies

Fig. 4. Keywords network
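The kind of keyword network that VOSviewer renders is built from occurrence and co-occurrence counts. The following sketch shows the underlying counting on a few hypothetical keyword sets; it is illustrative only, as VOSviewer additionally applies clustering and layout algorithms:

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword sets, one per document.
doc_keywords = [
    {"deep learning", "computed tomography", "calcium score"},
    {"deep learning", "artificial intelligence", "echocardiography"},
    {"machine learning", "cardiovascular disease", "computed tomography"},
]

# Node size in the map scales with keyword occurrences ...
occurrences = Counter(kw for doc in doc_keywords for kw in doc)
# ... and edge weight with the number of documents where two keywords co-occur.
cooccurrences = Counter(
    frozenset(pair) for doc in doc_keywords for pair in combinations(sorted(doc), 2)
)
print(occurrences["deep learning"],
      cooccurrences[frozenset({"deep learning", "computed tomography"})])
```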

2.4 Goals and Outcomes Analysis

Considering that the main aim of this article is to identify the current applications of AI to calcium detection on cardiac images, the included references were organized by the topics they address, as represented in Table 2. After evaluating the selected studies, it was possible to recognize the expansion of AI-based models in the last few years.

Table 2. References distribution by topics addressed

Topic                          References                                                    # Doc   % Doc
Deep/Machine Learning          [1, 3, 8, 10–15, 17–43]                                       36      95%
CT-Scan                        [1, 3, 10–15, 17–19, 22–25, 27–39, 41, 42, 44]                31      82%
Aortic Disease/Calcium Score   [1, 3, 8, 10, 12–15, 17, 18, 21–25, 28–42, 45]                31      82%
Computer Vision                [1, 3, 8, 10–14, 17, 18, 21, 23, 25, 29, 30, 32, 34–39, 42]   23      61%
Neural Network                 [1, 3, 10–14, 29, 34, 43]                                     10      26%
Echocardiography/Ultrasound    [8, 20, 21, 43]                                               4       11%
Heart Failure                  [20]                                                          1       3%
Covid-19                       [27]                                                          1       3%

Fig. 5. Computer Vision models

Deep Learning on Cardiac Images. Starting from the most popular topic, Deep Learning was referenced in 95% of the included documents. DL models have several useful applications in cardiac imaging. Figure 5 represents some types of computer vision (CV) models, such as object detection, image segmentation and prediction, where prediction can be a classification or a regression problem.


Fig. 6. Heart identification (a) and calcium detection (b) on CT-scans

According to review [44], some examples of convolutional neural network (CNN) uses are: (a) finding a heart in a CT image, returning as output a bounding box indicating the location of the heart; and (b) detecting calcium structures using image segmentation, where the goal is to obtain a "filtered" image showing only the desired object, as represented in Fig. 6. Finally, DL is also helpful for prediction, which can mean classification or regression. When the goal is to determine the presence of cardiac diseases, that is a classification problem. There are two possible solutions: a binary classification model, which separates images into two classes, "sick" or "not sick", and a multiclass classification model, where each disease represents a class and each image is assigned a diagnosis: disease "A", "B" or "C". Regression consists of predicting a continuous value; it is useful for quantifications such as the calcium score in the aortic valve or the left ventricular ejection fraction (LVEF).

Calcium Score. CAC is a very strong predictor of Cardiovascular Events (CVE), coronary heart disease, stroke, and all-cause mortality [17]. It has become an important biomarker of existing calcium structures in heart valves, which are responsible for many diseases, particularly of the aortic valve, such as aortic stenosis or aortic sclerosis, especially in asymptomatic individuals. According to study [27], there is a proven correlation between a high coronary calcium score and Covid-19 severity, based on oxygenation measures and CT image analysis. There are several studies using AI to automate processes for the detection and quantification of calcium in aortic valves. The authors of [3], with their neural network implementation, reached 86% correct classification of cardiovascular risk, concluding that its implementation could increase workflow efficiency for radiologists. Furthermore, a model developed in [13] shows that it is possible to increase calcium scoring accuracy by training a CNN architecture to correct blurred CT images, significantly reducing assessment variation from 38% to 3.7%.

Improving Workflow Efficiency. As noted in many studies, the implementation of a fully automated DL model would potentially increase clinical workflow efficiency [3, 8, 15, 18, 21, 26, 37, 43]. In several cases, automated models return higher accuracy than traditional methods. The authors of review [20] report that in the field of Heart Failure (HF), ML-based AI has been shown to be more accurate (80%) than doctors (60%) at predicting 5-year survival rates for patients with cardiovascular disorders. Other classification problems referenced in review [8] describe DL solutions capable of distinguishing between two types of heart disease from echocardiography images, or of detecting systemic features such as age and sex, which is beyond human capability, with performance scores of 96.2% and 88%, respectively. Deep neural networks are also particularly helpful for quantification tasks. In the assessment of calcium score, they spare radiologists from analyzing all the screenings by hand, a time-consuming and tedious task. When compared with manual assessment, the DL-based automated calcium quantification model presented in [15] produced excellent results, supporting the viability of its practical implementation. Study [30], which combined different types of examinations in the training dataset for calcium quantification, also achieved better performance than manual scoring.
Other quantification problems, such as the Epicardial Adipose Tissue (EAT) volume estimation from CT images described in [32], have also demonstrated excellent results, with the obtained scores matching manual ones in more than 90% of cases. AI is not a replacement for physicians, but it can play an important role in improving diagnostic precision.
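To make the classification-versus-regression distinction described above concrete, the sketch below (an illustrative PyTorch model, not one from the reviewed studies) shows how a single convolutional backbone can feed either a disease classification head or a continuous calcium-score regression head:

```python
# Illustrative sketch: one backbone, swappable classification/regression head.
import torch
import torch.nn as nn

class CardiacCNN(nn.Module):
    def __init__(self, n_classes=2, regression=False):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One output for regression (e.g., a calcium score),
        # n_classes logits for classification ("sick"/"not sick", or one class per disease).
        self.head = nn.Linear(32, 1 if regression else n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

model = CardiacCNN(n_classes=3)               # e.g., diseases "A", "B", "C"
logits = model(torch.randn(4, 1, 128, 128))   # batch of grayscale images
```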


Radiation Exposure Reduction. Although CT-scan is primarily used for the prediction of cardiovascular diseases and mortality, there is a disadvantage associated with it: radiation exposure and its risks. Because of this, its utilization has been restricted. To address this problem, ultra-low dose (ULD) CT techniques have been developed over the years to decrease the effective radiation dose (ERD), according to study [17]. However, they increase noise, resulting in lower-quality images. Deep neural networks may help to get around this problem. According to studies [17, 44], DL algorithms can process the "noisy" images obtained by ULD-CT techniques, reducing this noise and improving image quality, which makes the use of low-radiation techniques more feasible, safeguarding patients' health. As demonstrated in Fig. 7 from [44], an ULD-CT image (A) can be transformed into a noise-reduced CT image (B).

Fig. 7. ULD-CT image (A) and noise-reduced CT image (B)
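The following is a minimal sketch of such a denoising network in PyTorch, in the spirit of residual denoisers like DnCNN; it is illustrative only and not the architecture used in [17] or [44]:

```python
# Minimal residual denoising sketch: the model predicts the noise component
# and subtracts it from the ultra-low-dose input.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.noise_estimator = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.noise_estimator(noisy)  # residual learning

# Training would minimize, e.g., the MSE between the denoised output and a
# standard-dose reference image.
uld = torch.randn(1, 1, 256, 256)
denoised = DenoiseCNN()(uld)
```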

Another way to avoid radiation exposure is the use of echocardiography imaging. Through this method it is possible to detect many types of heart and pulmonary diseases, or to assess common metrics such as the calcium score, without exposing patients to ionizing radiation, thanks to the use of sound waves instead. It is also advantageous in terms of portability and its associated low costs. However, AI implementations in the field of echocardiography are still scarce, due to the lower image resolution. Despite this, echocardiography remains an essential research method in cardiology, as the large amount of available data can offset the existing challenge of image quality [8].

3 Conclusion

Following the PRISMA methodology, this literature review returned 82 distinct references, 38 of which met the desired context, showing that there is still a lot to explore in this scope. The main topics addressed are Deep Learning, calcium score and CT-scan, primarily with the purpose of calcium quantification. Neural networks are a frequently studied topic, with several applications to cardiac images, from the classification of heart diseases to the detection and quantification of calcium; 87% of the included NN studies were applied to CT-scans, and only the remaining 13% to echocardiography. The developed studies have obtained very high and promising results, some of them with higher performance than manual techniques, proving that AI can be very helpful in increasing clinical workflow efficiency. Diagnoses become faster and more accurate, and clinical professionals can be spared from time-consuming and tedious tasks.


Despite the associated risk to patients' health due to radiation exposure, CT-scan is currently the principal type of image used in the automation of cardiac image processing with DL, mainly because of its higher resolution. Echocardiography does not expose patients to such radiation, but it results in lower-resolution images. Deep Learning can, however, mitigate this limitation: it is possible to train a deep neural network capable of reducing image noise and improving image quality. Further developments in this field can fill the existing gap in fully automated classification methods for echocardiography, leading to more frequent and efficient monitoring of patients and reducing physicians' workload.

Acknowledgement. This work is partially funded by national funds through FCT—Fundação para a Ciência e Tecnologia, I.P., under the projects FCT UIDB/04466/2020 and UIDP/04466/2020. Luís Elvas holds a Ph.D. grant, funded by FCT with UI/BD/151494/2021.

References

1. Hong, J.-S., et al.: Automated coronary artery calcium scoring using nested U-Net and focal loss. Comput. Struct. Biotechnol. J. 20, 1681–1690 (2022). https://doi.org/10.1016/j.csbj.2022.03.025
2. Roth, G.A., et al.: Global burden of cardiovascular diseases and risk factors, 1990–2019: update from the GBD 2019 study. J. Am. Coll. Cardiol. 76(25), 2982–3021 (2020). https://doi.org/10.1016/j.jacc.2020.11.010
3. Gogin, N., et al.: Automatic coronary artery calcium scoring from unenhanced-ECG-gated CT using deep learning. Diagn. Interv. Imaging 102(11), 683–690 (2021). https://doi.org/10.1016/j.diii.2021.05.004
4. Faggiano, A., et al.: Cardiovascular calcification as a marker of increased cardiovascular risk and a surrogate for subclinical atherosclerosis: role of echocardiography. J. Clin. Med. 10(8) (2021). https://doi.org/10.3390/jcm10081668
5. Baumgartner, H., et al.: Recommendations on the echocardiographic assessment of aortic valve stenosis: a focused update from the European Association of Cardiovascular Imaging and the American Society of Echocardiography. Eur. Heart J. Cardiovasc. Imaging 18(3), 254–275 (2017)
6. Wahlang, I., et al.: Deep learning methods for classification of certain abnormalities in echocardiography. Electronics 10(4) (2021)
7. Bos, D., Leening, M.J.G.: Leveraging the coronary calcium scan beyond the coronary calcium score. Eur. Radiol. 28(7), 3082–3087 (2018). https://doi.org/10.1007/s00330-017-5264-3
8. Schuuring, M.J., Išgum, I., Cosyns, B., Chamuleau, S.A.J., Bouma, B.J.: Routine echocardiography and artificial intelligence solutions. Front. Cardiovasc. Med. 8 (2021). https://doi.org/10.3389/fcvm.2021.648877
9. Maragna, R., et al.: Artificial intelligence based multimodality imaging: a new frontier in coronary artery disease management. Front. Cardiovasc. Med. 8 (2021). https://doi.org/10.3389/fcvm.2021.736223
10. Chamberlin, J., et al.: Automated detection of lung nodules and coronary artery calcium using artificial intelligence on low-dose CT scans for lung cancer screening: accuracy and prognostic value. BMC Med. 19(1) (2021)
11. Wolterink, J.M., van Hamersvelt, R.W., Viergever, M.A., Leiner, T., Išgum, I.: Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier. Med. Image Anal. 51, 46–60 (2019)


12. Lee, S., et al.: Deep-learning-based coronary artery calcium detection from CT image. Sensors 21(21) (2021). https://doi.org/10.3390/s21217059
13. Zhang, Y., van der Werf, N.R., Jiang, B., van Hamersvelt, R., Greuter, M.J.W., Xie, X.: Motion-corrected coronary calcium scores by a convolutional neural network: a robotic simulating study. Eur. Radiol. 30(2), 1285–1294 (2019). https://doi.org/10.1007/s00330-019-06447-7
14. Guilenea, F.N., et al.: Thoracic aorta calcium detection and quantification using convolutional neural networks in a large cohort of intermediate-risk patients. Tomography 7(4), 636–649 (2021)
15. van Assen, M., et al.: Automatic coronary calcium scoring in chest CT using a deep neural network in direct comparison with non-contrast cardiac CT: a validation study. Eur. J. Radiol. 134 (2021). https://doi.org/10.1016/j.ejrad.2020.109428
16. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339, b2535 (2009). https://doi.org/10.1136/bmj.b2535
17. Klug, M., et al.: A deep-learning method for the denoising of ultra-low dose chest CT in coronary artery calcium score evaluation. Clin. Radiol. 77(7), e509–e517 (2022). https://doi.org/10.1016/j.crad.2022.03.005
18. Wang, W., Yang, L., Wang, S., Wang, Q., Xu, L.: An automated quantification method for the Agatston coronary artery calcium score on coronary computed tomography angiography. Quant. Imaging Med. Surg. 12(3), 1787–1799 (2022). https://doi.org/10.21037/qims-21-775
19. Yang, D.H.: Application of artificial intelligence to cardiovascular computed tomography. Korean J. Radiol. 22, 1–12 (2021)
20. Yasmin, F., et al.: Artificial intelligence in the diagnosis and detection of heart failure: the past, present, and future. Rev. Cardiovasc. Med. 22(4), 1095–1113 (2021). https://doi.org/10.31083/j.rcm2204121
21. Yang, F., et al.: Automated analysis of Doppler echocardiographic videos as a screening tool for valvular heart diseases. JACC Cardiovasc. Imaging 15(4), 551–563 (2022). https://doi.org/10.1016/j.jcmg.2021.08.015
22. Eng, D., et al.: Automated coronary calcium scoring using deep learning with multicenter external validation. Npj Digit. Med. 4(1) (2021)
23. Graffy, P.M., Liu, J., O'Connor, S., Summers, R.M., Pickhardt, P.J.: Automated segmentation and quantification of aortic calcification at abdominal CT: application of a deep learning-based algorithm to a longitudinal screening cohort. Abdominal Radiology 44(8), 2921–2928 (2019). https://doi.org/10.1007/s00261-019-02014-2
24. Xu, C., et al.: Automatic coronary artery calcium scoring on routine chest computed tomography (CT): comparison of a deep learning algorithm and a dedicated calcium scoring CT. Quant. Imaging Med. Surg. 12(5), 2684–2695 (2022). https://doi.org/10.21037/qims-21-1017
25. Emaus, M.J., et al.: Bragatston study protocol: a multicentre cohort study on automated quantification of cardiovascular calcifications on radiotherapy planning CT scans for cardiovascular risk prediction in patients with breast cancer. BMJ Open 9(7) (2019). https://doi.org/10.1136/bmjopen-2018-028752
26. Al'Aref, S.J., et al.: Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur. Heart J. 40(24), 1975–1986 (2019). https://doi.org/10.1093/eurheartj/ehy404
27. Takeshita, Y., et al.: Coronary artery calcium score may be a novel predictor of COVID-19 prognosis: a retrospective study. BMJ Open Respir. Res. 8(1) (2021). https://doi.org/10.1136/bmjresp-2021-000923
28. Wang, W., et al.: Coronary artery calcium score quantification using a deep-learning algorithm. Clin. Radiol. 75(3), 237.e11–237.e16 (2020)


29. van den Oever, L.B., et al.: Deep learning for automated exclusion of cardiac CT examinations negative for coronary artery calcium. Eur. J. Radiol. 129 (2020). https://doi.org/10.1016/j.ejrad.2020.109114
30. van Velzen, S.G.M., et al.: Deep learning for automatic calcium scoring in CT: validation using multiple cardiac CT and chest CT protocols. Radiology 295(1), 66–79 (2020). https://doi.org/10.1148/radiol.2020191621
31. Winkel, D.J., et al.: Deep learning for vessel-specific coronary artery calcium scoring: validation on a multi-centre dataset. Eur. Heart J. Cardiovasc. Imaging 23(6), 846–854 (2022). https://doi.org/10.1093/ehjci/jeab119
32. Hoori, A., Hu, T., Lee, J., Al-Kindi, S., Rajagopalan, S., Wilson, D.L.: Deep learning segmentation and quantification method for assessing epicardial adipose tissue in CT calcium score scans. Sci. Rep. 12(1) (2022)
33. Jiang, B., et al.: Development and application of artificial intelligence in cardiac imaging. Br. J. Radiol. 93(1113) (2020). https://doi.org/10.1259/bjr.20190812
34. de Vos, B.D., et al.: Direct automatic coronary calcium scoring in cardiac and chest CT. IEEE Trans. Med. Imaging 38(9), 2127–2138 (2019)
35. Singh, G., et al.: End-to-end, pixel-wise vessel-specific coronary and aortic calcium detection and scoring using deep learning. Diagnostics 11(2) (2021). https://doi.org/10.3390/diagnostics11020215
36. Winkelmann, M.T., et al.: Fully automated artery-specific calcium scoring based on machine learning in low-dose computed tomography screening. RoFo Fortschritte Auf Dem Geb. Rontgenstrahlen Bildgeb. Verfahr. 194(7), 763–770 (2022). https://doi.org/10.1055/a-1717-2703
37. Lee, J., et al.: Fully automatic coronary calcium score software empowered by artificial intelligence technology: validation study using three CT cohorts. Korean J. Radiol. 22(11), 1764–1776 (2021)
38. Zhang, N., et al.: Fully automatic framework for comprehensive coronary artery calcium scores analysis on non-contrast cardiac-gated CT scan: total and vessel-specific quantifications. Eur. J. Radiol. 134 (2021)
39. Gal, R., et al.: Identification of risk of cardiovascular disease by automatic quantification of coronary artery calcifications on radiotherapy planning CT scans in patients with breast cancer. JAMA Oncol. 7(7), 1024–1032 (2021). https://doi.org/10.1001/jamaoncol.2021.1144
40. Lee, J., et al.: Prediction of coronary artery calcium score using machine learning in a healthy population. J. Pers. Med. 10(3) (2020)
41. Lauzier, P.T., et al.: The evolving role of artificial intelligence in cardiac image analysis. Can. J. Cardiol. 38(2), 214–224 (2022)
42. Waltz, J., Kocher, M., Kahn, J., Dirr, M., Burt, J.: The future of concurrent automated coronary artery calcium scoring on screening low-dose computed tomography. Cureus 12(6) (2020). https://doi.org/10.7759/cureus.8574
43. Antoniades, C., Asselbergs, F.W., Vardas, P.: The year in cardiovascular medicine 2020: digital health and innovation. Eur. Heart J. 42(7), 732–739 (2021). https://doi.org/10.1093/eurheartj/ehaa1065
44. van den Oever, L.B., et al.: Application of artificial intelligence in cardiac CT: from basics to clinical practice. Eur. J. Radiol. 128 (2020)
45. Hahn, L., Baeumler, K., Hsiao, A.: Artificial intelligence and machine learning in aortic disease. Curr. Opin. Cardiol. 36(6), 695–703 (2021). https://doi.org/10.1097/HCO.0000000000000903

AI-Based mHealth App for Covid-19 or Cardiac Diseases Diagnosis and Prognosis

Ana Vieira1, Luís B. Elvas1(B), João C. Ferreira1,3, Matilde Cascalho1, Afonso Raposo2, Miguel Sales Dias1, Luís Brás Rosário2, and Hugo Plácido da Silva4,5

1 Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, 1649-026 Lisbon, Portugal
[email protected]
2 Centro Cardiovascular da Universidade de Lisboa (CCUL), Faculdade de Medicina da Universidade de Lisboa (FMUL), 1649-028 Lisbon, Portugal
3 INOV INESC Inovação—Instituto de Novas Tecnologias, 1000-029 Lisbon, Portugal
4 Instituto Superior Técnico (IST), Universidade de Lisboa (UL), 1050-049 Lisbon, Portugal
5 Instituto de Telecomunicações (IT), 1049-001 Lisbon, Portugal

Abstract. Covid-19 has rapidly spread and affected millions of people worldwide. For that reason, the public healthcare system was overwhelmed and underprepared to deal with this pandemic. Covid-19 also interfered with the delivery of standard medical care, causing patients with chronic diseases to receive subpar care. As chronic heart failure becomes more common, new management strategies need to be developed. Mobile health technology can be utilized to monitor patients with chronic conditions, such as chronic heart failure, and to detect early signs of Covid-19, for diagnosis and prognosis. Recent breakthroughs in Artificial Intelligence and Machine Learning have increased the capacity of data analytics, which may now be utilized to remotely conduct a variety of tasks that previously required the physical presence of a medical professional. In this work, we analyze the literature in this domain and propose an AI-based mHealth application, designed to collect clinical data and provide diagnosis and prognosis of diseases such as Covid-19 or chronic cardiac diseases.

Keywords: Covid-19 · mHealth App · Telemonitoring · Data collection

1 Introduction

A public health crisis began on 11th March 2020, when the World Health Organization declared Corona Virus Disease 2019 (Covid-19) a global pandemic [1]. It has since rapidly spread and affected millions of people worldwide, making social distancing and quarantine the standard procedures [2]. During this outbreak, the existing public healthcare system was overwhelmed and underprepared [3]. In order to limit the spread of this disease and promote less physical contact and hospitalization, quick diagnosis, prognosis and remote monitoring became of utmost importance [4]. Smartphone applications have been acknowledged as a fast and valid alternative for the diagnosis and surveillance of patients with Covid-19 [5], specifically mobile health

(mHealth) apps. The adoption of mHealth may lead to fewer hospitalizations and in-person clinic contacts, which is of immediate value in response to Covid-19 [6]. In Europe, non-communicable diseases continue to be the biggest consumers of healthcare resources, set to increase with an ageing society [6]. Chronic heart failure, being one of them, is a commonly diagnosed condition and has a poor prognosis [7]. Given the increasing prevalence of heart failure, and considering that just a few European countries have a substantial number of organized programs for its care and follow-up, new management strategies need to be developed [8, 9]. One way to avert heart failure is to prevent and control conditions that can cause it, such as aortic diseases (e.g., aortic calcification, aortic stenosis, aortic sclerosis), through remote monitoring, which also helps patients who struggle to receive specialized care due to geography, transportation, or infirmity [9–14].

Aortic calcification, aortic stenosis and aortic sclerosis are conditions that typically affect elderly people [15–19]. Aortic sclerosis carries a higher risk of cardiovascular morbidity and mortality [20], with studies reporting that it affects approximately 30% of patients over 65 years old, up to 40% of those over 75 years and 50% of those over 85 years of age [15, 20–23]. Aortic stenosis affects 7% of the population over 65 years old [24] and is the most prevalent cardiac valvular disease in people over age 75 [18]. In this case, if there are no symptoms or only moderate symptoms, the best line of action may be regular follow-up and monitoring to see if any symptoms arise or worsen [17]. Beyond the previously mentioned effects of Covid-19 on the healthcare system, it has also interfered with the delivery of standard medical care, resulting in patients delaying and forgoing medical treatment [25]. Health remote monitoring systems have evolved to support convenient healthy living, easier communication between healthcare givers and patients for close monitoring, measurement of vital health parameters and routine consultation, allowing the patient load at health facilities to be reduced [26, 27]. By collecting users' physiological data and health information, mHealth technology can be utilized to monitor patients with chronic conditions and detect early signs of Covid-19 [28].

In addition, recent breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), particularly in Deep Learning (DL) and Computer Vision, have increased the capacity of imaging techniques, which may now be utilized to remotely conduct a variety of tasks that previously required the physical presence of a medical professional, such as surveillance of the vital signs of infected as well as suspected individuals [3, 24]. Using healthcare data, AI and ML have great potential in disease prediction [29]. It has also been acknowledged that the future of health care lies in the integration of AI and mHealth apps [30]. The overall aim of this work is to develop an AI-based mHealth application, referred to as the AIMHealth app, to monitor high-risk patients (i.e., diabetic or hypertensive patients, or those with heart disease, with or without Covid-19) and patients with aortic diseases and/or Covid-19, as well as to forecast near-future disease worsening and decompensation conditions, alerting the user and their doctor regarding clinical situations.
With this app, smartphone-collected data, including health data, will be securely stored on the device (using a secure data vault), and access will be controlled and authorized by the user on their own device.


2 State of the Art

2.1 Search Strategy and Inclusion Criteria

Following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology [31] and after defining the research question "What is the state of the art on Artificial Intelligence-based mHealth applications for Covid-19 Diagnosis or Aortic Diseases Monitoring?", a systematic review of the literature was conducted. We searched the Scopus and Web of Science (WoS) databases, and the search was carried out through September 12th, 2022. For the review, we selected original papers or reviews published in journals between 2017 and 2022 and written in English. Only documents related to Computer Science, Decision Science, Engineering and Mathematics were collected. The search strategy was built around a query with a specific research focus. This way we could evaluate the number of articles present in both databases while taking the concept, context, and population under investigation into account.

2.2 Study Selection

After the database search and removal of duplicates, the title, abstract and keywords were used to screen the papers selected for subsequent full-text review; in some cases the full document was examined when those pieces of information were not sufficient.

2.3 Data Extraction and Synthesis

The data, in this case the title, author, year, journal, subject area, keywords and abstract, were handled and stored with the support of the Zotero and Microsoft Excel tools. Based on these results, a qualitative assessment was conducted for data analysis and synthesis. The Scopus and WoS databases were thoroughly searched for published work relating to the concepts "mHealth" or "Telemonitoring" or "Health App", the target population "Covid-19" or "Aortic Calcification" or "Aortic Stenosis" or "Aortic Sclerosis", and within a "Diagnosis" or "Data Mining" or "Physiological Data" or "Data Analysis" context of study.

2.4 Results

The search results are depicted in Table 1. The query was run on both the Scopus and WoS databases after applying the same restrictions and filters.

Table 1. Keyword selection

Concept             Population               Context                Limitations
"Mhealth"           "Covid-19"               "Diagnosis"            2017–2022
"Telemonitoring"    "Aortic Calcification"   "Data Mining"          Only journal papers, articles, and reviews
"Health App"        "Aortic Stenosis"        "Physiological Data"
                    "Aortic Sclerosis"       "Data Analysis"

37,398 Documents; 1,925 Documents; 80 Documents

As seen in the table above, the query was made by combining the keywords from each column (Concept AND Population AND Context AND Limitations), resulting in 80 documents. Following the PRISMA methodology, 38 documents were retained after duplicate removal and full-text reading (see Fig. 1).

Fig. 1. PRISMA workflow diagram

As depicted in Fig. 2, the trend line shows growth in the topic under study, evidencing its pertinence.


Fig. 2. Evolution of the eligible studies by year

Given that the main goal of this article is to identify the use of mHealth applications for Covid-19 diagnosis, prognosis and monitoring of chronic cardiac diseases, the main topics discussed in each of the reviewed articles are summarized in Fig. 3, where we can see that Covid-19 can be diagnosed via telemonitoring and smartphone use.


Fig. 3. Main topics from the literature review

A more thorough analysis of this review is summarized in Table 2, with a description of the main topics addressed by the selected papers.

Table 2. Studies by topics

Main Topic                  Reference                                                                   # of Documents
Covid-19                    [2–5, 25, 32–62]                                                            36
Telemonitoring/Monitoring   [3, 25, 32, 35, 37, 40, 42, 43, 45, 47, 48, 50–53, 56, 57, 59–61, 63–65]    23
Diagnosis                   [2, 4, 5, 32, 33, 35, 36, 41, 42, 44, 45, 48, 53–57, 59–62, 64, 66]         23
Smartphone/Mobile Phone     [2, 3, 5, 32, 33, 38, 41–44, 46, 51, 53–55, 58, 59, 64, 65, 67]             20
Artificial Intelligence     [2–5, 33, 39, 42–44, 50, 55, 65]                                            12
mHealth                     [35, 40, 46, 48, 51, 54, 55, 60, 62, 64, 66]                                11
Data Analysis               [25, 37, 45, 47, 49, 51, 59, 67]                                            8
Telemedicine/Telehealth     [36, 48, 52]                                                                3


In Fig. 3, we can see that the most popular topics are Covid-19, telemonitoring and diagnosis. The authors of [48] analyzed the existing telemedicine tools for remote Covid-19 diagnosis, monitoring and management, concluding that these tools may help control future surges of Covid-19 infections and optimize patient outcomes. [40] demonstrates that mHealth technology has proven useful in monitoring and mitigating the effects of the Covid-19 pandemic by predicting symptom escalation. Other research shows mHealth's usefulness in monitoring distinct conditions: in [60] the authors recommend using wearable-based mHealth devices for patients with acute cardiovascular disease as an early screening and real-time monitoring tool, and the authors of [64] developed an innovative mHealth service platform that can record, process, and identify 16 types of heart sounds, integrating a customized low-cost stethoscope with a smartphone app. These studies show the usefulness of mHealth technology as a tool for remote monitoring as well as for detecting early signs of Covid-19. As for mHealth apps supported by AI, several authors have explored this idea for Covid-19 diagnosis. Ai-CovScan was proposed in [5] for Covid-19 detection using breathing sounds, chest X-ray images and rapid antigen tests, with a transfer learning approach on existing DL Convolutional Neural Networks (CNNs). [44] provides an AI-engine-powered mobile app-based solution that utilizes several smartphone onboard sensors, considering multiple symptoms, for tele-testing and preliminary medical diagnosis of Covid-19. In [33], the development of a smartphone-based edge computing app for Covid-19 diagnosis is proposed, using CNN models trained and tested on publicly available datasets of chest X-ray images.

3 AIMHealth App

In line with the literature research, our proposed solution rests on a user-centric privacy ecosystem, shown in Fig. 4 (referred to as the AIMHealth secure platform), in which we focus on the AIMHealth app, the mHealth smartphone application (on the left side of the figure).

Fig. 4. Logic architecture of the AIMHealth secure platform


The smartphone acts as a generic health data collection, aggregation, and storage device, and can be paired with external devices. It stores all the user's sensitive health data in a securely encrypted data vault (using multi-factor unlocking mechanisms) and is intended to act as a personal generic data gateway between the data processors (particularly, large public hospitals of the National Health System) and the user's anonymized or pseudo-anonymized clinical data. Furthermore, AIMHealth will use AI and distributed ledger technologies based on blockchain. The ledger acts as a common data repository, recording the data processing operations performed on user data by a set of entities, in order to build trustworthy ways to detect and interpret personal health while avoiding unwanted communication. Due to its public nature, no private data will be stored on the blockchain, only specific data about the operation conducted and by whom. Also, through the use of smart contracts, the permissions given by users to data processors to conduct certain operations over their health data are enforced.

Our AI modelling approach will help to sort through the data to obtain reliable and accurate insights, via ML, about changes in usual rhythms or anomalies that indicate the need for medical intervention. This work envisions federated ML approaches for ongoing, incremental training, as well as agile inference (classification) mechanisms for the fast, private, and secure identification of symptomatic and asymptomatic patients, and for the prediction of decompensation cases of the disease, via automatic detection of anomalies in the collected physiological data. During development, a large-scale public datacenter available at RNCA (Rede Nacional de Computação Avançada) will be used, including for hyper-parameter search and network tuning.

3.1 Smartphone Application

Using the Flutter framework [68], we developed a mobile application for physiological data collection in a crowdsourcing regime, with the smartphone as the main device for such data collection. Even though Flutter allows easy development of cross-platform applications from one source code, for now the application has only been tested on Android and is openly available on the Google Play Store. Figure 5 shows several screenshots that depict the user experience with the AIMHealth app. The displayed screens are login, protocols, settings, photoplethysmography (PPG) data acquisition, and cough recording ("Gravação Tosse"), respectively.

3.2 Cloud Database and Features

The AIMHealth app uses an open-source solution, AppWrite [69], for cloud database management, currently hosted on a Virtual Private Server of IT (Instituto de Telecomunicações), and can be deployed using Docker for virtualization, meaning that the current cloud platform may easily be migrated to another machine if needed.
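As a conceptual illustration of the ledger idea described at the start of this section (recording who performed which operation on the data, never the data itself), the sketch below implements a minimal append-only, hash-chained audit log; it stands in for, and greatly simplifies, an actual blockchain with smart contracts:

```python
# Conceptual sketch only (not the platform's actual ledger): each entry is
# chained to the previous one by hash, so tampering with past records is detectable.
import hashlib
import json
import time

class AuditLedger:
    def __init__(self):
        self.chain = []

    def record(self, entity, operation):
        # Only metadata about the operation is stored, never private health data.
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"entity": entity, "operation": operation,
                 "timestamp": time.time(), "prev_hash": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)

ledger = AuditLedger()
ledger.record("hospital_A", "trained federated model on pseudo-anonymized vitals")
```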


Fig. 5. Screenshots of the AIMHealth App user experience

This app can remotely adapt its contents in real time, also allowing content specification by user or user group, since each data acquisition protocol is defined in the cloud platform and its steps can be created, modified, and deleted there. Currently, the implemented types of steps are camera-based photoplethysmography (cbPPG) acquisition, microphone acquisition, wearable device acquisition, and dynamic forms. The developed app makes use of this architecture and implements five protocols for the assessment of cardiac function, respiratory function, and the application's usability. These were designed by the authors, in meetings with clinical specialists of HSM – Hospital de Santa Maria, in Lisbon, Portugal, specifically to assess the Covid-19 disease situation. There are two protocols for cardiac function assessment: one evaluates cardiac function using cbPPG acquisition for 30 s (cbPPG has the ability to estimate respiration rate and blood oxygenation [70]), and the second does so using data collected by a wearable device paired with the smartphone, collecting the same type of physiological data for comparison and clinical evaluation purposes. For respiratory function assessment, there are two designated protocols that record a short clip of audio (5 s) through the microphone available in the smartphone: cough recording ("Gravação Tosse") and voice '33' ("Voz '33'"). Lastly, to evaluate the usability of the application, the System Usability Scale (SUS) [71] questionnaire was included.
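As an illustration of what can be derived from a 30 s cbPPG recording, the sketch below estimates heart rate by peak counting on a synthetic signal; the sampling rate, signal, and processing are assumptions, not the app's actual pipeline:

```python
# Hedged sketch: heart-rate estimation from a synthetic 30 s PPG trace.
import numpy as np
from scipy.signal import find_peaks

fs = 30                                    # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # ~72 bpm

# Count systolic peaks; the minimum inter-peak distance rules out
# physiologically implausible rates (> 200 bpm here), and the height
# threshold ignores small noise bumps.
peaks, _ = find_peaks(ppg, distance=fs * 60 / 200, height=0.5)
heart_rate_bpm = 60 * len(peaks) / 30
print(round(heart_rate_bpm))
```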

4 Conclusions and Future Work

From the state-of-the-art findings, it was possible to recognize that the focus of chronic heart disease management and Covid-19 diagnosis is shifting towards in-home care through telemonitoring and the adoption of AI-based mHealth apps. As such, this paper contributes to mobile health by presenting an app, in the context of a complete cloud-based infrastructure, the AIMHealth platform, that can collect physiological data. The app is finishing the end-user testing and evaluation stages of the software engineering life cycle, and no health data collected in a crowdsourcing setting is available yet. Hence, the integration of AI modelling features, to support the diagnosis and prognosis of disease conditions, is still in progress while we await the approval of the Ethics Committee


of ISCTE-IUL to start collecting data at large. Once data starts being acquired, pseudo-anonymized datasets can easily be gathered in a cloud data repository, analyzed, and utilized for federated AI model training and testing.

Funding. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT) under DSAIPA/AI/0122/2020 AIMHealth – AI-based Mobile Applications for Public Health Response, coordinated by the Information Sciences, Technologies and Architecture Research Center (ISTAR-Iscte). Luís Elvas holds a Ph.D. grant, funded by FCT with UI/BD/151494/2021.

References

1. Coronavirus disease (COVID-19). https://www.who.int/news-room/questions-and-answers/item/coronavirus-disease-covid-19. Accessed 24 Sept 2022
2. Biradar, V.G., et al.: An effective deep learning model for health monitoring and detection of COVID-19 infected patients: an end-to-end solution. Comput. Intell. Neurosci. 2022 (2022). https://doi.org/10.1155/2022/7126259
3. Rohmetra, H., Raghunath, N., Narang, P., Chamola, V., Guizani, M., Lakkaniga, N.R.: AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges. Computing (2021). https://doi.org/10.1007/s00607-021-00937-7
4. Polsinelli, M., Cinque, L., Placidi, G.: A light CNN for detecting COVID-19 from CT scans of the chest. Pattern Recogn. Lett. 140, 95–100 (2020). https://doi.org/10.1016/j.patrec.2020.10.001
5. Sait, U., et al.: A deep-learning based multimodal system for Covid-19 diagnosis using breathing sounds and chest X-ray images. Appl. Soft Comput. 109 (2021). https://doi.org/10.1016/j.asoc.2021.107522
6. European mHealth hub: Use case of disease monitoring and self-management - example of heart failure. https://mhealth-hub.org/use-case-of-disease-monitoring-and-self-management-example-of-heart-failure. Accessed 18 Oct 2022
7. Stewart, S., MacIntyre, K., Hole, D.J., Capewell, S., McMurray, J.J.: More "malignant" than cancer? Five-year survival following a first admission for heart failure. Eur. J. Heart Fail. 3(3), 315–322 (2001). https://doi.org/10.1016/s1388-9842(00)00141-0
8. Cleland, J.G.F.: Improving patient outcomes in heart failure: evidence and barriers. Heart 84(90001), 8i–10 (2000). https://doi.org/10.1136/heart.84.suppl_1.i8
9. Jaarsma, T., et al.: Heart failure management programmes in Europe. Eur. J. Cardiovasc. Nurs. 5(3), 197–205 (2006). https://doi.org/10.1016/j.ejcnurse.2006.04.002
10. Heart failure - Symptoms and causes. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/heart-failure/symptoms-causes/syc-20373142. Accessed 27 Sept 2022
11. Aortic valve stenosis - Symptoms and causes. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/aortic-stenosis/symptoms-causes/syc-20353139. Accessed 27 Sept 2022
12. Prasad, Y., Bhalodkar, N.C.: Aortic sclerosis - a marker of coronary atherosclerosis. Clin. Cardiol. 27(12), 671–673 (2004). https://doi.org/10.1002/clc.4960271202
13. Aortic calcification and heart valve disease. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/aortic-stenosis/expert-answers/aortic-valve-calcification/faq-20058525. Accessed 27 Sept 2022
14. Cleland, J.G.F.: Patients with treatable malignant diseases - including heart failure - are entitled to specialist care. CMAJ 172(2), 207–209 (2005). https://doi.org/10.1503/cmaj.045307
15. Myasoedova, V.A., et al.: Aortic valve sclerosis in high-risk coronary artery disease patients. Front. Cardiovasc. Med. 8 (2021). https://www.frontiersin.org/articles/10.3389/fcvm.2021.711899. Accessed 05 Oct 2022


16. Otto, C.M., Lind, B.K., Kitzman, D.W., Gersh, B.J., Siscovick, D.S.: Association of aortic-valve sclerosis with cardiovascular mortality and morbidity in the elderly. N. Engl. J. Med. 341(3), 142–147 (1999). https://doi.org/10.1056/NEJM199907153410302
17. Aortic Stenosis Overview. www.heart.org. https://www.heart.org/en/health-topics/heart-valve-problems-and-disease/heart-valve-problems-and-causes/problem-aortic-valve-stenosis. Accessed 05 Oct 2022
18. Types of Aortic Valve Disease. https://nyulangone.org/conditions/aortic-valve-disease/types. Accessed 05 Oct 2022
19. Sparks, D.: Aortic calcification: an early sign of heart valve problems? Mayo Clinic News Network, 27 February 2019. https://newsnetwork.mayoclinic.org/discussion/aortic-calcification-an-early-sign-of-heart-valve-problems/. Accessed 05 Oct 2022
20. Venn, R.: Aortic sclerosis outcome in the elderly. Crit. Care 1(1), 1284 (1999). https://doi.org/10.1186/ccf-1999-1284
21. Stewart, B.F., et al.: Clinical factors associated with calcific aortic valve disease. Cardiovascular Health Study. J. Am. Coll. Cardiol. 29(3), 630–634 (1997). https://doi.org/10.1016/s0735-1097(96)00563-3
22. Lindroos, M., Kupari, M., Heikkilä, J., Tilvis, R.: Prevalence of aortic valve abnormalities in the elderly: an echocardiographic study of a random population sample. J. Am. Coll. Cardiol. 21(5), 1220–1225 (1993). https://doi.org/10.1016/0735-1097(93)90249-z
23. Zhu, M., Li, M., Lu, B.: Comment on "Cardiovascular morbidity and mortality in patients with aortic valve sclerosis: a systematic review and meta-analysis". Int. J. Cardiol. 270, 324 (2018). https://doi.org/10.1016/j.ijcard.2018.05.004
24. Elvas, L.B., Almeida, A.G., Rosario, L., Dias, M.S., Ferreira, J.C.: Calcium identification and scoring based on echocardiography: an exploratory study on aortic valve stenosis. J. Personal. Med. 11(7) (2021). https://doi.org/10.3390/jpm11070598
25. Jiang, X., Yao, J., You, J.: Cost-effectiveness of a telemonitoring program for patients with heart failure during the Covid-19 pandemic in Hong Kong: model development and data analysis. J. Med. Internet Res. 23(3) (2021). https://doi.org/10.2196/26516
26. Kalaiselvan, K., Sahithullah, M., Diron Balachandaran, G., Sakthi, V., Srianth, M.: Smart healthcare support for remote patient monitoring. In: 12th International Conference on Advances in Computing, Control, and Telecommunication Technologies, ACT 2021, vol. 2021-August, pp. 967–972 (2021). https://www.scopus.com/inward/record.uri?eid=2-s2.0-85117785111&partnerID=40&md5=b874b93519ad529d1b0efa7098bef0c3
27. Peyroteo, M., Ferreira, I.A., Elvas, L.B., Ferreira, J.C., Lapão, L.V.: Remote monitoring systems for patients with chronic diseases in primary health care: systematic review. JMIR Mhealth Uhealth 9(12), e28285 (2021). https://doi.org/10.2196/28285
28. Heart Infection: Causes, Symptoms & Treatment. Cleveland Clinic. https://my.clevelandclinic.org/health/diseases/22054-heart-infection. Accessed 24 Sept 2022
29. Usama, M., Ahmad, B., Xiao, W., Hossain, M.S., Muhammad, G.: Self-attention based recurrent convolutional neural network for disease prediction using healthcare data. Comput. Methods Programs Biomed. 190, 105191 (2020). https://doi.org/10.1016/j.cmpb.2019.105191
30. European mHealth hub: Artificial Intelligence (AI). https://mhealth-hub.org/artificial-intelligence-ai. Accessed 17 Oct 2022
31. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339, b2535 (2009). https://doi.org/10.1136/bmj.b2535
32. Çelik Ertuğrul, D., Çelik Ulusoy, D.: A knowledge-based self-pre-diagnosis system to predict Covid-19 in smartphone users using personal data and observed symptoms. Expert Syst. 39(3) (2022). https://doi.org/10.1111/exsy.12716


33. Rangarajan, A.K., Ramachandran, H.K.: A preliminary analysis of AI based smartphone application for diagnosis of COVID-19 using chest X-ray images. Expert Syst. Appl. 183 (2021). https://doi.org/10.1016/j.eswa.2021.115401
34. Islam, M.N., Islam, I., Munim, K.M., Islam, A.K.M.N.: A review on the mobile applications developed for COVID-19: an exploratory analysis. IEEE Access 8, 145601–145610 (2020). https://doi.org/10.1109/ACCESS.2020.3015102
35. Mao, K., Zhang, H., Yang, Z.: An integrated biosensor system with mobile health and wastewater-based epidemiology (iBMW) for COVID-19 pandemic. Biosens. Bioelectron. 169 (2020). https://doi.org/10.1016/j.bios.2020.112617
36. Liu, L., et al.: Application and preliminary outcomes of remote diagnosis and treatment during the COVID-19 outbreak: retrospective cohort study. JMIR Mhealth Uhealth 8(7) (2020). https://doi.org/10.2196/19417
37. Bhatia, M., Manocha, A., Ahanger, T.A., Alqahtani, A.: Artificial intelligence-inspired comprehensive framework for Covid-19 outbreak control. Artif. Intell. Med. 127 (2022). https://doi.org/10.1016/j.artmed.2022.102288
38. Stasak, B., Huang, Z., Razavi, S., Joachim, D., Epps, J.: Automatic detection of COVID-19 based on short-duration acoustic smartphone speech analysis. J. Healthcare Inform. Res. 5(2), 201–217 (2020). https://doi.org/10.1007/s41666-020-00090-4
39. Özyurt, F.: Automatic detection of COVID-19 disease by using transfer learning of light weight deep learning model. Traitement du Signal 38(1), 147–153 (2021). https://doi.org/10.18280/TS.380115
40. Adans-Dester, C.P., et al.: Can mHealth technology help mitigate the effects of the COVID-19 pandemic? IEEE Open J. Eng. Med. Biol. 1, 243–248 (2020). https://doi.org/10.1109/OJEMB.2020.3015141
41. Mahmoud, M., Ruppert, C., Rentschler, S., Laufer, S., Deigner, H.-P.: Combining aptamers and antibodies: lateral flow quantification for thrombin and interleukin-6 with smartphone readout. Sens. Actuators B: Chem. 333 (2021). https://doi.org/10.1016/j.snb.2020.129246
42. Vedaei, S.S., et al.: COVID-SAFE: an IoT-based system for automated health monitoring and surveillance in post-pandemic life. IEEE Access 8, 188538–188551 (2020). https://doi.org/10.1109/ACCESS.2020.3030194
43. Hassantabar, S., et al.: CovidDeep: SARS-CoV-2/COVID-19 test based on wearable medical sensors and efficient neural networks. IEEE Trans. Consum. Electron. 67(4), 244–256 (2021). https://doi.org/10.1109/TCE.2021.3130228
44. Khaloufi, H., et al.: Deep learning based early detection framework for preliminary diagnosis of covid-19 via onboard smartphone sensors. Sensors 21(20) (2021). https://doi.org/10.3390/s21206853
45. Pépin, J.-L., et al.: Detecting COVID-19 and other respiratory infections in obstructive sleep apnoea patients through CPAP device telemonitoring. Digital Health 7 (2021). https://doi.org/10.1177/20552076211002957
46. Abdrbo, A., Weheida, S., Shakweer, T., Abd-Elaziz, M.: Effect of using a technology-based (mobile health) nursing protocol on positive COVID-19 patients' dyspnea and level of activity. CIN-Comput. Inform. Nursing 40(5), 299–306 (2022). https://doi.org/10.1097/CIN.0000000000000901
47. Alboksmaty, A., et al.: Effectiveness and safety of pulse oximetry in remote patient monitoring of patients with COVID-19: a systematic review. Lancet Digital Health 4(4), e279–e289 (2022). https://doi.org/10.1016/S2589-7500(21)00276-4
48. Lukas, H., Xu, C., Yu, Y., Gao, W.: Emerging telemedicine tools for remote COVID-19 diagnosis, monitoring, and management. ACS Nano 14(12), 16180–16193 (2020). https://doi.org/10.1021/acsnano.0c08494


49. Piotto, S., Di Biasi, L., Marrafino, F., Concilio, S.: Evaluating epidemiological risk by using open contact tracing data: correlational study. J. Med. Internet Res. 23(8) (2021). https://doi.org/10.2196/28947
50. Shabbir, A., Shabbir, M., Javed, A.R., Rizwan, M., Iwendi, C., Chakraborty, C.: Exploratory data analysis, classification, comparative analysis, case severity detection, and internet of things in COVID-19 telemonitoring for smart hospitals. J. Exp. Theor. Artif. Intell. (2022). https://doi.org/10.1080/0952813X.2021.1960634
51. Wang, Z., et al.: From personalized medicine to population health: a survey of mHealth sensing techniques. IEEE Internet Things J. (2022). https://doi.org/10.1109/JIOT.2022.3161046
52. Savoldelli, A., Vitali, A., Remuzzi, A., Giudici, V.: Improving the user experience of televisits and telemonitoring for heart failure patients in less than 6 months: a methodological approach. Int. J. Med. Inform. 161 (2022). https://doi.org/10.1016/j.ijmedinf.2022.104717
53. Kirkpatrick, A.W., McKee, J.L., Conly, J.M.: Longitudinal remotely mentored self-performed lung ultrasound surveillance of paucisymptomatic Covid-19 patients at risk of disease progression. Ultrasound J. 13(1), 1–7 (2021). https://doi.org/10.1186/s13089-021-00231-9
54. Varnfield, M., et al.: M THer, an mHealth system to support women with gestational diabetes mellitus: feasibility and acceptability study. Diabetes Technol. Ther. 23(5), 358–366 (2021). https://doi.org/10.1089/dia.2020.0509
55. Shokr, A., et al.: Mobile health (mHealth) viral diagnostics enabled with adaptive adversarial learning. ACS Nano 15(1), 665–673 (2021). https://doi.org/10.1021/acsnano.0c06807
56. Mariani, S., Hanke, J.S., Dogan, G., Schmitto, J.D.: Out of hospital management of LVAD patients during COVID-19 outbreak. Artif. Organs 44(8), 873–876 (2020). https://doi.org/10.1111/aor.13744
57. Balasubramanian, V., Vivekanandhan, S., Mahadevan, V.: Pandemic tele-smart: a contactless tele-health system for efficient monitoring of remotely located COVID-19 quarantine wards in India using near-field communication and natural language processing system. Med. Biol. Eng. Compu. 60(1), 61–79 (2021). https://doi.org/10.1007/s11517-021-02456-1
58. Adrover-Jaume, C., et al.: Paper biosensors for detecting elevated IL-6 levels in blood and respiratory samples from COVID-19 patients. Sens. Actuators B: Chem. 330 (2021). https://doi.org/10.1016/j.snb.2020.129333
59. Baker, M., Musselman, M.E., Rogers, R., Hellman, R.: Practical implementation of remote continuous glucose monitoring in hospitalized patients with diabetes. Am. J. Health Syst. Pharm. 79(6), 452–458 (2022). https://doi.org/10.1093/ajhp/zxab456
60. Ji, N., et al.: Recommendation to use wearable-based mHealth in closed-loop management of acute cardiovascular disease patients during the COVID-19 pandemic. IEEE J. Biomed. Health Inform. 25(4), 903–908 (2021). https://doi.org/10.1109/JBHI.2021.3059883
61. Liu, Y., Shukla, D., Newman, H., Zhu, Y.: Soft wearable sensors for monitoring symptoms of COVID-19 and other respiratory diseases: a review. Progress Biomed. Eng. 4(1) (2022). https://doi.org/10.1088/2516-1091/ac2eae
62. Tobias, G., Spanier, A.: Using an mHealth app (iGAM) to reduce gingivitis remotely (Part 2): prospective observational study. JMIR Mhealth Uhealth 9(9) (2021). https://doi.org/10.2196/24955
63. Ware, P., et al.: Challenges of telemonitoring programs for complex chronic conditions: randomized controlled trial with an embedded qualitative study. J. Med. Internet Res. 24(1) (2022). https://doi.org/10.2196/31754
64. Thiyagaraja, S.R., et al.: A novel heart-mobile interface for detection and classification of heart sounds. Biomed. Signal Process. Control 45, 313–324 (2018). https://doi.org/10.1016/j.bspc.2018.05.008
65. Sajal, M.S.R., Ehsan, M.T., Vaidyanathan, R., Wang, S., Aziz, T., Mamun, K.A.A.: Telemonitoring Parkinson's disease using machine learning by combining tremor and voice analysis. Brain Inform. 7(1), 1–11 (2020). https://doi.org/10.1186/s40708-020-00113-1


66. Galetsi, P., Katsaliaki, K., Kumar, S.: Assessing technology innovation of mobile health apps for medical care providers. IEEE Trans. Eng. Manage. (2022). https://doi.org/10.1109/TEM.2022.3142619
67. Rubin, D.S., Ranjeva, S.L., Urbanek, J.K., Karas, M., Madariaga, M.L.L., Huisingh-Scheetz, M.: Smartphone-based gait cadence to identify older adults with decreased functional capacity. Dig. Biomarkers 6(2), 61–70 (2022). https://doi.org/10.1159/000525344
68. Flutter - Build apps for any screen. https://flutter.dev/. Accessed 10 Oct 2022
69. Appwrite - Open-Source End-to-End Backend Server. https://appwrite.io/. Accessed 06 Oct 2022
70. Raposo, A., et al.: e-CoVig: a novel mHealth system for remote monitoring of symptoms in COVID-19. Sensors 21(10) (2021). https://doi.org/10.3390/s21103397
71. Assistant Secretary for Public Affairs: System Usability Scale (SUS), 06 September 2013. https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html. Accessed 11 Oct 2022

Mapping the Research in Orange Economy: A Bibliometric Analysis

Homero Rodriguez-Insuasti1, Marcelo Leon2(B), Néstor Montalván-Burbano3,4,5, and Katherine Parrales-Guerrero5,6

1 Universidad Estatal Península de Santa Elena, La Libertad, Ecuador
2 Universidad ECOTEC, Samborondón, Ecuador
[email protected]
3 Centro de Investigaciones y Proyectos Aplicados a las Ciencias de la Tierra (CIPAT), Campus Gustavo Galindo, ESPOL Polytechnic University, Guayaquil 9015863, Ecuador
4 Department of Economy and Business, University of Almería, 04120 Almería, Spain
5 Research Group Innovation, Management, Marketing and Knowledge Economy Research (I2MAKER), Guayaquil, Ecuador
6 Faculty of Social and Humanistic Sciences, ESPOL Polytechnic University, Guayaquil 9015863, Ecuador

Abstract. People and companies currently face creativity and innovation challenges that force them to evolve constantly, and it is in this setting that the creative economy has emerged as a rapidly growing sector worldwide. The purpose of this work is to establish the current state of research on the orange economy, also known as the creative economy. To this end, a search was carried out in the Web of Science database in August 2021 using the terms "Orange economy" or "creative economy", yielding 633 records. The bibliometric analysis of these data follows two approaches: the first examines scientific production and its impact by evaluating authors, countries, publications, and affiliations; the second, bibliometric mapping or science mapping, analyzes the intellectual structure of the field of study at the core level of its structure (co-occurrence of keywords), the meso level (co-citation of authors) and the macro level (co-citation of journals). The results show that the first investigations appeared in 1985, but interest did not accelerate until 2006. No Latin American country appears in the top ten countries researching this subject, nor does any Latin American author stand out as a reference in this area of knowledge. Keywords: Creative economy · Orange economy · Creative industry · Innovation · Creativity

1 Introduction
The creative economy accounts for just over 6.1% of the world's total gross domestic product (GDP). According to UN estimates, the creative economy sectors employ about 30 million people worldwide and generate USD 250 billion in exports and USD 2,250 billion in annual income [78]. In terms of revenue, the creative economy is dominated by television and the visual arts (The Policy Circle, 2022). In the case of the European Union, the creative sector is among the most dynamic industries on the continent, representing around 1.2 million companies [21].
This study aims to explore and answer a series of questions related to the development of this theme: How has the creative economy developed since its conception? Which collaborators (authors, countries and journals) present the greatest production in the creative economy? What are the most outstanding articles on this topic? What are the relevant topics and themes shown by the publications related to the creative economy? To answer these questions, the objective of this research is to analyze the scientific production related to the creative economy through bibliometric analysis, exploring its intellectual structure and the performance associated with its evolution.
The article is organized in four sections: i) the introduction, which shows the importance of the creative economy in the academic world and society; ii) the methodology, detailing the collection protocol used and the selection of the database and of the search terms; iii) the results obtained and their analysis; iv) the discussion and conclusions of the study.
The work was carried out in three stages: first, the choice of keywords; second, the choice of the database and the search and refinement of the results; and third, the analysis of the results. The data consisted of 633 articles published in the Web of Science. Among the main results, it was found that research began an accelerated increase as of 2011, and that the main countries studying this area of knowledge are England, the United States, and Brazil. The main implication for the business world is to help future entrepreneurs with new options by showing that, with creativity and innovation, good results can be achieved from an economic point of view.

2 Background
The UK government originally adopted the phrase "creative industries" at a national level in the mid-to-late 1990s as an attempt to change the terms of the debate over the value of arts and culture [4]. The adoption of this concept was linked to the new Labour government and the establishment of the UK Department for Culture, Media and Sport in 1998, and referred to economic sectors that have the potential to generate income and new jobs through the creation and exploitation of intellectual property (DCMS, 1998). To assess the impact of the creative industries on the nation's economic growth at the time, the UK mapped the industries in which they were growing [23]. Due to the diverse spectrum of creative industry sectors, including art, heritage, culture and technology, it has become increasingly difficult in recent years to identify the areas in which the creative economy operates [38].
John Howkins, in his prominent UK research published in 2002, mentions that the creative economy includes various industries such as architecture, visual and performing arts, crafts, film, design, publishing, research and development, games and toys, fashion, music, advertising, software, television and radio, and video games; that is, it includes economic sectors where the value of products and services is based on intellectual property (2002). For its part, in Latin America the creative economy is known as the Orange Economy, a term developed by the Inter-American Development Bank (IDB) to represent those industries based on talent, intellectual property, connectivity and the region's rich cultural heritage. Felipe Buitrago and Iván Duque, in 2013, defined the Orange Economy as the sequence of activities that allows ideas to become cultural products and services whose value is based on intellectual property. Given the wide emphasis placed on this topic, international institutions such as the UN and UNESCO began to pay attention to the cultural and creative sectors at the dawn of the 21st century [25, 69, 83]. However, the creation of global and regional plans has been hampered by the disorganized structure of the creative sector [66]. Despite this, research results show that this sector is a key engine of employment generation, economic diversification and economic growth [68]. Indeed, the creative industries are widely recognized as potential game changers for social inclusion, cultural diversity, and human development in general [20]. In addition, the sector's great dynamism makes it resilient to changes in the economy [1].

3 Methodology
Bibliometric analyses have gained immense popularity in the academic world in recent years due to the high impact generated by this type of research and the possibility of analyzing large volumes of information [16]. They make it possible to analyze the performance of scientific production and collaboration patterns, and to explore the intellectual structure of a given field of study, of journals and even of countries [17, 18, 33, 55]. Bibliometrics comprises the application of quantitative and statistical techniques to bibliographic data [54], using a rigorous and transparent methodological process similar to systematic literature reviews, thus ensuring the quality of the information provided [51]. This study follows a three-stage methodological process: i) search criteria; ii) database selection, search and refinement of the results; iii) analysis of results.
Phase I. Search criteria: To fulfill the objective of this study, it was necessary to structure the information search by selecting search terms and parameters. The terms used were "Orange economy" or "creative economy", which are often used to identify this field of study [22, 43].
Phase II. Database selection, search and refinement of the results: The Web of Science Core Collection database was used for collecting bibliographic information. This selection is justified because: i) it is considered the database with the largest global coverage; ii) it offers various tools that facilitate searching, consolidating and downloading data; iii) it applies rigorous quality standards such as the Impact Factor and JCR category rank; and iv) it is widely used in bibliometric studies in various academic disciplines [30, 61, 71]. The data was extracted in August 2021 from the Web of Science, using the selected terms (Phase I) and the "Topic" option, giving the topic search TS = ("orange economy" OR "creative economy"). Additionally, only articles in the English language were chosen, because such articles are considered knowledge certified by blind peer review and English is the most frequent language of scientific publications [5]. This produced 633 records.

Mapping the Research in Orange Economy: A Bibliometric Analysis

781

Phase III. Analysis of results: The bibliographic information was downloaded as a CSV (comma-separated values) file, which provides relevant information on the scientific production related to the researchers (names of the authors, affiliations and countries of origin), the works (titles and year of publication), the content (abstract, keywords and citations received), the references, among others [60]. The information obtained was checked for errors and duplicate records to ensure the quality of the information [44]; no inconsistencies were found. Two software packages were used for data processing and analysis: i) Microsoft Office Excel, which allows data processing as well as performance analysis of scientific production; ii) VOSviewer, with which two-dimensional bibliometric networks can be built and visualized as simple graphs [79], and which is used in various scientific disciplines [3, 15, 52, 59]. Bibliometric analyses follow two approaches: performance analysis and bibliometric mapping. The first examines scientific production and its impact by evaluating authors, countries, publications, and affiliations. The second, bibliometric mapping or science mapping, analyzes the intellectual structure of the field of study through information at the core level of its structure (co-occurrence of keywords), the meso level (co-citation of authors) and the macro level (co-citation of journals) [67].
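As an illustration of this phase, the following minimal sketch shows how a Web of Science CSV export can be checked for duplicates programmatically. It is not the authors' actual workflow (which used Excel and VOSviewer), and the file name and the column labels "DOI" and "Article Title" are assumptions.

```python
# Minimal sketch of the Phase III duplicate check, assuming a CSV export named
# "wos_export.csv" with "DOI" and "Article Title" columns (assumed labels).
import pandas as pd

records = pd.read_csv("wos_export.csv")

# Deduplicate by DOI, falling back to a normalized title when the DOI is empty.
records["key"] = records["DOI"].fillna(records["Article Title"].str.lower().str.strip())
clean = records.drop_duplicates(subset="key", keep="first").drop(columns="key")

print(f"{len(records)} rows read, {len(records) - len(clean)} duplicates removed")
# The paper reports 633 clean records and no inconsistencies at this step.
```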

4 Results
4.1 Performance Analysis
4.1.1 Analysis of Scientific Production
Fig. 1 presents the evolution of scientific articles on the creative economy. The graph covers a total of 633 published investigations with a total of 15,043 citations between 2002 and 2021.


Fig. 1. Trajectory of the literature on Creative Economy, 2002–2021 (n = 633)
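The per-year counts behind a trajectory like Fig. 1 can be tabulated directly from the cleaned export; a small sketch follows, where the file name and the "Publication Year" column label are assumptions.

```python
# Sketch of tabulating publications per year for the Fig. 1 trajectory.
import pandas as pd

clean = pd.read_csv("wos_export_clean.csv")  # assumed file from the previous step
per_year = clean["Publication Year"].value_counts().sort_index()
print(per_year)        # yearly output counts; the text notes a 2015-2018 peak
print(per_year.sum())  # should equal the 633 analyzed articles
```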


In order to observe the evolution of this subject over time, the data was divided into two periods of roughly ten years each. The first period, from 2002 to 2012, accounts for 12.6% of the research on the creative economy, with 2,076 of the 15,043 citations. The first two articles appeared in 2002: the first deals with concepts of the creative economy, creative groups, the rise of the creative class and how governments have started to pay attention to it [21], and has garnered 5 citations; the second is about the UK fashion sector as part of the cultural or creative economy [22] and obtained 84 citations.
In the second period (2013 to 2021), research papers on the creative economy show a rapid increase, accounting for 87.4% (13,103 of the 15,043 citations) of the total scientific production published in the Web of Science. Several highly cited authors stand out here, among them Comunian and Faggian (2014). Comunian has worked with various collaborators and presents 11 publications with 173 citations in total; her articles offer various analyses of how education relates to the creative economy and its impact on the professional life of graduates. Another notable researcher is Grodach (Grodach and Seman 2013), with 5 articles and 158 citations. As in the previous case, this researcher has worked with several colleagues; their work has focused on the creative economy and how to exploit the potential found within cities. As can be seen in Fig. 1, the years from 2015 to 2018 had the highest number of publications.

4.1.2 Analysis of Contribution by Country
This section presents the countries that contribute research within the WoS. Table 1 shows the 10 countries with the largest numbers of publications and citations out of a total of 58. The leading country is England with 113 publications and 1,451 citations, in second place is the United States of America with 98 articles and 1,575 citations, and in third place is Brazil with 62 documents and only 37 citations (Table 1).

Table 1. Countries with the most publications in the Web of Science

Rank | Country | Documents | Citation counts
1 | England | 113 | 1,451
2 | United States of America | 98 | 1,575
3 | Brazil | 62 | 37
4 | Australia | 49 | 609
5 | Canada | 39 | 530
6 | People's Republic of China | 33 | 136
7 | Russia | 28 | 73
8 | South Korea | 25 | 298
9 | Spain | 21 | 117
10 | Scotland | 20 | 278

The results show little difference between Europe (3 countries) and North America together with Asia (2 countries each). Considering the number of citations, the United States has fewer documents (98) than England (113) but more citations, 1,575 versus 1,451. Brazil, the only South American country in the table, has a very low number of citations, just 37. Australia appears with 49 documents but ranks third in number of citations (609). As a point of consideration, no African country appears in this top 10, although among the 58 publishing countries South Africa appears in 28th place with 6 publications. Regarding contributions by continent, 37.7% of the documents published in the WoS belong to Europe, 28.07% to North America, 12.7% to South America, 11.89% to Asia and 10.04% to Oceania. Doing the same analysis with the number of citations per continent, North America heads the list with 2,105 citations, followed by Europe with 1,919, then Oceania with 609, in fourth place Asia with 434, and finally South America with 37.

4.1.3 Journal Performance
This section analyzes in detail the performance of the 365 Web of Science journals that deal with the subject of the creative economy. The following elements were taken into consideration: the number of articles (AT), the quartile of the journal, the percentage of contribution (%), the H-Index (HI), the impact score (IS) and the SCImago Journal Rank (SJR). From the data shown in Table 2, it can be concluded that the 10 journals listed account for 135 of the 633 articles found and for 2,162 citations out of a total of 5,561; that is, 38.88% of the citations are concentrated in this top 10. The first place is occupied by the International Journal of Cultural Policy, with 4.57% of the articles, an H-Index of 45, an SJR of 0.635 and an impact score of 1.38. It is followed by Cultural Trends with 2.84%, an H-Index of 27, an SJR of 0.481 and an impact score of 0.89. In third place is Regional Studies with 2.37%, an H-Index of 120, an SJR of 1.844 and an impact score of 4.27. Finally, note that all the journals in this top 10 are in Q1.

Table 2. Journals with the most publications

Rank | Journal | AT | Quartile | % | HI | SJR | IS
1 | International Journal of Cultural Policy | 29 | Q1 | 4.57% | 45 | 0.635 | 1.38
2 | Cultural Trends | 18 | Q1 | 2.84% | 27 | 0.481 | 0.89
3 | Regional Studies | 15 | Q1 | 2.37% | 120 | 1.844 | 4.27
4 | Brazilian Journal of Operations & Production Management | 13 | Q1 | 2.05% | 3 | – | 0
5 | European Planning Studies | 13 | Q1 | 2.05% | 81 | 1.214 | 3.14
6 | Journal of Arts Management, Law, and Society | 11 | Q1 | 1.74% | 19 | 0.391 | 0.79
7 | Sustainability | 11 | Q1 | 1.74% | 129 | 0.612 | 3.48
8 | Cities | 10 | Q1 | 1.58% | 90 | 1.711 | 6.19
9 | Urban Studies | 8 | Q1 | 1.26% | 37 | 4.514 | 9.65
10 | Technological Forecasting and Social Change | 7 | Q1 | 1.10% | 117 | 2.226 | 9.01

AT = articles; % = contribution percentage; HI = H-Index; SJR = SCImago Journal Rank; IS = impact score

4.1.4 Authors' Contribution
This section reviews the 10 main researchers from a list of 1,000 authors who have contributed results on the orange economy to the development of this area of knowledge. The parameters taken into consideration are the number of publications, the number of citations, the country, the researchers' place of work, and the H-Index. The main author is Roberta Comunian of King's College London, with 13 publications and 394 citations of her creative economy papers. Her first work was published in the journal Regional Studies in 2010 together with Caroline Chapain (86 citations) and analyzes the factors that allow the creative industry to take off in two cities in England [7]. In 2020, she published her two latest joint investigations with colleagues: one dealing with how COVID-19 has affected the creative economy in the UK (81 citations) [25], and the second dealing with the employability of graduates within the creative industry and the "cultural" concept in Australia and the United Kingdom (61 citations) [26]. In second place is Carl Grodach of Monash University, Australia, with 7 publications and 229 citations. His first work was published in 2011 and his latest in 2020; his most cited work (91 citations) was presented in 2020 and maps patterns of companies and people working in the creative industry [27]. Finally, it can be noted that the main authors come from the United Kingdom and Australia, the countries that lead this type of research in the creative industry (Table 3).

Table 3. Top ten authors in creative economy studies

Rank | Author | Publications | Citations | Country | Institute/University | HI
1 | Comunian, R. | 13 | 394 | United Kingdom | King's College London | 27
2 | Grodach, C. | 7 | 229 | Australia | Monash University | 22
3 | Banks, M. | 4 | 205 | United Kingdom | University of Glasgow | 26
4 | Gibson, C. | 3 | 199 | Australia | University of Wollongong | 58
5 | Manville, M. | 1 | 199 | United States | University of California | 21
6 | Storper, M. | 1 | 199 | United Kingdom | London School of Economics and Political Science | 83
7 | Hesmondhalgh, D. | 2 | 180 | United Kingdom | University of Leeds | 45
8 | Cohen, R. | 1 | 155 | United States | Crystal Run Healthcare | 12
9 | DeNatale, D. | 1 | 155 | United States | Boston University | 10
10 | Markusen, A. | 1 | 155 | United States | University of Minnesota | 26

4.1.5 Frequently Cited Documents
The following table shows the articles with the most citations on the subject studied; this top ten accounts for a total of 1,313 citations out of an overall total of 3,348. The table reports the number of citations per article (TC) and the year of publication (Table 4).

Table 4. Most cited articles

Rank | Article title | Authors | Type of document | TC | Year
1 | Behaviour, preferences and cities: urban theory and urban resurgence | 73 | Article | 199 | 2006
2 | Looking for work in creative industries policy | 2 | Article | 173 | 2009
3 | Defining the creative economy: industry and occupational approaches | 46 | Article | 155 | 2008
4 | Creative small cities: rethinking the creative economy in place | 80 | Article | 140 | 2009
5 | Rethinking the creative city: the role of complexity, networks and interactions in the urban creative economy | 9 | Article | 135 | 2011
6 | Cultural tourism: a review of recent research and trends | 64 | Review | 122 | 2018
7 | Cultural clusters: the implications of cultural assets agglomeration for neighborhood revitalization | 72 | Article | 121 | 2010
8 | Academic publishing as 'creative' industry, and recent discourses of 'creative economies': some critical reflections | 26 | Article | 97 | 2004
9 | Enabling and inhibiting the creative economy: the role of the local and regional dimensions in England | 7 | Article | 86 | 2010
10 | The influence of tourism website on tourists' behavior to determine destination selection: a case study of creative economy in Korea | 8 | Article | 85 | 2015

The first article refers to the urban change of certain cities in developed countries from the point of view of the creative economy, tolerance and urban beauty; in addition, it proposes ideas to address urban growth patterns from an understanding of urban choice behaviors. This work was published in 2006 in the journal Urban Studies. The second article analyzes the creative industry in the United Kingdom from three angles: a utopian description of work in this sector; an evaluation of the places where these activities are carried out; and the political discourse used to encourage new ventures in the creative economy. This article was published in 2009 in the International Journal of Cultural Policy. Finally, the third article analyzes three previous works in this area of knowledge by studying distinctive political agendas and constituencies to determine how they affect the industries, companies and occupations of the creative economy. This work was published in 2008 in the journal Economic Development Quarterly.

4.2 Intellectual Structure Analysis
4.2.1 Author Keyword Co-occurrence Network
Keyword analysis is a technique that helps researchers discover patterns that identify research trends in specific areas [38]. The technique builds semantic visual maps that show the most important topics [39]. In this work, VOSviewer version 1.6.17 was used on the 633 articles, in which 1,629 author keywords were found. The minimum number of repetitions proposed by VOSviewer, co-occurrence at least 5 times, was used for the analysis, which yielded 58 repeated words grouped into 9 nodes (Fig. 2).


Fig. 2. Bibliometric map of author keyword co-occurrence
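For readers who want to reproduce the counting step behind a map like Fig. 2 outside VOSviewer, a minimal sketch follows; it assumes the export's author-keyword field is a single ";"-separated string per record (the field name, separator and file name are assumptions, not taken from the paper).

```python
# Minimal sketch of author-keyword co-occurrence counting (not the authors'
# actual pipeline, which used VOSviewer on the Web of Science export).
from collections import Counter
from itertools import combinations

import pandas as pd

clean = pd.read_csv("wos_export_clean.csv")  # assumed file name

keyword_count = Counter()
pair_count = Counter()
for cell in clean["Author Keywords"].dropna():
    kws = sorted({k.strip().lower() for k in cell.split(";") if k.strip()})
    keyword_count.update(kws)
    pair_count.update(combinations(kws, 2))  # one co-occurrence per article pair

# Keep only keywords occurring at least 5 times, the VOSviewer threshold used above.
frequent = {k for k, n in keyword_count.items() if n >= 5}
links = {p: n for p, n in pair_count.items() if p[0] in frequent and p[1] in frequent}
print(len(frequent), "keywords;", len(links), "co-occurrence links")
```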

The results are presented from the nodes with the largest number of co-occurrences to those with the smallest; VOSviewer represents them with circles of different colors and sizes. Table 5 shows the main keywords divided into two time periods, making it possible to identify which words are still in force and which are not. Words like "creative economy" (278) and "creative industries" (85) remain the main keywords associated with research in this area.

Table 5. Top 20 keywords (2002–2021)

Keyword | Occurrences | 2011–2021 | 2002–2010
Creative economy | 278 | 263 | 15
Creative industries | 85 | 76 | 9
Creativity | 38 | 38 | –
Cultural policy | 33 | 33 | –
Innovation | 29 | 29 | –
Creative city | 21 | 21 | –
Higher education | 16 | 16 | –
Creative class | 14 | 14 | –
Entrepreneurship | 14 | 14 | –
Culture | 13 | 13 | –
Creative industry | 12 | 12 | –
Cultural industry | 12 | 12 | –
Cultural economy | 11 | 11 | –
Design | 10 | 10 | –
Sustainability | 10 | 10 | –
Creative society | 9 | 9 | –
Economic development | 9 | 9 | –
Fashion | 9 | 9 | –
Human capital | 9 | 9 | –
Creative cities | 8 | 8 | –

The nodes are analyzed below in descending order:
1. Node 3 (blue) is composed of eight elements with 352 co-occurrences. The main words found are "creative economy", with 285 occurrences, which is the core of this structure, followed by "cultural policy" with 33 occurrences. This node studies the factors that encourage the development of the creative and cultural economy [36]; it questions the lack of attention of local governments in the United States to implementing policies that promote the development of the cultural economy [40], and gives the example of how cultural/creative economy policies have been implemented in Singapore since 1960 as a way to increase export value and attract foreign tourism [41].
2. Node 2 (green) is composed of ten elements with a total of 175 co-occurrences. The words identified were "creative industries" with 90 occurrences, followed by "creative class" with 21 occurrences and "economic development" with 11 occurrences. This node highlights the role of universities in offering degrees that help develop the creative industry [42], viewed from three angles: (1) cultural vitality, (2) the rural creative class and (3) creative rural economies in remote areas [43].
3. Node 1 (red) has eleven elements and a total of 102 co-occurrences. The main words in this group are "innovation" with 30 occurrences and "sustainability" with 12 occurrences. This node addresses questions about sustainability and how the D-DANP-mV model can support continuous-improvement strategies for sustainable development in creative cities [81].
4. Node 4 (yellow) is made up of six elements with 79 co-occurrences. The highlighted word is "creativity" with 66 occurrences; the others have very low counts. This node presents indicators and indices to measure the creative performance of so-called creative cities [45].
5. The remaining nodes are node 5 (purple), with 53 co-occurrences, which refers to economic support policies for the cultural and creative industry as job-creation options; node 8 (brown), with 51 co-occurrences, which deals with creativity in the tourism industry; node 6 (cyan), with 47 co-occurrences, which refers to entrepreneurship in art and culture; node 7 (orange), with 33 co-occurrences, which refers to academic preparation as part of the creative strategy; and finally node 9 (purple), with 23 co-occurrences, which refers to sustainable development within creative culture in European countries.

4.2.2 Cited Authors Co-citation Network
Author co-citation analysis is an intellectual-structure technique based on the premise that when two authors are frequently cited together, their fields of research are closely related [46]. This analysis was done with VOSviewer version 1.6.17 because it provides a strong visual component thanks to its mapping technique, VOS [47]. The results showed 15,035 cited authors; under the threshold of at least 20 citations per author (the VOSviewer default), 87 items were retained and grouped into 5 nodes, with a total link strength of 40,608. The analysis is presented in descending order (Fig. 3).

Fig. 3. Bibliometric map of co-citations of cited authors.
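The counting that underlies an author co-citation map like Fig. 3 can be sketched as follows; the input format (one list of cited first authors per analyzed article) is an illustrative assumption, not the paper's actual data layout.

```python
# Minimal sketch of author co-citation counting: two cited authors are linked
# whenever they appear together in the same article's reference list.
from collections import Counter
from itertools import combinations

# Each inner list holds the authors cited by one of the 633 analyzed articles
# (illustrative placeholder data).
cited_authors_per_article = [
    ["florida", "landry", "howkins"],
    ["florida", "pratt", "scott"],
    # ... one list per analyzed article
]

citations = Counter()
cocitations = Counter()
for refs in cited_authors_per_article:
    authors = sorted(set(refs))
    citations.update(authors)
    cocitations.update(combinations(authors, 2))

# Keep authors with at least 20 citations, the VOSviewer default noted above.
kept = {a for a, n in citations.items() if n >= 20}
links = {pair: n for pair, n in cocitations.items()
         if pair[0] in kept and pair[1] in kept}
```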


Nodes 1 and 2 are each made up of 32 authors. In node 1 (red), the main authors found are Scott (237 citations), who has works on urban growth and its relationship with the migration of skilled labor in search of better jobs [48], and on how globalization fosters the growth of city-regions [70]; Pratt (207 citations), who analyzes the economic crisis and the diversity of relations between the economy and creative cities [50], as well as the dynamism of culture as a form of resilience in times of austerity [62]; Markusen (177 citations), who criticizes the concept of the "creative class", asserting that occupations showing distinctive spatial and political leanings have nothing to do with creativity but with academic achievement [52]; and Peck (112 citations), who presents research on the consequences of applying creative policies and how these policies are adapted in different cities [53]. As can be seen, this cluster is closely related to issues of creativity and the policies that governments apply in cities.
In node 2 (green), Hesmondhalgh (132 citations) stands out, whose main works investigate the experiences and working conditions within the creative industry [54] and offer a critique of the insecurity, inequality and exploitation of creative workers in the United Kingdom [29]. Comunian (129 citations) focuses her research on the infrastructure that must exist within creative cities, the networks of professionals and the agents that interact for creative cultural development [32], and on the precariousness of workers before and after COVID-19 [25]. The main works of Throsby (98 citations) analyze cultural policy as an instrument of positive contribution to cultural goods and services [55], as well as the sustainable development of the creative industry [56]. This node focuses on the working conditions of creative workers and the infrastructure of the creative industry.
In node 3 (blue), 11 authors were found; Cooke (51 citations) and Lazzeretti (42 citations) stand out as the main actors due to their book on creative cities, which explores in depth the concepts of cultural economy and creative industry [57]. Additionally, the works of Glaeser (39 citations) are closely related to urban growth and the way in which cities contribute to urban economies [58], and to how big data contributes to better conditions of urban life [59].
In node 4 (yellow), 9 authors were found. The main ones are Florida (528 citations), a reference in these issues who focuses his work on the role that cities have in innovation and entrepreneurship in order to better understand economic development processes [60, 61], and who also investigates the economic geography and origins of the new "urban technology" companies built around shared rides, co-living, co-working, smart cities, and so on [62]; Howkins (131 citations), a reference in creative economy issues who proposes in his book that the cultural and creative industry, together with the concepts of the creative city and creative classes, can help transform the economies of many countries on the basis of innovation and creativity [63, 64]; and Landry (116 citations), another reference in the area, whose book "The Creative City" proposes the ordering and planning of cities based on the human element: creativity, innovation and cultural values [65]. As can be seen, this node contains the first and main thinkers of the creative economy concepts that appeared at the beginning of the new century.
Finally, in node 5 (purple) 3 authors were found. The main author is Gibson (104 citations), whose main works relate to the dissemination of the concept of the creative economy in different sectors in Asian and Oceanian countries [66, 67]; he defends the idea that people's creativity is found not only in large cities but also in small ones, which poses the challenge of economic development and urban regeneration [80].

4.2.3 Journal Co-citations
Co-citation analysis is defined as a process that shows how often two documents are cited together in later research [69]. These types of relationships are very important in revealing the various structures within different fields of study [46]. For this analysis, VOSviewer version 1.6.17 was used on 13,631 cited sources, from which 120 journals with at least 20 co-citations were retained, resulting in 5 nodes that are analyzed below (Fig. 4).

Fig. 4. Bibliometric map of journal co-citations

Node 1 (red) has 31 items with 1,250 citations. The main journals are the "Harvard Business Review" (67 citations), which belongs to the categories of business, management, accounting, economics, econometrics and finance; the "Academy of Management Review" (60 citations), which belongs to the business, management and accounting category; and the "American Journal of Sociology" (57 citations), which deals with social science topics. The first two journals coincide in business, management and accounting.
Node 2 (green) has 29 items with 2,435 citations. The main journals are "Urban Studies" (441 citations), whose categories are environmental science and social sciences; the "International Journal of Urban and Regional Research", which corresponds to the social sciences; and "Environment and Planning A", which belongs to the environmental and social sciences.
Node 3 (blue) has 27 items and 1,578 citations. The main sources are the "International Journal of Cultural Policy" (345 citations), whose area of study is social science; "Cultural Trends" (120 citations), whose areas are the arts, humanities and social sciences; and "Media, Culture & Society" (77 citations), in the social sciences.
Node 4 (yellow) has 26 items and 1,769 citations. The main sources are "Regional Studies" (279 citations), whose areas of study are the environmental and social sciences; the book "The Rise of the Creative Class" (262 citations); and the "Journal of Economic Geography" (168 citations), whose areas are economics, econometrics, finance and social sciences.
Node 5 (purple), with 7 items and 309 citations, is represented by the "Journal of Rural Studies" (64 citations), whose research areas are the agricultural, biological and social sciences; the "Cambridge Journal of Regions, Economy and Society" (61 citations), whose areas are economics, econometrics, finance and social sciences; and the "Creative Industries Journal" (59 citations), whose fields are the arts, humanities, business, management, accounting and social sciences.

5 Conclusions
The analysis carried out on the creative or orange economy leads to an evident result: its current importance and the boom this line of knowledge is experiencing in almost all the scenarios that contribute important "products" related to intellect, ideas, innovation, knowledge and culture. The statistics on the authors who have approached this line of research, on the countries, journals and citations in which the most work on the orange economy has been done, and on the most frequent topics and keywords point precisely to creativity, transdisciplinarity, entrepreneurship, sustainability, fashion, and human and intellectual capital; that is, to a holistic vision in which several lines of knowledge combine to achieve a single, more advanced and complete product.
These results will help future researchers to know the state of the art of this subject and can serve as a starting point for working on aspects that have been little studied, such as the empowerment of women within the context of the orange economy, as well as for understanding the trends studied and proposing new lines of research, especially in Latin American countries, where research and implementation lag far behind European and Asian countries.


References
1. Andres, L., Round, J.: The creative economy in a context of transition: a review of the mechanisms of micro-resilience. Cities 45, 1–6 (2015). https://doi.org/10.1016/j.cities.2015.02.003
2. Banks, M., Hesmondhalgh, D.: Looking for work in creative industries policy. Int. J. Cult. Policy 15(4), 415–430 (2009). https://doi.org/10.1080/10286630902923323
3. Bielański, M., Korbiel, K., Taczanowska, K., Pardo-Ibañez, A., González, L.-M.: How tourism research integrates environmental issues? A keyword network analysis. J. Outdoor Recreat. Tour. 37, 100503 (2022). https://doi.org/10.1016/j.jort.2022.100503
4. Bop Consulting: Mapping the creative industries: a toolkit. British Council (2010)
5. Carrión-Mero, P., Montalván-Burbano, N., Herrera-Narváez, G., Morante-Carballo, F.: Geodiversity and mining towards the development of geotourism: a global perspective. Int. J. Des. Nat. Ecodyn. 16(2), 191–201 (2021). https://doi.org/10.18280/ijdne.160209
6. Carrión-Mero, P., Montalván-Burbano, N., Morante-Carballo, F., Quesada-Román, A., Apolo-Masache, B.: Worldwide research trends in landslide science. Int. J. Environ. Res. Public Health 18(18), 9445 (2021). https://doi.org/10.3390/ijerph18189445
7. Chung, N., Lee, H., Lee, S.J., Koo, C.: The influence of tourism website on tourists' behavior to determine destination selection: a case study of creative economy in Korea. Technol. Forecast. Soc. Chang. 96, 130–143 (2015). https://doi.org/10.1016/j.techfore.2015.03.004
8. Comunian, R.: Rethinking the creative city: the role of complexity, networks and interactions in the urban creative economy. Urban Stud. 48(6), 1157–1179 (2011). https://doi.org/10.1177/0042098010370626
9. Cuadrado Roura, E., Lazzeretti, L.: Investigaciones Regionales (Spanish Association of Regional Science) 14, 213–215 (2009). http://www.redalyc.org/articulo.oa?id=28911696009
10. DCMS: Creative Industries Mapping Document. UK DCMS (1998)
11. Duxbury, N.: Cultural and creative work in rural and remote areas: an emerging international conversation. Int. J. Cult. Policy 27, 753–767 (2020). https://doi.org/10.1080/10286632.2020.1837788
12. Earnshaw, R.: Research and Development in the Academy, Creative Industries and Applications. Springer, Heidelberg (2017)
13. European Commission: Data on the cultural sector (2022). https://culture.ec.europa.eu/policies/selected-themes/data-on-the-cultural-sector
14. Ferreiro-Seoane, F.J., Llorca-Ponce, A., Rius-Sorolla, G.: Measuring the sustainability of the orange economy. Sustainability 14(6), 3400 (2022). https://doi.org/10.3390/su14063400
15. Flew, T.: International models of creative industries policy. In: The Creative Industries: Culture and Policy, pp. 33–58. SAGE Publications Ltd. (2012). https://doi.org/10.4135/9781446288412.n3
16. Gibson, C., Klocker, N.: The "cultural turn" in Australian regional economic development discourse: neoliberalising creativity? Geogr. Res. 43(1), 93–102 (2005). https://doi.org/10.1111/J.1745-5871.2005.00300.X
17. Glaeser, E., Huang, W., Ma, Y., Shleifer, A.: A real estate boom with Chinese characteristics. J. Econ. Perspect. 31(1), 93–116 (2017). https://doi.org/10.1257/jep.31.1.93
18. Grodach, C.: Cultural economy planning in creative cities: discourse and practice. Int. J. Urban Reg. Res. 37(5), 1747–1765 (2013). https://doi.org/10.1111/j.1468-2427.2012.01165.x
19. Herrera-Franco, G., Montalván-Burbano, N., Mora-Frank, C., Bravo-Montero, L.: Scientific research in Ecuador: a bibliometric analysis. Publications 9(4), 55 (2021). https://doi.org/10.3390/publications9040055
20. Herrera-Medina, E., Bonilla-Estévez, H., Fernando Molina-Prieto, L.: Creative cities: economic paradigm for urban design and planning? Bitácora Urbano Territorial 22(1), 11–20 (2013)
21. Howkins, J.: The Creative Economy: How People Make Money from Ideas. Penguin Books, London (2002)
22. Kong, L.: Ambitions of a global city: arts, culture and creative economy in "Post-Crisis" Singapore. Int. J. Cult. Policy 18(3), 279–294 (2012). https://doi.org/10.1080/10286632.2011.639876
23. Lazzeretti, L., Capone, F., Innocenti, N.: Exploring the intellectual structure of creative economy research and local economic development: a co-citation analysis. Eur. Plan. Stud. 25(10), 1693–1713 (2017). https://doi.org/10.1080/09654313.2017.1337728
24. León-Castro, M., Rodríguez-Insuasti, H., Montalván-Burbano, N., Victor, J.A.: Bibliometrics and science mapping of digital marketing. In: Rocha, Á., Reis, J.L., Peter, M.K., Cayolla, R., Loureiro, S., Bogdanović, Z. (eds.) Marketing and Smart Technologies. SIST, vol. 205, pp. 95–107. Springer, Singapore (2021). https://doi.org/10.1007/978-981-33-4183-8_9
25. Markusen, A.: Urban development and the politics of a creative class: evidence from a study of artists. Environ. Plan. A 38, 1921–1940 (2006). https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.461.9261&rep=rep1&type=pdf
26. Martin, R., Florida, R., Pogue, M., Mellander, C.: Creativity, clusters and the competitive advantage of cities. Compet. Rev. 25, 482–496 (2015)
27. McRobbie, A.: Fashion culture: creative work, female individualization. Fem. Rev. 71(1), 52–62 (2002). https://doi.org/10.1057/palgrave.fr.9400034
28. Montalván-Burbano, N., Velastegui-Montoya, A., Gurumendi-Noriega, M., Morante-Carballo, F., Adami, M.: Worldwide research on land use and land cover in the Amazon region. Sustainability 13(11), 6039 (2021). https://doi.org/10.3390/su13116039
29. Morante-Carballo, F., Montalván-Burbano, N., Carrión-Mero, P., Espinoza-Santos, N.: Cation exchange of natural zeolites: worldwide research. Sustainability 13(14), 7751 (2021). https://doi.org/10.3390/su13147751
30. Nobanee, H., et al.: Green and sustainable life insurance: a bibliometric review. J. Risk Financ. Manag. 14(11), 563 (2021). https://doi.org/10.3390/jrfm14110563
31. Osareh, F.: Bibliometrics, citation analysis and co-citation analysis: a review of literature. Libri 46(3) (1996). https://doi.org/10.1515/libr.1996.46.3.149
32. Phoong, S.Y., Khek, S.L., Phoong, S.W.: The bibliometric analysis on finite mixture model. SAGE Open 12(2) (2022). https://doi.org/10.1177/21582440221101039
33. Pico-Saltos, R., Carrión-Mero, P., Montalván-Burbano, N., Garzás, J., Redchuk, A.: Research trends in career success: a bibliometric review. Sustainability 13(9), 4625 (2021). https://doi.org/10.3390/su13094625
34. Pratt, A.C.: Beyond resilience: learning from the cultural economy. Eur. Plan. Stud. 25(1), 127–139 (2017). https://doi.org/10.1080/09654313.2016.1272549
35. Richards, G.: Cultural tourism: a review of recent research and trends. J. Hosp. Tour. Manag. 36, 12–21 (2018). https://doi.org/10.1016/j.jhtm.2018.03.005
36. Roodhouse, S.: The creative industries definitional discourse. In: Entrepreneurship and the Creative Economy. Edward Elgar Publishing (2011)
37. Sandri, S., Alshyab, N.: Orange economy: definition and measurement – the case of Jordan. Int. J. Cult. Policy 1–15 (2022). https://doi.org/10.1080/10286632.2022.2055753
38. Santana, P.S., Silveira, F.F.: Entrepreneurship in the creative industry: a bibliometric study. Rev. Adm. UFSM 12(1), 125–141 (2019). https://doi.org/10.5902/1983465917012
39. Stern, M.J., Seifert, S.C.: Cultural clusters: the implications of cultural assets agglomeration for neighborhood revitalization. J. Plan. Educ. Res. 29(3), 262–279 (2010). https://doi.org/10.1177/0739456X09358555
40. Storper, M., Scott, A.J.: Rethinking human capital, creativity and urban growth. J. Econ. Geogr. 9, 147–167 (2009). https://doi.org/10.1093/jeg/lbn052
41. Tepper, S.J.: Creative assets and the changing economy. J. Arts Manag. Law Soc. 32(2), 159–168 (2002). https://doi.org/10.1080/10632920209596971
42. The Policy Circle: The Creative Economy (2022)
43. UNESCO: Launch of the 2018 Global Report (2017)
44. Van Eck, N.J., Waltman, L.: Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84(2), 523–538 (2010). https://doi.org/10.1007/s11192-009-0146-3
45. Xiong, L., Teng, C.-L., Zhu, B.-W., Tzeng, G.-H., Huang, S.-L.: Using the D-DANP-mV model to explore the continuous system improvement strategy for sustainable development of creative communities. Int. J. Environ. Res. Public Health 14(11), 1309 (2017). https://doi.org/10.3390/ijerph14111309
46. Yi, X., Throsby, D., Gao, S.: Cultural policy and investment in China: do they realize the government's cultural objectives? J. Policy Model. 43(2), 416–432 (2021). https://doi.org/10.1016/J.JPOLMOD.2020.09.003
47. Zardo, J., Mello, R.: Responses from incubators to the creative economy. Braz. J. Oper. Prod. Manag. 15(3), 444–452 (2018)

Wearable Temperature Sensor and Artificial Intelligence to Reduce Hospital Workload

Luís B. Elvas4(B), Filipe Martins1, Maria Brites2, Ana Matias1, Hugo Plácido Silva2, Nuno Gonçalves3, João C. Ferreira4,5, and Luís Brás Rosário1

1 Faculty of Medicine, Lisbon University, Hospital Santa Maria/CHULN, CCUL, 1649-028 Lisbon, Portugal
2 Instituto de Telecomunicações, IST - Instituto Superior Técnico, Av. Rovisco Pais, n. 1, Torre Norte - Piso 10, 1049-001 Lisbon, Portugal
3 EMITU, Rua da Constituição 346, 4200-208 Porto, Portugal
4 Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, 1649-026 Lisbon, Portugal
[email protected]
5 INOV INESC Inovação-Instituto de Novas Tecnologias, 1000-029 Lisbon, Portugal

Abstract. Patient sensing and data analytics provide information that plays an important role in the patient care process. Patterns identified from data by Machine Learning (ML) algorithms can flag at-risk or abnormal patient data. Through automation, this process can reduce the workload of medical staff, as the algorithms raise alerts for possible problems. We developed an integrated approach to monitor patients' temperature, applied to elderly COVID-19 patients, together with an ML process that identifies abnormal behavior and alerts physicians. Keywords: Data analytics · Data transmission · Machine learning · Remote healthcare monitoring · Sensor · Wearable sensors

1 Introduction
The global COVID-19 pandemic, caused by the novel coronavirus SARS-CoV-2, brought unprecedented challenges due to severe acute respiratory syndrome. From the reported case series, it became evident that the elderly have an increased risk of mortality compared with adults under 50 years old [1]. The major symptoms of COVID-19 patients are fever and mild to moderate respiratory illness with silent hypoxia. Severe disease is more likely in older persons and those with underlying medical conditions including cancer, diabetes, cardiovascular disease, or chronic respiratory diseases [2]. In accordance with World Health Organization recommendations, vital signs such as heart rate, respiratory rate, temperature, and peripheral capillary oxygen saturation (SpO2) should be measured at various stages of COVID-19 illness. The COVID-19 pandemic thus sped up the use of the Internet of Things (IoT), consumer wearables and AI in this clinical setting.
Temperature readings taken with conventional devices at random intervals, such as when entering a workplace, are used to monitor the spread of SARS-CoV-2 in the general population [3]. However, the value of temperature measurement in disease prevention and containment is constrained by the low sensitivity of single-point measurements for identifying illness, particularly at its initial stage. Continuous recording from wearables with temperature sensors offers helpful contextual information, thus enhancing the value of temperature data in the identification of COVID-19 [3].
We developed an mHealth solution composed of an Internet of Things (IoT), real-time body temperature monitoring system based on a wireless sensor, mobile apps, gateways, and a cloud platform. The IP67 device transmits the data via Bluetooth; the data is stored in the cloud, on top of which we developed an AI algorithm that detects activity outside the normal range for a given person and triggers an alert for the health center, as shown in Fig. 1.

Fig. 1. System architecture

The device aims to serve as an early warning system for COVID-19 infections, or any other infectious disease, based on real-time sensing of vital signs, mobile devices, and Artificial Intelligence. This system allows the detection of the minimal physiological parameter changes that may be brought on by an infection such as COVID-19. In the future, we aim to develop a robust algorithm that delivers a real-time health map to healthcare authorities. Our system's sensor monitoring and indoor location capabilities may be combined, employing the sensors as tools for location tracking. Contact tracing can then be enabled to reveal the historical movements of possibly infected healthcare professionals, especially in a hospital setting. One of the objectives of this study is to automate communication between the user and the Portuguese National Health System (SNS), hence lowering the risk of COVID-19 spread. The project is being built in collaboration between the R&D ecosystem, a small and medium-sized enterprise (SME), and the SNS.

2 Literature Review
This literature review aims to present some of the innovative technological approaches in remote healthcare monitoring (RHM) for senior patients and COVID-19 patients, in order to foster innovation in enhanced technologies and frameworks for RHM. We conducted a systematic literature review using the PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [4]. Our choice of PRISMA follows the literature trend of using this method as a basis for reporting systematic reviews. To conduct the review, we considered the key research question: "What is the state of the art on the use of IoT and AI in remote monitoring systems for elderly and COVID-19 patients?". Following the PRISMA guidelines, we first performed a search for publications that have in their titles, abstracts or keywords the following terms, combined in the Boolean expression: (("data analytics" OR "machine learning" OR "artificial intelligence" OR "cloud-based" OR "IoT") AND ("elderly" OR "COVID-19") AND ("remote healthcare monitoring" OR "remote patient monitoring" OR "smart healthcare monitoring")). The process considered works from the last 5 years, published from 1 January 2018 until 5 November 2022. The literature search was conducted in November 2022 using the Scopus and Web of Science data repositories.
From these search keys, a total of 178 documents were retrieved, on which a manual cleaning process was performed. Duplicates, book chapters, review works and documents whose titles did not fit the search were eliminated; the abstracts of the remaining documents were then analyzed, and those that did not fit our inclusion criteria were also eliminated. For the selection of research articles, we used the following inclusion and exclusion criteria, illustrated in the sketch below. We included articles that: 1) proposed wearable IoT medical sensors and frameworks with applications in healthcare for senior and COVID-19 patients; 2) proposed big data analytics, AI and ML technologies; 3) were published in 2021 or 2022. We excluded articles that: 1) do not propose an IoT medical framework for senior or COVID-19 patients; 2) do not use wearable sensors; 3) were published between 2018 and 2020 and have fewer than 30 citations. The PRISMA flow diagram of the selection process is shown in Fig. 2.
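A minimal sketch of the eligibility rules just listed, applied to the 178 retrieved records, follows; the record fields are illustrative, while the year and citation thresholds follow the criteria stated in the text.

```python
# Sketch of the PRISMA eligibility filter described above. Field names are
# illustrative; the thresholds follow the stated inclusion/exclusion criteria.
def is_eligible(record: dict) -> bool:
    if not record["uses_wearable_sensors"]:   # exclusion criterion 2
        return False
    if record["year"] in (2021, 2022):        # inclusion criterion 3
        return True
    if 2018 <= record["year"] <= 2020:        # exclusion criterion 3
        return record["citations"] >= 30
    return False

retrieved_records = [
    {"year": 2021, "citations": 12, "uses_wearable_sensors": True},   # kept
    {"year": 2019, "citations": 8,  "uses_wearable_sensors": True},   # dropped
    {"year": 2018, "citations": 45, "uses_wearable_sensors": False},  # dropped
]
screened = [r for r in retrieved_records if is_eligible(r)]
```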

Fig. 2. PRISMA flow diagram for the articles’ selection process.

We next present a brief review of the topics in the selected articles that are most relevant to our scope.


Badr et al. [5] present a 12-lead real-time remote ECG (electrocardiogram) prototype patch that is attached to the patient's body, collects heart signals, performs binary classification of the ECG signals, and transmits the data to medical staff for further analysis. According to the authors, the remote ECG platform is optimized for power-constrained operation, providing up to 37 h of continuous 12-lead ECG streaming. Ishtiaque et al. [6] present a low-cost IoT system for the detection and management of arrhythmia and pneumonia occurrences. The developed wireless platform performs ECG, SpO2, and body temperature readings of patients and uploads the data to a web server, from which a web application sends it to medical staff. The web application implements a deep learning algorithm for the detection of arrhythmia and provides on-demand pneumonia detection from chest X-ray image classification. Altiparmak et al. [7] propose a home IoT framework using low-cost peripherals for monitoring SpO2 and blood pressure for COVID-19 patients staying in their homes. Their system design, implemented algorithms and devices are fully explained in their work. The article by Raposo et al. [8] presents e-CoVig, a novel mHealth application designed to register and transmit the heart rate, SpO2, body temperature, respiration, and cough of patients. The developed application includes a mobile application, a web-cloud platform and a low-cost device for the acquisition of heart rate, temperature and SpO2. This system was used in a real-world setting for the collective monitoring of COVID-19 patients and possibly infected ones. e-CoVig integrates BITalino, which allows the acquisition of additional physiological parameters, enabling it to be used for cardiac and other chronic patients. Ianculescu et al. [9] present a personalized remote monitoring system, RO-SmartAgeing, to address mild cognitive impairment (MCI), which may occur in older people. The system provides customized non-invasive remote monitoring of MCI-related parameters, health assessment and assistance in patients' homes, given a prearranged smart home environment. The authors integrated ML algorithms to classify and detect early MCI conditions and ECG anomalies and to detect accidental falls. Hamim et al. [10] propose IoT-based remote health monitoring of elderly patients in their homes. The presented work integrates an Arduino Uno and a Raspberry Pi, a heart pulse sensor for heart rate readings, a body temperature sensor and a galvanic skin response sensor to measure emotional arousal, a cloud storage database for gathering the acquired data, and an Android application that enables the screening of the elders' physiological parameters by medical staff. Hassan et al. [11] propose an intelligent hybrid context-aware model for patients under supervision at home, in particular the elderly, named IHCAM-PUSH, which adopts a hybrid architecture with both local and cloud-based components. The IHCAM-PUSH model creates a context-aware model through which the real-time health status of patients can be assessed. A case study on patients suffering from blood-pressure disorders was conducted. Taiwo et al. [12] propose a remote smart home healthcare support system (SheS) for monitoring the medical condition of a patient while at home. The system collects information on the physiological signals of the patient via wearable sensors and stores it in a cloud database. The authors developed an algorithm based on Hyperspace Analogue to Context (HAC) for service discovery and context change in the home environment, for accurate readings of physiological parameters. Lavric et al. [13] propose an integrated system for the management of COVID-19 patients, making use of a LoRaWAN (Long Range Wide Area Network) communication infrastructure. The implemented system allows the remote monitoring of the symptoms and health state of isolated or quarantined people. The authors claim it can be successfully implemented by local authorities to increase monitoring and consequently to better understand the spread of the novel coronavirus, given the scalability of their solution. Filho et al. [14] developed an IoT-based healthcare platform for the remote monitoring of COVID-19 patients in an Intensive Care Unit (ICU), integrating wearable and unobtrusive sensors to monitor patients' physiological parameters. The authors also present a real-world application involving the development and deployment of the remote monitoring solution for COVID-19 patients in an ICU in a Brazilian hospital. El-Rashidy et al. [15] were the first to provide an end-to-end deep-learning-assisted communication framework for COVID-19 disease management. The authors propose a web-based mobile clinical decision support system (CDSS) integrating wireless body area network sensors and cloud and fog frameworks for data transmission and storage that continuously monitors patients inside or outside hospitals. The authors also propose a deep learning model for COVID-19 detection based on chest X-ray images. Akhbarifar et al. [16] propose a comprehensive lightweight secure remote health monitoring model based on both cloud and IoT technologies, in which patients are remotely monitored by medical teams for the early diagnosis of possible critical conditions. The proposed system performs early diagnosis of combinations of hypercholesterolemia, hypertension, and heart disorders using machine learning methods. Firouzi et al. [17] provide a discussion of the contribution of digital technologies to tackling COVID-19. The authors cover several techniques to combat COVID-19: IoT for tracking and tracing and for remote patient monitoring using wearables, AI technologies for diagnosis and risk prediction of critical conditions in COVID-19 patients, and robotics and drone technology for crowd surveillance and essential supply delivery. This work also analyzes a real-life use case of an IoT solution in South Korea. Zhedi et al. [18] propose an IoT system for a relief supply chain network to address multiple suspected cases of virus infection during a pandemic outbreak, especially in agitated conditions such as the SARS-CoV-2 outbreak. The authors used meta-heuristic and hybrid optimization techniques to optimize the prioritization and allocation of ambulances. Both approaches were investigated and validated on several test problems and were applied in a real-world context in Iran. Ganji et al. [19] produced a study that analyzes users' perceptions and recommendations concerning IoT-based smart healthcare monitoring wearable devices, along with the effect of the COVID-19 pandemic, by implementing an artificial neural network to classify users' perceptions. Based on their findings, the authors indicate that self-comfort and trusted data from wearable IoT devices are the major priorities.
The review analysis allows us to recognize that the use of innovative technologies such as IoT frameworks, wearable sensors, big data analytics, AI and ML has indeed seen a rise in research, development and application in the healthcare sector, showing promising solutions in several aspects of healthcare, either in-hospital or out-of-hospital, but many challenges remain to be addressed.

3 Methods and Materials

According to recent guidelines for health and care professionals, the development and implementation of person-centered integrated care for older people (ICOPE), in which an integrated and person-centered strategy is required because reductions in intrinsic capacity are interconnected, is urgently necessary.


This research addresses various technological challenges in developing an AI-based solution for this topic, taking into account the state of the art: 1) integration of many data sources into a big-data repository for monitoring older people; 2) effective training of machine learning algorithms to categorize risk; and 3) presentation of meaningful and valuable data to decision-makers to promote enhanced treatments.

3.1 Data Collection

Data from several sensors was gathered for analysis, and two prototypes uniting various sensors into one device were created (see Fig. 3). These prototypes gather health information from test participants by continuously and wirelessly monitoring body temperature. One of the prototypes (Device 1, Fig. 3 (a)) is placed in the armpit and transmits data via Bluetooth; it features a CR2450 battery and is rated IP67, which ensures protection from contact with harmful dust and from immersion in water to a depth of up to 1 m for up to 30 min. The other prototype (Device 2, Fig. 3 (b)) also monitors body temperature continuously and wirelessly and features greenTEG's core body temperature sensor (which measures thermal energy transfer). It is positioned on the torso/chest area, about 20 cm below the armpit. The power supply is a rechargeable lithium-polymer battery, charged via USB. Its battery life can reach 60 days on standby and over 6 days with constant transmission, and it takes 2 h to fully recharge the device. This device is also rated IP67. The output data includes both skin temperature and core body temperature measurements.

Fig. 3. (a) Thermo.One device (b) CORE device.

The devices are configured to send the temperature measurements via Bluetooth to an antenna, which forwards the data to a cloud platform where they are stored and feed the AI algorithm. One measurement per minute is transmitted by each device. The devices have been tested under different conditions, including varied distances to the antenna and placements on the body. These tests are presented in Table 1 below.


Table 1. Performance of the data transmission according to the distance to the antenna and placement on the body site (left - below left armpit; right - below right armpit).

Both devices have additionally been compared to a digital multimeter (Tenma 727730), which measures temperature with a resolution of 0.1 ºC. The multimeter sampled every 10 s, and its measurements were averaged per minute. Table 2 summarizes the results. Table 2. Experimental results for the comparison between the data from the devices.
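As a short illustration (not taken from the study), the per-minute averaging used to align the 10-s multimeter samples with the devices' one-reading-per-minute output could be expressed as follows; the column names "timestamp" and "temp_c" are assumptions.

import pandas as pd

def to_minute_means(samples: pd.DataFrame) -> pd.Series:
    # Average 10-second temperature samples into per-minute values
    # (assumes a datetime "timestamp" column and a "temp_c" column).
    return samples.set_index("timestamp")["temp_c"].resample("1min").mean()

def mean_abs_error(device: pd.Series, reference: pd.Series) -> float:
    # Mean absolute difference between the two aligned per-minute series.
    aligned = pd.concat([device, reference], axis=1, join="inner")
    return float((aligned.iloc[:, 0] - aligned.iloc[:, 1]).abs().mean())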

3.2 Data Analytics and Machine Learning

The data of this research work was collected, stored and processed. We have been collecting data over the past year from patients at an eldercare center, and all of it has been classified by a health professional. All data processing is performed in the cloud, avoiding unnecessary computation on the device and allowing a longer battery lifetime. From the data we have gathered, we present graphics to the health professionals in order to give them an overall idea of the health status of each patient. Figure 4 shows the daily pattern of three different patients. For most eldercare patients, it is common for the temperature to drop during the morning period, since that is when they get out of bed and move into colder environments. These plots allow the medical staff to assess the health status of the patients more efficiently. Figure 5 shows the patients' pattern during the weekdays, where an overall rise from Sunday to Saturday is common.
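A hypothetical sketch of how daily-rhythm plots such as Fig. 4 could be produced from the per-minute cloud data follows; the DataFrame layout (a DatetimeIndex and one temperature column per patient) is an assumption, not the authors' implementation.

import matplotlib.pyplot as plt
import pandas as pd

def plot_daily_rhythm(temps: pd.DataFrame) -> None:
    # Average each patient's readings by hour of day and plot the daily curve.
    hourly = temps.groupby(temps.index.hour).mean()
    hourly.plot(xlabel="Hour of day", ylabel="Body temperature (ºC)")
    plt.show()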


Fig. 4. Daily rhythm of body temperature of three different patients.

Fig. 5. Weekly rhythm of body temperature of the three different patients.

Since our dataset is labelled with the days that represent out-of-pattern readings, we trained six classification algorithms to automatically identify which patients show unusual readings. From Fig. 6 (a), we can see that Random Forest achieved the best result, with 78.8% accuracy. A consequence of this classification is shown in Fig. 6 (b), which depicts the data of a patient whose temperature rises completely out of the pattern, generating an alert.
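A minimal sketch of this kind of model comparison is given below; the feature matrix, labels and the two candidate models shown are illustrative assumptions (the six algorithms actually compared are reported only in Fig. 6 (a)).

from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare_models(X, y):
    # Cross-validated accuracy for each candidate classifier.
    models = {
        "random_forest": RandomForestClassifier(),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    return {name: cross_val_score(model, X, y, scoring="accuracy").mean()
            for name, model in models.items()}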

Fig. 6. (a) Accuracy of the six classification algorithms. (b) Out of pattern temperature reading.

From this, an email is generated and sent to the health professional, notifying them that a certain patient is presenting values out of the ordinary, as depicted in Fig. 7.
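A minimal sketch of this alert step follows; the SMTP host, addresses and message wording are placeholders, not values from the deployed system.

import smtplib
from email.message import EmailMessage

def send_alert(patient_id: str, temp_c: float) -> None:
    # Notify the health professional of an out-of-pattern reading.
    msg = EmailMessage()
    msg["Subject"] = f"Abnormal temperature reading for patient {patient_id}"
    msg["From"] = "alerts@example.org"            # placeholder sender
    msg["To"] = "clinician@example.org"           # placeholder recipient
    msg.set_content(f"Patient {patient_id} reported {temp_c:.1f} ºC.")
    with smtplib.SMTP("smtp.example.org") as server:   # placeholder host
        server.send_message(msg)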


Fig. 7. Alert email sent to health professional given abnormal temperature reading.

4 Conclusions and Future Scope

Based on considerable preparatory work on integrating mobile sensor data, our objective in the pandemic was to build a remote monitoring unit. We were able to show, in the context of the first wave of the COVID-19 pandemic, that we can contribute digitally to the pandemic's short-term management and serve as a basis for possible future pandemic outbreaks. We are working on implementing a healthcare decision-support system and are still testing the algorithms' performance for abnormal temperature detection. The best ML model so far achieved an accuracy of 78.8%; we are collecting data about the algorithms' performance and trying different parameters to see how they respond to our data. Our system generates alerts to the doctor as soon as abnormal values are detected, and for medical reasons it is crucially preferable to have false alarms rather than false negatives; thus, one of the aims is to lower the detection threshold for abnormal temperature readings as much as possible, keeping sensitivity high. We must add that we are not replacing any medical staff or pre-established routines, that we derive temperature patterns for each patient individually, and that not every patient has the same baseline temperature, nor does fever occur at a predefined value. The alert is generated immediately so the patient can be evaluated by the medical staff accordingly. This early-stage solution keeps better track of the patients. Although this is a simple case of temperature measurement, the sensing approach can be replicated, and the technology can easily be extended to other clinics and hospitals. As a result, we will continue this work with the aim of building a healthcare decision-support system. Eventually, this work can evolve to involve pervasive continuous health monitoring for the treatment of other chronic illnesses. Acknowledgment. Project 'ReATeC - Remote Assessment and Telemonitoring of COVID-19 (2020)'. Financed by 'AAC 15/SI/2020. Sistema de Incentivos de Actividades de Investigação e Desenvolvimento e Investimento em Infraestruturas de Ensaio e Optimização (upscaling) no Contexto de Covid-19. Portugal 2020. I&D Empresas - COVID-19'.

References

1. Biswas, M., Rahaman, S., Biswas, T.K., et al.: Association of sex, age, and comorbidities with mortality in COVID-19 patients: a systematic review and meta-analysis. Intervirology 64(1), 36–47 (2021)


2. WHO Health topics: Coronavirus disease (COVID-19). https://www.who.int/health-topics/coronavirus#tab=tab_1. Accessed 21 Sept 2022
3. Hasselberg, M.J., McMahon, J., Parker, K.: The validity, reliability, and utility of the iButton® for measurement of body temperature circadian rhythms in sleep/wake research. Sleep Med. 14(1), 5–11 (2013)
4. Page, M.J., McKenzie, J.E., Bossuyt, P.M., et al.: The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372, n71 (2021)
5. Badr, A., Badawi, A., Rashwan, A., et al.: 12-lead ECG platform for real-time monitoring and early anomaly detection. In: 2022 IEEE 18th International Wireless Communications and Mobile Computing, 30 May 2022–3 June 2022, Dubrovnik, pp. 973–978 (2022)
6. Altıparmak, H., Kaba, Ş.: Remote patient monitoring during the COVID-19 pandemic in the framework of home device manufacturing for IoT-based BPM and SPO2 measurements. In: Al-Turjman, F., Rasheed, J. (eds.) FoNeS-IoT 2021. LNDECT, vol. 130, pp. 1–10. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-99581-2_1
7. Ishtiaque, F., Sadid, S.R., Kabir, M.S., et al.: IoT-based low-cost remote patient monitoring and management system with deep learning-based arrhythmia and pneumonia detection. In: 2021 IEEE 4th International Conference on Computing, Power and Communication Technologies, 24–26 September 2021, Kuala Lumpur, pp. 1–6 (2021)
8. Raposo, A., Marques, L., Correia, R., et al.: e-CoVig: a novel mHealth system for remote monitoring of symptoms in COVID-19. Sensors 21(10), 3397 (2021)
9. Ianculescu, M., Paraschiv, E.-A., Alexandru, A.: Addressing mild cognitive impairment and boosting wellness for the elderly through personalized remote monitoring. Healthcare 10(7), 1214 (2022)
10. Hamim, S., Paul, S., Hoque, S.I., et al.: IoT based remote health monitoring system for patients and elderly people. In: 2019 IEEE 1st International Conference on Robotics, Electrical and Signal Processing Techniques, 10–12 January 2019, Dhaka, pp. 533–538 (2019)
11. Hassan, M.K., El Desouky, A.I., Elghamrawy, S.M., et al.: Intelligent hybrid remote patient-monitoring model with cloud-based framework for knowledge discovery. Comput. Electr. Eng. J. 70, 1034–1048 (2018)
12. Taiwo, O., Ezugwu, A.E.: Smart healthcare support for remote patient monitoring during COVID-19 quarantine. Inform. Med. Unlocked 20, 100428 (2020)
13. Lavric, A., Petrariu, A.I., Mutescu, P.-M., et al.: Internet of things concept in the context of the COVID-19 pandemic: a multi-sensor application design. Sensors 22(2), 503 (2022)
14. Filho, I., Aquino, G., Malaquias, R.S., et al.: An IoT-based healthcare platform for patients in ICU beds during the COVID-19 outbreak. IEEE Access 9, 27262–27277 (2021)
15. El-Rashidy, N., El-Sappagh, S., Islam, S.M.R., et al.: End-to-end deep learning framework for coronavirus (COVID-19) detection and monitoring. Electronics 9(9), 1439 (2020)
16. Akhbarifar, S., Javadi, H.H.S., Rahmani, A.M., et al.: A secure remote health monitoring model for early disease diagnosis in cloud-based IoT environment. Pers. Ubiquit. Comput. 1–17 (2020). https://doi.org/10.1007/s00779-020-01475-3
17. Firouzi, F., Farahani, B., Daneshmand, M., et al.: Harnessing the power of smart and connected health to tackle COVID-19: IoT, AI, robotics, and blockchain for a better world. IEEE Internet Things J. 8(16), 12826–12846 (2021)
18. Zahedi, A., Salehi-Amiri, A., Smith, N.R.: Utilizing IoT to design a relief supply chain network for the SARS-COV-2 pandemic. Appl. Soft Comput. 104, 107210 (2021)
19. Ganji, K., Parimi, S.: ANN model for users' perception on IoT based smart healthcare monitoring devices and its impact with the effect of COVID 19. J. Sci. Technol. Pol. Manag. 13(1), 6–21 (2022)

A Recommender System to Close Skill Gaps and Drive Organisations' Success

E. Luciano Zickler(B), Susana Nicola, and Nuno Bettencourt

Interdisciplinary Studies Research Center, Institute of Engineering of Porto, Polytechnic of Porto (ISRC/ISEP/IPP), Porto, Portugal
{1210095,sca,nmb}@isep.ipp.pt

Abstract. Fast-paced technological and social change is leading to widening skill gaps in the workforce, posing a significant risk for organisations. To bridge them, it is important to find and promote hidden champions. This paper presents a conceptual framework for designing a Talent Recommender System (TRS) that finds hidden talents and matches them to the right task. This is achieved through the innovative approach of combining ontology and clustering techniques to build a comprehensive skill ontology. Another novelty resides in using the skill ontology together with Collaborative Filtering (CF), which can result in more serendipitous task recommendations when compared to existing methods of skill matching.

Keywords: Skill matching · Talent marketplace · Hybrid recommender systems · Artificial intelligence · Skill ontology

1 Introduction

Massive skill shortages, especially relating to highly qualified jobs, have led to the highest global talent shortage in the last sixteen years [1]. The accelerated discovery and use of new disruptive technologies, and changes in demographics and socioeconomic preferences, are leading to a new reality of work. A redefinition of jobs is underway, driven by task automation, the gig economy (the rise of freelancers) and the transition into remote and flexible work [2]. The transition from static to project-based work makes it increasingly hard to find the right person for the job, as job requirements change more often than with static roles. These transitions, together with highly competitive global markets, can cause vast skill gaps in organisations, which are detrimental to their productivity and efficiency. In fact, organisations worldwide see top-talent attraction and retention as their most important internal issue, with almost 70% stating that neither the quantity nor the quality of candidates is enough to ensure future business success [3]. Most competencies and knowledge of workers become obsolete after less than five years, with average skill "half-lives" of just over two and a half years for technical skills [4]. The discrepancy between increasing skill gaps and dwindling prospects for most workers can be illustrated by the German car industry.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 806–815, 2023. https://doi.org/10.1007/978-3-031-27499-2_74


The IFO Institute forecasts that in Germany, almost one third of the 600 000 employees currently working on gasoline/diesel cars will be out of a job by 2025, while over 40 000 skilled workers are currently needed in this same industry [5]. In response to these challenges, organisations are starting to implement reskilling and upskilling programmes and to search for hidden talent within the organisation. Indeed, 80% of executives believe that at least half of new roles should be filled internally instead of through new hires, with only very specialised skills being sought in the external labour market [6]. Additionally, low motivation and high attrition rates further increase perceived skill gaps and organisations' vulnerability. According to Gallup, fewer than 20% of employees in the EU are engaged in their job and almost 15% are actively disengaged [7]. Decisions on what to learn, where to work and which projects and goals to invest our time and effort in are difficult, as we rely on complex and incomplete information. Furthermore, the decision on which project to work on usually depends on other people. This leads to difficulties in realising individuals' full potential, biased selection processes, and people's needs and motivations not being considered. How do we make sure the right individual is working on the right project? How do we organise our most valuable resource, our human capital, while guaranteeing a fair, transparent, and efficient process? Most lighthouse organisations use Information Systems to face these challenges, as the amount of data and information exchanged grows exponentially. Nowadays, Human Capital Management (HCM) suites are used by most organisations, usually operating as cloud solutions. Core functionalities of these services are HR administration, talent management, and workforce management. The most effective HCM systems for closing present skill gaps in organisations are talent marketplaces. When empowered by AI methods and adequate data, these systems can find and promote hidden champions in talent pools and deal with bias in selection processes [8]. The efficiency of a given talent marketplace depends on its TRS, which is responsible for matching talents with opportunities. Two Research Questions (RQs) arise in developing a conceptual framework for an efficient TRS design: RQ1: What tools and technologies employed in today's Recommender Systems (RSs) are most adept at dealing with skill matching? RQ2: How can a TRS capable of closing the skill gaps be implemented? In Section two the state of the art of RS is presented. In Section three the conceptual framework for the TRS design can be found, including the implementation context and the research methodology to be employed. Section four corresponds to the implementation of the proposed TRS. In Section five the merits of the TRS design are discussed. Finally, a conclusion is presented.

2 State of the Art

In this section, the state of the art of RS, especially hybrid RS and related work on skill matching, is presented, so as to motivate the development of the TRS.

2.1 Hybrid Recommender Systems

RS have become increasingly important over the years, as large amounts of data and choices must be considered. RS were developed as tools to filter and make suggestions based on user preference [9]. There are three types of RS: CF, content-based and hybrid. CF has been successfully applied to predict consumer choice and user preferences; many online services, like Amazon or Netflix, use some form of CF for content delivery. It consists of making recommendations based on similarities between users or between the items that users interact with. These interactions are stored in the user-item matrix M, where users are assigned columns J and items I are assigned rows. Every entry m_ij is either a decision function with binary output, a probability, or a rating corresponding to a specific user j's interaction with item i. There are two methods of implementing CF: memory-based, where M needs to be placed into memory to compute similarities and find neighbours of the active user, and model-based, where offline predictive models are generated from the ratings of M [10]. The drawbacks of both CF and content-based methods are the cold-start problem (not having enough information at the beginning to form valuable predictions), sparsity and scalability. The state of the art of RS corresponds to hybrid methods. Combining content-based methods with CF or other Artificial Intelligence (AI) approaches tackles the problems often associated with using these approaches on their own. [10] found that applying item-based CF together with ontologies and dimensionality reduction techniques, on sparse benchmarked real-life datasets, improved prediction accuracy and time complexity (scalability) notably when compared to using these methods on their own. The dimensionality-reduction techniques used were Expectation Maximization (EM) clustering and non-incremental Singular Value Decomposition (SVD). In a publication in the journal Sensors [11], data-driven and knowledge-driven methods were successfully merged to reflect the dynamic behaviour of the domain knowledge. Neural CF (NCF) was merged with ontologies to reflect both the statistical and semantic nature of the studied domain, an online retailer, achieving higher accuracy than the stand-alone methods and being able to continuously update the ontology. NCF is a model-based CF that uses neural networks to find the underlying latent factors describing M. Because NCF's statistical models are independent of the item-to-user semantic relationships, which have a complexity O(n · |M|), this hybrid RS is scalable and effective in handling missing data.
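As a concrete illustration of the memory-based variant described above, the following sketch predicts an entry m_ij of M from the k most similar users; the cosine-similarity choice, the value of k and the dense NumPy layout are illustrative assumptions, not prescribed by the cited works.

import numpy as np

def predict_rating(M: np.ndarray, item: int, user: int, k: int = 5) -> float:
    # M has items as rows and users as columns, as in the text.
    target = M[:, user]
    # Cosine similarity between the active user's column and every column.
    norms = np.linalg.norm(M, axis=0) * np.linalg.norm(target) + 1e-9
    sims = (M.T @ target) / norms
    sims[user] = -np.inf                      # exclude the active user
    neighbours = np.argsort(sims)[-k:]        # k most similar users
    weights = sims[neighbours]
    # Similarity-weighted average of the neighbours' ratings for this item.
    return float(M[item, neighbours] @ weights / (weights.sum() + 1e-9))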

2.2 Related Work on Skill Matching

Previous studies have focused on using RS to Predict Student Performance (PSP) [12] and for job matching [13]. It has been found that using Bayesian knowledge tracing together with CF for PSP outperforms those methods used on their own [12].


AI has also been used to scout human capital and fight skill mismatch. [14] presented a tool that helps predict future skill needs through the mapping of expertise data and subsequent analysis using AI. Creating semantics or ontologies from these mappings is essential for job-to-member matching [15]. Maurya and Telang proposed a hybrid RS for recommending new skills for members to learn, bridging the gap between skill supply and labour-market demand [15]. The RS made use of Bayesian multi-view models for member-job clustering and applied CF subsequently. Members and jobs are part of the same cluster when they share a subset of common features; jobs in that cluster are deemed relevant for any member in the cluster. To co-partition (share the views of) the member and job features into shared clusters, Gibbs sampling is used with a conjugate Dirichlet prior. Skills are then recommended using matrix-factorisation CF techniques, which perform well on sparse datasets and are scalable because of the generalisation of latent factors. There are several tools already on the market that recommend skills to members (users), use AI, and can be embedded into HCM systems. Tandemploy follows a knowledge-based (semantic) approach, whereby matching is based on ontologies that encode the semantic relationships between skills and tasks; a rule-based inference engine then classifies employees on how well they match task requirements. Kalido.me centres its search engine for finding opportunities in the talent marketplace on a competency score, which is based on skill assessments. The skill assessment uses text mining and proximity-similarity techniques to predict a given user's skill competency level [16].

3 Proposed Conceptual Framework for the TRS

Internal talent marketplaces tackle the problem of skill mismatch in organisations. For them to be effective and match the right collaborators to the right task or project, they need an efficient TRS to find candidates in the talent pool. Talent marketplaces should aim to increase an organisation's human-capital capacity and collaborator engagement by including members that would otherwise not be considered, either because of a lack of specific skills or because those skills are not sufficiently evidenced in the corresponding talent pools (e.g., due to outdated or disorganised data, geographic distance, or inefficient internal recruiting processes). The conceptual framework for the TRS design and evaluation needs to consider the talent-marketplace context where it will be deployed (cf. Sect. 3.1). This involves multiple areas of science, and thus a flexible and reliable research methodology that deals with real-life applications is needed (cf. Sect. 3.2).

3.1 TRS in the Context of a Talent Marketplace

On the organisations' talent marketplace (Fig. 1), skills are in demand and projects are on offer, allowing collaborators to work on the projects of their choice.


At the heart of the talent marketplace, the TRS identifies hidden competencies in talent pools and matches projects to specific skill profiles (Fig. 1). It makes use of ontologies and CF, integrating hard and soft skills as well as Personal Preference Indicators (PPI), delivering user-tailored project recommendations. PPIs could include employee career goals, measures of engagement, type-of-work preferences (e.g., remote, on-site, project-based), and time preferences (e.g., full-time, half-time, occasional), among others.

Fig. 1. Talent marketplace overview. Skills are in demand and certain skill profiles are matched with specific project requirements. Members get rewarded with continuous learning for working on a project.

The TRS should be able to make novel, meaningful recommendations to the collaborator, finding hidden talents. As an example, imagine a recruit working in customer support whose previous work experience includes waiting tables at a chain restaurant. The TRS recommends the recruit for an emergency customer-support task, considering that waiters at big chains are accustomed to multitasking, dealing with unexpected situations (e.g., rush hours, special requests, drunken customers), and performing under pressure while following strict company standards. Individual and organisational potential is maximised by this approach. Employees have a vested interest in using the talent marketplace, as a reward system is put in place for project work and for progress on their learning path (Fig. 1).


As a reward for working on projects, employees receive platform currency that they can redeem for courses. Furthermore, employees are intrinsically motivated by choosing their own learning paths. Learning paths are offered according to career paths and the organisation's strategic goals, so that employees learn key competencies critical to the organisation's success. Continuous learning and skill matching bridge skill gaps and ensure a culture of lifelong learning within an organisation, building up its capacities. As companies can plan their future according to the workforce's selected learning paths and current skills, organisations can aim for sustained, inclusive, and sustainable growth. Ultimately, stakeholders' preferences and motivations decide on future projects and critical competencies inside an organisation: strategic objectives are thus driven by stakeholder interest.

3.2 Research Methodology

The Design Science Research Methodology (DSRM), pertaining to the role of digital innovation in research [17], with Peffers' nominal process sequence as the research-plan canvas [18], is employed to validate and evaluate the TRS. DSRM has the benefit of building from the practical domain (a business or real-life application) to design and develop an artefact that adds value to the domain's knowledge base; in this case, the domains are HCM, organisational psychology, computer science (specifically AI and data science), sociology and business administration. Using this methodology, the design and implementation of the TRS can be broken into stacked cycles that advance the solution in iterative steps towards a real-life application. Each innovation cycle can be traced and evaluated with its corresponding demonstration, providing novel domain knowledge. Key TRS metrics of interest for organisations are established: organisations want explainable and reliable results. For skill gaps to close, the number of candidates found for a given project and the number of non-trivial (unexpected but relevant) matches are most important. This can be measured through the novelty and serendipity RS metrics. Novelty relates to the concept of showing non-familiar, less frequent recommendations [19], while serendipity is the discovery of an interesting recommendation that the user would otherwise not have found (Relevant ∩ Unexpected ∩ Novel). For the technical design and development of the TRS, the Cross Industry Standard Process for Data Mining (CRISP-DM) [20] will be used inside the DSRM, thus leveraging both methodologies to achieve an artefact that can be evaluated both contextually and technically. For building the skill ontology (see Sect. 4), ontology reuse and extension methodologies, as in the NeOn methodology [21], are applied.
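A small sketch of the serendipity measure as defined above (Relevant ∩ Unexpected ∩ Novel) follows; the representation of the four sets, assumed to come from an offline evaluation, is illustrative.

def serendipity(recommended: set, relevant: set,
                expected: set, familiar: set) -> float:
    # Fraction of recommendations that are relevant, unexpected and novel.
    hits = (recommended & relevant) - expected - familiar
    return len(hits) / len(recommended) if recommended else 0.0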

4 Design and Implementation of TRS

The TRS is divided into two phases to reduce online runtime. Figure 2 presents an overview of the process flow that produces project-to-collaborator recommendations. The offline or training part of the TRS consists of building a dynamic skill ontology which can assign new skills to a collaborator's ontology instance. In the online part, the matching between projects and active users takes place.

Fig. 2. TRS flow and processes. Steps 1–5 take place offline, A and B online.

Organisations often have incomplete and unstructured data relating to their projects and collaborator skill profiles. Because of that, the data is harmonised with a standard vocabulary and structure, using a skill taxonomy. This taxonomy is constructed by reusing and extending open-source ontologies. The Burninglass Ontology for skills and jobs [22], accessible through an API (https://api.lightcast.io/apis), is updated weekly and has over 17 000 unique skill identifiers. The O*NET and ESCO ontologies can be used and extended with the python skills-ml library [23], which uses the CTDL-ASN format for competencies [24] in JSON-LD. Further extension of the skill taxonomy, especially the association of tasks with skills, is achieved through the O*NET database [25], which follows the 2018 Standard Occupational Classification (SOC). Ontologies are exported and extended in the OWL language using WebProtégé [26]. The skill taxonomy is constantly updated when new rules and skill classifications are found at the end of the offline process, e.g., when a new skill cluster C_j is found and it is inferred that all members J in C_j have a certain skill S_j.


In the second step, these rules (e.g., if J_i is part of C_j then J_i hasSkill S_j) are applied to the ontology's instances. The result is the member-skill matrix M, with m_js some metric for a given member and skill. The third step consists of static blocking, whereby M is split into smaller matrices M'. Stratification can occur by filtering by knowledge area or by interest, so that only comparable members are clustered together in the next step. Filtering by interest is promising because new interdisciplinary clusters can be found, with the collaborator guiding the retrieval of projects relevant to them. In step four, hierarchical clustering is applied to M'. It is crucial to find the right number of clusters K, as too many clusters significantly increase resource expenditure and can lead to overfitting, while too few result in poor recommendations. A weighted multi-objective optimisation method is proposed, whereby the clustering ought to improve some metric of accuracy, novelty and serendipity. These metrics are also used in [19, 27]. The previous steps are also applied to the project data, whereby M corresponds to the task-skill matrix, as projects are broken down into tasks T that have associated skills. After step four there are clusters of tasks CT and of collaborators CJ that group them by similarity. Lastly, step five builds a skill ontology that transcends the skill taxonomy in that it takes in new-found rules for cluster and skill attribution, as well as direct member-to-task matching rules. These rules can immediately be applied in the online stage, allowing for quick direct matches and efficient, more in-depth searches or indirect matching. Rules for skill attribution or new skills can be thought of as association rules inside clusters. The a priori algorithm, with lift as metric, is used inside clusters to find rules of interest. To be incorporated into the ontology, further scrutiny is required: semantic similarity between skills is proven through the number of shared parent classes in the skill taxonomy. Because some parent classes might not be of relevance, the Pearson coefficient is used as a similarity measure between skills. The online part of the TRS focuses on the recommendation itself. In step A, collaborator and project data are passed through the ontology, assigning skills and the corresponding task or skill clusters. In step B, a direct match is made if there is any candidate that has all the skills required by a given project's tasks. Possibly the ideal candidate did not have all required skills in their skill profile before going through the ontology; this is because the ontology might have applied an automatic rule to assign a skill due to cluster membership, e.g., a member is in the engineering cluster, so analytical thinking is inferred as a skill. It might also be the case that there is no ideal candidate; then an in-depth search is started to rank the collaborators nearest to the ideal candidate. If there are candidates in the ideal candidate's cluster CJ_i, then skill-based CF with Pearson's coefficient as similarity measure is used to find the nearest candidates in the cluster. If there is no one in CJ_i, then the closest cluster is chosen and CF applied.
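The following sketch illustrates two of these building blocks: the hierarchical clustering of a member-skill matrix (offline step 4) and the Pearson-based ranking used in the online in-depth search. The matrix layout (members as rows, skills as columns), the Ward linkage and all names are assumptions of this sketch, not the authors' implementation.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import pearsonr

def cluster_members(M: np.ndarray, k: int) -> np.ndarray:
    # Hierarchically cluster the member-skill matrix M' into k clusters.
    return fcluster(linkage(M, method="ward"), t=k, criterion="maxclust")

def rank_candidates(M: np.ndarray, ideal: np.ndarray,
                    labels: np.ndarray, cluster: int) -> list:
    # Rank members of one cluster by Pearson similarity to the ideal profile.
    members = np.flatnonzero(labels == cluster)
    scores = [(int(j), pearsonr(M[j], ideal)[0]) for j in members]
    return sorted(scores, key=lambda s: s[1], reverse=True)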

5 Discussion

The novelty of the presented TRS design lies in the combination of semantic and statistical AI methods for the problem of skill matching, namely the use of ontologies and hierarchical clustering to build a dynamic skill ontology. Because hybrid RS generally outperform stand-alone models in the literature, it is expected that the TRS would achieve greater accuracy than solutions available in the market, given the sparse nature of skill and project data in organisations. Another advantage is that the skill ontology, when applied as a direct match in the online phase, is much faster than CF techniques such as [16], as it works as a semantic look-up table. When confronted with the problem of new items without ratings, the skill ontology is effective as a heuristic for generating good recommendations regardless. The design also uses the skill ontology together with CF as a way of optimising the novelty and serendipity measures. In [27], combining machine learning with an ontology for course recommendations led to the improvement of these measures. Novel and unexpected results arise because the underlying skill ontology depends not only on semantic relationships between skills, but also on new-found heuristics in the form of intra- and inter-cluster relationships. In addition, the search can be further expanded in the online phase through CF over relevant clusters of peers, inferring matches without direct evidence of a skill. Because of the use of an ontology with rules and relationships, the results should remain easily explainable.

6 Conclusion

In response to RQ1, the literature on skill-to-job matching (recommendations) has been presented. The state of the art is that hybrid RS outperform ontologies or CF used on their own. Therefore, mixing ontologies and hybrid CF is the best way forward, as it brings semantic and statistical methods together. For RQ2, the DSRM was chosen as the conceptual framework for the design and implementation of the TRS, providing a multidisciplinary approach to the evaluation of the artefact. The perceived value of the TRS design resides in offering alternative technologies that combine into an innovative solution for skill matching, focusing on organisations' needs from a scientific viewpoint.

References

1. ManpowerGroup: ManpowerGroup Employment Outlook Survey, Q4 2022 (2022)
2. Stockton, H., Filipova, M., Monahan, K.: The evolution of work - seven new realities. Deloitte Insights (2018)
3. Sage-Gavin, E., Hines, K., Fuller, J.: Is the talent you need hiding in plain sight? Accenture Strategy Consult. (2021)
4. McGowan, H.E.: Human capital era reality: the skills gap may never close. Forbes (2021)


5. Falck, O., Czernich, N., Koenen, J.: Effects of the increased production of electric vehicles on employment in Germany. Ifo Institute (2021)
6. Ellingrud, K., Gupta, R., Salguero, J.: Building the vital skills for the future of work in operations. Opera (2020)
7. Sinyan, P., Nink, M.: How European companies can fix their workplaces. Workplace (2021)
8. Field, E., Hancock, B., Schaninger, B.: Stave off attrition with an internal talent marketplace. McKinsey & Company (2022)
9. Resnick, P., Varian, H.R.: Recommender systems. Commun. ACM 40(3) (1997)
10. Nilashi, M., Ibrahim, O., Bagherifard, K.: A recommender system based on collaborative filtering using ontology and dimensionality reduction techniques. Expert Syst. Appl. 92 (2018)
11. Alaa, R., Ahmed, E.D., Fernández-Veiga, M., Gawich, M.: Neural collaborative filtering with ontologies for integrated recommendation systems. Sensors (2022)
12. Abdi, S., Khosravi, H., Sadiq, S.: Predicting student performance: the case of combining knowledge tracing and collaborative filtering. In: EDM 2018 (2018)
13. Mishra, R., Rathi, S.: Efficient and scalable job recommender system using collaborative filtering. In: LNEE, vol. 601 (2020)
14. Ketamo, H., Passi-Rauste, A., Vesterbacka, P., Vahtivuori-Hänninen, S.: Accelerating the nation: applying AI to scout individual and organisational human capital. In: ICIE (2018)
15. Maurya, A., Telang, R.: Bayesian multi-view models for member-job matching and personalized skill recommendations. In: 2017 IEEE BigData (2017)
16. Varma, S., Sologar, A.P.: Systems and methods for dynamically identifying and presenting matching user profiles to a user. [Patent] US10580091B2 (2020)
17. Hevner, A., vom Brocke, J., Maedche, A.: Roles of digital innovation in design science research. BISE 61(1) (2019)
18. Peffers, K., et al.: The design science research process: a model for producing and presenting information systems research. In: DESRIST 2006 Proceedings (2006)
19. Vargas, S., Castells, P.: Rank and relevance in novelty and diversity metrics for recommender systems. In: RecSys 2011 (2011)
20. Wirth, R., Hipp, J.: CRISP-DM: towards a standard process model for data mining. In: Proceedings of KDD-98 (2000)
21. Suárez-Figueroa, M., Gómez-Pérez, A., Fernández-López, M.: The NeOn methodology for ontology engineering. Comput. Sci. (2017)
22. Frank, M.R., et al.: Toward understanding the impact of artificial intelligence on labor. In: PNAS (2019)
23. Crockett, T., Lin, E., Gee, M., Sung, C.: Skills-ML: an open source python library for developing and analyzing skills and competencies from unstructured text. Center for Data Science and Public Policy, The University of Chicago (2018)
24. Credential Engine: Credential Engine Registry | CTDL profile of Achievement Standards Network Description Language schema metadata. https://credreg.net/ctdlasn
25. Employment and Training Administration, U.S. Department of Labor: O*NET 27.0 Database at O*NET Resource Center, O*NET (2019)
26. Tudorache, T., Nyulas, C., Noy, N.F., Musen, M.A.: WebProtégé: a collaborative ontology editor and knowledge acquisition tool for the web. Semant. Web 4(1) (2013)
27. Urdaneta-Ponte, M.C., Méndez-Zorrilla, A., Oleagordia-Ruiz, I.: Lifelong learning courses recommendation system to improve professional skills using ontology and machine learning. Appl. Sci. 11(9) (2021)

LEEC: An Improved Linear Energy Efficient Clustering Method for Sensor Network

Virendra Dani1(B), Radha Bhonde2, and Ayesha Mandloi3

1 Computer Science and Engineering Department, Shri Vaishnav Vidyapeeth Vishwavidyalaya, Indore, India
[email protected]
2 Computer Science and Engineering Department, SAGE University, Indore, India
3 Computer Science and Engineering Department, Shivajirao Kadam Institute of Technology and Management, Indore, India

Abstract. Wireless sensor networks are made up of numerous small, resource-constrained sensor nodes that are placed in specific locations for a variety of applications that need long-term, unattended operation. Because sensor nodes have limited energy resources, efficient energy usage is a key design consideration, and energy efficiency must be pursued from the hardware up to the network protocol layers. In this context, node clustering is a practical method for lowering node energy consumption and extending the life of the network, and implementing an energy-efficient cluster-based protocol is becoming important for prolonging WSN lifetimes. This paper introduces an improved energy-efficient clustering method to reduce energy utilization. We propose LEEC, an improved Linear Energy Efficient Clustering routing protocol, which is based on the energy, connectivity and buffer size of the nodes deployed in the network. Energy-focused QoS criteria are chosen in this context to assess performance. We compare our work to earlier methods, and the findings show that the suggested mechanism is very successful at clustering in WSNs.

Keywords: WSN · Routing protocol · Clustering · Cluster-head · Energy · QoS · Network nodes · Simulation

1 Introduction

Many developing and future enabling technologies fulfill the needs of ubiquitous communication networks in the era of New Generation Networks (NWGN). Wireless Sensor Networks (WSNs) are an important subset of these cutting-edge technologies; their primary objectives are energy efficiency and well-organized data aggregation. A wireless sensor network is made up of sensor nodes that are linked in an ad hoc manner and spatially spread at random, where the data gathered at each node is expected to be delivered to a sink node, from which the aggregated data can be sent via a gateway to the monitoring stations of a given application. A sensor is a piece of electrical equipment used to track changes in the environment [1, 2].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 816–825, 2023. https://doi.org/10.1007/978-3-031-27499-2_75


1.1 Sensor Network

The ad hoc networking of numerous wireless devices will enable a variety of applications that are not possible with the traditional base-station-to-network-node communication architecture. Due to their lack of infrastructure and their low-cost, on-demand deployment, ad hoc networks are excellent candidate solutions for both military and civilian applications such as target identification and tracking, emergency rescue operations, patient monitoring, and environmental management [3, 4].

Fig. 1. Wireless sensor network view

A schematic view of a wireless sensor network is shown in Fig. 1. The WSN is made up of "nodes," which might number in the hundreds or even thousands, and each node is linked to one (or more) sensors. For gathering diverse kinds of data, remote sensor nodes feature an array of sensors. A sensor node may be used for a variety of purposes, including continuous or selective sensing, location detection, movement detection, and event localization, among others [5].

1.2 Clustering

Clustering is the virtual splitting of dynamic nodes into multiple groups in a sensor ad hoc network. Nodes are grouped according to their proximity to other nodes: two nodes are said to be neighbors when they are within each other's transmission range and establish a bidirectional link between them (a small sketch of this relation is given below). Based on the cluster diameter, there are two types of cluster control designs: one-hop clustering and multi-hop (d-hop) clustering. Clustering is described as a method of reorganizing all nodes into small virtual groupings based on their regional localization, with the Cluster Head and Cluster Members selected by the same rule [6]. Cluster Head is a role that is generally rotated across the nodes in the cluster. The selection criteria may vary depending on several elements, such as the node's geographical position, stability, mobility, energy, capacity, throughput, trustworthiness, and so on [7].
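The neighbor relation described above can be sketched as follows; the uniform transmission range and the coordinate representation are assumptions of this illustration.

import math

def neighbors(positions: dict, range_m: float = 100.0) -> dict:
    # Map each node id to the ids of all nodes within transmission range,
    # assuming every node uses the same range (hence links are bidirectional).
    return {a: [b for b in positions
                if b != a and math.dist(positions[a], positions[b]) <= range_m]
            for a in positions}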


2 Proposed System

A flow diagram of the suggested method has been created to help in understanding the complete procedure. Figure 2 depicts the algorithmic description of the full process.

2.1 Proposed System Flow

For wireless sensor networks, we propose a clustering approach using constraint-based parameters. Figure 2 depicts the entire cluster-head selection process.

Fig. 2. Cluster head selection scenario

Initially, we establish a sensor network by deploying a number of nodes, populated at random. We then select different constraints whose values are estimated and applied serially; these constraints are used to find an efficient cluster head through the clustering approach. The constraints are node energy, node connectivity and node buffer length. First, we estimate the energy of each node; each node then compares its energy with its 1-hop neighbors to check whether it is the maximum among them.


If the estimated energy of a node is larger than that of its 1-hop neighbors, the nodes with greater energy are selected. Next, we check the connectivity of each selected node through its degree: nodes with a higher degree than their 1-hop neighbors, i.e., higher connectivity, are retained (here, node connectivity means the number of connections, i.e., the degree of the node). We then process the nodes that have both higher connectivity and higher energy, checking the buffer length of each selected node based on its workload. Similarly, if a selected node has a higher buffer length than each of its 1-hop neighbors, it is retained; applying these three constraint checks yields an effective and optimal cluster head. This overall clustering approach minimizes network overhead and maximizes network efficiency while improving linear cluster-head selection.

2.2 Solution Strategy

The suggested clustering technique consists of three basic steps that must be completed before the network may be clustered.
• Parameter Definition: the node QoS parameters are chosen in this step in order to compute the values of the constraints.
• Decision Making: here we apply all checks for energy, connectivity, and buffer length, considering their values for decision making.
• Cluster Head Election: the cluster head is elected to look after the cluster members during this period.

In a similar manner, a threshold needs to be found for the average time required for transmitting data to a given node. The nodes therefore communicate with their neighbors, and the average transmission time over all uplinks and downlinks is computed as the mean T_avg = (1/N) Σ_{i=1}^{N} t_i, where t_i is the time taken by the i-th uplink or downlink transmission and N is the number of transmissions considered.

2.3 Suggested Algorithm

The suggested approach, shown in Table 1, describes the full process of constraint-based linear cluster-head selection, i.e., LEEC. This improved approach, with parallel linear energy-efficient cluster selection, is compared with the previously developed energy-efficient clustering technique for wireless ad hoc networks aimed at improving network lifetime, proposed by Virendra Dani et al. [1].
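A minimal code sketch of the three-constraint selection rule follows; the Node fields and the strict "greater than every 1-hop neighbor" interpretation are assumptions of this sketch, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Node:
    energy: float
    buffer_len: int
    neighbors: list = field(default_factory=list)   # ids of 1-hop neighbors

def cluster_heads(nodes: dict) -> list:
    # A node becomes a cluster-head candidate when it beats every 1-hop
    # neighbor on energy, then connectivity (degree), then buffer length.
    def wins(i):
        me = nodes[i]
        peers = [nodes[j] for j in me.neighbors]
        return (all(me.energy > p.energy for p in peers)
                and all(len(me.neighbors) > len(p.neighbors) for p in peers)
                and all(me.buffer_len > p.buffer_len for p in peers))
    return [i for i in nodes if wins(i)]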

3 Simulation

This section describes the simulation settings under which the experiments are carried out. Two important simulation scenarios are proposed to illustrate the energy-efficient clustering. Both simulation scenarios are run with a distinct number of nodes: 10, 20, 30, 40, and 50 (Table 2). The following investigative scenarios are used to conduct the trials [8].

Table 1. Cluster head selection

Input: node counts
Output: selection of the optimum cluster head

Process:
1: Populate random sensor nodes in the network.
2: A node in the network broadcasts the clustering request.
3: Wait for the response generated by the network.
4: Employ the different constraints:
   i. node energy
   ii. node connectivity
   iii. node buffer length
5: for (i = 0; i ≤ N; i++)
   a. Compare node energy:
      if (node energy > 1-hop neighbor node) select nodes of higher energy
   b. Compare the connectivity of all selected nodes:
      if (node connectivity > 1-hop neighbor node) select nodes of higher connectivity
   c. Compare the buffer length of the nodes with higher energy and connectivity:
      if (node buffer length > 1-hop neighbor node) select nodes of higher buffer length
      end if
      end if
      end if
   end for
6: The node selected above becomes the cluster head.
7: Return CH
8: End Process



Table 2. Simulation setup definition

Simulation properties       Values
Antenna model               Omni-directional
Topography area             1000 × 1000
Radio-propagation model     Two-ray ground
Mobility model              Random waypoint
Routing protocol            AODV
Traffic model               CBR

3.1 Simulation Scenario

To simulate the operation and the selection of an efficient cluster head for prolonging network lifetime, different experiments are performed under the following network scenarios.
• Simulation of the AODV-based LCH approach: in this phase, the network is set up using AODV with the existing baseline energy-efficient clustering routing technique, and tests are run with various numbers of nodes. Different performance parameters are calculated throughout the trials and compared with those of the suggested technique. Figure 3 shows the typical network in action.
• Simulation of the proposed clustering-based network: the suggested cluster-based routing approach is used to configure the network during this phase, and its performance is estimated for the comparison study. Figure 4 demonstrates the resulting network. The simulation is run using a variety of time-synchronized nodes.

Fig. 3. LCH (Old) approach


Fig. 4. Proposed Linear Energy Efficient Clustering Approach

4 Result Analysis

4.1 E to E Delay

End-to-end latency is the length of time it takes for a packet to travel from a source to the target device over the network.

Fig. 5. Comparison of E to E delay

Figure 5 reports the end-to-end delays of the suggested method and of the conventional LCH routing. In this figure, the Y axis represents the end-to-end latency in milliseconds and the X axis represents the number of network nodes used in the trials. The findings indicate that the suggested cluster-based routing yields a lower end-to-end latency than the existing technique. As a result, the proposed method is significantly more adaptable than the conventional one.


4.2 Remaining Energy

During communication and network activities, the nodes use a portion of the energy that they initially had. The remaining energy of the network nodes is measured and presented in this section as a network performance metric.

Fig. 6. Remaining energy

The quantity of energy remaining in the network nodes during the various trials is depicted in Fig. 6. The tests are run with 10, 20, 30, 40, and 50 nodes, respectively. The Y axis displays the energy remaining after the tests, while the X axis displays the number of nodes in the experimental network, to illustrate how well the network performs. Here, energy is expressed in joules. According to the experimental findings, the suggested clustering approach uses less energy than the conventional LCH routing protocol. As a result, the suggested clustering strategy consumes less energy than standard network setups.

4.3 PDR

The packet delivery ratio, also termed the PDR, indicates the performance of a routing protocol in terms of the packets successfully delivered to the destination. The comparative packet delivery ratio of the base LCH routing and the proposed cluster-based technique is shown in Fig. 7. In this diagram the number of nodes is given on the X axis and the percentage of packets successfully delivered on the Y axis. According to the obtained results, the proposed technique delivers more packets than the LCH routing protocol, with 88–96% of packets successfully delivered. Therefore, the proposed technique is more effective than the traditional routing protocol.

4.4 Routing Overhead

The number of extra control messages exchanged in a network is known as routing overhead; it is a source of network inefficiency.


Fig. 7. Comparison of PDR

Figure 8 illustrates the routing overhead for both routing approaches. The X axis in this graphic indicates the number of nodes in the network, while the Y axis shows the network's routing overhead. According to the experimental results, the suggested cluster-based routing strategy generates less routing overhead than the current routing method and is therefore significantly better suited to enhancing other network performance metrics. The lower routing overhead is mostly due to the clustering strategy, which reduces the number of control messages that must be exchanged in the proposed routing technique for location addressing and mapping.

Fig. 8. Routing overhead

5 Conclusion

Wireless sensor networks (WSNs) have received a lot of interest in recent years. WSNs may be used in an increasing number of civil and military applications, particularly in dangerous and remote environments; disaster management, border security, and battlefield monitoring are just a few examples.


For cluster-based sensor networks, a variety of energy-efficient strategies have been put forth in this research effort. By combining sensor nodes with the right clustering method, WSNs can boost energy efficiency. This paper presents LEEC, a Linear Energy Efficient Clustering method for sensor networks, which selects efficient clusters through an appropriate cluster-head selection procedure and cluster distribution, in order to minimize network energy consumption and extend network lifespan. The value of the CH competition range results in a favorable cluster-head distribution. The suggested clustering approach uses the energy change rate (residual energy), buffer length and node connectivity as supporting parameters for energy preservation. Clustering is recalled after a small time delay to reduce complexity and enhance the QoS of the network, because not recalling the clustering for a long time can reduce the efficiency of the network.

References

1. Dani, V., Bhati, N., Bhati, D.: EECT: energy efficient clustering technique using node probability in ad-hoc network. In: Abraham, A., Sasaki, H., Rios, R., Gandhi, N., Singh, U., Ma, K. (eds.) IBICA 2020. AISC, vol. 1372, pp. 187–195. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73603-3_17
2. Dani, V.: iBADS: an improved black-hole attack detection system using trust based weighted method. J. Inf. Assur. Secur. 17(3) (2022)
3. Wang, T., Zhang, G., Yang, X., Vajdi, A.: Genetic algorithm for energy-efficient clustering and routing in wireless sensor networks. J. Syst. Softw. 146, 196–214 (2018)
4. Hamzah, A., Shurman, M., Al-Jarrah, O., Taqieddin, E.: Energy-efficient fuzzy-logic-based clustering technique for hierarchical routing protocols in wireless sensor networks. Sensors 19(3), 561 (2019)
5. Raj, J.S., Basar, A.: QoS optimization of energy efficient routing in IoT wireless sensor networks. J. ISMAC 1(01), 12–23 (2019)
6. Saranya, V., Shankar, S., Kanagachidambaresan, G.R.: Energy efficient clustering scheme (EECS) for wireless sensor network with mobile sink. Wirel. Pers. Commun. 100(4), 1553–1567 (2018). https://doi.org/10.1007/s11277-018-5653-1
7. Nakas, C., Kandris, D., Visvardis, G.: Energy efficient routing in wireless sensor networks: a comprehensive survey. Algorithms 13(3), 72 (2020)
8. The Network Simulator, NS-2. http://www.isi.edu/nsnam/ns/

Hydrogen Production: Past, Present and What Will Be the Future?

Judite Ferreira1,2(B), Pedro Pereira2, and José Boaventura3,4

1 ISEP/IPP, Porto, Portugal
[email protected]
2 ISRC - Interdisciplinary Studies Research Center, Porto, Portugal
3 Universidade de Trás-os-Montes e Alto Douro (UTAD), Vila Real, Portugal
[email protected]
4 Centre for Robotics in Industry and Intelligent Systems, INESC TEC, Porto, Portugal

Abstract. Climate change is already affecting the entire planet, with extreme weather conditions such as droughts, heat waves, floods and landslides becoming increasingly frequent. Other consequences of these rapid climate changes include rising sea levels, ocean acidification and biodiversity loss. Concerns about the climate began in 1972 with agreements between Europe and other countries. This paper aims to show the importance of green hydrogen as part of the solution to climate change and to the reduction of CO2 emissions.

Keywords: European climate law · Green hydrogen · Hydrogen · CO2 emissions

1 Introduction

Climate change is affecting the entire planet, and it is necessary to intervene as soon as possible so that the situation does not become irreversible. To limit global warming to 1.5 °C, the limit considered safe by the Intergovernmental Panel on Climate Change (IPCC), it is essential to achieve carbon neutrality by 2050 [1–4]. The European Union has addressed this issue through the European Climate Law, which is binding and commits the EU to achieving this objective. The same objective is also defined in the Paris Agreement, signed by 195 countries. The negotiations on climate change have taken a long journey to this day, as can be seen in Table 1, which shows the chronology of the negotiations and the dates of the Conferences of the Parties (COP) [5]. Table 1 shows that negotiations and concerns about the environment officially began in 1972, fifty years ago. The most recent conference, COP26, was held in Glasgow between November 1st and 13th, 2021. This meeting was considered the longest COP conference, and countries around the world agreed on the goal of climate neutrality, increased funding for vulnerable developing countries and reduced funding for new fossil-fuel-related projects. However, countries failed to provide a common answer to the question of phasing out coal use. COP27 is taking place in Sharm el-Sheikh, Egypt, from November 6th to 18th, 2022.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. A. Abraham et al. (Eds.): IBICA 2022, LNNS 649, pp. 826–835, 2023. https://doi.org/10.1007/978-3-031-27499-2_76


Table 1. Chronology of the negotiations and the dates of the Conferences of the Parties (COP)

November 1–13, 2021: COP 26, Glasgow
July 29, 2021: Entry into force of the European Climate Law
October 7, 2020: The European Parliament adopts its position on EU climate neutrality by 2050, Brussels
December 11, 2019: The Ecological Pact
November 28, 2019: The European Parliament declares a climate emergency
October 2018: The IPCC of the UN presents its special report on 1.5 °C
June 2017: President Donald Trump announces his intention to withdraw the United States from the Paris Agreement
November 4, 2016: Entry into force of the Paris Agreement
October 4, 2016: Parliament gives its consent to EU ratification of the Paris Agreement
December 2015: Adoption of the Paris Agreement
2014: The IPCC's fifth evaluation report is released
2010: Adoption of the Cancun Agreements
2007: Publication of the fourth evaluation report of the Intergovernmental Panel on Climate Change (IPCC)
February 2005: Entry into force of the Kyoto Protocol
January 2005: Establishment of the EU Emissions Trading Scheme
November 2001: Marrakesh Agreements
June 11, 2001: President George W. Bush withdraws the United States from the Kyoto Process
1997: Adoption of the Kyoto Protocol
1994: The United Nations Framework Convention on Climate Change enters into force
1992: Earth Summit in Rio de Janeiro
1991: First meeting of the Intergovernmental Negotiating Committee
1990: The IPCC publishes its first evaluation report
1988: Creation of the Intergovernmental Panel on Climate Change
1987: Adoption of the Montreal Protocol
1979: The World Climate Conference is held in Geneva
1972: The United Nations Conference on the Human Environment is held

On October 7th, 2020, the European Parliament approved its negotiating position on the Climate Law, proposing that the EU and all Member States achieve climate neutrality by 2050. This position set a more ambitious intermediate target of reducing emissions by 60% by 2030. On December 12th, 2019, EU leaders agreed to achieve a climate-neutral EU by 2050 during a European Council meeting in Brussels. On December 11th, 2019, the European Commission presented the Ecological Pact, which would be legally enshrined in the European Climate Law with a view to achieving climate neutrality in Europe by 2050. Between December 2nd and 13th, 2019, COP25 concluded with an agreement on increased carbon reduction. However, it left some disappointment, because it deferred to the next COP, to be held in Glasgow, the decisions on a global carbon trading system, on clarity about concrete emission reductions, and on a system to channel new funding to the most vulnerable countries. Members of the European Parliament (MEPs) want


the Commission to ensure that all relevant legislative and budgetary proposals are fully aligned with the aim of limiting global warming to below 1.5 °C. From December 3rd to 14th, 2018, COP24 in Poland ended with mixed results. The countries agreed on a compromise that puts the Paris Agreement into practice. However, they failed to respond to problems such as a global carbon trading system and the steps to be followed to limit the global temperature rise to 1.5 °C. Other important dates are represented in Table 1; the United States of America negatively marked two of them. In June 2001, President George W. Bush withdrew the United States from the Kyoto Process, and in June 2017 President Donald Trump announced the withdrawal of the United States from the Paris Agreement.

2 The European Climate Law
In December 2019, the European Commission presented the European Ecological Pact, an important plan for achieving climate neutrality in Europe by 2050 [4]. This goal became legally binding with the European Climate Law on June 24th, 2021. This law makes binding the targets of reducing emissions by 55% by 2030 and of climate neutrality by 2050. Such measures bring the European Union closer to its net-zero emissions target for 2050 and confirm its leadership in the global fight against climate change [1]. The Climate Law should allow the objectives to be more easily translated into legislation and should bring benefits to people, particularly cleaner air, water and soil; reduced energy bills; home renovations; better public transport and more electric car charging stations; less waste and healthier food; and quality health for current and future generations. To meet these objectives, the European Union also wants to create jobs in the areas of renewable energy and the energy efficiency of buildings and processes. To ensure that all countries participate in EU efforts to reduce emissions in the sectors referred to above, the Effort Sharing Regulation sets binding targets for Member States for the period 2021 to 2030, as well as rules for determining annual allocations and for assessing progress. The current reduction target for the sectors covered by the Effort Sharing Regulation is 29% by 2030. As part of the ambitions demonstrated in the European Ecological Pact, this target is to be revised. On June 8th, the European Parliament (EP) voted in favor of raising the target to 40% by 2030. The following section shows the importance of green hydrogen production for achieving the objectives of the European Climate Law.

3 The Importance of Hydrogen in Meeting the European Union’s Climate Objectives
Hydrogen can be used as a raw material, as a fuel, and as a vector for transporting or storing energy, and it has many possible applications in the industry, transport, energy and building sectors. Most importantly, the utilisation of hydrogen does not emit CO2 and releases only small amounts of air pollutants. That is why hydrogen is essential to support the EU’s


commitment to achieving carbon neutrality by 2050 and the efforts at global level to implement the Paris Agreement. The problem is that hydrogen is currently largely produced from fossil fuels, namely natural gas or coal, resulting in the release of 70 to 100 million tonnes of CO2 per year in the EU. In the past, the production of green hydrogen attracted some interest, but the high price of renewable energy prevented its implementation. However, the lowering of energy costs from renewable sources, technological developments, and the urgency of drastically reducing greenhouse gas emissions are opening new possibilities for the implementation of green hydrogen. Green hydrogen is also key to achieving the objectives of the European Ecological Pact [7] and the transition to clean energy in Europe. Electricity produced from renewable sources is expected to decarbonize a large part of the EU’s energy consumption by 2050, but not all of it. Hydrogen has a strong potential to fill some of these gaps as a vector for storing and transporting energy from renewable sources, ensuring reinforcement in the event of seasonal variations and connecting production sites to more distant demand centers. The share of hydrogen in Europe’s energy mix is expected to increase from the below 2% currently recorded to between 13% and 14% by 2050 [4]. The progressive adoption of hydrogen-based solutions may also lead to the adaptation or reuse of parts of the existing natural gas infrastructure, which would prevent pipelines from becoming stranded assets. Hydrogen will play a role in the integrated energy system of the future, alongside electrification from renewable sources and a more efficient and circular use of resources. The deployment of clean hydrogen on a large scale and at an accelerated pace is essential for the EU to achieve a higher level of climate ambition, reducing greenhouse gas emissions by at least 50%, and approaching 55%, by 2030 in a cost-effective way. Investment in hydrogen will foster sustainable growth and jobs, which will be critical in the context of the recovery from the COVID-19 crisis and the war in Ukraine. The Commission’s recovery plan stresses the need to unlock investment in clean technologies and key value chains. This plan highlights clean hydrogen as one of the key areas to be addressed in the context of the energy transition and mentions several possible support pathways. Hydrogen accounts for less than 2% of Europe’s present energy consumption and is primarily used to produce chemical products, such as plastics and fertilizers. 96% of this hydrogen is produced from natural gas, resulting in significant amounts of CO2 emissions. When produced at times when solar and wind energy resources are abundantly available, renewable hydrogen can also support the EU’s electricity sector by providing long-term and large-scale storage. The storage potential of hydrogen is particularly beneficial for power grids, as it allows renewable energy to be kept not only in large quantities but also for long periods of time. This means that hydrogen can help improve the flexibility of energy systems by balancing supply and demand when there is either too much or not enough power being generated, helping to boost energy efficiency throughout the EU.


The European Parliament wants: incentives to stimulate demand and the creation of a European hydrogen market, as well as the rapid deployment of hydrogen infrastructure; the phasing out of fossil-based hydrogen as quickly as possible; all hydrogen imports to be certified in the same way as hydrogen produced in the EU, including production and transport, in order to avoid carbon leakage; and an assessment of the possibility of redirecting existing pipelines to the transport and underground storage of hydrogen. As part of the fight against climate change, the European Union (EU) has set ambitious targets to reduce CO2 emissions. The EU aims to be climate-neutral by 2050, a target that, together with the intermediate target of reducing emissions by 55% by 2030, is set by the European Climate Law. The European Union has launched several initiatives to achieve these objectives. One is the Effort Sharing Regulation, which is being updated as part of the “Objective 55” legislative package. The Effort Sharing Regulation sets binding targets for reducing greenhouse gas emissions in each EU country in those sectors not covered by the emissions trading scheme, such as transport, agriculture, buildings and waste management. These sectors account for most EU greenhouse gases, around 60% of total EU emissions [8]. The current reduction target for the sectors covered by the Effort Sharing Regulation is 29% by 2030. As part of the ambitions shown in the European Ecological Pact, this target is to be revised. On June 8th, the European Parliament (EP) voted in favour of raising the target to 40% by 2030. As the capacity to reduce emissions varies from Member State to Member State, this is reflected in national targets based on each country’s gross domestic product per capita. The proposed targets for 2030 would range from −10% to −50% compared to 2005 levels (Fig. 1) and would be in line with the EU’s overall target of a total reduction of 40%.

Fig. 1. Goals for CO2 reduction [EC proposal to amend Regulation (EU) 2018/842]


To ensure the reduction of emissions at a constant rate, an emission reduction path has been defined for each Member State. However, the current system allows for some flexibility: EU countries can bank, borrow and transfer annual emission allocations among themselves from one year to the next. The European Union has also proposed the creation of an additional reserve that would include the excess removals of CO2 achieved by EU countries beyond their goals under the Land Use Regulation and in the forestry sector. Member States struggling to meet their national emission reduction targets may use this reserve, provided that certain conditions are met (e.g. the EU as a whole achieves its 2030 climate target).

Fig. 2. Comparison of the intended and achieved targets for renewable energy production in 2020 [8]

Figure 2 shows how EU countries have met the 2020 renewable energy targets. For example, in 2020, Portugal recorded a renewable share of 34% in its energy consumption, which means that it exceeded the 31% target set in European legislation for 2020. France did not achieve its goal, falling 3.9% below the target, while Sweden exceeded its target by 11.1%.
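The comparison behind Fig. 2 reduces to simple arithmetic: the gap between each country's achieved 2020 renewable share and its national target. The minimal Python sketch below reproduces that computation; Portugal's figures are taken from the text above, while the other two entries are hypothetical placeholders whose gaps merely mirror the quoted deviations of −3.9% and +11.1%.

```python
# Gap between the achieved 2020 renewable-energy share and the national target.
# Portugal's numbers come from the text; "Country A/B" are hypothetical entries
# constructed only to reproduce the quoted gaps.
data = {
    # country: (achieved 2020 share %, 2020 target %)
    "Portugal": (34.0, 31.0),
    "Country A (hypothetical)": (19.1, 23.0),   # 3.9 points below target
    "Country B (hypothetical)": (60.1, 49.0),   # 11.1 points above target
}

for country, (achieved, target) in data.items():
    gap = achieved - target
    status = "exceeded" if gap >= 0 else "missed"
    print(f"{country}: {status} the {target}% target by {abs(gap):.1f} points")
```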

4 Hydrogen
The role of hydrogen [9] in the EU’s energy and greenhouse gas (GHG) emission abatement efforts will rapidly increase. Europe currently uses 339 TWh [12] of hydrogen per year [10]. Research on future EU energy systems, such as the study published by the Joint Research Centre [9], expects a significant increase in the use of hydrogen, to between 667 and 4000 TWh [12] in 2050. Hydrogen is an integral part of the recently announced Next Generation EU recovery instrument. Several Member States (MS) have already developed a hydrogen strategy at the national level, such as France, the Netherlands, Germany, Portugal and Spain. Many other MS are likely to follow suit soon. Besides, hydrogen is anticipated to be a key topic for the EU’s Innovation Fund, which opened its first call in July 2020. For hydrogen to deliver a positive role in the energy transition, it must be produced and delivered to end uses in a sustainable manner (in terms of cost, energy system, environmental and job impact). There is no shortage of cost and technical data across the hydrogen value chain, yet their transparency and comparability are often poor due to varying assumptions. The report [9] builds on data collected from public sources and aims to normalize them to comparable units to establish a more reliable basis for decision-making (e.g. the scale of investment necessary). Where possible, guidance on how data


should be reported is provided. Besides the investment cost data across selected items in the hydrogen value chain, the effects on employment in the green hydrogen value chain, as well as import options and costs, were explored to provide a more comprehensive picture. In sum, this report does not aim to provide an exhaustive list of all the possibilities in the hydrogen ecosystem, but rather looks at the currently most discussed technologies and options. On the production side, various technology options exist. Note that most of the EU hydrogen is produced on-site (captive hydrogen, 64% of total production capacity), typically in large industrial settings; the remaining hydrogen is generated as a by-product of industrial processes (by-product hydrogen, 21% of total production capacity) or produced centrally and delivered to points of demand (merchant hydrogen, 15% of total production capacity). As of now, 95% of EU hydrogen production is done via steam methane reforming (SMR) and, to a lesser extent, autothermal reforming (ATR), both highly carbon-intensive processes. Such unabated production from fossil fuels is commonly called grey hydrogen and is defined as ‘fossil-based hydrogen’ in the Commission’s strategy. Both SMR and ATR could, however, be coupled with carbon capture, usage and storage (CCUS) systems with various CO2 capture rates and post-capture utilization of the CO2. Such production is commonly referred to as blue hydrogen, or defined as fossil-based hydrogen with carbon capture in the hydrogen strategy. Most of the remaining 5% is produced as a by-product of the chlor-alkali processes in the chemical industry. Such production uses alkaline electrolysers (ALK) to electrolyze brine. Similar alkaline electrolysers can be used in dedicated hydrogen production, while other electrolytic hydrogen production methods exist using polymer electrolyte membrane (PEM) and solid oxide (SOEC) electrolysers. In cases where the electricity used in the process is renewable, the produced hydrogen is referred to as green, or defined as renewable hydrogen in the hydrogen strategy. This is an important distinction, as using the current electricity grid mixes of most EU countries results in hydrogen with a much higher carbon intensity than via unabated fossil-based routes; a simple calculation illustrating this is sketched below. Various in-between cases exist as well (e.g. sourcing of both grid and renewable electricity, ATR coupled with electrolysis, etc.). Hydrogen accounts for about 2% of energy consumption in the European Union, but almost 96% of this hydrogen is produced from fossil fuels, which release between 70 and 100 million tonnes of CO2 per year. Some studies show that renewable energy could provide a substantial share of Europe’s energy by 2050. Green hydrogen could account for 20–50% of energy demand in transport and 5–20% in industry. The advantages of using hydrogen as a fuel are several:
• its use for energy purposes does not cause CO2 emissions (water is the only by-product of the process)
• it can be used to produce other gases as well as liquid fuels
• existing infrastructure (gas transport and gas storage) can be reused for hydrogen
• it has a higher energy density than batteries, and can therefore be used for long-distance transportation and for heavy goods vehicles
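To make the grid-mix point above concrete, the following minimal sketch compares the CO2 intensity of electrolytic hydrogen under different electricity sources with that of unabated SMR. All figures used (an electrolyser consumption of roughly 52 kWh per kg of H2, an SMR footprint of roughly 9.5 kg CO2 per kg of H2, and the illustrative grid intensities) are our own assumptions drawn from typical literature ranges, not values given in the paper.

```python
# Illustrative CO2 intensity of electrolytic hydrogen vs. unabated SMR.
# All constants below are rough assumptions, not figures from the paper.
ELECTROLYSIS_KWH_PER_KG_H2 = 52.0   # assumed electrolyser consumption
SMR_KG_CO2_PER_KG_H2 = 9.5          # assumed unabated steam methane reforming

# Assumed electricity carbon intensities, in g CO2 per kWh.
grid_intensity = {
    "wind/solar (renewable)": 15,
    "average grid (assumed)": 250,
    "coal-heavy grid (assumed)": 700,
}

for source, g_per_kwh in grid_intensity.items():
    kg_co2 = ELECTROLYSIS_KWH_PER_KG_H2 * g_per_kwh / 1000.0
    verdict = "below" if kg_co2 < SMR_KG_CO2_PER_KG_H2 else "above"
    print(f"{source}: {kg_co2:.1f} kg CO2/kg H2 ({verdict} SMR at "
          f"{SMR_KG_CO2_PER_KG_H2} kg CO2/kg H2)")
```

With these assumptions, only the renewable-powered case comes in below unabated SMR, which is exactly the distinction the strategy's labels are meant to capture.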


4.1 Hydrogen Production
There are many ways to produce hydrogen, which differ in their greenhouse gas emissions and in their relative competitiveness [8]. Hydrogen can be produced through a variety of processes. These production pathways are associated with a wide range of emissions, depending on the technology and energy source used, and have different cost implications and material requirements. Electricity-based hydrogen refers to hydrogen produced through the electrolysis of water (in an electrolyser powered by electricity), regardless of the electricity source. The full life-cycle greenhouse gas emissions of the production of electricity-based hydrogen depend on how the electricity is produced.

Fig. 3. Water electrolysis [16]

Clean hydrogen refers to renewable hydrogen, i.e. hydrogen produced through the electrolysis of water, as represented in Fig. 3 (in an electrolyser powered by electricity), with the electricity stemming from renewable sources. The full life-cycle greenhouse gas emissions of the production of renewable hydrogen are close to zero. Renewable hydrogen may also be produced through the reforming of biogas (instead of natural gas) or the biochemical conversion of biomass, if in compliance with sustainability requirements [10, 11].
4.2 Fossil-Based Hydrogen
Fossil-based hydrogen refers to hydrogen produced through a variety of processes using fossil fuels as feedstock, mainly the reforming of natural gas or the gasification of coal. This represents the bulk of the hydrogen produced today. The life-cycle greenhouse gas emissions of the production of fossil-based hydrogen are high [11]. Fossil-based hydrogen with carbon capture is a subset of fossil-based hydrogen where the greenhouse gases emitted as part of the hydrogen production process are captured. The greenhouse gas emissions of the production of fossil-based hydrogen with carbon capture or pyrolysis are lower than for fossil-based hydrogen, but the variable effectiveness of greenhouse gas capture (90% at most) needs to be considered.
4.3 Low-Carbon Hydrogen
Low-carbon hydrogen encompasses fossil-based hydrogen with carbon capture and electricity-based hydrogen, with significantly reduced full life-cycle greenhouse gas emissions compared to existing hydrogen production [12, 13].
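As a compact restatement of the categories in Sects. 4.1–4.3, the sketch below maps a production pathway description to the corresponding label used in the text. The data structure, field names and function are our own illustrative choices, not anything defined in the paper or in the Commission's strategy.

```python
from dataclasses import dataclass

@dataclass
class Pathway:
    feedstock: str          # "natural_gas", "coal", "electricity", "biogas", "biomass"
    carbon_capture: bool    # CCUS attached to a fossil route?
    renewable_power: bool   # electrolysis powered by renewable electricity?

def strategy_label(p: Pathway) -> str:
    """Map a pathway to the labels summarised in Sects. 4.1-4.3."""
    if p.feedstock == "electricity":
        if p.renewable_power:
            return "renewable (green) hydrogen"
        return "electricity-based hydrogen (intensity depends on the grid mix)"
    if p.feedstock in ("natural_gas", "coal"):
        if p.carbon_capture:
            return "fossil-based hydrogen with carbon capture (blue)"
        return "fossil-based hydrogen (grey)"
    if p.feedstock in ("biogas", "biomass"):
        return "renewable hydrogen, subject to sustainability requirements"
    return "unclassified"

print(strategy_label(Pathway("natural_gas", False, False)))  # grey
print(strategy_label(Pathway("natural_gas", True, False)))   # blue
print(strategy_label(Pathway("electricity", False, True)))   # green
```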


4.4 Hydrogen-Derived Synthetic Fuels
Hydrogen-derived synthetic fuels refer to a variety of gaseous and liquid fuels based on hydrogen and carbon. For synthetic fuels to be considered renewable, the hydrogen part of the syngas should be renewable. Synthetic fuels include, for instance, synthetic kerosene in aviation, synthetic diesel for cars, and various molecules used in the production of chemicals and fertilizers. Synthetic fuels can be associated with very different levels of greenhouse gas emissions, depending on the feedstock and process used. In terms of air pollution, burning synthetic fuels produces levels of air pollutant emissions similar to those of fossil fuels.

5 Conclusion
In this paper, we have shown that green hydrogen is important for complying with European climate legislation. Initially, a study is made of the chronological evolution of the negotiations between different countries to solve the problems associated with the climate and to avoid the emission of greenhouse gases. Hydrogen has been considered a clean energy carrier, generating electricity via fuel cells without carbon dioxide emissions; however, at the current stage, most hydrogen is produced by steam methane reforming, emitting carbon dioxide as a by-product. In this context, green hydrogen production systems, consisting of water electrolysis and a renewable energy plant, should be expanded to prepare for the upcoming hydrogen society [15]. Section 3 analyses the importance and urgency of implementing the consumption of green hydrogen as a fuel for transport and as energy storage, showing that a reduction in greenhouse gases will enable the objectives of carbon neutrality to be achieved in 2050. It was found in this study that much of the hydrogen currently produced is not “clean hydrogen” (it is grey hydrogen), and this needs to be rapidly changed. One of the solutions to this problem is the installation, in the power plants producing grey hydrogen, of capture systems for the CO2 they emit; if CO2 capture is in place, the hydrogen produced is called blue hydrogen. The best solution will always be the production of green hydrogen, in which renewable energy is used to perform the electrolysis that produces hydrogen and releases oxygen. Electrolysis is a relatively mature and thoroughly tested technology that has long been used in various industrial processes; but to scale manufacturing and deployment and drive down costs, it calls for more powerful and efficient electrolysers. The subject studied in this paper is a very current theme, but much remains to be done to achieve the goals intended for 2050. However, some countries are on the right track, as can be seen in the graphs in Figs. 1 and 2. In 2020, renewable energy represented 22.1% of the energy consumed in the EU, around 2 percentage points above the 2020 target of 20% [14]. The future exists only if we all contribute to a better climate for future generations.

References
1. European Parliament and of the Council: Regulation (EU) 2021/1119 of the European Parliament and of the Council, 30 June 2021


2. European Parliament and of the Council: Commission proposal for a Regulation: European Climate, Brussels, 4 March 2020
3. European Parliament and of the Council: Commission amended proposal for a Regulation: European Climate, Brussels, 17 September 2020
4. European Parliament and of the Council: The European Climate Law, March 2020
5. Parlamento Europeu: Infografia: a cronologia das negociações sobre as alterações climáticas, Atualidade, Parlamento Europeu, Sociedade, 3 December 2021
6. Mackenzie, W.: Green hydrogen pipeline more than doubles in five months, Wood Mackenzie report, March 2020
7. European Commission: European Climate Pact, European Commission, Climate Action
8. European Environment Agency: Amending Regulation (EU) 2018/842 on binding annual greenhouse gas emission, European Environment Agency (2018)
9. European Union: Reductions by Member States from 2021 to 2030 contributing to climate action to meet commitments under the Paris Agreement, European Union, July 2021
10. H2Haul project: EC PCH, Hydrogen Roadmap Europe, Clean Hydrogen Partnership (former FCH JU) (2019)
11. Blanco, H., Nijs, W., Ruf, J., Faaij, A.: Potential for hydrogen and power-to-liquid in a low-carbon EU energy system using cost optimization. Appl. Energy 232, 617–639 (2018)
12. Niermann, M., Drünert, S., Kaltschmitta, M., Bonhoffb, K.: Liquid organic hydrogen carriers (LOHCs) – techno-economic analysis of LOHCs in a defined process chain. Energy Environ. Sci. 12, 290–307 (2019). https://doi.org/10.1039/C8EE02700E
13. Wu, J., Xiao, J., Hou, J., Zhang, J., Jin, C., Han, R.: Generation potential and economy analysis of green hydrogen in China. In: 2022 IEEE 5th International Electrical and Energy Conference (CIEEC), pp. 4477–4482 (2022)
14. Eurostat: Renewable energy statistics, Eurostat, Statistics Explained, January 2022
15. Lee, H., et al.: Outlook of industrial-scale green hydrogen production via a hybrid system of alkaline water electrolysis and energy storage system based on seasonal solar radiation. J. Clean. Prod. 377, 134210 (2022)
16. Yue, M., Lambert, H., Pahon, E., Roche, R., Jemei, S., Hissel, D.: Hydrogen energy systems: a critical review of technologies, applications, trends and challenges. Renew. Sustain. Energy Rev. 146, 111180 (2021)

Implementation of the General Regulation on Data Protection – In the Intermunicipal Community of Alto Tâmega and Barroso, Portugal
Pascoal Padrão1 and Isabel Lopes1,2,3(B)
1 Instituto Politécnico de Bragança, Bragança, Portugal

[email protected]

2 UNIAG, Instituto Politécnico de Bragança, Bragança, Portugal 3 Algoritmi Centre, Minho University, Guimarães, Portugal

Abstract. Nowadays, the secrecy, privacy and preservation of personal information present themselves as a very relevant social concern, given the technological advances and the capabilities these advances bring with regard to the computational and analytical exploration of data. Thus, in 2016 the European Union created a Regulation with the aim of standardizing legislation and practices with regard to data protection, in order to protect its citizens and increase transparency in the processing of their data: the “General Regulation on Data Protection” (GDPR). Six years after its creation and four after its entry into force, with the current study we intend to analyze how the implementation of the GDPR in Local Public Administration in Portugal is taking place, focusing this study on the Intermunicipal Community of Alto Tâmega and Barroso (CIM-AT).
Keywords: General Regulation on Data Protection · Municipalities · Data protection officer · Information systems · Safety

1 Introduction
The technological explosion at the beginning of the century brought a transversal revolution to our society, promoting a high degree of efficiency and effectiveness in services, based on robust computer applications and relational databases. Access to information, together with its availability, confidentiality and integrity, has become a constant concern for organizations and for the security of their information systems. Local Public Administration is no exception; on the contrary, since Local Public Administration is closest to the citizens, it is essential that it prioritizes the privacy of their personal data. Citizens not only expect from the Local Public Administration the provision of a good service and improvements to their quality of life, but also a relationship of security and trust.


Thus, given that the GDPR is a legal imperative of the EU through the entry into force of Regulation (EU) 2016/679, and that in Portugal it should have been implemented four years ago, it is imperative to know, and that is the main objective of this research work, whether the municipalities have already implemented the GDPR. As a study universe covering all 308 Portuguese municipalities would be too time-consuming, we limited our work to the CIM-AT, which includes six municipalities, although our objective is to study each of the CIMs in order to eventually reach the full universe. This research work begins with this introduction, followed by a brief review of the literature on the general data protection regulation. Following this literature review, there is a section where the research methodology used for data collection is presented. Before the conclusion, the results of the study are presented. Finally, the limitations of this work are identified and future works are proposed.

2 General Regulation on Data Protection
With the mass use of the Internet and the technological explosion of the last two decades, the existing directives for the protection of personal data had become completely outdated. It thus became imperative to standardize data protection legislation in all Member States, even though the objectives and principles contained in Directive 95/46/EC remained valid. The result was Regulation (EU) 2016/679 (2016), published on April 27, 2016, more commonly known as the GDPR, which implements the new rules regarding the protection of individuals with respect to the processing of personal data and the free movement of such data, thus revoking the previous Directive 95/46/EC. Its fundamental objectives were to harmonize legislation in all Member States, to allow citizens and businesses to benefit from the digital economy and electronic commerce, to reform the way in which organizations work with personal data and, above all, to protect the personal data of European citizens; the GDPR also generates new duties for the business and public sectors. In this way, having unified and updated legislation with regard to data protection is essential to safeguard the fundamental right to it, as well as to enable the growth of a consolidated digital economy and strengthen the fight against crime and terrorism. The GDPR was ratified in 2016 by the European Union and came into force only in May 2018. As the focus of the regulation is on personal data, we think it is important to clarify how the GDPR defines them. The GDPR defines personal data in a broad sense, so as to include any information related to an individual which can lead to their identification, either directly, indirectly or by reference to an identifier. Identifiers include [1]:
• Names.
• Online identifiers such as social media accounts.
• Identification numbers (e.g., passport numbers).
• Data regarding location (e.g., physical addresses).
• Any data that can be linked to the physical, physiological, genetic, mental, economic, cultural or social identity of a person.


Companies collecting, transferring and processing data should be aware that personal data is contained in any email, and should also consider that third parties mentioned in emails count as personal data too and, as such, are subject to the requirements of the GDPR [2]. The GDPR “requirements apply to each member state of the European Union, aiming to create more consistent protection of consumer and personal data across EU nations. The GDPR mandates a baseline set of standards for companies that handle EU citizens’ data to better safeguard the processing and movement of citizens’ personal data” [3]. The main innovations of the General Data Protection Regulation are [4]:
1. “New rights for citizens.
2. The creation of the post of Data Protection Officer (DPO).
3. Obligation to carry out Risk Analyses and Impact Assessments to determine compliance with the regulation.
4. Obligation of the Data Controller and Data Processor to document the processing operations.
5. New notifications to the Supervisory Authority: security breaches and prior authorisation for certain kinds of processing.
6. New obligations to inform the data subject by means of a system of icons that are harmonised across all the countries of the EU.
7. An increase in the size of sanctions.
8. Application of the concept ‘One-stop-shop’ so that data subjects can carry out procedures even though this affects authorities in other member states.
9. Establishment of obligations for new special categories of data.
10. New principles in the obligations over data: transparency and minimisation of data”.
All organisations, “including small to medium-sized companies and large enterprises, must be aware of all the GDPR requirements and be prepared to implement” [3].

3 Research Methodology
The term “research methodology” is used to “refer to the way in which one responds to research questions. The methodology includes not only data collection techniques, but also research design, framing, subjects, reporting, among others” [5]. In the course of the study, several methodological approaches were considered, and the work methodology selected was the one considered most appropriate. Given the nature of the problem under analysis, and since there is no attempt to generalize the results, we opted for an interpretative and exploratory approach based on a case study design. The purpose of this strategy is to understand the study of a specific case, not to study other similar cases or to formulate generalizations. This investigation follows an exploratory and interpretive logic. It is exploratory because the subject under analysis is still little addressed and we intend to deepen knowledge about the phenomenon being studied, and it is interpretive because we intend to interpret the application of the GDPR by the municipalities [6].


As we are dealing with a case study investigation, and although the sample proves to be extremely important, it is not based on sampling, as “one does not study a case to understand other cases, but to understand the case” [7]. Thus, the participants of our study are all municipalities that are part of the CIM-AT. Throughout the study, the instruments for data collection were the questionnaire and document review. The questionnaire was intended to provide a global characterization of the municipalities in relation to the GDPR theme, thus seeking to obtain a more comprehensive and contextualized image of their reality. The questionnaire was available between August 1, 2022 and September 30, 2022 (2 months) and 5 responses were obtained, that is, a response rate of approximately 83.33%.

4 Results
Each in its own way, all the data generated is of enormous importance to organizations. The implementation of the GDPR has been a legal imperative since May 25, 2018, in both the private and public sectors, so it should already be implemented in all institutions. In the case under study, the CIM-AT municipalities have complied with this imperative, and all the municipalities that responded to the survey already have the GDPR implemented (see Fig. 1).


Fig. 1. Implementation of the GDPR.

As was observed, 83% of the municipalities have already implemented the GDPR; the remaining 17% correspond to one municipality that did not respond to the survey. Thus, it appears that the municipalities belonging to the CIM-AT are very well positioned in this task. When asked why the GDPR implementation process was developed, it is observed that in all the municipalities that responded to the survey, it was a legal imperative. We can


conclude that, if the law were not in force, the municipalities would not be protecting the personal data of their citizens with such integrity, availability and confidentiality, which are the basic pillars of information security (see Fig. 2).


Fig. 2. Why was the implementation process developed

To the question of who implemented the GDPR, the same five answers were obtained: 33% of the municipalities responded that they resorted to external entities, 17% of the municipalities handled the implementation themselves, and 33% of the municipalities worked in partnership with a private entity (see Fig. 3). In other words, 66% of the municipalities delegated the implementation of the GDPR to private entities. On the question of how long ago the regulation was implemented, the possible answers were defined in the question as three time periods, beginning with the entry into force of the GDPR.


Fig. 3. Who implemented it

From the analysis of the resulting graph, we can see that 50% of the municipalities implemented the GDPR in the period between 2020 and 2021, and 33% of them in the period between 2018 and 2019; one municipality did not give any response (see Fig. 4). Thus, we were able to verify that 33% of the municipalities implemented the GDPR soon after it came into force.


Fig. 4. How long ago did you adopt the measures contained in the GDPR


Asked whether the implementation was well accepted by the stakeholders, we found unanimous acceptance of the GDPR in the answers obtained. This leads us to conclude that, despite having been implemented as a legal imperative, its importance is recognized (Fig. 5). Asked whether there was adequate training on the protection of personal data, we found that training was provided in all municipalities that responded to the survey; 17% corresponds to the municipality that did not respond (see Fig. 6). Another issue raised that is of the utmost importance is the appointment of a data protection officer, a matter that is clearly established by the CNPD: “Public entities are always obliged to have a DPO. Article 12 of Law 58/2019 more specifically regulates the designation of DPO in public entities.” [9].


Fig. 5. Has there been adequate training on personal data protection

Given the importance of having a data protection officer, and since its appointment is one of the requirements for compliance, the responses from the municipalities belonging to the CIM-AT were positively surprising, as all the responses to the survey state that they have one. Since the municipalities belonging to the CIM-AT responded positively to the main question of this research work, which was to know whether they had already implemented the General Regulation on Data Protection, the questions in the survey intended for negative answers to that question were not asked and thus cannot be analyzed in this work.


Fig. 6. Is someone responsible for compliance with the GDPR
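All of the percentages reported in this section reduce to simple tallies over the six CIM-AT municipalities. The sketch below reproduces that arithmetic for the question of who implemented the GDPR (Fig. 3); the individual answers are hypothetical reconstructions chosen only so that the shares match the figures quoted above, since the raw responses are not published.

```python
from collections import Counter

# Hypothetical per-municipality answers (6 municipalities, one non-respondent),
# invented so the shares match the percentages reported in the text.
answers = [
    "external entity", "external entity",
    "own services",
    "partnership with a private entity", "partnership with a private entity",
    "no reply",
]

counts = Counter(answers)
for answer, n in counts.items():
    print(f"{answer}: {n}/{len(answers)} = {n / len(answers):.0%}")

# The text's 66% figure is the sum of the two rounded 33% shares;
# the unrounded value is 4/6, about 67%.
private = counts["external entity"] + counts["partnership with a private entity"]
print(f"implemented by or with private entities: {private / len(answers):.0%}")
```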

5 Conclusions
Most of the information that is currently shared digitally was previously shared on paper, thus raising new challenges and digital threats in terms of security and privacy, namely in relation to the protection of personal data in an increasingly digital society [8]. There are currently 28 data protection laws based on the EU Data Protection Directive of 1995, i.e. implemented over 20 years ago, and they are gradually being replaced by the new GDPR [2]. Laws that predate the advances in information and communication technologies of the last two decades are completely misaligned with the need to protect the data of individuals and companies. The digital impact and transformation of recent years is visible in several sectors, and local public administration is no exception; such a transformation is an indisputable fact. As such, whenever they process personal data, local authorities must comply with the same rules of the GDPR. The Public Administration is responsible for helping and assisting Local Administration in preparing for the implementation of the GDPR. The vast majority of personal data handled and processed by the Local Public Administration is necessary to carry out its work in the exercise of its powers in relation to citizens. This allows for objective decision-making and the determination of measures that are strictly necessary and adapted to the context. However, when one is not familiar with this new theme, implementing such an approach requires a great effort and adaptation on the part of the municipalities. As described, this study was limited to the CIM-AT, and one of the limitations that we can point out is the small number of questionnaires; however, given that this CIM comprises six municipalities, we consider that the response rate was high.


Regarding the methodology, the investigation also presents limits intrinsic to the instruments used, and subjectivity and personal involvement can condition the interpretation. We consider, however, that several precautions were taken in order to minimize the degree of subjectivity. As is evident in research studies of this kind, we were never able to be fully comprehensive, and this work focused mainly on whether or not the municipalities had already implemented the GDPR. Thus, as future work, we consider that it would be important to know quantitative data on cases of non-compliance with the GDPR.
Acknowledgements. The authors are grateful to the UNIAG, R&D unit funded by the FCT – Portuguese Foundation for the Development of Science and Technology, Ministry of Science, Technology and Higher Education. “Project Code Reference: UIDB/04752/2020”.

References
1. European Parliament and Council: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, Official Journal of the European Union (2016)
2. Ryz, L., Grest, L.: A new era in data protection. Comput. Fraud Secur. 2016(3), 18–20 (2016)
3. Lopes, I.M., Oliveira, P.: Implementation of the general data protection regulation: a survey in health clinics. In: 13ª Iberian Conference on Information Systems and Technologies, vol. 2018-June, pp. 1–6 (2018)
4. Díaz Díaz, E.: The new European union general regulation on data protection and the legal consequences for institutions. Church Commun. Cult. 1, 206–239 (2016)
5. Hudson, L., Ozanne, J.: Alternative ways of seeking knowledge in consumer research. J. Consum. Res. 14(4), 508–521 (1988)
6. Martins Junior, J.: Trabalhos de conclusão de curso: instruções para planejar e montar, desenvolver e concluir, redigir e apresentar trabalhos monográficos e artigos (2008)
7. Stake, R.: The Art of Case Study Research. Sage Publications, Thousand Oaks (1995)
8. SPMS: Serviços Partilhados do Ministério da Saúde, Privacidade da informação no setor da saúde (2017)
9. CNPD: EPD (n.d.). https://www.cnpd.pt/organizacoes/obrigacoes/encarregado-de-protecaode-dados/. Accessed 25 Oct 2022

Implementing ML Techniques to Predict Mental Wellness Amongst Adolescents Considering EI Levels
Pooja Manghirmalani Mishra1(B) and Rabiya Saboowala2
1 Machine Intelligence Research Labs, Mumbai, India

[email protected]
2 Mumbai, India

Abstract. Mental health, in general, is defined as a form of emotional and spiritual resilience and the ability to withstand taxing demands, cope with the normal stresses of life and daily challenges, and work productively and fruitfully. The objective of this article is to show the ways in which emotionally intelligent adolescents are more likely to adjust readily to situations and are less anxious and depressed, thus demonstrating the adaptive value of emotional intelligence for stable mental health. Using IBM SPSS software, it was found that females had a higher level of Emotional Intelligence than their male counterparts. In order to predict mental wellness using Emotional Intelligence scores, a test was conducted on 532 adolescents from the Mumbai region of India, and Machine Learning algorithms (classifiers) were applied. Out of the 7 classifiers, Support Vector Machines yielded the highest accuracy.
Keywords: Mental wellbeing · Machine learning · Classifiers · Emotional intelligence · Adolescents

1 Introduction
Adolescence is a phase in every individual’s life when they develop the ability to think about the future in a real sense, one step ahead of the present, envision its implications, and simultaneously grasp the complexity of the relationships that exist within society. It is a part of life where they encounter novel and first-hand experiences; positive as well as negative emotional reactions often develop from unfamiliar situations. Rapid physical and emotional changes take place during the developmental phase of any individual, so there is an urgent need to nurture their skills of coping with their own emotions and those of others in order to regulate their emotions and behaviour (Karibeeran and Mohanty 2019). The stage of adolescence, which approximately ranges from 16 to 20 years, when students are in schools and colleges, is often referred to as the ‘Stress and Storm Phase’. Students during their adolescence come across numerous problems because of the background they come from, such as their socio-economic status, medium of instruction, gender, locality, etc. Problems faced may include poor communication skills,


stage fright, a lack of participative nature, being unable to interact socially, being unable to cope with emotions, anxiety, stress, the ever-increasing demands of society, etc. In the context of increasing mental health problems among adolescents and the major influence of the psychosocial factors enhancing them, it has become particularly important for researchers and practitioners to understand the vital role of emotional intelligence in enhancing mental health among college students. In recent years, emotional intelligence (EI) and mental wellness (MW) have gained attention, and their correlational application has been expanding, not only in the field of education but in all global sectors, including individuals’ personal lives, work and business. One of the major and serious problems faced by teenagers during school time is that of adjustment: it is important for students to maintain their social and interpersonal relations in order to thrive in the 21st century, which is only possible if they can understand their own emotions and those of others and are at peace mentally. It has been universally recognized that EI is one of the major factors, among others, that determine the success of individuals in their lives and that it also helps in boosting mental health (Bar-on 2001). EI, when defined, includes the total abilities possessed to rightly perceive one’s emotions, the skills to access them appropriately and ultimately generate relevant emotions in order to assist the cognitive process, to understand them correctly, to have emotional knowledge and, lastly, to reflectively regulate them (Mayer et al. 2004). Individuals scoring high on EI are not just capable of recognizing their own emotions but are also able to understand others’ feelings and emotions as well. They are also skilled at ways in which they can better control and manage them. MW, as defined by Bhatia (1982), is the ability that an individual possesses to balance his feelings, desires, ambitions and ideals in daily life with ease and efficiency. In short, mental health serves as the base for an individual’s well-being and, as a whole, for the harmonious functioning of a community and, at the global level, for the effective functioning of organizations and the entire society. Due to the changing society and trends in technology, there arises a need to understand not only the concern for the mental health of young people but also alternative and sustainable solutions to scale this problem globally. Key developmental factors for emotional competencies start developing right from childhood until early adolescence, and they are most necessary for promoting mental health in later adolescence and adulthood. The multiple dimensions of EI are associated directly as well as indirectly with better social functioning. In other words, it can be concluded that individuals who are capable enough to recognize and regulate their own emotions and those of others are able to initiate and maintain healthy social relationships, not only with family but with peers as well, and also at their workplace, thus maintaining a healthier environment. In today’s scenario of increasing concerns, the EI and MW of adolescent students are becoming a central point for researchers throughout the world because of the significant role they play, and the correlation they have, in determining successful careers, participation in global society and success in personal life.

2 Background of the Study
There are numerous recent studies, conducted by psychologists and behavioural scientists in the domain of education, on the relation between EI and the mental health status of individuals


pre-pandemic. Even in the best of times, managing oneself and staying emotionally connected to society can be a major challenge, not just for students but for teachers as well. As the globe layers on the new realities post-COVID-19, it has become much tougher, and the need to popularize EI has become the need of the hour. Bar-On (2002) agrees that self-awareness, tolerance to stress, self-actualization, interpersonal and intrapersonal relationships, reality testing, dealing with optimism, happiness, etc. are the qualities that help in determining the EI of a person. Many psychologists strongly believe that learners who receive an exclusively scholastic environment without any social learning are not well prepared for future challenges, both as individuals and as responsible members of society. There may arise circumstances in daily life wherein the most intelligent child in school may not succeed as efficiently as their lower intellectual counterparts. These examples are particularly evident in almost every field, including administration, politics, management and business (Singh 2002). Harrod and Scheer (2005) conducted a study of the difference between EI scores in males and females. Their study was conducted on 200 youths of 16 to 19 years. The results revealed that females reported higher EI levels than males. The study of Khan and Ishfaq (2016) also revealed similar findings, where a significant difference in EI among adolescents was reflected with reference to variables like gender, socioeconomic status and type of school. Senad (2017) revealed through their study that CBSE students have a higher level of EI than ICSE students and, also, that female students are at a higher level of EI than their male counterparts in understanding motivation and empathy. Studies conducted by Mayer and Gehar (1996) and Mayer, Caruso and Salovey (1999) have also supported, through their findings, that women are more likely to score higher on measures of EI in personal as well as professional settings than men. Researchers have found that a meaningful positive relationship exists between EI and MH (Sasanpour et al. 2012). Various strata of research studies claim that higher levels of EQ are associated with better mental health and vice-versa (Ruiz-Aranda et al. 2012). Ciarrochi et al. (2000) in their study posit that EI protects people from major stress and other related mental health problems and illness, and helps them lead a better life by developing skills of adaptation and survival. Schutte et al. (2007) in their study revealed that better mental health is associated with higher EI scores. Studies have shown that there is a distinct impact of EI components on MH (Arteche et al. 2008). Johnson et al. (2009) in their study also showed that people with high levels of EI recognize their feelings of stress better than those who score less, have the ability to better manage their own and others’ emotions, and have good mental health. In studies deriving the probability or prediction of mental health using ML, not much has been explored. However, some promising results were achieved by Srividya et al. (2018), who applied SVM, RF, DT and KNN to predict the mental health of a targeted audience; the accuracy of their models was, however, below 70%. In the study of Zhang et al. (2019), Gradient Boosting Classifiers were implemented on a dataset of over 10,000 samples with 298 factors, collected using an online survey, achieving an accuracy of 90%. Tate et al. (2020) used SVM and RF on a wider dataset to predict the mental health of adolescents and achieved an accuracy of above 90%.


In the present study, the attributes are first reduced, then a range of ML techniques is implemented, and the model giving the highest accuracy is recommended.

3 Methodology
3.1 Dataset
The present research study implemented a survey research methodology post the COVID-19 pandemic in order to understand the level of EI amongst adolescents. This is an approved and widely used research method in the humanities. The questionnaire titled ‘The Assessing Emotions Scale’, designed by Schutte et al. (1998), is used for the present study. It is a self-report inventory consisting of items on Emotional Intelligence covering the assessment of emotion, the regulation of emotion, and the utilization of emotion in solving problems, in the self and in others. The tool consists of 33 items and measures EI in four dimensions: perception of emotions, managing emotions in the self, social skills, and utilizing emotions (Schutte et al. 2007). The scores obtained from the targeted population, adolescents belonging to the metropolitan city of Mumbai in India, are measured on a 5-point Likert scale ranging from the highest value of Strongly Agree to the lowest value of Strongly Disagree. Items 5, 28 and 33 were reverse scored. A higher score indicated a higher level of EI. Data was collected by a simple random technique. The reliability of the tool was calculated: Cronbach’s Alpha was found to be 0.92, the Split-Half Correlation was found to be 0.83, and the Split-Half with Spearman-Brown Adjustment was 0.90, which is high. Thus the tool was found to be reliable for the present study. Data from 532 samples was collected from junior colleges across the south of Mumbai city, India, where the targeted audience lies in the age group of 16–18 years.
3.2 Data Preprocessing
As the tool was somewhat long (33 questions), some students skipped a few answers, making their samples unusable as collected. In order to avert the loss of data of this nature, missing data was imputed by neural network prediction, using a multi-layered feed-forward neural network model that predicts the missing value and a multi-layered feed-backward neural network model that imputes missing values using estimated values from predictor networks (Manghirmalani and Kulkarni 2017). For this study, to enhance and validate the class labels, the Mean Opinion Score (MOS) is computed. The MOS, given in the formula below, is calculated as the arithmetic mean over the single ratings performed by individuals for a given stimulus in a subjective quality evaluation test. Thus:

$$\mathrm{MOS} = \frac{1}{N}\sum_{n=1}^{N} R_n$$

where $R_n$ is a single rating for a set of $N$ subjects. This output provides the agreement of the derived score with the actual score received by the subject.
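The preprocessing described above involves two computable steps: neural-network imputation of the skipped answers and the MOS average. The following minimal sketch illustrates both on synthetic Likert data; it substitutes scikit-learn's MLPRegressor for the feed-forward/feed-backward networks cited from Manghirmalani and Kulkarni (2017), so the model choice, network size and the synthetic responses are all assumptions rather than the authors' exact method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic stand-in data: 532 respondents x 33 Likert items (1-5), ~1% missing.
X = rng.integers(1, 6, size=(532, 33)).astype(float)
X[rng.random(X.shape) < 0.01] = np.nan

# Crude starting point: fill every gap with the item mean so each row can be
# used as network input.
col_means = np.nanmean(X, axis=0)
X_filled = np.where(np.isnan(X), col_means, X)

# For each item with missing answers, train a small network on the respondents
# who answered it, then predict the gaps from the other 32 items.
X_imputed = X.copy()
for j in range(X.shape[1]):
    missing = np.isnan(X[:, j])
    if not missing.any():
        continue
    features = np.delete(X_filled, j, axis=1)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(features[~missing], X[~missing, j])
    X_imputed[missing, j] = np.clip(net.predict(features[missing]), 1, 5)

# Reverse-score items 5, 28 and 33 (1-based in the text, 0-based here).
for j in (4, 27, 32):
    X_imputed[:, j] = 6 - X_imputed[:, j]

# Mean Opinion Score per item: the arithmetic mean of the N individual ratings.
mos = X_imputed.mean(axis=0)
print(mos.round(2))
```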


The data (sample size of 532) was divided into two clusters, viz. male and female adolescents. Cluster-M has 245 samples, whereas Cluster-F has 287 samples.
3.3 Hypothesis Testing for EI
H0: There is no significant gender difference in the level of EI among adolescent students.
The sample for the present study has 532 adolescent pupils residing in Mumbai, India. Out of the total sample, 245 were male and 287 were female students. The mean score for female students was 130.99 and for male students was 115.48, as depicted in Table 1. Out of the total sample, 46.05% were male adolescents and 53.95% were female adolescents.
Table 1. Sample size

     Gender   N     Mean       Std. Deviation   Std. Error Mean
EI   Male     245   115.4816   26.34475         1.68310
EI   Female   287   130.9965   18.11029         1.06902

The data for the present study was analysed using the software known as the Statistical Package for the Social Sciences (SPSS), also known as IBM SPSS Statistics, which is used for the analysis of statistical data, whether descriptive and bivariate statistics, numeral outcome predictions, or predictions for identifying similar or different groups. t-tests in statistics are used to compare the average scores of two groups. There are two types of t-test: the independent-samples t-test, which is used to compare the mean scores of two independent clusters, and the paired-samples t-test, which is used to compare the average scores within a group on two or more different occasions. For this study, the independent t-test was computed to identify whether there is any difference between the scores of males and females on the EI Scale. Table 2 represents the findings. The result shows that female pupils achieved higher ratings than their male counterparts on EI, viz. they are seen to have a higher EI at the adolescent age (based on the sample collected). Hence we can say that adolescent females are more emotionally intelligent than males. A significant difference (p < .01) between males and females on EQ is found, as depicted in Table 2.


Table 2. t-test to compare the EI scores of Male and Female pupils

                                     Levene's Test             t-test
                                     (Equality of Variances)   (Equality of Means)
                                     F          Sig
EI   Equal variances assumed         46.682
     Equal variances not assumed
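The analysis summarised in Tables 1 and 2 can be reproduced with standard statistical tooling. The sketch below runs Levene's test and the independent-samples (Welch) t-test with SciPy on synthetic scores drawn to match Table 1's summary statistics; since the drawn samples are stand-ins rather than the authors' raw data, the resulting statistics will only approximate those in Table 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic EI scores matching Table 1's summary statistics
# (Male: n=245, mean 115.48, sd 26.34; Female: n=287, mean 131.00, sd 18.11).
male = rng.normal(115.48, 26.34, 245)
female = rng.normal(131.00, 18.11, 287)

# Levene's test checks the equal-variance assumption (left columns of Table 2).
lev_f, lev_p = stats.levene(male, female)

# With unequal variances indicated, the "equal variances not assumed" row of
# Table 2 corresponds to Welch's t-test.
t_stat, t_p = stats.ttest_ind(male, female, equal_var=False)

print(f"Levene F = {lev_f:.3f}, p = {lev_p:.4f}")
print(f"Welch t  = {t_stat:.3f}, p = {t_p:.4f}")
```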