Information Technology and Systems: ICITS 2023, Volume 1 (ISBN 978-3-031-33257-9, eBook ISBN 978-3-031-33258-6)

Table of contents:
Preface
Contents
Big Data Analytics and Applications
On the Use of Parallel Architectures in DNA Methylation Analysis
1 Introduction
2 Sample Alignment and Methylation Status Inference
3 Detection of Differentially Methylated Regions (DMRs)
4 Artificial Intelligence for Methylation Analysis
5 Conclusions
References
Automation for IBNR Provision Using Power BI and Excel
1 Introduction
1.1 Generalities
2 Methodology
3 Results
4 Conclusions
References
Visualizing the Risks of De-anonymization in High-Dimensional Data
1 Introduction
2 Related Work and Contribution
3 Methods
3.1 k-Anonymity
3.2 l-Diversity
3.3 Spatiotemporal Data Anonymity
3.4 Textual Data Anonymity
3.5 Financial Transactions Data Anonymity
3.6 Aggregation-Based Data Anonymity
4 Application
5 Conclusion and Future Work
References
Computer Networks, Mobility and Pervasive Systems
Wireless Mesh for Schools in the Emberá Comarca
1 Introduction
2 Methodology
3 Internet Connections
4 Why Mesh Networks?
5 Model
6 Conclusions
7 Future Work
References
Implementation of a 2.4 GHz Biquad Antenna for a Microwave and Antenna Laboratory
1 Introduction
2 Development
3 Results
4 Conclusions
References
TaguaSpot: A Versatile Radio Access Networks Design for the Integration of Indigenous Communities to the Information Society
1 Introduction
2 Connectivity Challenges in Indigenous Communities
2.1 The Ella Puru Community, a Case Study
2.2 Potential Project Risks
2.3 Potential Benefits
3 Design Requirements of a Versatile Access Network for Indigenous Communities
3.1 Requirements
3.2 Design Considerations
4 TaguaSpot High-Level Design
5 Conclusions
References
Ethics, Computers and Security
Vulnerability Analysis in the Business Organization
1 Introduction
2 Methodology
2.1 Vulnerability Analysis Tools
2.2 Vulnerability Analysis Process
2.3 Environment Description
3 Results
3.1 Lifetime of Vulnerability in the Information System
4 Conclusions
References
Information Security: Identification of Risk Factors Through Social Engineering
1 Introduction
2 Risk Factors for Social Engineering Attacks
2.1 Computational Techniques for Social Engineering
2.2 Non-computational Techniques for Social Engineering
3 Methodology
3.1 Social Engineering Attack Vectors
4 Results
5 Techniques to Prevent Social Engineering Attacks
6 Conclusion
References
The Information Security Function and the CISO in Colombia: 2010–2020
1 Introduction
2 Literature Review
3 Methodology
3.1 Research Instrument
3.2 Sample
4 Results
4.1 Function of the Information Security Area and Role of the Professional
4.2 Evolution of the Information Security Professional Profile
5 Discussion
5.1 Security Function, Steadily Growing
5.2 Cybersecurity Professional Profile, a Path of Experience and Expansion
6 Conclusions
References
Privacy and Cyber-Security Using Information Systems: A Proposal for Knowledge, Skills, and Attitudes
1 Introduction
2 Background
3 Methodology
4 Results and Discussions
5 Future Work
6 Conclusions
References
A Case Study of a Privacy-Invading Browser Extension
1 Introduction
2 The Attack Scenario
3 Implementation
4 Discussion
5 Conclusion
References
Human-Computer Interaction
An Animatronic Robot to Provide Therapeutic Support to Preschool Children Attending Medical Consultations
1 Introduction
2 Related Work
3 System Architecture
4 Movement Control System
5 Experimentation and Results
5.1 Structural Analysis
6 Conclusions
References
Evaluating the Engagement by Design Methodology in Crowdsourcing Initiatives
1 Introduction
2 Related Works
3 Engagement by Design Cards
4 Application of the Engagement by Design Methodology
5 Analysis of Cards Choice
6 Evaluation of Ideas Generated by the Engagement by Design Methodology
6.1 Results
7 Lessons Learned
8 Conclusion
References
Applying Semiotic Engineering in Game Pre-production to Promote Reflection on Player Privacy
1 Introduction
2 Theoretical Foundation
2.1 Semiotic Engineering
2.2 Game Development
2.3 Privacy in Games
3 Rationale for Using Semiotic Engineering
4 Proposed Approach
4.1 Guiding Questions to Reflect on Privacy
4.2 Example of How to Answer the Questions
5 Final Remarks and Future Work
References
The Consumer Influence of Digital Coupon Distribution Through a Referral Program
1 Introduction
2 Literature Review
2.1 Sales Promotion and Digital Coupon
2.2 Referral Program
2.3 Consumer Perception: Principles of Influence and Decision-Making
3 Methodology
3.1 Research Question and Hypotheses
3.2 Methodology, Variables, Data and Sample
3.3 Study Context and Procedures
3.4 Data, Factor, Reliability Analysis and Anova
3.5 Results
4 Conclusions
4.1 Final Considerations, Scientific and Managerial Contributions
4.2 Limitations and Future Research
References
Digital Travel – A Study of Travellers’ Views of a Digital Visit to Mexico
1 Introduction
2 Methods and Focus: A Digital Visit to Mexico
2.1 Characteristics of the Participants and the Survey
2.2 Visiting Mexico in Forza Horizon 5
3 Data Analysis and Results
4 Discussion
References
Health Informatics
Medical Entities Extraction with Metamap and cTAKES from Spanish Texts
1 Introduction
1.1 UMLS
1.2 Metamap
1.3 cTAKES
2 Methodology
3 Results
4 Comparison of Metamap and cTAKES
5 Conclusion
References
Health Records Management in Trust Model Blockchain-Based
1 Introduction
2 State of the Art
2.1 Blockchain Technology
2.2 Health Records Concepts
3 Related Work
4 Methodology
5 Proposed Trust Model
6 Conclusion
References
Digital Transformation and Adoption of Electronic Health Records: Critical Success Factors
1 Introduction
2 Background
2.1 Digital Transformation and Electronic Health Records
2.2 Latin America and the Caribbean, and Barriers to EHR Adoption
3 Proposal
3.1 Critical Success Factors (CSFs) for Evaluating the Conditions for Health Services
3.2 Configuration of CSF to EHR Adoption
3.3 Operationalization of the CSFs
4 Towards the Evaluation and Application of the CSFs
4.1 Case Study
5 Conclusions
References
Comorbidity Analysis in the Mexican Population Affected by SARS-CoV2
1 Introduction
1.1 Data Acquisition
1.2 Data Processing
2 Affectation Index
3 Comorbidity Statistics
4 Conclusions
References
Practical Guidelines for Developing Secure FHIR Applications
1 Introduction
1.1 Motivation
1.2 FHIR Standard and Security
1.3 The Security Challenge of FHIR API Based Applications
2 Methods and Materials
2.1 Guidelines
2.2 Testing
3 Results
3.1 Guidelines
3.2 Testing
4 Discussion
References
Intelligent System to Provide Support in the Analysis of Colposcopy Images Based on Artificial Vision and Deep Learning: A First Approach for Rural Environments in Ecuador
1 Introduction
2 Related Work
3 System Architecture
4 Pilot Experiment and Preliminary Results
4.1 Training the Neural Network
4.2 Measurement of Perception of the Mobile Application and the Web Application
5 Conclusions
References
Comparison of Transfer Learning vs. Hyperparameter Tuning to Improve Neural Networks Precision in the Early Detection of Pneumonia in Chest X-Rays
1 Introduction
2 Related Work
3 Methodology
3.1 CNN Architecture
3.2 VGGNet
3.3 ResNet
3.4 MobileNet
3.5 Hyperparameter Tuning
4 Experiment and Preliminary Results
5 Conclusion
References
A Smart Mirror Based on Computer Vision and Deep Learning to Support the Learning of Sexual Violence Prevention and Self-care Health in Youth with Intellectual Disabilities
1 Introduction
2 Related Work
3 System Architecture
4 Experimentation and Results
5 Conclusions
References
Intelligent and Decision Support Systems
Weight Prediction of a Beehive Using Bi-LSTM Network
1 Introduction
2 Database
2.1 LSTM Network
2.2 Bi-LSTM Network
2.3 Performance Measures
2.4 Bi-LSTM Configuration
3 Results and Discussion
4 Conclusions
References
Criteria Selection and Decision-Making Support in IT Governance: A Study via Fuzzy AHP Applied to a Multi-institutional Consortium
1 Introduction
2 Theoretical Reference
3 Methodology
3.1 Business Analysis
3.2 Preparation
3.3 Application of the Questionnaire for Data Collection
3.4 Fuzzy AHP Application—Results
4 Application of the Methodology
4.1 Selection of Criteria and Alternative
4.2 Application of Fuzzy AHP
5 Conclusions
References
Machine Learning Model Optimization for Energy Efficiency Prediction in Buildings Using XGBoost
1 Introduction
2 Methodology
2.1 Data Ingestion
2.2 Data Pre-processing
2.3 Building the Model
2.4 Model Tuning
3 Results
4 Discussion
5 Conclusion
References
Analysis of the Azores Accommodation Offer in Booking.Com Using an Unsupervised Learning Approach
1 Introduction
2 Methodology
3 Results and Discussion
4 Conclusions and Further Work
References
AD-DMKDE: Anomaly Detection Through Density Matrices and Fourier Features
1 Introduction
2 Background and Related Work
2.1 Anomaly Detection
2.2 Anomaly Detection Baseline Methods
3 Anomaly Detection Through Density Matrices and Fourier Features (AD-DMKDE)
3.1 Random Fourier Features and Adaptive Fourier Features
3.2 Density Matrix Calculation
3.3 Quantum Density Estimation
3.4 Threshold Calculation and Prediction Phase
4 Experimental Evaluation
4.1 Experimental Setup
4.2 Results and Discussion
5 Conclusion
References
Enhancing Sentiment Analysis Using Syntactic Patterns
1 Introduction
2 Related Work
2.1 Sentiment Analysis
2.2 Sentiment Analysis Processes
3 Sentiment Classification and Analysis
3.1 The Application Case
3.2 The Initial Process
3.3 Using Syntactic Patterns
3.4 Results Analysis
4 Conclusions and Future Work
References
Feature Selection for Performance Estimation of Machine Learning Workflows
1 Introduction
2 Preliminaries
3 Contribution
4 Experiments
5 Conclusion
References
Development of a Hand Gesture Recognition Model Capable of Online Readjustment Using EMGs and Double Deep-Q Networks
1 Introduction
2 Literature Review
2.1 Double Deep-Q Network
2.2 Variability
2.3 Online Learning and Catastrophic Forgetting
3 Materials and Methods
3.1 Architecture
3.2 Online Readjustment Evaluation
4 Results and Analysis
4.1 Offline/Static Results
4.2 Online Readjustment Evaluation
5 Conclusions and Future Works
References
Information and Knowledge Management
Capitalization of Healthcare Organizations Relationships’ Experience Feedback of COVID’19 Management in Troyes City
1 Introduction
2 Managing COVID’19 Crisis in Troyes City
3 Knowledge Capitalization
4 Relationships Capitalization
4.1 Actors Interviews
4.2 First Results
5 Conclusion
References
Factors for the Application of a Knowledge Management Model in a Higher Education Institution
1 Introduction
2 Literature Review
3 Material and Methods
4 Results
4.1 Categories
5 Discussion and Analysis of Results
6 Conclusions
References
eWom, Trust, and Perceived Value Related to Repurchases in the e-commerce Sector of Department Stores
1 Introduction
2 Literature Revision
2.1 Trust
2.2 eWom
2.3 Perceived Value
2.4 Repurchase
3 Methodology
4 Results
5 Conclusions
References
Transitioning to Digital Services: Assessing the Impact of Digital Service Efforts on Performance Outcomes and the Moderating Effect of Service Promotion
1 Introduction
2 Theoretical Background
2.1 Servitization, Digital Services and Digital Service Performance
2.2 Service Offerings and Promotion of Services on Firm Websites
3 Methodology
3.1 Data Collection and Sample of Survey
3.2 Constructs Measurement of the Survey
3.3 AI Assessment of Services Promoted on Company Websites
3.4 Data, Psychometric Properties and Common Method Variance
4 Results
5 Discussion
5.1 Research Contribution and Managerial Implications
6 Limitations and Future Research
References
IoT Devices Data Management in Supply Chains by Applying BC and ML
1 Introduction
2 Literature Review
2.1 VeChain
2.2 Elliptic Curve Cryptography
2.3 Proof of Learning
2.4 Machine Learning and Blockchain on IIoT
3 Design
4 Implementation
4.1 Communication Protocols Between Devices
4.2 Blockchain Implementation
5 Results
5.1 Test Methodology
5.2 Obtained Results
6 Conclusions
7 Future Work
References
Raising Awareness of CEO Fraud in Germany: Emotionally Engaging Narratives Are a MUST for Long-Term Efficacy
1 Introduction to the Current Situation
2 The Main Findings from Literature Reviews
2.1 CEO Fraud Attacks and Learning
2.2 Learning, Narratives, and Serious Games
3 The Project’s Serious Games Simulating CEO Fraud Attacks
3.1 Background of the German Project
3.2 The Analog CEO Fraud Learning Scenario
3.3 The Digital CEO Fraud Learning Scenario
4 Discussion
5 Outlook
References
A Module Based on Data Mining Techniques to Analyze School Attendance Patterns of Children with Disabilities in Cañar - Ecuador
1 Introduction
2 Related Work
3 Methodology
4 Experiment and Preliminary Results
5 Limitations
6 Conclusions
References
Improving Semantic Similarity Measure Within a Recommender System Based-on RDF Graphs
1 Introduction
2 Problem Statement
3 Related Works
3.1 Recommender Systems
3.2 Semantic Similarity Measure
3.3 Vector Representation of Words
4 Measure of Similarity Within a Recommender System
4.1 Recommender System for the Purchase/Sale of Used Vehicles
4.2 Semantic Similarity Measure Between Triplets
5 Experiments
6 Conclusion and Perspectives
References
Comparative Analysis of the Prediction of the Academic Performance of Entering University Students Using Decision Tree and Random Forest
1 Introduction
2 Related Works
2.1 Learning Concepts
2.2 Theoretical Bases
2.3 Synthetic Minority Oversampling Technique (SMOTE)
3 Methodology
3.1 Research Design
3.2 Population and Sample
4 Results and Discussion
5 Conclusions and Future Works
References
Intellectual Capital and Public Sector: There are Similarities with Private Sector?
1 Introduction
2 Literature Review
3 Conclusions
References
EITBOK, ITBOK, and BIZBOK to Educational Process Improvement
1 Introduction
2 Methodology
3 Results
4 Discussion
5 Conclusions
References
Rating Pre-writing Skills in Ecuadorian Children: A Preliminary Study Based on Transfer Learning, Hyperparameter Tuning, and Deep Learning
1 Introduction
2 Related Work
3 Methodology
3.1 Transfer Learning
3.2 Data Augmentation
3.3 Hyperparameter Tuning
3.4 CNN (Convolutional Neural Networks)
3.5 VGG16
3.6 Resnet50
3.7 InceptionV3
4 Experiment and Preliminary Results
4.1 Geometric Shapes Classification
4.2 Geometric Shapes Rating
5 Limitations
6 Conclusions
References
Geographic Patterns of Academic Dropout and Socioeconomic Characteristics Using Clustering
1 Introduction
2 Methodology
2.1 Research and Information Technics
2.2 Methodology Process
3 Results
3.1 Discussion
3.2 Conclusions
References
Implementation of Digital Resources in the Teaching and Learning of English in University Students
1 Introduction
2 Method
2.1 Level and Design
2.2 Population and Sample
2.3 Technique and Instruments for Data Analysis
2.4 Instrument Reliability
3 Results
4 Discussion and Conclusion
References
Communicational Citizenship in the Media of Ecuador
1 Introduction
2 Objectives
3 Methodology
4 Results
5 Discussion and Conclusions
References
A Multimedia Resource Ontology for Semantic E-Learning Domain
1 Introduction
2 Background
2.1 Ontology and Knowledge Inferencing
2.2 Multimedia Ontologies
3 Multimedia Resources Ontology
4 Future Works
References
Business Process Management in the Digital Transformation of Higher Education Institutions
1 Introduction
2 Understanding Business Process Management in the Digital Transformation Context
3 An Approach for Business Process Management in the Digital Transformation of Higher Education Institutions
3.1 Subprocess: Analysis and Lifting of Process Improvements
4 Processes Analysis and Lifting at Salesian Polytechnic University
5 Conclusions and Future Works
References
Collaborative Strategies for Business Process Management at Salesian Polytechnic University
1 Introduction
2 Collaboration and Business Process Management
3 Research Methodology
4 Collaborative Strategies for Business Process Management at Salesian Polytechnic University
4.1 Interactions Among Processes Coordinator and Processes Analysts
4.2 Interactions Among Process Analysts and Expert Users
4.3 Interactions Among Process Analysts
4.4 Interaction Among Process Coordinator and Decision-Makers
5 Conclusions
References
Does Sharing Lead to Smarter Products? Managing Information Flows for Collective Servitization
1 Introduction
2 Sharing-Induced Design Principles
2.1 Supply and Demand Dynamics
2.2 Market-Maker Incentives
2.3 Aftermarket Control and Information
2.4 Product-Design Principles
3 Collective Servitization
4 Product Intelligence
5 Conclusion
References
Analysis of On-line Platforms for Citizen Participation in Latin America, Using International Regulations
1 Introduction
2 Overview
2.1 E-Government
2.2 Citizen E-Participation
2.3 Citizen Participation Platforms
2.4 Citizen Participation in Latin America
3 e-Participation Reference Frameworks
3.1 Canada's Public Participation Guide
3.2 United Nations e-Government
3.3 ePfw Framework (e-Participation Framework)
4 Methodology
4.1 Evaluation Criteria
4.2 Case Study: South American Citizen-Participation Platforms
4.3 Citizen Participation Platforms in Latin America
4.4 Analysis of Citizen e-Participation Platforms
5 Results and Evaluation
5.1 Reasonable Time
5.2 e-Participation Mechanisms in Latin America
5.3 e-Participation Levels
5.4 Transparency
5.5 Platform Usage
5.6 e-Participation Tools
6 Conclusions
References
Managerial Information Processing in the Era of Big Data and AI – A Conceptual Framework from an Evolutionary Review
1 Introduction
2 Evolutionary Mapping of Managerial Information Processing
2.1 Information Behavior – [22] IUE Theory
2.2 Mintzberg’s Managerial Information Behavior
2.3 Choo’s Model of Human Cognition in Managerial Information Processing
2.4 The Cognitive Views of Organisational Information Processing
2.5 Performativity of Intelligent Systems in Distributed Decision Making
3 Discussion and the Framework
3.1 The Driving Forces
4 Conclusion
References
Empowering European Customers: A Digital Ecosystem for Farm-to-Fork Traceability
1 Introduction
2 State of the Art: Food Traceability Solutions
3 A New Tool for Farm-to-Fork Traceability
3.1 Food Classification and Description: Ingredients
3.2 A Simplified Model for the Food Supply Chain
3.3 TrFood Software Platform
4 Study Case: Methods, Materials and Results
5 Conclusions
References
Design of a Task Delegation System in a Context of Standardized Projects
1 Introduction
2 Background
2.1 Task Delegation
2.2 Performance Evaluation
2.3 Standardized Projects
3 Research Methodology
3.1 Planning and Implementation of the SMS
3.2 RQ1. What Criteria are Applied for the Task Delegation? are Any of them Related to Performance?
3.3 RQ2. What Computer Tools are Reported? Which Ones Allow Tasks to be Delegated Based on Established Criteria?
3.4 RQ3. What Method of Performance Evaluation is Used in Organizations?
4 Implementation of Delegator
4.1 High Level Requirements
4.2 Main Algorithm
4.3 Delegator Architecture
4.4 Test Cases
5 Final Discussion
References
eSFarmer - A Solution for Accident Detection in Farmer Tractors
1 Introduction
2 Accidents Overview
3 Proposal Solution
3.1 Client Component
3.2 Broker
3.3 Server Component
3.4 Mobile Application
4 Conclusion
4.1 Future Work
References
An Approach to Identifying Best Practices in Higher Education Teaching Using a Digital Humanities Data Capturing and Pattern Tool
1 Introduction
2 Related Works
3 Modelling Best Practices in Digital Teaching
3.1 The MUSE4Anything Tool
3.2 Data Collection Using the MUSE4Anything Tool
4 Result and Discussion: Data Model
5 Conclusion
References

Lecture Notes in Networks and Systems 691

Álvaro Rocha · Carlos Ferrás · Waldo Ibarra, Editors

Information Technology and Systems ICITS 2023, Volume 1

Lecture Notes in Networks and Systems Volume 691

Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Álvaro Rocha · Carlos Ferrás · Waldo Ibarra Editors

Information Technology and Systems ICITS 2023, Volume 1

Editors Álvaro Rocha ISEG University of Lisbon Lisbon, Portugal

Carlos Ferrás Facultade de Geografía e Historia University of Santiago de Compostela Santiago de Compostela, Spain

Waldo Ibarra Departamento de Informática Universidad Nacional de San Antonio Abad del Cusco Cusco, Peru

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-33257-9 ISBN 978-3-031-33258-6 (eBook) https://doi.org/10.1007/978-3-031-33258-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book is composed of the papers written in English and accepted for presentation and discussion at The 2023 International Conference on Information Technology and Systems (ICITS'23). This Conference had the support of Universidad Nacional de San Antonio Abad del Cusco (UNSAAC), Information and Technology Management Association (ITMA), IEEE Systems, Man, and Cybernetics Society, and Iberian Association for Information Systems and Technologies (AISTI). It took place in Cusco, Peru, during April 24–26, 2023. The 2023 International Conference on Information Technology and Systems (ICITS'23) is an international forum for researchers and practitioners to present and discuss the most recent innovations, trends, results, experiences, and concerns in the several perspectives of Information Technology and Systems. The Program Committee of ICITS'23 was composed of a multidisciplinary group of 261 experts who are intimately concerned with Information Systems and Technologies. They had the responsibility for evaluating, in a 'double-blind review' process, the papers received for each of the main themes proposed for the Conference: A) Information and Knowledge Management; B) Organizational Models and Information Systems; C) Software and Systems Modeling; D) Software Systems, Architectures, Applications, and Tools; E) Multimedia Systems and Applications; F) Computer Networks, Mobility, and Pervasive Systems; G) Intelligent and Decision Support Systems; H) Big Data Analytics and Applications; I) Human–Computer Interaction; J) Ethics, Computers, and Security; K) Health Informatics; L) Information Technologies in Education; and M) Media, Applied Technology, and Communication. ICITS'23 received 362 contributions from 31 countries around the world. The papers accepted for presentation and discussion at the Conference are published by Springer (this book) and by RISTI and will be submitted for indexing by WoS, EI-Compendex, Scopus, and/or Google Scholar, among others.

We acknowledge all of those that contributed to the staging of ICITS’23 (authors, committees, workshop organizers, and sponsors). We deeply appreciate their involvement and support that was crucial for the success of ICITS’23. Cusco, Peru

Álvaro Rocha Carlos Ferras Sexto Waldo Ibarra

Contents

Big Data Analytics and Applications

On the Use of Parallel Architectures in DNA Methylation Analysis . . . 3
Juan M. Orduña, Lisardo Fernández, and Mariano Pérez

Automation for IBNR Provision Using Power BI and Excel . . . 13
Christian Vaca, Fernando Gamboa, Raquel Narvaez, and Renato M. Toasa

Visualizing the Risks of De-anonymization in High-Dimensional Data . . . 27
Emmanouil Adamakis, Michael Boch, Alexandros Bampoulidis, George Margetis, Stefan Gindl, and Constantine Stephanidis

Computer Networks, Mobility and Pervasive Systems

Wireless Mesh for Schools in the Emberá Comarca . . . 41
Aris Castillo, Armando Jipsion, and Carlos Juiz

Implementation of a 2.4 GHz Biquad Antenna for a Microwave and Antenna Laboratory . . . 49
Jhonny Coyago, Edgar González, Flavio Morales, and Rosario Coral

TaguaSpot: A Versatile Radio Access Networks Design for the Integration of Indigenous Communities to the Information Society . . . 59
Iván Armuelles Voinov, Joaquín Chung, Borja Bordel, Ramón Alcarria, Tomás Robles, and Rajkumar Kettimuthu

Ethics, Computers and Security

Vulnerability Analysis in the Business Organization . . . 73
Petr Doucek, Milos Maryska, and Lea Nedomová

Information Security: Identification of Risk Factors Through Social Engineering . . . 83
Lidice Haz, María Gabriela Campuzano, Ivette Carrera, and Ginger Saltos

The Information Security Function and the CISO in Colombia: 2010–2020 . . . 95
Jeimy J. Cano M. and Andrés R. Almanza J.

Privacy and Cyber-Security Using Information Systems: A Proposal for Knowledge, Skills, and Attitudes . . . 109
Anícia Rebelo Trindade and Célio Gonçalo Marques

A Case Study of a Privacy-Invading Browser Extension . . . 127
Robin Carlsson, Sampsa Rauti, and Timi Heino

Human-Computer Interaction

An Animatronic Robot to Provide Therapeutic Support to Preschool Children Attending Medical Consultations . . . 137
F. Castillo-Coronel, E. Saguay-Paltín, S. Bravo-Buri, V. Robles-Bykbaev, and M. Amaya-Pinos

Evaluating the Engagement by Design Methodology in Crowdsourcing Initiatives . . . 147
Leonardo Vasconcelos, Daniela Trevisan, and José Viterbo

Applying Semiotic Engineering in Game Pre-production to Promote Reflection on Player Privacy . . . 159
Mônica da Silva, José Viterbo, Luciana Cardoso de Castro Salgado, and Eduardo de O. Andrade

The Consumer Influence of Digital Coupon Distribution Through a Referral Program . . . 171
Jorge Pereira and Pedro Quelhas Brito

Digital Travel – A Study of Travellers’ Views of a Digital Visit to Mexico . . . 185
Ingvar Tjostheim and John A. Waterworth

Health Informatics

Medical Entities Extraction with Metamap and cTAKES from Spanish Texts . . . 197
Mauricio Sarango and Ruth Reátegui

Health Records Management in Trust Model Blockchain-Based . . . 205
António Brandão

Digital Transformation and Adoption of Electronic Health Records: Critical Success Factors . . . 219
Luis E. Mendoza, Lornel Rivas, and Cristhian Ganvini

Comorbidity Analysis in the Mexican Population Affected by SARS-CoV2 . . . 233
Jesús Manuel Olivares Ceja, Imanol Marianito Cuahuitic, Marijose Garces Chimalpopoca, Marco Antonio Jesús Silva Valdez, and César Olivares Espinoza

Practical Guidelines for Developing Secure FHIR Applications . . . 245
Alexander Mense, Lukas Kienast, Ali Reza Noori, Markus Rathkolb, Daniel Seidinger, and João Pavão

Intelligent System to Provide Support in the Analysis of Colposcopy Images Based on Artificial Vision and Deep Learning: A First Approach for Rural Environments in Ecuador . . . 253
A. Loja-Morocho, J. Rocano-Portoviejo, B. Vega-Crespo, Vladimir Robles-Bykbaev, and Veronique Verhoeven

Comparison of Transfer Learning vs. Hyperparameter Tuning to Improve Neural Networks Precision in the Early Detection of Pneumonia in Chest X-Rays . . . 263
Paúl Idrovo-Berrezueta, Denys Dutan-Sanchez, and Vladimir Robles-Bykbaev

A Smart Mirror Based on Computer Vision and Deep Learning to Support the Learning of Sexual Violence Prevention and Self-care Health in Youth with Intellectual Disabilities . . . 273
C. Peña-Farfán, F. Peralta-Bautista, K. Panamá-Mazhenda, S. Bravo-Buri, Y. Robles-Bykbaev, V. Robles-Bykbaev, and E. Lema-Condo

Intelligent and Decision Support Systems

Weight Prediction of a Beehive Using Bi-LSTM Network . . . 285
María Celeste Salas, Hernando González, Hernán González, Carlos Arizmendi, and Alhim Vera

Criteria Selection and Decision-Making Support in IT Governance: A Study via Fuzzy AHP Applied to a Multi-institutional Consortium . . . 297
José Fábio de Oliveira, Paulo Evelton Lemos de Sousa, and Ana Carla Bittencourt Reis

Machine Learning Model Optimization for Energy Efficiency Prediction in Buildings Using XGBoost . . . 309
Giancarlo Sanchez-Atuncar, Victor Manuel Cabrejos-Yalán, and Yesenia del Rosario Vasquez-Valencia

Analysis of the Azores Accommodation Offer in Booking.Com Using an Unsupervised Learning Approach . . . 317
L. Mendes Gomes and S. Moro

AD-DMKDE: Anomaly Detection Through Density Matrices and Fourier Features . . . 327
Oscar A. Bustos-Brinez, Joseph A. Gallego-Mejia, and Fabio A. González

Enhancing Sentiment Analysis Using Syntactic Patterns . . . 339
Ricardo Milhazes and Orlando Belo

Feature Selection for Performance Estimation of Machine Learning Workflows . . . 351
Roman Neruda and Juan Carlos Figueroa-García

Development of a Hand Gesture Recognition Model Capable of Online Readjustment Using EMGs and Double Deep-Q Networks . . . 361
Danny Díaz, Marco E. Benalcázar, Lorena Barona, and Ángel Leonardo Valdivieso

Information and Knowledge Management

Capitalization of Healthcare Organizations Relationships’ Experience Feedback of COVID’19 Management in Troyes City . . . 375
Nada Matta, Paul Henri Richard, Theo Lebert, Alain Hugerot, and Valerie Friot-Guichard

Factors for the Application of a Knowledge Management Model in a Higher Education Institution . . . 387
Verónica Martínez-Lazcano, Edgar Alexander Prieto-Barboza, Javier F. García, and Iliana Castillo-Pérez

eWom, Trust, and Perceived Value Related to Repurchases in the e-commerce Sector of Department Stores . . . 397
Carolina Blanco-Gamero, Estephania Acosta-Bonilla, and Manuel Luis Lodeiros-Zubiria

Transitioning to Digital Services: Assessing the Impact of Digital Service Efforts on Performance Outcomes and the Moderating Effect of Service Promotion . . . 409
Christian Stadlmann, Piotr Kwiatek, Zsolt Szántó, and Marcel Joshua Kannampuzha

IoT Devices Data Management in Supply Chains by Applying BC and ML . . . 423
Daniel Guatibonza, Valentina Salazar, and Yezid Donoso

Raising Awareness of CEO Fraud in Germany: Emotionally Engaging Narratives Are a MUST for Long-Term Efficacy . . . 435
Margit Scholl

A Module Based on Data Mining Techniques to Analyze School Attendance Patterns of Children with Disabilities in Cañar - Ecuador . . . 451
Denys Dutan-Sanchez, Paúl Idrovo-Berrezueta, Ana Parra-Astudillo, Vladimir Robles-Bykbaev, and María Ordóñez-Vásquez

Improving Semantic Similarity Measure Within a Recommender System Based-on RDF Graphs . . . 463
Ngoc Luyen Le, Marie-Hélène Abel, and Philippe Gouspillou

Comparative Analysis of the Prediction of the Academic Performance of Entering University Students Using Decision Tree and Random Forest . . . 475
Jesús Aguilar-Ruiz, Edgar Taya-Acosta, and Edgar Taya-Osorio

Intellectual Capital and Public Sector: There are Similarities with Private Sector? . . . 487
Óscar Teixeira Ramada

EITBOK, ITBOK, and BIZBOK to Educational Process Improvement . . . 495
Pablo Alejandro Quezada–Sarmiento and Aurora Fernanda Samaniego-Namicela

Rating Pre-writing Skills in Ecuadorian Children: A Preliminary Study Based on Transfer Learning, Hyperparameter Tuning, and Deep Learning . . . 505
Adolfo Jara-Gavilanes, Romel Ávila-Faicán, Vladimir Robles-Bykbaev, and Luis Serpa-Andrade

Geographic Patterns of Academic Dropout and Socioeconomic Characteristics Using Clustering . . . 517
Vanessa Maribel Choque-Soto, Victor Dario Sosa-Jauregui, and Waldo Ibarra

Implementation of Digital Resources in the Teaching and Learning of English in University Students . . . 529
Kevin Mario Laura-De La Cruz, Paulo César Chiri-Saravia, Haydeé Flores-Piñas, Giomar Walter Moscoso-Zegarra, María Emilia Bahamondes-Rosado, and Luis Enrique Espinoza-Villalobos

Communicational Citizenship in the Media of Ecuador . . . 539
Abel Suing, Kruzkaya Ordóñez, and Lilia Carpio-Jiménez

A Multimedia Resource Ontology for Semantic E-Learning Domain . . . 551
Soulaymane Hodroj, Myriam Lamolle, and Massra Sabeima

Business Process Management in the Digital Transformation of Higher Education Institutions . . . 561
Juan Cárdenas Tapia, Fernando Pesántez Avilés, Jessica Zúñiga García, Diana Arce Cuesta, and Christian Oyola Flores

Collaborative Strategies for Business Process Management at Salesian Polytechnic University . . . 573
Juan Cárdenas Tapia, Fernando Pesántez Avilés, Diana Arce Cuesta, Jessica Zúñiga García, and Andrea Flores Vega

Does Sharing Lead to Smarter Products? Managing Information Flows for Collective Servitization . . . 585
Thomas A. Weber

Analysis of On-line Platforms for Citizen Participation in Latin America, Using International Regulations . . . 595
Alex Santamaría-Philco, Jorge Herrera-Tapia, Patricia Quiroz-Palma, Marjorie Coronel-Suárez, Juan Sendón-Varela, Dolores Muñoz-Verduga, and Klever Delgado-Reyes

Managerial Information Processing in the Era of Big Data and AI – A Conceptual Framework from an Evolutionary Review . . . 611
Mark Xu, Yanqing Duan, Vincent Ong, and Guangming Cao

Empowering European Customers: A Digital Ecosystem for Farm-to-Fork Traceability . . . 623
Borja Bordel, Ramón Alcarria, Gema de la Torre, Isidoro Carretero, and Tomás Robles

Design of a Task Delegation System in a Context of Standardized Projects . . . 635
Alfredo Chávez and Abraham Dávila

eSFarmer - A Solution for Accident Detection in Farmer Tractors . . . 647
Rui Alves, Paulo Matos, João Ascensão, and Diogo Camelo

An Approach to Identifying Best Practices in Higher Education Teaching Using a Digital Humanities Data Capturing and Pattern Tool . . . 655
Nico Hillah, Bernhard Standl, Nadine Schlomske-Bodenstein, Fabian Bühler, and Johanna Barzen

Big Data Analytics and Applications

On the Use of Parallel Architectures in DNA Methylation Analysis
Juan M. Orduña, Lisardo Fernández, and Mariano Pérez

Abstract Deoxyribonucleic acid (DNA) methylation analysis has become a very relevant topic in the research of important and widespread diseases such as cancer or Diabetes Mellitus Type 2. The huge size of the data involved in this analysis requires the use of high-performance architectures to process the samples in a timely manner. In this paper, we introduce the basic concepts and procedures of DNA methylation analysis, and we review the different proposals and existing software tools, focusing on the methods they use for offering an efficient analysis as well as the extent to which they are able to exploit the underlying computer architectures.

Keywords parallel architectures · software tools · DNA methylation

1 Introduction

DNA methylation consists of the addition of a methyl group (one carbon atom bonded to three hydrogen atoms, CH3) to a cytosine, forming a 5mC link [1]. When this happens, that DNA position is methylated; when the methyl group is lost, the region becomes demethylated. DNA methylation inhibits the expression of certain genes by preventing the proteins responsible for DNA transcription from initiating this process [34]. In fact, it was originally proposed as a "silencing" epigenetic mark. Figure 1 illustrates the DNA chain and the methylation effect. The methylated cytosines are represented as shadowed circles, while the proteins responsible for DNA transcription are represented by looped ribbons floating around the DNA double helix. DNA transcription consists of the proteins binding to the DNA chain, splitting the double helix into separate strands, copying the DNA sequences of the strands, and carrying these copies out of the cell nucleus. The circles representing methylated cytosines act as chemical obstacles which prevent the proteins from binding to the DNA chain. Thus, transcription cannot occur, thereby inhibiting the expression of these DNA segments in a biological process.

Fig. 1 Methylation effects: the methylated cytosines (shadowed circles) become chemical obstacles which prevent the proteins (looped ribbons) from linking the DNA chain for initiating transcription

DNA methylation plays key roles in the local control of gene expression [25], the establishment and maintenance of cellular identity [16], the regulation of mammalian embryonic development [9], and other biological processes [24]. For example, the methylation process might stop a tumor-causing gene from "turning on," preventing cancer, and it also seems to play a decisive role in the development of other complex diseases such as obesity, hypertension, and Diabetes Mellitus Type 2 (DM2) [22].

DNA methylation analysis requires specific treatment of the DNA that modifies its sequence, as well as software tools for the analysis. The methylation data can be obtained through bisulfite sequencing, which provides comprehensive DNA methylation maps at single-base-pair resolution [15]. Figure 2 illustrates this process. The first step is DNA extraction. Next, the Polymerase Chain Reaction (PCR) breaks up the DNA sequences into many fragments and makes multiple copies of these fragments, which are called reads and are the actual samples to be analyzed. At this point, bisulfite is added to the resulting DNA reads. Bisulfite treatment converts unmethylated cytosines (Cs) into thymines (Ts), which gives rise to C-to-T changes in the DNA sequence after sequencing, while leaving methylated cytosines (5mCs) unchanged. Finally, the bisulfite-treated reads are processed by a sequencer, which reads each read and yields a text containing the sequence of nucleotides in the read, together with other meta-information such as the quality of the read. Since the sequencer carries out this process with all the reads, its output is a (usually huge) text file in FASTQ format containing the sequence detected for each sample (read).

Fig. 2 Bisulfite sequencing process of a single biological sample: DNA extraction, PCR, bisulfite addition to the DNA fragments or reads, and reading of each read

By aligning and comparing bisulfite sequencing reads to the reference genomic DNA sequence, it is possible not only to align the read, but also to infer DNA methylation patterns at base-pair resolution. However, the length of the DNA chain in the human genome is 3x10^9 nucleotides, and each sample in a FASTQ file, whose length typically does not exceed hundreds of nucleotides, must be compared to all locations in the genome to find the correct location (this is known as the alignment operation). Each FASTQ file coming from a next-generation sequencer can easily contain tens or hundreds of millions of reads. Also, typical study cases require the analysis and comparison of different biological samples coming from different tissues or different individuals, in order to detect different methylation levels in different genes. Thus, the applications and systems in this field of computational biology are considered big data ones, and advanced computer architectures are used for executing the applications. However, no studies are available in the literature about the use of parallel architectures in this field. In this paper, we review the different tools proposed for the different steps and aspects of DNA methylation analysis, focusing on the methods they use for offering an efficient analysis to biomedical researchers and the use of the underlying computer architecture that each one is able to carry out.
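To make the bisulfite-conversion principle described above concrete, the following minimal Python sketch (purely illustrative; the function and example values are ours and do not come from any of the tools reviewed below) simulates how a DNA fragment is read after bisulfite treatment: unmethylated cytosines are reported as thymines, while methylated cytosines remain cytosines.

```python
# Illustrative sketch: simulate bisulfite conversion of a single read.
# Unmethylated cytosines are deaminated and sequenced as thymines,
# while methylated cytosines (5mC) are left unchanged.

def bisulfite_convert(read, methylated_positions):
    """Return the sequence observed after bisulfite treatment and sequencing."""
    converted = []
    for i, base in enumerate(read):
        if base == "C" and i not in methylated_positions:
            converted.append("T")   # unmethylated C reads as T
        else:
            converted.append(base)  # 5mC (and A/G/T) stay unchanged
    return "".join(converted)

if __name__ == "__main__":
    fragment = "ACGTCCGA"
    # Suppose only the cytosine at 0-based position 5 is methylated.
    print(bisulfite_convert(fragment, methylated_positions={5}))  # prints "ATGTTCGA"
```

The read "ATGTTCGA" produced here is what the aligner later has to place on the reference genome, which is precisely what makes bisulfite alignment harder than standard DNA alignment.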

2 Sample Alignment and Methylation Status Inference

The first step in the DNA methylation analysis of a biological sample consists of aligning each read and detecting its methylation status. Whole-genome bisulfite sequencing (WGBS or BS-seq) aligns each read against the whole genome, while reduced-representation bisulfite sequencing (RRBS) only aligns the sample against certain areas of the genome called CpG islands [23], rather than the whole genome. CpG dinucleotides are randomly distributed throughout the genome and most of them are methylated, while a fraction of CpG dinucleotides are clustered with lower methylation levels, termed CpG islands (CGIs) [13]. Nevertheless, both techniques integrate bisulfite conversion and high-throughput sequencing. The input file in this process is the FASTQ file coming from the sequencer. FASTQ is a text-based format for storing both a biological sequence (usually a nucleotide sequence) and its corresponding quality scores.

Many different software tools for DNA alignment and methylation analysis have been proposed. The alignment of sequencing data can be based on either wild-card algorithms or three-letter algorithms. The former substitutes Cs with Ys (wildcards) in the reference genome when aligning each read, so reads can be aligned with both Cs and Ts [28]. Some representative tools integrating wild-card aligners are BSMAP and RRBSMAP [36]. On the contrary, the three-letter algorithm converts all Cs into Ts, both in the reference genome and in the reads. In this way, the complexity of the alignment is reduced, but many reads align to many genome positions; in this case, some tools discard those reads to avoid incorrect results. The most widespread tools integrating a three-letter aligner are Bismark [14], BS-Seeker [5], and HPG-Methyl [20].

All of these tools are designed to be executed on multicore platforms, but there are significant differences which affect their throughput and response time when processing the huge data volumes in bisulfite sequencing. Some of them, like BRAT-BW, cannot even use multithreading. Others can use only a limited number of parallel threads: BSMAP and BS-Seeker can use at most eight execution threads, so they cannot fully exploit the degree of parallelism in current multicore commodity processors. On the contrary, Bismark can be executed with up to 12 threads, and HPG-Methyl and RRBSMAP can be executed with any number of threads supported by the underlying processor. However, RRBSMAP is the only tool designed for RRBS, and it cannot be used for whole-genome bisulfite sequencing. The metrics usually used for the aligners are sensitivity (percentage of reads aligned) and the required execution time. The differences in the performance achieved by each tool are shown in the latter work [20].

There also exist some tools which use heterogeneous architectures such as the Graphics Processing Unit (GPU) to increase the parallelism in the alignment process, like GPU-BSM [17], Arioc [33] or GPU-RMAP [2]. These tools are designed for aligning short reads. However, they are not extensively used, because they are designed only for short reads, while the length of the reads generated by new sequencers tends to grow more and more. An even more important impediment is the requirement of a GPU and some computer programming skills, which prevents bioinformatic and biomedical researchers from using them. Although many GPU-powered tools have been developed in the field of systems biology [19], the main problem still seems to be that installing, configuring and/or programming an NVidia graphics card using CUDA is often beyond the comfort zone of most bioinformaticians, or the computing platforms of bioinformatic research centers simply do not include GPUs. The required user knowledge about computer systems configuration prevents these tools from being extensively used, in spite of the existence of CUDA scripts/installation wizards, tutorials and guides.
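As a rough illustration of the three-letter strategy used by aligners such as Bismark, BS-Seeker and HPG-Methyl (this toy sketch is not the algorithm of any of these tools; real aligners use indexed data structures such as suffix arrays or FM-indexes rather than exact string search), the following Python fragment collapses Cs to Ts in both the read and the reference, finds a candidate position, and then recovers the methylation status of each reference cytosine by comparing it with the original read.

```python
# Toy sketch of the three-letter alignment idea (illustrative only):
# collapse C/T in read and reference, locate a candidate position, then call
# methylation by comparing the original read against the reference at each C site.

def three_letter(seq):
    """Convert all cytosines to thymines, reducing the alphabet to A, G, T."""
    return seq.replace("C", "T")

def align_and_call(read, reference):
    """Return (position, methylation calls) for the first exact three-letter match."""
    pos = three_letter(reference).find(three_letter(read))  # naive exact search
    if pos < 0:
        return None
    calls = []
    for offset, ref_base in enumerate(reference[pos:pos + len(read)]):
        if ref_base == "C":                    # a cytosine in the reference...
            methylated = read[offset] == "C"   # ...still reads as C only if it was methylated
            calls.append((pos + offset, methylated))
    return pos, calls

if __name__ == "__main__":
    reference = "GGACGTCCGATT"
    read = "ATGTTCGA"                          # bisulfite-converted read from the sketch above
    print(align_and_call(read, reference))     # ((2, [(3, False), (6, False), (7, True)]))
```

The ambiguity mentioned above appears naturally in this scheme: once Cs and Ts are merged, several genome positions may match the collapsed read equally well, which is why some tools simply discard such multi-mapping reads.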

3 Detection of Differentially Methylated Regions (DMRs)

The tools discussed in the previous section provide the user with single-base methylation results, indicating the absolute methylation level of each nucleotide found in the DNA of the analyzed sample. Most of them yield the results in Sequence Alignment Map (SAM) or Binary Alignment Map (BAM) files. Sequence Alignment Map is a text-based format originally intended for storing biological sequences aligned to a reference sequence, but it can also contain information about the methylation context of each methylated cytosine. The binary equivalent of a SAM file is a BAM file, which stores the same data in a compressed binary representation. In any case, the result of a methylation analysis is a huge data file with the alignment and methylation results for every nucleotide in the samples contained in the FASTQ file. Thus, the analysis of these results is far beyond human capabilities. Moreover, an effective biomedical analysis requires the identification of those areas of the genome where the methylation levels of different samples (coming from different individuals and/or tissues) significantly differ. These areas are called Differentially Methylated Regions (DMRs). In order to detect a DMR, several BAM or SAM files containing the methylation results coming from different individuals or tissues must be processed and compared.

Several tools for processing and displaying the methylation results and discovering DMRs have been proposed [8, 10, 12, 35]. Most of these tools are based on statistical analysis of the BAM or SAM files and are mainly R scripts, and therefore they can use multicore architectures only as far as R allows. Others use the GPU as a way of processing data for the DMR detection process.

The analysis of the differences in the methylation level can be carried out at two levels. The lower level identifies the differences at the level of methylated cytosines (Differentially Methylated Cytosines, DMCs). This level is useful for studying small groups of methylated bases, and it is used at the upper level as a way of starting the identification of broader regions (DMRs) such as CpG islands or shores. A CpG island is a DNA region where either of the two strands in the DNA helix has the sequence CGCGCGCG...; a shore is a region adjacent to a CpG island.

In order to correctly identify the methylation differences, some essential characteristics should be taken into account. First, the sequencing depth or coverage, that is, the number of reads aligned on a given area, is crucial to minimize the variability in the results inherent to the sequencing process: a low coverage can make an alignment and/or methylation status useless or non-significant. Second, the spatial correlation among the methylation levels in CpG regions: including procedures which take this correlation into account reduces the dependency on the coverage of the area, and this correlation also helps to estimate the methylation state in areas with low or no coverage at all. Third, the biological variability between two samples is essential to identify regions which effectively differ between samples; the methylation heterogeneity between cells from different tissues can induce errors in the identification of DMRs.

Two different approaches have been used for DMR analysis: methods based on counting reads or the number of CpGs in pre-defined regions, and methods based on computing the percentage of methylated reads on each DNA position, according to the formula

Ratio = 5mC_reads / (C_reads + 5mC_reads)    (1)

where C_reads is the number of reads covering that DNA position with a non-methylated cytosine, and 5mC_reads is the number of reads with a methylated cytosine on that DNA position.


An important characteristic of the methods based on this ratio is that they take into account the biological variability among the different samples, although they do not take the coverage into account. A widely used ratio-based tool is methylKit [3], a tool based on counting reads on CpG islands for identifying DMRs. It uses parallelized methods for detecting significant methylation changes, mainly a logistic regression. BSmooth [10] is a representative example of those tools which assume a correlation in the methylation levels among the CpG sites along the genome. Thus, it builds a methylation signal starting from the methylation levels of the neighboring sites and can estimate the methylation level of regions without enough coverage, since it extends the smoothing process over the whole signal. However, it does not establish any error criterion in the identification of DMRs, and the smoothing process can assign false positives to unmethylated CpGs. The BiSeq package [12] analyzes reduced-representation bisulfite sequencing (RRBS) data. This tool takes into account the coverage of each CpG, maximizing the impact of those areas with a higher coverage over those with a lower coverage. The grouping of DMCs is carried out by beta regression, yielding better control of the identification error than BSmooth.

Over the past few years, several beta-binomial-based approaches have been developed to identify differential methylation, such as DSS-single [35], DSS-general [21] or GetisDMR [32]. As a representative example, DSS-single [35] is based on characterizing the read counts as a beta-binomial distribution. On the one hand, the methylation ratio of a CpG site follows a binomial distribution, since the sequenced reads can be either methylated or unmethylated. On the other hand, since some biological variability is present in the data, the methylation ratio of the CpG sites is assumed to follow a beta distribution. An empirical Bayes procedure estimates a dispersion parameter which captures the methylation level of the CpG sites relative to the average value of the group. The DMCs are determined from the p-value of a Wald test [7] comparing the average methylation levels between the two groups.

Other tools are based on Hidden Markov Models (HMMs), like HMM-Fisher [27] or HMM-DM [37]. They all model the methylation level of CpGs as methylation states, instead of continuous values of a signal. The transition probabilities between methylation states represent the distribution of distances between DMCs, and the emission probabilities represent the likelihood of a differentiated methylation level for a CpG site. The common feature of all these tools is that they are implemented as R packages, and in this way they can only use the underlying platform as far as the R application does. The exceptions are BSmooth and methylKit, which include a parameter that allows the user to specify the number of cores to be used.

Another approach is that of the tools based on statistical tests (FET, t-test and ANOVA tests) applied to CpG sites in predefined regions. COHCAP [31] and swDMR [30] are tools based on this approach. COHCAP is an R package, and therefore it can only use the number of cores allowed by this package. swDMR is a Perl script whose default configuration uses a single core.
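The counting/ratio approach and the per-site statistical tests described above can be illustrated with a short, self-contained sketch. The following Python fragment (illustrative only, not the implementation of methylKit, COHCAP, swDMR or any other tool mentioned here; the coverage and significance thresholds are arbitrary and SciPy is assumed to be available) flags DMCs between two samples with Fisher's exact test on the methylated/unmethylated read counts of Eq. (1), and then merges nearby DMCs into candidate DMRs.

```python
# Illustrative sketch: flag differentially methylated cytosines (DMCs) between two
# samples with Fisher's exact test on read counts, then merge nearby DMCs into DMRs.
from scipy.stats import fisher_exact

MIN_COVERAGE = 10   # assumed minimum sequencing depth per CpG site
P_THRESHOLD = 0.01  # assumed significance threshold
MAX_GAP = 100       # assumed maximum distance (bp) between DMCs merged into one DMR

def find_dmcs(sample_a, sample_b):
    """sample_a/sample_b map CpG position -> (methylated reads, unmethylated reads)."""
    dmcs = []
    for pos in sorted(set(sample_a) & set(sample_b)):
        meth_a, unmeth_a = sample_a[pos]
        meth_b, unmeth_b = sample_b[pos]
        if meth_a + unmeth_a < MIN_COVERAGE or meth_b + unmeth_b < MIN_COVERAGE:
            continue  # low coverage makes the methylation ratio of Eq. (1) unreliable
        _, p_value = fisher_exact([[meth_a, unmeth_a], [meth_b, unmeth_b]])
        if p_value < P_THRESHOLD:
            dmcs.append(pos)
    return dmcs

def merge_into_dmrs(dmcs, max_gap=MAX_GAP):
    """Group consecutive DMCs that lie within max_gap base pairs of each other."""
    regions = []
    for pos in dmcs:
        if regions and pos - regions[-1][1] <= max_gap:
            regions[-1][1] = pos
        else:
            regions.append([pos, pos])
    return [tuple(r) for r in regions]
```

Even this toy version makes the trade-offs discussed above visible: the coverage filter discards information from low-depth sites, and the per-site test ignores the spatial correlation that smoothing-based tools such as BSmooth exploit.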

Finally, HPG-DHunter [8] is a different approach, based on the use of the wavelet transform. It builds a methylation signal from each sample, and it uses the wavelet transform to analyze the signals coming from the different samples. It is the only tool that uses the GPU both for the detection of DMRs at different wavelet transformation levels and for the visualization of all the methylation signals at each detected DMR. This tool is written in C and CUDA, and it can use all the existing cores in the GPU. Nevertheless, it requires an NVIDIA graphics card, the corresponding driver (v384 or higher), the CUDA API (v9 or higher) and also some other tools such as QtCreator. These requirements go somewhat beyond those of standard bioinformatics tools, and the required programming skills are far from the typical skills of biomedical or bioinformatics researchers, which prevents this tool from being widely used. Some of the methods described above are based on statistical tools, while others are based on different approaches. Surprisingly, one of these works [8] shows that different tools (all of them widely accepted as valid) provide different sets of DMRs for the same dataset, and only a small fraction of the DMRs are detected by all the considered tools. Thus, the validity or correctness of a given DMR seems to depend on the parameters used for detecting the DMRs, which should be determined by the biomedical user depending on the conditions of the sequencing process. Nevertheless, this work also presents a comparative study of the performance achieved by several of these tools.
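The general idea of the wavelet-based approach can be sketched on the CPU with the PyWavelets package: a per-position methylation-ratio signal is smoothed by discarding the detail coefficients of the wavelet decomposition, and positions where the smoothed signals of two conditions diverge become DMR candidates. The signal length, wavelet family, decomposition level and threshold below are hypothetical choices for illustration; this is not the HPG-DHunter implementation, which performs these steps in C/CUDA on the GPU.

```python
import numpy as np
import pywt

def smooth_signal(ratios, wavelet="haar", level=4):
    """Wavelet smoothing: keep only the approximation coefficients."""
    coeffs = pywt.wavedec(ratios, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ratios)]

def candidate_dmr_positions(ratios_a, ratios_b, threshold=0.4, **kwargs):
    """Positions where the smoothed signals of the two groups diverge."""
    diff = np.abs(smooth_signal(ratios_a, **kwargs) - smooth_signal(ratios_b, **kwargs))
    return np.flatnonzero(diff > threshold)

rng = np.random.default_rng(0)
a = rng.uniform(0.7, 1.0, 1024)           # mostly methylated reference signal
b = a.copy()
b[300:400] = rng.uniform(0.0, 0.2, 100)   # a hypomethylated region in the other group
print(candidate_dmr_positions(a, b))
```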

4 Artificial Intelligence for Methylation Analysis A different aspect which deserves a special section is the use of Artificial Intelligence (AI) in methylation analysis. Deep Learning (DL) has been used not only for the DMR detection explained in the previous section, but also in other aspects of DNA methylation. Of course, all the proposals using DL are based on the GPU, using this architecture for building the neural network required by this artificial intelligence technique. DeepCpG [4] proposes an architecture based on a convolutional neural network (CNN) and a gated recurrent network (GRU), joining the output layer of each one in a fully connected module used as a predictor of single-cell methylation state. This tool predicts the methylation state of CpG dinucleotides in multiple cells, can discover predictive sequence motifs, and can quantify the effect of sequence mutations. It allows incomplete DNA methylation profiles to be accurately imputed, starting from the methylation state of neighboring cells. MRCNN [29] is an improvement on DeepCpG that leverages associations between DNA sequence patterns and methylation levels, using 2D array convolution to capture the sequence patterns and characterize the methylation of the target CpG site. Also, DeepH&M [11] is a model derived from DeepCpG. This model estimates hydroxymethylation and methylation levels at single-CpG resolution, and it is composed of three different modules. A module called "CpG module" takes genomic and methylation features as input. A module called "DNA module" processes raw DNA sequence data using a convolutional neural network.
Finally, a module called "Joint module" combines the outputs from the other two modules to predict 5hmC and 5mC simultaneously. An important issue is that this model uses 25x coverage for 5hmC and 20x coverage for 5mC data for training and validation. Histogram Of MEthylation (HOME) [26] is a different proposal for finding DMRs, since it approaches DMR identification as a binary classification problem in machine learning. It uses a Support Vector Machine classifier [6] that takes into account the inherent difference in the distribution of methylation levels between DMRs and non-DMRs to discriminate between the two. Since these tools are based on classical classification algorithms, they are designed to be executed on von Neumann Central Processing Units (CPUs). Another recent proposal, designed for studying gene expression [18], analyzes the methylation profile starting from the methylation ratio. It then predicts gene expression classes through six machine learning algorithms and a fully connected neural network, and concludes that combining machine learning with gene expression could provide more insight into the functional role of DNA methylation.
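As a rough illustration of the kind of GPU-friendly models these tools build, the sketch below defines a small 1D convolutional network in PyTorch that maps a one-hot-encoded DNA window to a methylation-state score for its central CpG. The window size, layer sizes and random data are hypothetical; this is not the DeepCpG, MRCNN or DeepH&M architecture, only a minimal example of the sequence-to-methylation-state idea.

```python
import torch
import torch.nn as nn

class MethylationCNN(nn.Module):
    """Minimal 1D CNN: one-hot DNA window (4 x L) -> methylation logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):                 # x: (batch, 4, window)
        return self.classifier(self.features(x)).squeeze(-1)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MethylationCNN().to(device)
x = torch.randint(0, 2, (8, 4, 101), device=device).float()   # fake one-hot windows
y = torch.randint(0, 2, (8,), device=device).float()          # fake methylation states
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()                           # one illustrative backward pass
print(float(loss))
```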

5 Conclusions In this paper, we have reviewed the different tools proposed for the different steps and aspects of DNA methylation analysis, focusing on the methods, tools and architectures they use for offering an efficient analysis to biomedical researchers. As a general conclusion, we can state that the methylation analysis tools are often designed and implemented by biomedical researchers who do not have the necessary skills in computer systems configuration/administration. Thus, they cannot use tools which exploit the parallelism in the underlying computing platform.

References 1. Bird A (2002) DNA methylation patterns and epigenetic memory. Genes Dev 16:6–21 2. Aji AM, Zhang L, Feng W (2010) GPU-RMAP: accelerating short-read mapping on graphics processors. In: 2010 13th IEEE International Conference on Computational Science and Engineering, pp 168–175 3. Akalin A et al (2012) MethylKit: a comprehensive R package for the analysis of genome-wide DNA methylation profiles. Genome Biol 13:R87 4. Angermueller C, Lee HJ, Reik W, Stegle O (2017) DeepCpG: accurate prediction of DNA methylation states using deep learning. Genome Biol. 18(6) 5. Chen PY, Cokus S, Pellegrini M (2010) Bs seeker: precise mapping for bisulfite sequencing. BMC Bioinform 11:203 6. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297 7. Feng H, Conneely KN, Wu H (2014) A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data. Nucleic Acids Res 42(8):e69–e69

8. Fernández L, Pérez M, Olanda R, Orduña JM (2020) HPG-DHunter: an ultrafast tool for DMR detection and visualization. BMC Bioinform 21:287 9. Fulka H, Mrazek M, Tepla O, Fulka J (2004) DNA methylation pattern in human zygotes and developing embryos. Reproduction 128(6):703–708 10. Hansen K, Langmead B, Irizarry R (2012) BSmooth: from whole genome bisulfite sequencing reads to differentially methylated regions. Genome Biol. 13(10):R83 11. He Y et al (2020) DeepH&M: estimating single-CpG hydroxymethylation and methylation levels from enrichment and restriction enzyme sequencing methods. Sci Adv 6(27) 12. Hebestreit K, Ulrich Klein H (2013) BiSeq: a package for analyzing targeted bisulfite sequencing data 13. Illingworth RS, Bird AP (2009) CpG islands - ‘a rough guide’. FEBS Lett 583(11):1713–1720. Prague Special Issue: Functional Genomics and Proteomics 14. Krueger F, Andrews SR (2011) Bismark: a flexible aligner and methylation caller for BisulfiteSeq applications. Bioinformatics 27(11):1571–1572 15. Laird PW (2010) Principles and challenges of genome-wide DNA methylation analysis. Nat Rev Genetics 11:191–203 16. Li S, Chen M, Li Y, Tollefsbol TO (2019) Prenatal epigenetics diets play protective roles against environmental pollution. Clin Epigenetics 11:82 17. Manconi A, Orro A, Manca E, Armano G, Milanesi L (2014) GPU-BSM: a GPU-based tool to map bisulfite-treated reads. PLoS ONE 9(5):e97277 18. N’Diaye A et al (2020) Machine learning analyses of methylation profiles uncovers tissuespecific gene expression patterns in wheat. Plant Genome 13(2):e20027 19. Nobile MS, Cazzaniga P, Tangherloni A, Besozzi D (2016) Graphics processing units in bioinformatics, computational biology and systems biology. Briefings Bioinform 18(5):870–885 20. Olanda R, Pérez M, Orduña JM, Tárraga J, Dopazo J (2017) A new parallel pipeline for DNA methylation analysis of long reads datasets. BMC Bioinform 18(1):161 21. Park Y, Wu H (2016) Differential methylation analysis for BS-seq data under general experimental design. Bioinformatics 32(10):1446–1453 22. Raciti A, Nigro C, Longo M, Parrillo L, Miele C, Formisano P, Béguino F (2014) Personalized medicine and type 2 diabetes: lesson from epigenetics. Epigenomics 6(2):229–238 23. Rauluseviciute I, Drablos F, Rye M (2019) DNA methylation data by sequencing: experimental approaches and recommendations for tools and pipelines for data analysis. Clin Epigenetics 11:193 24. Robertson K (2005) DNA methylation and human disease. Nat Rev Genetics 6:597–610 25. Schubeler D (2015) Function and information content of DNA methylation. Nature 517:321– 326 26. Srivastava A, Karpievitch YV, Eichten SR, Borevitz JO, Lister R (2020) Home: a histogram based machine learning approach for effective identification of differentially methylated regions. BMC Bioinform 20:253 27. Sun S, Yu X (2016) Hmm-fisher: identifying differential methylation using a hidden Markov model and fisher’s exact test. Stat Appl Genetics Mol Biol 15(1):55–67 28. Sun X et al (2018) A comprehensive evaluation of alignment software for reduced representation bisulfite sequencing data. Bioinformatics 34(16):2715–2723 29. Tian Q, Zou J, Tang J, Fang Y, Yu Z, Fan S (2019) MRCNN: a deep learning model for regression of genome-wide DNA methylation. BMC Genomics 20:192 30. Wang Z, Li X, Jiang Y, Shao Q, Liu Q, Chen B, Huang D (2015) swDMR: a sliding window approach to identify differentially methylated regions based on whole genome bisulfite sequencing. PLoS ONE 10(7):e0132866 31. 
Warden CD et al (2019) COHCAP: an integrative genomic pipeline for single-nucleotide resolution DNA methylation analysis. Nucleic Acids Res 47(15):8335–8336 32. Wen Y, Chen F, Zhang Q, Zhuang Y, Li Z (2016) Detection of differentially methylated regions in whole genome bisulfite sequencing data using local Getis-Ord statistics. Bioinformatics 32(22):3396–3404

33. Wilton R, Szalay AS (2020) Arioc: high-concurrency short-read alignment on multiple GPUs. PLoS Comput Biol 16(11) 34. Wu H, Tao J, Sun YE (2012) Regulation and function of mammalian DNA methylation patterns: a genomic perspective. Briefings Funct Genomics 11(3):240–250 35. Wu H et al (2015) Detection of differentially methylated regions from whole-genome bisulfite sequencing data without replicates. Nucleic Acids Res 43(21):e141. https://doi.org/10.1093/ nar/gkv715 36. Xi Y, Bock C, Muller F, Sun D, Meissner A, Li W (2012) RRBSMAP: a fast, accurate and user-friendly alignment tool for reduced representation bisulfite sequencing. Bioinformatics 28(3):430–432 37. Yu X, Sun S (2016) HMM-DM: identifying differentially methylated regions using a hidden Markov model. Stat Appl Genetics Mol Biol 15(1):69–81

Automation for IBNR Provision Using Power BI and Excel Christian Vaca , Fernando Gamboa, Raquel Narvaez, and Renato M. Toasa

Abstract In Ecuador, life insurance companies are regulated by the Companies, Securities and Insurance Superintendence and, according to resolution No. SCSV-378-2017-S, "Standards for the technical prudence of companies that finance comprehensive prepaid health care services", this type of company must make a series of provisions, among them the Incurred But Not Reported (IBNR) reserve; this provision ensures that the company has sufficient resources to cover the claims made by its affiliates. The IBNR is calculated by an actuary, based on the reimbursements made by the companies at the cut-off date, using claims triangles with the Chain Ladder method. Therefore, this document proposes to automate the calculation of this reserve using Power BI, so that an independent area (Internal Audit, Risks, Comptroller, Financial Management, etc.) can reproduce the calculation made by the actuary and in this way ensure that the estimate is fairly made and accounted for. Keywords Automation · Power BI · Provision · Bookings · Risk management

C. Vaca · R. M. Toasa (B) Universidad Tecnológica Israel, Quito, Ecuador e-mail: [email protected] C. Vaca e-mail: [email protected] C. Vaca Universidad Católica de Santiago de Guayaquil, Guayaquil, Ecuador e-mail: [email protected] F. Gamboa Pontificia Universidad Católica del Ecuador, Quito, Ecuador e-mail: [email protected] R. Narvaez Banco Pichincha, Quito, Ecuador © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_2

1 Introduction 1.1 Generalities The incurred but not reported (IBNR) reserve corresponds to the amount set aside in the balance sheet of insurance companies to meet the total estimated cost of all claims arising from events that have occurred up to the end of the monthly balance sheet or fiscal year-end and have not yet been reported by their members. This reserve should include reserve adjustments derived from events that occurred and were not sufficiently reported. For the calculation of the reserves for services rendered and not reported (IBNR) there are several estimation methods [1, 12, 13]; currently the most popular are those that use run-off triangles to predict future claims behavior. With this method, the data are presented as a triangular matrix containing the period of claim occurrence and the number of deferral periods until claim payment. In Ecuador, the calculation of the IBNR reserve is governed by resolution SCSV-378-2017-S [10], which provides for the application of the loss experience triangles method in the version known as Chain Ladder [3, 4, 17]. For this, the historical information on events must be classified monthly by occurrence, notification and payment, for which a monthly database is constructed for a period of 12, 18 or up to 24 months; it is not advisable for it to be shorter than 12 months. IBNR calculation and updating [5, 6] must be made semi-annually, using at least the reimbursements of the last twelve months and setting the calculation date as the end of each semester, i.e., June 30 and December 31 of each year. In order to perform this calculation, it is necessary to be clear about what an occurrence and a deferral period are. The month of occurrence is defined as the monthly period in which a loss occurred. The deferral period corresponds to the number of months elapsed from the occurrence of the loss until the claim is submitted to the company. In Ecuador, the regulations state that, if the filing date and the payment date differ by more than 45 days, the date used to obtain the deferral period is the payment date. Thus, for example, if a claim occurred and was filed in the month of January of any year, the deferral period will be zero; if it was filed in the month of February of the same year, the deferral period will be 1, and so on. The deferral period is always expressed in whole numbers (0, 1, 2, …). If the date of presentation of the claim and the date of payment of that claim differ by more than 45 days, the difference between the date of occurrence and the date of payment of the claim is taken as the deferral period. For example, if a claim occurred in the month of January of any given year, the presentation occurred in the month of February of the same year and the payment was made in the month of March (more than 45 days later), the deferral period will be 2. In this case, when payment is made in a month subsequent to filing, only the date of payment is taken for the calculation.
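As a sketch of how the deferral period can be derived from the claim dates, the following Python function applies the 45-day rule described above. The function and field names, as well as the example dates, are hypothetical, and the exact day-count convention of the regulator may differ.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole calendar months elapsed between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def deferral_period(occurrence: date, filing: date, payment: date) -> int:
    """Deferral period with the 45-day rule: if filing and payment differ by
    more than 45 days, the payment date is used instead of the filing date."""
    reference = payment if (payment - filing).days > 45 else filing
    return months_between(occurrence, reference)

# Claim occurred in January, filed in February, paid in late April (> 45 days later)
print(deferral_period(date(2020, 1, 10), date(2020, 2, 5), date(2020, 4, 25)))  # -> 3
```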

As part of the process of constituting reserves, companies seek validation that these comply with the stipulations of Resolution No. SCVS-378-2017-S, Chapter 1, Section II, Article 5.3, fourth paragraph, which states: "The control entity shall require at the end of each fiscal year the Back-Testing studies subscribed and actuarially endorsed with the purpose of establishing the reasonableness of the reserve balances constituted". This validation is performed by comparing the actual values with the results obtained. Back-Testing makes it possible to evaluate the quality and accuracy of the predictive mathematical model applied for the estimation. When the value calculated with the model is greater than that actually paid out in reimbursements, it can be concluded that the amount of reserves calculated according to the methods set forth in Resolution No. 378-2017-S has been more than sufficient at the cut-off and evaluation date. This work proposes to carry out the calculation of the IBNR according to the provisions of resolution SCSV-378-2017-S using Power BI and Excel: with Power BI, the monthly information of the reimbursement report is consolidated into what will be called the reimbursement sheet, and the matrix that organizes the reimbursements by year and month of the claim date (rows) and year and month of the settlement date (columns) is calculated; the Chain Ladder calculations are then performed in Excel using the Power BI external tool Analyze in Excel.

2 Methodology Prior to carrying out the exercise it is necessary to organize the information to be worked with. For this purpose, a main folder called "IBNR" is created and, within it, a subfolder called "BaseReimbursementsMonthly"; in the latter, the Excel or txt files containing the reimbursements of the period or periods to be analyzed are placed. For the proposed case the period is 18 months starting in July 2019 (July 2019–December 2020) [2, 8, 9, 11, 14–16, 18]. The steps to upload the data and apply the ETL and measure-creation process in Power BI and Excel are as follows:

1. Upload to Power BI Desktop the information of the subfolder "BaseReimbursementsMonthly" through the option Get data -> More… -> File -> Folder.
2. Perform data cleansing by applying the following steps:
   a. Dirty data (using Power Query, M language):
      i. Incomplete -> Complete,
      ii. Incorrect -> Modify,
      iii. Irrelevant -> Delete,
   b. Basic cleaning tasks (using Power Query, M language):
      i. Delete the first row,
      ii. Eliminate columns with text explanations that are not necessary,

   c. Exploring the data set: review and understand what information is in each column, answering at least the following questions: What variables are there? What type of data are they? What units of measurement do they use?
   d. Perform a normalization of the name and format of the columns, eliminating those that contain more than 50% missing values, unless these columns are necessary for the calculation, in which case the information must be cleaned instead.
   e. Preparation of variables:
      i. Treatment of date and time,
      ii. Missing data,
      iii. Quantify categories,
      iv. Creation of variables and measures (using the DAX (Data Analysis Expressions) language and Excel formulas).

For the development of point iv of item e, the provisions of resolution SCSV-378-2017-S must be applied. Thus, in order to determine the amounts of the reserves for IBNR, the calculation process detailed below is used, with the following notation:

• k: number of months observed (k = 18)
• i: month of occurrence = 1, 2, …, k
• j: deferral period = 0, 1, …, k − 1
• Cij: total amount observed for services rendered in month i, recorded with j months of deferral (or paid, if the date of payment is more than 45 days after the date of recording)

The Chain-Ladder method is based on the use of the available historical information regarding payments for reimbursements. Table 1 shows the pattern of payments arranged in the form of a triangle (known as a run-off triangle), used to estimate the evolution of future payments.

Table 1 Matrix of registered services provided

 i \ j |   0       1       2      …     j     …     17
-------+----------------------------------------------------
   1   | C1,0    C1,1    C1,2     …    C1,j   …    C1,17
   2   | C2,0    C2,1    C2,2     …    C2,j   …
   3   |  …       …       …
   …   |
   i   | Ci,0    Ci,1    Ci,2     …
   …   |
  17   | C17,0   C17,1
  18   | C18,0

Only the cells of the upper-left triangle (i + j ≤ k) are observed.

From the matrix of recorded services rendered, the matrix of accumulated services rendered is constructed horizontally. Thus, each element of the new matrix corresponds to the amount of services rendered in month i, recorded with a deferral of no more than j months (or paid, if the payment date is more than 45 days after the recording date).

The elements of this new matrix are denoted CAi,j and are calculated using the following formula:

CAi,j = Σ_{n=0}^{j} Ci,n                                        (1)

The new cumulative claims matrix is as shown in Table 2. The Chain Ladder claims cadence factors measure the average change in bookings and payments for services rendered made with deferral j, relative to payments made with deferral j − 1. The cadence factors, denoted λj^CL, are calculated using the following formula:

λj^CL = Σ_{i=1}^{k−j} CAi,j / Σ_{i=1}^{k−j} CAi,j−1,   for j = 1, …, k − 1        (2)

Thus, k − 1 cadence factors are obtained. In the proposed case, with 18 months of observation, 17 factors are obtained: λ1^CL, λ2^CL, …, λ17^CL. The claims projection is made from the cumulative values of payments for services rendered in Table 2, and consists of "filling in" the missing values of the matrix (its lower triangular part). The value of each projected element is denoted CA*i,j and is calculated from the cadence factors as shown below:

CA*i,j = CAi,k−i · λ_{k−i+1}^CL · λ_{k−i+2}^CL · … · λj^CL,   for i + j > k        (3)

Table 2 Matrix of accumulated registered services provided

 i \ j |   0        1        2       …     j      …     17
-------+--------------------------------------------------------
   1   | CA1,0    CA1,1    CA1,2     …    CA1,j   …    CA1,17
   2   | CA2,0    CA2,1    CA2,2     …    CA2,j   …
   3   |  …        …        …
   …   |
   i   | CAi,0    CAi,1    CAi,2     …
   …   |
  17   | CA17,0   CA17,1
  18   | CA18,0

Thus, the cumulative loss projection matrix shown in Table 3 is obtained:

Table 3 Accumulated Claims Projection Matrix

 i \ j |   0        1         2        …     j        …     17
-------+----------------------------------------------------------------
   1   | CA1,0    CA1,1     CA1,2      …    CA1,j     …    CA1,17
   2   | CA2,0    CA2,1     CA2,2      …    CA2,j     …    CA*2,17
   3   |  …        …         …
   …   |
   i   | CAi,0    CAi,1     CAi,2      …    CA*i,j    …    CA*i,17
   …   |
  17   | CA17,0   CA17,1    CA*17,2    …    CA*17,j   …    CA*17,17
  18   | CA18,0   CA*18,1   CA*18,2    …    CA*18,j   …    CA*18,17

The observed values (upper-left triangle) are kept, and the lower-right triangle is filled in with the projected values CA*i,j.
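As a compact illustration of Eqs. (1)-(3), the following Python/pandas sketch builds the cumulative triangle, computes the cadence (development) factors, fills in the lower triangle and derives the IBNR as the difference between the projected ultimate amounts and the amounts already recorded. The toy 4x4 triangle and the absence of tail factors are hypothetical simplifications; the paper's actual implementation performs these steps with Power BI measures and Excel formulas over an 18-month triangle.

```python
import numpy as np
import pandas as pd

# Toy incremental triangle C[i, j] (4 occurrence months, deferrals 0..3);
# NaN marks the still-unobserved lower-right part.
C = pd.DataFrame([[100.0, 40.0, 15.0, 5.0],
                  [120.0, 50.0, 20.0, np.nan],
                  [130.0, 55.0, np.nan, np.nan],
                  [140.0, np.nan, np.nan, np.nan]])

CA = C.cumsum(axis=1)                      # Eq. (1): cumulative triangle
k = len(CA)

factors = []                               # Eq. (2): cadence factors lambda_j
for j in range(1, k):
    rows = list(range(k - j))              # rows where both columns are observed
    factors.append(CA.iloc[rows, j].sum() / CA.iloc[rows, j - 1].sum())

proj = CA.copy()                           # Eq. (3): project the lower triangle
for i in range(k):
    for j in range(k - i, k):
        proj.iloc[i, j] = proj.iloc[i, j - 1] * factors[j - 1]

ultimate = proj.iloc[:, -1]                                  # projected final amounts
paid = CA.apply(lambda row: row.dropna().iloc[-1], axis=1)   # last observed value per row
ibnr = (ultimate - paid).sum()
print([round(f, 4) for f in factors], round(ibnr, 2))
```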

3 Results As described in the methodology, prior to carrying out the exercise the information is organized in a main folder called "IBNR" with the subfolder "BaseReimbursementsMonthly", where the Excel or txt files containing the reimbursements of the period to be analyzed are placed; for the proposed case the period is 18 months starting in July 2019 (July 2019–December 2020). The total amount of data loaded into Power BI is 1,664,667 lines. As part of the ETL process performed in Power Query, year and month are extracted from the fields (a) Date of Incurrence or Claim (occurrence i) and (b) Generation or Settlement Date (deferral of payment j); with these date-type data, two new fields are created, called "Año.Mes.Reclamo" and "Año.Mes.Liquidacion". In order for this Power BI file to allow the calculation of periods different from the one selected for this case, a parameter (see Fig. 1) is created as a filter criterion that can be modified from Power Query.

Fig. 1 Filtering parameter for change of analysis periods
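A rough Python/pandas equivalent of this ETL step is sketched below: it reads all monthly files from the folder, derives the claim and settlement year-month fields, applies the period filter and pivots the reimbursements into the claim-month by settlement-month matrix. The column names ("FechaReclamo", "FechaLiquidacion", "Valor") and the file layout are hypothetical; the paper's implementation does this with Power Query (M) and DAX inside Power BI.

```python
from pathlib import Path
import pandas as pd

FOLDER = Path("IBNR/BaseReimbursementsMonthly")   # hypothetical folder layout
PERIOD_START, PERIOD_END = "2019-07", "2020-12"   # the 18-month analysis window

frames = [pd.read_excel(f) for f in sorted(FOLDER.glob("*.xlsx"))]
data = pd.concat(frames, ignore_index=True)

# Derive the Año.Mes fields from the claim and settlement dates
data["Año.Mes.Reclamo"] = pd.to_datetime(data["FechaReclamo"]).dt.strftime("%Y-%m")
data["Año.Mes.Liquidacion"] = pd.to_datetime(data["FechaLiquidacion"]).dt.strftime("%Y-%m")

# Period filter (the role of the Power BI parameter shown in Fig. 1)
data = data[data["Año.Mes.Reclamo"].between(PERIOD_START, PERIOD_END)]

# Reimbursement matrix: claim year-month (rows) x settlement year-month (columns)
matrix = data.pivot_table(index="Año.Mes.Reclamo", columns="Año.Mes.Liquidacion",
                          values="Valor", aggfunc="sum", fill_value=0)
print(matrix.shape)
```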

The following measures are created in Power BI: (a) Valuepayment (see Fig. 2) and (b) _TotalAcumVLR_PAGARDAIC.Año.MesLiquida (see Fig. 3). The Accumulated Services Provided matrix is then built in Power BI; it will be used to construct in Excel the Accumulated Registered Services Provided Matrix, and for this purpose the table must be organized as shown in Fig. 4.

Fig. 2 DAX measure, value to be paid

Fig. 3 Measure DAX, accumulated payable value

Fig. 4 Matrix of cumulative services rendered

Fig. 5 Power BI external tools, Analyze in Excel

Fig. 6 ETL process application, column dynamization override

Using the Analyze in Excel option of the Power BI external tools (Fig. 5), the information is exported in order to build a transposed matrix that yields the Cumulative Registered Services Provided Matrix described in Table 2. Once the information from Power BI is in Excel, a pivot table is built and a copy is made in values format in another sheet so that it can be converted into a table, imported into Excel's Power Query and transformed with the "Override column dynamization" (unpivot columns) process shown in Fig. 6. In order to construct the Cumulative Registered Services Provided Matrix described in Table 2, a classification is made by applying the Buscarv (VLOOKUP) formula according to Table 4 to the result of the data shown in Fig. 6, and a temporary table is constructed from these data (Fig. 7). The Transpose formula is applied to the data shown in Fig. 7 to arrive at the results shown in Fig. 8. With the data shown in Fig. 8, the F Chain Ladder factors are calculated using the formula shown in Fig. 9, of the form REDONDEAR(SUMAR.SI($C$22:$C$39;"…

… 0.4 and n = 297) and are therefore considered a good estimate [29]). After this, varimax rotation was performed to associate each question with a component. In the end, all questions were associated with a component; four components were created, which are in line with the constructs in the literature (defined above). In this study the overall value of Cronbach's alpha was 0.928, and therefore acceptable. Given the Cronbach's alpha values for each construct, it was decided to keep all the questions, except those related to involvement with vouchers. Chi-square and ANOVA tests were used to check whether there were statistically significant differences between the six test groups, and the questions used to prove homogeneity were: age, academic qualifications, and employment status. Participants were randomly assigned to each of the groups so that they would be homogeneous.

In terms of age, the ANOVA values were p = 0.663 and the Chi-square value showed a significance of p = 0.155 (showing homogeneity between the groups); in terms of academic qualifications, the ANOVA test showed a significance of p = 0.391 and the Chi-square test significance was p = 0.167 (showing homogeneity between the groups); and in terms of professional status, the ANOVA test showed a significance of p = 0.052 and the Chi-square test a significance of p = 0.127, showing that the groups were homogeneous.
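The reliability and homogeneity checks described above can be reproduced in outline with the short Python sketch below, which computes Cronbach's alpha for a set of Likert items and runs a one-way ANOVA and a chi-square test across six groups. The data are randomly generated placeholders, not the study's sample, and the item and group structure is hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway, chi2_contingency

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(42)
construct = rng.integers(1, 8, size=(297, 4))          # 4 items on a 7-point scale
print("alpha:", round(cronbach_alpha(construct), 3))

# Homogeneity of a continuous variable (e.g. age) across the six groups
groups = [rng.normal(40, 12, 50) for _ in range(6)]
print("ANOVA p:", round(f_oneway(*groups).pvalue, 3))

# Homogeneity of a categorical variable (e.g. education level) across the groups
counts = rng.integers(5, 20, size=(6, 3))               # group x category counts
chi2, p, dof, expected = chi2_contingency(counts)
print("Chi-square p:", round(p, 3))
```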

3.5 Results As for the manipulation variables, to understand whether participants perceived the independent variables, they were asked to say whether using the coupon when making a purchase made them feel good, and were also asked about proximity to the person they had described above; the ANOVA obtained a p-value < 0.001, indicating there were differences between the responses in the groups. We also performed Tukey and Bonferroni tests to understand the differences between groups, and these also showed that the variables had been well perceived by the participants. Purchase Intention: the ANOVA showed a significance p-value of 0.05, and for groups 3 and 6 (no relationship) (M3 = 3.145, M6 = 3.535) the p-value was > 0.05, which leads us not to support Hypothesis 2 and Hypothesis 3. This study shows the exact opposite of the literature and tells us that offering a reward (in this case a digital coupon) causes purchase intention to increase, which would not be the case without the digital coupon offer, for this sample [31].

In addition, when the relationship is weak, the digital coupon offer is not enough to increase the consumer's intention to buy. The results concerning purchase intention lead us to believe that the offer of a digital coupon, when the recommendation is made by someone with a strong tie to the person receiving it, increases the customer's desire to buy the product [25]. Perceived Quality: the ANOVA analysis, with a significance of p-value = 0.002, indicated there were differences between responses in the groups. Turning now to the analysis of the Tukey and Bonferroni tests, we notice that when we compare group 1 with groups 2 and 3, the significance is (M1 = 5.449, M2 = 4.735, M3 = 4.605) p-value < 0.05, therefore supporting Hypothesis 8. When comparing the offer vs no offer of a digital coupon, in all equal-relationship groups (strong, weak and no relationship), the significance is (respectively) (M1 = 5.449, M4 = 4.94) p-value > 0.05, (M2 = 4.735, M5 = 4.904) p-value > 0.05, and (M3 = 4.605, M6 = 4.635) p-value > 0.05. Thus, we cannot conclude that the digital coupon offer has an impact on perceived quality, rejecting Hypothesis 5, Hypothesis 6, and Hypothesis 7. Thus, on perceived quality, regardless of the person making the referral, the digital coupon offer is not relevant. In summary, the conclusions that the data revealed were as follows:

When there is an offer of a digital coupon (and comparing different ties):
– Purchase intention: it is higher when the recommendation is made by a strong relationship, and it is indifferent when it is made by a weak or no relationship
– Perceived quality: it is higher when the recommendation is made by a strong relationship, and it is indifferent when it is made by a weak or no relationship

When the recommendation is made by someone with a strong tie (and comparing between offer vs no digital coupon):
– Purchase intention: when a digital coupon is offered, purchase intention increases
– Perceived quality: it is indifferent to the offer of a digital coupon

Complementary Analysis - Relevance of the Information in the Email. We will now analyze one of the dimensions collected in the questionnaire, which did not serve to answer the hypotheses put forward, but rather to try to extract more information from the participants, in order to understand the relevance given to the recommendation information by means of the tie to the recommender [31]. Thus, the objective was to compare the relevance given to the information present in the email, which was intended to be tested (Table 3). ANOVA, with a significance of p-value < 0.001, indicated there were differences between responses in the groups. Tukey's and Bonferroni's tests reveal that when comparing the groups where there is a digital coupon offer but with different tie strengths, the significance is (M1 = 5.112, M2 = 3.815, M3 = 3.685) p-value < 0.05, and the relevance given to the email is higher, proving Hypothesis iv.

Table 3 Complementary hypotheses

Number  Hypotheses
Hi      The offer of the digital coupon and referral by a strong tie increases the relevance given to the information in the email compared to a referral without the offer.
Hii     The offer of the digital coupon and referral by a weak tie increases the relevance given to the information in the email compared to a referral without the offer.
Hiii    The offer of the digital coupon and referral by no tie increases the relevance given to the information in the email compared to a referral without the offer.
Hiv     The offer of the digital coupon and referral by a strong tie increases the relevance given to the information in the email compared to a referral by a weak tie and no tie strength with the offer of a digital coupon.

Analyzing now the offer vs no offer of a digital coupon: when the relationship is strong (group 1 and group 4), the significance is (M1 = 5.112, M4 = 4.19) p-value < 0.05, confirming Hypothesis i. When comparing the weak relationship (group 2 and group 5), the significance is (M2 = 3.815, M5 = 3.918) p-value > 0.05, not confirming Hypothesis ii. When we analyze the no-tie groups (group 3 and group 6), the significance is (M3 = 3.685, M6 = 2.44) p-value < 0.05, confirming Hypothesis iii. The results present a new perspective of interest on the relevance that people give to information received through a referral program: when a digital coupon is offered, people consider information received through a person with a strong relationship to be more relevant than information received through a weak relationship or no relationship. When we hold the relationship of the recommendation constant and analyze the offer vs no offer of digital coupons, we see that when the recommendation is made by a person with a strong relationship or no relationship, the relevance attributed to the email is higher when there is an offer of a digital coupon. When the recommendation is made through a no-tie relationship, the offer of the coupon is important for the relevance attributed to the email, and this may be a new effective way for companies to communicate with their customers.
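The pairwise group comparisons reported above can be reproduced in outline with the Tukey HSD procedure available in statsmodels, as in the sketch below; the six-group scores are randomly generated placeholders and do not correspond to the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
group_labels = [f"G{i}" for i in range(1, 7)]          # the six experimental groups
n_per_group = 50
placeholder_means = (5.1, 3.8, 3.7, 4.2, 3.9, 2.4)     # hypothetical group means

# Hypothetical 7-point "relevance of the email" scores per group
df = pd.DataFrame({
    "group": np.repeat(group_labels, n_per_group),
    "score": np.concatenate([
        np.clip(rng.normal(m, 1.2, n_per_group), 1, 7) for m in placeholder_means
    ]),
})

tukey = pairwise_tukeyhsd(endog=df["score"], groups=df["group"], alpha=0.05)
print(tukey.summary())       # pairwise mean differences, confidence intervals, p-values
```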

4 Conclusions 4.1 Final Considerations, Scientific and Managerial Contributions With the advance of new digital technologies, new challenges have appeared for companies in their relationship with consumers [2]. The literature has explored the study of coupons themselves, and of referral programs, but has not yet linked these two topics (so this is a new study).

This research aimed to understand whether a referral program could be a good tool for the distribution of digital coupons, analyzing the purchase intention and perceived quality of the participants in the study. Following the theoretical framework, an empirical study was carried out (through the experimental method). This study showed that the use of a referral program to distribute digital coupons increases consumers' purchase intention if the relationship between the consumer and the recommender is strong (which is in line with the literature); otherwise it has no impact. Regarding the perceived quality of the product, no impact of these two mechanisms was identified. Hypothesis 1 was thus supported, in which the use of the digital coupon, together with a recommendation by a friend, increases purchase intention. Hypotheses 4 and 8 were also supported, where it was found that the strength of the tie is also able to influence purchase intention and perceived quality, as predicted in the literature. It was also found that consumers give more relevance to information that comes to them via email (in the context of a referral program) if it comes from a person with whom they have a strong tie, or from a person with whom they have no relationship at all but the email includes a digital coupon. These conclusions open the door to new ways of interacting with the customer (that go beyond purchase intention and perceived quality) that can be used by companies and brands to communicate with their potential and current customers. From a scientific point of view, this study allowed us to understand and develop in more detail the theme of digital coupon distribution (still little explored), understanding how coupons can have an impact on the purchase intention of recipients. Another conclusion drawn was that sending a digital coupon through a referral program without tie strength is not effective in increasing purchase intention and perceived quality. However, it was shown that this method can be used to capture the attention of potential customers to the message the company wants to convey.

4.2 Limitations and Future Research This study has some limitations, such as the type of sample used (limited and convenience-based) and the fact that only one advertisement (for one mobile phone) was presented. For future research it is recommended: to understand the impact that a person with a weak relationship (but with social influence, the case of influencers) has on the distribution of digital coupons through a referral program; to measure the effect of the importance of the digital coupon when the person already wants to buy the product (measuring the indicators before and after the application of the method); and to carry out the same study for the purchase of services or another type of product (beyond technology). Another study could be conducted from the point of view of businesses to understand their perspective on digital coupons and their distribution.

References 1. Agu GA (2020) Perceived sales promotion transparency and customer intention to participate: insight from student-bank customers in Nigeria. J Mark Commun. https://doi.org/10.1080/135 27266.2020.1759122 2. Al-Gasawneh JA et al (2021) Moderator-moderator: digital coupon sales promotion, online reviews, website design, and the online shopping intention of consumers in Jordan. Int J Data Netw Sci 5(4):757–768. https://doi.org/10.5267/j.ijdns.2021.7.005 3. Alvarez BA, Casielles RV (2005) Consumer evaluations of sales promotion: the effect on brand choice. Eur J Mark 39(1–2):54–70. https://doi.org/10.1108/03090560510572016 4. Balakrishnan J, Foroudi P, Dwivedi YK (2020) Does online retail coupons and memberships create favourable psychological disposition? J Bus Res 116:229–244. https://doi.org/10.1016/ j.jbusres.2020.05.039 5. Bearden WO, Etzel MJ (1982) Reference group influence on product and brand purchase decisions. J Consum Res 9(2):183–194 6. Burns AC, Veeck A, Bush RF (2017) Marketing research. Pearson 7. Cameron D, Gregory C, Battaglia D (2012) Nielsen personalizes the mobile shopping app: if you build the technology, they will come. J Advert Res 52(3). https://www.scopus.com/inw ard/record.uri?eid=2-s2.0-84866515593&partnerID=40&md5=7c844990bfde102d3cc04e10 6227bae8 8. Cialdini RB, Griskevicius V (2010) Social influence. In: Advanced social psychology: the state of the science. Oxford University Press, pp 385–417 9. Clark RA, Zboja JJ, Goldsmith RE (2013) Antecedents of coupon proneness: a key mediator of coupon redemption. J Promot Manag 19(2):188–210. https://doi.org/10.1080/10496491.2013. 769475 10. Das G (2014) Linkages of retailer awareness, retailer association, retailer perceived quality and retailer loyalty with purchase intention: a study of Indian food retail brands. J Retail Consum Serv 21(3):284–292. https://doi.org/10.1016/j.jretconser.2014.02.005 11. Dickinger A, Kleijnen M (2008) Coupons going wireless: determinants of consumer intentions to redeem mobile coupons. J Interact Mark 22(3):23–39. https://doi.org/10.1002/dir.20115 12. Fong LHN, Nong SZ, Leung D, Ye BH (2021) Choice of non-monetary incentives and coupon redemption intention: Monetary saving and price consciousness as moderators. Int J Hosp Manag 94, Article no 102816. https://doi.org/10.1016/j.ijhm.2020.102816 13. Fortin DR (2000) Clipping coupons in cyberspace: a proposed model of behavior for dealprone consumers. Psychol Mark 17(6):515–534. https://doi.org/10.1002/(SICI)1520-6793(200 006)17:6%3c515::AID-MAR5%3e3.0.CO;2-B 14. Gilovich T, Dacher K, Serena C, Nisbett RE (2016) Social psychology 15. Hair J, Ortinau D, Harrison DE, Celsi M, Bush R (2017) Essentials of marketing research 16. Hong SM, Faedda S (1996) Refinement of the hong psychological reactance scale. Educ Psychol Meas 56(1):173–182. https://doi.org/10.1177/0013164496056001014 17. Kang H, Hahn M, Fortin DR, Hyun YJ, Eom Y (2006) Effects of perceived behavioral control on the consumer usage intention of e-coupons. Psychol Mark 23(10):841–864. https://doi.org/ 10.1002/mar.20136 18. Kaveh A, Nazari M, van der Rest JP, Mira SA (2021) Customer engagement in sales promotion. Mark Intell Plan 39(3):424–437. https://doi.org/10.1108/MIP-11-2019-0582 19. Keller KL, Kotler P (2012) Branding in B2B firms. In: Handbook of business-to-business marketing, pp 208–225. https://doi.org/10.4337/9781781002445.00021 20. 
Li L, Li X, Qi W, Zhang Y, Yang W (2020) Targeted reminders of electronic coupons: using predictive analytics to facilitate coupon marketing. Electron Commer Res. https://doi.org/10. 1007/s10660-020-09405-4 21. Lobel I, Sadler E, Varshney LR (2017) Customer referral incentives and social media. Manag Sci 63(10):3514–3529. https://doi.org/10.1287/mnsc.2016.2476 22. Malhotra N, Nunan D, Birks D (2017) Marketing research: an applied approach. Pearson.

23. Marôco J (2007) Análise estatística. Sílabo 24. Menon K, Ranaweera C (2018) Beyond close vs. distant ties: understanding post-service sharing of information with close, exchange, and hybrid ties. Int J Res Mark 35(1):154–169. https:// doi.org/10.1016/j.ijresmar.2017.12.008 25. Mohd Yusof YL, Wan Jusoh WJ, Maulan S (2021) Perceived quality association as determinant to re-patronise Shariah-compliant brand restaurants. J Islamic Mark 12(2):302–315. https://doi. org/10.1108/JIMA-10-2018-0190 26. Nayal P, Pandey N (2020) Framework for measuring usage intention of digital coupons: a SPADM approach. J Strateg Mark. https://doi.org/10.1080/0965254X.2020.1777460 27. Pallant J (2013) SPSS survival manual: a step by step guide to data analysis using IBM SPSS. McGraw Hill 28. Pandey N, Maheshwari V (2017) Four decades of coupon research in pricing: evolution, development, and practice. J Revenue Pricing Manag 16(4):397–416. https://doi.org/10.1057/s41 272-016-0076-7 29. Robinson OP, Bridges SA, Rollins LH, Schumacker RE (2019) A study of the relation between special education burnout and job satisfaction. J Res Spec Educ Needs 19(4):295–303. https:/ /doi.org/10.1111/1471-3802.12448 30. Saunders M, Lewis P, Thornhill A, Bristow A (2019) Research methods for business students. In: Chapter 4: understanding research philosophy and approaches to theory development, pp 128–171 31. Sciandra MR (2019) Money talks, but will consumers listen? Referral reward programs and the likelihood of recommendation acceptance. J Mark Theory Pract 27(1):67–82. https://doi. org/10.1080/10696679.2018.1534213 32. Sinha SK, Verma P (2017) Consumer’s response towards non-monetary and monetary sales promotion: a review and future research directions. Int J Econ Perspect 11(2):500–507. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85050330609&partne rID=40&md5=863e0bb6b927f216ddf430dc63aa256c 33. Sinha SK, Verma P (2018) Impact of sales promotion’s benefits on brand equity: an empirical investigation. Glob Bus Rev 19(6):1663–1680. https://doi.org/10.1177/0972150918794977 34. Steffes EM, Burgee LE (2009) Social ties and online word of mouth. Internet Res 19(1):42–59. https://doi.org/10.1108/10662240910927812 35. Wang Q, Mao Y, Zhu J, Zhang X (2018) Receiver responses to referral reward programs in social networks. Electron Commer Res 18(3):563–585 36. Wirtz J, Orsingher C, Cho H (2019) Engaging customers through online and offline referral reward programs. Eur J Mark 53(9):1962–1987. https://doi.org/10.1108/EJM-10-2017-0756 37. Wirtz J, Tang C, Georgi D (2019) Successful referral behavior in referral reward programs. J Serv Manag 30(1):48–74. https://doi.org/10.1108/JOSM-04-2018-0111 38. Xu F, Michael K, Chen X (2013) Factors affecting privacy disclosure on social network sites: an integrated model. Electron Commer Res 13(2):151–168. https://doi.org/10.1007/s10660013-9111-6

Digital Travel – A Study of Travellers’ Views of a Digital Visit to Mexico Ingvar Tjostheim

and John A. Waterworth

Abstract In this study, we invited members of a Norwegian national survey-panel to give their views on digital travel applications and digital travel experiences, and to consider Mexico as a travel destination. We presented vacationers with descriptions of digital travel products and activities, and a video about Mexico made from the game Horizon Forza 5. We used Partial Least Square—Structural Equation modelling (PLS-SEM) to analyse whether digital presentations, sense of place, and travel motivation could predict intention to visit Mexico. The analysis showed that vicarious sense of place was the best predictor of future intentions. As the study is based on survey data from a national population, a reasonable degree of generalisability, at least for Norway and other Scandinavian countries, is to be expected. It shows that there is a small but emergent market for digital travel and virtual tourism. More importantly, the study indicates that a digital experience in a virtual environment—specifically, that of a vicarious sense of place—can stimulate and motivate vacationers to travel to a tourist destination and hence that there are opportunities which businesses in the travel economy can potentially utilize. Keywords Digital travel · Virtual tourism · tourism marketing · Sense of place · Intention to visit a destination

I. Tjostheim (B) Hauge School of Management, NLA, 0130 Oslo, Norway e-mail: [email protected] J. A. Waterworth Umeå University, Umeå, Sweden © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_17

1 Introduction In the mid-1990s, virtual reality (VR) was often regarded as mostly hype, even though the technology was to some extent available. But some had high expectations that it was the technology of the future for realistic virtual travel, amongst many other types of application.

Now is the time to ask whether the technology, which is much more widely available and affordable today, can actually deliver the kind of experience that was promised, an experience with strong similarities to an actual visit to another place. More than 25 years later, can we say whether virtual tourism is a good candidate to replace physical travel [1, 2], or is it still mostly hype? Many travellers have experienced travel restrictions due to Covid-19, and some are concerned about the negative impact of travel on the environment. Because of these factors, and because of technological developments since the 1990s, the question is more relevant than ever. Fencott [3, 4] suggested that the longer visitors spend in a virtual tourism environment, the more likely they are to find the experience memorable and to have a desire to actually visit the physical place the virtual environment is presenting. In a review article on virtual tourism, Guttentag wrote [5: 648]: "VR will present the sector with both challenges and opportunities. Only with a more widespread and complete understanding of the relationships between VR and tourism will these challenges best be met and the opportunities best exploited." Some game developers have understood that sometimes gamers simply like to explore a virtual environment. In many popular video games, the setting for the game-play is an urban environment or a place with which the player is likely to be familiar, or wishes to become familiar. Some of these games have a tourist mode in which the player can explore and take photographs of famous landmarks. A key aspect here is photo-realism. One technique to achieve high levels of realism is to merge computer-generated graphics with real-life images. For a normal viewer, it is sometimes difficult to distinguish between a photorealistic computer image and a photographic image. Forza Horizon 5 is a game with a photo-realistic simulated environment of Mexico. It is possible to visit towns, archaeological sites, mountain regions and sandy beaches in the game. As an illustration of the use of digital travel in a game, it is probably one of the best examples. This exploratory study was designed to understand how vacationers respond to digital travel applications that could be similar to, or substitutes for, in situ experiences at a destination. The findings are important because they help identify opportunities that businesses in the travel economy could potentially utilize. The remainder of the paper is organized as follows. In Sect. 2 we outline how the data was collected, followed by a presentation of the video game and the measurements taken in the survey. Section 3 presents the data analysis and the results. In the final section, we conclude with a discussion of the results and their implications for future research.

2 Methods and Focus: A Digital Visit to Mexico A survey was carried out which included both a presentation of Mexico in a videogame and questions about intention to visit Mexico as a travel destination. We then used Partial Least Square—Structural Equation modelling (PLS-SEM) to analyse whether digital presentations, vicarious sense of place, and travel motivation could predict intention to visit Mexico.

2.1 Characteristics of the Participants and the Survey From the Norwegian population, the market research company Norstat AS selected and invited panel members to a survey on digital travel. In total, 632 participated, of which 580 answered all questions and watched the video. For these 580, a timer was used to measure the number of seconds from starting to read the text introducing the video and the time spent on watching the video about Mexico. To help characterise our participants, we first present their profile compared to a large representative survey of the Norwegian population (see Table 1). In comparison to the national survey, our travel survey had a higher percentage of participants in the 50–69 age group (see Table 1). There were some minor gender differences, but overall, the differences between the national representative survey and our travel survey participants were small, which suggests that our results from the study should have some generalisability to the traveller population of Norway. Overall, the 50+ group has more time for travel, longer holidays, and their financial situation is often better than for the younger groups. The level of education of the participants in our travel survey was also not different from the national survey (see Table 2). In general, it is easier to recruit survey participants from the higher educational groups. In the introduction to the questionnaire, we explained what we meant by digital travel, digital travel products and applications in the following way:

Table 1 Gender and the age profile of the participants in the 2 studies

                                      16–29   30–49   50–69
National survey 2021 (N = 1630)        24%     45%     32%
  Male (N = 794)                       20%     48%     32%
  Female (N = 836)                     27%     42%     32%
Our travel survey 2022 (N = 580)       18%     38%     44%
  Male (N = 268)                       19%     30%     51%
  Female (N = 312)                     18%     45%     37%

Table 2 Educational profile of the participants

                                      Primary or secondary   1st degree   Higher degree
                                      education              (Bachelor)   (Master or higher)
National survey 2021 (N = 1630)       51%                    25%          24%
  Male (N = 794)                      55%                    22%          23%
  Female (N = 836)                    48%                    28%          25%
The travel survey 2022 (N = 580)      46%                    26%          27%
  Male (N = 268)                      48%                    26%          25%
  Female (N = 312)                    44%                    26%          29%

Table 3 Travel products for marketing purposes, a pre-taste for, or as a substitute for (an alternative to) the in situ experience

Travellers that view digital products as an alternative to physical travel
                                                             Marketing   A pre-taste for   A substitute for   Other
A digital presentation of a museum or a similar attraction     33%          53%                6%              8%
A digital presentation of an activity                          41%          48%                2%              9%
A digital presentation of a guided tour                        44%          38%               28%             10%

The term virtual tourism is used by some to describe a digital visit, a digital experience of what museums and tourist attractions have to offer. … For the next questions we distinguish between these three: digital presentations or applications for museums or tourist attractions, for activities for tourists, and for guided tours.

There are examples of travel applications that aim to create a feeling of "being there" as a substitute for actual travel, but a vacation is in many cases also about getting away from where you live. Therefore, to visit a museum or attraction digitally is for many not a substitute for a vacation, but it is still an example of digital travel. In the survey we distinguished between the digital presentations or applications, and the experiences a user may have by using a digital application. We asked separate questions about museums, tourist attractions and guided tours. We used the word "pre-taste" (of the experience) to indicate that, although the digital application has content that gives information to the user, the presentation is mainly intended to let the person feel what it is like to visit the place. A pre-taste is not an end in itself; the purpose is to create an interest and/or to influence the person to book a trip. However, the effect can be stronger than this. Telepresence, the feeling of being somewhere else, though actually in a virtual environment, is sometimes evoked [6]. In their review article, Beck et al. [6] write (p. 598): "Study results suggest that VR, regardless of whether it is non-, semi- or fully immersive, is capable of positively influencing the individual motivation to actually visit a place". Table 3 presents the percentages of travellers who viewed the digital alternative as a pre-taste, or as a substitute for the physical travel product. Table 3 shows that the majority do not view the digital alternative as a substitute for a visit: less than 10% choose this alternative. In contrast, for museums and an activity approximately 50% answered that the digital alternative can be a pre-taste of an in situ experience.

2.2 Visiting Mexico in Forza Horizon 5 To illustrate digital travel and how it is possible to explore archaeological sites, small towns, and picturesque scenery, we made a 2-min video from Forza Horizon 5.

This game was launched in the fall of 2021 and can be played in tourist mode. To illustrate the level of accuracy, the video showed actual photos of places and buildings next to the digital versions of the same places and buildings in the game (see Figs. 1, 2 and 3). There have been favourable reviews of the game focusing on the attention to detail and how accurately the environment and its buildings have been replicated in the game [8]. Sense of place refers to the physical environment, the place as it exists in our shared external world. However, in human geography, sense of place is not only about the setting, the environment, but is to an important extent created in the interaction of a setting and a person [9]. The person brings something to the setting and interacts with the setting and the people that are there. There is a long research tradition in human geography on the experience of place [10, 11], sometimes from a tourism perspective [12]. Edward Relph [10] distinguishes between different types of sense of place. For this study we chose the type which is most relevant to the kind of experiences offered

Fig. 1 Uxmal in reality (left) and in the game (right)

Fig. 2 The virtual Parque Municipal and Teatro Juarez

Fig. 3 The virtual Parque Municipal de Beisbol Jose Aguilar y Maya

through digital visits: vicarious outsideness [13, 14]. Vicarious outsideness contains a behavioural aspect, what a tourist can do, but it is also about the hosts, in this case the people living in Mexico. In travel planning, it is common for travellers to search for information and to use digital applications. We included questions about TripAdvisor, Google Street View and online search behaviour. In the Partial Least Squares Structural Equation Modeling (PLS-SEM) technique used to analyse responses to our survey, we included a variable named "use of digital search", based on the answers to the questions about search behaviour. There are many different types of travel motivation and reasons for travel. For Europeans generally, and for the Norwegian participants in this study, travelling to Mexico is a long journey. Still, it can serve as an interesting and, for some, exotic tourist destination. We included questions about travel motivation to give the study a more general context. We asked our respondents about their views on digital travel and analysed the responses using the Partial Least Squares Structural Equation Modeling (PLS-SEM) technique. For the analysis presented in the next section, we formulated the following research questions:

RQ1. Is there a relationship between digital presentations of museums, attractions and guided tours and intention to visit Mexico?
RQ2. Is there a relationship between vicarious sense of place (vicarious outsideness) and intention to visit Mexico?

3 Data Analysis and Results In consumer behaviour research, advertising and other research fields that are concerned with the relationships between attitudes and behaviour, it is common to use measurements of purchase intention [15–18]. The equivalent of purchase intention in a tourism context is intention to visit a place or destination [19].

Digital Travel – A Study of Travellers’ Views ...

191

to Tian-Cole and Crompton [20], a person’s intention to visit a destination is a determinant of their actual behaviour of visiting that destination. In our study, we used a seven-point Likert scale [21], ranging from extremely unlikely to extremely likely to visit, as a measure of intention to visit Mexico. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to test hypotheses based on our research questions. PLS was developed and designed by Wold [22, 23]. The PLS algorithm is a sequence of regressions in terms of weight vectors. PLS path modeling maximizes the explained variance of all dependent variables and thus supports prediction-oriented goals [24]. It is often used in social science research, and in studies that are exploratory in nature [25, 26]. The minimum sample size required by PLS-SEM is seven to ten times the largest number of paths leading to an endogenous construct, a requirement met in this study [27]. Convergent validity is suggested if factor loadings are 0.60 or higher [28] and each item loads significantly on its latent construct [29]. Discriminant validity is suggested if all measurement items load more strongly on their respective construct than on other constructs. The square root of the average variance extracted (AVE) of each construct should be higher than the inter-construct correlations, i.e. the correlations between that construct and any other construct [30]. An AVE above the recommended threshold of 0.5 indicates a satisfactory level of convergent validity. For further information about PLS as a statistical method, we refer to Chin [31]. Tables 4 and 5 show that the quality criteria recommended for PLS were met, thus supporting further analysis of the research questions. Figure 4 shows that the overall variance explained (R2) is 0.23. Chin [31] describes an R2 between 0.19 and 0.33 as weak, with low explanatory power. We concluded that RQ1 was not supported: there is not a relationship between digital presentations of museums, attractions and guided tours and intention to visit Mexico. The PLS-SEM analysis did show a strong association between vicarious sense of place and intention to visit Mexico. In the PLS model, vicarious sense of place was the only variable with a strong significant path. We therefore conclude that RQ2 was

Table 4 Discriminant validity of the PLS-model (square root of the AVE on the diagonal)

                            Digital presentations   Intention to visit   Vacation types   Digital search   Vicarious sense of place
Digital presentations       0.798
Intention to visit          0.211                   0.859
Vacation types              0.056                   0.215                0.777
Digital search              0.149                   0.204                0.183            0.711
Vicarious sense of place    0.432                   0.204                0.119            0.242            0.924


Table 5 Construct reliability and validity

                            Cronbach's Alpha   Composite Reliability   Average Variance Extracted (AVE)
Digital presentations       0.743              0.840                   0.638
Intention to visit          0.881              0.918                   0.739
Vacation types              0.690              0.818                   0.604
Digital search              0.514              0.753                   0.505
Vicarious sense of place    0.914              0.946                   0.854

Vicarious outsideness:
1. While looking at the video I was both thinking about people living in Mexico and what I could do in Mexico
2. While looking at the video I was both thinking about people living in Mexico and what I could do on a holiday in Mexico
3. While looking at the video I was both thinking about people living in Mexico and what I could do on a sightseeing tour in Mexico

Fig. 4 The result of the PLS analysis


supported: There is a relationship between vicarious sense of place experience and intention to visit Mexico.
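As an aside, the Fornell–Larcker criterion described above can be reproduced directly from the published values: the square root of each construct’s AVE (the diagonal of Table 4) must exceed that construct’s correlations with every other construct. The short Python sketch below only illustrates this arithmetic with the numbers reported in Tables 4 and 5; it is not part of the original analysis.

```python
import numpy as np

constructs = ["Digital presentations", "Intention to visit",
              "Vacation types", "Digital search", "Vicarious sense of place"]
ave = np.array([0.638, 0.739, 0.604, 0.505, 0.854])  # AVE values from Table 5

# Inter-construct correlations from Table 4 (off-diagonal entries)
corr = np.array([
    [1.000, 0.211, 0.056, 0.149, 0.432],
    [0.211, 1.000, 0.215, 0.204, 0.204],
    [0.056, 0.215, 1.000, 0.183, 0.119],
    [0.149, 0.204, 0.183, 1.000, 0.242],
    [0.432, 0.204, 0.119, 0.242, 1.000],
])

sqrt_ave = np.sqrt(ave)  # should reproduce the diagonal of Table 4
for i, name in enumerate(constructs):
    # Largest correlation of this construct with any *other* construct
    max_corr = np.delete(corr[i], i).max()
    ok = sqrt_ave[i] > max_corr
    print(f"{name}: sqrt(AVE) = {sqrt_ave[i]:.3f}, "
          f"max correlation = {max_corr:.3f}, "
          f"discriminant validity {'supported' if ok else 'not supported'}")
```

Running the sketch confirms that each construct’s square root of AVE exceeds its largest inter-construct correlation, matching the validity conclusion stated above.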

4 Discussion

Digital technologies are already playing a significant role in travel behaviour, by replicating visits to places and supporting tourists by boosting their ability to imagine themselves in those places. The tourism industry uses different media channels in order to attract visitors to a destination. A tourist may not be satisfied with a virtual substitute for a physical visit, but the virtual experience might increase their desire to visit the actual place, as shown in this study with Mexico as the tourist destination. Such technologies are expected to play an increasing role in travel planning and in products and services offered by companies in the travel and tourism industry. Predictions about the future are not necessarily accurate, however, and our study is no exception to that. Our study emphasized how our participants judged digital travel based on specific questions about museums, attractions, and guided tours, as well as a digital visit to Mexico in a game. The majority of our survey participants did not view the digital alternative as a substitute for a physical visit. But for visiting museums and some other types of activity, about half of all participants considered that a digital alternative could serve as a pre-taste of an in situ experience. Additional research is needed to look into the potential marketing effects of virtual tourism applications and the extent to which people will replace physical travel with a digital alternative. In the future, with improving and more affordable immersive technology, more convincing digital travel is likely to become more common, given the drawbacks of physical travel such as fossil fuel use, the risk of contracting and spreading diseases, and cost. Digital travel that closely simulates aspects of physical travel will perhaps not be the first choice for most vacationers for their main holiday. Nevertheless, we envision that, for a segment of vacationers at least, digital travel will increasingly become a substitute for some physical travel to places and events, not just an add-on or pre-taste of it.

References 1. Cheong R (1995) The virtual threat to travel and tourism. Tour Manag 16(6):417–422 2. Williams P, Hobson JSP (1995) Virtual reality and tourism: fact or fantasy? Tour Manag 16(6):423–427 3. Fencott C (1999) Content and creativity in virtual environments design. In: Proceedings of virtual systems and multimedia 99. University of Abertay, Dundee, Scotland, pp. 308–317 4. Fencott C, Ling J, van Schaik P, Shafiullah M (2003) The effects of movement of attractors and pictorial content of rewards on users’ behaviour in virtual environments: an empirical study in the framework of perceptual opportunities. Interact Comput 15:121–140


5. Guttentag D (2009) Virtual reality: applications and implications for tourism. Tour Manag 31(5):637–651 6. IJsselsteijn W, Riva G (2003) Being there: the experience of presence in mediated environments. In: Riva G, Davide F, IJsselsteijn WA (eds) Being there: concepts, effects and measurements of user presence in synthetic environments. IOS Press, pp. 3–16 7. Beck J, Rainoldi M, Egger R (2019) Virtual reality in tourism: a state-of-the-art review. Tour Rev 74(3):586–612 8. Comparing forza horizon 5 locations to real-life. Theodore McKenzie, Senior Editor, 9 November 2021. https://80.lv/articles/comparing-forza-horizon-5-locations-to-real-life/ 9. Williams DR, Patterson ME (1996) Environmental meaning and ecosystem management: perspectives from environmental psychology and human geography. Soc Nat Resour 9:507–521 10. Relph E (1976) Place and placelessness. Pion, London 11. Tuan YF (1979) Space and place: humanistic perspective. In: Gale S, Olson G (eds) Philosophy in geography. D. Reidel, Dordrecht 12. Lew A (1999) Editorial: a place called tourism geographies. Tour Geogr 1:1–2 13. Tjostheim I (2020) Experiencing sense of place in a virtual environment: real in the moment? [Report RR-20.02]. Umeå Universitet 14. Tjostheim I, Waterworth JA (2022) the psychosocial reality of digital travel: being in virtual places. Palgrave MacMillan 15. Fazio RH, Zanna MP (1978) Attitudinal qualities relating to the strength of the attitude-behavior relationship. J Exp Soc Psychol 14:398–408 16. Morwitz VG, Steckel JH, Gupta A (1997) When do purchase intentions predict sales? Working paper marketing science institute, June, Report no. 97–112 17. Li H, Daugherty T, Biocca F (2002) Impact of 3-D advertising on product knowledge, brand attitude, and purchase intention: the mediating role of presence. J Advert 3:43–58 18. Ahn SJ, Bailenson JN (2011) Self-endorsing versus other-endorsing in virtual environments: the effect on brand attitude and purchase intention. J Advert 40(2):93–106 19. Chung N, Han H, Joun Y (2015) Tourists’ intention to visit a destination: the role of augmented reality (AR) application for a heritage site. Comput Hum Behav 50:588–599 20. Tian-Cole S, Crompton JL (2003) A conceptualization of the relationships between service quality and visitor satisfaction, and their links to destination selection. Leis Stud 22:65–80 21. Bearden WO, Lichtenstein DR, Teel JE (1984) Comparison price, coupon, and brand effects on consumer reactions to retail newspaper advertisements. J Retail 60:11–34 22. Wold HO (1982) Soft modeling: the basic design and some extensions. In: Joreskog KG, Wold HO (eds) Systems under indirect observations, Part II. North-Holland, Amsterdam, pp 1–54 23. Wold HO (1985) Partial least squares. In: Kotz S, Johnson NL (eds) Encyclopedia of statistical sciences, vol 6. Wiley, New York, pp 581–591 24. Henseler J, Ringle CM, Sinkovics RR (2009) The use of partial least squares path modeling in international marketing. Adv Int Mark 20:277–320 25. Barclay DC, Higgins C, Thompson R (1995) The partial least squares approach to causal modeling: personal computer adoption and use as an illustration. Technol Stud 2(2):285–308 26. Sarstedt M, Ringle CM, Cheah J-H, Ting H, Moisescu OI, Radomir L (2019) Structural model robustness checks in PLS-SEM. Tour Econ 1–24 27. Chin WW, Marcolin BL, Newsted PR (2003) A partial least squares latent variables modeling approach for measuring interaction effects: results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. 
Inf Syst Res 14(2):189–217 28. Bagozzi RP, Yi Y (1988) On the evaluation of structural equation models. J Acad Mark Sci 16(1):74–94 29. Gefen D, Straub D (2005) A practical guide to factorial validity using PLS-graph: tutorial and annotated example. Commun AIS 16(5):91–109 30. Fornell C, Larcker DF (1981) Evaluating structural equation models with unobservable variables and measurement error. J Mark Res 18(1):39–50 31. Chin WW (1998) The partial least squares approach to structural equation modeling. In: Marcoulides GA (ed) Modern methods for business research. Lawrence Erlbaum Associates, Mahwah NJ, pp 295–358

Health Informatics

Medical Entities Extraction with Metamap and cTAKES from Spanish Texts Mauricio Sarango and Ruth Reátegui

Abstract Natural language processing tools such as Metamap and cTAKES allow identifying and coding medical entities from unstructured texts. This work aims to apply the above-mentioned free tools to extract medical entities from texts written in the Spanish language. A methodology for applying the two tools and a comparison of the extractions are presented. Although the two tools were configured to analyze the same semantic types, Metamap exceeds cTAKES in the identification of abbreviations, while cTAKES performs a better recognition of medical procedures. Keywords Metamap · cTAKES · Medical entities · Health

M. Sarango · R. Reátegui (B) Universidad Técnica Particular de Loja, 101608 Loja, Ecuador e-mail: [email protected] M. Sarango e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_18

1 Introduction

Digital medical texts with a narrative and unstructured format store information about patients’ diseases, procedures and medications. Currently, a variety of natural language processing (NLP) tools exist that automatically map, encode and normalize the medical terms or entities present in this type of text. Two widely used tools in the biomedical field are Metamap and cTAKES, which code the entities based on the Unified Medical Language System (UMLS). Both tools are available for free, but are developed to work mainly with texts written in English. Some researchers have used the mentioned tools to identify entities in texts written in English [1–3], Italian [4], German [5] and Spanish [6]. For languages other than English, these works used googletrans as a strategy to translate the texts into English and then applied an NLP tool. Despite the above, it is essential to develop and apply a methodology that allows the automatic identification of medical entities from texts written in


another language, such as Spanish, using existing free tools, in order to achieve effective management of the information contained in medical documents. Also, a comparison between tools is necessary in order to know the different advantages of each tool. In order to become familiar with those tools and the medical vocabulary they use, a brief explanation of some concepts is given below.

1.1 UMLS

The Unified Medical Language System (UMLS) is composed of files and software that bring together an extensive number of drug, biomedical, and health vocabularies, and it also describes standards that allow interoperability between different information systems [4]. UMLS has three main tools: the Metathesaurus, the Semantic Network, and the SPECIALIST Lexicon and Lexical Tools.
Metathesaurus: it is a collection of terms or entities and codes from a large number of vocabularies, including CPT, ICD-10-CM, LOINC, MeSH, RxNorm and SNOMED CT. The vocabularies have hierarchies, definitions and other relationships or attributes [7, 8]. Because a concept can have many different names depending on the vocabulary from which it comes, concepts with the same meaning are linked to a Concept Unique Identifier (CUI) containing the letter C followed by seven numbers.
Semantic Network: it has categories (semantic types) and their relationships (semantic relationships) that create a network representing the biomedical domain. It helps to interpret the meaning assigned to a concept within the Metathesaurus. There are 133 semantic types that can be identified by their abbreviation, Type Unique Identifier (TUI), or full semantic name [7].
SPECIALIST Lexicon and Lexical Tools: it is an English lexicon of biomedical terms or entities. It has tools that allow normalizing strings, generating lexical variants and creating indexes [7].
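As a minimal illustration of the CUI format just described (the letter C followed by seven digits), the snippet below validates candidate identifiers with a regular expression; the example strings are illustrative only.

```python
import re

CUI_PATTERN = re.compile(r"^C\d{7}$")  # a CUI is the letter C followed by seven digits

for candidate in ["C0008586", "C0231683", "C123", "D0008586"]:
    verdict = "valid CUI" if CUI_PATTERN.match(candidate) else "not a valid CUI"
    print(candidate, "->", verdict)
```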

1.2 Metamap

Metamap is a program that allows mapping biomedical texts to the UMLS Metathesaurus, or discovering Metathesaurus concepts in the texts. Metamap is based on a symbolic, natural language processing and computational linguistics approach [9]. This tool offers functionalities such as disambiguation, negation detection, conceptualization, retrieving and evaluating candidates, and mapping construction.


1.3 cTAKES

cTAKES is an open-source natural language processing system that extracts clinical information from unstructured texts. It has a “default clinical pipeline” for processing clinical notes that identifies types of medical terms or entities such as medications, diseases/disorders, signs/symptoms, anatomical sites, and procedures. Some of its most important components include: sentence boundary detector, tokenizer, part-of-speech tagger, negation detector, named entity recognition, and dependency parser [10].

2 Methodology

The methodology followed in this work consists of six steps (see Fig. 1):
Obtaining the Dataset: the dataset was obtained from the TL Plan dataset (Recognition and resolution of biomedical abbreviations). This plan is directed by the Government of Spain with the aim of promoting the development of NLP in Spanish for the extraction of abbreviations present in clinical documents.

Fig. 1 Applied methodology for the analysis of medical texts


The TL Plan offers a set of 220 free-to-download clinical cases in a narrative format in Spanish. The texts contain information from the summaries of scientific articles in the medical field, so they cover a large number of diseases, procedures and medications.
Automatically Translate the Clinical Texts of the Dataset from Spanish into English: the googletrans library was used to automatically translate the clinical cases. The dataset files are read one by one and translated into English, so that spelling correction can then be applied to obtain a cleaner text.
Perform an Orthographic and Syntactic Correction of the Translated Texts: to perform the orthographic correction of the clinical cases, the TextBlob library in Python 3 was used. It takes the text translated into English, processes it, and automatically replaces the wrongly translated words in the English text.
Apply the NLP Tools: this component consists of the application of Metamap and cTAKES for the identification and coding of the medical entities present in the translated and corrected clinical texts. As a result, a new collection of texts is obtained for each tool, containing the medical entities identified and coded in the processed clinical texts.
Carry out an Information Structuring Process: because Metamap and cTAKES return information in two different formats, it is necessary to carry out an information cleaning and structuring process, so that the results are as similar as possible.
Comparison of Results: 10% of the texts were selected to compare the results obtained with both tools.
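A minimal sketch of the translation and correction steps is given below, assuming the googletrans and TextBlob packages named above are installed; the folder names are hypothetical placeholders, and the subsequent application of Metamap and cTAKES to the corrected files is not shown, since both tools are run separately.

```python
from pathlib import Path

from googletrans import Translator   # automatic Spanish-to-English translation
from textblob import TextBlob        # orthographic correction of the translated text

INPUT_DIR = Path("clinical_cases_es")    # hypothetical folder with the Spanish clinical cases
OUTPUT_DIR = Path("clinical_cases_en")   # translated and corrected English texts
OUTPUT_DIR.mkdir(exist_ok=True)

translator = Translator()

for case_file in sorted(INPUT_DIR.glob("*.txt")):
    spanish_text = case_file.read_text(encoding="utf-8")

    # Translate the clinical case from Spanish into English
    english_text = translator.translate(spanish_text, src="es", dest="en").text

    # Apply automatic spelling correction to the translated text
    corrected_text = str(TextBlob(english_text).correct())

    # The corrected files are then processed with Metamap and cTAKES (not shown here)
    (OUTPUT_DIR / case_file.name).write_text(corrected_text, encoding="utf-8")
```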

3 Results

Due to the large number of entities identified by Metamap and cTAKES, 10% of the texts (20 cases) were analyzed to make a comparison of both tools. Based on the results obtained with Metamap (see Fig. 2) and cTAKES (see Fig. 3), Table 1 was constructed to show the information identified by each tool. In Table 1, columns 2 and 3 show the number of entities identified in documents 1 and 139 with Metamap and cTAKES. Columns 4 and 5 present a list of entities identified with cTAKES (column 4) and with Metamap (column 5). The numbers in column 6 represent the similar entities identified by both tools.

Fig. 2 Examples of entities identified with Metamap


Fig. 3 Examples of entities identified with cTAKES after carrying out the information structuring process

Table 1 Examples of the results obtained in documents 1 and 139 with Metamap and cTAKES

Text 1. Entities identified: Metamap 50, cTAKES 46. Similar entities identified by both tools: 41.
Different entities identified by cTAKES: Chromogranin (CUI C0008586), Normal Gait (CUI C0231683), Normal Skin (CUI C0558145), Weighing patient (CUI C1305866), Solid Dosage Form (CUI C1378566).
Different entities identified by Metamap: History of present illness (finding) (CUI C0262512), Injection of therapeutic Gray Matter (CUI C0018220), Nucleotide Sequence (CUI C0004793), Radiological diagnosis (CUI C0043299), Surgical Procedure (CUI C0543467), Intervention or Procedure (CUI C0184661), Pulmonary resonance (CUI C0231881).

Text 139. Entities identified: Metamap 51, cTAKES 43. Similar entities identified by both tools: 38.
Different entities identified by cTAKES: Diffuse Idiopathic Skeletal Hyperostosis (CUI C0020498), Functional surgery (CUI C0442969), Compensation as a Defense Mechanism (CUI C0152057), Nostril (CUI C0595944), Sinusitis (CUI C0037199).
Different entities identified by Metamap: History of present illness (finding) (CUI C0262512), Patient Dead (CUI C1546956), Entire right anterior naris (CUI C1720679), Positive Finding (CUI C1446409), Olfactory pit (CUI C1284018), Infiltration (procedure) (CUI C0702249), Mucus (CUI C0026727), Primary Malignant Neoplasm (CUI C1306459), Administration procedure (CUI C1533734), Chemical (CUI C0220806), Chemical Restraint (CUI C1320374).

4 Comparison of Metamap and cTAKES

Considering the 10% of texts analyzed, in 11 out of the 20 cases cTAKES identified a greater number of medical entities, compared with the 9 cases in which Metamap obtained the greater number of entities. Also, on average, 58% of the medical entities present in the texts are recognized by cTAKES and Metamap.
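Once both outputs are reduced to CUIs per document, this comparison amounts to simple set operations. The sketch below illustrates the idea; the CUI lists are placeholder values standing in for the structured outputs of each tool after the information-structuring step.

```python
# Placeholder CUI sets for one document; in practice these come from the
# structured Metamap and cTAKES outputs after the cleaning/structuring step.
metamap_cuis = {"C0000001", "C0000002", "C0000003", "C0000004", "C0000005"}
ctakes_cuis = {"C0000001", "C0000002", "C0000006", "C0000007"}

similar = metamap_cuis & ctakes_cuis        # entities identified by both tools
only_metamap = metamap_cuis - ctakes_cuis   # entities identified only by Metamap
only_ctakes = ctakes_cuis - metamap_cuis    # entities identified only by cTAKES
jaccard = len(similar) / len(metamap_cuis | ctakes_cuis)

print("Similar entities:", len(similar))
print("Only Metamap:", sorted(only_metamap))
print("Only cTAKES:", sorted(only_ctakes))
print(f"Overlap (Jaccard): {jaccard:.2f}")
```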


Fig. 4 Examples of abbreviation and procedures recognized by Metamap and cTAKES respectively

Metamap presents a better identification of abbreviations, but cTAKES performs a better recognition of medical procedures (see Fig. 4), even though the two tools were configured to recognize the same set of semantic types.

5 Conclusion

The outputs of Metamap and cTAKES have different formats. The Metamap API allows a better configuration of the output format: the results are structured according to the selected options. On the other hand, cTAKES does not permit modifying the parameters of its clinical pipeline; therefore, the structure of the output must be processed to eliminate redundant or unnecessary information. This work describes a methodology that could be adapted to medical texts written in languages other than English. The translation stage could be modified to recognize the input language and translate it into English for the subsequent processing steps. Metamap and cTAKES had different results in the identification of certain entities: Metamap outperforms cTAKES in abbreviation identification, but cTAKES is better than Metamap at recognizing medical procedures. Therefore, a combination of both tools could be considered in future work. Furthermore, this work has certain limitations that should be taken into account; for example, the dataset does not have a list of entities previously identified by experts, which complicates the evaluation process. Experts could therefore collaborate to identify the medical entities in each text and thus create a gold standard that permits evaluating the results with metrics such as precision, accuracy and recall. In this work, the tools were configured to recognize a wide set of semantic types (62 types) defined by UMLS. The number of semantic types could be reduced to facilitate the identification and analysis process.


References 1. Arguello-Casteleiro M et al (2022) MetaMap versus BERT models with explainable active learning: Ontology-based experiments with prior knowledge for COVID-19. In: CEUR Workshop Proceedings, vol 3127, pp 108–117 2. Naseri H et al (2021) Development of a generalizable natural language processing pipeline to extract physician-reported pain from clinical reports: generated using publicly-available datasets and tested on institutional clinical reports for cancer patients with bone metastases. J Biomed Inform 120 3. Moore CR et al (2021) Ascertaining Framingham heart failure phenotype from inpatient electronic health record data using natural language processing: a multicentre Atherosclerosis Risk in Communities (ARIC) validation study. BMJ Open 11(6) 4. Chiaramello E, Paglialonga A, Pinciroli F, Tognola G (2017) Attempting to use meta map in clinical practice: a feasibility study on the identification of medical concepts from Italian clinical notes. Stud Health Technol Inform 228:28–32 5. Becker M, Böckmann B (2016) Extraction of UMLS® concepts using Apache cTAKESTM for German language. Stud Health Technol Inform 223:71–76 6. Perez N, Accuosto P, Bravo À, Cuadros M, Martínez-Garcia E, Saggion H, Rigau G (2020) Cross-lingual semantic annotation of biomedical literature: experiments in Spanish and English. Bioinformatics 36(6):1872–1880 7. UMLS. https://www.nlm.nih.gov/research/umls/index.html. Accessed 24 Aug 2022 8. Bodenreider O (2004) The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res 32 9. MetaMap. https://lhncbc.nlm.nih.gov/ii/tools/MetaMap.html. Accessed 24 Aug 2022 10. Savova GK, Masanz JJ, Ogren PV, Zheng J, Sohn S, Kipper-Schuler KC, Chute CG (2010) Mayo clinical text analysis and knowledge extraction system (cTAKES): architecture, component evaluation and applications. J Am Med Inform Assoc 17(5):507–513. https://doi.org/10.1136/ jamia.2009.001560

Health Records Management in Trust Model Blockchain-Based António Brandão

Abstract This study presents the application of Blockchain in e-health and proposes a trust model to guarantee the privacy of patients’ data, security between the interested parties, and the secure exchange of e-health data. The concepts and structures of the data must be more compliant, standardized, and adaptable for interoperability. The exchange of patient data with the eHealth Smart Places (eHSP) explores different types of data sources, depending on the context, with various types of electronic e-Health Records, several IoT devices, and mobile devices. The data will have to be more granular, consistently aggregated, and supported by a data structure that allows applying rules of security, access, privacy, and data protection. The patient-centric paradigm and the proposed trust model orient the systems’ security and privacy design towards the protection and satisfaction of the patient, while at the same time allowing the necessary and sufficient secure data and information to be shared between the entities and with the patient, with an efficient use of resources in the health system. Keywords eHealth · Blockchain · Smart places · IoT

1 Introduction

This work, as part of a broader effort, is focused on research and guidance for the application of Blockchain technology in Smart Places. Here, we focus on e-Health’s Smart Places (eHSP) inserted in the healthcare ecosystem, with increasing use in various fields of healthcare and medicine. The central problem this work aims to address is guaranteeing confidence in the privacy of patients and stakeholders, together with the security of the health data. At the same time, it aims to provide means for sharing necessary and sufficient data and information between the various entities involved in health procedures, with the patient at the center of all policies. A. Brandão (B) Universidade Aberta, Lisbon, Portugal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_19


The work [1] discusses the new terms used in the health field. It presents a definition of e-Health as the tools and services that use information and communication technologies to improve prevention, diagnosis, treatment, monitoring, and management. Concepts such as e-Health, m-Health, Tele-Health, and Telemedicine are new ways of providing health care, including various technical components. The following sections present the state of the art of blockchain application in health care, the problem we encountered, the methodology, and the proposal of the blockchain-supported trust model for e-Health’s Smart Places.

2 State of the Art

In this section, we explore some of the concepts underlying the subject of this text, centred on the idea of e-Health’s Smart Places (eHSP), and we review the scientific studies that apply Blockchain technology to health from different perspectives. To understand the problem, it is necessary to structure the different information, such as the Personal Health Record (PHR), Electronic Patient Record (EPR), Electronic Medical Record (EMR), and Electronic Health Record (EHR), as well as the blockchain technology settings, its features, the evolution of its implementation, and the risks of the project. Blockchain-based solutions are being used in EHRs, PHRs, and clinical trials [2]. The research and operational challenges lie in trying to integrate blockchain technology into existing EHR systems [3].

2.1 Blockchain Technology

Blockchain technology for biomedical and healthcare applications has benefits, drawbacks, and new demands. Relevant topics include knowledge about the characteristics of Blockchain technology, the evaluation of alternative technologies to Blockchain, emerging applications, and the advantages of applying Blockchain to biomedical or healthcare applications compared with those based on traditional distributed databases [4]. Health services must be focused on patient care, maintaining the privacy and integrity of information. Layers built on a Blockchain-based application mechanism can provide greater privacy, ensuring adequate access to data. With the application of Blockchain, the real problems of public health control, data collection, monitoring, and compliance with requirements can be handled with lower administrative costs and better-organized patient information, allowing the scaling of the health sector and cost control [5]. Blockchain can combine cost savings with greater accessibility for reporting to interested parties.


The use of Blockchain in the management and sharing of electronic medical and health records can enable patients, physicians, healthcare providers, hospitals, clinics, and other health organizations to share data and increase interoperability. The selection and adoption of the appropriate Blockchain architecture depend on the network of participating entities and the defined chain. This architecture can improve redundancy and the consistency of patient records, protecting patient access and privacy. Different structures can help make Blockchain solutions possible and resilient to attacks [6]. Blockchain technology can create a common good through which individuals can control their own data. Hospitals and health professionals will be able to have access to reliable and available data. In addition, the exchange of health data, so needed for health operations, can be improved [7].

2.2 Health Records Concepts

The Health Records referred to below present the data sets used to structure the different health information, such as the Electronic Medical Record (EMR), Electronic Health Record (EHR), Electronic Patient Record (EPR), and Personal Health Record (PHR). The EMR can be configured for a department in a hospital, or as a collection of medical information for patients in all departments in a specific location of the hospital. Exceptionally, it can extend beyond a hospital and, if it does, it is usually for an organization that has multiple locations, and not for hospitals that belong to different organizations [8]. Electronic Medical Records (EMRs) contain critical health information and often need to be shared among peers. The Blockchain provides a universal, immutable, and transparent chronology of transactions, enabling applications with confidence, accountability, and transparency, which allows a data management system for the safe and reliable sharing of EMRs. Security analysis indicates that DPEM (decentralized EMR management with privacy preservation) is an optimized way to achieve secure sharing of EMR data, which can guarantee accountability and integrity [9]. This proposal has a four-layer structure for DPEM that consists of data preparation, access and security control, data sharing, and data storage. The EMR design is created by suppliers for specific interactions with hospitals and outpatient settings and can serve as a data source for an EHR [10]. Table 1 shows the comparison between these sets of data, with examples. The EHR can be described as the concept of a longitudinal electronic collection of health and patient care from birth to death; it brings together several configurations of information and services performed in different systems, grouping the data and presenting them in a single record [10]. The EHR stores data associated with each patient interaction, including demographic information, laboratory tests and results, diagnostics, prescriptions, videos, images, clinical notes, and other information [11]. The implementation of EHRs [12] involves three groups of concerns: content (uniformity,


compatibility, and interoperability); interoperability (translation modules, domains, and version control); and security and confidentiality (authentication, accountability, integrity, and data encryption). The EPR includes information from an electronic medical record, usually entered through the patient’s encounters with healthcare providers, with information about the patient’s health and care data [13]. The main challenges in implementing the EPR are ensuring the involvement of the medical staff, the difficulties posed by regulation, government, and public policies, and limited financial and human resources. The PHR is a set of information relevant to the patient’s health and can involve contact information for the patient and their family, a list of providers involved in patient care, a list of diagnoses, a list of medications, a list of allergies, the history of immunizations, laboratory results and family medical history. PHRs support patient-centered health care by making medical records and other relevant information accessible to patients, thereby assisting patients in health self-management [14]. To analyze the information in patient and health records, the literature presents various data sets, such as the Electronic Medical Record (EMR), Electronic Health Record (EHR), Electronic Patient Record (EPR), and Personal Health Record (PHR). The concepts underlying these records, presented in Table 1, allow the consistent grouping of the data fields, enabling the segmentation of the information with the granularity needed to apply permissions and record the access history.

3 Related Work

The Advanced Blockchain (ABC) architecture for health systems presented in [16] is described as a solution that provides a reliable mechanism for the safe and efficient exchange of medical records. Blockchain-based smart contracts make it possible to address health interoperability and trust issues between entities and persons. Interactions between patients and medical applications can become more effective, with patient data shared securely across organizations and devices, improving the overall efficiency of medical practice workflows [17]. The system in [18] is a Blockchain-based technology with a decentralized and openly extensible network. A new approach to an integrated health information model is tested, based on a complex and highly connected EHR, which solves the problem of data access without endangering personal privacy and enhances the use of individual monitoring devices and the involvement of new service providers. In [19], a Blockchain perspective on health data management is presented for sharing EMR data between healthcare providers and research studies. The case, applied to the care of cancer patients, seeks to reduce the response time and total cost of EMR sharing and to improve healthcare decision-making. The work [20] states that the interoperability of medical data between institutions follows three models: push, pull, and view.

Table 1 Comparison among EMR, EHR, EPR, and PHR (adapted from Table 2.1 of [15])

Audience. EMR: the practice; EHR: multiple practices; EPR: single person; PHR: single person.
Purpose. EHR: information sharing; EPR: personal health management; PHR: personal health record management, not containing the lifetime record and not including dental, behavioural, or alternative care; focuses on relevant information.
Major objective. PHR: the system is patient-centered.
Access and maintenance. EMR: health practitioners or related staff can gain access; EHR: health practitioners, associated staff in the health facilities, and government staff can gain access; EPR: access only after obtaining permission from the record owner; PHR: access only after obtaining permission from the record owner.
Scope. EMR: single health facility; EHR: multiple health facilities; EPR: single health facility and individuals; PHR: single health facility and individuals.
Resource. EMR: medical record; EHR: medical record and public health record; EPR: medical history and personal health record; PHR: medical record and own health record.
Maintainer. EMR: health facility; EHR: public health department; EPR: patient or authorized individual (to support treatment within an organization); PHR: patient or authorized individual.
Control. EMR: managerial process control on the medical domain; EHR: government; EPR: individual; PHR: individual.
Content. EMR: demographics, history, allergies, exams, diagnoses, clinical charts; EHR: medical information, clinical charts; EPR: medical information; PHR: medical information.
Types/Examples. EMR: e-prescribing to pharmacies (prescriptions and laboratory tests), patient-provider integration (patient communications), multi-practice integration (cross-practice communications), patient health outcomes (evidence-based patient tracking); EHR: prescriptions and laboratory tests, patient communications, evidence-based patient tracking; EPR: fully electronic support; PHR: collects and manages both health and disease data.


Blockchain technology offers a fourth model that enables the secure sharing of medical records, with tools for the cryptographic assurance of data integrity, standardized auditing, and formalized “contracts” for data access. The management of access to e-Health data with Blockchain technology is illustrated in [21] with electronic medical records (EMR) and the exchange of real-time data from on-body sensors of different patients, in a secure, scalable solution for the transfer of medical data with the best possible performance. In [22], to ensure the validity of EHRs encapsulated in the Blockchain, a multiple-authority attribute-based signature scheme is presented, in which a patient endorses a message according to an attribute, with evidence that he has attested to it. The work in [23] describes Blockchain-based access control management for health records. Interoperability is also a critical component of the infrastructure needed to support patient-centered care. The design of a Blockchain platform for clinical trials and precision medicine is proposed in [24], which provides some ideas about technology requirements and challenges. MedRec [25] is a proposal that uses Blockchain for medical data access and permissions management, which enables design changes in electronic health records (EHRs). The MeDShare proposal [26] is intended to be a system for exchanging medical information among physicians, with the application of Blockchain technology. This system utilizes smart contracts and an access control mechanism to control the behavior of the data and to revoke access to entities that attempt to violate the data permissions. The system proposed in [27] supports healthcare models by creating immutable patient records with a Modified Merkle Tree data structure, updating medical records, exchanging health information between different providers, and viewing contracts on the blockchain network. The system in [28] keeps the patient’s personal information off-chain and prevents data falsification by storing encrypted data on-chain. Patients can track the history of their consent data and save it in the consent system for data exchange in the blockchain. Study participants in [28] refer to a blockchain-based PHR as a new way to securely store and exchange their medical information.

4 Methodology

The overall research approach focuses on design science research (DSR) [29], with the use of research methodologies based on conceptual models [30] and on prototyping projects based in design science [31]. IS research on blockchain solutions is still at an early stage and focuses mainly on the analysis of use cases and the design of proof-of-concept prototypes. It always seeks to understand the real potential of the blockchain.


The process is defined in six stages: the identification of the problem and motivation, the definition of objectives, the design and development, the demonstration, the evaluation, and the communication [32]. The artifact will be validated in testing, using the MIMIC III and IV datasets (Medical Information Mart for Intensive Care), in a research environment. In addition, it can be applied in a case study involving several health entities.

5 Proposed Trust Model

The proposal defines a trust model supported by blockchain technology that simultaneously ensures data security and the privacy of the patients and stakeholders involved in the creation and management of health data. The architecture must allow managing system rules for service providers, applications, data, and users. These rules comprise [33] operation rules in the transactional context, trust structures in the context of identity federation, or partnership agreements in the context of the supply value chain. We propose the Trust Model to organize the global eHSP, subdivided into ecosystems and application domains, supported by generic data models, with various types of health records, legacy data, and IoT data. The eHSP is the territory intended to be structured. eHSPs of the same kind are grouped in ecosystems (hospitals, clinics, pharmacies, and so on). Each ecosystem has various application domains (management, security, privacy, e-Health, and so on) that are structured in generic data models, depending on the domain applications. The model has five tiered levels. The data are aggregated and summarized to provide information to the upper level. The information must be necessary and sufficient and ensure the management of the lower level [19]. This model adds interoperability between the ecosystems, which provide an interface based on the type of e-Health Records. This aspect is critical to perform data management with the level of detail needed to implement the control of patients’ data. The characteristics of the e-Health Records allow the data flow to be consistent and granular, so that the shared data can be segmented with the right protections and permissions. This segmentation also determines the type of Blockchain application, as shown in Table 2, in the e-Health Smart Place context (see Table 3). The model was revised to make the system patient-centric with full data management, with control of permissions and consent, a view of the historical accesses, and reuse of the e-Health data; a minimal illustration of this grouping is sketched below. Figure 1 describes the proposed data flow. The data of each patient, in each eHSP, need to be grouped into the four types of Health Records, with specific rules of data segmentation, security, and privacy. Figure 2 summarizes the Context of Blockchain Application, the supporting architecture, and the relations established in an e-Health Smart Place (eHSP). Table 2 shows the six types of Blockchain application proposed in [33], adapted for e-Health applications.
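To make the grouping of patient data into the four record types and the patient-controlled permissions more tangible, the sketch below models a minimal, hypothetical data structure for an eHSP. The record types and the idea of per-segment consent follow the model described above; the class and field names are illustrative assumptions, not the proposed implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Set


class RecordType(Enum):
    """The four health-record groupings used in the trust model."""
    EMR = "Electronic Medical Record"
    EHR = "Electronic Health Record"
    EPR = "Electronic Patient Record"
    PHR = "Personal Health Record"


@dataclass
class DataSegment:
    """A granular group of fields belonging to one record type."""
    record_type: RecordType
    fields: Dict[str, str]                                   # e.g. {"allergies": "penicillin"}
    allowed_actors: Set[str] = field(default_factory=set)    # actors granted access by the patient


@dataclass
class PatientDossier:
    """Patient-centric view of all data segments held by one eHSP."""
    patient_id: str
    segments: List[DataSegment] = field(default_factory=list)
    access_log: List[str] = field(default_factory=list)      # history the patient can review

    def grant(self, record_type: RecordType, actor: str) -> None:
        """Patient grants an actor access to all segments of one record type."""
        for seg in self.segments:
            if seg.record_type is record_type:
                seg.allowed_actors.add(actor)

    def read(self, record_type: RecordType, actor: str) -> Dict[str, str]:
        """Return the segment fields only if the actor has consent; log the access."""
        for seg in self.segments:
            if seg.record_type is record_type and actor in seg.allowed_actors:
                self.access_log.append(f"{actor} read {record_type.name}")
                return seg.fields
        raise PermissionError(f"{actor} has no consent for {record_type.name}")
```

In the full model, such grants and reads would be recorded as blockchain transactions rather than in an in-memory log, as discussed for the application types below.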

Table 2 Blockchain application types (adapted from Table 1 of [33])

Type   e-Health application
BC1    Secure transactions
BC2    Data security
BC3    Data flow control
BC4    Accepted devices
BC5    Version control
BC6    Systems security

Table 3 eHSP in the Blockchain application context (adapted from Table 2 of [33])

e-Health's smart place (eHSP) context    BC1   BC2   BC3   BC4   BC5   BC6
Level 0  e-Health's smart place           X                             X
Level 1  Ecosystems                       X           X                 X
Level 2  Application domain               X           X                 X
Level 3  Data models                      X     X     X           X     X
Level 4  IoT / mobile devices             X     X           X     X     X

Fig. 1 Dataflow diagram in e-health’s smart places (eHSP)


Fig. 2 The context of blockchain application in e-health's smart places (eHSP)

Systems Security (BC6) uses Blockchain to ensure trust in the systems’ configurations, such as the cloud servers, the IoT devices, and the mobile devices. Version Control (BC5) uses Blockchain to guarantee the compatibility of system versions across the cloud servers, the IoT devices, and the mobile devices. Accepted Devices (BC4) uses Blockchain to guarantee that accepted nodes have a valid organization identity, established through a smart contract of accession. Data Flow Control (BC3) uses Blockchain to manage the data exchanged between organizations and to respect the rules associated with each type of data, and it enables patients to access their information and its access history. Data Security (BC2) uses Blockchain to guarantee that access complies with the default permissions, consistent with the actors involved and with the actions of the patient. Secure Transactions (BC1) uses Blockchain to guarantee immutable, distributed data transactions with the exchange of consistent and incorruptible records. Table 3 shows how the five levels map to these application types for e-Health. The Trust Model proposed for the data flow in the e-Health ecosystem (Fig. 1) has five hierarchical levels (Table 3), with interoperability based on standards as a critical aspect. The Context of Blockchain Application (Fig. 2) presents the architecture for applying blockchain in the eHSP, using the six Blockchain application types (Table 2). The e-Health data respect the structure of the Health Records and permit defining the data’s granularity, applying the rules of privacy and security established by the patient, and the policies for sharing between eHSPs. Our study and the model presented here contribute to the resolution of the identified problem, as well as of many related ones, with the objective of allowing the secure exchange of groups of data, based on blockchain technology, between entities, and between users and the entities’ systems, ensuring the security and privacy of sensitive personal data.
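A minimal sketch of the BC1/BC3 idea, hashing each consent or access event into a tamper-evident chain so that retroactive edits become detectable, is given below. It is a toy illustration of the immutability argument, not the proposed eHSP implementation; the event fields and class names are assumptions for the example.

```python
import hashlib
import json
import time
from typing import Dict, List


def _hash_block(block: Dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()


class ConsentChain:
    """Toy append-only chain of consent and access events for one patient."""

    def __init__(self) -> None:
        genesis = {"index": 0, "timestamp": time.time(), "event": "genesis", "prev_hash": "0" * 64}
        self.blocks: List[Dict] = [genesis]

    def record_event(self, actor: str, record_type: str, action: str) -> Dict:
        """Append a new event block linked to the hash of the previous block."""
        prev = self.blocks[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "event": {"actor": actor, "record_type": record_type, "action": action},
            "prev_hash": _hash_block(prev),
        }
        self.blocks.append(block)
        return block

    def is_intact(self) -> bool:
        """Any retroactive edit breaks a hash link and is detected here."""
        return all(
            self.blocks[i]["prev_hash"] == _hash_block(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )


chain = ConsentChain()
chain.record_event("hospital_A", "EMR", "consent_granted")
chain.record_event("hospital_A", "EMR", "read")
print("chain intact:", chain.is_intact())                     # True
chain.blocks[1]["event"]["action"] = "consent_revoked"        # simulated tampering
print("chain intact after tampering:", chain.is_intact())     # False
```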

6 Conclusion

The application of Blockchain technology in health seems to lead to an expectation of focusing all systems on the patient, as the start and end point of activities in health and medicine. This paradigm is central and allows the application of Blockchain technology to ensure confidence in the patient’s relationships with the various e-Health Smart Places (eHSP), and in the management of access and consent in the process of interaction with the healthcare ecosystem from different perspectives. The surrounding standardization and definition of standards should facilitate the implementation of Blockchain technology. Clarification of the various electronic health and medical records facilitates the sharing of data, controlled and centered on the explicit consent of the patient, with interoperability of the shareable data between health organizations. The data model proposed in this work reflects the systematization of e-Health Smart Places (eHSP) with data structures suitable for the purposes for which they are intended. The definition of data groups facilitates the treatment of consents and accesses according to the type of data, especially sensitive personal data. Blockchain technology is characterized in the model by the type of application (six types are defined) within the context of the smart place, which provides five levels of aggregation. The flow of data and the definition of rules by the user make it possible to guarantee the exchange of data, the safety between the entities, and respect for the privacy of personal data and sensitive personal data. In conclusion, the review conducted is feasible and sustainable and reveals that the research path consolidates this patient-centered approach with groups of data standards (EMR, EHR, EPR, and PHR) supported by Blockchain. These paths are articulated through secure transactions, smart contracts, data security, data flow control, accepted devices, version control, and systems security. Future work will consolidate the use of Blockchain technology in patient-centered healthcare. A wide range of opportunities opens up to solve the still open and ongoing problems of guaranteeing trust between nodes in the Blockchain network, optimizing space management, and reducing the time to complete transactions.


References 1. Otto L, Harst L, Schlieter H, Wollschlaeger B, Richter P, Timpel P (2018) Towards a unified understanding of ehealth and related terms – proposal of a consolidated terminological basis: In: Proceedings of the 11th international joint conference on biomedical engineering systems and technologies, Funchal, Madeira, Portugal. SCITEPRESS - Science and Technology Publications, pp 533–539. https://doi.org/10.5220/0006651005330539 2. Hasselgren A, Kralevska K, Gligoroski D, Pedersen SA, Faxvaag A (2020) Blockchain in healthcare and health sciences—a scoping review. Int J Med Inf 134:104040. https://doi.org/ 10.1016/j.ijmedinf.2019.104040 3. Shi S, He D, Li L, Kumar N, Khan MK, Choo K-KR (2020) Applications of blockchain in ensuring the security and privacy of electronic health record systems: a survey. Comput Secur 101966 4. Kuo T-T, Kim H-E, Ohno-Machado L (2017) Blockchain distributed ledger technologies for biomedical and health care applications. J Am Med Inform Assoc 24:1211–1220. https://doi. org/10.1093/jamia/ocx068 5. Engelhardt MA (2017) Hitching healthcare to the chain: an introduction to blockchain technology in the healthcare sector. Technol Innov Manag Rev 7:22–34 https://doi.org/10.22215/ timreview/1111. 6. Faafp TFH (2017) Why blockchain technology is important for healthcare professionals 4 7. Mertz L (2018) (Block) chain reaction: a blockchain revolution sweeps into health care, offering the possibility for a much-needed data solution. IEEE Pulse 9:4–7. https://doi.org/10.1109/ MPUL.2018.2814879 8. Habib JL (2010) EHRs, meaningful use, and a model EMR. Drug Benefit Trends 22:99–101 9. Naresh VS, Reddi S, Allavarpu VD (2021) Blockchain-based patient centric health care communication system. Int J Commun Syst 34:e4749 10. Kierkegaard P (2011) Electronic health record: wiring Europe’s healthcare. Comput Law Secur Rev 27:503–515. https://doi.org/10.1016/j.clsr.2011.07.013 11. Shickel B, Tighe PJ, Bihorac A, Rashidi P (2018) Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform 22:1589–1604. https://doi.org/10.1109/JBHI.2017.2767063 12. Waegemann CP (2003) Ehr vs. cpr vs. emr. Healthc Inform Online 1:1–4 13. Personal Health Records: What health care providers need to know. https://www.healthit.gov/ sites/default/files/factsheets/about-phrs-for-providers.pdf 14. Archer N, Fevrier-Thomas U, Lokker C, McKibbon KA, Straus SE (2011) Personal health records: a scoping review. J Am Med Inform Assoc 18:515–522. https://doi.org/10.1136/ami ajnl-2011-000105 15. Fu L (2015) The value of integrated information systems for US general hospitals 16. Liu DW, Zhu MS, Mundie DT, Krieger UR (2017) Advanced block-chain architecture for e-health systems 6 17. Zhang P, White J, Schmidt DC, Lenz G (2017) Applying software patterns to address interoperability in blockchain-based healthcare apps. ArXiv170603700 Cs 18. Yuan B, Lin W, McDonnell C (2016) Blockchains and electronic health records 23 19. Dubovitskaya A, Xu Z, Ryu S, Schumacher M, Wang F (2017) Secure and trustable electronic medical records sharing using blockchain 10 20. Halamka JD, Lippman A, Ekblaw A (2017) The potential for blockchain to transform electronic health records 5 21. Rifi N, Rachkidi E, Agoulmine N, Taher NC (2017) Towards using blockchain technology for eHealth data access management. In: 2017 fourth international conference on advances in biomedical engineering (ICABME), Beirut. IEEE, pp 1–4. https://doi.org/10.1109/ICABME. 
2017.8167555 22. Guo R, Shi H, Zhao Q, Zheng D (2018) Secure attribute-based signature scheme with multiple authorities for blockchain in electronic health records systems. IEEE Access. 6:11676–11686. https://doi.org/10.1109/ACCESS.2018.2801266


23. Linn LA, Koo MB (2016) Blockchain for health data and its potential use in health it and health care related research. In: ONC/NIST use of blockchain for healthcare and research workshop, Gaithersburg, Maryland, United States, ONC/NIST, pp 1–10. sn 24. Shae Z, Tsai JJP (2017) On the design of a blockchain platform for clinical trial and precision medicine. In: 2017 IEEE 37th international conference on distributed computing systems (ICDCS), Atlanta, GA, USA, pp 1972–1980. IEEE. https://doi.org/10.1109/ICDCS.2017.61 25. Azaria A, Ekblaw A, Vieira T, Lippman A (2016) MedRec: using blockchain for medical data access and permission management. In: 2016 2nd international conference on open and big data (OBD), Vienna, Austria, pp 25–30. IEEE. https://doi.org/10.1109/OBD.2016.11 26. Xia Q, Sifah EB, Asamoah KO, Gao J, Du X, Guizani M (2017) MeDShare: trust-less medical data sharing among cloud service providers via blockchain. IEEE Access. 5:14757–14767. https://doi.org/10.1109/ACCESS.2017.2730843 27. Chelladurai U, Pandian S (2022) A novel blockchain based electronic health record automation system for healthcare. J Ambient Intell Humaniz Comput 13:693–703 28. Kim JW, Kim SJ, Cha WC, Kim T (2022) A blockchain-applied personal health record application: development and user experience. Appl Sci 12:1847 29. Gregor S, Hevner AR (2013) Positioning and presenting design science research for maximum impact. MIS Q. 37:337–355. https://doi.org/10.25300/MISQ/2013/37.2.01 30. Johnson J, Henderson A (2011) Conceptual models: core to good design. Synth Lect Hum Centered Inform 4:1–110. https://doi.org/10.2200/S00391ED1V01Y201111HCI012 31. Beck R, Stenum Czepluch J, Lollike N, Malone S (2016) Blockchain–the gateway to trust-free cryptographic transactions 32. Peffers K, Tuunanen T, Rothenberger MA, Chatterjee S (2007) A design science research methodology for information systems research. J Manag Inf Syst 24:45–77. https://doi.org/10. 2753/MIS0742-1222240302 33. Brandão A, Mamede HS, Gonçalves R (2018) A smart city’s model secured by blockchain. In: Mejia J, Muñoz M, Rocha Á, Peña A, Pérez-Cisneros M (eds) Trends and applications in software engineering. Springer, Cham, pp 249–260. https://doi.org/10.1007/978-3-030-011710_23

Digital Transformation and Adoption of Electronic Health Records: Critical Success Factors Luis E. Mendoza , Lornel Rivas , and Cristhian Ganvini

Abstract The adoption of electronic health records (EHRs) implies the integration of policies, resources, and infrastructure, and the joint work of health professionals and information technology (IT) specialists. In Latin America and the Caribbean (LAC), the centralization of health services, geographic dispersion, and social conditions are aspects that require attention to needs and priorities when identifying digital health service strategies, such as the adoption of EHRs. In the context of digital transformation (DX), this work proposes a set of critical success factors (CSFs), supported by guiding questions and metrics, which facilitate the identification of alternatives to reduce the gaps in the adoption of EHRs. The CSFs can facilitate the planning of projects for the implementation of EHRs, based on the strengths involved in DX. Finally, opportunities to evaluate and apply the CSFs in the context of the Cusco regional health network are set out. Keywords Electronic Health Record · Critical Success Factor · Adoption · Digital Transformation

L. E. Mendoza (B) Facultad de Ingeniería en Electricidad y Computación, ESPOL Polytechnic University, Escuela Superior Politécnica del Litoral, ESPOL, Campus Gustavo Galindo Km. 30.5 Vía Perimetral, P.O. Box 09-01-5863, Guayaquil, Ecuador e-mail: [email protected] L. Rivas · C. Ganvini Facultad de Ingeniería y Arquitectura, Departamento de Ingeniería de Sistemas, Urbanización Ingeniería Larapa Grande A-7, Cusco Andean University, Universidad Andina del Cusco, San Jerónimo, Cusco, Peru e-mail: [email protected] C. Ganvini e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_20


1 Introduction

Since the beginning of the twenty-first century, digital health has become a cultural transformation in which information technology (IT) has been shaping big changes in the fundamentals of healthcare, its management, and its provisioning [1, 2]. Healthcare has seen significant benefits from digital transformation (DX), with the adoption of new technologies helping to deliver secure, high-quality patient care and drive greater business efficiency [3]. With an important impact on treatment, the workforce, and hospital productivity, digital services such as electronic health records (EHR), digital imaging, e-prescription services, telehealth, monitoring equipment, web and cloud-based tools, and enterprise resource planning (ERP) systems have been integrated into the extensive IT platforms of many healthcare organizations. In [4], critical factors for the adoption of EHR systems by physicians are categorized into: user attitude towards information systems, workflow impact, interoperability, technical support, communication among users, and expert support. In [5], barriers to the adoption of EHRs are identified, such as cost, time consumption, user perception/perceived lack of usefulness, implementation issues, user/patient resistance, lack of technical assistance/experience, and interoperability/no standard protocols for data exchange. Finally, in [6], a set of parameters to evaluate the adoption of EHRs is defined: quality of care, diffusion, infusion, and satisfaction. The authors also identify some factors that affect the adoption of EHRs by physicians: flexibility, user interface, dose functionality, mobility, information quality, and ease of learning [6]. However, there are no works on the adoption of EHRs from the DX perspective. Studies with a systemic vision are needed to analyze the factors that must be considered to achieve an adoption of EHRs contextualized to the reality of the Latin America and the Caribbean (LAC) countries. This paper presents the alignment of six CSFs for evaluating the conditions of health services, accomplishing eHealth projects or programs in the LAC context, and identifying gaps in EHR adoption for decision-making support. The alignment of the CSFs is done through the proposal of guiding questions, which allow the formulation of metrics that facilitate the measurement of these gaps and the identification of alternatives to successfully achieve an EHR adoption initiative. It is expected that the CSFs will constitute a support guide for the project team responsible for the implementation of medical record systems in any health institution, regarding the aspects that must be considered during the whole process of EHR adoption, in articulation with health professionals and IT specialists. The work is structured as follows, in addition to this introduction. In Sect. 2, the background for DX and the adoption of EHRs is established. Section 3 describes the set of CSFs, their definition, guiding questions, and an example of metrics that allow them to be applied. Section 4 presents the basis for evaluating and applying the CSFs in a particular context. Section 5 closes with conclusions and future work.


2 Background

2.1 Digital Transformation and Electronic Health Records

As in other industries, investment in IT is essential for the provision of healthcare services; it demands transformation processes from organizations at all levels. DX, built on IT platforms, is becoming an integrated, multidimensional process. DX is characterized by a fusion of advanced technologies and the integration of physical and digital systems, the predominance of innovative business models and new processes, and the creation of smart products and services [7]. DX is not just about IT; it also involves issues such as management and leadership, stakeholder engagement, business strategies and models, data and information analytics capabilities, customer experience, and digital literacy [7, 8].

The models proposed by [7, 10, 11] address aspects relevant to DX processes which could potentially have an impact on EHR adoption: orientation towards digital transformation, organizational culture, self-motivation of employees, readiness to accept changes, staff knowledge and competence, existing (technological) infrastructure, governance, innovation, support for staff engagement in DX processes, leveraging external and internal knowledge, privacy, freedom of choice, and patient safety [2].

EHR is a widely used term, with variations in definitions and scope of coverage. It is accepted as a longitudinal medical record, accessible to healthcare professionals at multiple sites of care. The EHR includes all the information contained in a traditional medical record, including the patient's health profile and behavioral and environmental information. The EHR also includes the temporal dimension, which enables the inclusion of information across multiple events and providers [9]. It comprises documents, both textual and graphic, about the episodes of health and illness experienced by patients, and the respective healthcare activities performed.

2.2 Latin America and the Caribbean, and Barriers to EHR Adoption

While implementing EHR systems, organizations seek to accomplish objectives such as improved patient safety and physician efficiency, better information for decision-making [8], and increased accuracy and reliability of medical data. Despite the benefits of EHRs and government stimulation of their adoption, healthcare organizations face great difficulties when trying to implement and adopt these systems, and about 30% of EHR implementations fail [6]. One of the main reasons for failure is the lack of EHR adoption by health professionals. Infrastructure needs are a common challenge, as are the interoperability and scalability needs of existing systems [12]. The deficit of adequate infrastructure, demographic changes, modernization needs in management and technology, geographic barriers, and the lack of health professionals, among others [2], are situations that may imply barriers to access to health services in LAC.

In general, the resistance of some health professionals to the change from manual to electronic documentation can be a major problem for IT-for-eHealth implementation initiatives, especially in middle- to low-income countries. The problems currently identified in healthcare documentation, as well as privacy and confidentiality issues, must be addressed and quality control measures introduced before a successful change can be implemented [9]. Although progress has been made in LAC countries towards the establishment of EHR policies, the obstacles to their adoption are significant. In countries like Ecuador and Peru, for instance, the centralization of health services, geographic dispersion, and social conditions are aspects that need attention and prioritization when identifying eHealth intervention strategies. The adoption of EHRs involves the integration of policies, resources, and infrastructure, and the joint work of IT specialists and health professionals. Costs, the available technology, the lack of technical and computer skills of staff, and the lack of data processing facilities still constitute barriers that must be recognized and addressed before the implementation of EHRs.

In this context, if the identified problems are not addressed and remedied before introducing an EHR system, merely automating the content and procedures of the medical record may perpetuate the deficiencies and fail to meet the organization's EHR objectives. Thus, taking on a DX process may require bringing about changes in communication with patients and providers and, in general, determining the conditions of the health institution's situation, considering the use of indicators for self-assessment. Determining, on this basis, opportunities for improvement that promote the success of EHR adoption is the main motivation of this work.

3 Proposal

3.1 Critical Success Factors (CSFs) for Evaluating the Conditions for Health Services

In [13], a set of critical success factors (CSFs) was established to evaluate organizational conditions for undertaking eHealth services and thus facilitate the identification of gaps in the achievement of the proposed objectives. The definition of the CSFs therefore refers to the identification of gaps in the achievement of an eHealth service carried out within an institution, whether in the public or the private sector. The six CSFs are: Health systems, Information systems applied to health, Technological infrastructure, Perspective of the involved actors, Data and information, and Management. The next section presents the conceptual definition of each CSF from the EHR adoption perspective.


3.2 Configuration of the CSFs for EHR Adoption

Tables 1, 2, 3, 4, 5 and 6 present the conceptual definition of each CSF, as well as a set of questions for each one, which guide the definition of the metrics that allow measuring the presence of the CSF in a project or initiative for EHR adoption.

Table 1 Conceptual definition of the CSF: Health systems

Conceptual definition: Aspects that guarantee the monitoring, tracking, and dissemination of information about patient health services, including access to services, data, and patient health information, both at rural and urban levels. It covers personal, self-care (clinical and informal) and administrative services data. It includes achieving the optimum level of quality of care, without negative impacts on costs and following the standards of care processes and procedures.

Questions from the EHR adoption perspective:
Q1. How satisfied are users with the service provided, both in medical care and in the administrative services that this involves?
Q2. Are medical records currently kept on all patients?
Q3. How are patients identified?
Q4. Are medical records well documented?
Q5. What would the proposed EHR system cover?
Q6. Is medical records documentation based on standards?

Table 2 Conceptual definition of the CSF: Information systems applied to health

Conceptual definition: Elements that guarantee the quality of the information systems in health centers. They encompass user-centered design, scalability, and interoperability for exchanging information between systems, as well as their interpretation and use. It considers the participation of users during the adoption of information systems and the satisfaction of patients and health professionals. It comprises criteria of relevance and opportunity for the conversion of data into information to support decision-making.

Questions from the EHR adoption perspective:
Q1. Is there currently support from existing information systems for EHR automation?
Q2. Do the existing information systems have the expected quality?
Q3. How satisfied are the users with the existing information systems?
Q4. What is the level of integration of existing systems that support health services?
Q5. What is the level of participation of health professionals during the development, implementation, and/or monitoring of information systems for health?


Table 3 Conceptual definition of the CSF: Technological infrastructure

Conceptual definition: Operating conditions of the IT platform (hardware, software, and services) that supports the operation of health centers. It emphasizes the use of standards that allow compatibility with the state's IT platform, according to the aspects of integration, connectivity, bandwidth, and coverage of IT services. It comprises computer equipment (hardware), base software, tools, networks, communications, and information services.

Questions from the EHR adoption perspective:
Q1. Do the conditions of the technological infrastructure facilitate the use and administration of the systems for its different users?
Q2. What type and size of computers are required to meet the needs within the available funds?
Q3. What technical support is needed?
Q4. Will the current telecommunication system meet the identified needs?
Q5. Have the environmental requirements for the computer equipment been identified?

Table 4 Conceptual definition of the CSF: Perspective of the involved actors

Conceptual definition: Participation and integration of the stakeholders of the eHealth services initiative; mainly, health professionals and patients. It covers communication, opportunities for learning, and digital literacy, considering the multidisciplinarity of the different profiles of those involved. It also considers the use of language for understanding and effective communication among those involved.

Questions from the EHR adoption perspective:
Q1. Would expert information technology and health information management advice be readily available?
Q2. Has resistance to computer technology or a lack of computer literacy among health professionals been identified?
Q3. Have concerns among providers been identified?
Q4. Have concerns raised by healthcare professionals, patients, and the general community about the privacy, confidentiality, quality and accuracy of electronically generated information been identified?
Q5. What is the level of involvement of clinicians and hospital administrators?


Table 5 Conceptual definition of the CSF: Data and information

Conceptual definition: It comprises the collection, management, and use of health data, the incorporation of data in its different formats, trust in data and information, as well as its value and meaning. It involves monitoring and information dissemination mechanisms. It includes the use of standards for the classification and use of terminologies of medical procedures, considering policies for the use of health information, which include the security, privacy, and confidentiality of the information.

Questions from the EHR adoption perspective:
Q1. What is the level of quality of the existing health data and information?
Q2. Has a lack of standard terminology been identified?
Q3. Is the information readily accessible and available?
Q4. Are there policies that guide users on the use and treatment of data?
Q5. How is the security and privacy of patient data ensured?

Table 6 Conceptual definition of the CSF: Management

Conceptual definition: It involves support for the project by governing bodies, the existence of budgets assigned for plans and projects, the use of eHealth as a way of disseminating health policies, consideration of the legal framework for eHealth, and the existence of previous initiatives that serve as a reference for eHealth projects.

Questions from the EHR adoption perspective:
Q1. How is information released for medico-legal purposes?
Q2. Does the health service have a record retention policy?
Q3. Does the health service have a policy on the release of information from personal health records?
Q4. Does the health service have a policy on patient access to their healthcare information?
Q5. What do those involved see as possible and achievable given the available funding?


3.3 Operationalization of the CSFs

With the conceptual definitions and guiding questions of the CSFs as a frame of reference, metrics are established at different measurement scales. The metrics allow an adequate characterization of the conditions that serve as a reference for EHR adoption initiatives in a DX context. As an example, Tables 7, 8 and 9 show the sets of metrics for the CSFs Information systems applied to health, Technological infrastructure, and Data and information, respectively.

Table 7 Example of metrics formulation for the Information systems applied to health CSF (ID | Metric | Min | Max | Formulation)
1 | Health professionals' perception of the usefulness of EHRs | 0 | 4 | 4 = Very useful; 3 = Useful; 2 = Not very useful; 1 = Nothing useful
2 | Health professionals' confidence regarding the quality of medical information they handle | 0 | 2 | 2 = A lot of confidence; 1 = Little confidence; 0 = Don't Know/Not measured
3 | Level of availability of patients' clinical information to health professionals | 0 | 3 | 3 = Available; 2 = Occasionally available; 1 = Not available; 0 = Don't Know/Not measured
4 | Existence of a computerized Patient Master Index | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
5 | Existence of a computerized patient administration system | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
6 | Existence of a computerized pathology and radiology reporting system | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
7 | Existence of computerized healthcare statistics reports | 0 | 2 | 2 = There are; 1 = Do not exist; 0 = Don't Know/Not measured
8 | Existence of a computerized medical record tracking system | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
9 | Level of integration of current systems for health services | 0 | 5 | 5 = More than 75%; 4 = Between 50 and 75%; 3 = Between 25 and 49.9%; 2 = Less than 25%; 1 = Not integrated
10 | Existence of expert support on the use of current health services systems | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
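The paper does not prescribe how the individual metric values should be combined into a single measure of the gap for a CSF. Purely as an illustration, the following sketch (in Python, with hypothetical function and variable names) assumes a simple normalized score: the sum of the observed values divided by the sum of the maxima defined in Table 7.

```python
# Illustrative only: the aggregation rule below (observed sum over maximum attainable
# sum) is an assumption of this sketch, not part of the published CSF proposal.

def csf_coverage(observed: dict, maxima: dict) -> float:
    """Fraction of the maximum attainable score reached by one CSF."""
    total_max = sum(maxima.values())
    total_obs = sum(observed.get(metric_id, 0) for metric_id in maxima)
    return total_obs / total_max if total_max else 0.0

# Maxima taken from Table 7 (Information systems applied to health).
table7_max = {1: 4, 2: 2, 3: 3, 4: 2, 5: 2, 6: 2, 7: 2, 8: 2, 9: 5, 10: 2}
# Hypothetical answers gathered for one health institution.
answers = {1: 3, 2: 1, 3: 2, 4: 2, 5: 1, 6: 0, 7: 2, 8: 1, 9: 2, 10: 1}

print(f"Coverage: {csf_coverage(answers, table7_max):.0%}")  # prints "Coverage: 58%"
```

Under this reading, a low coverage value would point to a gap in that CSF that the adoption team needs to address.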


Table 8 Example of metrics formulation for the Technological infrastructure CSF (ID | Metric | Min | Max | Formulation)
1 | Operational capacity of computer equipment for the operation of EHR services | 0 | 4 | 4 = Full capacity; 3 = Medium capacity; 2 = Low capacity; 1 = Insufficient capacity; 0 = Don't Know/Not measured
2 | Operational capacity of base software (operating system, development tools, database managers) for the operation of EHR services | 0 | 5 | 5 = Has 100% of the base software; 4 = Has 75% of the base software; 3 = Has 50% of the base software; 2 = Only has a base software; 1 = Does not have base software; 0 = Don't Know/Not measured
3 | Existence of technical IT support policies | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
4 | User satisfaction with current IT technical support | 0 | 5 | 5 = Fully satisfied; 4 = Very satisfied; 3 = Moderately satisfied; 2 = Not very satisfied; 1 = Not satisfied; 0 = Don't Know/Not measured
5 | Level of satisfaction of users with current IT services | 0 | 5 | 5 = Fully satisfied; 4 = Very satisfied; 3 = Moderately satisfied; 2 = Not very satisfied; 1 = Not satisfied; 0 = Don't Know/Not measured
6 | Existence of informatics security infrastructure | 0 | 2 | 2 = There is; 1 = Does not exist; 0 = Don't Know/Not measured
7 | In case of having internet service, perception of the quality of service | 0 | 5 | 5 = Very good; 4 = Good; 3 = Regular; 2 = Bad; 1 = Do not have service; 0 = Don't Know/Not measured
8 | Existence of identified technical support needs for EHR services | 0 | 2 | 2 = There are; 1 = Does not exist; 0 = Don't Know/Not measured
9 | Existence of identified telecommunication capabilities needs for EHR services | 0 | 2 | 2 = There are; 1 = Does not exist; 0 = Don't Know/Not measured
10 | Technological infrastructure has certifications in terms of design, structure, performance, and reliability | 0 | 5 | 5 = Has 100% of certifications; 4 = Has 70% of certifications; 3 = Has 50% of the certifications; 2 = Only has one certification; 1 = Does not have certifications; 0 = Don't Know/Not measured

4 Towards the Evaluation and Application of the CSFs

For the evaluation of the CSFs, the use of the Feature Analysis Evaluation Method [14] is contemplated. This method is based on a list of answers about the existence or not of a certain characteristic in a specific tool, method, or project. The method involves establishing the significant characteristics to be evaluated and facilitates the identification of the differences between the factors. Its application comprises the following steps: (a) identification of the characteristics; (b) definition of the degree of importance; and (c) definition of the level of conformity. Regarding the identification of characteristics, some were established for the CSFs and others for their metrics, as follows: (a) characteristics for each CSF: relevance, completeness, and context independence; and (b) characteristics for each metric: relevance, range, feasibility, and level of depth. Subsequently, a questionnaire will be sent to each of the stakeholders involved in each project to evaluate the characteristics of the CSFs and their metrics, respectively.

4.1 Case Study

To evaluate the CSFs, their application is planned through a case study of a real project. The factors will be evaluated through their use at the level of the Cusco regional health network, Peru, for which the necessary institutional support is available. In this case, the situation for CSF use is the moment when an EHR implementation strategy is being defined; reviewing the application of the CSFs then makes it easier to reach important considerations for the EHR adoption project. The case study will support institutional processes of decentralization in the administration of health services. The application of the factors in this context has the potential to enhance regional empowerment in terms of the quality of healthcare services.


Table 9 Example of metrics formulation for the Data and information CSF (ID | Metric | Min | Max | Formulation)
1 | Quality of the content of the existing health data and information: (a) it can be easily accessed; (b) it is reliable; (c) it is obtained at the appropriate time; (d) it is useful | 0 | 5 | 5 = All alternatives are met; 4 = Three out of the four alternatives are met; 3 = Two out of the four alternatives are met; 2 = One out of the four alternatives is met; 1 = None of the alternatives are met; 0 = Don't Know/Not measured
2 | Percentage of completeness of patient information in the medical record | 0 | 4 | 4 = Fully complete; 3 = Up to 70% complete; 2 = Up to 50% complete; 1 = Up to 25% complete; 0 = Don't Know/Not measured
3 | Extent to which the terminology used in medical records meets medical standards | 0 | 4 | 4 = Totally; 3 = In high degree; 2 = Moderately; 1 = Little; 0 = Don't Know/Not measured
4 | Extent to which the organization and classification of medical records meets medical standards | 0 | 4 | 4 = Totally; 3 = In high degree; 2 = Moderately; 1 = Little; 0 = Don't Know/Not measured
5 | Presence of redundancy in patient medical records | 0 | 2 | 2 = It is present; 1 = Not present; 0 = Don't Know/Not measured
6 | Existence of policies or protocols about the use and treatment of health data | 0 | 2 | 2 = There are; 1 = Does not exist; 0 = Don't Know/Not measured


5 Conclusions

This work presents the alignment of six CSFs for evaluating the conditions of health services for EHR adoption, based on the strengths involved in DX. With the support of guiding questions and metrics, the CSFs allow the identification of gaps and opportunities for strengthening the adoption of EHRs by any health institution. The metrics provide measurable expressions for the guiding questions associated with each factor. In this sense, the CSFs can contribute to the planning of projects or activities for EHR implementation, reinforcing strengths in IT, management, or any of the other topics involved. Future work will include an expert evaluation of the proposed factors and their metrics, and the development of a software tool (app and web) to facilitate the measurement of the CSFs in any health service.

References

1. Marques I, Ferreira J (2020) Digital transformation in the area of health: systematic review of 45 years of evolution. Health Technol 10(3):575–586. https://doi.org/10.1007/s12553-019-00402-8
2. Mesko B (2020) Digital health technologies and well-being in the future. IT Prof 22(1):20–23. https://doi.org/10.1109/MITP.2019.2963121
3. Haggerty E (2017) Healthcare and digital transformation. Netw Secur 2017(8):7–11. https://doi.org/10.1016/S1353-4858(17)30081-8
4. Castillo V, Martínez-García A, Pulido J (2010) A knowledge-based taxonomy of critical factors for adopting electronic health record systems by physicians: a systematic literature review. BMC Med Inform Decis Mak 10(1):60. https://doi.org/10.1186/1472-6947-10-60
5. Kruse C, Kothman K, Anerobi K, Abanaka L (2016) Adoption factors of the electronic health record: a systematic review. J Med Internet Res 4(2):19. https://doi.org/10.2196/medinform.5525
6. Spatar D, Kok O, Basoglu N, Daim T (2019) Adoption factors of electronic health record systems. Technol Soc 58:101144. https://doi.org/10.1016/j.techsoc.2019.101144
7. Verina N, Titko J (2019) Digital transformation: conceptual framework. In: International scientific conference contemporary issues in business, management and economics engineering. VGTU Press. https://doi.org/10.3846/cibmee.2019.073
8. Bartsch S, Weber E, Büttgen M, Huber A (2020) Leadership matters in crisis-induced digital transformation: how to lead service employees effectively during the COVID-19 pandemic. J Serv Manag 32(1):71–85. https://doi.org/10.1108/JOSM-05-2020-0160
9. World Health Organization and Regional Office for the Western Pacific (2006) Electronic health records: manual for developing countries. World Health Organization, Geneva
10. Mhlungu N, Chen J, Alkema P (2019) The underlying factors of a successful organisational digital transformation. South Afr J Inf Manage 21(1):1–10. https://doi.org/10.4102/sajim.v21i1.995
11. Osmundsen K, Iden J, Bygstad B (2018) Digital transformation: drivers, success factors, and implications. In: Mediterranean conference on information systems MCIS 2018 proceedings. https://aisel.aisnet.org/mcis2018/37
12. Pan American Health Organization (2017) Health in the Americas. Edition summary: regional outlook and country profiles. Pan American Health Org., Washington
13. Rivas L, Ganvini C, Mendoza LE (2022) Critical success factors proposal to evaluate conditions for e-health services. Lecture Notes Netw Syst 414:128–139. https://doi.org/10.1007/978-3-030-96293-7_13
14. Grimán A, Pérez M, Mendoza L, Losavio F (2006) Feature analysis for architectural evaluation methods. J Syst Softw 79(6):871–888. https://doi.org/10.1016/j.jss.2005.12.015

Comorbidity Analysis in the Mexican Population Affected by SARS-CoV2 Jesús Manuel Olivares Ceja, Imanol Marianito Cuahuitic, Marijose Garces Chimalpopoca, Marco Antonio Jesús Silva Valdez, and César Olivares Espinoza

Abstract Recently, the SARS-CoV2 pandemic has affected more than 500 million people in the world, of whom more than six million have died. In order to reduce contagions, many countries decreed pandemic quarantines consisting of the confinement of people and reduced social mobility. Such actions affected people's economies, reducing employment and increasing prices, but contagions showed a reduction. The pandemic prompted the production of diverse publications, from vaccines and prognosis models up to the recent interest in the effects and consequences of the pandemic. This paper analyzes public data constantly updated by the Mexican Ministry of Health since February 2020. There are two goals: the first is to provide general statistics of recovered and deceased people; the second is to measure the impact of comorbidities on affected people, both the recovered and the unfortunate deceased cases. A proposed affectation index measures the ratio between the number of infected citizens and each state's population; this measurement establishes a pandemic severity value by state. It shows that although the State of Mexico has the largest population, it is the fourth least affected of the 32 states, while other states with smaller populations, such as Baja California Sur and Tabasco, have a higher index. The statistics show that 88% of the deceased had at least one comorbidity, while only 12% had none, which suggests that people with at least one comorbidity had less chance of recovering.

Keywords Statistical analysis · SARS-CoV2 · Comorbidity · Affectation index

J. M. O. Ceja (B) · M. A. J. S. Valdez · C. O. Espinoza Centro de Investigación en Computación, Instituto Politécnico Nacional, CDMX, Mexico e-mail: [email protected] I. M. Cuahuitic Instituto Tecnológico de Chilpancingo, Guerrero, Mexico M. G. Chimalpopoca Escuela Superior de Cómputo, Instituto Politécnico Nacional, CDMX, Mexico © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_21


1 Introduction

The SARS-CoV2 pandemic, first detected in December 2019 in China, has affected many countries in the world since then. Currently, there have been more than 500 million people infected worldwide, of whom more than six million have died. Governments, in order to reduce the number of infections and avoid saturating hospitals, imposed quarantines consisting of a reduction of social mobility, mainly through the confinement of their inhabitants. These actions resulted in reduced income for both families and enterprises, lost jobs, and increases in the prices of goods and services [6]; other people experienced increased anxiety and depression levels [7]. Some governments applied policies to palliate the economic problems, besides improving the health system. The Mexican government supported elderly people by providing funds, and granted loans to microenterprises to face the pandemic confinement. As soon as vaccines became available, the number of deaths dropped while social mobility increased; nevertheless, new virus mutations appeared, producing further contagion peaks.

At the beginning of the SARS-CoV2 pandemic, the scientific community worldwide focused on obtaining a viable vaccine, and the advances appeared in various journals. During the periods of increased cases, publications related to the prognosis and forecast of infected people also increased [1]; one of the most employed models is SIR, together with some variations, used to quantify hospital service requirements and prevent saturation [5, 9, 13], and urban locations are the ones with the larger numbers of contagions [8, 10–12]. Some researchers focused on weather and climate conditions as factors that promote the spread of the virus [14]. Natural language processing research analyzed the content of Twitter messages to find the terms of greatest concern among inhabitants during the periods of peak cases in the main cities [2]. More recently, publications have increased that try to obtain patterns from the information of people who have suffered from this contagious disease [3, 4].

In this work, the interest is in historical data, attempting to discover the relationship between some comorbidities and SARS-CoV2; at the same time, it is of interest to establish the proportion of affected people considering the population of each location. The data analysis uses public data from the Mexican Ministry of Health, published from the start of the pandemic in Mexico in February 2020 until July 23, 2022, the date of this study. The interest is in identifying the main comorbidities of the infected people, particularly which ones may have the greatest effect on deaths and which ones were present in recovered people. The results are important to detect vulnerable population groups and to promote government policies attempting to reduce deaths.


1.1 Data Acquisition

This study considers SARS-CoV2 contagions in the population of the 32 states of Mexico. The dataset comes from an open data site of the Ministry of Health ("Secretaría de Salubridad y Asistencia") of the Mexican Government, and it is updated daily. This data source was selected because it contains reliable data coming from an official source. The dataset, whose metadata is shown in Table 1, contains data from the start of the pandemic in Mexico in February 2020 until July 23, 2022, the date on which the data for this work was processed. Each record represents a patient with a unique identifier. A program in Python checked the patient identifiers to detect repeated ones, finding none; duplicate identifiers would be indicative of re-infected people, although there is no information confirming that.

After downloading the data and checking the metadata, the analysts inspected the dataset to find null data or errors that would imply a cleaning or data imputation process. The data used in this study was determined to be clean. A data preprocessing step changed the original content into a form more readable for data analysts; for example, the content of variables such as diabetes was coded with numbers (1 = YES, 2 = NO, 97 = NOT APPLICABLE, 98 = IGNORE, 99 = NOT SPECIFIED), and these numerical values were replaced by labels such as 1/Diabetes, 2/noDiabetes, 97/na, 98/i, 99/ne. The team that developed the programs and reviewed the content considers that this new coding eases data interpretation. Other attributes with similar values were also changed, and the attributes related to states were recoded from numbers to the three-letter labels of the ISO 3166-2 standard. Deceases are those records in which the attribute FECHA_DEF has a value different from 9999-99-99. Due to the interest in comparing contagions considering the number of people in each state, a table with this information was joined to the SARS-CoV2 dataset.
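A minimal sketch of these preprocessing steps, assuming the official CSV carries the attribute names listed in Table 1; the file name and encoding below are placeholders, not taken from the paper.

```python
import pandas as pd

# Placeholder file name; the real open-data file is downloaded from the Ministry of Health site.
df = pd.read_csv("COVID19MEXICO.csv", encoding="latin-1", low_memory=False)

# Duplicate-identifier check: the study reports that no repeated ID_REGISTRO was found.
assert not df["ID_REGISTRO"].duplicated().any()

# Recode numeric answers into the readable labels used by the analysts (example: DIABETES).
diabetes_labels = {1: "1/Diabetes", 2: "2/noDiabetes", 97: "97/na", 98: "98/i", 99: "99/ne"}
df["DIABETES"] = df["DIABETES"].map(diabetes_labels)

# Deceases: FECHA_DEF holds a real date instead of the "not applicable" marker (9999-99-99).
df["DECEASED"] = df["FECHA_DEF"] != "9999-99-99"
```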

1.2 Data Processing

Data processing employed programs in the Python language, including the Pandas library, on a computer with a Core i7 3.4 GHz CPU and 16 GB of DDR4 RAM; the processing time to obtain the graphs was 2 h 45 min. The main operation is GROUP BY, which forms the groups that make up the classifications required in this study. The data with the number of people infected with SARS-CoV2 is joined with the population data of each state according to the official 2020 census of INEGI ("Instituto Nacional de Estadística y Geografía") of Mexico.
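Continuing the sketch above, the GROUP BY and join steps could look roughly as follows; the three-letter state codes and the population figures are the ones used in Table 2, shown here only for three states.

```python
import pandas as pd

# Small excerpt of the 2020 INEGI census figures (values as reported in Table 2).
population = pd.DataFrame({
    "ENTIDAD_RES": ["CMX", "BCS", "TAB"],            # ISO 3166-2 style three-letter codes
    "POPULATION":  [9_209_944, 798_447, 2_402_598],
})

# Contagions per state of residence (df is the cleaned dataset from the previous step).
contagions = df.groupby("ENTIDAD_RES").size().rename("CONTAGIONS").reset_index()

by_state = contagions.merge(population, on="ENTIDAD_RES")
by_state["AFFECTATION_INDEX"] = 100 * by_state["CONTAGIONS"] / by_state["POPULATION"]
```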

Table 1 Dataset metadata

# | Attribute | Description
1 | FECHA_ACTUALIZACION | Dataset update date
2 | ID_REGISTRO | Patient identification number
3 | ORIGEN | Origin health unit
4 | SECTOR | Medical institution that provided care
5 | ENTIDAD_UM | Medical institution unit location (state)
6 | SEXO | Patient gender
7 | ENTIDAD_NAC | Patient's place of birth (state)
8 | ENTIDAD_RES | Patient residence state
9 | MUNICIPIO_RES | Municipality of patient residence state
10 | TIPO_PACIENTE | Patient type
11 | FECHA_INGRESO | Patient admission date
12 | FECHA_SINTOMAS | Date on which symptoms began
13 | FECHA_DEF | Date the patient died, when it applies
14 | INTUBADO | Intubation was required
15 | NEUMONIA | Is patient diagnosed with pneumonia?
16 | EDAD | Patient age
17 | NACIONALIDAD | Is patient Mexican or foreign?
18 | EMBARAZO | Is patient pregnant?
19 | HABLA_LENGUA_INDIG | Does patient speak an indigenous language?
20 | INDIGENA | Is patient an indigenous person?
21 | DIABETES | Is patient diagnosed with diabetes?
22 | EPOC | Chronic obstructive pulmonary disease
23 | ASMA | Is patient diagnosed with asthma?
24 | INMUSUPR | Immunosuppression?
25 | HIPERTENSION | Hypertension?
26 | OTRA_COM | Other diseases?
27 | CARDIOVASCULAR | Cardiovascular diseases?
28 | OBESIDAD | Is patient diagnosed with obesity?
29 | RENAL_CRONICA | Chronic renal failure?
30 | TABAQUISMO | Is patient a smoker?
31 | OTRO_CASO | Contact with another SARS-CoV2 case?
32 | TOMA_MUESTRA_LAB | Was a laboratory sample taken from the patient?
33 | RESULTADO_LAB | Sample analysis result
34 | TOMA_MUESTRA_ANTIGENO | Antigen sample for SARS-CoV2
35 | RESULTADO_ANTIGENO | Antigen test result
36 | CLASIFICACION_FINAL | Patient final classification
37 | MIGRANTE | Is patient a migrant?
38 | PAIS_NACIONALIDAD | Patient's country of nationality
39 | PAIS_ORIGEN | Country from which the patient arrived to Mexico
40 | UCI | Was an Intensive Care Unit required?

2 Affectation Index

The affectation index (Fig. 1) of each state measures the ratio between the number of infected citizens and the state population; this measurement establishes a pandemic affectation value which the literature has mentioned as typical of urban locations [11, 12]. Table 2 shows the proportion of inhabitants reported as infected by SARS-CoV2, ordered from highest to lowest. The five highest affectation indexes occurred in Mexico City, Baja California Sur, Tabasco, San Luis Potosí and Morelos. On the other hand, the states with the lowest affectation indexes, from highest to lowest, are Hidalgo, State of Mexico, Oaxaca, Veracruz and Chiapas, Chiapas being the state with the lowest index in the country. It is interesting to note that the State of Mexico, the state with the largest population in the country, is found among the five states with the lowest affectation index. This value requires further study; one reason could be that infected patients moved to Mexico City for attention, which would also help to explain the highest rate found in Mexico City (65.4% of its population). In addition, it is interesting to note that states with smaller populations, such as Baja California Sur and Tabasco, have a higher index.
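Expressed as a formula (notation ours), the values in Table 2 correspond to

    affectation index(s) = 100 × contagions(s) / population(s)

for each state s; for example, for Mexico City, 100 × 6,026,280 / 9,209,944 ≈ 65.4.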


Fig. 1 State affectation index by SARS-CoV2 in Mexico

Figure 1 shows the affectation index graphically; the y-axis is the percentage of people infected, from 0% to 100%, and the x-axis lists the states using the three-letter ISO 3166-2 codes. The top value corresponds to the largest urban area in the country. The BCS state has one of the smallest populations in the country but proportionally many contagions, possibly because it is one of the most touristic destinations and many people from different countries arrive each year, mainly during the winter (December and January) holidays, which was one of the periods with the greatest number of contagions. The states with smaller urban cities, such as Guerrero (GRO), Oaxaca (OAX), Chiapas (CHP), and Veracruz (VER), appear in the graphic with lower indexes, which confirms that the largest numbers of contagions are typical of urban, more crowded regions. Chihuahua (CHH), Sonora (SON) and Coahuila (COA) are among the largest states by area but have more scattered populations, which may explain why they do not appear in the first places.

Table 2 Affectation index for each state in Mexico

State name | Population | Contagions | Affectation index
Ciudad de México | 9,209,944 | 6,026,280 | 65.4
Baja California Sur | 798,447 | 285,003 | 35.7
Tabasco | 2,402,598 | 558,460 | 23.2
San Luis Potosí | 2,822,255 | 550,853 | 19.5
Morelos | 1,971,520 | 319,304 | 16.2
Colima | 731,391 | 114,683 | 15.7
Aguascalientes | 1,425,607 | 201,795 | 14.2
Coahuila | 3,146,771 | 394,850 | 12.5
Nuevo León | 5,784,442 | 722,444 | 12.5
Querétaro | 2,368,467 | 293,514 | 12.4
Sonora | 2,944,840 | 345,180 | 11.7
Yucatán | 2,320,898 | 271,640 | 11.7
Guanajuato | 6,166,934 | 694,282 | 11.3
Campeche | 928,363 | 100,484 | 10.8
Sinaloa | 3,026,943 | 323,890 | 10.7
Quintana Roo | 1,857,985 | 197,536 | 10.6
Nayarit | 1,235,456 | 122,831 | 9.9
Tamaulipas | 3,527,735 | 345,942 | 9.8
Baja California Nte | 3,769,020 | 354,098 | 9.4
Durango | 1,832,650 | 170,561 | 9.3
Zacatecas | 1,622,138 | 149,691 | 9.2
Tlaxcala | 1,342,977 | 115,218 | 8.6
Michoacán | 4,748,846 | 313,227 | 6.6
Chihuahua | 3,741,869 | 246,701 | 6.6
Jalisco | 8,348,151 | 542,485 | 6.5
Guerrero | 3,540,685 | 222,174 | 6.3
Puebla | 6,583,278 | 398,822 | 6.1
Hidalgo | 3,082,841 | 186,087 | 6.0
Estado de México | 16,992,418 | 979,324 | 5.8
Oaxaca | 4,132,148 | 214,884 | 5.2
Veracruz | 8,062,579 | 354,701 | 4.4
Chiapas | 5,543,828 | 196,810 | 3.6


3 Comorbidity Statistics

Comorbidities are those conditions that are present at the same time as another condition considered as the main affectation; in this case, the main condition is the SARS-CoV2 infection. The interest of this study is to identify the most common comorbidities of affected people from a historical perspective. This knowledge could be used by health decision makers to focus attention on preventive measures and to pay more attention to those people if more contagions appear.

The first interest is to identify the number of people who died during the SARS-CoV2 pandemic having a comorbidity, and those with no comorbidity. Table 3 shows the population distribution considering recovery or decease, and then divides the data by comorbidity and gender. Figure 2 shows the proportion of recovered people and those who died; both cases consider whether any comorbidity was present. The data shows that 71.3% of recovered people do not exhibit any comorbidity. Of the 2.6% of deceases, 88% (2.3% of all people) had at least one comorbidity. These results indicate that comorbidity was typical in SARS-CoV2 deceases. Table 3 also shows that the male gender exhibits the larger proportion of deceases and the lower proportion of recovery compared with female patients.

Next, the study focuses on the distribution of the 2.3% of the deceased population with at least one comorbidity. Figure 3 shows that the three most common comorbidities are pneumonia, hypertension, and diabetes, which account for 80% of cases. It is noteworthy that the people who presented asthma form the smallest comorbidity group among the deceased, while this condition is in sixth place among the recovered people (Fig. 4); that is, many people recovered despite suffering from asthma. Pneumonia, the main comorbidity associated with deceases in people infected by SARS-CoV2, appears in fifth place in Fig. 4 for recovered people. Hypertension, diabetes, and obesity are not the main conditions of decease, since in Fig. 4 they appear in the first, second and fourth places among recovered people. Figure 4 shows the recovered people who had at least one comorbidity; these are 26.1% of the people at the national level (see Fig. 2).

Table 3 National affected people with SARS-CoV2 by gender (contagions: 16,313,754, 100%)

Outcome | Comorbidity | Total | Male | Female
Recovered (15,892,514; 97.4%) | No comorbidity | 11,636,202 (71.3%) | 5,335,048 (32.7%) | 6,301,122 (38.6%)
Recovered (15,892,514; 97.4%) | With comorbidity | 4,256,312 (26.1%) | 2,013,588 (12.3%) | 2,242,724 (13.7%)
Deceases (421,240; 2.6%) | No comorbidity | 45,555 (0.3%) | 29,502 (0.2%) | 16,053 (0.1%)
Deceases (421,240; 2.6%) | With comorbidity | 375,685 (2.3%) | 227,730 (1.4%) | 147,955 (1.0%)
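As an illustration only (not the authors' published code), the breakdown in Table 3 could be reproduced from the dataframe of the earlier sketch roughly as follows; counting a patient as having a comorbidity when any of the listed attributes is coded 1 = YES is an assumption of this sketch, applied to the raw numeric coding before the label replacement.

```python
comorbidity_cols = ["NEUMONIA", "DIABETES", "EPOC", "ASMA", "INMUSUPR", "HIPERTENSION",
                    "CARDIOVASCULAR", "OBESIDAD", "RENAL_CRONICA", "TABAQUISMO"]

has_comorbidity = (df[comorbidity_cols] == 1).any(axis=1)   # raw coding, 1 = YES
deceased = df["FECHA_DEF"] != "9999-99-99"                  # a real death date was recorded

summary = (
    df.assign(OUTCOME=deceased.map({True: "Decease", False: "Recovered"}),
              COMORBIDITY=has_comorbidity)
      .groupby(["OUTCOME", "COMORBIDITY", "SEXO"])
      .size()
)
print((summary / len(df) * 100).round(1))   # percentages comparable to Table 3
```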


Fig. 2 National affected people with SARS-CoV2

Fig. 3 Comorbidity distribution associated with deceased people

These graphs show that hypertension, obesity and smoking account for 81% of the recovered people with at least one comorbidity, which indicates that these are illnesses and addictions that had a lesser impact on the deaths of people affected by SARS-CoV2.


Fig. 4 Number of people recovered with at least one comorbidity

4 Conclusions

This document has presented the analysis of comorbidity in people affected by the SARS-CoV2 virus in Mexico. An affectation index by state shows the proportion of inhabitants infected by this virus with respect to the number of inhabitants of each state. The results show that the more urban the state, the larger the number of contagions, while states with scattered populations had lower numbers of contagions. The states with the highest number of contagions are Mexico City, Baja California Sur, Tabasco and Morelos; the states with the lowest affectation index are Chiapas, Veracruz, Oaxaca, State of Mexico and Hidalgo. It is noticeable that part of the State of Mexico's cases could have been accounted to Mexico City, but this cannot be inferred from this dataset and further studies are required.

The comorbidity analysis shows that there were more deceases in men than in women. Among the deceases with comorbidities, it is noteworthy that people with asthma had the lowest number, while among the conditions of recovered people asthma is in sixth place, which indicates a condition that had little influence on the people affected by this virus; pneumonia, in contrast, is the main comorbidity associated with deceases. This study shows that comorbidities have more occurrences among deceased people (88%) than no comorbidities (12%). The results are important to detect vulnerable population groups and to guide government policies that seek to reduce deceases among, for example, diabetic, hypertensive and obese people.


Future work considers a dashboard that collects data in real time, together with forecast models that produce early alerts.

Acknowledgements This research is supported by the Instituto Politécnico Nacional with the project SIP 20222105 Procesamiento y visualización de datasets públicos mediante un clúster de computadoras personales.

References

1. Panaggio MJ et al (2022) Gecko: a time-series model for COVID-19 hospital admission forecasting. Epidemics 39:100580. https://doi.org/10.1016/j.epidem.2022.100580
2. Pineda-Briseño A, Chire Saire JE (2020) Minería de texto para identificar las principales preocupaciones de los usuarios de Twitter durante COVID-19 en la Ciudad de México. Res Comput Sci 149(8):827–839
3. Tandan M, Acharya Y, Pokharel S, Timilsina M (2021) Discovering symptom patterns of COVID-19 patients using association rule mining. Comput Biol Med 131:104249. https://doi.org/10.1016/j.compbiomed.2021.104249
4. Ilbeigipour S, Albadvi A (2022) Supervised learning of COVID-19 patients' characteristics to discover symptom patterns and improve patient outcome prediction. Inform Med Unlock 30:100933. https://doi.org/10.1016/j.imu.2022.100933
5. Shringi S, Sharma H, Rathie PN, Bansal JC, Nagar A (2021) Modified SIRD model for COVID-19 spread prediction for Northern and Southern states of India. Chaos Solitons Fractals 148:111039. https://doi.org/10.1016/j.chaos.2021.111039
6. Zaremba A, Kizys R, Aharon DY, Umar Z (2022) Term spreads and the COVID-19 pandemic: evidence from international sovereign bond markets. Financ Res Lett 44:102042. https://doi.org/10.1016/j.frl.2021.102042
7. Dilek TD, Boybay Z, Kologlu N, Tin O, Güler S, Saltık S (2021) The impact of SARS-CoV2 on the anxiety levels of subjects and on the anxiety and depression levels of their parents. Multiple Scler Relat Disord 47:102595. https://doi.org/10.1016/j.msard.2020.102595
8. Aditya Satrio CB, Darmawan W, Nadia BU, Hanafiah N (2021) Time series analysis and forecasting of coronavirus disease in Indonesia using ARIMA model and PROPHET. Procedia Comput Sci 179:524–532. https://doi.org/10.1016/j.procs.2021.01.036
9. Alotaibi N (2021) Statistical and deterministic analysis of COVID-19 spread in Saudi Arabia. Results Phys 28:104578. https://doi.org/10.1016/j.rinp.2021.104578
10. Freire-Flores D, Llanovarced-Kawles N, Sanchez-Daza A, Olivera-Nappa Á (2021) On the heterogeneous spread of COVID-19 in Chile. Chaos Solitons Fractals 150:111156. https://doi.org/10.1016/j.chaos.2021.111156
11. Yu X, Zhang Y, Sun HG (2021) Modeling COVID-19 spreading dynamics and unemployment rate evolution in rural and urban counties of Alabama and New York using fractional derivative models. Results Phys 26:104360. https://doi.org/10.1016/j.rinp.2021.104360
12. Aguilar-Madera CG, Espinosa-Paredes G, Herrera-Hernández EC, Briones Carrillo JA, Valente Flores-Cano J, Matías-Pérez V (2021) The spreading of Covid-19 in Mexico: a diffusional approach. Results Phys 27:104555. https://doi.org/10.1016/j.rinp.2021.104555
13. Youssef H, Alghamdi N, Ezzat MA, El-Bary AA, Shawky AM (2021) Study on the SEIQR model and applying the epidemiological rates of COVID-19 epidemic spread in Saudi Arabia. Infect Dis Model 6:678–692. https://doi.org/10.1016/j.idm.2021.04.005
14. Arefin MA, Nabi MN, Islam MT, Islam MS (2021) Influences of weather-related parameters on the spread of Covid-19 pandemic: the scenario of Bangladesh. Urban Clim 38:100903. https://doi.org/10.1016/j.uclim.2021.100903

Practical Guidelines for Developing Secure FHIR Applications Alexander Mense, Lukas Kienast, Ali Reza Noori, Markus Rathkolb, Daniel Seidinger, and João Pavão

Abstract HL7 Fast Healthcare Interoperability Resources (FHIR) is the latest interoperability standard by HL7 International and is now used by a large number of companies and thousands of developers to build interoperable API-based applications for healthcare data exchange. In October 2021 a security report by Alissa Knight caused a great deal of commotion in the community and raised the question "is FHIR insecure?", as she was able to access thousands of health records by hacking FHIR applications with simple techniques. Looking at the report in detail, it turned out to be an implementers' problem and especially a problem of the "last mile between the user and clinical data aggregators". As the FHIR standard itself does not provide a security framework but only common-sense recommendations, the standard itself is not insecure, but the lack of implementing at least a minimum set of security controls opens a wide area of possibilities for unauthorized access to health data. This paper presents the results of a project to develop a practical framework for FHIR API developers to build secure FHIR applications. The framework offers a set of controls as well as testing methods. It was built on well-known frameworks such as the OWASP API Security Project as well as on healthcare-specific expert knowledge.

Keywords Healthcare · FHIR · API Security

A. Mense (B) · L. Kienast · A. R. Noori · M. Rathkolb · D. Seidinger University of Applied Sciences Technikum Wien, 1200 Wien, Austria e-mail: [email protected] J. Pavão Universidade de Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_22


1 Introduction

1.1 Motivation

Electronic healthcare data exchange can only be realized on the basis of interoperability, i.e. healthcare applications implementing interoperability standards. HL7 International has been developing international healthcare data exchange standards for structural interoperability (and a few elements of semantic interoperability) since it was founded in 1987. HL7 Version 2 is the most widespread non-imaging interoperability standard for data exchange in healthcare. The latest standard is called "Fast Healthcare Interoperability Resources (FHIR)" [1], and due to its reliance on commonly used standards of the WWW and its ease of implementation it has become very popular over the last years. Despite the fact that the standard is still under development and is actually not really a finished standard at all, it is now used by a large number of companies and thousands of developers to build interoperable API-based applications for healthcare data exchange (whereas FHIR can also be used for other patterns than only implementing an API). There are big expectations about FHIR revolutionizing interoperable healthcare data exchange worldwide. Thus, it is understandable that the security report "Playing with FHIR: Hacking and Securing FHIR APIs" [2] by Alissa Knight caused an alarm in the community. The fact that she was able to access thousands of health records by hacking FHIR applications with simple techniques immediately raised the question "is FHIR insecure?".

1.2 FHIR Standard and Security

A closer look at the FHIR standard's definition shows that, if applications using a FHIR-based API interface expose security vulnerabilities, it will not be because of the FHIR standard, because in fact FHIR defines healthcare data exchange patterns but does not define a security framework. This means the standard offers a set of security considerations and guidance [3] based on widely agreed security patterns, techniques, international security standards, RFCs and security technologies, but there is no "FHIR security" at all. This guidance covers, for instance, access control issues, communication security, or the challenge of using XHTML in the narrative section of a resource, and should be taken up by implementers in an overarching security concept for the application, based at least on a risk analysis. Furthermore, the standard defines artifacts that can be used to enhance the security functionality of the application itself, i.e. the resources "Consent", "AuditEvent" and "Provenance", and the possible use of security labels as part of each resource's metadata. With "SMART App Launch", FHIR defines "a set of foundational patterns based on OAuth 2.0 for client applications to authorize, authenticate, and integrate with FHIR-based data systems" [4].


1.3 The Security Challenge of FHIR API Based Applications

Generally, FHIR is mostly used as a REST API, and thus applications using it are exposed to similar security concerns and risks as other REST API standards and implementations, but the fact that FHIR is used for transferring sensitive health information makes the potential impact of such risks far more severe. According to the latest Salt Labs report, based on a company survey, 95% of companies suffered from an API security incident in the year 2021 [5]. Additionally, it was found that the monthly per-customer average of malicious API calls rose from 2.73 million in December 2020 to 21.23 million calls in December 2021, an increase of 681% [6].

Looking at the details of the Knight report [2], it turns out to be an implementers' problem and especially a problem of the "last mile between the user and clinical data aggregators", which means many software architects and developers do not have appropriate security knowledge and are often not aware of basic security requirements of the healthcare domain. Of course, appropriate education of solution architects and software developers about the basic security problems, combined with a secure software development process and sound implementation techniques, would be the best approach; but as we are not living in a perfect world, we set up a project to develop a framework with practical guidelines for implementing FHIR API based applications. The goal was to select and offer a basic set of controls as well as to provide easy-to-use testing methods that enable developers to enforce at least a basic level of security.

2 Methods and Materials

2.1 Guidelines

The general approach was a holistic look at the topic of secure software development in the context of medical data exchange, including compliance requirements, general organizational frameworks, international standards and guidelines, technical frameworks, and existing work in the context of FHIR API development, with a focus on detailed, specific controls for implementers of applications using a FHIR API. For detailed healthcare-specific requirements, domain experts have been actively involved in the development.

From a compliance point of view in the context of the European Union (which is currently the focus of the project), the "General Data Protection Regulation (GDPR)" [7] must be mentioned: on the one hand it requires appropriate security measures, and on the other hand it defines high fines for violations. The ISO 27034 standard series (for an overview see [8]), referring to a secure software development cycle, offers a generic approach, whereas the Microsoft Secure Software Development Cycle [9] has been implemented several times and contains precise controls. The NIST Special Publication 800-218 [10], which defines a Secure Software Development Framework (SSDF), also provides a set of very practical controls covering the main aspects of secure software development. For technical security controls, the most practical approaches are offered by the Open Web Application Security Project (OWASP) [11].

A. Mense et al.

(SSDF)—provides a set of very practical controls covering main aspects of secure software development. For technical security controls the most practical approaches are offered by the Open Web Application Security Project (OWASP) [11].

2.2 Testing

As one of the goals of the project is to provide appropriate testing methods, an evaluation of available test tools was done. Several tools, such as Burp Suite, OWASP Zed Attack Proxy or Postman, were evaluated against the requirements for a test tool: it should be available at no cost, be runnable on the command line as well as in an automation tool, and provide a scripting interface for complex testing.

3 Results

3.1 Guidelines

Based on the results of the evaluation work, the OWASP API Security Project [12] was chosen as the basis for the core of the set of controls in the framework. In 2019 the OWASP Foundation published the Top 10 vulnerabilities focused on Application Programming Interfaces (APIs), describing the typical and most common API vulnerabilities [12]:

1. Broken Object Level Authorization
2. Broken User Authentication
3. Excessive Data Exposure
4. Lack of Resources & Rate Limiting
5. Broken Function Level Authorization
6. Mass Assignment
7. Security Misconfiguration
8. Injection
9. Improper Assets Management
10. Insufficient Logging & Monitoring

OWASP does not only provide a detailed description with examples, but also offers a risk rating and best practices regarding mitigations. These can be used in the guidelines as security requirements to be addressed and enhanced with specific security testing (see Sect. 3.2). It further turned out that the correct use and implementation of encryption, especially transport layer encryption, is critical and needs to be addressed in detail by the framework to ensure proper handling by the implementers.


Thus, the current version of the framework contains sections for

• OWASP API Top 10
• TLS
• Healthcare specific requirements
• Others

Healthcare specific rules primarily define requirements for correctly implementing the necessary privacy controls. Some of them partially overlap with or complement OWASP; e.g. "data filtering based on personal identifiers or personal data must not be done on client side" clearly overlaps with OWASP API 5 (broken function level authorization), which has to be implemented on server side. In total, the framework currently defines 40 security issues to be considered, providing a detailed description (rational grounds) as well as suggestions for possible mitigations. Where possible, a testing method is defined and a test script provided.

3.2 Testing

After the evaluation of tools, Postman [13] was chosen because of its functionality for adding customized test cases considering the FHIR standard and the possibility of being used in automated testing. A very important aspect was also the fact that Postman is already an established tool in the developer community, which is the target group of the guidelines. Tests for the security rules in the framework are provided as scripts to be executed by Postman (an example snippet is shown in Fig. 1).

Fig. 1 Snippet testing code


Since the OWASP Top 10 is very general in terms of API vulnerabilities, all categories of the OWASP Top 10 were included in the test cases in a correspondingly general manner. The tests provided by the developed framework are to be used as basic templates and can be extended or modified as required. The testing principle via Postman is kept relatively simple for the sake of comprehensibility; the tests are mostly based on the server response but also on HTTP status codes.

For example, the testing for Broken User Authentication consists of five tests. First, the successful retrieval, where a 200 or 201 status code is returned; next, an empty header where no security attributes are provided and a 403 is returned, which means that the user needs to log in. The third test evaluates what happens when the security token is set but without _oauth2_proxy= prepended, and is successful on a 403. The fourth test evaluates the other way around, where the prefix is set but without security parameters. The last test only evaluates whether a 403 is returned when a token value is set in the header. The green test results in Fig. 2 show that the OAuth2 implementation used for writing the tests works correctly.

For the OWASP Injection category, a CSV file with over 100 SQL injection queries is sent to the API endpoints, and it is expected that the API server does not answer with HTTP status code 200. The list of injections can be adapted according to the database technology used. The goal here is to check the robustness of the application against user input with as many and as varied methods as possible. Of course, one could test the individual cases manually only with a lot of effort, so it is important to also test the application against possible injection attacks with different tools, for example sqlmap.
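In the framework itself these checks are shipped as Postman scripts (Figs. 1 and 2). Purely as an illustration of the same five conditions outside Postman, a Python sketch with the requests library could look as follows; the endpoint URL and token are placeholders, and transporting the session token as an _oauth2_proxy cookie is an assumption based on the prefix mentioned above.

```python
import requests

BASE_URL = "https://fhir.example.org/fhir/Patient"   # placeholder FHIR endpoint
TOKEN = "<valid-session-token>"                       # placeholder credential

# 1. Valid session: retrieval must succeed.
ok = requests.get(BASE_URL, headers={"Cookie": f"_oauth2_proxy={TOKEN}"})
assert ok.status_code in (200, 201)
# 2. No security attributes at all: the server must ask the user to log in.
assert requests.get(BASE_URL).status_code == 403
# 3. Token set, but without the _oauth2_proxy= prefix: must be rejected.
assert requests.get(BASE_URL, headers={"Cookie": TOKEN}).status_code == 403
# 4. Prefix set, but without any security parameters: must be rejected.
assert requests.get(BASE_URL, headers={"Cookie": "_oauth2_proxy="}).status_code == 403
# 5. An arbitrary token value in the header: must be rejected.
assert requests.get(BASE_URL, headers={"Cookie": "_oauth2_proxy=not-a-real-token"}).status_code == 403
```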

Fig. 2 OWASP API 2 Test execution results
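The framework's actual tests are Postman scripts; purely as an illustration of the same status-code principle, a minimal sketch in Python using the requests library might look as follows. The endpoint URL, the token value, and the handling of the _oauth2_proxy= prefix as a cookie are assumptions made for this example and are not taken from the framework itself.

```python
import requests

BASE_URL = "https://fhir.example.org/fhir"  # hypothetical FHIR endpoint
TOKEN = "example-token"                      # hypothetical OAuth2 token value

def check(description, expected_codes, headers=None):
    """Call the Patient endpoint and verify the returned HTTP status code."""
    response = requests.get(f"{BASE_URL}/Patient", headers=headers or {}, timeout=10)
    ok = response.status_code in expected_codes
    print(f"{'PASS' if ok else 'FAIL'} - {description} (got {response.status_code})")
    return ok

# 1) Successful retrieval with a valid token must return 200/201.
check("valid token", {200, 201}, {"Cookie": f"_oauth2_proxy={TOKEN}"})
# 2) No security attributes at all must be rejected with 403.
check("empty header", {403})
# 3) Token without the _oauth2_proxy= prefix must be rejected with 403.
check("token without prefix", {403}, {"Cookie": TOKEN})
# 4) Prefix without any token value must be rejected with 403.
check("prefix without token", {403}, {"Cookie": "_oauth2_proxy="})
# 5) An arbitrary (invalid) token value must be rejected with 403.
check("invalid token value", {403}, {"Cookie": "_oauth2_proxy=not-a-valid-token"})
```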


Further mitigation procedures would be, for example, to secure the application against injection attacks using "prepared statements" or "whitelisting". These mitigation methods have proven to be very effective, but they come with a certain amount of effort. Postman scripting can in many cases be used to compare client-side input with the expected server-side output. An API endpoint is called using an HTTP GET/POST request and the server response is compared with an already defined response which serves as a reference. If the server response does not deviate from the reference response, then it is known that the server has not sent any additional data. This is a very simple way to verify that the server is sending only the defined data. This principle can be used in several cases to ensure that the backend developer never assumes that the frontend developer will filter the data. Therefore, it is the duty of the backend developer to deal with the question of who needs the data before releasing an API endpoint. Tests with Postman are not limited to the OWASP Top 10. Some additional checks that have been defined verify that only TLS 1.2 or TLS 1.3 is used and that the server only offers ciphers that provide state-of-the-art security. The reason is that only TLS in an appropriate version and with appropriate encryption algorithms provides the necessary security for transmitting health data. If these recommendations are not followed, developers risk that an attacker can intercept the communication, read, and in the worst case even manipulate the transmitted data.
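Outside Postman, the same two ideas (reference-response comparison and TLS version checking) can be sketched in Python as well; the host name and the reference resource used here are assumptions for the example, not part of the framework.

```python
import socket
import ssl
import requests

HOST = "fhir.example.org"  # hypothetical FHIR server

# Reference-response comparison: the endpoint must return exactly the fields
# that were defined in advance, nothing more.
reference = {"resourceType": "Patient", "id": "example", "active": True}  # assumed reference
actual = requests.get(f"https://{HOST}/fhir/Patient/example", timeout=10).json()
extra_fields = set(actual) - set(reference)
print("No additional data sent" if not extra_fields else f"Unexpected fields: {extra_fields}")

# TLS check: the negotiated protocol version must be TLS 1.2 or TLS 1.3.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        version = tls.version()
        cipher = tls.cipher()
print(f"Negotiated {version} with cipher {cipher[0]}")
assert version in ("TLSv1.2", "TLSv1.3"), "Server accepts an outdated TLS version"
```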

4 Discussion According to the definitions of the GDPR [7], health-related data is a special category of personal data, must be considered highly sensitive, and its processing is in principle prohibited. Thus, applications processing health data must follow existing regulations and implement proper security and privacy controls. As the HL7 FHIR standard for health data exchange provides a definition that can be easily used and is based on widely adopted web technologies that can be easily implemented, adoption of this latest HL7 standard is high. But many reports as well as real-world experiences show that implementers are often not aware of basic security requirements and of how to implement them. Implementing applications that handle healthcare data should actually be done using a strict secure software development lifecycle, starting with a clear security and privacy by design approach, using a risk management process, and including the deployment of the application to a secure infrastructure. Setting up such an environment is complex and requires a lot of effort. To give implementers an easy-to-use starting point, the described project aims to define a simple framework with a set of controls along with testing methods. This should enable implementers to establish a security baseline and raise awareness for security. The work is still ongoing, and the set of controls as well as the number of tests are growing.


Unfortunately, it is not possible to predefine tests for each of the security requirements, and already defined tests often have to be customized for specific implementations. As the work is a very generic approach, and security must be application-specific and appropriate to mitigate application-specific risks, using this framework can only be considered a first basic step toward "real" security. It definitely does not replace a holistic approach to implementing security and privacy for application development in the healthcare domain.

References 1. HL7 International: Welcome to FHIR. http://hl7.org/fhir/. Accessed 14 Sep 2022 2. Knight, A (2021) Playing with FHIR: hacking and securing FHIR APIs, 2021. https://approov. io/for/playing-with-fhir/. Accessed 14 Sep 2022 3. HL7 International: FHIR security. http://hl7.org/fhir/security.html. Accessed 14 Sep 2022 4. HL7 International: Smart App Launch. https://www.hl7.org/fhir/smart-app-launch/. Accessed 14 Sep 2022 5. Salt Labs: Companies are struggling against a 681% increase in API attacks, the latest “State of API Security” report shows. https://salt.security/blog/companies-are-strugglingagainst-a-681increase-in-api-attacks-the-latest-stateof-api-security-report-shows. Accessed 22 Apr 2022 6. Salt Labs: API Security Trends (2022). https://salt.security/apisecurity-trends. Accessed 22 Apr 2022 7. European Parliament and Council: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri= CELEX:32016R0679&from=EN. Accessed 30 May 2017 8. isecT Ltd: ISO/IEC 27034:2011—Information technology—Security techniques—Application security. https://www.iso27001security.com/html/27034.html. Accessed 14 Sep 2022 9. Microsoft. Microsoft Secure Software Development. https://www.microsoft.com/en-us/securi tyengineering/sdl. Accessed 14 Sep 2022 10. Souppaya, M., Scarfone, K., Dodson, D.: NIST special publication 800-218: Secure Software Development Framework (SSDF) Version 1.1. https://doi.org/10.6028/NIST.SP.800-218. Accessed 14 Sep 2022 11. Open Web Application Security Project® (OWASP). https://owasp.org/. Accessed 14 Sep 2022 12. OWASP: OWASP API Security Project. https://owasp.org/wwwproject-api-security/. Accessed 14 Sep 2022 13. Postman API Platform. https://www.postman.com/. Accessed 14 Sep 2022

Intelligent System to Provide Support in the Analysis of Colposcopy Images Based on Artificial Vision and Deep Learning: A First Approach for Rural Environments in Ecuador A. Loja-Morocho, J. Rocano-Portoviejo, B. Vega-Crespo, Vladimir Robles-Bykbaev , and Veronique Verhoeven

Abstract According to the World Health Organization (WHO), cervical cancer (CC) is an illness that took more than 342,000 female lives in 2020 and is considered the fourth most frequent cancer among women worldwide. In the rural areas of countries like Ecuador, there are no low-cost tools for women who need to perform a self-screening exam or for doctors who need to report cases with the support of artificial vision. For these reasons, in this article we present the results of the first stage of development of an ecosystem aimed at the early detection of CC in rural areas. This ecosystem is based on a mobile application used to take photos during self-screening, a web tool to store and manage the images and diagnoses, and a module to classify images using deep learning. Keywords Cervical cancer · Deep learning · Mobile applications · Rural areas · Computer vision

A. Loja-Morocho · J. Rocano-Portoviejo · V. Robles-Bykbaev (B) GI -IATa, Cátedra UNESCO Tecnologías de Apoyo Para la Inclusión Educativa,Universidad Politécnica Salesiana, Cuenca, Ecuador e-mail: [email protected] A. Loja-Morocho e-mail: [email protected] J. Rocano-Portoviejo e-mail: [email protected] B. Vega-Crespo Faculty of Health Science, Universidad de Cuenca, Cuenca 010203, Ecuador e-mail: [email protected] V. Verhoeven Family Medicine and Population Health, University of Antwerp, 2610 Antwerp, Belgium e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_23


1 Introduction Cervical cancer is still a major threat to women's health: in 2020 approximately 604,127 new cases were diagnosed worldwide, and 342,831 women died from this cause. 90% of cervical cancer deaths (CCD) occur in low- and middle-income countries (LMICs) [11]. If the trend does not change, deaths will increase by 25% in the next 10 years [3]. In Ecuador, CC was the second cause of oncological mortality in women in 2020: approximately 1,534 new cases were diagnosed, and 813 deaths occurred due to this pathology. In addition, the mortality rate for CC reaches 17.8/100,000 inhabitants [10]; in South America, the country with the highest rate is Bolivia (35.8) and the country with the lowest rate is Brazil (12.2) [6]. The number of cases and deaths has not changed in the last 20 years in Ecuador. Since the implementation of CC screening, population mortality has been reduced by 75%, mainly in developed countries. In LMICs, timely detection of CC continues to be a pending task, since 41.6% of women of reproductive age have not had a screening test [3, 10]. The global strategy for the reduction of CC presented by the World Health Organization, called 90-70-90, proposes three fundamental elements for the reduction of cervical cancer. Since CC is a disease caused by the human papillomavirus (HPV), vaccination of at least 90% of the population against this virus is necessary; 70% of the population should undergo, at least twice in their life (at 35 and 45 years), a high-sensitivity test for the diagnosis of HPV; and finally, at least 90% of patients with HPV alterations should have a comprehensive follow-up, which implies performing a colposcopy and treating the lesions [11]. The 90-70-90 strategy faces difficulties in LMICs: on one hand, vaccination coverage has not reached the optimal level; secondly, screening coverage does not reach 70%; and thirdly, follow-up of patients through colposcopy is only available in medium- and high-complexity hospitals, which means that even if a woman is vaccinated and screened, she can be lost in the system and not receive comprehensive management [11]. Colposcopy and colpophotography, with the assistance of artificial intelligence, can bring women closer to this technology and enable a timely diagnosis of lesions that require surgical treatment.

2 Related Work In recent years, computer vision and machine learning have been successfully developed and applied as tools for complex tasks such as the analysis of the biodegradation of type I collagen structures in bone regeneration systems [7]. In this line, deep learning techniques have been used successfully to classify colposcopy images. In the study developed by [8], an experimentation process is


described which allowed the collection of 485 images of patients with three pathologies: severe dysplasia, carcinoma in situ (CIS), and invasive cancer (IC). Of these images, 233 were captured with a green filter and the rest without a filter. To carry out the classification process, the authors trained a convolutional neural network and applied L1 and L2 regularization, dropout, and data augmentation processes obtaining an approximate accuracy of 50%. In a similar vein, [5] presents an experimentation process with a corpus of images available in the Intel and MobileODT Cervical Cancer Screening Kaggle Competition. The corpus contains a total of 8015 images organized into 3 classes (according to the type of cervix) [4]: – Type 1: completely ectocervical, fully visible, and small/large (1241) – Type 2: has endocervical component, fully visible, may have an ectocervical component which may be small or large (4348) – Type 3: has an endocervical component, is not fully visible, may have an ectocervical component which may be small or large (2426) In general, the authors indicate that they used a “method for classifying images on the basis of deep residual network, employing batch normalization for increased gradient flow, dropout to reduce overfitting, and the Adam optimizer. We used the multi-class logarithmic loss as our loss function” [5]. After testing various neural network configurations, including transfer learning, the authors report that an approximate precision value of 0.6 and a loss value of 0.8 are obtained in the set of images selected for testing. In a similar vein, Alyafeai and Ghouti describe a proposal based on a fully automated pipeline for cervical detection and cervical cancer classification from cervigram images. The pipeline uses two models to carry out both the process of detecting the cervical area and the subsequent classification of the disease. With the first model, the zone or area is detected, and features are extracted with an accuracy of 0.68, meanwhile, the second model proceeds with the classification process which uses a convolutional neural network with an accuracy of 0.82. Another interesting aspect highlighted by the authors is that the pipeline is 1000 times faster to detect the cervical area and 20 times faster to classify CC than what has been reported in previous research [1]. Another deep learning technique that has been used to classify CC images is the deep denoising autoencoder. With this, it is sought to carry out a process of reducing the dimensionality of the problem and with this carry out the classification of the images more efficiently and effectively. In this area, in [2], this technique is used to classify images of 858 patients aged between 13 and 84 years (mean 27 years) and have an active sexual life. The authors report an accuracy of 0.6875 for performing the screening process. The Mask Regional Convolutional Neural Network (Mask R-CNN) is a new alternative that was applied in [9] to diagnose CC in pap smear histological slides. The slides contained both cervical cells and various artifacts (such as white blood cells). The algorithm proposed in this research reached an average precision of 0.578


(sensitivity and specificity of 0.917) for each image. For this, 178 images from a pap smear slide with a size of 3,048 × 2048 pixels were used. Although the images contained 4 types of classes (normal, atypical, low-grade, and high-grade), the authors simplified the problem in a dichotomous proposal (normal vs. abnormal). As it can be seen in these current examples of the state of the art, the results are still far from being those required for systems that can be used in the medical field. This is mainly due to the high requirements that these types of systems require in terms of sensitivity and specificity.

3 System Architecture Colposcopy/colpophotography is a method that allows the cervix to be seen with magnification and thus identify structural changes at the level of the cervix. Due to the action of the Human Papillomavirus (HPV), the cells of the cervix lose their ability to regulate their growth, and their capacity for mitosis increases. This situation allows the identification of specific configurations at the level of the cervix that indicate where this rapid growth occurs, as well as the extension and morphological characteristics of the affected epithelium. In the standard colposcopy/colpophotography technique, the first step consists of the application of 5% acetic acid. This solution causes vasoconstriction of the blood vessels, and the rapidly growing epithelium turns whitish (acetowhite); according to the density and morphology of this acetowhite epithelium, it can be classified as a high- or low-grade lesion or as invasive cancer. In the second place, weak Lugol's iodine is applied; since fast-growing cells consume glycogen, they do not take up the dark coloration of this stain. Areas that do not take up the stain are called iodine-negative. This second step allows the extent and location of the lesion to be confirmed. For these reasons, Fig. 1 presents the general diagram of the architecture of the proposed system. As can be seen, the first stage of the system is made up of two large layers, one focused on the interactive tools and the other on the knowledge model used. With the first layer, doctors who serve in rural areas can upload the photos to a website for further analysis. Likewise, they can also apply filters to analyze the images and annotate areas they consider suspicious (annotation tool). With the help of color threshold binarization, the mobile application allows masks to be created and/or regions of interest (ROI) to be selected manually or automatically (currently under development). As for the knowledge layer, we present an expert system that has two main functions. The first aims to provide support for the presumptive diagnosis through the classification of images that present patterns of a tumor or a cancerous lesion. For this, convolutional networks and transfer learning are used. The second objective is to generate exercises and patient cases so that students in the area of gynecology and obstetrics can prepare themselves for the process of detection and analysis of CC (module currently under development). The other modules that make


Fig. 1 Scheme of the main modules and elements that make up the proposal.

Fig. 2 Screenshots of the mobile application (left) and the web environment to support presumptive diagnosis through neural networks (right).

up the knowledge layer are the Electronic Medical Record (EMR), which contains additional patient information (demographic data, clinical history, etc.), a monitoring module to track the patient's evolution, a report generator that carries out a data analytics process, and a module that collects feedback from system users to make subsequent improvements. In Fig. 2 we can observe screenshots of the application's filters and the annotation tool (left side), while on the right we can observe the web page that contains some of the images uploaded to the server (top) and the classification generated by the neural network (bottom).
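As a rough illustration of the color-threshold binarization mentioned above for highlighting acetowhite regions, a minimal OpenCV sketch might look as follows. The file name and the HSV bounds are assumptions made for the example; in the actual application the physician can adjust the color ranges interactively.

```python
import cv2
import numpy as np

# Hypothetical colposcopy photo taken with the mobile application.
image = cv2.imread("colposcopy_sample.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Assumed HSV range for whitish (acetowhite) tissue; these bounds would be
# tuned by the physician through the application's controls.
lower = np.array([0, 0, 180])
upper = np.array([180, 60, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean the mask and keep only reasonably large connected regions of interest.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    if cv2.contourArea(contour) > 500:           # ignore small specks
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("colposcopy_rois.jpg", image)
```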


4 Pilot Experiment and Preliminary Results To evaluate the first stage of development of the project, we carried out a pilot experimentation plan consisting of two stages. In the first stage, the response of different configurations of the neural network was analyzed with the corpus of Intel and MobileODT, while in the second, a perception survey was carried out on 46 senior-year medical students who are taking the subject of gynecology.

4.1 Training the Neural Network To carry out the training and testing of the neural network, the corpus of images from Intel and MobileODT was used. From this corpus the following classes were obtained: – Type 1: belongs to the earliest stage of cancer, in which cellular changes endow it with malignant characteristics, that is, uncontrolled multiplication and invasiveness; it is the longest stage of the disease and is called introductory; it is not diagnosable and produces no symptoms, and this phase can last up to 30 years. – Type 2: characterized by the existence of microscopic cancerous lesions located in the tissue where they originated; in adults it usually lasts from 5 to 10 years depending on the type of cancer; there are no symptoms or discomfort in the patient, but it is already diagnosable through early detection techniques. – Type 3: begins to spread outside its location of origin and invades adjacent tissues or organs; this is the phase of local invasion and lasts between 1 and 5 years in adulthood; the onset of symptoms depends on the type of cancer. We worked with a total of 6,569 images: 1,179 for type 1, 3,416 for type 2, and 1,974 for type 3. A convolutional neural network organized in 6 layers was trained with the following activation functions and neurons: layer 1 (relu, 32), layers 2 and 3 (relu, 64), layer 4 (relu, 128), layer 5 (relu, 512), and layer 6 (relu, 128); the last layer uses a softmax function to classify into 3 classes (a minimal sketch of this architecture is given below). With this network, training was carried out with 3 different optimizers (Root Mean Squared Propagation, Adadelta, and Adamax), and the data was divided into 70% for training and 30% for testing, as can be seen in Fig. 3. However, the results are not adequate in terms of precision (0.5 on the test set), due to the high level of noise in the images caused by the presence of Lugol, the speculum, or incorrectly captured images. To improve the corpus, it was decided to reconfigure the distribution of the images considering 3 new classes: 1) images with a healthy uterus, 2) images containing cancer, and 3) images with the speculum. Likewise, a selection of images was made to discard those that used Lugol (molecular iodine solution). Figure 4 shows an example of images that contain the speculum (a) and Lugol (b).
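Purely as an illustration, the 6-layer network described above could be sketched in Keras as follows. The paper only lists the activation (relu) and the number of units per layer; the kernel sizes, pooling, input resolution, and the split between convolutional and dense layers are assumptions made for this sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), num_classes=3):
    # Layer widths follow the description above: 32, 64, 64, 128, 512, 128.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # layer 1
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),   # layer 2
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),   # layer 3
        layers.MaxPooling2D(),
        layers.Conv2D(128, (3, 3), activation="relu"),  # layer 4
        layers.Flatten(),
        layers.Dense(512, activation="relu"),           # layer 5
        layers.Dense(128, activation="relu"),           # layer 6
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model

model = build_model()
# One of the three optimizers compared in the paper (RMSprop, Adadelta, Adamax).
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```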


Fig. 3 Accuracy results of the first convolutional neural network setup, tested with 3 different optimizers: Root Mean Squared Propagation (a), Adadelta (b), and Adamax (c).
Fig. 4 Example of images that contain the speculum (a) and Lugol (b).

Table 1 Precision and loss results obtained with the different optimizers considering the new corpus configuration.

Optimizer                    Val Loss   Test Loss   Val Accuracy   Test Accuracy
Adadelta                     0.1        0.98        0.95           0.82
Adagrad                      0.05       3           0.99           0.6
Adam                         0.01       23          0.99           0.49
Adamax                       0.01       21          0.99           0.61
Ftrl                         1.09       1.11        0.5            0.4189
Nadam                        0          0.1         0.99           0.8
RMSprop                      0.01       2.84        0.99           0.76
SGD                          0.03       2           0.99           0.6
Transfer Learning (ResNet)   0.43       0.784       0.82           0.81

With this new distribution of images, the training process of the neural network was carried out again, testing 8 optimizers as well as transfer learning with the ResNet network (He et al.). As can be seen in Table 1, the classification results improve substantially.


However, it is important to highlight that the objective of this process is to better understand the aspects related to the image corpus and how it affects the neural network.

4.2 Measurement of Perception of the Mobile Application and the Web Application To determine the perception that the senior medical students have of the mobile application, a survey consisting of 3 demographic questions and 15 Likert-scale questions was applied to 46 volunteers. Of this group, 23 are women and 23 are men, with ages between 21 and 37 years old (mean of 23.56 and StdDev of 2.43). Through an analysis made with the RStudio software, a Cronbach's alpha of 0.86 was obtained, a value that is within the confidence limits. From this it was concluded that there is a correlation between the items, reinforcing the reliability of the survey. In Fig. 5, it can be seen that the survey participants responded "totally agree" (67.4%) and "agree" (28.3%) to the criterion "the mobile application could be a support tool to improve the visual perception of colposcopy images in medical practice" (Q5). Likewise, gynecology students voted "totally agree" (67.4%) and "agree" (26.1%) that "the mobile application could be a support tool so that medical students can carry out diagnostic practices or visual inspection in colposcopy images" (Q4). Regarding the operation of the web module for loading images, 39.1% of the participants consider it "totally adequate" and the same proportion consider it "adequate" (Q3). In relation to the colors of the mobile application (Q2) and the web application (Q1), 54.3% and 60.9%, respectively, see them as "totally adequate".

Fig. 5 Main results obtained from the application of the survey to 46 senior medical students.
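The authors report Cronbach's alpha computed with RStudio; purely as an illustration of the underlying formula, an equivalent computation in Python over a hypothetical response matrix (not the actual survey data) could look like this:

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = responses.shape[1]                        # number of items
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 46 respondents x 15 Likert items with values 1..5.
rng = np.random.default_rng(0)
survey = rng.integers(1, 6, size=(46, 15))
print(f"Cronbach's alpha: {cronbach_alpha(survey):.2f}")
```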


5 Conclusions With the applied filters we can obtain a better visual perception of the images or focus on certain areas of interest. With the application of the color detection algorithm, it has been possible to segment most of the desired areas; the presence of white tones is analyzed in colposcopy images to detect affected areas, because the samples are taken after the application of acetic acid, which turns the affected tissue whitish. There are also cases in which there are yellowish tones or red areas, and for this reason the application allows a doctor to adjust the color ranges and segment the areas they consider relevant. Regarding the neural network, it is essential to indicate that the corpus of images substantially affects the capacity of the network to perform adequate classifications. Therefore, from this initial exploration we can indicate the following: – It is necessary, as a first stage, to identify the area where the lesion may exist. This will allow for effective subsequent classification processes. – It is necessary that the images are standardized concerning the use of Lugol or the presence of the speculum, or that more samples of these circumstances are available.

References 1. Alyafeai Z, Ghouti L (2020) A fully-automated deep learning pipeline for cervical cancer classification. Expert Syst Appl 141:112951 2. Fernandes K, Chicco D, Cardoso JS, Fernandes J (2018) Supervised deep learning embeddings for the prediction of cervical cancer diagnosis. PeerJ Comput Sci 4:e154 3. Hull R et al (2020) Cervical cancer in low and middle-income countries. Oncol Lett 20(3):2058– 2074 4. Jordan, J., Singer, A., Jones, H., Shafi, M.: The Cervix. Wiley, Hoboken (2009) 5. Payette, J., Rachleff, J., de Graaf, C.: Intel and mobileodt cervical cancer screening kaggle competition: cervix type classification using deep learning and image classification. Stanford University (2017) 6. Reis NV, Andrade BB, Guerra MR, Teixeira MTB, Malta DC, Passos VM (2020) The global burden of disease study estimates of brazil’s cervical cancer burden. Ann Global Health 86(1):56 7. Robles-Bykbaev Y, Naya S, Díaz-Prado S, Calle-López D, Robles-Bykbaev V, Garzón L, Sanjurjo-Rodríguez C, Tarrío-Saavedra J (2019) An artificial-vision-and statistical-learningbased method for studying the biodegradation of type i collagen scaffolds in bone regeneration systems. PeerJ 7:e7233 8. Sato M et al (2018) Application of deep learning to the classification of images from colposcopy. Oncol Lett 15(3):3518–3523 9. Sompawong N, et al (2019) Automated pap smear cervical cancer screening using deep learning. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp 7044–7048. IEEE 10. Vega Crespo B, Neira Molina VA, Flores Salinas MA, Guerra Astudillo G, Mora Bravo LV, Ortíz Segarra JI (2020) Minireview: Situación actual del cáncer de cuello uterino en ecuador, 2019. REVISTA MÉDICA HJCA 12(3):205–211 11. World Health Organization: Cervical cancer (2022). https://www.who.int/news-room/factsheets/detail/human-papillomavirus-(hpv)-and-cervical-cancer, Accessed 25 Jan 2022

Comparison of Transfer Learning vs. Hyperparameter Tuning to Improve Neural Networks Precision in the Early Detection of Pneumonia in Chest X-Rays Paúl Idrovo-Berrezueta, Denys Dutan-Sanchez, and Vladimir Robles-Bykbaev Abstract The WHO (World Health Organization) has updated a publication discussing Covid-19 vaccination for children, and in this document they mention the vulnerability that this illness has caused among children under the age of 5, exposing them to a higher risk of other diseases such as pneumonia. For this reason, this research is focused on the early detection of pneumonia using children's chest X-rays and the implementation of artificial intelligence. Convolutional Neural Networks (CNNs) are well suited to processing chest X-ray images; hence a variety of deep learning configurations were used, namely VGG16, VGG16-W, VGG19, VGG19-W, HT, ResNet50, ResNet50-W, MobileNet, and MobileNet-W. To enhance the accuracy of these models, transfer learning and hyperparameter tuning were applied to the training process. As a result of this research, we obtained an accuracy of 0.9684 and a loss of 0.0793, and we hope that this work can help the medical field in the early detection of pneumonia and save doctors time. Keywords Artificial intelligence · Image classification · Convolutional neural network · Hyperparameter tuning · Transfer learning · Pneumonia · X-ray

1 Introduction Pneumonia is a respiratory disease, defined by the presence of fever and respiratory symptoms accompanied by radiological evidence. For pediatrics, pneumonia is an alarming topic, especially because there is a high rate of infection in children under P. Idrovo-Berrezueta · D. Dutan-Sanchez · V. Robles-Bykbaev (B) GI-IATa, Cátedra UNESCO Tecnologías de Apoyo Para la Inclusión Educativa, Universidad Politécnica Salesiana, Cuenca, Ecuador e-mail: [email protected] P. Idrovo-Berrezueta e-mail: [email protected] D. Dutan-Sanchez e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_24


5 years of age. This disease generates more than 4 million deaths per year, and countries that are still in development or generate low income are the ones that are mostly affected by this disease [6]. Currently, there is an incidence of pneumonia in pediatric cases, this being acquired in approximately 4–6% of patients. In the case of children under 2 years of age, it is very difficult to make a diagnosis that differentiates between bronchiolitis and community-acquired pneumonia [4]. Currently, chest radiography is the best method available to diagnose pneumonia. This method implies a reliance on expert radiologists for the interpretation of radiographs. Through extensive research, it is possible to build a tool that uses neural networks for the correct classification of radiographs, this can help reduce time and help experts manage time more efficiently [10]. Artificial intelligence has positioned itself as a very powerful tool that allows the creation of classification models. These techniques are widely used in various areas of research, where they have played a fundamental role as a support tool. One of the areas that have been greatly favored is the health area [3]. When talking about neural networks, we can create neural networks based on our experience or use the same resources already implemented in other neural networks, this method is known as the use of transfer learning, this method has a variety of selections from which some of the most popular are: VGG, ResNet, MobileNet among others. [2]. In addition, there is a secondary option of performing hyperparameter tuning, whereas we allow the computer to carry out operations where it will try a variety of configurations to obtain the best results.

2 Related Work In [15] the authors apply a neural network that predicts whether a patient has Alzheimer's disease, one of the most challenging illnesses to treat. For this reason, the objective of that research is the early detection of Alzheimer's using brain images. Using deep learning schemes, the neural network classifies the images into multiple classes: very mild, moderate, mild, and non-demented. In the end, the CNN obtained an accuracy of 99.92%. To obtain these results, the data was processed with a 16-layer pretrained VGG16 (Visual Geometry Group) network. Another work of interest is [12], which used a data set from the most recent epidemic (COVID-19). In that research, the MFDNN (multi-channel features deep neural network) technique was applied to detect patients that may have COVID-19 from chest X-ray images. To train the neural network, the data set was preprocessed in order to reduce the bias caused by the unbalanced data. Once the data was prepared, it was applied to different deep learning models (VGG19, ResNet50, GoogLeNet, DenseNet201), and the results were compared to the performance of the MFDNN. As a result of these experiments, the model with the highest result was the MFDNN, obtaining a precision of 93.19%. Not to


mention that the recall and the F1 score were also higher compared to the other models. Another work of interest is [14], which uses chest CT images to detect malignancies located in the lung area, applying deep transfer learning techniques to analyze the medical images. Nowadays a radiologist reviews CT images and determines whether a patient presents lung cancer or not; however, this task can be time-consuming, and the diagnosis of these malignancies can sometimes be incorrect. For this reason, the authors employed a variety of neural networks such as VGG16, VGG19, MobileNet, and DenseNet169. The images used to train these models comprised two kinds of lung cancer (squamous cell carcinoma and adenocarcinoma). The neural network that gave the best result was VGG16, obtaining an accuracy of 91.28%. A study on pneumonia [13] proposes a new medical treatment to deal with this illness. Cytomegalovirus pneumonia is a major cause of morbidity, and the drugs prescribed to treat it suffer from low efficacy, toxicity, and lack of specificity. To resolve this issue, the researchers propose the construction of artificial stem cells that carry drugs to suppress cytomegalovirus pneumonia. This research found that the artificial cells showed inflammation tropism and significant suppression of viral replication, and the authors hope that it will promote the use of artificial stem cells to treat pneumonia more efficiently. A study on the detection of pneumonia [11] has shown how the recent epidemic (Covid-19) has gravely affected people and increased their probability of contracting pneumonia. For this reason, a model was built using artificial neural networks to predict whether or not a patient has pneumonia. The data for this research were chest X-ray images, and the model that gave the best results was DenseNet201, obtaining an accuracy of 99.84%. This classifier helps create a diagnosis that is robust, fast, and cost-effective.

3 Methodology This section presents the tools that were used including the parameters that were configured for the experiments.

3.1 CNN Architecture In this research, several CNNs that are already designed for transfer learning are used. One measure of network complexity is the number of parameters: the higher the number of parameters, the more complex the CNN. In Table 1 we can see the CNNs used with their parameters.


Table 1 CNNs with the number of total, trainable, and non-trainable parameters [2]

CNN         Total Parameters   Trainable Parameters   Non-Trainable Parameters
VGG16       15,252,034         537,346                14,714,688
VGG19       20,564,290         539,906                20,024,384
ResNet50    25,615,938         2,161,026              23,454,912
MobileNet   4,253,864          4,231,976              21,888

3.2 VGGNet It was developed at the University of Oxford in 2014. It is a deep network with many stacked convolutional layers followed by dense layers. This CNN was trained with RGB images, which means that it only accepts images with three channels and an image size of 224 × 224 [5].

3.3 ResNet This architecture was proposed by Kaiming He et al. in 2015; it has been used in various computer vision tasks and won the ImageNet competition in 2015 [7]. It takes 224 × 224 images with RGB channels as input.

3.4 MobileNet It is a neural network proposed by Google for the design of deep neural networks with a focus on mobile and embedded devices. Its main feature is that the number of parameters and the required memory size have been considerably reduced, making it a lightweight neural network that can be run on various platforms [8]. Similar to the other neural networks, it uses a 224 × 224 input with RGB channels.

3.5 Hyperparameter Tuning When creating a machine learning model, there are several options to define its design, each introducing parameters. Through hyperparameter tuning, the machine itself can search for the optimal parameters for the data set under study; this capability is supported by TensorFlow and Keras [2]. A minimal sketch of such a search is given below.
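The paper does not reproduce its tuning code; purely as an illustration, a Keras Tuner search over ranges similar to those later reported in Table 3 might be sketched as follows. The layer layout, number of trials, and exact search bounds are assumptions for this sketch, and the training datasets are assumed to be defined elsewhere.

```python
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(hp):
    model = models.Sequential()
    model.add(layers.Input(shape=(200, 200, 3)))          # image size reported in Table 3
    model.add(layers.Conv2D(64, (3, 3), activation="relu"))
    # Searchable widths, mirroring the minimum/maximum/step ranges in Table 3.
    model.add(layers.Conv2D(hp.Int("conv2", min_value=16, max_value=48, step=4),
                            (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D())
    model.add(layers.Conv2D(hp.Int("conv3", min_value=8, max_value=32, step=4),
                            (3, 3), activation="relu"))
    model.add(layers.Flatten())
    model.add(layers.Dense(hp.Int("dense", min_value=16, max_value=56, step=4),
                           activation="relu"))
    model.add(layers.Dense(2, activation="softmax"))       # healthy / pneumonia
    lr = hp.Float("learning_rate", min_value=1e-4, max_value=4e-4, sampling="linear")
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=20, directory="tuning", project_name="pneumonia")
# tuner.search(train_ds, validation_data=val_ds, epochs=10)  # datasets defined elsewhere
```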


Fig. 1 Training Graphs

Fig. 2 The image on the left is a chest X-ray infected with pneumonia, and the image on the right is a healthy chest X-ray. Taken from the data set [1].

4 Experiment and Preliminary Results As a guide for our experimental and results phase, we follow the diagram in Fig. 1, which gives our research its structure: starting from the interpretation of the selected data set, through the preparation of the data (in this case, images) and the modeling and training of the proposed neural networks, and finally the evaluation and interpretation of the results obtained. The data set used in this research is the Chest X-Ray Images data set published on the Kaggle platform under a CC BY 4.0 license [9]. This data set is in its third version and is focused on the use of X-ray images. There is a total of 5,856 images, categorized into two classes, healthy and pneumonia: the healthy class contains 1,583 images and the pneumonia class 4,273. An example of each of the classes can be seen in Fig. 2. With this selected data set, we seek to test transfer learning on the different available neural networks and compare the results against a neural network that uses hyperparameter tuning, allowing us to select the best neural network for the classification of pneumonia images. With this research we aim to support the creation and enhancement of intelligent systems, which will allow the support of


Table 2 Parameters used in transfer learning

Parameter       Description                                                                                 Value
Step Size       Number of examples used in each training step; the smaller the value, the fewer data       64
                in memory.
Optimizer       Optimization algorithm.                                                                     Adam
Learning rate   Indicates how long the path used by the optimization algorithm is.                          0.0004
Weight          Weight entries for transfer learning.                                                       None, imagenet
Input Image     Image size loaded for training.                                                             [224, 224, 3]
Patience        Quality control for the training process; training is halted or paused if the selected      4
                metric decreases or does not improve.
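Purely as an illustration of how the parameters in Table 2 might be wired together in Keras, a minimal training sketch could look as follows. The directory layout, the choice of VGG16 as the frozen backbone, and the classification head are assumptions for this sketch, not the paper's exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

# Hypothetical directory layout with one sub-folder per class (healthy / pneumonia).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=IMG_SIZE, batch_size=64)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/val", image_size=IMG_SIZE, batch_size=64)

# Pretrained backbone with frozen "imagenet" weights, as in the "-W" variants.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0004),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Patience of 4, as listed in Table 2: stop when validation accuracy stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy",
                                               patience=4, restore_best_weights=True)
model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[early_stop])
```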

health personnel in the diagnosis of this specific disease, helping to strengthen the early detection of this high-risk illness, especially in children. As mentioned before, a variety of neural networks were selected and, as an additional tool, transfer learning was applied to each of these networks to help improve the training process for image classification. The selected networks are VGG16, VGG19, ResNet50, and MobileNet; they were chosen because they are among the best-performing neural networks for image classification. The details of the parameters applied for the training process can be found in Table 2. For the training of the convolutional neural network with hyperparameter tuning, different characteristics were used, which are detailed in Table 3. As a result, a neural network with a total of 7,691,734 parameters was obtained, of which 7,691,710 are trainable and 24 are not trainable. The results obtained in terms of accuracy and loss metrics are found in Table 5. The neural networks whose names carry the suffix "-W" were trained with initial weights; specifically, the "imagenet" weights were used for that training. In addition, the CNN labeled HT represents the neural network that used hyperparameter tuning, in contrast to the other, previously trained networks. As we can see in Fig. 3, the accuracy values are fairly similar across networks; the only big difference is ResNet50, which drops to 0.8664. In addition to the accuracy metric, it can also be observed that the loss value is at its highest for HT, ResNet50, and ResNet50-W.

Table 3 Parameters and search ranges used for hyperparameter tuning

Parameter             Description                                                Range
Simple neuron layer   Number of neurons used                                     Minimum = 16, Maximum = 56, Jumps = 4
Image input sample    Image size loaded for training                             [200, 200, 3]
Convolution layer 1   Layer that applies the convolution filter                  64
Convolution layer 2   Layer that applies the convolution filter                  Minimum = 16, Maximum = 50, Jumps = 4
Convolution layer 3   Layer that applies the convolution filter                  Minimum = 8, Maximum = 32, Jumps = 4
Convolution layer 4   Layer that applies the convolution filter                  Minimum = 16, Maximum = 50, Jumps = 4
Learning rate         Indicates how long the path used by the optimization       0.0001–0.0004
                      algorithm is

Table 4 Classification results on the 50 randomly selected test images (25 pathological, 25 healthy)

CNN           Pathological Result   Pathological Result %   Healthy Result   Healthy Result %
VGG16         24                    96                      18               72
VGG16-W       24                    96                      19               76
VGG19         6                     24                      25               100
VGG19-W       9                     36                      24               96
HT            25                    100                     21               84
ResNet50      25                    100                     0                0
ResNet50-W    25                    100                     1                4
MobileNet     25                    100                     0                0
MobileNet-W   25                    100                     0                0


Fig. 3 Graph metric results

Fig. 4 Training Graphs

We can see how the metrics improve over the training epochs in Fig. 4. With the results obtained, we could infer which neural network is best to apply in an intelligent system to support the diagnosis of pneumonia. In addition, we randomly selected 50 images, 25 healthy and 25 with pneumonia, in order to test the neural networks with real data and evaluate their results. The results obtained are shown in Table 4 and Fig. 5.

Table 5 Accuracy and loss obtained with each CNN

CNN           Accuracy   Loss
VGG16         0.9622     0.1016
VGG16-W       0.9650     0.0876
VGG19         0.9605     0.1108
VGG19-W       0.9577     0.1147
HT            0.9496     0.2066
ResNet50      0.8664     0.3042
ResNet50-W    0.9253     0.1939
MobileNet     0.9684     0.0793
MobileNet-W   0.9682     0.0769

Fig. 5 Accuracy test graph

5 Conclusion From the results obtained in Table 5, we can see that the accuracy percentage a neural network achieves does not always mean that the network will respond according to our needs. In Fig. 5 we can observe that HT is the best option to classify chest radiographs, followed by VGG16-W and VGG16, thus showing that hyperparameter tuning is a powerful tool when applied to neural networks. In our research we have worked with medical cases, for which a tool with precision close to 100% is necessary. Although transfer learning proves to be a useful tool, it does not match the precision results of hyperparameter tuning; this is because hyperparameter tuning searches for the best parameters with which to train the neural network, whereas transfer learning only reuses knowledge instead of adapting its training method. In the end, these tools need to be of great support when processing diagnoses. This research also hopes to create an opportunity for rural sectors to have a specialized classifier that evaluates chest radiographs, mostly because these sectors do not have a nearby specialist to interpret these images. As future research, the neural network could be improved by increasing the data set and by generating applications that embed a classifier of this type within their core.


References 1. Chest x-ray images. https://www.kaggle.com/datasets/paulti/chest-xray-images 2. TensorFlow Core. https://www.tensorflow.org/tutorials?hl=es-419 3. Behkam R et al (2022) Mechanical fault types detection in transformer windings using interpretation of frequency responses via multilayer perceptron. J Oper Autom Power Eng 11(1):11–21 4. Buñuel Álvarez J, Heredia Quiciós J, Gómez Martinench E (2003) Utilidad de la exploración física para el diagnóstico de neumonía infantil adquirida en la comunidad en un centro de atención primaria. Atencion Primaria 32(6):349–354. https://www.ncbi.nlm.nih.gov/pmc/articles/ PMC7684406/ 5. Fang R, Lu CC, Chuang CT, Chang WH (2022) A visually interpretable detection method combines 3-D ECG with a multi-VGG neural network for myocardial infarction identification. Comput Methods Program Biomed 219:106762 . https://www.sciencedirect.com/science/ article/pii/S0169260722001481 6. García-Sánchez N (2006) Neumonía recurrente. ¿Factor de riesgo para el desarrollo de asma infantil? Atencion Primaria 37(3):131–132. https://www.ncbi.nlm.nih.gov/pmc/articles/ PMC7669006/ 7. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 770–778. iSSN: 1063–6919 8. Howard AG et al (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. http://arxiv.org/abs/1704.04861, arXiv:1704.04861 [cs] 9. Kermany D, Zhang K, Goldbaum M (2018) Large dataset of labeled optical coherence tomography (OCT) and chest x-ray images, vol. 3. https://data.mendeley.com/datasets/rscbjbr9sj, publisher: Mendeley Data 10. Kundu R, Das R, Geem ZW, Han GT, Sarkar R (2021) Pneumonia detection in chest X-ray images using an ensemble of deep learning models. PLoS ONE 16(9):e0256630. https://www. ncbi.nlm.nih.gov/pmc/articles/PMC8423280/ 11. Nillmani, Jain PK et al (2022) Four types of multiclass frameworks for pneumonia classification and its validation in x-ray scans using seven types of deep learning artificial intelligence models. Diagnostics 12(3):652. https://www.mdpi.com/2075-4418/12/3/652, number: 3 Publisher: Multidisciplinary Digital Publishing Institute 12. Pan L et al (2022) MFDNN: multi-channel feature deep neural network algorithm to identify COVID19 chest X-ray images. Health Inf Sci Syst 10(1):4. https://www.ncbi.nlm.nih.gov/ pmc/articles/PMC9004212/ 13. Qin A (2022) Artificial stem cells mediated inflammation-tropic delivery of antiviral drugs for pneumonia treatment. J Nanobiotechnol 20(1):335. https://doi.org/10.1186/s12951-02201547-x 14. Yadlapalli P, Bhavana D, Gunnam S (2021) Intelligent classification of lung malignancies using deep learning techniques. International Journal of Intelligent Computing and Cybernetics 15(3):345–362. https://doi.org/10.1108/IJICC-07-2021-0147, publisher: Emerald Publishing Limited 15. Younis MT, Younus YT, Hasoon JN, Fadhil AH, Mostafa SA (2022) An accurate Alzheimer’s disease detection using a developed convolutional neural network model. Bull Electr Eng Inf 11(4):2005–2012. https://beei.org/index.php/EEI/article/view/3659, number: 4

A Smart Mirror Based on Computer Vision and Deep Learning to Support the Learning of Sexual Violence Prevention and Self-care Health in Youth with Intellectual Disabilities C. Peña-Farfán, F. Peralta-Bautista, K. Panamá-Mazhenda, S. Bravo-Buri, Y. Robles-Bykbaev, V. Robles-Bykbaev , and E. Lema-Condo

Abstract The World Health Organization (WHO) estimates that near one billion children under the age of 18 have experienced some form of physical, sexual, or emotional violence. This panorama becomes much more complex in the case of children and young people with disabilities since, in developing countries such as Ecuador, there are no educational programs and technological tools that allow teaching notions of sexual violence prevention and healthy self-care. For these reasons, this article presents a proposal based on smart mirrors that combines educational techniques and artificial vision, and deep learning tools to teach these concepts to children and young people with or without disabilities. The proposal was initially validated by a group of 3 expert psychologists and tested with 6 children with various types of disabilities. Keywords Smart mirror · deep learning · sexual violence prevention · health self-care · intellectual disability · children and youth with disabilities C. Peña-Farfán · F. Peralta-Bautista · K. Panamá-Mazhenda · S. Bravo-Buri · Y. Robles-Bykbaev · V. Robles-Bykbaev (B) · E. Lema-Condo GI-IATa, Cátedra UNESCO Tecnologías de Apoyo Para la Inclusión Educativa, Universidad Politécnica Salesiana, Cuenca, Ecuador e-mail: [email protected] C. Peña-Farfán e-mail: [email protected] F. Peralta-Bautista e-mail: [email protected] K. Panamá-Mazhenda e-mail: [email protected] Y. Robles-Bykbaev e-mail: [email protected] E. Lema-Condo e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_25


1 Introduction Sexual violence is defined as the imposition in the exercise of sexuality of a person who is forced to have sexual relations or practices with the aggressor or with third parties, using deception, physical force, intimidation, threats, and the generation of emotional or material dependence, the abuse of power, or any other coercive means [7]. For this reason, Sexual and Reproductive Health (SRH), and in general Sexual Health (SH) is considered one of the Fundamental Human Rights since it is an indefectible part of health as such. Health itself must be considered and protected as a right, but above all guaranteed by the State, consequently and with greater emphasis, SH must also be; in such a way that not only access to SH is guaranteed, but primarily access to education based on educational content adapted for people with disabilities, especially intellectual disabilities, must be guaranteed, since people who have this life condition are prone to be victims of sexual violence, because they represent a human group in a state of vulnerability, and are exposed to different types of abuse, including sexual violence. The latter, since people with intellectual disabilities, have a process of representing their own sexuality that is less specific and has a more elemental structure. Therefore, they need to learn and develop gender identity and moral behavior according to family learning or learning by school institutions [2]. According to the Population Fund for the United Nations [3], it considers that people with disabilities are up to three times more likely to suffer physical, sexual, and emotional violence. According to investigations carried out, it is verified that “Of every 10 victims of rape, 6 correspond to children and teenagers. 65% of cases of sexual violence against children and teenagers are committed by relatives.” Between 40% and 68% of young women with disabilities and between 16% and 30% of young men with disabilities will experience sexual violence before the age of 18. Women with disabilities in institutions are more likely to experience forced abortions and sterilizations, sexually transmitted infections, and sexual, emotional, and physical violence. Unfortunately, nowadays, Ecuador does not count on the devices that contribute to the improvement of information and education on health self-care and the prevention of sexual abuse for people with disabilities, causing this situation to cause children and teenagers to be vulnerable in leading a full life and with the necessary knowledge for the field of education, health, and sexuality. It is in our best interest to focus on the population with intellectual disabilities and to develop a teaching-learning methodology through an intelligent mirror that in a didactic way promotes knowledge of the body for health self-care and the prevention of sexual abuse, in boys, girls, and teenagers. For this reason, the smart mirror described in this article provides support so that children and teenagers identify themselves within the proposed activities and interactively learn with their own bodies the prevention of sexual abuse and health self-care from a psychological playful perspective.


2 Related Work Currently, different methodologies help in the learning process of health self-care and prevention of sexual abuse. A relevant technique that has had an important development is that of serious games. Since the activities that the child must carry out in our proposal are based on serious games, this section will mention some relevant research in the area. In the study by [9], the authors conducted a study on serious games for health education. In the first instance, they identified 2313 investigations and after eliminating duplicates and applying the inclusion criteria (based on a review of the title, abstract and full text), 161 were selected. Most of the games focused on the field of education for health were developed in America (82) and Europe (64). The games focus on various actors, that is, health service providers (42), patients (38), and public users (75), among others. Of the games aimed at patients, only 6 focus on providing support for specific diseases, while the vast majority are aimed at providing guidelines for a good lifestyle, cognitive development, nutrition, etc. However, as can be seen, none of these applications seeks to provide support in health self-care and even less for children and young people with intellectual disabilities or other disabilities. In [5] a study is presented aimed at the learning level of children between 8 and 10 years old in the field of prevention of sexual abuse, these children were separated into two groups. The first received instruction guided by their tutors, while in the second group the serious game “Orbit” was added to the teaching process. The tests carried out concluded that the second group obtained a higher level of learning, thus demonstrating the impact of the use of serious games in learning to prevent sexual abuse. In [8] the authors point out the lack of accessibility to tools and resources for the education of children with Autism. For this reason, they describe the “Aliza” application, which is based on an intelligent mirror that has linguistic, mathematical, and verbal learning modules, and also detects the child’s emotions to help them recognize them.

3 System Architecture As can be seen in Fig. 1, the system is organized in 3 layers that contain several tools that allow both children and young people with disabilities and their tutors to interact in an agile and dynamic way with the smart mirror. With the support of a team of experts from clinical psychology, medicine, initial education, and computer science, educational scripts, intervention strategies, and playful activities were established so that children and young people with intellectual disabilities can learn initial concepts of health self-care (washing hands and brushing teeth) and the prevention of sexual violence (risk situations and prevention of rape, call for help). The modules and elements that make up the smart mirror are detailed below:


Fig. 1 Main layers and elements that make up the smart mirror Fig. 2 Screenshots of the activities and images that the smart mirror presents to children

In order for users (children, young people, and teachers) to be able to carry out the different educational activities with the mirror, the system has an interaction layer that allows gestures to be detected (through the camera and the neural network) and gives access to the mirror activities through the graphical user interface and a touch panel (touch screen overlay). In Fig. 2, three screenshots of some of the options implemented by the mirror can be observed. In the screenshot on the left (a), the option for the child to specify their gender (girl or boy) can be seen; the screenshot in the center (b) displays the menu to access the health self-care activities (top) and the prevention of sexual violence activities (bottom); and the screenshot on the right (c) shows how one of the intimate parts of a woman's body that no one should touch is pointed out.
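The gesture and pose detection mentioned here is described in more detail in the following paragraphs; purely as an illustration of the kind of pipeline involved, a minimal MediaPipe-based sketch for extracting body landmarks from the webcam could look as follows. The landmark chosen and the drawing step are assumptions for the example; the mirror itself overlays virtual objects (toothbrush, soap, etc.) on the detected landmark positions.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Hypothetical capture loop: estimate the user's pose on each webcam frame.
cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input, while OpenCV captures frames in BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            h, w, _ = frame.shape
            # Example: landmark coordinates are normalized to [0, 1]; here the nose.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            cv2.circle(frame, (int(nose.x * w), int(nose.y * h)), 5, (0, 255, 0), -1)
        cv2.imshow("Smart mirror preview", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```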


Fig. 3 Process followed to locate the different objects in the image captured by the camera

Hardware and software layer: The mirror is made up of a low-cost monitor, a webcam, and an Intel® NUC. The mirror does not use a touch screen but performs this function through a touch screen overlay. It works in this way since some children and young people cannot control the level of force with which they press the screen, and therefore, there is a mirror that protects it. The camera has a degree of freedom that allows you to set the angle at which it points at the person using the mirror. The operating system used is Ubuntu version 18.04, with Python as the programming language, OpenCV® to perform the pre-processing of the images and the visualization of the objects (as augmented reality), and the MediaPipe framework to perform the recognition of patterns. Processing layer based on computer vision and deep learning: in this layer, the image of the child or young person with disabilities is captured and through the Convolutional Neural Network (CNN) BlazePose [1] their pose is estimated. The neural network will return the points of interest of both the face (landmarks) and the body. With this information, you can assemble the images and simulate that the user takes a toothbrush or washes his hands. Also, with this, you can calculate the intimate parts of the body of both women and men that no one can touch. Figure 3 shows the process that is used to merge the information captured by the camera and the elements that allow each scene to be created (handwashing, toothbrushing, or prevention of sexual violence). To obtain the points of interest, each frame of the video is analyzed with the CNN BlazePose [1]. To do this, it is necessary to transform the current frame from the format in which OpenCV captures the BGR (Blue, Green, Red) image to RGB format. This results in the points of interest in the form of coordinates, which are identified with an ID established by the network itself (1, 2, 3). These points of interest can be found in [1]. Once the points of interest, the frame, and the image with which the fusion will be performed (4) have been obtained, the mask of said image (5) is generated. With


Table 1 Profiles of the experts who participated in the validation process of the smart mirror

Expert  Gender  Experience in psychological intervention with children and adolescents (years)  Experience working with ICTs for psychological intervention (years)  Work area
1       F       15                                                                               –                                                                    Clinical Psychologist
2       F       4                                                                                –                                                                    Clinical Psychologist
3       M       10                                                                               10                                                                   Clinical Neuropsychologist

With this mask, the original image can be merged with the object to be represented, for example the soap, the towel, or the dirt that will be placed on the hands. After the image fusion process (6), the current frame (7) is updated and, if more images need to be merged, the last updated frame is used to repeat the process described in step (4). Once the image fusion process has finished, the result (8) is shown on the screen.
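The capture and landmark-extraction part of this pipeline can be prototyped in a few lines of Python. The sketch below is a minimal illustration and not the authors' exact code: it assumes the `opencv-python` and `mediapipe` packages, uses MediaPipe Pose (BlazePose) landmark indices, and the commented `overlay_object()` helper (the mask-based fusion of steps 4-8) is hypothetical.

```python
import cv2
import mediapipe as mp

# Minimal sketch: capture a frame, estimate the pose with MediaPipe Pose (BlazePose)
# and read the landmarks used to anchor virtual objects on the user's image.
mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # webcam of the smart mirror
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    ok, frame_bgr = cap.read()
    if ok:
        # OpenCV delivers BGR; MediaPipe expects RGB (steps 2-3 of Fig. 3)
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        results = pose.process(frame_rgb)
        if results.pose_landmarks:
            h, w, _ = frame_bgr.shape
            # Landmark 0 is the nose; wrists are 15 (left) and 16 (right)
            nose = results.pose_landmarks.landmark[0]
            x_px, y_px = int(nose.x * w), int(nose.y * h)
            # frame_bgr = overlay_object(frame_bgr, "toothbrush.png", (x_px, y_px))  # hypothetical fusion helper
            cv2.circle(frame_bgr, (x_px, y_px), 5, (0, 255, 0), -1)
        cv2.imshow("smart-mirror", frame_bgr)
        cv2.waitKey(1)
cap.release()
```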

4 Experimentation and Results In order to carry out an initial validation of the intelligent mirror, an experimental plan divided into two stages was followed. In the first stage, a survey was applied to 3 experts from the areas of clinical neuropsychology and clinical psychology, with the profiles described in Table 1.¹ As can be seen, the experts have an appropriate level of experience both in the psychological intervention of children and teenagers (mean of 9.6 years, SD of 5.5) and in working with Information and Communication Technologies (mean of 3.3 years, SD of 5.77). To determine whether there was consensus among the experts, Krippendorff's alpha coefficient [4] was used. With this coefficient, the analysis of the survey results was carried out. The survey was organized into two large blocks: the first included 9 demographic questions (names and surnames, gender, age, ethnicity, marital status, religion, profession, years of experience in the psychological intervention of children and adolescents, and years of experience in working with ICTs), while the second block contained 13 questions to determine various aspects of the smart mirror:

¹ We worked with 3 experts given that the Bland-Altman method produces a graphical test to determine inter-rater agreement, and it is not practical to have more than 3 or 4 experts. Similarly, in the Ecuadorian context it is hard to find experts with educational inclusion expertise and an appropriate level of experience in the use of technologies to work with children with disabilities.


– Ease of use of the smart mirror
– Pertinence of using the smart mirror as an educational support tool
– Importance of the smart mirror to improve self-care, hygiene, and prevention of sexual violence in children and adolescents
– Relevance of the pictograms presented by the smart mirror to achieve interaction with children and adolescents with disabilities
– Level of difficulty for children and teenagers to identify the virtual objects for cleaning hands and teeth (brush, soap, etc.) during the execution of the serious game
– Relevance of the audio help presented by the smart mirror
– Pertinence of the movements made to promote the autonomy of daily hygiene in hand washing and tooth brushing
– Usefulness of the serious sexual violence prevention game for children and adolescents to identify dangerous situations and ask for help in case someone tries to touch their body
– Relevance of the size of the smart mirror
– Possibility of incorporating the smart mirror in special education institutions
– Global relevance of the games featured by the smart mirror
– Usefulness of the smart mirror for the teaching-learning process in self-care of health and prevention of sexual violence
– Ease of use/interaction of the smart mirror by children and adolescents

To this end, several demonstration sessions were held in which the experts were able to interact with the smart mirror and observe all the features and stimuli it presents during a work session. Once the surveys had been applied, Krippendorff's alpha was calculated using the R language for statistical computing (version 4.1.2). The values obtained for the pairs of experts 1–2, 1–3, and 2–3 were 0.0476, 0.462, and 0.7, respectively. As can be seen, the level of consensus between the experts is appropriate for pair 2–3, whereas there is no consensus for pair 1–2 and only a low level of consensus for pair 1–3. As shown in Fig. 4, a graphical analysis was also carried out based on the Bland-Altman method [6]. The difference of the means (bias) is lowest for the pair of experts 2–3, with a value of –0.23, while for pairs 1–2 and 1–3 it is much more marked (0.62 and 0.65). This indicates that there is greater consensus within pair 2–3. The intervals between the lower limit (LL) and the upper limit (UL) for pairs 1–2 and 1–3 are wider than for pair 2–3, which also indicates that those results are more ambiguous. Overall, however, a degree of consensus among the experts can be observed, and it would be of great interest to incorporate 2 more experts from the same area into the analysis and determine the level of consensus with this new group.
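The bias and limits of agreement shown in Fig. 4 are straightforward to compute. The following is a minimal sketch of that calculation; the per-question ratings are placeholders, since the experts' raw answers are not reproduced in the paper.

```python
import numpy as np

def bland_altman_stats(rater_a, rater_b):
    """Bias (mean difference) and 95% limits of agreement between two raters."""
    a, b = np.asarray(rater_a, dtype=float), np.asarray(rater_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd  # bias, lower limit (LL), upper limit (UL)

# Hypothetical ratings for the 13 survey items (illustrative values only)
expert_2 = [5, 4, 5, 4, 3, 5, 4, 5, 4, 5, 5, 4, 5]
expert_3 = [5, 4, 4, 5, 3, 5, 4, 5, 4, 5, 5, 5, 5]
bias, ll, ul = bland_altman_stats(expert_2, expert_3)
print(f"bias={bias:.2f}, LL={ll:.2f}, UL={ul:.2f}")
```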


Fig. 4 Results of the consensus analysis among the 3 experts using a representation based on the Bland-Altman comparison method

In the second stage of experimentation, tests were carried out with 6 children and young people with various types of disabilities, with ages between 5 and 17 years (mean = 13.16, SD = 4.21). Of this group, 2 are girls and 3 are boys, and they have the diagnoses indicated below. Case 1: multiple disabilities (profound bilateral anacusis, vision loss, moderate intellectual disability); cases 2 and 3: moderate intellectual disability; case 4: autism spectrum disorder (mild); case 5: Down syndrome; and case 6: autism spectrum disorder (moderate). In Fig. 5, the first two photographs (a, b) show a member of the team testing the fusion of the toothbrush with the mouth and the detection of an intimate area (in the case of men), while the last photograph (c) shows a girl interacting with the hand washing game. The criteria analyzed with the support of a team of experts in the field of education are the following: c01) the audios are understandable; c02) the graphics are understandable; c03) the text presented on the screen is legible; c04) the child is willing to play the game; c05) the children understand the indications of the pictograms at the beginning of each section of the game; c06) the audios motivate them to carry out the activity until it is completed; c07) the sounds that appear during the actions of brushing teeth and handwashing motivate them to carry out the activity until it is completed; c08) the attention span is maintained until the end of the game; c09) level of performance during the game when miming the instructions; and c10) level of performance during the game when using the touch screen overlay. These criteria were evaluated based on the following


Fig. 5 Photographs of the visualization that the mirror presents. The first two (a, b) correspond to tests by a member of the team, while the third (c) shows a girl performing the hand washing game
Fig. 6 Average score obtained for the 10 criteria after the interaction and evaluation process with the 6 children (UN = understandability, PR = child's performance, AR = child's attention and response)

indicators with their respective weight: poor (weight = 1), regular (weight = 2), and good (weight = 3). As can be seen in Fig. 6, the results obtained are highly positive for all cases. For criterion c03 the result is low since children with moderate intellectual disabilities have difficulty reading and interpreting texts.

5 Conclusions Within this project, we can highlight that the use of serious games in the process of learning health self-care and preventing sexual abuse in children and young people with intellectual disabilities is highly relevant, given the lack of technological resources available for this population.


Likewise, it can be highlighted that children and young people, when they see themselves reflected with superimposed images, show greater interest and curiosity, an aspect that motivates learning. Through the serious games created for the smart mirror, it was found that this population was more attracted to the self-care area, where the environment generated is more educational. Although the serious game for sexual abuse prevention was not repeated within the study group, perhaps due to wariness and because it is still a taboo subject in countries like Ecuador, correct learning on the subject was observed. The smart mirror can be used in institutions that work with children and young people with moderate intellectual disabilities, since this tool can support the tutors' teaching process as interactive material. As future work, we propose to build a larger collection of serious games oriented to various areas of health and the prevention of sexual violence. Likewise, an emotion recognition module will be developed in conjunction with the serious sexual abuse prevention game to obtain data that helps determine the moods and reactions of the users, and a voice recognition module will also be created to increase the accessibility of the smart mirror.

References
1. Bazarevsky V, Grishchenko I, Raveendran K, Zhu T, Zhang F, Grundmann M (2020) BlazePose: on-device real-time body pose tracking. arXiv preprint arXiv:2006.10204
2. Caricote Agreda E (2012) La sexualidad en la discapacidad intelectual. Ensayo. Educere 16(55):395–402
3. CONADIS, UNFPA Ecuador (2017) Guía sobre derechos sexuales, reproductivos y vida libre de violencia para personas con discapacidad. Consejo Nacional para la Igualdad de Discapacidades, Fondo de Población de las Naciones Unidas
4. Deveci Topal A, Kolburan Geçer A, Çoban Budak E (2021) An analysis of the utility of digital materials for high school students with intellectual disability and their effects on academic success. Univ Access Inf Soc 22(1):1–16
5. Jones C, Scholes L, Rolfe B, Stieler-Hunt C (2020) A serious-game for child sexual abuse prevention: an evaluation of Orbit. Child Abuse Neglect 107:104569
6. Kalra A et al (2017) Decoding the Bland-Altman plot: basic review. J Pract Cardiovasc Sci 3(1):36
7. Ministerio de Salud Pública del Ecuador (2017) Plan nacional de salud sexual y salud reproductiva 2017–2021
8. Najeeb R, Uthayan J, Lojini R, Vishaliney G, Alosius J, Gamage A (2020) Gamified smart mirror to leverage autistic education - Aliza. In: 2020 2nd International Conference on Advancements in Computing (ICAC), vol 1. IEEE, pp 428–433
9. Sharifzadeh N et al (2020) Health education serious games targeting health care providers, patients, and public health users: scoping review. JMIR Serious Games 8(1):e13459

Intelligent and Decision Support Systems

Weight Prediction of a Beehive Using Bi-LSTM Network María Celeste Salas, Hernando González, Hernán González, Carlos Arizmendi, and Alhim Vera

Abstract Predicting the future weight of an artificial beehive is fundamental to determining the status and production of the hive: the more weight the hive has at harvest time, the more productive it will be, whether the weight was increased by honey, propolis, royal jelly or brood. This paper presents a bidirectional LSTM (Bi-LSTM) algorithm using different configurations and activation functions, comparing the results in order to determine the most accurate prediction of future beehive weight. The models were trained on a database of an artificial beehive obtained from Kaggle.com; the hive is located in Würzburg, Germany, the data cover the whole year 2017, and the variables are the humidity and temperature inside the hive and the hive weight. Keywords Beehive · Weight prediction · Bi-LSTM network · TensorFlow

1 Introduction Artificial Intelligence is a subfield of computer science that was developed in the 1960s with the idea of creating programs for problem solving and the simulation of human reasoning, something that is still being studied.

M. C. Salas · H. González (B) · H. González · C. Arizmendi · A. Vera
Autonomous University of Bucaramanga, Street 42 No. 48-11, Bucaramanga, Colombia
e-mail: [email protected]
M. C. Salas, e-mail: [email protected]
H. González, e-mail: [email protected]
C. Arizmendi, e-mail: [email protected]
A. Vera, e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_27


In artificial intelligence, machine learning has been developed since the 1980s, and deep learning has gained acceptance since approximately 2010. Recurrent Neural Networks (RNN) form a family of models suitable for processing sequentially structured data. They are powerful because they have a high-dimensional hidden state with nonlinear dynamics that allows them to remember and process past information, and gradient vectors can be efficiently computed when updating network weights during temporal data processing. Despite these attractive qualities, RNN models long failed to become a conventional tool in machine learning because of the difficulty of training them effectively, due to a very unstable relationship between the parameters and the dynamics of the hidden states, which manifests itself in the vanishing/exploding gradient problem [1]. Hence, there was little research on the standard RNN model over the last 20 years, with only a few successful applications using large RNN models [2]. This paper compares the beehive weight predictions of different configurations of a bidirectional LSTM model in order to evaluate the predicted weight based on the mean squared error (MSE), varying network configuration parameters such as the activation function, number of neurons, number of layers, and optimizer. This type of network was used in [3], which compares different configurations of LSTM and Bi-LSTM neural networks to forecast wind speed, and thus wind power generation, at the Gabal El-Zayt wind farm in Egypt; the trained model predicts the tentative trend of wind speed for the period from 2020 to 2022. Other applications of this neural network architecture include mobile human activity recognition [4], fault prediction [5] and electrocardiogram classification [6].

2 Database To train the neural network models for productivity prediction, a database collected in Würzburg, Germany in 2017 was used [7]. A database covering at least one year is necessary because bee hives usually have only one large harvest per year. The downloaded database shows (Fig. 1) the behavior of the indoor temperature, indoor relative humidity and beehive weight. From March to June (spring) the temperature starts to increase while the relative humidity decreases; in this season the bees can already leave the hive and pollinate the new crops and flora that come with spring. Before training the algorithm, it is essential to preprocess the data, determine the standard deviation of each variable and remove outliers, mainly for the output variable, the weight: this variable varies very little over time, while temperature and humidity vary greatly every day. The humidity and weight variables present outliers, so during preprocessing these values are replaced by the median using the interquartile range (IQR) criterion. Once the preprocessing is done, the variables are plotted and a better behavior can be observed, with reduced noise, no outliers and an ideal curve for the weight variable. Figure 1 shows the unprocessed variables on the left side and the processed variables on the right side.
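A minimal sketch of this IQR-based outlier replacement is shown below; the file name and column names are hypothetical, and the 1.5 × IQR fences are the usual convention rather than a value stated in the paper.

```python
import pandas as pd

def replace_outliers_iqr(df: pd.DataFrame, columns) -> pd.DataFrame:
    """Replace values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] by the column median."""
    out = df.copy()
    for col in columns:
        q1, q3 = out[col].quantile(0.25), out[col].quantile(0.75)
        iqr = q3 - q1
        low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        mask = (out[col] < low) | (out[col] > high)
        out.loc[mask, col] = out[col].median()
    return out

# Hypothetical file and column names for the 2017 hive dataset
hive = pd.read_csv("beehive_2017.csv")  # timestamp, temperature, humidity, weight
hive = replace_outliers_iqr(hive, ["humidity", "weight"])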


Fig. 1 Raw and processed database

The database contains hourly values of each variable, for a total of 8,736 records, equivalent to 364 days, which are used for training (data separation, data preparation, training and validation). The fields are the time (timestamp), relative humidity, temperature, weight and, finally, a variable that was added manually: the season, encoded as 0 (winter), 0.25 (autumn), 0.50 (spring) and 1 (summer). This means that the database covers the 4 seasons of the year.
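A possible way to add such a season feature is sketched below. The month-to-season mapping and the file/column names are assumptions made for illustration; the numeric codes follow the paper.

```python
import pandas as pd

# Hypothetical reconstruction of the manually added "season" feature
# (0 = winter, 0.25 = autumn, 0.50 = spring, 1 = summer, as described in the paper).
hive = pd.read_csv("beehive_2017.csv", parse_dates=["timestamp"])  # hypothetical file/columns

def month_to_season(month: int) -> float:
    if month in (12, 1, 2):   # winter (assumed meteorological seasons)
        return 0.0
    if month in (3, 4, 5):    # spring
        return 0.50
    if month in (6, 7, 8):    # summer
        return 1.0
    return 0.25               # autumn

hive["season"] = hive["timestamp"].dt.month.map(month_to_season)
print(len(hive))  # 8736 hourly records, i.e. 364 days
```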

2.1 LSTM Network LSTM networks are an extension of simple recurrent neural networks. They differ from simple RNNs in that an LSTM network can have several hidden layers and units and maintains both short- and long-term memory, which mitigates the vanishing gradient problem (the situation in which neurons stop learning because the gradients propagated through the network become smaller and smaller). Each recurrence module is an LSTM cell whose structure is divided into four components: the cell state C_t, the forget gate, the input gate and the output gate (Fig. 2). The cell receives the current input x_t, the previous cell state c_{t-1} and the hidden state h_{t-1} of the previous LSTM unit. Sigmoid layers output values between zero (let nothing through) and one (let everything through); (×) denotes element-wise multiplication, and the arrows in Fig. 2 indicate the transfer of vectors. In the forget gate, the first sigmoid layer takes h_{t-1} and x_t and decides which parts of the previous state will be removed; the output of this gate multiplies the previous cell state as f_t * c_{t-1}, where W_f is the weight matrix and b_f the bias of the gate [7].


Fig. 2 Neural network LSTM

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)    (1)

Next, the new information to be stored in the cell state must be decided. The sigmoid layer i_t decides which values are to be updated, where W_i is the weight matrix of the input gate and b_i its bias; then the tanh layer creates a vector C̃_t of candidate values that could be added to the cell state:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)    (2)

C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (3)

With the above equations it is defined which values will update the cell state. The cell state is then updated by adding f_t * C_{t-1} and i_t * C̃_t; with this operation, irrelevant information is discarded and relevant information is added for the learning of the LSTM units.

C_t = f_t * C_{t-1} + i_t * C̃_t    (4)

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)    (5)

h_t = o_t * tanh(C_t)    (6)

Finally, the output h_t is decided in the so-called output gate, taking into account the new cell state, which filters out the irrelevant information determined by the previous equations. The sigmoid layer of the output gate decides which values of h_{t-1} and x_t will be emitted through o_t in Eq. (5); o_t is then multiplied by the tanh of C_t, which normalizes the values between -1 and 1, giving the final output h_t of Eq. (6). This new h_t is passed to the next LSTM unit


Fig. 3 Bi-LSTM neural network

where it is taken as h_{t-1}, together with the current C_t, creating a chain structure between successive LSTM units.
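As a worked example of Eqs. (1)-(6), the following NumPy sketch performs a single LSTM step. It is illustrative only: the weight matrices are randomly initialized and the input dimension (3) mirrors the three input variables of this study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step implementing Eqs. (1)-(6); W maps gate name -> weight matrix."""
    z = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])          # Eq. (1) forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])          # Eq. (2) input gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])      # Eq. (3) candidate values
    c_t = f_t * c_prev + i_t * c_tilde          # Eq. (4) new cell state
    o_t = sigmoid(W["o"] @ z + b["o"])          # Eq. (5) output gate
    h_t = o_t * np.tanh(c_t)                    # Eq. (6) new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4                           # e.g. temperature, humidity, season
W = {k: rng.normal(size=(n_hidden, n_hidden + n_in)) for k in "fico"}
b = {k: np.zeros(n_hidden) for k in "fico"}
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
h, c = lstm_step(np.array([0.5, 0.2, 0.25]), h, c, W, b)
```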

2.2 Bi-LSTM Network Bidirectional Recurrent Neural Networks were introduced in 1997. The great advantage of this type of network is the ability to learn from sequences in the future and in the past, while the LSTM network has the ability to learn from past sequences only. Figure 3 shows the architecture of a bidirectional LSTM network.

2.3 Performance Measures Due to the large amount of data to be processed, the algorithm requires normalization or scaling of the data; therefore, the StandardScaler() function is used to preprocess it. The standard scaler standardizes a feature by subtracting the mean of the training and validation data and then scaling to unit variance, that is, dividing all values by the standard deviation (σ). After training and prediction, the data are rescaled to the original values with the inverse transform of the standard scaler; for this process the fit_transform() function is used. Once the predictions are obtained, the error incurred during training is evaluated with the mean squared error (MSE), which is commonly used in machine learning due to the randomness of the data: it measures the amount of error between two data sets, comparing the training predictions and the validation or test predictions with the actual data.
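The scaling and rescaling steps map directly onto scikit-learn calls, as in the minimal sketch below; the numeric values are placeholders, not data from the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# Placeholder arrays standing in for the hive weight series (kg)
y_train = np.array([[57.1], [57.3], [57.6], [58.0]])
y_val = np.array([[58.2], [58.4]])

scaler = StandardScaler()
y_train_scaled = scaler.fit_transform(y_train)    # fit on training data, then transform
y_val_scaled = scaler.transform(y_val)

# ... train the network on the scaled values, then rescale its predictions ...
y_pred_scaled = y_val_scaled + 0.01               # placeholder for model output
y_pred = scaler.inverse_transform(y_pred_scaled)  # back to the original units
print(mean_squared_error(y_val, y_pred))          # MSE against the actual data
```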


2.4 Bi-LSTM Configuration Thanks to libraries such as TensorFlow, which is open-source software built on graphs (mathematical operations) and tensors, LSTM networks can be implemented in a simple way [8, 9]. The neural network can label and classify any type of input (numbers, images, sounds, texts, as well as the time series used in this project), converting the values into numerical data expressed in mathematical format, which makes it easy to identify patterns in the variables. This is done through tensors, mathematical objects or containers that store numerical values and are characterized by their number of dimensions: a tensor can be a vector (1 dimension), a 2D matrix, a 3D tensor (a cube), a 4D tensor (a vector of cubes), and so on. All these containers are stored as Python NumPy arrays; the graphs, in turn, contain nodes representing the mathematical operations applied to the multidimensional datasets connected between them. The following parameters correspond to the configuration required to train the Bi-LSTM model; the model expects a 3D tensor as input (see the sketch after this list):
– Batch_size. Number of samples in each iteration of an epoch or cycle; thanks to this, the weights W and biases b are updated several times per epoch (once per batch).
– Timesteps. Number of values in a sequence; e.g., there are 3 timesteps in [4, 5, 3].
– Features. The dimensions used to represent a data point at each timestep; in this work each timestep is represented by the input variables (temperature, humidity, season).
– Epoch. Number of times the forward and backward propagation algorithm is executed over the data. At each epoch the parameters W and b are updated, so the whole network learns progressively.
– Activation function. Present in each neuron of the network; it determines the neuron's output given its input.
– Dropout. A layer that randomly disables neurons, reducing the individual weight of each one; this encourages co-dependency between neurons so the network learns better and ignores LSTM units when their information is irrelevant for training.
– Keras API. Facilitates the construction of neural networks; the sequential model is used here.
– Dense. Instruction that adds a fully connected layer to the neural network.
– Optimizer. Adam was implemented; it is in charge of updating the weights of the network.
– Loss. Function that evaluates the deviation between the predictions made by the neural network and the real values of the observations.
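The sketch below shows how a multivariate series can be sliced into the 3D tensor (samples, timesteps, features) that the Bi-LSTM expects. The window length (10) and the three features follow the paper; the data themselves are random placeholders.

```python
import numpy as np

def make_windows(features, target, n_past):
    """Slice a multivariate series into (samples, timesteps, features) / (samples,) pairs."""
    X, y = [], []
    for i in range(n_past, len(features)):
        X.append(features[i - n_past:i])   # the previous n_past time steps
        y.append(target[i])                # the value to predict
    return np.array(X), np.array(y)

# Placeholder data: 8736 hourly rows, 3 input features (temperature, humidity, season)
series = np.random.rand(8736, 3)
weight = np.random.rand(8736)
X, y = make_windows(series, weight, n_past=10)
print(X.shape)  # (8726, 10, 3) -> batches of this 3D tensor feed the Bi-LSTM
```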


3 Results and Discussion Figure 4 shows the flowchart for training and validation of the algorithm. The beehive data from 2017 are preprocessed and then standardized, and the variables are divided into the input variables A (temperature, humidity, season) and the output variable Y (weight). A percentage of the data is taken for training (80%, roughly the first eight and a half months) and 20% for validation; once the algorithm is trained, it predicts the weight corresponding to the validation dates, i.e., the last two and a half months of the database. This configuration is used to make the predictions for the first 15 days of December and thus compare them with the real data for these dates in the database. After training, the loss and the MSE error are determined for the training data and, in the same way, for the validation data. Finally, the model is used to predict future weights, generating the dates and future values for the days following the loaded and trained database. The following is a summary of the parameters selected for training the sequential Bi-LSTM model; the algorithm was trained by iterating over the parameter values, and the values listed are those that showed the best performance during training, validation and prediction of the future beehive weight:
– Days to predict in the future: 15
– Days taken into account from the past: 10

Fig. 4 Flowchart for training and validation

– Input variables: temperature, humidity, season
– Output or target variable: beehive weight
– Hidden layers: 2 (Bidirectional LSTM with 100 units and Bidirectional LSTM with 50 units)
– Epochs: 50
– Batch_size: 256
– Optimizer: Adam / Nadam
– Learning rate: 0.001
– Loss function: MSE (mean squared error)
– Activation function: linear
– Dropout: 0.1

Table 1 Network training experiments
Model  Activation function  Optimizer
1      Linear               Adam
2      Sigmoid              Adam
3      Tanh                 Adam
4      Sigmoid              Nadam
5      Linear               Nadam

We implemented 100 LSTM units in the first bidirectional layer and 50 units in the second, and a dropout of 0.1 was added to generate co-dependency and regularize the model. We also used the EarlyStopping() function, which stops the training when it detects that the network is no longer learning or improving, to avoid spending unnecessary time and computation. The training of the Bi-LSTM network was performed with different activation functions (linear, tanh and sigmoid) combined with the Adam and Nadam optimizers; Table 1 shows the training combinations used to determine the best prediction for the first 15 days of December. After training each configuration of the Bi-LSTM network, the training results (loss and MSE error) were compared. The configuration with the highest accuracy in this study uses the linear activation function and the Adam optimizer, with a training error of 0.0052 and a validation error of 0.00742. The model with completely different results was the Tanh-Nadam configuration: in this case the algorithm overfitted, reaching a training and validation error above 0.6, so these predictions were discarded and are not shown in the article. The remaining results are more congruent with the real behavior of the beehive weight. The higher validation losses occur because this metric is calculated after each epoch, while the training loss is calculated during each epoch (Table 2).
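A minimal Keras sketch of this configuration is given below. The layer sizes, dropout, optimizer, learning rate, loss, epochs and batch size follow the values reported above; the exact layer wiring (where the dropout sits, whether the first bidirectional layer returns sequences) and the EarlyStopping patience are assumptions, not details stated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dropout, Dense
from tensorflow.keras.callbacks import EarlyStopping

n_past, n_features = 10, 3   # window length and input variables, as in the paper

model = Sequential([
    Bidirectional(LSTM(100, return_sequences=True, activation="linear"),
                  input_shape=(n_past, n_features)),
    Dropout(0.1),
    Bidirectional(LSTM(50, activation="linear")),
    Dropout(0.1),
    Dense(1),                 # predicted hive weight (scaled)
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse", metrics=["mse"])

early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=50, batch_size=256, callbacks=[early_stop])
```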


Table 2 Network training results
Model  MSE training  Training loss  MSE validation  Validation loss
1      0.0052        0.0031         0.00742         0.0984
2      0.0155        0.0042         0.2037          0.2209
3      0.0058        0.0039         0.1492          0.1342
4      0.0060        0.0041         0.0994          0.1419
5      0.0101        0.0042         0.2329          0.1642

Figure 5 shows the predicted data for the first 15 days of December: the real weight is shown in black and the best prediction model in green, with a weight difference of less than 500 g. This model is configured with the linear activation function and the Adam optimizer and stays close to the real values, whereas the other models differ by more than 1 kg (Fig. 5). Once the best model was determined, training and validation were performed with the 12 months of the year in order to predict the first 15 days of January; in other words, the month of December becomes part of the validation set. The losses obtained are plotted with the loss function, showing the loss for training and validation during each of the 50 cycles or epochs. This decreasing loss means that the model is learning in each epoch: the first training epoch shows a loss of 0.12, while the last cycles reach 0.082. See Fig. 6.

Fig. 5 December prediction results (15 days)

Fig. 6 Losses for training and validation


Fig. 7 January prediction results

In Fig. 7 the blue line represents the real weight of the beehive, and the yellow line the training and validation predictions, with the last 20% of the real data (about 2.4 months) used for validation; the validation data are not seen by the model during training, so they tend to show a higher loss than the training data. The red line represents the future predictions, in this case for the first fifteen days of January of the following year. It shows a downward trend, which indicates that the bees are consuming the food they stored for winter, since they cannot go out to collect pollen and nectar.

4 Conclusions It should be noted that the learning in this model is supervised, since the model can validate its predictions, even though there is no ideal curve for the behavior of the beehive weight. This ideal growth is different for each beehive and depends on many factors such as geographical location, climate during the year, the flora present in the area where the beehive is located, environmental pollution, etc. Another important aspect to consider when determining productivity is when the harvest of the beehive is recorded and how the beehive recovers its weight for the next harvest; the database used in this project does not record the beekeepers' harvest. This lack of information motivates further research in this area in order to obtain, in the future, more information about the behavior of each beehive, conserve the colonies and take advantage of the honey and other products [10]. Acknowledgements This paper is a contribution under the program "Colombia Científica - Pasaporte a la ciencia", solution of the focus Society, Challenge 2: Social innovation for economic development and productive inclusion, thematic Productivity, Specialized and quality production.


References
1. Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. In: International conference on machine learning. PMLR, pp 1310–1318
2. Pollastri G et al (2002) Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins Struct Funct Bioinform 47(2):228–235
3. Moharm K, Eltahan M, Elsaadany E (2020) Wind speed forecast using LSTM and Bi-LSTM algorithms over Gabal El-Zayt wind farm. In: 2020 International conference on smart grids and energy systems (SGES). IEEE, pp 922–927
4. Chen Y et al (2016) LSTM networks for mobile human activity recognition. In: 2016 International conference on artificial intelligence: technologies and applications. Atlantis Press, pp 50–53
5. Han P et al (2021) Fault prognostics using LSTM networks: application to marine diesel engine. IEEE Sens J 21(22):25986–25994
6. Lv Q-J et al (2019) A multi-task group Bi-LSTM networks application on electrocardiogram classification. IEEE J Transl Eng Health Med 8:1–11
7. Carla B (2019) Beehive metrics. https://www.kaggle.com/datasets/se18m502/bee-hive-metrics. Accessed 02 Sept 2022
8. Sinha N (2022) Understanding LSTM and its quick implementation in Keras for sentiment analysis. https://towardsdatascience.com/understanding-lstm-and-its-quick-implementation-in-keras-for-sentiment-analysis-af410fd85b47. Accessed 02 Sept 2022
9. Abadi M et al (2016) TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467
10. Seritan GC et al (2018) Low cost platform for monitoring honey production and bees health. In: 2018 IEEE international conference on automation, quality and testing, robotics (AQTR). IEEE, pp 1–4

Criteria Selection and Decision-Making Support in IT Governance: A Study via Fuzzy AHP Applied to a Multi-institutional Consortium José Fábio de Oliveira , Paulo Evelton Lemos de Sousa , and Ana Carla Bittencourt Reis

Abstract The formation of consortia between public and private institutions to carry out projects with a common objective is a usual practice in Brazil. Due to the different purposes of these institutions, divergences among decision-makers regarding the prioritization of actions may occur with some frequency. Within this context, this work seeks to minimize the impact of subjectivity in the prioritization of actions of a multi-institutional consortium, formed by public and private entities, with the support of fuzzy logic and multi-criteria decision analysis. Criteria and decision alternatives were identified and analyzed with the support of experts. At the end of the hierarchization, it was possible to present the top management of the consortium with a proposal of ranked alternatives for optimizing the use of financial resources targeting IT Governance actions. Keywords Fuzzy AHP · Public and private institutions consortium

J. F. de Oliveira (B) · P. E. L. de Sousa (B) · A. C. B. Reis (B) Graduate Program in Applied Computing, University of Brasília - UnB, PPCA, University Campus Darcy Ribeiro - Asa Norte, CEP: 70910-900, Brasília, DF, Brazil e-mail: [email protected] P. E. L. de Sousa e-mail: [email protected] A. C. B. Reis e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_28


1 Introduction In Brazil, it is common for public and private entities, as long as they have common goals, to form associations (consortia) to carry out projects. However, it is important to emphasize the existence of laws and codes which delimit the creation and performance of each one. A good example is Law 11.107, Art. 1, § 1º,1 which outlines the possibility of a public consortium being constituted by a public association or legal entity governed by private law. Law 11,079 (Law 11,079, 2004)2 defines the general rules when the consortium is formed by public institutions and a legal entity governed by private law. The Brazilian civil code defines that there are legal entities responding to public and/or private law. Public law is formed by Federal, State and Municipal government administrations. The private aspect is formed by associations, societies, foundations, religious organizations, political parties and individual limited liability companies. In this scenario and with common purposes—which include the integration of data related to education, science, technology and innovation—public and private institutions created a multi-institutional consortium, the target of the present study. Public entities are responsible for promoting teaching, research and scientific production, while private institutions are responsible for the digital publication of scientific journals and advanced information technology networks. As one of its objectives, the consortium envisages developing a set of data services (Digital Service Platform), which provides the organizations that belong and are related to it (universities, institutes, among others) with accurate data about the different agents, objects and products related to higher education, science, technology and innovation (authors, theses, grants and funding, projects, scientific articles, professors and graduate programs, among many others). The ambition underlying the creation of the Platform is to try to reduce ambiguities and conflicts between data used by different organizations in their IT systems to represent the same entities, such as an individual or a project. With this, a significant gain in precision and efficiency in the services provided by the organizations belonging to the consortium is expected, as well as a reduction in the efforts and costs of development and maintenance of systems and databases. A single and common source of accurate data allows the allocation of resources to the development of new processes and services not yet available. In its first phase, actions inherent to IT corporate governance will be considered. The second stage consists of actions which involve development standards and IT infrastructure. The third stage involves strategic actions by the consortium. However, there is a need to define which governance actions will be carried out first, as there are budgetary and human resources limitations to carry out a project of such magnitude, impact and importance for academia and civil society. In this universe, the relationship between entities can be problematic due to the conflict of 1

LAW 11,107. Available at http://www.planalto.gov.br/ccivil_03/_ato2004-2006/2005/lei/l11107. htm. Consulted on 04/09/2022. 2 LAW 11,079. Available at http://www.planalto.gov.br/ccivil_03/_ato2004-2006/2004/Lei/L11 079.htm. Consulted on 04/09/2022.


interests between public affairs and the interest of private subjects, as indicated by [1], as it is perceived that, in the area of education, there is a notable clash—it would intervene in the public sphere, through state funding converging with public policies, when referring to its relations with private collaborators, according to [2]. Another factor to be considered is the misguided assumption that the public entity would be subordinate in relation to the private one, as both finance the project. With this diffuse scenario, decision-making regarding the prioritization of actions becomes conflicting and difficult to carry out, as each decision-maker can prioritize the action that best suits their reality and provides an individual advantage. In this context, the proposal of this work addresses the application of Fuzzy logic with the Analytic Hierarchy Process—FAHP method to order criteria and alternatives of a project composed of a multi-institutional consortium of higher education, science, technology and innovation (ECTI) nationally, in Brazil, regardless of political principles, subjective factors, cultural influences or particular conveniences. Multi-criteria decision analysis—MCDA is a methodology that helps to make decisions mainly in terms of choice, classification or classification of choices [3]. Among the various existing prioritization methods, the AHP, created by mathematician Professor Saaty [4] stands out. Considered simple, the method is widely used in several sectors, such as education [5] and [6] and government [7, 8] and [9]. Thus, with the popularization and use of AHP, some points of attention in relation to its application have become known. The authors [10] highlighted that the method brings with it uncertainties, doubts and hesitation in the subjective judgments adopted by decision-makers. However, these peculiarities can be minimized with the support of the application of Fuzzy logic. As highlighted by [11], fuzzy set theory has been widely used in conjunction with AHP, as it allows decisionmakers to make interval judgments and consider uncertainties or inaccuracies. The fuzzy set algebra was created by [12] and represents the formal body of the theory that makes it possible to handle estimates, which can be imprecise and vague, in environments of predominance of uncertainty.

2 Theoretical Reference In view of the proposed approach raised in this study, some themes stood out in the composition of the content framework that support the methodology presented later. Scenarios of public-public or public–private partnerships or consortia, such as the one here discussed in the study, reflects an ever-increasing reality for projects in the most diverse areas of activity, becoming a solution that drives several countries, regions and their respective managers. Such partnerships involve public entities at different levels of the State (Union, states and municipalities) and powers (executive, legislative and judiciary). This is reported by studies such as [13] that addressed the need for governments to solve sanitary landfill treatment problems through consortia of municipalities in order to achieve economic scale with a consequent reduction in the cost of treatment. In addition to public-public partnerships, several studies deal


with consortia or public–private partnerships, also known as PPPs. What is perceived is that public–private cooperation at the level of project financing and the provision of large-scale infrastructure projects are increasing at the global level [14]. In-depth studies on PPPs in countries with a strong economy such as China, such as the one developed by [15] have gained repute, especially since, as they themselves claim, public–private partnerships (PPP) have been widely used in China to acquire facilities and perform public services. In the investigation proposed by [16] the interest focused on developing weights of key performance areas (KPAs) and performance indicators for public–private partnerships (PPPs) in Bangladesh using the analysis of the relative importance of indicators that influence the performance score of specific projects from the developing country’s perspective. Regarding the consortium and due to its multidisciplinary topology of the composition of the group of specialists/decision-makers of each institution forming the project, another indispensable basis to be sought in the literature is group decision. [17] approached group decision-making (GDM) with paired comparisons in the Analytical Hierarchy Process—AHP to arrive at a meaningful and reliable solution when considering individual consistency and group consensus in the decision process. In view of this, based on the theoretical frameworks presented so far and considering the organizational context established between the bodies that make up the consortium, the object of this study, it is considered that the use of multicriteria analysis (MCDA), in particular AHP supported by Fuzzy logic—FAHP—be the most appropriate methodology for the problem situation: one of the most referred to alternatives in view of the need for decision in projects such as the one described here and where fields of subjectivity still exert a strong impact.

3 Methodology The proposed methodology to prioritize governance actions adopted an adaptation of the model proposed by [18]. In this context, a methodological proposal composed of phases 1 to 4 was developed, as shown in Fig. 1, as well as approaches related to the elaboration of questionnaires and paired assessments defined in the FAHP method. The next steps detail the phases.

3.1 Business Analysis The business analysis phase aims to understand the structure and needs, identify the main actors involving the consortium, analyze the requirements of corporate IT governance and define the criteria and alternatives. Phase 1 consists of 6 tasks. The first (a) refers to the identification of specialists, among the various technical components of the consortium, in addition to partnerships with Higher Education


Fig. 1 Applied methodology for prioritizing criteria and alternatives. Source [18] with adaptations

Institutions. The next task (b) aligns the business needs with the expectations of sponsors and key stakeholders. Thus, top management and the main interlocutors can visualize the actions in a standardized way. The analysis of the hierarchy problem (c) refers to the need to understand the motivational factors, which involve decisionmaking and the need to apply multicriteria methods. The next step, analysis of IT corporate governance (d), ensures the consortium’s compliance with the ISO/IEC 38,500 standard (ABNT, 2018).3 The following tasks involve actions to identify and define (e) the criteria and (f) alternatives to be developed in the following phases (due to the sensitivity involved in this task, a section was reserved to explain the generation process).

3.2 Preparation Based on the result of the business analysis, the second phase, preparation, is responsible for the modeling and application of the questionnaire, a tool used to collect the insights of specialists in relation to the magnitude of each criterion and alternative. This phase consists of two tasks. The elaboration and validation of the questionnaires are aimed at creating the mechanism for collecting the experts’ perceptions. Perception is understood as the technical point of view, that is, the degree of prominence of that criterion/alternative under individual technical analysis. In the next task, the researchers define the weights to be assigned to the criteria and alternatives, with adaptation of [4], in order to carry out the paired evaluation of comparison between the criteria and alternatives.

3

ABNT NBR ISO/IEC 38.500: Information Technology – IT Governance for the organization. Rio de Janeiro, Brazilian Association of Technical Standards, 2018.


3.3 Application of the Questionnaire for Data Collection The activity of application of the questionnaires is responsible for providing the physical means, hardware, peripherals and software used in the application of the questionnaires. The next step consists not only of data collection, but also of receiving the final result of the application of the questionnaires and being responsible for compiling the paired analysis of the criteria. This step is performed based on the judgments of experts, who attribute relative importance to the criteria. The main parameter used to arrive at the weights of the criteria is the fundamental scale developed by [4], which consists of a series of values, from “equal importance” to “extreme importance”.

3.4 Fuzzy AHP Application—Results The last step of the process involves the application of the FAHP (Fuzzy Analytic Hierarchy Process), as well as providing the analysis of the achieved results. The FAHP method is a combination of the application of Fuzzy logic with the AHP method, proposed by [4]. Both approaches (AHP and FAHP) apply pairwise comparison for ordering criteria and alternatives. The FAHP has various references in the literature, and among them it is possible to highlight the applications proposed by [19]—Fuzzy Priority Method. [20]—Geometric Mean Method. [21]—Extension analysis method. [22]—Method of Fuzzy Preference Programming and [23]—Fuzzy Prioritization Method. This article adopted the method proposed by [21] to perform the analysis of matrices (criteria and alternatives) derived from numbers from the application of Fuzzy. Thus, for the application of the methodology, calculations are performed to identify the degree of pertinence, synthetic extensions, degrees of possibility and vector of priorities, respectively. The conclusion presents the final analysis of the calculations and perceptions related to the multicriteria problem, in addition to suggesting future approaches for the application of other methods or analyzes not considered in this article.

4 Application of the Methodology 4.1 Selection of Criteria and Alternative To carry out the selection of criteria and alternatives, Phases 1 and 2 of the methodology were completed. It is essential to highlight that the first phase of the Digital Platform development project aims to develop actions aimed at subsidizing and developing IT corporate governance and with the foundations supported by the ISO/IEC 38,500 standard, which presented greater synergy to achieve stakeholder objectives.


Decision criteria were extracted from the principles of this standard. It is important to highlight that the proposed criteria are aligned with the core activities of all consortium members (research, scientific data, article data, publication sharing, data from higher education institutions and promotion of education). That said, the five criteria that make up the process of ordering governance actions are C1: Responsibility, C2: Strategy, C3: Performance, C4: Compliance and C5: Human Behavior, and the assessment and decision alternatives are A1: Roles and Responsibilities, A2: Sustainability, A3: Processes, A4: Legislation and A5: Communication Plan.

4.2 Application of Fuzzy AHP When considering the complexity of the case study and based on the consensual decision of the group of experts, a list of criteria and alternatives was obtained. Based on these definitions, the evaluation questionnaire was applied and the consensual opinion of the collegiate body was extracted with its respective scores, thus concluding Phases 1, 2 and 3. These scores correspond to the element "m" of the triangular fuzzy number, adjusted according to the scale of [4]; a degree of fuzzification (δ) of 1.0 was applied in this study. The criteria's fuzzy paired-comparison matrix resulted in the data in Table 1, and Table 2 presents the fuzzy paired-comparison matrix of the alternatives in light of criterion C1; criteria C2, C3, C4 and C5 follow the same logic (Tables 3, 4, 5 and 6). In this way, the fuzzy synthetic extents (S) of the criteria matrix could be compared with each other: S1 = (0.074, 0.157, 0.338); S2 = (0.132, 0.294, 0.596); S3 = (0.118, 0.245, 0.497); S4 = (0.113, 0.235, 0.517) and S5 = (0.038, 0.069, 0.159). Based on these synthetic extents, it is possible to obtain the degree of possibility V(M2 ≥ M1), according to Table 7 below. From the values obtained we arrive at the vector W'c = [d(C1), d(C2), d(C3), d(C4), d(C5)], which, normalized, results in the vector of weights Wc below:

Table 1 Aggregate matrix of fuzzy comparisons of criteria

     C1               C2               C3               C4               C5
C1   (1, 1, 2)        (1/3, 1/2, 1)    (1/4, 1/3, 1/2)  (1/3, 1/2, 1)    (2, 3, 4)
C2   (1, 2, 3)        (1, 1, 2)        (2, 3, 4)        (1, 1, 2)        (2, 3, 4)
C3   (2, 3, 4)        (1/4, 1/3, 1/2)  (1, 1, 2)        (1, 1, 2)        (2, 3, 4)
C4   (1, 2, 3)        (1, 1, 2)        (1, 1, 2)        (1, 1, 2)        (2, 3, 4)
C5   (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)  (1, 1, 2)
Source: Own authorship


Table 2 Aggregate matrix of fuzzy comparisons of alternatives when considering criterion 1 (C1)
C1   A1               A2               A3               A4               A5
A1   (1, 1, 2)        (2, 3, 4)        (1/5, 1/4, 1/3)  (1/4, 1/3, 1/2)  (2, 3, 4)
A2   (1/4, 1/3, 1/2)  (1, 1, 2)        (1, 1, 2)        (1/4, 1/3, 1/2)  (2, 3, 4)
A3   (3, 4, 5)        (1, 1, 2)        (1, 1, 2)        (1/5, 1/4, 1/3)  (2, 3, 4)
A4   (2, 3, 4)        (2, 3, 4)        (3, 4, 5)        (1, 1, 2)        (4, 5, 6)
A5   (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)  (1/6, 1/5, 1/4)  (1, 1, 2)

Source: Own authorship

Table 3 Aggregate matrix of fuzzy comparisons of alternatives when considering criterion 2 (C2)
C2   A1               A2               A3               A4               A5
A1   (1, 1, 2)        (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)  (1/3, 1/2, 1)    (1, 2, 3)
A2   (2, 3, 4)        (1, 1, 2)        (1/3, 1/2, 1)    (1, 1, 2)        (2, 3, 4)
A3   (2, 3, 4)        (1, 2, 3)        (1, 1, 2)        (1/4, 1/3, 1/2)  (1/4, 1/3, 1/2)
A4   (1, 2, 3)        (1, 1, 2)        (2, 3, 4)        (1, 1, 2)        (2, 3, 4)
A5   (1/3, 1/2, 1)    (1/4, 1/3, 1/2)  (2, 3, 4)        (1/4, 1/3, 1/2)  (1, 1, 2)

Source: Own authorship

Table 4 Aggregate matrix of fuzzy comparisons of alternatives when considering criterion 3 (C3)
C3   A1               A2               A3               A4               A5
A1   (1, 1, 2)        (2, 3, 4)        (1/3, 1/2, 1)    (1/4, 1/3, 1/2)  (1, 1, 2)
A2   (1/4, 1/3, 1/2)  (1, 1, 2)        (1/4, 1/3, 1/2)  (1, 1, 2)        (1/3, 1/2, 1)
A3   (1, 2, 3)        (2, 3, 4)        (1, 1, 2)        (1, 2, 3)        (2, 3, 4)
A4   (2, 3, 4)        (1, 1, 2)        (1/3, 1/2, 1)    (1, 1, 2)        (1, 2, 3)
A5   (1, 1, 2)        (1, 2, 3)        (1/4, 1/3, 1/2)  (1/3, 1/2, 1)    (1, 1, 2)

Source: Own authorship

Table 5 Aggregate matrix of fuzzy comparisons of alternatives when considering criterion 4 (C4)
C4   A1               A2               A3               A4               A5
A1   (1, 1, 2)        (2, 3, 4)        (1/5, 1/4, 1/3)  (1, 2, 3)        (2, 3, 4)
A2   (1/4, 1/3, 1/2)  (1, 1, 2)        (1/3, 1/2, 1)    (1, 1, 2)        (1/3, 1/2, 1)
A3   (3, 4, 5)        (1, 2, 3)        (1, 1, 2)        (1/4, 1/3, 1/2)  (3, 4, 5)
A4   (1/3, 1/2, 1)    (1, 1, 2)        (2, 3, 4)        (1, 1, 2)        (3, 4, 5)
A5   (1/4, 1/3, 1/2)  (1, 2, 3)        (1/5, 1/4, 1/3)  (1/5, 1/4, 1/3)  (1, 1, 2)
Source: Own authorship


Table 6 Aggregate matrix of fuzzy comparisons of alternatives when considering criterion 5 (C5)
C5   A1               A2               A3               A4               A5
A1   (1, 1, 2)        (2, 3, 4)        (1, 2, 3)        (2, 3, 4)        (1, 2, 3)
A2   (1/4, 1/3, 1/2)  (1, 1, 2)        (1, 1, 2)        (1, 1, 2)        (1/3, 1/2, 1)
A3   (1/3, 1/2, 1)    (1, 1, 2)        (1, 1, 2)        (1, 2, 3)        (1, 2, 3)
A4   (1/4, 1/3, 1/2)  (1, 1, 2)        (1/3, 1/2, 1)    (1, 1, 2)        (2, 3, 4)
A5   (1/3, 1/2, 1)    (1, 2, 3)        (1/3, 1/2, 1)    (1/4, 1/3, 1/2)  (1, 1, 2)
Source: Own authorship

Table 7 Degree of possibility

Source Own authorship
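The extent analysis calculation of [21] on the Table 1 matrix can be reproduced with a short script. The sketch below is illustrative rather than the authors' implementation: it encodes each judgment as a triangular fuzzy number and computes the synthetic extents, degrees of possibility and normalized weights; the printed S values should agree with S1-S5 quoted above, and the resulting weights are consistent with C2, C3 and C4 standing out.

```python
import numpy as np

# Triangular fuzzy numbers (l, m, u) of Table 1; rows/columns ordered C1..C5
M = np.array([
    [(1,1,2), (1/3,1/2,1), (1/4,1/3,1/2), (1/3,1/2,1), (2,3,4)],
    [(1,2,3), (1,1,2),     (2,3,4),       (1,1,2),     (2,3,4)],
    [(2,3,4), (1/4,1/3,1/2),(1,1,2),      (1,1,2),     (2,3,4)],
    [(1,2,3), (1,1,2),     (1,1,2),       (1,1,2),     (2,3,4)],
    [(1/4,1/3,1/2),(1/4,1/3,1/2),(1/4,1/3,1/2),(1/4,1/3,1/2),(1,1,2)],
])

row = M.sum(axis=1)       # fuzzy row sums (l, m, u)
total = row.sum(axis=0)   # grand sum over all cells
# Synthetic extents S_i = row_i (x) total^(-1): l/total_u, m/total_m, u/total_l
S = np.column_stack([row[:, 0] / total[2], row[:, 1] / total[1], row[:, 2] / total[0]])

def possibility(a, b):
    """Degree of possibility V(a >= b) for triangular fuzzy numbers a, b = (l, m, u)."""
    if a[1] >= b[1]:
        return 1.0
    if b[0] >= a[2]:
        return 0.0
    return (b[0] - a[2]) / ((a[1] - a[2]) - (b[1] - b[0]))

d = np.array([min(possibility(S[i], S[j]) for j in range(len(S)) if j != i)
              for i in range(len(S))])
weights = d / d.sum()        # normalized criteria weight vector Wc
print(np.round(S, 3))        # should reproduce S1..S5 from Sect. 4.2
print(np.round(weights, 3))
```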

In a similar way, calculations and pairwise comparisons were performed between each of the alternatives, considering each criterion separately. The results are announced below:

After applying the weights, the final ordering of the alternatives, which represent the actions inherent to IT governance, was reached, as shown in Fig. 2 together with the criteria and their respective alternatives. In summary, the criteria Strategy (C2), Performance (C3) and Compliance (C4) stood out from the others. From this weighting of the criteria it was possible to obtain the results (percentages) for each action (alternative). The result of the FAHP analysis is represented in Fig. 2: the percentage of preference for the alternatives follows the descending order Legislation (35.9%), Processes (31.8%), Roles and Responsibilities (13.7%), Communication Plan (10.3%) and Sustainability (8.4%).


Fig. 2 Criteria and alternatives tree. Source Own authorship.

5 Conclusions The objective of this research was to present the top management of the consortium with a proposal of ranked alternatives for optimizing the use of financial resources targeting IT Governance actions. The consortium plans to create a data integration platform related to education, science, technology and innovation (ECTI) at the national level. Recent literature shows case studies grounded in public-public or public–private partnerships; however, the consortium ecosystem presented here is a configuration still little explored in the literature. The result of applying the fuzzy AHP method provided concrete inputs, free of political, cultural or organizational biases, for strategic decision-making, focusing on the best alternative in a situation where regulatory frameworks aimed at data protection are beginning to be enforced. Thus, the practical action taken was to hire a consultancy specializing in personal data protection, in line with the Legislation alternative. In the multidisciplinary context of the research funding institutions that make up the consortium, the diversity of autonomy and legal constitution proved to be a limitation of the work, since the consortium, due to its public–private characteristics, is governed by several Brazilian laws [24]. Starting from the chosen alternative (Legislation), identifying the main IT asset and proposing a risk management model in the exposed context, in addition to expanding the range of applications of the approach, might prove interesting for future work.


References
1. Meirelles HL, Filho JEB (2016) Direito administrativo brasileiro, 42ª edição, atualizada até a Emenda Constitucional 90, de 15.9.2015, p 52
2. Graef A, Salgado V (2012) Relações de parceria entre poder público e entes de cooperação e colaboração no Brasil, p 33. ISBN 978-85-64478-05-3
3. Figueira J, Greco S, Ehrgott M (2005) Multiple criteria decision analysis: state of the art surveys, p 23
4. Saaty TL (1980) The analytic hierarchy process. McGraw-Hill, New York
5. Muhammad A, Shaikh A, Naveed QN, Qureshi MRN (2020) Factors affecting academic integrity in e-learning of Saudi Arabian universities. An investigation using Delphi and AHP. IEEE Access 8(8962034):16259–16268. https://doi.org/10.1109/ACCESS.2020.2967499
6. Vaídya OS, Kumar S (2006) Analytic hierarchy process: an overview of application. Eur J Oper Res 169(1):1–29
7. Chen L, Deng X (2018) A modified method for evaluating sustainable transport solutions based on AHP and Dempster-Shafer evidence theory. Appl Sci 8(4):563. https://doi.org/10.3390/app8040563
8. Rana NP, Luthra S, Mangla SK, Islam R, Roderick S, Dwivedi YK (2019) Barriers to the development of smart cities in Indian context. Inf Syst Front 21(3):503–525. https://doi.org/10.1007/s10796-018-9873-4
9. Saaty TL, de Paola P (2017) Rethinking design and urban planning for the cities of the future. Buildings 7(3):76. https://doi.org/10.3390/buildings7030076
10. Tang Y, Beynon M (2005) Application and development of a fuzzy analytic hierarchy process within a capital investment study. J Econ Manag 1(2):207–230
11. Emrouznejad A, Ho W (2018) Fuzzy analytic hierarchy process, p 5. ISBN 9781498732468 (LCCN 2017011269)
12. Zadeh L (1965) Fuzzy sets. Inf Control 8(3):338–353
13. Spigolon LMG, Giannotti M, Larocca AP, Russo MAT, Souza NDC (2018) Landfill siting based on optimisation, multiple decision analysis, and geographic information system analyses. Waste Manag Res 36(7):606–615. https://doi.org/10.1177/0734242X18773538
14. Aerts G, Grage T, Dooms M, Haezendonck E (2014) Public-private partnerships for the provision of port infrastructure: an explorative multi-actor perspective on critical success factors. Asian J Shipping Logist 30(3):273–298. https://doi.org/10.1016/j.ajsl.2014.12.002
15. Li Y, Wang X (2016) Risk assessment for public–private partnership projects: using a fuzzy analytic hierarchical process method and expert opinion in China. J Risk Res. https://doi.org/10.1080/13669877.2016.1264451
16. Hossain M, Guest R, Smith C (2019) Performance indicators of public private partnership in Bangladesh: an implication for developing countries. Int J Product Perform Manag 68(1):46–68. https://doi.org/10.1108/IJPPM-04-2018-0137
17. Wu Z, Xu J (2012) A consistency and consensus based decision support model for group decision making with multiplicative preference relations. Decis Support Syst 52(3):757–767
18. Belton V, Stewart J (2002) Multiple criteria decision analysis: an integrated approach. Kluwer Academic Publishers, London
19. van Laarhoven PJM, Pedrycz W (1983) A fuzzy extension of Saaty's priority theory. Fuzzy Sets Syst 11(1–3):229–241. https://doi.org/10.1016/s0165-0114(83)80082-7
20. Buckley JJ (1985) Fuzzy hierarchical analysis. Fuzzy Sets Syst 17(3):233–247. https://doi.org/10.1016/0165-0114(85)90090-9
21. Chang DY (1996) Applications of the extent analysis method on fuzzy AHP. Eur J Oper Res 95:649–655. https://doi.org/10.1016/0377-2217(95)00300-2



22. Mikhailov L (2000) A fuzzy programming method for deriving priorities in the analytic hierarchy process. J Oper Res Soc 51:341–349. https://doi.org/10.1057/palgrave.jors.2600899 23. Mikhailov L (2003) Deriving priorities from fuzzy pairwise comparison judgements. Fuzzy Sets Syst 134:365–385. https://doi.org/10.1016/S0165-0114(02)00383-4 24. Tribunal de Contas da União (2015) TCU, Secretaria de Fiscalização de Tecnologia da Informação, Brasília

Machine Learning Model Optimization for Energy Efficiency Prediction in Buildings Using XGBoost Giancarlo Sanchez-Atuncar , Victor Manuel Cabrejos-Yalán , and Yesenia del Rosario Vasquez-Valencia

Abstract Machine Learning is a field of Artificial Intelligence that has become central to building intelligent systems. The goal is to obtain a model with high accuracy, which is especially important in energy optimization applications such as the energy performance of buildings (EPB). Reflecting growing concerns about energy waste and its environmental impact, reports indicate that building energy consumption has increased worldwide over the past decades. Our goal is to create a model based on Extreme Gradient Boosting (XGBoost) capable of predicting the heating load (HL) and cooling load (CL) of a building, so that the heating and cooling equipment needed to maintain comfortable indoor air conditions can be specified and the building can be designed for more sustainable energy consumption. The alternative, building energy simulation software, is very time-consuming; a machine learning solution offers the distinct advantage of extremely fast predictions once the model is trained. We were able to create an XGBoost regressor with an R2 score of 0.99. Keywords XGBoost · building energy evaluation · machine learning

G. Sanchez-Atuncar (B) · V. M. Cabrejos-Yalán · Y. del Rosario Vasquez-Valencia Cesar Vallejo University, Av. Alfredo Mendiola 6232, Lima, Peru e-mail: [email protected] V. M. Cabrejos-Yalán e-mail: [email protected] Y. del Rosario Vasquez-Valencia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_29




1 Introduction

According to the ‘2020 Global Status Report for Buildings and Construction’, in 2019 residential buildings accounted for 22% of energy consumption and 17% of emissions, and therefore have a large impact on climate change [1], see Fig. 1. Moreover, the increase in emissions from the buildings sector is due to the continued use of coal, oil and natural gas for heating and cooking, combined with higher activity levels in regions where electricity remains carbon-intensive, resulting in a steady level of direct emissions. Electricity used in building operations represents nearly 55% of global electricity consumption. These reports underline the importance of effective management and control of building energy consumption. Machine learning can improve our understanding of the factors that affect the quantities of interest that a building designer or architect may wish to focus on. For example, a Deep Learning model known as a Deep Belief Network (DBN) was used to improve decision-making aimed at lowering energy use in residential buildings, achieving an R2 score of 96.28 [2]. However, when working with tabular data, such as the energy consumption dataset used in this study, a more traditional machine learning algorithm, the Extreme Gradient Boosting (XGBoost) tree ensemble model, has been shown to outperform deep learning models across several datasets [3]. Another benefit of using XGBoost instead of a Deep Learning model is that it requires much less tuning, to the point where the default XGBoost parameters are often the best solution, and it does not depend on software optimization [4]. Other machine learning algorithms, such as decision trees, random forests and K-nearest neighbors (KNN), have been used in energy performance of buildings (EPB) studies with moderate success, with random forests performing best by achieving a mean squared error (MSE) of 0.462 [5]. Other techniques, such as polynomial regression and support vector machines (SVM), have also been explored to predict various quantities of interest in the context of EPB. Machine learning tools

Fig. 1 Global share of buildings and construction final energy and emissions, 2019



have also been explicitly used in predicting Heating Load (HL) and Cooling Load (CL). Catalina et al. [6] used polynomial regression (including up to quadratic terms) to predict monthly heating demand for residential buildings, focused on the influence of raised floor, structure type, window-to-wall ratio and the presence of carpet to determine CL for different zones, and reported that orientation and the presence of carpet are the most important predictors. Li et al. [7] forecast hourly building CL based mainly on preceding environmental parameters. Of particular interest to this study, HL and CL have been associated with variables such as relative compactness (RC), climate, surface area, wall area, and roof area, orientation, and glazing. The rationale for studying these variables is that designers and engineers have found that they are correlated with energy performance, and HL and CL in particular.

2 Methodology

We propose the following method to build an optimized XGBoost regressor able to predict heating and cooling loads. The process consists of the following stages: data ingestion, data pre-processing, building the model, and evaluation (results).

2.1 Data Ingestion

The dataset is composed of data extracted from building energy simulation tools, using 12 different building shapes simulated in the Ecotect software, and was created at the Oxford Centre for Industrial and Applied Mathematics, University of Oxford, UK [8]. The buildings differ with respect to the glazing area, the glazing area distribution, and the orientation, amongst other parameters. Various settings were simulated as functions of the afore-mentioned characteristics to obtain 768 building shapes. The dataset comprises 768 samples and 8 features, aiming to predict two real-valued responses; it can also be used as a multi-class classification problem if the response is rounded to the nearest integer. The eight features are: Relative Compactness, Surface Area, Wall Area, Roof Area, Overall Height, Orientation, Glazing Area and Glazing Area Distribution. The two responses are Heating Load and Cooling Load. The aim is to use the eight features to predict each of the two responses.
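As a minimal illustration of this stage, the snippet below loads such a table with pandas; the file name and the column names are assumptions about how the exported data might be organized, not part of the original study.

```python
import pandas as pd

# Hypothetical export of the 768 simulated buildings with 8 features and 2 responses.
df = pd.read_csv("energy_efficiency.csv")

features = ["relative_compactness", "surface_area", "wall_area", "roof_area",
            "overall_height", "orientation", "glazing_area", "glazing_area_distribution"]
targets = ["heating_load", "cooling_load"]

X, y = df[features], df[targets]
print(X.shape, y.shape)   # expected: (768, 8) (768, 2)
```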



Table 1 First five rows of raw dataset

| Relative compactness | Surface area | Wall area | Roof area | Overall height | Orientation | Glazing area | Glazing area distribution | Heating load | Cooling load |
|---|---|---|---|---|---|---|---|---|---|
| 0.98 | 514.5 | 514.5 | 110.25 | 7.0 | 2 | 0.00 | 0.00 | 15.55 | 21.33 |
| 0.98 | 514.5 | 514.5 | 110.25 | 7.0 | 3 | 0.00 | 0.00 | 15.55 | 21.33 |
| 0.98 | 514.5 | 514.5 | 110.25 | 7.0 | 4 | 0.00 | 0.00 | 15.55 | 21.33 |
| 0.98 | 514.5 | 514.5 | 110.25 | 7.0 | 5 | 0.00 | 0.00 | 15.55 | 21.33 |
| 0.90 | 563.5 | 563.5 | 122.50 | 7.0 | 2 | 0.00 | 0.00 | 20.84 | 28.28 |

Table 2 First five rows of training scaled dataset

| Relative compactness | Surface area | Wall area | Roof area | Overall height | Orientation | Glazing area | Glazing area distribution |
|---|---|---|---|---|---|---|---|
| 2.041 | −1.785 | −0.561 | −1.470 | 1.0 | −1.341 | −1.760 | −1.814 |
| 2.041 | −1.785 | −0.561 | −1.470 | 1.0 | −0.447 | −1.760 | −1.814 |
| 2.041 | −1.785 | −0.561 | −1.470 | 1.0 | 0.447 | −1.760 | −1.814 |
| 2.041 | −1.785 | −0.561 | −1.470 | 1.0 | 1.341 | −1.760 | −1.814 |
| 1.284 | −1.229 | 0.000 | −1.198 | 1.0 | −1.341 | −1.760 | −1.814 |

2.2 Data Pre-processing

Data pre-processing starts by reviewing whether the data distribution needs to be scaled to something close to a Normal distribution in order to optimize the performance of the machine learning algorithms [9]. Table 1 shows the first five rows of the dataset. Since the features are on very different scales, we scale the eight training features using the Scikit-Learn StandardScaler function [10], see Table 2. After scaling the training features, we create the training/test partition, using 33% of the data for testing.
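A minimal sketch of this step with scikit-learn is shown below, continuing the earlier snippet (X holds the eight features and y the two load columns); the random_state value is an arbitrary choice added here for reproducibility.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Scale the eight input features to zero mean and unit variance,
# then hold out 33% of the samples for testing, as described above.
X_scaled = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.33, random_state=42)
```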

2.3 Building the Model

In this stage we compare seven different machine learning regressors against our XGBoost model. To do so we use Scikit-Learn's implementations of Linear Regression, Support Vector Regression (SVR), the Decision Tree Regressor, the Random Forest Regressor, the Multilayer Perceptron (MLP) Regressor and the Adaptive Boosting (AdaBoost) Regressor.
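A possible comparison loop, continuing the previous snippets, is sketched below; one regressor is fitted per response column, and the column names ("heating_load", "cooling_load") follow the assumed names introduced earlier, not names fixed by the paper.

```python
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

models = {
    "Linear Regression": LinearRegression(),
    "SVR": SVR(),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "MLP": MLPRegressor(max_iter=2000, random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "XGBoost": XGBRegressor(),
}

# Fit every regressor separately for each response and report its test R2 score.
for target in ["heating_load", "cooling_load"]:
    for name, model in models.items():
        model.fit(X_train, y_train[target])
        print(target, name, round(model.score(X_test, y_test[target]), 3))
```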

Table 3 XGBoost parameters

| Parameter | Value |
|---|---|
| booster | gbtree |
| validate_parameters | True |
| nthread | Maximum |
| eta | 0.3 |
| gamma | 0 |
| max_depth | 6 |
| min_child_weight | 1 |
| lambda | 1 |

2.4 Model Tuning

For our XGBoost model we selected the parameters shown in Table 3: the model is set to validate the input parameters (checking whether each parameter is actually used) and to use the maximum number of parallel threads available. The learning rate (eta), the step-size shrinkage used in each update to prevent overfitting and make the boosting process more conservative, is set to 0.3. The L2 regularization term (lambda) is set to 1 and the L1 regularization term (alpha) is set to 0 (Table 3).
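A sketch of this configuration with the xgboost Python API, continuing the earlier snippets, could look as follows; n_jobs=-1 is used here as the equivalent of "nthread = Maximum".

```python
from xgboost import XGBRegressor

# Parameter values taken from Table 3; reg_alpha=0 is the L1 term mentioned in the text.
xgb_model = XGBRegressor(
    booster="gbtree",
    validate_parameters=True,
    n_jobs=-1,
    learning_rate=0.3,    # eta
    gamma=0,
    max_depth=6,
    min_child_weight=1,
    reg_lambda=1,         # L2 regularization
    reg_alpha=0,          # L1 regularization
)
xgb_model.fit(X_train, y_train["heating_load"])
print("Heating load test R2:", round(xgb_model.score(X_test, y_test["heating_load"]), 3))
```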

3 Results

Figure 2 shows the resulting R2 scores for all machine learning models. Our XGBoost regressor obtained the highest prediction score, 0.999, on both the training and test datasets, for both heating load and cooling load.

Fig. 2 Train and test R2 scores for each Machine Learning model



The Random Forest Regressor came second, with a 0.997 R2 score on the heating load test dataset; however, it only obtained 0.971 on the cooling load test dataset.

4 Discussion

• Implications for Practice. From a practical point of view, our work was not deployed in a production environment to validate these results. External factors such as data drift, where the statistical properties of the predictors change, can decrease the model's accuracy. It is therefore necessary to understand how different datasets, from different architectural sources, impact model performance.
• Implications for Research. Our main contribution in this paper is a machine learning model that shows how a well-tuned XGBoost model can serve as a faster alternative for calculating building energy consumption predictions. We hope that others find this overview useful to focus their research.

5 Conclusion

The goal of this research was to contribute to the field of Artificial Intelligence by creating a lightweight alternative to costly, time-consuming building energy simulations. We achieved this goal by creating a machine learning regressor capable of quickly predicting heating and cooling loads, with a high R2 of 0.999, outperforming all the other machine learning models and supporting work on renewable and sustainable energy. With this proposal and these results, we leave the research open to implementing even more complex methods and comparing the results with those obtained here.

References 1. United Nations Environment Programme, & Global Alliance for Buildings and Construction (2020) 2020 Global status report for buildings and construction: towards a zero-emissions, efficient and resilient buildings and construction sector - executive summary. https://wedocs. unep.org/20.500.11822/34572 2. Vasanthkumar P, Senthilkumar N, Rao K et al (2022) Improving energy consumption prediction for residential buildings using modified wild horse optimization with deep learning model. Chemosphere 308:136277. https://doi.org/10.1016/j.chemosphere.2022.136277 3. Shwartz-Ziv R, Armon A (2022) Tabular data: deep learning is not all you need. Inf Fus 81:84–90. https://doi.org/10.1016/j.inffus.2021.11.011 4. Fayaz S, Zaman M, Kaul S, Butt M (2022) Is deep learning on tabular data enough? An assessment. Int J Adv Comput Sci Appl. https://doi.org/10.14569/ijacsa.2022.0130454



5. Hosseini S, Fard R (2021) Machine learning algorithms for predicting electricity consumption of buildings. Wirel Pers Commun 121:3329–3341. https://doi.org/10.1007/s11277-021-088 79-1 6. Ding Z et al (2021) Evolutionary double attention-based long short-term memory model for building energy prediction: case study of a green building. Appl Energy 288:116660 7. Khan A et al (2021) Ensemble prediction approach based on learning to statistical model for efficient building energy consumption management. Symmetry 13(3):405 8. Tsanas A, Xifara A (2012) Accurate quantitative estimation of energy performance of residential buildings using statistical machine learning tools. Energy Build 49:560–567. https://doi. org/10.1016/j.enbuild.2012.03.003 9. Ahsan M, Mahmud M, Saha P et al (2021) Effect of data scaling methods on machine learning algorithms and model performance. Technologies 9:52. https://doi.org/10.3390/technologies 9030052 10. Varoquaux G, Buitinck L, Louppe G et al (2015) Scikit-learn. GetMobile Mob Comput Commun 19:29–33. https://doi.org/10.1145/2786984.2786995 11. Moayedi H, Mosavi A (2021) Suggesting a stochastic fractal search paradigm in combination with artificial neural network for early prediction of cooling load in residential buildings. Energies 14(6):1649 12. Wu D et al (2020) Two neural-metaheuristic techniques based on vortex search and backtracking search algorithms for predicting the heating load of residential buildings. Eng Comput 38(1):647–660 13. Zhang Y et al (2022) Spatio-temporal heterogeneity analysis of energy use in residential buildings. J Clean Prod 352:131422 14. Zheng S et al (2020) Early prediction of cooling load in energy-efficient buildings through novel optimizer of shuffled complex evolution. Eng Comput 38(S1):105–119 15. Zhou G et al (2020) Teaching–learning-based metaheuristic scheme for modifying neural computing in appraising energy performance of building. Eng Comput 37(4):3037–3048 16. Roman N et al (2020) Application and characterization of metamodels based on artificial neural networks for building performance simulation: a systematic review. Energy Build 217:109972

Analysis of the Azores Accommodation Offer in Booking.Com Using an Unsupervised Learning Approach
L. Mendes Gomes and S. Moro

Abstract The Azores Archipelago, located in the Atlantic Ocean about 1.6 thousand kilometers to the west of Portugal, is a pristine island tourism destination that attracts visitors from around the world. We aim to analyze the accommodation offer across the nine Azores islands to draw insights that can help guide Regional Government tourism policies. Data from a total of 764 units registered on the Booking.com platform is collected, and 13 features are extracted, including the average score granted by guests, the number of reviews, whether the unit is privately managed, and whether it offers wi-fi, among others. We then train a k-means clustering algorithm using Weka. Results suggest the Government should promote privately managed units on the less populated islands to foster local economic growth, and that tourism strategies need to differentiate between the clusters to take advantage of their specificities. Keywords Island tourism · Online reviews · Hospitality · Data science

1 Introduction Due to its geographical isolation, an island is inherently a fragile ecosystem, mostly susceptible to exploitation by humans to harvest and harness its scarce resources. It is precisely the isolation of islands that attracts flocks of tourists who seek more untouched environments and pristine sea waters and beaches [1]. This dichotomy between fragility and attractiveness leads to competing forces that threaten islands’ sustainability [2]. Specifically, local island governments face the challenge of developing a policy that does not limit locals of taking advantage of tourism businesses L. Mendes Gomes (B) ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal e-mail: [email protected] NIDeS, University of the Azores, Ponta Delgada, Portugal S. Moro Instituto Universitário de Lisboa (ISCTE-IUL), ISTAR, Lisboa, Portugal © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_30




to foster economic growth while at the same time preventing an overexploitation of natural resources [3]. Such context makes of island tourism research unique on its own, and dependent on the type of destination in itself, including islands size (e.g., in the Mediterranean Sea, Malta, with 316 km2 vs Cyprus, with 9,251 km2 ), climate (e.g., São Tomé, in the equatorial line in the middle of the Atlantic versus the Falklands, in the South Atlantic), and even if the island belongs to a large Archipelago of several islands (e.g., Cape Verde) or to a small Archipelago constituted by a few islands (e.g., Madeira). In this study, we focus on the Azores Archipelago, which is constituted by nine habited islands, located in the middle of the Atlantic Ocean, about 1.5 thousand kilometers to the West of Portugal mainland, the country to which the archipelago belongs. In Table 1, we characterize each island according to its main features. Azores islands offer a temperate weather throughout all year (i.e., with peaks of around 25 ºC in the Summer and around 17 ºC in the Winter, and troughs of 18 ºC in the Summer and of 11 ºC in the Winter). This is one of the reasons why the Azores attract tourists all year around, with milder temperatures even when compared to Portugal mainland, at the same latitude. Additionally, it has a low density of population, with impressive pristine landscapes, ideal for escapes from the urban environment. Another interesting attraction factor is its volcanic activity, with constant thermal water hot spots that invite for a hot bath in the open air. Further, its location in the middle Atlantic makes of it a great whale watch spot, with touristic boats sailing to see the magnificent large marine mammals. All these attractions in the same place increment Azores value as a destination and emphasize its uniqueness while, at the same time, raise the important challenge of a sustainable exploitation that does not rely on overtourism for economic growth, as it is already happen in other islands [6]. In this study, we aim to understand the distribution of the accommodation units throughout the nine islands, and how that offer is characterized in terms of sustainability, location, type of management (local vs corporate), and other amenities. We adopted data from the widely used Booking.com platform, given most hotels and Table 1 Islands of the Azores Archipelago (sources [4, 5]) Island

Table 1 Islands of the Azores Archipelago (sources [4, 5])

| Island | Group | Population | Area (km2) | Nr. counties | Tallest peak (in meters) |
|---|---|---|---|---|---|
| Santa Maria | Eastern | 5,614 | 97.42 | 1 | 590 (Pico Alto) |
| São Miguel | Eastern | 137,220 | 759.41 | 6 | 1,103 (Pico da Vara) |
| Terceira | Central | 54,998 | 381.96 | 2 | 1,023 (Sta. Bárbara) |
| Graciosa | Central | 4,193 | 61.00 | 1 | 398 (Pico Timão) |
| São Jorge | Central | 8,252 | 56.00 | 2 | 1,067 |
| Pico | Central | 13,643 | 447.00 | 3 | 2,351 (Montanha do Pico) |
| Faial | Central | 14,482 | 173.42 | 1 | 1,043 (Cabeço Gordo) |
| Flores | Western | 3,628 | 143.11 | 2 | 914 (Morro Alto) |
| Corvo | Western | 467 | 17.13 | 1 | 718 |
| Total: | | 242,497 | 2,136.45 | 19 | |



Thus, our contribution is twofold: (1) to provide a detailed picture of the Azores accommodation units that enables the local Government to better manage the Archipelago's offer through specific policies that foster sustainability (e.g., by benefiting locally managed small units that invest in eco-friendly services); and (2) to extend the current body of knowledge on island tourism research by analyzing the Azores accommodation offer and its specificities within the wider island tourism literature. In Sect. 2, we present the methodology adopted to collect and pre-process the data from Booking.com, as well as the choice of the clustering algorithm and its parameterization. In Sect. 3, we present and discuss the results. In Sect. 4, we suggest how the results can support the local government's decisions, through methods, techniques and tools borrowed from Data Science, and we outline future work.

2 Methodology

Considering the comprehensiveness of Booking.com and its wide use in tourism [7, 8], it was chosen for collecting data about the accommodation offer in the Azores. The large number of units available on the platform for the Azores (above one thousand) makes manual data gathering unfeasible in a reasonable amount of time. To address this issue, we developed a web scraping script that mimics a human user crawling [9] through the different pages of Booking.com for the Azores. Web scraping can efficiently gather the data by selecting and saving the required web page elements as it iterates through each accommodation unit. The script was developed in R with the "rvest" package, which implements XPath and makes it easy to parse the HTML code [10] (an illustrative sketch of this step is given after Table 2). In total, information on 1050 accommodation units was collected. Table 2 shows the specific features (variables) extracted for each unit, while Fig. 1 below depicts, through an example, where each of the features was obtained.

Table 2 List of extracted features

| Feature | Description |
|---|---|
| Name | Name of the accommodation unit |
| URL | Direct URL to the unit's Booking.com page |
| Score nr | The average score granted by guests (from 1 to 10) |
| Score class | Score class (computed based on the score nr.) |
| Nr. reviews | Total number of reviews the unit has received |
| Address | The physical address of the unit |
| Properties | Some specific properties signaled for the unit |
| Description | The textual description of the unit |
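The original script was written in R with rvest and XPath; purely as an illustration of the same scraping pattern, the Python sketch below uses the requests and BeautifulSoup libraries. The URL and the CSS selectors are placeholders, not the real Booking.com page structure, which changes over time and must be inspected manually.

```python
import requests
from bs4 import BeautifulSoup

def scrape_results_page(url):
    """Collect name and score for every accommodation card on one results page."""
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}).text
    soup = BeautifulSoup(html, "html.parser")
    units = []
    for card in soup.select("div.property-card"):  # placeholder selector
        units.append({
            "name": card.select_one("div.title").get_text(strip=True),          # placeholder
            "score": card.select_one("div.review-score").get_text(strip=True),  # placeholder
        })
    return units

# Placeholder URL; a real crawl would iterate over the result pages for the Azores.
pages = scrape_results_page("https://example.com/azores-results?page=1")
```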



Fig. 1 Snapshot from Booking.com depicting the extracted features

In Data Science, it is essential to prepare the dataset before modeling, since data often has characteristics that need to be handled to facilitate model training [8]. As such, the dataset of 1,050 instances was explored to check for missing values and duplicated accommodation units. We found several duplicates retrieved through web scraping, mostly because advertised units appear repeatedly across the pages of Booking.com's pagination system. There was also one unit with no values, which was discarded. In total, 764 units deemed valid for data analysis were kept. Additionally, the feature "properties" was used to obtain the three specific characteristics mentioned in this field of Booking.com: (1) whether the unit has a beach nearby; (2) whether it is a sustainable unit; (3) whether the management of the unit is private (all three newly computed features are categorical/binary Boolean values). To further enrich the characterization of each accommodation unit, we extracted five additional features from the "description" field: has.WiFi, has.parking, has.garden, has.terrace, and has.balcony. Thus, if the description contained the words "wi-fi" or "wifi", the "has.Wi-Fi" feature was set to 1, otherwise it was set to 0. In the results section, we show descriptive statistics for the dataset and its features. There are several data-driven models that can be applied within data science, most of them divided into two main categories: supervised learning, where the goal is to predict (or explain) a given phenomenon translated by an outcome variable; and unsupervised learning, where the goal is to find similarities in the data. Specifically, clustering is an unsupervised learning approach devoted to grouping data into logical clusters according to their features [11]. Considering that our goal is to obtain an overall picture of the Azores accommodation offer, we chose to build a clustering model. Specifically, we adopt the popular k-means algorithm with the Euclidean distance to compute the distances between instances. We used the Weka tool, with random initialization of its k-means implementation [12].
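The clustering itself was run in Weka; purely as an illustration of the same steps, the sketch below derives one of the binary description features and fits a k-means model with scikit-learn. The DataFrame df and its column names are assumptions standing in for the cleaned set of 764 units.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

# Derive a 0/1 wi-fi flag from the free-text description, as described above.
df["has.Wi-Fi"] = df["description"].str.lower().str.contains("wi-fi|wifi", regex=True).astype(int)

features = ["score.num", "nreviews", "npopulation", "area", "density.population",
            "is.proximity.beach", "is.sustainability.trip", "has.private.management",
            "has.Wi-Fi", "has.parking", "has.garden", "has.terrace", "has.balcony"]

# Rescale so that counts and 0/1 flags contribute comparably to the Euclidean distance.
X = MinMaxScaler().fit_transform(df[features])
kmeans = KMeans(n_clusters=10, init="random", n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(X)
print(df["cluster"].value_counts().sort_index())
```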

3 Results and Discussion

In this section, we analyze the obtained results. We chose to analyze them per island, given that each island has its own specificities, which give rise to different demand and supply of accommodation units. For example, while all the islands are volcanic, Faial experienced a significant eruption in recent times (in 1958), whereas São Miguel has steady volcanic activity, offering visitors hot thermal waters.



Another example is Flores Island, known for its greenness, in comparison with the arid Corvo Island, both belonging to the Western group. Further, the island of Pico, with 2,351 m at its peak, is one of the tallest volcanic mountains on the Atlantic Ocean ridge (surpassed by El Teide, Tenerife, at 3,715 m, and by the peak of Fogo Island, Cape Verde, at 2,829 m), and it is widely sought by those willing to hike to the summit. Thus, the above rationale justifies a distinct analysis per island. We start by showing descriptive statistics of the dataset. In Table 3, we exhibit the distribution of the computed features for the three properties. It can be observed that, while the Azores islands are small in area, only about half of the units are located near a beach. This signals that the Azores Archipelago has much more to offer than its beaches, despite its mild weather, and thus it cannot be directly compared to other Summer destination islands such as Cape Verde, where beaches dominate the tourism offer [3]. Nevertheless, part of the Azores islands' landscape is made of sharp cliffs, so not all of the coast is easily accessible. We can also see that the Azores units are not advertising themselves as sustainable. This result may prompt the Local Government to react and create conditions that incentivize unit owners towards sustainability, which is key in a fragile island environment [2]. Also, almost half of the units are managed privately, i.e., by locals, which is aligned with the importance of tourism to the Azores population. This is also an important finding, since local private managers are more likely to be interested in the long run of their investment. Next, we analyze the units from the reviewers' perspective, by showing the number of reviews and the scores granted. Table 4 shows the number of reviews received and the average score given by Booking.com reviewers for the units on each island. The largest and most populated islands are the ones that have the most units and therefore receive more reviews. Still, the two units on Corvo Island have already received a total of 308 reviews.

Table 3 Distribution of units per island and their properties

| Island | Group | Nr. units | With beach nearby | Sustainable units | With private management |
|---|---|---|---|---|---|
| Santa Maria | Eastern | 17 | 4 | 1 | 7 |
| São Miguel | Eastern | 411 | 198 | 49 | 186 |
| Terceira | Central | 106 | 91 | 10 | 53 |
| Graciosa | Central | 10 | 1 | 0 | 4 |
| São Jorge | Central | 30 | 5 | 2 | 10 |
| Pico | Central | 113 | 21 | 8 | 49 |
| Faial | Central | 48 | 32 | 6 | 16 |
| Flores | Western | 27 | 26 | 1 | 15 |
| Corvo | Western | 2 | 1 | 0 | 0 |
| Total: | | 764 | 375 | 76 | 333 |

Table 4 Distribution of reviews per island

| Island | Group | Nr. reviews | Average score |
|---|---|---|---|
| Santa Maria | Eastern | 1,113 | 8.635 |
| São Miguel | Eastern | 55,323 | 8.961 |
| Terceira | Central | 12,352 | 8.969 |
| Graciosa | Central | 864 | 8.830 |
| São Jorge | Central | 3,599 | 9.067 |
| Pico | Central | 7,866 | 8.878 |
| Faial | Central | 5,881 | 8.842 |
| Flores | Western | 2,122 | 8.859 |
| Corvo | Western | 308 | 9.000 |
| Total: | | 89,428 | 8.893 |

As for the average score, it appears quite homogeneous across the different islands. This is a good indicator that visitors are generally pleased with their stay in the Azores. The average score in Table 4 is not enough to understand the distribution of scores per island. Figure 2 highlights a specific characteristic of Booking.com: its score ranges from 5 to 10, and not from 1 to 10, as one might be misled to think by Booking.com [13]. Its minimum score has been a matter of discussion (e.g., Mellinas et al. [14] set the minimum at 2.5). Table 5 shows the descriptive statistics for the 13 features used as input to the k-means algorithm. For the cluster analysis, the number of clusters was set to 6, 8, 10, 12, and 14. While the sum of squared errors continued to decrease, the decrease was marginal for more than 10 clusters (e.g., we obtained an error of 600.44 for 10 clusters and 587.06 for 12). Additionally, we aimed to avoid a large number of clusters, which would result in too many niches of accommodation units, making it difficult to offer real contributions to the local government, which manages at a broader level rather than at a micro-level.

Fig. 2 Boxplots for the numerical scores granted by reviewers in Booking.com per island



Therefore, we chose the cluster model with 10 clusters for subsequent analysis and discussion. Table 6 shows the distribution of instances per cluster. We note that, even with 10 clusters, there are two that hold less than 5% of the instances (clusters 6 and 9). Cluster 0 concentrates the highest percentage of instances (17%), followed by clusters 4 (16%), and 5 and 8 (14% each). These four clusters concentrate 61% of the instances. In Table 7, we show the average of every feature per cluster, computed in Weka (we omit clusters 6 and 9 for space purposes and because together they hold only 5% of the instances). Table 7 provides important insights into the Azores accommodation offer. First, from the guests' perspective, there is homogeneity to a certain degree, given that the 8 clusters show scores between 8.84 and 9.06. Nevertheless, cluster 0 deserves a more careful analysis: it is associated with the less populated islands (i.e., an average population of 11,972; please refer to Table 1). This cluster aggregates units that were granted lower scores, showing room for improvement in comparison to the remaining clusters. On average, each of its units received 94 reviews, reinforcing this result. Cluster 0 is also the one associated with the least densely populated areas, suggesting that the local government can invest in improving tourism quality, also helping to attract and retain local populations by indirectly offering employment in the hospitality industry. As for the remaining features, three stand out when forming the clusters. First, proximity to a beach is a characteristic common to all units in clusters 4, 5, and 7, a total of 436 units. Private management is also a characteristic common to all the 98 units of clusters 2 and 4. These two are important features that show differentiation and therefore suggest differentiated strategies by the government when supporting local tourism. Specifically, within the context of a small community in the Azores, privately managed units are more likely to foster local development by promoting local products, as is usually the case in island tourism [15]. The third characteristic

Table 5 Descriptive statistics

Attribute

Distinct

Min

Max

Mean

StdDev

score.num

31

7

10

nreviews

257

1

1496

116

8.96

211

86,325

58,316

npopulation

9

460

138,176

area

9

17

747

533.04

density.population

9

26

185

134.29

is.proximity.beach

2

0

1

is.sustainability.trip

2

0

1

has.private.management

2

0

1

0.49

2

0

1

2

0

1

0.10

has.garden

2

0

1

0.16 0.08

2

0

1

2

0

1

0.50 0.30

0.44

has.Wi-Fi

has.terrace

229.94 64.08

0.10

has.parking

has.balcony

0.60

0.50

0.40

0.49 0.30 0.36 0.28

0.07

0.26



Table 6 Distribution of instances by cluster

| Cluster | N.º instances | % Instances |
|---|---|---|
| 0 | 133 | 17% |
| 1 | 54 | 7% |
| 2 | 98 | 13% |
| 3 | 51 | 7% |
| 4 | 122 | 16% |
| 5 | 107 | 14% |
| 6 | 17 | 2% |
| 7 | 53 | 7% |
| 8 | 108 | 14% |
| 9 | 21 | 3% |
| 10 clusters | 764 | 100% |

Table 7 Cluster distribution and characterization

| Feature\Cluster | 0 | 1 | 2 | 3 | 4 | 5 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| score.num | 8.84 | 8.98 | 8.98 | 9.01 | 9.06 | 8.98 | 8.88 | 8.91 |
| nreviews | 94 | 46 | 49 | 89 | 49 | 226 | 161 | 184 |
| npopulation | 11,972 | 15,175 | 133,148 | 15,437 | 115,288 | 125,895 | 46,939 | 138,176 |
| area | 288.47 | 309.31 | 725.57 | 333.14 | 649.46 | 694.66 | 355.34 | 747.00 |
| density.population | 47.63 | 50.72 | 182.31 | 50.00 | 172.74 | 178.42 | 116.96 | 185.00 |
| is.proximity.beach | 0.33 | 0.37 | 0.00 | 0.20 | 1.00 | 1.00 | 1.00 | 0.00 |
| is.sustainability.trip | 0.08 | 0.04 | 0.04 | 0.08 | 0.07 | 0.15 | 0.13 | 0.22 |
| has.private.management | 0.38 | 0.80 | 1.00 | 0.29 | 1.00 | 0.00 | 0.00 | 0.00 |
| has.Wi-Fi | 0.00 | 0.81 | 0.33 | 1.00 | 0.33 | 0.29 | 0.89 | 0.39 |
| has.parking | 0.06 | 0.09 | 0.04 | 0.63 | 0.02 | 0.09 | 0.06 | 0.10 |
| has.garden | 0.07 | 0.57 | 0.18 | 0.02 | 0.11 | 0.08 | 0.40 | 0.14 |
| has.terrace | 0.07 | 0.13 | 0.00 | 0.04 | 0.04 | 0.00 | 0.06 | 0.00 |
| has.balcony | 0.10 | 0.07 | 0.06 | 0.02 | 0.07 | 0.10 | 0.04 | 0.06 |

that is also distinctively used to separate the units is wi-fi. The remaining features appear to have little influence.

4 Conclusions and Further Work

We analyzed the whole accommodation offer of the Azores Archipelago on the Booking.com platform. The nine islands differ in both their natural contexts (i.e., volcanic activity, types of beaches) and their human contexts, which results in the different clusters obtained by our model. Less populated islands require further attention from the local government in supporting local tourism and specifically in fostering privately managed units that can promote local products and development.



Our results provide insights that can be used as guidance for the local government, also showing the value of data science within tourism. Nevertheless, future studies need to be conducted to assess guests' perspectives (e.g., analysis of online reviews) over time, as policies are implemented.

Funding The work was supported by the Fundação para a Ciência e Tecnologia (FCT), Portugal, within the following Projects: UIDB/04466/2020 and UIDP/04466/2020.

References 1. Kothari U, Arnall A (2017) Contestation over an island imaginary landscape: the management and maintenance of touristic nature. Environ Plan A 49(5):980–998 2. Connell J (2018) Islands: balancing development and sustainability? Environ Conserv 45(2):111–124 3. Oliveira C, Rita P, Moro S (2021) Unveiling island tourism in cape verde through online reviews. Sustainability 13(15):8167 4. byAçores. https://byacores.com/quantas-ilhas-tem-os-acores/. Accessed 28 July 2022 5. Statistics Azores Portugal. https://srea.azores.gov.pt/. Accessed 28 July 2022 6. Butler RW, Dodds R (2022) Island tourism: vulnerable or resistant to overtourism? Highlights Sustain 1(2):54–64 7. Moro S, Rita P, Ramos P, Esmerado J (2022) The influence of cultural origins of visitors when staying in the city that never sleeps. Tour Recreat Res 47(1):78–90 8. Moro S, Rita P (2022) Data and text mining from online reviews: an automatic literature analysis. Wiley Interdiscip Rev Data Min Knowl Discov 12:e1448 9. Correia A, Moro S, Rita P (2022, in press) The travel dream experience in pandemic times. Anatolia 10. Benedikt M, Koch C (2009) XPath leashed. ACM Comput Surv (CSUR) 41(1):1–54 11. Wu J (2012) Advances in K-means clustering: a data mining thinking. Springer, Heidelberg 12. Witten IH et al (1999) Weka: practical machine learning tools and techniques with Java implementations 13. Booking.com. https://partner.booking.com/en-gb/help/guest-reviews/general/everything-youneed-know-about-guest-reviews. Accessed 20 Aug 2022 14. Mellinas JP, María-Dolores SMM, García JJB (2015) Booking. com: the unexpected scoring system. Tour Manag 49:72–74 15. Kokkranikal J, McLellan R, Baum T (2003) Island tourism and sustainability: a case study of the Lakshadweep Islands. J Sustain Tour 11(5):426–447

AD-DMKDE: Anomaly Detection Through Density Matrices and Fourier Features Oscar A. Bustos-Brinez , Joseph A. Gallego-Mejia , and Fabio A. González

Abstract This paper presents a novel density estimation method for anomaly detection using density matrices (a powerful mathematical formalism from quantum mechanics) and Fourier features. The method can be seen as an efficient approximation of Kernel Density Estimation (KDE). A systematic comparison of the proposed method with eleven state-of-the-art anomaly detection methods on various data sets is presented, showing competitive performance on different benchmark data sets. The method is trained efficiently and it uses optimization to find the parameters of data embedding. The prediction phase complexity of the proposed algorithm is constant relative to the training data size, and it performs well in data sets with different anomaly rates. Its architecture allows vectorization and can be implemented on GPU/ TPU hardware. Keywords density matrix · random features · anomaly detection · quantum machine learning

1 Introduction An anomaly can be broadly defined as an observation or datum that deviates significantly from the patterns of the data set from which it originates, in one or more features. In most cases, data are generated by complex processes and allow for different types of measurements, so an anomaly may contain valuable information about anomalous behaviors of the generative processes, or elements that are O. A. Bustos-Brinez (B) · J. A. Gallego-Mejia · F. A. González MindLab Research Group, Universidad Nacional de Colombia, Bogotá, Colombia e-mail: [email protected] J. A. Gallego-Mejia e-mail: [email protected] F. A. González e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_31




impacting the generation or measurement stages [1]. Then, recognizing this type of data (which can be referred to as unusual, atypical, unexpected, malicious or rare, depending on the scenario), discerning real, meaningful anomalies from normal noisy samples (known as ‘outliers’) and identifying the unusual processes that originate them are the main objectives of Anomaly Detection (AD) [2]. The methods and algorithms that perform AD are key in various applications such as bank fraud detection, identification of abrupt changes in sensors, medical diagnostics, natural sciences, network security intrusion detection, among many others [21]. However, classic AD methods face significant challenges that limit their performance and range of application [14]. In most cases, anomalies are much sparser and scarcer than normal data, and are not identified a priori as anomalies, which makes the use of supervised classifiers difficult. Furthermore, the boundaries between “normal” and “anomaly” regions are very subjective and depend on the particularities of each case. Data may also be contaminated by noise, so that both types of data may be intermingled, blurring the very idea of separation. Many of the AD algorithms address some of these difficulties, but at the cost of being vulnerable to others; however, the combination of classical methods with a good mathematical foundation and deep learning algorithms is one of the most promising paths [11], although it also presents shortcomings on the detection of complex types of anomalies and fewer possibilities of explanation. The core idea of the proposed method is to use random Fourier features to approximate a Gaussian kernel centered in each training sample and then use a density matrix to summarize these kernels in an efficient and compact way. The density matrix is then used to estimate the density of new samples, and those whose density lies below a certain threshold are classified as anomalies. The method uses optimization to obtain the best random Fourier feature embedding (a process called “Adaptive Fourier Features”), and is able to calculate the best threshold using percentiles from a validation data set. The outline of the paper is as follows: in Sect. 2, we describe anomaly detection and density estimation in more depth, and present the benchmark methods to which we will compare our algorithm. In Sect. 3, we present the novel method, explaining the stages of the algorithm and how it uses Fourier features and density matrices. In Sect. 4, we systematically compare the proposed algorithm with state-of-the-art anomaly detection algorithms. In Sect. 5, we state the conclusions of this work and sketch future research directions.

2 Background and Related Work 2.1 Anomaly Detection The main mechanism of anomaly detection algorithms is the construction of a model that determines a degree of “normality” for the data points, and then detects anomalies as points that deviate from this model. The importance of anomaly detection lies in



the fact that recognizing and understanding anomalous behavior allows one to make decisions and predictions. For example, anomalous traffic patterns on a network could mean that sensitive data is being transmitted across it, or that there is an intruder attempting to attack a system [3]. Anomalous user behavior in payment transactions could signify credit card fraud and give some clues as to how to avoid it in the future [23]. Anomaly detection should be distinguished from outlier detection, the purpose of which is data cleaning: to locate outliers (mainly normal data affected by noise) and subsequently remove them from the data set. Let r be the ratio of anomaly samples with respect to all samples. When r is high, the most common approach to anomaly detection is to use a supervised classification method. However, when r is low (which is usually the case), the best approach is to use semi-supervised or unsupervised AD algorithms. According to [16], classical AD algorithms can be classified into six main types: classification-based approaches (such as the well-known one-class SVM), probabilistic models (such as histograms and density estimation methods like KDE [8]), clustering models, information-based methods, spectral analysis (where dimensionality reduction methods such as PCA are found), and distance-based methods (including isolation forest and nearest neighbors). But in recent years, these classical models have been progressively replaced by deep learning-based algorithms, showing high performance in anomaly detection tasks [25]. These algorithms can efficiently model the inherent features of complex data through their distributed and hierarchical architectures, i.e., they can perform implicit feature engineering, especially when a large amount of data is available [20]. For this reason, they are commonly used for anomaly detection in scenarios involving very large data sets with high-dimensional instances, such as speech recognition, image/video analysis, or natural language processing [14].

2.2 Anomaly Detection Baseline Methods In this paper, we select 11 state-of-the-art anomaly detection methods that represent a complete sample of the most common types of anomaly detection methods. All algorithms consider the proportion of anomalies in the data as a necessary parameter for finding threshold values. We selected five well-known methods that are based on classic mathematical approaches to anomaly detection. These methods include One Class SVM [24], a kernel-based algorithm that encloses normal data in a boundary that leaves anomalies outside of it; Covariance Estimator, that finds the smallest ellipsoid that wraps normal data; Isolation Forest [12], that tries to separate points using decision trees, and those who are easier to isolate are the outliers; Local Outlier Factor (LOF) [4], based on a distance measure from each point to a set of its neighbors; and K-nearest neighbor (KNN) [18], that makes an scoring based on the distance from the k-th nearest neighbor. We also decided to include methods developed in the last decade that do not use neural networks into their architectures, but instead take other approaches. These



types of methods include SOS [7], that builds an affinity matrix between all points that acts as a similarity measure; COPOD [10], that builds univariant probability functions from each dimension and then joins them in a unified multivariant function that models the true distribution of the data; and LODA [15], that combines classifiers in low dimensions and uses histograms to detect anomalies. Finally, we complete the overview of baseline methods with some of the proposals that are based on the use of neural networks as their central elements. These three models include VAE-Bayes [9], built around a variational autoencoder that builds a latent space from data, where the probability distribution of the data can be retrieved by using Bayesian assumptions; DSVDD [22], in where a neural network is used to transform data into a latent space where normal data can be encompassed into a hypersphere whose features are neurally optimized; and LAKE [13], that includes a variational autoencoder to reduce dimensionality in a way that preserves data distribution, and then performs KDE in this space and separates anomalies using a threshold.

3 Anomaly Detection Through Density Matrices and Fourier Features (AD-DMKDE) González et al. [5] proposed a new algorithm based on the joining of density matrices and random Fourier features for neural density estimation, called “Density matrix for Kernel Density Estimation” (DMKDE), that has its core in kernel density estimation. The DMKDE algorithm consists of a random Fourier feature mapping, a density matrix calculation, and a quantum density estimation phase. This method, whose original purpose was only density estimation, is used as the foundation for the novel method presented here. AD-DMKDE enhances the random Fourier feature mapping from DMKDE through the use of optimization, that allows to find better values for the parameters of the mapping function. This process, called Adaptive Fourier features, can be seen in more detail in Subsect. 3.1. After that, the density matrix is built from the mappings of the training samples (see Subsect. 3.2), and the density of new samples is calculated through a process known as Quantum Density Estimation, inspired in quantum measurement theory from quantum mechanics (see Subsect. 3.3). Then, as final step, a threshold is obtained through a cross-validation process over a validation data set. This threshold is calculated using the rate of the anomalies in the validation set and is explained in the Subsect. 3.4. The test phase of the algorithm comprises applying the already-built Fourier feature mapping over the test set, Quantum Density Estimation and threshold comparison to make the decision whether each test point is anomalous or not. Figure 1 shows a summary of the method.



Fig. 1 AD-DMKDE algorithm. Each section corresponds to a different stage of the method. The parameters of the mapping/embedding and the density matrix do not change after they are calculated

3.1 Random Fourier Features and Adaptive Fourier Features

In the training phase, each data point in the training data set is implicitly transformed into a higher dimension. The random Fourier feature mapping, as first stated in [17], applies the function $\mathrm{rff}(x_i) = \sqrt{2}\cos(w^T x_i + b)$, where $w$ is a vector of i.i.d. samples from $N(0, I_D)$ and $b$ is an i.i.d. sample from $\mathrm{Uniform}(0, 2\pi)$. According to Bochner's theorem, the kernel $k$ can then be recovered as an expectation over these random parameters, $k(x, y) = \mathbb{E}_{w,b}[\mathrm{rff}(x)\,\mathrm{rff}(y)]$. Since we want to approximate a Gaussian kernel to perform KDE, the kernel parameter, in this case the standard deviation $\sigma$, is one of the major influences on the selection of $w$ and $b$. This mapping is improved through AFF (Adaptive Fourier Features), a mechanism that optimizes the values of $w$ and $b$ before the RFF mapping by building a neural network that adjusts them through a learning process. After applying the RFF formula with the optimized values of $w$ and $b$, the results are normalized to obtain the final AFF embedding $\varphi_{aff}(x_i)$ that represents each $x_i$.
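As an illustration only, the sketch below implements the standard random Fourier feature construction in NumPy for a Gaussian kernel; the Monte Carlo scaling by sqrt(2/D), the unit-norm normalization, and the values of D and sigma are assumptions, and the neural optimization step of the Adaptive Fourier Features is omitted.

```python
import numpy as np

def rff_map(X, W, b):
    """Map rows of X with z(x) = sqrt(2/D) * cos(W x + b), then normalize to unit norm."""
    Z = np.sqrt(2.0 / W.shape[0]) * np.cos(X @ W.T + b)
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

d, D, sigma = 8, 1024, 1.0                       # input dimension, embedding size, bandwidth (assumed)
rng = np.random.default_rng(0)
W = rng.normal(scale=1.0 / sigma, size=(D, d))   # frequencies drawn for a Gaussian kernel of bandwidth sigma
b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # phases drawn uniformly in [0, 2*pi)

phi = rff_map(rng.normal(size=(5, d)), W, b)
print(phi.shape)                                 # (5, 1024)
```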

3.2 Density Matrix Calculation

Once the AFF embedding has been applied to all training samples, the transformed training data correspond to the set $\{\varphi_{aff}(x_1), \varphi_{aff}(x_2), \ldots, \varphi_{aff}(x_n)\}$, with $n$ the number of training samples. For each sample, the algorithm computes its corresponding pure-state density matrix $R_i$, given by the outer product of each $\varphi_{aff}(x_i)$:

$$R_i = \varphi_{aff}(x_i)\,\varphi_{aff}(x_i)^{T}$$



The mixed-state density matrix that summarizes all the training samples is calculated as $R = \frac{1}{n}\sum_i R_i$. Although $R$ contains all the information of the embedded training samples, its size does not depend on $n$; instead, it only depends on the size of the embedding. This is a major improvement over other methods, making it possible to compactly represent (and store) very large data sets using only a single matrix whose size is in general much smaller than the training set. The calculation of $R$ only has to occur once in the entire training phase.
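Continuing the previous sketch, the density matrix of the training set can be accumulated with a single matrix product; X_train here is a placeholder for the actual training samples.

```python
# R = (1/n) * sum_i phi(x_i) phi(x_i)^T, a D x D matrix whose size is independent of n.
X_train = rng.normal(size=(200, d))        # placeholder training data
Phi_train = rff_map(X_train, W, b)         # shape (n, D)
R = (Phi_train.T @ Phi_train) / Phi_train.shape[0]
```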

3.3 Quantum Density Estimation

With $R$ calculated, we can estimate the density of new samples, in particular those of the validation and test sets. Given a validation sample $x_j$, we transform it using exactly the same mapping and normalization defined previously in order to obtain $\varphi_{aff}(x_j)$, and then calculate

$$\hat{f}(x_j) = \varphi_{aff}(x_j)^{T}\, R\, \varphi_{aff}(x_j)$$

as an estimate of the density of $x_j$. This process is repeated for every validation sample in order to build the validation density set $\hat{F}_{val} = \{\hat{f}(x_j) \mid x_j \in X_{val}\}$.

3.4 Threshold Calculation and Prediction Phase

To determine which samples are anomalies, we need to find a threshold $\theta$ that can be used as a boundary between the two types of data points. Therefore, we use the anomaly rate of the data set (either an a priori known value or the proportion of anomalies we expect to find). We take the estimates from the validation set and find the percentile corresponding to this anomaly rate to calculate the threshold $\theta := q_{\text{anomaly rate}\%}(\hat{F}_{val})$. As a final step, we estimate the density of the test samples (by mapping them and applying density estimation with $R$), and then use the threshold $\theta$ to discriminate them. Thus, a particular test sample $x_k$ is classified as ‘normal’ if $\hat{f}(x_k) \geq \theta$, and as ‘anomaly’ otherwise.
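A compact sketch of the estimation and thresholding steps, continuing the snippets above; X_val, X_test and the anomaly rate are placeholders, not values from the experiments.

```python
def densities(X, R, W, b):
    Phi = rff_map(X, W, b)
    # f_hat(x) = phi(x)^T R phi(x), evaluated row by row
    return np.einsum("ij,jk,ik->i", Phi, R, Phi)

anomaly_rate = 0.05                                   # assumed known anomaly proportion
X_val = rng.normal(size=(100, d))                     # placeholder validation data
X_test = rng.normal(size=(50, d))                     # placeholder test data

theta = np.percentile(densities(X_val, R, W, b), 100.0 * anomaly_rate)
is_anomaly = densities(X_test, R, W, b) < theta       # below-threshold densities are flagged
```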



4 Experimental Evaluation

4.1 Experimental Setup

For our experiments, we compared our proposed method with all the baseline algorithms listed above (see Sect. 2.2). To run the first four methods, we used the Python implementations provided by the scikit-learn library. KNN, the shallow methods, VAE and DSVDD were executed through the implementations provided by the PyOD Python library [26]. For the LAKE algorithm [13], the implementation we used comes from the GitHub repository of its authors1, although we had to correct some issues in the original code, particularly the way the test data set is split to include both normal and anomalous samples. To handle the inherent randomness that AD-DMKDE can present in some stages (particularly in the AFF neural network training), we selected a single, invariant value for every random seed that affects the behavior of the method. All the experiments were carried out on a machine with a 2.1 GHz Intel Xeon 64-core processor, 128 GB of RAM and two NVIDIA RTX A5000 graphics processing units, running the Ubuntu 20.04.2 operating system.

Data Sets. We used twenty public data sets to evaluate the performance of our proposed algorithm for anomaly detection. We chose data sets with a wide variety of characteristics in order to test the robustness of our algorithm in multiple scenarios. The main characteristics of all the data sets can be seen in Table 1. The source for all the data sets was the ODDS Library of Stony Brook University [19].

Experiment Configuration. For each data set, the following configuration was chosen: the data set was split in a stratified way (keeping the same proportion of outliers in each subset) by randomly taking 30% of the samples as the test set and, from the remaining samples, again randomly taking 30% for validation and the remaining 70% as the training set. This splitting was performed only once per data set, so that all algorithms worked with exactly the same data partitions. When implementing the scikit-learn and PyOD algorithms, the default configurations defined in these libraries were used, modifying only the parameters that refer to the data set features mentioned in the previous section. The only exception to this rule was the VAE algorithm, whose architecture, based on autoencoders, needed to be slightly modified for low-dimensional data sets, to ensure that the encoded representation was always of lower dimensionality than the original sample. Also, as mentioned above, the LAKE algorithm required some corrections to the original code published by its authors; however, the basic internal structure, the internal logic of the algorithm and the functions on which it is based remained the same as in its original configuration. Our AD-DMKDE algorithm was implemented from the original code of DMKDE, presented in [6], with some modifications. Since the main goal of DMKDE is density
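As an illustration of how such baselines are typically run with their default configurations, the following sketch fits two detectors with the PyOD library on synthetic data; the toy arrays and the contamination value are assumptions, not the actual experimental data or settings.

```python
import numpy as np
from pyod.models.iforest import IForest
from pyod.models.knn import KNN

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                      # toy "mostly normal" training data
X_test = np.vstack([rng.normal(size=(95, 8)),
                    rng.normal(loc=6.0, size=(5, 8))])   # a few obvious anomalies

for detector in (IForest(contamination=0.05, random_state=0), KNN(contamination=0.05)):
    detector.fit(X_train)
    labels = detector.predict(X_test)      # 0 = normal, 1 = anomaly in PyOD's convention
    print(type(detector).__name__, "flagged", int(labels.sum()), "test points as anomalies")
```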

https://github.com/1246170471/LAKE.

Table 1 Main features of the datasets

| Dataset | Instances | Dimensions | Outlier Rate |
|---|---|---|---|
| Arrhythmia | 452 | 274 | 0.146 |
| Cardio | 2060 | 22 | 0.2 |
| Glass | 214 | 9 | 0.042 |
| Ionosphere | 351 | 33 | 0.359 |
| KDDCUP | 5000 | 118 | 0.1934 |
| Letter | 1600 | 32 | 0.0625 |
| Lympho | 148 | 18 | 0.04 |
| MNIST | 7603 | 100 | 0.092 |
| Musk | 3062 | 166 | 0.0317 |
| OptDigits | 5216 | 64 | 0.0288 |
| PenDigits | 6870 | 16 | 0.0227 |
| Pima | 768 | 8 | 0.349 |
| Satellite | 6435 | 36 | 0.3164 |
| SatImage | 5803 | 36 | 0.0122 |
| Shuttle | 5000 | 9 | 0.0715 |
| SpamBase | 3485 | 58 | 0.2 |
| Thyroid | 3772 | 36 | 0.0247 |
| Vertebral | 240 | 6 | 0.125 |
| Vowels | 1456 | 12 | 0.03434 |
| WBC | 378 | 30 | 0.0556 |

estimation, the separation of data into normal or outlier was performed by adding a function that calculates the classification threshold using the rate of anomalies in each data set, a known value in all cases (see Table 1).

Evaluation Metrics. The main metric we chose to determine the performance of the algorithms was the F1 score (with weighted average), a well-known and very common metric for testing classification algorithms. For each algorithm, a set of parameters of interest was selected in order to perform a comprehensive search for the combination of values that gave the best result for each data set. This search was performed by training only on the training set and reporting the results on the validation set. The best combination of parameters was selected by choosing the model with the best average F1 score. Once the best parameters were found, these same values were used to run each algorithm once again, this time combining the training and validation sets for the training phase and reporting the results on the test set. These final results were the values used to compare the overall performance of all algorithms.

Table 2 F1-Score for all classifiers over all data sets (in the original table, the first and second best values for each data set are marked in bold and underlined, respectively). For each classifier, the twenty values follow the data set order of Table 1: Arrhythmia, Cardio, Glass, Ionosphere, KDDCUP, Letter, Lympho, MNIST, Musk, OptDigits, PenDigits, Pima, Satellite, SatImage, Shuttle, SpamBase, Thyroid, Vertebral, Vowels, WBC.

iForest    0.821 0.752 0.931 0.710 0.838 0.897 1.000 0.881 0.931 0.952 0.967 0.624 0.757 0.998 0.992 0.794 0.958 0.778 0.942 0.941
OCSVM      0.813 0.804 0.916 0.765 0.744 0.893 0.934 0.881 0.958 0.952 0.963 0.592 0.681 0.984 0.933 0.702 0.953 0.750 0.950 0.942
Cov        0.804 0.702 0.925 0.830 0.789 0.930 1.000 0.886 0.958 0.949 0.961 0.615 0.634 0.979 0.900 0.702 0.949 0.796 0.951 0.957
LOF        0.861 0.753 0.900 0.817 0.932 0.910 1.000 0.909 0.991 0.952 0.965 0.644 0.716 0.998 0.973 0.719 0.953 0.810 0.969 0.942
KNN        0.773 0.739 0.907 0.784 0.716 0.911 0.934 0.864 0.951 0.953 0.963 0.636 0.597 0.980 0.891 0.719 0.949 0.817 0.954 0.913
SOS        0.844 0.750 0.916 0.736 0.785 0.895 0.962 0.868 0.964 0.949 0.966 0.615 0.732 0.994 0.990 0.799 0.953 0.817 0.943 0.970
COPOD      0.798 0.717 0.848 0.510 0.770 0.899 0.923 0.866 0.954 0.955 0.966 0.597 0.709 0.997 0.973 0.699 0.958 0.817 0.929 0.947
LODA       0.856 0.783 0.900 0.714 0.803 0.897 1.000 0.895 0.984 0.951 0.966 0.632 0.761 0.996 0.981 0.741 0.958 0.817 0.952 0.957
VAE-B      0.864 0.735 0.907 0.674 0.755 0.897 1.000 0.880 0.992 0.952 0.963 0.679 0.761 0.996 0.978 0.738 0.961 0.819 0.943 0.957
DSVDD      0.818 0.756 0.931 0.876 0.975 0.909 0.934 0.841 0.997 0.952 0.966 0.559 0.813 0.991 0.968 0.714 0.986 0.817 0.941 0.949
LAKE       0.909 0.772 1.000 0.993 1.000 0.838 1.000 0.959 0.639 0.819 0.994 0.740 0.841 0.946 0.990 0.885 0.803 0.807 1.000 0.897
AD-DMKDE   0.911 0.831 0.974 0.959 0.984 0.927 1.000 0.911 1.000 0.981 0.994 0.758 0.845 1.000 0.998 0.816 0.967 0.904 0.979 0.961


4.2 Results and Discussion

The results for all data sets and all algorithms (the baseline methods and our proposed method) are shown in Table 2. In the original table, the best value for each data set is highlighted in bold and the second-best value is underlined. In the F1-score table, a noticeable difference appears between most of the baseline methods and our proposed method, with LAKE being a notable exception; still, AD-DMKDE shows the best performance on the majority of the data sets, followed by the LAKE and LOF methods. When we consider the influence of the characteristics of the data sets on the performance of our method, we see an interesting pattern. The outlier rate of the data sets does not seem to have any influence on the performance of AD-DMKDE, since the method performs well both when there are few anomalies (as in Musk or Shuttle) and when there are many of them (as in Cardio or Pima). The number of dimensions has a weak influence: the method most frequently achieves the best performance on data sets with fewer than twenty dimensions (such as Vertebral, Pima or PenDigits), but also on half of the data sets with more than 100 dimensions (Arrhythmia and Musk). Finally, when we look at the number of samples, we see that AD-DMKDE stands out more on data sets with fewer than 1,000 instances and on data sets with more than 5,000 instances, but its performance is not as good on the intermediate data sets. In summary, AD-DMKDE is a method that can perform above the average of state-of-the-art anomaly detection methods, standing out in different scenarios and being capable of handling data sets with both low and high numbers of dimensions, numbers of samples and outlier rates.

5 Conclusion

In this paper, we presented a novel approach for anomaly detection using density matrices from quantum mechanics and random Fourier features. The new method, AD-DMKDE, was systematically compared against eleven different anomaly detection algorithms using the F1 score as the main metric. AD-DMKDE shows better-than-state-of-the-art performance over twenty anomaly detection data sets, being notably superior to classic algorithms and comparable to deep-learning-based methods. The performance of the method does not seem to be affected by the anomaly rate or the size of the data sets, but it seems to perform better on data sets with low-dimensional samples. AD-DMKDE does not have large memory requirements, because the method constructs a single density matrix during the whole training phase, whose size is defined by the embedding. It is therefore possible to summarize very large data sets in relatively small matrices, which makes AD-DMKDE a scalable solution in those cases. Also, the method allows easy interpretability of the results, because each data point labeled as “anomaly” can be understood as a sample lying in a low-density region with respect to normal data. As future work, we will continue to further develop the main concepts of AD-DMKDE, building new algorithms based on the combination of Fourier features and density matrices with deeper neural networks, such as autoencoders and variational autoencoders.

References

1. Aggarwal CC (2016) Outlier analysis, 2nd edn
2. Blázquez-García A, Conde A, Mori U, Lozano JA (2021) A review on outlier/anomaly detection in time series data. ACM Comput Surv (CSUR) 54(3):1–33
3. Bouyeddou B, Harrou F, Kadri B, Sun Y (2021) Detecting network cyber-attacks using an integrated statistical approach. Clust Comput 24:1435–1453
4. Breunig MM, Kriegel HP, Ng RT, Sander J (2000) LOF: identifying density-based local outliers. In: Proceedings of the 2000 ACM SIGMOD international conference on management of data, pp 93–104
5. Gallego JA, González FA (2022) Quantum adaptive Fourier features for neural density estimation. https://doi.org/10.48550/ARXIV.2208.00564
6. González FA, Gallego A, Toledo-Cortés S, Vargas-Calderón V (2022) Learning with density matrices and random features. Quantum Mach Intell 4(2):23. https://doi.org/10.1007/s42484-022-00079-9
7. Janssens J, Huszar F, Postma E, van den Herik H (2012) Stochastic outlier selection
8. Kalair K, Connaughton C (2021) Anomaly detection and classification in traffic flow data from fluctuations in the flow–density relationship. Transp Res Part C Emerging Technol 127:103178
9. Kingma DP, Welling M (2014) Auto-encoding variational Bayes
10. Li Z, Zhao Y, Botta N, Ionescu C, Hu X (2020) COPOD: copula-based outlier detection, pp 1118–1123
11. Liu F, Huang X, Chen Y, Suykens JAK (2020) Random features for kernel approximation: a survey on algorithms, theory, and beyond
12. Liu FT, Ting KM, Zhou ZH (2008) Isolation forest. In: 2008 eighth IEEE international conference on data mining, pp 413–422. IEEE
13. Lv P, Yu Y, Fan Y, Tang X, Tong X (2020) Layer-constrained variational autoencoding kernel density estimation model for anomaly detection. Knowl Based Syst 196:105753
14. Pang G, Shen C, Cao L, Hengel AVD (2021) Deep learning for anomaly detection: a review. ACM Comput Surv (CSUR) 54(2):1–38
15. Pevny T (2016) Loda: lightweight on-line detector of anomalies. Mach Learn 102:275–304
16. Prasad NR, Almanza-Garcia S, Lu TT (2009) Anomaly detection. Comput Mater Continua 14:1–22
17. Rahimi A, Recht B (2007) Random features for large-scale kernel machines. In: Proceedings of the 20th international conference on neural information processing systems, NIPS'07, pp 1177–1184. Curran Associates Inc.
18. Ramaswamy S, Rastogi R, Shim K (2000) Efficient algorithms for mining outliers from large data sets, pp 427–438. Association for Computing Machinery
19. Rayana S (2016) ODDS Library. http://odds.cs.stonybrook.edu
20. Rippel O, Mertens P, Konig E, Merhof D (2021) Gaussian anomaly detection by modeling the distribution of normal data in pretrained deep features. IEEE Trans Instrum Meas 70:1–13
21. Ruff L et al (2021) A unifying review of deep and shallow anomaly detection. Proc IEEE 109:756–795
22. Ruff L et al (2018) Deep one-class classification, vol 80. PMLR
23. Santosh T, Ramesh D (2020) Machine learning approach on Apache Spark for credit card fraud detection. Ingenierie des Systemes d'Information 25:101–106
24. Schölkopf B, Platt JC, Shawe-Taylor J, Smola AJ, Williamson RC (2001) Estimating the support of a high-dimensional distribution. Neural Comput 13:1443–1471
25. Zhang C et al (2021) Unsupervised anomaly detection based on deep autoencoding and clustering. Secur Commun Netw 2021:1–8
26. Zhao Y, Nasrullah Z, Li Z (2019) PyOD: a Python toolbox for scalable outlier detection. J Mach Learn Res 20

Enhancing Sentiment Analysis Using Syntactic Patterns

Ricardo Milhazes and Orlando Belo

Abstract Using specialized analysis tools that combine natural language processing techniques with machine-learning-based sentiment analysis, it is possible to establish the positive and negative sentiments expressed in opinion texts. Thus, organizations can act in an adequate way, having the opportunity to improve their relationship with their customers and strengthen their loyalty, according to the type of sentiment identified. In this paper we present and describe a sentiment analysis system especially developed to identify sentiments of different polarities expressed in opinion texts written by students of an e-learning application. We slightly modified the usual way of approaching sentiment analysis problems by using Hearst patterns to improve the efficiency of the classification models, valuing the sentiments expressed on a wider scale of classification values. Keywords Sentiment Analysis · Natural Language Processing · Text Mining · Machine Learning · Syntactic Patterns · Hearst Patterns

1 Introduction

Nowadays, the interactions that people have on organizations' websites or social networks are carried out through written messages, covering very distinct cases that vary from simple occasional situations to well-founded opinions about goods and services. The growth rate of these interaction processes is astonishing, making them today one of the most diversified data sources in the domain of Big Data [1, 2].

R. Milhazes (B) · O. Belo
ALGORITMI R&D Centre/LASI, University of Minho, Campus of Gualtar, 4710-057 Braga, Portugal
e-mail: [email protected]
O. Belo e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_32


A very concrete case of this exponential growth of data occurred in October 2012 when, during the first presidential debate between then-President Barack Obama and Mitt Romney, more than 10 million tweets were published in the space of two hours, containing quite diversified information [3]. Some of the tweets proved to be important information sources, as they contained opinions about various subjects of daily life, such as health, or about the candidates themselves. These tweets, when properly analyzed, could reveal people's preferences and feelings about issues that are important to both candidates. All these data sources, often textual and unstructured, can be processed and analyzed by combining natural language processing techniques [4] with text mining [5], for recognizing concepts and their relationships, for example, jointly with machine-learning-based sentiment analysis techniques [6], for identifying opinions or establishing sentiments expressed in texts posted by people. In the last few years, sentiment analysis [7, 8] has been extensively explored in several application domains, representing a possible solution to understand the emotions, appreciations and opinions regarding entities such as services, people and products, expressed in opinion texts. Analyses of this nature also provide elements for knowing the degree of satisfaction of software system users [9], regarding functional and non-functional aspects that usually concern their administrators, such as the form of interaction, availability, usefulness, or system performance. In this paper, we present and discuss the main characteristics and functionalities of a specific sentiment analysis system that uses syntactic patterns (Hearst patterns) for refining sentiment classification and characterization. Hearst patterns [10] are particularly interesting for identifying and extracting hypernym/hyponym pairs, which can be used to define hierarchies of concepts through “is-a” relationships. To demonstrate the utility and effectiveness of the system, we used a set of students' opinions, expressed in natural language texts, about their experience with a specific e-learning application. In the remainder of the paper, we discuss some fundamentals regarding sentiment analysis systems and applications, their structure and organization (Sect. 2), present the application case, the basics of the system we conceived, and the application of lexical patterns in the sentiment analysis process, discussing their utility and results (Sect. 3). Finally, we present conclusions and some research lines for future work (Sect. 4).

2 Related Work

2.1 Sentiment Analysis

Sentiment analysis [7, 8] is an extremely important area of work these days. The tools and applications that exist today on the market show their importance for organizations in understanding the behavior of their customers, in various fields of activity, from advertising or marketing to the optimization of computer systems.


The results obtained from the application of sentiment analysis techniques give us the possibility to design and implement measures and develop actions related to service quality, risk identification, customer loyalty, or brand recognition, just to name a few. As such, knowing how customers act and react allows organizations to make decisions and develop actions better suited to their customers, making them more empathetic and, obviously, improving their quality of service and increasing their profits. Essentially, we can divide sentiment analysis into three different types, namely at the document, sentence or aspect level. The analysis of sentiments at the document level is the simplest, because it assumes the existence of a general opinion, usually positive or negative, which is expressed by the author of the document about a specific subject [11]. There are numerous applications of sentiment analysis at the document level. These applications are usually based on supervised approaches, given the binary nature of the classification (positive or negative) of the problems addressed [12]. At the sentence level, the overwhelming majority of sentiment analysis implementations use only subjective sentences. Gathering sentiments from objective sentences is extremely complex. For this reason, techniques that distinguish opinion statements from statements containing factual information are frequently used in the data pre-processing phase. For example, Yu and Hatzivassiloglou [13] created methods for distinguishing subjective sentences from objective sentences. Finally, there is sentiment analysis at the aspect level. The two levels presented before are suitable for sentences or opinion documents referring to only one general attribute of an entity. However, many entities have several attributes, and sometimes it is important to observe meticulously the information that opinions contain. Aspect-level sentiment analysis aims to recognize the expressions of sentiment in an opinion text and all the aspects to which they refer.

2.2 Sentiment Analysis Processes

The implementation of a system that can extract and classify sentiments efficiently and effectively is a complex task. In fact, at the moment, there is no implementation that is considered universal for the task of sentiment analysis, because each implementation is composed of small processes that, depending on the task to be performed, can be changed to make it possible to achieve more reliable solutions. However, there is a set of tasks that are usually performed in most sentiment analysis processes. Medhat et al. [14] argue that these processes can be developed along four well-defined stages, namely (Fig. 1): 1) identifying sentiments, 2) selecting features or characteristics, 3) classifying sentiments, and 4) defining polarity. Although this model was proposed for the evaluation of products, we think that we can use it as a base model to develop a generic sentiment analysis process. Let us see what each of the aforementioned tasks involves. The task of identifying sentiments is responsible for evaluating the subjective content and extracting the opinion expressed in the opinion text, which makes it possible to determine whether a sentence is subjective or not. Then, the feature selection task is carried out, with the aim of increasing the efficiency of the classification model used by reducing some of the noise that may be included in the original texts.


Fig. 1 The main tasks of a sentiment analysis process according to Medhat et al.—adapted from [14]

Next comes the sentiment classification task, which defines the orientation of the sentiment expressed in the opinion text, in two or more classes. Finally, we have the polarity definition task. It is the end of the sentiment analysis process, in which we obtain the value of the polarity of the opinion (negative, positive or neutral), using the scores of the opinions provided by the classification mechanisms used. The process proposed by Medhat et al. [14] is well supported by evidence from several applications, and generally fulfills the requirements usually expected of a sentiment analysis system. The sentiment classification phase is, in fact, the only indispensable phase in this type of process, since it is the task that supports the final objective: the attribution of a sentiment value to opinions. Despite being relevant and making useful contributions to the global sentiment analysis process, the other phases may be omitted. However, in the process organization of Medhat et al., the data acquisition phase is implicit. This is a very important task and, for that reason, it should be a part of the sentiment analysis process [15].

3 Sentiment Classification and Analysis

3.1 The Application Case

The application domain was defined early on. Over the last three years, we have been developing a system for assessing students' knowledge, which provides mechanisms for collecting opinions about the functioning, performance and contents of the system. By integrating a sentiment classification system into the e-learning application, we are able to create a set of mechanisms especially oriented toward analyzing the opinions of system users, expressed in texts written in natural language. The opinion classification process analyzes, in particular, the various aspects expressed in the texts, objective and subjective, placing particular emphasis on the latter, in order to determine an appreciation rate that reveals the sentiment of users about the system. We decided that the appreciation rate would be a numerical value between 1 and 5 (1 — very negative, 2 — negative, 3 — neutral, 4 — positive, and 5 — very positive), reflecting the sentiment polarity of users. Knowing this value, it is possible to assess the degree of user satisfaction with the e-learning application and, obviously, its quality of service.


3.2 The Initial Process

Classifying sentiments expressed in textual sources can be carried out in several ways. We can use several types of models, based on supervised or unsupervised learning, or even hybrid learning approaches. To feed the process we used the data arranged by the data preparation process we had done previously. Based on the characteristics of the application domain and on the data we processed, we designed the initial classification process depicted in Fig. 2. We started the process by balancing the preprocessed dataset using a random under-sampling technique [16]. We wanted to remove data from the majority classes, bringing their quantity closer to the quantity of data of the minority class. For the five classes of values (1..5) we considered, the data was balanced, with about 12,240 instances per class. Then, we translated the data into the format required by the machine learning models we selected. Using attribute extraction techniques, such as the TF-IDF vectorizer [17] (TF-IDF) and Word2Vec [18], we transformed the textual data into vectors, and trained and tested classification models implemented using Random Forest [19] and Support Vector Machines [20]. To validate the results we used a cross-validation technique, Stratified K-Fold [21], with five partitions, preserving the percentage of training and test data in each of the partitions created. We obtained the best results using TF-IDF for attribute selection in both models, with precision values of 54% and 55%, respectively (Table 1, “Precision (Run 1)”). However, as we can see, these precision values were not satisfactory. One of the reasons for these results may have been the difficulty the system had in classifying opinion texts into more than two or three classes. Usually, sentiment classification problems focus on classifying opinion texts as “positive” or “negative”, and sometimes “neutral”.
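A minimal sketch of the initial process described above is shown below, using scikit-learn; this is our own illustration, not the authors' implementation, and it assumes texts and labels hold the (already balanced) opinion texts and their 1–5 ratings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# texts: list of opinion strings; labels: ratings 1..5, balanced by random under-sampling.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier())

# Stratified 5-fold cross-validation preserving the class proportions in each partition.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, texts, labels, cv=cv, scoring="precision_weighted")
print(scores.mean())
```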

Fig. 2 The initial sentiment analysis classification process

Table 1 Sentiment classification process precision results

Attribute selection model | Machine learning model   | Precision (Run 1) | Precision (Run 2)
TF-IDF                    | Random Forest            | 0.54              | 0.73
TF-IDF                    | Support Vector Machines  | 0.55              | 0.73
Word2Vec                  | Random Forest            | 0.52              | 0.70
Word2Vec                  | Support Vector Machines  | 0.50              | 0.68


Table 2 Confusion matrices of the classification models

a) TF-IDF + Random Forest
        1      2      3      4      5
1   10215    784    802    214    225
2    4287   2427   3849   1028    649
3    1669   1224   6381   2012    954
4     680    463   2471   4528   4098
5     316    139    542   1963   9280

b) TF-IDF + Support Vector Machines
        1      2      3      4      5
1    9610   1393    848    228    161
2    3871   3409   3849    802    309
3    1273   1897   6367   2065    638
4     387    533   2553   5085   3682
5     184    128    473   2221   9234

For this reason, we ran our models again for classifying opinions with values between 1 and 5, but this time targeting a polarity with negative, positive and neutral classes. In the column “Precision (Run 2)” of Table 1, we can see the new precision values obtained in this second run. The precision increased by almost 20% in some cases. This suggests that models that classify opinion texts with values between 1 and 5 do not distinguish very well opinion texts classified with 1 from texts classified with 2, or texts classified with 4 from texts classified with 5. To check this, we analyzed the confusion matrices of the two classification models with the highest precision values for classifications between 1 and 5 (Table 2). As expected, the classification of classes 2 and 4 negatively influenced the overall effectiveness, since they had precision values between 30 and 40%. Furthermore, we see that the classification models have difficulties distinguishing opinions with a classification of 3 from opinions with a classification of 2 or 4.

3.3 Using Syntactic Patterns

We decided to improve the effectiveness of the models by exploring other classification approaches. Of all the approaches we experimented with, the use of Hearst patterns was the most fruitful. Hearst [10] developed a method for the automatic acquisition of hyponyms in large-scale texts, using syntactic patterns that usually reveal the presence of hyponyms in a sentence. The pattern in Fig. 3 identifies the occurrence of noun phrases (NP) in sentences, revealing the set of hyponyms presented in the figure; [10] and [22] provide other examples of applications of syntactic patterns. Since it is possible to predict the presence of a hyponym in a sentence, we believe it is also possible to predict the classification of the sentiments expressed in a sentence or in a text. So, having adopted syntactic patterns, we decided to use a POS tagging method for identifying them in opinion texts. We redefined our initial sentiment classification process (Fig. 2), now including a new task for defining syntactic patterns (Fig. 4). The big difference between the two process configurations lies in the use of syntactic patterns as an additional task, for improving the performance and confidence of the classification models. Using this new configuration, we retrained and tested the models.
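The POS tagging step can be sketched as follows. This is an illustrative example using NLTK (the paper does not specify which tagger was used); it maps each word of an opinion sentence to its grammatical class, producing the tag sequences from which patterns such as VBN-CD-IN-DT-JJS are later built.

```python
from nltk import word_tokenize, pos_tag
# Requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def sentence_to_tags(sentence):
    """Map each word of an opinion sentence to its grammatical class (POS tag)."""
    return [tag for _, tag in pos_tag(word_tokenize(sentence))]

sentence_to_tags("this has been one of the best sustainability courses")
# e.g. ['DT', 'VBZ', 'VBN', 'CD', 'IN', 'DT', 'JJS', 'NN', 'NNS']
```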


Fig. 3 An example of a Hearst pattern for identifying hyponyms in a sentence

Fig. 4 The sentiment analysis classification process with syntactic patterns

In this process, before the removal of non-relevant words and the lemmatization step, all words were tagged with their grammatical class. This transformed each sentence into a sequence of tags representing the grammatical classes of all the words present in the sentence. Next, the dataset was divided into two parts, a training dataset and a test dataset. Only the training dataset was used to identify syntactic patterns, so that the model would not be distorted when receiving the test dataset. In the training dataset, we used the information about the grammatical classes of the words in the opinion texts to extract all sequences of five, six and seven consecutive grammatical classes, which were grouped according to the classification of the opinion. Then, for each class, we kept only some of the recognized patterns, applying the following selection criterion (see the sketch below): if, for a class A, the confusion matrix of the classification model shows a large amount of data incorrectly annotated as class B or C, then all patterns that also occur in class B or class C are removed from the patterns of class A. The classification process produced many class 2 opinions that were annotated as class 1 or class 3 in the first run. Thus, all the patterns found in sentences of opinions with classification 2 that also occurred in opinions with classification 1 or 3 were not kept as class 2 patterns. Table 3 shows some examples of patterns identified during the classification process (“Pattern”), as well as the class of opinions to which they belong (“Class”), the number of times they occurred (“NrO”), and brief examples of sentences in which the patterns were found (“Sentences”). The abbreviations VB, NN, RB, IN, VBN, CD, DT and JJS represent, respectively, a verb in base form, a singular noun, an adverb, a preposition or subordinating conjunction, a past participle verb, a cardinal number, a determiner, and a superlative adjective.
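The pattern extraction and the selection criterion just described can be sketched as follows; this is our own simplification with hypothetical data structures (sets of pattern strings per class, and a mapping from each class to the classes it is confused with), not the authors' code.

```python
def pos_ngrams(tags, sizes=(5, 6, 7)):
    """All sequences of 5, 6 and 7 consecutive grammatical classes in a tag list."""
    return {"-".join(tags[i:i + n]) for n in sizes for i in range(len(tags) - n + 1)}

def filter_class_patterns(patterns_by_class, confused_with):
    """Remove from each class the patterns that also occur in the classes it is
    most often confused with (read off the confusion matrix), e.g. class 2 vs 1 and 3."""
    filtered = {}
    for cls, patterns in patterns_by_class.items():
        banned = set()
        for other in confused_with.get(cls, []):
            banned |= patterns_by_class.get(other, set())
        filtered[cls] = patterns - banned
    return filtered
```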

Table 3 Examples of some syntactic patterns identified in the classification process

Pattern: VB-NN-RB-RB-IN | Class: 2 | NrO: 14
Sentences: “the videos for this class move way too quickly for a beginner…” or “…easy to follow examples that do make sense however then in the activities they give you wildly more difficult…”

Pattern: VBN-CD-IN-DT-JJS | Class: 5 | NrO: 18
Sentences: “…but it turns out to have been one of the best classes i have ever taken” or “…this has been one of the best sustainability courses i am taking”

The pattern “VBN-CD-IN-DT-JJS” applies precisely to the same set of words in both of the sentences presented. Both clearly represent a positive reaction, which justifies the classification that was assigned (5 stars). The pattern “VB-NN-RB-RB-IN” allows us to conclude that a user appreciated one of the attributes of a course but, at the same time, did not like another attribute, a situation that justifies the classification of 2 stars. Based on cases like these, we believed that syntactic patterns could be a useful tool for supporting sentiment classification models, contributing to the improvement of their confidence level. To confirm this assumption, we carried out a new analysis process. In the next section, we analyze the results obtained by applying syntactic patterns.

3.4 Results Analysis

For both the training and test datasets, we verified the occurrence of the patterns extracted from the training dataset. As before, we selected the two models that produced the best results in the initial process, namely TF-IDF + Random Forest and TF-IDF + Support Vector Machines. In both cases, we applied the GridSearchCV algorithm [23] to optimize the hyper-parameters. Then, we ran the classification models on the test dataset. In Table 4 we present the results obtained for the predictions provided by the selected models. As we can see, there was a significant improvement in the precision of the models, which increased by about 15% in both cases. These results are now satisfactory, considering that we are dealing with a multi-class classification problem over textual data. Similarly to what we did earlier, we present in Table 5 the confusion matrices for the two models, now taking into account the application of syntactic patterns. After analyzing the confusion matrices of Table 5, we verified a significant improvement in the conflict that previously existed in the classification of classes 2 and 4, and particularly of class 2.
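A minimal sketch of the hyper-parameter search step is shown below; the parameter grid is hypothetical, since the paper does not list the ranges that were actually searched, and X_train/y_train are assumed to hold the TF-IDF vectors enriched with pattern features and the 1–5 labels.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

search = GridSearchCV(SVC(),
                      param_grid={"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
                      scoring="precision_weighted", cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```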

Table 4 The precision results of the models, using syntactic patterns

Model                    | Precision
TF-IDF + Random Forest   | 0.69
TF-IDF + SVM             | 0.69

Table 5 Confusion matrices for the classification models with syntactic patterns

a) TF-IDF + Random Forest
       1     2     3     4     5
1   2181   266   221   114    44
2    170  1756   226    77    19
3    130   272  1436   399    83
4     35    86   345  1195   391
5     31    54   154   586  1933

b) TF-IDF + Support Vector Machines
       1     2     3     4     5
1   2119   214   188    77    28
2    232  1809   294    86    28
3    142   313  1474   425   110
4     42    77   378  1323   555
5     12    21    84   460  1749

Table 6 Models’ precision results using syntactic patterns for a ternary classification

Model                    | Precision
TF-IDF + Random Forest   | 0.80
TF-IDF + SVM             | 0.81

Now, the model clearly differentiates an opinion of class 2 from an opinion of class 1 or 3. Previously, we did not have this situation. To go a little further in our analysis, we decided to analyze the precision of the models when facing a ternary classification problem (positive, negative or neutral). Then, we carried out an identification of syntactic patterns for the “neutral” and “negative” classes, which were not correctly distinguished, contrary to what happened with the “positive” class; the results are shown in Table 6. Similar to what happened for the multi-class classification problem, there was also a significant increase in the models' precision in the ternary classification process.

4 Conclusions and Future Work

In this paper, we presented and discussed the use of several sentiment analysis techniques in the establishment of an appreciation rate, based on the sentiments expressed in opinion texts that students posted in a reviewing application, regarding their experience with an e-learning application. Usually, this type of analysis is done using static indices, resulting from the attribution of a rating, using one or more stars, which reveals someone's opinion about a service or a product. We kept this classification mode, but used sentiment analysis of the opinion texts to generate a classification between 1 and 5, as a way to guarantee that the opinions expressed were valued and duly justified by the opinion texts we processed. The opinion texts we used were randomly collected and fit properly in the context of the application case. However, in order to establish a more adequate classification, we had to pre-process the collected texts so that they acquired the data format required by the attribute selection models we selected (TF-IDF and Word2Vec). After a first approach to the classification of the sentiments expressed in the opinion texts, we found that the classification models we used provided predictions with an unsatisfactory precision level.


To overcome this situation, we studied and evaluated other types of approaches for classifying sentiments, and found in the use of syntactic patterns a viable solution for improving the results we had obtained previously. The integration of these patterns into our classification process has, in fact, improved the precision of the results of the models we implemented. In this way, syntactic patterns proved to be very useful in sentiment classification processes such as the one applied in our application case. In the short term, we intend to improve the sentiment analysis system we implemented in several aspects, namely: designing and creating an automatic opinion gathering system with the ability to trigger sentiment analysis mechanisms in real time, providing customers' feedback to system managers immediately; improving the use of syntactic patterns, discovering new application instances to better clarify the difference between classification levels 4 and 5; and starting a new research initiative for discovering opinion elements (or combinations of them) that reveal irony, sarcasm or effective negation, in order to improve the confidence of the models we used for classifying sentiments. Acknowledgements This work has been supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.

References

1. Chen T, Guestrin G (2016) XGBoost: a scalable tree boosting system. In: Proceedings of KDD'2016, San Francisco, CA, USA. https://doi.org/10.1145/2939672.2939785
2. Tsai C, Lai C, Chao H, Vasilakos A (2015) Big data analytics: a survey. Journal of Big Data 2:1–32. https://doi.org/10.1186/s40537-015-0030-3
3. Enli G, Naper A (2015) Social media incumbent advantage: Barack Obama and Mitt Romney's tweets in the 2012 US presidential election campaign. In: The Routledge companion to social media and politics, 1st edn. Taylor & Francis Group
4. Sharma A (2021) Natural language processing and sentiment analysis. Int Res J Comput Sci 8(10):237. https://doi.org/10.26562/irjcs.2021.v0810.001
5. Maheswari M (2017) Text mining: survey on techniques and applications. Int J Sci Res (IJSR) 6(6):1660–1664
6. Zhang L, Wang S, Liu B (2018) Deep learning for sentiment analysis: a survey. WIREs Data Min Knowl Disc 8(4):e1253. https://doi.org/10.1002/widm.1253
7. Mohey El-Din D (2016) A survey on sentiment analysis challenges. J King Saud Univ Eng Sci. https://doi.org/10.1016/j.jksues.2016.04.002
8. Routray P, Swain C, Mishra S (2013) A survey on sentiment analysis. Int J Comput Appl (0975–8887) 76(10)
9. Saini M, Chahal K, Verma R, Singh A (2020) Customer reviews as the measure of software quality. IET Softw 14:850–860. https://doi.org/10.1049/iet-sen.2019.0309
10. Hearst M (1992) Automatic acquisition of hyponyms from large text corpora. In: Proceedings of the fourteenth international conference on computational linguistics, Nantes, France. https://doi.org/10.3115/992133.992154
11. Feldman R (2013) Techniques and applications for sentiment analysis. Commun ACM 56:82–89. https://doi.org/10.1145/2436256.2436274
12. Pang B, Lee L, Vaithyanathan S (2002) Thumbs up?: sentiment classification using machine learning techniques. In: Proceedings of the ACL-02 conference on empirical methods in natural language processing (EMNLP'02), vol 10, pp 79–86. https://doi.org/10.3115/1118693.1118704
13. Yu H, Hatzivassiloglou V (2003) Towards answering opinion questions: separating facts from opinions and identifying the polarity of opinion sentences. In: Proceedings of the 2003 conference on empirical methods in natural language processing (EMNLP'03), pp 129–136. https://doi.org/10.3115/1119355.1119372
14. Medhat W, Hassan A, Korashy H (2014) Sentiment analysis algorithms and applications: a survey. Ain Shams Eng J 5(4):1093–1113
15. Vinodhini G, Chandrasekaran T (2012) Sentiment analysis and opinion mining: a survey. Int J Adv Res Comput Sci Softw Eng 2(6):282–292
16. Bhardwaj P (2019) Types of sampling in research. J Pract Cardiovasc Sci 5(3):157. https://doi.org/10.4103/jpcs.jpcs_62_19
17. Simha A (2021) Understanding TF-IDF for machine learning, a gentle introduction to term frequency-inverse document frequency. https://www.simplilearn.com/real-impact-socialmedia-article, last accessed 2022/09/05
18. Ma L, Zhang Y (2015) Using Word2Vec to process big text data. In: Proceedings of the IEEE international conference on big data, San Jose, CA. https://doi.org/10.1109/BigData.2015.7364114
19. Cutler A, Cutler D, Stevens J (2011) Random forests. Mach Learn 45(1):157–176. https://doi.org/10.1007/978-1-4419-9326-7_5
20. Srivastava D, Bhambhu L (2010) Data classification using support vector machine. J Theor Appl Inf Technol 12(1):1–7
21. Berrar D (2019) Cross-validation. Encycl Bioinf Comput Biol 1:542–545. https://doi.org/10.1016/B978-0-12-809633-8.20349-X
22. Roller S, Kiela D, Nickel M (2018) Hearst patterns revisited: automatic hypernym detection from large text corpora. In: Proceedings of the 56th annual meeting of the association for computational linguistics (short papers), pp 358–363, Melbourne, Australia
23. Ranjan G, Verma A, Sudha R (2019) K-nearest neighbors and grid search CV based real time fault monitoring system for industries. In: Proceedings of the IEEE 5th international conference for convergence in technology (I2CT). https://doi.org/10.1109/I2CT45611.2019.9033691

Feature Selection for Performance Estimation of Machine Learning Workflows

Roman Neruda and Juan Carlos Figueroa-García

Abstract Performance prediction of machine learning models can speed up automated machine learning procedures, and it can also be incorporated into model recommendation algorithms. We propose a meta-learning framework that utilizes information about previous runs of machine learning workflows on benchmark tasks. We extract features describing the workflows and meta-data about the tasks, and combine them to train a regressor for performance prediction. This way, we obtain the model performance prediction without any training, just by means of feature extraction and inference via the regressor. The approach is tested on the OpenML-CC18 Curated Classification benchmark, estimating the 75th percentile value of the area under the ROC curve (AUC) of the classifiers. We were able to obtain consistent predictions with an R² score of 0.8 for previously unseen data. Keywords Machine learning · Performance estimation · Auto-ML

1 Introduction

Machine learning (ML) has become one of the most successful areas of artificial intelligence, especially with recent advances in applications of deep learning models in the computer vision and natural language processing areas. For general data mining tasks, the classical approaches utilizing collections of regressors or classifiers with preprocessing techniques and ensembles represent a viable alternative to big deep networks. The area of automated machine learning (AutoML) is concerned with the search for complete machine learning solutions—workflows—tailored to a given dataset.

R. Neruda (B)
Institute of Computer Science, Czech Academy of Sciences, Pod Vodárenskou věží 2, 182 07 Prague, Czech Republic
e-mail: [email protected]
J. C. Figueroa-García
Universidad Distrital Francisco José de Caldas, Calle 13, 31–75, Bogotá D.C., Colombia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_33


In the area of deep networks, the so-called neural architecture search (NAS) algorithms have become a topic of extensive study [7]. They usually focus on a particular task, such as image classification, and search for a suitable deep network architecture to solve it. AutoML techniques in traditional machine learning, on the other hand, work with workflows represented as linear sequences or directed acyclic graphs of suitable ML models. In both areas of AutoML, the performance prediction of the candidate models is the bottleneck of the search algorithms because it is usually a very time-consuming task. Several methods have been proposed, mostly in the NAS domain, to shorten or skip the training completely by extracting features of the model and estimating its performance based on these features. The contributions of this paper lie in combining features extracted from ML workflows with meta-data features describing tasks. This data is used to train a performance predictor model which acts as a replacement for training the candidate models. To achieve this goal, we extract features from the standardized workflow descriptions to create a TF-IDF-based signature of each workflow. Moreover, we extract meta-data features from benchmark datasets, and then train a regression machine learning model to predict performance for each pair of workflow and dataset features. This way, the previous performance of ML models is used to predict future performance on similar data. The approach is tested on the OpenML-CC18 benchmark suite containing 72 curated and well-described classification datasets. The results show a reasonable performance of our solution. The structure of this paper is the following. First, in Sect. 2, we define the necessary notions and framework and review related literature. Section 3 describes our approach in detail. Experiments on the benchmark suite are presented in Sect. 4. Finally, the Conclusions section contains discussion and future work.

2 Preliminaries

Machine learning is a general area of computer science concerned with creating models based on data [3–5]. Usually, the type of model is selected by human experience and knowledge. The field of meta-learning focuses on selecting the right algorithm for a given problem specified by data. The authors of [2] define metalearning as “the study of principled methods that exploit metaknowledge to obtain efficient models and solutions by adapting machine learning processes”. In [9], it is stated that “A metalearning system must include a learning subsystem, which adapts with experience. Experience is gained by exploiting meta-knowledge extracted in a previous learning episode on a single dataset and/or from different domains or problems.” The recently introduced term AutoML usually refers to all aspects of automating the machine learning process, such as model selection, hyperparameter optimization, model search, etc. [7] Thus, many AutoML systems utilize meta-learning approaches. In our work, we are interested in using experience from previous learning to estimate the performance of a machine learning model on various datasets.


Related research in this area is concerned with performance comparisons of different ML models on one dataset, and with the influence of hyper-parameter settings. The authors of [6] assess the importance of hyperparameters by means of regression trees. The work is generalized to more datasets in [14]. The authors of [2, Chapter 17] present a study where hierarchical clustering is used to show similarity in the performance of several classifiers. The effects of using feature selection and of linear vs. non-linear models have been studied in [13] and [16], respectively.

3 Contribution

This section describes our approach to utilizing meta-learning principles for ML model performance estimation. The general scheme of the approach is presented in Fig. 1. The three basic components are descriptions of the ML models (flows), denoted by X_f, descriptions of the datasets, X_d, and the results of evaluations of models on datasets. These three parts are used to define a supervised learning regression task, where the concatenated feature vectors X_f and X_d are the inputs and the evaluations represent the output of the regressor. In the following we describe the components in more detail. In our work we focus on more complex ML models than previous works by utilizing the concept of workflows, or flows, that describe a sequence of ML models with preprocessing algorithms and their combinations by means of ensembles. The notion of ML flows is described and implemented in the scikit-learn Python library [12] and is also used in the OpenML portal [17]. An example of a flow is presented in Fig. 2.

Fig. 1 The scheme of our performance prediction estimator. The estimation is realized by the regressor, which is a gradient-boosted decision tree (GB-DT) in our case. The data sources for the regressor are two sets of features: X_d are the meta-data features of the dataset, and X_f are the features derived from the ML flow by the TF-IDF vectorizer


Fig. 2 OpenML scikit-learn flow describing a Random Forest classifier with preprocessing and hyperparameter search

Table 1 Meta-data attributes extracted from each dataset

Number Of Classes
Number Of Features
Number Of Instances
Number Of Instances With Missing Values
Number Of Missing Values
Number Of Numeric Features
Number Of Symbolic Features
Majority Class Size
Minority Class Size
Maximum Nominal Attribute Distinct Values

From the textual description of each flow we automatically extract features by means of the TF-IDF vectorizer [15]; thus each flow is translated into a vector of real values, X_f, representing the relevant words of the flow description. The second component of our feature extraction is the dataset information. For each dataset we gather meta-data describing the type and properties of the data. The meta-data contain information about the size of the dataset and the types of its elements, such as the number of instances, number of features, number of classes, etc. The complete meta-data is presented in Table 1. This information forms a vector of numbers, which is normalized to obtain the second part of the input data, X_d. The performance estimation of the flow f run on data d is realized in our model by the regressor trained on data from previously saved experiments. We have chosen the gradient boosting decision tree regressor (GB-DT) [8] because it is supposed to be especially efficient for TF-IDF vectors, but our results show that other regression models (such as Bayesian regression or random forest models) can be used with similar success. As the performance criterion to estimate, we have chosen the area under the ROC curve (AUC), which is the standard measure of classification performance.
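The feature-combination step can be sketched as follows. This is an illustrative example, not the authors' code; it uses scikit-learn's GradientBoostingRegressor as a stand-in for the GB-DT regressor (the paper uses LightGBM [8]), and flow_texts, meta_features and y (the aggregated AUC targets) are assumed to be available.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingRegressor

# flow_texts: textual descriptions of the ML workflows for every recorded run,
# meta_features: normalized per-dataset meta-data vectors aligned with those runs,
# y: the aggregated AUC obtained by each flow on each dataset.
vectorizer = TfidfVectorizer()
X_f = vectorizer.fit_transform(flow_texts).toarray()
X = np.hstack([X_f, meta_features])        # concatenate flow and dataset features
regressor = GradientBoostingRegressor().fit(X, y)

predicted_auc = regressor.predict(X[:5])   # performance estimates without training the flows
```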


4 Experiments

This section presents the results of our experiments with the method described above. We have chosen the OpenML-CC18 benchmark suite, which gathers 72 curated and well-described classification datasets together with the results of millions of experiments with thousands of diverse machine learning workflows [1, 11]. We filtered the workflows coming from the scikit-learn Python library only; thus, our workflow dataset contains 9,804 unique ML workflows. Then, the available results of runs of these workflows on data from the benchmark suite were collected, resulting in 72,800 evaluations. The evaluations are not uniformly distributed: some datasets have many evaluations while others have only a few. Also, the distribution of results for particular datasets has large variance, as can be seen in Fig. 3. Thus, we aggregate the several performance results to obtain the true performance output for training the regressor. In our experiments, we used the 75th percentile of the evaluations. While the maximum value could be a good choice as well, we wanted to account for the possibility of sub-optimal hyper-parameter settings. Due to the high variance and some low-quality solutions in the OpenML database, the mean value is not a good aggregation choice. In the following we present the results of two experiments (denoted as Dataset 1 and Dataset 2). The experiments use an identical setup for most parts; they differ only in the parameters of the TF-IDF vectorizer. For Dataset 1 we used the default setting of the TF-IDF model, which results in the vector X_f having dimension 426. For Dataset 2 we set thresholds on the minimal and maximal number of documents for each feature. Thus, we excluded features that appear in more than 1,000 or in fewer than 10 flow descriptions. This in turn represents a big reduction of the feature space, from 426 to 87; the X_f component for Dataset 2 is roughly 20% of the size of X_f for Dataset 1. The results of the performance estimator training are gathered in Table 2. We report the R² score together with the mean squared error (MSE) and mean absolute error (MAE) values. To give an indication of the time complexity, the training and scoring times are also included. The experiments were performed 10 times, and each experiment itself is a 10-fold shuffled cross-validation, i.e. 90% of the data is used for training and 10% for testing. For each value we report the mean and standard deviation over these cross-validations. It can be seen that the performance on the test set is very good, with an R² score around 0.8, which is only slightly lower than the training performance of 0.86. The mean error values support the quality of the prediction as well. For a detailed view, the distributions of R², MSE and MAE are presented in Figs. 4, 5, and 6, respectively. It is interesting to compare the numbers for the experiment with the reduced Dataset 2. The results are, surprisingly, only slightly worse, e.g. 0.796 compared to 0.806 in the test R² score, while the model itself is smaller, as described above. This shows that the smaller TF-IDF model was still able to capture relevant features even after the exclusion of the most and least common words.
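For illustration, the two TF-IDF configurations described above map directly onto scikit-learn's TfidfVectorizer parameters; this is a sketch under our interpretation that the document-frequency thresholds are absolute counts of flow descriptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Dataset 1: default TF-IDF settings (426 flow features in our experiments).
vec_full = TfidfVectorizer()

# Dataset 2: drop terms appearing in more than 1,000 or in fewer than 10 flow
# descriptions, which reduces the flow feature vector to 87 dimensions.
vec_reduced = TfidfVectorizer(min_df=10, max_df=1000)
```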


Fig. 3 Histograms of AUC evaluations for all runs of workflows on the datasets from the OpenML-CC18 Curated Classification benchmark


Table 2 Performance measures for the experiments with the two datasets. Recorded values represent the mean and standard deviation from 10 repetitions of shuffled 10-fold cross-validation. Each measure is computed both on the training and the test sets. Times for the train and score procedures are reported in seconds

Parameter    | Dataset 1 mean | Dataset 1 stdev | Dataset 2 mean | Dataset 2 stdev
test R²      | 0.805989       | 0.022604        | 0.796203       | 0.022666
train R²     | 0.867434       | 0.002152        | 0.856438       | 0.002190
test MAE     | 0.045392       | 0.001910        | 0.046826       | 0.001896
train MAE    | 0.038108       | 0.000312        | 0.039642       | 0.000303
test MSE     | 0.005571       | 0.000702        | 0.005851       | 0.000696
train MSE    | 0.003810       | 0.000064        | 0.004126       | 0.000066
train time   | 9.592160       | 0.073642        | 2.283237       | 0.078939
score time   | 0.036149       | 0.007558        | 0.014189       | 0.004614

Fig. 4 Histograms of R² coefficient values for 10 runs of 10-fold cross-validation. Values computed for training sets (right) and test sets (left). Blue color represents dataset 1, orange dataset 2

Fig. 5 Histograms of mean squared error values for 10 runs of 10-fold cross-validation. Values computed for training sets (right) and test sets (left). Blue color represents dataset 1, orange dataset 2


Fig. 6 Histograms of mean absolute error values for 10 runs of 10-fold cross-validation. Values computed for training sets (right) and test sets (left). Blue color represents dataset 1, orange dataset 2

5 Conclusion

In this paper we have proposed a performance estimator for machine learning models that is based on a meta-learning approach and makes use of previous learning experience. The estimator receives information about the ML workflow and the dataset, and produces an estimate of the performance of the workflow on this data. As the estimator we have used a regression version of the gradient boosting decision tree. The data from the workflow descriptions are processed via the standard TF-IDF vectorizer, where even a relatively limited, smaller model provides good results. The whole approach is tested on the OpenML-CC18 benchmark suite to assess the statistical relevance of the results. We have achieved good results, with R² consistently around 0.8 for unseen data. It is important to emphasize that using the regressor means obtaining the performance estimation in a fraction of the time needed to actually train the workflow on the dataset. We have used only a portion of the data about the performance of various flows contained in the OpenML site to demonstrate the suitability of our approach. Since the training times for the regressor are in the order of seconds to minutes, utilizing bigger data is one of the direct next steps. Also, using a neural network with a vectorizer module, such as word2vec [10], as a more complex regressor seems like a relatively easy future extension. Another area of future enhancement of the model is to consider a larger set of meta-data features X_d that can include more statistical properties of the dataset.

References

1. Bischl B, Casalicchio G, Feurer M, Hutter F, Lang M, Mantovani RG, van Rijn JN, Vanschoren J (2019) OpenML benchmarking suites. arXiv:1708.03731v2 [stat.ML]
2. Brazdil P, van Rijn JN, Soares C, Vanschoren J (2022) Metalearning: applications to automated machine learning and data mining, 2nd edn. Springer, Cham
3. Flach P (2012) Machine learning: the art and science of algorithms that make sense of data. Cambridge University Press, New York
4. Goodfellow IJ, Bengio Y, Courville AC (2016) Deep learning. Adaptive computation and machine learning. MIT Press, Cambridge. http://www.deeplearningbook.org/
5. Hastie T, Tibshirani R, Friedman JH (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer series in statistics. Springer, Cham. http://www.worldcat.org/oclc/300478243
6. Hutter F, Hoos H, Leyton-Brown K (2014) An efficient approach for assessing hyperparameter importance. In: Xing EP, Jebara T (eds) Proceedings of the 31st international conference on machine learning. Proceedings of machine learning research, vol 32. PMLR, Beijing, pp 754–762. https://proceedings.mlr.press/v32/hutter14.html
7. Hutter F, Kotthoff L, Vanschoren J (eds) (2019) Automated machine learning - methods, systems, challenges. Springer, Cham
8. Ke G, Meng Q, Finley T, Wang T, Chen W, Ma W, Ye Q, Liu TY (2017) LightGBM: a highly efficient gradient boosting decision tree. In: NIPS
9. Lemke C, Budka M, Gabrys B (2013) Metalearning: a survey of trends and technologies. Artif Intell Rev 44:117–130
10. Mikolov T, Sutskever I, Chen K, Corrado G, Dean J (2013) Distributed representations of words and phrases and their compositionality. In: Proceedings of the 26th international conference on neural information processing systems, NIPS 2013, vol 2. Curran Associates Inc., Red Hook, pp 3111–3119
11. Mueller AC, Guido S (2016) Introduction to machine learning with Python: a guide for data scientists. O'Reilly Media, Inc.
12. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
13. Post MJ, van der Putten P, van Rijn JN (2016) Does feature selection improve classification? A large scale experiment in OpenML. In: IDA
14. van Rijn J, Hutter F (2018) Hyperparameter importance across datasets, pp 2367–2376
15. Salton G, Buckley C (1988) Term-weighting approaches in automatic text retrieval. Inf Process Manag 24(5):513–523. https://www.sciencedirect.com/science/article/pii/0306457388900210
16. Strang B, van der Putten P, van Rijn JN, Hutter F (2018) Don't rule out simple models prematurely: a large scale benchmark comparing linear and non-linear classifiers in OpenML. In: IDA
17. Vanschoren J, van Rijn JN, Bischl B, Torgo L (2013) OpenML: networked science in machine learning. SIGKDD Explor 15(2):49–60. https://doi.org/10.1145/2641190.264119

Development of a Hand Gesture Recognition Model Capable of Online Readjustment Using EMGs and Double Deep-Q Networks

Danny Díaz, Marco E. Benalcázar, Lorena Barona, and Ángel Leonardo Valdivieso

Abstract Hand Gesture Recognition (HGR) has enabled the development of alternative forms of human-machine interaction in recent years. HGR models based on supervised learning have been developed with high accuracy. However, over time, the need to add new gestures appears. Consequently, it is necessary to establish a model that can adapt to this need. Reinforcement learning permits developing agents capable of adapting their behavior to dynamic environments. In this work, we use Double Deep-Q Networks (DDQN), a reinforcement learning algorithm, to build an agent (an HGR model in this case), based on electromyography signals (EMGs), capable of recognizing new gestures over time. The proposed model is able to recognize 5 hand gestures, achieving an accuracy of 97.36% for classification and 94.83% for recognition. We achieved this result thanks to a modification of the experience replay, the post-processing of the algorithm, and its Markov Decision Process definition. This model can recalibrate its accuracy over time by adapting to a dynamic environment, while maintaining high performance.

Keywords Catastrophic forgetting · Double deep Q-learning · EMG · Hand gesture recognition · Online readjustment · Reinforcement learning

1 Introduction
The problem of HGR is to identify the class and the instant of occurrence of a hand movement. Hand gesture recognition enables the development of new, more natural and human-centered forms of human-machine interaction [1]. Several models, mostly based on supervised learning, focus on real-time hand gesture recognition, for example feed-forward artificial neural networks [20, 21], convolutional neural networks [2, 22], and support vector machines [23]. These models respond in less than 300 ms and handle from four to ten gestures [23].
D. Díaz · M. E. Benalcázar (B) · L. Barona · Á. L. Valdivieso Artificial Intelligence and Computer Vision Research Lab, Departamento de Informática y Ciencias de la Computación, Escuela Politécnica Nacional, Quito, Ecuador e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_34


These models have an accuracy of about 92% and mainly focus on classification, not on recalibration or recognition (determining at what point in time the gesture occurs). One way to create an HGR system is through electromyographic signals (EMGs), biomedical signals that measure the electrical currents generated in the skeletal muscles. When working with EMGs, there is interpersonal and intrapersonal variability [2]. Interpersonal variability refers to the differences in the statistical properties of EMGs for the same gesture performed by two different people. Intrapersonal variability refers to the differences in the statistical properties of EMGs for the same gesture performed by a given person at two different instants of time [6]. To mitigate interpersonal and intrapersonal variability, it is necessary to build adaptive HGR models that can respond to a changing environment. One way to solve this problem is reinforcement learning, which permits developing agents capable of adapting their behavior to dynamic environments [7]. In this work, we propose an HGR model that takes surface EMG signals as input. These signals measure the electric potential produced by active muscle fibers [3]. The proposed HGR model is user-specific and is able to readjust its behavior over time. The training of this model has two phases: offline (no human interaction) and online (human interaction). For offline training, we use a dataset composed of 612 users. For the online training, we use 100 users. The proposed model has the following improvements over previous reinforcement learning HGR models based on EMGs:
– Experience replay with a reserved cache for each class.
– Re-definition of the Markov Decision Process for this problem.
– Post-processing of impure labels isolated from other groups of labels.
We selected the Double Deep-Q Network because of its readjusting and experience replay capabilities. Unlike other methods, it allows us to classify and recognize new gestures without using explicitly labeled data. These advantages have been explored in very little depth in HGR. Reinforcement learning has been attempted for HGR before; however, we identified several opportunities for improvement that led us to obtain better results. Additionally, when adding new gestures and evaluating classification and recognition, we encountered a major problem: "catastrophic forgetting". We describe ways to mitigate and address this problem in order to obtain the best results. The remainder of this paper is organized as follows. Section 2 presents a literature review. In Sect. 3, the architecture of the proposed model and the online readjustment evaluation are presented. In Sect. 4, an analysis of the results is presented. In Sect. 5, the conclusions as well as future work are given.


2 Literature Review
2.1 Double Deep-Q Network
For classification we essentially use Q-learning, an off-policy reinforcement learning algorithm. For any finite Markov Decision Process (MDP), Q-learning finds an optimal policy by maximizing the expected value of the total reward given both an initial state and an action. This can be implemented with lookup tables that store the Q value for each state and action of an environment [24]. However, this approach is not suitable for environments with large state-action spaces. To simplify the representation of this table, we can use an approximation function; an artificial feed-forward neural network serves this purpose. With this neural network, we can approximate the value of Q for each state received as input, as if it were a regression [24]. For more stable learning, we use the Double Deep-Q Network (DDQN), an off-policy reinforcement learning algorithm that uses double estimation to counteract the overestimation problem of traditional Q-learning [19].
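To make the double-estimation idea concrete, the following minimal sketch (our own illustration, not code from the paper) shows how DDQN regression targets can be computed: the online network selects the greedy action for the next state, while the target network evaluates it. The array shapes and variable names are assumptions made for illustration.

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, terminal, gamma=0.1):
    """Compute DDQN regression targets for a mini-batch.

    rewards:        (B,) reward observed for each transition
    next_q_online:  (B, n_actions) Q-values of the next state from the online network
    next_q_target:  (B, n_actions) Q-values of the next state from the target network
    terminal:       (B,) 1 if the transition ends the episode, else 0
    gamma:          discount factor (the paper uses 0.1)
    """
    # Action selection with the online network ...
    best_actions = np.argmax(next_q_online, axis=1)
    # ... but action evaluation with the target network (double estimation).
    next_values = next_q_target[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1 - terminal) * next_values
```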

2.2 Variability
EMG signals change according to muscle wasting, changes in electrode position and force variation [14]. Some works have reported covariate shift, a phenomenon that occurs when the probability distribution of the input variables (sEMG data) changes with time [15], not only for individual users but also across different users [16]. These changes cause intrapersonal variability, that is, the gesture performed by the same person differs over time [6]. Creating a general model that considers the variation of several users over time is less practical than a user-specific model, because the specific model only considers the variation of the person who will use the HGR system. To minimize this variance, models should be re-trained every time a user interacts with the HGR system [18]; if re-training is not performed, the classification accuracy decreases [17].

2.3 Online Learning and Catastrophic Forgetting
In online/streaming learning, a model learns in a single pass over any portion of the dataset, and it can be evaluated at any point, rather than only between large batches. Labeled input data are usually assumed when updating the model parameters [14]. However, this becomes a problem when we need to add new gestures without labeled data while preserving accuracy. When applying online learning, we also have to consider the "catastrophic forgetting" (CF) problem, an inherent characteristic of connectionist models [8]. As noted in [10], updating neural


Fig. 1 HGR architecture based on DDQN to learn to classify and recognize EMGs

networks incrementally over time causes CF: the new learning overwrites existing representations. In [11], three primary mechanisms to mitigate catastrophic forgetting were identified: 1) replay of previous knowledge, 2) regularization mechanisms to constrain parameter updates, and 3) expanding the neural network as more data becomes available.

3 Materials and Methods
3.1 Architecture
The architecture is composed of 5 stages: data acquisition, pre-processing, feature extraction, classification, and post-processing [9], as shown in Fig. 1.
Data Acquisition. The EMG-EPN-612 dataset was used [13]. This dataset contains EMGs from 612 users; the measurements were made with the MYO Armband device at a sampling frequency of 200 Hz. Half of the users were used for training and validation, and the remaining users were used for testing with an online evaluator on the web platform [12]. The datasets are composed of 25 repetitions of the following gestures: fist, wave in, wave out, open, pinch, and no gesture (relax gesture).
Pre-processing. The energy-based orientation correction method described in [5] was used because the MYO Armband sensor is susceptible to electrode rotation. Moreover, in order to perform recognition, the EMG must be segmented into several windows. Twenty-four windows are generated and labeled using a sliding window of 300 points with a stride of 30 points.
Feature Extraction. Feature extraction methods extract relevant features from the EMGs, which can be defined in the time, frequency, or time-frequency domains [4]. We generated five features for each of the 8 channels of the EMG signal, resulting in 40 features. The 5 features are: Standard deviation (SD), Absolute envelope (AE), Mean absolute value (MAV), Energy (E), and Root mean square (RMS) [5].
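A minimal sketch of this windowing and feature-extraction step. The window length (300), stride (30), 8 channels and the five features follow the text above; the exact definition of the absolute envelope is simplified here, and the function name is our own.

```python
import numpy as np

def window_features(emg, win=300, stride=30):
    """Segment an EMG recording (n_samples x 8 channels) into sliding windows
    and compute 5 features per channel, i.e. 40 features per window."""
    feats = []
    for start in range(0, emg.shape[0] - win + 1, stride):
        w = emg[start:start + win]                      # (win, 8)
        rect = np.abs(w)
        sd = w.std(axis=0)                              # standard deviation (SD)
        mav = rect.mean(axis=0)                         # mean absolute value (MAV)
        # Absolute envelope (AE): simplified here as the mean of a smoothed
        # rectified signal; the original work uses a filtered envelope [5].
        kernel = np.ones(25) / 25.0
        ae = np.array([np.convolve(rect[:, c], kernel, mode="valid").mean()
                       for c in range(w.shape[1])])
        en = np.sum(w ** 2, axis=0)                     # energy (E)
        rms = np.sqrt(np.mean(w ** 2, axis=0))          # root mean square (RMS)
        feats.append(np.concatenate([sd, ae, mav, en, rms]))
    return np.asarray(feats)                            # (n_windows, 40)
```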


Classification. We use a Double Deep-Q Network, and training is done by reading mini-batches from the experience replay (ER). We reserve some space in the ER for each action. This is necessary because the model predicts the no-gesture class (the "relax" state) more frequently than the others, producing over-fitting; with a separate reserved cache per class, we limit its influence on the rest of the gestures. Each tuple inside the ER is structured in the following way:

E_t = [state_t, action_t, reward_t, state_{t+1}, terminal_flag]   (1)

where state_{t+1} is equal to state_t because, no matter which action the agent selects for a state, the state will not change; action_t is the action taken by the agent at state_t; reward_t is the reward given for that action; and terminal_flag indicates whether the analyzed signal window is the final one. The reinforcement learning entities are defined as follows:
– Environment: For offline training, we defined the environment (code) where the label of each window is already known. For online learning, the environment is represented by a human who gives a reward for the final prediction.
– Agent: A Double Deep-Q Network that receives a state and predicts which action will give the best reward.
– State: The window portion of the entire segmented signal.
– Episode: In an episode, the agent interacts with all windows of a single segmented signal.
– Action: The agent must decide which gesture each window belongs to.
– Reward: A value of 1 is given when the agent correctly predicts the signal-window class and −1 when the prediction is incorrect.
The selected hyperparameters for the classification model are shown in Table 1.
Post-processing. The agent predicts one gesture class for each window. To determine the gesture of the complete signal, all window predictions are merged and the most frequent prediction is taken. Predicted gestures that do not match the most frequent prediction are spurious predictions. This way, we can determine the majority class and replace the minority (spurious) classes with the most frequent one.
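A minimal sketch of the class-reserved experience replay described above; the per-class capacity, the eviction policy and the balanced sampling rule are our assumptions, not values reported in the paper.

```python
import random
from collections import deque

class ClassReservedReplay:
    """Experience replay with a separate cache per action/class, so that the
    frequent 'no gesture' class cannot crowd out the other gestures."""

    def __init__(self, n_actions, capacity_per_class=2000):
        self.caches = [deque(maxlen=capacity_per_class) for _ in range(n_actions)]

    def add(self, state, action, reward, next_state, terminal):
        # Each tuple follows Eq. (1); it is stored in the cache of its action.
        self.caches[action].append((state, action, reward, next_state, terminal))

    def sample(self, batch_size):
        # Draw roughly the same number of tuples from every non-empty cache.
        non_empty = [c for c in self.caches if c]
        per_class = max(1, batch_size // len(non_empty))
        batch = []
        for cache in non_empty:
            batch.extend(random.sample(list(cache), min(per_class, len(cache))))
        random.shuffle(batch)
        return batch
```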

3.2 Online Readjustment Evaluation
For the readjustment part, a person directly gives feedback. In order to facilitate this interaction, an interface was developed. We simulated the inclusion of new classes in real time, using the existing gestures in the dataset, to evaluate the change in the model's accuracy as well as its adaptation to the environment.


Table 1 Selected Hyperparameters

Name                 Configuration
Optimizer            ADAM
Input Layer          40 neurons
Hidden Layer 1       40 neurons
Activation Layer 1   ReLU
Dropout Layer        0.1
Hidden Layer 2       40 neurons
Activation Layer 2   ReLU
Output Layer         6 neurons
Gamma                0.1
Loss function        MSE
Learning rate        0.001
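The exact deep learning framework used by the authors is not stated; the following sketch reproduces the layer sizes and settings of Table 1 in PyTorch. The placement of the dropout layer between the two hidden layers follows the ordering of the table, while the rest of the training setup is assumed.

```python
import torch.nn as nn
import torch.optim as optim

def build_q_network():
    # 40 input features -> two hidden layers of 40 units -> 6 Q-values (one per gesture class)
    model = nn.Sequential(
        nn.Linear(40, 40), nn.ReLU(),
        nn.Dropout(0.1),
        nn.Linear(40, 40), nn.ReLU(),
        nn.Linear(40, 6),
    )
    optimizer = optim.Adam(model.parameters(), lr=0.001)   # ADAM, learning rate 0.001
    loss_fn = nn.MSELoss()                                 # MSE loss, as in Table 1
    return model, optimizer, loss_fn
```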

We use different approaches to readjust the model. The goal of each approach is to recover the accuracy lost when new classes are included and to keep it stable. We start by training a model for three gesture classes: "fist", "waveIn", and "waveOut". Then two new gesture classes ("pinch" and "open") are included, without previous offline training. Samples of these new gestures, picked randomly from the dataset of 100 users, are gradually shown to the agent. The advantage of performing this simulation is that we can properly evaluate the accuracy of the agent using the data set labeled with the gestures "open" and "pinch". This whole process is shown in Fig. 2 and is described below; a code sketch of the loop follows the list:

(a) The system receives an electromyographic signal.
(b) We get the ground truth using a muscle activity detection algorithm.
(c) We extract the 40 features for each segmented window.
(d) The agent makes a prediction p.
(e) The features and the predicted label p are associated with each window of the ground truth obtained by the muscle activity algorithm.
(f) The interface displays the gesture predicted by the agent.
(g) A human gives positive or negative feedback.
(h) Tuples are generated to be inserted into the experience replay.
(i) The tuples are added to the experience replay's cache sections.
(j) A 5-epoch training with a reduced learning rate is performed.
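A compact sketch of one interaction of this loop. The helper objects (detect_muscle_activity, window_features, agent, replay, interface) are illustrative placeholders passed in as arguments, not an API defined in the paper, and the reduced learning rate value is an assumption.

```python
def online_readjustment_step(emg, agent, replay, interface,
                             detect_muscle_activity, window_features,
                             epochs=5, lr=1e-4):
    """One interaction of the online loop (steps a-j)."""
    active = detect_muscle_activity(emg)               # (b) boolean mask of active windows
    feats = window_features(emg)                        # (c) 40 features per window
    p = agent.predict(feats)                            # (d) predicted gesture for the signal
    interface.show(p)                                   # (f) display the prediction
    reward = 1 if interface.ask_feedback() else -1      # (g) human feedback -> +1 / -1
    for f in feats[active]:                             # (e, h, i) one tuple per active window
        replay.add(state=f, action=p, reward=reward, next_state=f, terminal=False)
    agent.train(replay, epochs=epochs, lr=lr)           # (j) 5-epoch training, reduced learning rate
```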


Fig. 2 Flow for online learning

Table 2 Classification and recognition accuracy comparison

Model                Classification   Recognition
Proposed model       97.35%           94.82%
Q-learning [9]       90.72%           87.51%
SVM [5]              94.90%           94.20%
Neural Network [4]   96.37%           94.90%

4 Results and Analysis
4.1 Offline/Static Results
The results of the model were: classification 97.36% ± 4.35% and recognition 94.83% ± 5.55%. The classification accuracy of static/offline training is higher than that of other user-specific models from the literature (Table 2). The differences between this work and [9], which uses Q-learning, are: i) the ADAM optimizer is used instead of Stochastic Gradient Descent with momentum, ii) each mini-batch from the experience replay is shown several times in the training phase, iii) we do not assume that an action will change the current state, and iv) the gamma value (discount factor) is significantly reduced, to 0.1. This low gamma causes the future reward to be minimally considered, which is consistent with the redefinition of the Markov Decision Process, where we focus on the current state. Using ADAM is also important with respect to the work done with feed-forward neural networks [4]: Stochastic Gradient Descent does not allow the learning rate to be adapted for each neuron, which slows down learning.


Fig. 3 Online readjustment over the same gestures

Fig. 4 Online readjustment with big batch size retraining

4.2 Online Readjustment Evaluation
Readjusting Using Only Experience and Rewards: When new gestures are added and no retraining with the known-gesture dataset is performed, the accuracy starts at around 15% (Fig. 3). Using only human feedback, a mean accuracy slightly higher than 55% is achieved for classification and recognition after 20 interactions for the 100 users.
Readjusting Using Experience, Rewards and Retraining: Another approach to improve the results is to retrain with the known-gesture dataset. The initial accuracy obtained after retraining is 70%. We gradually add gestures from the new classes, and the final accuracy ranges between 88% and 90% for classification and recognition, respectively. Model accuracy remains nearly constant after 20 interactions for the 100 users (Fig. 4). If we use only experience and no retraining with the known dataset when readjusting the model, it takes many interactions to achieve high accuracy. The best way to reach a higher accuracy quickly is to retrain every four steps with a lower learning rate.


A lack of data (interactions) and a non-optimal learning rate mostly caused the stagnation of accuracy around 90%. Increasing the number of interactions is not a very scalable option. Empirically, what gave us better results was to reuse the initial learning rate and let it decrease over time. We do not rule out that a better criterion could be used.

5 Conclusions and Future Works
We have developed a reinforcement learning model based on a Double Deep-Q Network, with 97.35% accuracy in classification and 94.82% in recognition. We evaluated its robustness and adaptability when including a new class, and we obtained an average accuracy of 89.30% for both classification and recognition. This accuracy is lower than the accuracy reached by training directly with the labeled data, but it has the advantage of using only experience and rewards. The model can be readjusted after a while, thanks to the property of experience replay of breaking correlations. In order to keep the performance of the model high, the person needs to interact with the agent hundreds of times, which in practical terms is not convenient. It is therefore important to enhance learning with data augmentation; this way, whether the prediction is correct or not, much more experience is injected into the experience replay used in real-time training. In the end, the model achieves an almost constant accuracy when new, unknown classes are integrated. The muscle activity detection algorithm may cause the recognition accuracy to decrease due to incorrect segmentation; developing a more accurate algorithm for this task will be important future work.

References 1. Sigalas M, Baltzakis H, Trahanias P (2010) Gesture recognition based on arm tracking for human-robot interaction. In: IEEE/RSJ international conference on intelligent robots and systems, pp 5424–5429. https://doi.org/10.1109/IROS.2010.5648870 2. Reaz M, Hussain M, Mohd-Yasin F (2006) Techniques of EMG signal analysis: detection, processing, classification and applications. Biol Proced Online 8(1):11–35. https://doi.org/10. 1251/bpo115 3. Zwarts M, Stegeman D (2003) Multichannel surface EMG: basic aspects and clinical utility. Muscle Nerve Off J Am Assoc Electrodiagnostic Med 28(1):1–17. https://doi.org/10.1002/ mus.10358 4. Benalcazar M, Valdivieso A, Barona L (2020) A user-specific hand gesture recognition model based on feed-forward neural networks, EMGs and correction of sensor orientation. Appl Sci 10(23):8604. https://doi.org/10.3390/app10238604 5. Barona L, Valdivieso A, Vimos V, Zea J, Benalcázar M (2020) An energy-based method for orientation correction of EMG bracelet sensors in hand gesture recognition systems. Sensors 20(21):6327. https://doi.org/10.3390/s20216327


6. Martens J, Daly D, Deschamps K, Staes F (2015) Intra-individual variability of surface electromyography in front crawl swimming. Public Libr Sci 10(12):e0144998. https://doi.org/10. 1371/journal.pone.0144998 7. Abdulhai B, Pringle R, Karakoulas G (2003) Reinforcement learning for true adaptive traffic signal control. ASCE, pp 278–285. https://doi.org/10.1061/(ASCE)0733-947X(2003)129: 3(278) 8. Kirkpatrick J, Pascanu R, Rabinowitz N, Veness J, Desjardins G, Rusu A, Milan K, Quan J, Ramalho T, Grabska-Barwinska A, Hassabis D, Clopath C, Kumaran D, Hadsell R (2017) Overcoming catastrophic forgetting in neural networks. Proc Natl Acad Sci 114(13):3521– 3526. https://doi.org/10.48550/arXiv.1612.00796 9. Vásconez J, Barona L, Valdivieso A, Benalcazar M (2021) A hand gesture recognition system using EMG and reinforcement learning: a Q-learning approach. In: International conference on artificial neural networks. Springer, Cham, pp 580–591. https://doi.org/10.1007/978-3-03086380-7_47 10. Hayes T, Kafle K, Shrestha R, Acharya M, Kanan C (2020) Remind your neural network to prevent catastrophic forgetting. In: European conference on computer vision. Springer, Cham, pp 466–483. https://doi.org/10.48550/arXiv.1910.02509 11. Parisi G, Kemker R, Part J, Kanan C, Wermter S (2019) Continual lifelong learning with neural networks: a review. Neural Netw 113:54–71. https://doi.org/10.1016/j.neunet.2019.01.012 12. EMG Gesture Recognition Evaluator. https://aplicaciones-ia.epn.edu.ec/webapps/home/ session.html?app=EMG. Gesture Recognition Evaluator. Accessed 09 Aug 2022 13. Benalcazar M, Barona L, Valdivieso L, Aguas X, Zea J (2022) EMG-EPN-612 dataset. https:// laboratorio-ia.epn.edu.ec/es/recursos/dataset/2020_emg_dataset_612. Accessed 09 Aug 2022 14. Kawano S, Okumura D, Tamura H, Tanaka H, Tanno K (2009) Online learning method using support vector machine for surface-electromyogram recognition. Artif Life Robot 13(2):483– 487. https://doi.org/10.1007/s10015-008-0607-4 15. Kaufmann P, Englehart K, Platzner M (2010) Fluctuating EMG signals: investigating longterm effects of pattern matching algorithms. In: Annual international conference of the IEEE engineering in medicine and biology, pp 6357–6360. https://doi.org/10.1109/IEMBS.2010. 5627288 16. Atzori M, Gijsberts A, Castellini C, Caputo B, Hager A, Elsig S, Giatsidis G, Bassetto F, Müller H (2014) Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci Data 1(1):1–13. https://doi.org/10.1038/sdata.2014.53 17. Patricia N, Tommasit T, Caputo B (2014) Multi-source adaptive learning for fast control of prosthetics hand. In: 22nd international conference on pattern recognition. IEEE, pp 2769– 2774. https://doi.org/10.1109/ICPR.2014.477 18. Sensinger J, Lock B, Kuiken T (2009) Adaptive pattern recognition of myoelectric signals: exploration of conceptual framework and practical algorithms. IEEE Trans Neural Syst Rehabil Eng 17(3):270–278. https://doi.org/10.1109/TNSRE.2009.2023282 19. Van H, Guez A, Silver D (2016) Deep reinforcement learning with double Q-learning. In: Proceedings of the AAAI conference on artificial intelligence, vol 30, no 1. https://doi.org/10. 48550/arXiv.1509.06461 20. Chung E, Benalcázar M (2019) Real-time hand gesture recognition model using deep learning techniques and EMG signals. In: 27th European signal processing conference (EUSIPCO), pp 1–5. https://doi.org/10.23919/EUSIPCO.2019.8903136 21. 
Chowdhury R, Reaz M, Bin M, Bakar A, Chellappan K, Chang T (2013) Surface electromyography signal processing and classification techniques. Sensors 13(9):12431–12466. https:// doi.org/10.3390/s130912431 22. Chen X, Li Y, Hu R, Zhang X (2020) Hand gesture recognition based on surface electromyography using convolutional neural network with transfer learning method. IEEE J Biomed Health Inform 25(4):1292–1304. https://doi.org/10.1109/JBHI.2020.3009383


23. Su H, Ovur S, Zhou X, Qi W, Ferrigno G, He DM (2020) Depth vision guided hand gesture recognition using electromyographic signals. Adv Robot 34(15):985–997. https://doi.org/10. 1080/01691864.2020.1713886 24. Sutton R, Barto A (2018) Reinforcement learning: an introduction. MIT Press, Cambridge. ISBN: 9780262039246

Information and Knowledge Management

Capitalization of Healthcare Organizations Relationships’ Experience Feedback of COVID’19 Management in Troyes City Nada Matta, Paul Henri Richard, Theo Lebert, Alain Hugerot, and Valerie Friot-Guichard

Abstract COVID'19 crisis management revealed several relationship problems due to missing collaboration between healthcare actors and socio-economic organizations. In fact, these actors did not usually work together despite their interdependent tasks. The government healthcare authority asked us to identify these types of problems. In this paper, the first results of interviews with these actors are presented. This experience feedback formalization focuses on the influence of the relationships between these actors and the consequences for the management of a large sanitary crisis such as COVID'19. Keywords Experience feedback formalization · Healthcare organizations relationships · Large sanitary crisis management

N. Matta (B) LIST3N, Tech-CICO, University of Technology of Troyes, 12 Rue Marie Curie, 42060 10004 Troyes Cedex, France e-mail: [email protected] P. H. Richard INSYTE, University of Technology of Troyes, 12 Rue Marie Curie, 42060 10004 Troyes Cedex, France e-mail: [email protected] T. Lebert Orange Lab, Paris, France A. Hugerot Emergency Department, Hospital of Simon Weil of Troyes, 101 Avenue Anatole France, 10000 Troyes, France e-mail: [email protected] V. Friot-Guichard Hospital of Simon Weil of Troyes, 101 Avenue Anatole France, 10000 Troyes, France e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_35


1 Introduction
The COVID'19 sanitary crisis differs from all others in terms of the quick spread of contamination, the high number of deaths (more than 5.5 million globally and 123,893 in France) and the increased number of patients hospitalized and admitted to intensive care units. The existing sanitary procedures were inadequate to face this type of crisis. The main problems revealed are related to the relationships and coordination between different actors, especially public and private ones. For instance, organizations such as emergency and rescue services, hospital administration, Regional Health Agencies, the Ministry of Health, socio-medical agencies, government delegates, and communal and departmental associations (Fig. 1) discovered the importance of collaborating, even simply to coordinate actions or to make decisions, in order to deal with the consequences of COVID'19. These organizations learned how to work together and identified several problems in their procedures. They asked us to formalize their experience in order to make recommendations at the tactical and strategic levels. Experience feedback is important to identify key elements to be considered in changes to processes, directives and organizations, and to learn from the crisis management organization [1]. The MASK capitalization method [8] is therefore used for this formalization with healthcare actors in Troyes city. In this paper, the first results of this study are shown.
Fig. 1 Organizations involved in COVID'19 Management


Fig. 2 ORSEC Plan2

2 Managing the COVID'19 Crisis in Troyes City
Different actors are involved in managing territorial crises in France. The ORSEC Plans1 identify different functions for each security actor according to the crisis and its corresponding gravity and extent (Fig. 2). The first actor concerned by a crisis is the mayor, who has to assess the gravity of the situation and request the help of the Government Delegate and the Department, who coordinate means. They can request means and directives from French national institutions and ministries as well as from European organizations. Troyes is a small city with several scattered agglomerations and rural villages. There is one central public hospital in the city, with several private clinics and medical centers spread around the city and in the larger villages (Fig. 3). These centers were quickly overwhelmed by COVID'19 patients; for instance, in April 2020, hospitalizations reached 225% of intensive care unit capacity. Figure 4 gives an example of the situation of the medical centers in Troyes and its agglomerations. Due to this problem, crisis management actors and medical centers reorganized their services and coordination systems in order to face the pandemic. We interviewed a number of these actors in order to identify the main elements of these changes, with a special focus on the factors that contribute to failures and successes of

1 PREMIER MINISTRE, 2019. Circulaire n°6095/SG du 1er juillet 2019 relative à l'organisation gouvernementale pour la gestion des crises majeures. p. 8.
2 PREMIER MINISTRE, 2019. Circulaire n°6095/SG du 1er juillet 2019 relative à l'organisation gouvernementale pour la gestion des crises majeures. p. 8.


Fig. 3 Medical centers and hospitals in Troyes city and its agglomerations

Fig. 4 The situation of the Troyes medical centers due to COVID'19 (https://www.coronavirus-statistiques.com/stats-departement/coronavirus-nombre-de-cas-aube). We note that the number of deaths is very close to the number of patients in intensive care at the beginning of COVID'19. This number tends to decrease over time, due to the collaboration between the different organizations


this crisis management. In our work, we focus on the relationships between organizations, in line with the main requirements of the National Health Agency and the Troyes emergency departments.

3 Knowledge Capitalization
Nonaka et al. present the elicitation process as an important step in their SECI knowledge management model [9, 12]. Similarly, Grundstein [7] presents the formalization of experience feedback as a necessary phase that helps to emphasize knowledge in an organization. Several approaches have been proposed in the literature to handle this type of task [6, 11]. We can note in particular the MASK method [8], which guides interviews with experts in order to model the main concepts the organization's actors use in their work, the main goals they pursue in their activity, and the method followed to achieve those goals. Experts are invited to co-build these models with the knowledge manager (Fig. 5). This method has been used to capitalize knowledge in several companies [8]. Our study focuses on the elicitation of healthcare organizations' relationships. The following sections explain our methodology and first results.

Fig. 5 MASK models [11] emphasizing the order of tasks and their goals, the main concepts and the key success factors of an activity


4 Relationships Capitalization
Cartier et al. [4] studied the resilience of tourism communities in the face of environmental disasters such as wildfires, based on actors' relationships. They highlight three main elements: constant communication between actors, identification of resilient communities and continuous management of the resilience process. Prayag [13] emphasizes the importance of relationships between actors and resilience in tourism. Based on this type of study, the interaction modes between different actors facing a new type of crisis such as COVID'19 can lead to the identification of recommendations for sanitary organizations and population resilience.

4.1 Actors Interviews
As noted above, the MASK method was followed, especially the co-building of models with the actors. First, interviewees present themselves and then draw, together with the knowledge manager, the different relations they had with other organizations during the COVID'19 crisis. After that, the MASK constraints model is used in order to emphasize the main successes and problems faced, related to maintained and missing communication and relations with some organizations. Twenty-five semi-structured interviews were conducted with actors belonging to different organizations in the spring of 2021:
1. Department organizations:
   a. The Department Government Delegate
   b. Department Health Agency: the Director, the City responsible and the Defense and crisis referent
2. City mayors:
   a. Six rural mayors
   b. The Troyes Health responsible
   c. The Troyes mayor's delegate
3. Hospitals:
   a. The President, the Director and the Health Responsible of the Troyes Hospitals Group
   b. The Emergency department responsible
   c. Two directors of private hospitals
4. Safety organizations:
   a. An elderly house director
   b. One house-help association director
   c. One local firefighters' organization responsible


Fig. 6 Relationships between actors to face large Sanitary crisis

   d. One city doctor
   e. One SOS Doctors member
The interviewees identified the main relationships that have to be maintained in order to face a large sanitary crisis similar to COVID'19 (Fig. 6). The following sections present the first results of these interviews.

4.2 First Results
Actors to be Involved. Several studies identify different actors and concepts to be considered in crisis management. These works define models and ontologies for this aim. For instance, the ISYCRI model [3] distinguishes concepts such as civil society, people, goods, natural sites, event risks, indicators and dangers. MOAC [10] mainly represents classes such as Affected Population, Collapsed Structure, Compromised Bridge, Deaths, Infrastructure Damage and properties affected by an impact. Moreover, categories of damages and resources are identified in SoKNOS [2]. POLARISCO [7] gathers different modules related to the different actors in crisis management (PCC or Alert, Police, Firefighters, Public Authorities, Health Medical and Gendarmerie) (Fig. 7). The identified concepts are either too generic, representing several types of crisis management, or too specific and close to the operational level, such as those defined for rescue operations in the ReSont ontology [5]. In our study, only actors involved at the strategic and decision-making levels have been interviewed. The main distinction was between public and private organizations. In the public sector, medical, social and government organizations are identified, while private actors are medical, economic and social (Fig. 8).


Fig. 7 POLARISCO Ontology modules [7]
Fig. 8 Actors involved in sanitary crisis management


Main Influences in Organizations' Relationships. Several influence models have been identified. We can note especially those between hospitals (public and private), between healthcare organizations, between medico-social actors and hospitals, between hospitals and government organizations, and between economic supports and healthcare actors. In fact, good collaboration and information sharing between these actors lead to a good organization of patients' healthcare and reduce the saturation of capacities that hospitals faced during COVID'19 management. Likewise, the logistics of needed materials can be better controlled. For that, all actors must be involved in decision-making meetings, and information-sharing support tools must be deployed in order to build trust and awareness of these actors' competencies. Figure 9 gives an example of these influence models. Finally, a summary of recommendations has been identified, pointing out the influence of maintaining these relationships and the impact on large sanitary crisis management (Fig. 10).

Fig. 9 Relations between healthcare organizations. Awareness and information sharing are necessary to maintain a pertinent organization of patients' healthcare. Negative influences, such as volunteer relationships and different communication tools, lead to conflicts and misunderstanding of actions


Fig. 10 Influence of maintaining relationships on large sanitary crisis management. For instance, having official relationships, a centralized information system and frequent debriefings between all actors promotes trust between them and leads to pertinent patient management and limitation of the virus spread, especially by avoiding violence and conflicts.

5 Conclusion
Healthcare organizations usually collaborate to manage sanitary crises, but COVID'19 management revealed several gaps in these collaborations, especially in extending relationships with private and socio-economic actors. In this study, the capitalization of COVID'19 management experience feedback shows different reasons for these problems. Interviews with healthcare actors, using the MASK method, emphasized the main consequences of these problems and led to some recommendations for the future, such as: promoting partnerships between public and private medical and social organizations in order to establish new activity modes and workflows; integrating these actors into the official sanitary procedures; developing new collaborative decision-making and debriefing modes; and scheduling frequent sanitary exercises and training in order to maintain relationships between all actors involved in this type of crisis. These recommendations can, on the one hand, be integrated into a large sanitary crisis ontology as an application of the POLARISCO one (Fig. 7) and, on the other hand, help to modify and adapt government procedures in order to consider large sanitary crisis management. To develop foresight, a research chair in the field of exceptional health situations has been set up by the university and the Simon Weil hospital.


References 1. Antonacopoulou EP, Sheaffer Z (2014) Learning in crisis: rethinking the relationship between organizational learning and crisis management. J Manag Inq 23(1):5–21 2. Babitski G, Bergweiler S, Grebner O, Oberle D, Paulheim H, Probst F (2011) SoKNOS – using semantic technologies in disaster management software. In: Antoniou G, Grobelnik M, Simperl E, Parsia B, Plexousakis D, De Leenheer P, Pan J (eds) The semantic web: research and applications. Springer Berlin Heidelberg, Berlin, Heidelberg, pp 183–197. https://doi.org/ 10.1007/978-3-642-21064-8_13 3. Bénaben F, Hanachi C, Lauras M, Couget P, Chapurlat V (2008) A metamodel and its ontology to guide crisis characterization and its collaborative management. In: Proceedings of the 5th international conference on information systems for crisis response and management (ISCRAM). Washington, DC, USA, May, pp. 4–7 4. Cartier EA, Taylor LL (2020) Living in a wildfire: the relationship between crisis management and community resilience in a tourism-based destination. Tourism Manage Perspect 34:100635 5. Chehade S, Matta N, Pothin JB, Cogranne R (2020) Handling effective communication to support awareness in rescue operations. J Contingencies Crisis Manage 28(3):307–323 6. Dieng R, Corby O, Giboin A, Ribiere M (1999) Methods and tools for corporate knowledge management. Int J Hum Comput Stud 51(3):567–598 7. Elmhadhbi L, Karray MH, Archimède B, Otte JN, Smith B (2021) An ontological approach to enhancing information sharing in disaster response. Information 12(10):432 8. Ermine JL (2018) Knowledge management: the creative loop. John Wiley & Sons, Hoboken, NJ, USA 9. Grundstein M (2000) From capitalizing on company knowledge to knowledge management. Knowl Manage Classic Contemp Works 12:261–287 10. Limbu M (2012) Management of a crisis (MOAC) vocabulary specification. http://www.obs ervedchange.com/moac/ns/. Accessed 18 Mar 11. Matta N, Ermine JL, Aubertin G, Trivin J-Y (2002) Knowledge capitalization with a knowledge engineering approach: the MASK method. In: Dieng-Kuntz R, Matta N (eds) Knowledge management and organizational memories. Springer US, Boston, MA, pp 17–28. https://doi. org/10.1007/978-1-4615-0947-9_2 12. Nonaka I, Toyama R, Konno N (2000) SECI, Ba and leadership: a unified model of dynamic knowledge creation. Long Range Plan 33(1):5–34 13. Prayag G (2018) Symbiotic relationship or not? Understanding resilience and crisis management in tourism. Tour Manage Perspect 25:133–135

Factors for the Application of a Knowledge Management Model in a Higher Education Institution Verónica Martínez-Lazcano , Edgar Alexander Prieto-Barboza , Javier F. García , and Iliana Castillo-Pérez

Abstract Knowledge management (KM) not only applies to business organizations in general, from where this practice arose, but it is also possible to apply to educational institutions. That is why this research aims to study the factors that allow applying a knowledge management model (KMM) in a higher education institution (HEI) successfully. The methodology consisted of the application of a questionnaire of predefined questions using a Likert scale. The Cronbach’s Alpha coefficient obtained for the four dimensions was .93. The results of this research propose the application of four phases of a knowledge management model adapted from the implementation of a learning community of practice using the steps of the Wiig knowledge management cycle and the proposed knowledge management model in higher education institutions. Therefore, this proposal will be useful for the application of knowledge management models in higher education institutions. Keywords Knowledge management model · Knowledge management · Higher education institution

V. Martínez-Lazcano (B) · I. Castillo-Pérez Universidad Autónoma del Estado de Hidalgo, Pachuca, Hidalgo, México e-mail: [email protected] I. Castillo-Pérez e-mail: [email protected] E. A. Prieto-Barboza · J. F. García Humboldt International University, Miami, FL, USA e-mail: [email protected] J. F. García e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_36


1 Introduction
Knowledge Management (KM), according to [1], arose from the needs of the post-industrial society and the new knowledge economy, in which the abundance of intangible resources replaces tangible resources and knowledge is the engine of the organization's economy; thus, the creation, acquisition, storage, retrieval, exchange, distribution and use of knowledge are the main components of KM. In this way, KM is a set of practices that improve the use and exchange of data and information for decision making [2]. Likewise, Gates (2000), cited by [3], states that KM must ensure that a group of people with specific interests has the knowledge required to perform certain actions: managing data and documents, exchanging thoughts and successful ideas, and coordinating actions towards a common goal. For [3], the basic function of Knowledge Management is the control of knowledge, its main tool is the Internet, and its challenge is to make the right knowledge available to the right people at the right time. Meanwhile, [4] state that KM is related to the way in which organizations absorb knowledge and make decisions, since it is a process related to the creation of new knowledge, and they divide it into three categories: knowledge acquisition, knowledge exchange and knowledge application. The benefits that can be obtained from effective KM in an organization, according to [5], are: increasing the satisfaction of the members of the organization by qualitatively and quantitatively increasing each member's level of knowledge; reducing information search times by having reliable, updated and available sources; creating circles of individual learning through group knowledge; and creating an environment for the creativity and innovation of the members of the organization. The KM model proposed by [6] applies to Higher Education Institutions (HEIs), since part of the research function is complemented by the teaching and extension functions. This model is made up of four phases: Identification, Creation, Distribution and Measurement of knowledge. On the other hand, [7] state that, to maintain knowledge, the four processes of the Nonaka and Takeuchi model can be applied, or other knowledge management models that allow ordering, classifying and documenting, in addition to socializing, the knowledge generated in the educational institution. Therefore, [7] propose a model for building, sharing, and using tacit and explicit knowledge in communities of practice, also known as learning communities. This model proposes four phases, adapted from the Wiig model: building knowledge, maintaining knowledge, grouping knowledge, and using knowledge. The problem addressed by this research is that the research professors who work in the research centers of a higher education institution do not know how knowledge management should be applied and integrated into their daily activities, and the key knowledge generated by these research professors is not identified, preserved, or kept active. Therefore, the main purpose of this research is to identify the factors necessary to apply a knowledge management model (KMM) in the research centers of a higher education institution.


Derived from the analysis of the aforementioned models, it is considered relevant for this research to propose the following phases: identify and build knowledge, create and maintain knowledge, classify and share knowledge, and evaluate the use of knowledge.

2 Literature Review
For the development of this research, various sources of information related to the topic were consulted [4, 6–14], based mainly on literature about knowledge management models and the factors required to apply a KMM in HEIs. Among the most relevant documents is the study entitled Identifying the success factors of knowledge management tools in research projects (Case study: A corporate university), in which [8] identified the factors of culture, information technology, strategy and objectives, organizational infrastructure, employee motivation, leadership and management support, human resource management, education, financial resources, measurement, processes and activities, structure, and communications in the knowledge management cycle of a corporate university's research projects; these were identified as effective factors in the KM cycle. In the research titled Communities of Practice Approach for Knowledge Management Systems, [7] develops a practical framework for a community of practice (CoP) implementation to align KM strategy with an organization's business strategy. The authors [7] analyze different models to build, share and use tacit and explicit knowledge in the CoP by applying a KMM based on the Wiig model. In addition, they support an implementation of the CoP that adopts the key elements of each of the aspects of the Benefits, Tools, Organization, People and Processes (BTOPP) model. Finally, they identify aspects such as organizational culture and performance measures and provide recommendations for a successful implementation of the CoP. In the research entitled Knowledge Management in Higher Education Institutions: Characterization from a theoretical reflection, [6] carry out a bibliographic review with a qualitative design. The results show KM as a relevant process for HEIs in their effort to successfully develop their substantive functions; however, there are some difficulties that limit its implementation. Therefore, they propose a KMM that overcomes these challenges, made up of the following phases: identification, creation, distribution, and measurement. In the research Applicable of Knowledge Management among the Education Organization Staff Administration based on Jashpara's Model, [9] showed that all components of KM, except organization, were significantly above average; in addition, they report that there is no significant relationship between the manager's work experience and all components of KM, but there is a positive and significant relationship between the acquisition component, the creation of knowledge, and work experience. Likewise, in the study Role of knowledge management in developing higher education partnerships: Towards a conceptual framework, [10] identifies the fundamental


elements that indicate the behavior of HEIs and impact the development of a partnership. In addition, it explores the institutional factors that affect the development of partnerships and compiles a list of KM activities deemed necessary to help HEIs exchange knowledge in a partnership environment. As can be seen, KM can be applied in both organizations and educational institutions to achieve success. Therefore, in the present research, a review of the literature was made to identify the factors necessary to apply a knowledge management model in the research centers of a higher education institution, to achieve success in their projects, and in this way to identify the areas of opportunity that this educational institution must improve in order to be more competitive.

3 Material and Methods
The paradigm of this research is positivist, with a quantitative approach. The design is non-experimental, since the variable is not manipulated; the phenomena are observed in their natural environment in order to analyze them. The research is cross-sectional because data are collected at a single point in time [15]. The people involved in this research are the research professors who work in the research centers of a higher education institution. The population size is 684 research professors and the sample size is 247; this sample was calculated with formula (1), which applies to finite and known populations, and whose values are shown in Table 1. The sampling applied was non-probabilistic, by convenience, since the instrument was closed at the time the answers were obtained [16].

n = (N · p · q · Z_α²) / ((N − 1) · e² + p · q · Z_α²)   (1)
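As a quick check, formula (1) with the values of Table 1 indeed yields the reported sample size. This is a minimal sketch of our own; the function and argument names are assumptions.

```python
import math

def sample_size(N=684, p=0.5, q=0.5, Z=1.96, e=0.05):
    """Sample size for a finite, known population, formula (1)."""
    return (N * p * q * Z**2) / ((N - 1) * e**2 + p * q * Z**2)

print(math.ceil(sample_size()))  # ~246.2, rounded up to 247, the reported sample size
```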

To obtain the data, a questionnaire-type evaluation instrument of our own creation was built, with predefined questions and a Likert scale: 1 (Never),

Table 1 Data to calculate the sample

Parameter   Description                                   Value
N           Population                                    684
Z           Trust level                                   0.95
            Z-score with a 95% confidence level           1.96
p           Probability of the event occurring            0.5
q           Probability that the event will not occur     0.5
e           Estimated error                               0.05
n           Sample size                                   247

Note: A margin of error of 5% and a confidence level of 95% are considered


2 (Almost never), 3 (Sometimes), 4 (Almost always) and 5 (Always). The questionnaire was validated by expert judgment using the individual-aggregates method [17]. Regarding the reliability analysis of the instrument, Cronbach's Alpha was applied using the SPSS tool version 28.0.1.1 (15), obtaining a value of 0.93, which is an excellent level of reliability according to the interpretation of the Cronbach's Alpha coefficient by Herrera (1998), cited by [18]. The questionnaire, applied using Google Forms, is composed of four indicators of the dimension "factors for the application of a knowledge management model": (1) Identify and build knowledge, (2) Create and maintain knowledge, (3) Classify and share knowledge and (4) Evaluate the use of knowledge. Microsoft Excel was used because Google Forms provides the results in a spreadsheet. We then performed descriptive statistical analysis using SPSS version 28.0.1.1 (15) to generate the frequency distribution table for each indicator.
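For reference, the Cronbach's Alpha reliability coefficient reported above can be computed from the item scores as follows. This is a generic illustration in Python/numpy, not the SPSS procedure used by the authors, and the variable names are assumptions.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) matrix of Likert responses (1-5)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```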

4 Results
From the results obtained with the application of the instrument, ranges were calculated considering the maximum and minimum values of the responses, as well as the number of intervals. Likewise, frequency distribution tables were elaborated for each indicator, including: range, class mark, absolute frequency (AF), accumulated frequency (af), relative frequency (Rf), and percentage (%). The results were grouped by indicator for analysis, and the data were grouped according to their presence in the categories Very strong, Strong, Weak, and Very weak. In the Very strong category, the participants selected the options Always and Almost always. In the Strong category, participants responded to at least one item with the option Sometimes. In the Weak category, some participants selected the option Almost never in at least one item and, finally, in the Very weak category, some respondents answered at least one item with the option Never. The category boundaries were calculated from the maximum and minimum values observed in each indicator. For the indicators Identify and build knowledge and Classify and share knowledge, the maximum value is 10 and the minimum value is 3, so the range R (maximum minus minimum) is R = 10 − 3 = 7. The number of intervals K is set to 4. The amplitude A is the range divided by the number of intervals, A = R/K = 7/4 = 1.75, rounded to 2, so the amplitude of each category for these indicators is 2. For the indicators Create and maintain knowledge and Evaluate the use of knowledge, the maximum value is 10 and the minimum value is 2, so R = 10 − 2 = 8, K = 4 and A = R/K = 8/4 = 2; thus, the amplitude of each category for these indicators is also 2. Finally, the calculation was made for each indicator, as can be seen in Table 2.
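A small sketch of this bucketing procedure; the rounding of the amplitude and the half-open ranges mirror the text and Table 2, while the function names are our own.

```python
import math

def category_edges(min_val, max_val, k=4):
    """Split [min_val, max_val] into k categories of equal (rounded-up) amplitude."""
    amplitude = math.ceil((max_val - min_val) / k)        # e.g. ceil(7/4) = 2
    return [min_val + i * amplitude for i in range(k)] + [max_val]   # e.g. [3, 5, 7, 9, 10]

def frequency_distribution(scores, edges):
    """Count how many indicator scores fall in each category (last range closed)."""
    counts = [0] * (len(edges) - 1)
    for s in scores:
        for i in range(len(counts)):
            upper_closed = (i == len(counts) - 1)
            if edges[i] <= s < edges[i + 1] or (upper_closed and s <= edges[i + 1]):
                counts[i] += 1
                break
    return counts
```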


Table 2 Frequency distribution of factors for the application of an KMM in HEIs

Indicator: identify and build knowledge
Category      Rank     Class mark   Af    af        Rf     %
Very Strong   [9 10]   9.5          138   138/247   0.56   56
Strong        [7 9)    8            80    218/247   0.32   32
Weak          [5 7)    6            28    246/247   0.11   11
Very Weak     [3 5)    4            1     247/247   0      0
Total                               247             1.00   100

Indicator: create and maintain knowledge
Category      Rank     Class mark   Af    af        Rf     %
Very Strong   [8 10]   9            184   184/247   0.74   74
Strong        [6 8)    7            54    238/247   0.22   22
Weak          [4 6)    5            7     245/247   0.03   3
Very Weak     [2 4)    3            2     247/247   0.01   1
Total                               247             1.00   100

Indicator: classify and share knowledge
Category      Rank     Class mark   Af    af        Rf     %
Very Strong   [9 10]   9.5          150   150/247   0.61   61
Strong        [7 9)    8            73    223/247   0.30   30
Weak          [5 7)    6            20    243/247   0.08   8
Very Weak     [3 5)    4            4     247/247   0.02   2
Total                               247             1.00   100

Indicator: evaluate the use of knowledge
Category      Rank     Class mark   Af    af        Rf     %
Very Strong   [8 10]   9            163   163/247   0.66   66
Strong        [6 8)    7            56    219/247   0.23   23
Weak          [4 6)    5            25    244/247   0.10   10
Very Weak     [2 4)    3            3     247/247   0.01   1
Total                               247             1.00   100

4.1 Categories
In this way, Table 2 shows the results of the frequency distribution for the indicators: (1) Identify and build knowledge, (2) Create and maintain knowledge, (3) Classify and share knowledge and (4) Evaluate the use of knowledge. In the grouping of the data where the maximum value is 10 and the minimum value is 3, the range [9 10] indicates that it includes 9 and 10, that is, the values comprising the range are ≥9 and ≤10; whereas the range [3 5) indicates that it includes 3 but not 5, that is, the values included are ≥3 and <5.

(table fragment, first path label truncated in the source)
Path                     Coefficient   t-value   CIlowAdj   CIhighAdj
… -> NFP                 0.331**       4.031     0.042      0.191
DI -> FP                 0.392**       5.476     0.023      0.168
DI -> SR                 0.343**       3.773     0.148      0.508
SR -> NFP                0.302**       3.971     0.145      0.444
SR -> FP                 0.111 n.s.    1.169     −0.090     0.288
Indirect effects
DI -> SR -> NFP          0.104**       2.728     0.042      0.191
DI -> SR -> FP           0.083*        2.225     0.023      0.168
Interaction
Promotion x SR -> NFP    0.070 n.s.    1.009     −0.060     0.243
Promotion x SR -> FP     0.189*        2.090     0.020      0.377

NFP = Non-financial Performance; FP = Financial Performance; DI = Digital Investment; SR = Service Ratio. Standardized path coefficients are shown. **p < 0.01; *p < 0.05; n.s. = not significant.

5 Discussion
5.1 Research Contribution and Managerial Implications
This study contributes to the relatively new research stream of digital servitization first through its extended methodological approach. In this quantitative study, research was conducted with a sample from 4 NACE sectors, adding findings to fill the gap of empirical and cross-industry investigations. Further, the machine learning solution of automatically determining the specific services through the analysis of websites and the subsequent comparison with survey data is new. It demonstrates another practicable option for investigating the services of manufacturers, in addition to the common database screenings or content analyses of reports [1]. Second, the results of this study show that the transition from analogue to digital services does not necessarily lead to better financial results. These findings are in line with the studies on the digitalization paradox [19]. The investment in digitalization improves both financial and non-financial performance. These findings may hence contribute to a better understanding of the vague and scant relationship between digital investment and financial performance. Additionally, the study adds insights to the performance element, as the customers' attitude towards digital services is a supplementary facet. This non-financial performance often leads to higher financial performance after a certain period of time [18] and was a distinctive outcome perspective of digital servitization in this research. Thirdly, a contribution is made to the topic of actively promoting the scope of offered services. Although managers across industries attempt to reduce the pressure of difficult economic times by enlarging their product portfolio with ancillary services [1] and have the parallel aim of fostering financial performance through digitalization and data-driven solutions [3], our study demonstrates that this is not always


a guarantee of immediate financial results. It nevertheless shows that if companies invest in digitalization and additionally transform their services into digital ones, the customers' attitude (non-financial performance in the current study) is further strengthened through the mediating effect. The challenging pandemic years, which accelerated digital transformation and led to a demand for digital interaction from the customers' side [4], may be one reason why the transition to digital services was positively perceived by clients. We did not identify a significant mediating effect for financial performance in the model. However, the positive moderating effect of promoting services suggests that companies that digitize their services should focus on actively promoting them; in turn, this is linked to better financial performance.

6 Limitations and Future Research The study was vendor-centric, which means that it did not quantitatively measure the financial results or involve the customers to whom the services were directed. Also, the geographic choice and the selection of the B2B sectors limit the ability to generalize the findings. The results of the study show that in future research a careful and clear distinction between general digital investments, as in products, processes or sensors, and the digitization of services is advisable. As this study focused on externally oriented digital services, and organizations develop different types of digital capabilities [46], further research on internally oriented digital services would also be supportive.

References 1. Benedettini O, Swink M, Neely A (2017) Examining the influence of service additions on manufacturing firms’ bankruptcy likelihood. Ind Mark Manag 60:112–125 2. Gebauer H, Fleisch E, Friedli T (2005) Overcoming the service paradox in manufacturing companies. Eur Manag J 23(1):14–26 3. Abou-foul M, Ruiz-Alba JL, Soares A (2021) The impact of digitalization and servitization on the financial performance of a firm: an empirical analysis. Prod Plann Control 32(12):975–989 4. Rangarajan D, Sharma A, Lyngdoh T, Paesbrugghe B (2021) Business-to-business selling in the post-COVID-19 era: developing an adaptive sales force. Bus Horiz 64(5):647–658 5. Tronvoll B, Sklyar A, Sörhammar D, Kowalkowski C (2020) Transformational shifts through digital servitization. Ind Mark Manag 89:293–305 6. Paschou T, Rapaccini M, Adrodegari F, Saccani N (2020) Digital servitization in manufacturing: a systematic literature review and research agenda. Ind Mark Manag 89:278–292 7. Arli D, Bauer C, Palmatier RW (2018) Relational selling: past, present and future. Ind Mark Manag 69:169–184 8. Rabetino R, Kohtamäki M, Brax SA, Sihvonen J (2021) The tribes in the field of servitization: discovering latent streams across 30 years of research. Ind Mark Manag 95:70–84 9. Guo A, Li Y, Zuo Z, Chen G (2015) Influence of organizational elements on manufacturing firms’ service-enhancement: an empirical study based on Chinese ICT industry. Technol Soc 43:183–190


10. Baines T, Lightfoot H, Smart P, Fletcher S (2020) Framing the servitization transformation process: a model to understand and facilitate the servitization journey. Int J Prod Econ 221(4):107463 11. Rachinger M, Rauter R, Müller C, Vorraber W, Schirgi E (2019) Digitalization and its influence on business model innovation. JMTM 30(8):1143–1160 12. Parida V, Sjödin DR, Lenka S, Wincent J (2015) Developing global service innovation capabilities: how global manufacturers address the challenges of market heterogeneity. Res Technol Manag 58(5):35–44 13. Gebauer H, Paiola M, Saccani N, Rapaccini M (2021) Digital servitization: crossing the perspectives of digitization and servitization. Ind Mark Manag 93:382–388 14. Parida V, Sjödin D, Reim W (2019) Reviewing literature on digitalization, business model innovation, and sustainable industry: past achievements and future promises. Sustainability 11(2):391 15. Paiola M, Gebauer H (2020) Internet of Things technologies, digital servitization and business model innovation in BtoB manufacturing firms. Ind Mark Manag 89:245–264 16. Vargo SL, Lusch RF (2008) Service-dominant logic: continuing the evolution. J Acad Mark Sci 36(1):1–10 17. Cobelli N, Chiarini A (2020) Improving customer satisfaction and loyalty through mHealth service digitalization. TQM J 32(6):1541–1560 18. Oliva R, Gebauer H, Brann JM (2012) Separate or integrate?: Assessing the impact of separation between product and service business on service performance in product manufacturing firms. J Bus Bus Mark 19(4):309–334 19. Gebauer H, Fleisch E, Lamprecht C, Wortmann F (2020) Growth paths for overcoming the digitalization paradox. Bus Horiz 63(3):313–323 20. Homburg C, Fassnacht M, Guenther C (2003) The role of soft factors in implementing a service-oriented strategy in industrial marketing companies. J Bus Bus Mark 10(2):23–51 21. Gebauer H (2008) Identifying service strategies in product manufacturing companies by exploring environment–strategy configurations. Ind Mark Manag 37(3):278–291 22. Hydle KM, Hellström M, Aas TH, Breunig KJ (2021) Digital servitization: strategies for handling customization and customer interaction. In: Kohtamäki M, Baines T, Rabetino R, Bigdeli AZ, Kowalkowski C, Oliva R, Parida V (eds) The Palgrave handbook of servitization. Springer, Cham, pp 355–372 23. Partanen J, Kohtamäki M, Parida V, Wincent J (2017) Developing and validating a multidimensional scale for operationalizing industrial service offering. J Bus Ind Mark 32(2):295– 309 24. Maheepala SDSR, Warnakulasooriya BNF, Weerakoon Banda YK (2018) Measuring servitization. In: Kohtamäki M, Baines T, Rabetino R, Bigdeli AZ (eds) Practices and tools for servitization: managing service transition. Springer, Cham, pp 41–58 25. Simonazzi A, Ginzburg A, Nocella G (2013) Economic relations between Germany and southern Europe. Camb J Econ 37(3):653–675 26. Jääskeläinen A, Laihonen H, Lönnqvist A (2014) Distinctive features of service performance measurement. Int J Oper Prod Manag 34(12):1466–1486 27. Ukko J, Pekkola S, Saunila M, Rantala T (2015) Performance measurement approach to show the value for the customer in an industrial service network. Int J Bus Perform Manag 16(2– 3):214–229 28. Kohtamäki M, Baines T, Rabetino R, Bigdeli AZ, Kowalkowski C, Oliva R, Parida V (2021) Theoretical landscape in servitization. In: Kohtamäki M, Baines T, Rabetino R, Bigdeli AZ, Kowalkowski C, Oliva R, Parida V (eds) The Palgrave handbook of servitization. Springer, Cham, pp 1–23 29. 
Opazo-Basáez M, Vendrell-Herrero F, Bustinza OF (2022) Digital service innovation: a paradigm shift in technological innovation. J Serv Manag 33(1):97–120 30. Stadlmann C, Berend G, Fratrič A, Szántó Z, Aufreiter D, Ueberwimmer M, Mang S (2022) Find me if you can! Identification of services on websites by human beings and artificial intelligence. In: Reis JL, López EP, Moutinho L, Santos JPM (eds) Marketing and smart technologies, Singapore. Springer, Singapore, pp 23–35


31. Joulin A, Bojanowski P, Mikolov T, Jégou H, Grave E (2018) Loss in translation: learning bilingual word mapping with a retrieval criterion. In: Riloff E, Chiang D, Hockenmaier J, Tsujii J (eds) Proceedings of the 2018 conference on empirical methods in natural language processing, Brussels, Belgium. Association for Computational Linguistics, Stroudsburg, pp 2979–2984 32. Bagozzi RP, Yi Y (1988) On the evaluation of structural equation models. J Acad Mark Sci 16(1):74–94 33. Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP (2003) Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 88(5):879–903 34. Malhotra NK, Kim SS, Patil A (2006) Common method variance in IS research: a comparison of alternative approaches and a reanalysis of past research. Manag Sci 52(12):1865–1883 35. Lindell MK, Whitney DJ (2001) Accounting for common method variance in cross-sectional research designs. J Appl Psychol 86(1):114 36. Diamantopoulos A, Siguaw JA (2006) Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. Br J Manag 17(4):263–282 37. Henseler J, Ringle CM, Sarstedt M (2015) A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci 43(1):115–135 38. Kline RB (2015) Principles and practice of structural equation modeling. Guilford Publications, New York 39. Ringle CM, Wende S, Becker J-M (2022) SmartPLS 4. Boenningstedt: SmartPLS GmbH 40. Chin WW (1998) The partial least squares approach to structural equation modeling. Modern Methods Bus. Res. 295(2):295–336 41. Reinartz W, Haenlein M, Henseler J (2009) An empirical comparison of the efficacy of covariance-based and variance-based SEM. Int J Res Mark 26(4):332–344 42. Cohen P, West SG, Aiken LS (2014) Applied multiple regression/correlation analysis for the behavioral sciences. Psychology Press 43. Hair JF, Sarstedt M, Ringle CM, Gudergan SP (2018) Advanced issues in partial least squares structural equation modeling. SAGE, Los Angeles 44. Hair JF Jr, Sarstedt M, Matthews LM, Ringle CM (2016) Identifying and treating unobserved heterogeneity with FIMIX-PLS: part I–method. Eur Bus Rev 28(1):63–76 45. Hayes AF (2017) Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. Guilford Publications, New York 46. Neirotti P, Raguseo E, Paolucci E (2018) How SMEs develop ICT-based capabilities in response to their environment. J Enterp Inf Manag 31(1):10–37

IoT Devices Data Management in Supply Chains by Applying BC and ML Daniel Guatibonza, Valentina Salazar, and Yezid Donoso

Abstract Current monitoring systems for supply chains reveal deficiencies in areas such as homogeneous traceability of the process, active participation of the actors involved in each stage, timely communication of results, compliance with regulatory aspects or international standards, and attention to consumer-oriented information requirements. Aiming to provide a reliable mechanism that guarantees the quality, safety and sustainability of food throughout the whole chain, this paper proposes a framework based on Blockchain (BC) attributes, IoT devices and Machine Learning (ML) techniques to detect unusual patterns in some attributes measured during transportation. The value proposition of this implementation is the alternation between Proof of Work and Proof of Learning, which allows a better use of computational resources to support organizational needs relating to ML tasks. Keywords Blockchain · IoT · Proof of Work · Proof of Learning · supply-chain

1 Introduction Monitoring systems related to the production and circulation of food tend to omit practical aspects around their implementation and information propagation. The active participation of all the actors involved in the process is not contemplated, which inhibits symmetry in data access and product traceability [1]. In addition, each party knows only the data coming in and out of their own systems, as well as the quality controls that concern them, making it difficult to detect and give notice of the risk of a foodborne disease or other similar event [1]. At the same time, there are regulatory aspects and international standards to be complied with, making the operation even more complex. The listed drawbacks become critical due to the human-centric nature of this area, particularly given the relevance of providing “consumer-oriented information, such as product quality or food safety attributes” [1]. D. Guatibonza (B) · V. Salazar · Y. Donoso Systems and Computing Engineering Department, Universidad de los Andes, Bogotá, Colombia e-mail: [email protected] V. Salazar e-mail: [email protected] Y. Donoso e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_39


With this idea in mind, given the current context of emerging technologies oriented to distributed information systems, the storage and management of records based on Blockchain stands out. This structure, specifically when classified as public, acquires applicability and relevance in a wide diversity of contexts that benefit from its decentralization, transparency, free access and integrity of the data recorded. Due to the consensus among participating nodes that is required when adding new information to the Blockchain, there is a notion of supervision and legitimacy in transaction tracking with a guarantee of immutability. Such properties become relevant in the field of supply chains, where reliable mechanisms are required [2]. These aspects associated with monitoring and traceability are addressed by technologies such as RFID (Radio Frequency Identification) and QR codes, whose coupling with Blockchain is the focus of recent research [2]. These techniques encourage the cooperation of different sensors for environmental and product-specific conditions, a field directly linked to the Internet of Things (IoT). This is consistent with the generalized trend among companies and consumers, who increasingly demand greater transparency and understanding of information. In this scenario, it is essential to complement existing alternatives with the great value provided by data analysis techniques linked to Artificial Intelligence (AI), oriented to the identification of behavioral patterns and the extraction of useful knowledge for decision making. From this, it is possible to generate automatic alerts for records in the Blockchain that do not fit the distributions of the historical data already stored. Nonetheless, standard consensus methods such as Bitcoin's involve excessive energy consumption in solving the cryptographic puzzles needed to validate transactions, a process known as Proof of Work (PoW). Based on the measurements from January to July, the annualized consumption by this currency in 2022 is estimated to be 116.83 TWh [3], an amount that can be compared with the energy demanded by the more than 50 million inhabitants of Colombia in 2020: 70.42 TWh [4]. Thus, the concept of Proof of Learning (PoLe) arises as an alternative mechanism, in which computational power is invested in training Machine Learning models that can contribute to organizational objectives [5]. This project proposes the implementation of a Blockchain to monitor product-specific conditions from production to sale. This is done using IoT devices that facilitate the collection of information in real time. Additionally, the alternation between PoW and PoLe consensus protocols is incorporated depending on the Machine Learning (ML) tasks that the organization requires.

2 Literature Review 2.1 VeChain VeChain is a public Blockchain designed to enable a transparent flow of information that provides participants with tangible economic and social value. Other available Blockchains in the market present shortcomings in business ecosystems, especially


regarding the lack of an appropriate governance model, operational inefficiency, scalability limitations, fluctuating and unpredictable transaction costs and technical knowledge barriers imposed. Thus, VeChain technology, in conjunction with IoT equipment and the business model itself, enables both food lifecycle tracking and quality information management at each stage of the supply chain, which bridges the gap between regulators, producers and end users. Additionally, the Blockchain incorporates the interaction of external stakeholders through the collection of purchase records and consumer opinions [6].

2.2 Elliptic Curve Cryptography Secret keys, framed in the context of symmetric and asymmetric cryptography, are used to transform plain text into encrypted messages that obfuscate the original content, providing properties of authenticity, integrity and confidentiality. Most standard public-key algorithms (e.g., RSA and DSA) rely on integer arithmetic or polynomials of considerable magnitude, imposing a burden on key and data storage and processing. An alternative is the use of elliptic curves, which provide the same level of security with smaller key sizes. This encryption scheme (Elliptic Curve Cryptography or ECC) is based on the computational complexity of the discrete logarithm problem, generalized over an elliptic curve. Currently, EC cryptography is employed in Bitcoin to sign transactions using the private key held in a user's personal wallet. Considering the above, [7] contrasts the key processing time of the RSA and ECC algorithms implemented for inter-vehicle communication in a scenario simulated in CupCarbon. According to the results of the tests performed, it is concluded that the incorporation of ECC is feasible in terms of load and performance with respect to other traditional ciphers.
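As an illustration of the signing use case mentioned above, the following sketch shows ECDSA signing and verification over secp256k1 (the curve used by Bitcoin) with the Python cryptography package. The payload and the choice of library are assumptions for illustration; the study in [7] was implemented within CupCarbon.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair on secp256k1, the curve Bitcoin uses for transaction signing.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# Sign a (hypothetical) transaction payload with the private key held in the wallet.
payload = b"example transaction payload"
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Any holder of the public key can verify the signature;
# verify() raises InvalidSignature if payload or signature were tampered with.
public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
```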

2.3 Proof of Learning Proof of Learning was introduced in 2021 as a mechanism that directs the computational resources involved in the consensus required for block generation within a Blockchain towards optimizing ML models. These models attest to the active participation of each node within the network, and the node whose model provides the best learning metric on undisclosed test data is the one that generates the new block. Compared to PoW, this alternative achieves a more stable block generation rate, leading to more efficient transaction processing [5]. WekaCoin is a cryptocurrency that leverages this block generation approach, in which the associated Blockchain stores transactions while serving as an open repository of trained ML models. This is achieved by managing nodes that publish learning tasks with their respective specifications (suppliers), nodes that train the models (trainers) and


randomly selected nodes that prioritize the published models to define who adds the new block (validators) [8].
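A minimal sketch of the validator step described above, assuming a regression task scored by mean squared error; function and variable names are illustrative and do not reproduce the WekaCoin implementation.

```python
import numpy as np

def pole_winner(candidate_models, X_test, y_test):
    """Score each published model on test data undisclosed to the trainers and
    return the identifier of the node whose model performs best (lowest MSE)."""
    scores = {}
    for node_id, model in candidate_models.items():
        predictions = model.predict(X_test)  # any object exposing a predict() method
        scores[node_id] = float(np.mean((y_test - predictions) ** 2))
    # The winning node earns the right to append the next block.
    return min(scores, key=scores.get)
```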

2.4 Machine Learning and Blockchain on IIoT Due to the persistent cybersecurity threats that arise in the field of critical infrastructures and IIoT (Industrial Internet of Things) topologies, [9] proposes a framework based on ML and BC technologies that allows the identification and containment of malicious activity within a network. The developed model consists of a monitoring and defense mechanism that provides coverage against two attack scenarios: injection of packets with erroneous information and denial of service; such attacks are reproduced beforehand in order to train the ML model. In addition, the effectiveness of the proposed solution is evaluated in comparison to the features provided by traditional alternatives, for example intrusion detection systems (IDS). Ultimately, it is concluded that the algorithm is scalable and favors interconnectivity between nodes while recognizing the execution of various attacks in IIoT networks [9]. On the other hand, [10] describes the existing opportunities in the agriculture and food handling industry by considering emerging technologies such as IoT, big data and artificial intelligence. Thus, applications like crop monitoring, automated verification of food quality, modernization of supply chains and food safety thanks to the traceability induced by Blockchains are considered. These advantages stem from the analysis that can feasibly be performed, using AI techniques, on the large volume of data coming from IoT devices. Considering the previous descriptions, Fig. 1 summarizes the characteristics incorporated in each of the studies and technologies presented, highlighting the framework of the present project.

Fig. 1 Comparison of properties implemented by each proposed solution


3 Design To implement the proposed framework, the open source CupCarbon software was selected as the simulation platform, given the possibility of extending its functionalities around the interaction of IoT devices. The developed scenario aims to simulate the transport of poultry from a hatchery to a sales center along a route where 8 base stations (sinks) are evenly distributed, interleaved with a repeater to facilitate communication despite the limited coverage radius. In addition, there are 5 sensor nodes that move along the demarcated route at different times, simulating one-hour intervals in the departure of trucks heading to the capital. The sensors are exclusively in charge of transmitting to the closest station the conditions of the poultry they “mobilize”: temperature, pH, among others. The sinks register the neighboring repeaters with which they are going to communicate, resend the transactions, and follow the next steps based on the current consensus mechanism once there are enough transactions to compose a new block. – Proof of Work: Initially, a sink finds a number (nonce) that leads to a valid block hash depending on the established difficulty (an integer between 1 and 7 in the current context), as sketched below. After that, the other stations receive the block and the hash to verify the reference to the last block in the Blockchain and compliance with the difficulty restriction. Finally, if more than half of the sinks send a validation of the block, it is added. – Proof of Learning: First, each sink trains its own ML model, configured with a random hyperparameter set, on the transactions of the validated blocks. Subsequently, the model is stored and its path is broadcast in the network. Each sink evaluates the performance of the received models with the transactions of the block to be built and votes for the station with the winning model. Finally, the station with the most votes adds the block to the Blockchain. Fig. 2 illustrates the high-level architecture of the modeled system. As presented, all the base stations of the network store an identical copy of the Blockchain, while the blocks can be generated by any of the nodes and under either of the consensus schemes. Additionally, the information flow between the different actors guarantees the confidentiality of the exchanged messages through encryption and decryption. The transport trucks represent the points from which the measurements of the variables monitored by the IoT sensors are taken.
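The following sketch illustrates the Proof of Work step outlined above. It assumes, purely for illustration, that the difficulty is expressed as the number of leading zero hex digits required in a SHA-256 block hash; the actual CupCarbon extension is not reproduced here.

```python
import hashlib
import json

def mine_block(transactions, prev_hash, difficulty):
    """Search for a nonce whose block hash satisfies the difficulty constraint.
    `transactions` must be JSON-serializable (e.g. a list of dicts)."""
    nonce = 0
    while True:
        payload = json.dumps(
            {"prev": prev_hash, "tx": transactions, "nonce": nonce}, sort_keys=True
        ).encode()
        block_hash = hashlib.sha256(payload).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return nonce, block_hash
        nonce += 1

def validate_block(block_hash, prev_hash, last_hash_in_chain, difficulty):
    """Check performed by the other sinks: the block must reference the last
    block in the chain and comply with the difficulty restriction."""
    return prev_hash == last_hash_in_chain and block_hash.startswith("0" * difficulty)
```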

4 Implementation 4.1 Communication Protocols Between Devices Based on the implementation made in [7], commands to manage the communication in a secure way (using ECC) are available. Other important commands defined are


Fig. 2 High level diagram of the implemented architecture

the ones for the storage and management of the Blockchain. In order to make use of these new instructions, protocols were configured to enable the flow of information between the different devices. First, the registration and mutual recognition of the devices is modeled. In the case of base stations and repeaters, which are static devices, the stations seek to detect the nearest repeaters via broadcast while sharing their public key. Thus, when a repeater receives the message, it is able to generate the symmetric key by applying the selected elliptic curve. Subsequently, the repeater replies to the station with its public key so that this device can perform the same process. The sensors on the trucks have to communicate with a different sink according to their particular location. Consequently, when a sensor wants to share a measurement to register it in the Blockchain, it must identify the nearest station via broadcast, perform the symmetric key generation, and send the respective encrypted transaction. The station then stores the transaction in its Blockchain and distributes it throughout the network using the repeaters. When one of the nodes finds a valid nonce for PoW, or is determined to be the node with the best model for PoLe, this node sends its block to the other nodes for validation. Once the stations verify that the block complies with the expected format, the nodes broadcast their verdict through the repeaters. If the number of confirmations is greater than 50% of the network size, the block is closed, the timestamp corresponding to the current time is assigned and this value is spread through the network to maintain the consistency of the Blockchain. In the case of PoLe, the station recognizes the hyperparameters of the neural network to be trained (number of layers and


neurons, learning rate, activation functions) from an external file used to store the specifications of the tasks postulated by the organization. Subsequently, the model is stored in a specific path and training is performed using a fixed number of epochs. Upon completion, the file path is shared throughout the topology and each station validates the models using the new transactions that are received. Thus, the neural network is evaluated for regression performance and the identifier of the station with the lowest error is reported, i.e., the node considered the winner.
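The key establishment described in this section can be illustrated with an elliptic-curve Diffie-Hellman exchange: the station and the repeater exchange public keys and each derives the same symmetric key. The curve, the key-derivation step, and the Python cryptography package are assumptions for illustration; the actual commands extend the implementation from [7] inside CupCarbon.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device holds its own key pair; public keys are exchanged in broadcast.
station_key = ec.generate_private_key(ec.SECP256R1())
repeater_key = ec.generate_private_key(ec.SECP256R1())

# Both sides combine their private key with the peer's public key and obtain
# the same shared secret without ever transmitting it.
secret_station = station_key.exchange(ec.ECDH(), repeater_key.public_key())
secret_repeater = repeater_key.exchange(ec.ECDH(), station_key.public_key())
assert secret_station == secret_repeater

# Derive a 256-bit symmetric key from the shared secret for encrypting transactions.
symmetric_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"station-repeater-link"
).derive(secret_station)
```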

4.2 Blockchain Implementation First of all, the Blockchain class is defined, associated with the corresponding base station and the list of blocks that compose it. Among the functionalities included in this class, we highlight the reception of a transaction for storage, the increase in the number of validations of the block under construction, and the revision of ML models. Additionally, the class extends Thread so that the consensus mechanism can be executed concurrently in each station individually, while the main thread of each station keeps its communication protocols running. Likewise, each thread can remain aware of the changes identified in the state of the block and, based on this, perform the actions required to guarantee consistency between the distinct copies of the Blockchain. At the next level, the Block class includes the identifier of the station generating the block, the set of transactions it contains, the nonce (the value by which it is demonstrated that an effort has been carried out in the context of the Proof of Work), the number of confirmations and the type of proof. Given that a hash synthesizes information into a fixed-size string, the class also incorporates the block hash and the hash of the preceding block to consolidate the Blockchain. Some of its functionalities are closing a block and executing the proof assigned to it. Next, the Transaction class has the following attributes: the identifier of the linked sensor, the timestamp at which it was generated, the time elapsed since the start of data collection, the geographic coordinates where the record is positioned and the specific parameters of the poultry. This class is only used to model the measurements made by the sensors, so it does not have particular functionalities related to the operation of the Blockchain. Ultimately, the implementations of the two consensus mechanisms are incorporated. In the case of PoW, the difficulty of the cryptographic puzzle to be solved is specified and the nonce is iterated until a hash is found that satisfies the requirement. For PoLe, a file of learning tasks that the organization requires is defined, which is reviewed prior to the creation of a new block to determine the type of proof to be used. Thus, this file specifies the input and output data concerning the model to be trained, the possible network architectures to iterate on (which resembles a distributed execution of a random hyperparameter search) and the metric to be used to determine the node with the best model.
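A compact sketch of the data model just described. Attribute and method names are illustrative assumptions; the project's implementation extends CupCarbon and is not reproduced here.

```python
import hashlib
import json
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Transaction:
    sensor_id: str
    timestamp: float
    elapsed_since_start: float
    latitude: float
    longitude: float
    poultry_params: dict            # e.g. temperature, pH

@dataclass
class Block:
    station_id: str
    transactions: List[Transaction]
    prev_hash: str
    proof_type: str                 # "PoW" or "PoLe"
    nonce: int = 0
    confirmations: int = 0
    block_hash: Optional[str] = None

    def compute_hash(self) -> str:
        payload = json.dumps(
            {
                "station": self.station_id,
                "tx": [vars(t) for t in self.transactions],
                "prev": self.prev_hash,
                "nonce": self.nonce,
            },
            sort_keys=True,
        ).encode()
        return hashlib.sha256(payload).hexdigest()

@dataclass
class Blockchain:
    station_id: str
    blocks: List[Block] = field(default_factory=list)
    pending: List[Transaction] = field(default_factory=list)

    def receive_transaction(self, tx: Transaction) -> None:
        self.pending.append(tx)

    def confirm_block(self, block: Block, network_size: int) -> None:
        block.confirmations += 1
        if block.confirmations > network_size / 2:   # majority of sinks validated it
            self.blocks.append(block)
```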


5 Results 5.1 Test Methodology Firstly, multiple simulations of the scenario under PoW were run changing the difficulty of the cryptographic puzzle and storing the copies of the Blockchain that each of the base stations handles. This is done for two purposes: to validate the integrity and consistency of the Blockchain with respect to the delimited complexity, and to be able to extract the average block generation time for each of them so that it is possible to establish the most appropriate value for its operation. Regarding PoLe, the learning task consists of an outlier detector to identify food that is in unsuitable or potentially dangerous conditions for consumption. Therefore, possible hyperparameters are established for the construction of an autoencoder to perform such detection. Thus, each base station trains a model whose configuration is randomized within such specifications. It is important to note that this behavior resembles the parallel execution of the Randomized Search technique used for hyperparameter tuning when training ML models. Finally, two tests are carried out in the context of the developed system: 1. The average generation time of blocks using PoLe is recorded considering training and validation time. This is done to determine an appropriate PoW difficulty to have the most stable block creation rate possible when interleaving between mechanisms. 2. Since the PoLe scheme manages to obtain an appropriate model for the detection of uncommon transactions, an alert system is generated based on it. In this way, it is validated that the system is capable of labeling a measurement as atypical when exposed to records with variables outside the original distribution of historical data.
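A hedged sketch of the outlier detector described in the methodology: an autoencoder is trained to reconstruct normal sensor readings, and a reconstruction-error threshold flags atypical measurements. The layer sizes, the threshold percentile, the toy data, and the use of Keras are illustrative assumptions; in the project the hyperparameters are drawn at random from the task file.

```python
import numpy as np
import tensorflow as tf

def build_autoencoder(n_features, hidden=8, bottleneck=3):
    """Small symmetric autoencoder for reconstructing sensor records."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(hidden, activation="relu"),
        tf.keras.layers.Dense(bottleneck, activation="relu"),
        tf.keras.layers.Dense(hidden, activation="relu"),
        tf.keras.layers.Dense(n_features, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Toy training data standing in for normal transactions (e.g. temperature, pH).
X_train = np.random.normal(loc=[4.0, 6.2], scale=[0.5, 0.2], size=(500, 2))
autoencoder = build_autoencoder(n_features=X_train.shape[1])
autoencoder.fit(X_train, X_train, epochs=50, batch_size=32, verbose=0)

# The reconstruction error on normal data defines the alert threshold.
reconstruction = autoencoder.predict(X_train, verbose=0)
train_mse = np.mean((X_train - reconstruction) ** 2, axis=1)
threshold = np.percentile(train_mse, 99)

def is_anomalous(record):
    """Flag a new measurement whose reconstruction error exceeds the threshold."""
    rec = autoencoder.predict(record.reshape(1, -1), verbose=0)
    return float(np.mean((record - rec) ** 2)) > threshold
```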

5.2 Obtained Results The distribution of execution times under PoW is shown in Fig. 3, where the median and extreme values are highlighted. As can be observed, for difficulties lower than 6, the creation of blocks does not reveal large alterations, with an average generation time of around 90 s. This is due to the fact that, for these difficulties, the time required to find the nonce that satisfies the hash constraints is considerably smaller than the time required by the mechanism to send the blocks and the respective validations. This behavior is consistent with that observed in [5], where it can be seen that for these difficulties the time is under 100 milliseconds. From difficulty 6 onwards, the expected exponential tendency according to [5] becomes evident as the “effort” required in the PoW increases, since there is a growth in both the mean and the variability of the measurements. After this point, it was decided to stop the experimentation phase at difficulty 7 because of the computational resources available. It is important to


Fig. 3 Distribution of block generation times for different Proof of Work difficulties

note that the dispersion in the measurements is linked to the linear communication structure between the base stations defined by the scenario. This is because it takes longer to disseminate information throughout the network when a block is closed by a station located at one of the extremes than by one located at a midpoint. Subsequently, the average time and deviation of PoLe block creation were determined: 122.9 and 25.1 s, respectively. These times include both the validations required by the nodes across the network and the additional processes required to build the neural network. When contrasted with the PoW graph, it is possible to deduce that the most appropriate difficulty to maintain a constant generation rate when interleaving between consensus mechanisms is 6, since this is where the distribution most closely matches these values. Ultimately, regarding the generation of alerts described in the methodology, the best selected model is executed in parallel to the main flow of the application. Based on the reconstruction of the transactions by the autoencoder, it is feasible to establish an MSE value that serves as a threshold to classify a measurement as anomalous, corroborating the usefulness of the consolidated neural network.

6 Conclusions Considering the architecture implemented and the tests performed on it, the correct operation of a framework where the latest information technologies converge was validated, allowing the information to be simultaneously transparent, complete, available and affordable. Specifically, the Blockchain is established as the core around which the functionalities provided by IoT devices and AI models are articulated and structured, thanks to the flexibility exhibited by the consensus methods. Consequently, the results obtained reflect the added value of alternating between PoW and PoLe according


to ML requirements without significantly impacting performance. Although the consumption of energy resources is not reduced, these resources are better invested by orienting them to the training of models, without making the generation of blocks dependent on the availability of learning tasks. In investigations such as [5], the block generation times with PoW are considerably smaller than the ones obtained here, even for greater difficulties. However, this is attributed to the simulator's limitations regarding information dissemination. In a real scenario, these times would be attenuated, although an increase would be expected given the encryption and decryption steps. Likewise, the experiments support a Blockchain that keeps the record reception rate and their consolidation into blocks stable despite the exchange between PoW and PoLe. At the same time, there is the opportunity to carry out what is called Randomized Search, increasing the efficiency of identifying the hyperparameter values that provide the best error metric on validation data. Such behavior favors obtaining a model with appropriate performance that supports the operation of a particular organization.

7 Future Work Considering what was said in the introduction, the ideal scenario would be to grant both internal and external stakeholders the option of accessing the data that concerns them in order to favor the democratization of information. With this intention, it would be feasible to develop a web application that supports access to updated records, which would provide a complete and faithful overview of the operations linked to the target supply chain. This could be done while preserving the properties of the Blockchain, protecting the identity of the original sources of the measurements through obfuscation techniques that guarantee privacy. Ultimately, the alert system implemented opens up a wide range of possibilities concerning the knowledge network that is built with the information gathered at crucial stages of a product's life cycle. Also, extending the scope of the framework would lead to the consolidation of a more extensive database of food properties that are relevant to the final consumer and the focus of organizations in the health field. Furthermore, it would allow a better understanding of the ecosystem in which the actors involved in the production, transportation, management and storage of articles for consumption coexist and interact with each other.

References 1. Shew, A.M., Heather A.S., Nayga Jr, R.M., Lacity, M.C.: Consumer valuation of blockchain traceability for beef in the United States. Appl. Econ. Perspect. Policy (2021) 2. Jiang, D., Fang, J., Zhao, Y.: Establishment of a traceability model of fresh milk based on blockchain. In: The 7th International Workshop on Advanced Computational Intelligence and Intelligent Informatics [IWACIII2021] (2021)


3. University of Cambridge. Cambridge Bitcoin Electricity Consumption Index. https://ccaf.io/cbeci/index 4. Unidad de Planeación Minero-Energética (UPME). Proyecciones de demanda. https://bit.ly/3B8h2nh 5. Liu, Y., Lan, Y., Li, B., Miao, C., Tian, Z.: Proof of Learning (PoLe): empowering neural network training with consensus building on blockchains. Comput. Netw. (2021) 6. Brisbin, S., et al.: Vechain: development plan and whitepaper (2018) 7. Parra, J. F.: Elliptic curve cryptography to enhance vehicular networks security. Universidad de los Andes (2020) 8. Bravo-Marquez, F., Reeves, S., Ugarte, M.: Proof-of-learning: a blockchain consensus mechanism based on machine learning competitions, pp. 119–124. In: Artificial Intelligence in Agriculture (2019) 9. Vargas, H. F.: Defensa contra intrusos en redes de dispositivos IoT usando técnicas de Blockchain y Machine Learning. Universidad de los Andes (2020) 10. Misra, N.N., Dixit, Y., Al-Mallahi, A., Singh, M., Upadhyay, R., Martynenko, A.: IoT, big data and artificial intelligence in agriculture and food industry. IEEE Internet Things J. (2020)

Raising Awareness of CEO Fraud in Germany: Emotionally Engaging Narratives Are a MUST for Long-Term Efficacy Margit Scholl

Abstract This article illustrates the need for a different approach to awareness-raising as a means to generate more cybersecurity in companies. Important findings from the applied scientific literature on the specific topic of CEO fraud attacks are summarized, and two game-based learning scenarios from a current German project for small and medium-sized enterprises (SMEs) are presented. These scenarios have been developed on the basis of insights from the realm of psychology. It is important to arouse positive emotions in employees with these awareness-raising measures in order to create a lasting effect. This, in turn, gives rise to serious learning games with an emotional design, which includes discursive team exchanges, opportunities for individual identification, appealing multimedia elements, and storytelling. The design of these stories—whether they are analog or digital—and employee investment in them are of central importance. Keywords Cyberattacks · Awareness · Game-based learning · SMEs

1 Introduction to the Current Situation Rapidly increasing digitization is becoming important for all institutions, both large and (very) small, and thus brings with it a range of different security challenges. Successful digitization requires a strategy to be implemented by the top management of the institution to guarantee an appropriate operational level of security, implemented by sufficiently qualified staff, and it demands an in-depth understanding of the security culture. Chief executive officers (CEOs) are the focus of a range of different studies. One point of interest is the relationship between their compensation and performance incentives and the likelihood of fraud in the areas of financial reporting and accounting [1]. The various studies carried out for this purpose have so far not led M. Scholl (B) TH Wildau/TWZ e.V., Hochschulring 1, 15745 Wildau, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_40


to uniform results [2], nor is there any agreement on what CEO characteristics are indicators of accounting fraud (see [3, 4]). In large companies the ties between the CEO and the board merit scrutiny as part of the governance processes (see [5, 6]). However, the types of scams that CEOs can be involved in are beyond the scope of this paper on CEO fraud. Nevertheless, this short clarification illustrates the extreme importance and independent position of CEOs in companies. In this paper, CEO fraud refers to a special cyberattack from outside the company. Employees who are allowed to transfer money in the name of the company are given a fictitious order from their boss, instructing them to carry out high-volume financial transactions for an allegedly urgent business that requires secrecy and is extremely important for the company’s survival [7]. Everything, of course, is to be done as quickly as possible, discreetly, and without further questions, and the scammers’ accounts are typically held abroad. This CEO fraud is also known as a “boss trick,” “fake president scam,” or “business e-mail compromise (BEC)” [8]. The attack method is a special scam combining phishing and social engineering that involves the social manipulation of people as a means to achieve significant financial gain. Most CEO fraud attacks are a collection of attacks rather than just a single phishing incident [9]. The attackers can obtain most of the information they need about a CEO or chief financial officer (CFO) from public sources [10, 11]. CEO fraud has been observed in Germany since 2013, but it was still largely unknown to the general public even in 2015 [12]; only at the end of 2016 did experts first issue a public warning about it [7]. It affects both large corporations and small and medium-sized enterprises (SMEs), and a successful attack of this kind can pose a major threat [7], with millions lost by individual companies (see [13]). Companies need to take a two-pronged approach. Firstly, technical anti-spoofing measures such as Sender Policy Framework (SPF), Domain Keys Identified Mail (DKIM), and Domain-Based Message Authentication, Reporting, and Conformance (DMARC) must be implemented. In addition to SPF, DKIM, and DMARC verifications, the firm Cisco recommends Forged Email Detection (FED) and a total of nine levels of technical protection [14]. However, SMEs rarely have the necessary resources (experts and budget) to comply with this in full. Secondly, awareness must be created, and this should also be done in two ways: on the one hand, employees and CEOs/CFOs must be attentive to what they publish online, because this information could facilitate a fraud by helping attackers fill in gaps in their knowledge, while, on the other, employees should be able to recognize phishing e-mails even if basic technical protection is in place, because no system is completely secure, and ultimately it is the company’s staff who actively implement the incoming scam by clicking links, transferring money, etc. Awareness raising is a crucial step in achieving a higher level of information security (ISec), including cybersecurity (CSec), which is needed, in turn, for successful digitization and an appropriate ISec culture in SMEs. ENISA (2018) suggests that measures that empower people to respond appropriately to ISec threats in the belief that their actions will be successful are more effective in provoking security-conscious behavior than measures that emphasize threats and trigger a feeling of fear [15]. 
Thus, it is important to arouse positive emotions in employees when raising their awareness. To encourage employees to feel positive


toward ISec/CSec and to support their learning process, awareness-raising measures can make use of blended and game-based learning in combination with team events and the principles of emotional design. This paper presents the development of an analog and a digital serious game designed to raise awareness of CEO fraud in German SMEs. Section 2 summarizes the main findings from literature reviews. Section 3 briefly explains the complex project and describes the two learning scenarios. Section 4 focuses on discussion of the topic and is followed by a look ahead to the next phases of the project in Sect. 5.

2 The Main Findings from Literature Reviews There is no doubt that merely imparting knowledge does not lead to the necessary awareness, appropriate risk assessment, or sustainable behavioral change in the area of ISec and CSec (see [16, 17]). At the international level, the KAB model (Knowledge, Attitude, Behavior) is often used for explanatory purposes (see the summary in [18]). Rothschild (1999) [19] modified the MOA (Motivation, Opportunity, and Ability) model components for information processing in advertising/marketing [20] so that they could be used in managing behavior related to public health and social issues—this might equally be applied to the abstract topics of ISec and CSec. In the German-speaking world, the KVC model—Knowledge (“being informed”), Volition (“being willing”), and Capacity (“being able”)—has been established to raise ISec awareness (see [21]). In order to develop capacity, the concrete conditions in the company must be taken into account. Volition, meanwhile, should be promoted using internal company measures. Knowledge itself plays an important underlying role in awareness-raising measures but is not the only factor. On its own, it is not enough to ensure the long-term efficacy of such measures, which has been a desideratum for years. Nevertheless, the results of [22] show that respondents with recognized computer skills reported a higher positive association with CSec awareness. However, providing all employees with specialist informatics training is not an option for most companies. Moreover, simply telling people to do something is certainly not enough—the traditional approach to cybersecurity awareness is not effective in influencing employees and bringing about lasting behavioral changes [23]. The results of [24] show that self-efficacy, risk awareness, and social support are significant predictors of (smartphone) security behavior.

2.1 CEO Fraud Attacks and Learning The phishing industry alone is worth billions, and CEO fraud is a scam with large payouts in approximately 140 countries [9]. Spoofing e-mail addresses and phishing are the two main attack methods in CEO fraud, combined with the registration of


a similar corporate domain: the attacker sends an e-mail using the CEO’s name to an accounts payable employee to convince the targeted user to send money to the attacker-controlled bank account [9]. To increase the level of urgency, the attacker might use social engineering too. In Germany, companies often claim damages from the credit institutions that processed their payments, arguing that the banks should have recognized the fraud and thus prevented it, or at least that they failed to point out the risk of fraud to the companies as their customers. However, only a small proportion of these cases are ever brought to court, because the fear associated with a tarnished image usually prevails over the allure of monetary benefit from pressing for damages [12]. As a result, only a few court decisions have become public knowledge in Germany, and their outcomes were very different, ranging from full liability on the part of the executing bank to sole liability on the part of the customer [12]. Financial loss is not the only possible impact of such attacks—some attackers target HR departments in an attempt to convince employees to send personally identifiable information to a third party, and this is later used in bank fraud or identity theft [9]. Most compliance regulations require organizations to alert customers after a data breach, so CEO fraud can damage a brand’s reputation and involve litigation costs; furthermore, employees who fall for CEO fraud often risk losing their jobs, especially if the targeted victim was another executive [9]. According to the latest report on the situation of IT security in Germany, the threat from the manipulation of media identities, and thus also from CEO fraud, has increased significantly [25]. A “medial identity” is understood to refer to an individual present within a digital medium who can be identified by biometric attributes such as their face or voice [26]. The real-time capability of the automated manipulation methods increases as they become more and more refined; this is accompanied by the growing public availability of tools that can be used to create counterfeits of ever-higher quality [25]. Proofpoint, in its latest 2022 phishing situation report, notes a general decline in security awareness [27] and suggests that pandemic fatigue may play a role in terms of the impact it has had on employee motivation and attention spans, or that security awareness training had a lower priority in 2021. In any case, it cannot be assumed that employees will understand and be able to differentiate between security terms, especially if awareness measures only take place on an irregular basis. Repetition and consolidation measures are crucial for building knowledge and skills [27].

2.2 Learning, Narratives, and Serious Games Learning is a process that operates individually and relies on personal experience— the learner is actively involved: bringing their own experience into the learning process, adapting existing knowledge and skills, and being enabled to change their behavior, attitudes, and values [28]. If an institution wants to pursue active personnel development through training measures, four conditions should be met [23]: information must appear meaningful, relevant, helpful in the employee’s own work situation, and linked to their existing knowledge—it should not be redundant. Moreover, the


results of [29] suggest that resource beliefs, utility beliefs, and reciprocity beliefs are all positively associated with knowledge acquisition, while reward beliefs are not. According to the study by [30], chief information security officers (CISOs) recognized increased awareness among employees when microlearning with shorter topic modules was used. This was also positively received by all employees who had experience with it. Some saw microlearning as a sort of “alarm clock” consisting of a module with a small, focused, contained piece of information, whose value was as a recurring reminder of cybersecurity. Stories amplify messages: they make facts more accessible by connecting them to emotional experience, they are memorable, and they can be actively shared [31]— narratives are thus a prime means of changing security awareness [32]. Well-crafted stories consisting of a sequence of events that fit together to produce a final outcome should be incorporated in lessons [33, 34] and are currently being used by government agencies too [35]. Short stories can also be designed as serious comics—intended to convey scientific knowledge in a clear, entertaining way [36]. However, such cyberattack stories may also become routine, so that focusing on social and behavioral issues is important to improve the current situation in organizations [37]. Emotions enhance our ability to form vivid memories—emotional arousal causes a hormone to be released that “primes” nerve cells to remember events by increasing their chemical sensitivity at sites where nerves are rewired to create new memory circuits [31]. An experience characterized by emotional stress is therefore remembered more longterm: this can be put to positive use in increasing ISec/CSec awareness—i.e., in addition to imparting knowledge, we also need to let people “immerse” themselves emotionally in the specific security topic in order to produce an emotional “point of contact” and thus motivate further discussion of the topic. Stories are a universal, cross-cultural phenomenon: humans have an innate tendency to process stories, although narratives should not be treated as a panacea, since we still know very little about how this operates (Amy Shirong Lu in [38]). However, when players are unaware of, unconvinced by, or resistant to the desired behavior, storytelling is crucial—a great story, well told, may be the most effective vehicle for behavioral change (Richard Buday in [38]). It’s a complex matter, and the story must be consistent with the core purpose of the game (Jesse Schell in [38]). There is strong evidence that stories are powerful tools for promoting behavioral change and can be particularly powerful when told interactively (Elizabeth J. Lyons in [38]). The question is, how can this be achieved in a simple (digital) game? Laamarti et al. [39] focused on digital serious games and found that most of the definitions agree that serious games also contain an entertainment dimension and the potential to improve the user experience through multimodal interaction. According to their taxonomy [39], digital serious games differ in terms of the type of activity, modality, interaction style, the environment, and the area of application. For participants to become actively involved, it is crucial that the topic and the problems of the game are relevant and interesting for them [40]. 
The term “serious game” is established nowadays, but there is no unique definition of the concept, because it refers to a wide range of applications: they are used for


training, advertising, simulation, or education and are designed to run on different systems [41]. According to Susi et al. [41], serious games focus on learning with practical simulations for problem solving, while entertainment games favor rich experiences with high-quality communication and a focus on simply having fun. The literature research by [41] and [39] show a number of positive effects when it comes to informing, learning, developing skills, social interaction, and psychological aspects. However, despite these findings, according to [41], there seems to be no conclusive evidence to support the celebrated usefulness of serious games, so research should focus on explaining why and under what conditions these games are compelling and effective. Laamarti et al. [39] emphasize that special attention should be paid to the design of digital serious games and their development so that they can fulfill their purpose. It is successful when the game developer strikes a balance between the fun factor and the game’s main purpose, which is obviously not entertainment. How this balance can be achieved is another area to be researched [39]. The current handbook [42] offers an up-to-date summary of the broad spectrum of serious games and the multiple areas they cover. Moreover, from November 2022, the Erasmus University Rotterdam will offer a free Massive Open Online Course (MOOC) in which the broad spectrum of serious games will be dealt with more intensively [43]. This MOOC differentiates between serious games and simulations to prepare for risky tasks (imitating real-life situations), with educational games used to motivate learning, healthcare games to encourage an exercise regimen, advertising games to make testing a product fun, military games to recruit soldiers, political games to change people’s view on topics, and gamification to apply gaming elements in a non-game context. Gaming environments have great potential as a support for immersive learning [44]: the user experience (UX) is predicated on a truly expanded and nuanced view, reflecting the quality of interactive technologies and positive experiences [45]. “Emotional design is the concept of how to create designs that evoke emotions that lead to positive user experiences” [46]. Emotional design principles use personalization, identification options, appealing multimedia elements, and storytelling. However, UX is subjective, and it is rarely easy to extrapolate it from objective data because the experience is holistic—it encompasses perception, action, motivation, and cognition [45]. Additionally, serious games need to reflect reality to fulfill their purpose, be it learning, training, or behavioral change, and they employ features that make gaming a self-motivating, engaging, and enjoyable activity [47].


3 The Project’s Serious Games Simulating CEO Fraud Attacks 3.1 Background of the German Project The information and experience compiled in our current project with pilot companies should enable the management, acting together with CISOs, to initiate training and further educational measures tailored to the needs of the specific working groups. To this end, interviews were carried out to ascertain the main ISec/CSec issues of interest to the companies [48]. An online survey was also conducted and evaluated, allowing security profiles to be developed [49]. The results provided the basis on which to build an innovative overall scenario for ISec/CSec in SMEs and develop seven analog and seven digital learning scenarios geared to the main topics. The project is named “Awareness Lab SME (ALARM) Information Security” and funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK). It runs from October 1, 2020, to September 30, 2023. The BMWK only allows learning scenarios for German SMEs to be developed in German: the final versions can be used free of charge and are available from the project website [50].

3.2 The Analog CEO Fraud Learning Scenario The subcontractor known_sense has been committed to lively communication on the subject of security and effective awareness raising since 2002. The task of the project is to carry out impact studies based on depth psychology and to develop, visualize, and produce seven game-based, analog learning scenarios. In Fig. 1, A shows the board, the original size of which is 120 cm × 120 cm. It is divided up into five CEO fraud phases, with each phase broken down into further process steps. A total of twenty-two playing cards can be assigned to these phases/ processes. In addition to twenty-two correct assignment cards, there are also three incorrect cards to focus the attention of participants. Some of the cards must be individually positioned on the board, while others only need to be assigned to the appropriate sub-process. There are also six e-mail cards, four of which are phishing e-mails used in preparation for CEO fraud as a kind of spear phishing, while two are non-critical e-mail cards that do not constitute phishing—i.e., there is no intention to scam. The four phishing e-mails are placed in the center of the board; the two uncritical e-mails are placed outside the center. In Fig. 1, B–D show three examples of the assignment cards. B symbolizes the acquisition of the target company, C indicates further telephone payment instructions, and D says that there is no access to the money transferred. E shows one of the phishing e-mails used for CEO fraud. For the four phishing e-mails, there are also suggestions indicating the fraudulent intentions applied in the game. A stopwatch is recommended to put the participants under time pressure, because “the biggest trick with any phishing attack is to force


Fig. 1 Analog learning scenario (serious game) “CEO Fraud Attack” under development for SMEs (preliminary version). See text for a description

a sense of urgency" [9]. Each of the project's analog learning scenarios provides a game instruction and a specific guide for the game moderator. The moderator starts the stopwatch as soon as the participating team begins to place the playing cards in the appropriate areas on the board. The moderator prescribes a period of time within which the team must have reached a solution—e.g., six minutes. If several teams are involved one after the other, there is a competition between the teams. The solution is then discussed by the respective team and the moderator. Here, too, a time limit is recommended if different employee teams go through different analog learning scenarios in parallel as circuit training. The moderator calls time after six minutes, briefly discusses the result, rearranges the incorrectly placed playing cards, and counts the points. The preliminary sample solution of this analog scenario is not shown here. One point is awarded for each card placed in the correct phase, with a maximum of 25 points. Optionally, there is an extra point for each correctly recognized e-mail card (phishing or non-phishing), yielding a maximum of 6 points. Another option is to add points if the cards are put not only in the right phase but also in the right place in the sequence, with a possible 22 extra points available. If all the options are implemented, this gives a total of 53 points. In the case of CEO fraud, the moderator is given the following specific guidelines. Start with a personal introduction and explain the topic of the learning station in a maximum of four minutes. Step 1: Clarify whether managers, controllers, or others affected by "CEO fraud" are present, and whether they have been confronted with this security problem. As a reference for CEO fraud, also briefly discuss the topic of phishing and social engineering. The "Golden Rules" should also be looked at in advance of the moderation. Step 2: Explain the rules of the game and the scoring. If


the playing time does not seem sufficient, do not distribute the cards to individuals but rather form small groups that can consult. Step 3: At the end of the playing time, conduct a debriefing lasting about four minutes. Step 4: It is important to take all the cards off the board immediately after completion and shuffle them before the next team arrives.
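As a small illustration of the scoring scheme described above, the sketch below computes the maximum achievable score; it is only an illustrative summary of the rules, not part of the project's materials.

```python
def max_score(phase_cards=25, email_cards=6, sequence_cards=22,
              email_bonus=True, sequence_bonus=True):
    """Maximum achievable score for the analog CEO fraud game.

    One point per card placed in the correct phase, optionally one point
    per correctly recognized e-mail card, and optionally one extra point
    per card placed at the right position in the sequence.
    """
    total = phase_cards
    if email_bonus:
        total += email_cards
    if sequence_bonus:
        total += sequence_cards
    return total

print(max_score())  # 53 points when all options are used
```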

3.3 The Digital CEO Fraud Learning Scenario

The subcontractor Gamebook Studio provides an easy-to-use and comprehensive toolset for the development of digital serious games based on storytelling using the visual novel format and Gamebook Technology. The project's seven digital learning scenarios are being developed by this subcontractor in consultation with the research team. The next figure shows the decision tree (Fig. 2, middle) of the digital version of the CEO Fraud serious game—represented here in a very rough form—which the player goes through in the role of a security detective. At the very beginning of the game, the player is given general information. Green nodes in the decision tree are "story modules," providing the player with information, presented as text, instructions, feedback, or even music. Number 1 indicates the beginning of the digital game. Figure 2, left, shows a representative image of a hacker who gives the player initial information about his/her task. The hacker also explains that whatever decisions the player makes will affect how the game progresses. The decisions to be made should therefore be well considered by the player, since the goal is to pick up on the hacker's tricks. The hacker also explains that the player's points will be counted, and both his/her efficiency and social skills will be analyzed. In this case, efficiency means whether and how quickly the player catches on to the hacker's tricks. At the end, feedback is given on how much learning content the player has discovered along the way. Number 2 in Fig. 2 refers to a red node where the player has to make decisions. At the beginning, there is the question of which character the player chooses as an avatar and whether the player wants to be addressed as a woman or as a man. A representative image can be seen in Fig. 2, right; the player also enters his/her chosen name as an avatar. The actual story starts in the detective agency at the next green node, identified by the number 3 in Fig. 2—the corresponding image can be seen on the left. At decision point 4, the player must decide, for example, which questions to ask the caller—as illustrated by the picture on the right. Later, the story continues in the SME, denoted by numbers 5 and 6 in Fig. 2. At decision point 7, the player must decide how to proceed in the company. At the end, the player is given feedback and her/his score in the form of stars.
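To illustrate how such a branching story could be represented in software, the following sketch models story modules (green nodes) and decision nodes (red nodes) as plain Python dictionaries. It is a hypothetical structure for illustration only, not the Gamebook Technology data model, and the node texts are invented.

```python
# Hypothetical node structure for a branching "visual novel" scenario.
# "story" nodes show content and lead to exactly one successor;
# "decision" nodes offer several options chosen by the player.
story = {
    "intro": {"type": "story", "text": "A hacker explains your task.", "next": "choose_avatar"},
    "choose_avatar": {"type": "decision",
                      "options": {"female avatar": "agency", "male avatar": "agency"}},
    "agency": {"type": "story", "text": "The case starts in the detective agency.", "next": "questions"},
    "questions": {"type": "decision",
                  "options": {"ask about the caller's number": "sme",
                              "ask about the requested transfer": "sme"}},
    "sme": {"type": "story", "text": "The investigation continues in the SME.", "next": None},
}

def play(node_id="intro"):
    """Walk the story graph, asking the player at each decision node."""
    while node_id is not None:
        node = story[node_id]
        if node["type"] == "story":
            print(node["text"])
            node_id = node["next"]
        else:
            options = list(node["options"])
            for i, opt in enumerate(options, 1):
                print(f"{i}. {opt}")
            choice = options[int(input("Your choice: ")) - 1]
            node_id = node["options"][choice]
```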


Fig. 2 The rough decision tree of the digital learning scenario “CEO-Fraud” (middle) and sample scenes (left and right) with seven positions—see text for explanation.


4 Discussion

Our analog and digital learning scenarios (serious games) are being developed and improved in three iteration stages in the project. In the first iteration, the subcontractor's proposal is tested by the university research team itself. In the second, improved iteration stage, the tests are carried out with the project's pilot companies—however, the number of participating employees and the specific people are not identical for the individual games. Another round of feedback leads, at the third iteration level, to a new version, which is tested with other participants in public events. From this feedback, the final versions will be created in German, which will be available for gaming or download free of charge from the project website at the end of the project in 2023.

The analog game should be completed in fifteen minutes and thus corresponds to the study results of [30]. The length of such a microlearning module should be very short, lasting only a few minutes: "What you can do quickly during a break" [30]. Quizzes that followed a module were also valued because they force people to focus on content [30]. Adapting the (external) training material to the local context and language was seen as crucial by staff; however, they also acknowledged that this could be a challenge for the CISO [30]. Presentations should be limited in duration to around fifteen to twenty minutes to help learners focus on the topic. Employees had a positive perception of workshops and various types of interactive meetings in which discussions could take place and knowledge exchange was encouraged, with the element of interactivity cited as the main reason for the positive attitude [30]. We can confirm these results.

From the summary of CEO fraud in Sect. 2.1, it is clear that the attackers proceed in phases. How many phases there are—and what they each involve—is described in different ways in the literature, and, in reality, this is likely to be contingent on different circumstances, reasons, and motives. In collaboration with our subcontractor, we opted for five main phases (processes) in the analog learning scenario (see Fig. 1): 1. Research/Investigations; 2. Testing/Inquiry; 3. Maintaining contacts; 4. The attack; and 5. The damage. This involves a high level of complexity when playing the analog serious game. One could reduce this complexity and build the game in a modular fashion, carrying out the individual phases separately—the analog game is flexible enough to accommodate that. However, to maintain the commitment of the participants, care must be taken to ensure that the individual analog modules do not become too easy. The digital games are not a simple reflection of the analog variants, because they are intended to generate independent motivation for addressing the issues, with employees targeted as different types of learners. In contrast to the analog version, the digital scenario is played by the employee working alone at a time and location of their choice. Therefore, the discursive environment within the digital game must be determined by the player's decisions—the game designer has an important task here to develop the story and the learning paths in an appropriate and appealing way (see Fig. 2). The more successful the designer is in this, the more the story will stick


in the learner’s memory. In addition, a discussion of the digital learning scenarios should take place within the company afterwards—with an active, in-depth focus on ISec/CSec to anchor what has been learned over the long term. Don’t underestimate the importance of debriefings! As mentioned above, Fig. 2 offers a very rough representation of the decision tree: the actual individual parts that make up the story can only be seen in the production tool of Gamebook Technology. For example, decision points can be used to allow the player to interrupt the game and take a look at the glossary to get more input on the topic. Decision points may also constitute a “time choice” such that the player must choose between options within a specified period. If the player takes too much time to decide, the digital game sends her/him back. However, all the information the player has previously accumulated will remain, leading to other options for the next step. This brief explanation makes it clear how important the designer’s connection to the topic and empathy for the target group is in building an appropriate story. Our experience creating three agile iterations and the test results compiled so far confirm that special attention should be paid to the design and the story of serious games. To find a balance between the fun factor and the game’s main purpose, it is necessary to reduce the complexity of the ISec/CSec topics. All the digital serious games in the project are immersive stories that depict security-relevant everyday work situations in SMEs. The players experience the stories from a first-person perspective, which is different in each game. This enables the learning content to be examined in detail and encourages identification with it.

5 Outlook

The two serious games on CEO fraud presented here, like the other six analog and digital learning scenarios, will have been evaluated by the end of 2022, so that the final versions will be created from spring 2023 on. The development and evaluation results will be recorded in the final project documentation in German. Individual aspects will also be made available on an ongoing basis in scientific publications in English. Many organizations offer only an hour or two of knowledge training per year to raise security awareness among their employees [51]—this fails to produce any lasting knowledge, attention, or behavioral change. Instead, short and interactive game-based learning scenarios should be used continuously, especially when time is at a premium. Our two game-based approaches instill the kind of security thinking that can turn employees into a critical layer of defense. Developing any degree of long-term effect relies on the provision of appropriate and compelling security stories that stick in the memory, as well as the opportunity for exchange between employees. After the project in 2023, the awareness-raising events outlined in this paper and training courses for CISOs [52] can be booked through the Wildau Institute for Innovative Teaching, Lifelong Learning, and Design Evaluation (WILLE), which


is part of the Technology Transfer and Continuing Education Center (TWZ) at TH Wildau [53].

Acknowledgements As the initiator of "Awareness Lab SME (ALARM) Information Security" and project manager, I would like to thank the Federal Ministry for Economic Affairs and Climate Action (BMWK) for funding this project. I am grateful to our long-standing security awareness partner, the company known_sense, and the other subcontractor, Gamebook Studio. My special thanks to the pilot companies for their active involvement and to my research team—also featured on the project website [50]—who have moved the project forward in different constellations. Finally, I would like to acknowledge the anonymous reviewers for their helpful critical comments. Many thanks, too, to Simon Cowper for his detailed and professional proofreading of the text.

References

1. Chen D, Wang F, Xing C (2021) Financial reporting fraud and CEO pay-performance incentives. J Manag Sci Eng 6(2):197–210
2. Chen J, Cumming D, Hou W, Lee E (2016) CEO accountability for corporate fraud: evidence from the split share structure reform in China. J Bus Ethics 138(4):787–806
3. Troy C, Smith KG, Domino MA (2011) CEO demographics and accounting fraud: who is more likely to rationalize illegal acts? Strateg Organ 9(4):259–282
4. Masruroh S, Carolina A (2022) Beneish model: detection of indications of financial statement fraud using CEO characteristics. Asia Pac Fraud J 7(1):85–101
5. Chidambaran NK, Kedia S, Prabhala N (2011) CEO director connections and corporate fraud. Fordham University Schools of Business Research Paper (1787500)
6. Nistala JS, Aggarwal D (2022) YES Bank Fraud: examining the softer underbelly of the fraud from a behavioral model. J Forensic Account Res 7(1):133–150
7. Allianz für Sicherheit Homepage. https://www.allianz-fuer-cybersicherheit.de/SharedDocs/Downloads/Webs/ACS/DE/partner/20161129_expkr_statement02.pdf. Accessed 23 Aug 2022
8. Buss S (2017) Identitätsmissbrauch-Strafbarkeit beim CEO Fraud. Comput und Recht 2017:410–416
9. Proofpoint Homepage. https://www.proofpoint.com/us/threat-reference/ceo-fraud. Accessed 22 Aug 2022
10. European Union Agency for Cybersecurity (ENISA) Homepage, 10 February 2016. https://www.enisa.europa.eu/publications/info-notes/how-to-avoid-losing-a-lot-of-money-to-ceofraud. Accessed 23 Apr 2021
11. KnowBe4 (ed) CEO Fraud–Schützen Sie Ihr Unternehmen gezielt gegen Social Engineering/CEO Fraud—Protect your company against social engineering in a targeted manner. Whitepaper (undated). https://www.knowbe4.de/wissen/whitepaper/ceo-fraud. Accessed 24 May 2021
12. Zahrte K (2021) Begriff des Zahlungsinstruments und Haftungsverteilung beim CEO-Fraud. Zeitschrift für Bankrecht und Bankwirtschaft 33(2):131–139
13. Industrie und Handelskammer (IHK) Hessen Homepage. https://www.ihk-hessen-innovativ.de/chef-betrug-mit-folgen-21-mio-schaden-zwei-manager-entlassen/. Accessed 29 Oct 2022
14. Cisco Homepage. https://www.cisco.com/c/en/us/support/docs/security/email-securityapp44-best-practices-guide-for-anti-spoofing.html. Accessed 27 Oct 2022
15. European Union Agency for Network and Information Security (ENISA) (2018) Cybersecurity culture guidelines: behavioural aspects of cybersecurity, Heraklion, Greece
16. Sasse MA, Hielscher J, Friedauer J, Peiffer M (2022) Warum IT-Sicherheit in Organisationen einen Neustart braucht/Why IT security in organizations needs a fresh start. In: Federal Office for Information Security (BSI) (ed) Proceedings of the 18. Deutscher IT-Sicherheitskongress des BSI/18th German IT security congress of the BSI, February 2022, Virtual Event. ISBN 978-3-922746-84-3
17. Bada M, Sasse MA, Nurse JRC (2016) Cyber security awareness campaigns. Why do they fail to change behaviour? In: Proceedings of international conference on ICT for sustainable development. ICT4SD 2015, vol 2, 1st edn.
18. Scholl MC, Fuhrmann F, Scholl LR (2018) Scientific knowledge of the human side of information security as a basis for sustainable trainings in organizational practices. In: Proceedings of the 51st Hawaii international conference on system sciences
19. Rothschild ML (1999) Carrots, sticks, and promises: a conceptual framework for the management of public health and social issue behaviors. J Mark 63(4):24–37
20. Maclnnis DJ, Moorman C, Jaworski BJ (1991) Enhancing and measuring consumers' motivation, opportunity, and ability to process brand information from ads. J Mark 55:32–53
21. Helisch M, Pokoyski D (eds) (2009) Security awareness–Neue Wege zur erfolgreichen Mitarbeiter-Sensibilisierung/Security awareness—new ways to successfully raise employee awareness. Springer, Wiesbaden
22. Zwilling M, Klien G, Lesjak D, Wiechetek Ł, Cetin F, Basim HN (2022) Cyber security awareness, knowledge and behavior: a comparative study. J Comput Inf Syst 62(1):82–97
23. Alshaikh M, Adamson B (2021) From awareness to influence: toward a model for improving employees' security behaviour. Pers Ubiquit Comput 25(5):829–841
24. Zhou G, Gou M, Gan Y, Schwarzer R (2020) Risk awareness, self-efficacy, and social support predict secure smartphone usage. Front Psychol 11:1066
25. BSI—Federal Office for Information Security (ed) (2022) Die Lage der IT-Sicherheit in Deutschland 2022. https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/Lageberichte/Lagebericht2022.html?nn=129410. Accessed 27 Oct 2022
26. BSI—Federal Office for Information Security (ed) (2021) The state of IT security in Germany in 2021. https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/Securitysituation/IT-Security-Situation-in-Germany-2021.pdf?__blob=publicationFile&v=5. Accessed 27 Oct 2022
27. Proofpoint (ed) (2022) State of the Phish: Sicherheitsbewusstsein und Bedrohungsabwehr im Fokus–eine umfassende Bestandsaufnahme (German version). https://www.proofpoint.com/sites/default/files/threat-reports/pfpt-de-tr-state-of-the-phish-2022.pdf. Accessed 27 Oct 2022
28. Gabler Wirtschaftslexikon: Lernen Homepage. https://wirtschaftslexikon.gabler.de/definition/lernen-41169. Accessed 4 Feb 2022
29. David DP, Keupp MM, Mermoud A (2020) Knowledge absorption for cyber-security: the role of human beliefs. Comput Hum Behav 106:106255
30. Johansson K, Paulsson T, Bergström E, Seigerroth U (2022) Improving cybersecurity awareness among SMEs in the manufacturing industry. In: SPS2022. IOS Press, pp 209–220
31. Johns Hopkins Medical Institutions (2007) Why emotionally charged events are so memorable. ScienceDaily. http://www.sciencedaily.com/releases/2007/10/071004121045.htm. Accessed 04 Feb 2022
32. Carpenter P (2019) Transformational security awareness: what neuroscientists, storytellers, and marketers can teach us about driving secure behaviors. Wiley
33. Doyle W, Carter K (2003) Narrative and learning to teach: implications for teacher-education curriculum. J Curric Stud 35(2):129–137
34. Stigler JW, Hiebert J (2009) The teaching gap: best ideas from the world's teachers for improving education in the classroom. Simon and Schuster
35. Lambach D, Oppermann K (2022) Narratives of digital sovereignty in German political discourse. Governance
36. Linek SB, Huff M (2022) Serious comics for science popularization: impact of subjective affinities and the crucial role of comic figures. In: INTED2022 proceedings. IATED, pp 517–526
37. Maalem Lahcen RA, Caulkins B, Mohapatra R, Kumar M (2020) Review and insight on the behavioral aspects of cybersecurity. Cybersecurity 3(1):1–18


38. Baranowski MT, Lu PAS, Buday R, Lyons EJ, Schell J, Russoniello C (2013) Stories in games for health: more pros or cons? Games Health Res Dev Clin Appl 2(5):256–263
39. Laamarti F, Eid M, El Saddik A (2014) An overview of serious games. Int J Comput Games Technol 2014:11
40. Schell J (2019) The art of game design. A book of lenses, 3rd edn. CRC Press, London
41. Susi T, Johannesson M, Backlund P (2007) Serious games: an overview. Technical report HS-IKI-TR-07-001
42. Bernardes O, Amorim V, Moreira AC (eds) (2022) Handbook of research on cross-disciplinary uses of gamification in organizations. IGI Global
43. Erasmus Universität Rotterdam Homepage. https://www.coursera.org/lecture/serious-gaming/different-kinds-of-serious-games-from-simulation-to-gamification-xxMNX. Accessed 27 Oct 2022
44. Paras B (2005) Game, motivation, and effective learning: an integrated model for educational game design. http://summit.sfu.ca/item/281. Accessed 04 Feb 2022
45. Hassenzahl M (2008) User experience (UX): towards an experiential perspective on product quality. In: IHM 2008: Proceedings of the 20th French-speaking conference on human-computer interaction (Conf. Francophone sur l'Interaction Homme-Machine), pp 11–15
46. Interaction Design Foundation: Emotional design. https://www.interaction-design.org/literature/topics/emotional-design. Accessed 05 Aug 2022
47. Pavlidis GP, Markantonatou S (2018) Playful education and innovative gamified learning approaches. In: Handbook of research on educational design and cloud computing in modern classroom settings. IGI Global, pp 321–341
48. Pokoyski D, Matas I, Haucke A, Scholl M (2021) Qualitative Wirkungsanalyse security awareness in KMU (study 1 of the project "ALARM Informationssicherheit"). Technische Hochschule Wildau, Wildau, p 72
49. Von Tippelskirch H, Schuktomow R, Scholl M, Walch MC (2022) Report zur Informationssicherheit in KMU – Sicherheitsrelevante Tätigkeitsprofile (report 1). TH Wildau, Wildau, p 111
50. Project "ALARM Information Security". https://alarm.wildau.biz/en. Accessed 17 Oct 2022
51. Proofpoint, Beyond Awareness Training. https://www.proofpoint.com/sites/default/files/ebooks/pfpt-us-eb-beyond-awareness-training.pdf. Accessed 22 Aug 2022
52. Scholl M (2021) Information security officer: job profile, necessary qualifications, and awareness raising explained in a practical way; basis: ISO/IEC 2700x, BSI standards 200-x, and IT-Grundschutz compendium. BoD–Books on Demand and Buchwelten-Verlag
53. TWZ Homepage. https://twz-ev.org/institute/wildau-institut-fuer-innovative-lehre-lebenslanges-machen-und-gestaltende-evaluation/#tab-id-1. Accessed 17 Oct 2022

A Module Based on Data Mining Techniques to Analyze School Attendance Patterns of Children with Disabilities in Cañar - Ecuador

Denys Dutan-Sanchez, Paúl Idrovo-Berrezueta, Ana Parra-Astudillo, Vladimir Robles-Bykbaev, and María Ordóñez-Vásquez

Abstract According to UNICEF, children with disabilities are 24% less likely to receive early stimulation and 49% more likely to have never attended school. For these reasons, it is essential to provide early stimulation tools for children with disabilities living in developing countries. In this paper, we describe a module based on data mining techniques to determine children's school attendance. To train the algorithms we collected data on 245 children who received early stimulation and therapy in the Special School "Jesus for the Children" of the city of Cañar - Ecuador during the last 26 years. The preliminary results show that the analysis module can classify children according to their attendance patterns with 85% precision.

Keywords Early stimulation · Education · Data mining · Children with disabilities · Therapy · Rural areas

1 Introduction

In Ecuador, education is a right to which all people must have access, and the state is the entity in charge of controlling and regulating all activities related to education. According to article 24 of the Constitution of Ecuador of 2008, the


"International Convention on the Rights of Persons with Disabilities" indicates that people with disabilities have the right to receive an education under equal conditions and opportunities within a quality education system. This allows them to develop their human potential, self-esteem, and appreciation of human diversity, as well as their personality, creativity, and talent, and ensures that people with disabilities feel part of a free society [5]. Of the 17.64 million inhabitants of Ecuador, 471,205 are people with disabilities registered within the national disability system, of whom only 47,603 are students in the educational system, distributed across elementary, middle, and high school [3]. The preliminary accountability report of the Ministry of Education for the year 2020 indicates that in the last four years the number of students with disabilities in public specialized education schools has increased by 82%, followed by the fiscomisional system with 11%. In the city of Cañar there is only one public school that offers specialized education; it welcomes 38 children and teenagers with different disabilities from the canton of Cañar (center and parishes) and its surrounding cantons. Here the students receive a quality education through early stimulation and develop skills in each of the learning areas with the assistance of specialized personnel. It is also worth mentioning that the school's enrollment records have been kept manually from the school year 1996–1997 to 2021–2022, so there is no complete register of current and former students; that is, there is no precise data on the disabilities and the grade of these children and teenagers. Likewise, there is no information on the pre- and postnatal history of the people with disabilities; these data are held only by the Ministry of Health, which does not grant access to them even for research such as this. According to the statistical data provided by CONADIS, only 16.84% of the people with disabilities from Cañar receive specialized education, while 83.16% are enrolled in the regular education system; of these, 53.95% are male and 41.05% are female [3]. In this schooled population, the most prevalent disability is intellectual with 63.68%, followed by physical with 14.74%, hearing and psychological with 7.37%, and visual with 6.84%. According to the age statistics from CONADIS, 51.05% are between 13 and 18 years old, 37.89% between 7 and 12 years, and 6.84% between 4 and 6 years; this last age group is received at the special basic education school "Jesús para los Niños" (Jesus for the Children) [3]. Among the most common causes of school dropout in Ecuador are lack of income, abrupt migration, and the difficulty of transporting students to educational centers, as in the case of this specialized elementary school. This research shows that most of the students domiciled in the rural parts of the city of Cañar do not present continuous enrollment across school years. An interview with the teachers of the school provided insight into the educational process carried out during the COVID-19 pandemic: the


2019–2020 school year ended virtually, with parents or legal representatives withdrawing each student's workbooks. In the following school year, 2020–2021, the therapy sessions were given individually at staggered times, respecting all biosafety measures to protect the health of the children, teenagers, and teachers and to ensure their permanence in the Educational Unit.

2 Related Work

Several publications combine machine learning methodologies with data on children's educational performance. They handle different groups of variables, which shows that there is no single, fixed set of variables for predicting children's academic performance and school dropout. The following articles describe in more detail how different models are built to predict academic performance and school dropout. In [7], the authors worked on a database with the following features: gender, ethnicity, mental health, lifestyle, nutrition, and physical activities; in addition, they gathered the parents' educational background and occupation. These features made it possible to relate academic performance to the physical, nutritional, and emotional situation of the children. As indicated in [6], the study examines the relationship between academic performance and the following factors: depression, smoking, diet, age, gender, and family factors. This data allowed the authors to create a Naive Bayes model with an accuracy of 94% and a Random Forest model with an accuracy of 97.7%, in addition to the other models used in that article. Another study, carried out at the University of Babylon in Iraq [8], found a relationship between the socio-demographic information of students and their degree completion. The features used in that research are income, gender, health, age, and family information, among others. With this dataset, a random forest classifier reached a precision of 86%. The authors concluded that socio-demographic information was significant in predicting degree completion and that the results varied depending on the student's situation; they stressed that educational institutions should pay attention to students who present certain features in order to prevent them from dropping out and allow them to finish their degrees. In [1], the authors determined that it is possible to apply an early intervention to students who need help in school to prevent them from failing or dropping out. This was done using machine learning models such as Naive Bayes and Random Forest, among others. The data used to train the classifiers were performance in school, grades, GPA, etc. The authors tested their models to predict students' final grades and found that the best classifiers were Naive Bayes, with an accuracy of 63–69%, and Random Forest, with an accuracy of 63–67%, for regular students. When applied to honors students, the accuracy of Naive Bayes was 77% and that of Random Forest was 85–92%.


School dropout can occur at all levels of education. As described in [2], a study was conducted at the Universidad de la Costa in Colombia with 2479 student records, using a Random Forest classifier that obtained an accuracy of 84.1%. The authors determined that 8 out of 10 students in the first cycles have a higher probability of dropping out, and they recommended corrective actions with departments such as University Welfare in order to reduce the risk of desertion. In addition, the study showed that some attributes, such as marital status and applied scholarships, have no impact on the prediction and should therefore be removed from the database.

3 Methodology

When constructing a predictive method for the analysis of a dataset, it is fundamental to choose the machine learning algorithm wisely. Since there are many methods that can detect useful patterns, we have chosen three classifiers. The scope of this research is mainly the continuous attendance of children with disabilities with respect to their attendance history in previous school years.

The first method used for the analysis of the dataset is the Naive Bayes classifier. We use this method because the dataset contains features that may not be related to each other, which makes it difficult to calculate the probability of an event; we therefore take the naive approach of assuming that all of the features are independent. Bayes' theorem lets us compute the conditional probability of an event based on the occurrence of several other events: the posterior probability of the class is proportional to the product of the likelihood of each feature given the class and the prior probability of the class (Eq. 1) [9].

P(A \mid D, H, Tw, T, G) = \frac{P(A)\, P(D \mid A)\, P(H \mid A)\, P(Tw \mid A)\, P(T \mid A)\, P(G \mid A)}{P(D)\, P(H)\, P(Tw)\, P(T)\, P(G)} \quad (1)

Where:
– A: Attendance - indicator variable for students who present a regular school year attendance: regular (Yes) or irregular (No).
– D: Diagnostic - brief description of the diagnosed syndrome.
– H: Home address - variable that represents the place, parish, or community where the child's home is located.
– Tw: Town - variable that represents the town in which the child's home address is located.


– T: Type - variable that represents the location of the child's home: rural zone (R), urban zone (U), or undefined (R/U).
– G: Gender - male (M) or female (F).

The second method of approach is the Random Forest classifier, which builds a forest of random decision trees. A random forest has higher accuracy than many machine learning algorithms, its training time is also shorter, and it helps maintain accuracy even when data are missing. To explain how a random forest works, we must briefly describe how a decision tree works. A decision tree involves five important terms: entropy, information gain, leaf node, decision node, and root node. Entropy is the measurement of randomness or unpredictability in the dataset. Information gain (see Eq. 2) is the measurement of the decrease in entropy once the dataset is split. A leaf node carries out the classification decision. A decision node is a node that has two or more branches, and the root node is the beginning of the decision tree, where all of the data are located. A random forest generates a number of independent decision trees; each tree takes the features provided and casts a vote, and the choice with the most votes is elected. That is how a random forest classifies a dataset given certain features [9]. The splitting criterion is the Gini impurity:

Gini = 1 - \sum_{i=1}^{C} (p_i)^2 \quad (2)

Where:
– Gini: a cost function that evaluates the splits in the dataset and determines which branch has a higher probability of occurring. In this research, our class output is Attendance, from which we determine whether a student will attend classes continuously during the school year, given the socio-demographic data used for the prediction.
– C: the number of classes for the task (the values of the class Attendance: YES, NO). For this investigation, C is 2 for the attendance evaluation task.

Our last method of approach is the AdaBoost classifier. This classifier is similar to the random forest classifier, but instead of decision trees it uses stumps. A stump is a tree that has one decision node and only two leaf nodes. A stump is created for each feature of the dataset, and we check how many mistakes it makes. To determine which stump will be used first, we compute the Gini index of each one; the stump with the lowest Gini index is chosen. This step is fundamental because the second stump is created by taking into consideration the errors made by the first stump, the third stump takes into consideration the errors made by the second, and so on. Once the forest of stumps is created, we calculate the


amount of say of each stump; the stumps with a bigger weight have more influence than those with a smaller weight. One more step must be explained: creating the next stump once the first one is chosen. After the first stump is chosen, a new sample table is created and new sample weights are calculated; these weights are then normalized (in some cases their sum may not be exactly equal to one), and the procedure is repeated from the beginning. The reason we use this method is that in a random forest all the generated trees have the same voting weight, whether a tree is big or small, whereas with the AdaBoost classifier the generated stumps have different amounts of say. A stump's vote is given more consideration when its weight is significantly bigger than that of the other stumps. In other words, the larger stumps have the last say (Eq. 3) [4].

H(x) = \operatorname{sign}\left( \sum_{t=1}^{T} \alpha_t \cdot h_t(x) \right) \quad (3)

Where:
– h_t(x): the weak or base classifier. The equation associated with the weak classifier is 2\sqrt{e \cdot (1 - e)}.
– H(x): the final classifier, i.e., the final output computed from the weak classifiers given the socio-demographic data of the student.
– \alpha_t: the weight calculated and assigned to the classifier. The equation to calculate this weight is \frac{1}{2} \log \frac{1 - e}{e}, where e represents the error rate.
– x: an array of the socio-demographic features of a student that may affect the probability of attendance during a school year. This array contains data such as {diagnostic, home address, town, type, gender}.
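The paper does not state which software was used to train these classifiers. The following minimal sketch shows how the three models described above could be trained on the categorical features listed earlier, assuming scikit-learn and pandas; the file and column names are hypothetical.

```python
# Illustrative sketch only: assumes scikit-learn and a pandas DataFrame with
# the categorical columns described above (hypothetical file/column names).
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.naive_bayes import CategoricalNB
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("attendance_records.csv")           # hypothetical dataset file
features = ["diagnostic", "home_address", "town", "type", "gender"]
X = OrdinalEncoder().fit_transform(df[features])      # encode categorical features as integers
y = (df["attendance"] == "YES").astype(int)           # 1 = regular attendance

# Example split corresponding to Group B: 80% training, 20% testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("Naive Bayes", CategoricalNB()),
                  ("Random Forest", RandomForestClassifier()),
                  ("AdaBoost", AdaBoostClassifier())]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))            # accuracy on the test split
```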

4 Experiment and Preliminary Results

The data used in this research was collected from the "Jesús para los Niños" special elementary school of the city of Cañar - Ecuador, which has records from the school year 1996–1997 to the current school year. The total number of entries available for this research is 245, and the features of this dataset are: diagnostic, home address, canton, type of sector, gender, and regularity of enrollment (Table 1). Figure 1 shows six subplots containing the total number of children enrolled in the school in each decade. Each subplot depicts the groups of children according to one of the four "disability types" defined by the National Council for Disabilities Equality (CONADIS). As can be seen, the number of enrolled children has increased since the 1960s. This situation results from several awareness programs carried out by local governments to reduce the "stigma" of having a relative with disabilities. With the data collected, the objective is to create a prediction model that uses machine learning to help understand the factors that interfere with the continuing education of children with disabilities. For this reason, we have

Fig. 1 Total children enrolled in the Special School “Jesus for the Children”. Each subplot represents a decade and the number of children according to the four disability types defined by the CONADIS



Table 1 Dataset variables and value ranges

– Diagnostic: Congenital hearing impairment AB50; Disorders of intellectual development, unspecified 6A00.Z; Disorder of intellectual development, profound 6A00.3; Psychomotor retardation MB23.N; Vision impairment unspecified 9D9Z; Epilepsy due to degenerative brain disorders 8A60.2; Muscular hypertonia MB47.8; Obesity 5B81; Spinal cord XA0V83; Myelomeningocele without hydrocephalus LA02.01; Other specified spastic cerebral palsy 8D20.Y; Developmental speech or language disorders 6A01; Complete trisomy 21 LD40.0; Autism spectrum disorder 6A02
– Home address: 54 different sectors are available
– Town: Cañar, El Tambo, Suscal, Chunchi, Azogues, Alausí, Biblián
– Type: R = Rural, U = Urban
– Gender: M = Male, F = Female
– Attendance: YES, NO

analyzed the dataset with different methodologies and obtained favorable results for a prediction. We used three different ways of training our models; to make the explanation quicker, we call these three training groups A, B, and C. Group A assigns 70% of the dataset for training our model and 30% for testing, Group B assigns 80% for training and 20% for testing, and Group C assigns 90% for training and 10% for testing. When we applied these training groups to our Naive Bayes model, we obtained different accuracy results: for Group A, 75% accuracy and a specificity of 96.77%; for Group B, 90.3% accuracy and a specificity of 95.83%; and for Group C, 75% accuracy and a specificity of 90.90%. Group B gave us the best accuracy results and Group A the best specificity results. The second model we used is the Random Forest classifier, with the same training groups as for Naive Bayes: Group A gave 81.3% accuracy and a specificity of 99.31%, Group B gave 84.1% accuracy and a specificity of 100%, and Group C gave 80.20% accuracy and a specificity of 95.83%. For this model, the best training group is also Group B.

Table 2 Results table

Model | Accuracy % | Specificity %
Group A
Naive Bayes | 75.00 | 96.77
Random Forest | 81.30 | 99.31
AdaBoost | 76.50 | 93.55
Group B
Naive Bayes | 90.30 | 95.83
Random Forest | 84.10 | 100.00
AdaBoost | 88.40 | 93.75
Group C
Naive Bayes | 75.00 | 90.90
Random Forest | 80.20 | 95.83
AdaBoost | 80.80 | 94.50

Finally, we applied the last model, the AdaBoost classifier, with the same training groups as for the two previous models: Group A gave 76.5% accuracy and a specificity of 93.55%, Group B gave 88.4% accuracy and a specificity of 93.75%, and Group C gave 80.8% accuracy and a specificity of 94.5% (Table 2). For this model, the best training group was also Group B. To recap, the best training group for all three models was Group B, and the model with the highest accuracy was the Naive Bayes classifier, with 90.3% accuracy and 95.83% specificity. It should also be noted that the Random Forest classifier shows a smaller accuracy variance when switching between the training groups: it averages 81.86% accuracy with a high specificity across the three groups, a smaller variation than that of the other two models used in this research.
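Since specificity is reported alongside accuracy, a short sketch of how both metrics could be derived from a confusion matrix is given below; it assumes scikit-learn and reuses the hypothetical split from the previous sketch, so it is illustrative rather than the authors' evaluation code.

```python
# Minimal sketch: accuracy and specificity from a confusion matrix.
# Assumes the X_train/X_test/y_train/y_test split from the previous sketch;
# class 1 = "YES" (regular attendance), class 0 = "NO".
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

model = RandomForestClassifier().fit(X_train, y_train)
y_pred = model.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred, labels=[0, 1]).ravel()
accuracy = accuracy_score(y_test, y_pred)
specificity = tn / (tn + fp)  # proportion of "NO" cases correctly identified
print(f"accuracy={accuracy:.3f}, specificity={specificity:.3f}")
```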

5 Limitations

In this study we can identify several limitations. One of them is the lack of student data, which is one of the main factors that led us to use only certain attributes for prediction. Another limitation is the presence of null or blank data, which does not allow a more in-depth analysis. Something very important to take into account for future research is to look for more detailed medical profiles of the students, which would allow a better understanding of their limitations. A strong suggestion when carrying out this type of research is to build a dataset with further variables such as nutritional values, parents' occupation, demographic information, etc.; this


will allow a better view of the situation of the group to be studied [6].

6 Conclusions

As a result of this research, we have determined that it is fundamental for the staff of specialized schools to pay attention to students who present certain socio-demographic features. Identifying students who may drop out is important because it allows a special intervention to be made before the parents or the student decide to drop out. With the data provided by the School "Jesús para los Niños" and the results of the different classifiers, we can predict which types of students may drop out. With the information provided by this research, we can decrease the percentage of students who do not receive early stimulation and increase the number of students who attend the school continuously. A curious observation in this research is that the probability that a student continues to attend his/her therapy is much higher if they reside in an urban location; a future investigation could determine what causes this phenomenon, given that more data is needed to analyze this behavior. With the implementation of this type of module as a support tool for specialized schools, the researchers of this article, together with the teachers of the institution, intend to carry out quality therapy sessions in which users will have the opportunity to develop skills according to their degree of disability, avoiding dropout. When analyzing the results obtained from the prediction models, we can determine that the Naive Bayes classifier provides high accuracy for the cases of children who will attend classes, and, since it also has high specificity, it helps to detect students at risk of not being continuous in their studies. This allows the projection of children's cases according to the variables described above, so that follow-up and support can be provided to prevent children from dropping out of their education. In this way, the applied models can help children with disabilities to finish their education through preventive follow-up.

References

1. Alturki S, Alturki N, Stuckenschmidt H (2021) Using educational data mining to predict students' academic performance for applying early interventions. J Inf Technol Educ Innov Pract 20:121–137. https://doi.org/10.28945/4835
2. Camargo García AJ (2020) Modelo para la predicción de la deserción de estudiantes de pregrado, basado en técnicas de minería de datos. Master's thesis, Universidad de la Costa. https://hdl.handle.net/11323/7077
3. Consejo Nacional para la Igualdad de Discapacidades (CONADIS) (2022) Estadísticas de Discapacidad - Consejo Nacional para la Igualdad de Discapacidades. https://www.consejodiscapacidades.gob.ec/estadisticas-de-discapacidad/
4. Hu X, Wang K, Dong Q (2016) Protein ligand-specific binding residue predictions by an ensemble classifier. BMC Bioinform 17(1):470. https://doi.org/10.1186/s12859-016-1348-3
5. Ministerio de Educación (2018) ACUERDO Nro. MINEDUC-MINEDUC-2018-00055-A. Ministerio de Educación
6. Poudyal S, Mohammadi-Aragh M, Ball J (2022) Prediction of student academic performance using a hybrid 2D CNN model. Electronics 11:1005. https://doi.org/10.3390/electronics11071005
7. Qasrawi R, Abu Al-Halawa D (2022) Cluster analysis and classification model of nutritional anemia associated risk factors among palestinian schoolchildren, 2014. Front Nutr 9:838937. https://doi.org/10.3389/fnut.2022.838937
8. Shakir M, Al-Azawei A (2022) Using socio-demographic information in predicting students' degree completion based on a dynamic model. Int J Intell Eng Syst 15. https://doi.org/10.22266/ijies2022.0430.11
9. Uddin S, Khan A, Hossain ME, Moni MA (2019) Comparing different supervised machine learning algorithms for disease prediction. BMC Med Inform Decis Making 19(1):281. https://doi.org/10.1186/s12911-019-1004-8

Improving Semantic Similarity Measure Within a Recommender System Based-on RDF Graphs

Ngoc Luyen Le, Marie-Hélène Abel, and Philippe Gouspillou

Abstract In today's era of information explosion, users increasingly rely on recommender systems to obtain better advice, suggestions, or inspiration. The measure of semantic relatedness or likeness between terms, words, or text data plays an important role in different applications dealing with textual data, as in a recommender system. Over the past few years, many ontologies have been developed and used as a form of structured representation of knowledge bases for information systems. Several methods have been developed to measure semantic similarity from an ontology. In this paper, we propose and carry out an approach for improving semantic similarity calculations within a recommender system based on RDF graphs.

Keywords Semantic similarity · Ontology · Recommender system

1 Introduction

With the development of the Internet, users face a large amount of information on e-commerce websites or mobile applications. On the other side, searching and matching algorithms have to deal with structured, semi-structured, and unstructured textual data. The measure of semantic similarity allows comparing how close two terms or two entities are. Therefore, the exploitation of methods for semantic similarity measurement becomes one of the means of improving these search and matching algorithms. In the context of a recommender system, the semantic similarity measure can be applied to improve tasks such as searching, matching, and ranking data.


A recommender system provides suggestions for items that are most likely of interest to a particular user. These suggestions can support users in various decision-making processes, for example, which films to watch, which music to listen to, or which products to purchase. In order to give suggestions for items, recommender systems attempt to collect characteristics of users and items by considering users' preferences, item descriptions, and behaviors. Thus, the measure of semantic similarity allows finding the items most relevant to the user. By structuring and organizing a set of terms or concepts within a domain in a hierarchical way and by modeling the relationships between these sets of terms or concepts using a relation descriptor, an ontology allows specifying a standard conceptual vocabulary for representing entities in a particular domain [15, 20]. In recent years, the use of ontologies has become more popular in recommender systems as well as in decision support systems for different tasks [8, 19]. As part of our work, we are especially interested in calculating semantic similarity to improve the precision within a recommender system based on RDF graphs. The rest of this article is organized as follows. First, we state the main problem that the paper focuses on and resolves. Then Sect. 3 introduces works from the literature on which our approach is based. Section 4 presents our main contributions to the construction of the recommender system exploiting the measure of similarity between sets of triplets. Before concluding, we test our work in Sect. 5 on an experimental case dealing with the purchase/sale of used vehicles. Finally, we conclude and present the perspectives.

2 Problem Statement

We are looking to improve the semantic similarity measure in order to obtain more accurate item recommendation lists for the user within the recommender system. In general, an ontology can be represented as a set of RDF triplets. An RDF triplet includes three components: subject, predicate, and object. Common semantic similarity measure approaches based on an ontology have two weak points. The first weak point concerns the measure of similarity, which is calculated either between objects, or between objects and predicates [13]. The object-based measure does not use the subject information, although the subject can carry contextual information from the triplet that is useful for the comparison. The second weak point concerns the distinction between types of objects: textual or numerical [16]. The measure of similarity between numerical objects consists of a simple arithmetic calculation. The measure of similarity between textual objects is based on the frequency of the words composing the textual objects to be compared; this measure does not take into account the semantic dependence between these words, which can enrich the comparison.
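To make the triplet structure concrete, the sketch below builds a tiny RDF graph with the rdflib library (an assumption, since the paper does not name a toolkit), mixing a textual and a numerical object; the namespace and resource names are invented for the example.

```python
# Illustrative sketch (not the authors' implementation): a small RDF graph
# of <subject, predicate, object> triplets built with rdflib.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/vehicle-ontology#")  # hypothetical namespace
g = Graph()

# e.g. "Louis likes the Tesla Model S car"
g.add((EX.Louis, EX.likes, EX.Tesla_Model_S))
g.add((EX.Tesla_Model_S, EX.is_manufactured_by, EX.Tesla_Motors))
g.add((EX.Tesla_Model_S, EX.has_mileage, Literal(42000)))          # numerical object
g.add((EX.Tesla_Model_S, EX.has_description,
       Literal("well maintained electric sedan")))                 # textual object

for s, p, o in g:
    print(s, p, o)
```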


3 Related Works

3.1 Recommender Systems

The Recommender System (RS) is conventionally defined as an application that attempts to recommend the most relevant items to users by reasoning about or predicting the user's preferences for an item based on related information about users, items, and the interactions between items and users [11, 14]. In general, recommendation techniques can be classified into six main approaches: Demographic-based RSs, Content-based RSs, Collaborative Filtering-based RSs, Knowledge-based RSs, Context-aware RSs, and Hybrid RSs. In several areas such as financial services, expensive luxury goods, real estate, or automobiles, items are rarely purchased, and user reviews are often not available. In addition, item descriptions can be complex, and it is difficult to get a reasonable set of ratings that reflect users' history on similar items. Therefore, demographic-based, content-based, and collaborative filtering-based RSs are generally not well suited to domains where items possess the mentioned characteristics. Recommender systems based on knowledge and contextual information represented by means of ontologies have been proposed to address these challenges by explicitly soliciting user needs for these items and in-depth knowledge of the underlying domain for similarity measures and prediction calculations [10]. To improve the quality of the recommendation, the similarity measures between items or user profiles in a recommender system play a very important role. They make it possible to establish a list of recommendations taking into account the preferences of the users obtained from the users' declarations or their interactions. We detail in the next section the measures of semantic similarity between items within a recommender system.

3.2 Semantic Similarity Measure

The advantages of using ontologies consist of the reuse of the knowledge base in various fields, traceability, and the ability to support computation and application at a complex and large scale [18]. Depending on the structure of the application context and its knowledge representation model, different similarity measures have been proposed. In general, these approaches can be categorized into four main strategies [16, 22]: (1) path-based, (2) feature-based, (3) information content-based, and (4) hybrid strategies. With path-based semantic similarity measures, ontologies can be considered as a directed graph with nodes and links, in which classes or instances are interconnected mainly by means of hypernym and hyponym relationships, where the information is structured hierarchically using the 'is-a' relationship [16]. Thus, semantic similarities are calculated based on the distance between two classes or instances. The main


advantage of this strategy is its simplicity: it requires a low computational cost and does not require detailed information about each class and instance [13]. The main drawback of this strategy concerns the degree of completeness, homogeneity, coverage, and granularity of the relationships defined in the ontology [22]. With feature-based semantic similarity measures, classes and instances in ontologies are represented as sets of ontological features [16, 22]. Commonalities between classes or instances are calculated based on their ontological feature sets. The similarity evaluation can be performed using multiple coefficients on property sets, such as the Jaccard index [9] or the Dice coefficient [4]. The advantage of this strategy is that it evaluates both the commonalities and the differences of the compared property sets, which allows exploiting more semantic knowledge than the path-based approach. However, its limitation is that it is necessary to balance the contribution of each property by deciding on the standardization and the weighting of the parameters of each property. With semantic similarity measures based on information content, information content is used as a measure of informativeness by associating probabilities of occurrence with each class or instance in the ontology and calculating the number of occurrences of these classes or instances [22]. In this way, infrequent classes or instances become more informative than frequent ones. A disadvantage of this strategy is that it requires large ontologies with a detailed taxonomic structure in order to properly differentiate between classes. Beyond the semantic similarity measures mentioned above, there are several approaches based on combinations of the three main strategies; many works have combined the feature-based and path-based strategies [3, 7]. In our work, we have chosen to work on the representation of an ontology by means of triplets. An RDF triplet has three components: subject, predicate, and object. In particular, the subject can be the name of a class or an instance. The predicate is the name of a property of a class or an instance. The object is the value of a property of the class or instance and can be either a literal or the name of another class or instance. The name of a class, an instance, or the literals in triplets are expressed as text that can include several words. In order to prepare their treatment, these textual contents are vectorized. We detail in the next section the methods we have studied for this purpose.
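As a simple illustration of the feature-based strategy, the following sketch computes the Jaccard index and the Dice coefficient between two sets of ontological features; the feature sets are invented for the example and do not come from the paper.

```python
# Feature-based similarity between two sets of ontological features.
# The feature sets below are invented for illustration only.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def dice(a: set, b: set) -> float:
    return 2 * len(a & b) / (len(a) + len(b))

features_car1 = {"electric", "sedan", "automatic", "five_seats"}
features_car2 = {"electric", "suv", "automatic", "five_seats"}

print(jaccard(features_car1, features_car2))  # 0.6
print(dice(features_car1, features_car2))     # 0.75
```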

3.3 Vector Representation of Words

Word vectorization represents a word by a numeric feature vector that describes the meaning of the word in its context. Several techniques have been proposed to vectorize a word, such as Term Frequency-Inverse Document Frequency (TF-IDF) [21], the Continuous Bag of Words (CBOW) model, and the Skip-Gram model.


TF-IDF is a statistical measure based on a corpus of documents (in the context of an ontology, a set of triplets is equivalent to a document). This technique assesses the relevance of a word to a document within a corpus. The CBOW model constructs the vector representation of a word by predicting its occurrence from the neighboring words, whereas the Skip-Gram model constructs the vector representation of a word by predicting its context of occurrence. Word2vec is one of the most popular techniques for creating word embeddings using a neural network architecture; it predicts words based on their context by combining the CBOW and Skip-Gram models [17]. Several word embeddings have been created with this model for different languages [17]; Fauconnier [5] and Abdine and colleagues [1] trained such models on French texts. Word embeddings trained on very large corpora make it possible to obtain the vector representation of a word quickly. In our work, we have chosen to calculate the similarity between two textual terms using the combination of the CBOW and Skip-Gram models. The similarity between two textual terms composed of different words can take advantage of this representation in order to calculate the distance between them. In the next section, we detail our proposed approach to measure similarity within a recommender system.
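As a small illustration of this step (an assumption-laden example, not the authors' code), the sketch below trains a tiny Word2vec model with gensim and computes the cosine similarity between two French words. In the paper a large pretrained French embedding is used instead; the toy corpus here only shows the mechanics.

```python
from gensim.models import Word2Vec

# Tiny toy corpus; in the real setting a pretrained French CBOW/Skip-Gram embedding is loaded.
sentences = [["la", "voiture", "rouge"],
             ["une", "automobile", "rouge"],
             ["le", "fromage", "frais"]]
model = Word2Vec(sentences, vector_size=20, window=2, min_count=1, sg=1)  # sg=1 -> Skip-Gram

# Cosine similarity between two word vectors (what .similarity() computes internally)
print(model.wv.similarity("voiture", "automobile"))
print(model.wv.similarity("voiture", "fromage"))
```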

4 Measure of Similarity Within a Recommender System

4.1 Recommender System for the Purchase/Sale of Used Vehicles

As part of our work, we illustrate the semantic similarity measure on a knowledge-based recommender system, whose knowledge is represented by means of ontologies, in an e-commerce application for the sale/purchase of used vehicles. The knowledge base used by our RS covers three main types of knowledge: user profiles, item descriptions or item attributes, and interactions between users and items. First, user profiles include the user's personal information, their usage context, and their preferences about vehicle items. They can be organized and rewritten as triplets, formally defined as follows:

$G^U = \{a^u_1, a^u_2, \ldots, a^u_n\}$  (1)

where $a^u_i$ denotes the triplet $a^u_i = \langle subject_i, predicate_i, object_i \rangle$. In other words, the triplet $a^u_i$ can also be expressed as $\langle resource_i, property_i, state_i \rangle$. For example, the natural language expression "Louis likes the Tesla Model S car" can be represented through two different triplets: $\langle Louis, likes, the\_Tesla\_Model\_S\_car \rangle$ and $\langle The\_Tesla\_Model\_S\_car, is\_manufactured\_by, Tesla\_Motors \rangle$. Then, vehicle descriptions can also be represented as a knowledge graph. They can be defined using the same approach:


$G^V = \{a^v_1, a^v_2, \ldots, a^v_m\}$  (2)

where $a^v_j$ denotes the triplet $a^v_j = \langle subject_j, predicate_j, object_j \rangle$ or $a^v_j = \langle resource_j, property_j, state_j \rangle$. Finally, when a user interacts with vehicle description items by giving a rating, writing a comment or adding an item to a list of favorites, we record these interactions in order to analyze the intention and behavior of the user and to propose relevant vehicle item recommendations. Interactions are therefore defined as a function with several parameters:

$RS : G^U \times G^V \times G^{C_1} \times \ldots \times G^{C_k} \rightarrow Interaction$  (3)

where $G^U$ corresponds to the user, $G^V$ corresponds to the vehicle description item, and the $G^{C_h}$ correspond to contextual information, for example objectives, locations, times, or resources [2]. Ontologies are developed to profile users and to model vehicle description items [11]. Based on these ontologies, RDF data is collected and stored in a triplestore that can be searched with SPARQL queries. Rules can be defined to infer or filter items using inference over the ontologies. In this case, the knowledge-based RS has the following four main tasks:
– Receive and analyze user requests from the user interface.
– Build and perform queries on the knowledge base.
– Calculate semantic similarities between the vehicle description items and the user profile.
– Classify the items corresponding to the needs of the user.
Measuring similarity between items or user profiles is an important task to generate the most relevant list of recommendations. Comparisons between two RDF triplets are often limited to common or non-common objects; the subject and predicate can, however, also provide important information about the object itself and its comparison with other triplets. In the following section, we exploit the information of triplets and calculate the semantic similarities between them in a knowledge base.
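To make the query task concrete, here is a minimal sketch (our illustration; the namespace, predicates and instance identifier are hypothetical, not the authors' actual data) that builds a small RDF graph with rdflib and retrieves the attribute triplets of one vehicle with a SPARQL query. In the real system the graph would be loaded from the triplestore rather than built in code.

```python
from rdflib import Graph, Namespace, Literal

VO = Namespace("http://example.org/vehicle-ontology#")  # hypothetical namespace
g = Graph()
# Two toy triples describing one vehicle instance
g.add((VO.ford_focus_4_2018, VO.has_transmission, Literal("mechanical")))
g.add((VO.ford_focus_4_2018, VO.has_number_of_mileage, Literal(107351)))

query = """
PREFIX vo: <http://example.org/vehicle-ontology#>
SELECT ?p ?o
WHERE { vo:ford_focus_4_2018 ?p ?o . }
"""
for predicate, obj in g.query(query):
    print(predicate, obj)
```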

4.2 Semantic Similarity Measure Between Triplets

We have chosen to define a hybrid approach that combines the feature-based and information content-based approaches to calculating similarities. The subject, predicate and object of a triplet all contain important information, and a set of triplets aggregates the information of its single triplets. Therefore, the measure of semantic similarity between two sets of triplets must take into account all the triplets and elements in each set. The measure compares two sets of triplets through all their elements, separating them into quantitative and qualitative information. On the one hand, the object comparison is performed using the feature-based semantic similarity strategy.


On the other hand, the comparison of subjects and predicates is performed with the semantic similarity strategy based on information content.

Measure of Qualitative Information. Qualitative information refers to the words and labels used to describe classes, relationships, and annotations. In a triplet, the subject and the predicate express qualitative information; objects can contain qualitative or quantitative information. For example, consider the following three triplets:
⟨ford_focus_4_2018, has_transmission, mechanical⟩
⟨ford_focus_4_2020, has_transmission, mechanical⟩
⟨citroen_c5_aircross, has_transmission, mechanical⟩
All components of these three triplets are qualitative, and the subject information of the three triplets can contribute to the measure of similarity between them. In this section, we focus on measuring semantic similarity for Qualitative Subjects, Predicates and Objects (QSPO), and we propose the same formula for all three components. Let $a_{s1}$ and $a_{s2}$ be two QSPOs whose word vectors are $M_1 = \{w_{11}, w_{12}, \ldots, w_{1k}\}$ and $M_2 = \{w_{21}, w_{22}, \ldots, w_{2l}\}$; their semantic similarity is defined as follows:

$Sim_1(a_{s1}, a_{s2}) = \dfrac{\sum_{i=1}^{k} \bar{S}(w_{1i}, a_{s2}) + \sum_{j=1}^{l} \bar{S}(w_{2j}, a_{s1})}{k + l}$  (4)

where $\bar{S}(w, a_s)$ denotes the semantic similarity of a word $w$ and a QSPO. The function $\bar{S}(w, a_s)$ is formally calculated as follows:

$\bar{S}(w, a_s) = \max_{w_i \in M} \bar{S}(w, w_i)$  (5)

where $w_i \in M = \{w_1, w_2, \ldots, w_k\}$ is the word vector of $a_s$. Each word $w_i$ is represented by a numerical vector, using the techniques introduced in Sect. 3.3. The TF-IDF word frequency-based approach makes it easy to obtain the probability of a word in a set of triplets. However, its main disadvantage is that it cannot capture the semantic relations of a word with the other words, nor the word order of the elements in the set of triplets, because the vector is created from the frequency of the word in the set of triplets and in the collection of sets of triplets. Therefore, we propose the use of the CBOW and Skip-Gram models through the Word2vec implementation [1, 17] in order to overcome this weakness. We finally calculate the similarity between two words $w_i$, $w_j$ by cosine similarity: $\bar{S}(w_i, w_j) = \dfrac{w_i \cdot w_j}{\lVert w_i \rVert \, \lVert w_j \rVert}$.

Measure of Quantitative Information. Quantitative information is numerical information used to express nominal, ordinal, interval, or ratio information. In a triplet, the object often uses this form of information to express property values for classes and concepts of the ontology. For example, we have the following triplets:


⟨ford_focus_4_2018, has_number_of_mileage, 107351⟩
⟨ford_focus_4_2020, has_number_of_mileage, 25040⟩
⟨citroen_c5_aircross, has_number_of_mileage, 48369⟩
The objects of these triplets are numeric values, so the comparison between them is done simply by measuring a distance. In order to compare two different objects, we use the Euclidean distance: the smaller the difference between two objects, the higher the similarity between them. Let $a_{o1}$ and $a_{o2}$ be two objects whose vectors are $a_{o1} = \{o_{11}, o_{12}, \ldots, o_{1k}\}$ and $a_{o2} = \{o_{21}, o_{22}, \ldots, o_{2k}\}$; their semantic similarity is defined as follows:

$Sim_2(a_{o1}, a_{o2}) = \dfrac{1}{1 + \sqrt{\sum_{i=1}^{k} (o_{1i} - o_{2i})^2}}$  (6)

Measure of Triplets. The comparison of two triplets $a_1 = \langle a_{s1}, a_{p1}, a_{o1} \rangle$ and $a_2 = \langle a_{s2}, a_{p2}, a_{o2} \rangle$ is performed according to the type of information carried by the objects of the triplets. If the object contains qualitative information, the semantic similarity between $a_1$ and $a_2$ is defined as follows:

$Sim_I(a_1, a_2) = \dfrac{1}{N} \sum_{i \in P,\, \omega \in Q} \omega \times Sim_1(a_{i1}, a_{i2})$  (7)

where $P = \{s, p, o\}$ corresponds to the information of subject, predicate and object as vectors of words, $Q = \{\alpha, \beta, \gamma\}$ contains the respective weights of the triplet components, and $N$ is the number of triplet components. Moreover, if the object contains quantitative information, the semantic similarity of the triplets $a_1$ and $a_2$ is defined as follows:

$Sim_{II}(a_1, a_2) = \dfrac{1}{N} \left( \sum_{i \in P,\, \omega \in Q} \omega \times Sim_1(a_{i1}, a_{i2}) + \gamma \times Sim_2(a_{o1}, a_{o2}) \right)$  (8)

where $P = \{s, p\}$ corresponds to the information of subject and predicate in the form of vectors of words, $Q = \{\alpha, \beta\}$ contains the respective weights of the subject and the predicate, and $\gamma$ is the weight of the object. Therefore, the semantic similarity of two sets of triplets $G_1 = \{a_1, a_2, \ldots, a_g\}$ and $G_2 = \{a_1, a_2, \ldots, a_g\}$ is calculated on the basis of the similarity comparison of each simple triplet as follows:

$Sim(G_1, G_2) = \dfrac{1}{L} \sum_{i=0}^{L} Sim_I(a_{1i}, a_{2i}) + \dfrac{1}{H} \sum_{j=0}^{H} Sim_{II}(a_{1j}, a_{2j})$  (9)

where $L$ is the number of triplets that contain qualitative objects and $H$ is the number of triplets that contain quantitative objects.
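The following Python sketch (our illustration, under the simplifying assumption that the two sets contain aligned triplets describing the same properties, and with placeholder weights and a toy embedding lookup) shows how Eqs. (4)–(9) fit together: qualitative components are compared through word vectors as in Eqs. (4) and (5), quantitative objects through Eq. (6), and the results are aggregated as in Eq. (9).

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sim_qspo(words1, words2, embed):
    """Eqs. (4)/(5): average of best word-to-word cosine similarities, in both directions."""
    s = sum(max(cosine(embed[w], embed[x]) for x in words2) for w in words1)
    s += sum(max(cosine(embed[w], embed[x]) for x in words1) for w in words2)
    return s / (len(words1) + len(words2))

def sim_quantitative(o1, o2):
    """Eq. (6): inverse Euclidean distance between numeric object vectors."""
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(o1, float) - np.asarray(o2, float)))

def sim_triplet(t1, t2, embed, weights=(1.0, 1.0, 1.0)):
    """Eqs. (7)/(8): weighted combination of the component similarities (N = 3)."""
    alpha, beta, gamma = weights  # placeholder weights for subject, predicate, object
    s = alpha * sim_qspo(t1["s"], t2["s"], embed) + beta * sim_qspo(t1["p"], t2["p"], embed)
    if t1["numeric"]:
        s += gamma * sim_quantitative(t1["o"], t2["o"])
    else:
        s += gamma * sim_qspo(t1["o"], t2["o"], embed)
    return s / 3.0

def sim_sets(set1, set2, embed):
    """Eq. (9): average over qualitative-object and quantitative-object triplet pairs."""
    qual = [sim_triplet(a, b, embed) for a, b in zip(set1, set2) if not a["numeric"]]
    quant = [sim_triplet(a, b, embed) for a, b in zip(set1, set2) if a["numeric"]]
    return (np.mean(qual) if qual else 0.0) + (np.mean(quant) if quant else 0.0)

# Toy embedding and two aligned triplets (hypothetical vectors and tokens)
embed = {"ford": np.array([1.0, 0.0]), "focus": np.array([0.9, 0.1]),
         "citroen": np.array([0.2, 1.0]), "c5": np.array([0.1, 0.9]),
         "has": np.array([0.5, 0.5]), "mileage": np.array([0.4, 0.6])}
t1 = {"s": ["ford", "focus"], "p": ["has", "mileage"], "o": [107351], "numeric": True}
t2 = {"s": ["citroen", "c5"], "p": ["has", "mileage"], "o": [48369], "numeric": True}
print(sim_sets([t1], [t2], embed))
```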

5 Experiments

In this section we test our approach in the case of a vehicle purchase/sale application: we measure the semantic similarity between two sets of triplets, each representing a vehicle. By using an ontology, we can reconstruct the knowledge base of a domain in a form that is readable by machines as well as humans. From the vehicle ontologies developed in [11, 12], we collected class instances and their relationships to create an RDF dataset. Figure 1 illustrates, in a simplified way, two sets of triplets representing two vehicles. The dataset contains approximately 1000 used vehicles with their different features and characteristics. The transformation of words into vector representations is achieved using the pretrained word embeddings for French developed by Abdine and colleagues [1]. We chose to employ the CBOW and Skip-Gram models instead of TF-IDF because the problem requires capturing semantic information, which is almost impossible with the TF-IDF model. Based on the instances collected, we carried out experiments and evaluations on the following four approaches:
1. Jaccard: the approach based on the Jaccard index, which measures similarities between two triplet sets [6].
2. SiLi: the hybrid approach proposed by Li and colleagues [13], which combines the information content-based and feature-based strategies but only considers the objects and the predicates of the triplets.
3. N2: our approach using the TF-IDF model to vectorize qualitative information.
4. N1: our main proposed approach, using the Word2vec model [1] to vectorize qualitative information.
We experiment with the four approaches by measuring the similarity score of each RDF instance of a used vehicle against all the others. With RDF graphs of 1000 used vehicle instances, we obtain a total of 100000 similarity scores. In general, the approach N1 yields better results than the approaches N2, SiLi, and Jaccard in terms of similarity score, as can be seen in the heat map (Fig. 2) and the histogram (Fig. 3). The yellow color distribution on the heat map illustrates that the similarity scores of approach N1 are higher than those of the other approaches and are evenly distributed throughout the map. Likewise, the histogram shows that the distribution of similarity scores of approach N1 is higher than the others. Diving deeper into the results obtained, we arrive at several conclusions.


Fig. 1 Triplet data visualized by a graph of two different vehicles (vo notes for the Vehicle Ontology)

Fig. 2 Heat map correlation among similarity measures across 1000 used vehicles for four approaches

Fig. 3 Histogram representing the similarity score distribution of the four approaches


First, our approach N1 gives higher similarity values between vehicles than the other approaches, with 82.4% of the highest scores. Second, our approach N2, which uses the TF-IDF technique for word vector representation, obtained lower results than approach N1; this is explained by the better ability of the Word2vec approach to capture contextual and semantic information compared with TF-IDF. The experiments show that our approach N1 obtains good results for similarity measures between sets of triplets. Using the subject in the comparison adds information to the similarity measure of a triplet. Also, the distinction between textual and numerical content allows the appropriate formula to be applied according to the type of content, and the sum of the two calculations represents the measured similarity. By taking into account this distinction, the contextual triplets, and a measure of the textual contents enriched with the semantic dependencies between the words constituting the text, the similarity obtained is more precise than those encountered in the literature [13, 16, 22].

6 Conclusion and Perspectives

Measuring ontology-based semantic similarity is an important task for proposing a list of relevant recommendations to a user. In this article, we propose a hybrid strategy that combines the feature-based and information content-based strategies. The two weak points identified in the problem statement are addressed by our semantic similarity measure: (1) the three components of a triplet are considered in the similarity measure so that no information is lost, and (2) the distinction between textual and numeric data allows an adapted and more precise measure to be carried out. We carried out an experiment with our approach and compared it with three other similarity measures; the results obtained show its interest. We now need to continue this work and carry out other tests on different corpora and different applications. We must concede that words not present in the training corpus pose a problem. In perspective, treating and cleaning a corpus of the vehicle domain, as well as applying domain ontologies in order to improve the precision of recommender systems, could be promising future work.
Acknowledgment This work was funded by the French Research Agency (ANR) and by the company Vivocaz under the project France Relance - preservation of R&D employment (ANR-21PRRD-0072-01).

References 1. Abdine H, Xypolopoulos C, Eddine MK, Vazirgiannis M (2021) Evaluation of word embeddings from large-scale French web content


2. Adomavicius G, Tuzhilin A (2011) Context-aware recommender systems. Springer, Boston, pp 217–253 3. Batet M, Sánchez D, Valls A (2011) An ontology-based measure to compute semantic similarity in biomedicine. J Biomed Inform 44(1):118–125 4. Dice LR (1945) Measures of the amount of ecologic association between species. Ecology 26(3):297–302 5. Fauconnier J-P (2015) French word embeddings 6. Fletcher S, Zahidul Islam Md, et al (2018) Comparing sets of patterns with the Jaccard index. Australas J Inf Syst 22 7. Hu B, Kalfoglou Y, Alani H, Dupplaw D, Lewis P, Shadbolt N (2006) Semantic metrics. In: International conference on knowledge engineering and knowledge management. Springer, pp 166–181 8. Ibrahim ME, Yang Y, Ndzi DL, Yang G, Al-Maliki M (2018) Ontology-based personalized course recommendation framework. IEEE Access 7:5180–5199 9. Jaccard P (1901) Étude comparative de la distribution florale dans une portion des alpes et des jura. Bull Soc Vaudoise Sci Nat 37:547–579 10. Jannach D, Zanker M, Felfernig A, Friedrich G (2010) Knowledge-based recommendation. Cambridge University Press, pp 81–123 11. Le NL, Abel M-H, Gouspillou P (2022) Towards an ontology-based recommender system for the vehicle sales area. In: Progresses in artificial intelligence & robotics: algorithms & applications. Springer, Cham, pp 126–136 12. Ngoc LL, Abel M-H, Gouspillou P (2022) Apport des ontologies pour le calcul de la similarité sémantique au sein d’un système de recommandation. In: Ingénierie des Connaissances (Evènement affilié à PFIA 2022 Plate-Forme Intelligence Artificielle), Saint-Étienne, France 13. Li S, Abel M-H, Negre E (2021) Ontology-based semantic similarity in generating contextaware collaborator recommendations. In: 2021 IEEE 24th international conference on computer supported cooperative work in design (CSCWD). IEEE, pp 751–756 14. Jie L, Dianshuang W, Mao M, Wang W, Zhang G (2015) Recommender system application developments: a survey. Decis Support Syst 74:12–32 15. Luyen LN, Tireau A, Venkatesan A, Neveu P, Larmande P (2016) Development of a knowledge system for big data: case study to plant phenotyping data. In: Proceedings of the 6th international conference on web intelligence, mining and semantics, WIMS 2016 16. Meymandpour R, Davis JG (2016) A semantic similarity measure for linked data: an information content-based approach. Knowl-Based Syst 109:276–293 17. Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 18. Nguyen V (2011) Ontologies and information systems: a literature survey 19. Obeid C, Lahoud I, El Khoury H, Champin P-A (2018) Ontology-based recommender system in higher education. In: Companion proceedings of the the web conference 2018, pp 1031–1034 20. Rodriguez MA, Egenhofer MJ (2003) Determining semantic similarity among entity classes from different ontologies. IEEE Trans Knowl Data Eng 15(2):442–456 21. Salton G, McGill J (1986) Introduction to modern information retrieval 22. Sánchez D, Batet M, Isern D, Valls A (2012) Ontology-based semantic similarity: a new feature-based approach. Expert Syst Appl 39(9):7718–7728

Comparative Analysis of the Prediction of the Academic Performance of Entering University Students Using Decision Tree and Random Forest Jesús Aguilar-Ruiz, Edgar Taya-Acosta, and Edgar Taya-Osorio

Abstract In this work, we discuss the most determinant variables in predicting the first-cycle university grades (period 2021-I) of students pursuing a degree in the Faculty of Engineering at the Jorge Basadre Grohmann National University in times of pandemic. We used machine learning algorithms to determine the relations between high school grades, students' personal information and preferences, and their final grade in the first semester of 2021. Our research found that the main variables are the modality of university entrance, students' self-perception of their leadership, the frequency of watching TV, YouTube, and Netflix, and the literary genres they prefer. To confirm our results, we obtained an accuracy of 93.28% with DT and 93.33% with RF. Keywords Academic performance · Data analysis · Educational Data Mining · Random Forest · Decision Tree

J. Aguilar-Ruiz Universidad Pablo de Olavide, Sevilla, Spain e-mail: [email protected] E. Taya-Acosta (B) · E. Taya-Osorio Universidad Nacional Jorge Basadre Grohmann, Tacna, Peru e-mail: [email protected] E. Taya-Osorio e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_43


1 Introduction

In 2021, students who left school and entered university had to deal with the university environment differently because of the COVID-19 pandemic. Before the pandemic, a typical university student would discover an environment composed of new friends, teachers, and a classroom where he or she usually spent most of the time in classes. However, the new reality that appeared with the pandemic affects students differently [1]; they do not interact the way they usually do. In this context, we focus on predicting academic performance, which is one of the most important topics of learning analytics [2] and helps to improve student retention at university. It also helps to develop tools to detect those students who need additional support, reducing the student dropout rate at university. Accordingly, we analyzed the school grades of these new students facing a new academic stage, using different machine learning algorithms and techniques to predict their performance based on grades in the first semester of 2021 at Jorge Basadre Grohmann National University. In the present work, we gathered data on students' grades in the academic period 2021-I and their personal information. We then combined them into a single dataset and applied machine learning techniques such as Random Forest and Decision Trees to predict student academic performance in the mathematics subject of the first cycle of the degree and to find the determinant variables.

2 Related Works

2.1 Learning Concepts

In [3], various machine learning techniques were surveyed across twenty academic studies. The survey points out limitations in the methodologies used; for example, many papers did not include an analysis of student life. This study approximates the state of the art for the techniques used in this kind of work. In [4] the authors used two different datasets: a mathematics dataset with 396 records and a Portuguese dataset with 649 records, both with 33 feature columns. They applied Support Vector Machine and Random Forest and, in the case of binary classification, achieved a prediction accuracy of up to 93%.


We also collected data on students' personal information; in [5], data containing family income, family assets, and student personal information from different universities in Pakistan was used to predict whether students would complete their degrees or not, and the authors also show that SVM classifiers performed better at classifying their features. In [6], the authors gathered a dataset from Virtual Learning Environments (VLE) and a dataset from the management information system of Manchester Metropolitan University; the study found that variables such as the students' time of day of usage, the last time students accessed the VLE, and the number of documents uploaded by the staff are the best indicators for predicting student progression. Another work [7] uses a tiny artificial neural network to classify student performance by analyzing students' written emails; this paper also shows that CorC-net outperformed other multiclass classification algorithms such as decision trees, support vector machines, Gaussian Naïve Bayes, and k-nearest neighbors. For our purposes, we also considered [8], which showed that support vector machine and linear discriminant analysis algorithms have acceptable classification accuracy and reliability for predicting student performance on small datasets. The authors of [9] show which ML algorithms are used to analyze different types of data obtained by surveys, grouping the algorithms according to their complexity; most of the studies analyzed used "classic learning" algorithms such as Probabilistic Soft Logic, Decision Trees, Naive Bayes, Support Vector Machines, Survival Models, and Linear Regression. In [2], the authors identified a relationship between the learning management system and the academic performance of Jorge Basadre Grohmann National University students: the Gradient Boosted Trees algorithm with two classes achieved an accuracy of 91.79%, and the Random Forest algorithm with three classes achieved an accuracy of 89.26% in predicting academic performance. Another study, at a Nigerian university, shows statistically that the ethnic background of the student is insignificant in predicting student performance based on graduation results; using multiple regression analysis, the maximum accuracy observed was 53.2%, but with an over-sampling technique it reached 79.8% [10]. In [11], non-intelligence factors were analyzed to predict their influence on the English learning of college students, and autonomy and self-discipline were found to be the most influential variables. Decision tree algorithms have also been used in other areas of the university, as in [12], where the authors found a method that makes college financial management and decision-making more efficient.


2.2 Theoretical Bases

University and Pandemic. Education has had to adapt to a new scenario caused by the COVID-19 pandemic, in which technologies such as virtual environment systems, video conferencing applications, and every kind of virtual communication took on a relevant role [1]. The same happened in higher education. Another study examined the impact of digital learning on students' motivation [13]: the authors evaluated 689 students from 10 universities in Pakistan and showed that the learning climate has been insignificant for student engagement.

EDM (Educational Data Mining). "Educational data mining is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context" [14]. With the pandemic, more universities are using virtual learning environment systems, and a vast amount of data is emerging from them. It is vital for decision makers to evaluate this behavior, and the tools of EDM are well suited to that purpose.

Prediction. In our research, prediction means predicting a feature of our dataset; it is an essential part of EDM when predicting student academic performance. A paper in this area [15] predicts students' performance in an English course taken by Bachelor of Usuluddin and Bachelor of Syariah students; the authors use a deep learning model and obtained 93% accuracy against 3 testing datasets. In another study, held in Gujarat, India, the dataset gathered more than 1200 instances; the authors use various ML approaches to examine the correlation between student social engagement and the most popular social engagement platforms during the COVID-19 pandemic, and the classifications derived reveal that students were mainly engaged with WhatsApp and YouTube [16].

Knowledge Discovery in Databases. Another important field in this research is Knowledge Discovery in Databases (KDD); in the words of [17], KDD means the extraction of useful, previously unknown information from data.

Decision Tree (DT). A decision tree is a flow chart-like structure representing a tree and is used to solve classification or regression problems [18]. A DT is built using the entropy of Eq. 1:

$Entropy(S) = \sum_{i=1}^{k} -p_i \log_2(p_i)$  (1)

where $p_i$ represents the probability of case $i$. Building a DT also requires an attribute selection measure. One of them is the information gain of Eq. 2:

$Gain(A) = Entropy(S) - \sum_{j=1}^{m} \dfrac{|S_j|}{|S|} \times Entropy(S_j)$  (2)

where $S_j$ is the number of case samples for attribute $A$, $|S|$ is the total number of data samples, and $Entropy(S_j)$ is the entropy value for the samples of $j$ [8].


We can also build a DT with another attribute selection measure, the gain ratio, which is an extension of information gain. It is defined as:

$GainRatio(A) = \dfrac{Gain(A)}{SplitInf_A(D)}$  (3)

where $SplitInf$ is defined as follows:

$SplitInf_A = - \sum_{k=1}^{n} \dfrac{|S_k|}{|S|} \log \dfrac{|S_k|}{|S|}$  (4)

The last attribute selection measure is the Gini index, which measures the impurity of $S$ and is defined as:

$Gini(S) = 1 - \sum_{i=1}^{o} P_i^2$  (5)

Random Forest (RF). A random forest is a set of decision trees combined with bagging, which means each DT is trained on a part of the total dataset and each one is different from the others [19]. It is used for classification or regression tasks and is represented by Eq. 6:

$G(t) = 1 - \sum_{k=1}^{Q} p^2(k \mid t)$  (6)
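As an illustration of Eqs. 1 and 2 (our sketch, not the authors' code), the following Python snippet computes the entropy of a class label and the information gain of a candidate attribute on a small hypothetical sample inspired by the variables discussed later.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Eq. 1: -sum(p_i * log2(p_i)) over the class distribution."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(attribute_values, labels):
    """Eq. 2: entropy of the whole set minus the weighted entropy of each split."""
    total = entropy(labels)
    n = len(labels)
    for value in set(attribute_values):
        subset = [lab for attr, lab in zip(attribute_values, labels) if attr == value]
        total -= (len(subset) / n) * entropy(subset)
    return total

# Hypothetical example: entrance modality vs. pass/fail in mathematics
modality = ["phase_II", "phase_II", "cepu", "cepu", "extraordinary", "phase_I"]
outcome  = ["fail",     "fail",     "pass", "pass", "pass",          "fail"]
print(entropy(outcome))                     # entropy of the pass/fail labels (1.0 here)
print(information_gain(modality, outcome))  # gain of splitting on entrance modality
```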

2.3 Synthetic Minority Oversampling Technique (SMOTE)

A first reference is found in [20] (Algorithm 1), which describes an "over-sampling approach in which the minority class is over-sampled by creating 'synthetic' examples rather than by over-sampling with replacement." This technique helps us deal with our unbalanced data: when one class has many more records than the others, the algorithm creates new synthetic records for the minority class from our dataset.


Algorithm 1 SMOTE(T, N, k) [20]
Input: Number of minority class samples T; amount of SMOTE N%; number of nearest neighbors k
Output: (N/100) * T synthetic minority class samples
1. (If N is less than 100%, randomize the minority class samples, as only a random percent of them will be SMOTEd.)
2. if N < 100 then randomize the T minority class samples; T = (N/100) * T; N = 100; end if
3. N = (int)(N/100)  (the amount of SMOTE is assumed to be in integral multiples of 100)
4. k = number of nearest neighbors; numattrs = number of attributes
5. Sample[][]: array of the original minority class samples
6. newindex: count of the synthetic samples generated, initialized to 0
7. Synthetic[][]: array of the synthetic samples
8. for i ← 1 to T do: compute the k nearest neighbors of i, save the indices in nnarray, and call Populate(N, i, nnarray); end for
Populate(N, i, nnarray) (function to generate the synthetic samples)
9. while N ≠ 0 do
10.   choose a random number nn between 1 and k (this chooses one of the k nearest neighbors of i)
11.   for attr ← 1 to numattrs do: dif = Sample[nnarray[nn]][attr] − Sample[i][attr]; gap = random number between 0 and 1; Synthetic[newindex][attr] = Sample[i][attr] + gap × dif; end for
12.   newindex ← newindex + 1; N ← N − 1
13. end while; return (end of Populate)
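In practice SMOTE is available off the shelf; a minimal sketch, assuming the imbalanced-learn package and a tiny hypothetical stand-in for the student dataset, would look like this:

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

# Hypothetical, tiny stand-in for the student dataset: 2 features, imbalanced labels
X = np.array([[14, 1], [15, 0], [13, 1], [16, 0], [12, 1], [11, 1], [17, 0], [10, 1]])
y = np.array(["pass", "pass", "pass", "pass", "pass", "pass", "fail", "fail"])

smote = SMOTE(k_neighbors=1, random_state=42)  # k_neighbors must be < number of minority samples
X_balanced, y_balanced = smote.fit_resample(X, y)

print(Counter(y))           # original, imbalanced class counts
print(Counter(y_balanced))  # after SMOTE: 50% / 50%
```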

3 Methodology

3.1 Research Design

This work focuses on identifying the relationship between a student's characteristics (high school grades, university entrance modality, height, weight, literary preferences, etc.) and their academic performance in the subject of mathematics in the first cycle of their university studies. We gathered data from the "Dirección Académica de Actividades y Servicios Académicos" (DASA); these records contain all our students' grades. We also applied a survey to collect information about the students' socioeconomic status, family background, tastes, and grades from high school. We summarize our proposed framework for predicting academic performance in Fig. 1.


Fig. 1 Proposed methodology for predicting academic performance using Random Forest and Decision Tree

Pre-processing techniques were applied to the data: we removed features that contained students' personal information, such as names and personal identification numbers, handled missing values in students' records, and deleted redundant data.
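A minimal sketch of this pre-processing step (our illustration; the column names and values are hypothetical, not the actual DASA or survey fields):

```python
import pandas as pd

# Hypothetical stand-ins for the two sources (DASA grades and the survey)
grades = pd.DataFrame({"student_code": [1, 2, 3], "math_grade": [12.5, None, 15.0]})
survey = pd.DataFrame({"student_code": [1, 2, 3],
                       "full_name": ["A", "B", "C"],                            # personal data, to be dropped
                       "entrance_modality": ["phase_II", "cepu", "phase_I"],
                       "entrance_modality_dup": ["phase_II", "cepu", "phase_I"]})  # redundant column

df = grades.merge(survey, on="student_code", how="inner")   # combine into a single dataset
df = df.drop(columns=["full_name"])                         # remove personal identification
df = df.loc[:, ~df.T.duplicated()]                          # drop redundant (duplicate) columns
df["math_grade"] = df["math_grade"].fillna(df["math_grade"].median())  # handle missing values
print(df)
```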

3.2 Population and Sample

The research population consists of records of UNJBG students from two different sources. The first one, the DASA database, contains information about the academic period 2021-I: 6,314 records were grouped by student code, and we focused on the final grade per subject. The other source was the data obtained by a survey applied to all students enrolled in the academic period 2021-I, which produced a total of 135 records.

4 Results and Discussion

Following the methodology, the results below were obtained. Table 1 shows the confusion matrix corresponding to the application of the decision tree algorithm to predict the academic performance in the first-cycle mathematics subject of the students entering in the year 2021, with an accuracy of 92.28%. In the same way, Table 2 shows that the accuracy is 93.33% when applying the random forest algorithm.


Table 1 Confusion Matrix and overall results of applying Decision Tree

Table 2 Confusion matrix and overall results of applying Random Forest
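For reference, a minimal sketch of how such confusion matrices and accuracies can be produced with scikit-learn (our illustration, reusing the hypothetical X_balanced and y_balanced arrays from the SMOTE sketch above, not the authors' actual pipeline):

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# X_balanced, y_balanced: the SMOTE-balanced features and pass/fail labels from Sect. 2.3
X_train, X_test, y_train, y_test = train_test_split(
    X_balanced, y_balanced, test_size=0.3, random_state=42, stratify=y_balanced)

for name, model in [("Decision Tree", DecisionTreeClassifier(random_state=42)),
                    ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=42))]:
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    print(name, accuracy_score(y_test, y_pred))   # overall accuracy
    print(confusion_matrix(y_test, y_pred))        # confusion matrix as in Tables 1 and 2
```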

With the help of the Synthetic Minority Oversampling Technique (SMOTE), we balanced the records of the dataset, obtaining 50% approved and 50% disapproved students. From the results of the experiments in Fig. 2, we can establish that the feature with the greatest impact on the determination of the classes is "El tipo de acceso a la universidad", which means that the process by which students were admitted to the university is a key factor in determining their academic performance in the first year of university studies in the Informatics and Systems Engineering degree. We can notice that 74% of the students admitted by the "Phase II entrance exam" modality in the year 2021, and 45%

Fig. 2 Main nodes of the result of our Decision Tree


of the students admitted by the "Phase I entrance exam" modality in the year 2021, are predicted to fail the mathematics subject in the first semester of their university journey. On the other hand, students who were admitted through the modalities "CEPU (Pre-University Center) I-2021 entrance exam", "Phase II-2020 entrance exam", "CEPU I-2021 entrance exam", "Extraordinary entrance exam (students who placed first in their respective schools and high-performance school (COAR) graduates)" and "CEPU Summer 2020-III" have a probability of 100% of passing the mathematics subject in the first semester of their university studies. These results make sense, since students who were admitted to the university well in advance (i.e., by the admission process of a previous year, as is the case of the "Phase II entrance exam" of the year 2020) had more time to prepare and reinforce their knowledge of university mathematics. The same applies to students admitted by the extraordinary admission process: they were high-achieving students in regular elementary education or come from high-performance schools, already have a high learning pace, and their adaptation to higher education is easier. The second most important variable for predicting the academic performance of entering students in the Informatics and Systems Engineering degree is "la autopercepción de su nivel de liderazgo": there is a probability of 93.9% that students who consider their leadership level to be good and were admitted by the "Phase II-2021 entrance exam" process, and of 47.4% that students who consider their leadership level to be low and were also admitted by the "Phase II-2021 entrance exam" process, will fail the mathematics subject in the first semester of the Informatics and Systems Engineering degree. This may be explained by the fact that they are not totally convinced of their success in teamwork or study groups, probably due to a reserved personality. We can also observe that another predominant variable is "La frecuencia semanal de ver TV, YouTube, Netflix o videos por internet": there is a probability of 100% that students who say they never watch TV, YouTube, Netflix, or videos on the internet, and were admitted in the "Phase I-2021 entrance exam" process, will fail the mathematics subject in the first semester of their university studies. This may indicate that they are not being truthful about their weakness for, or addiction to, distracting themselves with audiovisual content. It is also interesting to note in Fig. 3 that, among students who feel their leadership level is good, those who read the comedy literary genre have a 100% chance of passing the mathematics subject, while those who prefer literary genres such as fantasy, science fiction, action, and romance have a probability of 90% of failing it. This may indicate that those who spend their leisure time on light, unsophisticated reading are more focused on their studies than those who choose a fantasy or science fiction saga. This raises the possibility of further studies oriented to literary preferences and academic performance.


Fig. 3 A branch of Decision Tree

5 Conclusions and Future Works

It has been possible to identify the main variables that influence the academic performance of students of the Faculty of Engineering in the subject of mathematics using machine learning models, especially decision trees. We note that the Random Forest algorithm is more accurate in classifying our data, and that the main variables are related to school grades and reading habits, which we consider of great importance for further analysis in more extensive work. For future work, it would be interesting to use other variables, such as physical exercise time, hours of sleep, and the number of hours spent playing video games, with students from other disciplines and different subjects.
Acknowledgements We thank the Jorge Basadre Grohmann National University, which funded the research project "Data science applied to the evaluation of the social impact of COVID-19 in the Tacna region: Health and education", approved with rectoral resolution No. 10086-2022-UNJBG, for the facilities provided in data and technical advice.

References 1. Torres MJF, Sánchez RC, Villarrubia RS (2021) Universidad y pandemia: la comunicación en la educación a distancia. Ámbitos Revista Internacional de Comunicación 156–174. https:// doi.org/10.12795/ambitos.2021.i52.10 2. Taya-Acosta EA, Barraza-Vizcarra HM, de Jesus Ramirez-Rejas R, Taya-Osorio E (2022) Academic performance evaluation using data mining in times of pandemic. Tech Rev Int Technol Sci Soc Rev 11. https://doi.org/10.37467/gkarevtechno.v11.3324 3. Katarya R, Gaba J, Garg A, Verma V (2021) A review on machine learning based student’s academic performance prediction systems. In: Proceedings - international conference on artificial intelligence and smart systems, ICAIS 2021. Institute of Electrical and Electronics Engineers Inc., pp 254–259


4. Alamri LH, Almuslim RS, Alotibi MS, et al (2020) Predicting student academic performance using support vector machine and random forest. In: ACM international conference proceeding series. Association for Computing Machinery, pp 100–107 5. Daud A, Lytras MD, Aljohani NR, et al (2017) Predicting student performance using advanced learning analytics. In: 26th international world wide web conference 2017, WWW 2017 Companion. International world wide web conferences steering committee, pp 415–421 6. Hardman J, Paucar-Caceres A, Fielding A (2013) Predicting students’ progression in higher education by using the random forest algorithm. Syst Res Behav Sci 30:194–203. https://doi. org/10.1002/sres.2130 7. Yadav N, Srivastava K (2020) Student performance prediction from E-mail assessments using tiny neural networks. In: 2020 9th IEEE Integrated STEM Education Conference, ISEC 2020. Institute of Electrical and Electronics Engineers Inc. 8. Zohair LMA (2019) Prediction of student’s performance by modelling small dataset size. Int J Educ Technol Higher Educ 16. https://doi.org/10.1186/s41239-019-0160-3 9. Prenkaj B, Velardi P, Stilo G, et al (2020) A survey of machine learning approaches for student dropout prediction in online courses. ACM Comput Surv 53 10. Adekitan AI, Salau O (2020) Toward an improved learning process: the relevance of ethnicity to data mining prediction of students’ performance. SN Appl Sci 2. https://doi.org/10.1007/ s42452-019-1752-1 11. Li M (2018) A study on the influence of non-intelligence factors on college students’ English learning achievement based on C4.5 algorithm of decision tree. Wirel Pers Commun 102:1213– 1222. https://doi.org/10.1007/s11277-017-5177-0 12. Jin M, Wang H, Zhang Q, Luo C (2018) Financial management and decision based on decision tree algorithm. Wirel Pers Commun 102:2869–2884. https://doi.org/10.1007/s11277-0185312-6 13. Shah SS, Shah AA, Memon F et al (2021) Online learning during the COVID-19 pandemic: applying the self-determination theory in the ‘new normal.’ Revista de Psicodidactica 26:169– 178. https://doi.org/10.1016/j.psicod.2020.12.004 14. Romero C, Ventura S, García E (2008) Data mining in course management systems: MOODLE case study and tutorial. Comput Educ 51:368–384. https://doi.org/10.1016/j.compedu.2007. 05.016 15. Yusof MHM, Khalid IA (2021) Precision education reviews: a case study on predicting student’s performance using feed forward neural network. In: 2021 International conference of technology, science and administration, ICTSA 2021. Institute of Electrical and Electronics Engineers Inc. 16. Prajapati JB, Patel SK (2021) Performance comparison of machine learning algorithms for prediction of students’ social engagement. In: Proceedings - 5th international conference on computing methodologies and communication, ICCMC 2021. Institute of Electrical and Electronics Engineers Inc., pp 947–951 17. Fayyad U (2001) Knowledge discovery in databases: an overview. In: Džeroski S, Lavraˇc N (eds) Relational data mining. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-66204599-2_2 18. Gupta B, Uttarakhand P, Rawat IA (2017) Analysis of various decision tree algorithms for classification in data mining. Int J Comput Appl 163:975–8887 19. Shah MB, Kaistha M, Gupta Y (2019) Student performance assessment and prediction system using machine learning. In: 2019 4th international conference on information systems and computer networks (ISCON). pp 386–390 20. 
Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP (2002) SMOTE: synthetic minority oversampling technique. J. Artif. Intell. Res. 16, 321–357

Intellectual Capital and Public Sector: There are Similarities with Private Sector? Óscar Teixeira Ramada

Abstract The goal of this paper is to know whether the relevant literature on intellectual capital with regard to the public sector has substantial content and whether it can be used to determine if there are similarities with the private sector. It is concluded that the literature relating these two aspects is scarce and, therefore, cannot bring abundant information to scientific knowledge. Thus, a very large expansion of scientific knowledge remains to be carried out in this domain of the public sector related to intellectual capital, and it is not expected that, even at a high rate of research production, as much literature will be obtained as already exists concerning the private sector. The problem of the definition-measurement-value sequence of intellectual capital arises with more emphasis in the public sector than in the private sector; hence the research to be carried out is much more laborious, and it is no wonder that these two aspects together lag far behind the same aspects in the private sector. Keywords Intellectual capital · Public and private sectors · Innovation

1 Introduction

Intellectual capital has been assuming a growing importance in the scientific community in general and in practice in particular. It is related to other topics such as business performance [4] and [6], competitive advantages [8] and [9] and innovation [5] and [7], but it remains somewhat conditioned by the lack of clarification of the three pillars on which it should be based: definition-measurement-value. There is no peaceful scientific consensus on what is meant by intellectual capital and how to define its components [10] and [11]. Sometimes the number of components is the same but they vary in their semantic meaning, and sometimes the number of components itself varies [12]. Definition is the first pillar, measurement is the second, and value is the third, which results from the first two.
Ó. T. Ramada (B) Instituto Superior de Ciências Educativas do Douro – ISCE Douro, Porto, Portugal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_44


In the absence of an unequivocal clarification of the first pillar, namely the number of components and their meaning, together with applicability to any activity sector, the other two cannot follow. In fact, for the same definition we can have different or equal ways of measuring it, but they must lead to the same final calculation, that is, to a single value of intellectual capital; if they do not, the practical usefulness is very limited or even non-existent. What is intended, in the end, is to know the value of an intangible asset, as is the case of intellectual capital. It can be interpreted as knowledge, and this, in turn, is a way to produce wealth, business performance and competitive advantages for companies, for nations and, particularly, for the activity sectors from which that wealth is produced. Countries are not rich because of the material wealth they are capable of producing; they are rich because they have immaterial wealth, and it is this that allows material wealth to be produced, whether in quantity, in variety or in differentiation. A country may not have a large endowment of immaterial wealth, but if its substrate allows greater material wealth to be created than in a country with a greater endowment of immaterial wealth, that country is richer. The theme of intellectual capital, understood in this way, is the Gordian knot through which a new "The Wealth of Nations", after Adam Smith (1723–1790), passes: today, via intellectual capital, there is much to know about the nature and causes of the wealth of nations. At that time, wealth took on a more material and less immaterial nature, but today the opposite happens: more immaterial and less material, although one is still translated into the other. Generally, the multifaceted research that exists on intellectual capital focuses primarily on the private sector (more) and less on the public sector (the State, understandably). Thus, a strong bias arises, as it is assumed that the private sector is where more wealth is created, overvaluing it, while less is created in the public sector. What happens is that the ways of producing wealth via intellectual capital differ between the private sector and the public sector. There are even countries at two extremes: in some there is almost only the public sector, with a strong predominance of the State, and in others only the private sector, where the presence of the State is largely dissipated. Thus, the problem arises of knowing what relevant literature exists on the public or administrative sector (in quantity), what it claims, and what main features it exhibits. A first survey of the aforementioned literature soon highlighted its scarcity with respect to the public sector. Thus, there is, from the outset, a paucity that does not allow any in-depth development of the theme. Having said that, it is necessary to highlight the research problem, the research question we intend to answer: what does the literature that relates intellectual capital to the public sector claim? What is most remarkable in it?


This research is divided into three sections: an Introduction, which introduces the theme above; a Literature Review, in which the most important ideas of the three selected papers (among the few found) are pointed out; and, finally, a Conclusions section with the most notable conclusions, not forgetting the References, which list the most important bibliographic sources consulted for this research.

2 Literature Review

[1] These authors carried out research on the effects of one component of intellectual capital, structural intellectual capital, on the innovation capacity of public administration. Its measurement, in this context, is a way of evaluating the value it has for society; the locus of the case study is a city in Latin America. The existence of a structure for innovation is characterized by the granting of autonomy, flexibility of control, fluidity of horizontal communication, the valuing of knowledge and experience, and the informality of people and of the relationships established between them. An organization that works within this framework has an organic structure that allows quick answers to changes in the external environment and, because of this, has great possibilities to innovate. Internal processes for innovation constitute a set of activities that represent the organization's operations directly addressing its customers, whether internal or external. The authors assessed that the relationship between internal processes and innovation is broader in business environments but not very solid in public administration. Notwithstanding the excessive bureaucracy seen in the public administration of the city studied, the authors believe they can contribute to showing how innovation can play an important role in this area. With regard to the method used to test the hypotheses, the instrument was a questionnaire given to workers. The universe studied was the Municipality of Santiago, Brazil, in the Rio Grande do Sul region of Latin America. The organization's managers, the leaders and managers of each sector, department and secretariat, and the principals and vice-principals of municipal schools were interviewed. Thus, primary data was used, collected between December 2019 and February 2020; a total of 158 questionnaires were received and used. The questionnaire was prepared in accordance with empirical and theoretical studies on the themes of structural intellectual capital and innovation in the public sector, and the choice of questions had to do with the degree of tactical and strategic leadership. Both the structural capital and the innovation capability components can be seen as multidimensional and, therefore, composed of multiple items to be evaluated. Innovation capability comprises eight items, divided into three factors: service and process innovation capability, organizational innovation capability and institutional innovation capability.


As for the conclusions, the authors confirmed that structural intellectual capital had a significant and direct positive effect on innovation capability across the three factors, which confirmed the hypotheses raised and studied. From a practical point of view, the managerial contribution is that the greater (smaller) the structural capital, the greater (smaller) the possibility of innovation; thus, what matters is investment in the structure for innovation, not in the departments per se, but in actions that stimulate collective thinking and solutions, among other factors that do not depend on the budget. There is, therefore, an alignment of actions, results and financial incentives for workers, via personnel management in each department and not just via the Personnel Department.
[2] This author, of Portuguese nationality, carried out research on intellectual capital in the public sector, designing a taxonomy of intangible assets for that sector. She states that its growing importance has been observed thanks to its interpretation as a productive factor and to its contribution as a factor at the origin of the creation of value and competitive advantages. The importance of managing it well is recognized in a range of situations, including public organizations. In these, its application has been less developed, considering that most of its development comes from the business area, and research in the domain of public organizations has not received the same attention and interest from researchers. The author mentions several aspects that the consulted literature cites as justification for applying the theory of intellectual capital to the public sector: intangible goals (non-financial goals); intangible output (the use of its resources to achieve goals translated into intangible results); intangible resources (unlike the business world, its activity is based on knowledge and on its intensive use, which has been accentuated with the development of e-governance); an internal management tool and reduced urgency to quantify (the usefulness of intangible assets is related to decision-making by managers and the implementation of the theory of intellectual capital); external reporting and transparency (public organizations seek to report their activities to citizens, covering the functions they perform, beyond data on legality, budget execution, economic efficiency and effectiveness); social and environmental responsibility (many companies see their responsibility, image and public information about the repercussions of their actions reflected in their financial statements; in the public sector these requirements should be high and the commitment should not only be to improve its image but also be part of its goals); a low incentive to adopt new management techniques (an obstacle to the implementation of intellectual capital projects is the low motivation in the public sector to adopt new management practices); and little room for managerial discretion (the implementation of the management ideas underlying intellectual capital is attractive but difficult to apply in the public sector). Another point the author highlights is that the components that make up intellectual capital in the private sector are different from those adopted in the public sector.
The meanings are different beyond the components: in the public sector we have human capital, structural capital, relational capital, services capital and public commitment capital.


As main conclusions, the author notes that, in the public sector, research is not as abundant as in the private sector, which translates into lesser development. However, the theory of intellectual capital presents ideas that invite application to the public sector, such as the intangibility of the concept being present in the goals, in the output produced by organizations and in the resources used, among others. Application in the private sector is different from application in the public sector: the former may be useful for the latter, but there are aspects that cannot be compared. It is necessary to create new methodologies that emphasize the adaptation of the private to the public, such as specific models and concepts that are only applicable to the latter, since those applied to the private sector are not suitable.
[3] These authors aim to produce research that presents a model of intellectual capital applied to public administrations. Indeed, according to the authors, public administrations have undergone a process of change and modernization in order to obtain more efficiency and effectiveness, and a model that favors the role of intangible assets proves useful for this effort. With regard to the methodology adopted, the sources for the model's construction originate in the Intellectual Forum, a set of workshops organized by the Knowledge Society Research Center (CIC) of the IADE at the Universidad Autónoma de Madrid (the period to which the research refers is not stated). The main output of these workshops was the constructed model, which was tested by international experts in networks and adopted by the public sector: the Fiscal Studies Institute and the Tributary Agency. Briefly, the model can be described as a schematic tree whose components are those accepted by the scientific community for intellectual capital: Public Human Capital (PHC), Public Structural Capital (PSC) and Public Relational Capital (PRC). The second component is subdivided, for management purposes, into Public Organizational Capital (POC), Public Social Capital (PSC) and Public Technological Capital (PTC). Regarding the main findings, the authors concluded that knowledge-based intangible resources are very important in the public sector and that measuring and managing them can contribute to improving the scope of a public sector that has been very much driven by budgetary issues. On the other hand, in order to innovate in the public sector and develop measurement and management models, it is necessary to create specific methodologies, and these methodologies themselves lack empirical research. The model presented has the advantage that its structure can be replicated in other structures, with variables and indicators in different units. The need to develop strategies and tools to measure and manage intangibles constitutes a new, open avenue of research that should deserve more careful scientific exploration.


3 Conclusions

What can be concluded from this research is that, due to the scarcity of relevant literature found, the picture it offers is not comprehensive; consequently, the knowledge it conveys is of limited clarity and informativeness. Briefly, the literature deals with the effects of intellectual capital components on the ability of public administrations (the public sector) to innovate, with a taxonomy (classification) of intangible assets in the public sector, and with the production of a model of intellectual capital to be applied in the public sector. From the outset, it should be noted that no clear distinction is made between the administrative public sector and the business public sector, whose borders may differ little or not at all from the private sector and, especially, from the business fabric. It is also not known whether the public sector, administrative or business, is more predominant in the global economy, nor what the weight of the private sector is. In any case, even in economies with a strong presence of the private and/or public sector, the characterizing structure is not highlighted, which would be indispensable in order to assess it. In global terms, it is concluded that there is a long way to go in order to know the intellectual capital embedded in the public sector, its most evident features, and why it is important for it to exist alongside the private sector. Even the very recent literature, produced in 2021 with two papers, does not allow concluding any insights that would indicate the first signs of more promising directions. The main contribution of this research is to show that the existing literature on the public sector still has a long way to go, in the sense that much more scientific production is needed to explore and provide clarifying answers about the intellectual capital within it. As main limitations, it is highlighted that this literature refers to intellectual capital related to innovation and its arrangement in the public sector, as well as to a model of intellectual capital in the public sector. In fact, nothing relevant is known about the dimensions of public sector institutions or the quantification of intellectual capital in its structural component, and nothing is known about the other components: human capital, customer capital, among others. Finally, future research avenues concern the need to carry out studies of specific institutions in the field of public education, with samples representative of the global whole or through case studies. Answering the research question, in synthesis, we can conclude that the relevant literature is very unsatisfactory and that little remarkable evidence can be found, which is consistent with the scarcity identified regarding intellectual capital in the public sector.


References

1. Silva R, Jardón C, Avila L (2021) Effects of structural intellectual capital on the innovation capacity of public administration. J Technol Manag Innov 16(1):1–12
2. Bailoa S (2021) The intellectual capital in the public sector: a taxonomy of intangibles to public organizations. Zbornick Veleucilista u Rijeci 9(1):1–16
3. Campos E, Salmador M, Merino C (2006) Towards a model of intellectual capital in public administration. Int J Learn Intellect Cap 3(3):1–19
4. Abdullah D, Sofian S (2012) The relationship between intellectual capital and corporate performance. Procedia Soc Behav Sci 40:292–298
5. Delgado-Verde M, Martín-de-Castro G, Amores-Salvado J (2016) Intellectual capital and radical innovation: exploring the quadratic effects in technology-based manufacturing firms. Technovation 54:34–47
6. Hashim M, Osman I, Alhabshi S (2015) Effect of intellectual capital on organization performance. Procedia Soc Behav Sci 211:207–214
7. Zang M, Qi Y, Guo H (2017) Impacts of intellectual capital on process innovation and mass customization capability: direct and mediating effects. Int J Product Res 55(3):1–13
8. Vosloban R (2012) The influence of the employee's performance on the company's growth – a managerial perspective. Procedia Econ Finan 3:660–665
9. Sumedrea S (2013) Intellectual capital and firm performance: a dynamic relationship in crisis time. Procedia Econ Finan 6:137–144
10. Berzkalne I, Zelgalve E (2014) Intellectual capital and company value. Procedia Soc Behav Sci 110:887–896
11. Gogan L, Draghici A (2013) A model to evaluate the intellectual capital. Procedia Technol 9:867–875
12. Sekhar C, Patwardhan M, Vyas V (2015) A Delphi-AHP-TOPSIS based framework for the prioritization of intellectual capital indicators: a SME's perspective. Procedia Soc Behav Sci 189:275–284

EITBOK, ITBOK, and BIZBOK to Educational Process Improvement

Pablo Alejandro Quezada–Sarmiento and Aurora Fernanda Samaniego-Namicela

Abstract Bodies of Knowledge (BOK) gather the information relevant to the different areas of science, and their knowledge representation is essential to understand the real context and the possible application of these BOKs, especially in the educational context. This article presents the combination of EITBOK, ITBOK, BIZBOK and Enterprise Architecture (EA) as a framework for educational process improvement.

Keywords Business architecture · Body of knowledge · EITBOK · ITBOK · BIZBOK · Education · Enterprise Architecture (EA)

P. A. Quezada–Sarmiento (B) Centro de investigación de Ciencias Humanas y de la Educación-CICHE, Facultad de Ciencias de la Educación, Universidad Indoamérica, Bolivar 2035 y Guayaquil, 180103 Ambato, Tungurahua, Ecuador e-mail: [email protected]; [email protected]
A. F. Samaniego-Namicela Universidad Técnica Particular de Loja, Facultad de Ciencias Económicas y Empresariales, 1101608 Loja, Ecuador e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_45

1 Introduction

Information technologies are a vital factor in the transformation of the new global economy and in the rapid changes taking place in society [1]. In recent decades, new information and communication technology tools have brought about a profound change in the way individuals communicate and interact in the business context and have produced significant changes in industry [2]. "Information technology and the digital economy present new opportunities for all sectors of the economy. Today, the economy is undergoing fundamental changes as a result of the rapid development of information technology, and its adoption is also very fast" [1] within organizations.

One of the key competitive factors in recent years has undoubtedly been the application of planning in organizations. Planning arises from the concept of organization found in the classical schools of thought, the theory of equilibrium and the theory of cognitive limits, and it motivates the use of Body of Knowledge standards (BOKs) for the representation of organizational knowledge and performance [3]. In the same context, EA is driven by the strategic objectives and requirements of the business, which must be aligned with the characteristics of ITBOK, EITBOK and BIZBOK as base standards for the improvement of IT processes [4].

ITBOK (Information Technology Body of Knowledge) is the body of knowledge defined by the ACM (Association for Computing Machinery) for the construction of the information technology engineering curriculum, defined in its Computing Curriculum [5]. ITBOK is a compendium of high-level descriptions of knowledge areas (KAs) that are generally required for the successful operation of the information technology (IT) services provided to the enterprise served. Enterprise IT is an overarching term that describes the work needed in IT products and services [6]. According to [7], BIZBOK is built through a series of iterative stages that incorporate the practice-based experience and expertise of guild members, who represent a large and growing community of practice. EITBOK attempts to help practitioners and aspiring practitioners see themselves as part of a community of those who work together within enterprises across the globe to facilitate the successful execution of the enterprise's activities. Enterprise IT (EIT), at its heart, provides the circulatory system for the information that drives the enterprise's decision-making and thus its ability to survive and thrive [9].

EA can be understood as the process of translating the company's vision and business strategy into effective change by creating, communicating and improving the fundamental requirements, principles and models that describe the future state of the business and enable its evolution [8]. The scope of EA includes not only the business processes, but also the people, the information and the technology of the company and their relationships with each other and with the external environment. The architectural requirements that must be met are given first by the strategic objectives of the business and only then by those of IT [9]. The architectural strategy can be summarized in a roadmap and in a conceptual document that describes the main characteristics and functionalities of the architecture aligned with the business; the next step is to choose the technology (technical architecture) that best fits the defined architectural strategy [10]. The IT development and operations framework is the methodological framework of the technical architecture, used for the development of applications and services, providing business solutions with standardization, productivity and best practices [11].


EA3 also provides abstract views and analysis models that help to model the current and future views of the company, allowing for better planning and decision-making practices [12]. This article proposes a methodology based on EA3 and supported by the information technology bodies of knowledge (ITBOK, EITBOK and BIZBOK) in order to present a guide for the improvement of an organization's processes in a sustainable way.

2 Methodology

"Information technologies are a vital factor in the transformation of the new global economy and in the rapid changes taking place in society" [13]. Business approaches in today's society have become technologically driven and highly applicable in various professional fields, especially in their technologies. It is important to consider that, for the implementation of an architectural program, a documentation scheme must already have been defined [14]. In the following phases, the architectural design is described under architectural principles aligned with the BOKs previously mentioned. Aligned with the 3 bodies of knowledge taken as a reference for the training of those responsible for establishing the architectural program, consistency in these areas will help IT professionals perform more effectively and reliably. Organization leaders, in particular, need to understand and assign value to IT activities in order to fully support, fund and staff IT functions. In order to establish the EA, the following steps are proposed based on [15–17].

1. The Executive Sponsor (ES) establishes the program and a Chief Architect (CA) is identified to direct it [16]. Within BIZBOK and ITBOK, the basic training is established that allows incorporating the experience and knowledge based on the practice of their members. The CA must have a strong background in information technology, architectural models and business vision, which will allow a correct implementation of the EA [17], supported by the BOKs mentioned above. In this context, senior executives continue to be concerned about the factors influencing the business effect of information technology (IT) [18].
2. The CA and the EA team identify all of the methodology steps that the company needs to create an effective program [17]. Business architecture programs should consider the methodological aspects suggested by the BOKs related to the implementation of IT processes and the representation of these standards [18].
3. The CA co-develops the architectural governance approach, enabling effective policies, plans and decision-making within the management program [16]. One of the fundamental pillars of a BOK is curricular development, which according to [19] allows adequate training of information technology engineers.


Fig. 1 Levels of EA, adapted from [16, 17]

In the same context, [20] proposes activities and tasks that enable architects and others to implement architecture practices more effectively and efficiently.

4. The Architectural Communication Plan should be written in plain language to gain the attention of technical stakeholders [16]. The communication management plan documents of each project are developed by considering the suitability of project communication activities and the stakeholder needs [21].
5. Selection of an architectural documentation framework by the CA.
6. Identification of the vertical lines of business and the horizontal cross-cutting initiatives within the company that can be designed.
7. The CA identifies the architectural components that will have to be documented in each functional area of the structure, related to the Knowledge Areas of the BOKs.
8. Development of the architectural artifacts within the documentation (Fig. 1). In the same context, it is important to select a documentation technique that provides the information needed for resource planning and decision making.
9. Once the functional areas, structure levels and types of architectural components are known, the documentation and artifacts for the program can be established.
10. The repository must be hosted by the company, based on the communication principles of the BOKs. Figure 2 shows an example of an online EA repository for documentation.
11. The foregoing activities should be documented.
12. It is important to consider documentation methods based on ITBOK to support the company's activities.
13. Development of a future view of architectural components (FVAC) based on BIZBOK.
14. Description of each future scenario in narrative form, that is, a possible business/technology operating environment that the company may pursue. In this step, the key elements of each future scenario are analyzed to reveal what is important to the company and what changes have to occur for the scenario to become a reality.
15. This step should document changes in the architectural components in the short term (1–2 years) and the long term (3–5 years) [16].
16. Development of the Architectural Management Plan.
17. Review that the current and future views of the architecture are stored in the EA repository (EAR) [16].
18. The information in the repository will be valuable for planning and decision making only when it is comprehensive and accurate [16].
19. The CA and the EA team must ensure that the application/tool repository and its support are kept up to date in terms of licensing and functionality [17].
20. The CA reports, through the annual issuance of a Management Plan, the changes made in both the current and the future views during the last year [18].

Fig. 2 EA repository for online documentation

Enterprises are transforming their strategy, culture, processes and information systems to expand their digitization efforts or to move toward digital leadership [19]. In each step of the previous methodology, based on [16] and supported by ITBOK, BIZBOK and EITBOK, it is necessary to consider the knowledge areas and the consensus reached in them in order to improve business processes in organizations.
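Purely as an illustration of how such a repository could be organized in practice, the sketch below models a minimal catalog that associates documentation artifacts with EA levels and BOK knowledge areas (see steps 9–12 and 17 above); the level names, artifact names and knowledge-area labels are hypothetical examples for the sketch, not prescriptions taken from EA3, ITBOK, EITBOK or BIZBOK.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A single EA documentation artifact stored in the repository."""
    name: str
    ea_level: str                                   # e.g. "Strategy", "Business", "Data", "Systems"
    bok_areas: list = field(default_factory=list)   # related BOK knowledge areas (illustrative labels)
    current_view: bool = True                       # False marks a future-view (FVAC) artifact

class EARepository:
    """Minimal in-memory catalog for EA artifacts."""
    def __init__(self):
        self.artifacts = []

    def add(self, artifact: Artifact):
        self.artifacts.append(artifact)

    def by_level(self, level: str):
        return [a for a in self.artifacts if a.ea_level == level]

    def future_view(self):
        return [a for a in self.artifacts if not a.current_view]

# Hypothetical entries only, to show how the catalog could be queried
repo = EARepository()
repo.add(Artifact("Academic process map", "Business", ["BIZBOK: capability mapping"]))
repo.add(Artifact("Student records data model", "Data", ["EITBOK: information management"]))
repo.add(Artifact("LMS integration (target)", "Systems", ["ITBOK: systems integration"], current_view=False))

print([a.name for a in repo.by_level("Business")])
print([a.name for a in repo.future_view()])
```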


3 Results

It is a reality that the advancement of information technology has revolutionized the business practices and strategies of entire industries [20]. Due to the advancement of information and communication technologies, their levels of use and their applications are so varied across all fields of knowledge that today these technologies are essential in institutions, companies and universities [21]. It is a well-known fact that as companies grow, effective organization and management are needed within them. In this sense, the management of data and information requires applications and hardware that meet requirements such as mobility and accessibility, allow full integration with the organization's management system, and support the representation of knowledge through body of knowledge (BOK) models, in this case within information technologies (ITBOK). Likewise, the creation of new disciplines and their deepening have promoted the development of a set of knowledge concepts aimed at improving the undergraduate programs of several accredited universities.

A BOK is the collection of knowledge, processes and facts that provide a solid foundation from which continuous improvements and innovative changes can be produced [22]. The range of management activities of a BOK is broad and addresses different aspects of business operations, for example the creation of knowledge databases, best practices, expert directories, market intelligence, etc. Based on the previously described EA methodological process, the base structure of the EA supported by ITBOK, EITBOK and BIZBOK can be created. Figure 3 shows the relationship between the BOKs and EA, which would be one of the bases for achieving business objectives at the technological level in order to improve business processes.

Fig. 3 Relationship between BOKS and EA


In recent years, the audience for and attention to business architecture have steadily increased. Business architecture provides a business-oriented abstraction of the enterprise in its ecosystem, which helps the organization in decision-making and direction-setting. Organization leaders, in particular, need to understand and assign value to IT activities in order to fully support, fund and staff IT functions.

4 Discussion

One of the main concerns of the software industry is developing the talent of its human resources, since the quality and innovation of its products and services depend largely on the knowledge, skills and talent of its software engineers [23]. The knowledge already exists; the objective is to establish a consensus on the subset of core knowledge that characterizes the discipline of information technology engineering, whose importance is aligned with its respective body of knowledge [24]. Regarding the levels of knowledge in a BOK, they define the amount of knowledge that will be offered within a specific level of an educational program [25]. BOKs have a specific structure according to the area of engineering or science; in this particular case, they are aligned with the business architecture and its impact on business process improvement.

In [26], EA is shown as a method for promoting an IT architecture that establishes consistency between corporate business and IT strategies, and it has been applied mostly in global corporations. According to [27], Enterprise Architecture (EA) as part of an organization's processes is an important milestone in reaching higher maturity levels, because it drives a long-term alignment of the IT and business dimensions; for this reason, [28] explores some important questions related to the introduction of EA inside very small enterprises or entities (VSE), referring to the very common situation where the IT department has limited resources, possibly even in a larger organization. According to [29], an adaptive enterprise architecture capability plays an important role in enabling complex enterprise transformations; in the same way, [30] develops and presents an adaptive enterprise service system (AESS) conceptual model, which is part of The Gill Framework for Adaptive Enterprise Service Systems, but does not consider the BOK aspect as a base to improve the EA. [31] describes a new metamodel-based approach for integrating Internet of Things architectural objects, which are semi-automatically federated into a holistic Digital Enterprise Architecture environment in which the skills of its professionals were considered, similar to this proposal. In [32], the BITAM-SOA Framework and Schematic advance both business-IT alignment and software architecture analysis techniques supporting the engineering of enterprise-wide service-oriented systems, that is, service engineering. In the same context, [33] introduces BITAM (Business IT Alignment Method), a process that describes a set of twelve steps for managing, detecting and correcting misalignment in EA. [34] presents key AWB innovations and discusses how their design was motivated by insights into EA. [35] focuses on the analysis of the relationship between quality management, knowledge based on Bodies of Knowledge (BOK) and innovation in SMEs in the manufacturing industry, but only considers one general BOK, in contrast with this proposal, where 3 BOKs were taken into account. Finally, [36] shows how to improve agility through enterprise architecture management, with the mediating role of aligning business and IT as a base to improve business processes in IT organizations.

5 Conclusions

The BOK provides the basis for curriculum development and maintenance and supports professional development and any current and future certification schemes. It also promotes integration and connections with related disciplines and the application to business process improvement.

The concept of EA must then be understood as a discipline that provides concepts, models and instruments for organizations to face the challenges of articulating strategic areas and business processes with IT areas, making it possible to generate greater value and to improve performance, communication and integration in companies, which ultimately leads to the creation of competitive advantage through effective support for the fulfillment of the established strategies and objectives.

It is important to consider that the architectural components are active elements within the company's business and technology operating environment; these include strategic objectives and initiatives related to IT, supply chains, information systems, software applications, knowledge warehouses, databases, websites, voice, data and video networks, and security systems, which, through adequate levels of trust and appropriate strategies, allow the organization to meet its objectives.

The proposed framework allows us to understand the relationship between the bodies of knowledge and the process of implementing a business architecture in the context of process improvement, in order to adequately meet organizational objectives.

References

1. Berisha-Shaqiri A, Berisha-Namani M (2015) Information technology and the digital economy. Mediterran J Soc Sci 6(6):78. Retrieved from https://www.mcser.org/journal/index.php/mjss/article/view/7915/7580
2. de Wet W, Koekemoer E, Nel J (2016) Exploring the impact of information and communication technology on employees' work and personal lives. SA J Indust Psychol 42(1):11. https://doi.org/10.4102/sajip.v42i1.1330
3. Conde A, Larranaga M, Arruarte A, Elorriaga JA (2019) A combined approach for eliciting relationships for educational ontologies using general-purpose knowledge bases. IEEE Access 7:48339–48355. https://doi.org/10.1109/ACCESS.2019.2910079
4. Gøtze J (2018) The EA3 Cube Approach. https://eapad.dk/ea3-cube/overview/. Accessed 18 March 2020


5. Agresti WW (2008) An IT body of knowledge: the key to an emerging profession. IT Professional 10(6):18–22. https://doi.org/10.1109/MITP.2008.115
6. Miller KW, Voas J (2008) Computer scientist, software engineer, or IT professional: which do you think you are? IT Professional 10(4):4–6. https://doi.org/10.1109/MITP.2008.64
7. Benchmark Consulting Canada (2014) A Guide to the Business Architecture Body of Knowledge® (BIZBOK® Guide). https://community.biz-architect.com/books/guide-business-architecture-body-knowledgebizbok-guide/. Accessed 18 March 2020
8. Walrad C (2017) What is the Enterprise IT BOK? http://eitbokwiki.org/What_is_the_Enterprise_IT_BOK%3F. Accessed 18 March 2020
9. Rentes VC, de Pádua SID, Coelho EB, Cintra MACT, Ilana GGF, Rozenfeld H (2019) Implementation of a strategic planning process oriented towards promoting business process management (BPM) at a clinical research center (CRC). Bus Process Manag J 25(4):707–737. https://doi.org/10.1108/BPMJ-08-2016-0169
10. Kurniawan NB, Suhardi (2013) Enterprise architecture design for ensuring strategic business IT alignment (integrating SAMM with TOGAF 9.1). Paper presented at the proceedings of the 2013 joint international conference on rural information and communication technology and electric-vehicle technology, rICT and ICEV-T 2013. https://doi.org/10.1109/rICT-ICeVT.2013.6741505
11. Hans RT, Mnkandla E (2019) A framework for improving the recognition of project teams as key stakeholders in information and communication technology projects. Int J Project Organ Manage 11(3):199–226. https://doi.org/10.1504/IJPOM.2019.102941
12. Buckl S, Matthes F, Schulz C, Schweda CM (2010) Exemplifying a framework for interrelating enterprise architecture concerns. In: Sicilia MA, Kop C, Sartori F (eds) Ontology, conceptualization and epistemology for information systems, software engineering and service science. Lecture Notes in Business Information Processing, vol 62, pp 33–46. Springer, Berlin/Heidelberg, Germany. https://doi.org/10.1007/978-3-642-16496-5_3
13. Shepherd DA, Wennberg K, Suddaby R, Wiklund J (2019) What are we explaining? A review and agenda on initiating, engaging, performing, and contextualizing entrepreneurship. J Manag 45(1):159–196. https://doi.org/10.1177/0149206318799443
14. Bon J (2008) Fundamentos De La Gestión De Servicios De TI Basada En ITIL V3. Editorial Van Haren Publishing, Holanda
15. Kryvinska N, Bickel L (2020) Scenario-based analysis of IT enterprises servitization as a part of digital transformation of modern economy. Appl Sci (Switzerland) 10(3). https://doi.org/10.3390/app10031076
16. Bernard S (2012) An introduction to enterprise architecture (3rd ed). AuthorHouse, Bloomington, IL. ISBN 978-1-4772-5800-2 (sc), 978-1-4772-5801-9 (e)
17. Cabrera A, Quezada P (2015) Texto-Guía: Gestión de Tecnologías de Información. Universidad Técnica Particular de Loja, Ecuador
18. Kearns GS, Sabherwal R (2006) Strategic alignment between business and information technology: a knowledge-based view of behaviors, outcome, and consequences. J Manag Inf Syst 23(3):129–162. https://doi.org/10.2753/MIS0742-1222230306
19. Alarifi A, Zarour M, Alomar N, Alshaikh Z, Alsaleh M (2016) SECDEP: software engineering curricula development and evaluation process using SWEBOK. Inf Softw Technol 74:114–126. https://doi.org/10.1016/j.infsof.2016.01.013
20. ISO/IEC/IEEE International Standard - Software, systems and enterprise (2019) Architecture processes. In: ISO/IEC/IEEE 42020:2019(E), pp 1–126
21. Tyas Darmaningrat EW, Muqtadiroh FA, Bukit TA (2019) Communication management plan of ERP implementation program: a case study of PTPN XI. Paper presented at the Procedia Computer Science, vol 161, pp 359–366. https://doi.org/10.1016/j.procs.2019.11.134
22. Wongwuttiwat J, Buraphadeja V, Tantontrakul T (2020) A case study of blended e-learning in Thailand. Interactive Technology and Smart Education, vol ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ITSE-10-2019-0068
23. Thompson R, Compeau D, Higgins C (2006) Intentions to use information technologies: an integrative model. J Organ End User Comput 18(3):25–46. https://doi.org/10.4018/joeuc.2006070102


24. Bourque P, Dupuis R (2014) Guide to the software engineering body of knowledge, 2014 version, SWEBOK V3.0. https://www.computer.org/education/bodies-of-knowledge/software-engineering. Accessed 18 March 2020
25. Penzenstadler B, Fernandez DM, Richardson D, Callele D, Wnuk K (2013) The requirements engineering body of knowledge (REBoK). Paper presented at the 2013 21st IEEE international requirements engineering conference, RE 2013 - Proceedings, pp 377–379. https://doi.org/10.1109/RE.2013.6636758
26. Biffl S, Kalinowski M, Rabiser R, Ekaputra F, Winkler D (2014) Systematic knowledge engineering: building bodies of knowledge from published research. Int J Software Eng Knowl Eng 24(10):1533–1571. https://doi.org/10.1142/S021819401440018X
27. Taguchi K, Nishihara H, Aoki T, Kumeno F, Hayamizu K, Shinozaki K (2013) Building a body of knowledge on model checking for software development. Paper presented at the proceedings - international computer software and applications conference, pp 784–789. https://doi.org/10.1109/COMPSAC.2013.129
28. Masuda Y, Viswanathan M (2019) Enterprise Architecture for Global Companies in a Digital IT Era: Adaptive Integrated Digital Architecture Framework (AIDAF). Springer Singapore, Singapore. https://doi.org/10.1007/978-981-13-1083-6
29. Ponsard C, Majchrowski A (2015) Driving the adoption of enterprise architecture inside small companies - lessons learnt from a long term case study. In: Proceedings of the 17th international conference on enterprise information systems - Volume 3: ICEIS, ISBN 978-989-758-098-7, pp 334–339. https://doi.org/10.5220/0005464903340339
30. Gill AQ (2014) Applying agility and living service systems thinking to enterprise architecture. Int J Intell Inf Technol 10(1):1–15. https://doi.org/10.4018/ijiit.2014010101
31. Zimmermann A, Schmidt R, Sandkuhl K, Wißotzki M, Jugel D, Möhring M (2015) Digital enterprise architecture-transformation for the internet of things. Paper presented at the proceedings of the 2015 IEEE 19th international enterprise distributed object computing conference workshops and demonstrations, EDOCW 2015, pp 130–138. https://doi.org/10.1109/EDOCW.2015.16
32. Chen H, Kazman R, Perry O (2010) From software architecture analysis to service engineering: an empirical study of methodology development for enterprise SOA implementation. IEEE Trans Serv Comput 3(2):145–160. https://doi.org/10.1109/TSC.2010.21
33. Chen H, Kazman R, Garg A (2005) BITAM: an engineering-principled method for managing misalignments between business and IT architectures. Sci Comput Program 57(1):5–26. https://doi.org/10.1016/j.scico.2004.10.002
34. Abrams S, et al (2006) Architectural thinking and modeling with the architects' workbench. IBM Syst J 45(3):481–500. https://doi.org/10.1147/sj.453.0481
35. Suárez-Morales L, Quezada-Sarmiento PA, Guaigua-Vizcaino ME, Navas-Alcivar SJ, Rosero-Bustos M (2019) The relational marketing and confidence like strategies of the entrepreneurship. Paper presented at the Iberian conference on information systems and technologies, CISTI, 2019-June. https://doi.org/10.23919/CISTI.2019.8760976
36. Maurice P, van de Wetering R, Kusters R (2020) Improving agility through enterprise architecture management: the mediating role of aligning business and IT

Rating Pre-writing Skills in Ecuadorian Children: A Preliminary Study Based on Transfer Learning, Hyperparameter Tuning, and Deep Learning

Adolfo Jara-Gavilanes, Romel Ávila-Faicán, Vladimir Robles-Bykbaev, and Luis Serpa-Andrade

Abstract Pre-writing skills are a set of skills that have a great impact on children's self-esteem and academic achievement, and their lack is a significant problem in children's development. Educators and therapists do not have much time to detect children lacking these skills at an early stage. To deal with this problem, we present a system that helps evaluate drawings of geometric shapes (circle, triangle, and square) drawn by children. To reach this goal, the system is divided into 4 phases: first, the images are divided into 2 categories; second, the dataset is loaded and data augmentation is applied; third, the following Convolutional Neural Networks are applied: a CNN with hyperparameter tuning, VGG16, ResNet50 and InceptionV3, fed with the "UPSWriting-Skills" dataset; fourth, results are shown and the best models are selected to present their final hyperparameters. Finally, this system could be the base for new experiments with other Convolutional Neural Networks and other techniques to improve the accuracy of the system.

Keywords Pre-writing skills · Education · CNN · VGG16 · ResNet50 · InceptionV3

A. Jara-Gavilanes · R. Ávila-Faicán · V. Robles-Bykbaev (B) · L. Serpa-Andrade GI-IATa, Cátedra UNESCO Tecnologías de apoyo para la Inclusión Educativa, Universidad Politécnica Salesiana, Cuenca, Ecuador e-mail: [email protected]
A. Jara-Gavilanes e-mail: [email protected]
R. Ávila-Faicán e-mail: [email protected]
L. Serpa-Andrade e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_46

1 Introduction

Pre-writing skills are very important to acquire in early childhood, yet they are often left unnoticed at the beginning of the writing process [1]. Even though it can be seen as irrelevant to practice this skill in early childhood, it is vital to practice and develop these skills to prevent two main problems, since this skill is linked to academic achievement and self-esteem [11]. The lack of acquisition of this skill affects between 10% and 20% of children [8]. Pre-writing refers to the strokes, lines, and scribbles that the child draws before learning to write letters, words, etc. [16]. Although pre-writing sounds like an easy task involving only a pencil and hand movements, it relies on many other underlying skills; bilateral and visual-motor integration, fine motor control, in-hand manipulation, visual perception and sensory awareness of the fingers, among others, are the most commonly identified ones [11]. Identifying children lacking these skills can be complex and time consuming for educators and therapists, who could commit that time to other tasks. Therefore it is necessary to create new tools, with the aid of artificial intelligence and machine learning, to help detect the lack of skills that children may present.

In this paper we introduce a method based on different convolutional neural networks to reach two goals: predicting which figure was drawn by the child, and the score of the figure, with the aim of helping educators and therapists detect the lack of acquisition of pre-writing skills. The method consists of 4 steps. First, the images drawn by the children are classified into 2 categories: based on shape (circle, triangle, square) and based on score (0, 1, 2). Second, a data augmentation technique is applied with the goal of obtaining better accuracy. Third, a model with hyperparameter tuning and 3 pretrained models are trained. Finally, the results are discussed and the best models are chosen and their hyperparameters are presented. To achieve this goal the "UPSWriting-Skills" dataset is used.

2 Related Work

In [16] the authors introduced a system for classifying and rating pre-writing skills in children with and without disabilities. It classified and rated 3 geometric shapes: circle, triangle, and square. The dataset they used contains 358 images. They extracted the shape features using Hu moments and the shape signature to feed an ANN. For the first task, the ANN achieved 98% accuracy, and for the second task it reached 75% accuracy [16]. In [3] the authors propose a new technique to classify different types of geometrical shapes in a photograph: circle, triangle, rectangle and square. To achieve this classification, they use the following techniques: Random Forest, Decision Trees and Support Vector Machines (SVM). They used a dataset that contains 250 images. Random Forest achieved the best results with 96% accuracy [3]. In [4] the researchers conducted a study to classify geometric forms in mosaics, such as triangles, squares, circles, octagons and leaves. To reach this goal, the authors made use of different deep learning models: a CNN, VGG19, ResNet50, MobileNetV2, and InceptionV3. They worked with a small dataset of 407 images. The best model was their CNN, which reached 97% accuracy [4].


In [15] the authors developed a system to measure pre-writing skills by classifying 3 geometrical figures: circle, triangle and square. They worked with the following dataset: 118 805 circle images, 120 538 square images, and 120 499 triangle images. They designed a CNN that achieved 98.77% accuracy [15]. In [2] the researchers developed a technique to classify 4 geometrical shapes: circle, triangle, square and rectangle. They extracted the following features from the shapes to feed the classifiers: area, perimeter, compactness, length, rectangularity, and roundness. They made use of the following algorithms: Levenberg-Marquardt, quasi-Newton backpropagation, scaled conjugate gradient, resilient backpropagation, and conjugate gradient backpropagation. The model that yielded the best results was Levenberg-Marquardt, with a root-mean-square error of 0.9% and a mean error of 0.004% [2].

3 Methodology

This work compares 4 different ANNs: a CNN (Convolutional Neural Network) with hyperparameter tuning, and 3 pretrained models (VGG16, ResNet50 and InceptionV3). The goal is to compare results for 2 tasks: classifying a geometrical figure and grading the geometrical figure. The architecture and parameters of each model are explained in detail later on. The dataset contains 358 images of 3 geometrical shapes: circle, triangle and square, with 114, 129, and 115 samples, respectively. The distribution of images according to the score is as follows: 14 with a score of 0, 117 with a score of 1, and 230 with a score of 2. The goal is to undertake the following 2 tasks: to classify and to rate the 3 geometrical figures: circle, triangle and square. Table 1 provides examples of the figures and the scores assigned to different drawings. The dataset used for this research contained 358 images drawn by teenagers and infants with and without disabilities related to the acquisition of pre-writing skills. The process is explained below (a loading sketch follows this list).

– First, the images were separated into different categories based on their geometrical figure (circle, triangle, square) and their score (0, 1, 2).
– Second, the data was loaded and a data augmentation technique was applied to generate more images, so that the dataset was bigger for training the models.
– Third, the computer vision models were trained with the goal of comparing results. The 4 models used for this step were mentioned previously. Transfer learning was applied to the pretrained models.
– Fourth, the results are explained and the final parameters are presented.
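As a purely illustrative companion to the first two steps, the snippet below shows one way a folder-per-class organization of the images could be loaded with Keras; the directory name, the 80/20 split and the batch size are assumptions for the example, not details reported by the authors.

```python
import tensorflow as tf

IMG_SIZE = (100, 100)  # input size used later for the models (see Sect. 4.1)

# Hypothetical layout: dataset_shapes/circle, dataset_shapes/square, dataset_shapes/triangle
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset_shapes",
    validation_split=0.2,   # assumed split, not taken from the paper
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset_shapes",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
)
print(train_ds.class_names)  # e.g. ['circle', 'square', 'triangle']
```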


Table 1 Examples of figures and scores assigned to some draws

Figure    Score                             Explanation
Circle    0: Does not reach the skill       The figure is not circular at all
          1: Is about to reach the skill    The figure is oval
          2: Skill reached                  The circle is closed
Square    0: Does not reach the skill       The sides …
          1: Is about to reach the skill    The …
          2: Skill reached                  The angles are between 80 and 100 degrees, it has … sides
Triangle  0: Does not reach the skill       The figure does not contain 3 vertexes
          1: Is about to reach the skill    The vertex is not aligned to the center of the base line
          2: Skill reached                  The vertex is close to the center of the base line

3.1 Transfer Learning

This technique solves the problem of training and building a model from scratch, which could require a vast amount of data. A model already built and trained for one task transfers its knowledge to solve another task [14], with the goal of increasing accuracy, decreasing the loss value or reducing training time. Most of the pretrained models are trained on the ImageNet dataset, which contains 15 million labeled images. The rationale of transfer learning is that, even though the images from ImageNet differ from those of other datasets, the low-level features of an image (lines, edges) are the same for most image analysis tasks [10].
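For illustration only, the following sketch shows the usual freeze-the-base recipe with one of the backbones used in this work; the paper does not state whether the pretrained bases were frozen or fine-tuned, so freezing, the head size and the input shape of 100 × 100 × 3 (taken from Sect. 4.1) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Reuse ImageNet weights as a frozen feature extractor (inputs assumed already
# scaled to a suitable range by an earlier preprocessing step).
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(100, 100, 3), pooling="avg")
base.trainable = False  # keep the transferred low-level features (lines, edges) fixed

model = models.Sequential([
    base,
    layers.Dense(3, activation="softmax"),  # new head for the 3 shape classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```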


3.2 Data Augmentation

The goal of this technique is to increase the training accuracy of a model by enlarging the labeled data. This technique generates artificial data from an original dataset [10]. It can generate new images by zooming in or out, rotating, shearing, or applying some noise. Other transformations modify the image appearance: histogram equalization, enhancing contrast or brightness, blurring, sharpening or white balancing [9].
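A minimal sketch of this kind of augmentation with Keras preprocessing layers is shown below; the specific transformation ranges are illustrative choices, not the values used by the authors, and (as discussed later in the limitations) overly aggressive values can distort simple line drawings.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Mild geometric transformations; strong values can turn a clean stroke into noise.
augmenter = tf.keras.Sequential([
    layers.RandomRotation(0.05),                   # up to roughly 18 degrees
    layers.RandomZoom(height_factor=(-0.1, 0.1)),  # slight zoom in or out
    layers.RandomTranslation(0.05, 0.05),          # small shifts
])

def augment(image, label):
    return augmenter(image, training=True), label

# train_ds is a tf.data.Dataset of (image, label) batches, e.g. from Sect. 3:
# train_ds = train_ds.map(augment)
```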

3.3 Hyperparameter Tuning

Hyperparameter tuning refers to the strategies that try to maximize accuracy or validation accuracy, or to minimize the loss or validation loss of a model. This strategy attempts to find, among the configurations considered, the best hyperparameters (such as the learning rate, the optimizer, the number of neurons or layers, and so on) that a model should use to produce good results. Different methods exist: random search, Bayesian optimization, or iterated F-racing [12]. Having explained these three techniques, we now describe each model and its hyperparameters. The first model is selected based on the effectiveness of applying hyperparameter tuning, and the other 3 models are chosen based on their performance on the ImageNet dataset.
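As an example of what such a search could look like, the sketch below uses the Keras Tuner library for a random search over a small hypothetical space (filter count, dense units, learning rate); the search space, trial budget and model skeleton are assumptions, not the configuration used in this work.

```python
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

def build_model(hp):
    # Small CNN skeleton whose hyperparameters are sampled by the tuner
    model = tf.keras.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=(100, 100, 3)),
        layers.Conv2D(hp.Int("filters", 16, 64, step=16), 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(hp.Int("units", 32, 128, step=32), activation="relu"),
        layers.Dense(3, activation="softmax"),
    ])
    lr = hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10)
# tuner.search(train_ds, validation_data=val_ds, epochs=20)
# best_model = tuner.get_best_models(1)[0]
```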

3.4 CNN (Convolutional Neural Networks)

A CNN is a type of deep learning model used to classify images by recognising low- or high-level patterns [19]. It is basically composed of 3 types of layers or blocks: convolution, pooling and fully connected layers. Convolution and pooling layers are in charge of extracting features and patterns from the input (image), and the fully connected layer has the task of mapping those features and producing an output, which is the classification result. Hyperparameter tuning was applied in this case to obtain the best hyperparameters for the model.

3.5 VGG16

VGG16 is a pretrained CNN developed by the Visual Geometry Group at the University of Oxford. It contains 16 weight layers. All the convolution layers use a kernel size of 3 × 3, but they differ in the number of filters: Conv1 has 64, Conv2 has 128, Conv3 has 256, and Conv4 and Conv5 have 512. The maxpooling layers have a size of 2 × 2. Then comes the fully connected part, which comprises 3 dense layers [7]. This CNN was pretrained on the ImageNet dataset, on which it achieved 71.3% accuracy.

3.6 ResNet50

ResNet50 is another pretrained CNN. The difference with respect to VGG16 is a characteristic called "skip connection", which consists in skipping some convolutional layers of the model [13]. It contains 2 different modules: the Identity Block, whose shortcut has no convolution layer, which means that the input has the same size as the output; and the Convolution Block, which has a convolution layer on the shortcut. Both blocks include a 1 × 1 convolution layer, so the number of parameters is reduced without decreasing the performance of the network; this technique is called "bottleneck design" [6]. In sum, this CNN contains 48 convolution layers, 1 maxpooling layer and 1 average pooling layer [6]. It was trained on the ImageNet dataset and reached 74.9% accuracy [20].
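To make the bottleneck idea concrete, here is a simplified, generic sketch of a ResNet-style identity block in the Keras functional API; batch normalization is omitted and the filter split is illustrative, so this is not the exact block configuration used inside ResNet50.

```python
from tensorflow.keras import layers

def identity_block(x, filters):
    """Bottleneck identity block: 1x1 -> 3x3 -> 1x1 convolutions plus a skip connection.
    Assumes the input tensor x already has `filters` channels, so the addition matches."""
    shortcut = x
    y = layers.Conv2D(filters // 4, 1, activation="relu")(x)           # reduce channels
    y = layers.Conv2D(filters // 4, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, 1)(y)                                    # restore channels
    y = layers.Add()([shortcut, y])   # skip connection: input added to the block output
    return layers.Activation("relu")(y)
```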

3.7 InceptionV3

InceptionV3 is another pretrained CNN. It has 42 layers, and its key difference is that some convolution and maxpooling computations are performed in parallel and then combined [18]. This type of computation is called an "Inception" module, and the network is made up of 11 Inception modules. This CNN was pretrained on the ImageNet dataset and achieved 77.9% accuracy [5].

4 Experiment and Preliminary Results

4.1 Geometric Shapes Classification

In this segment, after the 4 models were trained and tested, the results obtained for classifying the geometrical shape are presented. As can be seen in Table 2, the best model is VGG16, which achieved 100% accuracy. In summary, this model contains an entry layer inserted before the VGG16 layers: a Rescaling layer which normalizes the image and sets the input shape, which is (100, 100, 3). After the VGG16 layers, a flatten layer is inserted. Finally, a dense layer with 3 neurons and softmax activation is introduced. The final model uses the Adam optimizer and the SparseCategoricalCrossentropy loss function. Figure 1 compares loss against validation loss, and accuracy against validation accuracy. To train the model, an early stop was placed to prevent the validation loss from getting higher.

Table 2 Results obtained from the 4 models for classifying the geometric shape

Model         Loss     Accuracy   Validation Loss   Validation Accuracy
CNN           0.2297   0.9199     0.2079            0.9155
VGG16         0.0074   1          0.0214            1
ResNet50      0.9517   0.5521     1.2384            0.3944
InceptionV3   0.853    0.9688     0.199             0.8732

Fig. 1 Graphs comparing loss and validation loss (left), and train accuracy and validation accuracy (right) from the VGG16

Table 3 Results obtained from the 4 models for scoring the geometric shape

Model         Loss     Accuracy   Validation Loss   Validation Accuracy
CNN           0.7306   0.6701     0.6981            0.6571
VGG16         0.7345   0.6285     0.9207            0.6429
ResNet50      0.8214   0.5972     0.743             0.6429
InceptionV3   0.727    0.6493     1.2156            0.6429
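A sketch of the classification model as described above (a Rescaling entry layer with input shape (100, 100, 3), the VGG16 base, a flatten layer and a 3-neuron softmax head, compiled with Adam and sparse categorical cross-entropy, and trained with early stopping) could look as follows; whether the VGG16 base was frozen during training is not stated in the paper, so freezing it here is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                    input_shape=(100, 100, 3))
base.trainable = False  # assumption: reuse the ImageNet features as-is

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(100, 100, 3)),  # entry layer normalizing the image
    base,
    layers.Flatten(),
    layers.Dense(3, activation="softmax"),  # circle / square / triangle
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=["accuracy"])

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
# history = model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```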

4.2 Geometric Shapes Rating

In this segment, the results that the 4 models yielded for rating the geometric shapes are shown. As shown in Table 3, the model that reached the highest validation accuracy is the CNN, with 65.71% accuracy. This model made use of hyperparameter tuning to select the best hyperparameters. Figure 3 shows the architecture of the CNN. The convolution layers use a kernel size of 3 × 3, the number of filters is shown in Fig. 3, and the ReLU function was used for activation. The maxpooling layers use a kernel size of 2 × 2. After tuning, the selected hyperparameters are: Adam as the optimizer with a learning rate of 0.001, and SparseCategoricalCrossentropy as the loss function. To train the model, an early stop was placed to prevent the validation loss from getting higher. Furthermore, Fig. 2 presents a comparison between loss and validation loss, and between accuracy and validation accuracy.

Fig. 2 Graphs comparing loss and validation loss (left), and train accuracy and validation accuracy (right) from the CNN

Fig. 3 CNN's architecture obtained by hyperparameter tuning
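Since the exact filter counts selected by the tuner appear only in Fig. 3, the sketch below uses hypothetical filter and unit counts; the 3 × 3 convolutions, 2 × 2 max pooling, ReLU activations, Adam with a 0.001 learning rate and sparse categorical cross-entropy follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(100, 100, 3)),
    layers.Conv2D(32, 3, activation="relu"),   # hypothetical filter count
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),   # hypothetical filter count
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),       # hypothetical head size
    layers.Dense(3, activation="softmax"),     # scores 0, 1, 2
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```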


5 Limitations

This system, however, has a potential limitation: the size of the dataset. As described, the dataset contains only 358 images, and they are not equally distributed among the score categories (0, 1, 2). This problem is reflected in the results obtained from the models, whose accuracy is low because the training dataset was too small. A data augmentation technique was applied, but it also had some issues: some of the parameters of this technique distorted the original image and produced noisy output, hence the models yielded poor results. There are 2 ways to solve this issue. First, the open Quick, Draw! dataset can be used. This dataset contains around 50 million drawings and 345 categories. Among these categories are the 3 geometric shapes used in this work, and each of them contains around 120 000 drawings. After some of these drawings are extracted, the help of an educator or therapist is required to rate the drawings based on the scale shown in Table 1. This also avoids having to gather more drawings, which would require a vast amount of time. The other alternative is to implement a Generative Adversarial Network (GAN). A GAN is a generative model, which means that it uses unsupervised learning to automatically learn the patterns of the input data so that the model can produce new samples [17]. Thus the size of the dataset increases, with the goal of obtaining better results.

6 Conclusions

After applying the different Convolutional Neural Networks to both tasks proposed in this paper, the system reached 1 of the 2 proposed goals: classifying the geometric shapes. First, the images were divided into their respective categories. This showed the imbalance that existed between the categories for rating the geometric shapes. Besides, a data augmentation technique was applied to increase the size of the dataset, but there was a problem with it: most of the parameters produced noisy images, hence the models yielded bad results. The parameters for this technique were carefully selected to prevent this outcome; consequently, the results improved slightly, but the dataset did not grow as much as needed to obtain great outcomes. Then the models were applied. For the first task, classifying the geometric shapes, VGG16 yielded the best result with 100% accuracy, surpassing all the previous works mentioned in the related work section. With respect to the second task, rating the geometric shapes, the CNN with hyperparameter tuning obtained the best result with 65.71% accuracy. The architecture of both models was explained and their loss and accuracy curves were shown. Moreover, this system is more accurate than the related works for the first task, but it reached lower accuracy for the second task with respect to [15] and [16], which are the ones that try to evaluate the figure.


As future work, we propose the following lines:
– To increase the size of the dataset by using Quick, Draw! or a GAN.
– To apply other pretrained models like EfficientNet and its different versions, InceptionResNetV2, NASNetLarge, among others.

References

1. Abdullah SNA, Hashim H, Mahmud MS (2018) Using mobile application as an alternative to pre-writing strategy. Int J Eng Technol 7(4.21):143–147
2. Ayyıldız M, Çetinkaya K (2017) Predictive modeling of geometric shapes of different objects using image processing and an artificial neural network. Proc Inst Mech Eng Part E: J Process Mech Eng 231(6):1206–1216
3. Debnath S, Changder S (2018) Automatic detection of regular geometrical shapes in photograph using machine learning approach. In: 2018 10th International conference on advanced computing, ICoAC 2018, pp 1–6
4. Ghosh M, Obaidullah SM, Gherardini F, Zdimalova M (2021) Classification of geometric forms in mosaics using deep neural network. J Imag 7(8):149
5. Sam SM, Kamardin K, Sjarif NNA, Mohamed N (2019) Offline signature verification using deep learning convolutional neural network (CNN) architectures GoogLeNet inception-v1 and inception-v3. Procedia Comput Sci 161:475–483. https://www.sciencedirect.com/science/article/pii/S1877050919318587. The Fifth Information Systems International Conference, 23–24 July 2019, Surabaya, Indonesia
6. Ji Q, Huang J, He W, Sun Y (2019) Optimized deep convolutional neural networks for identification of macular diseases from optical coherence tomography images. Algorithms 12(3). https://www.mdpi.com/1999-4893/12/3/51
7. Jiang ZP, Liu YY, Shao ZE, Huang KW (2021) An improved VGG16 model for pneumonia image classification. Appl Sci 11(23). https://www.mdpi.com/2076-3417/11/23/11185
8. Kadar M, Wan Yunus F, Tan E, Chai SC, Razaob@Razab NA, Mohamat Kasim DH (2020) A systematic review of occupational therapy intervention for handwriting skills in 4–6 year old children. Aust Occup Therapy J 67(1):3–12
9. Mikolajczyk A, Grochowski M (2018) Data augmentation for improving deep learning in image classification problem. In: 2018 International Interdisciplinary PhD Workshop (IIPhDW), pp 117–122
10. Morid MA, Borjali A, Del Fiol G (2021) A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 128:104115. https://www.sciencedirect.com/science/article/pii/S0010482520304467
11. Morris C, McLaughlin T, Derby KM, McKensie M (2012) The differential effects of using handwriting without tears® and mat man materials to teach seven preschoolers prewriting skills using the draw a person with sixteen specific body parts. Acad Res Int 2(1):590
12. Probst P, Boulesteix AL, Bischl B (2019) Tunability: importance of hyperparameters of machine learning algorithms. J Mach Learn Res 20(1):1934–1965
13. Rezende E, Ruppert G, Carvalho T, Ramos F, de Geus P (2017) Malicious software classification using transfer learning of ResNet-50 deep neural network. In: 2017 16th IEEE International conference on machine learning and applications (ICMLA), pp 1011–1014
14. Ribani R, Marengoni M (2019) A survey of transfer learning for convolutional neural networks. In: 2019 32nd SIBGRAPI conference on graphics, patterns and images tutorials (SIBGRAPI-T), pp 47–57


15. Serpa-Andrade L, Perez-Muñoz A (2021) Application of graphological coincidence applied in the field of speech therapy in children with motor difficulties. In: Kalra J, Lightner NJ, Taiar R (eds) AHFE 2021, vol 263. LNNS. Springer, Cham, pp 362–366. https://doi.org/10.1007/978-3-030-80744-3_45
16. Serpa-Andrade LJ, Pazos-Arias JJ, López-Nores M, Robles-Bykbaev VE (2021) Design, implementation and evaluation of a support system for educators and therapists to rate the acquisition of pre-writing skills. IEEE Access 9:77920–77929
17. Singh NK, Raza K (2021) Medical image generation using generative adversarial networks: a review. In: Patgiri R, Biswas A, Roy P (eds) Health Informatics: A Computational Perspective in Healthcare, vol 932. SCI. Springer, Singapore, pp 77–96. https://doi.org/10.1007/978-981-15-9735-0_5
18. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), June 2016
19. Yamashita R, Nishio M, Do RKG, Togashi K (2018) Convolutional neural networks: an overview and application in radiology. Insights Imag 9(4):611–629
20. You Y, Zhang Z, Hsieh CJ, Demmel J, Keutzer K (2018) ImageNet training in minutes. In: Proceedings of the 47th international conference on parallel processing. Association for Computing Machinery, New York. https://doi.org/10.1145/3225058.3225069

Geographic Patterns of Academic Dropout and Socioeconomic Characteristics Using Clustering

Vanessa Maribel Choque-Soto, Victor Dario Sosa-Jauregui, and Waldo Ibarra

Abstract Academic dropout is a fundamental problem at all educational levels. Several studies have been conducted to identify the characteristics that motivate dropout. Few studies have found patterns related to location. The objective of this article is to show the geographic patterns of academic dropout of undergraduate university students and to analyze them in order to improve decision making in Peruvian universities. The methodology for this purpose is KDD (Knowledge Data Discovery), commonly known as data mining, which includes the Clustering technique and the application of the K-means algorithm, one of the most popular and efficient algorithms. The results show outstanding geographical patterns such as the university site and district to which dropout students belong. The results help to identify the university campuses with the highest dropout rates and the districts from which the dropouts come. The latter is analyzed from a socioeconomic perspective.

Keywords Clustering · Geographic dropout patterns · Data mining · Educational data mining · Student dropout

1 Introduction Academic dropout affects universities in terms of student capacity and economically. This problem has been studied from different perspectives, identifying sociodemographic and academic factors, institution-specific variables, student trajectory, among others [1]. V. M. Choque-Soto (B) · V. D. Sosa-Jauregui · W. Ibarra Department of Informatics, Universidad Nacional de San Antonio Abad del Cusco, Cusco, Cusco, Perú e-mail: [email protected] V. D. Sosa-Jauregui e-mail: [email protected] W. Ibarra e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_47


To achieve new insights, it is important to involve different tools. Data mining helps to identify previously unknown information in large volumes of data accumulated over time [2]; it therefore makes it possible to process information that is not limited to a specific population, but comes from data sets covering long periods of an organization's operation.

The authors of "Classification models for determining types of academic risk and predicting dropout in university students" suggest expanding their results by considering new variables, such as institutional and socioeconomic variables, and by working with a much larger data set [3]. This article responds to that suggestion by using clustering techniques to find geographic location patterns, which makes it possible to broaden the variables around the same problem. On the other hand, the study "Analysis of University Student Attrition using Data Mining Techniques" [4] concludes that academic results, as well as socioeconomic status, influence a student's decision to remain in their degree programme, and that by managing these variables it is possible to reduce attrition rates in the university system. The present article adds a socioeconomic analysis, based on the 2018 poverty map, around the geographic patterns found.

Clustering is the process of grouping data into clusters so that objects within a cluster have high similarity to each other but are very different from objects in other clusters; the differences are evaluated based on the values of the attributes that describe the objects [5]. The objective of the article is to show the geographic distribution of academic dropout at the district level using the clustering technique to guide decision making in universities [6].

2 Methodology 2.1 Research and Information Techniques The research covered undergraduate programs in the city of Cusco. The universe of the research corresponds to the professional programs of Systems Engineering offered in the face-to-face modality. The research techniques used were: interviews with managers, academic directors, the principal, students and teachers of the case study and of other local universities; observation of situations, phenomena and behaviors; and documentary review of curricula and transparency documents published on university web sites. The academic data set was requested in .csv format and covered 10 years of activity.


Fig. 1 KDD process. Source: Han et al. [5]

2.2 Methodology Process The methodology used was knowledge discovery from data (KDD), consisting of the stages shown in Fig. 1.

Fig. 2 Dropout model Flow


Based on KDD, Fig. 2 proposes the flow for the dropout model. The clustering data mining technique was used; within it, the K-means algorithm was chosen because of its popularity, simplicity and lower computational complexity [7]. For information processing, open-source data mining software packages (Open-Source Organization 2020) were used for the extraction, transformation and loading (ETL) process that makes up the data preparation phase [8]. The data mining software used was Weka (Waikato Environment for Knowledge Analysis) [9].
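The preparation-and-clustering step can also be approximated outside Weka. The following minimal sketch uses Python with pandas and scikit-learn, which is an assumption about tooling (the study itself used Talend and Weka); the file name and column names are hypothetical illustrations of a dropout-model export:

import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical CSV export of the dropout model (one row per dropout record).
df = pd.read_csv("dropout_model.csv")

# K-means needs numeric input, so the categorical fields are one-hot encoded first.
x = pd.get_dummies(
    df[["district", "site", "curriculum", "credits"]],
    columns=["district", "site", "curriculum"],
).astype(float)

# The number of clusters is set by the researcher; the study highlights the run with 4.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)
    df[f"cluster_k{k}"] = labels
    print(f"k={k}", df.groupby(f"cluster_k{k}").size().to_dict())

Varying the number of clusters in this way mirrors the tests reported below, where different cluster counts were tried until salient geographic groupings appeared.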

3 Results The study was conducted on information from 10 years of academic activity of the Systems Engineering program, from which an attrition model was constructed, resulting in 1451 records on which several academic, social, gender and other factors emerged. This article focuses on the presentation of the geographic patterns. Figure 3 shows the pre-processing view, which is loaded with the previously elaborated attrition model and displays the fields that the tool has identified; these can be selected according to the type of analysis required. Two geographic criteria have been selected: district and site name.

Fig. 3 Pre-processed database with geographic criteria selected


Fig. 4 Bar chart by site

Figure 4 shows the bar chart for the site, which represents the place to which the program belongs. It shows two groups, one with 1317 records corresponding to the city of Cusco (the colors indicate its districts) and the other with 134 records belonging to Puerto Maldonado. Figure 5 shows the bar chart of the district to which the students belong according to the attrition model; each bar displays the number of students per district, which is expanded in Table 1. Figure 6 shows the results of processing the dropout model with K-means: it generates a cluster with two percentages, one for each site, which confirms the counts of 1317 for Cusco and 134 for Puerto Maldonado. Figure 7 shows the district field with a single cluster, indicating the number of dropouts coming from each district, information that is expanded in Table 1. Table 1 shows the detailed results by district and Table 2 by site. Figure 8 shows the scatter plot by site; the cluster in Cusco is shown in blue, indicating a higher dropout volume at this site. Figure 9 shows the scatter plot by district, with several districts grouped together and a greater predominance of the Cusco district. Different tests were generated with the K-means algorithm, varying the number of clusters. The test with 4 clusters shows the main districts to which the dropouts belong, see Fig. 10.


Fig. 5 Bar chart by district

Table 3 summarizes the detailed results of the test with 4 clusters, showing the main districts with dropouts along with other fields. A socioeconomic analysis of the districts was made through indicators based on the Peru 2018 District Monetary Poverty Map. The districts with the highest dropout counts have average monetary poverty percentages above 7.7%, reaching up to 18% in the district of San Jeronimo; these data were taken from [10]. Figure 11 shows the bar chart of the average monetary poverty level of the three districts with the highest attrition, together with the dropout share in each of these districts, so that both indicators can be compared per district.

3.1 Discussion Regarding geographic patterns, the first result found after data mining is that student dropouts occur in greater volume at the Cusco site than at the Puerto Maldonado site. In relation to the districts, the city of Cusco has 8 districts; in the initial processing the dropouts are spread over several districts, but in the K-means test with 4 groups, three districts (San Sebastian, San Jeronimo and Cusco) appear as those with the highest dropouts.


Table 1 Results by district

District                                   Dropouts
LIMA-LIMA-LIMA                             1
CUSCO-CUSCO-WANCHAQ                        113
CUSCO-CUSCO-SANTIAGO                       73
CUSCO-CUSCO-CUSCO                          178
CUSCO-CUSCO-SAN SEBASTIAN                  120
MADRE DE DIOS-TAMBOPATA-TAMBOPATA          7
CUSCO-CUSCO-SAN JERONIMO                   92
APURIMAC-GRAU-PROGRESO                     1
CUSCO-URUBAMBA-OLLANTAYTAMBO               4
APURIMAC-COTABAMBAS-CHALLHUAHUACHO         2
CUSCO-CANAS-YANAOCA                        1
APURIMAC-ABANCAY-CURAHUASI                 1
CUSCO-URUBAMBA-URUBAMBA                    10
CUSCO-CANCHIS-SICUANI                      7
CUSCO-LA CONVENCION-SANTA TERESA           1
CUSCO-PAUCARTAMBO-CAICAY                   1
CUSCO-QUISPICANCHI-OROPESA                 4
CUSCO-LA CONVENCION-SANTA ANA              4
CUSCO-CALCA-PISAC                          3
CUSCO-QUISPICANCHI-URCOS                   3
AREQUIPA-AREQUIPA-JOSE LUIS BUSTAMANTE     1
CUSCO-ANTA-ANCAHUASI                       1
CUSCO-CALCA-CALCA                          4
CUSCO-ESPINAR-ESPINAR                      1
CUSCO-QUISPICANCHI-ANDAHUAYLILLAS          3
CUSCO-LA CONVENCION-ECHARATE               2
CUSCO-QUISPICANCHI-HUARO                   1
CUSCO-ANTA-PUCYURA                         1
CUSCO-CUSCO-POROY                          1
CUSCO-ACOMAYO-ACOS                         1
Total                                      1451

In the socioeconomic analysis according to the District Monetary Poverty Map, a similar relationship between the level of dropout and district monetary poverty is identified for the districts of San Sebastian and San Jeronimo. However, this does not occur in the district of Cusco, which, despite having a high level of dropout, has a low level of monetary poverty compared to the other districts. This analysis was carried out in order to explore the relationship between both indicators.
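This district-level comparison can be assembled with a simple table join once both sources are in tabular form. The short Python/pandas sketch below is only an illustration of that step: the dropout counts come from Table 1, while the poverty percentages are left as placeholders to be filled in from the INEI 2018 district monetary poverty map [10].

import pandas as pd

# Dropouts per district for the three districts with the highest attrition (Table 1).
dropouts = pd.DataFrame({
    "district": ["SAN SEBASTIAN", "SAN JERONIMO", "CUSCO"],
    "dropouts": [120, 92, 178],
})

# District monetary poverty (%), to be filled in from the INEI 2018 poverty map [10];
# None values are placeholders, not the published figures.
poverty = pd.DataFrame({
    "district": ["SAN SEBASTIAN", "SAN JERONIMO", "CUSCO"],
    "monetary_poverty_pct": [None, None, None],
})

merged = dropouts.merge(poverty, on="district")
merged["share_of_top3_districts_pct"] = 100 * merged["dropouts"] / merged["dropouts"].sum()
print(merged)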


Fig. 6 Results of K means algorithm

Fig. 7 District with one cluster

Table 2 Results by site

Site                 Dropouts
Cusco                1317
Puerto Maldonado     134
Total                1451


Fig. 8 Scatter plot by site

Fig. 9 Scatter plot by district

Fig. 10 K means results with 04 clusters


Table 3 Results showing districts

Field            Cluster 0                    Cluster 1            Cluster 2            Cluster 3                   Cluster 4
Sex              Male                         Male                 Male                 Male                        Female
Marital status   Single                       Single               Single               Single                      Single
District         Cusco, Cusco, San Jeronimo   Cusco, Cusco, Cusco  Cusco, Cusco, Cusco  Cusco, Cusco, Sn Sebastián  Cusco, Cusco, Cusco
Site             Cusco                        Cusco                Cusco                Cusco                       Cusco
Curricula        Plan 2013                    Plan 2005            Plan 1993            Plan 2005                   Plan 1993
Credits          52.3302                      58.3521              29.119               122.0992                    34.6
Total per field  212                          585                  353                  121                         180
Total            1451

Fig. 11 Dropouts and monetary poverty level (bar chart comparing, for the districts of San Sebastián, San Jerónimo and Cusco, the share of dropouts with the district monetary poverty level)

The results obtained will help to understand and recognize the importance of geographic location data in the problem of academic dropout in higher education. These patterns and trends will also help academic directors to improve institutional decision making with respect to dropouts. The objective of showing geographic location patterns at the district level is to identify and prioritize the areas with the highest dropout rates.


From the discovery of this knowledge, and alongside the traditional strategies that universities apply to their students, some measures can be improved, such as:

• Designing attention policies for students coming from the sites and districts with dropouts.
• Articulating preventive tutoring strategies with the municipalities of the dropout districts.
• Focusing information campaigns on the areas with the highest dropout rates.
• Articulating and directing vocational orientation campaigns in the last years of secondary education in the main schools of each dropout district.

The level of monetary poverty should also be considered as a criterion for directing social measures such as scholarships, agreements and discounts to dropout districts with low socioeconomic status. The study has certain limitations: the dataset used as a resource had fields of academic origin only, and it belongs to a single academic program. In addition, the visualization software does not associate maps of the districts of the city of Cusco, which prevented a geographic (map-based) report from being produced.

3.2 Conclusions

1. Using the academic dropout model together with the clustering technique, it was possible to discover geographic patterns in dropout behavior.
2. From a data set covering 10 years of operation of a professional program, a dropout model was constructed, resulting in 1451 dropout records.
3. Careful data cleaning and a methodical extraction, transformation and loading process make it possible to build an adequate dropout model for knowledge discovery.
4. The K-means algorithm does not establish the number of groups by itself; it is therefore important for the researcher to review the results carefully until salient patterns are found.
5. The geographic patterns identified are the location (site) of the professional program and the district to which the dropouts belong.
6. The site pattern shows a clear predominance of dropouts in Cusco, with Puerto Maldonado as the second site with the highest number of dropouts.
7. Using the K-means algorithm with 4 clusters, three districts with predominant dropouts were identified: Cusco, San Sebastian and San Jeronimo.
8. The application of clustering allowed the district data to be grouped by maximizing intraclass similarity, which generated scatter plots of the districts with the highest concentration of dropouts.
9. The socioeconomic analysis around the geographic data would be improved by having students' economic data, which would make it possible to confirm the relationship between the dropout district and the level of monetary poverty.


References
1. Alvarado-Uribe J, et al (2022) Student dataset from Tecnologico de Monterrey in Mexico to predict dropout in higher education. Data 7(9), art. no. 119. https://doi.org/10.3390/data7090119
2. Bedregal-Alpaca N, Cornejo-Aparicio V, Zárate-Valderrama J, Yanque-Churo P (2020) Classification models for determining types of academic risk and predicting dropout in university students. Int J Adv Comput Sci Appl 11(1)
3. Sosa Jauregui VD (2019) Modelo para la evaluación del perfil del ingresante universitario utilizando técnicas de Data Mining. Repositorio UNSA 1(1):133. http://repositorio.unsa.edu.pe/bitstream/handle/UNSA/11023/UPsojavd.pdf?sequence=1&isAllowed=y
4. Miranda MA, Guzmán J (2017) Análisis de la deserción de estudiantes universitarios usando técnicas de minería de datos. Formación Universitaria 10(3):61–68
5. Han J, Pei J, Tong H (2022) Data mining: concepts and techniques. Morgan Kaufmann
6. Choque Soto VM (2019) Minería de datos aplicada a la identificación de factores de deserción universitaria en programas de pre grado
7. Nanda S, Panda G (2014) A survey on nature inspired metaheuristic algorithms for partitional clustering. Swarm Evol Comput 1–18
8. Talend Open Source (2022) Talend Open Studio. https://www.talend.com/products/talend-open-studio/. Accessed 20 Oct 2022
9. Weka Project (2022). https://waikato.github.io/weka-site/index.html. Accessed 20 Oct 2022
10. Instituto Nacional de Estadística e Informática (2018) Mapa de pobreza monetaria distrital, 1st edn. INEI. https://www.inei.gob.pe/media/MenuRecursivo/publicaciones_digitales/Est/Lib1718/Libro.pdf

Implementation of Digital Resources in the Teaching and Learning of English in University Students Kevin Mario Laura-De La Cruz , Paulo César Chiri-Saravia , Haydeé Flores-Piñas , Giomar Walter Moscoso-Zegarra , María Emilia Bahamondes-Rosado , and Luis Enrique Espinoza-Villalobos

Abstract ICT may be utilized to aid in the assimilation of a second language; since persons use technology on a daily basis, it is advantageous to extend this learning to ICT; nevertheless, integration is essential for increased interest. There are many reasons why it is vital to evaluate the interaction between ICT and English learning. That is why, with the broad objective of establishing a link between ICT and English acquisition, this research used a hypothetical deductive technique, a non-experimental cross-sectional design, and a sample of 60 university students from Enrique Guzman University and the surrounding valley. The sample was given two instruments made from the variables and dimensions using Spearman’s rho statistician. It was discovered that there is a substantial association between ICT and English learning. Thus, increased usage of ICT will result in increased English language learning accomplishment for students enrolled in the second year of an English study program at a public university. K. M. Laura-De La Cruz (B) · G. W. Moscoso-Zegarra · M. E. Bahamondes-Rosado · L. E. Espinoza-Villalobos Escuela de Posgrado Newman, Av Bolognesi 987, Tacna, Peru e-mail: [email protected] G. W. Moscoso-Zegarra e-mail: [email protected] M. E. Bahamondes-Rosado e-mail: [email protected] L. E. Espinoza-Villalobos e-mail: [email protected] P. C. Chiri-Saravia · H. Flores-Piñas Universidad Nacional Enrique Guzman y Valle, Enrique Guzman y Valle N°951, Lurigancho-Chosica, Peru e-mail: [email protected] H. Flores-Piñas e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_48


Keywords Learning · Communication · Digital Competences · Higher Level · Technology

1 Introduction According to Soler [1], information and communication technology (ICT) is the use of diverse technical means to store and transmit visual and digital data for a variety of business or academic reasons. Gómez and Macedo [2] argue that educational institutions around the world have the challenge of adopting ICTs in the classrooms and the instruction they deliver in order to provide their students with the technology and information they need to succeed in the modern world. In 1998, the UNESCO world report on teacher education and how to educate in a changing society highlighted the impact of ICT on traditional teaching and learning approaches, predicting the transformation of teaching and the way instructors and students acquire information. Likewise, Choque [3] describes cognitive tools as extensions of man created by people to facilitate their task. On the other hand, Besse [4] characterizes learning a foreign language as appropriation, which indicates that one must acquire and make sense of the lexical norms of what one hears and speaks, just as one must be able to connect the experiences stated above in order to remember the information. The ability to acquire a second language depends on the ability to comprehend texts and to hear; in other words, the information must be comprehensible. According to Harmer [5], language acquisition happens because broad exposure to the language and numerous opportunities to use it are frequently advantageous, indicating an increase in the learner’s knowledge and abilities. However, the absence of technology in the classroom for many courses and topics demonstrates the need to promote its use in the educational environment [6]. From a different perspective, Raiszadesh and Ettkin [7] note that, despite the fact that ICTs represent a significant advancement and bring benefits, most classes are unable to use them for various reasons. Due to the restricted usage of contemporary technology, it has been noticed that conventional teaching approaches predominate in classrooms. In Peru, Chumpitaz and Rivero [8] investigated the usage of ICT by instructors in Lima schools and discovered that various teachers emphasize the benefits of utilizing digital tools, noting that these technologies provide pupils with more. This result is supported by the support provided to instructors for monitoring students using a variety of forms, as well as the fact that they enable teachers to provide students with more complete information through audiovisual content, allowing students to interact with the resource and enabling teachers to remain current despite the problems that affect their different skill sets. Apart from improving technology skills, it is critical for individuals to study a second language nowadays, since it opens up global development pathways for communication. It is reported that English is part of the primary school curriculum,


yet the attempt to incorporate this language into children's early years of study has not been enough, and the pupils' learning success has not been as anticipated. Similarly, Prato and Mendoza [9] attribute this to passive approaches that assume all students have the same learning requirements when, in fact, they comprehend at varying rates, so a single methodology is ineffective for all students and does not produce the same academic achievement. On the path to acquiring a second language, ICT can be used as a tool: society has integrated digital tools into daily life and an increasing number of people are attracted to them, so it is advantageous to conduct this learning through various disruptive strategies; these only require proper incorporation with the goal of increasing interest [10]. For these reasons, it is vital to evaluate ICT and English language acquisition in the Peruvian environment, despite the fact that technology integration is frequently weak in this setting. The research is also relevant because it is expected to help university teachers make the necessary adjustments to the teaching of English, taking into account the impact on society, since students at a public university seek to improve their knowledge of English in order to enter a context that requires the use of another language.

2 Method 2.1 Level and Design The research employs a quantitative approach of the basic type, at a correlational level. The research design is non-experimental, since no variables are manipulated, and cross-sectional, since the purpose of the study is to characterize the variables and examine their relationships at a single point in time [11]. Furthermore, the hypothetical-deductive method was applied.

2.2 Population and Sample Hernandez et al. [11] define a population as a collection of people who share a set of characteristics that will be stated in the study's purpose. For that reason, the population consisted of 73 second-year English students, whereas the sample consisted of 59 individuals. Table 1 displays the non-probabilistic sample for the study, which was selected based on characteristics including English language proficiency, regular attendance, and willingness to be surveyed.

Table 1 Sample of the study

Specialty                                Students
English - Italian                        16
English - Spanish as foreign language    14
English - French                         18
English - German                         11
Total                                    59

2.3 Technique and Instruments for Data Analysis The technique used was a survey and the instrument was a questionnaire. In accordance with the objectives of this study, the dimensions and indicators established for the variable use of ICT were considered, structured in 17 items; for learning English, a 20-point test written entirely in English was applied and solved as an exam. Both instruments were validated by four experts, most of whom gave an approval rating of more than 80%.

ICT Dimensions
– Terminals: electronic resources that store, process, display or transfer information, such as video projectors and computers.
– Services: materials or resources that fulfill a particular purpose, such as presentations, videos and Web 2.0 tools.

English Language Learning Dimensions
– Comprehension and oral expression: the skills demonstrated by the student to understand and produce oral texts in English, taking into account the context, feelings, tone of voice, among others.
– Comprehension of texts: the comprehension of written texts in the English language at different levels.
– Text production: the ability to express ideas and emotions in written English, taking into account grammar, vocabulary, register, etc.

2.4 Instrument Reliability Additionally, a Cronbach alpha reliability of 0.758 was obtained for the first variable, and a Kuder Richardson reliability of 0.830 for the second variable, indicating both variables had a high level of reliability.
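These reliability coefficients can be recomputed directly from the item responses. The minimal Python/NumPy sketch below is an illustration, not the SPSS procedure used by the authors, and the demo data are placeholders; for dichotomous (0/1) items, Cronbach's alpha reduces to the Kuder–Richardson formula (KR-20).

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: matrix of shape (respondents, items); for 0/1 items this equals KR-20."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative call: in the study, a 59 x 17 matrix of questionnaire responses (ICT)
# and a 59 x 20 matrix of 0/1 test scores (English) would be loaded here instead.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(59, 17))   # placeholder data, not the study's responses
print(round(cronbach_alpha(demo), 3))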


Fig. 1 English language learning

Table 2 Frequency distribution of the ICT variable

Levels       Range      Absolute frequency (f)   Relative frequency (%)
Very good    [72–85]    6                        10.2%
Good         [59–71]    35                       59.3%
Regular      [45–58]    9                        15.3%
Bad          [32–44]    8                        13.6%
Very bad     [17–31]    1                        1.7%
Total                   59                       100.0%

Note: Variable ICT. Source: SPSS 24 results

3 Results Of those who completed the survey, 59.3% (35) indicate that the use of ICT is good, followed by 15.3% (9) who consider it regular, 13.6% (8) who consider it bad, 10.2% (6) who consider it very good, and 1.7% (1) who consider it very bad. The average score is 58.76, which indicates that, for those who completed the survey, the use of ICT is good (Table 2). Table 3 displays the percentages obtained for the dimensions of the ICT variable. In the terminals dimension, 49.2% of respondents rated the use of video projectors and computers as good. Similarly, in the services dimension, 52.5% of respondents rated the use of technological resources as good. It may therefore be concluded that the respondents' use of terminals and services within the institution is satisfactory (Fig. 1).


Table 3 Frequency distribution of ICT dimensions

Levels       Terminals f (%)    Services f (%)
Very good    5 (8.5%)           6 (10.2%)
Good         29 (49.2%)         31 (52.5%)
Regular      18 (30.5%)         13 (22.0%)
Bad          6 (10.2%)          8 (13.6%)
Very bad     1 (1.7%)           1 (1.7%)
Total        59 (100%)          59 (100%)

Source: SPSS 24 results

Of those who completed the survey, 42.4% (25) have a high level of English learning, followed by 23.7% (14) with a medium level, 16.9% (10) very high, 15.3% (9) low, and 1.7% (1) very low. The average score is 19.78, which indicates that, for those who completed the survey, English learning is at a high level. Table 4 shows the percentages obtained for the dimensions of English language learning. In the comprehension and oral expression dimension, 39% of the respondents show a good level and 23.7% a fair one. In the text comprehension dimension, 45.8% show a good level. In text production, 37.3% show a good command and 27.1% a fair command. Thus, the majority of those surveyed present a good level with respect to learning English (Fig. 2).

Table 4 Frequency distribution of English language learning dimensions

Levels       Comprehension and oral expression f (%)   Comprehension of texts f (%)   Text production f (%)
Very good    10 (16.9%)                                6 (10.2%)                      9 (15.3%)
Good         23 (39.0%)                                27 (45.8%)                     22 (37.3%)
Regular      14 (23.7%)                                14 (23.7%)                     16 (27.1%)
Bad          11 (18.6%)                                11 (18.6%)                     10 (16.9%)
Very bad     1 (1.7%)                                  1 (1.7%)                       2 (3.4%)
Total        59 (100%)                                 59 (100%)                      59 (100%)


Fig. 2 Distribution of comparative levels between ICT and English language learning. Note: Cross tables in relation to ICT and learning English. SPSS 24 result

Table 5 Correlation and significance between ICT and English learning

                                               ICT        English learning
ICT                Correlation coefficient     1.000      .636**
                   Sig. (bilateral)            .          .000
                   N                           59         59
English learning   Correlation coefficient     .636**     1.000
                   Sig. (bilateral)            .000       .
                   N                           59         59

** The correlation is significant at the 0.01 level

From the foregoing, it is estimated that, when those who completed the survey indicate that the use of ICT is very good, 8.5% have a very high level of English learning and 1.7% a medium level; when they state that the use of ICT is good, 6.8% have a very high level, 39.0% a high level, 10.2% a medium level, and 3.4% a low level. On the other hand, when the use of ICT is rated regular, 1.7% have a very high level of English learning, 1.7% a high level, and 11.9% a medium level; when it is rated bad, 1.7% have a high level of English learning and 11.9% a low level; and finally, when it is rated very bad, 1.7% have a very low level of English learning (Table 5). The results are presented to test the general hypothesis: a Spearman's Rho correlation coefficient of 0.636** was obtained (**the correlation is significant at the bilateral 0.01 level), which is interpreted as a positive relationship between the variables, with p = 0.000 (p < 0.01); the null hypothesis is therefore rejected. In addition,


it can be seen that ICT use is directly related to English learning; in other words, the greater the use of ICT, the greater the learning of English, and the Spearman correlation of 0.636 indicates that this relationship is positive.
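The correlation reported in Table 5 can be checked with any statistics package. The following minimal sketch uses Python with scipy, which is an assumption about tooling (the authors worked in SPSS 24), and placeholder arrays instead of the 59 student totals:

import numpy as np
from scipy.stats import spearmanr

# ict_scores: questionnaire totals (17 items, range 17-85);
# english_scores: test scores (0-20 points).
# The arrays below are placeholders; in the study each would hold the 59 student totals.
rng = np.random.default_rng(1)
ict_scores = rng.integers(17, 86, size=59)
english_scores = rng.integers(0, 21, size=59)

rho, p_value = spearmanr(ict_scores, english_scores)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# A rho of about 0.64 with p < 0.01, as in Table 5, would indicate a significant
# positive association between ICT use and English learning.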

4 Discussion and Conclusion The research’s overall objective was to ascertain the extent to which ICT is used to teach and study English among second-year students enrolled in an English career at a higher education institution in Lima, Peru. When the general hypothesis was tested, a Spearman’s Rho value of 0.636 and a p value of 0.000 were obtained, indicating the presence of a direct and significant relationship between ICT and English learning, which is consistent with de Luperdi’s [12] research on the use of English and the use of ICT as teaching and learning techniques in university students. These results are a result of educational technology’ potential to adapt teaching and learning processes in novel ways that accommodate both classic and non-traditional forms of instruction [13]. Thus, Robles et al. [14] argue for the inclusion of technology instruments in higher level education in order to foster the autonomy of future professional English language instructors throughout their educational experiences. Concerning particular aim 1, which was to determine the degree of association between the usage of ICT in the classroom and oral understanding and expression of second-year English students. When the specific hypothesis 1 is tested, Spearman’s Rho = 0.617 and p-value = 0.000 indicate that there is a direct and significant relationship between ICT in education and English comprehension and oral expression, similar to Vega [15], whose Pearson coefficient results indicated a positive correlation between ICT and English learning. De La Cruz et al. [16] demonstrate the efficacy of gamification aspects in generating substantial improvements in higher level students’ English learning, and then validate the efficacy of digital tools in improving reading and listening skills. Addressing particular goal 2, which attempted to determine the degree of association between the use of ICT in the classroom and text comprehension in second-year English degree students. For specific hypothesis 2, a Spearman’s Rho of 0.572 and a p value of 0.000 indicate that there is a direct and significant relationship between ICT in the classroom and text comprehension, which is consistent with the findings of Castellano et al. [17], who discovered a significant relationship between motivation and English learning in public school students. So, it is critical for university students concentrating in English to master the English language, recognizing their talents in information discovery and the English–Spanish translation process [18]. Similarly, Laura et al. [19] and Almenárez y Ortega [20] prove the efficacy of using digital tools in their educational practice by optimizing the reading ability of English language in normal basic level pupils, not just by creating major positive improvements but also desire and enthusiasm in studying and engaging in class activities improves. Assessing particular goal 3, which sought to ascertain the degree of link between ICT in education and text production among second-year English students. When


specific hypothesis 3 is tested, a Spearman’s Rho correlation coefficient of 0.596 and a p value of 0.000 are obtained, indicating that there is a direct and significant relationship between ICT and text writing. This finding is consistent with the findings of Brito and Garcia [21], who stated that their proposal for didactic guides benefits and facilitates the process of teaching and learning the lexicon while also promoting achievement of specified competencies. Similarly, Laura et al. [22] emphasize the necessity of incorporating digital technologies into the process of teaching and learning the English language, stating that gamification engages students considerably in accomplishing stated goals and is a feasible method of reinforcing their autonomy. On the other hand, it is demonstrated that the use of educational software that teachers plan and develop enhances students’ learning of the English language by improving the planning and execution of activities that engage students in improving their English vocabulary and reinforcing their writing skills in order to have direct contact with the English language [18]. It is worth noting that the research had a time constraint on the use of the instruments in the case of students, and so data collection took around one month. Similarly, this research will have a transcendent effect on second-year English students at a public university institution. English instructors should examine a variety of digital technologies to significantly increase English learning over time. Additionally, it is vital to do and display more research-based activities at a higher level. While the significant results in regular education for English learning are confirmed, in a university setting, the same progressive improvement is expected to support teachers’ theory and application experiences, resulting in significant changes in their pedagogical practice and reinforcement of digital skills. The usage of technology will continue to evolve at a breakneck pace, and as a result, teaching must keep pace in order to maintain the highest possible standard of English learning.

References 1. Soler V (2008) El uso de las TIC (Tecnologías de la Información y la Comunicación) como herramienta didáctica en la escuela. Contribuciones a las Ciencias Sociales 10 2. Gómez L, Macedo J (2010) Importancia de las TIC en la Educación Básica Regular. Investigación Educativa 14(25):209–224 3. Choque R (2010) Estudio en aulas de innovación pedagógica y desarrollo de capacidades TIC. Universidad Nacional Mayor de San Marcos, Lima 4. Besse H (1984) Grammaire et Didactique des Langues. Hatier – Crédif, Paris 5. Harmer J (2001) The practice of english language teaching. Longman Group Limited, London 6. Alfalla R, Arenas F, Medina C (2001) La aplicación de las TIC a la enseñanza universitaria y su empleo en la formación en dirección de la producción/operaciones. Píxel-Bit. Revista de Medios y Educación 16:61–75 7. Raiszadeh F, Ettkin L (1989) POM en academia: algunas causas de preocupación. Diario de producción y gestión de inventario 30(2):37–40 8. Chumpitaz L, Rivero C (2012) Uso cotidiano y pedagógico de las TIC por profesores de una universidad privada de Lima. Educación 11(41):81–100. http://revistas.pucp.edu.pe/index.php/ educacion/article/view/2900


9. Prato A, Mendoza M (2006) Opinión, conocimiento y uso de portales web para la enseñanza del inglés como lengua extranjera. Enl@ce: Revista Venezolana de Información 3(1):49–61. http://redalyc.uaemex.mx/redalyc/pdf/823/82330104.pdf 10. López M (2007) Uso de las TIC en la educación superior de México. Un estudio de caso. Apertura 7(7):63–81 11. Hernández R, Fernández C, Baptista P (2014) Metodología de la Investigación Científica. 6th edn. MC Graw Hill, Colombia 12. Luperdi F (2018) Dominio del inglés y el uso de tics como estrategias de enseñanza en el aprendizaje del idioma inglés en universitarios [Doctoral thesis]. Cesar Vallejo University, Lima 13. León E, Tapia J (2013) Educación con TIC para la sociedad del conocimiento. Revista Digital Universitaria 14(2) 14. Robles H, Salamanca R, Laura K (2021) Quizizz y su aplicación en el aprendizaje de los estudiantes de la carrera profesional de idioma extranjero. PURIQ 4(1):97–115. https://doi. org/10.37073/puriq.4.1.239 15. Vega F (2017) Uso de las TICS y su influencia con la enseñanza – aprendizaje del idioma inglés en los estudiantes del I y II ciclo de la Escuela Académico Profesional de la Facultad de Educación UNMSM-Lima [Master tesis]. San Marcos National University, Lima 16. Laura-De La Cruz K, Gebera O, Copaja S (2022) Application of gamification in higher education in the teaching of english as a foreign language. In: Mesquita A, Abreu A, Carvalho JV (eds) Perspectives and trends in education and technology: selected papers from ICITED 2021. Springer Singapore, Singapore, pp 323–341. https://doi.org/10.1007/978-981-16-5063-5_27 17. Castellano A, Ninapaytan D, Segura H (2014) La motivación y su relación con el aprendizaje del idioma inglés en los estudiantes del tercer grado de secundaria de la Institución Educativa 1283 Okinawa, Ate-Vitarte, 2014 [Thesis]. Enrique Guzman y Valle National University, Lima 18. Laura K, Morales K, Clavitea M, Aza P (2021) Aplicación Quizizz y comprensión de textos en inglés con el contenido de la plataforma educativa “Aprendo en Casa”. Revista Innova Educación 3(1):151–159. https://doi.org/10.35622/j.rie.2021.01.007 19. Laura K, Velarde J (2019) La Aplicación de un Software en Comprensión de Textos en inglés para Estudiantes en Perú. Neumann Bus. Rev. 5(2):108–121. https://doi.org/10.22451/3002. nbr2019.vol5.2.10042 20. Almenárez F, Ortega G (2017) Una Intervención Pedagógica mediada por TIC: Contribución para el mejoramiento del inglés en estudiantes de grado quinto. Didasc@lia: Didáctica y Educación 9(1):189–210 21. Brito F, García A (2016) Efectividad de las guías de enseñanza en el aprendizaje del vocabulario del idioma inglés en los estudiantes de grado sexto de la Institución Educativa Técnica San José del Municipio de Fresno-Tolima año 2016 [Mater tesis]. Norbert Wiener Private University, Lima 22. Laura K, Franco L, Luza K (2020) Gamification for understanding English texts for students in a public school in Peru. Int J Develop Res 10(10):41787–41791. https://doi.org/10.37118/ ijdr.20319.10.2020

Communicational Citizenship in the Media of Ecuador Abel Suing, Kruzkaya Ordóñez, and Lilia Carpio-Jiménez

Abstract The purpose of the research is to identify the treatment of communicational citizenship in the social information media, with local and national coverage, in Ecuador through the study of their Web sites in order to know the relationship with the emerging interests of society. The research questions are: what kind of manifestations related to citizenship and people’s rights are presented in the media; does the informative treatment that the media give to the manifestations of communicational citizenship make visible the interests and demands of the citizens? The methodology is quantitative and qualitative, descriptive and relational, through content analysis and expert interviews. The Ecuadorian media insert in their news the new emancipations sought by the community, but they do not develop them, they remain as marginal notes, thus depriving discussions and debates from appearing in public opinion. A conservative vision prevents social dynamism from being reflected in the media. Keywords Communicational citizenship · Democracy · Mass media · Freedom of expression

1 Introduction Communicational citizenship is “to put into practice mechanisms that prevent the monopolization and homogenization of multiple cultural meanings, allowing equal access and opportunities” [1], it involves the “creative and protagonist participation of people […] because there is no political democracy without communicational democracy” [2]. Communicational citizenship is built through dialogues and interactions in public spaces, it challenges power and helps the emergence of new relationships and balances that encourage the citizen to play a leading role in political life [3]. The collective values of communicational citizenship are freedom, equality and A. Suing (B) · K. Ordóñez · L. Carpio-Jiménez Universidad Técnica Particular de Loja, Departamento de Ciencias de la Comunicación, Grupo “Comunicación y Cultura Audiovisual”, Calle Champagnat, 110107 Loja, Ecuador e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_49


social justice, and the individual values are tolerance and acceptance of differences [4], in addition, new emancipations are included such as intercultural dialogue [5], ecological awareness [6], the defense of gender and of refugees and immigrants [7], that is, the integration of diversity, this search for identity calls to keep alive efforts for a more just and equitable society [8]. Through the exercise of communicational citizenship, we contribute to the deliberative democracy model whose fundamental requirement is the participation of people. A deliberative democracy implies “the articulation of participatory public debate through the media” [9], thus the challenge arises of “finding ways to increase democratic participation through the expansion of deliberative processes” [10]. Therefore, building democratic citizenship requires the participation of civil society in the media. Communication opens spaces for meeting and interaction [11, 12]. It should be remembered that citizen participation in the media is based on Article 19 of the Universal Declaration of Human Rights of 1948, is a characteristic of freedom of expression, and contributes to democracy and fair and inclusive societies [13]. According to UNESCO, participation comprises three levels [14–16]: the intervention of the population in a) the production of messages, b) in decision making; and, c) the contribution to formulate mass communication plans and policies [17]. Frequent forms of citizen participation in the media are “letters to the editor in newspapers or calls from listeners in the case of radio (…) [as well as] blank pages to be written by citizens” [18]. There are also readers’ comments [19], and “programs that allow audience responses” [20]. In the twenty-first century, communication technologies and the Internet opened avenues for platforms such as Facebook, Twitter, WhatsApp and Instagram, among others, “provide the necessary conditions for people to connect (…) what changes are the rules of the game and, therefore, also the results of citizen participation” [21]. “The media perceived participation as a valid strategy to generate traffic, attract visitors and, to the extent possible, build loyalty. To this end, they developed a wide range of interaction tools: comments, surveys, forums, recommendations, etc.” [22]. The media “can take advantage of the contributions made by users […] to give a voice to those sectors or groups that are less represented in society” [23]. The treatment of people’s rights “have a direct impact on the construction of full citizenship” [24]. On the other hand, democratic and digital citizenship demands social inclusion in the Network [25]. Today, the exercise of democratic citizenship requires media competencies that enhance people’s protagonism before the media [26]. In line with UNESCO, effective participation will happen when the population has a voice “on issues that affect their lives. In other words, it is about sharing decisionmaking power” [27] until reaching the “right to freedom of expression in its social and individual dimensions” [28]. In theory, it is considered that “citizen participation implies that the citizenry is involved in decision-making […] and any other form of political endeavor through a participatory and active democracy (beyond the act of voting)” [29]. 
A tangible sign of participation is the presence of citizens in the Boards of Directors of the media, with voice and voting capacity to “define rules, formats and/or programming grids in state, public or privately managed social communication media” [30]. Recent


research on the treatment of citizenship in digital media in Peru [31] identified that the six most used words to refer to the concept are citizenship, citizen, people and user/user, these acquire characteristics depending on the message and the level of linkage, but do not include meanings related to democratic and communicational citizenship.

2 Objectives Based on the above, the purpose of this work is to identify the treatment of communicational citizenship in the social information media, with local and national coverage, in Ecuador through the study of their Web sites, because in this way the relations with the emerging interests of society will be known. The research questions are: what kind of manifestations related to citizenship and people’s rights are presented in the media; does the informative treatment given by the media to the manifestations of communicational citizenship make visible the interests and demands of citizens?

3 Methodology The methodology is quantitative and qualitative, descriptive and relational [32], through content analysis and expert interviews. Descriptive research produces data in “people’s own words spoken or written” [33]. The content analysis considers news, reports, opinion articles and more informative genres published on the websites of 82 media outlets with local and national coverage in Ecuador, between November 16, 2020 and January 14, 2021, that include keywords linked to the concept of citizenship. The selection of media was done intentionally, 180 pieces (notes) were found from which the base “Data on communicational citizenship in Ecuador” [34] was extracted. The key words come from the study of Mendoza, Viaña and Espinoza [31], in them “traditional” forms and new forms linked to the concept are citizenship are manifested: a) Traditional (formal) manifestations: citizenship, citizen, people, user. b) New (emergent) manifestations: diversity, dialogue, participation, equality, etc. Then, the informative pieces are classified according to the treatment they receive: location of keywords in the text, by area, journalistic section, genres, digital resources and sources [31]. On the other hand, the results of the content analysis are interpreted with the support of interviews with 17 professionals, including communicators, researchers in the social sciences and others. The interviews take place between December 1 and 20, 2020. The profiles of the interviewees are: - EM1 (female interviewee 1): professor in the Department of Business Sciences at the Universidad Técnica Particular de Loja, in Ecuador. - EM2 (female interviewee 2): professor in the Social Communication career at the National University of Loja, in Ecuador. - EM3 (female interviewee 3): educational communicator. - MS4 (female


interviewee 4): journalist and presenter at “Multicanal Mejía”. - MS5 (female interviewee 5): Professor in the Law Department of the Universidad Técnica Particular de Loja, in Ecuador. - EM6 (female interviewee 6): social communicator, translator and interpreter. - EH1 (male interviewee 1): master’s degree in communications technologies, systems and networks. - EH2 (male interviewee 2): industrial safety engineer. - EH3 (male interviewee 3): political science student. Former advisor to the United Nations Department of Environment in Ecuador. - EH4 (male interviewee 4): Journalist working for a local television station. - EH5 (male interviewee 5): journalist working at a local radio station. - EH6 (male interviewee 6): journalist, editor and photographer at Super Peninsular newspaper. - EH7 (male interviewee 7): reporter for the Archdiocese of Guayaquil. - EH8 (male interviewee 8): commentator and narrator at Radio Positiva and Gol TV. - EH9 (male interviewee 9): communicator at UP-Medios agency. - EH10 (male interviewee 10): professor at Universidad Central del Ecuador. - EH11 (interviewee male 11): political scientist at the National University of Colombia, researcher in political sociology at the Latin American Faculty of Social Sciences, Ecuador.
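The keyword tracking described in the methodology can be automated once the published pieces are available as text. The Python sketch below is a simplified illustration of that step, not the procedure actually used: the word lists are abridged and the matching is plain pattern search, which is cruder than a manual content analysis.

import re

# Abridged keyword lists; the study distinguishes traditional (formal) and new
# (emergent) manifestations of communicational citizenship.
TRADITIONAL = ["citizenship", "citizen", "people", "user"]
EMERGENT = ["diversity", "dialogue", "participation", "equality"]

def classify(piece: str) -> dict:
    """Return which manifestation keywords appear in a news piece."""
    text = piece.lower()
    def found(words):
        # Prefix match on word boundaries so plurals such as "citizens" are also caught.
        return [w for w in words if re.search(rf"\b{w}", text)]
    return {"traditional": found(TRADITIONAL), "emergent": found(EMERGENT)}

sample = "Citizens demanded participation and dialogue with local authorities."
print(classify(sample))
# {'traditional': ['citizen'], 'emergent': ['dialogue', 'participation']}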

4 Results The execution of the methodology implied tracking information where at least one of the key words mentioned above appears. Once the pieces of information were located, the concepts related to the rights they address were paired with the UNESCO Thesaurus: 1) Biological diversity: ecological balance, biology, nature conservation, universal common heritage. 2) Freedom of expression: right to information, freedom of the press, censorship, control of communication, ethics of communication. 3) Health policy: epidemiology, disease control, drug policy, women’s health, immunology, sanitation, health service. 4) Cultural identity: culture, cultural rights, cultural diversity, education and culture. 5) Social problems: social conflict, crime, discrimination, poverty, suicide, violence, ethnic discrimination, social exclusion, refugees. 6) Democracy: democratization, human rights, participatory development, human security. 7) Sport: athlete, sport competition, physical education, education and leisure, sport facility. 8) Civil rights: right to justice, right to privacy, right to life, freedom of thought, human rights, equal opportunity. 9) Market economy: private property, private enterprise, market. 10) Women’s participation: social participation, political participation, women and development. 11) Housing: housing, housing needs, living conditions. Table 1 shows the relationship between the manifestations of citizenship and the rights alluded to in the information published during the period under analysis. The data show a pattern between the expressions (traditional and new) of communicational citizenship and the concepts of civil rights. To determine the degree of association, Pearson’s Chi-square test (Table 2), which is used to measure the relationship between qualitative variables, was used to determine a p-value, a significance level

Table 1 Manifestations of citizenship and related rights

Subjects and rights        Traditionally   New   Totals
Biological diversity       4               1     5
Freedom of expression      7               2     9
Health policy              11              0     11
Cultural identity          8               4     12
Social issues              11              1     12
Democracy                  13              3     16
Sports                     2               7     9
Civil rights               61              15    76
Market economy             10              3     13
Women's participation      2               10    12
Housing                    3               2     5
Totals                     132             48    180



of less than 5%, thus proving an association. Finally, Table 3 details the journalistic treatment of communicational citizenship. Based on the above data, it can be noted that the information published on the websites of Ecuador’s mass media in relation to citizenship is associated with classic or traditional manifestations of human rights, and very little with emerging rights such as personal integrity, the environment, biological diversity, peace, protection of personal data, equal representation, information and communication, among others. The central ideas of the interviewees’ answers to the question “What or how should journalistic routines and approaches be changed to make room for new manifestations of communicational citizenship?” are: EM1: “we have to change the national culture, it leads to a media culture only what is momentary news, that which sells, we have a culture of yellow journalism” (C. Gonzaga, personal communication, December 3, 2020). MS2: “spaces in which citizens make known the different problems that are part of their lives” (T. León, personal communication, December 3, 2020). MS3: “the communicator must focus on his role as a process guide. You have to investigate thoroughly, engage and dig until you find the root of the problem” (A. Sacoto, personal communication, December 7, 2020). MS4: “local media have more openness to the community, then segments can be created” (M. Villagomez, personal communication, December 14, 2020). MS5: “adequate political leadership in terms of more open support being given because they are not necessarily a priority for the government” (X. Torres, personal communication, December 20, 2020). MS6: “it is necessary to stop looking at the institutional as the starting point for everything and turn our gaze to everyday life and life in the street” (D. Amores, personal communication, December 17, 2020). EH1: “they should be more globalized and seek to educate and not merely profit” (J. Apolo, personal communication, December 15, 2020). EH2: “they should conduct in-depth and real research” (S. Pullas, personal communication, December 14, 2020). EH3: “they should change editorial lines, give openness to topics and rights that are currently in vogue, even if they bother Ecuadorian society” (J. Encalada, personal communication, December 20, 2020). EH4: “the media should focus on citizen participation content, in Loja there are very few citizen journalism newscasts, it is necessary to generate information from different parts of this society” (J. Vanegas, personal communication, December 3, 2020). EH5: “propitious time for young people to take the lead on this issue, they have greater access to information and education” (P. Narváez, personal communication, December 16, 2020). Table 2 Chi-square test result Indicator

Value

fd

Asymptotic significance (bilateral)

Pearson’s Chi-square

41.186

10

0.000

Likelihood ratio

40.001

10

0.000

6.163

1

0.013

Linear by linear association N of valid cases

180
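The association test can be reproduced from the counts in Table 1. The minimal sketch below uses Python with scipy, an assumption about tooling; with these counts the Pearson statistic comes out at roughly 41.19 with 10 degrees of freedom, in line with Table 2.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: traditional vs. new manifestations; columns: the eleven subjects/rights of Table 1.
observed = np.array([
    [4, 7, 11, 8, 11, 13, 2, 61, 10, 2, 3],    # traditional
    [1, 2,  0, 4,  1,  3, 7, 15,  3, 10, 2],   # new
])

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
# Expected: chi2 close to 41.19 with dof = 10 and p < 0.001 (cf. Table 2).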

117

Table 3 Journalistic treatment

                            Traditionally   New   Totals
Type of media
  Press                     81              36    117
  Audiovisual               51              12    63
Journalistic genre
  News                      109             27    136
  Report                    10              10    20
  Opinion                                         13
  Interview                                       7
  Chronicle                                       4
Source
  Local                     70              15    85
  National                  54              25    79
  International             8               8     16
Coverage
  Local                     97              28    125
  National                  35              20    55
Totals                      132             48    180

(Table 3 also breaks the pieces down by digital resources used, location of the keywords within the piece, and type of content.)


EH6: “the media must create new ways of representing the reality of the world” (W. Rosales, personal communication, December 16, 2020). EH7: “a clear focus on citizens, where their rights are promoted” (S. Pincay, personal communication, December 14, 2020). EH8: “official platform in social networks so that citizens can express themselves by exercising their right to communication” (F. Vizcaíno, personal communication, December 3, 2020). EH9: “the media must stop being afraid and fulfill their journalistic work with vocation” (J. Jiménez, personal communication, December 5, 2020). EH10: “this training system must be turned around. Change the correlation of forces within the media field. An important line is the promotion of an inclusive dynamic of professional practices” (M. Bonilla, personal communication, December 8, 2020). EH11: “the media should not only inform but also investigate” (A. Hernández, personal communication, December 5, 2020).

5 Discussion and Conclusions The manifestations of communicational citizenship in the Ecuadorian media, studied between November 2020 and January 2021, are expressed in traditional forms in relation to what is expressed in the Charter of Human Rights of 1948. The key words published in the informative pieces of the Web versions of the media in Ecuador link emerging rights to a lesser extent. The informative coverage of the media does not advance at the same pace as social demands, there is a risk of not making visible the groups, interests and demands of citizens and indirectly of not contributing to the quality of democracy in Ecuador. This implies that privacy, the right to life, freedom of thought, human rights and equal opportunities appear in the Ecuadorian media when the traditional notions of citizenship are addressed: citizen, people, user, and to a lesser extent when the words diversity, dialogue, participation and equality appear. From the information in Table 3, it can be deduced that: – The new manifestations of citizenship are proportionally more present in the Web versions of newspapers (31%) than in the Web versions of audiovisual media (19%). – The Web sites of the media in Ecuador tend to incorporate digital resources in the presentation of information: text, photos, social network links, multimedia links, etc. – The sources of information are local (47%), which is consistent with the tabulated sample, and most of the media are local coverage (69%), therefore, it is information aimed at citizens in the interior of the country. – The key words related to traditional and new manifestations of citizenship are located in the body of the news (68%), it happens that there is a minimum visibility of the interests and problems of the people, and implies a distance from their claims.


– The most frequently used journalistic genre is news (75%). The informative pieces published are short-form pieces that respond to the agendas of the media; there are no in-depth or moderately deep readings such as those provided by reports, interviews, chronicles or opinion articles.
– The rights related to citizenship are presented in the informative sections, according to the classification of the Organic Law of Communication of Ecuador. The minimal projection of citizenship from educational, formative or cultural spaces is striking. There is no appreciation of the pedagogical potential of the media to improve participation in the construction of communicational citizenship.
The reasons for treating communicational citizenship from a conventional approach are, in the words of the interviewees, economic motivations, educational-communicational formats, routines and the limited intervention of citizens in the media. Among the economic reasons it is mentioned that "we have to change, news is what sells, unfortunately we have a culture of yellow journalism, and the media promulgate that" (C. Gonzaga, personal communication, December 3, 2020). "The media field is not very diversified in our country, there is a real monopoly" (M. Bonilla, personal communication, December 8, 2020). Judgments linked to formats appear: "journalistic approaches should be more globalized, that seek to educate" (J. Apolo, personal communication, December 15, 2020), "presenting new ways of showing the reality of the world" (W. Rosales, personal communication, December 16, 2020), "it is necessary to create new communication spaces with a clear focus on citizens, where their rights are promoted, but for this to happen a good proposal to the different media is needed" (S. Pincay, personal communication, December 14, 2020). The deontological reasons stated refer to the fact that "the change that the media must make is to stop being afraid and fulfill journalistic work with vocation, giving greater importance to transparency" (J. Jiménez, personal communication, December 5, 2020). "It is necessary to stop looking at the institutional as the starting point for everything and turn our gaze to the everyday and life in the street to start from there, connecting towards the current state of the instruments that protect and facilitate the enforcement of rights" (D. Amores, personal communication, December 17, 2020).

Care must be taken that the media "carry out a thorough and true investigation for the trust of the citizenry" (S. Pullas, personal communication, December 14, 2020). "The media should not only inform but investigate" (A. Hernández, personal communication, December 5, 2020). "If the authorities are focused on citizen rights, we as journalists will be able to report what is happening" (J. Vanegas, personal communication, December 3, 2020). "It is necessary to change the professional training system for journalists, since professional practices that contain citizen exclusion schemes (they do not take into account the rights to cultural diversity, biodiversity or citizen participation) are reproduced in this space" (M. Bonilla, personal communication, December 8, 2020).


“The editorial lines must change, opening up to topics and rights that are currently in vogue. Human rights are taboo topics in our society, but they are necessary” (J. Encalada, personal communication, December 20, 2020). “The ethical vision of the communicator should focus on his or her role as a guide or promoter of different processes” (A. Sacoto, personal communication, December 7, 2020). Among the reasons that address participation are: “To solve the issue of citizen participation we must begin to ask questions, are there spaces for citizen participation that are already being developed? It is up to the media to disseminate these spaces” (P. Narváez, personal communication, December 16, 2020). “The media should focus on a greater content of citizen participation, there are very few citizen journalism newscasts” (J. Vanegas, personal communication, December 3, 2020). “It is essential that the media allow spaces for citizens [to] make known the different issues that are part of their lives” (T. León, personal communication, December 3, 2020). “Most of the time these expressions of citizenship do not reach the appropriate media to give attention to them, an official platform could be im-plemented in social networks so that citizens can express themselves by exercising their right to communication” (F. Vizcaíno, personal communication, December 3, 2020).

“In this case the local media have more openness, I dare say that local media give more space to the community” (M. Villagomez, personal communication, December 14, 2020). “There is no doubt that there is a lack of adequate political leadership in terms of giving more open support because they are not necessarily a priority for the government” (X. Torres, personal communication, December 20, 2020). Finally, the new manifestations of citizenship communication are significantly present in the informative pieces that deal with women’s participation, moreover, women’s opinions are in the macro, in the structure of the institutions rather than in the operative of journalism. This brief analysis answers the research questions. 1) what kind of manifestations related to citizenship and people’s rights are presented in the media? Tables 1 and 2 show that the treatment of the concept of citizenship occurs in a traditional way and is related to civil rights. The issues of biological diversity, freedom of expression and women’s participation are the least addressed. The informative pieces analyzed include concepts that involve diversity. According to the UNESCO Thesaurus, there are more than 150 related notions that include emerging human rights concerns. The Ecuadorian media insert in their news the new emancipations sought by the community, but they do not develop them, they remain as marginal notes, thus depriving discussions and debates from appearing in public opinion. A conservative vision prevents social dynamism from being reflected in the media. Research question 2) Does the informative treatment given by the media to the manifestations of communicational citizenship make visible the interests and demands of citizens? The answer is negative. The information in Tables 1 and 3, and the testimonies of the experts interviewed show that not all citizens’ demands are met. There is still work to be done for citizens to be able to appear in a diverse and pluralistic way in Ecuador’s media. The “effective participation of citizens will happen when the majority shares decision-making power. Empowerment is needed


so that, on the basis of awareness, ways are explored for the presence of multiple identities in the production and broadcasting of messages" [25]. Future lines of research are the quantitative analysis of the occurrences of the concepts of communicational citizenship in the media, methodological triangulations through focus groups, and relations with social media dialogues to monitor the evolution of emerging human rights in public opinion. Acknowledgements This paper presents part of the results of the research project "Development of a model of citizen intervention in local social communication media in Ecuador", which the Communication and Audiovisual Culture Research Group, attached to the Department of Communication Sciences of the Universidad Técnica Particular de Loja (UTPL), is carrying out with funds allocated by the Vice-Rectorate of Research of the UTPL.

References 1. Ottaviano C (2013) Corte Suprema: Presentación de la Defensora del Público de Servicios de Comunicación Audiovisual 2. Mutirão de comunicação (2010) Carta de Porto Alegre 3. Uganda W (2015) Comunicación para el diálogo político e intercultural. Derecho a la comunicación y ciudadanía comunicacional. Campos 3(1):51–78 4. Restrepo J (2006) Estándares básicos en competencias ciudadanas: una aproximación al problema de la formación ciudadana en Colombia. Papel Político 11(1):137-176 5. Consejo de Europa (2010) Carta del Consejo de Europa sobre la educación para la ciudadanía democrática y la educación en derechos humanos 6. Naval C (2003) Orígenes recientes y temas clave de la educación para la ciudadanía democrática actual. Revista de Educación, núm. extraordinario, pp 169–189 7. Requejo F (1996) Pluralismo, democracia y federalismo Una revisión de la ciudadanía democrática en estados plurinacionales. RIFP 7:93–120 8. Sánchez I (2008) Educación para una ciudadanía democrática e intercultural en Colombia. Revista Iberoamericana de Educación 46(3):1–12 9. Aznar H, Suay-Madrid A (2020) Tratamiento y participación de las personas mayores en los medios de comunicación: opinión cualificada de los periodistas especializados. El Profesional de la Información 29(3):1–12 10. Thompson J (1998) Los media y la modernidad. Paidós 11. Contreras P, Montecinos E (2019) Democracia y participación ciudadana: Tipología y mecanismos para la implementación. Revista de Ciencias Sociales 2:178–191 12. Martínez- Hermida M, Mayugo C, Tamarit A (2012) Comunidad y comunicación: prácticas comunicativas y medios comunitarios en Europa y América Latina. XV Encuentro de Latinoamericanistas Españoles, Madrid, España, pp 499-513 13. Burch S, León O, Tamayo E (2004) Se cayó el sistema: Enredos de la Sociedad de la Información. ALAI, Ecuador, Quito 14. Beaumont J (1978) El público tiene derecho a participar en la producción de los mensajes informativos”. El País, 6 de enero 15. Berrigan F (1979) Community Communications the role of community media in development. UNESCO, Paris, France. https://unesdoc.unesco.org/ark:/48223/pf0000044035 16. Guzmán V (2013) Políticas de comunicación y democratización. Pistas de una historia hacia la sanción de la ley de servicios de comunicación audiovisual en Argentina. Anagramas 11(22):19–36


17. UNESCO (1977) “Final report” Meeting on self-management, access and participation in communication. Belgrade, Yugoslavia 18. García-Martín D (2020) Del zine al podcast. Repensar la cultura de la participación desde un análisis comparativo de los medios alternativos. doxa.comunicación 30:107–125 19. González-Pedraz C, Pérez-Rodríguez A, Participación y perfil de los usuarios que comentan noticias de ciencia y salud online: estudio de caso. Perspectivas de la Comunicación 12(1):101115 20. UNESCO (2008) Indicadores de Desarrollo Mediático: Marco para evaluar el desarrollo de los medios de comunicación social. UNESCO, Francia 21. Riva A (2019) La alfabetización mediática e informacional en la era del capitalismo de vigilancia. Cuadernos del CLAEH 38:323–344 22. Masip P, Suau J (2014) Audiencias activas y modelos de participación en los medios de comunicación españoles. Hipertext.net 23. Suárez-Villegas J, Rodríguez-Martínez R, Ramón-Vegas X (2020) Pluralismo informativo en la era de la deliberación digital: percepciones de periodistas y ciudadanos. Profesional de la información 29(5):e290525 24. Novelli C, Aguaded I (2011) Cine, competencias comunicativas y ciudadanía plena. Olhar de professor 14(1):41-62 25. Alva de la Selva A (2020) Escenarios y desafíos de la ciudadanía digital en México. Revista Mexicana de Ciencias Políticas y Sociales 65(238):81–105 26. Gozálvez V (2014) Ciudadanía mediática: una mirada educativa, Dykinson 27. Kimani R (2019) La participación ciudadana en los medios de comunicación y las normas culturales en torno a la radio Mugambo Jwetu FM. Anthropologica 37(42):245–269. https:// doi.org/10.18800/anthropologica.201901.011 28. Segura M (2018) De la resistencia a la incidencia. Sociedad civil y derecho a la comunicación en Argentina. UNGS, Argentina 29. Cabañes E, Jaimen N (2020) Videojuegos para la participación ciudadana. (Spanish). Cuadernos del Centro de Estudios de Diseño y Comunicación 23(98):151–161 30. Rossi D (2016) Acceso y participación: el desafío digital entre la garantía de derechos y la restauración desreguladora 31. Mendoza M, Viaña B, Espinoza A (2019) El concepto de “ciudadanía” en los cibermedios peruanos. Las perspectivas de los medios, los usuarios y los periodistas. Revista de Comunicación 18(2):201–223 32. Hernández R, Fernández C, Baptista M (2000) Metodología de la Investigación. McGraw-Hill/ Interamericana Editores México 33. Taylor J, Bodgan R (1984) Introducción a los métodos cualitativos de investigación. La búsqueda de significados. Paidos Ibérica, Barcelona 34. Suing A (2021) Datos de la ciudadanía comunicacional en el Ecuador [Data set]. Zenodo.https:/ /doi.org/10.5281/zenodo.4628259

A Multimedia Resource Ontology for Semantic E-Learning Domain Soulaymane Hodroj, Myriam Lamolle and Massra Sabeima

Abstract The past decade has seen the growth of multimedia technologies in the context of e-learning. Smarter platforms are needed to support lifelong learning anywhere, anytime. This paper presents a new multimedia resource ontology (MMRO). It first provides a background about knowledge bases, description logics and reasoning, and an overview of multimedia ontologies. After highlighting their limitations in the context of e-learning, MMRO is detailed in terms of its main class hierarchy and assertions. Finally, we identify the next steps of possible future research works. Keywords multimedia resource · ontology · e-learning

1 Introduction Recent years have seen new developments in e-learning to address lifelong learning issues in terms of competences; and more recently, the COVID pandemic has also necessitated a major shift to distance-learning for traditional face-to-face training. The sharing of multimedia resources anywhere at any time becomes crucial not only on the part of the designers of the resources (teacher, trainer, domain expert, etc.), but also for the learners who propose their resources for feedback or evaluation. For example, a pastry apprentice who has filmed herself/himself making croissants could upload her/his video to a platform. A pastry expert then annotates this resource with a textual comment and/or with graphic annotations, emoticons, etc. A dialogue can also take place to orally complement the written exchange. The semantic representation of multimedia resources, in particular video, is essential to facilitate exchanges, their retrieval during the design of an individualized S. Hodroj Université Paris 13, Saint Denis, France M.Lamolle (B) · M.Sabeima IUT de Montreuil - Université Paris 8, Laboratoire d’Intelligence Artificielle et Sémantique des données (LIASD), 140 rue de la Nouvelle France, 93100 Montreuil, France e-mail: [email protected] URL: http://www.iut.univ-paris8.fr/~lamolle/ © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_50


Fig. 1 Example: two expressive renderings, (b) and (c), applied to the source image (a)

training path or when following an individualized training course. In this context, the semantic multimedia representation must allow the content of a resource to be described semantically, and different parts of a resource to be annotated and/or commented on. The main semantic criteria that we need for our collaborative and adaptive e-learning platform are (i) multimedia annotations and comments, the use of expressive rendering, object detection on video, and their classifications; (ii) an annotation model of the multimedia resources based on textual indexing of the caption or surrounding text, and visual indexing that takes into consideration colors and shapes as well as segmentation and points of interest; and (iii) compliance with the FAIR (Findability, Accessibility, Interoperability, and Reuse) principles [14]. In this paper, Sect. 2 presents the main notions of ontology, description logics and reasoning, and specific multimedia ontologies examined according to our criteria. The MultiMedia Resource Ontology is described in Sect. 3. Section 4 outlines future prospects.

2 Background In this section, after a brief presentation of what an ontology is and of reasoning over ontologies, we consider different multimedia ontologies, taking into account their relevance in the context of e-learning.

2.1 Ontology and Knowledge Inferencing Different definitions of the notion of ontology exist in different communities. In computer science, Gruber specified it as an "explicit specification of a conceptualisation" [3]. Studer et al. [12] stated that "an ontology is a formal, explicit specification of a shared conceptualization", which promotes linked open data. To better understand what lies behind these definitions, we recommend reading [5] and [4]. An ontology can be thought of as a data model representing a set of concepts for a specific


domain and a set of relationships between those concepts. Some assertions complete these two sets. The whole is used to reason about the objects in that domain. Reasoning to infer new knowledge in the Semantic Web is based on Description Logics (DLs), formal languages that can be seen as fragments of first-order predicate logic (FOL). As Sebastian Rudolph puts it in [8]: "As opposed to general FOL where logical inferencing is undecidable, DL research has been focusing on decidable fragments to such an extent that today, decidability is almost conceived as a necessary condition to call a formalism a DL". DLs handle three notions: individuals (instances of a class), concepts (or classes) and roles (binary relations). A knowledge base is therefore a pair K = ⟨A, T⟩ where A is a set of assertions (ABox) and T is a terminology (TBox). The basic DL is ALC¹. The only constructors that can be used with atomic concepts, the universal concept (⊤) and the bottom concept (⊥), are negation (¬), union (⊔), intersection (⊓), existential restriction (∃), and universal restriction (∀). By adding new constructors, we get more complex logics [2]. Reasoning on an ontology needs two services: (i) checking the consistency of the ontology, and (ii) inferring new knowledge. Reasoning tasks include the following main cases of inference [2]: satisfiability, subsumption, instance checking, relationship checking, and consistency. Let us illustrate these notions with an example. Consider a knowledge base K = ⟨A, T⟩ where
T = {Player ⊑ Person ⊓ ∀play.Match, Referee ⊑ ¬Player}
A = {Player(Jane), Person(John), play(Jane, m1)}
The two axioms of the TBox T state that a player is a person who plays only matches and that a referee is not a player. The ABox assertions mean that Jane is a player who plays match m1, and John is a person. Let I be an interpretation such that Δ^I = {Jane, John, m1}, Person^I = {Jane, John}, Player^I = {Jane}, Match^I = {m1}, Referee^I = ∅, and play^I = {(Jane, m1)}. By inference, it can be verified that:
– I satisfies each axiom of the TBox T and each assertion of the ABox A; hence I is a model of T and A, and K is consistent;
– all the concepts Player, Person, Referee, and Match are satisfiable with respect to the TBox T;
– the concept Player is subsumed by the concept Person, and the concept Referee is not subsumed by the concept Player, according to the model I of K;
– the individual Jane is an instance of the concept Person, and John is not an instance of the concept Player, according to the model I of K;
– the role play relates the individuals Jane and m1, and the assertion play(John, m1) does not hold according to the model I of K.

¹ Attribute Language with general Complement.
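For readers who want to experiment with this example, the sketch below encodes the same TBox and ABox with the owlready2 Python library (assuming it is installed and a Java runtime is available for the bundled HermiT reasoner); the IRI and all identifiers are illustrative and not part of the original paper.

```python
# Minimal sketch of the Player/Referee knowledge base with owlready2 (illustrative only).
from owlready2 import Thing, ObjectProperty, Not, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/sports.owl")  # hypothetical IRI

with onto:
    class Person(Thing): pass
    class Match(Thing): pass

    class play(ObjectProperty):            # role: play
        domain = [Person]
        range = [Match]

    class Player(Person): pass             # Player ⊑ Person
    Player.is_a.append(play.only(Match))   # Player ⊑ ∀play.Match

    class Referee(Person): pass
    Referee.is_a.append(Not(Player))       # Referee ⊑ ¬Player

    # ABox: Player(Jane), Person(John), play(Jane, m1)
    jane = Player("Jane")
    john = Person("John")
    m1 = Match("m1")
    jane.play = [m1]

# Consistency check and classification with the bundled HermiT reasoner;
# an OwlReadyInconsistentOntologyError would be raised if the KB were inconsistent.
sync_reasoner()

print(jane.INDIRECT_is_a)  # expected to include Person, since Player ⊑ Person
```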


2.2 Multimedia Ontologies Poli et al. argue in [6] that "In a multimedia ontology concepts might be represented by multimedia entities (images, graphics, video, audio, segments, etc.) or terms. A multimedia ontology is a model of multimedia data, especially of sounds, still images and videos, in terms of low-level features and media structure. Multimedia ontologies enable the inclusion and exchange of multimedia content through a common understanding of the multimedia content description and semantic information". On the other hand, a global multimedia ontology can be considered as several multimedia ontologies integrated by matching [7]. Unfortunately, each ontology is designed for a specific domain with a specific interpretation. In our case, the design of a semantic platform for collaborative and adaptive e-learning (SPACe-L) for learning manual trades, the choice of a video ontology is crucial. As far as we know, there are few multimedia ontologies where the semantic representation of videos is predominant, especially in the context of e-learning. The Ontology for Media Resources 1.0 (OMR), proposed by the W3C [16], lays the foundations of the vocabulary to be used to describe media resources. It defines a core set of metadata properties for media resources, along with their mappings to elements from a set of existing metadata formats, and presents a Semantic Web compatible implementation of the abstract ontology using RDF/OWL². Despite the importance of this ontology, with its 14 classes and 56 object properties, there are major issues that limit its utility in e-learning platforms. Firstly, this ontology essentially supports multimedia resources like videos and audios, and partially images, without any description of other resources used in the field of e-learning, such as texts and animations. Secondly, OMR plays a limited role in recent projects in which researchers from several research organizations are trying to develop semantic annotation tools for multimedia resources, which limits its extension and development in the future (compared to other ontologies used in these same projects, mainly LSCOM). Thirdly, it does not take into consideration many criteria for classifying multimedia resources that are related to the field of e-learning (resource themes, validated or not, annotated or not, etc.) [11]. The Video Ontology (VidOnt) [9] is the most expressive multimedia ontology to date, formally grounded in the decidable SROIQ description logic and complemented by a role box and DL-safe rules unmatched in multimedia ontology engineering. Specifically designed for video representations, VidOnt covers the professional video production and distribution domains, and defines fundamental video production concepts, video characteristics, spatiotemporal segments, and the relationships between them, aligned with standard concept definitions. It is the very first video ontology that exploits the entire range of mathematical constructors of OWL 2 DL, rather than the small subset of constructors available in OWL DL. According to L. Sikos, the Visual Descriptor Ontology, which was published as an "ontology for multimedia reasoning" [10], has in fact a rather limited reasoning potential, because it is based on the constructors of the basic description logic ALH.

² Downloadable at www.w3.org/ns/ma-ont.rdf.
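As a quick way to inspect the published OMR vocabulary, the snippet below loads the RDF/OWL file mentioned in the footnote with rdflib (assuming the library is installed and the URL is reachable); the counts obviously depend on the published file, so the numbers in the comments are only indicative.

```python
# Sketch: load the Ontology for Media Resources and list its classes/properties with rdflib.
from rdflib import Graph
from rdflib.namespace import RDF, OWL

g = Graph()
g.parse("http://www.w3.org/ns/ma-ont.rdf")  # URL given for the OMR recommendation

classes = set(g.subjects(RDF.type, OWL.Class))
obj_props = set(g.subjects(RDF.type, OWL.ObjectProperty))
data_props = set(g.subjects(RDF.type, OWL.DatatypeProperty))

print(len(classes), "classes")              # the paper reports 14 classes
print(len(obj_props), "object properties")  # and 56 object properties
print(len(data_props), "datatype properties")

for c in sorted(classes):
    print(" -", c)
```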


Fig. 2 Web annotation architecture (source: https://www.w3.org/annotation/diagrams/annotation-architecture.svg)

Unfortunately, VidOnt remains a limited ontology in the field of e-learning, since its reasoning essentially concerns video scenes only, without covering the other types of multimedia resources which play an important role in e-learning. In addition, as in many other multimedia ontologies, the definition of the essential concepts and rules of spatiotemporal annotation of the various educational resources remains absent, whether for advanced annotations based on automatic annotation ontologies or for the manual annotations added later, which play a vital role in evaluating and improving these resources. Note that, to our knowledge, none of the ontologies already mentioned, nor other multimedia ontologies, incorporates a well-defined vocabulary for educational resources intended for people with communicative learning disabilities. On the other hand, the Web Annotation Data Model [19] specification describes a structured model and format to enable annotations to be shared and reused across different hardware and software platforms. It uses the Web Annotation Vocabulary (WAV) [20] expressed in RDF.

3 Multimedia Resources Ontology The Multimedia Resources Ontology (MMRO) is an extension of VidOnt [9] that adapts it to the specific context of a semantic multimedia e-learning platform and to the Web Annotation Vocabulary. However, MMRO is designed in such a way that it can be reused in other domains. The modelling of MMRO was carried out using the Protégé editor [18]. Figure 3 represents the hierarchy of the main classes. The class MultimediaResource is the core of MMRO; it contains five basic classes, Animation, Audio, Image, Text and Video, for the different types of multimedia, which in turn contain subclasses. The class Video allows to describe a video which can be annotated (AnnotatedVideo)


Fig. 3 Class hierarchy of MMRO

or non-annotated (NonAnnotatedVideo), validated (ValidatedVideo) or non-validated (NonValidatedVideo), with a specific duration (DurationVideo), a specific topic (TopicVideo), and possibly intended for people with a specific disability (SpecificDisabilityVideo). Note that in a closed world (such as databases), AnnotatedVideo and NonAnnotatedVideo are de facto disjoint. In an open world (such as ontologies), it is not sufficient to express the following assertions in description logic:
NonAnnotatedVideo ≡ Video ⊓ (= 0 hasAnnotation.Annotation)
NonAnnotatedVideo ⊓ AnnotatedVideo ⊑ ⊥
to forbid that an individual belongs to both AnnotatedVideo and NonAnnotatedVideo; the same principle applies to the other equivalent classes. Consider, for example, the individual video2, which is uploaded by (role autoUploadedBy) reader2. This video is made by a simple user and has not been validated by an Expert, so it should be classified as NonValidatedVideo. But this is not the case, since according to the open-world assumption of OWL we cannot assume that video2 is not validated only because no validation is mentioned in the ontology. To solve this problem we had to add a closure axiom (see Fig. 4) to the ontology, specifying that video2 is validatedMultimediaResourceBy exactly 0 Expert, which directly led to classifying this individual as NonValidatedVideo. The class Annotation is a primordial class in MMRO, since it is built according to W3C recommendations and represents a basic improvement over other annotation ontologies. There are three main subclasses: AnnexAnnotation, which represents the annotations and descriptions all around the multimedia resource


Fig. 4 Example of closure axiom.
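To make the open-world issue concrete, the following sketch shows how the cardinality-based definition, the disjointness axiom, and an individual-level closure axiom could be expressed with owlready2; the class, property, and individual names mirror the paper's vocabulary, but the code itself is only an illustration, not the authors' implementation.

```python
# Sketch of the NonAnnotatedVideo / NonValidatedVideo modelling with owlready2 (illustrative).
from owlready2 import Thing, ObjectProperty, AllDisjoint, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/mmro-sketch.owl")  # hypothetical IRI

with onto:
    class MultimediaResource(Thing): pass
    class Video(MultimediaResource): pass
    class Annotation(Thing): pass
    class Expert(Thing): pass

    class hasAnnotation(ObjectProperty):
        domain = [MultimediaResource]; range = [Annotation]
    class validatedMultimediaResourceBy(ObjectProperty):
        domain = [MultimediaResource]; range = [Expert]

    class AnnotatedVideo(Video): pass
    class NonAnnotatedVideo(Video): pass
    # NonAnnotatedVideo ≡ Video ⊓ (= 0 hasAnnotation.Annotation)
    NonAnnotatedVideo.equivalent_to = [Video & hasAnnotation.exactly(0, Annotation)]
    AllDisjoint([AnnotatedVideo, NonAnnotatedVideo])

    class ValidatedVideo(Video): pass
    class NonValidatedVideo(Video): pass
    NonValidatedVideo.equivalent_to = [Video & validatedMultimediaResourceBy.exactly(0, Expert)]
    AllDisjoint([ValidatedVideo, NonValidatedVideo])

    video2 = Video("video2")
    # Closure axiom on the individual: video2 is validated by exactly 0 experts.
    video2.is_a.append(validatedMultimediaResourceBy.exactly(0, Expert))

sync_reasoner()  # HermiT, bundled with owlready2 (requires a Java runtime)
# After classification, video2 should fall under NonValidatedVideo.
print(NonValidatedVideo in video2.is_a or NonValidatedVideo in video2.INDIRECT_is_a)
```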

(CommentAnnotation, DescriptionAnnotation, KeyMomentAnnotation); TextAnnotation, which is the parent class of all the annotations which provide textual information included in the multimedia resource (CalloutAnnotation, CutAnnotation, PauseAnnotation, TitleAnnotation, SubtitleAnnotation); and finally ZoneAnnotation, which concerns annotations that can be placed on a multimedia resource as expressive rendering or automatic object detection (it contains the subclasses RapidShapeAnnotation and AdvancedShapeAnnotation). The class ControlledTerm allows to describe terms used in the multimedia resources; terms represent the relevant terms used in the text or audio of a multimedia resource. The class User has the subclasses Creator (for content creator), Publisher and Reader, taken from the Web Annotation Model (see Fig. 2). The Reader is a viewer who plays a critical role in the content ecosystem, interacting with the content in a number of ways. With annotations and by auto-uploading multimedia resources, a reader can engage in more active reading, but she/he needs the validation of an expert. We add the subclass Author, to know the author of any resource, and the class Expert, representing a user who has a great deal of knowledge or skill in a particular topic of a multimedia resource and has the right to validate a multimedia resource auto-uploaded by a simple reader. The equivalent-class definition of Author in Description Logic is:
Author ≡ ∃autoUpload.MultimediaResource ⊔ ∃create.MultimediaResource
The class Review models judgment or discussion of the quality of something. Review also means to go over a subject again as part of study, or to look at something another time; a review is a critique of something, a look at its good and bad points. The class Review has five subclasses: Comment, Dislike, Like, Rating and View. Each of them has an equivalent class represented by an assertion, and some assertions represent disjointness. For example, we define the class Comment with the following assertions expressed in Description Logic:
Comment ≡ ∃isCommentOn.MultimediaResource
Comment ⊓ Dislike ⊑ ⊥
Comment ⊓ Like ⊑ ⊥
Comment ⊓ Rating ⊑ ⊥
Comment ⊓ View ⊑ ⊥


Fig. 5 Graph of MMRO

Figure 5 represents the graph of MMRO where the classes are linked to others by roles (i.e. Object Property in Protégé, represented by a coloured dotted line in ontoGraph tab and named Arc Types at the right side of Figure). For example, the class Multimedia Resour ce is linked to Review by the role has A Review (green dotted line in Fig. 5) and the inverse role is is ReviewOn. A hierarchy of roles establishes that has Alike is a sub-role of has A Review and linking Multimedia Resour ce to Like. For some roles, we add specific characteristic such as functional, reflexive, symmetric, transitive, etc., and a chain operator. For example, the role has EquivalentContr olledT er m is symmetric and transitive (cyan dotted line in Fig. 5), and the role Annotate has a chain cr eate Annotation ◦ is Annotation O f . This chain expresses the fact that if a user U creates an annotation A ( i.e. cr eate Annotation(U, A) ) for a multimedia resource M R ( i.e. is Annotation O f (A, M R) ) then the reasoner infers automatically the knowledge annotate(U, M R). We experimented the different axioms using 3 individuals per class with different roles to check if the reasoner HermiT [15] infers knowledge that we assumed. The quality of MMRO was checked in two ways: the quality of the consideration of e-learning context and the quality of the schema which takes into account the special meaning of the OWL-Schema-definition constructs for the calculation of metrics on the ontology structure. By using ontoMetrics we have calculated the different parameters which allow to compare the schema metrics of MMRO with metrics of other ontologies (cf. Table 1). For the other way, and in the context of comparison with other multimedia resources ontologies, MMRO, that have a DL Expressivity of SRIQ, is more complete for e-learning context: add some specific classes used by training ontology and user profile ontology to build personalized learning path (for example, disability is possible; MMRO allows to get another version of a video taking into account some user’s disability. Table 1 shows schema metrics which address the design of the ontology. These metrics3 are important in the scope of quality of the model; let us keep in mind that it is not addressed the quality to represent the domain knowledge to be considered. 3

see definitions at ontometrics.informatik.uni-rostock.de/wiki/.../Schema_Metrics.


Table 1 Schema metrics (source: OntoMetrics)
Ontology | Attribute richness | Inheritance richness | Relationship richness | Equivalence ratio | Axiom/class ratio | Inverse relation ratio | Class/relation ratio
OMR | 2.071429 | 0.571429 | 0.903614 | 0.0 | 22.642857 | 0.375 | 0.168675
VidOnt | 0.605714 | 0.765714 | 0.665 | 0.057143 | 8.171429 | 0.105042 | 0.4375
MMRO | 0.122363 | 0.978903 | 0.459207 | 0.14346 | 6.037975 | 0.373333 | 0.552448

MMRO has the highest inheritance richness because it has been specialised for e-learning but is still shallow. This favours its reuse. The other richness criteria are low compared to OMR and VidOnt. This is due to the fact that we wanted a high degree of modularity of the ontologies used in e-learning platforms, which favours interoperability. Reusability and interoperability are two very important notions for the FAIR principles and W3C standards.
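As an illustration of how such schema metrics can be approximated, the snippet below computes an inheritance-richness-style value (subclass axioms per named class) from any OWL/RDF file with rdflib; this follows the commonly cited OntoQA-style definition, which may differ in detail from the exact formulas used by the OntoMetrics service.

```python
# Sketch: approximate inheritance richness = |subclass axioms| / |named classes| (rdflib).
import sys
from rdflib import Graph, URIRef
from rdflib.namespace import RDF, RDFS, OWL

g = Graph()
g.parse(sys.argv[1])  # path or URL of an OWL/RDF file, e.g. an MMRO or OMR export

named_classes = {c for c in g.subjects(RDF.type, OWL.Class) if isinstance(c, URIRef)}

subclass_axioms = sum(
    1
    for child, parent in g.subject_objects(RDFS.subClassOf)
    if isinstance(child, URIRef) and isinstance(parent, URIRef)  # skip restrictions/blank nodes
)

if named_classes:
    print("classes:", len(named_classes))
    print("subclass axioms:", subclass_axioms)
    print("inheritance richness ~", subclass_axioms / len(named_classes))
```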

4 Future Works In this paper, we presented a new multimedia resource ontology used in the scope of a semantic platform for collaborative and adaptive e-learning that takes the FAIR principles into account. It is inspired by VidOnt, but it is focused on e-learning. MMRO is integrated in a network of ontologies that includes a training ontology and a user profile ontology, but MMRO can also be used independently. The aim is twofold: (i) to facilitate the semantic search of resources from different points of view (content, term, duration, language, expertise, annotation/commentary, etc.), and (ii) to facilitate exchanges between learners, trainers and experts through videos, especially for manual trades. The next steps of our future work will consist of improving the interoperability of MMRO with other ontologies by creating alignments semi-automatically [1], but also with e-learning platforms such as Moodle LMS [17], Chamilo [13], etc. Then, it will be interesting to implement an automatic semantic annotation engine, which will be complementary to the experts' annotations.

References 1. Atencia M, David J, Euzenat J (2021) On the relation between keys and link keys for data interlinking. Semant. Web - Interoperab Usab Appl 12(4):547-567. IOS Press. https://hal. archives-ouvertes.fr/hal-03426150/ 2. Baader F, Nutt W (2003) Basic description logics. In: Baader F, Calvanese D, McGuinness Deborah L, Nardi D, Patel-Schneider PF (eds) The description logic handbook: theory, implementation, and applications, pp 43-95. Cambridge University Press (2003)


3. Gruber TR (1993) A translation approach to portable ontologies. Knowl..Acquis 5(2):199–220 4. Gruber T: Ontology. https://queksiewkhoon.tripod.com/ontology_01.pdf 5. Guarino N, Oberle D, Staab S (2009) What is an ontology? In: Staab S, Studer R (eds) Handbook on Ontologies. IHIS. Springer, Heidelberg, pp 1–17. https://doi.org/10.1007/978-3-54092673-3_0 6. Poli R, Kameas A, Seremeti L (229) Ontology and multimedia. In: Pagani M (ed) Encyclopedia of multimedia technology and networking, second edition. IGI Global, pp 1093-1099. https:// doi.org/10.4018/978-1-60566-014-1.ch148 7. Rinaldi AM, Russo C (208) A Matching framework for multimedia data integration using semantics and ontologies. In: 2018 IEEE 12th international conference on semantic computing (ICSC). IEEE Press, pp 363-368. https://doi.org/10.1109/ICSC.2018.00074 8. Rudolph S: Karlsruhe Institute of Technology Germany. https://www.aifb.kit.edu/images/1/ 19/DL-Intro.pdf 9. Sikos LF (2018) VidOnt: a core reference ontology for reasoning over video scenes. J Inf Telecommun 2(2):192-204. Taylor & Francis. https://doi.org/10.1080/24751839.2018. 1437696 10. Simou T, Avrithis S, Kollias S (2005) A visual descriptor ontology for multimedia reasoning. In: 6th international workshop on image analysis for multimedia interactive services, montreux 11. Sjekavica T, Obradovi´c I, Gledec G (2013) Ontologies for multimedia annotation: an overview. In:4th European conference of computer science (ECCS2013), pp 123-129 (2013) 12. Studer R, Benjamins R, Fensel, D (1998) Knowledge engineering: principles and methods. Data Knowl Eng 25(1-2):161-198 13. Chamilo LMS. https://chamilo.org/en/ 14. FAIR. www.go-fair.org/fair-principles/ 15. HermiT reasoner. http://www.hermit-reasoner.com/ 16. Ontology for Media Resources 1.0, W3C Recommendation (2012). https://www.w3.org/TR/ mediaont-10/ 17. Moodle LMS. https://moodle.org/?lang=en 18. Protégé editor. https://protege.stanford.edu/ 19. Web Annotation Data Model, W3C Recommendation (2017). https://www.w3.org/TR/ annotation-model/ 20. Web Annotation Vocabulary, W3C Recommendation (2017). https://www.w3.org/TR/ annotation-vocab/

Business Process Management in the Digital Transformation of Higher Education Institutions Juan Cárdenas Tapia, Fernando Pesántez Avilés, Jessica Zúñiga García, Diana Arce Cuesta, and Christian Oyola Flores

Abstract The current demands in industry 4.0 as the automatization of business processes propel Digital Transformation. Higher Education Institutions are not exempt from these demands. In this context, Business Process Management promotes such transformation. Higher Education Institutions know the importance of processes and technology in their institutions; however, it faces challenges in managing them. This is particularly observable in the Latin-American Higher Education Institutions due to the management models in many cases are not documented. Thus, this research suggests that support mechanisms in the use of Business Process Management as a conductor of Digital Transformation are necessary. This research aims to provide orientation to managers of Higher Education Institutions about the use of Business Process Management as a conductor of Digital Transformation. This paper presents an approach that mixes features of Business Process Management with elements of Higher Education Institutions. The applicability of this approach is illustrated through a case study. This paper used participatory action research framed in an initiative of the Digital Transformation project of the Salesian Polytechnic University in Ecuador. The results show the potential of this proposal as a support element for Business Process Management uses in the Digital Transformation context of Higher Education Institutions. The main advantage of this approach is the opportunity to have an overview, organized, and unified process activities. J. C. Tapia · F. P. Avilés · J. Z. García (B) · D. A. Cuesta · C. O. Flores Research Group in Management, Information and Technology (LabGIT), Salesian Polytechnic University, Cuenca, Ecuador e-mail: [email protected] J. C. Tapia e-mail: [email protected] F. P. Avilés e-mail: [email protected] D. A. Cuesta e-mail: [email protected] C. O. Flores e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_51


Keywords Digital Transformation · Higher Education · Processes · BPM

1 Introduction The current technological advances and unexpected events such as the COVID-19 pandemic have changed the policies for the appropriate use of science and technology in Higher Education Institutions (HEIs). The pandemic accelerated the incorporation of technology, leading HEIs to face new social and technological challenges [1]. For this reason, there is a growing interest in driving universities toward Digital Transformation. Nonetheless, technology alone is not transformative; therefore, guiding an effective Digital Transformation involves several challenges. These are focused on complex interactions between people and their cultural change, people and processes, as well as people and technology, interactions that have an impact on the effectiveness of any transformation [2]. Ecuadorian Higher Education Institutions, and Latin American institutions with similar features, face accelerated growth, which increases the challenges to their Digital Transformation [3]. Thus, there is a risk of an accelerated introduction of new technologies without prior evaluation of their role inside the business [4]. In this context, it is important to note the role of Business Process Management (BPM) as an element to drive Digital Transformation in an organization [5–9]. Baiyere et al. [7] argue that Digital Transformation provides a unique opportunity to perfect the existing BPM logic and extend it beyond its theoretical limits. According to Stjepić and Vugec [5], Digital Transformation must be observed as technological and strategic change integrated into the critical business processes. Although the literature reflects a growing interest in the use of BPM in Digital Transformation environments, there is limited information that guides its application in real and complex environments such as HEIs [5]. To address this gap, we suggest that HEI managers, such as institutional authorities and directors, as well as process analysts, need guidelines to implement BPM and hence Digital Transformation in their institutions. In this paper, we introduce our work towards an approach for BPM in the Digital Transformation context of HEIs. This work is framed in an initiative of the Digital Transformation project of the Salesian Polytechnic University (UPS) in Ecuador. This research aims to provide orientation to HEI managers about the use of BPM as a conductor of Digital Transformation. This paper presents and discusses an approach composed of three areas of action. These areas integrate BPM elements with HEI features. We limit this work to the analysis and lifting of processes, as a result of the first stage of this research. Afterwards, we validated the implementation of the results through a case study. This paper is organized as follows. Section 2 addresses the general background of this work. Section 3 details the general approach. Section 4 details the implementation of this approach. Section 5 presents the conclusions and future work.


2 Understanding Business Process Management in the Digital Transformation Context According to Fischer et al. [9], organizations use BPM to reduce costs and to increase customer orientation, transparency, and product and service quality. BPM permits analyzing how activities are carried out in an organization, which consolidates knowledge about how to manage the design and redesign of business processes [7]. However, in a Digital Transformation context, BPM must be adapted to the social, technological, and operational reality of an organization. Therefore, every company needs to develop an individual strategy to successfully address BPM in the Digital Transformation context. According to Furjan et al. [10], Digital Transformation is broadly recognized within industry and the academic community as an improvement to doing business through digital technology. In the HEI context, Digital Transformation aims to redefine educational services and products, as well as to redesign academic, administrative, and operational processes, with priority attention to the needs of internal and external customers [11]. To achieve that aim, Matkovic et al. [11] suggest using at least one of the following perspectives: (1) services transformation, analyzing product and service features for their redesign; (2) operations transformation, i.e., creation or redesign of processes, activities, and operations as a basis for service redesign; (3) services and operations integration. This research is focused on the second perspective, the redesign of processes. The related literature reflects the BPM role in Digital Transformation [8]. Fischer et al. [9] examine how companies use BPM to implement Digital Transformation. In the HEI setting, Mora and Sánchez [12] propose a theoretical model to implement Digital Transformation. Cruz et al. [13] describe the benefits of adopting BPM at HEIs. For Pridmore and Godin [14], BPM is an effective strategy to implement technological solutions; in addition, the authors defend the importance of including BPM in academic curricula. Although the literature shows the importance of BPM as a Digital Transformation conductor [11, 13], there is not enough guidance about how to implement BPM in the reality of HEIs. Thus, in the next section, we present an approach to support managers and process analysts.

3 An Approach for Business Process Management in the Digital Transformation of Higher Education Institutions Matkovic et al. [11] claim that Digital Transformation demands a strategy and an organized approach to introducing new technologies, since these may represent improvements or dangers for the institution. For this reason, it is necessary to establish processes for the adoption of digital initiatives in HEIs.


Fig. 1 Structure of Business Process Management in Digital Transformation context of Higher Education Institutions

As a first action, it is important to define a general process map of the institution. In this sense, processes may be classified as management, central, and support processes. Moreover, it is necessary to consider participation roles such as coordinator, analyst, and expert user, as well as support tools for the different phases of the BPM lifecycle: modeling and analysis, implementation, execution, monitoring, and optimization. From these components, this research developed a basic structure for the use of BPM in the Digital Transformation context of HEIs. This structure combines elements of the BPM lifecycle identified in the literature with HEI features identified through participatory action research framed in an initiative of the Digital Transformation project of the Salesian Polytechnic University (UPS) in Ecuador. In addition, a process and a sub-process were developed to guide the use of this approach. According to Fig. 1, this structure contemplates three action areas: Processes, General Processes Committee, and Technical Secretariat of Information Technologies. The Processes Area may be composed of professionals in computing, process, and industrial engineering. This area addresses the mapping and modeling of the flow of business processes. For that, this research suggests executing four activities: (a) process mapping and identification; (b) process analysis, lifting with users, and process documentation; (c) analysis and lifting of process improvements; (d) process publication and socialization with users. The General Processes Committee aims to structure, strengthen, and ensure BPM, as well as to review processes and recommend their approval to the Higher Council. The Technical Secretariat of Information Technologies automates processes and implements improvements in software applications. These components interact with each other to achieve the Digital Transformation objectives. Such interaction is presented through a general process in Fig. 2, which reflects the relationships between the three areas of the proposed structure.


Fig. 2 Orientation process for the use of the proposed structure. Business Processes management in the Digital Transformation context in Higher Education Institutions

According to Fig. 2, the Processes Area suggests two subprocesses: process analysis and lifting, and analysis and lifting of process improvements. In this paper, we detail only the first subprocess (see Sect. 3.1). Afterwards, the process flow directs the user to a second action area called the General Processes Committee. In this committee, the approval of a process by the Higher Council is recommended. If a process is approved, two activities are executed in parallel. The Processes Area requests the Technological Solutions Department to publish the process in the institutional web portal. At the same time, the Processes Area sends the process to the Technical Secretariat of Information Technologies action area, which manages the automation of the process and the implementation of improvements to the software. For those activities, we suggest executing two subprocesses not addressed in this research.

3.1 Subprocess: Analysis and Lifting of Process Improvements This subprocess is divided into three sections: (1) preparation; (2) documentation and process flow modeling; (3) validation and approval (Fig. 3). These sections are executed in the order mentioned above. Preparation: In the first stage it is important to define a role for every member of the Business Process Management team. These roles are general coordinator, process analyst, and expert user. In the HEI context, this proposal also suggests a new role to support the general coordination: a lead analyst for both academic and administrative processes.


Fig. 3 Subprocess: analysis and lifting of process improvements

This subprocess begins with the process to be lifted (Fig. 4). Then, two activities must be executed in parallel. On the one hand, information and regulations of the specific process are collected. On the other hand, the analysts coordinate a meeting with expert users from the areas involved in the process. In the lifting of a process, the analyst records the inputs of the expert users in a SIPOC matrix (see the example in Table 1). However, when the HEI does not have defined policies, the analysts need to request the elaboration of policies from the departmental head of the process. Once there is a baseline policy, the analyst repeats the documentation collection, SIPOC matrix, convocation, and process lifting in general. Then, the analyst models the process flow. Fig. 4 Preparation


Table 1 SIPOC analysis
Process: Request for payment extensions. Department: Student Wellbeing Department
Supplier (S) | Inputs (I) | Process / Activities (P) | Outputs / Annexes (O) | Customer (C) | Process policies
Applicant (student) | Student data | Register payment extension request | Extension request | Student | Extension request only for the registration process
Department manager | Payment extension request | Analyze payment extension request | Report of amounts owed | |
Campus vice chancellor | Payment extension request | Approve payment extensions | Amount approved | Student | Application approved by the Vice Chancellor
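Where a process team wants to keep SIPOC rows in a machine-readable form (for instance, to generate the process documentation later), a lightweight record such as the following could be used; this is purely an illustration and not part of the approach described by the authors.

```python
# Illustrative sketch: a minimal data structure for SIPOC rows captured during process lifting.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SipocRow:
    supplier: str
    inputs: str
    activity: str
    outputs: str
    customer: str = ""
    policy: str = ""

@dataclass
class SipocMatrix:
    process: str
    department: str
    rows: List[SipocRow] = field(default_factory=list)

payment_extensions = SipocMatrix(
    process="Request for payment extensions",
    department="Student Wellbeing Department",
    rows=[
        SipocRow("Applicant (student)", "Student data",
                 "Register payment extension request", "Extension request",
                 customer="Student",
                 policy="Extension request only for the registration process"),
        SipocRow("Campus vice chancellor", "Payment extension request",
                 "Approve payment extensions", "Amount approved",
                 customer="Student",
                 policy="Application approved by the Vice Chancellor"),
    ],
)

for row in payment_extensions.rows:
    print(f"{row.supplier} -> {row.activity} -> {row.outputs}")
```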

Process Flow Modeling and Documentation: In this section, we suggest modeling the current process (AS-IS) and, when there is no previous process model, carrying out an added-value analysis according to Table 2. This table is based on the philosophy of value analysis proposed by Miles [15] as a technique to identify the steps that generate value in a process (Fig. 5).
Table 2 Classification of the process flow activities
Activity | How to identify
Activities that generate added value for the customer (student) | Activities that promote value or satisfaction for the customer. Example: issuance of electronic certificates
Activities that generate added value for the business (department) | Activities that are necessary for the process to be executed. Example: registration of titles
Activities that do not generate added value | Activities that are not required for the execution of the process. Example: printing a vacation request
Fig. 5 Modeling and Documentation


Fig. 6 Validation and Approval

After, model the redesign process (TO–BE). For that, the added value analysis is used as an instrument to justify changes in the process flow and the redesign proposal. Next, a review of the redesign process is necessary, for that, this research suggests including Software Development and User Support areas. Also, we suggest using the ISO standard 9001:2015 in the process documentation to ensure a model for continuous improvement. Finally, to validate the documentation and define a new version, the “preliminary version”. Validation and Approval: For the approval of the process, this approach recommends validating the archival references and the legal frame, according to Fig. 6. These actions may generate changes in the preliminary version, and from that, analysts get a final version. Finally, the coordinator presents the documentation to the general committee. This committee reviews and recommend the approval of the final documentation to the Higher Council.

4 Processes Analysis and Lifting at Salesian Polytechnic University To validate this proposal, we present a case study that illustrates how the processes team of the Salesian Polytechnic University used this approach to the processes analysis and lifting in the Student Wellbeing Department. In this department, the


Fig. 7 Assignment of social equity scholarship – Student Wellbeing Department, UPS Ecuador

lifted processes were the assignment of the social equity scholarship (Fig. 7) and the social monitoring of students. For that, the analyst captured the related policies and defined the SIPOC matrix. Then, a meeting with expert users was coordinated; in this case, the Technical Secretary of Student Wellbeing and the technical directors of Student Wellbeing from the Quito, Cuenca, and Guayaquil campuses, five people in total. Afterwards, the process analyst identified that UPS did not have a process to capture students' socioeconomic information. Therefore, the need to improve the Family Affidavit Form was identified in order to define a more realistic differentiated tariff system. Next, the process was modeled and a second meeting with expert users was held; in that meeting, the software development department was included. Then, documentation activities and process approval were completed according to the proposed approach. The use of this proposal reflects its potential in a real environment. According to the process analysts' criteria, the main advantage of this approach is the opportunity to have an organized and unified overview of the process activities. This is especially relevant for UPS because it has a presence in three cities in Ecuador. According to the process analysts, the design principles suggested in this research as support for Digital Transformation are useful elements. The proposed approach allows prioritizing process elements and identifying relevant actors. Finally, the process analyst team claims that the inclusion of the software department is fundamental because it allows analyzing the technological inclusion in a real context.


5 Conclusions and Future Works This paper suggests that both HEI managers and process analysts need guidelines for the implementation of BPM in the Digital Transformation context. Therefore, support mechanisms were presented in this work, corresponding to a general approach and a subprocess, specifically the analysis and lifting of process improvements. The research results presented in this paper do not pretend to be a definitive guide, but rather an orientation and support tool. The contributions presented here were based on and validated against the experience of the Salesian Polytechnic University in the context of its Digital Transformation project. From that, we conclude that this research may be useful for HEIs that seek process analysis and lifting. The proposed approach provides an overview of actions to achieve operationalization, execution time, and user satisfaction. Moreover, this approach may assist inexperienced analysts, as well as those who have just joined a process team; this work is a guideline to start their activities by considering the most relevant points. This approach was evaluated as positive and useful for process lifting in the HEI context. Therefore, UPS is using it for the lifting of academic and administrative processes. However, it is important to highlight some limitations. First, when an HEI does not have support areas as defined in this approach, it is necessary to develop them to take full advantage of this proposal. Second, to achieve a practical application of the approach, it is necessary to synthesize the research results. Next, assessments in other Ecuadorian HEIs are necessary. On the other hand, this paper provides advances in the understanding of Business Process Management in the Digital Transformation context; however, it still does not offer support in technological features for an effective Digital Transformation. Future work points to developing subprocesses for process automation and the implementation of improvements in software and, finally, through the integration of the action areas, to developing and validating a complete approach for Business Process Management in the Digital Transformation context of Higher Education Institutions.


Collaborative Strategies for Business Process Management at Salesian Polytechnic University

Juan Cárdenas Tapia, Fernando Pesántez Avilés, Diana Arce Cuesta, Jessica Zúñiga García, and Andrea Flores Vega

Abstract Business Process Management has been extensively discussed in the literature. However, the collaboration between process analysts, expert users, and decision-makers has been little explored. This is evident when Business Process Management is used as a conductor of Digital Transformation. This research aims to generate knowledge for Business Process Management in collaborative environments. Thus, it presents an empirical study in which collaborative strategies for Business Process Management focused on Digital Transformation were identified and analyzed. This work is framed within a Digital Transformation initiative of the Salesian Polytechnic University (UPS) of Ecuador. Using a case study, this document provides an overview of collaborative activities for process teams. The research reflects the importance of generating cooperative physical spaces, as well as providing communication strategies. Conducting process lifting sessions was indicated as a good practice. These results may be useful for process teams and managers of Higher Education Institutions who seek process improvement in their institutions.

Keywords Collaboration · Business Process Management · Digital Transformation

J. C. Tapia · F. P. Avilés · D. A. Cuesta · J. Z. García (B) · A. F. Vega Research Group in Management, Information and Technology (LabGIT), Salesian Polytechnic University, Cuenca, Ecuador e-mail: [email protected] J. C. Tapia e-mail: [email protected] F. P. Avilés e-mail: [email protected] D. A. Cuesta e-mail: [email protected] A. F. Vega e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_52


1 Introduction

Permanent social and technological changes have accelerated Digital Transformation in companies of different natures. Higher Education Institutions (HEIs) are not the exception, due to the need to provide quality services appropriate to the environment in which they are immersed, with educational services focused on online, on-campus, hybrid, and dual-modality courses. Based on these needs, HEIs bet on Business Process Management (BPM) as a means to achieve Digital Transformation [1–3]. The lifecycle of Business Process Management involves the analysis, lifting, modeling, and automation of processes. Nevertheless, the use of Business Process Management as a conductor of Digital Transformation goes beyond its lifecycle: it involves interactions between process analysts, expert users, process coordinators, and decision-makers. These interactions, in turn, involve collaboration strategies that allow the organization's aims to be achieved. However, collaboration in BPM, specifically in the reality of Latin American Higher Education Institutions, is scarcely addressed in the literature. According to Wang et al. [4], collaboration is a critical factor for business process performance; however, there is limited research on the impact of collaborative practices in Business Process Management. In that context, Business Process Management teams do not have references on the application of collaborative strategies in their institutions. Thus, it is relevant to understand how collaborative practices in the context of Business Process Management allow processes in an organization to be optimized. This paper focuses on the research question: How may Latin American Higher Education Institutions use collaborative strategies in Business Process Management teams? To address this problem we consider that, in the first instance, empirical research is necessary to analyze the collaborative nature of Business Process Management practices. In this context, it is necessary to study process analysts' practices, the interactions between the analyst team and the staff involved, as well as support tools and artifacts. This paper presents a case study in which collaborative strategies between process analysts, expert users, process coordinators, and decision-makers were identified and discussed. This work is framed within the Digital Transformation project of the Salesian Polytechnic University in Ecuador. The research aimed to generate useful knowledge for Business Process Management teams, specifically guidelines for the design of collaborative work environments in the Digital Transformation context of Latin American HEIs. As a result, this paper presents the collaborative strategies applied by the Business Process Management team of the Salesian Polytechnic University (UPS). Moreover, difficulties faced in the application of collaborative strategies are discussed. The results were analyzed according to the 3C model by Fuks et al. [5], in which collaboration is addressed through three dimensions: communication, coordination, and cooperation. The results were structured according to four interaction types.


First, interactions between the processes coordinator and process analysts. Second, interactions between process analysts and expert users. Third, interactions among process analysts. Fourth, interactions between the processes coordinator and decision-makers. In the case study of this paper, decision-makers correspond to institutional authorities and directors. This paper is organized as follows. Section 2 addresses the general background of this work. Section 3 details the research methodology. Section 4 describes the case study and discussion. Section 5 presents conclusions.

2 Collaboration and Business Process Management

In work teams, the use of collaborative strategies generates positive results and is viewed as a key factor in project success [6]. According to the 3C collaboration model by Fuks et al. [5], collaboration is associated with three dimensions: communication, coordination, and cooperation. Communication is the exchange of messages and negotiation in the team. Coordination addresses the management of activities, resources, and people. Cooperation is the production that takes place in the shared workspace [5]. The 3C model has been used as a basis for implementing collaborative systems, as well as an analysis tool for various purposes. It has been used to analyze technologies for collaborative services [7], to define collaborative tools in the development of virtual reality applications [8], and to represent dynamics of collaborative learning in communities [9]. In the context of Business Process Management, workflow control, as well as process resources, are considered collaborative patterns [4]. Collaboration in Business Process Management needs a systematic approach that facilitates both collaboration and communication between stakeholders [10]. According to Ikhsan et al. [11], a business process is a series of structured activities that seek to solve a problem, generating efficient and effective activities in a company. Business Process Management technology continues to be developed and implemented on small and large scales. On the other hand, a collaborative process is not only a business process but also a mechanism that allows one process to communicate, interact, and execute with other processes in order to achieve the planned objectives [11]. Nowadays, organizations use collaborative processes to facilitate interorganizational interactions. From that, organizations generate good practices, regardless of the features and reality of each organization. This paper presents the collaborative strategies applied by the Salesian Polytechnic University in its Business Process Management. The next section details the research methodology of this study.


3 Research Methodology

This research was based on an empirical study of the interpretive, qualitative type. A preliminary literature review was used to explore and understand collaborative practices. Moreover, data capture techniques were used to identify the collaborative strategies used by the processes team of the Salesian Polytechnic University. The results are presented as a case study. Observations and interviews were used as data capture techniques. Fifty hours were spent observing the processes team of the Salesian Polytechnic University; the aim of the observations was to understand the interaction among 6 process analysts, 1 process coordinator, and 2 expert users. In addition, interviews were conducted with 4 process analysts and 1 process coordinator. Questions focused on teamwork, functions, negotiation, meetings, communication, technology, coordination, physical space, and cooperation in building products. The interviews and observations were recorded to explore collaborative strategies. Results were analyzed according to the three dimensions of the 3C model by Fuks et al. [5]: communication, coordination, and cooperation. At the same time, these dimensions were addressed and structured according to the four interaction types identified in this research: first, interactions between the processes coordinator and process analysts; second, interactions between process analysts and expert users; next, interactions among process analysts; and finally, interactions between the processes coordinator and decision-makers. These interactions are detailed in the next section.
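As a side illustration of this coding step, the following Python sketch (a hypothetical example, not a tool used by the authors) tallies coded observation excerpts into a matrix of 3C dimensions by interaction type; the record structure and sample values are assumptions made only for the illustration.

```python
from collections import Counter

# The 3C dimensions and the four interaction types identified in this study.
DIMENSIONS = ("communication", "coordination", "cooperation")
INTERACTIONS = (
    "coordinator-analysts",
    "analysts-expert_users",
    "analysts-analysts",
    "coordinator-decision_makers",
)

# Hypothetical coded excerpts; in practice there would be one record per
# excerpt drawn from the 50 hours of observation and the interviews.
observations = [
    {"dimension": "coordination", "interaction": "coordinator-analysts"},
    {"dimension": "communication", "interaction": "analysts-expert_users"},
    {"dimension": "cooperation", "interaction": "analysts-analysts"},
]

def tally(records):
    """Count coded excerpts per (3C dimension, interaction type) cell."""
    counts = Counter((r["dimension"], r["interaction"]) for r in records)
    return {dim: {it: counts.get((dim, it), 0) for it in INTERACTIONS}
            for dim in DIMENSIONS}

for dimension, row in tally(observations).items():
    print(dimension, row)
```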

4 Collaborative Strategies for Business Process Management at Salesian Polytechnic University

The identification and understanding of collaborative strategies in Business Process Management depend on the environment in which they are implemented. The use of Business Process Management in the Digital Transformation context demands special attention in Latin American Higher Education Institutions. To explore collaborative strategies in such environments, it is necessary to understand the different interactions, collaboration support tools, and work practices in Higher Education Institutions. In this section, the collaborative strategies of Business Process Management at the Salesian Polytechnic University are presented. UPS is located in Ecuador and has three campuses, in the cities of Cuenca, Quito, and Guayaquil. Attentive to the demands of its environment, UPS is developing a Digital Transformation project. Business Process Management is one area of that project; for this reason, a Business Process Management work team was established. The team was formed by a processes coordinator, 30 process analysts, 10 members of the Software Development department, and UPS authorities as decision-makers.


The coordinator of the processes team monitors the tasks of the process analysts across the three campuses of the Salesian Polytechnic University. Process analysts lift, analyze, and model processes in 27 management areas (Fig. 1). On the other hand, the members of the Software Development department are involved in the lifting and implementation of technological improvements through the SCRUM agile development methodology. Likewise, the authorities of the Salesian Polytechnic University participate as key actors in achieving the aims of Digital Transformation. Based on the Business Process Management experience at the Salesian Polytechnic University, collaborative strategies were explored according to the 3C collaboration model [5]. Figure 2 shows the collaborative features identified in the processes team of the Salesian Polytechnic University. These are detailed in the next subsections.

Fig. 1 Processes by area (UPS)

Fig. 2 Collaborative interactions at Salesian Polytechnic University, according to 3C model dimensions


4.1 Interactions Among Processes Coordinator and Processes Analysts

In the interaction between the processes coordinator and process analysts, strategies of coordination, communication, and cooperation are applied (Fig. 2). In the coordination strategies, the coordinator assigns activities to each analyst. Moreover, meetings between analysts and expert users are coordinated when required. The activities of the processes team respond to academic and administrative management. The assignment of activities is carried out based on a process map of the Higher Education Institution. This map classifies the processes into macro processes of institutional policy, considering institutional prioritization. For Business Process Management, the Salesian Polytechnic University uses a general approach and a core process that guides analysis, lifting, modeling, and improvement implementation. Through direct and continuous interaction with analysts, the processes coordinator ensures the proper use of both the core process and the general approach. The coordinator plans, monitors tasks, and provides support for the team's self-organization. For this, the processes coordinator uses calendars based on collaborative artifacts; in the calendar, the coordinator assigns activities and records planning advances. On the other hand, support for problem solving was another coordination strategy observed. Thus, when disagreements exist between analysts and expert users, the coordinator acts as a mediator (Fig. 2). This is especially relevant when situations arise that delay the work of process analysts. In the communication strategies, it was possible to observe the role of the Microsoft Teams tool as a communication channel between geographically distant team members. This platform is used to exchange information about advances in the lifting and modeling of processes. In general terms, communication within the Business Process Management team at the Salesian Polytechnic University takes place in many ways, virtual and in-person, with or without technological support, at any time according to the availability of those involved. In the cooperation strategies, the coordinator and analysts cooperate in the design and presentation of processes. Thus, the coordinator, together with the analyst, reviews every process in detail to ensure that it is complete, understood by the users, and conforms to the Business Process Model and Notation (BPMN) schema. Processes are modeled in the ADONIS tool (Fig. 3), which supports Business Process Analysis (BPA) based on the Business Process Model and Notation. To facilitate cooperation in the Business Process Management team, the Salesian Polytechnic University uses the Microsoft Teams tool to generate shared spaces.
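To give a flavor of the BPMN schema that the lifted processes must conform to, the sketch below builds a minimal BPMN 2.0 process definition (a start event, one task, and an end event) with the Python standard library; the process identifier, task name, and target namespace are invented for illustration, and the snippet does not reproduce the ADONIS export format used at UPS.

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
ET.register_namespace("", BPMN_NS)

def tag(name: str) -> str:
    """Fully qualified BPMN element name in Clark notation."""
    return f"{{{BPMN_NS}}}{name}"

def minimal_bpmn(process_id: str, task_name: str) -> str:
    """Return a minimal BPMN 2.0 XML document: start event -> task -> end event."""
    defs = ET.Element(tag("definitions"), {"targetNamespace": "http://example.org/processes"})
    proc = ET.SubElement(defs, tag("process"), {"id": process_id, "isExecutable": "false"})
    ET.SubElement(proc, tag("startEvent"), {"id": "start"})
    ET.SubElement(proc, tag("task"), {"id": "task1", "name": task_name})
    ET.SubElement(proc, tag("endEvent"), {"id": "end"})
    ET.SubElement(proc, tag("sequenceFlow"), {"id": "f1", "sourceRef": "start", "targetRef": "task1"})
    ET.SubElement(proc, tag("sequenceFlow"), {"id": "f2", "sourceRef": "task1", "targetRef": "end"})
    return ET.tostring(defs, encoding="unicode")

# Hypothetical example: a one-task version of a scholarship-related process.
print(minimal_bpmn("scholarship_assignment", "Review family affidavit"))
```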


Fig. 3 Example of modeled process by Business Process Management team (UPS, 2022)

4.2 Interactions Among Process Analysts and Expert Users

The interaction between the process analysts and expert users reflects strategies of coordination, communication, and cooperation (Fig. 2). The Salesian Polytechnic University has a general collaborative strategy called "processes lifting sessions". These sessions are scheduled meetings between process analysts and expert users, in which a process analyst leads a group of expert users to lift a specific process. The process analyst thus becomes a group coordinator and acts as a mediator among the divergent opinions of the expert users. This is especially observable in processes that involve users from all campuses of the Salesian Polytechnic University; in this case, process analysts play a fundamental role in reaching mutual consensus and ensuring information transparency. In the lifting sessions, the process is recorded, and changes and recommendations are made according to the analyst's criteria. Analysts record the information in paper notes, complemented by mind maps on collaborative tools (MIRO). Also, the SIPOC matrix is generated. This analysis matrix allows the activities of the process to be traced; suppliers, inputs, outputs, and customers to be identified; the AS-IS process flow model (current process) to be built; and policies and annexes to be documented. This makes it possible to identify opportunities for improvement, generating the flow of the ideal TO-BE process (redesigned process). Consequently, the lifting of business processes is more agile. In addition, resistance to change on the part of expert users is reduced, because the process changes are worked out within the group rather than imposed. Executing process lifting sessions was defined as a good practice by the interviewees. According to the process analysts, setting up individual meetings based on the availability of expert users delays the analysts' work. In these sessions, analysts obtain a process overview; however, details of the process features are still necessary, so new contacts with expert users may be required, though to a lesser extent.
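To make the SIPOC structure concrete, the sketch below shows how one lifting-session record could be captured as a simple data structure; the process name and the example values are hypothetical and are not taken from UPS documentation.

```python
from dataclasses import dataclass, field

@dataclass
class SIPOCRecord:
    """One SIPOC analysis captured during a process lifting session."""
    process: str
    suppliers: list[str] = field(default_factory=list)    # who provides the inputs
    inputs: list[str] = field(default_factory=list)       # documents, data, requests
    as_is_steps: list[str] = field(default_factory=list)  # current (AS-IS) activity sequence
    outputs: list[str] = field(default_factory=list)      # what the process produces
    customers: list[str] = field(default_factory=list)    # who receives the outputs

# Hypothetical example values:
record = SIPOCRecord(
    process="Social equity scholarship assignment",
    suppliers=["Student wellbeing office"],
    inputs=["Family affidavit form"],
    as_is_steps=["Receive form", "Verify socioeconomic data", "Assign scholarship"],
    outputs=["Scholarship resolution"],
    customers=["Student"],
)
print(record.process, "->", record.customers)
```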


The interaction between analysts and expert users also reflects the dimensions of communication and cooperation, during and after the business process lifting sessions. Face-to-face communication may be affected by the limited participation of expert users. In this case, and facing the need for future interactions, UPS analysts identify during the lifting sessions the users who master the process, to be contacted subsequently. The interaction with such users takes place in a virtual mode through the Microsoft Teams tool, email, and IP telephony; in that context, spaces for information exchange and negotiation are generated. The experience of the Salesian Polytechnic University shows that involving a high number of users in business process lifting is inefficient. On the other hand, the cooperation strategies reflect that the analysts' job is not only to build business processes according to user observations but also to understand the user experience. A sense of belonging and shared goals between expert users and process analysts are relevant factors in achieving the team goal, as are the resources provided by the Salesian Polytechnic University for the execution of the Business Process Management team's tasks, such as physical spaces and support technology. Although process lifting sessions have returned positive results, it is important to consider that, today, there is a trend toward virtual work. In that sense, it is necessary to design or adapt tools for computer-supported collaborative work as a complement to those sessions.

4.3 Interactions Among Process Analysts

The interaction between process analysts reflects strategies of communication and cooperation (Fig. 2). Process advances are socialized among all team members, so that information about the interaction between processes (sub-processes) is exchanged and knowledge about certain areas is transferred. Cooperation is fundamental to advance toward the joint development of the process flow. In some cases, to develop a subprocess, analysts need to review processes lifted by other analysts; in those cases, cooperation among process analysts is necessary.

4.4 Interaction Among Process Coordinator and Decision-Makers

To complete the lifting, analysis, and modeling of processes, a validation stage is necessary. The processes raised in the Salesian Polytechnic University that require it are approved in different instances, such as the Academic Council and the Economic Council.


Fig. 4 Institutional Organizational Chart of Salesian Polytechnic University - Ecuador

Afterwards, the processes are sent to the Superior Council (Fig. 4). This activity reflects interactions of communication and cooperation between the processes coordinator and the Salesian Polytechnic University authorities as decision-makers. The Communication and Cooperation dimensions are present in this interaction type (Fig. 2). Communication takes place in virtual and face-to-face modes. Lifted processes are discussed and approved on the third Tuesday of every month; for that, an environment for information exchange and negotiation is created. In this context, the nomenclature used in process modeling is important to facilitate a common understanding between the processes coordinator and the authorities of the Salesian Polytechnic University. The processes coordinator and decision-makers seek the optimization of processes in the institution. Thus, cooperation is present in the joint observation of improvements. The common goal is to eliminate manual and repetitive processes that delay the operational tasks of users (students, administrative staff, and faculty).

5 Conclusions

This work presents a case study of collaboration strategies among a processes team, expert users, and decision-makers at the Salesian Polytechnic University. The collaboration strategies were structured into four types of interaction. For each interaction, this paper analyzed how individuals used elements of coordination, communication, and cooperation, the dimensions of the 3C collaboration model.


This research evidenced the use of technology to support communication and cooperation activities among the members of a team, as well as the coordination practices necessary for the effective lifting, analysis, and modeling of processes. The results provide a view of how Business Process teams may carry out collaborative activities to achieve institutional goals. This may be useful for both Business Process Management teams and managers of Higher Education Institutions who seek to implement process improvements in their institutions. Conducting business process lifting sessions was referred to as a good practice by the interviewees. However, mobilizing geographically dispersed expert users is not always feasible; this is the case at the Salesian Polytechnic University, which has three campuses in different cities. Therefore, this research leads to questions about computer-supported collaborative work in process lifting sessions. In that sense, the design of collaborative tools that facilitate interaction between participants who share the same physical space and those who are geographically dispersed is required. Future work will focus on analyzing computer-supported collaborative work according to the needs of Business Process Management teams. Moreover, case studies in environments similar to the Salesian Polytechnic University are necessary in order to identify new collaborative strategies as a complement to this research. In addition, the application of a balanced scorecard methodology to follow up on the collaborative strategies is necessary.

References
1. Stjepić A-M, Ivančić L, Vugec DS (2020) Mastering digital transformation through business process management: investigating alignments, goals, orchestration, and roles. J Entrep Manag Innov 16:41–73
2. Branch JW, Burgos D, Serna MDA, Ortega GP (2020) Digital transformation in higher education institutions: between myth and reality. In: Burgos D (ed) Radical solutions and eLearning: practical innovations and online educational technology. Springer, Singapore, pp 41–50. https://doi.org/10.1007/978-981-15-4952-6_3
3. Baiyere A, Salmela H, Tapanainen T (2020) Digital transformation and the new logic of business process management. Eur J Inf Syst 29:238–259. https://doi.org/10.1080/0960085X.2020.1718007
4. Wang S, Chen K, Liu Z, Guo R-Y, Sun J, Dai Q (2019) A data-driven approach for extracting and analyzing collaboration patterns at the interagent and intergroup levels in business process. Electron Commer Res 19:451–470. https://doi.org/10.1007/s10660-018-9307-x
5. Fuks H, Raposo A, Gerosa MA, Pimentel M, Filippo D, Lucena C (2008) Inter- and intra-relationships between communication coordination and cooperation in the scope of the 3C collaboration model. In: Proceedings of the 2008 12th international conference on computer supported cooperative work in design, pp 148–153. https://doi.org/10.1109/CSCWD.2008.4536971
6. Caniëls MCJ, Chiocchio F, van Loon NPAA (2019) Collaboration in project teams: the role of mastery and performance climates. Int J Proj Manag 37:1–13. https://doi.org/10.1016/j.ijproman.2018.09.006


7. Simona T, Taupo T, Antunes P (2021) A scoping review on agency collaboration in emergency management based on the 3C model. Inf Syst Front. https://doi.org/10.1007/s10796-020-10099-0
8. Medeiros D, et al (2012) A case study on the implementation of the 3C collaboration model in virtual environments. In: Proceedings of the 2012 14th symposium on virtual and augmented reality, pp 147–154. https://doi.org/10.1109/SVR.2012.28
9. Fernandes S, Barbosa LS (2016) Applying the 3C model to FLOSS communities. In: Yuizono T, Ogata H, Hoppe U, Vassileva J (eds) Collaboration and technology. Springer International Publishing, Cham, pp 139–150. https://doi.org/10.1007/978-3-319-44799-5_11
10. Fischer M, Imgrund F, Janiesch C, Winkelmann A (2020) Strategy archetypes for digital transformation: defining meta objectives using business process management. Inf Manage 57:103262. https://doi.org/10.1016/j.im.2019.103262
11. Ikhsan G, Sarno R, Sungkono KR (2021) Modification of Alpha++ for discovering collaboration business processes containing non-free choice. In: Proceedings of the 2021 IEEE Asia Pacific conference on wireless and mobile (APWiMob), pp 66–72. https://doi.org/10.1109/APWiMob51111.2021.9435271

Does Sharing Lead to Smarter Products? Managing Information Flows for Collective Servitization

Thomas A. Weber

Abstract Peer-to-peer sharing induces persistent changes in product design. Besides bifurcating product durability, this adaptation increases the compatibility of collaborative use with rent extraction—from a producer’s viewpoint. For owners it decreases the commitment required for taking the item into possession, while for nonowners it standardizes sharing transactions. The resulting sharing-induced design-ideal aligns the flow of utility from shared consumption with the flow of monetary compensation to the seller, thus mimicking a collective lease agreement between seller and an ex ante unknown group of users. Sustaining such a “collective servitization” requires an embedded capacity of user sensing and transmission of information flows ex post the initial product sale, thus implying a fundamental need for smart products in an access-based society. Keywords Collaborative Consumption · Collective Servitization · Design Principles · Internet of Things · Sharing Economy · Smart Products

1 Introduction

With the advent of platforms that are able to solve the problem of matching interested parties (so as to create liquidity) and to overcome the moral hazard inherent in short-term lending (so as to leverage trust) [12, 13], the collaborative consumption of durable goods has increased significantly over the past two decades. A paradigm shift from ownership to access has been widely noted, where the flexible use of durable goods at the place and time of need begins to dominate the traditional model of having to purchase a product so as to be able to consume it [1, 2, 4]. While there are many interesting first-order problems associated with optimizing market matching and the design of short-term contracts on the various sharing markets, we are concerned here primarily with the more pertinent shifts in product design and the way products can be expected to interact with their environment.

T. A. Weber (B)
Swiss Federal Institute of Technology, Lausanne, Switzerland
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_53


These higher-order effects are arguably more fundamental and long-lasting than the solutions to the aforementioned operational problems, but of course they are not entirely disassociated. With the access-versus-ownership question being intrinsically dynamic, sharing decisions are about the intertemporal allocation of stochastic flows of availabilities and needs. At the producer's level, sharing-induced product design responds to a logic of dynamic usage transitions. Indeed, there is no reason to believe that post-paradigm-shift products would look like pre-paradigm-shift products. Thus, the core of our inquiry is: What are the sharing-induced design traits that products are likely to feature in a world of collaborative consumption? Beyond the questions of durability and configurability, we ask: What are the flows of information that such products need to support? And: Does supporting such flows make products designed for sharing "smarter" than products designed for ownership? Finally, we briefly turn our attention to the broader societal implications of sharing-induced design traits on sustainable production and consumption, in view of using natural resources responsibly.

2 Sharing-Induced Design Principles

To understand how best to design shareable products, it is necessary to analyze the notion of shareability together with the decision processes in a peer-to-peer sharing market. "A shareable product is such that it can be transferred from its owner to another agent for temporary use without significant degradation" [8]. This implies that completely disposable products cannot be shared, as full degradation is attained after just one use. Shareability therefore increases in product durability. It also increases in the ease-of-transfer between users. Figure 1 provides an overview of various products and iso-shareability frontiers in the corresponding (durability, ease-of-transfer)-space.

2.1 Supply and Demand Dynamics

Sharing is fundamentally about consumption dynamics [9]. Unlike in traditional markets, product users appear on either side of the market, depending on whether they are owners or nonowners and on whether they need to consume the item in a given period or not. For example, the owner of a power drill can become a supplier on the sharing market whenever he does not need the item, whereas a nonowner turns to the sharing market for access to the item whenever a need arises. Thus, in anticipation of the stochastic needs and the effective transaction price (= posted price, adjusted by the market-maker's commission) in the sharing market, consumers decide about becoming an owner or a nonowner. Since consumers are heterogeneous in their anticipated needs and consumption values, the user base is naturally partitioned into owners and nonowners.


Fig. 1 Common products on sharing markets, together with iso-shareability curves in the (durability, ease-of-transfer)-space.

Hence, we can deduce the first two key features of peer-to-peer sharing economics.

F1. [Heterogeneity] Sharing requires owners and nonowners.

F2. [Balancedness] For a sharing market to clear, the nonowners with need and owners without need (for own consumption) must be balanced, at a given effective transaction price.¹

Whereas the traditional ownership model is about sharp transitions from nonownership to ownership (and vice-versa), together with an alignment of ownership and usership, the sharing model, by spreading usership over a collective of potential users (including the owner and an ex ante random number of nonowners), blurs the non/ownership boundaries and dissolves the ownership-usership alignment in the traditional modes of consumption [10, 11]. It follows therefore a principle of residual difference.

F3. [Residual Claims] The main difference between an owner and a nonowner consists in the residual claims to the product (e.g., related to maintenance, private-usage and resale options, as well as unforeseen contingencies).

¹ Balancedness may not be required when the products are "nonrival" (e.g., software), that is, several agents may be able to use it simultaneously; see Fig. 2.


Fig. 2 Collective servitization of a nonrival product, authorized by firm, facilitated by market maker (at time t).

2.2 Market-Maker Incentives

A peer-to-peer market is usually enabled by a third party, especially in environments where traditional barriers for the sharing parties must be overcome, such as high costs of matching and shareproofing, or the lack of trust systems. By providing an environment for sharing transactions, the market maker can extract a portion of the gains from trade, up to all of the improvement over platform-free sharing interactions [13].

F4. [Intermediary Self-Interest] Market makers both enable and distort supply and demand on sharing markets through a system of commissions and transaction rules, so as to extract value from sharing transactions.


2.3 Aftermarket Control and Information

In order to extract rents from a user of its products, a manufacturer can optimize design, including, relative to shareability, the product's durability and its ease-of-transfer among different users.

F5. [Manufacturer Self-Interest] Manufacturers prefer design features that maximize expected net present value in the long run.

While in the traditional consumption model there is a strong interest for the manufacturer to control and limit durability, considering the product lifetime as an essential design variable, this logic is transformed in the presence of liquid sharing markets.

F6. [Flow Efficiency] To provide a flow of utility efficiently, it is best for the manufacturer to not artificially limit product durability.

Finally, the manufacturer's capacity to extract rents at the points of consumption requires aftermarket control, which we view as its user-sensing ability (so as to be able to detect usage transitions) and blocking ability (so as to be able to block usage transitions).

F7. [Dynamic Rent-Extraction] The firm's ability to extract rents over time from different users critically depends on the aftermarket control embedded in the product.

2.4 Product-Design Principles

From the key features F1-F7, we now derive the following five product-design principles, induced by sharing markets. First, shared products must remember their owners and detect users different from their owners.

D1. [User Awareness] The product can distinguish different users.

By designating the product owner as a special user, D1 implies the ability to distinguish owners and nonowners.

D2. [Usage Awareness] The product can sense if and how much it is being used.

By being able to detect the intensity of use (including zero intensity), the product can harbor information related to the flow of utility it provides.

D3. [Informativeness] The product can store or transmit reliable information about users and usage intensity.

This information may be used cumulatively, to predict future usage as well.


D4. [Robustness] The product accommodates heterogeneous user types (e.g., by being able to be customized and reset, or by accommodating different usage profiles).

Product design specifically for the shared use across different users, with heterogeneous preferences and usage patterns, requires robust features which can be customized or else respond to the median preferences of the target base, thus guaranteeing a positive experience even at the boundaries of the user spectrum.

D5. [Efficient Durability] Avoiding artificial obsolescence, the product strives for an efficient provision of a utility stream at minimum cost (e.g., by a modular design so as to retain functional parts despite unavoidable obsolescence of other parts) [5, 6].

In addition to the design principles D1-D5, there is naturally the overarching principle of Compliance, meaning that the product is realized according to the prevailing regulations (e.g., regarding privacy or safety standards).
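A minimal sketch of how principles D1-D3 might be embodied in software, assuming a hypothetical SharedProduct class with an owner identifier and an append-only usage log; this is an illustration of the principles rather than an implementation prescribed by the text.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UsageEvent:
    user_id: str
    start: datetime
    hours: float  # usage intensity (D2)

@dataclass
class SharedProduct:
    """Illustrative smart product: knows its owner (D1), senses usage (D2),
    and can report reliable usage information (D3)."""
    owner_id: str
    log: list[UsageEvent] = field(default_factory=list)

    def record_use(self, user_id: str, start: datetime, hours: float) -> None:
        self.log.append(UsageEvent(user_id, start, hours))

    def is_owner(self, user_id: str) -> bool:            # D1: user awareness
        return user_id == self.owner_id

    def usage_report(self) -> dict[str, float]:          # D3: informativeness
        totals: dict[str, float] = {}
        for event in self.log:
            totals[event.user_id] = totals.get(event.user_id, 0.0) + event.hours
        return totals

drill = SharedProduct(owner_id="alice")
drill.record_use("alice", datetime(2023, 5, 1), 2.0)
drill.record_use("bob", datetime(2023, 5, 3), 1.5)        # a nonowner borrows the item
print(drill.usage_report())                               # {'alice': 2.0, 'bob': 1.5}
```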

3 Collective Servitization

Achieving the design targets D1-D5 allows the manufacturer to convert the utility stream from the collaborative consumption of its product into a dynamic revenue stream fed by the community of users, which we refer to as collective servitization (generalizing the notion of mere servitization in [7]); see Fig. 2. Therefore, the sales contract with the initial owner includes a provision that specifies contingent fees to be paid by the owner at usage-transition events, possibly depending on the usage intensity. For the scheme to remain incentive-compatible, the settlement of the fees would naturally leave the current owner with a net benefit at any time. In this manner, the total monetary transfer from the owner to the firm depends on the collective usage of the product. The owner may or may not be treated differently than the other users, in the sense that his payment to the firm may also be usage-contingent. Optimizing the retail of its shareable products, to accommodate different types of owners the firm may offer different combinations of purchase price and contingent payments, thereby implementing a screening mechanism (also known as nonlinear pricing) so as to extract the pertinent private information from the owners as much as economically possible (i.e., taking into account the fundamental tradeoff between rent-extraction and information revelation) [8]. The corresponding information flow from product to the firm may be shared with the market maker so as to harmonize the joint fee structure, subject to antitrust compliance (which may limit such horizontal information exchange).
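A toy numerical sketch of the payment logic described above, under an assumed fee schedule (a fixed fee per usage transition to a nonowner plus a per-hour component on nonowner usage); the fee levels and the usage log are arbitrary and are not taken from the paper.

```python
def collective_servitization_fees(usage_log, owner_id,
                                  transition_fee=1.0, per_hour_fee=0.5):
    """Total contingent payment owed to the firm for a chronological usage log.

    usage_log: list of (user_id, hours) tuples. A usage transition is counted
    whenever the product passes to a nonowner who was not the previous user.
    """
    total, previous_user = 0.0, owner_id
    for user_id, hours in usage_log:
        if user_id != owner_id:
            if user_id != previous_user:
                total += transition_fee        # metered usage transition
            total += per_hour_fee * hours      # usage-contingent component
        previous_user = user_id
    return total

log = [("alice", 2.0), ("bob", 1.5), ("carol", 3.0), ("alice", 1.0)]
print(collective_servitization_fees(log, owner_id="alice"))  # 1 + 0.75 + 1 + 1.5 = 4.25
```

Under the screening scheme sketched in the text, the firm would offer several combinations of purchase price and fee schedule so that heterogeneous owner types self-select.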


4 Product Intelligence

Broadly speaking, "intelligent products" have the capability to collect and transmit information about themselves and their usage. Within a secure and interoperable framework this information can be accessed by authorized parties (such as the manufacturer or initial seller). The augmentation of awareness about time-varying product properties (such as the fitness of key components and its current or cumulative use) can provide operational assistance and economic incentives. For example, the fact that an asset such as a power drill is projected to be idle over a two-week period could trigger an automatic availability notice to a sharing market, thus reducing the transaction cost for the owner, who now has to respond to a concrete request for a transaction only if a suitable counterparty was identified by the market maker. Product intelligence can also be used to better align the flow of payments from the product users to the manufacturer (or intermediary) with the flow of value the various usage options create over time. This reduces the risk of over-commitment in ownership, e.g., guarding against the contingency of low use after having purchased the product. Using the product's intelligence, a manufacturer can enable and authorize the transfer of a product to a different user and ask for a commission. In effect, retaining forms of aftermarket control allows for a continuum of product-transfer options between the extremes of outright purchase on the one side and short-term lease on the other. The economic incentives for the necessary changes in the product design (according to the identified design principles D1-D5), as far as durability, modularity, and embedded intelligence are concerned, can sometimes be expected to arise endogenously because of the prospect for additional rents. For example, in the ensured presence of a sharing market, a manufacturer has an intrinsic incentive to provide more durable products because that increases the "sharing premium" that can be charged to owners who take advantage of the possibility of renting out an unused product in the future. At other times, however, an outside policy intervention may be required to nudge parties into the right equilibrium. For example, if the manufacturer sees a possibility to disable the sharing market by aggressive pricing, it may opt to do so, and, in order to compensate for the reduced margins, increase the sales volume by making its products less durable. The latter manufacturer-induced "sharing shutdown" [5] can be forestalled by promoting the existence of sharing markets, supporting standards for network protocols for the visibility of idle assets, and subsidizing sharing-market operators, e.g., by offering tax incentives.
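The idle-asset example above can be expressed as a simple rule; the sketch assumes a hypothetical booking calendar and a placeholder notification call, so a real platform integration would of course look different.

```python
from datetime import date, timedelta

def projected_idle_days(owner_bookings, start, horizon_days=14):
    """Days within the horizon on which the owner has no planned use."""
    horizon = {start + timedelta(days=i) for i in range(horizon_days)}
    return sorted(horizon - set(owner_bookings))

def maybe_notify_sharing_market(owner_bookings, today, horizon_days=14):
    """Emit an availability notice when the item is projected idle for the whole horizon."""
    idle = projected_idle_days(owner_bookings, today, horizon_days)
    if len(idle) == horizon_days:
        # Placeholder for a real listing call to a sharing platform.
        print(f"List item as available from {idle[0]} to {idle[-1]}")

maybe_notify_sharing_market(owner_bookings=set(), today=date(2023, 6, 1))
```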

5 Conclusion

The creation of product ecosystems that envelop consumers with products from a single firm or a group of allied partners has enabled unprecedented access to information about customer behavior and other hitherto private information.


It also increased the producers' post-purchase control over the operation of their products, through updates, user- and usage-sensing, and access control. The consequent challenge (and opportunity) lies in the design and management of the associated information flows [3] and in finding new ways of extracting a portion of the utility flows that consumers realize over time. Sharing markets extend this endeavor for each product to a broad potential user base. They increase product utilization and thus also the collective utility gained through usage, and therefore also the firm's revenue-extraction prospects. Overall, we have argued that by retaining aftermarket control over the shareability of items, in the sense of being able to meter usage-transitions in sharing markets, a firm (which could be the producer/manufacturer of the product, a retailer, or a system integrator) may be able to extract revenues over time, as the product is being used by various parties in the sharing market, thus leading to collective servitization. During any transition to a regime of smart, shareable, and "intelligently disposable" products (meaning that requests for the modular components of the product can be aggregated and transmitted to the places of disposal), there will be a mixed regime where intelligent assets coexist with legacy products. The level of penetration, including expectations about the future evolution of the diffusion of smart products, paired with the prospects of economic rents, determines the manufacturer's endogenous incentives for implementing design changes. Exogenous incentives include public standards and requirements, as well as a shift in consumer preferences. The diffusion of smart products may be desirable not only from the perspective of individual firms but also from a societal viewpoint, as product intelligence, with active sensing, may offer numerous side-benefits, such as usage traceability, predictive maintenance, closed-loop tracking, and intrinsically motivated efficient durability (cf. design principle D5). These side-benefits altogether support a more responsible use of natural resources.

References
1. Benkler Y (2004) Sharing nicely: On shareable goods and the emergence of sharing as a modality of economic production. Yale Law J 114:273–358
2. Botsman R, Rogers R (2010) What's Mine Is Yours: How Collaborative Consumption Is Changing the Way We Live. HarperCollins, London, UK
3. Clemons EK, Dewan RM, Kauffman RJ, Weber TA (2017) Understanding the information-based transformation of strategy and society. J Manag Inf Syst 34(2):425–456
4. Razeghian M, Weber TA (2019) The advent of the sharing culture and its effect on product pricing. Electron Commer Res Appl 33: Art. 100801
5. Razeghian M, Weber TA (2019) Strategic durability with sharing markets. Sustain Prod Consumption 19:79–96
6. Swan PL (1972) Optimum durability, second-hand markets, and planned obsolescence. J Polit Econ 80(3):575–585
7. Vandermerwe S, Rada J (1988) Servitization of business: Adding value by adding services. Eur Manag J 6(4):314–324
8. Weber TA (2020) How to market smart products: Design and pricing for sharing markets. J Manag Inf Syst 37(3):631–667


9. Weber TA (2018) The dynamics of asset sharing and private use. In: Proceedings of the Hawaii International Conference on System Sciences (HICSS), pp 5202–5211. https://doi.org/10.24251/hicss.2018.649
10. Weber TA (2017) Smart products for sharing. J Manag Inf Syst 34(2):341–368
11. Weber TA (2018) Controlling and pricing shareability. In: Proceedings of the Hawaii International Conference on System Sciences (HICSS), pp 5572–5581. https://doi.org/10.24251/hicss.2017.672
12. Weber TA (2016) Product pricing in a peer-to-peer economy. J Manag Inf Syst 33(2):573–596
13. Weber TA (2014) Intermediation in a sharing economy: Insurance, moral hazard, and rent extraction. J Manag Inf Syst 31(3):35–71

Analysis of On-line Platforms for Citizen Participation in Latin America, Using International Regulations

Alex Santamaría-Philco, Jorge Herrera-Tapia, Patricia Quiroz-Palma, Marjorie Coronel-Suárez, Juan Sendón-Varela, Dolores Muñoz-Verduga, and Klever Delgado-Reyes

Abstract The present work is based on the analysis of e-Participation platforms in Latin American countries under international standards and government criteria, excellent instruments for this research given the quality of the existing e-Participation platforms in this region. The platforms were verified against these criteria and valuable information was obtained, as shown in a series of figures and tables. This research shows the degree of maturity of e-Participation in Latin American countries, revealing certain problems that could generate short- and medium-term improvement projects. In this context, the best representative of Latin America in terms of e-Participation is Uruguay, thanks to the support of its government and its users. This work invites the research community to use it to determine the maturity level of e-Participation and encourages the improvement and development of new platforms.

A. Santamaría-Philco (B) · J. Herrera-Tapia · P. Quiroz-Palma · J. Sendón-Varela · D. Muñoz-Verduga · K. Delgado-Reyes
Universidad Laica Eloy Alfaro de Manabí, Cdla. Universitaria, 130802 Manta, Ecuador
e-mail: [email protected]
J. Herrera-Tapia e-mail: [email protected]
P. Quiroz-Palma e-mail: [email protected]
J. Sendón-Varela e-mail: [email protected]
D. Muñoz-Verduga e-mail: [email protected]
K. Delgado-Reyes e-mail: [email protected]
M. Coronel-Suárez
Universidad Estatal Península de Santa Elena, 240350 La Libertad, Ecuador
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_54


Keywords Citizen participation · regulations · standards · government · citizens · e-government

1 Introduction

The forms and styles of government have traditionally been framed within indirect citizen participation through representatives, i.e., those elected in democratic elections. These representatives, in some cases, have not been able to give voice to the citizens or actively consider them in the decision-making process, causing discomfort and indifference. To address this in some way, the authorities have made use of technology, but mostly through unilateral communications such as reports and similar documents, which in many cases have not considered the citizens as an active government entity. This type of governance situation has to a certain degree divided the coexistence of citizens and authorities, and some countries have opted to develop and apply e-government criteria. The United Nations Organization refers to e-government as the use of Information and Communication Technologies (ICT) by government institutions to improve the quality of information services offered to the citizens; to improve the efficiency and effectiveness of public management; and to substantially increase public sector transparency and citizen participation. E-government in the form of e-Participation is gradually expanding all over the world. Countries in North America (USA and Canada), Europe, and Asia have successfully carried out diverse processes with their citizens, with very good results in collaborative decisions. For example, Denmark, the Republic of Korea, and Australia are leaders in citizen participation at a global level; however, the environment is different in Latin America [1, 2]. Although South America is not among the leaders in e-Participation, there are certain territories that are steadily improving their performance, such as Uruguay, Argentina, or Chile, which are gradually supporting proposals made by their citizens for the improvement of their cities and other areas. This research investigates e-Participation through certain existing platforms in most Latin American countries, applying international criteria: the Public Participation Guide in [3], the e-government survey of the United Nations (2020), and a reference framework of e-participation known as ePfw [4]. This paper is organized into six sections: Sect. 2 presents an overview of citizen participation, Sect. 3 contains the e-Participation reference frameworks, Sect. 4 explains the methodology used, Sect. 5 describes the results and evaluations, and finally Sect. 6 presents the conclusions.


2 Overview

2.1 E-Government

According to [5, 6], there is no definitive definition of e-government, although there are several definitions that provide a good description. For example, in 1998 the OECD defined it as "the application of Internet-based technologies for commercial and non-commercial activities within public administrations". However, the best definition of e-government was "the use of information and communication technologies (ICT), particularly the Internet, as a tool to achieve better government". The World Bank defines it as "the use of information and communication technologies to improve the efficiency, effectiveness, transparency, and accountability of government", and finally the United Nations defines it as "the use of the Internet and the World Wide Web to deliver government information and services to citizens". All these definitions seem interesting to us, with certain differences and at the same time great similarity. City governments should manage technologies so that information can be distributed through optimal processes, in addition to enabling communication both within government and between government and citizens, resulting in quality information and streamlined government processes.

2.2 Citizen E-Participation

Traditional participation processes are combined with the use of ICTs as a fundamental support for the stages of their life cycle. The use of ICT tools within the context of public participation led to the term e-Participation. According to Macintosh [7], e-Participation means "ICT-supported participation in processes involved in government and governance". During the last two decades there has been a significant increase in the number of projects related to e-Participation; some ad-hoc support tools have also been developed thanks to funding from various government agencies. However, it is recognized that the research field is still very fragmented, and it will be necessary to develop models and frameworks that can reduce this fragmentation [8].

2.3 Citizen Participation Platforms

Citizen participation web platforms are computer systems hosted on the Internet, under each organization's own domain and hosting, oriented to innovation management. Their purpose is to reach and expand citizen participation as a model of government and administration that is much more focused on citizens' needs, requests, and suggestions, under rules that must always be satisfied during elections [9, 10].


Throughout the entire process, certain criteria must be met for the platform to be efficient, to achieve a better administration, and to reach the proposed objectives, such as helping, acting, and interacting with society in general.

2.4 Citizen Participation in Latin America

According to [11], in South America there is a deficiency of participation in terms of society in general, with processes of very low quality or poorly structured, and many failures in the regulations and in electoral processes. It is argued that in Latin America there is another difficulty regarding democracy, namely the socioeconomic inequality that exists throughout the continent. For all these reasons, these shortcomings must be addressed by increasing the indices of democracy throughout the region so that society can contribute to public government activities, creating spaces to suggest and offer plans for future decision-making throughout the region. In some South American countries there is a certain level of disinterest when it comes to policies, so that people do not want to voice their displeasure, meaning that society must raise its voice in protest and demand to be heard [11–13]. According to [3], for a participation web platform to be recognized as such, it is required or suggested that it should have certain components that improve user performance and accessibility; a responsible authority, or a party acting on its behalf, must integrate these elements in the preparation and implementation of its public participation plan. The most important of these elements are summarized in Table 1 (Sect. 3.1).

3 e-Participation Reference Frameworks

3.1 Canada's Public Participation Guide

According to the Canadian guide, public participation is a general term for any process that involves the public in decision making. It involves the process or activity of informing the public and inviting them to participate in decisions that affect them. The method of public participation is usually to share information with those who may be interested in proposed projects and to invite their input. This guide explains the requirements for public participation under the Canadian Environmental Assessment Act, particularly in the context of projects. It also presents best practices and tools for planning, implementing, and evaluating public participation. It proposes several elements of public participation and determines that all of them must be present for participation to be meaningful. These elements are given in Table 1; the guide maintains that a responsible authority, or a party acting on its behalf, should include these elements in the preparation and implementation of its public participation plan.


Table 1 Key elements of citizen participation according to the Canadian guide

Accessible information: Information should be easy to understand and always available
Reasonable time: Maintain neither too long nor too short a time
Adequate levels of participation: Adaptive processes involve society in general, defining what mistakes are made for future correction
Transparent results: Reliable results

Source: Canadian Environmental Agency (2008)

The guide maintains that a responsible authority, or a party acting on its behalf, should include these elements in the preparation and implementation of its public participation plan.

3.2 United Nations e-Government

According to the U.N. [1], e-government is the use of Information and Communication Technologies (ICT) by government institutions to qualitatively improve the services and information offered to citizens, to increase the efficiency and effectiveness of public management, and to substantially increase public sector transparency and citizen participation. This guide was chosen because it has become an indispensable classification, mapping, and measurement tool for digital ministers, policymakers, and analysts engaged in comparative analysis and contemporary research on e-government. The UN e-Government Survey presents a data-driven analysis of the key trends in e-government development in 2020 based on the E-Government Development Index (EGDI) assessment. E-government development is highlighted, including key priorities such as health, education, social protection, environment, decent employment, and justice for all.

3.3 ePfw Framework (e-Participation Framework)

This section describes ePfw, a basic framework for the definition and implementation of e-Participation processes. The framework aims to provide a common language that serves as an aid for organizations involved in the implementation of various e-Participation processes [4]. It is built around an e-Participation metamodel and includes reference materials and methods for the development and implementation of programs that automate participatory processes.


4 Methodology

The starting point was to compare the different citizen participation platforms in Latin America, indicating those countries with the highest participation index or the best interrelation between the government and its citizens. To this end, a case study was carried out on the 52 citizen participation platforms currently existing in Latin America, to determine whether they comply with the international standards applied in North America and Europe. A mixed study was performed: quantitative methods were used to compare the different citizen participation platforms by means of statistical graphs and analysis, and qualitative methods were employed, through descriptive and interpretative analysis of the different sources of information, to evaluate the platforms according to the criteria presented below in Sect. 4.1. The information is framed in a triangulation model (see Fig. 1), in which the flow of information is evaluated by different research techniques for an in-depth analysis from this triple perspective and to find any existing interrelationships.

4.1 Evaluation Criteria

Table 2 shows the international standards used to analyze these citizen participation platforms and obtain results. The criteria combine the key elements of the Canadian Environmental Agency guide [3], the sectors of the UN e-Government Survey [1], and the e-Participation framework proposed by Santamaría-Philco et al. [4], called ePfw, together with the most relevant criteria for analyzing citizen-participation platforms from [14–16].

Fig. 1 Triangulation of research techniques

Table 2 International standards

Canadian guide: accessible information; reasonable time; levels of participation; transparent results
United Nations e-government (areas): justice; environment; social protection; employment; health
ePfw framework: methods (survey, round table, forum, voting); transparency; e-Participation tools

4.2 Case Study: South American Citizen-Participation Platforms

The qualitative case study allowed information to be gathered through systematic research on citizen participation websites throughout Latin America. The application of this research method was based on the following objectives:
• Research various citizen-participation platforms in depth in different countries through case studies to identify their key elements.
• Analyze the participation processes on the different web platforms through evaluation criteria.
• Compare platforms to identify those with the highest participation indices.
• Carry out a reflective analysis of the results, using the statistical graphs obtained from the platform study as a reference.

4.3 Citizen Participation Platforms in Latin America

There are about 100 citizen participation platforms on the continent, even though many of them do not meet certain basic criteria to be called such; many countries do not take them seriously and do not include or involve society in their governmental decisions. Many countries have few platforms because the authorities attach little importance to this issue and are unaware of the multiple benefits of e-Participation. For this study, the existing citizen participation websites in Latin America were the population chosen as the object of analysis, providing details on the development of citizen participation. We found 52 e-participation platforms distributed over 16 countries. It is important to mention that countries such as Cuba, Guatemala, and Honduras do not have, or do not allow, online e-participation platforms (see Table 3).

Table 3 E-participation platforms in Latin America

Country               Nº of platforms
Argentina             6
Bolivia               2
Brazil                5
Chile                 6
Colombia              6
Costa Rica            1
Ecuador               2
El Salvador           1
México                7
Panamá                1
Paraguay              1
Perú                  6
Puerto Rico           2
Dominican Republic    2
Uruguay               3
Venezuela             1

4.4 Analysis of Citizen e-Participation Platforms

This study analyzed the 52 e-participation platforms according to the international standards described above; Argentina, with its six e-Participation platforms, is used here as the example. Table 4 shows the criteria of the Canadian guide: accessible information, reasonable time, participation levels, and evidence of results. Table 5 shows the sectors or areas addressed by the platforms. Table 6 shows the criteria of the ePfw e-Participation reference framework, such as the methods used, transparency, use of the platform, and the tools employed.

Regarding the Canadian guide criteria (Table 4), we found that the information on the website is accessible on all the selected Argentinian platforms; only 2 platforms have reasonable expiry dates for citizens to choose the best ideas suggested in the period allowed; all 6 platforms met the informative and consultation levels of participation, and only 1 platform met the collaborative level. Table 5 applies the criteria of the United Nations e-government survey, classifying the platforms by sectors or areas (justice, environment, social protection, employment, education, and health) in which the government and citizens suggest different topics for debate. Table 6 applies the criteria of the ePfw e-participation framework: the methods citizens use to participate in decisions, the transparency of results (true results, without alterations), whether citizens still use the platforms, and the tools employed.
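The country-level percentage distributions reported in Sect. 5 (Figs. 2–10) can be obtained by tallying, for each country, how many of its platforms satisfy a given criterion and normalising over the whole region. The following sketch is purely illustrative: the platform records, field names, and the two sample entries are assumptions introduced here for demonstration, not data structures from the original study.

```python
from collections import Counter

# Illustrative records: one entry per analyzed platform (fields are assumed).
platforms = [
    {"country": "Argentina", "name": "Portal Leyes Abiertas", "reasonable_time": True,
     "levels": {"informative", "consultive"}, "transparent": True, "tool": "Own"},
    {"country": "Argentina", "name": "Buenos Aires Elige", "reasonable_time": False,
     "levels": {"informative", "consultive", "collaborative"}, "transparent": False, "tool": "Consul"},
    # ... the remaining 50 platforms would be added here ...
]

def share_by_country(records, predicate):
    """Percentage of criterion-compliant platforms contributed by each country."""
    hits = Counter(r["country"] for r in records if predicate(r))
    total = sum(hits.values())
    return {country: round(100 * n / total) for country, n in hits.items()} if total else {}

# Example: distribution of platforms offering the consultation level (cf. Fig. 5).
print(share_by_country(platforms, lambda r: "consultive" in r["levels"]))
```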


Table 4 Criteria analysis of Argentina according to the Canadian public participation guide

Platform name                 Accessible information   Reasonable time              Informative   Consultive   Collaborative   Results
Portal Leyes Abiertas         Yes                      Yes                          x             x            –               Yes
Participa Bahía               Yes                      No expiration time           x             x            –               Yes
Rosario Participa             Yes                      Yes                          x             x            –               Yes
Buenos Aires Elige            Yes                      No expiry date               x             x            x               Yes
Participa Ciudad de Mendoza   Yes                      No expiry date               x             x            –               Yes
Lujan de Cuyo Decide          Yes                      No expiry date               x             x            –               Yes

(Informative, Consultive, and Collaborative correspond to the appropriate levels of participation.)

Table 5 Criteria analysis in Argentina according to the United Nations electronic government survey guide

Sectors or areas (United Nations e-government): justice, environment, social protection, employment, education, health. Portal Leyes Abiertas covers all of these areas, while Participa Bahía, Rosario Participa, Buenos Aires Elige, Participa Ciudad de Mendoza, and Lujan de Cuyo Decide each cover two of them.


Table 6 Analysis of criteria according to the e-participation framework (ePfw) in Argentina

Platform name                 Methods                            Transparency   Is the platform still being used?   e-Participation tool
Portal Leyes Abiertas         Forums, votes, comments            Yes            Yes                                 Own
Participa Bahía               Discussions, votes, comments       Yes            Yes                                 Consul
Rosario Participa             Discussions, votes, comments       Yes            Yes                                 Decidim
Buenos Aires Elige            Proposals, votes, comments         No             No                                  Consul
Participa Ciudad de Mendoza   Debates, proposals, polls, votes   Yes            No                                  Consul
Lujan de Cuyo Decide          Discussions, comments              Yes            No                                  Consul

5 Results and Evaluation

This section presents the results obtained from the evaluation of the 52 e-Participation platforms, organized by six criteria. Each result has its respective description and analysis.

5.1 Reasonable Time

Figure 2 shows that Colombia and Mexico offer the most reasonable time frames on their e-participation platforms for citizens to consider the proposals. Countries such as Bolivia, Costa Rica, Panama, Paraguay, and Puerto Rico do not set a time limit for their participation processes, while others only define a starting time with no expiry date, or set excessively long periods.

Fig. 2 Results of the reasonable time criterion on Latin American platforms


Fig. 3 Results of the criterion of e-participation mechanisms on Latin American platforms

5.2 e-Participation Mechanisms in Latin America

The most frequently used mechanism was users' comments, an easy and effective method through which citizens can express themselves on the information published on the platform. The same occurs in debates, in which users put forward their ideas and hope to receive the necessary support from the government. In the case of voting, users propose ideas for improving sectors or cities, hoping that other citizens will support their initiative with a single click (see Fig. 3).

5.3 e-Participation Levels

Figure 4 shows that Mexico is the country with the highest participation at the informative level, followed by Peru, Colombia, and Chile. Most of their e-Participation platforms are only informative, with manual processes of citizen participation.

Figure 5 gives the countries with the highest participation at the consultation level: Colombia, Chile, and Argentina. It should be noted that at this level citizens can participate in the ideas suggested through forums, debates, comments, voting, etc. The Venezuelan, Bolivian, and Panamanian websites are purely informative and do not reach this level, since their function is simply to display information.

Figure 6 shows that in Peru the government has a high level of interest in carrying out works suggested by the citizens themselves, who are best placed to propose ideas for obtaining a better city. In the cases of Ecuador and Uruguay, the government takes a certain degree of interest in these votes, so that when a certain number of approvals is reached the proposal can be implemented.


Fig. 4 Information level results on Latin American platforms

Fig. 5 Results of the consultation level in the Latin American platforms

Fig. 6 Results of the level of collaboration in Latin American platforms


Fig. 7 Results of the transparency criteria on Latin American platforms

5.4 Transparency

Figure 7 gives the countries with e-participation platforms whose results and government processes can be analyzed for transparency. It is difficult to establish whether a government announces the real results without modifications. Countries such as Chile, Colombia, Mexico, and Peru were found to have transparent results, i.e., citizens are given unadulterated information.

5.5 Platform Usage

Figure 8 shows, by country, the e-Participation platforms that are still in use. Most Mexican platforms remain in place thanks to the support of users and governors. For this criterion, 2020 was not considered because of the COVID-19 pandemic, as many platforms may have been in use but had to suspend their operations.


Fig. 8 Results of platform utilization criteria in Latin America



Fig. 9 Results of the Consul tool on Latin American platforms

Fig. 10 Results of proprietary tools on Latin American platforms

5.6 e-Participation Tools

Figure 9 shows that Argentina has the largest number of platforms based on the Consul software, generally obtaining good results, as is gradually happening in the rest of the world; it is followed by Brazil and Uruguay, which use multiple methods of citizen participation and publish easily comprehensible results. Figure 10 shows that in countries such as Colombia and Peru many platforms are proprietary tools belonging to local institutions founded by citizen organizations; these may in fact be the most popular option due to their adaptability to the resources and processes desired for the future.

6 Conclusions

Society, in general, formulates good proposals for improving city lifestyles. Ideas are often proposed on the administration of public funds, and the government should give priority to the communities' main needs. A good government should be able to lead the city or country forward. In this context, the conclusions of this research study are the following:


e-Participation brings many benefits for governments because it shows what their citizens really want or need in different regions. The levels of e-participation can define what kind of participation a country should use for future improvements. However, e-Participation is a topic given little importance by many governments, even though they know it can bring many benefits. This work is dedicated to community researchers, to help them determine the maturity level of e-participation and to offer support for the development of new platforms.

Although e-Participation does exist in Latin America, the region is far from being a good example of the subject, as many governments are not interested in the proposals of their citizens for various reasons, such as corruption or self-interest. The best representative of Latin America in terms of e-Participation is Uruguay, thanks to the support of its government and its users, who do their best to improve the process; although it is only a small country, e-Participation there enjoys good acceptance and participation and should continue to improve in the future. The degree of e-participation is generally linked to a country's development and is reflected in its current political, social, and economic situation, Latin American countries being a clear example of this.

As future work, a model for an online e-participation platform will be designed, with web and mobile access. This proposal will consider technical aspects of collaboration, management, and security, so that citizens can trust and make use of these technological tools that contribute to e-government.

References

1. Naciones Unidas (2020) UN E-Government survey 2020. https://publicadministration.un.org/egovkb/en-us/Reports/UN-E-Government-Survey-2020
2. Guillen A, Sáenz K, Badii MH, Castillo J (2009) Origin, space and levels of participation. Daena Int J Good Conscience 4(1). www.daenajournal.org
3. Canadá (2008) Public participation guide. http://laws.justice.gc.ca/en/C-15.2/index.html; www.ceaa.gc.ca/012/newguidance_e.htm
4. Santamaria-Philco A, Canos-Cerda JH, Penades Gramaje MC (2019) Advances in e-Participation: a perspective of last years. IEEE Access 7:155894–155916. https://doi.org/10.1109/ACCESS.2019.2948810
5. Naser A, Concha G (2011) E-government in public management. ISBN 1680-8827. https://repositorio.cepal.org/bitstream/handle/11362/7330/S1100145_es.pdf?sequence=1&isAllowed=y
6. Lissidini A (2010) Direct democracy in Latin America: between delegation and participation. http://biblioteca.clacso.edu.ar/gsdl/collect/clacso/index/assoc/D14451.dir/lissi.pdf
7. Macintosh A (2004) Characterization of e-participation in policy-making. Proceedings of the Hawaii International Conference on System Sciences 37:1843–1852. https://doi.org/10.1109/hicss.2004.1265300
8. Torres R-M (2017) Citizen participation and education. http://www.oas.org/udse/documentos/socicivil.html
9. Gordillo MM (n.d.) Scientific culture and citizen participation: materials for STS education. J CTS 6
10. Angulo López E (2016) Quantitative methodology. https://www.eumed.net/tesis-doctorales/2012/eal/metodologia_cuantitativa.html


11. Cruz-González LD, Mballa V (2017) Mechanisms for citizen participation in public policies in Latin America. Revista Políticas Públicas 10(1). http://www.revistas.usach.cl/ojs/index.php/politicas/article/view/2963/2706
12. Ruelas A-L, Pérez P (2016) E-government: its study and development prospects. UNIrevista 1(3). https://www.researchgate.net/publication/28132184
13. González S, Juan J (2015) Citizen participation as an instrument of open government. Public Spaces 18:51–73. http://www.redalyc.org/articulo.oa?id=67642415003
14. Medaglia R (2012) eParticipation research: advancing characterization (2006–2011). Gov Inf Q 29(3):346–360. https://doi.org/10.1016/j.giq.2012.02.010
15. Fietkiewicz KJ, Mainka A, Stock WG (2017) E-government in knowledge society cities. An empirical investigation of the Smart Cities governmental website. Gov Inf Q 34(1):75–83. https://doi.org/10.1016/j.giq.2016.08.003
16. Santamaría-Philco A, Quiroz-Palma P, Herrera-Tapia J, Macias-Mendoza D, Muñoz-Verduga D, Sendon-Varela J (2022) e-Participation tool for decision support based on the ePfw framework. In: Iberian conference on information systems and technologies, CISTI, June 2022. https://doi.org/10.23919/CISTI54924.2022.9820351

Managerial Information Processing in the Era of Big Data and AI – A Conceptual Framework from an Evolutionary Review

Mark Xu, Yanqing Duan, Vincent Ong, and Guangming Cao

Abstract Information processing forms an essential part of managerial behavior in the decision-making process. With big data and intelligent technologies available, the business environment has become ever more dynamic and challenging (e.g. the impact of the COVID-19 pandemic and the rise of misinformation and disinformation). This paper aims to examine the emerging patterns of managerial information processing from both individual and organizational perspectives. Based on an evolutionary review of studies and theories related to information processing behavior, the research identifies three driving forces and develops a theoretical framework. The framework provides valuable implications for managerial information processing and organizational responses: it calls for a reduction of information needs on routine tasks, a shift of managers' information attention towards uncertainties, and increased capabilities and responsibilities in analytics and AI within an effective digital governance framework.

Keywords Information Processing · Information Processing Behavior · Human-Computer Interaction · Digital Governance

M. Xu, University of Portsmouth, Portsmouth, England
Y. Duan (B), University of Bedfordshire, Luton, England
V. Ong, Regent's University London, London, England
G. Cao, Ajman University, Ajman, UAE
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_55

1 Introduction

Managers are faced with increasing complexity and dynamism in the operational and strategic information they must handle. Over the years, organizations have adopted various approaches to cope with these challenges, ranging from business intelligence to data analytics and from decision support systems to AI systems [25]. In a globally networked society, with big data, information is becoming increasingly accessible. There is an increasing amount of trigger, speculative, and current information, including mis- and disinformation [18], that senior managers and organisations need to attend to. More information provision does not necessarily lead to better decision making; it runs the risk of exacerbating the long-standing data overload problem. The new challenge is what to do with the huge volume of information, as opposed to how to get that information, and how to make sense of it for action [21]. Younger-generation managers are mostly tech-savvy, which makes them more prone to using technologies to acquire and process information; hence, hypothetically, managerial information behavior could have shifted to a new pattern [12]. This also requires organizational responses, i.e. collective cognition [7] and digital governance [11]. From decision support and organizational design perspectives, there is a need to ascertain whether and how managerial information processing has been shaped, by what driving forces, and whether existing assumptions and theories on managerial and organizational information processing need to be updated. Given these challenges, research into the above becomes imperative.

This paper reports on the driving forces and on managerial information processing from both individual and organizational perspectives. A theoretical framework is proposed based on an evolutionary review of studies and theories related to information processing behavior.

2 Evolutionary Mapping of Managerial Information Processing

Information processing behavior derives from the concept of "information behavior", which is defined from the information science perspective as "those activities a person may engage in when identifying his or her own needs of information, searching for such information in any way, and using or transferring that information" [23, p. 249]. In information science, information behavior generally refers to information retrieval or information seeking behavior [24], information processing behavior, and information use behavior [22]. Specific to management information processing, Mintzberg [14, 15] studied managerial behavior and suggested that information behavior is an essential part of managerial activities. In this paper, managerial information processing behavior is defined from a process view as including the following four interrelated activities of senior managers:

– Information acquisition – information seeking and receiving, either solicited or unsolicited.
– Information synthesizing – structured data manipulation and analysis of raw data, typically using data analytic tools ranging from simple tabulation, comparison, and statistics to comprehensive analytics.


– Information sensemaking – the human cognition process of reasoning and interpreting.
– Information usage – actionable information is used for sharing, decision making, influencing, and enhancing learning (individual and organizational).

Bawden and Robinson [4] stated that information behavior is inherently individual. During this process, the individual identifies and selects information sources; articulates a query, question, or topic; extracts the information; evaluates the information retrieved; filters out irrelevant information; and interprets the information. The goal of information processing is to make sense of it for action. Much of the information behavior research between the 1960s and 1970s focused on a highly individualized view of human behavior, represented by the work of pioneers such as Wilson, Mintzberg, and Taylor. Studies of organizational managerial information processing are relatively limited, apart from the work of [10] and [9] on organizational information processing and design. Recent research has considered the power of intelligent machines, network-based analytics, and distributed decision making [2], as well as the importance of social processes, data governance, and decision context as part of human information behavior [12]. The sections below give a detailed account of this evolution.

2.1 Information Behavior – Taylor's [22] IUE Theory

Taylor's Information Use Environments (IUE) theory extends early models of general information behavior and prescribes four determinants of general information seeking and use:

– Sets of people (information professionals who process information for users);
– Problem dimensions (ill-structured vs well-structured);
– Work settings (organizational environment, culture, style, and structure);
– Problem resolution assumptions (perceived and anticipated ways to problem resolution).

The IUE also prescribes eight types of information usage: enlightenment (understanding the wider context of the problem or decision); problem understanding (why); instrumental (how); factual (what); confirmational (verification of information); projective (what-if scenarios); motivational; and personal/political usage. Taylor suggests that the focus is on how professionals (the set of people) acquire and use information for direct solutions or sense making (understanding). The only common method of information seeking was discussion with colleagues – a human interaction process. Taylor's work considers the specific situation as a determinant affecting managerial and organizational information processing and usage, and this has been adopted in the present study.


2.2 Mintzberg's Managerial Information Behavior

Mintzberg's theory on the nature of managerial work [16] provides very useful insight into managerial behavior in information processing. One of the key managerial roles identified is the informational role: information monitor, information disseminator, and spokesman. Mintzberg observed that a manager, as a monitor, receives a variety of information from wide sources, both inside and outside the organization. With this information, the manager works as a disseminator to transfer information to subordinates, and as a spokesman to signal to the outside of the organization. Furthermore, the manager uses the information as a strategy-maker (decision roles) to detect changes, to identify problems and opportunities, to build up knowledge about his milieu, and to make models and plans. Mintzberg [16] suggests that senior managers appear to favor verbal contact through personal networks to obtain information; yet no clear-cut sources or patterns have been identified. The informational role describes how a manager works along a process of information handling, which is similar to the IUE process but with role specifications specific to management information. Further insight from Mintzberg's observations [16] reveals that managers clearly prefer to have information in the form of concrete stimuli or triggers, not general aggregations; they demonstrate a thirst for external information and tend to ignore formal information systems. It can be argued that, before the big data era, the conventional conception dominating managerial information behavior theory was that senior managers do not conform to formal information behavior models; instead, they are sporadically managing a mess, muddling through information as the business environment releases all kinds of signals and messages [3].

2.3 Choo's Model of Human Cognition in Managerial Information Processing

Choo [6] extends Taylor's IUEs with a focus on managers' affective responses and cognitive dimensions. The model claims that an individual attempts to find information in order to bridge situation gaps (cognitive needs) when he or she recognizes an inability to act on or understand a situation. This is followed by an affective response, reflected in indicating likes and dislikes, pointing out doubt and uncertainty, and channeling attention to certain issues [5]. Zajonc [26] defines the affective response as the first reaction to stimuli and argues that it drives human judgment. Affective evaluations result in a predominantly positive, mixed, or negative affective state. It is the negative affective state that drives the collection and interpretation of cues and triggers further information search [20]. A positive affective state is associated with seeking data that confirm a cue, looking for data that support prior assumptions and reinforce the prior affective state. When the decision maker collects confirming evidence, contentment arises because the situation is considered safe and to have a high degree of certainty.


Central to Choo's model is the human cognitive process, which can lead to heterogeneous information behavior. Human cognition carries the risk of bias in processing managerial information; information is often perceived selectively and subjectively. Martinsons [13] asserts that it must be recognized that different managers will inevitably use different criteria and methodologies to assess the attractiveness of various markets and products, and will come to different conclusions, even when working with a similar set of available data. We argue that human cognition and affective response form an intrinsic part of the process of managerial information handling. In order to minimize these risks, an organizational mechanism, i.e. collective cognition and governance, will be needed.

2.4 The Cognitive Views of Organisational Information Processing

The aforementioned theories mainly focus on individual information processing, although organizational settings are considered as determinants in these models. Cognitive views of organizational information processing perceive organizations as systems that learn and interpret their environments [8] in order to cope with uncertainty. Galbraith's theory of organizational design from an organizational information processing view [10] laid the foundation for later research and debate. The view rests on three important concepts: information processing needs, information processing capability, and the fit between the two to obtain optimal performance. Organizations need quality information to cope with environmental uncertainty and improve their decision making. Environmental uncertainty stems from complexity (the number of variables) and dynamism (the frequency of change of those variables). Organizations have two strategies to cope with uncertainty: (1) reduce the effect of uncertainty by developing a buffer, and (2) increase information processing capability by implementing structural mechanisms and enhancing lateral and vertical information flow.

Galbraith further elaborates that where conditions are routine and simple, rules and programs can be used to absorb the relatively small amount of uncertainty facing the organization. Where uncertainty increases, exceptions must be referred up the hierarchical authority structure for decision making. When information-processing requirements threaten to overload the management structure, decentralized decision making at lower levels in the organization should take place; the use of lateral relations allows more information processing to be decentralized so as to reduce the information-processing load on management. This includes, for example, more face-to-face communication, cross-functional committees, task forces, and matrix structures. When this is no longer adequate, various vertical information-processing systems can be attached to the hierarchical structure, which increases the organization's information-processing capacity. Galbraith's model links organizational design to organizational information processing, which in turn shapes individual information processing behavior.


Organisational information processing has also been considered from a routine vs non-routine perspective [8, 19], which is similar to the distinction between strategic vs tactical information processing [1, 9]. Routine information processing deals with relatively large-volume, repetitive, day-to-day problems and situations that can be automated. Non-routine information processing deals with high-uncertainty situations that are unique, dynamic, infrequent, or heterogeneous. Such information processing is complex yet strategically important; hence human judgement and heuristics remain critical as effective mechanisms, alongside organizational systems such as collective cognition and advanced AI.

2.5 Performativity of Intelligent Systems in Distributed Decision Making

Aversa et al. [2] extend the debate on information-based decision making by arguing that cognition is not simply located in the (head of the) decision maker but is distributed across a variety of non-human entities. There is a shift in the focus of decision making for strategic purposes from the mind of the individual(s) making the decisions to the network of artifacts and human beings involved in the practice of deciding. Using a real failure case from F1 racing, involving big data analytics, decision algorithms, and requirements for instant/swift strategic decisions, the researchers propose three facets of information processing for decision making:

– Situated nature of decision making with big data – organization culture (e.g. blame vs accountability) and the ergonomics of the decision situation (visual and audio environment);
– Distributed cognition in big data decision making – the collectives of human entities (HQ and control centres vs the racing site team) and non-human entities (technologies, networks, analytics, algorithms, etc.); the cognitive division of labour between team members (social distribution) and cognitive tasks distributed across time (material distribution);
– Performative dimension of decision-making tools – the capabilities and assumptions embedded in the decision algorithms, the unexpected events modelled, decision responsibilities, and what is not included in the framework.

This model [2] considers cognitive tasks as shared between people and non-human artifacts. The notion of performativity provides a useful perspective for considering digital technology as a contemporary driving force shaping how tasks, information, and decisions are handled in organisations. We envision that the performative dimension of management information systems and analytics influences managers' information behaviour; hence this dimension is included in the conceptual model.

In summary, managerial information behavior research is rooted in general information seeking/retrieval behavior, but it has evolved in two main stages: the early stage, which focuses primarily on individual information processing behavior, and the digital-era stage, in which technology advances (the performative force) are shaping managerial and organizational information behavior. Figure 1 maps out this evolutionary process. The review supports the assertion of [12] that managerial information patterns evolve slowly compared to technological development. However, new technologies have influenced more routine exchange of information, thereby causing increased dispersion among users and creating new roles.

Fig. 1 The evolution of managerial information behavior (early stage: individual and organisational – Mintzberg 1973, Galbraith 1973, Choo 1998; digital-era stage: human and non-human entities – Aversa et al. 2018, Gullberg 2011)

3 Discussion and the Framework

3.1 The Driving Forces

The theoretical review suggests three key driving forces shaping managerial and organizational information processing.

The situational dependencies – From Taylor and Mintzberg to Choo and Aversa et al., it is clear that managerial information behavior is contingent on multiple factors, including managerial role, task nature, decision situation, problem-solving assumptions, organizational culture, sector differences, etc. This is akin to the notion of the "situated nature of decision making" [2]. Given the multiple dependencies and the variations among them, we argue that managerial information processing is unlikely to be abstracted into a homogeneous general pattern. This supports the assertion of [12] that managerial information patterns resemble a mosaic rather than a puzzle that can be solved by specific pieces.

Human cognitive capability and affective response – Managers use their experience, knowledge, vision, and judgement to interpret new information, leading to either a positive or a negative affective response. Despite the heterogeneity of individual information processing behaviors, this has been shown to be a common cognitive process that drives managerial information processing. We envision that individual cognition is constant; it underlies organizational responses, specifically collective team cognition and collaboration.

The digital performativity – The non-human entities centred around digital performativity are a force affecting managerial information processing in the digital era. This can be viewed as an emerging, distinctive driving force shaping not only managers as information users but also the organizational setting for information processing. The digital power refers to, for example, intelligent systems, AI, and distributed networks that form new (non-human) decision nodes in distributed decision networks. As such, performativity carries associated processing and decision responsibilities and accountabilities that require organizational reconfiguration.

To summarise, the variety and dynamics of the driving forces can be considered as independent variables, which are hypothesized to be associated with managerial information processing, represented by individual behavior and organizational responses as dependent variables. Figure 2 depicts the hypothesized relationships. The two sub-variables of the dependent variable are assumed to have relationships of different strength.

Fig. 2 A conceptual framework of managerial information processing

It can be argued that managerial information processing behavior is changing with respect to the conventional process of acquisition, synthesizing, sense making, and usage. With automatic information feeding, news alerts, regular information provision, and exception reports, managers may shift their focus from information acquisition to synthesis and sense making. Wide information sources provide high accessibility but often low credibility; this requires managers to perform specific targeted searches to validate the data received. Structured data manipulation and analytics tools are widely available for data synthesising, and it is envisaged that managers actively use computer-based systems (or their analytic results) for synthesising data, but there is no comparable shift away from using human cognition and affective response to make sense of information.

Compared to the changes in individual information behavior, more organizational responses that differ from the traditional mechanisms will emerge. These include the reduction of information needs by automating routine tasks and repetitive data processing, and more distributed data analytics and decision making performed by intelligent machines over interconnected networks and cloud platforms. Due to increased uncertainty and complex decision situations, a shift is expected from individual sensemaking towards more collective cognition and sense making via both vertical and lateral relations designed by specific organizations. To minimise the risks caused by disinformation, misinformation, and fake news, as well as low trust in using AI and intelligent systems, a wide organisational digital governance framework is needed, particularly responsible AI to govern non-human decision risks and responsibilities. All of this requires organisational structure and culture transformation. It is impossible to examine managerial information behavior without considering organizational dynamics.

The managerial information focus shifts to uncertainties, which depend on the dynamics of the business and environmental situation. Low complexity and low dynamism suggest a tendency towards more structured, repetitive, and routine information processing; conversely, high complexity and high dynamism demand that mixed approaches and infrastructures prevail. Driven by performativity, there are two contrasting directions in managerial information processing: to reduce managerial information needs, and to increase performativity in order to simplify complexity. The former is achieved by using advanced systems and technologies to perform routine information processing, including information acquisition, structured synthesising, and distributed decision making. The latter is realised by more intelligent systems, including AI/ML, that enhance complex information processing and sensemaking, which are usually performed less well by individual managers. The aim is not to substitute the managerial role in making sense of complexity, but to simplify complexity through cues based on big data processing, algorithms, and machine heuristics.

From an organisation design perspective, two new elements for information processing should be considered in the era of big data and AI: collective sensemaking and digital governance. This requires an overhaul or a new design of team working – the lateral relations and the hierarchical reporting structure for information processing in organisations. Digital governance is not merely about compliance with legislation, data security, and privacy, but more about risks and accountabilities when data processing and decisions are performed by advanced systems in distributed networks. Responsible AI is on the horizon [17]. Novel concepts such as a Data Quality Board, Data Stewards, a Responsibility Assignment Matrix, and an Accountability Network can be implemented as part of a future digital governance framework in organisations.

4 Conclusion This study reveals three driving forces shaping managerial information processing in the big data and AI era: Situational Dependency, Cognitive Capability Affective Response and Digital Performativity. The organizational/business dependencies are fundamental in determining the specific way of what and how information is processed by managers as individuals and by organisations. Attempt to generate a homogeneous pattern or model of managerial information processing is unlikely from this regard. Nonetheless, managerial cognition and effective responses are continuous playing important role in processing new information received, experience and judgements are indispensable knowledge used to establish initial cognition. The trend is on shifting focus from routine to complexity. Lateral relations and hierarchical escalation, as renewed organizational response, should be followed to form collective sensemaking for complex situation and incomplete information processing.

620

M. Xu et al.

The performativity force impacts managerial information processing significantly, by reducing managerial information needs through automation, distributed information processing, and decision making, and by increasing digital capability, e.g. powerful analytics, algorithms, and machine heuristics that enhance managers' ability to handle complex situations. The performativity force in particular is changing the status quo of organizational design, i.e. it calls for corporate digital governance. This opens a spectrum of new organizational design and thinking as both an academic and a practical challenge.

References

1. Ansoff HI (1979) Strategic management. The MacMillan Press Ltd., London and Basingstoke
2. Aversa P, Cabantous L, Haefliger S (2018) When decision support systems fail: Insights for strategic information systems from Formula 1. J Strateg Inf Syst 27:221–236. https://doi.org/10.1016/j.jsis.2018.03.002
3. Bartelings JA, Goedee J, Raab B, Bijl R (2017) The nature of orchestrational work. Public Manag Rev 19(3):342–360
4. Bawden D, Robinson L (2013) No such thing as society? On the individuality of information behaviour. J Am Soc Inf Sci Technol 64(12):2587–2590
5. Brigham TJ (2017) Merging technology and emotions: introduction to affective computing. Med Ref Serv Q 36(4):399–407
6. Choo CW (1998) The knowing organisation. Oxford University Press, New York
7. Cristofaro M (2020) "I feel and think, therefore I am": an affect-cognitive theory of management decisions. Eur Manage J 38:344–355. https://doi.org/10.1016/j.emj.2019.09.003
8. Daft R, Weick K (1984) Towards a model of organisations as interpretation systems. Acad Manag Rev 9:284–295
9. Egelhoff WG (1991) Information-processing theory and the multinational enterprise. J Int Bus Stud 22:341–368
10. Galbraith JR (1974) Organization design: an information processing view. Interfaces 4(3):28–36
11. Vaia G, Arkhipova D, DeLone W (2022) Digital governance mechanisms and principles that enable agile responses in dynamic competitive environments. Eur J Inf Syst. https://doi.org/10.1080/0960085X.2022.2078743
12. Gullberg C (2011) Puzzle or mosaic? On managerial information patterns. Linköping Studies in Science and Technology Thesis No. 1483 LiU-TEK-LIC 2011:22
13. Martinsons M (1994) A strategic vision for managing business intelligence. Inf Strategy Executive's J Spring 10(3):17–30
14. Mintzberg H (1973) The nature of managerial work. Harper and Row, New York
15. Mintzberg H (1980 edition) The nature of managerial work. Prentice-Hall, Inc., Englewood Cliffs
16. Mintzberg H (1980) The nature of managerial work. Prentice-Hall Inc., Englewood Cliffs
17. Mikalef P, Conboy K, Lundström JE, Popovič A (2022) Thinking responsibly about responsible AI and 'the dark side' of AI. Eur J Inf Syst 31(3):257–268. https://doi.org/10.1080/0960085X.2022.2026621
18. Petratos PN (2021) Misinformation, disinformation, and fake news: cyber risks to business. Bus Horiz 64:763–744. https://doi.org/10.1016/j.bushor.2021.07.012
19. Daft RL, Macintosh NB (1981) A tentative exploration into the amount and equivocality of information processing in organizational work units. Adm Sci Q 26:207–224
20. Maitlis S, Vogus TJ, Lawrence TB (2013) Sensemaking and emotion in organizations. Organ Psychol Rev 3(3):222–247. https://doi.org/10.1177/2041386613489062


21. Schildt H, Mantere S, Cornelissen J (2020) Power in sensemaking processes. Organ Stud 41(2):241–265. https://doi.org/10.1177/0170840619847718
22. Taylor RS (1991) Information use environments. In: Dervin B, Voigt MJ (eds) Progress in communication sciences. Ablex Publishing, Norwood, NJ, pp 217–255
23. Wilson TD (1999) Models in information behaviour research. J Documentation 55(3):249–270
24. Wilson TD (2016) A general theory of human information processing behavior. Inf Res 21(4):1–19
25. Duan Y, Edwards JS, Dwivedi YK (2019) Artificial intelligence for decision making in the era of Big Data - evolution, challenges and research agenda. Int J Inf Manage 48:63–71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021
26. Zajonc RB (1980) Feeling and thinking: preferences need no inferences. Am Psychol 35(2):151–175. https://doi.org/10.1037/0003-066X.35.2.151

Empowering European Customers: A Digital Ecosystem for Farm-to-Fork Traceability

Borja Bordel, Ramón Alcarria, Gema de la Torre, Isidoro Carretero, and Tomás Robles

B. Bordel (B) · R. Alcarria · T. Robles, Universidad Politécnica de Madrid, Madrid, Spain
G. de la Torre · I. Carretero, CODAN ESPAÑA S.A., Madrid, Spain
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_56

Abstract Empowering users through the provision of more and enriched information about the (food) products they buy, so that they are able to make more conscious decisions, is one of the key objectives of the European Commission and the European framework programmes for research and innovation. The Industry 4.0 revolution has made it technologically possible to achieve this objective, but some logistic challenges remain open. Mainly, the great complexity of current supply chains makes it very difficult to "move" information from primary producers to final customers. A global agreement about data formats, information storage, confidentiality, etc. is nowadays unreachable. Thus, innovative tools are needed that do not require such coordination and that allow a relevant, although partial, data sharing policy. Different experiments have been reported but, in general, they are focused on unelaborated products (such as vegetables or chicken meat). Solutions for more complex products, such as bakery products, which are composed of several different ingredients, need to be investigated. This paper addresses this gap. We describe a new tool for elaborated-product traceability (named TrFood) based on QR codes, web technologies, and composition schemes. The application also tracks the ecological footprint and the geographical origin of the supplies and final products. Users may obtain all this information using a specific mobile application. Furthermore, a real deployment and experimental validation with real users was carried out in the context of the European DEMETER project. Results show a relevant improvement in the Quality-of-Experience of customers when using the TrFood tool.

Keywords Traceability services · Industry 4.0 · Empowering users · DEMETER · Bakery industry · Mobile applications

1 Introduction

Industry 4.0 [1] is a technological revolution based on the integration of new-generation solutions in the industrial sector. Technologies such as Artificial Intelligence [2], Cyber-Physical Systems [3], the Internet of Things [4], Automatic Decision Support Systems, etc., are embedded in production schemes and processes to improve competitiveness and efficiency. However, as has happened many other times in history, this industrial revolution also affects society in general. In particular, the idea of the "customer 4.0" [5] has recently been described, and it is today a reality in most advanced economies. Customers 4.0 are aware of most marketing and propaganda strategies [6]. They can feel when information is being manipulated and when brands are deploying persuasion strategies to promote products that do not match their real needs. In general, customers 4.0 are not open to buying generic products, but only those that really fit their needs, ethics, political positions, etc. Products, in general, must envision the future and satisfy upcoming needs and the social status quo [7].

In order to ensure that products match their expectations, customers 4.0 employ a large catalogue of technologies: recommendation platforms [8], collaborative databases [9], mobile applications for waste management [10], etc. The usability and Quality of Experience (QoE) of these technological applications is continuously improving, as customers 4.0 employ only those tools with the best and most positive interfaces and services. However, one of the key user complaints about these platforms is still unresolved: the lack of information they provide about products in general [11]. In fact, digital tools providing information about products are usually supported by a single agent in the supply chain [11] (final customers, retailers, producers, etc.), and the other involved actors rarely join. In this context, tools manage information about only one stage of product manufacturing and can barely represent the entire product lifecycle. This approach has proven to be successful and to provide enough information when primary products are studied [12]: supply chains are short (no more than three agents), and final customers get a product very similar to the one extracted by primary sector actors. Food products such as chicken meat or vegetables can be monitored thanks to this approach [12]. However, elaborated products, for example bakery products, have a much more complex lifecycle. Any recipe, even the simplest, includes at least ten different ingredients (flour, chocolate, eggs, etc.), which may themselves be half-elaborated products (flour, for example). The initial ingredients (supplies) are 'consumed' to create a totally new product whose life cycle and supply chain do not manage any information about the supplies [13]. In this context, customers have real difficulties learning about the real origin of products, their ecological footprint, ethics in manufacturing, etc.

This open challenge could be addressed if all agents involved in the logistics of (bakery) products collaborated and agreed on a common storage place, data sharing policy, and data format, to put all the information together and make it available to customers. Nevertheless, such a global agreement is nowadays very complicated. Although current Industry 4.0 technologies already allow the interconnection of different information domains (where different formats, communication solutions, and distributed storage are employed), some logistic challenges are still unsolved. Logistic agents have conflicts of interest, different approaches to transparent data sharing policies, and industrial property and industrial secrets to protect; on the other hand, opening (or replacing) existing information management systems requires an investment that most companies are not willing to make. The European Union is aware of this challenging situation, and one of its key objectives is to facilitate the adoption of potential solutions by the European economic fabric. In this way, the latest European Framework Programmes for Research and Innovation have consistently identified as a social challenge the empowerment of users through the provision of more and enriched information on the products they buy [14]. With this information, customers could make more conscious decisions, aligned with the European principles of transparency, respect for the environment and human rights, and promotion of local industries. Thus, new and innovative approaches are needed to collect information about elaborated products across complex supply chains, including large catalogues of ingredients and supplies (which may also be elaborated or half-elaborated products), with the capability to structure all this information in such a way that it can be helpful and empower customers.

This paper aims to address these challenges. The objective of this paper is to introduce a new traceability solution (named TrFood), focused on elaborated food products. The proposed tool is based on interoperable web technologies. Using a web portal, each actor in the supply chain can register their products, which are labeled using a unique hash identifier and QR codes. For every product, a simplified supply chain is defined with only four possible states (farm, processing, distribution, and retailer). With every state, the product status, the ecological footprint, and the geographical location are updated. In addition, elaborated products are described as compositions of ingredients already existing in the platform, so that the entire product lifecycle may be monitored. Customers can consume all this information through a specific mobile application. This application is independent from existing information management systems, and it can be employed by each actor according to their internal policies, principles, and needs. In order to evaluate the performance of this new tool, its acceptance among industrial partners, and the perception of final users, a real deployment and experimental validation were carried out in the European bakery sector within the DEMETER project.
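As a rough illustration of the registration step just described, the sketch below derives a unique hash identifier for a product record and builds the URL that its QR code would encode. The field names, the URL pattern, and the hashing scheme are assumptions made for this example only; they are not the actual TrFood implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def register_product(name: str, producer: str, origin: str) -> dict:
    """Create a product record with a unique hash identifier (illustrative scheme)."""
    record = {
        "name": name,
        "producer": producer,
        "origin": origin,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    # The identifier is a digest of the registration data; any collision-resistant hash would work.
    record["id"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    # A QR code printed on the label would encode a URL like this one (hypothetical endpoint).
    record["qr_url"] = f"https://trfood.example.org/product/{record['id']}"
    return record

batch = register_product("Wheat flour, batch 42", "Molino Sur", "Castilla y Leon, ES")
print(batch["qr_url"])
```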


The rest of the paper is organized as follows. Section 2 describes the state of the art of food product traceability; Sect. 3 presents the proposed tool from a technological point of view; Sect. 4 describes the experimental deployment and validation in the context of the DEMETER project and discusses the obtained results; and Sect. 5 concludes the paper.

2 State of the Art: Food Traceability Solutions Food traceability is a multidisciplinary challenge. Some authors [15] have reported up to 13 different approaches to the problem in the research literature: from logistic management [16], to unique identification [17], transparency [18], interoperability [19] and production management [20], among others. From a technological perspective, some authors have also studied the pending open questions in food traceability [21]. RFID [22], NFC [23], isotope analysis [24], and chemometrics [25] are the technologies most commonly employed to enable transversal food traceability across the entire supply chain. However, all of these technologies are physical solutions for information capture, and an entire food traceability system cannot be based only on physical-level technologies. Some authors have actually analyzed how food traceability systems may be fully developed using some of the most promising or popular technologies. Solutions based on ontologies [28], the Internet of Things [29], fuzzy logic [30] and artificial intelligence [31] may be found in the research literature. Other authors have studied information management techniques that can be applied in food traceability systems. Mathematical frameworks that describe the internal structure of systems and transactions [32], theoretical frameworks that determine the characteristics of the data to be collected and how they should be managed [26], and optimization algorithms [27] to identify critical paths, inefficiencies, etc. have been reported. The legal aspects and international regulations have received a lot of attention in the last few years [33], but the most popular approach in the last five years is Blockchain [34]. Although many different challenges related to blockchain and food traceability have been identified [35], several architectures and designs can be found applying Blockchain to international [10], national [36] or local food traceability. However, this approach is not fully compliant with European regulation, so new Blockchain networks are being designed [37]. But today, deployed food traceability systems should be based on more stable technologies, with no open legal questions or challenges.


3 A New Tool for Farm-to-Fork Traceability Food traceability, especially when processed products must be monitored, requires a very precise model of ingredients, interactions, and states, as well as a clear definition of which relations and transformations are allowed, and how they affect the original primary products and/or the resulting processed elements. Besides, all this information must be supported, accessed, and managed through a simple and multiplatform system, so that all agents across the entire food supply chain can operate with products. And, finally, all the collected information must be presented to users in an accessible way.

3.1 Food Classification and Description: Ingredients The TrFood framework is a traceability platform that includes eight different types of food products or “food groups”. Those “food groups” are the following:

• Milk and dairy products
• Meat, fish and eggs
• Potatoes, pulses, and nuts
• Vegetables
• Fruits
• Cereals, sugar, and sweets
• Fats, oil, and butter
• Processed food

The first seven groups are primary products and potential ingredients. Thus, they can be registered and modeled without reference to any previous product. To introduce (define) a product in the system, four data must be provided: the name (this may include a generic reference and/or a code, such as the batch number), the expiration date, the specific type of food (for each group, the type of food may be specified using a second-level description from a provided list) and nutritional information (a free text area, where producers may indicate all the relevant information about their food products). Figure 1 shows the corresponding form. In addition, primary food products may be associated with a production location. Geographic coordinates for the production center may also be included, optionally, in the food product’s model. Finally, and also optionally, it is possible to upload a photo to describe the product. Regarding the final group of food (processed food), all previous information must be provided, but some additional data are required. In particular, the list of ingredients must be registered. This list is not free but must be made up of primary products already included in the TrFood platform. Also, for each of these ingredients, the percentage (in weight) of the final processed product that corresponds to it must be indicated. Figure 2 shows the page where the TrFood platform allows these operations.
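As an illustration of the registration data described above, the sketch below models a primary product and a processed product as simple Python data classes. The class and field names are hypothetical, since the paper does not publish the actual TrFood schema; only the information items listed in this section are represented.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical models mirroring the registration forms described above;
# TrFood's real schema is not published, so names and types are assumptions.

@dataclass
class PrimaryProduct:
    name: str                      # generic reference and/or code (e.g. batch number)
    expiration_date: date
    food_group: str                # one of the eight "food groups"
    food_type: str                 # second-level description from a provided list
    nutritional_info: str          # free text supplied by the producer
    production_location: Optional[tuple] = None   # optional (lat, lon)
    photo_url: Optional[str] = None                # optional photo

@dataclass
class ProcessedProduct(PrimaryProduct):
    # Ingredients must already exist in the platform; each one is referenced by
    # its identifier together with its share (in weight) of the final product.
    ingredients: dict = field(default_factory=dict)   # identifier -> percentage

    def percentages_are_consistent(self) -> bool:
        # The declared shares should not exceed 100 % of the final product weight.
        return sum(self.ingredients.values()) <= 100.0

# Example: a processed product composed of two already-registered ingredients.
biscuit = ProcessedProduct(
    name="Biscuit batch 2022-117",
    expiration_date=date(2023, 3, 1),
    food_group="Processed food",
    food_type="Bakery",
    nutritional_info="Contains gluten and eggs",
    ingredients={"flour-id-001": 60.0, "egg-id-014": 15.0},
)
print(biscuit.percentages_are_consistent())   # True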


Fig. 1 Product registration

Fig. 2 Processed food registration: ingredients

Finally, for every product, it is possible to indicate the ecological footprint. The ecological footprint is represented by the equivalent amount of CO2 that is emitted to the atmosphere to produce, process, transport, or, in general, manage a food element. Food agents may indicate a personalized value, but they may also choose not to give any information, in which case the TrFood system will calculate the ecological footprint according to data reported in the scientific literature [38].

3.2 A Simplified Model for the Food Supply Chain In the TrFood model, the food supply chain consists of four steps: farm, processing, distribution, and retailer. Although a real food supply chain may include several different steps, all of them may be classified into these four categories, which are much more understandable for final customers than more technical names taken from the logistics world.


In our system, TrFood, an unlimited number of steps, transactions, or phases may be added to the supply chain of any product (primary or processed). All products, when registered, are initiated with only one transaction in the supply chain (farmer), but any agent can add a new transaction or step, just using the unique identifier associated with the product. The TrFood system provides this identifier as an alphanumeric string, but also as a QR code to facilitate the system operation. As the product lifecycle only ends when it is finally bought by final customers, the food supply chain in TrFood is not limited and does not end automatically, even if the final step (retailer) is reached. At any moment, the product could return to warehouses and the supply chain would continue evolving. When any agent in the food supply chain updates the status by adding a new step to the supply chain, two parameters are managed: the geographical location and the ecological footprint. The new geographical location can be introduced, so the location of the logistic center is registered. Taking into account the distance between the original geographic point and the new location, the TrFood system calculates the equivalent CO2 emitted to the atmosphere, and the ecological footprint is updated. If the food agent has more detailed information, the ecological footprint can also be updated with a customized amount of eCO2. However, the customized value cannot be lower than the value automatically calculated by TrFood.
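A minimal sketch of this footprint update, assuming a haversine distance between consecutive locations and a placeholder per-kilometre emission factor (the real factor used by TrFood is taken from the literature [38] and is not reproduced here), could look as follows. The rule that a value declared by a food agent cannot be lower than the automatically calculated one is also shown.

import math

# Placeholder emission factor (kg eCO2 per km travelled by the product); the
# value actually used by TrFood comes from the scientific literature [38].
EMISSION_FACTOR_KG_PER_KM = 0.1

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def update_footprint(previous_location, new_location, current_eco2, declared_eco2=None):
    """Add the transport footprint of one supply-chain step.

    A value declared by the food agent is only accepted if it is not lower than
    the value calculated automatically from the travelled distance.
    """
    distance = haversine_km(previous_location, new_location)
    auto_eco2 = current_eco2 + distance * EMISSION_FACTOR_KG_PER_KM
    if declared_eco2 is not None and declared_eco2 >= auto_eco2:
        return declared_eco2
    return auto_eco2

# Example: moving a product from Madrid to Bragança.
print(update_footprint((40.4168, -3.7038), (41.8072, -6.7592), current_eco2=2.0))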

3.3 TrFood Software Platform The TrFood system is based on web technologies. The platform is supported by a JavaScript server (Node.js), while all data about products, transactions, etc. are maintained in a non-relational database (MongoDB). The front-end is accessible through all kinds of devices, and it is based on EJS templates. In addition to the spaces for product registration or food supply chain management, there are also functionalities for user registration. This is only mandatory for users who register new products, but not for agents across the supply chain who manipulate already registered products. These do not have to be registered on the TrFood platform, as they can update the state of the product just using the product’s unique identifier. Regarding the data model, the TrFood platform is based on the DEMETER AIM (Agricultural Information Model), consisting of a set of ontologies. TrFood is based on the agri-context ontology [39], which is a Domain Specific Ontology defined within the DEMETER project to model the agrifood sector. It is linked to other well-known previous ontologies such as Saref4Agri, FOODIE, and FIWARE Agri-profiles. TrFood defines a wrapper that transforms the information from a JSON format (typical of MongoDB databases) to a JSON-LD format. JSON-LD is designed around the idea of “context”, to provide JSON assignments to a common/shared model. This allows applications to communicate using abbreviations without losing precision. The context relates terms in a JSON document with elements in the ontology, the DEMETER AIM in our case.
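The wrapper idea can be sketched as follows: a product document, as it would be stored in MongoDB, is annotated with an @context pointing to the DEMETER agri-context ontology [39], so that the abbreviated keys resolve to ontology terms. The field names are illustrative only; the actual mapping is defined by the DEMETER AIM.

import json

# URL of the DEMETER agri-context ontology (reference [39]).
AGRI_CONTEXT = "https://w3id.org/demeter/agri-context.jsonld"

def to_json_ld(mongo_document):
    """Wrap a plain MongoDB JSON document into JSON-LD.

    Adding "@context" relates the abbreviated keys of the document to terms of
    the DEMETER AIM; the field names used below are illustrative only.
    """
    doc = dict(mongo_document)
    doc.pop("_id", None)           # internal MongoDB identifier is not exported
    doc["@context"] = AGRI_CONTEXT
    return doc

product = {
    "_id": "internal-object-id",   # hypothetical stored document
    "name": "Flour batch 0042",
    "foodGroup": "Cereals, sugar, and sweets",
    "location": {"lat": 41.8072, "lon": -6.7592},
}
print(json.dumps(to_json_ld(product), indent=2))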


Fig. 3. TrFood mobile application

The TrFood platform employs Google Maps functionalities, so users and agents can define locations just by clicking on a map, without the need to introduce coordinates in a specific format. Finally, generic functionalities such as the edition of personal data, passwords, profile visualization, etc. are also incorporated into the TrFood platform. The TrFood platform includes a form to help food agents modify and update the food products, but also an endpoint that allows customers to consume and visualize all the information about food products. As the TrFood web interface may be uncomfortable for the user during the shopping experience, the TrFood platform is complemented with a mobile application. The TrFood mobile application employs QR codes, in which the product identifiers are encoded, to consume information from the TrFood database using HTTP requests and messages. The application (see Fig. 3) was developed using Java and Android technologies. It also uses Google Maps technologies to visualize routes in a more user-friendly manner.
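The consumption path can be illustrated with the following sketch (written in Python for brevity, although the mobile application itself is developed in Java): the product hash read from the QR code is used to query the platform over HTTP. The endpoint path and the response fields are assumptions, since the TrFood API is not documented in the paper.

import requests

# Hypothetical base URL and endpoint; the real TrFood API is not published.
TRFOOD_API = "https://trfood.example.org/api"

def fetch_product(product_hash):
    """Retrieve the traceability record of a product from its QR-encoded hash."""
    response = requests.get(f"{TRFOOD_API}/products/{product_hash}", timeout=10)
    response.raise_for_status()
    return response.json()

def summarize(record):
    """Build a short summary for the shopping screen (field names are assumed)."""
    steps = record.get("supplyChain", [])
    return (f"{record.get('name', 'unknown product')}: "
            f"{len(steps)} supply-chain steps, "
            f"{record.get('ecologicalFootprint', '?')} kg eCO2")

if __name__ == "__main__":
    print(summarize(fetch_product("hash-read-from-qr-code")))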

4 Case Study: Methods, Materials and Results The European project DEMETER aims to empower food agents in Europe through new and innovative technologies such as Cyber-Physical Systems, Artificial Intelligence, Industry 4.0 solutions or the Internet of Things. To evaluate how the different technologies and paradigms may help European customers and/or farmers, different pilots are defined within the context of the DEMETER project.

Each of these pilots measures different Key Performance Indicators (KPIs) about several different aspects of the food sector. KPIs regarding the same target (farmers, customers, etc.) are analyzed together, organized in use cases. Each use case later defines different experiments to analyze the evolution of every KPI when any technological solution is implemented. The case study in this paper belongs to the “from the farm to the fork” pilot. This pilot considers three use cases, including the Industry 4.0 use case. Finally, within this Industry 4.0 use case, seven different experiments have been defined. The one we are addressing is “Experiment#3”. It focuses on analyzing whether user satisfaction and Quality of Experience (QoE) increase if customers receive trustworthy and transparent information about the bakery value chain, suppliers and products. We are also addressing “Experiment#6”, where an information system must be deployed to monitor and provide traceability data about raw materials, providers, production, etc. (for industry managers). This Industry 4.0 solution should avoid shortages caused by logistics, improve food security, and promote food policies such as ecologic food. In Experiment#3, two different groups of customers were considered. One group was managed as a control group, and participants in that group were asked to shop for food products the same way they had been doing during the past three months. The second group was managed as a pilot group. Participants were provided with the TrFood mobile application to enrich the shopping process and help customers make decisions. Information in the TrFood platform was introduced by food processing industries participating in the DEMETER farm-to-fork pilot. Both groups had similar characteristics, and privacy and anonymity were maintained throughout the experience. At the end of the experience, all users were asked to respond to a short survey with six questions, to be answered using a Likert scale (from “totally disagree” to “totally agree”). Three of these questions evaluated the QoE, while the remaining three questions focused on the customers’ satisfaction (see Table 1). Results from both groups (pilot and control group) were collected and compared using the Mann-Whitney U test. Table 1 shows the obtained results from this analysis. Regarding “Experiment#6”, a similar approach was followed. A group of industry managers in the DEMETER project was selected as the control group. They continued with their usual operations the same way they had been doing during the last three months. On the other hand, a pilot group of managers was defined. They could also consume the information stored on the TrFood platform regarding their suppliers, processed products, environmental footprint, etc. Both groups had similar characteristics, and privacy and anonymity were maintained throughout the experience. At the end of the experience, all users were asked to answer a short survey, with six questions, to be answered using a Likert scale (from “totally disagree” to “totally agree”). The results of both groups (pilot and control group) were collected and compared using the Mann-Whitney U test. Table 1 shows the results obtained.


Table 1 Surveys: results (KPI, question, and significance)

QoE
• I feel empowered to make better shopping decisions thanks to the information I get about products (**)
• The shopping process was focused on my needs and I could do the best decision for me (***)
• My shopping experience was enhanced and I feel the food system is focused on me as a customer (**)

Satisfaction
• I feel satisfied with my shopping decisions (***)
• The information provided by food agents to me makes me feel satisfied as a customer (**)
• My general experience was satisfactory (**)

Number of shortages
• The number of shortages in my supply chain reduced (***)
• I could anticipate shortages and organize alternative production schedules (***)

Ecologic food
• I think I could help promote the food supply in a stronger way and I will do it (***)
• I am concerned about the ecological footprint of my products (**)

Cost
• Production organization costs have been reduced because of a better planification (***)

NS not significant; * significant at p < 0.05; ** significant at p < 0.005; *** significant at p < 0.001

As can be seen, both indicators (QoE and satisfaction) improved significantly, with a significance level of p < 0.005 or better. Specifically, participants in the pilot group reported higher scores in questions related to shopping decisions. In conclusion, this experiment shows how customers felt empowered in their decisions thanks to the information provided by the TrFood application. The proposed system not only improves the customer experience. As can also be seen in Table 1, the number of shortages in the food supply chain was reduced in a very significant manner. Managers could control supplies almost in real time, which helped them to organize alternative production schedules. Also, better planification helped managers to reduce production costs. However, managers in the pilot group reported an increasing concern about the ecological footprint. All improvements in Experiment#6 were associated with a very high significance level (p < 0.001).
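For reference, the statistical comparison reported in Table 1 corresponds to a standard Mann-Whitney U test on the Likert responses of both groups, as in the sketch below. The response values shown are invented and do not correspond to the project data; only the significance labelling convention is taken from Table 1.

from scipy.stats import mannwhitneyu

# Invented Likert responses (1 = totally disagree ... 5 = totally agree) for one
# question; the actual DEMETER survey data are not reproduced here.
control_group = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]
pilot_group = [4, 5, 4, 5, 3, 5, 4, 4, 5, 4]

# Two-sided test of whether pilot responses differ from control responses.
statistic, p_value = mannwhitneyu(pilot_group, control_group, alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.4f}")

# Significance labelling following the convention used in Table 1.
if p_value < 0.001:
    label = "***"
elif p_value < 0.005:
    label = "**"
elif p_value < 0.05:
    label = "*"
else:
    label = "NS"
print(label)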

5 Conclusions In this paper, we describe a new tool for the traceability of elaborated products (named TrFood) based on QR codes, web technologies, and composition schemes. This application may also control the ecological footprint and the geographical origin of the supplies and final products. Users may obtain all this information using a specific mobile application.

The results show a relevant improvement in the Quality of Experience of customers when using the TrFood tool. Acknowledgements This work is supported by Comunidad de Madrid within the framework of the Multiannual Agreement with Universidad Politécnica de Madrid to encourage research by young doctors (PRINCE project). The authors also acknowledge their participation in the DEMETER project (H2020-DT-2018-2020. Grant no: 857202).

References 1. Bordel B, Alcarria R, Robles T (2021) Controlling supervised industry 4.0 processes through logic rules and tensor deformation functions. Informatica 32(2):217–245 2. Bordel B, Alcarria R, Robles T (2022) Recognizing human activities in industry 4.0 scenarios through an analysis-modeling-recognition algorithm and context labels. Integr Comput Aided Eng (Preprint) 1–21 3. Bordel B, Alcarria R, Robles T, Martín D (2017) Cyber–physical systems: extending pervasive sensing from control theory to the Internet of Things. Pervasive Mob Comput 40:156–184 4. Malik PK (2021) Industrial Internet of Things and its applications in industry 4.0: state of the art. Comput Commun 166:125–139 5. Wereda W, Wo´zniak J (2019) Building relationships with customer 4.0 in the era of marketing 4.0: the case study of innovative enterprises in Poland. Soc Sci 8(6):177 6. Dash G, Kiefer K, Paul J (2021) Marketing-to-millennials: marketing 4.0, customer satisfaction and purchase intention. J Bus Res 122:608–620 7. Naeem HM, Di Maria E (2021) Customer participation in new product development: an industry 4.0 perspective. Eur J Innov Manag 8. Min W, Jiang S, Jain R (2019) Food recommendation: framework, existing solutions, and challenges. IEEE Trans Multimedia 22(10):2659–2671 9. Qian J, Fan B, Li J, Li X, Zhao L, Wang S, Shi C (2017) Agro-food collaborative traceability platform for distributed environment. Trans Chin Soc Agric Eng 33(8):259–266 10. Bordel B, Lebigot P, Alcarria R, Robles T (October 2018) Digital food product traceability: using blockchain in the international commerce. In: The 2018 international conference on digital science, pp. 224–231. Springer, Cham 11. Bordel B, Alcarria R, Robles T, de la Torre G, Carretero I (June 2021) Digital user-industry interactions and industry 4.0 services to improve customers’ experience and satisfaction in the European bakery sector. In: 2021 16th Iberian conference on information systems and technologies (CISTI). IEEE, pp 1–10 12. Marcos IFV, Bordel B, Cira CI, Alcarria R (June 2022) A methodology based on unsupervised learning techniques to identify the degree of food processing. In 2022 17th Iberian conference on information systems and technologies (CISTI). IEEE, pp 1–6 13. Galimberti A, Casiraghi M, Bruni I, Guzzetti L, Cortis P, Berterame NM, Labra M (2019) From DNA barcoding to personalized nutrition: the evolution of food traceability. Curr Opin Food Sci 28:41–48 14. Bordel B, Alcarria R, Torre GDL, Carretero I, Robles T (February 2022) Increasing the efficiency and workers wellbeing in the European bakery industry: an industry 4.0 case study. In: International conference on information technology & systems. Springer, Cham, pp 646–658 15. Ringsberg H (2014) Perspectives on food traceability: a systematic literature review. Supply Chain Manag Int J 16. Cambra-Fierro J, Ruiz-Benítez R (2011) Notions for the successful management of the supply chain: learning with Carrefour in Spain and Carrefour in China. Supply Chain Manag Int J


17. Karlsen KM, Dreyer B, Olsen P, Elvevoll EO (2012) Granularity and its role in implementation of seafood traceability. J Food Eng 112(1–2):78–85 18. Trienekens JH, Wognum PM, Beulens AJ, van der Vorst JG (2012) Transparency in complex dynamic food supply chains. Adv Eng Inform 26(1):55–65 19. Ruben R, Zuniga G (2011) How standards compete: comparative impact of coffee certification schemes in Northern Nicaragua. Supply Chain Manag Int J 20. Carpio CE, Isengildina-Massa O (2009) Consumer willingness to pay for locally grown products: the case of South Carolina. Agribus Int J 25(3):412–426 21. Badia-Melis R, Mishra P, Ruiz-García L (2015) Food traceability: new trends and recent advances. Rev Food Control 57:393–401 22. Barge P, Gay P, Merlino V, Tortia C (2014) Item-level radio-frequency identification for the traceability of food products: application on a dairy product. J Food Eng 125:119–130 23. Chen YY, Wang YJ, Jan JK (2014) A novel deployment of smart cold chain system using 2G-RFID-Sys. J Food Eng 141:113–121 24. Arcuri EF, El Sheikha AF, Rychlik T, Piro-Métayer I, Montet D (2013) Determination of cheese origin by using 16S rDNA fingerprinting of bacteria communities by PCR–DGGE: preliminary application to traditional Minas cheese. Food Control 30(1):1–6 25. Versari A, Laurie VF, Ricci A, Laghi L, Parpinello GP (2014) Progress in authentication, typification and traceability of grapes and wines by chemometric approaches. Food Res Int 60:2–18 26. Karlsen KM, Dreyer B, Olsen P, Elvevoll EO (2013) Literature review: does a common theoretical framework to implement food traceability exist? Food Control 32(2):409–417 27. Dabbene F, Gay P (2011) Food traceability systems: performance evaluation and optimization. Comput Electron Agric 75(1):139–146 28. Pizzuti T, Mirabelli G, Sanz-Bobi MA, Goméz-Gonzaléz F (2014) Food track & trace ontology for helping the food traceability control. J Food Eng 120:17–30 29. Amaral LA, Hessel FP, Bezerra EA, Corrêa JC, Longhi OB, Dias TF (2011) eCloudRFID– A mobile software framework architecture for pervasive RFID-based applications. J Netw Comput Appl 34(3):972–979 30. Thakur M, Sørensen CF, Bjørnson FO, Forås E, Hurburgh CR (2011) Managing food traceability information using EPCIS framework. J Food Eng 103(4):417–433 31. Ruiz-Garcia L, Steinberger G, Rothmund M (2010) A model and prototype implementation for tracking and tracing agricultural batch products along the food chain. Food Control 21(2):112– 121 32. Storøy J, Thakur M, Olsen P (2013) The TraceFood framework-principles and guidelines for implementing traceability in food value chains. J Food Eng 115(1):41–48 33. Newsome RL, Bhatt T, McEntire JC (2013) Proceedings of the July 2011 traceability research summit. J Food Sci 78(s2):B1–B8 34. Bordel B, Alcarria R, Robles T (2021) Denial of chain: evaluation and prediction of a novel cyberattack in Blockchain-supported systems. Future Gener Comput Syst 116:426–439 35. Galvez JF, Mejuto JC, Simal-Gandara J (2018) Future challenges on the use of blockchain for food traceability analysis. TrAC Trends Anal Chem 107:222–232 36. Alcarria R, Bordel B, Robles T, Martín D, Manso-Callejo MÁ (2018) A blockchain-based authorization system for trustworthy resource monitoring and trading in smart communities. Sensors 18(10):3561 37. Robles T, Bordel B, Alcarria R, Sánchez-de-Rivera D (2020) Enabling trustworthy personal data protection in eHealth and well-being services through privacy-by-design. Int J Distrib Sens Netw 16(5):1550147720912110 38. 
Caputo V, Nayga RM Jr, Scarpa R (2013) Food miles or carbon emissions? Exploring labelling preference for food transport footprint with a stated choice study. Aust J Agric Resour Econ 57(4):465–482 39. Agri-context DEMETER ontology. https://w3id.org/demeter/agri-context.jsonld. Accessed 11th Aug 2022

Design of a Task Delegation System in a Context of Standardized Projects

Alfredo Chávez and Abraham Dávila

Abstract Task delegation, in the organizational context, is the process by which the responsibility for carrying out a task is assigned to a suitable person. However, in various companies, this process is carried out subjectively and manually, leaving aside the performance of the worker. This situation affects the effectiveness of the company’s operation and produces dissatisfaction among the workers. In this article, based on a company in the telecommunications sector that operates under standardized projects, a scheme and a computer tool are established for task delegation based on the performance of the worker. For this, a systematic mapping study was carried out in order to identify and characterize the existing solutions on delegation systems based on performance. Then, once the scope of the solution was delimited, the “Delegator” computer tool was developed and the delegation of tasks based on performance was validated. From a practical perspective, the established assignment scheme allows a categorization based on the criteria: task start date, task deadline, number of inputs and number of skills required; and the tool facilitates its application to the defined case of the company. From a theoretical perspective, the categorization of workers using the SVM is performed according to four variables of the task to be delegated, two of which are new variables that count the required characteristics of the worker and the inputs of the task. Keywords Task delegation · performance evaluation · standardized projects

A. Chávez Graduated School, Pontifical Catholic University of Peru, Lima, Peru e-mail: [email protected] A. Dávila (B) Department of Engineering, Pontifical Catholic University of Peru, Lima, Peru e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_57



1 Introduction Task delegation is essential for the development of organizations. This process, in a company, allows a manager to deploy specific tasks to specialized personnel [1] and to prioritize their own functions [2]. The heterogeneity in terms of the skills of the collaborators and the level of difficulty of the tasks play an important role in the delegation processes. In this sense, [3] make use of optimization models to assign tasks to collaborators with multiple skills; and [4] determine that greater benefit is obtained if artificial intelligence is the one that delegates tasks to human beings. In order to delegate tasks objectively, in the context of a company, it is necessary to consider various factors, one of which is the evaluation of the worker’s performance [5]. In this regard, some articles address the importance of performance evaluation. For example, a comparative analysis of performance evaluation was carried out in [6], supporting the advantages that 360° evaluation entails. In addition, [7] and [8] analyze the impact of the delegation of authority on job satisfaction and performance, both concluding that the assignment of authority is statistically linked to organizational growth. Furthermore, the study conducted in [9], in the public domain, shows that the decision to delegate does not have objective support beyond decentralization. In this way, this study differs from the others by introducing an objective variable such as performance. On the other hand, some enterprise resource planning (ERP) system programs have been identified which allow tasks to be delegated, such as [10]: (i) SAP S/4HANA Cloud, (ii) WorkBook, (iii) Oracle Fusion Cloud ERP, (iv) Odoo ERP and (v) Microsoft Dynamics 365; but they do not use worker performance as a delegation criterion. However, many of them offer programming environments to implement various features. For example, in order to better control the selection of available employees based on their characteristics, a software application was developed in Odoo ERP [11]. This article establishes a task delegation scheme based on the performance of the collaborator and its implementation as a computer tool. The article is organized as follows: in Sect. 2, the background of this article is presented; in Sect. 3, the research methodology based on a systematic mapping study is presented; in Sect. 4, the development of “Delegator” is described; and in Sect. 5, the final discussion is presented.

2 Background In this section, the concepts of task delegation, performance evaluation and standardized projects are addressed.


2.1 Task Delegation The delegation of tasks is essential in the collaborative effort of any organization [12]. Delegation is a key aspect of management in which both the person who delegates and the person to whom the task has been delegated benefit [12]. Similarly, the delegation of tasks provides benefits such as the development of skills, better decision-making at work and adequate time management for the development of other tasks [13]. Also, an effective delegation comprises the following points [14]: (i) clarity in communication, (ii) available resources, (iii) monitoring progress, (iv) celebrating success and, finally, (v) generating a report or reflection.

2.2 Performance Evaluation The performance evaluation [15] allows estimating the growth potential of the worker in his or her position; the responsibility for this evaluation may fall on: (i) the manager, which leads to a centralized process; (ii) the employee, which decentralizes the process but lacks objectivity; or (iii) an evaluation committee, which provides objectivity through an impartial point of view. The performance evaluation must be carried out through the evaluation of competencies, for which the 360° Premium performance evaluation is proposed [16]. This system allows the evaluated person to obtain feedback from their environment: (i) internal and external clients, (ii) subordinates, (iii) co-workers, (iv) the boss, (v) other people and (vi) the person being evaluated.

2.3 Standardized Projects According to [17], an organization’s processes and procedures should be standardized. Also, [18] defines standardized projects as those that, through strategic planning, delimit the execution and monitoring phases, reduce variability and maximize their value.

3 Research Methodology This section presents the realization of a Systematic Mapping Study (SMS) based on [19]. The purpose of the SMS was to identify studies related to performance-based task delegation (See Fig. 1). The identified algorithms and their characteristics are used for the design, implementation and testing of Delegator.


Fig. 1 SMS adapted from [19]

3.1 Planning and Implementation of the SMS Although task delegation is used in all types of organizations, it is mostly done subjectively. In this context, it is necessary to identify studies related to the use of criteria for task delegation. In this sense, the research questions are presented below:
• RQ1 What criteria are applied for the task delegation? Are any of them related to performance?
• RQ2 What computer tools are reported? Which ones allow tasks to be delegated based on established criteria?
• RQ3 What performance evaluation method is used in organizations?
The search string, according to [19], was determined by identifying the Population and Intervention. The search string was: (“Task Delegation” OR “Task Assignment”) AND (Performance OR Behavior) AND (Evaluation OR Appraisal OR Assessment OR Rating OR Ranking). The extraction process began with the execution of the search string in the selected databases: Scopus, Web of Science (WoS), ScienceDirect, Ebsco, and ProQuest. For this, the string was adapted to the characteristics of the digital databases considered. In this first stage, 573 studies were obtained. After the selection process, applying inclusion and exclusion criteria to the titles, abstracts and content, 13 primary studies were obtained (see Fig. 2 and Table 1).

Fig. 2 Inclusion and exclusion criteria of the SMS


Table 1 Primary studies obtained (Id, title, reference)

S01 A learning feature engineering method for task assignment [20]
S02 A multi-armed bandit approach to online spatial task assignment [21]
S03 An optimization method for task assignment for industrial manufacturing organizations [22]
S04 Assign and appraise: achieving optimal performance in collaborative teams [23]
S05 Competences-based performance model of multi-skilled workers with learning and forgetting [24]
S06 Development of a task assignment tool to customize job descriptions and close person-job fit gaps [25]
S07 Empirical analysis of reputation-aware task delegation by humans from a multi-agent game [26]
S08 Employee-task assignments for organization modeling: a review of models and applications [27]
S09 Enhancing the safety of construction crew by accounting for brain resource requirements of activities in job assignment [28]
S10 Metanetwork analysis for project task assignment [29]
S11 Screening talent for task assignment: absolute or percentile thresholds? [30]
S12 Task delegation and computerized decision support reduce coronary heart disease risk factors in type 2 diabetes patients in primary care [31]
S13 The prospective applicability of the strengths-based approach to managing and developing employees in small businesses [32]

3.2 RQ1. What Criteria are Applied for the Task Delegation? Are Any of Them Related to Performance? Regarding the criteria used, the following aspects are considered:
• similarity: the similarity between the task to be delegated and a previously delegated task is considered [S01].
• patterns: the pattern of acceptance of tasks by workers is considered [S02].
• performance: both the individual and collaborative performance of the worker are considered, the latter based on the appreciation of co-workers [S01], [S03], [S04], [S06], [S07], [S08] and [S11].
• workload: the workload of the workers is considered [S02], [S03], [S04], [S07], [S08], [S09] and [S10].
• requirements: the requirements of the task to be delegated are considered [S03] and [S09].
• skills: the knowledge and talent of the worker are considered [S01], [S03], [S04], [S05], [S06], [S07], [S08], [S10], [S11] and [S13].


3.3 RQ2. What Computer Tools are Reported? Which Ones Allow Tasks to be Delegated Based on Established Criteria? Regarding computer tools, the literature shows that there are several approaches to digitize the process. Among these, we can mention:
• SVM: Support Vector Machine, a robust learning method that solves big data classification problems [S01].
• IMIRT: Individualized Models for Intelligent Routing of Tasks, a framework that assumes task delegation for crowdsourcing [S02].
• HQGA: Heuristic Quantum Genetic Algorithm, an optimization method for task assignment that adopts a hierarchical network to illustrate task composition [S03].
• ASAP: Assignment and Appraisal model, intended for optimal resource allocation problems [S04].

3.4 RQ3. What Method of Performance Evaluation is Used in Organizations? Regarding the evaluation method, the 360° performance evaluation is widespread. In addition, as part of this evaluation, the following criteria are considered:
• Completion scenario: the worker’s performance is evaluated based on the way in which the task was completed, considering whether it was completed within the established period or outside of it. Whether the task was completed by the worker to whom it was originally assigned or whether it had to be reassigned is also considered [S01], [S03], [S06] and [S07].
• Performance edges: individual performance, cooperation performance within the organization, team performance and performance based on the worker’s knowledge are evaluated [S03], [S04], [S06] and [S10].
• Time: the time it takes for the worker to perform the task is evaluated [S01] and [S08].

4 Implementation of Delegator This section introduces key elements of implementing the solution named Delegator. It lists the high-level requirements, describes the main algorithm, presents the architecture of the solution, and describes the test cases applied.


4.1 High Level Requirements In the development of a software solution for a problem in an organization, there are general aspects and specific aspects to be considered. The main solution is a software system that allows the delegation of tasks using criteria. The criteria are established by the nature of standardized projects. Standardized projects are the way in which companies in the telecommunications sector carry out their activities. Based on the above, the following can be pointed out as high-level software requirements that the user can perform: (i) register a task to delegate, (ii) register an urgent task, in the scenario where all workers are already assigned, (iii) update workers (general data), (iv) complete tasks, including performance evaluation, and (v) issue reports.

4.2 Main Algorithm Delegator’s main algorithm consists of a task categorization component and an assignment scheme based on that categorization, considering organizational factors. From the SMS, for the categorization component, it was identified that the support vector machine (SVM) algorithm was the most convenient for the task delegation problem based on a set of criteria. The SVM is a supervised learning algorithm [33] that categorizes tasks based on a historical record. In the particular case of Delegator, it was established to work with a base of historical records that is used as training data (70%) and as test data (30%). Once the algorithm is trained, the real data is entered and the algorithm determines to which category the newly entered task belongs. According to [34], the SVM can work with “n” categories. However, reliability decreases as the number of categories to identify increases [34]. For the established assignment scheme (software architecture), shown in Sect. 4.3, the different internal elements of the software were mapped and contrasted (see Fig. 3) with what was reported in the primary studies. With this, it was possible to confirm or discuss the decisions of the main algorithm.
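A minimal sketch of this categorization component, assuming scikit-learn and synthetic historical records, is given below. The four task variables follow those mentioned in the abstract (start date, deadline, number of inputs and number of required skills), but their encoding, the labelling rule and the kernel are illustrative choices, not the configuration actually used in Delegator.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic historical records: [days to start, days to deadline, n_inputs, n_skills].
# Delegator is reported to use about 5000 records split 70/30 into training/test data.
X = rng.integers(low=0, high=30, size=(5000, 4)).astype(float)
# Invented labelling rule producing two task categories, for illustration only.
y = ((X[:, 1] < 10) & (X[:, 3] > 2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.70, random_state=0)

model = SVC(kernel="rbf")          # the kernel choice is an assumption
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")

# Categorize a newly registered task before running the assignment scheme.
new_task = np.array([[2, 5, 4, 3]], dtype=float)
print(f"predicted category: {model.predict(new_task)[0]}")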

Fig. 3 Mapping between primary studies and elements of Delegator


4.3 Delegator Architecture The Delegator architecture presents three workflows (see Fig. 4), which cover all the possibilities that may arise when delegating a task to a worker. Regarding the conceptual variables, there are the following: (i) score, which corresponds to the experience of a worker having completed one or more tasks belonging to a certain category; (ii) performance, which is calculated through the weighted average of the performance edges: (a) individual performance, (b) collaborative performance, (c) team performance, and (d) knowledge-based performance; and (iii) urgency, which is associated with the time remaining for a task, whether in process or new, to finish. The scoring flow begins with a process that locates the model category and orders the workers in that category from highest to lowest score. The sorted list is traversed until an available worker is found for assignment. If none is found, the process moves on to the next flow. The performance flow begins with a process that orders the workers from highest to lowest performance, without considering the workers eliminated in the scoring stage. As in the previous case, the list is traversed and, if possible, a worker is assigned; otherwise, the process moves on to the next flow. The urgency flow starts by consulting the user about the possibility of changing the start date of the new task. If possible, the user is requested to change the start date and a new task is generated. Otherwise, tasks in progress are sorted by start date, from oldest to newest. The task is delegated if the new task is more urgent than the current task; in this way, the task in progress is paused and the new task is started.
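Under simplifying assumptions, the cascade formed by the scoring and performance flows can be sketched as follows; the data structures, the equal weights given to the four performance edges and the handling of the urgency flow (left as a comment) are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Worker:
    name: str
    available: bool
    scores: dict     # category -> score (experience completing tasks of that category)
    edges: tuple     # (individual, collaborative, team, knowledge-based) performances

# Hypothetical equal weights for the four performance edges described above.
WEIGHTS = (0.25, 0.25, 0.25, 0.25)

def performance(worker: Worker) -> float:
    return sum(w * e for w, e in zip(WEIGHTS, worker.edges))

def delegate(task_category: str, workers: list) -> Optional[Worker]:
    # Flow 1 (score): workers with experience in the task's category, by score.
    experienced = [w for w in workers if w.scores.get(task_category, 0) > 0]
    for w in sorted(experienced, key=lambda w: w.scores[task_category], reverse=True):
        if w.available:
            return w
    # Flow 2 (performance): remaining workers ordered by weighted performance.
    remaining = [w for w in workers if w not in experienced]
    for w in sorted(remaining, key=performance, reverse=True):
        if w.available:
            return w
    # Flow 3 (urgency) would ask the user to change the start date of the new task
    # or pause the least urgent task in progress; it is omitted in this sketch.
    return None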

Fig. 4 Delegator - internal workflow


4.4 Test Cases For the test cases, a scenario was proposed with five workers at the analyst level to whom the tasks would be delegated. Regarding the tasks to be delegated, a format called a descriptive sheet is used and, when the task has been completed, a format called a task report is used. The tests, carried out internally, were satisfactory. For the training of the categorizing algorithm, based on certain criteria, a database of 5000 records was created and entered, which presents a tendency to differentiate the tasks into two categories. For the test case developed, recurring tasks of the telecommunications company were defined. These tasks can be grouped into: (i) preparation of reports, (ii) review of formats, (iii) audit, (iv) preparation of procedures and (v) integration.

5 Final Discussion This article presents a solution to the criteria-based task delegation problem that occurs in a company. This problem is similar in other companies in the same telecommunications sector, which carry out standardized projects. The solution is based on an SMS that made it possible to identify an algorithm for categorization and an architecture that allows flexibility in task assignment. The software tests, carried out internally, make it possible to verify that the software works completely and correctly according to the established requirements. In relation to the operational part of Delegator, the following limitations need to be mentioned, and should be investigated in future research: (i) the effectiveness of the categorizing algorithm could only be verified for a scenario of two categories and 20 tasks to be delegated; future work could implement more than two categories and other supervised learning models; (ii) it was found that the maximum number of tasks in process is limited by the number of available workers, since the unit “day” was used as the minimum task delegation block; future studies should change the unit to “working hour”; (iii) it was verified that the urgent condition allows prioritizing a task by pausing the current one and resuming it at the end of the urgent task, so the maximum number of tasks to delegate is twice the number of workers, since each worker would have one active task and another paused; future research should consider the execution of concurrent tasks. Acknowledgements This work is partially supported by the Department of Engineering and the Group of Research and Development on Software Engineering (GIDIS) from the Pontifical Catholic University of Peru.


References 1. Leyer M, Schneider S (2019) Me, you or AI? How do we feel about delegation 2. Abril Bolaños CJ (2014) “Delegación” Destreza vital de un buen líder. http://www.springer. com/series/15440%0Apapers://ae99785b-2213-416d-aa7e-3a12880cc9b9/Paper/p18311 3. Cheng H, Chu X (2012) Task assignment with multiskilled employees and multiple modes for product development projects. Int J Adv Manuf Technol 61:391–403. https://doi.org/10.1007/ s00170-011-3686-7 4. Fügener A, Grahl J, Ketter W, Gupta A (2019) Cognitive challenges in human-AI collaboration: investigating the path towards productive delegation. Europe 39. https://doi.org/10.2139/ssrn. 3368813 5. Aguinis H (2013) Performance management. Prentice Hall 6. López J (2013) Análisis comparativo de la evaluación del desempeño según Martha Alles y Idalberto Chiavenato; estudio de caso Corporación Holdingdine 7. Muhammad Shah SG, Kazmi AB (2020) The impact of delegation of authority on job satisfaction, job performance and organizational growth at higher educational institutions in Sindh. Glob Soc Sci Rev V:32–45. https://doi.org/10.31703/gssr.2020(v-iii).04 8. Teryima J (2017) Effective delegation of authority as a strategy for task accomplishment and performance enhancement in business organizations: an empirical survey of flour mills of Nigeria plc, Lagos-Nigeria. Bus Manag Rev 8:138–157 9. Overman S (2016) Great expectations of public service delegation: a systematic review. Public Manag Rev 18:1238–1262. https://doi.org/10.1080/14719037.2015.1103891 10. FBI Editor (2022) Enterprise resource planning [ERP] software market size. https://www.for tunebusinessinsights.com/enterprise-resource-planning-erp-software-market-102498 11. González J (2020) Desarrollo de aplicación para asignación de empleados en tareas específicas con ERP 12. Tarnowski J, Quinn K, Alvero A, Sadri G (2019) Delegation of tasks: importance, obstacles, implementation. Ind Manag 61:22–26 13. Culp G, Smith A (1997) Six steps to effective delegation. J Manag Eng 13:30–31. https://doi. org/10.1061/(asce)0742-597x(1997)13:1(30) 14. Akridge J (2015) The art of delegation. https://doi.org/10.1049/em:19920040 15. Chiavenato I (2011) Administración de Recursos Humanos. Mc Graw Hill, México, D.F 16. Capuano A (2004) Evaluación desempeño por Competencias. Invenio 7:139–150 17. PMI (2017) PMBOK, 6th edn. 18. Karim A, Nekoufar S (2011) Lean project management in large scale industrial & infrastructure project via standardization. http://projectmanager.com.au/wp-content/uploads/2011/ 03/LeanPM_Saviz-Nekoufar.pdf 19. Petersen K, Vakkalanka S, Kuzniarz L (2015) Guidelines for conducting systematic mapping studies in software engineering: an update. Inf Softw Technol 64:1–18. https://doi.org/10.1016/ j.infsof.2015.03.007 20. Loewenstern D, Pinel F, Shwartz L, Gatti M, Herrmann R, Cavalcante V (2012) A learning feature engineering method for task assignment. In: Proceedings of the 2012 IEEE network operations and management symposium, NOMS 2012, pp 961–967. https://doi.org/10.1109/ NOMS.2012.6212015 21. Hassan UU, Curry E (2014) A multi-armed bandit approach to online spatial task assignment. In: Proceedings - 2014 IEEE international conference on ubiquitous intelligence and computing, 2014 IEEE international conference on autonomic and trusted computing, 2014 IEEE international conference on scalable computing and communications and associated, pp 212–219. https://doi.org/10.1109/UIC-ATC-ScalCom.2014.68 22. 
Li N, Li Y, Sun M, Kong H, Gong G (2017) An optimization method for task assignment for industrial manufacturing organizations. Appl Intell 47:1144–1156. https://doi.org/10.1007/s10 489-017-0940-1 23. Huang EY, Paccagnan D, Mei W, Bullo F (2020) Assign and appraise: achieving optimal performance in collaborative teams. IEEE Trans Autom Control 1–13


24. Korytkowski P (2017) Competences-based performance model of multi-skilled workers with learning and forgetting. Expert Syst Appl 77:226–235. https://doi.org/10.1016/j.eswa.2017. 02.004 25. Booker BW (2010) Development of a task assignment tool to customize job 26. Yu H, Lin H, Lim SF, Lin J, Shen Z, Miao C (2015) Empirical analysis of reputation-aware task delegation by humans from a multi-agent game. In: Proceedings of the international joint conference on autonomous agents and multiagent systems, AAMAS, pp 1687–1688 27. Kandemir C, Handley HAH (2014) Employee-task assignments for organization modeling: a review of models and applications. In: 2014 international annual conference of the American society for engineering management - entrepreneurship engineering: harnessing innovation, ASEM 2014 28. Ahmadian Fard Fini A, Akbarnezhad A, Rashidi TH, Waller ST (2018) Enhancing the safety of construction crew by accounting for brain resource requirements of activities in job assignment. Autom. Constr. 88:31–43. https://doi.org/10.1016/j.autcon.2017.12.013 29. Li Y, Lu Y, Li D, Ma L (2015) Metanetwork analysis for project task assignment. J Constr Eng Manag 141:04015044. https://doi.org/10.1061/(asce)co.1943-7862.0001019 30. Balakrishnan R, Lin H, Sivaramakrishnan K (2020) Screening talent for task assignment: absolute or percentile thresholds? J Account Res 58:831–868. https://doi.org/10.1111/1475679X.12327 31. Cleveringa FGW, Gorter KJ, Van Den Donk M, Pijman PLW, Rutten GEHM (2007) Task delegation and computerized decision support reduce coronary heart disease risk factors in type 2 diabetes patients in primary care. Diabetes Technol Ther 9:473–481. https://doi.org/10. 1089/dia.2007.0210 32. Wijekuruppu CK, Coetzer A, Susomrith P (2021) The prospective applicability of the strengthsbased approach to managing and developing employees in small businesses. J. Organ. Eff. 8:323–346. https://doi.org/10.1108/JOEPP-04-2020-0051 33. Caruana R, Niculescu-Mizil A (2006) An empirical comparison of supervised learning algorithms. In: ACM international conference proceeding series, pp. 161–168 https://doi.org/10. 1145/1143844.1143865 34. Patle A, Chouhan DS (2013) SVM kernel functions for classification. In: 2013 international conference on advances in technology and engineering, ICATE 2013. https://doi.org/10.1109/ ICAdTE.2013.6524743

eSFarmer - A Solution for Accident Detection in Farmer Tractors

Rui Alves, Paulo Matos, João Ascensão, and Diogo Camelo

Abstract IoT and Cloud are two of the concepts increasingly present in human life. The introduction of these concepts has, over time, contributed to the optimization of human tasks, improving the quality of life of the general population. However, they go far beyond a simple optimization of tasks, often having a great influence on the preservation of human life. This technological evolution does not go unnoticed in the agricultural sector, where the use of IoT and Cloud has grown considerably, allowing the design of agricultural machinery to be increasingly efficient and safe. However, older models, even though they do not offer the same levels of safety as the latest versions, still have a strong presence in the sector, something that, in case of an accident, often becomes a determining factor in the preservation of human life. Thus, this paper, using a set of sensors, NB-IoT for communication and a web platform, proposes a technical solution that makes it possible to increase the level of safety of older agricultural machines. Keywords Agriculture · Farmer Tractors · IoT · Safety · Accident

This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UIDB/05757/2020. R. Alves (B) · J. Ascensão · D. Camelo Polytechnic Institute of Bragança, Bragança, Portugal e-mail: [email protected] D. Camelo e-mail: [email protected]; [email protected] P. Matos Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal e-mail: [email protected] Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, Campus de Santa Apolónia, 5300-253 Bragança, Portugal © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Á. Rocha et al. (eds.), Information Technology and Systems, Lecture Notes in Networks and Systems 691, https://doi.org/10.1007/978-3-031-33258-6_58


1 Introduction From very early on, the agricultural sector has played a key role in society. On the one hand, it is the sector responsible for food production; on the other hand, it is the main source of income for part of the population [1]. However, its modernization advances at different speeds [2–4], delaying the development of the sector and detracting from the efficiency of the operations associated with it. An important step in farming modernization came with the introduction of tractors in agricultural activities. The usage of these machines allowed the optimization of tasks such as plowing, application of fertilizers and transportation of loads, which until then had been performed by animals. These machines allowed farmers not only to optimize tasks but also to reduce operational costs [5], something that has made agriculture a little more profitable [6]. This agricultural revolution, in addition to all the impact it had on the sector, also influenced the market of farmer tractors. The new models have more and more features, allowing greater optimization of tasks, which results in a reduction of operating costs. In addition, the safety of drivers [7] is another point that is increasingly taken into account in the design of new models, something that has played an important role in preserving life in the event of an accident. However, despite the obvious improvements in terms of task optimization and safety that the new models offer, it is the older models that still dominate a fairly considerable part of the agricultural market. In this way, there is a growing need for auxiliary tools that bring to older agricultural tractors the levels of operational safety existing in the latest versions. Thus, this paper, using a set of sensors, chips and intelligent algorithms, presents a solution for the identification of accidents in farmer tractors. The preservation of life is the main objective of this solution, and as such, whenever it detects possible accidents in the data received, it issues alerts to the emergency teams. The rest of the paper is organized as follows: Sect. 2 provides a brief analysis of accidents with farmer tractors; the technical details of the proposed solution are described in Sect. 3; and in Sect. 4 the conclusions of the work and objectives for future work are presented.

2 Accidents Overview According to recent studies [8–10], most accidents with farmer tractors are due to behavioral factors: drivers ignore the safety procedures of the machines and perform tasks for long periods, in adverse environmental and climatic conditions and in isolated rural areas, something that often makes it difficult to identify the accident. In the north of Portugal [11], where the solution presented is being developed and tested, accidents with farmer tractors are frequent, making Portugal one of the three European countries [12] where accidents with agricultural tractors cause the most deaths.


Reducing these types of accidents involves prevention and awareness of the dangers of driving these machines. However, in the authors’ opinion, all these actions do not completely eradicate these accidents. Thus, since one of the main causes is the execution of long tasks in remote areas, which hinders both the identification of the accident and the dispatch of aid, it is increasingly necessary to create solutions such as the one presented in this paper, since sometimes the driver ends up dying not because of the severity of the injuries but because of the time spent waiting for medical help.

3 Proposed Solution Figure 1 illustrates the high-level view of the built architecture. It consists of four elements: the Arduino MKR NB 1500 and its sensors, which are placed on the farm tractor and correspond to the client component (Sect. 3.1) of the solution; the Broker (Sect. 3.2), which is responsible for connecting the client component with the server component; the server component (Sect. 3.3), which is composed of the application server that receives the data published in the Broker, processes it and registers it in the database; and the mobile application (Sect. 3.4), which makes it possible to consult the information produced. All previously listed components have specific functionality within the overall architecture that will be detailed in the next sections.

3.1 Client Component An Arduino MKR NB 1500 GPS sensor [13] and an MPU6050 sensor [14] must be placed on the farmer tractor, which works as a client of the architecture. Communication is supported by NB-IoT. Narrow Band-Internet of Things or (NBIoT) [15, 16] is designed so that IoT devices can connect directly to the global network. One of the key points of this technology is to enable the ease of connection that exists in smartphones can be used in the IoT world, namely ultra-low consumption communications. The choice of this version of Arduino is due to the fact that it’s the first and only one of the Arduino family compatible with LTE Cat M1 and NB-IoT [17, 18]. Listing 1.1 Example of JSON string sent by Arduino board to broker

{
  "id_client": 123,
  "coordinates": ["41.8072", "-6.75919"],
  "timestamp": 1662887046,
  "axes": [20, 20, 30]
}


Fig. 1 Overview of the eSFarmer architecture: the client (farm tractor with a GPS sensor and an MPU6050 sensor connected to an Arduino MKR NB 1500) publishes over the NB-IoT network, via HTTP and MQTT, to the Broker; a Payara application server subscribes to the Broker, processes the routine data and stores it in the database server; the mobile app consumes the information through an HTTP RESTful API.


3.2 Broker Although the Arduino can communicate directly with the application server, communication in the reverse direction cannot be performed with the same ease without an intermediate point. For this reason, the MQTT protocol is used, through a broker [19], to manage the exchanged messages. In the current state of development there are only one-way communications (tractor to server); however, new features are planned that will require sending messages in the server-to-tractor direction.
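For illustration only, the following Python sketch shows how a server-side component could subscribe to such a broker and decode the payload of Listing 1.1. The broker address, port, and topic name are assumptions, the paho-mqtt 1.x callback API is assumed, and a managed broker such as AWS IoT Core would additionally require TLS client certificates.

import json
import paho.mqtt.client as mqtt

TOPIC = "esfarmer/tractors/telemetry"  # hypothetical topic name

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is acknowledged.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Each message carries a JSON payload like the one in Listing 1.1.
    data = json.loads(msg.payload)
    print(data["id_client"], data["coordinates"], data["axes"])

client = mqtt.Client()  # paho-mqtt 1.x style client
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # placeholder broker address and port
client.loop_forever()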

3.3 Server Component The server component, as already mentioned, consists of an application server, a set of background services, and a database server. The process begins when the server connects to the broker and performs the subscribe operation, after which it is able to start receiving data. For development purposes, the server receives messages from the Arduino every 5 s and stores in the database the history of the last 30 min of data received for each farm machine. When the server receives a new message associated with a particular machine, it compares the received data with the history associated with that tractor and checks whether the values match the pattern of a possible rollover or suspicious maneuver. In the current phase, this check is threshold-based: the server verifies whether the values in the axes field exceed the threshold value, and it also checks whether the values of the coordinates field show a change in the tractor's position. When a pattern that may correspond to a rollover is registered, the emergency contact is notified.
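A minimal sketch of such a check is given below, assuming illustrative threshold values and one plausible reading of the position test (a strongly tilted, essentially stationary tractor); the authors' actual parameters and rule are not specified here.

from math import hypot

AXIS_THRESHOLD = 45.0     # assumed tilt threshold; the authors' value is not given
POSITION_EPSILON = 1e-4   # assumed minimum latitude/longitude change

def looks_like_rollover(history, reading):
    """Compare the newest reading of a tractor with its stored history."""
    if not history:
        return False
    previous = history[-1]
    # 1) At least one axis value exceeds the threshold.
    tilted = any(abs(value) > AXIS_THRESHOLD for value in reading["axes"])
    # 2) The reported position has essentially stopped changing.
    d_lat = float(reading["coordinates"][0]) - float(previous["coordinates"][0])
    d_lon = float(reading["coordinates"][1]) - float(previous["coordinates"][1])
    stationary = hypot(d_lat, d_lon) < POSITION_EPSILON
    return tilted and stationary

When such a function returns True for a machine, the notification of the emergency contact described above would be triggered.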

3.4 Mobile Application In the current version of the architecture, the mobile application only queries the data. With this component, the emergency contact can track the movements of the tractor driver and try to understand, manually, whether there is any anomalous situation on the route. In future stages of development it will be possible, through the application, to change configuration data (such as the polling time), to request additional data about the machine's movement (e.g., speed, photos of the driver), or to receive alerts if the machine is following an unusual route.
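As a rough illustration of this querying role, the sketch below fetches a tractor's recent track from the server's RESTful API; the endpoint path, parameters, and field names are hypothetical and not part of the documented eSFarmer API.

import requests

BASE_URL = "https://esfarmer.example.com/api"  # placeholder server address

def last_positions(tractor_id, minutes=30):
    # Fetch the recent track of one tractor so the emergency contact can
    # inspect the route manually, as described above.
    response = requests.get(
        f"{BASE_URL}/tractors/{tractor_id}/history",
        params={"minutes": minutes},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()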


4 Conclusion Although new models of machinery already include a good set of safety measures, the main objective of the solution presented in this paper is to reduce the time it takes for medical help to arrive in case of a tractor accident, and it can be applied to both new and older models. Tractors often perform tasks in isolated and remote regions where accidents are frequently identified only long after they occur, which increases the likelihood of death in case of serious injury. The simplicity of use, combined with the ability to detect precise changes in the movement of tractors, makes the presented solution a viable way to reduce deaths in accidents with agricultural machinery. In the current state of development, the exchange of messages from the Arduino to the server has already been validated. Using a remote-controlled tractor, it is already possible to identify some critical situations, but the current values and thresholds still have to be adapted to more realistic contexts. The mobile application is built and is in its final testing phase. However, testing also showed that the use of the NB-IoT network may be a problem in regions without coverage. A possible fallback is the use of SMS, but this is not a complete solution, because the delay that often occurs between sending and receiving a message directly influences the results of this research. In terms of literature, few references were found describing solutions similar to the one presented in this paper, which underlines the potential of the developed solution. It is nevertheless important to mention a simulator prototype [20] that allows the study of accident scenarios with agricultural machinery and helps to create possible mitigation measures.

4.1 Future Work Despite its great potential, the solution still has a long way to go. The following points are left for future work:
– Apply machine learning algorithms to identify rollover situations.
– Enable the emergency contact to request more tractor data through the mobile application (e.g., photos, speed).
– Validate the system in more realistic scenarios.
– Determine whether SMS can be used in scenarios where NB-IoT coverage is weak or non-existent.


References
1. Ritchie H, Roser M (2021) Farm size. Our World in Data. https://ourworldindata.org/farm-size
2. Devlet A (2021) Modern agriculture and challenges. Front Life Sci Related Technol 2
3. Odey S, Adinya I (2007) Effective tractor utilization for agricultural mechanization in central Cross River State. Global J Eng Res 7:37–42
4. Dayou ED et al (2021) Analysis of the use of tractors in different poles of agricultural development in Benin republic. Heliyon 7(2):e06145
5. Peng J, Zhao Z, Liu D (2022) Impact of agricultural mechanization on agricultural production, income, and mechanism: evidence from Hubei province, China. Front Environ Sci 10:02
6. Johnson AN (1950) The impact of farm machinery on the farm economy. Agric Hist 24(1):58–62
7. Wingrave B (2022) Counteracting risk with tractor security devices
8. Moreschi C et al (2017) The analysis of the cause-effect relation between tractor overturns and traumatic lesions suffered by drivers and passengers: a crucial step in the reconstruction of accident dynamics and the improvement of prevention. Agriculture 7(12)
9. Abubakar MS, Ahmad D, Akande FB (2010) A review of farm tractor overturning accidents and safety. Pertanika J Sci Technol 18
10. Jarén C et al (2022) Fatal tractor accidents in the agricultural sector in Spain during the past decade. Agronomy 12(7)
11. Produtores Florestais (2022) Sinistralidade com tratores agrava em 2021
12. CONFAGRI (2019) Portugal é um dos três países europeus onde morrem mais pessoas em acidentes com tratores. O que é que se passa?
13. Guide to NEO-6M GPS module with Arduino. https://randomnerdtutorials.com/guide-to-neo6m-gps-module-with-arduino/. Accessed 05 June 2022
14. MPU6050 (gyroscope + accelerometer + temperature) sensor module. https://www.electronicwings.com/sensors-modules/mpu6050-gyroscope-accelerometer-temperature-sensor-module. Accessed 05 June 2022
15. GSA (2019) Narrowband IoT & M2M - global narrowband IoT - LTE-M networks - March 2019. https://gsacom.com/paper/global-narrowband-iot-lte-m-networks-march-2019/. Accessed 01 Mar 2022
16. GSMA. Mobile IoT deployment map. https://www.gsma.com/iot/deployment-map/. Accessed 01 Mar 2022
17. Arduino MKR NB 1500. https://dev.telstra.com/iot-marketplace/arduino-mkr-nb-1500. Accessed 05 June 2022
18. MKR family. https://store.arduino.cc/arduino/mkr-family. Accessed 05 June 2022
19. What is AWS IoT Core? https://aws.amazon.com/iot-core/. Accessed 08 May 2021
20. Watanabe M, Sakai K (2021) Identifying tractor overturning scenarios using a driving simulator with a motion system. Biosyst Eng 210:261–270

An Approach to Identifying Best Practices in Higher Education Teaching Using a Digital Humanities Data Capturing and Pattern Tool Nico Hillah, Bernhard Standl, Nadine Schlomske-Bodenstein, Fabian Bühler, and Johanna Barzen

Abstract The aim of this paper is to provide an overview of the process of identifying effective teaching patterns by capturing relevant features of given digitally supported teaching settings. The overall goal is to collect learning and instructional settings in a way that allows them to be reused in different subject domains. First, this involves describing the development of a suitable data structure for capturing instructional data, derived from a validated taxonomy. The paper also presents the integration of an application that was developed for modelling and capturing data in domain analysis.
Keywords Best practices in digital teaching · Pattern identification · Data modelling

1 Introduction Since the COVID-19 pandemic, the education sector around the world has relied on digital tools to keep operating. Various workshops, training sessions, and applications have been put in place to train teachers, educators, and lecturers on how to use information technology to perform their educational duties. Representing best practices of digital teaching in such a way that they can be reused and transferred to different subjects is of special interest in the field of educational research. For instance, Vercoustre et al. [23] examined the reuse and transfer of teaching and learning material into other domains. Motschnig-Pitrik et al. [14] further described how conceptual models can be used to capture, promote, explore, and improve rich didactic practices. Considering this, the goal is to bridge the gap between teaching practices and the data collection of these practices for data mining. To transfer the instructional practices into a suitable data repository, a conceptual taxonomy was developed, providing a structure for collecting teaching practices hierarchically [19, 21]. This paper presents an approach to how, based on this taxonomy, a conceptual structure can be modelled to map best practices in digital teaching utilizing the digital humanities pattern tool MUSE4Anything [8].

2 Related Work Researchers in the digital humanities [7, 12, 22] and in education [13, 15] have proposed and used different data modelling approaches to bring aspects of the real world into a data model; the goal is to simplify the collection of data about a domain. One of these approaches is the ontology approach. It consists of identifying the objects in a domain and the relationships among them. The purpose of identifying these objects is, first, to define common terms, definitions, and relationships existing in a domain that can be shared among researchers, and second, to allow researchers to collect data on these objects for analysis in order to discover new knowledge such as patterns. Design patterns provide a promising way to identify effective teaching patterns. The pattern approach originally comes from architecture [2] and is also found in the educational context [6, 9, 20]. In the literature, different methods [16, 17] and methodologies [11] are used for pattern identification. The application of these methods ranges from identifying patterns of the Internet of Things [18] in computer science to finding patterns of movie costumes [10] in the digital humanities. Data collection and data mining are essential for pattern identification; thus, the selection of the right techniques and the right data collection and analysis tools is important. One of these tools, MUSE4Anything, is derived from two pattern identification studies [10] and [5] in the digital humanities. We adopted this tool for our study.

3 Modelling Best Practices in Digital Teaching In this section, we first introduce the MUSE4Anything tool and then show how the taxonomy was integrated into it to model our domain of study as an ontology; the resulting data model is presented in the following section.


3.1 The MUSE4Anything Tool MUSE4Anything is a tool for capturing structured data in any research field [8]. The structure of the collected data is determined by an ontology that can be created and edited directly in the tool. Automatically generated data input forms support the collection process and help to prevent input errors. The ontology definition in MUSE4Anything is based on JSON Schema. Users can define object types that specify which attributes an object has, and taxonomies that define the allowed values of attributes. A key insight from the preceding projects MUSE [3] and MUSE4Music [4] was that further changes to the ontology are likely to become necessary during the data collection process in order to adapt to the data. Usually, such changes require expensive modifications to the data input forms. The MUSE4Anything tool, however, allows researchers to modify the ontology easily, and with it the automatically generated input forms, even during the data collection process.
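To make the idea of schema-driven capture concrete, the following Python sketch validates a hypothetical lecture record against a simple JSON Schema with closed lists of allowed values; this mirrors the role of object types and taxonomies described above but is not MUSE4Anything's actual schema format, and all field names and values are invented for illustration.

from jsonschema import validate

# Hypothetical object type for a "lecture"; the enum lists play the role of
# taxonomies constraining the allowed attribute values.
lecture_schema = {
    "type": "object",
    "properties": {
        "courseName": {"type": "string", "maxLength": 50},
        "technology": {"enum": ["phone", "laptop", "tablet", "interactive whiteboard"]},
        "assessment": {"enum": ["formal", "informal", "peer"]},
        "teachingMethod": {"enum": ["project based", "lecture", "flipped classroom"]},
    },
    "required": ["courseName", "technology"],
}

record = {
    "courseName": "Java 2",
    "technology": "phone",
    "assessment": "formal",
    "teachingMethod": "project based",
}

# Raises jsonschema.ValidationError if the record violates the schema,
# e.g. on a misspelled technology value.
validate(instance=record, schema=lecture_schema)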

3.2 Data Collection Using the MUSE4Anything Tool To collect data for our future pattern identification, we needed to identify what kind of information is relevant for our study. To do so, we designed a taxonomy of our domain of study. The main taxonomy is divided into three major sub-taxonomies, each of which is further divided into subparts:
– Metadata: a collection of information about the formal description of the course, such as the number of students, the environment in which the course was conducted, and the type of technology used to deliver it. One of the subparts of metadata is the class organization (see Fig. 1).
– Intended learning outcomes: a set of information describing the skills students should have at the end of a course.
– Empirical evaluation: information on how students perceived a course.
Although each of these main parts contains relevant information, we only consider the information that fulfils the technical requirements of the data mining technique used in our study. With the taxonomy ready, we proceeded to design our ontology schema (see Fig. 2) by listing and connecting all the objects necessary for our study, such as the teacher, the technology, and the lecture. We used the MUSE4Anything tool to create this ontology. During this ontologizing, we identified single lectures, instead of whole courses, as the central objects of our domain, and therefore proceeded to capture information about lectures. Based on this, we created a data model to collect our data.


Fig. 1 Class organization subpart taxonomy

Fig. 2 Our ontology in the MUSE4Anything tool

4 Results and Discussion: Data Model Based on the ontology schema created with MUSE4Anything, we were able to design our data model. We took each object of our ontology schema and identified its attributes. Each attribute has a corresponding data type; for example, the object course has an attribute named "Course Name" with the data type variable character (varchar(50)). In addition, for most attributes we provided an exhaustive list of the values the attribute may take. This prevents the user from entering wrong or incompatible values, such as spelling errors. For example, a lecture of the Java 2 course (see Fig. 3) used phones as the technology device (Technology), was evaluated using the formal assessment method (Assessment), and was taught using the project-based teaching method. Another example is a digital course on how to create an e-book (see Fig. 4); E-book and Java 2 are instances of the object type lecture. We gathered data on some courses run in our department of computer science and are extending the data collection to other departments in the coming semesters. During this process the need to modify the taxonomy arose, and we were able to do so easily thanks to the adjustable features of the MUSE4Anything tool. The collected data are saved in JSON format, as provided by the MUSE4Anything tool. With the tool's interface, we can easily navigate among the different objects of the ontology schema.

Fig. 3 The user interface to collect data on a lecture


Fig. 4 E-book lecture: an instance of the lecture object

5 Conclusion Using MUSE4Anything as a repository for conceptualizing and collecting data has proved very adaptable to our field of study. With this tool we are able to ontologize and improve our digital teaching domain. Taxonomies and ontologies evolve over time; the tool takes this evolution into account and provides the features necessary to incorporate such changes into the data collection process. Furthermore, it provides an application programming interface (API) for data extraction for future data analysis, such as pattern identification. Based on the collected data, we can efficiently redesign our ontology in light of new outcomes from the data analysis. In future work, we will mine the collected data using the Apriori algorithm [1] to identify patterns of best practices in digital teaching. In addition, we aim to transfer these patterns to help educators in other subjects enhance their teaching with digital tools.
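As a rough sketch of this planned mining step, the following Python example applies the Apriori implementation from the mlxtend library to a few invented lecture "transactions"; the records and thresholds are assumptions for illustration only, not results of this study.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each captured lecture becomes one "transaction" of its recorded features.
lectures = [
    ["phone", "formal assessment", "project based"],
    ["laptop", "formal assessment", "project based"],
    ["phone", "peer assessment", "flipped classroom"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit_transform(lectures), columns=encoder.columns_)

# Frequent feature combinations and association rules as candidate patterns.
frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])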


References
1. Agrawal R, Srikant R et al (1994) Fast algorithms for mining association rules. In: Proceedings of the 20th international conference on very large data bases, VLDB, vol 1215. Citeseer, pp 487–499
2. Alexander C (1977) A pattern language: towns, buildings, construction. Oxford University Press, Oxford
3. Barzen J (2018) Wenn Kostüme sprechen - Musterforschung in den Digital Humanities am Beispiel vestimentärer Kommunikation im Film. Dissertation, University of Cologne. http://kups.ub.uni-koeln.de/id/eprint/9134
4. Barzen J, Breitenbücher U, Eusterbrock L, Falkenthal M, Hentschel F, Leymann F (2016) The vision for MUSE4Music. Applying the MUSE method in musicology. Comput Sci Res Dev 1–6
5. Barzen J, Breitenbücher U, Eusterbrock L, Falkenthal M, Hentschel F, Leymann F (2017) The vision for MUSE4Music. Comput Sci Res Dev 32(3):323–328
6. Bergin J et al (2012) Pedagogical patterns: advice for educators. Joseph Bergin Software Tools
7. Berry DM, Fagerjord A (2017) Digital humanities: knowledge and critique in a digital age. Wiley, Hoboken
8. Bühler F, Barzen J, Leymann F, Standl B, Schlomske-Bodenstein N (2022) MUSE4Anything - Ontologiebasierte Generierung von Werkzeugen zur strukturierten Erfassung von Daten. In: DHd2022: Kulturen des digitalen Gedächtnisses. Konferenzabstracts, Zenodo, pp 329–331. https://zenodo.org/record/6327945
9. Derntl M, Botturi L (2006) Essential use cases for pedagogical patterns. Comput Sci Educ 16(2):137–156
10. Falkenthal M, Barzen J, Breitenbücher U, Brügmann S, Joos D, Leymann F, Wurster M (2017) Pattern research in the digital humanities: how data mining techniques support the identification of costume patterns. Comput Sci Res Dev 32(3):311–321
11. Fehling C, Barzen J, Breitenbücher U, Leymann F (2014) A process for pattern identification, authoring, and application. In: Proceedings of the 19th European conference on pattern languages of programs, pp 1–9
12. Flanders J, Jannidis F (2015) Data modeling. New Companion Digit Humanit 229–237
13. Gasmi H, Bouras A (2017) Ontology-based education/industry collaboration system. IEEE Access 6:1362–1371
14. Motschnig-Pitrik R, Derntl M (2005) Learning process models as mediators between didactical practice and web support. In: International conference on conceptual modeling. Springer, Heidelberg, pp 112–127
15. Perkins D, Jay E, Tishman S (1993) New conceptions of thinking: from ontology to education. Educ Psychol 28(1):67–85
16. Reiners R, Falkenthal M, Jugel D, Zimmermann A (2015) Requirements for a collaborative formulation process of evolutionary patterns. In: Proceedings of the 18th European conference on pattern languages of program, pp 1–12
17. Reiners R, Jarke M (2014) An evolving pattern library for collaborative project documentation. Technical report. Fachgruppe Informatik
18. Reinfurt L, Breitenbücher U, Falkenthal M, Leymann F, Riegg A (2016) Internet of things patterns. In: Proceedings of the 21st European conference on pattern languages of programs, pp 1–21
19. Rich P (1992) The organizational taxonomy: definition and design. Acad Manag Rev 17(4):758–781
20. Standl B, Grossmann W (2014) Towards combining conceptual lesson patterns with Austrian K-12 computer science standard curriculum in the context of pedagogical content knowledge. In: ISSEP 2014, p 85
21. Standl B, Schlomske-Bodenstein N (2021) A pattern mining method for teaching practices. Future Internet 13(5):106
22. Van Zundert J (2012) If you build it, will we come? Large scale digital infrastructures as a dead end for digital humanities. Hist Soc Res/Historische Sozialforschung 165–186
23. Vercoustre AM, McLean A (2005) Reusing educational material for teaching and learning: current approaches and directions. Int J E-learn (1):57–68